
The Five Key Takeaways of This Blog

  • A chatbot that was meant to “transform” and “democratize” education ended up flopping. Many public schools implemented Ed, the chatbot, hoping it would lead to higher grades and engagement across the board. 
  • The company that makes the chatbot has since shrunk considerably, citing financial constraints despite substantial taxpayer investment. 
  • This failure has relevance far beyond the public-school system, because it brings an important question to the forefront: what makes a chatbot successful? 
  • There are several factors to consider here: data privacy, the audience, and, perhaps most important of all, the trustworthiness of answers.
  • By taking these factors into consideration, business owners can better determine how to use chatbots, specifically for customer relations. 
Concerns with Data Privacy

What are your customers unwilling to share with a chatbot? 

Interestingly, what a customer will share with a human customer-service rep, even a total stranger, is not necessarily what that customer will share with a chatbot. 

The Ed situation produced a perfect example of this in the context of education. In a virtual chat, Ed could tell a parent or guardian about their child’s grades and progress in school.

For many students and parents alike, this felt a little too uncomfortable. Why trust a computer program with such privileged information? 

Those worries are quite legitimate: there have been cases of hacking groups accessing privileged information about students and holding it for ransom. Every additional platform that stores this information makes it that much more accessible to bad (virtual) actors. 

Meanwhile, most of us are just fine with teachers and administrative staff having access to and knowledge of a student’s grades, with little to no worry that those human staff members will leak that info. 
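
For a business owner weighing the same trade-off, one practical mitigation is to keep obviously sensitive details out of the chatbot pipeline entirely. Below is a minimal sketch, in Python, of scrubbing a few common patterns from a message before it ever reaches a chatbot backend or its logs. The patterns and the redact() helper are illustrative placeholders, not a production-grade PII filter.

```python
import re

# Illustrative patterns only; a production PII filter would be far more
# thorough (names, addresses, account numbers, etc.).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(message: str) -> str:
    """Replace anything matching a sensitive pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        message = pattern.sub(f"[{label} removed]", message)
    return message

print(redact("My email is jane@example.com and my phone is 555-123-4567."))
# -> My email is [email removed] and my phone is [phone removed].
```

Redacting before a message is ever stored also shrinks what a breach of the chatbot platform could expose, which speaks directly to the ransom scenario above.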

Audience Considerations

One of the biggest challenges, and probably an insurmountable one in hindsight, was asking students to sink time into yet another screen-mediated application. 

Many parents and educators alike are already concerned about the amount of time that students spend on their phones. Several state legislatures in the U.S. have been toying around with the idea of a school cellphone ban.

This writer personally knows a high-school educator whose descriptions of students’ in-class device usage paint a mental image not far from the old-man-yells-at-cloud Sunday-funnies cartoons about “screenagers.” 

Though the company that created Ed tried to stress that a student’s tablet or computer time spent interacting with Ed was identifiably different from ordinary recreational screen time, it seems that to many of the authority figures in students’ lives the response was more or less “tomayto, tomahto”. 

You can see why it is more than just parents who would have serious trepidations about routing students’ questions to a chatbot instead of, y’know, the teachers themselves. 

So, for business owners: is your audience (i.e., customers and/or clients) amenable to speaking with a chatbot? And, further, in what contexts? 

One pitfall is failing to discern when a chatbot may actually harm customer relations. A business owner needs to find out when customers want to speak with a bot versus a human.

A classic example of this is troubleshooting a malfunctioning product. In that case, the majority of your customers may well prefer an available human customer-care rep.

However, some ethically questionable companies may implement a chatbot for precisely that reason: the hope that a potentially costly fix can be avoided by frustrating or wearing down the customer until they give up. 
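
To make that bot-versus-human decision concrete, here is a minimal routing sketch in Python. The intent labels and the HUMAN_FIRST set are assumptions for illustration; in practice they would come from your own support data, such as an intent classifier or a ticket-category field.

```python
from dataclasses import dataclass

# Assumed intent labels; replace with whatever categories your support
# data actually uses.
HUMAN_FIRST = {"troubleshooting", "billing_dispute", "complaint"}

@dataclass
class Inquiry:
    customer_id: str
    intent: str  # e.g., "store_hours" or "troubleshooting"

def route(inquiry: Inquiry) -> str:
    """Send high-stakes or frustration-prone intents straight to a person."""
    return "human_agent" if inquiry.intent in HUMAN_FIRST else "chatbot"

print(route(Inquiry("c-101", "store_hours")))      # -> chatbot
print(route(Inquiry("c-102", "troubleshooting")))  # -> human_agent
```

The design choice worth copying is the default: anything high-stakes or frustration-prone goes to a person first, and the chatbot handles only the rest.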

Trustworthiness of Answers

Last but certainly not least is the problem of trustworthiness in answers.

In the A.I. world, it is well known that chatbots sometimes hallucinate, and the claim that any chatbot never hallucinates is met with quite a bit of skepticism. 

So the worry here is pretty clear: what if the chatbot gives students wrong answers, so that they get a miseducation instead of an education? 

As a business owner, you need to ask yourself which questions should be off-limits for a chatbot: the ones where a wrong answer is never acceptable.

One example would be hallucinated advice about how to fix a product that could actually end up harming the customer.
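
Here is a minimal sketch, in Python, of that kind of guardrail: if a question touches a topic where a wrong answer is never acceptable, the bot deflects to a human rather than answering. The OFF_LIMITS keywords and the ask_bot stand-in are placeholders; a real system would use a proper classifier rather than substring matching.

```python
# Placeholder keyword lists; a real system would use a trained classifier.
OFF_LIMITS = {
    "repair": ["disassemble", "rewire", "replace the battery"],
    "safety": ["overheating", "smoke", "burning smell"],
}

DEFLECTION = ("That question is best handled by a person. "
              "Connecting you with a support agent now.")

def answer_or_deflect(question: str, ask_bot) -> str:
    """Deflect off-limits questions to a human; let the bot answer the rest."""
    q = question.lower()
    for keywords in OFF_LIMITS.values():
        if any(k in q for k in keywords):
            return DEFLECTION  # never risk a hallucinated answer here
    return ask_bot(question)

# Usage with a stand-in for the real chatbot call:
print(answer_or_deflect("My charger has a burning smell. What do I do?",
                        ask_bot=lambda q: "(chatbot answer)"))
```

The point is not the keyword matching, which is crude by design here, but the shape of the policy: for the questions that must never be answered wrongly, the safest chatbot answer is no answer at all.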