It’s Time to Rethink the “Bad ‘Bot” Stereotype

We’ve all interacted with a “bad bot” at some point – and probably more than once! Whether trying to cancel a subscription or fix a faulty vacuum cleaner, you’ve likely had a less-than-ideal interaction with the chatbot tasked with helping you.

Patients can have similar frustrations when interacting with healthcare chatbots. Some deliver a less-than-meaningful or unproductive experience. In these cases, patients may abandon their search (and your organization) instead of finding the information or care they are seeking. We have heard from some healthcare leaders that the negative connotation of the word “chatbot” is one of the primary reasons they hesitate to implement time- and resource-saving self-service options for patients.

How did we get to this stage of mistrust? Here are four issues that characterize “bad ‘bots”:

  1. Chatbots Are Built Using Bad Content
    A chatbot is only as good as the content it’s built upon. The phrase “garbage in, garbage out” applies here. If the source of content is incomplete, inaccurate or outdated, chatbot responses will be less than satisfactory.

  2. Chatbots Don’t “Speak the Language”
    The majority of chatbots aren’t equipped to fully understand the meaning and/or intent behind words and phrases. If a patient asks an average chatbot a question that contains the phrase “tummy ache” instead of “stomach pain,” for example, the chatbot may not be able to process the question and might respond with “I’m sorry, but I don’t understand” – or, even worse, may return results related to tummy tucks.

  3. Chatbots Can Be Misleading
    Sometimes, chatbots are presented in such a way that leads users to think they’re engaging with a human agent. This lack of transparency can quickly breed distrust among users (especially if the interaction with the chatbot is negative) and affect how users view the brand overall.

  4. Chatbots Can Seem “Off”
    The experience of interacting with a chatbot can feel impersonal and even border on cold. When users are put off by these interactions, utilization, satisfaction and efficiency ultimately suffer, and patients may even abandon the website to look elsewhere for assistance.

 

Fortunately, the advent of generative AI has accelerated improvements in the functionality of chatbots and virtual assistants.

Approved content can now be ingested quickly from Word and PDF documents, spreadsheets, videos, manuals, existing website pages and FAQs. In mere hours, a virtual assistant can be spun up, lowering costs, reducing implementation time and providing patients with appropriate answers.

How much progress has been made, thanks to advances in AI? A recent JAMA study compared responses to patient questions from an AI-powered virtual assistant with those from physicians. Evaluators preferred the virtual assistant’s answers over the physicians’ 78.6% of the time. In fact, virtual assistant responses had a 3.6 times higher prevalence of “good” or “very good” quality ratings and were found to be more empathetic than the physician responses.

Orbita has long been on a mission to rehabilitate chatbots’ bad reputation. Our virtual assistants are built on our conversational AI platform, which is highly responsive and can probe for additional information to provide a genuine, empathetic dialog. Natural language processing means patients can use terms they are comfortable with and receive answers in a language they can understand.

Want to learn more about how Orbita has transformed the chatbot? Interested in how we can help you reduce call volume through automation and increase patient satisfaction with convenient digital tools? Visit our website or email us: hello@orbita.ai.

 



Today's virtual assistants are smarter, faster and more intuitive. Take a moment to explore practical and achievable ways to overcome the "bad bot" experience.