Healthcare Chatbot Explosion on the Way?

While healthcare providers have been relatively slow to put chatbots to work, the time may have come for a chatbot explosion. With ChatGPT generating frenzied global interest, we may be at the tipping point for using chatbots in our daily lives, and providers will have good reason to meet consumers where they are.

Over the last few years, there has been a slowly growing movement toward using chatbots in clinical care. One of the highest-profile projects involves the deployment of a chatbot created by Babylon Health to triage patients seen in several hospitals within the UK’s National Health Service.

Providers have also begun testing chatbot applications to improve patient communication. One interesting case comes from my colleague Colin Hung, who recently shared the story of how a chatbot helped stem a crisis at Piedmont Healthcare.

Public health leaders have also turned to chatbot technology, particularly during the pandemic, for COVID-19 management activities such as vaccine scheduling, surveillance, information dissemination, and risk assessment.

That being said, while these projects have generated a lot of good information, they’ve also begun to expose some of the weaknesses of current healthcare chatbot technology.

For example, the NHS project has been something of a financial bust, though the case might not be generalizable given the way it’s wrapped around the quirks of the health agency’s funding model.

Then, there are the limitations inherent in health AI deployments generally, some of which become particularly dangerous in a care-related setting.

Here are some weaknesses that ChatGPT itself identifies about its use in healthcare:

* It can’t make decisions or diagnose like a human doctor; rather, it only generates text based on the input patients provide.

* While it has been trained on a large amount of text data, ChatGPT may not be trained on specialized medical terminology or concepts, which could lead to inaccuracies or misunderstandings in patient communications.

* Patients may be hesitant to share personal or sensitive information with a software-based human simulation rather than an actual human.

* Language models like ChatGPT may struggle to understand the context of a patient’s question or statement, which could lead to confusion or misinterpretation.

* As a machine, ChatGPT cannot empathize with patients and their concerns, which is important for building trust and rapport.

Given the financial wreckage many healthcare institutions face in the wake of COVID, it wouldn’t be surprising if they battened down the hatches and spent little on anything other than critical issues. In fact, I wouldn’t be surprised to see healthcare players backtrack on existing AI research in the near term if they see it as even somewhat experimental.

However, I think the conditions are right for mass deployment of chatbots in healthcare relatively soon, even though the effort will face some challenging headwinds. Any tool that makes it easier to keep patients informed and engaged, particularly one that scales up easily, likely has a place in the healthcare technology toolbox.

About the author

Anne Zieger

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.
