Chatbots like OpenAI’s ChatGPT can have fun conversations about many topics. But when it comes to providing people with accurate health information, they need people’s help.
As tech enthusiasts researching and developing AI-driven healthcare chatbots, we are optimistic about the role these agents will play in delivering consumer-centric healthcare information. However, they must be designed with specific uses in mind and built with safeguards to protect their users.
When we asked ChatGPT in January 2023 whether children under the age of 12 should be vaccinated against Covid-19, it said no. It also suggested that an elderly person with a Covid-19 infection should simply rest, apparently unaware that Paxlovid is the recommended therapy. Such guidance may have reflected accepted knowledge when the algorithm was first trained, but it has not been updated since.
When the Covid-19 vaccines were first rolled out, we asked young people in US East Coast cities why they would want to use a chatbot to get information about Covid-19. They told us that chatbots feel easier and faster than web searches because they provide a condensed, immediately focused response. Searching for the same information on the Internet, by contrast, could return millions of results, and the search could quickly lead to ever more alarming topics, with a persistent cough turning into cancer within a single page. Our respondents also disliked the targeted ads they received after doing a health-related web search.
Chatbots also gave the impression of anonymity, presenting themselves as a safe place to ask any question, even a scary one, without leaving an obvious digital trail. Additionally, the bots’ often anodyne personas seemed non-judgmental.
In the second year of the pandemic, we developed the Vaccine Information Resource Assistant (VIRA), a chatbot that answers people’s questions about Covid-19 vaccines. Like ChatGPT, VIRA uses natural language processing. But we review VIRA’s programming weekly and update it as needed so the chatbot responds with current health information. Collaborating with users has helped us facilitate non-judgmental conversations about the Covid-19 vaccines.
We continue to monitor the questions VIRA is asked (all questions are anonymous in our data, and IP addresses are removed) and the chatbot’s answers so we can improve those answers, identify and counteract emerging areas of misinformation, and flag emerging community concerns.
Several health departments have launched VIRA to provide their constituents with accurate, up-to-date information about Covid-19 vaccines.
Our experience is part of the growing body of evidence supporting the use of chatbots to inform, support, diagnose and even offer therapy. These agents are increasingly being used to treat anxiety, depression, and substance use between doctor visits or even without clinical intervention.
Additionally, chatbots such as Wysa and Woebot have shown promising early results in forming human-like bonds with users while delivering cognitive behavioral therapy and reducing self-reported measures of depression. Planned Parenthood has a chatbot that offers verified, confidential sexual health advice, and several other chatbots now offer anonymous advice about abortion care.
Large healthcare organizations understand the value of this kind of technology, with the market for healthcare chatbots estimated to reach $1 billion by 2032. In the face of growing workforce constraints and unmanageably high, ever-increasing volumes of patient messages to providers, chatbots offer a pressure valve. Remarkable advances in AI capability over the past decade are paving the way for deeper conversations and, with them, wider adoption.
While ChatGPT can write a poem, plan a party, or even produce a school essay in a matter of seconds, such chatbots cannot yet engage safely with health questions, and health topics should be off-limits to them to avoid harm. ChatGPT offers disclaimers and encourages users to consult doctors, but those caveats are wrapped around compelling, comprehensive answers to health questions. It is already programmed to avoid commenting on politics and to avoid profanity; healthcare should be no different.
Risk tolerance in healthcare is low for a reason: the potential negative consequences can be severe. Without the necessary checks and balances, chatbot skeptics have plenty of reasons to be concerned.
But the science of how chatbots work and how they can be used in healthcare is advancing brick by brick, experiment by experiment. They could one day offer unprecedented opportunities to support natural human inquiry. In ancient Greece, Hippocrates admonished revered, godlike healers to do no harm. AI technology must answer that call today.
Smisha Agarwal is Director of the Center for Global Digital Health Innovations and Assistant Professor of Digital Health in the Department of International Health at the Johns Hopkins Bloomberg School of Public Health. Rose Weeks is a Research Fellow at the International Vaccine Access Center in the Department of International Health at the Johns Hopkins Bloomberg School of Public Health. The authors lead the team that developed and brought to market the Vaccine Information Resource Assistant; their opinions do not necessarily reflect those of their employer.