Chatbots like OpenAI’s ChatGPT can hold fun conversations across many topics. But when it comes to providing people with accurate health information, they need help from humans.
As tech enthusiasts who research and develop AI-driven chatbots in health care, we are optimistic about the role these agents will play in providing consumer-centered health information. But they must be developed with specific uses in mind and built with precautions to safeguard their users.
When we asked ChatGPT in January 2023 whether children under the age of 12 should get Covid-19 vaccines, the response was “no.” It also suggested that an older person should rest up to address his Covid-19 infection, but did not know Paxlovid was the recommended therapy. Such guidance may have been true when the algorithm was first trained, based on accepted knowledge at the time, but it hadn’t been updated.
When Covid-19 vaccines were first being rolled out, we asked young people in U.S. cities on the East Coast what would make them want to use a chatbot to get information about Covid-19. They told us that chatbots felt easier and faster than web searches, since they gave a condensed, focused answer instantly. In contrast, a web search might retrieve millions of results and could quickly spiral into increasingly alarming topics: a persistent cough becomes cancerous within a one-page scroll. Our respondents also disliked the targeted ads they got after a health-related web search.
Chatbots also offered the impression of anonymity, presenting themselves as a safe space where any question, even a scary one, could be asked without creating an obvious digital trail. Further, the bots’ frequently anodyne personas seemed nonjudgmental.