AI-powered world health chatbot is flubbing some answers
- WHO warns on its website that this early prototype, introduced on April 2, provides responses that "may not always be accurate"
- SARAH is built on OpenAI's GPT-3.5, a model trained on data only through September 2021, so the bot does not have up-to-date information
The World Health Organisation (WHO) is wading into the world of artificial intelligence (AI) to provide basic health information through a humanlike avatar. But while the bot responds sympathetically to users’ facial expressions, it does not always know what it is talking about.
SARAH, short for Smart AI Resource Assistant for Health, is a virtual health worker available to talk 24/7 in eight languages about topics like mental health, tobacco use and healthy eating. It is part of the WHO's campaign to find technology that can both educate people and fill staffing gaps as the world faces a health-care worker shortage.
WHO warns on its website that this early prototype, introduced on April 2, provides responses that “may not always be accurate”. Some of SARAH’s AI training is years behind the latest data. And the bot occasionally provides bizarre answers, known as hallucinations in AI models, that can spread misinformation about public health.
SARAH does not offer diagnoses the way WebMD or Google does. In fact, the bot is programmed not to talk about anything outside the WHO's purview, including questions about specific drugs. So SARAH often directs people to a WHO website or tells users to "consult with your health-care provider".
“It lacks depth,” Ramin Javan, a radiologist and researcher at George Washington University, said. “But I think it’s because they just don’t want to overstep their boundaries and this is just the first step.”