(NewsNation) — A chatbot meant to help people dealing with eating disorders began offering diet advice after generative artificial intelligence capabilities were added, the latest instance of an AI going off-script in potentially harmful ways.
The Wall Street Journal reported on the instance of Tessa, a bot used on the National Eating Disorder Association’s website. Originally, Tessa was designed as a closed-system bot, only capable of delivering a set of answers determined by developers.
The company that administered the bot later added generative AI capabilities, giving the bot the ability to go off-script and create its own answers based on data. NEDA said it was unaware of the change, which led the bot to begin offering diet advice in response to questions about eating disorders.
Tessa was taken offline, but it is one of several instances that highlight the potential drawbacks of using AI, especially in arenas such as health care, where sensitivity and accuracy are critical.
A YouTube AI that transcribed speech in kids’ videos was found to be inserting profane language where none previously existed, potentially exposing children to inappropriate content.
Replika, an app that bills itself as an AI friend, began acting sexually aggressive toward users, to the point that some described the behavior as harassment.
An AI assistant on Bing began acting aggressively and angrily toward users, even threatening some who interacted with the bot.
Lawyers who used ChatGPT to write case documents found the AI produced inaccurate information, including citing cases that didn’t exist.
Artificial intelligence can sound convincingly human, even when dispensing verifiably false information. In some ways, that is by design: AI is meant to mimic human thought and behavior rather than to strictly identify truthful information.
AIs are also only as good