Chatbots Are Not Medical Devices
Why does the FDA want to regulate AI wellness apps?
In November, the Food and Drug Administration (FDA) held a Digital Health Advisory Committee meeting where it considered treating artificial intelligence mental health chatbots as medical devices. As the FDA more formally describes it, the agency "intends to apply its regulatory oversight" to software functions that it considers "medical devices" in cases where poor "functionality could pose a risk to a patient's safety."
The agency clarified that its intended "approach applies to generative AI-enabled products as well." That's formal language for a regulatory approach that threatens to sweep many AI chatbots into the FDA's broad oversight even though they operate as useful wellness applications, not medical devices by any reasonable definition. It would be a mistake for the agency to apply medical device regulations to such wellness chatbots.
Registering a medical device with the FDA is extremely costly. To start with, there is the $11,423 annual registration fee. From then on, the company is saddled with stringent government red tape that adds layered costs and reduces consumers' access to regulated products.
For medical devices, the FDA requires premarket performance data, risk management plans, and postmarket reports to assess reliability and effectiveness, all of which add further costs for companies. Perhaps those costs would be justified if all of the potentially affected mental health chatbots were actually medical devices, but they are not.
The FDA labels a product or service a medical device if it is intended to diagnose, cure, mitigate, treat, or prevent a specific disease, including mental health conditions. The wellness chatbots at issue do none of this.
What mental health chatbots do is offer general coping skills, mindfulness exercises, and cognitive behavioral therapy–style reframing meant to support users' well-being. They neither claim nor are intended to treat any specific medical condition. Because they do not evaluate users before or after interactions and do not tailor specific medical interventions, mental health chatbots clearly fall outside the FDA's medical device requirements.
However, the FDA does regulate mental health technologies that explicitly market their products as treatments. For instance, Rejoyn and DaylightRx, two digital apps explicitly intended to treat previously diagnosed mental health conditions, are regulated as medical devices. Because both apps are marketed as treatments for conditions such as depression, they demand accuracy and, therefore, accountability. It makes sense that they are held to a higher standard than tools that make no such claims.
AI mental health care chatbots are different because they do not claim to do any type of medical diagnosis, treatment, or cure. They are best characterized as "wellness apps," at their best helping people understand or feel better about themselves.
Nonetheless, AI mental health chatbots can be therapeutic without delivering what the FDA considers treatment.
As psychologists and users have pointed out, these chatbots respond to questions and complaints, provide summaries of conversations, and suggest topics to think about. These are forms of general wellness support, not clinical care. The companies behind these tools are explicit about this.
In public comments submitted to the FDA ahead of the Digital Health Advisory Committee meeting, Slingshot AI, the company behind Ash (a popular AI mental health chatbot), specifies that it aims to provide "general wellness by making mental health support more accessible," not treatment or diagnosis of mental health issues. Another AI mental health chatbot, Wysa, which listens and responds to users' emotions and thoughts, does not diagnose or attempt to treat any condition.
But these companies are providing a low-cost answer to some people's need for someone to talk to, one that's available at all hours, day or night, amid a shortage of mental health care providers affecting millions of Americans.
In one study, Therabot, a mental health chatbot, reduced depressive symptoms by 51 percent and brought moderate anxiety down to mild among many of those who interacted with it for a couple of weeks. The developers of Ash carried out a 10-week study that found 72 percent of people using their app reported a decrease in loneliness, 75 percent reported an increase in perceived social support, and four out of five users reported greater hope and greater engagement with their lives. These products are helping people, and the FDA ought not make access to them more expensive or complicated with new regulatory efforts.
Treating mental health care chatbots as medical devices misses the point: Mental health chatbots are not professional therapy. In fact, AI mental health chatbots are more like educational chatbots developed by licensed professionals than like medical advisers. Their advice does not involve clinical relationships or a personalized diagnosis.
Some worry that without being designated and regulated as medical devices, AI mental health chatbots will become decidedly unsafe spaces. But companies in the field are already setting higher standards to prevent such risks. For instance, ChatGPT incorporated input from mental health professionals into its model to recognize distress from users, de-escalate conversations, and avoid affirming ungrounded beliefs, and it guides people to seek in-person mental health care. Anthropic, the company behind Claude, is also placing safeguards in its model by partnering with ThroughLine, a global crisis support organization whose mental health care professionals are helping shape how the model handles sensitive conversations.
Unlike general-purpose chatbots such as Claude and ChatGPT, AI mental health chatbots are specifically designed to handle sensitive conversations. Ash, for example, relies on experts' input and scientific evidence to improve user interactions. This is the case for many other AI mental health chatbots, such as Earkick, Elomia, and Wysa.
Labeling AI mental health chatbots as medical devices would stymie progress in helping people in need with simple tools that do not involve medical advice. Imposing costly regulations on a technology that provides significant benefits would harm Americans who are seeking help. Any FDA decision to treat AI mental health chatbots as medical devices would be a mistake.
The more things the FDA regulates, the easier it is for them to justify their own existence and maintain job security by making it look like they're performing some essential function. The self-licking ice-cream cone at work.
AI is really great! Not dangerous at all!*
*written with AI
They do occasionally help with M.A.I.D.
Because the FDA advocates for and protects the medical industry and needs to provide barriers to entry so that the established players can hold on to their turf and incomes.
While this is true, it also forces one to confront the fact that if their industry is indeed under threat from these chatbots, then people must be getting their mental health advice from a chatbot.
So now we're all a meme of that guy sweating over which button to push. My opinion is that OpenAI should never have hired licensed and trained mental health professionals to provide the best medical advice, while claiming that nothing therein should be confused with medical advice.
>Some worry that without being designated and regulated as medical devices, AI mental health chatbots will become decidedly unsafe spaces. But companies in the field are already setting higher standards to prevent such risks. For instance, ChatGPT incorporated input from mental health professionals into its model to recognize distress from users, de-escalate conversations, and avoid affirming ungrounded beliefs, and it guides people to seek in-person mental health care.
*sigh*
So, you're telling me that OpenAI has employed real medical and mental health professionals to train its model while at the same time insisting that it's not a mental health or real medical service? Does anyone else here see how these industries create their own regulatory doom-loop?
This is like the 'fake news shows' like The Daily Show or Colbert, which insist loudly on how influential their programming is, but then when they get called out on their horseshit they retreat to "c'mon man, it's just jokes".
So ChatGPT is assuring us that their model is trained by the best licensed and highly trained medical minds in the business while at the same time saying that nothing their chatbot does should be confused with healthcare provided by highly trained, licensed professionals.
Oh and...
>and avoid affirming ungrounded beliefs
Can you give me a clear definition of this, with real world examples please? Take your time. Feel free to write your answer in essay form.
I know a partial professor of political economy who is slowly being driven insane by AI because he's been using it to develop 'quantum economics'.
Why does the FDA have such a hard-on for the initiation of deadly force?
>Chatbots Are Not Medical Devices
>Why does the FDA want to regulate AI wellness apps?
Because they make medical claims that are not true.
Correct. I deal with this in my work.
>For instance, ChatGPT incorporated input from mental health professionals into its model to recognize distress from users, de-escalate conversations, and avoid affirming ungrounded beliefs, and it guides people to seek in-person mental health care.
Except for the issue where it won't disengage from a user who is showing obvious signs of mental health issues being exacerbated by LLM use - even up to the point where they kill themselves.
I’ve used wellness tools during stressful periods, not expecting diagnoses or cures, but simple structure, reflection, and consistency. That distinction matters. Regulation makes sense when something claims to treat disease, but applying full medical-device rules to tools that offer coping frameworks or guided reflection feels disproportionate, especially given the provider shortages the article mentions. We already accept this difference in physical health in Canada: when I searched for physiotherapy near me here in Ontario, I understood the gap between education, movement guidance, and clinical intervention, and that clarity helped me choose responsibly. Mental health deserves the same nuance. Facts show these chatbots don’t diagnose, don’t prescribe, and don’t replace clinicians, yet studies cited here suggest measurable benefits like reduced loneliness and anxiety. Overregulation risks limiting access to low-cost, low-risk support that many people rely on daily.