Chatbots Are Not Medical Devices
Why does the FDA want to regulate AI wellness apps?
In November, the Food and Drug Administration (FDA) held a Digital Health Advisory Committee meeting where it considered treating artificial intelligence mental health chatbots as medical devices. As the FDA more formally describes it, the agency "intends to apply its regulatory oversight" to software functions that it considers "medical devices" in cases where poor "functionality could pose a risk to a patient's safety."
The agency clarified that its intended "approach applies to generative AI-enabled products as well." That is formal language for a regulatory approach that threatens to sweep into the FDA's broad oversight many AI chatbots that operate as useful wellness applications, not as medical devices by any reasonable definition. It would be a mistake for the agency to apply medical device regulations to such wellness chatbots.
Registering a medical device with the FDA is extremely costly. To start with, there is the $11,423 annual registration fee. From then on, the company is burdened with stringent government red tape that layers on costs and reduces consumer access to regulated products.
For medical devices, the FDA requires premarket performance data, risk management plans, and postmarket reports to assess reliability and effectiveness, all of which add further costs for companies. Perhaps those costs would be justified if all of the potentially affected mental health chatbots were actually medical devices, but they are not.
The FDA labels a product or service a medical device if it is intended to diagnose, cure, mitigate, treat, or prevent a specific disease, including mental health conditions. General wellness chatbots do none of this.
What mental health chatbots do is offer general coping skills, mindfulness exercises, and cognitive behavioral therapy–style reframing meant to support users' well-being. They do not claim, and are not intended, to treat any specific medical condition. Because they do not evaluate users before or after interactions, and do not tailor specific medical interventions, mental health chatbots clearly fall outside FDA medical device requirements.
However, the FDA does regulate mental health technologies that are explicitly marketed as treatments. For instance, Rejoyn and DaylightRx, two digital apps explicitly intended to treat previously diagnosed mental health conditions, have been designated medical devices. Both apps demand accuracy, and therefore accountability, because they are marketed as treatments for conditions such as depression. It makes sense that they are held to a higher standard than tools that make no such claims.
AI mental health care chatbots are different because they do not claim to offer any kind of medical diagnosis, treatment, or cure. They are better characterized as "wellness apps" that, at their best, help people understand or feel better about themselves.
Nonetheless, AI mental health chatbots can be therapeutic without delivering what the FDA considers treatment.
As psychologists and users have pointed out, these chatbots respond to questions and complaints, provide summaries of conversations, and suggest topics to think about. These are forms of general wellness support, not clinical care. The companies behind these tools are explicit about this.
In public comments submitted to the FDA ahead of the digital mental health committee meeting, Slingshot AI, the company that developed Ash (a popular AI mental health chatbot), specifies that it aims to provide "general wellness by making mental health support more accessible," not treatments or diagnoses of mental health issues. Another AI mental health chatbot, Wysa, which listens and responds to users' emotions and thoughts, likewise does not diagnose or attempt to treat any condition.
But these companies are providing a low-cost answer to some people's felt need for someone to talk to, one that is available at all hours, day or night, amid a shortage of mental health care providers affecting millions of Americans.
Therabot, a mental health chatbot, was shown to reduce depressive symptoms by 51 percent and to bring moderate anxiety down to mild in many of those who interacted with it for a couple of weeks. The developers of Ash carried out a 10-week study that found 72 percent of people using their app reported a decrease in loneliness, 75 percent reported an increase in perceived social support, and four out of five users reported greater hope and greater engagement with their lives. These products are helping people, and the FDA ought not make access to them more expensive or complicated with new regulatory efforts.
Treating mental health care chatbots as medical devices misses the point: They are not professional therapy. In fact, AI mental health chatbots are more like educational chatbots developed by licensed professionals than like medical advisers. Their advice involves neither a clinical relationship nor a personalized diagnosis.
Some worry that without being designated and regulated as medical devices, AI mental health chatbots will become decidedly unsafe spaces. But companies in the field are already setting higher standards to prevent such risks. For instance, OpenAI has incorporated input from mental health professionals into ChatGPT so that the model recognizes signs of distress, de-escalates conversations, avoids affirming ungrounded beliefs, and guides people toward in-person mental health care. Anthropic, the company behind Claude, is likewise building safeguards into its model by partnering with ThroughLine, a global crisis support provider whose mental health care professionals are helping shape how the model handles sensitive conversations.
Unlike general-purpose chatbots such as Claude and ChatGPT, AI mental health chatbots are specifically designed to handle sensitive conversations. Ash, for example, relies on experts' input and scientific evidence to improve user interactions. This is the case for many other AI mental health chatbots, such as Earkick, Elomia, and Wysa.
Labeling AI mental health chatbots as medical devices would stymie progress in helping people in need with simple tools that offer support, not medical advice. Imposing costly regulations on a technology that provides significant benefits would harm Americans who are seeking help. Any FDA decision to treat AI mental health chatbots as medical devices would be a mistake.