Lawmakers Want To Shield Kids From AI Chatbots. But Restricting Them Could Cut Off a Mental Health Lifeline.
Crackdowns on AI chatbots over perceived risks to children's safety could ultimately put more children at risk.

Federal regulators and elected officials are moving to crack down on AI chatbots over perceived risks to children's safety. However, the proposed measures could ultimately put more children at risk.
On Thursday, the Federal Trade Commission (FTC) sent orders to Alphabet (Google), Character Technologies (blamed for the suicide of a 14-year-old in 2024), Instagram, Meta, OpenAI (blamed for the suicide of a 16-year-old in April), Snap, and xAI. The inquiry seeks information on, among other things, how the AI companies process user inputs and generate outputs, develop and approve the characters with which users may interact, and monitor the potential and actual negative effects of their chatbots, especially with respect to minors.
The FTC's investigation was met with bipartisan applause from Reps. Brett Guthrie (R–Ky.)—the chairman of the House Energy and Commerce Committee—and Frank Pallone (D–N.J.). The two congressmen issued a joint statement "strongly support[ing] this action by the FTC and urg[ing] the agency to consider the tools at its disposal to protect children from online harms."
Alex Ambrose, a policy analyst at the Information Technology and Innovation Foundation, tells Reason she finds it notable that the FTC's inquiry focuses solely on "potentially negative impacts," paying no heed to chatbots' potential positive effects on mental health. "While experts should consider ways to reduce harm from AI companions, it is just as important to encourage beneficial uses of the technology to maximize its positive impact," says Ambrose.
Meanwhile, Sen. Jon Husted (R–Ohio) introduced the CHAT Act on Monday, which would allow the FTC to enforce age verification requirements for companion AI chatbots. Parents would need to consent before underage users could create accounts, which would be blocked from accessing "any companion AI chatbot that engages in sexually explicit communication." Chatbot companies would be required to actively monitor underage accounts and immediately notify parents if their child expresses suicidal ideation.
Taylor Barkley, director of public policy at the Abundance Institute, argues that this bill won't improve child safety. Barkley explains that the bill "lumps 'therapeutic communication' in with companion bots," which could prevent teens from benefiting from AI therapy tools. Thwarting minors' access to therapeutic and companion chatbots alike could have unintended consequences.
A study published this February in BMC Psychology examined women diagnosed with an anxiety disorder and living in regions of active military conflict in Ukraine: daily use of the Friend chatbot was associated with "a 30% drop on the Hamilton Anxiety Scale and a 35% reduction on the Beck Depression Inventory," while traditional psychotherapy (three 60-minute sessions per week) was associated with "45% and 50% reductions on these measures, respectively." Similarly, a June study in the Journal of Consumer Research found that "AI companions successfully alleviate loneliness on par only with interacting with another person."
Protecting kids from harmful interactions with chatbots is an important goal. In their quest to achieve it, policymakers and regulators would be wise to remember the benefits that AI may bring and not pursue solutions that discourage AI companies from making potentially helpful technology available to kids in the first place.
Follow the Junk Science.
How did kids ever survive in the past?
Their teachers weren't trying to talk them into suicidal or murderous ideologies?
Or convincing them they’re in the wrong body, while giving them graphic books on how to handle sex with other boys.
They figured things out at the battlebox.
It's surprising how little understanding the government has of tools like AI. Banning kids from interacting with chatbots because they might generate objectionable content is like banning kids from crayons because they might draw objectionable pictures.
Ban teeth. Don't let kids chew Pop-Tarts into gun shapes.
Seconded. All nutrients must be processed through a USDA licensed food mill prior to consumption by children to address the deadly scourge of childhood respiratory obstruction. Use of unlicensed food mills will be subject to fines and penalties.
While I have no disagreement with you on politicians understanding tech in general, let alone AI, comparing crayons to ChatGPT is like comparing a lawn dart to a heat-seeking missile.
You may be surprised at what can be created with crayons. It just takes a little more talent and patience than a computer prompt. Should we deny those without talent or patience the freedom of creativity?
How would you deal with AI-generated realistic pornographic blackmail? Cyberbullying advanced to the point that you can't actually distinguish between truth and fiction?
What law did someone break by sending an AI-generated pornographic pic to a teen and threatening to make it public, driving the teen to suicide? <- something that actually happens.
At least the crayon drawings don't have the element of realism that causes people to take them as true.
What was that old saying? Pics or it didn't happen. Time to unlearn that one, but what will be the social cost while we are adapting to the new reality that pics don't mean a damn thing anymore?
I predict a resurgence of chemical photography. It's already happened in one niche: among paranormal investigators, stereo film cameras are the gold standard for anomalous images. Very difficult to fake.
Naturally that is a failing of the operator, not the technology, which remains unaccountable for how it is leveraged. Are crowbars not used in countless B&Es? One would not expect the manufacturer of the tool to be accountable for its misuse.
But AI has destroyed a tool. Photographs used to be generally reliable evidence of events. Electronic photographs are now nearly worthless, only as good as the testimony of the photographer.
Faked photos are nothing new. Electronic photograph data and metadata are easily manipulated by a skilled operator with the right tools. They're not immutable. AI just makes the process more accessible to those who lack the talent to do it themselves. It's a better tool in some ways, not so much in others. The weird text seen in some amateur AI-enhanced photos is a pretty good tell.
You're making my point. Faking photographs used to require specialized skills and tools. AI is nearly to the point of making good quality fake photos accessible to anyone. Going forward, we'll have to keep in mind that ANY photo could be fake—we'll have only the word of the photographer or publisher that it's real. That's something new.
Rationally, at this point, the only things you should trust are things you saw firsthand with your own two eyes.
The next generations are going to have to find that one out the hardest way possible.
Then again, since AI is starting to show signs of eating itself, perhaps the worries are overblown in terms of what it is actually capable of. So far, the skeptics are largely batting 1.000 when it comes to AI hype(rbole).
Says Jack Nicastro ...
No mention that it's none of their business? No mention of government just butting out?
The modern libertarian writer on the modern libertarian website.
It's the parents who decide what is harmful and protect their children from it. Not the government.
If you think an AI trained on reddit is good for mental health, you may be a reason editor.
It's possible that their position is that access to the reddit-trained AI Chatbot rid us of the nuisance that was Charlie Kirk...
>Lawmakers Want To Shield Kids From AI Chatbots. But Restricting Them Could Cut Off a Mental Health Lifeline.
This may be the most Teen Reason headline I've read this month.
Why can't you make a basic case for liberty... why do you have to chin-scratch about lack of access to a fucking AI Chatbot psychotherapist causing a goddamned mental health crisis?
I agree. Reason is the Teen Beat of pretend libertarianism.
Reason engages in tech idolatry. Since Postrel it has been unable to critically analyze the tech industry. Do tech companies fund Reason Foundation?
>But Restricting Them Could Cut Off a Mental Health Lifeline.
CHATBOTS ARE NOT MENTAL HEALTH CARE!!!!!
WTF Nicastro? People are murdering others and killing themselves because of chatbots. Others are more and more isolated because they become emotionally dependent on chatbots.
But you have some hypothetical situation where a chatbot could help a kid's emotional state, and that trumps all the actual harm they're doing right now?
This is like 'ignore the fentanyl zombies - drugs help mellow some people out'.
Or it's another 'net benefit' argument. Let's ignore the real harms because some people benefit from it.
Still not justification for government interference.
Is it not? What does justify government interference then?
Violations of the Non-aggression Principle or constitutionally protected rights.
There is no Constitutional authority for FedGov "to protect children from online harms." Period. Full stop.
So quit fucking justifying it as some "important goal" for government, especially in (anymore, the most nominally) libertarian think tank OpEds.
It is a damning indictment of our health care system if we need AI chatbots to help with mental health.
Yeah, you democrats really fucked things up with Obamacare. Did you know that after my $250 per month health insurance was canceled due to Obamacare, the next best option was an exchange plan that was largely inferior coverage for $300 per month more?
That’s what you democrats did.
This has to be Jack's drug addiction talking.
Simulated humanity with a computer that only says things you want to hear is "a mental health solution."
Only someone stoned out of their gourd more often than they're lucid could come up with something that retarded.
I stopped talking to my imaginary friends when I was like 5, Jack. What's your excuse?
To be fair, he's probably kind of dumb.
I guarantee he talks to chatbots instead of other humans.