The CHAT Act Won't Protect Kids, but It Might Break the Internet
By forcing government ID verification for AI tools, Congress risks censoring everyday digital services and driving young Americans to unsafe overseas platforms.
Congress is quietly drifting toward an ID-verified internet. The Children Harmed by AI Technology (CHAT) Act of 2025 is Washington's newest attempt to regulate speech through mandatory age checks. Rather than protecting kids, it would normalize showing a government ID for basic online speech.
The CHAT Act tries to target the fictional role-play bots you've likely heard horror stories about. The problem is that the bill defines a chatbot so broadly that anything that "simulates emotional interaction" could be restricted. By the CHAT Act's standards, ChatGPT, some video game characters, or even a customer service bot could all require users to upload a government ID just to log in.
Large language models (LLMs) learn to write by training on billions of real conversations and stories, which makes their output naturally resemble human dialogue, emotional tone included. Forcing developers to scrub "interpersonal" behaviors and every trace of emotional tone or dialogue from their models could mean deleting much of the training data itself.
If the CHAT Act were to pass, developers would face two terrible choices: impose ID verification across their platforms, or censor outputs so aggressively that American AI products become unusable. Just as China's DeepSeek censors references to Tiananmen Square, the CHAT Act could push U.S. developers toward a similarly censorious model of compliance. The result would be an industry-wide unforced error, hobbling innovation relative to foreign competitors.
And it wouldn't stop with chatbots. AI now runs through nearly every digital product: Duolingo's language tutor, Alexa's music suggestions, video game NPCs offering advice. Under the CHAT Act, any of them could require a government ID. Lawyers and developers, unsure where Congress's ill-defined lines will fall, could slow or suspend AI integrations altogether.
The danger runs deeper still. As Google search increasingly leans on its Gemini AI, as OpenAI builds its new browser "Atlas," and as queries shift from search engines to LLMs, the CHAT Act brings us closer to an ID-verification layer across tomorrow's internet.
What's more, the bill wouldn't actually protect vulnerable users. Age-verification laws tend to backfire.
Requiring users to upload government IDs may sound simple, but it creates a massive honeypot for hackers. Once those databases are inevitably breached, millions of Americans—including minors—could have their most sensitive personal data stolen in the name of "safety."
Other obstacles, such as ID portals or geoblocking, simply give tech-savvy users an incentive to download VPNs, which let them spoof their location to countries with fewer restrictions. When the UK implemented similar age-verification laws, VPN usage spiked by up to 1,400 percent as users flocked to unregulated platforms abroad. Without our guardrails or basic consumer protections, these foreign platforms could expose children to more dangerous and explicit content.
U.S. lawmakers should avoid driving young Americans toward foreign platforms with weaker protections and lower accountability.
The United States has a long history of consumer protection laws rooted in evidence and precedent, rather than preemptive panic over emerging technologies. Those frameworks evolved through tested case law, not reactionary moral legislation.
With the earliest cases still ongoing or only just filed, it is too early to know how courts will treat AI-related harms or whether existing laws can address them. But those cases are likely to yield a far clearer picture of where current law falls short. Congress needs to stop legislating out of fear and start learning how the technology works.
The panic around "AI harm" has pushed Congress into reactionary policymaking that risks rewriting the rules of online speech without meaningfully protecting kids. By copying the censorious and restrictive internet frameworks of China and the UK, lawmakers could end up creating more danger by forcing children into darker corners of the web.
Of course, getting rid of government ID is not under consideration.
Vote for fascists, get fascism.
So the complaint is that US govt-regulated access to AI will push users to unregulated AI platforms? And that is dangerous because the unregulated platforms are unregulated? Sounds a bit janky.
That might just be what saves the internet. I want to break the internet as it exists for most people now: a bunch of bullshit, curated apps that have just made everyone retarded.
+1 Broken by design: too few buttons with sharp edges.
Thankfully, you don't get to impose your preferences on others.
I know, people like being retarded, and it's not up to me to take that away from them.
So . . . we're supposed to assume that the US platforms would be safe? When they're not safe *now*?
But it Might Break the Internet.
Kids, back in the day people used to use words and phrases like "information superhighway" and "the cloud" to describe the ubiquity and pervasiveness of internet and computer network technology. A phrase like "break the internet" was just a euphemism for "blowing up" or attracting a lot of attention. The rest of the internet still worked even if your specific server, or cloud, or collection of tubes was broken.
Now, out-of-touch morons say it as the preamble to an earnest argument like "This law might break the entire US Highway system." or "This law might break clouds."
I actually read this article with some amount of interest, wondering what driving young Americans to unsafe overseas platforms would look like. The article doesn't really give any indication of that, aside from a quick note that the Chinese AI regurgitation bot censors information about Tiananmen Square.
My first thought was, "You know who else wildly censored stuff across all the major platforms while Reason screeched 'Sexshun toothirtee!'? I mean, you know who else built an entire 'private sector' censorship-industrial complex made to look like a fabric of independent actors just making their voices heard and helpfully pointing out 'disinformation'?"
I was genuinely curious as to why overseas models were presumed 'less safe,' but the article doesn't explore that in any great detail, leading me to think this article is a kind of Möbius strip of logic... "We shouldn't regulate our AI platforms because it will drive users to unregulated and therefore unsafe overseas platforms-- away from the domestically regulated and therefore safer platforms."
Then I got to this paragraph: "The United States has a long history of consumer protection laws rooted in evidence and precedent, rather than preemptive panic over emerging technologies."
Um, I've been here... possibly longer than almost any commenter posting. Oh, that's not a brag-- that probably says something about my inherent lack of development, but still, it's a matter of record that I can't escape. Either way, my reaction would be, "Uh, this magazine has railed against (often for good reason) every attempt to pass consumer protection laws as unnecessary, industry- and innovation-stifling government meddling. Now they handwave decades of regulations as 'rooted in evidence and precedent,' having slowly evolved through careful government action and thoughtful leadership."
So... I guess we're back to "We can't abolish everything, otherwise we won't know what's in our drugs and AI chatbots! I mean, just look at those unregulated, unsafe overseas alternatives!"
I didn't rta. But I feel they would need to show an appreciable increase in danger from an unregulated overseas AI platform versus an unregulated domestic one. Otherwise, the thesis is bad.
that probably says something about my inherent lack of development - Angel on the right shoulder/The first wolf/the Noble...
"Yeah, but at least you aren't sliding towards complete, pro-child raping incoherence like most of the rest of this place." - Devil on the left shoulder/The second wolf/the savage...
There are two Akitas inside of you. Both say they don’t want a top inside of you.
Government should stick to what it's best at - nothing.
Without our guardrails or basic consumer protections, these foreign platforms could expose children to more dangerous and explicit content.
The only reason those 'foreign platforms' MIGHT do that is the assumption that LLM or cloud AI is the only possible AI model. But that is only the AMERICAN model. That model is not designed for guardrails or basic consumer protections for kids. It is designed to steal all your data so they can train their models on it - and then either sell the data back to some pervert/groomer with money or put that kid's parents out of work - in the quest for imminent superintelligence - so a techbro oligarchy can roundtrip their revenues in a tighter and tighter group while hodling/mining bitcoin on those cloud AI servers.
The foreign model - including generally the Chinese model - is to put AI models on a small/local/edge device. To stovepipe it and specialize it. Not to connect it to the entire world for purposes that only those developers know. For kids, this would mean AI devices that are specifically designed to teach kids or help them organize their own stuff or play or something. Not AI devices where the question is how many locks and backdoors there are to the porn part of that model.
U.S. lawmakers should avoid driving young Americans toward foreign platforms with weaker protections and lower accountability.
Aah yes. What we need is a STRUCTURE to make sure that a future TikTok doesn't fill our kids' heads with questions about the foreign policy choices made by an elite that has less than zero accountability.
The United States has a long history of consumer protection laws rooted in evidence and precedent
Bullshit. The US has a long history of corrupt laws enacted for the benefit of large donors. Kids are not large donors.