Childproofing the Internet
How online “child protection” measures could make child and adult internet users more vulnerable to hackers, identity thieves, and snoops.

For the past several years, lawmakers and bureaucrats around the country have been trying to solve a problem. They wanted to regulate the internet, and in particular, they wanted to censor content and undermine a variety of systems that allow for privacy and anonymity online—the systems, in other words, that allow for online individuals to conduct themselves freely and outside of the purview of politicians.
There was something like a bipartisan agreement on the necessity of these rules and regulations. Lawmakers and regulators test-drove a number of potential justifications for online speech rules, including political bias, political extremism, drug crime, and the fact that some tech companies are just really big. But it turned out to be quite difficult to drum up support for wonky causes like antitrust reform or amending the internet liability law Section 230, and even harder to make the case that the sheer size of companies like Amazon was really the problem.
Their efforts tended to falter because they lacked a consensus justification. Those in power knew what they wanted to do. They just didn't know why, or how.
But in statehouses and in Congress today, that problem appears to have been solved. Politicians looking to censor online content and more tightly regulate digital life have found their reason: child safety.
Online child safety has become an all-purpose excuse for restricting speech and interfering with private communications and business activities. In late May, Surgeon General Vivek Murthy issued an advisory on social media and youth mental health, effectively giving the White House's blessing to the panic. And a flurry of bills have been proposed to safeguard children against the alleged evils of Big Tech.
Unlike those other failed justifications, child protection works because protecting children from the internet has a massive built-in constituency, lending itself to truly bipartisan action.
Many people have kids old enough to use the internet, and parents are either directly concerned with what their offspring are doing and seeing online or at least susceptible to being scared about what could be done and seen.
In addition to longstanding fears surrounding kids and tech—namely, sexual predators—there's a rising though heavily disputed belief that social media is uniquely harmful to minors' mental health.
The resulting flurry of bills represents what one could call an attempt to childproof the internet.
It's misguided, dangerous, and likely doomed to fail. Not only has it created a volatile situation for privacy, free expression, and other civil liberties, it also threatens to wreak havoc on any number of common online businesses and activities. And because these internet safety laws are written broadly and poorly, many could become quiet vehicles for larger expansions of state power or infringements on individual rights.
Threats to Encryption
End-to-end encryption has long been a target of government overseers. With end-to-end encryption, only the sender and recipient of a message can see it; it is scrambled as it's transmitted between them, shielding a message's contents from even the tech company doing the transmitting. Privacy-focused email services like ProtonMail and Tutanota use it, as do direct messaging services like Signal and WhatsApp. These days, more platforms—including Google Messages and Apple's iCloud—are beginning to offer end-to-end encryption options.
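The core property—the relaying platform sees only scrambled bytes—can be illustrated with a deliberately simplified Python sketch. This is a toy XOR construction for illustration only, not real cryptography; actual end-to-end encryption (e.g., the Signal protocol) uses vetted ciphers and authenticated key exchange:

```python
import hashlib
from itertools import count

def keystream(shared_secret: bytes, length: int) -> bytes:
    # Expand a shared secret into a pseudorandom keystream.
    # Toy construction for illustration -- NOT real cryptography.
    out = b""
    for counter in count():
        out += hashlib.sha256(shared_secret + counter.to_bytes(8, "big")).digest()
        if len(out) >= length:
            return out[:length]

def encrypt(secret: bytes, plaintext: bytes) -> bytes:
    # XOR the message with the keystream. XOR is its own inverse,
    # so the same function also decrypts.
    return bytes(a ^ b for a, b in zip(plaintext, keystream(secret, len(plaintext))))

decrypt = encrypt

# Only the sender and recipient hold the secret.
secret = b"shared only by sender and recipient"
message = b"meet at noon"

ciphertext = encrypt(secret, message)          # this is all the platform relays
assert ciphertext != message                   # the service sees only ciphertext
assert decrypt(secret, ciphertext) == message  # only key holders recover the text
```

The point of the sketch is the trust model, not the cipher: the intermediary carrying `ciphertext` cannot read the message without the secret, which is exactly what the bills discussed below would pressure companies to abandon.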
The fact that people can communicate in such ways doesn't sit right with a certain flavor of authoritarian. But encryption also provides your average internet user with a host of benefits—not just protection from state snoops but also identity thieves and other cyber criminals, as well as prying eyes in their personal lives (parents, spouses, bosses, etc.) and at the corporations that administer these tools. Encryption is also good for national security.
An outright ban on end-to-end encryption would be politically unpopular, and probably unconstitutional, since it would effectively mandate that people communicate using tools that allow law enforcement clear and easy access, regardless of whether they are engaged in criminal activity.
So lawmakers have taken to smearing encryption as a way to aid child pornographers and terrorists, while trying to disincentivize tech companies from offering encryption tools by threatening to expose them to huge legal liabilities if they do.
That's the gist of the Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act, from Sen. Lindsey Graham (R–S.C.).
The heart of the measure (S. 1207) relates to Section 230, the federal communications law protecting computer services and users from civil liability for speech by other users, and what was once called child pornography but has recently been rebranded by authorities as child sexual abuse material, or CSAM. Essentially, EARN IT could make tech platforms "earn" immunity from civil liability when users upload or share such material by showing that they're using "best practices," as defined by a new National Commission on Online Child Sexual Exploitation Prevention, to fight its spread.
That sounds reasonable enough—until you realize that hosting child porn is already illegal, platforms are already required to report it to the National Center for Missing and Exploited Children, and tech companies already take many proactive steps to rid their sites of such images. As for civil suits, they can be brought by victims against those actually sharing said images, just not against the digital entities that serve as unwitting conduits for them.
Experts believe the real target of the EARN IT Act is end-to-end encryption. While not an "independent basis for liability," offering users encrypted messaging could be considered going against "best practices" for fighting sexual exploitation. That means companies could have to choose between offering security and privacy to their users and avoiding legal liability for anything shared by or between them.
Similar to the EARN IT Act is the Strengthening Transparency and Obligations to Protect Children Suffering from Abuse and Mistreatment (STOP CSAM) Act (S. 1199), from Sen. Dick Durbin (D–Ill.). It would also amend Section 230.
Riana Pfefferkorn of the Stanford Internet Observatory calls the bill "an anti-encryption stalking horse." Pfefferkorn notes that "Congress has heretofore decided that if online services commit … child sex offenses, the sole enforcer should be the Department of Justice, not civil plaintiffs." But "STOP CSAM would change that."
The bill amends Section 230 to allow civil lawsuits against interactive computer service providers (such as social media platforms) or software distribution services (such as app stores) for "conduct relating to child exploitation." This is defined as "the intentional, knowing, or reckless promotion or facilitation of a violation" of laws against child sex trafficking, pornography, and enticement.
The big issue here is the lax and vague standards under which tech companies could become liable in these lawsuits. The precise legal meanings of "promote" and "facilitate" are unclear and subject to dispute.
Indeed, there's an ongoing federal lawsuit over the similar language in FOSTA, the Fight Online Sex Trafficking Act, which criminalizes websites that "promote or facilitate" sex work. In that case, the challengers have argued that the language is unconstitutionally broad—an argument with which judges seemed to agree. And while it's fairly clear what it means to act "knowingly" or "intentionally," it's less certain what acting "recklessly" in this circumstance would entail.
Pfefferkorn and others worry that offering encrypted communication tools could constitute acting in a "reckless" manner. As with EARN IT, this would force tech companies to choose between offering private and secure communications tools and protecting themselves from massive legal risk—a situation in which few companies would be likely to choose the former.
Age Verification
Threatening encryption isn't the only way new tech bills threaten the privacy and security of everyone online. Proposals at both the state and federal level would require age verification on social media.
Age verification schemes create massive privacy and security concerns, effectively outlawing anonymity online and leaving all users vulnerable to data leaks, corporate snoops, malicious foreign actors, and domestic spying.
To verify user ages, social media companies would have to collect driver's licenses or other state-issued IDs from all users in some capacity—by having users directly submit their documentation to the platform or by relying on third-party ID services, potentially run by the government. Alternatively, they may rely on biometric data, such as facial scans.
Several such proposals are currently before Congress. For instance, the Making Age-Verification Technology Uniform, Robust, and Effective (MATURE) Act (S. 419), from Sen. Josh Hawley (R–Mo.), would ban people under age 16 from social media platforms. To verify users are above age 16, platforms would have to collect full names, dates of birth, and "a scan, image, or upload of government-issued identification." The requirement would be enforced by the Federal Trade Commission and a private right of action. (In the House, the Social Media Child Protection Act, from Utah Republican Rep. Chris Stewart, would do the same thing.)
The Protecting Kids on Social Media Act (S. 1291), from Sen. Brian Schatz (D–Hawaii), is another bill that would explicitly require social media platforms to "verify the age of their users." This one would ban children under 13 entirely and allow 13- to 17-year-olds to join only with parental consent, in addition to prohibiting the use of "algorithmic recommendation systems" for folks under age 18.
Schatz's bill would also launch a "digital identification credential" pilot program in the Department of Commerce, under which people could verify their ages or "their parent or guardian relationship with a minor user." Social media platforms could choose to accept this credential instead of verifying these things on their own.
Commerce would allegedly keep no records of where people used their digital identification—though considering what we know about domestic data collection, it's hard to trust this pledge. In any event, administering the program would necessarily require obtaining and storing personal data. If widely adopted, it would essentially require people to register with the government in order to speak online.
The Kids Online Safety Act (KOSA) wouldn't formally require age verification. But it would mandate a host of rules that social media platforms would be forced to follow for users under age 18.
The bill (S. 1409) comes from Sen. Richard Blumenthal (D–Conn.), who claims it will "stop Big Tech companies from driving toxic content at kids." But according to Techdirt's Mike Masnick, it would give "more power to law enforcement, including state AGs … to effectively force websites to block information that they define as 'harmful.'" Considering some of the things that state lawmakers are attempting to define as harmful these days—information about abortion, gender, race, etc.—that could mean a huge amount of censored content.
KOSA would also create a "duty of care" standard for social media, online video games, messaging apps, video streaming services, and any "online platform that connects to the internet and that is used, or is reasonably likely to be used, by a minor." Covered platforms would be required to "act in the best interests" of minor users by taking "reasonable measures" to prevent and mitigate a range of issues and ills stemming from their services. These include anxiety, depression, suicidal behavior, problematic social media use including "addiction-like behaviors," eating disorders, bullying, harassment, sexual exploitation, drug use, tobacco use, gambling, alcohol consumption, and financial harm.
This standard would mean people can sue social media, video games, and other online digital products for failing to live up to a vague yet sprawling duty.
As with so many other similar laws, the problems arise with implementation, since the law's language would inevitably lead to subjective interpretations. Do "like" buttons encourage "addiction-like behaviors"? Do comments encourage bullying? Does allowing any information about weight loss make a platform liable when someone develops an eating disorder? What about allowing pictures of very thin people? Or providing filters that purportedly promote unrealistic beauty standards? How do we account for the fact that what might be triggering to one young person—a personal tale of overcoming suicidal ideation, for instance—might help another young person who is struggling with the same issue?
Courts could get bogged down with answering these complicated, contentious questions. And tech companies could face a lot of time and expense defending themselves against frivolous lawsuits—unless, of course, they decide to reject speech related to any controversial issue. In which case, KOSA might encourage banning content that could actually help young people.
These bills have serious flaws, but they are also unlikely to become law.
In contrast, some state laws with similar provisions have already been codified.
In March, Utah passed a pair of laws slated to take effect in early 2024. The laws ban minors from using social media without parental approval and require tech companies to give parents complete access to their kids' accounts, including private messages. They also make it illegal for social media companies to show ads to minors or employ any designs or features that could spur social media "addiction"—a category that could include basically anything done to make these platforms useful, engaging, or attractive.
Utah also passed a law requiring porn platforms to verify user ages (instead of simply asking users to affirm that they are 18 or above). But the way the law is written doesn't actually allow for compliance, the Free Speech Coalition's Mike Stabile told Semafor. The Free Speech Coalition has filed a federal lawsuit seeking to overturn the law, arguing that it violates the First and 14th Amendments. In the meantime, Pornhub has blocked access for anyone logging on from Utah.
In Arkansas, the Social Media Safety Act—S.B. 396—emulates Utah's law, banning kids from social media unless they get express parental consent, although it's full of weird exceptions. It's slated to take effect in September 2023.
Meanwhile, in Louisiana, a 2022 law requires platforms where "more than thirty-three and one-third percent of total material" is "harmful to minors" to check visitor IDs. In addition to defining particular nude body parts as being de facto harmful to minors, it ropes in any "material that the average person, applying contemporary community standards" would deem to "appeal or pander" to "the prurient interest." Porn platforms can comply by using LA Wallet, a digital driver's license app approved by the state.
California's Age-Appropriate Design Code Act (A.B. 2273) would effectively require platforms to institute "invasive age verification regimes—such as face-scanning or checking government-issued IDs," as Reason's Emma Camp points out. The tech industry group NetChoice is suing to stop the law, which is supposed to take effect in July 2024.
The List Goes On
Those are far from the only measures—some passed, some pending—meant to protect young people from digital content.
Montana's legislature passed a bill banning TikTok, and Montana Gov. Greg Gianforte, a Republican, signed the bill into law on May 17. In a sign of the state's dedication to accuracy, the short title of the bill, SB 419, erroneously refers to the video-sharing app as "tik-tok." It's scheduled to take effect at the start of next year. The law firm Davis Wright Tremaine is already suing on behalf of five TikTok content creators, and the ban seems unlikely to survive a legal challenge. TikTok itself has also sued over the ban.
Back in Congress, two bills—Hawley's No TikTok on United States Devices Act and Virginia Democrat Sen. Mark Warner's RESTRICT Act—take aim at TikTok under the auspices of national security.
Then there's the Cooper Davis Act (S. 1080), named after a Kansas City teenager who died of a fentanyl overdose after taking what he thought was a Percocet pill he bought online. Lawmakers are now using Davis' death to push for heightened surveillance of social media chatter relating to drugs. Fentanyl is "killing our kids," said bill co-sponsor Jeanne Shaheen (D–N.H.) in a statement. "Tragically, we've seen the role that social media plays in that by making it easier for young people to get their hands on these dangerous drugs."
The bill, from Sen. Roger Marshall (R–Kansas), "would require private messaging services, social media companies, and even cloud providers to report their users to the Drug Enforcement Administration (DEA) if they find out about certain illegal drug sales," explains the digital rights group Electronic Frontier Foundation (EFF). "This would lead to inaccurate reports and turn messaging services into government informants."
EFF suggests the bill could be a template for lawmakers trying to force companies "to report their users to law enforcement for other unfavorable conduct or speech."
"Demanding that anything even remotely referencing an illegal drug transaction be sent to the DEA will sweep up a ton of perfectly protected speech," Masnick points out. "Worse, it will lead to massive overreporting of useless leads."
The Children and Teens' Online Privacy Protection Act (S. 1628), from Sen. Edward Markey (D–Mass.), updates the 1998 Children's Online Privacy Protection Act (COPPA) and is being referred to by its sponsors as "COPPA 2.0." The original bill included a range of regulations related to online data collection and marketing for platforms targeted at kids under age 13. Markey's bill would expand some of these protections to apply to anyone under the age of 17.
It would apply some COPPA rules not just to platforms that target young people or have "actual knowledge" of their ages but to any platform "reasonably likely to be used" by minors and any users "reasonably likely to be" children. (In the House, the Kids PRIVACY Act would also expand on COPPA.)
Ultimately, this onslaught of "child protection" measures could make child and adult internet users more vulnerable to hackers, identity thieves, and snoops.
They could require the collection of even more personal information, including biometric data, and discourage the use of encrypted communication tools. They could lead social media companies to suppress a lot more legal speech. And they could shut young people out of important conversations and information, further isolating those in abusive or vulnerable situations, while subjecting them to serious privacy violations.
Won't somebody please actually think of the children?