For the past several years, lawmakers and bureaucrats around the country have been trying to solve a problem. They wanted to regulate the internet, and in particular, they wanted to censor content and undermine a variety of systems that allow for privacy and anonymity online—the systems, in other words, that allow for online individuals to conduct themselves freely and outside of the purview of politicians.
There was something like a bipartisan agreement on the necessity of these rules and regulations. Lawmakers and regulators test-drove a number of potential arguments for online speech rules, including political bias, political extremism, drug crime, and the fact that some tech companies are just really big. But it turned out to be quite difficult to drum up support for wonky causes like antitrust reform or amending the internet liability law Section 230, and even harder to make the case that the sheer size of companies like Amazon was really the problem.
Their efforts tended to falter because they lacked a consensus justification. Those in power knew what they wanted to do. They just didn't know why, or how.
But in statehouses and in Congress today, that problem appears to have been solved. Politicians looking to censor online content and more tightly regulate digital life have found their reason: child safety.
Online child safety has become an all-purpose excuse for restricting speech and interfering with private communications and business activities. In late May, Surgeon General Vivek Murthy issued an advisory on social media and youth mental health, effectively giving the White House's blessing to the panic. And a flurry of bills have been proposed to safeguard children against the alleged evils of Big Tech.
Unlike those other failed justifications, child protection works because it has a massive built-in constituency, lending itself to truly bipartisan action.
Many people have kids old enough to use the internet, and parents are either directly concerned with what their offspring are doing and seeing online or at least susceptible to being scared about what could be done and seen.
The resulting wave of bills represents what one could call an attempt to childproof the internet.
It's misguided, dangerous, and likely doomed to fail. Not only has it created a volatile situation for privacy, free expression, and other civil liberties, it also threatens to wreak havoc on any number of common online businesses and activities. And because these internet safety measures are written broadly and poorly, many could become quiet vehicles for larger expansions of state power or infringements on individual rights.
Threats to Encryption
End-to-end encryption has long been a target of government overseers. With end-to-end encryption, only the sender and recipient of a message can see it; it is scrambled as it's transmitted between them, shielding a message's contents from even the tech company doing the transmitting. Privacy-focused email services like Proton Mail and Tutanota use it, as do direct messaging services like Signal and WhatsApp. These days, more platforms—including Google Messages and Apple's iCloud—are beginning to offer end-to-end encryption options.
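The core property can be sketched with a deliberately simplified example. A one-time pad stands in here for the real key-agreement and ratcheting protocols that services like Signal actually use, and the `otp_xor` helper is an illustrative name rather than any real library's API—but the principle is the same: the relay in the middle carries only ciphertext, and only the endpoints holding the shared key can recover the message.

```python
# Toy illustration of the end-to-end principle (NOT a real protocol):
# the server relaying the message sees only ciphertext; only the two
# endpoints, which share the key, can read the plaintext.
import secrets

def otp_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with an equal-length key; the same operation both
    encrypts and decrypts, since XOR is its own inverse."""
    return bytes(k ^ b for k, b in zip(key, data))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # held only by the two endpoints

ciphertext = otp_xor(key, message)    # all the relay server ever sees
recovered = otp_xor(key, ciphertext)  # only a key holder can do this step
```

In a real deployment the hard part is getting that shared key to both endpoints without the server learning it, which is what key-exchange protocols solve; the sketch above simply assumes the key is already in place.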
The fact that people can communicate in such ways doesn't sit right with a certain flavor of authoritarian. But encryption also provides your average internet user with a host of benefits—not just protection from state snoops but also identity thieves and other cyber criminals, as well as prying eyes in their personal lives (parents, spouses, bosses, etc.) and at the corporations that administer these tools. Encryption is also good for national security.
An outright ban on end-to-end encryption would be politically unpopular, and probably unconstitutional, since it would effectively mandate that people communicate using tools that allow law enforcement clear and easy access, regardless of whether they are engaged in criminal activity.
So lawmakers have taken to smearing encryption as a way to aid child pornographers and terrorists, while trying to disincentivize tech companies from offering encryption tools by threatening to expose them to huge legal liabilities if they do.
That's the gist of the Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act, from Sen. Lindsey Graham (R–S.C.).
The heart of the measure (S. 1207) relates to Section 230, the federal communications law protecting computer services and users from civil liability for speech by other users, and what was once called child pornography but has recently been rebranded by authorities as child sexual abuse material, or CSAM. Essentially, EARN IT could make tech platforms "earn" immunity from civil liability when users upload or share such material by showing that they're using "best practices," as defined by a new National Commission on Online Child Sexual Exploitation Prevention, to fight its spread.
That sounds reasonable enough—until you realize that hosting child porn is already illegal, platforms are already required to report it to the National Center for Missing and Exploited Children, and tech companies already take many proactive steps to rid their sites of such images. As for civil suits, they can be brought by victims against those actually sharing said images, just not against digital entities that serve as unwitting conduits for it.
Experts believe the real target of the EARN IT Act is end-to-end encryption. While not an "independent basis for liability," offering users encrypted messaging could be considered going against "best practices" for fighting sexual exploitation. That means companies could have to choose between offering security and privacy to their users and avoiding legal liability for anything shared by or between them.
Similar to the EARN IT Act is the Strengthening Transparency and Obligations to Protect Children Suffering from Abuse and Mistreatment (STOP CSAM) Act (S. 1199), from Sen. Dick Durbin (D–Ill.). It would also amend Section 230.
Riana Pfefferkorn of the Stanford Internet Observatory calls the bill "an anti-encryption stalking horse." Pfefferkorn notes that "Congress has heretofore decided that if online services commit … child sex offenses, the sole enforcer should be the Department of Justice, not civil plaintiffs." But "STOP CSAM would change that."
The bill amends Section 230 to allow civil lawsuits against interactive computer service providers (such as social media platforms) or software distribution services (such as app stores) for "conduct relating to child exploitation." This is defined as "the intentional, knowing, or reckless promotion or facilitation of a violation" of laws against child sex trafficking, pornography, and enticement.
The big issue here is the lax and/or vague standards under which tech companies can become liable in these lawsuits. Precise legal meanings of "promote" and "facilitate" are unclear and subject to legal dispute.
Indeed, there's an ongoing federal lawsuit over the similar language in FOSTA, the Fight Online Sex Trafficking Act, which criminalizes websites that "promote or facilitate" sex work. In that case, the challengers have argued that the language is unconstitutionally broad—an argument with which judges seemed to agree. And while it's fairly clear what it means to act "knowingly" or "intentionally," it's less certain what acting "recklessly" in this circumstance would entail.
Pfefferkorn and others worry that offering encrypted communication tools could constitute acting in a "reckless" manner. As with EARN IT, this would force tech companies to choose between offering private and secure communications tools and protecting themselves from massive legal risk—a situation in which few companies would be likely to choose the latter.
Threatening encryption isn't the only way new tech bills threaten the privacy and security of everyone online. Proposals at both the state and federal level would require age verification on social media.
Age verification schemes create massive privacy and security concerns, effectively outlawing anonymity online and leaving all users vulnerable to data leaks, corporate snoops, malicious foreign actors, and domestic spying.
To verify user ages, social media companies would have to collect driver's licenses or other state-issued IDs from all users in some capacity—by having users submit their documentation directly to the platform or by relying on third-party ID services, potentially run by the government. Alternatively, they might rely on biometric data, such as facial scans.
Several such proposals are currently before Congress. For instance, the Making Age-Verification Technology Uniform, Robust, and Effective (MATURE) Act (S. 419), from Sen. Josh Hawley (R–Mo.), would ban people under age 16 from social media platforms. To verify users are above age 16, platforms would have to collect full names, dates of birth, and "a scan, image, or upload of government-issued identification." The requirement would be enforced by the Federal Trade Commission and a private right of action. (In the House, the Social Media Child Protection Act, from Utah Republican Rep. Chris Stewart, would do the same thing.)
The Protecting Kids on Social Media Act (S. 1291), from Sen. Brian Schatz (D–Hawaii), is another bill that would explicitly require social media platforms to "verify the age of their users." This one would ban children under 13 entirely and allow 13- to 17-year-olds to join only with parental consent, in addition to prohibiting the use of "algorithmic recommendation systems" for folks under age 18.
Schatz's bill would also launch a "digital identification credential" pilot program in the Department of Commerce, under which people could verify their ages or "their parent or guardian relationship with a minor user." Social media platforms could choose to accept this credential instead of verifying these things on their own.
Commerce would allegedly keep no records of where people used their digital identification—though considering what we know about domestic data collection, it's hard to trust this pledge. In any event, administering the program would necessarily require obtaining and storing personal data. If widely adopted, it would essentially require people to register with the government in order to speak online.
The Kids Online Safety Act (KOSA) wouldn't formally require age verification. But it would mandate a host of rules that social media platforms would be forced to follow for users under age 18.
The bill (S. 1409) comes from Sen. Richard Blumenthal (D–Conn.), who claims it will "stop Big Tech companies from driving toxic content at kids." But according to Techdirt's Mike Masnick, it would give "more power to law enforcement, including state AGs … to effectively force websites to block information that they define as 'harmful.'" Considering some of the things that state lawmakers are attempting to define as harmful these days—information about abortion, gender, race, etc.—that could mean a huge amount of censored content.
KOSA would also create a "duty of care" standard for social media, online video games, messaging apps, video streaming services, and any "online platform that connects to the internet and that is used, or is reasonably likely to be used, by a minor." Covered platforms would be required to "act in the best interests" of minor users by taking "reasonable measures" to "prevent and mitigate" a range of issues and ills their services might provoke. These include anxiety, depression, suicidal behavior, problematic social media use including "addiction-like behaviors," eating disorders, bullying, harassment, sexual exploitation, drug use, tobacco use, gambling, alcohol consumption, and financial harm.
This standard would mean people can sue social media, video games, and other online digital products for failing to live up to a vague yet sprawling duty.
As with so many other similar laws, the problems arise with implementation, since the law's language would inevitably lead to subjective interpretations. Do "like" buttons encourage "addiction-like behaviors"? Do comments encourage bullying? Does allowing any information about weight loss make a platform liable when someone develops an eating disorder? What about allowing pictures of very thin people? Or providing filters that purportedly promote unrealistic beauty standards? How do we account for the fact that what might be triggering to one young person—a personal tale of overcoming suicidal ideation, for instance—might help another young person who is struggling with the same issue?
Courts could get bogged down with answering these complicated, contentious questions. And tech companies could face a lot of time and expense defending themselves against frivolous lawsuits—unless, of course, they decide to reject speech related to any controversial issue, in which case KOSA might encourage banning content that could actually help young people.
These bills have serious flaws, but they are also unlikely to become law.
In contrast, several states have already enacted similar provisions.
In March, Utah passed a pair of laws slated to take effect in early 2024. The laws ban minors from using social media without parental approval and require tech companies to give parents complete access to their kids' accounts, including private messages. They also make it illegal for social media companies to show ads to minors or employ any designs or features that could spur social media "addiction"—a category that could include basically anything done to make these platforms useful, engaging, or attractive.
Utah also passed a law requiring porn platforms to verify user ages (instead of simply asking users to affirm that they are 18 or above). But the way the law is written doesn't actually allow for compliance, the Free Speech Coalition's Mike Stabile told Semafor. The Free Speech Coalition has filed a federal lawsuit seeking to overturn the law, arguing that it violates the First and 14th Amendments. In the meantime, Pornhub has blocked access for anyone logging on from Utah.
In Arkansas, the Social Media Safety Act—S.B. 396—emulates Utah's law, banning kids from social media unless they get express parental consent, although it's full of weird exceptions. It's slated to take effect in September 2023.
Meanwhile, in Louisiana, a 2022 law requires platforms where "more than thirty-three and one-third percent of total material" is "harmful to minors" to check visitor IDs. In addition to defining particular nude body parts as being de facto harmful to minors, it ropes in any "material that the average person, applying contemporary community standards" would deem to "appeal or pander" to "the prurient interest." Porn platforms can comply by using LA Wallet, a digital driver's license app approved by the state.
California's Age-Appropriate Design Code Act (A.B. 2273) would effectively require platforms to institute "invasive age verification regimes—such as face-scanning or checking government-issued IDs," as Reason's Emma Camp points out. The tech industry group NetChoice is suing to stop the law, which is supposed to take effect in July 2024.
The List Goes On
Those are far from the only measures—some passed, some pending—meant to protect young people from digital content.
Montana's legislature passed a bill banning TikTok, and Montana Gov. Greg Gianforte, a Republican, signed it into law on May 17. In a sign of the state's dedication to accuracy, the short title of the bill, SB 419, erroneously refers to the video-sharing app as "tik-tok." The ban is scheduled to take effect at the start of next year. The law firm Davis Wright Tremaine is already suing on behalf of five TikTok content creators, TikTok itself has also sued over the ban, and the law seems unlikely to survive a legal challenge.
Then there's the Cooper Davis Act (S. 1080), named after a Kansas City teenager who died after taking what he thought was a Percocet pill that he bought online. The pill was laced with fentanyl, and Cooper overdosed. Lawmakers are now using Davis' death to push for heightened surveillance of social media chatter relating to drugs. Fentanyl is "killing our kids," said bill co-sponsor Jeanne Shaheen (D–N.H.) in a statement. "Tragically, we've seen the role that social media plays in that by making it easier for young people to get their hands on these dangerous drugs."
The bill, from Sen. Roger Marshall (R–Kansas), "would require private messaging services, social media companies, and even cloud providers to report their users to the Drug Enforcement Administration (DEA) if they find out about certain illegal drug sales," explains the digital rights group Electronic Frontier Foundation (EFF). "This would lead to inaccurate reports and turn messaging services into government informants."
EFF suggests the bill could be a template for lawmakers trying to force companies "to report their users to law enforcement for other unfavorable conduct or speech."
"Demanding that anything even remotely referencing an illegal drug transaction be sent to the DEA will sweep up a ton of perfectly protected speech," Masnick points out. "Worse, it will lead to massive overreporting of useless leads."
The Children and Teens' Online Privacy Protection Act (S. 1628), from Sen. Edward Markey (D–Mass.), would update the 1998 Children's Online Privacy Protection Act (COPPA); its sponsors call it "COPPA 2.0." The original law imposed a range of regulations on online data collection and marketing by platforms targeted at kids under age 13. Markey's bill would extend some of these protections to anyone under the age of 17.
It would apply some COPPA rules not just to platforms that target young people or have "actual knowledge" of their ages but to any platform "reasonably likely to be used" by minors and any users "reasonably likely to be" children. (In the House, the Kids PRIVACY Act would also expand on COPPA.)
Ultimately, this onslaught of "child protection" measures could make child and adult internet users more vulnerable to hackers, identity thieves, and snoops.
They could require the collection of even more personal information, including biometric data, and discourage the use of encrypted communication tools. They could lead social media companies to suppress a lot more legal speech. And they could shut young people out of important conversations and information, further isolating those in abusive or vulnerable situations, and subjecting young people to serious privacy violations.
Won't somebody please actually think of the children?