Kids Have First Amendment Rights Too, Federal Judge Reminds State Lawmakers
Banning people under age 16 from accessing social media without parental consent "is a breathtakingly blunt instrument" for reducing potential harms, the judge writes.
An Ohio law requiring that people under age 16 get parental permission to use social media is unconstitutional, a federal judge held this week.
On Monday, U.S. Judge Algenon Marbley told Ohio's attorney general not to enforce the law against tech industry group NetChoice or any of its members—a group that includes all sorts of major U.S. tech companies, including Meta, Pinterest, and TikTok. While this week's ruling is just a preliminary injunction, Marbley's opinion leaves little room for doubt that the tech companies will ultimately win here.
Marbley's ruling is the latest in a string of federal court orders against state laws intended to limit minors' social media or require platforms to follow special rules for users under the age of 18. Meanwhile, similar measures are still spreading like a bad viral meme throughout U.S. statehouses.
You are reading Sex & Tech, the newsletter from Elizabeth Nolan Brown on sex, technology, bodily autonomy, law, and online culture.
Childproofing the Internet
Ohio's "Parental Notification by Social Media Operators" law was passed last summer, following the passage of similar legislation in Utah and Arkansas. It requires many websites and apps to get consent from parents or guardians before allowing anyone under age 16 to sign up. Companies that fail to do this could be sued by the state's attorney general and face fines.
The Ohio law—which was slated to take effect on January 15, 2024—is part of a wave of attempts to childproof the internet through mechanisms like checking IDs for people who want to use social media or visit adult websites, raising the minimum age for opening a social media account, requiring minors to prove they have their parents' consent to use social media, and banning the use of certain social media features for users under a certain age.
Of course, applying specific rules to minors means tech companies must verify the ages and identities of all users. You or I might not need parental permission to use TikTok, but we would need to first prove we are old enough to get around that step.
For a good overview of the issues with online age-check laws, see this series from the R Street Institute's Shoshana Weissmann. She points out myriad downsides and unintended consequences, including the fact that these laws violate privacy, make people vulnerable to hackers and hostile governments, discourage data minimization (even when they sometimes purport to do the opposite), threaten our First Amendment right to anonymity, interfere with parental choice, risk criminalizing kids who try to outsmart them, and sometimes ban features—like algorithms—that actually make platforms more useful and safe.
The NetChoice Lawsuit
"Like other States before it, Ohio has unconstitutionally tried to limit certain minors' access to protected and valuable speech on the Internet," argued NetChoice in a complaint filed in January. The group argues that Ohio's parental consent for social media law is unconstitutional in multiple ways.
The law interferes with minors' right to access and engage in protected speech, it "baldly discriminates among online operators based on the type of speech they publish," and its provisions are also "unconstitutionally vague," NetChoice argued. For these reasons, it violates the First Amendment rights of Ohioans under age 16 and the First and Fourteenth Amendment rights of tech companies.
The law would apply to online entities that allow users to interact socially, construct profiles, and create or post content when such an entity "targets children, or is reasonably anticipated to be accessed by children"—a rather porous category. "Websites have no way to know what this means," NetChoice argued in its complaint.
Further complicating compliance and understanding, Ohio's legislature exempted sites where "interaction between users is limited" to e-commerce reviews or to comments on content "posted by an established and widely recognized media outlet" that primarily reports news and current events.
These parameters mean review forums focused on products for sale online wouldn't have to get parental consent, but platforms dedicated to other types of reviews would. Likewise, "established" or "widely recognized" media focused on news or current events wouldn't have to get parental consent, but newer, lesser known, or mixed-purpose media platforms would. This exception discriminates against non-news content, "such as literature, art, history, or religion," argued NetChoice. "Thus, 15-year-olds must secure parental consent to join web forums devoted to United States history, but not to comment on contemporary news stories."
Ohio Attorney General David Yost argued in response that the law didn't concern speech at all but the right to contract, and that this fell within the state's authority to regulate commercial transactions.
This is a common tactic with authorities when it comes to speech restrictions involving social media or internet platforms. Rather than outright banning speech, they'll put burdensome restrictions on it and then argue that since the law focuses on contracts, or paperwork collection (as in the case of a new federal porn bill), or product design or some such thing, the First Amendment couldn't possibly apply.
No Contract Exception to Free Speech
Judge Marbley was not persuaded by Yost's argument. After issuing a temporary restraining order prohibiting enforcement against NetChoice or its members back in January, Marbley this week granted NetChoice's motion for a preliminary injunction, extending that temporary block on enforcement as the case plays out.
"The Act regulates speech in multiple ways: (1) it regulates operators' ability to publish and distribute speech to minors and speech by minors; and (2) it regulates minors' ability to both produce speech and receive speech," wrote Marbley, adding that there is no "contract exception" to free speech rights.
"In the State's view, the Act is a regulation striking at the commercial aspect of the relationship between social media platforms and their users, not the speech aspect of the relationship," noted Marbley. "But this Court does not think that a law prohibiting minors from contracting to access to a plethora of protected speech can be reduced to a regulation of commercial conduct." Thus, the judge agreed with NetChoice that the law restricts minors' First Amendment rights.
To come to this determination, Marbley cited a Supreme Court case concerning video games (Brown v. Entertainment Merchants Association), in which the justices wrote that even if "the state has the power to enforce parental prohibitions," it didn't follow "that the state has the power to prevent children from hearing or saying anything without their parents' prior consent." Laws that do so do not "enforce parental authority over children's speech and religion; they impose governmental authority, subject only to a parental veto," the justices wrote in that case.
The judge also agreed with NetChoice that the law represents a content-based regulation, which makes it subject to a more strict level of scrutiny than regulations that are not. Interestingly, the judge reached this conclusion in part because the law targets sites with particular functions. "Functionalities allowing users to post, comment, and privately chat—in other words, to connect socially online—may very well be conveying a message about 'the type of community the platform seeks to foster,'" wrote Marbley (citing a case in which NetChoice challenged a Texas social media law). "The features that the Act singles out are inextricable from the content produced by those features" and thus "the Act's distinction on the basis of these functionalities" is content-based.
Subjecting the law to strict scrutiny means that in order for it to be OK, it must further a compelling governmental interest and be narrowly tailored to that end. But while the state says it's out to protect minors from dangerous contracts, "the Act is not narrowly tailored to protect minors against oppressive contracts," wrote Marbley. The judge also rejected the idea that the act is narrowly tailored to other areas that may or may not be considered compelling government interests, such as protecting kids' mental health.
"Foreclosing minors under sixteen from accessing all content on websites that the Act purports to cover, absent affirmative parental consent, is a breathtakingly blunt instrument for reducing social media's harm to children," wrote Marbley. "The approach is an untargeted one, as parents must only give one-time approval for the creation of an account, and parents and platforms are otherwise not required to protect against any of the specific dangers that social media might pose."
Lastly, the court found that the law's provisions are "troublingly vague" with regard to which websites are subject to it. Terms like "reasonably anticipated to be accessed by children," or "established" and "widely recognized" media outlets, do not precisely describe whether a particular website or media outlet is subject to the law's rules, and "such capacious and subjective language practically invites arbitrary application of the law."
Legislative Contagion
Marbley's recent ruling is the latest blow to the legislative trend of requiring age checks for social media, parental consent for minors to use social media, or other regulations targeting teens and social media use. Some of these decisions have been quite bold in their rebukes, too.
In August, a federal judge held that Arkansas' law to this effect likely violated the First Amendment. "Requiring adult users to produce state-approved documentation to prove their age and/or submit to biometric age-verification testing imposes significant burdens on adult access to constitutionally protected speech," wrote Judge Timothy Brooks in his decision. Furthermore, "a State possesses legitimate power to protect children from harm, but that does not include a free-floating power to restrict the ideas to which children may be exposed. Speech that is neither obscene as to youths nor subject to some other legitimate proscription cannot be suppressed solely to protect the young from ideas or images that a legislative body thinks unsuitable for them."
In September, a federal court in California halted enforcement of the state's "Age-Appropriate Design Code" (CAADCA), in another case where the state tried to argue that the rules didn't regulate speech but conduct—collecting and using data. Among other things, implementation of the code would have required that tech companies use "age estimation" techniques that would be invasive to all users' privacy or else treat all users as children.
Alas, these rulings don't seem to have dampened enthusiasm for this sort of legislation. So many states are now considering online age-check proposals that it's hard to keep track of them all.
Some of these—like bills under consideration in Georgia, Idaho, Indiana, Iowa, Kansas, Tennessee, and West Virginia—would require people to verify their ages before visiting sites featuring adult content such as pornography.
Others would require social media users to verify their ages, ban children from signing up for accounts, require parental consent for people under 18 or 16 years old, or set up specific social media features that couldn't be made available to minors. In the past several months, social media restrictions bills like these have been introduced in Florida, Iowa, New York, Pennsylvania, and Tennessee. Utah, which passed the first of these laws last year and is trying to get out of defending it in court, is also considering a revised version.
The Pennsylvania bill is particularly egregious. It would "require social media companies to monitor the chats of two or more minors on the platform and notify parents or legal guardians of flagged sensitive or graphic content," per a press release from the state's Democratic Caucus.
How many times will tech and civil liberties groups have to fight measures like these in court? I guess we're about to find out.
"This is the fourth ruling NetChoice has obtained, demonstrating that this law and others like it in California and Arkansas not only violate constitutional rights, but if enacted, would fail to achieve the state's goal of protecting kids online," said Chris Marchese, Director of the NetChoice Litigation Center, in an emailed statement. "We look forward to seeing these laws permanently struck down and online speech and privacy fully protected across America."