'Let Parents Decide' What Kids Can Do Online, Argue Tech Groups in New Lawsuit
The groups are challenging a Florida law that bans some teens from social media.
The Computer and Communications Industry Association (CCIA) and NetChoice, two prominent tech-industry trade groups, have filed a lawsuit against a Florida statute barring younger teens from social media. Their suit—filed Monday in the U.S. District Court for the Northern District of Florida—cites First Amendment concerns with Florida House Bill 3, which the groups also portray as an imposition on parents' rights.
"Florida House Bill 3 is the latest attempt in a long line of government efforts to restrict new forms of constitutionally protected expression based on concerns about their potential effects on minors," their complaint opens. "Books, movies, television, rock music, video games, and the Internet have all been accused in the past of posing risks to minors. Today, similar debates rage about 'social media' websites."
"These debates are important, and the government may certainly take part in them," the tech groups continue. "But the First Amendment does not take kindly to government effort to resolve them. The Constitution instead leaves the power to decide what speech is appropriate for minors where it belongs: with their parents."
You are reading Sex & Tech, the newsletter from Elizabeth Nolan Brown on sex, technology, bodily autonomy, law, and online culture. Want more on sex, technology, and the law? Subscribe to Sex & Tech. It's free and you can unsubscribe any time.
"Like adults, minors use these websites to engage in an array of First Amendment activity"
Signed into law by Florida Gov. Ron DeSantis (R) last March, HB3 requires social media platforms to categorically reject accounts from anyone under age 14 (or anyone the company suspects is under age 14), and to ban 14- and 15-year-old users unless they get parental permission. The law also requires websites and apps to verify the ages of all visitors if the platform publishes material "harmful to minors"—a category defined broadly to include all sorts of content that depicts or describes sexual conduct, "appeals to the prurient interest," and is deemed by the state to lack "serious literary, artistic, political, or scientific value" for people under age 18. "While the bill does not specify how exactly social media sites should verify a customer's age, with such large consequences for violating the law, it's likely that companies will require customers to hand over their government ID, submit to a facial scan, or otherwise hand over sensitive information," noted Reason's Emma Camp earlier this year.
The new NetChoice and CCIA lawsuit challenges Section 1 of HB3, the part pertaining to teens and social media.
Opponents of such measures often argue that they are a privacy nightmare, creating a trove of personal information vulnerable to hackers, and that they infringe on the rights of adults in the name of protecting children. All of that is true. But we should also oppose laws like HB3 because kids have First Amendment rights, too.
This is an argument that CCIA and NetChoice run with in their complaint:
Like adults, minors use these websites to engage in an array of First Amendment activity on a wide range of topics. Minors use online services to read the news, connect with friends, explore new interests, follow their favorite sports teams, and research their dream colleges. Some use online services to hone a new skill or showcase their creative talents, including photography, writing, or other forms of expression. Others use them to raise awareness about social causes and to participate in public discussions on salient topics of the day. Still others use them to build communities and connect with others who share similar interests or experiences, which is particularly helpful for minors who feel isolated or marginalized at home, or are seeking support from others who understand their experiences.
Reasonable people can have differences of opinion about what kinds of platforms are appropriate for minors and at what ages, suggests the complaint—and that's precisely why such decisions are best left up to parents. There are myriad tools available allowing parents to monitor and limit their own children's online activities, and these can serve the purpose of protecting kids in a less "draconian" manner than age verification rules and blanket bans do.
"In short, in a Nation that values the First Amendment, the preferred response from the government is to let parents decide what speech is appropriate for their minor children, including by using tools that make it easier for them to restrict access should they choose to do so," argue CCIA and NetChoice.
"Burdening protected speech that citizens find especially interesting is especially inconsistent with the First Amendment."
It's not just the infringement on minors' First Amendment rights and on parental decision making that CCIA and NetChoice object to. Their complaint also criticizes the strange parameters of HB3, which covers only social media platforms where 1) 10 percent or more of daily active users under age 16 spend an average of two hours per day or more on the platform on days that they use it, and 2) customized recommendation algorithms and functions that the law defines as "addictive features" (things like infinite scroll, push notifications, auto-play functions, livestreaming functions, and "personal interactive metrics") are present. The law specifically excludes services dedicated solely to email or direct messaging.
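As described, the statute's coverage turns on a conjunctive test: heavy use by minors, plus the presence of an "addictive feature," minus a carve-out for pure email or messaging services. A minimal sketch of that test, with hypothetical field names and a simplified feature list (none of these identifiers come from the statute itself):

```python
# Illustrative sketch only (not legal advice): a hypothetical predicate
# modeling HB3 Section 1's coverage test as summarized above. The dict keys
# and the ADDICTIVE_FEATURES set are assumptions made for illustration.
ADDICTIVE_FEATURES = {
    "infinite_scroll",
    "push_notifications",
    "auto_play",
    "livestreaming",
    "personal_interactive_metrics",
}

def hb3_covers(platform: dict) -> bool:
    """Return True if a platform appears to meet both prongs of HB3 Section 1."""
    # Prong 1: 10%+ of daily active users under 16 average 2+ hours per day
    # on the days that they use the platform.
    heavy_minor_use = platform["pct_dau_under_16_with_2hr_avg"] >= 0.10
    # Prong 2: the platform employs at least one defined "addictive feature."
    has_addictive_feature = bool(ADDICTIVE_FEATURES & set(platform["features"]))
    # Carve-out: services dedicated solely to email or direct messaging.
    excluded = platform.get("solely_email_or_dm", False)
    return heavy_minor_use and has_addictive_feature and not excluded
```

Note that neither prong looks at what content the platform hosts, which is the point the groups make next.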
The groups point out that Section 1 of HB3 "does not focus on any particular content that may pose special risk to minors," nor "on identifying specific means of or forums for communication that those seeking to take advantage of minors have proven more likely to use." Instead, it centers on how much minors seem to enjoy a particular platform and "whether it employs tools designed to bring to their attention content they might like."
"By that metric, the state could restrict access to the most popular segments of nearly any medium for constitutionally protected speech, be it enticing video games, page-turning novels, or binge-worthy TV shows," the groups suggest. "Burdening protected speech that citizens find especially interesting is especially inconsistent with the First Amendment."
There are practical problems with enforcing HB3, too, the groups point out. The law lists some broad parameters for how platforms are supposed to determine who is a child's parent or guardian, but these seem to be a mix of the toothless and the totally invasive.
Similar Laws—and Challenges—Abound
Lawsuits against social media age verification rules in other states—including Utah and Tennessee—are currently underway. And a number of such laws have already been rejected by federal judges.
Similarly, suits challenging age verification rules for adult websites are proliferating.
Multiple states have enacted requirements similar to Florida's rule regarding sexually oriented online content. So far, federal courts have been pretty good at seeing these for the unconstitutional messes that they are. This includes cases out of California, Arkansas, Texas, Indiana, and Mississippi—though, in the Texas case, an appeals court vacated the lower court's injunction against an age verification mandate and, in April, the U.S. Supreme Court declined to stay the appeals court's ruling. In July, however, SCOTUS said it would take up the case in full sometime during the term that began this month.
Laws requiring age verification for adult sites have become popular among some Republicans—including those behind Project 2025, who see them as a back door to banning porn. In many states with age verification laws for adult content, major porn platforms have blocked viewers from those states rather than shoulder the burden of checking IDs.
More Sex & Tech News
• Using tracking technology to collect data on visitors to a website does not count as illegal wiretapping, Massachusetts' highest court has held. With regard to the state's Wiretap Act, which bans intercepting "wire and oral communications," judges could not "conclude with any confidence that the Legislature intended 'communication' to extend so broadly as to criminalize the interception of web browsing and other such interactions," they wrote.
• A federal judge has extended a temporary restraining order against Florida officials threatening TV stations that air an ad promoting a reproductive freedom ballot initiative up for a vote this year. (More on the case, from Reason, here.)
• The family of a teen who committed suicide is suing over an AI chatbot that they claim encouraged him to take his own life.
• "Part of internet literacy is recognizing that what an algorithm presents to you is just a suggestion and not wholly outsourcing your brain to the algorithm," writes Mike Masnick in a Techdirt piece lampooning yet another New York Times article mischaracterizing Section 230. "If the problem is people outsourcing their brain to the algorithm, it won't be solved by outlawing algorithms or adding liability to them." Masnick goes on to explain why algorithmic recommendations are—and certainly should be—protected by the First Amendment.