YouTube ISIS Videos Mean the Supreme Court Could Reconsider Section 230
Plus: A court rejects a "discriminatory harassment" ban at a Florida university, a private space mission heads back to Earth, and more...
The father of a woman killed by ISIS asks the U.S. Supreme Court to consider a case involving algorithms, terrorism, and free speech. As YouTube celebrates its 17th birthday, the game-changing user-generated video platform once known for funny pet videos and other benign content continues to attract criticism for its alleged role in fostering extremism. Ample evidence casts doubt on the idea that social media platforms are radicalizing American youth, but high-profile anecdotes about bad turns allegedly inspired by YouTube, Facebook, and other sites make such claims hard to combat. Now one such story may come before the Supreme Court—and threaten a foundational internet speech law.
The case (Gonzalez v. Google LLC) involves a man whose daughter was killed in a 2015 ISIS attack in Paris. The grieving father, Reynaldo Gonzalez, sued YouTube's parent company, Google, under the U.S. Anti-Terrorism Act. Gonzalez claims that ISIS posted recruitment videos on YouTube, that YouTube recommended these videos to users, and that this led to his daughter's death.
Gonzalez's circumstances are tragic, but his claim is convoluted and patently illogical. To buy it, you have to buy the idea that without YouTube, the terrorist group not only would be unable to recruit members but would cease to use existing networks to carry out attacks.
The Supreme Court has not yet decided whether it will hear Gonzalez's case. If it does, the case could give the justices a chance to consider the parameters of Section 230, the federal law that shields internet companies and users from liability for third-party speech (and pretty much the backbone of the internet as we know it).
SCOTUSblog notes that in 2020, Justice Clarence Thomas suggested that "in an appropriate case, we should consider whether the text of this increasingly important statute aligns with the current state of immunity enjoyed by Internet platforms." (Thomas made a similar suggestion earlier this year.)
"The petition in Gonzalez v. Google LLC tries to present itself as the case Thomas has been looking for," writes Andrew Hamm:
The district court dismissed Gonzalez's claim on the ground that Section 230 nonetheless protected Google for its recommendations because ISIS produced the videos, not Google. The U.S. Court of Appeals for the 9th Circuit affirmed, concluding that Section 230 protects such recommendations, at least if the provider's algorithm treated content on its website similarly. However, the panel in Gonzalez considered itself bound to reach this result because of a recent case on the same issue that another 9th Circuit panel decided while Gonzalez was pending. The Gonzalez panel further concluded that, if it could resolve the question itself, Section 230 would not protect a provider's content recommendations.
Gonzalez admits that the circuits to face this question are not split. Instead, Gonzalez suggests that, had his case simply come out before the other 9th Circuit decision, then Google would be the one seeking review. Regardless, he maintains, the providers' financial dependence on advertisements and hence on algorithms that can target users makes this question of recommendations important to resolve.
Gonzalez is far from the first person to sue social media companies over terrorist acts. But these people "have yet to talk a court into agreeing with their arguments," as Tim Cushing noted at Techdirt last fall.
"The facts in these cases are invariably awful: often people have been brutally killed and their loved ones are seeking redress for their loss. There is a natural, and perfectly reasonable, temptation to give them some sort of remedy from someone, but as we argued in our brief, that someone cannot be an internet platform," wrote constitutional lawyer Cathy Gellis in 2017:
There are several reasons for this, including some that have nothing to do with Section 230. For instance, even if Section 230 did not exist and platforms could be liable for the harms resulting from their users' use of their services, for them to be liable there would have to be a clear connection between the use of the platform and the harm.
Section 230 "should prevent a court from ever even reaching the tort law analysis," added Gellis. "With Section 230, a platform should never find itself having to defend against liability for harm that may have resulted from how people used it."
A new study backs up what advocates for Section 230 have been saying for a long time: The law protects small companies and creators more than big tech giants.
"If you understand the mechanisms by which Section 230 actually works, the key is that it gets frivolous and wasteful lawsuits kicked out of court much earlier—when the costs are still intense, but more bearable," points out Mike Masnick in a Techdirt post about the study. "Without Section 230, the end result of the court case may be the same—where the company wins—but the costs would be much, much higher. For Facebook and Google that's not going to make a huge difference. But for smaller companies, it can be the difference between life and death."
"Larger enterprises are less impacted by lawsuits," point out the authors of the study, in a report titled "Understanding Section 230 & the Impact of Litigation on Small Providers." A large company "may already have: 1) lawyers ready to deal with lawsuits (allowing the business to continue with minimal disruption); 2) insurance coverage that can offset the cost of a defense, a settlement, or a damage award; and 3) sufficient revenue and cash reserves such that most cases are viewed as a cost of doing business rather than a cataclysmic event."
Small companies are less likely to have or be able to afford any of these things, making it less likely that they will fight even lawsuits that they would win.
Politicians, alas, have taken to scapegoating Section 230 for basically anything they or their constituents don't like about the internet. And lawyers keep getting creative in their anti–Section 230 arguments, suggesting that while Section 230 may broadly protect internet platforms from liability for many things that users post, the law does not cover various facets of these platforms—their design, their algorithms, etc.
Occasionally, judges appear sympathetic to this. In Taamneh v. Twitter, which the 9th Circuit ruled on last year, Judge Marsha Berzon wrote that Section 230 protects "only traditional activities of publication and distribution—such as deciding whether to publish, withdraw, or alter content—and does not include activities that promote or recommend content or connect content users to each other. I urge this Court to reconsider our precedent en banc to the extent that it holds that section 230 extends to the use of machine-learning algorithms to recommend content and connections to users."
"This is a really weird, really dangerous place to start drawing the line in Section 230 lawsuits," wrote Tim Cushing last summer. "Algorithms react to input from users. If YouTube can't be held directly responsible for videos uploaded by users, it makes sense it would be immunized against algorithmically suggesting content based on users' actions and preferences. The algorithm does next to nothing on its own without input from content viewers. It takes a user to really get it moving."
FREE MINDS
A federal appeals court has rejected a university's ban on "discriminatory harassment." The University of Central Florida's "discriminatory harassment" ban likely violates the First Amendment, the 11th Circuit Court of Appeals said. The case is Speech First, Inc. v. Cartwright, and you can read the whole ruling here. In essence, the court found the school's policy against allegedly discriminatory speech to be too broad:
[T]he policy (1) prohibits a wide range of "verbal, physical, electronic, and other" expression concerning any of (depending on how you count) some 25 or so characteristics; (2) states that prohibited speech "may take many forms, including verbal acts, name-calling, graphic or written statements" and even "other conduct that may be humiliating"; (3) employs a gestaltish "totality of known circumstances" approach to determine whether particular speech, for instance, "unreasonably alters" another student's educational experience; and (4) reaches not only a student's own speech, but also her conduct "encouraging," "condoning," or "failing to intervene" to stop another student's speech.
The policy, in short, is staggeringly broad, and any number of statements—some of which are undoubtedly protected by the First Amendment—could qualify for prohibition under its sweeping standards. To take a few obvious examples, the policy targets "verbal, physical, electronic or other conduct" based on "race," "ethnicity," "religion [or] non-religion," "sex," and "political affiliation." Among the views that Speech First's members have said they want to advocate are that "abortion is immoral," that the government "should not be able to force religious organizations to recognize marriages with which they disagree," that "affirmative action is deeply unfair," that "a man cannot become a woman because he 'feels' like one," that "illegal immigration is dangerous," and that "the Palestinian movement is anti-Semitic."
In a concurring opinion, Judge Stanley Marcus warned against "the grave peril posed by a policy that effectively polices adherence to intellectual dogma."
"History provides us with ample warning of those times and places when colleges and universities have stopped pursuing truth and have instead turned themselves into cathedrals for the worship of certain dogma," wrote Marcus. "By depriving itself of academic institutions that pursue truth over any other concern, a society risks falling into the abyss of ignorance. Humans are not smart enough to have ideas that lie beyond challenge and debate."
The Volokh Conspiracy has more on the ruling (which includes the lovely line "the state is a really big boy") here.
FREE MARKETS
Private space mission on its way back to Earth. The first privately funded trip to the International Space Station began its return journey yesterday. More from CNN:
The mission, called AX-1, was brokered by the Houston, Texas-based startup Axiom Space, which books rocket rides, provides all the necessary training, and coordinates flights to the ISS for anyone who can afford it.
The four crew members—Michael López-Alegría, a former NASA astronaut-turned-Axiom employee who is commanding the mission; Israeli businessman Eytan Stibbe; Canadian investor Mark Pathy; and Ohio-based real estate magnate Larry Connor—left the space station aboard their SpaceX Crew Dragon capsule on Sunday at 9:10 pm EST….They will spend about one day free flying through orbit before plummeting back into the atmosphere and parachuting to a splashdown landing off the coast of Florida around 1 pm ET Monday.
AX-1, which launched on April 8, was originally billed as a 10-day mission, but delays extended it by about a week.
This isn't the first time private citizens have been able to pay for seats on spacecraft. "But AX-1 is the first mission with a crew entirely comprised of private citizens with no active members of a government astronaut corps accompanying them in the capsule during the trip to and from the ISS," reports CNN. "It's also the first time private citizens have traveled to the ISS on a US-made spacecraft."
And it's a powerful reminder that the next era of space exploration needn't be government-funded and government-managed.
QUICK HITS
• Air Force Major General William T. Cooley has been found guilty of abusive sexual conduct. It's "the first court-martial trial and conviction of a general officer in the 75-year history of the military branch," The New York Times reports.
• Twitter is "reportedly prepared to accept Elon Musk's offer to buy the company."
• Wynn Bruce, a climate activist who set himself on fire in front of the Supreme Court on Friday, has died.
• The European Union is demanding that tech companies crack down on "hate speech."
• What happened to CNN+?
• Actor Johnny Depp's defamation trial against ex-wife Amber Heard is underway. Depp testified last week and is set to be cross-examined by Heard's legal team today.
• Shanghai is fencing in COVID-19 quarantine areas. "On social media, images of government workers wearing hazmat suits have gone viral as they sealed off entrances to housing blocks in the city and closed off entire streets with green fencing, prompting questions and complaints from residents," reports Al Jazeera. "Many of the fences have been erected around locations designated as 'sealed areas', which are residential buildings where at least one person has tested positive for COVID-19, meaning those inside are forbidden from leaving their front doors."