N.Y. Appellate Court Rejects Addictive Design Theory in Lawsuit Against Social Media Defendants Over Buffalo Shootings
[UPDATE: A New York lawyer writes that the plaintiffs will be entitled to have the case heard by New York's highest court, if they so wish (as I assume they would): "I wanted to note that this case appears to be an automatic appeal as of right to the NY Court of Appeals due to the two dissenters. CPLR 5601(a). The decision seemed to indicate that the case is fully dismissed, which is the requirement for finality in NY. CPLR 5611."]
An excerpt from Patterson v. Meta Platforms, Inc., decided Friday by a panel of the New York intermediate appellate court, in an opinion by Judge Stephen Lindley joined by Judges John Curran and Nancy Smith:
These consolidated appeals arise from four separate actions commenced in response to the mass shooting on May 14, 2022 at a grocery store in a predominately Black neighborhood in Buffalo. The shooter, a teenager from the Southern Tier of New York, spent months planning the attack and was motivated by the Great Replacement Theory, which posits that white populations in Western countries are being deliberately replaced by non-white immigrants and people of color….
[S]urvivors of the attack and family members of the victims … [sued various parties, including] the so-called "social media defendants," i.e., [the companies responsible for Facebook, Instagram, Snap, Google, YouTube, Discord, Reddit, Twitch, Amazon, and 4chan], all of whom have social media platforms that were used by the shooter at some point before or during the attack…. According to plaintiffs, the social media platforms in question are defectively designed to include content-recommendation algorithms that fed a steady stream of racist and violent content to the shooter, who over time became motivated to kill Black people.
Plaintiffs further allege that the content-recommendation algorithms addicted the shooter to the social media defendants' platforms, resulting in his isolation and radicalization, and that the platforms were designed to stimulate engagement by exploiting the neurological vulnerabilities of users like the shooter and thereby maximize profits…. According to plaintiffs, the addictive features of the social media platforms include "badges," "streaks," "trophies," and "emojis" given to frequent users, thereby fueling engagement. The shooter's addiction to those platforms, the theory goes, ultimately caused him to commit mass murder….
Plaintiffs concede that, despite its abhorrent nature, the racist content consumed by the shooter on the Internet is constitutionally protected speech under the First Amendment, and that the social media defendants cannot be held liable for publishing such content. Plaintiffs further concede that, pursuant to section 230, the social media defendants cannot be held liable merely because the shooter was motivated by racist and violent third-party content published on their platforms. According to plaintiffs, however, the social media defendants are not entitled to protection under section 230 because the complaints seek to hold them liable as product designers, not as publishers of third-party content.
The majority concluded that section 230 immunity protects the defendants against the plaintiffs' claims:
Section 230 provides, in pertinent part, that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." … "By its plain language, [section 230] creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service." …
Based on our reading of the complaints, we conclude that plaintiffs seek to hold the social media defendants liable as publishers of third-party content. We further conclude that the content-recommendation algorithms used by some of the social media defendants do not deprive those defendants of their status as publishers of third-party content. It follows that plaintiffs' tort causes of action against the social media defendants are barred by section 230….
If content-recommendation algorithms transform third-party content into first-party content, … then Internet service providers using content-recommendation algorithms (including Facebook, Instagram, YouTube, TikTok, Google, and X) would be subject to liability for every defamatory statement made by third parties on their platforms. That would be contrary to the express purpose of section 230, which was to legislatively overrule Stratton Oakmont, Inc. v Prodigy Servs. Co. (N.Y. trial ct. 1995), where "an Internet service provider was found liable for defamatory statements posted by third parties because it had voluntarily screened and edited some offensive content, and so was considered a 'publisher.'" …
In any event, even if we were to … conclude that the social media defendants engaged in first-party speech by recommending to the shooter racist content posted by third parties, it stands to reason that such speech ("expressive activity" as described by the Third Circuit) is protected by the First Amendment under Moody v. NetChoice, LLC (2024).…
In the broader context, the dissenters accept plaintiffs' assertion that these actions are about the shooter's "addiction" to social media platforms, wholly unrelated to third-party speech or content. We come to a different conclusion. As we read them, the complaints, from beginning to end, explicitly seek to hold the social media defendants liable for the racist and violent content displayed to the shooter on the various social media platforms. Plaintiffs do not allege, and could not plausibly allege, that the shooter would have murdered Black people had he become addicted to anodyne content, such as cooking tutorials or cat videos. {It cannot reasonably be concluded that the allegedly addictive features of the social media platforms (regardless of content) caused the shooter to commit mass murder, especially considering the intervening criminal acts by the shooter, which were … "not foreseeable in the normal course of events" and therefore broke the causal chain.}
Instead, plaintiffs' theory of harm rests on the premise that the platforms of the social media defendants were defectively designed because they failed to filter, prioritize, or label content in a manner that would have prevented the shooter's radicalization. Given that plaintiffs' allegations depend on the content of the material the shooter consumed on the Internet, their tort causes of action against the social media defendants are "inextricably intertwined" with the social media defendants' role as publishers of third-party content…. It was the shooter's addiction to white supremacy content, not to social media in general, that allegedly caused him to become radicalized and violent….
Judges Tracey Bannister and Henry Nowak dissented; an excerpt:
"[W]hy do I always have trouble putting my phone down at night? … It's 2 in the morning … I should be sleeping … I'm a literal addict to my phone[.] I can't stop cons[u]ming." These are the words of a teenager who, on May 14, 2022, drove more than 200 miles to Buffalo to shoot and kill 10 people and injure three more at a grocery store in the heart of a predominantly Black community.
Plaintiffs in these consolidated appeals allege that the shooter did so only after years of exposure to the online platforms of the so-called "social media defendants"—… platforms that, according to plaintiffs, were defectively designed. Plaintiffs allege that defendants intentionally designed their platforms to be addictive, failed to provide basic safeguards for those most susceptible to addiction—minors—and failed to warn the public of the risk of addiction. According to plaintiffs, defendants' platforms did precisely what they were designed to do—they targeted and addicted minor users to maximize their engagement. Plaintiffs allege that the shooter became more isolated and reclusive as a result of his social media use and addiction, and that his addiction, combined with his age and gender, left him particularly susceptible to radicalization and violence—culminating in the tragedy in Buffalo….
[W]e reject the foundation upon which the majority's opinion is built—that plaintiffs' causes of action necessarily seek to hold defendants responsible for radicalizing the shooter given their status "as the publisher[s] or speaker[s] of any information provided by another information content provider," i.e., that plaintiffs only seek to hold defendants liable for the third-party content the shooter viewed. If that were the only allegation raised by plaintiffs, we would agree with the majority. But it is not.
The operative complaints … also allege that defendants' platforms are "products" subject to strict products liability that are addictive—not based upon the third-party content they show but because of the inherent nature of their design. Specifically, plaintiffs allege that defendants' platforms: "prey upon young users' desire for validation and need for social comparison," "lack effective mechanisms … to restrict minors' usage of the product," have "inadequate parental controls" and age verification tools that facilitate unfettered usage of the products, and "intentionally place[ ] obstacles to discourage cessation" of the applications. Plaintiffs allege that the various platforms "send push notifications and messages throughout the night, prompting children to re-engage with the apps when they should be sleeping." They further allege that certain products "autoplay" video without requiring the user to affirmatively click on the next video, while others permit the user to "infinite[ly]" scroll, creating a constant stream of media that is difficult to close or leave.
Plaintiffs assert that defendants had a duty to warn the public at large and, in particular, minor users of their platforms and their parents, of the addictive nature of the platforms. They thus claim that defendants could have utilized reasonable alternate designs, including: eliminating "autoplay" features or creating a "beginning and end to a user's '[f]eed'" to prevent a user from being able to "infinite[ly]" scroll; providing options for users to self-limit time used on a platform; providing effective parental controls; utilizing session time notifications or otherwise removing push notifications that lure the user to re-engage with the application; and "[r]emoving barriers to the deactivation and deletion of accounts." These allegations do not seek to hold defendants liable for any third-party content; rather, they seek to hold defendants liable for failing to provide basic safeguards to reasonably limit the addictive features of their social media platforms, particularly with respect to minor users….
The conduct at issue in this case is far from any editorial or publishing decision; defendants utilize functions, such as machine learning algorithms, to push specific content on specific individuals based upon what is most apt to keep those specific users on the platform. Some receive cooking videos or videos of puppies, while others receive white nationalist vitriol, each group entirely ignorant of the content foisted upon the other. Such conduct does not "maintain the robust nature of Internet communication" or "preserve the vibrant and competitive free market that presently exists for the Internet" contemplated by the protections of immunity but, rather, only serves to further silo, divide and isolate end users by force-feeding them specific, curated content designed to maximize engagement.
The majority concludes, based upon Moody, that even if plaintiffs seek to hold defendants liable for their own first-party content, such conduct is protected by the First Amendment. We disagree…. Government-imposed content moderation laws that specifically prohibit social media companies from exercising their right to engage in content moderation is a far cry from private citizens seeking to hold private actors responsible for their defective products in tort.
Such a vast expansion of First Amendment jurisprudence cannot be overstated. Taken to its furthest extent, the majority essentially concludes that every defendant would be immune from all state law tort claims involving speech or expressive activity. If the majority is correct, there could never be state tort liability for failing to warn of the potential risks associated with a product, for insisting upon a warning would be state-compelled speech in violation of the First Amendment. Nor could there ever be liability for failing to obtain a patient's informed consent in a medical malpractice action—for the defendant physician's explanation of the procedure, its alternatives, and the reasonably foreseeable risks and benefits of each proposed course of action—necessarily implicates the defendant physician's First Amendment rights. That simply cannot be the case.
My sense is that the majority has it right: The plaintiffs' theory can't be just that social media has addictive features that cause harm apart from its content (e.g., because it tempts people away from sleeping). Rather, it's that social media has features that help promote harmful but constitutionally protected speech, whether it's "white nationalist vitriol" or more broadly speech that "silo[s], divide[s] and isolate[s] end users." Moreover, holding a company liable for providing "content designed to maximize engagement" is indeed holding it liable as a publisher of that third-party content, even when the content is "specific" and "curated." But in any event, this struck me as an interesting and important case, both because it's an appellate precedent (albeit from an intermediate state appellate court) and because of the 3-2 split among the judges.
UPDATE: See also this detailed post from Prof. Eric Goldman (Technology & Marketing Law Blog).
If we ever relax platform immunity we need a heightened pleading standard rather than allowing general allegations of negligent algorithms.
Immunity is unauthorized industrial policy by the lawyer profession. Immunity will grow the enterprise. Liability will shrink it. The internet is mature and the biggest enterprise of all. Its immunity is unfair. The biggest place to end immunity is the government, the courts, the lawyer profession. Shrink it into oblivion by holding it accountable for the $trillions in damages it causes every year. Immunity justifies retaliatory violence in formal logic, supreme over all laws, constitutions, and ratified treaties. I plan to argue that in an amicus brief in the Luigi Mangione trial. The victim killed 50,000 people a year and was legally immune. Formal logic made his murder almost a moral duty.
Let them argue they are common carriers, like a phone line, not responsible for the planning of a bank robbery over the phone.
"I plan to argue that in an amicus brief in the Luigi Mongioni trial."
The one stand out observation in the public reaction to Luigi's pistol work is a very strong and wide dislike of ObamaCare.
Because the AP Style Guide says so and you simply don’t question gods of progressivism. They are infallible—kind of like the pope.
Just don’t tell progressives they worship a god and don’t know it. They will deny it even under threat of death. They believe they are capable of independent thought.
My previous post is not a reply to you.
My sincere apologies.
I am using a new browser with features I am not completely familiar with.
Maybe I’m just an idiot—certainly debatable. I intensely disagree with you. And when I do, I will address those concerns directly. I hope not to post a non sequitur like this one again.
A true response to your post!!!
I love how progs thought Mangione was a modern-day Robin Hood when he was on their side. But the second an anti-Obamacare quote comes, he is Lucifer committing an act of murder. (Oh cool, we can blame people who had nothing to do with it.)
Progs: he is on my side—freedom fighter
Progs: he is on the other side—murderer.
Conservatives (people with a sense of reflection):
Conservatives: he is on their side—murderer.
Conservatives: he is on our side—murderer.
He walked up and shot a guy in cold blood. He is a murderer. Only you guys think murder is relative.
Some homicides are justifiable. Luigi was defending others, who were in imminent danger of having life-saving care denied and of being killed for the enrichment of the victim and of his employer.
I am adding legal immunity as a justification from formal logic with its 100% certainty and with its supremacy over all laws, constitutions, and ratified treaties.
Self-Defense: imminent deadly threat to self.
Defense of Others: imminent deadly threat to another.
Defense of Home: armed intruder inside the dwelling.
Law Enforcement: deadly force to prevent grave harm or to stop a fleeing felon.
Preventing a Crime: stopping a murder, rape, or kidnapping.
War (Combat Immunity): lawful killings under the laws of war.
Execution of Sentence: state-sanctioned capital punishment.
"Immunity is unauthorized industrial policy by the lawyer profession."
No. It is simply codification of what is accepted as common sense law in regards to all other 'industries' and associations. For example, cases of a hotel or convention center being held 'liable' for the speech of those who hire halls in which to speak are immediately tossed out on their ear because they are recognized to be prima facie invalid. This is true when it comes to countless actual crimes and countless personal or professional forms of associations.
Put simply, it is well established law that the sheer fact you host a party does not - by itself - make you responsible for crimes committed by others at your party. s230 simply makes this legal standard EXPLICIT because so many people FAIL to accurately apply this common, rights-defending standard when it comes to "social media" and the like.
But one thanks SC for identifying the fact he FEELS 'Guilt by Association' is not only "fair" (!) but additionally FEELS the State's refusal to treat people guilty of crimes SOLELY because of who they associate with 'justifies violence' against those whom the State 'fails' to PUNISH for their associations (!!).
Talk about the very definition of evil!
Hi, Rad. I do not feel guilt by association is fair. It is a jury question. All immune entities and virtually immune parties should be open season for violent retaliation under formal logic, which is supreme over all laws, constitutions, and ratified treaties. Formal logic has 100% certainty, which even the laws of physics do not have.
The contrapositive of a true assertion is always true. "All bats are mammals" is true (if A then B is true). "This animal is not a mammal, so it cannot be a bat" is true with 100% certainty. (The contrapositive, if not B then not A, must be true.) Civil and criminal liability replaced endless cycles of violent revenge. This rule of law made civilization possible.
This is a brief review of immunity and liability historical experiments. The results are all in one direction.
1. Vaccine Industry
Legal Immunity Gained: National Childhood Vaccine Injury Act (1986) granted manufacturers legal immunity from most lawsuits.
Growth: Massive expansion in R&D, introduction of many new vaccines, and consolidation of manufacturers.
Partial Loss of Immunity:
COVID-19 vaccines: Immunity under PREP Act, but political backlash and efforts to lift protections.
Broader anti-vaccine movement threatens policy protections.
Contraction Signs: Some companies are reducing vaccine research investment or exiting certain vaccine markets (e.g., GlaxoSmithKline withdrawing from dengue).
2. Tobacco Industry
De Facto Immunity: For most of the 20th century, courts dismissed most tobacco lawsuits under "assumption of risk."
Growth: Enormous profitability; tobacco companies became global empires.
Loss of Immunity:
1990s saw lawsuits succeed (e.g., State AG settlements, Master Settlement Agreement in 1998).
Internal documents exposed knowledge of harms.
Shrinkage:
U.S. smoking rates and profits fell.
Heavy taxation, regulation, and massive settlements (~$206 billion).
3. Railroads (19th Century U.S.)
Early Legal Privilege: Heavily subsidized and protected from certain liabilities (e.g., rate regulation, liability caps).
Growth: Became the dominant transportation mode.
Loss of Legal Favor: Interstate Commerce Act (1887), and antitrust enforcement, especially in early 20th century.
Shrinkage: With legal regulation and competition (trucking, air), the industry declined, with massive bankruptcies in the 1960s–70s (e.g., Penn Central).
4. Medical Device Industry
Temporary Immunity: Preemption under Riegel v. Medtronic (2008) gave immunity for FDA-approved Class III devices.
Growth: Surge in high-risk device development and FDA fast-tracking.
Erosion of Immunity: Medtronic v. Lohr (1996) and subsequent challenges allowed more lawsuits for negligence and failure to warn.
Contraction: Legal costs, recalls, and liability pressures caused smaller firms to exit, and consolidation into a few giants.
5. Private Military Contractors
Post-9/11 Immunity: U.S. granted contractors like Blackwater significant legal shields for overseas conduct.
Growth: Boomed during Iraq/Afghanistan wars.
Loss of Immunity: Criminal and civil lawsuits (e.g., Nisour Square massacre), plus Congressional scrutiny.
Decline: Major firms renamed or exited (e.g., Blackwater → Xe → Academi), contracts dropped, legal costs soared.
6. Gun Industry
Legal Immunity Gained: Protection of Lawful Commerce in Arms Act (PLCAA, 2005).
Growth: Surge in gun manufacturing and sales, esp. post-2008.
Partial Erosion: Some states (e.g., New York, California) use "public nuisance" law to circumvent PLCAA.
Early Contraction Signs: Increasing insurance costs, pressure from banks, and exit of certain retailers (e.g., Dick’s).
7. Tech Platforms (e.g., Facebook, YouTube)
Legal Immunity Gained: Section 230 of the Communications Decency Act (1996).
Massive Growth: Enabled social media and content platforms to scale globally.
Ongoing Challenge to Immunity: Bipartisan criticism, lawsuits (e.g., Gonzalez v. Google), legislative reform proposals.
Potential Shrinkage: If Section 230 is repealed or narrowed, exposure to defamation, content harms, and product liability could shrink industry dominance.
Sigh. I should have read further down the posts here. SC is a nonsense troll. Got it.
Hi, Rad. That is a personal remark. Fallacy of Irrelevance. You are not a lawyer. I have no problem with you. I support your advocacy for common sense. The lawyer profession would lose income if your view were to be accepted.
"Put simply, it is well established law that the sheer fact you host a party does not - by itself - make you responsible for crimes committed by others at your party." I recommend the movie Project X. "Best fucking movie ever made," said a teen dude.
https://www.youtube.com/shorts/cndrU_7-hx4
1. Types of Host Liability
A. Social Host Liability (Alcohol-Related)
This applies when a private individual hosts a gathering and serves alcohol.
Adults serving to minors: Hosts are almost always liable for any injuries or damages caused by an intoxicated minor guest.
Serving to obviously intoxicated guests: Liability may arise if the host continues serving an intoxicated person who then causes harm (e.g., drunk driving, assault).
Statutes vary by state: Some states explicitly allow lawsuits against hosts; others do not.
Example (PA):
In Kleine v. McCullough, Pennsylvania courts recognized liability for knowingly serving alcohol to a visibly intoxicated minor.
B. Premises Liability
Hosts may be liable for injuries on their property.
Guests (licensees): Hosts must warn of hidden dangers they know about (e.g., broken steps, holes in the yard).
Invitees (e.g., caterers): Higher duty — must make the property reasonably safe.
Trespassers: Generally no duty, except not to willfully harm.
C. Negligent Supervision
Hosts may be liable for failing to supervise guests or prevent foreseeable harm.
Failing to stop a fight
Letting children access dangerous areas
Allowing dangerous stunts (e.g., jumping from a roof into a pool)
The case is ridiculous. The defendant should be executed on the spot. The death penalty should be mandatory with no judge discretion. It should be reviewed by investigative experts for a bad verdict not by know nothing appellate courts for legal error. The death penalty appellate business is a $multi-billion business. It is a scam. Arrest all its perpetrators including responsible Supreme Court Justices. The error rate is low. A car crashed, ban driving. Someone fell, struck head, and died, ban walking. That's the logic behind this scam, committing the Exception Fallacy.
The plaintiff lawyers should have to pay costs from personal assets. Like Rule 11. 22 NYCRR § 130-1.1 — Sanctions for Frivolous Conduct
This rule allows New York courts to impose financial sanctions or award costs (including attorney's fees) when a party engages in frivolous conduct in civil litigation.
Key Provisions:
1. Authority to Impose Sanctions
A court may award costs and impose financial sanctions on any party or attorney engaging in "frivolous conduct" (130-1.1[a]).
2. Definition of Frivolous Conduct (130-1.1[c])
Conduct is frivolous if:
(1) it is completely without merit in law and cannot be supported by a reasonable argument for an extension or modification of existing law;
(2) it is undertaken primarily to delay or prolong the resolution of the litigation, or to harass or injure another; or
(3) it asserts material factual statements that are false.
3. Sanctions Amount and Procedure
Sanctions are discretionary and can be up to $10,000 per event (130-1.2).
The court must give the attorney or party an opportunity to be heard before imposing sanctions (130-1.1[d]).
I think Zuckerberg leaves something to be desired too, but that seems a bit harsh.
Why does the opinion capitalize the word black but not the word white?
I'm guessing because the black adjective modifies a specific, particular thing (predominately Black neighborhood) while the white adjective modifies a general group (white populations in Western countries).
A neighborhood is a specific thing now? It's not a general group anymore?
Why do you care?
It seems like a way for the judges to virtue signal in the opinion, similar to when they refer to a transgender person using their preferred pronouns. But maybe there is a style guide that they are obligated to follow that requires this. I've seen it more and more since 2020.
I love how you think elevating one descriptive word to proper-noun status is not, in and of itself, a microaggression. No doubt you think “White neighborhood” would not only be a microaggression, but a full-throated admission of overt racism.
“How dare you think being a ‘White’ neighborhood is the same as a ‘Black’ neighborhood.”
It is, and will always be, about your feelz. You can’t logically argue white should not be a proper noun while simultaneously arguing black should be. What you have left is I “feel” it should be.
"Why do you care?"
DN here is seriously asking why someone should "care" about racism?! Wow.
Hi, David. Racism and all other -isms and -phobias are mostly true, most of the time. They are folk statistics. They also change over time. Very dark-skinned African immigrants outperformed whites in the 2010 census. Now the stereotype is that they are the New Koreans. If you spot one, you chase him waving wads of cash to come to your school or to take your job. You are a woke and a denier.
In other examples, there is good reason to fear the homosexual. By their sexual selfishness, they blocked the quarantine of the early AIDS cases. A health worker reporting one faced prison, 10 years and a fine of $250,000. That killed 20 million people. Yea, scary.
Holy shit how did you survive this microaggression?
So it's good for the goose but not for the gander. Got it.
Translation: you're an unprincipled hypocrite and your future opinions will be weighted accordingly.
The distinction was made above.
I personally don't take much issue if you capitalize Black or not.
Y'all are so hungry for something to resent it makes your breed of white guys just a buncha fragile snowflakes and it's hilarious.
Lol, someone touched a live wire!
Because the AP Style Guide says so and you simply don’t question gods of progressivism. They are infallible—kind of like the pope.
Just don’t tell progressives they worship a god and don’t know it. They will deny it even under threat of death. They believe they are capable of independent thought.
You're right. The guidelines reflect the reporting to highlight Black because it makes them a target for aggression. It goes to the truth about Leftists wanting to keep blacks down on the plantation forever. The highlight is to raise blacks up for future mistreatment, same as with Latinos, etc., etc. This is perpetuation of racism by reporters.
You see, it's the opposite, as always, of the purported concern.
The Facebook genocide lawsuit would have been a much better vehicle for this sort of negligence-of-product-design test case than this lawsuit is. Particularly if you argue that selling and operating the product in a foreign country and foreign language isn't necessarily protected by section 230 at all.
Whatever happened to that lawsuit, anyway?
A French court can fine Facebook a trillion dollars. When plaintiffs come to California to execute on the judgment American courts will not recognize it if Section 230 immunity would have been available in an American court.
I did not author this post.
I simply employed an algorithm to selectively display portions of third-party content (Webster's Unabridged) in an order my algorithm predicted would best promote engagement with visitors to this site.
"Moreover, holding a company liable for providing "content designed to maximize engagement" is indeed holding it liable as a publisher of that third-party content, even when the content is "specific" and "curated.""
While this lawsuit seems absurd for a number of reasons, this overview doesn't sound right to me.
Section 230 would certainly protect the published content from liability; however the means, methods, and composition of the content being delivered (curated and designed to maximize engagement) would seem to be better placed under patent law and I don't know that Section 230 addresses that issue.
As an example (hoping it helps, but also aware I might just be muddying things more): Back in the '80s, subliminal messaging was all the rage & fear. While nobody thought "Drink Coca Cola" was a bad (or at least illegal) message, what if subliminal messaging truly worked and made people crave it? At the time, there was hubbub about trying to ban subliminal messages over fear of companies using them to manipulate the market. They wouldn't have banned the copy (e.g. "Drink Coca Cola"); they would have banned the means, methods, and composition of delivery. Subliminal messaging was largely debunked (or at the very least greatly overstated) and the fear faded, and it is my guess that addictive social media is going to go the same route. In the meantime, I just don't know that Section 230 is the best defense against the means, methods, and composition.
"better placed under patent law"
You choosing what slogan will attract more people to support your cause is NOT a "patent" issue. HOW you chose your slogan - tea leaves, tarot cards, a psychic, market research, pulling it out of a hat, etc - is NOT a "patent" issue. HOW you get your message to people is NOT a "patent" issue.
My point was that it isn't a Section 230 issue either. The process and method by which you put info together and decide what to present is patentable. For better or (much more likely) worse, companies have been patenting all manner of virtual and digital processes for decades (e.g. virtual "shopping carts", on-channel/on-time for TV guide displays, etc.). Doing so in a way that is addictive, or that feeds more Section 230-protected content (no matter how vile), is not a crime, but that doesn't mean the claim should be dismissed based on Section 230.
It might be, if there's some unique algorithm. It isn't inherently so. But that's not really relevant to the issue. It's still editorial decisions.
It's more of a 1A issue.
The headline to this article evokes images of Buffalo hunting out west
To summarize the Plaintiffs' theory:
The murderer was addicted to social media, the algorithms recommended Badthink®™, and because he killed on the basis of Badthink®™, the social media companies are liable for wrongful death.
Can't tell from the opinion whether the case is still pending below on claims against other defendants, and so this may be a non-final order. The 2-judge dissent on a question of law would give plaintiffs an appeal as of right to the NY Court of Appeals if the order is a final determination of the action. If it's not, then plaintiffs can seek leave to appeal from the Appellate Division. Either way, this isn't over yet, although I think the majority got the 230 question right.
Does anyone have any responsibility for their own actions?
Yes, the shakedown is still going on, because in NY, guns (and 2A) bad. "Mean Arms ran a misleading advertising campaign about a magazine lock that they knew was easily removable, making a gun more deadly. The court recognized that deceptive advertising has real-life consequences, and that justice must reach the companies whose choices enabled the tragedy, killing our community members and harming our community as a whole." GIFFORDS Law Center, https://giffords.org/press-release/2025/07/buffalo-mass-shooting-victims-lawsuit-against-gun-company-mean-arms-goes-forward/
Exactly what was the advertisement in question?
Also, can I just say how stupid the whole "addictive" thing is? It's a stupid and bad metaphor. Social media platforms — and other non-biological products — are not actually "addictive" in the first place, and the claim that a product is "addictive" boils down to nothing more than the claim that its manufacturer made it too much fun to use. Which is not a legitimate cause of action. What manufacturer doesn't aim to make people like its products? And nobody needs to be "warned" about that.