Australia Offers a Terrifying Vision of an Internet Without Section 230
What happens when YouTube and Facebook can be held liable for their users’ speech?

The former deputy premier of New South Wales, John Barilaro, won an unlikely battle against Google earlier this month. The Federal Court of Australia ruled that Google had defamed the now-retired politician by failing to remove videos published by the wildly popular YouTube journalist and commentator FriendlyJordies, whose real name is Jordan Shanks-Markovina.
Barilaro's victory attracted worldwide media coverage, not least because of the unprecedented damages awarded: Google was ordered to pay AU$715,000 (roughly US$500,000) and referred for a potential contempt prosecution. Ruling for the plaintiff, Justice Steven Rares accused Google of assisting Shanks-Markovina in the "vitriolic, obsessional, hate filled cyberbullying and harassment of Mr Barilaro."
Shanks-Markovina has repeatedly contested this characterization, arguing that his videos represent legitimate public interest journalism. The FriendlyJordies channel routinely investigates matters of grave national importance, such as water theft, police misconduct, and government corruption.
A combination of slick production values and Shanks-Markovina's acerbic personal style has allowed the FriendlyJordies channel to amass over 600,000 subscribers. This is no small feat, considering that Australia has a population of 25.7 million.
Barilaro could not have pulled this off in the United States. Platforms like YouTube are protected by Section 230 of the Communications Decency Act, which limits their civil and criminal liability for the actions of their users. Australia does not have an equivalent to Section 230.
In 2017, Dylan Voller, an Aboriginal Australian artist and prison reform activist, launched defamation proceedings against three major news outlets—News Corp, Fairfax Media, and the Australian News Channel—over comments posted to their social media accounts.
Voller had a troubled childhood. His teenage years were punctuated with periods of incarceration following convictions for car theft, robbery, and assault. His experiences became national news following the publication of an Australian Broadcasting Corporation investigation into the Northern Territory's youth justice system.
The documentary, titled Australia's Shame, made for harrowing viewing. It showed Voller, at age 17, shackled to a chair and forced to wear a spit hood. Another clip showed a correctional officer strike Voller, then just 14 years old, in the face after minor misbehavior. The footage shocked Australia and provoked national soul-searching.
It also provoked a backlash. Defamatory comments inevitably followed coverage of his case. One falsely claimed that Voller had "brutally bashed a Salvation Army officer." Another accused him of raping and beating an elderly woman.
Voller filed suit. In 2019, the New South Wales Supreme Court ruled in his favor. Without making a determination as to whether the comments were defamatory, it said publications could no longer "turn a blind eye" to defamatory comments, arguing they provide a forum and therefore are responsible for them. The High Court of Australia affirmed this ruling two years later.
The Voller case fundamentally redefined who can be held liable for user comments online, paving the way for Barilaro's victory.
From its inception, the internet has facilitated robust debate, eliminating traditional media gatekeepers. Social media accelerated this trend, facilitated by legislation like Section 230. Without the risk of civil liability, tech companies can allow the unfettered flow of discourse.
As the Voller ruling demonstrates, changing that legal ground limits the avenues for open debate. In the wake of the 2021 High Court ruling, many Australian publications opted to limit the ability of third parties to comment on their articles.
CNN—admittedly a minnow in the Australian media ecosystem—took the most drastic action and deleted its Australian Facebook page entirely.
The ruling even prompted some politicians to reassess their social media presence. Peter Gutwein, who led Tasmania until April of this year, switched his Facebook profile to read-only mode, cutting off a potential communications channel for constituents.
Announcing the change, Gutwein said, "The recent Facebook defamation ruling by the High Court has determined that the page owner is now legally responsible for user comments on posts. We know social media is a 24/7 medium, however, our moderation capabilities are not."
Dan Andrews, the leader of Victoria, suggested that he may follow suit, though that has not yet come to fruition.
And now John Barilaro—a man who was once the second most senior politician in Australia's most populous state—has suppressed criticism from an independent media operation, and in so doing likely made internet companies less willing to help such operations reach an audience in Australia.
When Reason asked Shanks-Markovina for comment, a spokesperson compared the ruling to "making a paper mill responsible for everything that may in the future be printed on their paper," and added: "If Australia continues to go down this path, it essentially means that anyone will be able to submit vexatious complaints to Google about public interest content and it will be in Google's commercial interest to remove the content complained of no matter the merits of the complaint."
Here in the United States, Section 230 has its opponents on both sides of the political aisle. The Australian experience demonstrates how removing the protections enjoyed by internet platforms can chill all sorts of online speech.
The internet can't survive with ONLY the First Amendment protecting it!
The internet is global. The First Amendment (and section 230) only applies to the United States.
But... but... but...
Suffering idiots dancing on the head of a pin to put an end to lying without criminalizing it.
Google just needs to close its Australia office.
The US won't enforce defamation judgments from jurisdictions that don't have at least as many speech protections as the US. (I believe it's the SPEECH Act or something like that).
Heavens to Betsy! Australians might be forced (FORCED!) to do without the wonders of YouTube or Facebook.
The horror, the horror.
Not to worry. I am sure their government will provide equivalent online services, with proper oversight.
Tomayto, Tomahto...
Well you don't need 23 kinds of internet, mate.
That government will do it right, and you will know what you need to know.
But for equity, everyone must use dial up modems.
What about hollow log modems?
Reason's stance on section 230 reveals an explicitly "rights are granted by government" perspective
Actually, legal rights are granted by the government you live under. Natural rights are survival of the fittest. The greatest innovation of the United States was to establish individual rights the government could not take away. The Bill of Rights was unique in human history.
From its inception, the internet has facilitated robust debate, eliminating traditional media gatekeepers. Social media accelerated this trend, facilitated by legislation like Section 230. Without the risk of civil liability, tech companies can allow the unfettered flow of discourse.
Is someone on a different internet than I am?
Shut up and watch the cake being force baked.
Can't a company just put up a disclaimer like "views expressed by users and commenters are their own, and do not necessarily reflect the views of Megacorp Corp, its subsidiaries, affiliates, or officers"?
I'm trying to figure out how other free-speechy kind of entities have avoided such terrifying visions by not being liable for stuff going on in their midst. You know, what with a first amendment and all.
In a sane world this would do just fine: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. We reserve the right to remove any illegal content posted by users. Comments do not represent the views of (insert company name here).
But that’s not enough to protect my sensitive eyes from horrible words.
Control freaks often being most concerned about you not recognizing, much less anyone else acknowledging their control freakiness.
I'm old enough to just barely remember the times before the internet. And I can tell you I was robustly called a cunt much less frequently in those dark days.
Only because nobody knew for certain.
Is someone on a different internet than I am?
I was about to post "different planet".
Apples and Oranges:
My discontent with section 230 is that it allows BIASED moderation of content. Aside from removing anything explicitly illegal, you should be able to make 2 choices: Free-for-All mode or moderate and risk being sued.
If you, as a platform, want to moderate what people say, you should be liable. If you truly support the free exchange of ideas (even mean or bad ones), you have to let things go that you don’t like.
That's terrible as policy and on principle. People cannot be up moderating 24-7. Small websites would immolate in lawsuits the moment the owners went home for the night.
It should be possible to moderate, say, a kid-friendly board without getting sued out of existence because someone posted something 5 minutes after you stopped paying attention for the evening.
In your world, a website either needs 24h moderation, or require posts to be vetted before they are posted (which is even more work-hour intensive). Large companies could do that - but you'd basically kill comments on anyone smaller than facebook who had any interest in not being a cesspit.
Moderation does not make you responsible for what other people write. That's just nonsense.
Here's how WIRED, the Magazine of... I don't know what Wired is any more... but I think they were the flagship publication of Internet Life at one point... here's how WIRED views that pesky freedom of speech:
If you think Journalism is your first amendment friend, or as NPR used to say it back in the 90s, "The Gatekeepers of the First Amendment" then... wait... "gatekeepers of the first amendment".
Oh, Jesus, I think I finally get that statement now. They are indeed the "gatekeepers of the first amendment".
"Sorry bub, you don't get to speak here"
Sure, Wired, sure.
Fuck Wired. And NPR.
If you really want to see the decline of a once pro-freedom publication, check out Ars Technica after the dual Trump and COVID panics.
Ars is a progressive shit-show
Their stable of covid cowards rivals the washington post.
Sandberg (allegedly) stealing Facebook corporate funds to plan her wedding is just a distraction, and okay because Zuckerberg spent corporate funds to burnish his public image, too, according to Bloomberg:
https://www.bloomberg.com/news/articles/2022-06-15/sandberg-s-wedding-expenses-distract-from-facebook-scandals
The investigation into possible misappropriation, which Sandberg has denied, threatens to disrupt what had been a well-choreographed departure and could dent Sandberg’s prospects as a potential chief executive or Senate candidate. It could also, as the Journal points out, lead to Securities and Exchange Commission penalties if Sandberg’s personal expenses weren’t properly disclosed. Yet the focus on alleged wedding planning, if it indeed played a role in her departure, should feel a bit strange to anyone who has paid attention to the company’s well-established policy of spending extreme sums of money to ensure the comfort and preserve the reputations of its senior executives. It smacks of either a sexist double standard... (emphasis mine)
Yeah, no... stealing is still wrong.
(I distinctly remember being told in high school to consult primary sources to be most accurate.)
I don't understand the point here. Is he saying that in High School he learned that Primary Sources are most accurate, and because Trump is inaccurate he's not a Primary Source? Is he saying that he was taught in High School to always dig down into what Primary Sources said to confirm?
The former is bizarre, but I'm really not clear that that's not what he meant. The latter is reasonable, but people posting their own thoughts isn't journalism and so these are different questions.
Look, no right is unlimited. Thus, every right is only exactly what I say it is, damn it.
Zuck needs to be worried about the stock price of Meta; it's at about 42% of its September 2021 value. All those old ladies posting pictures of their cats and grandchildren have moved on and will not be coming back.
Lol
>>tech companies can allow the unfettered flow of discourse.
they *can* ...
America Offers a Terrifying Vision of an Internet With Section 230's Good Samaritan Clause.
What happens when YouTube and Facebook can censor presidents, academic journals and scientists?
Moderating in good faith!
Duh. Social justice and equity.
Just like in the Soviet Union.
The quiet part is that everybody will be equally destitute and hungry (except for party functionaries).
And equally fucked.
If China's or Russia's leading private Internet company banned opposition political candidates, with encouragement from the government, it would rightly be decried as oppression.
Because in China or Russia it would be dictatorship, but in places like the US or Ukraine it's totes protecting democracy...
Not at all. Democrats have openly admired and desired the level of control Chinese government has over information and the economy.
What happens when YouTube and Facebook can censor presidents, academic journals and scientists?
And why on earth would they need to? They're IMMUNE from liability if those scientific journals spew misinformation! It's unfettered out there thanks to section 230! UNFETTERED, DO YOU HEAR ME! SPEECH IS UNFETTERED!
"What happens when YouTube and Facebook can censor presidents..."
Name me ONE person who has an internet connection today, who can NOT find and access ALL the lies of the POTUS, AND of the ex-POTUS, that they can stand to read? PLEASE?
The Mammary-Fuhrer "fix": If FacePooooo "publishes" some lies written by The Donald... We must sue the socks off of FacePooooo!!! Obviously!!! THAT will incentivize us all to write more responsibly, right?!?!?
So unless someone is banned completely from everything and everywhere it isn't censorship?
The Soviets didn't practice censorship because dissidents could still secretly print woodblock political tracts in food cellars?
Mighty authoritarian view on speech you have there, Shillsy.
So because You (Oh Perfect One) can postulate extremes that do NOT get ANYWHERE near existing in the USA today... This means that it is a GOOD thing to punish "Party A" for the writings of "Party B"?
Mammary-Mammary-Necrophilia-Fuhrer... Perfect Double-Mammaried Marxist One... My "punishment boner" is MUCH smaller than Your Perfect Punishment Clitoris! Yet... If this is the ONLY way that You (Oh Perfect One) can learn the true meanings of fundamental, basic justice principles... Then I hope that they punish YOU extremely severely and sternly... For what I have written! THEN You (Oh Perfect One) might finally learn a DAMNED thing! (I know, I am asking for the moon and the sky here. Perfect Ones steadfastly REFUSE to learn!)
Do you even know what you wrote?
Do you even know how utterly stupid and evil You are, perfect Bitch?
Listen carefully now, read carefully... Bitches who advocate punishing "Party A" for the writings of "Party B"? They are EVIL bitches!!! Satanic, self-righteous, power-hungry NAZI bitches!
So then "Party A" is you, your Big Lie and the Jan 6 committee?
"Party A" is the evil Demon-Craps who brain-controlled The Donald into telling His Big Lie. Because of what the evil Demon-Craps did... We should BURN ALL OF THE WITCHES, such as Mammary-Necrophilia-Fuhrer!
(Random inputs processed to create random outputs... "Justice" per Mammary-Necrophilia-Fuhrer!)
Reason writers applaud
It shouldn't come down to "either or." No reason why, like universities, some outlets couldn't support only one brand of speech - excluding progressives or conservatives - while other outlets welcome all speech no matter how insulting, disgusting or wrong-thinking it is.
And on a totally unrelated note
Whistleblower Docs Reveal Deep State Desire to Deputize Big Tech as Speech Police
https://weingarten.substack.com/p/whistleblower-docs-reveal-deep-state
"...operationalizing public-private partnership between DHS and Twitter..."
Like polygamists who live on the UT-AZ state line, when authorities come looking, they can conveniently step across the border.
First Amendment violations? Heavens no, that was the private partner acting on their own accord.
It's the fascist playbook: government and the party control the economy and speech (as in socialism), but they don't technically own it, meaning they can always shift blame.
In other news, snowflakes around the globe sue makers of paper and smart phones for not preventing someone, somewhere from spreading mean words.
WTF is wrong with people?
They want more government to make you and me behave as they wish.
https://www.facebook.com/turningpointusa/photos/if-you-protest-for-more-government-power-dont-be-shocked-when-that-government-tu/1417185548330157/
And yet another magnificent piece of Reason journalism.
The most significant fact is deliberately omitted.
" . . . defamed the now-retired politician by failing to remove videos published . . . "
Did they refuse to take it down after a request from John Barilaro, or just not take it down on their own?
Absent a court order, why should they take anything down?
Yeah, a request shouldn't be enough. There should be a presumption that it is fair use and/or non-defamatory until proven otherwise, and then you can order Google to take it down. Otherwise you just encourage trolls to submit false claims to get stuff taken down for lulz. (Heck, it's already a problem with non-copyright owners making bogus copyright violation claims to Google in the US, under the DMCA.)
If the platforms do not provide an unfettered flow of information, then why should anyone do anything but laugh when they are hit by liabilities.
The problem is not 230; it is that it is not being enforced. The social media companies are getting the protection of 230, but under 230 they should NOT have protection, since they are not letting everything through but are actively censoring, and doubly so because they are censoring based on viewpoint. Such censoring social media companies should now be subject to liability because they do not meet the prerequisite for protection. If there is any doubt on this under 230, then it should be modified to make it clear that once you start deleting users and posts for anything but the most egregious situations defined in the statute (e.g., postings that are not protected by the First Amendment, in lieu of trying to define exceptions for sexually explicit items, incitement to violence, etc.), you lose protection.
230 allows for moderation, you're just wrong here.
Continuing my comments because I hit Send too quickly - modify the statute as I indicated, and if you delete items outside of that limited authorization, then you lose your 230 insulation from liability.
Where did it all go wrong for Australia? How did they become such thin-skinned cowards?
Trick question [progressive politics]?
Australia has always been this way. It started out as a colony of prison wardens, after all.
Are there still rednecks in Australia? If so, what do they think of all these city fops?
We’re supposed to feel bad that these companies lost a lawsuit?
I don’t feel bad when Putin is attacked. We don’t talk about how "unfair" it is to Putin.
When these companies start being fair to us, we’ll start making an effort to treat them fairly in return. Until then, anything bad for them looks like justice.
But what happens when Putin sues and wins after someone "defames" him?
Then the ghosts of Putin's enemies eat Putin. The end.
Why are we making up stories?
This is why Democrats and Republicans want to get rid of Section 230 (both sides!). Because they want us to be more like authoritarian Australia. A land down under where you are not allowed to criticize government officials.
Or go outside.
As opposed to the government colluding with or outright intimidating those companies right now?
This is why you want to keep Section 230: because you want to be more like Nazi Germany, where corporations censored on behalf of the government.
Anyone who DARES to think even a TINY bit differently than Noy-Soy-Boy-Toy?!?! They do so... 'Cause they want to be more like Nazi Germany!!! Now... Vee must all OBEY Der Noy-Soy-Boy-Toy!!! Mach schnell!!!
Looks like they can just take a nap, then. Mission accomplished.
Let's see if we can draw a direct line from Youtube to the Intelligence Community in its effort to deplatform, demonetize and have other youtubers shut down.
I guess we will have to see how Google responds to the Democrat Members of Congress letter requesting they hide crisis pregnancy centers in their search results to know how unfettered the flow of information is as a result of Section 230.
But that's not, of course, what Section 230 actually enables.
Cubby, Inc. v. CompuServe Inc. (1991) established, without any Congressional action, that tech companies could allow the unfettered flow of free discourse without civil liability. This was prior to and independent of Section 230.
What Section 230 (a part of the pro-censorship Communications Decency Act of 1996) established was that tech companies may fetter the flow of free discourse without incurring civil liability. This came after Stratton Oakmont, Inc. v. Prodigy Services Co. (1995) set the precedent that such fettering made a company liable for what it let through. Faced with the prospect of an entirely censorship-free Internet (as service providers refused to moderate lest they be liable for content), Congress carved out a safe harbor for tech company censorship. The name of that safe harbor for censorship is Section 230.
This.
Which has been pointed out ad infinitum. Yet Reason writers simply must continue with their brown envelope journolisming on behalf of their big tech benefactors.
Reason's just doing its part to advance the cause of totalitarian leftism
It's not censorship if private companies do it, only if the government does it.
How can you not see the value of allowing websites to maintain, say, kid-friendly message boards without being liable the moment someone posts something inappropriate. Especially given the 24h/7d nature of the internet. Do you want your only online speech options to be Google or Facebook, forever?
It's not censorship if private companies do it, only if the government does it.
Bullshit.
private companies
The major platforms are "private companies" only in a highly technical sense. They can function the way that they do only because of the special privileges granted by Section 230; they openly take direction from government regarding what content to allow; and they are in open collusion with each other to suppress anti-government and anti-Democrat content. Private, my ass.
They should be able to maintain "kid friendly" standards without liability as long as the same standards apply equally to everyone.
The same rules should apply to Google and Facebook as to anyone else. I don't see how that would give them a monopoly.
The proposed repeal of Section 230 usually involves "common carrier exemptions". That is, as long as Google doesn't exercise editorial control, they are exempt from liability. That was the original intent behind Section 230.
Furthermore, even if companies like Google and Facebook were to find it impossible to host discussions because of liability concern, I'd say: good, mission accomplished! There is no need for corporations to gain control of user-created content like comments. Comments should be owned by the people who created them.
As for Australia, it doesn't have free speech, so you are confusing an abolition of Section 230 with the absence of free speech protections in Australia.
Pure fear mongering by Reason again. Pure advocacy of crony capitalism by Reason again.
What happens when YouTube and Facebook can be held liable for their users’ speech?
If they do not censor legal speech, then they should not be held liable. If they do, then they should. You're a magazine, or you're the phone company. Pick one.
Well, the problem isn't section 230, but that the sites protected by it don't allow free speech and edit the content. When they do that, beyond removing malevolent or criminal posts, they have assumed responsibility for content. That means they should no longer be protected by section 230 when they control content.
Protection should only accompany the widest possible interpretation of free speech.
There is a very complex set of legal considerations here that are much deeper than the so-called libertarians are seemingly able to consider.
For instance, Instagram is not responsible or "liable" for a user posting child pornography.
However, Instagram is NOT immune based on what Instagram DOES with that child-pornography post (as one legal analyst pointed out).
So for example, user posts child porn and before it's caught or identified as child porn, Instagram's algorithm forwards or promotes that post to certain people's feeds-- now Instagram is potentially liable for promoting child porn. The legal argument being that while Instagram is not responsible for the child porn being posted by a user on their platform, they promoted and/or forwarded that into other people's timelines. The latter is a possible violation making them potentially liable.
Another less inflammatory example would be a photo-sharing site sees a photo from a user and says, "Man, that's great, can we use that in our marketing materials?" and the user agrees. Then it turns out later that the user stole that photograph from a copyrighted source and had no rights to it. NOW the photo-sharing site is potentially liable because it utilized an image for its own business purposes.
Interesting point. A traditional publisher makes things available for anyone to access, or avoid. One unique thing about tech is the targeted distribution, often without a specific request.
On the photo-sharing example: it would seem to me that if the business was as stupid as to phrase it in the blasé way you do, then yeah, probably liable. But if they're sensible, they'd confirm with the user that the user owns the image. I'm not actually sure what happens if the user lies and says they own the image - but this would be analogous to any contracting with someone for use of a copyrighted work that they claim to own but don't actually own.
That said, it's also the case that unless the photo has a registered copyright, there isn't any possible liability, because only registered copyrights can get damage judgments. (The real owner can still demand you stop using it if it's not registered, i believe, but they can't get any monetary relief).
only registered copyrights can get damage judgments.
That is incorrect. Only registered works can get STATUTORY damages. Unregistered works can get only actual damages.
Section 230 is too broad.
It should be modified to protect services from data posted by users IF it will not censor content. If the service chooses to censor (except for illegal activity...which may have resolved this Google vulnerability), then it is controlling the content, and should be liable. Services should choose 230 protection or censorship, but not have it both ways.
"Australia has a population of 25.7 million"
That's only about 5M larger than the NY MSA, so GOOG could just close off access from the entire continent easily enough with only a minor sting to the wallet. If someone VPNs out and tries to sue, GOOG can countersue, legitimately saying the content was illegally accessed on their servers.
I'm sure I don't really understand how this works, but if I create a web site here in the U.S., allow users to add content such as comments, they say something nasty about some Australian POS, how in the hell could Australia possibly do anything to me? They can't. FB, Google, etc could simply pull any physical nexus out of Australia, users could still access the platforms via servers in the U.S. or other countries. Those platforms could simply give the well-deserved middle finger to the Nazis in Australia.
Fine, until somebody in OZ does something to insult a US official, and they decide to arrest him and you, and then trade. For the children, or planet, or something.
That's exactly the point. Right now tech companies are only censoring people on the right
Repealing 230 would mean they'd have to censor everyone equally.
Or reforming it would mean they could not censor people on the right
Social media has been hiding behind 230 for years. They are not the open platforms that 230 is meant to protect. The big guys want protections but also want to be able to censor speech that violates their TOS. They should not be allowed to have it both ways. Sadly, they have been doing it for a long time and getting away with it.
Maybe it would be good for them to get sued a few times.
Censorship is censorship, no matter how you accomplish it. You have to have a government and laws to go after people for what others post. That is not the US government (so far), but it could become that soon. Keep speech free; it is the only answer.
I am disappointed in the comment section more than usual. Apparently most posters on Reason (1) don't have children, (2) don't know anyone who has children, (3) have no qualms with children being regularly exposed to the likes of 4chan (which pretty much every internet discussion site would devolve to without moderation).
These same commenters also seem to have no idea what editorial control actually is.
seem to have no idea what editorial control actually is.
Editorial control should go with liability for content. Immunity from liability should be in exchange for no viewpoint discrimination. Social media platforms want it both ways.
Viewpoint discrimination is not the same as editorial control.
Editorial control requires:
(1) you review all submissions *before* publication
(2) you choose which submissions actually get published.
(3) that creates an obligation to vet submissions ahead of time, because you are making an active choice on what gets disseminated. (And that's why editorial control creates liability).
Moderation is totally different. It happens *after* "publication", which means there's no possibility (much less obligation) to vet posts beforehand, because there is no beforehand. There can be no due diligence when you didn't even know it was posted.
So yes, editorial control implies liability, but moderation (including after-the-fact viewpoint discrimination) is not editorial control, because it assumes no control over the publication process. Social media companies don't want it both ways - they are explicitly not editors or even publishers. Every comment on social media is self-published.
Meanwhile, you're seriously arguing for a world where someone who wants to provide a child-friendly forum either has to allow links to pornographic videos or has to accept liability for missing something defamatory a user wrote (or not knowing it was defamatory in the first place). You're basically shuttering every forum not run by Facebook or Alphabet that isn't okay with being the worst of the worst of 4chan.
Word games and a red herring, trying to have it both ways. Platforms that practice viewpoint discrimination should not enjoy immunity from liability.
They should be immune, even without section 230. (The court case that made 230 necessary was wrongly decided). Why should a website be liable for a comment some third party made that they aren't even *aware of*? That makes no sense whatsoever. That's not word games, that's a legitimate and highly relevant difference between an editor and a website hosting a forum which allows anyone to post.
But don't obfuscate: you're saying a platform which wants to host a kid-friendly comment section has to accept liability because they reasonably restrict things like (1) pornography and links to pornography, (2) swearing and other foul language, (3) bullying, (4) trolling? Are you sure that's what you want? Even if it's operated by a busy mom who doesn't have time to read all the posts and won't have a clue 99% of the time if someone posts something that's defamatory?
Why should a website be liable for a comment some third party made that they aren't even *aware of*?
They shouldn't be.
you're saying blah blah blah
No, that's not what I'm saying. Red herrings again.