Section 230 Is the Internet's First Amendment. Now Both Republicans and Democrats Want To Take It Away.
From Josh Hawley to Kamala Harris, online free speech is under attack.

Imagine, for a moment, the following series of online exchanges. This isn't a real conversation. But it's the sort of chaotic, revealing, and messy back-and-forth that could spread across the internet on any given day in 2019:
A nonprofit immigrant rights group creates and publishes a Facebook invite for an upcoming event: a rally calling on city cops to stop carrying out sex stings at immigrant-owned massage businesses. An LGBTQ activist shares the invite link on Twitter, adding a note about how transgender and undocumented immigrant sex workers both face especially high rates of abuse.
The activist is retweeted by a number of people. Some of them add additional comments, many of them supportive. Someone is spamming their mentions with rude memes, but the activist doesn't see it because that jerk has already been muted. When a friend points out the new replies, this is upgraded to a block.
A parenting and religion blogger using the handle @ChristianMama96 shares a link to a blog post titled "Biblical Views of Sex and Gender," which she wrote and published on her WordPress blog.
In the comments of Christian Mama's blog post—hosted with a Bluehost plan and a URL purchased via GoDaddy—someone who has been banned from Twitter for violating its misgendering policy is holding court about Caitlyn Jenner. A new commenter calls the first an asshole—a comment Christian Mama deletes because it violates her no-profanity policy.
Christian Mama's blog gets some new readers from Twitter, where the guy whose comment she deleted has been tweeting at her as part of an extended riff on the religious right. (She stays quiet and lets her fans push back, but keeps screenshots for next week's newsletter.)
Meanwhile, the sex worker rights rally that was the focus of the initial Facebook invite is well attended and gets picked up by several media outlets, some of which embed Instagram posts from the event and video that's been uploaded to YouTube. The articles are indexed by search engines and get shared on social media.
In the end, none of the people, tools, or tech companies mentioned gets forced off the internet or hit with lawsuits or criminal charges.
Tomorrow, the event invite in such a scenario might be for a Black Lives Matter rally, a Libertarian Party fundraiser, a Mormon church group outing, an anti-war protest, or a pro-life march. The blogger and newsletter creator might be a podcaster, an Instagram model, a Facebook group moderator, a popular YouTuber, or an indie press website. The Twitter tribes might be arguing about the latest rape allegations against someone powerful, the drug war, Trump's tweets, immigration, internet regulation, or whether a hot dog counts as a sandwich.
These sorts of fictitious exchanges are, probably, no one's ideal of online speech. Like real-world conversations, they are frequently unpredictable, uncontrollable, and frustrating. Which is to say, they are something like the way people actually communicate—online and off—in a world without top-down government control of our every utterance and interaction. People talk, argue, and disagree, sometimes in florid or outrageous terms, sometimes employing flat-out insults. It can be messy and irritating, outrageous and offensive, but also illuminating and informative—the way free speech always is.
At the end of the day, a diverse array of values and causes has space to coexist, and no faction gets to dictate the terms by which the entire internet has to play. It isn't always perfectly comfortable, but online, there is room for everyone.
In the digital realm, that freedom is only possible because of a decades-old provision of the Communications Decency Act, known as Section 230. Signed into law by President Bill Clinton, when both Democrats and Republicans were mostly worried about online indecency, it has enabled the internet to flourish as a cultural and economic force.
Widely misunderstood and misinterpreted, often by those with political ambitions and agendas, Section 230 is, at its core, about making the internet safe for both innovation and individual free speech. It is the internet's First Amendment—possibly better. And it is increasingly threatened by the illiberal right and the regressive left, both of which now argue that Section 230 gives tech industry giants unfair legal protection while enabling political bias and offensive speech.
Ending or amending Section 230 wouldn't make life difficult just for Google, Facebook, Twitter, and the rest of today's biggest online platforms. Eroding the law would seriously jeopardize free speech for everyone, particularly marginalized groups whose ideas don't sit easily with the mainstream. It would almost certainly kill upstarts trying to compete with entrenched tech giants. And it would set dangerous precedents, with ripple effects that extend to economic and cultural areas in the U.S. and around the world.
As Sen. Ron Wyden (D–Ore.), one of the provision's initial sponsors, put it during a March 2018 debate on the Senate floor: "In the absence of Section 230, the internet as we know it would shrivel."
Section 230 is one of the few bulwarks of liberal free speech and radically open discourse in a political era increasingly hostile to both.
The How and Why of Section 230
The Communications Decency Act (CDA) came into being as part of the Telecommunications Act of 1996, and was intended to address "obscene, lewd, lascivious, filthy, or indecent" materials. At the time, mainstream discourse around telecom regulation was primarily concerned with rules regarding telephone companies and TV broadcasts. To the extent lawmakers considered the CDA's effect on the internet, the focus was on curbing the then-new phenomenon of online pornography.
Under the CDA, telecommunications facilities could face criminal charges if they failed to take "good faith, reasonable, effective, and appropriate actions" to stop minors from seeing indecent content. The law faced a First Amendment challenge, and the U.S. Supreme Court in 1997 struck down many of the decency provisions entirely. But an amendment introduced by Rep. Chris Cox (R–Calif.) and Wyden, then in the House of Representatives, survived. That amendment was what we now know as Section 230.
After touting the unprecedented benefits of digital communication, the statute notes in a preamble that "the Internet and other interactive computer services have flourished…with a minimum of government regulation." The legislation states that it's congressional policy "to preserve the vibrant and competitive free market" online.
The point of Section 230 was to protect the openness of online culture while also protecting kids from online smut, and protecting the web at large from being overrun by defamatory, hateful, violent, or otherwise unwanted content. Section 230 would do this by setting up a legal framework to encourage "the development of technologies which maximize user control over what information is received" and removing "disincentives for the development and utilization of blocking and filtering technologies that empower parents." Rather than tailoring the entire internet to be suitable for children or majority sensibilities, Congress would empower companies, families, and individuals to curate their own online experiences.
Section 230 stipulates, in essence, that digital services or platforms and their users are not one and the same and thus shouldn't automatically be held legally liable for each other's speech and conduct.
Which means that practically the entire suite of products we think of as the internet—search engines, social media, online publications with comments sections, Wikis, private message boards, matchmaking apps, job search sites, consumer review tools, digital marketplaces, Airbnb, cloud storage companies, podcast distributors, app stores, GIF clearinghouses, crowdsourced funding platforms, chat tools, email newsletters, online classifieds, video sharing venues, and the vast majority of what makes up our day-to-day digital experience—has benefited from the protections offered by Section 230.
Without it, they would face extraordinary legal liability. A world without Section 230 could sink all but the biggest companies, or force them to severely curtail the speech of their users in order to avoid legal trouble.
There are two parts of Section 230 that give it power. The first specifies:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
The second part says voluntary and "good faith" attempts to filter out or moderate some types of user content don't leave a company on the hook for all user content. Specifically, it protects a right to "restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable."
Without that second provision, internet companies wishing to avoid trouble would be better off making no attempts to police user content, since doing so would open them up to much greater legal liability. "Indecent" and "offensive" material—the stuff social giants at least try to filter or moderate—could proliferate even more than it already does.
Beyond the First Amendment
Santa Clara University law professor Eric Goldman argues that Section 230 is "better than the First Amendment," at least where modern communication and technology are concerned.
"In theory, the First Amendment—the global bellwether protection for free speech—should partially or substantially backfill any reductions in Section 230's coverage," Goldman wrote on his blog recently. "In practice, the First Amendment does no such thing."
To be legally shielded on First Amendment grounds, offline distributors like bookstores and newsstands must be almost entirely ignorant about materials found to be illegal. A store is legally protected so long as its owners don't know about specific offensive material in a publication, even if they know they are stocking the publication. But legal blame can shift if a court determines that owners should have known something was wrong.
And all it can take to reach that "should have known" threshold is an alert that something might be off. Once a distributor is alerted, by anyone, that a work is problematic or that those involved with it have a problematic history, the distributor may be legally liable for the content—possibly as liable as the work's creator and the parties directly responsible for its very existence.
In an analog world—with limited content suppliers and limited means of distribution—this expectation may effectively balance free speech and crime prevention. Because a bookstore cannot hold infinite books, we expect bookstore owners to know what they have in stock. But that expectation doesn't scale to the digital world, where users are continuously uploading content and companies receive notices about thousands (or more) of potentially problematic posts per day.
"Congress passed Section 230 because the First Amendment did not adequately protect large online platforms that processed vast amounts of third-party content," writes Jeff Kosseff in his 2019 book on Section 230, The 26 Words That Created the Internet.
As far back as 1997, courts understood that Section 230 was essential to online innovation and expression. In a world without it, computer service providers "would be faced with ceaseless choices of suppressing controversial speech [or] sustaining prohibitive liability," wrote 4th Circuit Court of Appeals Judge J. Harvie Wilkinson in his decision on Zeran v. America Online, a case involving defamatory messages posted to an AOL message board by a third party.
Some third-party content moderation calls are easy, such as those involving outright threats of violence or child pornography. But others—like allegations of defamation—turn on facts beyond what is immediately obvious. Many require judgments about context that don't translate readily across cultures or cliques.
Without Section 230, any complaint could thus be sufficient to make a company liable for user-created content. Companies would have every incentive to simply take down content or ban any users whom others flagged.
The result would be a dramatically less permissive environment for online speech.
"Section 230 extends the scope of protection for 'intermediaries' more broadly than First Amendment case law alone," writes tech lawyer and Reason contributing editor Mike Godwin in a new book of essays, The Splinters of Our Discontent. And more protection for intermediaries means more free speech for all of us.
The Threat From the Right
The free-speech merits of the law haven't stopped lawmakers on both sides of the aisle from attacking 230, using varying justifications. Some claim it allows companies to be too careless in what they allow. Others claim it encourages "politically correct" censorship by tech-world titans.
"If they're not going to be neutral and fair, if they're going to be biased, we should repeal" Section 230, argued Sen. Ted Cruz (R–Texas) last fall. Cruz has helped popularize the idea that only through increased government control over online speech can speech really be free.
No U.S. politician has been more aggressive in pushing this idea than Sen. Josh Hawley, the 39-year-old Republican from Missouri who ousted Democratic Sen. Claire McCaskill in the 2018 midterms. McCaskill was also prone to fits over Section 230, but mostly as part of a tough-on-crime charade against "online sex trafficking." Hawley has almost single-handedly turned tech industry "arrogance" generally—and Section 230 specifically—into a leading front in the culture war.
Hawley has repeatedly suggested big social media platforms should lose Section 230 protection, claiming (incorrectly) that there's a legal distinction between online publishers and online platforms. According to Hawley—but not the plain text of Section 230 or the many court decisions considering it—platforms lose Section 230 protection if they don't practice political neutrality.
"Twitter is exempt from liability as a 'publisher' because it is allegedly 'a forum for a true diversity of political discourse,'" tweeted Hawley last November, in a call for Congress to investigate the company. "That does not appear to be accurate."
It's actually Hawley's interpretation of the law that is inaccurate. Section 230 protections are simply not conditioned on a company offering "true diversity of political discourse" (whatever that means), or anything like it.
The Communications Decency Act does say that "the Internet and other interactive computer services" can be venues for a "true diversity of political discourse," "cultural development," and "intellectual activity"—but this comes in a preamble to the actual lawmaking part of Section 230. It merely sums up congressional dreams for the web at large.
Nonetheless, Hawley's incorrect characterization of Section 230 has been echoed by numerous Republicans, including President Donald Trump and those on the fringe right. They say digital companies are biased against conservatives and Washington must take action.
In June, Hawley did just that, introducing a bill that would require tech companies to act the way he has been insisting they already must. Under his proposal, dubbed the "Ending Support for Internet Censorship Act," web services of a certain size would have to apply to the Federal Trade Commission (FTC) every two years for Section 230 protection, which would be granted only to companies that could prove perfectly neutral moderation practices.
The bill, in other words, would put the federal government in charge of determining what constitutes political neutrality—and correct modes of expression generally—and would then grant government the power to punish companies that fall short of this subjective ideal. Hawley's bill would replace the imperfect-but-market-driven content moderation practices at private companies with state-backed speech police.
In talking about tech generally, Hawley makes no concession to the conscience rights of entrepreneurs, freedom of association, or personal responsibility—values Republicans have historically harped on when it comes to private enterprise. Rather, Hawley insinuates that permissible technology and business practices should be contingent on their social benefit.
In a May speech titled "The Big Tech Threat," Hawley criticized not only social media content moderation but the entire business model, which he condemned as "hijacking users' neural circuitry to prevent rational decision making." He implied individuals have little free will when confronted with algorithmic wizardry, blaming Facebook for making "our attention spans dull" and killing social and familial bonds.
For Hawley, overhauling Section 230 is part of a larger war on "Big Tech," in which Silicon Valley has been cast in the role once occupied by Hollywood, violent games and music, or "liberal media elites." It's a battle over control of culture, and Hawley wants to use the power of the federal government to advance the conservative cause. (Hawley's office did not respond to multiple requests for comment.)
Hawley isn't the only one. Arizona Republican Rep. Paul Gosar—also parroting Hawley's falsehoods about the publisher/platform distinction—recently introduced his own version of an anti-Section 230 bill. In January, Rep. Louie Gohmert (R–Texas) proposed conditioning Section 230 protection on sites displaying user content in chronological order.
From Fox News host Tucker Carlson, who has railed that "screens are poison," to Trump boosters like American Majority CEO Ned Ryun, Hawley's anti-tech, anti-230 sentiment is gaining traction across the right. Like the social conservatives who once demonized comic books and rap music, contemporary conservatives are siding with censors—but not for the protection of any particular values. They champion regulation as leverage against companies that have made high-profile decisions to suspend or "demonetize" right-leaning content creators.
It's an impulse born of bitterness, nurtured by carefully stoked culture war outrage, and right in line with the trendy illiberalism of the MAGA-era right. The movement has plenty to say about what it wouldn't allow so long as Republicans are wearing the censor hat, but these conservatives are strangely silent about how the same control would be used by a Democratic administration.
The Threat From the Left
What Democrats would do with the power Republicans are seeking matters too, because Democrats have their own plans for Section 230 and just as many excuses for why it needs to be gutted: Russian influence, "hate speech," sex trafficking, online pharmacies, gun violence, "deepfake" videos, and so on.
After Facebook declined to take down a deceptively edited video of House Speaker Nancy Pelosi (D–Calif.), she called Section 230 "a gift" to tech companies that "could be a question mark and in jeopardy" since they were not, in her view, "treating it with the respect that they should."
Senator and presidential candidate Kamala Harris (D–Calif.) has been pushing for the demise of Section 230 since she was California's attorney general. In 2013, she signed on to a group letter asking Congress to revise or repeal the law so that state attorneys general could go after Backpage for its adult ad section.
Like Hawley, Harris and bipartisan crusaders against sex-work ads offered their own misinformation about Section 230, implying again and again that it allows websites to offer blatantly illegal goods and services—including child sex trafficking—with impunity.
Section 230 "permits Internet and tech companies like Backpage.com to profit from the sale of children online" and "gives them immunity in doing so," wrote law professor Mary Leary in The Washington Post two years ago.
"Backpage and its executives purposefully and unlawfully designed Backpage to be the world's top online brothel," Kamala Harris claimed in 2016. Xavier Becerra, California's current attorney general, said in 2017 that the law lets criminals "prey on vulnerable children and profit from sex trafficking without fully facing the consequences of their crimes."
But attorneys like Harris and Becerra should know that nothing in Section 230 protects web platforms from prosecution for federal crimes. When passing Section 230, Congress explicitly exempted federal criminal law from its purview (along with all intellectual property statutes and certain communications privacy laws). If Backpage executives were really guilty of child sex trafficking, the Department of Justice (DOJ) could have brought trafficking charges at any time.
Nor does Section 230 protect web operators who create illegal content or participate directly in illegal activities. Those operators can be prosecuted by the feds, sued in civil court, and are subject to state and local criminal laws.
Websites that are both deep-pocketed and directly criminal are rare, however. And state attorneys general don't get settlement and asset forfeiture money—or their names in the headlines—when the DOJ does the prosecuting. Hence, AGs have pushed hard against Section 230, asking Congress for its destruction once again in 2017.
In 2018, Congress did honor that request (at least in part) by passing a bipartisan and widely lauded bill—known by the shorthand FOSTA—that carved out a special exception to Section 230 where violations of sex trafficking laws are concerned. The Allow States and Victims to Fight Online Sex Trafficking Act also made it a federal crime to "operate an interactive computer service with the intent to promote or facilitate the prostitution of another person."
After the law passed, Craigslist shuttered its personals section almost immediately, explaining that the new law opened "websites to criminal and civil liability when third parties (users) misuse online personals unlawfully." The risk wasn't worth it. Since then, there's been ample evidence of a FOSTA chilling effect on non-criminal speech. But the law has failed to curb forced and underage prostitution, and police, social service agents, and sex workers say that FOSTA has actually made things worse.
Proponents portrayed FOSTA as a one-time edit of Section 230 that was necessary due to the special nature of underage prostitution and the existence of companies that allow online ads for legal sex work. But this year, state attorneys general asked Congress to amend Section 230 for the alleged sake of stopping "opioid sales, ID theft, deep fakes, election meddling, and foreign intrusion."
The range of reasons anti-230 crusaders now give belies the idea that this was ever really about stopping sex trafficking. And they will keep taking bites out of the apple until there's nothing left.
Harris has since suggested that we must "hold social media platforms accountable for the hate infiltrating their platforms." Sen. Mark Warner (D–Va.) said companies could see changes to Section 230 if they don't address political disinformation. Senator and 2020 presidential candidate Amy Klobuchar (D–Minn.) said it "would be a great idea" to punish social media platforms for failing to detect and remove bots.
What the attacks from the right and the left share is a willingness to use whatever excuses resonate—saving children, stopping bias, preventing terrorism, misogyny, and religious intolerance—to ensure more centralized control of online speech. The two sides may couch these excuses in partisan terms that play well with their respective bases, but their aim is essentially the same. And it's one in which only Washington, state prosecutors, and fire-chasing lawyers win.
"There's a lot of nonsense out there about what [Section 230] is all about," Wyden said in a May interview with Recode. In reality, "it's just about the most libertarian, free speech law on the books."
Is it any wonder most politicians hate it?
How 230 Makes Online Argument Possible
Let's go back to the hypothetical online arguments from the beginning. All of the characters involved benefit from Section 230 in crucial ways.
Because of Section 230, digital platforms can permit some sex work content without fearing prosecution. Without it, any information by and about sex workers, including content essential to their health and safety, could trigger takedown alarms from skittish companies worried about running afoul of local vice laws. The same goes for content concerning noncommercial sexual and romantic relationships, as well as stigmatized legal businesses like massage.
Because of Section 230, bloggers and other independent content creators can moderate their own comment sections as they see fit, even if that means deleting content that's allowable elsewhere or vice versa. Big companies like Twitter can set their own moderation rules, too, as well as provide users with customizable filtering tools. Meanwhile, blogging platforms, domain name providers, and web hosting companies needn't worry that all of this puts them at risk.
Without Section 230, all sorts of behind-the-scenes web publishing and speech dissemination tools would be in legal jeopardy. Twitter could lose the many recent lawsuits from former users challenging their respective suspensions in court. And online communities large and small could lose the right to discriminate against disruptive, prurient, or "otherwise objectionable" content.
Without Section 230, companies would thus be more likely to simply delete all user-flagged content, whether the report has merit or not, or at least immediately hide reported content as a review proceeds. It's easy to imagine massive backlogs of challenged content, much of it flagged strategically by bad actors for reasons having nothing to do with either safety or veracity. Silencing one's opponents would be easy.
Finally, because of Section 230, search engines needn't vet the content of every news article they provide links to (the same goes for news aggregation apps and sites like Reddit and Wikipedia). And embedding or sharing someone else's social content doesn't make you legally liable for it.
Without Section 230, finding and sharing information online would be much harder. And a retweet might legally be considered an endorsement, with the shared liability that entails.
Yet for all the protections it provides to readers, writers, academics, shitposters, entrepreneurs, activists, and amateur political pundits of every persuasion, Section 230 has somehow become a political pariah. While antitrust actions, data protection laws, and other regulatory schemes have gained some momentum, abrogating Section 230 is now a central front in the digital culture war.
With both major parties having been thrown into identity crises in the Trump era, it's not surprising they would somehow converge on a pseudo-populist crusade against Big Tech. The political class now wants everyone to believe that the way the U.S. has policed the internet for the past quarter-century—the way that's let so many of the world's biggest internet companies be founded and flourish here, midwifed the participatory media we know today, helped sustain and make visible protest movements from Tunisia to Ferguson, and shined a light on legalized brutality around the world—has actually been lax, immoral, and dangerous. They want us to believe that America's political problems are Facebook's fault rather than their own.
Don't believe them. The future of free speech—and a lot more—may depend on preserving Section 230.
CORRECTION: This post previously misidentified the state Cox represented in Congress.
Next up, we need to do something about Taco Bell, KFC and Pizza Hut only offering Pepsi and not Coke.
They used to be owned by Pepsi, but the real problem is that places like McDonalds and Burger King don't have Mountain Dew
Consuming such products should in fact be encouraged, to the extent that it helps keep certain populations under control. No Internet platform, on the other hand, should be allowed to offer its services to the criminals infesting the Net, to advertisers who promote illegal activities, or to "satirists" who spread mayhem on our college campuses. What is needed, above all, is a massive expansion of legislative and law enforcement efforts in the wake of our nation's leading criminal "parody" case. See the documentation at:
https://raphaelgolbtrial.wordpress.com/
Firms can offer whatever drinks they want. No law against only using one distributor. The problem is that Pepsi and Coca Cola actually own these chains. Solution: Find a different chain. Use your feet you lazy ass whiner!
Pepsi, Taco Bell, KFC, and Pizza Hut are one big happy corporate family, along with Frito-Lay.
But Coca-Cola brands and McDonald's are entirely separate. In fact, most McD restaurants are franchised, which is to say they aren't owned by McD. There's no reason a local McD couldn't strike a deal with a distributor to offer Pepsi products in addition to the Coke brands that come as part of the franchise agreement
But, it's unlikely, because the demand isn't there.
Go down to the 7-11, and get a Big Gulp instead... they have all the Pepsi brands, and all the Coke brands, and often a few others. (YMMV).
This thread is a perfect example of how neither side gives a flying fuck about any sort of principle. The only principle either side follows is "in-group good, out-group bhaaaaaad!".
Section 230 Is the Internet's First Amendment.
The First Amendment is everybody's First Amendment and Google, Twitter and Facebook don't need their own special one.
Fuck off you corporatist shill.
What a &%$# idiot.
We have a first amendment and it works just fine.
Section 230 is not an internet first amendment. It was intended to protect companies hosting information (like Google, Apple, Microsoft, Facebook, etc.) from being held liable for the information hosted because THEY WERE NOT CREATING, EDITING OR MODERATING that information.
Since they were not doing those things, they were not to be treated like a newspaper that can be sued for printing known falsities.
The situation has changed ... markedly. These former hosting companies now manipulate search results for political content (Google Search), select and reject content based on political content (YouTube and Facebook), and even generate their own content.
In short, these internet giants now act just like traditional publishers and Section 230 should be eliminated so they can be treated the same as other publishers. The First Amendment will still apply, but will apply to all.
This is all very fine and good, except there appears to be considerable evidence to suggest that various content venues on the internet are editing for political POV.
So, explain to me in small words why ‘If you behave like a editor, you get treated like and editor’ is unfair.
Are the politicians going about it in a ham-handed way? Probably. But the internet companies are not innocent lambs.
I’s LIKE to see a simple codicil added to the existing law, reading “this protection may be judged as void where a jury concludes that a host has shown a pattern of editing for political content.”
*sigh*
The internet giants like Google have an absolute right to manage their content in any way they wish. But they also have an absolute right to get called on it. They shouldn’t have it both ways. Either they are neutral or they are editors.
You're right that Google, Facebook, Twitter, et al, have a right to manage content how they wish, and you're also right that they should be called on it when they do so in a biased manner. But that in no way means we need to repeal free speech protections or bring in force of government to try manufacture neutrality, first and foremost because the government has shown they are incapable of being impartial arbiters of neutrality
There is nothing in Section 230 that says the platforms can't moderate for political content, nor should it say that
What he is asking for is truth in advertising. youtube especially has arbitrarily changed the rules retroactively after promising its users a share of profits.
Yes, it does. Section 230 treats them like a platform, which indicates they would NOT moderate for political content.
Why Reason feels "Don't censor for viewpoint OR we will treat you like a media company" is so abhorrent...but, you know, "muh private companies" and all.
Hawleys bill doesnt even disallow political censorship, the company would just have to tell their users openly.
That knitting site would be free to continue with their no trump patterns under the Hawley bill. They are open with their moderation policy. What it would attack is Pinterest putting a pro life group on their porn ban list.
How libertarians are against transparency is beyond me.
How does it treat people who imagine they're being discriminated against because of their politics but they're just being treated like antisocial assholes?
The answer is that they are not libertarians, they are progressives in libertarian drag.
Where does Section 230 say anything about political content? It allows the platform to moderate anything they consider "objectionable." That's a pretty wide margin that I find difficult to believe would not include political content
How, exactly, are you a platform if you're actively censoring political content? You control what is there you are then NOT a platform.
Doesn't it simply make you a platform that is only open to a select group of people?
Which is what a publisher is.
Is that really all it takes, though? I'm trying to converse in good faith here, but I'm not a lawyer or legal scholar. To me, Twitter et al seem more like the bar that hosts open mic night. They've got a bouncer at the door and they can throw you out if they want, but you're still that one that's responsible for the things you say.
When I think of a publisher I think of someone/something that's providing editing, promotion, etc., in addition to running the "printing presses" that actually put words to "paper" (with obvious modification for digital content).
It just seems like you are trying to force internet companies into a paradigm that simply doesn't fit. I'll readily admit there are similarities, but the differences are big enough to make them distinct.
At any rate, see my comment below for a different approach to both traditional and internet media.
Here's what section 230 actually does.
AUTHORS of libels are liable for libel. So are publishers (with a whole bunch of complicated factors). So if you SAY "Joe has sex with goats", and Joe can prove damages arising from your false claim, then Joe can recover from you to cover his losses. But if somebody else REPEATS your lie, Joe can recover from them, too.
All right, flash back 25 years. The Internet is exciting and new. Traditional media companies establish Internet presence, but so do a lot of others... creating the opportunity to see and read content from a lot of people who might have been shut out of traditional publishing. But the Internet also offers something new... interactivity. Not only can you go somebody's website and see what they have to say, but they can solicit information from users and share that, too. (Think of how THIS exact website operates One of the contributors writes a piece, and we can interact with it, and each other, sharing and contributing our own perspective.
But wait. What if one of those users says something libelous? The person who wrote it is liable, but... what if the website, as distributor, is ALSO liable? If so, that might stifle the development of interactive systems. There were two ways of dealing with it. On the one hand, they could offer no moderation whatsoever, in which case it was not possible to assume that because it was on their site, they stood behind the words. The other choice was to hold everything until someone could look it over, and cull the potentially objectionable material before publishing everything else. The problem was that blocking ANYTHING meant that you were liable for EVERYTHING ELSE, whether or not any employee or operator ever even saw it, and the public generally wanted at least some things deleted... the swears, the political rants, but mostly the spam. If a website operator installed a system that automatically deleted those, they could be held liable for things they not only didn't say, but quite possibly had never even read.
In the context of the CDA, which contemplated criminalizing publishing anything on the web that was appropriate for small children, that could open a huge liability hold for publishers. So they wrote into the law a liability shield that allowed website operators to have interactivity on their sites, protecting them from being liabble for things done by somebody else. So a user (as opposed to an operator) puts something on a website that creates liability, that liability lies only on the user who wrote it, rather than the website. If the operator posts something libelous, the ordinary rules of libel apply If the plaintiff can make out all the elements of defamation, they win the case and collect from the website operator. But if a user posts something libelous, and takes it down as soon as the target of the libel points out "hey, that's libel!", the website operator can't be sued.
There are exceptions to the freedom of speech, such as "yelling fire in a crowded theater", "fighting words", directed threats and the newly added "hate enhancements", but they are generally accepted as being defined as they are.
Similarly, something like those, recruitment for terrorism/terrorist acts and "content unfit for children" could also apply to internet platforms, but when that is extended to anything that hints that "orange man, good" while not doing the same for "communist/socialist candidate, good", we no longer are dealing in generally accepted exceptions.
You, and those supporting the social media amnesty from censoring for political content, are just happy with the communist/socialist candidate gaining an advantage.
If the general bent of these platforms was conservative, we would be seeing you leftists rioting over this.
It is not 'repealing free speech protections' to remove their insulation from liability over the things they publish.
It is a simple principle, it even appears in (an unenforced section of) the text of 230. If you in any way exert control over the content that appear on your service you cease to be a platform and become a content provider.
Content providers are not supposed to enjoy liability protections, but as multiple recent court cases have shown us, they do.
When the courts cannot properly apply the law as written mighty as well get rid of the law.
It is a simple principle, it even appears in (an unenforced section of) the text of 230. If you in any way exert control over the content that appear on your service you cease to be a platform and become a content provider.
That is the exact opposite of what Section 230 says. It says that any that any entity shall *not* be treated as the publisher and held liable for "(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected;"
Note that it says "material that the provider...considers to be obscene...harassing, or otherwise objectionable" which gives them a wide margin to moderate and censor however they see fit
Thanks, now I understand that you are only arguing from ignorance.
Here's the relevant text. Found in the 'defintions' section.
Please do take careful note of the verbiage "whole or in part" and "development""
What these services - Google, Facebook, Twitter, etc - by their acts of deletion, other restriction, and overt curation are clearly 'development of information provided.'
They are content providers, not mere platforms.
They fail in the "good faith" application of what they consider "objectionable".
They know, full well, that what has been censored as "objectionable" is no such thing, to almost half of the country.
They do so on bad faith.
It deosn't matter whether they consider it to be objectionable.
It matter wheteher they consider it to be objectionable.
Only a small fraction of conservative speech is being removed.
No one has been banned for calliung for lower taxes or smaller governmant.
For something to get removed it has to be reported. Which means that someone has considered it to be objectionable.
There's no bad faith here.
"Content providers are not supposed to enjoy liability protections"
Under sec. 230, the "content provider" of user-written content is the user who wrote it, and not anyone else.
A publisher, acts as editor, not only when he changes or modifies content, but when he rejects content. When any of these internet "platforms" begins to reject content they act as editor. In acting as an editor they become a publisher.
The only reason REASON is advocating to maintain the status quo with Section 230 is that they are actually progressives and happy with the job these internet giants are doing as "publishers with immunity".
In short, there is no higher principle involved in shilling for Section 230, just partisan politics by fake libertarians.
Do you want to strip them of section 230 protections merely as a punitive measure? Because I don't see how stripping them of section 230 accomplishes your goal of achieving less content moderation.
The point of section 230 was to be able to encourage more freedom for hosting user content without the associated liability risk.
230.stays under hawleys bill, it just forces companies to be open about their policies for moderation. THE HORROR.
The horror of saying that a platform cannot censor due to viewpoint is a distopian nightmare, clearly.
I wonder if Reason would support T-Mobile treating its calls the way Twitter treats tweets. "We don't like what you're talking about so, well, this call is now over. You violated our TOS"
I dont give 2 shots of Twitter bans all conservatives as long as they are openly saying they ban conservatives. It is this entire ban and use vague language crap that is dishonest and an affront to conversation.
What if they don't say they're banning "conservatives", but do say they're banning "stupidity" instead? That would leave the non-stupid conservatives with Twitter account, and only affect the other 90% of them AND it would affect at least 80% of the partisan crap coming from the other side.
"What if they don’t say they’re banning “conservatives”, but do say they’re banning “stupidity” instead?"
Alyssa Milano is on Twitter. So they can't make claims about "stupid" people.
...besides, some of us would miss you, Jimmy.
Or, like they often do, just "shadow ban" so you THINK you are leaving a message but no one can see it.
That's not exactly correct. It forces companies, to apply to the FTC every 2 years for protection under Section 230. It leaves the decision of whether a company is in compliance to partisan bureaucrats at the FTC.
So, you're correct. Section 230 stays for those companies that are connected enough with the FTC to maintain their protection. That's how protection rackets like this work.
Even if you think the goals of Hawley's bill are justified, the implementation should be enough to make anyone who values "limited government" shudder in horror.
As opposed to leaving it up to as few Federal blackrobes, who are currently getting it wrong.
Federal judges are not giving approval for moderation decisions on Twitter. Under Hawley's bill, the FTC would be.
Fuck off pedo
Fuck you asshole for your slanderous accusation.
Don't want to be called a pedophile?
Then don't promote child rape.
Yo, dipshit. in case you haven't noticed, that is your nickname around here.
And if you want to argue that dipshit is slander I'd remind you that truth is always an affirmative defense,
"Federal judges are not giving approval for moderation decisions on Twitter. Under Hawley’s bill, the FTC would be."
But the resultant squabbles WILL land in court.
"So, you’re correct. Section 230 stays for those companies that are connected enough with the FTC to maintain their protection. That’s how protection rackets like this work."
Imagine Ajit Pai, only in the FTC instead of the FCC.
It's not punishment. It is righting an ongoing wrong.
The only thing they are being "stripped" of is something they were never supposed to be entitled to.
I'm becoming more and more convinced that the best way to deal with it all is through better contracts with better enforcement. Make the internet companies put their standards in writing and hold them to those standards. Make them have terms of service that are enforceable against them.
Either they are neutral or they are editors.
I really don't think those are the only possibilities. Editors go through and modify content before it is published. They decide what is published. Platforms removing some content or users after the fact certainly isn't necessarily viewpoint neutral, but it's not the same thing as what an editor does. "Censor" would probably be a more accurate term. They aren't deciding what content goes up on their site, they are selectively removing various things. I think it's a worthwhile distinction.
Hey, you agree with Hawley then. The fact is these courts are forcing lawsuits through bay area courts and having issues outside of moderation struck down under 230. See Murphys lawsuit.
Why people are pushing to allow billion dollar companies to hide their contractual terms and services or apply new terms retroactively is beyond me. We wouldnt allow this for any other business.
So what?
Seriously, so fucking what?
If you don't like Google's editorial policy, use a different search engine. If they politicize their search results and it makes it hard for your business to get clicks, then make do with other forms of advertising. Google is a hugely dominant force in the search engine works, but it's still just a private company, and you do not have a right to dictate how they run their business or what their political leaving are, nor do you have the right to force them to advertise your business.
A libertarian should not need to be reminded of this.
Google is more than a search engine fucktard. They also operate under standard contract laws. What other abusive industries do you support as a libertarian?
What contracts is Google violating?
Don't get in the way of a good rant.
Those bastards at Google know what they did.
You think they don't?
Hey, want to buy this bridge in Brooklyn? Sell it to you cheap.
Exactly. How is this different from "bake that cake"?
Very simple. “Bake that cake” is coercive compulsion. Removal of 230 merely subjects content hosts to be held accountable to the same liabilities as any other publisher. They aren’t being compelled to do anything.
The fuck they aren't. If you are forcing them to host content they don't want to host, that is compulsion.
Nobody is calling for them to be forced to host content they don't want to host, you ridiculously dimwitted eunuch.
People are saying that they should not receive extra legal protection intended for open access platforms when they actively curate and select content as publishers.
Cronyism currently allows them to have their cake and eat it too.
Of course, you'll continue to lie in the hopes that the "cool" kids will approve and be your friend.
Instead, they'll continue tolerating your slavish devotion and hold you, as the rest of us do, in contempt.
How?
Bakeries are not regulated under Section 230 for starters. They are not afforded liability protection for anything that a customer might request be written on a cake. And, should that text be libelous, could be subject to suit.
Obtuse doesn't begin to describe you.
IF cake bakers were afforded such protections from liability AND that protection had proved to be such a boon to their business THEN I could see validity in arguments that they must bake any cake OR risk losing those protections.
That's what this is about, Reason is defending rent seekers who do not want to lose their GOVERNMENT provided legal protection.
This isn't about a loss of liberty, this is about a favored class keeping their license. And that is the definition of an aristocracy.
"How is this different from “bake that cake”?"
It seems everyone wants a cake with their preferred message on it. Oh, and they want to eat it too.
You've found Big Tech's, and Reason's, position.
Somehow I'm guessing you are wilfully blind to that.
There are four kinds of lies. Lies, damn lies, statistics and libertarianism.
It does seem to be going that way. Of course, that is what happens when the progressives move in and claim to be the "real" libertarians.
" explain to me in small words why ‘If you behave like a editor, you get treated like and editor’ is unfair."
Don't hold your breath. Every time there is this sort of article here at Reason somebody expressly points this out.
And every time it is studiously ignored by the authors.
They don't have an answer, and they don't feel like they need to give an answer.
Libertarian website studiously avoids being libertarian.
It's no wonder libertarianism doesn't get any respect.
I do not consider Reason to be libertarian any more.
They have done nothing but damage the reputation of libertarians to those who might have been interested.
They pick their battles in such a way that libertarians come off as flaky idiots who love crony capitalism and have no realistic approach to any issue.
Well said.
When you have actual quotes from Google executives saying that they are trying to manage content to keep Trump from being elected again, then they are painting the target right on their own backs.
I'm a lot more worried about Google, Facebook, twitter etc, interfering in the election than I am about the Russians.
Government conferred a benefit on the tech companies with section 230, if they want to keep the benefit they better use it responsibly.
Who ever gives a shit, at this point we’re fucked. Left and right want to push some sort of fairness doctrine while libertarians are happy to allow California to control the flow of information and police speech.
I'm a little more optimistic than that. There's got to be a viable competitor for Google and for Facebook out there. There is a big part of the market that doesn't give a fuck about woke, PC bullshit. At some point I think living in an ideological bubble will become a liability for the big online companies.
There would be more but Facebook and google commonly practice buy and kill. Another disruption in the market and the primary issue in the monopoly investigation just opened.
That is true. Facebook does seem to buy up any competition and then make it worse.
But big, dominant companies always look invincible until they don't.
But buying up competition and shutting it down is not a free market. Maybe in some larger theoretical sense it is, but such a situation does not produce competition and the benefits that go with that or any of the benefits that free markets can provide. So, why is allowing such a situation a good idea? Also, why is wanting to remedy the situation to create competition anti free market?
They can only buy what is for sale.
Start up your Google competitor, offer service your customers like better than the existing services, and count all your money.
(If you're tempted to claim it can't be done, I'm suggesting you follow the model set by Fox News wrt cable news programming.)
This is true in a limited sense. However, Google is perfectly capable of making sure that every key employee you hire gets an above market offer via recruiter to complicate your ability to function.
Of course, those looking for you online using most browsers are not going to find you because you are "deprioritized" on Google Search.
You might even find angel investors being offered more favorable "special investments" rather that work with you. Only after you have spent months working with them of course and are down to final terms and such.
Amazing how all these companies eventually decide to sell isn't it?
I think this goes to the heart of it. If you look at what Cloudflare did to some right wing sites, what happened to gab, and the efforts by Mastercard to deny e-commerce they don't like, it becomes clear that Google and Facebook aren't just managing their own platforms; they are using their established positions to destroy any competition, political or otherwise. And it's worth noting that this is happening in a context where new enterprises find themselves constrained in ways that would have killed Facebook if they'd been in there from the beginning. In particular, Facebook would have been shut down for its mistakes and deliberate misrepresentations on privacy except that they were there before the rules got locked down. There is a difference between letting private companies run their businesses as they see fit and John D. Rockefeller sending out goons to beat up anybody who tries to drill an oil well or build a refinery without establishing a partnership with him.
Government is like a guy who insists on helping a family paint the outside of their house, despite there being a professional house painter on the scene. The “volunteer” then proceeds to screw up his painting really badly, getting more paint on the windows and lawn then on the wood siding. And, meanwhile, his kid and dog are both completely forgotten, dying in the locked car he left parked in an adjacent lot.
Moral of the story: Do your fucking constitutionally-mandated job, Government, instead of sticking your incompetent nose in other people’s business.
These Congress critters who want to regulate the internet are slavers who need to fuck off.
But... But what if someone irresponsibly uses the innertubes to HURT MY PRECIOUS BABY FEELINGS??! What THEN?!?!
Also, in the past, some Big KKKorporations have made VAST amounts of filthy lucre selling pens and paper... SOME of which got used to write letters that ALSO hurt my baby feelings!!! I should have legal recourse against them also!!!
And Google is like an HOA, whose members all have family in city, county, and state government.
An HOA composed of developers, who all have family in government
But it won't be the government that regulates the internet companies, it will be the people, via lawsuits the government has now given the tech companies immunity to.
Then when the flame wars start in the comments here and someone calls someone else a pedophile, or claims to be nailing his mother, the Reason as publisher is liable for the content they publish.
Of course no one will sue Reason because its net worth is about 15 cents, but Facebook, Twitter? They better think long and hard about whether it's worth it.
A lot of words and misdirection from Reason yet again.
First, they don't even mention the contract issues that Meagan Murphy and now Gabbard are bringing up; the former had her claims struck down under 230.
Second, they don't mention that Hawley's bill doesn't strip them of anything as long as their censorship algorithms are public. This stops companies from lying to the public about their policies. YouTube could no longer tell people they will get ad revenue from their videos and then arbitrarily end the share agreements based on hidden, unadvertised rules applied in a discriminatory fashion.
Hawley's bill also doesn't censor anybody, despite pedo Jeffrey's best misunderstanding of it, but merely opens companies up to civil torts for not being open about their censorship.
Odd that Reason kept mentioning reasons to censor as a good thing... but for some reason doesn't think it is wise to open up the actual rules the companies are using. Truth in advertising used to be a libertarian principle.
Bake that cake, Google!
No one is making Google do anything. They could continue to operate as they like. They just then are responsible for the content they moderate. If they don't want to have that responsibility, fine. They can admit all comers and stop censoring their content.
Google is getting special treatment under the law here. Taking away a special immunity is not forcing Google to do anything but operate like every other publisher.
"They just then are responsible for the content they moderate. If they don’t want to have that responsibility, fine. They can admit all comers and stop censoring their content."
Holy shit, dude!!! Would you (or our Political Masters in Congress) EVER even DREAM of applying this to the local rag? I have noticed that my local paper rag does NOT publish all the letters to the editor that are submitted... And sure as hell not the editorials or articles either!!! The slavers are trying to make the e-world submit to rules FAR nastier than the rules for the paper world!!!
You wanna publish your stuff? You're doing it VERY freely right here! You want MORE e-publishing freedom? Pay for a Go-Daddy web site like I did!
This whole thing about trying to boss around FB and Google and so forth, is TOTALLY a wasted effort, and a tempest in a teapot!
Holy shit, dude!!! Would you (or our Political Masters in Congress) EVER even DREAM of applying this to the local rag? I have noticed that my local paper rag does NOT publish all the letters to the editor that are submitted… And sure as hell not the editorials or articles either!!! The slavers are trying to make the e-world submit to rules FAR nastier than the rules for the paper world!!!
Holy shit dude. If the local rag published copyright protected or slanderous materials, they would be responsible. The fact that it was in a "letter to the editor" would not save them. God damn, where do you people get this stuff?
The local rag does NOT hold the copyright to my letters to the editor! Reason.com here specifically stipulates that I, not they, own what materials I write here. There is NO difference here between on-line and the paper world (or, there shouldn't be, and the slavers are trying to change things).
If I slander someone in the local rag, I, not they, am held accountable. If I slander someone in a hand-written letter, I, not the sellers of my pens and paper, am held accountable. It is a slaver-deed to go and punish one person for the doings of another. It is that simple!
You are just wrong here. You are a fucking moron.
April 24, 2007 · The U.S. Supreme Court announced last week that it will not review a libel case in which a small Virginia newspaper was ordered to pay $75,000 after publishing a letter from an inmate containing allegations about a local prosecutor.
The owner of the Martinsville, Va.-based Buzz sought the high court’s review to determine if newspapers can be sued for libel for not thoroughly investigating letters to the editor.
The prosecutor, Joan Ziglar, sued the Buzz in 2002, after the paper printed a letter from a local jail inmate, Za’Kee P.J. Tahlib, that said Ziglar was trying to frame him for first-degree murder because Tahlib claimed he had an affair with Ziglar’s sister. Tahlib wrote that Ziglar was encouraging his co-defendant to lie.
The newspaper did not attempt to verify Tahlib’s charges, and noted in its brief to the high court that Ziglar had a policy of not commenting on ongoing cases and that the jail would not allow access to inmates.
A jury awarded Ziglar $75,000 in damages in 2005. The newspaper appealed to the Virginia Supreme Court, which refused to hear the case.
http://www.rcfp.org/supreme-court-will-not-hear-letter-editor-libel-case/
So Government Almighty blesses punishing one person for the doings of another. USA Government Almighty ALSO blessed slavery, no voting rights for women, stealing the lands of Native Americans (even after they learned the ways of whites and competed against them successfully, as in, Cherokees), and sending them to concentration camps. Also, concentration camps for Japanese-Americans as well...
Might does NOT make right! Now please justify the above mis-deeds of Government Almighty, in terms of ethics and morality.
If a newspaper publishes something that is libelous, they are responsible for the consequences. That is totally correct. The paper is under no obligation to publish anything submitted to it; it exerts complete control over what does get published. They should be held responsible for that content.
You are incapable of understanding that freedom comes with responsibility for your actions. You are just a typical moron Libertarian who thinks freedom means you can do anything you like without any responsibility.
If a guest uses the air that I own, in my house, to transmit sound waves that contain slanderous words and concepts, am I responsible? If I sell pens and paper that are used to libel someone, am I responsible? More importantly... SHOULD I be? If there is no force involved, WHY should you and Government Almighty be entitled to say which of my words are "responsible", to the extreme extents that both left and right are now power-grabbing?
You're just looking for the "deepest pockets" to sue, when they won't transmit your fascist, anti-freedom ideas. Go buy a Go-Daddy web site if you feel put upon, it is that simple. Go-Daddy has NOT yet been targeted by your pet fascists in Government Almighty.
SQRLSY One is not a moron libertarian. He is just a moron.
It's 'libel' - not 'slander'. You would use 'defamation' to cover both written and spoken communication.
Aside from that, you have no clue what you're talking about.
Every time I deign to read a post of yours, it simply confirms my reasoning for skipping past 99% of them.
It is "information" or "data", not "clue". That is why I conclude that your posts are data-less.
They were trying to force the baker to write something. Nobody is forcing Google to say or print a word.
Also, platforms DO NOT HAVE SPEECH. They solely transmit the speech of others. THAT is why they have protections.
" They solely transmit the speech of others."
to the extent they do that, you are correct. However, as soon as they begin controlling content, i.e., limiting the speech of those on their services, they cease to be platforms and become content providers.
That is also clearly defined in Section 230, but nobody wants to pay any attention to that portion.
OMG, you guys cannot possibly be this dense. If nobody is forcing Google to do anything, why does the government need to get involved? Are you serious?
Google is choosing to use their speech rights to not broadcast speech they dislike.
Fine.
THAT MAKES YOU A PUBLISHER, NOT A PLATFORM.
So, who cares what you call it? You wanna force publishers to publish things they don't want to, or to make them responsible for the words of the authors? Fuck off, slaver.
You are an unbelievably stupid, slavish little bitch, eunuch
Fuck off.
Take your loathsome comments elsewhere.
"So, who cares what you call it? "
Federal law, mainly.
federal laws such as...
The desire is to get the government OUT OF BEING INVOLVED, by giving providers protection from being sued.
I thought libertarians were all in favor of contracts.
Someone who can't sue over the terms of a contract doesn't sound like an equal participant in the transaction.
Also, platforms DO NOT HAVE SPEECH. They solely transmit the speech of others. THAT is why they have protections.
The owners of the platform DO have speech.
They have protections because the owners of a platform are not responsible for the speech of others.
"The owners of the platform DO have speech."
In so much as they are free to call the person saying the stuff idiotic and wrong.
They do not have the right to simply stifle the speech or, again, THEY CEASE BEING A PLATFORM.
"They have protections because the owners of a platform are not responsible for the speech of others."
If you censor, happily, stuff you do not like, then legally, anything remaining is something you support.
Don't like it? Then don't censor.
They do not have the right to simply stifle the speech or, again, THEY CEASE BEING A PLATFORM.
That's not true. There are plenty of Internet platforms dedicated to specific interests that have no problem banning speech that runs contrary to the interest of the platform.
Should a Christian platform be forced to accommodate hateful speech against them ON THEIR OWN PLATFORM?
If you censor, happily, stuff you do not like, then legally, anything remaining is something you support.
That is not even true. For example, because I do not like broccoli, and I choose to ban broccoli from my house, it does not necessarily follow that I DO like every other vegetable. It is the same as with speech.
"Should a Christian platform be forced to accommodate hateful speech against them ON THEIR OWN PLATFORM?"
They would be a publisher not a platform. Feel free to provide the example you're referring to.
Let's use a meatspace analogy then.
Suppose I have my church lady friends over to my private house to have a private discussion about the Bible. However, a stranger barges in and disrupts the conversation and wants us to look at his porn collection instead.
So according to you, my two choices are:
1. I can demand that the stranger leave my property, but if I do so, then everything that the church ladies say becomes my liability. So if one of the church ladies gets into a heated discussion and defames another one, then the defamed church lady can sue both the defamer AND ME.
2. I can permit the stranger to disrupt the discussion, at which point, having a discussion about the Bible is no longer going to be productive, and my church lady friends will instead just leave.
"Suppose I have my church lady friends over to my private house to have a private discussion about the Bible. However, a stranger barges in and disrupts the conversation and wants us to look at his porn collection instead."
So, breaking and entering. Got it. THIS is your analogy.
Trespassing, not B&E.
Looking, then, to the common law, from whence came the right which the Constitution protects, we find that, when private property is “affected with a public interest, it ceases to be juris privati only.” This was said by Lord Chief Justice [Matthew] Hale more than two hundred years ago, in his treatise De Portibus Maris, 1 Harg.Law Tracts 78, and has been accepted without objection as an essential element in the law of property ever since. Property does become clothed with a public interest when used in a manner to make it of public consequence and affect the community at large. When, therefore, one devotes his property to a use in which the public has an interest, he, in effect, grants to the public an interest in that use, and must submit to be controlled by the public for the common good, to the extent of the interest he has thus created. He may withdraw his grant by discontinuing the use, but, so long as he maintains the use, he must submit to the control.
10th paragraph, majority opinion, Munn v. Illinois, 1876
The Supreme Court disagrees with you.
Well I disagree with them, and I reckon most libertarians would as well.
Nobody cares
The baker did not have a contract to do anything. He rejected the commission before that stage. The social media platforms arguably do have contractual obligations to their content creators.
Hawley's bill is censorship via extortion.
Tech companies would have to go begging to the FTC for permission to moderate their own forums. And if the FTC didn't like it - poof, no Section 230 "license" for you, and now you're liable for every single thing every single shitposter ever posted to your forum.
It puts the FTC in charge of moderation decisions on private platforms.
pedo Jeffrey
Fuck you.
Hawley's bill doesn't strip them of anything as long as their censorship algorithms are public.
This is not true. Hawley's bill would impose liability on tech companies that moderated content in a manner that was not "politically neutral", as determined by the FTC, whether or not their algorithms were public.
We had plenty of unrestricted online discussion before Section 230. What we didn't have was monopolization of discussion platforms by a few ad based corporate giants.
Like net neutrality, Section 230 was regulatory capture by big internet companies and has been very harmful to the internet and "marginalized communities". It should be struck entirely or limited to pure carriers. The fact that this makes it more difficult for Google, Facebook, and Twitter to operate is the result of regulatory capture going away.
Section 230 was regulatory capture by big internet companies
Seems more like it enabled the big companies to exist. Google and Facebook didn't exist in 1996. It may be harmful in some ways, but also allowed the internet as we know it to exist.
Other big companies like AOL and Microsoft did. Regulatory capture is a good term for it. Just because new companies have arisen that benefited from the capture doesn't make it any less of a regulatory capture. Section 230 is a special immunity from the law, given to one industry without any concomitant responsibility, as a result of that industry lobbying Congress. If that is not an example of regulatory capture, what is?
You aren’t getting it: “the internet as we know it”, an Internet dominated by a few giant corporations deriving revenue from ads placed on user-generated content published for free, is precisely what makes it harmful.
It is absurd for libertarians to argue that a targeted exemption from liability is a libertarian policy that supports “free speech”, in particular when the exemption creates corporate giants that are in bed with the government.
Your argument is akin to arguing that since private property rights permit large corporations to accumulate vast amounts of wealth, and that's a bad result, the solution is to undermine private property rights.
No doubt, if you have the reading comprehension of a seven year old, you might indeed think that a “targeted exemption from liability” created in response to massive lobbying is just like “private property rights”.
Jeff is a psychotic.
He's only capable of seeing that which fits his fantasy.
Go away. Go join Shithead in his Progressive Hunting Parties.
"Go away"
No.
Kill yourself.
"Seems more like it enabled the big companies to exist. Google and Facebook didn’t exist in 1996"
This is something you would say if you didn't understand tech.
1996 is a long time ago in tech years. Big companies come, and big companies go, if they can't continue to provide value. In 1996, the top provider of network operating systems was Novell. The leading provider of browser software was Netscape. AOL was a big ISP. Each was displaced along the way, just as had happened previously. Once, the top provider of word processing software was WordPerfect, they themselves having displaced WordStar. Lotus once produced the top spreadsheet program. Microsoft, originally a vendor of programming tools, had leveraged a relationship with IBM to become a dominant provider of PC OS software, and now had cash reserves and an interest in dominating office productivity software.
Noting that Facebook hadn't shown up on scene yet in 1996 has nothing to do with section 230.
You don’t say! But in 1996, there were a few big companies who wanted to monopolize online access and discussions, and for them Section 230 was very important.
The structure of the market is determined by regulations and other conditions. And rent seeking and regulatory capture have created a market structure that favors big content and advertising monopolies. Rescinding those ill-conceived regulations is the libertarian thing to do, and it may incidentally also solve some of the problems we have had with online censorship.
Tech history... you should learn about it and understand it.
"Tech history… you should learn about it and understand it."
Dimwit, I've been in the tech industry since 1982. I teach tech history. I keep trying to tell you that you don't understand it nearly as well as you think you do, but the message isn't getting through.
That's just how stupid works, alas.
So not as long as me then.
Nevertheless, you have been in the industry long enough that you should know better. The logical conclusion is that you are lying because the rent seeking, regulatory capture, and corruption furthers your own financial interests. Do you work for one of the big tech companies? Do you hold a lot of their stock? Thought so.
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Understand that this is completely different than the law as it applies to other platforms. The famous NYT v. Sullivan case involved a full page advertisement run in the Times. A newspaper or a magazine can't claim that someone else created the content as a defense to libel.
So the question is why should these companies be any different than the Times. The claim was that they were not really publishers. They let their users put up whatever they liked and if they were liable for the content, they couldn't do that and would have to moderate and restrict what goes up on their platforms.
That makes sense as far as it goes, except that they have begun to moderate anyway. The harm that was supposed to be prevented by giving them a special exception to the law is happening now.
The entire logic of giving them this exception is that they just provide a means of communication the way the phone company does. Well, the phone company doesn't monitor all of my calls and doesn't refuse service because it doesn't like my politics.
If they want to have this protection, they should have to act like the phone company and not the NYT. The defenders of section 230 seem unable to come up with a reason why the law shouldn't be amended or changed other than "but we like it and the law says..." Even if you don't believe that 230 implicitly imposes a duty of impartiality and reasonableness to those who benefit from its protection, you still have to answer the question of why amending it to impose such a duty is so bad?
All amending the act would do is force providers to either act impartially and reasonably or accept responsibility for whatever goes on their platform if they don't. If you want to be a publisher, you can be one, but you then have to accept the responsibility that comes with that freedom.
All amending the act would do is force providers to either act impartially and reasonably or accept responsibility for whatever goes on their platform if they don't.
Define impartial and reasonable.
Hawley's bill, as I understand it, leaves those definitions up to bureaucrats at the FTC to define and measure the practices of internet companies against those definitions. That's the big problem with Hawley's bill.
Simply enforce contract law (TOS) between internet platforms and the content creators. We don't need a heavy handed bureaucracy like the FTC involved.
Define impartial and reasonable.
Let anyone post anything and only remove it if it is somehow illegal. That is very easy to do. And indeed, that is exactly what these platforms did for years. It is only recently that they have begun kicking people off for conduct that wasn't illegal or slanderous.
Hawley’s bill, as I understand it, leaves those definitions up to bureaucrats at the FTC to define and measure the practices of internet companies against those definitions. That’s the big problem with Hawley’s bill.
That is only a problem if you think these tech companies should be above the law. They are always free to take responsibility for what is on their platform and forgo the 230 immunity.
The problem here is that you are assuming they are somehow entitled to be above the law and normal liability for their actions and any attempt to take that immunity away is somehow infringing on their rights. No it is not. The immunity itself is nothing but a special privilege granted by Congress to a powerful interest group.
Let anyone post anything and only remove it if it is somehow illegal.
But the Democrat FTC commissioner would argue "that doesn't go far enough, 'hate speech' is unreasonable and that should be banned."
And the Republican FTC commissioner would argue "that doesn't go far enough, 'unpatriotic speech' is unreasonable and that should be banned."
Whatever truly reasonable definition you come up with, will be twisted by politics to favor those in charge.
So what? If you want to control over what is on your server, then take the responsibility for that and forgo the immunity. As usual you are completely missing the point and shitting all over the thread with your ignorance.
"So what?"
So what? The point is, YOUR reasonable standard of "allow everything that's not illegal" will not be the actual standard used by the FTC. The standard used by the FTC will be something that is twisted by politics to suit the agendas of whichever tribe is in charge.
So your proposed "solution", then is to either (a) allow your own site to be overrun by shitposters and filth, or (b) submit to censorship via extortion by the state.
Where is the option for (c) I get to use my property as I see fit, and fuck off if you don't like it?
No, the point is I can always just forgo the immunity and run my server anyway I want to.
Again, you are missing the point. You are either too stupid or too dishonest to add anything to the conversation.
Except if you didn't have immunity and you ran your server anyway you wanted to, you would have to approve every word posted on your server because you would be liable for every word. Even for words and statements that you did not create.
Given that they are ALREADY censoring viewpoints based on political considerations, seems like it's a problem they made for themselves. Too bad, so sad. They could always, you know, STOP doing so and limit removals to court orders.
And you are illustrating precisely why Hawley's bill is a censorship extortion racket. By not submitting to the racket, they are opening themselves up to UNDESERVED liability for being responsible for the words that OTHERS write on their platform.
Let's apply the same principle in a different context.
If I get in a car accident with a dude who doesn't have insurance, I should be able to sue the insurance company anyway, right?
OK, Jeff, there is a video on Twitter still where a PA state rep is calling for the doxxing of underage girls who were doing a prayer protest at PP.
Twitter STILL won't do anything about it. But their TOS couldn't be more clear about the action violating it.
So, they are showing that their TOS means nothing if they AGREE with you. Rules for thee but not for me.
THIS is acceptable?
I have no idea about the story behind the video. Let's just stipulate the facts as you state them for the purpose of this discussion. I looked over the Twitter TOS and it doesn't actually say that Twitter is *obliged* to punish accounts that break their rules. It says that Twitter *may*, "at any time for any or no reason", suspend or terminate accounts that break the rules. Sure it is not "acceptable" that they are apparently acting like hypocrites, but it doesn't necessarily mean they've broken the TOS and it certainly doesn't necessarily mean that the government has to get involved to try to "fix" things.
"I looked over the Twitter TOS and it doesn’t actually say that Twitter is *obliged* to punish accounts that break their rules."
...so, their TOS don't actually mean anything. Got it. "You cannot do this...but if you DO, we MIGHT do something".
Well, gee, that sounds like a right valid contract there.
How do you not get that the effort is to get the government to UNFIX the things they already "fixed"?
And, as usual, government's "fix" created more problems than if they left things alone.
It is only recently that they have begun kicking people off for conduct that wasn't illegal or slanderous.
What constitutes slander is often debatable among even the best legal scholars. It would be almost impossible for a content provider to make millions of decisions and be correct all of the time. Take Twitter as an example. Even if the success rate were 99.999%, at roughly 500 million posts per day that still leaves 5,000 cases of slander getting through the cracks. Twitter would have to make its algorithm so restrictive, in order to avoid this issue, that the service would certainly not exist as it does today.
The sheer amount of content which goes on Twitter vs NYT (as an example) is orders of magnitude different. That's a point that I feel you're missing in your analogy with NYT.
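(Purely as a back-of-the-envelope check of the figures above: the 500 million posts per day and the 99.999% success rate are the commenter's assumptions, not official numbers, but the arithmetic works out as claimed. A rough sketch in Python:)

    # Rough sketch only; both inputs are the assumed figures from the comment above,
    # not real Twitter metrics.
    posts_per_day = 500_000_000        # assumed daily post volume
    success_rate = 0.99999             # assumed moderation accuracy
    missed = posts_per_day * (1 - success_rate)
    print(round(missed))               # prints 5000: posts slipping through per day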
But Twitter is taking it upon themselves to censor. Nobody is MAKING them do it. They choose to.
Doing it poorly is not a defense.
My point is that they can choose to censor content more liberally with the Section 230 protections. They don't have to necessarily censor based on potential legal liabilities.
Without Section 230, Twitter, Youtube, Facebook, etc as we know them today likely wouldn't be able to exist without huge legal liabilities. If you hold them accountable for the content they host (as a publisher), then they will certainly act more like a publisher and pre-screen content before hosting it... which is what the NYT does.
"Without Section 230, Twitter, Youtube, Facebook, etc as we know them today likely wouldn’t be able to exist without huge legal liabilities. "
Seems a lot like a "not my problem" issue. If your business model requires special legal protections... then maybe you have a bad model.
" If you hold them accountable for the content they host (as a publisher), then they will certainly act more like a publisher and pre-screen content before hosting it… which is what the NYT does."
So no Facebook, Youtube, or Twitter. Sounds horrible. Really
They could, you know, NOT
If you hold them accountable for the content they host (as a publisher), then they will certainly act more like a publisher and pre-screen content before hosting it… which is what the NYT does.
And that's fine. They should do so if they choose to be a publisher. What they shouldn't have the ability to do is act like a publisher and still leave up stuff that violates the rights of others (IP infringement, defamation, etc.) and evade liability for it.
...or, they could not censor anything and retain their position as a platform for free expression.
Why is the default position of those supporting government protectionism, that censorship must happen?
" they could not censor anything and retain their position as a platform for free expression."
That's what we had, in the early days of the Usenet. You got the occasional (always) flame wars, and then, once commercial activity was permitted on the Internet, you got spam, spam and more spam, with extra spam.
People don't like spam.
I've often considered running for President on a platform of "if I'm elected, I will deploy Seal Team 6 and as many drones as necessary against email, comment, and phone spammers. No more phone calls from Judy about lowering your credit-card rates. No more offers to buy cheap v1ag4a. No more pleas from widows of former Nigerian oil ministers who just need a little help to move money out of a Nigerian bank. Just darkness, sound of helicopters hovering, and then silence."
The only ones who don't get a quick tap to the head and then get dumped in the ocean on the way home are the ones who invented the fake "malware alarm" webpages that make hideous noises and try to lock up your computer so you can't just close them. There's a special place in Hell waiting for them, so better prepare them for it while they're still here in this plane of existence. I figure holding them in solitary with Yoko Ono's greatest hits at about 100 decibels for a couple of weeks should be a good warm-up.
The difference between the NY Times and Twitter, is that with the NY Times, there are editors providing affirmative consent to every single piece of content in their newspaper. With Twitter, that is not the case. Tweets are not reviewed before being posted. Objectionable tweets are only removed after the fact.
If they want to have this protection, they should have to act like the phone company and not the NYT.
You have your analogy wrong. The Internet itself is like the phone company, not Twitter or Facebook.
Even if you don’t believe that 230 implicitly imposes a duty of impartiality and reasonableness to those who benefit from its protection, you still have to answer the question of why amending it to impose such a duty is so bad?
Who decides what is "impartial" or "reasonable"? The government? Then this is de facto censorship by the state.
The difference between the NY Times and Twitter, is that with the NY Times, there are editors providing affirmative consent to every single piece of content in their newspaper. With Twitter, that is not the case. Tweets are not reviewed before being posted. Objectionable tweets are only removed after the fact.
Yes. The Times has editors because it is responsible for the content. Twitter doesn't because right now it is not responsible for the content on Twitter. Treating Twitter like a publisher would mean they would have to hire editors and do a more thorough job of monitoring the content that goes up on their platform. So what? If they want the ability to determine what goes on there, then they should have to assume the responsibility that comes with that, just like the Times.
Jeff you are just dumb as a post. You don't even mean well. Just go away. The discussions here are so much better when you are not fucking them up.
You have it backwards. Twitter ISN'T a publisher BECAUSE they don't have editors pre-approving every piece of content on their platform. Unlike the situation with the NY Times. So since there is no one on Twitter approving every piece of content, why should Twitter suffer any legal liability for the content that others place on their platform?
You have it backwards. Twitter ISN’T a publisher BECAUSE they don’t have editors pre-approving every piece of content on their platform.
By that logic, the Times could stop being a publisher by firing all of its editors and claiming no control over what goes into its paper. Sorry, it doesn't work that way. You are a publisher because you "publish" something not because you create it.
For the second time, go away. You don't know anything and you won't listen. Stop messing up the conversation of those who do.
By that logic, the Times could stop being a publisher by firing all of its editors and claiming no control over what goes into its paper.
If that claim was a TRUTHFUL claim, then yes they wouldn't be a publisher anymore, they'd be more like what Twitter is now.
Anyway I have tried to explain to you why analogizing Twitter to the NY Times is problematic. Twitter doesn't actually approve every tweet before it is posted. The NY Times has editors that DO approve every word before it is published. Because of this important distinction, Twitter and NY Times should not be treated equivalently on a legal basis.
If you approve ANY, you approve ALL.
It's a court's job to decide what act is censorship and what is just Twitter being incompetent?
No. If you censor anything, then you are on the hook for everything else. Sucks for you, but it was a decision you made.
So if, on my private Christian platform, if I permit speech favorable to Christianity, then I must also permit speech hostile to Christianity. Is that your standard?
Whoosh. This question has been answered for you several times in this thread.
"So if, on my private Christian platform, if I permit speech favorable to Christianity, then I must also permit speech hostile to Christianity. Is that your standard?"
If you wish to control the message, then you aren't a platform. It's pretty easy to follow this. If you wish to create a Christian site --- then you won't be a platform and you won't have those protections. C'est la vie. Lots of businesses manage to pull it off.
"If you wish to control the message, then you aren’t a platform. It’s pretty easy to follow this."
So if I want to have a website that talks about law and Internet, I have to let people post about whatever they want.
If my users ask if I can't do anything about the trolls who pop out to drop political insults and then disappear again, I can't do that, because that would be "directing content" and make me liable for when some Internet rando posts something defamatory on my site. But, if I leave the trolls alone, then I'm not liable for any Internet rando's libels.
Does that about sum up your position?
"So if, on my private Christian platform, if I permit speech favorable to Christianity, then I must also permit speech hostile to Christianity."
Ask Jesus to smite the unbelievers for you. He has a different kind of liability shield.
"By that logic, the Times could stop being a publisher by firing all of its editors and claiming no control over what goes into its paper."
Uh, yeah. Fire all the editors and writers, and you're not publishing a newspaper any more.
Banning someone's account, on Twitter, is going beyond "reviewing before posting".
What do you think a moderating algorithm does, but "review before posting"?
Understand that this is completely different than the law as it applies to other platforms. The famous NYT v. Sullivan case involved a full page advertisement run in the Times. A newspaper or a magazine can’t claim that someone else created the content as a defense to libel.
Maybe they should be allowed to do so? Legal liability lies with the entity creating the information. If that was a NYT reporter then the NYT is responsible. If it was someone who took out an ad in the Times, then it's the person who created the ad.
It allows the NYT and Twitter et al. to control what they publish or allow to be posted on their platform (as it should be) and makes entities responsible for their own words.
Honest question - what's so bad about that? Doesn't that expand freedom rather than racing to the bottom in the name of fairness?
" A newspaper or a magazine can’t claim that someone else created the content as a defense to libel. "
Well, actually, they can, in some cases. If you pick up a copy of the Times in the subway, and someone did half the crossword and then wrote something defamatory over the rest of the squares, the target of the defamation can sue the writer (if they can identify) but not the Times, (nor the subway system.)
"Honest question – what’s so bad about that?"
There are a couple of problems. One, of course, is that many people are unidentified or judgment-proof, leaving no recourse if the deep-pockets publisher is off-limits. Second, putting something into a publication means it gets wider distribution, thus creating more potential damage from a defamation. We want the publication to avoid publishing things that are defamatory, and probably want to limit its publishing of things that may be defamatory. Potential liability keeps them vigilant, unless the publisher is judgment-proof as well.
Maybe this is a dead thread now, but...
One, of course, is that many people are unidentified or judgment-proof, leaving no recourse if the deep-pockets publisher is off-limits
That might be true for random internet commenter, but doesn't Section 230 already make this the norm in online forums? For a traditional print newspaper, someone who takes out an ad is going to have to pay and leave some means of identification. I suppose a letter to the editor could be anonymous or use fake contact information, but it seems like that's just a small-scale version of the way the online space is now.
I think it's still perfectly reasonable to make a publisher/online forum liable if they receive a credible notice that they are publishing/hosting defamatory content and don't print a retraction/remove it.
"I think it’s still perfectly reasonable to make a publisher/online forum liable if they receive a credible notice that they are publishing/hosting defamatory content and don’t print a retraction/remove it."
So does the legal system. But if you just show up and sue them because Joe User posted something on their site, and you DON'T bother to prove that they knew it was defamatory or don't even bother to prove that they knew it was there, you're suing the wrong defendant, because the online forum isn't the one who defamed you.
Section 230 was regulatory capture by big internet companies
Section 230 was written in response to a horrible court decision, Stratton Oakmont v. Prodigy, which declared that if Prodigy moderated any content at all on its forum, even for uncontroversial reasons like removing porn and spam, Prodigy was legally responsible for every remaining piece of content. It applied the wrong standard to tech companies. Section 230 fixed that.
It was not some amendment slipped into some bill by corporate lobbyists. It was in response to a court decision.
It’s your opinion that this was “the wrong standard”. I think it was the right standard. Without Section 230, companies like Prodigy, Facebook, and Google couldn’t exist in their current form. Instead, we’d likely have a large number of hosting platforms that don’t moderate at all, and a large number of legally separate content providers on top of those hosting platforms. That’s the likely free market outcome without Section 230 and therefore what libertarians should favor. For some incomprehensible reason, you don’t like it.
Instead, we’d likely have a large number of hosting platforms that don’t moderate at all
So the whole internet would turn into 4chan. Wonderful.
Could I start a platform that discussed, say, only chemistry? And I kicked out all the non-chemists because I wanted the discussion to be only about chemistry. Would this be permitted in your world?
It's permitted under Hawley's bill.
You'd just be liable for things posted on it.
You'd manage. Or not. It's not my concern.
It’s permitted under Hawley’s bill.
You’d just be liable for things posted on it.
Oh I see. So therefore if I wanted a chemistry-only forum, or a Christian-only forum, or a knitting-only forum, then the owners of the forum would have to be super-vigilant about all content posted on the forum. One shitposter deliberately trying to ruin me, or even one person in a heated discussion getting out of hand, and I am completely ruined. So why would I take on that kind of risk?
So the wiser course of action would be for me to not moderate anything at all, which is what you evidently want me to do. Then my forum would be overrun by trolls, shitposters, spam, off topic nonsense, etc., which I would be powerless to stop. Why would anyone come to such a place seeking discussions about chemistry, or Christianity, or knitting, when it would be full of such garbage instead?
So if I wanted to find discussions about chemistry, or Christianity, or knitting, I would have to go to some corporate platform - like Facebook! - where they have the resources to have a properly moderated forum. Taking away Section 230 means making the big platforms EVEN BIGGER because they would be the only ones with the resources to fend off every frivolous defamation lawsuit. Your average blogger does not.
THIS is why we have Section 230. Because these would be the only choices available to average regular users like myself in its absence.
If you think FaceCrack moderates for spam, you haven't seen too many sites that use FaceCrack for their comments.
Frequently spam posts outnumber the on-topic ones.
As for "shitposters" and "off-topic nonsense", aimed at "chemistry, Christianity, or knitting" sites, I detect a slightly guilty conscience.
"Oh I see. So therefore if I wanted a chemistry-only forum, or a Christian-only forum, or a knitting-only forum, then the owners of the forum would have to be super-vigilant about all content posted on the forum."
Sounds rough.
Again, if your business model requires special protections from liability, then you have a business model problem. Which, ultimately, is not MY problem.
(1) You admit that your objective is actually censorship.
(2) No, it wouldn’t, because individuals would be responsible for their own content.
Of course you could in my (libertarian) world. You’d also be liable for the content on your platform if you control it. As you should be.
(1) You admit that your objective is actually censorship.
The only thing I admit is that I want private property owners to be able to exercise their private property rights over their own property. That includes the right to control what should or should not be on that property. So yes I want private individuals to be able to censor views on their own property. For example, I don't think Christian forums should have to put up with anti-Christian trolls disrupting their discussions, and I think the owners of those forums should have every right to censor the trolls' speech and ban them from the platform. Do you?
Of course you could in my (libertarian) world. You’d also be liable for the content on your platform if you control it. As you should be.
Even if the words are not my own? You would hold me responsible for the words of someone else?
Even if the words are not my own? You would hold me responsible for the words of someone else?
Standard practice in ink-on-paper publishing, even today. Nothing wrong with it, and a lot right with it.
Note also that the notion of a "platform" that is free from liability because it only publishes other people's words, and does not edit them, has no counterpart in ink-on-paper publishing. An ink-on-paper publisher which does that (a local classified advertising rag, for instance) remains 100% liable for everything it publishes.
"Note also that the notion of a “platform” that is free from liability because it only publishes other people’s words, and does not edit them, has no counterpart in ink-on-paper publishing"
So, in your imaginary world, somebody can sue Mead for what they write on Mead's paper products?
You keep changing your story.
You are responsible for the words of someone else if you choose to republish them. The only institutions who are not are big corporations like Google.
"
You are responsible for the words of someone else if you choose to republish them. "
if you CHOOSE. It's like you don't even notice that word. If I choose to let people write on the blank notepads I sell, am I liable for what they write? If I publish a book full of deep thoughts and insightful observations, and someone buys the book, and then takes a felt marker and writes something libelous on it, to whom are they (the writer of the libel) liable, and to whom am I (the writer of the book) liable? Are they the same? Now, how about the publishing house? To whom are THEY liable for the libel on the pages of their book?
You think you understand, but you just don't, despite repeated attempts to correct your ignorance.
Writing things in private is never libelous; libel is the publishing of false and/or damaging statements. If you publish, or participate in publishing, libelous statements, you may be held legally responsible, except for the crony capitalist exemptions carved out in Section 230.
I think you have just demonstrated again that you don’t understand at all what libel is about. The question is whether you are simply engaging in a campaign of disinformation out of self-interest or whether you really are that stupid.
Section 230 Is the Internet's First Amendment. Now Both Republicans and Democrats Want To Take It Away.
^Rent seekers seeking rent^
A plain reading of the 1A and section 230 readily reveals that the 1A protects private citizens and organizations from Congressional legislation and that section 230, a piece of Congressional legislation, protects private internet service providers from their customers, users, employees, competitors, etc.
Before section 230, reporters had to actually uncover facts and craft narratives sensitive to their readers' sensibilities. After section 230, Russian tweets elected Donald Trump. The rest of the CDA was repealed; it's time for section 230 to be as well. Reporters and the media, even if they aren't being sued, like section 230 because it further entrenches them as the right and reasonable source of knowledge and shields them from the consequences of passing off shoddy regurgitations of hearsay as real reporting. If you can saturate the market with the absolute shittiest of yellow journalism without any liability, the 1A is moot.
Well, it's clear SOMEBODY doesn't know how section 230 works.
Lots of people. The 1A starts off with "Congress shall make no law..."; section 230 is a piece of the CDA, a law passed by Congress in order to govern free speech on the internet. The 1A contains one explicitly named right, the right to petition. Section 230, a speech governing law passed by Congress, prevents people from petitioning. It's doubly-unconstitutional in the abstract.
On the ground, Google, Facebook, and Twitter can knowingly craft and distribute a narrative, no matter how damaging, cravenly untrue, and counterfactual. Journalists in the media are free to knowingly redistribute a narrative they know is false in order to attract advertising because they can pass the blame to Google, Facebook, and Twitter, who can't be sued. Then, when a story like Russian interference comes along, Congress *must* act because private individuals have had their right to sue Twitter for misinforming them stripped away from them.
Just because you think you know how section 230 works doesn't mean that's the only way it works. It's a little thing you may've heard of called unintended consequences.
Enter Sandmann
Say your [likes], little one
Don't forget, my [ungendered child]
To include everyone
I [follow] you in, warm within
Keep you free from sin
Till the sandman he [smirks]
[Tweet] with one eye open
[Biting] your pillow tight
Exit: [right]
Enter: [s]ight
Take my hand
[You]'re off to never never land
Something's wrong, [it's the right]
Heavy thoughts tonight
And they aren't of [non] White
Dreams of [facts], dreams of [fascists]
Dreams of [wrongthinkers]
And of things that will bite, yeah
[Tweet] with one eye open
Gripping your pearls tight
Exit: [right]
Enter: [site]
Take my hand
We're off to never never land
Yeah heah
Now I [log] me [in] to [tweet]
(Now I lay me down to sleep)
Pray [Google] my soul to [bleat]
(Pray the Lord my soul to keep)
If I die before I wake
(If I die before I wake)
Pray [Facebook] my [data] to take
(Pray the Lord my soul to take)
Hush little baby, don't say a word
And [always] mind that noise you heard
It's just the [racists] under your bed
In [our] closet, in [our] head
Exit: [right]
Enter: night
Grain of sand
Exit: [right]
Enter: night
[Report by] hand
We're off to never never land
Yeah ahaaha
Boo
Yeah yeah
Yo oh
We're off to never never land
[Censor by] hand
We're off to never never land
[Censor their] hand
We're off to never never land
Section 230, a speech governing law passed by Congress, prevents people from petitioning. It’s doubly-unconstitutional in the abstract.
How does Section 230 prevent you from petitioning your government (the enumerated right in the 1A)?
How does Section 230 prevent you from petitioning your government (the enumerated right in the 1A)?
A civil suit is how a citizen petitions their government for redress of civil grievances. Otherwise, only the King's men can be held in violation of the law and the Constitution and the king can just hire private mercenaries and freely disregard any complaints citizens may have against their violent oppression.
Hmm I had never regarded the right to petition in the First Amendment as being extended to include a right to file a civil suit. That's a creative reading of the 1A. Not saying it's wrong, just creative.
But even still, even if we go with your reading, CDA 230 doesn't pre-empt any civil lawsuit you may want to file against anyone.
"Not saying it’s wrong, just creative."
It's wrong.
It’s wrong.
Go on Twitter and tell AOC and Trump it is.
Telling you doesn't seem to help.
Hmm I had never regarded the right to petition in the First Amendment as being extended to include a right to file a civil suit.
IDK, it seems pretty plain to me. Not everything that causes a grievance is a crime and I'm pretty sure they didn't mean literally petition with signatures.
It means you can call your Congressman.
"On the ground, Google, Facebook, and Twitter can knowingly craft and distribute a narrative, no matter how damaging, cravenly untrue, and counterfactual."
If Google, Facebook or Twitter do it, and it's libelous, you can sue them for libel. But you can't sue Google because BigStud99 did it with a GMail account, and you can't sue Facebook because TrumpLiesAboutEverything said you own a MAGA hat, and you gave it to Goodwill.
But you can’t sue Google because BigStud99 did it with a GMail account
Which wouldn't be consistent with virtually any other form of media. If KBBL radio broadcasts libelous claims they heard from BigStud99, I can rightly sue KBBL for BigStud99's identity and any complicity they have in BigStud99's libel. It's up to a judge to decide whether they're in for just the ID or something more. If TrumpLiesAboutEverything solicits the NYT (or if they do it for free) to publish falsehoods about any MAGA hats I may or may not own, I can sue the NYT and they can't plead the 5th, the 1st, or necessarily claim anonymity of their source pending a judge's ruling.
But, for some reason, if BigStud99 and TrumpLiesAboutEverything both claim the same falsehood about me or you it gets bumped by various algorithms, Google can selectively reinforce it and even refuse my requests to remove it *and* my ability to bring suit against them is significantly diminished despite any and all involvement known and unknown. Keep in mind, this is the instigation of a civil trial even if they are innocent until proven guilty beyond a shadow of a doubt that has no bearing or indication of making them guilty beyond any/all accusations.
Bottom line, plain letter reading of the 1A says Congress shall make no law abridging free speech or the people's ability to petition their government and the plain facts are that the CDA, of which section 230 is/was a part, is a law passed by Congress specifically abridging people's ability to speak and petition against Digital Information Content Providers. Free Speech, even on the internet, existed well before the CDA and, by and large, the CDA apart from section 230 has been repealed. The idea that free speech on the internet couldn't exist without the CDA is a patent falsehood on its face.
Other than demonstrating that you don't understand the petition-for-redress part of the 1A any better than sec 230 of the CDA, what was your point?
Other than demonstrating that you don't understand the petition-for-redress part of the 1A any better than sec 230 of the CDA, what was your point?
You've proven beyond a shadow of a doubt that you don't actually care if congress passes a law abridging the people's free speech and I'm not talking specifically about section 230.
OK, so you don't understand the petition part of the 1A AND you are extremely poor at summarizing other people's opinions.
Anything else you need to add to your list of "bad at"s?
Section 230 is not the internet's First Amendment. It is the internet Megacorporations' "Heads I win, tails you lose."
It's them having their cake (liability protections as 'platforms') and eating it too (getting a free pass when they behave as content providers).
The big thing social media is missing is moderation. Many forums keep their cool and stay polite and on topic through moderation. But there's no way sites and services as huge as Facebook and Twitter could possibly moderate all comments.
One solution is self moderation. But it's a tricky prospect. Slashdot's metamoderation seems to work, but Reddit's voting scheme is a farce.
But I had an idea. One I borrowed from Richard Stallman. While Stallman and I agree on almost nothing, he does recognize in himself a counterproductive impulsivity. So in his email replies he waits ten minutes before posting. And oftentimes he ends up rewriting or just deleting his messages. It's a good idea that I've tried to follow because I can be just as much of an asshole hothead as the next guy.
So here's my idea: simply put a time filter on all social media posts and comments. Wait one hour for a post, or fifteen minutes for a comment. Then you're asked to confirm posting or commenting. Obviously, relax the limit for certain vetted sources (such as a news organization reporting on current events), but otherwise apply it to everyone. Make people stop and think about what they are saying. Get rid of that heat of the moment.
No, it won't fix things. We need a giant fix in the culture for that. But it may stop throwing more matches onto the forest fire.
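(For what it's worth, here is a minimal sketch of what that delay-and-confirm filter might look like. It is illustrative only: the function name, the one-hour and fifteen-minute delays, and the "vetted source" flag are taken from the idea above or assumed; no real platform exposes this interface.)

    import time

    # Hypothetical "cooling-off" filter sketch, per the comment above.
    POST_DELAY_SECONDS = 60 * 60       # one hour before a post can go live
    COMMENT_DELAY_SECONDS = 15 * 60    # fifteen minutes before a comment can go live

    def submit_with_cooldown(draft_text, is_comment=False, vetted_source=False):
        """Hold the draft for the cooling-off period, then ask the author to confirm."""
        delay = 0 if vetted_source else (
            COMMENT_DELAY_SECONDS if is_comment else POST_DELAY_SECONDS)
        time.sleep(delay)  # in a real system this would be a scheduled job, not a blocking sleep
        answer = input("Cooling-off period is over. Still want to publish? (y/n) ")
        return draft_text if answer.strip().lower() == "y" else None

(The confirmation step is the whole point: the draft only goes out if the author still wants it after the delay.)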
No, the solution is to have no control on content except to remove illegal or slanderous content when it is pointed out to the service provider. That is the system we were supposed to have when 230 was passed. I don't see why depriving Google or Facebook of the ability to deny their platform to people they don't like but in return giving them immunity from the content posted on their platforms is a bad state of affairs.
> That is the system we were supposed to have when 230 was passed.
There is no "supposed to". What we have now is an unplanned emergent order. All 230 says is don't hold providers legally accountable for posts and comments made by their users. Nothing in there saying they can't moderate. Nothing in there saying they can't have voting systems. Nothing in there saying they can't take down posts they don't like. Nothing in there saying people can post shit all day long and the site can't do a damned thing about it. All it says is that the government can't throw the cops at them if they do.
What we have now is an unplanned emergent order. All 230 says is don’t hold providers legally accountable for posts and comments made by their users.
No, that is not all it says.
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
You know where 230 sits in the US Code? It is in Title 47 and specifically the common carrier section.
And it defines interactive computer service as
(2) Interactive computer service
The term “interactive computer service” means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.
I would also point you to the policy subsection. It states, among other things:
(3) to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services;
When you read the act in its context of where it is in the US Code and you see the point of the act was to give users the control over the information not the internet providers, it is pretty clear that the assumption behind this was that internet providers would act like common carriers and not be kicking people off of their services for anything other than obscene or illegal content.
You're reading way too much into it. The intent was to separate user comments from the site itself. Like a bulletin board, the board operators are not liable for what a user posts on the bulletin board.
There are no restrictions or compulsions on the services themselves.
Intent doesn't mean shit when it comes to practical use. The intent of the ACA was to allow people who liked their doctors to keep them. Intent doesn't mean shit.
" The intent of the ACA was to allow people who liked their doctors to keep them"
The only person who said anything about liking and keeping your doctor was Obama, and he didn't write the thing, and he was wrong about keeping your doctor if your insurance company couldn't reach a deal with your doctor.
Other than those slight corrections, you got that one dead on.
You're reading way too much into it. The intent was to separate user comments from the site itself. Like a bulletin board, the board operators are not liable for what a user posts on the bulletin board.
Incorrect. It was meant to obviate and 'rectify' judicial incongruities and to grant more power and/or selective freedom to bulletin board operators who moderated content. CompuServe had already been found free of liability for printing content which it did not moderate, and Prodigy was found partially culpable for content which it did moderate. Congress saw this as a dangerous incongruity which it felt would shut down a lot of internet platforms, and it granted protection specifically to moderated platforms.
To say the intent wasn't specifically to grant power to preferred speech and message crafting is to pretend that Section 230 wasn't passed as part of the CDA, the rest of which has largely been struck down.
It's net neutrality back before anybody had any idea about packet filtering, ISPs, backbone service providers, and how shitty an idea net neutrality really is.
All 230 says is don’t hold providers legally accountable for posts and comments made by their users.
Oh really? Strange how it's being used to make sure that they can't be sued for changing their ToS after the fact and then banning users for content posted prior to when the ToS were changed.
Sure is strange how Section 230 is being used to allow certain corporations to totally ignore contract law.
"Sure is strange how Section 230 is being used to allow certain corporations to totally ignore contract law."
Or would be, if, y'know, that were happening.
Or would be, if, y’know, that were happening.
How would you know? Alex Jones and countless Youtube channels can't even bring suit after getting shut down. A law passed by Congress prevents them from petitioning their government for redress of grievances from these contract violations.
" Alex Jones and countless Youtube channels can’t even bring suit after getting shut down. "
You know who can't bring suit? People who don't have a cause of action. Coincidentally, you know who doesn't have a cause of action? People whose contracts haven't been violated.
You know who can’t bring suit?
No one. Anyone in America can bring suit for anything at any time. The suit may be thrown out, but they can bring suit. Section 230, passed by Congress in clear violation of the 1A (along with the rest of the CDA, which you acknowledge was a fool's errand), proactively says who cannot bring suit.
People who don’t have a cause of action.
How does Congress know, a priori, who does and does not have cause of action and who can and cannot petition for redress of grievances? But, again, you've proven that you don't actually care if Congress does violate the 1A so long as section 230 stands.
"Alex Jones and countless Youtube channels can’t even bring suit"
"Anyone in America can bring suit for any thing at any time."
Go home. You're so twisted in knots you're contradicting yourself.
Also note: Mr. Jones magically went back in time and took my advice retroactively, and built his own site where he can make whatever rules he wants about what content meets their "acceptable use" policy. People who are interested in what he has to say remain free to go to his site and consume whichever and whatever of his content he sees fit to provide for them.
Now, boo-hooing about how everyone's out to get him IS part of his shtick, but nobody who doesn't want to has to listen to him. That's win-win-win. Mr. Jones has his platform, his fans know where to find it, and non-fans know where to avoid.
Also note: Mr. Jones magically went back in time and took my advice retroactively
It's not funny or cute, just naive, uninformed, and tiresome. It's not your advice and, believe it or not, Mr. Jones actually had a smaller audience on an older platform in a different market before he started his Youtube channel. His being booted from YouTube's service cost him time, money, and sponsors. The exact sort of thing that even without a contract in other circumstances would qualify as grounds to sue someone. But you don't care about facts or the law, you care that you can say 'win-win-win' on the internet and believe it despite loss after loss after loss.
"It’s not funny or cute, just naive, uninformed, and tiresome."
We agree on your traits.
" His being booted from YouTube’s service cost him time, money, and sponsors."
Perhaps he should have thought of that in advance, then? Surely it doesn't come as a surprise to him, at this late stage of affairs, that his act offends people? (Look how fast he ran away from it when his wife's defense lawyer started showing clips of his show...)
No, the solution is to have no control on content except to remove illegal or slanderous content when it is pointed out to the service provider. That is the system we were supposed to have when 230 was passed.
That isn't true at all. Section 230 itself presupposes that platforms will moderate content, including for reasons that are "otherwise objectionable," which is in the eye of the beholder alone.
Repeat after me, John:
SECTION 230 DOES NOT REQUIRE PLATFORMS TO BE IMPARTIAL
Yes it was you fucking RETARD. That is why it is in the common carrier section of the Code.
AGAIN, YOU ARE A FUCKING MORON AND AN EMBARRASSMENT TO THE HUMAN RACE. GO AWAY!!
No, section 230 does not, and never has, required neutrality.
The point of the law is to allow sites on the Internet that allow for user interaction to do partial moderation without incurring the status of "publisher" of the content of others. Such that, for example, they can remove comments that have four-letter words in them, without thus ALSO acquiring a mandate to read every damn word of every comment, in real-time, to certify that it is free of actionable material.
Specifically, they were concerned with material that is harmful to minors (assuming there is such a thing) being posted by anonymous third parties to websites by people who wanted to see that website be shut down or otherwise hassled by government.
Just like we don't want the phone company to listen in to all of our phone calls, and call the police whenever they hear people arranging illegal things with their phones, we don't want Internet websites censoring all our online postings just in case one might be illegal.
It is in the common carrier section of the code. Why is it there? Because the assumption was internet service providers were common carriers. There is no other explanation for it being there.
And that intervention in the market is justified... how?
And your concern for privacy is touching, but the fact is that Google actually does read everything you say and do. And that is a consequence of Section 230.
"that intervention in the market is justified… how?"
It's what users want.
They want interactive Internet sites that let them post and share content, but they don't want spam, and they also don't want to pay people to sit in a room somewhere, reading everybody's postings, before approving it.
" the fact is that Google actually does read everything you say and do."
Google doesn't read anything I say or do. The NSA might, but Google doesn't.
Because “users want” regulatory capture, the government should give it to them? What kind of “libertarian” argument is that?
We’ve had that since long before Google.
But people are paying Google, massively, with their free content and their eyeballs.
All your E-mails, chats, images, and other contents are passed through Google's content analysis engine and, if flagged by that, examined by live human beings; it's just like how the NSA and other spy agencies work.
"What kind of “libertarian” argument is that? "
People getting what they want is libertarian. You are presently arguing the opposite, that people should NOT get what they want.
"We’ve had that since long before Google."
And? This is a relevant response because...
"All your E-mails, chats, images, and other contents are passed through Google’s content analysis engine"
This is both false AND a touch paranoid.
People getting what they want through the government is (democratic) socialism. Libertarianism is about individual liberties, free markets, non-aggression, and personal responsibility. As you may have noticed, libertarianism isn’t very popular because people actually don’t really want it.
It’s relevant because it shows that Section 230 and Google are unnecessary.
How do you think Google adds your flight and restaurant reservations to your calendar from your email without analyzing your messages? How do you think Google notifies you of package deliveries without analyzing your messages? How do you think Google puts spam, commercial offers, and important messages into separate mailboxes without analyzing them? How do you think ads for some product category follow you around the Internet for days if you search for a product in that category or otherwise mention it?
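To put a concrete sketch behind those questions: the calendar trick, for instance, only works if something like the following runs over the text of every message. This is a minimal illustration in Python, not Google's actual pipeline; the confirmation format, the function name, and the event fields here are all made up.

import re
from datetime import datetime

# Hypothetical pattern for a flight-confirmation email; real services handle
# many formats, but the principle is the same: the body has to be read.
CONFIRMATION = re.compile(
    r"Flight\s+(?P<flight>[A-Z]{2}\d{2,4})\s+departs\s+"
    r"(?P<date>\d{4}-\d{2}-\d{2})\s+at\s+(?P<time>\d{2}:\d{2})"
)

def extract_flight_event(email_body):
    """Return a calendar-style event if the message looks like a flight
    confirmation, otherwise None."""
    m = CONFIRMATION.search(email_body)
    if not m:
        return None
    start = datetime.strptime(m["date"] + " " + m["time"], "%Y-%m-%d %H:%M")
    return {"title": "Flight " + m["flight"], "start": start}

# Example: the calendar entry only exists because the body was analyzed.
print(extract_flight_event("Flight UA1234 departs 2019-08-01 at 09:30"))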
Nobody can be as stupid as you pretend to be; obviously, you’re engaging in a campaign of lies and disinformation.
Just like we don’t want the phone company to listen in to all of our phone calls, and call the police whenever they hear people arranging illegal things with their phones, we don’t want Internet websites censoring all our online postings just in case one might be illegal.
You do realize that Google (and by extension Gmail), Facebook, et al. have an algorithm scan all of their user content before it's sent out, right? So if you attach illegal content to an email before sending it, Google gets alerted and they turn around and provide that content to the authorities. They read all of your shit even if there isn't a human doing it. You have no privacy with Big Tech. Their business model is built off of providing an accessible service and charging you $privacy in return.
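The screening being described here is usually hash matching against a clearinghouse database of known illegal material (in practice, perceptual hashes such as PhotoDNA rather than exact digests). Here is a crude sketch of the idea using plain SHA-256 and a hypothetical blocklist; it illustrates the technique, not any particular company's system.

import hashlib

# Hypothetical blocklist of digests of known-bad files; real systems get
# perceptual hashes from a clearinghouse rather than exact SHA-256 digests.
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def screen_attachment(data):
    """Return True if the upload matches a known-bad hash and should be
    escalated for human review and reporting."""
    return hashlib.sha256(data).hexdigest() in KNOWN_BAD_HASHES

# Example: b"test" happens to hash to the blocklisted digest above, so it
# gets flagged; anything else is delivered without a human ever reading it.
print(screen_attachment(b"test"))        # True
print(screen_attachment(b"vacation"))    # False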
The flaw in your argument is that you assume that Google has access to any of my email.
I didn't make an argument.
Agreed it wasn't much of one.
No. Moderation is moderation. There is nothing inherently wrong with it, it is just a form of content control.
But once you do that you should not be afforded the state sanctioned protections afforded to platforms (which, by definition are therefore content neutral.)
By that logic, a brick-and-mortar business that goes and scrubs graffiti off its walls gets in trouble if it's too slow to scrub off the graffiti and someone sees a slur they don't like.
Section 230 is not state sanctioned protection, it's a restriction AGAINST THE STATE ITSELF! It prevents the state from stomping you down with its jackboot for something someone else posted.
By that logic, a brick-and-mortar business that goes and scrubs graffiti off its walls gets in trouble if it's too slow to scrub off the graffiti and someone sees a slur they don't like.
And that is the logic of the common law. If someone paints "Brandybuck fucks sheep" on the side of my wall and I let it stay there even though you told me it is libelous and that you don't fuck sheep, at some point I take ownership of the statement.
You people refuse to comprehend that there could ever be any consequences for your actions.
And the whole arguments presumes that you have an established pattern of allowing people to write on that wall.
Otherwise the first person to write on that wall without your permission has harmed you. REGARDLESS of what was written.
Correct. And that is as it should be.
It protects certain companies from liability in ways other companies and individuals are not protected from liability. That’s a special privilege, and one that has created censorious and privacy invading near-monopolies.
"Section 230 is not state sanctioned protection"
Mind boggling that anyone could utter something as woefully incorrect as that.
platforms (which, by definition are therefore content neutral.)
Umm wut? A platform is not "content neutral" by definition. A platform is a place for the owner of that platform to host content, according to the whims of the owner.
Correct. And without Section 230, we’d likely have millions of little platforms, each individually responsible for their content.
With the special privileges of Section 230, we end up having half a dozen platforms dominating everything because they don’t have to fear liability anymore.
You have that completely backwards. We DO HAVE millions of little platforms RIGHT NOW. They are allowed to thrive because their owners can be ordinary people like you and me, and not large corporations with legal departments having to monitor every word on the platform to make sure it doesn't run afoul of some law.
You obviously don’t understand how the Internet works.
If the thrust of your argument is that he doesn't understand the Internet because he knows that websites exist, there's a flaw in there somewhere.
Someone who claims that there isn’t a problem with concentrated corporate and political control on the Internet right now is obviously out of touch with reality.
Furthermore, such concentration of corporate power is almost always the result of ill-conceived government policies; in this case, Section 230 and a few other policies are the obvious culprits.
And it’s fascinating that jerks like him and you defend such regulatory capture with bogus appeals to “free speech” and “private property” and “it’s what users want” and “Google doesn’t read any of my messages”. It’s either a case of motivated reasoning (i.e. you work for one of those companies) or utter and complete ignorance.
"Someone who claims that there isn’t a problem with concentrated corporate and political control on the Internet right now is obviously out of touch with reality."
There isn't a problem with concentrated corporate control on the Internet right now, in this reality. There is a problem with political control, in China, but section 230 is the topic, and it does not apply in China.
"Furthermore, such concentration of corporate power is almost always the result of ill-conceived government policie"
That's phenomenally stupid. Concentration of corporate power on the Internet comes from network effects. Section 230 has nothing whatsoever to do with it.
"it’s fascinating that jerks like him and you"
Oh, noes. The idiot thinks I'm a jerk for understanding things better than he does. No, wait, it can't possibly be that I know what I'm talking about and he doesn't... it must be motivated reasoning or I work for one of these companies. No, sorry, it's just because you are stupid, whereas I know what I'm talking about.
I’m glad you agree that there is concentration of corporate power on the Internet.
Alphabet has $115b in cash alone, and even more in other assets. That makes them a very juicy target for lawsuits. Legal liability creates diseconomies of scale, and the stated purpose of Section 230 is to eliminate the major class of liabilities for a publisher.
Without Section 230, small corporations have a big advantage over companies like Google because small corporations simply aren’t very attractive targets for lawsuits.
Thanks for proving my point.
Try reading the law, you twit. Read the whole thing.
https://www.law.cornell.edu/uscode/text/47/230
Section (f)(3)
It's in the definition of the terms. Once you alter posted content (beyond the specified safe harbors for "offensive material") you cease to be a platform and become a content provider.
Well that is a creative reading of the definition.
If I post a tweet on Twitter, then I am the "information content provider" in this context, and Twitter is the "interactive computer service". Agreed?
Now if Twitter then suspends my account and will only un-suspend it if I delete the tweet, I'm having a hard time then seeing how Twitter now steps into the role of "information content provider".
Even if Twitter were to outright erase my tweet, I don't see how that would be the case still. They would only be in that role if they modified the contents of the tweet in some way.
"modified the contents of the tweet in some way."
Forcing deletion doesn't do that?
Well, if your tweets are as devoid of content as your posts here… I can see that.
Twitter didn't modify the contents of any tweets, in this scenario.
They demanded they be REMOVED.
Deletion is a fundamental change of content.
"But once you do that you should not be afforded the state sanctioned protections afforded to platforms (which, by definition are therefore content neutral.)"
Section 230 arose from efforts to make the entire Internet safe for children. That was a fool's errand.
But you have businesses attempting to deliver what their customers wanted. They wanted websites that were interactive, and allowed them to create and share content. They also wanted to be free from spam. No, I don't want to buy v1ag4a pills from your unlicensed pharmacy. I don't want to hear from any more widows of former oil ministers needing a little assistance to get money out of Nigeria. AND, when I post something, I want it to be visible now, not 12-36 hours later after someone's had a chance to look it over to make sure that I'm not an unlicensed pharmacist or former oil minister.
To deliver that combination, human moderation is out of the question (takes too long) but no moderation at all is also out (because spam). That leaves partial moderation, with sophisticated spam detection that is automated, backed up with interrupt-driven human moderation ("say, people keep flagging this particular posting. Somebody better take a look at it.")
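A toy sketch of that partial-moderation loop (the spam check, the flag threshold, and every name here are invented, purely to illustrate the shape of it):

from collections import defaultdict

SPAM_MARKERS = {"v1ag4a", "oil minister"}   # stand-in for a real spam classifier
FLAG_THRESHOLD = 5                          # invented trigger for human review

flag_counts = defaultdict(int)
human_review_queue = []

def submit_post(post_id, text):
    """Automated pass: publish immediately unless the filter rejects it."""
    if any(marker in text.lower() for marker in SPAM_MARKERS):
        return False    # auto-rejected; visible to no one
    return True         # visible now, not 12-36 hours from now

def flag_post(post_id):
    """Interrupt-driven human moderation: enough flags and somebody looks."""
    flag_counts[post_id] += 1
    if flag_counts[post_id] == FLAG_THRESHOLD:
        human_review_queue.append(post_id)

# Example: the pharmacy spam never appears; the borderline post only reaches
# a human after enough readers flag it.
submit_post("p1", "Cheap v1ag4a pills here")
for _ in range(FLAG_THRESHOLD):
    flag_post("p2")
print(human_review_queue)    # ['p2']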
Pull the liability protection for sites (like, presumably, this one), and you get a limited subset of possible responses. One is to simply substitute a corporate liability shield, which is freely available to the bigger players who can afford the legal help to set it up and make it airtight. Another is to simply operate judgment-proof operations. Third, you can just pull user interaction entirely.
^ Exactly this. An excellent summary.
"To deliver that combination, human moderation is out of the question (takes too long) but no moderation at all is also out (because spam)."
The FCC just permitted cell carriers --- undeniably platforms, mind you --- to finally work on something to deal with spam robocalls.
What do cell carriers and robocalls have to do with liability for user-supplied content on websites?
You're saying platforms cannot handle things like spam.
Providing examples of platforms specifically dealing with things like spam.
I thought you'd keep up with your own analogy.
"You’re saying platforms cannot handle things like spam"
I'm... what now? Huh?
James Pollock, you seem to ignore who the customers are. They are not the people who access Facebook. The customers are the advertisers. Those are the ones spending the money to make the big social media companies rich.
Far from being customers, the people who access the sites are the raw material being sold to the customers. That raw material has been processed into monster databases, featuring personal information about all who access social media sites. That data makes targeted advertising work better, and delivers a decisive advantage in competition for advertising sales.
In any direct competition, no one without a comparable database has a chance. Only some new gimmick to engage site viewers could provide a business workaround, but even that could not support a direct attack on an established internet giant.
If a workaround succeeded, it would merely reshuffle the monopolists among the oligarchy slightly. Facebook and the others would remain untouched. Only the size of the database, and the software to deploy it for ad sales, count. Inventing a means to engage a limitless supply of site visitors is the key. Variations in site content and quality barely matter.
That business model is what Section 230 enabled: hoovering up an unlimited supply of internet traffic to build an unlimited database. Doing that was the great accomplishment of the internet publishing invention. It has thus delivered a crushingly monopolistic publishing business model on a national scale.
Section 230 was critical, because only by relieving publishers of liability for what they published could the publishers be freed to publish everything, and build their databases accordingly. The databases became the hearts of their enterprises. The databases provide abiding assurance that meaningful competition will remain a pipe dream. For a few companies, Section 230, plus being there first, turned network effects into a near-permanent advantage.
Imagine being a start-up publisher with a fantastic endowment—maybe a few hundred-million in venture capital—and a business plan to take on Facebook. Why would any advertiser even look at you? How big is your database? Some tiny percentage of Facebook's, at best. You could not accomplish a meaningful direct challenge with even a few billion to invest.
So in your taxonomy of consequences from the repeal of Section 230, you missed the most important one—the dissolution of otherwise impervious internet monopolists. With that, new smaller publishers could take advantage of a copious fount of newly-freed-up advertising revenue. Deprived of their ability to publish everything, the monopolists' databases would suffer decline. Advertising money would seek new outlets, and support a plethora of new publishers.
With publishing monopolies replaced by profusion and variety, all these cacophonous cries for government intervention to regulate the free press would quiet. Internet publishing has been edging ever-closer toward three perils: monopoly, government censorship, and loss of public support for press freedom. With Section 230 out of the picture, all three threats would recede.
"Imagine being a start-up publisher with a fantastic endowment—maybe a few hundred-million in venture capital—and a business plan to take on Facebook. Why would any advertiser even look at you?"
Because you can deliver eyeballs.
A few decades ago, all the advertising went to big 3 network broadcasts. That's where the eyeballs were. Why would an advertiser even talk to some upstart who said they were starting a fourth broadcast network? Pfft! CABLE TV channel, what's that? Internet ads? You must be joking. Why would an advertiser be interested in any of those things. All the eyeballs are watching ABC, CBS, and NBC, and obviously always will be.
The notion that absence of section 230 means that large Internet companies will collapse and be replaced by small ones is the sort of thing that someone who had no understanding of business might put forth. Ford has no section 230 protection. The number of small car companies competing with them is...
I have no idea whether removing Section 230 at this point will cause Google and Facebook to collapse, but that really doesn’t matter. The point is that Section 230 is regulatory capture and therefore should be abolished, no matter what the effect may be.
Thanks for bringing up this example. In fact, the auto industry has engaged in a century of regulatory capture, rent seeking, and crony capitalism; that is the reason why there are few small competitors to companies like Ford.
How about you just man up, brandy
it's great
As always, it is amusing to note that it is Section 230, specifically, that allows random people on the Internet the opportunity to argue about whatever-the-topic-of-the-day is here on this website.
Take away the liability shield, and the easiest way to eliminate the risk of liability arising from allowing random commenters on the Internet to comment, is to simply disable the "allow comments" function.
Yes, that is exactly what will happen.
Well, it's one of the things.
The other thing that will happen is that companies big enough to afford the lawyers will reorganize to use a corporate liability shield in place of section 230's safe harbor liability shield.
Or pull a double-offshore, and keep few resources within the U.S., subject to seizure by US legal authority.
Without Section 230, we wouldn’t have companies like Google in the first place, because it’s not feasible from a liability point of view.
What we would have instead is distributed content platforms, platforms in which individuals own what they post, can’t be censored, and are legally responsible for it.
For some reason, you think that’s a bad thing.
"Without Section 230, we wouldn’t have companies like Google in the first place, because it’s not feasible from a liability point of view."
That's just stupid, assuming by "companies like Google" you mean "corporations" which have... a liability shield.
So, Section 230 is therefore redundant. Got it.
You misunderstand. Section 230 can be made to be redundant, by companies big enough to pay the lawyers to make it be that way.
Section 230 ALSO applies to all the other websites that allow users to post content, many of whom are not corporations with enough money to pay lawyers to build them a corporate structure with no liability, and some of whom aren't even run by corporations at all.
Small corporations rarely get sued for this kind of stuff because there is nothing to recover. But big corporations like Google have deep pockets and are prime targets for these kinds of lawsuits.
I can’t tell whether you are just profoundly ignorant or engaging in motivated reasoning. But don’t try to pass off your rent-seeking crap as libertarian.
"Small corporations rarely get sued for this kind of stuff"
And when they do, it's catastrophic for them because they are small corporations with few legal resources. And when a nonprofit or individual gets sued for this, all the money goes to the bankruptcy attorney.
"I can’t tell whether you are just profoundly ignorant or engaging in motivated reasoning. "
Do you react this way EVERY time you run into someone who knows more than you do? If so, you should be well used to it by now.
Now again, in short words so you can follow along.
You
are
dumb.
Indeed, if the market consisted of 40,000 $10 million publishers, as opposed to a single $400 billion publisher, lots of them would get sued and lots of them would go out of business every year. But what's wrong with that? That's how healthy markets operate. Companies that make mistakes get punished for them and their owners can go on and do better next time.
You erroneously claimed that the act of libel consisted of writing down false and damaging statements, and you erroneously claimed that Google does not apply content analysis to all E-mails. I have no idea whether that is out of ignorance or whether you are engaging in a deliberate campaign of lies and misinformation. Either way, your errors are fortunately easily demonstrated.
You’re halfway there understanding it.
You’re right that limited liability entities don’t need Section 230 because their liability is limited. So Section 230 is not needed to protect small publishers because their liability limitations are sufficient. That’s why the justification for Section 230 is b.s.
So what is Section 230 for? It’s for protecting multi-billion dollar corporations who want to monopolize entire sectors, because without Section 230, they are big, juicy targets for lawsuits.
"You’re halfway there understanding it."
That's halfway more than you.
You suffer from Dunning-Kruger.
Profoundly
Sure. How would I know anything about computers? I only have two degrees in the field and a lifetime of work experience. Law? All I have there is a law degree and having passed a state bar exam.
You suffer from stupidity.
Well, the good thing about reason is that people can examine arguments for themselves and draw their own conclusions.
As for your credentials, you’re in good company: Google and Facebook management and many academics hold the same beliefs you do. It’s hard to get a man to understand something when his livelihood depends on him not understanding it.
"You’re right that limited liability entities don’t need Section 230 because their liability is limited."
This makes no sense.
Limited liability just means that if the company goes bankrupt the shareholders will at worst only lose the money they have invested, rather than having to deal with debt collectors coming after them for the company's debts. The company can still go bankrupt after a lawsuit.
So in fact smaller publishers do still need Section 230 to protect them.
Even a smaller company earning a couple million dollars still has deep enough pockets that someone who wins a lawsuit against it can gain a large payday.
Comment sections are disappearing across the internet, which is what stoked the creation of the "Dissenter" service -- which was blocked from the Google and Apple app stores even though they admitted it didn't violate any of their terms of service.
The rabble WILL be kept in line.
If you're a developer and want your app on their store you have to... well, read it yourself:
That means your Terms of Service needs to be just like theirs- even if your app by itself doesn't violate anything.
Gosh, you mean Google can exercise discretion in which products they choose to offer in their store, unlike every other kind of store in the world?
They absolutely do and they're liable for lawsuits if they violate their contracts with their suppliers.
Sure they can, they just shouldn't enjoy special protections from liability that 'every other kind of store in the world' does not have.
" they just shouldn’t enjoy special protections from liability that ‘every other kind of store in the world’ does not have."
wait. I can sue every retailer in the world because of what you just said? Because they aren't shielded from liability just because they didn't say it, you did? Cool.
You have a good legal case against retailer X who allows libelous or privacy invading material by customer Y to be visible in their store.
"who allows".
To make that stick, you have to prove A) that they knew the libel was there, B) that they knew it was libel, and C) that they let it stay anyway.
If some random customer leaves a piece of paper inside a book at a bookstore, and you buy the book and take it home and then find the piece of paper, notice any of A, B, or C missing?
Correct: if a random customer leaves a piece of paper inside a book at a bookstore, someone buys the book, then the bookstore would not be liable (though a court of law might still have to look at the circumstances).
If, on the other hand, the bookstore puts up a community bulletin board or publishes a circular that includes customer quotes, then the bookstore would be liable.
Ergo: bookstores do not have a blanket exemption for user-generated content; their liability is determined on a case-by-case basis. And the same should be true for Google and Facebook.
Correct. But that doesn’t mean that without Section 230 there would be no discussion. What we would get instead is systems where individuals host, distribute, and own their own comments and content and are responsible for it.
Section 230 is part of a set of rent-seeking government regulations that allows a few corporations to monopolize and monetize user-created content from billions of users while pushing ads on people. Under normal liability rules, that would not be a feasible business plan.
As always, it is amusing to note that it is Section 230, specifically, that allows random people on the Internet the opportunity to argue about whatever-the-topic-of-the-day is here on this website.
That's weird because I was having random arguments about whatever-the-topic-of-the-day was well before section 230 passed. Usenet is still a thing.
Take away the liability shield, and the easiest way to eliminate the risk of liability arising from allowing random commenters on the Internet to comment, is to simply disable the “allow comments” function.
So, what you're saying is you could still run a whole business from the internet without incurring liability as long as you don't mindlessly redistribute other people's content, selectively reinterpret or edit the content against their wishes, or deliberately misrepresent facts and the truth by either one of the two previous actions? Huh.
It's almost like all this bullshit was largely settled closer to the invention of the printing press, the telegraph, and radio waves but, somehow, the internet is completely different because HTTP/IP or something.
"So, what you’re saying is[...]"
You quoted me accurately, and then went on to say that what I was saying was something totally different. Why? Was it too complicated for you? Should I have used shorter words?
You quoted me accurately, and then went on to say that what I was saying was something totally different. Why? Was it too complicated for you? Should I have used shorter words?
If your words were perfectly clear and contained only one possible interpretation or intent, I wouldn't have been able to do as you assert. I was able to say something totally different with your words because you assumed your intent. It's the sort of thing the internet totally fucks up and things like editing, formatting, and distribution can really fuck with. The sort of thing Section 230 insists information content providers can't be culpable of even if they do it intentionally. Even if I'm just a bot and, say, Reason is wholly responsible for this (mis)interpretation.
" I was able to say something totally different with your words because you assumed your intent."
You were able to say something totally different because you didn't care if you were accurate.
That, plus the shorter words thing.
You were able to say something totally different because you didn’t care if you were accurate.
It's the internet and you've made it clear you don't care about accuracy.
Now you're accurately summarizing me. Congratulations!
As always, it is amusing to note that it is Section 230, specifically, that allows random people on the Internet the opportunity to argue about whatever-the-topic-of-the-day is here on this website.
No it doesn't, it merely shields the platforms from liability. The idea was to protect internet companies from the inevitable and crazy shit that people posting will post to their platforms with the idea that it would reduce the fear and risk of creating new platforms. I'm all for section 230 and keeping it intact. But not discussing the elephant in the room seems willfully ignorant.
"No it doesn’t, it merely shields the platforms from liability."
Why would a business want THAT?
Yes, it’s a really sweet deal when one group of businesses gets exemption from liability while another group doesn’t. Libertarianism FTW /sarc
What group lacks exemption from liability for acts they don't commit?
Internet publishers have blanket exemption from liability for acts they did commit, namely publishing libelous material. Other publishers do not have such a blanket exemption.
Bars, restaurants and nightclubs aren't held liable for defamatory speech made by the people who go there. (And to head off your inevitable reply, they can in fact kick you out for ALMOST any reason.)
If a news organisation decides to ask random people, on LIVE TV, their opinion on some topic, then it's not going to be held liable if one of those people suddenly blurts out a random defamatory statement.
What publishers are held liable for 3rd party content that can be created by any random member of the public without prior approval?
I'm going to have to read this post in detail, but here are my thoughts about 230: It's a chimera for the real problem, for which I don't really have an answer.
Firstly, I would like to take issue with the headline. Section 230 is not the first amendment for the internet, the First Amendment is the first amendment for the internet. Section 230, from my understanding is about liability. As much as I have come to distrust the Big Tech firms, we don't seem to have a problem with liability-- less Kamala Harris arresting Backpage execs and parading them in orange jumpsuits on TV.
What we have is a cultural problem where fear has gripped everyone in any kind of power. The speed at which corporations have adopted a kind of social justice agenda* is alarming, and this fight is about the fear these corporations have for allowing wrongthink on their platforms. Everyone should be concerned about how swiftly anyone with a public persona has been run out of their careers or jobs for even being perceived to have engaged in what's broadly termed "problematic behavior".
Conservatives are confused about 230 requiring them to be 'content neutral'. The left is just power hungry as they always have been.
*Brendan O'Neill has discussed this rapid adoption of so-called civil rights issues by every institution, both public and private.
In essence, what we've entered into is a kind of Orwellian discourse in our public discussions taking place on the internet. Not Orwellian as so many people like to throw the term around carelessly, but Orwellian in the specific sense, that the language is being aggressively policed, wrongthink is swiftly condemned and acted upon, coordinated efforts to de-person or deplatform anyone outside acceptable norms, and even some people have cited incidents of "facecrime" (the Covington Kids were an example of that).
Wrapped up in here is what in my opinion is possibly an opportunity for civil action due to Terms of Service violations by the Big Tech firms. These tech firms like Youtube have created a method of "revenue sharing" which has led to people making their livelihoods on these platforms. That Terms of Service is a contract which works both ways. Everyone seems to agree that the platforms are sanctioning users without giving any specific violation of their terms of service-- and also applying the terms of service in wholly inconsistent ways. That TOS IS a contract (IMHO) and I think there's opportunity for real legal action in the courts regarding how the platforms are essentially breaking that contract with their creators.
"Conservatives are confused about 230 requiring them to be ‘content neutral’. The left is just power hungry as they always have been."
It's not confusion. While 'content neutral' does not appear anywhere in the law, the NET effect of the law should be content neutrality - on platforms - with the limited ability to alter content within the described safe harbors. Outside of those rather more specific safe harbors, the definitions sections should be broadly applied.
And, as such, these services - as they presently behave - are not platforms; they are information content providers.
Under 230, as written, content providers should not enjoy the liability protections of those safe harbors. The law is being misapplied, and as such if the courts cannot correct themselves it is up to the legislature to alter or abolish the law.
"... content providers should not enjoy the liability protections when altering content outside of the limits of those safe harbors... "
e.g. - altering content for political or business reasons.
And, as such, these services – as they presently behave – are not platforms; they are information content providers.
While true, section 230 makes no such distinction. The closest distinction it makes between platforms and content providers is between information content providers and access software providers. But it provides no protections or proscriptions for access software providers and only mentions them in passing. You are right that the legislature broke it and the legislature needs to fix it, but the underlying issue is that common carrier protections for (e.g.) phone service exist, and section 230 is a protectionist scheme *meant* to extend such immunity to content generators/providers (the implication that common carrier status is itself a protection scheme is intended).
"section 230 makes no such distinction."
Read. The. Fucking. Thing.
Information content provider is a term clearly defined in the law.
It is an unfortunate mistake that the definitions appear at the end of the statute. But that does not mean that those words do not apply to the interpretation of the whole statute. Read them first, then read everything else within their light.
Read. The. Fucking. Thing.
The word 'platform' doesn't appear once in the entirety of section 230.
" Section 230, from my understanding is about liability. "
Sounds reasonable. Our government wants to see high tech widely adopted. Having Internet publishers dragged through the courts doesn't help. Hence, liability protection. It's very similar to the liability protections afforded to nuclear power generation, another industry our government bends over backwards to promote and protect.
So, you admit then that it’s a progressive policy intended to support rent-seeking corporations.
And hence libertarians say: it should be abolished and the market should be allowed to operate.
"So, you admit then that it’s a progressive policy intended to support rent-seeking corporations."
Governments support rent seeking corporations. Of course I admit it. I'm not sure though, that our problems will be solved by leaving them to something as nebulous and ill-defined as 'the market.' That seems pollyannaish.
Well, I’m glad we have identified our difference: you desire rent seeking and distrust markets, while I consider rent seeking to be corrupt and want an economy as close to free market capitalism as possible.
Yes the market should be allowed to operate by not holding companies liable for 3rd party content that they did not specifically pre-approve.
"The speed at which corporations have adopted a kind of social justice agenda* is alarming"
The better to control a population with, my dear
>>>Imagine, for a moment, the following series of online exchanges.
imagine a world with less commas. also try to not imagine online exchanges. at all times there is something better to do.
I'm imagining a world with fewer commas.
even better gracias.
This is why libertarians are always a joke; because when everyone decides to share unlibertarian values, libertarians won't defend what they have because doing so is itself unlibertarian. That's why it will always be a pipe dream and not popular. Popular ideas are practical ones that address issues today, a week from now, and going forward. Not some hypothetical prayer for the world and moral handwashing.
"This is why libertarians are always a joke"
No, the reason why libertarians are always a joke is that no two of them agree about anything... specifically including on just what, exactly, it means to be a "libertarian".
A lot of arguments with libertarians boil down to “fuck off slaver” followed by “why won’t people vote for my preferred policies?”
>>>That’s why it will always be a pipe dream and not popular.
popularity is for the self-afraid.
popularity is for the self-afraid and politically successful.
yeah but who wants to be those assholes?
We're better off hiding behind our ideology rather than trying to address people's practical concerns, amirite?
implication being anyone in DC is not hiding behind their ideology rather than addressing people concerns?
The implication being that libertarians don't have anyone in DC because libertarians make it a principle to refuse to address anyone's concerns.
"and no faction gets to dictate the terms by which the entire internet has to play."
Factually untrue. Just as an example, on Twitter, if you refer to that Yaniv bitch as a man, you will be banned for "misgendering", even though, biologically, it is 100% accurate. That's per Twitter's expressed policy. Most social media companies are heading that route as well.
Can YOU explain why Instagram banned Paul Joseph Watson, given his page consisted of pictures of shit like sunsets and that was it?
"Tomorrow, the event invite in such a scenario might be for a Black Lives Matter rally, a Libertarian Party fundraiser, a Mormon church group outing, an anti-war protest, or a pro-life march. "
And if they decide to stifle one of those groups...then what? What is your proposal then?
"'and no faction gets to dictate the terms by which the entire internet has to play.'
Factually untrue. Just as an example, on Twitter..."
Twitter is not the entire Internet. If you find Twitter's terms of service not to your liking, demand a full refund of what you paid them and take your business to a different corner of the Internet.
You can't demand a full refund - the company is notorious for not letting people know how much data they collect and what they do with it. Good luck trying to get them to refund your data.
Twitter, google and Facebook are the vast majority of the internet.
"Twitter, google and Facebook are the vast majority of the internet"
No, sir. Porn is the vast majority of the Internet.
So, they can take my data and don't have to abide by their TOS?
Intriguing.
That's what you get for agreeing to a contract without reading it.
They reserve the right to change the TOS as they see fit, and you accept the new terms by continuing to use the service.
Unless you decline to continue using the service, of course.
"The reserve the right to change the TOS as they see fit"
I bet that'll live up to even the most rudimentary legal challenge.
You have to read all the way to the end of the sentence.
They reserve the right to change the terms of service, AND you agree to those changes to the terms by continuing to use the service.
Yeah. That'll stand up to the most rudimentary legal challenge. And the ones raised by people smarter than you, too.
If you don't agree to the terms of service, or to their changes to the terms of service... stop using the service. Or don't start in the first place, which is the easiest way to stop.
If you find Twitter’s terms of service not to your liking, demand a full refund of what you paid them and take your business to a different corner of the Internet.
Putting aside monies changing hands-- if we agree that the Terms of Service is a contract, then Twitter may very well be in breach of contract. If we want to bring monies into the discussion, when it comes to services like Youtube and Patreon where monies are in fact changing hands, then when the platform violates its own terms of service, I suggest there is in fact a possible case here for civil suits and possibly an FTC complaint if there appears to be collusion and coordination between the major firms.
" if we agree that the Terms of Service is a contract, then Twitter may very well be in breach of contract."
"may very well be" and "is" are wildly different concepts.
Doesn't change the equation. If you find that Twitter is in absolute breach of their contract with you, demand a full refund of what you have paid them and take your business to a different corner of the Internet.
If you find that Twitter is in absolute breach of their contract with you, demand a full refund of what you have paid them and take your business to a different corner of the Internet
I know you think you're being clever here - "hehe, once they realize they haven't had to pay money to the platform, my point will be proven!" - but it's not a very strong point. Data has value and it has been provided to the company in return for their service. When there's a breach of contract, there are typically remedies for such a breach. For a contract to be considered valid there must be consideration. The consideration you are providing to these platforms is your data - data which should absolutely be returned to you or purged upon breach of contract. Try sending a letter to them to request a purge of your data and see how far that gets, or even whether they respond.
" Data has value and it has been provided to the company in return for their service."
And... if you find the service of no value... you should stop giving them any of your data. Yes. That is the point.
" Try sending a letter to them to request a purge of your data and see how far that gets"
It would get as far as someone saying "who the hell is this guy? He doesn't even have an account with us!"
PS:
Note that I said to demand a full refund, but did NOT say you should expect to receive one.
It's actually the meta-data (data about data) that is of importance to the High Tech Giants. Unlike content, users have no control over their meta-data. The High Tech Giants harvest this from subscribers and non-subscribers alike.
" The High Tech Giants harvest this from subscribers and non-subscribers alike."
This is neither limited to tech nor new. Ever hear of a "Nielsen rating"?
The entire field of management accounting is about harvesting information from metadata about a business's operations.
Nielsen ratings relied on self reporting, giving the viewer control over the information supplied to the ratings people. The viewers were also paid for their efforts. Neither is true of today's techniques.
And then get deplatformed like Gab because the Marxists control the payment processors too...
Oh, the MARXISTs control the banks?????
Not the banks, but certainly online payment processors and webhosting services. You don't see The Root getting deplatformed for Afro Nationalism and Tariq Nasheed isn't getting kicked off Twitter or unverified anytime soon.
Right. Businesses don't like people with money. That's a well-known trope.
And what part of the terms of service says they can't remove your content if you don't violate them?
" if we agree that the Terms of Service is a contract,"
If they are, why can't they sue you for violating their terms of service?
"And if they decide to stifle one of those groups…then what? What is your proposal then?"
Gather together with other people who feel stifled, and create and manage your own site which does not stifle people who have opinions similar to your own. If there are enough of you, your site will thrive. If not...
If not, we should let people in California silence you into oblivion. Fuck your free speech, California is more important than you.
So... you object, on ideological grounds, to the fact that businesses that have insufficient customers are not successful?
I'm not an ideological person. Ideology rots the brain. I don't object to businesses failing, I object to market oligarchies propped up by special privileges bestowed by the government which cause viable businesses to fail.
So you're not an ideological person, who thinks ideology rots the brain, who objects for ideological reasons to the fact that businesses that have insufficient customers are not successful? Because fuck you, California? Must be tricky to keep all that straight.
Funny, gab tried that.
These same firms did a lot of work to try and shut them the hell down. Did a really good job of it, too.
Unlike in every other kind of business, where existing businesses welcome new competitors, and help them out any way they can.
How many other businesses have the major competitors protected from specific laws that others have to face?
0, same as in this case.
Do you have a multinational bank in your back pocket when Big Tech gets the banks to cut you off from credit card processing?
No?
Then your whole concept goes down the drain.
So what is your solution?
Force banks to do business with people they don't want to do business with?
"Bake that cake"?
Actually, yeah, Federally Insured banks already were and are required to, and have been for over 40 yrs. You can absolutely lend money to whomever you do or don't want to without FDIC backing.
I'm not sure I agree with that approach. But if things are as you say they are, so what precisely is the problem that Finrod is alluding to?
If it is already illegal for credit card companies to cut off otherwise valid customers, then if a bank does that to, say, the NRA or Gab, then it's patently illegal anyway, and no need to muck around with Section 230.
" You can absolutely lend money to whomever you do or don’t want to without FDIC backing."
And if they're late with the vig, kneecap 'em.
"So what is your solution?
Force banks to do business with people they don’t want to do business with?
“Bake that cake”?"
Does the bank like access to the Feds?
FDIC protections?
You know, stuff like that.
Then abide by certain rules.
So once again, it's the government extortion racket.
"Ya wanna do business in these parts, ya gots ta make an offering to the Godfather, see?"
Is there anything that the government extortion racket doesn't justify, in your view?
OK, so I get to PAY for the banking system but if I engage in a completely legal business that the bank thinks is "icky", it can deny me access?
How is this fundamentally different than refusing to serve blacks?
Banks aren't limited to denying loans to only those businesses which are illegal.
They can also deny loans to individuals and businesses that can't show means to repay.
If the bank thinks your chances of paying them back are "icky", you're not walking out with a cashier's check, no.
Also, note that when your argument has twisted and turned to the point where it hinges on the premise "banks don't like conservative people", there's something seriously wrong with your argument.
"Do you have a multinational bank in your back pocket when Big Tech gets the banks to cut you off from credit card processing?"
Don't recall saying anything about credit card processing.
Sounds like there might be opportunity there, too. Don't like the way big multi-national banks cut you off from credit card processing? Get together with other people cut off from credit card processing, and work out an alternative. You might start by asking people in the state-legal marijuana business, who are cut off from all banking, not just credit-card processing, what they did about it.
If there are enough of you, your site will thrive. If not…
This is an incorrect assumption. You're suggesting that an honest man should be able to succeed in a fair market, but it's not a fair market. Google can re- or deprioritize, selectively redistribute, and arguably falsely contribute to your content in order to defame you.
"This is an incorrect assumption."
It's not an assumption. It's how economics works.
" You’re suggesting that an honest man should be able to succeed in a fair market"
Don't recall saying anything even remotely like that.
"it’s not a fair market"
Didn't say it was. Being late to market is often disadvantageous.
But if you have a better product, or just one that people want more than the existing products, and enough people want it, there's economic opportunity there.
It’s not an assumption. It’s how economics works.
I don't think you understand the word economics. Section 230 isn't economics. The 1A isn't economics. Google offering services for free isn't economics (or at least not overt capitalism). Technically, they all fit into economics, even under the umbrella of command or authoritarian economies. But the only way "It's how economics works" makes sense in context is a way that lets pretty much any philosophy fit into your statement: "It's how physics works." "It's how fascism works."
Don’t recall saying anything even remotely like that.
Fuck off. Socialist command economies are economics too right? I'm going to give you the benefit of the doubt and suggest you're just a disingenuous piece of shit. Otherwise, you're an abject moron who, when asked simple legal questions says, "That's economics for ya!" and "We're all gonna die someday!"
But if you have a better product, or just one that people want more than the existing products, and enough people want it, there’s economic opportunity there.
No there's not. People can be too poor, oppressed, or otherwise confined to take advantage of a better or more popular product and, as a facet of competitive economics suggests, generally they are. Drugs are phenomenally more popular than foie gras, but there's not necessarily an opportunity there. Water is phenomenally more popular than drugs, but there's not necessarily an opportunity there.
I'd say your input has been enlightening but, you know, that's dumbasses for you.
"I don’t think you understand the word economics. Section 230 isn’t economics."
Dimwit, nobody said it was.
"Google offers services for free, it’s not economics (or at least not overt capitalism)."
Google offers some services for free. It's quite profitable, quite likely because they understand economics better than you do.
"I’m going to give you the benefit of the doubt and suggest you’re just a disingenuous piece of shit."
Whereas you deserve no benefit, and can be safely dismissed as a person who gets angry and frustrated whenever you run into someone who understands things better than you do, which means you're angry and frustrated 24/7.
"People can be too poor, oppressed, or otherwise confined to take advantage of a better or more popular product"
Then there aren't enough of them, QED.
I'm sorry you're stupid. More correctly, I'm sorry you're stupid AND have Internet access..
Gather together with other people who feel stifled, and create and manage your own site which does not stifle people who have opinions similar to your own. If there are enough of you, your site will thrive. If not…
People have done that and the backbone providers have cut them off.
Again, revoking section 230 is not the answer. But this refusal to even recognize there's a problem here is bizarre to me.
What do you think would happen if say, Planned Parenthood had their banking provider, their internet provider, their certificate authority, their hosting company and all the major social networks ban their accounts? Do you think there would be no outcry, just a "hey, start your own internet, broheim"?
The glib response of "start your own youtube" is just that, glib and unhelpful.
That was an interesting article, thanks.
I agree that there is a problem, but I suppose it would help to first precisely identify what the particular problem is.
I think the problem is, broadly, "bigness". Not just big government but big corporations too. I can agree with many segments of both the left and the right that the danger to our liberties does not arise solely from the state (although they will always be the *biggest* danger due to their monopoly on the legal use of force). But if, for example, no banks wanted to do business with me, then I would certainly feel like the liberty to do things in modern society was greatly curtailed, even if none of my rights had been formally violated.
So what makes companies so big? Well, in my view, as usual, it often boils down to government. Expensive regulations and arcane tax laws that only big rich corporations can afford to obey. Cronyist giveaways and loopholes that keep the big big and shut out competition from little guys.
That's just my thoughts off the top of my head, I'd be interested in hearing what you think.
I’d be interested in hearing what you think.
No you aren't. Otherwise, the Section 230 solution would be obvious. The law was explicitly crafted to grant companies like CompuServe (less so) and Prodigy (more so) protection from harmful disinformation and manipulative behavior.
It was crafted so that platforms wouldn't be held liable for the words created by others. It seems eminently reasonable to me that I alone am responsible for my own speech, not someone else.
It was crafted so that platforms wouldn’t be held liable for the words created by others. It seems eminently reasonable to me that I alone am responsible for my own speech, not someone else.
It doesn't once use the word 'platform'. You haven't read it, you don't care.
I took telecommunications law as my seminar class back in law school.
The guy who taught it was one of the regulators who successfully forced one of the entities that's now Comcast to open its cable plant to competitors within the franchise agreements in and around Portland, Oregon back in the early days of broadband.
I think that bigness is... or can be... a problem. As a long-time libertarian, I used to reflexively dismiss bigness, but lately I'm willing to grant the left some points in this area. Big conglomerations of companies, working in tandem, can and do create some sticky market conditions. I'm still reflexively reticent about legislative solutions because they always turn out badly, but I am at least willing to discuss the problems.
1791L, whom I respect a lot, put out a video titled "build your own youtube," which is a response to the problem of online tech censorship and the refrain "build your own youtube." However, I disagree with them on many points in this video, as they adopt a more right-wing populist response.
I do agree though, with the admission that yes, Houston, we do have a problem here.
You're another of those "I'm in favor of free-market solutions, except when the free-market produces a solution I don't like, and then I think government should intervene until the results start to look more like my preferences" "libertarians".
English definitely isn't your first language.
Nuh-uh, YOU are!
James, Diane has been pretty firm around here in not going along with the "abolish CDA 230" crowd that you see everywhere else here at Reason.
Diane, that's why I like you, you always bring such interesting info.
He has a good point - people aren't going to go to these niche political 'free speech' websites because most people don't really care that much about it; they aren't into that aspect of it so much. Plus I hadn't realized how much of a money loser Youtube is.
But I think he also points the way of what to do - instead of "build your own youtube", it should be "build your own full online experience" (which is much harder, but more likely to be successful). So maybe places like Gab and Parler should stop trying to push the free speech angle so much and instead try to entice people to come for their awesome cat videos or whatever. Or just an experience that is better than Facebook and Twitter. I don't think that is beyond the realm of possibility.
Again, revoking section 230 is not the answer.
*The* answer? No. *An* answer? Yes, and given its redundancy, plain illegality, and counterproductivity, a rather appropriate one.
"The glib response of “start your own youtube” is just that, glib and unhelpful."
Look, if the people who trade kiddie porn can manage to keep THEIR sites up, but you can't, that's a sign that your product is worse than kiddie porn. Take from that sign what you will.
In the meantime, if you want control of something, build it yourself and then you get to make the rules about how it gets used. Whining because the people who already went to the trouble of building something want to impose THEIR rules on using it instead of yours, and won't let you use it as you see fit, is just that, whining.
If you want to compare a youtuber with millions of views operating in the light of day with someone scurrying in the darkest corners of the web peddling kiddie porn, that's your prerogative.
Whining because the people who already went to the trouble of building something want to impose THEIR rules on using it instead of yours, and won’t let you use it as you see fit, is just that, whining.
Except when someone who makes his or her livelihood on said platform and IS playing within THEIR rules has that livelihood cut off with no explanation and in some cases, with monies still owed, I'm willing to admit we might have a problem, and I think there is a case for a civil lawsuit.
"If you want to compare a youtuber with millions of views operating in the light of day with someone scurrying in the darkest corners of the web peddling kiddie porn, that’s your prerogative."
English not your first language?
I took from that sign what I did.
Not your second, either?
"In the meantime, if you want control of something, build it yourself and then you get to make the rules about how it gets used. Whining because the people who already went to the trouble of building something want to impose THEIR rules on using it instead of yours, and won’t let you use it as you see fit, is just that, whining."
Project Veritas had a video showing Pinterest putting religious sites and words on a porn blocklist. The video was banned by YouTube, followed by account suspensions on Twitter and Reddit. Why did Twitter ban it?
They then did an investigation on Google and bias. YouTube banned the video. Then their COMPETITOR Vimeo did the same, claiming it was "hateful," with literally zero evidence that it was or of what was "hateful."
What is your plan now? The companies aren't even COMPETING. They are COLLUDING with one another. And it's happened repeatedly.
Any youtuber talking about the Project Veritas video also risked a ban.
"What is your plan now?"
To continue to laugh at people whining because other people won't let them use things that don't belong to them in the way they see fit.
In the meantime, if you want control of something, build it yourself, and then you get to make the rules about how it gets used (and by whom). Whining because the people who already built something won't let you use it on your own terms remains childish whining and may be dismissed as such.
If you don't like the way Youtube runs its site, demand a refund of everything you paid them and stop using their site.
If you don’t like the way Youtube runs its site, demand a refund of everything you paid them and stop using their site.
Aaand you definitely don't read English.
Ow, wow, you totally zinged me with that one!
Whine on, whiner!
I'm going to laugh at you when section 230 is repealed and the world doesn't end.
Knock yourself out.
Meanwhile, I'm going to laugh at you when section 230 is not repealed, and the world doesn't end.
Yeah man, fuck the BoR and the Constitution if they aren't popularly supported.
You're the sort of Libertarian who would let people elect an actual fascist and respect it because it's the will of the people. Well guess what: when the will of the people is anti-individualist, fuck it by any means necessary. You can't let collectivism grow, and dealing with it in a "collective" manner, with force, isn't unlibertarian. When your principles further unlibertarian objectives, your principles are unlibertarian. If you can't be a libertarian without contradicting yourself at some point, get better principles.
As a rule, there is one thing the government really, really, needs to do regarding the internet:
Nothing. Nothing at all.
I agree, section 230 is a bad idea.
Section 230 stipulates, in essence, that digital services or platforms and their users are not one and the same and thus shouldn't automatically be held legally liable for each other's speech and conduct.
Incorrect. Section 230 stipulates that digital services or platforms and their users *cannot* be considered the same. The law was passed specifically to countermand two cases: one where the platform was found responsible for its content (because it curated it), and another where the platform was found not responsible (because it didn't moderate). So, prior to Section 230, a platform and the speech on the platform were fully capable of being regarded as either the same or separate entities, and Section 230 specifically prevents that going forward.
Section 230 is easily abused and becomes a sham when the publishers are actively curating the content in order to deliver certain viewpoints and censor others. The publisher is now involved in the speech and should have liability.
Is it an "abuse", in your view, if a Christian forum wants to censor speech that is unflattering to Christianity? Should they be regarded as "publishers"?
Of course... I mean... I mean... they're CHRISTIANS, right? Same for the Jews, the Muslims, the Pagans, and, by logical extension, political parties. And then we can move on to make double-sure that newspapers, magazines, and leaflets handed out at rock concerts all meet the same standards. And if you are accepting an advertisement in your community newsletter for a "garage sale," it had better damned well be held in a garage, and not in the front yard. Then we can all be happy that our benign and loving government is looking out for us by assuring us that the only thing we get is Newspeak... er... I mean, Truth. (Is it necessary for me to write "sarc-font off"?)
Should they be regarded as “publishers”?
The fact that you say "publishers" rather than just publishers makes me think you don't know what a publisher really is. You do know that some of the main differences between a (book) publisher and a (book) printer are rights acquisition, editing, design and formatting, distribution, and promotion, right? Admittedly, Google, Facebook, etc.'s TOS are completely shameful and should be better written and enforced, but the parts that are clear are that they own the material on their platform and are in charge of how and to whom it gets presented. That makes Google, YouTube, Facebook, etc. rather overtly publishers, rather than simply printers or "publishers".
On the Internet, publishers may distribute content without exerting any editorial control on it. They may choose to tightly control every word and every punctuation mark... or they may choose not to. What section 230 does is allow them to exert partial control, without the assumption that this means they are exerting full control.
So they can insert spam filtering in their user comment system, and the fact that the Nigerian Prince got his comment removed doesn't imply that the publisher of the site had a human read and vet and approve anyone else's comment.
Rather, they can allow users to post content (at the user's risk if there's liability for it) without assuming that risk themselves unless they're specifically informed of the liability and allow it to persist.
This keeps you, for example, from copying links to your political adversary's website and then demanding that it be shut down.
And, the liability shield can be defeated in court if a plaintiff can show that the site is designed to defeat, say, copyrights, or libel laws, or the like.
This keeps you, for example, from copying links to your political adversary’s website and then demanding that it be shut down.
It's interesting you brought this up, because exactly this has been happening, after a fashion, with YouTube. Not the YouTube site being shut down, but a creator on the site having a video taken down, with the revenue transferring to the striker. And the copyright strike was completely false.
You can have a video stricken because of user comments? Harsh.
Yes. There have been several family friendly, completely non-political channels that have had their videos removed because of dumbass comments on the videos.
Meh. Their site. If you don't like the way they run it, demand a full refund and stop using the site.
There are quite a few Patreon users who actually would like a refund.
This affects me... how?
It makes you look like a considerably larger douche than most of the rest of the libertarians and even some of the trolls around here.
Fine. YOU pay them back, and show me how to undouchify.
Blame the DMCA, not youtube or section 230.
If they leave up copyrighted material after someone has told them to take it down, they can face extreme liability.
This means they can't afford to give the content the benefit of the doubt.
If a video gets a copyright claim, then unless YouTube is nearly certain that the claim is false, they will side with the claimant.
They've also been sued multiple times by groups like the RIAA and the MPAA.
If Section 230 applied to copyrighted material like it applies to all other content, this problem wouldn't exist.
If there were actual easily enforceable penalties for making false DMCA claims, this would be less of a problem.
On the Internet, publishers may distribute content without exerting any editorial control on it.
This sentence tells me that you're willing to set aside the majority of common law and 1A history and take what I was saying to someone else explicitly out of context because Congress told you something about the internet. When your thinking originates not with history, the market, the 1A, or even common sense, but with an idea that was minted by Congress two decades ago, you're going to have considerable trouble putting ideas like actual free speech, freedom of choice, and libertarianism first.
From Twitter's TOS:
You retain your rights to any Content you submit, post or display on or through the Services. What’s yours is yours — you own your Content (and your incorporated audio, photos and videos are considered part of the Content).
By submitting, posting or displaying Content on or through the Services, you grant us a worldwide, non-exclusive, royalty-free license (with the right to sublicense) to use, copy, reproduce, process, adapt, modify, publish, transmit, display and distribute such Content in any and all media or distribution methods (now known or later developed).
By submitting, posting or displaying Content on or through the Services, you grant us a worldwide, non-exclusive, royalty-free license (with the right to sublicense) to use, copy, reproduce, process, adapt, modify, publish, transmit, display and distribute such Content in any and all media or distribution methods (now known or later developed).
OK, now find me a book printer that says anything like this.
A book printer? No can do. A book printer doesn't want your data, they want cash.
On the other hand, a publisher's work-for-hire contract likely has terms considerably stricter. Back in the day, magazines bought all rights, and not just first-publication. See if you can get ahold of a comic-book artist's contract sometime, or pretty much any recording contract for any band that hasn't already had at least 7 platinum-selling records.
They're not like a book printer.
But they aren't a publisher either, since they don't actually take possession of the users' intellectual property.
You retain your rights to any Content you submit, post or display on or through the Services. What’s yours is yours — you own your Content (and your incorporated audio, photos and videos are considered part of the Content).
I think a better analogy is that of a private library. The authors retain their intellectual property rights, and Twitter "licenses" the content to patrons wishing to see it. It is an imperfect analogy, but I think it is better than either the publisher or printer analogies.
"Section 230 is easily abused and becomes a sham when the publishers are actively curating the content in order to deliver certain viewpoints and censor others"
Why shouldn't a private operator be 100% free to do this? I mean, back in the day, Apple ran ads openly claiming that IBM was evil and oppressive. Later they ran ads claiming that Microsoft was stuffy and stifling like John Hodgman, while Apple made products for cool people like Justin Long. Should they have been restricted to running ads saying that Apple made products for people who liked to pay more for something that worked about the same, but came in designer colors?
They should be free to do this. Absolutely. They should also be free to shut down major political candidate's advertising accounts as soon as they become popular for being anti-war.
We need to make sure that Google and Twitter can keep the popularity of war sky-high.
Can they decide "you know what? We don't want to run this ad. Why don't you go place it with somebody else?"
Another segment of this (not related to 230) is the collusion that appears to be happening in the tech world in other areas. I'm very suspicious about the ban on Huawei phones, for instance. It feels like it might merely be a way for US tech firms to keep out a major Chinese competitor.
Aren't Huawei phones banned by the US government under federal law?
Huh, you're right. I was wrong on this. I think I've lost track of the escalation here. Initially the federal government banned its employees from using the phones, but it looks like the Feds then barred Google from licensing Android to Huawei. I do know the carriers pulled out of deals with the phone maker, but I don't know whether that was at the behest of the federal government either. Perhaps it was just Angela Merkel whispering into their ears.
This is interesting:
If you're a major tech company and you're not on board with the latest virtue signal craze, the media will call you out.
Get in line, and get on board with the message. If you're not with us, you're against us.
Then the OTHER media will call you out for caving to the first callout.
If you're not with us, you're with THEM.
So we gave certain types of tech companies special privileges in the market and they became dominant market forces. We want to go back to having a free market, and now people are upset that the companies that were built on special protections might have their business models disrupted. Sounds libertarian to me.
Let me correct that for you.
The government applied an ORDINARY and NORMAL standard to the Internet, the one that existed in meatspace: that a person is responsible only for his or her own speech. If I type defamatory things here on Reason, I am responsible for the defamation, not Reason. It wasn't clear in the early stages of the commercial Internet how these forums should be regarded, and so Section 230 settled that matter.
Now, a bunch of activists on the left and the right want to change Section 230, either because they don't like the politics of the tech companies and they want to see them punished, or because they don't like some of the speech itself and want to see it banned by the state. In so doing, they risk stifling creative expression online and putting even more power in the hands of the state to censor online speech. Which is the absolute last thing any libertarian ought to be in favor of.
I don't think anyone around here actually gives a damn about Facebook and Twitter per se. Facebook could go out of business tomorrow and I would not shed a tear. It is about the future of online speech itself.
So ordinary and standard that they had to write a special law for it.
You can just call section 230 what it is: special privilege.
You can call it that, but it won't make it into one.
Or it's one of those special privileges that literally everybody has.
You can't sue Reason for anything I wrote on their site. Then again, Reason can't sue you over it, either. And neither can anybody else. You've got special privileges!
James, if "literally everybody had it" --- then why does Section 230 EXIST?
They don't throw in "Also, nobody can illegally search your premises" in legislation and it is very much a protection everybody has.
Clearly everybody did NOT have it.
James, if “literally everybody had it” — then why does Section 230 EXIST?
Because a court case got it wrong, and Congress fixed it.
https://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prodigy_Services_Co.
The law was not created at the behest of some corrupt lobbyist trying to solicit favor for his industry.
The law was created as a reaction to a bad court decision.
"James, if “literally everybody had it” — then why does Section 230 EXIST?"
Because some people didn't have it. Duh.
So, everybody didn't have it. Thanks.
It required a law to give them special benefits.
That law was a mistake. Should be repealed.
Repeating a lie doesn't make it true, no matter how many times you do it.
And you can say that it isn't a special privilege, but it still is. I work with a lot of contracts but for some reason, I'm not allowed to breach mine like certain online publisher platforms are.
Luckily, more and more people are agreeing with me every day. And you, being a libertarian, are going to propose the ultimate libertarian solution: "let's do nothing, guys!"
"And you can say that it isn’t a special privilege, but it still is. I work with a lot of contracts but for some reason, I’m not allowed to breach mine like certain online publisher platforms are."
Section 230 has nothing to do with whether online services can write their terms of service to give themselves the right to rewrite them, with you accepting the new terms by continuing to use the service.
If you don't like the (new) terms of service, don't accept them by not continuing to use the service. Problem solved.
Section 230 makes the standards for offline content the same as online content.
Free speech will survive the axing of section 230. These companies are just upset that their business model, built off of libel and slander, will no longer be profitable. Boo-hoo.
The idea that this is a threat to free speech is bullshit. Free speech survived before section 230 and it will survive after. Cry me a damn river.
You will have corporate-approved speech even more so than now, because your only choices, in a post-230 world, will be either:
1. Complete free-for-all platforms full of trolls, Nazis, spam, porn, basically an open sewer of content
2. Heavily moderated corporate platforms, because they will be the only ones with the resources to fend off the defamation lawsuits
So yeah, great choice.
This is NOT about the health of any specific company. That is how the right is framing the issue - "special privileges for Facebook". Section 230 is far broader than that.
One analogy I've thought about in the realm of deplatforming comes in light of the fact that some people are now earning a living from their online presence on specific platforms-- and this is where the Terms of Service comes into play.
The Terms of Service is, as I see it, a contract, and it's a contract that goes both ways. It demands certain things from the user, and it demands certain things from the provider.
If you'll allow me, I'd like to create an analogy:
You own some property that requires a right-of-way from a neighbor to access it. Zoning discussions aside, you decide to start a business on your property, and you make a handshake deal with your neighbor that he'll allow patrons across that right-of-way for free, as long as you, the business owner, abide by some basic Terms of Service: no operations outside of X hours, traffic kept to a minimum, etc. After some time, your business is established and you're now making a living from it. You're also abiding by the Terms of Service.
One Monday morning, you find the Right of Way has been blocked by your neighbor with no explanation. Customers can't reach you, contracts can't be fulfilled, you're effectively out of business.
It turns out that your neighbor was a bit of a racist and noticed that a lot of Hispanics tended to patronize your business, and he didn't like that, so he shut the gate and locked it permanently. He'll never TELL you he's a racist or admit it, but he just said "you violated the terms of service" and refuses to tell you what specifically you were in violation of.
I'll bet you there's not a court in the land that wouldn't at least entertain a lawsuit on some kind of breach of contract.
We can poke fun at the business owner all day, claim that he's whining because he was dependent on something he didn't control... but even a verbal contract is a binding contract.
Steven Crowder has, for me, a thoroughly legitimate case for a lawsuit against YouTube. He joined them as part of their partners program. They demonetized him completely NOT because he broke the rules but because he ALMOST did so (which is ALSO known as following the rules).
And libertarians seem OK with companies doing this shit. Because while any group of people can do shitty things, businesses are the one grouping that does not do bad things. Ever.
The tech companies are openly admitting they're violating their own Terms of Service, and it's "whining" if you point that out.
The tech companies wrote their own terms of service to allow them the flexibility of changing them on a whim, and you can either accept the new terms, or stop using the service.
Refusing to stop using the service while whining about how unfair the new terms are... that's whining.
The problem is the changes are progressive and extend across platforms arbitrarily. A cancellation at Youtube shouldn't mean a simultaneous cancellation at Twitter, Paypal, or Patreon, but it does. It's an effective end around anti-trust.
It's not really economics, but any fair-market economist would regard both parties equally in such an unenforceable contract. If the landowner above put up billboards to sell to the passing public and made profits off your customers' traffic, and then shut down your business arbitrarily or sold the property to someone who did, they'd rightly be liable, as you certainly had a hand in generating their advertising profits.
This sort of thing happens all the time in other corporate scenarios, divorce and bankruptcy court hearings being obvious examples.
" A cancellation at Youtube shouldn’t mean a simultaneous cancellation at Twitter, Paypal, or Patreon"
This is an assumption on your part, unsupported by reality. If you're an antisocial asshole, then this is likely to be obvious to everyone. You know, at the same time. And if they all have anti-antisocial-asshole policy...
Who defines "anti-social asshole"?
Pretty much everybody does. Do you know someone who doesn't?
Oh, so you're going to propose ideas and then not define them.
Funny, I almost thought you were trying to argue in good faith.
My mistake.
The old "I'm losing this, so whine 'you're not playing fair'", declare victory, and get the hell out?
Wasn't expecting that one.
When businesses misuse or abuse their "terms of service," which is a contract, they should be as liable as any other entity who breaks a contract. As a "libertarian" for around forty years now, you can take this from me.
Where things get murky is when the law, and/or the lawyers get into the language of the contract. For instance, if I ran FB, I might want to say something like "I, and agents acting on my behalf, reserve the right to censor any post or entry, or delete the account, of anybody who offends me, or for any reason at all."
Of course, what they actually try to do is establish "community standards." While this might be a noble attempt, it is just a wee bit more difficult.
Of course, you as a user, could withhold your custom until they negotiated a contract with you that isn't their standard user contract, and leave those terms out rather than just clicking "accept terms" and whining that that stuff's in there later.
Just clicking without reading the terms of service? But... but... but.... who would do that? Oh, nevermind....
I'm the one over in the corner who never clicked on "accept terms" in the first place, and I somehow live a full life despite not using GMail, not having any social media accounts, and owning but rarely carrying a cellular device. Wait, did I say "despite"? I meant "because of".
James:
I get it. Excepting business and job-related email accounts, I have had the same email account since... the mid-'90s? I quit FB years ago, have yet to tweet, and the only text messages I ever get are from my wife. Comments here are about as close as I get to "social media."
Oops. I forgot. Sometimes I will write something on the forums at mandolincafe.com.
"One analogy I’ve thought about in the realm of deplatforming comes in light of the fact that some people are now earning a living from their online presence on specific platforms"
In other news, lots of people in lots of industries earn a living because their business interacts with others. Domino's earns its living based largely on its telephonic presence. Obviously, the phone company should owe you $7.99 if the pizza is delivered cold, 31 minutes after you ordered it.
If the phone company refused to deliver calls to Domino's, even though their bill was paid, there'd be no problem?
"I called Dominos."
"Sorry, at Sprint, we only do Little Caesar's"
"This is bullshit"
"BUILD YOUR OWN CELL NETWORK! JUST BID ON THE SPECTRUM AND SET UP THE ENTIRE BUSINESS!"
Lol
"If the phone company refused to deliver calls to Domino’s, even though their bill was paid"
Should have read the fine print under "Traffic shaping", and the phone company's definition of what "unlimited" actually means to them.
But.. why is your complaint about net neutrality offered here in a discussion of section 230 of the CDA?
Should have read the fine print under “Traffic shaping”, and the phone company’s definition of what “unlimited” actually means to them.
But.. why is your complaint about net neutrality offered here in a discussion of section 230 of the CDA?
You do realize a phone company is a common carrier and can't shape traffic like that, or it loses common carrier status and becomes subject to lawsuits (state, federal, other) and even criminal prosecution, right? And that, unlike newspapers, book publishers, phone companies, and registered broadcasters, internet companies are the only ones (currently) having an issue (once again) with this purveyor/publisher question?
You do realize the selective ignorance game only plays for so long, right?
"You do realize the selective ignorance game only plays for so long, right?"
Whereas you can keep your complete ignorance going indefinitely?
Guess he ran out.
The Communications Decency Act was passed well before Google, Facebook, and Twitter existed, in a diverse early Web. Section 230, clever as it was at the time, is clearly out of date: social media has since 1997 come to dominate the flow of news in a way not foreseen then, and is now run by an oligopoly whose executives, furious at Trump's win, are determined to influence the results of future elections in this country, as we adopt an era of continuous campaigning where "netroots" organizations and "corporate citizens" deliver much of the message.
Like it or not, campaign speech is regulated by the federal government and the courts. If the CBS TV network of the 1970s couldn't refuse to run GOP campaign ads, then why should Mark Zuckerberg be free to silence an entire political faction he dislikes? CBS stations had to obey the FCC Fairness Doctrine, too, until cable arrived. I'm not saying we should proscribe Google from routing commercial ads to InfoWars' YouTube channel when sponsors may not want to appear there, but they banned InfoWars and a number of other immigration and globalism skeptics altogether, though they're smart enough not to try that with Fox.
Survival of the banned on websites outside social media doesn't compensate for this loss; most people won't see them there. Our laws must take into account how we live life today and how that changes; for them to apply models of freedom appropriate to 1790 or 1997 is absurd, despite our reverence for our founding fathers. We must revise Section 230 in a way that preserves the protections it gave the Web platforms while requiring those platforms, if in oligopoly market positions, say those commanding 5% of Internet traffic volume, to remain open to opposing viewpoints. I'm not a Paul Joseph Watson fan, by the way.
" while requiring these platforms, if in oligopoly market positions, say those commanding 5% of Internet traffic volume, remain open to opposing viewpoints."
So... Apple has to leave the "Apple sucks!!! Windows Forever!" comments on its website? Xbox Live has to allow 11-year-olds to declare that "Playstation Rulez!" on every audio channel?
I offer a counterproposal:
Section 230 stays exactly as is, but users have to pass a knowledge test about both the law and the Internet before they're allowed to post comments about how Internet law should be written.
I offer a counterproposal:
Section 230 stays exactly as is, but users have to pass a knowledge test about both the law and the Internet before they’re allowed to post comments about how Internet law should be written.
Not a real deep thinker, are you? If your test and Section 230 ever come into conflict, Section 230 wins. If comments are going to be moderated whether people pass your test or not, the test only serves to flatter your idiocy. Which is precisely where we are with Section 230, except your brand of idiocy happens to be (not even necessarily popular) law.
You missed the joke. Shocking.
You missed the joke. Shocking.
No. I was rather openly flattering your idiocy. Which isn't bound by law, logic, or internet.
Suuuuuuuure you were. wink-wink.
Who could have possibly foreseen that a joke about how people comment about Internet law despite knowing little about law or the Internet would draw a disparaging comment from someone who understands little about either law or the Internet? Who, I ask? Who?
https://www.techdirt.com/articles/20190731/17202442692/enough-with-myth-that-big-tech-is-censoring-conservatives-that-law-requires-them-to-be-neutral.shtml
Nice little bizarro world you have going there. Don't leave the gate open and let the pink unicorns out. This article is just another in the long line of "the internet is special and needs special laws." No, it doesn't. If you were real libertarians, you'd be saying "no special law (privilege = private law) for anybody."
" This article is just another in the long line of 'the internet is special and needs special laws.'"
Sorry to rain on your parade, but section 230 isn't about the Internet.
Back in the olden days, before the Internet was opened to commercial use (thanks, Al), there were other kinds of information service providers. There were some that used a computer and a modem to connect to, there were some that used a terminal and a modem to connect to, and there were even some that just used a touch-tone phone.
Who even gives a shit? At this point we're fucked. Left and right want to push some sort of fairness doctrine, while libertarians are happy to allow California to control the flow of information and police speech.
"The How and Why of Section 230
The Communications Decency Act (CDA) came into being as part of the Telecommunications Act of 1996, and was intended to address "obscene, lewd, lascivious, filthy, or indecent" materials. "
Meaning that enabling curation of *political content* was not part of the legislative intent for this crony-capitalist exemption from publishing laws, nor was banning non-objectionable material from individuals who once posted something found objectionable on a platform, nor was banning non-objectionable material from individuals who once said or wrote something found objectionable somewhere in the universe.
The CDA is premised on a number of findings and policy assertions, including user control and true diversity of political discourse.
https://www.law.cornell.edu/uscode/text/47/230
(2) These services offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops.
(3) The Internet and other interactive computer services offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.
(3) to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services;
What exactly a finding or policy means in its implications to enforcement of the law is unknown to me. I would assume it delimits applicability of the law, as otherwise it's superfluous to the legal interpretation of the statute. Maybe. Maybe not.
But what's clear is that the intent of the law assumes user control of content on a forum, not a publisher of curated ideas.
The relevant clause for immunity is here:
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
In this context, "objectionable" does not simply mean "anything the provider objects to", as this renders all the other types of material superfluous parts of the law. They could have written "ban anything you want". They didn't.
"Otherwise objectionable" means that the previously listed category are meant as exemplars of what objectionable is to mean within the statute. The objection needs to be within the ballpark of those categories, not simply "whatever you happen to object to".
"“Otherwise objectionable” means that the previously listed category are meant as exemplars of what objectionable is to mean within the statute. The objection needs to be within the ballpark of those categories, not simply “whatever you happen to object to”"
You might have a point if it just said "or objectionable" material. But it doesn't. It says OTHERWISE objectionable. "Otherwise" means "different from." It allows moderation for objections OTHER than lewd, filthy, harassing, etc., of a different character.
Redundancy is very common in contracts and laws.
"Meaning that enabling curation *political content* was not part of the legislative intent"
Because curation *political content* is not illegal.
If I try to sue Fox News for being blatantly pro-Republican, they don't need to cite an exemption to the law that otherwise lets people sue businesses for having political leanings. There isn't any such law to exempt FROM.
"otherwise it’s superfluous to the legal interpretation of the statute."
Yeah. The preamble is pretty much superfluous to the legal interpretation of the statute.
If Section 230 were in ANY way equivalent to the First Amendment, it would have to be applied equally to ALL viewpoints. It isn't.
Therefore, your entire article is based on a false premise. Did you realize that?
"If Section 230 were in ANY way equivalent to the First Amendment, it would have to be applied equally to ALL viewpoints. It isn’t."
It, er... is. Going out on a limb, here, but I'm going to guess that you don't actually know what section 230 does.
So many people commenting here are angry about their misunderstanding of what Section 230 does. They don't understand what it does, and they're mad about something they THINK it does.
Start with the basics.
In ordinary defamation law, you CAN sue someone, and recover damages, if they have defamed you. You CANNOT sue someone, and recover damages, if they have not defamed you.
This is simple and not complicated.
So... what if person A defames you, and person B repeats it? Well then BOTH persons A and B have defamed you, and you get to sue either or both.
This is where it starts to get complicated.
News coverage might report on what A is saying about you. If the news coverer adopts A's version, which is defamatory, and repeats them as facts, then you're back to the previous case... both have defamed you and you can sue either or both.
Now we throw in modern electronic communication. Suppose that a TV news reporter reads a story about you on the news. That news report is on in a bar somewhere, and A stands up at the end and says something defamatory about you to everyone in the bar. Who can you sue? Well, A, obviously. How about the TV station that broadcast a story about you? Well, no, you can't sue them, because they didn't say anything defamatory about you. OK, then, how about the bar where A actually made the defamatory statement? Aha! The lawyerly answer to that is "it depends." Did the bar "adopt" the statement? Did they attract customers by saying A would speak about you? Did they record the statement and play it again for other customers who missed it the first time? If the bar did adopt the defamatory statement as its own, they've defamed you, too, and you can sue them, too. If they're just a bar, and the guy who popped off about you is just a customer and nothing more, then no, you can't sue the bar for being the place where the defamation happened.
Final round. A owns and operates a website, where he publishes his own thoughts about the issues of the day. He also allows other people to post their responses to his opinions. Honestly, though, although he sometimes does read and respond to what the users offer, most of the time he doesn't even read them; he has other things to do.
Suppose User types up a response that is defamatory. Can you sue the guy who wrote it? Sure. Can you sue the guy whose website it appeared on? Lawyerly "it depends" again. Did the website operator adopt the defamatory message as his own? Did he repeat it, cite it affirmatively? Did he write a follow up comment that says "That's probably defamatory, but I'm going to leave it up because I don't like that guy, either!" In that case, then yeah, you can sue. At the other end, suppose you try, but can't prove that the website operator ever even read the message. You can't sue the website operator because there was no adoption of the message.
But wait! Suppose the operator removed several other messages, posted recently before and after the defamatory message. Doesn't that prove that the website operator was reading messages about the time the defamatory one went up, AND, since they left the defamatory message on the site, isn't that adoption?
And finally, we have what section 230 actually does. It answers that question, and it answers it with a firm "NO". If you can prove that the operator knew the message was there, and left it up despite knowing that it was or could be defamatory, you still get to sue because the operator has adopted the defamatory message. But, if the only evidence you can raise is that other messages were removed but not the defamatory one, you only get to sue the actual writer of the defamation, and not the website operator.
In short version, a website operator can remove some messages from an interactive forum without thereby adopting all the messages that remain as their own.
“In the digital realm, that freedom is only possible because of a decades-old provision of the Communications Decency Act, known as Section 230….
“It is the internet's First Amendment—possibly better.”
The Internet has already been crumpled up by its giants—Facebook, Google (which includes Youtube and Blogger), Twitter, Wikipedia, WordPress—who decided to silence people and organizations on their Enemies List (including yours truly).
“Curate.” Please use English. The author was apparently thinking “censor,” but realized that that would refute her argument.
Section 230: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
The author sounds like someone who is completely unfamiliar with how the Internet works. Its titans are publishers because they increasingly make it impossible for individuals whose politics they hate to use their platforms. Thus, they violate 230's terms.
“Without Section 230, any complaint could thus be sufficient to make a company liable for user-created content. Companies would have every incentive to simply take down content or ban any users whom others flagged.”
But they are already rampantly taking down content!
I don’t get the author’s take on 230 at all. She asserts that it protects ordinary users, as well as tech monopolists, but that’s a complete misrepresentation. When I publish a comment, or an article on the Web, I do not do so under the framework of 230, but under that which was created by the Supreme Court’s 1964 Sullivan decision.
The author’s pathetic mockery of Sen. Josh Hawley is a contemporary update of the “conspiracy theory” nonsense originated by Democrat propagandist Richard Hofstadter in 1962. Glibertarians who are already in the author’s choir will no doubt enjoy a laugh at the Senator’s expense, but such glib propaganda will backfire with those not in the choir.
“It's easy to imagine massive backlogs of challenged content, much of it flagged strategically by bad actors for reasons having nothing to do with either safety or veracity. Silencing one's opponents would be easy.”
That’s the reality with 230, not without it.
The author lists many virtues, which she attributes to 230. Those virtues are thanks to the First Amendment, not 230.
"The author sounds like someone who is completely unfamiliar with how the Internet works."
Whereas you...
" Its titans are publishers because they increasingly make it impossible for individuals whose politics they hate, to use their platforms"
Whereas you, apparently, would like to assert a positive right to use other people's property as you see fit, rather than as they see fit, and apparently you want the government to assist you with this.
Fucking Communist.