
If Facebook Can Get Rid of Dick Pics, It Can Stop All Drug Deals Too, Claim Activists

Gutting Section 230 would make it harder to track drug deals, not easier.


"When was the last time anybody here saw a dick pic on Facebook?" asked Gretchen Peters, the chair of the Alliance to Counter Crime Online (ACCO), addressing representatives from the House Committee on Energy and Commerce last week. "If they can keep genitalia off of these platforms, they can keep drugs off of these platforms."

Peters was part of a panel sent to testify on Section 230, the piece of legislation that protects tech companies from facing liability for content posted by third-party users online. There has been a great deal of misinformation peddled about the law—namely that platforms are prohibited from moderating content they deem harmful. In reality, it encourages tech companies to remove posts "in good faith," so as to prevent the internet from devolving into a cesspool of negativity. Peters says they should be doing more.

But her testimony was laced with inaccuracies that reflect a fundamental misunderstanding about her goal, which is to punish online platforms and remove their legal protections should they fail to aggressively police illicit content.

There is an "incredible range and scale of illicit activity happening online," she mused. "It is far worse than I ever imagined."

Issuing an apocalyptic warning to the audience of members of Congress, Peters went on to describe how social media giants are actively helping facilitate the "public health crisis" known as the opioid epidemic, which she said "is claiming more than 60,000 American lives each year." (That's incorrect: There were about 47,000 deadly overdoses in 2017, the latest year for which data are available.) A stream of studies "by ACCO members and others" affirmatively shows the "widespread use of Google, Twitter, Facebook, Reddit, [and] YouTube to market and sell fentanyl, oxycodone, and other highly addictive, often deadly substances to U.S. consumers," said Peters. "Every major internet platform has a drug problem."

Yet which studies she is referencing remains unclear, as does her definition of "widespread." The Center for Safe Internet Pharmacies (CSIP), which aims to curb illegal pharmaceutical sales online, estimates that less than 5 percent of all opioid purchases come from internet transactions. Interestingly, that same study surmises that the majority of such sales occur on the "dark web," as opposed to the legally operated platforms that have drawn Peters' ire.

"The voluntary efforts of internet and payment platforms such as Google, PayPal, and Bing, among other companies, to curb the online promotion of illicit products have disrupted these illicit businesses' operations," the report concludes, "specifically by removing the options of paid advertising and the most common payment methods."

CSIP also specifically mentions Reddit, whose CEO, Steve Huffman, testified that the website would cease to exist in its current form should Section 230 protections be removed. "Drug vendors also take advantage of anonymous chat forums to find customers, but the most popular of these, reddit.com, has recently seen more scrutiny and enforcement by platform operators," the report says.

Yet a vastly different impression was left with many members of Congress, several of whom said they found Peters' testimony "jarring" and "horrifying."

But never before has an industry been granted "total immunity no matter what their harm brings to consumers," Peters said, which sounds deeply alarming on its face. The implication is that platforms can facilitate heinous crimes in broad daylight without fear of any repercussions.

The problem is that Peters' claim is blatantly untrue.

Section 230 already contains a carve-out for federal criminal law, meaning that online platforms implicated in illegal behavior can be charged accordingly. Consider the Silk Road, a now-defunct dark web marketplace used to traffic illicit drugs. Its owner, Ross Ulbricht, is serving a double-life sentence plus 40 years. Not even Section 230 could save him from the harsh consequences of the drug war.

So what does Peters propose that Congress do? One can't be sure—she offered no practical insight, other than a veiled remark about the need for "revised language" in the legislation.

What course of action she would have these companies take isn't completely clear, either, although she provided some clues. Her dick pic example—which she gave in response to a question about concrete solutions—suggested that these websites have concocted the perfect crime-fighting algorithm. If they've figured out how to eliminate full-frontal nudity, so too should they be able to pinpoint and eradicate the drug market, she implies.

But this is preposterous when considering the intricate nature of black markets. It's easy to find and flag a post that reads, "CHEAP FENTANYL FOR SALE!"—just as a picture of a penis might stand out in one's newsfeed. Platforms are already working to erase that kind of content with relative success. It is far harder to uncover covert illegal postings, many of which use secret Facebook groups and code words to fly under the radar. No advanced algorithm can perfectly pinpoint those. A 100 percent removal rate would likely require the manual review of every single item posted to those platforms—both an impossible undertaking, and something Section 230 was supposed to protect companies from having to do.
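To illustrate the gap described above, here is a toy sketch of keyword-based moderation—the banned-word list, posts, and function name are all hypothetical, not any platform's actual system. A filter like this catches the blatant listing instantly but sails right past coded slang:

```python
# Toy keyword filter: flags posts containing an explicit banned term.
# Word list and sample posts are hypothetical, for illustration only.
BANNED_KEYWORDS = {"fentanyl", "oxycodone", "heroin"}

def flags_post(text: str) -> bool:
    """Return True if any banned keyword appears as a word in the post."""
    words = text.lower().split()
    return any(keyword in words for keyword in BANNED_KEYWORDS)

posts = [
    "CHEAP FENTANYL FOR SALE!",                        # blatant: flagged
    "anyone know where Tina is hanging out tonight?",  # coded slang: missed
]

for post in posts:
    print(flags_post(post))
```

The first post trips the filter; the second—using the kind of street code ("Tina") that commenters below describe—does not, because nothing in the text matches a banned term. Closing that gap is exactly the part no algorithm does perfectly.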

Even still, Peters would like to see platforms punished for failing to adequately stand up to organized crime. "If it's illegal in real life, it ought to be illegal to host it online," she repeated, meaning that tech firms would be criminally liable for failing to find each and every unlawful post.

And in reality, that effort would have the opposite of its intended effect: Worried about pending lawsuits or criminal charges, companies would be apt to remove any and all troublesome content, making it more difficult for law enforcement to track down those who break the rules.

Ironically, Peters seemed to recognize that shortfall when testifying. When tech companies remove illicit activity, she said, it destroys "critical evidence of a crime" and "[helps] criminals to cover their tracks." She should not expect that to get any better if her vague policy solution prevails.



  1. >>”When was the last time anybody here saw a dick pic on Facebook?” asked Gretchen Peters

    threat?

    1. She looks like she might be able to make good on that threat.

    2. Nostalgia?

  2. >>”When was the last time anybody here saw a dick pic on Facebook?” asked Gretchen Peters

    Who’s the dude in the pic?

    1. Richard Nixon?

      1. Rico has let himself go.

  3. But her testimony was laced with inaccuracies that reflect a fundamental misunderstanding about her goal, which is to punish online platforms and remove their legal protections should they fail to aggressively police illicit content.

    I had to re-read this several times. I’m not deeply familiar with the full text of section 230, but what familiarity I do have mainly centers around the simple notion that sec. 230 shields the platforms from liability based on what their users post. Fine.

    But clearly we’ve seen websites shut down precisely for the content their users posted: Backpage.com comes to mind. So it seems to me that under our current regime, sites ARE responsible if they fail to aggressively police illicit content. Otherwise Ross Ulbricht would be Jeff Bezos.

    1. The difference is that websites like Silk Road existed for the sole purpose of actively promoting illicit content. Facebook, Google, and the like are general platforms that do not exist for such reasons and have millions and millions of daily posts—far too many to police. Making them culpable for every random post would destroy them. What’s more, as stated in the piece, social media sites are doing a fairly good job at pinpointing illegal content; studies show that most online opioid transactions are completed on the dark web, because legally operated platforms remove those posts fairly rigorously.

      1. The difference is that websites like Silk Road existed for the sole purpose of actively promoting illicit content.

        I’m sure the courts saw it that way, but did Silk Road really exist for that purpose, and did it “actively promote” illicit content? Were there any explicit communications from Silk Road operators—or the website—that it was a drug market?

        What’s more, as stated in the piece, social media sites are doing a fairly good job at pinpointing illegal content; studies show that most online opioid transactions are completed on the dark web, because legally operated platforms remove those posts fairly rigorously

        Right, so see my post below. Does Ms. Peters not have a point?

        1. Yes, that was indeed the point of the Silk Road. It existed as a marketplace to sell illegal drugs to consumers. That is not the point of the online platforms that Peters would like to regulate.

          And no, she does not. Her point operates on two false assumptions: that there are many illicit drug transactions occurring on these platforms (there aren’t) and that social media’s AI is advanced enough to find stealthy illegal postings (it isn’t). Regarding the latter, the algorithms are fairly unreliable in these cases—they can pinpoint the very obvious examples, but not the ones shrouded in some sort of facade. Most people don’t advertise illegal drugs in plain English. Holding companies accountable for not finding those covert posts would incentivize them to remove content excessively, squashing legal speech, so as to avoid lawsuits and criminal penalties.

          1. : that there are many illicit drug transactions occurring on these platforms (there aren’t) and that social media’s AI is advanced enough to find stealthy illegal postings (it isn’t).

            The rate of illicit activity isn’t interesting to me—even if she’s hammering that point. Let’s pretend that it is, or might become so some time in the future, which leads to her second point: Social media’s AI is advanced enough to find stealthy illegal postings.

            Don’t the tech companies believe their AI and moderation techniques are in fact advanced enough?

            You know they’re not, I know they’re not. But I keep seeing tech company execs and representatives suggest it is.

            1. Most of the responses that I’ve heard from executives haven’t claimed that. The typical remark is that they’re doing the best they can, but that they “can always do more.” Regardless of what vague replies they give, regulating them as if they have all of the answers—when we know they don’t—would seriously damage both technological innovation and the exchange of ideas online.

              1. Most of the responses that I’ve heard from executives haven’t claimed that. The typical remark is that they’re doing the best they can, but that they “can always do more.”

                In a quest to make Instagram a kinder, gentler place, the founders had borrowed from Facebook an AI tool known as DeepText, which was designed to understand and interpret the language people were using on the platform. Instagram engineers first used the tool in 2016 to seek out spam. The next year, they trained it to find and block offensive comments, including racial slurs. By mid-2018, they were using it to find bullying in comments, too. A week after Mosseri took over in October, Instagram announced it wouldn’t just use AI to search for bullying in remarks tacked below users’ posts; it would start using machines to spot bullying in photos, meaning that AI would also analyze the posts themselves.

                That’s an Instagram executive bragging that their AI would be able to identify threatening photographs.

                Again, to me, I don’t support Peters’ position at all. However, it seems that she’s(?) merely getting her hooks into something the major social media platforms have been claiming they can do for some time now, and demanding they apply it to drug activity. And I say again, let’s pretend for a moment that “we’re” right, that the problem isn’t as pervasive as she claims—that’s a hollow point. It might be pervasive tomorrow, or next year, or in five years. If it becomes pervasive, does her point become legitimate? Since the companies themselves claim to have magical tools which police problematic conversations and photos, it doesn’t seem a stretch to demand they remove drug activity.

          2. With social media users numbering in the billions, all hailing from various backgrounds and bringing diverse moral codes to today’s wildly popular platforms, a space for hate speech has emerged. Internet service providers have responded by employing AI-powered solutions to address this insidious problem.

            Hate speech is a serious issue. It undermines the principles of democratic society and the rules of public debate. Legal views on the matter vary. On the internet, every statement that transgresses the standards for hate speech established by a given portal (Facebook, Twitter, Wikipedia etc.) may be banned from publication. To get around such bans, numerous groups have launched platforms to exchange their thoughts and ideas. Stricter definitions of hate speech are common. They make users feel safe, which is paramount for social media sites as the presence of users is often crucial to income. And that’s where building machine learning models spotting the hate speech comes in.

            Needless to say, the paper goes on with great and furious confidence in how AI can tackle hate speech. Hate… fucking… speech.

        1. To the people that matter: The people with guns who throw you in a cage.

    2. But clearly we’ve seen websites shutdown precisely for the content their users posted: Backpage.com comes to mind. So it seems to me that under our current regime, sites ARE responsible if they fail to aggressively police illicit content.

      Under the current regime, big corporations need to do the bidding of whoever is in power in Washington, otherwise, they get hit with audits, anti-trust investigations, new legislation, and bad press from politicians. That’s why Google and Facebook are likely sending all your data to the NSA and enforce social justice ideology; they don’t have a choice if they want to continue to exist.

  4. Not even Section 230 could save him from the harsh consequences of the drug war.

    But why not? It seems to me that Section 230 is selectively applied, no?

    But this is preposterous when considering the intricate nature of black markets. It’s easy to find and flag a post that reads, “CHEAP FENTANYL FOR SALE!”—just as a picture of a penis might stand out in one’s newsfeed. Platforms are already working to erase that kind of content with relative success.

    full disclaimer before I start: My comments should not be interpreted as support for Ms. Peters.

    I can’t help but think Ms. Peters has a point. She’s kind of hoisting the tech companies by their own petard. We’ve seen the tech companies claim to have the capabilities to remove “hateful” content– and they’ve even deployed tools to do so.

    No matter those AI tools are patently retarded and wipe out tons of legitimate content in their wake, the big tech companies have staked ground in this area. Ms. Peters simply suggests they apply their own logic and tools in service of stopping Drug Content. There’s a part of me that can’t help but wonder if the Big Tech companies put themselves in this untenable position.

    1. “can’t help but wonder if the Big Tech companies put themselves in this untenable position.”

      Live by the algorithm, die by the algorithm. And do not believe their “can’t” which is inevitably more about “want.”

    2. Drug dealers should change their products names to Trumps (coke) and Obamas (heroin), so that conservatives and liberals will start yelling about censorship once the algorithms catch on.

    3. There’s a part of me that can’t help but wonder if the Big Tech companies put themselves in this untenable position.

      No, it’s mostly government that put them into that untenable position. When they got big, they got subject to government and political pressure on everything from surveillance to speech controls and social justice (“nice company you have there, would be a shame if anything happened to it”). That’s when they started hiring lobbyists, diversity consultants, and management that is in bed with the federal government. The founders were often inexperienced graduate students who didn’t (and don’t) know any better and just went along for the ride.

      Facebook and Google turned into the evil statist monsters they are today at the hand of evil statist government. In a different kind of economic and political system, they would have different management and policies today.

      1. I respectfully disagree. It seemed the tech companies started taking political sides and pushing agendas due to the overwhelming internal politics of their staff and management. It wasn’t until after that process started that government got involved. The fact the government has exactly the wrong solutions is merely the continuation of government doing what government does. But the big companies started putting their thumbs on the scale long before Uncle Sam was tapping them on the shoulder.

    4. “and they’ve even deployed tools to do so.”

      They’ve claimed to have deployed tools to do so. Where is the evidence these tools actually exist? Where is the evidence that said tools, assuming they exist, actually work? What is the false negative rate? What is the false positive rate?

      1. I don’t think there’s reason to doubt whether the tools exist. There is mountains of evidence the tools don’t work.

  5. I’d like to propose that websites which allow multiple users to share information be made illegal. Any person who wants to post anything online will need to create their own website. If any other person wants to comment on the first person’s comments, they’ll have to start their own website and hope the first person bothers to look at it. Once every person has their own website that they are legally responsible for, it will be much easier to stop online crime and put an end to bitching about the unfairness of biased views.

    1. Hey, here’s a nifty idea: First, you can limit yourself to communicating your own content on your own service, but you are solely responsible for what appears; or second, you can limit what others may communicate via your service and so share responsibility for what they communicate; or lastly you can allow anyone to communicate anything via your service and bear no responsibility for their private actions.

      Yeah, I know, it’s a novel approach that has never been tried before.

      1. Another person who doesn’t approve of people he doesn’t like doing things he doesn’t like. What a surprise.

        1. Yes, I don’t approve of big government and Facebook/Google colluding on censorship and creating a surveillance state.

          Apparently, you approve, which makes you some kind of statist.

          1. How about that. Not telling people how to use their property makes you a statist. I’m laughing at you so hard right now.

            1. What dishonest dreck. But from you that is hardly surprising.

              I am not telling anyone what they can or cannot do with their property, but as a libertarian with an understanding of the NAP I am noting that nobody should be legally insulated from the consequences of their actions.

              IOW, fuck off slaver.

              1. Lastly, if you were not such a twit you might have realized that the principles I laid out pretty much describe how we have treated both common carriers and the telecomms since about as long as they have been around.

                But it appears that was just too subtle for you to grasp.

                1. Careful with the insults, you’ll hurt my feelings and prove that you have nothing useful to say.

                  I am not telling anyone what they can or cannot do with their property

                  You’re just complaining about the unfairness of it all which is apparently super libertarian if this site is any guide.

    2. Any person who wants to post anything online will need to create their own website.

      We have that. It’s called federated social networking. “Hosting your own site” amounts to running an app on your PC or a few clicks on DigitalOcean. Hopefully, it can destroy Facebook and Twitter. Repealing regulatory capture in the form of Section 230 would help with that.

      Once every person has their own website that they are legally responsible for, …

      Let me help you with this: … things work in a more libertarian way.

      1. Great! Can’t wait for all this new technology for people to shit their pants over.

  6. We could fix this problem if we just got government a little more involved.

    Also, wouldn’t.

  7. “In reality, it encourages tech companies to remove posts “in good faith,””

    Binion, please define “good faith”. The problem is that they aren’t doing so in good faith.

    1. “Good faith” is a legal term of art. It occurs in numerous places in the US code, as well as state statutory codes, and numerous court decisions, especially in contract law. Basically, it’s that both parties will act honestly in accordance to the contract or agreement between the two parties.

      When you have a service level agreement that you, as a user, agree to abide by, and that SLA is written as broadly as the SLAs on Facebook, Youtube, etc, are, the likelihood of the site breaching good faith is nearly infinitesimally small.

  8. Backpage was glaringly obvious. Prostitution exists on facebook and Instagram, but there’s an added step and more fishing involved. If a sexy girl posts asking if any guys want to hang out and she puts up a contact (usually as text included in the image) it’s discussed off messenger. You really can’t prevent that because then pretty girls wouldn’t be able to post and ask if anyone wants to hang out and people would get angry if a post removal or a ban happened because it means facebook assumed you’re a prostitute. That would be funny though. Grandma gets banned for being lonely.
    Same with asking for “tea” or if anyone knows Tina. That means you’re looking to score meth. Facebook would think Grandma wants to have people over to shoot up crystal.

    The story above is forgetting that a dick picture is, well, a penis. You can’t make it not a penis and still have it be a dick pic.
    The problem with these companies is you can use terminology against them. For instance, “Does anyone have any spare Libra” could mean you’re looking to get heroin. They’re not going to ban mentions of their own currency. If Paypal, Starbucks, Walmart, etc stood for prostitution it creates another problem.

    1. Same with asking for “tea” or if anyone knows Tina. That means you’re looking to score meth. Facebook would think Grandma wants to have people over to shoot up crystal.

      *furiously taking notes*

      1. No – stay away from that shit, man.

      2. “Mark Zuckerberg” is code for unprotected sex and/or sharing needles so if anyone says they’re looking to party with Tina and Mark Zuckerberg don’t respond. Well, unless you’re into that. If so have fun.

    2. Same with asking for “tea” or if anyone knows Tina.

      Now tea means meth and not weed?

      1. Tea means meth to gay people. T, tea, do you know Tina, ever party with Tina, etc.

    3. “Does anyone have any spare Libra” could mean you’re looking to get heroin.

      You’d run into the same problem some KFC employees once did. Their secret code for buying crack from them through the drive through window was to ask them for “extra biscuits.” Such a rare request, so it was just bad luck when someone asked for extra biscuits and they were handed a crack rock.

  9. There has been a great deal of misinformation peddled about the law—namely that platforms are prohibited from moderating content they deem harmful. In reality, it encourages tech companies to remove posts “in good faith,”

    But it lets them get away with removing posts for partisan political reasons.

    Section 230 is regulatory capture. It should be eliminated. There is no reason for Facebook, Google, and other companies not to operate under the exact same terms as any other publisher.

    If we need anything like Section 230 at all, it should only apply to pure file and content hosting services, and companies should only be able to claim exemption from liability under it under clear, objective, strictly limited criteria for content removal.

    1. “If we need anything like Section 230 at all, it should only apply to pure file and content hosting services,”

      Well said. I would keep in email servers and ISP’s as well.

  10. What is very much in dispute is that the social media platforms are moderating in good faith. In good faith would suggest that the moderation rules are understandable, transparent, and equitably applied. There is tremendous evidence that they are none of these things.

  11. “prevent the internet from devolving into a cesspool of negativity”

    Too late.

    The Nattering Nabobs of Negativism have taken over.

  12. FascistBook shouldn’t ban its dick pics.
    It should ban its Tom and Harry pics for reasons too obvious to mention here.

  13. This pretty much explains the problem with the “logic” of these idiots.

    https://xkcd.com/1425/

  14. Not surprisingly, a woman references dick pics when talking about nudity on Facebook without mentioning the fact that women send pussy pics. I mean, Twitter has tons of those pics and even Snapchat has been nicknamed SnatchChat (and for good reason). But no, no, no, we must always protect women’s image by ignoring the fact that they would ever do such a thing like this. The denial and ignoring of the existence of pussy pics is the 21st century equivalent to the “girls don’t fart” argument.
