Shadow-Censorship on Social Media Sparks New Concerns for Open-Internet Advocates

The digital censors of tomorrow will control information by secretly limiting or obscuring the ways that people can access it online.



The future of information suppression may be much harder to detect—and thus enormously more difficult to counteract. The digital censors of tomorrow will not require intimidation or force; instead, they can exploit the dark art of "shadow-censorship."

Shadow-censorship is a way to control information by secretly limiting or obscuring the ways that people can access it. Rather than outright banning or removing problematic communications, shadow-censors can quietly wall off social-media posts or users, burying them in obscurity without the target's knowledge. To an individual user, it just looks like no one is interested in his or her content. But behind the scenes, sharing algorithms are being covertly manipulated so that it's extremely difficult for other users to view the blacklisted information.

In theory, there are a variety of ways that shadow-censorship could be applied on platforms like Twitter, Facebook, and YouTube. Users may be automatically unsubscribed from blacklisted feeds without notice. Social media analytics can be selectively edited after the fact to make some posts look more or less popular than they really were. Individual posts or users can be flagged so that they are shown in as few feeds as possible by default. Or provocative content that originally escaped selective filtering may be memory-holed after the fact, retrievable only by the eagle-eyed few who notice and care to draw attention to such curious antics.

In each situation, the result is to manipulate network dynamics so that individuals end up censoring themselves. No-knock raids and massive anti-sedition campaigns are unnecessary. To control sensitive information today, you can just make people believe that no one else cares. Eventually, they give up, cease their broadcasts, and move on to something else.

The concept of shadow-censorship tends to quickly invite skepticism. The scheme sounds more like a derivative plot from a lesser Philip K. Dick story than a true threat facing today's keyboard kulturkampfers. After all, what seems more likely: a conspiracy to silence posts that speak truth to power, or that our Internet friends simply find us boring? Sure, major technology companies like Facebook and Google and Twitter could engage in this malicious filtering. But why would they? They have reputations to uphold and users to keep happy. Should enough users come to distrust these platforms, they will exit for fairer alternatives and doom those networks' futures.

But as the Edward Snowden revelations have made clear, technology companies are prime targets for government capture or coercion. Internet firms have taken major hits to reputation and future profitability for their collaboration with U.S. surveillance programs, whether they were willing henchmen for authorities or quiet resisters dragging their feet. The mere capacity to control data access at major Internet traffic centers could prove irresistible for powerful forces seeking to massage perceived reality. We therefore cannot rule out the possibility that shadow-censorship measures may be employed even when they don't seem to be in a network's best interest.

What's more, we know that social-media platforms are able to shadow-censor because some of them already wield these techniques to neuter speech that's considered spam or abuse. Take Reddit. The content aggregator's Help page states that users who suddenly flood the website with a new submission may be "shadow banned," meaning that the user can continue to submit posts and links but they will not be visible to any other user. Reddit co-founder and current CEO Steve Huffman explained that he created the shadow-banning capability 10 years ago to abate the constant spambot attacks that plagued the website's early days. In an "Ask Me Anything" session with the community last summer, Huffman assured redditors that "real users should never be shadowbanned. Ever. If we ban them, or specific content, it will be obvious that it's happened and there will be a mechanism for appealing the decision."

But this tool could easily be abused by petty administrators seeking to stifle opposing opinions or attract advertiser dollars with a more sanitized brand. This appeared to be the case during the "Reddit Revolt" earlier this year, a tumultuous internecine battle between Reddit users and certain administrators over heavy-handed censorship and shadow-banning. Some Reddit users claimed they had been shadow-banned merely for criticizing controversial former CEO Ellen Pao and Reddit's lack of transparent moderation. After Pao stepped down from her post, many users hoped that the new management would repair damaged community trust and reclaim the website's mantle as a "bastion of free speech on the World Wide Web," as Reddit co-founder Alexis Ohanian described the platform in 2012. However, perceived shadow-censorship continues to be a hotly debated topic on Reddit, with some leaving the platform altogether in favor of the (currently) more open Voat community.

Rightly or wrongly, some people downplay or even welcome underhanded censorship of controversial Internet movements like the #RedditRevolt or the more widely reviled #Gamergate. If you consider these groups' speech and behaviors to be abuse on the level of physical violence, you will see no problem in shadow-banning their accounts or shadow-censoring their speech. But most of us would be deeply concerned to learn that shadow-censorship tactics devised to prevent spamming and harassment are instead being used to suppress evidence of government oppression or violence. And a recent incident on Twitter highlights just how salient this risk has become.

Last week, Jordan Pearson of Motherboard reported that Twitter appeared to have shadow-censored certain tweets about the "Drone Papers," a trove of leaked documents about the U.S. government's bloody drone campaigns in Yemen and Somalia that were released by The Intercept in mid-October.

An independent transparency activist named Paul Dietrich noticed that one of the documents released by The Intercept contained some barely readable text on the scanned page's back side that was not released with the first cache. He flipped the image, highlighted some of the obscured text so that it was easier to read, and shared it on Twitter. This nice bit of investigative work caught the eye of Jacob Appelbaum, a security researcher and hacker boasting a Twitter following of almost 100,000.

Appelbaum retweeted the image, but for some reason, Dietrich was not notified of the share, nor did the tweet appear on Appelbaum's feed. Noticing other strange notification behavior, Dietrich attempted to access Twitter over the anonymity network Tor. Lo and behold, the tweets appeared!

After a bit of poking around, a tweet alerting his followers to the shady behavior, and reports that others were experiencing similar problems, Dietrich concluded that some mechanism in Twitter itself was causing his tweets to be filtered out of many users' default feeds. Adding to the sketchiness, some tweets discussing this potential shadow-censorship subsequently appeared to be shadow-censored as well.

This inexplicable selective filtering caught the attention of the notoriously censorship-sensitive security community, which turned to Twitter staff for answers. Pearson contacted Twitter spokesperson Rachel Millner, who provided this response: "Earlier this week, an issue caused some Tweets to be delivered inconsistently across browsers and geographies. We've since resolved the issue though affected Tweets may take additional time to correct."

Millner's unsatisfying statement provides little explanation for a quite serious concern. It does not explain the strangely inconsistent personal notifications that users have reported, nor the extent or duration of the "issue." Most importantly, it does not explain why these particular tweets were affected and how many others might have been as well. Twitter declined to provide these details to me when I reached out, and it looks like others haven't had much luck getting more information either.

Without clarification, theories about potential shadow-censorship on Twitter have taken off online. Dietrich himself speculates that Twitter's new "suspected abusive Tweets" policy may be to blame. In April, Twitter announced that this new feature can limit the reach of tweets that exhibit a "wide range of signals and context that frequently correlates with abuse." Tweets flagged by this feature can be viewed by users who explicitly seek them out, but are hidden by default from the broader Twitter base.
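The behavior described — hidden by default, but retrievable when a user explicitly seeks it out — amounts to an opt-in visibility flag rather than outright deletion. Twitter has not published how its filter works, so this is only a hypothetical sketch with invented names of what such default-hidden filtering could look like:

```python
# Hypothetical sketch of default-hidden "quality filtering" (names
# invented; Twitter's implementation is not public). A flagged tweet
# is never deleted; it is simply excluded from default timelines
# unless the viewer explicitly requests it, e.g. by opening the
# author's profile directly.

flagged_as_suspected_abuse = {1003}  # tweet IDs scored by some abuse model


def should_display(tweet_id: int, explicitly_requested: bool) -> bool:
    """Flagged tweets survive, but only opt-in viewers ever see them."""
    if tweet_id in flagged_as_suspected_abuse:
        return explicitly_requested
    return True


def default_timeline(tweet_ids: list[int]) -> list[int]:
    """What an ordinary follower's feed shows without any extra effort."""
    return [t for t in tweet_ids if should_display(t, explicitly_requested=False)]
```

Note how well this matches the reported symptoms: the tweet exists and loads when fetched directly (as over Tor, or via a permalink), yet never surfaces in followers' default feeds or notifications.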

The similarities between this feature and the issue that appeared to shadow-censor the drone tweets this past week are evident. But without a more transparent explanation from Twitter's offices, most users simply cannot know whether their tweets are being filtered by such a tool or not. This is not just bad news for Twitter users who may be inappropriately shadow-censored because their innocent tweets happen to resemble the "signals and context" that Twitter programmers believe "correlates with abuse." It's also bad for Twitter itself. As the company attempts to edge into profitable territory with its new "Promoted Moments" feature, user trust and engagement will be more important than ever. But as the Reddit incident suggests, launching a broad "war on abuse" to coax advertiser investment can backfire if it undermines platform credibility.

The long-term risks that shadow-censorship presents, however, are much bigger than the potential decline of any one social-media platform. Should these tactics become widespread enough, or even should the mere perception that these tactics are being abused become widespread enough, much of the Internet will cease to function as an open and decentralized network where ideas and opinions can freely commingle, clash, and develop. Those with a radical streak may come to feel as though they are alone in their own walled garden, their rebel yells unreturned by their suspiciously quiet peers.

If shadow-censorship indeed becomes a major concern over the coming years, advocates of freedom will face challenges in counteracting it. Social-media platforms are, of course, run by private businesses that are free to design their algorithms as they desire within the confines of current law. And government regulation of such activity would merely create a higher mechanism that powerful parties could abuse to further their own interests.

Perhaps liberty-minded entrepreneurs will build alternative platforms that promise to protect and promote free speech, as was the case with Voat. For now, defenders of the open Internet must stay vigilant and vocal, monitoring platforms for possible abuses of shadow-censorship tools. Raising awareness of the problem and sparking a public conversation about the unconditional value of free speech online is an imperative first step.



  1. Perhaps liberty-minded entrepreneurs will build alternative platforms that promise to protect and promote free speech…

    Who will eventually also succumb to the demands of the hypersensitive victimhood industry and start putting mechanisms in place to protect their customers from speech whether those customers want that protection or not. Or are even aware of it. Activists suffering the online vapors usually win out.

    1. Ubiquitous process. Something new is created, through competition, risk, and rough environments. Then a middle phase. Then egalitarian/sameness/safety rulers take over. I wonder whether phase 4 inevitably is destruction, or equilibrium, and why the creators allow their creation to be occupied.

  2. If it can be done, it will be done.

    1. One must certainly hope so, for there will predictably be circumstances where this type of technique can usefully be employed by law enforcement authorities. For example, it is troubling that prosecutors have so far spent seven years combating inappropriately deadpan Gmail “parodies” in New York, without yet securing a final resolution and incarceration of the culpable party. It would appear that defendants are allowed, as matters stand, to force the authorities to spend too much time and money working on one criminal speech case, thereby taking away resources needed for suppressing other similarly inappropriate forms of expression. See the documentation of America’s leading criminal satire case, where readers will find a lengthy list of other “free speech” conduct crying out for prosecution, at:


  3. If you want more info on Reddit’s shadowbanning, just ask the mods on https://www.reddit.com/r/kotakuinaction . When a shadowbanned user posts on their subreddit, they have to approve it. They’re also nice enough to tell the user they’ve been shadowbanned.

    & It’s been happening for about a year now.

    1. If you want all the correct info on Reddit’s shadowbanning, relying on Reddit is unreliable.

  4. “Should these tactics become widespread enough, or even if the mere perception that these tactics are being abused becomes widespread enough, much of the Internet will cease to function as an open and decentralized network where ideas and opinions can freely comingle, clash, and develop.”

    That’d be a good enough result for those who manipulate information and information flow. Instead of having to completely hide abuse – staying entirely in shadows – a certain extent of awareness is even helpful. So this a very viable, realistic attempt.

  5. A local newspaper and radio website in Louisiana are now censoring comments in this deceptive manner. Anything that resembles a comment that would appear on Reason.com is blacklisted.

    1. That’s going to be really rough if they run an article about woodchippers.

  6. Isn’t the EU mandate that can force search engines to remove or “memory hole” results under the “right to be forgotten” principle a sort of meta “shadow banning?” The data is still there on the original website, but because it will never show up on search engine results, it effectively doesn’t exist for anyone not already familiar with the website.

  7. this is not really all that surprising. it is the natural progression of how media has learned to game the digital age. first, they will publish a hundred versions of articles with view points they support, and will mark viewpoints they don’t support as less “popular.” additional ways to move dissenting opinions down the accessibility scale seems like the logical next step

    the unfortunate part, is that someone with enough information and access to information to be a substantial source for the public to get a full picture of reality, will have a natural tendency to try and shape that picture. sources being for one side or the other of an issue is the norm. it is a problem, but I’m not sure that human nature makes it easy to correct.

    1. i suppose it is ironic that they are using anti bot technology, though….. software meant to keep robo spam from keeping real peoples posts from being seen, now being used on the people.

  8. I had to remove Reason from a media faucet because it was flooding my space with pictures of grinning or frowning Republicans and Democrats–just like the media I would never dream of letting in. Along with it came Republican infiltrators and loudmouthed heretic-hunters of the sort that shadow all outlier publications. This is an unusually interesting article of the sort I do want to highlight. Many thanks.

  9. What a bunch of twaddle. There’s no grand conspiracy at play here. Why, I can say whatever I like, like Jimmy Hoffa is buried in , and JFK was actually killed by , and a most certainly did crash in . All perfectly true and I have the evi . Thumbs up if you agree with me. Hey, how come I have 900 thumbs down and I haven’t even

  10. This isn’t shadow censorship, but I’m always fascinated to find out what kinds of posts of mine get removed on other sites. It’s really startling, sometimes, the amount of ideological conformity required. Look at these two recent examples of posts removed:

    At Insider Higher Ed:

    Discussion on Inside Higher Ed 41 comments
    Instructor says Gannon U demanded his resignation after ‘Newsweek’ wrote about his work in Iraq
    Vizzini 8 days ago Removed
    Note that Gannon is apparently not the kind of place you want to send your kids. I shudder to think what their sexual harassment policy must be like.

    At UCLA Institute of the Environment and Sustainability:

    Discussion on UCLA Institute of the Environment and Sustainability 28 comments
    Back to School
    Vizzini 8 days ago Removed
    …one needs only to consider how VW cheated in its emissions, and wonder how many extra lives might have been lost due to air pollution as a result?
    Approaching zero, I’d wager.

    Yes. saying you might not want to send your kid to a school that demanded a teacher’s resignation without even an investigation for behavior that wasn’t illegal and was known to be part of his previous job and was relevant to the courses he taught, or saying that you don’t think there have been any actual deaths associated with VW’s diesel engine emissions, are examples of BadThink that the special snowflakes of the public should not be exposed to.

  11. Great article, too bad it will be shadow-censored. I’ll share it anyway ’cause I still believe they can’t stop all of our communication without blow back(insert woodchipper comment here).

  12. Who knows what evil lurks in the hearts of men? The shadow (censors) knows.

  13. sometimes its not so shadowy, Scientific American kicked me off their website for pointing out that two of their articles conflicted on how climate change was going to ruin the world.


  15. I discovered many of my fb posts mysteriously gone a few weeks ago, mostly historical/political. I post my comments elsewhere now to be reposted there.

  16. I tried to register on reddit and they claimed that no matter what name i tried to use that it was already taken. I came to the conclusion that they recognized my computer ID and banned me. Who knew they had that capability?
