Science & Technology

Shadow-Censorship on Social Media Sparks New Concerns for Open-Internet Advocates

The digital censors of tomorrow will control information by secretly limiting or obscuring the ways that people can access it online.


The information suppression of the future may be much harder to detect, and thus enormously more difficult to counteract. The digital censors of tomorrow will not require intimidation or force; instead, they can exploit the dark art of "shadow-censorship."

Shadow-censorship is a way to control information by secretly limiting or obscuring the ways that people can access it. Rather than outright banning or removing problematic communications, shadow-censors can quietly consign social-media posts or users to inaccessible obscurity without the target's knowledge. To an individual user, it just looks like no one is interested in his or her content. But behind the scenes, sharing algorithms are being covertly manipulated so that it's extremely difficult for other users to view the blacklisted information.
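To make the mechanism concrete, here is a minimal sketch in Python of how a feed-assembly routine could implement this kind of covert filtering. Every name here is hypothetical; no platform has published such code.

```python
# Minimal sketch of shadow-banned feed assembly. All names are
# hypothetical; this illustrates the behavior described above, not
# any platform's actual code.

SHADOW_BANNED = {"target_user"}  # accounts flagged behind the scenes

def assemble_feed(posts, viewer):
    """Build the feed a given viewer sees.

    A shadow-banned author's posts are silently dropped from everyone
    else's feed, but the author still sees them in their own feed, so
    from their side nothing appears to be wrong.
    """
    feed = []
    for post in posts:
        if post["author"] in SHADOW_BANNED and post["author"] != viewer:
            continue  # covert filtering: no notice to anyone
        feed.append(post)
    return feed

posts = [
    {"author": "target_user", "text": "Why is no one responding?"},
    {"author": "normal_user", "text": "Hello, world"},
]

print(assemble_feed(posts, viewer="target_user"))  # sees both posts
print(assemble_feed(posts, viewer="normal_user"))  # sees only one
```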

In theory, there are a variety of ways that shadow-censorship could be applied on platforms like Twitter, Facebook, and YouTube. Users may be automatically unsubscribed from blacklisted feeds without notice. Social media analytics can be selectively edited after the fact to make some posts look more or less popular than they really were. Individual posts or users can be flagged so that they are shown in as few feeds as possible by default. Or provocative content that originally escaped selective filtering may be memory-holed after the fact, retrievable only by the eagle-eyed few who notice and care to draw attention to such curious antics.
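The analytics-editing variant is just as simple to imagine. A minimal sketch, again with entirely hypothetical names and numbers:

```python
# Hypothetical sketch of after-the-fact analytics editing: displayed
# engagement counts for blacklisted posts are deflated before serving,
# so a post looks less popular than it actually was.
DEFLATION_FACTOR = 0.1  # hypothetical; show 10% of real engagement

def displayed_count(real_count, blacklisted):
    """Return the engagement number shown to users."""
    if blacklisted:
        return int(real_count * DEFLATION_FACTOR)
    return real_count

print(displayed_count(1200, blacklisted=False))  # 1200
print(displayed_count(1200, blacklisted=True))   # 120
```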

In each situation, the result is to manipulate network dynamics so that individuals end up censoring themselves. No-knock raids and massive anti-sedition campaigns are unnecessary. To control sensitive information today, you can just make people believe that no one else cares. Eventually, they give up, cease their broadcasts, and move on to something else.

The concept of shadow-censorship tends to invite quick skepticism. The scheme sounds more like the derivative plot of a lesser Philip K. Dick story than a true threat facing today's keyboard kulturkampfers. After all, what seems more likely: that there is a conspiracy to silence posts that speak truth to power, or that our Internet friends simply find us boring? Sure, major technology companies like Facebook and Google and Twitter could engage in this malicious filtering. But why would they? They have reputations to uphold and users to keep happy. Should enough users come to distrust these platforms, they will exit for fairer alternatives and doom these networks' futures.

But as the Edward Snowden revelations have made clear, technology companies are prime targets for government capture or coercion. Internet firms have taken major hits to reputation and future profitability for their collaboration with U.S. surveillance programs, whether they were willing henchmen for authorities or quiet resisters dragging their feet. The mere capacity to control data-access for major Internet traffic centers could prove irresistible for powerful forces seeking to massage perceived reality. We therefore cannot rule out such shadow-censorship measures simply because they don't seem to be in a network's best interest.

What's more, we know that social-media platforms are able to shadow-censor because some of them already wield these techniques to neuter speech that's considered spam or abuse. Take Reddit. The content aggregator's Help page states that users who suddenly flood the website with new submissions may be "shadow banned," meaning that the user can continue to submit posts and links but they will not be visible to any other user. Reddit co-founder and current CEO Steve Huffman explained that he created the shadow-banning capability 10 years ago to beat back the constant spambot attacks that plagued the website's early days. In an "Ask Me Anything" session with the community last summer, Huffman assured redditors that "real users should never be shadowbanned. Ever. If we ban them, or specific content, it will be obvious that it's happened and there will be a mechanism for appealing the decision."

But this tool could easily be abused by petty administrators seeking to stifle opposing opinions or to court advertiser dollars with a more wart-free brand. This appeared to be the case during the "Reddit Revolt" earlier this year, a tumultuous internecine battle between Reddit users and certain administrators over heavy-handed censorship and shadow-banning. Some Reddit users claimed they had been shadow-banned merely for criticizing controversial former CEO Ellen Pao and Reddit's lack of transparent moderation. After Pao stepped down from her post, many users hoped that the new management would repair damaged community trust and reclaim the website's mantle as a "bastion of free speech on the World Wide Web," as Reddit co-founder Alexis Ohanian described the platform in 2012. However, perceived shadow-censorship continues to be a hotly debated topic on Reddit, with some leaving the platform altogether in favor of the (currently) more open Voat community.

Rightly or wrongly, some people downplay or even welcome underhanded censorship of controversial Internet movements like the #RedditRevolt or the more widely reviled #Gamergate. If you consider these groups' speech and behaviors to be abuse on the level of physical violence, you will see no problem in shadow-banning their accounts or shadow-censoring their speech. But most of us would be deeply concerned to learn that the shadow-censorship tactics devised to prevent spamming and harassment are being used instead to suppress evidence of government oppression or violence. And a recent incident on Twitter highlights just how salient this risk has become.

Last week, Jordan Pearson of Motherboard reported that Twitter appeared to have shadow-censored certain tweets about the "Drone Papers," a trove of leaked documents about the U.S. government's bloody drone campaigns in Yemen and Somalia that The Intercept released in mid-October.

An independent transparency activist named Paul Dietrich noticed that one of the documents released by The Intercept contained some barely readable text on the scanned page's back side that was not released with the first cache. He flipped the image, highlighted some of the obscured text so that it was easier to read, and shared it on Twitter. This nice bit of investigative work caught the eye of Jacob Appelbaum, a security researcher and hacker boasting a Twitter following of almost 100,000.

Appelbaum retweeted the image, but for some reason, Dietrich was not notified of the share, nor did the tweet appear on Appelbaum's feed. Noticing other strange notification behavior, Dietrich attempted to access Twitter over the anonymity network Tor. Lo and behold, the tweets appeared!

After a bit of poking around, a tweet alerting his followers to the shady behavior, and reports that others were experiencing similar problems, Dietrich concluded that some mechanism in Twitter itself was causing his tweets to be filtered out of many users' default feeds. Adding to the sketchiness, some tweets discussing this potential shadow-censorship subsequently appeared to be shadow-censored as well.
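A rough sketch of the kind of cross-check Dietrich performed by hand, assuming a local Tor client exposing a SOCKS proxy on port 9050 and the Python requests library installed with SOCKS support; the tweet URL and marker text are hypothetical placeholders:

```python
# Compare how a public tweet page renders when fetched directly versus
# through Tor. Assumes a local Tor SOCKS proxy on 127.0.0.1:9050 and
# `pip install requests[socks]`. URL and marker are hypothetical.
import requests

TWEET_URL = "https://twitter.com/example_user/status/1234567890"
TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

def fetch(url, proxies=None):
    """Fetch the raw HTML of the tweet page, optionally through Tor."""
    resp = requests.get(url, proxies=proxies, timeout=30)
    resp.raise_for_status()
    return resp.text

direct_html = fetch(TWEET_URL)
tor_html = fetch(TWEET_URL, proxies=TOR_PROXIES)

# If a known phrase from the tweet appears on one network path but not
# the other, delivery is inconsistent across routes -- the anomaly
# described above.
MARKER = "known phrase from the tweet"
print("visible directly:", MARKER in direct_html)
print("visible via Tor: ", MARKER in tor_html)
```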

This unexplained selective filtering caught the attention of the notoriously censorship-sensitive security community, which turned to Twitter staff for answers. Pearson contacted Twitter spokesperson Rachel Millner, who provided this response: "Earlier this week, an issue caused some Tweets to be delivered inconsistently across browsers and geographies. We've since resolved the issue though affected Tweets may take additional time to correct."

Millner's unsatisfying statement provides little explanation for a quite serious concern. It does not explain the strangely inconsistent personal notifications that users have reported, nor the extent or duration of the "issue." Most importantly, it does not explain why these particular tweets were affected and how many others might have been as well. Twitter declined to provide these details to me when I reached out, and it looks like others haven't had much luck getting more information either.

Without clarification, theories about potential shadow-censorship on Twitter have taken off online. Dietrich himself speculates that Twitter's new "suspected abusive Tweets" policy may be to blame. In April, Twitter announced that this new feature can limit the reach of certain tweets that match a "wide range of signals and context that frequently correlates with abuse." Tweets flagged by this feature can be viewed by users who explicitly seek them out, but will be hidden by default from the broader Twitter base.
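Under that policy as described, the behavior would look something like the following minimal sketch; the class and field names are hypothetical, since Twitter has not published its implementation:

```python
# Hypothetical sketch of "hidden by default, visible on request."
from dataclasses import dataclass

@dataclass
class Tweet:
    tweet_id: int
    author: str
    text: str
    suspected_abusive: bool = False  # set by an upstream abuse classifier

def default_timeline(tweets):
    """Default feeds silently omit flagged tweets -- no notice given."""
    return [t for t in tweets if not t.suspected_abusive]

def direct_lookup(tweets, tweet_id):
    """A user who explicitly seeks a tweet out can still retrieve it."""
    return next((t for t in tweets if t.tweet_id == tweet_id), None)

tweets = [
    Tweet(1, "reporter", "New leak published", suspected_abusive=True),
    Tweet(2, "friend", "Lunch was great"),
]
print(default_timeline(tweets))  # only the unflagged tweet
print(direct_lookup(tweets, 1))  # the flagged tweet, on request
```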

The similarities between this feature and the issue that appeared to shadow-censor the drone tweets this past week are evident. But without a more transparent explanation from Twitter's offices, most users simply cannot know whether their tweets are being filtered by such a tool or not. This is not just bad news for Twitter users who may be inappropriately shadow-censored because their innocent tweets happen to resemble the "signals and context" that Twitter programmers believe "correlates with abuse." It's also bad for Twitter itself. As the company attempts to edge into profitable territory with its new "Promoted Moments" feature, user trust and engagement will be more important than ever. But as the Reddit incident suggests, launching a broad "war on abuse" to coax advertiser investment can backfire if it undermines platform credibility.

The long-term risks that shadow-censorship presents, however, are much bigger than the potential decline of any one social-media platform. Should these tactics become widespread enough, or even should the mere perception that they are being abused become widespread enough, much of the Internet will cease to function as an open and decentralized network where ideas and opinions can freely commingle, clash, and develop. Those with a radical streak may come to feel as though they are alone in their own walled garden, their rebel yells unreturned by their suspiciously quiet peers.

If shadow-censorship indeed becomes a major concern over the coming years, advocates of freedom will face challenges in counteracting it. Social-media platforms are, after all, run by private businesses that are free to design their algorithms as they desire within the confines of current law. Government regulation of such activity would merely create a higher mechanism that powerful parties can abuse to further their own interests.

Perhaps liberty-minded entrepreneurs will build alternative platforms that promise to protect and promote free speech, as was the case with Voat. For now, defenders of the open Internet must stay vigilant and vocal, monitoring platforms for possible abuses of shadow-censorship tools. Raising awareness of the problem and sparking a public conversation about the unconditional value of free speech online is an imperative first step.