And when third parties aren’t armed with subpoenas, court orders, or other valid legal processes, Twitter typically does not take action. In September 2012, for example, Al-Shabaab, a Somali-based affiliate of Al Qaeda, tweeted photos showing the corpses of several Kenyan soldiers it took credit for killing in combat. A few weeks later, seven Republican members of Congress sent a letter to the FBI urging it to make Twitter remove Al-Shabaab’s account and those of various other “Specially Designated Global Terrorist entities.” As of early January, the five accounts the lawmakers complained about continue to function on Twitter. A letter to the FBI, even by influential types on Capitol Hill, does not carry the force of a court order. Twitter was unmoved.
Dissidents and Dilettantes
In a November 2006 Washington Post column, just a few months after Twitter was launched, Michael Kinsley described the site as “the ultimate in solipsism.” A few months later, Minneapolis Star-Tribune columnist James Lileks concluded that the platform was “as banal as it [was] addictive,” a “useless productivity-destruction mechanism” that he couldn’t stop reading. In those early days, even fans of the service were mostly struck by its technologically enforced shallowness.
Perhaps the first person to truly recognize—or at least demonstrate—Twitter’s revolutionary power was the actor Ashton Kutcher, who joined the service in early 2009 and, in a matter of months, became its first user to attract more than a million followers. Beating out CNN for that honor in a closely watched battle, Kutcher showed that Twitter was a full-fledged broadcast medium that could push content to huge numbers of people in a single instant, and that you did not need to own a broadcast network to get your message out. A few months later, in Iran, dissidents jumped on the bandwagon, using Twitter to help publicize their efforts to hold their government accountable during a contested election. At one critical juncture, the U.S. State Department even asked Twitter to postpone its scheduled maintenance to ensure that the protesters had uninterrupted access to the outside world.
Iran’s dissidents and others throughout the Islamic world demonstrate the flip side of lawmaker complaints about terrorists using Twitter. Pseudonymity gives cover both to bad actors and to those trying to evade repressive governments; forcing disclosure would expose the good guys along with the bad. That said, true anonymity is extremely hard to achieve online, and dissidents facing real danger of retribution should go to greater lengths to conceal their identities than a simple Twitter pseudonym.
Twitter’s pseudonymity functions as a liberating force in far less noble cases as well. For anyone with a more casual interest in maintaining a low profile, Twitter is an oasis in the post-Zuckerbergian world of pervasive disclosure. Sure, it wants to know what you’re doing every second, just like Facebook and Google+ do. But while Facebook orders experience around identity, and Google+ really, really wants you to use “the name your friends, family, or coworkers call you” when you sign up, Twitter doesn’t blink an eye if you submit a ridiculous alias and a throwaway email address.
Fifteen years ago, the entire online world pretty much worked like this, and the multiplicity of selves that cyberspace permitted was generally considered a feature rather than a bug. Indeed, it’s one reason the early Web was such a liberating place, where kindergarten teachers could freely discuss their passion for medical marijuana without alerting their superiors, or Buffy fans could dissect the latest episode without exposing their digital necks to armies of vampire marketers who would feast on the knowledge for eternity. There are still plenty of places in cyberspace that accommodate anonymity or pseudonymity, but that’s less and less true for centralized platforms such as Facebook.
But Twitter is just as mainstream as Facebook, just as central to the culture, and its easy pseudonymity offers a small degree of shelter from the Panopticon. The Geek Feminism Wiki has compiled a list of the various sorts of users who are harmed by a “real names” policy. The list is lengthy, filled with both broad and narrow classes of users, including LGBT people, people with disabilities, stalking victims, ex-convicts, and local government whistleblowers. As much as explicit identity engenders accountability, it can also inhibit candor and discourse that would otherwise benefit individuals and the culture at large.
There is, in short, a huge market for pseudonymity, not just in specialized or obscure realms but on mass-market platforms that make it easy to connect instantly with millions of other users. “We’re not wedded to pseudonyms,” Dick Costolo stated at a Twitter press conference in September 2011. “We’re wedded to people being able to use the service how they see fit.” But as services like Facebook and Google+ demand increasing degrees of disclosure from their users, the counter-market for platforms that offer more opacity and autonomy will simultaneously expand. Twitter is the pseudonymity wing of the pseudonymity party.
Free Speech: The “Competitive Advantage”
In the summer of 2010, Google CEO Eric Schmidt intimated that “absolute anonymity” was going the way of floppy disks and answering machines, and that eventually governments would implement some sort of mandatory identity verification program in cyberspace. In July 2011, at a social media round table sponsored by Marie Claire, Randi Zuckerberg, who was then Facebook’s marketing director (and is still Mark’s sister), exclaimed that “anonymity on the Internet has to go away” in order to promote better user behavior.
Twitter has developed into a major hub for mainstream discourse in large part because millions of people tweet under their real names, including thousands of celebrities and other prominent people whose accounts have been explicitly verified by Twitter. This high degree of disclosure leads to a high degree of trust. Twitter offers substantial opportunities to interact with immediately identifiable people, and that makes it a good place for strengthening real-world social ties and business relationships, and also for keeping track of Donald Trump’s latest feuds (“Dummy Graydon Carter doesn’t like me too much…great news. He is a real loser!”) and what Kim Kardashian thinks of Atlantic City (“Atlantic City!!!!!!”).
But as much as Twitter benefits from users who tweet under their real names—especially from verified celebrities—it hasn’t seen any need to prohibit fake personas. Its combination of authenticated identity and easy pseudonymity, with no barriers to access between these two very distinct classes of users, is a rare attribute, and a potentially volatile one. Indeed, if you asked the average crazed fan to develop an ideal stalking application, he’d probably come up with something pretty close to Twitter.
Twitter gives its users a great degree of latitude. It lets them determine how explicitly or covertly they want to present themselves. It places comparatively few limits on the types of permissible speech. Some users persistently abuse this tolerance, and others unfortunately pay the price. Eliminating pseudonymity on Twitter would surely reduce its supply of unpleasant discourse, as would taking a more proscriptive stance on what kinds of speech Twitter explicitly prohibits. As Twitter seeks to satisfy the investors who have poured more than $1 billion into the company over the last several years, it might lose its tolerance for corpse-tweeting terrorists in the name of creating a better advertising climate.
But so far, Twitter’s laissez-faire attitude toward online discourse has been its greatest business proposition, so much so that Twitter’s chief legal counsel, Alex Macgillivray, told The New York Times that he views Twitter’s commitment to free speech as a “competitive advantage.” The trust that Twitter places in users creates complications, but it also creates a place for people who appreciate the convenience and efficiency of contemporary social networks but would still like a little bit of the freedom and autonomy that animated the first-generation Web.
In 2012, the privately held Twitter generated an estimated $250–$350 million selling “promoted tweets” to advertisers who are eager to reach the company’s millions of users. Those users are drawn to the service in large part because of its relatively unregulated nature. By 2014, industry observers predict, Twitter’s ad revenues could top $1 billion a year. But what if those predictions are actually shortchanging our appetite for robust, unfettered discourse?
Consider the cultural landscape now. At college campuses across the nation, highly restrictive speech codes are the norm. Anonymous political speech, once the province of our Founding Fathers, is now viewed as a scourge upon the land. An increasing number of states have adopted anti-bullying laws that potentially make it a crime to “torment or embarrass” a person simply by sending a “lewd” or “obscene” electronic message. Such measures arise out of our expressed desires for more civility, more accountability, more tightly controlled speech zones. And yet every month, the least regulated mainstream communications platform on the planet draws millions of new users. They come for Twitter’s immediacy, for its simplicity, and for the way it so easily connects them to The Game and Michelle Malkin. Free speech, it turns out, is popular.