It was a typical Thursday night on Twitter. Mid-December, 6 p.m. Pacific Time. Louisiana Gov. Bobby Jindal was riffing on why the GOP should endorse over-the-counter birth control. Jenna Haze, two-time winner of the FAME Dirtiest Girl in Porn award, was posting an Instagrammed photo of storm clouds lit by the setting sun. Whole Foods Market wanted to let everyone know that groceries make thoughtful Christmas gifts. And the hip hop artist The Game and scores of his fans were jawing at the conservative pundit Michelle Malkin.
Malkin’s website Twitchy.com, whose brand of sustainable post-peak journalism feasts on foraged tweets to produce a timely stream of snarky outrage, had published an item about the cover artwork on The Game’s new album, Jesus Piece. Even The Game had described this artwork as “controversial.” It shows Jesus with a teardrop tattoo on his cheek and a red bandanna covering the lower half of his face, the Lamb of God as gangsta Messiah.
Twitchy harvested reactions the image had inspired on Twitter—some positive, some negative—and appended a parting shot: “Would The Game dare to do to Allah what he did to Jesus Christ? Just asking, though we already know the answer. Peace out.”
The Game did not peace out. “#BOYCOTT @michellemalkin NOW!,” he Tweeted. “She’s racist, & makin racial & blasphemist comments about my album. Same b!$&% said Obama isnt AMERICAN RT.”
That such charges were baseless did nothing to deter loyal Game fans, who started peppering Malkin with ugly tweets. “fuck that racist Asian looking hoe!” exclaimed one. Another threatened to rape her. A third suggested she needed to be hit in the head with a Louisville Slugger. Malkin, whose career is based on courting confrontation rather than routing around it, struck back quickly, sometimes with sarcasm (“You need TwitterViagra”), sometimes with Biblical verse (“Do not be overcome with evil, but overcome evil with good.”).
Her fans entered the fray too, and for the next several hours, in an awesome display of Twitter’s capacity to inspire unlikely convergence, dozens of disparate individuals, many of them operating under pseudonyms, found common ground in their quest to see how much contempt for one another they could pack into the 140 characters Twitter allots per post.
As the drama unfolded, Twitter did what it generally does in such situations: nothing. Maintaining order on the micro-publishing platform is the responsibility of Twitter’s Trust & Safety department, which, despite its Orwellian moniker and intimations of bland bureaucratic intrusiveness, is more free-range parent than helicopter mom. Until a user proactively files a complaint about another user’s behavior, Trust & Safety stays on the sidelines. And even when complaints are filed, it often takes no action.
This strategy appears to be paying off. In a little over a year, Twitter’s user base doubled, going from 100 million monthly active users in September 2011 to 200 million monthly active users in December 2012.
Trust & Safety
If you want to talk with a Twitter employee in person, prepare to be vigorously authenticated. In the small, ground-floor entryway of the downtown San Francisco building where the social media company is headquartered, a security guard behind the front desk demands picture ID from all visitors. Once you are matched against a list of expected guests and sign in, you can proceed to the elevator, where another security guard punches in the floor you have been cleared to visit. (The interior of the elevator has no control panel, so it’s impossible to reroute your trip on the fly.) The elevator opens onto Twitter’s 9th floor lobby, where you sign in one more time and receive a name badge. Then a PR person will emerge to escort you to your designated appointment.
Mine was with Del Harvey, director of Trust & Safety. In October 2008, when Twitter had only a couple dozen employees and approximately 6 million monthly users (according to the market research firm eMarketer), it hired Harvey to head up the standards department. Harvey had a good friend who was an engineer at the company, and when Twitter decided it needed to do something about the increasing number of spam and abuse complaints that were arising with the service’s exponential growth, Harvey’s friend suggested her for the job. “My friend was like, ‘I know somebody who is super, super obsessive-compulsive, she’d be fantastic at this,’ ” Harvey recalls. “My interview was like a 20-minute phone call, and then I was hired.”
Before joining Twitter, Harvey worked for five years at Perverted-Justice.com, a nonprofit that targets online predators by posing as underage teens in chatrooms. When Harvey joined Twitter, she wasn’t just the head of Trust & Safety; she was the entire department. Today she oversees a staff of around three dozen, who monitor the excesses of the estimated 500 million Tweets per day. As the Daily Dot noted in August 2012, when Twitter was averaging 340 million daily tweets, a five-second manual review of each one would “take the equivalent of 35,416 eight-hour shifts.” At a half-billion per day, subjecting just 1 percent of Twitter’s output to such cursory human discretion would take approximately 868 eight-hour shifts, absorbing the attention of roughly all of Twitter’s current staff.
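The 868-shift figure is a simple back-of-envelope calculation, and it checks out. A minimal sketch of the arithmetic (all constants are the article's estimates, not Twitter's official numbers):

```python
# Back-of-envelope check of the review-workload arithmetic above.
# All figures are the article's estimates, not official Twitter data.
TWEETS_PER_DAY = 500_000_000   # estimated daily tweet volume
SAMPLE_RATE = 0.01             # reviewing just 1 percent of tweets
SECONDS_PER_REVIEW = 5         # a cursory manual look at each tweet
SHIFT_HOURS = 8                # one standard work shift

reviewed = TWEETS_PER_DAY * SAMPLE_RATE            # tweets actually examined
total_hours = reviewed * SECONDS_PER_REVIEW / 3600 # person-hours of review
shifts = total_hours / SHIFT_HOURS                 # eight-hour shifts per day

print(f"{reviewed:,.0f} tweets reviewed -> {shifts:,.0f} eight-hour shifts per day")
# -> 5,000,000 tweets reviewed -> 868 eight-hour shifts per day
```

Even at a 1 percent sample, the daily workload would demand roughly 868 full-time reviewers — which is why purely reactive, complaint-driven moderation was the only approach that could scale.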
Those impossible figures help explain one of the company’s core mantras: “We don’t mediate content,” Harvey says. “We don’t proactively go out and do stuff that frankly wouldn’t be scalable.” Instead, Twitter simply explains what users can and cannot say and do in the Terms of Service (TOS) and Rules that it posts on the site. Other Internet juggernauts do the same, but what makes Twitter stand out in this field is the extent to which its house rules embrace laissez faire.
Consider, for example, some statements that appear in the user policies of other sites. “You will not post content that: is hate speech, threatening, or pornographic; incites violence; or contains nudity or graphic or gratuitous violence,” Facebook commands. “Colorful language and imagery is fine, but there’s no need for threats, harassment, lewdness, hate speech, and other displays of bigotry,” Yelp says. Flickr “is not a venue for you to harass, abuse, impersonate, or intimidate others. If we receive a valid complaint about your conduct, we’ll send you a warning or delete your account.”
Twitter, in contrast, governs in much less proscriptive fashion. “All Content, whether publicly posted or privately transmitted, is the sole responsibility of the person who originated such Content,” its TOS reads. “We may not monitor or control the Content posted via the Services and, we cannot take responsibility for such Content. Any use or reliance on any Content or materials posted via the Services or obtained by you through the Services is at your own risk.” In its Rules section, Twitter reaffirms this hands-off policy: “We do not actively monitor user’s content and will not censor user content, except in limited circumstances.”