
The Government Wants a 'Red Flag' Social Media Tool. That's a Terrible Idea.

The FBI is looking for companies to comb through social media posts and pinpoint possible threats ahead of time. Think of it like a meme-illiterate Facebook-stalking precog from Minority Report.


Did anyone truly believe that the government cares about our privacy on social media? At the same time that Congress and the Federal Trade Commission (FTC) were taking Facebook to task for neglecting user data, the FBI was soliciting bids for technologies to hoover up and analyze your social media posts—just in case you are a threat.

It's yet another example of state double talk on online surveillance. Politicians preen for the cameras when a private company fails its users. But that championing of our privacy rarely extends to government programs. When it's their own surveillance, it's suddenly just "in the public interest."

In early July, the FBI posted a solicitation notice for a "Social Media Alerting Subscription," which would "acquire the services of a company to proactively identify and reactively monitor threats to the United States and its interests through a means of online sources." The request singles out Twitter, Facebook, Instagram "and other social media platforms" for snooping.

Essentially, the FBI is looking for companies to build a tool to comb through "lawfully access[ed]" social media posts and pinpoint possible threats ahead of time. Think of it like a meme-illiterate Facebook-stalking precog from Minority Report.

Although the notice was posted well before this month's mass shootings, it is easy to see how this system could empower the Red Flag law ideas that have since gained prominence. This kind of "proactive identification" could allow law enforcement to target and even disenfranchise social media users whose posts may have been merely misinterpreted. So let's call this the Red Flag tool for short.

The FBI's Red Flag tool statement of objectives provides a glimpse into the agency's sprawling "social media exploitation" efforts. There are "operations centers and watch floors," which monitor news and events to create reports for the relevant FBI team. These spur the activation of "fusion centers," tactical teams which use "early notification, accurate geo-locations, and the mobility" of social media data to issue their own reports. There are also FBI agents in the field, "legal attaches" whose jobs would be much easier with a translation-enabled Red Flag tool. And last are the "command posts," teams of "power users" assigned to monitor specific large events or theaters of operations.

To be clear, the proposed tool does not seek to access private messages or other hidden data. Rather, it would scrape and rationalize publicly accessible posts. This could be conveniently combined with other FBI data to build detailed, but possibly inaccurate, portraits of suspected ne'er-do-wells.

Unsurprisingly, social media companies are not pleased. Although they are often criticized for their own data practices, many of them have explicit bans against building such tools to share data with intelligence agencies.

Facebook disallows developers from "[using] data from us to provide tools that are used for surveillance." This seems to fit the bill. Twitter similarly forbids developers from making Twitter content available to "any public sector entity (or any entities providing services to such entities) whose primary function or mission includes conducting surveillance or gathering intelligence." Sounds like the FBI to me.

But despite these company policies, similar tools already exist. The Department of Homeland Security, for instance, collects social media data on the many people who apply for visas each year. Germany's NetzDG law, which requires social media companies to proactively monitor and take down posts for hate speech, doesn't mandate that companies share data with intelligence bodies, but it requires comparable infrastructure. The European Union (EU) has proposed a similar system for terrorist content.

The FBI says that the system will "ensure that all privacy and civil liberties compliance requirements are met." Few will find that comforting. But let's be extremely charitable and assume that the system will be fully on the up-and-up. There is still the problem of interpretation, which is formidable.

These kinds of systems are predictably ridden with errors and false positives. In Germany, posts that are clearly critical or satirical are taken down by proactive social media monitoring systems. To a dumb algorithm, there isn't much of a difference. It sees a blacklisted word and pulls or flags the post, regardless of whether the post was actually opposing the taboo concept.
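The failure mode described above can be sketched in a few lines. This is a deliberately naive toy illustration, not the FBI's actual system or any real vendor's product: a blacklist flagger that matches keywords has no way to distinguish a genuine threat from a post condemning the very same act.

```python
# Toy illustration (hypothetical): a naive blacklist flagger.
# It matches words, not intent, so criticism and satire trigger
# it exactly as readily as an actual threat does.
BLACKLIST = {"attack", "bomb"}

def flag_post(post: str) -> bool:
    """Flag a post if it contains any blacklisted word, regardless of context."""
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return bool(words & BLACKLIST)

# A threat and a clearly condemnatory post are flagged identically:
print(flag_post("We should bomb the stadium"))                    # True
print(flag_post("Anyone who would bomb a stadium is a monster"))  # True
print(flag_post("Lovely weather today"))                          # False
```

Real systems are more sophisticated than this, but the underlying problem — pattern matching stands in for understanding — is the same one the German takedowns demonstrate.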

Computers just aren't that great at parsing tone or intent. One algorithmic study of Twitter posts was only able to accurately gauge users' political stances based on their posts about a third of the time. And this was in standard English. The problem gets worse when posts use slang or a different language. Yet the FBI apparently expects these programs to quickly and accurately separate meme from menace.

So the FBI's desired "red flag" tool is creepy and dubious. It's also a bit schizophrenic, given last month's grand brouhaha over Facebook data sharing.

The FTC just issued a record-breaking $5 billion settlement with Facebook for the Cambridge Analytica data scandal. Facebook had allowed developers access to user data in ways that violated its own terms of service, as well as a 2012 FTC consent decree against the company over its data practices. This means that data was exploited in ways that users thought were verboten. Granting programming access for tools to shuttle data to intelligence agencies, which is also against Facebook policies, won't seem much different to users.

But the Red Flag tool may violate more than Facebook's own policies. It could also go against the FTC's recent settlement, which ties Facebook to a "comprehensive data security program." The Wall Street Journal quotes an FTC spokesman stating that the consent decree protects all data from being gathered without user knowledge. How can Facebook square this circle?

Few will be surprised that the FBI would seek this kind of Red Flag tool for social media. Yet polls show that most Americans support more federal data privacy regulation in the vein of the EU's sweeping General Data Protection Regulation (GDPR).

Social media companies make fine foes, especially for politicians. But we shouldn't forget that the same governments that we expect to "protect our privacy" are all too willing to junk it at the first sign of a snooping opportunity.

Robust solutions to social media woes are unlikely to come from the same governments that would sacrifice our privacy at their earliest convenience. Rather, we should look to advances in decentralizing and cryptographic technologies that will place the user in control of their own data.



  1. To be clear, the proposed tool does not seek to access private messages or other hidden data.

    Yet. (Generously assuming they would tell us.)

    1. If the FBI can find it, it wasn’t hidden very well, was it? And if you didn’t hide it very well, it obviously wasn’t meant to be private. Besides, the FBI has procedures and procedures will be followed so you can trust the FBI won’t be invading your privacy. At least they won’t be for the FBI’s definition of “won’t” and “invading” and “your” and “privacy”.

    2. If someone is publicly publishing information without regard to who can see it, I have a lot less issue with their “privacy” than I would with, for example, filtering emails.

      The latter is more like searching a home or intercepting the mails. The former is akin to observing that someone has Nazi tattoos on their foreheads, clothing that says, “Kill all the (pick a group)”, a hand grenade in one hand and an AK-47 in the other and drawing the conclusion that they just might be a source of trouble.

      Face it – if you publish something openly, you’re inviting everyone to read it – including the government.

  2. This is great! Just think, under this concept, most of the socialist political prattle would become suspect, and illegal.
    No more racist rants against a certain pale skin color.
    No more sexists rants against only one specific, actual, genetically determinable sex.
    No more rants against the major religions of the world.
    No more anti-semitic rants.
    No more advocacy for child sacrifice.
    What a wonderful world.

  3. The FBI says that the system will “ensure that all privacy and civil liberties compliance requirements are met.”

    “And if not — oh, well.”

    1. They’d just change the requirements.

      Or they’d have a different set of requirements which were national security sekrit.

  4. Not only a bad idea for privacy and Due Process concerns. Imagine all the red flags from all those motherfuckin’ white supremacists Trumpistas who fap all over the image of that little girl pleading the State to return her kidnapped father, or cum all over the images of deaf ‘Mexicans’ gunned dead by a Trumpista.

    In the meantime, you have here Trumpistas who decry racism against them. Go cry under your bed, you homicidal maniacs. Fuck you! You WILL be replaced!

    1. Some of us need to wait and see what Trump says about red flag laws before we decide whether they’re a terrible idea or a great idea. And whether Reason should be applauded or vilified for opposing them. But we’re not a cult.

      1. Hey! You’re pretty damned fickle. *I’m* the puppet, not that other sock.

  5. You can trust the government. As long as you don’t do anything wrong you have nothing to worry about. And if you are doing something wrong then you deserve to have S.W.A.T. break down your door, shoot your dog, and then lock you in a cage.

    But corporations? No fucking way can they be trusted. They will use this information to try to sell you stuff! How horrific is that? I mean, they’re forcing advertising on you! Targeted advertising! Targeted! That’s some scary shit! They force this advertising on you and that brainwashes you into buying their stuff! It’s terrible!

    Please government, save us from the corporations!

    1. So much for only being back a little bit.

      1. I’m just taking a few pot shots here and there.

        It would be nice if you actually added to the discussion instead of throwing poo like a monkey. I believe in you! You can do it!

        1. Not this morning; maybe tomorrow. He’s already playing fast and loose with his sock puppets.

    2. “But corporations? No fucking way can they be trusted. They will use this information to try to sell you stuff!”

      If only that’s all corporations like FB or Google cared about today, few people would be scared of them.

  6. Well, it’s the FBI vs. Big Tech. The Trumpbots hate them both, so wonder who they will cheer for here?

    1. “The Trumpbots”

      Lol, I love how you constantly whine about dehumanizing people, and are so hilariously obtuse about your own behavior.

      Loloolololol

    2. Since when has big tech, other than Apple making some token efforts at privacy protection, ever stood up to the FBI or the NSA or the Justice Dept?

    3. Say I was adamantly opposed to the obviously moronic Trumpbots, who are you proposing to be the good guys here?

  7. It’s my understanding that there are already tools available to industry that flag posts that mention, say, a retailer–so that the retailer can respond to them. If you say something shitty about the service you received at Nordstrom or Forever 21 in a Facebook or Twitter post–maybe accuse a salesperson of homophobia or racism–I suspect someone from one of those companies may post something in response, asking for more information, being apologetic, etc.

    In addition to all the other reasons why the government shouldn’t be monitoring our conversations, why are they trying to reinvent the wheel? $20 says there are already competing software platforms that will monitor both Facebook and Twitter for specific content. I’d also point out that Facebook seems to be both willing and able to write their own software–and they already monitor such content themselves.

    As advertising platforms, Facebook and Twitter are already highly incentivized to take down content that advertisers find objectionable. Failing at monitoring such content and taking it down costs them advertising accounts–as I’ve posted so many times before. Advertisers certainly don’t want their ads appearing near posts that are homophobic, racist, or xenophobic, and that goes double for posts that point to an imminent mass shooting.

    In other words, Facebook and Twitter have more incentive, higher budgets, and better coders than the federal government. I see no good reason to believe that the federal government would do a better job of monitoring social media content for warning signs of mass shootings than social media can do itself. Looks to me like the federal government is just looking to make work for itself.

  8. Federal government power is out-of-control!!! Get a judicial warrant!!! Tapping online conversations is no different than tapping telephones. Once upon a time that was a huge deal (i.e. the Watergate scandal)…. Now it's just media side stories.

    Rand Paul was right – The Patriot Act served its time and now it's time for it to go.

  9. Did anyone truly believe that the government cares about our privacy on social media?

    Of course they care. If you didn’t have privacy on social media, how would they violate it?

  10. Next up: A “red-flag” system for Reason comment pages.

    1. Just go with the assumption that Reason commenters are already included in a government database of subversives.

      1. But are the CORRECT ones included? LOL

        1. And did they get ALL the socks?

      2. Heck, I know the police were going around photographing license plates at more than one rally I was at back in the 90’s, so that’s a given in my case. Probably have a file as long as my arm.

      3. I’ve legit always assumed that posters here probably ARE on some sorts of lists. Anybody who doesn’t think the government should be all powerful is of course a de facto enemy of the state right?

    2. Not really a system, more of an IF THEN GOTO statement.

      IF posting on Reason comments, THEN GOTO reeducation center.

  11. >>The FBI says that the system will “ensure that all privacy and civil liberties compliance requirements are met.”

    expected like nine million hahas at the end of that sentence.

    1. A haha is a shallow ditch surrounding an English stately home. It prevents grazing animals from getting on the lawns close to the house and keeps them at a suitable distance. It also doesn’t interfere with the view as a fence or hedge might.

      example: “Oh look, Lady Finch Cornwall has tripped into the haha.”

      1. this ^^ is the thing i learned today gracias.

        my favorite haha is the standard

        http://www.youtube.com/watch?v=kdOPBP9vuZA

  12. “The Government Wants a ‘Red Flag’ Social Media Tool. That’s a Terrible Idea.”

    No, it isn’t.
    This way the ruling elitist filth can filter what can and cannot be said on social media sites, and as we all know, our obvious betters in government know so much more than we do, know what’s best for us and are always looking out for our best interests.
    Censorship forever!

  13. “This kind of “proactive identification” could allow law enforcement to target and even disenfranchise social media users whose posts may have been merely misinterpreted. So let’s call this the Red Flag tool for short.”

    The kind of software used to identify criminals and terrorists is based on neural networks and will certainly improve with time and additional data. As I understand it, the 'posts' of users are of secondary value. The meta-data, where you are, who you link with, your browser type etc, is what is of interest. Unlike the content of posts, the meta-data isn't something the user has a lot of control over. It's a powerful idea and corporations like Facebook are already using variations on it to sell advertising.

  14. “But the Red Flag tool may violate more than Facebook’s own policies. ”

    You mean like the First Amendment? And then using that to violate the Second Amendment? All the while ignoring the Fifth Amendment?

  15. Remember when Omar Mateen's employers, mosque, and local law enforcement agencies reported him to the FBI, prompting 3 separate FBI investigations? Whoever was in charge of the FBI then should've been shitcanned. How about the time the FBI interviewed Tamerlan Tsarnaev (sp?)? Remember when a user named 'nikolas cruz' posted messages about becoming a school shooter on YouTube and the FBI couldn't identify him? How about when Enrique Marquez Jr. was talking openly about the San Bernardino shooter's sham marriage and illegal activity on facebook less than a month before the shooting?

    The FBI has repeatedly proven that such a red flag tool would only leave them standing around with their dicks hanging out while people get shot.

  16. It may be presently ‘classified for national security purposes under the 2002 Sanctity of the Children Act’ but anyone with a semi-functioning brain knows that the Stasi…uh, DHS is already doing this. What the hell do you think the TATs (Threat Assessment Teams) are for? What do you imagine the Stasi apparatchiks do in the 70+ ‘Fusion Centers’ operating 24/7?

    The government says:
    ‘the FBI swears up and down that the system will “ensure that all privacy and civil liberties compliance requirements are met.”

    The operational translation: “Privacy? Seriously? We ensure Americans that all privacy and civil liberties are henceforth forfeited, and you will fully comply.”

  17. I like shiny new things as much as the next guy… But damn, there are some really horrible and dark aspects to all the new technology that has come about in the last couple decades.

    As the Unabomber said in his manifesto, the thing about technology is there is no way to separate out the good parts from the bad… Understanding nuclear physics gives us nuclear reactors AND nuclear bombs… There is no way to only get the reactors.

    And all this crap is definitely giving us lots of good and bad things.

  18. There is no way that all the fine algorithms and analysts are going to be able to separate out those with the actual means, motivation and intent to commit mass shootings from the rest of the population.

    And even beyond trying to find what I’d estimate as one in ten million or so, you’re still not going to find them all. That asshole who decided that the C&W festival in Las Vegas was a chance to show off his sniping skills left no traces and as far as I know his motive was never determined.

    But giving the FBI & the rest of the alphabet agencies this kind of power will just lead to “rounding up the usual suspects”.

  19. The government needs to be red flagged.

