The Government Wants a 'Red Flag' Social Media Tool. That's a Terrible Idea.
The FBI is looking for companies to comb through social media posts and pinpoint possible threats ahead of time. Think of it like a meme-illiterate Facebook-stalking precog from Minority Report.
Did anyone truly believe that the government cares about our privacy on social media? At the same time that Congress and the Federal Trade Commission (FTC) were taking Facebook to task for neglecting user data, the FBI was soliciting bids for technologies to hoover up and analyze your social media posts—just in case you are a threat.
It's yet another example of state double talk on online surveillance. Politicians preen for the cameras when a private company fails its users. But that same championing of our privacy rarely extends to government programs. When the state runs its own surveillance, it's simply "in the public interest."
In early July, the FBI posted a solicitation notice for a "Social Media Alerting Subscription," which would "acquire the services of a company to proactively identify and reactively monitor threats to the United States and its interests through a means of online sources." The request singles out Twitter, Facebook, Instagram "and other social media platforms" for snooping.
Essentially, the FBI is looking for companies to build a tool to comb through "lawfully access[ed]" social media posts and pinpoint possible threats ahead of time. Think of it like a meme-illiterate Facebook-stalking precog from Minority Report.
Although the notice was posted well before this month's mass shootings, it is easy to see how this system could empower the Red Flag law ideas that have since gained prominence. This kind of "proactive identification" could allow law enforcement to target and even disenfranchise social media users whose posts may have been merely misinterpreted. So let's call this the Red Flag tool for short.
The FBI's Red Flag tool statement of objectives provides a glimpse into the agency's sprawling "social media exploitation" efforts. There are "operations centers and watch floors," which monitor news and events to create reports for the relevant FBI team. These spur the activation of "fusion centers," tactical teams which use "early notification, accurate geo-locations, and the mobility" of social media data to issue their own reports. There are also FBI agents in the field, "legal attaches" whose jobs would be much easier with a translation-enabled Red Flag tool. And last are the "command posts," teams of "power users" assigned to monitor specific large events or theaters of operations.
To be clear, the proposed tool does not seek to access private messages or other hidden data. Rather, it would scrape and organize publicly accessible posts. That data could then be combined with other FBI records to build detailed, but possibly inaccurate, portraits of suspected ne'er-do-wells.
Unsurprisingly, social media companies are not pleased. Although they are often criticized for their own data practices, many of them explicitly ban developers from building tools that funnel their data to intelligence agencies.
Facebook disallows developers from "[using] data from us to provide tools that are used for surveillance." This seems to fit the bill. Twitter similarly forbids developers from making Twitter content available to "any public sector entity (or any entities providing services to such entities) whose primary function or mission includes conducting surveillance or gathering intelligence." Sounds like the FBI to me.
But despite these company policies, similar tools already exist. The Department of Homeland Security, for instance, collects social media data on the many people who apply for visas each year. Germany's NetzDG law, which requires social media companies to proactively monitor and take down posts for hate speech, doesn't mandate that companies share data with intelligence bodies, but it requires comparable infrastructure. The European Union (EU) has proposed a similar system for terrorist content.
The FBI says that the system will "ensure that all privacy and civil liberties compliance requirements are met." Few will find that comforting. But let's be extremely charitable and assume that the system will be fully on the up-and-up. There is still the problem of interpretation, which is formidable.
These kinds of systems are predictably ridden with errors and false positives. In Germany, posts that are clearly critical or satirical are taken down by proactive social media monitoring systems. To a dumb algorithm, there isn't much of a difference. It sees a blacklisted word and pulls or flags the post, regardless of whether the post was actually opposing the taboo concept.
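A few lines of code make the problem concrete. This is a hypothetical sketch of the kind of naive blacklist filter described above (the word list, posts, and function names are invented for illustration, not drawn from any real system). Because it matches keywords with no sense of stance or tone, it flags a condemnation of violence and a bit of slang just as readily as a genuine threat:

```python
# Hypothetical keyword filter: flags any post containing a blacklisted
# substring, with no understanding of the post's actual meaning.
BLACKLIST = {"attack", "bomb"}

def flag_post(post: str) -> bool:
    """Return True if any blacklisted term appears anywhere in the post."""
    text = post.lower()
    return any(term in text for term in BLACKLIST)

posts = [
    "We should attack the stadium tonight.",   # genuine threat: flagged
    "Threatening to attack anyone is vile.",   # condemnation: flagged anyway
    "That stand-up set absolutely bombed.",    # harmless slang: flagged anyway
    "Lovely weather today.",                   # not flagged
]
for post in posts:
    print(flag_post(post), "-", post)
```

Real monitoring systems are more sophisticated than a substring match, but the underlying failure mode is the same: the signal is the presence of a term, not the speaker's intent.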
Computers just aren't that great at parsing tone or intent. One algorithmic study of Twitter posts could accurately gauge users' political stances only about a third of the time. And that was in standard English; the problem gets worse when users write in slang or another language. Yet the FBI apparently expects these programs to quickly and accurately separate meme from menace.
So the FBI's desired "red flag" tool is creepy and dubious. It's also a bit schizophrenic, given last month's grand brouhaha over Facebook data sharing.
The FTC just issued a record-breaking $5 billion settlement with Facebook over the Cambridge Analytica data scandal. Facebook had allowed developers access to user data in ways that violated its own terms of service, as well as a 2012 FTC consent decree governing the company's data practices. In other words, data was exploited in ways that users had been told were verboten. Granting programmatic access for tools that shuttle data to intelligence agencies, which is likewise against Facebook policy, won't seem much different to users.
But the Red Flag tool may violate more than Facebook's own policies. It could also go against the FTC's recent settlement, which ties Facebook to a "comprehensive data security program." The Wall Street Journal quotes an FTC spokesman stating that the consent decree protects all data from being gathered without user knowledge. How can Facebook square this circle?
Few will be surprised that the FBI would seek this kind of Red Flag tool for social media. Yet polls show that most Americans support more federal data privacy regulation in the vein of the EU's sweeping General Data Protection Regulation (GDPR).
Social media companies make fine foes, especially for politicians. But we shouldn't forget that the same governments that we expect to "protect our privacy" are all too willing to junk it at the first sign of a snooping opportunity.
Robust solutions to social media woes are unlikely to come from the same governments that would sacrifice our privacy at their earliest convenience. Rather, we should look to advances in decentralized and cryptographic technologies that place users in control of their own data.