The Evolving Challenges to Maintaining Anonymity
Privacy law should supplement the First Amendment's anonymity protections.
First Amendment anonymity safeguards have been vital over the past half-century, but they have limits. The constitutional protections prevent certain uses of government power to unmask people. Cases like Talley limit the government's ability to force authors to disclose their real names. And John Doe online subpoena opinions protect online anonymity when a private party seeks to use a court-issued subpoena to compel the disclosure of identifying information. But due to the state action doctrine, these precedents generally do not restrict purely private activities that could compromise a person's anonymity.
We disclose information to companies, the government, and other people that, when pieced together, can provide a roadmap to our identities. This occurs even when we assume that we are anonymous. As Helen Nissenbaum wrote in 1999, while anonymity in the computer age is not impossible, "achieving it is a more demanding business than merely allowing people to withhold their names."
My book outlines three primary challenges to maintaining anonymity that are largely beyond the reach of these First Amendment protections. First, some online platforms have adopted "real-name" policies, prohibiting customers from using their services anonymously or pseudonymously. Second, the vast amount of public information that is available about people enables them to be unmasked, even when they try to speak anonymously online. Third, companies maintain large troves of largely unregulated personal information that can make it harder to operate anonymously.
Social media platforms decide whether to require their users to post under or register their real names. Facebook has long had a real-name policy, while Twitter and Reddit have long touted their users' ability to operate pseudonymously (though they prohibit impersonation). Just as the government cannot require platforms to have real-name policies, it cannot prohibit them either. Although I question the efficacy of real-name policies and believe that they too often disproportionately harm marginalized groups, platforms are free to determine what level of anonymity, if any, they will offer.
The second challenge, the availability of public information about speakers, also largely cannot be addressed by changes to the law. As I describe in the book, anonymous speakers have been unmasked because there were enough clues for others to piece together their identities.
Some clues were in their posts, and often could be linked to other publicly available data. Academic research has long established that only a few data points are necessary to identify people. In a 2000 research paper, Latanya Sweeney used Census data to establish that 87 percent of Americans had a unique combination of five-digit ZIP code, gender, and birth date. As Paul Ohm presciently observed in 2010, the ease of reidentification of presumably anonymous data poses great threats to individual safety. "Our enemies will find it easier to connect us to facts that they can use to blackmail, harass, defame, frame, or discriminate against us," Ohm wrote. "Powerful reidentification will draw every one of us closer to what I call our personal 'databases of ruin.'"
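To make Sweeney's point concrete, here is a minimal sketch in Python, using hypothetical toy records rather than her actual Census data, of how one measures what fraction of people a few quasi-identifiers pin down uniquely:
```python
# A minimal sketch, not Sweeney's methodology: count how many records a
# handful of quasi-identifiers (ZIP code, gender, birth date) pin down
# uniquely. The rows below are hypothetical.
from collections import Counter

records = [
    ("21401", "F", "1980-03-14"),
    ("21401", "M", "1980-03-14"),
    ("21401", "F", "1980-03-14"),  # duplicates the first row's identifiers
    ("90210", "M", "1975-07-04"),
]

def unique_fraction(rows):
    """Fraction of rows whose quasi-identifier combination appears exactly once."""
    counts = Counter(rows)
    return sum(1 for row in rows if counts[row] == 1) / len(rows)

print(f"{unique_fraction(records):.0%} of these records are unique")  # 50% here
```
Run at population scale, this same simple count is what produces figures like the 87 percent above.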
In the years since Sweeney's groundbreaking research, the amount of publicly available information has surged, along with the popularity of social media. As I describe in my book, in recent years many people who posted controversial or objectionable speech on social media, presumably under pseudonyms, were unmasked based on clues in their posts combined with publicly available information. The ease with which people can be unmasked and tracked through public information suggests that the anonymity empowerment provided by the First Amendment and technology must be paired with a careful assessment of the information that speakers reveal to the world.
Legal changes could address the third modern challenge to anonymity empowerment: a great deal of identifying personal information is collected by data brokers and other companies, with few restrictions on its use in piercing anonymity both online and offline. As I document in the book, U.S. law imposes few limits on the collection, use, and sharing of facial recognition data, precise geolocation points, and other personal information that can identify people.
Privacy, data protection, and data security laws should aim to protect anonymity. Some data protection laws, such as Europe's General Data Protection Regulation, encourage the pseudonymization or anonymization of data. While this is a positive step, I worry that we might too quickly assume that data is anonymized or pseudonymized when it can actually be linked back to individuals. Privacy laws should also encourage more robust anonymization tactics, such as adding statistical noise to datasets.
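For readers curious what "adding noise" looks like in practice, here is a minimal sketch of one such tactic, the Laplace mechanism from differential privacy; the epsilon value and the sample count below are illustrative assumptions, not recommendations:
```python
# A minimal sketch of noise addition via the Laplace mechanism from
# differential privacy. Smaller epsilon means more noise and stronger
# privacy; the values here are illustrative assumptions.
import numpy as np

def noisy_count(true_count: int, epsilon: float = 0.5) -> float:
    """Release a count with Laplace noise scaled to the count's sensitivity (1)."""
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# E.g., publish roughly how many people in a dataset share a ZIP code
# without revealing the exact figure:
print(noisy_count(42))  # something near 42, rarely exactly 42
```
The design choice is a trade-off: smaller epsilon yields noisier releases and stronger privacy, which is exactly the kind of value judgment a privacy law would have to confront.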
The GDPR and some state laws such as the California Consumer Privacy Act also provide data subjects with the ability to access and request deletion of certain personal information. This also is a positive step, but it places a tremendous burden on individuals. And even if a person wants to take the time to request deletion of personal information, they may be unaware of every company and data broker that has their data. While privacy laws should provide individuals with control over their data, these options are not a panacea.
Ultimately, we need a strong national privacy law that not only encourages anonymization and provides data subjects with choices, but also prohibits particularly egregious forms of data collection, use, and sharing. Local governments such as San Francisco and Boston, for example, have banned law enforcement use of facial recognition technology. At the national level, we need a dialogue about how much we want to protect anonymity, and Congress should incorporate those values into a national privacy law.
Likewise, data security laws should more effectively address data that can be used to identify people, such as precise geolocation information. U.S. data security laws are scattered and weak, and many focus on particular sectors such as healthcare and banking. While protecting that information is important, data security laws should more effectively protect personal information that can compromise the ability to speak anonymously.
I've enjoyed guest blogging this week and hearing your thoughts about the very complex equities surrounding anonymous speech. I have several book talks scheduled over the next few weeks, and am happy to speak with classes, bar associations, or other groups that are interested in the issue.
-- Jeff Kosseff is an associate professor of cybersecurity law at the United States Naval Academy. The views presented are only his, and do not represent the Naval Academy, Department of Navy, or Defense Department. This piece is adapted from The United States of Anonymous: How the First Amendment Shaped Online Speech, by Jeff Kosseff, published by Cornell University Press. Copyright (c) 2022 by Cornell University.
I don't see the US government supporting truly effective limits on data collection. Law enforcement wants to rely on commercial surveillance to reduce the risk of being caught directly violating constitutional rights.
Au contraire. . . .
I think privacy is an issue that unites all parties, especially small govt Republicans, Libertarians, Dems, etc.
And Prof. Kerr's blogs about 4A seem to indicate we're moving in the right direction concerning monitoring/restricting govt's unconstitutional activities.
Keeping businesses from publishing our personal data is uniting. We can't have Bork's video rental records exposed, and the Uber boss cannot use "God view" at will. Keeping businesses from collecting our personal data and selling it to the government is not uniting. Police can still buy tracking data online. The NSA can still spy on us.
In a publishing world characterized by diversity and profusion of private publishers, all practicing editing prior to publication, anonymity is well worth protecting, and not likely to do much harm.
In a publishing world characterized by fewer publishers, with a disproportionate few wielding outsized market power, and none of them practicing editing prior to publication, anonymity will turn out a public menace, and an unwise policy.
In short, what anonymity means for public policy depends critically on context. It is unwise to suppose that anonymity can be treated as a notion complete in itself—and already fully justified by practice and history. Present publishing reality is disconnected from both.
Isn't the ban in Boston based on (incorrect) ideas of racism, not privacy? I don't see how facial recognition implicates privacy anyway. The datasets are made up mostly of volunteers or people paid for their visage but the term encompasses so many different technologies as to make it unclear what kind is being considered. If the concern is that the government can track individuals through surveillance cameras then the real privacy issue is that the government has cameras everywhere.
Facial recognition without some sort of informed consent is the digital equivalent of being asked to show one's "papers" as one moves about one's daily life. Your cell phone pretty much does this now, but you can always power it off if you choose. Cameras owned by third parties cannot be turned off by their targets, however.
To complicate things, which brings in your Boston example, a lot of the machine learning-based facial recognition technology fails to make accurate matches for non-white faces. It's not that they don't make matches; they're just full of errors, mixing up multiple people and showing a person in a place, like near a crime, where they weren't. Non-white Americans have good reasons to worry about being made a suspect for a crime and the resulting interactions with law enforcement that put their lives at risk.
And finally, the issue isn't "the government has cameras everywhere." The issue could be that the government has access to private cameras (see police forces acquiring Ring doorbell footage), but even that isn't the real issue. The real issue is that government is filled with your average human who cannot always be trusted to do the right thing with sensitive, private, and potentially powerful information. Consider how police forces around the US used to publish the faces and names of homosexual men found inside gay bars in local newspapers across the country; now multiply that by the power of pervasive technologies like cameras, AI, and social media.
If you're worried about private cameras then the issue is still the ubiquitous cameras and not facial recognition.
The race issue in facial recognition is overstated, and not all current algorithms have higher false-positive rates for Black and Asian faces (which also see higher false-negative rates). It's almost entirely a problem with the training sets: either not enough data or mislabeled data. Asian algorithms work very well for Asians, for instance, because of the number of Asians in the training sets. I am unconvinced that facial recognition is less accurate than actual law enforcement officers, who routinely stop Black men who look nothing like the suspect. At the very least, facial recognition succeeds at differentiating by race.
What, exactly, is the privacy risk of facial recognition when an official already has a private video? The privacy issue is that they can get that video, not that they can perform a match with an algorithm rather than by leafing through papers or ID sites (which is what they usually do and they still suck at it). It's not as if many non-Chinese agencies are applying facial recognition all the time and to everything. The only possible exceptions I know of are high security areas and airports, which are public and you don't have any expectation of privacy when entering.
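To put numbers on the error-rate claims above: evaluations of face matchers typically compare false-match rates per demographic group. A minimal sketch, with made-up similarity scores, group labels, and threshold:
```python
# A minimal sketch of a per-group false-match comparison for a face matcher.
# All scores, group labels, and the threshold are made up for illustration.
from collections import defaultdict

# Each trial: (group, same_person, similarity_score)
trials = [
    ("A", False, 0.91), ("A", False, 0.42), ("A", True, 0.97),
    ("B", False, 0.55), ("B", False, 0.38), ("B", True, 0.88),
]
THRESHOLD = 0.80  # scores at or above this are declared a "match"

stats = defaultdict(lambda: [0, 0])  # group -> [false matches, impostor trials]
for group, same_person, score in trials:
    if not same_person:  # impostor pair: any declared match is a false match
        stats[group][1] += 1
        if score >= THRESHOLD:
            stats[group][0] += 1

for group, (fm, n) in sorted(stats.items()):
    print(f"group {group}: false-match rate {fm / n:.0%}")  # A: 50%, B: 0%
```
A gap like that between groups is what the training-set critique is about; whether it persists in current algorithms is an empirical question.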
"But the example that stands apart from the above events is, of course, the invasion of Iraq without any legal grounds. They used the pretext of allegedly reliable information available in the United States about the presence of weapons of mass destruction in Iraq. To prove that allegation, the US Secretary of State held up a vial with white power, publicly, for the whole world to see, assuring the international community that it was a chemical warfare agent created in Iraq. It later turned out that all of that was a fake and a sham, and that Iraq did not have any chemical weapons. Incredible and shocking but true. We witnessed lies made at the highest state level and voiced from the high UN rostrum. As a result we see a tremendous loss in human life, damage, destruction, and a colossal upsurge of terrorism. "
Frank "I'm really Vladimir Vladimirovich Putin"
Pretending-to-be-stupid trolling is the most self-debasing form of trolling.
I agree, so why don't you stop doing it? "Gormadoc"?? Just call yourself "Nerd".
Vlad
A current example is Chapman University law professor David Berkowitz suing students who used an online cheating site to cheat on his test. He's claiming copyright over the "fact patterns" [IANAL] in his test questions and using that claim to try to force the website to release identifying information about the anonymous posters. The professor has also been very clear that he has no intention of following through on the lawsuit; he merely wants to use it as a mechanism to force anonymous people into the open.
I have little sympathy for cheaters, but this case worries me because it looks to be applicable to pretty much any anonymous person on the internet.
There's a huge gap between striking down de-anonymization laws (Talley, McIntyre) and recognizing a constitutional right to anonymity for non-anonymous communications.
Definitely. I'm not calling for a constitutional right for non-anonymous communications (I don't even know what that would look like). I'm calling for privacy statutes that limit the ability of individuals to be unmasked due to the personal information that private companies collect, use, and store.
Technology makes it easier to capture, store, and use PII that individuals expose to the public. But other than the ease of access, what makes that kind of information different from the clunkier analog versions that have always been accessible -- like, in the context of facial recognition, a photo or description of a criminal suspect, or recognizing a friend at a restaurant? Is the real problem the efficiency of the technology rather than an interest in privacy?
This was a lucid, intelligent, well-thought-out series of posts, devoid of polemics, bigotry, superstition, and belligerent ignorance.
I'm beginning to doubt this guy is associated with the Federalist Society.
I describe some such cases in a good amount of detail. Slate excerpted one of the chapters, about a truly horrific cyberstalking case, this week. I tried to post the link, but the commenting system wouldn't allow it for some reason.