Is Facial Recognition a Useful Public Safety Tool or Something Sinister?
Your Face Belongs to Us documents how facial recognition might threaten our freedom.

Your Face Belongs to Us: A Secretive Startup's Quest To End Privacy as We Know It, by Kashmir Hill, Random House, 352 pages, $28.99
"Do I want to live in a society where people can be identified secretly and at a distance by the government?" asks Alvaro Bedoya. "I do not, and I think I am not alone in that."
Bedoya, a member of the Federal Trade Commission, says those words in New York Times technology reporter Kashmir Hill's compelling new book, Your Face Belongs to Us. As Hill makes clear, we are headed toward the very world that Bedoya fears.
This book traces the long history of attempts to deploy accurate and pervasive facial recognition technology, but it chiefly focuses on the improbable rise of Clearview AI. Hill first learned of this company's existence in November 2019, when someone leaked a legal memo to her in which the mysterious company claimed it could identify nearly anyone on the planet based only on a snapshot of their face.
Hill spent several months trying to talk with Clearview AI's founders and investors, and they in turn tried to dodge her inquiries. Ultimately, she tracked down an early investor, David Scalzo, who reluctantly began to tell her about the company. After she suggested the app would bring an end to anonymity, Scalzo replied: "I've come to the conclusion that because information constantly increases, there's never going to be privacy. You can't ban technology. Sure, it might lead to a dystopian future or something, but you can't ban it." He pointed out that law enforcement loves Clearview AI's facial recognition app.
As Hill documents, the company was founded by a trio of rather sketchy characters. The chief technological brain is an Australian entrepreneur named Hoan Ton-That. His initial partners included the New York political fixer Richard Schwartz and the notorious right-wing edgelord and troll Charles Johnson.
Mesmerized by the tech ferment of Silicon Valley, the precocious coder Ton-That moved there at age 19 and quickly occupied himself creating various apps, including, in 2009, the hated video-sharing ViddyHo "worm," which hijacked Google Talk contact lists to spew out instant messages urging recipients to click on its video offerings. After the uproar over ViddyHo, Ton-That kept a lower profile while working on various other apps, eventually decamping to New York City in 2016.
In the meantime, Ton-That began toying online with alt-right and MAGA conceits, an interest that led him to Johnson. The two attended the 2016 Republican National Convention in Cleveland, where Donald Trump was nominated. At that convention, Johnson briefly introduced Ton-That to internet financier Peter Thiel, who would later be an angel investor in what became Clearview AI. (For what it's worth, Ton-That now says he regrets his earlier alt-right dalliances.)
Ton-That and Schwartz eventually cut Johnson out of the company. As revenge for his ouster, Johnson gave Hill access to tons of internal emails and other materials that illuminate the company's evolution into the biggest threat to our privacy yet developed.
"We have developed a revolutionary, web-based intelligence platform for law enforcement to use as a tool to help generate high-quality investigative leads," explains the company's website. "Our platform, powered by facial recognition technology, includes the largest known database of 40+ billion facial images sourced from public-only web sources, including news media, mugshot websites, public social media, and other open sources."
***
As Hill documents, the billions of photos in Clearview AI's ever-growing database were scraped without permission from Facebook, TikTok, Instagram, and other social media sites. The company argues that what it does is no different from the way Google catalogs links and data for its search engine; Clearview simply indexes photographs instead. The legal memo leaked to Hill was part of the company's defense against numerous lawsuits filed by social media companies and privacy advocates who objected to the data scraping.
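Hill's search-engine analogy maps onto a simple retrieval pipeline: each scraped photo is reduced to a numeric embedding, and a probe face is matched against the database by nearest-neighbor similarity. Here is a minimal sketch of that idea only, with toy three-dimensional embeddings and hypothetical source URLs; real systems compute embeddings with deep networks and use hundreds of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(probe, database, threshold=0.9):
    """Return database entries most similar to the probe embedding,
    best match first, keeping only scores above the threshold."""
    scored = [(cosine_similarity(probe, emb), url) for url, emb in database.items()]
    return [(s, url) for s, url in sorted(scored, reverse=True) if s >= threshold]

# Toy database mapping (hypothetical) source URLs to face embeddings.
db = {
    "social-site.example/alice": [0.90, 0.10, 0.40],
    "news-site.example/bob":     [0.10, 0.80, 0.50],
    "mugshots.example/carol":    [0.88, 0.12, 0.41],
}
probe = [0.89, 0.11, 0.42]  # embedding computed from a snapshot
matches = search(probe, db)  # close faces ranked first; "bob" falls below the threshold
```

The point of the sketch is the scale economics Hill describes: once the embeddings exist, looking up one face among billions is just a similarity search, which is why the size of the scraped database, not the matching math, is Clearview's real asset.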
Scalzo is right that law enforcement loves the app. In March, Ton-That told the BBC that U.S. police agencies have run nearly a million searches using Clearview AI. Agencies used it to identify suspects in the January 6 Capitol riot, for example. Of course, it does not always finger the right people. Police in New Orleans misused faulty face IDs from Clearview AI's app to arrest and detain an innocent black man.
Some privacy activists argue that facial recognition technologies are racially biased and do not work as well on some groups. But as developers continued to train their algorithms, they largely fixed that problem; the software's disparities with respect to race, gender, and age are now statistically negligible. In testing by the National Institute of Standards and Technology, Hill reports, Clearview AI ranked among the world's most accurate facial recognition companies.
***
To see the possible future of pervasive facial recognition, Hill looks at how China and Russia are already using the technology. As part of Moscow's "safe city" initiative, Russian authorities have installed over 200,000 surveillance cameras. Hill recounts an experiment by the Russian civil liberties activist Anna Kuznetsova, who submitted her photos to a black-market data seller with access to the camera network. Two weeks later, she received a 35-page report detailing each time the system identified her face on a surveillance camera. There were 300 sightings in all. The system also accurately predicted where she lived and where she worked. While the data seller was punished, the system remains in place; it is now being used to identify anti-government protesters.
"Every society needs to decide for itself what takes priority," said Kuznetsova, "whether it's security or human rights."
The Chinese government has deployed over 700 million surveillance cameras; in many cases, artificial intelligence analyzes their output in real time. Chinese authorities have used the technology to police behavior such as jaywalking and to monitor ethnic and religious minorities and political dissidents. Hill reports there is a "red list" of VIPs who are invisible to facial recognition systems. "In China, being unseen is a privilege," she writes.
In August 2023, the Chinese government issued draft regulations that aim to limit the private use of facial recognition technologies; the rules impose no such restrictions on its use for "national security" concerns.
Meanwhile, Iranian authorities are using facial recognition tech to monitor women protesting for civil rights by refusing to wear hijabs in public.
Co-appearance analysis using artificial intelligence allows users to review live or recorded video and identify all of the other people with whom a person of interest has come into contact. Basically, real-time facial recognition will not only keep track of you; it will identify your friends, family, coreligionists, political allies, business associates, and sexual partners and log when and where and for how long you hung out with them. The San Jose, California–based company Vintra asserts that its co-appearance technology "will rank these interactions, allowing the user to identify the most recurrent relationships and understand potential threats." Perhaps your boss will think your participation in an anti-vaccine rally or visit to a gay bar qualifies as a "potential threat."
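The ranking Vintra describes is, at bottom, a counting exercise. A toy sketch (the frames, names, and function are hypothetical, not Vintra's actual method): given per-frame lists of people a recognition system has identified, tally who appears alongside a person of interest and sort by frequency.

```python
from collections import Counter

def rank_coappearances(frames, person_of_interest):
    """Count how often each identity appears in the same frame as the
    person of interest, returning the most frequent contacts first."""
    counts = Counter()
    for identities in frames:
        if person_of_interest in identities:
            counts.update(i for i in identities if i != person_of_interest)
    return counts.most_common()

# Hypothetical per-frame identification output from a recognition system.
frames = [
    ["alice", "bob"],
    ["alice", "bob", "carol"],
    ["alice", "carol"],
    ["dave"],
]
ranking = rank_coappearances(frames, "alice")  # bob and carol each co-appear twice
```

The simplicity is the unsettling part: once faces in video are reliably labeled, mapping someone's social graph requires nothing more sophisticated than this.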
What about private use of facial recognition technology? It certainly sounds like a product with potential: Personally, I am terrible at matching names to faces, so incorporating facial recognition into my glasses would be a huge social boon. Hill tests a Clearview AI prototype of augmented reality glasses that can do just that. Alas, the glasses also give the wearer access to all the other photos of the person in Clearview AI's database, including snapshots at drunken college parties, at protest rallies, and with former lovers.
"Facial recognition is the perfect tool for oppression," argued Woodrow Hartzog, then a professor of law and computer science at Northeastern University, and Evan Selinger, a philosopher at the Rochester Institute of Technology, back in 2018. They also called it "the most uniquely dangerous surveillance mechanism ever invented." Unlike other biometric identifiers, such as fingerprints and DNA, your face is immediately visible wherever you roam. The upshot of the cheery slogan "your face is your passport" is that authorities don't even have to bother with demanding "your papers, please" to identify and track you.
In September 2023, Human Rights Watch and 119 other civil rights groups from around the world issued a statement calling "on police, other state authorities and private companies to immediately stop using facial recognition for the surveillance of publicly-accessible spaces and for the surveillance of people in migration or asylum contexts." Human Rights Watch added separately that the technology "is simply too dangerous and powerful to be used without negative consequences for human rights….As well as undermining privacy rights, the technology threatens our rights to equality and nondiscrimination, freedom of expression, and freedom of assembly."
The United States is cobbling together what Hill calls a "rickety surveillance state" built on this and other surveillance technologies. "We have only the rickety scaffolding of the Panopticon; it is not yet fully constructed," she observes. "We still have time to decide whether or not we actually want to build it."
Use of copyrighted material for profit.
Just because something is available on the internet does not mean it is not copyrighted.
It's important to notice that the only reason this is bad is because the primary software developer was involved with the "alt-right".
It does seem like it would be a useful thing to post a whole bunch of cameras tied to this database near the southern border, though. Then at least we'd have some record of who is coming in.
"Just because something is available on the internet does not mean it is not copyrighted."
Copyright doesn't exist to let you control your public image.
Yes. We live in a surveillance state because Law Enforcement thinks they can thwart the people's law (US Constitution) over their government and just go snoop without a warrant.
It's that simple. If the US Constitution is ignored the USA can't exist.
"Meanwhile, Iranian authorities are using facial recognition tech to monitor women protesting for civil rights by refusing to wear hijabs in public."
They could just wear some sort of head covering to hide their identities while protesting against... oh wait.
A few years ago, when the CCP was seizing control of Hong Kong, I remember reading that protesters there had worked out ways of thwarting facial recognition systems. Seems that needs to be revived.
Mandatory masking AND umbrellas?
Was the answer ".22 LR"?
A couple things.
I have encountered several times in Sci-Fi people wearing an electronic device that fuzzes faces enough to make their wearer invisible to facial recognition.
AntiFA made wearing of masks part of their uniforms starting in summer of 2020 when they rioted, burning all across the country in the big cities run by Democrats. It was pretty obvious that the goal of the masks was anonymity from police surveillance. At least at the federal level, this was somewhat compensated for by (quasi illegal) wholesale cell phone tracking. The two are easily integrated, with modern software technology. Suspiciously, a couple months later, the entire country was forced to mask in a fruitless attempt to control COVID-19. And then, with the change of Administrations the next year, focus was shifted from AntiFA to the new Administration’s political enemies, starting with the J6 protesters, but expanding to cover anyone considered a potential domestic terrorist, now including Roman Catholics.
Strikes me as a lot of worry over something that can't be prevented. Sure, you can make facial recognition illegal evidence in courts, you can even try to make it illegal for cops to use it, but you can't stop the technology, and when ordinary people can use it, so can cops. It's easy enough to just claim it was a hot tip from someone on the street who ran off before they could be identified.
Not only that, but the reasons for objecting to it are all based on, not the capability or accuracy itself, but its use for evil purposes — and then the problem is with the evil purposes. Like guns and other weapons. The problem with Red China is not what they can do with tools, but the fact that they're Red.
something sinister. the phone tracking and bank tracking is enough.
right after I posted this ^^ I received a text from bankofyamerica lolwtf
"In the meantime, Ton-That began toying online with alt-right and MAGA conceits, an interest that led him to Johnson. The two attended the 2016 Republican National Convention in Cleveland, where Donald Trump was nominated. At that convention, Johnson briefly introduced Ton-That to internet financier Peter Thiel, who would later be an angel investor in what became Clearview AI."
I fail to see what any of that has to do with facial recognition or privacy. All it really reveals is your strong political biases.
But that is the true crime here. Consorting with figures to the right of Mao is beyond the pale to the writers at Reason.
Wasted opportunity. The title should have been, "All your Face Are Belong to Us"