Civil Liberties

No Facial Recognition Tech for Cops

A sloppy panopticon is almost as dangerous as an effective one.

C.J. Ciaramella | From the March 2021 issue

(Illustration: KrisCole/iStock)

The Los Angeles Police Department (LAPD) banned the use of commercial facial recognition apps in November after BuzzFeed News reported that more than 25 LAPD employees had performed nearly 475 searches using controversial technology developed by the company Clearview AI. That's one of several recent developments related to growing public concern about police surveillance using facial recognition.

Clearview AI's app relies on billions of photos scraped from Facebook and other social media platforms. Like other facial recognition tools, it pairs that database with machine-learning software trained to match a submitted face against the photos the company has collected.
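
Clearview has not published its matching code, but facial recognition systems of this kind generally convert each photo into a numeric "embedding" vector and rank database entries by similarity to a probe image. The Python sketch below illustrates that general approach; the function names, the 128-dimensional embeddings, the threshold, and the random stand-in vectors are all hypothetical, not Clearview's actual design.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_matches(probe, gallery, threshold=0.6, k=5):
    """Rank gallery identities by similarity to a probe embedding.

    `gallery` maps an identity label to a precomputed face embedding,
    standing in for the scraped-photo database described above.
    """
    scores = [(name, cosine_similarity(probe, emb))
              for name, emb in gallery.items()]
    candidates = [s for s in scores if s[1] >= threshold]
    return sorted(candidates, key=lambda s: s[1], reverse=True)[:k]

# Toy usage: random vectors stand in for embeddings that a trained
# face-encoder network would produce from photos.
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
probe = gallery["person_42"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
print(top_matches(probe, gallery))  # "person_42" should rank first
```

In a real system the embeddings would come from a neural network trained on face images, and the threshold would set the trade-off between false matches and missed matches.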

Clearview is just one player in an expanding market. The Minneapolis Star Tribune reported in December that the Hennepin County Sheriff's Office had coordinated 1,000 searches through its Cognitec facial recognition software since 2018.

Concerns about such technologies have led several legislative bodies to delay, restrict, or halt their use by law enforcement agencies. In December, the Massachusetts legislature approved the first state ban on police use of facial recognition tech. During nationwide protests over police abuse last summer, the New York City Council passed the Public Oversight of Surveillance Technology Act, which requires the New York Police Department to disclose all of the surveillance technology it uses on the public.

This technology is often deployed without public knowledge or debate, sometimes before the kinks have been worked out. An independent audit found that London's A.I. technology for scanning surveillance footage labeled suspects accurately only 19 percent of the time.

Studies by researchers at the Massachusetts Institute of Technology and the National Institute of Standards and Technology (NIST) have found that many of these algorithms have especially high error rates when trying to match nonwhite faces. A December 2019 NIST study of 189 software algorithms found that they falsely identified African-American and Asian faces 10 to 100 times more often than white faces.

Michigan resident Robert Julian-Borchak Williams, who is black, is the first American known to have been wrongly arrested and charged with a crime because of a faulty face match. In January 2020, Williams spent 30 hours in police custody and had to pay a $1,000 bond after an algorithm incorrectly matched him to a shoplifting suspect.

While the potential benefits of reliable facial recognition technology shouldn't be dismissed out of hand, a sloppy panopticon is almost as dangerous as an effective one. Privacy and accuracy concerns demand intense scrutiny from the public and transparency from the government regarding how this emerging technology is used.


C.J. Ciaramella is a reporter at Reason.

Civil Liberties | Facial Recognition | Surveillance | Police State

Comments

  1. Jerryskids   4 years ago

    An independent audit found that London's A.I. technology for scanning surveillance footage labeled suspects accurately only 19 percent of the time.

    What is the error rate of cops reviewing surveillance footage? And how would you know? If the word of a cop is automatically believed and the cop says "Yep, that's the guy alright, I'd recognize him anywhere", well, how do you know he's right?

    1. Rossami   4 years ago

      1. We all have an intuitive sense of the fallibility of human identification. Facial recognition software gets undeserved deference because of cognitive biases that make us favor "new" things.
      2. Surveillance footage can be easily checked by another human. The algorithms used by facial recognition tools are generally "tuned" through machine learning and cannot be easily checked. Often, the systems do not even create a human-readable record of the image that's being "recognized". (Technologically, they could create such a human-readable record but they sometimes don't because the logging becomes unreasonably large.)

      1. Tom Dial   4 years ago

        All of which means next to nothing. To paraphrase Lord Kelvin, if you can't quantify something, you know very little about it. The decisive question that the "studies" reported in the press have failed to answer is whether AI-based recognition is better than, the same as, or worse than recognition based on examination of the same data by people. If it is better, or even the same, then not using it is foolish, especially since it undoubtedly is both quicker and cheaper.

        The problem with this technology is not particularly that it may produce incorrect "results" in a search for people of interest in a criminal investigation, but that it may be treated, because it is computer-based, as a cause by itself of government action.

        In reality it is, at best, no better than an ID by a person. It may be useful in fingering a suspect, but the police still have to establish, or should, that the person identified meets a number of other criteria that tie him or her to the crime. An alibi has to be as effective a defense against AI as it is against a human identification. The cases I have seen reported where someone was arrested "because of AI facial recognition identification" represent, without exception, police negligence and judicial misconduct. And that is where attention should be directed. Suppressing facial recognition technology will be unnecessary if it is put in its proper place as a tool that has limitations but can be used in investigations to supplement other investigative techniques. It is mind-boggling that a police officer would make an arrest based solely on this one technique, or that a judge would issue an arrest warrant citing it as probable cause.

    2. Chumby   4 years ago

      If the suspect was smiling and had decent teeth, the computer indicated “Not British.”

    3. Mick 2   4 years ago

      You don't. The weakest part of a case in court is eyewitness identification. No two people see the same thing and a single identification is shaky at best when the suspect is not known to the person testifying.

      This is just ONE MORE TOOL - it is a layer of information along with forensics, evidence found on the scene, spontaneous outcry statements, confessions, independent and expert witnesses, obviously unique & observable identifying physical anomalies, video surveillance, photos, established M.O.s of the accused, prior criminal history, etc.

      Getting your shorts in a bunch over one small piece of the entire puzzle.

  2. chemjeff radical socialist   4 years ago

    I believe the attempted overthrow of our government on January 6th by less desirable, low-achievement members of society demonstrated a pressing need to monitor those on the lower end of the economic scale with unhealthy political opinions.
    Since we are now (wisely) eliminating police departments infested with institutional racism, something needs to fill the void. I think that facial recognition technology can be an important part of monitoring the threat posed by the lower strata.
    In the hands of a new CIA/FBI-like federal agency dedicated to monitoring internal political threats, this could be a valuable tool for liberty and fortifying our democracy.

    1. Chumby   4 years ago

      A new agency called the Korrect Government Bureau could be created to oversee this. Or KGB for short.

    2. creech   4 years ago

      Why bother with facial recognition? We know that merely wearing a MAGA hat, or waving a Gadsden Flag, is evidence of neo-Nazi white supremacy insurrectionist criminality. Even better, let's just arrest everyone who stands up when the national anthem is played.

    3. Mick 2   4 years ago

      less desirable, low-achievement members of society and a pressing need to monitor those on the lower end of the economic scale with unhealthy political opinions.

      To paraphrase: if you are poor and hold a disparate opinion on something which is unsubstantiated, that needs "monitoring?"

      Who decides then, who are the "less desirable?" Who decides which people are "low achievement?"

      Don't say FBI or CIA - that is not their mission. They identify criminals, terrorist organizations, & genuine threats to the government - many of which the public NEVER hears about.
      Not the disadvantaged, poor, or obnoxious people with unpopular opinions - none of which is illegal. Your kind of rhetoric is exactly what fuels the idiotic notion of Defund the Police.

      Lose the elitist tone. Embrace the reality that many of the rioters engaging in the coup attempt were educated professionals and people of means with a specific, organized plan of action, and that members of the SENATE and a POTUS enabled them to stage the events of Jan 6th. None of them are remotely disadvantaged.

  3. Earth Skeptic   4 years ago

    I will wager that the average facial recognition technology is more accurate than human eyewitnesses, and at least any camera will get basic characteristics like color of clothing more correct. And a program, no matter how "racially biased," will be more objective than most cops.

    So if we need to ban the technology because it is not perfect, we should certainly also ban any human identification of suspects. Perhaps in the enlightened future, we will wait to prosecute only those who "self identify" as criminals.

    1. Rossami   4 years ago

      You would be wagering incorrectly. Facial recognition software is tuned to ignore clothing (on the logic that it is easily changed). Facial recognition, like many other machine learning algorithms, has been shown to develop its own racial biases. Insurance companies in particular have to be very careful with their data mining and artificial intelligence efforts because the machines regularly generate underwriting recommendations that recreate prohibited redlining.

      Computer algorithms are also subject to the same statistical mistake that underlies the Prosecutor's Fallacy. Humans try to match their recognition to people they have a prior reason to believe were in the area. Computers attempt to match against everything in the database. When you have a 1-in-1,000 chance of generating a false positive and you look at 10 faces, your odds that the match is a false positive are pretty low. When you have the same odds of generating a false positive but you look at 1,000 faces, the odds of generating at least one false positive approach 100 percent (see the worked numbers below).

      Could facial recognition someday become as reliable as a human (and be more objective)? Yes. But the technology is nowhere close now.
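
      To put rough numbers on this: if each comparison independently carries a false-positive probability p, the chance of at least one false positive across N comparisons is 1 - (1 - p)^N, and the expected number of false positives is N * p. A minimal sketch in Python, using the hypothetical 1-in-1,000 rate from above rather than any measured figure:

      ```python
      # Illustrative only: the per-comparison false-positive rate is hypothetical.
      p = 1 / 1000

      for n in (10, 1_000, 100_000, 1_000_000):
          at_least_one = 1 - (1 - p) ** n   # P(at least one false positive)
          expected = n * p                  # expected count of false positives
          print(f"N={n:>9,}: P(>=1 FP) = {at_least_one:.4f}, "
                f"expected FPs = {expected:,.0f}")
      ```

      At 1,000 faces the chance of at least one false positive is already about 63 percent; at a million faces roughly 1,000 false positives are expected, which is the point made in the reply below.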

      1. perlhaqr   4 years ago

        And when it looks at a million faces, you get a thousand false positives.

        "They all did it! Computer says so."

        1. Mick 2   4 years ago

          You get a boatload of bias and false positives with people too - so what is your point, beyond that this is simply one more tool, not an empirical answer to all our problems?

      2. Illocust   4 years ago

        False positives aren't really an issue unless you're prosecuting solely based on the computer match. I was talking to my father (an ex-cop) the other day about why police departments won't investigate porch pirates even if you have them on video, and he asked me a simple question: how do you identify the person in the video?

        Unless you have a license plate in view or you already know the person's name, all you have is a face. Unless you live in a very isolated small town, there are thousands of local criminals who could have done it. You would spend weeks sifting through mugshots, and that's assuming the guy had been caught before. A tool that can look at that face and give you even a hundred potential people that person could be would be extremely useful in narrowing down that suspect pool and making it possible to find the guy you caught on camera stealing your package.

        1. Rossami   4 years ago

          re: prosecuting solely based on the computer match
          Did you miss the reference in the article where police basically did exactly that? Look for the paragraph beginning "Michigan resident Robert Julian-Borchak Williams..."

          Sifting through thousands of mug shots, whether sifting by computer or by human eye, will result in the same statistical fallacy. You're still looking at too large a population and will inevitably sweep up false-positives. A tool that "narrows down the suspect pool" does not solve the problem in the slightest. It merely gives you a false sense of confidence in your results.

          1. Illocust   4 years ago

            I'm just here for the comments. The articles are rarely of a quality worth reading.

            You keep complaining about looking at too big of a suspect pool, but you're against using a tool that would reduce its size? It's easier to investigate a hundred people than it is to investigate a hundred thousand. So a tool that narrows your suspect pool from a hundred thousand to one hundred is useful. The false positives don't matter, because if you get a list of people that it could be, you know by definition it's not all of them.

            1. Rossami   4 years ago

              Your mistake is in thinking that the tool reduces the size. It doesn't. The suspect pool is the same hundred thousand whether a human reads through it raw or a computer pre-filters it.

              Now I will concede that you're making the math a little more complicated by running a two-stage filter (one by the computer, the second by a human), but ultimately you're still starting with a search against all possible elements in the database and running your search using only one factor - recognition of a face. What you should be doing instead is to filter your target population on some independent factor. With a truly independent factor, you can multiply your odds of a false positive (since we're multiplying fractions, that makes the result lower) and make your results more reliable. The covariance between the false positives of computer facial recognition and human facial recognition means that you cannot count those as independent tests (see the toy numbers below).
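
              To make the independence point concrete: when two tests really are independent, their false-positive probabilities multiply; a covariant second check does not buy that reduction. Both rates in the sketch below are invented for illustration, not measured:

              ```python
              # Toy numbers only: both rates below are hypothetical.
              N = 100_000          # size of the database searched
              p_face = 1 / 1000    # face matcher's per-comparison false-positive rate

              # Face match alone: expected false positives across the database.
              print(f"face match alone:        ~{N * p_face:.0f} false positives")

              # AND an independent factor, e.g. only 1% of the database could
              # plausibly have been near the scene. Independence lets the
              # probabilities multiply.
              p_near_scene = 0.01
              print(f"plus independent factor: ~{N * p_face * p_near_scene:.0f} false positive")

              # A human re-checking the computer's matches keys on the same facial
              # resemblance, so its errors covary with the matcher's; the
              # multiplication above does not apply, and it removes far fewer of
              # the ~100 false positives than an independent factor would.
              ```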

              1. Illocust   4 years ago

                No, when you get your list of potential suspects, then you start investigating. These guys don't live in the right town, these guys died two years ago, eventually you narrow down to people you can reasonably ask alibis of. You could eyeball the list of suspects and see if you agree, but it wouldn't do much good, as the computer wouldn't have picked them if they didn't look like the suspect.

                1. Rossami   4 years ago

                  So you're going to do what I just said you have to do - apply independent tests. That's very much not the same as what you were originally talking about above, which was computer pre-filtering of mug shots that would be put in front of the homeowner.

                  Independent tests increase the confidence in your investigation results. Computer filtering of mug shots is faster than purely manual review, but it adds no confidence to your investigation. In fact, it should reduce your confidence in the results, both because you are adding the error of two covariant tests and because you (and the jury) will have to fight the cognitive biases that lead us to trust computer answers more than is justified.

          2. Mick 2   4 years ago

            Police suspect sketch artists are still used; that's not infallible either, but it's one more tool in the box.

  4. Illocust   4 years ago

    No, you just assumed that I meant the next step to be showing the photos to the homeowner. I was always talking about using it as a way to narrow the suspect pool down to a manageable number of suspects for investigation.

  5. Ben_   4 years ago

    So someone else uses it and the police get an anonymous tip. How is that different?

    Technology doesn’t go backwards. The answer is to decriminalize anything you don’t want them using facial recognition for and let them use it for the other stuff and catch genuinely dangerous people with it.

    Pretending you can make technology go backwards isn’t going to accomplish anything.

    1. Rossami   4 years ago

      So because nukes have been invented, we have to allow police to use them to stop dangerous people?

      We may not be able to make technology go backwards but we can forbid bad actors from abusing it. (And by definition, government is presumed to be a bad actor until proven otherwise.)

  6. GroundTruth   4 years ago

    " the potential benefits of reliable facial recognition technology shouldn't be dismissed out of hand"

    Actually, they should be dismissed out of hand. Some things are just wrong. If you're going to have this, then why not go all the way and require that everyone wear a nametag and serial number at all times?

  7. hpearce   4 years ago

    "public concern about police surveillance using facial recognition."

    People use facial recognition all the time - if they can see!
    @reason should not demean the obvious truth.

    Secondly, @reason should provide the actual reasoning for banning technology in this case (as opposed to drugs and guns) - not to mention why the ban only applies to the government - an indication that a constitutional rationale must be found.
