Politics

Face the Facts

Facial recognition technology's troubled past -- and troubling future.

When construction worker Rob Milliron sat down to eat his lunch in Tampa's Ybor City entertainment district in July 2001, he didn't expect that it would result in a visit to his workplace by police officers looking to arrest him. His innocent dining inadvertently placed him in a showcase for one of the hottest trends in high-tech surveillance security: facial recognition cameras.

Milliron's face, scanned by the cameras, ended up in a U.S. News & World Report article about the technology. The accompanying headline read: "You can't hide those lying eyes in Tampa." Then a woman in Oklahoma saw the picture, misidentified Milliron as her ex-husband, who was wanted on child neglect charges, and called the police. After convincing police he had never been married, had kids, or even been to Oklahoma, he told the St. Petersburg Times, "They made me feel like a criminal."

Milliron perhaps can take some comfort from the fact that, as the use of facial recognition technology (FRT) for police work spreads, he won't be alone in being falsely suspected. FRT advocates offer Americans a sweet-sounding deal: sell your privacy for security. That's an especially comforting pitch in the wake of 9/11. We've all seen the grainy photos of Mohammed Atta and crew waltzing through airport security. If only those cameras had been linked to the proper criminal databases, say FRT proponents, the attackers never would have made it onto the planes.

But FRT, currently used in at least two U.S. cities and widespread throughout Great Britain, is notoriously unreliable and ineffective. At its best, it brings to our streets the high-tech equivalent of the Department of Transportation's airport security policy: humiliate and search everyone ineffectively.

That's bad enough, but the real problems will occur if FRT ever does start working as promised. It threatens to create a creepy future of ubiquitous spy cameras that will be used by police for purposes far less noble than thwarting terrorists.

FRT works by combining photographic images with computer databases. After an image is captured by a camera, a computer program measures some of the 80 or so nodal points on your face, such as the distance between your eyes, the width of your nose, the depth of your eye sockets, and the length of your jaw line. The technology then turns the nodal measurements into a numerical code called a "faceprint." A properly working face recognition system supposedly can match a person standing in front of a camera with a record from a database including tens of millions of faceprints. Criminals, say proponents, would have nowhere to hide. And law-abiding citizens would have no reason to fear.
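
To make the matching step concrete, here is a minimal sketch in Python. It illustrates the general nearest-neighbor idea the paragraph describes, not any vendor's actual algorithm; the four nodal measurements, the threshold, and the enrolled identities are all invented for the example.

```python
import math

# A "faceprint" here is just a vector of nodal measurements (eye distance,
# nose width, eye-socket depth, jaw-line length). Real systems use ~80
# nodal points and proprietary encodings; this is the matching logic in
# miniature, with made-up numbers.

def distance(a, b):
    """Euclidean distance between two faceprint vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(probe, database, threshold=2.0):
    """Return the closest enrolled identity, or None if nothing in the
    database falls within the (tunable) match threshold."""
    name, score = None, float("inf")
    for identity, faceprint in database.items():
        d = distance(probe, faceprint)
        if d < score:
            name, score = identity, d
    return name if score <= threshold else None

# Toy database of enrolled faceprints.
wanted = {
    "suspect_a": [63.0, 31.5, 12.2, 140.8],
    "suspect_b": [58.4, 29.0, 11.7, 133.2],
}

# Measurements from a camera frame; lighting, pose, and age shift these
# numbers, which is where both misses and false matches come from.
probe = [62.1, 31.9, 12.5, 139.9]
print(best_match(probe, wanted))  # -> suspect_a
```

The threshold is where the trouble lives: set it tight and the system misses real matches; loosen it and innocent passersby start registering as suspects. The field tests described below exhibit both failure modes at once.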

Yet for the most part, FRT hasn't worked as intended. The Milliron incident in Tampa was just one of a string of national flops for face scanners, which prior to 9/11 were widely derided even as they were being implemented or considered by several municipalities, including Tampa; Jacksonville, Florida; and Virginia Beach, Virginia. At the time, FRT was being marketed as a tool to catch wanted felons and find runaways and missing persons. Nonetheless, a measure was introduced in the Virginia legislature requiring a judge's approval to use FRT, and in Tampa, city council members who had approved its use claimed they had been fooled and didn't know what they had voted for.

Then the terrorists attacked, and everything seemed to change. The Virginia legislature dropped the bill requiring judicial approval. Executives at FRT firms testified before Congress and were called on by intelligence and law enforcement agencies. Tom Colatosti, CEO of Viisage Technology, told reporters, "If our technology had been deployed, the likelihood is [the terrorists] would have been recognized." Major airports, including Boston's Logan, Dallas-Fort Worth International, and Palm Beach International, have installed test versions of FRT. While the stock market was slumping, scanning companies Visionics and Identix saw their share prices shoot up 244 percent and 197 percent, respectively, in the six months following 9/11. (The two companies have since merged and are now known as Identix.)

Yet one stubborn fact remains: FRT doesn't work. In March, Palm Beach International Airport ran a test of Visionics' "Argus" facial recognition system (a version that can be plugged into existing closed circuit TV systems) at its Concourse C security checkpoint. Fifteen airport employees tried to get through the security checkpoint. The security checkers had a database of 250 faceprints, including those of the 15 testers. Over a four-week period, the testers made 958 attempts to pass through the checkpoint; the face recognition system stopped them 455 times, for a success rate of only 47 percent. On top of that, there were 1,081 false alarms triggered by ordinary passengers and other airport employees passing through the scanners. That worked out to two or three false alarms per hour.

In the experiment, the people who triggered false positives didn't have to be pulled aside and interrogated by the police. But imagine what would happen if they did; imagine FRT on every concourse of the airport, using a database of "wanted" suspects numbering in the hundreds of thousands. That would make for at least dozens of false positives every hour.
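
The arithmetic behind that projection is simple, though it leans on an assumption worth flagging: that false alarms grow roughly in proportion to the size of the watch list. A back-of-the-envelope sketch (the 16-hour operating day is an assumption; the report gives only the per-hour figure):

```python
# Back-of-the-envelope math from the Palm Beach test figures. The 16-hour
# operating day and the linear scaling with database size are assumptions
# for illustration, not measured results.

false_alarms = 1081            # over the four-week test
weeks = 4
hours_per_day = 16             # assumed airport operating hours
hours = weeks * 7 * hours_per_day

per_hour = false_alarms / hours
print(f"Observed: {per_hour:.1f} false alarms/hour with 250 faceprints")

# If false matches scale with the number of enrolled faces, a watch list
# of 100,000 multiplies each passenger's chance of a spurious hit 400-fold.
scale = 100_000 / 250
print(f"Naive linear projection: {per_hour * scale:.0f} false alarms/hour")
```

If anything, "at least dozens" is conservative; under naive linear scaling the projection runs to hundreds per hour. The essential point survives any reasonable assumption: false alarms grow with the size of the watch list, not with the number of actual criminals walking through.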

What happens if you're one of the folks detained? After the police have marched you into the interrogation room, how do you prove that you're really who your identification says you are, and not a terrorist (or an ordinary fugitive) using false ID? If you're one of the airport cops, how do you go about your job knowing that the overwhelming majority of the suspects are in fact innocent?

The Palm Beach test isn't the only one that casts a shadow on FRT. Richard Smith, former head of the Privacy Foundation at Denver University and now a privacy and Internet security consultant in Brookline, Massachusetts, conducted his own test of the Visionics "FaceIt" system. He got similarly unimpressive results.

Smith found that changes in lighting, eyeglasses, background objects, camera position, and facial position and expression all seriously affected image quality and system efficacy. He concluded that airport use of FRT would require "special walkways where lighting is tightly controlled and passengers pass single file." Passengers would have to be "instructed to remove hats and glasses and look straight ahead at a head-height camera."

None of this fits with the exaggerated claims in favor of FRT made by those selling it and repeated as fact by gullible media outlets. According to the Denver Post, "It doesn't matter if you gain 200 pounds or go bald between photographs. Short of plastic surgery the camera will recognize you." Unless, of course, you put on sunglasses, or cock your head, or make a funny face.

Or get older. A study by the National Institute of Standards and Technology found a 43 percent failure rate for pictures of the same person taken one and a half years apart. Similarly, the Defense Department funded a test of commercially available FRT. It found that the systems failed one-third of the time and produced a large number of false positives. The impressive reliability rates you hear from the face scanning companies are usually based on tests in laboratories under optimal conditions.

Yet "even if the technology worked perfectly," Smith observes, "it would still allow 99 percent of the terrorists through….The biggest problem with face recognition systems is the simple fact that we don?t know who the terrorists are and law enforcement doesn?t have their pictures. Spotting terrorists at airports is simply the wrong use of this technology."

Even if we did know who the terrorists were, we'd have to sit them all down for a session with Annie Leibovitz for FRT to be useful. According to the testers at Palm Beach International, "Input photographs needed to be of good quality to make successful matches." Similarly, people being scanned need to stand still, look straight at the camera, and not wear glasses. As the Palm Beach study acknowledged: "Motion of test subject head has a significant effect on system ability. There was substantial loss in matching if test subject had a pose 15 to 30 degrees off of input camera focal point and eyeglasses were problematic."

Mass face scanning was formally introduced to the American public in January 2001 at Super Bowl XXXV in Tampa, when football fans had their faces surreptitiously checked with a Viisage system and compared to a database of known criminals. The Tampa authorities then began to use scanning in the Ybor City entertainment district. They targeted people strolling down the street or eating lunch, comparing their faceprints to a database of criminals and runaways.

Using open record requests, the American Civil Liberties Union (ACLU) discovered that the system was essentially abandoned within months of its highly publicized rollout. No correct matches had been made to the criminal database. Instead, according to the ACLU, the system matched "male and female subjects and subjects with significant differences in age and weight."

Put another way, FRT can be dumber than Inspector Clouseau: Even he could distinguish a man from a woman, or a short fat man from a tall thin man. The Tampa police, for their part, deny they have abandoned FRT, saying they are revamping it to work with more cameras.

Before they spend the money, they should take a closer look at the experience in the world capital of FRT. The London borough of Newham, with a population of about 250,000, is widely touted by advocates as proof of the technology's awesome crime-fighting ability. Newham boasts approximately 300 government cameras located in strategic places and linked to Visionics' FaceIt system.

Newham's FRT is credited by advocates and local government officials with cutting crime by nearly 40 percent since 1998. That effect, if real, was apparently not long-lasting. According to a United Press International report, street robberies and car theft—two crimes for which FRT is supposed to be an especially powerful deterrent—were on the rise again in Newham last year.

And as Jeffrey Rosen reported in The New York Times Magazine last October, the Newham spy system has not resulted in a single arrest during its three years of operation. Nor do the people who run the system even know who is in the database. The deterrent effect, to the extent there may be one, appears to lie with the signs posted throughout Newham telling criminals that cameras are watching and that the police know who they are, where they live, and what crimes they have committed. Of course, "it's not true," as the Newham monitoring chief admitted to Rosen.

Newham is simply a part of Great Britain's growing spy camera network, which arose as a response to terrorist bombings in London's financial district in the early 1990s. Britain now has some 1.5 million government cameras in place. As the cameras were first being set up, the government, then under the control of the Tories, insisted that "if you have nothing to hide, you have nothing to fear." Now under control of the Labour Party, the government is spending $115 million for still more spy cameras.

Despite ubiquitous cameras, however, violent and property crime in England is soaring. A three-year government study by the Scottish Center for Criminology recently concluded there is no evidence to suggest that Britain's spy cameras have reduced serious crime overall. Another study, this one by the National Association for the Care and Resettlement of Offenders, looked at 14 British cities and found that the cameras had little effect in reducing crime. The study suggested that improving street lighting would be a more cost-effective crime prevention method.

This much can be said in favor of the cameras: In some cases, they have been used to convict speeders, other traffic law offenders, and litterbugs. Yet it's one thing to give up your privacy to catch Irish Republican Army terrorists. It's another thing to surrender privacy so the police can catch people who litter.

Of course, just because FRT doesn't work very well today doesn't mean it will never work. FRT companies are receiving massive amounts of corporate welfare. According to a March General Accounting Office (GAO) report, as of June 2001 the Departments of Justice and Defense had given about $21.3 million and $24.7 million, respectively, to the research and development of FRT. All this research will probably result in much-improved products eventually.

What then? Philip Agre, an associate professor in the Information Studies Department of the University of California at Los Angeles, argues that as FRT gets better the potential for abuse will rise commensurately. "As the underlying information and communications technologies (digital cameras, image databases, processing power and data communications) become radically cheaper (and more powerful)," he writes on his Web site, "new facial image databases will not be hard to construct, with or without the consent of the people whose faces are captured."

Once those databases exist, their uses will doubtless expand, consistent with typical bureaucratic mission creep. Look, for example, at the 2001 Colorado law allowing the Division of Motor Vehicles (DMV) to use biometric technology to map applicants' faces for driver's licenses. The stated intent was to stop the same person from obtaining multiple licenses. But the law's language was much broader, allowing access to the DMV database to "aid a federal, state or local government agency in carrying out such agency's official function"—in other words, for any government purpose whatsoever. Illinois and West Virginia also have turned their driver's license bureaus into mandatory faceprint collection points.

In March, after national criticism, the Colorado legislature refined the face mapping scheme, declaring that before a government agency can tap into the image database, it must have "a reasonable suspicion that a crime has been committed or will be committed and a reasonable suspicion that the image requested is either the perpetrator of such a crime or the victim of such a crime." Like Colorado, other states can establish guidelines that ostensibly limit government use of your faceprint. But once your faceprint is in a state database, the federal government has legal authority to use it for any purpose at all. By federal statute, every state driver's license record is available to every federal agency, "including any court or law enforcement agency, in carrying out its functions." Because of the Supremacy Clause of the U.S. Constitution, a state government cannot limit the uses to which federal agencies put these state-gathered faceprints.

Even before 9/11, many local law enforcement agencies considered political surveillance to be one of their official functions. For example, last spring it came to light that the Denver Police Intelligence Unit had for years kept surveillance files on government protesters, including about 3,000 individuals and 200 organizations. Among those targeted for police spying were the American Friends Service Committee (a Quaker group), Amnesty International, and Copwatch (a group that protests police brutality). The surveillance program was supposedly scaled back (though not eliminated), but only after secret documents were brought to public attention by the Colorado Civil Liberties Union.

Telling Colorado cops that they must have "reasonable suspicion" before accessing the faceprint database sounds good, but law enforcement will easily find ways around such restrictions. The Denver police surveillance guidelines have always required criminal suspicion, so the police simply listed as extremists the groups they wanted to spy on.

Indeed, the main constraint on the Denver Police Department's political spying program was manpower. There are only so many people a police unit can spy on at once. But with FRT, political surveillance may one day escape such limits. Consider a mobile monitoring unit equipped with a face scanning camera, face recognition software, and the state's driver's license faceprint database. It would be a simple matter to compile a list of everyone who attends a rally to protest police brutality, to denounce drug laws, or to oppose U.S. foreign policy.

Nor will the technology necessarily be confined to use at political protests. In July 2001, the conservative U.S. House Majority Leader Dick Armey (R-Texas) joined with the ACLU to warn: "Used in conjunction with facial recognition software…the Colorado database could allow the public movements of every citizen in the state to be identified, tracked, recorded and stored."

Sound far-fetched? On September 20, 2001, Joseph Atick, CEO of Visionics, told a Department of Transportation airport security committee that FaceIt, in conjunction with security cameras, could be linked via the Internet to a federal monitoring station and alert officials to a match within seconds. He added that virtually any camera, anywhere, could be linked to the system, as could a "wide network of databases."
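
Architecturally, what Atick described is not much more than camera nodes posting match events to a central server. Here is a hypothetical sketch of such a client, with the endpoint, payload fields, and function name all invented for illustration (this is not Visionics' actual protocol):

```python
import json
import time
import urllib.request

# Hypothetical central monitoring endpoint; invented for illustration.
MONITORING_STATION = "https://example.gov/frt/alerts"

def report_match(camera_id: str, identity: str, score: float) -> None:
    """Send one faceprint-match event to the central station as JSON."""
    event = {
        "camera": camera_id,
        "identity": identity,
        "score": score,
        "timestamp": time.time(),
    }
    req = urllib.request.Request(
        MONITORING_STATION,
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # the alert arrives within seconds

# Any camera anywhere could make this call; that reach is the point,
# and the worry. Example (left commented out; the endpoint is fake):
# report_match("concourse-c-03", "suspect_a", 0.91)
```

The hard part of such a system is not the plumbing, which is commodity networking, but the databases behind it, which is exactly why the fight over faceprint collection matters.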

Opponents of FRT should not count on much help from the courts. Standard legal doctrine holds that there is little or no expectation of privacy in public. There is nothing unconstitutional, for example, about a police officer's sitting on a bench at a shopping mall and making notes about the people who pass by. FRT advocates can argue that massive surveillance is simply like having 10—or 100—police officers in the mall, and that the quantitative difference is of no constitutional significance.

Yet even if we acknowledge that electronic government eyes are no different than a cop on every corner, do we really want that? One of the conclusions of Jeffrey Rosen?s New York Times Magazine piece on spy cameras in Great Britain was that the cameras are designed not to produce arrests but to make people feel they are being watched all the time. "The people behind [the cameras] are zooming in on unconventional behavior in public that has nothing to do with terrorism," Rosen wrote. "And rather than thwarting serious crime, the cameras are being used to enforce social conformity in ways that Americans may prefer to avoid."

There is some reason for hope, however. In the past, U.S. courts have acknowledged that technological change can make a constitutional difference. Under 19th-century constitutional doctrine, there was no need for the police to get a warrant before eavesdropping. If a policeman stood on public property and could hear a conversation going on inside a house, he did not need a search warrant. That doctrine made sense in the 1800s; if you talk so loudly that people on the sidewalk can hear you, you don't have a legitimate expectation of privacy for your words.

But in the 1967 case Katz v. United States, the Supreme Court considered the issue raised by police officers who, without trespassing on private property, used parabolic microphones or wiretaps to listen in on conversations. Justice Hugo Black said this kind of surveillance was permissible because the new technology was simply an updated version of eavesdropping. The majority of the Court, however, ruled that wiretaps and other electronic surveillance should be permitted only if the police obtained a search warrant. The intrusiveness of electronic surveillance, its great potential for abuse, and its infringement on traditional expectations of privacy all distinguished it from old-fashioned eavesdropping.

Similarly, widespread face scanning could eventually make it possible for the government to track the movement of most citizens most of the time. It would expand the government?s tracking capability by several orders of magnitude—as great an increase as the one from human ears to parabolic microphones.

Like the Fourth Amendment itself, Katz relies on a subjective judgment of reasonableness. Thus, there is no guarantee that Katz would stand as a barrier to omnipresent British-style face scanning; nor would Katz necessarily forbid placing information about every person's movements in a permanent government database.

Ultimately, the future of face scanning will depend on the political process. There is almost no chance that the American public or their elected officials would vote in favor of tracking everyone all the time. Yet face scanning is typically introduced and then expanded by administrative fiat, without specific legislative permission.

So there is a strong possibility that future Americans will be surprised to learn from history books that in the first centuries of American independence, citizens took for granted that the government did not and could not monitor all of their movements and activities in public places.