Can Algorithms Run Things Better Than Humans?
Welcome to the rise of the algocracy.
Police in Orlando, Florida, are using a powerful new tool to identify and track folks in real time. Video streams from four cameras located at police headquarters, three in the city's downtown area, and one outside of a recreation center will be processed through Amazon's Rekognition technology, which has been developed through deep learning algorithms trained using millions of images to identify and sort faces. The tool is astoundingly cheap: Orlando Police spent only $30.99 to process 30,989 images, according to the American Civil Liberties Union (ACLU). For now the test involves only police officers who have volunteered for the trial.
But the company has big plans for the program. In a June meeting with Immigration and Customs Enforcement (ICE), Amazon Web Services pitched the tech as part of a system of mass surveillance that could identify and track unauthorized immigrants, their families, and their friends, according to records obtained by the Project on Government Oversight.
Once ICE develops the infrastructure for video surveillance and real-time biometric monitoring, other agencies, such as the FBI, the Drug Enforcement Administration, and local police, will no doubt argue that they should be able to access mass surveillance technologies too.
Amazon boasts the tool is already helping with everything from minimizing package theft to tracking down sex traffickers, and the company points to its terms of use, which prohibit illegal violations of privacy, to assuage fears.
As impressive as Rekognition is, it's not perfect. The same ACLU report found that a test of the technology erroneously matched 28 members of Congress with criminal mugshots. Being falsely identified as a suspect by facial recognition technology, and detained by police while strolling down the street minding your own business, would annoy anybody. Being mistakenly identified as a felon who may be armed would put you in danger of aggressive, perhaps fatal, police intervention.
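Such mismatches largely come down to where a human sets the match threshold: face recognition systems return similarity scores, and a "hit" is whatever clears the cutoff. The sketch below uses invented scores and a generic matching function rather than Rekognition's actual API, but it shows how a permissive threshold manufactures false positives.

```python
# Illustrative sketch only: how a human-chosen threshold turns similarity
# scores into "matches." Scores and database entries are hypothetical; this
# is a generic matcher, not Rekognition's actual API.

def find_matches(similarity_scores, threshold):
    """Return every mugshot whose similarity score clears the threshold."""
    return [name for name, score in similarity_scores.items() if score >= threshold]

# Hypothetical similarity scores (0-100) between one lawmaker's photo and a mugshot database.
scores = {"mugshot_0412": 81.2, "mugshot_1177": 76.5, "mugshot_2903": 93.8}

print(find_matches(scores, threshold=80))  # ['mugshot_0412', 'mugshot_2903'] -- two spurious "hits"
print(find_matches(scores, threshold=99))  # [] -- same photo, no hits at a stricter cutoff
```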
Are you willing to trust your life and liberty to emerging algorithmic governance technologies such as Rekognition? The activities and motives of a police officer or bureaucrat can be scrutinized and understood by citizens. But decisions made by ever-more-complex algorithms trained on vast data sets likely will become increasingly opaque and thus insulated from public oversight. Even if the outcomes seem fair and beneficial, will people really accept important decisions about their lives being made this way—and, as important, should they?
Enter the Witness
In Nick Harkaway's gnarly near-future science fiction novel Gnomon, Britain is protected by "the perfect police force"—in a pervasive yet apparently benign total surveillance state—called the Witness. "Over five hundred million cameras, microphones and other sensors taking information from everywhere, not one instant of it accessed initially by any human being," explains the narrator. "Instead, the impartial, self-teaching algorithms of the Witness review and classify [the inputs] and do nothing unless public safety requires it.…It sees, it understands, and very occasionally it acts, but otherwise it is resolutely invisible."
When it comes to crime, the Witness identifies incipient telltale signs of future illegal behavior and then intervenes to prevent it. The system "does not take refuge behind the lace curtain of noninterference in personal business.…Everyone is equally seen." The result is that it delivers "security of the self to citizens at a level unprecedented in history," and "all citizens understand its worth."
The Witness is a fictional example of what National University of Ireland Galway law lecturer John Danaher calls algocracy—algorithmic governance that uses data mining and predictive/descriptive analytics to constrain and control human behavior. (Broadly speaking, an algorithm is a step-by-step procedure for solving a problem or accomplishing a goal. A mundane example is a recipe for baking a cake.)
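To make that definition concrete, here is a toy algorithm in exactly the article's sense, a fixed step-by-step procedure; the eligibility rules and figures are invented purely for illustration.

```python
# A toy "algorithm" in the article's sense: a fixed, step-by-step procedure.
# The eligibility rules and cutoff below are invented for illustration only.

def benefit_eligible(age, income):
    if age < 18:          # step 1: check age
        return False
    if income > 30000:    # step 2: check income against a hypothetical cutoff
        return False
    return True           # step 3: everyone remaining qualifies

print(benefit_eligible(age=45, income=22000))  # True
```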
The exponential growth of digital sensors, computational devices, and communication technology is flooding the world with data. To make sense of all this new information, Danaher observes, humans are turning to the impressive capabilities of machine-learning algorithms to facilitate data-driven decision making. "The potential here is vast," he writes. "Algorithmic governance systems could, according to some researchers, be faster, more efficient and less biased than traditional human-led decision-making systems."
Danaher analogizes algocracy to epistocracy—that is, rule by the wise. And epistocracy is not too dissimilar from the early 20th-century Progressive idea that corruptly partisan democratic governance should be "rationalized," or controlled by efficient bureaucracies staffed with objective and knowledgeable experts.
If rule by experts is good, wouldn't rule by impartial, infallible computers be better? "Bureaucracies are in effect algorithms created by technocrats that systematize governance," argues James Hughes, executive director of the Institute for Ethics and Emerging Technologies. "Their automation simply removes bureaucrats and paper."
Of course, what makes the Witness potent is that when its ever-watchful algorithms spot untoward behavior, they can direct human inspectors to intervene. But the narrator in Gnomon assures us that all citizens understand and accept the Witness' omnibenevolent surveillance and guidance.
The Power of 163 Zettabytes of Data
It's not too early to ask how close we are to living in a hypernudging algocratic surveillance regime. The construction of infrastructure to support something like the Witness is certainly proceeding apace: Nearly 6 million closed-circuit TV (CCTV) cameras keep an eye on public spaces in the United Kingdom. By one estimate, the average Londoner is caught on camera 300 times per day. Data on the number of public and private surveillance cameras deployed in the U.S. is spotty, but the best estimate, from the global business intelligence consultancy IHS Markit, is that there were at least 62 million of them in 2016. Police authorities monitor roughly 20,000 CCTVs in Manhattan, while Chicago boasts a network of 32,000 such devices.
As intrusive as it is, video surveillance today is mostly passive, since there are simply not enough watchers to keep track of the massive amounts of video generated by camera feeds in real time. Video from the CCTVs is generally more useful for solving crimes after the fact than for preventing them. However, a 2016 Stanford University study on artificial intelligence predicts that increasingly accurate algorithmic processing of video from increasingly pervasive CCTV networks will, by 2030, be able to efficiently detect anomalies as they happen in streets, ports, airports, coastal areas, waterways, and industrial facilities.
Nascent facial recognition technologies such as Amazon's Rekognition, IBM's Watson Visual Recognition, and Microsoft's Azure Face API in the U.S., along with offerings from Megvii and SenseTime in China, are still quite clunky, but they're improving rapidly. In August, the Department of Homeland Security began rolling out its Traveler Verification Service (TVS). To verify that passengers are who they claim to be, the system matches photographs taken by Customs and Border Protection (CBP) cameras at airports with a cloud-based database of photos previously captured by the CBP during entry inspections, photos from previous encounters with Homeland Security, and photos from the Department of State, including U.S. passport and U.S. visa photos. The agency claims that TVS inspection cuts the wait in security checkpoint lines in half.
As pervasive as they will become, CCTV cameras will actually constitute just a minor segment of the surveillance infrastructure, much of which we are assembling voluntarily. The International Data Corporation projects that an average connected person anywhere in the world in 2025 will interact with connected digital devices nearly 4,800 times per day, or one interaction every 18 seconds. These devices will be part of the omnipresent "internet of everything," which will include security cameras, smartphones, wearable devices such as Fitbits, radio-frequency identification (RFID) readers, automated buildings, machine tools, vending machines, payment systems, self-driving vehicles, digital signage, medical implants, and even toys. As a result, humanity in 2025 will be generating 163 zettabytes (163 trillion gigabytes) of data annually, a tenfold increase from the amount released into the global datasphere in 2016.
Machine learning provides computational systems with the ability to improve from experience without being explicitly programmed. The technique automatically parses pertinent databases, enabling algorithms to correct for previous errors and make more relevant choices over time. The rising flood of data will train these always-improving algorithms to more quickly detect subtle behavioral anomalies, including actions and activities that government actors view as undesirable. Based on the results, authorities might seek to intervene before an individual causes harm to himself (through unhealthy eating habits, for example) or others (by committing fraud or even violence).
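A minimal sketch of the idea, assuming nothing fancier than a statistical rule of thumb (flag anything far from the historical average), stands in for the vastly richer models the paragraph describes; all the numbers are invented.

```python
# Minimal anomaly-detection sketch: learn what "normal" looks like from past
# data, then flag new observations that fall far outside it. Real systems use
# far richer models; the numbers here are invented.

from statistics import mean, stdev

history = [12, 14, 11, 13, 15, 12, 14, 13, 12, 14]  # e.g., daily logged events for one person

def is_anomalous(value, history, z_cutoff=3.0):
    """Flag value if it sits more than z_cutoff standard deviations from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(value - mu) > z_cutoff * sigma

print(is_anomalous(13, history))   # False: looks like every other day
print(is_anomalous(41, history))   # True: flagged for a human (or another algorithm) to act on
```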
With Liberty and Justice for All?
In his 2017 essay "From Remote-Controlled to Self-Controlled Citizens," Swiss Federal Institute of Technology in Zurich computational sociologist Dirk Helbing highlights how wielding such algorithmic power can impair individual liberty. He acknowledges that the torrents of cheap and abundant data generated by omnipresent digital appliances and sensors need to be filtered in order to be usable.
"Those who build these filters will determine what we see," Helbing warns. "This creates possibilities to influence people's decisions such that they become remotely controlled rather than make their decisions on their own."
This effort need not be explicitly coercive. Authorities might use personal data to "nudge" people's decisions so that they will behave in ways deemed more beneficial and appropriate. Already, Helbing says, "our behavior is increasingly steered by technology."
Commercial enterprises from Amazon and Alibaba to Apple and Microsoft to Twitter and Facebook now exercise this sort of nudging power over their users. Algorithms rank web pages to serve up more relevant search results on Google, make book recommendations at Amazon, offer compatible relationship possibilities at Match.com, operate driver assistance systems that brake our cars when sensors detect obstacles ahead, alert people when an untagged photo of them is posted on Facebook, generate playlists at Spotify, and advise us on how to avoid traffic tie-ups.
Although these algorithms do sometimes lead to bum dates and boring Netflix picks, for the most part we're willing participants in what Harvard Business School social psychologist Shoshana Zuboff decries as "surveillance capitalism." We supply our personal information to private companies that then use their data-parsing technologies to offer tailored suggestions for services and products that they hope will fulfill our needs and desires. We do this because most of us find that the algorithms developed and deployed by commercial enterprises are much more likely to be helpful than harmful.
As a consequence, we get locked into the "personalized" information and choices that are filtered through the algorithms behind our social media feeds and commercial recommendation engines.
Tools like the Gobo social media aggregator developed by MIT's Media Lab enable users to circumvent the algorithmic filters that their clicks on Facebook and Google build around them. By moving sliders like the one for politics from "my perspective" to "lots of perspectives," for example, a user can introduce news stories from sources he or she might not otherwise find.
But things can turn nasty when law enforcement and national security agencies demand and obtain access to the vast stores of personal data—including our online searches, our media viewing habits, our product purchases, our social media activities, our contact lists, and even our travel times and routes—amassed about each of us by commercial enterprises. Algorithms are already widely deployed in many agencies: The Social Security Administration uses them to calculate benefits and the Internal Revenue Service uses them to identify tax evaders. Police forces employ them to predict crime hot spots, while courts apply them to help make judgment calls about how arrestees and prisoners should be handled.
Scraping both commercial and government databases enables data-mining firms like Palantir Technologies to create "spidergrams" on people that show their connections to friends, neighbors, lovers, and business associates, plus their organizational memberships, travel choices, debt situations, records of court appearances, and more. The Los Angeles Police Department (LAPD) is using Palantir data parsing to track suspected gang members, Bloomberg Businessweek reported this year. Information from rap sheets, parole reports, police interview cards, automatic license plate readers, and other sources is used to generate a point score for each person of interest to police, and the scores are then incorporated into chronic offender bulletins.
Like the fictional Witness, the LAPD is not pretending to a policy of noninterference in individuals' personal business. Beat cops use the bulletins to identify precrime suspects and then use minor violations, such as jaywalking or fix-it tickets for automobile mechanical faults, as a pretext to stop and question them. Data from these contacts are then entered back into Palantir's software.
The result, critics allege, is that this algorithmic surveillance process creates a self-justifying feedback loop. "An individual having a high point value is predictive of future police contact, and that police contact further increases the individual's point value," writes University of Texas sociologist Sarah Brayne in a 2017 study of LAPD criminal surveillance practices. Once an individual is trapped inside an LAPD spidergram, the only way to escape the web of extra police scrutiny is to leave the city. The ideal of algorithmic objectivity ascribed to software like Palantir's actually "hides both intentional and unintentional bias in policing and creates a self-perpetuating cycle."
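Brayne's feedback loop is easy to make concrete with a toy simulation: if a higher score makes a stop more likely and every stop adds a point, the score becomes self-reinforcing. The probabilities and point values below are invented for illustration and are not the LAPD's actual formula.

```python
# Toy simulation of the feedback loop Brayne describes: a higher point score
# makes a police stop more likely, and every stop adds points. All numbers
# are invented; this is not the LAPD's actual scoring scheme.

import random

random.seed(1)

def simulate(initial_points, days=365):
    points = initial_points
    for _ in range(days):
        stop_probability = min(0.002 * points, 0.5)  # more points -> more likely to be stopped
        if random.random() < stop_probability:
            points += 1                               # each contact feeds back into the score
    return points

print(simulate(initial_points=0))    # stays at zero: no points, no stops
print(simulate(initial_points=20))   # climbs steadily, because stops generate more stops
```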
Surveillance Communism
As discomfiting as surveillance capitalism can be—especially when turned to the benefit of a meddling state—the system of surveillance communism now being deployed by the People's Republic of China is far scarier.
Since 2014, the Chinese Communist Party has been developing and implementing something called a Social Credit System. The goal is to build "a harmonious socialist society" by establishing a "sincerity culture," which in turn is accomplished by using "encouragement to keep trust and constraints against breaking trust as incentive mechanisms."
At the core of this system of gamified obedience is a rating scheme in which the sincerity and trustworthiness of every citizen, as defined by the Chinese government, will be distilled into a three-digit social credit score. Like Harkaway's fictional Witness, artificial intelligence operating invisibly in the background will continuously sift through the enormous amounts of data generated by each individual, constantly updating every person's score. The system is slated to become fully operational for all 1.4 billion Chinese citizens by 2020.
The Social Credit System is a public-private partnership in which the government has contracted out much of the social and commercial surveillance to Chinese tech giants such as Alibaba, Tencent, and Baidu. "The information is gathered from many different sources of data like banks and financial institutions, stores, public transportation systems, Internet platforms, social media, and e-mail accounts," explain Copenhagen University philosophers Vincent Hendricks and Mads Vestergaard in their forthcoming book, Reality Lost (Springer). Negative factors include things like excessive time and money spent on video games; positive ones include a propensity to donate blood. These commercial data points are combined with government records detailing an individual's adherence to family-planning directives, court orders, timely tax payments, academic honesty, and even traffic violations.
Keeping company with individuals with low social credit scores will negatively affect one's own number. Citizens wanting a better score will therefore have an incentive to either pressure their low-life friends into complying more fully with the norms of the Social Credit System or to ditch those friends entirely.
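No official formula for the score has been published, but the logic the preceding paragraphs describe can be sketched as a weighted sum of behavioral factors plus a term tied to the scores of one's associates. Every factor, weight, and number below is hypothetical.

```python
# Hypothetical sketch of a social-credit-style score: a weighted sum of
# behavioral factors plus a term tied to the scores of one's associates.
# No official formula is public; every weight and number here is invented.

def social_credit(base, gaming_hours, blood_donations, traffic_violations, friends_scores):
    score = base
    score -= 2 * gaming_hours          # "excessive" video gaming counts against you
    score += 15 * blood_donations      # civic-minded acts count for you
    score -= 30 * traffic_violations   # government records feed in alongside commercial data
    if friends_scores:                 # keeping company with low scorers drags yours down
        score += 0.1 * (sum(friends_scores) / len(friends_scores) - 600)
    return round(score)

print(social_credit(600, gaming_hours=40, blood_donations=2,
                    traffic_violations=1, friends_scores=[480, 520]))  # 510
```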
There are currently some 170 million surveillance cameras up and running in China, many of them paired with facial recognition software. By 2020, the count is expected to reach 570 million. "Digitalization and the Internet have enabled such massive data collection that surveillance may be almost total with no angles out of sight or blind spots," write Hendricks and Vestergaard. Like the Witness in Gnomon, the Social Credit System promises that "everyone is equally seen."
Earlier totalitarian regimes used fear, terror, and violence as means of social control. The Chinese state still has recourse to such measures, of course, but the new system is designed to operate chiefly by instilling self-discipline through incentives. Citizens who earn low scores by "breaking trust" will encounter limits on their internet usage and will be barred from restaurants, nightclubs, and luxury hotels. More consequentially, low scorers face restrictions on their access to housing, insurance, loans, social security benefits, and good schools; bans on air, high-speed rail, and foreign travel; and exclusion from jobs as civil servants, journalists, and lawyers.
Conversely, folks with high social credit scores have access to all of those benefits and more, including such mundane conveniences as no-deposit apartment and bicycle rentals. The goal is to "allow the trustworthy to roam everywhere under heaven, while making it hard for the discredited to take a single step," according to a statement from Beijing.
In some jurisdictions, electronic billboards publicly shame those with low social credit while lionizing high scorers. By making each citizen's score publicly accessible, the government hopes to forge "a public opinion environment where keeping trust is glorious."
In George Orwell's novel Nineteen Eighty-Four, the Thought Police are tasked with discovering and punishing thoughtcrime—that is, personal and political thoughts that have been proscribed by the Party. By motivating citizens to constantly monitor their own behavior and that of their associates with respect to the behavior's impact on their ratings, the Social Credit System decentralizes this enforcement. As each person calculates how to boost his or her score, adherence to the rules, regulations, and norms decreed by the authorities becomes an unconscious habit.
As Zhao Ruying, the bureaucrat in charge of implementing the system in Shanghai, has said, "We may reach the point where no one would even dare to think of breaching trust, a point where no one would even consider hurting the community. If we reached this point, our work would be done." The Chinese Communist Party intends for the thought police to reside in the head of each citizen.
Will the Social Credit System actually result in a high-trust society? Helbing, the computational sociologist, argues that supposedly omnibenevolent algorithmic control systems such as China's may look successful in the short run, but they will end up producing chaos rather than harmony.
Centralized systems are peculiarly vulnerable to hacking, corruption, and error, Helbing says. In Gnomon, it is revealed that a secret epistocracy has compromised the purportedly objective algorithmic decisions of the Witness, justifying the interference by saying it is needed in order to maintain order. Likewise, if the Central Committee of the Chinese Communist Party is seen as exercising excessively heavy-handed control over the Social Credit System, citizens may ultimately reject its legitimacy.
Aside from elite misbehavior, Helbing identifies three major flaws in this type of top-down governance. First, algorithmic micromanagement can destroy the basis of social stability and order in societies by undermining traditional institutions of self-organization, such as families, companies, churches, and nonprofit associations. Second, society-scale algorithmic nudging could narrow the scope of information and experience available to people, thus undercutting the "wisdom of the crowd" effect and herding people into making worse decisions, producing problems such as stock market bubbles or malignant nationalism. And third, he argues that imposing algorithmic choice architectures on society will dangerously reduce the economic and social diversity that is key to producing high innovation rates, marshaling collective intelligence, and sustaining social resilience to disruptions.
Helbing's diagnosis of the problems associated with algorithmic top-down control is persuasive. But his pet solution—Nervousnet, an open, participatory, bottom-up, distributed information platform for real-time data sensing and analysis devised by his colleagues at the Swiss Federal Institute of Technology—is much less compelling. Among other things, this alternative requires billions of concerned people across the globe to spontaneously yet self-consciously adopt the system. Good luck with that.
Tech vs. Tech
It may seem inevitable that, given advancing technology, society will eventually adopt some version of troubling algorithmically driven surveillance capitalism or communism. But perhaps that assumption is overly pessimistic.
The Institute for Ethics and Emerging Technologies' Hughes notes that any technology can be used for good or ill. "The application of algorithmic governance to Orwellian social control and the undermining of democratic processes is not an indictment of algorithms or the Internet," he wrote in a 2018 article for the Journal of Posthuman Studies, "but rather of authoritarian governments and the weaknesses of our current democratic technologies."
Moreover, Hughes believes technology can help prevent authoritarian abuses of algorithmic governance. He commends the salutary effects of new electronic tools for monitoring the activities of politicians and corporate cronies; one example is a suite of open-source artificial intelligence tools developed by the Iceland-based Citizens Foundation to enable policy crowdsourcing, participatory budgeting, and information curation to avoid filter bubbles. He also points to the existence of all sorts of new automated platforms for organizing and influencing policy decisions, including change.org, avaaz.org, Nationbuilder, and iCitizen.
At his most visionary, Hughes suggests that one day soon, citizens whose brains are connected to machines offering expanded information processing and memory capabilities will be able to counter attempts to impose algorithmic tyranny. These "enhanced humans" would become conscious and objective participants in the datasphere, able to follow and endorse the logic of government algorithms for themselves.
In a recent white paper, Center for Data Innovation analysts Joshua New and Daniel Castro similarly warn against an anti-algocracy panic. "In most cases," they point out, "flawed algorithms hurt the organizations using them. Therefore, organizations have strong incentives to not use biased or otherwise flawed algorithmic decision-making and regulators are unlikely to need to intervene."
Market participants have every reason to want to get things right. If a financial institution makes bad lending decisions, for instance, the bankers lose money. Consequently, New and Castro reject policies such as mandating algorithmic explainability (i.e., forbidding the deployment of machine-learning systems until they can explain their rationales for the decisions they make) or the establishment of a centralized algorithm regulatory agency.
The authors do acknowledge that in cases where the cost of the error falls primarily on the subject of the algorithmic decision, these salubrious incentives may not exist. Government bureaucracies such as police and courts—which are not disciplined by the profit motive—largely occupy this category. To encourage bureaucratic accountability, New and Castro recommend that any algorithms involved in judicial decision making be open-source and subject to ongoing assessments of their social and economic consequences.
In Gnomon, the shadowy epistocracy is eventually overthrown when the Desperation Protocol, a long-hidden worm, shuts down the Witness. Pervasive algorithmic governance is gone, but so too are its many benefits. In the real world, the algocratic tradeoffs we will face are not likely to be so stark.
This article originally appeared in print under the headline "Can Algorithms Run Things Better Than Humans?"