Artificial Intelligence

Welcoming Our New Algorithmic Overlords?

Algocracy and its effect on government decision-making


Algorithms are everywhere. You can't see them, but these procedures or formulas for solving problems help computers sift through enormous databases to reveal compatible lovers, products that please, faster commutes, news of interest, stocks to buy, and answers to queries.

Dud dates or boring book recommendations are no big deal. But John Danaher, a lecturer in the law school at the National University of Ireland, warns that algorithms take on a very different profile when they're employed to guide government behavior. He worries that encroaching algorithmic governance, or what he calls algocracy, could "create problems for the moral or political legitimacy of our public decision making processes."

And employ them government agencies do. The Social Security Administration uses algorithms to aid its agents in evaluating benefits claims; the Internal Revenue Service uses them to select taxpayers for audit; the Food and Drug Administration uses them to study patterns of foodborne illness; the Securities and Exchange Commission uses them to detect trading misconduct; and local police departments employ their insights to predict the emergence of crime hotspots.

Conventional algorithms are rule-based systems constructed by programmers to make automated decisions. Because each rule is explicit, it is possible to understand how and why the algorithm produces its outputs, although the continual addition of rules and exceptions over time can make it difficult in practice to keep track of what the system is doing.
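To make the distinction concrete, here is a minimal sketch, in Python, of what a conventional rule-based system looks like. The claim fields and thresholds are hypothetical, not drawn from any agency's actual criteria; the point is simply that every rule is written out by a programmer, so the reasoning behind any particular decision can be traced.

```python
# A minimal, hypothetical rule-based benefits check. Every rule is explicit,
# so each decision can be traced back to the line of code that produced it.
def evaluate_benefits_claim(claim: dict) -> str:
    """Return a decision for a claim using explicit, inspectable rules."""
    if claim["age"] >= 65:
        return "approve"            # Rule 1: retirement-age applicants qualify
    if claim["disability_rating"] >= 0.5:
        return "approve"            # Rule 2: a significant disability rating qualifies
    if claim["annual_income"] > 40_000:
        return "deny"               # Rule 3: income above the cutoff disqualifies
    return "refer_to_agent"         # Anything else goes to a human reviewer

print(evaluate_benefits_claim(
    {"age": 40, "disability_rating": 0.2, "annual_income": 25_000}))
# -> refer_to_agent
```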

By contrast, so-called machine-learning algorithms (which are increasingly being deployed to handle the growing flood and complexity of data that needs crunching) are a type of artificial intelligence that gives computers the ability to discover rules for themselves, without being explicitly programmed. These algorithms are typically trained to organize and extract information after being exposed to relevant data sets. It's often hard to discern exactly how such an algorithm devises the rules it uses to make predictions.
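Here is an equally minimal sketch of the machine-learning approach: instead of hand-written rules, a model infers its own decision boundary from labeled examples. The data below is synthetic and the two features are purely illustrative; real systems ingest far messier inputs.

```python
# A toy machine-learning counterpart: the model discovers a rule from data
# rather than having one written for it. Data and features are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                  # 200 past cases, 2 numeric features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # the hidden pattern to be discovered

model = LogisticRegression().fit(X, y)
print(model.coef_, model.intercept_)   # weights the model learned, not rules a person wrote
print(model.predict([[1.0, -0.2]]))    # prediction for a new, unseen case
```

In a two-feature toy like this, the learned weights are still easy to read; with thousands of features or deep neural networks, that transparency disappears, which is exactly the interpretability problem described above.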

While machine learning is highly efficient at digesting data, the answers it supplies can be skewed. In a recent New York Times op-ed titled "Artificial Intelligence's White Guy Problem," Kate Crawford, a researcher at Microsoft who serves as co-chairwoman of the White House Symposium on Society and Artificial Intelligence, cited several instances of these algorithms getting something badly wrong. In 2015 Google Photos' image-recognition feature tagged snapshots of two black people as gorillas, for example, and in 2010 Nikon's camera software made headlines for misreading images of some Asian people as blinking. "This is fundamentally a data problem. Algorithms learn by being fed certain images," Crawford noted. "If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing nonwhite faces."
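Crawford's point about skewed training data is easy to demonstrate. The toy example below trains a classifier on data that heavily over-represents one group and then measures accuracy separately for each group; the groups, features, and proportions are all synthetic, invented only to show the mechanism.

```python
# Illustration of how imbalanced training data produces imbalanced error rates.
# Groups, features, and proportions are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, center):
    X = rng.normal(loc=center, scale=1.0, size=(n, 2))
    y = (X[:, 0] > center[0]).astype(int)   # each group follows its own pattern
    return X, y

# Training set: 950 examples from group A, only 50 from group B
Xa, ya = make_group(950, np.array([0.0, 0.0]))
Xb, yb = make_group(50, np.array([3.0, 3.0]))
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Test on fresh, equally sized samples from each group
for name, center in [("group A", [0.0, 0.0]), ("group B", [3.0, 3.0])]:
    Xt, yt = make_group(1000, np.array(center))
    print(name, "accuracy:", round(model.score(Xt, yt), 2))
# The under-represented group ends up with markedly worse accuracy.
```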

But algorithmic misfires can have much more dire consequences when they're used to guide government decisions. It's easy to imagine the civil liberties implications that could arise from, say, using such imperfect algorithms to try to identify would-be terrorists before they act.

Crawford cites the results of a May investigation by ProPublica into how the COMPAS recidivism risk assessment system evaluates the likelihood that a criminal defendant will reoffend. Although judges often take COMPAS risk scores into consideration when making sentencing decisions, ProPublica found that the algorithms were "particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants." In addition, "white defendants were mislabeled as low risk more often than black defendants." (Northpointe, the company that developed COMPAS, has plausibly asserted that ProPublica mangled its technical analysis and says that when the correct statistics are used, "the data do not substantiate the ProPublica claim of racial bias towards blacks.")
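The statistical dispute turns on which error rate you examine. The snippet below shows the false-positive-rate comparison ProPublica emphasized, computed from invented counts; these numbers are purely illustrative and are not the actual COMPAS figures.

```python
# False positive rate: among defendants who did NOT reoffend, what share were
# flagged high-risk? The counts below are invented for illustration only.
def false_positive_rate(flagged_high_risk: int, did_not_reoffend: int) -> float:
    return flagged_high_risk / did_not_reoffend

groups = {
    # group: (non-reoffenders flagged high-risk, all non-reoffenders)
    "black defendants": (40, 100),
    "white defendants": (20, 100),
}
for group, (flagged, total) in groups.items():
    print(f"{group}: false positive rate = {false_positive_rate(flagged, total):.0%}")
```

Northpointe's defense rests on a different measure, whether a given risk score corresponds to roughly the same reoffense rate for both groups, and when underlying reoffense rates differ, the two fairness standards generally cannot both be satisfied at once.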

In Wisconsin, a man named Eric Loomis has filed a legal challenge to his sentencing judge's reliance on COMPAS. Loomis was deemed "high-risk" based on his score and was consequently sentenced to six years in prison for eluding police; his attorney argues that he needs access to the proprietary algorithm in order to challenge its validity.

Lots of police departments also now use "predictive policing" software programs like PredPol. According to its maker, the PredPol algorithm uses only three data points, the type, place, and time of past crimes, to pinpoint the times and places where future crimes are most likely to occur. Crawford worries that predictive policing could become a self-fulfilling prophecy in poor and minority neighborhoods: more policing leads to more arrests, which in turn predict the need for more policing, and so on. "Predictive programs are only as good as the data they are trained on, and that data has a complex history," she warns.
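PredPol's actual model is proprietary, so the sketch below is only a naive stand-in: it ranks map grid cells by how many past incidents of a given type occurred there at a given hour. That is enough to illustrate the three inputs the company describes and why the predictions simply mirror historical records.

```python
# A naive hotspot ranker: count past incidents by (grid cell, hour) and treat
# the busiest combinations as the likeliest sites of future crime.
# Incident records are synthetic; PredPol's real model is more sophisticated.
from collections import Counter

# (crime_type, grid_cell, hour_of_day)
incidents = [
    ("burglary", (4, 7), 22), ("burglary", (4, 7), 23),
    ("burglary", (2, 3), 14), ("assault",  (4, 7), 23),
    ("burglary", (4, 7), 21), ("burglary", (9, 1), 2),
]

def hotspots(records, crime_type, top_n=3):
    counts = Counter((cell, hour) for kind, cell, hour in records if kind == crime_type)
    return counts.most_common(top_n)

print(hotspots(incidents, "burglary"))
# Cells with the most recorded burglaries rank highest, so wherever police have
# looked hardest in the past is where they will be sent again.
```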

Cities plagued by violence are turning to these programs in an effort to identify not just the places but the citizens who are most likely to commit or be the victims of crime. For example, Chicago police have been applying an algorithm created by researchers at the Illinois Institute of Technology that uses 10 variables, including whether an individual's criminal "trend line" is increasing, whether he's been shot before, and whether he's ever been arrested on weapons charges, to make predictions. To avoid bias, the program excludes consideration of race, gender, ethnicity, and geography.

The algorithm has identified some 1,400 residents who are highly likely to shoot or be shot. As The New York Times reported in May, police warn those highest on the list that they're being closely monitored and offer social services to anyone who wants to get away from the violence. And the output is fairly accurate: 70 percent of Chicago residents shot in 2016, and 80 percent of those arrested in connection with shootings, had been designated as high-risk. But since the algorithm is proprietary, there's no way to challenge your status on the list.
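The Illinois Institute of Technology model itself has not been published, but a ranked risk score of the kind described above can be sketched in a few lines; the variables, weights, and individuals here are entirely hypothetical.

```python
# A hypothetical "heat list" score combining a few of the kinds of variables
# described above. Race, gender, ethnicity, and geography are deliberately absent.
def risk_score(person: dict) -> float:
    score = 0.0
    if person["trend_line_increasing"]:
        score += 2.0   # criminal activity trending upward
    if person["previously_shot"]:
        score += 3.0   # prior shooting victimization
    if person["weapons_arrest"]:
        score += 1.5   # past arrest on weapons charges
    return score

people = [
    {"id": 101, "trend_line_increasing": True,  "previously_shot": True,  "weapons_arrest": False},
    {"id": 102, "trend_line_increasing": False, "previously_shot": False, "weapons_arrest": True},
    {"id": 103, "trend_line_increasing": True,  "previously_shot": True,  "weapons_arrest": True},
]
for person in sorted(people, key=risk_score, reverse=True):
    print(person["id"], risk_score(person))   # highest scores sit atop the "heat list"
```

Even a transparent toy like this shows what a listed resident would want to inspect: which variables were used and how heavily each was weighted, which is exactly the information a proprietary system withholds.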

Can the algocracy be tamed? Danaher doesn't think so. Resisting won't work, he says, because the convenience of algorithmic decision-making processes will, for most people, outweigh their downsides. Trying to impose meaningful human oversight and control will also founder. Why? Because such systems will become increasingly opaque to their human creators and operators. As machine-learning algorithms trained on larger and larger datasets generate ever more complex rules, they become less interpretable by human beings. Users will see and experience the outputs without understanding how they got there.

Humans may still be in the loop in the sense that they can always reject or modify the machines' determinations before they're implemented. But people may be reluctant to interfere with algorithmic assessments due to a lack of confidence in their own understanding of the issues involved. How likely is a judge to let a criminal defendant with a high-risk recidivism score go free?

"We may be on the cusp of creating a governance system which severely constrains and limits the opportunities for human engagement, without any readily available solution," Danaher concludes. Even if algocracy achieves significant gains and benefits, he warns, "we need to be sure we can live with the tradeoff."