Computer modeling

Welcoming Our New Algorithmic Overlords?

Algocracy and the moral and political legitimacy of government decision-making


Algorithms
Vichaya Kiatying-angsulee/Dreamstime.com

Algorithms are everywhere, and in most ways they make our lives better. In the simplest terms, algorithms are procedures or formulas aimed at solving problems. Implemented on computers, they sift through big databases to reveal compatible lovers, products that please, faster commutes, news of interest, stocks to buy, and answers to queries.

Dud dates or boring book recommendations are no big deal. But John Danaher, a lecturer in the law school at the National University of Ireland, warns that algorithmic decision-making takes on a very different character when it guides government monitoring and enforcement efforts. Danaher worries that encroaching algorithmic governance, what he calls "algocracy," could "create problems for the moral or political legitimacy of our public decision-making processes."

Given algorithms' successes in the private sector, it is not surprising that government agencies are also implementing algorithmic strategies. The Social Security Administration uses algorithms to aid its agents in evaluating benefits claims; the Internal Revenue Service uses them to select taxpayers for audit; the Food and Drug Administration uses them to study patterns of foodborne illness; the Securities and Exchange Commission uses them to detect trading misconduct; and local police departments employ algorithmic insights to predict both the emergence of crime hotspots and which persons are more likely to be involved in criminal activities.

Most commonly, algorithms are rule-based systems constructed by programmers to make automated decisions. Because each rule is explicit, it is possible to understand how and why the algorithm produces its outputs, although the continual addition of rules and exceptions over time can make keeping track of what the system is doing ever more difficult.
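A minimal, hypothetical sketch makes the point concrete (the function, rules, and thresholds below are invented for illustration, not drawn from any real agency's system). Because every rule is written out, each output can be traced back to the rule that produced it:

```python
# Minimal, hypothetical sketch of a rule-based screening algorithm.
# Every rule is explicit, so each decision can be traced to a rule.
def screen_claim(age, years_worked, disability_rating):
    """Return a decision plus the rule that produced it."""
    if age >= 67:
        return ("approve", "rule 1: at or above retirement age")
    if disability_rating >= 50 and years_worked >= 10:
        return ("approve", "rule 2: qualifying disability with work history")
    if years_worked < 10:
        return ("deny", "rule 3: insufficient work history")
    return ("refer", "default: no rule matched, send to a human agent")

decision, reason = screen_claim(age=45, years_worked=12, disability_rating=60)
print(decision, "-", reason)
```

The downside the paragraph describes shows up quickly in practice: as exceptions accumulate, the chain of `if` statements grows, and tracing which rule fired for which case gets harder.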

Alternatively, various machine-learning algorithms are being deployed as increasingly effective techniques for dealing with the growing flood and complexity of data. Broadly speaking, machine learning is a type of artificial intelligence that gives computers the ability to learn without being explicitly programmed. Such learning algorithms are generally trained to organize and extract information by being exposed to relevant data sets. It is often hard to discern exactly how the algorithm devises the rules from which it makes predictions.
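The contrast with the rule-based approach can be shown in a toy sketch (all data and names here are invented): instead of a programmer writing the decision rule, the program derives a cutoff from labeled examples.

```python
# Toy illustration of "learning without being explicitly programmed":
# rather than hand-writing a rule, we fit a threshold from labeled data.
def fit_threshold(xs, labels):
    """Pick the cutoff that best separates the two classes on training data."""
    best_t, best_acc = None, -1.0
    for t in sorted(xs):
        # Accuracy of the rule "predict 1 when x >= t" on the training set.
        acc = sum((x >= t) == bool(y) for x, y in zip(xs, labels)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Training data: feature values and 0/1 labels forming two clusters.
xs = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
labels = [0, 0, 0, 1, 1, 1]
print(fit_threshold(xs, labels))  # the learned cutoff: 10.0
```

Even in this trivial case the "rule" is an artifact of the training data rather than anyone's stated policy; with thousands of features instead of one, the opacity the paragraph describes follows.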

While machine learning offers great efficiencies in digesting data, the answers supplied by learning algorithms can be badly skewed. In a recent New York Times op-ed, titled "Artificial Intelligence's White Guy Problem," Kate Crawford, a researcher at Microsoft who serves as co-chairwoman of the White House Symposium on Society and Artificial Intelligence, cites several such instances. For example, in 2015 Google Photos' facial recognition app tagged snapshots of a couple of black guys as "gorillas." Back in 2010, Nikon's camera software misread images of Asian people as blinking.

"This is fundamentally a data problem. Algorithms learn by being fed certain images," notes Crawford. "If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing nonwhite faces." As embarrassing as the photo recognition problems were for Google and Nikon, algorithmic misfires can have much direr consequences when used to guide government decision making. It does not take too much imagination to worry about the civil liberties implications of the development of algorithms that purport to identify would-be terrorists before they can act.

In her op-ed, Crawford cites the results of a recent investigation by ProPublica into how the COMPAS recidivism risk assessment system evaluates the likelihood that a criminal defendant will re-offend. Judges often take into consideration COMPAS risk scores when making sentencing decisions. Crawford notes that the software is "twice as likely to mistakenly flag black defendants as being at a higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk."
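To make the quoted statistic concrete: the disparity ProPublica measured is in error rates computed separately for each group. A sketch with illustrative numbers (invented here, not the actual COMPAS data):

```python
# Illustrative, invented confusion-matrix counts; not the actual COMPAS data.
def false_positive_rate(wrongly_flagged, correctly_cleared):
    """Among people who did NOT reoffend, the share flagged high risk."""
    return wrongly_flagged / (wrongly_flagged + correctly_cleared)

# Hypothetical per-group counts among defendants who did not reoffend:
black_fpr = false_positive_rate(wrongly_flagged=45, correctly_cleared=55)
white_fpr = false_positive_rate(wrongly_flagged=23, correctly_cleared=77)
print(f"{black_fpr:.0%} vs {white_fpr:.0%}")  # roughly the two-to-one gap described
```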

In Wisconsin, Eric Loomis has filed a legal challenge against his sentencing judge's reliance on his COMPAS score. Based on his risk score, Loomis was told that he was "high risk" and was consequently sentenced to six years in prison for eluding police. His attorney is arguing that he should be able to get access to the proprietary algorithm and make arguments with regard to its validity.

Lots of police departments are now using predictive policing programs such as PredPol. According to the company, PredPol's algorithm uses only three data points—past type, place, and time of crime—to pinpoint the times and places where a day's crimes are most likely to occur. Crawford worries that predictive policing could become a self-fulfilling prophecy in poor and minority neighborhoods, in the sense that more policing could lead to more arrests, which in turn predict the need for more policing, and so on. "Predictive programs are only as good as the data they are trained on, and that data has a complex history," she warns.
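A crude counting sketch (data and block names invented; PredPol's actual model is proprietary) shows both how little input such a system needs and how the feedback loop Crawford describes can arise: cells that receive more enforcement generate more recorded incidents, which raises their counts the next time the model runs.

```python
# Hypothetical sketch: count past incidents per (place, hour) cell and
# flag the busiest cells as the day's predicted hotspots.
from collections import Counter

past_crimes = [  # (crime type, place, hour) - invented records
    ("burglary", "block_12", 22), ("burglary", "block_12", 23),
    ("theft",    "block_12", 22), ("theft",    "block_03", 14),
    ("assault",  "block_07", 1),  ("burglary", "block_12", 22),
]

counts = Counter((place, hour) for _, place, hour in past_crimes)
hotspots = [cell for cell, n in counts.most_common(2)]
print(hotspots)  # the (place, hour) cells with the most recorded incidents

# Feedback risk: patrolling the flagged cells produces more *recorded*
# incidents there, inflating their counts in the next training pass.
```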

Cities plagued by violence are turning to predictive policing programs in an effort to identify in advance citizens who are likely to commit or be the victims of violent crimes. For example, Chicago police have been applying an algorithm created by researchers at the Illinois Institute of Technology that uses 10 variables, including whether an individual's criminal "trend line" is increasing, whether he has been shot before, and whether he has been arrested on weapons charges. In order to avoid racial bias, the program excludes consideration of race, gender, ethnicity, and geography.
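The IIT researchers' actual model and weights are not public; the general point-score approach can be sketched hypothetically (every variable weight below is invented):

```python
# Purely hypothetical point-score sketch of a "heat list" style risk model,
# using variables like those described; the real model's weights are not public.
def risk_score(trend_rising, shot_before, weapons_arrests):
    score = 0
    score += 30 if trend_rising else 0       # criminal "trend line" increasing
    score += 40 if shot_before else 0        # prior shooting victimization
    score += 15 * min(weapons_arrests, 2)    # weapons arrests, capped at 2
    return score

print(risk_score(trend_rising=True, shot_before=True, weapons_arrests=1))  # 85
```

Note that even a transparent scheme like this raises the due-process question in the next paragraph: without access to the variables and weights, a person on the list cannot contest how their score was computed.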

The program has identified some 1,400 Chicago residents who are highly likely to shoot or be shot. As The New York Times reported in May, police warn those highest on the list that they are being closely monitored and offer social services to those who want to get away from the violence. The algorithm's output is fairly accurate: 70 percent of the Chicago residents who were shot in 2016, and 80 percent of those arrested in connection with shootings, were on the list. Since the algorithm is proprietary, there is no way for people to challenge being on the list.

Can algocracy be tamed? Danaher argues that both resistance and accommodation are futile. Resistance will falter because the efficiencies and convenience of algorithmic decision-making processes will, for most people, outweigh their downsides. Trying to accommodate algorithmic decision-making to meaningful human oversight and control will also founder. Why? Because such systems will become increasingly opaque to their human creators. As machine-learning algorithms trained on larger and larger datasets generate ever more complex rules, they become less and less interpretable by human beings. Users will see and experience their outputs without understanding how such systems come to their conclusions.

Officials may still be in the decision-making loop in the sense that they could reject or modify the machines' determinations before they are implemented. But they may be reluctant to interfere with algorithmic assessments due to a lack of confidence in their own understanding of the issues involved. How likely is a judge to let a criminal defendant with a high-risk recidivism score go free?

"We may be on the cusp of creating a governance system which severely constrains and limits the opportunities for human engagement, without any readily available solution," Danaher concludes. Even if algocracy achieves significant gains and benefits, he warns, "we need to be sure we can live with the tradeoff."


Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Report abuses.

  1. “we need to be sure we can live with the tradeoff.”

    Great, another way for the government to fuck us. And I think we already know where the vast, vast majority of “The American People?” fall re: the “tradeoff”.

    “If it saves even ONE life….”

1. Oh, Almanian, you have *nothing* to fear if your current medical trials and tribulations were suddenly handed off to an emotionless, strictly-by-the-numbers, algorithmic IPAB calculating your current (and future) medical cost budget analysis.

      Nope, nothing at all. To say nothing of a few commenters here with less than stellar medical HX’s, by the by….

      Oh, and Ron, what makes you think *YOU* would be immune from an Algorithmic Flavoured Overlord (AFO) in charge of your medical care? I know you fancy yourself a candidate for immortality: What if the AFO disagrees and it’s Logan’s Bailey’s Run for you? And, no, you *DON’T* get Jenny Agutter either!


  2. So, the Algore-ocracy was the Beta release with its Algore-ithmic programme?

    1. +1 Manbearpig

    2. Excelsior!

  3. Hey Ron, you should look into the DAO, and how it got royally screwed recently. An interesting tale and an indicator of where I think technology is heading in the wide world of governance and business.

  4. Will government officials dare to contradict what their computers are telling them?

    Sounds like effort would be involved, so no.

    1. I’m sure bureaucrats love automated decision making because it shifts the blame of any decision off of themselves. Why do you think they like zero tolerance policies and procedures? “Rules are rules”, “My hands are tied”, “Just doing my job”

      1. What Lee Genes said…

      2. “I’m sorry, the computer says…”

        *is shot*

  5. But they may be reluctant to interfere with algorithmic assessments due to a lack of confidence in their own understanding of the issues involved.

    That’s bullshit. They’ll be reluctant to contradict the system because it involves risk for them. Bureaucrats always play by the letter of the regulation because they absolutely cannot get fired if they don’t make judgement calls outside of the system rules.

    They’ll love being able to put the blame for a bad/stupid/evil decision on a computer.

  6. Speaking of algorithms, what has Tim Egan got to say to us today?

    In barely two weeks, Republicans will converge in Cleveland for the Trumpocalypse, a fact-free and hate-filled gathering likely to be as scary as it will be entertaining.

    ——-

    The Republicans, on the verge of nominating a man with the temperament of a sociopath, someone who praises a murderous dictator while damning the American troops who fought him, should crack up and break apart.

For years, this party stood for something: lower taxes, a lighter government hand, personal responsibility, global engagement. After Cleveland, they will stand for nothing but the bombastic tyrant who lets the smallest thing get under his very thin, very orange skin. From Trump, you hear more praise of Saddam Hussein and Vladimir Putin than for Lincoln or Reagan. The Republican base created this monster; by the power of hubris, he should destroy their party.

    Beep boop, click clack.

    IF REPUBLIKKKIN THEN HATE

    1. Brooksie, c’mon. Click Clack? They don’t use relays and vacuum tubes anymore. Get serious.

    2. This tells me a lot more about Tim Egan than any of those Rethuglicans.

    3. TLPB: From the Timster: “So wish for a crackup in Cleveland…” Yes, I do. See my March 30 blogpost, “The Demise of the Republican Party, and Duverger’s Law.”

      FWIW, I’m voting Johnson/Weld in November.

    4. a man with the temperament of a sociopath, someone who praises a murderous dictator while damning the American troops who fought…

      But enough about Sean Penn…

7. Ronald Bailey is a science correspondent at Reason magazine and author of The End of Doom (July 2015), an algorithm bent on world domination.

    It’s a conspiracy.

  8. Speaking as one who is in the business (not of government) –

    Rules that guide algorithms are called policies; policy control is today’s hot button that will solve all our problems. This blog post describes some of the pitfalls. Here are a few more:

    A simple policy statement may imply undesirable side effects. Perhaps the best example is the father who asked the lamp-genie that his children might never again be hungry. “Your wish is my command,” said the genie, and lightning struck the children.

The tension between complex policies can be unresolvable. Policies for public works and for long-term financial stability come to mind.

    From this last example: how do our policies trade off short term (political election cycle timing) vs long term results (US bankruptcy)?

    Algorithms can oscillate. Think boom-bust cycles. Assuming the best of intentions from all concerned, how do we predict and avoid oscillation?

    As noted, however, we are increasingly going to be stuck with policy-driven algorithms. It is absolutely predictable that there will arise truly Kafka-esque scenarios. Now would be a good time to think about how we will deal with them.

    To those who want full disclosure of the execution of an algorithm as it has affected them, and again granting the best of intentions from all concerned, it is fair to predict that a trace through the algorithm will be the only possible response: the loops and interactions will turn out to be too complex for any human to explain.

      1. ok Ron, so his cogent, on point, well articulated comment gets a reply from you but my simple suggestion of looking at a similar situation in business organization gets ‘nuttin?…it’s FTL neutrinos all over again.

* runs away bawling

1. CB: With regard to FTL’s, I didn’t want to waste pixels on them because I figured the experimental results would likely be shown to be an error. So it proved to be.

          But dry your tears – I will be doing some reporting on blockchain technology and the DAO screw-up.

          1. awesome…but remember re: FTLs, when experiments fail is when you actually learn something new in science (in this case how to calibrate a damn clock).

    1. Yes, algorithms can do all those things. So can humans. In fact, every problem a system based on algorithms can have, a system based on humans can have as well, and then some, foremost: corruptibility, self-interest, errors in judgment, prejudice, irreproducibility, and lack of testing. Note that Kafka-esque scenarios are constantly produced by human based decision makers.

      By moving from human to algorithmic decision making, we could gain a lot of things: clarity about what policies actually are, transparency about how decisions are made, impartiality of the decision maker (though not automatically of the policies), consistency, and the ability to test, all highly desirable features utterly lacking from current government.

  9. Algocracy.

In other words, the p1ss-poor idea of Soviet Cybernetics, rehashed for the modern era’s historically illiterate idiots to consume.

    1. Soviet Cybernetics tried to automate the economy, something that humans are just as bad at running. In fact, the real problem with the Soviets, progressivism, and the article, is simply that most functions that are handled by government right now shouldn’t be handled by government in the first place. Arguing about whether humans or machines do them better is pointless.

  10. We already do deal with often faulty machine learning results in law enforcement. Just the machines in question are the brains of judges and police officers.

  11. In a recent New York Times op-ed, titled “Artificial Intelligence’s White Guy Problem,” Kate Crawford, a researcher at Microsoft who serves as co-chairwoman of the White House Symposium on Society and Artificial Intelligence, cites several such instances.

    Maybe when statistically driven software picks up on sex differences or racial differences or whatever, it’s not because the software is “racist” and “bigoted”, but because those differences actually exist?

    1. Kate Crawford wrote in that article:

      A very serious example was revealed in an investigation published last month by ProPublica. It found that widely used software that assessed the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at a higher risk of committing future crimes.

      In fact, recidivism rates are much higher among African Americans according to government statistics and analyses.

      This is a “very serious example” indeed, since Crawford’s faulty criticism means she places her prejudices and beliefs ahead of simple, verifiable facts when making arguments about science and policy.



Comments are closed.