What If the Algorithms Are Racist?
Much of the controversy over algorithmic decision making is concerned with fairness. Generally speaking, most of us regard decisions as fair when they're free from favoritism, self-interest, bias, or deception and when they conform to established standards or rules. However, it turns out that defining algorithmic fairness is not always simple.
That challenge garnered national headlines in 2016 when ProPublica published a study claiming racial bias in COMPAS, a recidivism risk assessment system used by some courts to evaluate the likelihood that a criminal defendant will reoffend. The journalism nonprofit reported that COMPAS was twice as likely to mistakenly flag black defendants as being at a high risk of committing future crimes (false positives) and twice as likely to incorrectly label white defendants as being at a low risk of the same (false negatives).
Because the system is sometimes used to determine whether an inmate is paroled, many black defendants who would not have been re-arrested remain in jail, while many white defendants who go on to be re-arrested are released. This is the very definition of disparate impact: discrimination in which a facially neutral practice has an unjustified adverse effect on members of a protected class. Under that standard, ProPublica declared the outcome unfair.
The COMPAS software's developers countered with data showing that black and white defendants with the same COMPAS scores had almost exactly the same recidivism propensities. For example, their algorithm correctly predicted that 60 percent of white defendants and 60 percent of black defendants with COMPAS scores of seven or higher on a 10-point scale would reoffend during the next two years (predictive parity). The developers argued that the COMPAS results were therefore fair because the scores mean the same thing regardless of whether a defendant is black or white. But because the recidivism base rates of black and white defendants differ, a system that achieves predictive parity will necessarily produce racially disparate rates of false positives and false negatives.
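The developers' base-rate point can be checked with simple arithmetic. As an illustrative sketch (the base rates, positive predictive value, and true positive rate below are made-up numbers, not COMPAS data), holding predictive parity fixed across two groups with different base rates mathematically forces their false positive rates apart:

```python
def fpr_under_predictive_parity(base_rate, ppv, tpr):
    """False positive rate implied by fixing PPV and TPR for a group.

    Follows from Bayes' rule (Chouldechova's identity):
        FPR = (p / (1 - p)) * ((1 - PPV) / PPV) * TPR
    where p is the group's base rate of reoffending.
    """
    return (base_rate / (1 - base_rate)) * ((1 - ppv) / ppv) * tpr

# Two hypothetical groups scored by a tool with identical PPV and TPR
# (i.e., predictive parity holds), but different recidivism base rates.
ppv, tpr = 0.6, 0.6
fpr_high = fpr_under_predictive_parity(0.5, ppv, tpr)  # higher base rate
fpr_low = fpr_under_predictive_parity(0.3, ppv, tpr)   # lower base rate

print(f"FPR, higher-base-rate group: {fpr_high:.2f}")  # 0.40
print(f"FPR, lower-base-rate group:  {fpr_low:.2f}")   # 0.17
```

Even with identical score meanings for both groups, the group with the higher base rate ends up with more than twice the false positive rate, which is the same qualitative pattern ProPublica observed.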
The controversy over COMPAS highlights the tension between notions of individual fairness and group fairness, which can be impossible to reconcile. In fact, the Princeton computer scientist Arvind Narayanan has identified at least 21 different definitions of fairness, many of which are mathematically incompatible with one another.
In 2015, Eric Loomis, a man who had been convicted of eluding police in Wisconsin, challenged the use of COMPAS as part of his judge's sentencing determination, arguing that it violated his due process right to an individualized sentence. The Wisconsin Supreme Court sided with the state, finding that COMPAS results may be employed as long as they're not the only factor in a judge's rationale.
One potential problem is that the arrest data that are used to train tools like COMPAS are most likely skewed by prior policing practices in which black people are subject to higher arrest rates than are white folks who commit the same crimes. If the raw data sets are biased, an algorithm based on those data sets will perpetuate existing patterns of discrimination—unless we can figure out how to correct for the underlying flaws.
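One way to see how skewed arrest data propagates is with a toy simulation (all rates below are invented for illustration, not drawn from real policing data): two groups offend at identical true rates, but one faces a higher arrest probability. Any model trained on the resulting arrest records will score the more heavily policed group as riskier, even though the underlying behavior is the same:

```python
import random

random.seed(0)

# Toy assumption: both groups offend at the same true rate, but
# group A is arrested twice as often when offending.
TRUE_OFFENSE_RATE = 0.3
ARREST_PROB = {"A": 0.8, "B": 0.4}  # biased enforcement

def observed_arrest_rate(group, n=100_000):
    """Fraction of people with an arrest record, per the biased data."""
    arrests = 0
    for _ in range(n):
        offended = random.random() < TRUE_OFFENSE_RATE
        if offended and random.random() < ARREST_PROB[group]:
            arrests += 1
    return arrests / n

# A naive risk score trained on arrest records learns the enforcement
# bias, not the underlying behavior: roughly 0.24 vs. 0.12.
print(f"learned risk, group A: {observed_arrest_rate('A'):.2f}")
print(f"learned risk, group B: {observed_arrest_rate('B'):.2f}")
```

The "risk gap" here is entirely an artifact of who gets arrested, which is why debiasing efforts focus on the training data as much as on the algorithm itself.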
Fortunately, there is a fast-growing literature aimed at learning how to alter algorithms to make their outcomes fairer and, in so doing, improve the fairness of loan approvals, hiring decisions, court decisions, and college admissions. After Infor Talent Science applied algorithms to select job candidates from a database of 50,000 applicants, the company reported a 26 percent increase in black and Hispanic hires across a range of industries and positions.
In a 2017 National Bureau of Economic Research study, a team led by the Cornell computer scientist Jon Kleinberg found that an algorithm trained on hundreds of thousands of cases in New York City was better than judges at predicting pretrial defendant behavior, such as whether a person would skip out on his court date after posting bail. The researchers found that applying their algorithm would produce large welfare gains: Crime could be reduced by nearly 25 percent with no change in jailing rates, or jail populations could be reduced by 42 percent with no increase in crime. Either outcome could be achieved while also significantly reducing the share of African Americans and Hispanics detained while awaiting trial.
The upshot is that, rather than maintaining invidious forms of discrimination, properly tested and intentionally "debiased" algorithms can go a long way toward making our society fairer and more inclusive.
This article originally appeared in print under the headline "What If the Algorithms Are Racist?"