The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Are Stealth Quotas the Cure for AI Bias?
Episode 366 of the Cyberlaw Podcast
This week the Business Software Alliance issued a new report on AI bias. Jane Bambauer and I come to much the same conclusion: It is careful, well-written, and a policy catastrophe in the making. The main problem? It tries to turn one of the most divisive issues in American life into a problem to be solved by technology. Apparently because that has worked so well in areas like content suppression. In fact, I argue, the report will be seen by many, especially in the center and on the right, as an effort to impose racial and gender quotas by stealth in a host of contexts that have never been touched by such policies before.
Less controversial, but only a little, is the U.S. government's attempt to make government data available for training more AI algorithms. Jane more or less persuades me that this effort too will end in tears -- or stasis.
In cheerier news, the good guys got a couple of surprising wins this week. While encryption and bitcoin have posed a lot of problems for law enforcement in recent years, the FBI has responded with imagination and elan, at least if we can judge by two of the week's stories. First, Nick Weaver takes us through the laugh-out-loud facts behind a government-run encrypted phone app for criminals complete with influencers, invitation-only membership, and nosebleed pricing to cement the phone's exclusive status. Jane Bambauer unpacks some of the surprisingly complicated legal questions raised by the FBI's creativity.
Paul Rosenzweig lays out the much more obscure facts underlying the FBI's recovery of much of the ransom paid by Colonial Pipeline. There's no doubt that the government surprised everyone by coming up with the private key controlling the bitcoin account. We'd like to celebrate the ingenuity behind the accomplishment, but the FBI isn't saying how it gained the access, probably because it hopes to do the same thing again and can't if it blows the secret.
The Biden administration is again taking a shaky and impromptu Trump policy and giving it a sober interagency foundation. This time it's the TikTok and WeChat bans. These were rescinded last week. But a new process has been put in place that could restore and even expand those bans in a matter of months. Paul and I disagree about whether the Biden administration will end up applying the Trump policy to TikTok or WeChat or to a much larger group of Chinese apps.
For comic relief, Nick regales us with Brian Krebs's story of the FSB's weird and counterproductive attempt to secure communications to the FSB's web site.
Jane and I review the latest paper by Bruce Schneier (and Henry Farrell) on how to address the impact of technology on American democracy. We are not persuaded by its suggestion that our partisan divide can best be healed with understanding, civility, and aggressive prosecutions of Republicans.
Finally, everyone confesses to some confusion about the claim that the Trump Justice Department breached norms in pursuing phone and internet records of prominent Democratic congressmen and at least one Trump administration official. Best bet: this flap will turn out to be less interesting the more we learn. But I renew my appeal, this time aimed at outraged Democrats, for more statutory guardrails and safeguards against partisan misuse of national security authorities. Because that's what we'll need if we want to keep those authorities on the books.
And More!
Download the 366th Episode (mp3)
If you are reading this and wondering why you haven't received episode 366 on your iPhone, it's because Apple's podcast subscription service has melted down. It didn't deliver episode 365 either. Really, it's time to subscribe to The Cyberlaw Podcast using a reliable method, which iTunes is not. Try Spotify, Pocket Casts, or one of a dozen other services that work.
As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
"our partisan divide can best be healed with understanding, civility, and aggressive prosecutions of Republicans"
There should be an Undersecretary for Civility in the Ministry of Truth.
"...coming up with the private key controlling the bitcoin account."
Get rid of the pro-criminal lawyer profession, and how much does waterboarding cost?
Or, we could acknowledge that there is a reason to not allow anonymous bitcoin transactions, at least not in large amounts. They are almost entirely done by organized crime, rogue governments, and ransomware artists. Do away with anonymous bitcoin transactions and legitimate business won't even notice; it's only criminals whose dealings will be disrupted.
But 'anonymous' transactions were the default in a cash world; it's only with the rise of checks and electronic transactions that we've gotten used to panopticon surveillance of our financial dealings. Are you going to propose to get rid of cash, too, for the same reason? (Some people have.)
There's a fairly basic argument here in favor of anonymity: Governments tend to abuse information, and you can't abuse information you don't have.
I would not get rid of cash, but then again, I'm having trouble picturing Russian ransomware artists telling Colonial Pipeline to put $4 million in a suitcase in tens and twenties and leave it under the Brooklyn Bridge. Given the amount of cybercrime that takes place across international borders, cash is not a feasible option for cybercriminals. Bitcoin is. So it's reasonable to treat the two of them differently.
If you look at who mostly uses Bitcoin for large transactions, it's not honest businessmen.
So we need an untrackable digital currency that works for small transactions but isn't practical for big ones. That's possibly a feasible design goal.
Mind, a lot of this cybercrime will go away when people get used to the idea that they should actually make systems secure, not just convenient.
Mmmmm...government tracking all transactions!
Every time you increase the government panopticon to catch a crook in the West, two billion around the world under dictatorship sigh as the boot stepping on their face presses a little harder. And a little foreverer.
Ah, the worshiper at the Church of the Holy State comes forward.
How about: no, we don't want to give the government more power to run people's lives.
The biggest tax evasion comes from people paying off a politician to get the laws rewritten. The tax evasion you could stop by going after large cryptocurrency transfers is chump change.
I want to give government enough power to put cybercriminals out of business, without giving them more than necessary. So it's not an either/or. There are more choices than just the two extremes of all or nothing.
You can't put cybercriminals out of business. People were paying kidnappers long before there was cryptocurrency.
What you CAN do is take away all anonymity from private citizens and their purchases. What you can do is make all of us more subservient, and more bound by the State.
That is the net result of what you're trying to do. If that's not your goal, then stop trying to do it.
Nothing is going to cure crime 100%, but there are things that can be done to reduce it. One of them is making it more difficult for criminals to move their ill-gotten gains.
I recently attended a real estate closing in which the buyer brought $200,000 in cash. It was accepted, but the buyer was also required to document where it came from. I'm just not seeing why that is onerous or burdensome.
I’m just not seeing why that is onerous or burdensome.
You're looking at this from a first-world/Western-democracy perspective, one that has property rights. Although civil forfeiture practices are certainly a stain on those rights.
You need to consider it from an authoritarian government perspective. USSR, Nazi Germany, Venezuela, Iran, China, Castro, etc. Seizure of private assets is very easy if the government has all the info. Anonymity counteracts that.
You are looking at this the wrong way.
The question should never be "what's the big deal? Just give the government the info!"
It should always be: "Justify WHY the government should have that power, that control, that information."
If I want to leave the country and take all my assets with me, then unless I'm a criminal (guilty of a real crime, not just heresy against the State) I should be able to do so, and the government should have no way to stop me. And if they're allowed to require me to tell them about it, then they can stop me.
"But then some people can engage in money laundering!"
I don't care. My freedom is more important than government power.
newlib 'science' 101.
1. Make claim.
2. Say it's supported by the data and science as your primary argument regardless of what the subject is, from culture to pronouns to weather to opinions on movies. Carry on as if you're some ancient tribal priest and Science is some anthropomorphic god.
3. Construct a study rigged to give you the answer you want or torture the interpretation into what you want.
4a: You get the answer you want: Gloat incessantly and smugly at every opportunity. Repost to every MSM site and leftwing internet haunts like HUFFPOO or r/politics or r/science blasting the airwaves with a highly biased sweeping soundbite that goes far beyond even what the already rigged study says so that your compatriots can circle jerk each other all day about how they are wise ponderous intellectuals who take the scientific truth as it is no matter what.
4b: You don't get the answer you want: Cry about bias in the data and algorithm. Write 10-page op-eds with sinister pastel abstract artwork disguised as scientific articles crying about the scourge of scientific bias, and repost to leftwing haunts like HUFFPOO or r/politics or r/science so that your compatriots can commiserate about how horrible evil racist sexist scientists are making things for fill-in-the-blank group. Suggest xyz changes to the studies to give you the exact results you want...I mean, improve fairness...and repeat step 3.
5. If step 4b recurs, lather, rinse, repeat as often as necessary.
The party of Science and Reason everybody.
Ah yes, all studies are rigged by the libs except for the ones you like which just show how strong the data is despite the rigging.
Unintentional bias in study design is absolutely a thing; intentionally rigging an experiment to get the result you want is not why scientists get in the biz. Even if one scientist lets their wishes author their results, they'd get screwed come publication time in peer review.
All the papers that are unquestionable SCIENCE, which you cannot second-guess without being a neanderthal, happen to agree with progs, and all the papers that are questionable and rife with possible bias and must be audited as much as possible are the ones that don't agree with progs. According to progs.
What a coincidence!
You should meet science with science.
You can't seem to do that.
Plenty of science includes outcomes liberals don't love. Guns and violent crime are a great example. Or the study about conservatives understanding liberal worldview better than the reverse.
You just feel like it's always against you because you really hate reality.
"AI Bias"? That's when the data give answer the Left doesn't like, right?
Because I remember the probation assignment flap. And the reality was that blacks assigned to group X had as high or higher recidivism as whites assigned to the same group.
Which meant the AI did a good job.
Which is what had the left so pissed
I'm also reminded of Amazon's experience with HR: its resume-evaluation tool found that women did worse on average, so the AI promoted men's resumes. They eliminated gender as an input, and it then downgraded women's colleges. The discussion focused on how horrible it was without trying to figure out why.
The best explanation anyone could give without seeing the code was that Amazon had been systematically using affirmative action for so long that whatever seed data had been used showed the program that graduates of women's universities were inferior workers.
Of course, instead of fixing or acknowledging the problem when it was revealed, the program was blamed and scrapped.
That's not what happened at all! There was no data showing women were inferior workers.
The only data the machine was fed were past resumes let through and resumes deemed insufficient. The machine learning algo found that the most salient variable was gender, and not some actual indicator of merit. It revealed as much about past hiring practices as about the AI.
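For the curious, here is a minimal sketch of the mechanism being described above. Everything below is invented for illustration (this is not Amazon's data or code): a model trained only on past accept/reject piles latches onto whatever token best separates them, even a token like "women's".

    from collections import Counter
    from math import log

    # Toy stand-in for a decade of hiring decisions; all text invented.
    accepted = [
        "software engineer java distributed systems",
        "captain chess club software engineer python",
    ]
    rejected = [
        "software engineer women's chess club captain python",
        "women's college graduate java distributed systems",
    ]

    def token_counts(docs):
        counts = Counter()
        for doc in docs:
            counts.update(doc.split())
        return counts

    acc, rej = token_counts(accepted), token_counts(rejected)
    vocab = set(acc) | set(rej)
    n_acc, n_rej = sum(acc.values()), sum(rej.values())

    # Laplace-smoothed log-odds of each token appearing in an accepted
    # resume versus a rejected one; the most negative scores are the
    # tokens a classifier would learn to penalize.
    score = {
        w: log((acc[w] + 1) / (n_acc + len(vocab)))
           - log((rej[w] + 1) / (n_rej + len(vocab)))
        for w in vocab
    }
    for w, s in sorted(score.items(), key=lambda kv: kv[1])[:3]:
        print(f"{w:10s} {s:+.2f}")  # "women's" surfaces as the top penalty

Nothing in that pipeline knows what "women's" means; it is simply the token that best separates the two piles it was handed.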
Yeah, that's the rationalization.
No, that's the facts: there was no data showing women were inferior workers.
Unless you think women are inherently worse at working for Amazon, in which case I'd ask you to show your evidence.
Men and women have the same average IQ
Men and women have different standard deviations in IQ. The standard deviation for men is greater than that for women. Do I need to provide you proof of those two statements?
The result of these statements is that there are more male morons than female morons, and more male geniuses than female geniuses.
As a practical matter when you're looking at the top 10% and up, it's going to be "male enriched", and it's going to be MORE "male enriched" the higher up you go.
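Whatever one makes of the premise, the arithmetic behind that claim is easy to check. Here is a minimal sketch under the commenter's assumptions: equal means and a modest standard-deviation gap. The specific numbers (mean 100, SDs of 15 vs. 14) are illustrative only, and whether real distributions look like this is exactly what is disputed in the replies below.

    from scipy.stats import norm

    # Illustrative assumptions only: equal means, a modest SD gap.
    # Whether real IQ distributions actually look like this is contested.
    mean, sd_a, sd_b = 100.0, 15.0, 14.0

    for cutoff in (130, 145, 160):
        tail_a = norm.sf(cutoff, loc=mean, scale=sd_a)  # P(X > cutoff)
        tail_b = norm.sf(cutoff, loc=mean, scale=sd_b)
        print(f"cutoff {cutoff}: tail ratio = {tail_a / tail_b:.1f}")

    # Prints roughly 1.4 at 130 and 3.5 at 160: the ratio grows with
    # the cutoff even though the means are identical.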
When Lawrence Summers talked about this, the female "scientist" in the audience threw a hissy fit, marched out in anger, and got him cancelled, because her feelings were hurt by reality.
When James Damore discussed this at Google, the SJWs screamed bloody murder, and he was fired, again, for telling the truth that they didn't want to hear or read.
So, given the demonstrated inability of females to put their brains above their gonads, which is to say put thought, reason, and logic over emotion, and given the reality of the std dev difference in IQ, if you're hiring for technical positions women ARE inferior to men.
Your statement about male IQ variability is hardly uncontroversial: https://en.wikipedia.org/wiki/Variability_hypothesis#Modern_studies
I also don't think you can assume IQ is a very good proxy for quality at Amazon. I would expect merit to come down to something more specific.
Agreed the Summers controversy was bullshit.
Damore talked about women being inherently worse at their jobs at Google, which is absolutely sexism for a number of reasons, not the least of which is that Damore's hot take is not really science, but is damaging to women in the workplace.
given the demonstrated inability of females to put their brains above their gonads, which is to say put thought, reason, and logic over emotion...
Holy shit, fuck you.
1: There's lots of things the Left hates that are "controversial". Which doesn't mean any of them are false.
I found a dozen studies at your link that found the male IQ variability, before I stopped looking.
2: It depends on the job. For sucking up to customers, women may well be better. But for purely tech jobs, real IQ, not "emotional IQ", is what matters.
3: Wow, that's a pretty dishonest description of what Damore wrote. Thank you for establishing your bad faith.
4: When reality is "damaging" to your desires, and you support firing the person who reports reality, that would be just about solid proof that left-wing complaints about "AI Bias" are really left-wing complaints about reality's bias.
5: So nice of you to show us your gonads
I found a dozen studies at your link that found the male IQ variability, before I stopped looking.
Quantity of studies does not a scientific proof make. Plenty of studies on the other side as well.
For sucking up to customers, women may well be better. But for purely tech jobs, real IQ, not “emotional IQ”, is what matters
Women may be better at sucking up? You continue to be pretty freaking awful. But I can think of plenty of things that are more useful than IQ for tech jobs. Creativity, intuition, and social skills because collaboration is a thing that exists.
I can't find the original memo, but from the wiki: "the memo states that while discrimination exists, it is extreme to ascribe all disparities to oppression, and it is authoritarian to try to correct disparities through reverse discrimination. Instead, the memo argues that male to female disparities can be partly explained by biological differences."
So IOW my characterization is right, but you don't like that.
And the rest is question begging about how women really are inferior and irrational. Which is absolutely ridiculous MRA Biotruths trash. Where are you from, 1920?
"Instead, the memo argues that male to female disparities can be partly explained by biological differences.”
Which is true
"Damore talked about women being inherently worse at their jobs at Google"
Bzzt. Thank you for playing, but you just bricked your shot and lost the game.
Someone with a 150 IQ and a strong interest in programming is going to be better at it than someone with a 130 IQ and a strong interest in programming.
Males are more likely to have a 150 IQ than females
Therefore if you're doing honest recruiting, you're going to find more high end men than high end women.
That does not say that a female with a 150 IQ will be worse than a male with a 150 IQ, it says you're just going to find more of the males.
Which is what Damore was saying: disparities in hiring are not any sort of proof that the hiring is biased.
If you can't follow that logic, you're clearly no threat to be in the 150 IQ range, or even the 130
Do you have a link for that claim?
Or is this just a story you're making up?
My claim that Ben from Houston's claim (that there was data showing women were inferior workers at Amazon) is bogus?
That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.
https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
Read the article. Didn't see anything where they provided proof that the "bias" was actually invalid. Such as: "we tested women the algorithm didn't like, compared them to the bottom men the algorithm did like, and the women performed better."
All we get is it never gave results that Amazon management liked, so they junked it.
Given the political actions of Amazon, there's no reason to believe that its failures were real, rather than just political.
Ben made a positive claim that is not supported. You also seem to be writing fan-fiction based on what you're sure happened.
Not how it works.
You made a positive claim not supported, which is the claim that the AI was ONLY "discriminating against women" because of past resume handling practices. They spent years working on it before junking it. The claim that they never added any other data is laughable.
And the fact that adding that other data didn't change the results, can be seen in the fact that the politicized Amazon leadership junked the program
I showed that the input to the algo was past hiring practices. BfH thought there was an element about subsequent work performance.
The claim that they never added any other data is laughable.
Haha, what are the odds? At Amazon? With Computer engineers?
And the fact that adding that other data didn’t change the results, can be seen in the fact that the politicized Amazon leadership junked the program
Uh, nope. You need evidence before you can submit your fan-fiction as true. Maybe it is true, but you don't get to just decide it is based on your sense of the area.
AI bias? If the results are accurate, that is what matters, rather than if they are to one's political tastes.
Yeah, but that's not what the 'researchers' in question mean by "AI bias". They mean, "not producing politically correct results".
They want to produce results that are useful and unobvious, but at the same time they want to predetermine what sorts of results they get, and without having to explicitly tell the system what sorts of results to produce. A complete contradiction.
Like, you want an AI to tell you where to efficiently deploy police, but it has to NOT tell you to deploy them to black neighborhoods, and it has to somehow accomplish this without being told where those black neighborhoods are. But when you give the AI crime data, it keeps telling you to send the police to the black neighborhoods even though you didn't give it any racial data, because that's where the crimes are happening.
They can't figure out how to accomplish this, but can't admit it's impossible because their demands are contradictory. I expect they'll eventually just give up and tell the AI directly what its results are supposed to be.
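A toy simulation of the bind being described (every number below is invented): drop the protected attribute from the inputs, and a correlated feature such as neighborhood carries the same signal anyway.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Invented toy data: a hidden demographic label the model never
    # sees, a neighborhood feature 90% correlated with it, and crime
    # counts driven by neighborhood.
    hidden = rng.integers(0, 2, n)
    neighborhood = np.where(rng.random(n) < 0.9, hidden, 1 - hidden)
    crime = rng.poisson(lam=np.where(neighborhood == 1, 3.0, 1.0))

    # "Deploy where predicted crime is highest" reduces here to ranking
    # neighborhoods by observed crime rate.
    rate_1 = crime[neighborhood == 1].mean()
    rate_0 = crime[neighborhood == 0].mean()
    deploy = neighborhood == (1 if rate_1 > rate_0 else 0)

    # The deployment pattern reconstructs the hidden attribute almost
    # perfectly, even though it was never an input.
    print(f"agreement with hidden attribute: {(deploy == (hidden == 1)).mean():.0%}")

Whether that output is "bias" or accuracy is the dispute in this thread; the sketch only shows why deleting the attribute from the inputs doesn't settle it.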
No, that's not what AI bias is. Google for one second before defining something in a way to further your persecution complex.
Did you hear about when Amazon tried to use machine learning to vet resumes?
Based on the stack of accepted and rejected resumes, the ML algorithm found the best discriminator between the two groups, and ended up discarding people whose resumes mentioned the applicants were women.
Not my area, but a quota seems a dumb way to solve the problem; anyone involved in computers can tell you a kludge that doesn't get to the source of the issue is asking for more hidden issues down the road from the same root cause - in this case, an unrecognized bias in the input assumptions.
That doesn't mean there's a liberal conspiracy in AI research.
No, Sarcastro, that IS what AI bias is. It's exactly what it is: You feed the AI data, and it doesn't produce the results that were politically predetermined. This has to be a result of bias, it can never be acknowledged to actually come out of the data and reflect reality, because 'reality' is politically dictated, not derived from empirical data.
The root cause is reality not conforming to ideology.
No. The issue with ML is that the results have something of the quality of a black box to them - you need to check the results, and not follow them blindly.
It's folly to assume the data is fine and the algo is fine. It's not politics, it's good science.
1: You are correct, it is folly to assume that a black box is correct
2: You are wrong in claiming that the issue, in most of the cases that get publicity, is an algorithmic error rather than a political "error".
As the COMPAS case showed, the screams came not because the algorithm was bad, but because the algorithm was correct, but the correct result makes lefties sad
3: It's like with BLM, whose "martyrs" are almost always violent criminals who got exactly what a white violent criminal would have gotten in the same situation. And the only case I can think of where that isn't true? Breonna Taylor was killed in a "no knock" raid. As have hundreds of white males. It's amazing how the people most worked up about her death never cared about no-knock raids when they were only killing innocent white male gun owners.
So, you got some actual cases of real AI bias harming people? Then give us links
This is not the clear case you take it to be - the algo was predictive, which means you get to test it against reality. And that test absolutely found a bias.
Overall, Northpointe's assessment tool correctly predicts recidivism 61 percent of the time. But blacks are almost twice as likely as whites to be labeled a higher risk but not actually re-offend. It makes the opposite mistake among whites: They are much more likely than blacks to be labeled lower risk but go on to commit other crimes. This is a major risk machine learning models can pose, and when it comes to someone's freedom it's a flaw that shouldn't go unnoticed.
Now, this is because even if the same probability is used, blacks are (correctly) clustered higher. But what that means is the existing false positives fall more often on them.
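A toy calculation makes that concrete. The rates below are invented for illustration, not taken from the actual COMPAS data: hold the tool's precision fixed across two groups (roughly what calibration requires) while the base rates differ, and the false positive rates diverge on their own.

    # Invented illustrative numbers: a calibrated score applied to two
    # groups with different base rates of re-offending. "Flagged" means
    # labeled high risk; precision is P(re-offend | flagged), held
    # equal across groups.

    def false_positive_rate(base_rate, flagged_share, precision):
        """P(flagged | did not re-offend)."""
        false_positives = flagged_share * (1 - precision)
        return false_positives / (1 - base_rate)

    for name, base, flagged in [("group A", 0.50, 0.55),
                                ("group B", 0.30, 0.33)]:
        fpr = false_positive_rate(base, flagged, precision=0.61)
        print(f"{name}: false positive rate = {fpr:.0%}")

    # group A: ~43%, group B: ~18%. Same score, same meaning for every
    # individual, but the group with the higher base rate absorbs far
    # more of the false positives.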
So the question is a very common one for algos feeding policy: what outcome do you want? Do you want to blindly focus on the possibility of recidivism? Or do you want to not overburden certain groups with incorrect predictions? That's not an unreasonable ask, even if you disagree with it and call it PC.
IMO, purely metrics-based policymaking is reductive and collectivist; we are all individuals. But as machine learning gets better, it'll get more individually tailored; and then we'll really need to ask nuanced questions about the policy outcomes we're seeking.
Here, let me rephrase those questions for you, so they're asked more honestly:
"Do you want to blindly focus on the possibility of recidivism?"
Do you care more about protecting innocent people from being victimized by crime, or do you care about pushing left wing goals?
"Or do you want to not overburden certain groups with incorrect predictions"
Do you want to treat and judge everyone as an individual? Or do you want policy to be racist, and treat and judge everybody based on the color of their skin?
My answer, and what I believe is the only answer a decent human being can give, is "protect the innocent from being victimized, and treat everyone as an individual"
Blacks assigned to group X and whites assigned to group X had the same recidivism rates. What that means is that every individual was correctly assigned to the right group, within the limits of anyone's ability to predict the future.
There's only two ways to "fix" the "problem" you see:
1: Take prisoners with the politically disfavored skin color ("whites") and arbitrarily push them to more imprisoned groups. Which is to say, arbitrarily imprison individuals because you don't like their skin color
2: Take prisoners with the politically favored skin color ("blacks") and arbitrarily push them to less imprisoned groups. Which means more of them get out of jail and commit crimes (robbery, assault, rape, and murder) against innocent people, people who wouldn't have been victimized if you hadn't changed the rules.
Sorry, but neither of those counts as a moral or ethical choice.
1: Never "Google" anything of a political nature, unless you wish to be lied to
2: https://jacobitemag.com/2017/08/29/a-i-bias-doesnt-mean-what-journalists-want-you-to-think-it-means/
'In the conception of these authors, “bias” refers to an algorithm providing correct predictions that simply fail to reflect the reality the authors wish existed.'
'This example is also important because of its real world consequences. After the article was published, a team of statisticians at Stanford decided to study the cost of fairness. It is possible to take the COMPAS algorithm and manipulate it to be fair. But in the process of doing this, accuracy is reduced. The Stanford team shows that if this were done, the (mostly black) high risk convicts that the manipulated algorithm would release would then commit 9 percent more violent crimes. Furthermore, 17 percent of the people in jail would be (mostly white) individuals at a very low risk of re-offending.'
I've shown you mine, now you show me yours
Your linked source has a partisan outcome it's seeking, but that's fine. However, if it's all you look to, you're engaging in confirmation bias as bad as any machine learning algo. That's media analysis 101.
Despite conservatives claiming otherwise, Google reliably turns up texts from both sides. Read both; then at least you'll know if you're talking out of your ass.
The article you linked gave a couple of examples.
One is adsense, which IIRC was not really an algo problem, so much as it was letting companies choose to target specific demographics.
Another is about housing prices, which only shows that if your algorithm is built to do pure risk analysis when that's not all the regulations require, it's not a good algorithm.
"Despite conservatives claiming otherwise, Google reliably turns up texts both sides."
No, it doesn't. I've repeatedly established this by doing google searches for things I remembered, getting no worthwhile hits on the first page, switching to Duck Duck Go, and getting the correct hit in the top 2 - 3.
"Read both; then at least you’ll know if you’re talking out of your ass." Well, when I'm communicating with someone who isn't talking out of his a$$, he's provided links to his side.
Besides, Duck Duck Go provides links to both sides. Their difference from Google is that they DO provide honest links, not politically curated ones
1: Every source has a partisan outcome it’s seeking.
2: My source provided the situation, the problem, and the reality of any "solutions".
As opposed to the left wing presentations, which present only the racist view of the situation, followed by reporting cherry picked individual results, rather than the overall results (you know, the ones that showed that blacks in group X and whites in group X had the same recidivism rate, and that therefore there was no "racial bias" in the assignments).
One of the quick ways to figure out which articles are utter trash is by looking at what they don't mention. And what I've found is that none of the anti-COMPAS articles ever mention the within group recidivism rates.
If that were a garbage stat, then they'd mention it, so they could take it down. So the fact that they don't mention it means one of two things:
1: They understand that the stat destroys their argument, and don't care
2: They live in such a bubble (thank you Google) that they're totally unaware of the issue. Which is to say they're so (willingly) ignorant that they're not worth listening to
It's not the Right that lives in a bubble of ignorant stupidity
According to the BSA framework linked to, it’s simply a matter of avoiding a practice that “systematically and unjustifiably yields less favorable, unfair, or harmful outcomes to members of specific demographic groups.” Certainly, reasonable people could not disagree on what is justifiable and what isn’t. Let’s let the programmers decide.
Certainly, reasonable people could not disagree on what is justifiable and what isn’t.
Are you being sarcastic? Because that cost-benefit is a big part of policy analysis.
Yes, I was being sarcastic.
Hard to say these days. Some big swings being taken in the comments recently.
Whining right-wing authoritarians are among my favorite culture war casualties.
Carry on, clingers. You get to whine as much as you like, so long as you continue to comply with the preferences of your betters.