The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Stealth quotas take a big step back in Congress
But not big enough: How APRA should be amended.
There are new twists in the saga of algorithmic bias and the American Privacy Rights Act, or APRA. That's the privacy bill that would have imposed race and gender quotas on AI algorithms. I covered that effort two weeks ago in a detailed article for the Volokh Conspiracy.
A lot has happened since then. Most importantly, publicity around its quota provisions forced the drafters of APRA into retreat. A new discussion draft was released, and it dropped much of the quota-driving language. Then, a day later, a House commerce subcommittee held a markup on the new draft. It was actually more of a nonmarkup markup; member after member insisted that the new draft needed further changes and then withdrew their amendments pending further discussions. With that ambiguous endorsement, the subcommittee sent APRA to the full committee.
Still, it is good news that APRA now omits the original disparate impact and quota provisions. No explanation was offered for the change, but it seems clear that few in Congress want to be seen forcing quotas into algorithmic decisionmaking.
That said, there's reason to fear that the drafters still hope to sneak algorithmic quotas into most algorithms without having to defend them. The new version of APRA has four provisions on algorithmic discrimination. First, the bill forbids the use of data in a manner that "discriminates in or otherwise makes unavailable the equal enjoyment of goods and services" on the basis of various protected characteristics. Sec. 113(a)(1). That promising start is immediately undercut by the second provision, which allows discrimination in the collection of data either to conduct "self-testing" to prevent or mitigate unlawful discrimination or to expand the pool of applicants or customers. Id. at (a)(2). The third provision requires users to assess the potential of an algorithm "to cause a harm, including harm to an individual or group of individuals on the basis of protected characteristics." Id. at (b)(1)(B)(ix). Finally, in that assessment, users must provide details of the steps they are taking to mitigate such harms "to an individual or group of individuals." Id.
The self-assessment requirement clearly pushes designers of algorithms toward fairness not simply to individuals but to demographic groups. Algorithmic harm must be assessed and mitigated not just on an individual basis but also on a group basis. Judging an individual on his or her group identity sounds a lot like discrimination, but APRA makes sure that such judgments are immune from liability; it defines discrimination to exclude measures taken to expand a customer or applicant pool.
So, despite its cryptic phrasing, APRA can easily be read as requiring that algorithms avoid harming a protected group, an interpretation that leads quickly to quotas as the best way to avoid group harm. Certainly, agency regulators would not have trouble providing guidance that gets to that result. They need only declare that an algorithm causes harm to a "group of individuals" if it does not ensure them a proportionate share in the distribution of jobs, goods, and services. Even a private company that likes quotas because they're a cheap way to avoid accusations of bias could implement them and then invoke the two statutory defenses -- that its self-assessment required an adjustment to achieve group justice, and that the adjustment is immune from discrimination lawsuits because it is designed to expand the pool of beneficiaries.
In short, while not as gobsmackingly coercive as its predecessor, the new APRA is still likely to encourage the tweaking of algorithms to reach proportionate representation, even at the cost of accuracy.
This is a big deal. It goes well beyond quotas in academic admissions and employment. It would build "group fairness" into all kinds of decision algorithms – from bail decisions and health care to Uber trips, face recognition, and more. What's more, because it's not easy to identify how machine learning algorithms achieve their weirdly accurate results, the designers of those algorithms will be tempted to smuggle racial or gender factors into their products without telling the subjects or even the users.
This process is already well under way -- even in healthcare, where compromising the accuracy of an algorithm for the sake of proportionate outcomes can be a matter of life or death. A recent paper on algorithmic bias in health care published by the Harvard School of Public Health recommended that algorithm designers protect "certain groups" by "inserting an artificial standard in the algorithm that overemphasizes these groups and deemphasizes others."
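To make the quoted recommendation concrete, here is a minimal sketch, in Python with scikit-learn, of one common way an "artificial standard" can be inserted: reweighting one group's records during training. Everything in it (the column names, the synthetic data, the weight values) is hypothetical and chosen only for illustration; it is not drawn from the Harvard paper itself.

```python
# Minimal sketch (hypothetical data): per-group sample weights during training.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000
data = pd.DataFrame({
    "group": rng.integers(0, 2, n),       # 0/1 protected-group label
    "age": rng.normal(50, 10, n),
    "biomarker": rng.normal(1.0, 0.3, n),
})
# Synthetic outcome driven only by the clinical features, not by group.
data["outcome"] = (
    data["biomarker"] + 0.02 * data["age"] + rng.normal(0, 0.5, n) > 2.0
).astype(int)

# The "artificial standard": records from group 1 count three times as much,
# records from group 0 half as much, so the fitted model is tuned mainly to
# group 1's error at the expense of accuracy on everyone else.
weights = np.where(data["group"] == 1, 3.0, 0.5)

model = LogisticRegression()
model.fit(data[["age", "biomarker"]], data["outcome"], sample_weight=weights)
```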
This kind of crude intervention to confer artificial advantages by race and gender is in fact routinely recommended by experts in algorithmic bias. Thus, the McKinsey Global Institute advises designers to impose what it calls "fairness constraints" on their products to force algorithms to achieve proportional outcomes. Among the approaches it finds worthy are "post-processing techniques [that] transform some of the model's predictions after they are made in order to satisfy a fairness constraint." Another recommended approach "imposes fairness constraints on the optimization process itself." In both cases, to be clear, the model is being made less accurate in order to fit the designer's views of social justice. And in each case, the compromise will fly below the radar. The designer's social justice views are hidden by a fundamental characteristic of machine learning; the machine produces the results that the trainers reward. If they only reward results that meet certain demographic requirements, that's what the machine will produce.
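The McKinsey language is abstract, so here is a minimal sketch of what a "post-processing" fairness constraint can look like in practice: a separate score cutoff per group, chosen so that every group is selected at the same rate. The scores, group labels, and 30 percent target rate are all hypothetical, and this is not any particular library's API; real toolkits implement more elaborate versions of the same idea.

```python
# Minimal sketch (hypothetical data): post-processing with per-group cutoffs
# so each group is selected at the same target rate, overriding the single
# accuracy-maximizing threshold.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.uniform(0, 1, 1_000)    # the model's predicted scores
groups = rng.integers(0, 2, 1_000)   # 0/1 protected-group label
target_rate = 0.30                   # desired selection rate for every group

def group_cutoff(scores, groups, g, rate):
    """Cutoff that selects the top `rate` fraction of group g."""
    return np.quantile(scores[groups == g], 1 - rate)

cutoffs = {g: group_cutoff(scores, groups, g, target_rate) for g in (0, 1)}

# Two people with identical scores can now receive different decisions,
# depending only on their group label.
decisions = np.where(groups == 1, scores >= cutoffs[1], scores >= cutoffs[0])
```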
If you're wondering how far from reality such constraints wander, take a look at the "text to image" results originally produced by Google Gemini. When asked for pictures of German soldiers in the 1940s, Gemini's training required that it serve up images of black and Asian Nazis. The consequences of bringing such political correctness to healthcare decisions could be devastating – and much harder to spot.
That's why we can't afford APRA's quota-nudging approach. The answer is not to simply delete those provisions, but to address the problem of stealth quotas directly. APRA should be amended to make clear the fundamental principle that identity-based adjustments of algorithms require special justification. They should be a last resort, used only when actual discrimination has provably distorted algorithmic outcomes – and when other remedies are insufficient. They should never be used when apparent bias can be cured simply by improving the algorithm's accuracy. To take one example, face recognition software ten or fifteen years ago had difficulty accurately identifying minorities and darker skin tones. But today those difficulties can be largely overcome by better lighting, cameras, software, and training sets. Such improvements in algorithmic accuracy are far more likely to be seen as fair than forcing identity-based solutions.
Equally important, any introduction of race, gender, and other protected characteristics into an algorithm's design or training should be open and transparent. Controversial "group justice" measures should never be hidden from the public, from users of algorithms or from the individuals who are affected by those measures.
With those considerations in mind, I've taken a very rough cut at how APRA could be amended to make sure it does not encourage widespread imposition of algorithmic quotas:
"(a) Except as provided in section (b), a covered algorithm may not be modified, trained, prompted, rewarded or otherwise engineered using race, ethnicity, national origin, religion, sex, or other protected characteristic --
(1) to affect the algorithm's outcomes or
(2) to produce a particular distribution of outcomes based in whole or in part on race, ethnicity, national origin, religion, or sex.
(b) A covered algorithm may be modified, trained, prompted, rewarded or engineered as described in section (a) only:
(1) to the extent necessary to remedy a proven act or acts of discrimination that directly and proximately affected the data on which the algorithm is based and
(2) if the algorithm has been designed to ensure that any parties adversely affected by the modification can be identified and notified whenever the modified algorithm is used.
(c) An algorithm modified in accordance with section (b) may not be used to assist any decision unless parties adversely affected by the modification are identified and notified. Any party so notified may challenge the algorithm's compliance with section (b)."
It's not clear to me that such a provision will survive a Democratic Senate and a House that is Republican by a hair. But Congress's composition could change dramatically in a few months. Moreover, regulating artificial intelligence is not just a federal concern.
Left-leaning state legislatures have taken the lead in adopting laws on AI bias; last year, the Brennan Center identified seven jurisdictions with proposed or enacted laws addressing AI discrimination. And of course the Biden administration is pursuing multiple anti-bias initiatives. Many of these legal measures, along with a widespread push for ethical codes aimed at AI bias, will have the same quota-driving impact as APRA.
Conservative legislators have been slow to react to the enthusiasm for AI regulation; their silence guarantees that their constituents will be governed by algorithms written to blue-state regulatory standards. If conservative legislatures don't want to import stealth quotas, they will need to adopt their own laws restricting algorithmic race and gender discrimination and requiring transparency whenever algorithms are modified using race, gender and similar characteristics. So even if APRA is never amended or adopted, the language above, or some more artful version of it, could become an important part of the national debate over artificial intelligence.
UPDATE: Thanks to an alert reader, I can report that Colorado has already become the first state to impose stealth quotas on developers of artificial intelligence.
On May 17, 2024, Colorado adopted SB 205, which prohibits algorithmic discrimination, defined as "any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law." There is a possibility that, when Colorado talks about "unlawful differential treatment" it is talking about deliberate discrimination. Much more likely, this language, with its focus on an "impact that disfavors a … group" will be viewed as incorporating disparate impact analysis and group fairness concepts.
Many of SB 205's requirements take effect in February 2026. So that's the deadline for action by states that don't want their AI built to Colorado specifications.
https://legiscan.com/CO/bill/SB205/2024
Which industry group or right-wing think tank is writing this stuff?
Don't take Rev's black and Asian Nazis away from him!
That's one of his proudest Culture War victories!
The Jerry Sandusky Man-Boy-Love Association most likely.
‘even at the cost of accuracy.’
There’s a massive category error here inasmuch as hahaha accuracy?
Which brings us to
‘It would build “group fairness” into all kinds of decision algorithms’
There is so much wrong with this. Every algorithm is going to have the prejudices, conscious or unconscious, of the programmers baked in, not to mention their complete ignorance of whatever area of expertise their stupid AI bullshit is being inserted into. The idea of letting these things anywhere near decisions that affect people's lives and livelihoods ought to count as criminal malpractice.
You’re complaining about efforts to program these things to pay attention to certain marginalised groups so as not to harm them, and give an example of how that backfired or was at least done with hilarious incompetence. But these things are ALL supposed to be constrained and limited and designed not to harm anyone. The idea of trusting these constraints when their efforts at this implementation are so ridiculously bad is ludicrous.
Top this all off with no real effort to assess whether the technology was in fact harming minorities or groups (like facial recognition software used to – oh but that’s been fixed, supposedly, hey, was trying to get that fixed a stealth quota too?) before the amazingly shitty effort to fix that seems to have made it even worse. Whatever, you have strong words for all sorts of people who probably deserve them, but none for the programmers responsible for shitty programming, and definitely no compelling reasons for them to be trusted with ANYTHING.
These algorithms are based upon pattern recognition.
The liberals always say they are the reality-based community, but as soon as you have a tech that makes predictions based upon reality, y'all have a hissy fit about "fairness" and demand the algorithms be manipulated to no longer reflect reality.
As is typical, the Liberal is the exact opposite of their claim.
Lol he thinks they make predictions based on reality
I know AI is getting pretty good, and you have to have a Computer OK your face to get on a flight, but can it judge the content of one's Character?
Frank
Skin color and secret gender fluidentity pronouns are the most important characteristics.
They certainly drive you guys wild.
I’m the one who doesn’t ask (or tell), you’re the one who has to explain how Elon Musk, born in Africa, American Citizen, isn’t an “African American” while Common-Law-Willie-Brown-Harris izzzzzzzzzzzzzz (HT William Juffuhson Clinton, some claim the first “Black President”)
I’ve got Lesbos and Homos in my Group, I never asked, but they seem to have to tell, if they can pass Gas skillfully(and more importantly, EARN) I could give a fuck what exit/entry they prefer
Frank
The recommended age for Colonoscopy (American Cancer Society, they should know) is 45 in Afro-Amuricans and 50 in everyone else (I guess Barry Hussein should have got his at 47.5, and Common-Law-Harris-Willie-Brown at 46.25) Apparently Jay-hey throws his Colon Cancer sticks at Afro-Amuricans at a younger age.
Frank “All Afro-Amuricans now boarding for Tuskegee!”
Not true. One was recommended at 45 for pasty-white me because of family history. Then they told me to come back for further awkwardness in another 5 years.
It’s 45 for Afro-Amuricans no matter what their history, do you think Barry Hussein knew if his dad had polyps?
Democrats and other Leftists: Judging people by the color of their skin since at least 1861.
The right: thrilled to see depictions of their fantasies of minorities in Nazi uniforms.
The right: thrilled to see the leftists hoisted by their own petards*
I know I can Google it (AlGores gotta be pissed we don’t call it “Gore-ing” (term for flying 15,000 miles in Private jet to tell people to lower their “Carbon Footprint”))
But what’s a “Petard”?
Frank
"A small bomb made of a metal or wooden box filled with powder, used to blast down a door or to make a hole in a wall."
So says the Oxford Languages Dictionary.
Yes, but also a slang term for a fart, which I think is the meaning intended in that phrase.
"Hoist with his own petard" is a phrase from a speech in William Shakespeare's play Hamlet that has become proverbial. The phrase's meaning is that a bomb-maker is blown ("hoist", the past tense of "hoise") off the ground by his own bomb ("petard"), and indicates an ironic reversal or poetic justice.
From Wiki
The hoistee affixed the bomb to the castle door but did not retreat fast enough or the bomb exploded prematurely (unreliable fuzes in those days).
It turns out 'woke' AI is as shit as racist AI, and still racist. This is a massive selling point to the right.
The Left: Reacts to facts like Dracula reacts to garlic infused holy water at high noon.
Trump supporters have opinions about 'facts!'
The populist ultra-right: obsessively driven to post things seemingly chosen to imply their fact-based belief in things like Dracula+garlic+holywater+noon=reality
And their breeding qualities, don’t forget that, 300 years of Slavery favored those with more fast twitch muscle, abstract thinking? Not so much
This post is Stewart Baker taking, from all the examples of AI silliness for all sorts of underlying reasons, only those which align with wokeness.
He says 'look, AI regulation did this (because regulators are woke). Therefore be against this privacy law.'
But commenters here are too fixated on the woke bad bit to move on to the privacy law bad bit.
I wrote my congressman after the first post on the issue, I'm glad it got enough visibility to at least take out the worst parts.
Baker is generating/imagining a whole host of terrible things from a vaguely worded requirement.
And of course the mob is riled up.
Any sort of anti-discrimination rules can lead to some of the behaviors he deplores.
I don’t think “Anti- Discrimination” means what you think it means. You don’t Anti-Discriminate by Discriminating
Frank
I don't think you came close to understanding my point.
Well then, un-vague the requirement.
From:
https://reason.com/volokh/2024/05/15/congress-is-preparing-to-restore-quotas-in-college-admissions/
"lifeguards have to be able to swim."
"" And there are few if any employment qualifications that don't have some disparate impact. As Prof. Heriot has pointed out, "everything has a disparate impact on some group:" ""
Do away with groups ?
Problems point back to level of maturity, for one, however, each of these perceived and actual slights, etc. are from someone trying to gain entry into an exclusive group, so there will always be discrimination of some sort as it is an inherent quality found in living things. Life replicates only by discrimination to like life.
Efforts to eliminate discrimination are problematic, because of continual change in those factors of past discriminations and future discriminations created by attempting to remove the past ones. Factors of discrimination based on fluid parameters will constantly need to be revised to the point of great effort; thus, to stay on top of the ever changing nature of people, their qualities, their essences, those factors being addressed by legislation, updates must be made continually. Just what type of time-frame for such changes in law will have to be found before proceeding with this legislation.
That's Baker pulling the same shit - 'You hate quotas, right? So you should hate this privacy bill!'
And yet again VCers that fall for it stop at the 'I would like to now rant about affirmative action' and forget about the bill Baker's disingenuously trying to gin up opposition to.
Why does a person’s race matter? If you saw Larry Bird at Walmart you wouldn’t think he was one of the best players ever. Ability Uber Alles, if they’re the best who gives a fuck if they’re a Nigerian Transexual Dwarf?
Frank
Well what's the worst thing that happens?
The bill doesn't get passed.
I'm good with that.
One thing to note: The phrase “or otherwise makes unavailable” creates disparate impact liability. That was the holding of Texas Department of Housing and Community Affairs v. The Inclusive Communities Project, which interpreted that phrase in the Fair Housing Act.