The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Congress is Preparing to Restore Quotas in College Admissions
And everywhere else -- as a very quiet part of the bipartisan "privacy" bill
More than two-thirds of Americans think the Supreme Court was right to hold Harvard's race-based admissions policy unlawful. But the minority who disagree have no doubt about their own moral authority, and there's every reason to believe that they intend to undo the Court's decision at the earliest opportunity.
Which could be as soon as this year. In fact, undoing the Harvard admissions decision is the least of it. Republicans and Democrats in Congress have embraced a precooked "privacy" bill that will impose race and gender quotas not just on academic admissions but on practically every private and public decision that matters to ordinary Americans. The provision could be adopted without scrutiny in a matter of weeks; that's because it is packaged as part of a bipartisan bill setting federal privacy standards -- something that has been out of reach in Washington for decades. And it looks as though the bill breaks the deadlock by giving Republicans some of the federal preemption their business allies want while it gives Democrats and left-wing advocacy groups a provision that will quietly overrule the Supreme Court's Harvard decision and impose identity-based quotas on a wide swath of American life.
This tradeoff first showed up in a 2023 bill that Democratic and Republican members of the House commerce committee approved by an overwhelming 53-2 vote. That bill, however, never won the support of Sen. Cantwell (D-WA), who chairs the Senate commerce committee. This time around, a lightly revised version of the bill has been endorsed by both Sen. Cantwell and her House counterpart, Cathy McMorris Rodgers (R-WA). The bill has a new name, the American Privacy Rights Act of 2024 (APRA), but it retains the earlier bill's core provision, which uses a "disparate impact" test to impose race, gender, and other quotas on practically every institutional decision of importance to Americans.
"Disparate impact" has a long and controversial history in employment law; it's controversial because it condemns as discriminatory practices that disproportionately affect racial, ethnic, gender, and other protected groups. Savvy employers soon learn that the easiest way to avoid disparate impact liability is to eliminate the disparity – that is, to hire a work force that is balanced by race and ethnicity. As the Supreme Court pointed out long ago, this is a recipe for discrimination; disparate impact liability can "leave the employer little choice . . . but to engage in a subjective quota system of employment selection." Wards Cove Packing Co. v. Atonio, 490 U.S. 642, 652-53 (1989), quoting Albemarle Paper Co. v. Moody, 422 U.S. 405, 448 (1975) (Blackmun, J., concurring).
In the context of hiring and promotion, the easy slide from disparate impact to quotas has proven controversial. The Supreme Court decision that adopted disparate impact as a legal doctrine, Griggs v. Duke Power Co., 401 U.S. 424 (1971), has been persuasively criticized for ignoring Congressional intent. G. Heriot, Title VII Disparate Impact Liability Makes Almost Everything Presumptively Illegal, 14 N.Y.U. J. L. & Liberty 1 (2020). In theory, Griggs allowed employers to justify a hiring rule with a disparate impact if they could show that the rule was motivated not by animus but by business necessity. A few rules have been saved by business necessity; lifeguards have to be able to swim. But in the years since Griggs, the Supreme Court and Congress have struggled to define the business necessity defense; in practice there are few if any hiring qualifications that clearly pass muster if they have a disparate impact.
And there are few if any employment qualifications that don't have some disparate impact. As Prof. Heriot has pointed out, "everything has a disparate impact on some group":
On average, men are stronger than women, while women are generally more capable of fine handiwork. Chinese Americans and Korean Americans score higher on standardized math tests and other measures of mathematical ability than most other national origin groups….
African American college students earn a disproportionate share of college degrees in public administration and social services. Asian Americans are less likely to have majored in Psychology. Unitarians are more likely to have college degrees than Baptists.…
I have in the past promised to pay $10,000 to the favorite charity of anyone who can bring to my attention a job qualification that has made a difference in a real case and has no disparate impact on any race, color, religion, sex, or national origin group. So far I have not had to pay.
Id. at 35-37. In short, disparate impacts are everywhere in the real world, and so is the temptation to solve the problem with quotas. The difficulty is that, as the polls about the Harvard decision reveal, most Americans don't like the solution. They think it's unfair. As Justice Scalia noted in 2009, the incentives for racial quotas set the stage for a "war between disparate impact and equal protection." Ricci v. DeStefano, 557 U.S. 557, 594 (2009).
Not surprisingly, quota advocates don't want to fight such a war in the light of day. That's presumably why APRA obscures the mechanism by which it imposes quotas.
Here's how it works. APRA's quota provision, section 13, says that any entity that "knowingly develops" an algorithm for its business must evaluate that algorithm "to reduce the risk of" harm. And it defines algorithmic "harm" to include causing a "disparate impact" on the basis of "race, color, religion, national origin, sex, or disability" (plus, weirdly, "political party registration status"). APRA Sec. 13(c)(1)(B)(vi)(IV)&(V).
At bottom, it's as simple as that. If you use an algorithm for any important decision about people -- to hire, promote, advertise, or otherwise allocate goods and services -- you must ensure that you've reduced the risk of disparate impact.
The closer one looks, however, the worse it gets. At every turn, APRA expands the sweep of quotas. For example, APRA does not confine itself to hiring and promotion. It provides that, within two years of the bill's enactment, institutions must reduce any disparate impact the algorithm causes in access to housing, education, employment, healthcare, insurance, or credit.
No one escapes. The quota mandate covers practically every business and nonprofit in the country, other than financial institutions. APRA Sec. 2(10). And its regulatory sweep is not limited, as you might think, to sophisticated and mysterious artificial intelligence algorithms. A "covered algorithm" is broadly defined as any computational process that helps humans make a decision about providing goods or services or information. APRA Sec. 2(8). It covers everything from a ground-breaking AI model to an aging Chromebook running a spreadsheet. In order to call this a privacy provision, APRA says that a covered algorithm must process personal data, but that means pretty much every form of personal data that isn't deidentified, with the exception of employee data. APRA Sec. 2(9).
Actually, it gets worse. Remember that some disparate impacts in the employment context can be justified by business necessity. Not under APRA, which doesn't recognize any such defense. So if you use a spreadsheet to rank lifeguard applicants based on their swim test, and minorities do poorly on the test, your spreadsheet must be adjusted until the scores for minorities are the same as everyone else's.
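APRA itself doesn't say how disparate impact is to be measured. In employment law, the customary yardstick is the EEOC's "four-fifths rule": if any group's selection rate falls below 80 percent of the highest group's rate, that is treated as evidence of disparate impact. Here is a minimal Python sketch of that arithmetic; the groups, scores, and passing cutoff are all invented for illustration:

```python
# Hypothetical swim-test results; the groups, scores, and the passing
# cutoff are all invented for illustration.
PASS_SCORE = 70

applicants = [
    {"group": "A", "score": 85}, {"group": "A", "score": 72},
    {"group": "A", "score": 90}, {"group": "A", "score": 65},
    {"group": "B", "score": 60}, {"group": "B", "score": 75},
    {"group": "B", "score": 55}, {"group": "B", "score": 58},
]

def selection_rate(group):
    """Fraction of a group's applicants who pass the test."""
    members = [a for a in applicants if a["group"] == group]
    passed = [a for a in members if a["score"] >= PASS_SCORE]
    return len(passed) / len(members)

# EEOC "four-fifths" rule of thumb: a selection rate below 80% of the
# highest group's rate is taken as evidence of disparate impact.
rate_a = selection_rate("A")            # 3 of 4 pass = 0.75
rate_b = selection_rate("B")            # 1 of 4 pass = 0.25
impact_ratio = rate_b / rate_a          # 0.33, well under 0.8
has_disparate_impact = impact_ratio < 0.8
print(impact_ratio, has_disparate_impact)
```

Note that nothing in this computation asks why the rates differ; the disparity alone does the work, which is exactly why nearly every neutral qualification flunks the test.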
To see how APRA would work, let's try it on Harvard. Is the university a covered entity? Sure, it's a nonprofit. Do its decisions affect access to an important opportunity? Yes, education. Is it handling nonpublic personal data about applicants? For sure. Is it using a covered algorithm? Almost certainly, even if all it does is enter all the applicants' data in a computer to make it easier to access and evaluate. Does the algorithm cause harm in the shape of disparate impact? Again, objective criteria will almost certainly result in underrepresentation of various racial, religious, gender, or disabled identity groups. To reduce the harm, Harvard will be forced to adopt admissions standards that boost black and Hispanic applicants past Asian and white students with comparable records. The sound of champagne corks popping in Cambridge will reach all the way to Capitol Hill.
Of course, Asian students could still take Harvard to court. There is a section of APRA that seems to make it unlawful to discriminate on the basis of race and ethnicity. APRA Sec. 13(a)(1). But in fact APRA offers the nondiscrimination mandate only to take it away. It carves out an explicit exception for any covered entity that engages in self-testing "to prevent or mitigate unlawful discrimination" or to "diversify an applicant, participant, or customer pool." Harvard will no doubt say that it adopted its quotas after its "self-testing" revealed a failure to achieve diversity in its "participant pool," otherwise known as its freshman class.
Even if the courts don't agree, the Federal Trade Commission can ride to the rescue. APRA gives the Commission authority to issue guidance or regulations interpreting APRA – including issuing a report on best practices for reducing the harm of disparate impact. APRA Sec. 13(c)(5)&(6). What are the odds that a Washington bureaucracy won't endorse race-based decisions as a "best practice"?
It's worth noting that, while I've been dunking on Harvard, I could have said the same about AT&T or General Electric or Amazon. In fact, big companies with lots of personal data face added scrutiny under APRA; they must do a quasi-public "impact assessment" explaining how they are mitigating any disparate impact caused by their algorithms. That creates heavy pressure to announce publicly that they've eliminated all algorithmic harm. That will be an added incentive to implement quotas, but as with Harvard, many big companies don't really need an added incentive. They all have active internal DEI bureaucracies that will be happy to inject even more race and gender consciousness into corporate life, as long as the injection is immune from legal challenge.
And immune it will be. As we've seen, APRA provides strong legal cover for institutions that adopt quota systems. And I predict that, for those actually using artificial intelligence, there will be an added layer of obfuscation that will stop legal challenges before they get started. It seems likely that the burden of mitigating algorithmic harm will quickly be transferred from the companies buying and using algorithms to the companies that build and sell them. Algorithm vendors are already required by many buyers to certify that their products are bias-free. That will soon become standard practice. With APRA on the books, there won't be any doubt that the easiest and safest way to "eliminate bias" will be to build quotas in.
That won't be hard to do. Artificial intelligence and machine learning vendors can use their training and feedback protocols to achieve proportional representation of minorities, women, and the disabled.
During training, AI models are evaluated based on how often they serve up the "right" answers. Thus, a model designed to help promote engineers may be asked to evaluate the resumes of actual engineers who've gone through the corporate promotion process. Its initial guesses about which engineers should be promoted will be compared to actual corporate experience. If the machine picks candidates who performed badly, its recommendation will be marked wrong and it will have to try again. Eventually the machine will recognize the pattern of characteristics, some not at all obvious, that make for a promotable engineer.
But everything depends on the training, which can be constrained by arbitrary factors. A company that wanted to maximize two things -- the skill of its senior engineers and their intramural softball prowess -- could easily train its algorithm to downgrade engineers who can't throw or hit. The algorithm would eventually produce the best set of senior engineers consistent with winning the intramural softball tournament every year. Of course, the model could just as easily be trained to produce the best set of senior engineers consistent with meeting the company's demographic quotas. And the beauty from the company's point of view is that the demographic goals never need to be acknowledged once the training has been completed – probably in some remote facility owned by its vendor. That uncomfortable topic can be passed over in silence. Indeed, it may even be hidden from the company that purchases the product, and it will certainly be hidden from anyone the algorithm disadvantages.
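To make the mechanics concrete, here is a deliberately crude sketch of one way parity could be engineered in after training: post-processing model scores with group-specific cutoffs so that every group is selected at the same rate. Everything here -- the candidates, the scores, and the method -- is invented for illustration; actual vendors' techniques vary and, as noted above, are rarely disclosed.

```python
# Invented example: post-processing model scores so each group is
# selected at the same rate ("demographic parity"). This is one simple
# illustration, not any real vendor's method.
from collections import defaultdict

# (group, model_score) pairs -- all hypothetical.
candidates = [
    ("A", 0.91), ("A", 0.84), ("A", 0.77), ("A", 0.70),
    ("B", 0.66), ("B", 0.58), ("B", 0.52), ("B", 0.40),
]

def select_with_parity(cands, rate):
    """Select the top `rate` fraction of EACH group, regardless of how
    the groups' score distributions compare to one another."""
    by_group = defaultdict(list)
    for g, s in cands:
        by_group[g].append(s)
    selected = []
    for g, scores in by_group.items():
        k = round(len(scores) * rate)
        # Group-specific cutoff: the k-th best score within the group.
        cutoff = sorted(scores, reverse=True)[k - 1]
        selected += [(g, s) for s in scores if s >= cutoff]
    return selected

picks = select_with_parity(candidates, 0.5)
# Group A's effective cutoff is 0.84 while group B's is 0.58 -- a
# candidate from A scoring 0.77 is rejected while one from B scoring
# 0.58 is accepted, yet the output reports only who was selected.
```

The same result can be baked into training itself, for instance by penalizing the model whenever its selection rates diverge across groups, which is even harder for an outsider to detect than a post-processing step like this one.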
To be fair, unlike its 2023 predecessor, APRA at least nods in the direction of helping the algorithm's victims. A new Section 14 requires that institutions tell people if they are going to be judged by an algorithm, provide them with "meaningful information" about how the algorithm makes decisions, and give them an opportunity to opt out.
This is better than nothing, for sure. But not by much. Companies won't have much difficulty providing a lot of information about how their algorithms work without ever quite explaining who gets the short end of the disparate-impact stick. Indeed, as we've seen, the company that's supposed to provide the information may not even know how much race or gender preference has been built into its outcomes. More likely it will be told by its vendor, and will repeat, that the algorithm has been trained and certified to be bias-free.
What if a candidate suspects the algorithm is stacked against him? How does section 14's assurance that he can opt out help? Going back to our Harvard example, suppose that an Asian student figures out that the algorithm is radically discounting his achievements because of his race. If he opts out, what will happen? He won't be subjected to the algorithm. Instead, presumably, he'll be put in a pool with other dissidents and evaluated by humans -- who will almost certainly wonder about his choice and may well presume that he's a racist. Certainly, opting out provides the applicant no protection, given the power and information imbalance between him and Harvard. Yet that is all that APRA offers.
Let's be blunt; this is nuts. Overturning the Supreme Court's Harvard admissions decision in such a sneaky way is bad enough, but imposing Harvard's identity politics on practically every part of American life -- housing, education, employment, healthcare, insurance, and credit for starters – is worse. APRA's effort to legalize, if not mandate, quotas in all these fields has nothing to do with privacy. The bill deserves to be defeated or at least shorn of sections 13 and 14.
These are the provisions that I've summarized here, and they can be excised without affecting the rest of the bill. That is the first order of business. But efforts to force quotas into new fields by claiming they're needed to remedy algorithmic bias will continue, and they deserve a solution bigger than defeating a single bill. I've got some thoughts about ways to legislate protection against those efforts that I'll save for a later date. For now, though, passage of APRA is an imminent threat, particularly in light of the complete lack of concern expressed so far by any member of Congress, Republican or Democrat.
Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
"the bill breaks the deadlock by giving Republicans some of [whatever] their business allies want while it gives Democrats and left-wing advocacy groups [insane garbage policies]"
Tale as old as time
I will sign on the bad provisions in your amendment if you sign off on the bad provisions in my amendment!
Does the algorithm cause harm in the shape of disparate impact? Again, objective criteria will almost certainly result in underrepresentation of various racial, religious, gender, or disabled identity groups. To reduce the harm, Harvard will be forced to adopt admissions standards that boost black and Hispanic applicants past Asian and white students with comparable records.
This is where you lost me. Even if a spreadsheet does constitute an algorithm (seems dubious, but I'll take your word for it), I don't see how you make the leap. It's not the spreadsheet causing a disparate impact; it's simply reporting and reorganizing the scores.
For the algorithm to be causing the problem, the spreadsheet would need to be gathering lots of different data, combining it, then generating some composite score. If that process produces skewed results (as AI models tend to do), then I understand saying that you should look at whether you need to tweak the model.
Not saying there should be a law about it, that this particular law doesn't have problems, or that disparate impact claims are at odds with equal protection. But without addressing the logical flaw above, your parade of horribles doesn't make sense.
If a spreadsheet averaged or added scores from test applicants, wouldn't that be considered an algorithm? And if one demographic consistently performed poorly on one of the tests, wouldn't the employer have to reweigh the tests or add others to ensure that it doesn't have a disparate impact on the overall scores of that demographic?
Re-writing the test? Wasn't that done in Ricci?
Nope, that's not the way this bill is worded. A spreadsheet that merely helps you make a decision (by, for example, reporting and reorganizing the scores) is defined as an "algorithm" in this bill. The algorithm does not need to "cause" the disparate impact for it to be subject to this bill.
What you're describing would make sense in a rational bill - but that's not the bill we're being given. The logical flaw you describe is not in Prof. Baker's analysis but coded into the bill itself.
Was disparate impact a clever invention to bypass a need to prove causal linkage?
Which isn't necessarily bad. Medicine has improved by checking outcomes -- making sure people taking a new drug actually lived longer -- beyond looking at whether the drug did what it was supposed to do, like lowering blood pressure.
Disparate impact is the dark matter of discrimination law.
Dark matter was conceived to square the conflict between theory (that all long-range interactions in the universe are gravitational) and observation (that there wasn't remotely enough observable matter to generate that much gravity). You could make the numbers work by positing the existence of a form of matter that was unobservable except for its gravitational influences. Which just happened to be most of the mass of the universe. And since it was unobservable, you could attribute to it any sort of properties you needed to make the numbers work, without fear of being proven wrong.
A great many theorists start from the unalterable premise that, absent discrimination, there would be no racial disparities. This runs up against the fact that racial disparities are omnipresent, yet actual provable discrimination is hard to demonstrate, and can't remotely be demonstrated on a scale necessary to cause the disparities.
But if you start from the premise that all disparities are the product of discrimination, then just finding a disparity is proof of discrimination. Dark discrimination, otherwise unobservable...
One of the most clickbaity headlines I've seen on the VC. Impressive hustle!
(B) IMPACT ASSESSMENT SCOPE.—The impact assessment required under subparagraph (A) shall provide the following:
…
(IV) disparate impact on the basis of individuals’ race, color, religion, national origin, sex, or disability status; or
(V) disparate impact on the basis of individuals’ political party registration status.
—-
The Commission, in consultation with the Secretary of Commerce,
shall conduct a study, to review any impact assessment or evaluation submitted under this subsection. Such study shall include an examination of—
(i) best practices for the assessment and evaluation of covered algorithms; and
(ii) methods to reduce the risk of harm to individuals that may be related to the use of covered algorithms.
Yeah, unless you think ‘methods to reduce the risk of harm’ is some magic wand, this Act says *nothing* about instantiating affirmative action in contravention of the Constitution.
It's the usual problem with using disparate impact to trigger an investigation: The process IS the punishment, and the only way to prevent disparate impact IS to discriminate, because the numbers don't actually come out right unless you're actively discriminating to assure that, maintaining a quota system.
So the only way to avoid the process firing up and making your life hell is to institute a quota. And then lie about it, of course.
Using disparate impact automatically generates quotas in the real world.
Worth noting that some professions are heavily dominated by certain races (or subsets of races). Light to heavy construction in my part of the country is almost exclusively Hispanic, with the exception of electrical, plumbing, and HVAC, which are approaching 50% Hispanic.
The garment industry, what is left of it in this part of the country, is almost exclusively Vietnamese.
This is the first time I've heard of this bill so I'd certainly appreciate a correction, but I'm not seeing how it has the scope being claimed.
1. The bill says that the disparate impact provisions apply only to "a large data holder", which is defined to mean, "a covered entity or service provider that, in the most recent calendar year had an annual gross revenue of not less than $250,000,000" and accessed various combination of very large amounts of data. How would that apply to Harvard, or to "practically every business and nonprofit in the country"?
2. The provisions restrict how a large data holder uses a "covered algorithm", which is defined as an algorithm that uses "covered data". And "covered data" expressly excludes "employee information". So how does it end up restricting employment decisions?
re: 1 - Pretty much every company with a marketing department will meet the data access threshold. Of 8400 publicly-traded companies in the first online listing I found, I was down around number 6000 before I got below $250M in gross revenue. Call it two-thirds. As hyperbole goes, jumping from two-thirds to "practically every" is small change.
Unless indexed that 250 mill will cover lemonade stands after a while.
Also, a "small business" (defined to be an entity, including a non-profit, with revenues of under $40,000,000 in the preceding three years) is expressly excluded from the definition of "covered entity". So again—how does it "cover[] practically every business and nonprofit in the country"?
FY 2023 Harvard gross revenue was approximately $6.1 . . . wait for it . . . billion.
The bill has three size categories, as I read it: small businesses, "covered entities," and "large data holders." I noted the obligations of large data holders when I said, "big companies with lots of personal data face added scrutiny under APRA; they must do a quasi-public 'impact assessment.'" That's 13(c)(1), which you quote. But 13(c)(2) applies to covered entities of all sizes, and it too imposes an obligation to reduce any harm (i.e., any disparate impact) that their algorithm causes in access to housing, education, employment, healthcare, insurance, or credit. The only entities that should escape the quota obligation are small businesses with under $40 million in average annual gross receipts and limited sales of personal data. They are defined out of being covered entities. Please let me know if you disagree.
So Stewart, a lie right in the first sentence?
"More than two-thirds of Americans think the Supreme Court was right to hold Harvard's race-based admissions policy unlawful."
They didn't say they thought the decision was right. They said it was "mostly a good thing." The vast majority of Americans think stricter gun control is "mostly a good thing" too. That doesn't mean rulings enforcing the Second Amendment aren't right.
You're conflating public opinion with moral claims. Presumably the large section of the public that supposedly favors stricter gun control would also say that court decisions upholding such controls would be "mostly a good thing", at least before they understood what the gun controls mean in practice.
Yeah, that's exactly my point.
If you can't even get Californians to vote for AA or quotas in University admissions, you've lost (unless Congress does it under the radar):
https://www.theatlantic.com/ideas/archive/2020/11/why-california-rejected-affirmative-action-again/617049/
As I’ve been saying for years, the left should’ve given up on AA long ago and actually pivoted towards diversity, which is much more defensible. I think Harvard was truly doing it for diversity and not AA purposes, in the main. But I was incredibly disappointed by the dissents. They fully admitted that the diversity justification was, in their view, just a front for AA. So sad.
Oh, they let the mask slip.
They were actually being honest.
They certainly were. And a lot of people on the left feel that way.
But not everyone does. We don’t have to throw out diversity just because some people have ulterior motives. Lots of people have ulterior motives about lots of things. We usually don’t make good things illegal just because some people misuse them.
I mean... a lot of people on the right have much worse ulterior motives about this very issue!
I'm... not quite clear how intentionally discriminating against whites, Jews, or Asians, is somehow less bad than intentionally discriminating against blacks or latinos. I mean, it's ALL treating people according to immutable characteristics, instead of the content of their character, isn't it?
I suppose you could claim that they're not discriminating against group A, but instead in favor of group B. But in a zero sum situation, those are the same thing.
The problem with diversity separated from discriminatory intent is that, in the real world, "diversity" really is just an excuse to keep discriminating, nothing more. I don't think it's primarily out of actual hatred of the groups being discriminated against (though I'm coming around to that view after seeing quotes from DEI managers) so much as it is a Procrustean determination to get the intended outcome at any cost.
Yes Brett, we know you don't get it, or at least pretend not to. And I'm highly skeptical of your motives on the matter.
It certainly is interesting to see the Left continually justify racial discrimination and act as if only needs the correct 'branding' to be appropriate.
On the other hand, it's very sad to see the Right continually cling to whatever vestiges of bigotry they can manage to hang on to while maintaining plausible deniability (or without it, in cases such as yours).
Which industry group wrote this?
Who thought this should be published at a blog that advocates for affirmative action for right-wing law professors?
Carry on, clingers.
Are you a Mongoloid? It’s “Klingers” with a “K” and you know the only Afro- Amuricans at whatever Jerk-Water College you inflicted your presence on were mopping the “Flo’s”
Frank
How pleasantly calm the day had been, having nothing to fret about save incipient war, disaster, election fraud, inflation, health scares, and, of course, the fate of the First Amendment as applied to the former president of the United States. (Hint: it's a goner.)
No sooner did I relax than I came upon this announcement, which caused me to wonder whether the recent changes in nomenclature surrounding discrimination were made with this incipient opportunity for legislative action in mind.
Could be, and maybe not. Something to keep an eye on, nonetheless. To that end, Congressional Research Service has its own (mercifully) brief overview, to be found here:
https://crsreports.congress.gov/product/pdf/LSB/LSB11161
The problem with disparate impact is that it assumes that genetically inferior 85 IQ blacks should have the same proportionate outcomes as high quality 100 IQ whites.
I guess it’s not realistic to judge people on their abilities and not the hue of their skin, shape of their eyes, texture of their hair.
Frank
Oh My God....
The only thing that would undo this is a shooting civil war.
These antisocial, un-American, disaffected bigots are your fans, defenders, and target audience, Volokh Conspirators . . . and the reason strong, mainstream law faculties hire movement conservatives solely as tokens.
"...Instead, presumably, he'll be put in a pool with other dissidents and evaluated by humans—who will almost certainly wonder about his choice and may well presume that he's a racist...."
I don't see how this would happen. If I were on an Admissions panel and saw your hypo, I'd never jump to the assumption that Mr. X was a racist. Exactly the opposite... I'd assume that X was a member of a demographic group that is being hurt by this particular algorithm, and she/he wants to opt out, for the obvious reasons.
Can you elaborate on why you think opt-outs would be labeled as racists (or anti-gay, or anti-handicapped, etc)? That seems really counter-intuitive to me.
It seems to me that there's a substantial body of academic opinion that would say the very desire on the part of Asian or white applicants to escape the algorithmic imposition of quotas is problematic. It certainly isn't antiracist, which to many in academia means it's the opposite. But the bigger problem is that opting out of the algorithm just turns your fate over to an unknown set of people who don't have to evaluate you in any objective or fair way.
I can’t wait to see the first airline pilot with Down Syndrome.
Or the first paraplegic to start for the NBA.
Put a two year sunset on the whole thing, including any regs made under it.
If it turns out to be a fluffy kitten, you get the chance to renew it.
If it turns out to be a Dementor, don't renew it.
Disparate impact is bad, in and of itself. You can make that argument without having to say "actually, disparate impact is really SOMETHING ELSE in sheep's clothing, and that's the bad thing."
Wait, why is 'disparate impact' bad, in and of itself?
Look, my local grocery store charges more for ribeye steaks than they do for ground chuck. Blacks on average have lower incomes than whites. So, grocery store prices have disparate impact!
That's most "disparate impact", you know: Impartial rules running up against average population differences. Given the average population differences, the only way to avoid disparate impact is to stop being impartial, and instead discriminate.
Now, maybe whatever is causing a given average population difference is bad. Maybe. Identify a particular cause, and we could discuss that.
But there's nothing bad, in and of itself, about ribeye costing more than ground chuck.
Now, maybe whatever is causing a given average population difference is bad. Maybe. Identify a particular cause, and we could discuss that.
One significant cause is that many of these differences are self-perpetuating. There’s no reset button we can push to cancel out the effects of past discrimination.
Imagine you had twins, a daughter and a son. But you’re sexist, so you only sent the son to school. Then when they were 15, you had a change of heart and embraced women’s rights. So you enroll your daughter as a sophomore in high school alongside your son.
Surprise, you son gets As but she gets all Fs. Do you:
1. Assume she’s just dumb
2. Assume that you were right all along, and women are all just dumb
3. Realize that she missed grades K-9 and so is at a disadvantage, but just say fuck it, I’m treating her equally at this point, so she should be grateful and just deal
4. Give her some extra attention to try to get her back on track
Discuss.
Sorry, I was not clear. Basing laws and policies off disparate impact is bad, in and of itself, for all the reasons you posit and more.