Child Welfare Algorithm May Unfairly Target Disabled Parents, Complaints to DOJ Allege
"When you have technology designed by humans, the bias is going to show up in the algorithms," said one former child welfare worker.
The Justice Department has reportedly been examining an algorithm used by one Pennsylvania county's child welfare agency to help determine which allegations of child neglect deserve a formal investigation, following a series of complaints that the algorithm is unfairly targeting parents with disabilities. While the county claims that the algorithm is intended to reduce human error in child welfare investigations, critics argue that the tool places disabled parents—who are already disproportionately investigated by child welfare agencies—at risk of unnecessary government intervention.
According to the Associated Press, in 2016, Allegheny County—where Pittsburgh is located—began using "The Allegheny Family Screening Tool," an algorithm designed to help social workers better identify which families needed to be investigated for child neglect—a broad term encompassing everything from leaving children unsupervised, to not having enough food, to frequent school absences.
The tool compiles data from "Medicaid, substance abuse, mental health, jail and probation records, among other government data sets," and generates a "Family Screening Score." According to the county's website, a high score indicates a high likelihood that the child will be seized by state authorities in the future. "When the score is at the highest levels, meeting the threshold for 'mandatory screen in,' the allegations in a call must be investigated," the county's website reads.
According to the A.P., the Justice Department has been receiving complaints about the algorithm since at least last fall. The complaints primarily focus on the algorithm's inclusion of disability-related data in its Family Screening Score, a practice that could be unfairly punishing to disabled parents—and possibly violate the Americans with Disabilities Act.
The county's own statements seem to lend support to claims that its algorithm singles out disabled parents: it told the A.P. that when data related to disabilities is included, it "is predictive of the outcomes," adding that "it should come as no surprise that parents with disabilities … may also have a need for additional supports and services."
The full extent of the Justice Department's involvement is unknown. However, two anonymous sources told the A.P. that attorneys from the Justice Department's Civil Rights Division "[urged] them to submit formal complaints detailing their concerns about how the algorithm could harden bias against people with disabilities, including families with mental health issues."
Allegheny County claims its algorithm is simply a tool used to make it easier to screen families for possible child welfare investigations, insisting that the tool was responsibly designed. "The design and implementation of the AFST was a multi-year process that included careful procurement, community meetings, a validation study, and independent and rigorous process and impact evaluations," the county's website reads. "In addition, the resultant model was subjected to an ethical review prior to implementation."
But critics argue that these kinds of algorithms frequently end up unfairly targeting families due to their race, income, or disabilities. "When you have technology designed by humans, the bias is going to show up in the algorithms," Nico'Lee Biddle, a former Allegheny County child welfare worker, told the A.P. in an earlier investigation into the Family Screening Tool last year. "If they designed a perfect tool, it really doesn't matter, because it's designed from very imperfect data systems." In June of last year, a similar algorithm in use in Oregon was discontinued over concerns that it was racially biased.
Parents with disabilities are already at heightened risk of losing their children to state custody. While Allegheny County's algorithm may be intended to help social workers make better decisions, it could end up further ingraining biases against disabled parents.
"I think it's important for people to be aware of what their rights are," Robin Frank, a family law attorney representing an intellectually disabled man whose daughter was seized into state custody, told the A.P. "And to the extent that we don't have a lot of information when there seemingly are valid questions about the algorithm, it's important to have some oversight."
“When you have technology designed by humans, the bias is going to show up in the algorithms,” Nico’Lee Biddle, a former Allegheny County child welfare worker, told the A.P. in an earlier investigation into the Family Screening Tool last year. “If they designed a perfect tool, it really doesn’t matter, because it’s designed from very imperfect data systems.”
Interesting that when DCFS cries “Algorithms!” suddenly Reason seems sympathetic to the notion that at least some algorithms can be susceptible to manipulation, biased, or fallible.
Almost as if Reason‘s algorithm had a bias.
thisss
Algorithms use rules and logical flows; Reason does not.
I hope you know that rules and logic, and probably algorithms, are white culture privilege. We should be proud of Reason for resisting.
Well, of course Al Gore Rhythms are biased. 🙂
These kinds of algorithms are based on machine learning. When they preferentially “target” certain groups, it is because those groups statistically are overrepresented among the targets.
It is possible but difficult to hardcode human biases into a machine learning algorithm. People ironically call that “debiasing”. It is ludicrous to believe that people go through the trouble of doing so to oppress the disabled or minorities.
Usually, when people try to manipulate machine learning algorithms this way, it is in an attempt to remove supposed discrimination or biases, and it usually doesn’t work very well, meaning that the resulting manipulated machine learning algorithm will give more false answers. In the case of machine learning involving text or images, the answers are often even comical or contradictory.
On the contrary, it is astonishingly easy to hardcode human biases into machine learning. In fact, it’s almost inevitable.
Machine learning does not start from scratch – it starts from “training sets” – examples of data matched to the answers reached by human decisionmakers. If those initial human decisionmakers were biased, even unintentionally, those become the outputs that the machine learning algorithm gets trained to replicate. The algorithm replicates (and often amplifies) the bias buried in the training set.
No one “goes to any trouble” or hardcodes bias in intentionally. But it is still in there and becomes a self-fulfilling prophecy.
Note, by the way, that the human decisionmaker bias in this case was startlingly obvious. The fact that “parents with disabilities … may also have a need for additional supports and services” does not even slightly suggest that parents with disabilities need to lose their kids.
You are correct that the efforts to remove bias can be even more ham-handed and counterproductive than just leaving in the original errors, but you are wrong to claim that those errors don’t exist.
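A minimal sketch of the point above about biased training labels, assuming a scikit-learn-style setup; the synthetic data, feature names, and the “biased screener” are hypothetical and are not the county’s actual model or inputs:

```python
# Hypothetical illustration: a model trained on labels produced by a biased
# human screener learns to reproduce that bias, even though "disability" has
# no effect on the true outcome in this synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic features (purely illustrative, not real AFST inputs).
disability = rng.integers(0, 2, n)   # 1 = parent has a disability
poverty = rng.integers(0, 2, n)      # 1 = household below poverty line

# Ground truth: actual neglect depends only on poverty here, by construction.
true_neglect = (rng.random(n) < 0.05 + 0.10 * poverty).astype(int)

# Training labels come from past human screening decisions, which (in this
# toy example) flag disabled parents more often regardless of actual neglect.
screened_in = (rng.random(n) < 0.05 + 0.10 * poverty + 0.15 * disability).astype(int)

X = np.column_stack([disability, poverty])
model = LogisticRegression().fit(X, screened_in)   # trained on the biased labels

# The learned coefficient on "disability" comes out large and positive, so the
# model replicates the screeners' bias, even though disability was irrelevant
# to the true outcome constructed above.
print(dict(zip(["disability", "poverty"], model.coef_[0].round(2))))
```

Training the same model on the constructed `true_neglect` labels instead would push the disability coefficient toward zero, which is the rebuttal’s point: the bias lives in the labels humans produced, not in anything the algorithm’s author typed.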
No, training data is not “matched to the answers reached by human decisionmakers”, it is matched to actual, objective observations. In the case of predicting the likelihood of crime or child abuse using a machine learning algorithm, a training set consists of demographic data, economic data, police interactions, and legal records. The machine learning algorithm does not predict whether some biased human thinks a crime might occur, it predicts whether a crime actually has occurred under those circumstances. If the training set is statistically representative and there is no error in the machine learning algorithm, the results will be statistically correct. Even if the training set is not statistically representative, the algorithm will still be unbiased.
It is, of course, possible that the Allegheny system is using a traditional score card or point system, in which human experts make guesses about what indicators of future problems are and assign points that are then totaled to arrive at some final score. But in that case, it would be incorrect to call such a system an “algorithm”. Calling it an algorithm would simply be an attempt to obfuscate what is actually going on.
And even then, it is not clear that such a system would be “biased”, since expert opinion that represents “bias” in a social justice sense may well correspond to statistical fact and hence be “unbiased” in terms of outcomes. Human biases generally reflect statistical facts in the real world fairly well.
Leave it to Emma Camp to write such a confused and superficial article that we can’t even tell whether what the county did was a point scoring system or a machine learning algorithm, and to not even understand the different kinds and nature of “biases”, representative training sets, or data vs algorithms. A truly poor showing.
“No, training data is not “matched to the answers reached by human decisionmakers”, it is matched to actual, objective observations. In the case of predicting the likelihood of crime or child abuse using a machine learning algorithm, a training set consists of demographic data, economic data, police interactions, and legal records. The machine learning algorithm does not predict whether some biased human thinks a crime might occur, it predicts whether a crime actually has occurred under those circumstances.”
What we don’t know is the biases of the human beings who interacted with, investigated, and prosecuted the individuals who ultimately created these statistics. Going forward, relying on these statistics may just be creating self-fulfilling prophecies that embed those biases in a system wherein reasonable individuals might reach different conclusions.
If there were persistent biases in the criminal convictions (the output variable), then people would and should be suing over that, instead of suing over the “algorithm”.
On the other hand, if there are persistent biases in interactions, investigations, and prosecutions (the input variables), the machine learning algorithm would actually correct for that.
In fact, that’s the opposite of what would happen.
You are confusing your inputs and outputs. Demographic and economic data are inputs, not objective output observations. Police interactions are outputs that you want the algorithm to model, but they are most emphatically not objective. They are multi-layered human interactions which are subject to the biases we are talking about. Legal records suffer the same problem – they are the output of a subjective, not objective, process and they are not immune from bias.
Your statement about the unbiased algorithm regardless of statistically unrepresentative training sets is also very wrong.
The article above may not demonstrate perfect understanding of what “machine learning” is but your comment shows far more serious errors of understanding.
No, you are confusing inputs and outputs. In correctly implemented machine learning algorithms, police interactions are not the output variable, guilt is. That is, we are not predicting “how likely is it that police interacted with this individual given the input data”, we are predicting “how likely is it that this individual is guilty given the input data”. Based on probability of guilt, we then determine whether a police interaction should occur.
Of course, it is possible that the people implementing this system chose the wrong output variables or made a series of other mistakes. But the article suggests that somehow humans encode their biases when they write algorithms, and that is nonsense.
Guilt is determined in a court of law; we have strong mechanisms in our justice system to deal with biases in the determination of guilt. So the output variable is as unbiased as we can make it.
The input variables are likely biased in various ways. But a machine learning algorithm predicting a (say) racially unbiased output variable from racially biased input variables will attempt to remove those biases on the inputs.
So, in summary:
– Machine learning algorithms know nothing about race and don’t have any biases in the human sense.
– Training data can be biased in some way, but that bias isn’t coded into the algorithm.
– Furthermore, if the output variables in the training set are unbiased, machine learning algorithms will work hard to remove biases from the input data.
– If the output variable contains (say) racial biases, that is a problem with the legal system, not with the algorithm or the training data and must be addressed in the legal system, not by suing over the use of algorithms.
if Reason dot com truly cared about the disabled they’d argue to make SSDI more generous.
they don’t because they’re glibertarian shitheads who want to impose austerity and misery on the whole of humanity.
If by “austerity” you mean the forwarding of all public welfare benefits to anyone on the globe who finds themselves within the social construct loosely defined as “The United States” then, they’re 142% behind it.
So, an emotive leftist screed devoid of actual facts, considered discriminatory by the activists Emma labels as experts to prop up her diatribe.
I assume if the children suffered and they didn’t provide these screenings/services, she’d be up in arms about the added needs of disabled parents and why the government didn’t notice they were struggling.
Fuck, she can’t even be bothered to tell us what these screenings contain. Is it a checklist for removing the kids, or things needing improvement for the kids’ welfare, with direction to services that can help? With one I could understand the issue; the other sounds like hollow leftist whining.
So what kind of disabilities? MAGA supporters? School board dissidents? Exemplars of white culture?
This software could cripple families with parents that are cri…crap.
This is happening right now.
Which, apparently, is absolutely diddly squat.
If I’m a Chinese CCP official, I now know I can fly very, very VERY large surveillance balloons over your country and you won’t do anything about it. Thank you, and noted.
You know, I’m seriously thinking about this and wondering if this actually violates anything either morally/ethically, let alone legally. I may have to rethink my initial reaction.
Biden, like the balloon, is full of hot air.
People with disabilities. That is, people who have physical or mental hurdles to overcome just to do normal things. Like parenting. So there is an EXTRA risk factor in getting actions to the ‘not neglect’ good-enough stage.
By definition, that means the children of parents with disabilities are more at risk of neglect (because the world isn’t fair).
And they needed to create an algorithm to figure that out?
Let’s be specific:
Single parent A is an alcoholic (which, I think, is categorized as a disability). Child comes to school unkempt and hungry. Another child has a single, working, ‘normal’ mother. Child comes to school unkempt and hungry. You have enough resources (time/money) to ‘check’ on one child. Which should you choose?
“algorithms frequently end up unfairly targeting families due to their race, income, or disabilities.”
But do the algorithms incorrectly target those families? What matters is the results of the investigations, not the algorithms that triggered them.
They are not removing all children from black households–therefore they clearly are not serious about children’s well-being.
Just more govt employee make-work.