I wrote a 🧵

As part of my research, I study the ethics of AI/machine learning algorithms & the implications of their applications. I want to talk to you abt #Prop25 on the California ballot, which seeks to end cash bail & replace it w/a “pre-trial risk assessment” algorithm. 1/
Abolition of cash bail is a necessary step to address the violent racism of the prison-industrial complex & criminal justice system, but replacing it with an algorithm will create an even more biased, unfair system to keep people in jail while awaiting trial. 2/
Machine learning algos like this one work by taking in 1000s or millions of pieces of data--in this case, court records. The model then finds correlations between certain characteristics & whether people showed up for trial. It does this with little human oversight, & there is no built-in way to correct for the bias in that data. 3/
Because the criminal justice system in the US is biased against people of color, poor people, & those with mental illnesses, the data are biased. The algorithm will efficiently learn this bias & go on to replicate it under the false veneer of neutrality. Racism in, racism out. 4/
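(A minimal sketch of that dynamic, using entirely synthetic data and a made-up "recorded failure to appear" label -- nothing here comes from a real pre-trial tool:)

```python
# "Racism in, racism out": a model trained on biased records learns the bias.
# All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Group 1 stands in for an over-policed population: enforcement bias inflates
# both its prior-arrest counts (a feature) and its recorded no-shows (the label).
group = rng.integers(0, 2, n)
prior_arrests = rng.poisson(1 + 2 * group)
recorded_fta = rng.random(n) < (0.10 + 0.10 * group)

# Race/group is never given to the model -- only the "neutral-looking" record.
X = prior_arrests.reshape(-1, 1)
scores = LogisticRegression().fit(X, recorded_fta).predict_proba(X)[:, 1]

print("mean risk score, group 0:", round(scores[group == 0].mean(), 3))
print("mean risk score, group 1:", round(scores[group == 1].mean(), 3))
# The over-policed group comes out with systematically higher scores, purely
# because the historical data encoded that bias.
```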
If #Prop25 passes, people who are awaiting trial will have their information fed into this algorithm, and it will return to judges a value of “low,” “medium,” or “high” risk, based on a score determined by how this person compares to those in the training dataset. 5/
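(For illustration, the thresholding this implies might look like the sketch below; the cutoffs are invented, not the ones any real tool uses:)

```python
# Hypothetical bucketing of a 0-1 risk score into the label a judge sees.
# The cutoffs are invented for illustration only.
def risk_bucket(score: float) -> str:
    if score < 0.3:
        return "low"
    if score < 0.6:
        return "medium"
    return "high"

print(risk_bucket(0.12))  # low
print(risk_bucket(0.74))  # high
```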
This isn’t the first time algorithms like this have been shown to be discriminatory. Millions of Black patients were under-served by an algorithm used to guide healthcare decisions: it used past healthcare spending as a proxy for medical need, & because less money is spent on Black patients, it systematically under-rated how sick they were. 6/
Amazon scrapped its black-box resume-screening algorithm after discovering that the model had learned, from the company’s past hiring decisions, to penalize resumes that included the word “women’s.” The algorithms replicate our own bias. 7/
Pre-trial risk assessment models don’t explicitly factor in variables such as race or class, but because we live in a society that is racist & classist, the effects of systemic oppression are apparent in other variables & the algorithms disfavor the most vulnerable among us. 8/
The majority of jurisdictions that use these models include home ownership as a variable for prediction. Those who rent their homes or are unhoused are penalized by the algorithms. Black people are overrepresented amongst the unhoused in California. 9/
Redlining & other racist policies have led to a homeownership gap between white & Black families of nearly 30%. A pre-trial risk assessment algorithm will further this injustice by factoring centuries of historic, systemic racism into its calculations. Racism in, racism out. 10/
Age at first arrest is another oft-used variable, counted regardless of whether charges were ever filed. In a state in which Black children are overpoliced & over 3X more likely to be arrested than white youth, the model will punish Black people for living in a racist society. 11/
Some algorithms use “past or current mental health treatment” as a factor in determining whether someone should have their freedom while awaiting trial. Those with a current or previous history of mental illness should not be imprisoned for it. 12/
Most of these models also include a history of substance use. The War on Drugs led to disproportionate levels of imprisonment among Black & Latinx communities. The legalization of marijuana in CA didn’t include automatic record expungement for those who had been imprisoned on marijuana offenses. 13/
These algorithms are not calculating pre-trial risk based explicitly on a person’s race, but racism is so deeply entrenched in our country that its impact can be seen in nearly every variable used by these models to make their calculations and predictions. 14/
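(One way to see how much of the race signal leaks through “race-blind” variables: try to predict group membership from them. A toy probe on synthetic data, using stand-ins for the homeownership and age-at-first-arrest variables above:)

```python
# Toy probe: how recoverable is race/group from "neutral" variables?
# Synthetic data; the ~30-point homeownership gap mirrors the one cited above,
# everything else is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
group = rng.integers(0, 2, n)
owns_home = (rng.random(n) < (0.65 - 0.30 * group)).astype(float)
age_first_arrest = rng.normal(30 - 6 * group, 8, n)   # over-policing lowers this

X = np.column_stack([owns_home, age_first_arrest])
probe = LogisticRegression().fit(X, group)
print("accuracy predicting group from 'race-blind' features:",
      round(probe.score(X, group), 3))   # well above the 0.5 chance level
```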
Machine learning is popular because it is good at what it does. It can find connections between variables & make very accurate predictions, but it does not understand the concept of fairness. It takes our bias as an input, & returns more of the same. Racism in, racism out. 15/
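(A quick sketch of what “returns more of the same” looks like in the error rates, again on invented numbers: even when show-up rates are identical across groups, a biased score flags far more people from one group who would have appeared:)

```python
# Sketch: where do the model's mistakes land? Synthetic scores and outcomes;
# the numbers are invented to illustrate the disparity, not drawn from any tool.
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
group = rng.integers(0, 2, n)

appeared = rng.random(n) < 0.80                               # same show-up rate in both groups
score = np.clip(rng.normal(0.35 + 0.20 * group, 0.15), 0, 1)  # but biased scores
flagged = score >= 0.5

for g in (0, 1):
    showed_up = (group == g) & appeared
    rate = flagged[showed_up].mean()
    print(f"group {g}: {rate:.0%} of people who appeared were still flagged high risk")
```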
In 2019, researchers who are experts in this field wrote an open letter detailing the flaws in models like the one at the heart of #Prop25, and strongly recommended against their use. I agree with their analysis and conclusion. 16/
The algorithm isn’t the only issue with #Prop25. Judges will be allowed to ignore the model’s assessment. Studies of other states using these methods found that judges are more likely to release white people deemed a “medium” risk & detain Black people with the same score. 17/
#Prop25 also expands the powers of law enforcement agencies & will increase their funding. This is especially egregious during a time when Californians are loudly & righteously calling to defund the police in response to systemic brutality perpetrated by law enforcement. 18/
I stand in solidarity with racial justice groups in my home state of California against both cash bail and replacing it with a system that will exacerbate the harm inflicted on Black people, Latinx and Indigenous people, the unhoused and the mentally ill. 19/
You can follow @ellouelle.