Videos from @StanfordAIMI Symposium are up! I spoke on why we need to expand the conversation on bias & fairness.

I will share some slides & related links in this THREAD, but please watch my 17-minute talk in full (the other talks are excellent too!) 1/

While using a diverse & representative dataset is important, there are many problems this WON'T solve, such as measurement bias

Great thread & research from @oziadias on what happens when you use healthcare *cost* as a proxy for healthcare *need* 2/ https://twitter.com/oziadias/status/1293598376869507074?s=20
Another form of measurement bias is systematic error: for example, pulse oximeters (a crucial tool in treating COVID-19) and Fitbit heart rate monitors (used in 300 clinical trials) are less accurate on people of color 3/ https://twitter.com/math_rachel/status/1291512973580623872?s=20
Another type of bias is historical bias (aka systemic racism & systemic sexism). Gathering more data doesn't fix it 5/ https://twitter.com/math_rachel/status/1191065892341239808?s=20
There are scores of studies on racial & gender bias in medicine, showing that the pain of women is taken less seriously than that of men, and the pain of people of color less seriously than that of white people.

Result: longer time delays, lower quality of care, & worse outcomes 6/
Domain expertise is crucial for any applied machine learning project. In medicine, this must include PATIENTS.

I was invited to speak as an AI researcher, but my experience of being a patient is just as valuable (2 brain surgeries, a life-threatening brain infection, etc) 7/
Overall I've had access to great medical care, but I've also had many experiences of being dismissed & disbelieved

Being sent home with aspirin when I needed brain surgery

Being told to "relax" & take melatonin when I had a foreign object in my heart

& others I won't share 8/
You must listen to patients to understand the ways their data is incomplete, incorrect, missing, & biased

To understand the gap between what they experience in their bodies and what they can convince a doctor of

The tests that aren't ordered, the notes that aren't recorded 9/
Machine learning can often (unintentionally) have the effect of centralizing power. Since the medical system is already too often disempowering for patients, we need to be extra cautious of this 10/
We need to move the conversation on bias & fairness ➡️ power & participation. As Dr. @timnitGebru wrote, we should not just check error rates across groups, but question the underlying foundation: whether a task should even exist, who creates it, who owns it, how it is used 11/
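As a concrete baseline for what "check error rates across groups" means in practice, here is a minimal sketch (not from the talk; the function name and toy data are hypothetical) of disaggregating a classifier's error rate by group — the kind of audit the thread argues is necessary but not sufficient:

```python
# Minimal sketch: per-group error rates for binary predictions.
# This is the "necessary but not sufficient" fairness check the thread
# refers to -- it says nothing about who defines the task or owns the model.
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Return {group: error_rate} for aligned label/prediction/group lists."""
    errors = defaultdict(int)
    counts = defaultdict(int)
    for truth, pred, g in zip(y_true, y_pred, groups):
        counts[g] += 1
        if truth != pred:
            errors[g] += 1
    return {g: errors[g] / counts[g] for g in counts}

# Hypothetical toy data: the model makes more errors on group "b".
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
rates = error_rates_by_group(y_true, y_pred, groups)
```

A disparity surfaced this way (here, group "b" errs where "a" does not) is only the starting point; the thread's argument is that the deeper questions are about power and participation, which no metric captures.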
I am excited about work happening in Participatory Machine Learning, moving from

Explainability ➡️ Recourse
Transparency ➡️ Contestability
"Is this fair?" ➡️ "How does this shift power?"
Predictive accuracy ➡️ Good decision-making

See this thread: 12/ https://twitter.com/math_rachel/status/1284976543769309184?s=20
"Data are not bricks to be stacked, oil to be drilled, gold to be mined, opportunities to be harvested. Data are humans to be seen, maybe loved, hopefully taken care of." @rajiinio 13/
This talk was part of @StanfordAIMI session on Fairness in Clinical Machine Learning, together with @dxmartinjr @judywawira @drnigam @jonc101x

You can watch here: