Finally got a chance to read this and @kareem_carr's original thread. I totally agree with the spirit of this, but generally disagree with 2. 🧵 https://twitter.com/mmitchell_ai/status/1375893744998739968
Statistical models literally cannot create social bias. The modeler's social biases determine which data and which statistical models to use. The models themselves (and the algorithms that do the fitting) can neither add to nor subtract from these initial social biases, nor amplify them.
However, the *modeler* *creates* social bias through their data and modeling choices, then *amplifies* that social bias by acting on their model's output.

That is: "1. The *social* bias starts before the data and algorithms."
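
A minimal sketch of what I mean (a toy Python setup; the rates, the group labels, and the "under-recording" step are illustrative assumptions, not anyone's real data or system). The same estimator is fit twice; it just reports back whatever bias the recording process put into the labels:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True data-generating process: qualification is identical across groups.
group = rng.integers(0, 2, n)
qualified = rng.random(n) < 0.5          # same 50% rate in both groups

# Modeler's choice A: record the labels faithfully.
labels_fair = qualified.copy()

# Modeler's choice B: a biased recording process that drops ~40% of
# group 1's positive labels (e.g., a flawed measurement instrument).
labels_biased = qualified & ~((group == 1) & (rng.random(n) < 0.4))

# The same "algorithm" (a group-wise mean estimator) fit to each dataset
# reproduces exactly the bias it was given -- no more, no less.
for name, labels in [("fair labels", labels_fair), ("biased labels", labels_biased)]:
    rates = [labels[group == g].mean() for g in (0, 1)]
    print(f"{name}: group 0 rate = {rates[0]:.2f}, group 1 rate = {rates[1]:.2f}")
```

Fit to the fair labels, it estimates ~0.50 for both groups; fit to the biased labels, ~0.50 vs. ~0.30. The fitting step is identical in both cases; only the modeler's recording choice differs.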
I feel both @mmitchell_ai and @kareem_carr would generally agree that the responsibility for mitigating social bias ultimately lies with the modeler and their team/org, not with the statistical model or algorithm.
That said, I completely agree that *biased systems* (systems or pipelines put in place by an org with all their socially biased decisions) of statistical models or algorithms can be, and are, built to create and amplify social bias.
So I'd slightly change it to "2. Systems/pipelines of algorithms create, and amplify, race/gender biases, racism, and sexism".
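
To illustrate the amplification mechanism (a hypothetical toy simulation; the "concentrate patrols on the top-ranked neighborhood" rule and all the numbers are assumptions for illustration, not anyone's actual system): a small human-made skew in the historical record, compounded every round by a pipeline design choice, keeps growing even though the true rates are identical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two neighborhoods with *identical* true incident rates.
true_rate = np.array([0.10, 0.10])

# Historical records start with a small human-made skew toward neighborhood 0.
counts = np.array([12.0, 10.0])

for step in range(6):
    # "Model": rank neighborhoods by recorded counts.
    # Pipeline design choice: concentrate patrols on the top-ranked one.
    patrols = np.array([20, 20])
    patrols[counts.argmax()] = 80
    # Incidents are only recorded where patrols are present,
    # so next round's "data" inherits this round's allocation.
    counts += rng.binomial(patrols, true_rate)
    print(f"step {step}: recorded share = {(counts / counts.sum()).round(2)}")
```

The recorded share drifts from roughly 55/45 toward 80/20. The fitting math never changes; the *system* built around it turns a small initial bias into a large one.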
Thoughts?