I'm starting a thread of companies & execs who don't understand that algorithms can still be biased on variables that are not inputs.

Machine learning is great at finding latent variables: even when a protected attribute is excluded from the inputs, a model can reconstruct it from correlated proxies. https://twitter.com/math_rachel/status/1191064500834750464?s=19
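A minimal sketch of this effect in Python (numpy and scikit-learn assumed; the data is synthetic and the proxy features are hypothetical stand-ins for things like shopping categories or occupation codes):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000

# The protected attribute (e.g. gender) -- never shown to the model.
protected = rng.integers(0, 2, size=n)

# Hypothetical proxy features that correlate with the protected attribute.
proxies = np.column_stack([
    protected + rng.normal(0, 0.5, size=n),
    0.8 * protected + rng.normal(0, 0.5, size=n),
])

X_train, X_test, y_train, y_test = train_test_split(proxies, protected, random_state=0)

# Train on the proxies only, then see how well the "excluded"
# attribute can be recovered from them.
clf = LogisticRegression().fit(X_train, y_train)
print(f"protected attribute recovered with accuracy {clf.score(X_test, y_test):.2f}")
# ~0.90 on this synthetic data: "not an input" does not mean "not inferable".

Any downstream model trained on those proxies can therefore be biased on the protected attribute without ever seeing it.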
The CEO of Goldman Sachs assuring us that gender is not an input to their @AppleCard credit decisions (incorrectly implying that the decisions can't be biased on gender): https://twitter.com/gsbanksupport/status/1194022629419704320?s=19
The CPO of Google's YouTube told the New York Times that because YouTube doesn't measure extremism, the recommendation algorithm can't be biased towards extremist videos. That reasoning is flawed: an algorithm can be biased on a variable nobody is measuring. https://twitter.com/math_rachel/status/1112141462190219264?s=19
The COMPAS recidivism algorithm used in US courts has double the false positive rate (people rated high risk who do not reoffend) for Black defendants compared to white defendants, yet race is not an input variable: https://twitter.com/math_rachel/status/1191062268793872384?s=19
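Disparities like this only surface if you audit for them. A minimal sketch of such an audit (numpy assumed; the arrays are hypothetical inputs you would supply): compute the false positive rate separately per group, using a group label collected for the audit even though it was never a model input.

import numpy as np

def false_positive_rate(y_true, y_pred):
    # Among people who did NOT reoffend, the share rated high risk.
    did_not_reoffend = y_true == 0
    return (y_pred[did_not_reoffend] == 1).mean()

def audit_by_group(y_true, y_pred, group):
    # group is collected for auditing only -- it was never a model input.
    for g in np.unique(group):
        mask = group == g
        print(f"group {g!r}: FPR = {false_positive_rate(y_true[mask], y_pred[mask]):.2f}")

If the per-group FPRs differ sharply, the model is biased on that group variable regardless of what its inputs were.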
Important example from @lizjosullivan: proposed changes to HUD rules would make housing discrimination much easier (in part by treating an algorithm as acceptable if its inputs are not substitutes for a protected characteristic). Such defenses “provide a huge loophole” https://twitter.com/lizjosullivan/status/1194059449591304193?s=19
If you are interested in learning more about different types of bias & steps to mitigate them, here is a recent talk I gave: https://twitter.com/math_rachel/status/1191059437860950016?s=19