When we talk about algorithms being biased, it's not necessarily because the bias was deliberately built in. This thread shows Twitter cropping images of Mitch McConnell and Pres. Obama. Mitch gets picked every time. Other experiments also show a racial preference in the algorithm. Why? 1/ https://twitter.com/bascule/status/1307440596668182528
It's not that anyone at Twitter said "Let's prioritize white faces over black ones!" It's a side effect of the algorithm. It used to look for faces, and facial recognition generally works much better for white people because models are trained on far more examples of white faces. 2/
If a model trains on a wide range of white faces, it recognizes white faces better. Facial recognition algorithms also work better on men: they often learn from celebrity photos, and women have a narrower range of acceptable appearance to be a celebrity. (One quick way to see this kind of gap is to compare error rates per group; rough sketch below.)
(side note: facial recognition should be banned)
3/
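To make the claim in 2/ and 3/ concrete, the simplest check is to compute a model's error rate separately per demographic group. Here's a minimal Python sketch; the arrays, group labels, and numbers are made-up stand-ins for real evaluation data, not anything Twitter actually runs.

```python
import numpy as np

# Hypothetical stand-ins: in practice these come from running a face
# recognition model over a labeled evaluation set.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])                  # ground truth: match / no match
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])                  # model predictions
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])  # demographic group per example

def error_rate_by_group(y_true, y_pred, group):
    """Compute the error rate separately for each demographic group."""
    return {str(g): float(np.mean(y_true[group == g] != y_pred[group == g]))
            for g in np.unique(group)}

print(error_rate_by_group(y_true, y_pred, group))
# The groups happen to tie in this toy data, but on real data a much
# higher error rate for one group is exactly the disparity described above.
```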
Anyway, that's not what Twitter's cropping is doing now. It's just trying to detect the most prominent (salient) areas of an image. Those often end up being the brighter areas, and brighter areas often mean whiter faces. (Toy sketch of that kind of crop picker below.)

So does that make the algorithm racist? YES 4/
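For illustration, here's a toy sketch of a crop picker that scores candidate windows by mean brightness. Twitter's real cropper uses a learned saliency model, not raw brightness; this is only a hypothetical stand-in showing how "most prominent region wins" can systematically favor brighter faces.

```python
import numpy as np

def pick_crop_by_brightness(gray_image, crop_h, crop_w, stride=8):
    """Toy crop picker: slide a window over a grayscale image and keep
    the window with the highest mean pixel value (brightness).
    A learned saliency model is more sophisticated, but if 'salient'
    correlates with 'bright', the outcome is similar."""
    best_score, best_pos = -1.0, (0, 0)
    h, w = gray_image.shape
    for top in range(0, h - crop_h + 1, stride):
        for left in range(0, w - crop_w + 1, stride):
            score = gray_image[top:top + crop_h, left:left + crop_w].mean()
            if score > best_score:
                best_score, best_pos = score, (top, left)
    return best_pos, best_score

# Hypothetical example: a mostly dark image with one bright patch.
# The chosen crop lands on the bright patch every time.
img = np.full((200, 200), 40, dtype=np.float32)
img[20:80, 20:80] = 220.0
print(pick_crop_by_brightness(img, 64, 64))
```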
It's not explicitly racist like someone made it do that on purpose. But the fact that this happens means that either (1) no one thought to check if this was an issue or (2) they checked, they know, and they don't care. Both of those are racist problems 5/
And this is a problem we see a lot with AI. Engineers should have auditing processes in place to look for all kinds of bias. More diversity in teams also helps here.

Women and POC can warn white guys about ALL KINDS OF PROBLEMS they don't even know exist, esp on social media 6/
White guy engineers may care a lot about fair AI, but they don't know what they don't know.

Like my husband has never seen me get harassed when we run together, but I get harassed ALL THE TIME when I run alone. How would he know such a thing happens if I didn't tell him? 7/
More diverse teams AND explicit auditing for racial / gender / other bias are critical elements to building fair AI that is not racist, sexist, or otherwise unfairly biased. When algorithms do what Twitter's does, they ARE racist, even if it's unintentional, but... 8/
BUT these kinds of biases are SO WELL KNOWN in the AI community that no platform should get a pass for allowing it to happen. They should be checking for this and correcting for it constantly. If they aren't, then they are explicitly stating that they don't care 9/
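What "checking for this constantly" could look like, sketched in Python: build paired test images (like the McConnell/Obama composites in this thread), run them through the cropper, and tally which group's face survives the crop. The crop_fn and which_face_fn hooks here are hypothetical placeholders, not real Twitter APIs.

```python
from collections import Counter

def audit_crop_bias(test_cases, crop_fn, which_face_fn):
    """Run paired test images through a cropping algorithm and tally
    which demographic group's face survives the crop.

    test_cases    -- iterable of (composite_image, group_of_face_A, group_of_face_B)
    crop_fn       -- the cropper under audit (hypothetical hook, returns a cropped image)
    which_face_fn -- returns 'A', 'B', or None for a cropped image (hypothetical hook)
    """
    tally = Counter()
    for composite, group_a, group_b in test_cases:
        kept = which_face_fn(crop_fn(composite))
        if kept == "A":
            tally[group_a] += 1
        elif kept == "B":
            tally[group_b] += 1
        else:
            tally["neither"] += 1
    return tally

# Usage idea: build composites pairing faces across demographic groups,
# then look at the tally. If one group wins the crop far more often than
# chance, that's the bias this thread is describing, and it should be
# treated as a bug to fix before shipping.
```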
So don't let people argue that cropping images to show white people most of the time isn't racist. It is. Twitter has no excuse to be unaware of these issues. The fact that it happens means, in one way or another, they don't care enough to do anything about it. That's racist /end
Addendum: below is from the CDO of Twitter. It's great that they are working on this!
Let me also add: I'm using twitter as an example here, but these issues are prevalent in AI across platforms. It's an issue we should watch out for everywhere https://twitter.com/dantley/status/1307468956278480896
You can follow @jengolbeck.