This is a perfect example of why we are stuck in this place. This experiment is seeking to answer a question that misses the point entirely. And it muddies the waters on getting at the real issues. This person probably thinks they're helping. https://twitter.com/vinayprabhu/status/1307497736191635458
Okay. Let’s talk about why it misses the point. Because a lot of people are still confused. https://twitter.com/loweffort/status/1307773051430461440
My interpretation is that this person wants to run an experiment to answer a particular question: “Does Twitter’s cropping algorithm discriminate against Black faces in a statistically significant way?” But that question does not get at the real issue.
The conversation is about algorithmic bias. In some cases, that means what people assume it means; racial bias has been baked into the results. We see this in situations like systems used by police departments to identify “potential criminals”. Those are just racist.
But algorithmic bias is not that cut and dried. Algorithms of Oppression is a whole book about the different ways this can manifest. So holding on to such a simplistic view of things becomes a liability that keeps us from confronting the issues.
So this person is running their experiment, and the early results show “actually, it will choose a Black face half the time”. So everybody’s going to go “oh, this was just a fluke” and go back to what they were doing. That is a failure to engage with the deeper question.
Here’s the deeper issue. Twitter decided that they would control how we see images. They built a machine that makes decisions about which faces to show us. And whether intentional or not, that machine produced this. And this is fucking traumatizing to Black people. https://twitter.com/notafile/status/1307337294249103361
It’s not about *how often* it makes this kind of decision. Which is why that experiment misses the point. Algos are incapable of understanding the trauma that is caused by these decisions. The problem with social algos is they don’t understand that we are more than just data.
And it’s not just trauma. It is manipulation. It is radicalization. It is triggering mental illnesses and addictions. These things are happening on a massive scale *automatically*. With no one at the wheel. And when we shout no, the response we get is “we’ll make some tweaks”.
It doesn’t matter to me how often this happens. If it happens *once*, then you are feeding into an ongoing and unresolved trauma that Black people are dealing with. And we have to spend our time and mental energy figuring out how to deal with it. https://twitter.com/notafile/status/1307337294249103361
When you build an algorithm that makes decisions about faces, and that algorithm has no concept of the history of violence and oppression that is inherent in those faces, then you are building a machine that will inadvertently traumatize me. That is the problem.
So the reason I’m angry about this experiment is that it simplifies and misrepresents the issue that is being raised. And it is going to allow a bunch of people to accept this bullshit as justification for dismissing the real problem. As long as it’s not “on purpose” it’s fine.
You can follow @polotek.