Here’s the deeper issue. Twitter decided that they would control how we see images. They built a machine that makes decisions about which faces to show us. And whether intentional or not, that machine produced this. And this is fucking traumatizing to Black people. https://twitter.com/NotAFile/status/1307337294249103361
It’s not about *how often* it makes this kind of decision. Which is why that experiment misses the point. Algos are incapable of understanding the trauma that is caused by these decisions. The problem with social algos is they don’t understand that we are more than just data.
And it’s not just trauma. It is manipulation. It is radicalization. It is triggering mental illnesses and addictions. These things are happening on a massive scale *automatically*. With no one at the wheel. And when we shout no, the response we get is “we’ll make some tweaks”.
It doesn’t matter to me how often this happens. If it happens *once*, then you are feeding into an ongoing and unresolved trauma that Black people are dealing with. And we have to spend our time and mental energy figuring out how to deal with it. https://twitter.com/notafile/status/1307337294249103361
When you build an algorithm that makes decisions about faces, and that algorithm has no concept of the history of violence and oppression that is inherent in those faces, then you are building a machine that will inadvertently traumatize me. That is the problem.
So the reason I’m angry about this experiment is that it simplifies and misrepresents the issue that is being raised. And it is going to allow a bunch of people to accept this bullshit as justification for dismissing the real problem. As long as it’s not “on purpose” it’s fine.
You can follow @polotek.