So here’s the thing, tech people: I see a lot of comments on this saying it can’t be racist, because it’s “just” selecting a certain brightness or edge contrast or whatever and hunting for faces. But if your algorithm reliably prefers white faces? It’s racist. https://twitter.com/ericajoy/status/1307420092594974720
This is maybe the purest example there is of intent not mattering when the outcome is inequality. An algorithm has no intent at all. The programmers almost certainly didn’t intend this outcome. But the result is still racist, because they didn’t proactively prevent it.
So you didn’t intend your algorithm to be racist, programmers, but here it is, being super racist. Why is that? Most likely because you didn’t think to test it on different skin tones. Which is, itself, a racist action. Racism exists in who you don’t think of, too.
If your whole process for making something like this, getting it approved, and rolling it out to millions of users doesn't at some point include vigorously testing it on lots of different faces of color, your process is racist.
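What would that testing look like in practice? Here's a minimal sketch of one kind of paired audit, with a hypothetical `crop_preview` function standing in for the real algorithm: feed it images that each contain one lighter-skinned and one darker-skinned face, and count whose face survives the crop.

```python
from collections import Counter

def overlaps(a, b):
    """True if rectangles a and b, given as (x0, y0, x1, y1), intersect at all."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def audit_crop(image_pairs, crop_preview):
    """Count how often each group's face ends up inside the chosen crop.

    image_pairs: list of (image, {"light": bbox, "dark": bbox}) tuples.
    crop_preview: the cropping algorithm under test; returns a crop rectangle.
    Both are hypothetical stand-ins for whatever the real pipeline exposes.
    """
    kept = Counter()
    for image, faces in image_pairs:
        crop_box = crop_preview(image)
        for group, face_box in faces.items():
            if overlaps(crop_box, face_box):
                kept[group] += 1
    total = len(image_pairs)
    for group, count in sorted(kept.items()):
        print(f"{group} faces kept in crop: {count}/{total} ({count / total:.0%})")
    return kept
```

If those two rates diverge sharply across a large, varied set of images, the crop is making a racially skewed choice, whatever anyone intended.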
And it’s a reflection of the lack of diversity in tech. Which is also, yes, racist. None of which means “you, developer #86, are personally about to join the Klan.” It means “you need to examine your practices to actively consider people of color as end users and eliminate bias.”
I think when a lot of white people hear “this algorithm is racist” they think we’re picturing the developers as a cackling coven of open white supremacists out to hurt Black people in obscure ways. But racism doesn’t have to be active; it can just be not considering consequences.
I know there’s a phrase in tech that applies here: garbage in, garbage out. It’s crucial to remember with machine learning and AI. Some reports are saying Twitter used eye movement data in developing its algorithm. So who’s doing the looking in these tests? What are their biases?
If you’re not controlling for bias in the people you’re testing, your algorithm is going to come out with the same biases intact. If you’re not actively taking steps to counter that, you’re allowing racism to infect your algorithm.
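To make “garbage in, garbage out” concrete, here’s a toy simulation with entirely invented numbers (nothing to do with Twitter’s actual training data or model): if the raters generating the saliency labels fixate on lighter faces more often, even the simplest possible learner ships that skew right back out as math.

```python
import random

random.seed(0)

def simulate_labels(n, fixation_rate):
    """Raters mark a face 'salient' with probability fixation_rate."""
    return [random.random() < fixation_rate for _ in range(n)]

# Hypothetical rater pool that looks at lighter faces 80% of the time
# and darker faces 55% of the time when both are equally prominent.
training_data = {
    "light": simulate_labels(10_000, 0.80),
    "dark":  simulate_labels(10_000, 0.55),
}

# The "model" here is just the learned saliency rate per group --
# the simplest possible learner, which makes the point starkly:
model = {group: sum(labels) / len(labels) for group, labels in training_data.items()}

print(model)  # ~{'light': 0.80, 'dark': 0.55}: the raters' bias, now shipped as math
```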
Anyway, I think all the time about how, when one resume-screening algorithm was examined, it turned out to be using two particular factors to predict job performance: whether the candidate was named Jared, and whether he played high school lacrosse. https://www.google.com/amp/s/qz.com/1427621/companies-are-on-the-hook-if-their-hiring-algorithms-are-biased/amp/
It’s my favorite example because it’s so absurd. And it perfectly encapsulates the issue: if your data on who will “be successful” is defined by who is “successful” *right now,* in an executive workforce that’s still disproportionately male and white and upper-middle-class and up...
Then of course the machine thinks being named Jared and playing lacrosse are good metrics for success. It’s not even wrong! Because lacrosse-playing Jared undoubtedly does hang out in the country clubs that grant him access to the boys’ club. But that’s not what was intended.
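Here’s a hedged sketch of how that happens, with synthetic data and invented numbers (not the actual vendor’s model): if the “success” labels come from a workforce where the people already in the club share markers of class and race, the model dutifully learns those markers as if they were merit.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Features: named_jared, played_lacrosse, years of experience (scaled).
named_jared = rng.random(n) < 0.05
played_lacrosse = rng.random(n) < 0.10
experience = rng.normal(0, 1, n)

# Synthetic historical labels: "success" is mostly about who already got in,
# which in this toy world correlates with country-club markers, not skill.
in_the_club = named_jared | played_lacrosse
success = (0.7 * in_the_club + 0.1 * experience + rng.normal(0, 0.3, n)) > 0.5

X = np.column_stack([named_jared, played_lacrosse, experience]).astype(float)
model = LogisticRegression().fit(X, success)

for name, coef in zip(["named_jared", "played_lacrosse", "experience"], model.coef_[0]):
    print(f"{name:>16}: {coef:+.2f}")
# The class markers dominate the learned weights, because that's what the
# historical labels actually encoded.
```

The point isn’t that anyone coded “prefer Jareds.” It’s that the labels already encoded it, and the model is faithful to its labels.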