This is an essential point in understanding the increasingly clear problem of algorithms encoding racial bias (and other forms of bias, at that). The issue is not nefarious devs adding 'IF USER.RACE==BLACK THEN DISCRIMINATE()'. https://twitter.com/LouisatheLast/status/1307692894573260802
The issue (well, fine, *one of the fundamental issues*) is devs building "smart" systems for everything from customer support to security to job application triage to photo cropping… and replicating existing patterns of discrimination and harm in yet more ways.
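To make that mechanism concrete, here's a minimal, hypothetical sketch (entirely synthetic data; the `group`, `zip_code`, and `skill` names are invented for illustration, not from the thread): a model trained on biased historical hiring decisions never sees race as an input at all, yet reproduces the gap anyway through a correlated proxy feature.

```python
# Minimal sketch: a model trained on biased historical outcomes
# reproduces the bias with no explicit race feature, via a proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical hiring" data. The protected attribute `group`
# is never given to the model, but `zip_code` is correlated with it.
group = rng.integers(0, 2, n)                          # unseen by the model
zip_code = (group + rng.random(n) > 0.7).astype(int)   # proxy for group
skill = rng.normal(0, 1, n)                            # same distribution for both groups

# Historical labels: equally skilled candidates from group 1 were hired less often.
hired = ((skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0).astype(int)

# Train on features that deliberately exclude `group`.
X = np.column_stack([skill, zip_code])
model = LogisticRegression().fit(X, hired)

# The model recreates the gap anyway, carried by the proxy feature.
for g in (0, 1):
    rate = model.predict_proba(X[group == g])[:, 1].mean()
    print(f"group {g}: mean predicted hire rate {rate:.2f}")
```

Run it and the two groups get visibly different predicted hire rates, even though race was "removed" from the inputs. No `IF RACE THEN DISCRIMINATE()` anywhere; the training data did all the work.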
This is compounded by a broadly unquestioned view in our popular culture that things like "algorithms" and "calculated scores" are objective and fair and incapable of bias because the code itself lacks human agency and *motive*.
But that's the entire point of addressing *systems* of harm and oppression as something distinct, significant, and particularly enduring. They steer results in a way that harms, regardless of the individuals that operate within them, without any individual human acts of animus.
When I was much more naive, I thought that focusing on systems of harm was a promising direction towards greater justice *because it did not require individuals to accept personal responsibility for oppressive outcomes*. The system's the problem! We can just go fix it together!
Increasingly, I've come to realize that lots and lots of people are just as angry when systems of harm are identified and revealed. Because they *like the outcomes those systems produce*, and giving up those outcomes is unacceptable to them, too.
The introduction of @redsesame's book *Everyday Information Architecture* is a deep dive into the ways harmful systems emerge: sometimes despite creators' best intentions, sometimes from neglect or ignorance of potential impact, and sometimes as a deliberate means of enforcing a creator's perspective.
But once the gears start turning, "how this happened" is only useful to the extent that it helps fix the system and prevent the problem from happening again. "There was no ill intent," "It used a standard training set," and so on don't matter; outcomes are what matters.