Thread on AI racism!
We tend to think AI is perfect and impartial, but it reproduces the same biases humans have.
Every AI application needs data sets to learn from, and that data is selected by humans. Software engineers are overwhelmingly white, so you get stuff like this. https://twitter.com/Chicken3gg/status/1274314622447820801
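For the devs following along, here's a toy sketch of that mechanism. Everything below is synthetic (no real product or dataset; "group A" and "group B" and all the numbers are invented): train an ordinary classifier on data where one group is 95% of the examples, then test on both groups separately.

```python
# Toy demo: a classifier trained on a skewed dataset works well for the
# over-represented group and poorly for the under-represented one.
# All data is synthetic; "group A" / "group B" are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group has its own feature distribution and decision boundary.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Training data: 95% group A, 5% group B (the skew is the whole point).
Xa, ya = make_group(1900, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced held-out sets: the model has effectively only learned group A.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=1.5)
print(f"accuracy on group A: {model.score(Xa_test, ya_test):.0%}")
print(f"accuracy on group B: {model.score(Xb_test, yb_test):.0%}")
```

Nobody wrote a racist line of code here; the skew in the training data is enough.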
AI isn't a utopian tool that is neutral to people's differences. It is oblivious to our inequalities, and thus fails to fight them.
Take the RAND Fire Project algorithm from the 1970s. It advised the FDNY on which fire stations to close, and the closures fell mostly on poor areas. The Bronx was decimated.
Here's an article on the disastrous effects of the RAND Fire Project algorithm. It includes footage of the South Bronx in the early 80s, with many buildings charred or even burned to the ground. https://fivethirtyeight.com/features/why-the-bronx-really-burned/
AI enforces biases as ruthlessly as humans do, and in exactly the same way: if it was never explicitly taught to account for certain factors, it ignores them, and whatever bias is baked into its data passes straight through.
Here's another example, where a healthcare algorithm recommended extra care for white patients far more often than for equally sick Black patients, because it used past healthcare spending as a proxy for need. https://www.nature.com/articles/d41586-019-03228-6
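Here's a rough, made-up sketch of how that happens even when the model never sees race (the real system, per the article, used predicted healthcare costs as a proxy for need; all numbers below are invented):

```python
# Synthetic illustration of proxy bias, loosely modelled on the study above.
# Race is never an input to the rule; the bias arrives through the proxy
# (past spending), which is lower for Black patients at the same level of need.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = white, 1 = Black (synthetic labels)
true_need = rng.normal(5, 1, n)      # actual health need: identical distributions

# Assumption mirroring the article: historically, less is spent on Black
# patients with the same level of need.
spending = true_need - 1.5 * group + rng.normal(0, 0.5, n)

# "Colour-blind" rule: flag the top 20% by spending for extra care.
flagged = spending > np.quantile(spending, 0.8)

for g, name in [(0, "white"), (1, "Black")]:
    print(f"{name} patients flagged for extra care: {flagged[group == g].mean():.1%}")
```

Same true need in both groups, but the "neutral" proxy quietly steers care away from Black patients.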
This is why it's not enough to "not be racist"; you have to be anti-racist, because otherwise the same biases remain.
Today, we still see AI applications, from facial recognition to self-driving cars to soap dispensers, that fail to recognize Black skin. https://gizmodo.com/why-cant-this-soap-dispenser-identify-dark-skin-1797931773
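On the soap dispenser: these sensors reportedly work by bouncing infrared light off your hand and dispensing when enough comes back. Here's a toy sketch, with invented reflectance values and threshold, of why a single cutoff tuned only on lighter-skinned testers can fail on darker skin:

```python
# Toy model of an IR soap dispenser. All numbers are made up for illustration;
# the point is only that darker skin reflects less IR, so a threshold
# calibrated on light skin may never trigger.
REFLECTANCE = {"lighter skin": 0.60, "darker skin": 0.25}  # fraction of IR returned (invented)
TRIGGER_THRESHOLD = 0.40  # calibrated only against lighter-skinned testers

def dispenses_soap(reflected_fraction: float) -> bool:
    return reflected_fraction > TRIGGER_THRESHOLD

for skin, r in REFLECTANCE.items():
    print(f"{skin}: {'soap' if dispenses_soap(r) else 'nothing'}")
```

Test the product on a diverse set of hands and this bug never ships.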
AI also learns from the way we treat each other. Here's Tay, a Microsoft chatbot that composed its messages from what it learned on Twitter.
It started saying incredibly racist things within hours of going live. https://gizmodo.com/here-are-the-microsoft-twitter-bot-s-craziest-racist-ra-1766820160
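To see why "learn from whatever Twitter sends you" goes wrong so fast, here's the world's simplest text bot: a bigram chain, nothing like Tay's actual model, but it makes the point that the output is just a remix of the input.

```python
# Minimal bigram ("Markov chain") text generator. Tay was far more
# sophisticated, but the failure mode is the same: the bot can only
# recombine what it was fed, so toxic input produces toxic output.
import random
from collections import defaultdict

def train(messages):
    chain = defaultdict(list)
    for msg in messages:
        words = msg.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)          # record which word follows which
    return chain

def generate(chain, start, length=8):
    word, out = start, [start]
    for _ in range(length):
        if word not in chain:
            break
        word = random.choice(chain[word])  # pick a continuation seen in training
        out.append(word)
    return " ".join(out)

# Feed it friendly messages and it sounds friendly; feed it abuse and it
# repeats abuse. There is no built-in notion of "acceptable".
messages = ["have a great day everyone", "what a great day to be kind online"]
print(generate(train(messages), "great"))
```

Without careful curation of what it learned from, Twitter users steered the bot to the worst possible place within hours.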
In the few instances where AI developers do try to be considerate of Black people, it doesn't go much better. The tech world is notorious for lacking transparency and for not obtaining proper consent. https://fortune.com/2019/10/08/why-did-google-offer-black-people-5-to-harvest-their-faces-eye-on-a-i/
So basically #AI is racist just like everything else. Don't trust algorithms to avoid prejudice. AI literally operates under the idea of "I don't see colour"... as we know, that doesn't work out well. #BlackLivesMatter
#BLM #technology #BlackTechTwitter #thread

Shoutout to @walmartyr @RyersonU @RTARyerson for teaching this stuff, one of the most useful courses I've ever taken
Someone asked in the replies: How can this happen, even at companies that hire plenty of East Asian, South Asian, and Middle Eastern programmers?
This is why: https://twitter.com/heyromanyk/status/1275146073187778562?s=20
Here are some statistics about the lack of diversity in tech companies. Specifically, there has been very little progress in recruiting, hiring, and retaining Black, Indigenous, and Latinx employees. https://www.wired.com/story/five-years-tech-diversity-reports-little-progress/