Artificial Intelligence can be a bitch.

Here are 6 high-profile projects that failed miserably and made the respective companies look really foolish:

🧵👇
1⃣ Back in 2015, a software engineer reported that Google Photos was classifying his black friends as gorillas.

The algorithm powering the service was unable to properly classify some people of color 🤦!

Here is the story: https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai

👇
2⃣ Back in 2016, Amazon had to scrap its AI recruiting tool after discovering that the system had taught itself that male 👨 candidates were preferable, penalizing every resume that pointed to a female 👩 candidate.

Here is the story: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

👇
3⃣ The COMPAS system, used in the US to assess a criminal's likelihood of re-offending, incorrectly flagged black defendants as high-risk for recidivism far more often than white defendants.

It was a scandal.

Here is the story: https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm

👇
4⃣ In 2016, Microsoft created a chatbot named Tay, which really quickly turned into a Holocaust-denying racist 👿.

Microsoft had to take the chatbot down from Twitter and never put it back again.

Here is the story: https://www.bbc.com/news/technology-35902104

👇
5⃣ In 2018, an MIT study found that three commercial gender-classification systems (from IBM, Microsoft, and Megvii) were up to 99% accurate for light-skinned men but misclassified dark-skinned women as much as 35% of the time.

Just think about that difference!

Here is the story: https://www.newscientist.com/article/2161028-face-recognition-software-is-perfect-if-youre-a-white-man/

👇
6⃣ A few months ago, in May 2020, Microsoft’s MSN came under fire after it mistakenly paired an article about Little Mix singer Jade Thirlwall with a photo of her bandmate Leigh-Anne Pinnock, both of whom are mixed-race.

Here is the story: https://hivelife.com/microsoft-ai-msn-racial-bias/

👇
There's something at play in every one of these cases: "algorithmic bias."

This is a phenomenon that occurs when an algorithm produces results that are systematically prejudiced due to erroneous assumptions in the machine learning process.

Basically, garbage-in, garbage-out.
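Here's a minimal sketch of that idea, using made-up hiring data (all names and numbers are hypothetical): a naive model that "learns" from historically biased decisions reproduces that bias, even when two candidates are otherwise identical.

```python
# Hypothetical historical resumes: (years_of_experience, gender, was_hired).
# The past data is skewed: equally experienced women were hired less often.
history = [
    (5, "male", 1), (6, "male", 1), (4, "male", 1), (7, "male", 1),
    (5, "female", 0), (6, "female", 0), (8, "female", 1), (4, "female", 0),
]

def hire_rate(gender):
    """Fraction of past candidates of this gender who were hired."""
    rows = [row for row in history if row[1] == gender]
    return sum(row[2] for row in rows) / len(rows)

def score(years, gender):
    """Naive 'model' that scores candidates by experience plus how often
    similar past candidates were hired — it absorbs the bias wholesale."""
    return years / 10 + hire_rate(gender)

# Identical experience, different scores — garbage in, garbage out.
print(score(6, "male"))    # 1.6
print(score(6, "female"))  # 0.85
```

The model never sees an explicit rule like "prefer men"; it simply inherits the pattern baked into the training data, which is roughly what happened with Amazon's recruiting tool above.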

👇
Studying ethics and the impact of bias when implementing Artificial Intelligence is paramount to achieving the results we want as a society.

Take a look at this TED Talk that @_jessicaalonso_ shared with me. It's pretty revealing:

👇
I personally need to do better when thinking about how the solutions I build impact society and how to avoid biases that could undermine their usefulness.

I encourage you to do the same.