Artificial Intelligence can be a bitch.
Here are 6 high-profile projects that have miserably failed and have made the respective companies look really foolish:
🧵👇
Google Photos: the algorithm powering the service was unable to properly classify some people of color.
Here is the story: https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai
Amazon scrapped a secret AI recruiting tool after it showed bias against women.
Here is the story: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
COMPAS, an algorithm used in US courts to predict recidivism, turned out to be biased against Black defendants. It was a scandal.
Here is the story: https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
Tay, Microsoft's chatbot, learned from Twitter users and quickly started posting offensive tweets. Microsoft had to take the chatbot down from Twitter and never put it back again.
Here is the story: https://www.bbc.com/news/technology-35902104
Face recognition software was found to work almost perfectly for white men, while misclassifying darker-skinned women at far higher rates. Just think about that difference!
Here is the story: https://www.newscientist.com/article/2161028-face-recognition-software-is-perfect-if-youre-a-white-man/
Microsoft's AI-powered MSN news service was also caught showing racial bias.
Here is the story: https://hivelife.com/microsoft-ai-msn-racial-bias/
There's something at play in every one of these cases: "algorithm bias."
This is a phenomenon that occurs when an algorithm produces results that are systemically prejudiced due to erroneous assumptions in the machine learning process.
Basically, garbage-in, garbage-out.
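To make the garbage-in, garbage-out point concrete, here is a minimal sketch in Python using synthetic data and a hypothetical hiring scenario (not taken from any of the stories above): a model trained on historically biased decisions faithfully reproduces that bias in its predictions, even though both groups are equally qualified.

```python
# Minimal sketch: biased historical labels in, biased predictions out.
# Synthetic data and a hypothetical hiring scenario, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

# Both groups have identically distributed "skill".
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)

# Garbage in: historical hiring decisions penalized group B.
hired_historically = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

# The model sees group membership as a feature and learns the bias.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired_historically)

# Garbage out: group B gets a much lower predicted hire rate despite equal skill.
for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"Predicted hire rate for group {g}: {rate:.2%}")
```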
Studying ethics and the impact of bias when implementing Artificial Intelligence is paramount to achieving the results we want as a society.
Take a look at this TED Talk that @_jessicaalonso_ shared with me. It's pretty revealing: https://www.youtube.com/watch?v=UG_X_7g63rY&feature=youtu.be
I personally need to do better when thinking about how the solutions I build impact society and how to avoid biases that could undermine their usefulness.
I encourage you to do the same.