Getting computers to explain their behaviour to us is a fundamental trust issue. Widespread adoption & acceptance of deep learning depends on it. Otherwise, we’ll end up dominated by HAL 9000s & Skynet Terminators. Thoughts on how to avoid this in the thread. #AI [1 of 6]
Much deep learning uses neural nets, which work like this —> (see picture). It’s all mathematics & numbers & tweaking node-weights. A big black box. So when a machine recognizes a cat, or wins a game of Go, it can’t really tell you, comprehensibly, what it just did [2 of 6]
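To make that black-box point concrete, here is a minimal sketch of what one layer of a neural net actually computes. The NumPy code and the specific weights are illustrative only (not from the thread); the point is that a trained net is just numbers plus arithmetic, with no explanation attached.

```python
# A toy forward pass through one neural-net layer: the "knowledge" is nothing
# but these numbers, tweaked during training. (Weights here are made up for
# illustration; a real network has millions of them.)
import numpy as np

def layer(x, W, b):
    # Weighted sum of the inputs, then a ReLU nonlinearity.
    return np.maximum(0, W @ x + b)

x = np.array([0.2, 0.7, 0.1])             # some input features
W = np.array([[0.5, -1.2, 0.3],
              [0.9,  0.4, -0.7]])          # learned weights: opaque numbers
b = np.array([0.1, -0.2])                  # learned biases

print(layer(x, W, b))                      # an activation vector, not an explanation
```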
But humans already do this all the time. Our brains are, after all, biological neural nets: a mass of layered neurones firing & reinforcing each other until, somehow, thought & behaviour emerge. We don’t fully know how the brain works, but we’re quite happy to explain our actions [3 of 6]
That’s because we have a theory of other minds: to explain & predict what other humans are going to do, we assume they have minds (except politicians) & we make up stories to account for what they just did & what they might do next [4 of 6]
Today’s big idea: a strand of #AI research where we encourage machines to develop a theory of other machine minds. Whenever the learning algorithm is just a complex black box, we ask the machine to make up a story for how it might have got that result from the inputs [5 of 6]
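The tweet doesn’t name a specific method, but one common way this strand of research is realized is to train a small, interpretable “surrogate” model to imitate the black box’s input-to-output behaviour, then read the surrogate’s rules as the machine’s story about itself. A hedged sketch, using scikit-learn with a random forest and the Iris data purely as stand-ins:

```python
# Fit a small, interpretable "surrogate" model to mimic a black box,
# then print its rules as the machine's "story" about how inputs led to outputs.
# Sketch only: the black box, surrogate, and dataset are all placeholders.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier     # stands in for the black box
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate never sees the true labels, only the black box's answers,
# so it learns a simple account of the black box's behaviour on these inputs.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))

# A human-readable "story" for how the result might follow from the inputs:
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

The printed rules are not a faithful account of what the forest really computes, only a human-readable approximation, which is exactly the trade-off the next tweet acknowledges.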
The answers may not be correct—our own accounts of our actions are notoriously flawed—but they will be recognizably human in nature, & a step toward building trust in #DigitalTransformation [6 of 6]

https://www.psychologytoday.com/us/blog/mind-in-the-machine/201811/truly-intelligent-ai-must-have-theory-mind%3famp
You can follow @Buck_Rogers23.