Much of what’s being sold as "AI" today is snake oil. It does not and cannot work. In a talk at MIT yesterday, I described why this is happening, how we can recognize flawed AI claims, and how to push back. Here are my annotated slides: https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snakeoil.pdf
Key point #1: AI is an umbrella term for a set of loosely related technologies. *Some* of those technologies have made genuine, remarkable, and widely publicized progress recently. But companies exploit public confusion by slapping the “AI” label on whatever they’re selling.
Key point #2: Many dubious applications of AI involve predicting social outcomes: who will succeed at a job, which kids will drop out, etc. We can’t predict the future — that should be common sense. But we seem to have decided to suspend common sense when “AI” is involved.
There’s evidence from many domains, including prediction of criminal risk, that machine learning using hundreds of features is only slightly more accurate than random, and no more accurate than simple linear regression with three or four features, which is basically a manual scoring rule.
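To make that comparison concrete, here is a minimal sketch (not from the talk or the underlying studies) using scikit-learn: a random forest given 500 synthetic features versus a logistic regression given only the 4 features that actually carry signal. The dataset, model choices, and every parameter below are illustrative assumptions, not the papers’ actual setup.

```python
# Hypothetical sketch: many-feature model vs. few-feature linear model.
# All data is synthetic; numbers and model choices are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic "risk prediction" task: 500 features, but only 4 are informative.
# shuffle=False keeps the informative features in the first 4 columns.
X, y = make_classification(n_samples=5000, n_features=500,
                           n_informative=4, n_redundant=0,
                           shuffle=False, random_state=0)

# Complex model trained on all 500 features.
big_model = RandomForestClassifier(n_estimators=200, random_state=0)
big_acc = cross_val_score(big_model, X, y, cv=5).mean()

# Simple linear model restricted to 4 features (here we know which ones
# are informative; in practice they'd be chosen by domain experts).
small_model = LogisticRegression(max_iter=1000)
small_acc = cross_val_score(small_model, X[:, :4], y, cv=5).mean()

print(f"500-feature random forest accuracy:    {big_acc:.3f}")
print(f"4-feature logistic regression accuracy: {small_acc:.3f}")
```

When only a handful of features carry signal, the two accuracies typically come out close; that is the pattern the criminal-risk studies report on real social-outcome data.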
Key point #3: transparent, manual scoring rules for risk prediction can be a good thing! Traffic violators get points on their licenses and those who accumulate too many points are deemed too risky to drive. In contrast, using “AI” to suspend people’s licenses would be dystopian.
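For contrast, a transparent scoring rule fits in a few lines of code. This toy sketch mirrors the license-points idea (the point values and suspension threshold are invented for illustration): every decision can be audited by adding up the points yourself.

```python
# Toy manual scoring rule, in the spirit of license points.
# Point values and threshold are hypothetical, for illustration only.
POINTS = {
    "speeding": 3,
    "running_red_light": 4,
    "reckless_driving": 6,
}
SUSPENSION_THRESHOLD = 12  # hypothetical cutoff

def license_points(violations):
    """Sum the points for a driver's violations; unknown types score 0."""
    return sum(POINTS.get(v, 0) for v in violations)

def is_suspended(violations):
    """A driver is deemed too risky once points reach the threshold."""
    return license_points(violations) >= SUSPENSION_THRESHOLD

# Anyone can check exactly why a decision was made:
history = ["speeding", "reckless_driving", "running_red_light"]
print(license_points(history), is_suspended(history))  # 13 True
```

The design choice worth noting: the rule’s accuracy may be no better than a black-box model’s, but its reasoning is fully inspectable and contestable, which is the whole point.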
The best part of the event was the panel discussion with @histoftech, @STurkle, @edenmedina, and the audience. Thanks to @crystaljjlee for the excellent summary! https://twitter.com/crystaljjlee/status/1196611062126317576