I'll be live-tweeting the talk by Dr. Chris McKillop ( @atscmc ) at #EuroSTARConf on this thread, starting in about 10 minutes.
Here's the title — Developing Your BS Detector: The Hype and Reality of AI.
We encounter AI everywhere these days: in our phones, in our computers, at immigration. Sheriff McKillop is in town to call out the AI Snake Oil. Symptom 1: Confusion. Will this be a robot-infested world? There are lots of fascinating, scary characters out there. Are they a threat?
Here's an instance of a robot: it's really an iPad on wheels. And it got stuck on a plush carpet; it had to be rescued by its human minder. We have humanoid robots, but we can't, and don't, interact with them like real humans.
Dr. McKillop presents a cute picture of a chihuahua, interpreted by a deep-learning system as... a muffin. AI systems are of about rodent-level intelligence (mind you, rodents know the difference between dogs and muffins --MB).
Its focus and speed on one task at a time make AI look intelligent.

Symptom 2: Singularity Fever is the next manifestation of AI BS. "Soon we will see super-intelligences that will make us irrelevant and redundant" is the claim. We don't have to worry about that any time soon.
The robots are not coming for us. The world is not a nice place for robots, and the robots don't like it. Evidence: a security robot that committed suicide by diving into a fountain because the world is terrible. (You can find the link yourself. --MB)
Technology is made by our imagination of what we could do. We have choices about what we will do. The choices that we make have a profound influence on individuals, groups, and society. We must remember that we get to make those choices.
Symptom 3: Deception. Introducing Sophia the Robot, who is not the AI she's claimed to be. She's a puppet; basically an animatronic, a marketing gimmick making lots of money for her owners. Next: introducing the (original) Mechanical Turk.
People were terrified that the Mechanical Turk was possessed by spirits. It was possessed... by the live person inside it. You can make a lot of money by selling fake AI. We need a lot of testing to find out if we're being deceived.
The Boston Dynamics robots are remote-controlled by human operators. Make stuff look like it's AI, and you can raise a lot of investment money. Symptom 4: Failure. With great BS comes great failure. The trouble is, when AI fails, people get hurt, killed, or discriminated against.
We start with biased data, and the AI ramps up the bias against people who are already being discriminated against. Example: poor people don't (can't) spend money on health care, so the system concludes they don't need much of it. The AI bakes the bias in.
A woman CEO in China was tagged by AI and named and shamed as a habitual jaywalker... because her face was on an ad on the side of a bus. That was a colossal testing fail. Another: the Uber car identified the pedestrian it killed six seconds before the accident.
The system had been getting too many false positives on anticipated collisions, which was discomfiting for the passengers. So the choice was made to favour passenger comfort over safety.

Tesla markets it as Autopilot, but after accidents it always insists that it's "driver assist".
The trouble with calling driver assistance "Autopilot" is that the words we choose frame our decisions and actions.

In less than one day, Microsoft Tay became a racist, sexist, homophobic asshole after being trained on Twitter's echo chamber.
Symptom 5: Data Obsession. AI has an obsessive-compulsive disorder, needing to feed on more and more data. This has turned capitalism into surveillance capitalism. Is this the world we want, where all our data is going to be sold? Who is benefiting from selling our data?
Answer: the billionaires who already have lots of our money. Symptom 6: Empathy Dysfunction. We're moving from face-to-face interaction to face-to-screen interaction. Yet one of the good things about AI is how it can show us the good, the bad, and the mundane about us.
Instead of worrying about the bias in AI systems, we should be worried about the bias in the real world. The AI is a good mirror for us, showing us what we don't want to see about ourselves.

Thomas Nagel posed the question: what is it like to be a bat? We can have an idea about that.
Yet we don't often think very much about what it's like to be a *human*. What is it like? We can share the experiences we've had, and learn from the differences between them.

It's not good to mix morality and machines. AI has no conception of what a human IS.
AI doesn't know what it's like to experience joy or pain; it doesn't have humanity. We shouldn't put our own moral codes into our AI. We get fooled by the complexity of the work and by the prestige of the people doing it.
Bonus Symptom: Dunning-Kruger Syndrome. Lots of billionaires have strong opinions about AI but don't realize they don't understand it, Elon Musk and Mark Zuckerberg being canonical examples. Just because someone can shout and get publicity doesn't mean they know AI.
Cure for AI BS: the elixirs of knowledge, ethics, and empathy. We need to bring these things to our practice of testing. We must understand what's going wrong and where we're baking in bias, so that we don't harm people. Join with those who want to build ethical, empathetic AI. -fin-
In the questions: we've got to bring more psychological understanding into the development and testing of AI. There are tools emerging that can help to detect bias. (But how well will THAT be tested — an infinite regress problem? --MB)