Yesterday, I watched CS professor Michael Kearns speak on AI ethics, and now I feel compelled to rant.

Namely, how a lot of AI ethics represents a complete failure of moral imagination, a disingenuous redefinition of the problem, and a downright swindle. 1/n
The lecture begins by arguing that social-scientific and humanistic accounts of issues like privacy and fairness have not reached "scientific maturity" because they lack mathematical formalism. (wow)
It then apparently follows that CS should pick up the slack: given a mathematical definition, ethics becomes technically implementable (reducing the need for regulation).
The technical formalism presented for privacy, differential privacy, captures privacy only in a very restricted and limited sense.
DP assumes we are only interested in aggregate statistics (mostly untrue), and that privacy only matters in terms of being publicly identified (as opposed to being infrastructurally surveilled, for example).
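To be concrete about how narrow the guarantee is: differential privacy is a promise about noisy answers to aggregate queries, nothing more. A minimal sketch of the classic Laplace mechanism (my own illustrative code and parameter names, not from the lecture):

```python
import math
import random

def laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.1):
    """Release an aggregate count with epsilon-differential privacy by
    adding Laplace noise with scale = sensitivity / epsilon."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5                 # uniform on (-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    # Inverse-CDF sample from the Laplace distribution
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

Note what the guarantee covers: whether any one individual's record was in the dataset cannot be reliably inferred from the released number. It says nothing about a company running the query infrastructure itself.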
If only there were something like "surveillance studies", so we could get a qualitative understanding of what privacy is really about. (spoiler: there is, scientifically immature no doubt)
But more importantly, Kearns' treatment of privacy assumes that there are no bad actors. You know, like evil. You know, like Amazon.
Amazon, who have constructed a vast dragnet with products like Ring and Echo, drawing on suburban fears to become a centralized, naturalized surveillance infrastructure.
At the same time, Amazon provides facial recognition and profiling services to law enforcement (Rekognition), and sells dash cams with a "traffic stop" mode to consumers.

A veritable arms dealer, dealing to all fronts.
Differential privacy totally misses the mark on these kinds of surveillance. We have traded real understanding for an exact definition.

Oh, and I find it important to disclose here that Kearns is in the Amazon Scholars program (which means working at Amazon as a side hustle).
This, I think, is the core failure of moral imagination in AI ethics, and one that is very convenient for companies like Amazon: the depiction of shared human values and shared human goals, with merely technical obstacles to overcome en route.
At the same time, the AI ethics discourse involves a denial of evil and a denial of the deep agonisms that underpin these systems. Essentially removing the ethics from AI ethics.
You can follow @SanttuRaisanen.