I think that "AI" as a narrative wouldn't be as strong if traditional truth and meaning generators such as science and religion hadn't mostly collapsed. "AI" as a "meme" fills that void.
It would be really interesting to have the time to clearly separate the narrative of "AI" and the cultural logic of automation from one another to see where they still overlap and interlock. I feel that there's something there.
The more I work in that space, the more obvious it becomes that the only materially relevant lens for analyzing "AI" is to interpret it as "narrative" and "text" describing systems of power.
These days "AI" as a narrative is a way in which power is expressed. Not just explicit power but the invisible structures we build the world on. The metrics and numbers of truth creation.
That's what narrative analysis of "AI" has to focus on: whose power is being expressed (without limiting oneself to looking at companies or state actors). Right now, for example, "AI" is mostly white supremacy. Simple and trivial fact.
The first level of analysis produced the idea of looking at biases and, for example, fixing them. But that surfaces an even more powerful force: the idea that everything needs to be fed to the machine, needs to be made compatible.
Why is the solution to "facial recognition is racist" "let's feed more black faces to our little statistical hacks"?

It is because that is a solution that does not challenge power (which wants facial recognition to detect unruly individuals).
The push for "AI" has a lot more to do with the push to find another truth generator after god and politics and journalism died than with research about machines.
It's mechanized scientism.