Machine learning is a quintessential manifestation of what James C. Scott wrote about in Seeing Like a State. And I mean that with the same nuance about benefits and grave risks that he wrote with. If you believe in any centralization, you cannot be an AI nihilist. 1/
A trivial example before I discuss the "legibility" stuff: say you acknowledge dangerous AI bias in bureaucratic selection processes, but you also want more efficient immigration systems to keep people from being stuck in danger. If you want to scale it, you have to use AI. 2/
So while autonomous mutual aid projects and the like can be done without AI, any time you want to scale and interconnect complexity, you're going to need ways to simplify, sort, and filter information, which is what AI does. 3/
This process of widening the throughput of a central authority's ability to make decisions at a pinch point is legibility. It's artificial simplicity in order to more effectively control populations. Dataset creation is creating legibility, and the rest is governance. 4/
Just as last names, maps, and accurate income reporting were tools for early statecraft, AI is the tool for modernized globalization, corporations, and states to "see" and direct the populace they manage. Artificially reducing complexity to make something "legible" has problems. 5/
So just like scientific forestry failed to accurately predict yields bc it ignored complex ecosystems, AI can only work with the information you give it. It can't model complexity or structural issues, so it has "bias" in the social science sense. 6/
But the alternative of feeding it ever more complex data also requires ever more complete surveillance and AI-driven governance. It's a vicious circle of automation where accuracy has its own risks and seductions. 7/
But the state doesn't necessarily care. As long as it saves money and increases the productivity of control, it's a success. Worse, bc it's hard to hold an algorithm accountable from a legal perspective in the way you could (theoretically) hold an individual case manager responsible. 8/
So returning to the first example I gave of immigration, we can see the crux of the issue. Efficiency in this case can literally save lives, but at the cost of further entrenching bias. Now instead of racist bureaucrats it's a biased black-box AI system. Is that better or worse? 9/
Any type of scale requires over-simplification. So no individual hears and senses the story of these people (except maybe through an appeal), but more people get accepted and denied. People like @tinysubversions oppose scale for these complexity reasons (and it makes sense!). 10/
As James C. Scott pointed out, legibility can be useful for many things, such as fighting disease, even as those same records create other risks of overreach. If you are invested in any kind of scale, you have to make some kind of peace with AI. There's no scale without it. 11/
The real question at that point is who controls it, how it was developed, and why. This is why I'm neither an AI optimist nor a pessimist. These are tools that will be used for both radical and oppressive purposes. My best hope is to pull them towards the former. 12/
You can follow @emmibevensee.