Painfully late to the game here, but my two cents (and then some) on this whole distressing saga https://twitter.com/VentureBeat/status/1276575314706038785
While it is correct that re-training the super-resolution model on a dataset with a different demographic distribution would produce a different distribution of output images, in the context of a conversation about the harms of AI, this framing misses the forest for the trees
This is made worse by the fact that the forest is burning. The super-resolution failure does not stand alone -- it is part of a larger systematic pattern of sociotechnical systems failing people of color, LGBTQ+ communities, and other marginalized groups
Framing such failures as solely problems of ‘bias’ - a word reduced to mean unbalanced error rates or unbalanced dataset demographics - minimizes the depth of the problem and implicitly (or often explicitly) orients solutions around technical de-biasing of localized components
And when fixes are oriented exclusively at technical components of AI systems and statistical properties of datasets, we obscure how these failures are inextricably linked to knowledge and power asymmetries that characterize the field of AI
We also obscure the manner in which certain tasks, research agendas, applications, etc. are fundamentally implicated in harms to marginalized communities -- no amount of ‘de-biasing’ a model will fix these problems
This reductive tendency is widespread and has been characterized within the algorithmic fairness community by @aselbst et al. as a failure of abstraction http://sorelle.friedler.net/papers/sts_fat2019.pdf
Of course sometimes AI researchers and developers need to attend to the technical components of the systems they build, discuss and develop rigorous methodologies of dataset design and testing, etc.
BUT in the context of a conversation about the dangers of AI, in a public forum, it is critical for leaders to communicate the nuances of these topics and engage with the structural roots of these failures
As @timnitGebru and I discuss in our tutorial, AI research and development cannot be decoupled from the social relations that structure our world and AI failures are intimately linked to asymmetries wrt who is driving research agendas, shaping the incentives of the field, etc.
And the lived experiences of those directly harmed by AI systems give rise to knowledge and expertise that MUST be valued. Timnit’s expertise stems BOTH from her expansive disciplinary knowledge AND from her lived experiences as a Black woman in this field
Timnit herself echoes a long tradition of Black feminist scholars, such as Patricia Hill Collins, when she says lived experience is expertise https://twitter.com/yoavgo/status/1274811210806984705
The disrespectful and dismissive manner in which @timnitGebru, and others speaking out against these issues, have been treated here is itself a reflection of the underlying issues we’re discussing, and a stark reminder of how far the field has to go