Just spoke to @npr about what I’m now calling “content moderation-washing”: when platforms make a show of good citizenship by a. making an exceptional new policy that b. is enforced by the least empowered, most precarious workers and c. is relatively easy to implement.
Case in point? No wishing death on Trump. Meets a. because it was implemented immediately, even though other political figures have experienced this abuse constantly, now and in the past. Meets b. because it is Yet Another Problem to be solved by real-time, production-level content mods and
meets c. because going after abuse directed at one public figure with a unique name is...computationally and operationally relatively easy, whereas rooting out quotidian abuse of regular people on the platforms is a virtual impossibility under the status quo.
This is what makes it an exercise in content moderation-washing. It does nothing substantive to really change the degradation of platforms as spaces of exchange — but it sure will look good to bring it up in front of a Section 230 hearing, say, up on Capitol Hill.
Meanwhile, the responses to @TwitterComms’ announcement are a laundry list of people who routinely experience actual DEATH THREATS, doxxing, and racist, sexist, homophobic, transphobic, fatphobic, ableist, and other abuse on the platform. Every day. Who have reported it. To no change.
In fact, I can think of one Twitter account with a huge following that has been responsible for an extraordinary amount of just that kind of abuse against people for their identities, places of origin, political beliefs, and so on, with very little consequence at all.
It belongs to Donald Trump.
Like, have the platforms gotten together to decide that adjudicating against poor taste is now the measure? If so, I’ve got news for them.