Twitter "clarified" their terms of service to enable "minor attracted people" to discuss their attraction to children on the platform, as long as they didn't encourage offending. But "MAPS" *are* offending on the platform, because it's saturated in child sexual abuse material. https://twitter.com/Pdsagainstmaps/status/1254620606991085576
I've been conducting research for a paper on user-initiated efforts to prevent child abuse on social media, because platforms won't and governments don't. And even I've been shocked at the amount and severity of child abuse images and videos on Twitter. I honestly had no idea.
CSAM accounts on Twitter can be active for *months* and attract hundreds of followers, with abuse videos clocking thousands of views. They only come down because anti-abuse accounts seek them out and report them.
The effectiveness of these volunteer anti-abuse networks on Twitter demonstrates that 1) it's easy to find CSAM on Twitter if you are looking for it, and 2) Twitter has not invested adequately in anti-CSAM measures and personnel.
There's no transparency or accountability on this platform for the CSAM that it hosts and circulates. The paedophile accounts come down when they are mass reported. But what about the retweets? What about the commenters? What happens to them? This is a basic public safety issue.
Don't forget, Twitter made the ridiculous and disastrous decision to *encourage* the formation of public networks of paedophiles on this platform, while under-investing in CSAM prevention and detection. Twitter also wants to encrypt its DMs. Beyond belief and out of control.
You can follow @mike_salter.