We found that in Discord, perpetrators would join and disrupt open voice channels by spamming slurs, porn audio, hours of extremely loud music, or just pure noise. These will probably ring a bell if you're familiar with the recent Zoombombing cases. [2/n]
The lesson here is that new technology almost always comes with even newer ways to abuse it, so effective moderation is a CRITICAL need for any new technology (or newly popular ones like Zoom). [3/n]
This moderation effort may even need to go beyond what we're already doing: we found that text-based moderation strategies completely broke down in voice chat, mostly due to the difficulty of preemptively preventing abuse and the near impossibility of retaining evidence. [4/n]
While it's easy to say "oh, tech designers should've thought of this," it's really difficult to predict all the ways in which people will abuse new technology, or how rules may need to change to prevent such abuse. [5/n]
Even though speculating is hard, that doesn't mean we shouldn't do it. @cfiesler has an excellent thread on the importance of speculating about the potential negative consequences of tech. [6/n] https://twitter.com/cfiesler/status/1247914696503480321
At the same time, it is also important for us to be willing to react quickly and implement new moderation rules and strategies when we adopt new technology. [end]