One of our classes has been the victim of some really intense zoombombing, and all I can think about is that this is exactly why ethical speculation around unintended consequences and bad actors is a CRITICAL part of the design process for any new technology. [Thread]
Not long ago, there was a viral tweet about how Google Drive could be used to harass people. Of course, it's a totally bizarre use case. I remember responses like "how on earth could the designers have anticipated that awful people would use it that way?!" https://www.buzzfeednews.com/article/katienotopoulos/google-drive-harassment-remove-shared
Yes, you should anticipate that awful people will use your tech to be awful. You should be sitting in a room and imagining EVERY AWFUL THING that people might do. Even if it seems like the most fringe, bizarre use case in the world. THINK OF ALL OF THEM. And then fix them.
Tip: You know who might be the best people to think about ways that tech might be used to harass? People who are harassed a lot. Marginalized folks. Vulnerable folks. Women and people of color and queer people and... oh, right, all the people who are underrepresented in tech.
I bet there are a LOT of people who, if asked, could have imagined "oh yeah if Zoom became really popular people would start going into public Zoom meetings and screensharing pornography and shouting racist, sexist things."
Yes, there are some ways to help prevent Zoombombing, but you would be AMAZED at the workarounds that people have found, even when instructors are trying to be as diligent as they possibly can. Also, these prevention methods are not intuitive. And they get defeated by social engineering.
I'm not saying that Zoom did a terrible job here. There definitely are some design features that suggest some thought went into this. But the magnitude of this problem should show you how insanely important this part of the design process is.
Policy/moderation is super important, but there are also things you can do with the *design* of a technology that make it much harder to use in the ways you don't want people to use it (like Zoombombing classes to blast pornography). ADD SOME FRICTION.
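To make "add some friction" concrete, here is a minimal sketch of what safe-by-default design could look like in code. The `MeetingSettings` fields and the `relax` helper are hypothetical illustrations, not Zoom's actual API: the risky options still exist, but loosening them requires an explicit, visible step rather than being the silent default.

```python
# Minimal sketch of "friction by default" (hypothetical names, not a real Zoom API):
# unsafe options exist, but the defaults are safe and relaxing them takes an
# explicit, documented decision.
from dataclasses import dataclass


@dataclass
class MeetingSettings:
    waiting_room: bool = True          # strangers wait until admitted
    screen_share: str = "host_only"    # participants can't blast their screens
    join_before_host: bool = False     # nobody in the room unsupervised
    require_passcode: bool = True


def relax(settings: MeetingSettings, field: str, value, reason: str) -> MeetingSettings:
    """Allow loosening a safe default, but only with a stated reason (the friction)."""
    if not reason:
        raise ValueError(f"Refusing to relax '{field}' without a documented reason")
    print(f"WARNING: relaxing {field} -> {value!r} because: {reason}")
    setattr(settings, field, value)
    return settings


# Usage: the risky path is still possible, but never the silent default.
classroom = MeetingSettings()
relax(classroom, "screen_share", "everyone", reason="students presenting group projects")
```

The design choice being illustrated: don't remove capability, just make the harmful configuration something a user has to consciously opt into (and can see they've opted into).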
If you're creating user personas as part of your process, and those personas don't include "user stalking their ex", "user who wants to traumatize vulnerable folks", and "user who thinks it's funny to show everyone their genitals" then you're missing an important design step.
To sum all this up: Bad people will figure out (surprising) ways to use tech for bad things. Anticipating these and designing to mitigate them is key. And a diverse team that thinks creatively about ethics and harm will be the best at figuring that out.
In case you were wondering, in the context of this thread, whether Zoom actually did consider potential harms: Zoom's CEO literally told @natashanyt that they never considered the possibility of their platform being misused. https://twitter.com/natashanyt/status/1248238004419821570?s=20
I'm not trying to vilify anyone here, but it seems that when it comes to big tech, the clearest path to change is often controversy, and this is how we learn from the mistakes of the past. My hope is that the more scared companies are of these problems, the less ethical debt they accrue.
This thread seems to be making the rounds again, so If You Enjoyed This Thread, You Might Also Enjoy: https://twitter.com/cfiesler/status/1273604933007192065