Social media platforms are ramping up efforts to combat electoral misinformation ahead of US #ElectionDay2020. Their efforts reveal some key challenges & complexities of online content moderation at scale 1/
https://www.hrw.org/news/2020/10/30/can-social-media-platforms-stop-electoral-disinformation-and-respect-free-speech
In what's probably an unparalleled effort, platforms have rolled out new policies, dedicated resources, and coordinated with one another to address voter suppression & misinfo/disinfo about how/when/where to vote 2/
But even well-intentioned efforts by platforms to rein in electoral misinfo can result in silencing political expression & dissent. So, it's critical any restrictions on content be necessary & proportionate, carried out transparently, & give people access to meaningful remedy 3/
Unfortunately, changes to policies have been rolled out piecemeal & can be *really* difficult to follow. Shout out to @carlymil & @2020Partnership for meticulously tracking & analyzing the constant updates to the policies of 15 platforms in almost real time 4/ https://www.eipartnership.net/policy-analysis/platform-policies
And some policy changes aren’t even formally announced. Today we learned from reporting by @RMac18 & @CraigSilverman at @buzzfeednews that Facebook quietly suspended recommendations for political/social groups ahead of #ElectionDay2020 5/ https://www.buzzfeednews.com/article/ryanmac/facebook-suspended-group-recommendations-election?scrolla=5eb6d68b7fedc32c19ef33b4
As critical, if not more so, is *how* platforms interpret/enforce their policies. They have considerable leeway & offer little transparency. This has led to some difficult-to-understand decisions, esp with regard to posts by politicians & repeat spreaders of misinfo/disinfo 6/
This is why transparency. is. key. Beyond insight into platforms’ decisions, we need data on error rates, on whether labels work to provide corrective info (and if so, which kinds), and on whether adding “friction” or downranking content is effective in slowing the spread of disinfo/misinfo 7/
We know that marginalized voices are the first to get censored & the least able to find alternative platforms. So it’s crucial that platforms enforce their policies in a fair, unbiased, and proportional manner and provide people w/meaningful ways to appeal decisions 8/
An elephant in the room is that dominant social media platforms were designed to maximize engagement - not to deliver reliable & accurate election information. Efforts to address misinformation have to grapple with this underlying business model & algorithmic recommendation systems 9/
It's also important not to overstate the role of social media. A @BKCHarvard study found that a highly effective disinfo campaign about voter fraud was an elite-driven, mass-media-led process. Social media played only a secondary & supportive role 10/ https://cyber.harvard.edu/publication/2020/Mail-in-Voter-Fraud-Disinformation-2020
Finally (and most importantly), elections are taking place all over the world (69 national elections in 2020 alone). I still get chills when I remember Sophie Zhang’s memo & how differently social media companies approach threats to people and elections outside the US 11/11