This is a strong, important & timely decision, esp in light of Facebook's enforcement mistakes around #ResignModi yesterday.
There's some really important recommendations here. Facebook has 30 days to respond and that'll be worth watching.
A few things worth highlighting /1 https://twitter.com/OversightBoard/status/1387685869398732801
The case concerned a video that was critical of Modi & the BJP. Facebook removed it under its Dangerous Orgs policy. That was a mistake, & after the Board selected the case but before it decided, fb admitted it got it wrong and reversed the decision. /2
This is not the first time the Board has done this. It's telling Facebook "you can't moot a case by simply reversing decisions when we pick a case. You wouldn't have found the mistake otherwise." /3
This was a case of human error. It was flagged for terrorism, a human looked at it and removed it. Facebook said the length of the video (17 mins) and the complexity of the content could have contributed to the mistake—reviewers don't always have time to watch videos in full /4
The user's account was automatically locked, and they did not get a review. Facebook said this was due to reduced appeal capacity due to COVID. The Board says "yeah, we get that, unusual times etc. but you really need to get back to full capacity asap" /5
The Board was very conscious of the political context. It expressed concern about mistakes that especially impact minority language speakers or religious minorities, and noted that the political context in India right now "underscores the importance of getting decisions right" /6
The Board asked for, and Facebook refused to provide (on the basis that it was not "reasonably required" or for legal reasons), answers to the Board's qs re: possible communications from Indian authorities. It's possible fb is legally restricted from doing so, but still (!) /7
The Board acknowledges that "mistakes are inevitable when moderating content at scale" but that without knowing error rates it's impossible to tell from one case whether this is a systemic problem or a one-off. /8
This is real progress from the Board!! Its previous decisions were WAY too case-specific, & didn't reckon with the way CoMo is different from offline speech cases
I talk about why that's important here ( https://columbialawreview.org/content/governing-online-speech-from-posts-as-trumps-to-proportionality-and-probability/) & am stoked to see the Board talking about this /9
The recommendations then.
1. Fb should translate its Community Standards into Punjabi because, you know, 30 million ppl speak it in India and more around the world.
Uh... INSANE this had to come from the Board, but here we are. Geez, fb. /10
2. Fb should restore human review to pre-pandemic levels asap, while protecting health of staff.
Yes. And (this is me now) fb in its response really should disclose how far off those levels it is, and its timeline and plan for getting back to full capacity. /11
(me still) This has been going on for a while now, and looks set to continue in certain areas for even longer. There's only so long a company with the resources of fb should be able to keep pleading "pandemic" when the "pandemic" also makes adequate CoMo even more important /12
3. Finally, the Board says fb should increase public info on "error rates"

This recommendation is like Christmas come early for me, and is just obviously the way the conversation about CoMo needs to go
(Again, sorry, can't resist: https://columbialawreview.org/content/governing-online-speech-from-posts-as-trumps-to-proportionality-and-probability/) /13
The Board says fb shd do this by "making this info viewable by country and language" & "underscores that more detailed transparency will help the public spot areas where errors are more common, incl potential specific impacts on minority groups, & alert fb to correct them." /14
These recommendations are targeted, strong, and important. They are not binding. How fb responds in 30 days is the most critical part of this process (& also the part ppl tend to pay least attention to). /15
There's a lot of (in many respects, justified!) skepticism of the Board. It can't fix everything. But these recommendations show why I'm still hopeful it can make some meaningful impact. If fb plays ball. /16
As always, we're tracking this on the @lawfareblog FOBblog, and will be watching next steps.
The team @tia_sewell, @jacob_r_schulz and @qjurecic get this stuff up before you can say FOBblog 3 times fast. /17 https://www.lawfareblog.com/welcome-fob-blog-overseeing-facebook-oversight-board
Okay, I think that's it from me. The decision is worth reading.
Your move, Facebook. /18
Actually, one more thing. This mistake happened bc the human reviewer did not have time to watch the whole video & make a considered decision.
Yesterday, the EU Parliament approved a reg requiring platforms to remove terrorist content w/in 1 hour. https://edri.org/our-work/european-parliament-confirms-new-online-censorship-powers/