Everyone should be paying WAY more attention to GIFCT, the database platforms use to share information about terrorist content (or things that met someone’s definition of terrorist content, which is part of the issue). Here comes a thread. 1/
This is a good moment to pay attention because GIFCT is in the middle of overhauling its internal governance, and a lot of civil society groups are very vocally disappointed by the direction it’s taking. See this https://www.hrw.org/news/2020/07/30/tech-firms-counterterrorism-forum-threatens-rights or this https://blog.witness.org/2020/07/witness-joins-14-organizations-to-urge-gifct-to-respect-human-rights/ 2/
GIFCT is a big deal both because of what it specifically does (set rules for “violent extremist” speech, an important and very contested category) and because it is a model for future semi-private Internet governance. 3/
On the governance model issue, see @evelyndouek's great "Content Cartels" article, which gives lots of details on GIFCT and some similar efforts, and tees up key questions we should be asking. https://knightcolumbia.org/content/the-rise-of-content-cartels 4/
GIFCT was started by four big platforms (Microsoft, YouTube, Facebook, Twitter): the shared hash database was announced in late 2016, and the forum itself launched in 2017. Members share hashes of extremist images and videos (and later URLs), so any member company can automatically detect and block (or otherwise respond to) them. 5/
Hashes get added to the database because they violate a platform’s TOS (and originally they were supposed to be particularly egregious violations). These images and videos don’t necessarily violate any law, or may violate some countries' laws but not others. 6/
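To make the mechanics concrete, here's a toy sketch of how this kind of hash-sharing database works. It's my illustration, not GIFCT's actual code: I use an exact SHA-256 hash as a stand-in, while the real system uses perceptual hashes (like Facebook's open-sourced PDQ for images) so re-encoded copies still match.

```python
import hashlib

# Shared industry hash database: each member contributes hashes of
# content it judged to violate its own TOS. (Toy stand-in: the real
# database uses perceptual hashes, so near-duplicates also match.)
shared_hash_db: set[str] = set()

def fingerprint(content: bytes) -> str:
    """Stand-in fingerprint; a real system would use a perceptual hash."""
    return hashlib.sha256(content).hexdigest()

def contribute(content: bytes) -> None:
    """A member adds a hash after deciding the content violates its TOS."""
    shared_hash_db.add(fingerprint(content))

def check_upload(content: bytes) -> str:
    """Any member can screen new uploads against the shared database."""
    if fingerprint(content) in shared_hash_db:
        return "block"  # or queue for review, per each member's own policy
    return "allow"
```

Note what's missing from that flow: no court, no statute, no appeal. One member's TOS call propagates to every member's filter.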
So an important first-order question about GIFCT is “WHAT content is banned?” Which groups count as “terrorist”? (Answer: Mostly Islamist groups.) Can speakers “glorify” terrorism? (Answer: no.) 7/
72% of the hashed images, videos, and URLs in GIFCT's database fall into this "glorification of terrorist acts" category, so we should know what it means. Which groups get to celebrate violent "rebellion" or "resistance" and which do not? At the edges, that's an incredibly fraught question. 8/
GIFCT also raises the second-order “HOW is content banned” questions that matter so much in platform regulation. How does GIFCT decide what speech to prohibit? What role does government pressure play? Can govts use GIFCT to bypass courts and human rights protections? 9/
*Which* governments can influence GIFCT, achieving global enforcement for their own domestic definitions of prohibited extremist content? How do domestic disputes involving powerful govts - like the ones on the streets of the U.S. today - play out? 11/
How are the rules for our online speech shaped by platforms’ operational implementation of GIFCT? GIFCT filters will block lawful and important speech if humans don’t carefully check filters’ output. But small platforms don’t necessarily have resources to do that. 12/
That makes small platforms the downstream recipients of major platforms’ decisions about contested speech. (And maybe downstream recipients of whatever govt pressures influenced the major platforms.) 13/
Filters built on GIFCT’s database fail, and block legitimate speech, when (1) content is in the database but doesn’t violate the law or that platform’s TOS; or (2) content that was prohibited in one use gets re-used in scholarship, counterspeech, news, etc. Filters can’t tell the difference. 14/
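Concretely, in the toy sketch above, the filter's only input is the content itself. There's no way to pass in who is posting or why, which is exactly how a news clip or counterspeech gets the same verdict as the original propaganda:

```python
# Continues the sketch above. The lookup never sees the uploader,
# caption, or purpose -- only the bytes.
clip = b"...video bytes..."   # placeholder content
contribute(clip)              # added after one platform's TOS decision

for context in ("original propaganda", "news coverage", "scholarship"):
    # `context` never reaches the filter; this loop just makes the point
    print(context, "->", check_upload(clip))  # prints "block" every time
```

That's why human review of filter output matters so much, and why small platforms that can't staff it are stuck.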
The platforms that set up GIFCT know these are big and hard issues. That’s why GIFCT is undergoing a major governance overhaul – with a new Exec Director, Board, and Independent Advisory Council. https://gifct.org/transparency/  15/
That process matters IMO at least as much as the Facebook Oversight Board. But it’s received about one ten-thousandth of the attention. I’d say that’s mostly because FB, a major player behind both efforts, *wants* attention to the Oversight Board and not GIFCT. 16/
There is a new tool that lets member companies say if they think an image, video, or URL should *not* be on the blocklist. They attach a label to the hash, which other platforms can then see and bake into their own assessment or review queueing. That's a really important fix. 19/
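I don't know GIFCT's actual data model, so here's a hypothetical sketch of how such dissent labels might work; every name in it is my invention, not GIFCT's API:

```python
from collections import defaultdict

# Hypothetical: labels members attach to hashes they think don't belong
# on the blocklist. Other members can see the labels and factor them in.
hash_labels: dict[str, set[str]] = defaultdict(set)

def dispute(hash_value: str, member: str) -> None:
    """A member flags a hash it believes should not be on the blocklist."""
    hash_labels[hash_value].add(f"disputed:{member}")

def route_match(hash_value: str) -> str:
    """Each member folds the labels into its own handling of a match."""
    if hash_labels[hash_value]:
        return "human_review"  # disputed matches get a second look
    return "auto_block"
```

The point is that one platform's doubt can now travel with the hash, instead of every member silently inheriting the original block decision.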
They also created a tool that lets a member who finds terrorist content at a URL hosted by another member report it, one-to-one, to the host for assessment. That’s a pretty key piece of communication infrastructure, I think. 20/
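Again hypothetically (GIFCT hasn't published a schema, so every field here is an assumption), a one-to-one URL report might look something like this:

```python
from dataclasses import dataclass

@dataclass
class UrlReport:
    reporting_member: str  # who spotted the content
    hosting_member: str    # who hosts the URL and will assess it
    url: str               # where the suspected terrorist content lives
    note: str = ""         # optional context for the host's reviewers

report = UrlReport(
    reporting_member="PlatformA",
    hosting_member="PlatformB",
    url="https://example.com/some-page",  # illustrative URL only
    note="Please assess under your own TOS.",
)
# One-to-one: only the hosting member receives the report, and the host,
# not the reporter, makes the final call under its own rules.
```

The design choice that matters: the host keeps decision-making power, rather than the database dictating a takedown.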
They’re working with a firm called SITE Intelligence Group to share translations of potential extremist content, context, and info about the groups involved. This could have real problems, & needs its own transparency. But it responds to a real issue for small platforms that can’t hire their own experts. 21/
Those are important moves. They tell us that people inside GIFCT and platforms are thinking carefully and trying to resolve very hard issues. But this governance will affect all of us, for a long time. We should demand transparency and pay attention. 22/
There's also a ton to say about the EU's pending Terrorist Content Regulation. Its filtering provisions would push still more power into GIFCT and other unchecked private mechanisms. Here's what civil society groups had to say about that: https://cdt.org/wp-content/uploads/2019/02/Civil-Society-Letter-to-European-Parliament-on-Terrorism-Database.pdf 24/
I'll wrap up here. Thanks for listening. And -- despite my criticisms -- thanks to the many truly conscientious people who have taken on the incredibly hard job of making GIFCT better. I hope you prevail. 25/25