First @justinhendrix points out the need for a National Commission to investigate the January 6th insurrection and wants to ensure that Congress asks each of these companies to preserve any and all potential evidence: 2/
My question: what are you doing to "ensure your tools—algorithms, recommendation engines & targeting tools—aren't amplifying conspiracy theories & disinformation, connecting people to dangerous content, or recommending hate groups or purveyors of disinformation...to people?": 3/
Then @GretchenSPeters of @CounteringCrime also asks about these tools, "Why can’t you, or won’t you, shut down these tools, at least for criminal and disinformation content?" 4/
Then @Imi_Ahmed of @CCDHate says "There is clear evidence that Instagram’s algorithm recommends misinformation from well-known anti-vaxxers whose accounts have even been granted verified status" and asks when Facebook will fix this: 5/
Then UK MP @DamianCollins writes "research from Facebook showed that 64% of people who joined FB groups promoting extremist content did so at the prompting of Facebook’s recommendation tools" and asks about related policy changes: 6/
Regarding Facebook's "Responsible AI" project, Erin Shields of @mediajustice asks if they are using AI to benefit society, instead of just their own growth: 7/
On Facebook's role in recommending groups like Boogaloo boys, @TTP_updates's @AnthroPaulicy shows the receipts and asks "Why is it that even with your specialized teams and AI tools, outside researchers and journalists continue to easily find this content on your platform?" 8/
@AnthroPaulicy continues with proof that "Facebook’s algorithms were offering up advertisements for armor and weapons accessories to users alongside election disinformation and posts about the Capitol riot" and asks why FB didn't do enough to stop it, despite public reports: 9/
Then @zittrain asks about Google's responsibility to surface accurate info: "To what extent do you view Google Web search as simply about “relevance,” rather than tweaking for accuracy?": 10/
@evan_greer writes "Attempts to remove or reduce the reach of harmful or misleading content, whether automated or conducted by human moderators, always come with tradeoffs and can lead to the silencing of legitimate speech" and asks: 11/
TLDR: It's important that the public understand how the design choices and tools these companies make contribute to the spread of disinformation and harmful content. "We are well past the point where platitudes and evasions are acceptable--we deserve complete answers..." 13/
You can follow @YaelEisenstat.