This is related to today's Biden story: A few months ago the fact-respecting portion of the internet was outraged at social media companies allowing Plandemic to go wildly viral, despite stated policies against health misinfo. The video was taken down after millions of views...
which triggered a second-wave story about the outrageous censorship of taking down a video claiming microbes in sand would cure COVID and ppl shouldn't wear masks.

My personal opinion is that the takedown was a bad call; turned the misinfo actor into a martyr for free speech.
One of the significant sources of frustration around Plandemic for me was that there were very clear signals that the person in the video had been trying to go viral for weeks, and that the video itself appeared to be getting a lot of pickup. The platforms knew. They waited.
The factchecks eventually came out, 2 days after the fact. See my pinned tweet for how this all played out, and what impact those factchecks had.

Had they throttled distribution to give fact-checkers time to act, the spread of false info could have been managed far better.
There are 3 action buckets FB (and most others) use for moderation: remove, reduce, inform. Remove is takedown - when that happens there is a discussion of censorship. Inform is factcheck - when THAT happens there is ALSO a discussion of censorship, which is ridiculous
And then there is "reduce" - throttling virality, not pushing the share of the content into the feeds of friends of the person sharing it. This is now apparently also being cast as censorship, because ppl are trying to reframe *distribution* as speech (when it is reach)
Coming up with policies & new mechanisms to address virality, and curation, are two of the most significant things platforms and the public, and likely regulators, need to do to address the most destructive facets of the current information environment.
As far as the Biden article: FB is using "reduce" to enable "inform". It is buying some time for verification of a very significant story that itself falls under a different policy area - concern about the veracity of material leaked in the run-up to an election.
I think this is actually a very good use of the policy levers at its disposal. It is also doing this transparently, despite the fact that the censorship-howlers, who also think factchecking is censorship, immediately began howling about censorship.

No big shock there.
There are tradeoffs: if virality is unfettered & nothing is fact-checked, don't be surprised when wild nonsense trends. Provided that this policy is applied in a viewpoint-agnostic way, it seems to be a very solid middle ground for addressing info threats ahead of 2020 and beyond
You can follow @noUpside.