I’ve been quietly writing about the “borderline content” policies at YouTube and Facebook for a while, or failing to - it’s taking me more time than I want to get all the words in the right order. But let me drop a few takeaway thoughts:
Both YouTube and Facebook instituted policies in 2018 under which they reduce the circulation of content they judge to be not quite bad enough to remove. The content remains on the platform, but it is recommended less, or not at all. https://www.wired.com/story/youtube-algorithm-silence-conspiracy-theories/
They're not the only ones. Tumblr blocks hashtags; Twitter reduces content to protect "conversational health"; Spotify keeps select artists off its playlists; Reddit quarantines subreddits, keeping them off the front page. And other platforms do it without saying so.
The kind of stuff these platforms reduce includes misinformation, conspiracy theories and hoaxes, anti-vax content, and 2020 election fraud claims; they'll also reduce stuff that, if it were a little worse, would violate their policies against harassment, nudity/porn, hate speech, etc. https://about.fb.com/news/2020/08/recommendation-guidelines/
This is not the same as "shadowbanning," despite some critics equating the two. Originally, shadowbanning referred to a tactic where content appeared public to the user who posted it, but no one else could actually see it. Here the content can still be found; its recommendation is limited.
Certainly, @beccalew is right: proclaiming that they've reduced borderline content by X percent is meaningless without knowing what counts, how they decide, how widespread that content is, etc. https://twitter.com/beccalew/status/1387125473553960963
These approaches were intended as a response to the kind of corrosive content online that, while each instance may not be worth removing, is a public problem in the aggregate. Remember, many of us have called these platforms out for not doing enough in this regard.
To say that platforms shouldn't amplify / reduce the circulation of content is meaningless; recommendation algorithms take hundreds of factors into account. Facebook is less likely to recommend posts that are weeks old, but we wouldn't call that suppressing content.
That's why it's absurd for FB VP Nick Clegg to say algorithms aren't the issue, that it's just what people want. Facebook is already adjusting for what its algorithms overvalue - and he praises that policy in the very same essay. Of course algorithms matter. https://nickclegg.medium.com/you-and-the-algorithm-it-takes-two-to-tango-7722b19aa1c2
But whether platforms' algorithms "amplify," and with what consequences, is still not entirely settled. (I recommend @niftyc for a subtle account of how the interaction between users and algorithms explains things like polarization.) https://www.wired.com/2015/05/facebook-not-fault-study/
The rallying cry of amplification may itself earn political points. @daphnehk rightly notes that this accusation is often overstated by policymakers eager to take action on platforms for whatever harm is most pressing at the moment. https://twitter.com/daphnehk/status/1377665960622977027?s=20
Still, this is content moderation by other means: it is not nearly as transparent or accountable as removal, and it is certainly designed to limit content the platform decides is problematic - and, of course, to protect the platform and its relationship with advertisers.
Given that, the same concerns apply: Is this reduction of content fair to those who post it? Is this reduction beneficial or problematic for the diversity of public discourse? Is it done according to fair principles? Should platforms be making these decisions?
And it throws a real wrench into policy discussions, where it tangles with efforts to protect speech and a healthy public sphere. https://policyreview.info/articles/news/borderline-speech-caught-free-speech-limbo/1510
It is very hard to know when reduction is happening, to call it out, or to judge its effects, because there's no baseline for how far a post should have circulated. Reduced compared to what? The way the algorithm would otherwise work, but that's also a product of the platforms' choices.
And it feeds into the suspicion of users who don't trust platforms and their moderation efforts. It's not the same as shadowbanning, but I get why critics might equate the two, and why conservative pundits can score political points by lumping them together.
We make a category error when we do not include reduction techniques in the content moderation debate. But we also make a mistake when we assume that any adjustment of recommendation must be suspect - because what's recommended is entirely a construct of the platform.
What's really hard to grasp is how some interventions come to be seen as suspect, and others as reasonable moves made in the service of the user. They're ALL techniques that shape what circulates and what doesn't, what's hosted and what's removed, but we assess them differently.
This suggests to me that all of this - what platforms amplify, what they monetize, what they moderate away - is founded on a long give-and-take between platforms and publics over what we're okay letting enjoy visibility, and what we're okay having a platform dial back.
Spam? Clickbait? Old content? Newer content? News? Not news? “Coordinated inauthentic behavior”? Ads? The question is not only where platforms draw lines and what they do about them, but what categories become acceptable and not, and how that happens.
Also, "borderline content" - the term both YouTube and Facebook use - is a problematic one, as it echoes "borderline personality disorder" too closely. I'd prefer they use "reduction" policies, "recommendation" policies, or "do not recommend" guidelines.
You can follow @TarletonG.