The mistake that the most strident Facebook critics make is imagining that removing bad things is easy, and comes down to decisions that aren't taken or buttons that aren't pressed. Wrong. But Facebook's approach is "it's hard and complex, but we can keep adding filters and moderation". And...
I keep wondering if that's fundamentally unscalable. The problem is NOT that it's much easier than Facebook says and they just lack the will or competence. Maybe the real problem is that it's much *harder* than Facebook (or Google or Twitter) thinks.
Hence the analogy with anti-virus on Windows 20 years ago. In the end, anti-virus scanners were not the answer - the solution was iOS, or ChromeOS, or the cloud - moving to models where that behaviour is physically impossible, instead of trying to filter it out.
And the alternative might be that we just accept that a certain level of 'bad' is inherent in the new thing and plan to reduce it slowly, over decades, rather than insisting on solving it all right now. We accepted car accidents, and criminals using the telephone, after all.
In other words: the stupid criticism of Facebook is "removing bad stuff is much easier than they say". But a much tougher criticism might be "no, it's much harder than they think and actually cannot be solved any time soon".
Car safety took time.