I don't envy the moral questions facing Zuckerberg, Dorsey, and employees of social media companies. Allow disinformation campaigns or subjectively censor? There's no great solution and they're forced to walk a tightrope trying to keep diverse users/regulators/politicians happy
2/ at its core, this is a fundamental problem of speech. There are objective facts, but the moment we ask humans to determine them we introduce subjectivity. As we've seen, the "consensus of experts" can be not just wildly wrong, but wildly dishonest and politicized (like with WHO).
3/ at the extremes we could allow nearly all speech (as the US first amendment requires for government policy), or we could hold the platform aggressively accountable for curating/censoring complex info and political opinions in real time.
4/ we're currently in a very awkward middle ground. This balancing act has always existed in social institutions - in private schools, churches, even restaurants. But it's magnified when we have A. a global user base with no consistent local social/moral/political norms,
5/ B. the premium platforms for political advertising as well as for social interaction. i.e. we now depend on advertising platforms for socializing, blending social content with marketing more aggressively than ever before.
6/ C. the technology of advertisers and propagandists ahead of users' understanding of how they're targeted by marketers. Advertising algos are "smarter" than users who don't understand how they're being manipulated (to buy a product or cast a vote).
7/ people often seem to think it's easy to define what's hate and propaganda and disinformation. It's not. The top body of experts (that we're supposed to trust?), the World Health Organization, was deliberately spreading politicized disinformation about covid in January/Feb.