1/ In the last two days my Timeline has filled with people expressing concern about accounts spreading twaddle.

Globally in 2020, nearly a fifth of shares, likes and comments across all platforms were linked to misleading or mischievous posts.
2/ That figure, and much of this thread, comes from a piece by the neuroscientist Tali Sharot in the current edition of the science journal Nature.

She points out that social media platforms currently reward users for making up nonsense, or for getting outraged by the nonsense of others.
3/ The debate so far has been about how social media companies should suppress fake news or clickbaity nonsense.

But maybe it needs to be about how to improve the trustworthiness of what users want to share.
4/ If users get an ego boost or other chemical reward in the brain for likes, follows and retweets, how might Twitter, Facebook, Insta etc. rejig their platforms to reward users for reliability, accuracy and truth?
5/ At the moment, users are rewarded when their post is popular, and there is a greater reward for sharing unreliable or contentious information: fake news generates many more retweets and likes than reliable material, spreading between 6 and 20 times faster.
6/ That's our fault for sharing crud. But it's also the platforms' fault for constructing their user experience in a way that gives carrots to the mischievous, the populists and the dopes.

So what if the system visibly rewarded reliability or accuracy?
7/ The history of humankind suggests that we will engage in the kind of activity that rewards us.

So Twitter could add a "Trust" button, with the number of "Trusts" displayed alongside the likes and retweets.
8/ You would have to assume users would chase "Trusts" in the same way they do likes and retweets.

There's also no obvious downside for the social media companies, because engagement is engagement, whether it's in outraged comments or "Trusts".
9/ If a post has received next to no "Trusts", you would imagine most users would pause for a moment before engaging with it.

Of course it could be gamed, but so can likes and retweets, and Twitter is getting better at spotting this kind of thing.
10/ Social media companies already employ fact-checkers, who could be given the means to disable the "Trust" button on any egregious piece of anti-vax nonsense or suchlike. The platforms could even let them disable retweets.
11/ Prof Sharot also points out that the blue tick is pretty meaningless. It only confirms that users are who they say they are, not that they are reliable.

What if the blue tick were replaced by, or supplemented with, a "Thumbs Up" for reliable users?
12/ A user's goal would then become being identified as a reliable tweep, with a high trust rating for each tweet, and not just being popular and retweeted.
13/ She concludes that the human tendency to gossip and share misinformation no doubt runs very deep, but that this is an experiment worth trying.
14/ I have summarised Prof Sharot's piece in this thread because she doesn't appear to have an account here, and the article is behind Nature's paywall, but the ideas deserve a bit of debate.
You can follow/subscribe to Nature here 👉 @nature