From calling a crash-out Brexit an “Australia deal” to faked videos of protest victims, we are all exposed to and influenced by misinformation far more than we realise. We need to get wise to this. Thread below.
The stakes are high. Disinformation threatens the globe’s ability to effectively combat a deadly virus and hold free and fair elections. Artificial intelligence is accelerating the threat.
They are spamming us
“Since the Cold War, propaganda has evolved in a direction opposite to that of most other weapons of war: it has become more diffuse and indiscriminate, not less.”
— JOSHUA YAFFA, THE NEW YORKER
Seventy countries used online platforms to spread disinformation in 2019 — an increase of 150% from 2017. Most of the efforts focused domestically on suppressing dissenting opinions and disparaging competing political parties.
However, several countries — including China, Venezuela, Russia, Iran, and Saudi Arabia — attempted to influence the citizens of foreign countries.
The threat is not limited to foreign actors. In fact, domestic players are expected to have a greater impact on the 2020 US election than foreign actors, according to a report by New York University (NYU).
Nearly two-thirds of Americans saw news about the coronavirus that was completely made up. @WHO said: “The antidote lies in making sure that science-backed facts and health guidance circulate even faster, and reach people wherever they access information.”
In the case of COVID-19 content on the more popular social media platforms, questionable posts represent only a small fraction of reliable ones; the same holds on Reddit.
However, on the niche site Gab, while the volume of questionable posts is only ~70% of the volume of reliable ones, questionable posts attract ~3 times as many engagements as reliable ones. https://www.nature.com/articles/s41598-020-73510-5
Engagement with fake news varies by platform. Among the more popular platforms, Twitter is the most neutral and YouTube amplifies unreliable sources the least. Among the less popular platforms, Reddit reduces the impact of unreliable sources while Gab strongly amplifies them. https://www.nature.com/articles/s41598-020-73510-5/tables/2
Hyper-realistic doctored videos called deepfakes are already with us. Commercial programmes replace only the target’s face below the forehead, but Stanford has faked entire 3D heads and Heidelberg University an entire body.
So we can expect sophisticated state actors to have the capability to create entirely believable fakes. But everyone has the tools to create “shallow fakes” — which speed up, slow down, or otherwise alter a video. That represents a threat to diplomacy and reputation.
For example (and this is one I was exposed to and believed), a video of Nancy Pelosi, the speaker of the US House of Representatives, was slowed down to make it seem like she was slurring her words.
The video was shared widely on Facebook, including by Trump and former NYC mayor Rudy Giuliani, who tweeted: “What is wrong with Nancy Pelosi? Her speech pattern is bizarre.” Facebook refused to remove the clip even after it was fact-checked as false.
Whether deepfake or shallowfake, the accessibility and potential virality of doctored videos threaten public figures’ reputations and governance itself. They threaten one of our most basic rights: that of choosing our own leaders.
These are tactics. The more strategic part of this is computational propaganda. Nearly half of the world’s population is active on social media, and recent polls show Americans are more likely to get political and election news from social media than from cable news.
User engagement is how Facebook, TikTok, Twitter and Google generate revenue. To drive engagement, they employ algorithms that push the content most likely to appeal to a particular user to the top of their feed.
The use of algorithms to feed us what we want to see creates and perpetuates entrenchment and bias, as we engage predominantly with content and opinions that reinforce our existing beliefs. These filter bubbles can be manipulated.
They block out alternative perspectives and evidence, letting false narratives influence their target population more effectively. The platforms measure engagement through likes, comments, and shares, so malicious actors use shill bots to increase the reach of disinformation.
One study alone uncovered 13,493 accounts that tweeted during the United Kingdom European Union membership referendum, only to disappear from Twitter shortly after the ballot. https://journals.sagepub.com/doi/pdf/10.1177/0894439317734157
Just do a wee experiment with me. In Twitter search, type in each of these: @EuFear, @steveemmensUKIP, @uk5am, and @no_eusssr_thx. They are bots the study identified. Their accounts were deleted after the referendum; @trendingpls and @uk5am were repurposed.
That’s just a handful. Now think about tens of thousands of them.
The vast number of bots on social media platforms undermines truth and democracy, as bots and those algorithms help disinformation spread much faster than the truth. On average, false stories reach 1,500 people 6 times faster than factual ones.
What can we do? I think relying on tech and government to solve this has a low chance of success: governments seem more likely to use this against us than to fix it, and the platforms make more money this way. I think the only real answer on a personal level is digital literacy.
Make sure your settings only allow you to see verified accounts. Show your kids the videos above so they know they shouldn’t trust everything they see.
You can follow @markpalexander.