Lots of high-profile political & media discussion of a story suggesting that ~1/2 of the accounts tweeting about #reopen are bots... Bots began to captivate politicos/public debate ~2yrs after their effectiveness heyday. Researchers had already moved on to looking at multiple factors to gauge manipulation. https://twitter.com/noupside/status/1037347612549173249
There are still lots of fake accounts on Twitter & FB; company transparency reports are a good place to see the #s being taken down. But even so, fake accounts don’t generally achieve the kind of impact they did in 2016; things like weights in trending algos have changed.
Unfortunately, a lot of bot-detection classifiers overindex on behaviors that particular groups think of as activism - rapid, high-volume retweeting, for example. Saying the same thing. Bombarding a targeted hashtag or person.
The effect still feels like an automated attack, particularly in some of the communities that attract a lot of people who pop up in mentions to yell/harass, get shut down, & respawn (hence recent account creation dates). These are low-quality/jerk accounts, but still human-operated.
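For illustration only, here is a minimal sketch of the kind of scoring heuristic that overindexes on those signals - the names, weights, and thresholds are made up and don't reflect any specific tool's model. The point: a committed human organizer or harasser trips it just as easily as automation.

```
# Hypothetical, oversimplified bot-score heuristic -- an illustration of
# overindexing on a few behavioral signals, not any real tool's method.
from dataclasses import dataclass

@dataclass
class Account:
    retweets_per_day: float       # volume of retweeting
    duplicate_text_ratio: float   # share of tweets repeating the same text (0-1)
    account_age_days: int         # days since account creation

def naive_bot_score(a: Account) -> float:
    """Return a 0-1 'bot likelihood' from three behavioral signals.

    Each signal also fires on human-operated activist or harassment
    accounts, which is why a score like this misclassifies them.
    """
    volume = min(a.retweets_per_day / 100.0, 1.0)      # high-volume retweeting
    repetition = a.duplicate_text_ratio                 # saying the same thing
    newness = 1.0 if a.account_age_days < 30 else 0.0  # recently (re)created account
    return 0.4 * volume + 0.4 * repetition + 0.2 * newness

# A dedicated human activist hammering a hashtag scores high too:
activist = Account(retweets_per_day=120, duplicate_text_ratio=0.7, account_age_days=20)
print(round(naive_bot_score(activist), 2))  # ~0.88 -- flagged despite being human-run
```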
This post by @benjaminwittes details what it's like to be at the center of some of the topics where this happens. He turned to bot-detection tools to try to gauge what was going on and compares a few of them here. https://www.lawfareblog.com/random-toxicity-whats-going-benjaminwittess-mentions
Those tools would be improved if they were a little clearer about what they are analyzing and what the results mean.
Media should also check the claims in bot articles more carefully; much like other misinformation, the correction won’t go as far as the original story.