How to spot fake Twitter accounts and misinformation


This was originally a Facebook post with the sections as comments. I updated it as a Google document, and finally as a blog post to make it more accessible and easier to find and update.

Introduction

The main way to keep up to date with what’s happening on the ground with the Black Lives Matter protests is via social media (and Twitter in particular).

It’s really confusing though, because a lot of people and organisations are acting in bad faith, deliberately making communication and fact-checking difficult, and using manipulation strategies to drown out the real information about what’s happening on the ground.

Twitter genuinely does have a lot of really useful citizen journalism and information, but it can also be a pretty terrible place, because the owner lets racists, bigots and fake accounts run by authoritarian governments and the far right run free, and does very little to stop them.

A lot of the communication-control and disinformation methods used by twentieth-century authoritarian regimes to quash dissent have been updated as new digital strategies. Putin uses them, the Chinese government uses them, the Republican Party and lots of far-right groups use them, and Dominic Cummings made his career and money using them for the Leave campaign.

I’m going to post some links here to explanations of common social media manipulation strategies and tricks, and how to spot them:

Narratives

Political campaigns talk about “narratives”: how the story is presented, who is presented as a hero or villain, and what perspective you are encouraged to see things from. Being in control of the narrative puts you in control of how people see the situation.

When discussing narratives, you talk about “framing” something, as if it were a picture hung on a wall showing that view.

Bots and botfarms

A big one: automated fake accounts (aka bots). They pretend to be real people, but what they post is from a script set by the owner. The guide in the link is really thorough and has lots of photo examples of things to look out for.

https://medium.com/dfrlab/botspot-twelve-ways-to-spot-a-bot-aedc7d9c110c

The Digital Forensic Research Lab, who wrote this article, have a lot of other good ones about social media and authoritarianism around the world: https://medium.com/@DFRLab

Astroturfing

The higher-budget version of bots is astroturfing (i.e. creating fake grassroots support). The comments are written by real people instead of being automated, but they still come from fake accounts.

Putin is known to have an in-house department of people doing this. Any time there’s criticism of him in the Guardian, for example, the comments are mysteriously full of people defending him who claim to be from places in the UK, yet whose English is either too perfect and formal, or good but with small mistakes typical of Russian speakers.

A recent example of astroturfing happened last week, when Twitter and newspaper story comments were suddenly full of people defending Dominic Cummings with the same defences.

“Astroturf also tends to reserve all of its public skepticism for those exposing wrongdoing rather than the wrongdoers. A perfect example would be when people question those who question authority rather than the authority.”

https://medium.com/@nwankpa_ik/3-ways-to-identify-astroturf-and-propaganda-in-media-messages-1d386316ed16

Why bots and astroturfing are used

Setting up bots and paying astroturfers is meant to drown out genuine voices and information with the agenda of the person or organisation paying for them. It’s harder to find the real posts because there are so many fake ones.

They hope that their artificially amplified message will then be accepted as the general public opinion.

The “firehose of falsehood” is a propaganda technique where the same lie is continually repeated in multiple different places to try to give it legitimacy.

https://en.wikipedia.org/wiki/Firehose_of_falsehood

It also takes advantage of the fact that platform algorithms track what seems popular and show “popular” content to users more often. Twitter and Instagram have charts on their front page with trending topics. If you can use the hashtag-flooding strategy via bots and astroturfing on a popular topic, it hides the real information people are posting on that subject. To see it, you have to be following the right people, rather than coming across it on the front page.

Sealioning

(Named after this comic)

When someone pretends to be politely asking a lot of questions, but actually they don’t care about the answers. They just want to waste everyone’s time and energy and derail the conversation.

The questioner deliberately puts a lot of emphasis on how they are just sooo civil and sophisticated and such a wonderful debater, and on how the person they are hassling is rude and ignorant for not expending lots of energy on talking to them.

A common development of sealioning is to demand “evidence” from someone, but if they do waste their time finding some, it’s somehow never the right sort of evidence and is always dismissed. The intention isn’t actually to ask for information; it’s to deliberately waste someone’s time and energy.

Original source of the comic
http://wondermark.com/1k62/

Whataboutery

This is another way to waste people’s time and energy and derail the conversation away from the original point. For example, if the original topic was “the police target and brutalise black people”, there will be replies going “what about gangs?” or “what about this other case that has nothing to do with what you’re talking about?”

The aim is to use up so much energy and time addressing these tangents that no one gets to discuss the original topic in detail.

I am Very Reasonable

Another tactic is to frame anyone who is pointing out injustice or asking for change as unstable, pushy, a bit crazy. The person supporting the status quo frames themself in the narrative as sensible, grown up and reasonable.

Deliberately misspelling hashtags

They’ll pick a current trending hashtag and deliberately misspell it slightly. Then loads of bots and astroturfers will use it. It will come up in the Twitter trending chart on the front page along with the real tag, but the slightly wrong tag will ONLY have fake accounts in it, artificially pushing the agenda of whoever paid for the fake accounts into the public eye.

So, for example, a current one might be to write “blacklivesmater” with one “t”, and then fill it with fake Twitter accounts claiming that the protesters did something wrong and deserved to be beaten by the police.
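These near-miss tags differ from the real one by only a character or two, which is exactly what string edit distance measures. Here’s a minimal Python sketch (my own illustration, not from any platform’s actual detection system) that flags a hashtag sitting suspiciously close to a known tag; the `max_distance` threshold of 1 is an illustrative guess.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two strings, via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # delete a char from a
                            curr[j - 1] + 1,            # insert a char into a
                            prev[j - 1] + (ca != cb)))  # substitute if different
        prev = curr
    return prev[-1]

def is_near_miss(tag: str, real_tag: str, max_distance: int = 1) -> bool:
    """Flag a hashtag that is almost, but not exactly, the real one."""
    t = tag.lower().lstrip("#")
    r = real_tag.lower().lstrip("#")
    return t != r and edit_distance(t, r) <= max_distance
```

So `is_near_miss("#blacklivesmater", "#BlackLivesMatter")` is true (one missing “t”), while the correctly spelled tag is not flagged.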

(Yes, I know, it’s weird that the Christian Scientists’ newspaper has a good article about it- they often surprisingly win Pulitzer prizes too)

https://www.csmonitor.com/Technology/2018/0718/From-Russia-with-hashtags-How-social-bots-dilute-online-speech

Divide and conquer

This is also a strategy used in elections: make people who would otherwise be allies fight, so all their attention is on fighting each other rather than on the real issue or enemy.

Fake identities to sow division from within

Does the account bio claim to be a particular marginalised identity, but all the comments are attacking and criticising other people from that group?

For example, a common one is that the bio claims to be a trans person, but the account is really unpleasant to other trans users, telling them that they’re doing everything wrong and harming others in some mysterious, never-explained way.

Or perhaps they tell people that they’re wrong to speak out about injustice. They’re doing anti-racism or anti-homophobia or anti-misogyny all wrong and hurting people. It’s often not actually made clear what the person is doing wrong either. It’s just this nebulous label of wrongness and guilt.

This is a strategy from hate groups: they pretend to be one of the people they are attacking as cover. They also use it as a method of playing divide and conquer.

How to check out a Twitter account

1) Click on the user’s name

2) Check the account creation date

3) Click on tweets and replies

4) Does this seem like a real person? Do they seem to have a real life? Or do they only post about one (usually political) topic? Nobody has only one topic they ever talk about

5) Scroll back a bit- how do they interact with other people? Do they have real conversations? Do they seem like a real person, or do they only argue?

6) Was it created a while ago, but they have no other comments about anything, except to suddenly weigh in aggressively about a big news story? That’s usually a sign it was a spare fake account that has recently been fired up to make up the numbers

7) Has this account dramatically changed identity, tone or topic at some point? Even a different language? Usually a sign of a fake account

8) Number of posts. Does anyone really have time to post 50 times every day about how a certain politician should be re-elected? Does it almost feel like it’s on a schedule? Do they post heavily at a time of day that clashes with the time zone they claim to be in? Genuine heavy Twitter users have conversations with people, and natural ebbs and flows.
