Lions and bots and trolls—oh my! Online conversations provide organizations with honest thoughts and feelings from their audiences, but are all of those mentions actual people? Bots and trolls aren’t just for science fiction and fairy tales anymore.
Bots and trolls are similar, but there are some key differences between the two. Knowing what they are and what they look like can reduce the negative impacts they might have on your campus’s online conversation and your overall brand perception. Let’s look at what they are and how to identify each.
Bots
- Are automated social media accounts programmed to perform actions that mimic humans (e.g., liking, sharing, or commenting on posts).
- Can be helpful or entertaining. Some are programmed to help Twitter users condense threads into clickable links (@threadreaderapp) or regularly share pictures of Earth from space (@dscovr_epic).
- Can be harmful. Some are programmed to spread fake information, violate the privacy of others, or spam other audiences.
- Can work independently or as part of a more extensive network (botnet).
People who use bots for good (humor, activism, etc.) usually indicate they’re bots in their bio text and follow rules designed to avoid upsetting real humans or getting banned by Twitter. Twitter has started testing a feature that labels “good” bots on their profiles and on each post.
Good bots don’t:
- @ mention people who haven't opted in.
- Follow Twitter users who haven't opted in.
- Use a pre-existing hashtag.
- Exceed Twitter’s rate limits for the number of tweets/retweets an account can post per day.
|Good bots are programmed to share information or perform specific tasks when requested. The @dscovr_epic account regularly shares pictures of Earth from space, and its bio states that it is indeed a bot.|
Bots used for more dishonest purposes (e.g., fake news, spam, propaganda) don’t usually say they’re bots and work to hide their bot-ness. Although social media companies continue to identify and remove bots from their platforms, new accounts emerge daily.
Trolls
- Are fake accounts directly controlled by a human.
- Post inflammatory messages or off-topic comments, though the most sophisticated trolls present themselves as friendly online rather than aggressive.
- Can work independently or coordinate as a group.
Many consider trolls to be nothing more than highly inflammatory accounts that actively work to anger and frustrate others. While that’s true of some trolls, not all of them operate that way.
The most cunning trolls work slowly to gain support from individuals across the ideological spectrum and push them deeper into their already solidified beliefs. In this way, they don’t start fights; they quietly maintain polarization online. They may look friendly, but their purpose is to embed themselves in a community and build a following they can influence later. Like bots, trolls can actively spread disinformation online and frustrate audiences.
|Twitter users engaged with this mention from @IamTyraJackson at a high rate without knowing that Tyra Jackson wasn’t real. Twitter eventually suspended this troll account.|
Impact of Bots and Trolls
In 2020, researchers from Carnegie Mellon University captured and reviewed 200 million tweets about COVID-19 (stay-at-home orders, reopening the US, etc.). Here’s what they found.
- Of the top 50 influential retweeters, 82% of the accounts were bots.
- Of the top 100 retweeters, 62% were bots.
- Bots fueled nearly 50% of the entire conversation.
The sheer number of fake accounts online can directly affect how accurately higher ed marketers understand the online conversation about their campus and how their brand is perceived.
Bots and trolls have the potential to:
- Spread false information about a campus by disguising themselves as a community member.
- Intentionally deceive a campus's audience into believing false information.
- Dilute a campus's brand or public perception.
- Increase the difficulty of providing accurate information to audiences.
- Frustrate or generally annoy online audiences.
Identifying Bots and Trolls
So what are some potential red flags that can help you detect bots and trolls?
Account Information Red Flags
- No profile picture. If there is a picture, do a quick reverse image search (the classic Catfish check) to see where else it appears online. Multiple bot or troll accounts often use the same profile picture.
- The screen name or Twitter handle doesn’t resemble a human name.
- The Twitter handle looks like a computer-generated alphanumeric scramble.
- For bots, bio information is not specific to a person. Trolls present themselves as real people and include information to disguise themselves.
- Many of the account’s followers lack profile pictures, or are similar-looking accounts that follow and engage with one another.
Activity Red Flags
- More than 50–60 tweets per day is suspicious, and more than 144 tweets per day is highly questionable.
- Many retweets and/or tweets with word-for-word quotes of article headlines with few original posts.
- Tweets in multiple languages from one account or tweets from numerous international locations in a short amount of time.
- A high proportion of tweets are advertisements.
- Replies to other accounts use odd phrasing, similar to a program mimicking human speech, and rarely develop into genuine conversations.
- The account has few followers (e.g., 76), but their posts get tons of likes/retweets (e.g., 23,000 interactions with one tweet).
- The number of likes and retweets on a single post are very similar (e.g., liked 100 times, retweeted 105 times).
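To make these heuristics concrete, the activity red flags above can be combined into a rough scoring sketch. Everything here is an illustrative assumption (the field names, the weights, and the exact cutoffs beyond the 50–60 and 144 tweets-per-day figures cited above) rather than a vetted detection method:

```python
def red_flag_score(account: dict) -> int:
    """Count how many activity red flags an account trips.

    Hypothetical heuristic only: field names and weights are
    illustrative assumptions, not a vetted bot-detection method.
    """
    score = 0

    # More than 50-60 tweets/day is suspicious; more than 144 is
    # highly questionable, so it counts double here.
    tweets_per_day = account.get("tweets_per_day", 0)
    if tweets_per_day > 144:
        score += 2
    elif tweets_per_day > 50:
        score += 1

    # Mostly retweets/headline quotes with few original posts.
    if account.get("original_post_ratio", 1.0) < 0.2:
        score += 1

    # Few followers but outsized engagement on a single post
    # (e.g., 76 followers, 23,000 interactions on one tweet).
    if account.get("followers", 0) < 100 and account.get("max_post_interactions", 0) > 10_000:
        score += 1

    # Likes and retweets on a post suspiciously close in number
    # (e.g., liked 100 times, retweeted 105 times).
    likes, retweets = account.get("likes", 0), account.get("retweets", 0)
    if likes and retweets and abs(likes - retweets) / max(likes, retweets) < 0.1:
        score += 1

    return score
```

An account matching the examples in the list above (200 tweets a day, almost no original posts, 76 followers with a 23,000-interaction tweet liked 100 times and retweeted 105 times) would trip every check; a higher score simply means the account deserves a closer look, not that it is definitely a bot.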
Although these red flags can aid in detecting trolls and bots, they’re not a surefire way to catch them every time. Some accounts might seem fake because of their behavior, bio information, or profile picture, but people use social media for many reasons, and the “correct” way to use it looks different for everyone. Not every angry comment or grammatically incorrect mention is from a bot or a troll.
Responding to a Troll Account
You might find yourself in a situation where you have to respond publicly to a troll account.
In 2020, an incoming Clemson University student posted racist and offensive content online. The public shared their frustration over the account and many demanded that the institution take action against the student. Clemson officials shared a statement explaining that there was never a student with that name registered at the institution and cited the post as the work of trolls.
Not every troll requires a public statement to your audiences. Higher ed communications professionals should aim to properly match a response to the severity of the issue. In some cases, refer to the old expression “don’t feed the trolls.” Trolls who are hoping to disrupt conversations might be best left alone. Giving potential troll accounts attention might be exactly what they’re looking for to continue their troll-y ways.
That being said, trolls who repeatedly antagonize members of your online community or who spread misinformation should be reported and/or blocked. It’s important to watch over your online communities and to stay alert to bad actors who might be intentionally disruptive.
Here are a few other resources to help detect bots and trolls. If they fit your needs, you can use them to gain a better understanding of the bots and trolls that are out there and the impact they have on your campus and brand.
- Bot Sentinel—Google Chrome extension that helps identify fake accounts.
- Twitter Audit—Gauges likely fake followers and whether an account is real or fake.
- Botometer—Created and managed by researchers at Indiana University, this gauges the likelihood of a bot account.
- Spot the Troll—Developed by researchers at Clemson University, this online quiz tests an individual’s knowledge of trolls and educates the public on detecting them. The quiz provides several examples of potential trolls.
Learn more about social media strategy in our book Fundamentals of Social Media Strategy and the training series based on the book.