Not all bots are friendly
On social media sites like Facebook and Twitter, a bot profile can look just like a real person's. Behind the scenes, however, the account can rapidly send automated messages and target specific users.
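To see why this scales so easily, here is a minimal sketch of the automation involved. The platform call (`post_to_platform`) is a hypothetical stand-in, not a real social media API; a real bot would hit a platform's posting endpoint, but the core logic is just a loop over a target list.

```python
# Minimal sketch of automated, targeted bot messaging.
# `post_to_platform` is a hypothetical stand-in for a real API call;
# here it simply records what would have been sent.

sent = []

def post_to_platform(user, text):
    """Hypothetical posting call; records the message for this sketch."""
    sent.append((user, text))

TEMPLATE = "Hey @{user}, have you seen this story? {link}"

def run_bot(targets, link):
    # One short loop is all it takes to target any number of specific users.
    for user in targets:
        post_to_platform(user, TEMPLATE.format(user=user, link=link))

run_bot(["alice", "bob", "carol"], "http://example.com/story")
print(len(sent))  # 3 messages, sent in a fraction of a second
```

The same loop works for three targets or three hundred thousand, which is why a single operator can reach an audience that would once have required a newsroom.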
“[Bots] were used to sow discord and create confusion amongst Americans,” says Samuel Woolley, a research director at DigIntel Lab and a research associate at the Computational Propaganda Project at Oxford University.
While misinformation has always been spread in the media, dating back even to the Revolutionary War, it’s now different on social media.
“Bots allow it to be computationally enhanced. They basically boost the numbers like a 1,000 fold of the ability to reach people,” Woolley says. “What we saw happen in 2016 was that bots were being used to drive bogus information or to boost the image of a particular candidate or campaign. That continued in 2017 and never really tapered off too much, and it looks to continue in a completely new way in 2018.”
What happened in 2017?
A University of Southern California and Indiana University study suggests that between nine and 15 percent of Twitter accounts are actually bots. Alarmingly, there have been several cases in which these accounts were tied to Russia, and many of them tried to stir up civil unrest in the United States.
Then there are the bots that leave false comments on government websites. This was the case when the Federal Communications Commission received millions of fake comments when the agency asked for public input on its proposal to repeal net neutrality rules.
Bots even tried to ruin Christmas when they bought up all the Fingerlings, a popular toy, on online retail sites. The toys were then put up for sale at double, triple and quadruple their original price.
Who is programming these bots?
Of course, it’s important to remember that these bots, at least for now, are not doing this on their own. They are programmed by bad actors with nefarious intentions, and as the technology improves, so will the ways bots can be used to harm society.
“If you have one person who has resources in terms of time, money, and expertise, they can be used to functionally launch a 1,000 strong bot-net on Twitter,” Woolley says.
And those who take advantage of this are widespread — from terrorist groups like ISIS, to entities affiliated with political campaigns, to foreign governments, to lone individuals at home who just want to troll the internet.
How can bots hurt us in 2018?
“Bots have become increasingly issue-specific,” Woolley says. He believes bots will be used to target subsections of the population to achieve narrow objectives. This would be different from what we’ve seen so far, where blanket statements are simply fired out to the masses.
For example, Woolley points to the upcoming Senate race in Utah. He believes bots could be used to just target that race and the demographic subsections it involves. This would create even more division within subgroups of our citizenry. This is a worrying trend, because the more specific bots can be, the easier it will become to divide societies into even more tribes.
It’s important to remember, however, that these bots don’t have total control. We humans still have free will and decision making abilities — making us more advanced than the computer programs that target us.
How can we fight back?
Woolley believes the best way the public can arm themselves against bots is not to be reactive but forward thinking. In other words, we need to be aware that information we consume online might be coming from a bot. This means when something seems suspect, we should take the extra step to investigate who is sharing it before believing it or sharing it ourselves.
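That kind of vetting can be partly systematized. Below is a rough, illustrative heuristic for flagging bot-like accounts; the specific signals and thresholds are assumptions made for this sketch, not research-backed values, and real detection systems are far more sophisticated.

```python
# Illustrative heuristic for spotting bot-like accounts.
# All thresholds are assumptions for this sketch, not validated values.

def looks_bot_like(account):
    """Count simple red flags; two or more suggests closer scrutiny."""
    flags = 0
    if account.get("posts_per_day", 0) > 100:        # inhuman posting rate
        flags += 1
    if account.get("account_age_days", 9999) < 30:   # very new account
        flags += 1
    if not account.get("has_profile_photo", True):   # default avatar
        flags += 1
    followers = account.get("followers", 0)
    following = account.get("following", 0)
    if following > 10 * max(followers, 1):           # follows far more than it is followed
        flags += 1
    return flags >= 2

suspect = {"posts_per_day": 400, "account_age_days": 5,
           "has_profile_photo": False, "followers": 3, "following": 2000}
print(looks_bot_like(suspect))  # True
```

The point is not the exact rules but the habit they encode: before trusting or amplifying an account, glance at its age, its posting rate, and who it actually interacts with.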
This may take a little more time, but the more we let these bots pass unquestioned, the more human they come to seem.
We know 2018 will be a year we experience more attacks from bots, but we can also make it the year we learn how to keep them from dividing humanity.