Distil Networks tried to warn everyone that the election could be manipulated. It didn't work.
One day a few months ago, a "suggested friend" showed up on my Facebook news feed: my old college roommate, a good friend named Paul. It was a little baffling. Had Paul unfriended me at some point? At a glance, it was definitely the Paul I knew: same profile photo, our common alma mater, a bunch of mutual friends (although, weirdly, most of them were my high school buddies).
As you might guess, it wasn't Paul. Someone had replicated his account, complete with his most basic info. It wasn't immediately obvious why someone would want to create a second Paul. Was it a prank? Were the intentions more malicious?
Rami Essaid has a theory. The co-founder and chief product officer of security company Distil Networks has been fighting off bots for years. As the public is beginning to learn, these bots can be used to rapidly perform tasks across the internet, like posting on social media, breaking into accounts, or publicizing politically charged posts.
On Facebook, Essaid says, hackers can create a profile that's a replica of a real person. Then, using bots, they'll send requests to friends of the real person's friends--which explains why, when I spotted Fake Paul, most of the friends we had in common were my high school pals. Once the fraudulent profile builds out a network of friends, it will begin--again, using bots--Liking and sharing posts to get them to appear in other people's news feeds.
And this is far from the only route a hacker can take to get as many eyes on propaganda as possible. They could create a new person from scratch--often with an attractive photo--then seek out people with similar hometowns or college backgrounds with the hope that a few suckers accept.
It's even easier on Twitter, where people are far less selective about who they follow. There, hackers can create bots--complete with convincing bios and stock photo avatars--to follow people with similar profiles, knowing that a sizable fraction will follow back. Then the bots can get to work, retweeting stories that align with their programmed inclinations and using authentic-sounding language to criticize ones that don't.
It might seem like a lot of work just to influence a handful of people at a time. But it's not a lot of work--and therein lies the purpose of the bots. All hackers have to do is write the initial code and the bots do the rest, spreading information, or misinformation, far and wide. In an election decided by 80,000 votes in three states, that can be meaningful.
And manipulating public opinion isn't all that bots can do; they can spam review sites, help ticket scalpers snap up seats, or test stolen username-and-password combinations against hundreds of websites at once.