No matter which side of the aisle you find your seat on, and even if you traipse around the political auditorium trying to find your place in the muck and mire, you will agree that during the 2016 election there was influence from afar and meddling forces from nearby as well. It's simply a truism that technologies affected our otherwise democratic electioneering processes. I will not go further here and suggest that those factors moved the needle, in common parlance, but can we at least be truthful enough to acknowledge that it happened? It's complicated. It's scary, even. So just know that in 2020 there will be more of the same to deal with.
The technologies were varied. The tactics, focused. Let's get to know one of these technological tactics to whet our analytical whistles in advance of participating in, and understanding, modern-day elections. Before I introduce it, let us also acknowledge another truism, or at minimum a settled theory in technology, known as Moore's Law. That observation by Gordon Moore, who went on to co-found Intel and serve as its CEO, originally held that engineering advances would fit twice as many components onto an integrated circuit each year. He was suggesting that, at that pace of technological evolution, our computers and other devices would be markedly faster and hold notably more information every single year. That was his position in a 1965 paper; he has since adjusted the theory a bit. Today Moore's Law stands for the proposition that the doubling occurs every two years rather than annually. In any case, with or without Moore's Law, we know that the techie stuff of 2020 will materially outperform the 2016 versions: computers, smartphones, software and apps, etc.
Way back in 2016, during the campaigning and election activities of America's general election, only the 58th such democratic event since we created the federal election system, one of the more influential instruments of interference was the bot, as in RO-bot. I'm not talking about Robby the Robot, AWESOM-O, or HAL, robotic characters implanted into pop culture by "Forbidden Planet," "South Park," and "2001: A Space Odyssey," respectively. Rather, I'm talking about more recent robotic technology: internet bots, sometimes known as web robots or simply "bots." Today's bots are not shiny humanoid things. They are programs, usually autonomous and increasingly powered by artificial intelligence, designed to interact with computers, their programs, and their users. In this piece I am talking specifically about Twitter bots.
As I've written more than once, and as you've likely heard hundreds of times through other media outlets since 2016, Twitter bots were successfully deployed by Russian, North Korean, Iranian, and other state-level actors, intelligence agencies, and spies. No need to place a futon mattress behind you for what's next, either: American actors, agencies, and spies use them too, and generally for the same disruptive reasons. Just as Moore's Law was at work upgrading your iPhone and Windows-based laptop over the past few years, so too have bots grown and improved. In 2015 and 2016 the bots at issue here were of the type that would, for example, retweet messages from IRL (in real life) users as well as from other bots. In other words, the programs would scan Twitter for information that fit the political bill, then re-send those tweets to the bot's thousands of followers; the followers, we shall presume, were naïve enough to believe the bots were actual, IRL users.
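For readers who want to see how mechanical that 2016-era behavior really was, here is a minimal sketch of the scan-and-retweet loop described above. The keyword list, sample tweets, and function names are all invented for illustration; a live bot would read from and post to the Twitter API rather than a local list.

```python
# A toy sketch of 2016-era retweet-bot logic: scan incoming tweets for
# political trigger words and mechanically "retweet" any match.
# KEYWORDS and sample_stream are hypothetical, for illustration only.

KEYWORDS = {"election", "candidate", "rigged"}  # hypothetical trigger words

def matches_agenda(text: str) -> bool:
    """True if the tweet contains any trigger keyword."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & KEYWORDS)

def run_bot(stream):
    """Collect every tweet that fits the political bill.

    A live bot would POST a retweet here instead of appending to a list.
    """
    retweeted = []
    for tweet in stream:
        if matches_agenda(tweet):
            retweeted.append(tweet)
    return retweeted

sample_stream = [
    "The election is rigged!",
    "Lovely weather today.",
    "Vote for my candidate.",
]
print(run_bot(sample_stream))  # the two political tweets are "retweeted"
```

Note how little intelligence is involved: no judgment about truth or source, just string matching followed by amplification, which is exactly why these early bots were so easy to run at scale.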
Back then the intelligence of Twitter bots was relatively unsophisticated. Those programs were pretty basic (no pun intended, techies of the 1980s) and did little more than recognize favorable tweets and then mechanically spread the word, as it were. Some have described them as mere retweeters on a timer. Still, impressive to most. Now, after a few more years of technological progress, we must be prepared for smarter, more subversive Twitter bots. And that raises a problem in and of itself: it's not just what the bots do, it's first being able to identify a Twitter account as a bot at all. One university has created a tool to do just that. At Indiana University you can find the Botometer. The Hoosier scholars designed a bot-defense program of sorts that can plausibly determine whether a Twitter account belongs to an IRL user or to a bot.
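To make "retweeters on a timer" concrete, here is a toy illustration of the kind of signals a detector like Botometer weighs. The real Botometer is a trained machine-learning classifier over hundreds of account features; this sketch hand-rolls just two of them, the retweet ratio and the regularity of posting intervals, and the thresholds and scoring rule are invented for illustration.

```python
# Toy bot-likeness score (NOT Botometer's actual model): combine two
# hand-picked signals into a 0..1 score, higher meaning more bot-like.

from statistics import pstdev

def bot_score(post_times, retweet_flags):
    """Score an account from its post timestamps (seconds) and
    per-post flags marking whether each post was a retweet."""
    # Signal 1: fraction of activity that is pure retweeting.
    retweet_ratio = sum(retweet_flags) / len(retweet_flags)
    # Signal 2: clockwork posting. Human gaps between posts vary;
    # a timer-driven bot's gaps are nearly identical (low spread).
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    regularity = 1.0 if pstdev(gaps) < 1.0 else 0.0
    # Invented weighting: average the two signals.
    return 0.5 * retweet_ratio + 0.5 * regularity

# A "retweeter on a timer": posts every 60 seconds, always retweets.
timer_bot = bot_score([0, 60, 120, 180], [True, True, True, True])
# A human-ish account: irregular gaps, mostly original posts.
human = bot_score([0, 45, 400, 410], [False, True, False, False])
print(timer_bot, human)  # the timer bot scores far higher
```

The 2016-style bots fall straight into this trap, which is part of why they were detectable; the more humanlike 2018 bots described below deliberately vary their timing and originate their own content, defeating simple heuristics like these.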
The IU researchers fed around 250,000 Twitter accounts into their Botometer, selecting accounts from both 2016 and 2018, all rife with politically charged messages. Of that quarter million, some 30,000, roughly 12 percent, were found to be fictitious, meddling yet influential Twitter bots. That's actually par for the course: one in-depth study estimated that 25% of all tweets were created by bots. So Botometer's findings may even be seen as a positive. Not to me, though.
The more concerning results from the IU work came from the characteristics of the 2018 bots as compared to their conceivably half-as-capable 2016 counterparts (again factoring Moore's Law, only backwards). The researchers saw what we now know about the 2016 bots: they were mechanical, almost predictable. Their "intelligence" was, well, 2016-like, not what artificial intelligence gives us today, or even where it was in 2018. The 2018 Twitter bots, on the other hand, were more humanlike in their sophistication. They relied less on retweeting others' political viewpoints and created more of their own, intelligently.
Maybe, like me, you don't even use Twitter. Maybe you think, "So what? What's new? Politics is ugly." Please don't continue in your naïveté, especially if you participate in elections. Disinformation played a real role in 2016. Next year, more so.
Ed is a professor of cybersecurity, an attorney, and a trained ethicist. Reach him at email@example.com.