Artificial intelligence is a growing subset of information technology. It’s an intriguing phrase—“artificial intelligence”—in my read of the words alone. How, for example, might anything considered to be intelligent at the same time be artificial? At the original root of the word “intelligence” is one’s ability to understand: Latin’s intelligere. We’re all intelligent to some degree because we can understand things. We can communicate because we can understand.
The AI version of intelligence, incorporating the “artificial” component of the phrase, points to intelligence that is produced by mankind rather than being a product of life’s natural occurrences. Artificial sweeteners make diet soda a thing. Artificial turf turns a concrete pad into a lawn. If you want to get lost in a logical downward spiral, realize that these artificial versions of natural occurrences can only become part of our reality because of the intelligence of mankind, some few of whom dreamed up and then created sugar- or grass-substitutes. Without human intelligence … no artificial intelligence.
Technology prognosticators would have you believe that AI will become a normal, maybe even pervasive, part of IT in the years to come. The movement, they say, will harken back to the nineties, when computers first became part of average households. A wave of IT washed over modern society such that the power of computing was no longer limited to governments, universities, and big corporations. That era amounted to a revolution of sorts, technologically speaking, and AI looks poised to match it in scope and effect. So many say.
The humanmade understandings programmed into AI devices result in intelligence demonstrated by machines. In the academic world, researchers refer to machines capable of AI as intelligent agents. Leave it to scholarly folk to define “AI” with a phrase potentially abbreviated to “IA.” Perennial arguers, these researcher types tend to be. They look at the spectrum of AI, as a technology and concept, from its lower end, where computers perform narrow, task-specific feats, to its upper, still aspirational end, where the goal is human-level understanding, what researchers call artificial general intelligence. The narrower form of AI, the one we’re used to reading about, employs computing power to run statistical methods that examine data and foretell some outcome. Mathematics, statistics, engineering, and the other sciences somewhat limit the power of this narrow artificial intelligence.
The more fascinating, if not scary, type of AI, the kind seeking biological-level understanding, draws on human emotion and psychology. In the 1950s, when the first notions of AI were drummed up, the aim was to precisely simulate human intelligence and its thought processes. That remains a tall order some 70 years later. Any dream that engages some of the world’s most capable thinkers for generations will have such lofty goals, though, I suppose. Realize, too, that the early, nearly incomprehensible goals of AI had already germinated in fiction. What was Mary Shelley’s Victor Frankenstein doing in his lab if not creating an AI device, in a sense? That was 1818, and I’d bet it, too, has a predecessor.
The torch-and-pitchfork mob chasing Dr. Frankenstein’s monster must have intuitively known that humankind’s creation of intelligence was a dangerous venture, a fitting demonstration of that oftentimes misplaced, usually overused metaphor: the slippery slope. The wheel and fire were good for mankind, sure. Swapping storytelling and oral histories for the written word was a good technological advance. Harnessing energy from the natural world to create alternating-current electricity, and the machines that consume it, was for the most part a good slope to jump aboard. So when, exactly, did the slope’s angle change so violently that it inspired fear? That question is either rhetorical or, at the least, one whose answer depends on the person answering it. You know someone who swears they will never use a mobile phone, or insists on vehicles with manual transmissions, or “just doesn’t get TikTok.” The slope is as complicated and unknowable as any universal point on it that all humans would agree is its turning point.
Here’s one sense, in this writer’s feeble mind, of where AI turns from useful, intriguing, fascinating technology into something worrisome, and note that it’s not the ol’ “the robots are coming for us” trope. The manmade intelligence that has been evolving since the fifties, or earlier, has progressed not all the way to its aims and goals, but close enough to give me concerns. Today’s AI researchers have created such credible, effective supplements to human intelligence that even the most gifted scientific and medical researchers, some of whom work on and develop AI themselves, are being fooled by AI-generated misinformation.
The word “misinformation” has no doubt garnered your attention and skepticism in recent years, unless you somehow missed the last two presidential elections. Bully for you if you were so occupied with better things! A number of tech journalism outlets reported last week that scientists in medicine, cybersecurity, and nearly every other field are being challenged to sift bogus, AI-derived “information” from actual, reliable, valid information. It seems one way to show off your AI chops is to produce a scholarly journal article, peer-reviewed as they are, that pulls the wool over reviewers’ eyes. In sum, AI has become good enough, close enough perhaps to our own minds’ powers, that it can generate false information in some of the most critical fields of research: defense, medicine, and AI itself. Lewis Carroll himself would be impressed with the results. Remember the Cheshire Cat’s assurance to Alice: “we’re all mad here.”
Misinformation crafted to tarnish a reputation, or even to win an election, is striking, especially when it works. Now think of the threats misinformation poses in these seriously critical areas of research. We’re intelligent, and we understand enough to know that this slope might not lead to a soft landing.
Ed is a professor of cybersecurity, an attorney, and a trained ethicist. Reach him at firstname.lastname@example.org.