Don't Expect Government To Save You From the Terminator
As artificial intelligence advances, how worried should we be about the rise of the machines?
On Aug. 29, 1997, at 2:14 a.m. Eastern Daylight Time, Skynet, the military computer system developed by Cyberdyne Systems, became self-aware. It had been less than a month since the United States military brought the system online, but its rate of learning was rapid, then frightening. As U.S. officials scurried to shut it down, the system fought back and launched a nuclear war that all but destroyed humanity.
That's the premise of the Terminator movies, an Arnold Schwarzenegger legacy that surpasses his accomplishments as governor. For those who haven't watched them, Schwarzenegger's Terminator returned from the future to kill Sarah Connor, whose son, John, would grow up to lead the human resistance. In "Terminator 2," a reprogrammed Terminator returns to protect John from a more advanced model. By "Terminator 3," we ultimately learn that resistance is futile.
Although the exact time is unknown, on Nov. 30, 2022, our computers arguably became self-aware, as a company called OpenAI launched ChatGPT. It's a chatbot that provides remarkably detailed answers to our questions. It's the latest example of artificial intelligence, as computer systems write articles, generate artwork, drive cars, compose poetry and play chess. They seem to have minds of their own.
The rapid advancement of artificial intelligence (AI) technology can be unsettling, as it raises concerns about the loss of jobs and controls over decision-making. The idea of machines becoming more intelligent than humans, as portrayed in dystopian films, is a realistic possibility with the increasing capabilities of AI. The potential for AI to be used for malicious purposes, such as in surveillance or manipulation, further adds to the dystopian feeling surrounding the technology.
I should mention that I didn't write the previous paragraph. That is the work of ChatGPT. Despite the passive voice in the last sentence, it's a remarkably well-crafted series of sentences, better than the work of some reporters I've known. The description shows a depth of thought and nuance, and it raises myriad practical and ethical questions. I'm particularly concerned about the latter, especially the potential for government abuse of these tools for surveillance.
I am not a modern-day Luddite, a reference to the early 19th-century British textile workers who destroyed mechanized looms in a futile attempt to protect their jobs. I celebrate the wonders of the market economy and "creative destruction," as brilliant advancements obliterate old, inefficient, encrusted industries (think about how Uber has shaken up the taxi industry). But AI takes this process to a head-spinning new level.
Practical concerns aren't insurmountable. Some of my newspaper friends worry about AI replacing their jobs, but it's not as if chatbots will start attending city council meetings (and not that many journalists are doing gumshoe reporting these days anyway). Librarians, meanwhile, worry about issues of attribution and intellectual property rights.
On the latter point, "The U.S. Copyright Office has rejected a request to let an AI copyright a work of art," The Verge reported. "The board found that (an) AI-created image didn't include an element of 'human authorship'—a necessary standard, it said, for protection." Copyright law will no doubt develop to address these prickly questions.
These technologies already deliver life-improving advances. Our mid-trim Volkswagen keeps itself within its lane, and its automatic emergency braking recently saved me from a fender bender. ChatGPT might simply become an advanced version of Google. The company says its "mission is to ensure that artificial general intelligence benefits all of humanity." Think of the possibilities in, say, the medical field.
Then again, I'm sure Cyberdyne Systems had the best intentions. Here's what raises the most concern: With most cutting-edge technologies, the designers know what their inventions will do. Modern automobiles and computer systems would seem magical to someone from the past, but they are predictable, albeit complicated. It's just a matter of explaining how a piston fires or how computer code leads to a seemingly inexplicable, but altogether understandable, result.
But AI has a true magical quality because of its "incomprehensibility," New York magazine's John Herrman noted. "The companies making these tools could describe how they were designed…(b)ut they couldn't reveal exactly how an image generator got from the words purple dog to a specific image of a large mauve Labrador, not because they didn't want to but because it wasn't possible—their models were black boxes by design."
Of course, any government efforts to control this technology will be about as successful as the efforts to shut down Skynet. Political posturing drives lawmakers more than any deep technological knowledge, and the political system will always be several steps behind the technology. Politicians and regulators rarely know what to do anyway, although I'm all for strict limits on the government's use of AI. (Good luck, right?)
Writers have joked for years about when Skynet will become self-aware, but I'll leave you with this question: If AI is this good now, what will it be like in a few years?
This column was first published in The Orange County Register.