Sen. Josh Hawley (R–Mo.) has cultivated a reputation as one of the federal government's most tech-phobic legislators. Naturally, he is now setting his sights on artificial intelligence (A.I.), a technology he describes as dangerous and likely to "manipulate" Americans unless subjected to crushing regulatory burdens.
In a recent interview with Fox News, Hawley said he was "worried about AI's power to manipulate our attention, to manipulate our opinions and to manipulate the information that we're given."
His solution is for the government to increase the liability of companies that use A.I., allowing users to sue them for spreading misinformation. What constitutes misinformation, of course, is open to interpretation. As usual, Hawley's approach wouldn't actually protect users of new technology—A.I., in this case—from harm. Rather, it would open the door to costly, constant litigation that could cripple the technology.
"Already you can see these generative AI systems—these large language models—that are trained on all the information on the internet," said Hawley.
Hawley was likely referencing ChatGPT, an A.I. chatbot that can mimic human conversation. This is a tool that, yes, could be used for ill—like most technological advances—but also has the capacity to improve human understanding, communication, and fulfillment. (Reason's Fiona Harrigan used it to plan dinner.) And though the very term A.I. can summon scary images from science fiction dystopias along the lines of Terminator and The Matrix, it's important to note that ChatGPT is not thinking for itself in any appreciably sinister way; it's essentially a text generator trained on a vast repository of online writing, and its responses are guided by the prompts humans give it.
Yet Hawley frets that technologies like this one will be used to monopolize human attention spans. His concern that A.I. is being used to "misinform" Americans proves that the overhyped threat of misinformation is not solely a hobbyhorse of mainstream Democrats. The First Amendment, thankfully, prohibits the government from censoring speech that allegedly misinforms the public.
Hawley's overall anti-tech agenda overlaps neatly with Democratic regulatory priorities. Republicans and Democrats have joined together to demand the repeal of Section 230, which would subject social media platforms to increased liability for user-generated speech. Progressive Democrats favor this approach in order to force tech companies to moderate more content. Republicans, on the other hand, think Facebook and Twitter are moderating too much content already, and are willing to punish the companies even if it means giving Democrats the exact result they want: increased online censorship.
This strategy has become even more glaringly flawed as of late. Twitter's new CEO, Elon Musk, is transforming the site into a space that particularly welcomes conservative content. But it is Section 230 that empowers Musk to permit The Daily Wire and Tucker Carlson to host their programming on Twitter. Scrapping the federal statute would increase Twitter's own liability, rendering social media's post-at-will protocols untenable.
Yet Hawley is pushing an agenda of subjecting A.I., social media companies, and the broader tech sector to greater government scrutiny, which he describes as putting "more power in the hands of individual Americans to say, 'I will hold you accountable if you come after me, if you manipulate me.'"
Increased regulation doesn't make tech more accountable to users. It makes tech more accountable to politicians and federal bureaucrats, many of whom wrongly believe that the internet would be a better place if people—and chatbots!—were less free to speak.