Senate's AI Roadmap Released

It could have been a lot worse.


The new U.S. Senate AI roadmap is full of aspirations and empty of much actual content. Basically, the Driving U.S. Innovation in Artificial Intelligence report is a wish list of issues that its authors hope various congressional committees will consider when they get around to addressing the possible ramifications of the burgeoning field of artificial intelligence.

Yet the senators behind it, including Majority Leader Chuck Schumer (D–N.Y.), are sure that following the roadmap requires that Congress appropriate "at least $32 billion per year for (non-defense) AI innovation," as recommended by the National Security Commission on Artificial Intelligence (NSCAI). As the NSCAI leadership explained, "This is not a time for abstract criticism of industrial policy or fears of deficit spending to stand in the way of progress." There is clearly bipartisan agreement on that point, even though Goldman Sachs projects that private-sector spending on AI could reach $200 billion by 2025.

The good news is that the roadmap is not the European Union's innovation-killing AI Act. Unlike the AI Act, the roadmap does not impose any actual new regulations or constraints on the fast-developing applications of AI in the U.S.

What it does do is tick off issues that the working group thinks Senate committees ought to consider. These include legislation addressing federal funding of AI research and development; voting security and election fraud; worker displacement and training; copyright concerns; online child sexual abuse material; a ban on the use of AI for social scoring; data privacy; international cooperation with allies; export controls on critical tech; defense against chemical, biological, radiological, and nuclear threats; and limits on AI use in warfare.

Unlike the EU's AI Act, the roadmap does not call for regulations requiring that new AI products and services be evaluated by government bureaucrats for safety before being offered to the public. Instead, the roadmap more sensibly recommends developing a "framework that specifies what circumstances would warrant a requirement of pre-deployment evaluation of AI models." The working group is also apparently content for now to let courts sort out how AI companies may use copyrighted material. The roadmap does ask policymakers to consider the need for legislation that protects against the unauthorized use of one's name, image, likeness, and voice by AI products and services, and that requires novel synthetic content generated by AI to be identified as such.

The roadmap is neither substantive enough nor timely enough, say critics. "The long list of proposals are no substitute for enforceable law," said AI Now Institute co-executive directors Amba Kak and Sarah Myers West in a statement. "What we need are stronger rules that put the onus on companies to demonstrate products are safe before they release them, ensure fair competition in the AI marketplace, and protect the most vulnerable in society from AI's most harmful applications—and that's just the start."

In a press release, Fight for the Future director Evan Greer declared, "Schumer's new AI framework reads like it was written by Sam Altman and Big Tech lobbyists. It's heavy on flowery language about 'innovation' and pathetic when it comes to substantive issues around discrimination, civil rights, and preventing AI-exacerbated harms."

Setting aside the proposed federal deficit spending on AI research and development, those of us who favor permissionless innovation should be somewhat relieved that the roadmap, for the most part, does not call for sweeping federal regulations at this early stage of AI development. Of course, there is no guarantee that Congress won't screw up AI innovation in the future.