Senate's AI Roadmap Released
It could have been a lot worse.

The new U.S. Senate AI roadmap is full of aspirations and empty of much actual content. Basically, the Driving U.S. Innovation in Artificial Intelligence report is a wish list of issues that its authors hope various congressional committees will consider when they get around to addressing the possible ramifications of the burgeoning field of artificial intelligence.
Yet the senators behind it, including Majority Leader Chuck Schumer (D–N.Y.), are sure that following the roadmap requires that Congress appropriate "at least $32 billion per year for (non-defense) AI innovation," as recommended by the National Security Commission on Artificial Intelligence (NSCAI). As the NSCAI leadership explained, "This is not a time for abstract criticism of industrial policy or fears of deficit spending to stand in the way of progress." There is clearly bipartisan agreement on that, despite Goldman Sachs' projection that private sector spending on AI could reach $200 billion by 2025.
The good news is that the roadmap is not the European Union's innovation-killing AI Act. Unlike the AI Act, the roadmap does not impose any actual new regulations or constraints on the fast-developing applications of AI in the U.S.
What it does do is tick off issues that the working group thinks Senate committees ought to consider. These include the development of legislation aimed at federal funding of AI research and development, voting security and election fraud, worker displacement and training, copyright concerns, online child sexual abuse material, banning the use of AI for social scoring, data privacy, international cooperation with allies, export controls on critical tech, defense against chemical, biological, radiological, and nuclear threats, and limits on AI use in warfare.
Unlike the EU's AI Act, the roadmap does not call for regulations requiring that new AI products and services be evaluated by government bureaucrats for safety before being offered to the public. Instead, the roadmap more sensibly recommends developing a "framework that specifies what circumstances would warrant a requirement of pre-deployment evaluation of AI models." The working group is also apparently content for now to allow courts to sort out how AI companies can use copyrighted material. The roadmap does ask policymakers to consider the need for legislation that protects against the unauthorized use of a person's name, image, likeness, and voice by AI products and services, and that requires novel AI-generated synthetic content to be identified as such.
The roadmap is not nearly enough, nor soon enough, say critics. "The long list of proposals are no substitute for enforceable law," said AI Now Institute co-executive directors Amba Kak and Sarah Myers West in a statement. "What we need are stronger rules that put the onus on companies to demonstrate products are safe before they release them, ensure fair competition in the AI marketplace, and protect the most vulnerable in society from AI's most harmful applications—and that's just the start."
In a press release, Fight for the Future director Evan Greer declared, "Schumer's new AI framework reads like it was written by Sam Altman and Big Tech lobbyists. It's heavy on flowery language about 'innovation' and pathetic when it comes to substantive issues around discrimination, civil rights, and preventing AI-exacerbated harms."
Setting aside the proposed federal deficit spending on AI research and development, those of us who favor permissionless innovation should be somewhat relieved that the roadmap, for the most part, does not call for the imposition of sweeping federal regulations at this early stage of AI development. Of course, there is no guarantee that Congress won't screw up AI innovation in the future.