California's Aggressive Regulations Put Burgeoning AI Industry at Risk
Overly strict or poorly designed rules could slow beneficial uses of AI in healthcare, education, infrastructure, and public safety.
California has recently enacted a sweeping package of AI laws, positioning itself as a leader in state-level AI regulation.
The focus is on safety, transparency, and specific use cases such as deepfakes and employment. The most significant piece of legislation is the Transparency in Frontier Artificial Intelligence Act (TFAIA), or Senate Bill 53.
That law aims to impose transparency and safety requirements rather than broad bans, focusing on "trust but verify" oversight: it requires disclosure of governance frameworks, safety protocols, and incident reporting. However, the requirement to publish detailed transparency reports could expose trade secrets or vulnerabilities and impose heavy compliance burdens. Some argue the law penalizes "paperwork" and formalities rather than actual harmful outcomes.
If you haven't figured it out by now, the first two paragraphs were largely produced using ChatGPT, an artificial intelligence chatbot. Other than a few style foibles, I can't take issue with its summary. Frankly, its explanation is better written and more accurate than similar reports I've read in daily newspapers. The stunning advance in AI sophistication is raising some obvious questions. The most pressing: What should the government do to regulate it?
Not surprisingly, my answer is "as little as possible." Government is a clunky, bureaucratic machine driven by special-interest groups and politicians. It's always behind the curve. If state and federal regulators had the skill of the entrepreneurs who developed these cutting-edge technologies, they would most likely work at such firms, where they'd score a higher pay package. The government B-team can't keep up with the A-team, so regulations lag behind corporate innovations.
Typically, as the AI chatbot explained, such regulations focus on paperwork errors. These rules stifle meaningful advancements, benefit firms with high-powered lobbyists, and provide an advantage to companies that operate in less-regulated environments. When states pass their own rules, they create a mishmash of hurdles for an industry that is not confined within any state's boundaries. Given its size, California's typically heavy-handed approach often becomes the national standard.
In fact, California lawmakers relish their role as national trend-setters, pushing every progressive priority (from bans on internal-combustion-engine vehicles to single-payer healthcare) in the hope of moving the national conversation in their direction. Other blue states are doing the same thing. Often, they model their regulations on the European Union's approach, one rooted in fear of the unseen. State lawmakers have thus far introduced 1,000 different AI-related bills.
As my R Street Institute colleague and AI expert Adam Thierer explained in testimony last month before the U.S. House of Representatives, "America's AI innovators are currently facing the prospect of many state governments importing European-style technocratic regulatory policies to America and, even worse, applying them in a way that could end up being even more costly and confusing than what the European Union has done. Euro-style tech regulation is heavy-handed with highly detailed rules that are both preemptive and precautionary in character. ... Europe's tech policy model is 'regulate-first' while America's philosophy is 'try-first.'"
In the now-concluded California legislative session, lawmakers introduced at least 31 AI bills, with several, including SB 53, garnering Gov. Gavin Newsom's signature. Most are manageable for the industry, but new laws and regulations often suffocate ideas a little at a time. On the good-news front, Newsom—ever mindful of a potential presidential run, and sensible enough to not want to crush one of the state's economic powerhouses—vetoed the worst of them.
He rejected Assembly Bill 1064, which would have forbidden any company or agency from making AI chatbots "available to a child unless the companion chatbot is not foreseeably capable of doing certain things that could harm a child." That broad language (how can anything be "foreseeably capable"?) caused much consternation. "AB 1064 effectively bans access of anyone under 18 to general-purpose AI or other covered products, putting California students at a disadvantage," as a prominent tech association argued in opposition.
In his veto, Newsom echoed that point and added that "AI already is shaping the world, and it is imperative that adolescents learn how to safely interact with AI systems." He championed his signing of Senate Bill 243, which tech companies accepted as a better alternative. It mainly requires operators to disclose to children that they are interacting with a chatbot. That's fine, but the governor also promised to support other measures in the next session.
How exactly can an industry thrive under a never-ending threat of more legislation, especially given that some of the proposals are quite intrusive? I'm a big advocate for federalism and the idea that states are the laboratories of democracy, but in this case, a federal approach is better given, again, the national nature of the internet world.
I'll finish with words of wisdom from ChatGPT: "Strict or poorly designed rules could slow beneficial uses of AI in healthcare, education, infrastructure, and public safety. Fear of liability or red tape might discourage experimentation that could improve lives."
This column was first published in The Orange County Register.
Pro tip: leave Cali
The most fun thing about AI hysteria is watching the privileged classes panic and shout about getting displaced by technology, after decades of cheering as the troglodyte classes suffered. Anyone else remember "learn to code"?
If this is all on CA, why can't these companies just move to another state?
Also, why do we love local control when it suits us but hate it when it doesn't?
Because the laws are being pitched as "consumer protection" rather than "business regulation." Consumer protection bases the jurisdictional analysis on the customer's location, which means that to avoid that jurisdiction, you have to avoid all CA customers. That is essentially impossible. Not only is CA a huge market so big that ignoring it could bankrupt your company; jurisdiction can also be imposed on you by a single CA customer coming to your site despite your efforts to avoid them. (More precisely, by a single regulator determining in hindsight that your efforts to exclude that CA customer were not adequate, a vague and sliding scale that regulators have an incentive to make impossible to meet.)
The bottom line is that moving out of CA or even not being incorporated in CA in the first place is at best very, very weak protection from their legislative overreach.
"I'm a big advocate for federalism and the idea that states are the laboratories of democracy, but in this case, a federal approach is better given, again, the national nature of the internet world."
The second part of that sentence puts the lie to the first part.