AI Regulators Are More Likely To Run Amok Than Is AI
Proposed AI legislation would enshrine tech-killing precautionary principle into law.

Deploying the precautionary principle is a laser-focused way to kill off any new technology. As it happens, a new bill in the Hawaii Legislature explicitly applies the precautionary principle in regulating artificial intelligence (AI) technologies:
In addressing the potential risks associated with artificial intelligence technologies, it is crucial that the State adhere to the precautionary principle, which requires the government to take preventive action in the face of uncertainty; shifts the burden of proof to those who want to undertake an innovation to show that it does not cause harm; and holds that regulation is required whenever an activity creates a substantial possible risk to health, safety, or the environment, even if the supporting evidence is speculative. In the context of artificial intelligence and products, it is essential to strike a balance between fostering innovation and safeguarding the well-being of the State's residents by adopting and enforcing proactive and precautionary regulation to prevent potentially severe societal-scale risks and harms, require affirmative proof of safety by artificial intelligence developers, and prioritize public welfare over private gain.
The Hawaii bill would establish an office of artificial intelligence and regulation that, wielding the precautionary principle, would decide when and whether any new tools employing AI could be offered to consumers.
Basically, the precautionary principle requires technologists to prove, in advance of deployment, that their new product or service will never cause anyone anywhere harm. It is very difficult to think of any technology, from fire and the wheel to solar power and quantum computing, that could not be used to cause harm to someone. It's tradeoffs all the way down. Ultimately, the precautionary principle is a requirement for trials without errors, and it amounts to the demand: "Never do anything for the first time."
With considerable foresight of his own, the political scientist Aaron Wildavsky anticipated how the precautionary principle would actually end up doing more harm than good. "The direct implication of trial without error is obvious: If you can do nothing without knowing first how it will turn out, you cannot do anything at all," he wrote in his brilliant 1988 book Searching for Safety. "An indirect implication of trial without error is that if trying new things is made more costly, there will be fewer departures from past practice; this very lack of change may itself be dangerous in forgoing chances to reduce existing hazards…. Existing hazards will continue to cause harm if we fail to reduce them by taking advantage of the opportunity to benefit from repeated trials."
Among myriad other opportunities, AI could greatly reduce current harms by speeding up the development of new medications and diagnostics, autonomous driving, and safer materials.
R Street Institute Technology and Innovation Fellow Adam Thierer notes that the proliferation of over 500 state AI regulation bills like the one in Hawaii threatens to derail the AI revolution. He singles out California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act as egregiously bad.
"This legislation would create a new Frontier Model Division within the California Department of Technology and grant it sweeping powers to regulate advanced AI systems," Thierer explains. Among other things, the bill specifies that if someone were to use an AI model for nefarious purposes, the developer of that model could be subject to criminal penalties. This is an absurd standard of liability.
As deep learning researcher Jeremy Howard observes: "An AI model is a general purpose piece of software that runs on a computer, much like a word processor, calculator, or web browser. The creator of a model cannot ensure that a model is never used to do something harmful—any more than the developer of a web browser, calculator, or word processor could. Placing liability on the creators of general purpose tools like these means that, in practice, such tools cannot be created at all, except by big businesses with well funded legal teams."
Instead of authorizing a new agency to implement the stultifying precautionary principle in which new AI technologies are automatically presumed guilty until proven innocent, Thierer recommends "a governance regime focused on outcomes and performance [that] treats algorithmic innovations as innocent until proven guilty and relies on actual evidence of harm." And just such a governance regime already exists, since most of the activities to which AI will be applied are currently addressed under product liability laws and other existing regulatory schemes. Proposed AI regulations are more likely to run amok than are new AI products and services.