Regulation

AI Regulators Are More Likely To Run Amok Than Is AI

Proposed AI legislation would enshrine the tech-killing precautionary principle in law.


Deploying the precautionary principle is a laser-focused way to kill off any new technology. As it happens, a new bill in the Hawaii Legislature explicitly applies the precautionary principle in regulating artificial intelligence (AI) technologies:

In addressing the potential risks associated with artificial intelligence technologies, it is crucial that the State adhere to the precautionary principle, which requires the government to take preventive action in the face of uncertainty; shifts the burden of proof to those who want to undertake an innovation to show that it does not cause harm; and holds that regulation is required whenever an activity creates a substantial possible risk to health, safety, or the environment, even if the supporting evidence is speculative. In the context of artificial intelligence and products, it is essential to strike a balance between fostering innovation and safeguarding the well-being of the State's residents by adopting and enforcing proactive and precautionary regulation to prevent potentially severe societal-scale risks and harms, require affirmative proof of safety by artificial intelligence developers, and prioritize public welfare over private gain.

The Hawaii bill would establish an office of artificial intelligence and regulation that, wielding the precautionary principle, would decide when and whether any new tools employing AI could be offered to consumers.

Basically, the precautionary principle requires technologists to prove in advance of deployment that their new product or service will never cause anyone anywhere any harm. It is very difficult to think of any technology, from fire and the wheel to solar power and quantum computing, that could not be used to cause harm to someone. It's tradeoffs all the way down. Ultimately, the precautionary principle demands trials without errors, which amounts to the command: "Never do anything for the first time."

With considerable foresight, the political scientist Aaron Wildavsky anticipated how the precautionary principle would actually end up doing more harm than good. "The direct implication of trial without error is obvious: If you can do nothing without knowing first how it will turn out, you cannot do anything at all," he wrote in his brilliant 1988 book Searching for Safety. "An indirect implication of trial without error is that if trying new things is made more costly, there will be fewer departures from past practice; this very lack of change may itself be dangerous in forgoing chances to reduce existing hazards…. Existing hazards will continue to cause harm if we fail to reduce them by taking advantage of the opportunity to benefit from repeated trials."

Among myriad other opportunities, AI could greatly reduce current harms by accelerating the development of new medications and diagnostics, autonomous driving, and safer materials.

R Street Institute Technology and Innovation Fellow Adam Thierer notes that the proliferation of more than 500 state AI regulation bills like the one in Hawaii threatens to derail the AI revolution. He singles out California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act as especially egregious.

"This legislation would create a new Frontier Model Division within the California Department of Technology and grant it sweeping powers to regulate advanced AI systems," Thierer explains. Among other things, the bill specifies that if someone were to use an AI model for nefarious purposes, the developer of that model could be subject to criminal penalties. This is an absurd requirement.

As deep learning researcher Jeremy Howard observes: "An AI model is a general purpose piece of software that runs on a computer, much like a word processor, calculator, or web browser. The creator of a model can not ensure that a model is never used to do something harmful—any more so than the developer of a web browser, calculator, or word processor could. Placing liability on the creators of general purpose tools like these means that, in practice, such tools can not be created at all, except by big businesses with well funded legal teams."

Instead of authorizing a new agency to implement the stultifying precautionary principle, under which new AI technologies are automatically presumed guilty until proven innocent, Thierer recommends "a governance regime focused on outcomes and performance [that] treats algorithmic innovations as innocent until proven guilty and relies on actual evidence of harm." Just such a governance regime already exists: most of the activities to which AI will be applied are already covered by product liability law and other existing regulatory schemes. Proposed AI regulations are more likely to run amok than are new AI products and services.