
Harsh AI Regulation From Congress Imperils Innovation

In addition to licensing regimes, there have also been calls for creating a new agency to regulate AI.


Congress is back in session, and Artificial Intelligence (AI) has captured policymakers' attention. A wide range of committees have hosted hearings on AI generally, as well as on how it applies to specific fields such as medicine and disaster response.

As a result, AI-related bills are beginning to proliferate. Among the most concerning of these proposals are a new government agency to regulate AI and a license requirement for using AI.

In what would be a dramatic departure from the light-touch approach that has supported a flourishing of American innovation, a bipartisan group of senators, including Richard Blumenthal (D–Conn.)* and Josh Hawley (R–Mo.), are expected to introduce legislation to create a government bureaucracy that would prevent high-risk AI models from entering the market. Under such legislation, AI models would have to receive a "license to deploy" from the government, most likely from a new, independent oversight agency. Concerningly, some leading technology companies, including OpenAI and Microsoft, have supported this approach.

The history of licensing illustrates how it is a particularly "cronyistic" political tool. Licensure regimes favor large companies that can afford both to spend heavily engaging with the bureaucracy to gain favorable terms and to absorb the expenses associated with licensing itself.

Furthermore, this process can be corrupted by large, established players who, by driving up costs and shaping the design of requirements, can keep newer, more innovative competitors out.

In addition to licensing regimes, there have also been calls for creating a new agency to regulate AI. However, AI is a general-purpose technology, which means an AI regulator could interfere in nearly every sector of the economy. The United States has long resisted calls for a digital regulator, perhaps recognizing that such an expansion of the administrative state could reach far beyond what was anticipated and would increase the risk of agency capture in a rapidly changing environment.

Regulating AI poses the same pitfalls, which is why it is ironic that some of the bill's sponsors are among the most vocal critics of perceived concentration in the technology sector: A licensure regime would only deepen the moat around existing leaders, making it more difficult for new players to challenge them.

Further, a licensure requirement could encourage start-ups to seek an exit via acquisition rather than shoulder the cost and burden of compliance.

Calls for a new licensing regime or a new regulator follow the approach seen in Europe, where a heavy regulatory touch has produced undesirable economic consequences. An examination of the largest global internet companies reveals a notable absence of European players—a trend likely to continue in AI, given the similar approach to regulation. Instead of encouraging entrepreneurial discovery by limiting the role of government, Europe has continued to build a culture around the internet—and now AI—that requires innovators to come to regulators first. That culture tolerates little risk, even where the potential for harm is small.

A better approach than a licensure regime or a new agency is to build on the success of light-touch innovation that has made the United States a world leader in the internet era. Before establishing new, burdensome requirements, American policymakers should examine how existing laws can address AI concerns. This would also allow them to repeal or reform statutes that stand to encumber beneficial applications.

CORRECTION: This article originally misstated which state Richard Blumenthal represents in the Senate.