Google Comes Out Against a 'Department of A.I.'
As the company explains, pre-market licensing would delay—or even deny—our access to artificial intelligence's potential benefits.
Google and its artificial intelligence lab DeepMind are on the right track when it comes to light-touch, effective regulation of new generative artificial intelligence (A.I.) tools such as the ChatGPT and Bard large language models. "Artificial intelligence has the potential to unlock major benefits, from better understanding diseases to tackling climate change and driving prosperity through greater economic opportunity," Google rightly notes.
In order to unlock those benefits, Google argues for a decentralized "hub-and-spoke model" of national A.I. regulation. That model is far superior to the ill-advised centralized, top-down licensing scheme suggested by executives at rival A.I. developers OpenAI and Microsoft.
Google outlines this proposal in its response to the National Telecommunications and Information Administration's (NTIA) April 2023 request for comments on A.I. system accountability measures and policies. The agency asked for public input that focuses "on self-regulatory, regulatory, and other measures and policies that are designed to provide reliable evidence to external stakeholders—that is, to provide assurance—that AI systems are legal, effective, ethical, safe, and otherwise trustworthy."
In its comment, Google supports at the national level "a hub-and-spoke approach—with a central agency like the National Institute of Standards and Technology (NIST) informing sectoral regulators overseeing AI implementation—rather than a 'Department of AI.'" NIST has already moved in that direction, releasing its Artificial Intelligence Risk Management Framework in January 2023.
Google further notes, "AI will present unique issues in financial services, health care, and other regulated industries and issue areas that will benefit from the expertise of regulators with experience in those sectors—which works better than a new regulatory agency promulgating and implementing upstream rules that are not adaptable to the diverse contexts in which AI is deployed."
In other words, to the extent that A.I. tools need regulation, they should be scrutinized in the context of where they are being deployed. Google advocates that sectoral regulators "use existing authorities to expedite governance and align AI and traditional rules" and provide, as needed, "updates clarifying how existing authorities apply to the use of AI systems."
Agencies overseeing financial services will be more attuned to how A.I. affects loan approvals and credit reporting; medical regulators can more easily assess diagnostic accuracy and health care privacy concerns; educational institutions and agencies can better gauge and direct A.I.'s effects on student learning; and transportation officials can monitor the development of self-driving automobiles. This approach melds well with NIST's A.I. Risk Management Framework, which aims to be "flexible and to augment existing risk practices which should align with applicable laws, regulations, and norms" and which is "designed to address new risks as they emerge."
The free-market think tank R Street Institute's response to the NTIA bolsters Google's arguments against establishing a one-size-fits-all "Department of A.I." The R Street Institute observes that the NTIA and other would-be regulators "tend to stress worst-case scenarios" with respect to the deployment of new A.I. tools. The result of this framing is that A.I. innovations are "essentially treated as 'guilty until proven innocent' and required to go through a convoluted and costly certification process before being allowed on the market."
Like Google, the R Street Institute notes that the development of A.I. technologies "will boost our living standards, improve our health, extend our lives, expand transportation options, avoid accidents, improve community safety, enhance educational opportunities, help us access superior financial services and much more." Imposing a pre-market licensing scheme administered by a Department of A.I. would significantly delay, or even deny, Americans' access to the substantial benefits that A.I. systems and technologies offer.