European Union's AI Law Will Heavily Regulate a Technology Lawmakers Don't Understand
And in the process, it will stifle innovation and competition.

Lawmakers in the European Union (E.U.) last week overwhelmingly approved legislation to regulate artificial intelligence in an attempt to guide member countries as the industry rapidly grows.
The Artificial Intelligence Act (AI Act) passed 523–46, with 49 abstentions. According to the E.U. parliament, the legislation is meant to "ensure[] safety and compliance with fundamental rights, while boosting innovation." It is far more likely, however, that the law will instead hamstring innovation, particularly given that it regulates a technology that is quickly changing and not well understood.
"In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed," the law reads.
The legislation classifies AI systems into four categories. Systems deemed unacceptably high risk—including those that seek to manipulate human behavior or ones used for social scoring—will be banned. Also off limits, refreshingly, is the use of biometric identification in public spaces for law enforcement purposes, with a few exceptions.
The government will subject high-risk systems, such as those used in critical infrastructure and public services, to risk assessment and oversight. Limited-risk apps and general-purpose AI, including foundation models like the ones behind ChatGPT, will have to adhere to transparency requirements. Minimal-risk AI systems, which lawmakers expect to make up the bulk of applications, will be left unregulated.
In addition to addressing risk in order to "avoid undesirable outcomes," the law aims to "establish a governance structure at European and national level." The European AI Office, described as the center of AI expertise across the E.U., was established to carry out the AI Act. The law also sets up an AI Board to serve as the E.U.'s primary advisory body on the technology.
Costs of running afoul of the law are no joke, "ranging from penalties of €35 million or 7 percent of global revenue to €7.5 million or 1.5 percent of revenue, depending on the infringement and size of the company," according to Holland & Knight.
Practically speaking, the regulation of AI will now be centralized across the European Union's member nations. The goal, according to the law, is to establish a "harmonised standard," a routinely used measure in the E.U., for such regulation.
The E.U. is far from the only governing body passing AI legislation to bring the burgeoning technology under control; China introduced its interim measures on generative AI in 2023, and President Joe Biden signed an executive order on October 30, 2023, to rein in the development of AI.
"To realize the promise of AI and avoid the risk, we need to govern this technology," Biden said at a subsequent White House event. Though the U.S. Congress has yet to settle on long-term legislation, the E.U.'s AI Act could inspire it to do the same. Biden's words certainly sound similar to the E.U.'s approach.
But critics of the E.U.'s new law worry that the set of rules will stifle innovation and competition, limiting consumer choice in the market.
"We can decide to regulate more quickly than our major competitors," said Emmanuel Macron, the president of France, "but we are regulating things that we have not yet produced or invented. It is not a good idea."
Anand Sanwal, CEO of CB Insights, echoed the thought: "The EU now has more AI regulations than meaningful AI companies." Barbara Prainsack and Nikolaus Forgó, professors at the University of Vienna, meanwhile wrote for Nature Medicine that the AI Act views the technology strictly through the lens of risk without acknowledging the benefit, which will "hinder the development of new technology while failing to protect the public."
The E.U.'s law isn't all bad. Its restrictions on the use of biometric identification, for example, address a real civil liberties concern and are a step in the right direction. Less ideal is that the law makes many exceptions for cases of national security, allowing member states to interpret freely what exactly raises concerns about privacy.
Whether American lawmakers take a similar risk-based approach to AI regulation is yet to be determined, but it's not far-fetched to think it may only be a matter of time before the push for such a law materializes in Congress. If and when it does, it is important to be prudent about encouraging innovation as well as safeguarding civil liberties.
I’m sure they will get it right this time.
That's what they do with every other field and specialty. (Lawmakers might be the one group of people on the planet that know less collectively than journalists.) Why should AI get special treatment?
Government is the idiots who tell experts how to do their job.
As Americans we should be encouraging Europe to do stupid shit like this.
Only if our lawmakers are smart enough to not make their mistakes.
The evidence on that point is not encouraging.
Isn't one party rather famous for "let's be more like Europia!"
Only true as a matter of degree. Neither major party is winning any awards for policy excellence.
At whose expense?
"I don't understand it but I know I'm against it! I'm against anything I don't understand!" Okay, can the tech companies finally just ban all communications with Europe now? They should just provide services to the parts of the world that actually appreciate their services. If individuals in Europe manage to bypass the blockade and access AI anyway, just tell the Regulators to shove it and refuse to comply.
I studied AI for my CompSci degree and was exposed to it during work as a consultant/programmer. AI means absolutely nothing when you take NLP and Expert Systems and pretend they are actual manifestations of the never-existent artificial intelligence.
Expert Systems are long if-then scripts, but in no way intelligent.
NLP absolutely depends on Essentialism, the knowledge that there is an essence to a 'cow' even if on every sense input the cows in front of you vary: different weight, markings, mooing, age, etc.
And 'essence' is not anything that comes in via a sensory essence detector.
But why frustrate myself. You Libs argue for freedom reflexively: freedom for perverted sexual practices, freedom for tortured killing of babies in the womb, etc.
Is that you, Hank?
ctrl-f comstock got no hits. So I don't think so.
What technologies do they have any kind of mature or nuanced understanding of? And yet they regulate.
'Systems deemed unacceptably high risk—including those that seek to manipulate human behavior or ones used for social scoring—will be banned.'
Of course they will not let AI do that. Those jobs are for EU bureaucrats and minions.
Nobody understands AI, which is just a marketing hype label for a bunch of computer programs.
Conversely, Artificial Idiocy understands nobody.
Look at the record. Americans in 1906 passed all sorts of drug regulations and liquor taxes whose financial repercussions nobody understood. They became enforceable in 1907. Americans pushed opium and stimulant bans from 1909 to 1914 affecting Balkan States and Germany. After that war, USA and League governments bundled those same prohibitions into the Versailles Treaty, wrecking Germany in 1923. They added layers of asset-forfeiture in Sept 1929, remember? To those were added plans to curtail German drug production in July 1931. Reaction empowered Hitler, remember? Then in 1986 as LSD production was crushed, war on replacement plant leaves wrecked Latin economies through 1992. Then G Waffen tried asset forfeiture again in 2008. Does anyone recall THOSE unexpected crashes, wars, dictators and disaster?
Lawmakers and their lackeys don’t even know how a toaster works, much less AI. The assertion in the headline is not new. It has been in practice for a hundred years.
There is a good chance lawmakers knew how butter and candles were made prior to a century ago.
Ah yes, "innovation and competition," the K-Y lube of Silicon Valley's useful idiots with very large right arms.
That may just be the most sensible thing I've ever heard President Macron say.
Why is everyone so afraid of AI? There are so many really useful tools emerging that can be used in everyday life and at work. It doesn't even have to be something impressive. AI generators like tattoon.ai that create a tattoo sketch in seconds seem to me at least curious, making the job of tattoo artists a little easier. Everyone is hyperbolizing AI so much, as if it were something alive that could take over the entire planet.