The Coming Techlash Could Kill AI Innovation Before It Helps Anyone
Power-hungry data centers, disappearing jobs, and billions of dollars in subsidies are fueling resentment. If developers and policymakers don’t change course, Americans may reject AI before it ever delivers on its most significant promises.

The residents of New Braunfels, Texas, didn't volunteer to help accelerate AI development. Their once quiet corner of the state now buzzes with construction crews building power plants to sustain data centers—industrial warehouses that could soon consume as much electricity as entire cities to power state-of-the-art AI models. Meanwhile, more than a thousand miles away in Irvine, California, scores of video game developers laid off by Activision Blizzard back in 2024 may still be looking for their next gig as AI takes over more and more tasks across the industry, contributing to thousands of job cuts.
These aren't isolated incidents. They represent a small sample of an emerging public techlash that could derail AI development before the technology delivers on its most significant promises to revolutionize everything from education to health care.
Most Americans currently view AI as a threat to jobs and a strain on infrastructure, while tech executives make grand promises about revolutionary breakthroughs that often seem just out of reach. Only 17 percent of Americans believe AI will have a net positive impact on society over the next two decades, according to a poll conducted by the Pew Research Center. That's not just skepticism about short-term disruption—it's an indicator of distrust and, perhaps, opposition to the technology itself.
History shows what happens when powerful technologies lose public support due to isolated events and a pervasive fear-based narrative. The antinuclear movement of the 1970s effectively destroyed civilian nuclear power in America despite its potential for clean energy—an outcome many today regret. Opposition to genetic engineering has slowed agricultural innovations that could address food security and climate change. AI risks following the same path if the nascent AI techlash goes unaddressed.
The Mismatch Between Hype and Reality
Today's AI systems can write computer code that rivals skilled programmers on many tasks, diagnose certain diseases more accurately than doctors, and analyze complex datasets faster than any human analyst. But these impressive technical achievements haven't translated into tangible benefits for most ordinary people.
Instead, AI development has focused on applications that primarily help corporations cut costs: chatbots that replace customer service workers, code generators that reduce the need for entry-level programmers, and automated systems that produce marketing copy and articles. These tools deliver value to companies while offering little direct benefit to the people whose jobs they eliminate or communities whose resources they consume.
The AI developers themselves are focused on pursuing artificial general intelligence (AGI): labs like Meta and OpenAI measure their models against benchmarks wholly disconnected from specific public policy needs. It means relatively little to most Americans that a model can generate a proof for a bespoke math problem. By contrast, a test of whether a model's user interface is tailored to a diverse range of learning styles and cultural norms could push labs to focus on making models useful to everyday Americans.
The disconnect between the public interest and the incentives facing the labs and their core metrics is most jarring in the employment context. Job disruption disproportionately affects specific groups, often with little warning. Customer service representatives lose jobs to chatbots. Entry-level programmers face reduced demand as AI assistants handle basic coding tasks. Content creators compete with automated systems that produce marketing materials and articles. These aren't abstract economic trends—they represent real people losing income to machines they had no voice in developing.
Government spending patterns exacerbate the resentment. Billions of dollars flow toward AI development through legislation like the CHIPS and Science Act, while budgets for education, infrastructure, and social services face cuts or freezes. The message seems clear: taxpayer dollars support private AI development while public needs go unmet.
The Cooperation Problem
Dismissing AI entirely would be a costly mistake. The technology genuinely could transform how society addresses major challenges, but realizing that potential requires different priorities than current industry practices.
Consider education. AI tutoring systems could provide personalized instruction for every student, adapting to individual learning styles and pacing in ways impossible for overloaded teachers managing 30-student classrooms. These systems could identify exactly where each student struggles and provide targeted help, potentially closing achievement gaps that have persisted for decades.
But such systems only work if students, parents, and educators trust them enough to share learning data and integrate them into daily instruction. If public skepticism leads school districts to ban AI tools entirely, as some already have (only to later reverse course) and others are contemplating, these benefits will never materialize.
Medical AI faces similar challenges. Diagnostic systems could extend expert-level care to rural areas where specialists are scarce. AI can analyze medical images, suggest diagnoses, and recommend treatments with remarkable accuracy. Several systems already match or exceed human performance in detecting certain cancers, eye diseases, and other conditions.
Yet these tools only help patients if health care providers and patients themselves embrace them rather than viewing them as threats to human judgment and employment. Many medical professionals remain skeptical of using diagnostic AI due to concerns about liability, accuracy, and job security. Some patients lament a future in which they receive care from machines rather than doctors.
Traffic optimization is another example where public cooperation plays a crucial role in determining success. AI systems can analyze traffic patterns, predict congestion, and adjust signal timing to reduce commute times and emissions. One AI company claims that its tool can halve rush-hour traffic in urban settings. But scaling these benefits citywide requires drivers to use apps that share location data and cities to invest in connected infrastructure.
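The signal-timing idea above can be sketched in a few lines. This is a toy illustration only, not any vendor's actual algorithm: the function name, the 90-second cycle, the minimum green time, and the queue counts are all assumptions invented for the example. It shows just the core intuition of giving busier approaches a larger share of the cycle.

```python
# Toy sketch of queue-proportional signal timing. Illustrative only;
# deployed systems use far richer traffic models and prediction.
def split_green_time(queues, cycle_s=90, min_green_s=10):
    """Allocate a fixed signal cycle's green time across approaches
    in proportion to their current queue lengths (vehicle counts)."""
    total = sum(queues.values())
    if total == 0:
        # No waiting vehicles anywhere: split the cycle evenly.
        even = cycle_s / len(queues)
        return {approach: even for approach in queues}
    # Reserve a minimum green for every approach, then divide the
    # remaining time in proportion to observed demand.
    flexible = cycle_s - min_green_s * len(queues)
    return {
        approach: min_green_s + flexible * q / total
        for approach, q in queues.items()
    }

# 30 cars queued north-south vs. 10 east-west
greens = split_green_time({"north-south": 30, "east-west": 10})
# → {'north-south': 62.5, 'east-west': 27.5}
```

A real deployment would feed this kind of allocation with live sensor data and coordinate timing across many intersections, which is exactly why the cooperation the article describes (shared location data, connected infrastructure) matters.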
The Window Is Closing
Rejection of AI reflects understandable frustration with how its development has progressed, but it risks discarding genuinely valuable applications along with the problematic ones. The challenge is redirecting AI development toward public benefit while there's still time to build broad-based support.
There are three changes that could make the difference. First, if there is government funding for AI development, it should be narrowly focused and transparent, avoiding handouts to politically connected firms while prioritizing clear public benefits. Public investment should support clearly defined, limited goals, such as improving access to the legal system or accelerating breakthroughs in materials science.
Second, meaningful transparency about costs and benefits should become the norm. Just as New York City created a dashboard to track capital projects, monitoring whether they remain on time and under budget, governments could publish leaderboards showing which models have generated the most benefit by reducing waste, streamlining services, and expanding access to public goods. That is no doubt a complex measurement, but one that warrants development. The public may be less dissuaded by inevitable abuses of AI if they are aware of an AI education tool that actually improves student outcomes, or of an AI medical system that reduces health care costs while maintaining quality. AI developers could also publish independent audits that demonstrate how their models are being used and to what effect. Public access to this information would enable citizens to evaluate whether AI investments deliver the promised benefits.
Third, demonstrations of AI tools should focus less on abstract, speculative capabilities and more on solving real problems in visible ways. The best way to earn trust is a clear, verifiable improvement in someone's everyday life.
AI for Me but Not for Thee
The current trajectory of AI adoption, marked by high rates of trust and use among better-educated Americans, threatens to create a divided society in which AI expertise and use are seen by many as markers of social class. If AI tools remain primarily accessible to educated elites while everyone else faces displacement and disruption, the technology will become a source of inequality rather than a means of shared prosperity.
Countries that embrace AI adoption widely, not just in corporate boardrooms and tech labs, will gain lasting advantages in economic productivity, scientific research, and military capabilities. China has made AI adoption a national priority, investing heavily in public applications and encouraging mass adoption. America risks falling behind if public resistance prevents broad AI deployment.
The window to change course remains open, but it's closing rapidly. AI possesses genuine potential to address major challenges in education, health care, transportation, and governance. Realizing that potential requires shifting focus from corporate profit maximization to public problem-solving, from technological demonstration to real-world impact measurement, from elite adoption to mass benefit.
The alternative is a backlash that wastes both the technology's promise and the substantial public resources invested in its development. The residents of New Braunfels and the laid-off workers in Irvine represent the early stages of that techlash. Their concerns are legitimate, and their voices deserve to be heard. The question is whether policymakers will listen before it's too late to build the AI revolution that lifts everyone rather than just the few.
I am creating a company to create AI controlled street and stop signs. Imagine the fuel savings of not having to stop at a stop sign or light unless there is opposing traffic. And whether you need to stop is based on energy conservation, etc. I estimate a 10% reduction in fuel / emissions.
Of course, we will need to repeal the second amendment to prevent my fellow rednecks from shooting up my AI controlled rural stop signs.
There already exists a diesel car that gets 100 mpg
Fuck that. A guy invented a carburetor that runs on tap water 50 years ago. I saw it in the back pages of Mad magazine. Of course Big Oil went after him, and when JFK found out they took him out too.
That's exactly what happened to the Stirling engine-powered transcontinental aircraft.
https://m.youtube.com/watch?v=e8jH1LIDHqk
Instead, AI development has focused on applications that primarily help corporations cut costs:
Well, duhh!
This just shows the author has bought into marxist worldviews. When you start discussing issues on their terms, you've already lost.
Look: any time you increase productivity, you are vulnerable to being accused of "Cutting costs" even if you spend more.
My stupid, backward-ass corporation has been trying to migrate thousands of applications to the cloud for 5 years. It has been a nightmare trying to re-educate 50-year-old mainframe developers and their disciples to use cloud native technologies. It's a TBTF bank as well, so it is just chock full of regulations and other friction that make migration a frustrating exercise.
One development team, working on a pilot, was able to use AI to refactor 10 applications for the cloud and deploy in about one quarter. This was after spending almost a year to get a single application moved. It was a watershed moment at our company where suddenly a path out of this morass was found.
Yes, the cost of re-factoring is WAY WAY down. But our company didn't "Cut costs". At our current rate, we'd have needed 20x the engineers to get anywhere near that throughput and our company was NEVER going to spend that. We have basically the same budget, but we are doing (with the same number of engineers) 10x more stuff. It is easy to use statistics to claim that is "cutting costs" when it is no such thing.
This is what people don't understand. When you drive the cost of something down, suddenly you have more consumers willing to use it. The cost of developing a complete ecommerce platform is about to be so low that a kid with an idea and some spare time outside their entry level job can get started. This is going to result in an explosion of new ideas being tested in the market- many shitty, and some great. We will all benefit.
This is not to say the process won't be disruptive. Our executive team used to take a month to send corporate communications as Comms staff drafted a letter, ran it through legal, regulatory, hr, etc. Now the VP writes an email a week to distribute to his org, and they are laying off half the comms team. The ones who remain will be responsible for wrangling the AIs. In this case, they are cutting costs, but most importantly they are doing more with what they have.
> Look: any time you increase productivity, you are vulnerable to being accused of "Cutting costs" even if you spend more.
It depends on how you define "productivity". If you produced 100 quality appliances yesterday, but now produce 200 shitty ass appliances today just to save a buck, you aren't increasing productivity. And you're killing yourself in the long run as the slow moving consumer eventually realizes your products are shit.
Quick profits are the enemy of long term profits.
So you are saying we shouldn’t buy appliances from China?
Lol. Same comment. Leftists can hold 2 views diametrically opposed and not realize it. It is fucking amazing.
Weird coming from a guy who's for offshoring manufacturing and buying cheap shit from China.
We need more government to de-risk the world. We need to be more like China. We need to shift focus away from profit maximization.
Is this column an audition for Vox?
Every Reason article in the last ten years has been a Vox article auditioning for the Atlantic.
They barely qualify for Daily Beast.
Meanwhile, thousands of miles away in Irvine, California, scores of video game developers laid off by Activision Blizzard back in 2024 may still be still looking for their next gig
Yes, let's pretend that's AI's fault and not their gay homo faggoty wokeness.
I'm pretty sure gay homo faggoty wokeness will be seamlessly integrated into AI.
It isn't even that- it's just standard corporate mergers and acquisitions. They were bought by Microsoft, and MSFT doesn't need entire divisions building platforms that MSFT already produces by the bushel.
AI's voracious demand for electricity will revive nuclear power. That by itself is a good thing.
Wow, I thought I was reading an article on Reason, the libertarian website. It seems I was redirected to the American Prospect. Do I need better malware protection?
"...The disconnect between the public interest and the incentives facing the labs and their core metrics is most jarring in the employment context. Job disruption disproportionately affects specific groups, often with little warning..."
'Change happens and chicken littles want to be saved from the falling sky!'
What a pathetic pile of goose-stepping prose; did the calendar come undone and are we back at April 1? Mother Jones called; they want their writer back, and I want my $5 back from Reason.
Or maybe AI has been oversold, and is actually trash outside of niche applications.
Meanwhile, I'm not familiar with this lady but damn,
"I'm Not Putting On A F***ing Hijab" - UK Singer Refuses To Virtue Signal For Palestine, Pulls Out Of Festivals
https://www.zerohedge.com/political/im-not-putting-fing-hijab-uk-singer-refuses-virtue-signal-palestine-pulls-out-festivals
“More thinly veiled racism And overt antisemitism from the fucking gays for Hamas,” Banks further claimed, adding “#FUCKPALESTINE” at the end of the post.
In a further post she clarified, “And no, I’m not saying fuck actual Palestine. But fuck your dumb ass slogans and performative bullshit.”
“Yall wanna make a stance so bad but stand for absolutely nothing. As soon as the media says pedophilia is “natural and normal, You bitches will be right there talking about #PEDOPHILERIGHTS,” Banks further charged.
“That war has been going on in the background for fucking decades . Way before anyone alive today was born And all of a sudden yall are throwing around words like genocide and Zionist not even knowing the meaning of those words,” Banks continued.
“I don’t need to be on stage stressing out to appease some dumb ass protesters who literally rely on those people in Palestine being crushed so they can solicit donations they DO NOT give to the cause. Kiss my ass,” she further blasted.
In SF, a major Pride concert has been canceled; the lead singer was removed as a result of her blatant anti-semitism and support for the Pals, who, of course, would cheerfully kill the gays.
Funny stuff.
Glastonbury had an artist chant kill the IDF at the concert she refused to attend.
This guy could end up in jail in the UK. Not that I'm happy about that.
Herr Sarcasmic and Herr JewFree hardest hit.
AI tools are not going to result in utopia, and I wish people would stop pretending that they will.
It seems one result of the AI 'revolution' will be a lot of unemployed people but I suppose a UBI is the bandaid for that?
Or, wait, are we still doing 'learn to code'?
All of the coders will find work in coal mines.
“Learn to coal”.
I got laid off because idiot managers and executives thought that AI could replace me. I'm a software engineer for fucking cardiac pacemakers. And one of the largest manufacturers of cardiac pacemakers is busy laying off engineers to make room for AI.
It's frightening in the extreme. But the damned bean counters who don't know how to make things decided that AI is the way to make things. Because that's the fucking narrative. Head over to LinkedIn and there's story after story about people losing their software engineering positions to AI. Microsoft laid off 30% of its developers at the same time it's bragging about moving to AI.
But can AI do it? FUCK NO! Google now has an AI summary on the top of every search, and it's invariably WRONG. But Google doesn't care because the public doesn't care. We have a culture that thrives on fake news, so fake search results just doesn't bother us.
But don't get me wrong. The problem is not the "AI". (Scare quotes because it's not at all intelligent.) The problem is the corporate culture that puts quick profits above quality, safety, and long-term stability and growth. Get a quick buck and get out. That's the result of corporatism, fueled by the corporatist policies of Bush, Obama, Trump, Biden, and Trump. Tax and regulatory structures that encourage corporatism.
In the long run the market will correct itself. But loads of damage along the way. Back in the early to mid 2000s, companies thought that outsourcing all development to Bangalore was the cool thing to do. But within a couple of years they realized the software was utter crap. And brought development back in-house. Not saying Indian developers are bad, but when your SOLE goal is a quick buck then hire the cheap crap.
Same thing happening with AI. Laying off high quality developers to make way for LLMs never ever designed to do this kind of work, and in a couple of years they will learn their lessons as sales tank, networks implode, cars crash, and patients die. But upper management will have gotten their bonuses and moved on.
It sounds like I'm anti-capitalist. I am not! I am anti-corporatist. A few really really giant corporations are running everything, when what we need is hundreds and thousands of small nimble companies who care about making a quality product that customers want to purchase.
AI is not ready yet, we have years to go. But it's being put in charge anyway. It's one thing if automation was taking our jobs, but it's automation that doesn't even work. Self-driving cars that can't drive. AI generated software that won't even compile.
Sorry you got laid off. That sucks.
Pro tip: he didn’t have that job to begin with.
Likely this, because we have replaced a total of zero engineers with AI. Where we've seen AI function is more on supply-chain pattern finding with vendors and basic web admin changes. All embedded or high-level coding has not been replaced. All the AI coding tools suck worse than a coding farm in India, as seen by the startup that had to lie about AI code it had gotten from a coding farm in India.
Brandy is lying about his job most likely.
Stringing together some techy-sounding bullshit might fool your fellow idiots who still believe the 2020 election was stolen, but to anyone who knows the subject matter it looks like the word salad that it is. I can say with certainty that you know as much about software engineering as you do about economics. Nothing. Nothing at all. Just another reason why your posts look best in grey.
“idiots management and executives thought that AI could replace me. I'm a software engineer for fucking cardiac pacemakers. And one of the largets manufacturers of cardiac pacemakers are busy laying off engineers to make room for AI.”
Classic rookie mistake: You don’t fire the senior guys until they’ve trained the cheaper option to the point they can be replaced.
I don’t agree with you most of the time, but that sucks.
Bet you were in the bottom 30% of sw engineers. Because if you weren't, then the rest of your screed is apparent to management, especially in a domain doing embedded hardware, since AI can basically turn C and C++ into less efficient Java and can't do bare-metal coding at all. Which means you weren't one of the engineers still tasked with doing that.
Bet you were a web administrator or something similar.
"Google now has an AI summary on the top of every search, and it's invariably WRONG. But Google doesn't care because the public doesn't care."
Same with product reviews at Amazon. And tomorrow we'll probably get an AI summary of morning links here at Reason. What I'm seeing so far is a sanitized version of reality with a short nod to the detractors. In other words, AI is creating the conventional wisdom that will be digested by the masses without any need for research or critical thinking. And I'm afraid that's the overarching goal here. The goal of the elite was always to get the proles to accept their version of reality, and AI will make that easier than ever.

On the economic side I'm all for creative destruction. But the zealots are promising a universal destructive juggernaut and no one seems able to predict the damage it will leave in its wake. We're not talking about buggy whips and phone booths here. The most fundamental economic truth is, or was, that all wealth is created by human labor. I get that it takes labor to create AI. But won't the ultimate outcome be to make even the designers superfluous? Musk says we'll need some kind of UBI so the robots will save us from starvation. But a man without work is half a man. The species needs our work, but more importantly we need it.

I'll be gone long before the dystopian nightmare or the fabulous paradise on earth is manifest. Or maybe AI will bring me back and I'll live as a head in a jar on a fireplace mantle somewhere and lecture the youngsters about the benefits of a pair of SU carbs on a VW 1600.
We will all become nothing but batteries.