The Coming Techlash Could Kill AI Innovation Before It Helps Anyone
Power-hungry data centers, disappearing jobs, and billions of dollars in subsidies are fueling resentment. If developers and policymakers don’t change course, Americans may reject AI before it ever delivers on its most significant promises.
The residents of New Braunfels, Texas, didn't volunteer to help accelerate AI development. Their once quiet corner of the state now buzzes with construction crews building power plants to sustain data centers: industrial warehouses that could soon consume as much electricity as entire cities to power state-of-the-art AI models. Meanwhile, more than a thousand miles away in Irvine, California, scores of video game developers laid off by Activision Blizzard back in 2024 may still be looking for their next gig as AI takes over more and more tasks across the industry, cutting thousands of jobs in total.
These aren't isolated incidents. They represent a small sample of an emerging public techlash that could derail AI development before the technology delivers on its most significant promises to revolutionize everything from education to health care.
Most Americans currently view AI as a threat to jobs and a strain on infrastructure, while tech executives make grand promises about revolutionary breakthroughs that often seem just out of reach. Only 17 percent of Americans believe AI will have a net positive impact on society over the next two decades, according to a poll conducted by the Pew Research Center. That's not just skepticism about short-term disruption—it's an indicator of distrust and, perhaps, opposition to the technology itself.
History shows what happens when powerful technologies lose public support due to isolated events and a pervasive fear-based narrative. The antinuclear movement of the 1970s effectively destroyed civilian nuclear power in America despite its potential for clean energy—an outcome many today regret. Opposition to genetic engineering has slowed agricultural innovations that could address food security and climate change. AI risks following the same path if the nascent AI techlash goes unaddressed.
The Mismatch Between Hype and Reality
Today's AI systems can write computer code that rivals the work of expert programmers, diagnose certain diseases more accurately than doctors, and analyze complex datasets faster than any human analyst. But these impressive technical achievements haven't translated into tangible benefits for most ordinary people.
Instead, AI development has focused on applications that primarily help corporations cut costs: chatbots that replace customer service workers, code generators that reduce the need for entry-level programmers, and automated systems that produce marketing copy and articles. These tools deliver value to companies while offering little direct benefit to the people whose jobs they eliminate or communities whose resources they consume.
The AI developers themselves are focused on pursuing artificial general intelligence (AGI): labs like Meta and OpenAI measure their models against benchmarks wholly disconnected from specific public policy needs. It means little to most Americans that a model can generate a proof for a bespoke math problem. A test of whether a model's interface is tailored to a diverse range of learning styles and cultural norms, by contrast, could push labs to focus on making models useful to everyday Americans.
The disconnect between the public interest and the incentives facing the labs and their core metrics is most jarring in the employment context. Job disruption disproportionately affects specific groups, often with little warning. Customer service representatives lose jobs to chatbots. Entry-level programmers face reduced demand as AI assistants handle basic coding tasks. Content creators compete with automated systems that produce marketing materials and articles. These aren't abstract economic trends—they represent real people losing income to machines they had no voice in developing.
Government spending patterns exacerbate the resentment. Billions of dollars flow toward AI development through legislation like the CHIPS and Science Act, while budgets for education, infrastructure, and social services face cuts or freezes. The message seems clear: taxpayer dollars support private AI development while public needs go unmet.
The Cooperation Problem
Dismissing AI entirely would be a costly mistake. The technology genuinely could transform how society addresses major challenges, but realizing that potential requires different priorities than current industry practices.
Consider education. AI tutoring systems could provide personalized instruction for every student, adapting to individual learning styles and pacing in ways impossible for overloaded teachers managing 30-student classrooms. These systems could identify exactly where each student struggles and provide targeted help, potentially closing achievement gaps that have persisted for decades.
But such systems only work if students, parents, and educators trust them enough to share learning data and integrate them into daily instruction. If public skepticism leads school districts to ban AI tools entirely, as some already have (only to later reverse course) and others are contemplating, these benefits will never materialize.
Medical AI faces similar challenges. Diagnostic systems could extend expert-level care to rural areas where specialists are scarce. AI can analyze medical images, suggest diagnoses, and recommend treatments with remarkable accuracy. Several systems already match or exceed human performance in detecting certain cancers, eye diseases, and other conditions.
Yet these tools only help patients if health care providers and patients themselves embrace them rather than viewing them as threats to human judgment and employment. Many medical professionals remain skeptical of using diagnostic AI due to concerns about liability, accuracy, and job security. Some patients lament a future in which they receive care from machines rather than doctors.
Traffic optimization is another example where public cooperation plays a crucial role in determining success. AI systems can analyze traffic patterns, predict congestion, and adjust signal timing to reduce commute times and emissions. One AI company claims that its tool can halve rush-hour traffic in urban settings. But scaling these benefits citywide requires drivers to use apps that share location data and cities to invest in connected infrastructure.
The Window Is Closing
Rejection of AI reflects understandable frustration with how its development has progressed, but it risks discarding genuinely valuable applications along with the problematic ones. The challenge is redirecting AI development toward public benefit while there's still time to build broad-based support.
There are three changes that could make the difference. First, any government funding for AI development should be narrowly focused and transparent, avoiding handouts to politically connected firms while prioritizing clear public benefits. Public investment should support clearly defined, limited goals, such as improving access to the legal system or accelerating breakthroughs in materials science.
Second, meaningful transparency about costs and benefits should become the norm. Just as New York City created a dashboard to track whether capital projects remain on time and under budget, governments could publish leaderboards showing which models have generated the most benefit by reducing waste, streamlining services, and expanding access to public goods. That is no doubt a complex measurement, but one worth developing. The public might be more forgiving of AI's inevitable abuses if they knew of an AI education tool that actually improves student outcomes, or an AI medical system that reduces health care costs while maintaining quality. AI developers could also publish independent audits showing how their models are being used and to what effect. Public access to this information would let citizens evaluate whether AI investments deliver the promised benefits.
Third, demonstrations of AI tools should focus less on abstract, speculative capabilities and more on solving real problems in visible ways. The best way to earn trust is a clear, verifiable improvement in someone's everyday life.
AI for Me but Not for Thee
The current trajectory of AI adoption, marked by high rates of trust and use among better-educated Americans, threatens to create a divided society in which AI expertise and use are seen by many as markers of social class. If AI tools remain primarily accessible to educated elites while everyone else faces displacement and disruption, the technology will become a source of inequality rather than a means of shared prosperity.
Countries that adopt AI widely, not just in corporate boardrooms and tech labs, will gain lasting advantages in economic productivity, scientific research, and military capability. China has made AI adoption a national priority, investing heavily in public applications and encouraging mass adoption. America risks falling behind if public resistance prevents broad AI deployment.
The window to change course remains open, but it's closing rapidly. AI possesses genuine potential to address major challenges in education, health care, transportation, and governance. Realizing that potential requires shifting focus from corporate profit maximization to public problem-solving, from technological demonstration to real-world impact measurement, from elite adoption to mass benefit.
The alternative is a backlash that wastes both the technology's promise and the substantial public resources invested in its development. The residents of New Braunfels and the laid-off workers in Irvine represent the early stages of that techlash. Their concerns are legitimate, and their voices deserve to be heard. The question is whether policymakers will listen before it's too late to build the AI revolution that lifts everyone rather than just the few.