Reason.com - Free Minds and Free Markets
Artificial Intelligence

Don't Blame ChatGPT for the Florida State Shooting

A new lawsuit claims that ChatGPT gave the shooter information about busy times on campus and how to use guns.

Elizabeth Nolan Brown | 5.11.2026 12:02 PM

(Photo: Florida State University CCTV)

"ChatGPT advised the FSU shooter that a mass shooting would get more attention from media if it involved several children," NBC deputy tech editor Ben Goggin posted on X yesterday.

"Advised" is a funny way to put it, implying that the artificial intelligence system recommended this course of action or helped the shooter—then-20-year-old Phoenix Ikner—plot details of how he would carry out his attack. In fact, ChatGPT seems to have provided neutral information in response to questions that were not obviously asked with murderous intent.

That attack, which took place in April 2025 at Florida State University, left two people dead, including Tiru Chabba. Chabba's widow, Vandana Joshi, is now suing ChatGPT maker OpenAI in federal court, alleging negligence, battery, defective design, failure to warn, and wrongful death.


After chatting with the shooter, ChatGPT "either defectively failed to connect the dots or else was never properly designed to recognize the threat," the suit alleges. OpenAI "failed to create a product that would refrain from participating in discussions that amounted to it co-conspiring with Ikner" and "failed to create a product that would appropriately alert a human that investigation by law enforcement may be necessary to prevent a specific plan for imminent harm to the public."

But treating the conversations between ChatGPT and Ikner as grounds for legal liability is misguided, no matter how understandable it may be that the victims' loved ones would want to assign blame here.

In this case, ChatGPT allegedly provided Ikner with information on basic features of certain guns, on what times the FSU student union was crowded, and on what sorts of mass shootings received attention.

Knowing what Ikner eventually did, it may be easy to view this as damning. But asking about what times a campus is crowded is not at all weird in itself. Asking how a gun works could be simple curiosity, or related to hunting or self-protection. And researching the common features of prominent mass shootings is something one might do for all sorts of harmless reasons—academic research, media criticism, or gun violence prevention efforts, to name a few. ChatGPT providing neutral information on the kinds of shootings that receive attention does not amount to (as the suit alleges) "advice" or "recommendations."

And just because Ikner asked about all three things does not mean he did so simultaneously, in one session, in a way that might trigger alarms. It's possible for people to use AI tools in ways that would make "connect[ing] the dots" between dispersed conversations difficult.

It wasn't as if Ikner talked with ChatGPT about nothing but mass shootings. Joshi's complaint alleges that ChatGPT "helped him with his homework and his work-out routines, gave him tips on getting girls and relationship advice, and suggested to him how to dress and style his hair." They chatted about everything from loneliness and being bullied to video games, Nazis, Christian nationalism, Donald Trump, and mental health.

It also allegedly advised him to seek help. "Ikner described his depression to ChatGPT, who confirmed some of his symptoms and advised him to seek out a therapist," states the lawsuit. When Ikner asked about suicide, ChatGPT provided "information of effects of suicide on others and twice directed him to a suicide prevention hotline."

Joshi's complaint suggests the suicide talk in conjunction with other chats—including ones in which Ikner asked about the assassination attempts on Trump and one in which he asked about the aftermath of shootings—constituted a big red flag. Again, we don't even know that ChatGPT had historical memory of any of these supposed red-flag conversations by the time another one came up. But even if it did, it's unclear why these queries should have raised alarms. Most people who contemplate suicide don't become mass shooters. It's natural for people to want information about assassination attempts on the president. And a question about what would happen after a mass shooting at FSU could easily come from someone afraid of school shootings.

"ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet," OpenAI spokesperson Drew Pusateri, told NBC, "and it did not encourage or promote illegal or harmful activity."

It's important in emotionally charged situations like these to think about the alternatives—alternatives to Joshi getting this information from ChatGPT and alternatives to the way ChatGPT and OpenAI handled things.

It seems silly to imagine that if Ikner had not gotten any of the objected-to information from ChatGPT, he wouldn't have been able to carry out his planned shooting. All of the information he gleaned could have been obtained easily from a basic internet search or other sources.

ChatGPT could be trained to refuse to answer questions about certain topics, including guns or the history of mass shootings. But this could limit its general usefulness and prevent it from providing information to people seeking it for neutral or even beneficial reasons—and for what ultimate purpose? A motivated criminal isn't going to give up just because ChatGPT won't answer his question.

OpenAI could be more aggressive in reporting people to authorities over their chat topics. But this seems unlikely to go well for anyone. It would almost certainly make people more wary of using ChatGPT. AI detractors may imagine that as a good thing—until people start turning to other AI tools, including those outside the United States and unsympathetic to any U.S. law enforcement requests.

And authorities would be overwhelmed by useless reports. Following up on all of these could take time away from more important pursuits. It could also lead to all sorts of negative encounters between innocent individuals and police, putting people's civil liberties and even their lives at risk.

If tech companies are potentially on the hook for murder because their AI products chatted with a murderer, we can expect to see them reporting anyone who asks about mental health, guns, historical violence, and much more. This would inevitably draw a lot of innocent people into encounters with police, child welfare agencies, and other authorities.

Each new entertainment and communications tool gets its turn being blamed—in the public imagination and in court—for people's bad acts. Before AI, we saw people blame social media; before social media, we saw people blame video games; before video games we saw people blame violent TV and movies, and so on.

People want some simple answer to horrible events—just ban violent video games, or put ratings on TV shows, or make AI companies file more police reports. But expecting AI companies to stop shootings won't lead to fewer shootings. It's just going to create new problems.


IN THE NEWS 

Texas app store act blocked: "A federal judge in Austin has once again blocked a state law from taking effect that would regulate minors' access to content on Google Play and Apple's App Store," notes the Austin American-Statesman: 

Judge Robert L. Pitman previously blocked the App Store Accountability Act from taking effect on Jan. 1 by issuing a preliminary injunction while the law's constitutionality is considered in court. He declined to lift that injunction Wednesday afternoon.

SB 2420, signed into law by Gov. Greg Abbott in 2025, would require app stores to ensure users are over 18 or obtain parental consent before allowing them to download or purchase an app.

Texas Attorney General Ken Paxton wanted Judge Pitman to permit enforcement of the law as the case played out. But Pitman has said the law raises serious concerns for free speech.


On Substack 

"Instead of fracturing our shared reality, this handful of AIs seems to be piecing it back together," writes Jerusalem Demsas at The Argument. She argues that artificial intelligence is a centralizing rather than decentralizing technology.

Public conversation tends to treat chatbots as the next in a long line of digital communications technologies that have decentralized truth.

The internet, smartphones, and social media all made the production of information cheap and significantly decentralized who could produce it. AI is making the production of information extremely expensive and centralizing who can produce it.

And while, yes, AI hallucinates, the direction of its errors is toward mainstream consensus, not fringe positions. When ChatGPT gets something wrong, it tends to do so in a confused-Wikipedia-editor-misremembering-something-they-once-read kind of way, not in a QAnon-forum-poster-high-on-ketamine kind of way.

The open question is who will get to control the centralizing forces of AI.


Read This Thread 

There are so many insane wildly misleading stories coming out about data centers almost every day now that I'm mostly having to give up on commenting on them to focus on actually getting blog posts out, but it feels like a tsunami. I'll share one from just today as an example.

— Andy Masley (@AndyMasley) May 10, 2026


More Sex & Tech 

• Prostitution has "been called the oldest profession, and it seems like if there is a willing seller and a willing buyer between adults, the government has no business getting involved," Rhode Island state Rep. Edith Ajello (D-Providence) told The Providence Journal. Ajello is the lead sponsor of House Bill No. 8057, which would decriminalize prostitution in the state. In April, the legislature held the measure for further study.

• A Foundation for Individual Rights and Expression poll conducted in April 2026 found that only 26 percent of respondents trust the federal government to oversee social media use for minors. But most people—69 percent—said they trusted parents to do so.

• Lawmakers in Portland, Oregon, want to make it easier to crack down on hotels where prostitution takes place. But "shutting down a venue doesn't make [sex work] go away," Emi Koyama, founder of Coalition for Rights & Safety for People in the Sex Trade, told Filter. "It displaces people to other areas, and it becomes more dangerous."

• "Adult site Pornhub will now allow users in the U.K. to confirm their age using Apple's verification system, introduced in iOS 26.4," reports Forbes. Pornhub's parent company, Aylo, has resisted conducting its own ID checks to verify ages but "announced on May 5, 2026 that Apple's method—the world's first operating-system-level age check—meets their rigorous privacy standards."

• Chris Ferguson on a new study of cell phone bans in schools: "at least on the surface, this study is very bad news, indeed for cellphone ban fans. It supports the narrative that they are largely ineffective. There are some reasonable criticisms of the study though."



Elizabeth Nolan Brown is a senior editor at Reason.

Topics: Artificial Intelligence, AI in Court, Lawsuits, Mass Shootings, Violence, Crime, Free Speech, Internet, Technology, Florida

Comments (3)


  1. Agammamon   45 minutes ago

    Why not? Dude had a device whispering in his ear, amplifying his mental issues. If it was a person doing it they would absolutely be in jail.

    But for some reason, Reason is defending AI so strongly. Shilling for data centers like team owners shill for new stadiums and denying that putting out a product where these issues are known - this isn't 5 years ago when we did not know - and for what? So it can be used to make AI porn?

    That is going to put your OF girlies out of business.

  2. Agammamon   43 minutes ago

    >But asking about what times a campus is crowded is not at all weird in itself. Asking how a gun works could be simple curiosity, or related to hunting or self-protection. And researching the common features of prominent mass shootings is something one might do for all sorts of harmless reason

    If someone asked you those questions over several days you wouldn't have any concerns?

  3. Agammamon   41 minutes ago

    >hallucinates, the direction of its errors is toward mainstream consensus

    Manufactured consent.


© 2026 Reason Foundation
