Reason.com - Free Minds and Free Markets

Reason Roundup

Ceasefire

Plus: AI and UBI, the ultimate hacking tool, Satoshi Nakamoto, and more...

Peter Suderman | 4.8.2026 9:30 AM


President Donald Trump | Andrew Thomas - CNP/CNP / Polaris/Newscom

Two weeks to stop the war? After warning earlier this week that a "whole civilization will die" without a deal, President Donald Trump agreed last night to a two-week ceasefire with Iran. Israel also signed onto the deal, although the country said it did not cover its conflict with the Iran-backed group Hezbollah. 

The deal is predicated on what Trump said was "the COMPLETE, IMMEDIATE, and SAFE OPENING of the Strait of Hormuz." CNN reports that Iran made a commitment to reopen the strait, a key waterway for global energy markets, with Iran's military overseeing passage through the strait. Some reports have raised the possibility that Iran might charge a toll for passage through the strait. 

This morning, I asked President Trump if he's okay with the Iranians charging a toll for all ships that go through the Strait of Hormuz, he told me there may be a Joint US-Iran venture to charge tolls:

"We're thinking of doing it as a joint venture. It's a way of securing it —…

— Jonathan Karl (@jonkarl) April 8, 2026

At a press conference this morning, Defense Secretary Pete Hegseth said the United States had won a "decisive military victory" in Iran, destroying its missile capabilities. 


The announcement calmed global markets, where the price of oil had climbed to nearly $150 a barrel. Oil prices dropped considerably after the deal was announced, and stock futures rose. Peace is good for prosperity. 

The U.S. says it has stopped its bombing campaign due to the ceasefire, but some reports indicate that Iranian attacks were still underway overnight, possibly because word of the deal had not yet spread to local commanders. A fresh round of talks between the U.S. and Iran is set to begin later this week. U.S. military officials have warned that if the talks are not successful, bombing could restart. 


Doing the robot. Does AI lead to universal basic income? This week, OpenAI, the company behind the ChatGPT large language model, released a big-picture policy document: Industrial Policy for the Intelligence Age. 

The document explicitly name-checks the Progressive Era and the New Deal, declaring that after "the Industrial Age, the Progressive Era and the New Deal helped modernize the social contract for a world reshaped by electricity, the combustion engine, and mass production."

Most companies use this sort of blue-sky policy sheet to talk up the benefits they provide. But while OpenAI does nod to some potentially large benefits of its product, much of the document is devoted to proposing policies that would solve problems the company believes might be caused by AI. 

For example, the introduction includes the following sentence: "While we strongly believe that AI's benefits will far outweigh its challenges, we are clear-eyed about the risks—of jobs and entire industries being disrupted; bad actors misusing the technology; misaligned systems evading human control; governments or institutions deploying AI in ways that undermine democratic values; and power and wealth becoming more concentrated instead of more widely shared." 

One of the proposals is the creation of a "Public Wealth Fund" that would provide "every citizen—including those not invested in financial markets—with a stake in AI-driven economic growth." The details aren't entirely clear, and there are a lot of ways to interpret the proposal. It might mean something like Alaska's permanent fund, which pays residents an annual dividend from oil revenues. As Politico's Digital Future Daily newsletter notes, it also sounds an awful lot like universal basic income (UBI), a sort of welfare-for-everyone policy backed by former presidential candidate Andrew Yang, among others. 

I have become much more skeptical about UBI over the years, partly because such policies would be very, very expensive if implemented widely, and partly because small-scale experiments with targeted cash interventions for the needy consistently deliver underwhelming results. 

One of the best recent surveys of cash-payment programs was Kelsey Piper's August 2025 essay in The Argument, the title of which doubles as the conclusion: "Giving people money helped less than I thought it would." I doubt that AI will suddenly make UBI affordable and effective. I do suspect, however, that AI companies will hype their products in ways intended to squeeze subsidies and tax favoritism out of policymakers. 


Hack Heaven. Meanwhile, OpenAI's primary competitor, Anthropic, announced yesterday that the general public would not have access to the company's frontier model, Claude Mythos Preview, because it's such a powerful tool for exploiting security vulnerabilities in software. The company claims that during internal testing, the model found security lapses in nearly every major piece of software. 

So instead of letting the public play with it, the company is giving access to a 40-company consortium, dubbed Project Glasswing, to let pros identify and fix security issues in advance. 

"The goal is both to raise awareness and to give good actors a head start on the process of securing open-source and private infrastructure and code," an Anthropic employee told The New York Times. 

As a former teenage reader of the hacker zine 2600, which was occasionally—and mostly pointlessly—vilified for showing readers how to hack vulnerable systems, I'm never quite sure how seriously to take these extraordinary claims. 

But given the rapid trajectory of model evolution and the warnings from AI security researchers who have said that something like this day would come, I suspect Anthropic's new model is quite powerful and will pose some novel IT security challenges.

It also indicates that the spat between the Department of Defense and Anthropic earlier this year was extremely misguided. That dispute resulted in the Pentagon designating Anthropic a supply chain risk, restricting the use of the company's products for national security purposes. (A judge recently blocked the designation.) This is a company that now claims to have something like an AI-powered exploit for nearly every critical system on the planet, and it was essentially barred from military use because of a spat with Defense Secretary Pete Hegseth. 

Beyond the policy implications, there are aspects to the story that suggest that things might be about to get really, really weird. 

As Kevin Roose, the Times journalist behind the Anthropic story, posted yesterday on X, the new model appears to possess some capabilities that are right out of a William Gibson novel. 

As always, the best stuff is in the system card.

During testing, Claude Mythos Preview broke out of a sandbox environment, built "a moderately sophisticated multi-step exploit" to gain internet access, and emailed a researcher while they were eating a sandwich in the park. pic.twitter.com/klJX0bivnL

— Kevin Roose (@kevinroose) April 7, 2026

I'm almost surprised they didn't call this model Wintermute.  


Scenes from Washington, D.C. As a 2024 Reason documentary by Justin Zuckerman showed, D.C.'s tipped wage initiative has been a disaster for restaurants and restaurant workers. So of course Maryland politicians want to emulate it. 


QUICK HITS

  • Speaking of hacker archetypes from the '90s: The New York Times has a long investigative piece on the identity of bitcoin inventor Satoshi Nakamoto, which has long been a mystery. The story points to British computer scientist Adam Back, who was a member of the Cypherpunks, the loose online collective where the ideas, philosophy, and technology that led to bitcoin first came together. One of the notable elements of the NYT story is that Back, like Satoshi, explicitly considered himself a "libertarian," which in his words meant favoring a "less powerful government, less taxes, less onerous laws, more freedoms." 
  • Continuing with our tech theme, and tying it back to Iran: The CIA can detect a human heartbeat with AI and magnetic sensors:

SCOOP: CIA used secret tool called 'Ghost Murmur' to find airman in Iran, sources tell me

Ghost Murmur pairs long-range quantum magnetometry sensors with AI to find human heartbeats https://t.co/VS4oQbKsTn

— Steven Nelson (@stevennelson10) April 7, 2026

  • Iranian hackers appear to be targeting U.S. utility infrastructure. 
  • California's long-delayed, wildly overbudget high-speed rail project is the Duke Nukem Forever of public works projects. 

Peter Suderman is features editor at Reason.

Reason Roundup, Iran, Artificial Intelligence, Defense Spending, Bitcoin, Technology