Be Like Pixar, Not NASA
Artificial intelligence poses the most risk when it is embedded in a centralized, tightly coupled organization. But it can facilitate decentralization too.

It began at 4:00 in the morning on March 28, 1979, at Three Mile Island, Pennsylvania. The nuclear reactor was operating at nearly full power when a malfunction in a secondary cooling circuit caused the temperature of the primary coolant to rise. That sharp rise made the reactor shut down automatically. In the seconds it took to deactivate the reactor, a relief valve stuck open, letting coolant escape. Operators, unable to diagnose the problem or deal with the unexpected shutdown in the heat of the moment, could not prevent severe damage to the nuclear core.
Sociologist Charles Perrow later analyzed why the Three Mile Island accident had happened, hoping to anticipate other disasters to come. The result was his seminal book Normal Accidents. His goal, he said, was to "propose a framework for characterizing complex technological systems such as air traffic, marine traffic, chemical plants, dams, and especially nuclear power plants according to their riskiness."
One factor was complexity: The more components and interactions in a system, the more challenging it is when something goes wrong. With scale comes complexity, whether we are thinking of the technology or the organization that supports it. Imagine you run a start-up where everyone sits in the same loft space. From where you sit, you can easily see what they are all doing. In a large organization, that visibility is lost. The moment a leader can't see the inner workings of the system itself—in this case, staff activities—complexity rises.
Perrow associated this type of complexity with tech failures. At Three Mile Island, operators couldn't just walk up to its core and measure the temperature manually or peek inside to discover there was not enough coolant. Similarly, executives in a large company can't monitor every employee all the time without incurring resentment. They have to rely on indirect indicators, such as performance evaluations and sales results. Large companies also rely on complex information technologies and complex supply chains.
Another factor, wrote Perrow, was a system's coupling: the level of interdependence among its components. When systems are both complex and tightly coupled, they are more likely to produce negative unexpected consequences and get out of control.
Perrow did not include artificial intelligence (A.I.) or even software among the technologies whose interactions he charted. But by the criteria he laid out for technological risk, A.I. systems fit in his framework next to nuclear power plants, space missions, and DNA sequencing. If some element isn't working according to plan, cascading effects can ripple through the system in wholly unexpected ways.
Tight and Loose Coupling
Tightly coupled systems have architectures—technological and social—that promote interdependence among their components and often isolation from outside connection. This makes them efficient and self-protective but less robust.
Loosely coupled systems, by contrast, have more open and diverse architectures. Changes in one module, section, or component hardly affect the other components. Each operates somewhat independently of the others. A loosely coupled architecture is easy to maintain and scale. It is also robust, in that problems don't propagate easily to other parts of the system.
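Software architects use the same vocabulary, and a small sketch makes the distinction concrete. In this hypothetical example (the class and field names are invented, not drawn from any real system), a tightly coupled report reads a data store's private internals, so any change there breaks it, while a loosely coupled report depends only on a narrow interface the store can change freely behind:

```python
# A minimal sketch of tight vs. loose coupling in code.
# All class and field names are invented for illustration.
from typing import Protocol

class TightlyCoupledReport:
    """Reaches into the store's private internals; any schema change breaks it."""
    def total(self, store) -> float:
        return sum(row["amt_usd"] for row in store._rows)  # private detail

class SalesSource(Protocol):
    """A narrow interface: the only thing the loose report depends on."""
    def amounts(self) -> list[float]: ...

class LooselyCoupledReport:
    """Depends only on the interface; the store can change freely behind it."""
    def total(self, source: SalesSource) -> float:
        return sum(source.amounts())

class SalesStore:
    def __init__(self) -> None:
        self._rows = [{"amt_usd": 10.0}, {"amt_usd": 5.0}]  # internal layout
    def amounts(self) -> list[float]:
        return [row["amt_usd"] for row in self._rows]

store = SalesStore()
print(TightlyCoupledReport().total(store))  # 15.0, but brittle to change
print(LooselyCoupledReport().total(store))  # 15.0, robust to internal change
```

If the store renames its internal field, only the tightly coupled report fails, and the failure does not propagate past the interface.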
Executives who run large organizations tend to favor a tightly coupled system. It is what they know. They grew up in their industries seeing a small number of people making decisions that affect millions of people. But tightly coupled systems can be harder to control. Think of a floor covered with dominoes that are lined up. When you tip one over, it will then, in sequence, knock down the entire array of dominoes—a simple example of a tightly coupled system. Now try to stop it once the domino effect is in motion. It's much harder than you would think.
A large company is also generally a tightly coupled system, especially compared with small businesses and local mom-and-pop retailers. If you have a complaint about a corner store's product, you can take it back, and the owners will take it in stride, handling it differently for each customer. They have control over their own actions. People who work in a large company, or as a franchise, are tightly coupled to the company's branding and scaled-up procedures, and to one another. Those who want to operate differently from the standard procedures must buck the tightly coupled network.
During the pandemic, we realized just how tightly coupled and interconnected our supply chains are—how one container ship stuck in the Suez Canal can delay global shipments for months. Many organizations have been looking to create more robust redundancies, effectively loosening the coupling in their supply chains by finding alternate vendors and investing in local sources.
The Formula for Disaster
Organizational sociologist Diane Vaughan is an expert on the ways systems can repeatedly engender catastrophe. She started studying the issue after the Challenger disaster of 1986, when the space shuttle exploded shortly after launch. The "technical cause," she later wrote, was "a failure of the rubber O-rings to seal the shuttle's solid rocket booster joints. But the NASA organization also failed."
NASA had been launching space shuttles with damaged O-rings since 1981. Pressured by the launch schedule, the agency leaders had ignored engineers' warnings right up to the day of the launch. In fact, within the established rules, the agency had labeled the O-ring damage an "acceptable risk."
Vaughan spent the next five years researching and writing The Challenger Launch Decision, an in-depth book about the organizational problems leading to the technological disaster. Like Perrow, she concluded that this type of organization would repeatedly produce catastrophic mistakes. After the book came out, she later noted, "I heard from engineers and people in many different kinds of organizations who recognized the analogies between what happened at NASA and the situations at their organizations. 'NASA is us,' some wrote."
Another disaster, this time involving the space shuttle Columbia, occurred on February 1, 2003, when the orbiter broke apart on reentry. Another seven astronauts died. A technical review found that a piece of foam insulation had broken off the external fuel tank during launch and struck the left wing. Once again, engineers had warned the agency, and the warnings had been ignored. Once again, Vaughan became closely involved in investigating the causes, ultimately joining the government's Columbia Accident Investigation Board. She testified to the board that she had found the same organizational causes for both accidents.
In her writing on the disasters, Vaughan cites Perrow, noting that NASA's tightly coupled, complex nature made it systematically prone to occasional major errors. The key decision makers had fallen prey to a "normalization of deviance," in which dangerous complacency gradually became the ordinary way of doing things. "We can never totally resolve the problem of complexity, but we have to be sensitive to our organizations and how they work," she wrote. "While many of us work in complex organizations, we don't realize how much the organizations that we inhabit completely inhabit us. This is as true for those powerful actors at the top of the organization responsible for creating culture as it is for the people in the tiers below them who carry out their directives and do the everyday work."
In these disasters, she testified to the board, "the technological failure was a result of NASA's organizational failure."
Tightly Coupled A.I.
Software designer Alan Chan argues that some innate aspects of artificial intelligence tend to make everything it touches more complex and more tightly coupled. Even when a project is supposed to be "responsible A.I.," working with an automated algorithm can override the best intentions of the software engineers.
"Although designers may try as much as possible to include all the relevant features, they may only come to know the relevance of some features after an accident informs them to that effect," says Chan. "Moreover, while a human observer is limited by the ways in which their senses interact with measurement instruments, an A.I. subsystem is limited not only by the same conditions as the human observer but also by the fact that human observers select the features for consideration. The measurement instruments may themselves be faulty, which was a crucial factor in the Three Mile Island accident."
In Perrow's parlance, "normal accidents" can be expected to increase over time in such systems. This is particularly true when not just the A.I. system itself but also the organizational ecosystem around it is complex and tightly coupled.
In the tech arena, the process of optimization itself exacerbates tight coupling. It creates strong dependencies and, therefore, ripple effects. Imagine an A.I. system tasked with allocating production resources in a supply chain. The system might have maximizing output as its only goal. This single focus would influence the whole system to couple itself more tightly.
The algorithm would resolve any tradeoff between flexibility and optimization in favor of optimization. For instance, it would not keep reserve stocks, because idle inventory would drag down the numbers it is built to maximize. In doing this, the system is coded to align with the company's strategy, but in such a tightly coupled way that it falters under stress, as many supply chains did at the start of the COVID-19 pandemic. At various times in recent history, this dynamic has led to shortages of protective equipment, semiconductor chips, diapers, and infant formula.
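A toy simulation can make that tradeoff visible. In this hedged sketch (all quantities are invented), a just-in-time policy that optimizes away reserve stock delivers full output until a single supplier outage drops it to zero, while a slightly less "optimal" policy holding one period of buffer rides through the shock:

```python
# Toy supply chain: just-in-time vs. buffered inventory. Numbers are invented.
DEMAND = 100.0

def simulate(safety_stock: float, supply_by_period: list[float]) -> list[float]:
    """Each period: receive supply, ship up to DEMAND, carry the rest over."""
    stock = safety_stock  # buffer held before the run starts
    shipped_by_period = []
    for supply in supply_by_period:
        stock += supply
        shipped = min(DEMAND, stock)
        stock -= shipped
        shipped_by_period.append(shipped)
    return shipped_by_period

# The planner orders exactly DEMAND each period; period 2 is a supplier outage.
supply = [DEMAND, DEMAND, 0.0, DEMAND]

print("no reserve:        ", simulate(0.0, supply))
# -> [100.0, 100.0, 0.0, 100.0]: output collapses during the outage
print("one-period reserve:", simulate(DEMAND, supply))
# -> [100.0, 100.0, 100.0, 100.0]: the buffer absorbs the shock
```

A pure output-maximizer would rate the first policy higher in normal times, because the reserve shows up only as carrying cost. That is exactly the kind of coupling Perrow's framework warns about.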
Another case of a tightly coupled A.I. system is Zillow's failed use of an automated decision-making algorithm to purchase homes. As an online real estate marketplace, Zillow was originally designed to help sellers and buyers make more informed decisions. In 2018, it opened a new division with a business model based on buying and flipping homes, using a machine learning algorithm called Zillow Offers. As home prices rose quickly during the COVID-19 pandemic, Zillow's iBuying algorithms used data such as a home's age, condition, and zip code to predict which homes would grow in value. But the system failed to take into account the radical uncertainty caused by the virus and badly underestimated how rapidly the housing market could change. Moreover, there was a backlash against Zillow when a real estate agent, Sean Gotcher, created a viral video decrying the company's perceived manipulation of the housing market. By November 2021, the firm had sold only 17,000 of the 27,000 homes it had purchased.
Decoupling Zillow's home-buying business from its online marketplace may have saved the company or at least part of its reputation. Ultimately, Zillow shut down its home-buying division, cut 25 percent of the company's work force—about 2,000 employees—and wrote off a loss of $304 million in housing inventory.
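The underlying failure mode is easy to reproduce in miniature. In this hedged sketch (invented numbers, not Zillow's actual model), a forecaster fit to a steadily rising market extrapolates confidently into a regime change and keeps "buying" at prices the market no longer supports:

```python
# A toy trend model meeting a regime change. All numbers are invented.
import statistics

# Two years of a monthly price index rising a steady 1 percent...
history = [100 * 1.01 ** m for m in range(24)]
# ...then the market abruptly turns down 3 percent the next month.
actual_next = history[-1] * 0.97

# Fit the average month-over-month growth and extrapolate one month ahead.
growth = statistics.mean(b / a for a, b in zip(history, history[1:]))
predicted_next = history[-1] * growth

print(f"predicted: {predicted_next:.1f}")  # about 127, still climbing
print(f"actual:    {actual_next:.1f}")     # about 122, the turn the model missed
```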
To John Sviokla, who holds a Harvard doctorate in management information systems, tight coupling is directly related to the opaque nature of algorithmic systems: the closed-box effect. "If I can't look inside the system and see the weights given to different factors," he says, "then it is de facto tightly coupled. From a semantic standpoint, I can't communicate with it. I can only manage it by trying to figure out how it works, based on the behaviors it produces. I am not given access to the assumptions going in, or how it works. I either have to reject it or use it—those are my only two choices."
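Sviokla's two choices can be illustrated in a few lines. When the only access to a system is its behavior, the best a manager can do is probe it: nudge one input at a time and watch how the output moves. In this hedged sketch, the closed box is an invented stand-in, not any real vendor model:

```python
# Probing a closed box: infer influence from behavior, not from internals.

def closed_box(price: float, zip_risk: float, age: float) -> float:
    """An opaque model we cannot inspect (invented stand-in for the example)."""
    return 250_000 + 0.8 * price - 40_000 * zip_risk - 500 * age

def probe(model, baseline: dict, feature: str, delta: float) -> float:
    """Estimate a feature's influence by nudging it and diffing the outputs."""
    nudged = {**baseline, feature: baseline[feature] + delta}
    return model(**nudged) - model(**baseline)

base = {"price": 300_000.0, "zip_risk": 0.3, "age": 25.0}
for name in base:
    print(f"{name}: output shifts {probe(closed_box, base, name, 1.0):+,.1f} per unit")
```

Even a careful probe recovers only local behavior; the assumptions baked into the box stay hidden, which is why the practical options reduce to rejecting the system or using it.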
Chan argues that the greatest risk lies in A.I. systems that are both tightly coupled and complex, operating within organizations that are also tightly coupled and complex. Accidents are especially likely when those conditions align. Since the exact conditions of failure cannot be predicted or prevented in detail, and since the organizational structure keeps such systems from being resilient, algorithmic, autonomous, and automated systems represent a continual challenge. Even when they are working well, it is impossible to make them entirely safe from a "normal accident."
If you want to make the system safer and less harmful, you have to loosen it up.
Loosening a System
Pixar Animation Studios, the creator of the films Toy Story and Finding Nemo, has a well-known ritual that takes advantage of the studio's loosely coupled nature. Whenever a film under development hits a rough spot, the director can convene the company's "brain trust" for an in-depth critique. After the session, the director and his team decide what to do with the advice. It takes a thick skin to have a work under review, but the result is immense, tangible improvement.
"There are no mandatory notes, and the brain trust has no authority," Pixar cofounder Ed Catmull explained in Harvard Business Review. "This dynamic is crucial. It liberates the trust members, so they can give their unvarnished expert opinions, and it liberates the director to seek help and fully consider the advice."
It took Pixar a while to understand why this system helped so much. "When we tried to export the brain trust model to our technical area, we found at first that it didn't work," Catmull wrote. "As soon as we said, 'This is purely peers giving feedback to each other,' the dynamic changed, and the effectiveness of the review sessions dramatically improved."
Note that Pixar's organizational design is deliberately loose. The brain trust's reactions are treated not as demands but as creative opportunities. These opportunities allow for simplicity on the other side of complexity.
Charles Perrow devoted much of Normal Accidents to a study of complex sociotechnical operations that had not ended in crisis or catastrophe. One option, he found, was to make decision making simple by focusing on just one or two activities: You centralize decision making around this relatively simple set of goals so that there is clear direction for channeling all the complexities involved. Another option was to put in place some basic organizational designs. A risk audit and oversight group may seem like yet another boring bureaucratic function, but if it is led by someone who understands loose coupling, it will be staffed by a diverse group of people who make sense of complex issues together.
And there is another alternative: to loosen the system. To bring decision making to the lowest possible level in the hierarchy, and to make sure every part of the organization can operate autonomously. To encourage people to communicate freely, so that no one small group is seen as the single source of knowledge about a key issue. To move decision making as close as possible to the point of action, and to bring people together regularly to learn from each other and avoid competing with other silos.
A.I. chatbots can tighten couplings in complex systems, intensifying the communications within them and automating the ways companies control behavior. That could lead to more disasters and missteps. But they could also loosen complex systems by providing alternative links and making it easier to seek out alternative views. Success often depends on finding someone with an outside sensibility who respects the inside priorities. Generative A.I. systems may make it easier to find those people and introduce them to one another.
There is much to learn about the interrelationship between machine behavior, human behavior, and the behavior of larger sociotechnical systems such as corporations and governments. In the end, it doesn't matter whether we think our A.I. systems are intelligent. What matters most is what they do and how they grow, and how we grow along with them.