Robo-Bureaucrats: Threat or Menace?
The promise and perils of cyber-bureaucracy
Never mind replacing factory and service industry workers: What if robots could replace bureaucrats? After all, nearly 22 million Americans are employed at all levels of government. Lots of them are involved in applying rules and making routine decisions. What if ever-smarter software could function as robo-administrative law judges, robo-comptrollers, robo-clerks, robo-magistrates, robo-deputy assistant secretaries of transportation or agriculture—in short, robo-bureaucrats?
Infotech is already becoming more adept at handling some tasks than human experts are. IBM's Watson cognitive computing system is reportedly better at diagnosing cancer than are physicians, and investors are increasingly trusting the algorithms behind robo-advisers like Betterment and Wealthfront to handle their retirement and other funds. Could robot administrators powered by computer algorithms and neural networks even-handedly apply rules and make objective decisions in allocating resources?
In a recent paper, "Cyberdelegation and the Administrative State," California Supreme Court Justice Mariano-Florentino Cuéllar considers the possibility.
Modern bureaucracies were devised as a fix for the problem of political patronage, also called the spoils system, in which victorious political parties rewarded their supporters with government jobs, contracts, regulatory approvals, and other favors. It didn't work. As public choice theory eventually showed, bureaucrats are far from dispassionate and objective arbiters of the law. Their decisions are distorted by incentives to maximize their budgets and extend the scope of their authority. And while not overtly beholden to politicians, bureaucrats often defer to the goals of the members of key legislative committees.
Bureaucrats, like all human beings, are subject to biases, cognitive failures, and just plain bad days. Yet public-sector workers are less accountable than their private-sector counterparts: an incompetent private-sector worker is six times more likely to be fired than an incompetent federal bureaucrat.
So would robo-bureaucrats do better?
Cuéllar suggests that advanced information technology will make better use of data, enhance transparency, and reduce inconsistency in bureaucratic justice. Nevertheless, he worries that reliance on robot administrators will have subtle unintended consequences that undermine public deliberation and trust in political processes.
Cuéllar initially invites us to "imagine a series of sleek black boxes—capable of sustaining a cogent conversation with an expert, and networked to an elaborate structure of machines, data, and coded instruction sets." He suggests that sophisticated information technologies might be able to duplicate what a human administrator could do at lower cost; that they might also screen out human biases and circumvent faulty mental shortcuts, such as the availability heuristic; and that they may eventually avoid unintended consequences and reduce unknown unknowns by taking into account vaster domains of information. Those sleek black boxes might, more reliably than human experts, evaluate evidence and issue consistent rulings on Social Security disability claims, EPA hearings on air pollution violations or pipeline spills, and FDA safety assessments of new pharmaceuticals.
Then again, they might not. As Cuéllar argues, it will be hard to instruct robot bureaucrats to properly assess and then maximize agency objectives. To illustrate how such digital expert systems can go awry, Cuéllar cites the example of the automated system that the Department of Veterans Affairs set up to speed the processing of disability benefits applications. Benefit determinations were made faster, but the system was unable to figure out when veterans were exaggerating their ailments. Thus the automated system consistently awarded higher payments than human raters had previously done.
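To make that failure mode concrete, here is a purely hypothetical sketch in Python of a rules-based rater that takes every self-reported severity at face value. The points table, field names, and cap are invented for illustration and do not reflect the VA's actual system; the point is simply that a rule with no plausibility check rewards exaggeration.

```python
# Purely hypothetical sketch: a rules-based disability rater that sums
# self-reported severity scores against a fixed points table. Nothing here
# reflects the VA's actual system; the table and field names are invented.

RATING_TABLE = {
    "hearing_loss": 10,  # points per reported severity level
    "back_pain": 20,
    "ptsd": 30,
}

def rate_claim(reported_severity: dict[str, int]) -> int:
    """Award points strictly from what the claimant reports.

    Because every self-report is taken at face value, inflating a severity
    level from 2 to 3 mechanically raises the award; there is no
    plausibility check of the kind a human rater might apply.
    """
    total = 0
    for ailment, severity in reported_severity.items():
        total += RATING_TABLE.get(ailment, 0) * severity
    return min(total, 100)  # cap at a 100-point rating

# An exaggerated claim scores higher than an honest one, no questions asked.
print(rate_claim({"back_pain": 2}))             # 40
print(rate_claim({"back_pain": 3, "ptsd": 1}))  # 90
```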
The Veterans Affairs robo-adjudication system highlights another problem: How do you police and maintain the line between automated decision support and fully automated decisions? Cuéllar observes that once a robo-system is set up, its human overseers become disinclined to overturn its determinations, even in the face of information suggesting its assessments may be flawed.
So it proved at Veterans Affairs. The Wall Street Journal reports that managers eager to streamline approvals pressured the remaining human benefits raters to accept the automated decisions. Raters have a strong incentive not to override the program's recommendations, because overrides are flagged and sent to supervisors, who must then deal with complaints from veterans. Consequently, raters overrode the robo-evaluations in less than 2 percent of the 1.4 million rating decisions they made in 2014. Eventually, officials and clients get used to how decisions are made. Ultimately, Cuéllar observes, "Organized interests tend to defend the resulting status quo, and organizations often develop internal political dynamics favoring continuity over change."
Cybersecurity also looms larger as agencies trust more decisions to cognitive supercomputers. The numerous recent data breaches at federal agencies—the Office of Personnel Management, the State Department, HealthCare.gov, the government contractors in charge of vetting personnel for security clearances—indicate that the feds' cybersecurity measures remain critically deficient. If hackers were to penetrate computer systems empowered to issue orders and impose fines, the results would be commensurately worse.
Further down the road, new cognitive technologies may enable robo-bureaucrats to handle greater amounts of information and ideally make better and more subtle decisions. (Think genetic algorithms that evolve over time as they search for optimal solutions, or self-modifying deep learning systems in neural networks.) Cuéllar suggests that as automated systems become more intuitive and their analytical capacities more sophisticated, the black boxes might steadily climb up the bureaucratic ladder, constraining and perhaps displacing supervisors, division heads, and even agency administrators. Cuéllar warns that increasing reliance on such adaptive systems could "complicate public deliberation about administrative decisions, because few if any observers will be entirely capable of understanding how a given decision was reached." Would politicians, officials, and citizens be content to abide by decisions, even if those decisions result in fair outcomes, if they are basically made by robot oracles?
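For a sense of what "evolving toward optimal solutions" means in practice, here is a minimal sketch of a genetic algorithm in Python. The bit-string target and fitness function are invented stand-ins for whatever objective an agency might encode; this illustrates the general technique named in the parenthetical, not any system Cuéllar describes.

```python
# Minimal sketch of a genetic algorithm: a population of candidate rules
# "evolves" toward higher fitness over generations. The target and fitness
# function are toy examples invented for illustration.
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # the "optimal" rule we hope to discover

def fitness(candidate):
    # Score a candidate by how many positions match the target rule.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Flip each bit with a small probability.
    return [1 - bit if random.random() < rate else bit for bit in candidate]

def evolve(generations=100, pop_size=30):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fittest half, then refill the population with mutated copies.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))  # converges toward TARGET over successive generations
```

The unsettling feature Cuéllar points to is visible even in this toy: the final rule is whatever survived the search process, and explaining exactly why it beat its rivals gets harder as the system grows more elaborate.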
Finally, there is a big difference between robo-advisers and robo-diagnosticians on the one hand and robo-bureaucrats on the other. If patients and investors don't like the decisions made by Betterment or Watson, they can choose to go elsewhere. There is only one Veterans Affairs Department, only one EPA, only one FDA. The decisions made by robot bureaucrats would have the full force of law behind them, including fines, fees, mandates, bans, and possibly jail. At what point does robo-bureaucracy become robo-tyranny?