Regulatory Robophobia

We don't need a federal commission to govern things that go beep in the night.

The future is here. Driverless vehicles, drones, machine learning, and other emerging technologies offer programmable assistants able to handle mundane tasks and critical life-saving interventions alike. But not everyone is pleased. The digital Arcadia that awaits us is being fettered by the rise of the robophobes.

Robophobia exists on a continuum. At the extreme end are reactionaries who indiscriminately look to stifle all that goes beep in the night. They call for swift and pre-emptive regulations to address any imagined safety or privacy concerns, however unlikely. To the extent that they can enact their ideas, their mind-set is guaranteed to slow the pace of innovation, resulting in countless lost opportunities for economic and social progress—and, yes, even consumer safety and privacy. You'd almost suspect that this is their unstated goal.

Other cases of robophobia are milder, manifesting, for instance, in proposals for new government agencies. In a white paper published by the Brookings Institution last September, Ryan Calo, an assistant professor at the University of Washington School of Law, calls for a Federal Robotics Commission (FRC). Older agencies, he argues, don't have the expertise to "deal with the novel experiences and harms robotics enables." Furthermore, there are "distinct but related challenges that would benefit from being examined and treated together." Robots, he says, "may require investment and coordination to thrive."

Calo is not hiding a desire to stifle new technologies behind his policy proposals. He rightfully criticizes the Federal Aviation Administration (FAA) for its ham-handed drone policies, calling them "arbitrary and non-transparent." But neither is Calo a proponent of permissionless innovation, the term my technology policy colleague at the Mercatus Center, Adam Thierer, coined for the unfettered freedom to experiment with new technologies and business models. Calo wants to regulate drones; he just thinks the FAA is doing it the wrong way. In his mind, an FRC would have the narrow focus and specialized expertise needed to protect us effectively.

Really, Calo is too kind to the FAA. He doesn't mention most of the questionable drone regulations the agency has proposed. The FAA has practically stopped innovation in its flight path by proposing to ban all but a handful of private-sector drones while the agency completes rules to govern the rest. Another doozy was its proposal to require drone pilots to obtain the same license as old-school airplane pilots, even though drone pilots never need to set foot in an aircraft to do their jobs. The FAA's actions are badly hindering this exciting new technology, and for not-altogether-altruistic reasons. A January 15 story in The Wall Street Journal quotes Jim Williams, the head of the FAA's unmanned-aircraft office, bragging about his agency going to bat for the aerial surveyors, photographers, and moviemaking pilots who frequently lobby him to put the kibosh on commercial drone activity. "They'll let us know that, 'Hey, I'm losing all my business to these guys. They're not approved. Go investigate,'" he explains. "We will investigate those."

Would a robot commission be any better? History suggests it would not. This is not the first time a scribbler has proposed a new agency to oversee an emerging technology. Robophobia is only the most recent incarnation of a timeless reaction to scientific developments: the desire to control them.

Calo cites the Federal Railroad Administration (FRA) as a successful response to the scary new phenomenon of travel by train. But actually, the Interstate Commerce Commission was created first, in 1887; it was promptly captured by railroad companies and began promulgating anti-consumer regulations on their behalf. The FRA was established far later, through the same 1966 legislation that brought us the Department of Transportation (DOT). It is strange but telling that Calo offers the FRA and DOT as prototypes for a future Federal Robotics Commission, since both bodies suffer from rather extreme amounts of regulatory zealotry, waste, fraud, and abuse.

But there is a more fundamental reason to object to an FRC. Calo himself claims to favor something more akin to a supervisory body than a formal regulatory agency, yet he leaves the door wide open for agency power grabs and ever-expanding regulation. Bureaucrats almost always act to maximize their spheres of influence. Why wouldn't this be the case for an agency tasked with overseeing a lucrative new technology like robotics? On the flip side, what makes us think the robotics industry itself would refrain from doing what so many other industries have done before it: working to influence FRC regulations for its own ends?

Regulatory capture is real. Consider the Federal Communications Commission (FCC) and its war on cable television. A recent paper by Thierer and another technology policy scholar at the Mercatus Center, Brent Skorup, is a must-read for anyone interested in how robotics might fare in Calo's world. Titled "A History of Cronyism and Capture in the Information Technology Sector," the paper documents the many ways the FCC has mostly served the private interests it was supposed to regulate rather than the "public interest" promoted by the likes of Calo.

When cable TV came about in the 1960s, the agency moved quickly to quash it—a naked effort to protect entrenched television broadcasters. Regulatory creep became a serious problem as the commission expanded its authority into almost every new telecommunications and media service that emerged. Predictably, the "independent" FCC eventually succumbed to the very problems that Calo's FRC ostensibly aims to rectify, such as being slow and arbitrary and constantly encroaching on areas it isn't equipped to regulate.

Calo is correct that our existing collection of regulatory agencies is ill-qualified to handle robotics policy. But adding another group of eggheads to the mix is doubling down on the problem rather than offering a solution. At a minimum, as Thierer writes, "when proposing new agencies, you need to get serious about what sort of institutional constraints you might consider putting in place to make sure that history does not repeat itself."

Innovation doesn't flourish at the hands of bureaucrats—even knowledgeable, benevolent, non-robophobic ones. It's simply impossible to anticipate what will happen when engineers, developers, and consumers take new technologies and begin to apply them in novel ways. Department of Defense engineers and early users of the agency's internal ARPANET system never dreamed that the simple packet switching network used in a handful of university research laboratories would one day be credited as the precursor to the Internet. In fact, ARPANET's administrators actually banned many of the core functions that you and I enjoy today, such as online commerce. Thierer, in his 2014 book Permissionless Innovation, quoted from the 1982 handbook at MIT's artificial intelligence lab, which stated: "It is considered illegal to use the ARPANet for anything which is not in direct support of Government business…Sending electronic mail over the ARPANet for commercial profit or political purposes is both anti-social and illegal. By sending such messages, you can offend many people, and it is possible to get MIT in serious trouble with the Government agencies which manage the ARPANet."

The modern Internet does not owe its success to a brilliant policy wonk, a series of white papers, or a federal agency tasked with developing a new technology and protecting people from any conceivable harm that might arise from it. The opposite is true: It's because the Clinton administration decided to break with tradition by rejecting top-down, command-and-control regulations that the Internet as we know it was born—a product of human action, not merely of human design.

Things could easily have been different. If the overly cautious had gotten their way, the commercial possibilities of the Internet might well have been squelched before we ever knew what we were missing. The same would be true under a Federal Robotics Commission. Progress requires us to reject robophobia and feel the digital love.

  1. *Really* late night links!

    😉

  2. permissionless innovation, a term for totally unfettered freedom to experiment with new technology and business models coined by my technology policy colleague at the Mercatus Center Adam Thierer, either

    Ugh.

    Regulatory capture is real.

    The solution of course is to nationalize industry.

    1. Nah. Progressives have no problem with big corporations, as long as it’s the right corporations. (Or left).

      They would be perfectly happy if Google or Apple would run the government. And to a large extent, Silicon Valley is, we’ve crossed over into an Oligarchy a while ago.

  3. I was a big Asimov fan when I was a kid, and one of the recurring themes in his stories was robotophobia.

    I think it was in Caves of Steel where there was a riot over a robot shoe store clerk. That always struck me as silly, but now it doesn’t.

    The internet created jobs, but it also destroyed jobs. Was it a net improvement? Most of the wealth created by the internet seems to be in Silicon Valley and they seem to love big government. Instead of all these local stores and businesses, now you only have a few gigantic ones, who if they hire local employees, pay them very little and have them work in poor conditions, or outsource everything overseas.

    Robots will likely be worse. It’s going to replace pretty much every low skilled job. So what are low skilled people going to do for a living, exactly? End up on welfare.

    Now many liberals view this as a good thing, post scarcity economy they call it. But the reality is, someone will still be doing hard work in those economies – engineers, people who actually create resources in the first place (they won’t magically appear) – farmers, miners, oil workers, construction workers. But you’ll have eliminated all the easy jobs and I think the end result will be a huge mess.

    1. I believe this piece by Don Boudreaux addresses your concerns.

      http://cafehayek.com/2015/03/r…..copia.html

    2. We need a Butlerian Jihad.

      1. No we do not!

        There is a spectrum between “Kill all Humans” and “Kill all Robots”. What we need is to aim for the happy medium, not end up an an LSD Fuelled dystopia run by genetic freaks.

        1. not end up an an LSD Fuelled dystopia run by genetic freaks

          Let’s not be hasty, now.

        2. There is a spectrum between “Kill all Humans” and “Kill all Robots”. What we need is to aim for the happy medium…

          Yeah, we should kill only some humans and some robots.

          1. Aren’t we already there?

            1. Mostly its ‘kill some humans *with* some robots’.

          2. “Yeah, we should kill only some humans and some robots.”

            First they came for the Millennial Robots….

    3. Most of the wealth created by the internet seems to be in Silicon Valley . . .

      You, simply have no idea what you’re talking about.

      Entrepreneurs (and their investors) typically capture about 3% of the value they create.

      It means that for every million dollars one guy got, the remaining 7 billion of us split 33 million in increased wealth.

  4. “They call for swift and pre-emptive regulations to address any imagined safety or privacy concerns, however unlikely.”

    So, for their safety, robophobes want regulations written by government — the same organization that actually killed hundreds of millions of people in the 20th century.

    But regarding privacy, I can see why government would be the go-to guys: we don’t want Amazon to know what kinds of books we read so that they can make “recommendations.” And government has a spotless record when it comes to respecting our privacy.

  5. Even though the failure of the experiment of govt happened long ago, the violent coercive monopoly will ensure it protects itself, and the special interests that surround it. A phone capable of diagnosing basic illnesses. Noooo! The DoCtERzzz! will loose money.

    Beaming power to homes!!!!! No way!!! We confiscated Tesla’s works, what makes you think we’re gonna allow remote generating stations to offer competitive rates, portfolios with either fossil, green, or a combination of generation that they can beam to homes? Compete with our donors? Hell no!!!

  6. This would probably be the major argument for the “robots will destroy jobs” people. Not because robots will actually destroy jobs, but because robots will destroy old business models, and then government will strangle new business models before they even get a chance to take off. This will lead to unemployment, and obviously, robots will be the scapegoats, not the government.

    1. Scapegoatron2000 can take blame five hundred times more effectively than a human, and a million times more than a goat.

1. If Scapegoatatron2000 will save me from my wife when I spend the evening playing computer games rather than doing the dishes, I’ll buy it. I’ll pay top dollar for Wife Distraction Mod.

        1. *I’ll buy the goat separately.

        2. “I’ll buy it. I’ll pay top dollar for Wife Distraction Mod.”

Be certain that it has the “Right Thing to Say” feature, which uses an advanced algorithm to select the best responses (or even conversation-starters) particular to your wife. The Scapegoatatron2000 will whisper these to you in code during the Wife Distraction sequence so when she returns her attention to you you’ll be set to say the right thing at the right time.

          You must remain cognizant, Malkavian, that the robot also uses a (much simpler) algorithm to slowly modify your behavior such that over time you will be a more suitable husband.

          Win/Win, right?

          1. “robot also uses a (much simpler) algorithm to slowly modify your behavior such that over time you will be a more suitable husband.”

            That’s what the goat is for. He will take over the husbandly duties, except the fun ones.

  7. So it wasn’t Al Gore who invented the Internet, but rather Bill Clinton?

  8. Calo … calls for a Federal Robotics Commission (FRC). Older agencies, he argues, don’t have the expertise to “deal with the novel experiences and harms robotics enables.”

    Obviously the solution is for Obama to appoint a robot as Robotics Czar.

    1. He’d appoint Bender when Robby was available.

      1. “Danger! Danger!”

        1. +1 Jupiter II

  9. They call for swift and pre-emptive regulations to address any imagined safety or privacy concerns, however unlikely. Other cases of robophobia are milder, manifesting, for instance, in proposals for new government agencies.

    Milder, in not being swift and preemptive, but otherwise distinguishable in the long term.

    1. INdistinguishable.

  10. WTF, Reason?

    1. Seriously. AM Links just disappeared.
