When most people think about driverless car policies, they tend to focus on philosophical life-or-death scenarios. This isn't surprising: The question of how autonomous vehicles should behave when faced with the unfortunate decision between harming a passenger and harming a pedestrian is an eye-catching one indeed.
But many of the policy concerns surrounding robot cars are far more prosaic. How much data will connected cars collect, and how will that data be stored? Will it be secure? How secure? How will the software be maintained and improved? Will the code be viewable by outsiders? And just who should oversee the whole process?
At the end of the month, regulators at the Federal Trade Commission (FTC) and National Highway Traffic Safety Administration (NHTSA) will hold a policy workshop to discuss just these issues. On June 28, policy experts, academics, industry representatives, and the general public are invited to take part in a conversation on "the consumer privacy and security issues that automated and connected vehicles" may pose. Rather than dwelling on the typical questions of physical safety and accident liability involving autonomous cars, this workshop will focus mostly on the cybersecurity and data integrity of autonomous car software.
There is considerable disagreement in the policy space about how to proceed. At the core of the debate is the question of how seamlessly car companies can adapt to the new pressures of maintaining secure software.
Car manufacturers have had decades of experience testing their vehicles to minimize physical safety risks as much as possible. But as cars become more automated, these companies will have to act more like software firms. These days, GM and Ford and Chrysler need to be as comfortable with beta testing and encryption as they are with test tracks and crash dummies.
Critics believe that car companies are simply not up to snuff on security. They do not think that auto manufacturers will be able to provide secure and transparent autonomous car software without some kind of government mandate or pre-market approval process.
Computer security expert Bruce Schneier is a leading advocate of this position. In a recent article for the New York Times, Schneier argues that there is no market incentive for car manufacturers and other businesses involved in the "Internet of Things" (IoT) gold rush to care about good security. He thinks the government should mandate that companies integrate certain software features by default, and advocates that software developers be held liable for vulnerabilities in their code. (In fact, the FTC already investigates IoT firms for "unfair and deceptive practices" involving insecure products, but Schneier and others want to significantly expand the liability burden for software businesses.)
There are legislative efforts underway as well. Sen. Ed Markey (D-Mass.) has made connected car security something of a congressional bugaboo for the past few years. He first produced a sensationalist report on the topic in 2015, which was heavy on hysterics but notably light on real examples of hacking threats to connected cars. Markey then teamed up with Sen. Richard Blumenthal (D-Conn.) to introduce the "SPY Car Act" later that year. The bill would vaguely require companies to equip "all entry points to the electronic systems" of cars sold in the US with "reasonable measures to protect against hacking attacks," while requiring sellers to affix a "cyber dashboard" listing security features to each car. The bill failed, having generated expected opposition from auto manufacturers for its "cumbersome and static" approach, but Markey released another version of the bill this year.
Yet these kinds of arguments are little different from the typical anti-market rhetoric used to justify all kinds of backwards regulations. Critics seem to believe that businesses can consistently provide sub-par data and software security and stay in business forever. Furthermore, they believe that government has the knowledge and will to promulgate beneficial regulations without unduly stifling the growth of this promising industry.
Contrary to critics' claims, there is a very real profit incentive to provide good car security, as Anne Hobson of the R Street Institute pointed out in a recent policy brief. Companies that sell poor products will lose to competitors that provide superior goods—and the development of cyber-insurance products will help businesses navigate this new and important risk domain. In fact, automobile companies are very concerned about providing good security. The two largest industry trade groups, the Alliance of Automobile Manufacturers and the Association of Global Automakers, teamed up to form the Automotive Information Sharing and Analysis Center (Auto-ISAC), which disseminates best practices and shares information about new threats. The technology industry, too, has stepped in to promote car security with ventures like the Future of Automotive Security Technology Research (FASTR).
Additionally, the federal government is perhaps one of the worst bodies to be charged with improving the security of connected cars. As my research on federal security points out, the federal government consistently fails to meet its internal standards and suffers thousands of data security failures each year. Much of this is due to the static and backwards-looking nature of government security guidelines. Cybersecurity is a fast-paced, anticipatory process. Bureaucratic checklists simply don't cut the mustard, and often have the effect of improperly directing resources at the expense of preparing for unknown threats.
So how should the FTC and NHTSA proceed? My Mercatus Center colleague, Adam Thierer, has thought a lot about connected car security, and is scheduled to deliver remarks at the workshop. He will argue that policymakers should collaborate with existing industry self-regulatory efforts to help educate businesses and the public on best security practices, rather than rushing to regulate. An embrace of "permissionless innovation" will ensure that consumers can enjoy the full benefit of driverless car technologies without the threat of precautionary government regulation inadvertently killing an industry before it has the chance to develop. And if a car company does engage in fraudulent activities, for example by misrepresenting the security of the products it sells, the FTC already rightly has the authority to investigate that kind of misconduct.
Fortunately for the public, the FTC is in good hands. FTC Chairwoman Maureen Ohlhausen is a veritable champion of permissionless innovation in technology, and her opening remarks at the workshop will likely set the tone of the event in a decidedly laissez-faire frame. The NHTSA has also taken recent steps to encourage autonomous car security without prescriptive regulations, such as its publication of voluntary guidelines for autonomous cars in 2016 and its planned expansion of the New Car Assessment Program to cover autonomous features.
On the other hand, and somewhat ironically, the NHTSA recently proposed new rules that would require car companies to install specific vehicle-to-vehicle (V2V) communication technologies in new cars. My Mercatus Center colleague Brent Skorup criticized this rule for being "unprecedented and hasty," while civil liberties groups pointed out that these rules would create significant data privacy concerns for consumers. If the NHTSA wants to promote data privacy in connected car software, one good place to start might be to trash its own proposal!
When it comes to promoting good software security, the path forward is clear. Policies that empower bureaucrats over the engineers who actually build secure software products will promote neither security nor innovation. Instead of issuing mandates and threats to software developers and car companies, regulators should help promote the voluntary information-sharing and certification endeavors that are already underway. Only a principled embrace of permissionless innovation can provide the connected car benefits and security that all parties desire.