Uncle Sam Wants Your Fitbit
The fight for Internet freedom gets physical.
We are at the dawn of the Internet of Things—a world full of smart devices equipped with sensors, all hooked up to a digital universe that will become as omnipresent as the air we breathe. Imagine every appliance in your home, every machine in your office, and every device in your car constantly communicating with a network and offering you a fully customizable, personalized experience. Besides neato gadgets and productivity gains, this hyper-connected future will also mean a new wave of policy wars, as politicians panic over privacy, security, intellectual property, occupational disruptions, technical standards, and more.
Behind these battles will be a grander clash of visions over the future course of technology. The initial boom of digital entrepreneurship was powered by largely unfettered experiments with new technologies and business models. Will we preserve and extend this ethos going forward? Or will technological reactionaries pre-emptively eliminate every hypothetical risk posed by the next generation of Internet-enabled things, perhaps regulating them out of existence before they even come to be?
The first generation of Internet policy punditry was dominated by voices declaring that the world of bits was, or at least should be, a unique space with a different set of rules than the world of atoms. Digital visionary John Perry Barlow set the tone with his famous 1996 essay, "A Declaration of the Independence of Cyberspace," which argued not just that governments should leave the Internet unregulated but that Internet regulation was not really feasible in the first place.
Barlow's vision thus embodied both Internet exceptionalism and technological determinism. Internet exceptionalism is the notion that the Net is a special medium that shouldn't be treated like earlier media and communications platforms, such as broadcasting or telephony. Technological determinism is the belief that technology drives history, and (in the extreme version) that it almost has an unstoppable will of its own.
First-generation exceptionalists and determinists included Nicholas Negroponte, the former director of the MIT Media Lab, and George Gilder, a technology journalist and historian. "Like a force of nature, the digital age cannot be denied or stopped," Negroponte insisted in his 1995 polemic, Being Digital. But Barlow's declaration represented the high-water mark of the early exceptionalist era. "Governments of the Industrial World," he declared, "are not welcome among us [and] have no sovereignty where we gather." The "global social space we are building," he added, is "naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear."
It turned out we had reasons to fear after all. If the first era of Internet policy signified A New Hope, the second generation—beginning about the time the dot-com bubble burst in 2000—could be called The Empire Strikes Back. From taxes to surveillance to network regulation, governments gradually learned that by applying enough pressure in just the right places, citizens and organizations will submit.
A second generation of Internet scholars cheered on these developments. The scholar-activists at Harvard's Berkman Center for Internet and Society, such as Lawrence Lessig, Jonathan Zittrain, and Tim Wu, joined with a growing assortment of policy activists with tangential pet peeves they wanted governments to address. Together they revolted against the earlier ethos and called for stronger powers for governments to direct social and commercial activities online.
In the new narrative, the real threat to our freedom was not public law but private code. "Left to itself," Lessig famously predicted, "cyberspace will become a perfect tool of control." Thus, government controls were called for. Later, Wu would advocate a forcible disintegration of the information economy via a "separations principle" that would segregate information providers into three buckets—creators, distributors, and hardware makers—and force them to stay put. All in the name of keeping us safe from "information monopolies."
Spurred on by this crowd, governments across the globe are clamoring for even greater control over people in cyberspace. But the second generation's narrative has proved overly simplistic in two ways.
First, the exceptionalists and techno-determinists were partially right—the Internet, while not unregulatable per se, has proven more resistant to government control than analog-era communications systems. The combination of highly decentralized networks, global scale, empowered end users, and the unprecedented volume of information they generate has created formidable enforcement challenges for would-be censors and economic regulators.
With each passing year, the gap between "Internet time" and "government time" is widening. As the technology analyst Larry Downes argued in his 2009 book The Laws of Disruption, information-age "technology changes exponentially, but social, economic, and legal systems change incrementally." His examples ranged from copyright law, where bottling up published works is growing harder, to online privacy, where personal information is flowing faster than the ability of the law to control it.
This leads to the second way in which the Empire Strikes Back narrative falls short. As the Internet changes the way people connect with one another, governments have had to change the way they try to impose their wills on the rest of us. The old command-and-control models just don't work on highly distributed and decentralized networks.
Consider regulation of speech. Outright censorship has proven extremely difficult to enforce, and not just in the United States, where we have a First Amendment keeping the police at bay. Although some atavistic regimes still try to clamp down on content and communications, most attempt to shape behavior by encouraging firms and organizations to adopt recommended codes of conduct for online speech, often in the name of protecting children.
A similar phenomenon is at play for data privacy and cybersecurity policy. While some comprehensive regulatory frameworks have been floated, the conversations are shifting toward alternative methods of encouraging compliance. Many governments are choosing the softer road of encouraging codes of conduct and "best practices."
Economic regulations have evolved, too. Price and entry controls are almost never suggested as a first-order solution to concerns over market concentration. Instead of hard-nosed, top-down diktats, governments are increasingly using "nudges," convening "multistakeholder" meetings and workshops, and deploying what Tim Wu calls "agency threats." The Obama administration's Commerce Department and Federal Trade Commission (FTC) have already used this approach in their attempts to influence "big data" collection, biometrics, online advertising, mobile app development, and other emerging sectors and technologies.
Think of it as a "soft power" approach to tech policy: Policy makers dangle a regulatory Sword of Damocles over the heads of Internet innovators and subtly threaten them with vague penalties—or at least a lot of bad press—if they don't fall into line. The sword doesn't always have to fall to be effective; the fact that it's hanging there is enough to intimidate many firms into doing what regulators want. It's similar to the approach the Food and Drug Administration has employed for decades with many food and medical-device manufacturers: constantly harping on them about how to better develop their products, often without ever implementing formal regulations clarifying exactly how to do so.
That's how policy makers are already approaching the Internet of Things, too.
Why Matter Matters
It may feel like the Internet is already a ubiquitous backdrop of our existence, but "getting online" still requires a conscious effort to sit in front of a computer or grab a smartphone and then take steps to connect with specific sites and services. The Net does not have a completely seamless, visceral presence in our everyday lives. Yet.
The Internet of Things can change that, ushering in an era of ambient computing, always-on connectivity, and fully customizable, personalized services. Wearable health and fitness devices like Fitbit and Jawbone are already popular, foreshadowing a future in which these devices become "lifestyle remotes" that help consumers control or automate many other systems around them—in their homes, offices, cars, and so on.
Nest, recently acquired by Google, is already giving homeowners the ability to better manage their homes' energy use and to do so remotely. It signals the arrival of easy-to-program home automation technologies that will, in short order, allow us to personalize nearly every appliance in our home.
Meanwhile, our cars are quickly becoming rolling computers, loaded with chips and sensors that automate more tasks and make us safer in the process. Soon, automobiles will be communicating not only with us but with everything else around them. While fully driverless cars may still be a few decades away, semi-autonomous technologies that are already here are gradually making it easier for our cars to drive us instead of us driving them.
Think of this new world as the equivalent of Iron Man Tony Stark's invisible butler JARVIS; we'll be able to interface with our devices and the entire world around us in an almost effortless fashion. Apple's Siri and similar digital personal assistants are already on the market but are quite crude. The near future will bring us Siri's far more advanced descendants, ambient technologies that are invisible yet omnipresent in our lives, waiting for us to bark out orders and then taking immediate, complex actions based on our demands.
After that we may quickly enter the realm of cyberpunk. There are already plans for "digital skin" and "electronic tattoos" that affix ultrathin wearables directly to the body. Many firms have already debuted "epidermal electronics" that, beyond the obvious health monitoring benefits, will allow users to interface with other devices—money scanners might be one obvious application—to allow frictionless transactions. Monitoring and communication technologies could also be swallowed or implanted within the body, allowing users to develop a more robust and less invasive record of their health at all times.
These innovations are poised to fuel an amazing transformation in the industrial world too, leading to a world of machine-to-machine communications that can sense, optimize, and repair instantaneously, producing greater efficiency. Consulting firms such as McKinsey and IDC have predicted that this transformation will yield trillions of dollars' worth of benefits by expanding economic opportunities and opening up new commercial sectors.
When the Net is baked into everything we touch, policy anxieties will multiply rapidly as well. Security and privacy concerns already dominate policy discussions about the Internet of Things. Critics fear a future in which marketers or the government scrape up the data our connected devices will collect about us. But even more profound existential questions are being raised by legal theorists, ethical philosophers, and technology critics, who often conjure up dystopian scenarios of intelligent machines taking over our lives and economy.
Which Vision Shall Govern?
This is where the question of permissionless innovation comes into play. Will Internet of Things–era innovators be at liberty to experiment and to offer new inventions without prior approval? Or will a more precautionary approach prevail, one where creators will have to get the blessing of bureaucrats before launching new products and services?
The FTC has already issued reports proposing codes of conduct to manage the growing deluge of data. The goal is to encourage coders to bake in "privacy by design" and "security by design" at every step of product development. In particular, FTC officials want developers to provide users with adequate notice regarding data collection practices, while also minimizing data collection in the aggregate.
Many of those practices are quite sensible as general guidelines, especially those related to promoting the use of encryption and anonymization to better secure stored data. But the FTC wants developers always to adopt such privacy and data security practices, and it wants to be able to hit them with fines and other penalties (using the agency's "unfair and deceptive practices" authority) if they fail to live up to those promises. If the intimidation game gets too aggressive and developers reorient their focus to pleasing Washington instead of their customers, it could have a chilling effect on many new forms of data-driven, Internet-enabled innovation.
The FTC has already gone after dozens of digital operators in this way, including such Internet giants as Google. In consent decrees, the commission extracted a wide variety of changes to those companies' privacy and data collection practices while also demanding that they undergo privacy audits for a remarkable two decades. That'll provide regulators with a hook for nudging corporate data decisions for many years to come.
While the FTC looks to incorporate the Internet of Things within this expanded process, some precautionary-minded academics are pushing for even more aggressive interventions. Many critics of private-sector data collection would like to formalize the FTC's privacy and security auditing process. Decrying a supposed lack of transparency regarding the algorithms that power various digital devices and services, they propose that companies create internal review boards or hire "data ethicists" (like themselves) to judge the wisdom of each new data-driven innovation before product launch.
More far-reaching would be the "algorithmic auditing" proposed by tech critic Evgeny Morozov and others. Advocates seek a legal mechanism to ensure that the algorithms that power search engines or other large-scale digital databases are "fair" or "accountable," without really explaining how to set that standard. There's also a movement afoot for some sort of "right of reply" to protect our online reputations by forcing digital platforms to give us the chance to respond to websites or comments we don't like. The European Union is already going down this path with the so-called Right to be Forgotten law, which mandates that search results for individuals' names be scrubbed upon request.
Fortunately, we are protected from such mandates in the U.S. by the First Amendment. The right to code is the right to speak. Technocrats will have to be cleverer to impose their controls stateside. Realizing that those roadblocks lie ahead, some activists are already trying to shift the discussion by claiming it's about "civil rights" and the supposed disparate impact that will occur if algorithmic decisions are left to the marketplace. Danielle Keats Citron, a law professor at the University of Maryland, calls for "technological due process" that would subject private companies to the sort of legal scrutiny usually reserved for government actors.
Meanwhile, new bureaucracies are being floated to enforce it all. Apparently the alphabet soup of technocratic agencies already trying to expand their jurisdictions to cover emerging technologies—FCC, FTC, FDA, FAA, NHTSA, etc.—isn't doing enough for the critics. For example, Frank Pasquale, also of Maryland's law school, favors not only a right of reply but also a Federal Search Commission to oversee "search neutrality" (think of it as net neutrality for search engines and social networking sites), as well as "fair automation practices" that would regulate what he regards as the "black box" of large private databases. And Ryan Calo of the University of Washington School of Law fears "digital market manipulation" that might "exploit the cognitive limitations of consumers." He also proposes a Federal Robotics Commission "to deal with the novel experiences and harms robotics enables."
Better Safe Than Sorry?
Anticipatory regulatory threats such as these will proliferate in tandem with the expanding penetration of ambient, networked technologies. The logic that animates such thinking has always been seductive among the wet-blanket set: Isn't it better to be safe than sorry? Why not head off hypothetical problems in privacy and security?
There is no doubt that slowing Internet of Things development could prevent future data spills or privacy losses, just as there is no doubt that strangling Henry Ford's vision in the regulatory crib would have prevented numerous car crashes (while also denying us all the advantages cars have brought to our lives). If we spend all our time worrying over worst-case scenarios, the best-case scenarios will never come about. Nothing ventured, nothing gained.
The trans-Atlantic contrast between the U.S. and Europe on digital innovation over the past 15 years offers real-world evidence of why this conflict of visions matters. America's tech sector came to be the envy of the world, and many U.S.-based firms are household names across Europe. (Indeed, European regulators are constantly trying to take the likes of Google, Amazon, and Facebook down a peg.) Meanwhile, it is difficult to name more than a few major Internet innovators from Europe. America's more flexible, light-touch regulatory regime left more room for competition and innovation compared to Europe's top-down regime of data directives and bureaucratic restrictions.
Instead of precaution, a little patience is the better prescription. Long before the Internet of Things came along, many predecessor technologies—telephones, broadcast networks, cameras, and the Net itself—were initially viewed with suspicion and anxiety. Yet we quickly adapted to them and made them part of our daily routines.
Human beings are not completely subservient to their tools or helpless in the face of technological change. Citizens have found creative ways to adjust to technological transformations by employing a variety of coping mechanisms, new norms, or other creative fixes. Historically, the births of new, highly disruptive networking technologies—think of social networking sites just a decade ago—have been met by momentary techno-panics, only to see citizens quickly adapting to them and then clamoring for more and more of the stuff. The same will be true as we adjust to the Internet of Things.
If we hope to usher in what Michael Mandel, chief economic strategist at the Progressive Policy Institute, calls "the next stage of the Internet Revolution," we'll need to guarantee that innovators will remain free to experiment with new and better ways of doing things. That's the Internet freedom we should be fighting for.