The introduction of steam engines boosted productivity growth by 0.3 percent a year from 1850 to 1910, according to a new report on the likely economic impacts of artificial intelligence. The study, produced by the McKinsey Global Institute (MGI), also suggests that introducing robots to manufacturing led to annual productivity increases of 0.4 percent between 1993 and 2007, and that the introduction of information technologies led to annual productivity increases of 0.6 percent during the 2000s.
The researchers acknowledge that "predicting the economic impact of AI or any disruptive technology is a highly speculative exercise." But they give it their best shot, and their estimates suggest that from now to 2030 artificial intelligence (AI) will have even more of an impact than steam did in the 19th century.
Their analysis encompasses five broad categories of artificial intelligence: computer vision, natural language, virtual assistants, robotic process automation, and advanced machine learning. These technologies' effects on employment, consumption, and production will, they argue, add about 1.2 percentage points to annual economic growth between now and 2030. If you assume the global economy, with a gross world product now of about $87 trillion, continues to grow at the 2017 rate of 3.1 percent annually for the next 12 years, world GDP would rise to about $125 trillion by 2030. But if the MGI researchers are right, AI will boost growth to an annual rate of 4.3 percent, resulting in a global GDP of $144 trillion by 2030.
To get an idea of how powerfully AI could affect economic growth, let's assume that the last quarter's 4.2 percent GDP growth rate in the U.S. was somehow sustained through 2030, and add MGI's projected 1.2-point AI boost for a combined annual rate of 5.4 percent. Today's $20.5 trillion economy, growing for 12 years at 5.4 percent, would nearly double to $38.5 trillion by 2030. Taking projected population growth into account, U.S. per capita GDP would rise from around $61,000 now to more than $107,000 in 2030.
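The projections above are straightforward compound-growth arithmetic. A short sketch, using only the figures already cited in the text, shows how each headline number follows from a starting GDP, a constant annual rate, and a 12-year horizon:

```python
def project(gdp_now, annual_rate_pct, years):
    """Future GDP (same units as gdp_now) under constant annual growth."""
    return gdp_now * (1 + annual_rate_pct / 100) ** years

# Global economy, 12 years out, in trillions of dollars.
baseline = project(87, 3.1, 12)   # trend growth without AI -> ~$125 trillion
with_ai  = project(87, 4.3, 12)   # trend plus MGI's AI boost -> ~$144 trillion

# U.S. economy: 4.2 percent growth plus the 1.2-point AI boost = 5.4 percent.
us_2030 = project(20.5, 5.4, 12)  # -> ~$38.5 trillion

print(round(baseline, 1), round(with_ai, 1), round(us_2030, 1))
```

Small differences from the rounded figures in the text come from rounding the inputs, not from the method.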
Some AI critics think these technologies are so disruptive that their development and deployment should be regulated on the basis of the precautionary principle. As tech scholar Adam Thierer of the Mercatus Center explains it, this is "the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions." Since all technologies have the potential for some kind of "harm," the principle is a regulatory recipe for capriciously shutting down any innovation that attracts the attention of interest groups who believe that they will be adversely affected.
The Information Technology and Innovation Foundation offers a better way to proceed. When it comes to AI, the foundation suggests, "governments should follow the 'innovation principle' rather than the 'precautionary principle' and address risks as they arise, or allow market forces to address them, and not hold back progress with restrictive tax and regulatory policies because of speculative fears."
Every new technology comes with dangers and downsides, but humanity has reaped far more benefits from general purpose technologies like steam, electricity, and infotech than it has suffered harms. Given the significant upsides of deploying AI, it is vital that would-be regulators adopt the innovation principle and allow inventors and the private sector to pursue the gains that these technologies will afford humanity.