Comment Rebecca Henderson
Rebecca Henderson is the John and Natty McArthur University Professor at Harvard University, where she has a joint appointment at the Harvard Business School in the General Management and Strategy units, and a research associate of the National Bureau of Economic Research. For acknowledgments, sources of research support, and disclosure of the author’s material financial relationships, if any, please see http://www.nber.org/chapters/c14020.ack.

“Artificial Intelligence and the Modern Productivity Paradox” is a fabulous chapter. It is beautifully written, extremely interesting, and goes right to the heart of a centrally important question, namely, what effects will AI have on economic growth? The authors make two central claims. The first is that AI is a general purpose technology, or GPT, and as such is likely to have a dramatic impact on productivity and economic growth. The second is that the reason we do not yet see it in the productivity statistics is because—like all GPTs—this is a technology that will take time to diffuse across the economy.
More specifically, the authors argue that AI will take time to diffuse because its adoption will require mastering “adjustment costs, organizational changes, and new skills.” They suggest that just as we did not see IT in the productivity statistics until firms had made the organizational changes and hired the human capital necessary to master it, so the adoption of AI will require not only the diffusion of the technology itself but also the development of the organizational and human assets that will be required to exploit its full potential.
This is a fascinating idea. One of the reasons I like the chapter so much is that it takes seriously an idea that economists long resisted—namely, that things as nebulous as “culture” and “organizational capabilities” might be (a) very important, (b) expensive, and (c) hard to change. Twenty-five years ago, when I submitted a paper to the RAND Journal of Economics that suggested that incumbents were fundamentally disadvantaged compared to entrants because they were constrained by old ways of acting and perceiving, I got a letter from the editor that began “Dear Rebecca, you have written a paper suggesting that the moon is made of green cheese, and that economists have too little considered the motions of cheesy planetoids.”
I like to think that few editors would respond that way today. Thanks to a wave of new work in organizational economics and the pioneering empirical research of scholars like Nick Bloom, John van Reenen, Raffaella Sadun, and the authors themselves, we now have good reason to believe that managerial processes and organizational structures have very real effects on performance and that they take a significant time to change. One of the most exciting things about this chapter is that it takes these ideas sufficiently seriously to suggest that the current slowdown in productivity is largely a function of organizational inertia—that a central macroeconomic outcome is a function of a phenomenon that thirty years ago was barely on the radar. That’s exciting. Is it true? And if it is, what are its implications?
My guess is that the deployment of AI will indeed be gated by the need to change organizational structures and processes. But I think that the authors may be underestimating the implications of this dynamic in important ways. Take the case of accounting. A few months ago, I happened to meet the chief strategy officer for one of the world’s largest accounting firms. He told me that his firm is the largest hirer of college graduates in the world—which may or may not be true, but which he certainly believed—and that his firm was planning to reduce the number of college graduates they hire by 75 percent over the next four to five years—largely because it is increasingly clear that AI is going to be able to take over much of the auditing work currently performed by humans. This shift will certainly be mediated by every accounting firm’s ability to integrate AI into their procedures and to persuade their customers that it is worth paying for—examples of exactly the kinds of barriers that this chapter suggests are so important—but in principle it should dramatically increase the productivity of accounting services, exactly the effects that Erik and his coauthors are hoping for.
But I am worried about all the college graduates the accounting firms are not going to hire. More broadly, as AI begins to diffuse across the economy it seems likely that a lot of people will get pushed into new positions and a lot of people will be laid off. And just as changing organizational processes takes time, so it’s going to take time to remake the social context in ways that will make it possible to handle these dislocations. Without these kinds of investments—one can imagine they might be in education, in relocation assistance, and the like—there is a real risk of a public backlash against AI that could dramatically reduce its diffusion rate.
For example, the authors are excited about the benefits that the widespread diffusion of autonomous vehicles is likely to bring. Productivity seems likely to skyrocket, while with luck tens of thousands of people will no longer perish in car crashes every year. But “driving” is one of the largest occupations there is. What will happen when millions of people begin to be laid off? I’m with the authors in believing that the diffusion of AI could be an enormous source of innovation and growth. But I can see challenges in the transition at the societal level, as well as at the organizational level. And there will also be challenges if too large a share of the economic gains from the initial deployment of the technology goes to the owners of capital rather than to the rest of society.
Which is to say that I am a little more pessimistic than Erik and his coauthors as to the speed at which AI will diffuse—and this is even before I start talking about the issues that Scott, Iain, and I touch on in our own chapter, namely, that we are likely to have significant underinvestment in AI relative to the social optimum, coupled with a fair amount of dissipative racing.
2
The Technological Elements of Artificial Intelligence
Matt Taddy
2.1 Introduction
We have seen in the past decade a sharp increase in the extent that companies use data to optimize their businesses. Variously called the “Big Data” or “Data Science” revolution, this has been characterized by massive amounts of data, including unstructured and nontraditional data like text and images, and the use of fast and flexible machine learning (ML) algorithms in analysis. With recent improvements in deep neural networks (DNNs) and related methods, application of high-performance ML algorithms has become more automatic and robust to different data scenarios. That has led to the rapid rise of an artificial intelligence (AI) that works by combining many ML algorithms together—each targeting a straightforward prediction task—to solve complex problems.

In this chapter, we will define a framework for thinking about the ingredients of this new ML-driven AI. Having an understanding of the pieces that make up these systems and how they fit together is important for those who will be building businesses around this technology. Those studying the economics of AI can use these definitions to remove ambiguity from the conversation on AI’s projected productivity impacts and data requirements. Finally, this framework should help clarify the role for AI in the practice of modern business analytics¹ and economic measurement.
This article was written while Matt Taddy was professor of econometrics and statistics at the University of Chicago Booth School of Business and a principal researcher at Microsoft Research New England. He is currently at Amazon.com.
For acknowledgments, sources of research support, and disclosure of the author’s material financial relationships, if any, please see http://www.nber.org/chapters/c14021.ack.
1. This material has been adapted from a chapter in Business Data Science, forthcoming from McGraw-Hill.
2.2 What Is AI?
In figure 2.1, we show a breakdown of AI into three major and essential pieces. A full end-to-end AI solution—at Microsoft, we call this a System of Intelligence—is able to ingest human-level knowledge (e.g., via machine reading and computer vision) and use this information to automate and accelerate tasks that were previously only performed by humans. It is necessary here to have a well-defined task structure to engineer against, and in a business setting this structure is provided by business and economic domain expertise. You need a massive bank of data to get the system up and running, and a strategy to continue generating data so that the system can respond and learn. And finally, you need machine-learning routines that can detect patterns in and make predictions from the unstructured data. This section will work through each of these pillars, and in later sections we dive in detail into deep learning models, their optimization, and data generation.

Fig. 2.1  AI systems are self-training structures of ML predictors that automate and accelerate human tasks
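To keep the three pillars straight, the following minimal Python sketch represents them as the components of a single system object. It is purely illustrative and not drawn from the chapter: the class name, the subtask list, and the stand-in routines are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Purely illustrative sketch of the three pillars described above (not from the
# chapter): a domain structure (ordered prediction subtasks supplied by
# business/economic expertise), a data source that keeps generating
# observations, and one simple ML routine per subtask.

@dataclass
class AISystemSketch:
    domain_structure: List[str]
    data_source: Callable[[], Dict[str, float]]
    ml_routines: Dict[str, Callable[[Dict[str, float]], float]]

    def step(self) -> Dict[str, float]:
        """Pull one observation and let each ML routine make its prediction."""
        observation = self.data_source()
        return {task: self.ml_routines[task](observation)
                for task in self.domain_structure}

# Hypothetical usage with stand-in components.
system = AISystemSketch(
    domain_structure=["read_reviews", "predict_demand"],
    data_source=lambda: {"review_sentiment": 0.8, "price": 9.99},
    ml_routines={
        "read_reviews": lambda obs: obs["review_sentiment"],      # stand-in "machine reading"
        "predict_demand": lambda obs: 50.0 - 2.0 * obs["price"],  # stand-in demand predictor
    },
)
print(system.step())  # {'read_reviews': 0.8, 'predict_demand': 30.02}
```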
Notice that we are explicitly separating ML from AI here. This is important: these are different but often confused technologies. Machine learning can do fantastic things, but it is basically limited to predicting a future that looks mostly like the past. These are tools for pattern recognition. In contrast, an AI system is able to solve complex problems that have been previously reserved for humans. It does this by breaking these problems into a bunch of simple prediction tasks, each of which can be attacked by a “dumb” ML algorithm. Artificial intelligence uses instances of machine learning as components of the larger system. These ML instances need to be organized within a structure defined by domain knowledge, and they need to be fed data that helps them complete their allotted prediction tasks.

This is not to down-weight the importance of ML in AI. In contrast to earlier attempts at AI, the current instance of AI is ML driven. Machine-learning algorithms are implanted in every aspect of AI, and below we describe the evolution of ML toward status as a general purpose technology. This evolution is the main driver behind the current rise of AI. However, ML algorithms are building blocks of AI within a larger context.
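As a concrete and entirely hypothetical illustration of this composition, the sketch below uses two narrow scikit-learn classifiers as the “dumb” pattern recognizers and a hand-written rule, standing in for domain knowledge, that turns their predictions into a decision. The business task, labels, and thresholds are invented for the example; only the scikit-learn calls are standard.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical illustration only: two narrow ML "pattern recognizers" plus a
# hand-written rule (standing in for domain knowledge) that turns their
# predictions into a decision. The task split and thresholds are invented.

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                             # observed customer features
churn = (X[:, 0] + rng.normal(size=500) > 0).astype(int)  # synthetic labels
upsell = (X[:, 1] - rng.normal(size=500) > 0).astype(int)

churn_model = LogisticRegression().fit(X, churn)    # ML piece 1: churn risk
upsell_model = LogisticRegression().fit(X, upsell)  # ML piece 2: upsell response

def decide(x: np.ndarray) -> str:
    """Domain structure: act only when churn risk is low and upsell odds are high."""
    p_churn = churn_model.predict_proba(x.reshape(1, -1))[0, 1]
    p_upsell = upsell_model.predict_proba(x.reshape(1, -1))[0, 1]
    return "offer upgrade" if (p_churn < 0.3 and p_upsell > 0.7) else "do nothing"

print(decide(X[0]))
```

The point is the structure rather than the models: each ML piece only recognizes a pattern, and it is the surrounding rule, supplied by domain expertise, that makes the pieces useful together.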
To make these ideas concrete, consider an example AI system from the Microsoft-owned company Maluuba that was designed to play (and win!) the video game Ms. Pac-Man on Atari (van Seijen et al. 2017). The system is illustrated in figure 2.2. The player moves Ms. Pac-Man on this game “board,” gaining rewards for eating pellets while making sure to avoid get-