and the potential for price discrimination. He emphasizes the difference
between learning by doing and data network effects: “There is a concept that
is circulating among lawyers and regulators called ‘data network effects.’ The
model is that a firm with more customers can collect more data and use this
data to improve its product. This is often true—the prospect of improving
operations is what makes ML attractive—but it is hardly novel. And it is
certainly not a network effect! This is essentially a supply-side effect known
as ‘learning by doing.’ . . . A company can have huge amounts of data, but
if it does nothing with the data, it produces no value. In my experience, the
problem is not lack of resources, but is lack of skills. A company that has
data but no one to analyze it is in a poor position to take advantage of that
data.” He concludes by highlighting policy questions related to algorithmic
collusion (which was discussed at the conference as “economist catnip,”
interesting and fun but unlikely to be of first-order importance), security,
privacy, and transparency.
Chevalier’s comment builds on Varian’s emphasis on the importance of
data, exploring the potential of antitrust policy aimed at companies that
use machine learning. Legal scholars and policymakers have asked whether
antitrust essential facilities doctrine should be applied to data ownership.
She emphasizes the trade-off between static and dynamic considerations for
such a policy: “In evaluating antitrust policies in innovative industries, it is
important to recognize that consumer benefits from new technologies arise
not just from obtaining goods and services at competitive prices, but also
from the flow of new and improved products and services that arise from
innovation. Thus, antitrust policy should be evaluated not just in terms of
its effect on prices and outputs, but also on its effect on the speed of inno-
vation. Indeed, in the high technology industries, it seems likely that these
dynamic efficiency considerations dwarf the static efficiency considerations.”
She also explores several practical challenges.
Another regulatory issue that arises from the importance of data is pri-
vacy. Tucker (chapter 17) notes that machine learning uses data to make
predictions about what individuals may desire, be influenced by, or do. She
emphasizes that privacy is challenging for three reasons: cheap storage
means that data may persist longer than the person who generated the data
intended, nonrivalry means that data may be repurposed for uses other than
originally intended, and externalities arise because data created by one indi-
vidual may contain information about others: “For example, in the case of
genetics, the decision to create genetic data has immediate consequences for
family members, since one individual’s genetic data is significantly similar to
the genetic data of their family members. . . . There may also be spillovers
across a person’s decision to keep some information secret, if such secrecy
predicts other aspects of that individual’s behavior that AI might be able
to project from.” She discusses potential negative impacts of these three
challenges, concluding with some key open questions.
Jin (chapter 18) also focuses on the importance of data as an input into
machine learning. She emphasizes that reduced privacy creates security
challenges, such as identity theft, ransomware, and misleading algorithms
(such as Russian-sponsored posts in the 2016 US election): “In my opinion,
the leading concern is that firms are not fully accountable for the risk they
bring to consumer privacy and data security. To restore full accountability,
one needs to overcome three obstacles, namely (a) the difficulty to observe
firms’ actual action in data collection, data storage, and data use; (b) the
difficulty to quantify the consequence of data practice, especially before low-
probability adverse events realize themselves; and (c) the difficulty to draw a
causal link between a firm’s data practice and its consequence.” Combined,
Tucker and Jin’s chapters emphasize that any discussion of growth and
impact of AI requires an understanding of the privacy framework. Access
to data drives innovation, underlies the potential for economic growth, and
frames the antitrust debate.
The economics of data also create challenges with respect to the rules
governing international trade. Goldfarb and Trefler (chapter 19) argue that
economies of scale in data through feedback loops, along with economies
of scope and knowledge externalities in AI innovation, could create the
opportunity for country-level rents and strategic trade policy. At the same
time, they emphasize that the geographic constraints on data and knowledge
would have to be high for such a policy to be optimal at the country level.
They highlight the rise of China: “China has become the focal point for
much of the international discussion. The US narrative has it that Chinese
protection has reduced the ability of dynamic US firms such as Google and
Amazon to penetrate Chinese markets. This protection has allowed China
to develop significant commercial AI capabilities, as evidenced by compa-
nies such as Baidu (a search engine like Google), Alibaba (an e-commerce
web portal like Amazon), and Tencent (the developer of WeChat, which
can be seen as combining the functions of Skype, Facebook, and Apple
Pay) . . . we collected time-series data on the institutional affiliation of all
authors of papers presented at a major AI research conference . . . we com-
pare the 2012 and 2017 conferences. . . . While these countries all increased
their absolute number of participants, in relative terms they all lost ground
to China, which leapt from 10 percent in 2012 to 23 percent in 2017.” The
authors discuss the international dimensions of domestic regulation related
to privacy, access to government data, and industrial standards.
The final regulatory issue highlighted in this section is tort liability.
Galasso and Luo (chapter 20) review prior literature on the relationship
between liability and innovation. They emphasize the importance of getting
the balance right between consumer protection and innovation incentives:
“A central question in designing a liability system for AI technologies is
how liability risk should be allocated between producers and consumers,
and how this allocation might affect innovation. . . . A key promise of AI
technologies is to achieve autonomy. With less room for consumers to take
precautions, the relative liability burden is likely to shift toward producers,
especially in situations in which producers are in a better position than indi-
vidual users to control risk. . . . On the other hand, during the transitional
period of an AI technology, substantial human supervision may still be
required. . . . In many of these situations, it may be impractical or too costly
for producers to monitor individual users and to intervene. Therefore, it
would be important to maintain consumer liability to the extent that users
of AI technologies have sufficient incentives to take precautions and invest
in training, thus internalizing potential harm to others.”
Broadly, regulation will affect the speed at which AI diffuses. Too much
regulation, and industry will not have incentives to invest. Too little regu-
lation, and consumers will not trust the products that result. In this way,
getting the regulatory balance right is key to understanding when and how
any impact of AI on economic growth and inequality will arise.
Impact on the Practice of Economics
Cockburn, Henderson, and Stern emphasize that machine learning is a
general purpose technology for science and innovation. As such, it is likely
to have an impact on research in a variety of disciplines, including eco-
nomics. Athey (chapter 21) provides an overview of the various ways in
which machine learning is likely to affect the practice of economics. For
example: “I believe that machine learning (ML) will have a dramatic impact
on the field of economics within a short time frame. . . . ML does not add
much to questions about identification, which concern when the object of
interest, for example, a causal effect, can be estimated with infinite data, but
rather yields great improvements when the goal is semiparametric estima-
tion or when there are a large number of covariates relative to the number
of observations . . . a key advantage of ML is that ML views empirical
analysis as “algorithms” that estimate and compare many alternative mod-
els . . . ‘outsourcing’ model selection to algorithms works very well when
the problem is ‘simple’—for example, prediction and classification tasks,
where performance of a model can be evaluated by looking at goodness of
fit in a held-out test set.” She emphasizes the usefulness of machine-learning
techniques for policy problems related to prediction (as in Kleinberg et al.
2015). The chapter then details recent advances in using machine-learning
techniques in causal inference, which she views as a fundamental new tool kit
for empirical economists. She concludes with a list of sixteen predictions of
how machine learning will impact economics, emphasizing new econometric
tools, new data sets and measurement techniques, increased engagement of
economists as engineers (and plumbers), and, of course, increased study
of the economic impact of machine learning on the economy as a whole.
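To make the held-out evaluation idea concrete, the following is a minimal sketch, assuming scikit-learn and a purely synthetic data set (none of it drawn from the chapter), of “outsourcing” model selection to an algorithm by comparing candidate models on goodness of fit in a held-out test set.

```python
# Minimal sketch of "outsourcing" model selection to an algorithm:
# candidate models are compared purely on goodness of fit in a
# held-out test set. Synthetic data; scikit-learn assumed installed.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

X, y = make_regression(n_samples=1000, n_features=50, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Hypothetical set of candidate models; any predictive method could be swapped in.
candidates = {
    "ols": LinearRegression(),
    "lasso": Lasso(alpha=1.0),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
}

# Fit each candidate on the training split and score it on the held-out test split.
scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    scores[name] = r2_score(y_test, model.predict(X_test))

best = max(scores, key=scores.get)
print(scores)
print("selected model:", best)
```

The point of the sketch is only that the researcher specifies the menu of models and the evaluation criterion; the held-out data, not the researcher, picks the winner.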
Lederman’s comment emphasizes the usefulness of machine learning to
create new variables for economic analysis, and how the use of machine
learning by organizations creates a new kind of endogeneity problem: “We
develop theoretical models to help us understand the data-generation pro-
cess which, in turn, informs both our concerns about causality as well as
the identification strategies we develop. . . . Overall, as applied researchers
working with real- world data sets, we need to recognize that increasingly
the data we are analyzing is going to be the result of decisions that are made
by algorithms in which the decision-making process may or may not re-
semble the decision-making processes we model as social scientists.”
If the study of AI is going to be a key question for economists going for-
ward, Raj and Seamans (chapter 22) emphasize that we need better data:
“While there is generally a paucity of data examining the adoption, use, and
effects of both AI and robotics, there is currently less information available
regarding AI. There are no public data sets on the utilization or adoption of
AI at either the macro or micro level. The most complete source of informa-
tion, the McKinsey Global Institute study, is proprietary and inaccessible
to the general public or the academic community. The most comprehensive
and widely used data set examining the diffusion of robotics is the Inter-
national Federation of Robotics (IFR) Robot Shipment Data . . . the IFR
does not collect any information on dedicated industrial robots that serve
one purpose. Furthermore, some of the robots are not classified by indus-
try, detailed data is only available for industrial robots (and not robots in
service, transportation, warehousing, or other sectors), and geographical
information is often aggregated.” They provide a detailed discussion of data-
collection opportunities by government and by academic researchers. If the
agenda set up in the other chapters is to be answered, it is important to have
a reliable data set that defines AI, measures its quality, and tracks its diffusion.
Related to Athey’s emphasis on the increased engagement of economists
as engineers, Milgrom and Tadelis (chapter 23) describe how machine
learning is already affecting market-design decisions. Using specific ex-
amples from online marketplaces and telecommunications auctions, they
emphasize the potential of AI to improve efficiency by predicting demand
and supply, overcoming computational barriers, and reducing search fric-
tions: “AI and machine learning are emerging as important tools for market
design. Retailers and marketplaces such as eBay, Taobao, Amazon, Uber,
and many others are mining their vast amounts of data to identify patterns
that help them create better experiences for their customers and increase
the efficiency of their markets . . . two-sided markets such as Google, which
match advertisers with consumers, are not only using AI to set reserve prices
and segment consumers into finer categories for ad targeting, but they also
develop AI-based tools to help advertisers bid on ads. . . . Another impor-
tant application of AI’s strength in improving forecasting to help markets
operate more efficiently is in electricity markets. To operate efficiently, elec-
tricity market makers . . . must engage in demand and supply forecasting.”
The authors argue that AI will play a substantial role in the design and
implementation of markets over a wide range of applications.
Camerer (chapter 24) also emphasizes the role of AI as a tool for predict-
ing choice: “Behavioral economics can be defined as the study of natural
limits on computation, willpower, and self-interest, and the implications of
those limits for economic analysis (market equilibrium, IO, public finance,
etc.). A different approach is to define behavioral economics more generally,
as simply being open-minded about what variables are likely to influence
economic choices. . . . In a general ML approach, predictive features could
be—and should be—any variables that predict. . . . If behavioral econom-
ics is recast as open-mindedness about what variables might predict, then
ML is an ideal way to do behavioral economics because it can make use of
a wide set of variables and select which ones predict.” He argues that firms,
policymakers, and market designers can implement AI as either a “bionic
patch” that improves human decision-making or “malware” that exploits
human weaknesses. In this way, AI could reduce or exacerbate the political
economy and inequality issues highlighted in earlier chapters. In addition,
Camerer explores two other ways in which AI and behavioral economics will
interact. He hypothesizes that machine learning could help predict human
behavior in a variety of settings including bargaining, risky choice, and
games, helping to verify or reject theory. He also emphasizes that (poor)
implementation of AI might provide insight into new ways to model biases
in human decision-making.
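As a rough illustration of that point, the following sketch, assuming scikit-learn and entirely synthetic choice data, lets an L1-penalized (lasso-style) classifier decide which of a wide set of candidate variables actually predict a binary choice; the variables, penalty, and data are illustrative assumptions, not anything proposed in the chapter.

```python
# Minimal sketch of letting an ML algorithm select which of many
# candidate behavioral/contextual variables actually predict a choice.
# Synthetic data; scikit-learn assumed installed.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(0)
n, p = 2000, 40                      # observations, candidate predictors
X = rng.normal(size=(n, p))          # wide set of candidate variables
true_coef = np.zeros(p)
true_coef[:5] = [1.5, -1.0, 0.8, 0.6, -0.5]   # only a few truly matter
prob = 1 / (1 + np.exp(-(X @ true_coef)))
y = rng.binomial(1, prob)            # observed binary choices

# L1-penalized logistic regression with a cross-validated penalty:
# coefficients on non-predictive variables are shrunk to exactly zero.
model = LogisticRegressionCV(
    Cs=10, cv=5, penalty="l1", solver="saga", max_iter=5000
).fit(X, y)

selected = np.flatnonzero(model.coef_.ravel() != 0)
print("variables selected as predictive:", selected)
```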
The book concludes with Kahneman’s brief and insightful comment.
Kahneman begins with a discussion of Camerer’s idea of using prediction
to verify theory, but continues with a broader discussion of a variety of
themes that arose over the course of the conference. With an optimistic
tone, he emphasizes that there are no obvious limits to what artificial intel-
ligence may be able to do: “Wisdom is breadth. Wisdom is not having too
narrow a view. That is the essence of wisdom; it is broad framing. A robot
will be endowed with broad framing. When it has learned enough, it will
be wiser than we people because we do not have broad framing. We are nar-
row thinkers, we are noisy thinkers, and it is very easy to improve upon us.
I do not think that there is very much that we can do that computers will
not eventually be programmed to do.”
The Future of Research on the Economics of Artificial Intelligence
The chapters in this book are the beginning. They highlight key questions,
recognize the usefulness of several economic models, and identify areas for
further development. We can leverage what we know about GPTs to antici-
pate the impact of AI as it diffuses, recognizing that no two GPTs are iden-
tical. If AI is a general purpose technology, it is likely to lead to increased
economic growth. A common theme in these chapters is that slowing down
scientific progress—even if it were possible—would come at a significant
cost. At the same time, many attendees emphasized that the distribution
of the benefits of AI might not be even. It depends on who owns the AI,
the effect on jobs, and the speed of diffusion.
The task given to the conference presenters was to scope out the research