Wednesday, July 10, 2019

Is AI Just Recycled Intelligence, Which Needs Economics to Help It Along?

The Harvard Data Science Review has just published its first issue. Many of us in economics are cousins of the burgeoning data science field, and will find it of interest. As one example, Harvard provost (and economist) Alan Garber offers a broad-based essay on "Data Science: What the Educated Citizen Needs to Know." Others may be more intrigued by the efforts of Mark Glickman, Jason Brown, and Ryan Song to use a machine learning approach to figure out whether Lennon or McCartney is more likely to have authored certain songs by the Beatles that are officially attributed to both, in "(A) Data in the Life: Authorship Attribution in Lennon-McCartney Songs."
But my attention was especially caught by an essay by Michael I. Jordan called "Artificial Intelligence—The Revolution Hasn’t Happened Yet," which is then followed by 11 comments: from Rodney Brooks; Emmanuel Candes, John Duchi, and Chiara Sabatti; Greg Crane; David Donoho; Maria Fasli; Barbara Grosz; Andrew Lo; Maja Mataric; Brendan McCord; Max Welling; and Rebecca Willett. The rejoinder from Michael I. Jordan will be of particular interest to economists, because it is titled "Dr. AI or: How I Learned to Stop Worrying and Love Economics."

Jordan's main argument is that the term "artificial intelligence" often misleads public discussions, because the actual issue here isn't human-type intelligence. Instead, it is a set of computer programs that can use data to train themselves to make predictions: what the experts call "machine learning," defined as "an algorithmic field that blends ideas from statistics, computer science and many other disciplines to design algorithms that process data, make predictions, and help make decisions." Consumer recommendation or fraud detection systems, for example, are machine learning, not the high-level flexible cognitive capacity that most of us mean by "intelligence." As Jordan argues, the information technology that would run, say, an operational system of autonomous vehicles is more closely related to a (much more complicated) air traffic control system than to the human brain.
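For concreteness, here is a minimal sketch, in Python with scikit-learn, of what "machine learning" means in this sense: a program that fits itself to data and then emits predictions. The data are synthetic, and the fraud-detection framing is my own illustrative gloss, not an example taken from Jordan's essay.

```python
# A minimal sketch of "machine learning" as prediction from data: an algorithm
# trains itself on examples and scores new cases, with no claim to human-style
# intelligence. Data are synthetic; the fraud framing is illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Pretend each row is a transaction (8 features) labeled fraudulent or not.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # "trains itself"
print("held-out accuracy:", model.score(X_test, y_test))
print("predicted fraud probability for one new case:",
      model.predict_proba(X_test[:1])[0, 1])
```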

(One implication here for economics is that if AI is really machine learning, and machine learning is about programs that can update and train themselves to make better predictions, then one can analyze the effect of AI on labor markets by looking at specific tasks within various jobs that involve prediction. Ajay Agrawal, Joshua S. Gans, and Avi Goldfarb take this approach in "Artificial Intelligence: The Ambiguous Labor Market Impact of Automating Prediction" (Journal of Economic Perspectives, Spring 2019, 33 (2): 31-50). I offered a gloss of their findings in a blog post last month.)

Moreover, machine learning algorithms, which often mix results from past research and pre-existing data gathered in one situation with new forms of data from another, can go badly astray. Jordan offers a vivid example:
Consider the following story, which involves humans, computers, data, and life-or-death decisions, but where the focus is something other than intelligence-in-silicon fantasies. When my spouse was pregnant 14 years ago, we had an ultrasound. There was a geneticist in the room, and she pointed out some white spots around the heart of the fetus. “Those are markers for Down syndrome,” she noted, “and your risk has now gone up to one in 20.” She let us know that we could learn whether the fetus in fact had the genetic modification underlying Down syndrome via an amniocentesis, but amniocentesis was risky—the chance of killing the fetus during the procedure was roughly one in 300. Being a statistician, I was determined to find out where these numbers were coming from. In my research, I discovered that a statistical analysis had been done a decade previously in the UK in which these white spots, which reflect calcium buildup, were indeed established as a predictor of Down syndrome. I also noticed that the imaging machine used in our test had a few hundred more pixels per square inch than the machine used in the UK study. I returned to tell the geneticist that I believed that the white spots were likely false positives, literal white noise.
She said, “Ah, that explains why we started seeing an uptick in Down syndrome diagnoses a few years ago. That’s when the new machine arrived.”
We didn’t do the amniocentesis, and my wife delivered a healthy girl a few months later, but the episode troubled me, particularly after a back-of-the-envelope calculation convinced me that many thousands of people had gotten that diagnosis that same day worldwide, that many of them had opted for amniocentesis, and that a number of babies had died needlessly. The problem that this episode revealed wasn’t about my individual medical care; it was about a medical system that measured variables and outcomes in various places and times, conducted statistical analyses, and made use of the results in other situations. The problem had to do not just with data analysis per se, but with what database researchers call provenance—broadly, where did data arise, what inferences were drawn from the data, and how relevant are those inferences to the present situation?
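The provenance problem in this story shows up clearly in a back-of-the-envelope Bayes' rule calculation. To be clear, every number below is my own illustrative assumption (the baseline prevalence, the sensitivity, the false-positive rates); none of them comes from Jordan's essay or the UK study he mentions. The point is only that if a newer machine flags "spots" on more healthy scans, then the quoted 1-in-20 risk, calibrated on the old machine, overstates the true posterior risk.

```python
# A hedged back-of-the-envelope reconstruction of the provenance problem.
# All numbers are assumptions for illustration, not figures from the essay.

prior = 1 / 700        # assumed baseline rate of Down syndrome
sensitivity = 0.30     # assumed P(white spots | Down syndrome) in the UK study

def posterior(false_positive_rate):
    """Bayes' rule: P(Down syndrome | spots), given P(spots | healthy)."""
    return (sensitivity * prior /
            (sensitivity * prior + false_positive_rate * (1 - prior)))

# A false-positive rate calibrated so the old machine implies roughly "1 in 20":
print(f"old machine: 1 in {1 / posterior(0.008):.0f}")   # ~1 in 20

# Suppose the higher-resolution machine also resolves harmless calcium specks
# on healthy scans, quadrupling the false-positive rate, while the clinic
# keeps quoting the old number:
print(f"new machine: 1 in {1 / posterior(0.032):.0f}")   # ~1 in 76
```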
The comment by David Donoho refers to this as "recycled intelligence." Donoho writes:
The last decade shows that humans can record their own actions when faced with certain tasks, which can be recycled to make new decisions that score as well as humans’ (or maybe better, because the recycled decisions are immune to fatigue and impulse). ... Recycled human intelligence does not deserve to be called augmented intelligence. It does not truly augment the range of capabilities that humans possess. ... Relying on such recycled intelligence is risky; it may give systematically wrong answers ...
Donoho offers the homely example of spellcheck programs, which, for someone who is an excellent and careful speller, are as likely to create memorable errors as to improve the text.
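Donoho's example is easy to make concrete. Below is a hedged sketch of a Norvig-style corrector: candidate words within one edit (a deletion, insertion, or replacement), scored by corpus frequency. The toy corpus counts are invented. For a careful writer who uses a rare but correct word, "correction" toward a popular neighbor manufactures exactly the kind of memorable error Donoho has in mind.

```python
# A toy Norvig-style spellchecker: recycled corpus statistics "fix" rare but
# correct words into frequent neighbors. Corpus counts below are invented.
CORPUS_FREQ = {"costal": 2, "coastal": 9000, "manger": 40, "manager": 12000}

def edits1(word):
    """All strings one deletion, insertion, or replacement away from word."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    inserts = [a + c + b for a, b in splits for c in letters]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    return set(deletes + inserts + replaces)

def correct(word):
    # Prefer the most frequent known word within one edit, even when the
    # writer's original word was already right.
    candidates = {w for w in edits1(word) | {word} if w in CORPUS_FREQ}
    return max(candidates, key=CORPUS_FREQ.get) if candidates else word

print(correct("costal"))  # -> "coastal": a correct anatomical term gets "fixed"
print(correct("manger"))  # -> "manager": same failure mode
```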

From Jordan's perspective, the question we should be discussing is not whether AI or machine learning will "replace" workers, but how humans will interact with these new capabilities. I'm not just thinking of worker training here, but of issues related to privacy, access to technology, the structure of market competition, and more. Indeed, Jordan argues that one major ingredient missing from current machine-learning programs is a fine-grained sense of what specific people want--which implies a role for markets. Rather than pretending that we are mimicking human "intelligence," with all the warts and flaws that we know human intelligence has, we should instead be thinking about how information technology can address the allocation of public and private resources in ways that benefit people. I can't figure out a way to summarize his argument in brief without doing violence to it, so I quote here at length:
Let us suppose that there is a fledgling Martian computer science industry, and suppose that the Martians look down at Earth to get inspiration for making their current clunky computers more ‘intelligent.’ What do they see that is intelligent, and worth imitating, as they look down at Earth?
They will surely take note of human brains and minds, and perhaps also animal brains and minds, as intelligent and worth emulating. But they will also find it rather difficult to uncover the underlying principles or algorithms that give rise to that kind of intelligence—the ability to form abstractions, to give semantic interpretation to thoughts and percepts, and to reason. They will see that it arises from neurons, and that each neuron is an exceedingly complex structure—a cell with huge numbers of proteins, membranes, and ions interacting in complex ways to yield complex three-dimensional electrical and chemical activity. Moreover, they will likely see that these cells are connected in complex ways (via highly arborized dendritic trees; please type "dendritic tree and spines" into your favorite image browser to get some sense of a real neuron). A human brain contains on the order of a hundred billion neurons connected via these trees, and it is the network that gives rise to intelligence, not the individual neuron.
Daunted, the Martians may step away from considering the imitation of human brains as the principal path forward for Martian AI. Moreover, they may reassure themselves with the argument that humans evolved to do certain things well, and certain things poorly, and human intelligence may not necessarily be well suited to solve Martian problems.
What else is intelligent on Earth? Perhaps the Martians will notice that in any given city on Earth, most every restaurant has at hand every ingredient it needs for every dish that it offers, day in and day out. They may also realize that, as in the case of neurons and brains, the essential ingredients underlying this capability are local decisions being made by small entities that each possess only a small sliver of the information being processed by the overall system. But, in contrast to brains, the underlying principles or algorithms may be seen to be not quite as mysterious as in the case of neuroscience. And they may also determine that this system is intelligent by any reasonable definition—it is adaptive (it works rain or shine), it is robust, it works at small scale and large scale, and it has been working for thousands of years (with no software updates needed). Moreover, not being anthropocentric creatures, the Martians may be happy to conceive of this system as an ‘entity’—just as much as a collection of neurons is an ‘entity.’
Am I arguing that we should simply bring in microeconomics in place of computer science? And praise markets as the way forward for AI? No, I am instead arguing that we should bring microeconomics in as a first-class citizen into the blend of computer science and statistics that is currently being called ‘AI.’ ... 
Indeed, classical recommendation systems can and do cause serious problems if they are rolled out in real-world domains where there is scarcity. Consider building an app that recommends routes to the airport. If few people in a city are using the app, then it is benign, and perhaps useful. When many people start to use the app, however, it will likely recommend the same route to large numbers of people and create congestion. The best way to mitigate such congestion is not to simply assign people to routes willy-nilly, but to take into account human preferences—on a given day some people may be in a hurry to get to the airport and others are not in such a hurry. An effective system would respect such preferences, letting those in a hurry opt to pay more for their faster route and allowing others to save for another day. But how can the app know the preferences of its users? It is here that major IT companies stumble, in my humble opinion. They assume that, as in the advertising domain, it is the computer's job to figure out human users' preferences, by gathering as much information as possible about their users, and by using AI. But this is absurd; in most real-world domains—where our preferences and decisions are fine-grained, contextual, and in-the-moment—there is no way that companies can collect enough data to know what we really want. Nor would we want them to collect such data—doing so would require getting uncomfortably close to prying into the private thoughts of individuals. A more appealing approach is to empower individuals by creating a two-way market where (say) street segments bid on drivers, and drivers can make in-the-moment decisions about how much of a hurry they are in, and how much they're willing to spend (in some currency) for a faster route.
Similarly, a restaurant recommendation system could send large numbers of people to the same restaurant. Again, fixing this should not be left to a platform or an omniscient AI system that purportedly knows everything about the users of the platform; rather, a two-way market should be created where the two sides of the market see each other via recommendation systems.
It is this last point that takes us beyond classical microeconomics and brings in machine learning. In the same way as modern recommendation systems allowed us to move beyond classical catalogs of goods, we need to use computer science and statistics to build new kinds of two-way markets. For example, we can bring relevant data about a diner's food preferences, budget, physical location, etc., to bear in deciding which entities on the other side of the market (the restaurants) are best to connect to, out of the tens of thousands of possibilities. That is, we need two-way markets where each side sees the other side via an appropriate form of recommendation system.
From this perspective, business models for modern information technology should be less about providing ‘AI avatars’ or ‘AI services’ for us to be dazzled by (and put out of work by)—on platforms that are monetized via advertising because they do not provide sufficient economic value directly to the consumer—and more about providing new connections between (new kinds of) producers and consumers.
Consider the fact that precious few of us are directly connected to the humans who make the music we listen to (or listen to the music that we make), to the humans who write the text that we read (or read the text that we write), and to the humans who create the clothes that we wear. Making those connections in the context of a new engineering discipline that builds market mechanisms on top of data flows would create new ‘intelligent markets’ that currently do not exist. Such markets would create jobs and unleash creativity.
Implementing such platforms is a task worthy of a new branch of engineering. It would require serious attention to data flow and data analysis, it would require blending such analysis with ideas from market design and game theory, and it would require integrating all of the above with innovative thinking in the social, legal, and public policy spheres. The scale and scope is surely at least as grand as that envisaged when chemical engineering was emerging as a way to combine ideas from chemistry, fluid mechanics, and control theory at large scale.
Certainly market forces are not a panacea. But market forces are an important source of algorithmic ideas for constructing intelligent systems, and we ignore them at our peril. We are already seeing AI systems that create problems regarding fairness, congestion, and bias. We need to reconceptualize the problems in such a way that market mechanisms can be taken into account at the algorithmic level, as part and parcel of attempting to make the overall system be ‘intelligent.’ Ignoring market mechanisms in developing modern societal-scale information-technology systems is like trying to develop a field of civil engineering while ignoring gravity.
Markets need to be regulated, of course, and it takes time and experience to discover the appropriate regulatory mechanisms. But this is not a problem unique to markets. The same is true of gravity, when we construe it as a tool in civil engineering. Just as markets are imperfect, gravity is imperfect. It sometimes causes humans, bridges, and buildings to fall down. Thus it should be respected, understood, and tamed. We will require new kinds of markets, which will require research into new market designs and research into appropriate regulation. Again, the scope is vast.
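Jordan's airport-route example can also be made concrete with a toy two-sided market. The sketch below is my own illustration, not anything from the essay: a scarce fast route raises its price until only the drivers in the biggest hurry choose it, and all of the numbers and the price-adjustment rule are invented for the example.

```python
# A toy market where a congested route prices itself: drivers weigh time cost
# (urgency * delay, in dollars) against the route's current price. All numbers
# and the adjustment rule are invented for this sketch.
routes = {"fast": {"capacity": 3, "price": 0.0, "delay": 10},
          "slow": {"capacity": 10, "price": 0.0, "delay": 30}}

# Each driver's urgency: dollars they would pay to save a minute (hypothetical).
urgencies = [2.0, 1.5, 1.0, 0.5, 0.4, 0.3, 0.2, 0.1]

def choose(urgency):
    # Pick the route with the lowest total cost: time cost plus price.
    return min(routes, key=lambda r: urgency * routes[r]["delay"] + routes[r]["price"])

for _ in range(100):  # simple tatonnement: raise the price of congested routes
    demand = {r: 0 for r in routes}
    for u in urgencies:
        demand[choose(u)] += 1
    for r, info in routes.items():
        if demand[r] > info["capacity"]:
            info["price"] += 0.5                           # excess demand: price rises
        elif demand[r] < info["capacity"]:
            info["price"] = max(0.0, info["price"] - 0.5)  # slack: price falls

for u in urgencies:
    print(f"urgency {u:.1f} -> {choose(u)}")
print({r: info["price"] for r, info in routes.items()})
```

In this toy run, the fast route's price settles at 10.5 and only the three most hurried drivers take it: a crude version of "street segments bidding on drivers," with those in a hurry paying more and everyone else saving for another day.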
I can think of all sorts of issues and concerns to raise about this argument (and I'm sure that readers can do so as well), but I also think the argument has an interesting force and plausibility.