Friday, June 14, 2019

The "Right" and "Wrong" Kind of Artificial Intelligence for Labor Markets

Sometimes technology replaces existing jobs. Sometimes it creates new jobs. Sometimes it does both at the same time. This raises an intriguing question: Do we need to view the effects of technology on jobs as a sort of tornado blowing through the labor market? Or could we come to understand why some technologies have bigger effects on creating jobs, or supplementing existing jobs, than on replacing jobs--and maybe even give greater encouragement to those kinds of technologies?

Ajay Agrawal, Joshua S. Gans, and Avi Goldfarb tackle the issue of how artificial intelligence technologies can have differing effects on jobs in "Artificial Intelligence: The Ambiguous Labor Market Impact of Automating Prediction" (Journal of Economic Perspectives, Spring 2019, 33 (2): 31-50). Perhaps someday "artificial intelligence" will be indistinguishable from human intelligence. But the authors argue that at present, most of the developments in AI are really about "machine learning," which involves using computing power to make more accurate predictions from data. They write (citations omitted):
The majority of recent achievements in artificial intelligence are the result of advances in machine learning, a branch of computational statistics. ... Machine learning does not represent an increase in artificial general intelligence of the kind that could substitute machines for all aspects of human cognition, but rather one particular aspect of intelligence: prediction. We define prediction in the statistical sense of using existing data to fill in missing information. As deep-learning pioneer Geoffrey Hinton said, “Take any old problem where you have to predict something and you have a lot of data, and deep learning is probably going to make it work better than the existing techniques.”
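To make the "fill in missing information" framing concrete, here is a minimal sketch of what prediction in this statistical sense looks like in code. The data, the model choice, and all the names are invented for illustration; nothing here refers to any particular system discussed in the paper.

```python
# A minimal, hypothetical sketch of "prediction" in the statistical sense:
# use rows where a value is known to fill it in where it is missing.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy dataset: two observed features and a target that is missing for some rows.
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=200)
known = np.ones(200, dtype=bool)
known[150:] = False          # pretend the last 50 targets are unobserved

# "Machine learning" step: learn the mapping from the existing data...
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[known], y[known])

# ...and use it to fill in the missing information.
filled_in = model.predict(X[~known])
print(filled_in[:5])
```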
The authors are using "prediction" in a very broad sense: "As an input into decision-making under uncertainty, prediction is essential to many occupations, including service industries: teachers decide how to educate students, managers decide who to recruit and reward, and janitors decide how to deal with a given mess." Here are a few examples from their paper, some fairly well-known, others less so. 

AI and Brain Surgery
For example, ODS Medical developed a way of transforming brain surgery for cancer patients. Previously, a surgeon would remove a tumor and surrounding tissue based on previous imaging (say, an MRI scan). However, to be certain all cancerous tissue is removed, surgeons frequently end up removing more brain matter than necessary. The ODS Medical device, which resembles a connected pen-like camera, uses artificial intelligence to predict whether an area of brain tissue has cancer cells or not. Thus, while the operation is taking place, the surgeon can obtain an immediate recommendation as to whether a particular area should be removed. By predicting with more than 90 percent accuracy whether a cell is cancerous, the device enables the surgeon to reduce both type I errors (removing noncancerous tissue) and type II errors (leaving cancerous tissue). The effect is to augment the labor of brain surgeons. Put simply, given a prediction, human decision-makers can in some cases make more nuanced and improved choices. 
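As a rough illustration of how a probabilistic prediction lets the decision-maker manage that tradeoff, here is a hypothetical sketch (not ODS Medical's actual system) showing how moving the decision threshold shifts the balance between type I and type II errors. All of the numbers are made up.

```python
# Hypothetical illustration: a predicted probability that a tissue region is
# cancerous, and a threshold that trades off type I errors (removing
# noncancerous tissue) against type II errors (leaving cancerous tissue).
import numpy as np

rng = np.random.default_rng(1)

# Invented ground truth and predicted probabilities for 1,000 tissue regions.
is_cancerous = rng.random(1000) < 0.3
predicted_prob = np.clip(is_cancerous * 0.8 + rng.normal(0.1, 0.15, 1000), 0, 1)

for threshold in (0.3, 0.5, 0.7):
    remove = predicted_prob >= threshold
    type_i = np.mean(remove & ~is_cancerous)   # removed but not cancerous
    type_ii = np.mean(~remove & is_cancerous)  # left in place but cancerous
    print(f"threshold={threshold:.1f}  type I={type_i:.3f}  type II={type_ii:.3f}")
```

A higher threshold removes less healthy tissue but leaves more cancer behind; the prediction does not make the choice, it just makes the tradeoff visible to the surgeon.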
AI and Tax Law
Blue J Legal’s artificial intelligence scans tax law and decisions to provide firms with predictions of their tax liability. As one example, tax law is often ambiguous on how income should be classified. At one extreme, if someone trades securities multiple times per day and holds securities for a short time period, then the profits are likely to be classified as business income. In contrast, if trades are rare and assets are held for decades, then profits are likely to be classified by the courts as capital gains. Currently, a lawyer who takes on a case collects the specific facts, conducts research on past judicial decisions in similar cases, and makes predictions about the case at hand. Blue J Legal uses machine learning to predict the outcome of new fact scenarios in tax and employment law cases. In addition to a prediction, the software provides a “case finder” that identifies the most relevant cases that help generate the prediction.
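A toy sketch of this kind of prediction might look like the following. The features (trades per year, average holding period), the past cases, and the model are all invented for illustration; this is not Blue J Legal's software.

```python
# Invented example: predict how a court would classify trading profits,
# based on a handful of made-up past cases.
from sklearn.tree import DecisionTreeClassifier

# (trades_per_year, avg_holding_days) -> court's classification
past_cases = [
    (520, 1), (300, 3), (150, 10),      # frequent, short-horizon trading
    (4, 900), (2, 2500), (6, 1200),     # rare trades, long holding periods
]
outcomes = ["business income"] * 3 + ["capital gains"] * 3

model = DecisionTreeClassifier(max_depth=2).fit(past_cases, outcomes)

# Predict the outcome for a new fact scenario.
new_case = [(40, 60)]   # 40 trades a year, held about two months on average
print(model.predict(new_case))
print(model.predict_proba(new_case))
```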
AI and Office Cleaning
A&K Robotics takes existing, human-operated cleaning devices, retrofits them with sensors and a motor, and then trains a machine learning-based model using human operator data so the machine can eventually be operated autonomously. Artificial intelligence enables prediction of the correct path for the cleaning robot to take and also can adjust for unexpected surprises that appear in that path. Given these predictions, it is possible to prespecify what the cleaning robot should do in a wide range of predicted scenarios, and so the decisions and actions can be automated. If successful, the human operators will no longer be necessary. The company emphasizes how this will increase workplace productivity, reduce workplace injuries, and reduce costs.
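A highly simplified sketch of that approach, with invented sensor data and function names, might look like this: a model learns to predict the human operator's steering from logged data, and a prespecified rule says what to do when the prediction flags a surprise in the path.

```python
# Hypothetical sketch of learning from operator data, then operating
# autonomously with a prespecified decision rule. All data are invented.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(2)

# Logged operator data: distance-to-wall readings (left, right) -> steering angle.
sensor_log = rng.uniform(0.2, 2.0, size=(500, 2))
steering_log = 0.5 * (sensor_log[:, 0] - sensor_log[:, 1])  # operator keeps the machine centered

steering_model = KNeighborsRegressor(n_neighbors=5).fit(sensor_log, steering_log)

def autonomous_step(left_dist, right_dist, obstacle_prob):
    """Prespecified decision rule layered on top of the learned prediction."""
    if obstacle_prob > 0.5:          # predicted surprise in the path
        return "stop"
    angle = steering_model.predict([[left_dist, right_dist]])[0]
    return f"steer {angle:+.2f} rad"

print(autonomous_step(1.5, 0.8, obstacle_prob=0.1))
print(autonomous_step(1.0, 1.0, obstacle_prob=0.9))
```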
AI and Bail Decisions
Judges make decisions about whether to grant bail and thus to allow the temporary release of an accused person awaiting trial, sometimes on the condition that a sum of money is lodged to guarantee their appearance in court. Kleinberg, Lakkaraju, Leskovec, Ludwig, and Mullainathan (2018) study the predictions that inform this decision ... Judges will continue to weigh the relative costs of errors, and in fact the US legal system requires human judges to decide. But artificial intelligence could enhance the productivity of judges. The main social gains here may not be in hours saved for judges as a group, but rather from the improvement in prediction accuracy. Police arrest more than 10 million people per year in the United States. Based on AIs trained on a large historical dataset to predict decisions and outcomes, the authors report simulations that show enhanced prediction quality could enable crime reductions up to 24.7 percent with no change in jailing rates or jailing rate reductions up to 41.9 percent with no increase in crime rates. In other words, if judicial output were measured in a quality-adjusted way, output and hence labor productivity could rise significantly. 
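The logic of those simulations can be sketched in a stylized way: rank defendants by predicted risk and compare a risk-ranked detention policy with the status quo. The sketch below uses entirely invented numbers and is not the Kleinberg et al. model.

```python
# Stylized, invented illustration of the "same jailing rate, lower crime"
# counterfactual: jail the same share of defendants, but the ones the
# prediction ranks as riskiest.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

true_risk = rng.beta(2, 5, n)                 # invented probability of reoffending
predicted_risk = np.clip(true_risk + rng.normal(0, 0.05, n), 0, 1)

# Stand-in for judges' decisions: 30% jailed, chosen at random here.
status_quo_jailed = rng.random(n) < 0.30
baseline_crime = np.mean(~status_quo_jailed * true_risk)

# Counterfactual: same 30% jailing rate, but jail those predicted riskiest.
order = np.argsort(-predicted_risk)
jail_ranked = np.zeros(n, dtype=bool)
jail_ranked[order[: int(0.30 * n)]] = True
ranked_crime = np.mean(~jail_ranked * true_risk)

print(f"baseline expected crime per arrestee:   {baseline_crime:.4f}")
print(f"risk-ranked, same jail rate:            {ranked_crime:.4f}")
```

The mirror-image exercise, holding expected crime fixed while lowering the jailing rate, works the same way by sweeping the cutoff in the ranked list.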
AI and Drug Discovery
A company called Atomwise uses artificial intelligence to enhance the drug discovery process. Traditionally, identifying molecules that could most efficiently bind with proteins for a given therapeutic target was largely based on educated guesses and, given the number of potential combinations, it was highly inefficient. Downstream experiments to test whether a molecule could be of use in a treatment often had to deal with a number of poor-quality candidate molecules. Atomwise automates the task of predicting which molecules have the most potential for exploration. Their software classifies foundational building blocks of organic chemistry and predicts the outcomes of real-world physical experiments. This makes the decision of which molecules to test more efficient. This increased efficiency, specifically enabling lower cost and higher accuracy decisions on which molecules to test, increases the returns to the downstream lab testing procedure that is conducted by humans. As a consequence, the demand for labor to conduct such testing is likely to increase. Furthermore, higher yield due to better prediction of which chemicals might work increases the number of humans needed in the downstream tasks of bringing these chemicals to market. In other words, automated prediction in drug discovery is leading to increased use of already-existing complementary tasks, performed by humans in downstream occupations.
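Schematically, prediction-guided screening amounts to ranking a large pool of candidates by a predicted score and sending only the top of the list to the lab. The sketch below uses made-up molecules and a stand-in scoring function; it is not Atomwise's method.

```python
# Invented illustration of prediction-guided screening: score candidates,
# send only the top-ranked ones to (human-run) downstream lab testing.
import random

random.seed(4)

candidates = [f"molecule_{i}" for i in range(10_000)]

def predicted_affinity(molecule: str) -> float:
    """Stand-in for a learned model's predicted binding score."""
    return random.random()

ranked = sorted(candidates, key=predicted_affinity, reverse=True)
send_to_lab = ranked[:50]     # humans test only the most promising candidates
print(send_to_lab[:5])
```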
Some of these examples fit the mental model that robots driven by AI are going to replace human workers. Others suggest that AI will make existing workers more productive. It has become common, when looking at the effects of technology on labor markets, to focus on the idea that a given job has a bunch of tasks. If a new technology replaces most or all of the tasks of a certain job, that job may be eliminated. If the technology creates the need for a bunch of new tasks, brand-new job categories may be created. Or often, a new technology may just cause a job to evolve, by replacing some tasks and creating a need for other tasks to be carried out.

These differing pathways suggest that it might be possible to differentiate, at least to some extent, between uses of artificial intelligence that are especially likely to be efficiency-enhancing for existing workers and job-creating for others, and uses of artificial intelligence that are more likely to be job-replacing in a way that saves a little money for employers but doesn't bring large efficiency gains.

For example, an article in Axios described a discussion with James Manyika, director of the McKinsey Global Institute. Manyika notes that in doing AI research: "If your goal is human-level capability, you're increasing the probability that you're doing substitutive work ... If you were trying to solve this as an economic problem, you'd want to develop AI algorithms or machines that are as different from humans as possible." Manyika suggests a few examples of AI-based research that are less likely to replace human workers, because they don't mimic human capabilities: "augmented reality," "AI systems that can predict how proteins are folded, or how to route trucks better," and "robots that can see around corners, or register sounds outside our hearing range."

Daron Acemoglu and Pascual Restrepo tackle this question in a short nontechnical essay, "The Wrong Kind of AI? Artificial Intelligence and the Future of Labor Demand" (IZA Discussion Paper No. 12292, April 2019). They write:
Most AI researchers and economists studying its consequences view it as a way of automating yet more tasks. No doubt, AI has this capability, and most of its applications to date have been of this mold: e.g., image recognition, speech recognition, translation, accounting, recommendation systems, and customer support. But we do not need to accept this as the primary way that AI can be and indeed ought to be used. ...
It is possible that the ecosystem around the most creative clusters in the United States, such as Silicon Valley, excessively rewards automation and pays insufficient attention to other uses of frontier technologies. This may be partly because of the values and interests of leading researchers (consider for example the ethos of companies like Tesla that have ceaselessly tried to automate everything). It is also partly because the prevailing business model and vision of the large tech companies, which are the source of most of the resources going into AI, have focused on automation and removing the (fallible) human element from the production process. ...

All in all, even though we currently lack definitive evidence that research and corporate resources today are being directed towards the "wrong" kind of AI, the market for innovation gives no compelling reason to expect an efficient balance between different types of AI. If at this critical juncture insufficient attention is devoted to inventing and creating demand for, rather than just replacing, labor, that would be the "wrong" kind of AI from the social and economic point of view.
As one example, Acemoglu and Restrepo point out that individualized classroom teaching, enabled by AI, will not eliminate the need for teachers--and may even increase it. As they write: "Educational applications of AI would necessitate new, more flexible skills from teachers (beyond what is available and what is being invested in now), and they would need additional resources to hire more teachers to work with these new AI technologies (after all, that is the point of the new technology, to create new tasks and additional demand for teachers)." AI-enabled tools could go well beyond feeding students multiple-choice questions with continually adjusting levels of difficulty, and provide a kind of feedback that is different from what any classroom teacher can provide.
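For a sense of how mechanical that "adjusting difficulty" baseline is, here is a bare-bones sketch of such a rule; the update rule and numbers are invented, and the point is how little of a teacher's work it captures.

```python
# Invented sketch of a "continually adjusting difficulty" rule for
# multiple-choice practice questions.
def next_difficulty(current: float, answered_correctly: bool) -> float:
    """Nudge question difficulty up after a correct answer, down after a miss."""
    step = 0.1 if answered_correctly else -0.15
    return min(1.0, max(0.0, current + step))

difficulty = 0.5
for correct in [True, True, False, True]:
    difficulty = next_difficulty(difficulty, correct)
    print(f"next question difficulty: {difficulty:.2f}")
```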