Thursday, April 2, 2020

Is It Getting Harder for Research to Boost Productivity?

New technologies are the beating heart of productivity growth and a rising standard of living. But Nicholas Bloom, Charles I. Jones, John Van Reenen, and Michael Webb ask "Are Ideas Getting Harder to Find?" (American Economic Review, April 2020, pp. 1104-44, not freely available online). The fact that they are asking the question tells you that their answer is a pessimistic one. This economics research article will be tough sledding for the uninitiated, but the heart of their case is made with some graphs suitable for anyone to mull over.

For example, take an overall look at the US economy, comparing the number of researchers with productivity growth. You find that the number of researchers grows by multiples, while productivity growth rises and falls only by small amounts. The inference is that it's taking a lot more researchers just to keep productivity growth at the same level.
For a specific example, consider Moore's law, the notion that the density of semiconductors on a computer chip will double every two years or so. Moore's law turned 50 a few years ago, and as I noted at the time, it's been getting more and more expensive and difficult to keep doubling the density of chips. As Bloom, Jones, Van Reenen, and Webb write: "In particular, the number of researchers required to double chip density today is more than 18 times larger than the number required in the early 1970s. At least as far as semiconductors are concerned, ideas are getting harder to find. Research productivity in this case is declining sharply, at a rate of 7 percent per year."
Or how about agricultural crop yields? The green lines show the number of researchers, rising; the blue line shows agricultural productivity growth, falling.
Or how about inventions of new drugs? The authors write:

New molecular entities (NMEs) are novel compounds that form the basis of new drugs. Historically, the number of NMEs approved by the Food and Drug Administration each year shows little or no trend, while the number of dollars spent on pharmaceutical research has grown dramatically ... We reexamine this fact ... The result is that research effort rises by a factor of 9, while research productivity falls by a factor of 11 by 2007 before rising in recent years so that the overall decline by 2014 is a factor of 5.
Or how about reductions in cancer? Yes, death rates for cancer are falling, but research into fighting cancer has been rising quite rapidly. As a result, it seems to be taking more and more research publications about cancer and more and more clinical trials to reduce cancer deaths by an equivalent amount.
Based on these and other examples, the authors write: "[J]ust to sustain constant growth in GDP per person, the United States must double the amount of research effort every 13 years to offset the increased difficulty of finding new ideas."
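For readers who like to see the arithmetic, here is a minimal back-of-the-envelope sketch of what these numbers imply. The 13-year doubling time and the 7 percent decline rate come from the quotes above; the 43-year time span is my own rough assumption for "the early 1970s" through the paper's sample, so treat the output as a ballpark cross-check rather than the authors' own calculation.

```python
import math

# If research effort must double every 13 years just to hold productivity
# growth constant, the implied required growth rate of research effort is:
doubling_years = 13
required_growth = math.log(2) / doubling_years      # continuously compounded
print(f"Required growth in research effort: {required_growth:.1%} per year")
# -> roughly 5.3% per year

# Cross-check on the semiconductor example: research productivity falling
# about 7% per year over roughly 43 years (my assumption for "the early
# 1970s" through the paper's sample) implies the researcher requirement
# multiplies by:
decline_rate = 0.07
years = 43
factor = math.exp(decline_rate * years)
print(f"Implied increase in researchers needed: about {factor:.0f}x")
# -> about 20x, in the same ballpark as the paper's 18x figure
```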

In the fashion of honest academics, the authors note in a number of places and in a number of ways that examples like these don't prove conclusively that it's becoming more costly to find ideas and harder for research to boost productivity. For example:

  • Perhaps the measured growth of GDP doesn't capture many of the gains that are happening, like free or zero-marginal-cost access to so many goods and services over the internet. 
  • Perhaps there are other examples or measures in which the gains from research would be rising, not falling. 
  • Perhaps Moore's law is a bad example, because it involves running into physical limits, and thus isn't representative of other research efforts. 
  • Perhaps we are doing fine at discovering new ideas, but our economy is doing a poor job of turning these ideas into commercial products and at diffusing the ideas and products across a wide spectrum of industries and companies. 
  • Perhaps the shift away from "basic research" funded by governments and toward applied research largely funded by companies has reduced the number of big new ideas. 
  • Perhaps more firms are using intellectual property as a defensive technique for warding off competition rather than as a method of moving forward with productivity gains. 
  • Overall US R&D spending has been pretty flat for several decades at about 2.5% of GDP, and maybe that's the measure of "research" on which we should be focusing.
  • Perhaps there is some technology threshold for technologies like artificial intelligence, such that once that threshold is reached, very large productivity gains will then be possible, but we just haven't hit the threshold yet.   

You can probably add some possibilities to this list. But the weight of the argument from Bloom, Jones, Van Reenen, and Webb is that if we want technology and new ideas to ride to our rescue in a variety of areas--productivity growth, reducing pollution, improving health care and education, and many others--we need to step up our efforts considerably.

CBO: GDP Falls 7%, Unemployment Hits 10% in Second Quarter 2020

Phillip Swagel, Director of the Congressional Budget Office, blogs on "Updating CBO’s Economic Forecast to Account for the Pandemic" (April 2, 2020):
The following are CBO’s very preliminary estimates, which are based on information about the economy that was available through this morning and which include the effects of an economic boost from recently enacted legislation.
Gross domestic product is expected to decline by more than 7 percent during the second quarter. If that happened, the decline in the annualized growth rate reported by the Bureau of Economic Analysis would be about four times larger and would exceed 28 percent. Those declines could be much larger, however.
The unemployment rate is expected to exceed 10 percent during the second quarter, in part reflecting the 3.3 million new unemployment insurance claims reported on March 26 and the 6.6 million new claims reported this morning. (The number of new claims was about 10 times larger this morning than it had been in any single week during the recession from 2007 to 2009.)
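As a quick aside on the arithmetic: BEA reports quarterly GDP changes at an annualized rate, which compounds the quarterly change over four quarters. Here is a minimal sketch of that conversion; the 7 percent figure is from the CBO quote, and the slightly larger decline in the second line is my own illustration of why CBO says the annualized figure would "exceed 28 percent."

```python
# Converting a quarterly GDP decline into the annualized rate that BEA reports.
quarterly_decline = -0.07                      # "more than 7 percent" per the CBO
annualized = (1 + quarterly_decline) ** 4 - 1  # compound over four quarters
print(f"Annualized rate at exactly -7%: {annualized:.1%}")        # about -25%

# A somewhat larger quarterly decline is enough to push past -28%:
print(f"Annualized rate at -7.9%: {(1 - 0.079) ** 4 - 1:.1%}")    # about -28%
```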

Wednesday, April 1, 2020

Urbanization: Glaeser's Presidential Address to the Eastern Economic Association

Urban areas have traditionally been engines of prosperity and social mobility. But the technologies driving changes in urban structure, and the ways in which government responds to those changes, have evolved over time. Edward L. Glaeser prepared the Presidential Address for the Eastern Economic Association on "Urbanization and Its Discontents" (Eastern Economic Journal, April 2020, 46:191–21). It's also available as NBER Working Paper # 26839. (Both links require a subscription, which most academic libraries will have.)

Glaeser offers a brief reminder of past urban patterns:
Urban fortunes are shaped by technological change. During some periods, technological shifts are largely centripetal, meaning that they pull people toward cities. During other eras, technological trends are centrifugal, meaning that they push people away from dense urban cores.
The nineteenth century was predominantly a centripetal century, marked by a series of innovations, including steam engines, streetcars and skyscrapers, that abetted urban growth. The first 60 years of the twentieth century was largely a centrifugal era, largely because technological change reduced the tyranny of distance. Cheaper shipping costs, from highways, cheaper railroads and containerization, allowed far-flung people to participate more fully in the global economy (Glaeser and Kolhase 2004). Radio and television enabled the rural population to enjoy previously [urban] entertainment.
These lower costs reduced the need to locate production near the urban ports and railroads that once anchored all of America’s cities. The mass-produced automobile enabled low density mobility and the rise of car-oriented suburbs (Baum-Snow 2007). These centrifugal technologies first slowed the rise of American cities and then enabled a mass exodus from urban America. The air conditioner made America’s warmer places far more appealing than they had been before World War II, and a move to sun accompanied the move to sprawl. Urban social problems, especially weak schools and crime, were exacerbated by suburbanization and then further encouraged the move to the suburbs and to lower density Sunbelt cities.
However, the last four decades or so have seen a resurgence of many urban areas, based in large part on a rise in the economic importance of proximity. Glaeser explains:
The industrial jobs that had once been the backbone of urban economies did not return. Instead, human capital-intensive business services became the new export industries for urban areas. Financial services expanded enormously in urban America from 1980 to 2007. At its height in 2007, finance and insurance generated over forty percent of the total payroll on the island of Manhattan. The urban edge in transferring knowledge is particularly valuable in finance, because a bit of extra information can make millions for a trader in minutes.
Face-to-face contact is often part of the delivery mechanism for urban services. Clients like to meet their accountants, bankers, lawyers and management consultants in person. Face-to-face contact is even more imperative for barbers and manicurists. Urban interactions enable young workers to become more skilled. ... 
Why didn’t improvements in electronic communication make face-to-face contact obsolete? While e-mail is possible almost everywhere, face-to-face interactions generate a richer information flow that includes body language, intonation and facial expression. As the world became more complex, the value of intense communication also increased. Physical immersion in an informationally intense environment, such as a trading floor or an academic seminar, generates a rush of information that is hard to duplicate online. Moreover, dense environments facilitate random personal interactions that can create serendipitous flows of knowledge and collaborative creativity. The knowledge-intensive nature of the urban resurgence helps to explain why educated cities have done much better than uneducated cities. ...
While highly educated workers moved into professional and business services, successful cities also generated employment for less skilled workers in other parts of the service economy. Many workers switched from manufacturing to wholesale and retail trade during the 1990s. Hospitality and food services also expanded dramatically after 1980. Employment in these service industries depends on the demand generated by the success of more export-oriented services, like finance. In areas that lack viable export industries, the dominant sector is typically healthcare and social assistance, where demand is maintained by Federal transfers.
Cities also came back as places of consumption as well as places of production (Glaeser et al. 2001), which partially reflects the rise in returns to skill. As Americans became better educated and as educated people came to earn more, they spent more on higher-end urban pleasures, such as fine dining, art galleries and expensive retail. Young people increasingly lived in cities, even as they worked in suburbs. Prices rose dramatically in urban cores and remained flat in the suburbs.
This description also helps to explain what is being lost in the pandemic-induced recession. Yes, some workers can do many basic parts of their jobs from home, and students can do some work with online courses. But the "richer information flow" of "face-to-face interactions" in both production and consumption is being lost for a time, and while the network of such interactions can certainly be rebuilt, it doesn't flip on and off like a light switch.

The resurgence of many cities has also brought with it a new group of problems, as Glaeser details.

One change is that cities do not seem to be functioning as ladders of opportunity. It may be that the extent of social and ethnic segregation--in terms of who you have significant interactions with on a typical day--can be higher in urban areas. Schools in urban core areas have often not recovered from their declines back in the 1970s. Glaeser writes: "It is a great paradox that cities appear to be forges of human capital for adults, but places where children seem to learn less productive knowledge."

Cities are also places of growing income inequality. Skilled workers in a large city or a downtown typically earn more than workers of similar skill outside those locations, but unskilled workers often do not have a similar pay boost from working in a city or downtown area.

Part of the issue may be government regulation of lower-skilled entrepreneurs. As Glaeser trenchantly notes:
Somewhat oddly, much of America appears to regulate low human capital entrepreneurship much more tightly than it regulates high human capital entrepreneurs. When Mark Zuckerberg started Facebook in his Harvard College dormitory, he faced few regulatory hurdles. If he had been trying to start a bodega that sold milk products three miles away, he would have needed more than ten permits. One question is whether the inequality that persists in America’s system is exacerbated by the legal and regulatory system.
And of course, the extremely high cost of housing in a number of economically strong urban areas makes it very hard for the middle-class, let alone those with lower skill levels, to pay the rent. Glaeser says:
For much of the post-war period, many urbanites could find housing that cost substantially less than construction costs even in successful cities (Glaeser and Gyourko 2005a). Housing depreciates, like cars and clothing, and so poorer urbanites could find older apartments in less fashionable neighborhoods that cost less. Filtering models predict that neighborhoods go through transitions, and that the rich would live in newer, nicer areas while the poor occupy older, more dilapidated areas. The rich vacate areas as they depreciate and then move to a new area that had been built with higher-quality housing. This model appears to have broken down after 1970, probably because of regulation and increased neighborhood opposition to redevelopment.
There is a persistent theme in American culture of moving to the big city, finding a low-level job, and working your way up. But US cities have become places where it's more costly to move in because the rent looks unthinkably high, and harder to find that low-skilled job, and then harder to move up unless you develop a high skill level. Add traffic congestion, and concerns about poor schools and crime, and moving to the big city doesn't look so attractive.

Glaeser argues that today's urban problems often reflect poor performance by local governments, which, when it comes to housing markets, labor issues, schools, and other areas, have in recent decades often focused on blocking change or supporting insider groups. He writes:
Why has urban success been accompanied by so much discontent? The most natural explanation is that the success of private enterprise in cities has not been accompanied by sufficient development of public capacity. The public sector has often focused on limiting urban change, rather than working to improve the urban experience. In many cases, this focus reflects the political priorities of empowered insiders. ... There are many good things about citizen empowerment, but the most empowered citizens tend to be longer-term residents with more resources. Those citizens do not internalize the interests of people who live elsewhere and would want to come to the city. Consequently, their political actions are more likely to exclude than to embrace.

Monday, March 30, 2020

Aging, the Demographic Transition, and the Necessary Adjustments

David E. Bloom has been thinking a lot about aging. Last fall he edited Live Long and Prosper? The Economics of Ageing Populations, a free ebook with 20 short essays summarizing a range of research on the topic (October 2019, VoxEU.org, registration required). Then Bloom contributed the lead article, "Population 2020: Demographics can be a potent driver of the pace and process of economic development," in the most recent issue of Finance & Development (March 2020, pp. 4-9). At the moment, of course, a primary concern is that older people may be more vulnerable to the spread of COVID-19. But more broadly, a shift in the distribution of ages across society will have broad consequences for social institutions and government policies.

Here's a figure from Andrew Scott, in his F&D essay "The Long, Good Life," which gives a sense of the shift. The horizontal axis of the figure shows the expansion of population over time. The vertical axis shows the shift in aging. Thus, the shaded area for 1950 is narrower (fewer people) and more pointed near the top (fewer older people). The time periods that follow get wider (more people) and also develop "shoulders," representing a population where more people stay older for longer.



Bloom explains the underlying patterns, including the "demographic transition" and the graying population, in his F&D essay:

In many developing economies, population growth has been associated with a phenomenon known as the “demographic transition”—the movement from high to low death rates followed by a corresponding movement in birth rates.
For most of human history, the average person lived about 30 years. But between 1950 and 2020, life expectancy increased from 46 to 73 years, and it is projected to increase by another four years by 2050. Moreover, by 2050, life expectancy is projected to exceed 80 years in at least 91 countries and territories that will then be home to 39 percent of the world's population. ... Cross-country convergence in life expectancy continues to be strong. For example, the life expectancy gap between Africa and North America was 32 years in 1950 and 24 years in 2000; it is 16 years today. ...
In the 1950s and 1960s, the average woman had roughly five children over the course of her childbearing years. Today, the average woman has somewhat fewer than 2.5 children. This presumably reflects the growing cost of child-rearing (including opportunity cost, as reflected mainly in women’s wages), increased access to effective contraception, and perhaps also growing income insecurity. ... Between 1970 and 2020, the fertility rate declined in every country in the world. ... 
If the population’s age structure is sufficiently weighted toward those in prime childbearing years, even a fertility rate of 2.1 can translate into positive population growth in the short and medium term, because low fertility per woman is more than offset by the number of women having children. This feature of population dynamics is known as population momentum and helps explain (along with migration) why the populations of 69 countries and territories are currently growing even though their fertility rates are below 2.1.
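The "population momentum" point in that last paragraph is easy to see with a toy cohort projection. The sketch below is mine, not Bloom's: the age bands, the starting numbers, and the simplification of ignoring mortality and migration are all illustrative assumptions, chosen only to show that a population tilted toward childbearing ages ends up larger even at replacement-level fertility.

```python
# A stylized sketch of "population momentum": even at replacement fertility
# (one daughter per woman, ignoring mortality), a population whose age
# structure is weighted toward childbearing ages ends up larger than it
# started. All numbers are illustrative assumptions, not figures from
# Bloom's article; the coarse 25-year age bands make the path lumpy.

daughters_per_woman = 1.0    # replacement level

# Female population (millions) in 25-year bands: [0-24, 25-49, 50-74],
# deliberately weighted toward the childbearing 25-49 group.
cohorts = [10.0, 14.0, 6.0]
print(f"start: total = {sum(cohorts):.0f}")

for step in range(1, 5):                         # four 25-year steps
    births = daughters_per_woman * cohorts[1]
    cohorts = [births, cohorts[0], cohorts[1]]   # age everyone one band
    print(f"after {25 * step} years: total = {sum(cohorts):.0f}")
# The total rises from 30 into the mid-to-high 30s and stays there: the
# population keeps growing for a while even though each woman, on average,
# just replaces herself.
```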

The result of this demographic transition is a population with a rising number and share of elderly. Bloom writes:
Population aging is the dominant demographic trend of the twenty-first century—a reflection of increasing longevity, declining fertility, and the progression of large cohorts to older ages. Never before have such large numbers of people reached ages 65+ (the conventional old-age threshold). We expect to add 1 billion older individuals in the next three to four decades, atop the more than 700 million older people we have today. Among the older population, the group aged 85+ is growing especially fast and is projected to surpass half a billion in the next 80 years. This trend is significant because the needs and capacities of the 85+ crowd tend to differ significantly from those of 65-to-84-year-olds.
Although every country in the world will experience population aging, differences in the progression of this phenomenon will be considerable. Japan is currently the world leader, with 28 percent of its population 65 and over, triple the world average. By 2050, 29 countries and territories will have larger elder shares than Japan has today. In fact, the Republic of Korea’s elder share will eventually overtake Japan’s, reaching the historically unprecedented level of 38.1 percent. Japan’s median age (48.4) is also currently the highest of any country and more than twice that of Africa (19.7). But by 2050, Korea (median age 56.5 in 2050) is also expected to overtake Japan on that metric (54.7).
Three decades ago, the world was populated by more than three times as many adolescents and young adults (15- to 24-year-olds) as older people. Three decades from now, those age groups will be roughly on par.
I won't try here to summarize all the discussions in the F&D and the ebook. Instead, I'll just list the tables of contents below. But here are a few thoughts: 
  • Consider your mental picture of an extended family gathering. Maybe it's a holiday with grandparents, parents, and children. Maybe it's a family reunion with a larger group of aunts, uncles, and cousins. Now in thinking about that family reunion, think about it being much more common to include five generations: that is, from great-great-grandparents down to children. In addition, think about there being fewer people in each generation, as a result of fewer children. The "family tree" is going to look taller and skinnier. 
  • As we shift from (in Bloom's calculation) a world where there are three times as many young people as elderly to a world where those populations are equal, public institutions will also shift to meet the needs of the elderly. The design of parks, libraries, public transit, city streets, shopping malls, and much more will evolve to reflect more emphasis on the needs and desires of the elderly. On the other side, schools and education will be a shrinking part of what government does. 
  • Caring for the elderly, who need a range of support from an occasional in-home visit to living in a full-care institution, is going to be a growth industry, needing both additional workers and technological innovations. This will be especially true as the number of extremely elderly people rises--often defined now as those over 85, but in the future perhaps defined as those over 100. In addition, the elderly will have had fewer children, and thus are less likely to have access to within-family support. 
  • It will be important for the workplace to shift in ways that provide jobs with the flexibility and interest to appeal to at least the "young elderly," who might otherwise just choose to retire completely.  
  • Government spending on programs to support the elderly, including pensions and health care, is going to rise dramatically in size.
  • All over the world, including the US, it's time to start gradually pushing back the age of retirement at which people become eligible for pension plans. Exactly how that is done, and what kind of flexibility is available for retiring earlier or later, is open for discussion. But an expectation that retirement ages will in general be later is a fundamental step in making government-provided social security or pensions sustainable in the long run.
  • Older people tend to save less, as they draw down their retirement accounts, but also to look for less risky and volatile investments (more bonds, less stock market).
  • Keeping older people healthy and functioning later in life will be an urgent need, both for the people themselves and also to reduce the need for outside support. 
Following Bloom's lead-off essay, other essays on this topic in the March 2020 F&D include: 

In the e-book, the Table of Contents is:

1) "The what, so what, and now what of population ageing," by David E. Bloom

Part I: Implications of Population Ageing: The 'So What'

2) "Who will care for all the old people?" by Finn Kydland and Nick Pretnar
3) "Employment and the health burden on informal caregivers of the elderly," by Jan M. Bauer and Alfonso Sousa-Poza
4) "Ageing in global perspective," by Laurence J. Kotlikoff
5) "What do older workers want?" by Nicole Maestas and Michael Jetsupphasuk
6) "The flip side of "live long and prosper": Noncommunicable diseases in the OECD and their macroeconomic impact," by David E. Bloom, Simiao Chen, Michael Kuhn and Klaus Prettner
7) "Macroeconomic effects of ageing and healthcare policy in the United States," by Juan Carlos Conesa, Timothy J. Kehoe, Vegard M. Nygaard and Gajendran Raveendranathan
8) "Global demographic changes and international capital flows," by Weifeng Liu and Warwick J. McKibbin
9) "Ageing into risk aversion? Implications of population ageing for the willingness to take risks," by Margaret A. McConnell and Uwe Sunde
10) "Life cycle origins of pre-retirement financial status: Insights from birth cohort data," by
Mark McGovern
11) "A longevity dividend versus an ageing society," by Andrew Scott

Part II: Solutions and Policies: The 'Now What'

12) "Understanding 'value for money' in healthy ageing," by Karen Eggleston
13) "Healthy population ageing depends on investment in early childhood learning and development," by Elizabeth Geelhoed, Phoebe George, Kim Clark and Kenneth Strahan
14) "Financing health services for the Indian elderly: Aayushman Bharat and beyond," by Ajay Mahal and Sanjay K. Mohanty
15) "Cutting Medicare beneficiaries in on savings from managed healthcare in Medicare," by Thomas G. McGuire
16) "Macroeconomics and policies in ageing societies,"by Andrew Mason, Sang-Hyop Lee, Ronald Lee and Gretchen Donehower
17) "Population ageing and tax system efficiency," by John Laitner and Dan Silverman
18) "Means-tested public pensions: Designs and impact for an ageing demographic," by George Kudrna and John Piggott
19) "Pension reform in Europe," by Axel Börsch-Supan
20) "Happiness at old ages: How to promote health and reduce the societal costs of ageing," by Maddalena Ferranna


Saturday, March 28, 2020

An Economist's First Tryst with Benefit-Cost Analysis

Célestin Monga has had an eminent career as a research economist at the World Bank, as Managing Director of the UN Industrial Development Organization, as Chief Economist and vice-president at the African Development Bank, and now as a Senior Economic Adviser at the World Bank. Here, he tells of that intimate special moment in the life of any economist--that first encounter with benefit-cost analysis. 

(I'm quoting here from Monga's "Comment" (pp. 77-94) in response to an essay by Amartya Sen from The State of Economics, The State of the World, published in 2019 by MIT Press.)  
I still remember vividly the strange mix of excitement and bewilderment that overwhelmed me in my high school years when our professor of accounting taught us the fundamentals of benefit-cost analysis. I immediately went to my dormitory and spent most of the evening trying to apply this powerful technique, not to assess whether the advantages of a hypothetical investment project were likely to outweigh its drawbacks, but to evaluate my own life prospects. Benefit-cost analysis seemed like a rigorous and revealing tool to examine whether my minuscule and uncertain existence was a "profitable" venture, or at least a worthwhile escapade that deserved to be continued. Of course, the few friends to whom I confided this found it a ludicrous idea. ... They were right: ... But so what? I kept running the numbers. ... 
I also had to decide how to imagine and estimate the prospective benefits and costs of my entire life to come. Using my own personal value scale, I calculated the costs as the amount of compensation required to exactly offset negative consequences of being alive for 50 or so years of life expectancy ahead. The compensation was the monetary amount required that would leave me just as well off as before engaging in this exercise. Benefits were measured by my willingness to stay alive and enjoy all the things and emotions that I could reasonably expect for the decades ahead. Knowing that, in the end, life always results in death, typically following either an abrupt and tragic event like a car or airplane crash, or a long and painful illness, I could not find many benefits whose present and expected value could match and compensate for the pains and disappointments of the costs. The results of my benefit-cost analysis were not very promising: Taking into consideration all current and expected streams of good and bad news, life did not appear to be a "profitable" investment. 
Shocked by the outcomes, I quickly did some sensitivity analysis to check the robustness of my findings: No matter what discount rates I chose, the calculations still yielded disappointing numbers to the question of whether life was a worthwhile venture. This was all the more puzzling because I actually loved many aspects of my life. Not knowing what to do with the analyses, I concluded one should either doubt the validity of certain measurement instruments or our ability to use them "objectively," or radically give more weight to whatever we define as "positive" outcomes for our actions or inactions, or accept the very probable hypothesis that happiness may be an illusion but those who choose to live should learn to ignore its downsides. I could only forget the outcomes of my own study by learning to radically change whatever assumptions I used in carrying it out. "Life is impossible without the ability to forget," philosopher Emil Cioran once said. But some memories are just too long-lasting to ever be erased. 
Monga's reminiscence serves as a reminder of teenage feelings about the world. It also illustrates that although benefit-cost analysis has a useful place in comparing certain limited sets of choices, the method does not contain solutions to the mysteries of life. However, if you are a young person who finds yourself tempted to carry out a benefit-cost analysis of your own life, you may wish to consider seriously a career as an economist. 
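For anyone curious about the mechanics Monga describes, here is a toy version of the exercise: a net-present-value calculation over a 50-year horizon, rerun at several discount rates as a crude sensitivity check. Every number in it is made up for illustration; the only point is that when the assumed costs exceed the assumed benefits year after year, no choice of discount rate rescues the verdict--which is more or less what Monga reports finding.

```python
# A toy version of the exercise Monga describes: the net present value of
# entirely made-up annual "benefits" and "costs" over a 50-year horizon,
# re-run at several discount rates as a crude sensitivity check.
def npv(flows, rate):
    """Present value of a stream of end-of-year cash flows."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows, start=1))

horizon = 50
benefits = [100] * horizon      # assumed constant annual benefit
costs = [110] * horizon         # assumed constant annual cost

for rate in (0.02, 0.05, 0.10):
    value = npv(benefits, rate) - npv(costs, rate)
    print(f"discount rate {rate:.0%}: NPV = {value:,.1f}")
# When the assumed costs exceed the assumed benefits in every year, the NPV
# is negative at any positive discount rate -- no sensitivity analysis over
# the discount rate can flip the verdict, which is roughly what Monga found.
```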

Friday, March 27, 2020

Value of a Statistical Life: Where Does It Come From?

One of the (many) questions that causes economists to pull their hair out takes the general form: "How can you economists even possibly try to weigh economic costs against the value of a life saved?" Even worse, the question is often delivered in a triumphalist tone of a deeper moral truth being unveiled.

But in the real world, people and governments actually weigh economic costs against the value of a life saved all the time. Certain jobs that pose a greater risk to life and limb also tend to pay more than jobs for similarly qualified workers without such risks. Those who take such jobs, or don't take them, are in part placing an economic value on a greater risk of losing their life. Many government regulations, from setting speed limits on the roads to health and safety standards for food, could be tightened in a way that would save more lives but impose greater costs, or loosened in a way that would save fewer lives but impose lesser costs. Deciding where to set such regulations will necessarily involve a decision about how much it's worth paying to reduce the risk of someone losing their life.

Thus, the relevant question is not "how" to put a monetary value on life or "why would anyone ever want" to put a monetary value on life. The discussion starts from the fact that people and governments are already putting a monetary value on life, albeit often implicitly, by the actual real-world decisions they make.  When economists say that the "value of a statistical life" is about $10 million, they are not just pulling a number out of the air. Instead, they are only pointing out the monetary values that people are already using.

Thomas J. Kniesner and W. Kip Viscusi offer a readable overview of the evidence behind such decisions in "The Value of a Statistical Life," which was published in June 2019 in the Oxford Research Encyclopedia, Economics and Finance. (If for some reason you don't have access, a version of the paper is available on SSRN.)

As Kniesner and Viscusi point out, evidence about the economic value that people place on a higher or lower risk of losing their life can come from several sources: "revealed preference" studies that look at choices people make about jobs or products with different risks, or "stated preference" studies that involve survey data. To understand the intuition here, it's important to recognize that these studies are not asking a question like: "How much money would we need to pay you before we kill you?" The "value of a statistical life" is about changes in risk. They write:
Suppose further that ... the typical worker in the labor market of interest, say manufacturing, needs to be paid $1,000 more per year to accept a job where there is one more death per 10,000 workers. This means that a group of 10,000 workers would collect $10,000,000 more as a group if one more member of their group were to be killed in the next year. Note that workers do not know who will be fatally injured but rather that there will be an additional (statistical) death among them. Economists call the $10,000,000 of additional wage payments by employers the value of a statistical life. It is also the amount that the same group of workers would be willing to pay via wage reductions to have safer jobs where one fewer of their group would be fatally injured or ill. In that sense the VSL measures the willingness of workers to implicitly pay for safer workplaces and can be used to calculate the benefits of life-saving projects by private sector managers and government policymakers.
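The arithmetic in that passage is worth making explicit. The sketch below just restates the Kniesner-Viscusi example in code: the $1,000 wage premium and the 1-in-10,000 risk are their numbers; the division is all there is to it.

```python
# The arithmetic behind the VSL definition in the passage above. The $1,000
# wage premium and the 1-in-10,000 annual risk are Kniesner and Viscusi's
# illustrative numbers; the calculation is just the implied trade-off.
wage_premium = 1_000        # extra pay per worker per year for the riskier job
extra_risk = 1 / 10_000     # one additional expected death per 10,000 workers

vsl = wage_premium / extra_risk
print(f"Value of a statistical life: ${vsl:,.0f}")      # $10,000,000

# Equivalently: 10,000 workers collectively receive $10 million in extra
# wages in exchange for one additional expected fatality among them.
```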
Studies of specific jobs that compare risks of death and pay will come up with a range of numbers; after all, jobs differ in many ways other than just their mortality risk. Thus, in a 2018 study, Viscusi looked at 1,025 estimates of the value of a statistical life drawn from 68 publications. He looked both at the total group, and then also at a "best-set" subgroup of the estimates that used what he viewed as more reliable methods. He found: "The all-set mean VSL is $12.0 million and the best-set sample mean is $12.2 million, where all estimates are in $2015. The median values are somewhat lower—$9.7 million for the all-set sample and $10.1 million for the best-set sample."

Of course, not everyone will put the same value on reducing mortality risk, and those of different ages and income levels, for example, will prefer different values. But for evaluating a broad government regulation that affects a broad cross-section of the population, using an overall number makes sense.

Another branch of the literature looks at purchases of certain goods or services. For example, how much is the price of a house affected by being in a high-crime area or near a large source of air pollution? How does the price that people pay for bike helmets or smoke detectors compare to the reduction in risk from such purchases? Again, different studies have a range of answers: again, an estimate of $10 million as the value of a statistical life seems plausible.

Other studies have taken an approach that uses detailed scenario-setting surveys. For example, the questionnaire may lay out a starting scenario, which includes the health risk expressed in various ways, like the chance of living to 100 years of age or the annual risk of being killed in the next year by cancer or in a car accident. Then the follow-up question offers other scenarios, with a range of costs expressed in terms like expected changes in prices or taxes paid, and different health risks. Naturally, the construction and interpretation of such surveys can be controversial, and sometimes the answers seem crazy-high or crazy-low. But an OECD study a few years ago suggested, based on an overview of these studies, that using $3.6 million as the value of a statistical life was plausible.

When it comes to public policy, Kniesner and Viscusi note: "Most U.S. government agencies have now adopted VSL estimates in a similar range consistent with the economics literature." They point out that the U.S. Department of Transportation (2016) uses $9.4 million as the value of a statistical life, compared with $9.7 million for the Environmental Protection Agency and $9.6 million for the U.S. Department of Health and Human Services.

It's easy enough to come up with questions about the value of a statistical life. But again, it is simply a fact that people and governments make decisions all the time about weighing health and safety against costs. Blaming the economists for doing the calculations to figure out what values are actually being placed on a statistical life is like blaming the bathroom scale, or perhaps the laws of gravity, when it tells you that you could stand to lose a few pounds.

In the midst of the coronavirus pandemic, an obvious question is what a value of $10 million for the value of a statistical life means about the ongoing strategy of causing a recession for the sake of protecting public health. The multiplication is straightforward. Imagine that the steps being taken to contain the virus save 500,000 US lives. With those lives valued at $10 million each, a social cost of up to $5 trillion in lost output would be justified. For comparison, US GDP is about $21 trillion. If steps taken to contain the virus save 50,000 lives, then a social cost of up to $500 billion in lost output would be justified. This calculation is so quick-and-dirty, and leaves out so much, that I hesitate even to include it here. It does suggest to me that in these benefit-cost terms, it's plausibly worth a recession to contain the virus, even a deep-but-short recession. It also suggests that, looking at how health risks have been valued by actual people and governments in the past, a long-term recession or depression would not be a price worth paying to contain the virus.
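Here is the same quick-and-dirty multiplication spelled out, with the caveats above still fully in force: the lives-saved figures are hypothetical scenarios, not estimates, and the $10 million VSL and $21 trillion GDP figures are the round numbers used in the paragraph.

```python
# The quick-and-dirty calculation from the paragraph above, made explicit.
# The lives-saved figures are hypothetical scenarios, not estimates; the
# VSL and GDP figures are the round numbers used in the text.
vsl = 10e6                  # $10 million per statistical life
us_gdp = 21e12              # roughly $21 trillion

for lives_saved in (500_000, 50_000):
    justified_cost = lives_saved * vsl
    print(f"{lives_saved:>7,} lives saved -> up to ${justified_cost / 1e12:.1f} trillion "
          f"in lost output justified ({justified_cost / us_gdp:.0%} of GDP)")
```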

For some previous posts and articles on the value of a statistical life, and its cousin the "quality-adjusted life-year," see:

Wednesday, March 25, 2020

Does the US Tax Code Favor Automation Over Jobs?

Imagine a company that is considering two possible ways to improve efficiency and productivity. One is to pay for many of its employees to go through a training program to learn new sets of useful skills. The other is to pay for new equipment that will replace many of the employees. Daron Acemoglu, Andrea Manera, and Pascual Restrepo argue that the US tax code tends to favor the second option. The technical version of their argument, "Does the U.S. Tax Code Favor Automation?", is published in the most recent Brookings Papers on Economic Activity (Spring 2020; a short readable overview of the paper is also available at the link). They write (citations and footnotes omitted):
The most common perspective among economists is that even if automation is contributing to declining labor share and stagnant wages, the adoption of these new technologies is likely to be beneficial, and any adverse consequences thereof should be dealt with appropriate redistributive policies (and education and training investments). But could it be that the extent of automation is excessive, meaning that US businesses are adopting automation technologies beyond the socially optimal level? If this were the case, the policy responses to these major labor market trends would need to be rethought.
There are several reasons why the level of automation may be excessive. Perhaps most saliently, the US tax system is known to tax capital lightly and provide various subsidies to the use of capital in businesses. In this paper, we systematically document the asymmetric taxation of capital and labor in the US economy: in the US tax system, labor is much more heavily taxed than capital. ...
Mapping the complex range of taxes in the US to effective capital and labor taxes is not trivial. Nevertheless, under plausible scenarios (for example, depending on how much of healthcare and pension expenditures are valued by workers and the effects of means-tested benefits), we find that labor taxes in the US are in the range of 25.5-33.5%. Effective capital taxes on software and equipment, on the other hand, are much lower, about 10% in the 2010s and even lower, about 5%, after the 2017 tax reforms. We also show that effective taxes on software and equipment have experienced a sizable decline from a peak value of 20% in the year 2000. A major reason explaining this trend in capital taxation is the increased generosity [of] depreciation allowances ...
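To get a feel for the asymmetry, here is an illustrative comparison of my own (not the authors' model): take a worker and a piece of equipment with identical pre-tax annual costs, apply the effective tax rates quoted above as a stylized markup, and compare what the firm actually pays. The pre-tax costs and the treatment of the tax as a simple markup are assumptions for illustration only.

```python
# An illustrative comparison (mine, not the authors' model) of how the tax
# wedges quoted above change the relative cost of labor versus equipment.
# Pre-tax costs are made-up and identical; the tax is applied as a stylized
# markup on the factor's cost to the firm.
pretax_labor_cost = 100.0        # assumed annual cost of a worker, pre-tax
pretax_capital_cost = 100.0      # assumed annual user cost of equipment, pre-tax

labor_tax = 0.30                 # within the 25.5-33.5% range cited
capital_tax_2010s = 0.10
capital_tax_post_2017 = 0.05

def gross_cost(pretax, tax_rate):
    """Cost to the firm inclusive of the stylized tax wedge."""
    return pretax * (1 + tax_rate)

print("labor:                ", gross_cost(pretax_labor_cost, labor_tax))
print("equipment (2010s):    ", gross_cost(pretax_capital_cost, capital_tax_2010s))
print("equipment (post-2017):", gross_cost(pretax_capital_cost, capital_tax_post_2017))
# With identical pre-tax costs, the worker ends up about 18% more expensive
# than the machine at 2010s rates and about 24% more expensive after 2017 --
# the asymmetry the authors argue can push automation past the optimal level.
```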
I should emphasize that this paper is part of an ongoing research effort by these authors to think about interactions between automation and jobs. I have blogged about a previous entry in this line of research in "Is Something Different This Time About the Effect of Technology on the Labor Market?" (May 6, 2019). I discussed there a paper by Daron Acemoglu and Pascual Restrepo titled "Automation and New Tasks: How Technology Displaces and Reinstates Labor."

In that paper, they suggest a framework in which automation can have three possible effects on the tasks that are involved in doing a job: a displacement effect, when automation replaces a task previously done by a worker; a productivity effect, in which the higher productivity from automation taking over certain tasks leads to more buying power in the economy, creating jobs in other sectors; and a reinstatement effect, when new technology reshuffles the production process in a way that leads to new tasks that will be done by labor. In this model, the effect of automation on labor is not predestined to be good, bad, or neutral. It depends on how these three factors interact.

In that context, the authors of the current paper suggest the theoretical possibility of an "automation tax," defined as "a higher tax on the use of capital in tasks where labor has a comparative advantage." They would combine this with a lower tax on other forms of capital, as well as on labor. In my own words, they are proposing that the tax code encourage the kind of automation that complements what workers do in a way that leads to sharp increases in productivity and output, but that the tax code not encourage the kind of automation that mostly just replaces workers with a real but only modest cost savings for the employer.

Of course, it's reasonable to note that a theoretical economic model can just create variables for these two kinds of automation, while a real world policy might face some difficult challenges in distinguishing between them. Still, the authors are trying to break out of a binary choice where automation is viewed as always good or always bad, and automation is instead being viewed as a range of choices that include automation that is more likely to be job-destroying or more likely to be job-creating. It feels to me like a potential distinction worth investigating.