Thursday, October 31, 2019

Interview with Maureen Cropper: Environmental Economics

Catherine L. Kling and Fran Sussman have "A Conversation with Maureen Cropper" in the Annual Review of Resource Economics (October 2019, 11, pp. 1-18). As they write in the introduction: "Maureen has made important contributions to several areas of environmental economics, including nonmarket valuation and the evaluation of environmental programs. She has also conducted pioneering studies on household transportation use and associated externalities." There is also a short (roughly a dozen paragraphs) overview of some of Cropper's best-known work.

I had not known that Cropper identified as a monetary economist when she was headed for graduate school. Here is her description of her early path to environmental economics:
My first formal introduction to economics was in college. I entered Bryn Mawr College in 1966. I had great professors at Bryn Mawr: Philip W. Bell, Morton Baratz, and Richard DuBoff. I learned microeconomics by reading James Meade's A Geometry of International Trade—that's how we were taught microeconomics by Philip Bell. It was really a very good grounding in economics. I got married as I graduated from college to Stephen Cropper (hence my last name), and I went to Cornell University because Stephen was admitted to the Cornell Law School. I was admitted to the Department of Economics at Cornell.
Frankly, my interests at the time were really in monetary economics, so I took several courses at the Cornell Business School, including courses in portfolio theory. My dissertation was on bank portfolio selection with stochastic deposit flows. My dissertation advisor was S.C. Tsiang. Henry Wan and T.C. Liu were also on my committee. Henry was a fantastic mentor and advisor. I would write a chapter of my dissertation and put it in his mailbox; the next day he would have it covered with comments. He was just an amazing advisor and very, very engaged. At this time, I was not doing anything in environmental economics. In fact, my first job offer was from the NYU Business School.
The reason I went into environmental economics is that I met Russ Porter in graduate school. Russ later became the father of my children. We decided that we would go on the job market together and looked for a place that would hire two economists. We wound up at the University of California, Riverside, which at the time was the birthplace of the Journal of Environmental Economics and Management (JEEM). I was on the job market in 1973, just when this journal was launched. Ralph d'Arge was the chair of the department then. Tom Crocker also taught there, and Bill Schulze and Jim Wilen were students in the department.
It was going to UC Riverside that really caused me to switch fields and go into environmental economics. It was a very important decision, although I must say it was made partly for personal reasons. It's had a huge impact on my life.
In the interview and overview, it quickly becomes apparent that Cropper has worked on an unsummarizably wide array of topics. Examples include stated preference studies to estimate the value of a statistical life, which became the basis for estimates used by the OECD and in Canada. Another study became the basis for EPA estimates of the value of avoiding a case of chronic bronchitis through air pollution regulations. Cropper worked on whether or not to ban certain pesticides under the Federal Insecticide, Fungicide, and Rodenticide Act, what methods to use in cleaning up Superfund sites, and whether to ban certain uses of asbestos under the Toxic Substances Control Act (TSCA). She worked on how to use trading allowances to reduce sulfur dioxide (SO2) emissions under the Acid Rain Program.

Cropper worked on many issues involving air pollution in India. She worked on models of household location choices in Baltimore and in Mumbai. The studies in Mumbai became the basis for looking at other policies: slum relocation or converting buses to compressed natural gas. She has estimated how the shapes of cities affect demand for travel, and studied cross-country evidence on the relationship between economic growth and traffic fatalities. The interview touches on these topics and more. Here's a description of one such study from Cropper:
When I first got to the World Bank I realized that, in India, there hadn't been any state-of-the-art studies on the impact of air pollution on mortality. This was around 1995, the time when important cohort studies by Arden Pope and Douglas Dockery were coming out in the United States.
There is also literature looking at the impact of acute exposures to air pollution—daily time-series studies of the impact of air pollution on mortality. With the support of the Bank, I was able to get information in Delhi from air quality monitors—four years of daily data, although monitoring was not done every day. I was also able to obtain data on deaths by cause and age. I worked with Nathalie Simon and Anna Alberini to carry out a daily time-series study of the impact of air pollution on mortality. ...
We had a hard time getting the study published in an epidemiological journal because economists write up their results differently than epidemiologists. But we did document significant effects of particulate matter on mortality. And, it was important to do something early on and convince people in India that this sort of work could be done. (There have been many subsequent studies.) It is also interesting that the results we obtained in Delhi were similar to results obtained in other time-series studies in the United States.
When those at the EPA or the World Bank or the National Academy of Sciences were setting up an advisory committee or a consensus panel to produce a report or an evaluation, Cropper's name was perpetually on the short list. Her memory of one such experience gives a sense of why she has been in such high demand:
I learned so much in my time serving on the EPA Science Advisory Board. I actually began there in the 1990s when the retrospective analysis of the Clean Air Act—the first Section 812 study—was being written. Dick Schmalensee was the head of that committee. I actually chaired the review of the first prospective 812 study of the benefits and costs of the 1990 Clean Air Act Amendments.
I also preceded you, Cathy, as the head of the Environmental Economics Advisory Committee at EPA. I learned a lot being on these EPA committees. In terms of the 812 studies, you've got a subcommittee that's dealing with the health impacts: epidemiologists and toxicologists. You have air quality modelers and people who are exposure measurement experts. And of course, you also have economists. It's a fantastic opportunity to be exposed to all parts of the analysis. If you are concerned about air pollution policy, which is what I've worked on the most, you need to get the perspective of all of these different disciplines. 
I was also interested in Cropper's comments on how, in the area of environmental economics, theoretical research has diminished and empirical work has become more prominent. My sense is that this is broadly true for many fields of economics. Cropper says:
I think quasi-experimental econometrics are one of the things that graduate students really do learn nowadays. Graduate students are also learning structural approaches. If you want to estimate the welfare impacts of corporate average fuel economy (CAFE) standards on the new car market, you've got to use a structural model. You also have students who study empirical industrial organization, bringing those techniques to bear in environmental economics. In terms of the percentage of work that is done today that is more theoretically based, my impression is that theoretical research really has declined, in terms of the number of purely theoretical papers written or even papers that are using theoretical approaches.
The emphasis on theory has also changed during the time I have been teaching. When I was teaching a graduate class a few years ago, we were talking about discounting issues and, of course, the Ramsey formula. Students had heard of the Ramsey formula, but when I asked students if they knew who Frank Ramsey was, I was surprised to find that they didn't know. The fact is, I think there has been this shift. When I teach environmental economics, the preparation of students in terms of econometric techniques is really quite impressive. I've got to say that has really been ramped up. That represents an important change in the profession ... 

Wednesday, October 30, 2019

Fall 2019 Journal of Economic Perspectives Available Online

I was hired back in 1986 to be the Managing Editor for a new academic economics journal, at the time unnamed, but which soon launched as the Journal of Economic Perspectives. The JEP is published by the American Economic Association, which back in 2011 decided--to my delight--that it would be freely available on-line, from the current issue back to the first issue. You can download it in various e-reader formats, too. Here, I'll start with the Table of Contents for the just-released Fall 2019 issue, which in the Taylor household is known as issue #130. Below that are abstracts and direct links for all of the papers. I will probably blog more specifically about some of the papers in the next week or two, as well.



_________________

Symposium on Fiftieth Anniversary of the Clean Air and Water Acts


"What Do Economists Have to Say about the Clean Air Act 50 Years after the Establishment of the Environmental Protection Agency?" by Janet Currie and Reed Walker
Air quality in the United States has improved dramatically over the past 50 years in large part due to the introduction of the Clean Air Act and the creation of the Environmental Protection Agency to enforce it. This article is a reflection on the 50-year anniversary of the formation of the Environmental Protection Agency, describing what economic research says about the ways in which the Clean Air Act has shaped our society—in terms of costs, benefits, and important distributional concerns. We conclude with a discussion of how recent changes to both policy and technology present new opportunities for researchers in this area.
Full-Text Access | Supplementary Materials


"Policy Evolution under the Clean Air Act," by Richard Schmalensee and Robert N. Stavins
The US Clean Air Act, passed in 1970 with strong bipartisan support, was the first environmental law to give the federal government a serious regulatory role, established the architecture of the US air pollution control system, and became a model for subsequent environmental laws in the United States and globally. We outline the act's key provisions, as well as the main changes Congress has made to it over time. We assess the evolution of air pollution control policy under the Clean Air Act, with particular attention to the types of policy instruments used. We provide a generic assessment of the major types of policy instruments, and we trace and assess the historical evolution of the Environmental Protection Agency's policy instrument use, with particular focus on the increased use of market-based policy instruments, beginning in the 1970s and culminating in the 1990s. Over the past 50 years, air pollution regulation has gradually become more complex, and over the past 20 years, policy debates have become increasingly partisan and polarized, to the point that it has become impossible to amend the act or pass other legislation to address the new threat of climate change.
Full-Text Access | Supplementary Materials


"US Water Pollution Regulation over the Past Half Century: Burning Waters to Crystal Springs?" by David A. Keiser and Joseph S. Shapiro
In the half century since the founding of the US Environmental Protection Agency, public and private US sources have spent nearly $5 trillion ($2017) to provide clean rivers, lakes, and drinking water (annual spending of 0.8 percent of US GDP in most years). Yet over half of rivers and substantial shares of drinking water systems violate standards, and polls for decades have listed water pollution as Americans' number one environmental concern. We assess the history, effectiveness, and efficiency of the Clean Water Act and Safe Drinking Water Act and obtain four main conclusions. First, water pollution has fallen since these laws were passed, in part due to their interventions. Second, investments made under these laws could be more cost effective. Third, most recent studies estimate benefits of cleaning up pollution in rivers and lakes that are less than the costs, though these studies may undercount several potentially important types of benefits. Analysis finds more positive net benefits of drinking water quality investments. Fourth, economic research and teaching on water pollution are relatively uncommon, as measured by samples of publications, conference presentations, and textbooks.
Full-Text Access | Supplementary Materials

Symposium on Modern Populism
"On Latin American Populism, and Its Echoes around the World," by Sebastian Edwards
In this article, I discuss the ways in which populist experiments have evolved historically. Populists are charismatic leaders who use a fiery rhetoric to pitch the interests of "the people" against those of banks, large firms, multinational companies, the International Monetary Fund, and immigrants. Populists implement redistributive policies that violate the basic laws of economics, and in particular budget constraints. Most populist experiments go through five distinct phases that span from euphoria to collapse. Historically, the vast majority of populist episodes end up badly; incomes of the poor and middle class tend to be lower than when the experiment was launched. I argue that many of the characteristics of traditional Latin American populism are present in more recent manifestations from around the globe.
Full-Text Access | Supplementary Materials


"Informational Autocrats," by Sergei Guriev and Daniel Treisman
In recent decades, dictatorships based on mass repression have largely given way to a new model based on the manipulation of information. Instead of terrorizing citizens into submission, "informational autocrats" artificially boost their popularity by convincing the public they are competent. To do so, they use propaganda and silence informed members of the elite by co-optation or censorship. Using several sources, including a newly created dataset on authoritarian control techniques, we document a range of trends in recent autocracies consistent with this new model: a decline in violence, efforts to conceal state repression, rejection of official ideologies, imitation of democracy, a perceptions gap between the masses and the elite, and the adoption by leaders of a rhetoric of performance rather than one aimed at inspiring fear.
Full-Text Access | Supplementary Materials

"The Surge of Economic Nationalism in Western Europe," by Italo Colantone and Piero Stanig
We document the surge of economic nationalist and radical-right parties in western Europe between the early 1990s and 2016. We discuss how economic shocks contribute to explaining this political shift, looking in turn at theory and evidence on the political effects of globalization, technological change, the financial and sovereign debt crises of 2008–2009 and 2011–2013, and immigration. The main message that emerges is that failures in addressing the distributional consequences of economic shocks are a key factor behind the success of nationalist and radical-right parties. We discuss how the economic explanations compete with and complement the "cultural backlash" view. We reflect on possible future political developments, which depend on the evolving intensities of economic shocks, on the strength and persistence of adjustment costs, and on changes on the supply side of politics.
Full-Text Access | Supplementary Materials


"Economic Insecurity and the Causes of Populism, Reconsidered," Yotam Margalit
Growing conventional wisdom holds that a chief driver of the populist vote is economic insecurity. I contend that this view overstates the role of economic insecurity as an explanation in several ways. First, it conflates the significance of economic insecurity in influencing the election outcome on the margin with its significance in explaining the overall populist vote. Empirical findings indicate that the share of populist support explained by economic insecurity is modest. Second, recent evidence indicates that voters' concern with immigration—a key issue for many populist parties—is only marginally shaped by its real or perceived repercussions on their economic standing. Third, economics-centric accounts of populism treat voters' cultural concerns as largely a by-product of experiencing adverse economic change. This approach underplays the reverse process, whereby disaffection from social and cultural change drives both economic discontent and support for populism.
Full-Text Access | Supplementary Materials

Articles

"What They Were Thinking Then: The Consequences for Macroeconomics during the Past 60 Years," by George A. Akerlof
This article explores the development of Keynesian macroeconomics in its early years, and especially in the Big Bang period immediately after the publication of The General Theory. In this period, as standard macroeconomics evolved into the "Keynesian-neoclassical synthesis," its promoters discarded many of the insights of The General Theory. The paradigm that was adopted had some advantages. But its simplifications have had serious consequences—including immense regulatory inertia in response to massive changes in the financial system and unnecessarily narrow application of accelerationist considerations (regarding inflation expectations).
Full-Text Access | Supplementary Materials


"The Impact of the 2018 Tariffs on Prices and Welfare," by Mary Amiti, Stephen J. Redding and David E. Weinstein
We examine conventional approaches to evaluating the economic impact of protectionist trade policies. We illustrate these conventional approaches by applying them to the tariffs introduced by the Trump administration during 2018. In the wake of this increase in trade protection, the United States experienced substantial increases in the prices of intermediates and final goods, dramatic changes to its supply-chain network, reductions in availability of imported varieties, and the complete pass-through of the tariffs into domestic prices of imported goods. Therefore, the full incidence of the tariffs has fallen on domestic consumers and importers so far, and our estimates imply a reduction in aggregate US real income of $1.4 billion per month by the end of 2018. We see similar patterns for foreign countries that have retaliated with their own tariffs against the United States, which suggests that the trade war has also reduced the real income of these other countries.
Full-Text Access | Supplementary Materials


"Retrospectives: Tragedy of the Commons after 50 Years," by Brett M. Frischmann, Alain Marciano and Giovanni Battista Ramello
Garrett Hardin's "The Tragedy of the Commons" (1968) has been incredibly influential generally and within economics, and it remains important despite some historical and conceptual flaws. Hardin focused on the stress population growth inevitably placed on environmental resources. Unconstrained consumption of a shared resource—a pasture, a highway, a server—by individuals acting in rational pursuit of their self-interest can lead to congestion and, worse, rapid depreciation, depletion, and even destruction of the resources. Our societies face similar problems, with respect to not only environmental resources but also infrastructures, knowledge, and many other shared resources. In this article, we examine how the tragedy of the commons has fared within the economics literature and its relevance for economic and public policies today. We revisit the original piece to explain Hardin's purpose and conceptual approach. We expose two conceptual mistakes he made: conflating resource with governance and conflating open access with commons. This critical discussion leads us to the work of Elinor Ostrom, the recent Nobel Prize in Economics laureate, who spent her life working on commons. Finally, we discuss a few modern examples of commons governance of shared resources.
Full-Text Access | Supplementary Materials

"Recommendations for Further Reading," by Timothy Taylor

Tuesday, October 29, 2019

The Hearing Aid Example: Why Technology Doesn't Reduce Trade

Will the new technologies of 3D printing and robotics lead to a reduction in international trade? After all, if countries can use 3D printing and robotics to make goods at home, why import from abroad?

But there is a fascinating counterexample to these fears: the case of hearing aids. Around the world, they are nearly 100% produced by 3D printing. But international trade in hearing aids is rising, not falling. Caroline Freund, Alen Mulabdic, and Michele Ruta discuss this and other examples in "Is 3D Printing a Threat to Global Trade? The Trade Effects You Didn’t Hear About" (World Bank Policy Research Working Paper 9024, September 2019). For a readable summary, the authors have written a short overview article at VoxEU as well.

The authors point to a prediction that 3D printing could eliminate as much as 40% of all world trade by 2040. But actual examples like hearing aids don't seem to be working out this way. As they note:
3D printers transformed the hearing aid industry in less than 500 days in the mid-2000s, which makes this product a unique natural experiment to assess the trade effects of this technology. ... The intuition for the results is that 3D printing led to a reduction in the cost of production. Demand rose and trade expanded. There is no evidence that 3D printing shifted production closer to consumers and displaced trade. One reason is that hearing aids are light products, which makes them relatively cheap to transport internationally—we come back to this point below. A second reason is because printing hearing aids in high volumes requires a large investment in technology and machinery and the presence of highly specialized inputs and services. The countries that were early innovators, Denmark, Switzerland and Singapore, remain the main export platforms. Some middle-income economies such as China, Mexico and Vietnam have also been able to substantially increase their market shares between 1995 and 2015. As a result, exports did not become more concentrated in the top producing countries following the introduction of 3D printing.
Data from the US market shows the effect of 3D printing of hearing aids on prices, quality--and thus expanded use (citations and footnotes omitted):
The new technology fundamentally changed the industry because it produced a better product at a lower cost. The change is visible in US import price data and hearing aid usage. The United States is the number one importer of hearing aids and has relatively accurate data on unit prices. ... [T]he unit value of hearing aids imported into the United States dropped by around 25 percent after 2007, right around when the technology was adopted. Hearing aid usage also increased dramatically. From 2001 to 2008 only about 26 percent of the population above 70 with hearing loss used hearing aids, and the share was flat over the period. From 2008 to 2013 (last year of data), the share increased to 32 percent. Despite the potential benefits from the use of hearing aids, stigma, discomfort and cost had been among the most frequent reasons for rejecting the use of hearing instruments.
What about other industries where 3D printing is important? In preliminary work looking across 35 different industries, Freund, Mulabdic, and Ruta find the same general pattern: that is, 3D printing leads to lower prices and thus benefits for consumers, including consumers in developing countries, but no particular shift in trade patterns. They write:
One example comes from dentistry, where custom products are in high demand but are being manufactured and exported by high-tech firms. Consider Renishaw, a British engineering company, that makes dental crowns and bridges from digital scans of patients’ teeth. The printers run for 8-10 hours to make custom teeth from cobalt-chrome alloy powder, which are then exported. Dentists are not installing the machines to print teeth locally, rather the parts are shipped to dental labs in Europe, where a layer of porcelain is added before the teeth are shipped to dentists. With 3D printing, the production process changed but the supply chain remains intact. In addition to teeth, the innovative technology is also being used for several other goods, from running shoes to prosthetic limbs.

Monday, October 28, 2019

Remembering the Cadillac Tax

When employers pay the health insurance premiums for their employees, these payments are exempt from income tax. If health insurance payments by employers were taxed as income, the government would collect about $200 billion in additional income taxes, and another $130 billion in payroll taxes for supporting Social Security and Medicare (according to the Analytical Perspectives volume of the US budget for 2020, Table 16-1).

The notion that the US would finance its private-sector health insurance system in this way is an historical accident going back to World War II. As Melissa Thomasson explains at the website of the Economic History Association:
During World War II, wage and price controls prevented employers from using wages to compete for scarce labor. Under the 1942 Stabilization Act, Congress limited the wage increases that could be offered by firms, but permitted the adoption of employee insurance plans. In this way, health benefit packages offered one means of securing workers. ... Perhaps the most influential aspect of government intervention that shaped the employer-based system of health insurance was the tax treatment of employer-provided contributions to employee health insurance plans. First, employers did not have to pay payroll tax on their contributions to employee health plans. Further, under certain circumstances, employees did not have to pay income tax on their employer’s contributions to their health insurance plans.

The idea that US employers will often pay for health insurance, and that this will be an important element of what most Americans mean by a "good job," is embedded in how most of us think about the US healthcare system. But it's worth being clear on its distributional effects and the economic incentives it provides. When employers provide a benefit with the value exempt from income tax, it will naturally offer greater benefit to those with high incomes, who otherwise would have paid higher income taxes. In addition, when employer-provided health insurance is tax-free, people will have an incentive to receive compensation in this tax-free form, rather than in a taxed form.
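To put rough numbers on that incentive, here is a minimal sketch; the marginal tax rates and the compensation amount are hypothetical round numbers chosen for illustration, not current law:

```python
# Illustrative sketch of why the tax exclusion is worth more at higher
# incomes. The 10% and 40% marginal rates and the $10,000 of compensation
# are hypothetical round numbers, not current law.

def take_home(compensation: float, marginal_rate: float, as_health_benefit: bool) -> float:
    """Worker's value of compensation paid as wages vs. untaxed health benefits."""
    if as_health_benefit:
        return compensation  # employer-paid premiums escape income tax
    return compensation * (1 - marginal_rate)  # wages are taxed at the margin

for rate in (0.10, 0.40):  # a lower- vs. a higher-income marginal rate
    as_wages = take_home(10_000, rate, as_health_benefit=False)
    as_benefit = take_home(10_000, rate, as_health_benefit=True)
    # The implicit subsidy is the gap, and it grows with the marginal rate.
    print(f"rate {rate:.0%}: wages {as_wages:,.0f}, benefit {as_benefit:,.0f}, "
          f"subsidy {as_benefit - as_wages:,.0f}")
```

At a 10 percent marginal rate, the exclusion is worth $1,000 on this compensation; at a 40 percent rate, it is worth $4,000. That is the regressivity point in numbers.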

Katherine Baicker describes these dynamics in her 2019 Martin Feldstein Lecture at the National Bureau of Economic Research, "Economic Analysis for Evidence-Based Health Policy: Progress and Pitfalls" (NBER Reporter, September 2019). She says:
On the private side, the dominance of the employer-sponsored insurance market is driven in large part by the tax preference for health insurance benefits relative to wage compensation, which also drives down cost-sharing, since care covered through insurance plan premiums is often tax-preferred to out-of-pocket spending. This aspect of the tax code is thus both inefficient (driving inefficient utilization through moral hazard) and regressive (favoring people with higher incomes and more generous benefits)—a rare opportunity to improve both efficiency and distribution through reform.

This is a prime example of the challenge of translating economic insights into policy: Even though economists on both sides of the aisle agreed, proposing the taxation of employer-sponsored insurance to policymakers and the public was not popular. The “Cadillac tax” on expensive plans came into existence largely because it was nominally levied on insurers rather than taxpayers. This made it more politically palatable, even though it does not mean that the ultimate incidence falls on insurers, and it constrains the degree to which it can undo the regressivity of the tax treatment of employer-sponsored insurance. Earlier this year, the House voted to repeal the Cadillac tax; whether it will ever take effect remains an open question.
What is this "Cadillac tax" she is talking about? As part of the Patient Protection and Affordable Care Act of 2010, there was a provision that if someone was receiving very high-cost health insurance from an employer, there would be a tax equal to 40% of the value of the health insurance benefits above a certain level. The bill was careful to specify that this tax would be paid by employers--but of course, it would be reflected the design of health insurance plans and in overall compensation received by workers.  

In a standard example of the elegant dance moves that make up the budgeting process, the 2010 legislation bravely postponed the imposition of the Cadillac tax until 2018, after President Obama would have left office even if he served a second term. Thus, the revenues from collecting the Cadillac tax could be counted in the long-run budget projections for the legislation--to show it wouldn't cost too much over time--but the actual tax was comfortably off in the future.

Quite predictably, the Cadillac tax was then postponed from 2018 to 2020, and then to 2022. In its latest version: "This 'Cadillac tax' will equal 40 percent of the value of health benefits exceeding thresholds projected to be $11,200 for single coverage and $30,150 for family coverage in 2022."
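To make the mechanics concrete, here is a minimal sketch of how the 40 percent levy would apply at those projected 2022 thresholds; the example premiums are hypothetical:

```python
# Minimal sketch of the Cadillac tax arithmetic, using the 40% rate and the
# projected 2022 thresholds quoted above. The example premiums are
# hypothetical, chosen only to illustrate the calculation.

THRESHOLDS = {"single": 11_200, "family": 30_150}  # projected 2022 levels
RATE = 0.40  # the tax applies only to benefit value above the threshold

def cadillac_tax(annual_premium: float, coverage: str) -> float:
    """Excise tax owed on one plan's annual benefit value."""
    excess = max(0.0, annual_premium - THRESHOLDS[coverage])
    return RATE * excess

# A hypothetical family plan worth $33,150: only the $3,000 above the
# threshold is taxed, yielding $1,200.
print(cadillac_tax(33_150, "family"))  # 1200.0
print(cadillac_tax(10_000, "single"))  # 0.0 -- below the threshold, no tax
```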
In July, the Democratic-controlled House of Representatives voted to repeal the Cadillac tax altogether. It's not yet clear whether the Republican-controlled Senate will go along. Perhaps the repeal of the Cadillac tax will get stuck in the gears of politics for a little longer. But at this point there's no reason to believe that it will ever actually go into effect.

I have my doubts about how the 2010 Cadillac tax was designed. For example, various health care analysts have argued that a better approach would be to set certain thresholds and then simply tax any health insurance benefits above that level as regular income. I also have my cynical doubts about whether politicians back in 2010 intended that the Cadillac tax would ever go into effect. But whatever the details of the design or the underlying motivations, the Cadillac tax was a modest effort. If allowed to take effect, it would raise about $8 billion in 2022, rising to $38 billion by 2028 (according to the Congressional Budget Office; see the discussion starting on p. 231). It was the only meaningful effort to reduce the tax exemption for employer-provided health insurance, and to use that money to make health insurance more affordable for others. And it seems to be politically impossible.

Saturday, October 26, 2019

Tips for Academic Writing from Cormac McCarthy

In 2007, Cormac McCarthy was the Pulitzer Prize Winner in Fiction for The Road. Little did I know that he was a fellow editor of academic writing. Van Savage and Pamela Yeh provide the background in "Novelist Cormac McCarthy’s tips on how to write a great science paper: The Pulitzer prizewinner shares his advice for pleasing readers, editors and yourself" (Nature, September 26, 2019). They note: "For the past two decades, Cormac McCarthy — whose ten novels include The Road, No Country for Old Men and Blood Meridian — has provided extensive editing to numerous faculty members and postdocs at the Santa Fe Institute (SFI) in New Mexico."

For a sense of the incongruity here, this is the book jacket copy for The Road, via the Pulitzer website: 
A father and his son walk alone through burned America. Nothing moves in the ravaged landscape save the ash on the wind. It is cold enough to crack stones, and when the snow falls it is gray. The sky is dark. Their destination is the coast, although they don't know what, if anything, awaits them there. They have nothing; just a pistol to defend themselves against the lawless bands that stalk the road, the clothes they are wearing, a cart of scavenged food--and each other.
The Road is the profoundly moving story of a journey. It boldly imagines a future in which no hope remains, but in which the father and his son, "each the other's world entire," are sustained by love. Awesome in the totality of its vision, it is an unflinching meditation on the worst and the best that we are capable of: ultimate destructiveness, desperate tenacity, and the tenderness that keeps two people alive in the face of total devastation.
Doesn't sound exactly aligned with the cutting-edge scientific research across many fields on the themes of complexity, adaptation, and emergent properties that are emphasized at the Santa Fe Institute. But for what it's worth (and the Santa Fe Institute does have a healthy group of economists), here is how Savage and Yeh summarize some bits of advice from McCarthy--with more bullet points at the linked article in Nature.

• Use minimalism to achieve clarity. While you are writing, ask yourself: is it possible to preserve my original message without that punctuation mark, that word, that sentence, that paragraph or that section? Remove extra words or commas whenever you can. ... 
• Don’t slow the reader down. Avoid footnotes because they break the flow of thoughts and send your eyes darting back and forth while your hands are turning pages or clicking on links. Try to avoid jargon, buzzwords or overly technical language. And don’t use the same word repeatedly — it’s boring. ... 
• And don’t worry too much about readers who want to find a way to argue about every tangential point and list all possible qualifications for every statement. Just enjoy writing. ... 
• Inject questions and less-formal language to break up tone and maintain a friendly feeling. Colloquial expressions can be good for this, but they shouldn’t be too narrowly tied to a region. Similarly, use a personal tone because it can help to engage a reader. Impersonal, passive text doesn’t fool anyone into thinking you’re being objective: “Earth is the centre of this Solar System” isn’t any more objective or factual than “We are at the centre of our Solar System.” ...
• After all this, send your work to the journal editors. Try not to think about the paper until the reviewers and editors come back with their own perspectives. When this happens, it’s often useful to heed Rudyard Kipling’s advice: “Trust yourself when all men doubt you, but make allowance for their doubting too.” Change text where useful, and where not, politely explain why you’re keeping your original formulation.
• And don’t rant to editors about the Oxford comma, the correct usage of ‘significantly’ or the choice of ‘that’ versus ‘which’. Journals set their own rules for style and sections. You won’t get exceptions.

Friday, October 25, 2019

The Health Costs of Global Air Pollution

The State of Global Air 2019 report notes:
Air pollution (ambient PM2.5, household, and ozone) is estimated to have contributed to about 4.9 million deaths (8.7% of all deaths globally) and 147 million years of healthy life lost (5.9% of all DALYs [disability-adjusted life years] globally) in 2017. The 10 countries with the highest mortality burden attributable to air pollution in 2017 were China (1.2 million), India (1.2 million), Pakistan (128,000), Indonesia (124,000), Bangladesh (123,000), Nigeria (114,000), the United States (108,000), Russia (99,000), Brazil (66,000), and the Philippines (64,000). 
Air pollution ranks fifth among global risk factors for mortality, exceeded only by behavioral and metabolic factors: poor diet, high blood pressure, tobacco exposure, and high blood sugar. It is the leading environmental risk factor, far surpassing other environmental risks that have often been the focus of public health measures in the past, such as unsafe water and lack of sanitation. ... Air pollution collectively reduced life expectancy by 1 year and 8 months on average worldwide, a global impact rivaling that of smoking. This means a child born today will die 20 months sooner, on average, than would be expected in the absence of air pollution.
The report is written by the Health Effects Institute, a Boston-based think tank, and the Institute for Health Metrics and Evaluation, an independent health research center based at the University of Washington. The report focuses on three aspects of air pollution, with occasional references to other measures: fine particle pollution (airborne particulate matter measuring less than 2.5 micrometers in aerodynamic diameter, commonly referred to as PM2.5); ground-level (tropospheric) ozone; and household air pollution, which arises when "people burn solid fuels (such as coal, wood, charcoal, dung, and other forms of biomass, like crop waste) to cook food and to heat and light their homes."

Here's a map showing patterns of particulate pollution around the world. Clearly, the severity of this issue is worst in a band running from Africa across the Middle East to south and east Asia. 
When it comes to ozone:
Most ground-level ozone pollution is produced by human activities (for example, industrial processes and transportation) that emit chemical precursors (principally, volatile organic compounds and nitrogen oxides) to the atmosphere, where they react in the presence of sunlight to form ozone. Exposure to ground-level ozone increases a person’s likelihood of dying from respiratory disease, specifically chronic obstructive pulmonary disease. ...
The pattern of ozone exposures by level of sociodemographic development differs markedly from the patterns seen with PM2.5 and household air pollution. The more developed regions, like North America, also continue to experience high ozone exposures in the world, despite extensive and successful air quality control for ozone-related emissions in many of these countries.
On the topic of household air pollution:
In 2017, 3.6 billion people (47% of the global population) were exposed to household air pollution from the use of solid fuels for cooking. These exposures were most common in sub-Saharan Africa, South Asia, and East Asia ... Figure 7 shows the 13 countries with populations over 50 million in which more than 10% of the population was exposed to household air pollution. Because these countries have such large populations, the number of people exposed can be substantial even if the proportion exposed is low. An estimated 846 million people in India (60% of the population) and 452 million people in China (32% of the population) were exposed to household air pollution in 2017. ...
While the contribution of household air pollution to ambient air pollution varies by location and has not been calculated for most countries, one recent global estimate suggested that residential energy use, broadly defined, contributed approximately 21% of global ambient PM2.5 concentrations. Another study estimated that residential energy use contributed approximately 31% of global outdoor air pollution–related mortality. ...
The world is making progress in reducing air pollution in some areas, as in China:
A separate analysis of air quality and related health impacts in 74 Chinese cities recently found that annual average PM2.5 concentrations fell by one-third from 2013 to 2017, a significant achievement. The study also showed a 54% reduction in sulfur dioxide concentrations and a 28% drop in carbon monoxide. However, challenges remain. In 2017, ... approximately 852,000 deaths were attributable to PM2.5 exposures in China. Ozone exposures have also remained largely untouched by the actions taken in China to date, and the GBD [Global Burden of Disease] project attributed an additional 178,000 chronic respiratory disease–related deaths in China in 2017 to ozone.
In the US, steady progress has been made over several decades in reducing the "criteria" air pollutants--carbon monoxide, lead, nitrogen dioxide, ozone, particulate matter (PM10 and PM2.5), and sulfur dioxide--although there has been some backsliding in the last couple of years. Or when it comes to household air pollution: "Globally, the proportion of households relying on solid fuels for cooking dropped from about 57% in 2005 to 47% in 2017."

But with progress duly noted, we're still talking about estimates of nearly 5 million deaths per year from air pollution, with over 100,000 of those deaths happening in the United States. Moreover, as the report notes: "Air pollution takes its greatest toll on people age 50 and older, who suffer the highest burden from noncommunicable air pollution–related diseases such as heart disease, stroke, lung cancer, diabetes, and COPD [Chronic Obstructive Pulmonary Disease]." An aging society is going to experience greater health costs from pollution.

It is perhaps worth noticing that none of the health care costs here are related to climate change, nor are these health costs of air pollution a few decades or a century into the future. However, it is often true that taking steps to reduce these conventional air pollutants will also have the effect of reducing carbon emissions. Instead of refighting the trench warfare of the climate change policy debates, over and over again, perhaps it would make sense to emphasize the value of steps to reduce these immediate health costs--and then just accept the benefits of fewer greenhouse gas emissions as a highly desirable side-benefit.

Thursday, October 24, 2019

Interview with Emmanuel Farhi: Global Safe Assets and Macro as Aggregated Micro

David A. Price interviews Emmanuel Farhi in Econ Focus (Federal Reserve Bank of Richmond, Second/Third Quarter 2019, pp. 18-23). Here are some tidbits:

On global safe assets
If you look at the world today, it's very much still dollar-centric ... The U.S. is really sort of the world banker. As such, it enjoys an exorbitant privilege and it also bears exorbitant duties. Directly or indirectly, it's the pre-eminent supplier of safe and liquid assets to the rest of the world. It's the issuer of the dominant currency of trade invoicing. And it's also the strongest force in global monetary policy as well as the main lender of last resort.
If you think about it, these attributes reinforce each other. The dollar's dominance in trade invoicing makes it more attractive to borrow in dollars, which in turn makes it more desirable to price in dollars. And the U.S. role as a lender of last resort makes it safer to borrow in dollars. That, in turn, increases the responsibility of the U.S. in times of crisis. All these factors consolidate the special position of the U.S.
But I don't think that it's a very sustainable situation. More and more, this hegemonic or central position is becoming too much for the U.S. to bear.
The global safe asset shortage is a manifestation of this limitation. In my view, there's a growing and seemingly insatiable global demand for safe assets. And there is a limited ability to supply them. In fact, the U.S. is the main supplier of safe assets to the rest of the world. As the size of the U.S. economy keeps shrinking as a share of the world economy, so does its ability to keep up with the growing global demand for safe assets. The result is a growing global safe asset shortage. It is responsible for the very low levels of interest rates that we see throughout the globe. And it is a structural destabilizing force for the world economy. ... 
In my view, the global safe asset shortage echoes the dollar shortage of the late 1960s and early 1970s. At that time, the U.S. was the pre-eminent supplier of reserve assets. The global demand for reserve assets was growing because the rest of the world was growing. And that created a tension, which was diagnosed by Robert Triffin in the early '60s: Either the U.S. would not satisfy this growing global demand for reserve assets, and this lack of liquidity would create global recessionary forces, or the U.S. would accommodate this growing global demand for reserve assets, but then it would have to stretch its capacity and expose itself to the possibility of a confidence crisis and of a run on the dollar. In fact, that is precisely what happened. Eventually, exactly like Triffin had predicted, there was a run on the dollar. It brought down the Bretton Woods system: The dollar was floated and that was the end of the dollar exchange standard.
Today, there is a new Triffin dilemma: Either the U.S. does not accommodate the growing global demand for safe assets, and this worsens the global safe asset shortage and its destabilizing consequences, or the U.S. accommodates the growing global demand for safe assets, but then it has to stretch itself fiscally and financially and thereby expose itself to the possibility of a confidence crisis. ...
Basically, I think that the role of the hegemon is becoming too heavy for the U.S. to bear. And it's only a matter of time before powers like China and the eurozone start challenging the global status of the dollar as the world's pre-eminent reserve and invoicing currency. It hasn't happened yet. But you have to take the long view here and think about the next decades, not the next five years. I think that it will happen. 
For a readable overview of Farhi's views on global safe assets, a useful start is "The Safe Assets Shortage Conundrum," which he wrote with Ricardo J. Caballero and Pierre-Olivier Gourinchas, in the Summer 2017 issue of the Journal of Economic Perspectives (31:3, pp. 29-46).

On some implications for public finance if many economic agents aren't fully rational and don't pay full attention to taxes 
There is a basic tenet of public taxation called the dollar-for-dollar principle of Pigouvian taxation. It says that if the consumption of a particular good generates a dollar of negative externality, you should impose a dollar of tax to correct for this exter­nality. For example, if consuming one ton of carbon generates a certain number of dollars of externalities, you should tax it by that many dollars.
But that relies on the assumption that firms and households correctly perceive this tax. If they don't — maybe they aren't paying attention — then you have to relax this principle. For example, if I pay 50 percent attention to the tax, the tax needs to be twice as big. That's a basic tenet of public finance that is modified when you take into account that agents are not rational.
In public finance, there is also a traditional presumption that well-calibrated Pigouvian taxes are better than direct quantity restriction or regulations because they allow people to express the intensity of their preferences. Recognizing that agents are behavioral can lead you to overturn this prescription. It makes it hard to calibrate Pigouvian taxes, and it also makes them less efficient. Cruder and simpler remedies, such as regulations on gas mileage, are more robust and become more attractive.
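Farhi's 50 percent example boils down to a simple scaling rule: if agents perceive only a fraction of the tax, the corrective tax is the externality divided by that fraction. Here is a minimal sketch, in notation of my own rather than anything from the interview:

```python
# Sketch of the attention-adjusted Pigouvian rule Farhi describes. If agents
# perceive only a fraction `attention` of the tax, the tax that makes the
# *perceived* tax equal the externality is externality / attention.
# The function name and the numbers are illustrative, not from the interview.

def corrective_tax(externality_per_unit: float, attention: float) -> float:
    """Tax per unit needed so that the perceived tax equals the externality."""
    assert 0 < attention <= 1
    return externality_per_unit / attention

print(corrective_tax(1.00, attention=1.0))  # 1.0: dollar-for-dollar Pigou
print(corrective_tax(1.00, attention=0.5))  # 2.0: half attention, double tax
```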
Aggregate production functions, the disaggregation problem, and the Cambridge-Cambridge controversy
There's an interesting episode in the history of economic thought. It's called the Cambridge-Cambridge controversy. It pitted Cambridge, Massachusetts — Solow, Samuelson, people like that — against Cambridge, U.K. — Robinson, Sraffa, Pasinetti. The big debate was the use of an aggregate production function.
Bob Solow had just written his important article on the Solow growth model. That's the basic paradigm in economic growth. To represent the possibility frontiers of an economy, he used an aggregate production function. What the Cambridge, U.K., side attacked about this was the idea of one capital stock, one number. They argued that capital was very heterogeneous. You have buildings, you have machines. You're aggregating them up with prices into one capital stock. That's dodgy.
It degenerated into a highly theoretical debate about whether or not it's legitimate to use an aggregate production function and to use the notion of an aggregate capital stock. And the Cambridge, U.K., side won. They showed that it was very problematic to use aggregate production functions. Samuelson conceded that in a beautiful paper constructing a disaggregated model that you could not represent with an aggregate production function and one capital stock.
But it was too exotic and too complicated. It went nowhere. The profession moved on. Today, aggregate production functions are pervasive. They are used everywhere and without much questioning. One of the things David [Baqaee] and I are trying to do is to pick up where the Cambridge-Cambridge controversy left off. You really need to start with a completely disaggregated economy and aggregate it up. ... 
We have a name for our vision. We call it "macro as explicitly aggregated micro." The idea is you need to start from the very heterogeneous microeconomic environment to do justice to the heterogeneity that you see in the world and aggregate it up to understand macroeconomic phenomena. You can't start from macroeconomic aggregates. You really want to understand the behavior of economic aggregates from the ground up.
For example, you can't just come up with your measure of aggregate TFP [total factor productivity] and study that. You need to derive it from first principles. You need to understand exactly what aggregate TFP is. I talked about aggregate TFP and markups, but the agenda is much broader than that. It bears on the elasticity of substitution between factors: between capital and labor, or between skilled labor, unskilled labor, and capital. It bears on the macroeconomic bias of increasing automation. It bears on the degree of macroeconomic returns to scale underlying endogenous growth. It bears on the gains from trade and the impact of tariffs. In short, it is relevant to the most fundamental concepts in macroeconomics.
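For a toy flavor of what "macro as explicitly aggregated micro" can mean, here is a two-firm sketch of my own construction, not Farhi and Baqaee's model: each firm's technology is held fixed, yet measured aggregate TFP rises when labor is reallocated toward the firm that a markup had starved of inputs.

```python
# Toy illustration (my construction, not Farhi and Baqaee's model): two firms
# with fixed firm-level TFPs, whose goods combine into final output. Moving
# labor toward the firm a markup had starved of inputs raises measured
# aggregate TFP even though no firm's technology changed.

def final_output(l1: float, l2: float, a1: float = 1.0, a2: float = 2.0) -> float:
    y1, y2 = a1 * l1, a2 * l2          # firm-level production, TFPs held fixed
    return (y1 ** 0.5) * (y2 ** 0.5)   # symmetric Cobb-Douglas aggregator

LABOR = 100.0  # total labor is fixed; only its allocation changes
tfp_distorted = final_output(70.0, 30.0) / LABOR  # markup-distorted allocation
tfp_efficient = final_output(50.0, 50.0) / LABOR  # efficient 50/50 split

print(round(tfp_distorted, 3), round(tfp_efficient, 3))  # 0.648 -> 0.707
```

Measured aggregate TFP rises by about 9 percent with no change in any firm-level technology, which is exactly the kind of aggregation effect a single aggregate production function hides.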

For a retrospective recounting of what happened in the Cambridge-Cambridge controversies, a useful starting point is Avi J. Cohen and G. C. Harcourt. 2003. "Retrospectives: Whatever Happened to the Cambridge Capital Theory Controversies?" Journal of Economic Perspectives, 17 (1): 199-214.

Wednesday, October 23, 2019

Fentanyl and Synthetic Opioids: What's Happening, What's Next

The US has roughly 50,000 opioid-involved overdose deaths per year, similar to the number of people who died of AIDS at the peak of that crisis in 1995. For comparison, total deaths in car crashes are about 40,000 per year. My dark suspicion is that the opioid crisis gets less national media attention because its worst effects are concentrated in parts of Appalachia, New England, and certain mid-Atlantic states, rather than in big coastal cities.

For a thoughtful overview of the topic, I recommend The Future of Fentanyl and Other Synthetic Opioids, a book by Bryce Pardo, Jirka Taylor, Jonathan P. Caulkins, Beau Kilmer, Peter Reuter, and Bradley D. Stein (RAND Corporation, 2019). Here are some points that caught my eye, but there's a lot more detail in the book.

"Although the media and the public describe an opioid epidemic, it is more accurate to think of it as a series of overlapping and interrelated epidemics of pharmacologically similar substances—the opioid class of drugs. Ciccarone (2017, p. 107) refers to a “triple wave epidemic”: The first wave was prescription opioids, the second wave was heroin, and the third—and ongoing—wave is synthetic opioids, such as fentanyl."

Fentanyl and other synthetic opioids have been around for decades. Indeed, the book describes four previous US episodes in which a localized surge of production was followed by a number of deaths. What makes it different this time around? The answer seems to be the production of very cheap and powerful synthetic opioids in China. The report notes (citations and footnotes omitted):

The current wave of overdoses is largely attributable to illicitly manufactured fentanyl. Most of the fentanyl and novel synthetic opioids in U.S. street markets—as well as their precursor chemicals—originate in China, where the regulatory system does not effectively police the country’s expansive pharmaceutical and chemical industries. According to federal law enforcement, synthetic opioids arrive in U.S. markets directly from Chinese manufacturers via the post, private couriers (e.g., UPS, FedEx), or cargo; by smugglers from Mexico; or by smugglers from Canada after being pressed into counterfeit prescription pills. ... The U.S. Drug Enforcement Administration (DEA) suggests that some portion of fentanyl might be produced in Mexico using precursors from China. ...

China’s large and underregulated pharmaceutical and chemical industries create opportunities for anyone with access to the inputs to synthesize fentanyl or manufacture precursors. Mexican DTOs, which have a history of importing methamphetamine precursors from China, are now importing fentanyl precursors. Today, illicit fentanyl is no longer manufactured by a single producer in a clandestine laboratory. ... China’s economy, particularly its pharmaceutical and chemical industries, have grown at levels that outpace regulatory oversight, allowing suppliers to avoid regulatory scrutiny and U.S. law enforcement. 
A related issue is that fentanyl and the synthetic opioids coming from China are extremely powerful and, in terms of morphine-equivalent dose (MED), much cheaper than heroin.
Synthetic opioids coming from China are much cheaper than Mexican heroin on a potency-adjusted basis ...  Recent RAND Corporation research identified multiple Chinese firms that are willing to ship 1 kg of nearly pure fentanyl to the United States for $2,000 to $5,000. In terms of the morphine-equivalent dose (MED; a common method of comparing the strength of different opioids), a 95-percent pure kg of fentanyl at $5,000 would generally equate to less than $100 per MED kg. For comparison, a 50-percent pure kg of Mexican heroin that costs $25,000 when exported to the United States would equate to at least $10,000 per MED kg. Thus, heroin appears to be at least 100 times more expensive than fentanyl in terms of MED at the import level.
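That potency-adjusted comparison is easy to reproduce. In the sketch below, the prices and purities come from the passage above, while the morphine-equivalence multipliers (roughly 100x morphine for fentanyl and 2x for heroin) are my own illustrative assumptions rather than the report's exact conversion factors:

```python
# Reproducing the potency-adjusted price comparison. Prices and purities are
# from the passage above; the morphine-equivalence multipliers (~100x for
# fentanyl, ~2x for heroin) are illustrative assumptions, not the report's
# exact conversion factors.

def price_per_med_kg(price_per_kg: float, purity: float, potency_vs_morphine: float) -> float:
    """Dollars per kilogram of morphine-equivalent dose (MED)."""
    med_kg_per_kg = purity * potency_vs_morphine  # MED kg in 1 kg of product
    return price_per_kg / med_kg_per_kg

fentanyl = price_per_med_kg(5_000, purity=0.95, potency_vs_morphine=100)
heroin = price_per_med_kg(25_000, purity=0.50, potency_vs_morphine=2)

print(round(fentanyl))           # ~53: "less than $100 per MED kg"
print(round(heroin))             # 25000: "at least $10,000 per MED kg"
print(round(heroin / fentanyl))  # ~475x, consistent with "at least 100 times"
```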
Given that fentanyl and synthetic opioids are extremely cheap and powerful, transporting them becomes much simpler, even by regular mail. 
Likewise, rising trade and e-commerce originating in China since 2011 facilitate the diffusion of potent synthetic opioids. Drug distribution has been further facilitated by the advent of cryptocurrencies and anonymous browsing software. ...
Fentanyl's potency-to-weight ratio makes it ideal for smuggling. A small amount of fentanyl can be easily concealed through traditional conveyances, packed in vehicles or hidden on the person. The supply of minute amounts of fentanyl through mail and private package services is profitable to someone who can redistribute it to local markets; even an ounce of fentanyl can substitute for 1 kg of heroin. ... In 2011, postal services of the United States and China entered into an agreement to streamline mail delivery and reduce shipping costs for merchandise originating in China. This “ePacket” service is designed for shipping consumer goods (under 2 kg) from China directly and rapidly to customers ordering items online ... In FY 2012, USPS handled about 27 million ePackets from China. This increased to nearly 500 million ePackets by 2017. This figure does not include items from China arriving by cargo or private consignment operators, such as DHL or FedEx. ...
For reference, if the total U.S. heroin market was on the order of 45 pure metric tons (45,000 kg; Midgette et al., 2019) before fentanyl and if fentanyl is 25 times more potent than heroin, then it would only take 1,800 1-kg parcels to supply the same amount of MEDs to meet the demand for the entire U.S. heroin market. ...
Today, shipping costs from China are negligible. A 1-kg parcel can be shipped from China to the United States for as little as $10 through the international postal system or for $100 by a private consignment operator.
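The logistics arithmetic in that passage is worth making explicit; this back-of-the-envelope sketch uses only the figures quoted above:

```python
# Back-of-the-envelope check of the parcel arithmetic quoted above, using
# only figures from the passage: a 45,000 kg (pure) US heroin market,
# fentanyl ~25x as potent as heroin, and ~$10 postal shipping per 1-kg parcel.

HEROIN_MARKET_KG = 45_000          # pre-fentanyl US heroin market, pure kg
FENTANYL_POTENCY_VS_HEROIN = 25    # potency multiplier used in the passage
POSTAL_COST_PER_PARCEL = 10        # dollars, the low-end quote for 1 kg

parcels = HEROIN_MARKET_KG / FENTANYL_POTENCY_VS_HEROIN
print(parcels)                           # 1800.0 one-kilogram parcels
print(parcels * POSTAL_COST_PER_PARCEL)  # 18000.0: roughly $18,000 in postage
                                         # for the MED equivalent of the market
```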
Fentanyl and synthetic opioids present a number of challenges for conventional approaches to drug enforcement. For example, typical approaches assume that users of a certain drug have a demand for it, so that enforcement efforts can raise the price and thereby reduce the quantity demanded. For those who become addicted, one can then offer treatment, or, as some advocate, offer zones for safer use of the drug.

But fentanyl and other synthetic opioids are different. Fentanyl is so much cheaper than other opioids that even if additional enforcement were able to drive the price up five-fold or ten-fold, it would still be extraordinarily cheap. It spreads not so much because users have a demand for these products, but because suppliers are cutting costs. The report says: "Indeed, many of fentanyl’s victims did not want or even know that they were using it." 

Treatment for prescription opioid or heroin addiction often recognizes that the first round of treatment may not work, but repeated efforts can eventually succeed for many people. Fentanyl and synthetic opioids, however, are deadly enough that more people will die while these cycles of treatment are ongoing; such treatment may still be cost-effective, but its success rate will be lower. Here's a discussion of the experience of treatment and harm reduction programs in Vancouver when they collided with fentanyl: 
Fentanyl’s challenge to treatment and harm reduction is etched starkly in Vancouver’s death rate. Few cities have embraced treatment and harm reduction more energetically than Vancouver. Before fentanyl, that seemed to have worked well; HIV/AIDS was contained and heroin overdose death rates in British Columbia fell from an average of eight per 100,000 people from 1993 to 1999 to five per 100,000 from 2000 to 2012. However, those policies, programs, and services have been challenged by fentanyl. British Columbia now has one of the highest opioid-related death rates (more than 30 per 100,000 in 2017 and 2018), which is higher than that in all but five U.S. jurisdictions. The rate in Vancouver’s health service delivery area is even higher (55 per 100,000 people). These death rates are high, not only relative to opioid overdose deaths elsewhere but also in absolute terms. It is hard for many people who are not epidemiologists to understand whether death rates of 30 or 55 per 100,000 are large or small, so it might be useful to contrast them with death rates in the United States from homicide (4.8 per 100,000) and traffic crashes (12.3 per 100,000), which are familiar, widely discussed, and often pertain to premature deaths of people.
Of course, those harm-reduction policies could be saving many lives; presumably, death rates would be higher still without those efforts. But the current approach has plainly not coped with fentanyl, and the death toll remains high in absolute terms.
The potential implications for public policy are worth considering. The report discusses one possible scenario in this way: 
Over the long term, it is important to acknowledge that a new era could be coming when synthetic opioids are so cheap and ubiquitous that supply control will become less cost-effective. Falling prices and a pivot to treatment and harm reduction need not be an unhappy scenario for law enforcement. Freeing law enforcement of the obligation to squelch supply across the board could allow it to focus on the most-noxious dealers and organizations and strive to minimize violence and corruption per kilogram delivered, rather than the number of kilograms supplied. In a way, this would let law enforcement focus on public safety, rather than an addiction prevention mission. Also ... falling prices might reduce the amount of economic-compulsive crime committed as a means to finance drug purchase.
Previous outbreaks of fentanyl and synthetic opioid use, going back to the 1980s, involved a very small number of producers and a limited distribution system, so once law enforcement shut down the producer, the outbreak was over. When it comes to trade issues with China, my own preference would be considerably less emphasis on tariffs on legal goods, and considerably more emphasis on coordinated efforts to shut down illegal production of synthetic opioids. But collaboration between the US and China is out of favor, and even if it could be arranged, the knowledge of how to make and sell fentanyl and synthetic opioids is now out there in a globalizing world economy. Stuffing that evil genie back in its bottle will be very difficult.  

At present, the harms of the US opioid crisis are somewhat concentrated geographically: 
[T]he ten states with the highest synthetic opioid overdose death rates in 2017 are, in order: West Virginia, Ohio, New Hampshire, Maryland, Massachusetts, Maine, Connecticut, Rhode Island, Delaware, and Kentucky. ... Although these ten states constituted 12 percent of the country’s population, they made up 35 percent of the 28,500 fatal overdoses involving synthetic opioids in 2017. Ohio’s share of fatalities alone was almost 12.5 percent, while the state made up about 3.5 percent of the country’s total population.
An alarming implication here is that if synthetic opioids break out of these 10 states and have a similar effect elsewhere, the already awful death toll could multiply.
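To put a rough number on "multiply," here is a back-of-envelope Python extrapolation using only the shares quoted above; the scenario of the rest of the country reaching the ten-state death rate is my assumption, not a projection from the report.

```python
# Back-of-envelope: what if the remaining 88% of the population suffered
# the same per-capita synthetic-opioid death rate as the ten hardest-hit
# states (12% of population, 35% of the 28,500 deaths in 2017)?
total_deaths = 28_500
deaths_ten_states = 0.35 * total_deaths   # ~9,975 deaths in the ten states
rate_scaler = (1 - 0.12) / 0.12           # rest of population / ten-state population
hypothetical_total = deaths_ten_states * (1 + rate_scaler)
print(f"hypothetical national toll: {hypothetical_total:,.0f}")  # ~83,000, roughly 3x actual
```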
For some earlier posts that (as usual) include links to other writing on the subject, see:

Monday, October 21, 2019

The Rise of Global Trade in Services

Our mental images of "global trade" are usually about goods: cars and steel, computers and textiles, oil and home appliances, and so on. But in the next few decades, most of the action in terms of increasing global trade is likely to be in services, not goods. More and more of the effects of trade on jobs are going to involve services, too. However, most of us are not used to thinking about how countries import and export services across national borders: transportation services, financial services, tourism, construction, health care and education services, and many others. The 2019 World Trade Report from the World Trade Organization focuses on the theme, "The future of services trade."  Here are some tidbits from the report (citations and references to figures omitted):
Services now seem to be transforming international trade in similar ways. Although they still only account for one fifth of cross-border trade, they are the fastest growing sector. While the value of goods exports has increased at a modest 1 per cent annually since 2011, the value of commercial services exports has expanded at three times that rate, 3 per cent. The services share of world trade has grown from just 9 per cent in 1970 to over 20 per cent today – and this report forecasts that services could account for up to one-third of world trade by 2040. This would represent a 50 per cent increase in the share of services in global trade in just two decades.
There is a common perception that globalization is slowing down. But if the growing wave of services trade is factored in – and not just the modest increases in merchandise trade – then globalization may be poised to speed up again.
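As a rough check on that forecast, here is a small Python sketch of how a growth gap shifts trade shares. The 3 per cent and 1 per cent growth rates come from the quoted passage; extending them unchanged to 2040 is my simplifying assumption.

```python
# Project the services share of world trade under constant growth rates:
# services exports growing 3%/year, goods 1%/year, from a 20/80 split.
services, goods = 20.0, 80.0
for year in range(2019, 2041):
    services *= 1.03
    goods *= 1.01
print(f"services share of world trade in 2040: {services / (services + goods):.1%}")
```

At those rates the share reaches only about 28 per cent by 2040, so the WTO's one-third forecast evidently assumes that services trade growth accelerates beyond its recent pace.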
Of course, high-income countries around the world already have most of their GDP in the form of services. But it's not as widely recognized that emerging market economies also have a majority of their output in services, or are very close to it.
Services already accounted for 76 per cent of GDP in advanced economies in 2015 – up from 61 per cent in 1980 – and this share seems likely to rise. In Japan, for example, services represent 68 per cent of GDP; in New Zealand, 72 per cent; and in the US, almost 80 per cent. 
Emerging economies, too, are becoming more services-based – in some cases, at an even faster pace than advanced ones. Despite emerging as the “world’s factory” in recent decades, China’s economy is shifting dramatically into services. Services now account for over 52 per cent of GDP – a higher share than manufacturing – up from 41 per cent in 2005. In India, services now make up almost 50 per cent of GDP, up from just 30 per cent in 1970. In Brazil, the share of services in GDP is even higher, at 63 per cent. Between 1980 and 2015, the average share of services in GDP across all developing countries grew from 42 to 55 per cent.
Here's a figure showing the main services that are now being traded internationally. International trade in health care and education services is small, so far. The big areas at present are distribution services, financial services, telecom and computer services, transport services, and tourism. 

Like most big economic shifts, the rise in services trade is being driven by multiple factors. One is that advances in communications and information technology are making it vastly cheaper to carry out a service in one location and then to deliver it somewhere else. Another is that services are becoming a bigger part of the output of companies that, at first glance, seem focused mainly on goods. For example, car companies produce cars. But they also make a good share of their income providing services like financing, after-sales service of cars already sold, and customizing cars according to the desires of buyers. Another shift is what has been called "premature deindustrialization," referring to the fact that much of the future output growth in manufacturing is likely to come from investments in robotics and automation, while most of the future jobs are likely to be in services. The report notes:
Just because the services sector is playing a bigger role in national economies, this does not mean that the manufacturing sector is shrinking or declining. Many advanced economies are “post-industrial” only in the sense that a shrinking share of the workforce is engaged in manufacturing. Even in the world’s most deindustrialized, services-dominated economies, manufacturing output continues to expand thanks to mechanization and automation, made possible in no small part by advanced services. For example, US manufacturing output tripled between 1970 and 2014 even though its share of employment fell from over 25 per cent to less than 10 per cent. The same pattern of rising industrial output and shrinking employment can be found in Germany, Japan and many other advanced economies. ...
This line between manufacturing and services activities, which is already difficult to distinguish clearly, is becoming even more blurred across many industries. Automakers, for example, are now also service providers, routinely offering financing, product customization, and post-sales care. Likewise, on-line retailers are now also manufacturers, producing not only the computer hardware required to access their services, but many of the goods they sell on-line. Meanwhile, new processes, like 3D printing, result in products that are difficult to classify as either goods or services and are instead a hybrid of the two. This creative intertwining of services and manufacturing is one key reason why productivity continues to grow.
There are lots of issues here. For example, will the gains from trade in services end up benefiting mainly big companies, or mainly large urban areas, or will it allow small- and medium-sized firms in smaller cities or rural areas to have greater access to global markets? Many trade agreements about services are going to involve negotiating fairly specific underlying standards: for example, if a foreign insurance company or bank wants to do business in other countries, the solvency and business practices of that company will be a fair topic of investigation. 

The report offers lots of detail on services exports and imports around the world: for example, from advanced and developing countries, involving different kinds of services, as part of global value chains, the involvement of smaller and larger companies, the involvement of female and male workers, and case studies of different aspects of services trade in India, China, Kenya, Mexico, the Philippines, and others. But overall, it seems clear that an ever-larger portion of international trade is going to be arriving electronically, not in a container-shipping compartment at a border stop or a port, and all of us are going to need to wrap our minds around the implications. 

Saturday, October 19, 2019

Neuromyths about the Brain and Learning

"Neuromyths are false beliefs, often associated with education and learning, that stem from misconceptions or misunderstandings about brain function. Over the past decade, there has been an increasing amount of research worldwide on neuromyths in education." The Online Learning Consortium has published an International report: Neuromyths and evidence-based practices in higher education. by the team of Kristen Betts, Michelle Miller,  Tracey Tokuhama-Espinosa,
Patricia A. Shewokis, Alida Anderson,  Cynthia Borja,  Tamara Galoyan,  Brian Delaney, 
John D. Eigenauer, and Sanne Dekker. 

They draw on previous surveys and information about "neuromyths" to construct their own online survey, which was sent to people in higher education. The response rate was low, as is common with online surveys, so consider yourself warned. But what's interesting to me is to read the "neuromyths" and to consider your own susceptibility to them. More details are in the report itself, of course.

Homage to Bill Goffe for spotting this report. 

Friday, October 18, 2019

A Nobel for the Experimental Approach to Global Poverty for Banerjee, Duflo, and Kremer

Several decades ago, the most common ways of thinking about problems of poor people in low-income countries involved ideas like the "poverty trap" and the "dual economy." The "poverty trap" was the idea that low-income countries were close to subsistence, so it was hard for them to save and make the investments that would lead to long-term growth. The "dual economy" idea was that low-income countries had both a traditional and a modern part of their economy, but the traditional part had large numbers of subsistence-level workers. Thus, if or when the modern part of the economy expanded, it could draw on this large pool of subsistence-level workers, and so there was no economic pressure for subsistence wages to rise. In either case, a common policy prescription was that low-income countries needed a big infusion of capital, probably from a source like the World Bank, to jump-start their economies into growth.

These older theories of economic development captured some elements of global poverty, but many of their details and implications have proved unsatisfactory for the modern world. (Here's an essay on "poverty trap" thinking, and another on "dual economy" thinking.) For example, it turns out that low-income countries often do have sufficient saving to make investments in the future. Also, in a globalizing economy, flows of private investment capital, along with remittances sent back home from emigrants, far outstrip official development assistance. Moreover, there have clearly been success stories in which some low-income countries have escaped the poverty trap and the dual economy and moved to rapid growth, including China, India, other nations of east Asia, Botswana, and so on. 

Of course, it remains important that low-income countries avoid strangling their own economies with macroeconomic mismanagement, overregulation, or corruption. But a main focus of thinking about economic development shifted from how to funnel more resources to these countries to what kind of assistance would be most effective for the lives of the poor. The 2019 Nobel prize in economics was awarded “for their experimental approach to alleviating global poverty” to Abhijit Banerjee, Esther Duflo, and Michael Kremer. To understand the work, the Nobel committee has published two useful starting points: a "Popular Science" easy-to-read overview called "Research to help the world’s poor," and a longer and more detailed "Scientific Background" essay on "Understanding Development and Poverty Alleviation."

In thinking about the power of their research, it's perhaps useful to hearken back to long-ago discussions of basic science experiments. For example, back in 1881 Louis Pasteur wanted to test his vaccine for sheep anthrax. He exposed 50 sheep to anthrax. Half of those 50, chosen at random, had been vaccinated. The vaccinated sheep lived; the others died. 

Social scientists have in some cases been able to use randomized trials in the past. As one recent example, the state of Oregon wanted to expand Medicaid coverage back in 2008, but it only had funding to cover an additional 10,000 people. It chose those people through a lottery, and thus set up an experiment about how having health insurance affected the health and finances of the working poor (for discussions of some results, see here and here). In other cases, when certain charter high schools are oversubscribed and use a lottery to choose students, it sets up a random experiment for comparing students who gained admission to those schools with those who did not.

The 2019 laureates took this idea of social science experiments and brought it to issues of poverty and economic development. They went to India and Kenya and low-income countries around the world. They arranged with state and local governments to carry out experiments where, say, 200 villages would be selected, and then 100 of those villages at random would receive a certain policy intervention. Just dealing with the logistics of making this happen--for different interventions, in different places--would deserve a Nobel prize by itself. 
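The statistical core of such an experiment is simple, which is part of its appeal; the logistics are the hard part. Here is a minimal Python sketch with simulated data (the village count matches the example above, but the outcomes and the +5 "true effect" are made up for illustration):

```python
import random
import statistics

random.seed(42)

# 200 hypothetical villages; 100 randomly assigned to the intervention.
villages = list(range(200))
random.shuffle(villages)
treated = set(villages[:100])

def outcome(village):
    # Simulated outcome: noise around 50, plus a true effect of +5 if treated.
    return random.gauss(50, 10) + (5 if village in treated else 0)

treated_y = [outcome(v) for v in villages if v in treated]
control_y = [outcome(v) for v in villages if v not in treated]

effect = statistics.mean(treated_y) - statistics.mean(control_y)
# Simple standard error of a difference in means.
se = (statistics.variance(treated_y) / len(treated_y)
      + statistics.variance(control_y) / len(control_y)) ** 0.5
print(f"estimated effect: {effect:.2f} (standard error {se:.2f})")
```

Because assignment is random, the two groups are comparable on average, so the difference in mean outcomes is a credible estimate of the intervention's effect.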

Many of the individual experiments focus on quite specific policies. However, as a number of these experimental results accumulate, broader lessons become clear. For example, consider the question of how to improve educational outcomes in low-income countries. Is the problem a lack of textbooks? A lack of lunches? Absent teachers? Low-quality teachers? Irregular student attendance? An overly rigid curriculum? A lack of lights at home that make it hard for students to study? Once you start thinking along these lines, you can think about randomized experiments that address each of these factors and others, separately and in various combinations. From the Nobel committee's "Popular Science Background":

Kremer and his colleagues took a large number of schools that needed considerable support and randomly divided them into different groups. The schools in these groups all received extra resources, but in different forms and at different times. In one study, one group was given more textbooks, while another study examined free school meals. Because chance determined which school got what, there were no average differences between the different groups at the start of the experiment. The researchers could thus credibly link later differences in learning outcomes to the various forms of support. The experiments showed that neither more textbooks nor free school meals made any difference to learning outcomes. If the textbooks had any positive effect, it only applied to the very best pupils. 
Later field experiments have shown that the primary problem in many low-income countries is not a lack of resources. Instead, the biggest problem is that teaching is not sufficiently adapted to the pupils’ needs. In the first of these experiments, Banerjee, Duflo et al. studied remedial tutoring programmes for pupils in two Indian cities. Schools in Mumbai and Vadodara were given access to new teaching assistants who would support children with special needs. These schools were ingeniously and randomly placed in different groups, allowing the researchers to credibly measure the effects of teaching assistants. The experiment clearly showed that help targeting the weakest pupils was an effective measure in the short and medium term.
Such experiments have been done in a wide range of contexts. For example, what about issues of improving health? 
One important issue is whether medicine and healthcare should be charged for and, if so, what they should cost. A field experiment by Kremer and co-author investigated how the demand for deworming pills for parasitic infections was affected by price. They found that 75 per cent of parents gave their children these pills when the medicine was free, compared to 18 per cent when they cost less than a US dollar, which is still heavily subsidised. Subsequently, many similar experiments have found the same thing: poor people are extremely price-sensitive regarding investments in preventive healthcare. ...

Low service quality is another explanation why poor families invest so little in preventive measures. One example is that staff at the health centres that are responsible for vaccinations are often absent from work. Banerjee, Duflo et al. investigated whether mobile vaccination clinics – where the care staff were always on site – could fix this problem. Vaccination rates tripled in the villages that were randomly selected to have access to these clinics, at 18 per cent compared to 6 per cent. This increased further, to 39 per cent, if families received a bag of lentils as a bonus when they vaccinated their children. Because the mobile clinic had a high level of fixed costs, the total cost per vaccination actually halved, despite the additional expense of the lentils.
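The cost claim in that last sentence is just fixed costs being spread over more vaccinations. A toy Python calculation makes the mechanism visible; all the cost figures are hypothetical, chosen only to mirror the 6, 18, and 39 per cent uptake rates in the quote:

```python
# Toy fixed-cost arithmetic: why higher uptake can cut the cost per
# vaccination even after paying for an incentive. Cost figures are
# hypothetical; only the uptake rates come from the quoted study.
ELIGIBLE = 100            # children per catchment area (assumed)
FIXED_COST = 500.0        # staff, travel, cold chain per clinic (assumed)
LENTILS_PER_CHILD = 1.0   # incentive cost per vaccinated child (assumed)

def cost_per_vaccination(uptake, incentive=0.0):
    vaccinated = ELIGIBLE * uptake
    return (FIXED_COST + incentive * vaccinated) / vaccinated

print(f"uptake  6%: ${cost_per_vaccination(0.06):.2f}")          # ~$83
print(f"uptake 18%: ${cost_per_vaccination(0.18):.2f}")          # ~$28
print(f"uptake 39% + lentils: "
      f"${cost_per_vaccination(0.39, LENTILS_PER_CHILD):.2f}")   # ~$14
```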
How much do the lives of low-income people change from receiving access to credit? For example, does it change their consumption, or encourage them to start a business? If farmers had access to credit, would they be more likely to invest in fertilizer and expand their output?

As the body of experimental evidence accumulates, it begins to open windows on the lives of people in low-income countries, on issues of how they are actually making decisions and what constraints matter most to them. The old-style approach to development economics of sending money to low-income countries is replaced by policies aimed at specific outcomes: education, health, credit, use of technology. When it's fairly clear what really matters or what really helps, and the policies are expanded broadly, they can still be rolled out over a few years in a randomized way, which allows researchers to compare the effects on those who experienced the policies sooner with those who experienced them later. This approach to economic development has a deeply evidence-based practicality. 

For more on these topics, here are some starting points from articles in the Journal of Economic Perspectives, where I labor in the fields as Managing Editor: 

On the specific research on experimental approaches to poverty, Banerjee and Duflo coauthored "Addressing Absence" (Winter 2006 issue), about an experiment to provide incentives for teachers in rural schools to improve their attendance, and "Giving Credit Where It Is Due" (Summer 2010 issue), about experiments related to providing credit and how it affects the lives of poor people. 

I'd also recommend a pair of articles that Banerjee and Duflo wrote for JEP where they focus on the economic lives of those in low-income countries: "the choices they face, the constraints they grapple with, and the challenges they meet." The first paper focuses on the extremely poor, "The Economic Lives of the Poor" (Winter 2007), while the other looks at those who are classified as "middle class" by global standards, "What Is Middle Class about the Middle Classes around the World?" (Spring 2008). 

From Kremer, here are a couple of JEP papers focused on development topics not directly related to the experimental agenda: "Pharmaceuticals and the Developing World" (Fall 2002) and "The New Role for the World Bank" (written with Michael A. Clemens, Winter 2016).

Finally, the Fall 2017 issue of JEP had a three-paper symposium on the issues involved in moving from a smaller-scale experiment to a scalable policy. The papers are: 

Thursday, October 17, 2019

Some Income Tax Data on the Top Incomes

How much income do US taxpayers have at the very top? How much do they pay in taxes? The IRS has just published updated data for 2017 on "Individual Income Tax Rates and Tax Shares."  Here, I'll focus on data for 2017 and "returns with Modified Taxable Income," which for 2017 basically means the same thing as returns with taxable income. Here are a couple of tables for 2017 derived from the IRS data.

The first table shows a breakdown for taxpayers from the top .001% to the top 5%. Focusing on the top .001% for a moment, there were 1,433 such taxpayers in 2017. (You'll notice that the number of taxpayers in the top .01%, .1%, and 1% rises by multiples of 10, as one would expect.)

The "Adjusted Gross Income Floor" tells you that to be in the top .001% in 2017, you had to have income of $63.4 million in that year. If you had income of more than $208,000, you were in the top 5%,

The total income for the top .001% was $256 billion. Of that amount, the total federal income tax paid was $61.7 billion. Thus, the average federal income tax rate paid was 24.1% for this group. The top .001% received 2.34% of all gross income, and paid 3.86% of all income taxes.
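For readers who want to check the arithmetic, the average rate is simply tax paid divided by income. Here is a quick Python pass over the figures just quoted:

```python
# Replicate the top .001% figures quoted from the IRS tables for 2017.
income = 256e9       # total adjusted gross income of the top .001%
tax_paid = 61.7e9    # total federal income tax paid by the group
taxpayers = 1_433

print(f"average tax rate: {tax_paid / income:.1%}")                 # ~24.1%
print(f"average income per taxpayer: ${income / taxpayers:,.0f}")   # ~$179 million
print(f"average tax per taxpayer: ${tax_paid / taxpayers:,.0f}")    # ~$43 million
```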
Of course, it's worth remembering that this table is federal income taxes only. It doesn't include state taxes on income, property, or sales.  It doesn't include the share of corporate income taxes that end up being paid indirectly (in the form of lower returns) by those who own corporate stock.

Here's a follow-up table showing the same information, but for groups ranging from the top 1% to the top 50%.
Of course, readers can search through these tables for what is of most interest to them. But here are a few quick thoughts of my own.

1) Those at the very tip-top of the income distribution, like the top .001% or the top .01%, pay a slightly lower share of income in federal income taxes than, say, the top 1%. Why? I think it's because those at the very top are often receiving a large share of their annual income in the form of capital gains, which are taxed at a lower rate than regular income.

2) It's useful to remember that many of those at the very tip-top are not there every year. It's not as if they fall into poverty the next year, of course. But they are often making a decision about when to turn capital gains into taxable income, and they are people who--along with their well-paid tax lawyers--have some control over the timing of that decision and how the income will be received.

3) The average tax rate shown here is not the marginal tax bracket. The top federal tax bracket is 37% (setting aside issues of payroll taxes for Medicare and how certain phase-outs work as income rises). But that marginal tax rate applies only to an additional dollar of regular income earned. With deductions, credits, exemptions, and capital gains taken into account, the average rate of income tax as a share of total income is lower.
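A small sketch clarifies the marginal-versus-average distinction. The bracket schedule below is deliberately simplified and hypothetical, not the actual tax code; the point is only that the top rate applies to the last dollar earned, not to every dollar.

```python
# Marginal vs. average tax rates under a simplified, hypothetical
# progressive schedule (NOT the real tax code). Each rate applies only
# to income above its threshold, so the average rate stays below the
# top marginal rate.
BRACKETS = [(0, 0.10), (100_000, 0.24), (500_000, 0.37)]  # (threshold, rate)

def tax_owed(income):
    tax = 0.0
    # Pair each bracket with the next threshold (or total income at the end).
    for (lo, rate), (hi, _) in zip(BRACKETS, BRACKETS[1:] + [(income, 0.0)]):
        top = min(income, hi)
        if top > lo:
            tax += (top - lo) * rate
    return tax

income = 1_000_000
print(f"marginal rate: 37%, average rate: {tax_owed(income) / income:.1%}")  # ~29%
```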

4) The top 50% pays almost all the federal income tax. The last row on the second table shows that the top 50% pays 96.89% of all federal income taxes. The top 1% pays 38.47% of all federal income taxes. Of course, anyone who earns income also owes federal payroll taxes that fund Social Security and Medicare, as well as paying federal excise taxes on gasoline, alcohol, and tobacco, and these taxes aren't included here.

5) This data is about income in 2017. It's not about wealth, which is accumulated over time. Thus, this data is relevant for discussions of changing income tax rates, but not especially relevant for talking about a wealth tax.

6) There's a certain mindset which looks at, say, the $2.3 trillion in total income for the top 1%, and notes that the group is "only" paying $615 billion in federal income taxes, and immediately starts thinking about how the federal government could collect a few hundred billion dollars more from that group, and planning how to spend that money. Or one might focus further up, like the 14,330 in the top .01%  who had more than $12.8 million in income in 2017. Total income for this group was $565 billion, and they "only" paid about 25% of it in federal income taxes. Surely they could chip in another $100 billion or so? On average, that's only about $7 million apiece in additional taxes for those in the top .01%. No big deal. Raising taxes on other people is so easy.

I'm not someone who spends much time weeping about the financial plight of the rich, and I'm not going to start now. It's worth remembering (again) that the numbers here are only for federal income tax, so if you are in a state or city with its own income tax, as well as paying property taxes and the other taxes at various levels of government, the tax bill paid by those with high incomes is probably edging north of 40% of total income in a number of jurisdictions.

But let's set aside the question of whether the very rich can afford somewhat higher federal income taxes (spoiler alert: they can), and focus instead on the total amounts of money available. The numbers here suggest that somewhat higher income taxes at the very top could conceivably bring in a few hundred billion dollars, even after accounting for the ability of those with very high income to alter the timing and form of the income they receive. To put this amount in perspective, the federal budget deficit is now running at about $800 billion per year. To put it another way, it seems implausible to me that plausibly higher taxes limited to those with the highest incomes would raise enough to get the budget deficit down to zero, much less to bridge the existing long-term funding gaps for Social Security or Medicare, or to support grandiose spending programs in the trillions of dollars for other purposes. Raising federal income taxes at the very top may be a useful step, but it's not a magic wand that can pay for every wish list.