Friday, May 22, 2020

Interview with Joshua Angrist: Education Policy and Causality Questions

David A. Price interviews Joshua Angrist in Econ Focus (First Quarter 2020, Federal Reserve Bank of Richmond, pp. 18-22). Angrist is well-known for his creativity and diligence in thinking about research design: that is, don't just start by looking at a bunch of correlations between variables, but instead think about what you might be able to infer about causality from looking at the data in a specific way. A substantial share of his recent research has focused on education policy, and that's the main focus of the interview as well.

To get a sense of what "research design" means in this area, consider some examples. Imagine that you want to know whether a student does better from attending a public charter school. If the school is oversubscribed and holds a lottery (as often happens), then you can compare those attending the charter with those who applied but were not chosen in the lottery. Does being surrounded by high-quality peers help your education? You can look at students who were accepted to institutions like Harvard and MIT but chose not to attend, and compare them with the students who were accepted and did choose to attend. Of course, these kinds of comparisons have to be done with appropriate statistical care. But their results are much more plausibly interpreted as causal, not just as a set of correlations. Here are some comments from Angrist in the interview that caught my eye.

Peer Effects in High School? 
I think people are easily fooled by peer effects. Parag, Atila Abdulkadiroglu, and I call it "the elite illusion." We made that the title of a paper. I think it's a pervasive phenomenon. You look at the Boston Latin School, or if you live in Northern Virginia, there's Thomas Jefferson High School for Science and Technology. And in New York, you have Brooklyn Tech and Bronx Science and Stuyvesant.
And so people say, "Look at those awesome children, look how well they did." Well, they wouldn't get into the selective school if they weren't awesome, but that's distinct from the question of whether there's a causal effect. When you actually drill down and do a credible comparison of students who are just above and just below the cutoff, you find out that elite performance is indeed illusory, an artifact of selection. The kids who go to those schools do well because they were already doing well when they got in, but there's no effect from exposure to higher-achieving peers.
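Angrist's point about selection at a cutoff can be illustrated with a tiny simulation (purely hypothetical numbers, not his actual data or method): admission to an "elite" school depends on a baseline score, later outcomes depend only on that baseline, and attending adds nothing. The naive attendee-versus-everyone-else comparison shows a huge gap, while comparing only students just above and just below the cutoff recovers an effect near zero.

```python
import random

random.seed(0)

# Hypothetical students: baseline ability drives both admission and later outcomes.
students = [random.gauss(0, 1) for _ in range(100_000)]
CUTOFF = 1.0  # admitted to the "elite" school if baseline score > 1.0

def outcome(score):
    # Outcome depends only on baseline ability plus noise;
    # attending the elite school adds nothing (true causal effect = 0).
    return score + random.gauss(0, 0.5)

admitted = [outcome(s) for s in students if s > CUTOFF]
rejected = [outcome(s) for s in students if s <= CUTOFF]
naive_gap = sum(admitted) / len(admitted) - sum(rejected) / len(rejected)

# Cutoff comparison: only students within 0.1 of the admission threshold.
just_above = [outcome(s) for s in students if CUTOFF < s < CUTOFF + 0.1]
just_below = [outcome(s) for s in students if CUTOFF - 0.1 < s <= CUTOFF]
rd_gap = sum(just_above) / len(just_above) - sum(just_below) / len(just_below)

print(f"naive attendee vs. non-attendee gap: {naive_gap:.2f}")  # large, pure selection
print(f"gap just above vs. just below cutoff: {rd_gap:.2f}")    # small, near the true zero
```

The naive gap here is entirely selection; the near-cutoff comparison is the spirit of the regression-discontinuity designs Angrist describes.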
How Much Does Attending a Selective College Matter? 
I teach undergrad and grad econometrics, and one of my favorite examples for teaching regression is a paper by Alan Krueger and Stacy Dale that looks at the effects of going to a more selective college. It turns out that if you got into MIT or Harvard, it actually doesn't matter where you go. Alan and Stacy showed that in two very clever, well-controlled studies. And Jack Mountjoy, in a paper with Brent Hickman, just replicated that for a much larger sample. There isn't any earnings advantage from going to a more selective school once you control for the selection bias. So there's also an elite illusion at the college level, which I think is more important to upper-income families, because they're desperate for their kids to go to the top schools. So desperate, in fact, that a few commit criminal fraud to get their kids into more selective schools.
Charter Schools and Takeovers
The most common charter model is what we call a startup — somebody decides they want to start a charter school and admits kids by lottery. But an alternative model is the takeover. Every state has an accountability system with standards that require schools to meet certain criteria. When they fail to meet these standards, they're at risk of intervention by the state. Some states, including Massachusetts, have an intervention that involves the public school essentially being taken over by an outside operator. Boston had takeovers. And New Orleans is actually an all-charter district now, but it moved to that as individual schools were being taken over by charter operators.
That's good for research, because you can look at schools that are struggling just as much but are not taken over or are not yet taken over and use them as a counterfactual. The reason that's important is that people say kids who apply to the startups are self-selected and so they're sort of primed to gain from the charter treatment. But the way the takeover model works in Boston and New Orleans is that the outside operator inherits not only the building, but also the existing enrollment. So they can't cherry-pick applicants. What we show is that successful charter management organizations that run successful startups also succeed in takeover scenarios.
Angrist has developed the knack of looking for these ways of interpreting a given data set, sometimes called "natural experiments." For those trying to find such examples as a basis for their own research, he says: 
One thing I learned is that empiricists should work on stuff that's nearby. Then you can have some visibility into what's unique and try to get on to projects that other people can't do. This is particularly true for empiricists who are working outside the United States. There's a temptation to just mimic whatever the Americans and British are doing. I think a better strategy is to say, "Well, what's special and interesting about where I am?"
Finally, as a bit of a side note, I was intrigued by Angrist's neutral-to-negative take on the potential for machine learning in econometrics:
I just wrote a paper about machine learning applications in labor economics with my former student Brigham Frandsen. Machine learning is a good example of a kind of empiricism that's running way ahead of theory. We have a fairly negative take on it. We show that a lot of machine learning tools that are very popular now, both in economics and in the wider world of data science, don't translate well to econometric applications and that some of our stalwarts — regression and two-stage least squares — are better. But that's an area of ongoing research, and it's rapidly evolving. There are plenty of questions there. Some of them are theoretical, and I won't be answering those questions, but some are practical: whether there's any value added from this new toolkit. So far, I'm skeptical.
Josh has written for the Journal of Economic Perspectives a few times; interested readers might want to follow up with his articles there.

Thursday, May 21, 2020

Reconsidering the "Washington Consensus" in the 21st Century

The "Washington consensus" has become a hissing and a buzzword over time. The usual implication is that free-market zealots in Washington, DC, told developing countries around the world that they would thrive if they followed free-market policies, but when developing countries tried out these policies, the policies did not seem to work. William Easterly, who has been a critic of the "Washington consensus" in the past, offers an update and some new thinking in "In Search of Reforms for Growth: New Stylized Facts on Policy and Growth Outcomes" (Cato Institute, Research Briefs #215, May 20, 2020). He summarizes some ideas from his NBER working paper of the same title (NBER Working Paper 26318, September 2019).

Before discussing what Easterly has to say, it's perhaps useful to review how the "Washington consensus" terminology emerged. The name traces back to a 1989 seminar in which John Williamson tried to write down what he saw as the main steps that policy-makers in Washington, DC, thought were appropriate for countries in Latin America facing a debt crisis. As Williamson wrote in the resulting essay published in 1990:
No statement about how to deal with the debt crisis in Latin America would be complete without a call for the debtors to fulfill their part of the proposed bargain by "setting their houses in order," "undertaking policy reforms," or "submitting to strong conditionality."
The question posed in this paper is what such phrases mean, and especially what they are generally interpreted as meaning in Washington. Thus the paper aims to set out what would be regarded in Washington as constituting a desirable set of economic policy reforms. ... The Washington of this paper is both the political Washington of Congress and senior members of the administration and the technocratic Washington of the international financial institutions, the economic agencies of the US government, the Federal Reserve Board, and the think tanks. ... Washington does not, of course, always practice what it preaches to foreigners.
Here's how Williamson summed up the 10 reforms he listed in a follow-up essay in 2004:
  1. Fiscal Discipline. This was in the context of a region where almost all countries had run large deficits that led to balance of payments crises and high inflation that hit mainly the poor because the rich could park their money abroad.
  2. Reordering Public Expenditure Priorities. This suggested switching expenditure in a progrowth and propoor way, from things like nonmerit subsidies to basic health and education and infrastructure. It did not call for all the burden of achieving fiscal discipline to be placed on expenditure cuts; on the contrary, the intention was to be strictly neutral about the desirable size of the public sector, an issue on which even a hopeless consensus-seeker like me did not imagine that the battle had been resolved with the end of history that was being promulgated at the time. 
  3. Tax Reform. The aim was a tax system that would combine a broad tax base with moderate marginal tax rates. 
  4. Liberalizing Interest Rates. In retrospect I wish I had formulated this in a broader way as financial liberalization, stressed that views differed on how fast it should be achieved, and—especially—recognized the importance of accompanying financial liberalization with prudential supervision. 
  5. A Competitive Exchange Rate. I fear I indulged in wishful thinking in asserting that there was a consensus in favor of ensuring that the exchange rate would be competitive, which pretty much implies an intermediate regime; in fact Washington was already beginning to edge toward the two-corner doctrine which holds that a country must either fix firmly or else it must float “cleanly”. 
  6. Trade Liberalization. I acknowledged that there was a difference of view about how fast trade should be liberalized, but everyone agreed that was the appropriate direction in which to move. 
  7. Liberalization of Inward Foreign Direct Investment. I specifically did not include comprehensive capital account liberalization, because I did not believe that did or should command a consensus in Washington. 
  8. Privatization. As noted already, this was the one area in which what originated as a neoliberal idea had won broad acceptance. We have since been made very conscious that it matters a lot how privatization is done: it can be a highly corrupt process that transfers assets to a privileged elite for a fraction of their true value, but the evidence is that it brings benefits (especially in terms of improved service coverage) when done properly, and the privatized enterprise either sells into a competitive market or is properly regulated. 
  9. Deregulation. This focused specifically on easing barriers to entry and exit, not on abolishing regulations designed for safety or environmental reasons, or to govern prices in a non-competitive industry. 
  10. Property Rights. This was primarily about providing the informal sector with the ability to gain property rights at acceptable cost (inspired by Hernando de Soto’s analysis). 
There are really two main sets of complaints about the "Washington consensus" recommendations. One is that a united DC-centered policy establishment was telling countries around the world what to do in an overly detailed and intrusive way. The other is that the recommendations were not producing meaningful improvements in economic growth in the countries of Latin America, Africa, or elsewhere. As Easterly points out, these complaints were being voiced by the mid-1990s.

The standard responses were that many of these countries had not actually adopted the list of 10 policy reforms. Moreover, the responses went, there is no instant-fix set of policies for raising economic growth, and these policies need to be maintained for years (or decades?) before their effects become meaningful. And there the controversy (mostly) rested.

Easterly is (wisely) not seeking to refight the specific proposals of the Washington consensus. Instead, he is just pointing out some basic facts. The share of countries with extremely negative macroeconomic outcomes--like very high inflation, or very high black market premiums on the exchange rate for currency--diminished sharply in the 21st century, as compared with the 1980s and 1990s. Here are a couple of figures from Easterly's NBER working paper:

These kinds of figures provide a context for the 1990 Washington consensus: for example, in the late 1980s and early 1990s, when 25 to 40 percent of all countries in the world had inflation rates greater than 40 percent, getting that wildfire under control was a high priority.

Easterly also points out that as these extremely undesirable outcomes diminished in the 1990s, growth across the countries of Latin America and Africa improved in the 21st century. He thus offers this gentle reconsideration of the Washington consensus policies: 
The new stylized facts seem most consistent with a position between complete dismissal and vindication of the Washington Consensus. ... Even critics of the Washington Consensus might agree that extreme ranges of inflation, black market premiums, overvaluation, negative real interest rates, and repression of trade were undesirable. ...
Despite these caveats, the new stylized facts are consistent with a more positive view of reform, compared to the previous consensus on doubting reform. The reform critics (including me) failed to emphasize the dangers of extreme policies in the previous reform literature or to note how common extreme policies were. Even if the reform movement was far from a complete shift to “free market policies,” it at least seems to have accomplished the elimination of the most extreme policy distortions of markets, which is associated with the revival of growth in African, Latin American, and other countries that had extreme policies. 

Wednesday, May 20, 2020

Is a Revolution in Biology-based Technology on the Way?

Sometimes, a person needs a change from feeling rotten about the pandemic and the economy. One needs a sense that, if not right away, the future holds some imaginative and exciting possibilities. A group at the McKinsey Global Institute--Michael Chui, Matthias Evers, James Manyika, Alice Zheng, and Travers Nisbet--have been working for about a year on their report "The Bio Revolution: Innovations transforming economies, societies, and our lives" (May 2020). It's got a last-minute text box about COVID-19, emphasizing the speed with which biomedical research has been able to move into action in looking for vaccines and treatments. But the heart of the report is that the authors looked at the current state of biotech and came up with a list of about 400 "cases that are scientifically conceivable today and that could plausibly be commercialized by 2050. ... Over the next ten to 20 years, we estimate that these applications alone could have direct economic impact of between $2 trillion and $4 trillion globally per year."

For me, reports like this aren't about the economic projections, which are admittedly shaky, but rather are a way of emphasizing the importance of increasing national research and development efforts across a spectrum of technologies. As the authors point out, the collapsing costs of sequencing and editing genes are reshaping what's possible with biotech. Here are some of the possibilities they discuss.

When it comes to physical materials, the report notes that in the long run:
As much as 60 percent of the physical inputs to the global economy could, in principle, be produced biologically. Our analysis suggests that around one-third of these inputs are biological materials, such as wood, cotton, and animals bred for food. For these materials, innovations can improve upon existing production processes. For instance, squalene, a moisturizer used in skin-care products, is traditionally derived from shark liver oil and can now be produced more sustainably through fermentation of genetically engineered yeast. The remaining two-thirds are not biological materials—examples include plastics and aviation fuels—but could, in principle, be produced using innovative biological processes or be replaced with substitutes using bio innovations. For example, nylon is already being made using genetically engineered microorganisms instead of petrochemicals. To be clear, reaching the full potential to produce these inputs biologically is a long way off, but even modest progress toward it could transform supply and demand and economics of, and participants in, the provision of physical inputs.  ...
Biology has the potential in the future to determine what we eat, what we wear, the products we put on our skin, and the way we build our physical world. Significant potential exists to improve the characteristics of materials, reduce the emissions profile of manufacturing and processing, and shorten value chains. Fermentation, for centuries used to make bread and brew beer, is now being used to create fabrics such as artificial spider silk. Biology is increasingly being used to create novel materials that can raise quality, introduce entirely new capabilities, be biodegradable, and be produced in a way that generates significantly less carbon emissions. Mushroom roots rather than animal hide can be used to make leather. Plastics can be made with yeast instead of petrochemicals. ...
A significant share of materials developed through biological means are biodegradable and generate less carbon during manufacture and processing than traditional materials. New bioroutes are being developed to produce chemicals such as fertilizers and pesticides. ...
A deeper understanding of human genetics offers potential for improvements in health care, where the social benefits go well beyond higher economic output. The report estimates that there are 10,000 human diseases caused by a single gene.

A new wave of innovation is under way that includes cell, gene, RNA, and microbiome therapies to treat or prevent disease, innovations in reproductive medicine such as carrier screening, and improvements to drug development and delivery.  Many more options are being explored and becoming available to treat monogenic (caused by mutations in a single gene) diseases such as sickle cell anemia, polygenic diseases (caused by multiple genes) such as cardiovascular disease, and infectious diseases such as malaria. We estimate between 1 and 3 percent of the total global burden of disease could be reduced in the next ten to 20 years from these applications—roughly the equivalent of eliminating the global disease burden of lung cancer, breast cancer, and prostate cancer combined. Over time, if the full potential is captured, 45 percent of the global disease burden could be addressed using science that is conceivable today. ...
An estimated 700,000 deaths globally every year are the result of vector-borne infectious diseases. Until recently, controlling these infectious diseases by altering the genomes of the entire population of the vectors was considered difficult because the vectors reproduce in the wild and lose any genetic alteration within a few generations. However, with the advent of CRISPR, gene drives with close to 100 percent probability of transmission are within reach. This would offer a permanent solution to preventing most vector-borne diseases, including malaria, dengue fever, schistosomiasis, and Lyme disease.

The potential gains for agriculture as the global population heads toward 10 billion and higher seem pretty important, too.

Applications such as low-cost, high-throughput microarrays have vastly increased the amount of plant and animal sequencing data, enabling lower-cost artificial selection of desirable traits based on genetic markers in both plants and animals. This is known as marker-assisted breeding and is many times quicker than traditional selective breeding methods. In addition, in the 1990s, genetic engineering emerged commercially to improve the traits of plants (such as yields and input productivity) beyond traditional breeding. Historically, the first wave of genetically engineered crops has been referred to as genetically modified organisms (GMOs); these are organisms with foreign (transgenic) genetic material introduced. Now, recent advances in genetic engineering (such as the emergence of CRISPR) have enabled highly specific cisgenic changes (using genes from sexually compatible plants) and intragenic changes (altering gene combinations and regulatory sequences belonging to the recipient plant). Other innovations in this domain include using the microbiome of plants, soil, animals, and water to improve the quality and productivity of agricultural production; and the development of alternative proteins, including lab-grown meat, which could take pressure off the environment from traditional livestock and seafood.
More? Direct-to-consumer genetic testing is already a reality as a consumer product, but it will start to be combined with other goods and services based on your personal genetic profile: what vitamins and probiotics to take, meal services, cosmetics, whitening teeth, monitoring health, and more.

Pushing back against rising carbon emissions?

Genetically engineered plants can potentially store more CO2 for longer periods than their natural counterparts. Plants normally take in CO2 from the atmosphere and store carbon in their roots. The Harnessing Plant Initiative at the Salk Institute is using gene editing to create plants with deeper and more extensive root systems that can store more carbon than typical plants. These roots are also engineered to produce more suberin or cork, a naturally occurring carbon-rich substance found in roots that absorbs carbon, resists decomposition (which releases carbon back into the atmosphere), may enrich soil, and helps plants resist stress. When these plants die, they release less carbon back into the atmosphere than conventional plants. ...
Algae, present throughout the biosphere but particularly in marine and freshwater environments, are among the most efficient organisms for carbon sequestration and photosynthesis; they are generally considered photosynthetically more efficient than terrestrial plants. Potential uses of microalgal biomass after sequestration could include biodiesel production, fodder for livestock, and production of colorants and vitamins. Using microalgae to sequester carbon has a number of advantages. They do not require arable land and are capable of surviving well in places that other crop plants cannot inhabit, such as saline-alkaline water, land, and wastewater. Because microalgae are tiny, they can be placed virtually anywhere, including cities. They also grow rapidly. Most important, their CO2 fixation efficiency has been estimated at ten to 50 times higher than that of  terrestrial plants.
Using biotech to remediate earlier environmental damage or aid recycling?
One example is genetically engineered microbes that can be used to break down waste and toxins, and could, for instance, be used to reclaim mines. Some headway is being made in using microbes to recycle textiles. Processing cotton, for instance, is highly resource-intensive, and dwindling resources are constraining the production of petroleum-based fibers such as acrylic, polyester, nylon, and spandex. There is a great deal of waste, with worn-out and damaged clothes often thrown away rather than repaired. Less than 1 percent of the material used to produce clothing is recycled into new clothing, representing a loss of more than $100 billion a year. Los Angeles–based Ambercycle has genetically engineered microbes to digest polymers from old textiles and convert them into polymers that can be spun into yarns. Engineered microbes can also assist in the treatment of wastewater. In the United States, drinking water and wastewater systems account for between 3 and 4 percent of energy use and emit more than 45 million tons of GHG a year. Microbes—also known as microbial fuel cells—can convert sewage into clean water as well as generate the electricity that powers the process.
What about longer-run possibilities, still very much under research, that might bear fruit out beyond 2050?
  • "Biobatteries are essentially fuel cells that use enzymes to produce electricity from sugar. Interest is growing in their ability to convert easily storable fuel found in everyday sugar into electricity and the potential energy density this would provide. At 596 ampere hours per kilogram, the density of sugar would be ten times that of current lithium-ion batteries."
  • "Biocomputers that employ biology to mimic silicon, including the use of DNA to store data, are being researched. DNA is about one million times denser than hard-disk storage; technically, one kilogram of DNA could store the entirety of the world’s data (as of 2016)."
  • Of course, if people are going to live in space or on other planets, biotech will be of central importance. 

If your ideas about the technologies of the future begin and end with faster computing power, you are not dreaming big enough.

Tuesday, May 19, 2020

A Wake-Up Call about Infections in Long-Term Care Facilities

Those who live in long-term care facilities are by definition more likely to be older and facing multiple health risks. Thus, it's not unexpected that a high proportion of those dying from the coronavirus live in long-term care facilities. But the problem of infections and deaths in long-term care facilities predates the coronavirus pandemic and will likely outlast it, too. Here's some text from the Centers for Disease Control website:
Nursing homes, skilled nursing facilities, and assisted living facilities (collectively known as long-term care facilities, LTCFs) provide a variety of services, both medical and personal care, to people who are unable to manage independently in the community. Over 4 million Americans are admitted to or reside in nursing homes and skilled nursing facilities each year and nearly one million persons reside in assisted living facilities. Data about infections in LTCFs are limited, but it has been estimated in the medical literature that:
  • 1 to 3 million serious infections occur every year in these facilities.
  • Infections include urinary tract infection, diarrheal diseases, antibiotic-resistant staph infections and many others.
  • Infections are a major cause of hospitalization and death; as many as 380,000 people die of the infections in LTCFs every year.
If you're a number-curious person like me, you immediately think, "Where does that estimate of 380,000 deaths come from?" A bit of searching unearths that the 380,000 is from the National Action Plan to Prevent Health Care-Associated Infections, a title which has the nice ring of a program that is already well underway. But then you look at Phase Three: Long-Term Care Facilities, and it takes you to a report called "Chapter 8: Long-Term Care Facilities," which is dated April 2013. The 2013 report reads:
More recent estimates of the rates of HAIs [health-care associated infections] occurring in NH/SNF [nursing home/skilled nursing facility] residents range widely from 1.4 to 5.2 infections per 1,000 resident-care days.2,3 Extrapolations of these rates to the approximately 1.5 million U.S. adults living in NHs/SNFs suggest a range from 765,000 to 2.8 million infections occurring in U.S. NHs/SNFs every year.4 Given the rising number of individuals receiving more complex medical care in NHs/SNFs, these numbers might underestimate the true magnitude of HAIs in this setting. Additionally, morbidity and mortality due to HAIs in LTCFs [long-term care facilities] are substantial. Infections are among the most frequent causes of transfer from LTCFs to acute care hospitals and 30-day hospital readmissions.5,6 Data from older studies conservatively estimate that infections in the NH/SNF population could account for more than 150,000 hospitalizations each year and a resultant $673 million in additional health care costs.5 Infections also have been associated with increased mortality in this population.4,7,8 Extrapolation based on estimates from older publications suggests that infections could result in as many as 380,000 deaths among NH/SNF residents every year.5
Because I am on a hunt for the source of the estimate of 380,000 deaths, I take a look at note 5, which refers to a 1991 study: Teresi JA, Holmes D, Bloom HG, Monaco C & Rosen S. Factors differentiating hospital transfers from long-term care facilities with high and low transfer rates. Gerontologist. Dec 1991; 31(6):795-806.  
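As a side note, the infection extrapolation in the quoted passage is simple arithmetic: an infection rate per 1,000 resident-care days, multiplied by the resident population and by 365 days. A quick check (using only the figures quoted above) reproduces the reported range of roughly 765,000 to 2.8 million infections per year:

```python
residents = 1_500_000           # approximately 1.5 million U.S. adults living in NHs/SNFs
low_rate, high_rate = 1.4, 5.2  # reported infections per 1,000 resident-care days

def annual_infections(rate_per_1000_days):
    # rate per resident-day, times residents, times days in a year
    return (rate_per_1000_days / 1000) * residents * 365

low = annual_infections(low_rate)    # about 766,500
high = annual_infections(high_rate)  # about 2,847,000
print(f"{low:,.0f} to {high:,.0f} infections per year")
```

The wide range in the underlying per-day rates is exactly why the resulting annual estimates vary by nearly a factor of four.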

So, to summarize the bidding: here in 2020, in the midst of a pandemic in which infections are causing particular harm in nursing homes, the CDC website is quoting estimates of deaths from a study published in 2013, and the methodology for estimating those deaths relies on an extrapolation from a study published back in 1991.

I'm sure there are many good people making substantial efforts to reduce infections in long-term care facilities, often at meaningful risk to their own health. But ultimately, the degree of success in reducing infections isn't measured by good intentions or efforts: it's measured by actual counts of infections and deaths. And when the CDC describes estimates of "serious infections" that vary by a factor of three, and estimates of deaths based on extrapolations from a 1991 study, it seems pretty clear that the statistics about infections in long-term care facilities are not well-measured or consistent over time.

This problem of infections in long-term care facilities will matter well beyond the pandemic. Populations are aging everywhere: in the United States, 3.8% of the population is currently over 80, but by 2050 it will likely rise to 8.2%. The demand for long-term care is likely to rise accordingly,  which in turn will raise difficult questions about where the workers for such facilities and the financial support will come from. Here, I would emphasize that it will take redoubled efforts if the future rise in number of people in long-term care is not to be matched by a similar rise in the number of people subject to infections, including when (not if) future pandemics arrive.

Monday, May 18, 2020

The Bad News about the Big Jump in Average Hourly Wages

Average hourly wages in April 2020 were 7.9% higher than a year earlier, a very high jump. And as a moment's reflection will suggest, this is actually part of the terrible news for the US labor market. Of course, it's not true that the average hourly worker is getting a raise of 7.9%. Instead, the issue is that only workers who have jobs are included in the average. So the big jump in average hourly wages is actually telling us that a much higher proportion of workers with below-average wages have lost their jobs, so that the average wage of the hourly workers who still have jobs has risen.

Here are a couple of illustrative figures, taken from the always-useful US Economy in a Snapshot published monthly by the Federal Reserve Bank of New York. The blue line in this figure shows the rise in average hourly wages over the previous 12 months. The red line shows a measure of inflation, the Employment Cost Index. There has been lots of concern in the last few years about why wages were not rising more quickly, and as you can see, the increase in average earnings had pulled ahead of inflation rates in 2019.

However, the US economy is of course now experiencing a sharp fall in the number of hours worked. As the NY Fed notes, nonfarm payrolls fell by 20 million jobs in April, the largest fall in the history of this data series. Total weekly hours worked in the private sector fell by 14.9% in April compared with 12 months earlier. No matter how many checks and loans are handed out, the US economy will not have recovered until hours worked returns to normal levels.

You will sometimes hear statistics people talk about a "composition effect," which just means that if you are comparing a group over time, you need to beware of the possibility that the composition of the group is changing. Here, if you compare the average hourly earnings of those working for hourly wages over time, you need to beware that the composition of that group may have systematically shifted in some way. And indeed it has: the bottom of the labor market has fallen out for hourly workers who had been receiving below-average hourly wages.
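The arithmetic behind this composition effect is easy to see with a toy example. The wages below are made up purely for illustration; the point is that the average "jumps" even though no individual worker gets a raise.

```python
# Minimal sketch of a composition effect, with hypothetical wages.
# Nobody gets a raise, but the average hourly wage rises sharply
# when below-average earners lose their jobs.

wages = [10, 12, 14, 20, 30, 40]  # hypothetical hourly wages

avg_before = sum(wages) / len(wages)

# Suppose the three lowest-paid workers lose their jobs,
# so only the higher earners remain in the average.
still_employed = [w for w in wages if w >= 20]
avg_after = sum(still_employed) / len(still_employed)

print(f"average wage before: {avg_before:.1f}")  # 21.0
print(f"average wage after:  {avg_after:.1f}")   # 30.0
print(f"apparent 'raise': {100 * (avg_after / avg_before - 1):.0f}%")  # 43%
```

The measured average wage rises about 43 percent, yet every worker who kept a job earns exactly what they earned before.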

There's nothing nefarious about these statistics. The average hourly wage is a standard statistic published every month. The government statisticians just publish the numbers, as they should. It's up to citizens to understand what they mean.

Friday, May 15, 2020

Interview with Emi Nakamura: Price Dynamics, Monetary and Fiscal, and COVID-19 Adjustments

Douglas Clement at the Minneapolis Federal Reserve offers one of his characteristically excellent interviews, this one with Emi Nakamura, titled "On price dynamics, monetary policy, and this 'scary moment in history'" (May 6, 2020, Federal Reserve Bank of Minneapolis). Here are a few of Nakamura's comments that caught my eye, but there's much more in the full interview.

On the current macroeconomic situation
It’s a scary moment in history. I thought the Great Recession that started in 2007 was going to be the big macroeconomic event of my lifetime, but here we are again, little more than a decade later. ... More than other recessions, this particular episode feels like it fits into the classic macroeconomic framework of dividing things into “shocks” and “propagation”—mainly because in this case, it’s blindingly clear what the shock is and that it is completely unrelated to other forces in the economy. In the financial crisis, there was much more of a question as to whether things were building up in the previous decade—such as debt and a housing bubble—that eventually came to a head in the crisis. But here that’s clearly not the case.
Price rigidity at times of macroeconomic adjustment
You might think that it’s very easy to go out there and figure out how much rigidity there is in prices. But the reality was that at least until 20 years ago, it was pretty hard to get broad-based price data. In principle, you could go into any store and see what the prices were, but the data just weren’t available to researchers tabulated in a systematic way. ...
Once macroeconomists started looking at data for this broad cross section of goods, it was obvious that pricing behavior was a lot more complicated in the real world than had been assumed. If you look at, say, soft drink prices, they change all the time. But the question macroeconomists want to answer is more nuanced. We know that Coke and Pepsi go on sale a lot. But is that really a response to macroeconomic phenomena, or is that something that is, in some sense, on autopilot or preprogrammed? Another question is: When you see a price change, is it a response, in some sense, to macroeconomic conditions? We found that, often, the price is simply going back to exactly the same price as before the sale. That suggests that the responsiveness to macroeconomic conditions associated with these sales was fairly limited. ... 
One of the things that’s been very striking to me in the recent period of the COVID-19 crisis is that even with incredible runs on grocery products, when I order my online groceries, there are still things on sale. Even with a shock as big as the COVID shock, my guess is that these things take time to adjust. ... The COVID-19 crisis can be viewed as a prime example of the kind of negative productivity shock that neoclassical economists have traditionally focused on. But an economy with price rigidity responds much less efficiently to that kind of an adverse shock than if prices and wages were continuously adjusting in an optimal way.

What does the market learn from Fed announcements of changes in monetary policy?
The basic challenge in estimating the effects of monetary policy is that most monetary policy announcements happen for a reason. For example, the Fed has just lowered interest rates by a historic amount. Obviously, this was not a random event. It happened because of this massively negative economic news. When you’re trying to estimate the consequences of a monetary policy shock, the big challenge is that you don’t really have randomized experiments, so establishing causality is difficult.
Looking at interest rate movements at the exact time of monetary policy announcements is a way of estimating the pure effect of the monetary policy action. ... Intuitively, we’re trying to get as close as possible to a randomized experiment. Before the monetary policy announcement, people already know if, say, negative news has come out about the economy. The only new thing that they’re learning in these 30 minutes of the [time window around the monetary policy] announcement is how the Fed actually chooses to respond. Perhaps the Fed interprets the data a little bit more optimistically or pessimistically than the private sector. Perhaps their outlook is a little more hawkish on inflation. Those are the things that market participants are learning about at the time of the announcement. The idea is to isolate the effects of the monetary policy announcement from the effects of all the macroeconomic news that preceded it. Of course, you have to have very high-frequency data to do this, and most of this comes from financial markets. ...
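The high-frequency approach Nakamura describes can be sketched in a few lines. This is a toy version with synthetic data, not her actual dataset or method: the "monetary surprise" is the change in a short-term rate inside the tight window around each announcement, and the outcome (here, a hypothetical move in breakeven inflation) is regressed on that surprise.

```python
# Hedged sketch of high-frequency event-study identification.
# All data here are synthetic; the true coefficient is set to 0.1
# to mimic the "muted" inflation response described in the text.

import random

random.seed(0)

n = 200  # hypothetical number of policy announcements

# Surprise = change in a short-term rate (percentage points)
# within the narrow window around each announcement.
surprises = [random.gauss(0, 0.05) for _ in range(n)]

# Suppose each 1pp surprise moves breakeven inflation by only 0.1pp,
# plus noise unrelated to the announcement.
breakeven_moves = [0.1 * s + random.gauss(0, 0.02) for s in surprises]

# Simple OLS with one regressor: beta = cov(x, y) / var(x).
mx = sum(surprises) / n
my = sum(breakeven_moves) / n
cov = sum((x - mx) * (y - my) for x, y in zip(surprises, breakeven_moves)) / n
var = sum((x - mx) ** 2 for x in surprises) / n
beta = cov / var

print(f"estimated response of breakeven inflation: {beta:.2f}")
```

Because the surprise is measured in a window where little else is happening, the regression recovers a coefficient close to the small true response, which is the sense in which the design approximates a randomized experiment.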
The results completely surprised us. The conventional view of monetary policy is that if the Fed unexpectedly lowers interest rates, this will increase expected inflation. But we found that this response was extremely muted, particularly in the short run. The financial markets seemed to believe in a hyper-Keynesian view of the economy. Even in response to a significant expansionary monetary shock, there was very little response priced into bond markets of a change in expected inflation. ...
But, then, we were presenting the paper in England, and I recall that Marco Bassetto asked us to run one more regression looking at how forecasts by professional forecasters of GDP growth responded to monetary shocks. The conventional view would be that an expansionary monetary policy shock would yield forecasts of higher growth. When we ran the regression, the results actually went in the opposite direction from what we were expecting! An expansionary monetary shock was actually associated with a decrease in growth expectations, not the reverse! ... When Jay Powell or Janet Yellen or Ben Bernanke says, for example, “The economy is really in a crisis. We think we need to lower interest rates” ... perhaps the private sector thinks they can learn something about the fundamentals of the economy from the Fed’s announcements. This can explain why a big, unexpected reduction in interest rates could actually have a negative, as opposed to a positive, effect on those expectations.

The Plucking Model of Unemployment

A feature emphasized by Milton Friedman is that the unemployment rate doesn’t really look like a series that fluctuates symmetrically around an equilibrium “natural rate” of unemployment. It looks more like the “natural rate” is a lower bound on unemployment and that unemployment periodically gets “plucked” upward from this level by adverse shocks. Certainly, the current recession feels like an example of this phenomenon.
Another thing we emphasize is that if you look at the unemployment series, it appears incredibly smooth and persistent. When unemployment starts to rise, on average, it takes a long time to get back to where it was before. This is something that isn’t well explained by the current generation of macroeconomic models of unemployment, but it’s clearly front and center in terms of many economists’ thinking about the policy responses [to COVID-19]. A lot of the policy discussions have to do with trying to preserve links between workers and firms, and my sense is the goal here is to avoid the kind of persistent changes in unemployment that we’ve seen in other recessions.
For more on Nakamura and her work, the Journal of Economic Perspectives has a couple of articles to offer.

What Do We Know about Progress Toward a COVID-19 Vaccine?

There seem to me a few salient facts about the search for a COVID-19 vaccine.

1) According to a May 11 count by the World Health Organization, there are now 8 vaccine candidates in clinical trials and an additional 102 vaccines in pre-clinical evaluation. That seems like an encouragingly high number.

2) Influenza viruses are different from coronaviruses. We do have vaccines for many influenza viruses--that's the "flu shot" many of us get each fall. But there has never been a vaccine developed for a coronavirus. The two previous outbreaks of a coronavirus--SARS (severe acute respiratory syndrome) in 2002-3 and MERS (Middle East respiratory syndrome) in 2012--both saw substantial efforts to develop such a vaccine, but neither one succeeded. Eriko Padron-Regalado discusses "Vaccines for SARS-CoV-2: Lessons from Other Coronavirus Strains" in the April 23 issue of Infectious Diseases and Therapy. 

3) It's not 100% clear to me why the previous efforts to develop a coronavirus vaccine for SARS or MERS failed. Some of the discussion seems to suggest that there wasn't a strong commercial reason to develop such a vaccine. The SARS outbreak back in 2002-3 died out. While some cases of MERS still happen, they are relatively few and seem limited to Saudi Arabia and nearby areas in the Middle East. Thus, one possible answer for the lack of a previous coronavirus vaccine is a lack of effort--an answer which would not reflect well on those who provide funding and set priorities for biomedical research.

4) The other possible answer is that it may be hard to develop that first coronavirus vaccine, which is why dozens of previous efforts to do so with SARS and MERS failed. Padron-Regalado put it this way (boldface is mine): "In vaccine development, it is ideal that a vaccine provides long-term protection. Whether long-term protection can be achieved by means of vaccination or exposure to coronaviruses is under debate, and more information is needed in this regard." A recent news story talked to researchers who tried to find a SARS vaccine, and summarized their sentiments with comments like: "But there’s no guarantee, experts say, that a fully effective COVID-19 vaccine is possible. ... “Some viruses are very easy to make a vaccine for, and some are very complicated,” says Adolfo García-Sastre, director of the Global Health and Emerging Pathogens Institute at the Icahn School of Medicine at Mount Sinai. “It depends on the specific characteristics of how the virus infects.” Unfortunately, it seems that COVID-19 is on the difficult end of the scale. ... At this point, it’s not a given that even an imperfect vaccine is a slam dunk."

5) At best, vaccines take time to develop, especially if you are thinking about giving them to a very wide population group with different ages, genetics, and pre-existing conditions.

So by all means, research on a vaccine for the novel coronavirus should proceed full speed ahead. In addition, even if we reach a point where the disease itself seems to be fading, that research should keep proceeding with the same urgency. That research may offer protection against a future wave of the virus in the next year or two. Or it may offer scientific insights that will help with a vaccine targeted at a future outbreak of a different coronavirus. 

But we can't reasonably make current public policy about stay-at-home, business shutdowns, social distancing, or other public policy steps based on a hope or expectation that a coronavirus vaccine will be ready anytime soon.