
Wednesday, March 31, 2021

The Spread in Labor Costs Across the European Union

In a common market, labor costs will look fairly similar across areas. Sure, there will be some places with differing skill levels, different mixes of industry, and different levels of urbanization, thus leading to somewhat higher or lower labor costs. But over time, workers from lower-pay areas will tend to relocate to higher-pay areas and employers in higher-pay areas will tend to relocate to lower-pay areas. Thus, it's interesting that the European Union continues to show large gaps in hourly labor costs. 

Here are some figures just released by Eurostat (March 31, 2021) on labor costs across countries. As you can see, hourly labor costs are around €40/hour in Denmark, Luxembourg, and Belgium, but €10/hour or below in some countries of eastern Europe like Poland or the Baltic states like Lithuania. (For comparison, a euro is at present worth about $1.17 in US dollars. Norway and Iceland are not part of the European Union, but they are part of a broader grouping called the European Economic Area.)

Another major difference across EU countries is in what share of the labor costs paid by employers represent non-wage costs--that is, payments made by employers directly to the government for pensions and other social programs. In France and Sweden, these non-wage costs are about one-third of total hourly labor costs. It's interesting that in Denmark, commonly thought of as a Scandinavian high social-spending country, non-wage costs are only about 15% of total labor costs--because Denmark chooses not to finance its social spending by loading up the costs on employers to the same extent. 

These differences suggest some of the underlying stresses on the European Union. Given these wage gaps across countries, tensions in high-wage countries about migration from lower-wage countries and competition from firms in lower-wage countries will remain high. The large differences in non-wage costs as part of what employers pay for labor reflect some of the dramatic differences across EU countries in levels of social benefits and how those benefits are financed. Proposals for European-wide spending and taxing programs, along with the desire of higher-income EU countries not to pay perpetual subsidies to lower-income countries, run into these realities every day. 

For comparison, here are some recent figures from the US Census Bureau on average employer costs per hour across the nine Census "divisions." Yes, there are substantial differences between, say, the Pacific or New England divisions and the East South Central or West South Central divisions. But the United States is much more of a unified market than the European Union, both in wage levels and in the way non-wage labor costs are structured, and so the gaps are much smaller. 


Tuesday, March 30, 2021

Data and Development

The 2021 World Development Report, released in March 2021, is focused on the theme of "Data for Better Lives." The WDR is one of the annual flagship reports of the World Bank, and it is always a nice mixture of big-picture overview and specific examples. Here, I'll focus on a few of the themes that occurred to me in reading the report. 

First, there are lots of examples of how improved data can help economic development. For many economists, the first reaction is to think about dissemination of information related to production and markets. As the report notes: 
For millennia, farming and food supply have depended on access to accurate information. When will the rains come? How large will the yields be? What crops will earn the most money at market? Where are the most likely buyers located? Today, that information is being collected and leveraged at an unprecedented rate through data-driven agricultural business models. In India, farmers can access a data-driven platform that uses satellite imagery, artificial intelligence (AI), and machine learning (ML) to detect crop health remotely and estimate yield ahead of the harvest. Farmers can then share such information with financial institutions to demonstrate their potential profitability, thereby increasing their chance of obtaining a loan. Other data-driven platforms provide real-time crop prices and match sellers with buyers.
Other examples are about helping governments improve and better target the provision of public services: 
The 2015 National Water Supply and Sanitation Survey commissioned by Nigeria’s government gathered data from households, water points, water schemes, and public facilities, including schools and health facilities. These data revealed that 130 million Nigerians (or more than two-thirds of the population at that time) did not meet the standard for sanitation set out by the Millennium Development Goals and that inadequate access to clean water was especially an issue for poor households and in certain geographical areas (map O.2). In response to the findings from the report based on these data, President Muhammadu Buhari declared a state of emergency in the sector and launched the National Action Plan for the Revitalization of Nigeria’s Water, Sanitation and Hygiene (WASH) Sector.
 
Other examples are from the private sector, like logistics platforms to help coordinate trucking services.

These platforms (often dubbed “Uber for trucks”) match cargo and shippers with trucks for last-mile transport. In lower-income countries, where the supply of truck drivers is highly fragmented and often informal, sourcing cargo is a challenge, and returning with an empty load contributes to high shipping costs. In China, the empty load rate is 27 percent versus 13 percent in Germany and 10 percent in the United States. Digital freight matching overcomes these challenges by matching cargo to drivers and trucks that are underutilized. The model also uses data insights to optimize routing and provide truckers with integrated services and working capital. Because a significant share of logistics services in lower-income countries leverage informal suppliers, these technologies also represent an opportunity to formalize services. Examples include Blackbuck (India), Cargo X (Brazil), Full Truck Alliance (China), Kobo360 (Ghana, Kenya, Nigeria, Togo, Uganda), and Lori (Kenya, Nigeria, Rwanda, South Sudan, Tanzania, Uganda). In addition to using data for matching, Blackbuck uses various data to set reliable arrival times, drawing on global positioning system (GPS) data and predictions on the length of driver stops. Lori tracks data on costs and revenues per lane, along with data on asset utilization, to help optimize services. Cargo X charts routes to avoid traffic and reduce the risk of cargo robbery. Kobo360 chooses routes to avoid armed bandits based on real-time information shared by drivers. Many of the firms also allow shippers to track their cargo in real time. Data on driver characteristics and behavior have allowed platforms to offer auxiliary services to address the challenges that truck drivers face. For example, some platforms offer financial products to help drivers pay upfront costs, such as tolls, fuel, and tires, as well as targeted insurance products. Kobo360 claims that its drivers increase their monthly earnings by 40 percent and that users save an average of about 7 percent in logistics costs. Lori claims that more than 40 percent of grain moving through Kenya to Uganda now moves through its platform, and that the direct costs of moving bulk grain have been reduced by 17 percent in Uganda.

Some examples combine government efforts with privately-generated data. For example, there are estimates that reducing road mortality by half could save 675,000 lives a year. But how can the government know where to invest in infrastructure and enforcement efforts?  

Unfortunately, many countries facing these difficult choices have little or no data on road traffic crashes and inadequate capacity to analyze the data they do have. Official data on road traffic crashes capture only 56 percent of fatalities in low- and middle-income countries, on average. Crash reports exist, yet they are buried in piles of paper or collected by private operators instead of being converted into useful data or disseminated to the people who need the information to make policy decisions. In Kenya, where official figures underreport the number of fatalities by a factor of 4.5, the rapid expansion of mobile phones and social media provides an opportunity to leverage commuter reports on traffic conditions as a potential source of data on road traffic crashes. Big data mining, combined with digitization of official paper records, has demonstrated how disparate data can be leveraged to inform urban spatial analysis, planning, and management. Researchers worked in close collaboration with the National Police Service to digitize more than 10,000 situation reports spanning from 2013 to 2020 from the 14 police stations in Nairobi to create the first digital and geolocated administrative dataset of individual crashes in the city. They combined administrative data with data crowdsourced using a software application for mobile devices and short message service (SMS) traffic platform, Ma3Route, which has more than 1.1 million subscribers in Kenya. They analyzed 870,000 transport-related tweets submitted between 2012 and 2020 to identify and geolocate 36,428 crash reports by developing and improving natural language processing and geoparsing algorithms. ... By combining these sources of data, researchers were able to identify the 5 percent of roads ... where 50 percent of the road traffic deaths occur in the city ... This exercise demonstrates that addressing data scarcity can transform an intractable problem into a more manageable one.
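That last step--finding the small share of roads that account for half of the deaths--is, at heart, a cumulative-share calculation. Here's a minimal sketch in Python of what it might look like, using a hypothetical table of geolocated crash records (the road segments and death counts are made up for illustration; this is not the researchers' actual code or data):

```python
import pandas as pd

# Hypothetical geolocated crash records: one row per reported crash,
# tagged with the road segment where it occurred (illustrative data only).
crashes = pd.DataFrame({
    "road_segment": ["A104-km12", "A104-km13", "Outer-Ring-km4",
                     "Thika-Rd-km7", "Langata-Rd-km2", "A104-km12",
                     "Thika-Rd-km7", "Mombasa-Rd-km9"],
    "deaths": [3, 1, 2, 4, 0, 2, 1, 5],
})

# Total deaths per road segment, sorted from most to least dangerous.
by_segment = (crashes.groupby("road_segment")["deaths"]
              .sum()
              .sort_values(ascending=False))

# Cumulative share of all deaths as we move down the ranking.
cum_share = by_segment.cumsum() / by_segment.sum()

# Smallest set of top-ranked segments accounting for at least half of deaths.
hotspots = cum_share[cum_share.shift(fill_value=0) < 0.5].index.tolist()
print(hotspots)
```

The hard part in the actual exercise, of course, is upstream: digitizing the paper police reports and extracting and geolocating crash reports from the tweets.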
There are lots of other examples in the report. "For remote populations around the world, receiving specialized medical care has been nearly impossible without having to travel miles to urban areas. Today, telehealth clinics and their specialists can monitor and diagnose patients remotely using sensors that collect patient health data and AI that helps analyze such data." Similar points can be made about delivering education services. "DigiCow, pioneered in Kenya, keeps digital health records on cows and matches farmers with qualified veterinary services."

My second main reaction to the report is that, despite the many individual examples of how data can help in economic development, there are substantial gaps in the data infrastructure for developing economies. At the national level, most countries now do a full census about once a decade, which often provides a reasonable population count at that time. But details on the population are often scanty. The report notes: 
Lack of completeness is often less of a problem in census and survey data because they are designed to cover the entire population of interest. For administrative data, the story is different. Civil registration and vital statistics systems (births and deaths) are not complete in any low-income country, compared with completeness in 22 percent of lower-middle-income countries, 51 percent of upper-middle-income countries, and 95 percent of high-income countries. These gaps leave about 1 billion people worldwide without official proof of identity. More than one-quarter of children overall, and more than half of children in Sub-Saharan Africa, under the age of five are not registered at birth.
As another example of missing data, "Ground-based sensors, deployed in Internet of Things systems, can measure some outcomes, such as air pollution, climatic conditions, and water quality, on a continual basis and at a low cost. However, adoption of these technologies is still too limited to provide timely data at scale, particularly in low-income countries."

In some cases, it's possible to use other data sources to fill in some of the gaps. For example, measuring poverty is often done by carrying out much more detailed household surveys in a few areas, and then using the once-a-decade census data to project this to the country as a whole. The result is a reasonable statistical estimate of the poverty rate for the country as a whole, but not much knowledge about the location of actual poor people across the country. The report notes: 
Estimates of poverty are usually statistically valid for a nation and at some slightly finer level of geographic stratification, but rarely are such household surveys designed to provide the refined profiles of poverty that would allow policies to mitigate poverty to target the village level or lower. Meanwhile, for decades high-resolution poverty maps have been produced by estimating a model of poverty from survey data and then mapping this model onto census data, allowing an estimate of poverty for every household in the census data. A problem with this approach is that census data are available only once a decade (and in many poorer countries even less frequently). Modifications of this approach have replaced population census data with CDR [call detail record, from phones] data or various types of remote sensing data (typically from satellites, but also from drones). This repurposing of CDR or satellite data can provide greater resolution and timelier maps of poverty. For example, using only household survey data the government of Tanzania was able to profile the level of poverty across only 20 regions of the country’s mainland. Once the household survey data were combined with satellite imagery data, it became possible to estimate poverty for each of the country’s 169 districts (map O.3). Combining the two data sources increased the resolution of the poverty picture by eightfold with essentially no loss of precision.
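The core mechanics of that approach--estimate a poverty model on the household survey, then apply it to covariates available for the whole country--can be sketched in a few lines. Here's a purely illustrative version in Python, with made-up data and column names (satellite-derived features like nighttime lights standing in for whatever covariates an actual exercise would use); it is not the method or data behind the Tanzania estimates:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical survey data: one row per surveyed household, with a poverty
# indicator and satellite-derived features for its location.
survey = pd.DataFrame({
    "poor":         [1, 0, 1, 0, 0, 1, 0, 1],
    "night_lights": [0.1, 0.8, 0.2, 0.9, 0.7, 0.1, 0.6, 0.3],
    "built_up":     [0.05, 0.40, 0.10, 0.55, 0.35, 0.08, 0.30, 0.12],
})

# Fit a simple poverty model on the survey sample.
features = ["night_lights", "built_up"]
model = LogisticRegression().fit(survey[features], survey["poor"])

# Hypothetical district-level satellite features covering the whole country.
districts = pd.DataFrame({
    "district":     ["D1", "D2", "D3"],
    "night_lights": [0.15, 0.75, 0.45],
    "built_up":     [0.09, 0.38, 0.22],
})

# Predicted poverty rate for every district, not just the surveyed areas.
districts["predicted_poverty_rate"] = model.predict_proba(
    districts[features])[:, 1]
print(districts)
```

Real poverty-mapping work uses more careful small-area estimation methods, with survey weights, richer models, and measures of uncertainty, but the basic step of carrying a survey-estimated model over to higher-resolution data is the same.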
The complementary problem is that data infrastructure in many low-income countries is often weak. This is a problem in the obvious way that many people and firms have a hard time accessing available data. But it's also a problem in a less obvious way: people who can't access data also can't contribute data. They can't answer surveys, report on local conditions, offer feedback and advice, or offer access to data on purchase patterns and even (via cell-phone data) on location patterns. As the report notes: 
That said, efforts to move toward universal access face fundamental challenges. First, because of the continual technological innovation in mobile technology service, coverage is a moving target. Whereas in 2018, 92 percent of the world’s population lived within range of a 3G signal (offering speeds of 40 megabytes per second), that share dropped to 80 percent for 4G technology (providing faster speeds of 400 megabytes per second, which are needed for more sophisticated smartphone applications that can promote development). The recent commercial launch of 5G technology (reaching speeds of 1,000 megabytes per second) in a handful of leading-edge markets risks leaving the low-income countries even further behind. ...
The second challenge is that a substantial majority of the 40 percent of the world’s population who do not use data services live within range of a broadband signal. Of people living in low- and middle-income countries who do not access the internet, more than two-thirds stated in a survey that they do not know what the internet is or how to use it, indicating that digital literacy is a major issue.
Affordability is also a factor in low- and middle-income countries, where the cost of an entry-level smartphone represents about 80 percent of monthly income of the bottom 20 percent of households. Relatively high taxes and  duties further contribute to this expense. As costs come down in response to innovation, competitive pressures, and sound government policy, uptake in use of the internet will likely increase. Yet even among those who do use the internet, consumption of data services stands at just 0.2 gigabytes per capita per month, a fraction of what this Report estimates may be needed to perform basic social and economic functions online.
As a third reaction, the report often refers to potential dangers of increasing the role of data in an economy, including invasions of personal privacy and the danger of monopolistic companies using data to exploit consumers. In high-income countries and some middle-income countries, these are certainly important subjects for discussion. But in the context of low-income economies, it seems to me that the challenges of the lack of data are so substantial that worries about problems from widespread data are premature. 

The situation reminds me of Joan Robinson's comment in her 1962 book Economic Philosophy (p. 46 of my Pelican Book edition): "The misery of being exploited by capitalists is nothing compared to the misery of not being exploited at all." In a similar spirit, one might say that the misery of data being misused or monopolized is nothing compared to the misery of data barely being used at all. 

Finally, data is of course not valuable in isolation, but rather because of the ways that it may help people, firms, and governments to choose different actions. In the examples above, for instance, data can help a government understand the location of social needs, help a farmer adjust agricultural practices, help a producer ship products to a buyer, or provide a way for someone to find work in the gig economy. Data flows are also a feedback mechanism, both for markets and for government. Without data to show the extent of problems, it's harder to hold public officials accountable.  

For some previous posts with additional discussion of government data and academic data, much of it from the context of the US and other high-income countries, see: 

Monday, March 29, 2021

Will the Fed Keep Interest Rates Low for the US Treasury?

Looking at the long-term budget projections from the Congressional Budget Office, which are based on current legislation, a key problem is that interest payments on past borrowing start climbing higher and higher--and as those who have overborrowed on their credit cards know all too well, once you are on that interest rate treadmill it's hard to get off. So, will the Federal Reserve help out the US government by keeping interest rates ultra-low for the foreseeable future? Fed Governor Christopher J. Waller says no in his talk "Treasury–Federal Reserve Cooperation and the Importance of Central Bank Independence" (March 29, 2021, given via webcast at the Peterson Institute for International Economics). Here's Waller: 
Because of the large fiscal deficits and rising federal debt, a narrative has emerged that the Federal Reserve will succumb to pressures (1) to keep interest rates low to help service the debt and (2) to maintain asset purchases to help finance the federal government. My goal today is to definitively put that narrative to rest. It is simply wrong. Monetary policy has not and will not be conducted for these purposes. My colleagues and I will continue to act solely to fulfill our congressionally mandated goals of maximum employment and price stability. The Federal Open Market Committee (FOMC) determines the appropriate monetary policy actions solely to move the economy towards those goals. Deficit financing and debt servicing issues play no role in our policy decisions and never will.
Interestingly, Waller goes back to the previous time when federal debt relative to GDP was hitting all-time highs--just after World War II. The analogy to that large rise in government debt interests me. In 1941, federal debt held by the public was 41.5% of GDP; by 1946, it had leaped to 106.1% of GDP. The Fed was essentially willing to hand off interest rate policy to the US Treasury during World War II: to put it another way, the Fed was fine with low interest rates as a way of helping to raise funds to win the war. But a few years after World War II, even though the US Treasury would have preferred an ongoing policy of low interest rates with all the accumulated debt, the Fed took back interest rate policy. Waller said (footnotes omitted): 

When governments run up large debts, the interest cost to servicing this debt will be substantial. Money earmarked to make interest payments could be used for other purposes if interest rates were lower. Thus, the fiscal authority has a strong incentive to keep interest rates low.
The United States faced this situation during World War II. Marriner Eccles, who chaired the Federal Reserve at the time, favored financing the war by coupling tax increases with wage and price controls. But, ultimately, he and his colleagues on the FOMC [Federal Open Market Committee] concluded that winning the war was the most important goal, and that providing the government with cheap financing was the most effective way for the Federal Reserve to support that goal. So the U.S. government ran up a substantial amount of debt to fund the war effort in a low interest rate environment, allowing the Treasury to have low debt servicing costs. This approach freed up resources for the war effort and was the right course of action during a crisis as extreme as a major world war.

After the war was over and victory was achieved, the Treasury still had a large stock of debt to manage and still had control over interest rates. The postwar boom in consumption, along with excessively low interest rates, led to a burst of inflation. Without control over interest rates, the Federal Reserve could not enact the appropriate interest rate policies to rein in inflation. As a result, prices increased 41.8 percent from January 1946 to March 1951, or an average of 6.3 percent year over year. This trend, and efforts by then-Chair Thomas McCabe and then-Board member Eccles, ultimately led to the Treasury-Fed Accord of 1951, which restored interest rate policy to the Federal Reserve. The purpose of the accord was to ensure that interest rate policy would be implemented to ensure the proper functioning of the economy, not to make debt financing cheap for the U.S. government.
For comparison, in 2007, before the Great Recession, federal debt held by the public was 35.2% of GDP. By the end of the Great Recession, federal debt had doubled to 70.3% of GDP. The most recent Congressional Budget Office projections, from February, forecast that federal debt will be 92.7% of GDP this year. This should be considered a lower-end estimate, because these projections were done before the passage of the American Rescue Plan Act signed into law on March 11, 2021. 

Thus, the ratio of federal debt/GDP rose by 65 percentage points in the five years from 1941-1946.  It has now risen (at least) 57 percentage points over the 14 years from 2007-2021. In rough terms, it's fair to say that federal borrowing for the Great Recession and the pandemic has been quite similar (relative to the size of the US economy) to federal borrowing to fight World War II. Of course, a major difference is that federal spending dropped precipitously after World War II, while the current projections for federal spending suggest an ongoing rise. 
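For anyone who wants to check that arithmetic, here is the simple calculation, using only the debt/GDP figures quoted above:

```python
# Federal debt held by the public as a share of GDP, from the figures above.
episodes = {
    "World War II, 1941-1946": (41.5, 106.1),
    "Great Recession and pandemic, 2007-2021": (35.2, 92.7),
}

for name, (start, end) in episodes.items():
    print(f"{name}: rose {end - start:.1f} percentage points "
          f"(from {start}% to {end}% of GDP)")
```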

In extreme situations, including World War II, the Great Recession, and the pandemic recession, the Fed and the rest of the US government have focused on addressing the immediate need. But by definition, emergencies can't last forever. Given the current trajectories of spending and taxes, we are on a path where, at some point in the medium term, a confrontation between the enormous borrowing of the US Treasury and the Fed's control over interest rates seems plausible. 

For more on the Fed-Treasury Accord of 1951, when the Fed took back control over interest rates, useful starting points include:

Wednesday, March 24, 2021

China-US Trade: Some Patterns Since 1990

In US-based conversations about China-US trade, it sometimes seems to me that the working assumption is that China's economy is heavily dependent on trade with the United States--which in turn would give the US government strong leverage in trade disputes.  How true is that assumption? Here's some baseline evidence from the DHL Global Connectedness Index 2020: The State of Globalization in a Distancing World, by Stephen A. Altman and Phillip Bastian (December 2020). 

These first two figures put China-US trade in perspective from China's side: the top panel shows it relative to China's GDP, and the bottom panel shows it relative to China's total trade flows. The bottom line is that China's exports to the US were as high as 7% of China's GDP back in 2007, after the big surge in China's exports to the entire world that followed China joining the World Trade Organization in 2001. But in the last few years, Chinese exports to the US have been less than 4% of China's GDP and were falling even before President Trump set off the trade war. 

China's exports to the US as a share of China's total exports went up considerably in the 1990s. But in the last decade or so, China's exports to the US were typically about 18-20% of China's total exports, before dropping lower in the trade war.

What if we do the same calculations about US-China trade, but this time looking at the size of the flows relative to the US economy? The next figure shows US imports from China as a share of US GDP: typically about 2.4-2.8% of US GDP in the last decade, before dropping lower in the trade war. 

The next panel shows that US imports from China rose to about 21% of total US trade in the years before the pandemic--and seem to have rebounded back to that level after a short drop during the trade war. 

Altman and Bastian describe some other patterns of US-China economic interactions as well: 
Beyond trade, trends are mixed across other flows between the US and China. FDI flows in both directions rose from 2018 to 2019, although Chinese FDI into the US remained far below its 2016 peak. According to a recent analysis from the Peterson Institute for International Economics, “despite the rhetoric, US-China financial decoupling is not happening.” On the other hand, Chinese tourism to the US began declining in 2018, after 15 consecutive years of increases. And while it does not (yet) show up in broad patterns of international flows, US-China tensions over key technologies continue to boil, most notably with respect to 5G networking equipment (centered on Huawei) and social media (TikTok, WeChat) ...

Of course, the reality of international trade is that saying "China depends on the US for a substantial share of export sales" has precisely the same meaning as saying "the US depends on China for a substantial share of its supplies from imports."  Yes, the US could buy more from non-China countries and China could sell more to non-US countries, but changing the address labels on the shipping crates doesn't make much difference to the underlying economic forces at work. I'm reminded of a comment from Lawrence Summers in an interview last spring about US-China relations: 

At the broadest level, we need to craft a relationship with China from the principles of mutual respect and strategic reassurance, with rather less of the feigned affection that there has been in the past. We are not partners. We are not really friends. We are entities that find ourselves on the same small lifeboat in turbulent waters a long way from shore.

Monday, March 22, 2021

Mission Creep for Bank Regulators and Central Banks

The standard argument for government regulators who supervise bank risk is that if banks take on too much risk in the pursuit of short-term profits, and in doing so raise the risk of becoming insolvent, there are dangers not just to the banks themselves, but also to bank depositors, the supply of credit in the economy, and other intertwined financial institutions. To put it another way, if the government is likely to end up bailing out individuals, firms, or the economy itself, then the government has a reason to check on how much risk is being taken.  

But what if countries start to load up the bank regulators with a few other goals at the same time? What tradeoffs might emerge? Sasin Kirakul, Jeffery Yong, and Raihan Zamil describe the situation in "The universe of supervisory mandates – total eclipse of the core?"  (Financial Stability Institute Insights on policy implementation No 30, March 2021).

Specifically, they look at bank regulators across 27 jurisdictions. In about half of these, the central bank also has the job of bank supervision; in the other half, a separate regulatory agency has the job. In all these jurisdictions, the bank regulators are to focus on "safety and soundness." But the authors identify 13 other jobs that are simultaneously being assigned to bank regulators--and they note that most bank regulators have at least 10 of these other jobs. They suggest visualizing the responsibilities with this diagram: 

The basic goal of supporting the public interest is at the bottom, with the core idea of safety and soundness of banking institutions right above. This is surrounded by five of what they call "surveillance and oversight" goals: financial stability; crisis management; AML/CFT, which stands for anti-money laundering/combating the financing of terrorism; resolution, which refers to closing down insolvent banks; and consumer protection. The outer semicircle then includes seven "promotional objectives": promoting financial sector development, financial literacy, financial inclusion, competition in the financial sector, efficiency, facilitating financial technology and innovation, and positioning the domestic market as an international financial center. Then off to the right you see "climate change," which can be viewed as either an oversight/surveillance goal (that is, are banks and financial institutions taking these risks into account?) or a promotional goal (is sufficient capital flowing to this purpose?). 

There are ongoing efforts to add just a few more items to the list. For example, some economists at the IMF have argued that central banks like the Federal Reserve should go beyond the monetary policy issues of looking at employment, inflation, and interest rates, and also beyond the financial regulation responsibilities that many of them already face, and should also look at trying to address inequality. 

For the United States, the current statutory goals for financial regulators include safety and soundness as well as the five surveillance and oversight goals--although in the US setting these goals are somewhat divided between different agencies like the Federal Reserve, the Office of the Comptroller of the Currency, and the Federal Deposit Insurance Corporation. There are also statutory directives for certain agencies to pursue consumer protection and financial inclusion, and non-statutory mandates to promote financial literacy and fintech/innovation and to take climate change concerns into account in some way.  

In some situations, of course, these other goals can reinforce the basic goal of safety and soundness in banking. In other situations, not so much. For example, during a time of economic crisis, should the financial regulator also be pressing hard to make sure all banks are safe and sound, or should it give them a bit more slack at that time? Does "developing the financial sector" mean building up certain banks to be more profitable, while perhaps charging consumers more? What if promoting fintech/innovation could cause some banks to become weaker, thus reducing their safety and soundness and perhaps leading to less competition? Does the climate change goal involve bank regulators in deciding what particular firms or industries are "safe" or "risky" borrowers, and thus who will receive credit? 

There's a standard problem that when you start aiming at many different goals all at once, you often face some tradeoffs between those goals. For example, imagine a person planning a dinner with the following goals: tastes appealing to everyone; also tastes different and interesting; includes fiber, protein, vitamins, and all needed nutrients; low calorie; locally sourced; easily affordable; can be prepared with no more than one hour of cooking time; and freezes well for leftovers. All the goals are worthy ones, and with some effort, one can often find a compromise solution that fits most of them. But you will almost certainly need to do less on some of the goals to make it possible to meet other goals. (Pre-pandemic, one of the last dinner parties my wife and I gave was for guests who between them were vegetarian, gluten-free, dairy-free, and avoiding beans and legumes. Talk about compromises on the menu!)

In the case of the regulators who supervise banks, the more tasks you give them to do, the less attention and energy they will inevitably have for the core "safety and soundness" regulation. Also, more goals typically mean that the regulators have more discretion when trading off one objective against another, and thus it becomes harder to hold them to account. Those who need to aim at a dozen or more different targets are likely to end up missing at least some of them, much of the time. 

Friday, March 19, 2021

Measuring Teaching Quality in Higher Education

For every college professor, teaching is an important part of their job. For most college professors, who are not located at the relatively few research-oriented universities, teaching is the main part of their job. So how can we evaluate whether teaching is being done well or poorly? This question applies at the individual level, but also to bigger institutional questions: for example, are faculty with lifetime tenure, who were granted tenure in substantial part for their performance as researchers, better teachers than faculty with short-term contracts?  David Figlio and Morton Schapiro tackle such questions in "Staffing the Higher Education Classroom" (Journal of Economic Perspectives, Winter 2021, 35:1, 143-62). 

The question of how to evaluate college teaching isn't easy. For example, there are no annual exams of the sort that often occur at the K-12 level, nor are certain classes followed by a common exam like the AP exams in high school. My experience is that the faculty at colleges and universities are not especially good at self-policing the quality of teaching. In some cases, newly hired faculty get some feedback and guidance, and there are hallway discussions about especially awful teachers, but that's about it. Many colleges and universities have questionnaires on which students can evaluate faculty. This is probably a better method than throwing darts in the dark, but it is also demonstrably full of biases: students may prefer easier graders, classes that require less work, or classes with an especially charismatic professor. There is a developed body of evidence that white American faculty members tend to score higher. Figlio and Schapiro write: 

Concerns about bias have led the American Sociological Association (2019) to caution against over-reliance on student evaluations of teaching, pointing out that “a growing body of evidence suggests that their use in personnel decisions is problematic” given that they “are weakly related to other measures of teaching effectiveness and student learning” and that they “have been found to be biased against women and people of color.” The ASA suggests that “student feedback should not be used alone as a measure of teaching quality. If it is used in faculty evaluation processes, it should be considered as part of a holistic assessment of teaching effectiveness.” Seventeen other scholarly associations, including the American Anthropological Association, the American Historical Association, and the American Political Science Association, have endorsed the ASA report ...
Figlio and Schapiro suggest two measures of effective teaching for intro-level classes: 1) how many students from a certain intro-level teacher go on to become majors in the subject, and 2) "deep learning," which is a combination of how many students in an intro-level class go on to take any additional classes in the subject, and whether students from a certain teacher tend to perform better in those follow-up classes. The authors are based at Northwestern University, and so they were able to obtain "registrar data on all Northwestern University freshmen who entered between fall 2001 and fall 2008, a total of 15,662 students, and on the faculty who taught them during their first quarter at Northwestern." 
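To make those two measures concrete, here is a bare-bones sketch of how they might be computed from registrar-style records. The table layout and column names are hypothetical, and the actual Figlio-Schapiro analysis is far more careful (within-department comparisons, controls for student characteristics, and so on):

```python
import pandas as pd

# Hypothetical registrar records: one row per student in an intro course.
records = pd.DataFrame({
    "student":          [1, 2, 3, 4, 5, 6],
    "intro_instructor": ["A", "A", "A", "B", "B", "B"],
    "became_major":     [1, 0, 1, 0, 0, 1],
    "took_followup":    [1, 1, 1, 0, 1, 1],
    "followup_grade":   [3.7, 3.0, 3.3, None, 2.7, 3.0],  # None if no follow-up course
})

by_instructor = records.groupby("intro_instructor")

# Measure 1: share of an instructor's intro students who go on to major.
major_rate = by_instructor["became_major"].mean()

# Measure 2 ("deep learning"): share taking any further course in the subject,
# plus the average performance of those students in the follow-up course.
followup_rate = by_instructor["took_followup"].mean()
followup_grade = by_instructor["followup_grade"].mean()  # NaNs are skipped

print(pd.DataFrame({"major_rate": major_rate,
                    "followup_rate": followup_rate,
                    "followup_grade": followup_grade}))
```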

Of course, Figlio and Schapiro emphasize that their approach is focused on Northwestern students, who are not a random cross-section of college students. The methods they use may need to be adapted in other higher-education contexts. In addition, this focus on first-quarter teaching of first-year students is an obvious limitation in some ways, but given that the first quarter may also play an outsized role in the adaptation of students to college, it has some strengths, too. In addition, they focus on comparing faculty within departments, so that econ professors are compared to other econ professors, philosophy professors to other philosophy professors, and so on. But with these limitations duly noted, they offer what might be viewed as preliminary findings that are nonetheless worth considering. 

For example, it seems as if their two measures of teaching quality are not correlated: "That is, teachers who leave scores of majors in their wake appear to be no better or worse at teaching the material needed for future courses than their less inspiring counterparts; teachers who are exceptional at conveying course material are no more likely than others to inspire students to take more courses in the subject area. We would love to see if this result would be replicated at other institutions." This result may capture the idea that some teachers are "charismatic" in the sense of attracting students to a subject, but that those same teachers don't teach in a way that helps student performance in future classes.

They measure the quality of research done by tenured faculty using measures of publications and professional awards, but find: "Our bottom line is, regardless of our measure of teaching and research quality, there is no apparent relationship between teaching quality and research quality." Of course, this doesn't mean that top researchers in the tenure-track are worse teachers; just that they aren't any better. They cite other research backing up this conclusion as well. 

This finding raises some awkward questions, as Figlio and Schapiro note: 
But what if state legislators take seriously our finding that while top teachers don’t sacrifice research output, it is also the case that top researchers don’t teach exceptionally well? Why have those high-priced scholars in the undergraduate classroom in the first place? Surely it would be more cost-efficient to replace them in the classroom either with untenured, lower-paid professors, or with faculty not on the tenure-line in the first place. That, of course, is what has been happening throughout American higher education for the past several decades, as we discuss in detail in the section that follows. And, of course, there’s the other potentially uncomfortable question that our analysis implies: Should we be concerned about the possibility that the weakest scholars amongst the tenured faculty are no more distinguished in the classroom than are the strongest scholars? Should expectations for teaching excellence be higher for faculty members who are on the margin of tenurability on the basis of their research excellence?
Figlio and Schapiro then extend their analysis to the teaching quality of non-tenure track faculty. Their results here do need to be interpreted with care, given that non-tenure contract faculty at Northwestern often operate with three-year renewable contracts, and most faculty in this category are in their second or later contract. They write: 
Thus, our results should be viewed in the context of where non-tenure faculty at a major research university function as designated teachers (both full-time and part-time) with long-term relationships to the university. We find that, on average, tenure-line faculty members do not teach introductory undergraduate courses as well as do their (largely full-time, long-term) contingent faculty counterparts. In other words, our results suggest that on average, first-term freshmen learn more from contingent faculty members than they do from tenure track/tenured faculty. 
When they look more closely at the distribution of these results, they find that the overall average advantage of Northwestern's contingent faculty mainly arises because a certain number of tenured faculty at the bottom tail of the distribution seem to be terrible at teaching first-year students. As Figlio and Schapiro point out, any contract faculty who were terrible and at the bottom tail of the teaching distribution are likely to have been let go--and so they don't appear in the data. Thus, the lesson here would be that institutions should have greater awareness of the possibility that a small share of tenure-track faculty may be doing a terrible job in intro-level classes--and get those faculty reassigned somewhere else.

This study obviously leaves a lot of questions unanswered. For example, perhaps the skills to be a top teacher in an intro-level class are different than the skills to teach an advanced class. Maybe top researchers do better in teaching advanced classes? Or perhaps top researchers offer other benefits to the university (grant money, public recognition, connectedness to the frontier concepts in a field) that have additional value? But the big step forward here is to jumpstart more serious thinking about how it's possible to develop some alternative quantitative measures of teacher quality that don't rely on subjective evaluations by other faculty members or on student questionnaires.

One other study I recently ran across along these lines uses data from the unique academic environment of the US Naval Academy, where students are required to take certain courses from randomly assigned faculty. Michael Insler, Alexander F. McQuoid, Ahmed Rahman, and Katherine Smith discuss their findings in "Fear and Loathing in the Classroom: Why Does Teacher Quality Matter?" (January 2021, IZA DP No. 14036).  They write: 

Specifically, we use student panel data from the United States Naval Academy (USNA), where freshmen and sophomores must take a set of mandatory sequential courses, which includes courses in the humanities, social sciences, and STEM disciplines. Students cannot directly choose which courses to take nor when to take them. They cannot choose their instructors. They cannot switch instructors at any point. They must take the core sequence regardless of interest or ability." In addition: 
Due to unique institutional features, we observe students’ administratively recorded grades at different points during the semester, including a cumulative course grade immediately prior to the final exam, a final exam grade, and an overall course grade, allowing us to separately estimate multiple aspects of faculty value-added. Given that instructors determine the final grades of their students, there are both objective and subjective components of any academic performance measure. For a subset of courses in our sample, however, final exams are created, administered, and graded by faculty who do not directly influence the final course grade. This enables us to disentangle faculty impacts on objective measures of student learning within a course (grade on final exam) from faculty-specific subjective grading practices (final course grade). Using the objectively determined final exam grade, we measure the direct impact of the instructor on the knowledge learned by the student.
To unpack this just a bit, the researchers can look at final exam scores, which can be viewed as a "hard" measure of what is learned. But when instructors give a grade for a class, they have some ability to add a subjective component in determining the final grade. For example, one can imagine that a certain student made great progress in improved study skills, or had some reason why they underperformed on the final (perhaps relative to earlier scores on classwork), and the professor did not want to overly penalize them. 
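As a rough illustration of how the two components can be separated--not the authors' actual value-added estimation, which uses far more careful methods and controls--here is a sketch with made-up data: treat the common, externally graded final exam as the objective measure, treat the gap between the instructor-assigned course grade and the exam score as the subjective grading component, and see how each lines up with performance in the follow-on course.

```python
import pandas as pd

# Hypothetical records: one row per student in a mandatory core course.
df = pd.DataFrame({
    "instructor":     ["A", "A", "B", "B", "C", "C"],
    "final_exam":     [82, 76, 70, 68, 88, 90],   # common, externally graded exam
    "course_grade":   [85, 80, 82, 80, 86, 89],   # assigned by the instructor
    "followup_grade": [80, 74, 65, 63, 87, 90],   # grade in the follow-on course
})

# "Subjective" component: how much the course grade exceeds the exam score.
df["grade_inflation"] = df["course_grade"] - df["final_exam"]

# Crude instructor-level averages standing in for value-added estimates.
by_inst = df.groupby("instructor").agg(
    exam_performance=("final_exam", "mean"),    # objective learning proxy
    leniency=("grade_inflation", "mean"),       # subjective grading proxy
    followup=("followup_grade", "mean"),        # downstream performance
)

print(by_inst)
print(by_inst.corr())
```

With these hypothetical numbers, the instructor whose students do best on the common exam also has students who do best later on, while the most lenient grader's students fare worst--the pattern the paper reports.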

One potential concern here is that some faculty might "teach to the test," in a way that makes the test scores of their students look good, but doesn't do as much to prepare the students for the follow-up classes. Another potential concern is that when faculty depart from the test scores in giving their final grades, they may be giving students a misleading sense of their skills and preparation in the field--and thus setting those students up for disappointing performance in the follow-up class. Here is the finding from Insler, McQuoid, Rahman, and Smith: 
We find that instructors who help boost the common final exam scores of their students also boost their performance in the follow-on course. Instructors who tend to give out easier subjective grades however dramatically hurt subsequent student performance. Exploring a variety of mechanisms, we suggest that instructors harm students not by “teaching to the test,” but rather by producing misleading signals regarding the difficulty of the subject and the “soft skills” needed for college success. This effect is stronger in non-STEM fields, among female students, and among extroverted students. Faculty that are well-liked by students—and thus likely prized by university administrators—and considered to be easy have particularly pernicious effects on subsequent student performance.

Again, this result is based on data from a nonrepresentative academic institution. But it does suggest some dangers of relying on contemporaneous popularity among students as a measure of teaching performance. 

Thursday, March 18, 2021

Carbon Capture and Storage: The Negative Carbon Option?

There used to be one coal-fired electricity generating plant in the US using carbon capture and storage (CCS) technology, the Petra Nova plant outside of Houston, Texas. It's now been shut down. It's not that the plant was a roaring technology success; for example, the process for scrubbing out the carbon required so much energy that the company had to build a separate natural-gas power plant just for that purpose. Still, I was sorry to see it go. There are other US plants, not coal-fired, learning about carbon capture and storage. But the way to learn about new technologies is to use them at scale. 

Here, I'll take a look at the Global Status of CCS 2020 report from the Global CCS Institute (December 2020) and the Special Report on Carbon Capture Utilisation and Storage: CCUS in clean energy transitions from the International Energy Agency (September 2020). These reports make no effort to oversell carbon capture and storage. Instead, the argument is that in specific locations and for specific purposes, carbon capture and storage technology could be a useful or even a necessary part of reducing carbon emissions. 

Brad Page, chairman of the Global CCS Institute, notes: "Just considering the role for CCS implicit in the IPCC 1.5 Special Report, somewhere between 350 and 1200 gigatonnes of CO2 will need to be captured and stored this century. Currently, some 40 megatonnes of CO2 are captured and stored annually. This must increase at least 100-fold by 2050 to meet the scenarios laid out by the IPCC." Nicholas Stern adds: "We have long known that CCUS will be an essential technology for emissions reduction; its deployment across a wide range of sectors of the economy must now be accelerated."

The basic point here is that even if there can be an enormous jump in non-carbon energy production for most purposes, there are likely to remain a few uses where it is extremely costly to substitute away from fossil fuels. Common examples include the iron, steel, and concrete industries, as well as back-up power-generating facilities that are needed for stabilizing power grids. For those purposes, carbon capture and storage technology can keep the resulting emissions as low as possible. Carbon capture and storage might also have a role to play in a shift to hydrogen technology: hydrogen generates electricity without carbon, but using coal or natural gas to make the hydrogen is not carbon free. Moreover, it would be useful to have at least a few energy technologies that are carbon-negative. Examples would include combining biofuels with carbon capture and storage technology, or perhaps, in certain locations, using a cheap local noncarbon energy source (say, geothermal energy) to capture carbon directly from the air. 

The IEA report summarizes the current situation in the US for carbon capture and storage technology this way: 

The United States is the global leader in CCUS development and deployment, with ten commercial CCUS facilities, some dating back to the 1970s and 1980s. These facilities have a total CO2 capture capacity of around 25 Mt/year – close to two-thirds of global capacity. Another facility in construction has a capture capacity of 1.5 Mt/year of CO2, and there are at least another 18-20 planned projects that would add around 46 Mt/year were they all to come to fruition. Most existing CCUS projects in the United States are associated with low-cost capture opportunities, including natural gas processing (where capture is required to meet gas quality specifications) and the production of synthetic natural gas, fertiliser, hydrogen and bioethanol. One project – Petra Nova – captures CO2 from a retrofitted coal-fired power plant for use in EOR though operations were suspended recently due to low oil prices. ...  All but one of the ten existing projects earn revenues from the sale of the captured CO2 for EOR operations. There are also numerous pilot- and demonstration-scale projects in operation as well as significant CCUS R&D activity, including through the Department of Energy’s National Laboratories.
I found the IEA discussion of potential options for removing carbon from the atmosphere to be especially interesting. As the report states: "Carbon removal is also often seen as a way of producing net-negative emissions in the second half of the century to counterbalance excessive emissions earlier on. This feature of many climate scenarios however should not be interpreted as an alternative to cutting emissions today or a reason to delay action."

Basically, there are nature-based and technology-based options. The nature-based solutions involve finding ways to absorb more carbon in plants, soil, and oceans. The main technology solutions are bioenergy with carbon capture and storage, commonly abbreviated as BECCS, and direct air capture with storage, often abbreviated as DACS. The IEA writes: 

While all these approaches can be complementary, technology solutions can offer advantages over nature-based solutions, including the verifiability and permanency of underground storage; the fact that they are not vulnerable to weather events, including fires that can release CO2 stored in biomass into the atmosphere; and their much lower land area requirements. BECCS and DACS are also at a more advanced stage of deployment than some carbon removal approaches. Land management approaches and afforestation/reforestation are at the early adoption stage and their potential is limited by land needs for growing food. Other non-technological approaches – such as enhanced weathering, which involves the dissolution of natural or artificially created minerals to remove CO2 from the atmosphere, and ocean fertilisation/alkalinisation, which involves adding alkaline substances to seawater to enhance the ocean’s ability to absorb carbon – are only at the fundamental research stage. Thus, their carbon removal potentials, costs and environmental impact are extremely uncertain.

Here are a few words from the IEA on BECCS and on DACS:

BECCS involves the capture and permanent storage of CO2 from processes where biomass is converted to energy or used to produce materials. Examples include biomass-based power plants, pulp mills for paper production, kilns for cement production and plants producing biofuels. Waste-to-energy plants may also generate negative emissions when fed with biogenic fuel. In principle, if biomass is grown sustainably and then processed into a fuel that is then burned, the technology pathway can be considered carbon-neutral; if some or all of the CO2 released during combustion is captured and stored permanently, it is carbon negative, i.e. less CO2 is released into the atmosphere than is removed by the crops during their growth. ... The most advanced BECCS projects capture CO2 from ethanol production or biomass-based power generation, while industrial applications of BECCS are only at the prototype stage. There are currently more than ten facilities capturing CO2 from bioenergy production around the world. The Illinois Industrial CCS Project, with a capture capacity of 1 MtCO2/yr, is the largest and the only project with dedicated CO2 storage, while other projects, most of which are pilots, use the captured CO2 for EOR [enhanced oil recovery] or other uses. ...

A total of 15 DAC plants are currently operating in Canada, Europe, and the United States. ... Most of them are small-scale pilot and demonstration plants, with the CO2 diverted to various uses, including for the production of chemicals and fuels, beverage carbonation and in greenhouses, rather than geologically stored. Two commercial plants are currently operating in Switzerland, selling CO2 to greenhouses and for beverage carbonation. There is only one pilot plant, in Iceland, currently storing the CO2: the plant captures CO2 from air and blends it with CO2 captured from geothermal fluid before injecting it into underground basalt formations, where it is mineralised, i.e. converted into a mineral. In North America, both Carbon Engineering and Global Thermostat have been operating a number of pilot plants, with Carbon Engineering (in collaboration with Occidental Petroleum) currently designing what would be the world’s largest DAC facility, with a capture capacity of 1 MtCO2 per year, for use in EOR [enhanced oil recovery] ...
Reducing carbon emissions isn't likely to happen through any single solution, but rather through a portfolio of actions. It seems to me that carbon capture and storage has a small but meaningful place in that portfolio. For a couple of earlier posts on this technology, see: 

Wednesday, March 17, 2021

Will Workers Disperse from Cities?

Predictions that technology shifts will cause urban job concentrations to disperse have been made a number of times in the last half-century or so. The predictions always sound plausible. But up until the pandemic, the predictions kept not happening.

Here's an example from a 1995 book City of Bits, by an MIT professor of architecture named William J. Mitchell. He wrote a quarter-century ago, while also making references to predictions a quarter-century before that (footnotes omitted): 

As information work has grown in volume and importance, and as increasingly efficient transportation and communication systems have allowed separation of offices from warehouses and factories, office buildings at high-priced central business district (CBD) locations have evolved into slick-skinned, air-conditioned, elevator-serviced towers. These architecturally represent the power and prestige of information-work organizations (banks, insurance companies, corporate headquarters of business and industrial organizations, government bureaucracies, law, accounting, and architectural firms, and so on) much as a grand, rusticated palazzo represented the importance of a great Roman, Florentine, or Sienese family. ... 

From this follows a familiar, widely replicated, larger urban pattern--one that you can see (with some local variants) from London to Chicago to Tokyo. The towers cluster densely at the most central, accessible locations in transportation networks. Office workers live in the lower-density suburban periphery and commute daily to and from their work.  ... 

The bonding agent that has held this whole intricate structure together (at every level, from that of the individual office cubicle to that of CBDs and commuter rail networks) is the need for face-to-face contact with coworkers and clients, for close proximity to expensive information-processing equipment, and for access to information held at the central location and available only there. But the development of inexpensive, widely distributed computational capacity and of pervasive, increasingly sophisticated telecommunications systems has greatly weakened the adhesive power of these former imperatives, so that chunks of the old structure have begun to break away and then to stick together again in new sorts of aggregations. We have seen the emergence of telecommuting, "the partial or total substitution of telecommunication, with or without the assistance of computers, for the twice-daily commute to/from work."

Gobs of "back office" work can, for example, be excised from downtown towers and shifted to less expensive suburban or exurban locations, from which locally housed workers remain in close electronic contact with the now smaller but still central and visible head offices. These satellite offices may even be transferred to other towns or to offshore locations where labor is cheaper. (Next time you pay your credit card bill or order something from a mail-order catalogue, take a look at the mailing address. You'll find that the envelope doesn't go to a downtown location in a major city, but more likely to an obscure location in the heartland of the country.) 

The bedroom communities that have grown up around major urban centers also provide opportunities for establishing telecommuting centers--small, Main Street office complexes with telecommunications links to central offices of large corporations or government departments. As a consequence, commuting patterns and service locations also begin to change; a worker might bicycle to a suburban satellite office cluster or telecommuting center, for example, rather than commute by car or public transportation to a downtown headquarters. Another strategy is to create resort offices, where groups can retreat for a time to work on special projects requiring sustained concentration or higher intellectual productivity, yet retain electronic access to the information resources of the head office. This idea has interested Japanese corporations, and prototypes have been constructed at locations such as the Aso resort area near Kumamoto ...

More radically, much information work that was traditionally done at city-center locations can potentially be shifted back to network-connected, computer-equipped, suburban or even rural homes. Way back in the 1960s, well before the birth of the personal computer, James Martin and Adrian R. D. Norman could see this coming. They suggested that "we may see a return to cottage industry, with the spinning wheel replaced by the computer terminal" and that "in the future some companies may have almost no offices." The OPEC oil crisis of 1973 motivated some serious study of the economics of home-based telecommuting. Then the strategy was heavily promoted by pop futurologists of the Reaganite eighties, who argued that it would save workers the time and cost of commuting while also saving employers the cost of space and other overhead. The federal Clean Air Act amendments of 1990, which required many businesses with a hundred or more employees to reduce the use of cars for commuting, provided further impetus. ...

In the 1960s and early 1970s, as the telecommunications revolution was rapidly gaining momentum, some urbanists leaped to the conclusion that downtowns would soon dissolve as these new arrangements took hold. Melvin Webber, for example, predicted: "For the first time in history, it might be possible to locate on a mountain top and to maintain intimate, real-time and realistic contact with business or other associates. All persons tapped into the global communications net would have ties approximating those used today in a given metropolitan region." ...

But the prophets of urban dissolution underestimated the inertia of existing patterns, and the reality that has evolved in the 1980s and 1990s is certainly more complex than they imagined. The changing relative costs of telecommunication and transportation have indeed begun to affect the location of office work. But weakening of the glue that once firmly held office downtowns together turns out to permit rather than determine dispersal; the workings of labor and capital markets and the effects of special local conditions often end up shaping the locational patterns that actually emerge from the shakeup.
I love the passage in part because it starts off in the first paragraph talking about how dense central business districts "represent the power and prestige of information-work organizations," which makes it sound as if downtown urban areas are nothing but an ego trip for top executives, but then ends with some comments about how economic factors--the "labor and capital markets"--actually end up shaping the results. 

The economic patterns of big cities have changed. I have discussed "How Cities Stopped Being Ladders of Opportunity" (January 19, 2021): in recent decades, big cities have remained places where the more-educated could earn higher wages, but they have stopped being places where the less-educated could do so. 

Moreover, when Mitchell in his 1995 book referred to "the need for face-to-face contact with coworkers and clients," he was seeing only part of the picture. Yes, contact with coworkers and clients within a firm matters, but it's also true that firms of a certain type often bunch together geographically. It seems important to be located near the workers and clients of other firms, too. I've written a bit about this "economics of density," and offer some links, in "Cities as Economic Engines: Is Lower Density in Our Future" (August 14, 2020). 
 
Hannah Rubinton offers another piece of evidence in "Business Dynamism and City Size" (Economic Synopses: Federal Reserve Bank of St. Louis, 2021, Number 4). In her figures, the points represent data for individual cities. The horizontal axis shows the population of the city. The vertical axis of the top panel shows the "establishment entry rate," which is the rate at which new business establishments are started in a city; an "establishment" can be either a new business or a new location of an existing firm. In the bottom panel, the vertical axis shows the "establishment exit rate." The payoff is that in the data for 1982, larger cities tended to have lower rates of entry and exit (the solid lines slope down), but by 2018 larger cities tended to have higher rates of entry and exit (the dashed lines slope up).

This pattern reflects that in the last few decades, a substantial part of economic dynamism, productivity growth, and wage growth has been happening in the larger cities. As Rubinton notes: 
At the same time, large and small cities have diverged on several important dimensions: Large cities increasingly have a more educated workforce and offer higher wage premiums for skilled workers. Given that dynamism is important for productivity and economic growth, the differential changes in dynamism across cities could be important to understanding the divergence in wages and skill-composition between large and small cities. ... [T]hese patterns are consistent with competition becoming tougher in large cities relative to small cities. Large cities have become more congested than they were in 1980: As population has grown and technology has improved, rents and wages have increased. Less-productive firms that cannot afford the higher prices are more likely to exit, leaving room for new firms to enter.
Maybe the aftereffects of the pandemic will change all this. I tend to believe that some of the shift to telecommuting in this last year will persist. But I'm also very aware that predictions about how jobs "can potentially be shifted back to network-connected, computer-equipped, suburban or even rural homes" have been around for decades. Yet downtown business districts and other clusters of economic activity continue to persist and grow, which suggests strong underlying economic forces at work. 



Monday, March 15, 2021

Negative Interest Rates: Practical, but Limited

For a lot of people, the idea of negative interest rates sounds as if it must violate some law of nature, like a perpetual motion machine. Why would any depositor put money into an investment that promised a negative return? Yet starting back in 2012, a substantial number of central banks around the world--including the European Central Bank, the Swiss National Bank, the Bank of Japan, and the Sveriges Riksbank (the central bank of Sweden)--have pushed the specific interest rates on which they focus monetary policy into negative territory, and have kept them there for years at a stretch. Luís Brandão-Marques, Marco Casiraghi, Gaston Gelos, Güneş Kamber, and Roland Meeks offer an overview of the experience in "Negative Interest Rates: Taking Stock of the Experience So Far" (IMF Monetary and Capital Markets Department, 21-03, March 2021). 

Perhaps the most obvious question about a negative interest rate is why depositors would put money in the bank at all. The short answer is that banks provide an array of financial services to both businesses and individual customers (ease of electronic payments, not needing to hold large amounts of cash, access to credit, and so on). As a customer, you can pay for those services with some combination of fees and lower interest rates. It's easy to imagine a situation where slightly negative interest rates are offset by changes in other fees or contractual arrangements. 

It's also worth remembering that negative interest rates in real terms are not actually new at all. At many times in the past, people have experienced negative real interest rates on their bank deposits--that is, when the inflation rate is higher than the nominal interest rate, the real interest rate is negative. 
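To make the arithmetic concrete, here is a small illustrative calculation of my own (the numbers are hypothetical, chosen only to show the mechanics):

```python
# Illustrative only: a nominal deposit rate below the inflation rate implies
# a negative real return, even though the nominal rate itself is positive.
nominal_rate = 0.01   # hypothetical 1% interest paid on a bank deposit
inflation = 0.03      # hypothetical 3% annual inflation

# Exact real return from the Fisher relation: (1 + nominal) = (1 + real) * (1 + inflation)
real_rate = (1 + nominal_rate) / (1 + inflation) - 1
print(f"Real return: {real_rate:.2%}")  # about -1.94%, close to the 1% - 3% = -2% shortcut
```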

Of course, if bank interest rates became too negative, then depositors would indeed move away from banks and toward cash or other alternative investments. What is the "effective lower bound" for a negative interest rate? The answer depends on various assumptions about the financial system, including the costs of setting up operations to store large amounts of cash, but here's a set of estimates from various studies. 
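As a rough illustration of the kind of arithmetic behind such estimates--this is my own back-of-the-envelope sketch, not the method in the IMF paper, and the cost figures are made up--the effective lower bound is roughly the point at which paying a negative rate on deposits becomes more costly than storing, insuring, and handling physical cash instead:

```python
# Back-of-the-envelope sketch (hypothetical cost figures, not from the IMF report):
# a depositor tolerates a negative deposit rate only while it costs less than
# the annualized expense of holding physical cash instead.

def effective_lower_bound(storage=0.005,    # vault space and security, per year
                          insurance=0.003,  # insuring the cash hoard, per year
                          handling=0.002):  # transport and payment inconvenience, per year
    """Deposit rate below which holding cash becomes the cheaper option."""
    return -(storage + insurance + handling)

print(f"Illustrative effective lower bound: {effective_lower_bound():.1%} per year")
# With these made-up costs, rates much below about -1.0% would start pushing
# large depositors out of bank deposits and into cash.
```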

Given that slightly negative bank interest rates are clearly possible, what are their benefits and risks? 

On the benefit side, when a central bank acts to lower interest rates, its goal is to stimulate the economy, both to encourage growth and also to raise inflation if that rate is below the desired target level (often set at 2%). Thus, the key question is whether moving the policy interest rate below zero provides some additional macroeconomic boost. The specific effects of negative interest rates aren't easy to sort out on their own, because central banks that have moved their policy rate into negative territory have also been carrying out other unconventional monetary policies: quantitative easing, explicit forward guidance about what monetary policy will be in the near-term or medium-term future, or intervening in exchange rate markets. But as the report summarizes the evidence: "For instance, the transmission mechanism of monetary policy does not appear to change significantly when official rates become negative."

As one example of how negative interest rate policies reduce actual market interest rates, this figure shows how the yields on euro-denominated government debt have dipped into negative territory. 



On the risk side, perhaps the main danger was that negative interest rates would cause large losses for banks or other major players in the financial system like money market funds. But again, at least so far, these problems have not emerged. The report notes:  

Overall, most of the theoretical negative side effects associated with NIRP [negative interest rate policies] have failed to materialize or have turned out to be less relevant than expected. Economists and policymakers have identified a number of potential drawbacks of NIRP, but none of them have emerged with such an intensity as to tilt the cost-benefit analysis in favor of removing this instrument from the central bank toolbox.  ... [O]verall, bank profitability has not significantly suffered so far ...  and banks do not appear to have engaged in excessive risk-taking. Of course, these side effects may still arise if NIRP remains in place for a long time or policy rates go even more negative, approaching the reversal rate.

However, it's worth noting that the negative interest rates in place have mainly affected large institutional depositors. For households and retail investors, banks have tried to keep the interest rates they receive in slightly positive territory--but have also adjusted other fees and charges in ways that have sustained bank profitability. As the report notes: "Banks seem to respond to NIRP by increasing fees on retail deposits, while passing on negative rates partly to firms."

The IMF report also identifies areas where research on negative interest rates policies has been limited. 

The literature so far has largely overlooked the impact of negative interest rates on financial intermediaries other than banks. Although pension funds and insurance companies do not typically offer overnight deposits and thus the constraint on lowering the corresponding rates below zero is not an issue, other non-linearities may arise when market rates become negative. Among others, legal or behavioral constraints to offering negative nominal returns could affect the profitability of nonbanks. Given the importance of these institutions for the financial system, the absence of empirical evidence on the impact of negative rates on their behavior is surprising. ...

Another interesting direction for future research is to further study the determinants of the corporate channel identified by Altavilla and others (2019b). According to this channel, cash-rich firms with relationships with banks that charge negative rates on deposits are more likely to use their liquidity to increase investment. What drives this channel is still unclear. For instance, the role of multiple bank relationships could be investigated. If cash-rich firms can easily move their liquidity across financial institutions (including nonbanks), then negative rates on corporate deposit may simply lead these firms to reallocate their liquidity across intermediaries, without any significant impact on investment. By contrast, frictions that prevent firms from easily establish new bank relationships, and thus move their funds around, could induce a reallocation from corporate deposits to other less liquid assets, such as fixed capital.

In my reading, the general tone of the IMF report is that negative interest rate policies have been modestly useful, without worrisome side effects--but also that central banks have a number of other options for unconventional monetary policy, and while this particular option belongs in the toolkit, it perhaps should not be pushed too hard.  

For some additional discussion of negative interest rates, including the previous IMF staff report on the subject back in 2017 and various other sources, starting points include:

Thursday, March 11, 2021

Retail Investors Show "Exuberance"

The word "exuberance" has a special meaning for investors. Back in 1996, then-Federal Reserve chair Alan Greenspan gave a speech as stock prices rose during the "dot-com" boom. He asked: "But how do we know when irrational exuberance has unduly escalated asset values, which then become subject to unexpected and prolonged contractions ...?" When the Fed chair starts "just asking" questions about exuberance, people take note.

But anyone who took Greenspan's speech as a prediction of a near-term drop in the stock market missed out, because the "exuberance" had a few more years to run. The S&P 500 index was at about 750 at the time of Greenspan's speech in December 1996, having doubled in value during the previous five years. After the speech, it would double again, topping out at nearly 1500 in September 2000, before sagging back to about 820 in September 2002.

Still, when those in the financial community use the language of "exuberance," my eyebrows go up. The March 2021 issue of the BIS Quarterly Review from the Bank for International Settlements uses "exuberance" a couple of times in its lead article, "Markets wrestle with reflation prospects" (pp. 1-16). Here's a snippet (references to graphs and boxes omitted): 

Equities and credit gained on the back of a brighter outlook and expectations of greater fiscal support, with signs of exuberance reflected in the behaviour of retail investors. ...

Low long-term interest rates have been critical in supporting valuations. Since recent US price/earnings ratios were among the highest on record, they suggest stretched valuations if considered in isolation. However, assessments that also take into account the prevailing low level of interest rates indicate that valuations were in line with their historical average. ...[E]quity prices are particularly sensitive to monetary policy in environments akin to the current one, featuring high price/earnings ratios and low interest rates.

Even if equity valuations did not appear excessive in the light of low rates, some signs of exuberance had a familiar ring. Just as during the dotcom boom in the late 1990s, IPOs [initial public offerings] saw a major expansion and stock prices often soared on the first day of trading. The share of unprofitable firms among those tapping equity markets also kept growing. In addition, strong investor appetite supported the rise of special purpose acquisition companies (SPACs) – otherwise known as “blank cheque” companies. These are conduits that raise funds without an immediate investment plan.

The increasing footprint of retail investors and the appeal of alternative asset classes also pointed to brisk risk-taking. An index gauging interest in the stock market on the basis of internet searches surged, eclipsing its previous highest level in 2009. This rise went hand in hand with the growing market influence of retail investors. In a sign of strong risk appetite, funds investing in the main cryptoassets grew rapidly in size following sustained inflows, and the prices of these assets reached all-time peaks ...

Here are some illustrative figures. The first shows the number and value of initial public offerings. 

This figure shows the rise of the "blank cheque" SPAC companies. 

This figure shows data based on Google searches about interest in the stock market. 


Sirio Aramonte and Fernando Avalos contribute a short "box" discussion to this article with more details on "The rising influence of retail investors." They write (the graphs to which they refer are included below):
Telltale signs of retail investors' growing activity emerged from patterns in equity trading volumes and stock price movements. For one, small traders seem to be often attracted by the speculative nature of single stocks, rather than by the diversification benefits of indices. Consistent with such preferences gaining in importance, share turnover for exchange-traded funds (ETFs) tracking the S&P 500 has flattened over the past four years, while that for the S&P 500's individual constituents has been on an upward trend over the same period, pointing to 2017 as the possible start year of retail investors' rising influence (Graph B, left-hand panel). In addition, retail investors are more likely to trade assets on the basis of non-fundamental information. During the late 1990s tech boom, for instance, they sometimes responded to important news about certain companies by rushing to buy the equity of similarly named but distinct firms. Comparable patterns emerged in early 2021 – for instance, when the value of a company briefly quintupled as investors misinterpreted a social media message as endorsing its stock.

In the United States, retail investors' sustained risk-taking has been channelled through brokerage accounts, the main tool they have to manage their non-retirement funds. Brokerage accounts allow owners to take leverage in the form of margin debt. In December 2020, the amount of that debt stood at $750 billion, the highest level on record since 1997, both in inflation-adjusted terms and as a share of GDP. Its fast growth in the aftermath of March 2020 exceeded 60% (Graph B, centre panel). There is evidence that retail investors are currently taking risky one-way bets, as rapid surges in margin debt have been followed by periods of stock market declines.

In seeking exposure to individual companies, retail investors trade options. Call (put) options pay off only when the price of the underlying stock rises (falls) past a preset value, with gains potentially amounting to multiples of the initial investment. In this sense, options have embedded leverage that margin debt magnifies further. Academic research has found that option trading tends to be unprofitable in the aggregate and over longer periods for small traders, not least because of their poor market timing.

Reports in early 2021 have suggested that the surge in trading volumes for call options – on both small and large stocks – has indeed stemmed from retail activity. For example, internet searches for options on five technology stocks – a clear sign of retail investors' interest – predicted next-day option volumes. This link was particularly strong for searches that took place on days with high stock returns, suggesting that option activity was underpinned by bets on a continuation of positive returns ... 

Equity prices rose and fell as retail investors coordinated their trading on specific stocks through social media in January 2021. While online chat rooms were already a popular means of information exchange in the late 1990s, the trebling of the number of US internet users and the rise in no-fee brokerages since then has widened the pool of traders who can combine their efforts. In a recent episode, retail investors forced short-sellers to unwind their positions in distressed companies. A similar move in the more liquid silver market floundered a few days later. These dislocations were short-lived, not least because, in response to collateral requests from clearing houses, some brokerages limited their customers' ability to trade. Even so, it has become clear that deliberate large-scale coordination among small traders is possible and can have substantial effects on prices.

Certain actions of retail investors can raise concerns about market functioning. Sudden bursts of trading activity can push prices far away from fundamental values, especially for less liquid securities, thus impairing their information content. In a move that underscored the materiality of this issue, the US Securities and Exchange Commission suspended trading in the shares of companies that had experienced large price movements on the back of social media discussions.
Here's the figure showing how exchange-traded funds are being used more for individual stocks. 
Here's the figure showing the rise in margin debt for retail investors. 
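To see why the box describes options as having "embedded leverage," here's a stylized calculation of my own (all of the numbers are made up, chosen only to illustrate the mechanics):

```python
# Stylized example of the embedded leverage in a call option (hypothetical numbers).
stock_price = 100.0   # price of the underlying stock today
strike = 105.0        # the call pays off only if the stock ends above this price
premium = 2.0         # cost of buying one call option

for final_price in (100.0, 110.0, 120.0):
    stock_return = final_price / stock_price - 1
    option_payoff = max(final_price - strike, 0.0)
    option_return = option_payoff / premium - 1
    print(f"stock {stock_return:+.0%} -> call option {option_return:+.0%}")

# A 10% rise in the stock turns into a +150% gain on the option (and a 20% rise
# into +650%), while a flat or falling stock wipes out the entire premium (-100%).
```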
I'm certainly not in the investment advice business, and I'm very aware that Greenspan's 1996 comments about "irrational exuberance" came more in the middle of a stock market rise than at its end. That said, there do seem to be occasional elements of exuberance at play. 

Wednesday, March 10, 2021

The Case for More Activist Antitrust Policy

The University of Pennsylvania Law Review  (June 2020) has published a nine-paper symposium on antitrust law, with contributions by a number of the leading economists in the field who tend to favor more aggressive pro-competition policy in this area. Whatever your own leanings, it's a nice overview of many of the key issues. Here are snippets from three of the papers. Below, I'll list all the papers in the issue with links and abstracts. 

 C. Scott Hemphill and Tim Wu write about "Nascent Competitors," which is the concern that large firms may seek to maintain their dominant market position by buying up the kinds of small firms that might have developed into future competitors. The article is perhaps of particular interest because Wu has just accepted a position with the Biden administration to join the National Economic Council, where he will focus on competition and technology policy. Hemphill and Wu write (footnotes omitted): 

Nascent rivals play an important role in both the competitive process and the process of innovation. New firms with new technologies can challenge and even displace existing firms; sometimes, innovation by an unproven outsider is the only way to introduce new competition to an entrenched incumbent. That makes the treatment of nascent competitors core to the goals of the antitrust laws. As the D.C. Circuit has explained, “it would be inimical to the purpose of the Sherman Act to allow monopolists free rei[]n to squash nascent, albeit unproven, competitors at will . . . .” Government enforcers have expressed interest in protecting nascent competition, particularly in the context of acquisitions made by leading online platforms.

However, enforcers face a dilemma. While nascent competitors often pose a uniquely potent threat to an entrenched incumbent, the firm’s eventual significance is uncertain, given the environment of rapid technological change in which such threats tend to arise. That uncertainty, along with a lack of present, direct competition, may make enforcers and courts hesitant or unwilling to prevent an incumbent from acquiring or excluding a nascent threat. A hesitant enforcer might insist on strong proof that the competitor, if left alone, probably would have grown into a full-fledged rival, yet in so doing, neglect an important category of anticompetitive behavior.

One main concern with a general rule blocking entrenched incumbents from buying smaller companies is that, for entrepreneurs, the chance of being bought out by a big firm is one of the primary incentives for starting a firm in the first place. Thus, more aggressive antitrust enforcement against acquisitions of smaller firms could reduce the incentive to start such firms at all. Hemphill and Wu tackle the question head-on:

The acquisition of a nascent competitor raises several particularly challenging questions of policy and doctrine. First, acquisition can serve as an important exit for investors in a small company, and thereby attract capital necessary for innovation. Blocking or deterring too many acquisitions would be undesirable. However, the significance of this concern should not be exaggerated, for our proposed approach is very far from a general ban on the acquisition of unproven companies. We would discourage, at most, acquisition by the firm or firms most threatened by a nascent rival. Profitable acquisitions by others would be left alone, as would the acquisition of merely complementary or other nonthreatening firms. While wary of the potential for overenforcement, we believe that scrutiny of the most troubling acquisitions of unproven firms must be a key ingredient of a competition enforcement agenda that takes innovation seriously.

In another paper, William P. Rogerson and Howard Shelanski write about "Antitrust Enforcement, Regulation, and Digital Platforms." They raise the concern that the tools of antitrust may not be well-suited to some of the competition issues posed by big digital firms. For example, if Alphabet were forced to sell off Google, or some other subsidiaries, would competition really be improved? What would it even mean to, say, try to break Google's search engine into separate companies? When there are "network economies," where many agents want to be on a given website because so many other players are on the same website, perhaps a relatively small number of firms is the natural outcome. 

Thus, while certainly not ruling out traditional antitrust actions, Rogerson and Shelanski make the case for using regulation to achieve pro-competitive outcomes. They write: 

[W]e discuss why certain forms of what we call “light handed procompetitive” (LHPC) regulation could increase levels of competition in markets served by digital platforms while helping to clarify the platforms’ obligations with respect to interrelated policy objectives, notably privacy and data security. Key categories of LHPC regulation could include interconnection/interoperability requirements (such as access to application programming interfaces (APIs)), limits on discrimination, both user-side and third-party-side data portability rules, and perhaps additional restrictions on certain business practices subject to rule of reason analysis under general antitrust statutes. These types of regulations would limit the ability of dominant digital platforms to leverage their market power into related markets or insulate their installed base from competition. In so doing, they would preserve incentives for innovation by firms in related markets, increase the competitive impact of existing competitors, and reduce barriers to entry for nascent firms. 

The regulation we propose is “light handed” in that it largely avoids the burdens and difficulties of a regime—such as that found in public utility regulation—that regulates access terms and revenues based on firms’ costs, which the regulatory agency must in turn track and monitor. Although our proposed regulatory scheme would require a dominant digital platform to provide a baseline level of access (interconnection/interoperability) that the regulator determines is necessary to promote actual and potential competition, we believe that this could avoid most of the information and oversight costs of full-blown cost-based regulation ...  The primary regulation applied to price or non-price access terms would be a nondiscrimination condition, which would require a dominant digital platform to offer the same terms to all users. Such regulation would not, like traditional rate regulation, attempt to tie the level or terms of access to a platform’s underlying costs, to regulate the company’s terms of service to end users, or to limit the incumbent platform’s profits or lines of business. Instead of imposing monopoly controls, LHPC regulation aims to protect and promote competitive access to the marketplace as the means of governing firms’ behavior. In other words, its primary goal is to increase the viability and incentives of actual and potential competitors. As we will discuss, the Federal Communication Commission’s (FCC) successful use of similar sorts of requirements on various telecommunications providers provides one model for this type of regulation.

Nancy L. Rose and Jonathan Sallet tackle a more traditional antitrust question in "The Dichotomous Treatment of Efficiencies in Horizontal Mergers: Too Much? Too Little? Getting it Right." A "horizontal" merger is one between two firms selling the same product, in contrast to a "vertical" merger, where one firm merges with a supplier, or to a merger where the two firms sell different products. When two firms selling the same product propose a merger, they often argue that the combined firm will be more efficient, and thus able to provide a lower-cost product to consumers. Rose and Sallet offer this example: 

Here is a stylized example of the role that efficiencies might play in an antitrust review. Imagine two paper manufacturers, each with a single factory that produces several kinds of paper, and suppose their marginal costs decline with longer production runs of a single type of paper. They wish to merge, which by definition eliminates a competitor. They justify the merger on the ground that after they combine their operations, they will increase the specialization in each plant, enabling longer runs and lower marginal costs, and thus incentivizing them to lower prices to their customers and expand output. If the cost reduction were sufficiently large, such efficiencies could offset the merger’s otherwise expected tendency to increase prices.
In this situation, the antitrust authorities need to evaluate whether these potential efficiencies exist and are likely to benefit consumers. Or, alternatively, is the talk of "efficiencies" a way for top corporate managers to build their empires while eliminating some competition? Rose and Sallet argue, based on the empirical evidence of what has happened after past mergers, that antitrust enforcers have been too willing to believe in efficiencies that often fail to materialize. They write: 
As empirically-trained economists focused further on what data revealed about the relationship between mergers and efficiencies, the results cast considerable doubt on post-merger benefits. As discussed at length by Professor Hovenkamp, “the empirical evidence is not unanimous, however, it strongly suggests that current merger policy tends to underestimate harm, overestimate efficiencies, or some combination of the two.” The business literature is even more skeptical. As management consultant McKinsey & Company reported in 2010: “Most mergers are doomed from the beginning. Anyone who has researched merger success rates knows that roughly 70 percent of mergers fail.”
For more on antitrust and the big tech companies, some of my previous posts include:

Here's the full set of papers from the June 2020 issue of the  University of Pennsylvania Law Review issue, with links and abstracts: 

"Framing the Chicago School of Antitrust Analysis," by Herbert  Fiona Scott Morton
The Chicago School of antitrust has benefitted from a great deal of law office history, written by admiring advocates rather than more dispassionate observers. This essay attempts a more neutral examination of the ideology, political impulses, and economics that produced the School and that account for its durability. The origins of the Chicago School lie in a strong commitment to libertarianism and nonintervention. Economic models of perfect competition best suited these goals. The early strength of the Chicago School was that it provided simple, convincing answers to everything that was wrong with antitrust policy in the 1960s, when antitrust was characterized by over-enforcement, poor quality economics or none at all, and many internal contradictions. The Chicago School’s greatest weakness is that it did not keep up. Its leading advocates either spurned or ignored important developments in economics that gave a better accounting of an economy that was increasingly characterized by significant product differentiation, rapid innovation, networking, and strategic behavior. The Chicago School’s protest that newer models of the economy lacked testability lost its credibility as industrial economics experienced an empirical renaissance, nearly all of it based on models of imperfect competition. What kept Chicago alive was the financial support of firms and others who stood to profit from less intervention. Properly designed antitrust enforcement is a public good. Its beneficiaries—consumers—are individually small, numerous, scattered, and diverse. Those who stand to profit from nonintervention were fewer in number, individually much more powerful, and much more united in their message. As a result, the Chicago School went from being a model of enlightened economic policy to an economically outdated but nevertheless powerful tool of regulatory capture.

"Nascent Competitors," by C. Scott Hemphill & Tim Wu
A nascent competitor is a firm whose prospective innovation represents a serious threat to an incumbent. Protecting such competition is a critical mission for antitrust law, given the outsized role of unproven outsiders as innovators and the uniquely potent threat they often pose to powerful entrenched firms. In this Article, we identify nascent competition as a distinct analytical category and outline a program of antitrust enforcement to protect it. We make the case for enforcement even where the ultimate competitive significance of the target is uncertain, and explain why a contrary view is mistaken as a matter of policy and precedent. Depending on the facts, troubling conduct can be scrutinized under ordinary merger law or as unlawful maintenance of monopoly, an approach that has several advantages. In distinguishing harmful from harmless acquisitions, certain evidence takes on heightened importance. Evidence of an acquirer’s anticompetitive plan, as revealed through internal communications or subsequent conduct, is particularly probative. After-the-fact scrutiny is sometimes necessary as new evidence comes to light. Finally, our suggested approach poses little risk of dampening desirable investment in startups, as it is confined to acquisitions by those firms most threatened by nascent rivals.

"Antitrust Enforcement, Regulation, and Digital Platforms," by William P. Rogerson & Howard Shelanski
There is a growing concern over concentration and market power in a broad range of industrial sectors in the United States, particularly in markets served by digital platforms. At the same time, reports and studies around the world have called for increased competition enforcement against digital platforms, both by conventional antitrust authorities and through increased use of regulatory tools. This Article examines how, despite the challenges of implementing effective rules, regulatory approaches could help to address certain concerns about digital platforms by complementing traditional antitrust enforcement. We explain why introducing light- handed, industry-specific regulation could increase competition and reduce barriers to entry in markets served by digital platforms while better preserving the benefits they bring to consumers.

"The Dichotomous Treatment of Efficiencies in Horizontal Mergers: Too Much? Too Little? Getting it Right," Nancy L. Rose and Jonathan Sallet
The extent to which horizontal mergers deliver competitive benefits that offset any potential for competitive harm is a critical issue of antitrust enforcement. This Article evaluates economic analyses of merger efficiencies and concludes that a substantial body of work casts doubt on their presumptive existence and magnitude. That has two significant implications. First, the current methods used by the federal antitrust agencies to determine whether to investigate a horizontal merger likely rests on an overly-optimistic view of the existence of cognizable efficiencies, which we believe has the effect of justifying market-concentration thresholds that are likely too lax. Second, criticisms of the current treatment of efficiencies as too demanding—for example, that antitrust agencies and reviewing courts require too much of merging parties in demonstrating the existence of efficiencies—are misplaced, in part because they fail to recognize that full-blown merger investigations and subsequent litigation are focused on the mergers that are most likely to cause harm.

"Oligopoly Coordination, Economic Analysis, and the Prophylactic Role of Horizontal Merger Enforcement," by Jonathan B. Baker and Joseph Farrell
For decades, the major United States airlines have raised passenger fares through coordinated fare-setting when their route networks overlap, according to the United States Department of Justice. Through its review of company documents and testimony, the Justice Department found that when major airlines have overlapping route networks, they respond to rivals’ price changes across multiple routes and thereby discourage competition from their rivals. A recent empirical study reached a similar conclusion: It found that fares have increased for this reason on more than 1000 routes nationwide and even that American and Delta, two airlines with substantial route overlaps, have come close to cooperating perfectly on routes they both serve.

"The Role of Antitrust in Preventing Patent Holdup," by Carl Shapiro and Mark A. Lemley
Patent holdup has proven one of the most controversial topics in innovation policy, in part because companies with a vested interest in denying its existence have spent tens of millions of dollars trying to debunk it. Notwithstanding a barrage of political and academic attacks, both the general theory of holdup and its practical application in patent law remain valid and pose significant concerns for patent policy. Patent and antitrust law have made significant strides in the past fifteen years in limiting the problem of patent holdup. But those advances are currently under threat from the Antitrust Division of the Department of Justice, which has reversed prior policies and broken with the Federal Trade Commission to downplay the significance of patent holdup while undermining private efforts to prevent it. Ironically, the effect of the Antitrust Division’s actions is to create a greater role for antitrust law in stopping patent holdup. We offer some suggestions for moving in the right direction.

"Competition Law as Common Law: American Express and the Evolution of Antitrust," by Michael L. Katz & A. Douglas Melamed
We explore the implications of the widely accepted understanding that competition law is common—or “judge-made”—law. Specifically, we ask how the rule of reason in antitrust law should be shaped and implemented, not just to guide correct application of existing law to the facts of a case, but also to enable courts to participate constructively in the common law-like evolution of antitrust law in the light of changes in economic learning and business and judicial experience. We explore these issues in the context of a recently decided case, Ohio v. American Express, and conclude that the Supreme Court, not only made several substantive errors, but also did not apply the rule of reason in a way that enabled an effective common law-like evolution of antitrust law.


"Probability, Presumptions and Evidentiary Burdens in Antitrust Analysis: Revitalizing the Rule of Reason for Exclusionary Conduct," by Andrew I. Gavil & Steven C. Salop
The conservative critique of antitrust law has been highly influential. It has facilitated a transformation of antitrust standards of conduct since the 1970s and led to increasingly more permissive standards of conduct. While these changes have taken many forms, all were influenced by the view that competition law was over-deterrent. Critics relied heavily on the assumption that the durability and costs of false positive errors far exceeded the costs of false negatives. Many of the assumptions that guided this retrenchment of antitrust rules were mistaken and advances in law and economic analysis have rendered them anachronistic, particularly with respect to exclusionary conduct. Continued reliance on what are now exaggerated fears of “false positives,” and failure adequately to consider the harm from “false negatives,” has led courts to impose excessive burdens of proof on plaintiffs that belie both sound economic analysis and well-established procedural norms. The result is not better antitrust standards, but instead an unwarranted bias towards non-intervention that creates a tendency toward false negatives, particularly in modern markets characterized by economies of scale and network effects. In this article, we explain how these erroneous assumptions about markets, institutions, and conduct have distorted the antitrust decision-making process and produced an excessive risk of false negatives in exclusionary conduct cases involving firms attempting to achieve, maintain, or enhance dominance or substantial market power. To redress this imbalance, we integrate modern economic analysis and decision theory with the foundational conventions of antitrust law, which has long relied on probability, presumptions, and reasonable inferences to provide effective means for evaluating competitive effects and resolving antitrust claims.

"The Post-Chicago Antitrust Revolution: A Retrospective," by Christopher S. Yoo
A symposium examining the contributions of the post-Chicago School provides an appropriate opportunity to offer some thoughts on both the past and the future of antitrust. This afterword reviews the excellent papers presented with an eye toward appreciating the contributions and limitations of both the Chicago School, in terms of promoting the consumer welfare standard and embracing price theory as the preferred mode of economic analysis, and the post-Chicago School, with its emphasis on game theory and firm-level strategic conduct. It then explores two emerging trends, specifically neo-Brandeisian advocacy for abandoning consumer welfare as the sole goal of antitrust and the increasing emphasis on empirical analyses.