Monday, May 31, 2021

Clean Energy and Pro-Mining

One approach to the goal of reducing carbon emissions is sometimes called "electrification of everything," a phrase which is a shorthand for an agenda of using electricity from carbon-free sources--including solar and wind--to replace fossil fuels.  The goal is to replace fossil fuels in all their current roles: not just in generating electricity directly, but also in their roles in transportation, heating/cooling of buildings, industrial uses, and so on. Even with the possibilities for energy conservation and recycling taken into account, the "electrification of everything" vision would require a very substantial increase in electricity production in the US and everywhere. 

A necessary but often undiscussed consequence of this transition is a dramatic increase in mining, as discussed in "The Role of Critical Minerals in Clean Energy Transitions," a World Energy Outlook Special Report from the International Energy Agency (May 2021). The IEA notes:

An energy system powered by clean energy technologies differs profoundly from one fuelled by traditional hydrocarbon resources. Building solar photovoltaic (PV) plants, wind farms and electric vehicles (EVs) generally requires more minerals than their fossil fuel-based counterparts. A typical electric car requires six times the mineral inputs of a conventional car, and an onshore wind plant requires nine times more mineral resources than a gas-fired power plant. Since 2010, the average amount of minerals needed for a new unit of power generation capacity has increased by 50% as the share of renewables has risen.

The types of mineral resources used vary by technology. Lithium, nickel, cobalt, manganese and graphite are crucial to battery performance, longevity and energy density. Rare earth elements are essential for permanent magnets that are vital for wind turbines and EV motors. Electricity networks need a huge amount of copper and aluminium, with copper being a cornerstone for all electricity-related technologies. The shift to a clean energy system is set to drive a huge increase in the requirements for these minerals, meaning that the energy sector is emerging as a major force in mineral markets. Until the mid-2010s, the energy sector represented a small part of total demand for most minerals. However, as energy transitions gather pace, clean energy technologies are becoming the fastest-growing segment of demand.

The IEA is careful to say that this rapid growth in demand for a number of minerals doesn't negate the need to move to cleaner energy, and the report argues that the difficulties of increasing mineral supply are "manageable, but real." But here is a summary list of some main concerns: 

High geographical concentration of production: Production of many energy transition minerals is more concentrated than that of oil or natural gas. For lithium, cobalt and rare earth elements, the world’s top three producing nations control well over three-quarters of global output. In some cases, a single country is responsible for around half of worldwide production. The Democratic Republic of the Congo (DRC) and People’s Republic of China (China) were responsible for some 70% and 60% of global production of cobalt and rare earth elements respectively in 2019. ...

Long project development lead times: Our analysis suggests that it has taken on average over 16 years to move mining projects from discovery to first production. ...

Declining resource quality: ... In recent years, ore quality has continued to fall across a range of commodities. For example, the average copper ore grade in Chile declined by 30% over the past 15 years. Extracting metal content from lower-grade ores requires more energy, exerting upward pressure on production costs, greenhouse gas emissions and waste volumes.

Growing scrutiny of environmental and social performance: Production and processing of mineral resources gives rise to a variety of environmental and social issues that, if poorly managed, can harm local communities and disrupt supply. ...

Higher exposure to climate risks: Mining assets are exposed to growing climate risks. Copper and lithium are particularly vulnerable to water stress given their high water requirements. Over 50% of today’s lithium and copper production is concentrated in areas with high water stress levels. Several major producing regions such as Australia, China, and Africa are also subject to extreme heat or flooding, which pose greater challenges in ensuring reliable and sustainable supplies.

The policy agenda here is fairly clear-cut. Put research and development spending into ways of conserving on the use of mineral resources, and into ways of recycling them. Step up the hunt for new sources of key minerals now, and get started sooner than strictly necessary with the planning and permitting. And for supporters of clean energy in high-income countries like the United States, be aware that straitjacket restrictions on mining in high-income countries are likely to push production into lower-income countries, where any such restrictions may be considerably looser. 

Friday, May 28, 2021

Do Riskier Jobs Get Correspondingly Higher Pay?

The idea of a "compensating differential" is conceptually straightforward. Imagine two jobs that require equivalent levels of skill. However, one job is unattractive in some way: physically exhausting, dangerous to one's health, bad smells, overnight hours, and so on. The idea of a compensating differential is that if employers want to fill these less attractive jobs, they will need to pay workers more than those workers would have received in more-attractive jobs. 

The existence of compensating differentials comes up in a number of broader issues. For example:

1) If you believe in compensating differentials, you are likely to worry less about health and safety regulation of jobs--after all, you believe that workers are being financially compensated for health and safety risks.

2) When discussing gender wage gaps, an issue that often comes up is to compare pay in male-dominated and female-dominated occupations. An argument is sometimes made that male-dominated occupations tend to be more physically dangerous or risky (think construction or law enforcement) or involve distasteful tasks (say, garbage collection). One justification for the pay levels in these male-dominated jobs is that they are in part a compensating differential.

3) When thinking about regulatory actions, it's common to compare the cost of the regulation to the benefits, which requires estimating the "value of a statistical life." Here's one crisp explanation of the idea from Thomas J. Kniesner and W. Kip Viscusi:
Suppose further that ... the typical worker in the labor market of interest, say manufacturing, needs to be paid $1,000 more per year to accept a job where there is one more death per 10,000 workers. This means that a group of 10,000 workers would collect $10,000,000 more as a group if one more member of their group were to be killed in the next year. Note that workers do not know who will be fatally injured but rather that there will be an additional (statistical) death among them. Economists call the $10,000,000 of additional wage payments by employers the value of a statistical life.
Notice that at the center of this calculation is the idea of a compensating differential: in this case, estimating that two jobs are essentially identical except that one carries a higher risk of injury.
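The arithmetic in that quotation is worth making explicit. Here is a minimal sketch of the implied calculation; the function name is mine, not from any source, and the numbers come straight from the Kniesner-Viscusi example.

```python
# A minimal sketch of the value-of-a-statistical-life (VSL) arithmetic in the
# Kniesner-Viscusi example above. The function name is illustrative, not from
# any particular library; the numbers are taken from the quotation.

def value_of_statistical_life(wage_premium_per_worker, added_fatality_risk):
    """Implied VSL: extra annual pay demanded per worker, divided by the
    extra annual fatality risk per worker that the pay compensates for."""
    return wage_premium_per_worker / added_fatality_risk

# Each worker demands $1,000 more per year to accept one extra death per
# 10,000 workers, so 10,000 workers collect $10,000,000 for one statistical death.
vsl = value_of_statistical_life(wage_premium_per_worker=1_000,
                                added_fatality_risk=1 / 10_000)
print(f"Implied value of a statistical life: ${vsl:,.0f}")  # $10,000,000
```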

4) It's plausible that workers may sort themselves into jobs based on their own preferences. Thus, workers who end up working outdoors or overnight, for example, may be more likely to have a preference for working outdoors or overnight. Those who work in riskier jobs may be people who place a lower value on such risks. It would seem unwise to assume that workers who end up in different jobs have the same personal preferences about job characteristics: my compensating differential for working in a risky job may be higher than the compensating differential for those who actually have such jobs. It's also plausible that workers with lower income levels might be more willing to trade off higher risk for somewhat higher income than workers with higher income levels. 

5) The idea that high-risk jobs are paid a compensating differential makes the labor market into a kind of health-based lottery, with winners and losers. The compensating differential is based on average levels of risk, but not everyone will have the average outcome. Those who take high-risk jobs, get higher pay, and do not become injured are effectively the winners. Those who take high-risk jobs but do become injured, and in this way suffer a loss of lifetime earnings, are effectively the losers. 

6) Precise knowledge about the overall safety of jobs is likely to be very unequally distributed between the employer, who has experience with outcomes of many different workers, and the employee, who does not have access to similar data.
 
7) If compensating differentials do not exist--that is, if workers in especially unattractive jobs are not compensated in some way--then it raises questions about how real-world wages are actually determined. If most workers of a given skill level have a range of comparable outside job options, and act as if they do, then one might expect that an employer could only attract workers for a high-risk job by paying more. But if workers do not act as if they have comparable outside options, then their pay may not be closely linked to the riskiness or other conditions of their employment--and may not be closely linked to their productivity, either.  

As you might imagine, the empirical calculation of compensating differentials is a controversial business. Peter Dorman and Les Boden make the case that it's hard to find persuasive evidence for compensating wage differentials for risky work in their essay "Risk without reward: The myth of wage compensation for hazardous work" (Economic Policy Institute, April 19, 2021). The authors focus on the issue of occupational health and safety. They write: 
Although workplaces are much less dangerous now than they were 100 years ago, more than 5,000 people died from work-related injuries in the U.S. in 2018. The U.S. Department of Labor’s Bureau of Labor Statistics (BLS) reports that about 3.5 million people sustained injuries at work in that year. However, studies have shown that the BLS substantially underestimates injury incidence, and that the actual number is most likely in the range of 5-10 million. The vast majority of occupational diseases, including cancer, lung diseases, and coronary heart disease, go unreported. A credible estimate, even before the Covid-19 pandemic, is that 26,000 to 72,000 people die annually from occupational diseases. ...
The United States stands poorly in international comparisons of work-related fatal injury rates. The U.S. rate is 10% higher than that of its closest rival, Japan, and six times the rate of Great Britain. This difference cannot be explained by differences in industry mix: The U.S. rate for construction is 20% higher, the manufacturing rate 50% higher, and the transportation and storage rate 100% higher than that of the E.U.
I will not try here to disentangle the detailed research issues involved in estimating compensating wage differentials for risky jobs. Those who do such research are aware of the potential objections and seek to address them. They argue that although any individual study may be suspect, a developed body of research using different data and methods produces believable results. On the other side, Dorman and Boden make the case that such findings should be viewed with a highly skeptical eye. They also point out that during the pandemic, it was far from obvious that the "essential" workers who continued in jobs that involved a higher risk to health received a boost in wages reflecting these risks. They write: 
The view of the labor market associated with the freedom-of-contract perspective, which holds that OSH risks are efficiently negotiated between workers and employers, is at odds with nearly everything we know about how labor markets really work. It cannot accommodate the reality of good and bad jobs, workplace authority based on the threat of dismissal, discrimination, and the pervasive role of public regulation in defining what employment entails and what obligations it imposes. It also fails to acknowledge the social and psychological dimensions of work, which are particularly important in understanding how people perceive and respond to risk.

Wednesday, May 26, 2021

Economic Research Becomes a Team Sport

Coauthorship of economic research has risen considerably, and it's probably past time to think more seriously about the tradeoffs involved. Benjamin F. Jones tackles the issue in "The Rise of Research Teams: Benefits and Costs in Economics" (Spring 2021, Journal of Economic Perspectives, 35:2, 191-216). A 25-minute audio interview with Jones about the paper and its themes is available here. 

Here are some basic facts as a starting point. The share of published economics research papers that is solo-authored is shown by the red dashed line, measured on the right-hand axis. The top figure shows all research journals; the bottom figure shows the "top five" highly prominent research journals. As you can see, almost all economic research was single-authored in 1950. Now, the share is down to around 20%. The solid blue line shows the average number of authors per research paper. Back in 1950, it was around 1.1 or 1.2--say, out of every five papers, four were single-authored and the fifth had two authors. Now, it's up around 2.5 authors per paper--thus, papers with two or three authors are common, and papers with even more authors are not uncommon. 

Jones slices and dices this data in various ways. For example, it turns out that there is also a steady trend toward papers with more authors being more heavily cited. Jones defines a "home run" paper as one that is among the most-cited papers published in that year.
Teams have a growing impact advantage. In addition, this growing advantage is stronger when one looks at higher thresholds of impact. From the 1950s through the 1970s, a team-authored paper was 1.5 to 1.7 times more likely to become a home-run than a solo-authored paper, with the modest variation depending on the impact threshold. By 2010, the home-run rate for team-authorship was at least 3.0 times larger than for solo-authorship. From the 1980s onward, the team-impact advantage is increasing as the impact threshold rises. By 2010, team-authored papers are 3.0 times more likely to reach the top 10 percent of citations, 3.3 times more likely to reach the top 5 percent of citations, and 4.1 times more likely to reach the top 1 percent of citations than solo-authored papers.
Moreover, the growth of team-based research and its greater success seems to be happening in every subfield of economics research. It's also happening in other social sciences, and it already happened several decades ago in engineering and the hard sciences. 

The great strength of team-based research is that in a world where knowledge has become vastly broader, teams are a way to deploy input from those with different specialties. These combinations of research insights from differing areas are also more likely to become the kinds of innovative papers that are widely cited in the future. Jones offers some striking comparisons here: 
To put some empirical content around this conceptual perspective, consider that John Harvard’s collection of approximately 400 books was considered a leading collection of his time, and its bequest in 1638, along with small funds for buildings, helped earn him the naming right to Harvard College (Morrison 1936). One hundred seventy-five years later, Thomas Jefferson’s renowned library of 6,487 books formed the basis for the US Library of Congress. That library’s collection had risen to 55,000 books by 1851 (Cole 1996). Today, the US Library of Congress holds 39 million books (as described in https://www.loc.gov/about/general-information).

Looking instead at journal articles, the flow rate of new papers grows at 3–4 percent per year. In 2018, peer-reviewed, English-language journals published three million new papers (Johnson, Watkinson, and Mabe 2018). In total, the Web of Science™ now indexes 53 million articles from science journals and another 9 million articles from social science journals (as described at https://clarivate.com/webofsciencegroup/solutions/web-of-science). In economics alone, the Microsoft Academic Graph counts 30,100 economic journal articles published in the year 2000. This publication rate was twice what it was in 1982 and half what it is today. ...

The organizational implication—teamwork—then follows naturally as a means to aggregate expert knowledge. In the history of aviation, for example, the Wright brothers designed, built, and flew the first heavier-than-air aircraft in 1903. This pair of individuals successfully embraced and advanced extant scientific and engineering knowledge. Today, by contrast, the design and manufacture of airplanes calls on a vast store of accumulated knowledge and engages large teams of specialists; today, 30 different engineering specialties are required to design and produce the aircraft’s jet engines alone.

Even if the growth in co-authorship is overall beneficial, perhaps even inevitable, it raises a number of concerns and issues. In no particular order: 

1) Most of the time, it is individuals who get hired and promoted and given tenure--not teams. Thus, those who do the hiring and promotion and tenuring need to figure out how to attribute credit within a team. Who in the team is more or less deserving of career advancement? 

2) The issue of assigning credit is not only difficult in itself, but raises a possibility of bias. There's some evidence that women economists tend to receive less credit for co-authored work than male economists.  

3) If teams are more important, then how teams are formed matters, which raises issues of its own. For an individual researcher, what is the best strategy for knowing when to join a team and when to back away? Research teams in economics are likely to be fluid and shifting from project to project. Academic and personal networks will influence who knows who, and what collaborations are more or less likely to arise, and who is likely to be left out. 

4) Along with the teams of named co-authors, team-based work may also be supported by institutions. Colleges and universities with more resources for research assistants, data access, computing power, travel budgets, and sabbaticals will also be able to offer more support for teams. Across the universe of US colleges and universities, there has always been unequal access to such support, but the rising importance of teams could give greater bite to these inequalities. 

5) The people who enter PhD programs and eventually become research economists are not typically trained to work as team members. They have passed a lot of exams. But communication and working together in teams may not have played much of a role in their earlier development. (Indeed, a cynic might say that people may have a tendency to become professors because they aren't especially good at being team players.) There has not traditionally been formal training for research economists in managing even a small team, much less in larger management tasks like handling budgets or overseeing a human resources department: some researchers will find that they can build such skills on their own, while others will fail, sometimes egregiously, and the members of their team will suffer as a result.

6) Research papers in economics have tripled in length in the last few decades. Although there are a number of reasons behind this shift, it seems plausible that groups of co-authors--all willing and able to add to the paper in their own way--are part of the underlying dynamic. As an editor at an academic journal myself, I have on occasion wondered if all the co-authors of a paper are taking full responsibility for everything in the paper, or alternatively, if each co-author is focused on their own material, and no author is really taking responsibility for a clear introduction, internal structure, and conclusions. 


Saturday, May 22, 2021

Can Policy Tame Credit Cycles?

There's an often-told story about why economies go through cycles of boom and bust that goes like this. In good economic times, there is lots of lending and borrowing. Indeed, this credit boom helps provide the force that keeps the good times going. But although few people focus on this fact during the good times, the credit boom involves a larger and larger share of somewhat risky loans--loans that are less and less likely to get paid off when a negative shock hits the economy and times turn bad. When that negative shock inevitably hits, the economy moves very rapidly from a credit boom situation, where it's easy to borrow, to a credit bust situation, where it's much harder. Lots of firms were counting on new loans to help pay off their own loans, and those new loans aren't available. Again, the negative situation feeds on itself, with the sharp decline in credit and economic buying power helping to prolong the recession. 

The details of this credit boom/credit bust cycle vary from time to time and place to place, but the pattern is frequently recognizable. Thus, Jeremy Stein gave the Mundell-Fleming lecture at the IMF in fall 2019 on the subject "Can Policy Tame the Credit Cycle?" The lecture has now been published in the IMF Economic Review (January 2021, 69: 5–22). 

Here's how Stein describes the credit cycle: 

I want to emphasize two sets of stylized facts. The first, which has become increasingly well-known and widely accepted in recent years, is that if one looks at quantity data that captures the growth of aggregate credit, then at relatively low frequencies rapid growth in credit tends to portend adverse macroeconomic outcomes, be it a financial crisis or some kind of more modest slowdown in activity. Second, and perhaps less familiar, is that elevated credit-market sentiment also tends to carry negative information about future economic growth, above and beyond that impounded in credit-quantity variables. ... One interpretation of this pattern is that when sentiment is high, there is an increased risk of disappointing over-optimistic investors. And when investors are disappointed, this tends to lead to a sharp reversal in credit conditions that corresponds to an inward shift in credit supply, which in turn exerts a contractionary effect on economic activity. So again, the overall picture is that credit booms, especially those associated not just with rapid increases in the quantity of credit, but also with exuberant sentiment—i.e., aggressive pricing of credit risk—tend to end badly.

This process doesn't operate on a schedule or like clockwork. An economy proceeding into a credit boom becomes increasingly vulnerable. But it typically takes some additional trigger for that vulnerability to turn into recession. 

Stein discussed the various models economists have used to look at this process. For example, one set of models is built on the idea that actors in a growing economy can become irrationally exuberant, and start to neglect downside risks. In other models, the lenders and borrowers in a credit boom are strictly rational, but focus only on their own risks. They don't take into account that their expansion of credit is contributing to greater economic vulnerability for the economy as a whole; these "externalities in leverage" mean that credit will grow faster than the socially optimal amount. Although researchers sweat blood over the differences between these approaches, there's no reason they can't both be true. 

So how might a government that is aware of this history of credit cycles respond? In one approach, sometimes called "macroprudential regulation," government would tighten and loosen financial regulations to counterbalance the risks of credit boom and bust. For example, banks could be required to hold more capital as credit levels rise in an economy. Many countries change their regulations about how easy it is to get a home mortgage. Stein writes that "while a number of countries have implemented time-varying loan-to-value or debt-to-income requirements on home mortgage loans, we have not seen anything similar in the USA, and it does not appear that we are likely to anytime in the near future."

But this approach has limitations, too. As Stein points out, "so far as the USA is concerned, regulators appear to have little in the way of operational, time-varying macroprudential tools at their disposal."

In addition, adjusting macroprudential regulations might affect the actions of banks and homebuyers, but the financial system has all sorts of ways of expanding credit that will be much less affected by those kinds of regulations. Stein writes: 
[I]t is useful to think about the rapid growth in recent years of the corporate bond market and the leveraged loan market. And bear in mind that some of this growth may be explained by lending to large and medium-sized firms migrating away from the banking sector as capital requirements there have gone up. Leveraged loan issuance in particular has been booming of late; these are loans that are typically structured and syndicated by banks but most often wind up on the balance sheet of other investors, be they collateralized loan obligations (CLOs), pension funds, insurance companies, or mutual funds.
I've written in the last few years before the pandemic about expansions in corporate bond markets and in leveraged loan markets (for example, here, here, here, and here).

So what else might be done? Stein suggests that monetary policy might want to keep an eye on credit cycles, and perhaps lean against them a little. Thus, if the economy was doing well and the Fed was wondering about how soon and how much to raise interest rates, it might act a little more quickly if a credit boom seemed well underway, but otherwise act more slowly. But as Stein points out, the traditional view of central banking has been that "monetary policy should focus on its traditional inflation-employment mandate and should leave matters of financial stability to regulatory tools."

In response to this traditional view, Stein writes: 
To be clear, I think this view is almost certainly right in a world where financial regulation is highly effective. However, for the reasons outlined above, I am inclined to be more skeptical with respect to this premise ... at least in the current US context. This is of course not to say that we should not make every possible effort to improve our regulatory apparatus so as to mitigate its existing weaknesses. But taking the world as it exists today, I am more pessimistic that we can expect financial regulation to satisfactorily address the booms and busts created by the credit cycle entirely on its own. This would seem to leave open the possibility of a role for monetary policy—albeit a second-best one— in attending to the credit cycle.

Thursday, May 20, 2021

A Primer on SPACs: An Alternative to IPOs?

Imagine that an entrepreneur who is running a promising business wants to change over from being a privately-held company to being a publicly-owned company--that is, to get an infusion of money in exchange for becoming accountable to shareholders. How might this be done? 

There have traditionally been two main choices. One option is to have an "initial public offering"--that is, to create stock and sell it to the public. The other option is for the entrepreneur to sell the company to an established firm, thus becoming accountable to the shareholders of that firm. But in the last year or so, a third option has boomed: the SPAC, which stands for "special purpose acquisition company."

The Knowledge@Wharton website recently published "Why SPACs Are Booming" (May 2, 2021), which is a short descriptive overview of a one-hour video presentation called "Understanding SPACs," in which "Wharton finance professors Nikolai Roussanov and Itamar Drechsler explained how SPACs work and their pros and cons for investors." Another useful overview is a paper just called "SPACs," by Minmo Gahng, Jay R. Ritter, and Donghang Zhang (working paper at the SSRN website, last revised March 2, 2021). Let's run through the basic questions here: how does it work, how many are there, why do it, and should investors be worried? 

Here's a figure from the Wharton presentation showing the SPAC process. 

The first step is to form a SPAC. This is sometimes called a "blank check" company. It is a publicly-listed company--that is, it raises money by having its own initial public offering in which it sells shares to investors--but at the start the SPAC doesn't own anything. The company does not have to identify in advance what it plans to do with its money. Presumably, investors buy stock in such a company based on the reputation of those who started it. As the Wharton write-up explains: 

From the time a SPAC lists and raises money through an IPO, it has 18 to 24 months to find a private operating company to merge with. If a SPAC can’t find an acquisition target in the given time, it liquidates and returns the IPO proceeds to investors, who could be private equity funds or the general public.
When the SPAC finds a target company, it often seeks out some additional investors through PIPEs, or "private investment in public equity." If the SPAC fails to merge with the target firm, then investors get their money back. If the SPAC does merge with the target firm, then the owners of the target firm get a payoff and that target firm now has a set of stockholders. 

SPACs have taken off lately. Here's a figure from the Gahng, Ritter, and Zhang paper, where the blue dots (left axis) show the number of SPACs and the gray bars (right axis) show the dollar value of the initial public offerings used to form these SPACs. As you can see, SPACs are not brand-new--they have been around for a few years--but their number and volume were gradually rising up through 2019 before taking off in 2020. 

Why was the number of SPACs on the rise in 2020? One simple reason is that with stock prices high, more companies were trying to find ways to cash in. 

Another reason involves the regulation of initial public offerings. Specifically, a firm going through an IPO is only allowed to describe its past historical performance, and is forbidden from making forecasts of future earnings. Obviously, this tends to favor somewhat established firms, and to rule out young start-up companies, especially those with little history and little revenue: for such firms, the time, energy, cost, and regulatory requirements of an IPO cancel out the benefits. A firm being purchased by a SPAC can make forecasts of future earnings, and the entire process can happen in a couple of months. Similarly, if you are an outside investor who would like to own a diversified portfolio of young start-up companies, hoping that a few of them will hit it big, investing in SPACs gives you the opportunity to do that without having inside connections to venture capitalists, angel investors, or private equity firms. 

For a firm thinking about being merged into a SPAC, one main disincentive is that the sponsor of the SPAC typically takes 20% of the value of the original firm as its compensation. This does give the sponsor of the SPAC a strong incentive to remain involved and to help shepherd the firm toward growth and profitability. But the target firm is in effect giving up 20% of its value in exchange for the cash infusion. 

What are the potential problems with the SPAC approach? The obvious issue is that an investor in a SPAC is essentially trusting the SPAC to make a smart decision about which firm to merge with, and at what price, and additionally trusting the SPAC management to keep pushing the firm forward after the merger is completed. When retail investors are looking at promises about what might happen with young firms, and displaying some perhaps irrational exuberance, the potential for hype to outrun reality is clear. 

A less obvious issue is how the IPO for a SPAC is structured for investors. The Wharton write-up explains: 

Investors in the IPO of a SPAC typically buy what are called units for $10 each. The unit consists of a common share, which is regular stock, and a derivative called a warrant. Warrants are call options and they allow investors to buy additional shares at specified “exercise” prices. After the merger with the shell company, both the shares and the warrants are listed and traded publicly. If some SPAC investors change their minds and do not want to participate in the merger with the shell company, they could redeem their shares and get back the $10 they paid for each. However, they can retain the warrants.
Yes, you read that correctly. When you buy a "unit" in a SPAC IPO, you can sell back the "unit" at the original purchase price and essentially keep the warrant--that is, the option to purchase stock at a locked-in lower price even if the stock price goes up--for free. The economic justification for this is that it provides an incentive for the SPAC sponsor to negotiate a good deal, because if the deal is perceived to be a bad one, the original money raised by the SPAC could evaporate. The warrants can be viewed as compensation for tying up your funds while the SPAC tries to negotiate a merger with a target firm. But this stock-plus-a-warrant-for-free structure is being criticized within the industry and has come under the eagle eye of regulators, and may not last. 
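To make the structure concrete, here is a stylized sketch of the payoffs just described. The $11.50 exercise price is typical of recent SPAC warrants but is my assumption, not a figure from the Wharton write-up, and the function name is illustrative.

```python
# A stylized sketch (not from the cited sources) of the SPAC unit economics
# described above: a $10 unit buys one share plus one warrant, the share can
# be redeemed for $10 before the merger, and the warrant is kept either way.
# The $11.50 exercise price is an assumption, typical of recent SPACs.

def warrant_value_at_expiry(share_price, exercise_price=11.50):
    """Payoff of a call warrant: max(share price - exercise price, 0)."""
    return max(share_price - exercise_price, 0.0)

unit_cost = 10.00  # paid at the SPAC IPO for one share plus one warrant

for post_merger_price in (8.00, 11.50, 18.00):
    w = warrant_value_at_expiry(post_merger_price)
    # An investor who redeemed the share got the $10.00 back, so the warrant
    # payoff is pure upside on top of the returned principal.
    print(f"Post-merger share price ${post_merger_price:.2f}: "
          f"redeemed share returns ${unit_cost:.2f}, warrant worth ${w:.2f}")
```

In other words, the redemption right puts a floor under the investor's principal, while the retained warrant keeps the upside if the merged firm's stock rises above the exercise price.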

For investors, the past record of SPACs looks good if you are part of the original IPO--that is, one of the people giving the SPAC a blank check--but not especially good if you are buying in at the "private investment in public equity" stage. The Gahng, Ritter, and Zhang team reports, using data from 2010 up through May 2018: "While SPAC IPO investors have earned 9.3% per year, returns for investors in merged companies are more complex. Depending on weighting methods, they have earned -4.0% to -15.6% in the first year on common shares but 15.6% to 44.3% on warrants." 

SPACs seem here to stay, in some form, unless the initial public offering rules are revised in a way that works better for young companies without a clear history of revenue growth. But they have now come under regulatory scrutiny. On April 8, John Coates at the Securities and Exchange Commission made a statement on "SPACs, IPOs and Liability Risk under the Securities Laws." He began (footnotes omitted): 

Over the past six months, the U.S. securities markets have seen an unprecedented surge in the use and popularity of Special Purpose Acquisition Companies (or SPACs). Shareholder advocates – as well as business journalists and legal and banking practitioners, and even SPAC enthusiasts themselves – are sounding alarms about the surge. Concerns include risks from fees, conflicts, and sponsor compensation, from celebrity sponsorship and the potential for retail participation drawn by baseless hype, and the sheer amount of capital pouring into the SPACs, each of which is designed to hunt for a private target to take public. With the unprecedented surge has come unprecedented scrutiny, and new issues with both standard and innovative SPAC structures keep surfacing.

A few days later the SEC issued "accounting guidance" that made the warrants less attractive, by requiring that they be treated as a liability of the original SPAC. The number of new SPACs promptly plummeted. Again, I suspect SPACs aren't going away, but they are definitely an innovation in flux. 

Tuesday, May 18, 2021

Nitrous Oxide, Agriculture, and Climate Change

For most of us, nitrous oxide calls up memories of the mild anaesthetic used by many dentists. But nitrous oxide is also an important greenhouse gas, and the main emissions into the atmosphere do not come from dentists run amok, but rather from applications of nitrogen-based fertilizers to soil. Ula Chrobak tells the story in "Fighting climate change means taking laughing gas seriously: Agriculture researchers seek ways to reduce nitrous oxide’s impact on warming" (Knowable Magazine, May 14, 2021). She writes: 

N2O, also known as laughing gas, does not get nearly the attention it deserves, says David Kanter, a nutrient pollution researcher at New York University and vice chair of the International Nitrogen Initiative, an organization focused on nitrogen pollution research and policy making. “It’s a forgotten greenhouse gas,” he says. Yet molecule for molecule, N2O is about 300 times as potent as carbon dioxide at heating the atmosphere. And like CO2, it is long-lived, spending an average of 114 years in the sky before disintegrating. It also depletes the ozone layer. In all, the climate impact of laughing gas is no joke. IPCC scientists have estimated that nitrous oxide comprises roughly 6 percent of greenhouse gas emissions, and about three-quarters of those N2O emissions come from agriculture.
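For a rough sense of what that 300-fold potency means in practice, here is a back-of-the-envelope sketch of the standard CO2-equivalence conversion; the emissions figure in the example is purely illustrative, not a number from the article.

```python
# A back-of-the-envelope sketch of converting N2O emissions into CO2-equivalents
# using the roughly 300x potency ("global warming potential") figure quoted above.
# The 7 Mt emissions figure below is hypothetical, chosen only for illustration.

GWP_N2O = 300  # approximate 100-year global warming potential of N2O vs. CO2

def co2_equivalent_megatonnes(n2o_megatonnes, gwp=GWP_N2O):
    """Convert megatonnes of N2O into megatonnes of CO2-equivalent."""
    return n2o_megatonnes * gwp

hypothetical_n2o_mt = 7  # illustrative annual emissions, in megatonnes of N2O
print(f"{hypothetical_n2o_mt} Mt of N2O ~ "
      f"{co2_equivalent_megatonnes(hypothetical_n2o_mt):,} Mt of CO2-equivalent")
```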

Last October, Hanqin Tian and a long list of co-authors published "A comprehensive quantification of global nitrous oxide sources and sinks" in Nature (October 7, 2020, pp. 248-256, subscription required). They write (footnotes omitted): 

Global human-induced [N2O] emissions, which are dominated by nitrogen additions to croplands, increased by 30% over the past four decades ... This increase was mainly responsible for the growth in the atmospheric burden. Our findings point to growing N2O emissions in emerging economies—particularly Brazil, China and India. ... The recent growth in N2O emissions exceeds some of the highest projected emission scenarios, underscoring the urgency to mitigate N2O emissions.

Yes, carbon dioxide is by far the most important greenhouse gas, with methane running second, and nitrous oxide third. These figures from the EPA illustrate greenhouse gas emissions for the US and for the world.


But that said, the climate change policy agenda needs to be multidimensional, addressing the issue from many different angles. In addition, with a need to expand agricultural productivity and output in many countries around the world, increased applications of fertilizer seem nearly certain, unless there is a concerted research-based effort to think about alternative approaches.

Monday, May 17, 2021

Taxation Tales and the Window Tax

H.L. Mencken once wrote: "Taxation . . . is eternally lively; it concerns nine-tenths of us more directly than either smallpox or golf, and has just as much drama in it; moreover, it has been mellowed and made gay by as many gaudy, preposterous theories." Michael Keen and Joel Slemrod quote Mencken and then make his point in many vivid ways in their gallop through tax policy in their just-published book Rebellion, Rascals, and Revenue: Tax Follies and Wisdom Through the Ages. For those (including economics instructors) looking to replenish and refresh their dusty anecdotes about offbeat tax policies, this is the book you've been waiting for. The book itself is an easy read, with copious footnotes leading to the research literature for those who want more detail. They write at the start: 
Tax stories from the past, we hope to show, can be entertaining—sometimes in a weird way, sometimes in a gruesome one, and sometimes simply because they are fascinating in themselves. They are also helpful in thinking about the tax issues that run through today’s headlines and politics. The stories we tell in this book span several millennia, from Sumerian clay tablets, Herodotus, and the unusual tax ideas of the Emperor Caligula through to the slippery practices revealed by the Panama Papers, the tax possibilities unleashed by blockchain, and the outlook for taxation in a world transformed by the COVID-19 pandemic. But this book is not a history of taxation. Nor is it a primer on tax principles. It is a bit of both.

The book is a torrent of examples, with general lessons proffered gently in the interstices. You've heard of the Rosetta Stone, right? But did you know that its text "describes a tax break given to the temple priests of ancient Egypt"? You can learn about the civil strife over the "armed resistance by the Maori of Hokianga County in New Zealand to a tax on every dog in the district," or another dog-tax episode when "the Bondelswarts, a nomadic group in German Southwest Africa (now Namibia), rose up against an increase in the dog tax that had been imposed in 1917." You will find examples of a hut tax, a beard tax, a bachelor tax, and much more. 

As a concrete example, and to give a sense of the expository style of the book, I'll focus here on the window tax, an example that is fairly well-known among public finance economists, but perhaps not to the broader world. Keen and Slemrod write (many footnotes omitted):  

This is the tale of the window tax, imposed in Britain from 1697 to 1851. At first blush, taxing windows may seem anachronistic or just plain folly. But it was actually pretty clever. 

The problem faced by the government of the time was to find a tax based on something that: increased with wealth (for fairness); was easily verified (to avoid disputes); and—being intended to replace a tax on hearths (that is, fireplaces), much hated for requiring inspectors to check inside the property, imposed by the recently deposed Stuarts—observable from afar. The answer: windows. 

The number of windows in a house was a decent proxy for the grandeur and wealth of its occupants, so that on average, wealthier people would owe more window tax. And it could be assessed from outside by “window peepers.” In an age that lacked Zillow.com or any other way to estimate on a large scale and with reasonable accuracy the value of residential property, this tax was not such a bad idea. Indeed, a window tax is essentially a (very) simple version of the computer-assisted mass appraisal systems by which some developing countries now assess property tax, valuing each house by applying a mathematical formula to a range of relatively easily observed characteristics (location, size, and so on).

Clever though the window tax idea was, it had limitations of a kind that pervade other taxes as well. It was not, for instance, a very precise proxy. That led to unfairness. Adam Smith was irked that: 

A house of ten pounds rent in a country town may sometimes have more windows than a house of five hundred pounds rent in London; and though the inhabitant of the former is likely to be a much poorer man than that of the latter, yet so far as his contribution is regulated by the window-tax, he must contribute more to the support of the state.

And even though the tax only applied to properties with more than a certain number of windows, which went some way toward easing the burden on the poorest families, the tenement buildings into which the urban poor were crowding counted as single units for the purposes of the tax, and so were usually not exempt from tax. 

The window tax also encountered the difficulty that it induced changes in behavior by which taxpayers reduced how much they owed, but only at the expense of suffering some new harm. The obvious incentive created by the tax was to have fewer windows, if need be by bricking up existing ones, as remains quaintly visible to this day on distinguished  old properties (and some undistinguished ones). Light and air were lost. The French economist and businessman Jean-Baptiste Say (1767– 1832) experienced this response first-hand when a bricklayer came to his house to brick up a window so as to reduce his tax liability. He said this led to jouissance de moins (enjoyment of less) while yielding nothing to the Treasury, which is a felicitous definition of “excess burden”: the idea—one of the most central and hardest to grasp in thinking about taxation—is that the loss which taxpayers suffer due to a tax is actually greater than the amount of tax itself. ...

The harm done by vanished windows was not trivial. Poor ventilation spread disease; lack of light led to a deficiency of vitamin D that stunted growth—what the French came to call the “British sickness.” Opponents reviled the tax as one on “the light of heaven”; the medical press protested that it was a “tax on health.” Philanthropic societies hired architects to design accommodation for the poor so as to reduce liability to the window tax, and great minds of the time railed against it. ... Charles Dickens was straight up irate: The adage “free as air” has become obsolete by Act of Parliament. Neither air nor light have been free since the imposition of the window-tax . . . and the poor who cannot afford the expense are stinted in two of the most urgent necessities of life. France followed the British example, adopting a tax on windows (adding an equally hated tax on doors) in 1798, leading the saintly Bishop of Digne of Les Misérables to pity “the poor families, old women and young children, living in those hovels, the fevers and other maladies! God gives air to mankind and the law sells it.” ...

Because people preferred to both keep their windows and pay less tax, the response to the window tax, as with most taxes, was largely a story of evasion and avoidance, disputes, and legislative change trying to clarify the tax rules about what was and was not subject to tax. When visitors today take a punting outing on the River Cam in Cambridge, the guide may point out a house on the bank with a window on the corner of the building, supposedly designed to let light into two adjacent rooms that counted as just one window for purposes of the tax. The government caught on to that trick, however, and in 1747 introduced legislation stipulating that windows lighting more than one room were to be charged per room. A less subtle ploy was to hoodwink the window peepers by temporarily blocking windows “with loose Bricks or Boards, which may be removed at Pleasure or with Mud, Cow-dung, Moarter, and Reeds, on the Outside, which are soon washed off with Shower of Rain, or with paper and Plateboard on the Inside.” In response, the same 1747 law also required that no window that had been blocked up previously could be unblocked without informing the surveyor, with heavy fines for violation. 

Disputes, favoritism, and upset abounded. What exactly, for instance, is a window? ... The wording of the act seemed to imply that any hole in an exterior wall, even from a missing brick, was a taxable window. The rules did become clearer (or at least more complex) over time; the 1747 reform, for instance, clarified that when two or more panes were combined in one frame, they counted as distinct windows if the partition between them was more than 12 inches wide. In any case, the tax commissions, consisting of local gentlemen, tended to apply the tax much as they wanted. This practice created many opportunities for favoritism. John Wesley, founder of Methodism, complained about an acquaintance with 100 windows paying only for 20.

The window tax was very imperfect. But it was not a folly. And it illustrates the key challenges that are at the heart of the tax-design problem: the quest for tolerable fairness, the wasteful behavioral responses that the tax induces, and the desire to administer a tax cost effectively and non-intrusively. ... Many governments, as we will see, have done far worse than the window-taxers did.

A 20-minute interview with Michael Keen and Joel Slemrod discussing some themes of the book is available from the ever-useful Econofact website.



What is Behind People's Views on Tax Policy?

When economists study taxation, they typically separate two issues: one is the distributional issue of which groups pay more or less; the other is the efficiency issue of how taxes blunt economic incentives for work, savings, investment, innovation, and so on. Today is the deadline for US individual income tax returns to be filed with the federal Internal Revenue Service, as well as with state-level income tax authorities. According to the research of Stefanie Stantcheva, most of those taxpayers focus almost entirely on the distributional question, not the efficiency question. 

Stantcheva discusses the topic as part of an interview with David Cutler on the occasion of winning the 2020 Elaine Bennett Research Prize, which is "awarded every two years to recognize and honor outstanding research in any field of economics by a woman not more than seven years beyond her Ph.D." (CSWEP Newsletter, 2021, 21:1, "Interview with Bennett Prize Winner Stefanie Stantcheva"). The underlying research paper by Stantcheva being discussed here is "Understanding Tax Policy: How Do People Reason" (November 2020, NBER Working Paper 27699). Stantcheva reports in the interview: 

Consider the example of tax policy. Is it that people have different perceptions about the economic cost of taxes? Is it that they think differently about the distributional impacts that tax changes will have? Or is it that they have very different views of what’s fair and what’s not? Could the reason be their views on the government—how wasteful or efficient they think the government is? Or is it purely a lack of knowledge about how the tax system works and what inequality is?

I think of these factors as my explanatory or right-hand side variables. I can decompose a person’s policy views into these various components. What I find is that for tax policy, a person’s views on fairness, and who’s going to gain and lose from tax changes completely dominates all other concerns. This is followed by a person’s views of the government. How much do they think the government should be doing, how efficient is it, how wasteful is it, how much do they trust it? Efficiency concerns are actually quite second-order in people’s minds when it comes to tax policy.

These are all correlations. To see what’s actually causal and what could be shifting views, I show people these short ECON courses, which are two- or three-minute-long videos which explain how taxes actually work. The videos take different perspectives. Although they’re neutral and pedagogical, they don’t tell people what taxes should be or what’s fair or not. They just explain how taxes work from one perspective. For instance, one version focuses only on the distributional impacts of taxes - who gains and who loses. The other version focuses only on the efficiency costs. Then there is the economist treatment, which shows both and emphasizes the trade-off between efficiency and equity. One can replicate this approach for the other policies such as health policy or trade or even climate change, which all have efficiency and equity considerations.

What I find for tax policy confirms the correlations. What shifts people’s views most is to see the distributional impacts of taxes, not at all the efficiency consequences of it. Even if you put it together and emphasize the trade-off, it’s still the distributional considerations that dominate and outweigh the efficiency concerns.
Distributional concerns about taxes matter to me, as well! But even if you generally agree on the idea that taxes should weigh more heavily on those with higher incomes or greater wealth, that agreement doesn't help to distinguish between the different ways this might be done. 

For example, one might have higher marginal tax rates on those with higher income levels. One might reduce the value of tax deductions, like deductions for mortgage interest or state and local taxes, that tend to benefit those with high incomes more. One might insist that taxes be paid on currently untaxed fringe benefits, like employer-purchased health insurance, because exempting those benefits from income tax provides greater benefit to those with higher incomes. One might want to alter rules that let people make tax-free contributions to retirement accounts, on the grounds that reducing taxes in this way will tend to benefit those with higher incomes. One might think about expansion of "refundable" tax provisions that help the working poor, like the Earned Income Tax Credit and the child tax credit. One might alter corporate taxes, on the theory that this would affect shareholders and top managers more than it would affect wages paid to average employees. One might alter the way in which capital gains are taxed, and one might want to distinguish between capital gains on owner-occupied housing, on family businesses, or on financial assets. One might alter the rules that let high-income people pass wealth to future generations. For example, under current rules, when financial assets that have gained in value over the lifetime of the owner are passed through an estate, those previous gains are never taxed. One might want to change other rules on what assets can be passed to the next generation, from other aspects of the estate tax to rules about intergenerational giving, along with rules about using life insurance policies or charitable foundations to pass income between generations. 

For those readers who are sunk deepest into distributional thinking, I suspect the honest response to this list is something like: "I'm against anything that would raise my taxes by a single dime, but I'm for anything that would only be paid by high-income, high-wealth individuals, and I don't care about how it affects their incentives." Of course, in a US economy where government debt is ascending to unprecedented levels even before we try to address the medium-term projected insolvency of Social Security and Medicare, that response is just an abdication of analysis. 

Friday, May 14, 2021

Interview with Christopher Pissarides: Unemployment and Labor Markets

Michael Chui and Anna Bernasek of the McKinsey Global Institute interview Christopher Pissarides (Nobel, '10) "about how he developed the matching theory of unemployment, how COVID-19 affected his research, and what might be in store for labor markets after the pandemic" (May 12, 2021, "Forward Thinking on unemployment with Sir Christopher Pissarides"). At the website, audio is available for the half-hour interview, along with an edited transcript, upon which I will draw here.

As a starting point, it's useful to remember that labor markets always have, at the same time, both unemployed workers who are looking for jobs and employers who have job vacancies. For example, the US economy had about 9.7 million unemployed workers in March 2021, and at the same time, employers were listing 8.1 million job vacancies. Indeed, there was a stretch from April 2018 to February 2020--not all that long ago--where the number of job vacancies for the US economy exceeded the number of unemployed in the monthly data.

At first glance, this combination of millions of job vacancies and millions of unemployed seems like a puzzle. Why don't the unemployed just take the vacant jobs? This is where Pissarides comes in. He has emphasized that unemployment and hiring are not just about raw numbers, but involve a matching process. Most employers, most of the time, don't just hire the first person who walks in through the front door, but instead are looking for a good match for the skills they desire. Most workers, most of the time, know that they could get certain kinds of low-wage work pretty quickly, if that was what they wanted, but they instead are looking for a good match for the skills they can offer. Policies to address unemployment, or to help unemployed workers, need to be considered in the context of this matching process. Here's Pissarides: 

[U]nemployment is a very serious problem that I think governments should always be dealing with. It’s a cause of poverty, of disenfranchisement from the labor market, of misery. ... [B]efore we did that work, people were thinking of unemployment as a kind of stock of workers, as a number of workers if you like, who could not get a job. They would start from the top end of the market and say, “This is how much output this economy needs, that’s how much is demanded. Then how many people do you need to produce that output?” Then you would come up with a number. And then they would say, “Well, how many workers want jobs?” If there are more workers that want jobs, you call the difference unemployment. ...

What we did was to start from below, saying the outcomes in the labor market are the result of workers looking for jobs, companies looking for workers. The two need to come together. They need to agree that the qualifications of the worker are the right ones for the firm. That once the firm has the capital, that worker needs to make the best use of his or her skills. That unemployment insurance policy might influence the incentives that the worker needs to take a job. The tax policy might influence the incentives of the company. Once you open the field up like that, it gives you unlimited possibilities for research in that area and working out the impact of these different policies or different features of the labor market on unemployment. ...

[T]he time that it takes to find that job depends on how many jobs are being offered in the labor market, what types of skills firms want, what incentives the worker has to accept the jobs, what’s the structure of production, the profit that the firm expects to make, conditions overall in the market. All those things influence the duration of unemployment. Therefore you could study there—how long does the worker remain unemployed? What could influence that duration? What could make it shorter? What would make it longer if you did certain things? On that basis, you derive good policies towards unemployment, and they are still the policies that governments use, in fact widely, to work out how long people remain unemployed and what the implications of their unemployment are.
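Pissarides's verbal description corresponds to a compact formal object: the matching function, which in the Diamond-Mortensen-Pissarides literature is typically given a Cobb-Douglas form. Here is a minimal sketch; the efficiency parameter A and elasticity alpha are illustrative assumptions of mine, not estimates, though the stocks of unemployed and vacancies echo the March 2021 figures above.

```python
# Minimal sketch of the canonical Cobb-Douglas matching function from the
# Diamond-Mortensen-Pissarides literature. A and alpha are assumed values.
A, alpha = 0.5, 0.5
u, v = 9.7e6, 8.1e6          # unemployed workers and job vacancies

matches = A * u**alpha * v**(1 - alpha)   # hires formed this period
job_finding_rate = matches / u            # chance an unemployed worker is matched
vacancy_fill_rate = matches / v           # chance a vacancy is filled
tightness = v / u                         # market "tightness," often called theta

print(f"matches: {matches:,.0f}")
print(f"job-finding rate: {job_finding_rate:.1%}")
print(f"vacancy-fill rate: {vacancy_fill_rate:.1%}")
print(f"tightness v/u: {tightness:.2f}")
```

The point of the exercise is that the job-finding rate depends on market tightness, not just on the raw count of the unemployed, which is exactly the shift in perspective Pissarides describes.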

So what are some insights that emerge from this approach? The unemployment that results from this matching process can be a good thing in an economy:

[L]ower unemployment than what might exist is not always a good thing for the labor market, because some unemployment is good because of the matching problem. If a worker becomes unemployed, or if a new worker leaves school, a person leaves school, gets into the labor market, is a new worker, it wouldn’t be a good idea to accept the first job that is offered on day one and get into it. Because it may not be the job that would bring out the best productivity from that worker, or the job that that worker would like best. Now, you might say it’s obvious, and I now think it is, but when we were working on it, this didn’t exist.

On designing unemployment insurance: 

[I]f you offer unemployment compensation, which is necessary to reduce poverty caused by unemployment, then you have to be careful when you’re doing that, because if you just offer it unconditionally, it’s going to create disincentives for people to take jobs, and it’s going to lengthen the duration of unemployment. Therefore it’s going to increase your unemployment incidence. You are going to see more people unemployed, because they stay unemployed longer, collecting benefits. Now, that’s been exploited a lot by politicians. I don’t agree with that way, that they say, we have to cut benefits because of these incentives.

A better way of dealing with it is to say, we need to structure our unemployment compensation policy in such a way that it deals with the poverty issue, but at the same time it doesn’t create those disincentives that you might get if you offer it unconditionally. The leading countries that develop policies that give exactly the answer to the question I’ve just posed, how to structure it, are mainly the Scandinavians—Denmark, Sweden, Norway. And other countries have followed them now, and most of them do follow this advice of structuring the benefit in such a way that the incentives are not harmed very much when you are dealing with the poverty issue of unemployment.
On retraining programs:
Retraining needs to be provided by companies, because they’re the ones that would know in what to train and to what extent. Now, for training to succeed, however, it has to be funded from outside as well, because no company, except for the very big ones, I guess, will take on workers on expensive training programs if they are running the risk that some other company will come and take their workers away from them after they get trained. There is this poaching problem. ...

Then the other issue is that training succeeds when the worker owns the training, in the sense that the worker is doing the training not because someone forced that worker to do the training, but because they believe that it’s good for them and their career, and it’s going to give them career progression and a pay raise. ... Somehow maybe part of the amount should be given to the worker, then the worker chooses how to spend it. They cannot take it as money, but they could draw on a fund, a training fund. Singapore has a very good scheme like that. I think it’s called SkillsFuture. Some other countries are introducing it. It’s not an easy thing, but we have enough experience now to know how to plan those kinds of training support schemes.

Wednesday, May 12, 2021

Political Economy of the Pandemic Response

If economics-minded policy-makers had made decisions in response to the pandemic, what might they have done differently, and why? Peter Boettke and Benjamin Powell suggest some answers to that question in "The political economy of the COVID‐19 pandemic" (Southern Economic Journal, April 2021, pp. 1090-1106). Their paper leads off a symposium on the topic. I'll list all the papers in the symposium below. I'm told that they are all freely available online now, and for the next few weeks, so if you don't have library access to the journal, you might want to check them out sooner rather than later. Boettke and Powell write:
[F]rom the perspective of promoting overall societal well‐being, we believe that governments in the United States and around the world made significant errors in their policy response to the COVID‐19 pandemic. ... [A] political economy perspective challenges the assumptions of omniscience and benevolence of all actors—politicians, regulators, scientists, and members of the public—in response to the pandemic. We live in an imperfect world, populated by imperfect beings, who interact in imperfect institutional environments ...
What are some ways in which pandemic policies based in micro theory and welfare economics might differ from the policies actually used? The potential answers seem to me both of interest in themselves and a good live subject for classroom discussions and writing exercises.

For example, when discussing how policymakers should respond to negative externalities, a general principle is that there is a wide array of possible responses, and the least-cost response should be selected. If one thinks of society as divided into the elderly and non-elderly, for example, it seems plausible that the lowest social cost response to COVID-19 would involve restrictions on the elderly. Boettke and Powell write:

The activities of the young and healthy impose a negative health externality on the old and infirm. But it is equally true that if the activities of the young are restricted because of the presence of the old and infirm, this latter group has imposed a negative externality on the young and healthy. If transactions costs were low, the Coase theorem would dictate that it would not matter to which party the rights to activity or restriction were assigned, as bargaining would reach the efficient outcome. However, in the case of COVID‐19, and large populations, it is quite clear that transactions costs of bargaining would be prohibitive. Thus, the standard law and economics approach would recommend assigning rights such that the least cost mitigator bears the burden of adjusting to the externality. In the case of COVID‐19, it is clear that the low opportunity cost mitigators are the old and infirm. Thus, Coasean economics would recommend allowing the activities of the young and healthy to impose externalities on the old and infirm, not the other way around. Lockdowns and stay at home orders get the allocation of rights exactly backwards and result in large inefficiencies because costs are disproportionately borne by the high cost mitigators.
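The least-cost-mitigator argument is, at bottom, a cost comparison. A toy sketch with invented numbers (mine, not Boettke and Powell's) makes the structure explicit:

```python
# Toy least-cost-mitigator comparison; both dollar figures are invented
# for illustration and carry no empirical weight.
mitigation_costs = {
    "restrict the young and healthy (lockdowns)": 500e9,  # assumed
    "shield the old and infirm (targeted)":       100e9,  # assumed
}

# The standard law-and-economics rule: assign the burden of adjustment
# to whichever party can mitigate the externality at lower cost.
least_cost = min(mitigation_costs, key=mitigation_costs.get)
print(f"assign the burden to: {least_cost}")
```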
Another common insight from economics is that those closest to the externality typically know the most about how to respond. In the case of pollution control, for example, there is a standard argument for using a pollution tax or marketable pollution permits, rather than trying to draw up command-and-control rules for  every smokestack or pollution source. Have those creating pollution bear the cost, and they will have an incentive to find ways to reduce those costs. 

Of course, the response of most states and localities to COVID-19 was very much a command-and-control response, with extensive and ever-changing rules about outdoors and indoors, about restaurants, parks, and churches, about what businesses or schools could be open under what conditions. As the authors write: "The thousands upon thousands of varied restrictions are too numerous and diverse for us to comprehensively categorize here. But their sheer number and variability make it obvious that these command and control regulations are not in any way promoting a cost minimizing form of transmission mitigation." The alternative might have been to categorize activities according to their chance of spreading COVID-19, and then impose a tax for participating in such activities.
The marginal costs of reducing risk‐generating activities are really just the inverse of the subjective marginal benefits of engaging in myriad social interactions in the market place, civil society, families, politics, religious communities, and recreation. No regulator is going to know the value of these diverse activities to those engaged in them. Economists have long appreciated that, in the presence of heterogenous mitigation costs, command and control regulation of much simpler pollution mitigation is less efficient than a pollution tax, because firms know their mitigation costs better than regulators. That informational asymmetry between the economist regulator and people regulated is even greater in this case. Thus, an efficiency maximizing economist policy advisor would recommend leaving people free to choose activities for themselves, while imposing a tax on activities set to reduce the marginal benefit of engaging in activities, proportional to increased risk of COVID‐19 transmission.
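The authors' tax alternative is standard Pigouvian logic: price each activity's transmission risk and let individuals decide which activities are still worth doing. A minimal sketch, with the activities, private benefits, and external costs all invented for illustration:

```python
# Minimal Pigouvian-tax sketch; every number here is an assumption.
activities = {
    #  activity:                 (private benefit, external cost), in dollars
    "outdoor walk with a friend": (30, 1),
    "indoor restaurant dinner":   (60, 45),
    "large indoor concert":       (80, 120),
}

for name, (benefit, external_cost) in activities.items():
    tax = external_cost           # Pigouvian rule: tax equals marginal external cost
    participates = benefit > tax  # each person weighs private benefit against the tax
    # With the tax set to the external cost, the private decision matches
    # the socially efficient one (participate only if benefit exceeds total cost).
    print(f"{name}: tax ${tax}, participates: {participates}")
```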
Another policy option would be for the government to subsidize activities that would reduce the spread of the externality: for example, "government funding to expand hospital capacity and the purchase of supplies and equipment, and research funding to speed the discovery of new medical treatments and vaccines. They could also include removal of regulatory barriers that impede medical capacity and the development of medicines and vaccines. Unlike efficient policies related to the mitigation activities that risk disease transmission, governments have undertaken these policies to varying degrees." 

But the interesting observation here is that the size of government activities that focused directly on reducing the disease was dwarfed by the size of payments the government made to affected individuals and businesses. For example, the government put $10 billion into the Warp Speed program to produce vaccines and guarantee that certain volumes would be purchased, but has spent trillions of dollars--more than a hundred times more--on payments that do not directly reduce the risk of transmission. 

A final example involves decisions about who would get the vaccine first. For example, should it go to "essential workers"? Or to the elderly or those with greater vulnerability to the disease? Who defines these groups? Will lotteries be involved at certain stages? By the time all the rules are argued over, spelled out, and then enforced, an obvious question (to economists) is whether a more flexible and market-oriented system might work better. The authors write:
Even if policymakers cared more about the welfare of the people that guidelines currently prioritize for vaccination, they could design policy better than the CDC guidelines by allocating a re‐sellable right to receive the vaccination, rather than the vaccination itself. Those prioritized individuals who resell the right will, through their actions, indicate that they are even better off, and the transfers of the right to higher value vaccinators would promote greater efficiency too. No politicians are considering such policies.
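The re-sellable right works through ordinary gains from trade: if someone outside the priority group values early vaccination more than a prioritized holder does, any price between the two valuations leaves both better off. A toy calculation, with both valuations invented for illustration:

```python
# Toy gains-from-trade calculation for a re-sellable vaccination right;
# both valuations are assumptions, chosen only to show the mechanism.
prioritized_holder_value = 200   # assumed value of early vaccination to the holder
outside_buyer_value = 1_000      # assumed value to a non-prioritized person

if outside_buyer_value > prioritized_holder_value:
    # Any price strictly between the two valuations is a Pareto improvement.
    price = (prioritized_holder_value + outside_buyer_value) / 2
    print(f"trade at ${price:.0f}: seller gains ${price - prioritized_holder_value:.0f}, "
          f"buyer gains ${outside_buyer_value - price:.0f}")
```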
What's interesting to me is not that the economics answers here are obviously "right"--one can certainly point out tradeoffs that would be involved--but that the tradeoffs were barely noticed or discussed as real options. Boettke and Powell point out some underlying issues here of political economy. For example, public health officials "[a]re not necessarily untruthful, but they will be biased against committing an error of over optimism—no forecast or treatment protocol or vaccine will be championed that underestimates the downside risk. Better for them to commit errors of over‐pessimism."

The combination of media and public attention in the social media age does not seem predisposed to calm consideration of tradeoffs, either. Instead, tradeoffs are typically presented as involving "good people," who are judged leniently, and "bad people," who are judged harshly. The authors write:
One implication is that fair and balanced reporting may be too boring to grab the attention of the median listener/viewer/reader. Rather than nuanced and subtle discussion of trade‐offs, and the calm calculation of risk, we get extreme projections of nothing here or catastrophe awaits. And, of course, those incentives for attracting an audience have grown more intense in the last decade with traditional print media competing with online sources. ... 

Both politicians and the mainstream media have kept much of the populace in such an alarmed state throughout the pandemic, which has allowed both paternalistic interventions and created bottom up parentalist demands for such interventions, which have nothing to do with efficiently correcting a market failure.
Here's the full Table of Contents for the symposium. Again, I'm told that all the articles will be open access for the next few weeks:

Is the Pandemic Worse in Lower- or Higher-Income Countries?

It seems obvious that the COVID-19 pandemic must be worse in lower-income countries. After all, it seems as if the opportunities for social distancing must be lower in urban areas in those countries, and the resources for everything from protective gear to hospital care must be lower. There are certainly cases where the pandemic has hit some areas hard outside of high-income countries: for example, the current situation in India, or the city of Manaus in Brazil, which suffered a first wave and then a second wave with a new variant of COVID.

But that said, Angus Deaton (Nobel '15) makes a case that the areas outside the high-income countries of the world have, as a group, been less affected by the pandemic in "Covid-19 and Global Income Inequality" (Milken Institute Review, Second Quarter 2021, pp. 24-35). As a starting point for his argument, consider this figure, which shows countries with higher per capita income have tended to have higher per capita COVID-19 deaths. 

Deaton discusses this figure from a variety of angles, including the possibility that COVID-19 is less well-measured in lower-income countries. But he argues that a number of other factors may help to explain the pattern of higher COVID-19 deaths in higher-income countries.

The low number in low-income countries has been linked by Pinelopi Goldberg and Tristan Reed to (the lack of) obesity, to the smaller fraction of the population over 70 and to the lower density of population in the largest urban centers.

Another alternative is to focus on demography. Patrick Heuveline and Michael Tzen provide age-adjusted mortality rates for each country by using country age-structures to predict what death rates would have been if the age-specific Covid-19 death rates had been the same as the U.S. The ratio of predicted deaths to actual deaths is then used to adjust each country’s crude mortality rate. This procedure scales up mortality rates for countries that are younger than the U.S. (Peru has the highest age- and sex-adjusted mortality rate) and scales down mortality rates for countries like Italy and Spain (which had the highest unadjusted rate) that are older than the U.S.

If Figure 1 were redrawn using the adjusted rates, the positive slope would remain, though the slope showing the relationship between death rates and income would be reduced from 0.99 to 0.47 — that is, the relationship would hold but would be less pronounced. ...

[P]oor countries are also warmer countries, where much activity takes place outside, and there are relatively few large, dense cities with elevators and mass transit to spread the virus. It is also possible that Africa’s long-standing experience with infectious epidemics stood it in good stead during this one. People in countries with more-developed economies consume a higher fraction of income in the form of personal services, which makes infection easier.
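The Heuveline-Tzen adjustment Deaton describes is a form of indirect age standardization. Here is a minimal sketch of one standard way to carry it out; the age brackets, rates, population counts, and death toll below are invented purely for illustration:

```python
# Sketch of indirect age standardization, the kind of adjustment Deaton
# attributes to Heuveline and Tzen. All numbers below are assumptions.
us_rate_by_age = {"0-39": 0.0001, "40-69": 0.001, "70+": 0.01}  # assumed U.S. age-specific rates
pop_by_age     = {"0-39": 30e6,   "40-69": 10e6,  "70+": 1e6}   # a hypothetical "young" country
actual_deaths  = 60_000                                          # assumed observed deaths

# Deaths this country would have had if it faced U.S. age-specific rates:
expected_deaths = sum(pop_by_age[a] * us_rate_by_age[a] for a in pop_by_age)

# A ratio above 1 means the country fared worse than its age structure alone
# would predict; this is the factor that scales mortality up for countries
# younger than the U.S. (like Peru in Deaton's discussion) and down for
# older ones (like Italy and Spain).
ratio = actual_deaths / expected_deaths
print(f"expected deaths under U.S. rates: {expected_deaths:,.0f}")
print(f"adjustment ratio (actual/expected): {ratio:.2f}")
```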
Deaton further argues that countries with higher death rates, as shown in the figure above, have also tended to have worse economic outcomes. 

All analysis of the pandemic is, as yet, incomplete. Deaton's data goes through the end of 2020. Just as India has recently been clobbered by the pandemic, something similar could happen in other countries. Furthermore, in the poorest countries of the world, even a smaller loss of income may cause extreme human suffering. 

But other possible lessons here are that, just perhaps, the pandemic did not make the world a more economically unequal place. Moreover, having lower deaths from the pandemic appears to be a good way of bolstering a country's economy.

Tuesday, May 11, 2021

The Slow Magic from Agricultural R&D

For much of human history, a majority of people have worked in agriculture. In the countries of sub-Saharan Africa, about half of all workers are currently in agriculture--more in lower-income countries. The process of raising the overall standard of living requires a rise in agricultural productivity, so that a substantial share of workers can shift away from agriculture, and thus be able to work in other sectors of the economy. In turn, rises in agricultural productivity are typically driven by research and development, which has been lagging. Julian M. Alston, Philip G. Pardey, and Xudong Rao make the case in "Rekindling the Slow Magic of Agricultural R&D" (Issues in Science and Technology, May 3, 2021).

The authors discuss CGIAR, which stands for Consultative Group on International Agricultural Research. This system was started in 1971. The authors note: "The CGIAR was conceived to play a critical role, working in concert with national agricultural research systems of low- and middle-income countries, to develop and distribute farm technologies to help stave off a global food crisis. The resulting Green Revolution technologies were adapted and adopted throughout the world, first and foremost in South Asia and parts of sub-Saharan Africa and Latin America where the early centers of the CGIAR were located. In 2019 the CGIAR spent $805 million on agricultural R&D to serve the world’s poor, down by 30% (adjusted for inflation) from its peak of over $1 billion in 2014 ... "

For perspective, total public and private spending by low-income countries on agricultural R&D is roughly equal to what is spent through CGIAR. The payoff from this spending has been on the order of 10:1. 
The CGIAR research record has been much studied, but questions remain about the past and prospective payoffs to the investment. Similar questions have been raised about public investments in the agricultural research systems of various nations—particularly those of poor countries that receive substantial development aid from richer countries. To address those questions, we conducted a comprehensive meta-analysis of more than 400 studies published since 1978 that looked at rates of return on agricultural research conducted by public agencies in low- and middle-income countries. Of that total, 78 studies reported rates of return for CGIAR-related research and 341 studies reported rates of return for non-CGIAR agricultural research. (Full details of the meta-analysis are online at supportagresearch.org.) ...

 Across 722 estimates, the median ratio of the estimated research benefits to the corresponding costs was approximately 10:1 for both the CGIAR (170 estimates) and national agricultural research systems of developing countries (522 estimates). In other words, $1 invested today yields, on average, a stream of benefits over future decades equivalent to $10 (in present value terms). ...  Notably, all these estimated benefits accrued in developing countries, home to the preponderance of the world’s food poor. And yet, rich donor countries also reap benefits by adopting technologies developed by CGIAR research—“doing well by doing good.” For example, the yield- and quality-enhancing traits bred into new wheat and rice varieties destined for developing countries are also incorporated into most varieties used by rich-country farmers.
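To see what a 10:1 benefit-cost ratio means in flow terms, one can back out the constant annual benefit whose present value equals $10 per $1 invested. The discount rate and horizon here are illustrative assumptions of mine, not taken from the study:

```python
# Back-of-the-envelope reading of the 10:1 benefit-cost ratio. The 5%
# discount rate and 30-year horizon are assumed for illustration.
r, years = 0.05, 30
target_pv = 10.0   # $10 of present-value benefits per $1 invested

# Constant annual benefit whose discounted sum over the horizon hits the target:
annuity_factor = sum(1 / (1 + r) ** t for t in range(1, years + 1))
b = target_pv / annuity_factor
print(f"required annual benefit per $1 invested: ${b:.2f} for {years} years")
```

Under these assumptions, $1 of research spending would need to generate only about 65 cents of benefits per year over three decades to justify the 10:1 figure, which helps explain why the authors find the case for funding so compelling.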
But as noted earlier, CGIAR funding is down 30% in the last few years. Also, I was surprised to notice that the Gates Foundation alone accounts for more than one-eighth of the entire CGIAR budget.

We are talking here about quantities measured in hundreds of millions of dollars--not even a single billion, much less the trillions that are being discussed in various pandemic-relief programs. The benefits of agricultural R&D seem enormous, but the world is not stepping up to the opportunity. 

Thursday, May 6, 2021

Interview with Matthew Jackson: Human Networks

David A. Price does an "Interview" with Matthew Jackson, with the subheading "On human networks, the friendship paradox, and the information economics of protest movements" (Econ Focus: Federal Reserve Bank of Richmond, 2021, Q1, pp. 16-20). Here are a few snippets of the conversation, suggestive of the bigger themes.

Homophily
[O]ne key network phenomenon is known among sociologists and economists as homophily. It's the fact that friendships are overwhelmingly composed of people who are similar to each other. This is a natural phenomenon, but it's one that tends to fragment our society. When you put this together with other facts about social networks — for instance, their importance in finding jobs — it means many people end up in the same professions as their friends and most people end up in the communities they grew up in.

From an economic perspective, this is very important, because it not only leads to inequality, where getting into certain professions means you almost have to be born into that part of society, it also means that then there's immobility, because this transfers from one generation to another. It also leads to missed opportunities, so people's talents aren't best matched to jobs.
The Friendship Paradox
This concerns another network phenomenon, which is known as the friendship paradox. It refers to the fact that a person's friends are more popular, on average, than that person. That's because the people in a network who have the most friends are seen by more people than the people with the fewest friends.

On one level, this is obvious, but it's something that people tend to overlook. We often think of our friends as sort of a representative sample from the population, but we're oversampling the people who are really well connected and undersampling the people who are poorly connected. And the more popular people are not necessarily representative of the rest of the population.

So in middle school, for example, people who have more friends tend to have tried alcohol and drugs at higher rates and at earlier ages. And this distorted image is amplified by social media, because students don't see pictures of other students in the library but do tend to see pictures of friends partying. This distorts their assessment of normal behavior.

There have been instances where universities have been more successful in combating alcohol abuse by simply educating the students on what the actual consumption rates are at the university rather than trying to get them to realize the dangers of alcohol abuse. It's powerful to tell them, "Look, this is what normal behavior is, and your perceptions are actually distorted. You perceive more of a behavior than is actually going on."
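The friendship paradox is easy to verify by simulation: in any network where the number of friends varies across people, the average popularity of sampled friends exceeds the average popularity of people, because well-connected people appear on many friend lists. A minimal sketch:

```python
import random

# Build a small random friendship network as an adjacency structure.
random.seed(0)
n, p = 200, 0.05
friends = {i: set() for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p:
            friends[i].add(j)
            friends[j].add(i)

degree = {i: len(friends[i]) for i in range(n)}
avg_degree = sum(degree.values()) / n

# For each person with at least one friend, the mean popularity of their friends.
friend_avgs = [sum(degree[j] for j in friends[i]) / degree[i]
               for i in range(n) if degree[i] > 0]
avg_friend_degree = sum(friend_avgs) / len(friend_avgs)

print(f"average number of friends:       {avg_degree:.2f}")
print(f"average degree of one's friends: {avg_friend_degree:.2f}")  # typically larger
```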
Causality in Networks
Establishing causality is extremely hard in a lot of the social sciences when you're dealing with people who have discretion over with whom they interact. If we're trying to understand your friend's influence on you, we have to know whether you chose your friend because they behave like you or whether you're behaving like them because they influenced you. So to study causation, we often rely on chance things like who's assigned to be a roommate with whom in college, or to which Army company a new soldier is assigned, or where people are moved under a government program that's randomly assigning them to cities. When we have these natural experiments that we can take advantage of, we can then begin to understand some of the causal mechanisms inside the network.
Live Protests vs. Social Media
[I]t's cheap to post something; it's another thing to actually show up and take action. Getting millions of people to show up at a march is a lot harder than getting them to sign an online petition. That means having large marches and protests can be much more informative about the depth of people's convictions and how many people feel deeply about a cause.

And it's informative not only to governments and businesses, but also to the rest of the population who might then be more likely to join along. There are reasons we remember Gandhi's Salt March against British rule in 1930 or the March on Washington for Jobs and Freedom in 1963. This is not to discount the effects that social media postings and petitions can have, but large human gatherings are incredible signals and can be transformative in unique ways because everybody sees them at the same time together with this strong message that they convey.
If you would like more Jackson, one starting point is his essay in the Fall 2014 issue of the Journal of Economic Perspectives, "Networks in the Understanding of Economic Behaviors." The abstract reads:
As economists endeavor to build better models of human behavior, they cannot ignore that humans are fundamentally a social species with interaction patterns that shape their behaviors. People's opinions, which products they buy, whether they invest in education, become criminals, and so forth, are all influenced by friends and acquaintances. Ultimately, the full network of relationships—how dense it is, whether some groups are segregated, who sits in central positions—affects how information spreads and how people behave. Increased availability of data coupled with increased computing power allows us to analyze networks in economic settings in ways not previously possible. In this paper, I describe some of the ways in which networks are helping economists to model and understand behavior. I begin with an example that demonstrates the sorts of things that researchers can miss if they do not account for network patterns of interaction. Next I discuss a taxonomy of network properties and how they impact behaviors. Finally, I discuss the problem of developing tractable models of network formation.