Wednesday, April 28, 2021

Electrification of Everything: The Transmission Lines Challenge

Reducing carbon emissions will require a number of different intertwined steps, and one of them is commonly referred to as the "electrification of everything." Basically, the more that society can rely on electricity generated by low-carbon or carbon-free methods for its energy needs, the more it can turn away from burning fossil fuels. In turn, this policy agenda will require a vast expansion of high-voltage electricity transmission lines, especially if a substantial share of that electricity is generated from solar and wind. If there is more reliance on intermittent sources of electricity, there is also more need to ship electricity from place to place--more need for what is sometimes called a National Supergrid. 

However, the prospect of doubling the number of long-distance electricity transmission lines (or perhaps more than doubling it) poses a classic problem of political economy. Under current law, decisions about allowing pathways for high-voltage lines are typically made at the state or county level. Local decision-makers don't have much incentive to take into account the broad social benefits of a widespread network of electricity lines that cross state and county boundaries. Thus, it has become politically very hard to expand the existing network. An obvious possible answer is to give more power to a federal-level authority to grant permissions. But where this has been done--say, in building and updating natural gas pipelines--the process has often proven highly controversial and has not always resulted in pipelines being built. 

Liza Reed provides an overview of these issues in "Transmission Stalled: Siting Challenges for Interregional Transmission" (Niskanen Center April 2021). She writes (footnotes omitted): 

The electricity sector is expected to change dramatically to meet decarbonization goals, with some pathways showing demand doubling or more as cars, households, and industry are increasingly electrified. This will require similar expansion in transmission capacity to serve increasing demand. ...

Under the current system of planning and permitting, high-voltage interstate transmission lines take eight to ten years on average to complete, if they succeed at all. Four years or more of that timeline is absorbed by the regulatory hurdles, particularly siting the lines and acquiring the permits and land rights to build.

Transmission lines that traverse multiple states must satisfy the requirements of all states along a planned route. The timelines for each state are different, as are the standards each state uses for the evaluation of public convenience and necessity. Some states require a developer to be a recognized utility provider within the state, an arguably anachronistic requirement. In other states the siting process is handled at the county level, placing an even higher regulatory burden on transmission developers. High-voltage transmission lines often provide their highest overall value to the system as a whole, and may only provide modest benefits to a particular state. It can be difficult or impossible for developers of these lines to convince multiple states that the benefits are enough. Oftentimes developers choose not to pursue these projects at all. The national transmission system, which could be the backbone of our electricity system and decarbonization efforts, suffers as a result.

A 2016 review of transmission projects by the Lawrence Berkeley National Lab identified permitting as one of the top four factors affecting transmission projects.  Referring to multi-state siting and permitting, the report notes: “Regardless of which process concludes first, the process that concludes last determines when construction can be completed.”

Reed discusses multiple attempts to extend transmission lines so that electricity (including wind energy) can be shipped across states--attempts that have been blocked for years by state and local politics and court decisions. She also points out that incumbent energy firms may not favor expansions of long-distance transmission lines, because such lines raise the level of competition they face from electricity generated in other locations. Regional planners may also want to support local sources of energy, and thus oppose closer ties to energy generated outside their region. 

There have been some attempts to designate "national interest electric transmission corridors" (NIETCs), where it would be faster and easier to get permission to build additional electricity transmission lines, but these have also been contested and blocked by local authorities and courts. 

I've got no magic solution here. Local control tends to block the needed national expansion. Moving authority to the federal level--say, to the Federal Energy Regulatory Commission--would in some cases inevitably produce decisions opposed to local desires; indeed, the reason for putting greater authority at the federal level would be to override local desires in some cases. Reed provides an honest discussion of some problems that have come up in the case of FERC's greater power to speed the permitting process for natural gas pipelines.

Natural gas pipeline infrastructure does not face the same siting challenges. The Natural Gas Act grants siting authority to FERC for interstate natural gas pipelines. The average permitting time is 18 months, less than half of the average interstate transmission permitting time. This single, central authority, in which FERC sites and permits lines and coordinates environmental reviews, is why the United States was able to respond quickly to the shale gas boom. ...

When considering reforms for transmission infrastructure, policy makers should consider how expansive FERC siting authority under the Natural Gas Act has disadvantaged private citizens and landowners. In practice, FERC provides limited notice of landowners’ rights, limited notice of applications for natural gas lines, and little meaningful access for impacted landowners. FERC delegates its statutory and constitutional obligations to provide notice to landowners to pipeline companies, and fails to confirm that such notice was actually provided. ...  Indeed, FERC establishes ad hoc timelines rather than a fixed time for intervention, and there are examples of FERC providing landowners with at least three inconsistent and contradictory sets of instructions for intervening. This has resulted in landowners being given as little as 13 days to intervene in proceedings whose purpose is to take their property. Though the practice has been recently rejected by the U.S. Court of Appeals for the D.C. Circuit, FERC has a long history of indefinitely delaying landowner rehearings (and thereby delaying landowners’ access to judicial review) by what are colloquially known as “tolling orders,” which prevented landowners from challenging FERC’s decision.

FERC’s record reveals other problems, too. FERC can issue “conditioned certificates” allowing eminent domain, even though the pipeline in question has not, and may never, obtain other required permits. With a FERC certificate in hand, courts currently will grant pipelines so-called quick-take possession of property, whereby a company takes land prior to remuneration, removing an incentive for the company to reimburse landowners on a reasonable timeline. FERC also establishes conditions on how companies construct pipelines and protect the remainder of landowners’ property, but the agency consistently fails to respond to any landowner complaints regarding violations. These practices allow for takings and destruction of private land in absence of oversight and without a fully permitted project. What’s more, if the project never gets built, or a court finds that the certificate was invalid, the pipeline company gets to keep the easements obtained from landowners, including all perpetual land use restrictions, however irrelevant in the absence of a pipeline.

Again, I have no magic solution to balance the competing interests here. But I will say that if you are a strong proponent of solar and wind power, basic consistency requires that you also favor a vast and well-coordinated expansion of long-distance electricity transmission lines, with the associated commitments of physical resources and land, as well as the need sometimes to override local interests. As Reed writes: 

Recent studies from MIT, Princeton, and NREL [National Renewable Energy Laboratory] demonstrate that interstate lines and interregional coordination are critical to achieving a cost-effective grid. Clear and consistent rules and metrics, which can only come from a single governing agency, would allow transmission developers, utilities, and generators to unlock the clean energy resources available across the nation.

For some other recent posts about the future of US electricity generation and transmission, see: 

Monday, April 26, 2021

Amazon and Value Creation: A Bezos Farewell

Jeff Bezos is stepping down from daily management tasks as chief executive officer of Amazon, the company he founded in 1994, although he will continue to be involved in the company as executive chairman of the board. Earlier this month, Bezos wrote his last annual letter to company shareholders. A main focus of the letter is on how Amazon creates "value." 
 
Of course, for economists one measure of value is the total value of Amazon's stock, which now stands at about $1.6 trillion (and Bezos owns about one-eighth of that). But his letter focuses on the most recent year. He writes: 
Last year, we hired 500,000 employees and now directly employ 1.3 million people around the world. We have more than 200 million Prime members worldwide. More than 1.9 million small and medium-sized businesses sell in our store, and they make up close to 60% of our retail sales. Customers have connected more than 100 million smart home devices to Alexa. Amazon Web Services serves millions of customers and ended 2020 with a $50 billion annualized run rate.
During 2020, Amazon had net income of $21.3 billion. Bezos adds: 
In 2020, employees earned $80 billion, plus another $11 billion to include benefits and various payroll taxes, for a total of $91 billion.

How about third-party sellers? We have an internal team (the Selling Partner Services team) that works to answer that question. They estimate that, in 2020, third-party seller profits from selling on Amazon were between $25 billion and $39 billion, and to be conservative here I’ll go with $25 billion. ...
Customers complete 28% of purchases on Amazon in three minutes or less, and half of all purchases are finished in less than 15 minutes. Compare that to the typical shopping trip to a physical store – driving, parking, searching store aisles, waiting in the checkout line, finding your car, and driving home. Research suggests the typical physical store trip takes about an hour. If you assume that a typical Amazon purchase takes 15 minutes and that it saves you a couple of trips to a physical store a week, that’s more than 75 hours a year saved. That’s important. We’re all busy in the early 21st century. So that we can get a dollar figure, let’s value the time savings at $10 per hour, which is conservative. Seventy-five hours multiplied by $10 an hour and subtracting the cost of Prime gives you value creation for each Prime member of about $630. We have 200 million Prime members, for a total in 2020 of $126 billion of value creation. ...
AWS [Amazon Web Services] is challenging to estimate because each customer’s workload is so different, but we’ll do it anyway, acknowledging up front that the error bars are high. Direct cost improvements from operating in the cloud versus on premises vary, but a reasonable estimate is 30%. Across AWS’s entire 2020 revenue of $45 billion, that 30% would imply customer value creation of $19 billion (what would have cost them $64 billion on their own cost $45 billion from AWS). The difficult part of this estimation exercise is that the direct cost reduction is the smallest portion of the customer benefit of moving to the cloud. The bigger benefit is the increased speed of software development – something that can significantly improve the customer’s competitiveness and top line. We have no reasonable way of estimating that portion of customer value except to say that it’s almost certainly larger than the direct cost savings. To be conservative here (and remembering we’re really only trying to get ballpark estimates), I’ll say it’s the same and call AWS customer value creation $38 billion in 2020.
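The arithmetic behind these back-of-the-envelope estimates is easy to check. Here is a minimal sketch; note that the $119 annual Prime fee is my assumption, since the letter only says "subtracting the cost of Prime":

```python
# Back-of-the-envelope check of the value-creation arithmetic in the letter.
# The $119/year Prime fee is an assumption; the letter does not state the fee.

# Prime time savings
hours_saved = 75              # hours saved per member per year
value_per_hour = 10           # conservative valuation, $/hour
prime_fee = 119               # assumed annual Prime fee (USD)
per_member = hours_saved * value_per_hour - prime_fee   # ~$631, "about $630"
prime_total = per_member * 200e6 / 1e9                  # ~$126 billion

# AWS direct cost savings
aws_revenue = 45e9            # 2020 AWS revenue
cost_cut = 0.30               # assumed direct cost improvement vs. on-premises
on_prem_cost = aws_revenue / (1 - cost_cut)             # ~$64 billion
direct_savings = (on_prem_cost - aws_revenue) / 1e9     # ~$19 billion
aws_total = 2 * direct_savings                          # doubled: ~$38-39 billion

print(per_member, round(prime_total), round(on_prem_cost / 1e9))
```

The numbers reproduce the letter's figures to within rounding, which is all these "ballpark estimates" claim.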
I'm sure one can tinker with these estimates in a variety of ways, and combining wages paid to employees with time saved by consumers mixes conceptually different categories of "value." One could also expand this list in various ways: for example, there is value to consumers (especially consumers who may not live close to lots of other retail options) in the extreme variety of products readily available via Amazon. 

But my goal here is not to fine-tune the estimates, but to make a general point worth noticing. The value of Amazon's profits in a given year is much, much less than the value created by the company in other ways: wages, facilitating sales by third-party firms, time savings for consumers, and so on. 

These gains didn't just happen. Building an interactive website that works at large scale is a monumental task. As one contrasting example among many, think about the issues that arose in trying to build websites for buying health insurance in the aftermath of the Patient Protection and Affordable Care Act of 2010, or think about the computer network problems of the Internal Revenue Service. Yes, it's plausible that if Bezos had never started Amazon, some other company would have emerged from the dot-com scrum of the late 1990s. However, Bezos led the company that actually did it. Whether you are a fan or detractor of Amazon, the sheer size and scope of what has been built commands attention. 

Of course, when a top executive at a big company is writing to shareholders, the emphasis tends to be on the good news. I never want to deify any company. There are lots of tough real-world questions about Amazon: How well does the firm treat its workers? How is the firm using data collected from customers and searches? Has the firm taken advantage of its platform not just to act as a tough competitor, but also to block competition from others? How is Amazon, both domestically and abroad, interacting with the US corporate tax code? What have been the tradeoffs of Amazon's success for bricks-and-mortar retailers? 

But asking reasonable questions is different from being a doomsayer. Especially during the pandemic, Amazon has made my life easier. As one example, I'm a person who has a visceral need for new reading material. The ability during the pandemic to "go to" the local public library online, 24/7, and download books to my Kindle e-reader has saved me money and helped keep me sane. 

Sunday, April 25, 2021

Nine Principles of Policing from 1829

In the early 19th century, various cities in Scotland and Ireland had established their own police forces. Sir Robert Peel is typically credited with the lead role in bringing a police force to London via the passage of the Metropolitan Police Act of 1829. Indeed, the early London police were often called "peelers." 

Either Peel or the early commissioners of the London police force wrote down nine principles of policing, which have been fairly well known ever since to police everywhere. Here are the "9 Policing Principles" as listed at the website of the Law Enforcement Action Partnership: 
  1. To prevent crime and disorder, as an alternative to their repression by military force and severity of legal punishment.
  2. To recognize always that the power of the police to fulfill their functions and duties is dependent on public approval of their existence, actions and behavior, and on their ability to secure and maintain public respect.
  3. To recognize always that to secure and maintain the respect and approval of the public means also the securing of the willing cooperation of the public in the task of securing observance of laws.
  4. To recognize always that the extent to which the cooperation of the public can be secured diminishes proportionately the necessity of the use of physical force and compulsion for achieving police objectives.
  5. To seek and preserve public favor, not by pandering to public opinion, but by constantly demonstrating absolute impartial service to law, in complete independence of policy, and without regard to the justice or injustice of the substance of individual laws, by ready offering of individual service and friendship to all members of the public without regard to their wealth or social standing, by ready exercise of courtesy and friendly good humor, and by ready offering of individual sacrifice in protecting and preserving life.
  6. To use physical force only when the exercise of persuasion, advice and warning is found to be insufficient to obtain public cooperation to an extent necessary to secure observance of law or to restore order, and to use only the minimum degree of physical force which is necessary on any particular occasion for achieving a police objective.
  7. To maintain at all times a relationship with the public that gives reality to the historic tradition that the police are the public and that the public are the police, the police being only members of the public who are paid to give full-time attention to duties which are incumbent on every citizen in the interests of community welfare and existence.
  8. To recognize always the need for strict adherence to police-executive functions, and to refrain from even seeming to usurp the powers of the judiciary of avenging individuals or the State, and of authoritatively judging guilt and punishing the guilty.
  9. To recognize always that the test of police efficiency is the absence of crime and disorder, and not the visible evidence of police action in dealing with them.
I am fully aware that it's not 1829 anymore. But as one looks at the struggles of police forces across the country, it feels like time to restore and revivify the spirit behind a number of these principles. 

Friday, April 23, 2021

Stigler's Economic Theory of Regulation: The Semicentennial

I've found that the word "regulation" is a sort of Rorschach test onto which many people project their broader political beliefs. Some are deeply suspicious of any proposals that can be characterized as "deregulation," and predisposed to favor "regulation" even before knowing the details of a proposal. These people tend to begin with a belief that private market actors are almost always pushing up to and beyond the edge of what is good for society, and are thus comfortable with a presumption that government pushback in the form of regulation may help. Indeed, for the first three-quarters of the 20th century, during the rise of US regulatory agencies from near-zero to high prominence, this group was preeminent in how most academics and policy-makers thought about regulation. 

On the other side, another group is deeply suspicious of regulations, because they mistrust the ability of government either to diagnose problems with a market economy or to design solutions. Instead, they fear that government regulations often end up either supporting politically powerful interest groups or offering them loopholes. The patron saint of this second group is George J. Stigler, who 50 years ago published "The Theory of Economic Regulation" in the Bell Journal of Economics and Management Science (Spring 1971, 2:1, pp. 3-21, available via JSTOR and other places on the web). Stigler later won the 1982 Nobel prize "for his seminal studies of industrial structures, functioning of markets and causes and effects of public regulation."

In the 1971 essay, Stigler made the case for what is now called "regulatory capture." Imagine that the government is thinking about passing a certain set of regulations, and about an ongoing agency to enforce and interpret these regulations. Then ask yourself: Who has the most incentive to spend large amounts of money, time, and attention focused on every twist and turn, every subclause and comma, of these regulations? And to sustain this focus day after day, year after year? Stigler pointed out that politics and behind-the-scenes lobbying will play a big role in this process. Over time, the industries directly affected by regulation will have a strong incentive to play a prominent role in shaping regulations. Stigler writes: "A central thesis of this paper is that, as a rule, regulation is acquired by the industry and is designed and operated primarily for its benefit."

For those who would like a thorough review of the arguments for and against this theory, the Stigler Center for the Study of the Economy and the State at the University of Chicago held a webinar on the occasion of the 50th anniversary of Stigler's essay this last week, and four hours of video of the discussions are available.  Several of the participants have also published short essays available at the link. Here, I'll offer a brief sketch of the state of the argument. 

1) Stigler had a point. 

Stigler's 1971 essay offers lots of examples in which it seems plausible that regulation was being used by incumbents to stifle competition and thus to improve their own profits. As he points out, the Civil Aeronautics Board--which at the time set prices for all airline flights and decided which flights would be allowed--had "not allowed a single new trunk line to be launched since it was created in 1938." He cited studies that the Federal Deposit Insurance Corporation had "reduce[d] the rate of entry into commercial banking by 60 percent." Federal regulation of trucking led to a situation in which the number of licensed carriers was declining over time, despite thousands of annual applications for certificates to license additional truckers. Stigler wrote: "We propose the general hypothesis: every industry or occupation that has enough political power to utilize the state will seek to control entry."

The deregulation wave of the late 1970s and 1980s affected airlines, trucking and banking. But other examples mentioned by Stigler live on. Government regulations, typically at the state level, require licenses for about one-fourth of all US jobs. A well-developed body of evidence (often looking at what is regulated or unregulated across states) suggests that in many cases, these regulations are less about protecting the public than about limiting competition. It seems likely that the building trade unions have used building codes to hinder new cost-saving technologies, including factory-built homes.  Government regulations in education give large advantages to established colleges and universities, and to the public K-12 schools, while limiting entry of new competition. Rules to limit certain imports of goods from abroad are typically driven by domestic industries concerned about foreign competition.

Indeed, every time a supporter of government regulation bewails how a special interest has invaded the process and caused a desired regulation to be blocked or diluted, they are in effect channeling their inner Stigler-style skepticism about the practical reality of the regulatory process.  Are supporters of added government regulation at all surprised that big health insurance companies lobbied for the Patient Protection and Affordable Care Act of 2010 and benefited after its passage? Or that in the aftermath of new regulations to reduce the risk of government bank bailouts, big banks gained market share at the expense of smaller institutions? 

Whatever the actual shortcomings of the private sector, and whatever the theoretical case for corrective government regulation, Stigler-style skepticism offers a useful corrective about what regulations are actually enacted. As Filippo Maria Lancieri and Luigi Zingales write in their short essay accompanying the Stigler Center symposium: 

The 1887 Interstate Commerce Commission was the first government agency to regulate an important sector of the US economy. By the 1900’s, there were 10 federal agencies, employing 15,000 workers. By 2019, the number of agencies rose to 117, employing 1.4 million workers. The 20th century could easily be labeled the century of regulation. ...  Most likely, the 20th century would have also ended as the century of regulation if it were not for George Stigler.

2) Stigler probably overemphasized the role of pure regulatory capture, as opposed to regulatory problems created by poor information or ideological bias. 

In Andrei Shleifer's keynote address for the Stigler Center symposium, he makes the point that there can be lots of reasons why regulations have mixed or negative effects. It's not all about regulatory capture by the industry affected. As a recent example, Shleifer points to the waves of rules and regulations that have become prominent in the COVID-19 pandemic. There have been rules about masks, social distancing, and lockdowns. There were regulations about what kinds of COVID-19 tests could be sold or used. There were rules about what constituted adequate testing of vaccines. There were questions over whether to shut down the use of the Johnson & Johnson vaccine over the possibility of a heightened risk of blood clots. 

Shleifer's point is not to argue for or against specific rules, but to point out that, in general, differences of opinion on these rules were shaped by available knowledge (and ignorance), by beliefs about what risks were acceptable (or not), and by what messages the public was ready to hear (or not). In these regulations, and probably in many others as well, the key dividing lines are more about issues of knowledge and ideology than about a Stigler-style regulatory capture scenario. Of course, this doesn't make Stigler's insights irrelevant, but it does suggest that his "theory of regulation" focused on only one slice of the issues involved. 

3) Stigler's "theory of regulation" may overemphasize the potential for bad outcomes, to the extent that his 1971 article essentially fails to acknowledge potentially beneficial outcomes of regulation. 

Cass Sunstein makes this argument in his keynote address for the second day of the Stigler Center symposium, and also in a short article written to accompany the event.  He points to a number of specific regulations: for example, rules requiring accessibility to public areas for those with disabilities; or rules specifying the rights that airline passengers have when a flight is overbooked; or the rules that now require rear-view cameras in all motor vehicles; or rules that require the Post Office to collect data on certain packages arriving from overseas as part of the effort to reduce imports of opioids. With these and many other examples in mind, Sunstein writes: 

Stigler offered, but did not adequately defend, the proposition that “as a rule, regulation is acquired by the industry and is designed and operated primarily for its benefit.” That proposition is false. As a rule, regulation is not acquired by the industry, and it is not designed and operated for its benefit (primarily or otherwise). ... Surely there are such examples, but they are not “the general rule.” I conclude that the success and influence of Stigler’s argument owed a great deal, not to its accuracy, but to its iconoclasm, its sense of knowingness, its smarter-than-thou, cooler-than-thou cynicism (appealing to many), its mischieviousness, and its partial (!) truth.
Sunstein is fully aware of the politics behind regulation. But rather than defaulting to the conclusion that all regulations are captured by industry, he suggests instead a focus on why regulators hold the beliefs that they do. When asking whether regulators are right or wrong, Sunstein writes: 
But why, exactly, do they believe such things? There are two main answers. The first involves the information they receive: What do they learn, and from whom do they learn it? In some cases, “the industry” is relevant; in other cases, journalists matter; political parties, public interest groups, think tanks, and academics might matter as well. Some regulators live in echo chambers; others do not. In many cases, we might well be able to speak of “epistemic capture,” which occurs not when regulators are literally pressured (threatened or promised), but when what they believe to be true is only a subset of the truth, or not true at all. The second main answer involves the motivations of the regulators themselves. What do they want to believe, and what do they want to dismiss? ...  Understanding what people end up hearing and crediting, and also what they want to hear and credit, would enable us to make real progress in specifying the mechanisms that lead to regulation.

As Lancieri and Zingales wrote in their essay, "Stigler’s enduring legacy was opening the door for the political analysis of regulation." Perhaps today it seems obvious to everyone that political analysis of regulation is a useful and important task. But it wasn't obvious to everyone a half century ago. 

Thursday, April 22, 2021

International Trade and Economic Disruption in Context

Paul Romer (Nobel '18) offered a pithy aphorism a few years ago: "Everyone wants progress. Nobody wants change." But of course, the process of economic progress is inevitably lumpy. It doesn't smoothly affect everyone in the same way. International trade is one part of the process of economic progress, but nobody wants change. Adam S. Posen pushed back against the resulting dynamic in "The Price of Nostalgia: America's Self-Defeating Economic Retreat" (Foreign Affairs, May/June 2021). He writes: 

There is a popular notion that the United States has been sacrificing justice in the name of economic efficiency, and so it is time to correct the imbalance by stepping back from globalization. This is a largely false narrative. The United States has been withdrawing from the world economy for 20 years, and for most of that time, U.S. economic dynamism has been falling, and inequality in the country has risen more than it has in economies that were opening up. Workers are less mobile. Fewer businesses have been started. Corporate power has grown more concentrated. Innovation has slowed. Although many factors have contributed to this decline, it has likely been reinforced by the United States’ retreat from global economic exposure. 

There's a lot to reflect on in the article, and there are parts of it I would agree with more than others. Here, I want to emphasize two of the points that Posen makes. 

One is that there are a wide variety of reasons for economic disruption and job loss: new technology, domestic competition, lousy management, shifts in consumer preferences for goods and services, and many more. For a huge and well-diversified economy like the United States, with its enormous internal market, the disruptions related to international trade are a relatively small part of the picture. Posen writes: 

After much debate, economists have agreed on an upper-bound estimate of the number of U.S. manufacturing jobs that were lost as a result of Chinese competition after 1999: two million, at most, out of a workforce of 150 million. In other words, from 2000 to 2015, the China shock was responsible for displacing roughly 130,000 workers a year. That amounts to a sliver of the average churn in the U.S. labor market, where about 60 million job separations typically take place each year. Although approximately a third of those total job separations are voluntary in an average year, and others are due to individual circumstances, at least 20 million a year are due to business closures, restructurings, or employers moving locations. Think of the flight of jobs from inner cities or the displacement of secretarial and office workers due to technology—losses that, for the workers affected, are no different in terms of local impact and finality than the manufacturing job losses resulting from foreign competition. In other words, for each manufacturing job lost to Chinese competition, there were roughly 150 jobs lost to similar-feeling shocks in other industries. But these displaced workers got less than a hundredth of the public mourning.

An American who loses his job to Chinese competition is no more or less deserving of support than one who loses his job to automation or the relocation of a plant to another state. Many jobs are unsteady. The disproportionate outcry about the effect of Chinese trade ignores the experiences of the many more lower-wage workers who experience ongoing churn, and it forgets the way that previous generations of workers were able to adapt when they lost their jobs to foreign competition.
The other point is that the rest of the world is going ahead with globalization. Elsewhere in the world, export/GDP ratios have generally bounced back after declining during the Great Recession, but not in the US. Meanwhile, countries within the European Union are extremely open to trade with each other, and becoming more so. The European Union has expanded by 13 countries since 2000, and is signing trade agreements with the rest of the world. China is maintaining high involvement with the rest of the global economy, too. 

It's grimly amusing to me that many Americans who are quite supportive of social democratic policies in Scandinavian and other European countries often don't seem to agree with those countries when it comes to the importance of open trade. Posen writes: 

 Since World War II, the United States has approached international economic integration as something it encouraged others to do. Trade deals were framed as being about foreign countries opening their markets and reforming their economies through competition. For a long time, this narrative was largely true. It had the unfortunate effect domestically, however, of characterizing the United States as open and the rest of the world as protectionist. The competition that U.S. firms faced from abroad was seen as the result of unfair trade. Those perceptions have now outlasted the reality. It is the United States that needs foreign pressure and inspiration.

Wednesday, April 21, 2021

An Economics with Verbs, Not Just Nouns

W. Brian Arthur re-opens some old questions about the discipline of economics and the role of mathematics with fresh language in "Economics in Nouns and Verbs" (April 5, 2021, preprint at arXiv). Here's a flavor of his argument:

I will argue that economics, as expressed via algebraic mathematics, deals only in nouns—quantifiable nouns—and does not deal with verbs (actions), and that this has deep consequences for what we see in the economy and how we theorize about it. ...

Let me begin by pointing out that economics deals with prices, quantities produced, consumption, rates of interest, rates of exchange, rates of inflation, unemployment levels, trade surpluses, GDP, financial assets, Gini coefficients. These are all nouns. In fact, they are all quantifiable nouns—amounts of things, levels of things, rates of things. Economics as it is formally expressed is about amounts and levels and rates, and little else. This  statement seems simple, trite almost, but it is only obvious when you pause to notice it. Nouns are the water economics swims in.

Of course in the real economy there are actions. Investors, producers, banks, and consumers act and interact incessantly. They trade, explore, forecast, buy, sell, ponder, adapt, invent, bring new products into being, start companies. And these of course are actions—verbs. Parts of economics—economic history, or business reporting—do deal with actions. But in formal discourse about the economy, in the theory we learn and the models we create and the statistics we report, we deal not with verbs but with nouns. If companies are indeed started, economic models reflect this as the number of companies started. If people invest, models reflect this as the amount of investment. If central banks intervene, they reflect this by the quantity of intervention. Formal economics is about nouns and reduces all activities to nouns.

You could say that is its mode of understanding, its vocabulary of expression. Perhaps this is just a curiosity and doesn’t matter. And maybe it’s necessary that to be a science economics needs to deal with quantifiable objects—nouns. But other sciences heavily use verbs and actions. In biology DNA replicates itself, corrects copying errors in its strands, splits apart, and transfers information to RNA to express genes. These are verbs, all. Biology—modern molecular biology, genomics, and proteomics—is based squarely on actions. Indeed biology would be hard to imagine without actions—events triggering events, events inhibiting events. ... 
Any economist will have some immediate reactions here, several of which are anticipated by Arthur. 
The reader may object that mathematics in economics does use verbs: agents maximize; they learn and adapt; rank preferences; decide among alternatives; adjust supply to meet demand. But the verbs here are an illusion; algebraic mathematics doesn’t allow them, so they are quickly finessed into noun quantities. We don’t actually see agents maximizing in the theory; we see the necessary conditions of their maximizing expressed as noun-equations. We don’t see them learning; we see some parameter changing in value as they update their beliefs. We don’t see producers deciding levels of production; we see quantities determined via some optimizing rule. We don’t see producers responding to demand via manufacturing actions, we see quantities adjusting. It might appear that dynamics in the economy are an exception—surely they must contain verbs. But expressed in differential equations, they too are just changes in noun quantities. Verbs in equation-based theory require workarounds.
It should be said that Arthur has a point. The equations in economics often have a "black box" component--that is, you can't see what's going on inside. For example, firms in a standard economic model have a "production function," which shows that when certain levels of inputs are used, certain levels of outputs emerge. But on the subject of how production actually works, the equation is silent. 

More important, on the subject of how the process of production interacts with new technologies for production and offering new products to consumers, the production function is also silent. When talking about issues like the underlying causes of productivity growth in an economy, or issues like economic development of low- and middle-income countries, these black box production functions (at least in their basic versions) don't capture what's actually happening. 

So what is Arthur's alternative? As an economic theorist with extensive mathematical training, he suggests that economists open themselves up to an alternative kind of math--the mathematics of algorithms. He writes: 
The reason algorithms handle processes well is because each individual instruction, each step, can signal an action or process. Also, and here is where process enters par excellence, they allow if-then conditions. If process R has been executed 100 times, then execute process L; if not, then execute process H. Algorithms can contain processes that call or trigger other processes, inhibit other processes, are nested within processes, indeed create other processes. And so they provide a natural language for processes, much as algebra provides a natural language for noun-quantities. Frequently algorithms include equations, and so sometimes we can think of algorithmic systems as equation-based ones with if-then conditions. As such, algorithmic systems generalize equation-based ones, and they give us a new mode, a new language of expression in economics, although one that may look different from what we’re used to. ...

The world revealed here is not one of rational perfection, nor is it mechanistic. If anything it looks distinctly biological. Its agents are constantly acting and reacting within a situation—an “ecology” if you like—brought about by other agents acting and reacting. Algorithmic expression allows novel, unthought of behaviors, novel formations, structural change from within—it allows creation. It gives us a world alive, constantly creating and re-creating itself.
Arthur and others who work in "complexity economics" have been creating and working with these kinds of models for decades. As Arthur writes in this paper, such models can be viewed as "simulations" or "laboratory experiments"--that is, given certain starting points and behavioral rules, if you allow the algorithm to evolve many times, what kinds of outcomes are more or less likely to evolve? 
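To make the flavor of these algorithmic, verb-based models concrete, here is a minimal sketch of the sort of simulation Arthur has in mind. Everything in it is invented for illustration--the agents, the if-then rules, and the parameters are hypothetical, not drawn from Arthur's paper: agents act (forecast, buy, sell, adapt) via conditional steps rather than solved optimization conditions.

```python
import random

def run_market(steps=200, n_agents=20, seed=0):
    """Toy algorithmic market: agents follow if-then rules
    (verbs) rather than solving an optimization problem (nouns)."""
    rng = random.Random(seed)
    # Each agent holds a price forecast; start with dispersed beliefs.
    forecasts = [rng.uniform(5.0, 15.0) for _ in range(n_agents)]
    price = 10.0
    history = []
    for _ in range(steps):
        # Each agent decides to buy (+1) or sell (-1), conditional
        # on its own forecast: an if-then step, not an equation.
        demand = sum(1 if f > price else -1 for f in forecasts)
        # The price responds to excess demand.
        price += 0.05 * demand
        # Agents adapt: most nudge forecasts toward the realized price,
        # but occasionally one "innovates" with a fresh random belief.
        for i in range(n_agents):
            if rng.random() < 0.1:
                forecasts[i] = price + rng.gauss(0, 1.0)
            else:
                forecasts[i] += 0.2 * (price - forecasts[i])
        history.append(price)
    return history

history = run_market()
print(f"final price: {history[-1]:.2f}")
```

Running this many times with different seeds or parameters is the "laboratory experiment": one observes which price paths and belief ecologies tend to emerge, rather than deriving a single equilibrium.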

All this is fair enough. My own sense is that algorithmic methods can be especially useful in showing how seemingly mild and plausible rules can sometimes lead to unexpected and even disastrous outcomes, and how small changes in initial conditions or in underlying assumptions about behavior can lead to dramatic differences in how outcomes evolve. 

But that said, algorithms still involve reducing the real world to mathematical equations and still require specifying a set of assumptions. It's not obvious that algorithms do a better job of looking inside the "black box" of how production happens, or how it evolves to higher productivity and output, than conventional economic methods. 

Thus, by the end of the essay, Arthur seems to be backpedaling from grand claims. He writes: "As a means of understanding, algorithmic expression need not replace equation-based expression in economics, but can take its place alongside it as a parallel language." He says: "I do not believe algorithmic expression is a panacea in economics. It can include heterogenous agents with “non-rational” behavior that is context dependent, detailed and therefore more realistic. But it does not easily capture the `humanness' of economic life, its emotionality, its intuitive nature, its personages, its very style. For this we would need other means." What starts off sounding like a frontal assault on the methods of economics ends up as a quiet plea for openness to an expanded set of mathematical tools. 

Tuesday, April 20, 2021

The US Net International Investment Position, aka "Debtor Nation"

US investors of all types--individual, corporate, government--purchase debt and equity issued in other countries. International investors of all types--individual, corporate, government--purchase debt and equity issued in the United States. The US Bureau of Economic Analysis estimates the total holdings of foreign assets by US investors, and the total holdings of US assets by foreign investors. Here's the tally from the BEA: 
Back in 2011,  US-owned assets abroad (blue line) were about $2.5 trillion less than the US liabilities owned by foreign investors (orange line). At the end of 2020, the gap had risen to $14 trillion. This is the US "net international investment position."

This gap can change for two main reasons. One reason is that, in a given year, the inflow and outflow of financial investments to and from the US economy don't need to balance. In fact, when the US economy runs trade deficits, with imports greater than exports, it necessarily means that the US economy is consuming (with imports included) more than it is producing (with exports included). As economists are quick to point out, the result of this situation is that foreign parties will take some of the US dollars they have earned, and invest them in US financial assets. 

The other reason the gap can change is that prices of financial assets can move around. For example, the S&P 500 stock market index has more than tripled in the last decade. Thus, the liabilities of the US economy to foreign investors who purchased US stocks a decade ago will be much larger as a result. Indeed, the BEA estimates that the value of foreign holdings of US portfolio assets--that is, holding debt or equity, but not with control of the US company--rose from $12 trillion a decade ago to more than $24 trillion at present, thus accounting for essentially all of the larger gap between US foreign assets and US foreign liabilities shown in the figure above. When comparing the size of foreign assets and liabilities over time, movements in exchange rates will also matter. 
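The arithmetic of that decomposition can be sketched with made-up numbers, chosen only to echo the rough magnitudes in the post (a roughly -$2.5 trillion position in 2011 widening to about -$14 trillion), not actual BEA figures:

```python
# Hypothetical decomposition of a change in the net international
# investment position (NIIP). All numbers are illustrative, in
# trillions of dollars, not BEA data.
assets_start, liabilities_start = 21.0, 23.5
niip_start = assets_start - liabilities_start          # -2.5

# Reason 1: net financial inflows -- foreigners buy more US assets
# than US investors buy abroad (the counterpart of trade deficits).
net_inflows = 6.0
# Reason 2: valuation -- rising US stock prices inflate the value
# of foreign-held US equity more than US holdings abroad appreciate.
valuation_gain_liabilities = 7.0
valuation_gain_assets = 1.5

assets_end = assets_start + valuation_gain_assets
liabilities_end = liabilities_start + net_inflows + valuation_gain_liabilities
niip_end = assets_end - liabilities_end
print(f"NIIP: {niip_start} -> {niip_end}")             # -2.5 -> -14.0
```

In this stylized example, valuation changes account for more of the widening gap than net inflows do, which is the pattern Milesi-Ferretti emphasizes below.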

Because the value of US foreign liabilities exceeds that of US foreign assets, the US is sometimes referred to as a "debtor nation." The name isn't quite right for several reasons. When you read about the "national debt," the reference is usually to the accumulated debt from US government borrowing. But the net international investment position includes not only government debt held by international investors; it also reflects private-sector choices about international investments in debt and equity. 

How much should Americans be worried about the fact that the US net international investment position is negative $14 trillion? Gian Maria Milesi-Ferretti suggests that it shouldn't be cause for large concern in "The US is increasingly a net debtor nation. Should we worry?" (Brookings Institution, April 14, 2021). He breaks down the US net international investment position into flows of debt and flows of equity. In addition, he breaks down debt and equity into "flows" and "position," where "flows" describes the amount flowing back and forth across the border, and "position" includes changes in the total value of debt and equity.
The figure shows that the common pattern for the US economy during the last quarter-century is that foreign investors hold much more in US-issued debt than US investors hold in foreign debt. One main reason is that US-dollar debt--especially the debt issued by the US government--is viewed around the world as a safe asset. The US dollar has been the world's dominant currency for a long time.

However, during the last 25 years, US investors have tended to hold more in international equity than foreign investors held in US equity. The dashed blue line shows that flows of equity investment back and forth across the border haven't changed a lot. But because of the sharp rise in US stock markets, and also the stronger value of the US dollar, the total value of foreign holdings of US equity has risen compared to US holdings of foreign equity--and the US net international investment position has fallen accordingly. Milesi-Ferretti writes: 
Since 2010, ... the net international investment position has plunged by some 50 percent of GDP. This time the valuation effects have worked in reverse: the U.S. dollar has strengthened notably since 2010, and U.S. equity prices have risen much more than foreign equity prices. In other words, the value of foreigners’ investments in the U.S. has risen a lot relative to the value of Americans’ investments abroad.

Thus, one way to look at the fall in US net international investment position is that it's the result of good news--a rising US stock market. 

The other important pattern here is that, in general, if you have $100 in debt it will pay a lower return than $100 in equity, basically because the debt is safer than the equity. The US investment in foreign equity is often in the form of "foreign direct investment," where a US firm owns a large enough share of a foreign firm that the US firm has a say in managing the foreign firm (although the US firm may not have complete control over the foreign firm). This means that US investments abroad systematically earn more than foreign investments in the US. Here's a figure from Milesi-Ferretti:

Indeed, even though the rest of the world owns $14 trillion more in US assets than US investors own in foreign assets, it has been and continues to be true that US investors holding foreign assets earn a higher total return than foreign investors holding US assets. 

I sometimes say that when it comes to international investment, the US economy is like a company that borrows money at a low interest rate and then invests that money in corporate stock and receives a higher rate of return. There are of course risks to this approach, but it's been this way for the US economy for a long time and there are clearly benefits, too. 
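A back-of-the-envelope version of that analogy, with hypothetical rates of return and positions scaled only to the post's rough magnitudes (the $14 trillion negative net position), shows how net investment income can stay positive:

```python
# Illustrative only: the positions loosely match the post's magnitudes,
# and the return rates are assumptions, not measured yields.
us_assets_abroad = 32.0        # trillions; tilted toward equity and FDI
foreign_assets_in_us = 46.0    # trillions; tilted toward safe debt
r_equity_heavy = 0.045         # assumed return US investors earn abroad
r_debt_heavy = 0.025           # assumed return foreign investors earn in the US

income_received = us_assets_abroad * r_equity_heavy    # ~1.44 trillion
income_paid = foreign_assets_in_us * r_debt_heavy      # ~1.15 trillion
net_income = income_received - income_paid             # positive
print(f"net investment income: {net_income:+.2f} trillion")
```

With these assumed rates, the roughly two-percentage-point return gap more than offsets the $14 trillion difference in positions, which is the "borrow cheap, invest in stock" logic of the paragraph above.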

Sunday, April 18, 2021

A Hive of Authentic College Applicants

As someone with a couple of college-age children who have navigated the admissions process at selective colleges, I found myself nodding in agreement with Matt Feeney's essay in the Chronicle of Higher Education, "The Abiding Scandal of College Admissions: The process has become an intrusive and morally presumptuous inquisition of an applicant’s soul" (April 16, 2021). 

A basic fact is that applications at selective colleges are way up, and given a fixed number of slots for students, acceptance rates are way down. For example, the Washington Post just reported: "Columbia's applications were up a stunning 51 percent this year, and Harvard's were up 42 percent. There were also double-digit increases at Brown (27 percent), Dartmouth (33 percent), Princeton (15 percent), the University of Pennsylvania (33 percent) and Yale (33 percent)." Acceptance rates at places like Harvard, Stanford, and Princeton are in the range of 4-5%.

When a school is accepting only one applicant of every 20, or every 10, or every five, you might think that the school would want to be clear with applicants about their low odds--before those applicants invest time, sweat, soul, and money in writing the essays and doing the paperwork. But of course, that's not how it works. Lots of applicants and a low acceptance rate may mean wasted time and enormous disappointment for applicants, but it looks good for the school. 

So instead, selective schools encourage everyone to apply: we were on tours at multiple selective schools that started with hundreds of people in auditoriums where such encouragement was given. We were repeatedly told not to worry too much about test scores or high school grades--although even the most casual acquaintance with the facts about who is actually admitted suggests that these measures are pretty important. Instead, the emphasis was, as Feeney points out, on "holistic admissions" and an "authentic" application that demonstrates the real specialness of you. 

On one side, saying that it's all about "authenticity" is an encouragement to apply. On the other side, if you are not accepted based on your authentic self, while others are accepted based on their authentic selves, it will seem pretty clear to an overwhelming majority of applicants that their authentic self was either presented poorly or judged and found wanting. It's all too reminiscent of what Groucho Marx said about "sincerity": "If you can fake that, you've got it made." 

Moreover, it's clear at selective colleges that applicants all need to show their special personal authenticity in some very specific ways: grades/test scores, involvement in extracurriculars and the community, ability and willingness to diagnose and write about their own selves, and so on. 

As Feeney points out, as college admissions have become more selective in recent decades, what the admissions people say they are looking at and emphasizing has changed, too. There was a stretch in the 1980s and 1990s where the emphasis was on extracurricular activities and the "well-rounded" applicant. After this (quite predictably) resulted in an epidemic of extreme resume-padding, "more recently they have come to favor the passionate specialist, otherwise known as the `well-lopsided' applicant." Apparently on the horizon is an admissions online platform that will let you start storing your essays and videos starting in ninth grade. 

(Bad news here for applicants to selective colleges: Multiply the number of applicants by, say, a generously estimated one or two hours to look over every application. The admissions personnel on average don't have much more time than that. The idea that they are going to spend many hours looking over video and text of the best science reports, short stories, choir/band concerts, sports team highlights, and community service projects for every applicant is delusional. At best, they could skim and skip through a few entries for specific applicants.) 

Here's Feeney in the Chronicle of Higher Education:  
The people who made applying to college an elaborate performance, a nervous and years-long exercise in self-construction have now decided that the end result of this elaborate performance must be “the real you.” The tacit directive in all this — “Be authentic for us or we won’t admit you” — puts kids in a tough position. It’s bad that kids have to suffer this torment. It’s also bad that admissions departments actually think that the anxiously curated renderings that appear in applications can in any way be called “authentic.” It’s like watching Meryl Streep portray Margaret Thatcher and thinking: Now that is the real Meryl Streep. ... 

What distinguishes an applicant here is not authenticity, but access to the best advice on how to create the right authenticity effect — cultured parents, costly admissions coaches, able and informed college counselors. ... This points to another dark aspect of all this personalizing, with its imposed subtleties of performance and discernment — the barely hidden class bias. Admissions personnel are generally eager to add their voices to the chorus bewailing the socioeconomic and racial bias in standardized testing, but they’re largely incurious about the class bias in their own softer measures. In practice, that is, what ends up resembling “authenticity” to admissions officers is an uncannily WASPy mix of dispensations better understood as discretion, or, perhaps, good taste. After all, what admissions readers really dislike are the braggarts, and isn’t bragging a vice of the classless, the parvenus and arrivistes? ...

Admissions bureaucrats faced with thousands more applicants than they can accept soon reach a level of arbitrariness. At that point, they launch an inquisition of their applicants’ souls. This makes little sense academically but allows them to stage a powerful, utterly undeserved disciplinary claim on the inner lives of teenagers — that is the abiding scandal of college admissions. ....

Admissions officers have come to see the process they oversee in therapeutic terms. They present the college application as a set of therapeutic prompts, gentle invitations for the applicant to free herself from repression and self-deceit and move toward authentic self-expression and self-knowledge. ...

Setting up a years-long, quasi-therapeutic process in which admissions goads young people into laying bare their vulnerable selves — a process that conceals a high-value transaction in which colleges use their massive leverage to mold those selves to their liking — is reprehensible. It is terrible thing to do. It renders the discovery of true underlying selves absurd. Sometimes, as we’ve seen, admissions people will admit they have this formative leverage over young people. But they fail to show the humility that should attend this admission, the clinician’s awareness that to use this power is to abuse it. Instead, they want even more power. They want to intrude even more deeply into the souls of their applicants. ...
I can easily understand some sensible reasons why colleges want their own admissions department. Sometimes there is a really good fit between the abilities and interests of a student and the specific strengths of an institution. Pools of applicants will vary from year to year, and there's some logic in trying to make sure that you admit a class that has a degree of balance in terms of academic interests, nonacademic interests, and geographic and demographic characteristics. 

But with no deep disrespect meant to the admissions personnel at selective colleges and universities, who I think are mostly just doing the best they can, they aren't professors or therapists. So who died and made them the monarchs of defining what is the desirable kind of authenticity, and how a holistic view of that authenticity should be expressed? Especially the authenticity of 17-year-olds? 

Friday, April 16, 2021

Interview with Esther Duflo: On Experimental Methods and Inequality

Douglas Clement provides an "Esther Duflo interview: Deciding how to share" (For All: Federal Reserve Bank of Minneapolis, Spring 2021).  

On the existence of a tradeoff between growth and inequality:
I think the whole notion of a trade-off is likely a fallacy, for various reasons. First of all, there is no clear link either on theoretical grounds or empirically between higher inequality and more growth. There is no reason why inequality is necessary for growth. And there is no law of economics that says that growth increases inequality either. So I think there is no causality necessarily going in either direction; therefore, there is not necessarily a trade-off. Just as a matter of accounting, growth is equality-enhancing if most of the benefits of growth are going toward the poor. And growth is inequality-enhancing if most of the advantages are going toward the rich. Both are possible. I don’t think there is a systematic pattern either way. ... 

In fact, we don’t seem to have much of a handle on what causes growth anyway, although we might have interesting theoretical narratives on growth. If there is a consensus among macroeconomists, it’s on what should be avoided at all costs, like hyperinflation. But there is not a set of recipes that guarantees growth, and it’s not that these recipes therefore lead to a trade-off. So, I think there is actually no trade-off.
On how evidence from randomized control trials is like a pointillist painting:
The idea of the pointillist painting is, imagine a painting by Seurat. It’s literally made of dots, and each of these dots on its own is perfectly nice, but it doesn’t generalize to anything. But if you step back and accumulate all these dots, you see the entire painting of, say, a family on the bank of the Seine having a picnic.

Suppose you’re trying to assemble a jigsaw puzzle of that Seurat painting. Just by looking at the rest of the painting, you sort of know what goes next. You have a prediction about where a given piece fits. You might find that your piece doesn’t fit. It might be wrong. It’s not what you expected. But the frame, the painting, gives you good guidance for what you might expect.

That’s how progress happens. The caricature is that you try one small experiment in one place, and then you can take the result to the entire world. That’s not it. The way it actually works is: Do your small experiment; get some findings that are interesting. They might contradict or confirm the theory that you started from, but they give you fodder for the next experiment, and so on and so forth, until you have an understanding of what might be the entire shape or contour of that problem.
On using the superstar power of economists to save lives:
My husband, Abhijit Banerjee, also a Nobel Prize laureate, was asked to be the chairman of the coronavirus response team in West Bengal. ... We knew from previous work ... that stars and celebrities are very influential in conveying these messages, so we were looking for stars to pass along very basic social distancing advice to households in India at a time when it was completely confusing. It finally dawned on us that the best star we had was right on our team! Abhijit Banerjee has been a bit of a household name in West Bengal—where he’s from—since he won the Nobel Prize. ...

Abhijit recorded messages that were sent in two rounds to subscribers with Airtel, a bigger subscriber network. One message was about asking people to be kind to coronavirus patients and not to shun them out of the village, and the other was about how travel during Durga Puja, where people normally come in droves to town and make pilgrimages to makeshift temples. So, potentially, a scene of millions of people crowded together, coming from everywhere and going back. It could have been a coronavirus disaster.

Abhijit worked with others in putting together something that was feasible. You cannot say, “Cancel the holiday.” That’s not really an option. So something that was feasible, but would improve things. And we sent one more round of messages urging people to stay home if they’re older, and if they do go out, visit just one location, and wear a mask.

And quickly after that, Durga Puja happened, and we saw that the attendance was down a significant amount from previous years. So it was much, much, much lower attendance. And we can now see whether there was an uptick of coronavirus and we don’t see that.

So, of course, it was not just his messages. There was also the chief minister went on television to relay the message. But this entire effort to convince people with clear messages about what to do seems to have been very effective. I’m convinced that that saved thousands and thousands of lives ultimately. You don’t get to do that every day.

For an earlier post on the award of the 2019 Nobel Prize in economics to Duflo, Banerjee, and Michael Kremer, see "A Nobel for the Experimental Approach to Global Poverty for Banerjee, Duflo, and Kremer" (October 18, 2019). 

Thursday, April 15, 2021

Pharma R&D: Vaccines and Other Drugs

Surely one of the key lessons of the pandemic is the value of research and development, which in turn means the value of making the investments over time in education and equipment so that researchers are tooled up and ready to go as needed. The Congressional Budget Office has published "Research and Development in the Pharmaceutical Industry" (April 2021), which offers a useful primer in getting up to speed on some of the key trends and issues. Here are five of the main themes as I see them. 

1) Research and development may play a bigger role in pharmaceuticals than in any other industry. 

This figure shows how much different industries spend on R&D as a share of their "net revenues"--that is, revenues minus expenses. A few years back, pharmaceuticals were similar in their "research intensity" to industries like semiconductors and software, but in the last decade or so, pharma has become even more research-intensive. 

Pharma R&D spending has gone way up. The CBO writes: 

In real terms, private investment in drug R&D among member firms of the Pharmaceutical Research and Manufacturers of America (PhRMA), an industry trade association, was about $83 billion in 2019, up from about $5 billion in 1980 and $38 billion in 2000. Although those spending totals do not include spending by many smaller drug companies that do not belong to PhRMA, the trend is broadly representative of R&D spending by the industry as a whole. A survey of all U.S. pharmaceutical R&D spending (including that of smaller firms) by the National Science Foundation (NSF) reveals similar trends.

Let's say that again: in real (that is, adjusted for inflation) dollars, pharma R&D is up by a factor of about 10 from the average in the 1980s and even before the pandemic had more than doubled since 2000. The CBO also points out that the cost of developing a successful new drug can commonly be in the range of $1-$2 billion, once the costs of the drugs that didn't work out are included, and the process of developing a drug so that it's ready to sell can take a decade or more. 

2) There's controversy over the direction of pharma R&D spending. 

Pharma companies will be attracted to producing expensive drugs for big markets. Conversely, the incentive for a drug company to spend $1 billion and a decade addressing a smaller market or finding a lower-cost alternative to an existing money-maker will not be large. The CBO writes: 

The number of new drugs approved each year has also grown over the past decade. On average, the Food and Drug Administration (FDA) approved 38 new drugs per year from 2010 through 2019 (with a peak of 59 in 2018), which is 60 percent more than the yearly average over the previous decade. Many of the drugs that have been approved in recent years are “specialty drugs.” Specialty drugs generally treat chronic, complex, or rare conditions, and they may also require special handling or monitoring of patients. Many specialty drugs are biologics (large-molecule drugs based on living cell lines), which are costly to develop, hard to imitate, and frequently have high prices. Previously, most drugs were small-molecule drugs based on chemical compounds. Even while they were under patent, those drugs had lower prices than recent specialty drugs have. Information about the kinds of drugs in current clinical trials indicates that much of the industry’s innovative activity is focused on specialty drugs that would provide new cancer therapies and treatments for nervous-system disorders, such as Alzheimer’s disease and Parkinson’s disease.

Here's a figure showing the therapeutic areas where US drug spending has increased the most in the last decade. The big ones at the top are drugs to address cancer, diabetes, and autoimmune diseases. Because these are the big markets, this is also where pharma R&D for future drugs will tend to be focused. 

3) There's controversy over the role of larger and smaller pharma companies. 

The pharma industry has developed a partial division of labor, where smaller companies are more likely to be doing R&D, and larger companies are more likely to be leading the way on the clinical testing needed before the drugs come to market. Thus, a common dynamic is that if a small company has developed a promising drug, either the drug or the entire company may be bought by a larger firm. There's nothing necessarily wrong with this dynamic. It gives entrepreneurs a way to start small and then cash out when they succeed. But it does raise a danger that big pharma companies are buying out the very firms that could, in time, have grown into being their future competitors. There's evidence that in some cases, large pharma companies have bought smaller firms with a new drug that might have competed with existing drugs--and then halted development of the new drug. The CBO writes (footnotes and references to text boxes omitted): 

Although total R&D spending by all drug companies has trended upward, small and large firms generally focus on different R&D activities. Small companies not in PhRMA [the trade association of big pharma companies] devote a greater share of their research to developing and testing new drugs, many of which are ultimately sold to larger firms. By contrast, a greater portion of the R&D spending of larger drug companies (including those in PhRMA) is devoted to conducting clinical trials, developing incremental “line extension” improvements (such as new dosages or delivery systems, or new combinations of two or more existing drugs), and conducting post-approval testing for safety-monitoring or marketing purposes. ...

Small drug companies (those with annual revenues of less than $500 million) now account for more than 70 percent of the nearly 3,000 drugs in phase III clinical trials. They are also responsible for a growing share of drugs already on the market: Since 2009, about one-third of the new drugs approved by the Food and Drug Administration have been developed by pharmaceutical firms with annual revenues of less than $100 million. Large drug companies (those with annual revenues of $1 billion or more) still account for more than half of new drugs approved since 2009 and an even greater share of revenues, but they have only initiated about 20 percent of drugs currently in phase III clinical trials.

4) The government has always played an important role in vaccine markets, with its requirements for who needs to get vaccinated. And of course, the government played a substantial role in developing COVID-19 vaccines with the Warp Speed program. 

Here's the CBO summary of which companies got money for a COVID-19 vaccine, and for what purpose. Given the costs of COVID-19, this $19 billion probably ranks among the most cost-effective money the US government has ever spent on anything.

5) Research expertise in vaccines, as in many other areas, often can shift from one disease to another, so that what looks like "failure" in producing a vaccine for one disease can build expertise in addressing a different disease. 

For example, it turns out that although the effort to produce an HIV vaccine has not so far been successful, many of the technologies and skills developed in that search were useful in creating a COVID-19 vaccine (for discussions of this point, see here and here). 

Jeffrey E. Harris argues this case in some detail in "The Repeated Setbacks of HIV Vaccine Development Laid the Groundwork for SARS-COV-2 Vaccine" (March 2021, NBER Working Paper 28587). As he points out, before AIDS the common vaccines were "dead" or "live." A "dead" vaccine (like the polio vaccine) treated the infectious organism with heat or chemicals so that it was no longer infectious, but still helped the body to produce an immune response. A "live" vaccine (like the measles vaccine) ran the infectious organism through animals or other treatments to produce a version that produced only a very mild infection--but still caused the body to produce an immune response. 

But neither method worked in trying to produce a vaccine for the highly mutable AIDS virus. Instead, working on an AIDS vaccine forced researchers to think about how a vaccine might attack the molecular structure of the virus. I won't embarrass myself by trying to summarize the progression of scientific research, but a key "spike" protein that had been studied in the HIV research turned out to be the key protein for the mRNA vaccines that are being used against COVID-19. In addition, discussions in the trade press suggest that, in turn, the knowledge gained from the COVID-19 vaccine about mRNA technologies could help lead to a vaccine for malaria, hepatitis C, dengue--and even HIV.  

The broader point is that although private pharma firms clearly have strong incentives to do R&D aimed at large existing drug markets, there is a broad social benefit from having research into many areas of vaccines and other drugs, because you can't know in advance how scientific progress will lead to practical gains. 

Tuesday, April 13, 2021

What Do You Call a Bigger Wave of Debt?

Sometimes you work on a big and worthwhile project, and then find yourself overtaken by events. The project remains worthwhile, but it can suddenly feel outdated. Thus, I found myself wincing in sympathy at Global Waves of Debt: Causes and Consequences, a World Bank report written by M. Ayhan Kose, Peter Nagle, Franziska Ohnsorge, and Naotaka Sugawara and published in March 2021. 

The problem is that the report focuses on four major waves of government debt up through 2018. Of course, when the authors launched into this project they had no way of knowing that the world was on the cusp of a COVID-related surge in government debt starting in 2020. But the result is that the authors are warning of the potential dangers of a wave of government debt given the debt levels of 2018, while the pandemic-related debt wave is now bigger than they could have anticipated. For example, they write: 

The global economy has experienced four waves of broad-based debt accumulation over the past 50 years. In the latest wave, underway since 2010, global debt has grown to an all-time high of 230 percent of gross domestic product (GDP) in 2018. The debt buildup was particularly fast in emerging market and developing economies (EMDEs). Since 2010, total debt in these economies has risen by 54 percentage points of GDP to a historic peak of about 170 percent of GDP in 2018. Following a steep fall during 2000-10, debt has also risen in low-income countries to 67 percent of GDP ($268 billion) in 2018, up from 48 percent of GDP (about $137 billion) in 2010. ...

Before the current wave, EMDEs [emerging market and developing economies] experienced three waves of broad-based debt accumulation. The first wave spanned the 1970s and 1980s, with borrowing primarily accounted for by governments in Latin America and the Caribbean region and in low-income countries, especially in Sub-Saharan Africa. The combination of low real interest rates in much of the 1970s and a rapidly growing syndicated loan market encouraged these governments to borrow heavily.

The first wave culminated in a series of crises in the early 1980s. Debt relief and restructuring were prolonged in the first wave, ending with the introduction of the Brady Plan in the late 1980s for mostly Latin American countries. The Plan provided debt relief through the conversion of syndicated loans into bonds, collateralized with U.S. Treasury securities. For low-income countries, substantial debt relief came in the mid-1990s and early 2000s with the Heavily Indebted Poor Countries initiative and the Multilateral Debt Relief Initiative, spearheaded by the World Bank and the International Monetary Fund.

The second wave ran from 1990 until the early 2000s as financial and capital market liberalization enabled banks and corporations in the East Asia and Pacific region and governments in the Europe and Central Asia region to borrow heavily, particularly in foreign currencies. It ended with a series of crises in these regions in 1997-2001 once investor sentiment turned unfavorable. The third wave was a run-up in private sector borrowing in Europe and Central Asia from European Union headquartered “mega-banks” after regulatory easing. This wave ended when the global financial crisis disrupted bank financing in 2007-09 and tipped several economies in Europe and Central Asia into recessions. ... 

The latest wave of debt accumulation began in 2010 and has already seen the largest, fastest, and most broad-based increase in debt in EMDEs in the past 50 years. The average annual increase in EMDE debt since 2010 of almost 7 percentage points of GDP has been larger by some margin than in each of the previous three waves. In addition, whereas previous waves were largely regional in nature, the fourth wave has been widespread with total debt rising in almost 80 percent of EMDEs and rising by at least 20 percentage points of GDP in just over one-third of these economies. ... 
Since 1970, there have been 519 national episodes of rapid debt accumulation in 100 EMDEs, during which government debt typically rose by 30 percentage points of GDP and private debt by 15 percentage points of GDP. The typical episode lasted about eight years. About half of these episodes were accompanied by financial crises, which were particularly common in the first and second global waves, with severe output losses compared to countries without crises. Crisis countries typically registered larger debt buildups, especially for government debt, and accumulated greater macroeconomic and financial vulnerabilities than did noncrisis countries.

Although financial crises associated with national debt accumulation episodes were typically triggered by external shocks such as sudden increases in global interest rates, domestic vulnerabilities often amplified the adverse impact of these shocks. Crises were more likely, or the economic distress they caused was more severe, in countries with higher external debt—especially short-term—and lower international reserves.

Of course, pandemic-related debt has pushed borrowing above the previous projections. Here are some figures from the IMF Fiscal Monitor published in April 2021. The first panel shows debt/GDP ratios from 2007 to 2021. The yellow lines show interest payments, which have so far remained fairly low thanks to the prevailing low interest rates. The rising debt/GDP ratios in emerging market and developing economies are clear.

The second set of panels shows how debt projections have changed since the pandemic for these three groups of countries. The bars show annual deficit/GDP predictions, pre- and post-pandemic, while the lines show the shift in accumulated debt, pre- and post-pandemic. 

As the authors of the World Bank report above point out in their discussion, rising debt does not automatically bring disaster. The sharp-eyed reader will note that the debt/GDP ratios for advanced economies are higher than those for emerging market and developing economies. There is a general pattern that as an economy develops, its financial sector also develops in ways that typically lead to a higher debt/GDP ratio. More broadly, the depth of the financial sector and the sophistication of financial regulation will make a big difference. 

On the other side, debt is often referred to as "leverage," because it magnifies the outcome of both positive and negative events for a national economy (or for a company or a household). With a higher level of debt, an adverse event can easily become two problems--the adverse event itself and also a debt crisis. It is concerning that this risk was viewed as high for many countries around the world, even before they increased their debt during the pandemic. 
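
The leverage point can be made concrete with a stylized balance-sheet example. The numbers below are hypothetical, and interest costs are ignored for simplicity:

```python
# Stylized illustration of why debt is called "leverage" (hypothetical
# numbers; interest costs ignored). When assets are financed partly by
# debt, swings in asset values are magnified in net worth, because the
# debt must be repaid in full either way.

def net_worth_change(asset_return, debt_share):
    """Proportional change in net worth for a given change in asset values."""
    equity_share = 1.0 - debt_share
    return asset_return / equity_share

print(net_worth_change(-0.10, 0.0))  # no debt: a 10% asset loss cuts net worth by 10%
print(net_worth_change(-0.10, 0.8))  # 80% debt: the same loss cuts net worth by 50%
```

The same multiplier works on the upside, which is why leverage looks attractive in good times and dangerous in bad ones.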

Monday, April 12, 2021

The US Productivity Slowdown After 2005

In the long run, a rising standard of living is all about productivity growth. When the average person in a country produces more per hour worked, then it becomes possible for the average person to consume more per hour worked. Yes, there is a meaningful and necessary role for redistribution to the needy. But the main reason societies get rich is not that they redistribute more: rather, societies are able to redistribute more because rising productivity expands the size of the overall pie. 

In the latest issue of the Monthly Labor Review from the US Bureau of Labor Statistics, Shawn Sprague provides an overview in "The U.S. productivity slowdown: an economy-wide and industry-level analysis" (April 2021). In particular, he is focused on the slowdown in US productivity growth since 2005, after a resurgence of productivity growth in the previous decade. Here's a figure showing the longer-run patterns, which have birthed roughly a jillion research papers. 
Notice that total productivity growth is robust in the decades after World War II, from 1948 to 1973. Then there is a productivity slowdown, especially severe in the stagflationary 1970s, but continuing through the 1980s and into the 1990s. There's a productivity surge from 1997 to 2005, commonly attributed to acceleration in the power and deployment of computing and information technology. But just when it seemed as if the economy might be moving back to a higher sustained rate of productivity growth, starting around 2005 productivity sagged back to the levels of the slowdown of the 1970s and 1980s. 

The figure also shows how economists break down sources of economic growth. First look at how much the quality of the labor force has improved, as measured by education and experience. Then look at how much capital the average worker is using on the job. After calculating how much productivity growth can be explained by those two factors, what is left over is called "multifactor productivity growth." This is often interpreted as changes in technology--broadly understood to include not just new inventions but all the ways that production can be improved. But as the economist Moses Abramovitz said years ago, measuring multifactor productivity growth as what is left over, after accounting for other factors, means that it is "the measure of our ignorance."
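
The accounting logic can be sketched in a few lines. The growth rates and contributions below are made-up numbers for illustration, not BLS estimates:

```python
# Growth-accounting sketch with made-up numbers (not BLS estimates).
# Labor productivity growth is split into the contribution of labor
# composition (education and experience), the contribution of capital
# intensity (capital per hour worked), and a residual: multifactor
# productivity (MFP) growth -- "the measure of our ignorance."

labor_productivity_growth = 2.0       # percent per year, assumed
labor_composition_contribution = 0.3  # assumed
capital_intensity_contribution = 0.7  # assumed

mfp_growth = (labor_productivity_growth
              - labor_composition_contribution
              - capital_intensity_contribution)

print(mfp_growth)  # 1.0 -- the residual attributed to MFP
```

Because MFP is whatever is left after the measurable inputs are netted out, any mismeasurement of labor quality or capital shows up in the residual too.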

As Sprague points out, variations in multifactor productivity growth are the biggest part of changes in productivity over time. 
The deceleration in MFP growth—the largest contributor to the slowdown—explains 65 percent of the slowdown relative to the speedup period; it also explains 79 percent of the sluggishness relative to the long-term historical average rate. The massive deceleration in MFP growth is also emblematic of a broader phenomenon shown in figure 2. We can see that throughout the historical period since WWII, the majority of the variation in labor productivity growth from one period to the next was from underlying variation in MFP growth, rather than from the other two components.
However, the most recent slowdown in productivity also seems to have something to do with capital investment. Sprague again: 
At the same time, in addition to the notable variation in MFP growth during the recent periods, something unprecedented about these recent periods was the additional contribution from variation in the contribution of capital intensity. The contribution of capital intensity had previously remained within a relatively small range (0.7 percent to 1.0 percent) during the first five decades of post-WWII periods, but then in the 1997–2005 period, the measure nearly doubled, from 0.7 percent up to 1.3 percent, followed by nearly halving to 0.7 percent in the 2005–18 period. ... The contribution of capital intensity accounts for 34 percent of the labor productivity slowdown relative to the speedup period and explains 25 percent of the sluggishness relative to the long-term historical average rate.
What are some possible explanations for the growth slowdown? As Sprague writes: "[N]ot only has the productivity slowdown been one of the most consequential economic phenomena of the last two decades, but it also represents the most profound economic mystery during this time ..." Sprague does a detailed breakdown of economy-wide factors that may have contributed to the productivity slowdown as well as industry-specific factors. Here, I'll just mention some of the main themes. 

A first set of explanations focuses on the Great Recession and the sluggish recovery afterwards. One can argue, for example, that when the financial sector is in turmoil and an economy is growing slowly, firms have less ability and less incentive to raise capital for productivity gains. This seems plausible, and surely has some truth in it, but it also has some weak spots. For example, the productivity slowdown in the data pretty clearly starts a few years before the Great Recession. Also, one might argue that in difficult times, firms might have more incentive to seek out productivity gains. Finally, it feels like a circular argument to ask "why aren't additional inputs producing output gains as large as before?" and then to answer "because the output gains were not as large as before." 

A second explanation is that productivity gains at the frontier have not actually slowed down: instead, what has slowed down is the rate at which these gains are diffusing to the rest of the economy. From this point of view, the real news is a wider dispersion in productivity growth within industries, as productivity laggards fall farther behind leaders (for discussion, see here and here). At a more detailed level, "many of the firms that have been innovating have not similarly been able to scale up and hire more employees commensurate with their improved productivity." It could also be that there are certain characteristics of productivity growth leaders--like an ability to apply leading-edge information technology to business processes throughout the company--that are especially hard for productivity laggards to follow. This lack of reallocation in the economy toward high-productivity firms may be related to other prominent issues like a decrease in levels of competition in certain industries or rising inequality. 

A third explanation is that the productivity surge from 1997-2005 should be viewed as a one-time anomalous event, and what's happening now is a long-term slowdown in the rate of productivity growth. Sprague writes: 
One underlying rationale for this potential story is provided by Joseph A. Tainter, who argues that, in general, as complexity in a society increases following initial waves of innovation, further innovations become increasingly costly because of diminishing returns. As a result, productivity growth eventually succumbs and recedes below its once torrid pace: “As easier questions are resolved, science moves inevitably to more complex research areas and to larger, costlier organizations,” clarifying that “exponential growth in the size and costliness of science, in fact, is necessary simply to maintain a constant rate of progress.” Nicholas Bloom, Charles I. Jones, John Van Reenen, and Michael Webb offer supporting evidence for this view regarding the United States, asserting that given that the number of researchers has risen exponentially over the last century—increasing by 23 times since 1930—it is apparent that producing innovations has become substantially more costly during this period.
Again, this explanation has some plausibility. But it also feels as if the modern economy does have a substantial number of innovations,  and the puzzle is why they aren't showing up in the productivity statistics.

A fourth set of explanations digs down into which industries showed the biggest falls in productivity growth after 2005 and which ones showed the biggest rises. Here's an illustrative figure. The industries with the biggest losses are computers/electronics products, along with retail and wholesale trade. 
This selection of industries may feel counterintuitive, but remember that this is a comparison between two time periods. Thus, the figure isn't saying that productivity outright declined in these sectors--only that the gain after 2005 was slower than the gain in the pre-2005 decade. In computers, for example, the rate of decline in the prices of microprocessors began to slow in the mid-2000s. Similarly, retail and wholesale businesses underwent a huge change in the late 1990s and early 2000s that increased their productivity, but the changes after that time were more modest. In short, this is the detailed, industry-level version of the argument that the productivity rise from 1997-2005 was a one-time blip.

A final explanation, not really discussed by Sprague, is worth considering as well: Perhaps we are entering an economy where certain kinds of gains in output are not well-reflected in measured GDP gains. For example, imagine that the development of COVID-19 vaccines halts the virus. The social welfare gains from such vaccines are much larger than just the measured gains to GDP. Or imagine that a set of innovations makes it possible to reduce carbon emissions in a way that reduces the risk of climate change. From a social welfare perspective, this avoided risk would be a huge benefit, but it wouldn't necessarily show up in the form of a more rapidly expanding GDP. 

Or consider the range of online activities now available: entertainment, social, health, education, retail, working-away-from-the-office. Add in the services that are available at no direct financial cost, like email, software, shared websites, cloud storage, and so on. It seems plausible to me that the social benefits from this expanding set of options are much greater than how they are measured in GDP terms--for example, by how much I pay for my home internet service or how much ad revenue is taken in by companies like Google and Facebook. 

Again, this thesis has some plausibility. One never wants to fall into the trap of thinking that output as measured by GDP is also a measure of social welfare. It's well-known that GDP measures money spent on health care and money spent on environmental protection, but it has trouble measuring gains in actual health or the environment. GDP will often have a hard time measuring gains in variety and flexibility as well.  

But this set of explanations also raises issues of its own. It suggests that people may be experiencing gains in their standard of living that are not reflected in their paychecks. In contrast, when productivity gains in terms of output per worker slow down, we are talking about output as measured by what is bought and sold in the economy. In short, gains in measured productivity are what help to produce pay raises. But even if these other kinds of gains are meaningful, they can't be used to pay your rent or your taxes.