Monday, November 19, 2018

Economics of Mushroom Production: Kennett Square and the Rise of China

Mushrooms are a relatively small US agricultural crop, with total production of about $1.2 billion in the 2017-2018 growing year. But they do illustrate some economic lessons, including how a local area that develops a specialization in a certain product can be hard to dislodge, and how the rise of China is reshaping global production in so many ways.

US mushroom production has for a long time been very geographically concentrated. The town of Kennett Square in southeastern Pennsylvania bills itself as the Mushroom Capital of the World, because about half of all US mushroom production happens in the surrounding area of Chester County.

The story here goes back to 1885, and to a florist named William Swayne who lived in Kennett Square. Swayne grew a lot of carnations, which required raised beds. He pondered whether it might be possible to grow a cash crop in the space under those raised beds. Mushrooms had been domesticated in France and England in the middle of the 19th century. Swayne sent away to England for mushroom spores, and began growing them. The demand was high enough that he built a "mushroom house," an enclosed building designed to grow only mushrooms. Other local farmers took note, and the Mushroom Capital of the World became established.

From an economic point of view, an obvious question is why mushroom production remains so concentrated in Chester County more than 120 years later. After all, the basic materials for growing mushrooms, compost made from vegetative material such as straw and hay, along with animal manure, are not hard to find. The climate of southeastern Pennsylvania provides a usefully cool ground temperature in fall, winter, and spring, but many other locations have similar temperatures.

Although I do not know of a systematic study of mushroom technology, there are some obvious hypotheses as to why mushroom growing has stayed so geographically concentrated. Many types of production look fairly easy from the outside. But when it comes to large-scale commercial production that covers costs and makes a profit, it seems likely that growing mushrooms commercially requires detailed skill and knowledge that spreads among the workers and producers in a geographically close community--in much the same way that software developers flourish in the area around Silicon Valley. In addition to a local labor force with crop-specific skills, local producers build up a chain of processors, wholesalers, national distribution networks, and retailers that is not quickly duplicated. The producers around Kennett Square have shown an ability to dramatically increase production over time: for example, back in 1967 the total US production of mushrooms was 157 million pounds, with 57% coming from Pennsylvania mushroom farmers; in recent years, total US production has risen roughly sixfold, to over 900 million pounds. Finally, the relatively small size of the mushroom market can limit the incentives for new competitors to make substantial investments in trying to take over this market.
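As a quick sanity check on the production figures above, here is a minimal arithmetic sketch. The numbers are the ones quoted in the text; the "sixfold" multiple is approximate, since 900 million pounds is a lower bound for recent production.

```python
# US mushroom production figures, as quoted in the text
production_1967_lbs = 157e6    # total US production, 1967
pa_share_1967 = 0.57           # Pennsylvania's share of the 1967 total
production_recent_lbs = 900e6  # recent total US production (lower bound)

# Pennsylvania's 1967 output and the overall growth multiple since then
pa_production_1967 = production_1967_lbs * pa_share_1967
growth_multiple = production_recent_lbs / production_1967_lbs

print(f"Pennsylvania production, 1967: {pa_production_1967 / 1e6:.0f} million lbs")
print(f"Growth multiple since 1967: {growth_multiple:.1f}x")
```

The ratio comes out to about 5.7, so "roughly sixfold" is a fair rounding, and somewhat more if recent production exceeds the 900-million-pound floor.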

But from the perspective of global mushroom production, this sixfold increase in US mushroom production over the last half-century is only a modest part of the story. The growth of China's economy has led to an extraordinary rise in global mushroom production in the last 20 years. Daniel J. Royse, Johan Baars, and Qi Tan provide background in "Current Overview of Mushroom Production in the World," which appears as Chapter 2 in the 2017 book Edible and Medicinal Mushrooms: Technology and Applications, edited by Diego Cunha Zied and Arturo Pardo-Giménez. As they note (references omitted):
World production of cultivated, edible mushrooms has increased more than 30‐fold since 1978 (from about 1 billion kg in 1978 to 34 billion kg in 2013). This is an extraordinary accomplishment, considering the world’s population has increased only about 1.7‐fold during the same period (from about 4.2 billion in 1978 to about 7.1 billion in 2013). Thus, per capita consumption of mushrooms has increased at a relatively rapid rate, especially since 1997, and now exceeds 4.7 kg annually (vs 1 kg in 1997; Figure 2.2). ...
China is the main producer of cultivated, edible mushrooms (Figure 2.3). Over 30 billion kg of mushrooms were produced in China in 2014, and this accounted for about 87% of total production. The rest of Asia produced about 1.3 billion kg, while the EU, the Americas, and other countries produced about 3.1 billion kg.
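The per capita figure in the quoted passage follows directly from the production and population numbers; here is a minimal check, treating world production as a proxy for consumption, as the chapter does.

```python
# Figures from the Royse, Baars, and Tan chapter quoted above
production_kg = {1978: 1e9, 2013: 34e9}  # world mushroom production, kg
population = {1978: 4.2e9, 2013: 7.1e9}  # world population

# Growth multiples and the implied per-person figure for 2013
production_growth = production_kg[2013] / production_kg[1978]
population_growth = population[2013] / population[1978]
per_capita_2013 = production_kg[2013] / population[2013]

print(f"Production growth: {production_growth:.0f}-fold")
print(f"Population growth: {population_growth:.1f}-fold")
print(f"Per capita, 2013: {per_capita_2013:.1f} kg")
```

The division gives about 4.8 kg per person for 2013, consistent with the authors' statement that per capita consumption "now exceeds 4.7 kg annually."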
Here's a figure showing growth of mushroom production vs. world population.

And here's a figure showing global mushroom production by location:

For sales of fresh mushrooms within the US and Canada, Kennett Square doesn't appear to be under immediate threat. But a 2010 report of the US International Trade Commission pointed out that the US became a net importer of processed mushrooms--typically grown in China--back in 2003-2004.

Friday, November 16, 2018

Solow on Friedman's 1968 Presidential Address and the Medium Run

Fifty years ago in 1968, Milton Friedman's Presidential Address to the American Economic Association set the stage for battles in macroeconomics that have continued ever since. The legacy of the talk has been important enough that in the Winter 2018 issue of the Journal of Economic Perspectives, where I work as Managing Editor, we published a three-paper symposium on "Friedman's Natural Rate Hypothesis After 50 Years."
Likewise, the Review of Keynesian Economics has now devoted most of its October 2018 issue to a nine-paper symposium on the issues raised by Friedman's presidential address. The first two papers in the issue, by Robert Solow and Robert J. Gordon, are freely available online, with the rest of the issue requiring a library subscription. Here, I'll focus mainly on Solow's comments.

What was the key insight or argument in Friedman's 1968 address? Friedman offers a reminder that interest rates and unemployment rates are set by economic forces. Friedman uses this idea to build a distinction between the long run and the short run. In the short run, it is possible for a central bank like the Federal Reserve to influence interest rates and the unemployment rate. In the long run, there is a "natural" rate of interest and a "natural" rate of unemployment that is trying to emerge, gradually, over time from all the various forces in the economy.

This short-run, long-run distinction then led to differing views over the appropriate role of government macroeconomic policy. In the magisterial Monetary History of the United States that Friedman had published in 1963 with Anna J. Schwartz, they make a powerful case that the effect of monetary policy in the past had often been to make the macroeconomic situation worse, rather than better. Given the practical imperfections faced by monetary policy (including time lags and political biases in the policy response and the long and variable lags in how monetary policy affects macroeconomic variables), Friedman argued that the “first and most important lesson” is that “monetary policy can prevent money itself from being a major source of economic disturbance.” While Friedman was open to the idea of macroeconomic policy responding to extreme economic situations, he worried about policy mistakes and overreactions.

One standard counterargument was that monetary policy and the macroeconomy had become much better understood over time, thanks in part to Friedman's work. Thus, examples of past misguided policy should not immobilize central bankers thinking about future policy choices.

Robert Solow is a notable player in these disputes: in particular, in his 1960 paper with Paul Samuelson, "Analytical Aspects of Anti-Inflation Policy" (American Economic Review, 50:2, pp. 177-194). In an essay in the Winter 2000 issue of the Journal of Economic Perspectives, "Toward a Macroeconomics of the Medium Run,"  Solow addressed this question of thinking about macroeconomic policy in the short- and the long-run. He wrote:
I can easily imagine that there is a “true” macrodynamics, valid at every time scale. But it is fearfully complicated, and nobody has a very good grip on it. At short time scales, I think, something sort of “Keynesian” is a good approximation, and surely better than anything straight “neoclassical.” At very long time scales, the interesting questions are best studied in a neoclassical framework, and attention to the Keynesian side of things would be a minor distraction. At the five-to-ten-year time scale, we have to piece things together as best we can, and look for a hybrid model that will do the job.
In this most recent essay, "A Theory is a Sometime Thing," Solow pushes this idea of medium-run thinking harder. He acknowledges that if a central bank can only cause the interest rate and unemployment rate to shift for a year or two, in the short run before a rebound to what is determined in the long run, then when problems of lags in timing are included, macroeconomic policy might be dysfunctional. But if a central bank can affect the interest rate and the unemployment rate for a medium-run period of, say, 5-7 years, then even with some uncertainty and lags, macroeconomic policy may be quite relevant and possible. At one point, Solow writes: "The medium run is where we live."
On the issue of interest rates, Solow points out that in the late 1970s and early 1980s, Paul Volcker's actions pushed up real interest rates substantially, such that the real federal funds interest rate "rose sharply to about 5 percent and fluctuated around that level for the next six years ... This sustained 5 percentage point increase in the real funds rate was not a random event. It was a deliberate intervention, designed to end the ‘double-digit’ inflation of the early 1970s, and it did so, with real side-effects. ... So the Fed was in fact able to control (‘peg’) its real policy rate, not for a year or two but for at least six years, certainly long enough for the normal conduct of counter-cyclical monetary policy to be effective.
The history of the Bernanke/Yellen Fed is more complicated ... The Fed was apparently able to lower the real ten-year Treasury bond rate for half a dozen years, 2011–2016. Of course there are many influences on the real long interest rate; it is at least plausible that large Fed purchases contributed to the outcome that the Fed was consciously seeking. The difference between ‘a year or two’ and ‘half a dozen years’ is not a small matter."
What about the natural rate of unemployment? One implication of Friedman's arguments was that if the government used macroeconomic policy in an attempt to hold the unemployment rate below its natural rate in the long run, it would lead to surges of ever-higher inflation. As Solow notes, in the 1970s and early 1980s, sharp drops in the unemployment rate do seem associated with rising inflation. But the main story about inflation in the last 20-25 years is that it doesn't seem to react to much: it doesn't get a lot higher or a lot lower as the unemployment rate rises and falls. Solow goes so far as to claim: "[T]here is no well-defined natural rate of unemployment, either statistically or conceptually."

For a more positive gloss on the legacy of Friedman's argument and its applications to modern macroeconomics, I commend your attention to the JEP articles listed above. Here, Solow ends his note with the kind of elegant rhetorical flourish that he brings to so much of his writing:
"A few major failures like those I have registered in this note may not be enough for a considered rejection of Friedman's doctrine and its various successors. But they are certainly enough to justify intense skepticism, especially among economists, for whom skepticism should be the default mental setting anyway. So why did those thousand ships sail for so long, why did those ideas float for so long, without much resistance? I don't have a settled answer.
One can speculate. Maybe a patchwork of ideas like eclectic American Keynesianism, held together partly by duct tape, is always at a disadvantage compared with a monolithic doctrine that has an answer for everything, and the same answer for everything. Maybe that same monolithic doctrine reinforced and was reinforced by the general shift of political and social preferences to the right that was taking place at about the same time. Maybe this bit of intellectual history was mainly an accidental concatenation of events, personalities, and dispositions. And maybe this is the sort of question that is better discussed while toasting marshmallows around a dying campfire."
Here's a Table of Contents for the relevant papers in the October 2018 issue of the Review of Keynesian Economics:
Along with the JEP papers mentioned earlier, those interested in the subject may also want to consult the paper by Edward Nelson, "Seven Fallacies Concerning Milton Friedman's `The Role of Monetary Policy,'" Finance and Economics Discussion Series 2018-013, Board of Governors of the Federal Reserve System.

Thursday, November 15, 2018

Superstar Firms and Cities

Imagine two people who have seemingly equal skills and background. They go to work for two different companies. However, one "superstar" company grows much faster, so that wages and opportunities in that company also grow much faster. Or they go to work in two different cities. One "superstar" urban economy grows much faster, so that wages and opportunities in that city also grow faster.

Of course, such patterns of unequal growth have always existed to some extent. When evaluating a potential employer or location choice, people have always taken into account the potential for joining a superstar performer. The interesting question is whether the gap between superstar and ordinary firms, or between superstar and ordinary cities, has been growing or changing over time. For example, some argue that the rise of superstar firms, and the resulting rise in between-firm differences in performance and labor compensation, can explain most of the rise in US income inequality.

The McKinsey Global Institute has a nice report summarizing past evidence and offering new evidence of its own in Superstars: The Dynamics of Firms, Sectors, and Cities Leading the Global Economy (October 2018). It's written by a team led by James Manyika, Sree Ramaswamy, Jacques Bughin, Jonathan Woetzel, Michael Birshan, and Zubin Nagpal. Short summary: superstar firms and cities do seem to be widening their economic leadership gap, while the evidence that certain sectors are superstars seems weaker.

For superstar firms, the report notes:
"For firms, we analyze nearly 6,000 of the world’s largest public and private firms, each with annual revenues greater than $1 billion, that together make up 65 percent of global corporate pretax earnings. In this group, economic profit is distributed along a power curve, with the top 10 percent of firms capturing 80 percent of economic profit among companies with annual revenues greater than $1 billion. We label companies in this top 10 percent as superstar firms. The middle 80 percent of firms record near-zero economic profit in aggregate, while the bottom 10 percent destroys as much value as the top 10 percent creates. The top 1 percent by economic profit, the highest economic-value-creating firms in our sample, account for 36 percent of all economic profit for companies with annual revenues greater than $1 billion. Over the past 20 years, the gap has widened between superstar firms and median firms, and also between the bottom 10 percent and median firms. ... The growth of economic profit at the top end of the distribution is thus mirrored at the bottom end by growing and increasingly persistent economic losses ..."
Here's an illustrative figure, showing firms by decile, and comparing the time windows from 1995-97 and from 2014-2016.

Some other patterns are that the superstar firms "come from all sectors and regions and include global banks and manufacturing companies, long-standing Western consumer brands, and fast-growing US and Chinese tech firms. The sector and geographic diversity of firms in the top 10 percent and the top 1 percent by economic profit is greater today than 20 years ago." Along with being more profitable, superstar firms spend more on R&D and on intangible investments like intellectual property, software, and brand value. In addition, the rate of movement (or the "churn") in and out of the deciles doesn't seem to have changed much over time.
"In the top 1 percent by economic profit, only one out of every six of today’s superstar firms has been there for the past three decades. They are mostly American and European consumer goods and technology firms that have survived, often through reinvention and adaptation to a changing environment and sustained investment, and they own some of the world’s most familiar brands.26 They include Altria, Coca-Cola, Intel, Johnson & Johnson,  Merck, Microsoft, Nestle, and Novartis. They are joined by several other firms that have stayed in the top ranks for two-thirds or more of the past 30 years and that come from a broader set of regions and sectors. These include firms such as Samsung, Toyota, and Walmart, and they make up another one-sixth of the top 1 percent."
The analysis also identifies 50 superstar cities, with a map below.
"Fifty cities are superstars by our definition ... The 50 cities account for 8 percent of global population, 21 percent of world GDP, 37 percent of urban high-income households, and 45 percent of headquarters of firms with more than $1 billion in annual revenue. The average GDP per capita in these cities is 45 percent higher than that of peers in the same region and income group, and the gap has grown over the past decade. ... The growth of superstar cities is fueled by gains in labor income and wealth from real estate and investor income, yet many show higher rates of income inequality within the cities than peers. ... Of the 50 superstar cities, 31 are ranked among the most globally integrated cities, 27 among the world’s 50 most innovative cities, 26 among the world’s top 50 financial centers, and 23 among the world’s 50 “digitally smartest” cities. Twenty-two are national and regional capitals, while 22 are among the world’s largest container ports." 

For individuals thinking about potential employers, and for individuals and firms thinking about location decisions, it's useful to consider the potential gains of being connected to a superstar firm or city.

For a national economy, a different question arises. What is the "special sauce" that superstar companies and cities are using to achieve their outsized and growing levels of productivity and income? Companies and cities will always differ, of course. But the rising advantage of superstars raises the question of how at least some of those practices and policies might be more broadly disseminated across the rest of the economy.

Wednesday, November 14, 2018

What Amazon Said, What Amazon Meant

In September 2017, Amazon announced that it was planning to set up a second headquarters. It published a "Request for Proposal" that began:
Amazon invites you to submit a response to this Request for Proposal (“RFP”) in conjunction with and on behalf of your metropolitan statistical area (MSA), state/province, county, city and the relevant localities therein. Amazon is performing a competitive site selection process and is considering metro regions in North America for its second corporate headquarters.
The RFP suggested that within broad parameters, the search was wide open. It is full of comments like "All options are under consideration" and "We encourage testimonials from other large companies" and "Tell us what is unique about your community." The quick overview of its requirements looked like this:
In choosing the location for HQ2, Amazon has a preference for:
  • Metropolitan areas with more than one million people
  • A stable and business-friendly environment
  • Urban or suburban locations with the potential to attract and retain strong technical talent
  • Communities that think big and creatively when considering locations and real estate options
HQ2 could be, but does not have to be:
  • An urban or downtown campus
  • A similar layout to Amazon’s Seattle campus
  • A development-prepped site. We want to encourage states/provinces and communities to think creatively for viable real estate options, while not negatively affecting our preferred timeline
Several hundred cities heard what Amazon said, and sent in proposals. Many of those were no-hopers, of course. Still, now Amazon has announced its choices: New York (technically Long Island City) and DC (technically Arlington, Virginia). Wow, some really radical open-minded out-of-the-box thinking there! It seems as if a more accurate list of criteria for Amazon's Request for Proposal might have had three elements.

1) Should be an easy commute from one of the homes of Jeff Bezos, CEO of Amazon. Scott Galloway offers a useful info-graphic here:

2) Should either be near the nation's major center of government or near the nation's major center of the financial industry. Or maybe we'll just do both.

3) Should be one of the top two cities for total number of people already employed in computer and mathematical jobs. Alan Berube at Brookings offers this useful table.
Table 1: Tech workers by metro area

There's nothing wrong with these actual criteria. Having a corporate headquarters near the residence of the CEO, especially when the CEO is as closely identified with the company as Bezos is with Amazon, is a long-standing practice. There are obvious advantages to being in New York and DC.

But I do wonder if the folks at Amazon have any clue about how annoyingly cozy this looks to the several hundred other cities that took the time to put in bids. Sure, places like Columbus, Ohio, or Indianapolis, Indiana, can get a pat on the head for being on the "short list." The day before the decision was announced, the mayor of Jersey City tweeted: "Of course #jerseycity would benefit if it’s in NY but I still feel this entire Amazon process was a big joke just to end up exactly where everyone guessed at the start. No real social impact on a city, no real transformation, no inspiring young residents that never had this" The next time Amazon starts talking with cities about locating a facility anywhere, this process will be remembered.

In the RFP, Amazon talked a good game about the importance of a "local government structure and elected officials eager and willing to work with the company." It talked about a fast permit process for building, and about the importance of smoothly functioning transportation infrastructure. If Amazon becomes mired in the local politics, regulatory disputes, and traffic jams of New York and DC, it shouldn't expect much sympathy from hundreds of other places across the country.

Tuesday, November 13, 2018

Some Economics of World War I

What we now call World War I was known at the time, and for several decades afterward, simply as the "Great War." It wasn't until the arrival of World War II that World War I was re-christened. The Great War ended 100 years ago on November 11, 1918.

Stephen Broadberry and Mark Harrison have edited a collection of 20 essays called The Economics of the Great War: A Centennial Perspective (November 2018, free registration needed), published by CEPR Press. The useful approach of these books is to focus on short and readable essays, often 6-10 pages in length, in which the authors bring out some of the key points from their previous or ongoing research. Thus, the books offer a gentle introduction to a broader swath of the literature. I'll list the full table of contents below. Here, I'll offer a few tidbits.

As the editors point out, many modern discussions of the Great War focus on changes that happened in the aftermath of the war, many of which are hot topics again a century later. Examples of such topics with echoes for the present day include:

The Great War marked the end of a period that had shown a strong rise in economic inequality. Walter Scheidel writes: "In the years leading up to World War I, economic inequality in many industrial nations was higher than it had ever been before. In the early 1910s, the highest-earning 1% of adults in France, Germany, Japan, the Netherlands, the UK, and the US received approximately one-fifth of all personal income. Income inequality then was much greater than it is now, except in the US, where the level of the 1910s has returned. ... Personal wealth was even more concentrated. The UK, where the richest 1% owned almost 70% of all wealth, led the pack. Today’s figure is closer to 20%. The corresponding French, Dutch, and Swedish shares of close to 60% were the highest ever recorded for these countries and between two and three times as large as they are now. Uncharacteristically, the US was lagging behind, even though its wealth concentration was also – if only moderately – greater than it is today ... "

The Great War brought the first great deglobalization of the world economy. David Jacks writes: "I document the evolution of world trade up to the precipice of World War I and the implosion of world trade in the initial years of the war, along with important changes in the composition of trade. Chief among these was the dramatic erosion in the share of Europe in world exports in general, and in the share of Germany in European exports in particular. Turning an eye to more long-run developments, World War I emerges as a clear inflection point in the evolution of the global economy. The diplomatic misunderstandings, economic headwinds, and political changes introduced in its wake can be discerned in the data as late as the 1970s."

The Great War showed how to address a Depression, but the lesson wasn't learned. Hugh Rockoff explains: "Policymakers might have drawn the conclusion from World War I that deficit spending combined with an expansionary monetary policy had propelled the economy toward full employment – a lesson that would have been enormously valuable in the Depression. ... Although lessons about the effectiveness of monetary and fiscal policy could have been drawn from the war, economic theory was not ready. ... The methods used for dealing with shortages during the war, whatever their success in wartime, were simply inappropriate for dealing with the Depression. Although the Roosevelt administration wrestled mightily with the Depression, and produced important pieces of social legislation such as Social Security and the minimum wage, many of its programmes were aimed simply at reallocating resources from one interest group to another, rather than creating the additional demand that would have done the most to ameliorate the Depression."

The Great War marked the end of unrestricted mass migration. Drew Keeling points out: "The war declarations of August 1914 spelled far-reaching alterations to the fundamental character of modern long-distance international mass migration. For most of the preceding century, in the majority of big economies, international human relocation had been largely peaceful, voluntary, and motivated by market incentives. Since then, politically determined quotas and legal restrictions, and flight from war, oppression or similarly fearsome dangers and disasters, have been more salient ..."

The Great War brought an international refugee crisis. Peter Gatrell writes: "Amidst all the current talk of an international ‘refugee crisis’, it is worth pointing out that World War I yielded a harvest of mass population displacement that caught contemporaries by surprise and is only now attracting scholarly attention. It uprooted upwards of 14 million civilians whose suffering generated widespread sympathy and encouraged often impressive programmes of humanitarian aid as well as self-help. In Western Europe wartime displacement did not leave a lasting legacy, because refugees were able to return to their homes. But in Eastern Europe and the Balkans, the situation was complicated by revolution, civil war, the collapse of three continental empires, and a series of population exchanges."

The volume also looks in more detail at the history of the war itself. As an example, Mark Harrison contributes an opening essay, "Four Myths about the Great War." Here's a summary, with citations omitted for readability. 

Myth 1: "How the war began: An inadvertent conflict?"
"There was no inadvertent conflict. The decisions that began the Great War show: • agency, • calculation, • foresight, and • backward induction. Agency is shown by the fact that, in each country, the decision was made by a handful of people. These governing circles included waverers, but at the critical moment the advocates of war, civilian as well as military, were able to dominate. Agency was not weakened by alliance commitments or mobilisation timetables. ... What ruled the leaders’ calculation in every country was the idea of the national interest ... While the ignorant many hoped for a short war, the informed few rationally feared a longer, wider conflict. They planned for this, acknowledging that final victory was far from certain. ... The European powers understood deterrence. No one started a war in 1909 or 1912 because at that time they were deterred. War came in 1914 because in that moment deterrence failed."
Myth 2: "How the war was won: Needless slaughter?"
"Another myth characterises fighting in the Great War as a needless wasteful of life. In fact, there was no other way to defeat the enemy. Attrition was not a result of trench warfare. Attrition became a calculated strategy on both sides. From the Allied standpoint, the rationality of attrition is not immediately clear. The French and British generally lost troops at a faster rate than the Germans. Based on that alone, the Allies could have expected to lose the war. The forgotten margin that explains Allied victory was economic. This was a war of firepower, as well as manpower. ...  When America joined the war and Russia left it, the Allied advantage declined in population but rose in production. On the basis of their advantage, ... the Allies produced far more munitions, including the offensive weaponry that finally broke the stalemate on the Western front."
Myth 3: "How the war was lost: The food weapon?"
"Hunger was decisive in the collapse of the German home front in 1918. Was Germany starved out of the war by Allied use of the food weapon? In Germany, this myth became prevalent and assumed historic significance in Hitler’s words ... It is true that Germany imported 20‐25% of calories for human consumption before the war. Wartime imports were limited by an Allied blockade at sea and (via pressure on neutrals) on land. At the same time, German civilians suffered greatly – hunger-related mortality is estimated at around 750,000. But decisions made in Berlin, not London, did the main damage to German food supplies. ... [T]the effects of the loss of trade were outweighed by Germany’s war mobilisation. Mobilisation policies damaged food production in several ways. On the side of resources, mobilisation diverted young men, horses, and chemical fertilisers from agricultural use to the front line. Farmers’ incentives to sell food were weakened when German industry was converted to war production and ceased to supply the countryside with manufactures. Government initiatives to hold down food prices for the consumer did further damage. Because trade supplied at most one quarter of German calories, and German farmers the other three quarters, it is implausible to see the loss of trade as the primary factor. Germany’s own war effort probably did more to undermine food supplies."
Myth 4: "How the peace was made: Folly at Versailles?"
"Since Keynes (1920), many serious consequences have been ascribed to the treaty of Versailles. ...  Germany actually paid less than one fifth of the 50 billion gold marks that were due. From 1924, there was no net drain from the Germany economy because repayments were covered by American loans. Eventually, Hitler defaulted on both loans and reparations. German governments could have covered most of it [the reparations] by accepting the treaty limits on military spending. Instead, they evaded it by means of a ‘war of attrition’ against foreign creditors. The Allied pursuit of reparations was unwise and unnecessarily complicated Europe’s postwar readjustment, but it is wrong to conclude that it radicalised German politics. The political extremism arising from the treaty was short-lived. In successive elections from 1920 through 1928, a growing majority of German votes went to moderate parties that supported constitutional government. In fact, Weimar democracy’s bad name is undeserved. It was the Great Depression that reawakened German nationalism and put Hitler in power."

Here's the full Table of Contents

Monday, November 12, 2018

Global Population Pyramids

The Lancet has just published a new set of papers from the Global Burden of Disease Study. As it notes: "The Global Burden of Disease Study (GBD) is the most comprehensive worldwide observational epidemiological study to date. It describes mortality and morbidity from major diseases, injuries and risk factors to health at global, national and regional levels. Examining trends from 1990 to the present and making comparisons across populations enables understanding of the changing health challenges facing people across the world in the 21st century."

Interested readers will find lots to chew on in these papers. Here, I'll stick to a figure showing "global population pyramids" from the article "Population and fertility by age and sex for 195 countries and territories, 1950–2017: a systematic analysis for the Global Burden of Disease Study 2017," authored by the GBD 2017 Population and Fertility Collaborators (Lancet, published November 10, 2018; 392: 10159, pp. 1995–2051).

The four panels show four years: 1950, 1975, 2000, and 2017. The red bars show the size of the female population at different age groups, from younger ages at the bottom to older ages at the top; the yellow bars do the same for men. Horizontal lines show the mean and median age for men and women in each year.

A few thoughts:

1) These population pyramids are especially useful for looking at the evolution of the age distribution. "In 1950, the global mean age of a person was 26·6 years, decreasing to 26·0 years in 1975, then increasing to 29·0 years in 2000 and 32·1 years in 2017."

2) The bulge in the working age population in 2000 and 2017, as opposed to the earlier years, is apparent. "Demographic change has economic consequences, and the proportion of the population that is of working age (15–64 years) decreased from 59·9% in 1950 to 57·1% in 1975, then increased to 62·9% in 2000 and 65·3% in 2017."

3) Over time, the population pyramids have become relatively broader at the top; that is, they do not taper as quickly as one moves to older ages. That pattern tells you that the elderly are a rising share of the population. It also suggests that as today's younger generations age, and their numbers rise up through the global population pyramid, the share of the elderly in the world population will rise substantially.

4) The increasing area of the pyramids shows the rise in global population since 1950. Less clear to the naked eye is that the rate of growth in the world population has shifted from being exponential to being linear.
"From 1950 to 1980, the global population increased exponentially at an annualised rate of 1·9% ... . From 1981 to 2017, however, the pace of the global population increase has been largely linear, increasing by  83·6 million people per year. ... Growth of the global population increased in the 1950s and reached 2·0% per year in 1964, then slowly decreased to 1·1% in 2017." 
5) Although geographic breakdowns aren't shown in this figure, the regional patterns of population growth have also differed substantially. 
"In 1950, the high-income, central Europe, eastern Europe, and central Asia GBD super-regions accounted for 35·2% of the global population but, in 2017, the populations of these countries accounted for 19·5% of the global population. Large increases occurred in the proportion of the world’s population living in south Asia, sub-Saharan Africa, Latin America and the Caribbean, and north Africa and the Middle East. ...  Growth of the population in north Africa and the Middle East increased until the 1970s, and it has remained quite high, at 1·7% in 2017. Population growth rates in sub-Saharan Africa increased from 1950 to 1985, decreased during 1985–1993, increased again until 1997, and then plateaued; at 2·7% in 2017, population growth rates were almost the highest rates ever recorded in this region. The most substantial changes to population growth rates were in the southeast Asia, east Asia, and Oceania super-region, where the population growth rate decreased from 2·5% in 1963 to 0·7% in 2017. ... In central Europe, eastern Europe, and central Asia, the population growth rate dropped rapidly after 1987 and was negative from 1993 to 2008. Growth rates in the high-income super-region have changed the least, starting at 1·2% in 1950 and reaching 0·4% in 2017."

Thursday, November 8, 2018

The Tax Cuts and Jobs Act, One Year Later

The Tax Cuts and Jobs Act was signed into law by President Trump a little less than a year ago, on December 22, 2017. What are the likely benefits and costs associated with the legislation? The Fall 2018 issue of the Journal of Economic Perspectives (where I work as Managing Editor) includes a two-paper symposium on the subject. Joel Slemrod provides an overview of the main elements of the legislation and its effects in "Is This Tax Reform, or Just Confusion?" (32:4, pp. 73-96). Alan J. Auerbach focuses on one primary aspect of the law, its shifts in the US corporate income tax, in "Measuring the Effects of Corporate Tax Cuts" (32:4, pp. 97-120).

Here's a flavor of Slemrod's argument:
"The Tax Cuts and Jobs Act is not tax reform, at least not in the traditional sense of broadening the tax base and using the revenue so obtained to lower the rates applied to the new base. Nor, based on its unofficial title, did it aspire to this approach as a main objective. It does, though, contain several base-broadening features long favored by tax reform advocates. 
"or is the Tax Cuts and Jobs Act just confusion. There are coherent arguments buttressing the centerpiece cut in the corporation tax rate. To the extent that the new legislation reduces the cost of capital (which is not obvious), business investment will be higher than otherwise. 
"Its serious downsides are the contribution to deficits and to inequality. The former is less of a concern to the extent that the Tax Cuts and Jobs Act turns out to stimulate growth; the latter is less of an issue the more its centerpiece cuts in business taxation will be shifted to the benefit of workers, especially low-income workers. In both cases, the Tax Cuts and Jobs Act represents a huge gamble on the magnitude of these effects, about which the evidence is not at all clear. My own view is that the stimulus to growth will be modest, far short of many supporters’ claims, and so the Tax Cuts and Jobs Act will increase federal deficits by nearly $2 trillion over the next decade, a nontrivial stride in the wrong direction that promises to shift the tax burden to future generations. How it will affect the within-generation distribution of welfare is the most controversial question of all. Although according to conventional wisdom, the Tax Cuts and Jobs Act delivers the bulk of the tax cuts to the richest Americans, whose relative well-being has been rising continuously in recent decades, other plausible models of the economy, supported by some new empirical evidence, raise the possibility that the gains will be more widely shared. This is the most important question about which we know too little."
Auerbach digs into the tricky issues involved in thinking about who really ends up paying corporate taxes, how they affect investment, and how the answers to these questions may change in a world of multinational corporations operating across borders.

For example, until a few years ago the traditional view had been that corporate taxes reduced the return on investment. Thus, even if the corporate income tax was formally collected from companies, the Congressional Budget Office and others assumed that it was actually paid by those who receive income from capital investment. However, in an economy where corporate investment flows easily across international borders, this assumption may not hold up. A higher domestic tax on corporations could chase capital to other countries and reduce investment, which in turn would reduce productivity and wages of domestic workers over time. Auerbach reports that in "the five decades between 1966 and 2016, the share of the income of US resident corporations that was accounted for by foreign operations rose from 6.3 to 31.1 percent." Since 2012, the CBO has therefore assumed that 75% of the US corporate tax is borne through lower returns on capital income, and the other 25% through lower wages for workers.
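As a concrete illustration of what an incidence assumption means in practice, the sketch below allocates a hypothetical amount of corporate tax revenue under the CBO's 75/25 split and under the older view that capital bears the entire burden. The $300 billion revenue figure and the function itself are my own illustrative inventions, not from Auerbach's paper.

```python
def allocate_corporate_tax(revenue, capital_share=0.75, labor_share=0.25):
    """Split a corporate tax burden between owners of capital and workers,
    given an assumed incidence (the shares must sum to 1)."""
    assert abs(capital_share + labor_share - 1.0) < 1e-9
    return {"capital": revenue * capital_share, "labor": revenue * labor_share}

# Hypothetical $300 billion of annual corporate tax revenue:
cbo_view = allocate_corporate_tax(300e9)                    # CBO post-2012: 75/25
traditional_view = allocate_corporate_tax(300e9, 1.0, 0.0)  # older view: all on capital
```

The point of the exercise is that the same dollar of corporate tax revenue gets attributed to very different households depending on the assumed shares, which is why the incidence question drives the distributional debate.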

But these estimates about how corporate taxes affect domestic investment and thus ultimately productivity and wages are rough, and there is room for substantial disagreement. As Auerbach writes:
"One may trace the controversy over distributional effects of the 2017 tax cut (or other potential tax corporate cuts) to differences over the effectiveness of such tax cuts at promoting capital deepening, differences over the extent to which any such capital deepening would generate increases in wages, and differences over whether a corporate tax cut might increase wages through other significant channels. ...
In summary, the rise of the multinational corporation, with cross-border ownership and operations, and the growing importance of intellectual property in production have broadened the set of relevant behavioral responses to corporate taxation and led governments to participate in a multidimensional tax competition game. In this game, each country chooses not only its statutory corporate tax rate, but also asset-specific provisions applying to domestic investment and rules applying to cross-border investments. Changes in any one instrument may affect firms on several decision margins, and policy changes might influence US investment through several direct and indirect channels. While one may expect a reduction in the US corporate tax rate to encourage US-based investment and production, the effects of other policy changes may be more complex."
And of course, the effects of US corporate tax changes will also be affected by how other countries respond to changes in US corporate taxes, and by what further changes are made to tax law in the future. All that said, there does seem to be some rough commonality in the findings of a number of studies that the 2017 legislation will increase incentives for domestic US investment, and in that way lead to additional growth over a 10-year time horizon. Here's Auerbach:
"The Joint Committee on Taxation (2017b) “projects an increase in investment in the United States, both as a result of the proposals directly affecting taxation of foreign source income of US multinational corporations, and from the reduction in the after-tax cost of capital in the United States.” The average increase in the capital stock over the 10-year budget window is 0.9 percent and the average increase in GDP is 0.7 percent, although the increases are smaller at the end of the period because of the changes in provisions noted above. Congressional Budget Office (2018) projects an average increase in GDP of 0.7 percent over the 10-year budget period. A relatively similar private-sector assessment by Macroeconomic Advisers (2018) finds that potential GDP rises by 0.6 percent by the end of the budget period, “mainly by encouraging an expansion of the domestic capital stock.” The Penn Wharton Budget Model (2017) estimates a 10-year growth in GDP of between 0.6 and 1.1 percent, depending on assumptions about the composition of returns to capital. Barro and Furman (forthcoming, Table 11) estimate that GDP would be higher as a result of an increased capital-labor ratio, by 0.4 percent after 10 years under the law as written, and 1.2 percent if initial provisions were made permanent, with the effects being smaller if deficitinduced crowding out is taken into account."