
Friday, October 30, 2015

Update on the National School Lunch Program

"On a typical schoolday in October 2014, over 30 million U.S. schoolchildren and teens took their trays through the lunch line. Seventy-two percent of these students received their meals for free or paid a reduced price, and the remaining 28 percent purchased the full-price lunch." However, the number of children receiving a free lunch is rising, while the number purchasing a school lunch is falling. Katherine Ralston and Constance Newman take "A Look at What’s Driving Lower Purchases of School Lunches," in Amber Waves, published by the US Department of Agriculture (October 5, 2015).

Here are some facts to organize the discussion. First, here's a figure showing the total number of students getting a school lunch over time. The number receiving free lunches has risen substantially; the number paying for lunch has dropped.



Another angle on the same data is to look at the proportion of students in each category rather than at total numbers. About 60% of all students are provided a lunch at school. Among students eligible for a free lunch, about 90% actually get one. The share of students who would have to pay for a school lunch, and who actually buy one, has fallen in the last few years.



The National School Lunch program cost $11.6 billion in 2012, according to a USDA fact sheet. Why is it leading to fewer paid lunches? Perhaps the obvious explanation for fewer paid lunches is the 2007-2009 recession and its aftermath. It seems plausible that a number of families who weren't eligible for free lunches were concerned about saving some money, and started sending their children to school with a home-packed lunch instead. But this answer seems incomplete, because the program has been tweaked in a number of ways in recent years.

For example, Ralston and Newman explain:
In 2010, Congress passed the Healthy, Hunger-Free Kids Act. The Act addressed concerns about the nutritional quality of children’s diets, school meals, and competitive foods available in schools (those not part of the school meal, such as a la carte items or foods and drinks sold in vending machines). ... In implementing the Act, USDA promulgated rules requiring lunches to include minimum servings per week of specific categories of vegetables, including dark green and red/orange vegetables, as well as changes to increase whole grains while limiting calories and sodium. ... These rules took effect starting with school year 2012-13. Some school lunch standards were gradually phased in. ...  The updated standards set a ceiling on total calories per average lunch in addition to existing minimum calorie requirements, with upper restrictions ranging from 650 kilocalories (kcal) for grades K-5 to 850 kcal for high schools. Total sodium levels for average lunches offered were also limited for the first time to 1,230 milligrams (mg) (grades K-5), 1,360 mg (grades 6-8), and 1,420 mg (grades 9-12) per average lunch by July 1, 2014, with intermediate and final targets scheduled for school years 2017-18 and 2022-23. 
It's easy to find surveys of school administrators who cheerily praise these new rules. As someone with three children in public schools, my anecdotal evidence is that not all children are pleased with the changes. Also, I think the kinds of concerns over what children eat for lunch that motivated the passage of the 2010 act are also leading some families to believe that a home-packed lunch will feed their children better. In addition to the menu changes, the law has also led many schools to raise the price of school lunches. Again, Ralston and Newman explain:

The Paid Lunch Equity provision requires districts to work towards making the revenue from paid lunches to equal the difference between the reimbursement rates for free lunches and paid lunches. For example, in school year 2014-15, the reimbursement rate for free lunches, including an additional $0.06 for compliance with updated meal standards, was $3.04 and the reimbursement for paid lunches, together with the additional 6 cents, was $0.34. The difference of $2.70 would represent the “equity” price. A district charging $2.00 for a paid lunch would be required to obtain an additional $0.70 per meal, on average, by gradually raising prices or adding non-Federal funds to make up the difference over time. Until the gap is closed, districts must increase average revenue per lunch, through prices or other non-Federal sources, by 2 percent plus the rate of inflation, with minimum increases capped at 10 cents in a given year, with exemptions under certain conditions.
Higher prices for paid lunches run the risk of reducing participation. A nationally representative study from school year 2005-06 found that a 10-percent increase in lunch price was associated with a decline of 1.5 percentage points in the participation rate of paid lunches, after controlling for other characteristics of the meal and the school foodservice operation. Another nationally representative survey conducted in 2012 found that lunch prices rose 4.2 percent in elementary schools and 3.3 percent in middle and high schools, on average, between school years 2010-11 and 2011-12. Applying the earlier results on effects of differences in lunch prices on paid-lunch participation rates, these price increases would be expected to lead to declines in participation rates of 0.6 percentage points for elementary school and 0.5 percentage points for middle and high school. These estimates suggest that price increases related to the Paid Lunch Equity provision could have contributed modestly to the decline in participation rates for paid lunches.
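As a back-of-the-envelope check on that arithmetic, here is a minimal sketch (my own illustration, not the authors' model) that scales the estimated sensitivity of participation to the reported price increases.

```python
# A minimal sketch, not the authors' model: apply the estimated 1.5 percentage-point
# drop in paid-lunch participation per 10 percent price increase to the reported
# price increases of 4.2 and 3.3 percent.
def participation_decline(price_increase_pct, drop_per_10_pct=1.5):
    """Expected decline in the paid-lunch participation rate, in percentage points."""
    return drop_per_10_pct * price_increase_pct / 10.0

print(participation_decline(4.2))  # ~0.6 points for elementary schools
print(participation_decline(3.3))  # ~0.5 points for middle and high schools
```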
Other changes to the school lunch program are just beginning to be phased in. In the current school year, the "Smart Snacks in School" rules kicked in, requiring that "competitive" foods sold in schools along with school lunches "must meet limits on calories, total and saturated fat, trans-fat, sugar, and sodium and contribute to servings of healthy food groups."

After a few years of pilot programs, the eligibility rules for free school lunches are being eased. "Overall NSLP participation may also be helped by the Community Eligibility Provision (CEP), a new option that allows schools in low-income areas to offer school meals at no charge to all students. Under CEP, a district may offer all meals at no charge in any school where 40 percent or more of students are certified for free meals without an application ... An evaluation of seven early adopting States found that CEP increased student participation in NSLP by 5 percent relative to comparable schools that did not participate in CEP. The increase in overall participation associated with CEP may result not only from the expansion of free lunches, but also from reduced stigma and faster moving lunch lines due to the elimination of payments."

Like a lot of middle class families, we use the school lunch program as a convenience. Our children take home-packed lunches most days, but some days the lunches never quite get made. My sense is that the nutritional value of the lunches our children take to school is considerably better than what they eat when they buy a school lunch (remembering that what they actually eat is not the same as what the school tries to serve them). But for a lot of low-income families, the school lunch program is a nutritional lifeline. The poverty rate for children in the United States (21% in 2014) is considerably higher than for other age groups.

I'm sympathetic to the notion that the food served in schools should be healthier. But as a parent, I've learned that serving healthier food to children is comparatively easy. Having children eat that food is harder. And having children learn healthy habits related to food and diet can be harder, still.


Wednesday, October 28, 2015

The Trade Facilitation Agenda

The most common way of talking about "barriers to trade" between countries has often involved measuring taxes on imports ("tariffs") or quantitative limits on imports ("quotas"). But import tariffs and quotas have been reduced over time, and the focus of many new trade agreements--along with the World Trade Organization--is "trade facilitation," which means taking steps to reduce the costs of international trade. Some of these costs involve transportation and communications infrastructure, but a number of the changes also involve administrative practices like the paperwork and time lags needed to get through customs.

Back in December 2013, the trade negotiators at the  World Trade Organization signed off on the Trade Facilitation Agreement, the first multilateral trade agreement concluded since the establishment of the World Trade Organization in 1995. The agreement legally comes into force if or when two-thirds of the WTO member countries formally accept it. So far, 18 have done so, so there's some distance to go. In its World Trade Report 2015, subtitled "Speeding up trade: benefits and challenges of implementing the WTO Trade Facilitation Agreement," the WTO lays out the potential gains and challenges.  The WTO writes:
While trade agreements in the past were about “negative” integration – countries lowering tariff and non-tariff barriers – the WTO Trade Facilitation Agreement (TFA) is about positive integration – countries working together to simplify processes, share information, and cooperate on regulatory and policy goals. ... The TFA represents a landmark achievement for the WTO, with the potential to increase world trade by up to US$ 1 trillion per annum.
How  big are the costs of trading internationally? The WTO writes:
Based on the available evidence, trade costs remain high. Based on the Arvis et al. (2013) database, trade costs in developing countries in 2010 were equivalent to applying a 219 per cent ad valorem tariff on international trade. This implies that for each dollar it costs to manufacture a product, another US$ 2.19 will be added in the form of trade costs. Even in high-income countries, trade costs are high, as the same product would face an additional US$ 1.34 in cost.
Here's a figure showing how these trade costs vary across types of countries. The report has discussion of variation by sector of industry as well.



There is already widespread recognition that these costs are hindering trade, and so the  trade facilitation agenda is already spreading rapidly through regional and bilateral trade agreements. This figure shows the rise in the number of regional trading agreements, and also emphasizes that almost all of those agreements have trade facilitation components. Indeed, a defining characteristic of international trade in the 21st century is that it involves global value chains, in which the chain of production is divided up across multiple countries (for more detail, see here, here, or here). In effect, many regional trading agreements are seeking to facilitate these global value chains by reducing the costs of trade.


For a taste of what specifically is meant by "trade facilitation" in these agreements, here's a list of the trade facilitation provisions that are most common in regional trade agreements. Of course, the WTO report has details about what each of these categories means.


Shipping goods and services across global distances and multiple national borders is never going to be quite as simple as dealing with a nearby provider who is operating within the same borders. How much can the trade facilitation agenda reduce the kinds of costs given above? Here's the WTO summary:
Trade costs are high, particularly in developing countries. Full implementation of the Trade Facilitation Agreement (TFA) will reduce global trade costs by an average of 14.3 per cent. African countries and least-developed countries (LDCs) are expected to see the biggest average reduction in trade costs. ... Computable general equilibrium (CGE) simulations predict export gains from the TFA of between US$ 750 billion and well over US$ 1 trillion dollars per annum, depending on the implementation time-frame and coverage. Over the 2015-30 horizon, implementation of the TFA will add around 2.7 per cent per year to world export growth and more than half a per cent per year to world GDP growth. ... Gravity model estimates suggest that the trade gains from the TFA could be even larger, with increases in global exports of between US$ 1.1 trillion and US$ 3.6 trillion depending on the extent to which the provisions of the TFA are implemented.
There are some other benefits to the trade facilitation agenda, as well. For example, reforming the legal and regulatory processes around customs, and reducing delays, means that there is less reason to pay bribes to facilitate the process--and thus reduces corruption. The WTO writes:
Trade-related corruption is positively affected by the time spent to clear customs procedures. Shepherd (2010) shows that a 10 per cent increase in trade time leads to a 14.5 per cent fall in bilateral trade in a low-corruption country, and to a 15.3 per cent fall in a country with high levels of corruption. By reducing the time required to move goods across borders, trade facilitation is therefore a useful instrument for anticorruption efforts at the border.
More broadly, steps to facilitate trade across borders by simplifying paperwork, improving infrastructure, and reducing delays will often be quite useful for domestic production chains, not just for international trade.  Thus, lots of organizations are pushing the trade facilitation agenda, not just the WTO. As one example, the report notes:
The World Bank is also active in the trade facilitation area. In fiscal year 2013, for example, the World Bank spent approximately US$ 5.8 billion on trade facilitation projects, including customs and border management and streamlining documentary requirements, as well as trade infrastructure investment, port efficiency, transport security, logistics and transport services, regional trade facilitation and trade corridors or transit and multimodal transport. The Bank is also involved in analytical work such as the Trade and Transport Facilitation Assessment which “is a practical tool to identify the obstacles to the fluidity of trade supply chains.”


Monday, October 26, 2015

How Tight is the US Labor Market?

The US unemployment rate was 5.1% in August and September. This rate is low by the standards of recent decades, but concerns remain over the extent to which it fails to reflect those who were long-term unemployed and have dropped out of looking for a job--and thus are no longer officially counted in the ranks of the unemployed.

Alan B. Krueger tackles this and related issues in "How Tight Is the Labor Market?", which was delivered as the 2015 Martin Feldstein Lecture at the National Bureau of Economic Research on July 22, 2015. An edited version of the talk is here; if you would like to watch the lecture and see the PowerPoint slides, you can do so here. (Full disclosure: Alan was Editor of the Journal of Economic Perspectives, and thus was my boss, from 1996-2002.) Short answer: The long-term unemployed dropping out of the labor market do contribute modestly to a lower labor force participation rate and the lower unemployment rate. However, if one focuses on short-term unemployment levels, the labor market is tight enough that it is leading to higher wages in much the same way as in previous decades.

Here are a few figures to set the stage. Here, I'll use versions generated by the ever-useful FRED website run by the Federal Reserve Bank of St. Louis, which has the advantage of updating the figures a bit from the ones provided in Krueger's talk last summer.  For starters, the US unemployment rate has now dropped dramatically, back to levels that are relatively low in the context of recent decades. 


However, if one focuses on the share of the unemployed who are long-term unemployed, defined as those without a job and still looking for one after at least 27 weeks, the picture isn't as rosy. Although the share of the unemployed who are long-term unemployed has declined, it still remains at relatively high levels by the standards of recent decades. To describe this pattern in another way, those who are long-term unemployed have found it harder to get back into employment than those who were unemployed for less than 27 weeks. 


In addition, the official unemployment statistics only count someone as unemployed if they are out of a job and actively looking for work. This definition of unemployment makes some sense: for example, it would be silly to count a 75 year-old retiree or a married spouse staying home by choice as "unemployed." The labor force participation rate measures the share of adults who are "in the labor force," which means that they either have a job or are out of a job and looking for one. This rate has been generally declining since the late 1990s. There are a number of possible reasons for this decline: for example, the baby boomer generation is retiring in force, and more retirees means a lower labor force participation rate; more young adults are continuing to attend school into their 20s, and thus aren't counted as being in the labor force; and some of those who were long-term unemployed have given up looking for work, and are no longer counted in the unemployment statistics even though they would still prefer to be employed. 


Krueger slices and dices this topic from several directions, but a lot of his recent work has focused on the issue of the long-term unemployed. Those who are long-term unemployed tend to become disconnected from the labor market over time.  Their job search activity gradually diminishes, and employers are less likely to give interviews to those whose resumes show long-term unemployment. For illustration, here's a figure from Krueger on the probability of an unemployed worker finding a job based on how long the worker has been unemployed.

[Krueger's Figure 5: job-finding rate by duration of unemployment]

Krueger writes: "A variety of evidence points to the long-term unemployed being on the margins of the labor market, with many on the verge of withdrawing from searching for a job altogether. As a result, the long-term unemployed exert less downward pressure on wages than do the short-term unemployed. They are increasingly likely to transition out of the labor force, which is a loss of potential for our economy and, more importantly, a personal tragedy for millions of workers and their families." By Krueger's calculation, about half of the decline in the share of the long-term unemployed is due to that group dropping out of the labor force altogether. Most of the decline in the labor force participation rate is due to a larger share of retirees in the population and young adults being more likely to remain in school, but on Krueger's estimates about one percentage point of the decline is due to the long-term unemployed leaving the labor market and no longer looking for work. (I've discussed the decline in labor force participation rates a number of times on this blog before: for example, here, here and here, or for some international comparisons, see here.)

A different measure of the tightness of the labor market is to stop parsing the job statistics, and instead look at the patterns of unemployment and wages, what economists call a Phillips curve. In general, one might expect that higher unemployment would mean less pressure for wages to rise, and the reverse for lower unemployment. One sometimes hears an argument that real wages haven't been rising recently in the way one should expect if unemployment is genuinely low (as opposed to just appearing low because workers have dropped out of the labor force).

Krueger argues that the patterns of wage changes and unemployment are roughly what one should expect. He focuses only on short-term unemployment (that is, unemployment of less than 27 weeks), on the grounds that the long-term unemployed are more likely to be detached from the labor force and thus will exert less pressure on wages. Increases in real wages are measured using the Employment Cost Index data collected by the US Bureau of Labor Statistics, subtracting inflation as measured by the Personal Consumption Expenditures price index. In the figure below, the solid line shows the relationship between short-term unemployment and changes in real wages for the period from 1976-2008. (The dashed lines show the statistical confidence intervals on either side of this line.) The points labelled in blue are for the years since 2008. From 2009-2011, the points line up almost exactly on the relationship predicted from earlier data. For 2012-2014, the points are below the predicted relationship, although still comfortably within the range of past experience (as shown by the confidence intervals). For the first quarter of 2015, the point is above the historical prediction.

[Krueger's Figure 10: real wage growth and short-term unemployment]
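To make the mechanics concrete, here is a minimal sketch of this kind of exercise using made-up numbers rather than the actual ECI, PCE, and unemployment series: fit a line on a pre-2008 sample, then see how recent observations compare with the fitted prediction.

```python
# A minimal sketch with hypothetical data, not the actual BLS/BEA series:
# regress real wage growth (ECI growth minus PCE inflation) on the short-term
# unemployment rate over a pre-2008 sample, then compare recent observations
# with the fitted line.
import numpy as np
from scipy import stats

# Hypothetical pre-2008 sample (percent values)
short_term_unemp = np.array([4.5, 5.0, 6.2, 3.8, 4.1, 5.5, 6.8, 4.9, 4.3, 5.9])
real_wage_growth = np.array([1.2, 0.9, 0.2, 1.6, 1.4, 0.7, -0.1, 1.0, 1.3, 0.4])

fit = stats.linregress(short_term_unemp, real_wage_growth)  # slope should be negative

# Hypothetical post-2008 observations of short-term unemployment
recent_unemp = np.array([5.6, 4.8, 4.2])
predicted_wage_growth = fit.intercept + fit.slope * recent_unemp
print(predicted_wage_growth)  # compare these predictions with observed recent wage growth
```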

This pattern suggests that since 2008, the relationship between unemployment rates and wage increases hasn't changed much. To put it another way, the low unemployment rates now being observed are a meaningful statistic--not just covering up for workers exiting from the labor market--because they are tending to push up wages in pretty much the same way as they have in the past.

Friday, October 23, 2015

How Raising the Top Tax Rate Won't Much Alter Inequality

"Would a significant increase in the top income tax rate substantially alter income inequality?"  William G. Gale, Melissa S. Kearney, and Peter R. Orszag ask the question in a very short paper of this title published by the Economic Studies Group at the Brookings Institution. Their perhaps surprising answer is "no."

The Gale, Kearney, Orszag paper is really just a set of illustrative calculations, based on the well-respected microsimulation model of the tax code used by the Tax Policy Center.  Here's one of the calculations. Say that we raised the top income tax bracket (that is, the statutory income tax rate paid on a marginal dollar of income earned by those at the highest levels of income) from the current level of 39.6% up to 50%. Such a tax increase also looks substantial when expressed in absolute dollars. By their calculations, "A larger hike in the top income tax rate to 50 percent would result, not surprisingly, in larger tax increases for the highest income households: an additional $6,464, on average, for households in the 95-99th percentiles of income and an additional $110,968, on average, for households in the top 1 percent. Households in the top 0.1 percent would experience an average income tax increase of $568,617."

In political terms, at least, this would be a very large boost. How much would it affect inequality of incomes? To answer this question, we need a shorthand way to measure inequality, and a standard tool for this purpose is the Gini coefficient. This measure runs from 0 in an economy where all incomes are equal to 1 in an economy where one person receives all income (a more detailed explanation is available here). For some context, the Gini coefficient for the US distribution of pre-tax income is .610. After current tax rates are applied, the Gini coefficient for the after-tax distribution of income is .575.

If the top tax bracket rose to 50%, then according to the first round of Gale, Kearney, Orszag calculations, the Gini coefficient for after-tax income would barely fall, dropping to .571. For comparison, the Gini coefficient for inequality of earnings back in 1979, before inequality had started rising, was .435.

As a follow-up calculation, how about if we took the $95.6 billion raised by this tax increase and distributed it to the bottom 20% of the income distribution: "Increasing the top rate to 50 percent ... would bring in an additional $95.6 billion in revenue, leading to an additional $2,650 in post-tax income for the bottom fifth of households." This calculation includes at least two fairly heroic assumptions: there would be no reduction in the income-earning behavior of taxpayers as a result of the higher tax rates, and the US political system would focus the revenue raised on those with the lowest income levels (rather than the middle class or other spending priorities). Even so, this redistribution would only reduce inequality as measured by the Gini coefficient to .560.

The authors also offer a simulation that assumes a change in income-earning behavior from the higher tax rate: "We redo the simulation assuming that households with more than $100,000 in pre-tax income reduce their pre-tax income in response to an increase in the income tax rate, with an income elasticity of .4." (This elasticity implies, for example, that a 10% rise in the tax rate would lead to a 4% fall in taxable income earned.) This makes only a small difference in the overall reduction in inequality from a tax-and-redistribute plan. As the authors explain: "The highest income households reduce their pre-tax income, which would amplify the reduction in income inequality, but that leaves less revenue to redistribute."
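To see why the after-tax Gini moves so little, here is a minimal sketch of this style of calculation. It is not the Tax Policy Center microsimulation: the income distribution, the two-bracket tax schedule, and the bracket threshold are all invented for illustration, but it shows how raising the top rate shifts the after-tax Gini coefficient only modestly when few households sit above the threshold.

```python
# A minimal sketch, not the Tax Policy Center microsimulation: hypothetical
# pre-tax incomes, a stylized two-bracket tax, and the resulting Gini coefficients.
# The bracket threshold and base rate are invented for illustration.
import numpy as np

def gini(x):
    """Gini coefficient from sorted values (0 = complete equality, 1 = one person has all)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    return (2 * np.arange(1, n + 1) - n - 1).dot(x) / (n * x.sum())

def after_tax(income, top_rate, base_rate=0.20, threshold=450_000):
    """Stylized tax: base_rate on income up to the threshold, top_rate above it."""
    tax = base_rate * np.minimum(income, threshold) + top_rate * np.maximum(income - threshold, 0.0)
    return income - tax

rng = np.random.default_rng(0)
income = rng.lognormal(mean=10.5, sigma=1.0, size=100_000)   # hypothetical pre-tax incomes

print("pre-tax Gini:              ", round(gini(income), 3))
print("after-tax Gini, 39.6% top: ", round(gini(after_tax(income, 0.396)), 3))
print("after-tax Gini, 50% top:   ", round(gini(after_tax(income, 0.50)), 3))
```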

This paper is really just a set of calculations. It doesn't recommend the higher tax rate, nor does it recommend against it. In that spirit, what lessons might one take away from this calculation?

To me, it emphasizes how very large the rise in inequality has been. Even a substantial increase in the top tax rate has a fairly small effect on after-tax inequality--precisely because the before-tax rise in inequality has been so very large. Indeed, trying to address income inequality by raising tax rates on the highest incomes would require some VERY high rates, far above the 50% level considered here.

Even though this rise in tax rates would have a quite modest effect on after-tax inequality of incomes, even when combined with redistribution, it might still be worth considering for other reasons, such as part of an overall deficit-reduction package or to fund certain kinds of high-priority spending. On the other side, raising the tax rate on those with high incomes will not be transformative for the US budget situation. Raising the top income tax rate to 50% brings in less than $100 billion per year, while total federal spending in 2015 seems likely to run around $3.8 trillion. So it would be fair to say that the revenue from raising the top income tax rate to 50% would cover only about 2.5% of total federal spending.

Thursday, October 22, 2015

Greater Inequality of Returns Across US Firms

The returns to investing in US firms have become more unequal over time--a fact which might help to explain the rise in income inequality. Jason Furman and Peter Orszag provide evidence and discussion in their October 2015 paper, "A Firm-Level Perspective on the Role of Rents in the Rise in Inequality." 

Here's an example of the growing inequality of returns across firms based on stock market returns. The blue line shows the distribution of stock market returns across the Standard & Poor's 500 in 1996; the red line shows the distribution in 2014. The returns as shown on the horizontal axis are measured relative to the most common or "modal" return. Notice that in 2014, there are fewer firms in the middle of the distribution, and more that are out on the right-hand side with comparatively high levels of returns.


As another measure, here's a related but different calculation. Here, the approach is to look at the "return on invested capital" for publicly-traded nonfinancial firms. As they explain, "the return on invested capital ... [is] defined as net after-tax operating profits divided by capital invested in the firm. This measure reflects the total return to capital owners, independent of financing mix." That is, the measure takes into account whether some firms have lots of debt or lots of equity, and looks at all capital invested. The big takeaway here is that if you look at firms at the 90th percentile of return on invested capital and compare them to firms at, say, the median (or 50th percentile), the relationship doesn't change too much from 1965 through the 1980s. But after that point, the firms with a 90th percentile rate of return start doing comparatively much better.
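As a small illustration of this comparison, here is a minimal sketch with invented firm-level numbers (not the Furman and Orszag data): compute ROIC firm by firm as after-tax operating profit divided by invested capital, then compare the 90th percentile with the median.

```python
# A minimal sketch with invented firm-level data, not the actual sample of
# publicly traded nonfinancial firms: ROIC = net after-tax operating profit
# divided by invested capital, compared at the 90th and 50th percentiles.
import numpy as np

rng = np.random.default_rng(0)
nopat = rng.lognormal(mean=4.0, sigma=1.0, size=1_000)             # hypothetical after-tax operating profits
invested_capital = rng.lognormal(mean=6.0, sigma=0.7, size=1_000)  # hypothetical invested capital

roic = nopat / invested_capital
p90, p50 = np.percentile(roic, [90, 50])
print(f"90th percentile ROIC: {p90:.2f}, median ROIC: {p50:.2f}, ratio: {p90 / p50:.1f}")
```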
The growth of these high-flying firms is part of the explanation for the rise in income inequality during the last quarter-century. What happens is that pay is higher for those who work at highly profitable firms. The overall growth in income inequality arises not because people inside a given firm are seeing more inequality, but because the between-firm level of inequality is rising. As evidence on this point, Furman and Orszag discuss a 2014 study by Erling Barth, Alex Bryson, James C. Davis, and Richard Freeman, "It’s Where You Work: Increases in Earnings Dispersion across Establishments and Individuals in the U.S."
They estimate that increasing inequality between establishments explains more than two-thirds of the increase in overall earnings inequality between 1992 and 2007. Among workers who continued at the same establishment from one year to the next, the increased spread in average pay between establishments explained 79 percent of the rise in earnings inequality over that period.
Furman and Orszag also quote evidence from a recent paper by Jae Song, David J. Price, Fatih Guvenen, and Nicholas Bloom that I discussed back in July. As Furman and Orszag write:
Song et al. found that essentially all of the increase in national wage inequality from 1978 to 2012 stemmed from increasing disparities in average pay across companies. By contrast, their analysis suggests that the wage gap between the highest-paid employees and average employees within firms explains almost none of the rise in overall inequality. ... [W]hile individual wage disparities have clearly increased in recent decades, virtually all the increased dispersion is attributable to inter-firm dispersion rather than intra-firm dispersion.
Just why this inter-firm dispersion in returns and the accompanying inequality of incomes has risen is a subject still being researched, but various hypotheses suggest themselves. One possibility suggested by Furman and Orszag is that certain industries have become more concentrated, allowing firms in those industries to earn higher profits in a situation of less competition. Another possibility is that a small number of firms are adopting new technologies, but these technologies are not diffusing as rapidly from the cutting-edge firms to the rest. Still another possibility is that the high-profit firms are more likely to be in industries that need a disproportionate amount of highly-skilled labor, which would help to explain the high pay in those industries. Thinking about how rising inequality in returns across firms is related to rising inequality of incomes seems likely to be a hot topic in the next wave of research on income inequality.


Wednesday, October 21, 2015

The Shifting World Distribution of Income

The fastest-growing countries around the world, now and probably for the next few decades, will not be the high-income countries. As a result, the global distribution of income will become--gradually--more equal over time. TomÔő Hellebrandt and Paolo Mauro offer a projection in their essay, "China’s Contribution To Reducing Global Income Inequality," which appears as part of China’s Economic Transformation: Lessons, Impact, and the Path Forward, a group of short essays published in September 2015 by the Peterson Institute for International Economics (PIIE Briefing 15-3). Here's the key figure.


The horizontal axis shows income level per person. You should think of this axis as broken down into small segments that are each $20 in width. The vertical axis then shows what share of world population receives that income level, for each $20 segment of income. (If you used income widths greater than $20, the overall shape of the green, red, and blue lines would be essentially the same, but because the width of the "bins" would be larger, the numbers on the vertical axis would be larger, too.)

The green line shows the distribution of global income in 2003, the red line in 2013, and the blue line is a projection for 2035. You can see median and mean incomes rising over time. The overall flattening of the income distribution over time, with a smaller share of the population bunched at the bottom, tells you that the income distribution is getting more equal. On the graph, inequality is measured by a "Gini coefficient," which is a standard measure of inequality.
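For readers who want to see what the binning does, here is a minimal sketch with invented incomes rather than the actual world distribution; it shows that a wider bin width scales up the shares on the vertical axis without changing the shape of the curve.

```python
# A minimal sketch with hypothetical incomes, not the actual world distribution:
# sort incomes into $20-wide bins, compute each bin's share of the population,
# and compare with a wider bin width. The shares scale with the bin width, but
# the shape of the resulting curve is essentially the same.
import numpy as np

rng = np.random.default_rng(0)
income = rng.lognormal(mean=8.3, sigma=1.0, size=1_000_000)   # hypothetical annual incomes

for width in (20, 100):
    bins = np.arange(0, 60_000 + width, width)
    counts, _ = np.histogram(income, bins=bins)
    shares = counts / income.size                             # share of population per bin
    print(f"bin width ${width}: peak share of population in a single bin = {shares.max():.4%}")
```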

For quick intuition, I'll just say that the Gini coefficient is measured along a scale from 0-100, where zero means complete equality of incomes, and 100 means that a single person receives all the income. To get a more intuitive feel for what the Gini means, the World Bank publishes estimates of Gini coefficients, when data is available, for countries around the world. Countries with a very high level of inequality, like Brazil, Mexico, Zambia and Uganda, have a Gini around 50.  In the United States and China, the Gini is about 40. In Germany and France, it's about 30. In highly egalitarian countries like Sweden or Norway, it's closer to 25. It's not especially surprising that the global Gini coefficient is higher than the Gini for any given country: after all, global inequality is greater than inequality within any given country.

If you would like a more detailed explanation of how a Gini coefficient is calculated, you can check out my earlier post on "What's a Gini Coefficient?" (April 3, 2014). One of the bits of intuition given there goes like this: "A Gini coefficient of G per cent means that, if we take any 2 households from the population at random, the expected difference is 2G per cent of the mean. So that a rise in the Gini coefficient from 30 to 40 per cent implies that the expected difference has gone up from 60 to 80 per cent of the mean."

Thus, in the figure above, the Gini coefficient in 2013 is 64.9 and the mean income is $5,375. So if you pick at random two households from anywhere in the world, the average difference in their incomes will be 2 x (.649) x $5,375 = $6,976.
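As a quick check of that "expected difference equals 2G times the mean" interpretation, here is a minimal sketch using a hypothetical income sample rather than the world income data behind the figure.

```python
# A quick check of the "expected difference = 2 x G x mean" interpretation, using
# a hypothetical income sample rather than the actual world income data.
import numpy as np

rng = np.random.default_rng(1)
x = rng.lognormal(mean=8.0, sigma=1.2, size=5_000)              # hypothetical incomes

xs = np.sort(x)
n = xs.size
g = (2 * np.arange(1, n + 1) - n - 1).dot(xs) / (n * xs.sum())  # Gini coefficient

# Average gap between two randomly drawn incomes, by simulation
i, j = rng.integers(0, n, 200_000), rng.integers(0, n, 200_000)
mean_abs_diff = np.abs(x[i] - x[j]).mean()

print(round(mean_abs_diff, 1), round(2 * g * x.mean(), 1))      # the two numbers agree closely
# With the figure's numbers: 2 x 0.649 x $5,375 is roughly $6,976.
```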

Tuesday, October 20, 2015

When High GDP No Longer Means High Per Capita GDP

For much of the 20th century, the economies of the world that had the largest GDP were also among those with the largest per capita GDP. The weight of global economic activity was aligned with the US, Canada, western Europe, and Japan. But we seem to be headed for a world in which this pattern no longer holds. For a few years now, the consulting firm PricewaterhouseCoopers has been publishing reports offering long-range forecasts of the size of national economies for the year 2050. A recent example is the February 2015 report: "The World in 2050: Will the shift in global economic power continue?"

Here's a table (which I snipped from a larger table in the report) listing the top 10 economies in the world by size, using what are called "purchasing power parity" exchange rates--which roughly means the exchange rates that equalize the buying power of internationally traded goods across economies. Using this measure, China now has the largest economy in the world (using market exchange rates, it will still be a few more years before China's economy overtakes the United States). It's striking to look ahead to these projections for 2050, when China will have by far the largest economy in the world--and India will be edging out the US for second place. Countries like Indonesia, Brazil, Mexico and Nigeria will also be in the world's top 10 largest economies. Only three of today's high-income countries--the US, Japan, and Germany--are projected to be in the top 10 by 2050.


Of course, many of the largest economies in 2050 are also countries with large populations. In the 2050 projections, a large GDP does not imply a large per capita GDP. Here's a figure showing per capita GDP for a number of countries, with the darker lines showing the level for 2014 and the lighter lines showing the projection for 2050. Even projecting rapid growth for countries like China, India, Indonesia, Brazil, and Mexico more than three decades into the future, their average standard of living as measured by per capita GDP will still be far below the high-income countries.





Of course, the precise levels of these long-range projections should not be taken too seriously. After all, they are based on estimates about rates of economic growth and other economic variables in countries all around the world decades into the future. The numbers also look a little different if one does the calculations using market exchange rates, rather than the purchasing power parity exchange rates. Readers who want details can check out the report.

But tweaking the numbers is not going to change the overall conclusion, which is based on the fact that when it comes to international relations, size matters. Countries with bigger economies tend to get more say, and the United States has been used to having the biggest say of all. But in the 21st century, when it comes to a wide array of decisions--international trade talks, decisions of the International Monetary Fund and the World Bank, who leads the way during global financial crises, who dominates the flows of international investment capital and foreign aid, who has the power to impose trade or financial sanctions, and what kind of military threats are most credible--the shifts in the global economy suggest that the high-income countries of the world will not dominate as they did during most of the 20th century. Instead, countries with the world's largest economies, but a much lower standard of living for their populations, will play a central role in setting the rules.

Monday, October 19, 2015

Exonerating Henry Ellsworth and Charles Duell, Former U.S. Commissioners of Patents

There's a well-known story about how a long-ago head of the US Patent Office supposedly proposed that it was time to shut down the patent office, and said something like: "Everything that can be invented has been invented." The story seems implausible on its face: after all, who would be appointed to run the patent office, or who would take the job, believing that invention was about to become obsolete? Nonetheless, the story has appeared in two incarnations: one citing Henry L. Ellsworth, who was the first commissioner of the US Patent Office, serving from 1835-1845, and the other citing Charles Duell, who was commissioner from 1898-1901.

A first debunking of this story appears in an article by Eber Jeffrey called "Nothing Left to Invent," which appeared in the Journal of the Patent Office Society in 1940 (vol. 22, pp. 479-481, available through the magic of online inter-library loan). Jeffrey writes:
Numerous versions of the story appear. A discouraged examiner is said to have declared in his letter of resignation, that there was no future for the inventor; a Congressman favored termination of the functions of the Patent Office since the time was near at hand when such functions would serve no purpose; and an eminent Commissioner of Patents retired almost a century ago, offering as his reason the view that the limits of human ingenuity already had been reached.  This last variant is told of Commissioner Harry L. Ellsworth, who resigned on the first day of April, 1845.
As Jeffrey points out, there is nothing in Ellsworth's resignation letter (which he reprints) showing any belief that the Patent Office was soon to be obsolete. However, one can find a phrase in one of Ellsworth's earlier reports which suggests he might have held such views. Ellsworth wrote a ten-page introduction to the 1843 Annual Report of the Commissioner of Patents (magically available online), which on p. 5 contains a one-sentence paragraph which reads:
"The advancement of the arts from year to year taxes our credulity, and seems to presage the arrival of that period when human improvement must end."
However, in the context of his report, this phrase doesn't read like a prediction that human improvement is about to end. It reads like an expression of wonder about how much technical improvement is in fact happening, beyond what seems imaginable. As Jeffrey summarizes it in the 1940 essay:
Ellsworth did not elaborate on this statement. The content of the whole report, though, surely indicates that he did not think the end of "human improvement" was immediately at hand. He recommended that Congress provide for additions to the Patent Office building and asked for more equipment. He showed that great scientific progress was to be expected in the use of electricity, particularly for the telegraph and for railroads. He pointed out that important forward steps had been taken and were to be expected in medicine, in sugar refining, in the manufactures of textile, leather, and iron products. He anticipated a great variety of improvements in agriculture. Ellsworth probably did more than any other Commissioner to promote scientific farming while the agricultural bureau was a unit in the Patent Office. From these considerations it seems that Henry L. Ellsworth could not have felt that the end of progress in the mechanic arts was near. It is much more likely that the statement in question, probably an unfortunate one, was a mere rhetorical flourish intended to emphasize the remarkable strides forward in inventions then current and to be expected in the future. 
Charles Duell was the US Commissioner of Patents from 1898 to 1901. Stephen Sass, who was a librarian for a division of General Motors, told Duell's story for a magazine called the Skeptical Inquirer (Spring 1989, pp. 310-313). Sass writes:
This new story surfaced in the fall of 1985, when full-page advertisements sponsored by the TRW Corporation appeared in a number of leading periodicals, including Harper and Business Week. These ads had as their theme "The Future Isn't What It Used to Be." They contained photographs of six individuals, ranging from a baseball player to a president of the United States, who had allegedly made wrong predictions. Along with such statements as "Sensible and responsible women do not want to vote," attributed to President Cleveland, and "There is no likelihood man can ever tap the power of the atom," attributed to physicist Robert Millikan, there is a prediction that was supposedly made by Commissioner of the U.S. Patent Office Charles H. Duell. The words attributed to him were: "Everything that can be invented has been invented." The date given was 1899.
When Sass contacted TRW to find a source for the quotation, they pointed him to a couple of quotation books. The second quotation book cited the first one, and the first one had no link to a primary source. The 1899 Annual Report of the Commissioner of Patents is available online, and the quotation does not appear there. However, one does find a number of comments about the growth of patents and recommendations for legislation to improve the patent office. At the end, Duell writes: "May not our inventors hopefully look to the Fifty-Sixth Congress for aid and effectual encouragement in improving the American patent system?" Just before that, Duell quotes with evident approval a then-recent comment from President McKinley, who said in his annual message on December 5, 1899: "Our future progress and prosperity depend on our ability to equal, if not surpass, other nations in the enlargement and advance of science, industry, and commerce. To invention we must turn as one of the most powerful aids to the accomplishment of such a result."

Duell offered a similar message in other settings. For example, here is how Duell is quoted in a newspaper article called "Chances for the Inventor: Fame and Wealth Awaiting Him in Many Fields" in the New York Sun from December 29, 1901:

In short, when you hear claims about how someone in the US Patent Office once believed that it was about to run out of work, be very skeptical.



Friday, October 16, 2015

When Global Demand Shifts: Cars and Movies

Producers try to figure out what the market wants, and then supply it. For the second half of the 20th century, the largest single world market was the United States. Not surprisingly, many products and brands were aimed at the US market. But during the next few decades, the single largest market is going to be China--and China is also likely to be among the fastest-growing markets. Producers around the world will re-orient accordingly.

For example, the number of cars sold in China has exceeded the number sold in the U.S. market for several years now. Here's a table and a figure from the VDA (the German Association of the Automotive Industry) showing global sales figures for passenger cars.


Here's a figure showing growth of motor vehicle sales in China.

For another example of a global industry, consider movies. Here's some data from the MPAA (Motion Picture Association of America). If you go back to reports from earlier years, in 2001 the US was about half of the global market for movies. Now the US/Canada share of the global market for movies is just over one-quarter, and falling.
Here are the top international markets for movies. In 2014, China became the first non-US national market to exceed $4 billion in movie revenues.
It's worth remembering that China has about four times as many people as the US and about one-fifth the per capita income, so China has enormous potential for continued rapid growth of car sales, movies, and many other goods and services.

As a thought experiment, imagine that you are a movie executive trying to focus on movies that can be shown with minimal adaptations in a number of large global markets: the US, Europe, China, Japan, Latin America. What sort of movies would you make? Well, you might focus on movies that are heavy on action and sound effects, but light on complex dialogue, as well as movies with a number of comic-book characters and aliens, who can appeal across conventional lines of ethnicity.

In the closing decades of the 20th century, a lot of Americans took it for granted that prominent global brands would either be American-based or at least would have a large US market presence. Meanwhile, people in many other countries found it bothersome (sometimes mildly annoying, sometimes downright aggravating) that they were so often confronted in their own countries--both on store shelves and in advertising--with products that had a strong American identity. But global demand is shifting. The huge US market will remain important, of course. But more and more, Americans are going to be seeing brands and titles and products where the US market is just one among several--and not necessarily the most important one.

Thursday, October 15, 2015

Thoughts on Shovel-Ready Infrastructure

When the economy slows down, there's often a call for increased infrastructure spending to give employment and output a jump-start. This idea is far from new, but it seems difficult to implement.

As one example of this idea from the past, L.W. Wallace published an article back in 1927 called "A Federal Department of Public Works and Domain: Its Planning, Activities and Influence in Leveling the Business Cycle," which appeared in the Proceedings of the Academy of Political Science in the City of New York (Vol. 12, No. 3, Jul., 1927, pp. 102-110, available through JSTOR). Wallace quoted a recent speech by President Calvin Coolidge, who said:

The idea of utilizing construction, particularly of public works, as a stabilizing factor in the business and employment situation has long been a plan of perfection among students of these problems. If in periods of great business activity the work of construction might be somewhat relaxed; and if in periods of business depression and slack employment those works might be expanded to provide occupation for workers otherwise idle, the result would be a stabilization and equalization which would moderate the alternations of employment and unemployment. This in turn would tend to favorable modification of the economic cycle. ... The first and easiest application of such a regulation is in connection with public works; the construction program which involves public buildings, highways, public utilities, and the like. Most forms of Government construction could be handled in conformity to such a policy, once it was definitely established.... This applies not only to the construction activities of the Federal Government, but to those of states, counties and cities.
More than this, the economies possible under such a plan are apparent. When everybody wants to do the same thing at the same time, it becomes unduly expensive. Every element of costs, in every direction, tends to expand. These conditions reverse themselves in times of slack employment and subnormal activity, with the result that important economies are possible.
I am convinced that if the Government units would generally adopt such a policy, and if, having adopted it, they would give the fullest publicity to the resultant savings, the showing would have a compelling influence upon business generally. Quasi-public concerns, such as railroads and other public utilities, and the great corporations whose requirements can be quite accurately anticipated and charted, would be impressed that their interest could be served by a like procedure
One example of infrastructure spending being used to stabilize the economy arose in the 1950s, when President Dwight Eisenhower viewed the interstate highway system along these lines. Raymond J. Saulnier  offers some explanation in his 1991 book Constructive Years: The U.S. Economy Under Eisenhower, describing how the authorization of the Interstate Highway system in 1956 was envisioned both for its own benefits, and also as a tool of economic stabilization. Saulnier writes (p. 74, and p. 233):
"And although Eisenhower's interest in having authority to develop an Interstate Highway System was primarily for what it would accomplish  toward improving the nation's infrastructure, he viewed it from the beginning (as he viewed all construction projects over which the Executive Branch had control) as a program that could be used to help stabilize the economy."  ... The undertaking--40,000 miles in its original format, to be built over 13 years--was the largest single U.S. public works project to that time or since. It as of immense interest to Eisenhower for what it would mean to national security (military experience had underlined the importance of transport) and, equally important, for what it would mean to the nation's economic development. He was acutely sensitive, also, to the possibility that its construction could if necessary be scheduled to help stabilize the economy." 
A later Congressional Budget Office report on "Highway Assistance Programs: A Historical Perspective," described in February 1978 how the federal highway legislation was soon used for countercyclical spending (p. 6 and pp. 30-31):

The Federal-Aid Highway Act of 1944 provided greatly expanded funding and established separate, proportional authorizations for three categories of highways—the primary system, the secondary system, and the urban extensions of the primary system—which became known as the ABC programs. ... The Federal-Aid Highway Act of 1958, which was put forward as an antirecession measure, suspended the Byrd Amendment for 1959 and 1960, allowing apportionments to be made for the full amount authorized even though Trust Fund revenues were not expected to be sufficient. Thus, the "pay-as-you-build" principle established in the 1956 act was almost immediately suspended, albeit only temporarily. Additional authorizations were also made for 1959, and the funds were made available immediately (the original 1959 authorization had already been apportioned). For these additional funds, which were primarily countercyclical in nature, the regulations regarding the proportion of funds allocated to each of the ABC systems were suspended and the federal share was temporarily increased to two-thirds. The decision not only to continue Interstate authorizations but also to raise the levels was based on two reasons. First, it was argued that a general economic stimulus would derive from the increased authorizations. Second, much was made of the Congressional intent expressed in 1956 regarding the "acceleration and prompt completion of the Interstate System."
However, the most recent US experience with trying to use infrastructure spending to stimulate the economy was at best a partial success, because even after such spending was authorized, it took so long to actually get underway that the recession had already ended. President Barack Obama made this point in an interview with the New York Times in October 2010, where he said:
Infrastructure has the benefit of for every dollar you spend on infrastructure, you get a dollar and a half in stimulus because there are ripple effects from building roads or bridges or sewer lines. But the problem is, is that spending it out takes a long time, because there’s really nothing -- there’s no such thing as shovel-ready projects.
If we are going to take seriously the notion of using infrastructure spending as a countercyclical tool, here are some thoughts.

1) You need to have a detailed plan of projects that really are shovel-ready, like the federal highways in the 1950s. Otherwise, it's just too slow to get such projects underway when a recession hits. Perhaps the ideal approach is to have a long-term project happening on an ongoing basis, with the possibility of speeding it up if a recession hits.

2) Government is often focused on the infrastructure that it owns directly: like roads, bridges, and sewer lines. These areas matter, of course. But the future of the US economy will rely on a lot of other kinds of infrastructure, many of which are either privately owned or are some form of public-private partnership: including phone and cable lines, electricity generation and transmission, pipelines for oil and gas, railroad tracks, airport and seaport capacity, and water reservoirs and pipes. A broader focus on infrastructure would think about ongoing efforts to build infrastructure in these areas, and how some form of government support might accelerate them during a recession, too.

3) Pretty much everyone favors infrastructure in theory, but in practice, there are often some hard issues to work out. How does one focus on infrastructure that has the largest payoff, rather than just spreading out the spending to favored state and congressional districts, or contracts to favored political interests? How does one make sure the best deals get negotiated for the use of taxpayer support? Many kinds of infrastructure involve a mixture of user payments and taxpayer money, and how should those user payments be structured? Finally, how do we balance the need to give opponents of infrastructure projects a fair hearing, but also not give opponents an unfettered ability to use "lawfare" to block infrastructure projects?


Tuesday, October 13, 2015

The 2015 Nobel Prize: Angus Deaton

Economics is a tree with many branches, but consumption patterns and the standard of living more broadly understood are certainly one of the most important. Angus Deaton won the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2015--commonly known as the Nobel Prize in economics--"for his analysis of consumption, poverty, and welfare." Each year, the committee publishes a number of materials about the award at its website, including background papers and interviews. Here, I'll focus on two background publications whose titles signal how readable each one is: "Information for the Public: Consumption, great and small" and "Scientific Background: Angus Deaton: Consumption, poverty and welfare."

Every year I feel a little defensive when trying to explain the intellectual contributions of the winner of the Nobel prize in economics. Non-economists want to know: "What big discovery did he make or what big question did he solve?" But professional economists are more interested in questions like: "In what ways did he develop the theory and the empirical evidence to increase our understanding of economic behavior and the economy?" Here, let me start by quoting how the committee answered this broad question in the "Scientific Background" paper, and then let me try to disentangle the jargon a bit and offer a few thoughts of my own.

Over the last three to four decades, the study of consumption has progressed enormously. While many scholars have contributed to this progress, Angus Deaton stands out. He has made several fundamental and interconnected contributions that speak directly to the measurement, theory, and empirical analysis of consumption. His main achievements are three.
First, Deaton’s research brought the estimation of demand systems – i.e., the quantitative study of consumption choices across different commodities – to a new level of sophistication and generality. The Almost Ideal Demand System that Deaton and John Muellbauer introduced 35 years ago, and its subsequent extensions, remain in wide use today – in academia as well as in practical policy evaluation.
Second, Deaton’s research on aggregate consumption helped break ground for the microeconometric revolution in the study of consumption and saving over time. He pioneered the analysis of individual dynamic consumption behavior under idiosyncratic uncertainty and liquidity constraints. He devised methods for designing panels from repeated cross-section data, which made it possible to study individual behavior over time, in the absence of true panel data. He clarified why researchers must take aggregation issues seriously to understand total consumption and saving, and later research has indeed largely come to address macroeconomic issues through microeconomic data, as such data has increasingly become available.
Third, Deaton spearheaded the use of household survey data in developing countries, especially data on consumption, to measure living standards and poverty. In so doing, Deaton helped transform development economics from a largely theoretical field based on crude macro data, to a field dominated by empirical research based on high-quality micro data. 
As just one example of how these different ideas come together, consider the problem of learning about consumption levels of households in low-income countries. Until just a few decades ago, it was common for researchers in this area to look at national-level data on patterns of consumption and income, and then divide by population to get an average. Deaton was at the front edge of the group of researchers who pushed for the World Bank to develop the Living Standards Measurement Study, a set of household surveys that collect detailed data on nationally representative samples of people in countries around the world. This is an example of the third point in the committee's list above.

But an obvious practical problem has to be addressed in any such survey. If you want to know how the consumption and saving decisions of households evolve over time--for example, it may take several years for a household to adjust fully to a sharp change in prices or incomes--it seems as if you need to follow the same group of households over time. Economists call this "panel data." But panel data can be hard to collect: people move, households split up, and, especially in low-income countries, finding out where they went isn't easy. Deaton showed that a series of repeated cross-section surveys--each one interviewing a different set of individuals--contains enough information to track how households with given characteristics, such as a birth cohort, behave over time. Grouping the data this way creates a "pseudo-panel" that can stand in for true panel data in many applications. This is an example of the second point made by the committee above.
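To give a flavor of the cohort idea, here is a minimal sketch in Python with pandas, using a tiny made-up dataset; the column names and numbers are illustrative, and this is a sketch of the general technique rather than Deaton's own procedure.

import pandas as pd

# Made-up repeated cross-sections: each survey year interviews different
# households, recording the household head's birth year, consumption, and income.
surveys = pd.DataFrame({
    "survey_year": [2000, 2000, 2000, 2005, 2005, 2005],
    "birth_year":  [1961, 1972, 1963, 1974, 1960, 1973],
    "consumption": [40.0, 30.0, 45.0, 38.0, 50.0, 35.0],
    "income":      [55.0, 42.0, 60.0, 50.0, 65.0, 47.0],
})

# Group household heads into five-year birth cohorts.
surveys["cohort"] = (surveys["birth_year"] // 5) * 5

# Average within each cohort and survey year. Each cohort's mean consumption
# and income can now be followed across survey years, much like an
# observation in a true panel.
pseudo_panel = (
    surveys.groupby(["cohort", "survey_year"])[["consumption", "income"]]
    .mean()
    .reset_index()
)

print(pseudo_panel)

The tradeoff is that the unit of observation becomes the cohort average rather than the individual household, so the sampling error in those cohort means has to be taken into account when the pseudo-panel is analyzed.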

Another analytical problem is how to combine the data from many different households to draw overall conclusions about how consumption and saving shift in response to changes in prices or income. When Deaton first started writing about these issues in the 1970s, a common practice was to treat the economy as if it were one giant consumer reacting to price and income changes. Perhaps not surprisingly, such calculations did not do well at describing how patterns of demand across goods actually shifted. Deaton, working with John Muellbauer, developed a more flexible framework for modeling demand across a wide range of goods and services, one that allows demand patterns to differ across households (for example, with the number of people in a household and how many of them are children). It turns out that allowing this extra flexibility makes it possible to draw sensible conclusions about consumption patterns from the data. This is an example of the first point made by the committee above.
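For readers who want to see the shape of that framework, the budget-share equation at the core of the Deaton-Muellbauer Almost Ideal Demand System is usually written along these lines (this is the standard textbook form, not a derivation):

w_i = \alpha_i + \sum_j \gamma_{ij} \ln p_j + \beta_i \ln\left(\frac{x}{P}\right)

Here w_i is the share of total spending going to good i, the p_j are prices, x is total expenditure, and P is a price index. The \gamma_{ij} terms capture how budget shares respond to relative prices, while the \beta_i terms let shares shift with the level of total spending; differences in household composition enter through extensions of this basic form.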

Once you have data and a theoretical framework in hand, you can seek out some interesting conclusions about consumption patterns in low-income countries. For example, in one of his papers, Deaton found that low income tends to lead to malnutrition, but that malnutrition does not seem to be an important cause of low incomes. In another paper, he found that household purchases of adult goods like alcohol and tobacco change in much the same way whether a boy or a girl is born during normal times, but that in adverse times these adult purchases are cut back less when a girl is born--evidence that, in that setting, fewer family resources are committed to raising girls. Another paper found that spending on a child runs about 30-40% of spending on an adult--which implies that when comparing countries with higher and lower proportions of children, you should not simply divide economic output by the total number of people. Deaton has also been at the center of efforts to use the available data and theory to measure the global level of poverty.
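A back-of-the-envelope illustration of that last point, with made-up numbers (the 0.35 child weight is simply a value inside the 30-40% range mentioned above):

# Two hypothetical countries with identical total consumption but
# different age structures; all numbers are illustrative.
total_consumption = 100.0          # same in both countries, arbitrary units
child_weight = 0.35                # assumed cost of a child relative to an adult

adults_a, children_a = 60, 40      # "younger" country
adults_b, children_b = 80, 20      # "older" country

per_capita_a = total_consumption / (adults_a + children_a)                  # 1.00
per_capita_b = total_consumption / (adults_b + children_b)                  # 1.00

adult_equiv_a = total_consumption / (adults_a + child_weight * children_a)  # about 1.35
adult_equiv_b = total_consumption / (adults_b + child_weight * children_b)  # about 1.15

print(per_capita_a, per_capita_b, adult_equiv_a, adult_equiv_b)

Per capita consumption looks identical in the two countries, but measured per adult equivalent the younger country is noticeably better off--which is why naive per capita comparisons can mislead.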

Over the years, Deaton has appeared a number of times in the pages of the Journal of Economic Perspectives (where I work as Managing Editor). Along with the materials from the Nobel prize committee, these articles give the interested reader a sense of Deaton's approach, as well as his intellectual breadth in areas that didn't get much mention from the committee. (As always, all articles from JEP are freely available compliments of the American Economic Association.)

Monday, October 12, 2015

Unpaid Care Work, Women, and GDP

The "economy" measures what is bought and sold. Thus, it is standard in introductory economics classes to point out that if my neighbor and I each mow our own lawns, that work is not part of GDP. But if we hire each other to mow each other's lawns, GDP is higher--even though exactly the same amount of lawn-mowing output was produced. In that spirit, what would be the economic value of nonmarket household and family services if they were valued in monetary terms?

The McKinsey Global Institute provides some background on this issue in the September 2015 report "The Power of Parity: How Advancing Women's Equality Can Add $12 Trillion to Global Growth." The report calculates that if women participated in the paid labor force at the same level as the leading country in their region (thus not holding women in Latin America, Africa, or the Middle East to the standard of northern Europeans), world GDP would be $12 trillion higher. However, the report also notes that women who are not in the paid labor force are of course already working, producing at least $10 trillion per year in nonmarket output.
Beyond engaging in labor markets in ways that add to GDP, a large part of women’s labor goes into unpaid work that is not accounted for as GDP. Women do an average of 75 percent of the world’s total unpaid care work, including the vital tasks that keep households functioning, such as child care, caring for the elderly, cooking, and cleaning. In some regions, such as South Asia (including India) and MENA, women are estimated to undertake as much as 80 to 90 percent of unpaid care work. Even in Western Europe and North America, their share is high at 60 to 70 percent. Time spent in unpaid care work has a strong negative correlation with labor-force participation rates, and the unequal sharing of household responsibilities is a significant barrier to enhancing the role of women in the world economy. Applying conservative estimates based on available data on minimum wages, the unpaid care work of women could be valued at $10 trillion of output per year—an amount that is roughly equivalent to 13 percent of global GDP. In the United States alone, the value of unpaid care work carried out by women is about $1.5 trillion a year. ... Data from 27 countries indicate that some 61 percent of unpaid care work (based on a simple average across countries) is routine household work, 14 percent involves taking care of household members, 11 percent is time spent on household purchases, and 10 percent is time spent on travel ...
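To make the arithmetic behind a replacement-cost figure like the $10 trillion estimate concrete, here is a deliberately simplified sketch; every input below is a placeholder chosen only to show the mechanics, not MGI's actual data.

# Replacement-cost valuation of unpaid care work: hours of unpaid work
# are priced at a low wage and summed. All inputs are illustrative placeholders.
women_providing_unpaid_care = 2.0e9   # assumed number of women worldwide
hours_per_week = 20.0                 # assumed average hours of unpaid care
weeks_per_year = 52
wage_per_hour = 5.0                   # assumed minimum-wage proxy, US dollars

annual_hours = women_providing_unpaid_care * hours_per_week * weeks_per_year
annual_value = annual_hours * wage_per_hour

print(f"Estimated annual value: ${annual_value / 1e12:.1f} trillion")

With these placeholder inputs the calculation lands near $10 trillion; serious estimates differ mainly in how carefully they measure hours from time-use surveys and which wage they use to price them.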
The amount of unpaid work that women do is closely related to female participation in the paid labor force. In one figure from the report, the horizontal axis shows the ratio of the labor force participation rate of women to that of men, and the vertical axis shows the ratio of time spent on unpaid care by women to the time spent by men. Thus, in India, women spend about 10 times as many hours on unpaid care as men, and their labor force participation rate is one-third that of men. In a number of high-income countries, women spend 1.5-2 times as many hours on unpaid work as men, and the labor force participation rate for women is about 80% of the level for men. (For the record, "unpaid care" is defined not just as care for other family members; it also includes housework and voluntary community work.) The MGI report notes: "Globally, women spend three times as many hours in unpaid domestic and care work as men."

Some additional background on unpaid work by women is available in "Unpaid Care Work: The missing link in the analysis of gender gaps in labour outcomes," written by Gaëlle Ferrant, Luca Maria Pesando and Keiko Nowacka for the OECD Development Centre in December 2014.

The two figures show ratios of time spent on unpaid care by women relative to men: the left-hand figure compares regions; the right-hand figure compares countries grouped by income level. The left-hand figure shows that the female-to-male ratio of time spent on unpaid care is nearly 7 in the Middle East and North Africa region and in South Asia, but below 2 in Europe and North America. The right-hand figure shows that the ratio is roughly 3 in low-income, lower-middle-income, and upper-middle-income countries, but less than 2 in high-income countries.



The level of unpaid care matters for several reasons. The most obvious, perhaps, is the McKinsey calculation that women moving into the paid labor force could raise world GDP by $12 trillion. But there are a number of more subtle ways in which the inclusion of unpaid work alters one's sense of social output. Ferrant, Pesando and Nowacka point out (citations omitted):
"It leads to misestimating households’ material well-being and societies’ wealth. If included, unpaid care work would constitute 40% of Swiss GDP and would be equivalent to 63% of Indian GDP. It distorts international comparisons of well-being based on GDP per capita because the  underestimation of material well-being would be proportionally higher in those countries where the share of housewives and home-made consumption is higher. For instance, by including Household Satellite Accounts the GDP per capita of Italy reaches from 56% to 79% of the USA’s GDP, and 98% to 120% of that of Spain."
In a broader sense, of course, the issue is not to chase GDP, but to focus on the extent to which people around the world have the opportunity to fulfill their capabilities and to make choices about their lives. Countries where women have more autonomy also tend to be countries where the female-to-male ratio of time spent on unpaid care is not as high. The share of unpaid care provided by women is highly correlated with women's ability to participate in the paid workforce, to acquire the skills and experience that lead to better-paying jobs, and to take part in other activities like political leadership. Ferrant, Pesando and Nowacka write:
"The unequal distribution of caring responsibilities between women and men within the household thus also translates into unequal opportunities in terms of time to participate equally in paid activities. Gender inequality in unpaid care work is the missing link in the analysis of gender gaps in labour outcomes in three areas: gender gaps in labour force participation rates, quality of employment, and wages."
In a similar vein, the MGI report notes: "Beyond GDP, there could be other positive effects. For instance, more women could be financially independent, and there may be intergenerational benefits for the children of earning mothers. In one study of 24 countries, daughters of working mothers were more likely to be employed, have higher earnings, and hold supervisory roles."

What are the pathways by which the time spent on unpaid care activities might be reduced? Many women are familiar with the feeling that a large part of their take-home pay goes to child care, a housecleaner, lawn care, takeout food when there is no time to cook, and the like. Having people pay each other for what was previously unpaid work adds to GDP, but it may not add to total output broadly understood.

Thus, the challenge is to reduce unpaid work in ways that don't just swap it around, but actually free up time and energy. Both the McKinsey report and the OECD authors make similar points about how this has happened in practice. Historically, one major change for high-income countries has been the arrival of labor-saving inventions. The MGI report notes:
Some of the routine household work and travel time can be eliminated through better public services and greater automation. For example, in developing countries, the time spent on household chores is increased by poor public infrastructure. Providing access to clean water in homes can reduce the time it takes to collect water, while electricity or solar power can eliminate the time spent hunting for firewood. Tools such as washing machines and kitchen appliances long ago lightened much of the drudgery associated with household work in higher-income countries, and millions of newly prosperous households in emerging economies are now adopting them, too. Innovations such as home-cleaning robots may one day make a leap forward in automating or streamlining many more tasks.
A number of other issues also affect the balance between unpaid and paid labor: the prevalence of workplace policies like family leave and flex-time; the availability of high-quality child care and elder care; the length of school days, along with preschool and after-school programs; and the extent to which money that women earn in the paid workforce is reduced by taxes or by the withdrawal of transfers that would otherwise have been available. And of course, social attitudes about the role of women are central to these outcomes.

The MGI report gives a sense of how these forces have evolved in the US economy in recent decades:
In the United States, for example, labor-force participation by women of prime working age rose from 44 percent in 1965 to 74 percent in 2010. Over this period, the time women spent on housework was cut almost in half, but the hours they spent on child care actually rose by 30 percent, reflecting evolving personal and familial choices. Both housework and child care became more equitably shared. Men’s share of housework rose from 14 percent in 1965 to 38 percent in 2010, and their share of child care from 20 percent to 34 percent.
The ultimate constraint that rules us all is that a seven-day week has 168 hours. A gradual reduction in the time spent on unpaid care activities, which have traditionally been primarily the job of women, is part of what makes society better off.


Friday, October 9, 2015

The Eurozone Crisis: Crystalizing the Narrative

The economy of the eurozone makes the US economy look like a picture of robust health by comparison. Richard Baldwin and Francesco Giavazzi have edited The Eurozone Crisis: A Consensus View of the Causes and a Few Possible Solutions, a VoxEU.org book from the Centre for Economic Policy Research in London. The book includes a useful introduction from the editors, followed by 14 mostly short and all quite readable essays.

As a starting point for non-European readers, consider that the unemployment rate across the 19 eurozone countries is still in double digits, at 11%.


While the US economy has experienced disappointingly sluggish growth since the end of the Great Recession in 2009, the eurozone economy fell back into a follow-up recession through pretty much all of 2011 and 2012, and since then has seen growth that is sluggish even by US standards.


What went wrong in the eurozone? Here's the capsule summary from the introduction by Baldwin and Giavazzi:

The core reality behind virtually every crisis is the rapid unwinding of economic imbalances. ... In the case of the EZ [eurozone] crisis, the imbalances were extremely unoriginal. They were the standard culprits that have been responsible for economic crises since time immemorial – namely, too much public and private debt borrowed from abroad. Too much, that is to say, in relation to the productive investment financed through the borrowing. 
From the euro’s launch and up until the crisis, there were big capital flows from EZ core nations like Germany, France, and the Netherlands to EZ periphery nations like Ireland, Portugal, Spain and Greece. A major slice of these were invested in non-traded sectors – housing and government services/consumption. This meant assets were not being created to help pay off the borrowing. It also tended to drive up wages and costs in a way that harmed the competitiveness of the receivers’ export earnings, thus encouraging further worsening of their current accounts. 
When the EZ crisis began – triggered ultimately by the Global Crisis – cross-border capital inflows stopped. This ‘sudden stop’ in investment financing raised concerns about the viability of banks and, in the case of Greece, even governments themselves. The close links between EZ banks and national governments provided the multiplier that made the crisis systemic. 
Importantly, the EZ crisis should not be thought of as a sovereign debt crisis. The nations that ended up with bailouts were not those with the highest debt-to-GDP ratios. Belgium and Italy sailed into the crisis with public debts of about 100% of GDP and yet did not end up with IMF programmes, while Ireland and Spain, with ratios of just 40% (admittedly kept artificially low by large tax revenues associated with the real estate bubble), needed bailouts. The key was foreign borrowing. Many of the nations that ran current account deficits – and thus were relying on foreign lending – suffered; none of those running current account surpluses were hit.
In working through their detailed explanation, here are a few of the points that jump out at me. When the euro first came into widespread use in the early 2000s, interest rates fell throughout the eurozone, and all the eurozone countries were able to borrow at the same rate; that is, investors were treating all governments borrowing in euros as having the same level of risk--Germany the same as Greece. Here's a figure showing the falling costs of government borrowing and the convergence of interest rates across countries.


The crucial patterns of borrowing that emerged were not about lending from outside Europe to inside Europe, but about lending between the countries of the eurozone--a pattern which strongly suggests that the common currency was at a level that generated ongoing trade surpluses and capital outflows from some countries, with corresponding trade deficits and capital inflows for others. Baldwin and Giavazzi write:
To interpret the individual current accounts, we must depart from an essential fact: The Eurozone’s current account as a whole was in balance before the crisis and remained close to balance throughout. Thus there was very little net lending from the rest of the world to EZ countries. Unlike in the US and UK, the global savings glut was not the main source of foreign borrowing – it was lending and borrowing among members of the Eurozone. For example, Germany’s large current account surpluses and the crisis countries deficits mean that German investors were, on net, lending to the crisis-hit nations – Greece, Ireland, Portugal and Spain (GIPS).
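A bit of national-income accounting may help make that point concrete. In the standard identity, a country's current account balance equals national saving minus domestic investment, and so equals its net lending to the rest of the world:

CA = S - I, \qquad \sum_{k} CA_k \approx 0

A deficit country (CA < 0) is, by definition, borrowing from abroad. And because the eurozone's aggregate current account was roughly in balance, the member-country balances summed approximately to zero, which means the periphery's deficits were financed almost entirely by the core's surpluses rather than by saving from outside the currency area.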

From the vantage point of 2015, it seems implausible that policymakers around Europe weren't watching these emerging imbalances with care and attention, and planning ahead for what actions could be taken with regard to government debt, private debt, banking reform, central bank lender-of-last-resort policy, and other issues. But of course, it's not uncommon for governments to ignore potential risks, and to make changes only after a catastrophe has occurred. Baldwin and Giavazzi note wryly:
It is, ex post, surprising that the building fragilities went unnoticed. In a sense, this was the counterpart of US authorities not realising the toxicity of the rising pile of subprime housing loans. Till 2007, the Eurozone was widely judged as somewhere between a good thing and a great thing.
And what has happened in the eurozone really is an economic catastrophe. Baldwin and Giavazzi conclude:
The consequences were and still are dreadful. Europe’s lingering economic malaise is not just a slow recovery. Mainstream forecasts predict that hundreds of millions of Europeans will miss out on the opportunities that past generations took for granted. The crisis-burden falls hardest on Europe’s youth whose lifetime earning-profiles have already suffered. Money, however, is not the main issue. This is no longer just an economic crisis. The economic hardship has fuelled populism and political extremism. In a setting that is more unstable than any time since the 1930s, nationalistic, anti-European rhetoric is becoming mainstream. Political parties argue for breaking up the Eurozone and the EU. It is not inconceivable that far-right or far-left populist parties could soon hold or share power in several EU nations. Many influential observers recognise the bind in which Europe finds itself. A broad gamut of useful solutions have been suggested. Yet existing rules, institutions and political bargains prevent effective action. Policymakers seem to have painted themselves into a corner.
For those looking for additional background on eurozone issues, here are links to a few previous posts on the subject: