Thursday, May 23, 2019

Time for Fiscal Rules?

Should governments set rules to constrain the size of government borrowing on an annual basis or government debt accumulated over time? Pierre Yared discusses the question in "Rising Government Debt: Causes and Solutions for a Decades-Old Trend," in the Spring 2019 issue of the Journal of Economic Perspectives.

There's really no economic case to be made for the plain-vanilla rule that national governments should balance their budget every year. During a recession, for example, tax revenues will fall as income falls, and government spending on  programs like unemployment insurance, Medicaid, and food stamps will rise. If in the face of these forces the government wanted to keep a balanced budget during a recession, it would thus need to find ways to raise its tax revenues and cut other spending even while the economy is weak. A more sensible strategy is to find ways for these fiscal "automatic stabilizers" to function more strongly.

But the foolishness of a simplistic rule to balance the budget every year doesn't mean that no rules at all can work. As Yared writes (citations omitted): "Thus, governments across the world have adopted fiscal rules—such as mandated deficit, spending, or revenue limits—to curtail future increases in government debt. In 2015, 92 countries had fiscal rules in place, a dramatic increase from 1990, when only seven countries had them."

The form of these rules varies across countries. A basic lesson seems to be that all fiscal rules are imperfect, and can be gamed or avoided if a government wishes to do so, but also that well-designed rules--even with looseness and imperfections--do offer some constraints and limits that can hold down the amount of government borrowing.

Yared mentions an IMF study by Luc Eyraud, Xavier Debrun, Andrew Hodge, Victor Duarte Lledo, and Catherine A. Pattillo called "Second-Generation Fiscal Rules: Balancing Simplicity, Flexibility, and Enforceability" (IMF Staff Discussion Note, SDN/18/04, April 13, 2018). They sum up the situation with fiscal rules in this way:
By improving fiscal performance, well-designed rules help build and preserve fiscal space while allowing its sensible use. Good rules encourage building buffers in good times and allow fiscal policy to support the economy in bad times. This implies letting automatic stabilizers operate symmetrically over the cycle and including escape clauses that allow discretionary fiscal support when needed. By supporting a credible commitment to fiscal sustainability, rules can also create space in the budget for financing growth-enhancing reforms and inclusive policies. 
To be effective, fiscal rules should have three main properties—simplicity, flexibility, and enforceability. These three properties are very difficult to achieve simultaneously, and past reforms have struggled to find the right balance. In the past decade, “second-generation” reforms have expanded the flexibility provisions (for example, with new escape clauses) and improved enforceability (by introducing independent fiscal councils, broader sanctions, and correction mechanisms). However, these innovations as well as the incremental nature of the reforms have made the systems of rules more complicated to operate, while compliance has not improved. ... 
This paper presents new evidence that well-designed rules are indeed effective in constraining excessive deficits. Country experiences show that successful rules generally have broad institutional coverage, are tightly linked to fiscal sustainability objectives, are easy to understand and monitor, and support countercyclical fiscal policy. Supporting institutions, like fiscal councils, are also important. In contrast, rules that are poorly designed and do not align well with country circumstances can be counterproductive. Novel empirical research finds that fiscal rules can reduce the deficit bias even when they are not complied with.

In his essay in JEP, Yared offers some more detailed insights. In some ways, the key issue isn't the fiscal rule you set, but rather what consequences will arise if the rule is broken. Here's Yared:
There are several issues to take into account when considering punishments for breaking fiscal rules. First, whether or not rules have been broken might be unclear. There are numerous examples of how governments can use creative accounting to circumvent rules. Frankel and Schreger (2013) describe how euro-area governments use overoptimistic growth forecasts to comply with fiscal rules. Many US states compensate government employees with future pension payments, which increases off-balance-sheet entitlement liabilities not subject to fiscal rules (Bouton, Lizzeri, and Persico 2016). In 2016, President Dilma Rousseff of Brazil was impeached for illegally using state-run banks to pay government expenses and bypass the fiscal responsibility law (Leahy 2016). Given this transparency problem, many countries have established independent fiscal councils to assess and monitor compliance with fiscal rules (Debrun et al. 2013).
A second issue to consider is the credibility of punishments. As an example, the Excessive Deficit Procedure against France and Germany in 2003 was stalled by disagreement between the European Commission and the European Council; consequently, French and German deficits persisted without penalty  ...

A third issue is the response of the private sector to the violation of rules, which can also serve as a form of punishment. For example, Eyraud, Debrun, Hodge, Lledó, and Pattillo (2018) [in the IMF study mentioned above] find that the violation of fiscal rules is associated with a significant increase in interest rate spreads for sovereign borrowing. Such an increase in financing costs immediately penalizes a government for breaching a rule. ...
Many governments’ fiscal rules feature an escape clause that allows violating the rule under exceptional circumstances (Lledó et al. 2017). Triggering an escape clause typically involves a review process, which culminates in a final decision by an independent fiscal council, a legislature, or citizens via a referendum. In Switzerland, for example, the government can deviate from a fiscal rule with a legislative supermajority in the cases of natural disaster, severe recession, or changes in accounting method. The cost of triggering an escape clause deters governments from using them too frequently. Moreover, because these costs largely involve a facilitation of information gathering to promote efficient fiscal policy, escape clauses are useful even in the presence of perfect rule enforcement.
Again, a theme that emerges is that a government that is serious about a fiscal rule will want to set up procedures to be followed when that rule is broken. In turn, those procedures should be highly public, so that the decision to break the fiscal rule must be explained, justified, and evaluated by an independent commission.

Another issue Yared mentions is that fiscal rules can take different forms: instrument-based rules that focus on specific categories of spending or taxes, or overall target-based rules. He writes:

In practice, fiscal rules can constrain different instruments of policy, such as specific categories of government spending or tax rates. Different instruments may call for different thresholds ... For instance, due to volatile geopolitical conditions, military spending needs may be less forecastable than other spending needs, and may thus demand more flexibility. Capital spending is another category where allowing increased flexibility may be optimal, as the benefits of capital spending accrue well into the future and are thus subject to a less-severe present bias. Thus, many countries have “golden rules,” which limit spending net of a government’s capital expenditure. ... Overall, the evidence suggests that rules that distinguish across categories are indeed associated with better fiscal and macroeconomic outcomes (for discussion, see Eyraud, Lledó, Dudine, and Peralta 2018). Moreover, it can be optimal to set multiple layers of rules, for example specifying a fiscal threshold for individual categories of taxes and spending as well as on the total level of taxes and spending in the form of a (forecasted) deficit rule.

Ultimately, Yared argues for the benefits of a hybrid rule, "which allows for an instrument threshold that is relaxed whenever a target threshold is satisfied."

In short, practical fiscal rules are quite possible, at least according to the 90-plus countries that have them. And research suggests that such rules do constrain government borrowing, even given that they will be broken from time to time. But simple-minded fiscal rules like the US government "debt ceiling" are essentially pointless, except for connoisseurs of short-term political dramas. Meaningful fiscal rules will not be simple, and will need to pay detailed attention not just to the overall goal, but to the practical issues of how much flexibility should surround the goal and what consequences will follow when government borrowing breaks through even a flexible rule.

Wednesday, May 22, 2019

Origins of "Microeconomics" and "Macroeconomics"

Economists have written about topics that we would now classify under the headings of "microeconomics" or "macroeconomics" for centuries. But the terms themselves are much more recent, emerging only in the early 1940s. For background, I turn to the entry on "Microeconomics" by Hal R. Varian published in The New Palgrave: A Dictionary of Economics, dating back to the first edition in 1987.

The use of "micro-" and "macro-" seems to date back to the work of Ragnar Frisch in 1933, but he referred to micro-dynamics and macro-dynamics. As Varian writes:
Frisch used the words ‘micro-dynamic’ and ‘macro-dynamic’, albeit in a way closely related to the current usage of the terms ‘microeconomic’ and ‘macroeconomic’: 
"The micro-dynamic analysis is an analysis by which we try to explain in some detail the behaviour of a certain section of the huge economic mechanism, taking for granted that certain general parameters are given ... The macrodynamic analysis, on the other hand, tries to give an account of the whole economic system taken in its entirety (Frisch 1933)." 
Elsewhere Frisch gives a more explicit definition of these terms that is closely akin to the modern usage of micro and macroeconomics: ‘Microdynamics is concerned with particular markets, enterprises, etc., while macro-dynamics relate to the economic system as a whole’ .... 
John Maynard Keynes does not seem to have used the micro- and macro- language. But Varian quotes a passage from the General Theory in 1936 to show that Keynes was quite aware of the distinction. Keynes wrote:
The division of Economics between the Theory of Value and Distribution on the one hand and the Theory of Money on the other hand is, I think, a false division. The right dichotomy is, I suggest, between the Theory of the Individual Industry or Firm and of the rewards and the distribution of a given quantity of resources on the one hand, and the Theory of Output and Employment as a whole on the other hand [emphasis in the original]. 
Varian points to a somewhat obscure economist P. de Wolff as the first to use "microeconomic" and "macroeconomic" in 1941. Varian writes:
The earliest published reference that explicitly uses the term ‘microeconomics’ that I have been able to locate is de Wolff (1941). De Wolff, a colleague of Tinbergen at the Netherlands Statistical Institute, was well aware of the macrodynamic modelling efforts of Frisch, and may have been inspired to extend Frisch’s use of ‘micro-dynamics’ to the more general expression of ‘microeconomics’. De Wolff’s note is concerned with what we now call the ‘aggregation problem’ – how to move from the theory of the individual consuming unit to the behaviour of aggregate consumption. ... He [de Wolff] is quite clear about the distinction between micro- and macroeconomics: 
"The concept of income elasticity of demand has been used with two entirely different meanings: a micro- and macro-economic one. The micro-economic interpretation refers to the relation between income and outlay on a certain commodity for a single person or family. The macro-economic interpretation is derived from the corresponding relation between total income and total outlay for a large group of persons or families (social strata, nations, etc.) [emphasis in original]."
In Varian's telling, the terms microeconomics and macroeconomics start popping up in academic journals and even some lesser-used textbooks in the 1940s, are in widespread use by the mid-1950s, and first appear in Paul Samuelson's canonical intro economics textbook in the 1958 edition.

Tuesday, May 21, 2019

Strengthening Automatic Stabilizers

For economists, "automatic stabilizers" refers to how tax and spending policies adjust during economic upturns and downturns, without any additional legislation, in a way that tends to stabilize the economy. For example, in an economic downturn, a standard macroeconomic prescription is to stimulate the economy with lower taxes and higher spending. But in an economic downturn, taxes fall to some extent automatically, as a result of lower incomes. Government spending rises to some extent automatically, as a result of more people becoming eligible for unemployment insurance, Medicaid, food stamps, and so on. Thus, even before the government undertakes additional discretionary stimulus legislation, the automatic stabilizers are kicking in.

Might it be possible to redesign the automatic stabilizers of tax and spending policy in advance so that they would offer a quicker and stronger counterbalance when (not if) the next recession comes? The question is especially important because in past recessions, the Federal Reserve often cut the policy interest rate (the "federal funds" interest rate) by about five percentage points. But interest rates are lower around the world for a variety of reasons, and the federal funds interest rate is now at 2.5%. So when the next recession comes, monetary policy will be limited in how much it can reduce interest rates before those rates hit zero percent, and will instead need to rely on nontraditional monetary policy tools like quantitative easing, forward guidance, and perhaps even experiments with a negative policy interest rate.

Heather Boushey, Ryan Nunn, and Jay Shambaugh have edited a collection of eight essays under the title Recession Ready: Fiscal Policies to Stabilize the American Economy (May 2019, Hamilton Project at the Brookings Institution and Washington Center for Equitable Growth).

In one of the essays, Louise Sheiner and Michael Ng look at US experience with fiscal policy during recessions in recent decades, and find that it has consistently had the effect of counterbalancing economic fluctuations. They write: "Fiscal policy has been strongly countercyclical over the past four decades, with the degree of cyclicality somewhat stronger in the past 20 years than the previous 20. Automatic stabilizers, mostly through the tax system and unemployment insurance, provide roughly half the stabilization, with discretionary fiscal policy in the form of enacted tax cuts and increased spending accounting for the other half."

Automatic stabilizers are important in part because the adjustments can happen fairly quickly. In contrast, when the discretionary Obama stimulus package--the American Recovery and Reinvestment Act of 2009--was signed into law in February 2009, the Great Recession had started 14 months earlier.

In another essay, Claudia Sahm proposes that along with the already-existing built-in shifts in taxes and spending, fiscal stabilizers could be designed to kick in automatically when a recession starts. In particular, she proposes that the trigger for such actions could be when "the three-month moving average of the national unemployment rate has exceeded its minimum during the preceding 12 months by at least 0.5 percentage points. ... The Sahm rule calls each of the last five recessions within 4 to 5 months  of its actual start. ... The Sahm rule would not have generated any incorrect signals in the last 50 years."
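Sahm's trigger is mechanical enough to sketch in code. The following is a minimal illustration of the rule as quoted above, not an official implementation; the function name, the minimum-data check, and the exact handling of the 12-month lookback window are my own choices:

```python
def sahm_rule_triggered(monthly_unemployment):
    """Sketch of the Sahm-rule recession trigger.

    monthly_unemployment: national unemployment rates in percent, oldest
    first. Returns True when the current three-month moving average
    exceeds its minimum over the preceding 12 months by at least 0.5
    percentage points.
    """
    if len(monthly_unemployment) < 15:
        raise ValueError("need at least 15 months of data")
    # Three-month moving average, one value per month from month 3 onward.
    ma = [sum(monthly_unemployment[i - 2:i + 1]) / 3
          for i in range(2, len(monthly_unemployment))]
    current = ma[-1]
    # Minimum of the moving average over the 12 months before the current one.
    floor = min(ma[-13:-1])
    return current - floor >= 0.5
```

A flat unemployment series never triggers; a rise of roughly half a point over a few months does, which is what lets the rule call a recession within a few months of its start rather than waiting for official dating.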

Sahm argues that when this trigger is hit, the federal government should have legislation in place that would immediately make a direct payment--which could be repeated a year later if the recession persists. She makes the case for a total payment of about 0.7% of GDP (given current GDP of around $20 trillion, this would be $140 billion). She writes: "All adults would receive the same base payment, and in addition, parents of minor dependents would receive one half the base payment per dependent." This isn't cheap! But a lasting and persistent recession is considerably more expensive.
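The arithmetic behind the per-person payment can be sketched simply. The population counts below are rough assumptions of mine for illustration, not figures from Sahm's proposal:

```python
def base_payment(total_outlay, n_adults, n_minor_dependents):
    # Each adult receives the base payment b, plus 0.5 * b per minor
    # dependent, so total_outlay = b * (n_adults + 0.5 * n_dependents).
    return total_outlay / (n_adults + 0.5 * n_minor_dependents)

total = 0.007 * 20e12   # 0.7% of a $20 trillion GDP: $140 billion
# Assumed, approximate counts: ~250 million US adults, ~73 million minors.
b = base_payment(total, 250e6, 73e6)  # works out to roughly $490 per adult
```

Under these assumed counts, the same $140 billion pot implies a larger check for families with children, which is the point of the half-payment per dependent.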

Other chapters of the book focus on a number of other proposals, which include: 
  • "[T]ransfer federal funds to state governments during periods of economic weakness by automatically increasing the federal share of expenditures under Medicaid and the Children’s Health Insurance Program"
  • "[C]reating a transportation infrastructure spending plan that would be automatically triggered during a recession"
  • Publicize availability of unemployment benefits when the unemployment rate starts rising, and extend the length of unemployment insurance payments at this time
  • Expand Temporary Assistance for Needy Families to include subsidized jobs in recessions
  • An automatic rise of 15% in Supplemental Nutrition Assistance Program (SNAP) benefits during recessions
The list isn't exhaustive, of course. For example, one policy used during the Great Recession was to have a temporary cut in the payroll taxes that workers pay to support Social Security and Medicare. For most workers, these taxes are larger than their income taxes. And there is a quick and easy way to get this money to people, just by reducing what is withheld from paychecks. 

The broader issue here, of course, is not about the details of specific actions, some of which are more attractive to me than others. It's whether we seize the opportunity now to reduce the sting of the next recession.

For estimates of automatic stabilizers in the past, see "The Size of Automatic Stabilizers in the US Budget" (November 23, 2015).

Here's a table of contents for the book edited by Boushey, Nunn, and Shambaugh:

Monday, May 20, 2019

Daniel Hamermesh: How Do People Spend Time?

For economists, the idea of "spending" time isn't a metaphor. You can spend any resource, not just money. Among all the inequalities in our world, it remains true that every person is allocated precisely the same 24 hours in each day. In "Escaping the Rat Race: Why We Are Always Running Out of Time," the Knowledge@Wharton website interviews Daniel Hamermesh, focusing on themes from his just-published book Spending Time: The Most Valuable Resource.

The introductory material at the start quotes William Penn, who apparently once said, “Time is what we want most, but what we use worst.” Here are some comments from Hamermesh:

Time for the Rich, Time for the Poor
The rich, of course, work more than the others. They should. There’s a bigger incentive to work more. But even if they don’t work, they use their time differently. A rich person does much less TV watching — over an hour less a day than a poor person. They sleep less. They do more museum-going, more theater. Anything that takes money, the rich will do more of. Things that take a lot of time and little money, the rich do less of. ...
I think complaining is the American national pastime, not baseball. But the thing is, those who are complaining about the time as being scarce are the rich. People who are poor complain about not having enough money. I’m sympathetic to that. They’re stuck. The rich — if you want to stop complaining, give up some money. Don’t work so hard. Walk to work. Sleep more. Take it easy. I have no sympathy for people who say they’re too rushed for time. It’s their own darn fault.

Time Spent Working Across Countries
Americans are the champions of work among rich countries. We work on average eight hours more per week in a typical week than Germans do, six hours more than the French do. It used to be quite a bit different. Forty years ago, we worked about average for rich countries. Today, even the Japanese work less than we do. The reason is very simple: We take very short vacations, if we take any. Other countries get four, five, six weeks. That’s the major difference. ...
What’s most interesting about when we work is you compare America to western European countries, and it’s hard to find a shop open on a Sunday in western Europe. Here, we’re open all the time. Americans work more at night than anybody else. It’s not just that we work more; we also work a lot more at night, a lot more in the evenings, and a heck of a lot more on Sundays and Saturdays than people in other rich countries. We’re working all the time and more. ...
It’s a rat race. If I don’t work on a Sunday and other people do, I’m not going to get ahead. Therefore, I have no incentive to get off that gerbil tube, get out of it and try to behave in a more rational way. ...  The only way it’s going to be solved is if somehow some external force, which in the U.S. and other rich countries is the government, imposes a mandate that forces us to behave differently. No individual can do it. ...
We have to force ourselves, as a collective, as a polity, to change our behavior. Pass legislation to do it. Every other rich country did that between 1979 and 2000. We think the Japanese are workaholics. They’re not workaholics. Compared to us, they work less than we do, yet 40 years ago they worked a heck of a lot more. They chose to cut back. ... It’s going to be a heck of a lot of trouble to change the rules so that people are mandated to take four weeks of vacation or to take a few more paid holidays. Other countries have done it. It didn’t just happen from the day the countries were born. They chose to do it. It’s a political issue, like the most important things in life. 
Time and Technology, Money Chasing Hours
Time is an economic factor; economics is about scarcity more than anything else. Because our incomes keep on going up, whereas time doesn’t go up very much, time is the increasingly important scarce factor.  ...
There’s no question technology has made us better off. Think about going to a museum. When I went to the Museum of Science and Industry in Chicago as a kid, you’d pull levers. You did a few things. These days, it’s all incredibly immersive. Great technology. But you can’t go to the museum in any less time. You can’t cut back on sleep. A few things are easier to do more quickly because of technology: cooking, cleaning, washing. I don’t know if you’re old enough to remember the semi-automatic washing machine with a wringer. Tremendous improvements in the things you do with the house. Technology has made life better, but it hasn’t saved us much time. ... So, we are better off, but it’s not that we’re going to have more time; we’re going to have less time. But we have more money chasing the same number of hours.
For a longer and more in-depth and wide-ranging discussion of these subjects, listen to the hour-long EconTalk episode in which Russ Roberts interviews Daniel Hamermesh (March 25, 2019).

Friday, May 17, 2019

Time for a Return of Large Corporation Research Labs?

It often takes a number of intermediate steps to move from a scientific discovery to a consumer product. A few decades ago, many larger and even mid-sized corporations spent a lot of money on research and development laboratories, which focused on all of these steps. Some of these corporate laboratories, like those at AT&T, Du Pont, IBM, and Xerox, were nationally and globally famous. But the R&D ecosystem has shifted, and firms are now much more likely to rely on outside research done by universities or small start-up firms. These issues are discussed in "The changing structure of American innovation: Cautionary remarks for economic growth," by Ashish Arora, Sharon Belenzon, Andrea Patacconi, and Jungkyu Suh, presented at the conference "Innovation Policy and the Economy 2019," held on April 16, 2019, hosted by the National Bureau of Economic Research, and sponsored by the Ewing Marion Kauffman Foundation.

On the importance of corporate laboratories to earlier decades of stronger US productivity growth, the authors note:
From the early years of the twentieth century up to the early 1980s, large corporate labs such as AT&T's Bell Labs, Xerox's Palo Alto Research Center, IBM's Watson Labs, and DuPont's Purity Hall were responsible for some of the most consequential inventions of the century, such as the transistor, cellular communication, the graphical user interface, optical fibers, and a host of synthetic materials such as nylon, neoprene, and cellophane.
But starting in the 1980s, firms began to rely more on universities and on start-ups to do their R&D. Here's one of many examples, the closing of the main DuPont research laboratory: 
A more recent example is DuPont's closing of its Central Research & Development lab in 2016. Established in 1903, DuPont Central R&D served as a premier lab on par with the top academic chemistry departments. In the 1960s, the central R&D unit published more articles in the Journal of the American Chemical Society than MIT and Caltech combined. However, in the 1990s, DuPont's attitude toward research changed as the company started emphasizing the business potential of research projects. After a gradual decline in scientific publications, the company's management closed the Experimental Station as a central research facility for the firm after pressure from activist investors in 2016.
The pattern shows up in broader trends. The authors write that "the number of publications per firm fell at a rate of 20% per decade from 1980 to 2006 for R&D performing American listed firms." Business-based R&D as a share of total R&D peaked back in the 1990s, and has been falling since then. The share of business R&D which is "research," as opposed to "development," has been falling, too. 

The authors tell the story of how so much research was based in corporations, or shared by corporations and universities, for the first six or seven decades of the 20th century, and how the shift to a greater share of research happening in universities took place. One big change was the Bayh-Dole act of 1980 (citations omitted):
Perhaps the most widely commented on reform of this era is the Bayh-Dole Patent and Trademark Amendments Act of 1980, which allowed the results of federally funded university research to be owned and exclusively licensed by universities. Since the postwar period, the federal government had been funding more than half of all research conducted in universities and owned the rights to the fruits of such research, totaling 28,000 patents. However, only a few of these inventions would actually make it into the market. Bayh-Dole was meant to induce industry to develop these underutilized resources by transferring property rights to the universities, which were now able to independently license at the going market rate.
As universities took on more research, corporations backed off. Here are a couple of examples: 
In 1979, GE's corporate research laboratory employed 1,649 doctorates and 15,555 supporting staff, while IBM employed 1,900 staff and 1,300 doctorate holders. The comparable figures in 1998 were 475 PhDs supported by 880 professional staff for GE, and 1,200 doctorate holders for IBM. Indeed, firms whose sales grew by 100% or higher between 1980 and 1990 published 20.6 fewer scientific articles per year. This contrast between sales growth and publications drop persists into the next two decades: firms that doubled in sales between 1990 and 2000 published 12.0 fewer articles. Publications dropped by 13.3 for such fast growth firms between 2000 and 2010.
A common pattern seems to be that the number of researchers and scientific papers is falling at a number of firms, but the number of patents at these same firms has been steadily rising.  Firms are putting less emphasis on the research, and more on development that can turn into well-defined intellectual property. This pattern seems to hold (mostly) across big information technology and computer firms. The pharmaceutical and biotech firms offer an exception of an industry that has continued to publish research--probably because published research is important in regulatory approval for many of their products. 
Overall, the new innovation ecosystem exhibits a deepening division of labor between universities that specialize in basic research, small start-ups converting promising new findings into inventions, and larger, more established firms specializing in product development and commercialization. Indeed, in a survey of over 6,000 manufacturing- and service-sector firms in the U.S. ... 49% of the innovating firms between 2007 and 2009 reported that their most important new product originated from an external source.
But in this new ecosystem of innovation, has something been lost? The authors argue that as businesses have outsourced R&D, it has contributed to the sustained sluggish pace of US productivity growth. They write: 
Spinoffs, startups, and university licensing offices have not fully filled the gap left by the decline of the corporate lab. Corporate research has a number of characteristics that make it very valuable for science-based innovation and growth. Large corporations have access to significant resources, can more easily integrate multiple knowledge streams, and their research is directed toward solving specific practical problems, which makes it more likely for them to produce commercial applications. University research has tended, more so than corporate research, to be curiosity-driven rather than mission-focused. It has favored insight rather than solutions to specific problems, and partly as a consequence, university research has required additional integration and transformation to become economically useful. This is not to deny the important contributions that universities and small firms make to American innovation. Rather, our point is that large corporate labs may have distinct capabilities, which have proved to be difficult to replace. Further, large corporate labs may also generate significant positive spillovers, in particular by spurring high-quality scientific entrepreneurship.
It's not clear how to encourage a resurgence of corporate research labs. Companies and their investors seem happy with the current division of R&D labor. But from a broader social perspective, the growing separation of companies from the research on which they rely suggests that the gap between scientific research and consumer products is growing, along with the possibility that economically valuable innovations are falling into that gap and never coming into existence.


Those interested in this argument might also want to check out "The decline of science in corporate R&D," written by Ashish Arora, Sharon Belenzon, and Andrea Patacconi, published in Strategic Management Journal (2018, vol. 39, pp. 3–32).

For those with an interest in the broader subject of US innovation policy, here's the full list of papers presented at the April 2019 NBER conference:

Thursday, May 16, 2019

Does the Federal Reserve Talk Too Much?

For a long time, the Federal Reserve (and other central banks) carried out monetary policy with little or no explanation. The idea was that the market would figure it out. But in the last few decades, there has been an explosion of communication and transparency from the Fed (and other central banks), consisting both of official statements and an array of public speeches and articles by central bank officials. On one side, a greater awareness has grown up that economic activity isn't just influenced by what the central bank did in the past, but also by what it is expected to do in the future. But does this "open mouth" approach clarify and strengthen monetary policy, or just muddle it?

Kevin L. Kliesen, Brian Levine, and Christopher J. Waller present some evidence on the changes in Fed communication and the results in "Gauging Market Responses to Monetary Policy Communication," published in the Federal Reserve Bank of St. Louis Review (Second Quarter 2019, pp. 69-92). They start by describing the old ways, by quoting an exchange between John Maynard Keynes and Bank of England Deputy Governor Sir Ernest Harvey on December 5, 1929:
KEYNES: Arising from Professor Gregory's questions, is it a practice of the Bank of England never to explain what its policy is?
HARVEY: Well, I think it has been our practice to leave our actions to explain our policy.
KEYNES: Or the reasons for its policy?
HARVEY: It is a dangerous thing to start to give reasons.
KEYNES: Or to defend itself against criticism?
HARVEY: As regards criticism, I am afraid, though the Committee may not all agree, we do not admit there is a need for defence; to defend ourselves is somewhat akin to a lady starting to defend her virtue.
From 1967 to 1992, the Federal Open Market Committee released a public statement 90 days after its meetings. The FOMC then started, sometimes, releasing statements right after meetings. Here's a figure showing how the length of these statements has expanded over time, with the shaded area showing the period of "unconventional monetary policy" during and after the Great Recession.

As one example,

[F]ollowing the August 9, 2011, meeting, the policy statement stated the following:
"The Committee currently anticipates that economic conditions—including low rates of resource utilization and a subdued outlook for inflation over the medium run—are likely to warrant exceptionally low levels for the federal funds rate at least through mid-2013."
In this case, the FOMC's intent was to signal to the public that its policy rate would remain low for a long time in order to spur the economy's recovery.
Here's a count of the annual "remarks" (speeches, interviews, testimony) by presidents of the regional Federal Reserve banks, members of the Board of Governors, and the chair of the Fed:

Here are some comments about Fed communication that seem interesting to me:
"Speeches have become important communication events. Chairman Greenspan's "new economy" speech in 1995 and his "irrational exuberance" speech in 1996 were among his more notable speeches. Chairman Ben Bernanke also gave notable speeches during his tenure. Two that stand out are his "Deflation: Making Sure 'It' Doesn't Happen Here" speech in 2002 and his global saving glut speech in 2005. ...
One of the key communication innovations during the Bernanke tenure was the public release of individual FOMC participants' expectations of the future level of the federal funds rate. Once a quarter, with the release of the SEP [Summary of Economic Projections], each FOMC participant—anonymously—indicates their preference for the level of the federal funds rate at the end of the current year, at the end of the next two to three years, and over the "longer run." These projections are often termed the FOMC "dot plots." According to the survey, both academics and those in the private sector found the dot plots of limited use as an instrument of Fed communication (more "useless" than "useful"). One-third of the respondents found the dot plots "useful or extremely useful," 29 percent found them "somewhat useful," and 38 percent found them "useless or not very useful." ...
We find that Fed communication is associated with changes in prices of financial market instruments such as Treasury securities and equity prices. However, this effect varies by type of communication, by type of instrument, and by who is doing the speaking. Perhaps not surprisingly, we find that the largest financial market reactions tend to be associated with communication by Fed Chairs rather than by other Fed governors and Reserve Bank presidents and with FOMC meeting statements rather than FOMC minutes.
It's probably impossible for a 21st century central bank to operate with what used to be an unofficial motto attributed to the long-ago Bank of England: "Never explain, never apologize." Just for purposes of political legitimacy, and for maintaining the independence of the central bank, a greater degree of transparency and explanation is needed. But if the choice is between the risk of instability from financial markets making predictions with very little central bank disclosure, or the risk of instability from financial markets making predictions with the current level of central bank disclosure, the current level seems preferable. The authors write:
The modern model of central bank communication suggests that central bankers prefer to err on the side of saying too much rather than too little. The reason is that most central bankers believe that clear and concise communication of monetary policy helps achieve their goals.

Wednesday, May 15, 2019

Alice Rivlin, 1931-2019, In Her Own Words

Alice Rivlin, who died yesterday, was a legend in the Washington policy community. In "Alice Rivlin: A career spent making better public policy," Fred Dews interviewed Rivlin for the Brookings Cafeteria Podcast on March 8, 2019.

If you would like some additional detail about Rivlin's career, there's a shorter interview from 1998 by Hali J. Edison, originally published in the newsletter of the Committee on the Status of Women in the Economics Profession (although a more readable reprint of the interview is here). A 1997 interview by David Levy of the Minneapolis Fed is here. If you want more Rivlin, here's an hour-long podcast she did with Ezra Klein, "Alice Rivlin, queen of Washington's budget wonks," from May 2016.

Rivlin was an economics major at Bryn Mawr College. From the Edison interview:
I wrote my undergraduate honors thesis on the economic integration of Western Europe, which was a pretty prescient topic choice in 1952. I even had a discussion of European monetary union! By then I was sufficiently hooked to be thinking about graduate school, but I went to Europe for a year first, where I had a junior job in Paris working on the Marshall Plan.
She entered Harvard's PhD program in economics in the 1950s. Here are some thoughts about graduate study and the academic job market at that time, from the Edison interview:
Harvard was having a hard time adjusting to the idea of women in the academy. Indeed, since I was already focused on policy, I applied first to the graduate school of public administration (now The Kennedy School), which rejected my application on the explicit grounds that a woman of marriageable age was a "poor risk." I then applied to the economics department, which had about 5 per cent females in the doctoral program. They were just working up their courage to allow women to be teaching fellows and tutors in economics. I taught mixed classes, but initially was assigned only women tutees. One of my tutees wanted to write an honors thesis on the labor movement in Latin America--a subject on which one of my male colleagues had considerable expertise. He was willing to supervise my young woman if I would take one of his young men. However, the boy's senior tutor objected to the switch on the grounds that being tutored by a woman would make a male student feel like a second class citizen. People actually said things like that in those days!

The second year that I taught a section of the introductory economics course, I was expecting a baby in March and did not teach the spring semester. The man who took over my class announced to the class that, since no woman could teach economics adequately, he would start over and the first semester grades would not count. It was an exceptionally bright class and I had given quite a few "A's," so the students were upset. The department chair had to intervene.

In retrospect, the amazing thing was that the women were not more outraged. I think we thought we were lucky to be there at all. Outwitting the system was kind of a game. One of the university libraries was closed to women, and its books could not even be borrowed for a female on inter-library loan. I don't remember being upset. If I needed a book, I just got a male friend to check it out for me. ...

Realistically, moreover, academic opportunities were limited for my generation of women graduate students. Most major universities did not hire women in tenure track positions. Early in my career (about 1962), the University of Maryland was looking for an assistant professor in my general area. I was invited by a friend on the faculty to give a seminar and then had an interview with the department chairman. He was effusive in his praise for my work and said how sorry he was that they could not consider me for the position. I asked why not, and he said that the dean had expressly forbidden their considering any women. That wasn't illegal at the time, so we both expressed our regrets, and I left with no hard feelings.
She ended up at the Brookings Institution. In the late 1960s came a stint at the Department of Health, Education and Welfare during the Johnson administration, then back to Brookings. In the mid-1970s, Congress decided to start the Congressional Budget Office, which Rivlin ran from 1975 to 1983. Here's Rivlin's description of how she was chosen as the original director, from the Dews interview:
I was the candidate of the Senate. They, rather stupidly, had two separate search processes, one in the Senate and one in the House. I told them they should never do that again, and they haven't. But that left them with two candidates. I was the candidate of the Senate and a very qualified man named Sam Hughes, who had been the deputy at OMB—no, at the Government Accounting Office—was the other candidate. But the chairman of the House Budget Committee was a man named Al Ullman, and Mr. Ullman had said in an off moment, over his dead body was a woman going to get this job. So, there was kind of a standoff, and then it was solved by an accidental event. The chairman of Ways and Means was a powerful congressman from Arkansas named Wilbur Mills, and he was a mover and shaker in the Congress and a very intelligent man. But he had a weakness—he was an alcoholic. And one night he and an exotic dancer named Fanne Fox were proceeding down Capitol Hill toward the Tidal Basin in his car and Fanne leapt out of the car and into the Tidal Basin. She didn't drown in the Tidal Basin—it's quite shallow—but it was a scandal and Wilbur Mills had to resign. And Al Ullman, chairman of the Budget Committee, was ranking member on Ways and Means, so he moved up. And that left a new chairman who wasn't committed to the previous process, Brock Adams, and he said to Senator Muskie, who was my sponsor, if you want Rivlin it's okay with me. So, I owe that job to Fanne Fox.
Rivlin later ran the Office of Management and Budget during the Clinton administration in the mid-1990s. From 1996 to 1999, she was vice chair of the Federal Reserve Board of Governors. Here's her description of the switch, from the Levy interview:
Off and on over my career, I've been asked if I wanted to be on the Federal Reserve, usually when I was doing something else that I loved doing. One time I was running the Congressional Budget Office. I was doing something very exciting that I wanted to go on doing. And then later, when I was in the Clinton administration, I was asked about the Fed, but I was fully engaged at the Office of Management and Budget and didn't want to leave that. But after I'd been there for almost four years, it did seem, perhaps, time for a change.
For some reason, that description makes me smile. For some people, being on the Fed is a once-in-a-lifetime opportunity. But if you have the capabilities and judgment of Alice Rivlin, it's an opportunity that gets offered to you every few years, until the time is right. From 1998 to 2001, Rivlin was chair of the District of Columbia Financial Responsibility and Management Assistance Authority, which had legal authority to oversee the finances of the District of Columbia.

Along the way, Rivlin went back to Brookings a few times, where she started her career 62 years ago in 1957. She taught classes at Georgetown and gave talks and wrote. Rivlin was working on one more book, hoping to publish it this fall. I hope it was close enough to complete that economists and everyone else can hear from her one more time.

Added later: For one more Rivlin interview, here's a 2002 interview which is part of an oral history of the Clinton presidency, and thus focused on the 1990s. The summary says: "Alice Rivlin discusses deficit reduction, working with the National Economic Council, North American Free Trade Agreement, 1995-1996 government shutdown, Haiti, and press relations."

Tuesday, May 14, 2019

Are Firms Doing a Lousy Job in How they Hire?

In a lot of economic models, firms decide to hire based on whether they need more workers to meet the demand for their products; in the lingo, labor is a "derived demand," derived from the desired level of output. Beyond that, economic models often don't pay much attention to the details of how hiring happens, assuming that profit-maximizing firms will figure out relatively cost-effective ways of gathering and keeping the skills and workers they need. But what if that hypothesis is wrong?

Peter Cappelli thinks so, and writes "Your Approach to Hiring Is All Wrong" in the May-June 2019 issue of the Harvard Business Review.  He writes:
Only about a third of U.S. companies report that they monitor whether their hiring practices lead to good employees; few of them do so carefully, and only a minority even track cost per hire and time to hire. ... Employers also spend an enormous amount on hiring—an average of $4,129 per job in the United States, according to Society for Human Resource Management estimates, and many times that amount for managerial roles—and the United States fills a staggering 66 million jobs a year. Most of the $20 billion that companies spend on human resources vendors goes to hiring.
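As a back-of-the-envelope check on the scale of those figures, here is the implied aggregate (my arithmetic, not Cappelli's, and it assumes the SHRM per-hire average applies uniformly across all 66 million hires, which it surely does not):

```python
# Rough arithmetic on the hiring figures quoted above.
# Assumption (mine): the SHRM average cost-per-hire applies
# uniformly to all hires; in reality it overstates the cost of
# low-wage hires and understates managerial ones.

cost_per_hire = 4_129          # dollars, SHRM estimate
hires_per_year = 66_000_000    # U.S. jobs filled per year

total_spend = cost_per_hire * hires_per_year
print(f"Implied annual hiring spend: ${total_spend / 1e9:.0f} billion")
```

Even as a crude sketch, the implied total runs to hundreds of billions of dollars a year, which puts the $20 billion flowing to human resources vendors, and the general lack of monitoring of hiring outcomes, in perspective.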

One big change that Cappelli emphasizes is a shift from filling job vacancies internally to filling them externally. The old working assumption was to hire from within, but in the last few decades, the working assumption seems to be that hiring from outside is preferable. Cappelli writes:
In the era of lifetime employment, from the end of World War II through the 1970s, corporations filled roughly 90% of their vacancies through promotions and lateral assignments. Today the figure is a third or less. When they hire from outside, organizations don’t have to pay to train and develop their employees. Since the restructuring waves of the early 1980s, it has been relatively easy to find experienced talent outside. Only 28% of talent acquisition leaders today report that internal candidates are an important source of people to fill vacancies—presumably because of less internal development and fewer clear career ladders. ... Companies hire from their competitors and vice versa, so they have to keep replacing people who leave. Census and Bureau of Labor Statistics data shows that 95% of hiring is done to fill existing positions. Most of those vacancies are caused by voluntary turnover. LinkedIn data indicates that the most common reason employees consider a position elsewhere is career advancement—which is surely related to employers’ not promoting to fill vacancies.
There doesn't seem to be evidence that hiring from outside is better. What evidence does exist suggests that internal hires get up the learning curve faster, and often don't need as much of an immediate pay bump. If you persuade someone to leave their current employer by offering more money, what you get is a worker whose top priority is "more money," rather than work challenges and career opportunities. ("As the economist Harold Demsetz said when asked by a competing university if he was happy working where he was: 'Make me unhappy.'")

A common emphasis of modern labor markets is to have a big "funnel," with lots of people applying for jobs but only maybe 2% eventually getting a job. But making the funnel as big as possible means that you face the costs of sorting through a very large number of applicants. And it turns out that lots of managers who are perfectly fine at running a business aren't necessarily all that good at evaluating job applicants.
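The funnel arithmetic is simple but worth making explicit. A minimal sketch, assuming the 2% hire rate mentioned above and a hypothetical per-application screening cost (my illustrative number, not one from the article):

```python
# Funnel arithmetic: a 2% hire rate implies roughly 50
# applications screened for every position filled.

hire_rate = 0.02                      # share of applicants hired
applicants_per_hire = 1 / hire_rate   # applications per filled job

# Hypothetical screening cost per application, for illustration only.
screening_cost = 25  # dollars

print(f"Applications per hire: {applicants_per_hire:.0f}")
print(f"Screening cost per hire: ${applicants_per_hire * screening_cost:.0f}")
```

The point of the sketch is that widening the top of the funnel multiplies the number of applications to be screened per hire, so any fixed per-application cost scales up with it.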

It turns out that college grades aren't a great predictor of future job performance. Interviews by managers aren't a great predictor, either. Interviewers tend to be biased toward applicants they would choose as friends, with shared interests and cultural background, but those aren't necessarily the people who will turn out to be the best employees. There are lots of newfangled machine learning techniques that purport to guide hiring, but they are recent enough that it's not clear what kind of workforces they ultimately end up producing.

So what does work?

1) Use actual tests of skills that will be useful in the job.

2) Think about promoting and filling positions from within.

3) Give applicants a realistic preview of what the job actually involves. This is old-style advice, but some companies like Google and Marriott have set up online games that give applicants a sense of the kinds of decisions they would need to make and tasks they would need to perform.

4) Evaluate hiring by following up on how employees perform. Yes, employee performance in big organizations can be hard to measure, but some basic approaches are available and underused. Which employees quit? Which employees are absent a lot? Which employees qualify for performance-based raises? Or just ask the supervisor if they would hire that person again.

In a nearby article in the same issue of HBR, Dane E. Holmes of Goldman Sachs describes how they hire 3,000 summer interns each year, thus collecting a talent pool they hope will drive the company in the future. Rather than having many different people try to carry out many different interviews at many different locations, Holmes describes a different approach:
"[W]e decided to use `asynchronous' video interviews—in which candidates record their answers to interview questions—for all first-round interactions with candidates. Our recruiters record standardized questions and send them to students, who have three days to return videos of their answers. This can be done on a computer or a mobile device. Our recruiters and business professionals review the videos to narrow the pool and then invite the selected applicants to a Goldman Sachs office for final-round, in-person interviews. (To create the video platform, we partnered with a company and built our own digital solution around its product.)"
This approach allows the company to reach out to a broader group of applicants, to standardize the interview process, to give applicants a sense of the sorts of issues that arise at this employer, to test the ability of applicants to respond to these sorts of issues, and to allow the first round of applicants to be evaluated in the same way. Goldman Sachs can also use the results to help match applicants to appropriate roles within the company.
We seem to be living in an economy with very low unemployment rates, and where lots of jobs are being advertised, but where actually being hired is often a costly process for both applicants and employers. Moreover, it's an economy that seems relatively full of outside options for shifting to other employers, but relatively light on inside options for expanding skills and building a career with one's current employer. A job market in a dynamic economy will always have some element of musical chairs, as people shift between jobs, but it should also encourage lasting matches between an employee and an employer when the fit is a good one.

Monday, May 13, 2019

The Origin of "Third World" and Some Ruminations

Back in the late 1970s when I was first reading about the world economy in any serious way, it was still common to describe the world as divided into "first world" market-driven high income economies, "second world" command-and-control economies, and "third world" low-income countries. Jonathan Woetzel offers a commentary on the sources of that nomenclature, and how outdated it has come to sound, in "From Third World To First In Class: Rapid economic growth is blurring the distinctions among developing, emerging and advanced countries," appearing in the most recent Milken Institute Review (Second Quarter 2019, pp. 22-33).  Woetzel writes:
When historians in the distant future look back at our era, the name Alfred Sauvy may appear in a footnote somewhere. Sauvy was a French demographer who coined the term “third world” in a magazine article in 1952, just as the Cold War was heating up. His point was that there were countries not aligned with the United States or the Soviet Union that had pressing economic needs, but whose voices were not being heard.
Sauvy deliberately categorized these countries as inferior: “tiers monde” (or third world) was an explicit play on “tiers état” (third estate), the ragged assembly of peasants and bourgeoisie under France’s ancien régime that was subservient to the monarchy (the first estate) and the nobility (the second). “The third world is ignored, exploited and mistrusted, just like the third estate,” Sauvy wrote. “The millennial cycle of life and death has become a cycle of misery.”
As a piece of editorial rhetoric based on the fetid geopolitical atmosphere of the time, Sauvy’s essay was on the mark. As prophecy about the course of economic progress, he could hardly have been more wrong. “Third world” today is politically incorrect as a phrase and economically incorrect as a concept, for it fails to take into account one of the biggest stories of the past half-century: the spectacular economic development that has taken place across the globe. Since Sauvy’s essay, some (but not all) of the countries he referred to have enjoyed very rapid growth and huge leaps in living standards, including in health and education. ... The changes have been so striking that we have reached a point where the very distinctions among “developing,” “emerging” and “advanced” countries have become blurred.
These other terms have been criticized for a lack of accuracy and political correctness, too. For example, if some countries are "advanced," then are other countries "backward" or "behind"? If some countries are described as "emerging," what are they emerging from, and what are they becoming? When countries were referred to as "developing," it sometimes seemed to be more of an optimistic outlook than an actual description, and referring to countries with rich and lengthy cultural, political and human inheritances as "undeveloped" seemed to put economic values ahead of all others.

Others have used acronyms, like BRICs and MINTs ("From BRICs to MINTs," February 24, 2014), but looking at clusters of four countries, whether it's Brazil, Russia, India, and China or Mexico, Indonesia, Nigeria, and Turkey, doesn't capture the breadth of the economic shift that is occurring.
Woetzel describes how the global economy is changing in response to four shifts: the rapid march of technological progress; the emerging “superstar” phenomenon, which is exacerbating inequalities; the rapidly changing dynamics of China’s economy; and the evolving nature of globalization itself. He draws on a report that he co-authored with Jacques Bughin, "Navigating a world of disruption" (McKinsey Global Institute, January 2019),  which describes the range and scope of economic success stories in countries around the world. That report notes: 
Among emerging economies, our research has identified 18 high-growth “outperformers” that have achieved powerful and sustained long-term growth—and lifted more than one billion people out of extreme poverty since 1990. Seven of these outperformers (China, Hong Kong, Indonesia, Malaysia, Singapore, South Korea, and Thailand) have averaged GDP growth of at least 3.5 percent for the past 50 years. Eleven other countries (Azerbaijan, Belarus, Cambodia, Ethiopia, India, Kazakhstan, Laos, Myanmar, Turkmenistan, Uzbekistan, and Vietnam) have achieved faster average growth of at least 5 percent annually over the past 20 years. Underlying their performance are pro-growth policy agendas based on productivity, income, and demand—and often fueled by strong competitive dynamics. The next wave of outperformers now looms, as countries from Bangladesh and Bolivia to the Philippines, Rwanda, and Sri Lanka adopt a similar agenda and achieve rapid growth.
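To get a feel for what those quoted growth rates mean when compounded, here is the arithmetic (my calculation from the rates in the excerpt, not figures from the report itself):

```python
# Compounding the growth figures in the McKinsey quote:
# at least 3.5% a year sustained for 50 years, and
# at least 5% a year sustained for 20 years.

factor_50yr = 1.035 ** 50   # cumulative growth multiple over 50 years
factor_20yr = 1.05 ** 20    # cumulative growth multiple over 20 years

print(f"3.5% for 50 years multiplies GDP by about {factor_50yr:.1f}x")
print(f"5% for 20 years multiplies GDP by about {factor_20yr:.1f}x")
```

So the first group's economies grew more than fivefold over the half-century, and the second group's economies more than doubled in just two decades, which is what makes the "outperformer" label stick.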
It's certainly true that the old distinctions are breaking down. I've written before about what it's like to be in a world economy "When High GDP No Longer Means High Per Capita GDP" (October 20, 2015).

Here's a list of high-income economies around the world, as classified by the World Bank. Some of the entrants on the high-income list may surprise people. Argentina and Chile? Korea and Israel? Poland and Croatia? If one digs into the numbers on GDP per capita, you find that South Korea is ahead of Spain, Portugal, and Greece, and only a couple of notches behind Italy. Israel is ahead of France and the United Kingdom in per capita GDP.

Meanwhile, China ranks with Mexico, Brazil, Thailand, and others in the "upper middle income" category. India and Indonesia are in the "lower middle income group." Looking ahead at the next few decades, most of the growth in the global economy seems likely to be coming from countries that were still being called "third world" four or five decades ago.

Follow-up: A correspondent from France sent along some follow-up thoughts about the origins of "third world." Above, Woetzel writes: "`Tiers monde' (or third world) was an explicit play on `tiers état' (third estate), the ragged assembly of peasants and bourgeoisie under France’s ancien régime that was subservient to the monarchy (the first estate) and the nobility (the second)." My correspondent writes:
1 - In fact, the "first estate" was the clergy and the "second estate" was the nobility.
2 - The "Tiers Etat" was far from uniformly "ragged", it also included some of the largest fortunes of France.
3 - The play on words is much more subtle and less dismissive in French. "Tiers", in French legalese and in everyday speech, means "third party", so basically Sauvy was also implicitly referring to countries which were not engaged in the defining conflict of the era, i.e., the Cold War.

Also you may be interested to know that, yes, "Sauvy was a French demographer", that was his main job, but that he was also an economic historian, whose three-volume, 1,500-page textbook on the French economy between 1918 and 1939 was the standard - and fairly unwholesome - text ...

Friday, May 10, 2019

How To Cut US Child Poverty in Half

Back in the 1960s, the poverty rate for those over-65 was about 10 percentage points higher than the poverty rate for children under 18. For example, in 1970 the over-65 poverty rate was about 25%, while the under-18 poverty rate was 15%. But government support for the elderly rose substantially, and  in the 1970s, the over-65 poverty rate dropped below the under-18 rate. For the last few decades, the under-18 poverty rate has been 7-9 percentage points higher than the over-65 poverty rate. In 2017, for example, the under-18 poverty rate was 17.5%, while the over-65 poverty rate was 9.2%.   (For the numbers, see Figure 6 in this US Census report from last fall.)

Poverty is always distressing, but poverty for children has the added element that it shapes the lives of future citizens, workers, and neighbors. The National Academies Press has published A Roadmap to Reducing Child Poverty, edited by Greg Duncan and Suzanne Le Menestrel (February 2019). There is of course nothing magic about a specific "poverty line." Being just a little above the poverty line isn't all that different from being just a little below it. But the existence of such a line that is measured the same way over time can still be useful for analysis and policy.

In my own mind, there is a compelling case for reducing child poverty based on the importance of improving equality of opportunity in America. But even if that argument leaves you cold, there is a compelling case based on cold-blooded cost-benefit analysis.

The correlation between child poverty and later outcomes is unarguable. As one example, the report notes:
A study by Duncan, Ziol-Guest, and Kalil (2010) is one striking example. Their study uses data from a national sample of U.S. children who were followed from birth into their thirties and examines how poverty in the first six years of life is related to adult outcomes. What they find is that compared with children whose families had incomes above twice the poverty line during their early childhood, children with family incomes below the poverty line during this period completed two fewer years of schooling and, as adults, worked 451 fewer hours per year, earned less than half as much, received more in Food Stamps, and were more than twice as likely to report poor overall health or high levels of psychological distress. Men who grew up in poverty, they find, were twice as likely as adults to have been arrested, and among women early childhood poverty was associated with a six-fold increase in the likelihood of bearing a child out of wedlock prior to age 21.
But correlation isn't causation, of course, as economists (and this study) are quick to note. For example, say that there is a strong correlation between families in poverty and a lower education level for the parents. Perhaps a substantial share of the problems for children in poverty are not caused by lower family income, but by the lower education level of parents. If the root cause is lower parental education levels, then raising these families above the poverty line in terms of income won't have much effect on the long-term problems faced by children from these families.  

Making the case that various income-support programs will indeed address problems of children in poverty thus requires more detailed arguments, and the report goes through a number of studies in detail. But broadly speaking, raising families with children out of poverty affects the long-term outcomes for children in two ways. The report notes (citations omitted):
An “investment” perspective may be adopted ... emphasizing that higher income may support children’s development and well-being by enabling poor parents to meet such basic needs. As examples, higher incomes may enable parents to invest in cognitively stimulating items in the home (e.g., books, computers), in providing more parental time (by adjusting work hours), in obtaining higher-quality nonparental child care, and in securing learning opportunities outside the home. Children may also benefit from better housing or a move to a better neighborhood. Studies of some poverty alleviation programs find that these programs can reduce material hardship and improve children’s learning environments.
The alternative, “stress” perspective on poverty reduction focuses on the fact that economic hardship can increase psychological distress in parents and decrease their emotional well-being. Psychological distress can spill over into marriages and parenting. ... Parents’ psychological distress and conflict have in fact been linked with harsh, inconsistent, and detached parenting. Such lower-quality parenting may harm children’s cognitive and socioemotional development. 
These are ways in which additional income affects child development. Here are a couple of examples, chosen from many, of the evidence that has accumulated on this point. The report writes:

Neuroscientists have produced striking evidence of the effect of early-life economic circumstances on brain development. Drawing from Hanson et al. (2013), Figure 3-3 illustrates differences in the total volume of gray matter between three groups of children: those whose family incomes were no more than twice the poverty line (labeled “Low SES” in the figure); those whose family incomes were between two and four times the poverty line (“Mid SES”); and those whose family incomes were more than four times the poverty line (“High SES”). Gray matter is particularly important for children’s information processing and ability to regulate their behavior. The figure shows no notable differences in gray matter during the first nine or so months of life, but differences favoring children raised in high-income families emerge soon after that. Notably, the study found no differences in the total brain sizes across these groups—only in the amount of gray matter.
Again, this study reports a correlation, not proof of causality. As the report notes: "However, the existence of these emerging differences does not prove that poverty causes them. This study adjusted for age and birth weight, but not for other indicators of family socioeconomic status that might have been the actual cause of these observed differences in gray matter for children with different family incomes." But with all due caution rigorously observed, it seems to me a highly suggestive correlation. 

Other studies look at the long-term effects of existing government programs that have raised income levels for poor families. Here's another example:
In their 2016 study of possible long-term effects of Food Stamp coverage in early childhood on health outcomes in adulthood, Hoynes, Schanzenbach, and Almond focus on the presence or absence of a cluster of adverse health conditions known as metabolic syndrome. In the study, metabolic syndrome was measured by indicators for adult obesity, high blood pressure, diabetes, and heart disease. Scores on these indicators of emerging cardiovascular health problems increased (grew worse) as the timing of the introduction of Food Stamps shifted to later and later in childhood (Figure 3-4). The best adult health was observed among individuals in counties where Food Stamps were already available when these individuals were conceived. Scores on the index of metabolic syndrome increase steadily until around the age of five.

Add all these kinds of studies and factors up, and you can obtain a rough-and-ready estimate of the total cost of child poverty. 

Holzer et al. (2008) base their cost estimates on the correlations between childhood poverty (or low family income) and outcomes across the life course, such as adult earnings, participation in crime, and poor health. ... Their estimates represent the average decreases in earnings, costs associated with participation in crime (e.g. property loss, injuries, and the justice system), and costs associated with poor health (additional expenditures on health care and the value of lost quantity and quality of life associated with early mortality and morbidity) among adults who grew up in poverty. ... Holzer et al. (2008) make a number of very conservative assumptions in their estimates of earnings and the costs of crime and poor health. ... All of these analytic choices make it likely that these estimates are a lower bound that understates the true costs of child poverty to the U.S. economy.
The bottom line of the Holzer et al. (2008) estimates is that the aggregate cost of conditions related to child poverty in the United States amounts to $500 billion per year, or about 4 percent of the Gross Domestic Product (GDP). The authors estimate that childhood poverty reduces productivity and economic output in the United States by $170 billion per year, or by 1.3 percent of GDP; increases the victimization costs of crime by another $170 billion per year, or by 1.3 percent of the GDP; and increases health expenditures, while decreasing the economic value of health, by $163 billion per year, or by 1.2 percent ...
McLaughlin and Rank (2018) build on the work of Holzer and colleagues by updating their estimates in 2015 dollars and adding other categories of the impact of childhood poverty on society. They include increased corrections and crime deterrence costs, increased social costs of incarceration, costs associated with child homelessness (such as the shelter system), and costs associated with increased childhood maltreatment in poor families (such as the costs of the foster care and child welfare systems). Their estimate of the total cost of childhood poverty to society is over $1 trillion, or about 5.4 percent of GDP. ...  They do make it clear that there is considerable uncertainty about the exact size of the costs of childhood poverty. Nevertheless, whether these costs to the nation amount to 4.0 or 5.4 percent of GDP—roughly between $800 billion and $1.1 trillion annually in terms of the size of the U.S. economy in 2018—it is likely that significant investment in reducing child poverty will be very cost-effective  over time.
Of course, various programs are already reducing the number of children who live below the poverty line. The figure shows estimates of what the child poverty rate would have been without certain programs, including the Earned Income Credit, the Child Tax Credit, the Supplemental Nutrition Assistance Program ("food stamps"), Supplemental Security Income, Social Security, unemployment compensation, and others. (One warning about the figure: the poverty rate for children is given here as 13%, because the study is using a Supplemental Poverty Measure that (for example) includes a value for in-kind benefits like Medicaid.) 
What additional programs would it take to reduce US child poverty by half? The report looks at a range of programs, designs, and combinations, seeking to provide a menu of options rather than a single recommendation. For example, one can look at assistance linked directly to work, like the Earned Income Credit, or assistance like food stamps or housing vouchers. One could provide means-tested benefits only to the poor, or a universal benefit to all children--but where the value of that benefit would be treated as taxable income for the non-poor. As one illustration, here's one set of policies that would make a substantial difference, with their estimated effects and costs. 

For example, if one chose the four top items on this list, the annual cost would be about $160 billion. The benefits later in life would be considerably larger. 

I don't propose spending $160 billion lightly. But I will point out that the expansion of health insurance under the Patient Protection and Affordable Care Act of 2010 costs the US government over $100 billion per year. Similarly, the Tax Cuts and Jobs Act passed in 2017 is projected to have an average cost of $100 billion per year (or more). In short, our political system does seem fully capable of belching up expenditures of this size when the stars are properly aligned. 

As the report points out, some of America's cousins have taken the plunge toward reducing child poverty by half.
The United States spends less to support low-income families with children than peer English-speaking countries do, and by most measures it has much higher rates of child poverty. Two decades ago, child poverty rates were similar in the United States and the United Kingdom. That began to change in March 1999, when Prime Minister Tony Blair pledged to end child poverty in a generation and to halve child poverty in 10 years. Emphasizing increased financial support for families, direct investments in children, and measures to promote work and increase take-home pay, the United Kingdom enacted a range of measures that made it possible to meet the 50 percent poverty reduction goal by 2008—a year earlier than anticipated. More recently, the Canadian government introduced the Canada Child Benefit in its 2016 budget. According to that government’s projections, the benefit will reduce the number of Canadian children living in poverty by nearly half.
Personally, I would be a lot more comfortable with the extent of US inequality if the child poverty rate were considerably lower, and thus the starting points for American children were closer together. 

Thursday, May 9, 2019

Low-Skill Male Workers: A Black Spot on the Rosy Employment Outlook

The monthly unemployment rate in April fell to 3.6%, the lowest monthly rate since December 1969. It's now been at 4.0% or less for more than a year. But in this generally quite positive employment environment, low-skill male workers have been an ongoing sore spot. The issues are discussed in a three-paper symposium in the Spring 2019 issue of the Journal of Economic Perspectives:
Binder and Bound set the stage: 
During the last 50 years, labor market outcomes for men without a college education in the United States worsened considerably. Between 1973 and 2015, real hourly earnings for the typical 25–54 year-old man with only a high school degree declined by 18.2 percent, while real hourly earnings for college-educated men increased substantially. Over the same period, labor-force participation by men without a college education plummeted. In the late 1960s, nearly all 25–54 year-old men with only a high school degree participated in the labor force; by 2015, such men participated at a rate of 85.3 percent.
Here's a figure from their paper showing labor force participation by level of education for "prime-age" males in the 25-54 age group. In the late 1960s, prime-age men of all education levels had very high labor force participation. But it has sagged over time for all education levels, and sagged the most for those with lower education levels. 

This drop-off in labor force participation has been accompanied by a wave of other symptoms, as discussed in the paper by Coile and Duggan. As one example, consider mortality rates for prime-age men, using their table. 

The overall mortality rate for men (bottom row) dropped dramatically from 1980 to 2000, but barely budged from 2000-2016. In particular, from 2000-2016, the mortality rate rose for men age 25-34 and for white men in the 25-54 age group as a whole. Looking at cause of death, there were big falls in death rates for prime-age men from heart disease and cancer in the 1980s and 1990s, but much smaller falls since then. Meanwhile, death rates for this group from accidents, suicides, and homicides went up from 2000-2016. Data on cause of death doesn't include education level, but the authors go on to show that these rises in death rates were more pronounced in areas with lower education levels. 

When Coile and Duggan look instead at reporting of health problems, they find:
"There is a steep health gradient with respect to education—within each age group, the share in fair or poor health is roughly 2.5 times as large for men with a high school education or less than for men with some college or more. Men with less education are similarly more likely to report having a work-limiting disability, limitations in physical activity or ADLs/IADLs [Activities of Daily Living or Instrumental Activities of Daily Living], and obesity ...  Men’s health ... is getting worse over time. ... [T]he fraction of men reporting a health problem is higher in 2015 than in 2000 in nearly every case."
Coile and Duggan look at a variety of other patterns for prime-age men, focusing on lower skill levels where the data makes it possible. For example, they note the sharp rise in incarceration rates for men from 1980 to 2000. The pattern that emerges is that the incarceration rate for men in the 45-54 age group is higher in 2016 than in 2000, reflecting large numbers of younger men sentenced to prison in the 1980s and 1990s. However, the incarceration rate for men in the 25-34 and 35-44 age group is generally down in 2016 compared to 2000. As one example, the incarceration rate of black men ages 25-34 was 5.5% in 1980, 12.8% in 2000, and 7.4% in 2016. 

Marriage rates have been on a generally downward trend as well, although the drop-off from 2000 to 2016 is a lot smaller than the fall for the 1980-2000 period. Here's an illustrative table from Coile and Duggan: 
In some general way, all of these factors seem to combine into a shadowy picture. Low-skill men are working less, reporting worse health, were for a time more likely to be locked up, and seem less likely to form family ties. How do these factors connect? 

Binder and Bound focus on the task of explaining the drop in labor force participation. They argue that the reduced demand for labor of low-skill but prime-age men (perhaps because of shifts in technology or international trade) isn't nearly enough to explain the drop in their labor force participation. They also offer back-of-the-envelope estimates that while higher disability rates may affect men in the 45-54 age bracket, they aren't likely to explain less labor force participation for the younger prime-age men. They write:
On its own, falling labor demand does not sufficiently explain the secular decline in less-educated male labor-force participation—at least, not without allowing for substantial adjustment frictions in the long run as well as the short run. Rising access to Disability Insurance is at most a partial explanation for the 45–54 year-old group and matters quite little for younger men and for high school dropouts. Rising exposure to prison may be a significant factor for dropouts and for blacks without college education, but labor-force participation for these groups began declining decades before prison populations skyrocketed. Certainly no single explanation can sufficiently explain the decline, and even in combination, the explanations appear insufficient.
We suspect that there is another factor at play. We will argue that the prospect of forming and providing for a new family constitutes an important male labor supply incentive; and thus, that developments within the marriage market can influence male labor-force participation. A decline in the formation of stable families produces a situation in which fewer men are actively involved in family provision or can expect to be involved in the future. This removes a labor supply incentive; and the possibility of drawing support from one’s existing family ... creates a feasible labor-force exit.
The paper by Edin, Nelson, Cherlin and Francis is by a group of sociologists, based on in-depth interviews with working-class men who have children but are not married to the mothers and do not live with them. They argue that low-skilled men are often trying to renegotiate their relationship to jobs, family and religion--but that many of them are in a social setting where these attempts lead to "haphazard lives." They write (citations omitted):
[W]e show that working-class men are not simply reacting to changes in the economy, family norms, or religious organizations. Rather, they are attempting to renegotiate their relationships to these institutions by attempting to construct autonomous, generative selves. For example, these men’s desire for autonomy in jobs seems rooted in their rejection of the monotony and limited autonomy that their fathers and grandfathers experienced in the workplace, along with a new ethos of self-expression. Similarly, these working-class men focus on their ties to their children even when they have little relationship with the children’s mothers, and they seek spiritual fulfillment even though they disdain organized religion. ... In sum, these working-class men show both a detachment from institutions and an engagement with more autonomous forms of work, childrearing, and spirituality ... . Autonomy refers to independent action in pursuit of personal growth and development. Personal growth has come to be highly valued among middle class Americans but until recently has not been associated with the working class. ... [P]ast scholarship typically assumed that such forms of action would usually only be found among those so materially comfortable that they needn’t spend time worrying about their economic circumstances ...

Our interviews strongly suggest that the autonomous, generative self that many men described is also a haphazard self. For example, vocational aspirations usually remain nebulous and tentative, rarely taking the form of an explicit strategy. In the meantime, career trajectories are often replaced by a string of random jobs. These men typically transitioned to parenthood more by accident than design, and in the context of tenuous romantic relationships. ... Religious community and a systemic belief system have been replaced by a patched-together religious identity that holds little sway over behavior, especially as it is divorced from the communal aspects of faith that have adhered working-class men to a set of behavioral norms. ...

The optimistic reading of the developments we have described is that working-class men are now sharing in the autonomy and generativity that was largely the province of middle- and upper-class men in previous generations. Moreover, the interest they show in being involved as fathers and in helping others could represent a widening of the boundaries of masculinity in ways that are more consistent with contemporary family and work life. The pessimistic reading is that these men are pursuing goals that they are unlikely to achieve due to their lack of social integration. They must find their way without ties to steady work, stable families, and organized religion. Without social support, their chances of success diminish. Those who fail to achieve the autonomous, generative selves they crave will have little to fall back on and few people to prevent them from sinking into despair.
In other words, the problems of low-skilled men in US society are certainly not just a matter of income, and not just a matter of having a job, either. Instead, they are related to a more wide-ranging disconnectedness, which shows up across many domains of behavior and outcomes.  

Wednesday, May 8, 2019

Snapshots of US Income Taxation Over Time

As Americans recover from our annual April 15 deadline for filing income taxes, here are a series of figures about longer-term patterns of taxes in the US economy. They are drawn from a series of blog posts by the Tax Foundation over the last few months. The Tax Foundation is a nonpartisan group whose analysis typically leans toward the view that taxes on those with high incomes are already high enough. However, the figures that follow are compiled from fairly standard data sources: IRS data, the Congressional Budget Office, and the like.

For example, here's a figure showing what taxes are the main sources of federal income over time from Erica York. She writes: "Before 1941, excise taxes, such as gas and tobacco taxes, were the largest source of revenue for the federal government, comprising nearly one-third of government revenue in 1940. Excise taxes were followed by payroll taxes and then corporate income taxes. Today, payroll taxes remain the second largest source of revenue. However, other sources have shifted in relative importance. Specifically, individual income taxes have become a central pillar of the federal revenue system, now comprising nearly half of all revenue. Following an opposite trend, corporate income and excise taxes have decreased relative to other sources."

Indeed, for all the huffing and puffing over income taxes, it's worth remembering that 67.8% of US taxpayers in 2019 will pay more in federal payroll taxes (which fund Social Security, Medicare, and disability insurance) than in federal income taxes. Robert Bellefiore offers this figure, drawn from a Joint Committee on Taxation study, showing that this pattern holds on average for all income groups under $200,000.

Arguments over taxes often make fairness claims about the share of taxes paid by various income groups. Whatever one's ultimate conclusions about what should happen, it's useful to start from the basis of what is actually happening.

It's common to hear a complaint that those with high incomes are evading federal taxes. Some do, of course. It's a big country. If a very rich person puts all their money into tax-exempt bonds, with the associated lower interest rates for being tax-free, they won't pay taxes on that income. But on average, those with higher incomes do pay a much larger share of taxes. Robert Bellefiore offers a couple of illustrative graphs. The first figure focuses only on federal income taxes.

The second figure includes the share of all federal taxes: that is, income, payroll, corporate (as attributed to individuals who benefit from corporate profits), excise taxes on gasoline, tobacco, and alcohol, and so on. Again, those with higher income levels pay a larger share of total federal taxes.

One can of course still argue that the share of taxes paid by those with high incomes should be larger. But to argue that those with high incomes don't already pay a larger share of federal taxes is simply untrue.

What about taxes paid at the very tip-top of the income distribution? Erica York offers this figure on the average tax rates paid by the top 0.1%. To be clear, the "average" tax rate is the actual share of income paid in taxes, which is different from the "marginal" tax rate charged on the highest $1 of income earned. Back in the 1950s, the highest marginal income tax rates sometimes reached 90%. The fact that the average tax rate is so much lower tells you that those very high marginal tax rates were largely for show, in the sense that they didn't actually apply to very much income. York writes: "The graph below illustrates the average tax rates that the top 0.1 percent of Americans faced over the last century, based on research from Thomas Piketty, Emmanuel Saez, and Gabriel Zucman. The blue line includes the impact of all federal, state, and local taxes on individual income, payroll taxes, estates, corporate profits, properties, and sales. The purple line shows income taxes only, including federal, state, and local." The overall pattern is that while effective tax rates on the top 0.1% were higher in the 1950s, they haven't shown much long-term trend one way or the other over the last half-century or so.

When listening to arguments over tax policy, it's common to hear complaints about whether deductions should be limited for purposes like mortgage interest, state and local taxes, or charitable contributions. It's useful to remember that those deductions don't apply to most taxpayers. Erica York explains: "In 2016, barely a quarter of households with adjusted gross income (AGI) between $40,000 and $50,000 claimed itemized deductions when filing their taxes. In contrast, more than 90 percent of households making $200,000 and above itemized their deductions." One effect of the 2017 tax reform law is that the number of taxpayers who find it useful to itemize deductions will drop by as much as 60%.

The share of total federal taxes paid by those with high incomes has been rising over time. Part of the reason is that the share of taxpayers who owe zero in federal income tax has also been rising. Robert Bellefiore provides a graph. One main reason for the rising share of taxpayers who owe zero is the expansion of refundable tax credits aimed at those with lower incomes, including the Earned Income Tax Credit and the Child Credit. You can also see that the share of those owing zero income tax rose during the Great Recession.

In a different post, Robert Bellefiore offers a chart showing the overall effects of federal tax and transfer policy on the share of income received by different groups. He writes: "The lowest quintile’s income nearly doubles, while the second and middle quintiles experience relatively smaller increases in income. The fourth quintile’s income share remains constant, and only the highest quintile has a lower share of income after taxes and transfers. The top 1 percent’s share of income, for example, falls from 16.6 percent to 13.2 percent."

Again, one can argue that the amount of redistribution should be larger. But it would be untrue to argue that a significant amount of redistribution--like doubling the after-taxes-and-transfers share of the lowest quintile--doesn't already happen.