How many people in the world have a million dollars or more in financial assets? That is, leave aside the value of real estate or other owned property. Capgemini and RBC Wealth Management provide some estimates in a report that seeks to define the global market for the wealth management industry, the World Wealth Report 2013.
The report splits High Net Worth Individuals (HNWI, natch) into three categories. Those with $1 million to $5 million in financial assets are in the "millionaire next door" category, and while I find that name a bit grating, it's fair enough. After all, a substantial number of households in high-income countries that are near retirement, if they have been steadily saving throughout their working lives, will have accumulated $1 million or more. The next step up is those with $5 million to $30 million in financial assets, whom this report calls the "mid-tier millionaires." At the top, with more than $30 million in financial assets, are the "ultra-HNWI" individuals. Here's the global distribution:
A few quick observations:
1) The "ultra-HNWI" individuals are less than1% of the total HNWI population, but have 35% of the total assets of this group. The "millionaires next door" with $1 million to $5 million are 90% of the high net worth individual population, and have 42.8% of the total net worth of this group.
2) Another table in the report shows that 3.4 million of the high net worth individuals--about 28% of the total--are in the United States. The next four countries by number of people in the high net worth category are Japan (1.9 million), Germany (1.0 million), China (643,000), and the UK (465,000).
3) World population is about 7 billion. So the 12 million or so high net worth individuals are about one-sixth of 1% of the world population.
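As a quick back-of-the-envelope check on that proportion, here's a minimal sketch in Python, using only the rounded figures above:

# Back-of-the-envelope check on the shares quoted above.
hnwi_population = 12_000_000        # roughly 12 million HNWIs worldwide
world_population = 7_000_000_000    # roughly 7 billion people

share = hnwi_population / world_population
print(f"HNWI share of world population: {share:.3%}")       # about 0.171%
print(f"One-sixth of 1%, for comparison: {0.01 / 6:.3%}")   # about 0.167%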
Thursday, June 27, 2013
High School Standards and Graduation Rates: The Tradeoff
One grim piece of news for the U.S. economy in the last few decades has been that the high school graduation rate flattened out around 1970. In a modern economy that depends on skills and brainpower, this is a troubling pattern. Richard J. Murnane considers the evidence and explanations in "U.S. High School Graduation Rates: Patterns and Explanations," in the most recent issue of the Journal of Economic Literature (vol. 51:2, pp. 370–422). The JEL is not freely available on-line, although many in academia will have on-line access through their library or through a personal membership in the American Economic Association. Here's how Murnane sets the stage (footnotes and citations omitted for readability):
"During the first seventy years of the twentieth century, the high school graduation rate of teenagers in the United States rose from 6 percent to 80 percent. A result of this remarkable trend was that, by the late 1960s, the U.S. high school graduation rate ranked first among countries in the Organisation for Economic Co-operation and Development (OECD). The increase in the proportion of the labor force that had graduated from high school was an important force that fueled economic growth and rising incomes during the twentieth century.Here's a figure showing the patterns. The horizontal axis shows (approximate) birth year. The vertical axis shows the high school graduation rate for those who were 20-24 at the time. Thus, for example, the upward movement of the graph for those born around 1980 is based on data when those people had reached the age of 20-24.
Between 1970 and 2000, the high school graduation rate in the United States stagnated. In contrast, the secondary school graduation rate in many other OECD countries increased markedly during this period. A consequence is that, in 2000, the high school graduation rate in the United States ranked thirteenth among nineteen OECD countries.
Until quite recently, it appeared that the stagnation of the U.S. high school graduation rate had continued into the twenty-first century. However, evidence from two independent sources shows that the graduation rate increased substantially between 2000 and 2010. This increase prevented the United States from losing further ground relative to other OECD countries in preparing a skilled workforce. But graduation rates in other OECD countries also increased during that decade. As a result, the U.S. high school graduation rate in 2010 was still below the OECD average."
Why the stagnation in high school graduation rates from 1970 up to about 2000? Clearly, it's not because the labor market rewards to getting a high school degree had declined; in fact, the gains from a high school degree had increased. Research has looked at explanations especially relevant at certain places and times, like a boom in demand for Appalachian coal in the 1970s that might have made a high school degree look less valuable to lower-skilled labor in that area, or how the crack epidemic of the late 1980s and early 1990s altered the expected rewards to finishing high school for a group of young men in certain inner cities, or how the end of certain court-ordered desegregation plans led to higher dropout rates for some at-risk youth.
While all of these explanations have an effect in certain times and places, Murnane suggests a bigger cause: a pattern of increasing high school graduation requirements that started in the 1970s. He writes: "In summary, my interpretation of the evidence is that increases in high school graduation requirements during the last quarter of the twentieth century increased the nonmonetary cost of earning a diploma for students entering high school with weak skills. By so doing, they counteracted the increased financial payoff to a diploma and contributed to the stagnation in graduation rates over the last decades of the twentieth century."
Why have high school graduation rates apparently risen in the last 10 years or so? Murnane offers some fragmentary evidence that better-prepared ninth graders, expanded preschool programs, and reductions in teen pregnancy may have played a role. But his conclusion is modest: "In summary, there are many hypotheses for why the high school graduation rate of 20–24-year-olds in 2010 is higher than it was in 2000, and why the increase in the graduation rate was particularly large for blacks and Hispanics. However, to date, there is no compelling evidence to explain this encouraging recent trend."
Murnane offers the useful reminder that voting to raise graduation standards is easy, but raising the quality of education so that a rising share of students can meet those standards is hard. "An assumption implicit in state education policies is that the quality of schooling will improve sufficiently to enable high school graduation rates to rise even as graduation requirements are stiffened. Indeed, many states increased public expenditures on public education to facilitate this improvement. However, it has proven much more difficult to improve school quality than to legislate increases in graduation requirements."
My own concern about high school graduation requirements is that they are too often focused on getting a student into a college, any college, rather than moving the student toward a career. A high school student in the 25th percentile of a class should still be able to graduate from high school. But while some students who performed poorly in high school will shine in college--and should have an opportunity to do so--it is an unforgiving fact that many students at the bottom of the high school performance distribution will have little interest in, or aptitude for, signing up for more schooling.
In the Spring 2013 issue of the Journal of Economic Perspectives, Julie Berry Cullen, Steven D. Levitt, Erin Robertson, and Sally Sadoff tackle this question: "What Can Be Done To Improve Struggling High Schools?" They point out that the overall high school graduation rates do not show the depth of the problem in a number of inner-city school districts. They conclude: "In spite of decades of well-intentioned efforts targeted at struggling high schools, outcomes today are little improved. A handful of innovative programs have achieved great success on a small scale, but more generally, the economic futures of the students at the bottom of the human capital distribution remain dismal. In our view, expanding access to educational options that focus on life skills and work experience, as opposed to a focus on traditional definitions of academic success, represents the most cost-effective, broadly implementable source of improvements for this group." (Full disclosure: I've been the Managing Editor of the Journal of Economic Perspectives for the past 27 years, so I am predisposed to find all of the articles intriguing. All JEP articles back to the first issue in 1987 are freely available on-line courtesy of the American Economic Association.)
Tuesday, June 25, 2013
Setting a Carbon Price: What's Known, What's Not
A number of scientists believe that rising levels of carbon dioxide are likely to lead to climate change. Maybe they are incorrect! But prudence suggests that when enough warning sirens are going off, you should at least start looking at options. In that spirit, I found it useful to consider Robert S. Pindyck's essay on "Pricing Carbon When We Don't Know the Right Price," in the Summer 2013 issue of Regulation magazine. The issue also includes four other articles on carbon tax issues. Pindyck sets the stage in this way:
"There is almost no disagreement among economists that the true cost to society of burning a ton of carbon is greater than its private cost. ... This external cost is referred to as the social cost of carbon (SCC) and is the basis for the idea of imposing a tax on carbon emissions or adopting a similar policy such as a cap-and-trade system. However, agreeing that the SCC is greater than zero isn’t really agreeing on very much. Some would argue that any increases in global temperatures will be moderate, will occur in the far distant future, and will have only a small impact on the economies of most countries. If that’s all true, it would imply that the SCC is small, perhaps only around $10 per ton of CO2, which would justify a very small (almost negligible) tax on carbon emissions, e.g., something like 10 cents per gallon of gasoline. Others would argue that without an immediate and stringent GHG abatement policy, there is a reasonable possibility that substantial temperature increases will occur and might have a catastrophic effect. That would suggest the SCC is large, perhaps $100 or $200 per ton of CO2, which would imply a substantial tax on carbon, e.g., as much as $2 per gallon of gas. So who is right, and why is there such wide disagreement?"Pindyck acknowledges the uncertainty over how the atmospheric science of climate change, but as befits an economist, his main focus is on the economic issues. He points to the often cited study by Michael Greenstone, Elizabeth Kopits and Ann Wolverton, who published a 2011 paper on "Estimating the Social Cost of Carbon for Use in U.S. Federal Rulemakings: A Summary and Interpretation." They estimated a "central value" for the social cost of carbon of $21 per ton of carbon dioxide emissions. But as Pindyck points out, this central value is of uncertain value for three reasons. First, the link from climate change to an effect on economic output " is completely ad hoc and of almost no predictive value. The typical IAM ["integrated assessment model"] has a loss function that relates temperature increases to reductions in GDP. But there is no economic theory behind the loss function; it is simply made up. Nor are there data on which to base the parameters of the function; instead the parameters are simply chosen to yield moderate losses that seem “reasonable” (e.g., 1 or 2 percent of GDP) from moderate temperature increases (e.g., 2° or 3°C). Furthermore, once we consider larger increases in temperatures (e.g., 5°C or higher), determining the economic loss becomes pure guesswork. One can plug high temperatures into IAM loss functions, but the results are just extrapolations with no empirical or theoretical grounding."
A second problem is that the "central value" doesn't reveal anything about the potential risk of catastrophe--and by the time one combines the uncertainties over how well climate science can predict catastrophic weather changes 50 or 100 years away with the uncertainties over the economic costs of those changes, this problem is severe.
The third problem is choosing a "discount rate"--that is, how should we best compare the costs of acting in the near-term to reduce carbon emissions with the benefits that would be received in 50 or 100 years? Presumably, a substantial share of the benefits will go to people who do not yet exist, and who, presuming that economic growth continues over time, will on average have considerably higher incomes than we do today. Placing a high value on those future benefits means that we should be willing to sacrifice a great deal in the present; placing a lower value on those future benefits means a smaller willingness to incur costs in the present. But deciding how much to discount the future is an unsettled question in both economics and philosophy.
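To see why the discount rate dominates these calculations over 50- or 100-year horizons, here's a minimal sketch; the $100 benefit and the two rates are illustrative assumptions, not figures from Pindyck:

# Present value today of a $100 climate benefit received 100 years from now,
# under two illustrative discount rates.
def present_value(benefit, rate, years):
    return benefit / (1 + rate) ** years

for rate in (0.01, 0.05):
    print(f"At {rate:.0%} per year: ${present_value(100.0, rate, 100):.2f} today")
# At 1% per year: about $36.97 today
# At 5% per year: about $0.76 today

The same future benefit is worth nearly fifty times more under the lower rate, which is why the choice of discount rate is so fiercely contested.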
Pindyck's policy proposal is to set a low carbon tax now. He argues: "Because it is essential to establish that there is a social cost of carbon, and that social cost must be internalized in the prices that consumers and firms actually see and pay. Later, as we learn more about the true size of the SCC, the carbon tax can be increased or decreased accordingly." My own views on this subject favor a "Drill-Baby Carbon Tax."
But I'd take a moment here to note that the temptation to argue based on the low-probability chance of catastrophe needs to be handled with care. After all, there are lots of possible sequences of events that are low-probability, but potentially catastrophic. Those who want to limit use of fossil fuels call up certain climate change scenarios. Those who are anti-science point to the possibility that scientists working with genetics or nanotechnology over the next century will create a doomsday plague. Those who favor huge spending on defense and espionage point to the possibility that a rogue government or a group of terrorists will be able to arm themselves with weapons of mass destruction. Those who favor aggressive space exploration talk about the possibility of the earth suffering a devastating strike from an asteroid in the next century or two. Write your own additional political, economic, and science fiction disaster scenarios here! My point is that being able to name a catastrophe with a low but unquantifiable probability is a fairly cheap tool of argumentation.
Monday, June 24, 2013
The Punch Bowl Speech: William McChesney Martin
In monetary policy jargon, "taking away the punch bowl" refers to a central bank action to reduce the stimulus that it has been giving the economy.
Thus, last Wednesday, Ben Bernanke discussed the possibility that if the U.S. economy performs well, the Federal Reserve would reduce and eventually stop its "quantitative easing" policy of buying U.S. Treasury bonds and various mortgage-backed securities. Everyone knows this needs to happen sooner or later, but Bernanke's comments raised the possibility that it might be sooner rather than later, and at least for a few days, stock markets dropped and broader financial markets were shaken.
Various blog commentaries and press reports referred to Bernanke's action as taking away the "punch bowl" (for example, here, here, and here).
The "punch bowl" metaphor seems to trace back to a speech given on October 19, 1955, by William McChesney Martin, who served as Chairman of the Federal Reserve from 1951 through 1970, to the New York Group of the Investment Bankers Association of America. Here's what Martin said to the financiers of his own time, who presumably weren't that eager to see the Fed reduce its stimulus, either:
"If we fail to apply the brakes sufficiently and in time, of course, we shall go over the cliff. If businessmen, bankers, your contemporaries in the business and financial world, stay on the sidelines, concerned only with making profits, letting the Government bear all of the responsibility and the burden of guidance of the economy, we shall surely fail. ... In the field of monetary and credit policy, precautionary action to prevent inflationary excesses is bound to have some onerous effects--if it did not it would be ineffective and futile. Those who have the task of making such policy donl t expect you to applaud. The Federal Reserve, as one writer put it, after the recent increase in the discount rate, is in the position of the chaperone who has ordered the punch bowl removed just
when the party was really warming up."
Monetary policy in the 1950s got a lot less attention than it does today: indeed, there was a significant group of economists who believed that it was completely ineffectual. The old story told by Herb Stein in his 1969 book, The Fiscal Revolution in America, was that President John F. Kennedy used to remember what Martin did by the fact that William McChesney Martin was head of the Federal Reserve, and that "Martin" started with an "M," as did "monetary," so he knew that monetary policy was what the Federal Reserve did. (Apparently he was not bothered by the fact that "fiscal" and "Federal Reserve" both start with an "f.")
But Martin viewed monetary policy very much as a balancing act. As he once said in testimony before the U.S. Senate: “Our purpose is to lean against the winds of deflation or inflation, whichever way they are blowing.” (In the Winter 2004 issue of the Journal of Economic Perspectives, where I've been managing editor since 1986, Christina Romer and David Romer wrote "Choosing the Federal Reserve Chair: Lessons from History," which puts Martin's views on monetary policy in the context of other pre-Bernanke Fed chairmen.)
Martin held the view that monetary policy could be useful in reducing the risk of depressions and inflations, but that it wasn't all-powerful. In the 1955 speech, he said:
"But a note should be made here that, while money policy can do a great deal, it is by no means all powerful. In other words, we should not place too heavy a burden on monetary policy. It must be accompanied by appropriate fiscal and budgetary measures if we are to achieve our aim of stable progress. If we ask too much of monetary policy we will not only fail but we will also discredit this useful, and indeed indispensable, tool for shaping our economic development. ...It seems to me that at least some of the current discussion of the Fed has a similar tone to what Martin is describing of exaggeration in both directions. Some critics argue that the extraordinary monetary policies undertaken since the later part of 2007 are useless. On the other extreme, other critics argue that if only those extraordinary policies had been pursued with considerably more vigor, the U.S. economy would already have returned to full employment. In other words, the Fed is either ineffectual or all-powerful--but the truth is likely to exist between these extremes.
"Nowadays, there is perhaps a tendency to exaggerate the effectiveness of monetary policy in both directions. Recently, opinion has been voiced that the country' s main danger comes from a roseate belief that monetary policy, backed by flexible tax and debt management policies and aided by a host of built-in stabilizers, has completely conquered the problem of major economic fluctuations and relegated them to ancient history. This, of course, is not so because we are dealing with human
beings and human nature.
"While the pendulum swings between too little or too much reliance upon credit and monetary policy, there is an emerging realization more and more widely held and expressed by business, labor and farm organizations that ruinous depressions are not inevitable, that something can be done about moderating excessive swings of the business cycle. The idea that the business cycle can be altogether abolished seems to me as fanciful as the notion that the law of supply and demand can be repealed. It is hardly necessary to go that far in order to approach the problems of healthy economic growth sensibly and constructively. Laissez faire concepts, the idea that deep depressions are divinely guided retribution for man's economic follies, the idea that money should be the master instead of the servant, have been discarded because they are no longer valid, if they ever were."
My own sense, as I've argued on this blog more than once, is that the extraordinary monetary policy steps taken by the Fed made sense in the context of the extraordinary financial crisis and Great Recession of 2007-2009, and even for a year or two or three afterward. But the Great Recession ended four years ago in June 2009. The extreme stimulus policies of the Fed--ultra-low interest rates and direct buying of financial securities--don't seem to pose any particular danger of inflation as yet, but they create other dislocations: savers suffer, and some will go on a "search for yield" that can create new asset market bubbles; money market funds are shaken; and banks and governments that can borrow cheaply are less likely to carry out needed reforms. And of course, there are the economic and financial problems that will arise when the Fed does take away the punch bowl. For discussion of these concerns, see earlier blog posts here, here, here and here.
My own sense is that there are times for monetary policy to tighten and times for it to loosen, and the very difficult practical wisdom lies in knowing the difference. In a similar spirit, Martin started his 1955 speech this way: "There's an apocryphal story about a professor of economics that sums up in a way the theme of what I would like to talk about this evening. In final examinations the professor always posed the same questions. When he was asked how his students could possibly fail the test, he replied simply, 'Well, it's true that the questions don't change, but the answers do.'"
Thursday, June 20, 2013
Macroprudential Monetary Policy: What It Is, How it Works
In the old days, like six or seven years ago, one could teach monetary policy at the intro level as consisting of basically one tool: the central bank would lower a particular target interest rate to stimulate the economy out of recessions, and raise that target interest rate when an economy seemed to be overheating. But after the last few years, even at the intro level, one needs to teach about some additional tools available to monetary authorities. One set of tools goes under the name of "macroprudential policy."
The idea here is that in the past, regulation of financial institutions focused on whether individual companies were making reasonably prudent decisions. A major difficulty with this "microprudential" approach to regulation, as the Great Recession showed, is that it didn't take into account whether the decisions of many financial firms all at once were creating macroeconomic risk. In particular, when the central bank was looking at whether the economy was sinking into recession or on the verge of inflation, it didn't take into account whether the overall level of credit being extended in the economy was growing very rapidly--like in the housing price bubble from about 2004-2007. I discussed some of the evidence on how boom-and-bust credit cycles are often linked to severe recessions in a March 2012 post on "Leverage and the Business Cycle" as well as in a February 2013 post on "The Financial Cycle: Theory and Implications."
Macroprudential policy means using regulations to limit boom-and-bust swings of credit. Douglas J. Elliott, Greg Feldberg, and Andreas Lehnert offer a useful listing of these kinds of policies, how they have been used in the past, and some preliminary evidence on how they have worked in "The History of Cyclical Macroprudential Policy in the United States," written as a working paper in the Finance and Economics Discussion Series published by the Federal Reserve.
One basic but quite useful contribution of the paper is to organize a list of macroprudential policy tools. One set of tools can be used to affect demand for credit, like rules about loan-to-value ratios for those borrowing to buy houses, margin requirements for those buying stocks, the acceptable length of loans for buying houses, and tax policies like the extent to which interest payments can be deductible for tax purposes. Another set of tools affects the supply of credit, like rules about the interest rates that financial institutions can pay on certain accounts, or the interest rates that they can charge for certain loans, along with rules about how much financial institutions must set aside in reserves or have available as capital, any restrictions on the portfolios that financial institutions can hold, and the aggressiveness of the regulators in enforcing these rules. Here's a list of macroprudential tools, with some examples of their past use.
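To make one of these demand-side tools concrete, here's a minimal sketch of how a loan-to-value (LTV) cap works; the caps and the house price are illustrative assumptions on my part, not numbers from the paper:

# How tightening a loan-to-value (LTV) cap damps credit demand.
# The caps and the $250,000 house price are illustrative assumptions.
def max_loan(house_price, ltv_cap):
    """Largest mortgage a lender may extend under the LTV cap."""
    return house_price * ltv_cap

price = 250_000
for cap in (0.95, 0.80):   # e.g., a regulator tightening during a credit boom
    loan = max_loan(price, cap)
    print(f"LTV cap {cap:.0%}: max loan ${loan:,.0f}, "
          f"required down payment ${price - loan:,.0f}")
# Tightening the cap from 95% to 80% raises the required down payment
# on this house from $12,500 to $50,000, restraining mortgage borrowing.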
One interesting aspect of these macroprudential policy tools is that many of them are sector-specific. When the central bank thinks of monetary policy as just moving overall interest rates, it constantly faces a dilemma. Is it worth raising interest rates for the entire economy just because there might be a housing bubble? Or just because the stock market seems to be experiencing "irrational exuberance" as in the late 1990s? Macroprudential policy suggests that one might address a housing market credit boom by altering regulations focused on housing markets, or one might address a stock market bubble by altering margin requirements for buying stock.
Do these macroprudential tools work? Elliott, Feldberg, and Lehnert offer some cautious evidence on this point: "In this paper, we use the term “macroprudential tools” to refer to cyclical macroprudential tools aimed at slowing or accelerating credit growth. ... Many of these tools appear to have succeeded in their short-term goals; for example, limiting specific types of bank credit or liability and impacting terms of lending. It is less obvious that they have improved long-term financial stability or, in particular, successfully managed an asset price bubble, and this is fertile ground for future research. Meanwhile, these tools have faced substantial administrative complexities, uneven political support, and competition from nonbank or other providers of credit outside the set of regulated institutions. ... Our results to date suggest that macroprudential policies designed to tighten credit availability do have a notable effect, especially for tools such as underwriting standards, while macroprudential policies designed to ease credit availability have little effect on debt outstanding."
When the next asset-price bubble or credit boom emerges--and sooner or later, it will--macroprudential tools and how best to use them will become a main focus of public policy discussion.
For more background on the economic analysis behind macroprudential policy, a useful starting point is "A Macroprudential Approach to Financial Regulation," by Samuel G. Hanson, Anil K. Kashyap, and Jeremy C. Stein, which appeared in the Winter 2011 issue of the Journal of Economic Perspectives. (Full disclosure: My job as Managing Editor of JEP has been paying the household bills since 1986.) Jeremy Stein is now a member of the Federal Reserve Board of Governors, so his thoughts on the subject are of even greater interest.
Technology and Job Destruction
Is there something about the latest wave of information and communication technologies that is especially destructive to jobs? David Rotman offers an overview of the arguments in "How Technology Is Destroying Jobs," in the July/August 2013 issue of the MIT Technology Review.
On one side Rotman emphasizes the work of Erik Brynjolfsson and Andrew McAfee: "That robots, automation, and software can replace people might seem obvious to anyone who’s worked in automotive manufacturing or as a travel agent. But Brynjolfsson and McAfee’s claim is more troubling and controversial. They believe that rapid technological change has been destroying jobs faster than it is creating them, contributing to the stagnation of median income and the growth of inequality in the United States. And, they suspect, something similar is happening in other technologically advanced countries."
As one piece of evidence, they offer this graph showing productivity growth and private-sector employment growth. Going back to 1947, these two grew at more-or-less the same speed. But starting around 2000, a gap opens up, with productivity growing faster than private-sector employment.
The figure sent me over to the U.S. Bureau of Labor Statistics website to look at total jobs. Total U.S. jobs were 132.6 million in December 2000. Then there's a drop associated with the recession of 2001, a rise associated with the housing and finance bubble, a drop associated with the Great Recession, and more recently a bounceback to 135.6 million jobs in May 2013. But put it all together, and from December 2000 to May 2013, total U.S. jobs now are about 2.2% higher than they were back at the start of the century.
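For what it's worth, here's that calculation spelled out, using the two BLS totals quoted above (the annualized rate is my own arithmetic, not a figure from Rotman's article):

# Growth in total U.S. jobs, December 2000 to May 2013,
# from the two BLS totals quoted above (in millions).
jobs_dec_2000 = 132.6
jobs_may_2013 = 135.6
years = 12 + 5 / 12            # December 2000 to May 2013

total_growth = jobs_may_2013 / jobs_dec_2000 - 1
annualized = (1 + total_growth) ** (1 / years) - 1
print(f"Total growth: {total_growth:.2%}")        # 2.26%, the "about 2.2%" above
print(f"Annualized: {annualized:.2%} per year")   # roughly 0.18% per year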
Why the change? The arguments rooted in technological developments sound like this: "Technologies like the Web, artificial intelligence, big data, and improved analytics—all made possible by the ever increasing availability of cheap computing power and storage capacity—are automating many routine tasks. Countless traditional white-collar jobs, such as many in the post office and in customer service, have disappeared. W. Brian Arthur, a visiting researcher at the Xerox Palo Alto Research Center’s intelligence systems lab and a former economics professor at Stanford University, calls it the “autonomous economy.” It’s far more subtle than the idea of robots and automation doing human jobs, he says: it involves “digital processes talking to other digital processes and creating new processes,” enabling us to do many things with fewer people and making yet other human jobs obsolete."
Of course, there are other arguments about slower job growth rooted in other factors. Looking at the year 2000 as a starting point is not a fair comparison, because the U.S. economy was at the time in the midst of the unsustainable dot-com bubble. The current economy is still recovering from its worst episode since the Great Depression. In addition, earlier decades saw demographic changes like a flood of baby boomers entering the workforce from the 1960s through the 1980s, along with a flood of women entering the (paid) workforce. As those trends eased off, the total number of jobs would be expected to grow more slowly.
Another response to the technology-is-killing-jobs argument is that while technology has long been disruptive, the economy has shown an historical pattern of adjusting over time. Rotman writes: "At least since the Industrial Revolution began in the 1700s, improvements in technology have changed the nature of work and destroyed some types of jobs in the process. In 1900, 41 percent of Americans worked in agriculture; by 2000, it was only 2 percent. Likewise, the proportion of Americans employed in manufacturing has dropped from 30 percent in the post–World War II years to around 10 percent today—partly because of increasing automation, especially during the 1980s. ... Even if today’s digital technologies are holding down job creation, history suggests that it is most likely a temporary, albeit painful, shock; as workers adjust their skills and entrepreneurs create opportunities based on the new technologies, the number of jobs will rebound. That, at least, has always been the pattern. The question, then, is whether today’s computing technologies will be different, creating long-term involuntary unemployment."
Given that the U.S. and other high-income economies have been experiencing technological change for well over a century, and the U.S. unemployment rate was below 6% as recently as the four straight years from 2004 to 2007, it seems premature to me to be forecasting that technology is now about to bring a dearth of jobs. Maybe this fear will turn out to be right this time, but it flies in the face of a couple of centuries of economic history.
However, it does seem plausible to me that technological development, in tandem with globalization, is altering pay levels in the labor force, contributing to higher pay at the top of the income distribution and lower pay in the middle. For some discussion of technology and income inequality, see my post earlier this week on "Rock Music, Technology, and the Top 1%," and for some discussion of technology and "hollowing out" the middle skill levels of the labor force, see my post on "Job Polarization by Skill Level" or this April 2010 paper by David Autor. (Full disclosure: Autor is also editor of the Journal of Economic Perspectives, and thus is my boss.)
Given that new technological developments can be quite disruptive for existing workers, the conclusion I draw is the importance of finding ways for more workers to work with computers and robots in ways that magnify their productivity. Rotman mentions a previous example of such a social transition: "Harvard’s [Larry] Katz has shown that the United States prospered in the early 1900s in part because secondary education became accessible to many people at a time when employment in agriculture was drying up. The result, at least through the 1980s, was an increase in educated workers who found jobs in the industrial sectors, boosting incomes and reducing inequality. Katz’s lesson: painful long-term consequences for the labor force do not follow inevitably from technological changes." It feels to me as if we need a widespread national effort in both the private and the public sector to figure out ways in which every worker in every job can use information technology to become more productive.
The arguments over how technology affects jobs remind me a bit of an old story from the development economics literature. An economist is visiting a public works project in a developing country. The project involves building a dam, and dozens of workers are shoveling dirt and carrying it over to the dam. The economist watches for awhile, and then turns to the project manager and says: "With all these workers using shovels, this project is going to take forever, and it's not going to be very high quality. Why not get a few bulldozers in here?" The project manager responds: "I can tell that you are unfamiliar with the political economy of a project like this one. Sure, we want to build the dam eventually, but really, one of the main purposes of this project is to provide jobs. Getting a bulldozer would wipe out these jobs." The economist mulls this answer a bit, and then replies: "Well, if the real emphasis here is on creating jobs, why give the workers shovels? Wouldn't it create even more jobs if they used spoons to move the dirt?"
The notion that everyone could stay employed if only those new technologies would stay out of the way has a long history. But the rest of the world is not going to back off on using new technologies. And future U.S. prosperity won't be built by workers using the metaphorical equivalent of spoons, rather than bulldozers.
On one side Rotman emphasizes the work of Erik Brynjolfsson and Andrew McAfee: "That robots, automation, and software can replace people might seem obvious to anyone who’s worked in automotive manufacturing or as a travel agent. But Brynjolfsson and McAfee’s claim is more troubling and controversial. They believe that rapid technological change has been destroying jobs faster than it is creating them, contributing to the stagnation of median income and the growth of inequality in the United States. And, they suspect, something similar is happening in other technologically advanced countries."
As one piece of evidence, they offer this graph showing productivity growth and private-sector employment growth. going back to 1947, these two grew at more-or-less the same speed. But starting around 2000, a gap opens up with productivity growing faster than private sector employment.
The figure sent me over to the U.S. Bureau of Labor Statistics website to look at total jobs. Total U.S. jobs were 132.6 million in December 2000. Then there's a drop associated with the recession of 2001, a rise associated with the housing and finance bubble, a drop associated with the Great Recession, and more recently a bounceback to 135.6 million jobs in May 2013. But put it all together, and from December 2000 to May 2013, total U.S jobs now are about 2.2% higher than they were back at the start of the century.
Why the change? The arguments rooted in technological developments sound like this: "Technologies like the Web, artificial intelligence, big data, and improved analytics—all made possible by the ever increasing availability of cheap computing power and storage capacity—are automating many routine tasks. Countless traditional white-collar jobs, such as many in the post office and in customer service, have disappeared. W. Brian Arthur, a visiting researcher at the Xerox Palo Alto Research Center’s intelligence systems lab and a former economics professor at Stanford University, calls it the “autonomous economy.” It’s far more subtle than the idea of robots and automation doing human jobs, he says: it involves “digital processes talking to other digital processes and creating new processes,” enabling us to do many things with fewer people and making yet other human jobs obsolete."
Of course, there are other arguments about slower job growth rooted in other factors. Looking at the year 2000 as a starting point is not a fair comparison, because the U.S. economy was at the time in the midst of the unsustainable dot-com bubble. The current economy is still recovering from its worst episode since the Great Depression. In addition, earlier decades have seem demographic changes like a flood of baby boomers entering the workforce from the 1960s through the 1980s, along with a flood of women entering the (paid) workforce. As those trends eased off, the total number of jobs would be expected to grow more slowly.
Another response to the technology-is-killing-jobs argument is that while technology has long been disruptive, the economy has shown an historical pattern of adjusting over time. Rotman writes: "At least since the Industrial Revolution began in the 1700s, improvements in technology have changed the nature of work and destroyed some types of jobs in the process. In 1900, 41 percent of Americans worked in agriculture; by 2000, it was only 2 percent. Likewise, the proportion of Americans employed in manufacturing has dropped from 30 percent in the post–World War II years to around 10 percent today—partly because of increasing automation, especially during the 1980s. ... Even if today’s digital technologies are holding down job creation, history suggests that it is most likely a temporary, albeit painful, shock; as workers adjust their skills and entrepreneurs create opportunities based on the new technologies, the number of jobs will rebound. That, at least, has always been the pattern. The question, then, is whether today’s computing technologies will be different, creating long-term involuntary unemployment."
Given that the U.S. and other high-income economies have been experiencing technological change for well over a century, and the U.S. unemployment rate was below 6% as recently ago as the four straight years from 2004-2007, it seems premature to me to be forecasting that technology is now about to bring a dearth of jobs. Maybe this fear will turn out to be right this time, but it flies in the face of of a couple of centuries of economic history.
However, it does seem plausible to me that technological development in tandem with globalization are altering pay levels in the labor force, contributing to higher pay at the top of the income distribution and lower pay in the middle. For some discussion of technology and income inequality, see my post earlier this week on "Rock Music, Technology, and the Top 1%," and for some discussion of technology and "hollowing out" the middle skill levels of the labor force, see my post on "Job Polarization by Skill Level" or this April 2010 paper by David Autor (Full disclosure: Autor is also editor of the Journal of Economic Perspectives, and thus is my boss.)
Given that new technological developments can be quite disruptive for existing workers, the conclusion I draw is the importance of finding ways for more workers to find ways to work with computers and robots in ways that can magnify their productivity. Rotman mentions a previous example of such a social transition: "Harvard’s [Larry] Katz has shown that the United States prospered in the early 1900s in part because secondary education became accessible to many people at a time when employment in agriculture was drying up. The result, at least through the 1980s, was an increase in educated workers who found jobs in the industrial sectors, boosting incomes and reducing inequality. Katz’s lesson: painful long-term consequences for the labor force do not follow inevitably from technological changes." It feels to me as if we need a widespread national effort in both the private and the public sector to figure out ways in which every worker in every job can use information technology to become more productive.
The arguments over how technology affects jobs remind me a bit of an old story from the development economics literature. An economist is visiting a public works project in a developing country. The project involves building a dam, and dozens of workers are shoveling dirt and carrying it over to the dam. The economist watches for a while, and then turns to the project manager and says: "With all these workers using shovels, this project is going to take forever, and it's not going to be very high quality. Why not get a few bulldozers in here?" The project manager responds: "I can tell that you are unfamiliar with the political economy of a project like this one. Sure, we want to build the dam eventually, but really, one of the main purposes of this project is to provide jobs. Getting a bulldozer would wipe out these jobs." The economist mulls this answer a bit, and then replies: "Well, if the real emphasis here is on creating jobs, why give the workers shovels? Wouldn't it create even more jobs if they used spoons to move the dirt?"
The notion that everyone could stay employed if only those new technologies would stay out of the way has a long history. But the rest of the world is not going to back off on using new technologies. And future U.S. prosperity won't be built by workers using the metaphorical equivalent of spoons, rather than bulldozers.
Wednesday, June 19, 2013
Global Energy Snapshots
Here are some patterns in world energy markets that caught my eye, taken from the just-released BP Statistical Review of World Energy 2013.
For starters, here's the long-run pattern of world oil prices. The top line is the relevant one, because the prices are adjusted to 2012 dollars. To me, the striking pattern is that real oil prices are at their highest level since the Pennsylvania oil boom of the 1860s added enough supply to bring real oil prices down. The severity of the price shocks of the mid- and late 1970s stands out here, but the increase in oil prices since about 2000 is also striking. Even in a U.S. economy that relies more on services and information than on old-style heavy manufacturing, this price increase must be contributing to the economic sluggishness of the last few years.
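For readers who want the mechanics of that top line, here is a minimal sketch of the inflation adjustment: each nominal price is rescaled by the ratio of the 2012 price level to that year's price level. The prices and index values below are illustrative stand-ins of my own, not BP's data.

```python
# Minimal sketch of converting nominal oil prices to 2012 dollars.
# All numbers here are illustrative stand-ins, not BP's actual series.
nominal_price = {1980: 36.80, 2000: 28.50, 2012: 111.70}   # $/barrel (illustrative)
price_index = {1980: 82.4, 2000: 172.2, 2012: 229.6}       # CPI-style index (illustrative)

for year, p in sorted(nominal_price.items()):
    real_2012 = p * price_index[2012] / price_index[year]
    print(f"{year}: nominal ${p:.2f} = ${real_2012:.2f} in 2012 dollars")
```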
What about natural gas prices? The time series here doesn't go back so far: only to 1995. What's interesting to me here is that natural gas prices around the world move more-or-less in harmony up to about 2008. But since then, natural gas prices in the U.S. have dropped and stayed down, while those in Germany, the UK, and Japan dropped in the recession but have since increased. Natural gas is not (yet?) a unified world market, because it cannot be cheaply transported in volume around the world. Thus, the recent increases in unconventional natural gas production in North America have brought down prices here, but not in the rest of the world.
What about coal? Here I'll focus on quantities, not prices. As the report notes: "Coal remained the fastest-growing fossil fuel, with China consuming half of the world’s coal for the first time – but it was also the fossil fuel that saw the weakest growth relative to its historical average. ... Global coal production grew by 2%. The Asia Pacific region accounted for all of the net increase, offsetting a large decline in the US. The Asia Pacific region now accounts for more than two-thirds of global output. Coal consumption increased by a below-average 2.5%. The Asia Pacific region was also responsible for all of the net growth in global consumption. A second consecutive large decline in North America (-11.3%) more than offset growth in other regions; EU consumption grew for a third consecutive year." It appears that North America is finding ways to substitute natural gas for coal on the margin, which is clearly a "win" for the environment.
Finally, here's an image of overall global energy consumption since 1987.
As the report summarizes: "World primary energy consumption grew by a below-average 1.8% in 2012. Growth was below average in all regions except Africa. Oil remains the world’s leading fuel, accounting for 33.1% of global energy consumption, but this figure is the lowest share on record and oil has lost market share for 13 years in a row. Hydroelectric output and other renewables in power generation both reached record shares of global primary energy consumption (6.7% and 1.9%, respectively)." I would also note that while percentage gains in renewable energy sources can appear large from their very small starting point, they remain a tiny part of overall world energy consumption.
Tuesday, June 18, 2013
Rock Music, Technology, and the Top 1%
I'm always on the lookout for real-world applications about how technology is altering the distribution of income. Applications that have intuitive appeal for students are even better! Thus, I enjoyed on several levels Alan Krueger's recent talk at the Rock and Roll Hall of Fame, "Rock and Roll, Economics, and Rebuilding the Middle Class." Krueger uses the music industry as a microcosm for technological trends that have led to greater inequality in recent decades. I'll start here with some facts and exhibits.
Prices for concert tickets have been rising quickly. Since the early 1980s, overall price inflation is up about 150%, but the price of concert tickets is up about 400%.
The share of concert revenue received by the top 1% of performers has more than doubled in the last 30 years or so.
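A one-line calculation puts the ticket-price comparison above in real terms: with those rounded growth figures, the inflation-adjusted price of a concert ticket has roughly doubled.

```python
# Real (inflation-adjusted) change in concert ticket prices since the
# early 1980s, using the rounded growth figures from the text.
ticket_growth, overall_inflation = 4.00, 1.50
real_increase = (1 + ticket_growth) / (1 + overall_inflation) - 1
print(f"Real ticket-price increase: {real_increase:.0%}")   # 100%
```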
How has technology contributed to this change? Krueger explains:
"Technological changes through the centuries have long made the music industry a super star industry. Advances over time including amplification, radio, records, 8-tracks, music videos, CDs, iPods, etc., have made it possible for the best performers to reach an ever wider audience with high fidelity. And the increasing globalization of the world economy has vastly increased the reach and notoriety of the most popular performers. They literally can be heard on a worldwide stage. But advances in technology have also had an unexpected effect. Recorded music has become cheap to replicate and distribute, and it is difficult to police unauthorized reproductions. This has cut into the revenue stream of the best performers, and caused them to raise their prices for live performances. My research suggests that this is the primary reason why concert prices have risen so much since the late 1990s. In this spirit, David Bowie once predicted that “music itself is going to become like running water or electricity,” and that, as a result, artists should “be prepared for doing a lot of touring because that’s really the only unique situation that’s going to be left.” While concerts used to be a loss leader to sell albums, today concerts are a profit center."

Krueger also points out that which bands become popular is to some extent a matter of luck, and once a band has become popular, that popularity can then be to some extent self-sustaining.

Much of the rest of the talk is given over to applying these lessons to the broader economic picture. Technology has altered many industries so that some ways of earning money have been decimated, while others have been encouraged. The share of income going to the top 1% for the U.S. economy as a whole has been rising. In many cases, who ends up in this top 1% has an element of luck in the sense that very large economic returns are a matter of fortunate timing, not just skill. Those who run the first company to invent a certain product may end up very rich, while those who were just a few weeks or months behind end up with much less. Certainly the current top executives of big companies are lucky in the sense that instead of earning 25 times as much as the pay of an average worker, as CEOs did back in the 1970s, the timing of their career now allows them to earn 200 times the pay of an average worker.

Krueger is not a newcomer to the economics of rock 'n roll. For one of his earlier efforts, see "Rockonomics: The Economics of Popular Music,” by Marie Connolly and Alan B. Krueger. They quote the well-known noneconomist Paul Simon: “The fact of the matter is that popular music is one of the industries of the country. It’s all completely tied up with capitalism. It’s stupid to separate it.” It's available as a 2005 working paper from the Princeton University Industrial Relations Section.

Full disclosure: Alan Krueger was editor of the Journal of Economic Perspectives, and thus was my boss, from 1996-2002.
Friday, June 14, 2013
Flexibility and Neoclassical Economics
A common complaint from some of those learning economics, and from some economists themselves, is that the formal study of economics is a straitjacket that limits analysis and constrains policy conclusions--in particular, that it leads to an overappreciation of market forces and an underappreciation of the usefulness of government interventions. This belief seems misguided to me. John Maynard Keynes, who said so many things so well, once wrote (in his introduction to the Cambridge Economic Handbooks): "[Economics] is a method rather than a doctrine, an apparatus of the mind, a technique of thinking which helps its possessor to draw correct conclusions."
Dani Rodrik spells out very nicely how neoclassical economics has proved no hindrance to his work, which has often questioned what was at the time mainstream economic wisdom, in an interview published in the April 2013 issue of the World Economics Association Newsletter. Here are a few of Rodrik's comments.
On the usefulness of the economics toolkit
"I have never thought of neoclassical economics as a hindrance to an understanding of social and economic problems. To the contrary, I think there are certain habits of mind that come with thinking about the world in mainstream economic terms that are quite useful: you need to state your ideas clearly, you need to ensure they are internally consistent, with clear assumptions and causal links, and you need to be rigorous in your use of empirical evidence. Now, this does not mean that neoclassical economics has all the answers or that it is all we need. Too often, people who work with mainstream economic tools lack the ambition to ask broad questions and the imagination to go outside the box they are used to working in. But that is true of all “normal science.” Truly great economists use neoclassical methods for leverage, to reach new heights of understanding, not to dumb down our understanding. Economists such as George Akerlof, Paul Krugman, and Joe Stiglitz are some of the names that come to mind who exemplify this tradition. Each of them has questioned conventional wisdom, but from within rather than from outside. ...
"The criticism of methodological uniformity in Economics can also be taken too far. Surely, the use of mathematical and statistical techniques is not a problem per se. Such techniques simply ensure our arguments are conceptually and empirically coherent. Yes, excessive focus on these techniques, or the use of math just for its own sake, are a problem–but a problem against which there is already a counter-movement from within. In the top journals of the profession, I would say most math-heavy papers are driven by substantive questions rather than methods-driven concerns. "
On the level of policy disagreement that exists among those using similar mainstream methods
"Pluralism on policy is already a reality, even within the boundaries of the existing methods, as I indicated. There are healthy debates in the profession today on the minimum wage, fiscal policy, financial regulation, and many other areas too. I think many critics of the economics profession overlook these differences, or view them as the exception rather than the rule. And there are certainly some areas, for example international trade,where economists’ views are much less diverse than public opinion in general. But economics today is not a discipline that is characterized by a whole lot of unanimity."
On the critique that many economists are narrow in their outlook.
"There are powerful forces having to do with the sociology of the profession and the socialization process that tend to push economists to think alike. Most economists start graduate school not having spent much time thinking about social problems or having studied much else besides math and economics. The incentive and hierarchy systems tend to reward those with the technical skills rather than interesting questions or research agendas. An in-group versus out-group mentality develops rather early on that pits economists against other social scientists. All economists tend to imbue a set of values that tends to glorify the market and demonize public action. What probably stands out with mainstream economists is their awe of the power of markets and their belief that the market logic will eventually vanquish whatever obstacle is placed on its path."
Here's one example of Rodrik using standard economic analysis as a tool for challenging conventional wisdom--in this case, the conventional wisdom that the benefits of globalization clearly outweigh the redistributive effects.
"Take for example the relationship between the gains from trade and the distributive implications of trade. To this day, there is a tendency in the profession to overstate the first while minimizing the second. This makes globalization look a lot better: it’s all net gains and very little distributional costs. Yet look at the basic models of trade theory and comparative advantage we teach in the classroom and you can see that the net gains and themagnitudes of redistribution are directly linked in most of these models. The larger the net gains, the larger the redistribution. After all, the gains in productive efficiency derive from structural change, which is a process that inherently creates gainers (expanding sectors and the factors employed therein) and losers (contracting sectors and the factors employed therein). It is nonsensical to argue that the gains are large while the amount of redistribution is small--at least in the context of the standard models. Moreover, as trade becomes freer, the ratio of redistribution to net gains rises. Ultimately, trying to reap the last few dollars of efficiency gain comes at the “cost” of significant redistribution of income. Again, standard economics. Saying all this doesn’t necessarily make you very popular right away."On the flexibility of economic modeling in reaching pre-desired conclusions
"I love an old quote from Carlos Diaz-Alejandro who once said something along the lines of “by now any graduate student can come up with any policy conclusion he desires by building appropriate assumptions into his model.” And that was some thirty years ago! We have plenty more models that generate unorthodox conclusions now."
For noneconomists, I guess the obvious question is: "If economics doesn't give a correct and clear answer most of the time, what good is it?" I sometimes argue that the main virtue of economics, at its best, is that it is a disciplined way of thinking and arguing that makes clear where people disagree. If two economists disagree, they can unpack each other's arguments. Do they disagree in their underlying assumptions? In their model of how those assumptions fit together? In their arguments over cause and effect? In their beliefs about what data to use? In the statistical methods they use? Even when economists end up disagreeing, they should be able to pinpoint the sources of their disagreement--and thus to agree on what issues need to be further researched and resolved. From this process, provisional truths (and is there really any other kind?) do emerge.
Thursday, June 13, 2013
250,000 New Permanent Federal Employees?
My perhaps old-fashioned view of government is that it exists to carry out tasks on behalf of the citizenry. Although the government needs to hire people to carry out those tasks, government employees are not a purpose of government; instead, they are a cost of carrying out government tasks. I would like the federal bureaucracy to be well-managed by tough cost-cutters, so that as high a proportion as possible of tax money can flow to program beneficiaries, infrastructure needs, and the like, not government paychecks. Thus, I get a queasy feeling from "Sizing Up the Executive Branch," a January 2013 report from the U.S. Office of Personnel Management.
The figure shows total civilian employment by the federal government in the last eight years. NSFTP, the blue bars, shows Non-Seasonal Full-Time Permanent employees. Other, shown by the red bars, is part time, seasonal, and nonpermanent employees.
Whenever I see these numbers, the sheer size of federal employment widens my eyes. About 144 million Americans are employed, and more than 1% of them are civilian employees of the federal government. While unemployment rates have been wrenchingly high for the last five years, government employment has been growing. The number of non-seasonal permanent full-time federal employees rose by about 250,000 from 2006 to 2011--a rise of about 15%--before falling back slightly in 2012.
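As a quick back-of-the-envelope check on those figures, using only the numbers quoted above:

```python
# Back-of-the-envelope check of the OPM figures cited in the text.
rise = 250_000          # increase in permanent federal employees, 2006-2011
pct_rise = 0.15         # the ~15% growth cited
total_employed = 144e6  # total U.S. employment

base_2006 = rise / pct_rise           # implied 2006 level: ~1.7 million
level_2011 = base_2006 + rise         # ~1.9 million
print(f"Implied 2011 level: {level_2011/1e6:.2f} million")
print(f"Share of all U.S. workers: {level_2011/total_employed:.2%}")  # ~1.3%
```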
It's plausible that the Great Recession from 2007-2009 required hiring additional government employees, but now that the end of the recession is four years behind us, will we see the federal civilian employment levels drop substantially--say, by 10% or more? Much of what the government does is manage information about programs and people. Developments in information and communications technology should make it possible to do these tasks more efficiently, if they are implemented by thoughtful managers with a cost-cutting focus. No politician publicly advocated a permanent increase of several hundred thousand federal employees, but it just happened anyway.
If you are interested in discussion of the pay of federal employees relative to their private-sector counterparts, see my February 2012 post "Government Workers: It's Not the Wages, It's the Benefits."
Note added later: Louis Johnston points out that the notes under Table 3 of this report read: "The Department of Defense, Department of Homeland Security, Department of Justice, Department of the Air Force, Department of the Army, Department of the Navy, and the Department of Veterans Affairs have each grown by more than 10,000 employees over the past eight fiscal years. Over the last eight fiscal years those seven agencies have grown by over 230,000 employees." However, remember that this report covers only civilian employees of the federal government--not the armed forces. So most of the increase is the civilian national security apparatus in various forms.
Wednesday, June 12, 2013
China's Demography and the Lewis Turning Point
China's wages have risen quickly in the last decade or so, but not quickly enough to keep up with productivity growth. (To put it another way, the share of national income earned by labor is falling in China, as elsewhere.) One factor that has prevented wages from rising even faster is that China has had a vast number of underemployed workers. As China's industry expanded, it could keep drawing more and more of these underemployed workers into its higher-productivity sectors, but the presence of these underemployed workers held the average pay raise below what it would otherwise have been. However, Mitali Das and Papa N’Diaye explain that this dynamic is reaching its end in "The End of Cheap Labor," which appears in the June 2013 issue of Finance and Development. They refer to some of the best-known work of development economist Sir Arthur Lewis, who won the Nobel prize back in 1979, to set up their argument.
"In Sir Arthur Lewis’s seminal work (1954), developing economies are characterized by two sectors: a low-productivity sector with excess labor (agriculture, in China’s case) and a high-productivity sector (manufacturing in China). The high-productivity sector is profitable, in part, because of the surplus of labor it can employ cheaply because of the low wages prevalent in the low-productivity sector. Because productivity increases faster than wages, the high-productivity sector is more profitable than it would be if the economy were at full employment. It also promotes higher capital formation, which drives economic growth. As the number of surplus workers dwindles, however, wages in the high productivity sector begin to rise, that sector’s profits are squeezed, and investment falls. At that point, the economy is said to have crossed the Lewis Turning Point. ...

"Recent developments in the Chinese labor market seem somewhat contradictory. On the one hand, aggregate wage growth has remained about 15 percent during the past decade, and corporate profits have remained high. Wage growth lags productivity, resulting in rising profits, which suggests that China has not reached the so-called Lewis Turning Point ... at which an economy moves from one with abundant labor to one with labor shortages. At the same time, though, since the financial crisis began, industry has increasingly relocated from the coast to the interior, where the large reserve of rural labor resides. As a result, previously large gaps between the demand for and supply of registered city workers have progressively narrowed, and worker demand for higher wages and better working conditions has risen—suggesting the onset of a structural tightening in the Chinese labor market."

Part of their argument is based on estimates of how many workers in China remain in the low-productivity sectors, and thus could still transfer over to the high-productivity sectors. But such estimates are inevitably a little shaky. They are on stronger ground, it seems to me, in pointing out that "demographics virtually guarantee that China will cross the Lewis Turning Point—almost certainly before 2025."
Here's a figure showing the annual growth rate of China's working-age population. Back in the 1970s and 1980s, China's working-age population was growing at 10-15% per year. But as the effects of the one-child policy began to bite, growth in the working-age population has slowed, and the working-age population will start to shrink around 2020.
As another illustration, here's a figure showing the total size of China's "core group" of workers aged 25-39, compared with the size of the group of those under age 15 and over age 64. In the 1970s and 1980s, the boom in China's working-age population meant that for some years the number of "core workers" outstripped what can be viewed as the "dependent" population. This pattern of a surge in workers is sometimes called the "demographic dividend." But the "core group" is already starting to shrink in size, and the number of elderly in China is about to take off. China's labor market is clearly evolving toward a very different situation.
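The comparison in that figure is a version of the standard dependency ratio. Here is a minimal sketch of the arithmetic with hypothetical population counts, not the actual Chinese data.

```python
# Dependency-ratio sketch with hypothetical population counts (millions).
def dependency_ratio(under_15, over_64, working_age):
    """Dependents per 100 working-age people."""
    return 100 * (under_15 + over_64) / working_age

# A "demographic dividend" moment: big working-age cohort, few elderly.
print(dependency_ratio(under_15=300, over_64=60, working_age=700))   # ~51
# An aging scenario: smaller core cohort, many more elderly.
print(dependency_ratio(under_15=220, over_64=200, working_age=620))  # ~68
```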
For a more in-depth discussion of these patterns, I recommend a couple of articles from the Fall 2012 issue of the Journal of Economic Perspectives: Hongbin Li, Lei Li, Binzhen Wu, and Yanyan Xiong contributed "The End of Cheap Chinese Labor." and Xin Meng writes on "Labor Market Outcomes and Reforms in China." (Full disclosure: I'm the managing editor of JEP, so I am predisposed to believe that all of its content is fascinating and well-done. It's also freely available compliments of the American Economic Association.)
"In Sir Arthur Lewis’s seminal work (1954), developing economies are characterized by two sectors: a low-productivity sector with excess labor (agriculture, in China’s case) and a high-productivity sector (manufacturing in China). The high-productivity sector is profitable, in part, because of the surplus of labor it can employ cheaply because of the low wages prevalent in the low-productivity sector. Because productivity increases faster than wages, the high-productivity sector is more profitable than it would be if the economy were at full employment. It also promotes higher capital formation, which drives economic growth. As the number of surplus workers dwindles, however, wages in the high productivity sector begin to rise, that sector’s profits are squeezed, and investment falls. At that point, the economy is said to have crossed the Lewis Turning Point. ...
"Recent developments in the Chinese labor market seem somewhat contradictory. On the one hand, aggregate wage growth has remained about 15 percent during the past decade, and corporate profits have remained high. Wage growth lags productivity, resulting in rising profits, which suggests that China has not reached the so-called Lewis Turning Point ... at which an economy moves from one with abundant labor to one with labor shortages. At the same time, though, since the financial crisis began, industry has increasingly relocated from the coast to the interior, where the large reserve of rural labor resides. As a result, previously large gaps between the demand for and supply of registered city workers have progressively narrowed, and worker demand for higher wages and better working conditions has risen—suggesting the onset of a structural tightening in the Chinese labor market."
Part of their argument is based on estimates of how many workers in China remain in the low-productivity sectors, and thus could still transfer over to the high-productivity sectors. But such estimates are inevitably a little shaky. They are on stronger ground, it seems to me, in pointing out that "demographics virtually guarantee that China will cross the Lewis Turning Point—almost certainly before 2025."
Here's a figure showing the annual growth rate of China's working-age population. Back in the 1970s and 1980s, the China's working age population was growing at 10-15% per year. But as the effects of the one-child policy began to bite, growth in the working-age population is slowing, and will start to shrink around 2020.
As another illustration, here's a figure showing the total size of China's "core group" of workers age 25-39, compared with the size of the group of those under age 15 and over age 64. In the 1970 and 1980s, the boom in China's working-age population meant that the number of "core workers" outstripped what can be viewed as the "dependent" population for a few years. This pattern of a surge in workers is sometimes called the "demographic dividend." But the "core group" is already starting to shrink in size, and the number of elderly in China is about to take off. China's labor market is clearly evolving toward a very different situation.
For a more in-depth discussion of these patterns, I recommend a couple of articles from the Fall 2012 issue of the Journal of Economic Perspectives: Hongbin Li, Lei Li, Binzhen Wu, and Yanyan Xiong contributed "The End of Cheap Chinese Labor." and Xin Meng writes on "Labor Market Outcomes and Reforms in China." (Full disclosure: I'm the managing editor of JEP, so I am predisposed to believe that all of its content is fascinating and well-done. It's also freely available compliments of the American Economic Association.)
Tuesday, June 11, 2013
Global Burden of Disease
What are the world's biggest health problems and risks? The Global Burden of Disease study, a collaborative project that in its most recent version includes 488 co-authors from 303 institutions in 50 countries, tries to answer that question. A nice summary of some of the results is available from the Institute for Health Metrics and Evaluation at the University of Washington in its report "The Global Burden of Disease: Generating Evidence, Guiding Policy."
There are perhaps two main ways to measure health effects. The simpler one is how many deaths are caused. A more complex one uses DALYs, or "disability-adjusted life years," a measure that was first developed in the original Global Burden of Disease study in the 1990s, but has since become common. It seeks to measure how many healthy years of life are lost: thus, if someone's health is injured, there is a cost in DALYs even if their life expectancy doesn't change. Of course, if their health is diminished and life expectancy falls, too, the cost in DALYs is greater.
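As a rough sketch of the accounting, consider a toy calculation. The standard decomposition is DALYs = YLL + YLD, years of life lost to early death plus years lived with disability; the function and numbers below are simplified stand-ins of my own for the study's far more detailed methods.

```python
# Toy DALY calculation: DALYs = YLL + YLD. All inputs are hypothetical.
def dalys(deaths, years_lost_per_death, prevalent_cases, disability_weight):
    yll = deaths * years_lost_per_death          # years of life lost to early death
    yld = prevalent_cases * disability_weight    # years lived with disability (one year)
    return yll + yld

# A deadly but quick condition: most of its burden is mortality.
print(dalys(deaths=1_000, years_lost_per_death=15,
            prevalent_cases=2_000, disability_weight=0.3))   # 15,600
# A chronic, rarely fatal condition (low-back-pain-like): burden with no deaths.
print(dalys(deaths=0, years_lost_per_death=0,
            prevalent_cases=60_000, disability_weight=0.2))  # 12,000
```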
Here's a figure showing the top 10 leading diseases and injuries on a global basis, shown with blue diamonds, and the top 10 risk factors for deaths, shown with brown diamonds. The horizontal axis shows their cost in deaths in 2010. The vertical axis shows their cost in DALYs. Thus, "Low Back Pain" is among the top 10 diseases and injuries based on DALYs, although it is not a direct cause of death. Lung cancer and diarrhea cause a similar number of deaths, but diarrhea is far worse in terms of DALYs. A few of the risk factors that jump out at me as being a little unexpected to find in the top 10 are "Diet low in fruit," "Household air pollution," and "High sodium."
What problems are getting better, and what problems are getting worse? Here's a figure that lists the top 25 risk factors, from left to right. Thus, in keeping with the figure above, the first five are "High blood pressure," "Smoking," "Household air pollution," "Diet low in fruit," and "Alcohol use." However, rather than showing the level of health damage done, the figure shows how the level of injury changed from the 1990 data in the first Global Burden of Disease study to the 2010 data used in this study.
Clearly, the three big success stories in the last two decades in terms of reduced DALYs are a reduced health cost from "Household air pollution," from "Childhood underweight," and from "Suboptimal breastfeeding."
Among the rising problems, the world is managing to combine a rising number of DALYs from "High body mass index" with people having health problems from "Diet Low in Fruit," "High sodium," "Diet Low in Nuts and Seeds," "Diet low in whole grains," "Diet low in vegetables," "Diet low in omega-3," "High-processed meat," and "Diet low in fiber," not to mention "Smoking" and "Alcohol." In short, a large share of the world's health risk factors, and a large share of the problems that are getting worse, have to do with what people are putting in their mouths.
Monday, June 10, 2013
The Soaring Number of $100 Bills
Between credit cards, debit cards, automatic deposits, automatic payments, making payments via a smartphone, and similar technologies, it seems as if cash must be on the way out. But John C. Williams explains otherwise in "Cash Is Dead! Long Live Cash!" an essay written for the 2012 Annual Report of the Federal Reserve Bank of San Francisco. Williams writes:
"[S]ince the start of the recession in December 2007 and throughout the recovery, the value of U.S. currency in circulation has risen dramatically. It is now fully 42% higher than it was five years ago. ... Over the past five years, cash holdings increased on average about 7¼% annually, more than three times faster than the economy’s growth rate over this period. At the end of 2012, currency in circulation stood at over $1.1 trillion, representing a staggering $3,500 for every man, woman, and child in the nation."

To get a sense of what's happening here, consider this figure. The red line in the middle shows growth of GDP over time. The green line at the bottom shows the growth of U.S. currency in circulation in denominations of $50 or less. The blue line at the top shows total currency in circulation: that is, the gap between the green line and the blue line is made up of $100 bills.

From 1989 through the early 1990s, the growth of currency with denominations of $50 or less pretty much tracked the rise in GDP. But with the arrival of all the alternative methods of payment listed above, the rise in currency in denominations of $50 or less began to lag well behind the rise of GDP. However, overall currency in circulation, and especially those $100 bills, has risen faster than GDP.
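The numbers in the Williams quote are easy to verify; the population figure below is my own round assumption, not from the essay.

```python
# Checking the Williams figures: per-person holdings and compound growth.
total_currency = 1.1e12   # currency in circulation, end of 2012
population = 315e6        # rough U.S. population in 2012 (my assumption)
print(f"Per person: ${total_currency / population:,.0f}")   # ~$3,500

five_year_growth = 1.0725 ** 5 - 1    # 7.25% a year, compounded five years
print(f"Five-year growth: {five_year_growth:.0%}")          # ~42%
```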
Part of the story here is that in the scary economic times of the last few years, when many financial institutions looked unsafe, more Americans have been holding cash. Williams writes: "As fears about the safety of the banking system spread in late 2008, many people became terrified of losing their savings. Instead, they put their trust in cold, hard cash. Not surprisingly, as depositors socked away money to protect themselves against a financial collapse, they often sought $100 bills. Such a large denomination is easier to conceal or store in bulk than smaller bills. Indeed, in the six months following the fall of the investment bank Lehman Brothers in 2008, holdings of $100 bills soared by $58 billion, a 10% jump."
But it's not just Americans who hold U.S. currency. If you were a European looking at the financial and banking struggles across the continent, holding some U.S. dollars in the form of cash might make some sense. Williams writes: "As Europe’s crisis worsened in the spring of 2010, U.S. currency holdings rose sharply. And they continued to rise as economic and political turmoil and uncertainty about the future sent Europeans scrambling to convert some of their euros to dollars. It’s estimated that the share of U.S. currency held abroad rose from about 56% before the tumultuous events of the past five years to 64% in 2011." My guess is that many of the newly well-to-do in China, India, Brazil and Russia have also built up a stash of U.S. $100 bills.
In both of these situations, it's important to notice that with interest rates at such extremely low levels, those who decide to hold cash are not giving up much in terms of foregone interest payments.
Two other explanations are sometimes offered for the rise of cash, but they are probably minor factors. First, price levels in general have risen, and so more people are likely to use $100 bills for transactions--like buying a tankful of gasoline or going grocery shopping. While this is probably a contributing factor, not that many people are walking around with $100 bills in their wallets for daily transactions. Second, $100 bills may be used to pay those in the "gray economy," all those people who work jobs that are paid in cash and not reported to the IRS or Social Security or unemployment insurance or workman's compensation. Again, this is probably a contributing factor, but most cash-based employers are not handing their nanny or gardener a few $100 bills.
"[S]ince the start of the recession in December 2007 and throughout the recovery, the value of U. S. currency in circulation has risen dramatically. It is now fully 42% higher than it was five years ago. ... Over the past five years, cash holdings increasedon average about 7¼% annually, more than three times faster than theeconomy’s growth rate over this period. At the end of 2012, currency in circulation stood at over $1.1 trillion, representing a staggering $3,500 for every man, woman, and child in the nation."To get a sense of what's happening here, consider this figure. The red line in the middle shows growth of GDP over time. The green line at the bottom shows the growth of U.S. currency in circulation in denominations of $50 or less. The blue line at the top shows total currency in circulation: that is, the gap between the green line and the blue line is made up of $100 bills.
From 1989 through the early 1990s, the growth of currency with denominations of $50 or less pretty much tracked the rise in GDP. But with the arrival of all the alternative methods of payment listed above, the rise in currency in denominations of $50 or less began to lag well behind the rise of GDP. However, overall currency in circulation, and especially those $100 bills, have risen faster than the rise in GDP.
Part of the story here is that in the scary economic times of the last few years, when many financial institutions looked unsafe, more Americans have been holding cash. Williams writes: "As fears about
the
safety of the banking system spread in late 2008, many people became
terrified of losing their savings. Instead, they put theirtrust
in cold, hard cash. Not surprisingly, as depositors socked away money
to protect themselves against a financial collapse, they often
sought $100 bills. Such a large denomination is easier to conceal or
store in bulk than smaller bills. Indeed, in the six months following the fall of the investment bank Lehman Brothers in 2008, holdings of $100 bills soared by $58 billion, a 10% jump."
But it's not just Americans who hold U.S. currency. If you were a European looking at the financial and banking struggles across the continent, holding some U.S. dollars in the form of cash might make some sense. Williams writes: "As Europe’s crisis worsened in the spring of2010, U.S. currency holdings rose sharply. And they continued to rise as economic and political turmoil and uncertainty about the future sent Europeans scrambling to convert some of their euros to dollars. It’s estimated that the share of U.S. currency held abroad rose from about 56% before the tumultuous events of the past five years to 64% in 2011." My guess is that many of the newly well-to-do in China, India, Brazil and Russia have also built up a stash of U.S. $100 bills.
In both of these situations, it's important to notice that with interest rates at such extremely low levels, those who decide to hold cash are not giving up much in terms of foregone interest payments.
Two other explanations are sometimes offered for the rise of cash, but they are probably minor factors. First, price levels in general have risen, and so more people are likely to use $100 bills for transactions--like buying a tankful of gasoline or going grocery shopping. While this is probably a contributing factor, not that many people are walking around with $100 bills in their wallets for daily transactions. Second, $100 bills may be used to pay those in the "gray economy," all those people who work jobs that are paid in cash and not reported to the IRS or Social Security or unemployment insurance or workman's compensation. Again, this is probably a contributing factor, but most cash-based employers are not handing their nanny or gardener a few $100 bills.
Friday, June 7, 2013
The Economics of Maple Syrup
It sounds like the plot-line from a crime-caper-gone-wrong movie, but Canada's global strategic maple syrup reserve was drained of $18 million worth of syrup last fall. Jacqueline Deslauriers tells the story in "Liquid Gold," in the June 2013 issue of Finance & Development. Here's how she tells it (citations omitted for readability):
"Although its value to the Canadian economy may pale in comparison with, say, wheat or soybeans, maple syrup trumps the vast wheat fields of Manitoba and Saskatchewan when it comes to Canadian cultural identity. It is for good reason that the maple leaf is Canada’s best-known symbol. Canadians’ deep attachment to this exotic food shapes their attitude toward protecting the price farmers receive for producing maple syrup. ... Maple trees, the source of maple syrup, grow naturally in eastern North America. Canada produces 80 percent of the world’s supply of maple syrup, and the province of Quebec, where the heist took place, accounts for 90 percent of Canada’s production ...Teachers of economic in search of a new and lively example might stick a fork in the maple syrup example. I've poured attention on maple syrup issues in the past. Back in September 2012, I posted on "The Great Maple Syrup Theft: A Supply and Demand Story."
"The Federation of Quebec Maple Syrup Producers was set up in 1966 to represent and advocate for producers—most of them dairy farmers who supplemented their income by tapping maple trees. By the 1990s, maple syrup output had grown rapidly, and by 2000 the industry was producing a surplus of between 1.3 and 2 million gallons a year. Because maple syrup is so easily stored, in bumper years the 80 licensed maple syrup buyers from Canada and three U.S.-based buyers stocked up at low prices, and bought less during lean years when prices tended to be higher. By and large, farmers were at the mercy of the buyers. ...
"Things changed in 2001, when a bumper crop of almost 8.2 million gallons of maple syrup sent prices plunging. That prompted producers to change the federation from an advocacy group to a marketing board that could negotiate better prices with the buyers. ... The new-look federation also began to store surplus production to keep prices from plunging. Initially, individual farmers were free to produce as much as they wanted. But another bumper crop in 2003 resulted in so much syrup, much of which had to be stored, that the industry decided to control production by imposing quotas on individual producers.... Because production has been lean in the past few years, producers currently can sell 100 percent of their quota. If there are a few bumper crop years, the cartel can reduce the amount that farmers are permitted to sell.
Any output that cannot be sold must be transferred to the federation’s reserve. Producers do not receive payment for this excess production until the federation sells it.... Maple syrup is sold from the reserve when current production does not meet the demand from authorized buyers. In 2009, after four dismal years of production, the global maple syrup reserve ran dry. Since then production has bounced back and the reserve is overflowing....
The $18 million theft was from one of three warehouses the federation uses to stash excess production and was discovered in mid–2012 during an audit of the warehouse contents. The warehouse, about 60 miles southwest of provincial capital Quebec City, was lightly guarded—in retrospect, perhaps, too lightly guarded. The thieves set up shop nearby, and over the course of a year, according to police, made off with roughly 10,000 barrels of maple syrup—about 323,000 gallons, or about 10 percent of the reserve. Because one gallon of Quebec maple syrup looks like any other gallon of the product, consumers had no way of distinguishing the federation-approved product from stolen syrup. And some buyers may not have cared.
It appears the thieves attempted to unload their booty to buyers in other Canadian provinces and the United States. Officers from the Royal Canadian Mounted Police, the Canada Border Services Agency, and U.S. Immigration and Customs Enforcement helped the Quebec provincial police with their investigation. Police arrested three suspects in December 2012 and 15 more soon thereafter. Those arrested faced charges of theft, conspiracy, fraud, and trafficking in stolen goods. Police have recovered two-thirds of the stolen syrup."

Teachers of economics in search of a new and lively example might stick a fork in the maple syrup story. I've poured attention on maple syrup issues in the past: back in September 2012, I posted on "The Great Maple Syrup Theft: A Supply and Demand Story."
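The reserve the article describes operates like a textbook buffer stock: surpluses go into storage, shortfalls are met from it. Here is a minimal sketch of those mechanics with made-up production numbers, not the federation's actual figures.

```python
# Buffer-stock sketch of the federation's reserve (hypothetical numbers;
# the actual quota and payment rules are more complex).
quota_sales = 7.0   # millions of gallons buyers take in a normal year
reserve = 0.0

for year, production in enumerate([8.2, 6.5, 9.0, 5.5, 7.2], start=1):
    if production > quota_sales:
        reserve += production - quota_sales   # surplus goes into storage, unpaid
        sales = quota_sales
    else:
        drawdown = min(reserve, quota_sales - production)
        reserve -= drawdown                   # shortfall met from the reserve
        sales = production + drawdown
    print(f"year {year}: produced {production:.1f}, sold {sales:.1f}, reserve {reserve:.1f}")
```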
Thursday, June 6, 2013
Labor's Falling Share, Everywhere
When I was getting my feet wet in economics back in the late 1970s and early 1980s, it was conventional wisdom that the share of national income going to labor fluctuated a bit from year to year, but didn't display a rising or falling trend over time. But the stability of labor's share no longer holds true. The International Labour Organization discusses some of the data in Chapter 5 of its Global Wage Report 2012/13 on the theme of "Wages and equitable growth." Here, I'll provide a few background charts, and then some thoughts. The ILO report summarizes some of the evidence this way:
"The OECD has observed, for example, that over the period from 1990 to 2009 the share of labour compensation in national income declined in 26 out of 30 developed economies for which data were available, and calculated that the median labour share of national income across these countries fell considerably from 66.1 per cent to 61.7 per cent ... Looking beyond the advanced economies, the ILO World of Work Report 2011 found that the decline in the labour income share was even more pronounced in many emerging and developing countries, with considerable declines in Asia and North Africa and more stable but still declining wage shares in Latin America."

Here's the labor share of income in the U.S., Germany, and Japan. For example, the U.S. labor share of income (shown by the triangles) hovers around 68-70% of GDP through the 1970s, and is still near the bottom end of this range by the mid-1980s, but has declined since.
Here's a figure showing patterns for several groups of emerging and developing economies. The longest time series, shown by the darker blue diamonds, is an average for Mexico, South Korea, and Turkey.
And what about China? Labor share is declining there, too.
One of the results of the declining labor share of the economy is that as productivity growth increases the size of economies, the amount going to labor is not keeping up. Here's a figure showing the divergence in output and labor income that has opened up since 1999 for developed economies. The results here are weighted by the size of the economy, so the graph largely reflects the experience of the three biggest developed economies: the U.S., Japan, and Germany.
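The arithmetic behind that widening gap is simple compounding: if output per worker grows faster than real labor compensation, the labor share must shrink. The growth rates below are illustrative assumptions of mine, not ILO estimates.

```python
# If productivity outpaces real compensation, the labor share compounds
# downward. Growth rates here are assumptions for illustration only.
share = 0.66                 # starting labor share of income
productivity_growth = 0.020  # output per worker, per year
compensation_growth = 0.015  # real labor compensation, per year

for _ in range(20):          # twenty years of a half-point annual gap
    share *= (1 + compensation_growth) / (1 + productivity_growth)
print(f"Labor share after 20 years: {share:.1%}")   # ~59.8%
```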
What can be said about this pattern of a declining labor share?
1) When a trend cuts across so many countries, it seems likely that the cause is something cutting across all countries, too. Looking for a "cause" based on some policy of Republicans or Democrats in the U.S. almost certainly misses the point. The same is true of looking for a "cause" based in policies more common in Europe, or in China.
2) The causes are still murky, but one possible answer can be pretty much ruled out. The declining labor share is not caused by a shift from labor-intensive to more capital-intensive industries--because the trend toward a lower labor share is happening across all industries. The difficulty is that the other possible explanations are interrelated and hard to disentangle. They include technological change, globalization, the rise of financial markets, altered labor market institutions, and a decline in the bargaining power of labor. But after all, technological changes in information and communication technology are part of what has fed globalization, as well as part of what led to a rise of the financial sector. Globalization is part of what has reduced the bargaining power of labor. The ILO report offers some evidence that the rise of the financial sector is a substantial part of the answer. Here's a post from a couple of weeks ago on the growth of the U.S. financial sector.
3) The flip side of a lower share of national income going to labor is a higher share of income going to capital. The ILO report argues that in many countries, this pattern seems to involve rising dividend payments.
4) While understanding causes is useful, policies don't always have to address root causes. When someone is hit by a car, you can't reverse the cause, but you can still address the consequences. However, it's worth remembering that the falling share of labor income has been happening all over the world, in countries with a very wide range of different policies and economic institutions. For example, European labor market institutions are often thought of as being more worker-friendly, but they haven't prevented a fall in the labor share of income.
5) It's important to remember that the falling share of labor income is different from a rising level of wage inequality. The share of income going to labor as a whole is falling, and also a greater share of labor income is going to those at the highest levels of income. Both trends mean that those with lower- and middle-incomes are having a tougher time.
"The OECD has observed, for example, that over the period from 1990 to 2009 the share of labour compensation in national income declined in 26 out of 30 developed economies for which data were available, and calculated that the median labour share of national income across these countries fell considerably from 66.1 per cent to 61.7 per cent ... Looking beyond the advanced economies, the ILO World of Work Report 2011 found that the decline in the labour income share was even more pronounced in many emerging and developing countries, with considerable declines in Asia and North Africa and more stable but still declining wage shares in Latin America."
Here's the labor share of income in the U.S., Germany, and Japan. For example, the U.S labor share of income (shown by the triangles) hover around 68-70% of GDP through the 1970s, and even by the mid-1980s is near the bottom end of this range, but has declined since.
Here's a figure showing patterns for several groups of emerging and developing economies. The longest time series, shown by the darker blue diamonds, is an average for Mexico, South Korea, and Turkey.
And what about China? Labor share is declining there, too.
One of the results of the declining labor share of the economy is that as productivity growth increases the size of economies, the amount going to labor is not keeping up. Here's a figure showing the divergence in output and labor income that has opened up since 1999 for developed economies. The results here are weighted by the size of the economy, so the graph largely reflects the experience of the three biggest developed economies: the U.S., Japan, and Germany.
What can be said about this pattern of a declining labor share?
1) When a trend cuts across so many countries, it seems likely that the cause is something cutting across all countries, too. Looking for a "cause" based on some policy of Republicans or Democrats in the U.S. almost certainly misses the point. The same is true of looking for a "cause" based in policies more common in Europe, or in China.
2) The causes are still murky, but one possible answer can be pretty much ruled out. The declining labor share is not caused by a shift from labor-intensive to more capital-intensive industries--because the trend toward a lower labor share is happening across all industries. The difficulty is that the other possible explanations are interrelated and hard to disentangle. They include technological change, globalization, the rise of financial markets, altered labor market institutions , and a decline in the bargaining power of labor. But after all, technological changes in information and communication technology are part of what has fed globalization, as well as part of what led to a rise of the financial sector. Globalization is part of what has reduced the bargaining power of labor.The ILO report offers some evidence that the rise of the financial sector is a substantial part of the answer. Here's a post from a couple of weeks ago on the growth of the U.S. financial sector.
3) The flip side of a lower share of national income going to labor is a higher share of income going to capital. The ILO report argues that in many countries, this pattern seems to involve rising dividend payments.
4) While understanding causes is useful, policies don't always have to address root causes. When someone is hit by a car, you can't reverse the cause, but you can still address the consequences. However, it's worth remembering that the falling share of labor income has been happening all over the world, in countries with a very wide range of different policies and economic institutions. For example, European labor market institutions are often thought of as being more worker-friendly, but they haven't prevented a fall in the labor share of income.
5) It's important to remember that the falling share of labor income is different from a rising level of wage inequality. The share of income going to labor as a whole is falling, and also a greater share of labor income is going to those at the highest levels of income. Both trends mean that those with lower- and middle-incomes are having a tougher time.
Wednesday, June 5, 2013
Global Biodiversity for $80 Billion Per Year
What would it cost to take large steps to reduce the extinction risk of all globally endangered species? Donal P. McCarthy and a list of 15 other authors estimate "Financial Costs of Meeting Global Biodiversity Conservation Targets: Current Spending and Unmet Needs" in the November 16, 2012, issue of Science magazine (which isn't freely available on-line, although many academics will have access through a library subscription). Here is their summary:
"World governments have committed to halting human-induced extinctions and safeguarding important sites for biodiversity by 2020, but the financial costs of meeting these targets are largely unknown. We estimate the cost of reducing the extinction risk of all globally threatened bird species ... to be U.S. $0.875 to $1.23 billion annually over the next decade, of which 12% is currently funded. Incorporating threatened nonavian species increases this total to U.S. $3.41 to $4.76 billion annually. We estimate that protecting and effectively managing all terrestrial sites of global avian conservation significance (11,731 Important Bird Areas) would cost U.S. $65.1 billion annually. Adding sites for other taxa increases this to U.S. $76.1 billion annually. Meeting these targets will require conservation funding to increase by at least an order of magnitude."I'll leave the details of their methodology to the article, but basically it uses a combination of expert estimates of conservation costs, and then using them as a basis for modeling that includes information on forests, breeding, and size of local economies.
The estimate surprised me a bit, because it's more-or-less one-tenth of 1% of the global economy--a very large amount, but not an unthinkably large amount. However, my guess is that the practical issues of protecting and managing biodiversity-protection areas may be much larger than the straight monetary cost implies.
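The "one-tenth of 1%" arithmetic is easy to verify. Taking world GDP as very roughly $70 trillion around 2012 (my round number, not a figure from the article):

```python
# Back-of-envelope check: conservation cost as a share of world output.
# World GDP here is a rough round number, not a figure from the article.
conservation_cost = 76.1e9   # $76.1 billion per year (upper estimate)
world_gdp = 70e12            # roughly $70 trillion around 2012

share = conservation_cost / world_gdp
print(f"{share:.2%} of world GDP")   # about 0.11%, i.e. one-tenth of 1%
```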
For those who want some additional discussion of biodiversity issues, the journal Wildlife Research has a recent issue with eight articles on "Prioritisation and Evaluation of Biodiversity Projects" that seek in various ways to tackle the "Noah's Ark" problem--that is, if the world isn't going to do all of what it could to conserve biodiversity, how should priorities be set? And here are some summary statistics from the IUCN [International Union for the Conservation of Nature] Red List of Endangered Species.
Tuesday, June 4, 2013
A Legal Right to Paid Vacation?
From an American perspective, a legal right to paid vacation sounds like a peculiar and impractical hypothetical. For other high-income countries in the world, it's the law. Rebecca Ray, Milla Sanes, and John Schmitt lay out the facts in "No-Vacation Nation Revisited," written for the Center for Economic and Policy Research.
The dark-blue columns show the statutory minimum number of paid vacation days. The light-blue lines show national paid holidays. The zero at the far-right-side for either one is the United States.
Here is a table showing the numbers behind the figure.
Ray, Sanes, and Schmitt sum it up this way:
"The United States is the only advanced economy in the world that does not guarantee its workers paid vacation. European countries establish legal rights to at least 20 days of paid vacation per year, with legal requirements of 25 and even 30 or more days in some countries. Australia and New Zealand both require employers to grant at least 20 vacation days per year; Canada and Japan mandate at least 10 paid days off. The gap between paid time off in the United States and the rest of the world is even larger if we include legally mandated paid holidays, where the United States offers none, but most of the rest of the world's rich countries offer at least six paid holidays per year."
"In the absence of government standards, almost one in four Americans has no paid vacation (23 percent) and no paid holidays (23 percent). According to government survey data, the average worker in the private sector in the United States receives only about ten days of paid vacation and about six paid holidays per year: less than the minimum legal standard set in the rest of world's rich economies excluding Japan (which guarantees only 10 paid vacation days and requires no paid holidays). The paid vacation and paid holidays that employers do make available are distributed unequally. According to the same government survey data, only half of low-wage workers (bottom fourth of earners) have any paid vacation (49 percent), compared to 90 percent of high-wage workers (top fourth of earners)."
In my Principles of Economics textbook, I include a little table comparing average annual hours worked across countries, based on OECD data. Here's the figure with 2011 data (with thanks to Dianna Amasino):
Of course, more vacation time is not a free lunch. One reason why per capita GDP is lower in these other high-income countries than in the United States is that the average U.S. worker spends more hours on the job. There are political economy issues, too: it makes my economist's skin crawl to imagine Congress and a president happily handing out paid vacation days to all, with little concern for the tradeoffs. But on the other side, it's also true that many of the rules that govern employment, and vacation time, are based in tradition and an implicit agreement about what a "job" will mean, not the result of a free-form multidimensional negotiation between employers and potential employees. It can be quite difficult for an individual, especially one seeking a low-skilled job, to negotiate even for flexible hours, much less for paid vacation or company-paid health insurance.
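The hours-worked point can be made concrete with a standard decomposition: GDP per capita equals output per hour, times annual hours per worker, times workers as a share of the population. The sketch below uses invented round numbers to show how a difference in hours alone opens a sizable per capita gap, even with identical productivity:

```python
# Decomposition: GDP per capita = (GDP per hour) x (hours per worker)
#                                 x (workers per capita).
# All numbers are invented round figures for illustration only.

def gdp_per_capita(gdp_per_hour, hours_per_worker, employment_rate):
    return gdp_per_hour * hours_per_worker * employment_rate

# Two hypothetical countries with identical productivity and employment,
# differing only in annual hours per worker.
us_like = gdp_per_capita(gdp_per_hour=60, hours_per_worker=1750, employment_rate=0.48)
eu_like = gdp_per_capita(gdp_per_hour=60, hours_per_worker=1450, employment_rate=0.48)

print(f"US-like: ${us_like:,.0f} per capita")
print(f"EU-like: ${eu_like:,.0f} per capita")
print(f"Gap from hours alone: {us_like / eu_like - 1:.0%}")   # about 21%
```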
As an American, the idea of a legal right to paid vacation is outside my personal experience. I am honestly not sure that I would be emotionally comfortable cutting my workload by, say, six or seven weeks per year. It feels to me as if such a change would reshape my personal relationship to work in ways that I cannot really anticipate. But I wouldn't mind seeing the federal government add a few more national holidays. Most employers would treat them as vacation days and accommodate without much trouble. School districts would do the same, allowing families to plan some time together. And many workers who don't get the day off would at least get a pay boost if they end up working on a federal holiday.