Monday, July 1, 2013

Re-thinking Recycling

Cato Unbound for June 2013 has a lively and provocative set of essays on "The Political Economy of Recycling." The lead essay is by Michael Munger, with comments from Edward Humes, Melissa Walsh Innes, and Steven Landsburg.  All of the essays are full of lively examples and sharp writing. Here are three examples from Munger, but these essays have many more treats in store.

Example 1: Wash Your Garbage

"When I was working on recycling policies for cities, I read a lot of web sites that described what was expected of good citizens.  Note that these policies were not mandatory; they were just what a moral person was expected to do.  The duties of good citizens came down to three things:  (1) recycle everything; (2) sort it assiduously; and (3) wash it carefully. Note that this whole approach is entirely insulated from costs or the logic of price.  The reason “we” recycle is that people in our town are good people, not people motivated by money. ...

"The result is that people drive, sometimes several miles or more, to sort their garbage into little bins like they were playing demented Tetris.  Bottles and glass here, plastic here, paper here, aluminum there.  In many cities, the resulting separated waste is actually picked up, re-mingled, and landfilled, because it has no economic value whatsoever.  But that’s okay, because the important thing is the moral act of recycling, not the saving of resources. The strangest part of this fetishization of garbage ... is the practice advocated by many small towns:  run your garbage through the dishwasher.  ...

"Curious, I phoned the public relations officers with the recycling departments in several small cities in the Northeast.  I asked one extremely cheerful and energetic young woman how her city could justify asking people to put their garbage in the dishwasher.  Isn’t that pretty expensive, in terms of human time, and the energy to heat the water, compared to the value of the garbage? Using the same tone of voice one would use to talk to a five year old—she clearly thought I was not the sharpest can lid in the recycle bin—she gave me the most concise explanation I have encountered in the whole genre.  She said, “Oh, you have to understand, sir.  Recycling is always cheaper, no matter how much it costs!” .... The problem is that, from an economic perspective, from the perspective of balancing resource use, that’s just not true.  If you are trying to save energy and resources such as water and time, it never makes sense to put your garbage in the dishwasher.

Example 2: Required Composting Goes Awry

"The state of North Carolina, where I live, has a law against disposing of yard waste in a standard landfill.  This makes some sense. Yard waste decomposes naturally and turns into compost eventually, so it can be disposed of more safely than the dangerous waste we put in landfills.  But the city of Durham, where Duke University is located, decided that they would do the state one better.  The city council required all citizens to dispose of their yard waste at curbside, where it was picked up by city trucks and taken to the city composting facility.

"The city collected an extra fee—about $60—from residents to operate this service.  Expenses were much greater than that, but the theory was that the composted yard waste could be resold to pay the rest of the costs of the operation.  Best of all, there would be no need for landfilling the yard waste once the operation was up and running:  yard waste in, compost out, with no waste going to any other kind of disposal. The problem was that much of the yard waste was large stumps and tree limbs, resulting from several hurricanes and large storms.  The “compost pile” quickly became the “stump dump,” an enormous pile of rubbish.  The idea that the stuff was valuable was just wrong. It was garbage, not “black gold.”
"And then it caught fire.  Spontaneous combustion deep inside the pile, a common result of decomposition and pressure, found enough oxygen to begin to smolder.  The city tried to put it out by soaking the pile, but that just made the smoke worse. The fire could not be completely extinguished for weeks, and neighbors for miles downwind complained of the pollution. So the waste that homeowners paid extra for reusing was dumped instead in the main garbage staging facility. But remember, the law prohibits disposal of yard waste in landfills in North Carolina.  So, Durham shipped all its trash, including grass clippings, to a landfill more than 85 miles away in Lawrenceville, VA.  The clean-up and the extra hauling charges cost Durham an extra $1 million compared to landfill disposal."


Example 3: Recycling vs. Reducing the Waste Stream

"There are two ways to think of the solution to a problem.  Consider the problem of polio, a disease that killed tens of thousands and ruined the lives of millions around the world in the 1930s through 1950s.  One solution was to try to ease the suffering of polio victims, developing better iron lungs and systems of braces, wheelchairs, and prosthetics to make it possible that they could live some kind of life.  This industry was enormous, and highly profitable. The other solution was to develop a vaccine, the one that Dr. Jonas Salk finally perfected in 1952, and which showed itself to be effective within a decade.  By the late 1960s, polio had been reduced sharply in the United States.  Now, it is almost unknown here and in most of the rest of the world.  Of course, the makers of braces, crutches, and iron lungs took a beating, because no one needed their products anymore.  But the total costs to society were dramatically reduced, even accounting for the “loss” to the equipment manufacturers.

"When it comes to waste management, we are at the stage of manufacturing braces and iron lungs.  Our brightest and most motivated young people are pawing through garbage, arguing about who is more holy and who is most devoted to the misleadingly moralized cause of recycling.  Huge investments in industry and innovation are being misdirected–may I say “thrown in the garbage”?–because we are working on the wrong problem. ... 

"We need to be working on the waste management equivalent of the Salk vaccine:  prevent the creation of a massive and expensive waste stream at the source.  We have to change the incentives so that manufacturers are responsible for the waste they create, the packaging they use to move products, the containers they use to hold liquids, food items, and consumer goods. ...

"Who will solve the problem, and how?  I am a Hayekian; I have no idea.  And unlike the Salk vaccine, there may not be any one identifiable person or idea that solves the problem.  What I do know is that the answer is not arguing over how to handle garbage that already exists; we have to produce less garbage in the first place.  The answer may be counterintuitive, of course.  Where we have reduced the amount of plastic in bottles and aluminum in cans by more than 50% in the last decade, the answer may be to increase the sturdiness of containers so that they can be reused. Instead of reducing the bulk of pallets and packages, the answer may be to make packages reusable, in the same way that shipping containers are now refilled and reused rather than melted down and reprocessed after a single use.
"If we start asking the right question—not how to recycle garbage, but how to prevent garbage’s existence—we might make progress.  As it stands, too many people, and too many large powerful corporations, have a financial stake in the status quo. They are making the waste management equivalent of iron lungs and polio braces."

Friday, June 28, 2013

How Many High Wealth Individuals?

How many people in the world have a million dollars or more in financial assets? That is, leave aside the value of real estate or other owned property. Capgemini and RBC Wealth Management provide some estimates in a report that seeks to define the global market for the wealth management industry, the World Wealth Report 2013.

The report splits High Net Worth Individuals (HNWI, natch) into three categories. Those with $1 million to $5 million in financial assets are in the "millionaire next door" category, and while I find that name a bit grating, it's fair enough. After all, a substantial number of households in high-income countries that are near retirement, if they have been steadily saving throughout their working life, will have accumulated $1 million or more. The next step up is those with $5 million to $30 million in financial assets, who this report calls the "mid-tier millionaires." At the top, with more than $30 million in financial assets, are the "ultra-HNWI" individuals. Here's the global distribution:

A few quick observations:

1) The "ultra-HNWI" individuals are less than 1% of the total HNWI population, but have 35% of the total assets of this group. The "millionaires next door" with $1 million to $5 million are 90% of the high net worth individual population, and have 42.8% of the total net worth of this group.

2) Another table in the report shows that 3.4 million of the high net worth individuals--about 28% of the total--are in the United States.  The next four countries by number of people in the high net worth category are Japan (1.9 million), Germany (1.0 million), China (643,000), and the UK (465,000).

3) World population is about 7 billion. So the 12 million or so high net worth individuals are about one-sixth of 1% of the world population.
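The arithmetic behind these observations is easy to verify. Here is a quick sketch, using the round figures cited above rather than the report's exact values:

```python
# Quick check of the World Wealth Report shares quoted above.
# Figures are the approximate ones cited in the post, not exact report values.

total_hnwi = 12_000_000      # roughly 12 million HNWIs worldwide
world_pop = 7_000_000_000    # roughly 7 billion people

share_of_world = total_hnwi / world_pop
print(f"HNWIs as share of world population: {share_of_world:.4%}")  # ~0.17%, about one-sixth of 1%

# Relative average wealth: a group with share p of the people and share w of
# the wealth holds w/p times the group-wide average per person.
ultra_pop, ultra_wealth = 0.01, 0.35            # <1% of HNWIs, 35% of HNWI assets
next_door_pop, next_door_wealth = 0.90, 0.428   # 90% of HNWIs, 42.8% of assets

print(f"Ultra-HNWI average wealth vs. overall HNWI average: {ultra_wealth / ultra_pop:.0f}x")
print(f"'Millionaire next door' average vs. overall HNWI average: {next_door_wealth / next_door_pop:.2f}x")
```

So the average ultra-HNWI holds at least 35 times the average financial wealth of the group as a whole, while the typical "millionaire next door" holds about half of it.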

Thursday, June 27, 2013

High School Standards and Graduation Rates: The Tradeoff

One grim piece of news for the U.S. economy in the last few decades has been that the high school graduation rate flattened out around 1970. In a modern economy that depends on skills and brainpower, this is a troubling pattern. Richard J. Murnane considers the evidence and explanations in "U.S. High School Graduation Rates: Patterns and Explanations," in the most recent issue of the Journal of Economic Literature (vol. 51:2, pp. 370–422). The JEL is not freely available on-line, although many in academia will have on-line access through their library or through a personal membership in the American Economic Association. Here's how Murnane sets the stage (footnotes and citations omitted for readability):

"During the first seventy years of the twentieth century, the high school graduation rate of teenagers in the United States rose from 6 percent to 80 percent. A result of this remarkable trend was that, by the late 1960s, the U.S. high school graduation rate ranked first among countries in the Organisation for Economic Co-operation and Development (OECD). The increase in the proportion of the labor force that had graduated from high school was an important force that fueled economic growth and rising incomes during the twentieth century.

Between 1970 and 2000, the high school graduation rate in the United States stagnated. In contrast, the secondary school graduation rate in many other OECD countries increased markedly during this period. A consequence is that, in 2000, the high school graduation rate in the United States ranked thirteenth among nineteen OECD countries.

Until quite recently, it appeared that the stagnation of the U.S. high school graduation rate had continued into the twenty-first century. However, evidence from two independent sources shows that the graduation rate increased substantially between 2000 and 2010. This increase prevented the United States from losing further ground relative to other OECD countries in preparing a skilled workforce. But graduation rates in other OECD countries also increased during that decade. As a result, the U.S. high school graduation rate in 2010 was still below the OECD average."
Here's a figure showing the patterns. The horizontal axis shows (approximate) birth year. The vertical axis shows the  high school graduation rate for those who were 20-24 at the time. Thus, for example, the upward movement of the graph for those born around 1980 is based on data when those people had reached the age of 20-24.

Why the stagnation in high school graduation rates from 1970 up to about 2000? Clearly, it's not because the labor market rewards to getting a high school degree had declined; in fact, the gains from a high school degree had increased. Research has looked at explanations especially relevant in certain places and times, like a boom in demand for Appalachian coal in the 1970s that might have made a high school degree look less valuable to lower-skilled labor in that area, or how the crack epidemic of the late 1980s and early 1990s altered the expected rewards to finishing high school for a group of young men in certain inner cities, or how the end of certain court-ordered desegregation plans led to higher dropout rates for some at-risk youth.

While all of these explanations have an effect in certain times and places, Murnane suggests a bigger cause: a pattern of increasing high school graduation requirements that started in the 1970s. He writes: "In summary, my interpretation of the evidence is that increases in high school graduation requirements during the last quarter of the twentieth century increased the nonmonetary cost of earning a diploma for students entering high school with weak skills. By so doing, they counteracted the increased financial payoff to a diploma and contributed to the stagnation in graduation rates over the last decades of the twentieth century."

Why have high school graduation rates apparently risen in the last 10 years or so? Murnane offers some fragmentary evidence that better-prepared ninth graders, expanded preschool programs, and reductions in teen pregnancy may have played a role. But his conclusion is modest: "In summary, there are many hypotheses for why the high school graduation rate of 20–24-year-olds in 2010 is higher than it was in 2000, and why the increase in the graduation rate was particularly large for blacks and Hispanics. However, to date, there is no compelling evidence to explain this encouraging recent trend."

Murnane offers the useful reminder that voting to raise graduation standards is easy, but raising the quality of education so that a rising share of students can meet those standards is hard. "An assumption implicit in state education policies is that the quality of schooling will improve sufficiently to enable high school graduation rates to rise even as graduation requirements are stiffened. Indeed, many states increased public expenditures on public education to facilitate this improvement. However, it has proven much more difficult to improve school quality than to legislate increases in graduation requirements."

My own concern about high school graduation requirements is that they are too often focused on getting a student into a college, any college, rather than moving the student toward a career. A high school student in the 25th percentile of a class should still be able to graduate from high school. But while some students who performed poorly in high school will shine in college--and should have an opportunity to do so--it is an unforgiving fact that many students at the bottom of the high school performance distribution will have little interest in or aptitude for more schooling.

In the Spring 2013 issue of the Journal of Economic Perspectives, Julie Berry Cullen, Steven D. Levitt, Erin Robertson, and Sally Sadoff tackle this question: "What Can Be Done To Improve Struggling High Schools?" They point out that the overall high school graduation rates do not show the depth of the problem in a number of inner-city school districts. They conclude: "In spite of decades of well-intentioned efforts targeted at struggling high schools, outcomes today are little improved. A handful of innovative programs have achieved great success on a small scale, but more generally, the economic futures of the students at the bottom of the human capital distribution remain dismal. In our view, expanding access to educational options that focus on life skills and work experience, as opposed to a focus on traditional definitions of academic success, represents the most cost-effective, broadly implementable source of improvements for this group." (Full disclosure: I've been the Managing Editor of the Journal of Economic Perspectives for the past 27 years, so I am predisposed to find all of the articles intriguing. All JEP articles back to the first issue in 1987 are freely available on-line courtesy of the American Economic Association.)







Tuesday, June 25, 2013

Setting a Carbon Price: What's Known, What's Not

A number of scientists believe that rising levels of carbon dioxide are likely to lead to climate change. Maybe they are incorrect! But prudence suggests that when enough warning sirens are going off, you should at least start looking at options. In that spirit, I found it useful to consider Robert S. Pindyck's essay on "Pricing Carbon When We Don’t Know the Right Price," in the Summer 2013 issue of Regulation magazine. The issue also includes four other articles on carbon tax issues. Pindyck sets the stage in this way:

"There is almost no disagreement among economists that the true cost to society of burning a ton of carbon is greater than its private cost. ... This external cost is referred to as the social cost of carbon (SCC) and is the basis for the idea of imposing a tax on carbon emissions or adopting a similar policy such as a cap-and-trade system. However, agreeing that the SCC is greater than zero isn’t really agreeing on very much. Some would argue that any increases in global temperatures will be moderate, will occur in the far distant future, and will have only a small impact on the economies of most countries. If that’s all true, it would imply that the SCC is small, perhaps only around $10 per ton of CO2, which would justify a very small (almost negligible) tax on carbon emissions, e.g., something like 10 cents per gallon of gasoline. Others would argue that without an immediate and stringent GHG abatement policy, there is a reasonable possibility that substantial temperature increases will occur and might have a catastrophic effect. That would suggest the SCC is large, perhaps $100 or $200 per ton of CO2, which would imply a substantial tax on carbon, e.g., as much as $2 per gallon of gas. So who is right, and why is there such wide disagreement?"
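Pindyck's per-gallon figures follow from simple arithmetic: burning a gallon of gasoline releases roughly 8.9 kilograms of CO2 (a standard EPA rule of thumb). A sketch of the conversion, assuming that emissions factor, looks like this:

```python
# Back-of-the-envelope conversion from a social cost of carbon (SCC),
# in dollars per metric ton of CO2, to the implied gasoline tax per gallon.
# Assumes roughly 8.9 kg of CO2 per gallon of gasoline burned (EPA rule of thumb).

KG_CO2_PER_GALLON = 8.9

def gas_tax_per_gallon(scc_per_ton: float) -> float:
    """Implied tax in dollars per gallon for a given SCC in $/metric ton CO2."""
    return scc_per_ton * KG_CO2_PER_GALLON / 1000  # 1,000 kg per metric ton

for scc in (10, 21, 100, 200):
    print(f"SCC ${scc:>3}/ton CO2  ->  ${gas_tax_per_gallon(scc):.2f}/gallon")
```

At $10 per ton this yields about 9 cents per gallon, and at $200 per ton about $1.78, matching the "almost negligible" and "as much as $2" figures in the quotation.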
Pindyck acknowledges the uncertainty over the atmospheric science of climate change, but as befits an economist, his main focus is on the economic issues. He points to the often-cited study by Michael Greenstone, Elizabeth Kopits and Ann Wolverton, who published a 2011 paper on "Estimating the Social Cost of Carbon for Use in U.S. Federal Rulemakings: A Summary and Interpretation." They estimated a "central value" for the social cost of carbon of $21 per ton of carbon dioxide emissions. But as Pindyck points out, this central value is itself highly uncertain, for three reasons. First, the link from climate change to an effect on economic output "is completely ad hoc and of almost no predictive value. The typical IAM ["integrated assessment model"] has a loss function that relates temperature increases to reductions in GDP. But there is no economic theory behind the loss function; it is simply made up. Nor are there data on which to base the parameters of the function; instead the parameters are simply chosen to yield moderate losses that seem “reasonable” (e.g., 1 or 2 percent of GDP) from moderate temperature increases (e.g., 2° or 3°C). Furthermore, once we consider larger increases in temperatures (e.g., 5°C or higher), determining the economic loss becomes pure guesswork. One can plug high temperatures into IAM loss functions, but the results are just extrapolations with no empirical or theoretical grounding."

A second problem is that the "central value" doesn't reveal anything about the potential risk of catastrophe--and once one combines the uncertainty over how well climate science can predict catastrophic changes 50 or 100 years away with the uncertainty over the economic costs of those changes, this problem is severe.

The third problem is choosing a "discount rate"--that is, how should we best compare the costs of acting in the near term to reduce carbon emissions with the benefits that would be received in 50 or 100 years? Presumably, a substantial share of the benefits will go to people who do not yet exist, and who, assuming that economic growth continues over time, will on average have considerably higher incomes than we do today.  Placing a high value on those future benefits means that we should be willing to sacrifice a great deal in the present; placing a lower value on those future benefits means a smaller willingness to incur costs in the present. But deciding how much to discount the future is an unsettled question in both economics and philosophy.
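A quick worked example shows why the discount rate matters so much. The rates below are purely illustrative, not drawn from Pindyck or from Greenstone et al.:

```python
# Present value of $100 of climate benefits received 50 or 100 years from now,
# at a few illustrative annual discount rates. The rate choice dominates the answer.

def present_value(amount: float, rate: float, years: int) -> float:
    """Standard exponential discounting: amount / (1 + rate)^years."""
    return amount / (1 + rate) ** years

for years in (50, 100):
    for rate in (0.01, 0.03, 0.05):
        pv = present_value(100, rate, years)
        print(f"$100 in {years} years at {rate:.0%}: ${pv:.2f} today")
```

At 1 percent, $100 of benefits a century out is worth about $37 today; at 5 percent, well under $1. The case for a large versus a small carbon tax can turn almost entirely on this one parameter.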

Pindyck's policy proposal is to set a low carbon tax now. He argues: "Because it is essential to establish that there is a social cost of carbon, and that social cost must be internalized in the prices that consumers and firms actually see and pay. Later, as we learn more about the true size of the SCC, the carbon tax can be increased or decreased accordingly." My own views on this subject favor a "Drill-Baby Carbon Tax."

But I'd take a moment here to note that the temptation to argue based on the low-probability chance of catastrophe needs to be handled with care. After all, there are lots of possible sequences of events that are low-probability, but potentially catastrophic. Those who want to limit use of fossil fuels call up  certain climate change scenarios. Those who are anti-science point to the possibility that scientists working with genetics or nanotechnology over the next century will create a doomsday plague. Those who favor huge spending on defense and espionage point to the possibility that a rogue government or a group of terrorists will be able to arm themselves with weapons of mass destruction. Those who favor aggressive space exploration talk about the possibility of the earth suffering a devastating strike from an asteroid in the next century or two. Write your own additional political, economic, and science fiction disaster scenarios here! My point is that being able to name a catastrophe with a low but unquantifiable probability is a fairly cheap tool of argumentation. 


Monday, June 24, 2013

The Punch Bowl Speech: William McChesney Martin

In monetary policy jargon, "taking away the punch bowl" refers to a central bank action to reduce the stimulus that it has been giving the economy. 

Thus,  last Wednesday, Ben Bernanke discussed the possibility that if the U.S. economy performs well, the Federal Reserve would reduce and eventually stop its "quantitative easing" policy of buying U.S. Treasury bonds and various mortgage-backed securities. Everyone knows this needs to happen sooner or later, but Bernanke's comments raised the possibility that it might be sooner rather than later, and at least for a few days, stock markets dropped and broader financial markets were shaken.
Various blog commentaries and press reports referred to Bernanke's action as taking away the "punch bowl" (for example, here, here, and here). 

The "punch bowl" metaphor seems to trace back to a speech given on October 19, 1955, by William McChesney Martin, who served as Chairman of the Federal Reserve from 1951 through 1970, to the  New York Group of the Investment Bankers Association of America. Here's what Martin said to the financiers of his own time, who presumably weren't that eager to see the Fed reduce its stimulus, either:

"If we fail to apply the brakes sufficiently and in time, of course, we shall go over the cliff. If businessmen, bankers, your contemporaries in the business and financial world, stay on the sidelines, concerned only with making profits, letting the Government bear all of the responsibility and the burden of guidance of the economy, we shall surely fail. ... In the field of monetary and credit policy, precautionary action to prevent inflationary excesses is bound to have some onerous effects--if it did not it would be ineffective and futile. Those who have the task of making such policy don't expect you to applaud. The Federal Reserve, as one writer put it, after the recent increase in the discount rate, is in the position of the chaperone who has ordered the punch bowl removed just when the party was really warming up."

Monetary policy in the 1950s got a lot less attention than it does today: indeed, there was a significant group of economists who believed that it was completely ineffectual. The old story, told by Herb Stein in his 1969 book The Fiscal Revolution in America, was that President John F. Kennedy used to remember what Martin did by the fact that William McChesney Martin was head of the Federal Reserve, and that "Martin" started with an "M," as did "monetary," so he knew that monetary policy was what the Federal Reserve did. (Apparently he was not bothered by the fact that "fiscal" and "Federal Reserve" both start with an "f.")

But Martin viewed monetary policy very much as a balancing act. As he once said in testimony before the U.S. Senate: “Our purpose is to lean against the winds of deflation or inflation, whichever way they are blowing.” (In  the Winter 2004 issue of the Journal of Economic Perspectives, where I've been managing editor since 1986,  Christina Romer and David Romer wrote "Choosing the Federal Reserve Chair: Lessons from History," which puts Martin's views on monetary policy in the context of other pre-Bernanke Fed chairmen.)

Martin held the view that monetary policy could be useful in reducing the risk of depressions and inflations, but that it wasn't all-powerful. In the 1955 speech, he said:

"But a note should be made here that, while money policy can do a great deal, it is by no means all powerful. In other words, we  should not place too heavy a burden on monetary policy. It must be accompanied by appropriate fiscal and budgetary measures if we are to achieve our aim of stable progress. If we ask too much of monetary policy we will not only fail but we will also discredit this useful, and indeed indispensable, tool for shaping our economic development. ...

"Nowadays, there is perhaps a tendency to exaggerate the effectiveness of monetary policy in both directions. Recently, opinion has been voiced that the country's main danger comes from a roseate belief that monetary policy, backed by flexible tax and debt management policies and aided by a host of built-in stabilizers, has completely conquered the problem of major economic fluctuations and relegated them to ancient history. This, of course, is not so because we are dealing with human beings and human nature.

"While the pendulum swings between too little or too much reliance upon credit and monetary policy, there is an emerging realization more  and more widely held and expressed by business, labor and farm organizations that ruinous depressions are not inevitable, that something can be done about moderating excessive swings of the business cycle. The idea that the business cycle can be altogether abolished seems to me as fanciful as the notion that the law of supply and demand can be repealed. It is hardly necessary to go that far in order to approach the problems of healthy economic growth sensibly and constructively. Laissez faire concepts, the idea that deep depressions are divinely guided retribution for man's economic follies, the idea that money should be the master instead of the servant, have been discarded because they are no longer valid, if they ever were."
It seems to me that at least some of the current discussion of the Fed has a tone similar to the exaggeration in both directions that Martin described. Some critics argue that the extraordinary monetary policies undertaken since the later part of 2007 are useless. At the other extreme, other critics argue that if only those extraordinary policies had been pursued with considerably more vigor, the U.S. economy would already have returned to full employment. In other words, the Fed is either ineffectual or all-powerful--but the truth is likely to lie between these extremes.

My own sense, as I've argued on this blog more than once, is that the extraordinary monetary policy steps taken by the Fed made sense in the context of the extraordinary financial crisis and Great Recession of 2007-2009, and even for a year or two or three afterward. But the Great Recession ended four years ago, in June 2009. The extreme stimulus policies of the Fed--ultra-low interest rates and direct buying of financial securities--don't seem to pose any particular danger of inflation as yet, but they create other dislocations: savers suffer, and some will go on a "search for yield" that can create new asset market bubbles; money market funds are shaken; and banks and governments that can borrow cheaply are less likely to carry out needed reforms. And of course, there are the economic and financial problems that will arise when the Fed does take away the punch bowl. For discussion of these concerns, see earlier blog posts here, here, here and here.

My own sense is that there are times for monetary policy to tighten and times for it to loosen, and the very difficult practical wisdom lies in knowing the difference. In a similar spirit, Martin started his 1955 speech this way: "There's an apocryphal story about a professor of economics that sums up in a way the theme of what I would like to talk about this evening. In final examinations the professor always posed the same questions. When he was asked how his students could possibly fail the test, he replied simply, 'Well, it's true that the questions don't change, but the answers do.'"

Thursday, June 20, 2013

Macroprudential Monetary Policy: What It Is, How it Works

In the old days, like six or seven years ago, one could teach monetary policy at the intro level as consisting of basically one tool: the central bank would lower a particular target interest rate to stimulate the economy out of recessions, and raise that target interest rate when the economy seemed to be overheating. But after the last few years, even at the intro level, one needs to teach about some additional tools available to monetary authorities. One set of tools goes under the name of "macroprudential policy."

The idea here is that in the past, regulation of financial institutions focused on whether individual companies were making reasonably prudent decisions. A major difficulty with this "microprudential" approach to regulation, as the Great Recession showed, is that it didn't take into account whether the decisions of many financial firms all at once were creating macroeconomic risk. In particular, when the central bank was looking at whether the economy was sinking into recession or on the verge of inflation, it didn't take into account whether the overall level of credit being extended in the economy was growing very rapidly--as in the housing price bubble of about 2004-2007. I discussed some of the evidence on how boom-and-bust credit cycles are often linked to severe recessions in a March 2012 post on "Leverage and the Business Cycle" as well as in a February 2013 post on "The Financial Cycle: Theory and Implications."

Macroprudential policy means using regulations to limit boom-and-bust swings of credit. Douglas J. Elliott, Greg Feldberg, and  Andreas Lehnert offer a useful listing of these kinds of policies, how they have been used in the past, and some preliminary evidence on how they have worked in "The History of Cyclical Macroprudential Policy in the United States," written as a working paper in the Finance and Economics Discussion Series published by the Federal Reserve.

One basic but quite useful contribution of the paper is to organize a list of macroprudential policy tools. One set of tools can be used to affect the demand for credit: rules about loan-to-value ratios for those borrowing to buy houses, margin requirements for those buying stocks, the acceptable length of loans for buying houses, and tax policies like the extent to which interest payments are tax-deductible. Another set of tools affects the supply of credit: rules about the interest rates that financial institutions can pay on certain accounts or charge for certain loans, rules about how much financial institutions must set aside in reserves or hold as capital, any restrictions on the portfolios that financial institutions can hold, and the aggressiveness of the regulators in enforcing these rules. Here's a list of macroprudential tools, with some examples of their past use.



One interesting aspect of these macroprudential policy tools is that many of them are sector-specific. When the central bank thinks of monetary policy as just moving overall interest rates, it constantly faces a dilemma. Is it worth raising interest rates for the entire economy just because there might be a housing bubble? Or just because the stock market seems to be experiencing "irrational exuberance" as in the late 1990s? Macroprudential policy suggests that one might address a housing market credit boom by altering regulations focused on housing markets, or one might address a stock market bubble by altering margin requirements for buying stock.
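To make the sector-specific channel concrete, here is a minimal sketch of how tightening a loan-to-value cap works. The cap levels and house price are hypothetical illustrations of my own, not figures from the Fed paper:

```python
# Sketch of a loan-to-value (LTV) cap, one of the demand-side macroprudential
# tools discussed above. All numbers are hypothetical illustrations.

def max_loan(house_price, ltv_cap):
    """Largest mortgage a lender may extend under an LTV ceiling."""
    return house_price * ltv_cap

price = 300_000
for cap in (0.95, 0.80):  # loose regime vs. tightened regime
    loan = max_loan(price, cap)
    down = price - loan
    print(f"LTV cap {cap:.0%}: max loan ${loan:,.0f}, down payment ${down:,.0f}")
```

In this illustration, tightening the cap from 95% to 80% quadruples the required down payment (from $15,000 to $60,000), cooling mortgage credit specifically while leaving, say, business lending or overall interest rates untouched.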

Do these macroprudential tools work? Elliott, Feldberg, and Lehnert offer some cautious evidence on this point: "In this paper, we use the term “macroprudential tools” to refer to cyclical macroprudential tools aimed at slowing or accelerating credit growth. ... Many of these tools appear to have succeeded in their short-term goals; for example, limiting specific types of bank credit or liability and impacting terms of lending. It is less obvious that they have improved long-term financial stability or, in particular, successfully managed an asset price bubble, and this is fertile ground for future research. Meanwhile, these tools have faced substantial administrative complexities, uneven political support, and competition from nonbank or other providers of credit outside the set of regulated institutions. ... Our results to date suggest that macroprudential policies designed to tighten credit availability do have a notable effect, especially for tools such as underwriting standards, while macroprudential policies designed to ease credit availability have little effect on debt outstanding."

When the next asset-price bubble or credit boom emerges--and sooner or later, it will--macroprudential tools and how best to use them will become a main focus of public policy discussion.

For more background on the economic analysis behind macroprudential policy, a useful starting point is "A Macroprudential Approach to Financial Regulation," by Samuel G. Hanson, Anil K. Kashyap, and Jeremy C. Stein, which appeared in the Winter 2011 issue of the Journal of Economic Perspectives.  (Full disclosure: My job as Managing Editor of JEP has been paying the household bills since 1986.) Jeremy Stein is now a member of the Federal Reserve Board of Governors, so his thoughts on the subject are of even greater interest.


Technology and Job Destruction

Is there something about the latest wave of information and communication technologies that is especially destructive to jobs? David Rotman offers an overview of the arguments in "How Technology Is Destroying Jobs," in the July/August 2013 issue of the MIT Technology Review.

On one side Rotman emphasizes the work of Erik Brynjolfsson and Andrew McAfee: "That robots, automation, and software can replace people might seem obvious to anyone who’s worked in automotive manufacturing or as a travel agent. But Brynjolfsson and McAfee’s claim is more troubling and controversial. They believe that rapid technological change has been destroying jobs faster than it is creating them, contributing to the stagnation of median income and the growth of inequality in the United States. And, they suspect, something similar is happening in other technologically advanced countries."

As one piece of evidence, they offer this graph showing productivity growth and private-sector employment growth. Going back to 1947, these two grew at more-or-less the same speed. But starting around 2000, a gap opens up, with productivity growing faster than private-sector employment.



The figure sent me over to the U.S. Bureau of Labor Statistics website to look at total jobs. Total U.S. jobs were 132.6 million in December 2000. Then there's a drop associated with the recession of 2001, a rise associated with the housing and finance bubble, a drop associated with the Great Recession, and more recently a bounceback to 135.6 million jobs in May 2013. But put it all together, and from December 2000 to May 2013, total U.S. jobs now are about 2.3% higher than they were back at the start of the century.
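The cumulative change can be checked directly from the two BLS totals just cited; everything else in this snippet is simple arithmetic:

```python
# Growth in total U.S. jobs from December 2000 to May 2013,
# using the two totals cited above (in millions).
jobs_dec_2000 = 132.6
jobs_may_2013 = 135.6

pct_change = (jobs_may_2013 - jobs_dec_2000) / jobs_dec_2000 * 100
print(f"Cumulative growth: {pct_change:.1f}%")  # about 2.3%
```

Spread over roughly a dozen years, that cumulative gain works out to well under a quarter of a percent per year, which is the slow job growth at the heart of the debate.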

Why the change? The arguments rooted in technological developments sound like this: "Technologies like the Web, artificial intelligence, big data, and improved analytics—all made possible by the ever increasing availability of cheap computing power and storage capacity—are automating many routine tasks. Countless traditional white-collar jobs, such as many in the post office and in customer service, have disappeared. W. Brian Arthur, a visiting researcher at the Xerox Palo Alto Research Center’s intelligence systems lab and a former economics professor at Stanford University, calls it the “autonomous economy.” It’s far more subtle than the idea of robots and automation doing human jobs, he says: it involves “digital processes talking to other digital processes and creating new processes,” enabling us to do many things with fewer people and making yet other human jobs obsolete."

Of course, there are other arguments about slower job growth rooted in other factors. Looking at the year 2000 as a starting point is not a fair comparison, because the U.S. economy was at the time in the midst of the unsustainable dot-com bubble. The current economy is still recovering from its worst episode since the Great Depression.  In addition, earlier decades have seen demographic changes like a flood of baby boomers entering the workforce from the 1960s through the 1980s, along with a flood of women entering the (paid) workforce. As those trends eased off, the total number of jobs would be expected to grow more slowly.

Another response to the technology-is-killing-jobs argument is that while technology has long been disruptive, the economy has shown an historical pattern of adjusting over time. Rotman writes: "At least since the Industrial Revolution began in the 1700s, improvements in technology have changed the nature of work and destroyed some types of jobs in the process. In 1900, 41 percent of Americans worked in agriculture; by 2000, it was only 2 percent. Likewise, the proportion of Americans employed in manufacturing has dropped from 30 percent in the post–World War II years to around 10 percent today—partly because of increasing automation, especially during the 1980s. ... Even if today’s digital technologies are holding down job creation, history suggests that it is most likely a temporary, albeit painful, shock; as workers adjust their skills and entrepreneurs create opportunities based on the new technologies, the number of jobs will rebound. That, at least, has always been the pattern. The question, then, is whether today’s computing technologies will be different, creating long-term involuntary unemployment."

Given that the U.S. and other high-income economies have been experiencing technological change for well over a century, and the U.S. unemployment rate was below 6% as recently as the four straight years from 2004 to 2007, it seems premature to me to be forecasting that technology is now about to bring a dearth of jobs. Maybe this fear will turn out to be right this time, but it flies in the face of a couple of centuries of economic history.

However, it does seem plausible to me that technological development, in tandem with globalization, is altering pay levels in the labor force, contributing to higher pay at the top of the income distribution and lower pay in the middle. For some discussion of technology and income inequality, see my post earlier this week on "Rock Music, Technology, and the Top 1%," and for some discussion of technology and "hollowing out" the middle skill levels of the labor force, see my post on "Job Polarization by Skill Level" or this April 2010 paper by David Autor. (Full disclosure: Autor is also editor of the Journal of Economic Perspectives, and thus is my boss.)

Given that new technological developments can be quite disruptive for existing workers, the conclusion I draw is the importance of finding ways for more workers to work with computers and robots so as to magnify their productivity. Rotman mentions a previous example of such a social transition: "Harvard’s [Larry] Katz has shown that the United States prospered in the early 1900s in part because secondary education became accessible to many people at a time when employment in agriculture was drying up. The result, at least through the 1980s, was an increase in educated workers who found jobs in the industrial sectors, boosting incomes and reducing inequality. Katz’s lesson: painful long-term consequences for the labor force do not follow inevitably from technological changes." It feels to me as if we need a widespread national effort in both the private and the public sector to figure out ways in which every worker in every job can use information technology to become more productive.


The arguments over how technology affects jobs remind me a bit of an old story from the development economics literature. An economist is visiting a public works project in a developing country. The project involves building a dam, and dozens of workers are shoveling dirt and carrying it over to the dam. The economist watches for a while, and then turns to the project manager and says: "With all these workers using shovels, this project is going to take forever, and it's not going to be very high quality. Why not get a few bulldozers in here?" The project manager responds: "I can tell that you are unfamiliar with the political economy of a project like this one. Sure, we want to build the dam eventually, but really, one of the main purposes of this project is to provide jobs. Getting a bulldozer would wipe out these jobs." The economist mulls this answer a bit, and then replies: "Well, if the real emphasis here is on creating jobs, why give the workers shovels? Wouldn't it create even more jobs if they used spoons to move the dirt?"

The notion that everyone could stay employed if only those new technologies would stay out of the way has a long history. But the rest of the world is not going to back off on using new technologies. And future U.S. prosperity won't be built by workers using the metaphorical equivalent of spoons, rather than bulldozers.