Thursday, April 24, 2014

Comparing Electricity Production Costs: Fossil Fuels, Wind, Solar

To compare the costs of producing electricity in various ways, the U.S. Energy Information Administration uses what is called "levelized cost." The idea is to consider the cost of building a new electricity-generating facility, thus using the most recent technology, and then using that plant to produce electricity for 30 years. Of course, some methods of producing electricity like solar and wind will have a high up-front cost, but then no additional cost for fuel. Other methods of producing electricity like coal or natural gas might have lower costs up-front, but then need to pay for fuel in the future. Looking at the levelized cost over 30 years is a framework that takes such differences into account.
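
The arithmetic behind a levelized cost is simple to sketch, even though the EIA's own model has many more moving parts: discount every year's costs and every year's electricity output back to the present, and divide one by the other. Here is a minimal illustration in Python; the capital costs, operating costs, discount rate, and capacity factors below are made-up numbers chosen only to show the mechanics, not EIA inputs.

```python
def levelized_cost(capital_cost, fixed_om, variable_cost, capacity_mw,
                   capacity_factor, discount_rate, years=30):
    """Levelized cost in $/MWh: discounted lifetime costs divided by
    discounted lifetime generation (illustrative sketch, not the EIA model)."""
    annual_mwh = capacity_mw * 8760 * capacity_factor
    pv_costs = capital_cost        # up-front construction cost, paid in year 0
    pv_mwh = 0.0
    for t in range(1, years + 1):
        discount = (1 + discount_rate) ** t
        pv_costs += (fixed_om + variable_cost * annual_mwh) / discount
        pv_mwh += annual_mwh / discount
    return pv_costs / pv_mwh

# Hypothetical plants: a gas plant with a lower capital cost but ongoing fuel costs,
# and a wind farm with a higher capital cost, no fuel cost, and a lower capacity factor.
gas = levelized_cost(capital_cost=1.0e9, fixed_om=15e6, variable_cost=30.0,
                     capacity_mw=1000, capacity_factor=0.87, discount_rate=0.06)
wind = levelized_cost(capital_cost=2.0e9, fixed_om=40e6, variable_cost=0.0,
                      capacity_mw=1000, capacity_factor=0.35, discount_rate=0.06)
print(f"gas: ${gas:.0f}/MWh, wind: ${wind:.0f}/MWh")
```

The point of the exercise is only that a 30-year discounted framework puts high-up-front/low-fuel technologies and low-up-front/high-fuel technologies on a common footing.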

Here are two sets of levelized estimates for producing electricity that I've put together from an April 2014 EIA report. The first column shows levelized costs, expressed in 2012 dollars per megawatt-hour, for a plant where construction starts now and the plant is ready to produce at full scale in 2019. The second column, again expressed in 2012 dollars per megawatt-hour, is the estimated cost for a plant that would be started in about 20 years and would begin producing in 2040. Thus, the second column incorporates estimates of how the costs of various fossil fuels will change and how technological progress in electricity production will unfold over the next couple of decades.



Here are some thoughts about these numbers:

1) The table is divided into "dispatchable" and "non-dispatchable" technologies. Basically, dispatchable technologies produce electricity when you want it. Non-dispatchable technologies produce electricity when nature is willing: that is, when the wind is blowing, the sun is shining, and there's water in the dam to flow through the turbines. For understandable reasons, those responsible for running electrical grids have some preference for dispatchable energy, because they know it can be there when they want it. However, this advantage is not taken into account in the levelized cost estimates.

2) These estimates also take into account the "capacity factor," which is the proportion of the time that the facility is actually producing electricity. Coal, natural gas, and biomass have capacity factors in the range of 83-87%. (The exception here is the "turbine" approaches to natural gas. These are smaller-scale plants meant to be run only at times of peak demand when electricity is most needed, so their "capacity factor" is 30%--which is why their costs of generating electricity are comparatively high.) Nuclear has a capacity factor of 90%. Wind has a capacity factor of 35-37%; solar has a capacity factor of 20-25%; and hydroelectric has a capacity factor of 53%.

3) The cost estimates refer to building the electricity production capacity at an appropriate location. Thus, while geothermal is the cheapest way of producing electricity of the options here, the locations where geothermal electricity can be produced at this low cost are somewhat limited. It probably makes sense to keep looking for new places to produce geothermal electricity, but the reason it costs more in the 2040 projections than in the 2019 projections is based on the belief that future locations for geothermal will be more costly than current ones.

4) An obvious question about these comparisons is the extent to which they take environmental differences into account: in particular, what about the carbon emissions from burning fossil fuel? The EIA writes: "3 percentage points are added to the cost of capital when evaluating investments in greenhouse gas (GHG) intensive technologies like coal-fired power and coal-to-liquids (CTL) plants without carbon control and sequestration (CCS). In LCOE terms, the impact of the cost of capital adder is similar to that of an emissions fee of $15 per metric ton of carbon dioxide (CO2) when investing in a new coal plant without CCS, which is representative of the costs used by utilities and regulators in their resource planning. The adjustment should not be seen as an increase in the actual cost of financing, but rather as representing the implicit hurdle being added to GHG-intensive projects to account for the possibility that they may eventually have to purchase allowances or invest in other GHG-emission-reducing projects to offset their emissions. As a result, the LCOE values for coal-fired plants without CCS are higher than would otherwise be expected." Of course, what sort of cost adjustment is appropriate for carbon-emitting sources of electricity can be disputed, but there is some adjustment built into these numbers.

5) For the 2019 estimates, natural gas is the cheapest of the fossil fuel approaches. Of course, this is in part because natural gas prices in the U.S. have fallen; further, because natural gas cannot easily be shipped around the world, the US price can remain lower than in other countries.  Other research suggests that when taking the sum of private costs of production and the environmental costs into account, natural gas is the low-cost choice.

6) Wind and solar photovoltaics are expected to become cheaper ways of generating electricity over time, as you can see from comparing the 2019 and 2040 columns. But the locations for cost-effective use of wind resources are limited. And at least according to the U.S. Energy Information Administration, solar electricity will still be more costly than electricity from fossil fuels by 2040.

7) Meanwhile, the price of generating electricity from coal is projected to keep falling, too.


Wednesday, April 23, 2014

U.S. Airline Deregulation: The Next Step

The philosophy of the U.S. domestic airlines over the last decade or so is simple: fewer flights, packed with more passengers. Consider the data from the Bureau of Transportation Statistics. The total number of domestic U.S. flights was 8.3 million in 2013, down from 10 million in 2005. The number of available seat-miles was 693 billion in 2013, down from 740 billion in 2005. However, the number of passengers dropped by much less: there were 645 million domestic passengers on U.S. flights in 2013, down only a bit from 657 million in 2007. And "revenue passenger-miles"--that is, the number of passengers times the distance flown--actually rose slightly, from 571 billion in 2005 to 578 billion in 2013.

It has been possible to have fewer flights but more revenue passenger-miles because the average flight is fuller. The "load factor," which is calculated as actual passenger-miles traveled as a proportion of available seat-miles, was 83.5% in 2013, up from 77% in 2005 and 70% in 2002. In other words, US airlines have been competing to jam more people into fewer flights.
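
As a quick consistency check, the load factor is just revenue passenger-miles divided by available seat-miles, and the BTS figures quoted above reproduce the reported number to within rounding:

```python
# Load factor = revenue passenger-miles / available seat-miles (BTS figures above)
rpm_2013 = 578e9   # revenue passenger-miles, 2013
asm_2013 = 693e9   # available seat-miles, 2013
print(f"2013 load factor: {rpm_2013 / asm_2013:.1%}")  # about 83.4%, vs. 83.5% reported
```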

This background is part of Kenneth Button's case for "Really Opening Up the American Skies," in the Spring 2014 issue of Regulation magazine. Button points out that while airline deregulation back in 1978 led to lower prices and additional service (through hub-and-spoke route systems), those patterns have been changing in recent years: the number of U.S. domestic routes has been contracting and airfares have stopped declining. Button writes:

"The deregulation of the 1970s, by removing entry quantitative controls, led to a considerable increase in services. It also increased the capability of individuals to access a wider range of destinations from their homes via the hub-and-spoke system of routings that emerged. This pattern has been reversed since 2007. The largest 29 airports in the United States lost 8.8 percent of their scheduled flights between 2007 and 2012, but medium-sized airports lost 26 percent and small airports lost 21.3 percent. ...
The advent of jet and wide-bodied aircraft lowered costs in the 1960s and 1970s, and the 1978 Airline Deregulation Act caused the trend to continue in the 1980s and 1990s. Since then, real airline fares within the United States have largely plateaued; they fluctuate as fuel prices and economic growth oscillate and the temporary effects of mergers are felt. The challenge is to get the fare curve moving down again. The issue is not simply a matter of fares, but also the number and nature of services that are provided. People who no longer have ready access to air services are confronted with an infinite airfare—a fact not reflected in the airline airfare statistics."
What's the answer? Button argues that it's time for the next step in U.S. airline deregulation: that is, letting airlines from other countries enter the U.S.  market and deliver U.S. passengers between U.S. cities. He writes:
"In sum, the 1978 Airline Deregulation Act only partially liberalized the U.S. domestic airline market. One important restriction that remains is the lack of domestic competition from foreign carriers. The U.S. air traveler benefited from the country being the first mover in deregulation, and this provided lower fares and consumer-driven service attributes some 15–20 years before they were enjoyed in other markets; the analogous reforms in Europe only fully materialized after 1997. But the world has
changed, and so have the demands of consumers and the business models adopted by the airlines. ...  But remaining regulations still limit the amount of competition in the market and, with this, the ability of travelers to enjoy even lower fares and a wider range of services."
Back when U.S. airline deregulation was being considered in the 1970s, one of the powerful examples for advocates of deregulation was that if you looked at airfares between cities within a certain state--say, within Texas or within California--they were much lower than airfares between similar cities in different states. The reason was that airfares and routes on within-state flights weren't federally regulated, and so it could be seen that competition offered a better deal for customers. In a similar spirit, Button offers some examples of fares for European carriers like Ryanair or easyJet compared with U.S. carriers like Southwest or JetBlue. For flights of similar distance, the European airlines are often charging a lot less.



The objections to allowing foreign airlines into the U.S. domestic market tend to fall into two broad categories. One argument is that the foreign airlines will provide inferior service and don't have a sense of what U.S. customers want, so they won't attract much business. A cynic might answer that inferior service and ignorance about what customers want are not exactly unknown characteristics among the current U.S. airlines. Also, while this concern over how foreign airlines might suffer financial losses must needs touch a tender chord of throbbing emotion in every American breast, frankly, the foreign airlines can look after themselves. The other argument is that the foreign airlines will be so successful that the workers of U.S. airlines will suffer. But most of the people working at airports, like baggage handlers and ground crew, will continue to be Americans. And the aftermath of the 1978 airline deregulation teaches that if more efficient practices and lower fares bring a new surge of airline customers, then the industry as a whole--and the American workers in the airline industry broadly defined--will expand.

My wife and I have three children. I favor more competition and lower airfares.


Tuesday, April 22, 2014

Earth Day: A Baptists and Bootleggers Story

Earth Day was first celebrated on April 22, 1970. It is now observed in 192 countries, and is coordinated by the Earth Day Network. Bruce Yandle offers a hard-eyed look at how the original Earth Day affected U.S. environmental legislation in "How Earth Day Triggered Environmental Rent Seeking," which appeared in the Summer 2013 issue of the Independent Review.

One of Yandle's signature insights is the idea of a "Baptists-and-bootleggers" coalition. Who favored prohibition of alcohol sales? Baptists, on moral grounds, and bootleggers, because government prohibition would limit competition and boost their profits. He makes a strong argument that Earth Day led to a similar environmentalists-and-industrialists coalition, in which environmentalists pushed for laws to reduce pollution, and industrialists pushed for anti-pollution laws that would hinder their competition.

Before the passage of the Clean Air Act and Clean Water Act in 1970, pollution was often restricted by common law cases brought through the courts. From the point of view of incumbent business, these court cases were an unpleasant way to deal with environmental problems. Court decisions could be inconsistent, and sometimes harshly punitive. But in addition, common law court decisions offered no way to inhibit competition by raising the costs of new entrants and rival producers. Thus, many large companies saw opportunities to limit competition in the idea of federal environmental laws.

In some ways, the use of anti-pollution laws to limit competition was pretty obvious. For example, the new environmental laws commonly grandfathered in existing plants, but required new plants to meet much stricter standards.

In other ways, the methods of restricting competition were less obvious. Consider that there are essentially three ways to set environmental standards. One is to use economic incentives like pollution taxes and tradeable pollution permits. A second is to set performance standards for how much pollution can be emitted, but to leave firms the flexibility to decide how to meet the standards in the most cost-effective way. The third way is a technological standard which requires that every firm use the same method for reducing pollution. When a technological standard is required, then firms which could have reduced pollution more cheaply are not allowed to gain a competitive advantage from doing so--because all must follow the prescribed standard.
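
A stylized numerical example may make the cost difference concrete; the two firms and the dollar figures here are hypothetical, not drawn from Yandle's article.

```python
# Two hypothetical firms must together cut 100 tons of emissions.
# Firm A can abate at $10/ton; Firm B at $50/ton (assumed numbers).
abatement_cost = {"A": 10, "B": 50}   # $ per ton abated
required_cut = 100                    # tons, total

# Uniform technology standard: both firms must install the same control method,
# so assume each ends up cutting 50 tons regardless of its own abatement cost.
uniform = 50 * abatement_cost["A"] + 50 * abatement_cost["B"]

# Incentive-based approach (a pollution tax or tradeable permits):
# abatement gravitates to the low-cost firm, which does all 100 tons.
flexible = required_cut * abatement_cost["A"]

print(f"technology standard: ${uniform}, incentive-based: ${flexible}")
# Same 100 tons removed either way, but at $1,000 rather than $3,000.
```

That cost gap is also why a uniform technology mandate can appeal to an incumbent: the firm that could have abated cheaply gains no competitive edge from doing so.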

For several decades after 1970, one could at least argue that most environmental indicators were moving in the right direction. But after a review of the more limited progress against air and water pollution in the last couple of decades, Yandle argues, "These data strongly suggest we have hit the cleanup limits of a top-down, command-and-control, technology-based pollution-control system. We know we can do better, and so do EPA managers."

Thus, the environmental authorities have been pushing away from technology-based standards, and toward offering flexibility in meeting environmental goals. In the case of water pollution, Yandle reports: "In 1991, the EPA began to push hard to develop watershed-based nutrient trading communities where publicly owned treatment works and other dischargers are allowed to exchange discharge offsets. In some cases, farmers and land developers are included in the larger trading communities. When trades take place, the incremental cost of reducing pollution falls dramatically."

In the case of air pollution, flexible pollution permit trading arrangements were used to reduce lead emissions in the 1980s, and sulfur dioxide emissions since the 1990s. Yandle writes: "For
the nation, as of 2011 there are 242 nonattainment counties for ozone, 121 for PM2.5. But get this, there are just 9 nonattainment counties, which are those that have not achieved EPA National Ambient Air Quality Standards, for sulfur dioxide, the only criteria pollutant managed by markets. Indeed, since 1990, sulfur dioxide emissions have been reduced 65 percent at an EPA estimated cost of from
$1.17 to $2 billion. If command-and-control had been used instead of markets, the estimated cost would have ranged from $7.5 to $11.5 billion ..."

For those interested in learning more about these flexible systems for reducing pollution with tradeable permits, the Winter 2013 issue of the Journal of Economic Perspectives had a symposium on the subject. It starts with an overview paper by Lawrence H. Goulder, "Markets for Pollution Allowances: What Are the (New) Lessons?" There are then three papers on specific applications. Richard Schmalensee and Robert N. Stavins discuss "The SO2 Allowance Trading System: The Ironic History of a Grand Policy Experiment"; Richard G. Newell, William A. Pizer and Daniel Raimi tackle "Carbon Markets 15 Years after Kyoto: Lessons Learned, New Challenges"; and Karen Fisher-Vanden and Sheila Olmstead explore "Moving Pollution Trading from Air to Water: Potential, Problems, and Prognosis."
As always, all papers in the JEP back to the first issue in 1987 are freely available, courtesy of the American Economic Association. (Full disclosure: I've been Managing Editor of JEP since 1987, too.)

Is there some reason that the environmentalists and the industrialists will be willing to move away from technology-based and performance-based environmental standards and embrace a more flexible incentive-based approach? Yandle offers the following argument: "At some point, the environmental Baptists will see that they are losing ground. The system they have supported no longer delivers the goods they desire. As we have seen, major elements of environmental progress are dead in the water. And the bootleggers? At some point, global competition becomes so severe that regulatory rent seeking no longer pays. For durable regulation to survive, bootleggers and Baptists must be singing off the same page. For now, the music has stopped."





Monday, April 21, 2014

Behind the Long-Term Rise in U.S. Health Care Costs

There is ongoing controversy over where U.S. health care costs are headed next. Has the rate of growth slowed, and if so, when and why? Did it just slow briefly during the aftermath of the Great Recession, and is it now speeding up again? Health care expenditures in the U.S. economy were 5% of GDP in 1960, and have risen steadily to 17% of GDP. Of course, if health care is getting a bigger slice of the GDP pie, then other desirable areas of spending, both for households and for government, must be getting a smaller slice. Indeed, the projections for rising health care costs are by far the largest factor that drives the projections of expanding federal budget deficits in the long run.

Louise Sheiner offers some useful "Perspectives on Health Care Spending Growth"  in a paper recently written for the Engelberg Center on Health Care Reform at the Brookings Institution. She makes the point that even as health care costs have been rising, public and private health care insurance has been expanding so that Americans have been paying a lower share of those costs out of pocket.

Indeed, given the rise in health care costs as a share of GDP, combined with the fact that Americans are paying a lower share of those expenses out of pocket, the overall balance is that out-of-pocket health care costs as a share of GDP haven't risen for several decades. To put it another way: Back in 1960, health care spending was 5% of GDP, and Americans paid about half of that--2.5% of GDP--in the form of out-of-pocket costs. Now health care spending is 17% of GDP, but only 2% of GDP is being paid in out-of-pocket health care costs--with public and private insurance paying for the rest.
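
The arithmetic behind that comparison is worth making explicit: the out-of-pocket share of total health spending has fallen from about half to roughly one-eighth, which is how total spending can more than triple as a share of GDP while out-of-pocket spending as a share of GDP stays flat. A two-line check, using the figures in the text:

```python
# Health spending and out-of-pocket spending, as shares of GDP (figures from the text)
total_1960, oop_1960 = 0.05, 0.025   # 5% of GDP, about half paid out of pocket
total_now,  oop_now  = 0.17, 0.02    # 17% of GDP, 2% of GDP paid out of pocket

print(f"1960 out-of-pocket share of health spending: {oop_1960/total_1960:.0%}")   # ~50%
print(f"recent out-of-pocket share of health spending: {oop_now/total_now:.0%}")   # ~12%
```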



As Sheiner writes: "As [health care] spending rise as a share of income, two things happen: insurance contracts change to insulate people from the risk of large expenses if they become ill, and public programs expand to help maintain access to health services for lower income. Both of these changes fuel increased adoption of health technology. ... It is clear that it is the combination of technological
innovation and a continued willingness-to-pay for that technology that has allowed health spending to rise faster than income for so long. For example, without the dramatic decline in the share of health expenditures paid out-of-pocket, many Americans would simply not have been able to afford the new technologies when they became ill. It is inevitable that this willingness-to-pay will diminish at some point, but we have very little ability to predict when that will be."

What does this mean for the future path of health care spending? Sheiner analyzes patterns of GDP growth over time compared with health care costs. Like other analysts, she observes that the rise in health care costs started slowing down about a decade ago--that is, well before the Patient Protection and Affordable Care Act was enacted in 2010. She cautions against reading too much into this slower rise of health care costs: "[T]he slowdown in health spending growth observed since 2002 is largely the result of the two recessions that occurred in the last decade ... [I]t would be hard to argue that a few years of slower growth should be viewed as a turning point, particularly given that the recent slowdown occurred during unusual times: a decade of very slow economic growth and very low inflation (which made it harder for firms to pass on health insurance costs to their employees and may have required larger adjustments than usual), a major health reform that was accompanied by much confusion and fear, and a huge runup in budget deficits that intensified attention on the need for future spending cuts."

Friday, April 18, 2014

When Technology Spreads Slowly

One of the most important issues in thinking about the economic growth potential for the U.S. economy is this question: Has the U.S. economy already seen most of the economic growth that will result from the innovations in information and communication technology, including the web, the cloud, robotics, and so on? Or is the U.S. economy perhaps only a fraction of the way--perhaps even less than halfway--through its adaptation to the potential for productivity gains from these technologies, and thus has stronger prospects for future growth?

When confronted by these kinds of questions, hindsight is clearer than foresight. And among economic historians, it is actually a standard insight that major new technologies can take decades to diffuse through the economy. Rodolfo E. Manuelli and Ananth Seshadri offer an example in "Frictionless Technology Diffusion: The Case of Tractors," which appears in the April 2014 issue of the American Economic Review. (The article is not freely available on-line, but many readers will have access through library subscriptions. Full disclosure: the AER is published by the American Economic Association, which also publishes the Journal of Economic Perspectives, where I work as Managing Editor.) They point out that in simple economic models, a firm just chooses a technology--and can choose a new technology at any time it wants. But in the real world, new technologies often take time to diffuse. They note that surveys of dozens of new technologies often find that it takes 15-30 years for a new technology to go from 10% to 90% of the potential market. But some major inventions take longer.

Here's how the tractor slowly displaced horses and mules in the U.S. agricultural sector from 1910 to 1960. Horses and mules, shown by the black dashed line and measured on the right-hand axis, declined from about 26 million in 1920 to about 3 million by 1960. Conversely, the number of tractors, shown  by the blue solid line, rose from essentially zero in 1910 to 4.5 million by 1960.



What factors might explain why it would take a half-century for tractors to spread? Lots of answers have been proposed: farmers needed time and experience to learn about the new technology; older farmers preferred not to learn, but gradually died off; some farmers didn't have large enough farms to make tractors economically viable; some farmers didn't have the financial ability to invest in a tractor; there was a lack of information about the benefits of tractors; established interests like the horse and mule industry pushed back against tractors where possible. Manuelli and Seshadri offer another explanation: During much of this time, the quality of tractors was continually improving, and also during the earlier part of this time period (like the Great Depression) wages for farm workers were not rising by much. Thus, it made some sense for a number of farmers to avoid buying the early generations of tractors. Let someone else work out the kinks! But as the quality of tractors improved and wages of farmworkers rose, investing in a tractor began to look like a better and better deal.
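
Their argument is, at bottom, a timing calculation: a tractor pays for itself out of the labor it replaces, so when tractor quality is still improving and farm wages are low, waiting can be the sensible choice. A back-of-the-envelope sketch of that logic, with made-up numbers rather than anything from the paper, might look like this:

```python
def adopt_payoff(tractor_price, labor_saved_hours, wage, years=10, discount_rate=0.06):
    """Net present value of buying a tractor: discounted wage savings minus price.
    (Illustrative only -- not Manuelli and Seshadri's model.)"""
    savings = sum(labor_saved_hours * wage / (1 + discount_rate) ** t
                  for t in range(1, years + 1))
    return savings - tractor_price

# Early period: crude tractors replace little labor, and farm wages are low.
early = adopt_payoff(tractor_price=1000, labor_saved_hours=500, wage=0.20)
# Later period: better tractors replace more labor, and wages have risen.
later = adopt_payoff(tractor_price=1000, labor_saved_hours=1500, wage=0.50)

print(f"NPV of adopting early: ${early:.0f}; NPV of adopting later: ${later:.0f}")
# A negative payoff early and a positive one later is consistent with slow diffusion.
```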

My own personal favorite example of the slow diffusion of technology was laid out by Paul David in "Computer and Dynamo: The Modern Productivity Paradox in a Not-too-Distant Mirror," which appeared in a 1991 OECD book called Technology and Productivity: The Challenge for Economic Policy. At the time David was writing, the U.S. was still mired in a productivity slowdown that had started in the 1970s. However, there had clearly been a lot of computerization during that time, leading to a much-repeated comment from Robert Solow: "You can see the computer age everywhere but in the productivity statistics." David harked back to the historical example of using dynamos to produce electricity, explaining that this innovation was around for decades, sometimes at what seemed to be very large scale, before it showed up in productivity gains.

Dynamos had been producing electricity that was used for illumination since the 1870s. This technology was well-known enough that the Paris Exhibition of 1900 included many examples of electrical machinery, run from the power generated by dynamos that were 40 feet tall. But the Paris Exhibition also used electric light on a more widespread basis in public spaces than ever before. David wrote: "Although Europeans already knew of electric lighting for decades, never before Paris 1900 had it been used to illuminate a whole city--in such a way that outdoor festivals could continue into the night."

However, despite the demonstrated technological capabilities of generating and using electricity, and what seemed like a strong array of technological and scientific breakthroughs, productivity growth in the U.S. and the UK economies was actually relatively slow for about two decades after 1890. It's not until the 1920s that productivity growth based on electrification really took off. In retrospect, the reasons why are clear enough. Although the technology was already well-known, it took time for electrification to become widespread. Here's one figure showing diffusion of electrification in the household sector, and another showing the industrial sector. You could illuminate Paris with electrical light in 1900, but most places in the US didn't have access to electricity then.


But it wasn't just the spread of electricity. It was also the changes that industry and households needed to make to take advantage of it. Factories had long run on a "group drive" principle, where a single source of power (like water power or steam engines) powered everything through a series of gears. A "group drive" arrangement set constraints on the location of the factory and the organization of the machines. Electrification made "unit drive" possible, where factories had much more freedom to choose their location and set up their machines, but it took time and learning to figure out the best ways of doing this. More broadly, electricity changed everything from the lighting in factories to the fire safety, along with changes in the ability to develop new chemical and heating processes, and much more. For US households, it took time--really up into the 1920s--until they had both a source of electricity and also a supply of new household appliances like the vacuum cleaner, radio, washing machines, dishwasher, and all the changes of lifestyle that came with reliable indoor electric light.

In the mid-1990s, several years after Paul David's essay was published, a US productivity resurgence rooted in making and using information and communications technology did occur. It didn't happen on the time and schedule that many had been expecting. But as David wrote, many people "lose a proper sense of the complexity and historical contingency of the processes involved in technological change and the entanglement of the latter with economic, social, political, and legal transformations. There is no automaticity in the implementation of a new technological paradigm, such as that which we presently discern is emerging from the confluence of advances in computer and communications technologies."

In my own mind, examples like the slow spread of the tractor and electrification suggest the possibility that we may be only a moderate portion of the way through the social gains from the information and communications technology revolution. One of the reasons that tractors spread slowly was that the capabilities of tractors were steadily rising, which made them more attractive over time. In a much more extreme way, the power of information and computing technology continues to rise, which keeps opening new horizons of potential uses and applications. One of the reasons that electrification spread slowly is that it took time for producers to rethink and revise their processes in a fundamental way, time for the spread and power of electricity to increase, and time for the invention and spread of household appliances related to electricity. In a similar way, my sense is that many firms are still very much in the process of rethinking and revising their processes in response to the developments in information and communications technology, the capabilities of that technology (like faster wireless speeds and computational power) continue to evolve, and the range of new household products using that technology (in areas from automated homes to entertainment to driverless cars and robotics) continues to expand.

Ultimately, of course, many of us are a little schizophrenic about the future of technological change. Some days we worry that technological change will be too slow, and that as a result the U.S. economy is headed for a future of slow growth and a stagnant standard of living. Other days we worry that technological change will be so rapid as to lead to massive disruption of jobs and workplaces across the economy. It is unlikely that both of these fears will come true! On my optimistic days, I hope that a flexible society and economy can find ways to adapt to an ongoing pattern of robust technological change and economic growth.

Thursday, April 17, 2014

What Happened to the Great Moderation?

In the 1990s and into the early years of the 2000s, it was common to hear economists speak of a "Great Moderation" in the U.S. economy. After the economic convulsions of the 1970s and early 1980s, in particular, the path of the U.S. economy seemed to have smoothed. To be sure, there was an 8-month recession in 1990-91, and another 8-month recession in 2001. But both recessions were fairly mild: unemployment topped out at 7.8% in the aftermath of the 1990-91 recession, and reached only 6.3% in the aftermath of the 2001 recession. And recessions seemed scarcer: the average length of an economic upswing since World War II has been 58 months, but the upswing before the 1990-91 recession was 92 months, and the upswing before the 2001 recession was 120 months.

Of course, after 2007 when the Great Recession had crashed the party, talk of a Great Moderation seemed disconnected from reality. Jason Furman, chair of President Obama's Council of Economic Advisers, has taken on the question of "Whatever Happened to the Great Moderation?" in an April 10 speech.

Furman makes the interesting point that even now, including the Great Recession and its aftereffects in the data, the level of short-term volatility in economic statistics like quarterly GDP or monthly job growth seems to be lower than it was from the 1950s to the 1970s, not only in the United States but also in other high-income countries. (Of course, "less volatile" doesn't mean "healthy growth rate.")

Peering into the inner workings of the US economy, Steven J. Davis and James A. Kahn provided an overview of the evidence in the Fall 2008 issue of the Journal of Economic Perspectives in "Interpreting the Great Moderation: Changes in the Volatility of Economic Activity at the Macro and Micro Levels." (The article, like all articles in JEP, is freely available on-line courtesy of the American Economic Association. Full disclosure: I've been Managing Editor of the journal since its inception in 1987.) They find that the drop in short-term volatility of GDP can largely be traced to a drop in the volatility of production of durable goods. The volatility of production of nondurable goods falls only a little, and production of services was never that volatile to begin with. The volatility of inventories declined substantially, too.

Furman points out an intriguing pattern here: "From 1960 to 1984, inventories were quite volatile, and were also procyclical, meaning that when sales increased, inventories also increased, further contributing to the volatility of production. During the post-1984 Great Moderation period, inventory investment itself became much less volatile, and the previous relationship between inventories and sales reversed, so that the two became negatively correlated. Focusing specifically on durable goods, the change in the covariance between inventories and sales accounts for nearly half of the decline in the variance in durable goods output. However, including the Great Recession, it appears that the relationship between output, sales and inventories partially reverted to the pre-Great Moderation pattern. The covariance of inventories and sales turned positive again, suggesting that improved inventory management was not enough to cushion the massive blow of the Great Recession, and in fact exacerbated it." Furman is careful to note that the argument that inventories have become procyclical is based on only a few years of data.  But if the pattern continues, it will need exploring and explaining.
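
The accounting behind Furman's covariance point is the standard variance identity for output, which equals final sales plus inventory investment. Writing it out (in my notation, not Furman's), with Y for output, S for final sales, and ΔI for inventory investment:

\[
Y = S + \Delta I
\quad\Longrightarrow\quad
\mathrm{Var}(Y) = \mathrm{Var}(S) + \mathrm{Var}(\Delta I) + 2\,\mathrm{Cov}(S, \Delta I).
\]

When the covariance term is negative, inventories buffer swings in sales and output is smoother than sales; when it turns positive, as Furman suggests happened during the Great Recession, inventory movements amplify those swings instead.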

Another pattern here is that consumption patterns have continued to show less short-term volatility, even through the Great Recession. Furman writes: "Disaggregating the GDP data, the reduced volatility of consumption is one of the major sources of the Great Moderation—and this reduced volatility has continued to hold up during and after the Great Recession, especially in consumer durables. The continued stability in consumption stands in contrast to other components of GDP like business fixed investment, which became less volatile during the initial Great Moderation but has since at least partially reverted to its earlier volatility."

Improvements in macroeconomic policy offer another potential explanation for the Great Moderation: that is, monetary policy was less disruptive after the mid-1980s than it had been in, say, the 1970s. The use of fiscal policy to stimulate the economy during downturns arguably became more purposeful and effective. Indeed, as Furman points out, one can make a case that monetary and fiscal policies helped to prevent the Great Recession from being even greater (citations omitted here, and throughout):

"Improvements in monetary and fiscal policy have likely contributed to the patterns in the high-frequency data originally identified as the Great Moderation, although one could debate the share of the credit they deserve. I believe policy steps have also played a critical role at lower frequencies as well, with the best example being the Great Recession itself, which in many ways started off looking like it could be as bad or worse than the Great Depression. To appreciate this point, consider that the plunge in stock prices in late 2008 proved similar to what occurred in late 1929, but was compounded by sharper home price declines, ultimately leading to a drop in overall household wealth that was substantially greater than the loss in wealth at the outset of the Great Recession. . . .Moreover, Alan Greenspan (2013) has argued that short-term credit markets froze more severely in 2008 than in 1929, and to find a comparable episode in this regard one has to go back to the panic of 1907. However, in large part because of an aggressive policy response, the unemployment rate increased 5 percentage points, compared to a more than 20 percentage point increase in the Great Depression from 1929 to 1934. And real GDP per working age population returned to its pre-recession peak more quickly in the United States than in other countries that also experienced systemic crises in 2007-08."
The pattern that emerges from Furman's discussion is that the Great Moderation was quite real as measured by smaller short-term fluctuations in GDP, employment, consumption, production of durable goods, and inventories. Even more surprisingly, many of these factors (although not inventories) have continued to show lower short-term volatility in the aftermath of the Great Recession. But of course, this lower level of short-term quarter-to-quarter or month-to-month economic fluctuation did not protect the economy from the enormous economic blow of the Great Recession, which lasted 18 months, spiked the unemployment rate from under 5% in mid-2007 to 10% in October 2009, and has since been followed by years of frustratingly sluggish recovery (without a lot of short-term volatility).

One possible interpretation here is that the Great Moderation is real, and the Great Recession was a sort of perfect storm, best understood as a one-off divergence from the long-run trend. Another possible interpretation is that when short-term volatility is lower and recessions become milder and less common, firms and households become less wary of risk and more willing to take chances--which in turn creates the underlying conditions for a deeper recession. And yet another interpretation is that while the old vulnerabilities that led to the economic volatility of smokestack industries back in the 1950s and 1960s have declined, the U.S. and world economies now face some new vulnerabilities due to changes in technology, globalization, and the financial sector. In this view, the Great Recession was only a first foretaste of the kinds of disruptive interactions that can occur in this new economic configuration.


Wednesday, April 16, 2014

Demand for Sand

These are boom times for the sand industry, which is actually a mixed blessing, resulting in high prices and even environmental risks. The Global Environmental Alert Service of the United Nations Environment Programme tells some of the story in a March 2014 report, "Sand, rarer than one thinks." As the report notes (citations omitted for readability): "Globally, between 47 and 59 billion tonnes of material is mined every year, of which  sand and gravel, hereafter known as aggregates, account for both the largest share (from 68% to 85%) and the fastest extraction increase ..."

To get a sense of the volume here, consider this comparison: "A conservative estimate for the world consumption of aggregates exceeds 40 billion tonnes a year. This is twice the yearly amount of sediment carried by all of the rivers of the world, making humankind the largest of the planet’s transforming agent with respect to aggregates ..." Or to look at it another way, one major use of aggregates like sand and gravel is for concrete. "Thus, the world’s use of aggregates for concrete can be estimated at 25.9 billion to 29.6 billion tonnes a year for 2012 alone. This represents enough concrete to build a wall 27 metres high by 27 metres wide around the equator." Sand and gravel are also used in land reclamation, shoreline developments, road embankments, and asphalt, and by industries including glass, electronics, and aeronautics.

Dredging sand and gravel from oceans and rivers causes environmental disruption, which can in some cases become severe, leading to problems with erosion, greater vulnerability to storm surges, and destruction of habitat for plants and animals. "Lake Poyang, the largest freshwater lake in China, is a distinctive site for biodiversity of international importance, including a Ramsar Wetland. It is also the largest source of sand in China and, with a conservative estimate of 236 million cubic metres a year of sand extraction, may be the largest sand extraction site in the world. ... Sand mining has led to deepening and widening of the Lake Poyang channel and an increase in water discharge into the Yangtze River. This may have influenced the lowering of the lake’s water levels, which reached a historically low level in 2008 ..." (The Ramsar Convention is the nickname for the Convention on Wetlands of International Importance, which is an intergovernmental treaty for protection of key wetlands.) In general, economic growth in China has been one of the major reasons for the expansion of sand and gravel mining in the last decade.

Or to choose a more extreme case: "In some extreme cases, the mining of marine aggregates has changed international boundaries, such as through the disappearance of sand islands in Indonesia."
The qualities of sand and gravel matter for their eventual use. For example, "If the sodium is not removed from marine aggregate, a structure built with it might collapse after few decades due to corrosion of its metal structures. Most sand from deserts cannot be used for concrete and land reclaiming, as the wind erosion process forms round grains that do not bind well."

With a combination of research and development into alternative materials, along with different materials and methods for landfill and construction, the use of sand and gravel could be reduced. Some possible alternative materials for various uses include quarry dust, incinerator ash, recycled concrete and glass, and perhaps desert sand, if ways can be found to use it.

According to data from the U.S. Geological Survey, the U.S. economy used about 46 million tons of sand and gravel for industrial purposes in 2012, which represents nearly a doubling since 2003. In addition, the price of sand and gravel for industrial use rose from $18.30/ton in 2003 to $52.80/ton in 2012. Essentially, this kind of sand has a high silicon dioxide content, and a large portion of this run-up in demand is because this kind of sand is used in hydraulic fracturing, which now consumes about 62% of this kind of sand in the U.S.

Use of sand and gravel for construction purposes was much greater in the U.S. economy, about 842 million tons in 2012. However, this was down from about 1,200 million tons per year during the housing and construction boom of the years leading up to the Great Recession. The USGS reports: "It is estimated that about 44% of construction sand and gravel was used as concrete aggregates; 25% for road base and coverings and road stabilization; 13% as asphaltic concrete aggregates and other bituminous mixtures; 12% as construction fill; 1% each for concrete products, such as blocks, bricks, and pipes; plaster and gunite sands; and snow and ice control; and the remaining 3% for filtration, golf courses, railroad ballast, roofing granules, and other miscellaneous uses."

With all due apologies to the good people and productive firms working in this industry, it's a little difficult for me to imagine a more boring product than sand and gravel. As a first step toward getting out of my ivory tower and getting over this prejudice, I close here with some comments from a 1999 report by the U.S. Geological Survey, "Natural Aggregates—Foundation of America’s Future."

"Natural aggregates, which consist of crushed stone and sand and gravel, are among the most abundant natural resources and a major basic raw material used by construction, agriculture, and industries employing complex chemical and metallurgical processes. Despite the low value of the basic products, natural aggregates are a major contributor to and an indicator of the economic well-being of the Nation. Aggregates have an amazing variety of uses. Imagine our lives without roads, bridges, streets, bricks, concrete, wallboard, and roofing tiles or without paint, glass, plastics, and medicine. Every small town or big city and every road connecting them were built and are maintained with aggregates. More than 90 percent of asphalt pavements and 80 percent of concrete are aggregates. Paint, paper, plastics, and glass also require sand, gravel, or crushed stone as a constituent. When ground into powder, limestone is used as an important mineral supplement in agriculture, medicine, and household products. ... On the basis of either weight or volume, aggregates accounted for more than two-thirds of about 3.3 billion metric tons of nonfuel minerals produced in the United States in 1996."