Friday, October 7, 2011

America as Conventional Energy Powerhouse?!?

I've been trying to wrap my mind around the issues and possibilities created by the new technologies for extracting oil and gas from North America. Amy Myers Jaffe, an energy expert who runs the Baker Institute Energy Forum at Rice University, has a nice provocative short article in the most recent issue of Foreign Policy magazine called  "The Americas, Not the Middle East, Will Be the World Capital of Energy." Jaffe writes: 
"By the 2020s, the capital of energy will likely have shifted back to the Western Hemisphere, where it was prior to the ascendancy of Middle Eastern megasuppliers such as Saudi Arabia and Kuwait in the 1960s. The reasons for this shift are partly technological and partly political. Geologists have long known that the Americas are home to plentiful hydrocarbons trapped in hard-to-reach offshore deposits, on-land shale rock, oil sands, and heavy oil formations. The U.S. endowment of unconventional oil is more than 2 trillion barrels, with another 2.4 trillion in Canada and 2 trillion-plus in South America -- compared with conventional Middle Eastern and North African oil resources of 1.2 trillion. The problem was always how to unlock them economically.

But since the early 2000s, the energy industry has largely solved that problem. With the help of horizontal drilling and other innovations, shale gas production in the United States has skyrocketed from virtually nothing to 15 to 20 percent of the U.S. natural gas supply in less than a decade. By 2040, it could account for more than half of it. ... Meanwhile, onshore oil production in the United States, condemned to predictions of inexorable decline by analysts for two decades, is about to stage an unexpected comeback."
Jaffe's article sent me back to the sober-sided Annual Energy Outlook 2011 published in April by the U.S. Energy Information Administration. Here's a figure showing how the new "enhanced oil recovery" techniques are expected to raise oil production in the lower 48 states in a way that offsets declining production from Alaska. The report says: "Rising world oil prices, growing shale oil resources (i.e., liquid oil embedded in non-porous shale rock), and increased production using EOR [enhanced oil-recovery] techniques contribute to increased domestic crude oil production from 2009 to 2035 in the AEO2011 Reference case (Figure 95). The Bakken shale oil formation contributes to growth in crude oil production in the Rocky Mountain Region, and growth in the Gulf Coast region is spurred by the resources in the Eagle Ford and Austin Chalk formations. Some of the decline in oil production in the Southwest region is offset by production coming from the Avalon shale formation."


And here's a figure showing that the share of U.S. oil consumption that is imported peaked in 2005, and is expected to fall over the next couple of decades. The report says: "[W]hile consumption of liquid fuels increases steadily in the Reference case from 2009 to 2035, the growth in demand is met by domestic production. The net import share of U.S. liquid fuels consumption fell from 60 percent in 2005 to 52 percent in 2009. The net import share continues to decline in the Reference case, to 42 percent in 2035 ..."



Of course, there are potential environmental issues: the risks posed by these technologies for extracting oil, as well as the conventional pollutants and carbon dioxide emitted by burning these fossil fuels. But for the next few decades, substantial quantities of fossil fuels will continue to be used, and the carbon dioxide produced will be essentially the same regardless of where those fuels are produced. Thus, if the local environmental issues can be worked out--that is, the issues about extracting these resources and about conventional pollutants--then there is no inconsistency in moving toward fossil fuels produced and refined by U.S. workers, rather than imported fossil fuels produced and refined by foreign workers, even as we continue to seek ways of reducing global carbon emissions.

Thursday, October 6, 2011

Why Didn't Dot-Com Crash Hurt Like Housing Crash Did?

The U.S. economy suffered the end of the dot-com bubble of the late 1990s, but had only a mild recession lasting eight months in 2001. When the housing bubble popped, by contrast, the U.S. economy had a brutally deep 18-month recession from December 2007 to June 2009, followed by a Long Slump of a recovery. Why did the bursting of the housing bubble hurt so much more?

The magnitudes of the two events are roughly similar. The value of corporate equities owned by households fell from $9 trillion in 1999 to $4.1 trillion in the third quarter of 2002, according to stats in Table B.100 of the Federal Reserve's Flow of Funds Accounts from September 2003. The value of household real estate dropped from $22.7 trillion in 2006 to $17.1 trillion by 2009, and has since fallen to $16.2 trillion as of the second quarter of 2011, according to stats in Table B.100 of the latest Flow of Funds Accounts released by the Federal Reserve.
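For the arithmetic-minded, here's a quick back-of-the-envelope comparison of the two declines, using only the Flow of Funds figures just quoted (a rough sketch in Python; no other data is involved):

```python
# Back-of-the-envelope comparison of the two wealth declines,
# using the Flow of Funds figures quoted above (trillions of dollars).
dotcom_peak, dotcom_trough = 9.0, 4.1      # corporate equities, 1999 -> 2002Q3
housing_peak, housing_trough = 22.7, 16.2  # household real estate, 2006 -> 2011Q2

dotcom_loss = dotcom_peak - dotcom_trough
housing_loss = housing_peak - housing_trough

print(f"Dot-com equity loss: ${dotcom_loss:.1f} trillion "
      f"({dotcom_loss / dotcom_peak:.0%} of peak value)")
print(f"Housing wealth loss: ${housing_loss:.1f} trillion "
      f"({housing_loss / housing_peak:.0%} of peak value)")
```

Measured in dollars, the housing loss of roughly $6.5 trillion is the larger of the two; measured against the peak value of the asset class, the dot-com crash was actually the steeper decline. Either way, the raw magnitudes are comparable.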


The answer is that when the dot-com boom collapsed, the lost value was in stock prices. Those who bought stocks knew in advance that stock prices could rise and fall. The losses for pension funds and retirement accounts were large, but they didn't cause widespread household or firm bankruptcies. However, when the housing price bubble burst, the losses were in the form of debts that couldn't be paid off. People couldn't pay their mortgages. Banks and financial institutions which were holding dicey mortgage-backed securities faced huge losses, and a financial crisis resulted. If the dot-com boom had been financed by enormous waves of household and business borrowing, and that borrowing had been turned into securities widely held by banks, then the bursting of the dot-com boom would have been much more economically destructive.

The key difference here is between equity and debt. The value of equity is contingent on what happens in the stock market, and so can rise or fall. But debt is typically not contingent on how other values change: you borrowed it, you need to pay it on schedule. Otherwise, defaults, foreclosures, bankruptcies, and financial crisis can result. Kenneth Rogoff thinks through many of these issues in the 2011 Martin Feldstein Lecture to the National Bureau of Economic Research: "Sovereign Debt in the Second Great Contraction: Is This Time Different?"



Rogoff focuses on this difference between non-contingent debt and contingent equity: "[E]ven before the onset of the Second Great Contraction, it should have bothered macro-theorists more that such a large fraction of world capital markets consists of non-contingent debt, including public and private bonds, as well as bank credit. It is difficult to pin down global aggregates, but a recent McKinsey study found that at the end of 2008, the equity market accounted for roughly $34 trillion out of $178 trillion in global assets, with government debt, private credit, and banking accounting for the rest. This figure, of course, is exaggerated by the global stock market crash that occurred after the collapse of Lehman Brothers in 2008, but even at the pre-crisis equity level of $54 trillion, equity markets represented less than one third of the total. True, there is an entire zoology of derivative markets that makes some of the debt contingent, but incorporating these would not dramatically change the basic point."
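Rogoff's fractions are easy to check. Here's a small sketch using only the McKinsey figures he quotes; since he gives no pre-crisis total for world capital markets, the second calculation makes the rough assumption that the non-equity side was the same before the crisis as at end-2008:

```python
# Rogoff's equity-share arithmetic, from the McKinsey figures he quotes
# (end-2008, trillions of dollars).
equity_2008 = 34.0
total_2008 = 178.0
nonequity = total_2008 - equity_2008  # government debt, private credit, banking

print(f"Equity share of world capital markets, end-2008: "
      f"{equity_2008 / total_2008:.0%}")

# Pre-crisis equity was $54 trillion; no pre-crisis total is given, so as a
# rough assumption we hold the non-equity side fixed at its end-2008 level.
equity_pre = 54.0
print(f"Equity share, pre-crisis (assumed non-equity total): "
      f"{equity_pre / (equity_pre + nonequity):.0%}")
```

Both numbers come out comfortably under one third, which is Rogoff's point: even before the crash, the contingent slice of world capital markets was the minority.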

As Rogoff points out, there have been proposals by Robert Shiller and others that when governments borrow, they should do so in a more contingent form--for example, perhaps the debt payments could adjust automatically if their GDP growth is faster or slower than expected. But in practice, given how governments can play games with their own economic statistics, such an approach has had limited appeal. In general, the clear promise to repay debt is easier to monitor and to enforce than a payment schedule linked to some other variable. But this widespread use of non-contingent debt, which in turn is subject to a wide array of poorly-understood risks, contributes to making the world economy a fragile place when bad news arises.







Wednesday, October 5, 2011

When Milton Friedman Blessed Foreign Exchange Futures Markets

Leo Melamed tells the story of “Milton Friedman’s 1971 Feasibility Paper” in the Fall 2011 issue of the Cato Journal.

“In 1971, as chairman of the Chicago Mercantile Exchange, I had an idea: a futures market in foreign currency. It may sound so obvious today, but at the time the idea was revolutionary. I was acutely aware that futures markets until then were primarily the province of agriculture and—as many claimed—might not be applicable to instruments of finance. Not being an economist, the idea was in need of validation. There was only one person in the world that could satisfy this requisite for me. We went to Milton Friedman. We met for breakfast at the Waldorf Astoria in New York. By then he was already a living legend and I was quite nervous. I asked the great man not to laugh and to tell me whether the idea was “off the wall.” Upon hearing him emphatically respond that the idea was “wonderful,” I had the temerity to ask that he put his answer in writing. He agreed to write a feasibility paper on “The Need for Futures Markets in Currencies,” for the modest stipend of $7,500. It turned out to be a helluva trade.” 

The same issue publishes Friedman’s 1971 paper, “The Need for Futures Markets in Currencies,” for what I think is the first time. Friedman writes: 


"Bretton Woods is now dead. The president’s action on August 15 [1971] in closing the gold window was simply a public announcement of the change that had really occurred when the two-tier system was established in early 1968. ... The U.S. is a natural place for the futures market because the dollar is almost certain to continue to be the major intervention currency for central banks and the major vehicle currency for international transactions. Exchange rates will almost surely continue to be stated in terms of the dollar. In addition, the U.S. has the largest stock in the world of liquid wealth on which the market can draw for support. It has a legal structure and a financial stability that will attract funds from abroad. It has a long tradition of free, open, and fair markets. It is clearly in our national interest that a satisfactory futures market should develop, wherever it may do so, since that would promote U.S. foreign trade and investment. But it is even more in our national interest that it develop here instead of abroad. As Britain demonstrated in the 19th century, financial services of all kinds can be a highly profitable export commodity."






Research and Development Tax Credit

Back in the mid-1980s, when the world was young and I was just leaving economics graduate school, I wrote editorials on economic and environmental issues for the San Jose Mercury News for a couple of years. (At that time, the paper was booming, because in those pre-Internet times, it carried much of the help-wanted advertising for Silicon Valley.) In 1981, Congress had passed a tax credit for research and development, but only on a temporary basis. Remarkably enough, in 2011 the R&D tax credit is still languishing in temporary status, expiring every few years and then being re-authorized; it is currently set to expire at the end of 2011.

The theoretical case for government support of R&D is unchanged over time: new technology provides social benefits that often greatly exceed the private benefits received by the inventor, and so society can in theory be better off by subsidizing such activity. However, two things have changed since I was writing editorials advocating a permanent R&D tax credit back in the mid-1980s. There is now a body of research strongly suggesting that the tax credit is cost-effective at increasing research and development. And much of the rest of the world, agreeing with this research, now offers more aggressive support for industry R&D than does the United States.

Research supporting an R&D tax credit


The R&D Credit Coalition hired Ernst and Young to write a report on the evidence. Unsurprisingly, both given the parade of evidence over the years and the source of the funding (!), the report is called: "The R&D Credit: An effective policy for promoting research spending."  Their overall conclusion is that an R&D tax credit could increase industry R&D spending by 10-20% over the long run, depending on design. Clearly, this isn't a revolutionary change--just a sensible step to take. Here's a useful figure summarizing the results of studies of how an R&D tax credit affects R&D spending.



International Trends

While U.S. policy on an R&D tax credit has been running in place for 30 years, many other countries have embraced such a policy. For example, here is an OECD report from 2008 on the spread of such incentives: "Recent years have seen a shift from direct public funding of business R&D towards indirect funding (Figure 3). In 2005, direct government funds financed on average 7% of business R&D, down from 11% in 1995. In 2008, 21 OECD countries offered tax relief for business R&D, up from 12 in 1995, and most have tended to make it more generous over the years. The growing use of R&D tax credits is partly driven by countries’ efforts to enhance their attractiveness for R&D-related foreign direct investment."

Here is the Figure 3 referred to in the quotation. Cross-country comparisons of tax policy can be hazardous, because the conclusions can depend on just how certain provisions are classified. Nonetheless, it's striking that the U.S. ranks 24th in its tax support for industry R&D of the countries in the figure.








Tuesday, October 4, 2011

Left-Number Bias in Used Car Prices

Left-number bias is when you pay disproportionate attention to the number on the left. It's the reason why you see so many more prices at, say, $69.99 than at $70.01. When buying a used car, left-number bias manifests itself on the odometer: that is, car buyers view the difference between, say, 67,000 and 68,000 miles as of only modest importance, but the difference between 69,000 and 70,000 miles as quite important. Nicola Lacetera, Devin Pope, and Justin Sydnor explore this topic with a data set of 22 million used car transactions in "Heuristic Thinking and Limited Attention in the Car Market." It's NBER Working Paper No. 17030, but these papers are gated unless your institution has a membership.

For a short overview of the paper in the NBER Digest by Lester Picker, see here. I quote from that overview: "[T]he authors document significant price drops at each 10,000-mile threshold from 10,000 to 100,000 miles, ranging from about $150 to $200. For example, cars with odometer values between 79,900 and 79,999 miles, on average, are sold for approximately $210 more than cars with odometer values between 80,000 and 80,100 miles, but for only $10 less than cars with odometer readings between 79,800 and 79,899. The authors also find price drops at 1,000-mile thresholds, but these changes are smaller."

Here is an illustrative figure. The horizontal axis shows miles on the odometer of the used car, rounded down to the nearest 500. The vertical axis shows average sale price. As you would expect, cars with more mileage on average sell for less. But look at what happens at each 10,000-mile level. Instead of the price dropping in a more-or-less smooth line, there is a discrete price drop at each 10,000-mile level, showing the left-number bias at work.
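One simple way to see how such a pattern can arise is a toy version of the limited-attention idea: suppose buyers perceive mileage as a weighted average of the true odometer reading and the reading rounded down to the nearest 10,000 miles. The sketch below is purely illustrative--the base price, per-mile decline, and attention weight are made-up numbers, not estimates from the paper:

```python
import math

# Toy limited-attention pricing: buyers perceive mileage as a mix of the
# true reading and the reading rounded down to the nearest 10,000 miles.
# All parameter values here are illustrative, not taken from the paper.
def perceived_price(miles, base=20_000.0, per_mile=0.10, theta=0.7):
    rounded_down = math.floor(miles / 10_000) * 10_000
    perceived_miles = theta * rounded_down + (1 - theta) * miles
    return base - per_mile * perceived_miles

# Within a 10,000-mile band, the price falls only gently with mileage...
within_band = perceived_price(79_900) - perceived_price(79_999)
# ...but crossing the threshold produces a discrete drop.
across_threshold = perceived_price(79_999) - perceived_price(80_000)

print(f"Drop from 79,900 to 79,999 miles: ${within_band:.2f}")
print(f"Drop from 79,999 to 80,000 miles: ${across_threshold:.2f}")
```

Within a band, price falls only by the per-mile decline applied to the attentive fraction of buyers' perception; crossing a 10,000-mile threshold moves the rounded-down reading all at once, producing the kind of discrete drop visible in the figure.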



Picker's overview in the NBER Digest also says: "This apparent left-digit bias not only influences wholesale prices but also affects supply decisions. If sellers are savvy and are aware of these effects, then they will have an incentive to bring cars to auction before the vehicle's mileage crosses a threshold. Indeed, the authors show that there are large volume spikes in cars before 10,000-mile thresholds."

Here's a figure showing this effect on the supply side. Again, the horizontal axis shows mileage on the odometers of used cars. This time, the vertical axis shows the volume of cars sold with that mileage. As one would expect, relatively few cars are sold with extremely low mileage. But look at the line at the 10,000-mile intervals from 60,000 to 100,000. There is an extra little blip of more cars being sold just before they cross over into the next mileage category. This is smart sellers taking advantage of the left-number bias on the part of buyers.









More Herbert Hoover: Father of the New Deal

Last week I pointed out in Herbert Hoover, Deficit Spender that, contrary to a widespread belief, Hoover didn't cut spending or seek to balance the budget. Instead, Franklin Roosevelt ran in 1932 on a promise to balance the budget, a promise he abandoned a few months after taking office. The next day Steven Horwitz published a Cato Briefing Paper called "Herbert Hoover: Father of the New Deal," with a broader treatment of the actual Herbert Hoover. Here are some tastes of the Horwitz argument (footnotes and citations omitted):

"The version of Hoover presented in the media’s narrative of Hoover as champion of laissez faire bears little resemblance to the details of Hoover’s life, the ideas he held, and the policies he adopted as president. ..."

"Hoover had long believed that it was necessary to `transform the structure of the U.S. economy from one of laissez-faire to one of voluntary cooperation.' In her biography Herbert Hoover: Forgotten Progressive Joan Hoff Wilson summarizes Hoover’s economic views this way:

Where the classical economists like Adam Smith had argued for uncontrolled competition between independent  economic units guided only by the invisible hand of supply and demand, he talked about voluntary national economic planning arising from cooperation between business interests and the government. . . . Instead of negative government action in times of depression, he advocated the expansion of public works, avoidance of wage cuts, increased rather than decreased production—measures that would expand rather than contract purchasing power.

Hoover was also a long-time critic of international free trade, and favored `increased inheritance taxes, public dams, and, significantly, government regulation of the stock market.'”

Horwitz provides chapter and verse on how Hoover, as president, increased spending and intervened in the economy. Here's an editorial cartoon from 1930 criticizing Hoover for his flood of increased spending. 
As Horwitz points out, leading intellectuals of the Roosevelt administration recognized that Hoover had set the stage for their policies:

"Rexford G. Tugwell, one of the academics at the center of FDR’s `brains trust' said: `When it was all over, I once made a list of New Deal ventures begun during Hoover’s years as Secretary of Commerce and then as president. . . . The New Deal owed much to what he had begun.' Another member of the brains trust, Raymond Moley, wrote of that period: 
When we all burst into Washington . . . we found every essential idea [of the New Deal] enacted in the 100-day Congress in the Hoover administration itself. The essentials of the NRA [National Recovery Administration], the PWA [Public Works Administration], the emergency relief setup were all there. Even the AAA [Agricultural Adjustment Act] was known to the Department of Agriculture. Only the TVA and the Securities Act was drawn from other sources. The RFC [Reconstruction Finance Corporation], probably the greatest recovery agency, was of course a Hoover measure, passed long before the inauguration.
Late in both of their lives, Tugwell wrote to Moley and said of Hoover, “we were too hard on a man who really invented most of the devices we used."

Horwitz argues that Hoover's economic policies were deeply misguided. My point here is not to endorse his evaluation of Hoover's policies (I think some were more justifiable than others), but just to point out that as a matter of historical fact, it is incorrect to think of Hoover as a radical free marketer and budget balancer whose policies were overturned by FDR. Indeed, as Horwitz points out, FDR and others saw Hoover during the 1920s as a possible presidential candidate for the Democrats!

Thanks to Arnold Kling at the EconLog website for the pointer.

  



Monday, October 3, 2011

Low-Cost Education Reforms: Later Starts, K-8, and Focusing Teachers

Discussions of education reform often seem to collide with a budgetary brick wall. Longer school year? Better teacher pay? Longer school day? What school district can afford it? Thus, the discussion paper by Brian A. Jacob and Jonah Rockoff for the Hamilton Project is a breath of fresh air, because they propose three low-cost methods of reorganizing existing school resources in ways that research suggests will improve student performance. Their overview (citations and footnotes dropped throughout):

"In this paper, we describe three organizational reforms that recent evidence suggests have the potential to increase K–12 student performance at modest costs: (1) Starting school later in the day for middle and high school students; (2) Shifting from a system with separate elementary and middle schools to one with schools that serve students in kindergarten through grade eight; (3) Managing teacher assignments with an eye toward maximizing student achievement (e.g. allowing teachers to gain experience by teaching the same grade level for multiple years or having teachers specializing in the subject where they appear most effective). We conservatively estimate that the ratio of benefits to costs is 9 to 1 for later school start times and 40 to 1 for middle school reform. A precise benefit-cost calculation is not feasible for the set of teacher assignment reforms we describe, but we argue that the cost of such proposals is likely to be quite small relative to the benefits for students."

On starting the school day later

They write: "The earliest school start times are associated with annual reductions in student performance of roughly 0.1 standard deviations for disadvantaged students, equivalent to replacing an average teacher with a teacher at the sixteenth percentile in terms of effectiveness. ... According to the National Household Education Survey, roughly half of middle schools start at or before 8:00 a.m., and fewer than 25 percent start at 8:30 a.m. or later. High schools start even earlier. Wolfson and Carskadon (2005), surveying a random sample of public high schools, found that more than half of the schools reported start times earlier than 8:00 a.m."

As the authors point out, there are two main tradeoffs here. One is that starting later might require some school districts to use more buses, rather than using the same buses in series every morning for high school, middle school, and elementary school. The estimated cost of the additional transportation is $150 per student, which is a very low cost for this much educational gain. The other main concern is after-school activities, especially sports and work. One possible resolution is for schools to offer more flexibility during the last school period of the day for students who need it for these reasons.

On K-8 schools

They write: "While the vast majority of American public school students in Grades 9 through 12 attend a traditional high school, a wide variety of configurations are used to divide students in the primary grades (K–8) across school buildings. Although there is likely no single configuration that is optimal for every school district nationwide, it is unlikely that the hodgepodge we see today is based on a careful analysis of how grade configuration impacts student achievement. In particular, recent evidence suggests that districts should address problems in middle schools (Grades 6 to 8) and junior high schools (Grades 7 and 8), particularly in the year of entry, or eliminate the use of these types of schools altogether. ... Middle and junior high schools were not always part of the educational landscape in America. ... These types of schools have never become popular in the private sector, where K–8 or K–12 institutions continue to be the most common grade configuration. If middle and junior high schools are effective organizational forms, it is curious that the private sector continues to eschew them. ..."

"The clearest and most worrisome evidence on middle and junior high schools comes from two recent studies, one in New York City (Rockoff and Lockwood 2010) and the other in Florida (Schwerdt and West 2011). Both are statistical analyses of large administrative databases that track student achievement over the majority of the primary grades and, in the Florida case, into high school. The clear result of both of these studies is that students who move to a middle or junior high school in Grades 6 or 7 experience a sharp decrease in their learning trajectories and continue to struggle, relative to their peers who attended K–8 schools, through Grade 8 and into high school. ..."





As the authors point out, some districts would find it less costly to move to a K-8 configuration than others, and this may be a suggestion to be plucked when the time is ripe. But they add: "Even if changes in grade configuration are not an option, the research discussed above suggests it is imperative that districts devote resources to eliminating the drop in achievement associated with middle schools."

On focusing teachers

They write: "Recent research suggests that elementary teacher grade assignments vary considerably from year to year, even among the set of teachers who maintain the same certification and continue teaching in the same school. In New York City, for example, roughly 38 percent of teachers switch grades from one year to the next. An even larger fraction of teachers switch grades over two or three years. ... The rate of grade switching among upper elementary teachers in Los Angeles, Miami, and Gwinnett County, Georgia, are all greater than 20 percent. ..."

"A recent study of fourth- and fifth-grade teachers in North Carolina found a correlation of roughly 0.7 between measures of teacher effectiveness in English and math. However, even with this relatively high correlation, the authors of this study calculate that shifting teacher assignments so that each teacher taught only the subject in which she or he was most effective would lead to substantial increases in student achievement. Indeed, they estimate the benefits of this complete specialization would be larger than the benefit of firing the bottom 10 percent of teachers (based on student test scores). Of course, complete teacher specialization by subject would require large structural changes in the organization of schooling."

Again, the authors are quick to point out possible trade-offs here. Sometimes students benefit when teachers switch. But principals and others who set teaching assignments should stay highly aware that specific experience in teaching a certain grade and subject does tend to make the teacher better at that focused task. Striving to make switching less common, and instead to have teachers develop deeper expertise in a grade and/or a particular subject, would be a useful step.