Thursday, October 13, 2011

Global Supply Chains: U.S. ITC #2

The U.S. International Trade Commission has published the 7th edition of its occasional report: "The Economic Effects of Significant U.S. Import Restraints." The report comes in two main parts. The first part, discussed in an earlier post here, is an overview and status report on the main U.S. barriers. The second part concerns the trend toward longer global supply chains. Here are some highlights (with footnotes and citations expunged for readability throughout):

Description and illustration of a basic global supply chain

"For example, a domestic firm might provide the R&D and design of a product, and produce the initial intermediate inputs using local raw materials, as in figure 3.1. Then these intermediate inputs would be exported to a second country, where a firm would use them to produce a semifinished product. That firm would then export the semifinished good to a third country, where the final good is assembled and packaged. The third country would then export the good back to the domestic firm, which would oversee the marketing, retailing, and delivery of the product domestically and abroad. Supply chains like these require extensive organizational oversight. They also typically involve heavy reliance on telecommunications to ensure that different stages of the product are made to specification and on logistics to coordinate the movement of material across many firms and countries. As the case studies later in this chapter illustrate, global supply chains can involve complex interconnections between different tasks, as well as between domestic and foreign firms carrying out those tasks. This complexity is managed by lead firms in the chain that oversee production and make other key decisions ..."


What factors are driving longer global supply chains?
"A key force behind the widespread development of global supply chains has been technological change. Over time, technological change has allowed more production processes to be fragmented—split into stages or tasks—and those stages or tasks to be carried out in new, often distant locations. For example, in the 1970s some apparel production for the U.S. market was offshored in nearby countries in the Caribbean region. But advances in telecommunications and in transport have allowed the industry to source from distant Asian suppliers and still meet the time-sensitive demands of the industry. ... Two other important drivers in the development of global chains are the extensive global trade liberalization (e.g., reduction in tariff and nontariff barriers) and falling transportation costs that have occurred in the past quarter-century. Because goods and services produced by global supply chains typically cross borders multiple times, they pass through multiple customs regimes and are affected by multiple tariffs and nontariff barriers. Thus, the benefits of trade liberalization can also be multiplied for goods and services produced in global supply chains."

Expansion of the processing trade

"Numerous countries have set up programs to encourage processing trade, which allow duty-free imports of components used in products made solely for export. Using data on these programs provides a more direct measure of global supply chain trade, since all of the trade in the components and products affected by the programs moves through a supply chain. China and Mexico are the two largest users of export processing regimes in the developing world, and together account for about 80–85 percent of such exports worldwide. Chinese trade grew by more than 800 percent between 1995 and 2008—and about half of this growth is attributable to Chinese processing trade. Mexico is also heavily reliant on processing trade; processing imports represented over 50 percent of total Mexican imports in 2006."

A Cautionary Story for the U.S. in Global Supply Chains: Flat-Panel Display Televisions


"There are two key components for FPD [flat-panel display] televisions, the display panel and the chipset, which together account for 94 percent of the costs. The global supply chain for FPD televisions uses glass produced in Japan and Korea; displays incorporating the glass, assembled in Japan, Korea, and Taiwan; and semiconductor chip sets designed in the United States and elsewhere and produced in China, Korea, Singapore, and Taiwan. Assembly occurs principally in China, the world’s largest television producer, although most sets destined for the U.S. market are assembled in Mexico. ... U.S. participation in the global supply chain is now limited to the design of chips, some
product development, distribution, marketing, and customer service. The last U.S. television factory (owned by Sony) closed in 2009. All televisions sold in the United States now are imported from original equipment manufacturers (OEMs) with factories outside the United States (principally in Mexico) or from contract manufacturers with factories principally in Mexico and China. The sole remaining U.S.-headquartered television brand, Vizio, entered the U.S. market in 2002. Vizio has no factories of its own, but rather uses contract manufacturers in China, Taiwan, and Mexico to produce goods to Vizio’s specifications. Although Vizio builds products that incorporate current technology, it does no R&D; instead, it purchases patents or licenses the technology from other patent owners. Vizio has also acquired other patents, which it licenses to other television manufacturers. The principal suppliers of finished televisions to Vizio are two contract manufacturers in Taiwan, Foxconn and Amtran. These companies are also part owners of Vizio."

A U.S. Success in Global Supply Chains: Logistics

"U.S. firms are among the leading logistics providers worldwide and hence have become essential participants in global supply chains. Logistics, the coordinated movement of goods and services, encompasses diverse activities that oversee the end-to-end transport of raw, intermediate, and final goods between suppliers, producers, and consumers. ... The largest and most diversified U.S. logistics firms are FedEx and UPS, although for both firms, primary revenues are derived from the express delivery of letters and small packages. Some other large U.S.-based logistics firms include C.H. Robinson Worldwide, Expeditors International of Washington, Caterpillar Logistics Services, and Penske Logistics. All of these firms operate globally and typically have hundreds of offices worldwide. Like FedEx and UPS, these firms have added logistics and supply chain capabilities to their main lines of business which, for example, include the transportation of heavy freight (Caterpillar) and the arrangement of transportation services (C.H. Robinson and Expeditors). For all firms, supply chain management is a fast-growing business segment, with U.S. revenues for supply chain services having grown by about 20 percent during 2004–09."

Shifting to a value-added view of trade

When products cross national borders several times, then instead of focusing on the value of what crosses the border, which is "gross trade," it becomes important to understand "value-added" trade--that is, what value was added within your country. One approach here is to look at the foreign content in your production. One of the report's figures shows that foreign content in U.S. manufacturing has risen from about 10% in the mid-1980s to more than 25% now. Overall foreign content in U.S. exports has risen, but more slowly, from about 8% in the late 1970s to as high as 15% before the recession hit full force in 2008.

Looking at value-added also affects how one sees bilateral trade patterns. Here's an explanation: "China is the final assembler in a large number of global supply chains, and it uses components from many other countries to produce its exports. The figure below shows that the U.S.-China trade deficit on a value-added basis is considerably smaller (by about 40 percent in 2004) than on the commonly reported basis of official gross trade. By contrast, Japan exports parts and components to countries throughout Asia; many of these components are eventually assembled into final products and exported to the United States. Thus the U.S.-Japan trade balance on a value-added basis is larger than the comparable gross trade deficit. The U.S. value-added trade deficits with other major trading partners (Canada, Mexico, and the EU-15) differ by smaller amounts from their corresponding gross trade deficits."

Other ways in which longer global supply chains change thinking about international trade
Here are some other changes: "Modern complex supply chains generate more trade than traditional supply networks in which only raw materials or final goods might be sent across international borders. In the earlier example of a supply chain in which the stages in figure 3.1 were carried out in three countries, the product was exported three times before being sold in final form at home or abroad. Global chains can also generate new patterns of specialization, as firms in a particular country often specialize in a particular stage or task. In electronics, for example, intermediate and semifinished goods are often produced in Japan, Hong Kong, South Korea, and Taiwan, while final assembly activities are often contracted to Chinese firms. Finally, global chains can change the nature of a nation’s trade. As countries become more vertically specialized, their imports and exports are increasingly composed of intermediate goods and services that are moving to the next stage in the chain."
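The multiple-counting point in that quoted example can be made concrete with a little arithmetic in Python. The stage values below are invented for illustration; nothing here comes from the report itself:

```python
# Hypothetical three-stage supply chain, as in the report's figure 3.1.
# Each tuple is (exporting country, value added at that stage).
stages = [
    ("Home",      40),   # R&D, design, initial intermediate inputs
    ("Country 2", 35),   # semifinished product
    ("Country 3", 25),   # final assembly and packaging
]

# Gross trade counts the full value of the good each time it crosses a border.
cumulative = 0
gross_trade = 0
for country, value_added in stages:
    cumulative += value_added     # value embodied in the good so far
    gross_trade += cumulative     # the good is exported after each stage

value_added_trade = cumulative    # each stage's contribution counted once

print(gross_trade)        # 40 + 75 + 100 = 215
print(value_added_trade)  # 100
```

With three border crossings, gross trade records $215 of flows for a good whose total value is $100, which is why the value-added view can tell such a different story about bilateral balances.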

I would add two final thoughts here:

1) It will be interesting to see if the growth of global supply chains alters the political economy of trade. In the old view of trade, firms within a certain country made goods like cars or machine tools or computers. That doesn't happen so much any more; instead, firms within a country do pieces and parts of the production process. As manufacturers of cars and computers and other goods become less national in scope, will there be less political pressure to protect them from international trade? Or will being more economically intertwined make trade seem like a more frightening and salient issue?

2) The U.S. economy has some large advantages in a world of longer global supply chains: the sheer size of its existing markets; its functional rules of law and finance; its expertise in logistics and marketing; its well-developed communication and transportation facilities; the cultural and personal connections that America has throughout the world economy; its R&D and scientific capabilities; and the flexibility of its workers and firms. There are a lot of clouds in the future economic outlook for the U.S., but one potential bright spot--if we go out and seize it--is the multiplicity of roles that the U.S. can play in the longer supply chains of an evolving global economy.

For more on this subject, and in particular some measures of how foreign content in exports has evolved over recent decades, see my post from August 19 about an IMF report on longer global supply chains.





U.S. Barriers to Imports: U.S. ITC #1

The U.S. International Trade Commission has published the 7th edition of its occasional report: "The Economic Effects of Significant U.S. Import Restraints." The report comes in two main parts. The first part is an overview and status report on the main U.S. barriers. The second part, which I'll discuss in a follow-up post, concerns the trend toward longer global supply chains.

The main message of the first part of the report is that the U.S. economy is in general extremely open to imports: "The United States is one of the world’s most open economies. In 2010, the average U.S. tariff on all goods remained near its historic low of 1.3 percent, on an import-weighted basis, essentially unchanged from the previous update in 2009. Nonetheless, significant restraints on trade remain in certain sectors. The U.S. International Trade Commission (Commission) estimates that U.S. economic welfare, as defined by total public and private consumption, would increase by about $2.6 billion annually by 2015 if the United States unilaterally ended (“liberalized”) all significant restraints quantified in this report. Exports would expand by $9.0 billion and imports by $11.5 billion. These changes would result from removing import barriers in the following sectors: sugar, ethanol, canned tuna, dairy products, tobacco, textiles and apparel, and other high-tariff manufacturing sectors."

The single most costly trade barrier concerns rules against importing ethanol. The fact that such rules exist at all, of course, strongly suggests that the key issue in ethanol policy is not how much gasoline we can replace, but instead how much of a subsidy we can find a justification for sending to farmers. Here's the ITC overview:

"Because of rapidly increasing quantities of ethanol mandated by the U.S. Renewable Fuel Standard, both U.S. ethanol production and U.S. imports of ethanol are projected to rise markedly by 2015. The projected higher import quantities and the continued moderate restrictiveness of ethanol restraints combine to make these restraints the most costly (in welfare terms) among all sectors considered. The Commission estimates that liberalizing ethanol import restraints would increase welfare by $1.5 billion and increase imports by 45 percent in 2015. Although liberalization would reduce the domestic industry’s output and employment from their projected 2015 levels by 4–5 percent, these changes are minor considering that ethanol industry employment and output are both projected to more than double between 2005 and 2015, with or without liberalization."

One final element of these reports that I always appreciate is that they treat employment issues in the context of the overall economy, where over time wages and industries adjust. Thus, while for each trade barrier the report seeks to quantify the output and employment changes that would arise if that barrier were lifted, the report is also careful to note that as the economy adjusts, an equivalent number of jobs would arise elsewhere. This message comes through far too seldom in discussions of international trade: barriers to trade, or lifting barriers to trade, aren't going to alter the total number of jobs over time, but instead will shift the industries and sectors where those jobs occur.










Wednesday, October 12, 2011

More on Hating Biofuels: The National Research Council

I've posted here and here on how many international organizations hate government subsidies for biofuels. Now it's time for the National Research Council to have a whack at this pinata. The Committee on Economic and Environmental Impacts of Increasing Biofuels Production of the National Research Council has published: "Renewable Fuel Standard: Potential Economic and Environmental Effects of U.S. Biofuel Policy." The report was mostly written under the chairmanship of Lester Lave, but was completed after his death last May. As befits a report from the NRC, it is a sober-sided discussion that lays out evidence at great length without seeking to take a particular explicit policy stance. Here are the eight major findings of the study, with a few quick comments from me, as quoted from the "prepublication copy" that can be downloaded free of charge:

FINDING: Absent major technological innovation or policy changes, the RFS2-mandated consumption of 16 billion gallons of ethanol-equivalent cellulosic biofuels is unlikely to be met in 2022.
RFS2 is the committee's way of referring to the Renewable Fuel Standard passed into law in 2005 and revised in 2007. Cellulosic biofuel is not made from corn or soybeans or animal fat, but instead from certain kinds of grasses or wood chips. Cellulosic biofuel has the theoretical advantage that the sources for such fuel are cheap and abundant; however, producing fuel from these sources is harder than producing it from corn or soybeans or sugar, and the technologies for converting cellulosic material to biofuels are far from cost-effective. Indeed, the committee writes: "no commercially viable biorefineries exist for converting lignocellulosic biomass to fuels as of the writing of this report."

FINDING: Only in an economic environment characterized by high oil prices, technological breakthroughs, and a high implicit or actual carbon price would biofuels be cost-competitive with petroleum-based fuels.
Indeed, the case for biofuels probably comes down to either very high oil prices or technological breakthroughs that make them much cheaper, because as the next finding notes, it's not at all clear that biofuels reduce greenhouse gas emissions.

FINDING: RFS2 may be an ineffective policy for reducing global GHG emissions because the effect of biofuels on GHG emissions depends on how the biofuels are produced and what land-use or land-cover changes occur in the process.
Expanded production of biofuels will almost certainly involve clearing and planting additional land. Depending on how it is done, this process can release more carbon than biofuels save. In addition, it's important to remember that biofuels and agricultural products operate in global markets, so it's not just an issue of how U.S. biofuel policies affect the clearing and planting of U.S. land, but of how they affect the clearing and planting of land all around the world.

FINDING: Absent major increases in agricultural yields and improvement in the efficiency of converting biomass to fuels, additional cropland will be required for cellulosic feedstock production; thus, implementation of RFS2 is expected to create competition among different land uses, raise cropland prices, and increase the cost of food and feed production.
FINDING: Food-based biofuel is one of many factors that contributed to upward price pressure on agricultural commodities, food, and livestock feed since 2007; other factors affecting those prices included growing population overseas, crop failures in other countries, high oil prices, decline in the value of the U.S. dollar, and speculative activity in the marketplace.
Many U.S. households can find ways to adjust without too much pain to a slightly higher price of food. But food products are sold in global markets, and for many people around the world, higher food prices can have dire consequences for nutrition and health.

FINDING: Achieving RFS2 would increase the federal budget outlays mostly as a result of increased spending on payments, grants, loans, and loan guarantees to support the development of cellulosic biofuels and forgone revenue as a result of biofuel tax credits.
Even if explicit subsidies for biofuels are allowed to expire, as they are scheduled to do at the end of 2012, the mandates for consuming biofuels will remain in place, which will raise costs for consumers. Also, gasoline is taxed and biofuels are subsidized, so a movement from gasoline to biofuels will reduce government tax revenues.

FINDING: The environmental effects of increasing biofuels production largely depend on feedstock type, site-specific factors (such as soil and climate), management practices used in feedstock production, land condition prior to feedstock production, and conversion yield. Some effects are local and others are regional or global. A systems approach that considers various environmental effects simultaneously and across spatial and temporal scales is necessary to provide an assessment of the overall environmental outcome of increasing biofuels production.
Biofuels are commonly sold on their environmental merits. The committee is saying here, in a very polite way, that when different feedstocks are considered, along with their effects on air, soil, and water, these purported environmental gains have not yet been convincingly demonstrated. 

FINDING: Key barriers to achieving RFS2 are the high cost of producing cellulosic biofuels compared to petroleum-based fuels and uncertainties in future biofuel markets.

I'm a supporter of expanded energy R&D efforts. Maybe some scientists will find a way to make biofuels that are both cost-effective and clearly an environmental gain, in a way that doesn't drive up food prices around the world. But at this stage, subsidizing production of biofuels or mandating that they be used in certain quantities--especially for technologies like cellulosic biofuels that don't exist on a commercial basis--is putting the cart way in front of the horse.



Tuesday, October 11, 2011

Using Financial Repression to Reduce Government Debt

The usual ways of reducing a government debt burden over time are fairly well-known: cut spending or raise taxes; have the economy grow faster than the debt burden, so that the ratio of debt to GDP declines over time; allow a burst of inflation, which reduces the real value of past debt; and in some cases an outright default or restructuring of the debt. To this list, Carmen Reinhart, Jacob F. Kirkegaard, and M. Belen Sbrancia offer "Financial Repression Redux." Here are some main themes (references omitted for readability):

Here's their definition of financial repression:
"Financial repression occurs when governments implement policies to channel to themselves funds that in a deregulated market environment would go elsewhere. Policies include directed lending to the government by captive domestic audiences (such as pension funds or domestic banks), explicit or implicit caps on interest rates, regulation of cross-border capital movements, and (generally) a tighter connection between government and banks, either explicitly through public ownership of some of the banks or through heavy “moral suasion.” Financial repression is also sometimes associated with relatively high reserve requirements (or liquidity requirements), securities transaction taxes, prohibition of gold purchases, or the placement of significant amounts of government debt that is nonmarketable.... "

How financial repression works like a tax
"One of the main goals of financial repression is to keep nominal interest rates lower than they would be in more competitive markets. Other things equal, this reduces the government’s interest expenses for a given stock of debt and contributes to deficit reduction. However, when financial repression produces negative real interest rates (nominal rates below the inflation rate), it reduces or liquidates existing debts and becomes the equivalent of a tax—a transfer from creditors (savers) to borrowers, including the government. But this financial repression tax is unlike income, consumption, or sales taxes. The rate is determined by financial regulations and inflation performance, which are opaque compared with more visible and often highly politicized fiscal measures. Given that deficit reduction usually involves highly unpopular expenditure reductions and/or tax increases, authorities seeking to reduce outstanding debts may find the stealthier financial repression tax more politically palatable."
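The mechanics of this stealth tax are easy to sketch with a toy calculation. The numbers below (a 2 percent nominal rate cap and 5 percent inflation) are hypothetical choices for illustration, not figures from Reinhart, Kirkegaard, and Sbrancia:

```python
# How a negative real interest rate liquidates government debt.
# All figures are hypothetical, chosen only for illustration.
debt = 100.0          # real value of outstanding debt (index = 100)
nominal_rate = 0.02   # capped nominal rate paid to bondholders
inflation = 0.05      # inflation rate

# Real return to creditors: (1 + nominal) / (1 + inflation) - 1
real_rate = (1 + nominal_rate) / (1 + inflation) - 1  # about -2.9 percent

# Over a decade, the real value of rolled-over debt erodes year by year.
for year in range(10):
    debt *= (1 + real_rate)

print(round(real_rate, 4))  # -0.0286
print(round(debt, 1))       # roughly 74.8
```

Sustained for a decade, a real rate of about minus 3 percent quietly erodes roughly a quarter of the real value of the debt, with no vote on a tax increase ever taken.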


How is financial repression currently operating in the U.S.?
One potential example of how financial repression is operating in the U.S. is super-low interest rates. In part, of course, these are an attempt to stimulate the economy, but it also seems plausible to me that they are intended to help the U.S. government finance its debt. A more straightforward example is when the Federal Reserve and other central banks buy U.S. Treasury debt directly--debt that might very well need to pay a higher interest rate if it were sold to outsiders. Back in 1990, outsiders owned about 75% of U.S. Treasury debt; now they own about half.


How is financial repression happening in other countries? 
Central banks in many other countries--for example, the UK, Ireland, Portugal, and Greece--have sharply increased their holdings of government debt. In France and Ireland, major pension funds have been required to invest in government debt.

How much can financial repression reduce government debt?
These authors cite research that financial repression can have a major effect in reducing government debt, through what they call "the liquidation effect." Many of their calculations focus on how government debt burdens were reduced after WWII. They write: "For the United States and the United Kingdom, the annual liquidation effect [between 1945 and 1980] amounted on average to between 3 and 4 percent of GDP a year. ... For Australia and Italy, which recorded higher inflation rates, the liquidation effect was larger (about 5 percent a year)."


My point here isn't to argue for or against what they call financial repression. But if their calculations are roughly right, it's an option for reducing government debt that could end up playing a major role, and needs to be better understood.


 


Monday, October 10, 2011

2011 Nobel Prize to Thomas Sargent and Christopher Sims

According to the Nobel website: "The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2011 was awarded jointly to Thomas J. Sargent and Christopher A. Sims `for their empirical research on cause and effect in the macroeconomy.'" But what does that actually mean?

The website of the Nobel organization always offers useful background information about the laureates, including a "Scientific Background" paper about the winners. This year's background paper about Thomas Sargent and Christopher Sims is going to be hard sledding for those uninitiated into academic macroeconomics--by which I mean it has a bunch of equations. But the opening pages offer an accessible overview of why they are eminently deserving of the prize. Here are some excerpts, mixed with some of my own explanations:


How was macroeconomic analysis done before the work of Sargent, Sims, and others? 
Here's my own description: If one looks back at how macroeconomics was typically done in the 1960s and into the early 1970s, the common macroeconomic models were big sets of equations--that is, they added up relationships between elements like consumption, investment, saving, imports, exports, and total economic output, along with equations for how interest rates and exchange rates affected each other and these categories. A big category like "consumption" would be broken down into durable goods and nondurable goods, and in turn these categories would be broken down still further. The resulting models would have hundreds of equations all interrelated with each other, adding up to a picture of the macroeconomy as a whole. But as the Nobel background paper reports: "This estimated system was then used to interpret macroeconomic time series, to forecast the economy, and to conduct policy experiments. Such large models were seemingly successful in accounting for historical data. However, during the 1970s most western countries experienced high rates of inflation combined with slow output growth and high unemployment. In this era of stagflation, instabilities appeared in the large models, which were increasingly called into question."


The key role of expectations in this analysis
Many of the public policy discussions in the stagflation of the 1970s focused on expectations. What if workers were expecting higher wages? What if firms could promise higher wages because they expected prices to rise? Were the expectations causing inflation and recession, or were inflation and recession causing the expectations, or were there feedback loops in all of these and other economic factors? The macroeconomics of that time had no clear-cut tools for dealing with these issues.


The background paper puts it this way: "In any empirical economic analysis based on observational data, it is difficult to disentangle cause and effect. This becomes especially cumbersome in macroeconomic policy analysis due to an important stumbling block: the key role of expectations. Economic decision-makers form expectations about policy, thereby linking economic activity to future policy. Was an observed change in policy an independent event? Were the subsequent changes in economic activity a causal reaction to this policy change? Or did causality run in the opposite direction, such that expectations of changes in economic activity triggered the observed change in policy? Alternative interpretations of the interplay between expectations and economic activity might lead to very different policy conclusions. The methods developed by Sargent and Sims tackle these difficulties in different, and complementary, ways."

Sargent and structural econometrics
Instead of trying to build a macroeconomic model on a pile of statistics, and how those statistics added up and interrelated, the approach of Sargent (and others) was to build a macroeconomic model starting from the idea that economic actors like households and firms were doing their best to pursue their own interests. This approach has sometimes been called "rational expectations," but that term is probably misleading. The "rationality" here doesn't mean that economic actors have all available information, can calculate everything perfectly, and always make correct decisions. It only implies that they won't make the same mistake over and over again. In Sargent's hands, at least, this approach explicitly leaves open the question of just how people form expectations and learn.

Here's the background paper: "Sargent began his research around this time [the early 1970s], during the period when an alternative theoretical macroeconomic framework was proposed. It emphasized rational expectations, the notion that economic decisionmakers like households and firms do not make systematic mistakes in forecasting. This framework turned out to be essential in interpreting the inflation-unemployment experiences of the 1970s and 1980s. It also formed a core of newly emerging macroeconomic theories. Sargent played a pivotal role in these developments. He explored the implications of rational expectations in empirical studies, by showing how rational expectations could be implemented in empirical analyses of macroeconomic events--so that researchers could specify and test theories using formal statistical methods--and by deriving implications for policymaking. ... In fact, the defining characteristic of Sargent's overall approach is not an insistence on rational expectations, but rather the essential idea that expectations are formed actively, under either full or bounded rationality. In this context, active means that expectations react to current events and incorporate an understanding of how these events affect the economy. This implies that any systematic change in policymaking will influence expectations, a crucial insight for policy analysis."

I would add that instead of a model of the macroeconomy with potentially hundreds of variables, Sargent and others worked with models that on the surface appeared much simpler: one example in the "background" paper is a model of the macroeconomy with only three variables: inflation, output, and a nominal interest rate. But the inferences about cause and effect in these models are defensible and logical.

Sims and vector autoregressions
Sims pointed out that the earlier generation of macroeconomic models was built on a series of assumptions about how certain economic factors or policies "caused" other outcomes. But in a model with expectations, these statements about "cause" needed to be demonstrated, not assumed. Thus, instead of building a model in which some factors caused other factors, Sims proposed that macroeconomic analysis should begin with a model in which it was possible for every factor to "cause" a change in every other factor, and in addition for past values of every factor over the last few years to "cause" a change in every factor. This approach is called a "vector autoregression," but I often prefer to think of it as starting from a position of honest ignorance.

You then plug in all your data--say, quarterly data over a period of years--and see what patterns emerge. As you might imagine, it quickly becomes clear that certain factors are not affecting others. Sims proposed a process for figuring out when certain factors aren't connected. As you rule out what is NOT connected, what is left behind is a model of the connections that actually exist. It's a bit like the way a sculptor starts with a block of stone and, by gradually removing pieces, ends up with a figure.

The background paper puts it this way: "Sims launched what was perhaps the most forceful critique
of the predominant macroeconometric paradigm of the early 1970s by focusing on identification, a central element in making causal inferences from observed data. Sims argued that existing methods relied on "incredible" identification assumptions, whereby interpretations of "what causes what"
in macroeconomic time series were almost necessarily flawed. Misestimated models could not serve as useful tools for monetary policy analysis and, often, not even for forecasting. As an alternative, Sims proposed that the empirical study of macroeconomic variables could be built around a statistical tool, the vector autoregression (VAR). Technically, a VAR is a straightforward N-equation, N-variable (typically linear) system that describes how each variable in a set of macroeconomic variables depends on its own past values, the past values of the remaining N - 1 variables, and on some exogenous "shocks." Sims's insight was that properly structured and interpreted VARs might overcome many identification problems and thus were of great potential value not only for
forecasting, but also for interpreting macroeconomic time series and conducting monetary policy experiments."
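The mechanics of a VAR are simple enough to sketch in a few lines. Here is a minimal illustration in Python, using only NumPy: it simulates data from a known two-variable VAR with one lag, then estimates the system by ordinary least squares, regressing each variable on the past values of every variable. The variable names and numbers are purely illustrative, not drawn from any of the papers discussed above; the point is the "honest ignorance" starting position, in which the data themselves reveal which connections are absent.

```python
# A minimal sketch of estimating a vector autoregression (VAR) by ordinary
# least squares: each variable is regressed on its own past values and the
# past values of every other variable. The data are simulated, so the "true"
# connections are known in advance and can be compared with the estimates.
import numpy as np

rng = np.random.default_rng(0)

# Simulate T observations from a known 2-variable VAR(1): y_t = A y_{t-1} + shock.
# By construction, variable 1 does NOT "cause" variable 2 (the (2,1) entry is zero).
A_true = np.array([[0.5, 0.1],
                   [0.0, 0.8]])
T = 500
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.1, size=2)

# Estimate the VAR equation by equation with OLS: lagged values are the
# regressors, current values the dependent variables.
X = y[:-1]          # y_{t-1}
Y = y[1:]           # y_t
B, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)
A_hat = B.T         # each row holds one equation's estimated coefficients

# The estimated (2,1) coefficient should come out close to zero: the data
# reveal the connection that is absent, without it being assumed away.
print(np.round(A_hat, 2))
```

In practice, of course, researchers use more lags, more variables, and formal statistical tests (rather than eyeballing coefficients) to decide which connections to rule out, but the logic is the same as in this toy example.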

Other thoughts and resources
Sargent and Sims were colleagues at the University of Minnesota for about 15 years. The Federal Reserve Bank of Minneapolis puts out a readable publication called "The Region," which often does in-depth interviews with prominent economists about their work. An interview with Sargent from the September 2010 issue is available here; an interview with Sims from the June 2007 issue is available here.

Also, Sims published an article in the Spring 2010 issue of my own Journal of Economic Perspectives called "But Economics Is Not an Experimental Science," on issues of how to draw defensible cause-and-effect inferences from naturally occurring data. Like all articles in my journal, it is freely available courtesy of the American Economic Association.


While no one quite knows what the Nobel committee is thinking when they choose laureates, it seems clear that one standard is whether the ideas have been important enough to launch a sustained research literature. The ideas of Sargent and Sims from back in the 1970s and early 1980s certainly meet this test. Both these authors, and hundreds of others, have built on these ideas for decades.



After Japan's Quake, the Intervention to Stabilize the Yen

In the aftermath of the dreadful earthquake and tsunami which hit Japan on March 11, 2011, I completely missed that there was an international intervention to stabilize the exchange rate of the Japanese yen. Fortunately, Christopher J. Neely tells the story and offers useful context in "A Foreign Exchange Intervention in an Era of Restraint." Here are some highlights of what has happened, and what lessons can be drawn: 


Foreign exchange intervention has become rare for the G-7 countries
Back in the late 1980s and early 1990s, many major central banks stopped frequent intervention in exchange rate markets, as shown in the figure. In fact, there have been only three exchange rate interventions by these countries since 1995: an intervention in the yen after East Asia's financial crisis in 1998, an intervention soon after the start of the euro in September 2000, and an intervention after Japan's quake in March 2011.

 
The FX intervention after Japan's March 2011 Quake
Japan's currency started rising sharply after the earthquake. Here's how Neely describes what happened:  "Nevertheless, the G-7 finance ministers and central bank governors held a conference call on the evening of Thursday, March 17 (Friday morning in Tokyo) and decided to conduct a coordinated intervention to weaken the JPY. The G-7 issued a press release containing the following text:
 In response to recent movements in the exchange rate of the yen associated with the tragic events in Japan, and at the request of the Japanese authorities, the authorities of the United States, the United Kingdom, Canada, and the European Central Bank will join with Japan, on 18 March 2011, in concerted intervention in exchange markets. As we have long stated, excess volatility and disorderly movements in exchange rates have adverse implications for economic and financial stability. We will monitor exchange markets closely and will cooperate as appropriate (G-7, 2011).
Figure 7 shows that the USD/JPY rate reacted immediately to the intervention announcement, surging
almost 4 percent within the hour ..."

As Neely reports, the total intervention was about $10.4 billion. Notice that the yen starts stabilizing when the announcement is made, then moves to a certain level and more or less sticks there for a while. The volatility of the yen exchange rate diminishes a great deal.

What did the 1998 exchange rate intervention look like?
Neely describes the background to the 1998 intervention this way: "The June 1998 intervention also followed a financial crisis, the 1997 Asian exchange rate crisis in which international capital fled many developing Asian countries, such as Thailand and South Korea. In early June 1998, the main macroeconomic concern was that the yen was unusually weak and weakening further, which made goods and services from other Asian countries less competitive with Japanese goods and
services and harmed those countries’ recoveries. Policymakers probably feared that a falling yen
might cause China to devalue the renminbi (RMB), possibly sparking competitive devaluations, inflation, and instability throughout the region."

The pattern of the 1998 intervention is qualitatively similar to that of the 2011 intervention: a reaction just before the announcement is made, a movement to a new level, and stabilized volatility.


What happened in the September 2000 intervention? 
Neely sets the stage: "On January 1, 1999, the ECB began conducting a common monetary policy with a new currency, the euro, for the 11 original nations of the European Monetary Union (EMU). From its inception, the euro tended to depreciate against the dollar, falling from about 1.18 USD/EUR on the inception date to less than 0.85 USD/EUR in September 2000. Doubts about the policies of the new central bank probably contributed to this weakness. At the same time, the U.S. economy was slowing—it would officially enter a recession in March 2001—and the strong dollar/weak euro was perceived as detrimental to U.S. exporters. In addition, the Japanese feared that an overly strong yen would price Japanese exports out of the European markets. Against this backdrop, the ECB, the United States, and Japan decided to intervene to support the euro on September 22, 2000."

Again, the qualitative pattern is the same: the exchange rate takes a jump, but then stabilizes at a new level with diminished volatility.

What are the overall lessons?
 Neely summarizes the lessons this way: "Since 1995 most advanced governments/central banks have used intervention only very sparingly as a policy tool. Examination of coordinated interventions during this period shows that intervention is not a magic wand that authorities can use to move exchange rates at will. It can be a very effective tool in certain circumstances, however, to coordinate market expectations about fundamental values of the exchange rate and calm disorderly foreign exchange markets by reintroducing two-sided risk."

Those who are talking about pressuring China to adjust its exchange rate against the U.S. dollar have a reasonable case to make. But they would be wise to take to heart the practical lessons here. Foreign exchange intervention can stabilize a disorderly market in a short-run situation where everyone is betting the currency will move in only one direction, but it is not a magic wand to move exchange rates at will.

Friday, October 7, 2011

America as Conventional Energy Powerhouse?!?

I've been trying to wrap my mind around the issues and possibilities created by the new technologies for extracting oil and gas from North America. Amy Myers Jaffe, an energy expert who runs the Baker Institute Energy Forum at Rice University, has a nice provocative short article in the most recent issue of Foreign Policy magazine called  "The Americas, Not the Middle East, Will Be the World Capital of Energy." Jaffe writes: 
"By the 2020s, the capital of energy will likely have shifted back to the Western Hemisphere, where it was prior to the ascendancy of Middle Eastern megasuppliers such as Saudi Arabia and Kuwait in the 1960s. The reasons for this shift are partly technological and partly political. Geologists have long known that the Americas are home to plentiful hydrocarbons trapped in hard-to-reach offshore deposits, on-land shale rock, oil sands, and heavy oil formations. The U.S. endowment of unconventional oil is more than 2 trillion barrels, with another 2.4 trillion in Canada and 2 trillion-plus in South America -- compared with conventional Middle Eastern and North African oil resources of 1.2 trillion. The problem was always how to unlock them economically.

But since the early 2000s, the energy industry has largely solved that problem. With the help of horizontal drilling and other innovations, shale gas production in the United States has skyrocketed from virtually nothing to 15 to 20 percent of the U.S. natural gas supply in less than a decade. By 2040, it could account for more than half of it. ... Meanwhile, onshore oil production in the United States, condemned to predictions of inexorable decline by analysts for two decades, is about to stage an unexpected comeback."
Jaffe's article sent me back to the sober-sided Annual Energy Outlook 2011 published in April by the U.S. Energy Information Administration. Here's a figure showing how the new "enhanced-oil recovery" techniques are expected to raise oil production in the lower 48 states in a way that offsets declining production from Alaska. The report says: "Rising world oil prices, growing shale oil resources (i.e., liquid oil embedded in non-porous shale rock), and increased production using EOR [enhanced oil-recovery] techniques contribute to increased domestic crude oil production from 2009 to 2035 in the AEO2011 Reference case (Figure 95). The Bakken shale oil formation contributes to growth in crude oil production in the Rocky Mountain Region, and growth in the Gulf Coast region is spurred by the resources in the Eagle Ford and Austin Chalk formations. Some of the decline in oil production in the Southwest region is offset by production coming from the Avalon shale formation."


And here's a figure showing that the share of U.S. oil consumption that is imported peaked in 2005 and is expected to fall over the next couple of decades. The report says: "[W]hile consumption of liquid fuels increases steadily in the Reference case from 2009 to 2035, the growth in demand is met by domestic production. The net import share of U.S. liquid fuels consumption fell from 60 percent in 2005 to 52 percent in 2009. The net import share continues to decline in the Reference case, to 42 percent in 2035 ..."



Of course, there are potential environmental issues. There are issues about what kinds of risks are posed by these technologies for extracting oil, as well as about the conventional pollutants and carbon dioxide emitted by burning these fossil fuels. But for the next few decades, substantial quantities of fossil fuels will continue to be used. The carbon dioxide produced will be essentially the same regardless of where these fossil fuels are produced. Thus, if the local environmental issues can be worked out--that is, the issues about extracting these resources and about conventional pollutants--then there is no inconsistency in moving toward fossil fuels produced and refined by U.S. workers, rather than imported fossil fuels produced and refined by foreign workers, as we continue to seek ways of reducing global carbon emissions.