Monday, June 30, 2014

Sick Shrimp Supply Shock

Perhaps economists are the only ones who feel their pulses accelerate at a title like "Shrimp disease in Asia resulting in high U.S. import prices." But when explaining intro economics, there's always room for one more supply and demand example.  Kristen Reed and Sharon Royales from the U.S. Bureau of Labor Statistics lay out some facts about supply shocks in the shrimp market in a short "Beyond the Numbers" (June 2014, vol. 3, no. 14). They write (footnotes omitted):

Shrimp has become a popular purchase for American consumers, with U.S. consumption of shrimp reaching 3.8 pounds per person in 2012. Demand for shrimp has increased over the years, and shrimp is currently the largest imported seafood species, accounting for 29 percent of seafood imports by dollar value. In 2013, consumers and businesses found themselves paying higher prices with less product available in supermarkets and restaurants. For example, the popular restaurant chain Red Lobster recently saw a 35-percent increase in the price the company paid for shrimp. The price hike contributed to a 3.1-percent increase in the company’s overall food costs and, more recently, an 18-percent decrease in earnings during the quarter that ended in February 2014. Similarly, Noodles & Company noted that the cost of shrimp in its pasta dishes would rise 29 percent this year.

The reason for the higher shrimp prices is a shortage of imports from the top shrimp producers in Southeast Asia. With about 90 percent of shrimp consumed in the United States coming from imports, any change in foreign supply affects both U.S. import prices and overall consumer prices. ... A large contributor to the seafood price increases was a disease-related decline in supplies from the top three shrimp-producing countries: Thailand, Vietnam, and China.

Here's the pattern of U.S. shrimp prices over the last 10 years from the Index Mundi website, based on price data collected by the International Monetary Fund (as part of its data on "primary commodities").

The more modest price rise for shrimp back in 2010 apparently reflects, according to Reed and Royales, a previous outbreak of shrimp diseases in other countries, together with the effect of the Deepwater Horizon oil spill on shrimpers in the Gulf of Mexico. Here's a figure from Reed and Royales showing quantities of U.S. shrimp imports from China, Vietnam, and Thailand.

For context, here's some data on overall US shrimp supply from the National Marine Fisheries Service at NOAA. The most recent data available here only go through 2012, and so only show the start of the supply drop-off.

There's an old rule for holding a successful friendly dinner party: never seat two economists beside each other. But if you draw the short straw and end up sitting next to an economist when you're out for a nice seafood dinner, feel free to discuss diseased Asian shrimp, oil spills, and the resulting price fluctuations. The economist will regard it as normal meal-time conversation.

Friday, June 27, 2014

Sluggish US Investment

There has been enormous attention paid, and rightly so, to how slowly U.S. labor markets have rebounded since the Great Recession. But the sluggish rebound of U.S. business investment deserves attention, too. Here is the pattern of private nonresidential fixed investment in the U.S. divided by GDP, created with the help of the ever-useful FRED website. During the worst of the Great Recession, investment fell to levels comparable to troughs of recessions in the 1970s and the early 1990s. But even with some bounce-back since 2009, the level of U.S. investment remains low by historical standards.

This low level of investment is showing up in a number of recent discussions. For example, Robert Hall recently calculated that U.S. GDP is now 13% below where it would be if it had remained on the average trend path from 1990-2007. He attributes 3.9 percentage points of that gap to a shortfall in business capital. Lawrence Summers gave a recent speech about "U.S. Economic Prospects: Secular Stagnation, Hysteresis, and the Zero Lower Bound." The secular stagnation argument, dating back to a 1938 paper by Alvin Hansen, makes the claim that a strong level of investment is needed for a full-employment economy. Hansen argued that historically, high levels of investment have been driven by three factors: 1) innovation and new technology; 2) a rising population; and 3) the discovery of new territory and resources. He argued in 1938 that the last two causes were looking unlikely, and so the U.S. economy needed to focus on innovation and new technology.

As Summers points out, the last two U.S. economic upswings--the dot-com boom of the 1990s and the housing boom of the mid-2000s--were driven by rising investment levels. Of course, the busts that followed these booms were not created equal. The dot-com boom led to high levels of investment in information and communications technology that has paid off in productivity gains, and was followed by a drop in stock prices and the relatively brief and shallow recession in 2001. The financial losses around 2001 were concentrated in stock prices. The housing boom led to more houses, which will do little to boost future productivity, and was followed by a financial crisis and Great Recession that shook the U.S. economy to its roots. Thus, the challenge is not just to have more investment, but to have it in a way that improves productivity and doesn't set the stage for a financial earthquake.

The very slow rebound in investment isn't easy to explain.

One possible explanation is a rise in economic uncertainty, as U.S. firms and consumers try to process what hit them during the Great Recession and how to deal with the various major pieces of legislation Congress has passed since then. At some basic level, the very slow rebound in investment is troubling because it suggests that business doesn't perceive the U.S. economy as having opportunities for future growth.

A second possibility is that some small and medium-size firms may be having trouble finding sources of financing for investment. However, many larger firms have sizable profits and are sitting on cash, with what appears to be the ability to borrow if they wish to do so, but they are not choosing to invest. Here's a figure from Summers's lecture showing corporate profits in recent years.

A third possibility is that despite running budget deficits and low interest rates at levels that would have astonished almost anyone back in 2007, there is still insufficient demand in the economy to encourage sufficient business investment.

A fourth possibility is that businesses are doing a lot of investment, but it's often a form of investment that involves reorganizing their firm around new information and communications technology--whether in terms of design, business operation, or far-flung global production networks. As a result, this form of investment doesn't involve enough demand to push the economy to full employment. Summers suggests this argument as a possibility in his talk as well.
Ponder that the leading technological companies of this age—I think, for example, of Apple and Google—find themselves swimming in cash and facing the challenge of what to do with a very large cash hoard. Ponder the fact that WhatsApp has a greater market value than Sony, with next to no capital investment required to achieve it. Ponder the fact that it used to require tens of millions of dollars to start a significant new venture, and significant new ventures today are seeded with hundreds of thousands of dollars. All of this means reduced demand for investment ...
As another way to see this point, here's a price index for capital equipment. As Summers says: "Cheaper capital goods mean that investment goods can be achieved with less borrowing and spending, reducing the propensity for investment."

What might be done to encourage a resurgence in business investment? Low interest rates and large government budget deficits haven't been a sufficient answer so far.

One suggestion from Summers is a large boost in government spending on infrastructure. I confess that this idea leaves me a little cold. Sure, we can all think of examples where infrastructure spending would be useful. Summers likes to kvetch about the griminess of Kennedy Airport. Here in Minnesota, a major bridge in Minneapolis suddenly collapsed in August 2007, killing 13 people and injuring more than 100. For economists, the trick with infrastructure spending is always to think about the right mixture of price incentives to manage congestion and damage along with pouring concrete--and to try to focus on projects with a large payoff, not just pork barrel spending. While I can easily support appropriately targeted and priced infrastructure spending, I don't think that growth in the 21st century economy is going to be built on wider highways.

Summers also suggests actively promoting and encouraging exports, and I've argued on this blog a number of times that the U.S. should be trying to build ties with the faster-growing portions of the world economy. Of course, this is a somewhat indirect way to encourage investment.

Back in the 1960s and 1970s, the government used to enact an "investment tax credit" when the economy was slow. The notion was that firms often have some investment plans up their sleeves, for when times get better. By offering a tax credit that expires in a year or two, you encourage firms to get off their duffs and move those future plans up to the present. The broad-based investment tax credit was always controversial, and it died off with the Tax Reform Act of 1986. Instead, there are now little mini-credits for specific investments like those in cleaner energy. But given the current investment slump, perhaps a broader investment tax credit should be considered.

Summers also writes in general terms of "regulatory and tax reforms that would promote private investment," and that agenda seems worth pursuing, too. The U.S. corporate tax code seems clearly out-of-whack with the rest of the world. Back in 1938, Alvin Hansen wrote that one traditional stimulus to investment is the discovery of new resources, and the breakthroughs in unconventional natural gas drilling seem to offer a classic opportunity to provide cheaper energy to the US economy in a way that respects and addresses the environmental issues. And for fans of infrastructure spending, the energy boom offers a number of opportunities for rail and pipelines.

Finally, I believe that in the 21st century, the U.S. is more dependent on an ability to translate research and development into new products and industries than ever before. R&D spending has been stagnant as a share of GDP for decades. Setting a goal of doubling R&D spending might also be a way to give U.S. business investment a big push.  But one way or another, the U.S. economy isn't going to be roaring again--and the U.S. labor market isn't likely to recover fully--until U.S. firms start making major new investments in new plant and equipment.

Thursday, June 26, 2014

Snapshots of Foreign Direct Investment

UNCTAD is the go-to source for information about foreign direct investment, and its World Investment Report 2014 has the most recent numbers. To be clear, the definition of FDI "refers to an investment made to acquire lasting interest in enterprises operating outside of the economy of the investor," and where "the investor's purpose is to gain an effective voice in the management of the enterprise." A common practice is to use "a threshold of 10 per cent of equity ownership to qualify an investor as a foreign direct investor." FDI is usefully distinguished from "portfolio investment," which is foreign investment that is purely financial, does not involve a voice in management, and may be quite short-term. Global data on portfolio investment is collected by the IMF.

Here's the pattern of FDI since 1995. The "traditional pattern" of FDI, as the report calls it, has been that developed countries received a greater share of such inflows than developing countries. But as you can see, the inflows of FDI to developed economies have been fairly volatile, while the inflows to developing economies have been rising more steadily. The "traditional pattern" is gradually shifting toward the developing countries. Total FDI for 2014 looks to be about $1.6 trillion.

For 2013, total stock of foreign direct investment around the world is about $26 trillion, total income is about $1.7 trillion, which works out to a rate of return of about 6.5%. About 70 million people around the world are employed by foreign affiliates of companies based elsewhere.
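
The rate-of-return figure above is just division of income by stock. A quick sketch of the arithmetic, using the report's round numbers:

```python
# Back-of-the-envelope rate of return on the global stock of FDI,
# using the round 2013 numbers cited above.
fdi_stock = 26.0   # total worldwide FDI stock, trillions of dollars
fdi_income = 1.7   # total income earned on that stock, trillions of dollars

rate_of_return = fdi_income / fdi_stock
print(f"Implied rate of return: {rate_of_return:.1%}")  # prints: Implied rate of return: 6.5%
```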

Here's a more detailed breakdown of FDI inflows and outflows. To me, one of the facts that jumps out is the preeminence of the U.S. economy in foreign direct investment--it leads the way by a wide margin both in inflows and in outflows. In a globalizing economy, the deepening of these interrelationships of U.S. firms with firms around the world should be viewed as a considerable strength.

In looking at the rest of the FDI inflows, it's interesting to me that no EU country shows up until Spain (#9 on the list). It's also notable that developing and transition economies are so well-represented on the list of FDI inflows: after the U.S., the next few top countries for FDI inflows are all developing and transition economies. I would not have guessed off-hand that FDI inflows to India exceed those for Germany, nor that FDI inflows to Chile and to Indonesia would exceed inflows to Italy. In looking at the rest of FDI outflows, the developing and transition economies are less well-represented, as one might expect.

What other patterns emerge from the data? Here are a few mentioned by UNCTAD: 

Less emphasis of FDI on extractive industries in low-income countries. 

"Although historically FDI in many poor developing countries has relied heavily on extractive industries, the dynamics of greenfield investment over the last 10 years reveals a more nuanced picture. The share of the extractive industry in the cumulative value of announced cross-border greenfield projects is substantial in Africa (26 per cent) and in LDCs (36 per cent). However, looking at project numbers the share drops to 8 per cent of projects in Africa, and 9 per cent in LDCs, due to the capital intensive nature of the industry. Moreover, the share of the extractive industry is rapidly decreasing. Data on announced greenfield investments in 2013 show that manufacturing and services make up about 90 per cent of the total value of projects both in Africa and in LDCs."

Shale gas is becoming a perceptible force in global FDI.

"The shale gas revolution is now clearly visible in FDI patterns. In the United States oil and gas industry, the role of foreign capital is growing as the shale market consolidates and smaller domestic players need to share development and production costs. Shale gas cross-border M&As accounted for more than 80 per cent of such deals in the oil and gas industry in 2013. United States firms with necessary expertise in the exploration and development of shale gas are also becoming acquisition targets or industrial partners of energy firms based in other countries rich in shale resources."

Private equity is less involved in global FDI, and tends to focus mainly on FDI in high-income countries--although this could change in the years ahead.

"In 2013, outstanding funds of private equity firms increased further to a record level of $1.07 trillion, an increase of 14 per cent over the previous year. However, their cross-border investment – typically through M&As – was $171 billion ($83 billion on a net basis), a decline of 11 per cent. Private equity accounted for 21 per cent of total gross cross-border M&As in 2013, 10 percentage points lower than at its peak in 2007.
With the increasing amount of outstanding funds available for investment (dry powder), and their relatively subdued activity in recent years, the potential for increased private equity FDI is significant. Most private equity acquisitions are still concentrated in Europe (traditionally the largest market) and the United States."

Wednesday, June 25, 2014

Expectations and Reactions Concerning Future Technology

For each of the following questions, consider your own answer. Then compare it with the results of a national poll of Americans conducted by the Pew Research Center and Smithsonian magazine. The questions are taken from the script used by the interviewers. Other questions and more detail on the answers are available in the report.

"Now I have a few questions about the future. Some books and movies portray a future where technology provides products and services that make life better for people. Others portray a future where technology causes environmental and social problems that make life worse for people. How about you? Over the long term, do you think that technological changes will lead to a future where people’s lives are mostly better or to a future where people’s lives are mostly worse?"

The overall response is 59% think technology will mostly make people's lives better, while 30% think it will mostly make people's lives worse. To me, the notion that almost one-third of Americans think future technology is mostly a negative is startling and unwelcome. Men are more likely to think that technology will make lives mostly better (67%) than are women (51%). Those with a college education are more likely to think that technology will make lives mostly better (66%) than are those who have not completed a college education (56%).

Next, here are some other things that might happen in the next 50 years. For each, tell me if you think it would be a change for the better or a change for the worse if this happens. How about [INSERT ITEMS; RANDOMIZE]?
a. If lifelike robots become the primary caregivers for the elderly and people in poor health
b. If personal and commercial drones are given permission to fly through most U.S. airspace
c. If most people wear implants or other devices that constantly show them information about the world around them
d. If prospective parents can alter the DNA of their children to produce smarter, healthier, or more athletic offspring.

A majority believes that all four of these would be a change for the worse. The least unpopular is wearing implants or devices, with 53% saying it would be a change for the worse. For the other three, between 63% and 66% think it would be a change for the worse.

The report notes: "Men and women have largely similar attitudes toward most of these potential societal changes, but diverge substantially in their attitudes toward ubiquitous wearable or implantable computing devices. Men are evenly split on whether this would be a good thing: 44% feel that it would be a change for the better and 46% a change for the worse. But women overwhelmingly feel (by a 59%–29% margin) that the widespread use of these devices would be a negative development."

Next, here are some things that people might be able to do in the next 50 years. For each, tell me if this were possible, would YOU PERSONALLY do this... (First,) Would you [INSERT ITEMS; RANDOMIZE]?
a. Eat meat that was grown in a lab
b. Ride in a driverless car
c. Get a brain implant to improve your memory or mental capacity

This question struck me as a little odd, because the 50-year horizon seems pretty far away. The first hamburger grown from stem cells in a lab was served in August 2013 in London, and cost about $335,000. But the possibility of factory meat production at commercial prices may be only a few years away, and growing meat in a lab involves far less use of energy, water, and land than does agricultural production. Driverless cars have been on the roads on an experimental basis for a few years now.  While we don't yet have brain implants, we do have computers and smartphones that many of us use all the time in ways that may actually be altering how we use memory and mental capacity.

About half of people would be willing to try a driverless car--and it's interesting to me that about half say they would not.  From the report: "College graduates are particularly interested in giving driverless cars a try: 59% of them would do so, while 62% of those with a high school diploma or less would not. There is also a geographical split on this issue: Half of urban (52%) and suburban (51%) residents are interested in driverless cars, but just 36% of rural residents say this is something they’d find appealing."

People are more opposed to eating meat from a lab (78% would not) than they are to a memory brain implant (72% would not). Apparently, there are people out there who are willing to have brain implants if only they can eat a cow-sourced cheeseburger while doing it. College graduates stand out here as more willing to experiment: 37% of them would be willing to get a performance-enhancing brain implant if given the chance, and 30% would be willing to try lab-grown meat. There may be a commentary here, both positive and negative, on what we are teaching college graduates.

Public attitudes toward new technology are of course malleable over time. Many people in the past might well have opposed having electrical wires running through homes, for example. But public attitudes shape attitudes toward support of research and development and whether new technologies will have to face extensive regulatory hurdles. Because new companies are chasing consumer dollars, people's attitudes determine which technologies will be pursued with greater intensity and, ultimately, what new technologies will succeed.

Tuesday, June 24, 2014

Practical Challenges of Universal Health Insurance

During discussions of US health care policy, there is often a moment when someone asks--sometimes angrily, sometimes plaintively, sometimes just wondering--"Why can't the U.S. just have a universal coverage single-payer system like every other country seems to have?" The question is at some level reasonable, but it also makes me wince a bit. It seems to presume--sometimes angrily, sometimes plaintively, sometimes just wondering--that other countries have all found a clear and simple answer to health care financing issues. But other high-income countries actually have a fair amount of variety in their health care systems, and they are all struggling in different ways with the unavoidable realities of health insurance. Mark Stabile and Sarah Thomson discuss "The Changing Role of Government in Financing Health Care: An International Perspective," in the June 2014 issue of the Journal of Economic Literature (52:2, pp. 480-518). (The JEL is not freely available on-line, but many readers will have access through library subscriptions.)

Stabile and Thomson focus on seven countries: Australia, Canada, France, Germany, Switzerland, the United Kingdom--and for comparison purposes, the United States. In my own reading of their paper, here are some of the practical challenges and issues that arise, even when health insurance is universal.

How should health care be financed?

Funds for health care can come from several sources: general government revenues, an earmarked tax for health care, private insurance, out-of-pocket payments by patients, and other sources including charitable foundations. The figure shows how these differ across countries. The UK, Canada, and Australia, for example, rely heavily on general tax revenues. The U.S., with its earmarked Medicare tax, is more similar to France, Germany, and Switzerland in using an earmarked tax. Private insurance is predictably a larger share of health care finance in the U.S. than in these other countries--but all of them continue to have non-negligible private health insurance. Out-of-pocket payments are common in other high-income countries: indeed, the share of total health spending that comes out-of-pocket from patients is larger in several of these countries than in the United States, although many countries have some cap on the total out-of-pocket payment.

The choice of how to finance health care is necessarily linked to the incentives for spending on health care. General fund spending means that health care competes directly against other spending priorities. An earmarked tax insulates health care from such competition, but it creates a different kind of pressure--to limit health care spending to what the earmarked tax provides.

Does universal mean government-provided? 

In a strict logical sense, "universal" health insurance coverage doesn't specify how people get health insurance, only that everyone has it. For example, a legal requirement that people have private health insurance, followed up by a backstop plan of public insurance, can provide universal coverage. Universal insurance is administered by regional governments in Canada and Australia, and by central governments in the UK and France. In Switzerland, there is 100% coverage by universal private insurance. Germany lets those with high incomes opt out of the government health insurance system--that is, they do not pay into the system or receive benefits from it--and about 10% of Germany's population is covered by private health insurance.

The health care finance arrangements in several of these countries have evolved fairly recently. Stabile and Thomson explain (citations omitted):
Universally compulsory coverage is a relatively recent development in France, Germany, and Switzerland. Switzerland introduced compulsory universal coverage in 1996 to address concerns about unequal access to health insurance, gaps in coverage and rising health expenditures. Before 2000, SHI [statutory health insurance] in France was compulsory for workers and their dependents and voluntary for everyone else; those who could not afford to pay the fixed (nonincome-related) contribution for voluntary coverage relied on locally administered government subsidies. In 2000, France broke the link with employment and extended income-related contributions to all residents, with free access to health insurance for those with very low incomes. In 2009, Germany introduced compulsory universal coverage to stem the growing number of uninsured people, but it maintained the link between statutory coverage and employment. Germany is the only OECD country to allow higher earners to opt out of contributing to the SHI scheme and to be privately covered, instead.

How should problems of risk pooling be handled?

These countries all wish to allow a fair degree of choice for patients between primary care physicians and hospitals. In this setting, how should the health care providers who treat a higher-than-average share of those with more costly health conditions be compensated?

For example, one option is to subsidize high-risk individuals with vouchers or payments that let them purchase health insurance. Another option is to subsidize an insurance company for accepting high-risk patients. For example, in Switzerland people choose among 35 health insurance companies, and those with pre-existing health risks and conditions get a government subsidy so that what they pay for health insurance is the same as what others would pay. Another option is to adjust payments to health care providers in a way that is linked to the kinds of patients they treat--but also seeks to avoid having health care providers game the system by finding ways to receive extra compensation. As Stabile and Thomson note, all of these countries are tweaking how their payment systems respond to risk.

How to pay for performance? 

In a system of universal coverage, there is still a wish to have different providers compete to some extent against each other, so that there can be incentives for innovation and finding ways to hold down costs. There are a variety of ways to do this, and the approaches have been evolving over time.

For example, one basic approach is to pay for the total number of patients treated, and to pay an amount that represents the average cost per patient treated. The hope is that hospitals, knowing they won't get extra pay, will find ways to trim costs. The harsh reality is that many hospitals will find ways to undertreat patients or avoid treating sick patients, as a way of holding costs down. Thus, a fallback approach called "diagnosis-related groups" involves defining different diagnoses, and paying hospitals the average cost for each diagnosis. It's a little harder to game this system, but not much. Hospitals have some power to manipulate who is in which diagnosis-related group in the first place, and to be very available to patients with some conditions and not others.

Thus, there have been experiments with various pay-for-performance health care systems, where payment is determined by a formula that takes into account how many patients were treated, their conditions, their waiting times, the use of preventive care, the prevalence of avoidable infections, and many other factors. For example, UK physicians have a list of 65 indicators of quality of care. Again, the questions of which indicators apply to which patients, and how much the pay of the doctor is adjusted for each indicator, open up possibilities for gaming the system. These systems only become more complex when trying to encourage experimentation with new technologies and methods of care delivery.
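
At bottom, a pay-for-performance payment is a base fee scaled by a weighted score over quality indicators. Here is a minimal sketch of that idea; the indicator names, weights, and scores are invented for illustration and are not taken from the actual UK scheme:

```python
# Hypothetical pay-for-performance adjustment: a base payment scaled by
# a weighted average of quality indicators, each scored on a 0-1 scale.
# Indicator names, weights, and scores are invented for illustration only.

def performance_payment(base_payment, scores, weights):
    """Scale base_payment by the weighted average of indicator scores."""
    total_weight = sum(weights.values())
    weighted_score = sum(scores[k] * weights[k] for k in weights) / total_weight
    return base_payment * weighted_score

weights = {"preventive_care": 0.4, "waiting_times": 0.3, "avoidable_infections": 0.3}
scores = {"preventive_care": 0.9, "waiting_times": 0.7, "avoidable_infections": 0.8}

print(performance_payment(1000.0, scores, weights))  # weighted score 0.81, so 810.0
```

Every design choice here--which indicators to include, how to weight them, which patients they apply to--is a margin on which providers can game the system, which is exactly the difficulty described above.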

These experiments with diagnosis-related groups and pay-for-performance do seem, on average, to improve care and efficiency. But they are a continual work in progress.


My point here is not to defend the U.S. system of health care finance, either as it existed before the Patient Protection and Affordable Care Act of 2010 or since. I've noted that the U.S. health care system had genuine problems, and that the 2010 act is at best a partial and quite disruptive way to address those problems. Instead, the point is that even universal-coverage health care financing systems involve a raft of practical choices and challenges. Setting up a health insurance system that offers the right incentives to patients and providers for cost-effectiveness and innovation is a fundamentally difficult task, and those practical challenges don't disappear just by invoking talismanic phrases like "universal coverage" or "single-payer."

Full disclosure: The Journal of Economic Literature is published by the American Economic Association, which also publishes the Journal of Economic Perspectives, where I have worked as Managing Editor since 1987.

Monday, June 23, 2014

Small Firms and Job Creation: International Evidence

We know that a large proportion of job growth and economic dynamism comes from those young firms and small firms that are in the process of taking off. But how does the role of small firms vary across countries, and during the recent Great Recession? Chiara Criscuolo, Peter N. Gal, and Carlo Menon have put together the evidence in "The Dynamics of Employment Growth: New Evidence from 18 Countries," published as OECD Science, Technology and Industry Policy Papers No. 14 (May 21, 2014).  The same authors also offer a readable short summary of some of the main themes of the report in "DynEmp: New cross-country evidence on the role of young firms in job creation, growth, and innovation" at the Vox website.

As an overall starting point, it's useful to recognize that most firms in high-income economies are small, but by definition, these firms hire relatively few workers, and so most employment happens in medium and large firms. Here's the data from Criscuolo, Gal, and Menon:

The U.S. experience is distinctive in these figures, but perhaps not in the expected way. "Micro" firms with 1-9 workers are a smaller share of all firms in the U.S. than in most other countries (except Norway). However, the share of "small" firms in the U.S. in the range of 10-49 employees is larger than in many other countries. In terms of employment, the U.S. has a smaller share of workers in micro firms than in other countries, and a larger share of employees working for employers with more than 250 employees.

Moreover, the evidence from Criscuolo, Gal, and Menon suggests that while sizes of start-ups are fairly similar across countries, the size of older firms differs quite a bit.  The dark blue bars on the figures show the size of start-ups in terms of employees, while the light blue squares show the average size of firms across countries: top panel is manufacturing, bottom panel is services.

The authors warn that the size of start-ups as measured by employees may not be fully comparable across countries, because in some cases a newly merged firm is apparently counted as a start-up. But that said, the evidence is intriguing because it suggests that the U.S. economy is not especially extraordinary in having larger start-up firms, but it is different in having a friendlier business climate that is more likely to allow some start-ups to grow into larger firms. They write:

As indicated in the figure, differences in the size of start-ups at entry exist but are not striking ... The average size of old firms in the United States – around 80 employees in manufacturing and 40 in services – is by far the largest in the sample. These statistics are even more striking since in some other economies, the average start-ups tend to be larger than in the United States, for example the average size of start-ups in the French manufacturing sector is more than double the average size of United States start-ups, while the situation reverses when considering older businesses: on average the size for an old manufacturing business in France is half the size than in the United States. This evidence confirms previous results of Bartelsman et al. (2005) on employment growth amongst surviving firms in the manufacturing sector of six European countries (France, Finland, West Germany, Portugal, Italy and the United Kingdom) and the United States showing that at the age of seven, US firms are on average 60% larger than their size at entry, while in European countries the figure ranges between 5% and 35%. This suggests that in some countries there are lower entry barriers for new firms; as a consequence, entrants can start off at a smaller size as they have more room for experimentation. This, in turn, might contribute to unleashing the growth prospects of very productive and successful businesses. Also it indicates that in some countries barriers to growth (access to market; burdensome regulation on starting businesses; lack of competition; etc.) might hinder the growth potential of young businesses.
The evidence also suggests that the start-up rate for small businesses (defined as the fraction of all firms that are start-ups) has been declining, a fact which was known for the U.S. but which at least I had not known about for other countries. Perhaps not surprisingly, because young firms and small firms can be fragile, the trend toward lower start-up rates worsened during the Great Recession.
Despite the decline in start-up rates, new and small firms remain very important to job creation. Economists now have data to look at both job creation and job destruction. This figure shows the rate of job creation over the entire group of 18 countries, both for young firms less than five years of age and for older firms more than five years of age, as well as rates of job destruction in these two groups of firms. The line going across the figure shows the average rate of net job creation, with all job creation and destruction taken into account.

You can eyeball this figure in a bunch of ways, but I found it useful to notice that job creation at young firms exceeds job destruction at young firms, while for older firms the reverse is true. You can also see that even during the Great Recession, when job growth turned negative, young firms on net were creating more jobs than they destroyed.

The authors note with appropriate caution that this cross-country evidence is preliminary, and that there can be issues in how official statistics on firm age, entry, and exit are collected across countries--and thus in the extent to which they are truly comparable. That said, the available evidence to me makes a powerful statement that it's not the share of small firms in an economy that matters, nor the number of employees working at small firms, nor the size of start-up firms, nor the rate of start-ups. What matters most is whether the economy provides an environment in which a relatively small share of start-ups will be able to take off in size and become larger firms.

Thursday, June 19, 2014

Free Parking: A Gift to Whom?

A number of US cities at various times have had the same brainstorm: Why not offer free parking for a few days during the holiday shopping season? After all, it will presumably encourage holiday shoppers, and thus please retailers, and even provide some flexibility for city employees at a time when more people would prefer to be off work. Sure, it costs the city some money, but in the holiday season, why not give it a try? Donald Shoup, the guru of the economics of parking, unpacks the issues in "Parking Charity," a short essay in the Spring 2014 issue of Access magazine.

Free parking during the holiday season has been tried in recent years in Berkeley, CA, Bellingham, WA, and Durango, CO. For me, as for many economists, my instant reaction when hearing about "free parking" is along the lines: "If I'm not there first thing in the morning, then I'm not going, because no parking spaces are going to be available later in the day." Shoup points to an article in the Durango Herald on December 23, 2013, pointing out the problems:

As sleigh bells ring and the countdown to Christmas comes to a close, the city has been promoting free downtown parking for holiday shoppers as it replaces 1,200 parking meters. But there is just one small problem: There’s nowhere left to park. ...  
“I get it,” said Alan Cuenca, owner of Put-a-Cork-in-It, 121 E. 10th St. “Idealistically, it was a good idea, but ultimately what has happened is all the employees that work downtown are taking full advantage of the free parking, and not leaving any for people who come downtown to shop.” Cuenca said he has noticed some motorists driving dangerously, pulling aggressive maneuvers to secure their spot before spreading commerce and holiday cheer. “It’s created a frantic frenzy just to find a spot,” he said. . . .
[Durango Business Development Manager Bob] Kunkel said some congestion had been anticipated . . . “We talked about (congestion) as a possible outcome, and I’ve noticed that every space in town is taken, but this enforces the job that parking meters do, and that’s to create turnover,” he said. Turnover, he added, equals one thing: more shoppers for businesses. “That’s why a parking spot is valuable to a merchant,” he said. “It’s turnover, and the more turnover the better.”

Shoup suggests that if cities are feeling charitable during the holiday season, they might just keep the parking meters in place, but announce that any funds or fines collected during the holiday season will go to charity. In Berkeley, this could have meant as much as $50,000 per day for charity. Or to push the point even further, shopping malls and other places that usually offer free parking could insert some temporary parking meters near the door of the mall, with the proceeds to go to charity. He writes:
"If cities donate their meter money to charity during the Christmas season, and if
stores place a few charity meters in their most convenient spots, drivers will begin to see that charging for parking can do some good for the world. Only a Grinch would demand free parking for Christmas." 

Wednesday, June 18, 2014

Trade in Services Begins to Blossom

At least since the writing of David Ricardo in the early 19th century, economists have taught their little lessons about the potential economic benefits of trade with examples that used goods. Ricardo's famous illustration of comparative advantage involved production of wine and cloth. Generations of modern textbooks have written about oil and wheat, cars and computers, and many other pairs of goods. But the notion of international trade as involving physical goods is eroding. Prakash Loungani and Saurabh Mishra lay out some of the patterns in "Not Your Father’s Service Sector," appearing in the June 2014 issue of Finance & Development.

The basic story is that the revolution in information and communications technology has broken the old geographic bonds, where a service needed to be consumed where it was produced. Now, a wide array of services can be carried out in one place and consumed elsewhere. Loungani and Mishra write:

Using telecommunication networks, service products can be transported almost instantly over long distances. The range of service activities that can be digitized and globalized is expanding, from the processing of insurance claims and tax payments to the transcription of medical records to the provision of education via online courses. . . .And advanced market firms are stripping out the more standardized portions of their high value–added activities and relocating them to emerging market economies. Witness mushrooming business consulting and knowledge-processing offices and the boom in e-commerce and online retailers in emerging markets across the Middle East, Brazil, China, India, and Singapore. . . . The good news is that the spread of the service sector—and of service exports—in developing economies has outstripped the oft-cited example of information technology growth in India. Think of the mobile revolution that has transformed financial services in many countries in Africa, the Nigerian film industry, game design in Cambodia, accounting services in Sri Lanka, and human resources–processing firms in Abu Dhabi.
Trade in services has been growing almost three times as fast as trade in goods over the last decade or so. "Though measuring services trade is difficult, it appears that developing economies’ share in world service exports increased from about 14 percent in 1990 to 25 percent in 2011."

Along with the developments in information and communications technology, the other big change is that services are in the process of becoming a bigger share of the overall value even for many products that are sold as physical goods. Loungani and Mishra quote the old Silicon Valley line that “70 percent of hardware is software.” They also point out that, in a truly ugly neologism, some writers have labelled this change the “servitization of manufacturing.”

This argument is often phrased in terms of the "smile" diagram credited to management guru Ram Mudambi. It suggests that companies will set up production chains within and across national borders depending on the amount of value created at different stages of production. Mudambi argues that for modern manufactured goods, much of the value-added for the final product is created in the early stages of R&D, design, and commercialization, and in the late stages of marketing, logistics, and after-sales services.

As global trade shifts to a greater emphasis on services, I suspect that it will begin to change the fundamental ways in which we think about what international trade means.

Global supply chains in services are enabled by information technology, not by ports, planes, trains, and highways. Trade in services may be able to shift very rapidly: after all, if a company is buying a service to be delivered on-line, it may have little allegiance to the geographic location where that service is being performed. As the capabilities of information technology increase, and the speed and clarity and immediacy of online connections deepen, my suspicion is that many types of production will end up being divided and subdivided between locations in ways we have not yet begun to imagine.

And ultimately, I expect that the growth in services trade will reduce pressures for protectionism. Instead of talking about hypothetical trade in hypothetical completed goods--like cars and computers--it will become clear that portions of the value-added are often being created in different places. Pushing for trade protectionism in the name of specific products made in other countries like cars or steel or televisions is one thing, but I'm not sure any similar protectionist movement will form to prevent, say, insurance record-keeping or checking diagnostic X-rays from happening in another country. In addition, countries will need to be wary of placing tariffs or other restrictions on imports, because many imports will be part of a global production chain, and domestic producers will be quick to point out how inhibiting their access to those global connections will injure the domestic economy.

Tuesday, June 17, 2014

Evaluating Low-Carbon Energy Alternatives

Consider five low-carbon energy sources: solar, wind, hydroelectric, nuclear, and natural gas. Which one offers the most cost-effective way to reduce carbon emissions? Charles R. Frank, Jr., ranks natural gas first, and wind and solar last, in "The Net Benefits of Low and No-Carbon Electricity Technologies," written as Working Paper 73 for Global Economy & Development at the Brookings Institution (May 2014). Frank summarizes:

[A]ssuming reductions in carbon emissions are valued at $50 per metric ton and the price of natural gas is $16 per million Btu or less—nuclear, hydro, and natural gas combined cycle have far more net benefits than either wind or solar. This is the case because solar and wind facilities suffer from a very high capacity cost per megawatt, very low capacity factors and low reliability, which result in low avoided emissions and low avoided energy cost per dollar invested.
Frank's approach takes off from here: "The benefits of a new electricity project are its avoided carbon dioxide emissions, avoided energy costs [that is, cost of fuel] and avoided capacity costs." For a summary of Frank's approach, consider this table, which shows the benefits and costs in these three categories. Notice that the benefits and costs are measured per megawatt of electricity generated, and the changes are expressed relative to producing a megawatt less by burning coal.

The key factor in these calculations is that wind and solar run at much lower capacity than do hydro, nuclear, and combined cycle natural gas. ("Combined cycle" means that the power plant "utilizes both a gas turbine and a steam turbine to produce electricity. The waste heat from the gas turbine burning natural gas to produce electricity is utilized to heat water and produce steam for the steam turbine to produce additional electricity.") As a result, you need to build a lot more solar or wind capacity to generate the same amount of electricity. Frank explains:

For example, adjusting U.S. solar and wind capacity factors to take account of lack of reliability, we estimate that it would take 7.30 MW of solar capacity, costing roughly four times as much per MW to produce the same electrical output with the same degree of reliability as a baseload gas combined cycle plant. It requires an investment of approximately $29 million in utility-scale solar capacity to produce the same output with the same reliability as a $1 million investment in gas combined cycle. Reductions in the price of solar photovoltaic panels have reduced costs for utility-scale solar plants, but photovoltaic panels account for only a fraction of the cost of a solar plant. Thus such price reductions are unlikely to make solar power competitive with other electricity technologies without government subsidies.
Wind plants are far more economical in reducing emissions than solar plants, but much less economical than hydro, nuclear and gas combined cycle plants. Wind plants can operate at a capacity factor of 30 percent or more and cost about twice as much per MW to build as a gas combined cycle plant. Taking account of the lack of wind reliability, it takes an investment of approximately $10 million in wind plants to produce the same amount of electricity with the same reliability as a $1 million investment in gas combined cycle plants.
Thus, when you adjust for the ability of the power plant to produce the same amount of electricity, the benefits of fewer carbon emissions, less need for fuel, and benefits from replacing capacity look much lower for solar and wind. Notice that this conclusion holds true even though the carbon emissions and costs of new energy from the solar and wind facilities are estimated as zero.
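The capacity-adjustment arithmetic in Frank's comparison can be sketched in a few lines. This is only a back-of-the-envelope check using the figures quoted above; the normalization of the gas plant to $1 million per MW is an illustrative assumption of mine, not a number from the report.

```python
# Back-of-the-envelope check of Frank's capacity-adjusted comparison.
# Figures come from the text; the $1M-per-MW gas normalization is illustrative.

solar_mw_per_gas_mw = 7.30    # MW of solar needed per MW of gas-equivalent output
solar_cost_multiple = 4.0     # solar costs roughly 4x as much per MW as gas
gas_investment = 1.0          # normalize: $1M buys one gas-equivalent MW

solar_investment = gas_investment * solar_mw_per_gas_mw * solar_cost_multiple
print(f"Solar investment per $1M of gas: ${solar_investment:.1f}M")  # roughly $29M, as in the text

# Wind: about 2x the per-MW cost of gas; the text's $10M equivalent investment
# implies roughly 5 MW of wind per MW of gas-equivalent output.
wind_cost_multiple = 2.0
wind_investment = 10.0
wind_mw_per_gas_mw = wind_investment / wind_cost_multiple
print(f"Implied wind capacity per gas-equivalent MW: {wind_mw_per_gas_mw:.1f} MW")
```

The multiplication makes the mechanism plain: the capacity penalty (7.30 MW instead of 1) and the cost penalty (4x per MW) compound, which is why the investment gap is so much larger than either factor alone.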

For some other recent posts on U.S. energy issues, see "The U.S. Energy Picture" (June 2, 2014), "Comparing Electricity Production Costs: Fossil Fuels, Wind, Solar" (April 24, 2014), and
"Clean Energy: A Global Perspective" (April 26, 2013).

Monday, June 16, 2014

Medical R&D and Actual Health Conditions

It's a simple question: How well does the scientific research on health care issues match up with the health conditions that impose the highest costs on people? James A. Evans, Jae-Mahn Shim, and John P. A. Ioannidis tackle this question in "Attention to Local Health Burden and the Global Disparity of Health Research," which appears in the April 2014 issue of PLOS ONE.

They make a plausible case that health care R&D is not correlated with actual health conditions. Measure the amount of health care R&D effort by the number of research articles published in each area, shown on the right-hand axis. Measure the effect of various health problems by estimates from the World Health Organization of DALYs--disability-adjusted life years--shown on the left-hand axis. The idea of DALYs is not just to measure lives lost, but also to measure health costs that might, for example, result in losing half of one's ability to work in a given year. The figure shows that these measures of actual health conditions and medical R&D do not line up well.

Of course, one wouldn't expect the correlation to be perfect. At any given time, some scientific areas are surely more "ripe" for discoveries than others. Making substantial progress against a disease with a smaller footprint beats making little or no progress against a disease with a larger footprint. But still, a correlation of essentially zero between health conditions and medical research should raise some eyebrows.
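The comparison the authors make, disease burden against research output, boils down to a correlation across conditions. Here is a minimal sketch of that computation; the DALY and publication figures below are hypothetical numbers of my own, chosen only to show the calculation, not data from the paper.

```python
# Hypothetical illustration of the burden-vs-research comparison.
# Both lists are made up; the paper's point is that the real-world
# correlation between these two measures is close to zero.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

dalys_millions = [90, 60, 45, 30, 10]    # disease burden per condition (made up)
papers_thousands = [20, 5, 40, 8, 25]    # research output per condition (made up)

print(f"Pearson correlation: {pearson(dalys_millions, papers_thousands):.2f}")
# For these made-up numbers the correlation comes out near zero.
```

A correlation near zero means that knowing how heavy a condition's burden is tells you essentially nothing about how much research it attracts, which is the pattern the figure displays.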

The authors argue that health care research tends to follow what is geographically common near that health care research, which is a nice polite way of saying that most health care research is in high-income countries and focuses on the health problems most common in high-income countries. Here are some differences in the burden of disease profiles across countries:
For example, consider the relative burden of infectious and malignant neoplasms (cancers) in rich and poor countries. Infectious diseases like diarrheal diseases, malaria and HIV naturally levy a much higher toll in less developed countries, while cancers incur a larger burden in more developed countries with longer life spans. Respiratory infections, perinatal conditions and injuries disproportionately afflict less developed countries, while neuro-psychiatric conditions like depression and schizophrenia and musculoskeletal diseases like arthritis and back pain represent a greater burden in wealthy countries. Note the conditions that most afflict poor populations only lightly affect the rich (e.g., infectious diseases, respiratory infections, perinatal conditions), while diseases that most afflict rich populations also levy a substantial toll on poor ones (e.g., cancers, neuro-psychiatric and musculoskeletal disorders). . . .
[M]alaria, tetanus, Chagas disease, measles, Vitamin A deficiency, lymphatic filariasis, schistosomiasis, and diphtheria most disproportionately afflict poor populations. Other conditions also inflict a greater burden in less developed countries, including fires, violence, drowning, and poisoning, as also glaucoma, peptic ulcers and ear infections.
The authors raise a potentially deeper problem as well: "Not only environmental but the biological context of disease is likely to be different in less developed countries." An understanding of a certain disease based on the populations and environmental conditions of high-income countries may be only partly transferable to the occurrence of that same disease in low-income countries.

It seems clear that medical research is, to a considerable extent, chasing market size, not disease burden. When you consider the number of advertisements for treatments to grow hair or to improve the sex lives of older men, who can doubt it?

Saturday, June 14, 2014

An Admirable College Book-Burning

I'm usually opposed to book-burning, but I've recently discovered that Haverford College, where I graduated in 1982, used to have a tradition of book-burning that seems to me admirable and even enviable. It worked like this: Back in the 1880s, the sophomore class was responsible for choosing their least favorite book. They then planned an elaborate ceremony--marches, costumes, songs--to be held near the end of the academic year. A common highlight of the ceremony was that the worst book was put on trial, with arguments for and against, and when the inevitable conviction was announced, a copy of the book was burned at the stake. Of course, the fact that a book was convicted and executed one year didn't mean that students were able to avoid reading it the next year.

Some details are available at a website run by the Quaker and Special Collections part of the Haverford Library. Here's one of the invitations to the event (the pointing fingers are added to the image by the librarians, and at the website they link to comments and translations):

Here's a picture of students dressing in costume for the event.  

I should admit that this type of costumed ritual, whether it involves book-burning or not, is a transcontinental distance from my personality type. But what I do like about this particular book-burning ritual, of course, is that it is rooted in an intimate and detailed connection with the text--as well as with the demerits of other texts not ultimately worthy of cremation. My guess is that the curriculum for sophomores offered a lot less choice 150 years ago, and so a high proportion of students would all read the same books. Apparently, such book-burning rituals also occurred at other schools at around this time, including Penn, Princeton, Yale, and Rutgers.

The ritual reminds me of a story I once heard from an acquaintance: after completing his Ph.D. dissertation, he saved one copy for a special purpose. He executed each separate page in a distinctive manner, through numerous variations on crumpling, burning, drowning, and tearing. He limited himself to executing one page per day, so that he could prolong the experience.

Hat tip: I heard about this book-burning practice through the Twitter feed of David Wessel (@davidmwessel). David is now Director of the Hutchins Center on Fiscal & Monetary Policy at the Brookings Institution, and before that was at the Wall Street Journal for a number of years. He also preceded me as a Haverford alum, in his case with the class of 1975.

Friday, June 13, 2014

Should U.S. Government Cost-Benefit Analysis Look Outside the U.S.?

When Americans consider the costs and benefits of policy actions, should we also be counting costs and benefits for people in other countries? Ted Gayer and W. Kip Viscusi point out in "Determining the Proper Scope of Climate Change Benefits" (June 3, 2014) that U.S. environmental regulators seem to have started counting global benefits when looking at costs and benefits of U.S. policies with regard to climate change.

Of course, any benefits from reducing climate change have global effect. Gayer and Viscusi refer to one estimate that, for a global reduction in the effects of climate change, the United States would receive 7-10% of the benefits. They point out that if the U.S. benefited in proportion to its share of world GDP, it would receive 23% of the benefits from reducing effects of climate change. Here's a recent example comparing global benefits to domestic costs (footnotes omitted):
More recently, the EPA proposed regulations to limit CO2 from existing power plants.
For this rule, EPA estimated climate benefits amounting to $30 billion in 2030 using a 3 percent discount rate. However, assessing these benefits in a manner that is consistent with the methodology developed by the Working Group, only 7 to 23 percent of these benefits would be domestic benefits. As a result, the domestic benefits amount is only $2.1 billion-$6.9 billion, which is less than the estimated compliance costs for the rule of $7.3 billion. (Note, however, that EPA also claims substantial air-pollution co-benefits for this rule, associated with reductions in particulate matter and ozone.)
In other words, the estimates are that the global benefits of the rule exceed the costs, but the U.S. benefits are much smaller, and may possibly (depending on how other factors are counted) not exceed the costs.
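The domestic-share arithmetic in the EPA example is simple enough to check directly; the figures below are the ones quoted in the passage.

```python
# Checking the EPA example: global benefits scaled to the domestic share.
global_benefits = 30.0                        # $ billions in 2030, 3% discount rate
domestic_share_low, domestic_share_high = 0.07, 0.23
compliance_costs = 7.3                        # $ billions, estimated for the rule

domestic_low = global_benefits * domestic_share_low
domestic_high = global_benefits * domestic_share_high
print(f"Domestic benefits: ${domestic_low:.1f}B to ${domestic_high:.1f}B")
print(f"High-end domestic estimate below costs: {domestic_high < compliance_costs}")
```

Even at the 23% GDP-share assumption, the domestic benefit of $6.9 billion falls short of the $7.3 billion compliance cost, which is the crux of the Gayer-Viscusi argument (setting aside the air-pollution co-benefits the EPA also claims).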

This concept of counting global benefits of U.S. regulatory actions is a clear departure from established practice. U.S. environmental laws and guidance for regulations are quite careful to specify that the cost-benefit calculations should be done for Americans. Explicit guidance for regulatory authorities from President Obama, as from previous presidents, has emphasized that they are to consider costs and benefits for "the American people." As one example, Gayer and Viscusi write:
Subsequently, the U.S. Office of Management and Budget (OMB) developed a guidance document (known as Circular A-4) for regulatory impact analyses that maintained an emphasis on domestic benefits but permitted the reporting of foreign benefits if reported separately: “Your analysis should focus on benefits and costs that accrue to citizens and residents of the United States. Where you choose to evaluate a regulation that is likely to have effects beyond the borders of the United States, these effects should be reported separately.”
But even if this practice of counting global benefits in the cost-benefit calculation is a departure from the norm, should it become standard practice? Or is it the kind of practice that will only be followed when convenient? The practice of counting global effects in U.S. government cost-benefit decisions raises some tricky issues. As Gayer and Viscusi point out, if U.S. government actions are to take benefits to foreign citizens into account on a regular basis, the policy implications could be striking.
It is important to note that granting the GHG [greenhouse gas] benefits to non-citizens equally to the benefits to citizens represents a dramatic shift in policy, and if applied broadly to all policies, would substantially shift the allocation of societal resources. The global perspective would likely shift immigration policy to one of entirely open borders, as the benefits to granting citizenship to poor immigrants from around the world would dominate any costs to current U.S. citizens. It would suggest a shift away from transfers to low-income U.S. citizens towards transfers to much lower-income non-U.S. citizens, elevating policy challenges such as eradicating famine and disease in Africa to the most pressing concerns for U.S. policymakers, trumping most domestic efforts in terms of their impact on social welfare. And a shift in policy towards fully counting the costs and benefits towards citizens of all other countries would suggest a drastic change in defense policy. A shift in policies to foster such efforts, while in many cases worthwhile, would not be consistent with the preferences of the U.S. citizens who are bearing the cost of such programs and whose political support is required to maintain such efforts.

It's easy to imagine other difficult situations that would arise. Imagine that U.S. environmental standards are tightened, and that as a result some U.S. companies decide to locate their manufacturing elsewhere. In this case, the economic gains received in other countries would be counted as a plus for the policy, and presumably could be used to offset any economic costs the policy created in the U.S. economy.

Even if one takes the reasonable position that the U.S. should give weight to benefits and costs incurred in other countries, there is a question of who determines how much weight will be given. Imagine a U.S. law which requires that U.S. companies abroad operate in a way that meets certain standards for lower pollution, worker safety, not bribing public officials, and the like. Now also imagine that the other country would prefer not to have such laws, or to have lower standards. If the government of another country does not favor such laws, does the U.S. claim that people of that country gain anyway?

Economists often work with models that assume a diminishing marginal utility of income: that is, gains or losses to people with low levels of income have more social value than same-sized gains or losses to those with higher levels of income. (For example, this is the standard justification for why society should favor a degree of redistribution: the social cost of transferring a certain amount of income from those with higher incomes is less than the social benefit received by the recipients who have lower incomes.) But if this kind of cost-benefit analysis is to be applied to the world as a whole, costs and benefits in low-income countries will receive a greater weight than costs and benefits of the same size in high-income countries.
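To make the welfare-weight point concrete, here is a small sketch under assumptions of my own (log utility and the two income levels are illustrative, not anything from Gayer and Viscusi): with log utility, the marginal utility of income is inversely proportional to income, so a dollar of benefit to a low-income person carries a correspondingly larger social weight.

```python
# Illustrative welfare weights under log utility u(y) = ln(y), so u'(y) = 1/y.
# The incomes are hypothetical; any concave utility gives the same qualitative result.
income_poor = 1_000      # hypothetical annual income, low-income country ($)
income_rich = 50_000     # hypothetical annual income, high-income country ($)

# The weight on a marginal dollar is u'(y) = 1/y; take the ratio of the two weights.
weight_ratio = (1 / income_poor) / (1 / income_rich)
print(f"A marginal dollar to the poorer person gets {weight_ratio:.0f}x the weight")
```

Under these assumptions the ratio is simply income_rich / income_poor, which is why a global cost-benefit analysis with utility weights would tilt so heavily toward benefits accruing in low-income countries.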

It seems to me that as a practical matter, the current federal rules about evaluating costs and benefits of government regulatory policies are correct: that is, evaluate them first in terms of effects on U.S. citizens, and if there are also effects on the rest of the world, by all means list them--but list the non-U.S. effects separately.

Gayer and Viscusi write: "The question of whose preferences are to be counted in the calculation of net benefits is known as standing. There has been limited academic discussion about economic standing, with the more recent studies suggesting that standing cannot be resolved based on principles of benefit-cost analysis but instead depends on the ethical consensus of society ..." Of course, this is both true and a way for economists to make sure that the buck does not stop with them, but instead is handed off to "the ethical consensus of society."

Epilogue: When thinking about how people regard their own well-being, in comparison to how they regard the well-being of people who live in faraway places, I always remember the comment by Adam Smith in his first book, The Theory of Moral Sentiments (Chapter III, Part III), where he points out that for most people, losing your little finger would feel like a much larger calamity than the death of hundreds of millions of people in a faraway place like China. (Here, I quote from the ever-useful version of the book at the Library of Economics and Liberty website.)

Let us suppose that the great empire of China, with all its myriads of inhabitants, was suddenly swallowed up by an earthquake, and let us consider how a man of humanity in Europe, who had no sort of connexion with that part of the world, would be affected upon receiving intelligence of this dreadful calamity. He would, I imagine, first of all, express very strongly his sorrow for the misfortune of that unhappy people, he would make many melancholy reflections upon the precariousness of human life, and the vanity of all the labours of man, which could thus be annihilated in a moment. He would too, perhaps, if he was a man of speculation, enter into many reasonings concerning the effects which this disaster might produce upon the commerce of Europe, and the trade and business of the world in general. And when all this fine philosophy was over, when all these humane sentiments had been once fairly expressed, he would pursue his business or his pleasure, take his repose or his diversion, with the same ease and tranquillity, as if no such accident had happened. The most frivolous disaster which could befal himself would occasion a more real disturbance. If he was to lose his little finger to-morrow, he would not sleep to-night; but, provided he never saw them, he will snore with the most profound security over the ruin of a hundred millions of his brethren, and the destruction of that immense multitude seems plainly an object less interesting to him, than this paltry misfortune of his own.
Smith goes on to argue that people should and in fact do care about those who live elsewhere. I would add that governments should care about those in other places, too. But whether it's the case of climate change, or some other issue, it's important first to be clear on whether U.S. policies have benefits that exceed costs for the U.S. population, and then to look at the global dimensions.

Thursday, June 12, 2014

Turnover on the Federal Reserve Board of Governors

In August 2013, the Federal Reserve Board of Governors, the Fed's key decision-making authority, had a full complement of seven members. Since then, four members have resigned and one term has expired, leaving President Obama with five slots to fill. Here's the membership from August 2013:

The prospect of having one president fill five of the seven positions on the Board of Governors at about the same time is not how this institution is supposed to work. Each member of the Board of Governors is appointed to a 14-year nonrenewable term, with confirmation required by the U.S. Senate. Because the seven terms are staggered, one expires every two years, a pattern intended to guarantee that the president can make a new appointment only once every two years.

But the reality of terms and appointments to the Fed has one key difference from how it is drawn up on paper. People often leave terms early--that is, in less than 14 years. In that case, the replacement appointee serves out the remainder of that term, and the replacement then can be appointed to their own 14-year term after that. For example, Alan Greenspan first joined the Federal Reserve Board of Governors in 1987 and served out the remainder of the term that expired in 1992, before then being appointed to his own 14-year term which expired in 2006. Ben Bernanke was filling out a partially completed term at the Fed when he was a member from 2002-2005, but was then appointed to his own 14-year term in 2006.
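The staggering arithmetic behind that two-year cadence can be sketched as follows (full terms do expire on January 31 of even-numbered years under the Federal Reserve Act, but the specific run of years below is just illustrative):

```python
# Seven Board of Governors seats, each carrying a 14-year term, staggered
# so that exactly one term expires every two years (14 / 7 = 2).
SEATS = 7
TERM_YEARS = 14

gap = TERM_YEARS // SEATS
assert gap == 2  # one scheduled vacancy every two years

# Illustrative cycle of expiration years for the seven seats, starting
# from an arbitrary base year of 2014.
expirations = [2014 + gap * i for i in range(SEATS)]
print(expirations)  # [2014, 2016, 2018, 2020, 2022, 2024, 2026]

# A governor who fills out someone else's unexpired term may afterward be
# appointed to a full 14-year term of their own, as Greenspan and Bernanke
# were -- which is why actual tenures rarely match the schedule above.
```

Early resignations, of course, are exactly what break this tidy schedule in practice.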

So far, President Obama has appointed Stanley Fischer, who was confirmed by the U.S. Senate for a spot on the Board of Governors on May 28 but needs to be confirmed again to become vice-chair. The president has sent the Senate two other nominations: one for Jerome Powell to begin his own 14-year term, and one for Lael Brainard, who had been undersecretary for international affairs at the U.S. Treasury. Powell and Brainard are wending their way toward confirmation votes in the U.S. Senate. But even when they are confirmed, two of the seven slots on the Board of Governors will be empty. [Added: Powell and Brainard were confirmed by the Senate today, June 12.]

The Federal Reserve has highly competent staff, and it can certainly continue to operate with its current complement of three official members (or four, given that Powell is continuing to serve while waiting to be confirmed to a new term). But the rationale for a seven-member board rotating on a 14-year cycle is that debate and discussion among a cohesive group are important to diagnosing the economy and to making policy decisions in this area, and that there is a value in continuity of experience with a slow and occasional inflow of fresh voices.

The Fed faces a number of tough decisions in the next few years: when and how to back away from policies of quantitative easing and return interest rates to more historically normal levels; how to deal with its new regulatory and consumer protection responsibilities in the aftermath of the Dodd-Frank financial reform legislation; and more broadly how to think about U.S. monetary policy in a globalizing economy where the euro is a soap opera and the U.S. economy is a decreasing share of world output. Although the new arrivals at the Board of Governors are certain to have background experience in at least some aspects of central bank operations already, the only way to accumulate experience on the job is actually to do the job.

There's no particular blame to be placed here for the high level of turnover at the Fed. But it's not how the institution is supposed to work. Here's my pledge: in the extremely unlikely event that I am nominated to fill one of the vacant Board of Governors slots, I will accept only with the intention of serving the full 14-year term.

Wednesday, June 11, 2014

U.S. Science: An Eroding Lead in a Global Economy

Yu Xie asks "Is U.S. Science in Decline?" in the Spring 2014 Issues in Science and Technology. The article is an abbreviated version of the 2013 Henry and Bryna David Lecture that Xie gave at the National Research Council. The talk can be viewed here, and I'll lift a few of the slides from the talk for this post. As an overall perspective, Xie writes:

"Science is now entering a new world order and may have changed forever. In this new world order, U.S. science will remain a leader but not in the unchallenged position of dominance it has held in the past. In the future, there will no longer be one major world center of science but multiple centers. As more scientists in countries such as China and India actively participate in research, the world of science is becoming globalized as a single world community. . . .Just because science is getting better in other countries, this does not mean that it’s getting worse in the United States. One can imagine U.S. science as a racecar driver, leading the pack and for the most part maintaining speed, but anxiously checking the rearview mirror as other cars gain in the background, terrified of being overtaken. Science, however, is not an auto race with a clear finish line, nor does it have only one winner. On the contrary, science has a long history as the collective enterprise of the entire human race. In most areas, scientists around the world have learned from U.S. scientists and vice versa. In some ways, U.S. science may have been too successful for its own good, as its advancements have improved the lives of people in other nations, some of which have become competitors for scientific dominance."

Here's the story of China's rise to global scientific prominence in four graphs. One way to measure scientific output is the number of research papers published. The U.S. still leads the world in this area, but its lead is eroding. The sharp rise of China as a producer of scientific papers is clear (red line) but also notice the considerable rise in scientific papers from India (bottom light blue line). 

Perhaps these scientific papers from China tend to be on relatively unimportant subjects, while the really important work continues to be done in the United States? One way to test this possibility is to look at how often a country's papers are cited by later research. In the past, U.S. research was cited more often than research from other countries, but that lead is also eroding. Scientific papers from the UK and Germany are now cited more often than comparable papers from the US, and papers from countries like China and India have seen their citation rates rise relative to the U.S. level, too.

Underlying these trends in scientific papers are the number of people trained in science. The number of undergraduates getting scientific degrees in China has skyrocketed, and the number of science and engineering Ph.D.s in China now exceeds that in the United States. One can raise a valid concern that the quality of education in China might not in all cases be up to U.S. levels, but the overall trends and patterns remain remarkable. 

Of course, part of what feeds these trends is just China's overall economic growth. One can argue, as Xie does, that it's not so much a matter of the U.S. science and engineering effort being lower, as a matter of rapid catch-up elsewhere in the world. Xie writes: "Census data show that the [U.S.] scientific labor force has increased steadily since the 1960s. In 1960, science and engineering constituted 1.3% of the total labor force of about 66 million. By 2007, it was 3.3% of a much larger labor force of about 146 million." Xie is also quite correct to note that science and prosperity should not be viewed as zero-sum games. 

That said, my own belief is that the U.S. dramatically underinvests in research and development, which is to say that it underinvests in the activity that hires so many scientists. The U.S. R&D effort has been essentially stagnant at about 2.5% of GDP since the mid-1960s. Some economic studies suggest that the optimal level of U.S. R&D would be 2-4 times the current level. One troublesome sign is that the labor market rewards for scientists in the U.S. have not kept pace with those of other high-status professionals. Xie writes: 
[O]ur analysis of earnings using data from the U.S. decennial censuses revealed that scientists’ earnings have grown very slowly, falling further behind those of other high-status professionals such as doctors and lawyers. This unfavorable trend is particularly pronounced for scientists at the doctoral level. . . . [S]cientists who seek academic appointments now face greater challenges. Tenure-track positions are in short supply relative to the number of new scientists with doctoral training seeking such positions. As a result, more and more young scientists are now forced to take temporary postdoctoral appointments before finding permanent jobs. Job prospects are particularly poor in biomedical science . . .

At some deeper level, however, Xie's article doesn't quite come to grips with the fundamental problem. Many people support public funding for scientific research because they believe that it will translate into a stronger U.S. economy, along with better-paying jobs and a rising standard of living over time. This argument has a strong historical foundation: that is, there are many examples in the U.S. and in other countries where science and industry interacted in this way. But in a globalizing economy, the linkage from science to the economy is less clear. If a new scientific discovery leads to a company with a U.S. headquarters and research lab, but production facilities someplace like Mexico, China, Indonesia, or South Africa, the economic payoff to the United States from that discovery becomes harder to pin down. Thus, while I would support a dramatic expansion of R&D efforts, I also believe that the U.S. needs to rethink the institutions and information pipelines that connect scientific discoveries, new and expanding companies, and a productive U.S. workforce.

Tuesday, June 10, 2014

Does Fair Trade Reduce Wages?

I have viewed Fair Trade labeling as a benign if rather limited movement. On one side, the Fair Trade organization certifies that a product like coffee was produced in a way that lived up to a certain code of conduct covering how workers were treated, environmentally friendly practices, and the like. On the other side, consumers in high-income countries who are willing to pay higher prices for goods produced according to such standards can then identify this output. But how much does Fair Trade really help workers in low-income countries? The Fairtrade, Employment and Poverty Reduction in Ethiopia and Uganda (FTEPR) research team, based at SOAS, University of London, set out to gather evidence on this question. The main authors are Christopher Cramer, Deborah Johnston, Carlos Oya and John Sender, but the process of data collection and processing was extensive and required a full-time research officer in the UK, as well as research supervisors in Ethiopia and Uganda, and many other contributors. The total cost of the study ran about 700,000 British pounds. The group has now published "Fairtrade, Employment and Poverty Reduction in Ethiopia and Uganda" (April 2014), and the results will be disheartening for supporters of Fair Trade.

The researchers chose about a dozen local areas in which to collect detailed evidence in rural Ethiopia and Uganda, focusing on coffee and flower producers in Ethiopia and coffee and tea producers in Uganda. They then sought to interview enough people in each of these local areas to obtain a locally representative sample of wages and earnings, looking both at those who worked for a local certified Fair Trade producer and those who didn't. They tried to gather data on every member of entire households, including children, and they returned to these areas over two to three years for follow-up surveys. Some people were surveyed more intensively or by different methods than others, but the overall result is that data were gathered from thousands of local farm workers. As the study authors wrote: "[T]he over-arching research question was whether a poor rural person dependent on access to wage employment for their (and their family’s) survival is better served by employment opportunities in areas where there is a Fairtrade certified producer organization or in areas where there is none."

And after several years of effort, what did the researchers find?
"This research was unable to find any evidence that Fairtrade has made a positive difference to the wages and working conditions of those employed in the production of the commodities produced for Fairtrade certified export in the areas where the research has been conducted. This is the case for ‘smallholder’ crops like coffee – where Fairtrade standards have been based on the erroneous assumption that the vast majority of production is based on family labour – and for ‘hired labour organization’ commodities like the cut flowers produced in factory-style greenhouse conditions in Ethiopia. In some cases, indeed, the data suggest that those employed in areas where there are Fairtrade producer organisations are significantly worse paid, and treated, than those employed for wages in the production of the same commodities in areas without any Fairtrade certified institutions (including in areas characterised by smallholder production). At the very least, this research suggests that Fairtrade organizations need to pay far more attention to the conditions of those extremely poor rural people – especially women and girls – employed in the production of commodities labelled and sold to ‘ethical consumers’ who expect their purchases to improve the lives of the poor. . . .

Another issue of importance both to the Fairtrade literature and more widely is the governance and structure of producer cooperatives. The research finds a high degree of inequality between members of these cooperatives, i.e. the area cultivated with the certified crop (tea and coffee) and the share of the cooperative’s output are very unevenly distributed among members: there are large numbers of members who have tiny plots of land and sell very little to the cooperative, and there is a small number of members who dominate sales to and through the cooperative. One clear implication of this is that the many benefits of being a member of a Fairtrade certified cooperative – tax breaks, direct marketing channels to high-value niche markets, international donor financed subsidies – accrue very unequally. Fairtrade may ‘work’ but it does not quite do what it says on most of the labels: it aggravates rural inequality and at best may do so by supporting the emergence of rural capitalist producers; and it fails to make a difference, on the data collected, to the welfare of the poorest people involved in the Fairtrade chain, i.e. manual agricultural wage workers. . . .
In conclusion, it may be argued, for the areas and producer organisations where this research was conducted, that Fairtrade certification has failed to benefit poor wage workers because it has overlooked their existence, because it has proven institutionally incapable of monitoring effectively the wages and conditions of those working in production conditions (e.g. flowers) where there is acknowledged hired labour, despite the existence of auditing procedures against the Hired Labour Standard, and because it is relatively ineffective compared to other factors that are more likely to influence both productive efficiency and working conditions. ... 
The reasons for Fairtrade’s failure to make a clear positive difference to wages and conditions, or to the amount of work offered, are fairly clear. They have to do – especially in the production of “smallholder” commodities – with what this research suggests has been in the past a wilful denial of the significance of wage labour and an obsessive concentration on producers/employers and their organisations. ... [T]his research suggests that a large number of obstacles remain in implementing improved standards in a way that will benefit rural workers. First and foremost is the need not just for more monitoring and evaluation, but also for better methods. And they have to do – again, especially where Fairtrade certification is awarded to cooperatives – with the espousal of a romantic ideology of how cooperatives operate in poor rural areas.
Of course, it would be unwise to condemn all of Fair Trade based on a single study of about a dozen local areas in two countries. Matt Collin and Theo Talbot at the Center for Global Development take on the task of putting the results in context in a blog post. They point out that the study focused on wage-earning farmworkers, not on farm owners. And although the study tried to compare farmworkers at Fair Trade operations to similar farmworkers at similar non-Fair Trade operations, such comparisons are always difficult: the results show that the Fair Trade workers were paid less, but they do not conclusively show that Fair Trade is what caused the workers to be paid less. Some other studies of Fair Trade have found more positive results on how certification raises the pay of the small number of Fair Trade producers.

But it won't do to dismiss this most recent study, which was done with considerable care and attention. After all, if this study had discovered a big wage boost for Fair Trade agricultural workers in these countries, you can be sure that advocates of Fair Trade would trumpet the results to the skies. Discouraging evidence can't just be tossed aside.