
Thursday, October 31, 2013

TARP, Five Years Later

President George W. Bush signed the Troubled Asset Relief Program into law on October 3, 2008. What has happened with it five years later? For me, the main difficulty in thinking about TARP has been keeping track of all the things it ended up doing. The U.S. Treasury has a useful website that runs through the details of TARP. As one might expect, it's slanted toward the position that TARP was a good idea. But that bias doesn't affect the actual numbers of what the government spent and the extent to which it has been paid back.

TARP was authorized to spend $700 billion. What did it actually do? The money went five places: 1) $68 billion to the insurance company AIG; 2) $80 billion to the auto companies; 3) $245 billion to bank investment programs; 4) $27 billion to credit market programs; and 5) $46 billion to housing programs. The other $235 billion in spending authorization was cancelled. Let's unpack these categories and see what happened.

This chart shows Treasury's total commitments under TARP in billions of dollars.
AIG

The government justification for investing in AIG looks like this: "At the height of the financial crisis in September 2008, American International Group (AIG) was on the brink of failure. At the time, AIG was the largest provider of conventional insurance in the world. Millions depended on it for their life savings and it had a huge presence in many critical financial markets, including municipal bonds. AIG’s failure would have been devastating to global financial markets and the stability of the broader economy. Therefore, the Federal Reserve and Treasury acted to prevent AIG’s disorderly failure." The Treasury used its $68 billion to buy AIG stock and it has sold off that stock since then, finishing in December 2012. The Treasury ended up making a gain of $5 billion. The Federal Reserve Bank of New York also loaned AIG $112 billion, and has ended up making a gain of $7 billion as that loan has been repaid. These gains end up in the U.S. Treasury.

The auto industry

Treasury reports: "The Automotive Industry Financing Program (AIFP) was created to prevent the collapse of the U.S. auto industry, which would have posed a significant risk to financial market stability, threatened the overall economy, and resulted in the loss of one million U.S. jobs. Treasury invested approximately $80 billion in the auto industry through its Automotive Industry Financing Program. As of September 30, it has recovered $53.3 billion or more than 66% of the funds disbursed through the AIFP program." The Treasury exited its investment in Chrysler in 2011, and the government now owns only about 7% of GM stock, down from 60% at the peak. The website reports that "the auto industry rescue may end up as a net cost to the government." It's useful to remember that the TARP money wasn't all that the government did. As I discuss here, the government also stage-managed an accelerated bankruptcy process that reorganized the ownership of Chrysler, in a way that did much less for the bondholders than a standard bankruptcy process, and more for the employees. The firms were also handed tax breaks, not usually available to bankrupt firms, worth tens of billions of dollars.

Bank investment programs

This was actually five separate sub-programs; for example, it includes the "stress tests" under which bank regulators re-examined the balance sheets of many banks under a variety of different scenarios, and pushed some of them to raise more outside private capital as a result. But in terms of spending, the biggest element was the Capital Purchase Program that provided investments and loans to about 700 banks. "Treasury has recovered almost $225 billion from CPP through repayments, dividends, interest, and other income – compared to the $204.9 billion initially invested. Treasury has recovered more than 100 percent of that amount through repayments, dividends, interest, and other income. Treasury continues to recover additional funds."

Credit Market Programs

"Three programs were launched: the Public-Private Investment Program (PPIP), the SBA 7(a) Securities Purchase Program, and the Term Asset‐Backed Securities Loan Facility (TALF). Although the specific goals and implementation methods of each program differed, the overall goal of these three programs was the same—to restart the flow of credit to meet the critical needs of small businesses and consumers.​" These three programs are no longer making loans and are in the process of being wound up. They have either repaid the government money invested, or are on their way to doing so.

Housing Programs

There are two main programs here. The Making Home Affordable Program is aimed at assisting households that are faced with foreclosure on their home mortgage. The website reports that 1.2 million households have negotiated lower mortgage payments (usually with some write-down of the principal) and another 200,000 have arranged to sell their homes for less than the mortgage. The Hardest Hit Fund is aimed at mortgage borrowers in 18 states where the fall in housing prices was especially severe and/or the unemployment rate was exceptionally high. It allocated funds to state-based programs. As of the second quarter of 2013, the summary report is that it has spent $1.6 billion to assist about 126,000 households nationally. Here in late 2013, these programs are still taking applicants, which seems misguided to me: they were badly needed during the past few years, and yet did relatively little then.

Overall, the Treasury reports that TARP is likely to end up costing the federal government about $40 billion. As this review should help to clarify, most of that is for the auto company bailout, and the rest is the housing and credit market programs. The bank investment programs and the AIG investment have ended up making money for the government. This outcome wasn't unexpected. The economic theory behind a lender-of-last-resort program is well-established. In the middle of a financial panic, as in fall 2008, financial markets can lock up. In that setting, having a deep-pockets government agency like the Treasury or the Federal Reserve provide capital can restart the financial markets. Even better, when the government provides financial liquidity during a crisis, it can then often make money when it cashes out its financial stake after the crisis has passed, when the economy has improved.

Of course, the fact that most of the TARP spending has ended up being repaid doesn't settle the issue of whether it was a good idea. Here are some outstanding issues:

1) The bailouts of 2008 raise an issue of whether systemically important financial institutions can reasonably expect future bailouts if they get in trouble. If so, they may engage in overly risky behavior in the future. The government can make many promises that it won't bail out large institutions in the future, but if push comes to shove, will those promises be kept?

2) We don't get to replay history to find out how alternative policies would have worked. What if AIG, the car companies, or some of the banks had been required to reorganize through a normal bankruptcy process?

3) Many firms went broke in the Great Recession. As a matter of fairness, what makes the firms that TARP helped so special?

4) TARP was intertwined with separate but related actions: by the Federal Reserve, by bank regulators, through the bankruptcy processes for the car companies, through various tax law changes, and so on. As a result, it's hard or impossible to evaluate the effect of TARP in a vacuum.

5) When you step in during a crisis, there is some luck involved if your intervention works out well.


Tuesday, October 29, 2013

The Global Sugar Market and US Sugar Consumption

Here's the pattern of American daily calorie consumption since the early 20th century. I suspect that some of the ups and downs from 1900 up to about 1980 are due to issues with data and measurement, as well as social trends that came and went. But there's no mistaking the rise in the last few decades.

From an international perspective, Americans consume a large amount of sugar (whether from cane or beets) and related high-calorie sweeteners like high-fructose corn syrup.



The Credit Suisse Research Institute ponders the economic background and health consequences of these patterns in "Sugar Consumption at a Crossroads," a report released in September 2013. A sizeable share of the report looks at the evidence linking sugar consumption in various forms to obesity, the health consequences of obesity, and possible policy options like a tax on sugary beverages.  Here, I'll stick to looking at patterns of consumption and production.

The report emphasizes that the growth in U.S. consumption of calories can be linked to sweetened beverages: "Sweetened beverages are now delivering an increasingly greater percentage of the sugars that are ingested in an average diet. Between 1955 and 2000, the consumption of soft drinks in the USA increased from about ten gallons/person to 54 gallons/person and then declined by around 20% until 2012, but with an equivalent increase in the consumption of fruit juices and bottled water. According to the USDA, the beverage industry now accounts for 31% of total sweetener deliveries and we estimate that 43% of added sugars in a normal US diet come from sweetened beverages. A similar stabilizing trend can be seen in most other developed markets, while consumption is still on the rise in emerging markets."

U.S. consumption of sugar and high-fructose corn syrup happens in the context of a global market. "The global sweetener market is estimated to be around 190 million tons of “white sugar equivalent,” and is unsurprisingly dominated by sugar. Each of the major groups (high-intensity/artificial sweeteners, sugar, and high-fructose corn syrup) has been growing at a similar rate of circa 2% per annum, though the most recent numbers have natural high-intensity sweeteners growing rather faster."

The actual price that consumers pay for sugar is highly influenced by government policies: for example, the U.S. acts to limit imports of sugar as part of how it assists producers of beet sugar and high-fructose corn syrup.

"Many countries have regimes that protect the local production through various mechanisms including support prices, import restrictions, production quota, etc. Examples include the US Farm Act, the European Union Sugar Regime, or the Chinese government’s controls on imports. Put simply, the complexity of the infrastructure surrounding sugar is significant. Thus, the traded market (or the “world market”) is only 55–60 million tons, and is sometimes referred to as the residual market (where the sugar that is not a part of the special agreements is bought and sold). The largest producer of sugar by some distance is Brazil (22% of world production), followed by India (15%), China (8%) and Thailand (6%). However, India and China consume all they produce, so if we look at the supply to the “world market” instead, this is dominated by Brazil (supplies typically half the “world market”) and Thailand (10%–15%). 
The “residual” nature of the world market has made the “world price” very volatile and sensitive to movements in global supply versus demand. ... Brazil’s cost of production is generally thought to be USD 18 cents/lb. and, in the long term, this should be the floor of the market. As we mentioned earlier, most of the markets are protected/controlled, which means the local price bears little significance to the world price – and trades at a significant premium (see Figure 20). These regimes have been in place for many years and are designed to protect the local farmers from the vagaries of the world price and guarantee them an economic return. ...
Finally, the market for HFCS [high fructose corn syrup] is similar in size to that of HIS, but is concentrated in three major markets: USA, China and Japan. The principal requirement for HFCS to flourish is government support. HFCS can only truly become established where it is allowed and where there is enough supply of starch. ..."

Here's a figure showing the retail price of sugar. It can't be compared directly to Brazil's cost of production of 18 cents per pound, but the comparison is nonetheless interesting.

If one is concerned about the effects of high levels of sugar consumption on public health, what might be done? One option is to have a tax on foods that are high in sugar, which presumably would need to include not just sugared soft drinks, but also many kinds of drinks flavored with sugar or fruit juice, and other high-sugar foods. The Credit Suisse report leans in favor of such taxes; my own somewhat more pessimistic review of the workability and desirability of such taxes is here.

Another option is technological: that is, find a way to sweeten foods that doesn't have (many) calories. The difficulty here is that while most societies have no regulations that limit loading up food and drink with sugar, they often have strict regulations about using alternative sweeteners. The Credit Suisse report writes: "The market for high-intensity sweeteners, both natural and artificial, is completely open, but the products are the most heavily regulated among sweeteners. These regulations vary from country to country. A high-intensity sweetener cleared in one country may be banned in another. The artificial sweetener industry’s profile on health is somewhat colored and many still see some of these products in a bad light. This is not the case for natural HIS, the largest portion of which is made of polyols (sugar alcohols)." At least so far, many people have more fear of how artificial sweeteners may affect their health than of how high sugar intake might affect their health. Also, many of the artificial sweeteners don't really taste like sugar.

A final option is some sort of social attitude adjustment. Just as shifts in social attitudes brought us the explosion in soft drink consumption, and more recently the explosion in bottled water consumption, a future change could mean lower consumption of added sugars. The Credit Suisse report has some suggestive if not dispositive evidence here. If one looks across regions of the United States, those regions with a higher average level of education and higher average levels of income are more likely to consume diet soda. Of course, this kind of correlation doesn't prove a cause and effect. But it does suggest that there are different cultural norms even within the United States about drinking sugared soda, and thus some hope for a healthier set of sugar-consumption norms to emerge.




Monday, October 28, 2013

Richard Thaler on Behavioral Economics

Douglas Clement has a lively and incisive interview with Richard Thaler in the September 2013 issue of The Region magazine, which is published by the Federal Reserve Bank of Minneapolis. Thaler is well-known as one of the leading figures in "behavioral" economics, which involves thinking about how common psychological factors may cause people to act in ways that differ from what is predicted in a basic economic model of people purposefully pursuing their own self-interest. Here are a few of Thaler's comments that jumped out at me.

Getting started in thinking about behavioral economics
[L]ater I would call them anomalies, but for a while I just called them “the list.” And I started writing a list of funny behaviors on my blackboard, such as paying attention to sunk costs. I mean, at first they were just stories. Like, a buddy of mine and I were given tickets to a basketball game. Then there’s a blizzard and we don’t go. But he says, “If we had paid for the tickets, we would have gone.” Another thing on the list was a story about having a group of fellow grad students over for dinner and putting out a large bowl of cashew nuts. We started devouring them. After a while, I hid the bowl in the kitchen and everyone thanked me. But as econ grad students, of course, we immediately started asking why we were happy about having a choice removed. For years, some of my friends referred to my new research interests as “cashew theory.”

Behavioral economics and finance
The biggest surprise about behavioral economics, I think, looking back on it all, is that the subfield where behavioral has had the biggest impact is finance. If you had asked me in 1980 to say which field do you think you have your best shot at affecting, finance would have been the least likely, essentially because of the arguments that [Gary] Becker’s making: The stakes are really high, and you don’t survive very long if you’re a trader who loses money. But for me, of course, that was exactly the attraction of studying finance ...
The random walk and the rationality assumption
Bob Shiller has this great line in one of his early papers to the effect that if you see a random walk, concluding from that that prices are rational is the greatest error in the history of economic thought. Why? Because it could be a drunken walk. A drunken man will have a random walk and it’s not rational.

No free lunch and price is right?
I separate these two aspects of the efficient markets argument: Whether you can get rich (the “no-free lunch” part) and whether the “price is right.” It’s hard to get rich because even though I thought Scottsdale real estate was overpriced, there was no way to short it. Even if there were a way—[Robert] Shiller tried to create markets in that, so that you could have shorted it—you might have gone broke before you were right. …  I think it’s hard to beat the market. Nobody thinks it’s easy, and so that part of the hypothesis is truer, but if we look at what happened to Nasdaq in 2000, and then the recent crash, well, of course, we’ve never gotten back to 5,000. So it’s very hard to accept that markets always get prices right. … If anything, the Internet has wildly exceeded our expectations, but the Nasdaq has still not gotten close to where it was in 2000. So I think it’s pretty obvious that market was overheated, just like the Las Vegas and Phoenix real estate markets were, but you couldn’t say necessarily when it was going to end.
A story of a non-price behavioral economics intervention in the United Kingdom

Let me tell you another story about the U.K. We had a meeting with the minister in charge of a program to encourage people to insulate their attics, which they call “lofts”—I had to learn that. Now, any rational economic agent will have already insulated their attic because the payback is about one year. It’s a no-brainer. But a third of the attics there are uninsulated. The government had a program to subsidize insulation and the takeup was only 1 percent.
The ministry comes to us and says, “We have this program, but no one’s using it.” They came to us because they had first gone to the PM or whomever and said, “We need to increase the subsidy.” You know, economists have one tool, a hammer, and so they hammer. You want to get people to do something? Change the price. Based on theory, that’s the only advice economists can give. ...
So we sent some team members to talk to homeowners with uninsulated attics. “How come you don’t have insulation in your attic?” They answered, “You know how much stuff we have up there!?” So, we got one of the retailers, their equivalent of Home Depot, that are actually doing the [insulation] work, to offer a program at cost. They charge people, say, $300; they send two people who bring all the stuff out of the attic. They help the homeowners sort it into three piles: throw away, give to charity, put back in the attic. And while they’re doing this, the other guys are putting in the insulation. You know what happened? Up to a 500 percent increase. So, that’s my other mantra. If you want to get somebody to do something, make it easy. 

Friday, October 25, 2013

The New Orleans Economy Since Katrina

On August 29, 2005, Hurricane Katrina slammed into the Gulf Coast area, causing $125 billion in property damage and more than 1,800 deaths. According to U.S. Census Bureau estimates, in July 2005 the population of the New Orleans-Metairie-Kenner metropolitan area was a shade over 1.3 million, essentially unchanged since 2000. By the July 2006 count, it had dropped to 978,000. The population has recovered slowly since then, reaching nearly 1.2 million by July 2009, but it remains below the pre-storm level. What about the economy of New Orleans? As I'll try to explain, it's a story with twists and turns, but perhaps without any clear policy implication.

Charles Davidson has written "The Big Busy: A radical reset after the Katrina catastrophe is transforming the economy of New Orleans," in the Third Quarter 2013 issue of EconSouth, published by the Federal Reserve Bank of Atlanta. He draws to some extent on data from the Greater New Orleans Community Data Center and its report "The New Orleans Index at Eight." For some of the good news, here's a figure showing job growth in various metro areas since 2008. New Orleans lost fewer jobs than the rest of the country. The group of "aspirational metros" is seven Southern metro areas with populations of more than 1 million that have seen at least 10% job growth since 2000: Orlando, Nashville, Raleigh, Charlotte, Austin, Houston, and San Antonio. Just the fact that job growth in those cities since 2008 is closely comparable to that in New Orleans is a big change.

Here's a figure showing wage levels in New Orleans compared to the U.S. level. Notice that a substantial gap opened up between wages in the New Orleans metro in the 1980s and 1990s, but after 2005 that gap closed somewhat. Of course, there's a difficult question here. A substantial share of those who left New Orleans and didn't return after Katrina were those with lower incomes, whose presence had been holding down average wages. Right after the storm, the area received considerable funds from insurance companies and government programs for clean-up and construction. Thus, are these higher wages an artifact of altered population composition and demographics, along with a short-term gusher of rebuilding money? Or a phenomenon that might have longer-term sustainability?

The argument that the gains in wages might be sustainable is based in part on the fact that New Orleans has experienced a wave of entrepreneurial behavior. This figure shows the number of individuals per 100,000 starting up businesses. New Orleans has risen from well below the U.S. average to well above it.

Much of Davidson's article is focused on how the storm transformed attitudes and governance in New Orleans. He quotes Michael Hecht, president of the largest economic development agency in the region, in this way: “New Orleans was like a morbidly obese person who finally had a heart attack that was strong enough to scare them, but not strong enough to kill them ... Katrina laid bare that this was a city and a region that had been in slow, decadent decline, probably since the ’60s ...”

The economic analysis of the city of New Orleans looks back before Hurricane Katrina, and sees a city in long-term decline. Jacob Vigdor laid out the issues in "The Economic Aftermath of Hurricane Katrina," which appeared in the Fall 2008 issue of the Journal of Economic Perspectives. (Full disclosure: I've been the Managing Editor of JEP since the first issue in 1987.) Vigdor points out that many cities which have experienced disasters revert over time to their previous pattern of population: for example, this graph shows population levels before and after for some cities that experienced heavy bombing during World War II, along with San Francisco and the 1906 earthquake, and Chicago and its 1871 fire.

So what was the long-term trend for New Orleans before Hurricane Katrina? As Hecht says, it was "slow, decadent decline." Here's a figure from Vigdor showing the population of New Orleans as a share of U.S. population. It's no surprise that the New Orleans share of the U.S. population falls during the 19th century, at a time when the U.S. was adding states. What is perhaps more interesting is that the New Orleans share of the U.S. population was largely unchanged from about 1880 to 1950, when it began to decline. Vigdor argues that U.S. cities since 1950 have experienced two transforming changes: the rise of suburbs and a reliance on "knowledge-based" industries as the basis for growth--and that New Orleans essentially missed both of these transformations. The population of the city of New Orleans itself had fallen from more than 600,000 in 1960 to less than 400,000 in 2005, before Katrina hit.


Vigdor also emphasizes a recent lesson in economic thinking about cities in decline: When a city is declining, low-quality housing can become quite inexpensive. The result is that those with low incomes find it hard to leave the city, because although their prospects for earning income aren't good, their cost of housing is low, and moving to some other area with a higher cost of housing seems like a high-risk choice. But Hurricane Katrina blasted the New Orleans housing stock. Vigdor wrote: "The 2000 Census counted just over 215,000 housing units in the city of New Orleans. By 2006, the estimated number of units had declined to 106,000, of which more than 32,000 were vacant. Although these vacant units appeared intact from the exterior, most of them undoubtedly required significant interior rehabilitation prior to occupation. Hurricane Katrina thus rendered two-thirds of the city’s housing stock uninhabitable, at least in the short term." To be sure, a substantial amount of this housing stock was eventually refurbished. But some of the cycle of low-income people living in low-cost housing was diminished, partly because a number of those low-income people ended up relocating to other cities, and partly because much of the refurbished housing was no longer as inexpensive as it had previously been.

Another piece of evidence is from another report by Vicki Mack and Elaine Ortiz of the Greater New Orleans Community Data Center, "Who lives in New Orleans and the metro area now? Based on 2012 U.S. Census Bureau data." Here is some of their data on education. The first bar, the parish of Orleans, is the same as the city of New Orleans. The next two bars show a couple of nearby parishes,  then the metro area around New Orleans. Notice that for the city of New Orleans in particular, the share of those with only a high school diploma fell sharply and the share of those with at least a bachelor's degree rose sharply. The main industry for New Orleans is tourism. But for a city hoping to build a stronger future in knowledge-based service industries, this rise in educational attainment is encouraging.


New Orleans as a city and a metropolitan area has a chance to move into the future with a different kind of momentum: that is, as a city in which tourism and dilapidation play a relatively less important role, and entrepreneurs with higher education levels living in a refurbished housing stock play a greater role. But of course, these changes have arrived only after the gut-wrenching destruction and deaths from Hurricane Katrina, and the forces it created for resettlement and rebuilding. There's no question that the economy, schools and governance of New Orleans circa mid-2005 were stagnating in a way that wasn't providing opportunity for many of its residents--but what a dreadful and disruptive way for some welcome changes to arrive.


Wednesday, October 23, 2013

The Disability-Industrial Complex

Americans on average are healthier and living longer. U.S. jobs on average are moving away from hard physical labor and toward service jobs and brainwork. And yet, the percentage of Americans who are officially too disabled to work has been rising for a quarter-century. Tad DeHaven lays out some of the trends in "The Rising Cost of Disability Insurance," written as Cato Institute Policy Analysis #733 (August 6, 2013).

Here's a figure showing the share of those receiving federal disability payments per 1,000 U.S. workers. Back in the mid-1980s, there were about 30 recipients of disability for every 1,000 workers; now, it's up to 75 recipients of disability for every 1,000 workers.

As you might guess, this sharp rise in disability has less to do with a sharp drop in levels of physical health, and more to do with a sharp rise in people who are disabled with "nonexertional conditions," like someone who has a high level of depression or anxiety, or who experiences pain, often back pain, from a "musculoskeletal condition." These are real conditions, and they are conditions where it can be hard to verify their severity. DeHaven points out that the National Academy of Medicine estimates that 116 million Americans suffer from some form of chronic pain, and the National Institute of Mental Health estimates that 61 million Americans suffer from some mental disease. Of course, most people with these conditions manage to continue their day-to-day functioning, including holding a job.

The Social Security Disability Insurance program is funded by a payroll tax of 1.8 percent of income up to a certain level, which is $113,700 this year. With the rising number of recipients, it's no wonder that the SSDI trust fund is even now dropping below the minimum level for financial solvency and will probably be empty in a few years, according to the annual report of the system's actuaries.  (The top line shows the path of the Social Security trust fund, with three scenarios for the future; the bottom line shows the path of the disability insurance trust fund, again with three scenarios.)


There are essentially two broad approaches for fixing disability insurance, and it would behoove us to choose both of them. The first approach is to encourage people to treat disability as a short-term event, and to allow disability to be partial, thus allowing for some work. This might be accomplished through some combination of program restructuring and financial carrots. For example, one approach along these lines would have employers purchase private-sector disability insurance that could last up to a couple of years. During this time, the private-sector insurance company would have an incentive to find ways to help the person get back to work, at least part-time. If such efforts didn't succeed after a couple of years, only then would the person migrate over to the federal disability insurance program. Here's a discussion of such a proposal from David Autor and Mark Duggan. Another approach would tinker with letting those who are receiving disability benefits continue to receive some of those benefits even if they find work. One never wants to set up a situation in which earning $1 means losing $1 of government benefits, because the incentives are much the same as a 100% marginal tax rate. Instead, disability benefits could be phased down so that for every $1 earned, the person might lose only 25 cents in disability benefits. Here's an example of such a proposal from Jagadeesh Gokhale.
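A stylized numerical example helps show why the phase-out rate matters. The benefit level and earnings figures below are invented for illustration, not taken from the Autor-Duggan or Gokhale proposals; the only point is the contrast between a dollar-for-dollar offset and a 25-cent phase-out.

```python
# Stylized illustration only: the $12,000 benefit and the earnings levels are
# invented numbers, not figures from any actual proposal.
def total_income(earnings, base_benefit=12_000, phase_out_rate=1.0):
    """Earnings plus whatever disability benefit survives the phase-out."""
    benefit = max(0.0, base_benefit - phase_out_rate * earnings)
    return earnings + benefit

for earnings in (0, 5_000, 10_000, 15_000):
    cliff = total_income(earnings, phase_out_rate=1.0)     # lose $1 of benefits per $1 earned
    gradual = total_income(earnings, phase_out_rate=0.25)  # lose 25 cents per $1 earned
    print(f"earn ${earnings:>6,}:  dollar-for-dollar ${cliff:>7,.0f}   25-cent phase-out ${gradual:>7,.0f}")
```

Under the dollar-for-dollar offset, total income is stuck at $12,000 until earnings exceed the benefit, which is the 100% marginal tax rate problem; under the 25-cent phase-out, every dollar of earnings raises total income by 75 cents.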

The other approach is to get tougher about demanding that those who receive disability insurance payments are really and truly disabled. DeHaven points to a number of troubling anecdotes that suggest the possible scale of the problem. For example, anyone denied a disability claim can appeal the decision at five levels, represented by lawyers working for contingency fees. Fees paid to lawyers as a part of disability appeals tripled from $425 million in 2001 to $1.4 billion in 2011. Some judges find that almost 100% of claims of back pain should receive disability status, while others find that fewer than 20% of back pain claims deserve disability status. During a recent four-year period a single judge in Pennsylvania overruled the Social Security Administration on 2,285 cases, and made these 2,285 people eligible for disability insurance payments. The decisions of that single judge have led to $2 billion in disability insurance payments. There are private-sector consulting companies that are hired by states and paid several thousand dollars for every person they manage to shift from a cash welfare program, which is partly funded by the state, over to disability insurance, which is funded by the federal government. One news story (for National Public Radio, no less) referred to all this as the "Disability-Industrial Complex."

For some people, disability insurance has become a way of exiting the labor force. It's hard for me to get into high dudgeon over these people, because I suspect that many of them have at least mild disabilities and also lousy job prospects, especially in the last few years. But the hard fact is that the disability insurance program has limited funds, and is headed for bankruptcy. If it pays those funds to a substantial number of people who are only marginally disabled and could be working, it cannot pay higher benefits to the more severely disabled.

Tuesday, October 22, 2013

Value of a Statistical Life? $9.1 Million

The costs of regulations can be measured by the money that must be spent for compliance. But many of the benefits of regulation are measured by lives saved or injuries avoided. Thus, comparing costs and benefits requires putting some kind of a monetary value on the reduction of risks to life and limb. For example, the US Department of Transportation estimates the "value of a statistical life" at $9.1 million in 2012. In a memo called "Guidance on Treatment of the Economic Value of a Statistical Life in the U.S. Department of Transportation Analyses," it explains how this number was reached. I'll run through the DoT estimates, and then raise some of the broader issues as discussed in a recent paper by Cass Sunstein called "The value of a statistical life: some clarifications and puzzles," which appeared in a recent issue of the Journal of Benefit Cost Analysis (4:2, pp. 237-261).

There are essentially two ways to estimate what value people place on a reduction in risk. Revealed preference studies look at how people react to different combinations of risk and price. For example, one can look at what workers are paid in jobs that involve a greater risk of death or injury, or at what people are willing to pay for safety equipment that reduces risks.  As DoT explains: "Most regulatory actions involve the reduction of risks of low probability (as in, for example, a one-in-10,000 annual chance of dying in an automobile crash).  For these low-probability risks, we shall assume that the willingness to pay to avoid the risk of a fatal injury increases proportionately with growing risk.  That is, when an individual is willing to pay $1,000 to reduce the annual risk of death by one in 10,000, she is said to have a VSL of $10 million.  The assumption of a linear relationship between risk and willingness to pay therefore implies that she would be willing to pay $2,000 to reduce risk by two in 10,000 or $5,000 to reduce risk by five in 10,000.   The assumption of a linear relationship between risk and willingness to pay (WTP) breaks down when the annual WTP becomes a substantial portion of annual income, so the assumption of a constant VSL is not appropriate for substantially larger risks."
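Worked out in a couple of lines, here is a minimal sketch of the linearity assumption described in the memo, nothing more: dividing willingness to pay by the size of the risk reduction gives the implied VSL, so both examples in the quotation imply the same $10 million figure.

```python
# Implied value of a statistical life under the linear WTP assumption described above.
def implied_vsl(willingness_to_pay, risk_reduction):
    return willingness_to_pay / risk_reduction

print(implied_vsl(1_000, 1 / 10_000))   # 10,000,000.0
print(implied_vsl(2_000, 2 / 10_000))   # 10,000,000.0 -- same VSL for a larger risk reduction
```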

As the report also points out, while the result of this calculation is called the "value of a statistical life," it's not actually putting value on a life, but on a reduction in risk. "What is involved is not the valuation of life as such, but the valuation of reductions in risks."

The alternative method is called a "stated preference" approach, in which people work their way through a sophisticated survey tool that informs them about various combinations of risks and costs, and seeks to elicit their preferences. This method is sometimes called "contingent valuation," and it's a controversial subject as to whether the values that are inferred from surveys can capture "real" preferences. (For a three-paper symposium on the use of contingent valuation techniques in estimating environmental damages, see the Fall 2012 issue of the Journal of Economic Perspectives.) When it comes to estimating value of reductions in risk, the DoT dismisses this method, on these grounds: "Despite procedural safeguards, however, SP [stated preference] studies have not proven consistently successful in estimating measures of WTP [willingness-to-pay] that increase proportionally with greater risks."

The DoT gets its value of $9.1 million with a literature review: specifically, it looks at nine recent studies that consider risk and pay in various occupations and that seem methodologically sound, and takes the average value from those studies. DoT also looks at costs of health or injury, as measured by research on what are called "quality-adjusted life years," which sets up criteria for categorizing the severity of an injury. They set up a scale with six levels of severity of injury: minor, which is worth .003 of the value of a statistical life; moderate, .047 of a VSL; serious, .105; severe, .266; critical, .593; and unsurvivable, 1.0.
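Translating those fractions into dollar figures is just multiplication against the $9.1 million VSL; the sketch below does that and nothing else, ignoring whatever further adjustments DoT makes in practice.

```python
# Monetized injury values implied by the severity fractions quoted above.
VSL = 9_100_000
severity_fractions = {
    "minor": 0.003, "moderate": 0.047, "serious": 0.105,
    "severe": 0.266, "critical": 0.593, "unsurvivable": 1.0,
}
for level, fraction in severity_fractions.items():
    print(f"{level:12s} {fraction:.3f} x VSL = ${fraction * VSL:,.0f}")   # e.g., minor = $27,300
```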

The DoT memo lays out where its numbers come from, but quite appropriately, it doesn't venture into a broader discussion of using the value of a statistical life in the first place. Cass Sunstein was for several years Administrator of the Office of Information and Regulatory Affairs in the Obama White House, so his views are of more than ordinary interest. While he supports using the value of a statistical life, he is also clear and thoughtful about a number of the tricky issues involved. Here are some of the tough questions raised by his article.

1) If the benefits of a regulation outweigh the costs, why is the regulation even necessary? Presumably, the answer is that there is some reason that buyers and sellers in the market cannot coordinate on an appropriate safety outcome. Potential reasons might include that people lack information or a range of choice between safety and price options.

2) Should the value of a statistical life be different across people? For example, perhaps reducing the risks faced by a child who lacks capacity to weigh and measure risk should be weighted more heavily than risks faced by an adult. Or perhaps reducing the risks for a young adult, with a long life expectancy, should carry a higher value than reducing the risks faced by an elderly person. This point seems logically sound, but administratively and politically difficult.

3) If the reduction in risk is based on willingness to pay, then don't those with low incomes end up with less protection than those with higher incomes? Sunstein faces up to this point and accepts it. As he writes: "The reason is not that poor people are less valuable than rich people. It is that no one, rich or poor, should be forced to pay more than she is willing to pay for the reduction of risks." Guaranteeing low-income people a level of safety where the costs are higher than what they wish to pay ultimately doesn't make sense. "Government does not require people to buy Volvos, even if Volvos would reduce statistical risks. If government required everyone to buy Volvos, it would not be producing desirable redistribution. A uniform VSL has some of the same characteristics as a policy that requires people to buy Volvos. In principle, the government should force exchanges only on terms that people find acceptable, at least if it is genuinely concerned with their welfare."

4) What if the costs of risk reduction are carried by one group, but the benefits are received by another? Sunstein points out that in some cases, like regulation of drinking water, much of the cost of safer water is passed along in the form of higher water prices, and thus paid by everyone. Similarly, the cost of the workers' compensation program basically means that the benefits received by (nonunionized) workers are essentially offset by lower take-home pay. However, in regulation of air pollution, it's quite possible that the costs are spread across companies that pollute and their shareholders, while the benefits are realized by people regardless of income. Here, Sunstein points to the classic and controversial argument that if overall benefits for society exceed overall costs to society, even if there are some individual winners and losers, the policy can be justified. But he argues that redistribution is not the right goal for regulatory policy: "It is important to see that the best response to unjustified inequality is a redistributive income tax, not regulation – which is a crude and potentially counterproductive redistributive tool ..."

5) Maybe people aren't knowledgeable or rational in their thinking about costs and benefits of risk reduction? Maybe they place a high value on avoiding some risks, but not on avoiding others, even though the objective level of risk seems much the same? Sunstein takes the technocratic view here: "Regulators should use preferences that are informed and rational, and that extend over people’s life-histories."

6) The problem can instead be formulated as one of rights rather than willingness to pay: that is, people have a right not to have certain risks imposed on them. Sunstein argues that this idea of rights applies in situations where the risk is extremely high, but doesn't apply well to issues of changes in statistically small risks. He further argues that if the issue is one of rights, then the cost-benefit calculation no longer applies (as implied in the DoT quotation above). Sunstein notes that in issues involving, say, race and gender discrimination or sexual harassment, we quite rightly don't apply a cost-benefit calculation. But a regulatory issue like what kinds of bumpers should be put on cars to reduce risks during a crash is not a "right" in this sense, and so a cost-benefit calculation becomes appropriate.

Ultimately, Sunstein is a supporter of using the value of a statistical life in setting regulatory policy. As he notes, there are easier and harder cases for applying this principle. What he doesn't emphasize in this article is that if we can figure out which regulations have greater benefits for their cost, and which regulations have lower benefits for their cost, we should then be able to tighten up the very cost-effective regulations and loosen up the cost-ineffective regulations, and end up helping more people at the same or even lower cost.


Friday, October 18, 2013

One Million Page Views and Round Number Bias

Earlier this week, this Conversable Economist blog reached 1,000,000 pageviews. Of course, for big-time blogs this would be small-time news. But for this one-person blog, which offers links and discussion of economic analysis 4-5 times per week, it seems like a landmark worth noting. Of course, being me, I can't commemorate a landmark without worrying about it. Is focusing on 1,000,000 pageviews just another example of round-number bias? Are pageviews a classic example of looking at what is easily measurable, when what matters is not as easily measurable?

Round number bias is the human tendency to pay special attention to numbers that are "round" in some way. For example, in the June 2013 issue of the Journal of Economic Psychology (vol. 36, pp. 96-102), Michael Lynn, Sean Masaki Flynn, and Chelsea Helion ask "Do consumers prefer round prices? Evidence from pay-what-you-want decisions and self-pumped gasoline purchases." They find, for example, that at a gas station where you pump your own, 56% of sales ended in .00, and an additional 7% ended in .01--which probably means that the person tried to stop at .00 and missed. They also find evidence of round-number bias in patterns of restaurant tipping and other contexts.

Another set of examples of round number bias comes from Devin Pope and Uri Simonsohn in a 2011 paper that appeared in Psychological Science (22: 1, pp. 71-79): "Round Numbers as Goals: Evidence from Baseball, SAT Takers, and the Lab." They find, for example, that if you look at the batting averages of baseball players five days before the end of the season, you will see that the distribution over .298, .299, .300, and .301 is essentially even--as one would expect it to be by chance. However, at the end of the season, the share of players who hit .300 or .301 was more than double the proportion who hit .299 or .298. What happens in those last five days? They argue that batters already hitting .300 or .301 are more likely to get a day off, or to be pinch-hit for, rather than risk dropping below the round number. Conversely, those just below .300 may get some extra at-bats, or be matched against a pitcher where they are more likely to have success. Pope and Simonsohn also find that those who take the SAT test and end up with a score just below a round number--like 990 or 1090 on what used to be a 1600-point scale--are much more likely to retake the test than those who score a round number or just above. They find no evidence that this behavior makes any difference at all in actual college admissions.

Round number bias rears its head in finance, too. In a working paper called "Round Numbers and Security Returns," Edward Johnson, Nicole Bastian Johnson, and Devin Shanthikumar describe their results this way: "We find, for one-digit, two-digit and three-digit levels, that returns following closing prices just above a round number benchmark are significantly higher than returns following prices just below. For example, returns following “9-ending” prices, which are just below round numbers, such as $25.49, are significantly lower than returns following “1-ending” prices, such as $25.51, which are just above. Our results hold when controlling for bid/ask bounce, and are robust for a wide collection of subsamples based on year, firm size, trading volume, exchange and institutional ownership. While the magnitude of return difference varies depending on the type of round number or the subsample, the magnitude generally amounts to between 5 and 20 basis points per day (roughly 15% to 75% annualized)."

In "Rounding of Analyst Forecasts," in the July 2005 issue of Accounting Review (80: 3, pp. 805-823), Don Herrmann and Wayne B. Thomas write: "We find that analyst forecasts of earnings per share occur in nickel intervals at a much greater frequency than do actual earnings per share. Analysts who round their earnings per share forecasts to nickel intervals exhibit characteristics of analysts that are less informed, exert less effort, and have fewer resources. Rounded forecasts are less accurate and the negative relation between rounding and forecast accuracy increases as the rounding interval goes from nickel to dime, quarter, half-dollar, and dollar intervals."

In short, the research on round-number bias strongly suggests not getting too excited about 1,000,000 in particular. There is very little reason to write this blog post now, as opposed to several months in the past or in the future. However, I hereby acknowledge my own personal round number bias and succumb to it.

The other reason not to get too excited about this particular round number is that pageviews themselves are only one measure of the benefits of a blog. For example, page views don't count those who receive the blog by RSS feed, or those who read part of a post as it is reprinted on another site, or those who are emailed a post by a friend. Pageviews also don't measure the intentionality and seriousness of readers, and of course everyone who reads this blog is considerably above average in every way.

In addition, I often tell myself that the compelling reason to carry on with this blog is for my own personal needs. The blog has become a sort of extended memory for me, where I can easily track down that article I dimly remember reading a few months back. It's a filing system, where I can store figures and quotations in a searchable form. My efforts at explaining something here at the blog can serve as a dry run for later explanations that will be smoother and better-developed.

Of course, I could achieve these kinds of self-focused and mildly anti-social goals with a private blog, closed off to the world. The fact that I publish the blog is an admission that I like having readers. If you have become a regular reader in the last couple of years--whether or not you are counted in the pageviews--thanks for spending some time with me. If you are an irregular reader, check in more often. This blog is a specialized flavor, reflecting the randomness of my own reading and interests. But if you feel like recommending the blog to anyone--well, my round-number bias tells me that I would be irrationally delighted to have some help in reaching 2,000,000 page views.




Thursday, October 17, 2013

Adam Smith's Support for Regulatory Financial Firewalls

In the Oxford English Dictionary, the origin of the term "fire-walls" is traced to a comic play published in 1799 by G.E. Lessing, called "School for Honor." One character is booted out of his hotel room for the arrival of a beautiful lady, and given an inadequate replacement room, which is described as "behind the pigeon-loft; with the prospect between the neighbor's fire-walls."

But for those who care about such matters, Adam Smith has a reference to fire-walls in The Wealth of Nations. Even better, Smith's reference draws an explicit parallel between the "obligation of building party walls, in order to prevent the communication of fire," and his support of a certain kind of financial regulation. In the OED, the origin of using the word "firewall" in the sense of "any structure, device, or procedure designed to protect the security or integrity of a system, process" is only dated back to a use in Business Week in 1975. The Adam Smith passage is from Book II, Chapter II, and I quote here from the ever-useful version of TWN available on-line at the Library of Economics and Liberty website. Smith writes:

II.2.94
"To restrain private people, it may be said, from receiving in payment the promissory notes of a banker, for any sum whether great or small, when they themselves are willing to receive them, or to restrain a banker from issuing such notes, when all his neighbours are willing to accept of them, is a manifest violation of that natural liberty which it is the proper business of law not to infringe, but to support. Such regulations may, no doubt, be considered as in some respects a violation of natural liberty. But those exertions of the natural liberty of a few individuals, which might endanger the security of the whole society, are, and ought to be, restrained by the laws of all governments, of the most free as well as of the most despotical. The obligation of building party walls, in order to prevent the communication of fire, is a violation of natural liberty exactly of the same kind with the regulations of the banking trade which are here proposed."
What is Smith talking about here? At this time, money took two main forms. There was gold and silver money, which held its value because of its content of precious metals, and there were bank notes--that is, a note issued by a bank that could be used as a medium of exchange because it could be redeemed for gold or silver at the bank. The difficulty was that anyone could start a bank and begin issuing "bank notes" to pay for goods and services. Smith wrote:
"Where the issuing of bank notes for such very small sums is allowed and commonly practised, many mean people are both enabled and encouraged to become bankers. A person whose promissory note for five pounds, or even for twenty shillings, would be rejected by everybody, will get it to be received without scruple when it is issued for so small a sum as a sixpence. But the frequent bankruptcies to which such beggarly bankers must be liable may occasion a very considerable inconveniency, and sometimes even a very great calamity to many poor people who had received their notes in payment. It were better, perhaps, that no bank notes were issued in any part of the kingdom for a smaller sum than five pounds. Paper money would then, probably, confine itself, in every part of the kingdom, to the circulation between the different dealers, as much as it does at present in London, where no bank notes are issued under ten pounds value ..."
Thus, Smith was essentially arguing that financial sector firms should not be allowed to issue promises to pay unless they had actual assets, and that undercapitalized financial firms--"beggarly bankers"--should not be allowed to operate. Smith goes on to discuss how, in North America, paper money was often issued by governments that then tried to require others to accept the paper as legal tender. The British Parliament banned such actions, and although the colonists complained, Smith defended Parliament's financial regulatory action (and offered a present discounted value calculation along the way):

"The paper currencies of North America consisted, not in bank notes payable to the bearer on demand, but in government paper, of which the payment was not exigible till several years after it was issued; and though the colony governments paid no interest to the holders of this paper, they declared it to be, and in fact rendered it, a legal tender of payment for the full value for which it was issued. But allowing the colony security to be perfectly good, a hundred pounds payable fifteen years hence, for example, in a country where interest at six per cent, is worth little more than forty pounds ready money. To oblige a creditor, therefore, to accept of this as full payment for a debt of a hundred pounds actually paid down in ready money was an act of such violent injustice as has scarce, perhaps, been attempted by the government of any other country which pretended to be free. It bears the evident marks of having originally been, what the honest and downright Doctor Douglas assures us it was, a scheme of fraudulent debtors to cheat their creditors. ... Notwithstanding any regulation of this kind, it appeared by the course of exchange with Great Britain, that a hundred pounds sterling was occasionally considered as equivalent, in some of the colonies, to a hundred and thirty pounds, and in others to so great a sum as eleven hundred pounds currency; this difference in the value arising from the difference in the quantity of paper emitted in the different colonies, and in the distance and probability of the term of its final discharge and redemption. No law, therefore, could be more equitable than the Act of Parliament, so unjustly complained of in the colonies, which declared that no paper currency to be emitted there in time coming should be a legal tender of payment."

Tuesday, October 15, 2013

The 2013 Nobel Prize to Fama, Hansen, and Shiller

The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel --aka "the Nobel prize in economics"--has been awarded in 2013 to Eugene Fama, Lars Peter Hansen, and Robert Shiller. Each of them has done absolutely top-notch academic work. But at least for me, it is difficult to describe this year's prize in a compact way involving a unified theme.

The prize committee said that the award was given "for their empirical analysis of asset prices," but it also writes, in its description of why the award was given, that "we do not yet fully understand how asset prices are determined." Fama is commonly identified as a supporter of the "efficient markets" school of finance, which usually holds that asset prices and changes in asset prices can be explained by underlying economic factors. Meanwhile, Shiller published a book back in 2000 called Irrational Exuberance and is commonly identified as someone who believes that financial markets are frequently driven by irrational or "behavioral" factors. Hansen, for his part, developed a high-powered statistical tool called the Generalized Method of Moments that has become standard in analyzing asset-market data. So yes, all three winners have done "empirical analysis of asset prices," but they often seem to be coming and going in different directions. In a way, this is a follow-up to the Nobel prize given in 1990 to Harry Markowitz, Merton H. Miller, and William F. Sharpe "for their pioneering work in the theory of financial economics." But the theoretical models of finance for which that prize was given told a story in a way that this prize--at least for me--does not.

Rather than try to create a cohesive narrative of the work behind this Nobel prize, I'll just try to give a flavor of the work that Fama, Hansen, and Shiller have done in the empirical analysis of asset prices: some main findings, innovations in methods, new data, and implications. I'll draw on the always-useful materials that the Nobel committee posts on its website, both the "Popular Information," a short essay called "Trendspotting in Asset Markets," and the "Advanced Information," a longer and more technical paper called (a bit optimistically!) "Understanding Asset Prices."

Findings: One theme is that movements in asset market prices, like stock prices, are not predictable in the short run, but are somewhat predictable in the long run. This may sound contradictory, but actually the two points are closely related. For example, the sharp fluctuations in the short run can lead to times when assets are distinctly overvalued or undervalued compared to long-run benchmarks. In the short run, this high level of volatility isn't predictable, but when asset prices get far out of alignment, they do tend to correct. Fama is responsible for a substantial body of empirical work starting in the 1960s that pointed out that asset prices are not predictable in the short run. Shiller is responsible for a body of work starting in the 1980s that emphasized that the short-run movements in asset prices were often so large that some bounceback at some point in the longer term becomes predictable. Of course, exactly when that longer-term bounceback will arrive is unclear. There's a piece of old investor wisdom sometimes attributed to Keynes: "Remember that the market can stay irrational longer than you can stay solvent."
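
One way to see how these two findings fit together is with a stylized simulation (my own illustration, not any model of the laureates): let the log price wander around a slowly moving fundamental value. Period-to-period changes are nearly all noise, but when the price has drifted far from the fundamental, the return over a long horizon becomes partly predictable.

```python
import numpy as np

# Stylized illustration only: log price = fundamental + deviation, where the
# deviation mean-reverts slowly. Short-horizon returns look unpredictable,
# but a large deviation helps predict the long-horizon return.
rng = np.random.default_rng(0)
T = 10_000
fundamental = np.cumsum(rng.normal(0.0002, 0.001, T))   # slow random-walk fundamental
deviation = np.zeros(T)
for t in range(1, T):
    deviation[t] = 0.995 * deviation[t - 1] + rng.normal(0, 0.01)  # mean-reverting gap
log_price = fundamental + deviation

def predictability(horizon):
    """Correlation between today's deviation and the return over the next `horizon` periods."""
    future_return = log_price[horizon:] - log_price[:-horizon]
    return np.corrcoef(deviation[:-horizon], future_return)[0, 1]

print("1-period horizon:  ", round(predictability(1), 3))    # small
print("500-period horizon:", round(predictability(500), 3))  # strongly negative
```

The strongly negative long-horizon correlation is the mean reversion: prices well above the fundamental tend to be followed by below-average returns, even though the timing of the correction is anyone's guess.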

Methods: Fama was an early leader in what is called the "event study" method: basically, look at the price of an asset before and after a certain event happens--like a corporate takeover, a dividend payment, or news that affects future profits. Event studies can help to show whether an event was anticipated (did the price move before the event?), the economic value of the event (shown by the size of the price shift), and whether the event had a permanent or temporary effect (did the price jump during the event and then return to the previous level?). Event studies have become a standard piece in the toolkit of empirical economists.
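
As a rough sketch of the mechanics (my own illustration, not Fama's original procedure): an event study typically compares a stock's actual returns around the event date with the returns a simple benchmark model would have predicted, and sums the gap over a window around the event.

```python
import numpy as np

# Minimal event-study sketch with made-up data, not a real study:
# abnormal return = actual return minus a market-model benchmark,
# cumulated over a window centered on the event date.
rng = np.random.default_rng(1)

def abnormal_returns(stock_ret, market_ret, alpha, beta):
    """Actual return minus the return predicted by a simple market model."""
    return stock_ret - (alpha + beta * market_ret)

# Fake data: 21 trading days, with the event on day 0 at index 10.
market_ret = rng.normal(0.0005, 0.01, 21)
stock_ret = 0.0001 + 1.2 * market_ret + rng.normal(0, 0.01, 21)
stock_ret[10] += 0.05          # pretend the event adds a 5% jump on the event day

ar = abnormal_returns(stock_ret, market_ret, alpha=0.0001, beta=1.2)
car = np.cumsum(ar)            # cumulative abnormal return across the window
print("Abnormal return on event day:", round(ar[10], 4))
print("Cumulative abnormal return:  ", round(car[-1], 4))
```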

Hansen made use of a statistical method called the Generalized Method of Moments. I won't try to explain it in detail here, partly because I'm fairly sure I'd mess it up, but here's one way of thinking about what it means. A "moment" is a way of characterizing a pattern of data. For example, if you are trying to describe a bunch of data, a first step might be to take the average: that is, to add up the data and divide by the quantity of data. A second step might be to think about how spread out the data are, which statisticians measure by the "variance." A third step might be to think about whether the distribution of data is symmetric around the mean, or tends to "lean" one way or the other, which is "skewness." Yet another step might be to look at whether the distribution of data has a substantial share of extreme values far from the mean, which to statisticians is "kurtosis." Each of these steps, and others still more complex, corresponds to a statistical "moment." With this framework, you can take a theory of what would cause asset market prices to change or vary, and then, using these statistical moments, compare the actual pattern of asset market prices to what the theory predicts. Thus, this approach provides a workhorse statistical tool for confronting theoretical explanations of asset market prices with actual data.
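
For a concrete sense of what those four moments look like (again, my own sketch, not Hansen's GMM machinery), here is how one might compute them for a series of returns; GMM then goes further by choosing a model's parameters so that the model's implied moments line up with sample moments like these:

```python
import numpy as np
from scipy import stats

# Illustrative only: the first four sample moments of a fake return series.
rng = np.random.default_rng(42)
returns = rng.standard_t(df=4, size=5000) * 0.01   # fat-tailed, made-up daily returns

mean = returns.mean()                # first moment: the average return
variance = returns.var()             # second moment: how spread out the returns are
skewness = stats.skew(returns)       # third moment: asymmetry around the mean
kurtosis = stats.kurtosis(returns)   # fourth moment: weight in the tails (excess, so 0 = normal)

print(f"mean={mean:.5f}, variance={variance:.6f}, "
      f"skewness={skewness:.2f}, excess kurtosis={kurtosis:.2f}")
```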

Data: Fama was one of the first to use the CRSP data, a dataset launched in the early 1960s by the Center for Research in Security Prices at the University of Chicago that includes information on a very wide array of securities prices and returns. Shiller, together with Karl Case, constructed the first high-quality index of housing prices in the 1980s. The index has not only been useful in looking at the housing market, but it has also formed the basis for financial contracts that allow for hedging against falls in real estate prices.

Implications: Movements in asset prices matter enormously to individuals, to firms, to the financial sector, and to the macroeconomy as a whole. Empirical findings in this area thus often lead to real-world consequences. For example, the finding that stock prices don't have much short-term predictability is part of what triggered the enormous growth of index fund investing in the last few decades. The arguments that people are not fully rational in how they think about asset prices, and that they are often not well-diversified against risks like unemployment or falling house prices, have led to the invention of a wide array of financial instruments, as well as to public policies that encourage saving. The U.S. financial crisis and the Great Recession in recent years have led to numerous policy proposals, all of which are at least implicitly based on a theory of how asset prices are generated.

For a sample of the work from the 2013 Nobel laureates, I can recommend some articles from the Journal of Economic Perspectives, where I have worked as Managing Editor since 1987:

In the Summer 2004 issue, Eugene Fama and Kenneth R. French wrote "The Capital Asset Pricing Model: Theory and Evidence" (18:3, pp. 25-46).

In the Winter 2003 issue, Robert Shiller wrote "From Efficient Markets Theory to Behavioral Finance" (17:1, pp. 83-104).

In the Winter 1996 issue, Lars Peter Hansen and James J. Heckman wrote "The Empirical Foundations of Calibration" (10:1, pp. 87-104). That article is about calibration as a tool for macroeconomic modeling, rather than about the statistical work that is the emphasis of the prize. In the Fall 2001 issue, Jeffrey M. Wooldridge made a heroic effort to explain the Generalized Method of Moments in an only mildly mathematical way in "Applications of Generalized Method of Moments Estimation" (15:4, pp. 87-100).

For posts on the previous Nobel prize winners, see: The 2012 Nobel Prize to Shapley and Roth (October 17, 2012) and the 2011 Nobel Prize to Thomas Sargent and Christopher Sims (October 10, 2011).  

Monday, October 14, 2013

The Global Wealth Distribution

Credit Suisse has published its Global Wealth Report 2013. Reports like this help me update the mental picture of the world economy that I try to carry around with me. First, the quick overview of global wealth looks like this: Total world wealth was about $241 trillion in 2013, with a little under one-third in North America, a little under one-third in Europe, and the rest spread around the rest of the world. Average wealth per adult for the world economy was $52,000, with North Americans averaging about six times that amount, while those in Africa and India averaged less than one-tenth of that amount.

The report has lots of detailed information about how wealth is held in financial and nonfinancial forms, trends in the last year, and trends back to 2000. I found especially interesting the discussion of what happens if we look at the distribution of global wealth not as averages across regions, but across individuals:
"To determine how global wealth is distributed across households and individuals – rather than
regions or countries – we combine our data on the level of household wealth across countries with information on the pattern of wealth distribution within countries. Our estimates for mid-2013 indicate that once debts have been subtracted, an adult requires just USD 4,000 in assets to be in the wealthiest half of world citizens. However, a person needs at least USD 75,000 to be a member of the top 10% of global wealth holders, and USD 753,000 to belong to the top 1%. Taken together, the bottom half of the global population own less than 1% of total wealth. In sharp contrast, the richest 10% hold 86% of the world’s wealth, and the top 1% alone account for 46% of global assets."
Here's a pyramid of wealth for the world economy. The 32 million people around the world who have more than $1 million in wealth represent 0.7% of the world population, and hold 41% of the world's wealth.
What countries are these people from? Here's a division of the millionaires by country. Remember, these are not people who have $1 million in annual income, but rather people who have $1 million or more of combined value in their financial accounts, including retirement accounts, and in home equity and other nonfinancial assets.

What if we look at the top of that wealth pyramid, focusing on those who have more than $1 million in wealth? There are roughly 100,000 people in the world with more than $50 million in wealth.

And finally, how about the ultra-wealthy, the top of the top of the wealth pyramid, meaning those 100,000 people with more than $50 million in wealth? What countries are they from? The United States is first on the list, which isn't a big surprise, but I found it startling that, among countries, China has the second-largest number of ultra-wealthy people.


Friday, October 11, 2013

International Trade in Apples

Apples are part of the American language: Johnny Appleseed, American as apple pie, apple-pie order, apple of my eye, sure as God made little green apples, the Big Apple of New York City, and here in Minnesota, the Mini-Apple of Minneapolis. So it's a little startling to discover at the website of the Food and Agriculture Organization that the US is actually #2 in the world in apple production, lagging far behind China. In fact, China produced 36 million tons of apples in 2011, while the U.S. produced 4.2 million tons. Other major world apple producers include the far-flung group of Turkey, Italy, India, Poland, France, and Iran.

But even within the production of apples, there is global specialization. The US economy both exports and imports apples, depending on the season, but overall runs a trade surplus in apples. However, the U.S. runs a substantial trade deficit in frozen apple juice concentrate, relying heavily on imports from China. Here are some statistics about U.S. trade in apples from the U.S. Department of Agriculture (which are helpfully archived on-line at Cornell University).

For trade in fresh apples, the website of the US Apple Association reports: "Approximately one out of every four fresh apples grown in the United States is exported." The USDA statistics for 2010 show that the main destinations for exports of fresh apples were Mexico and Canada, as one might expect from proximity, then followed by Taiwan, Indonesia, Hong Kong, and the United Kingdom. One suspects that unless the 7 million people of Hong Kong are completely obsessed with eating apples, a substantial share of those U.S. exports of fresh apples are ending up in China.

In terms of imports, the U.S. Apple Association reports that about 6% of all fresh apples consumed in the United States are imported. Roughly two-thirds of those imported fresh apples come from Chile and another 20% from New Zealand--that is, U.S. imports of fresh apples are mainly from the Southern Hemisphere when apple production is out of season here. The total value of U.S. fresh apple exports was $827 million in 2010, while the value of fresh apple imports was $169 million.

When it comes to apple juice and cider, on the other hand, about 85% of U.S. consumption is imported, and about 80% of those imports come from China, according to the USDA statistics for 2010. Only about 8% of U.S. production of apple juice and cider is exported, most of that to Canada. Total value of U.S. apple juice imports in 2010 was $444 million; total value of U.S. apple juice exports, just $32 million.

I will spare you additional data on dried apples, canned apples, comparisons of apple yield statistics over time, and the like. But the main point is that the world economy is full of patterns that I would not have guessed before looking at the data.  The notion that the U.S. exports fresh apples to China and runs a trade surplus in fresh apples, while importing apple juice from China and running a trade deficit in apple juice, is one of those patterns. But if you think about varieties of apples, some more suited to being eaten as fresh apples and some more suited to being used for juice, along with the differing transportation costs of shipping fresh apples and apple juice, these patterns make some economic sense.

Note: Thanks to faithful reader Chris Laughton, the Director of Knowledge Exchange at Farm Credit East, for calling the basic facts about U.S. international trade in apples to my attention.

Wednesday, October 9, 2013

Discouraged Workers and Unemployment

As the unemployment rate has drifted down from its peak of 10% in October 2009 to its current level at 7.3%, a number of commenters have noted that the labor force participation rate has also been falling, from about 66% in late 2007 before the start of the recession to a current level of around 63.2%. Thus, is the drop in the unemployment rate nothing more than a drop in the share of adults seeking to participate in the labor market in the first place? More specifically, what do the statistics tell us about whether those who are outside the labor force are seeking to work?

Just to be clear on the basics, the unemployment rate is calculated as part of the Current Population Survey, which defines unemployment in this way: "Persons are classified as unemployed if they do not have a job, have actively looked for work in the prior 4 weeks, and are currently available for work. Persons who were not working and were waiting to be recalled to a job from which they had been temporarily laid off are also included as unemployed. Receiving benefits from the Unemployment Insurance (UI) program has no bearing on whether a person is classified as unemployed."
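
To keep the definitions straight, here is the basic arithmetic with made-up round numbers (not the actual BLS figures): the unemployment rate divides by the labor force, while the participation rate divides by the whole adult population, so someone who stops looking for work drops out of both the numerator and the denominator of the unemployment rate.

```python
# Illustrative round numbers, not actual BLS data.
employed = 144_000_000
unemployed = 11_000_000
not_in_labor_force = 90_000_000

labor_force = employed + unemployed
adult_population = labor_force + not_in_labor_force

unemployment_rate = unemployed / labor_force
participation_rate = labor_force / adult_population
print(f"Unemployment rate:  {unemployment_rate:.1%}")     # about 7.1%
print(f"Participation rate: {participation_rate:.1%}")    # about 63.3%

# If 2 million unemployed workers become discouraged and stop searching,
# the measured unemployment rate falls even though no one found a job.
unemployed -= 2_000_000
labor_force -= 2_000_000
print(f"Unemployment rate after exits: {unemployed / labor_force:.1%}")  # about 5.9%
```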

What about those who would like a job, but are so discouraged that they have given up on looking? The same survey asks people who are not in the labor market various questions, and divides them up into categories. As of August 2013, there were about 90 million adults not in the labor force. However, many of them were out of the labor force by choice: for example, they were retired, or full-time students, or spouses staying home with children. The survey asks those who are out of the labor force if they want a job, and in August 2013, about 6.3 million answered "yes." Here's a graph from the Bureau of Labor Statistics website showing the number of those out of the labor force who tell the survey that they want a job. The number has clearly risen substantially, by about 2 million, since the start of the recession in 2007. However, it's interesting to note that the total of those out of the labor force who want a job is not that different now than it was back in 1994, in the aftermath of the 1990-91 recession and before the dot-com boom of the mid- and late 1990s had taken hold.

Of those who are out of the labor force but would like a job, a subcategory is the "Marginally Attached to the Labor Force," which refers to "persons who want a job, have searched for work during the prior 12 months, and were available to take a job during the reference week, but had not looked for work in the past 4 weeks." In August 2013 there were about 2.3 million of the "Marginally attached," and here's a graph showing how that number has evolved over time.

Of the "Marginally Attached," yet another subcategory is "Discouraged Workers," which refers to "those who did not actively look for work in the prior 4 weeks for reasons such as thinks no work available, could not find work, lacks schooling or training, employer thinks too young or old, and other types of discrimination." There were 866,000 discouraged workers in August 2013, and here's how their number has evolved over time.

So what does all this mean about interpreting the fall in the unemployment rate? Troy Davig and José Mustre-del-Río discuss what they call "The Shadow Labor Supply and Its Implications for the Unemployment Rate" in the Third Quarter 2013 Economic Review published by the Federal Reserve Bank of Kansas City. They refer to those who are out of the labor force but tell the survey that they would like to work as the "shadow labor force." They write:

"Nevertheless, despite the swelling size of the shadow labor supply, a return of these individuals to the labor force in numbers that would considerably affect the unemployment rate appears unlikely. Variation in their job search behavior may influence the future path of the unemployment rate modestly, but not greatly. Although individuals in the shadow labor force do flow back into unemployment, the peak in their return to the labor force typically occurs in the first few post-recession years. The recent, post-recession peak of their flow back into unemployment has already occurred, in mid-2010. While another surge back into the labor force by individuals in the shadow labor supply is possible, historical evidence suggests it is unlikely."
Although Davig and Mustre-del-Río don't put it this way, my own interpretation would be that the decline in the unemployment rate since about 2010 is the sign of an economy that is tottering back to a healthier labor market. The number of those out of the labor force who tell the survey that they would like a job bounces around through the year--it's not a seasonally adjusted number--typically peaking in June when people start looking for summer jobs. But it hasn't shown any particular trend since 2010. I've posted earlier about the long-term decline in the labor force participation rate, a trend which predates the Great Recession, here and here.