Friday, June 29, 2012

Interviewing Darrell Duffie on Financial Economics

The June 2012 issue of the Region, from the Federal Reserve Bank of Minneapolis, has an insightful interview with Darrell Duffie, conducted by Douglas Clement. It's worth reading the whole thing, but here are a few of the highlights that caught my eye. Except where noted, all quotations are from Duffie.

_____________________________

Developing a theory of inattentive investors and market volatility

"In the ideal world, we’d all be sitting at our terminals watching for every possible price distortion caused by demands for immediacy. We’d all jump in like piranhas to grab that, we’d drive out those price distortions and we’d have very efficient markets. But in the real world, you know, we all have other things to do, whether it’s teaching or interviewing economists or whatever, and we’re not paying attention.

"So we do rely on providers of immediacy, and we should expect that prices are going to be inefficient in the short run and more volatile than they would be in a perfectly efficient market, but in a natural way. I have been studying markets displaying that kind of price behavior to determine in part how much inattention there is or how much search is necessary to find a suitable counterparty for your trade."

As a vivid example of how even professional investors aren't always attentive, a footnote to the interview relates one of Duffie's anecdotes on this subject: "In his American Finance Association presidential address, Duffie refers to a Wall Street Journal article (Feb. 19, 2010) that reported, “Investors took time out from trading to watch [Tiger] Woods apologize for his marital infidelity. ...New York Stock Exchange volume fell to about 1 million shares, the lowest level of the day at the time in the minute Woods began a televised speech. ...Trading shot to about 6 million when the speech ended.”"

_______________________________________

Reducing the need for future government guarantees on investments at money market mutual funds

In September 2008, one of the events that created panic in financial markets was when a money market fund, the Reserve Primary Fund, seemed likely to announce that it had lost money. Investors pulled $300-$400 billion out of money market funds in two weeks, until the government guaranteed the value of investors' principal in these funds. What might be enacted so that money market mutual funds don't face runs in the future? Duffie explains:

"One of those proposals is to put some backing behind the money market funds so that a claim to a one-dollar share isn’t backed only by one dollar’s worth of assets; it’s backed by a dollar and a few pennies per share, or something like that. So, if those assets were to decline in value, there would still be a cushion, and there wouldn’t be such a rush to redeem shares because it would be unlikely that cushion would be depleted. That’s one way to treat this problem.

"A second way to reduce this problem is to stop using a book accounting valuation of the fund assets that allows these shares to trade at one dollar apiece even if the market value of the assets is less than that. ...  That’s called a variable net asset value approach, which has gotten additional support recently. Some participants in the industry who had previously said that a variable net asset value is a complete nonstarter have now said we could deal with that. ... 

"A third proposal, which has since come to the fore, is a redemption gate: If you have $100 million invested in a money market fund, you may take out only, say, $95 million at one go. There will be a holdback. If you have redeemed shares during a period of days before there are losses to the fund’s assets, the losses could be taken out of your holdback. That would give you some pause before trying to be the first out of the gate. In any case, it would make it harder for the money market fund to crash and fail from a liquidity run. ...

"The SEC has a serious issue about which of these, if any, to adopt. And it’s getting some push-back not only from the industry, but even from some commissioners of the SEC. They are concerned—and I agree with them—that these measures might make money market funds sufficiently unattractive to investors that those investors would stop using them and use something else. That alternative might be better or might be worse; we don’t know. It’s an experiment that some are concerned we should not run. ...  I feel sympathy for the  SEC. It has a tough decision to make."
___________________________________

Addressing instability in the repo market with a public utility for tri-party clearing

The market for repurchase agreements ("the repo market") was one of the financial markets that seized up during the financial crisis in late 2008 and early 2009. Repo agreements are very short-term borrowing, often overnight, so those using them often need financing during the day as well. The financing for intraday trading is done by "tri-party clearing banks," and essentially all of the tri-party deals in the U.S. are handled by two banks: JPMorgan Chase and Bank of New York Mellon.

"JPMorgan Chase and Bank of New York Mellon handle essentially all U.S. tri-party deals. As part of this, they provide the credit to the dealer banks during the day. Toward the end of the day, a game of musical chairs would take place over which securities would be allocated as collateral to new repurchase agreements for the next day. All of those collateral allocations would get set up and then, at the end of the day, the switch would be hit and we’d have a new set of overnight repurchase agreements. The next day, the process would repeat.

"This was not satisfactory, as revealed during the financial crisis when two of the large dealer banks, Bear Stearns and Lehman, were having difficulty convincing cash investors to line up and lend more money each successive day. The clearing banks became more risk averse about offering intraday credit. ... [T]he amounts of these intraday loans from the clearing banks at that time exceeded $200 billion apiece for some of these dealers. Now they’re still over $100 billion apiece. That’s a lot of money. ..."

"The tri-party clearing banks are highly connected, and we simply could not survive the failure of probably either of those two large clearing banks without an extreme dislocation in financial markets, with consequential macroeconomic losses. So if you take, for example, the Bank of New York Mellon, it really is too interconnected to fail, at the moment. And that’s not a good situation. We should try to arrange for these tri-party clearing services to be provided by a dedicated utility, a regulated monopoly, with a regulated rate of return that’s high enough to allow them to invest in the automation that I described earlier."
________________________________________

What financial plumbing should we be working on now, so that the chance of a future financial crisis is reduced?

"And there has been a lot of progress made, but I do feel that we’re looking at years of work to improve the plumbing, the infrastructure. And what I mean by that are institutional features of how our financial markets work that can’t be adjusted in the short run by discretionary behavior. They’re just there or they’re not. It’s a pipe that exists or it’s a pipe that’s not there. And if those pipes are too small or too fragile and therefore break, the ability of the financial system to serve its function in the macroeconomy—to provide ultimate borrowers with cash from ultimate lenders, to transfer risk through the financial system from those least equipped to bear it to those most equipped to bear it, to get capital to corporations—those basic functions which allow and promote economic growth could be harmed if that plumbing is broken.

"If not well designed, the plumbing can get broken in any kind of financial crisis if the shocks are big enough. It doesn’t matter if it’s a subprime mortgage crisis or a eurozone sovereign debt crisis. If you get a big pulse of risk that has to go through the financial system and it can’t make it through one of these pipes or valves without breaking it, then the financial system will no longer function as it’s supposed to and we’ll have recession or possibly worse."

Some of the preventive financial plumbing that Duffie emphasizes would include (in the words of the interviewer): "broadening access to liquidity in emergencies to lender-of-last-resort facilities," "engaging in a deep forensic analysis of prime brokerage weakness during the Lehman collapse," "tri-party repo markets," "wholesale lenders that might gain prominence if money market funds are reformed and therefore shrink," "cross-jurisdictional supervision of CCPs [central clearing parties]," and "including foreign exchange derivatives in swap requirements."

Bringing the plumbing up to code in an older house is no fun at all, and bringing the economy's financial plumbing up to code is not much fun, either. But having the plumbing break when stressful but predictable events occur is even less fun.

Thursday, June 28, 2012

Other Air Pollutants: Soot and Methane

It sometimes seems to me that the arguments over carbon emissions and the risk of climate change have crowded out attention to other environmental issues--including other types of air pollution. Thus, I was intrigued to see the article by Drew Shindell called "Beyond CO2: The Other Agents of Influence," in the most recent issue of Resources magazine from Resources for the Future. Shindell focuses on the benefits of reducing soot (more formally known as "black carbon") and methane emissions (which are a precursor to more ozone in the atmosphere), and identifies which emissions to go after. He is reporting the results of a study with a larger group of authors that appeared in the January 13, 2012, issue of Science magazine. I'll quote from both articles here.


The starting point for this group was to note: "Tropospheric ozone and black carbon (BC) are the only two agents known to cause both warming and degraded air quality." Thus, "an international team of researchers, including experts from the Stockholm Environment Institute, the Joint Research Centre of the European Commission, the U.S. Environmental Protection Agency, and others" looked at 400 different policies for potentially reducing these emissions.

Here's a capsule overview of the effects of soot and methane from the Resources article: 

"When the dark particles of black carbon absorb sunlight, either in the air or when they accumulate on snow and ice and reduce their reflectivity, they increase radiative forcing (a pollutant’s effect on the balance of incoming and outgoing energy in the atmosphere, and the concept behind global warming), and thus cause warming. They can also be inhaled deeply into human lungs, where they cause cardiovascular disease and lung cancer.

"Methane has a more limited effect than black carbon on human health, but it can lead to premature death from the ozone it helps form. That ozone is also bad for plants, so methane also reduces crop yields. It is a potent greenhouse gas as well, with much greater potential to cause global warming per ton emitted than CO2. But its short atmospheric lifetime—less than 10 years, versus centuries or longer for CO2—means that the climate responds quickly and dramatically to reductions. CO2 emissions, in contrast, affect the climate for centuries, but plausible reductions will hardly affect global temperatures before 2040."

The group looked at costs and benefits to whittle down the 400 measures and eventually selected 14 of them. "Of the 14 measures selected, 7 target methane emissions (from coal mining, oil and gas production, long-distance gas transmission, municipal waste and landfills, wastewater, livestock manure, and rice paddies). The other 7 controls target black carbon emissions from incomplete combustion and include both technical measures (for diesel vehicles, biomass stoves, brick kilns, and coke ovens) and regulatory measures (for agricultural waste burning, high-emitting vehicles, and domestic cooking and heating)."

The potential gains from the policies that they advocate are shown in this table from Science. The first column of the table shows gains from reducing methane, the second shows gains from the technical fixes for soot emissions, and the third shows gains from regulatory measures for reducing soot emissions. The first few rows are physical effects: in particular, you can see that the methane emissions have a bigger effect on crops, but soot emissions have a much larger effect on lives saved. The remaining rows then put monetary values on the reduction in emissions.



The Science article sums up: "We identified 14 measures targeting methane and BC [black carbon] emissions that reduce projected global mean warming ~0.5°C by 2050. This strategy avoids 0.7 to 4.7 million annual premature deaths from outdoor air pollution and increases annual crop yields by 30 to 135 million metric tons due to ozone reductions in 2030 and beyond." Indeed, a "combination of measures to control black carbon, methane, and CO2 could keep global mean warming at less than 2°C (relative to the preindustrial era) during the next 60 years—something that reducing the emissions of any one agent cannot achieve by itself."


The authors also find that the benefits of such policies far outweigh the costs. "Benefits of methane emissions reductions are valued at $700 to $5000 per metric ton, which is well above typical marginal abatement costs (less than $250)." For soot, "improved efficiencies lead to a net cost savings for the brick kiln and clean-burning stove BC measures. These account for ~50% of the BC measures’ impact. The regulatory measures on high-emitting vehicles and banning of agricultural waste burning, which require primarily political rather than economic investment, account for another 25%. Hence, the bulk of the BC measures could probably be implemented with costs substantially less than the benefits given the large valuation of the health impacts."
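
A quick back-of-the-envelope check on the methane numbers just quoted (the dollar figures are the article's; the code is only arithmetic):

```python
# Benefit-cost comparison for methane reductions, per metric ton.
benefit_low, benefit_high = 700, 5000   # $ per ton, from the article
abatement_cost_ceiling = 250            # $ per ton, "typical marginal" upper bound

print(f"Benefit/cost ratio: {benefit_low / abatement_cost_ceiling:.1f}x "
      f"to {benefit_high / abatement_cost_ceiling:.0f}x")
# => 2.8x to 20x: even the low-end benefit estimate far exceeds the cost
```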

The policy agenda for soot and methane is daunting in practical and political terms. For example, it requires measures that affect rice paddies, fossil fuel production and transmission, animal manure, brick kilns, diesel stoves, indoor cooking, and other areas. The agenda is worldwide, and those who receive the benefits will often not align well with those who are likely to end up footing the costs. The Resources article points out that an international coalition involving Canada, Sweden, Mexico, Ghana, Bangladesh, the United States, and the United Nations Environment Programme has embarked on a program to reduce black carbon and methane emissions. The Climate and Clean Air Coalition to Reduce Short-Lived Climate Pollutants is just getting underway.


Tuesday, June 26, 2012

Drop in Total Value of U.S. Housing: Illustrating the Financial Crisis

Faithful reader M.R. was looking through archives and found my first post of May 17, 2011, on "Two ways of illustrating the financial crisis" and the follow-up post of August 3, 2011, on "Four More Ways of Illustrating the Financial Crisis." He writes: "[M]y reason for this email is to differ over your 2 + 4 ways of illustrating the financial crisis.  To me the illustration is a graph I have never seen -- hint to one who knows the literature much better than I -- a graph of the market value of residential real estate." Here are some illustrative graphs made with the ever-useful FRED website of the Federal Reserve Bank of St. Louis, all using data from the Federal Reserve's Flow of Funds accounts.

What is the drop in the total market value of residential real estate? Here's the data series on "Real Estate - Assets - Balance Sheet of Households and Nonprofit Organizations." The drop is from a shade over $25 trillion in October 2006 to $18.2 trillion in October 2011. And yes, you do see a little upward wiggle at the right-hand end of the line, a rise to $18.6 trillion in the most recent data for January 2012.



For many households, of course, what matters is not the total value of your property, but how much equity you have in the property. For example, if the value of your house declined from $320,000 to $240,000, but your outstanding mortgage is $250,000, it's not all that comforting to notice that your home still has a positive market value overall. Here's the graph showing "Owner's Equity in Household Real Estate." Equity peaked at $13.5 trillion in January 2006, and dropped to $6.2 trillion in October 2011--a drop of more than half.
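
The leverage in that example is worth spelling out. Here is the arithmetic, using the numbers above:

```python
# Mortgage leverage: a 25% fall in the house price more than wipes out
# the owner's equity in the example from the text.
house_before, house_after = 320_000, 240_000
mortgage = 250_000

equity_before = house_before - mortgage   #  $70,000
equity_after = house_after - mortgage     # -$10,000: underwater

price_drop = (house_before - house_after) / house_before
equity_drop = (equity_before - equity_after) / equity_before
print(f"Price fell {price_drop:.0%}; equity fell {equity_drop:.0%}")
# => Price fell 25%; equity fell 114%
```

This is why owners' equity fell proportionally much further than the total value of housing over the same period.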



Both of these time series are in nominal dollars. In the next two graphs, I've divided them by nominal GDP, which has the effect of adjusting both for inflation and for real growth of the economy over time. First, here is the total value of residential real estate divided by GDP.
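
For anyone who wants to reproduce these ratio graphs, here is a minimal sketch, assuming the pandas_datareader interface to FRED. 'GDP' is FRED's actual code for nominal GDP; HOUSING_SERIES is a placeholder for whichever Flow of Funds series you pull, and you should confirm the two series are in the same units before dividing.

```python
# A minimal sketch: pull nominal GDP and a nominal housing-value series
# from FRED and plot their ratio. HOUSING_SERIES is a placeholder code.
import pandas_datareader.data as web

gdp = web.DataReader("GDP", "fred", start="1950-01-01")                 # nominal GDP, quarterly
housing = web.DataReader("HOUSING_SERIES", "fred", start="1950-01-01")  # placeholder

# Both series are nominal, so dividing nets out inflation and real growth.
# Check units first: Flow of Funds series are often in millions, GDP in billions.
ratio = housing.iloc[:, 0] / gdp.iloc[:, 0]
ratio.plot(title="Housing value relative to GDP")  # requires matplotlib
```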

Just to be clear, there is no reason why the share of housing compared to GDP should be a constant over time. Rising incomes will presumably cause people to consume more housing, but the pace at which total housing consumption rises could be faster or slower than the growth of the economy over time. If you squint at the figure and twist your head a bit, you can imagine that the total value of housing as a share of GDP is rising gradually over time, but with a lot of fluctuations. Clearly, the price spike in the mid-2000s was far outside the pattern of any such long-run historical trend.

The final graph is owner's equity in housing divided by GDP. It's interesting to observe that housing equity relative to GDP became a more important asset from the mid-1960s up through the late 1980s. A lot of the advice one used to hear about buying a house early in life was from people living through that experience. Through much of the 1990s, the ratio of owner's equity to GDP fell: in the early 1990s, partly as a result of depressed regional real estate markets in certain states in the aftermath of the collapse of many savings and loan institutions (which made the numerator of the ratio decline), and from the mid-1990s on, as a result of fast economic growth (which made the denominator of the ratio rise). Again, the bottom line is that the spike in the total value of housing that ended around 2006 is well outside the post-World War II historical experience. And the drop since 2006 takes this ratio from by far its highest value since 1950 to by far its lowest value since 1950.


Of course, the financial crisis itself is more than just a drop of nearly $7 trillion in housing values. It's also about what forces created the housing bubble in the first place, and how the bursting of the bubble translated into weaknesses in the banking and financial system. But the rise and fall of housing values is a central part of what occurred.


Monday, June 25, 2012

The Death Penalty and Deterrence: Why No Clear Answers?

A panel of the National Research Council headed by Daniel S. Nagin and John V. Pepper has published "Deterrence and the Death Penalty." The report can be ordered or a free PDF can be downloaded here. The report refers back to a 1978 NRC report which concluded that "available studies provide no useful evidence on the deterrent effect of capital punishment." The latest study reaches the same conclusion.

"The committee concludes that research to date on the effect of capital punishment on homicide is not informative about whether capital punishment decreases, increases, or has no effect on homicide rates. Therefore, the committee recommends that these studies not be used to inform deliberations requiring judgments about the effect of the death penalty on homicide. Consequently, claims that research demonstrates that capital punishment decreases or increases the homicide rate by a specified amount or has no effect on the homicide rate should not influence policy judgments about capital punishment."

Why has the research found it so hard to sort out cause and effect in this situation? The NRC report emphasizes two reasons, but discusses a number of others:

1) One problem is that deciding on the effect of capital punishment requires asking, "Compared to what?" But existing studies don't pay enough attention to what the alternative punishment might have been. Here is the NRC report: "Properly understood, the relevant question about the deterrent effect of capital punishment is the differential or marginal deterrent effect of execution over the deterrent effect of other available or commonly used penalties, specifically, a lengthy prison sentence or one of life without the possibility of parole. One major deficiency in all the existing studies is that none specify the noncapital sanction components of the sanction regime for the punishment of homicide."

2) Another problem is that studies of the deterrent effect of capital punishment need to make some assumptions about how potential murderers perceive that penalty. Are they aware of how often it is imposed, and under what circumstance, and their actual chances of receiving the penalty? In many studies, these underlying assumptions are not spelled out clearly, or even at all.


3) There's a classic cause-and-effect problem in studying the deterrent effects of any criminal penalty, whether fines or imprisonment or capital punishment. Say that there is one jurisdiction where lots of crimes occur and another where not many crimes occur, for whatever reason. The first jurisdiction thus imposes lots of criminal penalties, and the other jurisdiction doesn't. In this situation, a naive statistical test will observe that high levels of crime are correlated with high levels of penalties, and low levels of crime are correlated with low levels of penalties. But it would be foolish to argue that the levels of penalties are causing the levels of crime. Instead, one needs to figure out how increases or decreases in the level of penalties would affect levels of crime, which is a much harder question, especially because changes in penalties often occur as a reaction to levels of crime. (A small simulation after this list illustrates the trap.)

4) The statistical problem here is a difficult one. To help illustrate why, here's a graph of the number of death sentences and executions in the U.S. since 1974. Back in 1976, U.S. Supreme Court decisions had made it nearly impossible for states to execute anyone, and the number of death penalty sentences was somewhat lower as well. Then the number of death penalty sentences rises, and a few years later the number of executions rises. In the last decade or so, the number of death sentences has dropped substantially, but the number of executions has dropped by less, surely because of that large number of death sentences originally given back in the 1980s and 1990s. Referring back to the earlier point, an obvious question here is the extent to which potential murderers are aware of these patterns in sentencing and executions.

Next, here's a graph of homicide rates since 1974. Notice that it starts off at about 10 per 100,000 population, and then starting around 1990 drops off to about half that level. In other words, about a decade after capital punishment sentences rise, and at about the same time as the execution rate starts to rise, the homicide rate drops off. A naive statistical comparison between these patterns using national data might well suggest that higher levels of executions preceded a drop-off in murders.

But of course the death penalty is not equally likely across states. New York state sentenced only 10 people to death from 1973-2009, and executed none during that time. California and Texas both sentenced large numbers of people to death, but California actually executed only 13 people from 1976-2009, while Texas executed 447. However, these differences in death sentences and actual executions seem to have very little effect on the murder rate in these three states, which essentially follows the national pattern in all three cases.
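
Here is the small simulation promised above: a toy model, with made-up coefficients, in which penalties causally reduce crime but jurisdictions with more crime adopt heavier penalties. The naive cross-sectional correlation still comes out positive.

```python
# Toy model of the reverse-causality trap: penalties reduce crime (true
# effect is -0.3), but penalties also respond to local crime levels.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
crime_propensity = rng.normal(10, 2, n)                    # underlying crime level by area
penalty = 0.5 * crime_propensity + rng.normal(0, 0.5, n)   # penalties respond to crime
crime = crime_propensity - 0.3 * penalty + rng.normal(0, 0.5, n)

print(f"Naive correlation(penalty, crime): {np.corrcoef(penalty, crime)[0, 1]:+.2f}")
# Positive, even though the built-in causal effect of penalties on crime
# is negative. A raw cross-sectional correlation cannot recover the causal effect.
```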


Of course, researchers in this area are fully aware of these difficulties in looking at the data over time and across states, and have applied a wide array of methods to this data. Given these issues, it is perhaps no surprise that the NRC report lists studies in the last decade which find that each execution deters five other murders, or 18 other murders, as well as studies that find that capital punishment deters no murders at all, and studies that find that the conclusions one draws from the data are quite fragile, depending on small differences in the statistical tests that are run. 


Of course, the issue of whether capital punishment deters is not the only issue in making policy choices about capital punishment. The NRC report is careful to point out that it is not considering the moral arguments for or against capital punishment, nor is it looking at the arguments over whether the penalty is administered in a consistent or nondiscriminatory fashion. My own moral sense would not rule out the concept of the death penalty for the most extreme and egregious cases of murder. I do worry that its application seems to vary so greatly across jurisdictions, and across racial groups, and by the quality of the lawyers involved in the case. I also worry that in a world without capital punishment, those who have already committed crimes that could land them in prison for life (say, kidnapping) have an incentive to kill potential witnesses, because there is then no additional penalty for doing so.

The NRC report is careful to point out several times that a lack of solid empirical support for whether capital punishment deters doesn't prove that such a deterrent effect does not in fact exist. As the old saying among empirical researchers goes: "Absence of evidence is not evidence of absence."


On the particular issue of the uncertainty of whether or how much capital punishment deters future murders, I struggle with a conundrum that was put to me many years ago. Take as a starting point that we aren't sure whether capital punishment deters or not, and we must make a choice whether to execute certain murderers or not. Then consider four possibilities: 1) we execute some murderers and it does deter others, so we save innocent lives; 2) we execute some murderers and it doesn't deter others; 3) we don't execute any murderers and it wouldn't have deterred anyone if we did; and 4) we don't execute any murderers but it would have deterred some future murderers if we had done so. Those who worry about executing those who don't really deserve such a grave punishment, or even who may be innocent, have a point. But if it's possible that capital punishment may deter, and we genuinely aren't sure, then we also need to take into account in our policy calculations the possibility that executing the most egregious murderers might save innocent lives.
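
One rough way to formalize that reasoning is as an expected-value calculation over the uncertainty about deterrence. This is a sketch with purely illustrative numbers, not estimates; the figure of five murders deterred per execution simply echoes one of the study results mentioned above.

```python
# If deterrence exists with probability p, and each execution then deters
# k murders, the expected number of innocent lives saved per execution is
# p * k. The NRC's point is precisely that p and k are not known.
def expected_lives_saved(p, k=5):
    return p * k

for p in (0.0, 0.25, 0.5):
    print(f"p = {p:.2f}: expected lives saved per execution = {expected_lives_saved(p):.2f}")
```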
 

Friday, June 22, 2012

World Energy in Three Graphs

The BP Statistical Review of World Energy is out for 2012. It's always a good reference for basic quantity and price data. Here are three graphs that caught my eye, and a few thoughts.

As a starting point, here's a figure showing world consumption of energy by source going back 25 years to 1986. The comment under the figure reads: "World primary energy consumption grew by 2.5% in 2011, less than half the growth rate experienced in 2010 but close to the historical average. Growth decelerated for all regions and for all fuels. Oil remains the world’s leading fuel, accounting for 33.1% of global energy consumption, but this figure is the lowest share on record. Coal’s market share of 30.3% was the highest since 1969."



The sources of energy on the figure from bottom to top are green for oil, red for natural gas, light orange for nuclear, blue for hydroelectricity, dark orange for renewables, and gray for coal. It's true that the "renewables" slice has grown over time, from nearly imperceptible to actually visible. But it's useful to bear in mind that despite all the publicity about solar, wind, biofuels, and other renewable resources, they remain tiny compared with the other energy sources--especially the big three of oil, natural gas and coal. Even if public policies less favorable to fossil fuels were enacted worldwide, like a meaningful carbon tax, and even if technological progress in renewables continues to ramp up at a rapid pace, realistically speaking the big three fossil fuels will dominate world energy production for the next few decades.

The share of oil in world energy consumption is the lowest on record, BP reports. Part of the reason is the price. Here's a figure showing the long-term price of oil going back 150 years to 1861. The dark green line shows oil prices in nominal dollars; the light green line shows oil prices in real inflation-adjusted dollars. It's striking that after the Pennsylvania oil boom of the 1860s, real oil prices dropped to historically low levels and pretty much stayed there from 1880 up to the early 1970s--nearly a century of relatively cheap oil. Since then, crude oil prices have seen two enormous spikes: one in the 1970s and one in the last few years. Both spikes took real prices of crude oil roughly back to where they were in 1860, before the era of cheap oil began.

Oil is produced, shipped, and sold in a global market. Natural gas is tougher to ship around the world, unless or until there is a very large investment in the pipelines and ships that could make it possible. As a result, natural gas prices have been falling in the U.S. in the last few years (red line on the graph) while rising in places like Germany, the UK, and Japan.


 Of course, the fall in U.S. natural gas prices is largely due to the rise in unconventional natural gas production. I posted on June 7, 2012, about "Unconventional Natural Gas and Environmental Issues." My own grand compromise proposal for U.S. energy policy is the "Drill Baby Carbon Tax" -- basically push ahead with domestic production of fossil fuels, with environmental safeguards in place, but at the same time impose a carbon or pollutant tax to encourage the transition to increased energy conservation and to less-polluting sources of energy. 

Thursday, June 21, 2012

Health Care at 20% of GDP?

Back in the late 1970s and early 1980s, when I was first getting my feet wet in the arguments over health care finance, it was common to point out that national health care spending had risen from 5.5-6.0% of GDP in the early 1960s to 9.2% of GDP by 1980. Surely, we argued, such an increase couldn't continue. By 1993, early in the Clinton administration, when proposals for reforming health care finance were in the air, health care spending had reached an extraordinarily high 13.8% of GDP. Surely it was close to topping out?!?

But by 2009, when the Obama administration's health care reform package was announced, national health care spending had reached an unbelievably high 17.8% of GDP.  After several decades of watching U.S. health care spending climb higher than I would have believed plausible, I suppose I should be inured to the trend. But I still found my eyes bulging at the most recent official projections that health care spending will hit 19.6% of GDP by 2021. The estimates are from a team of actuaries from the Centers for Medicare and Medicaid Services, led by senior economist Sean P. Keehan, and they appear in "National Health Expenditure Projections: Modest Annual Growth Until Coverage Expands And Economic Growth Accelerates" in the July 2012 issue of Health Affairs.  
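
As a rough check of how persistent this trend is, here is the implied average wedge between health spending growth and GDP growth, computed from the shares quoted above (simple arithmetic on the published figures; nothing here comes from the Health Affairs article itself):

```python
# If health spending is share s1 of GDP in year t1 and s2 in year t2, then
# health spending grew faster than GDP by (s2/s1)^(1/(t2-t1)) - 1 per year.
def annual_wedge(s1, s2, years):
    return (s2 / s1) ** (1 / years) - 1

print(f"1980-2009: {annual_wedge(0.092, 0.178, 29):.1%} per year")   # ~2.3%
print(f"2009-2021: {annual_wedge(0.178, 0.196, 12):.1%} per year")   # ~0.8% (projected)
```

On these figures, health spending outgrew GDP by roughly 2.3 percent per year, compounded over three decades; the projected wedge through 2021 is notably smaller, consistent with the "modest annual growth" in the article's title.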


The actuaries project relatively slow growth in health care spending through 2013, due to the "sustained effects of the recent recession and modest recovery on projected growth in disposable personal income, insurance coverage, and unemployment rates ..." But as the economy recovers, demand for health insurance and health care will pick up again. Also, "[b]eginning in 2014, major coverage expansions from the Affordable Care Act will take effect. These expansions are expected to increase the number of people with health insurance; the demand for health care (particularly prescription drugs and physician care); and the share of total health spending sponsored by federal, state, and local governments." Here are a few highlights from the much more detailed and lengthy tables. First, here is total health care spending, per capita health care spending, and health care spending as a share of GDP projected for 2021.

And here are a few bits from a more detailed table in which total health care spending is divided up by who pays the bills: private business (through private health insurance), households (through private health insurance premiums and out-of-pocket spending), and government (federal, state, and local, including both government programs that provide health insurance to citizens and health insurance for government employees). The government share of health care spending was 46% in 2011, and is projected to be 49.6% of health care spending in 2021.

This is a "current law" projection of costs, so it assumes that various methods to control costs will be imposed and work as they are currently written into law. In turn, this means that the cost estimates are more likely biased low than high. With only a bit of rounding, we seem headed for health care spending levels that I wouldn't have believed back in the late 1970s, and can barely believe now: total health care spending at 20% of GDP, with government paying half the bills.

Wednesday, June 20, 2012

The Average Child at $234,900

Mark Lino of the U.S. Department of Agriculture has authored this year's version of Expenditures on Children by Families, 2011, a report that goes back to 1960. This year's version uses data from the Consumer Expenditure Survey of 2005-06 on what people buy, with prices adjusted to their 2011 levels. My wife and I have three little bundles of joy--actually, they are rapidly metamorphosing into lanky adolescent bundles of joy--so this report always catches my eye. The headline finding, I suppose, is that average lifetime expenditures by a middle-income husband-wife family on a child were $234,900 in 2011.

Here is annual spending per child by household income level and by age of the child--more specifically, for the younger child in a family with two parents and two children. Spending per child in each age group is more than twice as high in the higher-income group as in the lower-income group.

The primary expense is housing, but health care is also important. Over time, the actual cost of feeding and clothing children has become less important, while the out-of-pocket costs of child care and education have become more important.


The costs of raising a child are rising over time, but the total size of the increase may be less important than the specific drivers behind the rise. Here's Lino on the cost differences from 1960 to 2011 in raising a child (references to figures omitted):
"In 1960, average expenditures on a child in a middle-income, husband-wife family amounted to $25,229, or $191,723 in 2011 dollars. By 2011, these estimated expenditures climbed 23 percent in real terms to $234,900 ... Housing was the largest expense on a child in both time periods and increased in real terms over this time. Food was also one of the largest expenses in both time periods, but decreased in real terms. Changes in agriculture over the past 50 years have resulted in family food budgets being a lower percentage of household income. Transportation expenses on a child increased slightly in real terms from 1960 to 2011."

"Clothing and miscellaneous expenses on a child decreased as a percentage of total child-rearing expenses and in real terms from 1960 to 2011. Reduced real expenses on children’s clothing is somewhat of a surprise given the popularity of many designer clothing items today; however, it is likely that technological changes and globalization have made clothing less expensive in real terms. ..."

"Health care expenses on a child doubled as a percentage of total child-rearing costs, as well as increasing in real terms, from 1960 to 2011. ...  Perhaps the most striking change in child-rearing expenses over time relates to child care and education expenses. It should be noted that in 1960, child care/education expenses included families with and without the expense. Even so, these expenses grew from 2 percent of total child-rearing expenditures in 1960 (for families with and without the expense) to 18 percent (for families with the expense) in 2011. Much of this growth is likely related to child care. In 1960, child care costs were negligible, mainly consisting of in-the-home babysitting. Since then, the labor force participation of women has greatly increased, leading to the need for more child care. Child-rearing expense estimates were not provided for single-parent families in 1960, likely because of the small percentage of children residing in such households at the time."

When I contemplate the costs of child-raising, I sometimes remember a bit of good cheer I once heard from a financial planner: Those who have children often feel much better about their standard of living in retirement. Those without children experience retirement as a time when their income has diminished, but their desired expenditures are the same or higher. But for those of us with children, retirement will be a time when our income has diminished but our expenditures--when we are no longer supporting children--will also be much lower. Just one more blessing of parenthood!

Tuesday, June 19, 2012

Interview with Gary Becker on Rationality


Catherine Herfeld has an interview with Gary Becker in the Spring 2012 issue of the Erasmus Journal for Philosophy and Economics. It's called "The potentials and limitations of rational choice theory: an interview with Gary Becker." I found it in a post from Peter Klein at the "Organizations and Markets" blog. Herfeld tries to push Becker on the limitations of rational choice theory and whether economists should be considering alternatives like behavioral economics, and Becker pushes back. Here are a few excerpts:


Becker on behavioral economics and the financial crisis

"But did the behavioral economists predict the crisis any better? When taking a look at the literature, one does not find better results. ... In terms of understanding the crisis, I do not think that more realistic behavioral assumptions would solve the problem. ...  In fact, I do not think that behavioral economics is a revolution. However, it has added some insights into human behavior and those insights, to the extent that they are verifiable, will be absorbed into the rational choice model. They will not lead to a radical change of the model. The real issues are how important are those insights and where do they apply?"

"So, for example, the explanation that consumers were somehow misled in the credit market, and that this in turn contributed to the financial crisis: I think there is very little empirical support for that. A lot of consumers were making pretty rational decisions, even those who were taking out mortgages with low interest rates and low down payments. Maybe they were going to default. But they did not default on their own capital. They defaulted on the lender’s capital. So I see very little evidence from this that consumers are not rational, in the sense that the rational choice model cannot explain most of what they did."

Becker on the use of mathematics in economics

"There is a lot of critique against mathematics in economics, from non-economists, from Austrian economists and from other groups, and I think it is misplaced. Mathematics can be a very useful servant; when it becomes the master, we are not in a good situation. However, I do not think it has become a master in economics. I think we made mistakes in understanding how economies move forward, even in understanding the pricing of derivatives. But one can make these mistakes, and plenty of mistakes have been made, without using any mathematics. Sociologists make a lot of mistakes without using mathematics. So I do not think that the problem is the use of mathematics per se. ... If we did it all verbally, would that improve our science? Economics was a verbal science until the 1940s and I would say we are now doing much better than the economists back then."

Becker on "takes a theory to beat a theory"

"If you want to abandon rational choice theory altogether, you have to substitute it with a new framework, and I do not see any new framework available at the moment—neither in the behavioral economics literature nor anywhere else—that has comparable explanatory and predictive power. That is the test. It is an old saying that you need a theory to beat a theory. That does not mean that you cannot extend the existing theory or modify it—you can and you should. As we learn more, we will modify rational choice theory. Maybe fifty years from now it will not be like rational choice theory anymore, because by then it will have been modified and changed in so many ways. That is how things evolve."

Becker on how economics looks at aggregates and market relations, not at particular individuals

"Economists can make use of individual data panels and other data based on observations about the individual. However, what we are interested in are aggregates and market relations. For example, if the political aim is to subsidize education, economists do not care about how you respond or I respond in particular. Maybe there are differences in how Germans respond or Americans respond, or how people who study at the University of Chicago respond in comparison to how students from Columbia University respond, and I guess that we would care about that. But not about how the individual responds. In my opinion, this is a fundamental difference between psychology and economics."

Monday, June 18, 2012

Certificates: An Alternative to College?

The share of people with job market certificates as their highest level of education has risen from 2% in 1984 to 12% by 2009. The number of certificates awarded has risen eight-fold in the last 30 years. What's going on here? Anthony P. Carnevale, Stephen J. Rose, and Andrew R. Hanson tell the story and discuss the evidence in "Certificates: Gateway To Gainful Employment and College Degrees," a June 2012 report from the Georgetown University Center on Education and the Workforce.


Defining certificates

"Certificates are often confused with industry-based certifications, like a Microsoft or Cisco certification, for example. The essential difference between a certificate and an industry-based certification is that the certificates are earned through seat time in a classroom and industry-based certifications are awarded based on performance on a test, irrespective of where the learning occurs. Certificates more closely resemble degrees: They are awarded mainly by public, two-year schools or private, for-profit, non-degree granting business, vocational, technical, and trade schools. Certificates are typically classified by length of program: the amount of time a program is designed to be completed in, typically for students who are enrolled on a full-time basis. Short-term certificates take less than a year; medium-term certificates take between one and two years to complete; long-term certificates take between two and four years. Short-term certificates are most common, accounting for 54 percent in the most recently available data. Medium-term certificates account for 41 percent of certificates, while the remaining 5 percent are long-term certificates."

What fields do certificates cover?
Some of the most common fields for certificates are health care, business and office management, computer and information services, auto mechanics, construction, metalworking, and electronics. As the table below shows, many of these certificate fields are dominated by either men or women.





Certificate holders tend to come from lower-income households and lower education levels, and to get their certificate at a young age.

About 44% of certificate-holders earned their certificate before the age of 22; 66% earned it before age 29. But this does leave one-third who earned the certificate in their 30s and 40s and older, often as a way of brushing up their job market credentials. Certificate holders tend to come from households with lower income levels, although the gap isn't as large as I might have guessed. For example, for those from a three-person household with less than $34,000 in income, about 18% have a certificate; for those from households with more than $102,000 in income, 10% have a certificate. About 34% of certificate-holders also have a college degree, while 40% have only a high school degree--or didn't finish a high school degree.

Tennessee is a leading example of a state with a focus on certificate programs, through its Tennessee Technology Centers (TTC).

"TTC is known for its high completion rates and high placement rates in high skill, high wage jobs. Over 70 percent of students complete their program of study, compared to just 13 percent at the state’s community colleges. Graduates are placed in field at an 83 percent rate and 95 percent of students pass certification exams on the first attempt."

"What stands out about TTC are its unique program structure, learning model, and support services. Students have one or two instructors over the course of their program and have an average of six hours of face time per day with those instructors. Students’ advancement through the program is based on mastery of skills rather than completion of individual course requirements. Students’ choices are significantly constrained; their only decisions are their program of study, whether they attend on a full- or part-time basis, and whether they attend during the day or evening."

"Remedial coursework, which often bogs down community college students, is replaced by a Technology Foundations course that all students are required to take. Students’ learning is largely self-paced. TTC buildings are designed with a focus on hands-on learning, with few traditional classrooms and more “lab” space. Employers of TTC graduates report that the quality of their work is similar to others with two to three years of work experience. In addition, TTC’s faculty, staff, and administration are all part of the support services offered to students. TTC reports the support system is critical to the success of students from low-income communities."

Certificates seem to boost earnings for those with lower skill levels, but there are complexities.

The report notes: "The median worker with a high school diploma earns slightly more than $29,000, while certificate holders earn slightly less than $35,000, meaning that the certificate premium over high school is 20 percent." This comparison includes only those for whom a certificate is their highest degree.
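
The 20 percent premium comes straight from the two medians; here is the arithmetic on the rounded figures in the quotation:

```python
# Certificate earnings premium over a high school diploma, from the
# rounded medians quoted above.
median_high_school = 29_000   # "slightly more than $29,000"
median_certificate = 35_000   # "slightly less than $35,000"

premium = median_certificate / median_high_school - 1
print(f"Certificate premium: {premium:.0%}")
# => 21% on these rounded figures; the report's exact medians yield 20%
```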

One difficulty with all such comparisons, of course, is that those who get certificates may have more persistence or drive than those with similar education who don't. It may not be the certificate per se that helps them earn more money, but rather their extra persistence and drive. On the other side, the reports from Tennessee that those with certificates perform similarly to those with 2-3 years of work experience suggest that the certificates offer a real boost in skill levels, and thus help some workers open a door to the labor market that might otherwise stay closed.

Another difficulty is that the amount by which a certificate improves earnings varies with many factors.  Certificates in health care, cosmetology, and food service often don't seem to increase pay by much. Other certificates like "police and protective services, computer and information services, agriculture, and business and office management offer large earnings premiums." Certificates have a bigger effect in some states than in others--perhaps reflecting differences in what is needed to get a certificate in different states, or in the average pay levels in different states.

Yet another concern is that any boost in pay from such certificates needs to be weighed against the cost, which can vary a lot. "The costs of attending public two-year schools are much less than private schools: less than $7,000 annually at public schools, $15,000 annually at private nonprofits, and almost $20,000 annually at private for-profits."

Finally, certificates may offer an earnings boost for those with lower levels of skill, but not for those with higher levels of skill. "Certificates are a high achievement for low-skilled adults, but a low achievement for high-skilled adults."

Certificates may have a role to play in helping the transition from school to jobs, especially for those with lower skill levels.

One theme that I seem to touch on every few months is that it is highly unlikely that the U.S. is going to dramatically expand the number of people attending college--at least college in its current form, which is quite costly. We need to be thinking about other ways to smooth the transition from high school to work for those who, whether by skills or interests or inclinations, aren't going to flourish in a college setting. Certificates can play a role here. Here's some of the conclusion from Carnevale, Rose, and Hanson:

"In an American economy where the advancement of technology and globalization means that a
high school diploma alone is no longer able to provide family-sustaining earnings to many, certificates represent one piece of a multi-pronged solution on the road to a workforce with 60 percent postsecondary attainment. Though certificates currently aren’t counted in many measures
of postsecondary attainment, often they provide the outcomes that degree-seeking students
are looking for: gainful employment. Certificates can also serve as the first rung on the ladder
to a college degree or as training for workers with degrees who are engaged in the process of
lifelong learning and career advancement. The rapid growth of certificates over the past 30 years
is a promising signal that students and institutions are recognizing the value of certificates at an
increasing rate. The main lesson from the available data on certificates is this: They are diverse. While it is important to look at the value of certificates in the aggregate, their diversity in purpose and value means that transparency is absolutely essential. By and large, certificates work, but they do
not work for everyone."


I posted on January 16, 2012, about "Certificate Programs for Labor Market Skills." For some other alternatives to the standard college degree, I posted last October 18 on "Apprenticeships for the U.S. Economy" and last November 3 on "Recognizing Non-formal and Informal Learning."

Friday, June 15, 2012

Teacher Attendance and Digital Cameras: An Experiment

One of the hot new trends in economics over the last decade or two is carrying out randomized controlled trials, and one of the most talked-about examples of this approach is now officially published. The study looks at whether providing teachers in low-income countries with a digital camera and then requiring them to take a time-stamped picture at the beginning and end of the school day--and then linking this to financial incentives--can reduce teacher absenteeism. The study is called "Incentives Work: Getting Teachers to Come to School," by Esther Duflo, Rema Hanna, and Stephen P. Ryan, and it appears in the June issue of the American Economic Review. The article isn't freely available on-line, but many academics will have access through their libraries.

Teacher absenteeism at rates of 20 and 30% and more has long been recognized as a severe problem in many low- and middle-income countries. For example, in a Winter 2006 article in my own Journal of Economic Perspectives, Nazmul Chaudhury, Jeffrey Hammer, Michael Kremer, Karthik Muralidharan, and F. Halsey Rogers wrote "Missing in Action: Teacher and Health Worker Absence in Developing Countries."

Here's the Duflo, Hanna, and Ryan description of this randomized controlled trial run under the auspices of a nongovernmental organization called Seva Mandir in India (footnotes and citations omitted):


"Seva Mandir runs about 150 NFEs [nonformal education centers] in the tribal villages of Udaipur, Rajasthan. Udaipur is a sparsely populated, hard-to-access region. Thus, it is difficult to regularly monitor the NFEs, and absenteeism is high. A 1995 study found that the absence rate was 40 percent, while our first observation in the schools included in our study (in August 2003, before the program was announced) found that the rate was about 35 percent.  Before 2003, Seva Mandir relied on occasional visits to the schools, as well as reports by the local village workers, to monitor teacher attendence. They then use  bimonthly teacher meetings to talk to delinquent teachers. Given the high absence rate, they were aware that the level of supervision was insufficient.
 

"Therefore, starting in September 2003, Seva Mandir implemented an external monitoring and incentive program on an experimental basis. They chose 120 schools to participate, with 60 randomly selected schools serving as the treatment group and the remaining 60 as the comparison group. In the treatment schools, Seva Mandir gave each teacher a camera, along with instructions for one of the students to take a photograph of the teacher and the other students at the start and end of each school day. The cameras had a tamper-proof date and time function that made it possible to precisely track each school’s openings and closings. Rolls were collected every two months at regularly scheduled teacher meetings, and payments were distributed every two months. If a camera malfunctioned, teachers were instructed to call the program hotline within 48 hours. Someone was then dispatched to replace the camera, and teachers were credited for the missing day.
 

"At the start of the program, Seva Mandir’s monthly base salary for teachers was Rs. 1,000 ($23 at the real exchange rate, or about $160 at purchasing power parity) for at least 20 days of work per month. In the treatment schools, teachers received a Rs. 50 bonus ($1.15) for each additional day they attended in excess of the 20 days (where holidays and training days, or about 3 days per month on average, are automatically credited as working days), and they received a Rs. 50 fine for each day of the 20 days they skipped work. Seva Mandir defined a “valid” day as one in which the opening and closing photographs were separated by at least five hours and at least eight children were present in both photos. Due to ethical and political concerns, Seva Mandir capped the fine at Rs. 500. Thus, salaries ranged from Rs. 500 to Rs. 1,300 (or $11.50 to $29.50)."
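
The incentive scheme is simple enough to write down as a function. This is a minimal sketch of the pay schedule exactly as the quotation describes it, and it reproduces the stated salary range of Rs. 500 to Rs. 1,300:

```python
# Seva Mandir pay schedule: Rs. 1,000 base for 20 valid days per month,
# Rs. 50 bonus per valid day beyond 20, Rs. 50 fine per day short of 20,
# with total fines capped at Rs. 500.
def monthly_salary(valid_days, base=1000, rate=50, target=20, max_fine=500):
    if valid_days >= target:
        return base + rate * (valid_days - target)
    return base - min(rate * (target - valid_days), max_fine)

for days in (8, 15, 20, 26):
    print(f"{days} valid days -> Rs. {monthly_salary(days)}")
# => 8 -> Rs. 500 (fine capped), 15 -> Rs. 750, 20 -> Rs. 1000, 26 -> Rs. 1300
```

The kink at 20 days plus the fine cap means the marginal reward for one more valid day is Rs. 50 over most of the relevant range, which is the margin the experiment is designed to move.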
What was the result? 

"The program resulted in an immediate and long-lasting improvement in teacher attendance rates in treatment schools, as measured through monthly unannounced visits in both treatment and comparison schools. Over the 30 months in which attendance was tracked, teachers at program schools had an absence rate of 21 percent, compared to 44 percent at baseline and the 42 percent in the comparison schools."


Of course, the Duflo, Hanna and Ryan analysis goes much deeper than this graph. The research paper is technical, and it slices and dices the data with care and expertise to consider alternative explanations, alternative payment schedules, and more. But one basic message is that the combination of monitoring with the digital cameras and providing financial incentives did seem to improve attendance. Another message is that when teachers were in school more, students had more days of schooling, and their test results showed significant gains in learning. "Two and a half years into the program, children from the treatment schools were also 10 percentage points (or 62 percent) more likely to transfer to formal primary schools, which requires passing a competency test."

Even teachers seemed pleased by the program, although they of course griped about some of the rules, such as the requirement that the photos be taken five hours apart on a given day.

"Overall, teachers did not complain about the principle of the program, although many teachers had some specific complaints about the inflexibility of the rules. For example, many did not like the fact that a day was not valid even if a teacher was present 4 hours and 55 minutes (the normal school day is 6 hours, but slack of 1 hour was given). On the other hand, many felt empowered as the onus of performing better was actually in their hands: “Our payments have increased, so my interest in running the center has gone up.” Others described how the payment system had made other community members less likely to burden them with other responsibilities once they knew that a teacher would be penalized if he did not attend school. This suggests that the program may actually have stronger effects in the long run, as it signals a change in the norms of what teachers are expected to do."


This study and this topic are heartbreaking in so many ways. The extremely high levels of teacher absenteeism are grim. The notion that a "successful" program for reducing teacher absenteeism gets the rate down from two out of every five days to one out of every five days is grim. The thought of impoverished children faithfully showing up each day in the face of this absenteeism (student attendance doesn't seem to be higher when teachers show up more often) is grim. Just to help keep the students in mind, here's one of the actual photos from the project, taken from an article by Duflo and Abhijit Banerjee in the Winter 2006 issue of my own Journal of Economic Perspectives called "Addressing Absence."



Perhaps most grim of all is that the entire Seva Mandir study was carried out in "nonformal education centers," rather than schools, and with "para-teachers," rather than regular teachers. The reason is that in India, as in many other low- and middle-income countries, teachers are a highly organized labor group that politicians don't dare to cross, and so proposals to raise the dismally low levels of teacher attendance aren't even attempted in the regular school sector. But there is something optimistic about the use of inexpensive digital technology as a way of addressing the problem--and perhaps even of eventually altering the social norms about what kind of teacher attendance should be expected.


Thursday, June 14, 2012

Next Merger Wave Coming? Hart-Scott-Rodino 2011

The Federal Trade Commission has released the Hart-Scott-Rodino Annual Report 2011. Each year, it summarizes the number of mergers and acquisitions, their size, and some trends in antitrust enforcement. At least in my reading, the data in the report hints that a new wave of mergers may be on the way.

The Hart-Scott-Rodino legislation requires that when businesses plan a merger or an acquisition above a certain size--a threshold set at $66 million for 2011--the transaction must first be reported to the Federal Trade Commission. The FTC can let the merger proceed, or request more information. Based on that additional information, the FTC can then let the merger proceed, block it, or approve it subject to various conditions (for example, requiring that the merged entity divest itself of certain parts of the business to preserve competition in those areas).
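As a rough illustration of how the reporting trigger works, here is a hypothetical sketch of the size-of-transaction test alone; the actual rules also involve a size-of-person test and thresholds that are adjusted annually, which the sketch ignores:

```python
# Hypothetical sketch of the size-of-transaction trigger only; the
# actual HSR rules also include a size-of-person test and annually
# adjusted thresholds, which are ignored here.
HSR_THRESHOLD_2011 = 66_000_000  # dollars

def must_report(transaction_value: float,
                threshold: float = HSR_THRESHOLD_2011) -> bool:
    """Rough check: does a deal's value exceed the reporting threshold?"""
    return transaction_value > threshold

print(must_report(80_000_000))  # True: the parties must file before closing
print(must_report(50_000_000))  # False: below the threshold, no filing needed
```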

Here are the total mergers reported for the last 10 years. You can see the wave of mergers and acquisitions that peaked in 2007--and what could be the start of a new merger wave in the last couple of years.



The total value of merger transactions reported under Hart-Scott-Rodino isn't completely comparable across years, because the threshold at which transactions need to be reported has risen over time. But for what it's worth, here's the pattern of the total dollar value of reported transactions since the late 1990s.
"The total dollar value of reported transactions rose dramatically from fiscal years 1996 to 2000, from about $677.4 billion to about $3 trillion. After the statutory thresholds were raised, the dollar value declined to about $565.4 billion in fiscal year 2002, and $406.8 billion in fiscal year 2003. This was followed by an increase in the dollar value of reported transactions over the next four years: about $630 billion in fiscal year 2004, $1.1 trillion in fiscal year 2005, $1.3 trillion in fiscal year 2006, and almost $2 trillion in 2007. The total dollar value of reported transactions declined to just over $1.3 trillion in fiscal year 2008, and to $533 billion in fiscal year 2009, increased to $780 billion in fiscal year 2010, and $979 billion in fiscal year 2011."

And here's a table showing the size distribution of the reported mergers in 2011. About 28% of the transactions exceeded $500 million; 11% exceeded $1 billion in size.

The FTC allows most mergers to proceed, which makes sense in a competitive market economy. As a general rule, those who run businesses are in a better position than government economists to plan strategies for their firms. In 2011, about 4.1% of the mergers reported under Hart-Scott-Rodino drew a request for additional information. At the end of the day, the antitrust authorities challenged 20 mergers in 2011. Thirteen of these challenges went to court; 11 were settled by consent decree. Perhaps the best-known of these cases in 2011 was the Justice Department's suit to block AT&T's proposed acquisition of T-Mobile. With the other seven challenges, two of the deals were abandoned, and the other five led to restructuring of the deal or changes in conduct.



There are a number of reasons to suspect that a new wave of mergers may be coming, nicely reviewed in an article in the Economist magazine of May 19 called "Surf’s up: Merger waves mean that markets can consolidate rapidly. The next one is coming." Mergers often happen when there are lots of struggling firms and excess capacity, which creates lots of willing sellers and good deals for possible buyers. When firms have a lot of cash on their balance sheets, as many do now, mergers start to look attractive. When interest rates are low, borrowing money to finance a merger or acquisition looks more attractive, and the alternative possible investments for corporate funds look less attractive.

Mergers also tend to happen in waves: once a few mergers happen in an industry, other companies start to feel as if they need to find a merger partner as well. In short, the broad economic conditions are ripe for a wave of mergers, and the data for 2011 suggest that such a wave may just be getting underway. The FTC and the antitrust division over at the U.S. Department of Justice should be on their toes.

Wednesday, June 13, 2012

Wealth by Distribution, Region, and Age

The Federal Reserve has published results from the most recent Survey of Consumer Finances, the triennial survey that is the canonical source for looking at the wealth of households. The results are in an article in the June 2012 issue of the Federal Reserve Bulletin called "Changes in U.S. Family Finances from 2007 to 2010: Evidence from the Survey of Consumer Finances," by a team of authors led by Jesse Bricker, Arthur B. Kennickell, Kevin B. Moore, and John Sabelhaus.

The headline finding from the report is that median household wealth fell from $126,000 in 2007 to $77,000 in 2010. Mean wealth is of course far higher than median wealth; it fell from $584,000 in 2007 to $499,000 in 2010. During a three-year period when housing prices and the stock market declined, a fall in wealth is expected. I'm still trying to digest the data, but here I'll focus on three patterns in the evolution of wealth that just happened to catch my eye: changes across the distribution of wealth, across regions, and across age groups. Here's a table with the mean and median household wealth values for the 2001, 2004, 2007, and 2010 surveys.
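A quick back-of-the-envelope check on those headline numbers, using the rounded figures above, shows just how much more sharply the median fell than the mean:

```python
# Rounded headline figures from the Survey of Consumer Finances, as
# quoted above (2007 and 2010, in dollars).
median_2007, median_2010 = 126_000, 77_000
mean_2007, mean_2010 = 584_000, 499_000

def pct_change(old, new):
    """Percentage change from old to new."""
    return 100 * (new - old) / old

print(f"median: {pct_change(median_2007, median_2010):.1f}%")  # about -38.9%
print(f"mean:   {pct_change(mean_2007, mean_2010):.1f}%")      # about -14.6%
# The median fell far more than the mean, a first hint that the losses
# were concentrated below the top of the wealth distribution.
```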


Changes in the Distribution of Wealth

Those in the 90th-100th percentiles of the wealth distribution had median wealth of $1,864,000 and mean wealth of $3,716,000 in 2010. That's also the part of the wealth distribution that had the smallest percentage decline in the median and the mean from 2007 to 2010.

"Median net worth fell for all percentile groups of the distribution of net worth, with the largest decreases in proportional terms being for the groups below the 75th percentile of the net worth distribution. From 2007 to 2010, the median for the lowest quartile of net worth fell from $1,300 to zero—a 100 percent decline; at the same time, the mean for the group fell from negative $2,300 to negative $12,800. For the second and third quartiles, the median and mean declines in net worth were smaller but still sizable; for example, median net worth for the second quartile fell 43.3 percent. Median and mean net worth did not fall quite as much for the higher net worth groups. For the 75th-to-90th percentile group, the median fell 19.7 percent while the mean fell 14.4 percent. For the wealthiest decile, the 11.0 percent decline in the mean exceeded the 6.4 percent decline in the median for that group; as was discussed earlier in the case of family income, this pattern of the changes in
the median and mean suggests that there was some compression of higher values in the wealth distribution."

Changes in Wealth by Region

The drop in housing prices hit especially hard in the western states, and the decline in median wealth was by far the highest in that region. Here's the Fed report: 

"Between 2007 and 2010, median net worth fell dramatically for families living in all regions of the country, but especially for those living in the West—a 55.3 percent decline. This pattern reflects the effect of the collapse of housing values in several parts of the West region. Median wealth in every other region fell 28.2 percent or more. As with the overall population and most other demographic groups discussed earlier, the decline in mean net worth within every region was smaller than the drop in the median. In the South and Midwest regions, the percentage decline in the median was about twice as large as the percentage decline in the mean, but in percentage terms, the median for the West fell four times as much as the mean."

Changes in Wealth by Age

The age 35-44 group experienced the biggest fall in net worth. Oddly enough, the over-75 age group saw a modest rise in mean net worth from 2007 to 2010. This must mean that the over-75s were not as heavily exposed to the drop in housing prices and the stock market. Here's the Fed:

"The survey shows substantial declines in median and mean net worth by age group between 2007 and 2010, with the exception that mean net worth rose modestly (1.3 percent) for the 75-or-more age group. The 35-to-44 age group saw a 54.4 percent decline in median net worth during the most recent three-year period, and the mean for that age group fell 36.4 percent. The wealth decreases for the less-than-35 age group were also large; the median fell 25.0 percent while the mean fell 41.2 percent. The declines in median and mean net worth for middle-aged families (the 45-to-54 and 55-to-64 age groups) were also large."

Tuesday, June 12, 2012

Eight Bon Mots from Milton Friedman



Allen S. Sanderson has a nice tribute marking the 100th anniversary of Milton Friedman's birth in "Remembering Milton," which appears in the Second Quarter 2012 issue of the Milken Institute Review. (Available on-line, but free registration required.) The article offers a number of nice reminiscences from Friedman's colleagues and students (two groups that often overlap).
Along with his status as one of the handful of most prominent economists of the 20th century, Friedman had an almost wicked rhetorical ability to turn a phrase. Here are a few of his one-liners, as collected by Sanderson:
Concentrated power is not rendered harmless by the good intentions of those who create it.

History suggests that capitalism is a necessary condition for political freedom. Clearly it is not a sufficient condition.

The problem of social organization is how to set up an arrangement under which greed will do the least harm; capitalism is that kind of a system.

With some notable exceptions, businessmen favor free enterprise in general but are opposed to it when it comes to themselves.

The free man will ask neither what his country can do for him nor what he can do for his country.

The case for prohibiting drugs is exactly as strong and as weak as the case for prohibiting people from overeating.

If you put the federal government in charge of the Sahara Desert, in five years there’d be a shortage of sand.

Only a crisis — actual or perceived — produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around.