The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel--aka "the Nobel prize in economics"--has been awarded in 2013 to Eugene Fama, Lars Peter Hansen, and Robert Shiller. Each of them has done absolutely top-notch academic work. But at least for me, it is difficult to describe this year's prize in a compact way involving a unified theme.
The prize committee said that the award was given "for their empirical analysis of asset prices." Hansen, for example, developed a high-powered statistical tool called the Generalized Method of Moments that has become standard in analyzing asset-market data. So yes, all three winners have done "empirical analysis of asset prices," but they often seem to be coming and going in different directions. In a way, this is a follow-up to the Nobel prize given in 1990 to Harry Markowitz, Merton H. Miller, and William F. Sharpe "for their pioneering work in the theory of financial economics." But the theoretical models of finance for which that prize was given told a story in a way that this prize--at least for me--does not.
Rather than try to create a cohesive narrative of the work behind this Nobel prize, I'll just try to give a flavor of the work that Fama, Hansen, and Shiller have done in the empirical analysis of asset prices: some main findings, innovations in methods, new data, and implications. I'll draw on the always-useful materials that the Nobel committee posts on its website, both the "Popular Information," a short essay called "Trendspotting in Asset Markets," and the "Advanced Information," a longer and more technical paper called (a bit optimistically!) "Understanding Asset Prices."
Findings: One theme is that movements in asset market prices, like stock prices, are not predictable in the short run, but are somewhat predictable in the long run. This may sound contradictory, but the two points are closely related. For example, the sharp fluctuations in the short run can lead to times when assets are distinctly overvalued or undervalued compared to long-run benchmarks. In the short run, this high level of volatility isn't predictable, but when asset prices get far out of alignment, they do tend to correct. Fama is responsible for a substantial body of empirical work starting in the 1960s that pointed out that asset prices are not predictable in the short run. Shiller is responsible for a body of work starting in the 1980s that emphasized that the short-run movements in asset prices were often so large that some bounceback at some point in the longer term becomes predictable. Of course, exactly when that longer-term bounceback will arrive is unclear. There's a piece of old investor wisdom sometimes attributed to Keynes: "Remember that the market can stay irrational longer than you can stay solvent."
Methods: Fama was an early leader in what is called the "event study" method: basically, look at the price of an asset before and after a certain event happens--like a corporate takeover, or a dividend payment, or news that affects future profits. Event studies can help to show whether an event was anticipated (did the price move before the event?), the economic value of the event (shown by the price shift), and whether the event had a permanent or temporary effect (did the price jump during the event and then return to the previous level?). Event studies have become a standard piece in the toolkit of empirical economists.
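To make the method concrete, here is a minimal sketch in Python of the event-study logic: compare a stock's average return before and after an event date, after subtracting the market return. The numbers, the market-adjustment step, and the three-day windows are all made-up illustrations, not taken from any actual study.

```python
# Minimal event-study sketch. All numbers are made-up illustrations.

def abnormal_returns(stock_returns, market_returns):
    """Abnormal return = stock return minus market return, day by day."""
    return [s - m for s, m in zip(stock_returns, market_returns)]

def event_study(stock_returns, market_returns, event_index, window=3):
    """Average abnormal return in the windows just before and just after
    the event day (the event day itself is excluded)."""
    ar = abnormal_returns(stock_returns, market_returns)
    before = ar[event_index - window:event_index]
    after = ar[event_index + 1:event_index + 1 + window]
    return sum(before) / len(before), sum(after) / len(after)

# Day 5 is the event (say, a takeover announcement): the price jumps
# on the event day and then stays put -- a permanent effect.
stock  = [0.001, 0.002, -0.001, 0.000, 0.001, 0.040, 0.002, 0.001, 0.000, 0.001]
market = [0.001, 0.001,  0.000, 0.001, 0.000, 0.002, 0.001, 0.001, 0.001, 0.000]

pre, post = event_study(stock, market, event_index=5)
print(f"average abnormal return before: {pre:.4f}, after: {post:.4f}")
```

A real event study would estimate "normal" returns with a market model over a much longer estimation window and average across many events; the sketch only captures the basic before-versus-after comparison.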
Hansen made use of a statistical method called the Generalized Method of Moments. I won't try to explain it here, partly because I'm fairly sure I'd mess it up, but here's one way of thinking about what it means. A "moment" refers to a way of characterizing a pattern of data. For example, if you are trying to describe a bunch of data, a first step might be to take the average: that is, to add up the data and divide by the quantity of data. A second step might be to think about how spread out the data is, which statisticians measure by the "variance." A third step might be to think about whether the distribution of data is symmetric around the mean, or tends to "lean" one way or the other, which is "skewness." Yet another step might be to look at whether the distribution of data has a substantial share of extreme values far from the mean, which to statisticians is "kurtosis." Each of these steps, and others still more complex, is called a statistical "moment." With this framework, you can look at a theory of what would cause asset market prices to change or vary, and then, using all the statistical moments, compare the actual pattern of asset market prices to what the theory predicts. Thus, this approach provides a workhorse statistical tool for looking at theoretical explanations of asset market prices and comparing them to data.
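For readers who like to see the arithmetic, here is a small Python sketch (standard library only) that computes those four moments for a toy sample. The data are made up, and the divide-by-n convention is just one common choice; this illustrates the moments themselves, not GMM, which generalizes the idea by weighting many such moment conditions.

```python
import math

def moments(data):
    """Return (mean, variance, skewness, kurtosis) of a list of numbers.

    Uses the population convention (divide by n); kurtosis is the raw
    fourth standardized moment, so a normal distribution gives about 3.
    """
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / n
    sd = math.sqrt(var)
    skew = sum(((x - mean) / sd) ** 3 for x in data) / n
    kurt = sum(((x - mean) / sd) ** 4 for x in data) / n
    return mean, var, skew, kurt

# A symmetric toy sample, so skewness comes out (essentially) zero.
m, v, s, k = moments([-2.0, -1.0, 0.0, 1.0, 2.0])
print(f"mean={m}, variance={v}, skewness={s:.4f}, kurtosis={k:.4f}")
```

Comparing sample moments like these with the moments implied by an asset-pricing model is, loosely speaking, the spirit of the method of moments.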
Data: Fama was one of the first to use the CRSP data, a dataset begun in the early 1960s by the Center for Research in Security Prices at the University of Chicago that includes information on a very wide array of securities prices and returns. Shiller, together with Karl Case, constructed the first high-quality index of housing prices in the 1980s. The index has not only been useful in looking at the housing market, but it has also formed the basis for financial contracts that allow hedging against falls in real estate prices.
Implications: Movements in asset prices matter enormously to individuals, to firms, to the financial sector, and to the macroeconomy as a whole. Empirical findings in this area thus often lead to real-world consequences. For example, the finding that stock prices don't have much short-term predictability is part of what triggered the enormous growth of index fund investing in the last few decades. The arguments that people are not fully rational in how they think about asset prices, and that people are often not well-diversified against risks like unemployment or falling house prices, have led to the invention of a wide array of financial instruments, as well as to public policies that encourage saving. The U.S. financial crisis and the Great Recession in recent years have led to numerous policy proposals, all of which are at least implicitly based on a theory of how asset prices are generated.
For a sample of the work from the 2013 Nobel laureates, I can recommend some articles from the Journal of Economic Perspectives, where I have worked as Managing Editor since 1987:
In the Summer 2004 issue, Eugene Fama and Kenneth R. French wrote "The Capital Asset Pricing Model: Theory and Evidence" (18:3, pp. 25-46).
In the Winter 2003 issue, Robert Shiller wrote "From Efficient Markets Theory to Behavioral Finance" (17:1, pp. 83-104).
In the Winter 1996 issue, Lars Peter Hansen and James J. Heckman wrote "The Empirical Foundations of Calibration" (10:1, pp. 87-104). That article is about calibration as a tool for macroeconomic modeling, rather than about the statistical work which is the emphasis of the prize.

In the Fall 2001 issue, Jeffrey M. Wooldridge made a heroic effort to explain Generalized Method of Moments in an only mildly mathematical way in "Applications of Generalized Method of Moments Estimation" (15:4, pp. 87-100).
For posts on the previous Nobel prize winners, see: The 2012 Nobel Prize to Shapley and Roth (October 17, 2012) and the 2011 Nobel Prize to Thomas Sargent and Christopher Sims (October 10, 2011).