Friday, October 17, 2014

Putting Terrorism Risks in Context

Counterterrorism policies are an acid test for anyone seeking to maintain a dispassionate attitude about costs and benefits. My gut reaction, which I suspect is shared by many, is that more spending to reduce the risks of terrorism is better. But John Mueller and Mark G. Stewart have taken up the gauntlet of reminding us that the core logic of costs and benefits applies here, too.  Their piece on  "Responsible Counterterrorism Policy" appears as Cato Institute Policy Analysis #755 (September 10, 2014).  Their article on "Evaluating Counterterrorism Spending" appears in the Summer 2014 issue of the Journal of Economic Perspectives (28:3, pp. 237-48), which is freely available online courtesy of the American Economic Association. (Full disclosure: I've been Managing Editor of the JEP since the first issue in 1987.) Their overall message sounds like this: 

[T]he United States spends about $100 billion per year seeking to deter, disrupt, or protect against domestic terrorism. If each saved life is valued at $14 million, it would be necessary for the counterterrorism measures to prevent or protect against between 6,000 and 7,000 terrorism deaths in the country each year, or twice that if the lower figure of $7 million for a saved life is applied. Those figures seem to be very high. The total number of people killed by terrorists within the United States is very small, and the number killed by Islamist extremist terrorists since 9/11 is 19, or fewer than 2 per year. That is a far cry, of course, from 6,000 to 7,000 per year. A defender of the spending might argue that the number is that low primarily because of the counterterrorism efforts. Others might find that to be a very considerable stretch.
An instructive comparison might be made with the Los Angeles Police Department, which operates with a yearly budget of $1.3 billion. Considering only lives saved following the discussion above, that expenditure would be justified if the police saved some 185 lives every year when each saved life is valued at $7 million. (It makes sense to use the lower figure for the value of a saved life here, because police work is likely to have few indirect and ancillary costs: for example, a fatal car crash does not cause others to avoid driving.) At present, some 300 homicides occur each year in the city and about the same number of deaths from automobile accidents. It is certainly plausible to suggest that both of those numbers would be substantially higher without police efforts, and accordingly that local taxpayers are getting pretty good value for their money. Moreover, the police provide a great many other services (or “cobenefits”) to the community for the same expenditure, from directing traffic to arresting burglars and shoplifters.
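The break-even arithmetic behind both comparisons can be reproduced in a few lines. This is just a sketch of the division implied by the passage above, using its $100 billion, $14 million/$7 million, and $1.3 billion figures:

```python
# Break-even lives saved per year = annual spending / value of a statistical life.
def break_even_lives(annual_spending, value_per_life):
    return annual_spending / value_per_life

# Counterterrorism: $100 billion/year, at $14 million or $7 million per saved life.
high_value = break_even_lives(100e9, 14e6)   # ~7,143 lives/year
low_value = break_even_lives(100e9, 7e6)     # ~14,286 lives/year

# LAPD: $1.3 billion/year, at $7 million per saved life.
lapd = break_even_lives(1.3e9, 7e6)          # ~185.7 lives/year
```

The counterterrorism budget would need to save three to four orders of magnitude more lives than the LAPD budget to clear the same bar.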
Mueller and Stewart push for thinking about terrorism as one risk among many risks. For example, here's a table showing the risk of death from various causes over various time periods. 

Of course, terrorism is arguably a more frightening or terrible risk than other risks, and thus one can make a plausible-if-arguable case that it's worth spending more to save 100 lives from terrorist attacks than it is worth to save 100 lives from, say, cancer or industrial accidents or traffic deaths. But saying that it makes sense to spend "more" against terrorism risks is not the same as arguing that terrorism risks should get a blank check. Mueller and Stewart also offer a cost-per-life-saved chart, which mixes together estimates of the effects of a number of past regulatory actions. 

One can quarrel with the specifics of these estimates in various ways. But overall, a sensible regulatory system should be seeking out ways to do more of the kinds of items that offer higher ratios of benefits to costs, and scaling back on the items that offer lower ratios of benefits to costs. Of course, this doesn't mean that all or even most counterterrorism spending is socially undesirable. But it does mean that when thinking about counterterrorism spending, we should be on the lookout for areas where resources might be redeployed more effectively. As one concrete example, Mueller and Stewart point out, the "Transport Security Administration’s Federal Air Marshal Service and its full body scanner technology together are nearly as costly as the entire FBI counterterrorism budget, but their risk reduction over the alternatives appears to be negligible." 

Indeed, the health risks of the body scanner technology may be greater than the terrorism risks it is meant to prevent. They write: 

It involves the risk that body scanners using x-ray technology will cause cancer. Asked about it, the DHS official in charge, John Pistole, essentially said that, although the cancer risk was not zero, it was acceptable. ... Since the radiation exposure delivered to each passenger is known, one can calculate the risk of getting cancer from a single exposure using a standard approach that, although controversial, is officially accepted by nuclear regulators in the United States and elsewhere. On the basis of a 2012 review of scanner safety, that cancer risk per scan is about 1 in 60 million. As it happens, the chance that an individual airline passenger will be killed by terrorists on an individual flight is much lower—1 in 90 million. ... [T]he risk of being killed by a terrorist on an airliner is already fully acceptable by the standards applied to the cancer risk from body scanners using x-ray technology. But no official has drawn that comparison.
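The comparison in that passage reduces to a ratio of two small probabilities. A minimal sketch, using only the 1-in-60-million and 1-in-90-million figures quoted above:

```python
cancer_risk_per_scan = 1 / 60e6     # cancer risk per x-ray body scan (2012 review)
terror_risk_per_flight = 1 / 90e6   # risk of death by terrorism, per flight

# The per-flight terrorism risk is lower than the per-scan cancer risk
# that officials have already deemed acceptable.
ratio = cancer_risk_per_scan / terror_risk_per_flight   # 1.5
```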
As Mueller and Stewart point out, we have in fact eased many rules about airplane travel in recent years, without people freaking out. For example, no one any longer asks if you packed your bag yourself and have had it with you at all times. Since 2005, air passengers have (technically) been allowed to take short scissors and knives on planes. Passengers no longer need to show identification at the airplane gate. The color-coded "alert" scheme has been ended. There has been a hiring freeze on air marshals since 2012. The young (under 13) and the old (over 74) no longer need to take off their shoes when going through screening. The PreCheck system is allowing as many as half of all flyers, including many frequent flyers, to go through airport security without taking off their belts, shoes, and jackets or removing liquids and laptops from their bags.

But despite these changes, in the U.S. as a whole, our approach to counterterrorism seems to largely ignore cost-benefit analysis. Indeed, Mueller and Stewart cite a 2010 report done by a panel of outside experts making this point. Other countries seem able to make different choices about counterterrorism spending. 
"The United Kingdom, which faces an internal threat from terrorism that may well be greater than that for the United States, nonetheless spends proportionately much less than half as much on homeland security, and the same holds for Canada and Australia. ... It is true that few voters spend a great amount of time following the ins and outs of policy issues, and even fewer are certifiable policy wonks. But they are grownups, and it is just possible they would respond reasonably to an adult conversation about terrorism."

Thursday, October 16, 2014

Thoughts on High-Priced Textbooks

High textbook prices are a pebble in the shoe of many college students. Sure, it's not the biggest financial issue they face, but it's a real and nagging annoyance that hinders performance for many students.

Here's how the U.S. Government Accountability Office (GAO) gave the basic facts in a 2013 report:
In 2005, based on data from the Bureau of Labor Statistics, we reported that new college textbook prices had risen at twice the rate of annual inflation over the course of nearly two decades, increasing at an average of 6 percent per year and following close behind increases in tuition and fees. More recent data show that textbook prices continued to rise from 2002 to 2012 at an average of 6 percent per year, while tuition and fees increased at an average of 7 percent and overall prices increased at an average of 2 percent per year.

A January 2014 report by Ethan Senack, published by the U.S. PIRG Education Fund and The Student PIRGs, pulls together and cites some of the evidence from recent years in a report called "Fixing the Textbook Market: How Students Respond to High Textbook Costs and Demand Alternatives." A sampling of his commentary:
According to the College Board, the average student spends $1,200 per year on textbooks and supplies. That’s as much as 39% of tuition and fees at a community college and 14% of tuition and fees at a four-year public institution. ... It is also important to note that just five textbook companies control more than 80% of the $8.8 billion publishing market, giving them near market monopoly and protecting them from serious competition. ... 65% of students said that they had decided against buying a textbook because it was too expensive. The survey also found that 94% of students who had foregone purchasing a textbook were concerned that doing so would hurt their grade in a course. ... Nearly half of all students surveyed said that the cost of textbooks impacted how many/which classes they took each semester.
David Kestenbaum and Jacob Goldstein at National Public Radio took up this question recently on one of their "Planet Money" podcasts. They say: "By popular demand: Why are textbooks so expensive?" For economists, a highlight is that they converse with Greg Mankiw, author of what is currently the best-selling introductory economics textbook, which as they point out is selling for $286 on Amazon. Maybe this is a good place to point out that I am not a neutral observer in this argument: The third edition of my own Principles of Economics textbook is available through Textbook Media. The pricing varies from $25 for online access to the book, up through $60 for both a paper copy (soft-cover, black and white) and online access.

Several explanations for high textbook prices are on offer. The standard arguments are that textbook companies are marketing and selling to professors, not to students, and professors are not necessarily very sensitive to textbook prices. (Indeed, one can argue that before the rapid rise in textbook prices in the last couple of decades, it made sense for professors not to focus too much on textbook prices.) Competition in the textbook market is limited, and the big publishers load up their books with features that might appeal to professors: multi-colored hardcover books, with DVDs and online access, together with test banks that allow professors to give quizzes and tests that can be machine-graded. At many colleges and universities, the intro econ class is taught in a large lecture format, which can include hundreds or even several thousand students, as well as a flock of teaching assistants, so some form of computerized grading and feedback is almost a necessity. Some of the marketing by textbook companies involves paying professors for reviewing chapters--of course in the hope that such reviewers will adopt the book.

The NPR show casts much of this dynamic as a "principal-agent problem," the name for a situation in which one person (the "principal") wants another person (the "agent") to act on their behalf, but lacks the ability to observe or evaluate the actions of the agent in a complete way. Principal-agent analysis is often used, for example, to think about the problem of a manager motivating employees. But it can also be used to consider the issue of students (the "principals") wanting the professor (the "agent") to choose the book that will best suit the needs of the students, with all factors of price and quality duly taken into account. The NPR reporters quote one expert saying that the profit margin for high school textbooks is 5-10%, because those purchasing decisions are made by school districts and states that negotiate hard. However, profit margins on college textbooks--where the textbook choice is often made by a professor who may not even know the price that students will pay--are more like 20%.

The NPR report suggests this principal-agent framework to Greg Mankiw, author of the top-selling $286 economics textbook. Mankiw points out that principal-agent problems are in no way nefarious, but come up in many contexts. For example, when you get an operation, you rely on the doctor to make choices that involve costs; when you get your car fixed, you rely on a mechanic to make choices that involve costs; when you are having home repairs done, you rely on a repair person or a contractor to make choices that involve costs. Mankiw argues that professors, acting as the agents of students, have legitimate reason to be concerned about tradeoffs of time and money. As he notes, a high quality book is more important "than saving them a few dollars"--and he suggests that saving $30 isn't worth it for a low-quality book.

But of course, in the real world there are more choices than a high-quality $286 book and a low-quality $256 book. The PIRG student surveys suggest that up to two-thirds of students are avoiding buying textbooks at all, even though they fear it will hurt their grade, or are shifting to other classes with lower textbook costs. If a student is working 10 hours a week at a part-time job, making $8/hour after taxes, then the difference between a $286 book and a $60 book is 28.25 hours--nearly three weeks of part-time work. I am unaware of any evidence in which students were randomly assigned different textbooks but otherwise taught and evaluated in the same way, and kept time diaries, which would show that higher-priced books save time or improve academic performance. It is by no means obvious that a lower-cost book (yes, like my own) works less well for students than a higher-cost book from a big publisher. Some would put that point more strongly.
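The work-hours arithmetic above is worth making explicit. A quick sketch, using the $8/hour after-tax wage and 10-hour work week assumed in the paragraph:

```python
# How many hours of part-time work cover the price gap between textbooks?
wage_after_tax = 8.0            # dollars per hour, per the example above
price_gap = 286 - 60            # big-publisher book vs. lower-cost alternative

hours_of_work = price_gap / wage_after_tax   # 28.25 hours
weeks_of_work = hours_of_work / 10           # at 10 hours/week: ~2.8 weeks
```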

A final dynamic that may be contributing to higher-priced textbooks is a sort of vicious circle related to the textbook resale market. The NPR report says that when selling a textbook over a three-year edition cycle, a typical pattern was that sales fell by half after the first year and again by half after the second year, as students who had bought the first edition resold the book to later students. Of course, this dynamic also means that many students who bought the book new are not really paying full price, but instead paying the original price minus the resale price. The argument is that as textbooks have increased in price, the resale market has become ever-more active, so that sales of a textbook in later years have dwindled much more quickly. Textbook companies react to this process by charging more for the new textbook, which of course only spurs more activity in the resale market.

A big question for the future of textbooks is how and in what ways they migrate to electronic forms. On one side, the hope is that electronic textbooks will offer expanded functionality, as well as being cheaper. But this future is not foreordained. At least at present, my sense is that the functionality of reading and taking notes in online textbooks hasn't yet caught up to the ease of reading on paper. Technology and better screens may well shift this balance over time. But even setting aside questions of reading for long periods of time on screen, or taking notes on screen, at present it remains harder to skip around in a computerized text between what you are currently reading and the earlier text that you need to be checking, as well as skipping to various graphs, tables, and definitions. To say it more simply, in a number of subjects it may still be harder to study an on-line text than to study a paper text.

Moreover, as textbook manufacturers shift to an on-line world, they will bring with them their full bag of tricks for getting paid. The Senack report notes:
Today’s marketplace offers more digital textbook options to the student consumer than ever. “Etextbooks” are digitized texts that students read on a laptop or tablet. Similar to PDF documents, e-textbooks enable students to annotate, highlight and search. The cost may be 40-50 percent of the print retail price, and access expires after 180 days. Publishers have introduced e-textbooks for nearly all their traditional textbook offerings. In addition, the emergence of the ereader like the Kindle and iPad, as well as the emergence of many e-textbook rental programs, all seemed to indicate that the e-textbook will alter the college textbook landscape for the better. However, despite this shift, users of e-textbooks are subject to expiration dates, on-line codes that only work once, page printing limits, and other tactics that only serve to restrict use and increase cost. Unfortunately for students, the publishing companies’ venture into e-textbooks is a continuation of the practices they use to monopolize the print market.

Wednesday, October 15, 2014

Snapshots of Global Wealth

Wealth is everywhere distributed far more unequally than income. After all, wealth is the value of assets accumulated over time, not the paychecks received. Many younger adults have decently high incomes, but once their debts are taken into account, they have little wealth. Many retirees have decently high wealth, but since they are no longer on the job, their income is low. That said, inequalities of wealth around the world are quite remarkable. The Credit Suisse Global Wealth Report 2014,  put together by a group at Credit Suisse in collaboration with outside economists Anthony Shorrocks and Jim Davies, documents many of the patterns and trends.

As a starting point, global wealth including financial and real estate assets by their calculation adds up to $263 trillion; for comparison, the world GDP was about $75 trillion in 2013. Unsurprisingly, the bulk of this wealth is in North America and Europe. Still, the differences in wealth per adult are striking: $340,000 wealth/adult in North America, $145,000 in wealth/adult in Europe, roughly $22,000 in wealth/adult in Latin America and in China, and about $5,000 in wealth/adult in Africa and India.

What about if we look at the distribution of wealth across the world? The report notes: "Our estimates for mid-2014 indicate that once debts have been subtracted, a person needs only USD 3,650 to be among the wealthiest half of world citizens. However, more than USD 77,000 is required to be a member of the top 10% of global wealth holders, and USD 798,000 to belong to the top 1%. Taken together, the bottom half of the global population own less than 1% of total wealth. In sharp contrast, the richest decile hold 87% of the world’s wealth, and the top percentile alone account for 48.2% of global assets."
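The thresholds just quoted amount to a simple lookup. Here's a sketch of a function that places a net-worth figure into its global bracket, using the mid-2014 cutoffs from the Credit Suisse report (the function name is my own):

```python
def global_wealth_bracket(net_worth_usd):
    """Place a net worth (USD, mid-2014, net of debts) into its global
    bracket, using the Credit Suisse thresholds quoted above."""
    if net_worth_usd >= 798_000:
        return "top 1%"
    if net_worth_usd >= 77_000:
        return "top 10%"
    if net_worth_usd >= 3_650:
        return "top half"
    return "bottom half"
```

By this reckoning, a debt-free owner of a median-priced American house is already well inside the global top 10%.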

It would be unwise to overinterpret what's shown on the left-hand side of this graph. Since you need $3,650 in wealth to be in the upper half of the world distribution, the lower half of the distribution is showing relatively small differences in wealth, much of it from countries that don't have especially good data. That said, the figure shows some interesting patterns: for example, the red blob of China's population at about the 6th to 9th decile of wealth is striking, as is the purple blob of India's population at below-median wealth.

Here's a pyramid of world wealth, showing that about 35 million people who hold more than $1 million in wealth account for 0.7% of the world population and 44% of global wealth. Unsurprisingly, given the earlier statistics, Americans make up by far the single largest nationality in this group. A middle-class American household that puts 10-15% of earnings into a retirement account every year, and which also buys a house and pays off the mortgage, is likely to qualify for being in the top 1% of world wealth by retirement.

And here's a close-up of the very top of the wealth pyramid: that is, the estimated 128,200 adults around the world who have more than $50 million in wealth.

What is the trend of wealth inequality over time? Income inequality is rising in many countries, but while inequalities of income can translate into inequalities of wealth over time, these patterns need not move in lockstep. If those with high incomes spend their money rather than save it, or accumulate some wealth and then split it up among children and charities, then a rising inequality of income can coexist with relatively little rise in the inequality of wealth. Here are the patterns of income and wealth inequality for the U.S. since 1910. The bottom blue line, for example, shows the share of total income going to those in the upper 1% of the annual distribution of income, and as has been documented many times, this share has doubled since the late 1970s. However, the share of wealth held by the top 1%, shown by the green-ish line second from the bottom, has risen only a small amount. At least for the U.S., at the very tip-top of the wealth distribution, the concentration of wealth is not matching the rising concentration of income. The brown and yellow lines show income and wealth held by the top 10%. Most of the rise in inequality of incomes is happening in the top 1% of the income distribution, but most of the rise in the inequality of wealth is happening not in the top 1%, but in the 90-99th percentiles.

Many countries of the world have seen a rise in wealth held by the top 10% of the income distribution in recent years that is larger than the rise in the U.S. economy. Here's a list of countries ranked by how much the share of wealth held by the top 10% has risen from 2000-2014, with China, Egypt, Hong Kong, Turkey, Korea, Argentina, India, and Russia at the top.

Part of the issue here, of course, is that inequality of wealth in the U.S. economy was already fairly high, so it didn't have as much room to rise. Part of the reason is that economic growth is often unevenly distributed across a country like China or Korea, so when growth takes off, the inequality of wealth rises, at least for a time. Another issue is that when countries have a combination of political turmoil and corruption, the economic suffering of the middle class makes the share of the wealth held by the top 10% look larger.

Of course, the broader lesson is that wealth inequality across a country is the result of a wide array of economic and policy factors. As the report notes:

Over longer periods, wealth inequality is influenced by economic growth, demographics, savings behavior, landholding, inheritance and government policy. Fast economic growth, for example, is expected to lead to the rapid rise of new businesses, raising inequality. This may account in part for the high level of wealth inequality evident in emerging market economies. Patterns of landholding and the transmission of land from generation to generation is an important consideration in developing countries, while inheritance more generally will tend to support higher levels of inequality, especially in slower growth economies. Governments can influence the level and distribution of wealth in many ways. Higher levels of taxation – on income, capital, property or inheritance – are all expected to reduce inequality in the longer run, although the repercussions on personal incentives are widely debated. Encouraging wealth creation through tax advantages given to retirement savings programs is less controversial and will tend to reduce inequality. Welfare state policies, including public pensions, help to reduce income inequality; somewhat perversely, however, they reduce the need for lower and middle income families to save, lowering their wealth and tending to raise wealth inequality.

Tuesday, October 14, 2014

The Nobel Prize to Jean Tirole

As of the mid-1980s, there were two main choices for government regulation of large public utilities like electricity transmission companies and water mains, and neither seemed adequate. One option was "cost-plus" regulation, where the government regulator looked at the costs of the regulated firm and then let the firm charge enough to make a modest profit. The problem, of course, is that this system gives a regulated firm no incentive to cut costs or even to provide quality service. Instead, the regulated firm has an incentive to build new plants and even to run up costs where possible, because the regulators will let the firms cover those costs--and make a bit more besides. 

The other option was "price cap" regulation, where the government regulator set a price that the regulated firm could charge for the next few years. Sometimes the price was set on a downward trajectory: that is, the firm would be required to charge slightly lower prices each year. However, if the regulated firm could find a way to cut costs or innovate more rapidly, then the regulated firm could earn higher profits, at least for several years until the regulators reset the price. The problem, of course, is that the regulated firm now has incentives to deceive the regulator about its costs, to get the price cap set high, and then to find ways of slashing costs and making high profits for a few years, before then trying again to persuade the regulator that costs are high when the price cap comes up for renewal. 

What advice does economic analysis offer for regulators in this situation? Jean Tirole has been awarded the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel for 2014 “for his analysis of market power and regulation," which tackles this and a number of related questions. The Nobel committee always posts some background and supporting material at its website. I'll draw on their "Popular Information"  and "Scientific Information" essays. 

In work in the 1980s, Tirole and his co-author Jean-Jacques Laffont tackled the question of how to regulate firms by spelling out models which clarified what regulators could know about firms. Specifically, one basic model argued that a regulator can observe costs of production at a firm, but the regulator cannot observe either the potential technologies that a firm has available for reducing costs, nor can it observe how much effort a firm has put into reducing costs. Thus, the challenge for regulators is to give firms an incentive to reveal this kind of information. In turn, this led to a number of insights.
For example, one potential approach is for the regulator to combine a cost-plus plan and some incentives. Thus, the firm announces its costs, and the regulator says: "Fine, we'll let you set prices in a way that lets you cover those costs. However, if you save money, we will let you add some portion of what you save to profits, and if you lose money, we will let you set prices in a way that recoups some of your losses." Under certain conditions, this kind of formula (the "optimal static mechanism") gives firms an incentive to reveal their true costs (and thus not to pump up costs in the way that pure cost-plus regulation would encourage) and also to seek out cost savings, but to share some of the gains of that cost saving with customers (because the firm doesn't get to keep 100% of its cost savings like it would under pure price cap regulation). 
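The flavor of such a sharing rule can be sketched numerically. This is a stylized illustration, not Laffont and Tirole's actual mechanism; the sharing fraction `b` is a made-up parameter:

```python
def firm_profit(announced_cost, actual_cost, b=0.5):
    """Stylized sharing rule: the price covers the announced cost, and the
    firm keeps a fraction b of any realized savings (or bears a fraction b
    of any overrun) relative to what it announced."""
    return b * (announced_cost - actual_cost)

# With b = 0 this collapses to pure cost-plus (no incentive to economize);
# with b = 1 the firm keeps every dollar saved, as under a pure price cap.
# At b = 0.5, a firm that announces 100 and cuts actual costs to 80
# keeps half the savings, and customers get the other half.
profit = firm_profit(100, 80)
```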

However, drawing up a specific contract for sharing of potential cost savings or cost overruns is a tricky business in practice. Thus, an alternative is for regulators to offer firms a choice: either the firm can choose to be regulated by a cost-plus approach, or it can choose to be regulated by a price cap approach. The idea here is that if a firm chooses the price cap approach, it is revealing to regulators that it sees a number of ways to cut costs; if it chooses the cost-plus approach, it is saying to regulators that it doesn't see a way to reduce costs. 
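The self-selection logic behind that menu can be sketched with two stylized contracts (the numbers and the function are illustrative, not from Tirole's model):

```python
def best_contract(achievable_cost, cap_price=95):
    """A firm picks whichever regime is more profitable: cost-plus
    (roughly zero economic profit) or a price cap (profit equals the
    capped price minus whatever cost the firm can actually achieve)."""
    cost_plus_profit = 0
    price_cap_profit = cap_price - achievable_cost
    if price_cap_profit > cost_plus_profit:
        return "price cap"   # choosing the cap reveals the firm sees savings
    return "cost-plus"       # choosing cost-plus reveals it sees none
```

A firm that can cut costs to 85 opts for the cap; one stuck at 100 takes cost-plus. Either way, the firm's choice reveals its private information to the regulator.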

Tirole's style of analysis is to work through the potential issues that can arise one at a time, modelling and analyzing each one separately and then in  various combinations. Thus, another issue that arises is what happens if there is a dimension of quality of service that the regulator cannot observe. A price cap approach would encourage the firm to save money by reducing quality, and so when the regulator has a hard time observing quality of service, it should offer the firm only modest opportunities to add to profits by cutting costs. 

Or what happens when we think of regulation not as a one-time choice, but as an agreement that both sides know will be renegotiated over time? The Nobel committee explains the potential problem that can arise: "Suppose the firm can make a sunk-cost investment in a technology which will generate future cost savings. If the firm invests and its costs fall, the regulator may be tempted to expropriate the investment by reducing the transfer to the firm (or tighten its price cap). If the firm anticipates this kind of hold-up problem then it may prefer not to invest. This problem is the largest when long-run investments are essential, as in the electricity and telecommunications industries."

A counterintuitive finding results in this setting: "In practice, a regulator may employ various strategies to remain ignorant about the firm’s cost. For example, the regulator may try to commit to infrequent reviews of a price cap. If this commitment is credible, the firm will have a strong incentive to minimize its production cost. However, if the commitment is not credible, the firm expects that any cost reductions will quickly trigger a tighter price cap, and the incentives for cost minimization [are weakened]."

Yet another issue that arises is the problem of "regulatory capture," which refers to a situation where over time, the regulators end up looking out for the regulated industry, rather than for consumers. This dynamic is a common one. After all, the regulated industry pays a huge amount of attention to the regulatory agency, doing its best to get sympathetic folks chosen. The regulated industry necessarily provides and shapes the information on which regulators rely. The regulated industry focuses with laser intensity on the fine print of every word and comma in the regulations. And after regulators have served for a few years, they often can end up working for consulting firms or for the regulated industry, helping deal with the same regulations they wrote in the first place. In contrast, most consumers don't have time or energy to focus on regulatory agencies in a way that would counterbalance these forces. 

Tirole's model argues that unsophisticated price-cap regulation--where the firm gets 100% of every dollar it saves under the price cap--offers the biggest financial incentives for regulatory capture. (Imagine regulators who set a price cap at a high level, enabling the firm to make high profits, and later are rewarded in their careers for having done so.) Thus, when concerns about regulatory capture are especially high, giving regulated firms a chance to earn very high profits--even by saving money and cutting costs!--may be unwise. 

Tirole is meticulous in going through possible factors, situations, and contingencies. I've focused here on the issue of regulating large firms like electricity companies, which is in some ways at the heart of his work. But Tirole also looks at how these lessons apply for regulation across a range of other industries, including "too big to fail" financial firms, the pricing of telecommunications networks, and others. Tirole has lots to say about how large firms might compete or subtly cooperate with each other, and applies these lessons across "horizontal" markets where similar firms compete with each other and "vertical" markets that involve supply chains between firms.

One interesting branch of Tirole's work looks at the "patent race" problem, which is the issue that if many firms feel that they have a good chance to get an important patent, they may spend so much on duplicative research and development that their efforts are a poor deal for society as a whole. On the other side, if firms feel that only one company is really well-positioned to get an important patent, they may choose not to try, and without the pressure of competition, insufficient research and development may be done in developing that technology. One of the policy controversies in this area is whether competitive firms should be able to set up "patent pools," in which firms pay a fee to use all the patents in the pool. Tirole's models suggest that patent pools are a good idea, but only if the patents in the pool can also be licensed individually--which prevents the patent pool from becoming a way to shut out small competitors who only need access to one or a few patents. 

Since the 1950s, economic work in the field of industrial organization has gone through three waves, with Tirole's work serving as a canonical representation of the third wave. As the Nobel essay explains, in the 1950s the standard approach was called the "Structure-Conduct-Performance (SCP) paradigm. The basic idea was that industry conditions (the number of sellers, the production technology, and so forth) determine industry structure, which determines firm conduct (pricing, investment and so forth), which in turn determines industry performance." Thus, a standard study in this approach might look at a measure of market concentration--like the share of total market output for the top four firms--and see how it was correlated with some measure of profitability for the industry. "Prescriptions for government policies, particularly with regard to horizontal mergers, reflected the SCP paradigm and were largely based on these concentration measures." 
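As a quick sketch of the kind of concentration measure an SCP-era study would correlate with profitability, here is the four-firm concentration ratio (CR4), the combined market share of the largest four firms. The sales figures below are hypothetical, purely for illustration.

```python
# Four-firm concentration ratio (CR4): the share of total industry sales
# held by the four largest firms. Sales figures are made up.

def cr4(firm_sales):
    top_four = sum(sorted(firm_sales, reverse=True)[:4])
    return top_four / sum(firm_sales)

industry_sales = [400, 250, 150, 100, 50, 30, 20]
print(round(cr4(industry_sales), 2))  # 0.9: a highly concentrated industry
```

An SCP study would regress industry profitability on a measure like this across many industries; the Chicago School critique described next is precisely that such a correlation cannot distinguish market power from efficiency.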

Then starting in the 1960s, a "Chicago School" approach pointed out that the correlation from these earlier studies didn't mean causation. Say that there is an industry with a small number of large firms that are earning high profits. Does this mean that the large firms are earning profits by unfairly squeezing the competition? Or earning profits by using their size to be very efficient? The typical structure-conduct-performance study couldn't really separate those possibilities. 

Tirole and others brought analytical tools of game theory and mechanism design to industrial organization. The Nobel committee writes: "In the 1980s, the game-theory revolution in IO [Industrial Organization] closed the circle by supplying the tools necessary to take these industry-specific conditions into account. Since then, game theory has become the dominant paradigm for the study of imperfect competition, providing a rigorous and flexible framework for building models of specific industries, which has facilitated empirical studies and welfare analysis." In this way, all of Tirole's specific models, illustrations, and investigations add up to a fundamental change in how economists think about regulation, competition policy, and antitrust--which is what makes this body of work Nobel-worthy. 

Monday, October 13, 2014

Taking Your Nobel Medal Through the Fargo Airport

Brian Schmidt was a co-winner of the 2011 Nobel Physics Prize for "the discovery of the accelerating expansion of the Universe through observations of distant supernovae." This is the discovery that leads physicists to infer the existence of "dark energy," which, although we have no direct way to measure or observe it, is apparently causing the expansion of the universe to speed up. At the Scientific American blog, Clara Moskowitz reports the story recently told by Schmidt about taking his Nobel medal to show his grandmother in Fargo, North Dakota -- a city on the eastern edge of North Dakota, on the border with my home state of Minnesota. Fargo has a little more than 100,000 people, which makes it the most populous city in North Dakota. Here's how Schmidt tells the story:

“There are a couple of bizarre things that happen. One of the things you get when you win a Nobel Prize is, well, a Nobel Prize. It’s about that big, that thick [he mimes a disk roughly the size of an Olympic medal], weighs a half a pound, and it’s made of gold.
“When I won this, my grandma, who lives in Fargo, North Dakota, wanted to see it. I was coming around so I decided I’d bring my Nobel Prize. You would think that carrying around a Nobel Prize would be uneventful, and it was uneventful, until I tried to leave Fargo with it, and went through the X-ray machine. I could see they were puzzled. It was in my laptop bag. It’s made of gold, so it absorbs all the X-rays—it’s completely black. And they had never seen anything completely black.
“They’re like, ‘Sir, there’s something in your bag.’
I said, ‘Yes, I think it’s this box.’
They said, ‘What’s in the box?’
I said, ‘a large gold medal,’ as one does.
So they opened it up and they said, ‘What’s it made out of?’
I said, ‘gold.’
And they’re like, ‘Uhhhh. Who gave this to you?’
‘The King of Sweden.’
‘Why did he give this to you?’
‘Because I helped discover the expansion rate of the universe was accelerating.’
At which point, they were beginning to lose their sense of humor. I explained to them it was a Nobel Prize, and their main question was, ‘Why were you in Fargo?’”

What Americans Know About Their Economy

An old friend of mine, when teaching a course in introductory economics, used to give students a list of 10 economic statistics that he wanted them to know on the final: basic stuff like the unemployment rate, the poverty rate, total federal spending, the level of the Dow Jones Industrial Average, and the like. The first 10 questions of the final just asked students to recite these statistics. He used to rant and laugh a bit about the results: "It's 10 easy points! I tell them the ten statistics in advance! And many of them have no clue!"

The Pew Research Center does regular national surveys of what Americans know about the news. Here are the questions and answers about economics from the September 25-28 survey.

Many Americans dramatically overstate the unemployment rate and the poverty rate.

Nearly half of those surveyed don't even venture a guess about who runs the Federal Reserve. Indeed, a certain number of those who did answer seem to confuse the Supreme Court with the Federal Reserve.

In many surveys over the years, Americans state that a huge share of U.S. federal spending goes to foreign aid: a common finding is that Americans think about 25% of US spending goes to foreign aid, when the correct answer is about 1%. And while interest payments on past federal borrowing are up in recent years, they are far short of Social Security payments.

One statistic where Americans do seem fairly accurate is the minimum wage.

There's an old saying often attributed to Daniel Patrick Moynihan that "Everyone is entitled to their own opinions, but not to their own facts." In public opinion surveys, of course, people are offered a chance to assert facts that reflect their own frame of mind. For example, Social Security is popular, while foreign aid is not, and therefore people (wishfully) hold the opinion that we must not be spending too much on Social Security, but are spending a lot on foreign aid that could be cut with little domestic pain. But it's obviously tricky to have a productive social discussion about economic issues when there is little agreement on central facts.

Saturday, October 11, 2014

More on the Origins of the Free Rider Idea

About a month ago, I posted on "How the Free Rider Idea Evolved," with an emphasis on how the "free rider" terminology was used about financial markets and labor union organizing in the 1940s and 1950s, before the term seeped over into its modern economic usage by way of James Buchanan and Mancur Olson. For those who enjoy tracking terms of art back to their burrows, here's some follow-up that varies from the pedestrian to the intriguing to the wonderful--but probably not true.

The earliest use of the "free rider" term seems to be straightforward, even boring. The Oxford English Dictionary offers this definition:
orig. U.S. Originally: a person who rides a train, bus, etc., without having paid for it (when others have). Now chiefly: a person who, or organization which, benefits (or seeks to benefit) in some way from the effort of others, without making a similar contribution.
The OED offers an example dating back to 1859 about a count of rail passengers, "not including commuters and free riders." Of course, someone who doesn't pay their fare on a mass transit system is a good example even for the modern classroom of a free rider.

Reader Charles Clarke sent me an intriguing example of "free ride" terminology in the economics literature from back in the 1920s. Specifically, John Maurice Clark wrote this in his 1926 book, Social Control of Business (University of Chicago Press, pp. 110-111):

A person who does not have a job or any other source of income, and who does not know where to get one and how to go about canvassing the market effectively, does not possess the substance of liberty. That person is in a position to be exploited and to be forced to make contracts which are essentially made under duress. In addition to this equipment of knowledge, a person needs some reserve funds in order to be able to hold off from the market and see if the second or tenth or twentieth bargain that offers will not be better than the first. When pockets are empty this search may mean real privation. Often one of the chief obstacles to a real canvass of the market consists of the costs of transportation, in which case "liberty and the pursuit of happiness" may require a free ride on the railroad. If this is not forthcoming from public funds, the employer's private interest may be strong enough to furnish it. But when the employer foots the bill, his interest in the case is likely to end when he gets enough labor, without regard to what happens to the laborers after he is through with them. For example, in this country there are various ways of getting harvest hands into the fields without requiring them to pay their railroad fares, but there is no system for getting them back again after the harvest is in.
Beyond a sort of OCD compulsiveness about noting a place where the "free ride" terminology is employed in the economics literature, this reference is conceptually intriguing. In one way, it's just a reiteration of the already-established use of referring to those who ride trains without paying. But in another way, it focuses on the modern issue of search costs for those seeking a new job. In my reading, it also offers just a hint that an industry like a railroad with high fixed costs and low marginal costs may sometimes charge more than is socially desirable, in an attempt to cover its fixed costs, when something closer to marginal cost pricing might offer social benefits.

One final source of the "free rider" image may be an example of the "too good to check" phenomenon. In his Intermediate Microeconomics textbook (Scott, Foresman and Company, 1990 edition, p. 572), Heinz Kohler wrote:
This unwillingness of individuals voluntarily to help cover the cost of a pure public good, and their eagerness to let others produce the good so they can enjoy its benefits at a zero cost, is called the free-rider problem. The name has its origin in the Old West, in the days of cattle rustling. The ranchers of Dodge City banded together to form a vigilante group to catch (and hang) cattle thieves. Everyone contributed to the cost of the security force on horseback--that is, until rustling had been sufficiently discouraged by the existence of this group. Then individual ranchers began to withdraw, realizing that they could benefit as much if they didn't pay. They became "free-riders" instead. Before long, the security force collapsed, and cattle rustling resumed. 
This story has a comforting concreteness, and certainly sounds as if it's referring to a real event. There are of course examples in the western United States of voluntary groups formed to fight cattle rustlers, with more or less success. It's a nice intuitive story of what the broader "free rider problem" means. But at least with a cursory search (the Oxford English Dictionary and some messing around with Google), I've not found any evidence that the actual term "free rider" originated in this context. Maybe some historian of the Old West can pass along a citation?