Thursday, March 31, 2016

The Economics of Daylight Savings Time

Where I live in Minnesota, the short days of December have less than 9 hours of daylight, with sunrise around 7:50 am and sunset around 4:40 pm. In contrast, the long days of June have about 15 1/2 hours of daylight, with sunrise around 5:30 am and sunset around 9:00 pm. But of course, those summer times for sunrise and sunset use Daylight Savings Time. If we didn't spring the clocks forward in March, summertime in Minnesota would feature a 4:30 am sunrise and an 8:00 pm sundown.

If I were a stronger and more flexible person, there would be no need for Daylight Savings Time. I would just rise with the summertime sun at 4:30 and take advantage of those extra daytime hours. But I don't synchronize my day to the sunlight. Instead, like most people, I have daily schedules that involve getting up at roughly the same time most days. For me, this is the strongest case for Daylight Savings Time: it shifts an hour of daylight that would otherwise occur when I'm asleep to a time of day, at a time of year, when I can enjoy it. For those who live closer to the equator, where the seasonal variation in length of day is less, I presume that Daylight Savings Time matters less. But for those of us in northern climates, long summer evenings are a nice counterbalance to those dismal winter days when you drive to work before sunrise and drive home from work after sunset.

However, discussions about the merits of Daylight Savings Time aren't usually focused on sweet summertime evenings. For example, the US Department of Transportation website lists three practical reasons for Daylight Savings Time and the longer summer evenings: it saves energy, reduces traffic deaths, and reduces crime. Austin C. Smith reviews the evidence on these claims before presenting his own research in "Spring Forward at Your Own Risk: Daylight Saving Time and Fatal Vehicle Crashes," which appears in the April 2016 issue of the American Economic Journal: Applied Economics (8:2, pp. 65–91). (The AEJ: Applied isn't freely available on-line, but many readers will have access through library subscriptions. Full disclosure: This journal is published by the American Economic Association, which also publishes the Journal of Economic Perspectives, where I work as Managing Editor.)

It's long been argued that Daylight Savings Time provides modest but real energy savings, but Smith cites some recent evidence that leans the other way. A standard method in empirical economics in recent years is to look for "natural experiments," which are situations where Daylight Saving Time was or was not imposed in a way that offers a chance for some comparisons. Thus, Smith writes:
"Kellogg and Wolff (2008) use a natural experiment in Australia where DST was extended in some states to accommodate the Sydney Olympics. They find that while DST reduce energy demand in the evening, it increases demand in the morning with no significant net effect. Kotchen and Grant (2011) make use of a quasi-experiment in Indiana where some Southern Indiana counties did not practice DST until 2006. Their work suggests that DST could actually increase residential energy use, as increased heating and cooling use more than offset the savings from reduced lighting use."
(For those who would like specific citations for these papers:

  • Kellogg, Ryan, and Hendrik Wolff. 2008. “Daylight time and energy: Evidence from an Australian experiment.” Journal of Environmental Economics and Management 56 (3): 207–20. 
  • Kotchen, Matthew J., and Laura E. Grant. 2011. “Does daylight saving time save energy? Evidence from a natural experiment in Indiana.” Review of Economics and Statistics 93 (4): 1172–85.) 

Smith's main focus is on how Daylight Savings Time affects traffic fatalities. He looks at data on all US vehicle crashes involving a fatality from 2002 to 2011. He uses two main comparisons: 1) he looks at days around the shift from Standard Time to DST each year, looking for a "discontinuity" or a jump in the rate of fatalities when the change happens; and 2) he compares dates that were covered by DST in some years but not in others--because the exact date of the shift varies from year to year. He argues that sleep disruption in the spring transition to DST imposes significant costs:
"DST impacts practicing populations through two primary mechanisms. First, it creates a short-term disruption in sleeping patterns following the spring transition. Using the American Time Use Survey, Barnes and Wagner (2009) find that Americans sleep 40 minutes less on the night of the spring transition, but they do not sleep a significant amount more on the night of the fall transition despite the extra hour. Second, DST creates darker mornings and lighter evenings than would be observed under Standard Time. ... In both specifications I find a 5–6.5 percent increase in fatal crashes immediately following the spring transition. Conversely, I find no impact following the fall transition when no significant shock to sleep quantity occurs. ...This suggests that the spring transition into DST is responsible for over 30 deaths annually ...The total costs of DST due to sleep deprivation could be orders of magnitude larger when worker productivity is considered ..." 
In passing, Smith also mentions a recent study about the effects of Daylight Savings Time on crime. The December 2015 issue of the Review of Economics and Statistics includes "Under the Cover of Darkness: How Ambient Light Influences Criminal Activity," by Jennifer L. Doleac and Nicholas J. Sanders (97:5, pp. 1093-1103). They find that cases of robbery drop by 7% in the weeks right after Daylight Savings Time begins.

Smith's article is also full of "did-you-know" tidbits about Daylight Savings Time:

Did you know that about 1.5 billion people around the world practice some form of Daylight Savings Time? Of course, this means that about 5.5 billion people around the world, presumably those who live closer to the equator, don't use it.

Did you know that farmers tend to oppose Daylight Savings Time? "DST is often mistakenly believed to be an agricultural policy. In reality, farmers are generally against the practice of DST because it requires them to work for an extra hour in the morning, partially in darkness, to coordinate with the timing of markets ..."

Did you know that the specific idea for Daylight Savings Time dates back to 1895? That's when "the formal procedure was proposed by George Vernon Hudson, an entomologist who wanted more light in the evenings to pursue his passion of collecting insects ..."

I'm a sleep-lover, and disruption to sleep patterns is something I feel in the center of my being. My personal experience with evening insects is pretty much limited to catching lightning bugs and slapping mosquitoes. But I'm with George Vernon Hudson in liking long summer evenings.

Wednesday, March 30, 2016

Grade Inflation Update: A's Rule

There's no systematic data collected on the distribution of college and university grades. Instead, such data is collected by individual researchers. Perhaps the largest and most prominent dataset on college grades over time--now with current data from over 400 schools with a combined enrollment of more than four million undergraduate students--comes from Stuart Rojstaczer and Christopher Healy. I wrote about the previous update of their data back in August 2014. They now have a substantial update of the data available at http://www.gradeinflation.com.

Their overall finding is that during the 30 years from 1983 to 2013, average grades for undergraduates at four-year colleges have risen from about 2.85 to 3.15 on a 4.0-point scale--that is, the average used to be halfway between a B- (2.7) and a B (3.0), and it's now halfway between a B (3.0) and a B+ (3.3).



Along the way, A's became the most common grade back in the mid-1990s. The prevalence of grades of C and D slumped back in the 1960s and has continued to slide since then. More recently, B's have been declining, too.



I've commented on the grade inflation phenomenon before, but perhaps a quick recap here is useful. I view grades as a mechanism for communicating information, and grade inflation makes that mechanism less useful--with consequences both inside academia and out.

For example, grade inflation is not equal across academic departments; it has been most extreme in the humanities and softer social sciences, and mildest in the sciences and the harder social sciences (including economics). Thus, one result of this differential grade inflation across majors is that a lot of freshmen and sophomores are systematically being told by their grades that they are worse at science than at other potential majors. The Journal of Economic Perspectives (where I work as Managing Editor) carried an article on this connection way back in the Winter 1991 issue: Richard Sabot and John Wakeman-Linn on "Grade Inflation and Course Choice" (pp. 159-170). For an overview of some of the additional evidence, see "Grade Inflation and Choice of Major" (November 14, 2011). In turn, when grade inflation influences the courses that students choose, it also influences the shape of colleges and universities--like which kinds of departments get additional resources or faculty hires.

Another concern within higher education is that in many classes, the range of potential grades for a more-or-less average student has narrowed, which means that an extra expenditure of effort can raise grades only modestly. With grade inflation, an average student is likely to perceive that they can get the typical 3.0 or 3.3 without much effort. So the potential upside from working hard is at most a 3.7 or a 4.0.

Grade inflation also makes grades a less useful form of information when students start sending out their transcripts to employers and graduate programs. As a result, the feedback that grades provide about the skills and future prospects of students has diminished, while other forms of information about student skills become more important. For example, employers and grad schools will give more weight to achievement or accreditation tests, when these are available, rather than to grades. Internships and personal recommendations become more important, although these alternative forms of information about student quality depend on networks that will typically be more available to students at colleges and universities with more resources and smaller class sizes.

As the data at the top suggests, efforts to limit grade inflation have not been especially successful. In "Grade Inflation: Evidence from Two Policies" (August 6, 2014), I wrote about a couple of efforts to reduce grade inflation. Wellesley College enacted a policy that the average grade in lower-level courses shouldn't exceed 3.3, which was somewhat successful at reducing the gap between high-grading and low-grading departments. Cornell University took a different tack, deciding to publish median grades for each course alongside student grades, so that it would be possible to compare how a student looked relative to the median. This plan seemed to worsen grade inflation, as students learned more about which courses were higher-grading and headed for those classes. For the Wellesley study, see Kristin F. Butcher, Patrick J. McEwan, and Akila Weerapana on "The Effects of an Anti-Grade-Inflation Policy at Wellesley College," in the Summer 2014 issue of the JEP. For the Cornell study, see Talia Bar, Vrinda Kadiyali, and Asaf Zussman on "Grade Information and Grade Inflation: The Cornell Experiment," in the Summer 2009 issue of the JEP.

Tuesday, March 29, 2016

The Economics of Pandemic Preparedness

Asking politicians to spend money to reduce the risk of future problems can be a hard sell. After all, it's hard to claim political credit for causing something not to happen. But when it comes to pandemics, the argument for planning ahead to reduce risks and costs seems especially strong. The Commission on a Global Health Risk Framework for the Future spells out the issues in its report, The Neglected Dimension of Global Security: A Framework to Counter Infectious Disease Crises, which is available here with free registration from the National Academies Press. This Commission was sponsored by a coalition of philanthropic and government groups. It included 17 members from 12 countries, who also got reactions from an oversight group and invited comments at public meetings. It was chaired by Peter Sands, who used to be the CEO of Standard Chartered and is now a Senior Fellow at the Mossavar-Rahmani Center for Business and Government at the Harvard Kennedy School, with Oyewale Tomori, President of the Nigerian Academy of Science, serving as vice-chair.

A very quick summary of the report would be that it suggests spending $4.5 billion per year to build up the world's response system to pandemics. It estimates that the costs of pandemics could average $60 billion per year over the next century.

Here's a description of costs (citations and footnotes omitted):
The World Bank has estimated the economic impact of a severe pandemic (that is, one on the scale of the influenza pandemic of 1918–1919) at nearly 5 percent of global gross domestic product (GDP), or roughly $3 trillion. Some might see this as an exaggeration, but it could also be an underestimate. Aggregate cumulative GDP losses for Guinea, Sierra Leone, and Liberia in 2014 and 2015 are estimated to amount to more than 10 percent of GDP. This huge cost is the result of an epidemic that, for all its horror, infected only about 0.2 percent of the population of Liberia, roughly 0.25 percent of the population of Sierra Leone, and less than 0.05 percent of the population of Guinea, with 11,287 total deaths. The Commission’s own scenario modeling, based on the World Bank parameters, suggests that during the 21st century global pandemics could cost in excess of $6 trillion, with an expected loss of more than $60 billion per year. 
Indeed, the economic impact of infectious diseases appears to be increasing as greater human and economic connectedness—whether through transnational supply chains, increased travel, or ubiquitous access to communication technologies and media—fuel contagion, both of the virus itself and of fear. Most of the economic impact of pandemics stems not from mortality but from behavioral change, as people seek to avoid infection. This behavioral change is driven by fear, which in turn is driven by a potent mix of awareness and ignorance. ...  The experience of SARS is instructive: viewed from the perspective of overall mortality, SARS infected “only” 8,000 people and killed less than 800. Yet the economic cost of SARS has been estimated at more than $40 billion. At the peak of SARS, Hong Kong saw a 66 percent reduction in airport arrivals and a 50 percent reduction in cinema admissions. ...
We should not become fixated on the probability of a “once-in-a-100-years” pandemic of 1918–1919 influenza severity. Much less virulent pandemics can still cause significant loss of life and economic impact. The influenza pandemics of 1958 and 1968, while far less deadly than the one in 1918–1919, are estimated to have cost 3.1 percent and 0.7 percent of global GDP, respectively. Potential pandemics, that is outbreaks or epidemics that could become pandemics if not effectively contained, can also have enormous impact. Ebola, an epidemic that looked as if it might have the potential to become a pandemic, has killed more than 11,000 people and cost more than $2 billion. While there is a high degree of uncertainty, the commission’s own modeling suggests that we are more likely than not to see at least one pandemic over the next 100 years, and there is at least a 20 percent chance of seeing 4 or more ... .
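To see how statements like these fit together, here is a toy check of my own devising, not the Commission's model: if severe pandemics arrived as a Poisson process with an assumed rate of about 2.5 events per century, both of the quoted probabilities would hold at once.

```python
from scipy.stats import poisson

rate_per_century = 2.5   # assumed arrival rate, purely for illustration
p_at_least_one = 1 - poisson.cdf(0, rate_per_century)   # P(one or more pandemics)
p_four_or_more = 1 - poisson.cdf(3, rate_per_century)   # P(four or more pandemics)
print(round(p_at_least_one, 2), round(p_four_or_more, 2))   # about 0.92 and 0.24
```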

What's the proposed solution? The report offers lots of detail, but the broad three-point plan is national action, global cooperation, and focused R&D:
Against this, we propose incremental spending of about $4.5 billion per year—a fraction of what we spend on other risks to humankind. ... 
Robust public health infrastructure and capabilities are the foundation of resilient health systems and the first line of defense against infectious disease outbreaks that could become pandemics. Yet far too many countries have failed to build the necessary capabilities and infrastructure. Even by their own internal assessments, 67 percent of World Health Organization (WHO) member states fail to meet the requirements of the 2005 International Health Regulations (IHR); objective external evaluations would almost certainly reveal even lower rates of compliance. ...
Although reinforcing the first line of defense at the country level is the foundation of a more effective global framework for countering the threat of infectious diseases, strengthening international coordination and capabilities is the next most vital component. Pandemics know no borders, so international cooperation is essential. Global health security is a global public good requiring collective action. ...  The Commission believes that an empowered WHO must take the lead in the global system to identify, prevent, and respond to potential pandemics. There is no realistic alternative. However, we believe that the WHO must make significant changes in order to play this role effectively. It needs more capability and more resources, and it must demonstrate more leadership. ...
This means accelerating R&D in a coordinated manner across the whole range of relevant medical products, including vaccines, therapeutics, diagnostic tools, personal protective equipment, and instruments. To ensure that incremental R&D has maximum impact in strengthening defenses against infectious diseases, we propose that the WHO galvanize the creation of a Pandemic Product Development Committee (PPDC) to mobilize, prioritize, allocate, and oversee R&D resources relating to infectious diseases with pandemic potential.
The report also points out that spending in these areas is likely to have substantial benefits even if a pandemic does not occur. "Moreover, the risks of spending too much or too little are asymmetric. Even if we have overestimated the risks of potential pandemics, money invested to mitigate them will still be money well spent. Most of the investments we recommend will help achieve other high-priority health goals, such as countering antimicrobial resistance and containing endemic diseases like tuberculosis and malaria. Yet if we spend too little, we open the door to a disaster of terrifying magnitude."

I would probably quibble with some of the details of the recommendations. For example, I think the report may underestimate the difficulties of having the World Health Organization take a leading role in this effort, and a different institutional framework might be needed. But that said, the case for acting to limit pandemics seems ironclad. As an example of the potential gains, the report points to the example of Uganda, which has managed to deal with multiple outbreaks of Ebola in the last 15 years:
Before the current West African Ebola outbreak, Uganda was the site of the largest Ebola outbreak in history, with 425 reported cases in 2000. Yet the outcome of this outbreak was distinctly more positive, because Uganda had in place an operational national health policy and strategic plan, an essential health services package that included disease surveillance and control, and a decentralized health delivery system. After 2000, Uganda’s leadership realized that, despite the successful containment of the outbreak, a focus on strengthening surveillance and response capacities at each level of the national system would greatly improve the country’s ability to respond to future threats. Uganda has since suffered four additional Ebola outbreaks, as well as one outbreak of Marburg hemorrhagic fever. However, due to its new approach, Uganda was able to markedly improve its detection and response to these public health emergencies.
All too often, we are most willing to invest in disaster prevention right after a severe disaster has occurred, right after an outbreak of disease or famine or natural disaster, when memories are still fresh. It would be nice if the pandemics we have already suffered, as well as the cautionary stories of SARS, Ebola, the Zika virus, and others, could lead to action before the next pandemic looms.

Monday, March 28, 2016

Affordable Care Act: Costs of Expanding Coverage

The most notable success of the Patient Protection and Affordable Care Act of 2010 is that it has reduced the number of Americans without health insurance. There's no magic in how this has happened: it's just a matter of spending an extra $110 billion.  The Congressional Budget Office lays out the costs in its March 2016 report, "Federal Subsidies for Health Insurance Coverage for People Under Age 65: 2016 to 2026."  CBO writes:

To separate the effects of the ACA’s [Affordable Care Act's] coverage provisions from those broader estimates, CBO and JCT [Joint Committee on Taxation] compared their current projections with estimates of what would have occurred if the ACA had never been enacted. In 2016, those provisions are estimated to reduce the number of uninsured people by 22 million and to result in a net cost to the federal government of $110 billion. ... Those estimates address only the insurance coverage provisions of the ACA, which do not generate all of the law’s budgetary effects. Many other provisions—such as various tax provisions that increase revenues and reductions in Medicare payments to hospitals, to other providers of care, and to private insurance plans delivering Medicare’s benefits—are, on net, expected to reduce budget deficits.
Dividing the $110 billion in additional spending by 22 million more people with health insurance works out to about $5,000 per person. For comparison, although the comparison should be taken only as rough and not as apples-to-apples, Medicaid spending is about $5,800 per enrollee. There's never been any secret that if the US was willing to spend an extra $100 billion or more, it could subsidize health insurance for a lot more people. 
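The back-of-the-envelope arithmetic is simple enough to show explicitly; the two inputs are just the CBO figures quoted above.

```python
net_cost = 110e9        # CBO: net federal cost of the ACA coverage provisions, 2016
newly_insured = 22e6    # CBO: reduction in the number of uninsured, 2016
print(net_cost / newly_insured)   # 5000.0 -- about $5,000 per newly insured person
```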

The CBO report offers an overview of health insurance coverage in the US, along with federal subsidies. Here's part of a figure from the report showing US health insurance coverage by category. Most of those under age 65 have employment-based coverage, but you can see the estimates for Medicaid and other programs. The CBO prediction is that there will be 27-28 million Americans without health insurance through 2026.




Here's part of a table from the CBO report showing federal subsidies for health insurance coverage. The two main categories in which the Patient Protection and Affordable Care Act of 2010 raised subsidies for health insurance are $64 billion for expanding Medicaid coverage, and $43 billion in subsidies for those with lower income levels to purchase insurance through the "marketplaces" (which seems to be the new name for what have often been called the "exchanges").



A few comments: 

1) The reduction in the number of people without health insurance leads to one of those situations where you can see the glass as half-full or half-empty. Some supporters of the 2010 legislation are emphasizing the reduction in the number of uninsured as a major success, which seems fair to me. However, if your expectation or your standard for comparison was that the 2010 law would come close to ending the issue of Americans without health insurance, it's disheartening that the law is apparently going to leave 27 million or so without health insurance. For the record, estimates of the effects of the law from the White House and from CBO back around 2010 all stated clearly that there would still be tens of millions without health insurance even after the law passed.

2) Overall, I'm personally in favor of spending an extra $110 billion to provide health insurance coverage for 22 million more people. Sure, part of me wonders whether some of those people might have preferred more bare-bones and cheaper health insurance, with some of that $5,000-per-person subsidy coming instead as income that could have been spent in other ways. But that political choice wasn't available.

3) The main issue for me isn't the extra $110 billion in spending, but rather how that additional spending was designed and implemented, and how it interacts with the health insurance and health care markets as a whole. If the fundamental goal of the act was to spend an extra $110 billion and subsidize insurance for 22 million more Americans, the law could have been a lot simpler and less invasive.

4) In particular, it's worth noting that the cost of the tax exclusion for employer-provided health insurance--that is, the provision in the US tax code that the value of health insurance from your employer isn't counted as income on which tax is owed--was $266 billion in 2016. The CBO report forecasts that this tax exclusion will reduce tax revenues by $460 billion by 2026. At the risk of grievously oversimplifying a vast literature on how to control health care costs, I'll note that as long as employer-provided health insurance is an untaxed fringe benefit worth hundreds of billions of dollars, it really shouldn't be a big surprise that health care spending remains so high and rising. In addition, the fringe benefit is of course worth the most to those with higher income levels, who are more likely to have health insurance through their employers, more likely to have that health insurance be fairly generous, and more likely to be in higher income tax brackets. Finding a way to trim back the tax exclusion of employer-provided health insurance by about half--with an emphasis on reducing the subsidy to those with higher income levels--could provide the revenues to subsidize health insurance for all remaining Americans who continue to lack it.

Friday, March 25, 2016

Stock Options: A Theory of Compensation and Income Inequality at the Top

As I've thought about the reasons behind the sharp rise in inequality of compensation at the top 1% of the income distribution during the last 25 years or so, I keep returning to the rise of executive stock options. In turn, the widespread granting of stock options was fueled in part by a 1993 law which will forever exemplify the Law of Unintended Consequences.  But although stock options helped generate a huge one-time jump in inequality of compensation, they now seem to have a diminishing role as a form of executive compensation.

As background for this theory, it's useful to review a few main facts and patterns about executive compensation and income inequality. For starters, here's a figure showing a rise in executive compensation as a multiple of the salary of a typical worker (taken from a January 2016 Credit Suisse report on trends in corporate governance). The multiple rises somewhat from the mid-1970s up through the early 1990s, but then rather abruptly takes off and fluctuates at a higher level after that point.

For a longer-term view going back to the 1940s, here's a figure showing the historical change in median pay levels of top executives at large US companies from Carola Frydman and Raven E. Saks in their article "Executive Compensation: A New View from a Long-Term Perspective, 1936-2005," which appeared in the Review of Financial Studies (23: 5, May 2010, pp. 2099-2138).



From the 1930s up through about 1980, most executive pay was in the form of salary-plus-bonus. But in the 1980s, long-term pay and stock options began to play a much larger role. Clearly, much of the rise in executive pay can be traced to new categories of compensation that were relatively small a few decades ago: "long-term pay" and stock options.

These rises in executive pay have a very similar timing to the rise in income inequality at the highest levels. Here are a couple of illustrative figures about inequality at the top income levels, based on tax return data, from "Striking it Richer: The Evolution of Top Incomes in the United States (Updated with 2014 preliminary estimates)," by Emmanuel Saez in a working paper dated June 25, 2015. First, this figure shows the share of total income received by the top 1%, the top 1%-5%, and the top 5%-10%. Notice that while there's a rise in the share of total income received by all three groups over time, the rise for the top 1% is by far the largest--and of course, the top 1% also by definition has fewer people than the other two groups. Also, while the increase for the top 1% dates back to the 1970s, there's an especially sharp rise in the share of income received by the top 1% in the early to mid-1990s. Since then, the share going to the top 1% is volatile, but doesn't show much upward trend.

And here's a figure showing the share of total income received at the very top, by the top .01%--in 2014, for example, this group had more than $9.75 million in income. Again, there's an increase for this group in the 1980s, but the really large jump comes in the 1990s. Again, there is volatility since the late 1990s, but not much of an upward trend.

So what happened back in the early 1990s that would have caused executive compensation and the use of stock options to take off? A common explanation traces the causes back to the 1992 election, when Bill Clinton made high executive pay a campaign issue. In 1993, a law was passed that capped at $1 million the amount of salary paid to certain top corporate executives that a company could deduct for tax purposes. However, the cap applied only to salaries, not to pay that was in some way linked to performance. In response to this law, there was a large-scale shift to paying top executives with stock options. The stock market more-or-less tripled in value during the five years from late 1994 to late 1999, and so those who had stock options did very well indeed. Christopher Cox, who had been a congressman back in 1993 but then had become chairman of the Securities and Exchange Commission, testified before Congress in 2006:
"[O]ne of the most significant reasons that non-salary forms of compensation have ballooned since the early 1990s is the $1 million legislative cap on salaries for certain top public company executives that was added to the Internal Revenue Code in 1993. As a Member of Congress at the time, I well remember that the stated purpose was to control the rate of growth in CEO pay. With complete hindsight, we can now all agree that this purpose was not achieved. Indeed, this tax law change deserves pride of place in the Museum of Unintended Consequences."
Just to be clear, I'm not arguing that stock options for top executives are the only factor affecting inequality of compensation. I suspect that what economists call skill-biased technical change--that is, the way in which information technology allowed some of those with high skills to add enormously to their productivity--also made a difference. The rise of globalization greatly increased the opportunities for some folks with certain skills, even as it diminished opportunities for others. But if the focus is on the rise in incomes at the very top, this change mirrors very closely in time the rise in stock options. In addition, my guess is that when the pay of top executives skyrocketed in the 1990s, it reset expectations about what it was "reasonable" to pay top employees in many settings, including at law firms, financial firms, and even in top academic jobs. And remember, the widespread rise of stock options largely arose out of an attempt to pass a law that would limit CEO pay!

The obvious argument for stock options, for many years now, has been that they create a link between executive pay and the interests of corporate shareholders. There's a good argument for incentive pay for top executives, so that they are willing to take tough decisions when needed and not just sit back and collect their paychecks. But the particular kind of incentive provided by stock options was often not well targeted. In the statistical literature on baseball players, there is a concept of a "replacement player"--that is, an average or "replacement" player that any team can get at any time. Baseball statheads then calculate VORP, or "value over replacement player." In a similar spirit, just running an average company and having average results doesn't seem as if it deserves special bonuses. High performance occurs when an executive performs better than a replacement of average quality--for example, if the firm is outperforming competitors, regardless of whether the overall market is up or down.

In this sense, stock options were often poorly designed to create a link with stock market performance. For some companies, handing out stock options apparently felt like paying with free money. Executives made money with stock options when the market as a whole went up in the 1990s, even if their company's stock rose by less than the average for the market or for their industry. There were concerns that stock options could encourage excessive risk-taking--for example, in the financial sector--because they offered a big upside for executives who did well, but no downside losses for those who did poorly. There were concerns that stock options encouraged steps to juice up the stock price--say, through having a company buy back its own stock or through empire-building mergers and acquisitions--rather than focusing on growing the company for the future. There was a scandal back in 2006 (when Cox was testifying before Congress) about how companies "backdated" options--that is, they falsely claimed that stock options had been granted to executives in the past, thus allowing the executives to cash in on increases in the stock price since that time. The amount of compensation actually being received by executives through stock options was often fairly opaque, at least to those outside the executive suites.
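To put the design point in the simplest possible terms, here is a toy comparison with made-up numbers (mine, not from any of the papers discussed here) between a conventional option struck at the grant-date price and an "indexed" option that pays only on performance relative to an industry benchmark--one way to reward something like value over a replacement executive.

```python
def conventional_option_payoff(stock_return, strike_return=0.0):
    # Pays off whenever the stock ends above the grant-date strike price.
    return max(stock_return - strike_return, 0.0)

def indexed_option_payoff(stock_return, benchmark_return):
    # Pays off only on the firm's return in excess of an industry benchmark.
    return max(stock_return - benchmark_return, 0.0)

# The firm's stock gains 40% while its industry benchmark gains 60%: the executive
# underperformed a "replacement," yet the conventional option still pays handsomely.
print(conventional_option_payoff(0.40))    # 0.4
print(indexed_option_payoff(0.40, 0.60))   # 0.0
```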

Whether or not you buy my argument about why the use of stock options became so widespread and my skepticism about their design, it's interesting to note that they seem to be in decline. Here's some evidence from "Measuring CEO Compensation," a March 2016 "Economic Brief" written by Arantxa Jarque and David A. Price of the Federal Reserve Bank of Richmond. The green bars show the share of large firms using stock options, but not grants of restricted stock. ("Restricted" stock means that it is not actually given to the executive until certain targets are met, which can include an executive staying with a company for a certain time, or the company meeting certain financial or product development targets.) The purple bars show the rising share of companies using only restricted stock, and not stock options, while the red bars show companies using both. The trend is pretty clear: stock options have become much less dominant.


As Jarque and Price write: 
In the 2000s, the data also show a marked shift from stock options to restricted stock grants or a combination of the two; this timing is consistent with firms reacting to policy changes unfavorable to options during that period. Among these changes were a provision of the Sarbanes-Oxley Act of 2002 requiring faster disclosure of option grants and the adoption in 2006 of accounting standards that mandated the treatment of option grants as expenses.
Just to reiterate the essence of their final sentence, stock options were popular when disclosure could be slower and the company didn't need to treat such payments as expenses (!). When those factors changed, apparently other methods of aligning the incentives of executives and shareholders, like grants of restricted stock linked to specific performance targets, became attractive.

If this theory about a strong connection from the widespread rise of stock options in the 1990s to the rise in income inequality at the very top contains some measure of truth, then a few insights follow. The especially sharp rise in income inequality at the top should be a more-or-less one-time event. As noted above, the rise in executive pay and in the share of income going to the top 1% or .01% jumped in the 1990s, and while it has fluctuated since then, the overall trend doesn't seem to be up since the late 1990s. Moreover, this shift to greater inequality at the very top seems unlikely to be repeated. There's a shift away from stock options toward restricted stock; moreover, the stock market doesn't seem likely to triple in the next five years, as it did from late 1994 through late 1999. Finally, there's probably a lesson in the fact that when executive pay was highly visible to outsiders, in the form of salary plus cash bonus, compensation for top executives didn't rise much, but as the form of compensation became more opaque with stock options and grants of restricted stock that were often based on a number of nonpublic underlying conditions, executive pay rose substantially.

Thursday, March 24, 2016

Dissecting the Concept of Opportunity Cost

Perhaps no topic is really simple, if you look at it closely. In the Winter 2016 issue of the Journal of Economic Education, a group of five economists put the intro-year topic of "opportunity cost" under a definitional microscope. The JEE is not freely available online, but many readers will have access through library subscriptions.

David Colander provides an introduction. Michael Parkin provides a brisk overview of the history of thought about opportunity cost, and argues that opportunity cost is more usefully based on the quantity of what is given up, rather than on attempts to calculate the value of what is given up. Daniel G. Arce, Rod O’Donnell, and Daniel F. Stone offer critiques. Parkin then seeks to synthesize the various views by arguing that opportunity cost as value can be interpreted in several different ways, and claims that one of these interpretations reconciles his view with the critics.

Parkin's first essay is full of interesting tidbits. For example, I had not known that the concept of opportunity cost dates back to an essay in the January 1894 issue of the Quarterly Journal of Economics called "Pain-Cost and Opportunity-Cost" (8: 2, 218-229). This reference sent me scurrying to the JSTOR archive, where I find that David I. Green starts off with a discussion of the true cost of labor, before moving to a more general argument:
But what is commonly summed up in the term "cost" is not principally the pain or weariness on the part of the laborer, and of long delay in consumption on the part of the capitalist; but the costs consists for the most part of the sacrifice of opportunity. ... By devoting our efforts to any one task, we necessarily give up the opportunity of doing certain other things which would yield us some return; and it is, in general, this sacrifice of opportunity that we insist upon being paid for rather than for any pain which may be involved in the work performed. ... But when we once recognize the sacrifice of opportunity as an element in the cost of production, we find that the principle has a very wide application. Not only time and strength, but commodities, capital, and many of the free gifts of nature, such as mineral deposits and the use of fruitful land, must be economized if we are to act reasonably. Before devoting any one of these resources to a particular use, we must consider the other uses from which it will be withheld by our action; and the most advantageous opportunity which we deliberately forego constitutes a sacrifice for which we must expect at least an equivalent return.
But Parkin's main focus is more on the concept than on the history. He writes:
The idea of opportunity cost helps to address five issues that range from the simple and basic to the complex and sophisticated. The simplest and most basic purpose of opportunity cost is to express the fundamental economic problem: Faced with scarcity, we must make choices, and in choosing we are confronted by cost. The second purpose, equally basic, is to see cost as an alternative forgone rather than dollars of expenditure. Its third purpose is to identify, and to correctly establish, what the forgone alternative is. Its fourth purpose is to use the appropriately identified cost alongside an appropriately identified benefit to make (and to analyze) a rational choice. Its fifth purpose, and its most complex and sophisticated, is to derive theorems about the determination of relative prices.
He gives examples from the 1920s and 1930s up through modern textbooks to illustrate that while some writers have preferred to think of opportunity cost in terms of quantity foregone, others have preferred to think of value foregone. He writes:
The two definitions of opportunity cost (hereafter OC) differ in what is forgone. For the “quantity” version, it is the highest-valued alternative: the physical thing or things that otherwise would have been chosen. For the “value” version, it is the value of the highest-valued alternative: the value of the physical thing or things that otherwise would have been chosen.
Parkin argues that the quantity measure is most useful, in part because using "value" adds an additional and potentially controversial step to the concept.

Daniel Arce argues that value-based calculations of opportunity cost are useful in certain contexts, like looking at shadow prices or deriving a measure of economic profit. Along the way, he makes the interesting claim that teaching and learning about opportunity cost suffers less from imprecise definition than from lack of good old-fashioned examples. Arce writes:
In over 25 years of teaching principles of economics, I have used at least 10 different textbooks and cannot recall a single student expressing concern that the textbook’s treatment of opportunity cost was ambiguous, nor have I had any difficulties with how opportunity cost is operationalized in the associated test banks. What I have had trouble with is the dearth of examples in textbooks and test banks. Opportunity cost is a major takeaway in principles of economics and in managerial economics for MBAs. Yet, I can think of no textbook in either area in which the coverage of opportunity cost would sustain even half a lecture. With consulting firms earning millions of dollars calculating economic profits for their clients (where the hard work is in identifying opportunity costs), how can this be? This is compounded by the virtual absence of any discussion of opportunity cost in undergraduate and MBA textbooks’ coverage of marginal decision making (e.g., utility maximization, cost minimization, and profit maximization) and a similar lack of material on marginal decision making when opportunity cost is covered.
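In that spirit, here is the kind of toy economic-profit calculation (my example, not one from the article) in which identifying the opportunity costs is the whole exercise.

```python
revenue = 500_000
explicit_costs = 400_000                      # rent, wages, materials actually paid
accounting_profit = revenue - explicit_costs  # 100,000

# The opportunity costs: what the owner's time and capital could earn elsewhere.
forgone_salary = 90_000                       # salary available in the next-best job
forgone_return_on_capital = 0.05 * 200_000    # 5% return on $200,000 of owner's capital

economic_profit = accounting_profit - (forgone_salary + forgone_return_on_capital)
print(accounting_profit, economic_profit)     # 100000 0.0
```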

Rod O'Donnell and Daniel F. Stone offer further arguments in favor of the value criterion: for example, that it is especially useful in talking about interest rates (or foregone rate of return) as an opportunity cost, and that using value terms for opportunity cost offers the advantage of making comparisons across similar units.

Parkin argues in his closing essay that the "value" approach to opportunity cost can be divided into two approaches as well: 
For the “value” version of OC, what is forgone is the highest amount that would be willingly paid for the forgone alternative. Value is willingness to pay. ... Another commonly used value concept is the number of dollars that must be paid at market prices to buy a defined basket of goods and services. ... For the “quantity” version of OC, it is the physical basket (not its dollar value) that is the defining feature. The dollars are merely a convenient measuring rod. To be clear, for the “value” version of OC, the dollars represent the largest amount that would be willingly paid, while for the “quantity” version of OC, the dollars represent the amount that must be paid.
Parkin argues that with this distinction in mind, all the writers are in agreement. I suspect the other writers would not agree with this assessment! But I'd like to add my agreement to Arce's point that opportunity cost is a powerful idea that gets short shrift in the classroom, both because it is less tied into other concepts than it could be, and also because it lacks a wide range of strong examples that help give students a sense of its many applications.

Wednesday, March 23, 2016

Why Economists are Unwelcome in Hell: Bulletin Board Material

Now and then, my plan is to post a cartoon or a quotation that is the kind of thing I'd tack up for a while on the bulletin board outside my office door. Some teachers of economics might also like adding items like these to their PowerPoint slides. This is from the Saturday Morning Breakfast Cereal website by Zach Weinersmith.

 

Tuesday, March 22, 2016

Is the Essence of Globalization Shifting?

Since the Great Recession of 2007-2009, a number of the standard economic measures of globalization have declined--flows of goods, services, and finance. But other aspects of globalization are on the rise, like communication and the ability of small firms and individuals to participate in international markets. The McKinsey Global Institute explores these changes in a March 2016 report, Digital globalization: The new era of global flows, written by a team led by James Manyika, Susan Lund, Jacques Bughin, Jonathan Woetzel, Kalin Stamenov, and Dhruv Dhingra.

Here's a rough measure of the recent drop in standard measures of globalization. The bars show international flows of goods, services, and finance, measured in trillions of dollars. The line shows the total flows as a share of global GDP.


Of course, it's easy to come up with reasons why this slowdown in standard measures of globalization is just a short-term blip; the recession slowed down trade, the fall in the price of oil and other commodities reduced the value of trade, China's growth is slower, and so on. But the report argues that some more fundamental factors are shifting:
"Yet there is more behind the slowdown in global goods trade than a commodities cycle. Trade in manufactured goods has also been flat to declining for both finished goods and intermediate inputs. Global container shipping volumes grew by 7.8 percent from 2000 to 2005, but from 2011 to 2014, growth was markedly slower, at only 2.8 percent. Multiple cyclical factors have sapped momentum in the trade of manufactured goods. Many of the world’s major economies—notably China, Europe, and Japan—have been experiencing slowdowns. China, for example, posted almost 18 percent annual growth in both imports and exports from 2000 to 2011. But since then its export growth has slowed to 4.6 percent, and imports have actually shrunk. However, there may be structural reasons in global manufacturing that explain decelerating growth in traded goods. Our analysis find that global consumption growth is outpacing trade growth for some types of finished goods, such as automobiles, pharmaceuticals, fertilizers, and plastic and rubber goods. This indicates that more production is happening in the countries where the good is consumed. This may reflect the “reshoring” of some manufacturing to advanced economies as well as increasing consumption in emerging markets where these goods are produced."
The McKinsey report argues that the form of globalization is shifting. Much of the discussion emphasizes international flows of data and information crossing borders, but there is also some emphasis on international flows of people as tourists, migrants, and students, as well as changes in e-commerce. For example, the report states:
"The world has become more intricately connected than ever before. Back in 1990, the total value of global flows of goods, services, and finance amounted to $5 trillion, or 24 percent of world GDP. There were some 435 million international tourist arrivals, and the public Internet was in its infancy. Fast forward to 2014: some $30 trillion worth of goods, services, and finance, equivalent to 39 percent of GDP, was exchanged across the world’s borders. International tourist arrivals soared above 1.1 billion. And the Internet is now a global network instantly connecting billions of people and countless companies around the world. Flows of physical goods and finance were the hallmarks of the 20th-century global economy, but today those flows have flattened or declined. Twenty-first-century globalization is increasingly defined by flows of data and information. This phenomenon now underpins virtually all cross-border transactions within traditional flows while simultaneously transmitting a valuable stream of ideas and innovation around the world.

"Our econometric research indicates that global flows of goods, foreign direct investment, and data have increased current global GDP by roughly 10 percent compared to what would have occurred in a world without any flows. This value was equivalent to $7.8 trillion in 2014 alone. Data flows account for $2.8 trillion of this effect, exerting a larger impact on growth than traditional goods flows. This is a remarkable development given that the world’s trade networks have developed over centuries but cross-border data flows were nascent just 15 years ago."
What do some of these data flows look like?


Cross-border data flows are the hallmarks of 21st-century globalization. Not only do they transmit valuable streams of information and ideas in their own right, but they also enable other flows of goods, services, finance, and people. Virtually every type of cross-border transaction now has a digital component. ...
Approximately 12 percent of the global goods trade is conducted via international e‑commerce, with much of it driven by platforms such as Alibaba, Amazon, eBay, Flipkart, and Rakuten. Beyond e‑commerce, digital platforms for both traditional employment and freelance assignments are beginning to create a more global labor market. Some 50 percent of the world’s traded services are already digitized. Digitization also enables instantaneous exchanges of virtual goods. E-books, apps, online games, MP3 music files and streaming services, software, and cloud computing services can all be transmitted to customers anywhere in the world there is an Internet connection. Many major media websites are shifting from building national audiences to global ones; a range of publications, including The Guardian, Vogue, BBC, and BuzzFeed, attract more than half of their online traffic from foreign countries. By expanding its business model from mailing DVDs to selling subscriptions for online streaming, Netflix has dramatically broadened its international reach to more than 190 countries. While media, music, books, and games represent the first wave of digital trade, 3D printing could eventually expand digital commerce to many more product categories.
Finally, “digital wrappers” are digital add-ons that enable and raise the value of other types of flows. Logistics firms, for example, use sensors, data, and software to track physical shipments, reducing losses in transit and enabling more valuable merchandise to be shipped and insured. Online user-generated reviews and ratings give many individuals the comfort level needed to make cross-border transactions, whether they are buying a consumer product on Amazon or booking a hotel room halfway around the world on Airbnb, Agoda, or TripAdvisor. ...
Small and medium-sized enterprises (SMEs) worldwide are using the “plug-and-play” infrastructure of Internet platforms to put themselves in front of an enormous global customer base and become exporters. Amazon, for instance, now hosts some two million third-party sellers. In countries around the world, the share of SMEs that export is sharply higher on eBay than among offline businesses of comparable size. PayPal enables crossborder transactions by acting as an intermediary for SMEs and their customers. Participants from emerging economies are senders or receivers in 68 percent of cross-border PayPal transactions. Microenterprises and projects in need of capital can turn to platforms such as Kickstarter, where nearly 3.3 million people representing nearly all countries made pledges in 2014. Facebook estimates that 50 million SMEs are on its platform, up from 25 million in 2013; on average 30 percent of their fans are from other countries. To put this number in perspective, consider that the World Bank estimated there were 125 million SMEs worldwide in 2010. For small businesses in the developing world, digital platforms are a way to overcome constraints in their local markets.
As one vivid example of international data flows from the report, the number of people on the most popular online social media platforms exceeds the population of most countries--showing that these platforms are crossing lots of international borders.



Another example involves the rise in digital phone calls: "We also analyzed cross-border digital calls, which have more than doubled from 274 billion call minutes in 2005 to 569 billion call minutes in 2014. This rising volume is primarily attributable to the expanded use of voice over Internet protocol (VoIP) technology. Since 2005, VoIP call minutes have grown by 19 percent per year, while traditional call minutes have grown by 4 percent. Additionally, cross-border computer-to-computer Skype communications have soared, with call minutes increasing by some 500 percent over the past five years. In 2014,  computer-to-computer Skype call minutes were equal to 46 percent of traditional phone call minutes."

Although the report doesn't especially emphasize how flows of people have increased, I found this graphic interesting. Over the last few decades, the change in the number of migrants and refugees has largely reflected growth of the overall world population. But many more people are having a shorter-term international experience, either as students or as travelers.



What's the bottom line on these changes? It's already true that international trade in goods has shifted away from being about final products, and instead become more a matter of intermediate products being shipped along a global production chain. Now, information in all its forms (design, marketing, managerial expertise) is becoming a bigger share of the final value of many physical products. Moreover, a wired world will be more able to buy and sell digital products. New technologies like 3D printing will make it easier to produce many physical products on-site, wherever they are needed, by shipping only the necessary software, rather than the product itself. The greater ease and cheapness of international communication will presumably strengthen many person-to-person cross-border ties, which is not just a matter of broadening one's social life, but also means a greater ability to manage business and economic relationships over distance.

It's interesting to speculate on how these shifts in globalization, as they percolate through economies around the world, will affect attitudes about globalization. Imagine a situation in which globalization is less about big companies shipping cars and steel and computers, and more about small and medium companies shipping non-standard products or services. And imagine a situation in which globalization becomes less faceless, because it will be so much easier to communicate with those in other countries--as well as so much more common to visit in person as a student or tourist. Changes in how globalization manifests itself seem sure to shake up how economists, and everyone else, view its costs and benefits.

Monday, March 21, 2016

The Next Big M&A Boom is Here

The conditions have seemed ripe for a boom in mergers and acquisitions for a few years now. Lots of companies are sitting on piles of cash. Interest rates are low, so borrowing money to complete a deal is cheap. Whether in the national or the global economy, production chains are being reshaped by waves of new technology, outsourcing, and in-sourcing. Such economic shakeups can affect the shape of firms and their perceptions of whether a merger or acquisition makes sense. But in 2015, many of these forces came together and the next big mergers and acquisitions boom seems to have arrived.

Here's a comment and figure taken from Congressional testimony on March 9, 2016, by William Baer, who is Assistant Attorney General at the Antitrust Division of the U.S. Department of Justice, before the US Senate (more specifically, before the Subcommittee on Antitrust, Competition Policy and Consumer Rights, of the Committee on the Judiciary). Baer said:
"The merger wave is back. Big time. Global merger and acquisition volume has reached historic levels in terms of number, size and complexity. In FY 2015, 67 proposed mergers were valued at more than $10 billion. That is more than double the annual volume in 2014. Last year 280 deals were worth more than $1 billion, nearly double the number from FY 2010."

The global volume of mergers and acquisitions in 2015 exceeded $5 trillion, about twice as high as the total volume in 2013. According to Dealogic, about one-half of the global M&A deals in 2015 (by dollar value) targeted US firms, and about one-quarter targeted Asian Pacific firms. Here's a list of the big M&A deals announced in 2015--many of which are still pending.

There's of course nothing intrinsically wrong with merger and acquisition deals. Sometimes, it's just a way for companies to evolve and grow. That said, when the level of such deals hits historically high levels, and with historically large sizes, it's reasonable to raise some questions. 

For example, in past merger waves a common finding in the academic research is that on average such deals turn out to be a gain for the shareholders of the firm that gets acquired, but on average a neutral outcome or even a small loss for shareholders of the firm doing the acquiring. This pattern suggests that the executives of firms which acquire other firms are often too optimistic about the gains that will result. A few years from now, we'll have a sense as to whether that common pattern continued through the current wave of deals.

Another issue involves the appropriate actions of antitrust regulators in the face of a merger wave. In Baer's testimony, he shows that the actions of US antitrust regulators did increase in 2015--as one would expect given the rise in the number and size of deals proposed. But even with a rise in antitrust enforcement, it's still a tiny minority of M&A deals that are challenged, presumably those representing what the regulators think are the most clear-cut or egregious cases. The antitrust authorities often enter into negotiations with the companies who are proposing a deal, which result in various tweaks and adjustments to the deal--like an agreement to sell off some parts of the merged company to preserve a degree of competition. But Baer's testimony also hints that in the past, these adjustments to M&A deals may not have worked to protect consumers. He said:
"When we find a merger between rivals that risks decreasing competition in one or more markets, we are invariably urged to accept some form of settlement, typically modest asset divestitures and sometimes conduct commitments or supply agreements. We thoroughly review every offer to settle, but we have learned to be skeptical of settlement offers consisting of behavioral remedies or asset divestitures that only partially remedy the likely harm. We will not settle Clayton Act violations unless we have a high degree of confidence that a remedy will fully protect consumers from anticompetitive harm both today and tomorrow. In doing so, we are guided by the Clayton Act and the Supreme Court, which instruct us to not only stop imminent anticompetitive effects, but
also to be forward-looking and arrest potential restraints on competition “in their incipiency.” Settlements need to preserve the status quo ante in markets where there is a risk of competitive harm. Where complex transactions pose antitrust risks in multiple markets, our confidence that Rube Goldberg settlements will preserve competition diminishes. Consumers should not have to bear the risks that a complex settlement may not succeed. If a transaction simply cannot be fixed, then we will not hesitate to challenge it."
But with the current wave of mergers and acquisitions, the big issue to me is not so much what is legal, but what it reveals about the priorities and perceptions of top executives of these companies. A huge M&A deal is a huge commitment of the time of executives, not just in negotiating the deal, but then in following up and integrating various parts of the two companies.

In that spirit, I find it discouraging that the top executives at Pfizer apparently believe that the best focus of their time and energy at present isn't developing new pharmaceuticals in-house, but rather doing a tax-inversion deal with the Irish firm Allergan, so that Pfizer can pay the lower Irish corporate tax rate instead. I find it discouraging that the Belgian firm Anheuser-Busch Inbev and the US firm SABMiller, both of which are the result of earlier large-scale mergers, apparently believe that the most important corporate priority isn't to focus on selling fizzy water to their customers, but instead to do yet another merger. The same discouragement applies to most of the mergers on the top 10 list above.

The merger wave means that top executives across a wide range of industries--pharmaceuticals, oil and gas, food and beverages, chemicals, technology, telecom, health care, and others--are all deciding that the most productive use of their time and energy, and of a historically enormous amount of capital, is to merge with or acquire other existing firms. I'm sure every one of these firms can offer some flashy case about the "synergies" from the mergers, and probably a few of those cases will even turn out to be correct. But I find myself wondering about the potential gains to productivity and consumers if, instead of pursuing mergers, these top executives focused the same time and energy and financial resources on building the capabilities of their own workforce, innovating in their product areas, and competing with the other firms in their industries.


Friday, March 18, 2016

Eliminate High-Denomination Bills?

Most of us use a fair number of $20 bills, and maybe a $50 or $100 bill every now and again. But of the total US currency in circulation, 78% is held in the form of $100 bills. To put it differently, the $1,014 billion outstanding in $100 bills is the equivalent of more than 30 $100 bills for every person in the United States. I've noted this phenomenon before: for example, here and here. Peter Sands argues that it's time to do something about it in "Making it Harder for the Bad Guys: The Case for Eliminating High Denomination Notes," which was published in February 2016 by the Mossavar-Rahmani Center for Business & Government at the Harvard Kennedy School (Working Paper #52). He writes:
Our proposal is to eliminate high denomination, high value currency notes, such as the €500 note, the $100 bill, the CHF1,000 [Swiss franc] note and the £50 note. Such notes are the preferred payment mechanism of those pursuing illicit activities, given the anonymity and lack of transaction record they offer, and the relative ease with which they can be transported and moved. By eliminating high denomination, high value notes we would make life harder for those pursuing tax evasion, financial crime, terrorist finance and corruption. ...
To get a sense of why this might matter to criminals, tax evaders or terrorists, consider what it would take to transport US$1m in cash. In US$20 bills, US$1m in cash weighs roughly 110lbs and would fill 4 normal briefcases. One courier could not do this. In US$100 bills, the same amount would weigh roughly 22lbs and take only one briefcase. A single person could certainly do this, but it would not be that discrete. In €500 notes, US$1m equivalent weighs about 5lbs and would fit in a small bag. ... It should be no surprise that in the underworld the €500 note is known as a “Bin Laden”.
For example, consider the cross-border flows of cash between the United States and Mexico from drug trafficking. These amount to billions, which in turn means thousands or tens of thousands of trucks, pick-ups and individual couriers carrying cash. As pointed out earlier, interdiction rates are very low: against cross-border flows of the order of US$20-30bn per year, total seizures in the decade to 2013 amounted to under US$550m. Suppose the US$100 bill was eliminated and the drug traffickers switched entirely to US$50 bills. All else equal, the number of trucks, pick-ups and couriers would have to double. Costs and interdiction rates would probably more than double. Taking the logic further, suppose US$50 issuance was constrained so that the drug traffickers had to rely largely on US$20 bills. The transportation task would increase by up to five times. It would be very surprising if this did not have a very significant impact on costs and interdiction. ...
Once the decision is made to eliminate high denomination notes, there are a range of options about how to implement this, which vary in pace and impact. These are not examined in any depth in this paper. However, the most straightforward option is very simple: stop issuing the highest denominations and withdraw the notes whenever they are presented to a bank. More assertive options would put restrictions on where and how they can used (e.g., “no more than 20 on any one transaction”) or put a maximum value on permissible cash transactions (as Italy has done). The most aggressive option would be to put a time limit on how long the high denomination notes would be honored. However, this would be contrary to the established doctrines of a number of central banks, which continue to honor withdrawn notes many years after the event.
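Sands's back-of-the-envelope numbers on moving cash are easy to reproduce. Here is a minimal sketch, assuming a banknote weighs about one gram (roughly right for both dollar and euro notes) and an exchange rate of about $1.10 per euro; both assumptions are mine, not figures from the paper.

```python
# Approximate weight of US$1 million in cash, by denomination.
# Assumptions (mine): each note weighs ~1 gram; 1 euro ~ US$1.10.
GRAMS_PER_NOTE = 1.0
LBS_PER_GRAM = 1 / 453.6
EUR_USD = 1.10

def weight_lbs(amount_usd, denomination_usd):
    notes = amount_usd / denomination_usd
    return notes * GRAMS_PER_NOTE * LBS_PER_GRAM

for label, denom in [("US$20", 20), ("US$50", 50), ("US$100", 100),
                     ("EUR500", 500 * EUR_USD)]:
    print(f"US$1m in {label} notes: ~{weight_lbs(1e6, denom):.0f} lbs")
# -> roughly 110 lbs in $20s, 44 lbs in $50s, 22 lbs in $100s, and 4 lbs in
#    EUR500 notes, close to the figures Sands cites. The same arithmetic drives
#    his interdiction point: pushing traffickers from $100s down to $20s
#    multiplies the bulk to be smuggled by five.
```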
As Sands readily recognizes at a number of places throughout the article, the case against big bills isn't an easy one to prove with ironclad systematic evidence, because no one really knows where the big bills are. The exception seems to be Japan, where lots of people carry and use 10,000 yen notes in everyday life.  But in other countries, a lot of the currency consists of big bills that the average person rarely sees.
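That contrast is what the per-person arithmetic at the top of this post highlights: the "more than 30 $100 bills for every person" figure follows from simple division, even though most people rarely handle one. A minimal sketch (the population figure is my rough approximation, not a number from the paper):

```python
# How many $100 bills are outstanding per US resident, using the
# $1,014 billion figure cited above and an assumed population of ~320 million.
value_in_hundreds = 1_014e9           # dollars held as $100 bills
notes = value_in_hundreds / 100       # about 10 billion notes
print(f"{notes / 1e9:.1f} billion notes, about {notes / 320e6:.0f} per person")
# -> roughly 10.1 billion notes, on the order of 30 per person
```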


Thus, Sands's argument tries to piece together the bits and pieces of evidence that do exist. At least in my reading, the attempt is more successful in some cases than in others. For example, while the cash economy certainly contributes to tax evasion, it's not clear to me that very large numbers of large-denomination bills are the main issue here.

However, the importance of large-denomination bills in moving the profits of the illegal drug trade seems pretty clear. Sands writes: 
"By far the largest quantum of income from transnational organized crime is derived from the illicit production and sale of narcotics. UNODC estimates drug-trafficking revenues amount to about 0.4-0.6% of global GDP, or roughly US$300-450bn. ... In the drug economy, cash dominates. Sales of illicit narcotics are almost exclusively conducted as cash transactions. As a result, large amounts of currency accumulate at collection points and across supply lines over relatively short periods of time. Storing, transporting and smuggling the proceeds of drug sales is a key operational challenge for international syndicates keen to hide the proceeds of their crimes from authorities. Cash derived from sales across the United States is typically taken to regional counting houses in major cities, converted into higher denomination notes, vacuum sealed to further reduce bulk then “concealed in the structure of cars or articulated trucks that are hitherto unknown to law enforcement”. The United States Custom and Border Patrol confirm that most proceeds from illicit drugs are transported as bulk cash, with an estimated US$20-30bn in currency crossing from the United States across the border with Mexico each year. Indeed, as governments have increased scrutiny and control over formal payment systems, cash smuggling has become the principal mechanism for distributing proceeds through global drug production chains."
Sands also makes a strong case that large-denomination bills play a substantial role in human trafficking and human smuggling, where cash plays a large role "not least because the ability to move large amounts of money across borders without detection is a critical part of the business model." ISIS seems to rely for its financing on flows of large-denomination bills, too:
"The biggest source of money for ISIS is oil smuggling, estimated at its peak to be around US$500m per year, but probably significantly less now, given air-strikes on pumping stations, refineries, pipelines and oil tanker convoys by the US-led coalition, as well as the decline in the oil price. There is very little reliable information on how the oil is sold, but it appears that much is sold for cash, largely US dollars (and given the volumes almost certainly US$100 bills). Sometimes payments are made to the bank accounts of ISIS sympathizers elsewhere, with the money then couriered into ISIS territory in cash (again, almost certainly in US$ or Euro)."
It's easy enough to come up with reasons why some law-abiding people, whether in the US or in countries beset by economic or political instability, might want to hold a stash of $100 or €500 notes. It's also easy to suggest ways that if the large-denomination bills were phased out, other stores of value like diamonds or gold or anonymous electronic money like Bitcoin might take their place. Perhaps the most creative argument I've heard for keeping the large-denomination bills is that the authorities could figure out a way to mark some of them in a way that could be traced, and then could follow the large-denomination bills to the criminals and the terrorists.

But without making all this too complicated, the basic tradeoff here is whether it's worth inconveniencing a relatively small number of law-abiding people with legitimate needs for large-denomination bills in exchange for, as the title of Sands's paper says, "making it harder for the bad guys." One interesting fact is that in exchange rate markets, large-denomination bills actually trade at above face value, presumably because of their ability to maintain and transport value.



For some reason, thinking about phasing out $100 bills made me think about the ongoing argument for dropping the penny. Seems to me that the real-world gains from dropping the penny are small compared to the gains from phasing out large-denomination bills.

Jérémie Cohen-Setton offers a useful overview of the arguments with links to a number of comments in a blog post on "The elimination of High Denomination Notes" (March 7, 2016) at the website of Bruegel, a European think-tank.

Thursday, March 17, 2016

Dynamic Pricing: Uber, Coca Cola, Disneyland and Elsewhere

Dynamic pricing refers to the practice of changing prices in real time depending on fluctuations in demand or supply.  Most consumers are inured to dynamic pricing in certain contexts. For example, when a movie theater charges more on a Friday or a Saturday night than for an afternoon matinee, or when a restaurant offers an early-bird dinner special, or when mass transit buses or trains offer a lower fare during off-peak hours, or when airlines charge more for a ticket ordered one day before the flight rather than three months before the flight, it doesn't raise many eyebrows.

In other cases, dynamic pricing is more controversial. One classic example is that back in 1999, Coca Cola experimented with vending machines that would automatically raise the price on hot days. The then-chairman, M. Douglas Ivester, pointed out that demand for a cold drink can increase on hot days and said: "So, it is fair that it should be more expensive. ... The machine will simply make this process automatic." However, the reaction from customers stopped the experiment in its tracks. On the other side, in 2012 certain Coca-Cola-owned vending machines in Spain were set to cut the price of certain lemonade drinks by as much as half on hot days. To my knowledge, there was no outcry over this policy.

Information technology is enabling dynamic pricing to become more widespread in a number of contexts. The on-line Knowledge magazine published by the Wharton School at the University of Pennsylvania has run some readable commentary on dynamic pricing. "The Promise — and Perils — of Dynamic Pricing" (February 23, 2016) offers an overview of the arguments with links to some research. In "Frustrated by Surge Pricing? Here’s How It Benefits You in the Long Run" (January 5, 2016), Ruben Lobel and Kaitlin Daniels discuss how it's important to see the whole picture--both higher prices at peak times, but also lower prices at other times. In "The Price Is Pliant: The Risks and Rewards of Dynamic Pricing" (January 15, 2016), Senthil Veeraraghavan looks at the choices that sellers face in considering dynamic pricing if they are taking their long-term relationships with customers into account.

Many of the most current examples seem to involve the entertainment industry. For example, the St. Louis Cardinals baseball team uses "a dynamic pricing program tied to its ticketing system in which the team changes ticket prices daily based on such factors as pitching match-ups, weather, team performance and ticket demand." Some ski resorts are adjusting prices based on demand and recent snowfall. Disneyland recently announced a plan to raise admissions prices by as much as 20% on days that are historically known to be busy, while lowering them on other days.
These examples are worthy of study: for example, one paper points out that if a seller only uses dynamic pricing to raise prices on busy days, but doesn't correspondingly lower prices to entice more people on non-busy days, it can end up losing revenue overall. But at the end of the day, it's hard to argue that these industries involve any great issue of fairness or justice. If you don't want to go to Disneyland or a certain ski resort, then don't go. Sure, sellers in the entertainment industry should be very cautious about a perception that they are jerking their customers around. But there's now an active online market for reselling tickets for a lot of entertainment events, and prices in that market are going to reflect last-minute supply and demand factors.

The current controversies over dynamic pricing often seem to bring up Uber, with its policy of having fares that rise during peak times. Uber released a research paper in September 2015 called "The Effects of Uber’s Surge Pricing: A Case Study," by  Jonathan Hall, Cory Kendrick, and Chris Nosko. Part of the paper focuses on the evening of March 21, 2015, when Ariana Grande played a sold-out show at Madison Square Garden. When the concert let out, Uber prices surged: more specifically, the usual Uber price was raised by a multiple of "1.2 for 5 minutes, 1.3 for 5 minutes, 1.4 for 5 minutes, 1.5 for 15 minutes, and 1.8 for 5 minutes." Here's the pattern that emerged in the market.

The red dots show the pattern of people opening the Uber app after the concert; the red line is smoothed out to show the overall pattern. The blue dots and the blue line show the actual ride requests. Notice that this rises, but not by as much, probably in part because some of those who looked at the higher surge price decided it wasn't worth it, and found another way of getting home. The green dots and green line show the rise in Uber drivers in the area, with the rise presumably occurring in part because drivers were attracted by the surge price.

I don't think even the authors of the paper would make strong claims here that Uber surge pricing worked perfectly on the night of March 21, 2015. But it did get more cars on the streets, and it did mean that people willing to pay the price had an additional option for getting home.
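For concreteness, here is a minimal sketch of how a stepwise surge schedule like the one quoted above translates into fares over the half hour or so after the concert let out. Only the sequence of multipliers and durations comes from the paper; the $20 base fare is a made-up number, and this is of course not Uber's actual pricing code.

```python
# Surge multipliers reported for the night of the concert, as
# (multiplier, minutes in effect) pairs; the base fare is hypothetical.
SURGE_SCHEDULE = [(1.2, 5), (1.3, 5), (1.4, 5), (1.5, 15), (1.8, 5)]
BASE_FARE = 20.00

def multiplier_at(minutes_after_start):
    """Surge multiplier in effect a given number of minutes after the surge began."""
    elapsed = 0
    for multiplier, duration in SURGE_SCHEDULE:
        elapsed += duration
        if minutes_after_start < elapsed:
            return multiplier
    return 1.0  # schedule exhausted: back to the normal price

for t in range(0, 40, 5):
    m = multiplier_at(t)
    print(f"{t:2d} minutes in: x{m:.1f} -> ${BASE_FARE * m:.2f}")
```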

Those interested in a fuller analysis of Uber might want to track down "Disruptive Change in the Taxi Business: The Case of Uber," by Judd Cramer and Alan B. Krueger. (It's downloadable for free as  Princeton Industrial Relations Working Paper #595, released December 2015, and also was released in March 2016 as National Bureau of Economic Research Working Paper 22083.) Their estimate suggests that "UberX drivers spend a significantly higher fraction of their time, and drive a substantially higher share of miles, with a passenger in their car than do taxi drivers." They write:
"Because we are only able to obtain estimates of capacity utilization for taxis for a handful of major cities – Boston, Los Angeles, New York, San Francisco and Seattle – our estimates should be viewed as suggestive. Nonetheless, the results indicate that UberX drivers, on average, have a passenger in the car about half the time that they have their app turned on, and this average varies relatively little across cities, probably due to relatively elastic labor supply given the ease of entry and exit of Uber drivers at various times of the day. In contrast, taxi drivers have a passenger in the car an average of anywhere from 30 percent to 50 percent of the time they are working, depending on the city. Our results also point to higher productivity for UberX drivers than taxi drivers when the share of miles driven with a passenger in the car is used to measure capacity utilization. On average, the capacity utilization rate is 30 percent higher for UberX drivers than taxi drivers when measured by time, and 50 percent higher when measured by miles, although taxi data are not available to calculate both measures for the same set of cities. Four factors likely contribute to the higher utilization rate of UberX drivers: 1) Uber’s more efficient driver-passenger matching technology; 2) Uber’s larger scale, which supports faster matches; 3) inefficient taxi regulations; and 4) Uber’s flexible labor supply model and surge pricing, which more closely match supply with demand throughout the day."
However, I'd argue that the two up-and-coming examples of surge pricing that could have the biggest effect on the most people involve electricity and traffic jams. In the case of variable prices for electricity, a policy of charging more for electricity on hot days will encourage more people to ease back on their use of air conditioning at those times and look for opportunities to conserve, which in turn means less chance of power outages and less need to use expensive back-up generating capacity. A policy of charging higher tolls on congested roads will encourage people to find other ways to travel, and provide a market demand for when building additional lanes of highway is really worth doing. As these examples suggest, the economic theory behind dynamic pricing or "surge pricing" is well-understood. When the quantity demanded of a good or service rises and falls at predictable times, broader social benefits emerge from charging more at those times.
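The same logic can be made concrete for electricity. Here is a minimal sketch of a hypothetical time-of-use tariff; the rates, hours, and usage patterns are invented for illustration and not drawn from any actual utility.

```python
# Hypothetical time-of-use electricity prices, in dollars per kWh.
# Peak hours on a hot afternoon cost far more than off-peak hours,
# giving customers a reason to shift or trim usage; all numbers are made up.
OFF_PEAK_RATE = 0.08
PEAK_RATE = 0.30
PEAK_HOURS = range(14, 20)   # 2 pm to 8 pm

def daily_bill(hourly_kwh):
    """Cost of one day's usage, given a list of 24 hourly kWh readings."""
    return sum(kwh * (PEAK_RATE if hour in PEAK_HOURS else OFF_PEAK_RATE)
               for hour, kwh in enumerate(hourly_kwh))

flat_use = [1.0] * 24                                           # run the AC steadily all day
shifted = [0.4 if h in PEAK_HOURS else 1.2 for h in range(24)]  # pre-cool, ease back at peak
print(f"Flat usage:    ${daily_bill(flat_use):.2f}")   # same 24 kWh, higher bill
print(f"Shifted usage: ${daily_bill(shifted):.2f}")    # same 24 kWh, lower bill
```

A congestion toll works the same way, with the toll rate playing the role of the peak electricity price.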

This economic logic even applies in what is surely the most controversial case of surge pricing, which is when prices of certain goods rise either just before or just after a giant storm or other disaster. The higher price--often attacked as "price gouging"-- gives buyers an incentive not to purchase and hoard the entire stock, and it gives outside sellers an incentive to hop in their pick-up trucks and vans and bring more of the product to the disaster area. What's worse than being in a disaster area and having to pay extra for certain key goods? Being in a disaster area where those goods aren't available at any price, because the price stayed low and they were sold out before you arrived.

The ongoing gains in information technology are only going to make dynamic pricing more common, because it is only going to become easier both to track changes in demand either historically or in real time and also to make price adjustments in real time (think of the ability to adjust electricity bills or road tolls, for example).  There are going to be changes that will feel like abuses. For example, I wouldn't be surprised if some online retailers already have software in place so that if there is a demand surge for some product, the price jumps automatically. Of course, many of those who want to push back against companies that use surge pricing, like Uber, will have no problem with personally using that same information technology to re-sell their tickets to a highly demanded or sold-out event at well above face-value.

Wednesday, March 16, 2016

Automation and Job Loss: The Fears of 1927

As I've noted from time to time, blasts of concern over how automation would reduce the number of jobs have been erupting for more than 200 years. As one example, in "Automation and Job Loss: The Fears of 1964" (December 1, 2014), I wrote about what were called the "automation jobless" in a 1961 news story and how John F. Kennedy advocated and Lyndon Johnson signed into law a National Commission on Technology, Automation, and Economic Progress. The Commission eventually released its report in February 1966, when the unemployment rate was 3.8%.

Here's an example of concerns about automation replacing labor from a speech given in 1927 by the US Secretary of Labor James J. Davis called "The Problem of the Worker Displaced by Machinery," which was published in the Monthly Labor Review of September 1927 (25:3, pp. 32-37, available through JSTOR). Before offering an extended quotation from Davis, here are a few quick bits of background.
  • When Davis delivered this speech in 1927, the extremely severe recession of 1920-21 was six years in the past, but between 1921 and 1927 the economy had had two milder recessions
  • The unemployment rate in 1927 was 3.9%, according to the Historical Statistics of the United States
  • At several points in his speech, Davis expresses deep concerns over immigration, and how much worse the job loss due to automation would have been if immigration had not been limited earlier in the 1920s. Both then and now, economic stress and concerns about economic transition seem to be accompanied by heightened concern over immigration. 
  • Davis ends up with what many economists have traditionally viewed as the "right" answer to concerns about automation and jobs: that is, find ways to help workers who are dislocated in the process of technological innovation, but by no means try to slow the course of automation itself. 
  • As a bit of trivia, Davis is the only person to serve as Secretary of Labor under three different presidents: Harding, Coolidge, and Hoover. 
Here's what Davis had to say in his 1927 talk.
"Every day sees the perfection of some new mechanical miracle that enables one man to do better and more quickly what many men used to do. In the past six years especially, our progress in the lavish use of power and in harnessing that power to high-speed productive machinery has been tremendous. Nothing like it has ever been seen on earth. But what is all this machinery doing for us? What is it doing to us? I think the time is ripe for us to pause and inquire.
"Take for example the revolution that has come in the glass industry. For a long time it was thought impossible to turn out machines capable of replacing human skill in the making of glass. Now practically all forms of glassware are being made by machinery, some of the machines being extraordinarily efficient. Thus, in the case of one type of bottle, automatic machinery produces forty-one times as much per worker as the old hand processes, and the machine production requires no skilled glass blowers. In other words, one man now does what 41 men formerly did. What are we doing with the men displaced?
"The glass industry is only one of many industries that have been revolutionized in this manner. I began my working life as an iron puddler, and sweated and toiled before the furnace. In the iron and steel industry, too, it was long thought that no machinery could ever take the place of the human touch; yet last week I witnessed the inauguration of a new mechanical sheet-rolling process with six times the capacity of the former method. 
"Like the bottle machine, this new mechanical wonder in steel will abolish jobs. It dispenses with men, many of whom have put in years acquiring their skill, and take a natural pride in that skill. We must, I think, soon begin to think a little less of our wonderful machines and a little more of our wonderful American workers, the alternative being that we may have discontent on our hands. This amazing industrial organization that we have built up in our country must not be allowed to get in its own way. If we are to go on prospering, we must give some thought to this matter.
"Understand me, I am not an alarmist. If you take the long view, there is nothing in sight to give us grave concern. I am no more concerned over the men once needed to blow bottles than I am over the seamstresses that we once were afraid would starve when the sewing machine came in. We know that thousands more seamstresses than before earn a living that would be impossible without the sewing machine. In the end, every device that lightens human toil and increases production is a boon to humanity. It is only the period of adjustment, when machines turn workers out of their old jobs into new ones, that we must learn to handle them so as to reduce distress to the minimum. 
"To-day when new machines are coming in more rapidly than ever,that period of adjustment becomes a more serious matter. Twenty years ago we thought we had reached the peak in mass production. Now we know that we had hardly begun. ... In the long run new types of industries have always absorbed the workers displaced by machinery, but of late we have been developing new machinery at a faster rate than we have been developing new industries. Inventive genius needs to turn itself in this direction.
"I tremble to think what a state we might be in as a result of this development of machinery without the bars we have lately set up against wholesale immigration: If we had gone on admitting the tide of aliens that formerly poured in here at the rate of a million or more a year, and this at a time when new machinery was constantly eating into the number of jobs, we might have had on our hands something much more serious than the quiet industrial revolution now in progress. 
"Fortunately we were wise in time, and the industrial situation before us is, as I say, a cause only for thought, not alarm. Nevertheless I submit that it does call for thought. There seems to be no limit to our national efficiency. At the same time we must ask ourselves, is automatic machinery, driven by limitless power going to leave on our hands a state of chronic and increasing unemployment? Is the machine that turns out wealth also to create poverty? Is it giving us a permanent jobless class? Is prosperity going to double back on itself and bring us social distress? ...
"We saved ourselves from the millions of aliens who would have poured in here when business was especially slack and unemployment high. In the old days we used to admit these aliens by the shipload, regardless of the state of the times. I remember that in my own days in the mill when a new machine was put into operation or a new plant was to be opened, aliens were always brought in to man it. When we older hands were through there was no place for us to go. No one had a thought for the man turned out of a job. He went his way forgotten.
"With a certain amount of unemployment even now to trouble us, think of the nation-wide distress in 1920-21 with the bars down and aliens flooding in, and nowhere near enough jobs to go round. Our duty, as we saw it, was to care as best we could for the workers already here, native or foreign born. Restrictive immigration enabled us to do so, and thus work out of a situation bad enough as it was. Now, just as we were wise in season in this matter of immigration, so we must be wise in sparing our people to-day as much as possible from the curse of unemployment as a result of the ceaseless invention of machinery. It is a thought to be entertained, whatever the pride we naturally take in our progress in other directions.
"Please understand me, there must be no limits to that progress. We must not in any way restrict new means of pouring out wealth. Labor must not loaf on the job or cut down output. Capital must not, after building up its great industrial organization shut down its mills. That way lies dry rot. We must ever go on, fearlessly scrapping old methods and old machines as fast as we find them obsolete. But we can not afford the human and business waste of scrapping men. In former times the man suddenly displaced by a machine was left to his fate. The new invention we need is a way of caring for this fellow made temporarily jobless. In this enlightened day we want him to go on earning, buying, consuming, adding his bit to the national wealth in the form of product and wages. When a man loses a job, we all lose something. Our national efficiency is not what it should be unless we stop that loss.
"As I look into the future, far beyond this occasional distress of the present, I see a world made better by the very machines invented to-day. I see the machine becoming the real slave of man that it was meant to be. ...  We are going to be masters of a far different and better life."
I'll add my obligatory reminder here that just because past concerns about automation replacing workers have turned out to be overblown certainly doesn't prove that current concerns will also turn out to be overblown. But it is an historical fact that for the last two centuries, automation and technology have played a dramatic role in reshaping jobs, and have also helped to lower the average work-week, without leading to a jobless dystopia.