Friday, March 30, 2018

Misconceptions about Trade Deficits

Back in 1999, I wrote an article called "Untangling the Trade Deficit" for The Public Interest magazine. I started this way:
The competition for most misunderstood economic statistic is hard-fought, but there is a clear winner: the trade deficit. No other number is interpreted so differently by professional economists and the general public. Common reactions to the U.S. trade deficit range from belligerence to dejectedness: It is thought that America’s trade deficit exists either because of the skullduggery and unfair trade practices of countries that shut out U.S. products, or because American companies are failing to compete against their global competitors. In either case, the preferred solution is often to get tough in trade negotiations for the sake of protecting U.S. jobs. But, according to most economists, cutting across partisan and ideological lines, such mainstream beliefs about cause, effect, and solution are wrong. Even more bothersome, these popular beliefs are wrong not simply because the evidence is against them—although it is—but because they reflect fundamental misunderstandings of what the trade deficit is and how it interacts with the rest of the economy.
For economists, that article didn't offer any new lessons. It was just one more effort to explain the intuition behind the economics of trade deficits--as taught in standard intro econ classes--to the general reader. The history of such explanations runs deep, indeed back to Adam Smith and earlier. Apparently, the subject is difficult to exposit, and economists aren't very good at doing so.

Robert Z. Lawrence takes one more swing at the pinata in "Five Reasons Why the Focus on Trade Deficits Is Misleading," published by the Peterson Institute for International Economics (March 2018). I'll start with some background, and then link it to Lawrence's list of misconceptions.

It seems to be widely believed that a trade deficit shows the level of unfairness of import competition, and moreover that a trade deficit shows economic weakness, while a trade surplus shows economic strength. (For a vivid example, see the "Remarks by President Trump at Signing of a Presidential Memorandum Targeting China’s Economic Aggression" last week.) But even a casual look at actual US trade balances in recent decades shows the implausibility of such beliefs. Here's a figure of US trade imbalances (as measured by the current account balance) since 1970, measured as a share of GDP.

In the 1970s, trade deficits were close to zero. But this was not a time when most people believed that international competition was fair: instead, it was a time when foreign competitors from Japan and elsewhere were savaging US industries like cars and steel. Nor was it a time when the US economy looked especially strong: this was the period of "stagflation," combining high unemployment and inflation, as well as a slowdown in productivity growth.

In the 1980s, trade deficits first boomed, and then diminished. But the mid-1980s was not a time of US economic weakness: instead, these were years of hearty economic growth after the recession of the early 1980s. It was actually during the recession of 1990-91 that the trade deficit declined. Moreover, no one seriously claims that US trading partners suddenly became much less fair for a few years in the mid-1980s, before then suddenly becoming much more fair by the early 1990s--which means that unfairness of trade isn't what causes the US trade deficit to change.

The 1990s were a period when the US trade deficit became large, but at the same time the US economy grew rapidly. Nor can the higher trade deficit be linked to increased barriers to trade: instead, this was the decade when barriers to trade were reduced by the North American Free Trade Agreement and by the completion of the "Uruguay round" of international trade talks, leading to the creation of the World Trade Organization in 1995.

Since 2000, the trade deficit first rose while the economy was growing in the early 2000s, and then the steep recession of 2007-2009 was accompanied by a sharp decline in the trade deficit. If the trade deficit were a measure of unfair trade (which it isn't!), the US should presumably be congratulating the rest of the world for how dramatically it has improved its trade fairness since about 2006.

It is blindingly apparent from the most casual acquaintance with the actual trade balance statistics that trade deficits are often not associated with periods of weak economic performance, that declines in trade deficits are often not associated with strong economic performance, and that fluctuations in foreign trade barriers are a deeply implausible explanation for changes in the trade balance.

One can walk through the same exercise with trade balances of other countries, as well. For example, here is China's trade balance since its reforms started in the late 1970s, from the World Bank website.


China's trade surplus as a share of GDP was low, mostly near-zero and sometimes in deficit, from the early 1980s up to around 2000. Of course, China's economy was booming during these decades, which suggests that its small trade surpluses during this time were not a primary driver of its growth. Also, if a trade balance measures openness to trade (and it doesn't), then one would need to conclude that China was more open to US imports in the 1980s and 1990s than later, after it joined the World Trade Organization and reduced trade barriers in 2001. Further, one would need to believe that China had a dramatic spike in trade unfairness around 2007, followed by a dramatic return to trade fairness just after that. Of course, none of these interpretations about China's trade balance and its level of openness to foreign trade can pass the laugh test.

If trade balances are not about economic strength or about trade barriers of other countries, what are they about? Let's go back to basics. A trade deficit means that a nation is importing more than it is exporting. To put it another way, other countries are earning US dollars by selling into the US market, and a share of these US dollars is not being spent on US-produced goods and services. (After all, if all the US dollars earned by those abroad selling into US markets were spent on US-exported goods and services, no trade imbalance would exist.) Instead, the value of the US trade deficit represents a flow of financial capital invested in the US. Thus, a trade deficit necessarily and always means an inflow of international capital, while a trade surplus necessarily and always means an outflow of international capital.

In an economy without any international trade, the domestic savings of the economy has to equal domestic investment--because domestic savings is what provides the finance for domestic investment. But if an economy is open to trade, then a trade deficit means that there is an inflow of capital from abroad: specifically, an inflow of capital equal to the trade deficit itself.

Thus, the US economy is a low-saving, high-consumption economy. Indeed, the US economy consumes more than it produces, which it can do by importing more than it exports and running trade deficits. The US economy also has a situation where domestic investment can be larger than domestic savings, because the US trade deficit means that there is a net inflow of foreign capital. Here's a figure from Lawrence's paper to illustrate the point. Notice that the inflow of foreign capital, shown by the trade deficit, is what allows domestic investment to exceed domestic saving.
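The accounting identity here can be illustrated with a toy calculation. The numbers below are made up purely for illustration; only the identity itself comes from the discussion above:

```python
# Toy national accounts (hypothetical numbers, in $ billions) illustrating the
# identity: domestic investment = domestic saving + net inflow of foreign
# capital, where the net capital inflow equals the trade deficit
# (imports minus exports).
saving = 2500    # domestic saving
exports = 2300
imports = 2800

trade_deficit = imports - exports    # 500: the net inflow of foreign capital
investment = saving + trade_deficit  # 3000: investment can exceed saving

print(trade_deficit, investment)  # 500 3000
```

In a closed economy the last line would force investment to equal saving; the trade deficit is exactly the wedge that lets investment run above domestic saving.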

Economists may disagree in their interpretation of the circumstances in which patterns of trade deficits/capital inflows or trade surpluses/capital outflows are beneficial or harmful. But the connection between a trade deficit and an inflow of foreign capital (or between a trade surplus and an outflow of financial capital) is not a "theory" over which economists disagree. It's just a basic understanding of what these terms mean.

Now let's turn to Lawrence's list of misconceptions:

MISCONCEPTION 1: TRADE DEFICITS ARE BAD

Trade deficits necessarily mean capital inflows. If the capital inflows from abroad are wisely invested, a trade deficit can be beneficial. For example, South Korea had large trade deficits and inflows of international capital when it was building up its industrial base, and so did the United States in the 19th century. In the 1990s, when the US had large trade deficits and inflows of international capital but was also making very large investments in information technology, there was at least an argument to be made that this pattern wasn't overly harmful to the US economy at that time. The problem arises when sustained trade deficits are accompanied by capital inflows that are not invested in a way that encourages long-run investment and growth.

I sometimes try to make this point with a parable about the meaning of trade imbalances between Robinson Crusoe and Friday, as I laid out in "Trade Imbalances: A Parable for Teachers" (July 18, 2012).

MISCONCEPTION 2: TRADE BALANCES REFLECT TRADE POLICIES

As noted above, it is silly to try to explain movements in trade balances with abrupt changes in trade policy. Instead, the movements in trade balances are easily explained by macroeconomic factors like consumption and saving.

MISCONCEPTION 3: TRADE DEFICITS ALWAYS LEAD TO JOB LOSS AND SLOWER GROWTH

This is clearly untrue, based on US experience with larger trade deficits and vigorous economic growth in the 1980s, 1990s, and early 2000s.

MISCONCEPTION 4: TRADE PERFORMANCE IS THE MOST IMPORTANT REASON FOR THE LONG-RUN DECLINE IN US EMPLOYMENT IN MANUFACTURING

Lawrence writes: "It is noteworthy that the share of US employment in manufacturing began declining in the 1960s, long before the economy was heavily exposed to trade, and that the declines in the share of manufacturing employment in industrial countries with large surpluses in manufacturing trade, such as Germany, Italy, and Japan, has been similar to the declines in the share of manufacturing employment in the United States and other countries with trade deficits. This evidence suggests that most of the declining share of employment in US manufacturing reflects factors other than the trade deficit. The share of manufacturing employment in all major industrial countries, including those with large trade surpluses, has declined since the early 1970s. The primary reason for these declining shares has been rapid productivity growth coupled with demand that is relatively unresponsive to lower goods prices and higher incomes ... "

In other words, manufacturing workers keep getting more efficient, so it takes fewer of them to make the same level of output. However, as incomes rise, the quantity demanded of manufacturing goods isn't rising as much--and so fewer manufacturing workers are needed, in the US and everywhere.

MISCONCEPTION 5: BILATERAL TRADE BETWEEN COUNTRIES SHOULD BE BALANCED

It's just silly to argue that trade should be balanced on a bilateral basis, between any two countries. Even in a world with only three countries, it's easy to imagine a situation in which each country has a surplus with one of the other countries and a deficit with the other. No two of these countries would have balanced trade with each other, but all three would have balanced trade overall.
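The three-country case can be made concrete with a small numerical sketch (the country names and trade values are hypothetical):

```python
# Hypothetical three-country world: each country runs a bilateral surplus with
# one partner and a bilateral deficit with the other, yet every country's
# overall trade is exactly balanced.
# exports[a][b] = value of goods country a sells to country b
exports = {
    "A": {"B": 100, "C": 40},
    "B": {"C": 100, "A": 40},
    "C": {"A": 100, "B": 40},
}

def overall_balance(country):
    """Total exports minus total imports for one country."""
    total_exports = sum(exports[country].values())
    total_imports = sum(exports[other][country]
                        for other in exports if other != country)
    return total_exports - total_imports

for c in exports:
    print(c, overall_balance(c))  # each country's overall balance is 0
```

Here A runs a 60 surplus with B but a 60 deficit with C (and similarly around the triangle), so no bilateral pair is balanced even though every country's total trade is.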

But the bigger point is that there's no reason that countries should be seeking an overall balance of trade, either. Some growing economies will want to welcome inflows of international capital, which means that they will have trade deficits. Some more mature economies, like Germany and Japan, will generate more in domestic saving than they can find a way to productively invest, and so they will run trade surpluses and have net outflows of financial capital. 

There are subtle and debatable issues about trade policy. But thinking that the size of trade deficits measures the level of unfairness in trade is just wrong-headed. If you think that trade surpluses mean economic strength, tell it to Japan, which has been experiencing a combination of trade surpluses and miserably sluggish economic growth since the early 1990s. Even if the US had no trade deficit, many of its companies and industries would still need to face tough international (and domestic) competition.

As economists of all political beliefs will point out, the only way to ensure a lower trade deficit is to have an economy with either higher domestic saving or less domestic investment--and because less investment isn't typically a great idea for long-run growth, higher domestic saving is the preferred policy tool. If you understand that point, you can at least start to grapple with what a trade deficit actually means. 

Thursday, March 29, 2018

The Commuter Parking Tax Break

Many employers provide parking to employees who commute to work, which can be viewed as an untaxed fringe benefit of their jobs. The value of this benefit depends on where the parking is located. If the employer is in an uncongested suburban or rural area, where parking is generally free for everyone, then the value of this fringe benefit is low. But if the employer is in the part of an urban area with traffic congestion and where parking usually has a monetary price, then the value of this benefit can be higher.

Tony Dutzik, Elizabeth Berg, Alana Miller, and Rachel Cross consider the tradeoffs in their report, "Who Pays For Parking? How Federal Tax Subsidies Jam More Cars into Congested Cities, and How Cities Can Reclaim Their Streets" (September 2017), published by TransitCenter and the Frontier Group. They write:
"Because employer-provided and employer-paid parking is excluded from an employee’s income, the parking tax benefit  accounts for an estimated $7.3 billion in lost federal and state income and payroll tax revenues every year. ... While the vast majority of Americans drive to work, most do not gain from the commuter parking benefit. The reason is that parking is so abundant in many places—especially in suburban and rural areas—that it essentially has no value as defined by the Internal Revenue Service. As a result, only about one-third of commuters benefit from this policy. In fact, most Americans are net losers from the commuter subsidies, as they must endure higher taxes or reduced government services—as well as increased congestion—to subsidize parking for a minority of commuters. ... The parking tax benefit disproportionately assists commuters who work in dense employment centers, such as downtowns, where parking is most valuable."
The IRS has tried to tax the value of employee parking a few times over the decades, with minimal success. However, the US government did enact a policy to offer a counter-subsidy by making it possible to pay for transit and carpooling out of pretax income. (And yes, this means the official policy is both not to tax the benefit of free parking, thus encouraging people to drive to work, and also not to tax the value of transit passes, thus encouraging people not to drive to work.) But the transit benefit is relatively small in scale.
"In an effort to counteract the effects of the commuter parking benefit, America also subsidizes people not to drive to work through the “commuter transit benefit”—a $1.3 billion program that enables workers to receive transit passes or vanpool services from their employers tax-free. The transit tax benefit encourages Americans to use public transportation by making the cost of transit passes or vanpooling payable from pre-tax income. ... The main impediment to the effectiveness of the transit benefit, however, is that few workers receive it. Only 7 percent of the American workforce has access to subsidized commuter transit benefits, and only 2 percent of the US workforce uses them. Most employers—particularly smaller firms—do not offer employer-based transit benefits programs. Like the parking tax benefit, the transit tax benefit disproportionately aids those with higher incomes who work for large employers in dense downtown districts. Workers in the top 10 percent by income are seven times more likely to have access to subsidized transit benefits than those in the bottom 10 percent of the income range."

The tax exemption for the value of employer-provided commuter parking has real effects. It means more cars on the road during peak commute times, and higher traffic congestion. The additional pollutants in the air have health costs. And there is an opportunity cost of using scarce space in urban areas--whether on streets, surface lots, or ramps--for parking cars.

The report suggests a range of policy options, and without endorsing or opposing them here, let me put them out there as useful ideas for shaking up one's thoughts.

One option would be for Congress (or a state government) to pass a "parking cash-out" law which would require companies to determine the market price of their employee parking benefits. Then all employers would offer that benefit as a payment to all employees. Those employees who choose to keep driving and parking would simply pay for the parking directly. They would be no worse off in financial terms--but they would be far more aware of the costs of the parking benefit. Those who find an alternative way to get to work could just treat the parking-related payment as additional income. And the tax authorities would then find it straightforward to tax the parking-related income payments. As the authors wrote:
"Other jurisdictions have adopted or considered “parking cashout”— a policy that requires businesses that offer free parking to their employees to give non-driving workers a cash payment of equivalent value. Parking cash-out can benefit employers by reducing the number of parking spaces they must rent to provide to their employees."
If that approach is politically impossible, a local option is for cities to raise their own taxes on commuter parking:
"Cities generally assess parking taxes on commercial lots and garages (with some exceptions, see below), and assess the tax either at a flat rate or as a percentage of the parking charge. In large cities, parking taxes range from New York’s 18.4 percent to 20 percent in Chicago and Philadelphia to 25 percent in San Francisco and 37.5 percent in Pittsburgh. These rates of taxation may seem high, but they are insufficient to cancel out the tax incentive for commuter parking for many workers. In 2015, the combined average marginal tax rate for federal income tax, the employee share of federal payroll tax, and state income tax was 35.1 percent. The commuter parking benefit exempts workers from paying this tax on the value of employer-provided parking. To fully counteract that subsidy, municipalities would need to tax employee parking at a rate of roughly 54 percent—well above even the highest parking taxes assessed in American cities. Most parking taxes fail to counteract the effect of tax-free treatment of employer-provided parking in another way as well: they apply only to commercial parking facilities, not to the “free” parking offered by employers at their facilities. Several cities around the world have implemented taxes that apply to parking spaces regardless of whether they are provided for free or at a cost."
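The report's 54 percent figure follows from grossing up the 35.1 percent marginal tax rate: a dollar of untaxed parking is worth as much as 1/(1 − 0.351) dollars of pre-tax wages. A quick check of that arithmetic, using only the rates quoted above:

```python
# Check of the report's arithmetic: with a combined marginal tax rate of
# 35.1%, $1 of untaxed parking equals $1 / (1 - 0.351) of pre-tax wages.
# An offsetting parking tax must therefore be about 54% of parking's value.
marginal_rate = 0.351
offsetting_parking_tax = marginal_rate / (1 - marginal_rate)
print(round(offsetting_parking_tax * 100, 1))  # 54.1
```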

Monday, March 26, 2018

Equal Pay for Equal Work: Rathbone and Fawcett in 1918

One hundred years ago, the leading British economics journal (edited by John Maynard Keynes) published an article and a response from two women authors: Eleanor Rathbone and Millicent Fawcett. Despite writing in the Economic Journal, neither had professional training in economics. But they were clearly recognized as experts whose opinions economists should hear.

Eleanor Rathbone (1872-1946) graduated from Somerville College at Oxford in 1893. In 1909 she was elected to the Liverpool city council; in 1929, she was elected to Parliament. Much of her focus was on government support for needy children, and over time she authored a number of articles and books on the topic, as well as advocating in her political roles. Millicent Garrett Fawcett (1847–1929) is known by those who study the history of economic thought as the author of a Political Economy for Beginners book that went through 10 editions over 41 years. She led the largest UK suffragist organization, and played a role in the founding of Newnham College, Cambridge.

Rathbone led off with her article, "The Remuneration of Women's Services" (Economic Journal, March 1917, 27:105, pp. 55-68). She argued that although women had been accepted into many jobs during World War I, the situation was not sustainable. She offered reasons why women and men were not equal in the workplace. But her particular focus, as in much of her life, was on who would pay for the costs of raising children. In her view, men needed to be paid more because many of them were providing for families. She was quick to note that this method of providing for families didn't always work very well--given different pay, different numbers of children, and men who didn't pass along much of their income to their families. She argued that if the government provided greater support for children (again, this was her long-standing cause), then equality of pay for women would be more likely to work well, because the "men need to support a family" argument would no longer hold. Here are some samples of her argument:
"In industry, the outbreak of the war [that is, World War I]found the women workers confined almost entirely, except in a few occupations traditionally their own, to the lowest, most ill-paid, and unskilled occupations. The barriers that kept them out of the skilled trades were for the most part unrecognised by law, but they were almost completely effective, being built up partly of tradition, partly of trade union regulations, but mainly of the sex exclusiveness in which employers and employed made common cause. Against these barriers the "women's movement" had beaten itself for half a century in vain, but within two years the necessities of the war have broken them down--by no means completely, but to such an extent that it is plain that if re-erected they will have to be based frankly upon the desire of the male to protect himself from competition, and no longer upon the alleged incapacity of the female to compete. ... 
"The women themselves, ill-organised and voteless, with the sentiment in favour of the returning soldiers not only strong against them but strong among them, could not by themselves put up much of a fight. But they are likely to have two powerful allies: first, in the employers, who, having tasted the advantages of a great reserve of cheap, docile, and very effective labour are obviously not going to let themselves be deprived of it without a struggle; and secondly, in the growing public sense of the necessity on national grounds of making the most of our economic resources. ...
"This difficulty may be most shortly put in the form of a question: "Is fair competition between men workers and women workers possible, bearing in mind the customary difference in the wage level of the two sexes and the causes of that customary difference? In other words, is it possible for women to compete freely with men, without undercutting their standards of life?" The reply offered by feminists to this question is prompt and unhesitating, and is practically a denial of the difficulty. Women, they say, must, of course, be admitted freely to all occupations. But they must not undercut. They must demand and receive equal wages for equal work. This is the claim put forward by practically all women, except, of course, when they are themselves employing women. I have not yet met the feminist whose principles compel her to pay her waitress the wages that would be demanded by a butler. ...

"There are in the eyes of most employers certain standing disadvantages of women's labour which have to be reckoned with. There is the fact that the law will not allow him to work her at night nor for overtime, except under rigid restrictions; that her liability to sickness (in most trades) is rather greater; that he cannot put her to lift heavy weights or to do odd jobs; that he cannot comfortably swear at her if she is stupid; that, in short, she is a woman, and most employers, being male, have a "club " instinct which makes them feel more at ease with an undiluted male staff. Above all, there is the overwhelming disadvantage, if the occupation is a skilled one, that she is liable to "go off and get married just as she is beginning to be of some use." Of course, there are advantages which to a certain extent counterbalance these disadvantages from the employer's point of view. There is the greater docility of women; their greater willingness to be kept at routine work; their lesser liability to absence on drinking bouts, to strikes, and to other disturbances of the economic routine. But obviously most of these "advantages " are likely to be regarded by the employer rather as reasons why he can safely exploit women than as reasons why he should equitably pay them as much as men.  ... 
"After all, perhaps the most important function which any State has to perform--more important even than guarding against its enemies--is to secure its own periodic renewal by providing for the rearing of fresh generations. ...  During the last forty-six years the State has taken directly upon itself the cost of the school education of its young, and it is gradually in a hesitating and half-hearted way taking over the cost of some of the minor provisions necessary for child-nurture, such as midwifery (paid for through the maternity benefit), medical attendance (through child-welfare centres, medical school inspectors, &c.). But the great bulk of the main cost of its renewal it still pays for, as it has always done, by the indirect and extraordinarily clumsy method of financing the male parent and trusting to him somehow to see the thing through. It does not even finance him directly, but leaves it to what it is fond of calling "blind economic forces " to bring it about that the wages of men shall be sufficient for the purposes of bringing up families. The "blind forces" accomplish this task, as might be expected, in a very defective and blundering way, with a good deal of waste in some places and a much worse skimping in others, but upon the whole they do accomplish it ...
"The wages of women workers are not based on the assumption that "they have families to keep," and in so far as these wages are determined by the standard of life of the workers it is a standard based on the cost of individual subsistence, and not on the cost of family subsistence. ...  For, after all, the majority of women workers are birds of passage in their trades. Marriage and the bearing rearing of children are their permanent occupations. ... 
"It is outside the scope of the present article to consider what should be the basis, the scale, and the machinery of any system by which the State should take upon itself the prime cost of rearing future generations. It might be done through a continuance of something resembling the present system of separation allowances, which provides for the upkeep of individual homes. The allowance might be on a flat rate-so much for the woman and so much for each child; or it might be dependent to some extent on the amount of the allotment made by the man from his pay. Or, again, our system of elementary schools might be developed into day boarding-schools, where children were fed and clad as well as taught, and could enjoy organised play. In the upper and middle classes, practically every parent who can afford it either commits his children to such schools or sends them altogether away from home."
Millicent Fawcett's rejoinder, "Equal Pay for Equal Work," was published 100 years ago this month (Economic Journal, March 1918, 28:109, pp. 1-6). Fawcett refers to Rathbone's "interesting article," which seems like the prelude to a robust British disagreement. Fawcett sidesteps the arguments about the role of the state in supporting children. Instead, she insists on equal pay for equal work. Her case is partly a matter of fairness; indeed, she cites examples in which the inexperienced and untrained women who entered the workforce during World War I were in some prominent cases much more productive than the experienced and trained men they replaced. But in addition, she argues that if women are paid less than men, there will always be a harsh conflict, because male workers will be correct to fear being undercut by lower-wage female workers. Fawcett begins with a story:
"John Jones earned good wages from a firm of outfitters by braiding military tunics. He fell ill and was allowed by the firm to continue his work in his own home. He taught his wife his trade, and as his illness became gradually more severe she did more and more of the work until presently she did it all. But as long as he lived it was taken to the firm as his work and paid for accordingly. When, however, it became quite clear, John Jones being dead and buried, that it could not be his work, Mrs. John Jones was obliged to own that it was hers, and the price paid for it by the firm was immediately reduced to two-thirds of the amount paid when it was supposed to be her husband's. ...
"[T]he tremendously depressing effect on women's wages of the pre-war trade union rules, combined with social use and wont, which kept women out of nearly all the skilled industries. This policy obviously cut off a great volume of the demand for women's labour which would exist if these barriers could be broken down. It is quite true to say that, although the doctrine of demand and supply has fallen of late years into unpopularity, it is nevertheless a fact that if demand for a particular class of labour is either destroyed or very much restricted, "a downward pull " on wages is called into existence for the whole class. ... The unskilled trades open to them would be overcrowded, and competition among the workers might well force down wages to less than subsistence level. It had done so in the case of large masses of women before the war. ... [T]he Committee of the Queen's Work for Women Fund, started at the beginning of the war, reported that "many working women are normally in receipt of wages below subsistence level." The evil effects of such a state of things can hardly be exaggerated. It means physical degeneracy, not for one sex only, premature old age for women, impossibility of organising women's labour, the stamping out of any intelligent effort to acquire industrial training and a high degree of industrial efficiency. ...
"I may quote Sir William Beardmore, the well-known engineer, and President in 1916 of the Iron and Steel Institute. In his presidential address he spoke of the difficulty met with by employers in inducing workmen to utilise to the best advantage improved methods of manufacture evolved by experimental research; he said: "Early in the war it was found at Parkhead forge that the output from the respective machines was not so great as what the machines were designed for, and one of the workers was induced to do his best to obtain the most out of a machine. He very greatly increased his output, notwithstanding his predilection for trade union restrictions. When it was found that the demands of the Government for a greatly accelerated production of shells required the employment of girls in the projectile factory, owing to the scarcity of skilled workers, these girls in all cases produced more than double that by thoroughly trained mechanics--members of trade unions--working the same machines under the same conditions. In the turning of the shell body the actual output by girls, with the same machines and working under the same conditions and for an equal number of hours, was quite double that by trained mechanics. In the boring of shells the output also was quite double, and in the curving, waving, and finishing of shell cases quite 120 per cent. more than that of experienced mechanics " (Manchester Guardian, May 16th, 1916). Here, therefore, you have a case in which women's work excelled men's work in productiveness by two to one or more. I always take care when I am speaking to women on this subject to warn them not to run away with the idea that either physically or mentally they excel men. What these figures do show is some part of the extent to which the whole atmosphere in which industry was carried on in this country before the war led to the deliberate restriction of output by the male workers. ... 
"If, for instance, owing to a lower degree of physical strength it was found necessary to employ three women to do the work ordinarily done by two men, then the wages for the three women could reasonably be adjusted to balance this disadvantage. War experience, however, has stiffened the conviction of many feminists that a large proportion of supposed feminine disadvantages exist more in imagination than in reality. That a woman in the textile trade was paid at a lower rate than a man for the same work has, for instance, been accounted for, time out of mind, by saying that a woman was incapable of "tuning" or "setting" her machine. Very few of those who used this formula took the trouble to explain that women were never given the opportunity of learning how to tune or set a machine. It was looked upon as a law of nature that a man could set a machine and that a woman could not. ...
"I do not claim in all cases identical wages for men and women. If the men are worth more let them receive more, or if the women are worth more (as they were in the Parkhead forge) let them receive more. The one chance of women being received into industry by the men already employed as comrades and fellow-workers, not as enemies and blacklegs, is in their standing for the principle, equal pay for equal work, or, as it is sometimes expressed, equal pay for equal results. ...
"The advocates of the principle of equal pay for equal work have an encouraging precedent in the successful stand which women doctors have made from the outset that they would not undersell the men in the profession. Whether as physicians or surgeons they have been quite determined on this point. Medical women working for the War Office since 1914 did not secure this position without a struggle, but I understand that the controversy is now settled in a satisfactory manner.
I will not try to break down these diverging views in any detail. But I am struck that a number of the issues that are touched upon here continue to resonate.

For example, in a number of ways the United States is still grappling with Rathbone's question of how to support the children of low-income families, especially when parents work little or not at all.  Of course, for many low-income families with children, there is only a single parent and the assumption that a man's wage will be needed to support a wife and family is anachronistic.

There is an ongoing dispute over the extent to which the remaining male/female wage gap arises because a greater number of women have careers that are either interrupted, or in which they cannot put in long hours to get fully established, because of parenting responsibilities (for example, see here and here).

The US economy is also struggling with an issue described by Fawcett in which groups try to set up rules that limit workers from competing with them, although in the modern US economy these issues arise less often in the context of unions, which are now quite small in size, and more in the context of rules about occupational licensing.

Friday, March 23, 2018

Contingent Valuation and the Deepwater Horizon Spill

Economists are often queasy about the idea that preferences can be measured by surveys. It's easy for someone to say that they value organic fruits and vegetables, for example, but when they go to the grocery, how do they actually spend their money?

However, in some contexts, prices are not readily available. A common example is an oil spill, like the BP Deepwater Horizon oil spill in the Gulf of Mexico in 2010, or the Exxon Valdez oil spill in Alaska back in 1989. We know that such spills impose economic costs on those who use the waters directly, like the tourism and fishing industries. But is there some additional "non-use" value that is lost? Can I put a personal value on protecting the environment in a place I have never visited, and am not likely to visit? There are various ways to measure these kinds of environmental damages. For example, one can count the costs of clean-up and remediation. But another method is to design a survey instrument that gets people to reveal the value that they place on this environmental damage, which is called a "contingent valuation" survey.

Such a survey has been completed for the BP Deepwater Horizon Oil spill. Richard C. Bishop and 19 co-authors provide a quick overview in "Putting a value on injuries to natural assets: The BP oil spill" (Science, April 21, 2017, pp. 253-254). For all the details, like the actual surveys used and how they were developed, you can go to the US Department of the Interior website (go to this link, and then type "total value" into the search box).

The challenge for a contingent valuation study is that it would obviously be foolish just to walk up to people and ask: "What's your estimate of the dollar value of the damage from the BP oil spill?" If the answers are going to be plausible, they need some factual background and some context. Also, they need to suggest, albeit hypothetically, that the person answering the survey would need to pay something directly toward the cost. As Bishop et al. write:
"The study interviewed a large random sample of American adults who were told about (i) the state of the Gulf before the 2010 accident; (ii) what caused the accident; (iii) injuries to Gulf natural resources due to the spill; (iv) a proposed program for preventing a similar accident in the future; and (v) how much their household would pay in extra taxes if the program were implemented. The program can be seen as insurance, at a specified cost, that is completely effective against a specific set of future, spill-related injuries, with respondents told that another spill will take place in the next 15 years. They were then asked to vote for or against the program, which would impose a one-time tax on their household. Each respondent was randomly assigned to one of five different tax amounts: $15, $65, $135, $265, and $435 ..." 
Developing and testing the survey instrument took several years. The survey was administered to a nationally representative random sample of households by 150 trained interviewers. There were 3,646 respondents. They write: "Our results confirm that the survey findings are consistent with economic decisions and would support investing at least $17.2 billion to prevent such injuries in the future to the Gulf of Mexico’s natural resources."

One interesting variation is that the survey was produced in two forms: a "smaller set of injuries" version and a "larger set of injuries" version.
"To test for sensitivity to the scope of the injury, respondents were randomly assigned to different versions of the questionnaire, describing different sets of injuries and different tax amounts for the prevention program. The smaller set of injuries described the number of miles of oiled marshes, of dead birds, and of lost recreation trips that were known to have occurred early in the assessment process. The larger set included the injuries in the smaller set plus injuries to bottlenose dolphins, deep-water corals, snails, young fish, and young sea turtles that became known as later injury studies were completed  ..." 

Here's a sample of the survey results. The top panel looks at those who received the version of the survey with the smaller set of injuries. It shows, for a range of hypothetical personal costs, the share of respondents willing to pay to avoid the damage. You can see that a majority were willing to pay $15, but the willingness to pay to prevent the oil spill declined as the cost went up. The willingness to pay was higher for the larger set of injuries, but at least to my eye, not a whole lot higher.
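For readers curious about the mechanics, here is a minimal sketch of how referendum-style votes at randomized tax amounts can be turned into a conservative "lower bound" willingness-to-pay figure, in the spirit of a Turnbull estimator. The vote shares and household count below are invented for illustration; they are not the study's actual data, and the study's estimation was more sophisticated than this.

```python
# Sketch (my illustration, not the study's method or data) of a
# lower-bound willingness-to-pay from yes/no referendum votes at
# randomly assigned tax amounts. All numbers are invented.

# (tax amount asked, share voting yes) at the five randomized amounts
votes = [(15, 0.60), (65, 0.50), (135, 0.45), (265, 0.40), (435, 0.35)]

def lower_bound_wtp(votes):
    """Credit the probability mass falling between consecutive tax
    amounts at the lower end of its bracket; assign no value above
    the highest amount asked. This understates true mean WTP."""
    wtp = 0.0
    for i, (t, p) in enumerate(votes):
        p_next = votes[i + 1][1] if i + 1 < len(votes) else 0.0
        wtp += t * (p - p_next)
    return wtp

per_household = lower_bound_wtp(votes)   # about $177 with these made-up shares
total = per_household * 126_000_000      # hypothetical count of US households
print(round(per_household, 2), round(total / 1e9, 1))
```

The conservatism comes from the bracketing: a respondent who votes yes at $135 but no at $265 is credited with a WTP of exactly $135, the bottom of the bracket, even though their true WTP could be anywhere up to $265.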

It should be self-evident why the contingent valuation approach is controversial. Does the careful and extensive process of constructing and carrying out the survey lead to more accurate results? Or does it in some ways shape or predetermine the results? The authors seem to take some comfort in the fact that their estimate of $17.2 billion is roughly the same as the value of the Consent Decree signed in April 2016, which called for $20.8 billion in total payments. But is it possible that the survey design was tilted toward getting an answer similar to what was likely to emerge from the legal process? And if the legal process reaches about the same result, then the contingent valuation survey method is perhaps a useful exercise--but not really necessary, either.

I'll leave it to the reader to consider more deeply. For those interested in digging deeper into the contingent valuation debates, some useful starting points might be:

The Fall 2012 issue of the Journal of Economic Perspectives had a three-paper symposium on contingent valuation with a range of views:


H. Spencer Banzhaf has just published "Constructing Markets: Environmental Economics and the Contingent Valuation Controversy," which appears in the Annual Supplement 2017 issue of the History of Political Economy (pp. 213-239). He provides a thoughtful overview of the origins and use of contingent valuation methods from the early 1960s ("estimated the economic value of outdoor recreation in the Maine woods") up to the Exxon Valdez spill in 1989.

Harro Maas and Andrej Svorenčík tell the story of how Exxon organized a group of researchers in opposition to contingent valuation methods in the aftermath of the 1989 oil spill in "`Fraught with Controversy': Organizing Expertise against Contingent Valuation," appearing in the History of Political Economy earlier in 2017 (49:2, pp. 315-345). 

Also, Daniel McFadden and Kenneth Train edited a 2017 book called Contingent Valuation of Environmental Goods, with 11 chapters on various aspects of how to do and think about contingent valuation studies. Thanks to Edward Elgar Publishing, individual chapters can be freely downloaded.

Thursday, March 22, 2018

Opioids: Brought to You by the Medical Care Industry

There's a lot of talk about the opioid crisis, but I'm not confident that most people have internalized just how awful it is. To set the stage, here are a couple of figures from the 2018 Economic Report of the President.

The dramatic rise in overdose deaths, from about 7,000-8,000 per year in the late 1990s to more than 40,000 in 2016, is of course just one reflection of a social problem that includes much more than deaths.


However, the nature of the opioid crisis is shifting. The rise in overdose deaths from 2000 up to about 2010 was mainly due to prescription drugs. The more recent rise in overdose deaths is due to heroin and synthetic opioids like fentanyl.


It seems clear that the roots of the current opioid crisis are in prescribing behavior: to be blunt about it, US health care professionals made the decisions that created this situation. The Centers for Disease Control and Prevention notes on its website: "Sales of prescription opioids in the U.S. nearly quadrupled from 1999 to 2014, but there has not been an overall change in the amount of pain Americans report. During this time period, prescription opioid overdose deaths increased similarly."

The CDC also offers a striking chart showing differences in opioid prescriptions across states. Again from the website: "In 2012, health care providers in the highest-prescribing state wrote almost 3 times as many opioid prescriptions per person as those in the lowest prescribing state. Health issues that cause people pain do not vary much from place to place, and do not explain this variability in prescribing."
[CDC map: opioid prescriptions per 100 people, by state, 2012. Quartiles: 52-71: HI, CA, NY, MN, NJ, AK, SD, VT, IL, WY, MA, CO. 72-82.1: NH, CT, FL, IA, NM, TX, MD, ND, WI, WA, VA, NE, MT. 82.2-95: AZ, ME, ID, DC, UT, PA, OR, RI, GA, DE, KS, NV, MO. 96-143: NC, OH, SC, MI, IN, AR, LA, MS, OK, KY, WV, TN, AL. Data from IMS, National Prescription Audit (NPA), 2012.]
But although the roots of the opioid crisis come from this rise in prescriptions, the problem of opioid abuse itself is more complex. What seems to have happened in many cases is that opioids were prescribed so freely that there was a ready supply for friends and family, and to sell. Here's one more chart from the CDC, this one showing where those who abuse opioids get their drugs. Three of the categories are: given by a friend or relative for free; stolen from a friend or relative; and bought from a friend or relative.
[CDC chart: Source of Opioid Pain Reliever Most Recently Used, by Frequency of Past-Year Nonmedical Use]
For example, a study published in JAMA Surgery in November 2017 found that among patients who were prescribed opioids for pain relief after surgery, 67-92% ended up not using their full prescription.

This narrative of how the medical profession fueled the opioid crisis has gotten some pushback from doctors. For example, Nabarun Dasgupta, Leo Beletsky, and Daniel Ciccarone wrote "The Opioid Crisis: No Easy Fix to Its Social and Economic Determinants" in the February 2018 issue of the American Journal of Public Health (pp. 182-186). After briskly acknowledging the evidence, the paper veers into "the urgency of integrating clinical care with efforts to improve patients’ structural environment. Training health care providers in “structural competency” is promising, as we scale up partnerships that begin to address upstream structural factors such as economic opportunity, social cohesion, racial disadvantage, and life satisfaction. These do not typically figure into the mandate of health care but are fundamental to public health. As with previous drug crises and the HIV epidemic, root causes are social and structural and are intertwined with genetic, behavioral, and individual factors. It is our duty to lend credence to these root causes and to advocate social change."

Frankly, that kind of essay seems to me an attempt to deflect attention from the fact that the health care profession made extraordinarily poor decisions. We had root causes back in 1999. We have root causes now. It isn't the root causes that brought the opioid crisis down on us.

As another example, Sally Satel contributed an essay on "The Myth of What’s Driving the Opioid Crisis: Doctor-prescribed painkillers are not the biggest threat," to Politico (February 21, 2018). She makes a number of reasonable points. The current rise in opioid deaths is being driven by heroin and fentanyl, not prescription opioids. Only a very small percentage of those who are prescribed opioids become addicted, and many of those had previous addiction problems.

As Satel readily acknowledges:
In turn, millions of unused pills end up being scavenged from medicine chests, sold or given away by patients themselves, accumulated by dealers and then sold to new users for about $1 per milligram. As more prescribed pills are diverted, opportunities arise for nonpatients to obtain them, abuse them, get addicted to them and die. According to SAMHSA, among people who misused prescription pain relievers in 2013 and 2014, about half said that they obtained those pain relievers from a friend or relative, while only 22 percent said they received the drugs from their doctor. The rest either stole or bought pills from someone they knew, bought from a dealer or “doctor-shopped” (i.e., obtained multiple prescriptions from multiple doctors). So diversion is a serious problem, and most people who abuse or become addicted to opioid pain relievers are not the unwitting pain patients to whom they were prescribed.
But her argument is that even though it was true 5-10 years ago that three-quarters of the heroin addicts showing up at treatment centers said they had got their start using prescription opioids, more recent evidence is that addicts are starting with heroin and fentanyl directly. Ultimately, Satel writes:
What we need is a demand-side policy. Interventions that seek to reduce the desire to use drugs, be they painkillers or illicit opioids, deserve vastly more political will and federal funding than they have received. Two of the most necessary steps, in my view, are making better use of anti-addiction medications and building a better addiction treatment infrastructure.
This specific recommendation makes practical sense, and it sure beats a ritual invocation of "root causes," but I confess it still rubs me the wrong way. We didn't have these demand-side interventions back in 1999, either, but the number of drug overdoses was much lower. Sure, the nature of the opioid crisis has shifted in recent years. But prescription opioids are still being prescribed at triple the level of 1999. And given that the medical profession lit the flame of the current opioid crisis, it seems evasive to seek a reduced level of blame by pointing out that the wildfire has now spread to other opioids.


For a list of possible policy steps, one starting point is the President's Commission on Combating Drug Addiction and the Opioid Crisis, which published its report in November 2017. The 56 recommendations make heavy use of terms like "collaborate," "model statutes," "accountability," "model training program," "best practices," "a data-sharing hub," "community-based stakeholders," "expressly target Drug Trafficking Organizations," "national outreach plan," "incorporate quality measures," "the adoption of process, outcome, and prognostic measures of treatment services," "prioritize addiction treatment knowledge across all health disciplines," "telemedicine," "utilizing comprehensive family centered approaches," "a comprehensive review of existing research programs," "a fast-track review process for any new evidence-based technology," etc. etc. There are probably some good suggestions embedded here, like fossils sunk deeply into a hillside. I hope someone can disinter them.

Tuesday, March 20, 2018

The Distribution and Redistribution of US Income

The Congressional Budget Office has published the latest version of its occasional report on "The Distribution of Household Income, 2014" (March 2018). It's an OK place to start for a fact-based discussion of the subject. Here is one figure in particular that caught my eye.



The vertical axis of the figure is a Gini coefficient, which is a common way of summarizing the extent of inequality in a single number. A coefficient of 1 would mean that a single person received all of the income. A coefficient of zero would mean complete equality of incomes.
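For readers who like to see the mechanics, here is a short sketch (my own illustration, not from the CBO report) of how a Gini coefficient can be computed from a list of incomes, using the mean-absolute-difference formula. The incomes are made-up numbers chosen only to show the two extremes.

```python
# Illustrative Gini coefficient calculation. The income lists are
# invented; they are not CBO data.

def gini(incomes):
    """Gini via the mean absolute difference:
    G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean(x))."""
    n = len(incomes)
    mean = sum(incomes) / n
    total_abs_diff = sum(abs(xi - xj) for xi in incomes for xj in incomes)
    return total_abs_diff / (2 * n * n * mean)

# Perfect equality gives 0; concentrating all income in one person
# gives (n-1)/n, which approaches 1 as the population grows.
print(gini([50, 50, 50, 50]))   # 0.0
print(gini([0, 0, 0, 200]))     # 0.75
```

With only four people, "one person gets everything" yields 0.75 rather than 1, because the coefficient only reaches 1 in the limit of a large population.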

In this figure, the top line shows the Gini coefficient based on market income, rising over time.

The green line shows the Gini coefficient when social insurance benefits are included: Social Security, the value of Medicare benefits, unemployment insurance, and workers' compensation. Inequality is lower with such benefits taken into account, but still rising. It's worth remembering that almost all of this change is due to Social Security and Medicare, which is to say that it is a reduction in inequality because of benefits aimed at the elderly.

The dashed line then adds a reduction in inequality due to means-tested transfers. As the report notes, the largest of these programs are "Medicaid and the Children’s Health Insurance Program (measured as the average cost to the government of providing those benefits); the Supplemental Nutrition Assistance Program (formerly known as the Food Stamp program); and Supplemental Security Income." What many people think of as "welfare," which used to be called Aid to Families with Dependent Children (AFDC) but for some years now has been called Temporary Assistance to Needy Families (TANF), is included here, but it's smaller than the programs just named. 

Finally, the bottom  purple line also includes the reduction in inequality due to federal taxes, which here includes not just income taxes, but also payroll taxes, corporate taxes, and excise taxes. 

A few thoughts: 

1) As the figure shows, the reduction in inequality from programs aimed at the elderly--Social Security and Medicare--is about as large as the total reduction in inequality from means-tested spending and federal taxes combined.

2) Moreover, a large share of the reduction in inequality shown in this figure is a result of "in-kind" programs that do not put any cash in the pockets of low-income people. This is true of the health care programs, like Medicare, Medicaid, and the Children's Health Insurance Program, as well as of the food stamp program. These programs do benefit people by covering a share of health care costs or helping buy food, but they don't help to pay for other costs like rent, heat, or electricity.

3) Contrary to popular belief, federal taxes do help to reduce the level of inequality. This figure shows the average tax rate paid by those in different income groups. The calculation includes all federal taxes: income, payroll, corporate, and excise. It is the average amount paid out of total income, which includes both market income and Social Security benefits. 

4) Finally, to put some dollar values on the Gini coefficient numbers, here is the average income for each of these groups in 2014. (Remember, this includes both cash and in-kind payments from the government, and all the different federal taxes.)
Figure 8.
Average Income After Transfers and Taxes, by Income Group, 2014
Thousands of Dollars  
Lowest Quintile 31,100
Second Quintile 44,500
Middle Quintile 62,300
Fourth Quintile 87,700
Highest Quintile 207,300
81st to 90th Percentiles 120,400
91st to 95th Percentiles 159,100
96th to 99th Percentiles 251,500
Top 1 Percent 1,178,600
Source: Congressional Budget Office.
(I'm a long-standing fan of CBO reports. But in the shade of this closing parenthesis, I'll add in passing that the format of this report has changed, and I think it's a change for the worse. Previous versions had more tables, where you could run your eye down columns and across rows to see patterns. This version is nearly all figures and bar charts. It's quite possible that I'm more in favor of seeing underlying numbers and tables than the average reader. And it's true that you can go to the CBO website and see the numbers behind each figure. But in this version of the report, it's harder (for me) to see some of the patterns that were compactly summarized in a few tables in previous reports, but are now spread out over figures and bar graphs on different pages.)

Monday, March 19, 2018

What if Country Bonds Were Linked to GDP Growth?

What if countries could have some built-in flexibility in repaying their debts: specifically, what if the repayment of the debt was linked to whether the domestic economy was growing? Thus, the burden of debt payments would fall in a recession, which is when a government sees tax revenues fall and social expenditures rise. Imagine, for example, how Greece's government debt situation would have been different if the country's lousy economic performance had automatically restructured its debt burden in a way that reduced current payments. Of course, the tradeoff is that when the economy is doing well, debt payments are higher--but presumably also easier to bear.
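As a purely illustrative sketch (my own simplification, not any actual term-sheet formula), here is one way such a link could work: scale a conventional coupon payment by the ratio of current GDP to GDP at issuance, so that debt service falls automatically in a slump and rises in a boom.

```python
# Toy model of a GDP-linked coupon. The face value, coupon rate, and
# GDP figures are invented for illustration.

def gdp_linked_coupon(face, base_coupon_rate, gdp_now, gdp_at_issue):
    """Coupon = conventional coupon * (current GDP / GDP at issuance)."""
    return face * base_coupon_rate * (gdp_now / gdp_at_issue)

face = 1000.0        # face value of the bond
rate = 0.05          # 5% conventional coupon
gdp_at_issue = 100.0

# Recession: GDP falls 4%, so the coupon falls from 50 to about 48.
print(gdp_linked_coupon(face, rate, 96.0, gdp_at_issue))
# Boom: GDP rises 6%, so the coupon rises to about 53.
print(gdp_linked_coupon(face, rate, 106.0, gdp_at_issue))
```

Real proposals differ on exactly which index to use (nominal vs. real GDP, levels vs. growth rates) and how to handle data revisions, which is much of what the chapters discussed below are about.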

There have been some experiments along these lines in recent decades, but the idea is now gaining substantial interest. James Benford, Jonathan D. Ostry, and Robert Shiller have edited a 14-paper collection, Sovereign GDP-Linked Bonds: Rationale and Design (March 2018, Centre for Economic Policy Research, available with free registration here).

For a taste of the arguments, here are a few thoughts from the opening essay: "Overcoming the obstacles to adoption of GDP-linked debt," by Eduardo Borensztein, Maurice Obstfeld, and Jonathan D. Ostry.  They provide an overview of issues like: Would borrowers have to pay higher interest rates for GDP-linked borrowing? Or would the reduced risk of default counterbalance other risks? What measure of GDP would be used as part of such a debt contract? They write:
"Elevated sovereign debt levels have become a cause for concern for countries across the world. From 2007 to 2016, gross debt levels shot up in advanced economies – from 24 to 89% of GDP in Ireland, from 35 to 99% of GDP in Spain, and from 68 to 128% of GDP in Portugal, for example. The increase was generally more moderate in emerging economies, from 36 to 47% of GDP on average, but the upward trend continues. ...

"GDP-linked bonds tie the value of debt service to the evolution of GDP and thus keep it better aligned with the overall health of the economy. As public sector revenues are closely related to economic performance, linking debt service to economic growth acts as an automatic stabiliser for debt sustainability. ... While most efforts to reform the international financial architecture over the past 15 years have aimed at facilitating defaults, for example through a sovereign debt restructuring framework (SDRM), the design of a sovereign debt structure that is less prone in the first place to defaults and their associated costs would be a more straightforward policy initiative. GDP-linked debt is an attractive instrument for this purpose because it can ensure that debt stays in step with the growth of the economy in the long run and can create fiscal space for countercyclical policies during recessions. ...
"The first lesson is to ensure that the payout structure of the instrument reflects the state of the economy and is free from complexities or delays that can make payments stray from their link to the economic situation. To date, GDP-linked debt has been issued primarily in the context of debt restructuring operations, from the Brady bond exchanges that began in 1989 to the more recent cases of Greece and Ukraine. ...  This feature, however, gave rise to structures that were not ideal from the point of view of debt risk management. For example, some specifications provided for large payments if GDP crossed certain arbitrary thresholds or were a function of the distance to GDP from those thresholds. In addition, some payout formulas were sensitive to the exchange rate, failed to take inflation into account, or were affected by revisions of population or national account statistics. All these mechanisms resulted in payments that were  disconnected from the business cycle and the state of public finances, detracting from the value of these GDP-linked instruments for risk management (see Borensztein 2016).
"The second lesson is that the specification of the payout formula can strengthen the integrity of the instruments. GDP statistics are supplied by the sovereign, and there is no realistic alternative to this arrangement. This fact is often held up as an obstacle to wide market acceptance of the instruments. However, the misgivings seem to have been exaggerated, as under-reporting of GDP growth is not a politically attractive idea for a policymaker whose success will be judged on the strength of economic performance. ... 
"[T]he main source of reluctance regarding the use of GDP-linked debt, or insurance instruments more generally, may not stem from markets but from policymakers. Politicians tend to have relatively short horizons, and would not find debt instruments attractive that offer insurance benefits in the medium to long run but are costlier in the short run, as they include an insurance premium driven by the domestic economy’s correlation with the global business cycle. In addition, if the instruments are not well understood, they may be perceived as a bad choice if the economy does well for some time. The value of insurance may come to be appreciated only years later, when the country hits a slowdown or a recession, but by then the politician may be out of office. While this problem is not ever likely to go away completely, multilateral institutions might be able to help by providing studies on the desirability of instruments for managing country risk, and how to support their market development, in analogy to work done earlier in the millennium promoting emerging markets’ domestic-currency sovereign debt markets."
Back in 2015, the Ad Hoc London Term Sheet Working Group decided to produce a hypothetical model example of how a specific contract for a GDP-linked government bond might work, with the idea that the framework could then be adapted and applied more broadly. This volume has a short and readable overview of the results by two members of the working group, in "A Term Sheet for GDP-linked bonds," by Yannis Manuelides and Peter Crossan. I'll just add that in the introduction to the book, Robert Shiller characterizes the London Term Sheet approach in this way:
"The kind of index-linked bond described in the London Term Sheet in this volume is close to a conventional bond, in that it has a fixed maturity date and a balloon payment at the end. The complexities described in the Term-Sheet are all about inevitable details and questions, such as how the coupon payments should be calculated for a GDP-linked bond that is issued on a specific date within the quarter, when the GDP data are issued only quarterly. The term sheet is focused on a conceptually simple concept for a GDP-linked  bond, as it should be. It includes, as a special case, the even simpler concept – advocated recently by me and my Canadian colleague Mark Kamstra – of a perpetual GDP-linked bond, if one sets the time to maturity to infinity. Perpetual GDP-linked bonds are an analogue of shares in corporations, but with GDP replacing corporate earnings as a source of dividends. However, it seems there are obstacles to perpetual bonds and these obstacles might slow the acceptance of GDP-linkage. The term-sheet here gets the job done with finite maturity, shows how a GDP-linkage can be done in a direct and simple way, and should readily be seen as appealing.
"The London Term Sheet highlighted in this volume describes a bond which is simple and attractive, and the chapters in this volume that spell out other considerations and details of implementation, have the potential to reduce the human impact of risks of economic crisis, both real crises caused by changes in technology and environment, and events better described as financial crises. The time has come for sovereign GDP-linked bonds. With this volume they are ready to go."

Friday, March 16, 2018

An NCAA Financial Digression During March Madness

I'm an occasional part of the audience for college sports, both the big-time televised events like basketball's March Madness and college football bowl games, as well as sometimes going to baseball and women's volleyball and softball games here at the local University of Minnesota. I enjoy the athletes and the competition, but I try not to kid myself about the financial side.

 Big-time colleges and universities do receive substantial sports-related revenues. But the typical school has sports-related expenses that eat up all of that revenue and more besides. For data, a useful starting point is the annual NCAA Research report called "Revenues and Expenses, 2004-2016," prepared by Daniel Fulks. This issue was released in 2017; the 2018 version will presumably be out in a few months.

For the uninitiated, some terminology may be useful here. The focus here is on Division I athletics, which is made up of about 350 schools that tend to have large student enrollment, broad participation in intercollegiate athletics, and lots of scholarships. Division I is then divided into three groups. The Football Bowl Subdivision comprises the most prominent schools, whose football teams participate in bowl games at the end of the season. In the FBS group, Alabama beat Georgia 26-23 for the championship in January. The Football Championship Subdivision comprises medium-level football programs. Last season, North Dakota State beat James Madison 17-13 in the championship game at this level. And the Division I schools without football programs include many well-known universities that have scholarship athletes and prominent programs in other sports: Gonzaga and Marquette are two examples.

Since 2014, the Football Bowl Subdivision has been further divided into two groups, the Autonomy Group and the Non-Autonomy Group. The Autonomy Group is the 65 schools most identified with big-time athletics. They are in the "Power Five" conferences: the Atlantic Coast Conference, Big Ten, Big 12, Pac-12, and Southeastern Conference. Under the 2014 agreement, they have autonomy to alter some rules for the group as a whole: for example, this group of schools offers scholarships that cover the "full cost" of attending the university, which pays the athletes a little more, and coaches are no longer (officially) allowed to take a scholarship away because a player isn't performing as hoped. The Non-Autonomy schools are allowed to follow these rule changes, but are not required to do so.

With this in mind, here are some facts from the NCAA report about the big-time Football Bowl Subdivision schools.
Net Generated Revenues. The median negative net generated revenue for the AG is $3,600,000 (i.e., the median loss for a program in the AG), which must be supplemented by the institution; for the NA is $19,900,000; and for all FBS is $14,400,000. ...
Financial Haves and Have-nots. A total of 24 programs in the AG showed positive net generated revenues (profits), with a median of $10,000,000, while the remaining 41 of the AG lost a median of $10,000,000; the 64 NA programs lost a median of $20,000,000; the total FBS loss is a median of $18,000,000. Net losses for women's programs were $14,000,000 for AG, $6,500,000 for NA, and $9,000,000 for FBS.
For the Football Championship Subdivision schools, the magnitude of the losses is smaller, but the pattern remains the same:
Net Generated Revenues. The result is a median net loss for the subdivision of $12,550,000; men's programs = $5,022,000 and women's programs = $4,089,000. These medians are up only slightly from 2015. ...
Losses per Sport: Highest losses incurred were in gymnastics and basketball for women's programs and football and basketball for the men.
And for the non-football Division I schools, where the big-time revenue sport is usually basketball, the pattern of losses continues:
Median Losses. The median net loss for the 95 schools in this subdivision was $12,595,000 for the 2016 reporting year, compared with $11,764,000 in 2015, and $5,367,000 in the 2004 base year. ... 
Programmatic Results. Five men's basketball programs reported positive net generated revenues, with a median of $1,742,000, while the remaining 90 schools reported a median negative net generated revenue of $1,573,000. The median loss for women's basketball was $1,415,000. These losses are up slightly from 2015 and more than double from 2004.

There's an ongoing dispute about whether big-time colleges and universities should pay their players. When I listen to sports-talk radio, a usual comment is along these lines: "These college athletes are making millions of dollars for their institutions. They deserve to be paid, and more than just a scholarship and some meal money." I'm sympathetic. But the economist in me always rebels against the assumption that there is a Big Rock Candy Mountain made of money just waiting to be handed out.  I want to know where the money is going to come from, and how the wages will be determined.

The median school is losing money on athletics. I know of no evidence that donations from alumni are sufficient to counterbalance these losses. So if the payment for athletes is going to come from schools, there will be a tradeoff. Should costs be cut by eliminating sports that don't generate revenue (and the scholarships for those athletes)? The NCAA Report notes that salaries are about one-third of total expenses for college sports programs, and maybe some of that money could be redistributed to student-athletes. It seems implausible that the median school is going to substantially increase its subsidies to the athletics department.

What if the money for paying students came from outside sponsors? Some decades ago, top college athletes sometimes were compensated via make-work or no-show jobs. It would be interesting to observe how a single rich alum or a group of local businesses could collaborate with a coaching staff to raise money for paying athletes--and what the athletes might need to endorse in return.

It's easy to say that student-athletes should get "more," but it's not obvious that they would or should all get the same. For example, would all student-athletes get the same pay, regardless of revenue generated by their sport? Even within a single sport, would the star players get the same pay as the backups? Would the amount of pay be the same between first-years and seniors? Would the pay be adjusted year-to-year, depending on athletic performance? Would players get bonuses for championships or big wins?

I don't have a clear answer to the economic issues here, and so I will now turn off this portion of my brain and return to watching the games in peace. For those who want more, Allen R. Sanderson and John J. Siegfried wrote a thoughtful article, "The Case for Paying College Athletes," which appeared in the Winter 2015 issue of the Journal of Economic Perspectives (where I work as Managing Editor).

Thursday, March 15, 2018

The Skeptical View in Favor of an Antitrust Push

Is the US economy as a whole experiencing notably less competition? Of course, pointing to a few industries where the level of competition seems to have declined (like airlines or banking) does not prove that competition as a whole has declined. In his essay, "Antitrust in a Time of Populism," Carl Shapiro offers a skeptical view on whether overall US competition has declined in a meaningful way, but combines this critique with an argument for the ways in which antitrust enforcement should be sharpened. The essay is forthcoming in the International Journal of Industrial Organization, which posted a pre-press version in late February. A non-typeset version is available at Shapiro's website.

(Full disclosure: Shapiro was my boss for a time in the late 1980s and into the 1990s as a Co-editor and then Editor of the Journal of Economic Perspectives, where I have labored in the fields as Managing Editor since 1987.)

Shapiro points to a wide array of articles and reports from prominent journalistic outlets and think tanks that claim that the US is experiencing a wave of anti-competitive behavior. He writes:
"Until quite recently, few were claiming that there has been a substantial and widespread decline in competition in the United States since 1980. And even fewer were suggesting that such a decline in competition was a major cause of the increased inequality in the United States in recent decades, or the decline in productivity growth observed over the past 20 years. Yet, somehow, over the past two years, the notion that there has been a substantial and widespread decline in competition throughout the American economy has taken root in the popular press. In some circles, this is now the conventional wisdom, the starting point for policy analysis rather than a bold hypothesis that needs to be tested. ...
"I would like to state clearly and categorically that I am looking here for systematic and widespread evidence of significant increases in concentration in well-defined markets in the United States. Nothing in this section should be taken as questioning or contradicting separate claims regarding changes in concentration in specific markets or sectors, including some markets for airline service, financial services, health care, telecommunications, and information technology. In a number of these sectors, we have far more detailed evidence of increases in concentration and/or declines in competition."
Shapiro makes a number of points about competition in markets. For example, imagine that national restaurant chains are better positioned to take advantage of information technology and economies of scale than local producers. As a result, national restaurant chains expand and locally owned eateries decline. A national measure of concentration will show that the big firms have a larger share of the market. But focusing purely on the competition issues, local diners may have essentially the same number of choices that they had before.

A number of the overall measures of the growth of larger firms don't show much of a rise. As one example, Shapiro points to an article in the Economist magazine which divided the US economy into 893 industries, and found that the share of the four largest firms in each industry had on average risen from 26% to 32%. Set aside for a moment the questions of whether this is measured nationally or locally, or whether it takes international competition into account. Most of those who study competition would say that a market where the four largest firms combine for either 26% or 32% of the market is still pretty competitive. For example, say the top four firms each have 8% of the market. Then every remaining firm has less than 8%, which means this market probably has a dozen or more competitors.
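The arithmetic behind that last claim can be sketched in a few lines. This is purely illustrative, using the hypothetical 8%-each shares from the example above, not figures from the Economist study:

```python
import math

def cr4(shares):
    """Four-firm concentration ratio: combined market share of the four largest firms."""
    return sum(sorted(shares, reverse=True)[:4])

# Hypothetical market: the top four firms each hold 8% of the market.
top_four = [0.08, 0.08, 0.08, 0.08]
concentration = cr4(top_four)  # 0.32, i.e., a CR4 of 32%

# The remaining 68% of the market is split among firms that each hold
# at most 8%, so there must be at least ceil(0.68 / 0.08) = 9 more firms.
min_remaining = math.ceil((1 - concentration) / max(top_four))
total_firms = 4 + min_remaining
print(total_firms)  # at least 13 competitors in this market
```

So even at the higher 32% figure, this stylized market supports a dozen or more competitors, which is why a CR4 in that range is usually read as fairly competitive.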

The most interesting evidence for a fall in competition, in Shapiro's view, involves corporate profits. Here's a figure showing corporate profits over time as a share of GDP.

And here's a figure showing the breakdown of corporate profits by industry.
Thus, there is evidence that profit levels have risen over time. In particular, they seem to have risen in the Finance & Insurance sector and in the Health Care & Social Assistance area. But as Shapiro emphasizes, antitrust law does not operate on a presupposition that "big is bad" or "profits are bad." The linchpin of US antitrust law is whether consumers are benefiting.

Thus, it is a distinct possibility that large national firms in some industries are providing lower-cost services to consumers and taking advantage of economies of scale. They earn high profits, because it's hard for small new firms without these economies of scale to compete. Shapiro writes:
"Simply saying that Amazon has grown like a weed, charges very low prices, and has driven many smaller retailers out of business is not sufficient. Where is the consumer harm? I presume that some large firms are engaging in questionable conduct, but I remain agnostic about the extent of such conduct among the giant firms in the tech sector or elsewhere. ... As an antitrust economist, my first question relating to exclusionary conduct is whether the dominant firm has engaged in conduct that departs from legitimate competition and maintains or enhances its dominance by excluding or weakening actual or potential rivals. In my experience, this type of inquiry is highly fact-intensive and may necessitate balancing procompetitive justifications for the conduct being investigated with possible exclusionary effects. ...
"This evidence leads quite naturally to the hypothesis that economies of scale are more important, in more markets, than they were 20 or 30 years ago. This could well be the result of technological progress in general, and the increasing role of information technology on particular. On this view, today’s large incumbent firms are the survivors who have managed to successfully obtain and exploit newly available economies of scale. And these large incumbent firms can persistently earn supra-normal profits if they are protected by entry barriers, i.e., if smaller firms and new entrants find it difficult and risky to make the investments and build the capabilities necessary to challenge them."
What should be done? Shapiro suggests that tougher merger and cartel enforcement, focused on particular practices and situations, makes a lot of sense. As one example, he writes:

"One promising way to tighten up on merger enforcement would be to apply tougher standards to mergers that may lessen competition in the future, even if they do not lessen competition right away. In the language of antitrust, these cases involve a loss of potential competition. One common fact pattern that can involve a loss of future competition occurs when a large incumbent firm acquires a highly capable firm operating in an adjacent space. This happens frequently in the technology sector. Prominent examples include Google’s acquisition of YouTube in 2006 and DoubleClick in 2007, Facebook’s acquisition of Instagram in 2012 and of the virtual reality firm Oculus CR in 2014, and Microsoft’s acquisition of LinkedIn in 2016.  ... Acquisitions like these can lessen future competition, even if they have no such immediate impact."
Shapiro also makes the point that a certain amount of concern about large companies mixes together a range of public concerns: worries about whether consumers are being harmed by a lack of competition are mixed together with worries about whether citizens are being harmed by big money in politics, or worries about rising inequality of incomes and wealth, or worries about how locally owned firms may suffer from an onslaught of national chain competition. He argues that these issues should be considered separately.
"I would like to emphasize that the role of antitrust in promoting competition could well be undermined if antitrust is called upon or expected to address problems not directly relating to competition. Most notably, antitrust is poorly suited to address problems associated with the excessive political power of large corporations. Let me be clear: the corrupting power of money in politics in the United States is perhaps the gravest threat facing democracy in America. But this profound threat to democracy and to equality of opportunity is far better addressed through campaign finance reform and anti-corruption rules than by antitrust. Indeed, introducing issues of political power into antitrust enforcement decisions made by the Department of Justice could dangerously politicize antitrust enforcement. Antitrust also is poorly suited to address issues of income inequality. Many other public policies are far superior for this purpose. Tax policy, government programs such as Medicaid, disability insurance, and Social Security, and a whole range of policies relating to education and training spring immediately to mind. So, while stronger antitrust enforcement will modestly help address income inequality, explicitly bringing income distribution into antitrust analysis would be unwise."

In short, where anticompetitive behavior is a problem, by all means go after it--and go after it more aggressively than the antitrust authorities have done in recent decades. But other concerns over big business need other remedies. 

Tuesday, March 13, 2018

Interview with Jean Tirole: Competition and Regulation

"Interview: Jean Tirole" appears in the most recent issue of Econ Focus from the Federal Reserve Bank of Richmond (Fourth Quarter 2017, pp. 22-27). The interlocutor is David S. Price. Here are a few comments that jumped out at me.

How did Tirole end up in the field of industrial organization?
"It was totally fortuitous. I was once in a corridor with my classmate Drew Fudenberg, who's now a professor at MIT. And one day he said, "Oh, there's this interesting field, industrial organization; you should attend some lectures." So I did. I took an industrial organization class given by Paul Joskow and Dick Schmalensee, but not for credit, and I thought the subject was very interesting indeed.
"I had to do my Ph.D. quickly. I was a civil servant in France. I was given two years to do my Ph.D. (I was granted three at the end.) It was kind of crazy."
Why big internet firms raise competition concerns
"[N]ew platforms have natural monopoly features, in that they exhibit large network externalities. I am on Facebook because you are on Facebook. I use the Google search engine or Waze because there are many people using it, so the algorithms are built on more data and predict better. Network externalities tend to create monopolies or tight oligopolies.
"So we have to take that into account. Maybe not by breaking them up, because it's hard to break up such firms: Unlike for AT&T or power companies in the past, the technology changes very fast; besides, many of the services are built on data that are common to all services. But to keep the market contestable, we must prevent the tech giants from swallowing up their future competitors; easier said than done of course ...
"Bundling practices by the tech giants are also of concern. A startup that may become an efficient competitor to such firms generally enters within a market niche; it's very hard to enter all segments at the same time. Therefore, bundling may prevent efficient entrants from entering market segments and collectively challenging the incumbent on the overall technology.
"Another issue is that most platforms offer you a best price guarantee, also called a "most favored nation" clause or a price parity clause. You as a consumer are guaranteed to get the lowest price on the platform, as required from the merchants. Sounds good, except that if all or most merchants are listed on the platform and the platform is guaranteed the lowest price, there is no incentive for you to look anywhere else; you have become a "unique" customer, and so the platform can set large fees to the merchant to get access to you. Interestingly, due to price uniformity, these fees are paid by both platform and nonplatform users — so each platform succeeds in taxing its rivals! That can sometimes be quite problematic for competition.
"Finally, there is the tricky issue of data ownership, which will be a barrier to entry in AI-driven innovation. There is a current debate between platform ownership (the current state) and the prospect of a user-centric approach. This is an underappreciated subject that economists should take up and try to make progress on."

The economics of two-sided platforms
"We get a fantastic deal from Google or credit card platforms. Their services are free to consumers. We get cashback bonuses, we get free email, Waze, YouTube, efficient search services, and so on. Of course there is a catch on the other side: the huge markups levied on merchants or advertisers. But we cannot just conclude from this observation that Google or Visa are underserving monopolies on one side and are preying against their rivals on the other side. We need to consider the market as a whole.
"We have learned also that platforms behave very differently from traditional firms. They tend to be much more protective of consumer interests, for example. Not by philanthropy, but simply because they have a relationship with the consumers and can charge more to them (or attract more of them and cash in on advertising) if they enjoy a higher consumer surplus. That's why they allow competition among applications on a platform, that's why they introduce rating systems, that's why they select out nuisance users (a merchant who wants to be on the platform usually has to satisfy various requirements that are protective of consumers). Those mechanisms — for example, asking collateral from participants to an exchange or putting the money in an escrow until the consumer is satisfied — screen the merchants. The good merchants find the cost minimal, and the bad ones are screened out.
"That's very different from what I call the "vertical model" in which, say, a patent owner just sells a license downstream to a firm and then lets the firm exercise its full monopoly power.
"I'm not saying the platform model is always a better model, but it has been growing for good reason as it's more protective of consumer interest. Incidentally, today the seven largest market caps in the world are two-sided platforms."