Tuesday, July 17, 2018

On Preferring A to B, While Also Preferring B to A

"In the last quarter-century, one of the most intriguing findings in behavioral science goes under the unlovely name of `preference reversals between joint and separate evaluations of options.' The basic idea is that when people evaluate options A and B separately, they prefer A to B, but when they
evaluate the two jointly, they prefer B to A." Thus, Cass R. Sunstein begins his interesting and readable paper "On preferring A to B, while also preferring B to A" (Rationality and Society 2018,  first published July 11, 2018, subscription required)

Here is one such problem that has been studied: 

Dictionary A: 20,000 entries, torn cover but otherwise like new
Dictionary B: 10,000 entries, like new

"When the two options are assessed separately, people are willing to pay more for B; when they are assessed jointly, they are willing to pay more for A." A common explanation is that when assessed separately, people have no basis for knowing if 10,000 or 20,000 words is a medium or large number for a dictionary, so they tend to focus on "new" or "torn cover." But when comparing the two, people focus on the number of words.

Here's another example, which (as Sunstein notes) involves "an admittedly outdated technology":

CD Changer A: Can hold 5 CDs; Total Harmonic Distortion = 0.003%
CD Changer B: Can hold 20 CDs; Total Harmonic Distortion = 0.01%


"Subjects were informed that the smaller the Total Harmonic Distortion, the better the sound quality. In separate evaluation, they were willing to pay more for CD Changer B. In joint evaluation, they were willing to pay more for CD Changer A." When looking at them separately, holding 20 CDs seems more more important. When comparing them, the sound quality in Total Harmonic Distortion seems more important--although most people have no basis for knowing if this difference ins sound quality would be meaningful to their ears or not.

And one more example:

Baseball Card Package A: 10 valuable baseball cards, 3 not-so-valuable baseball cards
Baseball Card Package B: 10 valuable baseball cards


"In separate evaluation, inexperienced baseball card traders would pay more for Package B than for Package A. In joint evaluation, they would pay more for Package A (naturally enough). Intriguingly, experienced traders also show a reversal, though it is less stark." When comparing them, choosing A is obvious. But without comparing them, there is something about getting all valuable cards, with no less valuable cards mixed in, which seems attractive.

And yet another example:

Congressional Candidate A: Would create 5000 jobs; has been convicted of a misdemeanor
Congressional Candidate B: Would create 1000 jobs; has no criminal convictions

"In separate evaluation, people rated Candidate B more favorably, but in joint evaluation they preferred candidate A." When looking at them separately, the focus is on criminal history; when looking at them together, the focus is on jobs.
And one more: 

Cause A: Program to improve detection of skin cancer in farm workers
Cause B: Fund to clean up and protect dolphin breeding locations

"When people see the two in isolation, they show a higher satisfaction rating from giving to Cause B, and they are willing to pay about the same. But when they evaluate them jointly, they show a much higher satisfaction rating from A, and they want to pay far more for it." The explanation here seems to be a form of category-bound thinking, where just thinking about the dolphins generates a stronger visceral response, but when comparing directly, the human cause weighs more heavily.

One temptation in these and many other examples given by Sunstein is to conclude that joint evaluation must be more meaningful, because there is more context for comparison. But he argues strongly that this conclusion is unwarranted. He writes: 
"In cases subject to preference reversals, the problem is that in separate evaluation, some characteristic of an option is difficult or impossible to evaluate—which means that it will not receive the attention that it may deserve. The risk, then, is that a characteristic that is important to welfare or actual experience will be ignored. In joint evaluation, the problem is that the characteristic that is evaluable may receive undue attention. The risk, then, is that a characteristic that is unimportant to welfare or to actual experience will be given excessive weight."
In addition, life does not usually give us a random selection of choices and characteristics for our limited attention spans to consider. Instead, choices are defined and described by sellers of products, or by politicians selling policies. They choose how to frame issues. Sunstein writes: 
"Sellers can manipulate choosers in either separate evaluation or joint evaluation, and the design of the manipulation should now be clear. In separate evaluation, the challenge is to show choosers a characteristic that they can evaluate, if it is good (intact cover), and to show them a characteristic  that they cannot evaluate, if it is not so good (0.01 Total Harmonic Distortion). In joint evaluation, the challenge is to allow an easy comparison along a dimension that seems self-evidently important, whether or not the difference along that dimension matters to experience or to people’s lives. ... Sellers (and others) can choose to display a range of easily evaluable characteristics (appealing ones) and also display a range of others that are difficult or impossible to assess (not so appealing ones). It is well known that some product attributes are “shrouded,” in the sense that they are hidden from view, either because of selective attention on the part of choosers or because of deliberative action on the part of sellers." 
We often think of ourselves as having a set of personal preferences that are fundamental to who we are--part of our personality and self. But in many contexts, people (including me and you) can be influenced by the framing and presentation of choices. Whether the choice is between products or politicians, beware.

Monday, July 16, 2018

Carbon Dioxide Emissions: Global and US

US emissions of carbon have been falling, while nations in the Asia-Pacific region have already become the main contributors to the rise in atmospheric carbon dioxide. These and other conclusions are apparent from the BP Statistical Review of World Energy (June 2018), a useful annual compilation of global trends in energy production, consumption, and prices. 

Here's a table from the report on carbon emissions (I clipped out columns showing annual data for the years from 2008-2016). The report is careful to note: "The carbon emissions above reflect only those through consumption of oil, gas and coal for combustion related activities ... This does not allow for any carbon that is sequestered, for other sources of carbon emissions, or for emissions of other greenhouse gases. Our data is therefore not comparable to official national emissions data." But the data does show some central plot-lines in the carbon emissions story.



A few thoughts: 

1) The US has often had the biggest declines in the world in carbon emissions in absolute magnitudes in recent years. Granted, this is in part because the quantity of US carbon emissions is so large that even a small percentage drop is large in absolute size. Still, better down than up. The BP report notes: "This is the ninth time in this century that the US has had the largest decline in emissions in the world. This also was the third consecutive year that emissions in the US declined, though the fall was the smallest over the last three years. ... Carbon emissions from energy use from the US are the lowest since 1992, the year that the UNFCCC came into existence."

2) Anyone who follows this topic at all knows that China leads the world in carbon emissions. Still, it's striking to me that China accounts for 27.6% of world carbon emissions, compared to 15.2% for the US. On a regional basis, the Asia Pacific region--led by China, India, and Japan, but also with substantial contributions from Indonesia, South Korea, and Australia--by itself accounts for nearly half of global carbon emissions. If you're concerned about carbon emissions, you need to think about proposals that would have strong effects on China and this region. 

3) Carbon emissions from the three regions of South and Central America, the Middle East, and Africa sum to 13.8% of the global total, and thus their combined total is less than either the United States or the European/Eurasian economies. However, if the carbon emissions for this group of three regions keep growing at about 3% per year, while the carbon emissions for the US economy keep falling at 1% per year, their carbon emissions will outstrip the US in a few years (a quick back-of-the-envelope calculation after this list makes the timing concrete).

4) In an interconnected global economy, it's worth remembering that the country where energy is used doesn't always reflect where the final product is consumed. If China produces something through an energy-intensive process that is later consumed in the US, it counts as energy use in China--but both countries play a role.
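To make the timing in point 3 concrete, here is a minimal back-of-the-envelope sketch in Python. The starting shares come from the BP table above; the 3% growth and 1% decline rates are the rough assumptions stated in point 3, not forecasts.

```python
# Rough crossover calculation for point 3: how many years until emissions from
# the South/Central America + Middle East + Africa group (13.8% of the global
# total, assumed to grow about 3% per year) exceed US emissions (15.2% of the
# global total, assumed to fall about 1% per year)?

group_share, group_growth = 13.8, 0.03   # share of global emissions, annual growth
us_share, us_growth = 15.2, -0.01        # share of global emissions, annual change

years = 0
while group_share <= us_share:
    group_share *= 1 + group_growth
    us_share *= 1 + us_growth
    years += 1

print(f"Crossover after roughly {years} years")  # about 3 years under these assumptions
```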

For some more US-specific data, here's some data from the Monthly Energy Review (June 2018) published by the US Energy Information Administration. This table shows total carbon emissions for the US, emissions per capita, and emissions relative to GDP, going back to 1950.


A few comments: 

1) US carbon emissions on this measure peaked around 2007, and have generally declined since then.  An underlying pattern here is a reduction in the use of coal and rise in the use of natural gas, along with greater use of renewables. US emissions are now back to the levels from the late 1980s and early 1990s. 

2) Carbon emissions per capita in the US economy have fallen back to the level of the early 1950s. 

3) Carbon emissions relative to GDP produced have been falling pretty steadily for the almost 70 years shown in this data. 


Friday, July 13, 2018

Time to Reform Unemployment Insurance

The best time to fix your roof is when the weather is sunny and warm, not when it's rainy, cold--and actually leaking. In a similar spirit, the best time to fix unemployment insurance is when the unemployment rate is low. Conor McKay, Ethan Pollack, and Alastair Fitzpayne offer some ideas in "Modernizing Unemployment Insurance for the Changing Nature of Work" (Aspen Institute, January 2018). They write:

"The UI [Unemployment Insurance] program — which is overseen by the U.S. Department of Labor and administered by the states — collects payroll taxes from employers to insure workers against unexpected job loss. Eligible workers who become unemployed through no fault of their own can receive temporary income support while they search for reemployment. In 2016, the program paid $32 billion to 6.2 million out-of-work individuals. UI is one of America’s most important anti-poverty programs for individuals and families, serving as a key counter-cyclical stabilizer for the broader economy. In 2009, the worst year of the Great Recession, the UI program kept 5 million Americans out of poverty, and prevented an estimated 1.4 million foreclosures between 2008 and 2012."

So what's wrong with UI as it stands? The system was designed for full-time workers, who have been with an employer for some time, losing full-time jobs. It doesn't do a good job of covering independent contractors, freelancers, short-timers, and part-timers. However, if you are receiving unemployment insurance and you take a freelance or part-time or self-employed job, your benefits usually stop. The share of unemployed workers who are actually covered by unemployment insurance is falling over time.

Many of the reforms mentioned here have been kicked around before, but it's still useful to have them compiled in one place.

The main conceptual difficulty is that unemployment insurance needs someone to pay into the system on a regular basis, but only to withdraw money from the system when it's really needed. Some legislative creativity may be needed here. For example, independent and freelance workers could pay unemployment insurance premiums while employed, and if they did so for some period of time (maybe a year or more), they could become eligible for some level of unemployment insurance payouts. Other options would be to figure out ways to offer some protection to those who hold multiple jobs but are not currently eligible for unemployment insurance from any single employer, and to offer at least some protection to self-employed and temporary workers.

Alternatively, nontraditional workers could be allowed to set up tax-free savings accounts that they would only use if they became unemployed. Such an account could be combined with a retirement account: basically, a worker with a short-term financial need could withdraw some money from the account, but only up to a certain maximum--while the rest stayed in the retirement account.

Finally, it seems wise not to be too quick to cut off unemployment benefits for those who try to work their way back with a part-time job or by starting their own company. Or unemployment insurance could be designed to encourage workers to conduct a long-distance job search and consider moving to another city or state. 

The OECD Employment Outlook 2018 that was just published includes a chapter on unemployment insurance issues across high-income countries. The problem of limited coverage of unemployment insurance is common.
"Across 24 OECD countries, fewer than one-in-three unemployed, and fewer than one-in-four jobseekers, receive unemployment benefits on average. Coverage rates for jobseekers are below 15% in Greece, Italy, Poland, Slovak Republic, Slovenia and the United States. Austria, Belgium and Finland show the highest coverage rates in 2016, ranging between approximately 45% and 60%: In countries with the highest coverage in the OECD, at least four-in-ten jobseekers still report not receiving an unemployment benefit."
Unemployment insurance systems differ quite a bit across countries: qualifications to receive benefits (like what kind of job you previously had, for how long, and how long you have been unemployed from that job), level of benefits, time limits on benefits, whether you are required to get training or some kind of job search assistance while unemployed--and how all of these factors were adjusted by political systems during and after the rise in unemployment during the Great Recession.

But the OECD report emphasizes that unemployment insurance doesn't just help those who are unemployed--it also provides a mechanism for government to focus on what kinds of assistance might help the unemployed get jobs again. The chapter in the OECD report ends (citations omitted):
"[U]nemployment benefits provide the principal instrument for linking jobless people to employment services and active labour market programmes to improve their job prospects. In the absence of accessible unemployment benefits, it can be difficult to reach out to those facing multiple barriers to employment, who therefore risk being left behind. In these cases, achieving good benefit coverage can be essential to make an activation strategy effective and sustainable. For this reason the new OECD Jobs Strategy calls for clear policy action to extend access to unemployment benefit within a rigorously-enforced `mutual obligation' framework, in which governments have the duty to provide jobseekers with benefits and effective services to enable them to find work and, in turn, beneficiaries have to take active steps to find work or improve their employability ..." 

Thursday, July 12, 2018

China Stops Importing Waste Plastic

For a few decades now, the US and Europe have been managing their plastic waste by shipping it to China and other countries in east Asia for recycling and reuse. But in the last few years, China has been tightening up what it was willing to import, wanting only plastic waste that is uncontaminated. In 2017, China announced that in the future it was banning the import of nonindustrial plastic waste--that is, the plastic waste generated by households.

Amy L. Brooks, Shunli Wang, and Jenna R. Jambeck look at some consequences in "The Chinese import ban and its impact on global plastic waste trade," published in Science Advances (June 20, 2018). Here's a figure showing patterns of exports and imports of plastic waste, in quantities and values, based on UN data. In theory, of course, the lines for imports and exports should match exactly, but the data is collected from different countries and errors of classification and inclusion do creep in. Still, the overall pattern of a dramatic rise, leveling off in the last few years as China imposed additional restrictions, is clear.
Brooks, Wang, and Jambeck summarize in this way:
"The rapid growth of the use and disposal of plastic materials has proved to be a challenge for solid waste management systems with impacts on our environment and ocean. While recycling and the circular economy have been touted as potential solutions, upward of half of the plastic waste intended for recycling has been exported to hundreds of countries around the world. China, which has imported a cumulative 45% of plastic waste since 1992, recently implemented a new policy banning the importation of most plastic waste, begging the question of where the plastic waste will go now. We use commodity trade data for mass and value, region, and income level to illustrate that higher-income countries in the Organization for Economic Cooperation have been exporting plastic waste (70% in 2016) to lower-income countries in the East Asia and Pacific for decades. An estimated 111 million  metric tons of plastic waste will be displaced with the new Chinese policy by 2030. As 89% of historical exports consist of polymer groups often used in single-use plastic food packaging (polyethylene, polypropylene, and polyethylene terephthalate), bold global ideas and actions for reducing quantities of nonrecyclable materials, redesigning products, and funding domestic plastic waste management are needed."
The pattern of high-income countries sending their recycling to lower- and middle-income countries is common. The share of plastic waste going to China seems to actually be greater than the 45% mentioned above. The authors write: "China has imported 106 million MT of plastic waste, making up 45.1% of all cumulative imports. Collectively, China and Hong Kong have imported 72.4% of all plastic waste. However, Hong Kong acts as an entry port into China, with most of the plastic waste imported to Hong Kong (63%) going directly to China as an export in 2016." 

I don't have a solution here. The authors write: "Suggestions from the recycling industry demonstrate that, if no adjustments are made in solid waste management, and plastic waste management in particular, then much of the waste originally diverted from landfills by consumers paying for a recycling service will ultimately be landfilled." Other nations of east Asia don't have the capacity to absorb this flow of plastic waste, at least not right now. There doesn't seem to be much market for this type of plastic waste in the US or Europe, at least not right now. Substitutes for these plastics that either degrade or recycle more easily do not seem to be immediately available. But there is a mountain of plastic waste coming, so we will have a chance to see how the forces of supply, demand, and regulation deal with it. 

For short readable surveys of the study, I can recommend Ellen Airhart, "China Won't Solve the World's Plastics Problem Anymore," in Wired (June 20, 2018) and Jason Daley, "China’s Plastic Ban Will Flood Us With Trash," in Smithsonian (June 21, 2018).

Wednesday, July 11, 2018

When Growth of US Education Attainment Went Flat

Human capital in general, and educational background in particular, are key ingredients for economic growth. But the US had a period of about 20 years, for those born through most of the 1950s and 1960s, where educational attainment barely budged. Urvi Neelakantan and Jessie Romero provide an overview in "Slowing Growth in Educational Attainment," an Economic Brief written for the Federal Reserve Bank of Richmond (July 2018, EB18-07).

Here's a figure showing years of schooling for Americans going back to the 1870s. You can see the steady rise for both men and women up until the birth cohorts of the 1950s, when the educational gains for women slow down and those for men go flat.


In their essay, Neelakantan and Romero argue that this strengthens the case for improving K-12 education, and offer some thoughts. Here are a few related points I would emphasize.

1) Lots of factors affect productivity growth for an economy. But rapid US education growth starting back in the 19th century has been tied to later US economic growth. And it's probably not just a coincidence that when those born around 1950 were entering the workforce in the 1970s, there was a sustained slump in productivity that lasted about 20 years--into the early 1990s.

2) One reason for the rise in inequality of incomes that started in the late 1970s is that the demand for high-skilled workers was growing faster than the supply. For example, the wage gap between college-educated workers and workers with no more than a high school education increased substantially. As Neelakantan and Romero write: "This slowdown in skill acquisition, combined with growing demand for high-skill workers, contributed to a large increase in the `college premium' — the higher wages and earnings of college graduates relative to workers with only high school degrees." When educational attainment went flat, it also helped to create the conditions for US inequality to rise.

3) When a society has a period of a couple of decades where educational attainment doesn't rise, there's no way to go back later and "fix" it. The consequences like slower growth and higher inequality just march on through time. Similarly, the current generation of students--all of them, K-12, college and university--will be the next generation of US workers.

Monday, July 9, 2018

Three Questions for the Antitrust Moment

There seems to be a widespread sense that many problems of the US economy are linked to a lack of dynamism and competition, and that a surge of antitrust enforcement might be a part of the answer. Here are three somewhat separable questions to ponder in addressing this topic. 

1) Is rising concentration a genuine problem in most of the economy, or only in a few niches?

The evidence does suggest that concentration has risen in many industries. However, it also suggests that for most industries the rise in concentration is small, and within recent historical parameters. For example, here's a figure from an article by Tim Sablik, "Are Markets Too Concentrated?" published in Econ Focus, from the Federal Reserve Bank of Richmond (First Quarter 2018, pp. 10-13). The HHI is a standard measure of market concentration: it is calculated by taking the market share of each firm in an industry, squaring it, and then summing the result. Thus, a monopoly with 100% of the market would have an HHI measure of 100 squared, or 10,000. An industry with, say, two leading firms that each have 30% of the market and four other firms with 10% of the market would have an HHI of 2200. The average HHI across industries has indeed risen--back to the level that prevailed in the late 1970s and early 1980s.
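As a minimal illustration of the HHI arithmetic just described (the short function below is my own sketch, not anything from the Sablik article):

```python
def hhi(market_shares_percent):
    """Herfindahl-Hirschman Index: sum of squared market shares, in percent."""
    return sum(share ** 2 for share in market_shares_percent)

# A pure monopoly: one firm with 100% of the market.
print(hhi([100]))                      # 10000

# The example from the text: two firms with 30% each, four firms with 10% each.
print(hhi([30, 30, 10, 10, 10, 10]))   # 2200
```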



A couple of other points are worth noting:

In some of the industries where concentration has risen, recent legislation is clearly one of the important underlying causes. For example, healthcare providers and insurance firms became more concentrated in the aftermath of restrictions and rules imposed by the Patient Protection and Affordable Care Act of 2010. The US banking sector became more concentrated in the aftermath of the Wall Street Reform and Consumer Protection Act of 2010 (the Dodd-Frank Act). In both cases, supporters of the bill saw additional concentration as a useful tool for seeking to achieve the purported benefits of the legislation.

The rise in bigness that seems to bother people the most is the dominance of Apple, Alphabet,  Amazon, Facebook, and Microsoft. The possibility that these firms raise anticompetitive issues seems to me like a very legitimate concern. But it also suggests that the competition issues of most concern apply mostly to a relatively small number of firms in a relatively small number of tech-related industries.

2) Is rising concentration the result of pro-competitive, productivity-raising actions that benefit consumers, or anti-competitive actions that hurt consumers? 

The general perspective of US antitrust law is that there is no reason to hinder or break up a firm that achieves large size and market domination by providing innovative and low-cost products for consumers. But if a large firm is using its size to hinder competition or to keep prices high, then the antitrust authorities can have reason to step in. So which is it? Sablik writes:

"Several recent studies have attempted to determine whether the current trend of rising concentration is due to the dominance of more efficient firms or a sign of greater market power. The article by Autor, Dorn, Katz, Patterson, and Van Reenen lends support to the Chicago view, finding that the industries that have become more concentrated since the 1980s have also been the most productive. They argue that the economy has become increasingly concentrated in the hands of `superstar firms,' which are more efficient than their rivals." 
"The tech sector in particular may be prone to concentration driven by efficiency. Platforms for search or social media, for example, become more valuable the more people use them. A social network, like a phone network, with only two people on it is much less valuable than one with millions of users. These network effects and scale economies naturally incentivize firms to cultivate the biggest platforms — one-stop shops, with the winning firm taking all, or most, of the market. Some economists worry these features may limit the ability of new firms to contest the market share of incumbents.  ...  Of course, there are exceptions. Numerous online firms that once seemed unstoppable have since ceded their dominant position to competitors. America Online, eBay, and MySpace have given way to Google, Amazon, Facebook, and Twitter."
There is also international evidence that leading-edge firms in many industries are pulling ahead of others in the industry in terms of productivity growth. There seems to me reason for concern that well-established firms in industries with these network effects have found a way to establish a position that makes it hard--although clearly not impossible--for new competitors to enter. For example, Federico J. Díez, Daniel Leigh, and Suchanan Tambunlertchai have published "Global Market Power and its Macroeconomic Implications" (IMF Working Paper WP/18/137, June 2018). They write:

"We estimate the evolution of markups of publicly traded firms in 74 economies from 1980-2016. In advanced economies, markups have increased by an average of 39 percent since 1980. The increase is broad-based across industries and countries, and driven by the highest markup firms in each economic sector. ... Focusing on advanced economies, we investigate the relation between markups and investment, innovation, and the labor share at the firm level. We find evidence of a non-monotonic relation, with higher markups being correlated initially with increasing and then with decreasing investment and innovation rates. This non-monotonicity is more pronounced for firms that are closer to the technological frontier. More concentrated industries also feature a more negative relation between markups and investment and innovation."
In other words, firms may at first achieve their leadership and higher profits with a burst of innovation, but over time, the higher profits are less associated with investment and innovation.


An interrelated but slightly different argument is that the rise in concentration tells us less about the behavior of large firms, and more about a slowdown in the arrival of new firms. For example, it's no surprise that concentration was lower in the 1990s, with the rise of the dot-com companies, and it's no surprise that concentration then rose again after that episode. Jason Furman and Peter Orszag explore these issues in "Slower Productivity and Higher Inequality: Are They Related?" (June 2018, Peterson Institute for International Economics, Working Paper 18-4). They argue that the rise of "superstar" firms has been accompanied by slower productivity growth and more dispersion of wages, but that the underlying cause is a drop in the start-up rates of new firms and the dynamism of the US economy. They write:
"Our analysis is that there is mounting evidence that an important common cause has contributed to both the slowdown in productivity growth and the increase in inequality. The ultimate cause is a reduction in competition and dynamism that has been documented by Decker et al (2014, 2018) and many others. This reduction is partly a “natural” reflection of trends like the increased importance of network externalities and partly a “manmade” reflection of policy choices, like increased regulatory barriers to entry. These increased rigidities have contributed to the rise in concentration and increased dispersion of firm-level profitability. The result is less innovation, either through a straightforward channel of less investment or through broader factors such as firms not wanting to cannibalize on their own market shares. At the same time, these channels have also contributed to rising inequality in a number of different ways ..." 
Here are a couple more articles I found useful in thinking about these issues, and in particular about the cases of Google and Amazon. 

Charles Duhigg wrote "The Case Against Google," in the New York Times Magazine (February 20, 2018). He notes that a key issue in antitrust enforcement is whether a large firm is actively undermining potential competitors, and offers some examples of small companies that have pursued legal action because they felt undermined. If Google is using its search functions and business connections to disadvantage firms that are potential competitors, then that's a legitimate antitrust issue. Duhigg also argues that if Microsoft had not been sued for this type of anticompetitive behavior about 20 years ago, it might have killed off Google.

The argument that Google uses its search functions to disadvantage competitors reminds me of the longstanding antitrust arguments about computer reservation systems in the airline industry. Going back to the late 1980s, airlines like United and American built their own computer reservation systems, which were then used by travel agents. While in theory the systems listed all flights, the airlines also had a tendency to list their own flights more prominently, and there was some concern that they could also adjust prices for their own flights more quickly. Such lawsuits continue up to the present. The idea that a firm can use search functions to disadvantage competitors, and that such behavior is anticompetitive under certain conditions, is well-accepted in existing antitrust law.

As Duhigg notes, the European antitrust authorities have found against Google. "Google was ordered to stop giving its own comparison-shopping service an illegal advantage and was fined an eye-popping $2.7 billion, the largest such penalty in the European Commission’s history and more than twice as large as any such fine ever levied by the United States." As you might imagine, the case remains under vigorous appeal and dispute.

As a starting point for thinking about Amazon and anticompetitive issues, I'd recommend Lina M. Khan's article on "Amazon's Antitrust Paradox"  (Yale Law Journal, January 2017, pp. 710-805).  From the abstract:
"Amazon is the titan of twenty-first century commerce. In addition to being a retailer, it is now a marketing platform, a delivery and logistics network, a payment service, a credit lender, an auction house, a major book publisher, a producer of television and films, a fashion designer, a hardware manufacturer, and a leading host of cloud server space. Although Amazon has clocked staggering growth, it generates meager profits, choosing to price below-cost and expand widely instead. Through this strategy, the company has positioned itself at the center of e-commerce and now serves as essential infrastructure for a host of other businesses that depend upon it. Elements of the firm’s structure and conduct pose anticompetitive concerns—yet it has escaped antitrust scrutiny.
"This Note argues that the current framework in antitrust—specifically its pegging competition to `consumer welfare,' defined as short-term price effects—is unequipped to capture the architecture of market power in the modern economy. We cannot cognize the potential harms to competition posed by Amazon’s dominance if we measure competition primarily through price and output. Specifically, current doctrine underappreciates the risk of predatory pricing and how integration across distinct business lines may prove anticompetitive. These concerns are heightened in the context of online platforms for two reasons. First, the economics of platform markets create incentives for a company to pursue growth over profits, a strategy that investors have rewarded. Under these conditions, predatory pricing becomes highly rational—even as existing doctrine treats it as irrational and therefore implausible. Second, because online platforms serve as critical intermediaries, integrating across business lines positions these platforms to control the essential infrastructure on which their rivals depend. This dual role also enables a platform to exploit information collected on companies using its services to undermine them as competitors."
This passage summarizes the conceptual issue.  In effect, it argues that Amazon may be good for consumers (at least in the short-run of some years), but still have potential "harms for competition." The idea that antitrust authorities should act in a way that hurts consumers in the short run, on the grounds that it will add to competition that will benefit consumers in the long run, would be a stretch for current antitrust doctrine--and if applied too broadly could lead to highly problematic results. Khan's article is a good launching-pad for that discussion.

3) Should bigness be viewed as bad for political reasons, even if it is beneficial for consumers?

The touchstone of antitrust analysis has for some decades now been whether consumers benefit. Other factors like whether workers lose their jobs or small businesses are driven into bankruptcy do not count. Neither does the potential for political clout being wielded by large firms. But the argument that antitrust should go beyond efficiency that benefits consumers has a long history, and seems to be making a comeback.

Daniel A. Crane discusses these issues in "Antitrust’s Unconventional Politics: The ideological and political motivations for antitrust policy do not neatly fit the standard left/right dichotomy," appearing in Regulation magazine (Summer 2018, pp. 18-22).
"Although American antitrust policy has been influenced by a wide variety of ideological schools, two influences stand out as historically most significant to understanding the contemporary antitrust debate. The first is a Brandeisian school, epitomized in the title of Louis Brandeis’s 1914 essay in Harper’s Weekly, “The Curse of Bigness.” Arguing for `regulated competition' over `regulated monopoly,' he asserted that it was necessary to `curb[...] physically the strong, to protect those physically weaker' in order to sustain industrial liberty. He evoked a Jeffersonian vision of a social-economic order organized on a small scale, with atomistic competition between a large number of equally advantaged units. His goals included the economic, social, and political. ... The Brandeisian vision held sway in U.S. antitrust from the Progressive Era through the early 1970s, albeit with significant interruptions. ...
"The ascendant Chicago School of the 1960s and 1970s threw down the gauntlet to the Brandeisian tendency of U.S. antitrust law. In an early mission statement, Bork and Ward Bowman characterized antitrust history as `vacillat[ing] between the policy of preserving competition and the policy of preserving competitors from their more energetic and efficient rivals,' the latter being an interpretation of the Brandeis School. Chicagoans argued that antitrust law should be concerned solely with economic efficiency and consumer welfare. `Bigness' was no longer necessarily a curse, but often the product of superior efficiency. Chicago criticized Brandeis’s `sympathy for small, perhaps inefficient, traders who might go under in fully competitive markets.' Preserving a level playing field meant stifling efficiency to enable market participation by the mediocre. Beginning in 1977–1978, the Chicago School achieved an almost complete triumph in the Supreme Court, at least in the limited sense that the Court came to adopt the economic efficiency/consumer welfare model as the exclusive or near exclusive goal of antitrust law ..."
As Crane points out, the intellectual currents here have been entangled over time, reflecting our tangled social views of big business. The Roosevelt administration trumpeted the virtues of small business, until it decided that large consolidated firms would be better at getting the US economy out of the Great Depression and fighting World War II. After World War II, there was a right-wing fear that large consolidated firms were the pathway to a rise of government control over the economy and Communism, and Republicans pushed for more antitrust. In the modern economy, we are more likely to view unsuccessful firms as needing support and subsidy, and successful firms as having in some way competed unfairly. One of the reasons for focusing antitrust policy on consumer benefit was that it seemed clearly preferable to a policy that seemed focused on penalizing success and subsidizing weakness.

The working assumption of current antitrust policy is that no one policy can (or should) try to do everything. Yes, encouraging more business dynamism and start-ups is a good thing. Yes, concerns about workers who lose their jobs or companies that get shut down are a good thing. Yes,  certain rules and restrictions on the political power of corporations are a good thing. But in the conventional view (to which I largely subscribe), antitrust is just one policy. It should focus on consumer welfare and specific anticompetitive behaviors by firms, but not become a sort of blank check for government to butt in and micromanage successful firms.

Friday, July 6, 2018

Skeptical about Cryptocurrencies

Cryptocurrencies like Bitcoin have many interesting properties as financial assets, but are they ever likely to become money? The Bank for International Settlements (BIS) devotes Chapter V of its Annual Report 2017-18 (released June 24, 2018) to the topic "Cryptocurrencies: looking beyond the hype."  Here's the main thrust of the argument:
"[T]he essence of good money has always been trust in the stability of its value. And for money to live up to its signature property – to act as a coordination device facilitating transactions – it needs to efficiently scale with the economy and be provided elastically to address fluctuating demand. ... The chapter then gives an introduction to cryptocurrencies and discusses the economic limitations inherent in the decentralised creation of trust which they entail. For the trust to be maintained, honest network participants need to control the vast majority of computing power, each and every user needs to verify the history of transactions and the supply of the cryptocurrency needs to be predetermined by its protocol. Trust can evaporate at any time because of the fragility of the decentralised consensus through which transactions are recorded. Not only does this call into question the finality of individual payments, it also means that a cryptocurrency can simply stop functioning, resulting in a complete loss of value. Moreover, even if trust can be maintained, cryptocurrency technology comes with poor efficiency and vast energy use. Cryptocurrencies cannot scale with transaction demand, are prone to congestion and greatly fluctuate in value. Overall, the decentralised technology of cryptocurrencies, however sophisticated, is a poor substitute for the solid institutional backing of  money. That said, the underlying technology could have promise in other applications, such as the simplification of administrative processes in the settlement of financial transactions. Still, this remains to be tested."
Here are some figures that caught my eye on the energy consumption and scaling issues that face cryptocurrencies.



On the issue of energy use, as the "miners" who update the system while solving computational problems burn energy to do so: "Individual facilities operated by miners can host computing power equivalent to that of millions of personal computers. At the time of writing, the total electricity use of bitcoin mining equalled that of mid-sized economies such as Switzerland, and other cryptocurrencies also use ample electricity. Put in the simplest terms, the quest for decentralised trust has quickly become an environmental disaster."

Verifying a cryptocurrency transaction through the blockchain is not only costly in terms of electricity, it's slow. This slowness isn't a bug; it's a feature--that is, it's built in to how the blockchain verifies transactions.  "[A]t the time of writing, the Bitcoin blockchain was growing at around 50 GB per year and stood at roughly 170 GB. Thus, to keep the ledger’s size and the time needed to verify all transactions (which increases with block size) manageable, cryptocurrencies have hard limits on the throughput of transactions."




The underlying architecture of cryptocurrencies is that they are updated in blocks, which can only be added at prespecified intervals of time after earlier blocks have been completed. When demand for transactions gets high, the system becomes congested and transactions sometimes wait several hours before being verified. "Another aspect of the scalability issue is that updating the ledger is subject to congestion. For example, in blockchain-based cryptocurrencies, in order to limit the number of transactions added to the ledger at any given point in time, new blocks can only be added at pre-specified intervals. Once the number of incoming transactions is such that newly added blocks are already at the maximum size permitted by the protocol, the system congests and many transactions go into a queue. With capacity capped, fees soar whenever transaction demand reaches the capacity limit. And transactions have at times remained in a queue for several hours, interrupting the payment process. This limits cryptocurrencies’ usefulness for day-to-day transactions such as paying for a coffee or a conference fee, not to mention for wholesale payments. Thus, the more people use a cryptocurrency, the more cumbersome payments become."
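To see how a hard cap on block size plus fixed block intervals translates into a throughput ceiling and a growing queue, here is a small illustrative sketch. The parameter values (10-minute blocks, a 1 MB block-size cap, 250-byte transactions, a 10-per-second demand spike) are stand-in, roughly Bitcoin-like assumptions of mine, not figures from the BIS report.

```python
# Illustrative sketch: capped blocks arriving at fixed intervals imply a hard
# throughput limit, and any demand above that limit piles up in a queue.
# All parameter values below are assumptions for illustration only.

BLOCK_INTERVAL_SEC = 10 * 60      # a new block roughly every 10 minutes
BLOCK_SIZE_BYTES = 1_000_000      # protocol cap on block size
AVG_TX_BYTES = 250                # assumed average transaction size

tx_per_block = BLOCK_SIZE_BYTES // AVG_TX_BYTES
max_tx_per_sec = tx_per_block / BLOCK_INTERVAL_SEC
print(f"Throughput ceiling: about {max_tx_per_sec:.1f} transactions per second")

# If demand runs at 10 transactions per second, the backlog grows every hour,
# and the newest transaction in the queue waits longer and longer.
demand_per_sec = 10
backlog = 0.0
for hour in range(1, 4):
    backlog += (demand_per_sec - max_tx_per_sec) * 3600
    wait_hours = backlog / (max_tx_per_sec * 3600)
    print(f"After {hour} h of excess demand: backlog {backlog:,.0f} transactions, "
          f"latest arrival waits about {wait_hours:.1f} hours")
```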

Here's a figure showing spikes in daily average transaction fees for several cryptocurrencies:
Put these issues of high and volatile transactions costs together with the extreme volatility in the price of cryptocurrencies, even those that are supposedly designed to have relatively stable values, and their widespread use as  "money" seems implausible.

But the underlying blockchain technology might prove itself useful in other ways, "in niche settings where the benefits of decentralised access exceed the higher operating cost of maintaining multiple copies of the ledger." These uses often will not involve cryptocurrencies, but will instead be "cryptopayment" systems where the blockchain technology is used by a group of players who have permission to be on the system to handle long-distance payments. For example, the World Food Programme ran a blockchain-based cryptopayment system to send funds for food aid serving Syrian refugees in Jordan. More broadly, a blockchain-based system might cut transaction costs for the global remittance flows of $540 billion annually. Or a "smart contract" system might be set up to facilitate trade financing, such that exporters can be reassured that they will be paid and importers can be reassured that what they paid for has actually shipped.

My own sense is that blockchain is likely to be one more entry in the list of technologies that were invented for one purpose--in this case, cryptocurrency--but ended up instead being most useful for other purposes, from targeted finance to safe-recordkeeping to keeping track of supply chains. For more on Bitcoin and blockchain, see: