Wednesday, October 28, 2020

The Need for Large Firms in Developing Countries

The US economy had about 6 million firms in 2017 (the most recent data). About 20,000 of those firms employed more than 500 people, and those 20,000 firms (about one-third of 1 percent of the total) accounted for 53% of all US employment by firms. Another 90,000 firms employed between 100 and 499 workers, and those 90,000 firms (about 1.5% of the total) accounted for another 14% of all US employment by firms. The job totals here don't take into account employment by the public sector and by nonprofits. But the point I'm making is that an important social function of firms is to coordinate production in a way that provides a bridge between workers and suppliers on one hand and the desires of customers on the other hand. In high-income economies, large firms coordinating the efforts of hundreds of workers play a major role in this activity. 

But many lower-income countries have only very small numbers of larger firms, which is one of the factors hindering their development. A group of World Bank researchers--Andrea Ciani, Marie Caitriona Hyland, Nona Karalashvili, Jennifer L. Keller, Alexandros Ragoussis, and Trang Thu Tran--address this topic in "Making It Big: Why Developing Countries Need More Large Firms" (September 2020). 

The available evidence on firm size, employment, and productivity in low- and middle-income countries is sometimes sketchy, so the report pulls together data, studies, and comparisons from a range of sources. The evidence strongly suggests benefits from large firms:
This report shows that large firms are different than other firms in low- and middle-income countries. They are significantly more likely to innovate, export, and offer training and are more likely to adopt international standards of quality. Their particularities are closely associated with productivity advantages—that is, their ability to lower the costs of production through economies of scale and scope but also to invest in quality and reach demand. Across low- and middle-income countries with available business census data, nearly 6 out of 10 large enterprises are also the most productive in their country and sector.

These distinct features of large firms translate into improved outcomes not only for their owners but also for their workers and for smaller enterprises in their value chains. Workers in large firms report, on average, 22 percent higher hourly wages in household and labor surveys from 32 low- and middle-income countries—a premium that rises considerably in lower-income contexts. That is partly because large firms attract better workers. But this is not the only reason: accounting for worker characteristics and nonpecuniary benefits, the large-firm wage premium remains close to 15 percent. Besides higher wages—which are strongly associated with higher productivity—large firms more frequently offer formal jobs, secure jobs, and nonpecuniary benefits such as health insurance that are fundamental for welfare in low- and middle-income countries. 
Using various measures, the authors argue that there is a pattern of a "truncated top" in the distribution of firm sizes in low- and middle-income countries. 
Smaller and lower-income markets tend to host smaller firms. But even in relative terms, there are too few larger firms in these countries relative to the size of the economy and the number of smaller firms—there is a “missing top.” In 2016, for example, for every 100 medium-size firms, more than 20 large firms were operating in the nonagricultural sector in the United States, as opposed to less than 9 in Indonesia—a lower-middle-income country with roughly the same population. A closer study of the firm-size distribution in country pairs suggests that what is missing are the larger of large firms—that is, those with 300+ employees—as well as the more productive and outward-oriented firms. ... The evidence suggests that larger firms employing more than 300 workers are systematically underrepresented in the lower-income countries under observation. In Ethiopia, for example, large firms have a 7-percentage-point lower share of employment than what is predicted by the optimal distribution, while in Indonesia, the gap is 4.6 percentage points, corresponding to a rough estimate of 230,000 missing jobs in manufacturing. 
Why are there fewer large firms than expected, and how might low- and middle-income countries generate more large firms? As the report points out, there are basically four ways in which a large firm forms: "foreign firms creating new affiliates, other large firms spinning off new ventures, governments, and entrepreneurs."

Given that the lack of large firms is the problem in the first place, spin-offs from existing large firms are not likely to address the problem. Having governments of low- and middle-income countries start large firms hasn't usually worked well. 
To fill the “missing top,” governments have often resorted to the creation of state-owned enterprises (SOEs). These firms rarely deliver the benefits one might expect from their scale. First, it has proven difficult to establish governance sufficiently independent of the state to operate in a commercial manner. SOEs often pursue a mix of social and commercial objectives, which are used to justify regulatory protection from competition. It is also difficult for governments to manage the conflict of interest that arises between exposing SOEs to competition, on the one hand, and the risk of job losses and changes in product offerings that come with this exposure, on the other. As a result, SOEs in lower-income economies rarely emulate the productivity and dynamism of privately owned firms: they are three times less likely to be the most productive firm in their country and sector.
The remaining options are to have foreign firms start a larger company, or to have domestic entrepreneurs build one. But many low-income countries have set up rules and regulations that make it hard for larger firms to operate. For example, there is often a set of taxes, regulations, and rules about employment and wages that apply only to firms above a certain employment size--often set around 100 workers, or in some countries even less. Foreign firms are often blocked. There are often a variety of rules aimed at protecting small incumbent firms from competition, making it hard for larger firms to get a foothold. My own sense is that governments in low- and middle-income countries often tend to view large firms as an alternative power structure and (not without reason) as a threat to their own political power. For a detailed explanation of how this dynamic plays out in Mexico, a useful starting point is my post on "Mexico Misallocated" (January 24, 2019).

To put it another way, larger firms have some natural advantages in productivity, at least in certain contexts, but in many low- and middle-income countries, the sum total of government actions offsets that advantage. Thus, the World Bank researchers suggest that policies to encourage larger firms (and remember, we're only talking here about firms with 100 or a few hundred employees, not giant global multinationals) mostly involve existing governments getting out of the way: 
In low-income countries, governments can achieve that objective with simple policy reorientations, such as breaking oligopolies, removing unnecessary restrictions to international trade and investment, and putting in place strong competition frameworks to prevent the abuse of market power. Opening markets to competition benefits entrants of all sizes. In practice, however, regulation is often designed for the benefit of large incumbents using statutory monopolies and oligopolies, preferential access to natural resources and government contracts, or barriers to foreign competitors that rarely enter at small scale in new markets. The entry of more large firms to compete with incumbents would aim to disperse power by any one firm. There is a long way to go in this regard: regulatory protection of incumbents in lower-middle-income countries is more than 60 percent greater, on average, than the level observed in high-income countries.

Beyond the entry point, operational costs associated with a range of government policies can greatly influence investors’ decisions to establish new, large firms. Large firms in low- and middle-income countries are significantly more likely than small firms to report customs operations, the court system, workforce skills, transportation, and telecommunications infrastructure as constraining their operations. Bread-and-butter reforms that aim to improve market regulation, trade processes, and tax regimes and to protect intellectual property rights stand to make a difference in that respect, even when these long-term reforms do not have large-firm creation as the objective.
The need for expanding employment into jobs with decent pay is a huge topic for many low- and middle-income countries of the world--from India and South Asia to sub-Saharan Africa, from China to the countries of the Middle East. That policy goal is not likely to be achievable without a surge in large firms in these countries. 

Tuesday, October 27, 2020

Thinking about Better Graphs and Use of Color

When I started working as the Managing Editor of the Journal of Economic Perspectives back in 1986, making figures for academic articles was still relatively expensive. The changeover to software-generated figures was getting underway, but with lots of hiccups--for example, we had to purchase a more expensive printer that could produce figures as well as text. At my home base at the time, Princeton University still employed a skilled draftsman to create beautiful figures, using tools like plotting points and tracing along the edge of a French curve, which have now gone the way of the slide rule.  

Generating figures has now become cheap: indeed, I see more and more first drafts at my journal which include at least a dozen figures and often more. I sometimes suspect that the figures were generated for slides that can be shown during a live presentation, and then the paper was written around the series of figures. Economists and other social scientists, like it or not, need to know something about what makes a good graph.  Susan Vanderplas, Dianne Cook, and Heike Hofmann give some background in "Testing Statistical Charts: What Makes a Good Graph?" (Annual Review of Statistics and Its Application, 2020, subscription required). 

With a good statistical graph or figure, readers should be able to read information or see patterns with reasonable accuracy (although people have a tendency to round up or down). As the authors write (citations omitted): 

A useful starting point is to apply gestalt principles of visual perception, such as proximity, similarity, common region, common fate, continuity, and closure, to data plots. These principles are useful because good graphics take advantage of the human visual system’s ability to process large amounts of visual information with relatively little effort.
The authors discuss research on the extent to which certain graphs meet this goal: for example, one can use "think-aloud" methods where subjects talk about what they are seeing and thinking about as they look at various figures, or eye-tracking studies to find what people are actually looking at. They also focus on statistical charts, not on the production of more artistic "infographics." Along with general tips, I've been interested in recent years in the use of color. 

The authors argue that when using a range of colors, best practice is to use a neutral color in between a range of two other colors. They also point out that the human eye does not discern gradations in all colors equally well: "It is also important to consider the human perceptual system, which does not perceive hues uniformly: We can distinguish more shades of green than any other hue, and fewer shades of yellow, so green univariate color schemes will provide finer discriminability than other colors because the human perceptual system evolved to work in the natural world, where shades of green are plentiful." In terms of human physiological perceptions, " a significant portion of the color space is dedicated to greens and blues, while much smaller regions are dedicated to violet, red, orange, and yellow colors. This unevenness in mapping color is one reason that the multi-hued rainbow color scheme is suboptimal—the distance between points in a given color space may not be the same as the distance between points in perceptual space. As a result of the uneven mapping between color space and perceptual space, multi-hued color schemes are not recommended." In addition, some people are color-blind: the most common kind is an inability to distinguish between red and green, but there are also people who have difficulties distinguishing between blues and greens, and between yellows and reds. 

Given these realities, what range of color is recommended? A purple-orange diverging gradient both passes through a neutral middle color and is also distinguishable by people with any sort of color-blindness. Of course, this doesn't mean it should always be used: people may have mental associations with colors (say, blue associated with cold) that make it useful to use other colors. But it's worth remembering. 
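For readers who build their own charts, here is a minimal sketch of that advice in Python with matplotlib. The "PuOr" colormap is matplotlib's built-in purple-orange diverging scheme; the data values below are made up purely for illustration.

```python
# A minimal sketch (hypothetical data): mapping a diverging variable onto a
# purple-orange colormap that passes through a neutral midpoint and remains
# legible to readers with common forms of color-blindness.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
values = rng.normal(loc=0.0, scale=1.0, size=(10, 10))  # e.g., deviations from a benchmark

fig, ax = plt.subplots()
# "PuOr" is matplotlib's purple-orange diverging colormap; centering the color
# scale at zero keeps the neutral color aligned with the neutral value.
im = ax.imshow(values, cmap="PuOr", vmin=-3, vmax=3)
fig.colorbar(im, ax=ax, label="Deviation from benchmark")
plt.show()
```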


For an example of how a better graph can help with perception, consider a chart of notifications for tuberculosis in Australia in 2012, divided by age and gender. The top panel shows gender side-by-side for each age group, with two colors used to distinguish gender. The bottom panel shows age groups side-by-side for each gender, with five colors used to distinguish ages. The authors argue that "common region" arguments make it easier for most viewers to get information from the top figure. 
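To make the "common region" idea concrete, here is a rough sketch of the two layouts in matplotlib, using invented counts rather than the actual Australian notification data.

```python
# A sketch with invented counts (not the actual Australian notification data),
# contrasting two groupings of the same numbers: gender side-by-side within
# each age group (top) versus age groups side-by-side within each gender (bottom).
import numpy as np
import matplotlib.pyplot as plt

ages = ["0-14", "15-34", "35-54", "55-74", "75+"]
counts = {"Female": [20, 150, 120, 80, 40],   # hypothetical
          "Male":   [25, 170, 140, 110, 60]}  # hypothetical

fig, (top, bottom) = plt.subplots(2, 1, figsize=(7, 6))

# Top panel: age on the x-axis, two bars (one per gender) in each age "region."
x = np.arange(len(ages))
top.bar(x - 0.2, counts["Female"], width=0.4, label="Female")
top.bar(x + 0.2, counts["Male"], width=0.4, label="Male")
top.set_xticks(x)
top.set_xticklabels(ages)
top.set_ylabel("Notifications")
top.legend()

# Bottom panel: gender on the x-axis, five bars (one per age group) in each gender "region."
x2 = np.arange(2)
width = 0.15
for i, age in enumerate(ages):
    bottom.bar(x2 + (i - 2) * width,
               [counts["Female"][i], counts["Male"][i]],
               width=width, label=age)
bottom.set_xticks(x2)
bottom.set_xticklabels(["Female", "Male"])
bottom.set_ylabel("Notifications")
bottom.legend(ncol=5, fontsize="small")

plt.tight_layout()
plt.show()
```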

Finally, here's an example of a graph that is "interactive," even though it is static.  The graph shows the average number of births on each day of the year. Notice that although there's a lot of shading, it's in green, so the distinctions are easier to perceive. Key takeaways stand out easily: more babies are born in summer than in winter, and fewer births occur around holidays like July 4, Thanksgiving, Christmas, and New Year's. Also, the natural tendency for a reader is to check out their own birthday--which is what makes the figure interactive. It's easy to imagine other kinds of figures--by age, gender, location, income, education, and so on--that might cause readers to interact in a similar way by checking out the data for their own group.
For those who want to dig deeper, the article has lots more examples and citations. For more on graphic presentations of data, a useful starting point from the journal where I work as Managing Editor is the paper by Jonathan A. Schwabish in the Winter 2014 issue: "An Economist's Guide to Visualizing Data," Journal of Economic Perspectives, 28:1, pp. 209-34. From his abstract: "Once upon a time, a picture was worth a thousand words. But with online news, blogs, and social media, a good picture can now be worth so much more. Economists who want to disseminate their research, both inside and outside the seminar room, should invest some time in thinking about how to construct compelling and effective graphics."

Monday, October 26, 2020

Will China Be Caught in the Middle-Income Trap?

The "middle-income trap" is the phenomenon that once an economy has made the big leap from being a lower-income country to being a middle-income country, then it may find it difficult (although not impossible) to make the next leap from being middle-income to high-income. Matthew Higgins considers the situation of China in "China’s Growth Outlook: Is High-Income Status in Reach?" (Federal Reserve Bank of New York, Economic Policy Review, October 2020, 26:4, pp. 68-97). 

Higgins provides the basic backdrop for China's remarkable economic growth in the last four decades. 

China’s growth performance has been remarkable following the introduction of economic reforms in the late 1970s. According to the official data, real GDP growth has averaged 9.0 percent since 1978. ... Rapid economic growth has led to a similar increase in living standards, lifting China out of poverty and into middle-income status. According to official figures, real per capita income has risen by a factor of 25 since 1978. Annual per capita income now stands at about $16,100 measured at purchasing power parity, in “2011 international dollars.” ... This places China at roughly the 60th percentile of the global income distribution, though still slightly below 30 percent of the U.S. level.
A first question, of course, is whether we really believe the official growth numbers, and the answer is "not quite." One difficulty with huge growth numbers over sustained periods of time is that you can project backwards to what the original level of income must have been at the start of the process. Thus, if current Chinese real per capita income is $16,100, and the growth rate has been 9% for (say) 40 years, then the real per capita income for China would have been about $500 before the reforms started. As Higgins spells out the implication: 
Indeed, real per capita income [in China] at the start of the decade [the 1980s] would have been below that of most countries in sub-Saharan Africa as well as neighbors such as Bangladesh, Laos, and Myanmar. Although China was clearly a poor country at the time, few would have rated it as one of the poorest. Such a ranking is also inconsistent with data on life expectancy, literacy, and other quality-of-life indicators. Growth rates from the Penn World Table, more plausibly, place China at roughly the 30th percentile of the global income distribution in the early 1980s, ahead of most countries in sub-Saharan Africa but still behind neighbors such as Indonesia, the Philippines, and Thailand.
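The back-of-the-envelope projection behind this argument is just compound growth run in reverse. Here is a quick sketch of the arithmetic, using the figures quoted above (the code is my illustration, not Higgins's):

```python
# Back-projecting China's per capita income from the figures quoted above:
# if income is $16,100 today and grew 9% per year for 40 years, what was the
# starting level?
current_income = 16_100     # 2011 international dollars, at purchasing power parity
growth_rate = 0.09
years = 40

implied_start = current_income / (1 + growth_rate) ** years
print(f"Implied pre-reform per capita income: ${implied_start:,.0f}")
# Roughly $500 -- implausibly low compared with quality-of-life indicators for
# China circa 1980, which is why the official growth rates look overstated.
```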
For comparison, here are China's official growth rates and those from the Penn World Tables: 
As you might expect, there's been an ongoing controversy for a couple of decades now over what numbers are most accurate, which I will sidestep here (although other papers in this issue of the Economic Policy Review do address them). I'll just point out that if you start adjusting numbers for one country, you need to adjust them for all countries, and when all is said and done, it remains true that China has had decades of extraordinary growth and has become a middle-income economy. 

Here, I want to focus on the question of what it would take for China to become a high-income economy, and thus not to succumb to the middle-income trap. As the figure shows, China's growth rates were slowing down even before the trade wars and now the pandemic. Higgins looks at past patterns of countries moving from middle-income to high-income status and writes: 
Our middle-income category includes countries with per capita incomes at 10 to 50 percent of the U.S. level (at current purchasing power parities); our high-income category includes anything above that. ... Out of 124 countries, 52 qualified as middle-income in 1978 and 49 in 2018. Of the original cohort of 52 middle-income countries, just 8 had advanced to high-income status by 2018.
Of course, if China can maintain a 6% growth rate for the next few decades, it will keep catching up to high-income countries like the US, Japan, Canada, and nations of western Europe. But for most countries reaching middle-income status, sustaining such high growth rates for additional decades doesn't usually happen. For example, Higgins points out that after Japan had several decades of rapid growth and reached China's current level of per capita GDP back in 1976, Japan's growth rate steadily dropped over time, and has been at about 1% per year in recent decades. Or after South Korea had several decades of rapid growth and reached China's current level of per capita GDP back in 1994, its growth rate steadily declined to less than 3% per year. 

How likely is continued rapid growth for China? Higgins digs down into the underlying sources of growth for some insights. Thus, one source of economic growth is known as the "demographic dividend," which happens when a country has a rising share of its population in the prime working years from age 20-64: "According to U.N. figures, China’s working-age population is expected to decline by about 12 percent over the next twenty years even as the total population rises slightly." As the figure shows, the share of China's population that is working-age started declining a few years ago: for other rapid-growth cases like Japan or the east Asian "tiger" economies, the working-age share of the population was still rising when they hit China's current level of per capita GDP. 
Another issue is that other examples of rapid growth, like Japan, South Korea, and the other east Asian "tigers," kept their growth rates high in part with very high levels of physical capital investment. But China has already gone through a stage of extremely high levels of investment, and is now trying to shift to an economy in which growth is based more on human skills/education, technology, and services.  

On the other side, because China's real per capita GDP has only reached about 30% of the US level, there is certainly still room for growth. Higgins writes: "Prospects for rapid growth in China are buoyed by two key factors: the country’s distance behind current global income leaders and its relatively low rate of urbanization. These factors could provide scope for continued rapid growth through `catch-up' effects and structural transformation. ... China’s unfinished structural transformation leaves it with plenty of room to run. How fully China exploits this potential will depend largely on its own policies."

Higgins also points to one set of "institutional" policy measures from the World Bank. The rankings for these measures have been adjusted so that the average for the 121 countries included is set at zero, and the standard deviation is set at 1.0. On five of the six measures, China is below the global average. On all six measures, it is well below the high-income countries of the world. One can of course quarrel with the details of how such measures are calculated, but the overall pattern is clear.  
Perhaps the fundamental challenge for China is to recognize that the past 40 years of economic growth were an excellent start to becoming a high-income country, but really only a start, and future growth will require even more sweeping changes to the economy and society.  

As noted above, this issue of the Economic Policy Review has a group of articles on "China in the Global Economy." The four articles are: 

Thursday, October 22, 2020

Interview with Sandra Black: Education Outcomes and a Stint in Politics

Douglas Clement has an interview with Sandra Black in the Fall 2020 issue of For All, a publication of the Opportunity & Inclusive Growth Institute at the Minneapolis Federal Reserve. The title and subtitle sum up the topics: "Seeing the margins: An interview with Columbia University economist Sandra Black," with the subtitle "Sandra Black on education, family wealth, her time at the White House, COVID-19, and the cost of bad policy." Like a lot of the interviews done by Clement, the interviewee is encouraged to describe the basic insight behind some of their own prominent research, which in turn gives a look into how economists think about research. 

For example, Black wrote an article back in 1999 on the subject of how much value parents place on living in a school district with higher test scores (Sandra E. Black, "Do Better Schools Matter? Parental Valuation of Elementary Education,"  Quarterly Journal of Economics, 114: 2, May 1999, pp. 577–599). Here's how Black describes the issue and her approach: 
Let’s look at how parents value living in a house that is associated with a better school. That’s an indirect value of the school—what the parents are willing to pay to have the right to send their children to a particular school. The problem is that when you buy a house, it has a whole bunch of different attributes. You’re buying the school that you get to send your kids to, but you’re also buying the neighborhood and the house itself and all the public amenities and all kinds of other things. And those things tend to be positively correlated. Better school districts tend to be in better neighborhoods with nicer houses—so isolating the part due just to schools is somewhat complicated. ... 

What I did was look, in theory, at two houses sitting on opposite sides of the same street, where the attendance district boundary divides the street. The houses are clearly in the same neighborhood, they’re of similar quality, et cetera. The only difference between them is which elementary school the child from each home attends. And then you can ask, How different are the prices of those houses, and how does that difference relate to the differences in school quality?

What I found was that parents were willing to pay more for better schools, but much less than you would casually estimate if you didn’t take into account all these other factors. In Massachusetts, parents were willing to pay 2.5 percent more for a 5 percent increase in school test scores. ... 

[T]his was a long time ago, so pretty much all the information was hand-collected. The housing prices were in a database, but for the attendance district boundaries, I had to contact each school district to ask for their map. I would call them and say, “Can I get the map of your boundaries?” And they would ask, “What house are you thinking of buying?” I’d reply, “No, I actually just want the map.” They’d usually send me a list of streets that were in the attendance district, and a friend of mine and I would sit down and try to create these maps. She was a very good friend.
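For readers curious about the mechanics, here is a rough sketch of the boundary-discontinuity idea with simulated data; this is my illustration, not Black's actual data or code. A fixed effect for each attendance-district boundary absorbs shared neighborhood quality, so the test-score coefficient is identified from price differences across the boundary.

```python
# A rough sketch of the boundary-discontinuity approach with simulated data
# (not Black's actual dataset or code). Each boundary pair shares a
# neighborhood effect; a boundary fixed effect soaks it up, leaving the
# school-quality (test score) coefficient identified across the boundary.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for b in range(200):                              # 200 hypothetical boundaries
    neighborhood_effect = rng.normal(0, 0.3)      # shared by both sides of the street
    for side in (0, 1):
        test_score = rng.normal(0, 1) + 0.5 * side   # one side has the better school
        log_price = 11.5 + neighborhood_effect + 0.025 * test_score + rng.normal(0, 0.1)
        rows.append({"boundary": b, "test_score": test_score, "log_price": log_price})
df = pd.DataFrame(rows)

# OLS with boundary fixed effects: C(boundary) adds a dummy for each boundary pair.
fit = smf.ols("log_price ~ test_score + C(boundary)", data=df).fit()
print(fit.params["test_score"])   # should recover roughly the 0.025 used in the simulation
```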
Here's another example. Back in 1997, the state of Texas passed the "Top Ten Percent Plan." The idea was that anyone in the top 10% of their high school class would be automatically admitted to any University of Texas campus they wished. One of the hopes was to improve diversity at the flagship UT-Austin campus. What happened, both for those admitted to the traditionally more selective UT-Austin campus and for those who missed out on going to that campus as a result of the change? (The paper is Sandra E. Black, Jeffrey T. Denning, and Jesse Rothstein, "Winners and Losers: The Effect of Gaining and Losing Access to Selective Colleges on Education and Labor Market Outcomes," March 2020, NBER Working Paper 26821). Black tells the story: 
The idea is that the top 10 percent of every high school in Texas would be automatically admitted to any University of Texas institution—any one of their choice. All of a sudden, disadvantaged high schools that originally sent very few students to selective universities like the University of Texas, Austin—the state’s top public university— found that their top students were now automatically admitted to UT Austin. If they wanted to go, all the student had to do was apply. There was also outreach, to make students aware of the new admissions policy. The hope was that it would maintain racial diversity because the disadvantaged high schools were disproportionately minority.

It’s not obvious that the goal of maintaining diversity was realized, in part because even though a school may have a disproportionate number of minority students, its top 10 percent academically is often less racially diverse than the rest of the school. There is some debate about whether it maintained racial diversity.

What you do see, however, is that more students from these disadvantaged schools started to attend UT Austin. And students from the more advantaged high schools who were right below their school’s top 10 percent were now less likely to attend. So there’s substitution—for every student gaining admission, another loses. I think that is true in every admissions policy, but we don’t always consciously weigh these trade-offs. ...  Here, we’re trying to explicitly think about, and measure, these trade-offs. ... 

[W]e show that the students who attend UT Austin as a result of the TTP plan—who wouldn’t have attended UT Austin prior to the TTP plan—do better on a whole range of outcomes. They’re more likely to get a college degree. They earn higher salaries later on. It has a positive impact on them.

But what was really interesting is that the students who are pushed out—that’s how we referred to them—didn’t really suffer as a result of the policy. These students would probably have attended UT Austin before the TTP plan. But now, because they were not in the top 10 percent [of their traditional “feeder” school], they got pushed out of the top Texas schools like UT Austin. We see that those students attend a slightly less prestigious college, in the sense that they’re not going to UT Austin, the flagship university. But they’ll go to another four-year college, and they’re really not hurt. They’re still graduating, and they’re getting similar earnings after college.

So the students who weren’t attending college before [because they didn’t attend a traditional feeder school] now are, and they’re benefiting from that in terms of graduation rates and income, while the ones who lose out by not going to Texas’ top university aren’t really hurt that much. It seems like a win-win.

Back in 2015, Black spent some time at the White House Council of Economic Advisers. Here's one of her reflections on that time:  

[W]hich job do I prefer: adviser or academic? That’s easy to answer: being a professor. I like thinking about things for long periods of time, and it was quite the opposite when I was in D.C. There, I was scheduled every 15 minutes. Each meeting would cover a different topic, and I had to be ready to be an expert on A, then an expert on B, and then an expert on C.

It is the antithesis of being an academic, and it’s a skill that I think a lot of academics don’t naturally have, me included. It was a really hard transition from academia to the policy world. Coming back to academia was hard too. I noticed that my attention span had become so much shorter. It took six months, at least, before I could sit and read a whole paper and just think about that paper. Being at the CEA was a very different experience. I really enjoyed it, but I was happy to come back to academia.

Wednesday, October 21, 2020

The Google Antitrust Case and Echoes of Microsoft

The US Department  of Justice has filed an antitrust case against Google. The DoJ press release is here;  the actual complaint filed with the US District Court for the District of Columbia is here. Major antitrust cases often take years to litigate and resolve, so there will be plenty of time to dig into the details as they emerge. Here, I want to reflect back on the previous major antitrust case in the tech sector, the antitrust case against Microsoft that was resolved back in 2001. 

For both cases, the key starting point is to remember that in US antitrust law, being big and having a large market share is not a crime. Instead, the possibility of a crime emerges when a company with a large market share leverages that market share in a way that helps to entrench its own position and block potential competition. Thus, the antitrust case digs down into specific contractual details.

In the Microsoft antitrust case, for example, the specific legal question was not whether Microsoft was big (it was), or whether it dominated the market for computer operating systems (it did). The legal question was whether Microsoft was using its contracts with personal computer manufacturers in a way that excluded other potential competitors. In particular, Microsoft signed contracts requiring that computer makers license and install Microsoft's Internet Explorer browser as a condition of having a license to install the Windows 95 operating system. Microsoft had expressed fears in internal memos that alternative browsers like Netscape Navigator might become the fundamental basis for how computers and software interacted in the future. From the perspective of antitrust regulators, Microsoft's efforts to use contracts as a way of linking together its operating system and its browser seemed like anticompetitive behavior. (For an overview of the issues in the Microsoft case, a useful starting point is a three-paper symposium back in the Spring 2001 issue of the Journal of Economic Perspectives.)

After several judicial decisions went against Microsoft, the case was resolved with a consent agreement in November 2001. Microsoft agreed to stop linking its operating system and its web browser. It agreed to share some of its coding so that it was easier for competitors to produce software that would connect to Microsoft products. Microsoft also agreed to an independent oversight board that would oversee its actions for potentially anticompetitive behavior for five years. 

As we look back on that Microsoft settlement today, it's worth noting that losing the antitrust case in the courts and being pressured into a consent agreement certainly did not destroy Microsoft. The firm was not broken up into separate firms. In 2020, Microsoft ranks either #1 or very near the top of all US companies as measured by the total value of its stock. 

Looking again at the antitrust case against Google, the claims are focused on specific contractual details. For example, here's how the Department of Justice listed the issues in its press release: 

As alleged in the Complaint, Google has entered into a series of exclusionary agreements that collectively lock up the primary avenues through which users access search engines, and thus the internet, by requiring that Google be set as the preset default general search engine on billions of mobile devices and computers worldwide and, in many cases, prohibiting preinstallation of a competitor. In particular, the Complaint alleges that Google has unlawfully maintained monopolies in search and search advertising by:
  • Entering into exclusivity agreements that forbid preinstallation of any competing search service.
  • Entering into tying and other arrangements that force preinstallation of its search applications in prime locations on mobile devices and make them undeletable, regardless of consumer preference.
  • Entering into long-term agreements with Apple that require Google to be the default – and de facto exclusive – general search engine on Apple’s popular Safari browser and other Apple search tools.
  • Generally using monopoly profits to buy preferential treatment for its search engine on devices, web browsers, and other search access points, creating a continuous and self-reinforcing cycle of monopolization.
As noted earlier, I expect these allegations will result in years of litigation. But I also strongly suspect that even if Google eventually loses in court and signs a consent agreement, it ultimately won't injure Google much or at all as a company, nor will it make a lot of difference in the short- or the medium-term to the market for online searches. If this is the ultimate outcome, I'm not sure it's a bad thing. After all, what are we really talking about in  this case? As Preston McAfee has pointed out, "First, let's be clear about what Facebook and Google monopolize: digital advertising. The accurate phrase is `exercise market power,' rather than monopolize, but life is short. Both companies give away their consumer product; the product they sell is advertising. While digital advertising is probably a market for antitrust purposes, it is not in the top 10 social issues we face and possibly not in the top thousand. Indeed, insofar as advertising is bad for consumers, monopolization, by increasing the price of advertising, does a social good." 

Ultimately, it seems to me as if the most important outcomes of these big-tech antitrust cases may not be about the details of contractual tying. Instead, the important outcome is that the company is put on notice that it is being closely watched for anticompetitive behavior, it has been judged legally guilty of such behavior, and it needs to back away from anything resembling such behavior moving forward.  

Looking back at the aftermath of the Microsoft case, for example, some commenters have suggested that it caused Microsoft to back away from buying other upstart tech companies--like buying Google and Facebook when they were young firms. A common complaint against the FAANG companies--Facebook, Apple, Amazon, Netflix, and Google--is that they are buying up companies that could have turned into their future competitors. A recent report from the House Judiciary Committee ("Investigation of Competition in Digital Markets") points out that "since 1998, Amazon, Apple, Facebook, and Google collectively have purchased more than 500 companies. The antitrust agencies did not block a single acquisition. In one instance—Google’s purchase of ITA—the Justice Department required Google to agree to certain terms in a consent decree before proceeding with the transaction."

It's plausible to me that the kinds of contracts Google has been signing with Apple or other firms are a kind of anticompetitive behavior that deserves attention from the antitrust authorities. But the big-picture question here is about the forces that govern overall competition in these digital markets, and one major concern, it seems to me, is that the big tech fish are protecting their dominant positions by buying up the little tech fish, before the little ones have a chance to grow up and become challengers for market share. 

Mark A. Lemley and Andrew McCreary offer a strong statement of this view in their paper "Exit Strategy" (Stanford Law and Economics Olin Working Paper #542, last revised January 30, 2020).  They write (footnotes omitted): 

There are many reasons tech markets feature dominant firms, from lead-time advantages to branding to network effects that drive customers to the most popular sites. But traditionally those markets have been disciplined by so-called Schumpeterian competition — competition to displace the current incumbent and become the next dominant firm. Schumpeterian competition involves leapfrogging by successive generations of technology. Nintendo replaces Atari as the leading game console manufacturer, then Sega replaces Nintendo, then Sony replaces Sega, then Microsoft replaces Sony, then Sony returns to displace Microsoft. And so on. One of the biggest puzzles of the modern tech industry is why Schumpeterian competition seems to have disappeared in large swaths of the tech industry. Despite the vaunted speed of technological change, Apple, Amazon, Google, Microsoft, and Netflix are all more than 20 years old. Even the baby of the dominant firms, Facebook, is over 15 years old. Where is the next Google, the next Amazon, the next Facebook?
Their answer is that the "exit strategy" for the hottest up-and-coming tech firms isn't to do a stock offering, remain an independent company, and keep building the firm until perhaps it will challenge one of the existing tech Goliaths. Instead, the "exit strategy," often driven by venture capital firms, is for the new firms to sell themselves to the existing firms. 

This particular antitrust case against Google's allegedly anticompetitive behavior in the search engine market is surely just one of the cases Google will face in the future, both in the US and around the world. The attentive reader will have noticed that nothing in the current complaint is about broader topics like how Google collects or makes use of information on consumers. There's nothing about how Google might or might not be manipulating the search algorithms to provide an advantage to Google-related products: for example, there have been claims that if you try to search Google for websites that do their own searches and price comparisons, those websites may be hard to find. There are also questions about whether or how Google manipulates its search results for partisan political purposes. 

As I look back at the Microsoft case, my suspicion is that the biggest part of the outcome was that when Microsoft was under the antitrust microscope, other companies that eventually became its big-tech competitors had a chance to grow and flourish on their own. With Google, the big issue isn't really about details of specific contractual agreements relating to its search engine, but whether Google and the other giants of the digital economy are leaving sufficient room for their future competitors. 

For more on antitrust and the big tech companies, some previous posts include:

Tuesday, October 20, 2020

Will Vote-by-Mail Affect the Election Outcome?

For the 2020 election, the United States will rely more heavily on vote-by-mail than ever before. Is it likely to affect the outcome? Andrew Hall discusses some of the evidence in "How does vote-by-mail change American elections?" (Policy Brief, October 2020, Stanford Institute for Economic  Policy Research).

There are several categories of vote-by-mail. The mild traditional approach was the absentee ballot, used by people who knew in advance that they wouldn't be able to make it to the polls in person on Election Day for some specific reason (like being an out-of-state college student or deployed out-of-state in the military). Over time, this has evolved in many states into "no excuses" absentee voting, where anyone can request an absentee ballot for pretty much any reason. 

Perhaps the most aggressive version is universal vote-by-mail, where the state mails a ballot to every registered voter. The voter can then vote by mail, bring the mailed ballot in person to vote, or ignore the mailed ballot and just vote in-person on Election Day. Hall notes: "Prior to 2020, only Colorado, Hawaii, Oregon, Utah, and Washington employed universal vote-by-mail, while California was in the process of phasing it in across counties. In response to COVID-19, three more states, Nevada, New Jersey, and Vermont, along with the District of Columbia, have implemented the policy, while California accelerated its ongoing implementation. Montana has also begun to phase in the practice."

In 2020, most states are experimenting with something in-between: not quite universal vote-by-mail (in most states), but often more encouragement for vote-by-mail than had been common in the previous situation of no-excuses absentee voting. Thus, thinking about what will happen in 2020 requires looking back at earlier experience. 

For example, the universal mail-in states often phase in the process a few randomly chosen counties at a time. Thus, social scientists can compare, in the same election, how voting behavior changed when mail-in voting first arrived. Hall writes: 

In our first study, published recently in the Proceedings of the National Academy of Sciences, we examined historical data from California, Utah, and Washington, where universal vote-by-mail was phased in over time, county by county ... We found that, in pre-COVID times, switching to universal vote-by-mail had only modest effects on turnout, increasing overall rates of turnout by approximately two percentage points. Because universal vote-by-mail has such modest effects on overall turnout, it’s not surprising that we also found that it conveyed no meaningful advantage for the Democratic Party. When counties switched to universal vote-by-mail, the Democratic share of turnout did not increase appreciably, and neither did the vote shares of Democratic candidates. Our largest estimate suggests that universal vote-by-mail could increase Democratic vote share by 0.7 percentage points---enough to swing a very close election, to be sure, but a very small advantage in most electoral contexts, and a much smaller effect than recent rhetoric might suggest.
Of course, this evidence is about a move to universal mail-in voting, and what is actually happening in most states is more like a dramatic expansion of no-excuse-needed absentee balloting. However, I confess that I am less sanguine than Hall about a swing of "only" 0.7 percentage points. If the presidency or control of the US Senate comes down to a few key, close-run states, that amount may represent the margin of victory. Also, this pre-COVID evidence may underestimate the partisan difference in 2020, given that there is some survey evidence from April and June suggesting that Democrats are more enthused about mail-in voting than Republicans. But what has seemed to happen in other states is that while Democrats are more likely to vote by mail, overall turnout and voting margins are not much affected. 

As another piece of evidence, Hall discusses the Texas run-off primary on June 14. For research purposes, it's useful that this vote happened when the pandemic was already underway. Also, it's useful that in this election, only those 65 and over could vote by mail with no reason needed. Thus, one can compare the voting patterns of those just under 65 and just over 65, and ask whether, among voters who were close in age but faced different rules for mail-in voting, the pandemic changed the patterns. For example, would the 64-year-olds, who did not have easy access to a mail-in ballot, vote less? The short answer is that the gap between 64- and 65-year-old voters did not change. 
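The logic of that comparison is a difference-in-differences around the age-65 cutoff. Here is a toy sketch with hypothetical turnout numbers (the real figures are in Hall's work, not here):

```python
# A hedged sketch of the age-65 comparison with hypothetical turnout rates
# (not the actual Texas figures). The question: did the turnout gap between
# 65-year-olds (eligible for no-excuse mail voting) and 64-year-olds (not
# eligible) widen once the pandemic hit?
turnout = {
    # (age_group, election): hypothetical turnout share
    ("age_65", "pre_pandemic"): 0.32,
    ("age_64", "pre_pandemic"): 0.30,
    ("age_65", "pandemic"):     0.34,
    ("age_64", "pandemic"):     0.32,
}

gap_pre = turnout[("age_65", "pre_pandemic")] - turnout[("age_64", "pre_pandemic")]
gap_pandemic = turnout[("age_65", "pandemic")] - turnout[("age_64", "pandemic")]
did = gap_pandemic - gap_pre
print(f"Pre-pandemic gap: {gap_pre:.3f}, pandemic gap: {gap_pandemic:.3f}, DiD: {did:.3f}")
# Hall's finding is that this difference-in-differences is essentially zero:
# lacking easy mail access did not depress 64-year-olds' turnout during COVID.
```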

I'll admit here at the bottom that although I've had to vote absentee a couple of times in my life, I'm not a big fan of vote-by-mail. I like the idea of most people voting at the same time, with the same information, and early mail-in voting raises the problem that if new news arrives and you want to change your vote, you're out of luck.  In addition, I'm a big fan of the secret ballot. No matter what you say to other people, when you are alone in that voting booth, you can choose who you want. Vote-by-mail will inevitably be a less private experience, where those who might wish to defy their family members or friends or those in their apartment building or their assisted care facility may find it just a little harder to do so. 

There are also security concerns about mail-in ballots being delivered and practical concerns about difficulties of validating them and counting them expeditiously. I'm confident that in at least one state in the 2020 election, probably a state with little previous experience in mail-in voting, the process is going to go wincingly wrong.  As Hall writes: "That being said, there are important November-specific factors our research cannot address. The most important issue concerns the logistics of vote-by-mail. Historically, mail-in ballots are rejected at higher rates than in-person votes. Capacity issues in the face of an enormous surge in voting by mail could drive these rejection rates higher. And if Democrats cast more mail-in ballots than Republicans, as looks extremely likely, these higher rejection rates could mean that vote-by-mail paradoxically hurts Democrats."

Of course, vote-by-mail is only one of the multiple differences across states in how voting occurs, including differences in voter registration, voter ID, recounts, and others. For an overview, see "Sketching State Laws on Administration of Elections" (September 26, 2016). 

Monday, October 19, 2020

The Ada Lovelace Controversies

Ada Lovelace (1815-1852) is generally credited with being the first computer programmer: specifically, after Charles Babbage wrote down the plans for his Analytical Engine (which Britannica calls "a general-purpose, fully program-controlled, automatic mechanical digital computer"), Lovelace wrote down a set of instructions that would allow the machine to calculate the "numbers of Bernoulli" (for discussion, see here and here). Suw Charman-Anderson gives an overview of the episode and some surrounding historical controversy in "Ada Lovelace: A Simple Solution to a Lengthy Controversy" (Patterns, October 9, 2020, volume 1, issue 7). 

The historical controversy is whether Lovelace really truly deserves credit for the program, or whether her contemporaries who gave her credit for doing so were just being chivalrous to a fault (and perhaps being generous to the only daughter of Lord Byron and his wife). For example: 

In a letter to Michael Faraday in 1843, Babbage referred to her as “that Enchantress who has thrown her magical spell around the most abstract of Sciences and has grasped it with a force which few masculine intellects (in our own country at least) could have exerted over it”. Sophia De Morgan, who had tutored the young Lovelace, and Michael Faraday himself were both impressed with her understanding of Babbage’s Analytical Engine. Augustus De Morgan, Sophia’s husband and another of Lovelace’s tutors, described her as having the potential, had she been a man, to become “an original mathematical investigator, perhaps of first-rate eminence” ...

Apparently, some modern writers have pored over what remains of the imprecisely dated correspondence between Lovelace and her tutor Augustus De Morgan, and decided that Lovelace didn't know enough math to have written the program. (Personally, I shudder to think of what judgments would be reached about my own capabilities if I were judged by the questions I sometimes felt the need to ask!) But Charman-Anderson makes a persuasive case that the whole controversy is based on a misdating of Lovelace's mathematical education in general and her correspondence with De Morgan in particular; that is, critics of Lovelace were mistakenly treating early questions she asked her tutor as if they were questions asked several years later. 

For me, the more interesting point that Charman-Anderson makes is to emphasize that writing a computer program was its own conceptual breakthrough. There had long been mechanical computing machines, where you plugged in a problem and it spit out an answer. But the breakthrough from Lovelace was to see that Babbage's Analytical Engine could be viewed as carrying out a set of rules for working out new results; indeed, Lovelace hypothesized that such a machine could write music based on a set of rules. Charman-Anderson writes (quotations in the first paragraph from Lovelace's 1843 notes, footnotes omitted): 

Although Lovelace was the first person to publish a computer program, that wasn’t her most impressive accomplishment. Babbage had written snippets of programs before, and while Lovelace’s was more elaborate and more complete, her true breakthrough was recognizing that any machine capable of manipulating numbers could also manipulate symbols. Thus, she realized, the Analytical Engine had the capacity to calculate results that had not “been worked out by human head and hands first,” separating it from the “mere calculating machines” that came before, such as Babbage’s earlier Difference Engine. Such a machine could, for example, create music of “any degree of complexity or extent”, if only it were possible to reduce the “science of harmony and of musical composition” to a set of rules and variables that could be programmed into the machine. ...

While calculating devices have a long history, the idea that a machine might be able to create music or graphics was contrary to all experience and expectation. Lovelace and her peers would have been familiar with the artifice of the automaton, clockwork machines which looked and acted like humans or animals but were driven by complex arrangements of cams and levers. And indeed, Babbage is said to have owned one called the Silver Lady, which could “bow and put up her eyeglass at intervals, as if to passing acquaintances”. But the Analytical Engine would have been in a category all its own.

One of the biggest leaps the human mind can make is extrapolating from current capabilities to future possibilities. The “art of the possible”, as it has been called, is an essential skill for innovators and entrepreneurs, but envisioning an entirely new class of machine is something for which few people have the capacity. Babbage’s design for the Analytical Engine was astounding, but none of his peers seemed to truly grasp its meaning. None except Lovelace.

Saturday, October 17, 2020

Interview with Gary Hoover: Economics and Discrimination

The Southwest Economy publication of the Federal Reserve Bank of Dallas has published "A Conversation with Gary Hoover" (Third Quarter 2020, pp. 7-9). Here are some of Hoover's comments: 

On his own career path: 

Although I have been successful in economics, it has not come without some amount of psychological trauma. When I arrived at the University of Alabama in 1998, the economics department had never hired a Black faculty member. Sadly, that is still the case at more economics departments than not. I would not call those initial years hostile, but they were not inviting either.

I stuck to my plan, which was to publish articles to the best of my ability and teach good classes. The pressures were there to mentor Black students, serve on countless committees to “diversify” things and be a role model. I took on the extra tasks but never lost track of my goal. I saw so many of my Black counterparts fall into the trap. They had outsized service burdens compared to their peers, which they took on with the encouragement of the administration. However, when promotion and tenure evaluation time arrived, they were dismissed for not “meeting the high standards of the unit.”
On labor market impediments for Black workers:
The impediments begin for Blacks seeking employment from the very outset. Some research has shown that non-Black job applicants of equal ability receive 50 percent more callbacks than Blacks. To further amplify on the issue, some research has shown that Black males without criminal records receive the same rate of callbacks for interviews as white males just released from prison when applying for employment in the low-wage job market.

With such handicaps existing from the start, it is no surprise that a wage gap exists. Some estimates show that gap to be as large as 28 percent on average and as large as 34 percent for those earning in the highest end (95th percentile) of the wage distribution. ...

Employers want workers who are trainable and present. Black workers, who have been poorly trained or suffer inferior health outcomes, will suffer disproportionately. In addition, the impacts of the criminal justice system cannot be overlooked. Some recent research has shown that for the birth cohort born between 1980 and 1984, the likelihood of incarceration transition for Blacks was 2.4 times greater than for their white counterparts. Given this outsized risk of incarceration, the prospects of long-term unemployment are dramatically increased.
On whether "the economy will evolve quickly enough to ensure the success and prosperity of minority groups":
I think that I must be optimistic about the future. What employers are yet to realize, but will have to come to grips with, is that successful market outcomes for minority groups mean success for them also. By that I mean, this is not a zero-sum game where one group will only improve at the expense of the other. In fact, history has shown us the opposite. Once minorities are fully utilized and integrated in the labor force, the economy as a whole will enjoy a different type of prosperity than has ever been experienced in the U.S. Once again, we must remember the introductory idea we teach to our college freshmen about the circular flow of the economy in that those fully engaged minority employees become fully engaged consumers.
For more on Hoover's thoughts about racial and ethnic diversity in the economic profession, a useful starting point is his co-authored article in the Summer 2020 issue of JEP, written with Amanda Bayer and Ebonya Washington. "How You Can Work to Increase the Presence and Improve the Experience of Black, Latinx, and Native American People in the Economics Profession" (Journal of Economic Perspectives, 34: 3, pp. 193-219).

For an overview of how economists seek to understand discrimination in theoretical and empirical terms, and how the views of economists differ from those of sociologists, a useful starting point is the two-paper symposium on "Perspectives on Racial Discrimination" in the Spring 2020 issue of JEP: 


Friday, October 16, 2020

COVID-19 Risks by Age

It seems well-known that the health risks of COVID-19 are larger for the elderly. But how much larger? And what is the trajectory of risk across age?  Andrew T. Levin, William P. Hanage, Nana Owusu-Boaitey, Kensington B. Cochran, Seamus P. Walsh, and Gideon Meyerowitz-Katz provide a set of estimates in "Assessing the Age Specificity of Infection Fatality Rates for COVID-19: Meta-Analysis & Public Policy Implications" (NBER Working Paper 27597, as revised October 2020, also available via medRxiv, which is a "preprint server for the health sciences"). 

Also, Andrew Levin is the genial and informative talking head in a 15-minute video discussing the main approach and results. 

As the title implies, the paper is an effort to pull together evidence on the health effects of COVID-19 by age from a variety of sources. Two figures in particular caught my eye. This figure shows the "infection fatality rate"--that is, the ratio of fatalities to total infections. The different kinds of dots on the figure show results from different kinds of studies. The red line is their central estimate, which is surrounded by estimates of the uncertainty involved. 

As the authors write: "Evidently, the SARS-CoV-2 virus poses a substantial mortality risk for middle-aged adults and even higher risks for elderly people: The IFR is very low for children and young adults but rises to 0·4% at age 55, 1·3% at age 65, 4·2% at age 75, 14% at age 85, and exceeds 25% for ages 90 and above." 
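As a quick back-of-the-envelope exercise (mine, not the authors'), the quoted figures imply a strikingly regular age gradient. The short sketch below simply takes the IFR numbers quoted above and computes how fast the risk grows with age; it is an illustrative calculation, not anything from the paper itself.

```python
# Illustrative calculation using the IFR estimates quoted above
# (not the authors' own code): how fast does the infection fatality
# rate grow with age?

ifr_by_age = {55: 0.004, 65: 0.013, 75: 0.042, 85: 0.14}

ages = sorted(ifr_by_age)
for younger, older in zip(ages, ages[1:]):
    ratio = ifr_by_age[older] / ifr_by_age[younger]
    print(f"IFR at age {older} is {ratio:.1f}x the IFR at age {younger}")

# Implied average growth rate per year of age, assuming roughly
# exponential growth between ages 55 and 85
annual_growth = (ifr_by_age[85] / ifr_by_age[55]) ** (1 / 30) - 1
print(f"Implied growth in IFR per year of age: about {annual_growth:.0%}")
```

On these figures, the IFR roughly triples with each additional decade of age, which works out to growth of something like 12-13 percent per year of age.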

The COVID-19 risk for the elderly is clearly substantial. But how does one think about the risk for those, say, in the 45-65 age bracket? Their COVID-19 risk is clearly lower than for the 85-year-olds. But how does their COVID-19 risk compare with other everyday risks? In his talk, Levin offers a comparison with the risks of death from an automobile crash by age. 

One wouldn't want to pretend that this comparison is literally apples-to-apples. For example, the risks of driving are somewhat under the control of the driver, while the risk of dying after being infected by COVID-19 is not. In addition, this is comparing the risks of dying after being infected, which applies to only a subset of the population, with the overall risk of driving for the entire population. 

However, the comparison nonetheless seems quite useful to me, in the sense that many of us accept that driving a car has some risk, but it's a risk we take almost every day without excessive concern. Thus, seeing that for the average person under age 34, the COVID infection fatality rate is below the auto fatality rate gives a sense that for that age group taken as a whole (and of course with exceptions for a small number of people with certain pre-existing conditions), the personal risk of COVID-19 shouldn't bother them much. 

Interpreting the risks of those in the age brackets from, say, 35-64 is a little trickier. The COVID-19 risk numbers for these age brackets do not look especially high in absolute terms, certainly not as compared to the risks for the 85+ group. But from another perspective, for the 45-54 group, the COVID-19 risk is something like 16 times the auto fatality risk; for the 55-64 group, the COVID-19 risk is more than 54 times the auto fatality risk. 

Most people, myself included, are not good at thinking about these kinds of small risks. If I take a risk that I think of as negligible, and multiply it by 16, does "16 x negligible" equal something I should worry about? Maybe "16 x negligible" is like the risk of driving home in the dark on a snowy day, which is a risk I think about, but not one that stops me from driving home. 

What about "54 x negligible" for the 55-64 age group, of which I have the honor to be a member? Is that enough to do more than raise my eyebrows? For my age group, the risk of dying if I got COVID-19 is 0.7%, which is like saying 1 chance in 143. There are a lot of contexts where I wouldn't pay much attention to 1 chance in 143. But if it's life and death, I'm willing to take some steps to reduce the risk of that outcome. 
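For readers who like to see the arithmetic spelled out, here is the conversion behind the "1 chance in 143" figure, plus a toy illustration of what multiplying a small baseline risk by 16 or 54 does. The 1-in-10,000 baseline below is invented purely for illustration; the 0.7% figure and the 16x and 54x multiples are the ones quoted above.

```python
# Illustrative arithmetic for the risk comparisons above. The 0.7% IFR
# and the 16x / 54x multiples are from the post; the 1-in-10,000
# baseline risk is a hypothetical number chosen only for illustration.

def one_in(probability):
    """Express a probability as '1 chance in N'."""
    return 1 / probability

ifr_55_64 = 0.007    # infection fatality rate quoted above for ages 55-64
print(f"0.7% is about 1 chance in {one_in(ifr_55_64):.0f}")   # ~ 1 in 143

# If a baseline risk feels negligible, what does multiplying it by 16 or 54 do?
baseline = 1 / 10_000    # hypothetical "negligible" annual risk
for multiple in (16, 54):
    scaled = baseline * multiple
    print(f"{multiple} x (1 in 10,000) = 1 in {one_in(scaled):,.0f}")
```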

There are certain risks I don't take while driving, like driving with alcohol in my system. Granted, I avoid driving under the influence not so much because I fear I will kill myself, but because I fear accidents and, even worse, harming someone else. But if the COVID-19 danger to me is in some way comparable to driving while intoxicated, then consistency in thinking about risks suggests that I should make efforts to avoid being exposed to the disease--and also to avoid being a carrier to my wife or any other above-age-35 people with whom my life intersects. 

To put it another way, many of us adjust our behavior in a variety of ways to reduce moderate health risks, like wearing a helmet while bicycling, or not driving in an unsafe manner, or throwing away food that seems to have spoiled in the refrigerator. The reductions in risk from these behaviors may not be large in absolute terms, but they feel worth taking. In a similar sense, the health risks of COVID-19 for those in the 35-64 age group are probably not exceptionally high in absolute terms, but for many of us who act to reduce other risks in our lives, the COVID-19 risks are also high enough to justify efforts that will reduce those risks. 

Of course, these sorts of comparisons are about averages, not about individuals who will be above- or below-average in various risks. But general public health guidance needs to be aimed at averages. 

Thursday, October 15, 2020

Will Services Trade Lead the Future for US Exports?

At least for a time, one legacy of the pandemic is likely to be a decrease in physical connections around the world economy, from tourism and business travel to shipping objects. But much international trade in services is delivered online. For the US, trade in services has become a bigger part of the overall trade picture, and the pandemic may give it an additional boost. Alexander Monge-Naranjo and Qiuhan Sun provide some background in "Will Tech Improvements for Trading Services Switch the U.S. into a Net Exporter?" (Regional Economist, Federal Reserve Bank of St. Louis, Fourth Quarter 2020). 

The authors point out that shifts in transportation routes or shipping methods like containerization have had large effects on international trade in the past. They write: 

The U.S. is a world leader in most high-skilled professional service sectors, such as health, finance and many sectors of research and development. Moreover, leading American producers have been ahead of others in the adoption of ICT in their production networks. The global diffusion of ICT—including possibly the expansion of 5G networks—is prone to make many of these services tradeable for servicing households and businesses....  Similarly, the day-to-day activities of many businesses all involve tasks that can be automated and/or performed remotely and, of course, across national boundaries. Thus, a natural prediction would be that the U.S. should become a net exporter of high-skilled, knowledge-intensive professional services because of its comparative advantage.
Here are some illustrations of the patterns already underway. This figure shows the US trade balance, separating out goods and services. The US trade deficit in goods widened sharply from the early 1990s up to about 2006--with an especially sharp deterioration after China entered the World Trade Organization in 2001 and China's global exports exploded in size. But notice that US trade in services has consistently run a surplus over this time, and the services trade surplus has been rising in recent years. 

Indeed, the long-run pattern seems to be that for the US economy, services have stayed about the same proportion of total imports in recent decades, but have become a rising proportion of total exports. 

Some of the big areas of gains for US services exports have been information technology and telecommunications services, insurance and financial services, and other business services (which includes areas like "professional and management consulting, technical services, and research and development services"). 

Monge-Naranjo and Sun don't actually make a case that a rise in services exports could be enough to turn the overall US trade deficit into a surplus; in that sense, the title of their short article overstates their case. But they do show that trade in services is not only a large and rising part of US exports, but may be the part of US economic output with the biggest upside for expanding US exports in the future. 

Supporting this potential for rising US exports in services requires a different set of public-sector actions. It's not about better transportation systems for physical goods, but rather about faster and more reliable virtual connections across the US and to other places around the world. A substantial and ongoing improvement in this virtual infrastructure also seems potentially quite important for the US economy as it adapts to a new reality of online meetings, online healthcare, online education, online retail, online work-from-home, and more. The US economy isn't going to move back to its manufacturing-dominant days of several decades ago, and at least in the medium-term, it probably isn't going to move back to the social-clustering times way back in January 2020, either.   

In addition, there is "A Fundamental Shift in the Nature of Trade Agreements," as I called it in a post a few years ago, where the emphasis is less about tariffs and import quotas, and more about negotiating the legal and regulatory frameworks to open up foreign markets for US exporters of services. The kinds of trade agreements needed to facilitate, say, US insurance companies operating overseas are quite different from trade agreements about tariffs on goods like steel.  

Wednesday, October 14, 2020

Are We Staying at Home By Choice or Because of Government Rules?

If the government removed all rules about social distancing, limited capacity, and mask-wearing in restaurants, stores, workplaces, entertainment venues from theaters to sports, churches, and other places, would you go back? How people answer that question matters for answering a bunch of other questions. 

For example, have people been taking these kinds of precautions more because of government restrictions, or because of their own private concerns about health conditions? If government removed the restrictions, how much would people's behavior actually change? If many people are unlikely to change their avoidance behavior for a sustained period of time, then a full economic recovery from the effects of the recession will be delayed. Moreover, the shape of that economic recovery may require a permanent reallocation of jobs from some sectors to others. 

In the October 2020 World Economic Outlook report from the IMF, Chapter 2 ("Dissecting the Economic Impact") has a discussion of government lockdowns vs. people's voluntary behavior in an international context. The authors write: 

This chapter’s first goal is to shed light on the extent to which the economic contraction was driven by the adoption of government lockdowns instead of by people voluntarily reducing social interactions for fear of contracting or spreading the virus. ... If lockdowns were largely responsible for the economic contraction, it would be reasonable to expect a quick economic rebound when they are lifted. But if voluntary social distancing played a predominant role, then economic activity would likely remain subdued until health risks recede. ...

Regression results show that lockdowns have a considerable negative effect on economic activity. Nonetheless, voluntary social distancing in response to rising COVID-19 infections can also have strong detrimental effects on the economy. In fact, the analysis suggests that lockdowns and voluntary social distancing played a near comparable role in driving the economic recession. The contribution of voluntary distancing in reducing mobility was stronger in advanced economies, where people can work from home more easily and sustain periods of temporary unemployment because of personal savings and government benefits. 

(For the record, when talking about government lockdowns: "The analysis uses a lockdown stringency index that averages several subindicators—school closures, workplace closures, cancellations of public events, restrictions on gatherings, public transportation closures, stay-at-home requirements, restrictions on internal movement, and controls on international travel—provided by the University of Oxford’s Coronavirus Government Response Tracker.")
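For intuition, a stringency index of this general kind can be thought of as an average of rescaled policy subindicators. The sketch below is a minimal illustration of that idea only; it is not the Oxford tracker's actual scoring rules, which handle ordinal scales and targeted-versus-nationwide measures in a more specific way, and the indicator names and scales here are chosen just for the example.

```python
# Minimal sketch of a lockdown-stringency-style index: average several
# policy subindicators after rescaling each one to a 0-100 scale.
# Illustrative only; not the Oxford Coronavirus Government Response
# Tracker's actual methodology.

def stringency_index(scores, max_levels):
    """scores: dict of indicator -> ordinal policy level (0 = no measure).
    max_levels: dict of indicator -> maximum possible level."""
    rescaled = [100 * scores[k] / max_levels[k] for k in scores]
    return sum(rescaled) / len(rescaled)

example_scores = {
    "school_closures": 2,        # hypothetical 0-3 scale
    "workplace_closures": 1,     # hypothetical 0-3 scale
    "stay_at_home": 1,           # hypothetical 0-3 scale
    "international_travel": 3,   # hypothetical 0-4 scale
}
example_max = {"school_closures": 3, "workplace_closures": 3,
               "stay_at_home": 3, "international_travel": 4}

print(f"Index: {stringency_index(example_scores, example_max):.1f} out of 100")
```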

There's a lot of ongoing research on the subject of lockdowns and personal choices, and it would be unwise to treat any one study as the last word. That said, one study of the US experience that caught my eye is by Austan Goolsbee and Chad Syverson, "Fear, Lockdown, and Diversion: Comparing Drivers of Pandemic Economic Decline 2020" (Becker Friedman Institute Working Paper, June 18, 2020). From their abstract: 

This paper examines the drivers of the economic slowdown using cellular phone records data on customer visits to more than 2.25 million individual businesses across 110 different industries. Comparing consumer behavior over the crisis within the same commuting zones but across state and county boundaries with different policy regimes suggests that legal shutdown orders account for only a modest share of the massive changes to consumer behavior ... While overall consumer traffic fell by 60 percentage points, legal restrictions explain only 7 percentage points of this. Individual choices were far more important and seem tied to fears of infection. Traffic started dropping before the legal orders were in place; was highly influenced by the number of COVID deaths reported in the county; and showed a clear shift by consumers away from busier, more crowded stores toward smaller, less busy stores in the same industry. States that repealed their shutdown orders saw symmetric, modest recoveries in activity, further supporting the small estimated effect of policy. Although the shutdown orders had little aggregate impact, they did have a significant effect in reallocating consumer activity away from “nonessential” to “essential” businesses and from restaurants and bars toward groceries and other food sellers.
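The general flavor of that research design is to compare business visits on either side of a policy boundary within the same local economy, while controlling for how severe the local outbreak is. The sketch below is only a stylized illustration of that kind of comparison; the variable names and data file are hypothetical, and it is not Goolsbee and Syverson's actual specification.

```python
# Stylized sketch of a border-discontinuity comparison: within each
# commuting zone and week, compare business visits in counties with and
# without a legal shutdown order, controlling for local COVID deaths.
# Variable names and data are hypothetical, not the authors' specification.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed input: one row per county-week with columns
#   visits (log consumer visits), shutdown_order (0/1),
#   covid_deaths (log of 1 + county deaths), commuting_zone, week
df = pd.read_csv("county_week_visits.csv")  # hypothetical file

# Commuting-zone-by-week fixed effects absorb local conditions common
# to both sides of the policy boundary in a given week.
model = smf.ols(
    "visits ~ shutdown_order + covid_deaths + C(commuting_zone):C(week)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["commuting_zone"]})

print(model.params[["shutdown_order", "covid_deaths"]])
```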
If personal voluntary choices are a big part or even a majority of the adjustment in the shifting patterns of hiring, work, shopping, entertainment, education, and health care--rather than government shutdowns--there are several implications looking ahead. Here are some thoughts from the IMF, based on its overview of the evidence: 

When looking at the recovery path ahead, the importance of voluntary social distancing as a contributing factor to the downturn suggests that lifting lockdowns is unlikely to rapidly bring economic activity back to potential if health risks remain. This is true especially if lockdowns are lifted when infections are still relatively high because, in those cases, the impact on mobility appears more modest. Further tempering the expectations of a quick economic rebound, the analysis documents that easing lockdowns tends to have a positive effect on mobility, but the impact is weaker than that of tightening lockdowns.
These findings suggest that economies will continue to operate below potential while health risks persist, even if lockdowns are lifted. Therefore, policymakers should be wary of removing policy support too quickly and consider ways to protect the most vulnerable and support economic activity consistent with social distancing. These may include measures to reduce contact intensity and make the workplace safer, for example by promoting contactless payments; facilitating a gradual reallocation of resources toward less-contact-intensive sectors; and enhancing work from home, for example, by improving internet connectivity and supporting investment in information technology.
The last point in particular seems worth emphasizing to me. Back in late March and early April, a common view of the pandemic was that it would be over in a few months. As one example of standard wisdom at that time, Ben Bernanke likened the economic effects of a pandemic and a lockdown to a severe snowstorm: that is, everything is disrupted for a time, but then returns to the previous normal. Thus, the early government response to the pandemic was focused on how to support income and job connections to employers for a few months. 

Of course, that view of pandemic-as-snowstorm is now outdated. It now appears that we may end up dealing with COVID-19 for the foreseeable future. From this viewpoint, supporting work and industry configurations as they existed in February 2020 is not a useful approach. Helping those whose lives have been upended by the pandemic is a worthy public policy goal, but thinking about how government can support and speed the economic adjustment to a new configuration may matter just as much. 

Just to be clear, the IMF argument does not claim that government lockdowns are "good" or "bad." Yes, lockdowns do have severe negative economic consequences. But if a lockdown stops the pandemic, then the medium-term economic results can easily be worth it. But as the IMF report says, "lockdowns are more effective in curbing infections if they are introduced early in the stage of a country’s epidemic. The analysis also suggests that lockdowns must be sufficiently stringent to reduce infections significantly." 

The widespread belief back in late March and April that the pandemic would be over by, say, July 1 was also a reason that the early steps against the pandemic were relatively mild. At that time, longer and more stringent lockdowns didn't seem worth it. We are still arguing up to the present about what kinds of COVID-19 tests can or should be available, and what kind of contact tracing and quarantining should happen when the results are positive. It may be that the key policy choice in a pandemic is whether or when to react very strongly in the first few months, in the hope of ending the pandemic at that point rather than needing to deal with it for years afterward. But like all strong preventive actions, such steps are likely to be unpopular when taken. Even worse from a political point of view, if the strong actions work, the bad outcomes they prevented will never actually be observed, and so the critics of such actions may never accept that they were needed. 

Tuesday, October 13, 2020

Politics and Attitudes Toward Vaccination

For many people, their willingness to be vaccinated apparently varies with whether they are a supporter of the president. For example, here are the results of a series of Gallup polls taken since July on the question: "If an FDA-approved vaccine to prevent COVID-19 was available right now at no cost, would you agree to be vaccinated?" Through July and August, Democrats (blue line) were far more likely to say "yes" than Republicans. But in September, the share of Dems saying "yes" fell sharply while the share of Repubs saying "yes" rose sharply. 

What changed? In late August, the Centers for Disease Control sent out a notice asking states to be ready to operate vaccine distribution centers by November. One might both think that this announcement was probably premature, with its timing determined by the political calendar, and also hope that it might possibly be a meaningful statement about progress on a vaccine. But politically, it was being spun as good news for the Trump administration. Thus, Dem willingness to take an FDA-approved vaccine at no cost dropped sharply, while Repub willingness correspondingly rose. 

One might suspect that this kind of connection between politics and willingness to get vaccinated is a unique result of the high level of partisanship around the 2020 election, but one would be wrong. Masha Krupenkin has published an article in Political Behavior (published online May 5, 2020), "Does Partisanship Affect Compliance with Government Recommendations?" She asks: 
Are partisans less likely to comply with government recommendations after their party loses the presidency? To answer this question, I combine survey and behavioral data to examine the effect of presidential co-partisanship on partisans’ willingness to vaccinate. Vaccination provides an especially fertile testing ground for my theory for three reasons. First, both Republican and Democratic administrations have recommended vaccination as a public health measure. This provides natural variation in control of government, while keeping the government recommendation constant. Second, there is significant survey and behavioral data on vaccine compliance. This allows me to test the effect of partisanship both on peoples’ beliefs about vaccination, and their actual vaccination behavior. Finally, vaccination provides a “hard test” of the hypothesis, since the consequences of non-compliance can adversely impact individuals’ health. If partisanship affects receptivity to vaccination, this finding has important implications for the acceptance of other government interventions that do not carry such high costs for non-compliance. 

For example, one of her pieces of evidence is to look at kindergarten vaccination rates across California from 2001-2015. It turns out that when President Obama took over from President Bush, vaccination rates in Republican-leaning areas declined while those in Democrat-leaning areas rose. 

Another piece of evidence looks at surveys, like the Gallup poll data above, on willingness to be vaccinated. 

I look at three cases of partisan vaccination gaps. One of these is the smallpox vaccine in 2003, during the Bush administration. The other two are the swine flu (H1N1) and measles vaccines in 2009 and 2015, both during the Obama administration. This allows me to test whether Democrats and Republicans switch their perceptions of vaccine safety depending on which party is in power. ... Republicans were more likely to believe that vaccines were safe under a Republican president (smallpox vaccine), and Democrats believed the opposite (H1N1, measles). Partisan survey responses to perceptions of vaccine safety seem to “flip” depending on the party of the president.

In short, when people are deciding whether to vaccinate themselves or their children, whether they identify with the party of the president in power is a substantial factor. To put it another way, we all like to think that we are independent and fact-based in our judgment, and surely in some cases we are, but in other cases we are just acting as partisan herds of independent minds.   

Vaccinations are not the only illustration of this behavior. As I've noted before, when the Trump administration took actions to limit international trade, support for free trade sharply increased among Democrats. As I wrote there: "But these survey results may also suggest that US opinions about trade are just not very deeply rooted, and are more expressions of transient emotions and political partisanship." 

Another example that crossed my line of sight recently involves a Gallup poll question asking what share of people report that either they themselves or a family member has put off health care in the last 12 months for a serious or somewhat serious condition because of cost. As the issue of health care reform heated up again when the Democratic primary process got underway in 2019, the share of Democrats reporting postponing care shot up.   

There is no obvious public policy change in 2019 that should have had a much bigger effect on D's than on R's in terms of health care costs. The Gallup report notes: "Whether these gaps are indicative of real differences in the severity of medical and financial problems faced by Democrats compared with Republicans or Democrats' greater propensity to perceive problems in these areas isn't entirely clear. But it's notable that the partisan gap on putting off care for serious medical treatment is currently the widest it's been in two decades."

For more evidence on how political and partisan identity shapes our perceptions of fact and our stated beliefs, Brendan Nyhan provides an overview of research in this area in "Facts and Myths about Misperceptions" (Journal of Economic Perspectives, Summer 2020, 34:3, pp. 220-36). My overview of Nyhan's article is here.

Monday, October 12, 2020

A Nobel Prize for Auction Theory: Paul Milgrom and Robert Wilson

Auctions are widely used throughout the economy. The big auction houses like Christie's and Sotheby's are well-known for selling famous art, and many people have either attended a live auction at a fund-raising event or a flea market or participated in an online auction at a site like eBay. But the behind-the-scenes uses of auctions are far more important. The right for online advertising to appear on your screen is sold in an auction format. When the US government borrows money by selling Treasury debt, it does so in an auction format. When electricity providers sign contracts to purchase electricity from electricity producers, they often use an auction format to do so. Some of the proposals for buying and selling permits to emit carbon, as a mechanism for the gradual reduction of carbon emissions, would auction off the right to emit carbon. 

One useful property of auctions is that in a number of settings they can discipline the public sector to make decisions based on economic values, rather than favoritism. For example, when a city wants to sign a contract with a company that will pick up the garbage from households, companies can submit bids--rather than having a city council choose the company run by someone's favorite uncle. When the US government wants to give companies the right to drill in certain areas for offshore oil, or wishes to allocate radio spectrum for use by phone companies, it can auction off the rights rather than handing them out to whatever company has the best behind-the-scenes lobbyists. In many countries, auctions are used when privatizing formerly government-owned companies.

But the bad thing about auctions is that, like all market mechanisms, they can go sideways and produce undesirable results in certain settings. The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2020--commonly known as the Nobel Prize in economics--was awarded to Paul R. Milgrom and Robert B. Wilson “for improvements to auction theory and inventions of new auction formats.” For some years now, the Nobel committee has also published a couple of useful reports with each award, one aimed at a popular audience and one with more econo-speak, jargon, and technical detail. I'll quote here from both reports: "Popular science background: The quest for the perfect auction" and "Scientific Background: Improvements to auction theory and inventions of new auction formats."

A useful starting point is to recognize that auctions can have a wide array of formats. Most people are used to the idea of an auction where an auctioneer presides over a room of people who call out bids, until no one is willing to call out a higher bid. But auctions don't need to work in that way. 

An "English auction" is one where the bids are ascending, until a highest bid is reached. A "Dutch auction"--which is commonly used to sell about 20 million fresh flowers per day--starts with a high bid and then declines, so that the first person to speak up wins. In an open-outcry auction, the bid are heard by everyone, but in a sealed-bid auction, the bids are private. Some auctions have only one round of bidding; others may eliminate some bidders after one round but proceed through multiple rounds. In "first-price" auctions, the winner pays what they bid; in "second-price" auctions, the winner instead pays whatever was bi by the runner up. 

In some auctions the value of what is being bid on is mostly a "private value" to the bidders (the Nobel committee suggests thinking about bidding on dinner with a Nobel economist as an example, but you may prefer to substitute a celebrity of your choice), but in other cases, like bidding on an offshore oil lease, the value of the object is at least to some extent a "common value," because any oil that is found will be sold at the global market price. In some auctions, the bidders may have detailed private information about what is being sold (say, in the case where a house is being sold but you are allowed to do your own inspection before bidding), while in other auctions the information about the object being auctioned may be mostly public. 

In short, there is no single perfect auction. Instead, thinking about how auctions work means considering, for any specific context, how the auction rules and format will play out in that situation, given what determines the value of the auctioned objects and what kind of information and uncertainty bidders might have. 

If the auction rules aren't set up appropriately, the results can go sideways. For some examples, Paul Klemperer wrote an article some years back on the subject of "What Really Matters in Auction Design." 
One of his examples was about what happened in 1991, when the UK used a process of sealed-bid auctions to see what company would be allowed to provide television services in certain areas. Klemperer writes: 
The 1991 U.K. sale of television franchises by a sealed-bid auction is a dramatic example. While the regions in the South and Southeast, Southwest, East, Wales and West, Northeast and Yorkshire all sold in the range of 9.36 to 15.88 pounds per head of population, the only—and therefore winning—bid for the Midlands region was made by the incumbent firm and was just one-twentieth of one penny (!) per head of population. Much the same happened in Scotland, where the only bidder for the Central region generously bid one-seventh of one penny per capita. What had happened was that bidders were required to provide very detailed region-specific programming plans. In each of these two regions, the only bidder figured out that no one else had developed such a plan.
Another problem arises if the bidders find a way to signal each other to hold prices down. In some cases, the bidders can use the bidding process itself to send messages. Here's an example from Klemperer: 
In a multilicense U.S. spectrum auction in 1996–1997, U.S. West was competing vigorously with McLeod for lot number 378: a license in Rochester, Minnesota. Although most bids in the auction had been in exact thousands of dollars, U.S. West bid $313,378 and $62,378 for two licenses in Iowa in which it had earlier shown no interest, overbidding McLeod, who had seemed to be the uncontested high bidder for these licenses. McLeod got the point that it was being punished for competing in Rochester and dropped out of that market. Since McLeod made subsequent higher bids on the Iowa licenses, the “punishment” bids cost U.S. West nothing (Cramton and Schwartz, 1999).
Notice that the bids from U.S. West ended in the number 378, which was the lot number where the company wanted McLeod to back off. 

Of course, concerns like these have obvious answers. For example, set a "reserve price" or a minimum price that needs to be bid for the object, so no one gets it for (nearly) free. Also, set a rule that all bids need to be in certain fixed amounts, and that increases in bids also need to be in fixed amounts. But making these points both raises practical questions of how this should be done, and also shows some ways in which the practical rules of auctions can matter a lot. 

A more subtle but well-known problem with auctions is called the "winner's curse." It was first documented in the context of bidding by companies for off-shore oil leases. An analysis of the bids, along with how much oil was later discovered in the area, found that the "winner" of these auctions was on average losing money. The reason is that each individual company was forming its own guess about how much oil was on the site. Naturally, some companies would be more optimistic than others, and the most over-optimistic company of all was likely to bid highest and "win" the auction. A problem is that once bidders in an auction become aware of the risk of the winner's curse, they may become very reluctant to bid, so that the bids stop representing the actual estimates of value. 
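A small simulation makes the mechanism clear. Suppose an oil tract has the same (unknown) value to every bidder, each bidder gets an unbiased but noisy estimate of that value, and everyone naively bids their own estimate. The winner is, on average, the bidder with the most optimistic estimate, and so tends to overpay. All of the numbers below are invented purely for illustration.

```python
# Monte Carlo illustration of the winner's curse in a common-value,
# first-price auction where bidders naively bid their own estimate.
# All numbers are invented purely for illustration.
import random

random.seed(0)
TRUE_VALUE = 100.0       # common value of the tract, unknown to bidders
NOISE_SD = 20.0          # each bidder's estimate is unbiased but noisy
N_BIDDERS = 8
N_AUCTIONS = 100_000

total_profit = 0.0
for _ in range(N_AUCTIONS):
    estimates = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(N_BIDDERS)]
    winning_bid = max(estimates)          # the most optimistic estimate wins
    total_profit += TRUE_VALUE - winning_bid

avg_profit = total_profit / N_AUCTIONS
print(f"Average profit of the 'winner': {avg_profit:.1f}")
# With 8 bidders and this much noise, the winner loses roughly 28 on
# average, even though every individual estimate was unbiased.
```

The only defense in this setting is to shade bids well below one's own estimate, which is one reason bidders who fear the winner's curse may become reluctant to bid at all.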

In professional sports, this kind of scenario often plays out when free agents try to encourage bidding among teams for their services. From the player point of view, it only takes one high-end bidder, a bidder who perhaps is ignoring the winner's curse, to get a great contract. But many teams may decide to avoid the risk of overpaying and the winner's curse by not bidding at all. 

There are various possible responses to a winner's curse in an auction format. One is to find ways for the bidders to collect more private information, so that they can be more confident in their bidding. Another is a "second-price" auction, where the winner pays the price of the second-highest bidder. This format provides some protection against the winner's curse: that is, everyone can feel free to bid as high as they would like, knowing that if their bid turns out to be way out of line, they will only have to pay the second-highest bid. If a second-price auction greatly reduces concerns about the winner's curse and leads to more aggressive bidding, it can (counterintuitively) end up raising more money than a first-price auction. 

The auctions that most people participate in are "private-value auctions," where the issue is just how much do you want it--because you are planning to use it rather than to resell it. In this setting, a live auctioneer tries to get people emotionally involved in how much they want something, and in this sense to get them to pay more than they had perhaps planned to pay beforehand. As Ambrose Bierce wrote in his Devil's Dictionary published back in 1906: "AUCTIONEER, n. The man who proclaims with a hammer that he has picked a pocket with his tongue."

But auctions for oil leases, spectrum rights, privatized companies, Treasury debt, and so on have some element of being "common value" auctions, where the value of what is being sold will be similar across potential buyers. As the Nobel committee writes: "Robert Wilson was the first to create a framework for the analysis of auctions with common values, and to describe how bidders behave in such circumstances. In three classic papers from the 1960s and 1970s, he described the optimal bidding strategy for a first-price auction when the true value is uncertain. Participants will bid lower than their best estimate of the value, to avoid making a bad deal and thus be afflicted by the winner’s curse. His analysis also shows that with greater uncertainty, bidders will be more cautious and the final price will be lower. Finally, Wilson shows that the problems caused by the winner’s curse are even greater when some bidders have better information than others. Those who are at an information disadvantage will then bid even lower or completely abstain from participating in the auction."

But when you think about it, many of these "common value" auctions actually have a mixture of private values as well. For example, consider bidding on an offshore oil lease. The value of any oil discovered may be a common value. But each individual company may have specific technology for discovering or extracting oil that works better in some situations than others. Some companies may also already be operating nearby, or have facilities nearby. In short, lots of real-world auctions are a mixture of private and common values. As the Nobel committee writes: 
In most auctions, the bidders have both private and common values. Suppose you are thinking about bidding in an auction for an apartment or a house; your willingness to pay then depends on your private value (how much you appreciate its condition, floor plan and location) and your estimate of the common value (how much you might be able to sell it for in the future). An energy company that bids on the right to extract natural gas is concerned with both the size of the gas reservoir (a common value) and the cost of extracting the gas (a private value, as the cost depends on the technology available to the company). A bank that bids for government bonds considers the future market interest rate (a common value) and the number of their customers who want to buy bonds (a private value). ... The person who finally cracked this nut was Paul Milgrom, in a handful of papers published around 1980. ... This particular result reflects a general principle: an auction format provides higher revenue the stronger the link between the bids and the bidders’ private information. Therefore, the seller has an interest in providing participants with as much information as possible about the object’s value before the bidding starts. For example, the seller of a house can expect a higher final price if the bidders have access to an (independent) expert valuation before bidding starts.
In addition, Milgrom has participated in setting up new kinds of auctions. When auctioning radio spectrum to telecommunications providers, for example, how much you are willing to bid for rights in one geographic area may be linked to whether you own the rights in an adjoining area. Thus, rather than auctioning off each geographic area separately--which can lead to problems of collusion between bidders--it makes sense to design a Simultaneous Multiple Round Auction, which starts with low prices and allows repeated bids across many areas, so that geographic patterns of ownership can evolve in a single process. There is also a Combinatorial Clock Auction, in which bidders might choose to bid on overall “packages” of frequencies, rather than bidding separately on each license. Milgrom also was a leading developer of the Incentive Auction, which the Nobel committee describes in this way:
The resulting new Incentive auction was adopted by the FCC in 2017. This design combines two separate but interdependent auctions. The first is a reverse auction that determines a price at which the remaining over-the-air broadcasters voluntarily relinquish their existing spectrum-usage rights. The second is a forward auction of the freed-up spectrum. In 2017, the reverse auction removed 14 channels from broadcast use, at a cost of $10.1 billion. The forward auction sold 70 MHz of wireless internet licenses for $19.8 billion, and created 14 MHz of surplus spectrum. The two stages of the incentive auction thus generated just below $10 billion to U.S. taxpayers, freed up considerable spectrum for future use, and presumably raised the expected surpluses of sellers as well as buyers.
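For a sense of how the simultaneous ascending-auction idea mentioned above works mechanically, here is a toy sketch: several licenses are open for bidding at once, prices rise in fixed increments, and the auction ends only when a round passes with no new bids. This is a deliberately simplified illustration (single-unit demand, no activity rules, made-up values), not the FCC's actual design.

```python
# Toy sketch of a simultaneous ascending (multiple-round) auction:
# all licenses are open at once, bids rise by a fixed increment, and the
# auction closes when a round passes with no new bids. Deliberately
# simplified (each bidder wants only one license, no activity rules);
# not the FCC's actual design.

def simultaneous_ascending_auction(values, increment=1.0):
    """values: dict bidder -> dict license -> private value."""
    licenses = {lic for v in values.values() for lic in v}
    standing = {lic: (None, 0.0) for lic in licenses}   # license -> (holder, price)

    while True:
        new_bid = False
        for bidder, vals in values.items():
            # bidders who already hold a license at the standing price sit out
            if any(holder == bidder for holder, _ in standing.values()):
                continue
            # bid on the license with the largest positive surplus at the next price
            best, best_surplus = None, 0.0
            for lic, value in vals.items():
                surplus = value - (standing[lic][1] + increment)
                if surplus > best_surplus:
                    best, best_surplus = lic, surplus
            if best is not None:
                standing[best] = (bidder, standing[best][1] + increment)
                new_bid = True
        if not new_bid:
            return standing

values = {"X": {"east": 10, "west": 6}, "Y": {"east": 8, "west": 9}, "Z": {"east": 7}}
print(simultaneous_ascending_auction(values))
```

Because every license stays open until all bidding stops, a bidder who loses one license can shift to an adjacent one, which is the basic reason geographic patterns of ownership can evolve within a single process.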
The economic theory of auctions is clearly tied up in intimate ways with the practice and design of real-world auctions. More broadly, close analysis of buyers and sellers in the structured environment of auctions can also offer broader insights into how non-auction markets work as well. After all, in some ways a competitive market is just an informal auction with sellers offering bids hoping to get a higher price and buyers making offers hoping to get a lower price. 

For more from Milgrom and Wilson on auctions and related economics, here are some articles from the Journal of Economic Perspectives, where I work as Managing Editor.