Tuesday, October 27, 2020

Thinking about Better Graphs and Use of Color

When I started working as the Managing Editor of the Journal of Economic Perspectives back in 1986, making figures for academic articles was still relatively expensive. The changeover to software-generated figures was getting underway, but with lots of hiccups--for example, we had to purchase a more expensive printer that could produce figures as well as text. At my home base at the time, Princeton University still employed a skilled draftsman to create beautiful figures, plotting points by hand and tracing along the edge of a French curve--tools and techniques that have now gone the way of the slide rule.

Generating figures has now become cheap: indeed, I see more and more first drafts at my journal that include at least a dozen figures and often more. I sometimes suspect that the figures were generated as slides to be shown during a live presentation, and that the paper was then written around the series of figures. Economists and other social scientists, like it or not, need to know something about what makes a good graph. Susan Vanderplas, Dianne Cook, and Heike Hofmann give some background in "Testing Statistical Charts: What Makes a Good Graph?" (Annual Review of Statistics and Its Application, 2020, subscription required).

With a good statistical graph or figure, readers should be able to read information or see patterns with reasonable accuracy (although people have a tendency to round up or down). As the authors write (citations omitted): 

A useful starting point is to apply gestalt principles of visual perception, such as proximity, similarity, common region, common fate, continuity, and closure, to data plots. These principles are useful because good graphics take advantage of the human visual system’s ability to process large amounts of visual information with relatively little effort.
The authors discuss research on the extent to which certain graphs meet this goal: for example, one can use "think-aloud" methods, where subjects talk about what they are seeing and thinking as they look at various figures, or eye-tracking studies to find out what people are actually looking at. They also focus on statistical charts, not on the production of more artistic "infographics." Along with these general tips, I've been especially interested in recent years in the use of color.

The authors argue that when using a range of colors, best practice is a scale that passes through a neutral color between two other hues. They also point out that the human eye does not discern gradations in all colors equally well: "It is also important to consider the human perceptual system, which does not perceive hues uniformly: We can distinguish more shades of green than any other hue, and fewer shades of yellow, so green univariate color schemes will provide finer discriminability than other colors because the human perceptual system evolved to work in the natural world, where shades of green are plentiful." In terms of human physiological perceptions, "a significant portion of the color space is dedicated to greens and blues, while much smaller regions are dedicated to violet, red, orange, and yellow colors. This unevenness in mapping color is one reason that the multi-hued rainbow color scheme is suboptimal—the distance between points in a given color space may not be the same as the distance between points in perceptual space. As a result of the uneven mapping between color space and perceptual space, multi-hued color schemes are not recommended." In addition, some people are color-blind: the most common kind is an inability to distinguish between red and green, but there are also people who have difficulties distinguishing between blues and greens, and between yellows and reds.

Given these realities, what range of color is recommended? A purple-orange gradient both passes through a neutral color and is distinguishable by people with any of the common forms of color-blindness. Of course, this doesn't mean it should always be used: people may have mental associations with colors (say, blue associated with cold) that make it useful to use other colors. But it's worth remembering.
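To make the contrast concrete, here is a minimal sketch in Python (my own illustration, not from the article), assuming matplotlib is available. It plots the same toy data with the multi-hued "jet" rainbow colormap and with "PuOr," a diverging purple-orange scheme that passes through a neutral midpoint and holds up better for color-blind viewers.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy data centered on zero, so a diverging color scheme makes sense.
data = np.random.default_rng(0).normal(size=(20, 20))

fig, axes = plt.subplots(1, 2, figsize=(9, 4))
for ax, cmap in zip(axes, ["jet", "PuOr"]):
    # Same data, two colormaps: a multi-hued rainbow versus purple-orange.
    im = ax.imshow(data, cmap=cmap, vmin=-3, vmax=3)
    ax.set_title(f"cmap='{cmap}'")
    fig.colorbar(im, ax=ax, shrink=0.8)
plt.show()
```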


For an example of how a better graph can help with perception, consider a figure from the article looking at notifications for tuberculosis in Australia in 2012, divided by age and gender. The top panel shows the genders side-by-side for each age group, with two colors used to distinguish gender. The bottom panel shows age groups side-by-side for each gender, with five colors used to distinguish ages. The authors argue that "common region" considerations make it easier for most viewers to get information from the top figure.

Finally, here's an example of a graph that is "interactive," even though it is static. The graph shows the average number of births on each day of the year. Notice that although there's a lot of shading, it's in green, so the distinctions are easier to perceive. Key takeaways stand out easily: more babies are born in summer than in winter, and there are fewer births around holidays like July 4, Thanksgiving, Christmas, and New Year's. Also, the natural tendency for a reader is to check out their own birthday--which is what makes the figure interactive. It's easy to imagine other kinds of figures--by age, gender, location, income, education, and so on--that might cause readers to interact in a similar way by checking out the data for their own group.

For those who want to dig deeper, the article has lots more examples and citations. For more on graphic presentations of data, a useful starting point from the journal where I work as Managing Editor is Jonathan A. Schwabish's "An Economist's Guide to Visualizing Data" in the Winter 2014 issue (Journal of Economic Perspectives, 28:1, pp. 209-34). From his abstract: "Once upon a time, a picture was worth a thousand words. But with online news, blogs, and social media, a good picture can now be worth so much more. Economists who want to disseminate their research, both inside and outside the seminar room, should invest some time in thinking about how to construct compelling and effective graphics."

Monday, October 26, 2020

Will China Be Caught in the Middle-Income Trap?

The "middle-income trap" is the phenomenon that once an economy has made the big leap from being a lower-income country to being a middle-income country, then it may find it difficult (although not impossible) to make the next leap from being middle-income to high-income. Matthew Higgins considers the situation of China in "China’s Growth Outlook: Is High-Income Status in Reach?" (Federal Reserve Bank of New York, Economic Policy Review, October 2020, 26:4, pp. 68-97). 

Higgins provides the basic backdrop for China's remarkable economic growth in the last four decades. 

China’s growth performance has been remarkable following the introduction of economic reforms in the late 1970s. According to the official data, real GDP growth has averaged 9.0 percent since 1978. ... Rapid economic growth has led to a similar increase in living standards, lifting China out of poverty and into middle-income status. According to official figures, real per capita income has risen by a factor of 25 since 1978. Annual per capita income now stands at about $16,100 measured at purchasing power parity, in “2011 international dollars.” ... This places China at roughly the 60th percentile of the global income distribution, though still slightly below 30 percent of the U.S. level.
A first question, of course, is whether we really believe the official growth numbers, and the answer is "not quite." One difficulty with huge growth numbers over sustained periods of time is that you can project backwards to what the original level of income must have been at the start of the process. Thus, if current Chinese real per capita income is $16,100, and the growth rate has been 9% for (say) 40 years, then the real per capita income for China would have been about $500 before the reforms started. As Higgins spells out the implication: 
Indeed, real per capita income [in China] at the start of the decade [the 1980s] would have been below that of most countries in sub-Saharan Africa as well as neighbors such as Bangladesh, Laos, and Myanmar. Although China was clearly a poor country at the time, few would have rated it as one of the poorest. Such a ranking is also inconsistent with data on life expectancy, literacy, and other quality-of-life indicators. Growth rates from the Penn World Table, more plausibly, place China at roughly the 30th percentile of the global income distribution in the early 1980s, ahead of most countries in sub-Saharan Africa but still behind neighbors such as Indonesia, the Philippines, and Thailand.
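For readers who want to check the backcasting arithmetic, here's a minimal sketch in Python (my own illustration, not Higgins's calculation), using the figures quoted above:

```python
# Backcasting: what 1978 income level is implied by $16,100 today
# and 40 years of 9 percent annual growth?
current_income = 16_100      # per capita, 2011 international dollars
growth_rate = 0.09
years = 40

implied_start = current_income / (1 + growth_rate) ** years
print(f"Implied per capita income at the start: ${implied_start:,.0f}")
# Roughly $500 -- implausibly low, which is the point of the exercise.
```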
For comparison, here are China's official growth rates and those from the Penn World Tables: 
As you might expect, there's been an ongoing controversy for a couple of decades now over what numbers are most accurate, which I will sidestep here (although other papers in this issue of the Economic Policy Review do address them). I'll just point out that if you start adjusting numbers for one country, you need to adjust them for all countries, and when all is said and done, it remains true that China has had decades of extraordinary growth and has become a middle-income economy. 

Here, I want to focus on the question of what it would take for China to become a high-income economy, and thus not to succumb to the middle-income trap. As the figure shows, China's growth rates were slowing down even before the trade wars and now the pandemic. Higgins looks at past patterns of countries moving from middle-income to high-income status and writes: 
Our middle-income category includes countries with per capita incomes at 10 to 50 percent of the U.S. level (at current purchasing power parities); our high-income category includes anything above that. ... Out of 124 countries, 52 qualified as middle-income in 1978 and 49 in 2018. Of the original cohort of 52 middle-income countries, just 8 had advanced to high-income status by 2018.
Of course, if China can maintain a 6% growth rate for the next few decades, it will keep catching up to high-income countries like the US, Japan, Canada, and the nations of western Europe. But for most countries reaching middle-income status, sustaining such high growth rates for additional decades doesn't usually happen. For example, Higgins points out that after Japan had several decades of rapid growth and reached China's current level of per capita GDP back in 1976, Japan's growth rate steadily dropped over time, and has been at about 1% per year in recent decades. Or after South Korea had several decades of rapid growth and reached China's current level of per capita GDP back in 1994, its growth rate steadily declined to less than 3% per year.
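As a rough sense of what sustained 6% growth would mean, here's a back-of-the-envelope calculation (my own illustration, with an assumed 1.5 percent US growth rate, which is not a figure from the article):

```python
import math

china_share = 0.30    # China's per capita GDP as a share of the US level
china_growth = 0.06   # hypothetical sustained growth rate for China
us_growth = 0.015     # assumed US growth rate (not from the article)

# Years until per capita income reaches the US level under these assumptions.
years_to_parity = math.log(1 / china_share) / math.log((1 + china_growth) / (1 + us_growth))
print(f"{years_to_parity:.0f} years")   # roughly 28 years
```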

How likely is continued rapid growth for China? Higgins digs down into the underlying sources of growth for some insights. One source of economic growth is known as the "demographic dividend," which happens when a country has a rising share of its population in the prime working years from age 20-64: "According to U.N. figures, China’s working-age population is expected to decline by about 12 percent over the next twenty years even as the total population rises slightly." As the figure shows, the share of China's population that is working-age started declining a few years ago: for other rapid-growth cases like Japan or the east Asian "tiger" economies, the working-age share of the population was still rising when they hit China's current level of per capita GDP.
Another issue is that other examples of rapid growth, like Japan, South Korea, and the other east Asian "tigers," kept their growth rates high in part with very high levels of physical capital investment. But China has already gone through a stage of extremely high levels of investment, and is now trying to shift to an economy in which growth is based more on human skills and education, technology, and services.

On the other side, because China's real per capita GDP has only reached about 30% of the US level, there is certainly still room for growth. Higgins writes: "Prospects for rapid growth in China are buoyed by two key factors: the country’s distance behind current global income leaders and its relatively low rate of urbanization. These factors could provide scope for continued rapid growth through `catch-up' effects and structural transformation. ... China’s unfinished structural transformation leaves it with plenty of room to run. How fully China exploits this potential will depend largely on its own policies."

Higgins also points to one set of "institutional" measures compiled by the World Bank. The rankings for these measures have been adjusted so that the average for the 121 countries included is set at zero and the standard deviation is set at 1.0. On five of the six measures, China is below the global average. On all six measures it is well below the high-income countries of the world. One can of course quarrel with the details of how such measures are calculated, but the overall pattern is clear.
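For readers unfamiliar with this kind of rescaling, here's a tiny sketch of the standardization being described (my own illustration, with made-up numbers):

```python
import numpy as np

raw_scores = np.array([2.1, 3.4, 1.8, 4.0, 2.9])   # hypothetical raw country ratings

# Rescale so the cross-country mean is 0 and the standard deviation is 1.
z_scores = (raw_scores - raw_scores.mean()) / raw_scores.std()
print(z_scores.mean().round(6), z_scores.std().round(6))   # 0.0 and 1.0 by construction
```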
Perhaps the fundamental challenge for China is to recognize that the past 40 years of economic growth were an excellent start to becoming a high-income country, but really only a start, and that future growth will require further and even more sweeping changes to the economy and society.

As noted above, this issue of the Economic Policy Review includes a group of four articles on "China in the Global Economy."

Thursday, October 22, 2020

Interview with Sandra Black: Education Outcomes and a Stint in Politics

Douglas Clement has an interview with Sandra Black in the Fall 2020 issue of For All, a publication of the Opportunity & Inclusive Growth Institute at the Minneapolis Federal Reserve. The title and subtitle sum up the topics: "Seeing the margins: An interview with Columbia University economist Sandra Black" and "Sandra Black on education, family wealth, her time at the White House, COVID-19, and the cost of bad policy." Like a lot of the interviews done by Clement, the interviewee is encouraged to describe the basic insight behind some of their own prominent research, which in turn gives a look into how economists think about research.

For example, Black wrote an article back in 1999 on the subject of how much value parents place on living in a school district with higher test scores (Sandra E. Black, "Do Better Schools Matter? Parental Valuation of Elementary Education,"  Quarterly Journal of Economics, 114: 2, May 1999, pp. 577–599). Here's how Black describes the issue and her approach: 
Let’s look at how parents value living in a house that is associated with a better school. That’s an indirect value of the school—what the parents are willing to pay to have the right to send their children to a particular school. The problem is that when you buy a house, it has a whole bunch of different attributes. You’re buying the school that you get to send your kids to, but you’re also buying the neighborhood and the house itself and all the public amenities and all kinds of other things. And those things tend to be positively correlated. Better school districts tend to be in better neighborhoods with nicer houses—so isolating the part due just to schools is somewhat complicated. ... 

What I did was look, in theory, at two houses sitting on opposite sides of the same street, where the attendance district boundary divides the street. The houses are clearly in the same neighborhood, they’re of similar quality, et cetera. The only difference between them is which elementary school the child from each home attends. And then you can ask, How different are the prices of those houses, and how does that difference relate to the differences in school quality?

What I found was that parents were willing to pay more for better schools, but much less than you would casually estimate if you didn’t take into account all these other factors. In Massachusetts, parents were willing to pay 2.5 percent more for a 5 percent increase in school test scores. ... 

[T]his was a long time ago, so pretty much all the information was hand-collected. The housing prices were in a database, but for the attendance district boundaries, I had to contact each school district to ask for their map. I would call them and say, “Can I get the map of your boundaries?” And they would ask, “What house are you thinking of buying?” I’d reply, “No, I actually just want the map.” They’d usually send me a list of streets that were in the attendance district, and a friend of mine and I would sit down and try to create these maps. She was a very good friend.
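To make the research design concrete, here is a minimal sketch in Python of the boundary-comparison idea (my own illustration with made-up data, not Black's actual specification): regress log house prices on school test scores plus a fixed effect for each attendance-district boundary, so the test-score coefficient is identified by comparing houses on opposite sides of the same boundary.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: each row is a house sale, with the test score of the school
# it is zoned for and an identifier for the boundary segment it sits along.
df = pd.DataFrame({
    "log_price":   [12.1, 12.0, 12.4, 12.3, 11.8, 11.9],
    "test_score":  [71.0, 66.0, 80.0, 74.0, 60.0, 63.0],
    "boundary_id": ["A",  "A",  "B",  "B",  "C",  "C"],
})

# Boundary fixed effects absorb neighborhood quality shared by both sides.
model = smf.ols("log_price ~ test_score + C(boundary_id)", data=df).fit()
print(model.params["test_score"])   # within-boundary price premium per test-score point
```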
Here's another example. Back in 1997 the state of Texas passed the "Top Ten Percent Plan." The idea was that anyone in the top 10% of their high school class would be automatically admitted to any University of Texas campus they wished. One of the hopes was to improve diversity at the flagship University of Texas campus in Austin. What happened, both for those admitted to the traditionally more selective UT-Austin campus and for those who missed out on attending that campus as a result of the change? (The paper is Sandra E. Black, Jeffrey T. Denning, and Jesse Rothstein, "Winners and Losers: The Effect of Gaining and Losing Access to Selective Colleges on Education and Labor Market Outcomes," March 2020, NBER Working Paper 26821). Black tells the story:
The idea is that the top 10 percent of every high school in Texas would be automatically admitted to any University of Texas institution—any one of their choice. All of a sudden, disadvantaged high schools that originally sent very few students to selective universities like the University of Texas, Austin—the state’s top public university— found that their top students were now automatically admitted to UT Austin. If they wanted to go, all the student had to do was apply. There was also outreach, to make students aware of the new admissions policy. The hope was that it would maintain racial diversity because the disadvantaged high schools were disproportionately minority.

It’s not obvious that the goal of maintaining diversity was realized, in part because even though a school may have a disproportionate number of minority students, its top 10 percent academically is often less racially diverse than the rest of the school. There is some debate about whether it maintained racial diversity.

What you do see, however, is that more students from these disadvantaged schools started to attend UT Austin. And students from the more advantaged high schools who were right below their school’s top 10 percent were now less likely to attend. So there’s substitution—for every student gaining admission, another loses. I think that is true in every admissions policy, but we don’t always consciously weigh these trade-offs. ...  Here, we’re trying to explicitly think about, and measure, these trade-offs. ... 

[W]e show that the students who attend UT Austin as a result of the TTP plan—who wouldn’t have attended UT Austin prior to the TTP plan—do better on a whole range of outcomes. They’re more likely to get a college degree. They earn higher salaries later on. It has a positive impact on them.

But what was really interesting is that the students who are pushed out—that’s how we referred to them—didn’t really suffer as a result of the policy. These students would probably have attended UT Austin before the TTP plan. But now, because they were not in the top 10 percent [of their traditional “feeder” school], they got pushed out of the top Texas schools like UT Austin. We see that those students attend a slightly less prestigious college, in the sense that they’re not going to UT Austin, the flagship university. But they’ll go to another four-year college, and they’re really not hurt. They’re still graduating, and they’re getting similar earnings after college.

So the students who weren’t attending college before [because they didn’t attend a traditional feeder school] now are, and they’re benefiting from that in terms of graduation rates and income, while the ones who lose out by not going to Texas’ top university aren’t really hurt that much. It seems like a win-win.

Back in 2015, Black spent some time at the White House Council of Economic Advisers. Here's one of her reflections on that time:  

[W]hich job do I prefer: adviser or academic? That’s easy to answer: being a professor. I like thinking about things for long periods of time, and it was quite the opposite when I was in D.C. There, I was scheduled every 15 minutes. Each meeting would cover a different topic, and I had to be ready to be an expert on A, then an expert on B, and then an expert on C.

It is the antithesis of being an academic, and it’s a skill that I think a lot of academics don’t naturally have, me included. It was a really hard transition from academia to the policy world. Coming back to academia was hard too. I noticed that my attention span had become so much shorter. It took six months, at least, before I could sit and read a whole paper and just think about that paper. Being at the CEA was a very different experience. I really enjoyed it, but I was happy to come back to academia.

Wednesday, October 21, 2020

The Google Antitrust Case and Echoes of Microsoft

The US Department  of Justice has filed an antitrust case against Google. The DoJ press release is here;  the actual complaint filed with the US District Court for the District of Columbia is here. Major antitrust cases often take years to litigate and resolve, so there will be plenty of time to dig into the details as they emerge. Here, I want to reflect back on the previous major antitrust case in the tech sector, the antitrust case against Microsoft that was resolved back in 2001. 

For both cases, the key starting point is to remember that in US antitrust law, being big and having a large market share is not a crime. Instead, the possibility of a crime emerges when a company with a large market share leverages that market share in a way that helps to entrench its own position and block potential competition. Thus, the antitrust case digs down into specific contractual details.

In the Microsoft antitrust case, for example, the specific legal question was not whether Microsoft was big (it was), or whether it dominated the market for computer operating systems (it did). The legal question was whether Microsoft was using its contracts with personal computer manufacturers in a way that excluded other potential competitors. In particular, Microsoft signed contracts requiring that computer makers license and install Microsoft's Internet Explorer browser as a condition of having a license to install the Windows 95 operating system. Microsoft had expressed fears in internal memos that alternative browsers like Netscape Navigator might become the fundamental basis for how computers and software interacted in the future. From the perspective of antitrust regulators, Microsoft's efforts to use contracts as a way of linking together its operating system and its browser seemed like anticompetitive behavior. (For an overview of the issues in the Microsoft case, a useful starting point is the three-paper symposium in the Spring 2001 issue of the Journal of Economic Perspectives.)

After several judicial decisions went against Microsoft, the case was resolved with a consent agreement in November 2001. Microsoft agreed to stop linking its operating system and its web browser. It agreed to share some of its code so that it was easier for competitors to produce software that would connect to Microsoft products. Microsoft also agreed to an independent oversight board that would monitor its actions for potentially anticompetitive behavior for five years.

As we look back on that Microsoft settlement today, it's worth noting that losing the antitrust case in the courts and being pressured into a consent agreement certainly did not destroy Microsoft. The firm was not broken up into separate firms. In 2020, Microsoft ranks either #1 or very near the top of all US companies as measured by the total value of its stock. 

Looking again at the antitrust case against Google, the claims are focused on specific contractual details. For example, here's how the Department of Justice listed the issues in its press release: 

As alleged in the Complaint, Google has entered into a series of exclusionary agreements that collectively lock up the primary avenues through which users access search engines, and thus the internet, by requiring that Google be set as the preset default general search engine on billions of mobile devices and computers worldwide and, in many cases, prohibiting preinstallation of a competitor. In particular, the Complaint alleges that Google has unlawfully maintained monopolies in search and search advertising by:
  • Entering into exclusivity agreements that forbid preinstallation of any competing search service.
  • Entering into tying and other arrangements that force preinstallation of its search applications in prime locations on mobile devices and make them undeletable, regardless of consumer preference.
  • Entering into long-term agreements with Apple that require Google to be the default – and de facto exclusive – general search engine on Apple’s popular Safari browser and other Apple search tools.
  • Generally using monopoly profits to buy preferential treatment for its search engine on devices, web browsers, and other search access points, creating a continuous and self-reinforcing cycle of monopolization.
As noted earlier, I expect these allegations will result in years of litigation. But I also strongly suspect that even if Google eventually loses in court and signs a consent agreement, it ultimately won't injure Google much or at all as a company, nor will it make a lot of difference in the short- or the medium-term to the market for online searches. If this is the ultimate outcome, I'm not sure it's a bad thing. After all, what are we really talking about in  this case? As Preston McAfee has pointed out, "First, let's be clear about what Facebook and Google monopolize: digital advertising. The accurate phrase is `exercise market power,' rather than monopolize, but life is short. Both companies give away their consumer product; the product they sell is advertising. While digital advertising is probably a market for antitrust purposes, it is not in the top 10 social issues we face and possibly not in the top thousand. Indeed, insofar as advertising is bad for consumers, monopolization, by increasing the price of advertising, does a social good." 

Ultimately, it seems to me as if the most important outcomes of these big-tech antitrust cases may not be about the details of contractual tying. Instead, the important outcome is that the company is put on notice that it is being closely watched for anticompetitive behavior, it has been judged legally guilty of such behavior, and it needs to back away from anything resembling such behavior moving forward.  

Looking back at the aftermath of the Microsoft case, for example, some commenters have suggested that it caused Microsoft to back away from buying other upstart tech companies--like buying Google and Facebook when they were young firms. A common complaint against the FAANG companies— Facebook, Apple, Amazon, Netflix, and Google--is that they are buying up companies that could have turned into their future competitors. A recent report from the House Judiciary Committee ("Investigation of Competition in Digital Markets") points out that "since 1998, Amazon, Apple, Facebook, and Google collectively have purchased more than 500 companies. The antitrust agencies did not block a single acquisition. In one instance—Google’s purchase of ITA—the Justice Department required Google to agree to certain terms in a consent decree before proceeding with the transaction."

It's plausible to me that the kinds of contracts Google has been signing with Apple or other firms are a kind of anticompetitive behavior that deserves attention from the antitrust authorities. But the big-picture question here is about the forces that govern overall competition in these digital markets, and one major concern, it seems to me, is that the big tech fish are protecting their dominant positions by buying up the little tech fish before the little ones have a chance to grow up and become challengers for market share.

Mark A. Lemley and Andrew McCreary offer a strong statement of this view in their paper "Exit Strategy" (Stanford Law and Economics Olin Working Paper #542, last revised January 30, 2020). They write (footnotes omitted):

There are many reasons tech markets feature dominant firms, from lead-time advantages to branding to network effects that drive customers to the most popular sites. But traditionally those markets have been disciplined by so-called Schumpeterian competition — competition to displace the current incumbent and become the next dominant firm. Schumpeterian competition involves leapfrogging by successive generations of technology. Nintendo replaces Atari as the leading game console manufacturer, then Sega replaces Nintendo, then Sony replaces Sega, then Microsoft replaces Sony, then Sony returns to displace Microsoft. And so on. One of the biggest puzzles of the modern tech industry is why Schumpeterian competition seems to have disappeared in large swaths of the tech industry. Despite the vaunted speed of technological change, Apple, Amazon, Google, Microsoft, and Netflix are all more than 20 years old. Even the baby of the dominant firms, Facebook, is over 15 years old. Where is the next Google, the next Amazon, the next Facebook?
Their answer is that the "exit strategy" for the hottest up-and-coming tech firms isn't to do a stock offering, remain an independent company, and keep building the firm until perhaps it can challenge one of the existing tech Goliaths. Instead, the "exit strategy," often driven by venture capital firms, is for the new firms to sell themselves to the existing firms.

This particular antitrust case against Google's allegedly anticompetitive behavior in the search engine market is surely just one of the cases Google will face in the future, both in the US and around the world. The attentive reader will have noticed that nothing in the current complaint is about broader topics like how Google collects or makes use of information on consumers. There's nothing about how Google might or might not be manipulating its search algorithms to provide an advantage to Google-related products: for example, there have been claims that if you try to search Google for websites that do their own searches and price comparisons, those websites may be hard to find. There are also questions about whether or how Google manipulates its search results for partisan political purposes.

As I look back at the Microsoft case, my suspicion is that the biggest part of the outcome was that when Microsoft was under the antitrust microscope, other companies that eventually became its big-tech competitors had a chance to grow and flourish on their own. With Google, the big issue isn't really about details of specific contractual agreements relating to its search engine, but whether Google and the other giants of the digital economy are leaving sufficient room for their future competitors. 

For more on antitrust and the big tech companies, see some of my previous posts on these topics.

Tuesday, October 20, 2020

Will Vote-by-Mail Affect the Election Outcome?

For the 2020 election, the United States will rely more heavily on vote-by-mail than ever before. Is it likely to affect the outcome? Andrew Hall discusses some of the evidence in "How does vote-by-mail change American elections?" (Policy Brief, October 2020, Stanford Institute for Economic  Policy Research).

There are several categories of vote-by-mail. The mildest and most traditional approach was the absentee ballot, used by people who knew in advance that they wouldn't be able to make it to the polls in person on Election Day for some specific reason (like being an out-of-state college student or being deployed out of state in the military). Over time, this has evolved in many states into "no excuses" absentee voting, where anyone can request an absentee ballot for pretty much any reason.

Perhaps the most aggressive version is universal vote-by-mail, where the state mails a ballot to every registered voter. The voter can then vote by mail, bring the mailed ballot in to vote in person, or ignore the mailed ballot and just vote in person on Election Day. Hall notes: "Prior to 2020, only Colorado, Hawaii, Oregon, Utah, and Washington employed universal vote-by-mail, while California was in the process of phasing it in across counties. In response to COVID-19, three more states, Nevada, New Jersey, and Vermont, along with the District of Columbia, have implemented the policy, while California accelerated its ongoing implementation. Montana has also begun to phase in the practice."

In 2020, most states are experimenting with something in-between: not quite universal vote-by-mail (in most states), but often more encouragement for vote-by-mail than had been common in the previous situation of no-excuses absentee voting. Thus, thinking about what will happen in 2020 requires looking back at earlier experience. 

For example, the universal vote-by-mail states often phased in the process a few randomly chosen counties at a time. Thus, social scientists can compare, within the same election, how voting behavior changed when mail-in voting first arrived. Hall writes:

In our first study, published recently in the Proceedings of the National Academy of Sciences, we examined historical data from California, Utah, and Washington, where universal vote-by-mail was phased in over time, county by county ... We found that, in pre-COVID times, switching to universal vote-by-mail had only modest effects on turnout, increasing overall rates of turnout by approximately two percentage points. Because universal vote-by-mail has such modest effects on overall turnout, it’s not surprising that we also found that it conveyed no meaningful advantage for the Democratic Party. When counties switched to universal vote-by-mail, the Democratic share of turnout did not increase appreciably, and neither did the vote shares of Democratic candidates. Our largest estimate suggests that universal vote-by-mail could increase Democratic vote share by 0.7 percentage points---enough to swing a very close election, to be sure, but a very small advantage in most electoral contexts, and a much smaller effect than recent rhetoric might suggest.
Of course, this evidence is about a move to universal mail-in voting, and what is actually happening in most states is more like a dramatic expansion of no-excuse-needed absentee balloting. However, I confess that I am less sanguine than Hall about a swing of "only" 0.7 percentage points. If the presidency or control of the US Senate comes down to a few key, close-run states, that amount may represent the margin of victory. Also, this pre-COVID evidence may underestimate the partisan difference in 2020, given that there is some survey evidence from April and June suggesting that Democrats are more enthused about mail-in voting than Republicans. But what has seemed to happen in other states is that while Democrats are more likely to vote by mail, overall turnout and voting margins are not much affected. 

As another piece of evidence, Hall discusses the Texas run-off primary on June 14. For research purposes, it's useful that this vote happened when the pandemic was already underway. Also, it's useful that in this election, only those 65 and over could vote by mail with no reason needed. Thus, one can compare the voting patterns of those just under 65 and just over 65, and ask whether, among voters who were close in age but faced different rules for mail-in voting, the pandemic changed those patterns. For example, did the 64-year-olds, who did not have easy access to a mail-in ballot, vote less? The short answer is that the gap between 64- and 65-year-old voters did not change.
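To make the comparison concrete, here is a minimal sketch in Python (my own illustration with made-up data, not Hall's analysis) of the difference-in-differences idea around the age-65 cutoff: ask whether the turnout gap between eligible 65-year-olds and ineligible 64-year-olds widened once the pandemic made in-person voting costlier.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical voter-level data: turnout (0/1), an age-65 mail-in eligibility flag,
# and an indicator for the pandemic-era election.
df = pd.DataFrame({
    "turned_out": [1, 0, 1, 1, 0, 1, 1, 1],
    "eligible65": [0, 0, 1, 1, 0, 0, 1, 1],
    "pandemic":   [0, 0, 0, 0, 1, 1, 1, 1],
})

model = smf.ols("turned_out ~ eligible65 * pandemic", data=df).fit()
# The interaction term measures whether easy mail-in access mattered more in 2020;
# Hall reports the 64/65 gap did not change.
print(model.params["eligible65:pandemic"])
```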

I'll admit here at the bottom that although I've had to vote absentee a couple of times in my life, I'm not a big fan of vote-by-mail. I like the idea of most people voting at the same time, with the same information, and early mail-in voting raises the problem that if new news arrives and you want to change your vote, you're out of luck.  In addition, I'm a big fan of the secret ballot. No matter what you say to other people, when you are alone in that voting booth, you can choose who you want. Vote-by-mail will inevitably be a less private experience, where those who might wish to defy their family members or friends or those in their apartment building or their assisted care facility may find it just a little harder to do so. 

There are also security concerns about mail-in ballots being delivered and practical concerns about difficulties of validating them and counting them expeditiously. I'm confident that in at least one state in the 2020 election, probably a state with little previous experience in mail-in voting, the process is going to go wincingly wrong.  As Hall writes: "That being said, there are important November-specific factors our research cannot address. The most important issue concerns the logistics of vote-by-mail. Historically, mail-in ballots are rejected at higher rates than in-person votes. Capacity issues in the face of an enormous surge in voting by mail could drive these rejection rates higher. And if Democrats cast more mail-in ballots than Republicans, as looks extremely likely, these higher rejection rates could mean that vote-by-mail paradoxically hurts Democrats."

Of course, vote-by-mail is only one of the many differences across states in how voting occurs, including differences in voter registration, voter ID, recounts, and others. For an overview, see "Sketching State Laws on Administration of Elections" (September 26, 2016).

Monday, October 19, 2020

The Ada Lovelace Controversies

Ada Lovelace (1815-1852) is generally credited with being the first computer programmer: specifically, after Charles Babbage wrote down the plans for his Analytical Engine (which Britannica calls "a general-purpose, fully program-controlled, automatic mechanical digital computer"), Lovelace wrote down a set of instructions that would allow the machine to calculate the "numbers of Bernoulli" (for discussion, see here and here). Suw Charman-Anderson gives an overview of the episode and some surrounding historical controversy in "Ada Lovelace: A Simple Solution to a Lengthy Controversy" (Patterns, October 9, 2020, volume 1, issue 7).
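For the curious, here is a short modern sketch in Python of computing Bernoulli numbers from the standard recurrence (my own illustration, not Lovelace's actual Note G procedure), just to give a flavor of the kind of stepwise calculation her program laid out for the Analytical Engine.

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return the Bernoulli numbers B_0 .. B_n as exact fractions."""
    B = [Fraction(1)]                       # B_0 = 1
    for m in range(1, n + 1):
        # Recurrence: sum_{k=0}^{m} C(m+1, k) * B_k = 0  for m >= 1.
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))
    return B

print(bernoulli(8))   # B_1 = -1/2, B_2 = 1/6, and the odd-indexed ones after B_1 are zero
```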

The historical controversy is whether Lovelace truly deserves credit for the program, or whether her contemporaries who gave her credit were just being chivalrous to a fault (and perhaps being generous to the only daughter of Lord Byron and his wife). For example:

In a letter to Michael Faraday in 1843, Babbage referred to her as “that Enchantress who has thrown her magical spell around the most abstract of Sciences and has grasped it with a force which few masculine intellects (in our own country at least) could have exerted over it”. Sophia De Morgan, who had tutored the young Lovelace, and Michael Faraday himself were both impressed with her understanding of Babbage’s Analytical Engine. Augustus De Morgan, Sophia’s husband and another of Lovelace’s tutors, described her as having the potential, had she been a man, to become “an original mathematical investigator, perhaps of first-rate eminence” ...

Apparently, some modern writers have pored over what remains of the imprecisely dated correspondence between Lovelace and her tutor Augustus De Morgan, and decided that Lovelace didn't know enough math to have written the program. (Personally, I shudder to think of what judgments would be reached about my own capabilities if I were judged by the questions I sometimes felt the need to ask!) But Charman-Anderson makes a persuasive case that the whole controversy is based on a mis-dating of Lovelace's mathematical education in general and her correspondence with De Morgan in particular; that is, critics of Lovelace were mistakenly treating early questions she asked her tutor as if they were questions asked several years later.

For me, the more interesting point that Charman-Anderson makes is to emphasize that writing a computer program was its own conceptual breakthrough. There had long been mechanical computing machines, where you plugged in a problem and it spit out an answer. But the breakthrough from Lovelace was to see that Babbage's Analytical Engine could be viewed as carrying out a set of rules for working out new results; indeed, Lovelace hypothesized that such a machine could write music based on a set of rules. Charman-Anderson writes (quotations in the first paragraph from Lovelace's 1843 notes, footnotes omitted):

Although Lovelace was the first person to publish a computer program, that wasn’t her most impressive accomplishment. Babbage had written snippets of programs before, and while Lovelace’s was more elaborate and more complete, her true breakthrough was recognizing that any machine capable of manipulating numbers could also manipulate symbols. Thus, she realized, the Analytical Engine had the capacity to calculate results that had not “been worked out by human head and hands first,” separating it from the “mere calculating machines” that came before, such as Babbage’s earlier Difference Engine. Such a machine could, for example, create music of “any degree of complexity or extent”, if only it were possible to reduce the “science of harmony and of musical composition” to a set of rules and variables that could be programmed into the machine. ...

While calculating devices have a long history, the idea that a machine might be able to create music or graphics was contrary to all experience and expectation. Lovelace and her peers would have been familiar with the artifice of the automaton, clockwork machines which looked and acted like humans or animals but were driven by complex arrangements of cams and levers. And indeed, Babbage is said to have owned one called the Silver Lady, which could “bow and put up her eyeglass at intervals, as if to passing acquaintances”. But the Analytical Engine would have been in a category all its own.

One of the biggest leaps the human mind can make is extrapolating from current capabilities to future possibilities. The “art of the possible”, as it has been called, is an essential skill for innovators and entrepreneurs, but envisioning an entirely new class of machine is something for which few people have the capacity. Babbage’s design for the Analytical Engine was astounding, but none of his peers seemed to truly grasp its meaning. None except Lovelace.

Saturday, October 17, 2020

Interview with Gary Hoover: Economics and Discrimination

The Southwest Economy publication of the Federal Reserve Bank of Dallas has published "A Conversation with Gary Hoover" (Third Quarter 2020, pp. 7-9). Here are some of Hoover's comments: 

On  his own career path: 

Although I have been successful in economics, it has not come without some amount of psychological trauma. When I arrived at the University of Alabama in 1998, the economics department had never hired a Black faculty member. Sadly, that is still the case at more economics departments than not. I would not call those initial years hostile, but they were not inviting either.

I stuck to my plan, which was to publish articles to the best of my ability and teach good classes. The pressures were there to mentor Black students, serve on countless committees to “diversify” things and be a role model. I took on the extra tasks but never lost track of my goal. I saw so many of my Black counterparts fall into the trap. They had outsized service burdens compared to their peers, which they took on with the encouragement of the administration. However, when promotion and tenure evaluation time arrived, they were dismissed for not “meeting the high standards of the unit.”
On labor market impediments for black workers:
The impediments begin for Blacks seeking employment from the very outset. Some research has shown that non-Black job applicants of equal ability receive 50 percent more callbacks than Blacks. To further amplify on the issue, some research has shown that Black males without criminal records receive the same rate of callbacks for interviews as white males just released from prison when applying for employment in the low-wage job market.

With such handicaps existing from the start, it is no surprise that a wage gap exists. Some estimates show that gap to be as large as 28 percent on average and as large as 34 percent for those earning in the highest end (95th percentile) of the wage distribution. ...

Employers want workers who are trainable and present. Black workers, who have been poorly trained or suffer inferior health outcomes, will suffer disproportionately. In addition, the impacts of the criminal justice system cannot be overlooked. Some recent research has shown that for the birth cohort born between 1980 and 1984, the likelihood of incarceration transition for Blacks was 2.4 times greater than for their white counterparts. Given this outsized risk of incarceration, the prospects of long-term unemployment are dramatically increased.
On whether "the economy will evolve quickly enough to ensure the success and prosperity of minority groups":
I think that I must be optimistic about the future. What employers are yet to realize, but will have to come to grips with, is that successful market outcomes for minority groups mean success for them also. By that I mean, this is not a zero-sum game where one group will only improve at the expense of the other. In fact, history has shown us the opposite. Once minorities are fully utilized and integrated in the labor force, the economy as a whole will enjoy a different type of prosperity than has ever been experienced in the U.S. Once again, we must remember the introductory idea we teach to our college freshmen about the circular flow of the economy in that those fully engaged minority employees become fully engaged consumers.
For more on Hoover's thoughts about racial and ethnic diversity in the economics profession, a useful starting point is his article in the Summer 2020 issue of JEP, co-authored with Amanda Bayer and Ebonya Washington: "How You Can Work to Increase the Presence and Improve the Experience of Black, Latinx, and Native American People in the Economics Profession" (Journal of Economic Perspectives, 34:3, pp. 193-219).

For an overview of how economists seek to understand discrimination in theoretical and empirical terms, and how the views of economists differ from those of sociologists, a useful starting point is the two-paper symposium on "Perspectives on Racial Discrimination" in the Spring 2020 issue of JEP.