Thursday, May 28, 2020

How Economists and Sociologists See Racial Discrimination Differently

Economists tend to see discrimination as arising from the actions of individuals, who in turn interact in markets and society. Sociologists do not feel the same compulsion to build their theories on purposeful decision-making by individuals: "Sociologists generally understand racial discrimination as differential treatment on the basis of race that may or may not result from prejudice or animus and may or may not be intentional in nature." The Spring 2020 issue of the Journal of Economic Perspectives illustrates the difference with a two-paper symposium on "Perspectives on Racial Discrimination": Kevin Lang and Ariella Kahn-Lang Spitzer offer an economic perspective, while Mario Small and Devah Pager offer a sociological one.
As most economists learned somewhere along the way, one can think of individual motivations for discrimination as coming in two flavors: taste-based discrimination in the oeuvre of Gary Becker (Nobel 1992) or "statistical discrimination" from the writings of Edmund Phelps (Nobel 2006) and Kenneth Arrow (Nobel 1972). One can dispute how economists discuss the subject of discrimination, but it would just be false to claim that it has not been a high-priority topic for top-level economists for decades.

Taste-based discrimination is the name given to racial prejudice and animus. Statistical discrimination refers to the reality that we all make generalizations about people. Sometimes the generalizations are socially useful: Lang and Spitzer mention that people are more likely to give up their seat on the bus or subway to a pregnant woman or an elderly person, based on the statistical generalization that they are more likely to need the seat, or that health care providers are more likely to emphasize breast-cancer screening for women than for men. However, statistical discrimination can also be harmful: say, if it is based on beliefs that those of a certain race who are applying to be hired for a job or to rent an apartment are more likely to be criminals. Moreover, when statistical discrimination is based on inaccurate statistics and exaggerated concerns, it begins to look functionally similar to taste-based discrimination. 

In addition, economists have long pointed out that the effects of discrimination may vary with the parties involved: in the context of the labor market, one can look separately at discrimination by employers, by co-workers, and by customers. If the issue is discrimination by employers, one possible result is firms that are segregated by race but sell to the same consumers. If the issue is discrimination by customers, one result may be that whites become more likely to hold the "front-facing" jobs that deal directly with customers.  

The economic approach to discrimination, with its focus on purposeful and intentional acts by individuals, can offer real insights, and Lang and Spitzer give a useful overview of the research. For example, while the basic statistics show that blacks are more likely to be arrested for traffic violations, how can we know whether this is linked to prejudiced behavior by the police? One line of research has looked at traffic violations at different times of day, when there is more or less daylight. The underlying idea is that racial prejudice is more likely to manifest itself when the police can see the driver! The evidence from these studies is mixed: one study found no effect of daylight on the racial mix of traffic stops, but another found that blacks were stopped more often at night on streets with better lighting. 

Studies of "ban-the-box" legislation also have unexpected effects, as Lang and Spitzer point out: 
Because a higher proportion of blacks have criminal records than whites do, one might expect that preventing employers from inquiring about criminal records, at least at an early stage, would increase black employment. However, if firms cannot ask for information about criminal records, they may rely on correlates of criminal history, including being a young black man. This concern is even greater if employers tend to exaggerate the prevalence of criminal histories among black men, thus leading to inaccurate statistical discrimination. Agan and Starr (2018) investigate “ban the box” legislation in which companies are forbidden from asking job applicants about criminal background. Before such rules took effect, employers interviewed similar proportions of black and white male job applicants without criminal records. Prohibiting firms from requesting this information reduced callbacks of black men relative to otherwise similar whites. Consistent with this, Doleac and Hansen (2016) find that banning the box reduced the employment of low-skill young black men by 3.4 percentage points and low-skill young Hispanic men by 2.3 percentage points. Similarly, occupational licensing increases the share of minority workers in an occupation despite their lower pass rates on such exams (Law and Marks 2009). Prohibiting the use of credit reports in hiring reduced black employment rather than increasing it (Bartik and Nelson 2019). Taken together, these studies provide strong evidence that statistical discrimination plays an important role in hiring.
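The mechanism in that passage can be made concrete with a stylized sketch. All numbers below are invented for illustration, not taken from any of the cited studies: two hypothetical groups, an assumed record rate for each, and an employer who calls back anyone believed unlikely to have a record.

```python
# Stylized model of the "ban the box" mechanism; every number here is
# invented for illustration, not drawn from Agan and Starr's data.
record_rate = {"A": 0.30, "B": 0.10}  # assumed share of each group with a record
threshold = 0.20  # employer calls back if believed probability of a record < 20%

def callback_rate_no_record(group, box_banned):
    """Callback rate among applicants who in fact have *no* criminal record."""
    if not box_banned:
        # The application asks about records, so clean applicants are identified.
        return 1.0
    # With the box banned, the employer sees only the group, so it falls back
    # on the (possibly exaggerated) group base rate: statistical discrimination.
    return 1.0 if record_rate[group] < threshold else 0.0

for banned in (False, True):
    rates = {g: callback_rate_no_record(g, banned) for g in record_rate}
    print(f"box banned={banned}: {rates}")
```

In this toy version, banning the box leaves record-free members of group B untouched but wipes out callbacks for record-free members of group A, the group with the higher assumed base rate, which is the direction of the Agan and Starr result.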
As sociologists, Small and Pager have no direct issue with this kind of work in economics: as they point out, some sociologists work in a similar vein. But their essay emphasizes that discriminatory outcomes can emerge from reasonable-sounding institutional choices and from history. 

For example, many companies, when they are hiring, encourage current workers to refer their friends and neighbors. This practice is not overtly racial. But given US patterns of residential segregation and friendship, it means that new hires will tend to reinforce the earlier racial composition of the workforce. Or consider the standard practice that when doing layoffs, last hired will be first fired. If a company has only fairly recently started hiring minority groups, then the weight of layoffs will fall more heavily on these groups. As Small and Pager write: 
It is not surprising that a national study of 327 establishments that downsized between 1971 and 2002 found that downsizing reduced the diversity of the firm’s managers—female and minority managers tended to be laid off first. But what is perhaps more surprising is that those companies whose layoffs were based formally on tenure or position saw a greater decline in the diversity of their managers; net of establishment characteristics such as size, personnel structures, unionization, programs targeting minorities for management, and many others; and of industry characteristics such as racial composition of industry and state labor force, proportion of government contractors, and others (Kalev 2014). In contrast, those companies whose layoffs were based formally on individual performance evaluations did not see greater declines in managerial diversity (Kalev 2014).
In other cases, actions taken for discriminatory reasons in the past can have effects for long periods into the future. For example, blacks are much less likely to accumulate wealth through homeownership than whites, and one reason dates back to decisions made by federal agencies in the 1930s. 
However, the Home Owners Loan Corporation and Federal Housing Administration were also responsible for the spread of redlining. As part of its evaluation of whom to help, the HOLC created a formalized appraisal system, which included the characteristics of the neighborhood in which the property was located. Neighborhoods were graded from A to D, and those with the bottom two grades or rankings were deemed too risky for investment. Color-coded maps helped assess neighborhoods easily, and the riskiest (grade D) neighborhoods were marked in red. These assessments openly examined a neighborhood’s racial characteristics, as “% Negro” was one of the variables standard HOLC forms required field assessors to record (for example, Aaronson, Hartley, and Mazumder 2019, 53; Norris and Baek 2016, 43). Redlined neighborhoods invariably had a high proportion of African-Americans. Similarly, an absence of African-Americans dramatically helped scores. For example, a 1940 appraisal of neighborhoods in St. Louis by the Home Owners Loan Corporation gave its highest rating, A, to Ladue, an area at the time largely undeveloped, described as “occupied by ‘capitalists and other wealthy families’” and as a place that was “not the home of ‘a single foreigner or Negro’” (Jackson 1980, 425). In fact, among the primary considerations for designating a neighborhood’s stability were, explicitly, its “protection from adverse influences,” “infiltration of inharmonious racial or nationality groups,” and presence of an “undesirable population” (as quoted in Hillier 2003, 403; Hillier 2005, 217).
More recent research looks at the long-term effects of the boundaries that were drawn at the time. 
The results are consistent with the HOLC boundaries having a causal impact on both racial segregation and lower outcomes for predominantly black neighborhoods. As the authors write, “areas graded ‘D’ become more heavily African-American than nearby C-rated areas over the 20th century, [a] . . . segregation gap [that] rises steadily from 1930 until about 1970 or 1980 before declining thereafter” (p. 3). They find a similar pattern when comparing C and B neighborhoods, even though “there were virtually no black residents in either C or B neighborhoods prior to the maps” (p. 3). Furthermore, the authors find “an economically important negative effect on homeownership, house values, rents, and vacancy rates with analogous time patterns to share African-American, suggesting economically significant housing disinvestment in the wake of restricted credit access” (pp. 2–3).
While economists have not totally neglected the role of institutions and history in the transmission of racial discrimination, it's fair to say that it hasn't been their main emphasis, either.  My own sense is that through most of US history, the main issue of racial discrimination was explicit white prejudice. But the balance has shifted, and current differences in racial outcomes are a difficult combination of history, institutions, and social patterns. 

For example, one theme that has emerged from earlier research by both economists and sociologists is that discrimination can reduce the incentives to gain human capital. Indeed, a group that has experienced discrimination may end up with less human capital for interrelated reasons: less access to educational resources, reduced motivation to gain human capital (because of lurking future discrimination), reduced expectations or less support from family and peer groups, and other reasons. Once this dynamic has unfolded, then even employers who have zero preference for taste-based discrimination, but just hire on the basis of observable background and skills, will end up with different labor market outcomes by race. 
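The feedback loop in that paragraph can be sketched in a toy model. Everything here is an invented assumption: workers invest in skill in proportion to the return they expect to capture, and employers pay purely on observed skill, with no taste-based discrimination at all.

```python
# Toy feedback loop, with invented parameters: expected returns drive
# human-capital investment, and employers hire purely on observed skill.
def skill_path(perceived_return, periods=10, persistence=0.5):
    """Skill drifts toward the return a group expects on its investment."""
    s = 1.0  # both groups start with identical skill
    path = []
    for _ in range(periods):
        # each period, investment responds to the expected (perceived) return
        s = persistence * s + (1 - persistence) * perceived_return
        path.append(round(s, 3))
    return path

# A group expecting the full return (normalized to 1.0) keeps full skill;
# a group anticipating that discrimination will erode 40% of the return
# ends up with lower skill, and hence lower wages, from skill-based hiring.
print(skill_path(1.0)[-1])  # stays at 1.0
print(skill_path(0.6)[-1])  # drifts down toward 0.6
```

The point of the sketch is the last line: a persistent racial gap in outcomes emerges even though the employer in the model has zero animus.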

Wednesday, May 27, 2020

Fiscal Federalism: An International View

What is the appropriate balance of taxes and spending between the central government of a country and the subcentral governments--in the US, state and local governments? Countries vary, and there's no one-size-fits-all model. But Kass Forman, Sean Dougherty, and Hansjörg Blöchliger provide an overview of how countries differ and some standard tradeoffs to consider in "Synthesising Good Practices in Fiscal Federalism: Key Recommendations from 15 Years of Country Surveys" (OECD Economic Policy Paper #28, April 2020).   

Here are a couple of figures to give some background on the underlying issues. On this figure, the horizontal axis is the share of spending done by subcentral governments, while the vertical axis is the share of taxes collected by subcentral governments. Being on the 45-degree line would mean that these were the same. However, every country falls below the 45-degree line, which means that for every country, some of the revenues spent by subcentral governments are collected by the central government. 

It's interesting to note the different models of fiscal federalism that prevail in various countries. At the far right, Canada is clearly an outlier, with nearly 70% of all government spending happening at the subnational level, and half of all taxes collected at the subnational level. Other countries where about half or more of government spending happens at the subnational level include the US, Sweden, Switzerland (CHE) and Denmark. 

Mexico is an interesting case where 40% of government spending happens at the subnational level, but tax revenues collected at that level are very low. Germany (DEU) and Israel are countries with a substantial level of subnational spending that is also nearly matched by the level of subnational taxes--and thus a relatively low redistribution of revenue from central to subcentral governments. Many countries huddled in the bottom left of the figure are low in both subnational spending and subnational taxes. 
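The distance below the 45-degree line is sometimes called the vertical fiscal gap: the slice of subnational spending that must be financed by transfers from the central government. A quick sketch, using rough shares eyeballed from the figure described above (illustrative values, not official OECD statistics):

```python
# Approximate shares read off the figure; treat these as rough
# illustrative values, not official OECD data.
shares = {  # country: (subnational spending share, subnational tax share)
    "CAN": (0.70, 0.50),
    "MEX": (0.40, 0.05),
    "DEU": (0.38, 0.32),
}

# Vertical fiscal gap: the part of subnational spending financed by
# transfers from the central government rather than subnational taxes.
for country, (spend, tax) in shares.items():
    gap = spend - tax
    print(f"{country}: gap = {gap:.2f} ({gap / spend:.0%} of subnational spending)")
```

Even with these rough numbers, the contrast is clear: Mexico's subnational governments spend heavily but finance the great bulk of it through transfers, while Germany's mostly finance their own spending.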

Here's a figure showing the change in these patterns across countries from 1995-2017. 

Being near the crossing point of the horizontal and vertical lines means relatively little change: for example, the US had a small rise in the share of spending happening at the subnational level and a small drop in the share of revenues raised at the subnational level. 

Some countries with a big rise in the share of spending happening at the subnational level include Spain (ESP), Belgium, and Sweden. Some countries with a big rise in the share of subnational taxes collected include Spain, Belgium, and Italy. Clearly, Spain stands out as a country that has been decentralizing both government revenue and spending. Conversely, Denmark (DNK) stands out as a country that has been decentralizing government spending, but centralizing the collection of tax revenue. Hungary and the Netherlands stand out as countries that have moved toward centralizing their spending, and Hungary in particular seems to be increasing subnational taxes while decreasing subnational spending. 

What are the key tradeoffs here?  Forman, Dougherty, and Blöchliger write (citations omitted): 
Fiscal federalism refers to the distribution of taxation and spending powers across levels of government. Through decentralisation, governments can bring public services closer to households and firms, allowing better adaptation to local preferences. However, decentralisation can also make intergovernmental fiscal frameworks more complex and risk reinforcing interregional inequality unless properly designed. Accordingly, several important trade-offs emerge from the devolution of tax and spending powers. ... 
For example:  
[D]ecentralised fiscal frameworks allow for catering to local preferences and needs, while more centralised frameworks help reap the benefits of scale. Another key trade-off derives from the effect of decentralisation on the cost of information to different levels of government. While greater decentralisation implies that sub-national governments can access more information about the needs of a constituency at lower cost, it simultaneously increases the informational distance between central and sub-national government. In turn, this may make information more costly from the perspective of the central government, impeding its co-ordination and monitoring functions.
Decentralisation could also engender a costly misalignment of incentives. For example, a “common pool” problem may arise when decentralisation narrows the sub-national revenue base and raises the vertical fiscal gap. In this case, the necessary reliance on revenue sharing with central government to ensure SCG [subcentral government] fiscal capacity may also distort the cost/benefit analysis of sub-national governments—particularly in situations where an SCG realises a payoff without bearing the entirety of the associated cost. Rigid arrangements that entrench fiscal dependence on the central government may drive SCGs to manipulate tax-sharing agreements in order to increase their share while undermining their motivation to cultivate the local tax base. Therefore, the possible efficiency and equity gains from decentralisation are closely linked to mitigating the pitfalls of poorly designed revenue sharing.
  What does this mean in practical terms? Their survey of the cross-country research has a number of intriguing findings: 
OECD research has found a broadly positive relationship between revenue decentralisation and growth, with spending decentralisation demonstrating a weaker effect ...

[D]ecentralisation appears to reduce the gap between high and middle-income households but may leave low incomes behind, especially where jurisdictions have large tax autonomy ...

In healthcare, research suggests costs fall and life expectancy rises with moderate decentralisation, but the opposite effects hold once decentralisation becomes excessive (Dougherty et al., 2019[11]). With respect to educational attainment, Lastra-Anadón and Mukherjee (2019[27]) find that a 10 percentage point increase in the sub-national revenue share improves PISA scores by 6 percentage points ...

Decentralisation has also been linked to greater public investment, with a 10% point increase in decentralisation (as measured by both SCG spending and revenue share of government total) “lifting the share of public investment in total government spending from around 3% to more than 4% on average”. The investment driven by decentralisation appears to accrue principally to soft infrastructure, that is human capital as measured by education.
In the US version of fiscal federalism, state and local governments face constraints on their borrowing, while the federal government does not. When a disruption like a national recession or pandemic calls for a surge of government borrowing, it is thus natural for subnational governments to turn to the US federal government for support. However, it's worth remembering that in more normal times, having state and local governments bear substantial responsibility for their own tax and spending levels can have real benefits for accountability and government services. 
 

Tuesday, May 26, 2020

Will Telecommuting Stick?

Responses to the pandemic have shifted many patterns: online education for both K-12 and higher ed, online health care consultations, online business meetings, and telecommuting to jobs. It will be interesting to see whether these shifts are only temporary, or whether they are the first step to a more permanent shift. Here are some bits and pieces of evidence.

The word "telecommuting" entered public discourse in the early 1970s. A NASA engineer named Jack Nilles, who was in fact working remotely himself, is usually credited with coining the word. He was also the lead author of a 1973 book looking more closely at the idea: The Telecommunications-Transportation Tradeoff: Options for Tomorrow.

Before the pandemic, the share of US workers who telecommuted on a regular basis had been creeping up slowly over time, and had exceeded 5% of the workforce. From one perspective, this isn't a huge number. From another perspective, it suggests that a lot of workers and employers had accumulated some experience with telecommuting. And for the country as a whole, the number of telecommuters already exceeded the number taking mass transit on a regular basis.

Of course, the pandemic changed telecommuting, along with so much else. A Gallup poll found that in the last two weeks of March 2020, the share of US workers who had ever worked remotely doubled:
[Gallup line graph: the percentage of U.S. workers saying they are working remotely doubled to 62% from mid-March to early April 2020.]


There's anecdotal evidence suggesting that for big companies, the shift to telecommuting may be substantial and lasting. For example, a New York Times story suggests "Manhattan Faces a Reckoning if Working From Home Becomes the Norm" (by Matthew Haag, May 12, 2020). The article points out that just three companies--Barclays, JP Morgan Chase and Morgan Stanley--employ more than 20,000 employees in Manhattan and "they lease more than 10 million square feet in New York — roughly all the office space in downtown Nashville."
Before the coronavirus crisis, three of New York City’s largest commercial tenants — Barclays, JP Morgan Chase and Morgan Stanley — had tens of thousands of workers in towers across Manhattan. Now, as the city wrestles with when and how to reopen, executives at all three firms have decided that it is highly unlikely that all their workers will ever return to those buildings. The research firm Nielsen has arrived at a similar conclusion. Even after the crisis has passed, its 3,000 workers in the city will no longer need to be in the office full-time and can instead work from home most of the week.
Manhattan has the largest business district in the country, and its office towers have long been a symbol of the city’s global dominance. With hundreds of thousands of office workers, the commercial tenants have given rise to a vast ecosystem, from public transit to restaurants to shops. They have also funneled huge amounts of taxes into state and city coffers. But now, as the pandemic eases its grip, ...  they are now wondering whether it’s worth continuing to spend as much money on Manhattan’s exorbitant commercial rents. They are also mindful that public health considerations might make the packed workplaces of the recent past less viable. ...
Lots of other prominent firms, like Twitter and Facebook, are also announcing that work-from-home will be available to many more employees in the future. 

With just a bit of imagination, one can envisage a future where corporate offices have a lot less floorspace. There would be rooms for team meetings, and a number of cubicles or small offices used by whoever was physically present that day. But the idea that most workers have a desk or cubicle or office reserved for them would fade away. One can also imagine a corresponding shift on the residential side of the market. There might be less demand for living near concentrations of urban jobs, and more demand for living near what has been viewed as a vacation destination. (For example, interest in real estate near Lake Tahoe has perked up.) When it comes to choosing a home, more people might go beyond counting bedrooms and bathrooms and noting what the countertops are made of, and also look for a place that has a nook or a room designed as an at-home workspace.

But what actual evidence bears on the likelihood that such a shift will occur? I'm aware of only one previous example of a sudden shock that led to a very large short-run increase in telecommuting: an earthquake in New Zealand in 2011. Erin Brown explains in an article in Knowable magazine ("Could Covid-19 usher in a new era of working from home?" April 30, 2020).
When a magnitude 6.3 earthquake struck Christchurch, New Zealand, on February 22, 2011, the capital city’s central business district was leveled — and hundreds of essential government workers suddenly found themselves working from home, scrambling to figure out how to get their jobs done without access to the office. Some encountered technical difficulties, others had trouble managing teams. But most found the pros outweighed the cons, and agencies held on to remote work options.
“It was immediate telework,” says Kate Lister, president of Global Workplace Analytics, a consulting firm near San Diego that helps companies set up work-from-home policies. “And once it was over, they did not go back.”
On the other side, Brown also points out that not all past experiences with telecommuting have been positive, from either the employer's or the employee's point of view. Yahoo! and Best Buy both had work-at-home policies that they abolished in 2013. More recently, before the pandemic, the Trump administration had been reducing telework options for federal employees. Brown writes:
A 2017 look at research on alternative work arrangements, including remote work, in the Annual Review of Organizational Psychology and Organizational Behavior identified similar benefits. It also documented challenges for remote workers, including feeling lonely, isolated or not respected by colleagues; an increase in work-family conflict due to longer hours and blurred boundaries; and, in some cases, a tendency among workers who are encouraged to maintain work-life boundaries to “be less likely to extend themselves in crunch times, possibly increasing the workload of non-telecommuting coworkers.”
Katherine Guyot and Isabel V. Sawhill offer an essay with links to much of the recent evidence in  "Telecommuting will likely continue long after the pandemic" (Brookings Institution,  April 6, 2020). They write:
[A] recent paper estimates that only a third of jobs can be done entirely from home. ... Already, nearly one in five chief financial officers surveyed last week said they planned to keep at least 20% of their workforce working remotely to cut costs. ... Cutting commuting is good both for environmental reasons and because it is one of the least enjoyable activities that many adults engage in on a daily basis. ...
It can be a problem when some members of an organization or a team can telework and others cannot. Having more coworkers who telework can result in lower performance, higher absenteeism, and higher turnover among those who do not telework, particularly if team members have very limited face-to-face time. This suggests that telework may create some additional work for onsite workers (for instance, if they have to serve as liaisons for their teleworking colleagues), or that social interaction at work is important for morale. ...
A new study of employees at a U.S. technology services company found that extensive telecommuting is associated with fewer promotions and lower salary growth, but that telecommuters who have face-to-face time with managers or who perform supplemental work outside of normal hours have better outcomes. Supplemental work signals dedication to the job but also blurs the boundary between work and home life, contributing to pressure to be “always on.”
Of course, extrapolating from a pandemic back to (relative) normality is a dicey business. It's plausible that the enormous advances in telecommunications technology would have led to a gradual increase in telecommuting over time. Maybe it will turn out that the pandemic just gave telecommuting a push to move a little faster in the direction it was already headed. Here, I'll finish with a few thoughts on why one might doubt that a seismic shift in telecommuting has just occurred. 

1) For many high-income workers, the option to telework can feel like a way to get some focused work time and also to improve the work-life balance. But if telework is imposed by the employer, and begins to include an expectation of being available for work 24/7, and perhaps especially if it is imposed on low-income workers, the feeling may be quite different.  More generally, there may be some workers who are happier heading off to work in an office, with a separation between work and home life. 

2) In the pandemic, firms moved people with existing and well-defined jobs into telecommuting for a time. But firms need to evolve over time. Job responsibilities and jobs themselves change. How will hiring and training of new employees happen? How will team-based projects be pursued?  How will new directions for the company be conveyed to teleworking employees? How will work be evaluated? The loyalty that workers feel to their employer as unemployment soars in a pandemic may be rather different than the loyalty they feel after working at home for months or years. 

3) One of the fundamental questions of urban economics is why economic activity tends to be clumped together in metro areas, and often clumped together in certain parts of urban areas, rather than being spread out more evenly across the landscape. The broad answer is that there are "economies of agglomeration," which is a fancy way of saying that workers who are physically located near each other tend to be more productive, as they share ideas and goals and motivate and interact with each other. There is a large body of empirical evidence on the concentration of economic activity, as well as high productivity and innovation, within specific locations. My suspicion is that the wise and career-oriented telecommuter will still want to find a way to make regular in-person appearances at these locations.

Monday, May 25, 2020

Interview with Larry Summers: China, Debt, Pandemic, and More

Irwin Stelzer and Jeffrey Gedmin have a wide-ranging interview with Lawrence Summers in The American Interest (May 22, 2020, "How to Fix Globalization—for Detroit, Not Davos"). As always, Summers is his habitually and incorrigibly interesting and provocative self. Here are a few of the many quotable remarks. 

China
In general, economic thinking has privileged efficiency over resilience, and it has been insufficiently concerned with the big downsides of efficiency. Going forward we will need more emphasis on “just in case” even at some cost in terms of “just in time.” More broadly our economic strategy will need to put less emphasis on short-term commercial advantage and pay more attention to long-run strategic advantage. ...
At the broadest level, we need to craft a relationship with China from the principles of mutual respect and strategic reassurance, with rather less of the feigned affection that there has been in the past. We are not partners. We are not really friends. We are entities that find ourselves on the same small lifeboat in turbulent waters a long way from shore. We need to be pulling in unison if things are to work for either of us. If we can respect each other’s roles, respect our very substantial differences, confine our spheres of negotiation to those areas that are most important for cooperation, and represent the most fundamental interests of our societies, we can have a more successful co-evolution than we have had in recent years. ...
Attitudes on Globalization
Someone put it to me this way: First, we said that you are going to lose your job, but it was okay because when you got your new one, you were going to have higher wages thanks to lower prices because of international trade. Then we said that your company was going to move your job overseas, but it was really necessary because if we didn’t do that, then your company was going to be less competitive. Now we’re saying that we have to cut the taxes on those companies and cut the calculus class from your kid’s high school, because otherwise we won’t be able to attract companies to the United States, and you have to pay higher taxes and live with fewer services. At a certain point, people say, “This whole global thing doesn’t work for me,” and they have a point.
So we need a global agenda that is about broad popular interests rather than about corporate freedom—that is, cooperation to assure that government purposes can be served and that global threats can be met. If we have an agenda like that, we can rebuild a constituency for global dialogue.
Government debt
The deepest truth about debt is that you can’t evaluate borrowing without knowing what it’s going to be used for. Borrowing to invest in ways that earn a higher return than the cost of borrowing, and provide the wherewithal for debt service with an excess leftover, is generally a good and sustainable thing. Borrowing to finance consumption, leaving no return to cover debt service, is generally an unsustainable and problematic thing. ...
I think we need to be very careful, with respect to the expectation that we now seem to be setting of having government cover all the losses associated with the COVID period. For the life of me, I cannot understand why grants should have been made to airlines to enable them to continue to function, rather than allowing their share values to be further depressed, and allowing those who would earn substantial premiums by taking risk on airline bonds to do so, accepting the consequences of an investment gone wrong.
Looking towards an economy that is going to be very different than the one we had before COVID, we cannot aspire to maintain every job or every enterprise with a compensation program indefinitely. So as I look at the 30 percent of GDP deficit that we are running in Quarters Three and Four of Fiscal 2020, I don’t think that can be sustained over a multi-year period.
Enforcing Existing Tax Laws for Those With High Incomes
We could raise well over a trillion dollars over the next decade by simply enforcing the tax law that we have against people with high incomes. Natasha Sarin and I made this case and generated a revenue estimate some time ago. If we just restored the IRS to its previous size, judged relative to the economy; if we moved past the massive injustice represented by the fact that you're more likely to get audited if you receive the earned income tax credit (EITC) than if you earn $300,000 a year or more; if we made plausible use of information technology and the IRS got to where the credit card companies were 20 years ago, in terms of information-technology matching; and if we required of those who make shelter investments the kind of regular reporting that we require of cleaning women, we would raise, by my estimate, over a trillion dollars [over ten years]. Former IRS Commissioner Charles Rossotti, who knows more about it than I do, thinks the figure is closer to $2 trillion. That's where we should start.
Coronavirus Priorities
The real crime is not that we miscalibrated on some economic versus public health trade-off. The real crime is that we have not succeeded in generating far greater quantities of testing, far greater mechanisms for those 40 million unemployed people to do contact tracing, far more availability of well-fitting, comfortable, and safe masks, and that we're under-investing in the development of new therapeutics and vaccines.
When something costs $10 to $15 billion a day, you need to make decisions in new ways. We should not be waiting to see which of two tests works best. We should be producing both of them. We should not wait for vaccines to be proven before we start producing them. We should be producing all the plausible candidates. Remember, one week earlier in moving through this is worth a hundred billion dollars: two months’ worth of the annual defense budget.
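Summers's back-of-the-envelope numbers can be checked directly. Here is a quick sketch: the $10 to $15 billion-a-day figure is his, while the roughly $700 billion annual US defense budget is my own assumption for the comparison.

```python
# Cost of the stalled economy, per Summers: $10-15 billion per day.
daily_low, daily_high = 10e9, 15e9

# Value of moving one week faster through the pandemic.
week_low, week_high = 7 * daily_low, 7 * daily_high
print(f"One week: ${week_low / 1e9:.0f}-{week_high / 1e9:.0f} billion")

# Assumed annual defense budget of ~$700 billion; two months' worth.
defense_annual = 700e9
two_months = defense_annual * 2 / 12
print(f"Two months of defense spending: ~${two_months / 1e9:.0f} billion")
```

Both calculations land near $100 billion, which is consistent with his claim that one week saved is worth about two months of defense spending.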

Friday, May 22, 2020

Interview with Joshua Angrist: Education Policy and Causality Questions

David A. Price interviews Joshua Angrist in Econ Focus (First Quarter 2020, Federal Reserve Bank of Richmond, pp. 18-22). Angrist is well-known for his creativity and diligence in thinking about research design: that is, don't just start by looking at a bunch of correlations between variables, but instead think about what you might be able to infer about causality from looking at the data in a specific way. A substantial share of his recent research has focused on education policy, and that's the main focus of the interview as well.

To get a sense of what "research design" means in this area, consider some examples. Imagine that you want to know whether a student does better from attending a public charter school. If the school is oversubscribed and holds a lottery (as often happens), then you can compare those attending the charter with those who applied but were not chosen in the lottery. Does being surrounded by high-quality peers help your education? You can look at students who were accepted to institutions like Harvard and MIT but chose not to attend, and compare them with the students who were accepted and did choose to attend. Of course, these kinds of comparisons have to be done with appropriate statistical care. But their results are much more plausibly interpreted as causal, not just as a set of correlations. Here are some comments from Angrist in the interview that caught my eye.
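The lottery comparison boils down to a difference in mean outcomes between lottery winners and losers among the pool of applicants; because the lottery is random, the two groups are comparable. A minimal sketch with entirely made-up data (the sample size, scores, and 50 percent win rate are all hypothetical):

```python
import random

random.seed(0)

# Hypothetical applicants to an oversubscribed charter school. Each has
# a fictional test-score outcome; the lottery randomizes who gets an
# offer, so winners and losers are comparable groups.
applicants = [{"won_lottery": random.random() < 0.5,
               "score": random.gauss(70, 10)} for _ in range(1000)]

winners = [a["score"] for a in applicants if a["won_lottery"]]
losers = [a["score"] for a in applicants if not a["won_lottery"]]

# The difference in means estimates the causal effect of a charter
# offer. Here it is zero by construction (scores were drawn without
# regard to the lottery), so the estimate should be near zero.
effect = sum(winners) / len(winners) - sum(losers) / len(losers)
print(f"Estimated effect of winning the lottery: {effect:.2f} points")
```

A real study would layer standard errors and adjustments for actual attendance on top of this, but the randomized comparison is the core of the design.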

Peer Effects in High School? 
I think people are easily fooled by peer effects. Parag, Atila Abdulkadiroglu, and I call it "the elite illusion." We made that the title of a paper. I think it's a pervasive phenomenon. You look at the Boston Latin School, or if you live in Northern Virginia, there's Thomas Jefferson High School for Science and Technology. And in New York, you have Brooklyn Tech and Bronx Science and Stuyvesant.
And so people say, "Look at those awesome children, look how well they did." Well, they wouldn't get into the selective school if they weren't awesome, but that's distinct from the question of whether there's a causal effect. When you actually drill down and do a credible comparison of students who are just above and just below the cutoff, you find out that elite performance is indeed illusory, an artifact of selection. The kids who go to those schools do well because they were already doing well when they got in, but there's no effect from exposure to higher-achieving peers.
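The comparison Angrist describes is a regression-discontinuity design in miniature. A toy sketch with synthetic data (the scores, outcomes, cutoff, and bandwidth are all hypothetical, and outcomes are constructed so that the school itself adds nothing):

```python
# Synthetic applicants: (entrance score, later outcome). Outcomes rise
# smoothly with ability; attending the selective school adds nothing.
applicants = [(score, 50 + 0.5 * score) for score in range(60, 101)]

cutoff = 80      # minimum score for admission
bandwidth = 3    # compare applicants within 3 points of the cutoff

# Naive comparison: all admitted vs. all rejected. Selection makes the
# school look great, because admitted students were already stronger.
admitted = [y for s, y in applicants if s >= cutoff]
rejected = [y for s, y in applicants if s < cutoff]
naive_gap = sum(admitted) / len(admitted) - sum(rejected) / len(rejected)

# Credible comparison: just above vs. just below the cutoff, where
# students are nearly identical. The apparent "effect" largely vanishes.
just_above = [y for s, y in applicants if cutoff <= s < cutoff + bandwidth]
just_below = [y for s, y in applicants if cutoff - bandwidth <= s < cutoff]
rd_gap = sum(just_above) / len(just_above) - sum(just_below) / len(just_below)

print(f"Naive gap: {naive_gap:.2f} points; gap at the cutoff: {rd_gap:.2f}")
```

With these made-up numbers the naive gap is 10.25 points, while the gap at the cutoff is only 1.50 (just the residual slope within the bandwidth), which is the elite illusion in one line of arithmetic. Real applications use local polynomial methods and robustness checks rather than a raw difference in means.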
How Much Does Attending a Selective College Matter? 
I teach undergrad and grad econometrics, and one of my favorite examples for teaching regression is a paper by Alan Krueger and Stacy Dale that looks at the effects of going to a more selective college. It turns out that if you got into MIT or Harvard, it actually doesn't matter where you go. Alan and Stacy showed that in two very clever, well-controlled studies. And Jack Mountjoy, in a paper with Brent Hickman, just replicated that for a much larger sample. There isn't any earnings advantage from going to a more selective school once you control for the selection bias. So there's also an elite illusion at the college level, which I think is more important to upper-income families, because they're desperate for their kids to go to the top schools. So desperate, in fact, that a few commit criminal fraud to get their kids into more selective schools.
Charter schools and takeovers
The most common charter model is what we call a startup — somebody decides they want to start a charter school and admits kids by lottery. But an alternative model is the takeover. Every state has an accountability system with standards that require schools to meet certain criteria. When they fail to meet these standards, they're at risk of intervention by the state. Some states, including Massachusetts, have an intervention that involves the public school essentially being taken over by an outside operator. Boston had takeovers. And New Orleans is actually an all-charter district now, but it moved to that as individual schools were being taken over by charter operators.
That's good for research, because you can look at schools that are struggling just as much but are not taken over or are not yet taken over and use them as a counterfactual. The reason that's important is that people say kids who apply to the startups are self-selected and so they're sort of primed to gain from the charter treatment. But the way the takeover model works in Boston and New Orleans is that the outside operator inherits not only the building, but also the existing enrollment. So they can't cherry-pick applicants. What we show is that successful charter management organizations that run successful startups also succeed in takeover scenarios.
Angrist has developed the knack of looking for these ways of interpreting a given data set, sometimes called "natural experiments." For those trying to find such examples as a basis for their own research, he says: 
One thing I learned is that empiricists should work on stuff that's nearby. Then you can have some visibility into what's unique and try to get on to projects that other people can't do. This is particularly true for empiricists who are working outside the United States. There's a temptation to just mimic whatever the Americans and British are doing. I think a better strategy is to say, "Well, what's special and interesting about where I am?"
Finally, as a bit of a side note, I was intrigued by Angrist's neutral-to-negative take on the potential for machine learning in econometrics:
I just wrote a paper about machine learning applications in labor economics with my former student Brigham Frandsen. Machine learning is a good example of a kind of empiricism that's running way ahead of theory. We have a fairly negative take on it. We show that a lot of machine learning tools that are very popular now, both in economics and in the wider world of data science, don't translate well to econometric applications and that some of our stalwarts — regression and two-stage least squares — are better. But that's an area of ongoing research, and it's rapidly evolving. There are plenty of questions there. Some of them are theoretical, and I won't be answering those questions, but some are practical: whether there's any value added from this new toolkit. So far, I'm skeptical.
Josh has written for the Journal of Economic Perspectives a few times; interested readers might want to follow up with his articles there.

Thursday, May 21, 2020

Reconsidering the "Washington Consensus" in the 21st Century

The "Washington consensus" has become a hissing and a byword over time. The usual implication is that free-market zealots in Washington, DC, told developing countries around the world that they would thrive if they followed free-market policies, but when developing countries tried out these policies, they were proven not to work. William Easterly, who has been a critic of the "Washington consensus" in the past, offers an update and some new thinking in "In Search of Reforms for Growth: New Stylized Facts on Policy and Growth Outcomes" (Cato Institute, Research Briefs #215, May 20, 2020). He summarizes some ideas from his NBER working paper of the same title (NBER Working Paper 26318, September 2019).

Before discussing what Easterly has to say, it's perhaps useful to review how the "Washington consensus" terminology emerged. The name traces back to a 1989 seminar in which John Williamson tried to write down what he saw as the main steps that policy-makers in Washington, DC, thought were appropriate for countries in Latin America facing a debt crisis. As Williamson wrote in the resulting essay published in 1990:
No statement about how to deal with the debt crisis in Latin America would be complete without a call for the debtors to fulfill their part of the proposed bargain by "setting their houses in order," "undertaking policy reforms," or "submitting to strong conditionality."
The question posed in this paper is what such phrases mean, and especially what they are generally interpreted as meaning in Washington. Thus the paper aims to set out what would be regarded in Washington as constituting a desirable set of economic policy reforms. ... The Washington of this paper is both the political Washington of Congress and senior members of the administration and the technocratic Washington of the international financial institutions, the economic agencies of the US government, the Federal Reserve Board, and the think tanks. ... Washington does not, of course, always practice what it preaches to foreigners.
Here's how Williamson summed up the 10 reforms he listed in a follow-up essay in 2004:
  1. Fiscal Discipline. This was in the context of a region where almost all countries had run large deficits that led to balance of payments crises and high inflation that hit mainly the poor because the rich could park their money abroad.
  2. Reordering Public Expenditure Priorities. This suggested switching expenditure in a progrowth and propoor way, from things like nonmerit subsidies to basic health and education and infrastructure. It did not call for all the burden of achieving fiscal discipline to be placed on expenditure cuts; on the contrary, the intention was to be strictly neutral about the desirable size of the public sector, an issue on which even a hopeless consensus-seeker like me did not imagine that the battle had been resolved with the end of history that was being promulgated at the time. 
  3. Tax Reform. The aim was a tax system that would combine a broad tax base with moderate marginal tax rates. 
  4. Liberalizing Interest Rates. In retrospect I wish I had formulated this in a broader way as financial liberalization, stressed that views differed on how fast it should be achieved, and—especially—recognized the importance of accompanying financial liberalization with prudential supervision. 
  5. A Competitive Exchange Rate. I fear I indulged in wishful thinking in asserting that there was a consensus in favor of ensuring that the exchange rate would be competitive, which pretty much implies an intermediate regime; in fact Washington was already beginning to edge toward the two-corner doctrine which holds that a country must either fix firmly or else it must float “cleanly”. 
  6. Trade Liberalization. I acknowledged that there was a difference of view about how fast trade should be liberalized, but everyone agreed that was the appropriate direction in which to move. 
  7. Liberalization of Inward Foreign Direct Investment. I specifically did not include comprehensive capital account liberalization, because I did not believe that did or should command a consensus in Washington. 
  8. Privatization. As noted already, this was the one area in which what originated as a neoliberal idea had won broad acceptance. We have since been made very conscious that it matters a lot how privatization is done: it can be a highly corrupt process that transfers assets to a privileged elite for a fraction of their true value, but the evidence is that it brings benefits (especially in terms of improved service coverage) when done properly, and the privatized enterprise either sells into a competitive market or is properly regulated. 
  9. Deregulation. This focused specifically on easing barriers to entry and exit, not on abolishing regulations designed for safety or environmental reasons, or to govern prices in a non-competitive industry. 
  10. Property Rights. This was primarily about providing the informal sector with the ability to gain property rights at acceptable cost (inspired by Hernando de Soto’s analysis). 
There are really two main sets of complaints about the "Washington consensus" recommendations. One set of complaints was that a united DC-centered policy establishment was telling countries around the world what to do in an overly detailed and intrusive way. The other was that the recommendations weren't showing meaningful results for improved economic growth in countries of Latin America, Africa, or elsewhere. As Easterly points out, these complaints were being voiced by the mid-1990s. 

The standard responses were that many of these countries had not actually adopted the list of 10 policy reforms. Moreover, the responses went, there is no instant-fix set of policies for raising economic growth, and these policies need to be maintained in place for years (or decades?) before their effects will be meaningful. And there the controversy (mostly) rested.

Easterly is (wisely) not seeking to refight the specific proposals of the Washington consensus. Instead, he is just pointing out some basic facts. The share of countries with extremely negative macroeconomic outcomes--like very high inflation, or very high black market premiums on the exchange rate for currency--diminished sharply in the 21st century, as compared to the 1980s and 1990s. Here are a couple of figures from Easterly's NBER working paper:


These kinds of figures provide a context for the 1990 Washington consensus: for example, in the late 1980s and early 1990s, when between 25% and 40% of all countries in the world had inflation rates greater than 40%, getting that wildfire under control had a high level of importance.

Easterly also points out that as these extremely undesirable outcomes diminished in the 1990s, growth across countries of Latin America and Africa improved in the 21st century. Easterly thus offers this gentle reconsideration of the Washington consensus policies: 
The new stylized facts seem most consistent with a position between complete dismissal and vindication of the Washington Consensus. ... Even critics of the Washington Consensus might agree that extreme ranges of inflation, black market premiums, overvaluation, negative real interest rates, and repression of trade were undesirable. ...
Despite these caveats, the new stylized facts are consistent with a more positive view of reform, compared to the previous consensus on doubting reform. The reform critics (including me) failed to emphasize the dangers of extreme policies in the previous reform literature or to note how common extreme policies were. Even if the reform movement was far from a complete shift to “free market policies,” it at least seems to have accomplished the elimination of the most extreme policy distortions of markets, which is associated with the revival of growth in African, Latin American, and other countries that had extreme policies. 

Wednesday, May 20, 2020

Is a Revolution in Biology-based Technology on the Way?

Sometimes, a person needs a change from feeling rotten about the pandemic and the economy. One needs a sense that, if not right away, the future holds some imaginative and exciting possibilities. A group at the McKinsey Global Institute--Michael Chui, Matthias Evers, James Manyika, Alice Zheng, and Travers Nisbet--has been working for about a year on their report: "The Bio Revolution: Innovations transforming economies, societies, and our lives" (May 2020). It's got a last-minute text box about COVID-19, emphasizing the speed with which biomedical research has been able to move into action in looking for vaccines and treatments. But the heart of the report is that the authors looked at the current state of biotech, and came up with a list of about 400 "cases that
are scientifically conceivable today and that could plausibly be commercialized by 2050. ... Over the next ten to 20 years, we estimate that these applications alone could have direct economic impact of between $2 trillion and $4 trillion globally per year."

For me, reports like this aren't about the economic projections, which are admittedly shaky, but rather are a way of emphasizing the importance of increasing national research and development efforts across a spectrum of technologies. As the authors point out, the collapsing costs of sequencing and editing genes are reshaping what's possible with biotech. Here are some of the possibilities they discuss.

When it comes to physical materials, the report notes that in the long run:
As much as 60 percent of the physical inputs to the global economy could, in principle, be produced biologically. Our analysis suggests that around one-third of these inputs are biological materials, such as wood, cotton, and animals bred for food. For these materials, innovations can improve upon existing production processes. For instance, squalene, a moisturizer used in skin-care products, is traditionally derived from shark liver oil and can now be produced more sustainably through fermentation of genetically engineered yeast. The remaining two-thirds are not biological materials—examples include plastics and aviation fuels—but could, in principle, be produced using innovative biological processes or be replaced with substitutes using bio innovations. For example, nylon is already being made using genetically engineered microorganisms instead of petrochemicals. To be clear, reaching the full potential to produce these inputs biologically is a long way off, but even modest progress toward it could transform supply and demand and economics of, and participants in, the provision of physical inputs.  ...
Biology has the potential in the future to determine what we eat, what we wear, the products we put on our skin, and the way we build our physical world. Significant potential exists to improve the characteristics of materials, reduce the emissions profile of manufacturing and processing, and shorten value chains. Fermentation, for centuries used to make bread and brew beer, is now being used to create fabrics such as artificial spider silk. Biology is increasingly being used to create novel materials that can raise quality, introduce entirely new capabilities, be biodegradable, and be produced in a way that generates significantly less carbon emissions. Mushroom roots rather than animal hide can be used to make leather. Plastics can be made with yeast instead of petrochemicals. ...
A significant share of materials developed through biological means are biodegradable and generate less carbon during manufacture and processing than traditional materials. New bioroutes are being developed to produce chemicals such as fertilizers and pesticides. ...
A deeper understanding of human genetics offers potential for improvements in health care, where the social benefits go well beyond higher economic output. The report estimates that there are 10,000 human diseases caused by a single gene.

A new wave of innovation is under way that includes cell, gene, RNA, and microbiome therapies to treat or prevent disease, innovations in reproductive medicine such as carrier screening, and improvements to drug development and delivery.  Many more options are being explored and becoming available to treat monogenic (caused by mutations in a single gene) diseases such as sickle cell anemia, polygenic diseases (caused by multiple genes) such as cardiovascular disease, and infectious diseases such as malaria. We estimate between 1 and 3 percent of the total global burden of disease could be reduced in the next ten to 20 years from these applications—roughly the equivalent of eliminating the global disease burden of lung cancer, breast cancer, and prostate cancer combined. Over time, if the full potential is captured, 45 percent of the global disease burden could be addressed using science that is conceivable today. ...
An estimated 700,000 deaths globally every year are the result of vector-borne infectious diseases. Until recently, controlling these infectious diseases by altering the genomes of the entire population of the vectors was considered difficult because the vectors reproduce in the wild and lose any genetic alteration within a few generations. However, with the advent of CRISPR, gene drives with close to 100 percent probability of transmission are within reach. This would offer a permanent solution to preventing most vector-borne diseases, including malaria, dengue fever, schistosomiasis, and Lyme disease.

The potential gains for agriculture as the global population heads toward 10 billion and higher seem pretty important, too.

Applications such as low-cost, high-throughput microarrays have vastly increased the amount of plant and animal sequencing data, enabling lower-cost artificial selection of desirable traits based on genetic markers in both plants and animals. This is known as marker-assisted breeding and is many times quicker than traditional selective breeding methods. In addition, in the 1990s, genetic engineering emerged commercially to improve the traits of plants (such as yields and input productivity) beyond traditional breeding.  Historically, the first wave of genetically engineered crops has been referred to as genetically modified organisms (GMOs); these are organisms with foreign (transgenic) genetic material introduced. Now, recent advances in genetic engineering (such as the emergence of CRISPR) have enabled highly specific cisgenic changes (using genes from sexually compatible plants) and intragenic changes (altering gene combinations and regulatory sequencings belonging to the recipient plant). Other innovations in this domain include using the microbiome of plants, soil, animals, and water to improve the quality and productivity of agricultural production; and the development of alternative proteins, including lab-grown meat, which could take pressure off the environment from traditional livestock and seafood.
More? Direct-to-consumer genetic testing is already a reality, but it will start to be combined with other goods and services based on your personal genetic profile: what vitamins and probiotics to take, meal services, cosmetics, teeth whitening, health monitoring, and more.

Pushing back against rising carbon emissions?

Genetically engineered plants can potentially store more CO2 for longer periods than their natural counterparts. Plants normally take in CO2 from the atmosphere and store carbon in their roots. The Harnessing Plant Initiative at the Salk Institute is using gene editing to create plants with deeper and more extensive root systems that can store more carbon than typical plants. These roots are also engineered to produce more suberin or cork, a naturally occurring carbon-rich substance found in roots that absorbs carbon, resists decomposition (which releases carbon back into the atmosphere), may enrich soil, and helps plants resist stress. When these plants die, they release less carbon back into the atmosphere than conventional plants. ...
Algae, present throughout the biosphere but particularly in marine and freshwater environments, are among the most efficient organisms for carbon sequestration and photosynthesis; they are generally considered photosynthetically more efficient than terrestrial plants. Potential uses of microalgal biomass after sequestration could include biodiesel production, fodder for livestock, and production of colorants and vitamins. Using microalgae to sequester carbon has a number of advantages. They do not require arable land and are capable of surviving well in places that other crop plants cannot inhabit, such as saline-alkaline water, land, and wastewater. Because microalgae are tiny, they can be placed virtually anywhere, including cities. They also grow rapidly. Most important, their CO2 fixation efficiency has been estimated at ten to 50 times higher than that of terrestrial plants.
Using biotech to remediate earlier environmental damage or aid recycling?
One example is genetically engineered microbes that can be used to break down waste and toxins, and could, for instance, be used to reclaim mines. Some headway is being made in using microbes to recycle textiles. Processing cotton, for instance, is highly resource-intensive, and dwindling resources are constraining the production of petroleum-based fibers such as acrylic, polyester, nylon, and spandex. There is a great deal of waste, with worn-out and damaged clothes often thrown away rather than repaired. Less than 1 percent of the material used to produce clothing is recycled into new clothing, representing a loss of more than $100 billion a year. Los Angeles–based Ambercycle has genetically engineered microbes to digest polymers from old textiles and convert them into polymers that can be spun into yarns. Engineered microbes can also assist in the treatment of wastewater. In the United States, drinking water and wastewater systems account for between 3 and 4 percent of energy use and emit more than 45 million tons of GHG a year. Microbes—also known as microbial fuel cells—can convert sewage into clean water as well as generate the electricity that powers the process.
What about longer-run possibilities, still very much under research, that might bear fruit out beyond 2050?
  • "Biobatteries are essentially fuel cells that use enzymes to produce electricity from sugar. Interest is growing in their ability to convert easily storable fuel found in everyday sugar into electricity and the potential energy density this would provide. At 596 ampere hours per kilogram, the density of sugar would be ten times that of current lithium-ion batteries."
  • "Biocomputers that employ biology to mimic silicon, including the use of DNA to store data, are being researched. DNA is about one million times denser than hard-disk storage; technically, one kilogram of DNA could store the entirety of the world’s data (as of 2016)."
  • Of course, if people are going to live in space or on other planets, biotech will be of central importance. 

If your ideas about the technologies of the future begin and end with faster computing power, you are not dreaming big enough.