Wednesday, July 15, 2020

Interview with Melissa Dell: Persistence Across History

Tyler Cowen interviews Melissa Dell, the most recent winner of the Clark medal (which "is awarded annually ... to that American economist under the age of forty who is judged to have made the most significant contribution to economic thought and knowledge"). Both audio and a transcript of the one-hour conversation are available. From the overview: 
Melissa joined Tyler to discuss what’s behind Vietnam’s economic performance, why persistence isn’t predictive, the benefits and drawbacks of state capacity, the differing economic legacies of forced labor in Indonesia and Peru, whether people like her should still be called a Rhodes scholar, if SATs are useful, the joys of long-distance running, why higher temps are bad for economic growth, how her grandmother cultivated her curiosity, her next project looking to unlock huge historical datasets, and more.
Here, I'll just mention a couple of broad points that caught my eye. Dell specializes in looking at how conditions at one point in time--say, being in an area that for a time had a strong centralized tax-collecting government--can have persistent effects on economic outcomes decades or even centuries later. For those skeptical of such effects, Dell argues that explaining, say, 10% of a big difference between two areas is a meaningful feat for social science. She says: 
I was presenting some work that I’d done on Mexico to a group of historians. And I think that historians have a very different approach than economists. They tend to focus in on a very narrow context. They might look at a specific village, and they want to explain a hundred percent of what was going on in that village in that time period. Whereas in this paper, I was looking at the impacts of the Mexican Revolution, which is a historical conflict in economic development. And this historian, who had studied it extensively and knows a ton, was saying, “Well, I kind of see what you’re saying, and that holds in this case, but what about this exception? And what about that exception?”

And my response was to say my partial R-squared, which is the percent of the variation that this regression explains, is 0.1, which means it’s explaining 10 percent of the variation in the data. And I think, you know, that’s pretty good because the world’s a complex place, so something that explains 10 percent of the variation is potentially a pretty big deal.

But that means there’s still 90 percent of the variation that’s explained by other things. And obviously, if you go down to the individual level, there’s even more variation there in the data to explain. So I think that in these cases where we see even 10 percent of the variation being explained by a historical variable, that’s actually really strong persistence. But there’s a huge scope for so many things to matter.

I’ll say the same thing when I teach an undergrad class about economic growth in history. We talk about the various explanations you can have: geography, different types of institutions, cultural factors. Well, there’s places in sub-Saharan Africa that are 40 times poorer than the US. When you have that kind of income differential, there’s just a massive amount of variation to explain.

Nathan Nunn’s work on slavery and the role that that plays in explaining Africa’s long-run underdevelopment — he gets pretty large coefficients, but they still leave a massive amount of difference to be explained by other things as well, because there’s such large income differences between poor places in the world and rich places. I think if persistence explains 10 percent of it, that’s a case where we see really strong persistence, and of course, there’s other cases where we don’t see much. So there’s plenty of room for everybody’s preferred theory of economic development to be important just because the differences are so huge.
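Dell's point about what a partial R-squared of 0.1 means can be illustrated with a toy regression. This is a minimal sketch with synthetic data, not drawn from her paper: the "historical" variable is constructed to account for roughly 10 percent of the outcome's variance, with everything else left as noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic "historical" explanatory variable and an outcome where it
# accounts for only part of the variation; the rest is everything else.
history = rng.normal(size=n)
outcome = 0.33 * history + rng.normal(size=n)

# R-squared from a one-variable OLS regression: the share of outcome
# variance explained by the historical variable.
slope, intercept = np.polyfit(history, outcome, 1)
residuals = outcome - (slope * history + intercept)
r_squared = 1 - residuals.var() / outcome.var()

print(round(r_squared, 2))  # roughly 0.1: "strong persistence" by Dell's standard
```

The coefficient 0.33 is chosen so that the explained share, 0.33² / (0.33² + 1), is about 0.1: a variable can be "really strong persistence" in Dell's sense while leaving 90 percent of the variation to other factors.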
Dell also discusses a project to organize historical data, like old newspapers, in ways that will make them available for empirical analysis.  She says: 
I have a couple of broad projects which are, in substance, both about unlocking data on a massive scale to answer questions that we haven’t been able to look at before. If you take historical data, whether it be tables or a compendia of biographies or newspapers, and you go and you put those into Amazon Textract or Google Cloud Vision, it will output complete garbage. It’s been very specifically geared towards specific things which are like single-column books and just does not do well with digitizing historical data on a large scale. So we’ve been really investing in methods in computer vision as well as in natural language processing to process the output so that we can take data, historical data, on a large scale. These datasets would be too large to ever digitize by hand. And we can get them into a format that can be used to analyze and answer lots of questions.

One example is historical newspapers. We have about 25 million page scans of front pages and editorial pages from newspapers across thousands and thousands of US communities. Newspapers tend to have a complex structure. They might have seven columns, and then there’s headlines, and there’s pictures, and there’s advertisements and captions. If you just put those into Google Cloud Vision, again, it will read it like a single-column book and give you total garbage. That means that the entire large literature using historical newspapers, unless it uses something like the New York Times or the Wall Street Journal that has been carefully digitized by a person sitting there and manually drawing boxes around the content, all you have are keywords.

You can see what words appear on the page, but you can’t put those words together into sentences or into paragraphs. And that means we can’t extract the sentiment. We don’t understand how people are talking about things in these communities. We see what they’re talking about, what words they use, but not how they’re talking about it.

So, by devising methods to automatically extract that data, it gives us a potential to do sentiment analysis, to understand, across different communities in the US, how people are talking about very specific events, whether it be about the Vietnam War, whether it be about the rise of scientific medicine, conspiracy theories — name anything you want, like how are people in local newspapers talking about this? Are they talking about it at all?

We can process the images. What sort of iconic images are appearing? Are they appearing? So I think it can unlock a ton of information about news.

We’re also applying these techniques to lots of firm-level and individual-level data from Japan, historically, to understand more about their economic development. We have annual data on like 40,000 Japanese firms and lots of their economic output. This is tables, very different than newspapers, but it’s a similar problem of extracting structure from data, working on methods to get all of that out, to look at a variety of questions about long-run development in Japan and how they were able to be so successful. 

Tuesday, July 14, 2020

Lower Tax Rates or Less Tax Enforcement?

Let's compare two hypothetical tax cuts. In the first tax cut, we decide which groups will pay lower rates, and we may have a dispute over what share of the tax cut should go to those with low incomes, or families with children, or as an incentive for job training or research and development or some other purpose. In the second tax cut, we announce that those who are willing to take the risk of breaking the tax laws can pay less, but everyone else will pay the same. 

I prefer the first form of tax cut, and I suspect I am not alone in that preference.  But by reducing funding to the IRS in the last decade or so, we are in fact choosing the second form of tax cut. The Congressional Budget Office lays out the evidence in "Trends in the Internal Revenue Service’s Funding and Enforcement" (July 2020). Here are some bullet-points to consider from CBO: 
  • In its most recent report on uncollected taxes, the IRS estimated that an average of $441 billion (16 percent) of the taxes owed annually between 2011 and 2013 was not paid in accordance with the law. Most of the unpaid taxes were the result of taxpayers’ underreporting their income. Through enforcement, the IRS collected an average of $60 billion of those unpaid taxes annually, reducing the gap between taxes owed and taxes paid in those years to $381 billion per year, on average.
  • The IRS’s appropriations have fallen by 20 percent in inflation-adjusted dollars since 2010, resulting in the elimination of 22 percent of its staff. The amount of funding and staff allocated to enforcement activities has declined by about 30 percent since 2010.
  • Since 2010, the IRS has done less to enforce tax laws. Between 2010 and 2018, the share of individual income tax returns it examined fell by 46 percent, and the share of corporate income tax returns it examined fell by 37 percent. The disruptions stemming from the 2020 coronavirus pandemic will further reduce the ability of the IRS to enforce tax laws.
  • CBO estimates that increasing the IRS’s funding for examinations and collections by $20 billion over 10 years would increase revenues by $61 billion and that increasing such funding by $40 billion over 10 years would increase revenues by $103 billion.
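The last bullet point implies a roughly three-to-one return on enforcement funding at the margin, with diminishing returns as funding rises. A quick check of the CBO's own numbers:

```python
# CBO estimates from the bullet points above:
# extra 10-year funding -> extra 10-year revenue.
funding_options = {20e9: 61e9, 40e9: 103e9}

for funding, revenue in funding_options.items():
    net_gain = revenue - funding
    print(f"${funding / 1e9:.0f}B of funding -> ${revenue / 1e9:.0f}B of revenue "
          f"(net ${net_gain / 1e9:.0f}B, {revenue / funding:.1f}x return)")

# Diminishing returns: the second $20 billion yields less than the first.
first_tranche = 61e9
second_tranche = 103e9 - 61e9
print(f"first $20B returns ${first_tranche / 1e9:.0f}B; "
      f"second $20B returns ${second_tranche / 1e9:.0f}B")
```

Even the second tranche of funding more than doubles its cost in recovered revenue, which is the arithmetic behind the "pays for itself" claim at the end of this post.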
Unsurprisingly, most Americans don't have a lot of room to fiddle with our taxes. Our employer reports our pay to the IRS; our bank reports our meager interest income; other parts of the financial industry report if we had any capital gains or other financial benefits in the year. However, those who receive income in forms not separately reported by third parties--like income for the proprietors of a business, or from royalties or rents--have much more ability to understate their income. 

Most of the drop in enforcement relates to a lower chance of close inspection of the tax returns of high-income individuals and large corporations. 



Notice that the comparisons given do not go back decades, but only about a decade.  Maybe I'm just not remembering clearly (always a possibility!), but it doesn't seem to me that the perennial complaints over the intrusiveness of IRS enforcement were higher than usual in 2010. I also don't remember any policy consensus that a reduction in tax enforcement would be a bipartisan policy choice over the decade following 2010. Raising IRS enforcement spending 20%, so it returns to 2010 levels, does not seem excessive or onerous. And although many government tax and spending policies claim to "pay for themselves," this one actually does so. 

Monday, July 13, 2020

The US Dollar in the Global Economy


The Committee on the Global Financial System (CGFS) takes stock in its report "US dollar funding: an international perspective" (June 2020). The two left-hand red bars in its summary figure show the US share of global trade and the US share of the global economy. The other blue bars show the role of the US dollar in cross-border loans, international debt securities, foreign exchange transaction volume, official foreign exchange reserves, invoicing of international trade, and payments made through the international network (mainly but not all banks) called SWIFT.  
The CGFS report goes into considerable detail on the role of the US dollar in each of these areas. But here's an overview of pluses and minuses for the world economy: 
Global economic and financial activity depends on the ability of US dollar funding to flow smoothly and efficiently between users. The broad international use of a dominant funding currency generates significant benefits to the global financial system, but also presents risks. Benefits arise from economies of scale and network effects, which reduce the costs of transferring capital and risks around the financial system. At the same time, financial globalisation, coupled with the dominant role of the US dollar in international markets, may have led to a more synchronised behaviour of actors in the global financial system, at least in part because many international investors and borrowers are exposed to the US dollar. As a consequence, it is possible that shocks stemming from US monetary policy, US credit conditions or general spikes in global risk aversion get transmitted across the globe. These dynamics increase the need for participants to manage the risk of a retrenchment in cross-border flows.
In short, having a currency that can be widely used around the global economy--whether directly or as a fallback whenever needed--is a huge benefit. But one tradeoff is that many players in global markets around the world are dependent on having access to a continuing supply of US dollars (say, to make payments or repay loans). This may not be a problem in many cases--for example, perhaps the party in question has a US dollar credit line at a big bank. But many other parties around the world may not have direct access to US dollars when needed. 

In addition, when someone who is in an economy that doesn't use US dollars promises to make payments in US dollars, there is always a danger that if exchange rates shift, that payment may become more difficult to make. 
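The exchange-rate risk can be made concrete with a hypothetical borrower (the numbers here are illustrative, not from the report), using the roughly 8% dollar appreciation of March 2020 as the shock:

```python
# Hypothetical borrower: owes a $1,000,000 payment, but earns revenue in
# a local currency trading at 20 per dollar when the loan is made.
usd_owed = 1_000_000
rate_at_borrowing = 20.0  # local currency units per US dollar
local_cost_before = usd_owed * rate_at_borrowing

# The dollar appreciates 8% against the local currency, as it did
# broadly against other currencies in March 2020.
rate_after = rate_at_borrowing * 1.08
local_cost_after = usd_owed * rate_after

# The same dollar debt now costs 8% more in local-currency terms,
# even though nothing about the borrower's business has changed.
print(local_cost_after - local_cost_before)  # about 1.6 million extra local units
```

The borrower's revenues are in local currency, so the debt burden rises exactly when, in a crisis, revenues are likely falling, which is why dollar appreciation and dollar funding stress tend to arrive together.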

And also in addition, if there was for whatever reason a shortage of US dollar financing for the global economy as a whole, the problems would hit in all kinds of locations and markets all at once. Because of the global dependence on US dollars, any actions of the  Federal Reserve or the US banking authorities can have outsized and unexpected effects on the rest of the global economy. As a policy response, the Federal Reserve has set up "swap lines" with a number of central banks around the world, where the Fed agrees in advance to swap US dollars for the currency of that central bank during a time of crisis, so that the other central bank, in turn, could make sure those US dollars were available in its own economy. 

Problems along these lines arose during the global financial crisis of 2007-2009, and again during the crisis in European sovereign debt markets in 2010. Although the main focus of this report is an overall perspective on international US dollar funding, it does include some discussion of how these issues erupted in March 2020 as concerns over COVID-19 spread. Financial and corporate actors around the world had an increased desire to hold US dollars, as a safety precaution in uncertain times. The foreign exchange value of the dollar appreciated about 8% in a couple of weeks. Those who had been planning to trade in US dollars or borrow in US dollars, around the world, found that it was more difficult and costly to do so. The report notes: 
The prospect of a severe economic downturn drove a significant increase in demand for US dollar liquidity. Many businesses around the globe, anticipating sharp declines in their revenues, sought to borrow funds (including US dollars) to meet upcoming expenses such as paying suppliers or servicing debts. US dollars were in particularly high demand given the dollar’s extensive international use in the invoicing of trade, short-term trade finance and long-term funding ... Faced with uncertainty about how large such needs would be, many firms, as a precaution, chose to draw on any source of US dollar funding they could obtain.
The activities of NBFIs [non-bank financial institutions] also appear to have contributed to strong demand for US dollar liquidity. In recent years, non-US insurers and pension funds have funded large positions in US dollar assets by borrowing US dollars on a hedged basis ... The appreciation of the US dollar meant that these NBFIs in some jurisdictions were required to make margin payments, potentially adding to demand for US dollar funding. ... At the same time, US dollar funding became much more difficult to obtain in global capital markets as suppliers of funding shifted into cash and very liquid assets. ...

Finally, EMEs [emerging market economies] that raise US dollar funding have faced particular strain. Over the past decade, corporations, banks and sovereigns in EMEs had issued large volumes of US dollar debt securities, partly owing to a shift away from bank-intermediated funding ... The pandemic has seen fund managers substantially shift their portfolios away from US dollar bonds issued by EME borrowers ...  At the same time, many EME governments and corporations have an increased demand for funding (across currencies), owing to fiscal expansions and sharply lower revenues, including from commodity exports. Together, these pressures have contributed to a spike in US dollar bond yields for EME sovereigns and corporations ...
The US Federal Reserve worked with central banks around the world to make sure that the flow of US dollar financing was hindered only in ways that allowed time to adjust, not harshly interrupted. With the widespread use of the US dollar around the world, and the interconnections of the world economy, the Fed has little choice but to accept some responsibility for the availability of US dollars not just in the US economy, but around the world. 


Friday, July 10, 2020

Some Background about Police Shootings

The Annals of the American Academy of Political and Social Science devoted its January 2020 issue to a set of 14 articles on the theme of   "Fatal Police Shootings: Patterns, Policy, and Prevention." I'll post a Table of Contents for the issue below. Here, I'll just note some of the lessons one might take away from a few of the papers in the issue.

Franklin Zimring lays out some useful background (citations omitted): 
Police shoot and kill about a thousand civilians each year, and other types of conflict and custodial force add more than one hundred other lives lost to the annual total death toll. This is a death toll far in excess of any other fully developed nation, and the existing empirical evidence suggests that at least half and perhaps as many as 80 percent of these killings are not necessary to safeguard police or protect other citizens from life-threatening force. ...

One reason why U.S. police kill so many civilians is that U.S. police themselves are vastly more likely than police in other rich nations to die from violent civilian attacks. In Great Britain or Germany, the number of police deaths from civilian attack most years is either one or zero. In the United States—four or five times larger—the death toll from civilian assaults is fifty times larger. And the reason for the larger danger to police is the proliferation of concealable handguns throughout the social spectrum. When police officers die from assault in Germany or England, the cause is usually a firearm, but firearms ownership is low, and concealed firearms are rare. There are, however, at least 60 million concealable handguns in the United States and the firearm is the cause of an officer’s death in 97.5 percent of intentional fatal assaults, an effective monopoly of life-threatening force even though more than 95 percent of all assaults against police and an even higher fraction of those said to cause injury are not gun related. ... 
A theme that runs loosely through a number of these essays is that police-citizen interactions can involve "tight coupling," which is organizational behavior jargon for an interrelated system with lots of stresses and little slack. A "tightly coupled" system is bad at dealing with unexpected shocks, which can cause catastrophic breakdowns. A situation where a police officer is feeling threatened and stressed, with a perceived need for immediate urgent action, is also a situation where racial prejudices about who poses a danger and what actions are justified in response more easily boil to the surface. 

An implication of this insight is that focusing just on the situations where a breakdown (in this case, a police shooting) occurs runs a risk of missing the point, which is that the system is fragile and prone to failure. Thus, Zimring points out that criminal prosecutions in cases of police shootings are extremely rare--indeed, so rare as to raise concerns that justice is not being done in many cases--but he also argues that while responding after-the-fact with prosecutions of police who kill someone can be a useful step in some cases, it misses the broader point. He writes:
One important problem in the governmental control of unnecessary police use of deadly force is the fact that police officers have been operating with near impunity when efforts are made by citizens or law enforcement to prosecute police officers for criminal misuse of their lethal weapons. The thousand or so killings of civilians by police officers in the United States each year have in recent history produced about one felony conviction of a uniformed officer per year. According to research by Philip Stinson of Bowling Green University, there were in the years 2000 to 2014 an average of 4.4 cases per year in the United States where police killings resulted in murder or manslaughter charges against one or more officers, and the prospects for obtaining felony conviction in these cases were low. The odds of a death producing a felony conviction were close to one in one thousand. ...

If the high death rates generated by police activity in the United States were for the most part the result of blameworthy activity by a few bad cops, then criminal law would make sense as a primary control strategy. But the problems are a mix of ineffective administrative controls, vague regulations, and the absence of administrative policy analyses and incentives for reducing death rates. It is hard to pin 100 percent of the blame for this mess on one or two officers. ... The critical problem with reform priorities in the first years after Ferguson, Missouri, was the exclusive emphasis on criminal prosecutions and criminal prosecutors. Ineffective police administrators—and the vague and permissive nonspecificity of their deadly force standards—have been unjustly spared in the reexamination of why the epidemic of civilian deaths is a chronic part of our national experience.
So what is to be done to adjust the system of policing? There are a number of proposals for improving police performance across-the-board, including a hoped-for reduction in the number of shootings. But the evidence in support of the efficacy of these steps is somewhere between weak and nonexistent. Robin S. Engel, Hannah D. McManus, and Gabrielle T. Isaza write: "Of the litany of recommendations believed to reduce police shootings, five have garnered widespread support: body-worn cameras, de-escalation training, implicit bias training, early intervention systems, and civilian oversight. These highly endorsed interventions, however, are not supported by a strong body of empirical evidence that demonstrates their effectiveness." 

They review the partial and limited evidence on these policies. They point out that when it comes to public policy, it isn't always possible or desirable to wait for years of study to be sure that something works. As the social scientists say, absence of evidence is not evidence of absence. Instead, jurisdictions that are trying these policies should also be trying to couple new policies with rigorous evaluation. They describe the experience of the University of Cincinnati Department of Public Safety, which works closely with the Cincinnati police: 
This is the approach we used to facilitate the reform efforts within the UCPD. Our first step was to redesign data collection systems to include the data necessary to evaluate the impact of our work. Our executive team modified existing data collection processes and also mandated the collection of new data. Changes in data collection instruments and practices resulted in new data generated during traffic and pedestrian stops, during the citizen complaint process, through the review and cataloging of BWC footage, during potential use-of-force encounters (e.g., when officers draw their Tasers or firearms but do not deploy them), along with multiple citizen and officer surveys. Each of these data collection changes required an accompanying change in policy, training, and supervisory oversight to ensure that the data were being properly collected and used. The UCPD is now in a better position to test specific propositions about the effectiveness of our own reform efforts.
What other factors might matter? Greg Ridgeway writes in his essay: 
Using data from the New York City Police Department (NYPD) and the Major Cities’ Chiefs Association (MCCA), the analysis finds that police officers who join the NYPD later in their careers have a lower shooting risk: for each additional year of their recruitment age, the odds of being shooters declines by 10 percent. Both officer race and prior problem behavior (e.g., losing a firearm, crashing a department vehicle) predict up to three times greater odds of shooting, yet officers who made numerous misdemeanor arrests were four times less likely to shoot.
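Ridgeway's "odds ... declines by 10 percent" per year of recruitment age is a statement about odds ratios, presumably from a logistic-style model, and the effect compounds multiplicatively across years. A sketch of how that works, with a purely hypothetical baseline (the 0.05 odds figure is assumed for illustration, not taken from his essay):

```python
# Hypothetical baseline: suppose officers recruited at age 21 have odds
# p / (1 - p) of being involved in a shooting equal to 0.05.
baseline_odds = 0.05
odds_ratio_per_year = 0.90  # Ridgeway: odds fall 10% per year of recruitment age

def shooting_odds(recruit_age, base_age=21):
    """Odds under a multiplicative (logistic-style) model of recruitment age."""
    return baseline_odds * odds_ratio_per_year ** (recruit_age - base_age)

for age in (21, 25, 30):
    odds = shooting_odds(age)
    prob = odds / (1 + odds)  # convert odds back to a probability
    print(f"recruited at {age}: odds {odds:.4f}, probability {prob:.3f}")
```

Note that "10 percent lower odds per year" compounds: an officer recruited nine years later has odds of about 0.9⁹ ≈ 0.39 times the baseline, not 0.1 lower overall.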
Laurie O. Robinson adds: 
When President Obama asked my White House Task Force cochair, Chuck Ramsey, and me if there was one area we would have delved into if given more time, we said that area was recruitment. American policing in the future will be shaped by the men and women now coming into the police academies, yet at a time when there are calls for advancing a “guardian” culture in policing, many training academies are still organized as military-style boot camps emphasizing a “warrior” approach ... 
Robinson also notes that there have been lots of changes in use-of-force policies in major police departments. As she writes: 
Larger police agencies are, in fact, taking steps to revise their use of force policies, and it is having an impact. According to a survey of forty-seven of the largest law enforcement agencies in the United States from 2015 to 2017 conducted by the Major Cities Chiefs Association (MCCA) and the National Police Foundation, 39 percent of the departments changed their use of force policies and revised their training to incorporate de-escalation and beef up scenario-based training approaches. Significantly, officer-involved shootings during this period dropped by 21 percent in the agencies surveyed ...
The editor of the volume, Lawrence W. Sherman, suggests in an essay near the close of the volume that there are three proposals "that seem to have the greatest chance of winning a political consensus, and then winning implementation." He writes: 
These proposals are
  1. to empower police to seize guns without a court order, as may appear necessary to them in a “split-second decision”;
  2. to develop the core tactics underlying systems that seek to reduce “tight coupling” that creates “split-second decisions” and leave too little time to save lives; and
  3. to equip police with more powerful first aid strategies, from hi-tech bandages in every police car to policies enabling police to “scoop and run” with every shooting or stabbing victim.
But ultimately, a fundamental problem is that there are something like 18,000 police departments across the United States, and when a police shooting occurs, the US system of government often assumes that the same local law enforcement mechanisms that include the police in a central role will also be able to investigate the police. It's not a surprise that this often doesn't work well. In some cases, the state steps in, but as Zimring points out: "The unit of government that maintains authority in many other criminal justice operations—the state level—usually has no concern with and little statutory authority about policing."

Thus, Zimring suggests that there could be a national-level Office of Police Conduct, which could serve as a clearinghouse for complaints, reports, and information. He writes: 
There is also one important foreign model of a national fact-gathering institution that could also be incorporated into the U.S. government’s Department of Justice, perhaps in the civil rights division. Police departments in England and Wales have decentralized administrations, not unlike the United States. But the United Kingdom also created an Independent Office for Police Conduct (formerly known as the Independent Police Complaints Commission) that has become a statistical and analysis resource that is worthy of emulation on this side of the Atlantic ... 

Thursday, July 9, 2020

The Subway Map View of US Mortality and Health

If the US had a national goal of improving health, it would quite possibly take aggressive action to reduce current spending on health care, and instead use those funds to address social factors that affect health. Donald M. Berwick makes this case in his short essay, "The Moral Determinants of Health" (Journal of the American Medical Association, June 12, 2020). Berwick writes (footnotes omitted): 
Except for a few clinical preventive services, most hospitals and physician offices are repair shops, trying to correct the damage of causes collectively denoted “social determinants of health.” Marmot has summarized these in 6 categories: conditions of birth and early childhood, education, work, the social circumstances of elders, a collection of elements of community resilience (such as transportation, housing, security, and a sense of community self-efficacy), and, cross-cutting all, what he calls “fairness,” which generally amounts to a sufficient redistribution of wealth and income to ensure social and economic security and basic equity. ...

The power of these societal factors is enormous compared with the power of health care to counteract them. One common metaphor for social and health disparities is the “subway map” view of life expectancy, showing the expected life span of people who reside in the neighborhood of a train or subway stop. From midtown Manhattan to the South Bronx in New York City, life expectancy declines by 10 years: 6 months for every minute on the subway. Between the Chicago Loop and west side of the city, the difference in life expectancy is 16 years. At a population level, no existing or conceivable medical intervention comes within an order of magnitude of the effect of place on health. ...

How do humans invest in their own vitality and longevity? The answer seems illogical. In wealthy nations, science points to social causes, but most economic investments are nowhere near those causes. Vast, expensive repair shops (such as medical centers and emergency services) are hard at work, but minimal facilities are available to prevent the damage. In the US at the moment, 40 million people are hungry, almost 600 000 are homeless, 2.3 million are in prisons and jails with minimal health services (70% of whom experience mental illness or substance abuse), 40 million live in poverty, 40% of elders live in loneliness, and public transport in cities is decaying. ...

Decades of research on the true causes of ill health, a long series of pedigreed reports, and voices of public health advocacy have not changed this underinvestment in actual human well-being. Two possible sources of funds seem logically possible: either (a) raise taxes to allow governments to improve social determinants, or (b) shift some substantial fraction of health expenditures from an overbuilt, high-priced, wasteful, and frankly confiscatory system of hospitals and specialty care toward addressing social determinants instead. Either is logically possible, but neither is politically possible, at least not so far.
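Berwick's "6 months for every minute on the subway" figure is simple arithmetic on the 10-year life expectancy gap. The ride length is not stated in his essay; a roughly 20-minute midtown-to-South Bronx trip is the assumption that makes the numbers work:

```python
# Back-of-the-envelope check of Berwick's "6 months per minute" figure.
# Assumed: the midtown Manhattan-to-South Bronx subway ride takes about
# 20 minutes (the ride length is not stated in the essay).
life_expectancy_gap_years = 10
ride_minutes = 20

months_per_minute = life_expectancy_gap_years * 12 / ride_minutes
print(months_per_minute)  # 6.0 months of life expectancy per subway minute
```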
Here is one of the 21 "subway maps" of life expectancy in different areas of the United States from researchers at Virginia Commonwealth University ("Mapping Life Expectancy," September 26, 2016), this one using data from Chicago. 
[Figure: map of life expectancy by Chicago neighborhood]

Health care spending is headed for one-fifth of total GDP, and there's substantial reason to doubt that it is improving health by enough to justify that bill. For example, here's a figure from Our World in Data showing the shift in health care spending per person over time (horizontal axis) and the change in life expectancy over time (vertical axis). The US is clearly on a different path from other high-income countries. 
Making sure people have access to the kind of health care with a high impact on health seems like a valuable social goal. But if the overall social goal is improving health, not just feeding the health care industry, finding ways to transfer funds away from health care toward other social needs that affect health may be more important than health insurance for all. For a previous post on the need for spending on programs "upstream" of medical care, see "U.S. Health Care: The Case for Going Upstream" (March 15, 2017). 

Tuesday, July 7, 2020

Marketable Pollution Permits and Medieval Indulgences

Eugene McCarthy died in 2005, and last held public office in 1971, so I suspect he is not well-known among the under-50 crowd. But he was a grand old man of Democratic politics: a Congressman from Minnesota from 1949-59, a Senator from 1959-1971, someone who got some consideration as LBJ's vice-presidential pick in 1964, and someone who made a credible run for the Democratic presidential nomination in 1968, followed by less credible presidential runs in 1972 and 1976. His 1968 campaign left us with the slogan "Get Clean for Gene," referring to how a number of his young adult campaign volunteers cut their hair and beards so that they would be less likely to alienate undecided voters when going door-to-door. For the tone of the times, you may prefer listening to the Peter, Paul and Mary campaign song: "Eugene McCarthy for President (If You Love Your Country)."

I pass along this background to emphasize that when McCarthy wrote an essay in 1990 for The New Republic, analogizing marketable pollution permits as a tool for reducing sulfur dioxide emissions to the sale of indulgences by the Catholic Church in the Middle Ages, his views got some attention (Eugene J. McCarthy, "Pollution Absolution," The New Republic, October 29, 1990, p. 9). McCarthy wrote: 
[T]he practice of giving or selling indulgences ... operated in the Middle Ages on the principle that some persons were better than they needed to be in order to escape temporal or purgatorial punishments, whereas many others, known as sinners, fell short. Under the terms of the granting of indulgences, credits built up by the good could be transferred to those who had fallen short, or even to those who anticipated falling short. The transfer could be gratuitous, it could be in answer to prayers and petitions, or it could be for money. The anticipation of credits for forgiveness of sin, according to the record, moved William of Aquitaine to establish the monastery of Cluny with the instruction that the monks pray continuously for his salvation while he went about his work of war, pillage, rapine, and other activities. This procedure is perfectly echoed in an amendment to the Senate's version of the Clean Air bill, which would allow, for example, the aerospace industry in California to buy from a local supervisory authority pollution rights in excess of its allotted amount and then allow the authority to use the money to subsidize or pay for pollution reduction somewhere else in the region, by some other person or company.
I had McCarthy's essay in mind when I sat down to write "Are Property Rights a Solution to Pollution?" for the special issue of PERC Reports (Summer 2020), commemorating 40 years of the PERC free-market environmentalism think tank in Bozeman, Montana. The essay may offer a useful overview for those not familiar with the broad picture of economic thinking about the environment: Alfred Marshall and the idea of externalities in 1890; Marshall's student A.C. Pigou and the idea that a Pigovian tax can offer a socially appropriate adjustment for externalities in 1920; Ronald Coase's classic 1960 essay and the idea of thinking about externalities in terms of property rights; early experiments by the Environmental Protection Agency in the 1970s with first allowing firms the flexibility to meet overall pollution-reduction goals in the way that seemed most cost-effective to the firm, and then allowing firms to trade pollution permits with each other; the use of marketable permits for reducing other pollutants like lead and sulfur dioxide (the latter after the 1990 legislation); and novel ways of using marketable pollution permits, including efforts to reduce water pollutants and carbon emissions. 

Here, I'll just add a couple of points not especially emphasized in my essay, and instead aimed at McCarthy's view that pollution is a sin to be shamed and punished, not an undesirable output from otherwise useful production that needs to be managed. 

One point is that marketable pollution permits have been proven to work, at least in certain settings and for certain purposes. McCarthy wrote in 1990: "It should perhaps be remembered, however, that the sale of indulgences did not serve the Church well—nor, in the long run, as the record shows, did it discourage sinners." In contrast, marketable pollution permits have now worked in a variety of settings as a more cost-effective way of reducing a range of pollutants. 

The other point is that those who have a reflexively negative reaction to marketable pollution permits may wish to reflect on the idea that a law limiting pollution is, in its own way, a property right to pollute. After all, such a law grants to firms a legal right--which can be viewed as a property right owned by the firm--to emit any amount of pollution up to the limit. However, with this legal-limit property right to pollute, a firm has no incentive to seek out innovative ways of reducing pollution below the legal limit. In contrast, a marketable pollution permit means that firms do have an incentive to seek out innovative and cost-effective ways of reducing pollution, because if they reduce pollution below the legal limit, they can sell those pollution permits to other firms (or in some cases, "bank" the permits for their own future use). Thus, in choosing between a legal rule to limit pollution emissions and a marketable permit system, the central issue is not whether firms are granted a legal property right to pollute--this happens in either case. The difference is that marketable permits unleash incentives to seek out ways of reducing pollution more quickly and cheaply. 
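The cost-saving logic can be made concrete with a toy numerical example. Everything below--the two firms, their quadratic abatement-cost functions, and the numbers--is invented purely for illustration; the sketch just shows that letting abatement shift toward the low-cost firm hits the same total emissions target more cheaply than a uniform limit:

```python
# Toy illustration of why tradable permits cut compliance costs.
# Two hypothetical firms each emit 100 tons; regulators want total
# emissions cut by 100 tons. Firm A abates cheaply, Firm B does not.

def cost_a(q):  # cost for firm A to abate q tons
    return 1.0 * q ** 2

def cost_b(q):  # cost for firm B to abate q tons
    return 3.0 * q ** 2

# Uniform legal limit: each firm must abate 50 tons itself.
uniform_cost = cost_a(50) + cost_b(50)

# Tradable permits: the same 100 tons of total abatement, but split
# wherever the combined cost is lowest. Brute-force the allocation.
trading_cost, a_share = min(
    (cost_a(q) + cost_b(100 - q), q) for q in range(0, 101)
)

print(f"Uniform limit cost: {uniform_cost:,.0f}")
print(f"With trading:       {trading_cost:,.0f} (firm A abates {a_share} tons)")
# Uniform limit costs 10,000; trading costs 7,500, with firm A doing
# 75 tons of the abatement because its marginal cost is lower.
```

The cheapest split is the one where the two firms' marginal abatement costs are equal, which is exactly the allocation that permit trading pushes toward: the high-cost firm buys permits rather than abating, and the low-cost firm abates extra and sells.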

Monday, July 6, 2020

An Audit Study of Discrimination in the Boston Rental Market

Audit studies are one of the most persuasive ways to show real-world discrimination. The idea is to come up with pairs of potential applicants or buyers, where each pair is given background information that makes them essentially identical--except for a difference in race. These pairs then enter the economy by taking actions like renting an apartment, applying for a job or a loan, or buying a car. In some audit studies, like applying for certain jobs, the study can be done without actual people, just by constructing resumes and social media links and seeing who gets a response at all and who gets invited for an actual interview. In these studies, the difference in race involves names or interests that can act as a signal of race to the person (or the software?) screening the applications. In other audit studies, actual pairs of people are matched up.  


In the Boston study, one additional factor was whether prospective renters said they were planning to use a voucher for subsidized housing, or whether they were just planning to pay the full market rate. The study involved 200 "testers," divided into groups of four with similar characteristics and backgrounds: "Specifically, the test coordinator created matched pairs who were demographically similar (i.e., cisgender, same sex, no visible disabilities, age) and assigned the testers similar characteristics like income, family size, and credit score." The 50 potential rentals were randomly selected from public lists. The testers were told to communicate with housing providers within the same fairly short time window, and to make the original communication in the same way (for example, by phone call or text). Within each group of four testers, two were black and two were white, and two said they were planning to pay full market rate while the other two said they planned to use a housing voucher.

With this background: 
The study measured a number of data points including:
  • whether testers were able to make appointments to see the properties;
  • how many units housing providers told testers about or showed them;
  • whether housing providers offered financial incentives;
  • whether housing providers made positive or negative comments about the housing units; and
  • whether housing providers offered testers an application. 
Here's a summary of some results: 
Results indicate that White market-rate testers--meaning White testers not using vouchers--were able to arrange to view apartments 80% of the time. Similarly situated Black market-rate testers seeking to view the same apartments were only able to visit the property 48% of the time. Testers who had vouchers, regardless of their race, were prevented from viewing apartments at very high rates. White voucher holders were able to view rental apartments only 12% of the time. Black voucher holders were able to view apartments they were interested in renting only 18% of the time. ... In addition, housing providers showed White market-rate testers twice as many apartment units as Black market-rate testers, and provided them with better service as measured by a number of different variables. The results also showed that testers who were offered a site visit by the housing provider received differential treatment at the visit based on race and voucher status. 
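As a rough sense of magnitude, a gap like 80% versus 48% is far too large to be plausible noise at this scale. The group size of roughly 50 tests per role below is my own inference from the study design (200 testers in four roles across 50 rentals), not a figure from the report, so treat this as a back-of-envelope check rather than the study's own statistics:

```python
import math

# Back-of-envelope two-proportion z-test on the quoted viewing rates.
# Assumed (not reported): about 50 tests per tester role.
n = 50
white_views = round(0.80 * n)   # 40 successful viewings
black_views = round(0.48 * n)   # 24 successful viewings

p1, p2 = white_views / n, black_views / n
pooled = (white_views + black_views) / (2 * n)
se = math.sqrt(pooled * (1 - pooled) * (2 / n))   # pooled standard error
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))        # two-sided p-value

print(f"z = {z:.2f}, two-sided p ~ {p_value:.4f}")
# z comes out around 3.3, i.e., a difference of this size in samples of
# this size would arise by chance well under 1% of the time.
```

Even if the true group sizes differ somewhat from this guess, the qualitative conclusion is robust: the black-white gap in viewing rates is statistically significant by a wide margin.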
Of course, no social science methodology is truly bulletproof. One might wonder, for example, if the sample size is large enough, or whether the matched pairs of rental applicants were really comparable, or if the problem of discrimination is especially severe in Boston rental markets. Because any single study can be questioned into exhaustion, one looks for other studies--and it turns out that these findings are broadly similar to the results of previous audit studies of housing markets done at other places and times. 

For example, Newsday did its own audit study last fall looking at real estate agents. "Newsday conducted 86 matching tests in areas stretching from the New York City line to the Hamptons and from Long Island Sound to the South Shore. Thirty-nine of the tests paired black and white testers, 31 matched Hispanic and white testers and 16 linked Asian and white testers." The testers interacted with 93 real estate agents in the Long Island area, and the study found pervasive evidence of racial bias. For a number of other studies, see my earlier post "Audit Studies and Housing Discrimination" (September 21, 2016). 

There's an old saying in social science that data is not the plural of anecdote: that is, you can pile up a lot of anecdotes, but it's inevitably hard to know if such stories tell what happened, or what people think might have happened, or whether they reveal a common or a rare event. The growing body of audit studies in US housing markets is not a bunch of anecdotes: it's data showing that racial discrimination which is illegal under existing law is in fact disturbingly pervasive in US housing markets. I would love to see a wave of these audit studies of housing market discrimination carried out around the country, with loud publicity for the results and also with some legal consequences attached. It would be socially useful if rental agents and real estate agents needed to take seriously the possibility that the ways in which they are treating their minority customers could come under public scrutiny.