Thursday, February 25, 2021

India: Pivoting from the Pandemic to Economic Reforms

Each year, the Economic Division in India's Ministry of Finance publishes the Economic Survey of India; the latest edition appeared in January 2021. The first volume is a set of chapters on different topics; the second volume is a point-by-point overview of the past year's developments in fiscal, monetary, and trade policy, along with developments in main sectors like agriculture, industry, and services. Here, I'll cherry-pick some points that caught my eye in looking over the first volume. 

Of course, any discussion of a country's economy in 2020 will start with the pandemic. All statements about what "worked" or "didn't work" during 2020 are of course subject to revision as events evolve. As a country with many low-income people living in high-density cities, and high absolute numbers of elderly people, India looked like a country that might experience large health costs in the pandemic. But the report argues that, at least for 2020, India's COVID-19 response worked well. (For those not used to reading reports from India, a "lakh" refers to 100,000, and a "crore" is 100 lakh, or 10 million.)

India was amongst the first of the countries that imposed a national lockdown when there were only 500 confirmed cases. The stringent lockdown in India from 25th March to 31st May was necessitated by the need to break the chain of the spread of the pandemic. This was based on the humane principle that while GDP growth will come back, human lives once lost cannot be brought back. 

The 40-day lockdown period was used to scale up the necessary medical and para-medical infrastructure for active surveillance, expanded testing, contact tracing, isolation and management of cases, and educating citizens about social distancing and masks, etc. The lockdown provided the necessary time to put in place the fundamentals of the '5 T' strategy - Test, Track, Trace, Treat, Technology. As the first step towards timely identification, prompt isolation & effective treatment, higher testing was recognized as the effective strategy to limit the spread of infection. At the onset of the pandemic in January, 2020, India did less than 100 COVID-19 tests per day at only one lab. However, within a year, 10 lakh tests were being conducted per day at 2305 laboratories. The country reached a cumulative testing of more than 17 crore in January, 2021. ... 

The districts across India, based on number of cases and other parameters were classified into red, yellow and green zones. Across the country, ‘hotspots’ and ‘containment zones’ were identified – places with higher confirmed cases increasing the prospect of contagion. This strategy was increasingly adopted for intensive interventions at the local level as the national lockdown was eased. ... 

India was successful in flattening the pandemic curve, pushing the peak to September. India managed to save millions of ‘lives’ and outperform pessimistic expectations in terms of cases and deaths. It is the only country other than Argentina that has not experienced a second wave. It has among the lowest fatality rates despite having the second largest number of confirmed cases. The recovery rate has been almost 96 per cent. India, therefore, seems to have managed the health aspect of COVID-19 well.

India's economy seems to have experienced a V-shaped recession, with a sharp decline during the 40-day lockdown period but then a return to pre-pandemic levels by the end of 2020. 

Other chapters in the report look at other issues that have become more salient as a result of the pandemic. For example, India's economy has labored for years under what has been called the "license raj," referring back to the British colonial period for a metaphor to describe how an extraordinarily intrusive level of licensing and regulation limits flexibility and growth in India's economy. 

Elements of the "license raj" still exist. As one example, the report notes: 

International comparisons show that the problems of India’s administrative processes derive less from lack of compliance to processes or regulatory standards, but from overregulation. ... [T]he issue of over-regulation is illustrated through a study of time and procedures taken for a company to undergo voluntary liquidation in India. Even when there is no dispute/ litigation and all paperwork is complete, it takes 1570 days to be struck off from the records. This is an order of magnitude longer than what it takes in other countries. ... 

The ‘World Rule of Law Index’ published by the World Justice Project provides cross-country comparison on various aspects of regulatory enforcement. The index has various sub-categories, which capture compliance to due processes, effectiveness, timelines, etc. In 2020, India’s rank is 45 out of 128 countries in the category of ‘Due process is respected in administrative proceedings’ (proxy for following due process). In contrast, in the category ‘Government regulations are effectively enforced’ (proxy for regulatory quality/effectiveness), the country’s rank is 104 (Table 1). India stands at 89th rank in ‘Administrative Proceedings are conducted without unreasonable delay’ (proxy for timeliness) and 107th in ‘Administrative Proceedings are applied and enforced without improper influence’ (proxy for rent seeking).

Another example looks back at some aftereffects of policies taken during the Great Recession back in 2008-2009. During that time, India's banking and financial regulators instituted a policy of "forbearance," meaning that they wouldn't crack down on financial institutions that were in a shaky position during a deep recession. This policy can make sense in the short-term: if regulators crack down on banks during a recession, it can propagate a deeper recession. But soon after the recession, this policy of forbearance needs to stop--and in India that's not what happened.  

During the GFC [global financial crisis], forbearance helped borrowers tide over temporary hardship caused due to the crisis and helped prevent a large contagion. However, the forbearance continued for seven years though it should have been discontinued in 2011, when GDP, exports, IIP [Index of Industrial Production] and credit growth had all recovered significantly. Yet, the forbearance continued long after the economic recovery, resulting in unintended and detrimental consequences for banks, firms, and the economy. Given relaxed provisioning requirements, banks exploited the forbearance window to restructure loans even for unviable entities, thereby window-dressing their books. The inflated profits were then used by banks to pay increased dividends to shareholders, including the government in the case of public sector banks. As a result, banks became severely undercapitalized. Undercapitalization distorted banks’ incentives and fostered risky lending practices, including lending to zombies. As a result of the distorted incentives, banks misallocated credit, thereby damaging the quality of investment in the economy. Firms benefitting from the banks’ largesse also invested in unviable projects. In a regime of injudicious credit supply and lax monitoring, a borrowing firm’s management’s ability to obtain credit strengthened its influence within the firm, leading to deterioration in firm governance. The quality of firms’ boards declined. Subsequently, misappropriation of resources increased, and the firm performance deteriorated. By the time forbearance ended in 2015, restructuring had increased seven times while NPAs [non-performing assets] almost doubled when compared to the pre-forbearance levels.

But with these kinds of ongoing issues duly noted, India has also seized the opportunity of the pandemic to carry out some long-promised structural reforms. For example, one change is that farmers are now allowed to sell their crops to anyone, anywhere, rather than being required to sell only to a designated local agency. Another long-standing issue is that India has offered a range of subsidies and regulatory exemptions to smaller firms, which sounds fine until you realize that a small firm contemplating growth into a larger firm would lose those advantages. These kinds of size-based labor regulations have been substantially loosened, and the number of regulations pared back. 

The increase in the size thresholds from 10 to 20 employees to be called a factory, 20 to 50 for contract worker laws to apply, and 100 to 300 for standing orders enable economies of scale and unleash growth. The drastic reductions in compliance stem from (i) 41 central labour laws being reduced to four, (ii) the number of sections falling by 60 per cent from about 1200 to 480, (iii) the maze due to the number of minimum wages being reduced from about 2000 to 40, (iv) one registration instead of six, (v) one license instead of four, and (vi) de-criminalisation of several offences.
In the next few years, it will be interesting to see if these changes make a real difference, or if they have just rearranged the furniture, with the same regulatory burden reconfigured. 

Another aftereffect of the pandemic is to raise the visibility of public health programs in India. These were already on the rise. For example, the Survey estimates that "an increase in public [health care] spend from 1 per cent to 2.5-3 per cent of GDP – as envisaged in the National Health Policy 2017 – can decrease the Out-Of-Pocket Expenditures from 65 per cent to 30 per cent of overall healthcare spend." There are programs to expand telemedicine and the infrastructure needed to support it. 

Also, India's government launched a program in 2018 aimed at providing more access to health care (which is mostly privately provided in India) to the low-income population. 
In 2018, Government of India approved the Ayushman Bharat Pradhan Mantri Jan Arogya Yojana (AB-PM-JAY) as a historic step to provide healthcare access to the most vulnerable sections in the country. Beneficiaries included approximately 50 crore individuals across 10.74 crores poor and vulnerable families, which form the bottom 40 per cent of the Indian population. The households were included based on the deprivation and occupational criteria from the Socio-Economic Caste Census 2011 (SECC 2011) for rural and urban areas respectively. The scheme provides for healthcare of up to INR 5 lakh per family per year on a family floater basis, which means that it can be used by one or all members of the family. The scheme provides for secondary and tertiary hospitalization through a network of public and empanelled private healthcare providers. It also provides for three days of pre-hospitalization and 15 days of posthospitalization expenses, places no cap on age and gender, or size of a family and is portable across the country. It covers 1573 procedures including 23 specialties (see Box 1 for details). AB-PM-JAY also aims to set up 150,000 health and wellness centres to provide comprehensive primary health care service to the entire population.
Finally, in India as in so many countries, there is often a policy question as to whether the country should be striving for additional economic growth or for a reduction in inequality; or more specifically, what the tradeoffs would be in prioritizing one of these goals over the other. The Survey looks at potential tradeoffs and data across the states. It finds that, in the context of India, there doesn't seem to be a conflict: 

[T]he Survey examines if inequality and growth conflict or converge in the Indian context. By examining the correlation of inequality and per-capita income with a range of socio-economic indicators, including health, education, life expectancy, infant mortality, birth and death rates, fertility rates, crime, drug usage and mental health, the Survey highlights that both economic growth – as reflected in the income per capita at the state level – and inequality have similar relationships with socio-economic indicators. Thus, unlike in advanced economies, in India economic growth and inequality converge in terms of their effects on socio-economic indicators. Furthermore, this chapter finds that economic growth has a far greater impact on poverty alleviation than inequality. Therefore, given India’s stage of development, India must continue to focus on economic growth to lift the poor out of poverty by expanding the overall pie. Note that this policy focus does not imply that redistributive objectives are unimportant, but that redistribution is only feasible in a developing economy if the size of the economic pie grows.

For some previous posts on India's economy, see:

The first link discussed a three-paper "Symposium on India" in the Winter 2020 issue of the Journal of Economic Perspectives (where I work as Managing Editor). 

Wednesday, February 24, 2021

Robert J. Gordon: Thoughts on Long-Run US Productivity Growth

Leo Feler has a half-hour interview with Robert J. Gordon on "The Rise and Fall and Rise Again of American Growth" (UCLA Anderson Forecast Direct, February 2021, audio and transcript available). The back-story here is that Gordon has been making the argument for some years now that modern inventions, like the rise of information technologies and the internet, have not had and will not have nearly the same size effect on productivity as some of the major technologies of the past like the spread of electricity or motor vehicles (for some background, see here and here). 

Here, Gordon makes a distinction worth considering between growth in productivity and growth in consumer welfare.
Let’s divide the computer age into two parts. One is the part that developed during the 1970s and 80s and came to fruition in the 1990s, with the personal computer, with faster mainframe computers, with the invention of the internet, and the transition of every office and every business from typewriters and paper to flat screens and the internet, with everything stored in computer memory rather than filing cabinets. That first part of the computer revolution brought with it the revival of productivity growth from the slow pace of the 70s and 80s to a relatively rapid 2.5% to 3% per year during 1995 to 2005. But unlike the earlier industrial revolution where 3% productivity growth lasted for 50 years, this time it only lasted for ten years. Most businesses now are doing their day-to-day operations with flat screens and information stored in the cloud, not all that different from how they did things in 2005. In the last 15 years, we’ve had the invention of smartphones and social networks, and what they’ve done is bring enormous amounts of consumer surplus to everyday people of the world. This is not really counted in productivity, it hasn’t changed the way businesses conduct their day-to-day affairs all that much, but what they have done is change the lives of citizens in a way that is not counted in GDP or productivity. It’s possible the amount of consumer welfare we’re getting relative to GDP may be growing at an unprecedented rate.
To understand the distinction here, say that you pay a certain amount for access to television and the internet. Now say that over time, the amount of content you can access in this way--including shows, games, shopping, communication with friends, education, health care advice, and so on--rises dramatically, while you continue to pay the same price for access. In a productivity sense, nothing has changed: you pay the same for access to television and internet as you did before. But from a consumer welfare perspective, the much greater array of more attractive and easier-to-navigate choices means that you are better off. 
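To see the distinction in the simplest possible terms, here is a toy calculation in Python. Everything in it is invented for illustration: the price, the content counts, and the log-based welfare proxy are my own stand-ins, not anything from Gordon or from official statistics.

```python
import math

# Toy illustration: measured spending vs. a crude consumer-welfare proxy.
# All numbers are invented for the example, not taken from Gordon or any official source.

years = [2005, 2010, 2015, 2020]
monthly_price = 100.0                              # household pays the same $100/month throughout
content_available = {2005: 1_000, 2010: 5_000,     # index of shows/games/services accessible
                     2015: 20_000, 2020: 80_000}   # at that same price (made-up counts)

for y in years:
    spending = 12 * monthly_price                  # what shows up in measured consumption
    # Toy welfare proxy: value rises with the log of available content,
    # capturing the idea of ever-more choice at an unchanged price.
    welfare_proxy = spending * math.log10(content_available[y])
    print(f"{y}: measured annual spending = ${spending:,.0f}; toy welfare proxy = {welfare_proxy:,.0f}")
```

The measured-spending column never moves, which is roughly what GDP and productivity statistics record; only the unmeasured welfare proxy grows.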

The expression "timepass" is sometimes used here. One of the big gains of information technology is that, for many people, it seems like a better way of passing the time than the alternatives. 

Gordon also points out that the shift to working from home and via the internet could turn out to involve large productivity gains. But as he also points out, a shift in productivity--literally, producing the same or more output with fewer inputs--is an inherently disruptive process for the inputs that get reduced. 
This shift to remote working has got to improve productivity because we’re getting the same amount of output without commuting, without office buildings, and without all the goods and services associated with that. We can produce output at home and transmit it to the rest of the economy electronically, whether it’s an insurance claim or medical consultation. We’re producing what people really care about with a lot less input of things like office buildings and transportation. In a profound sense, the movement to working from home is going to make everyone who is capable of working from home more productive. Of course, this leaves out a lot of the rest of the economy. It’s going to create severe costs of adjustments in areas like commercial real estate and transportation.
When asked how to improve long-run productivity, Gordon suggests starting with very early interventions for at-risk children:  
I would start at the very beginning, with preschool education. We have an enormous vocabulary gap at age 5, between children whose parents both went to college and live in the home and children who grow up in poverty often with a single parent. I’m all for a massive program of preschool education. If money is scarce, rather than bring education to 3 and 4 year olds to everyone in the middle class, I would spend that money getting it down as low as age 6 months for the poverty population. That would make a tremendous difference. ... This isn’t immediate. These children need to grow into adults. But if we look out at what our society will be like 20 years from now, this would be the place I would start.
For some of my own thoughts on very early interventions, well before a conventional pre-K program, see here, here, and here.

Tuesday, February 23, 2021

Including Illegal Activities in GDP: Drugs, Prostitution, Gambling

The current international standards for how a country should compute its GDP suggest that illegal activities should be included. Just how to do this, given the obvious problems in collecting statistics on illegal activity, isn't clear. The US Bureau of Economic Analysis does not include estimates of illegal activities in GDP. However, there is ongoing research on the subject, described by Rachel Soloveichik in "Including Illegal Market Activity in the U.S. National Economic Accounts" (Survey of Current Business, February 2021).

It's perhaps worth noting up front that crime itself is not included in GDP. If someone steals from me, there is an involuntary and illegal redistribution, but GDP measures what is produced. Both public and private expenditures related to discouraging or punishing crime are already included in GDP. This is of course one of the many reasons why GDP should not be treated as a measure of social welfare: that is, social welfare would clearly be improved if crime was lower and money spent on discouraging and punishing crime could instead flow to something that provides positive pleasures and benefits. 

Thus, adding illegal activities to GDP requires adding the actual production of goods and services which are illegal. Soloveichik focuses on "three categories of illegal activity: drugs, prostitution, and gambling." 
These three categories are not equal in their recent economic impact. Consumer spending on illegal drugs was $153 billion in 2017, compared to $4 billion on illegal prostitution and $11 billion on illegal gambling in the same year. Furthermore, tracking illegal drugs raises the average real GDP growth rate between 2010 and 2017 by 0.05 percentage point per year and raises the average private-sector productivity growth rate between 2010 and 2016 by 0.11 percentage point per year. In contrast, neither tracking illegal prostitution nor tracking illegal gambling has much influence on recent growth rates.
To me, the most interesting part of the essay is about some historical patterns of spending on illegal activities and drug prices. For example, here's a figure showing spending on illegal drugs over time. The line to the far right shows spending on alcohol during prohibition. The very high level of spending in the 1980s is especially striking, remembering that you need to add the different categories of illegal drugs to get the total. 

Soloveichik writes: 

Chart 1 shows that the expenditure shares for all three broad categories of illegal drugs grew rapidly after 1965 and peaked around 1980. In total, this analysis calculates that illegal drugs accounted for more than 5 percent of total personal consumption expenditures in 1980. This high expenditure share is consistent with contemporaneous news articles and may explain why BEA chose to study the underground economy in the early 1980s (Carson 1984a, 1984b). Chart 1 also shows that illegal alcohol during Prohibition accounted for almost as large a share of consumer spending as illegal drugs in 1980 and changed faster. Measured nominal growth in 1934, the first year after Prohibition ended, is badly overestimated when illegal alcohol is excluded from consumer spending.

Here's a similar graph for total spending on illegal prostitution and gambling services. Spending on gambling was especially high up until about the 1960s, when first legal state lotteries and then casinos arrived. 
It may seem counterintuitive that the US can be suffering through an opioid epidemic in the last couple of decades, but still have what looks like relatively low spending on illegal drugs. But remember that the start of the opioid epidemic up to about 2010 largely involved legally sold prescription drugs (as discussed here and here)--which would have been included in GDP. Total spending is a combination of quantity purchased and price. In addition, price must be adjusted for quality. Thus, what the data shows is that we are living in a time of cheap and powerful heroin and fentanyl. As Soloveichik writes: 
Opioid potency has rapidly increased due to the recent practice of mixing fentanyl, an extremely powerful opioid, with heroin. Marijuana potency has gradually increased due to new plant varieties that contain higher concentrations of the main psychoactive chemical in marijuana, tetrahydrocannabinol (THC).
With those patterns taken into account, here's a figure showing estimated drug prices over time, relative to the prices for legal consumption goods. Drug prices for opioids and stimulants fell sharply in the 1980s and have more or less stayed at that lower level since then, which makes the rise in nominal expenditures on drugs shown above even more striking. 
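To make "adjusted for quality" concrete, here is a tiny sketch of the idea: divide a street price by a potency index, so that rising potency shows up as a falling quality-adjusted price even if the sticker price barely moves. The numbers are invented for illustration and are not Soloveichik's estimates.

```python
# Quality adjustment in a toy form: price per unit of potency.
# All numbers below are invented for illustration, not estimates from the BEA article.

street_price_per_gram = {2000: 100, 2010: 100, 2017: 90}   # nominal $ per gram (made up)
potency_index = {2000: 1.0, 2010: 1.6, 2017: 2.5}          # relative potency (made up)

for year in street_price_per_gram:
    adjusted = street_price_per_gram[year] / potency_index[year]
    print(f"{year}: sticker price ${street_price_per_gram[year]}, "
          f"quality-adjusted price ${adjusted:.0f} per potency-unit")
```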


Soloveichik writes: "Readers should also note that illegal drugs are a large enough spending category to influence aggregate inflation. Between 1980 and 1990, average personal consumption expenditures price growth falls by 0.7 percentage point per year when illegal activity is tracked in the NIPAs."

If you are interested in the data sources for these illegal goods and services and in what assumptions are needed to estimate prices and output levels, this article is a good place to start. 

Monday, February 22, 2021

The Dependence of US Higher Education on International Students

US higher education in recent decades has become ever more dependent on rising inflows of international students--a pattern that was already likely to slow down and now is being dramatically interrupted by the pandemic. John Bound, Breno Braga, Gaurav Khanna, and Sarah Turner describe these shifts in "The Globalization of Postsecondary Education: The Role of International Students in the US Higher Education System" (Journal of Economic Perspectives, Winter 2021, 35:1, 163-84). They write: 
For the United States, which has a large number of colleges and universities and a disproportionate share of the most highly ranked colleges and universities in the world, total enrollment of foreign students more than tripled between 1980 and 2017, from 305,000 to over one million students in 2017 (National Center for Education Statistics 2018). This rising population of students from abroad has made higher education a major export sector of the US economy, generating $44 billion in export revenue in 2019, with educational exports being about as big as the total exports of soybeans, corn, and textile supplies combined (Bureau of Economic Analysis 2020).
Here's a figure showing the rise in international students from 2000-2017. Notice in particular the sharp rise in the number of international students in master's degree programs. 
Bound and co-authors write: 
[F]oreign students studying at the undergraduate level are most numerous at research-intensive public universities (about 32 percent of all bachelor’s degrees), though they also enroll in substantial numbers at non-doctorate and less selective private and public institutions. ...  The concentration of international students  in master’s programs in the fields of science, technology, engineering, and mathematics is even more remarkable: for example, in 2017 foreign students received about 62 percent of all master’s degrees in computer science and 55 percent in engineering. ... Many large research institutions now draw as much as 20 percent of their tuition revenue from foreign students (Larmer 2019)."
This table shows destinations of international students from China, India, and South Korea, three of the major nations for sending students to the US. 
However, Bound and co-authors note that the US lead as a higher education destination has been diminishing: "Although the United States remains the largest destination country for students from these countries, the US higher education system is no longer as dominant as it was 20 years ago. As an illustration, student flows from China to the United States were more than 10 times larger than the flows to Australia and Canada in 2000; by 2017, those ratios fell to 2.5 to 1 and 3.3 to 1, respectively."

This pattern of rising international enrollments in US higher ed was not likely to continue on its pre-pandemic trajectory. Other countries have been building up their higher education options. In addition, if you were a young entrepreneur or professional from China or India, the possibilities for building your career in your home country look a lot better now than they did, say, back in about 1990. But the pandemic has taken what would have been a slower-motion squeeze on international students coming to US higher education and turned it into an immediate bite. Bound and co-authors write: 
Visas for the academic year are usually granted between March (when admissions decisions are made) and September (when semesters begin). Between 2017 and 2019, about 290,000 visas were granted each year over these seven months (United States Department of State 2020). Between March and September 2020, only 37,680 visas were granted—an extraordinary drop of 87 percent. Visas for students from China dropped from about 90,000 down to only 943 visas between March and September 2020. A fall 2020 survey of 700 higher education institutions found that one in five international students were studying online from abroad in response to the COVID-19 pandemic. Overall, new international enrollment (including those online) decreased by 43 percent, with at least 40,000 students deferring enrollment (Baer and Martel 2020).
Overall, it seems to me an excellent thing for the US higher education system and the US economy to attract talent from all over the world. But even if you are uncertain about those benefits, it is an arithmetic fact that the sharp declines in international students are going to be a severe blow to the finances of US higher education. 

Saturday, February 20, 2021

The Minimum Wage Controversy

Why has the economic research of the last few decades had a hard time getting a firm handle on the effects of minimum wages? The most recent issue of the Journal of Economic Perspectives (where I have worked as managing editor for many years) includes a set of four papers that bear on the subject. The short answer is that the effects of a higher minimum wage are likely to vary by time and place, and are likely to include many effects other than reduced employment. In this post, I'll offer a longer elaboration. For reference, the four JEP papers are:

Manning starts his paper by pointing out that mainstream views on the minimum wage have shifted substantially in the last 30 years or so. He writes: 

Thirty years ago, ... there was a strong academic consensus that the minimum wage caused job losses and was not well-targeted on those it set out to help, and that as a result, it was dominated by other policies to help the working poor like the Earned Income Tax Credit. ... [P]olicymakers seemed to be paying attention to the economic consensus of the time: for example, in 1988 the US federal minimum wage had not been raised for almost a decade and only 10 states had higher minima. Minimum wages seemed to be withering away in other countries too. ... In 1994, the OECD published its view on desirable labor market policies in a prominent Jobs Study report, recommending that countries “reassess the role of statutory minimum wages as an instrument to achieve redistributive goals and switch to more direct instruments” (OECD 1994).

The landscape looks very different today.  ...In the United States, the current logjam in Congress means no change in the federal minimum wage is immediately likely. However, 29 states plus Washington, DC have a higher minimum wage. A number of cities are also going their own way, passing legislation to raise the minimum wage to levels (in relation to average earnings) not seen for more than a generation ... Outside the United States, countries are introducing minimum wages (for example, Hong Kong in 2011 and Germany in 2015) or raising them (for example, the introduction of the United Kingdom’s National Living Wage in 2016, a higher minimum wage for those over the age of 25). Professional advice to policymakers has changed too. A joint report from the IMF, World Bank, OECD, and ILO in 2012 wrote “a statutory minimum wage set at an appropriate level may raise labour force participation at the margin, without adversely affecting demand, thus having a net positive impact especially for workers weakly attached to the labour market” (ILO 2012). The IMF (2014) recommended to the United States that “given its current low level (compared both to US history and international standards), the minimum wage should be increased.” The updated OECD (2018) Job Strategy report recommended that “minimum wages can help ensure that work is rewarding for everyone” (p. 9) and that “when minimum wages are moderate and well designed, adverse employment effects can be avoided” (p 72).

Why the change? From a US point of view, one reason is surely that the real, inflation-adjusted level of the minimum wage peaked back in 1968. Thus, it makes some intuitive sense that studies looking at labor market data from the 1960s and 1970s would tend to find big effects of a higher minimum wage, but that as the real value of the federal minimum wage declined over time, studies would tend to find smaller effects. Here's a figure from the Fishback and Seltzer paper showing the real (solid yellow) and nominal (blue dashed) value of the minimum wage over time: 
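For readers who want the mechanics behind the "real" line in a figure like that, the adjustment is just the nominal minimum wage divided by a price index. Here is a minimal sketch; the nominal wage values are the actual federal minimums for those years, while the CPI-U figures are approximate annual averages I have filled in for illustration, not the series Fishback and Seltzer use.

```python
# Deflating a nominal minimum wage into constant dollars with a price index.
# Nominal wages are historical federal minimums; the CPI values are approximate
# annual-average CPI-U figures (1982-84 = 100) supplied here for illustration.

nominal_minimum_wage = {1968: 1.60, 1980: 3.10, 1997: 5.15, 2009: 7.25}
cpi = {1968: 34.8, 1980: 82.4, 1997: 160.5, 2009: 214.5, 2020: 258.8}

base_year = 2020
for year, wage in nominal_minimum_wage.items():
    real_wage = wage * cpi[base_year] / cpi[year]   # express each wage in 2020 dollars
    print(f"{year}: nominal ${wage:.2f}  ->  about ${real_wage:.2f} in {base_year} dollars")
```

By this rough calculation, the 1968 minimum wage was worth roughly $12 in 2020 dollars, well above today's $7.25.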

Another long-recognized problem in trying to get evidence about the effects of the minimum wage based on changes over time is that lots of other factors affect the labor market, too. For example, the dashed blue line shows that the most recent jump in the federal minimum wage was phased in from 2007 to 2009. Trying to disentangle the effects of that rise in the minimum wage from the effects of the Great Recession is likely a hopeless task. 

One more problem with studying the effects of minimum wage changes over time is that who actually receives the minimum wage has been shifting. Manning offers this table. It shows, for example, that teenagers accounted for 32.5% of the total hours of minimum wage workers in 1979, but now account for only 9.6% of the hours of minimum wage workers. 

Rather than trying to dig out lessons from changes in the gradually declining real minimum wage over time, lots of research in the last few decades has instead tried to look at US states or cities where the minimum wage increased over time. Then the study either does a before-and-after comparison of trends, or looks for a comparison location where the minimum wage didn't rise. 

But this kind of analysis is subject to the basic problem that the states or cities that choose to raise their minimum wages are not randomly selected. They are usually places where average wages and wages for low-skill workers are already higher. As an extreme example, the minimum wage in various cities near the heart of Silicon Valley (Palo Alto, San Francisco, Berkeley, Santa Clara, Mountain View, Sunnyvale, Los Altos) is already above $15/hour. But in general, wages are also much higher in those areas. Asking whether these higher minimum wages reduced low-skill or low-wage employment in these cities is an interesting research topic, but no sensible person would extrapolate the answers from Silicon Valley to how a $15/hour minimum wage would affect employment in, say, Mississippi, where half of all workers in the state earn less than $15/hour. 

Many additional complexities arise. Clemens goes through many of the possibilities in his paper. Here are some of them. 

1) Economists commonly divide workers into those in the "tradeable" and the "nontradeable" sector. An example of a "nontradeable" job would be working at a coffee shop, where you compete against other coffee shops in the same immediate area, but not against coffee shops in other states or countries. A "tradeable" job might be a manufacturing job where your output is shipped to other locations, so you do compete directly against producers from other locations. 

If you work in a tradeable-sector job and the state-level or local-level minimum wage rises, it may cause real problems for the firm, which is competing against outsiders. But many low-skilled jobs are in the "nontradeable" sector: food, hotels, and others. In those situations, a rise in the minimum wage means higher costs for all the local competing firms--in which case it will be easier to pass those costs along to consumers in the form of higher prices. Of course, if an employer can pass along the higher minimum wage to consumers, any employment effects may be muted. 

2) An employer faced with a higher minimum wage might try to offset this change by paying lower benefits (vacation, overtime, health insurance, life insurance, and so on). The employer might also try to get more output from workers by, for example, offering less job flexibility or pushing them harder in the workplace. 

3) A higher minimum wage means an increased incentive for employers and workers to break the law and evade the minimum wage. Clemens cites one "analysis of recent minimum wage changes [which] estimates that noncompliance has averaged roughly 14 to 21 cents per $1 of realized wage gain."

4) An employer faced with a higher minimum wage might, for a time, not have many immediate options for adjustment. But over a timeframe of a year or two, the employer might start figuring out ways to substitute high-paid labor for the now pricier minimum wage labor, or to look for ways of automating or outsourcing minimum wage jobs. Any study that focuses on effects of a minimum wage during a relatively small time window will miss these effects. But any study that tries to look at long-run effects of minimum wage changes will find that many other factors are also changing in the long run, so sorting out just the effect of the minimum wage will be tough. 

5) A higher minimum wage doesn't just affect employers, but also affects workers. A higher wage means that workers are likely to work harder and less likely to quit. Thus, a firm that is required to pay a higher minimum wage might recoup a portion of that money from lower costs of worker turnover and training. 

There is ongoing research on all of these points. Some evidence supports them and some does not, and the evidence again often varies by place, time, occupation, and which comparison group is used. The markets for the supply and demand of labor are complicated places. 

I don't mean to be a whiner about it, but figuring out the effects of a higher minimum wage from the existing evidence is genuinely difficult. But of course, no one weeps for the analytical problems of economists. Most people just want a bottom line on whether a $15/hour minimum wage is good or bad, so that they know whether to treat you as friend or foe--depending on whether you agree with their own predetermined beliefs. I'm not a fan of playing that game, but here are a few thoughts on the overall controversy. 

  • It's worth remembering the old adage that "absence of evidence is not evidence of absence." That is, just because it's hard to provide ironclad statistical proof that a minimum wage reduces employment doesn't prove that the effect is zero--it just means that getting strong evidence is hard. 
  • Since the federal minimum wage was enacted in the 1930s, there have always been a number of states that set a higher minimum wage. The more recent shift is toward cities setting a higher minimum wage than their state. Thus, the effects of raising the federal minimum wage to $15/hour will mostly not be felt in places where the minimum wage is already at or near that level: instead, they will be felt in all the other locations. 
  • Many minimum wage workers are also part-time workers. Thus, it's easy to imagine an example where, say, the minimum wage rises 20% but a certain worker's hours are cut by 10%. This is a situation where the minimum wage led to fewer hours worked, but the worker still has higher annual income (1.20 × 0.90 = 1.08, or about 8% more).  
  • To the extent that a higher minimum wage does affect the demand for low-skilled labor, such effects will be less perceptible in a strong or growing economy when employment is generally expanding for other reasons, and more perceptible in a weak or recessionary economy, when fewer firms are looking to hire. 
  • Everyone agrees that a smaller rise in the minimum wage will have smaller effects, and a larger rise in the minimum wage will have larger effects. I know a number of liberal-leaning, Democratic-voting economists who are just fine with the tradeoffs of raising the federal minimum wage to some extent, but who also think that a rise to $15/hour for the national minimum wage (as opposed to the minimum wage in high-wage cities and states) is too much. 

True gluttons for punishment who have read this far may want some recent minimum wage studies to look at. In this case at least, your wish is my command: 

"Wages, Minimum Wages, and Price Pass-Through: The Case of McDonald’s Restaurants," by Orley Ashenfelter and Å tÄ›pán Jurajda (Princeton University Industrial Relations Section, Working Paper #646, January 2021). "We find no association between the adoption of labor-saving touch screen ordering technology and minimum wage hikes. Our data imply that McDonald’s restaurants pass through the higher costs of minimum wage increases in the form of higher prices of the Big Mac sandwich."

"Myth or Measurement: What Does the New Minimum Wage Research Say about Minimum Wages and Job Loss in the United States?" by David Neumark and Peter Shirley (National Bureau of Economic Research,  Working Paper 28388,  January 2021). "We explore the question of what conclusions can be drawn from the literature, focusing on the evidence using subnational minimum wage variation within the United States that has dominated the research landscape since the early 1990s. To accomplish this, we assembled the entire set of published studies in this literature and identified the core estimates that support the conclusions from each study, in most cases relying on responses from the researchers who wrote these papers.Our key conclusions are: (i) there is a clear preponderance of negative estimates in the literature; (ii) this evidence is stronger for teens and young adults as well as the less-educated; (iii) the evidence from studies of directly-affected workers points even more strongly to negative employment effects; and (iv) the evidence from studies of low-wage industries is less one-sided."

"Seeing Beyond the Trees: Using Machine Learning to Estimate the Impact of Minimum Wages on Labor Market Outcomes," by Doruk Cengiz, Arindrajit Dube, Attila S. Lindner and David Zentler-Munro (National Bureau of Economic Research Working Paper 28399, January 2021). "We apply modern machine learning tools to construct demographically-based treatment groups capturing around 75% of all minimum wage workers—a major improvement over the literature which has focused on fairly narrow subgroups where the policy has a large bite (e.g., teens). By exploiting 172 prominent minimum wages between 1979 and 2019 we find that there is a very clear increase in average wages of workers in these groups following a minimum wage increase, while there is little evidence of employment loss. Furthermore, we find no indication that minimum wage has a negative effect on the unemployment rate, on the labor force participation, or on the labor market transitions.

"The Budgetary Effects of the Raise the Wage Act of 2021," Congressional Budget Office (February 2021). "CBO projects that, on net, the Raise the Wage Act of 2021 would reduce employment by increasing amounts over the 2021–2025 period. In 2025, when the minimum wage reached $15 per hour, employment would be reduced by 1.4 million workers (or 0.9 percent), according to CBO’s average estimate. In 2021, most workers who would not have a job because of the higher minimum wage would still be looking for work and hence be categorized as unemployed; by 2025, however, half of the 1.4 million people who would be jobless because of the bill would have dropped out of the labor force, CBO estimates. Young, less educated people would account for a disproportionate share of those reductions in employment."

Thursday, February 18, 2021

Rural Poverty

Rural poverty is often overlooked. In the Spring 2021 issue of the Stanford Social Innovation Review, Robert Atkins, Sarah Allred, and Daniel Hart discuss "Philanthropy’s Rural Blind Spot," about how philanthropies have typically put much more time and attention on urban poverty than rural poverty. They write: 

Most large foundations are located in metropolitan areas and have built relationships with institutions and organizations in those communities. ... [M]any grant makers assume that urban centers have higher rates of poverty than rural areas. Moreover, many funders believe that they maximize impact and do more good when their grants go to addressing distress in densely populated areas. The rates of poverty, however, are higher in rural areas than in urban areas. In addition, it would be difficult to demonstrate that a grant going to a metropolitan community to improve high school graduation rates, increase the food security of agricultural workers, or reduce childhood lead poisoning assists a greater number of individuals than if the same grant goes to a nonmetropolitan community. In other words, giving to more densely populated areas does not clearly result in a greater equity return on investment for the grant maker.
The authors point to a resource with which I had not been familiar, the Multidimensional Index of Deep Disadvantage produced by H. Luke Shaefer, Silvia Robles and Jasmine Simington of the University of Michigan, using methods also developed by Kathryn Edin and Tim Nelson at Princeton University. They collect a combination of economic, health, and social mobility data on counties and the 500 largest cities in the United States. You can find an interactive map at the website, or click here for a full list of the 3617 areas. They then rank the areas. In an overview of the results, Shaefer, Edin, and Nelson write:

When we turn the lens of disadvantage from the individual to the community, we find that five geographic clusters of deep disadvantage come into view: The Mississippi Delta, The Cotton Belt, Appalachia, the Texas/Mexico border, and a small cluster of rust belt cities (most notably Flint, Detroit, Gary, and Cleveland). Many Native Nations also score high on our index though are not clustered for historic reasons. ...

The communities ranking highest on our index are overwhelmingly rural. Among the 100 most deeply disadvantaged places in the United States according to our index, only 9 are among the 500 largest cities in the United States, which includes those with populations as small as 42,000 residents. In contrast, 19 are rural counties in Mississippi. Many of the rural communities among the top 100 places have only rarely, if ever, been researched. Conversely, Chicago, which has been studied by myriad poverty scholars, doesn’t even appear among the top 300 in our index. Our poverty policies suffer when social science research misses so many of the places with the greatest need. ...

How deep is the disadvantage in these places? When we compare the 100 most disadvantaged places in the United States to the 100 most advantaged, we find that the poverty rate and deep poverty are both higher by greater than a factor of six. Life expectancy is shorter by a full 10 years, and the incidence of low infant birthweight is double. In fact, average life expectancy in America’s most disadvantaged places, as identified by our index, is roughly comparable to what is seen in places such as Bangladesh, North Korea, and Mongolia, and infant birth weight outcomes are similar to those in Congo, Uganda, and Botswana.

It should be noted that a list of this sort is not an apples-to-apples comparison, in part because the population sizes of the areas are so very different. Many counties have only a few thousand people, while many cities have hundreds of thousands, or more. Thus, the data for a city will average out both better-off and worse-off areas, while a low-population, high-poverty rural county may not have any better-off places. 

But the near-invisibility of rural poverty in our national discourse is still striking. For example, when talking about improving education and schooling, what should happen with isolated rural schools rarely makes the list.  When talking about how to assure that people have health insurance, the issues related to people who are a long way from a medical facility are often not on the list of topics. When talking about raising the national minimum wage to $15/hour, much of the discussion seems to assume an area relatively dense in population, employers, and jobs, where various job-related adjustments can take place, not a geographically isolated and high-poverty area with few or no major employers. These issues aren't new. Many of the current high-poverty areas (rural and urban) have been poor for decades.

Wednesday, February 17, 2021

Robert Shiller on Narrative Economics

Robert J. Shiller (Nobel '13) delivered the Godley-Tobin Lecture, given annually at the Eastern Economic Association meetings, on the subject of “Animal spirits and viral popular narratives” (Review of Keynesian Economics, January 2021, 9:1, pp. 1-10).

Shiller has been thinking about the intertwining of economics and narrative at least since his presidential address to the American Economic Association back in 2017. He suggests, for example, that the key feature distinguishing humans may be our propensity to organize our thinking into stories, rather than just intelligence per se. Indeed, there are many examples in all walks of life (politics, investing, expectations of family life, careers, reactions to a pandemic) where people will often cleave to their preferred narrative rather than continually question and challenge it with their intelligence. He begins the current essay in this way: 

John Maynard Keynes's (1936) concept of ‘animal spirits’ or ‘spontaneous optimism’ as a major driving force in business fluctuations was motivated in part by his and his contemporaries' observations of human reactions to ambiguous situations where probabilities couldn't be quantified. We can add that in such ambiguous situations there is evidence that people let contagious popular narratives and the emotions they generate influence their economic decisions. These popular narratives are typically remote from factual bases, just contagious. Macroeconomic dynamic models must have a theory that is related to models of the transmission of disease in epidemiology. We need to take the contagion of narratives seriously in economic modeling if we are to improve our understanding of animal spirits and their impact on the economy.
Thus, this lecture emphasizes the parallels between how narratives spread and epidemiology models of how diseases spread:
Mathematical epidemiology has been studying disease phenomena for over a century, and its frameworks can provide an inspiration for improvement in our understanding of economic dynamics. People's states of mind change through time, because ideas can be contagious, so that they spread from person to person just as diseases do. ...

We humans live our lives in a sea of epidemics all at different stages, including epidemics of diseases and epidemics of narratives, some of them growing at the moment, some peaking at the moment, others declining. New mutations of both the diseases and the narratives are constantly appearing and altering behavior. It is no wonder that changes in business conditions are so often surprising, for there is no one who is carefully monitoring the epidemic curves of all these drivers of the economy.

Since the advent of the internet age, the contagion rate of many narratives has increased, with the dominance of social media and with online news and chats. But the basic nature of epidemics has not changed. Even pure person-to-person word-of-mouth spread of epidemics was fast enough to spread important ideas, just as person-to person contagion was fast enough to spread diseases into wide swaths of population millennia ago.
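The parallel Shiller draws maps naturally onto the basic SIR (susceptible-infectious-recovered) framework from mathematical epidemiology. Here is a minimal sketch of that model reinterpreted as narrative contagion; the parameter values and the interpretation of the compartments are my own illustrative choices, not estimates from Shiller's work.

```python
# A bare-bones discrete-time SIR model, read as narrative contagion rather than disease:
# S = people who have not yet heard the story, I = people actively spreading it,
# R = people who have lost interest. Parameters are illustrative, not estimated.

def simulate_narrative(beta=0.30, gamma=0.10, i0=0.001, days=200):
    s, i, r = 1.0 - i0, i0, 0.0            # population shares
    path = []
    for t in range(days):
        new_spread = beta * s * i          # contagion: spreaders meet people who haven't heard it
        new_forgot = gamma * i             # "recovery": people stop retelling the story
        s, i, r = s - new_spread, i + new_spread - new_forgot, r + new_forgot
        path.append((t, s, i, r))
    return path

if __name__ == "__main__":
    for t, s, i, r in simulate_narrative()[::40]:
        print(f"day {t:3d}: not yet heard={s:.2f}  spreading={i:.2f}  moved on={r:.2f}")
```

The hump-shaped path of the "spreading" share is the epidemic curve Shiller has in mind: a narrative grows, peaks, and then fades as attention moves on.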
As one illustration of the rise and fall of economics-related narratives, Shiller uses "n-grams," which track how often certain terms appear in news media over time. Examples of such terms shown in this graph include "supply-side economics," "welfare dependency," "welfare fraud," and "hard-working American."
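The counting behind an n-gram chart is conceptually simple: tally how often a phrase appears in a dated corpus, year by year. Here is a hedged sketch; the two-document "corpus" is invented purely to make the function runnable, whereas Shiller's figures draw on large news and book databases.

```python
from collections import Counter

# Count how often a phrase appears per year in a dated text corpus.
# The two-entry "corpus" below is invented for illustration only.

corpus = [
    (1981, "the case for supply-side economics rests on incentives"),
    (1995, "critics charged welfare fraud while others saw welfare dependency"),
]

def phrase_counts_by_year(docs, phrase):
    counts = Counter()
    for year, text in docs:
        counts[year] += text.lower().count(phrase.lower())
    return dict(counts)

print(phrase_counts_by_year(corpus, "supply-side economics"))   # {1981: 1, 1995: 0}
```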


Shiller's theme is that if we want to understand macroeconomic fluctuations, it won't be enough just to look at patterns of interest rates, trade, or innovation, and it won't be enough to include factors like real-life pandemics, either. The underlying real factors matter, of course. But the real factors are often translated into narratives, and it is those narratives which then affect economic actions about buying, saving, working, starting a business, and so on. Shiller writes: "As this research continues, there should come a time when there is enough definite knowledge of the waxing and waning of popular narratives that we will begin to see the effects on the aggregate economy more clearly."

I'll only add the comment that there can be a tendency to ascribe narratives only to one's opponents: that is, those with whom I disagree are driven by "narratives," while those with whom I agree are of course pure at heart and driven only by facts and the best analysis. That inclination would be a misuse of Shiller's approach. In many aspects of life, enunciating the narratives that drive our own behavior (economic and otherwise) can be hard and discomfiting work. 

For some additional background on these topics: 

For a readable introduction to epidemiology models aimed at economists, a useful starting point is the two-paper "Symposium on Economics and Epidemiology" in the Fall 2020 issue of the Journal of Economic Perspectives: "An Economist's Guide to Epidemiology Models of Infectious Disease," by Christopher Avery, William Bossert, Adam Clark, Glenn Ellison and Sara Fisher Ellison; and "Epidemiology's Time of Need: COVID-19 Calls for Epidemic-Related Economics," by Eleanor J. Murray.

For those who would like to know more about "animal spirits" in economics, a 1991 article in the Journal of Economic Perspectives by Roger Koppl discusses the use of the term by John Maynard Keynes and then gives a taste of the intellectual history: for example, Keynes apparently got the term from Descartes, and it traces back to the second century Greek physician Galen.

Tuesday, February 16, 2021

The Allure and Curse of Mini-Assignments

During this stretch of online courses, it seems that many teachers and students have the feeling that they are working harder and accomplishing less. In its own way, this feeling is a tribute to the virtues of in-person education. Betsy Barre offers some hypotheses as to why higher education has a feeling of getting less output from more input in "The Workforce Dilemma" (January 22, 2021, Center for Teaching and Learning, Wake Forest University). 

I recommend the short essay as a whole. But the part that resonated most with me discussed how attempts by teachers to use online tools as a way of encouraging and monitoring short-term academic progress can end up making everyone feel crazy. Barre writes:  

The most interesting of all six hypotheses, and the one I’ve thought the most about, is that our experience this semester has revealed an unfortunate truth about how teaching and learning took place prior to the pandemic. This theory ... suggests that students are experiencing more work because of a fundamental difference between online courses and the typical in-person course. While there may be no difference in how much work is expected of students in these courses, there is often a difference in how much work is required.

Most faculty would agree that students should be spending 30 hours a week on homework in a traditional 15-credit semester, but we also know that the average student taking in-person courses is able to get by on about 15 hours a week. This is not surprising to most faculty, as we know that students aren’t always doing the reading or coming to class prepared. Here and there a course might require the full amount of work, but a student can usually count on some of their courses requiring less.

So what makes online courses so different? In an online course, faculty can see, and students are held accountable for, all expected work. In an in-person class, students can sometimes skip the reading and passively participate in class. But in an online course, they may have to annotate the reading, take a quiz, or contribute to a discussion board after the reading is complete. While this shift would be uncomfortable for students in the case of one course, shifting all of their courses in this direction would, in fact, double their workload and entail a radical reworking of their schedules. ...

Mini-assignments are often well-meant. The idea is to keep students involved and on pace, and for certain kinds of classes and for many students I'm sure it works fine. But a steady stream of graded mini-assignments also takes time, organization, and energy for both faculty and students. Barre again:  

We’ve also encouraged faculty to follow best practices by breaking up a few large assignments into multiple smaller ones. When this happens across five courses, 10 assignments can suddenly convert to 50. While those 50 assignments may take no more time than the original 10, simply keeping track of when they are due is a new job unto itself. In each of these cases, the cognitive load we are placing on students has increased, adding invisible labor to the time they spend completing the work.

There are some workplaces where every keystroke on the computer can be monitored. Most teachers and students do not aspire to have the learning experience function in this way. But a continual stream of mini-assignments moves higher education closer to that model. 

Monday, February 15, 2021

Interview with Seema Jayachandran: Women and Development, Deforestation, and Other Topics

Douglas Clement and Anjali Nair have collaborated to produce a "Seema Jayachandran interview: On deforestation, corruption, and the roots of gender inequality" (Federal Reserve Bank of Minneapolis, February 12, 2021). Here are a couple of samples: 

The U-shaped relationship between economic development and women's labor force participation

There’s a famous U-shaped relationship in the data between economic development and female labor force participation. ... Historically, in richer countries, you’ve seen this U-shape where, initially, there are a lot of women working when most jobs are on the family farm. Then as jobs move to factories, women draw out of the labor force. ... But then there’s an uptick where women start to enter the labor market more and not just enter the labor market, but earn more money. There are several reasons why we think that will happen.

One is structural transformation, meaning the economy moves away from jobs that require physical strength like in agriculture or mining towards jobs that require using brains. ... For example, the percentage of the economy in services is higher in the U.S. compared to Chad, and service jobs are going to advantage women. So that’s one reason that economic development helps women in the labor market.

The second reason is improvement in household production. Women do the lion’s share of household chores and, as nations develop, they adopt technology that reduces the necessary amount of labor. Chores like cooking and cleaning now use a lot more capital. We use machines like vacuum cleaners, washing machines, or electric stoves rather than having to go fetch wood and cook on a cookstove. This labor-saving technology frees up a lot of women’s time because those chores happen to be disproportionately women’s labor. Some of those technological advances are in infrastructure. Piped water, for instance, where we’re relying on the government or others to build that public good infrastructure. And some is within households; once piped water is available, households invest in a washing machine.

The third reason is fertility. When countries grow richer, women tend to have fewer kids and have the ability to space their fertility. For example, both the smaller family size and the ability to choose when you have children allows women to finish college before having children.  ... Less on the radar is that childbearing has also gotten a lot safer over time. There’s some research on the U.S. by Stefania Albanesi and Claudia Olivetti suggesting that reduction in the complications from childbirth are important in thinking about the rise in female labor force participation.
Paying landowners in western Uganda to prevent deforestation

In many developing countries, people are clearing forests to grow some cassava or other crop to feed their family. Obviously, that’s really important to them. You wouldn’t want to ban them from doing that. They’d go hungry! But if we think about it in absolute terms and global terms, the income people are generating by clearing forests is small. If we can encourage them to protect the forest and compensate them for the lost income, then protecting the forest actually makes them better off than clearing it. And because the income they’re forgoing is small in global terms, that could cost a lot less than other ways of reducing carbon emissions. ...

This is a truly interdisciplinary project. One of my collaborators is a specialist in remote sensing, which is analyzing satellite data to measure land use and forests. It’s similar to the machine learning that economists use often. But here we use high-resolution satellite imagery, where a single pixel covers 2.4 meters by 2.4 meters of surface area. 

If I showed you one of our images, you could spot every tree with your eye. Of course, there are 300 million pixels in the area we have imagery for, so you don’t want to go and hand-classify all those trees. But we have the algorithms and the techniques to classify all of those pixels into whether there’s a tree or not. We have this imagery for both the control villages where the program wasn’t in place and the treatment villages where it was, where landowners were paid to not cut their trees. So we could see before-and-after images of what happened in both control and treatment villages.
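
Just to make the idea concrete, here is a minimal sketch of that kind of pixel classification, not the authors' actual pipeline: train a classifier on a handful of hand-labeled pixels, then predict tree/no-tree for every pixel in an image and add up the tree cover. The band values and labels below are made up purely for illustration.

    # Minimal sketch of pixel classification (illustrative, not the authors' pipeline).
    # Each pixel is described by its band values, e.g. red, green, blue, near-infrared.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical hand-labeled training pixels: band values, 1 = tree, 0 = not tree
    X_train = np.array([[0.10, 0.25, 0.08, 0.60],
                        [0.35, 0.30, 0.28, 0.20],
                        [0.12, 0.28, 0.09, 0.55],
                        [0.40, 0.38, 0.33, 0.18]])
    y_train = np.array([1, 0, 1, 0])

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

    # image_pixels stands in for a whole village's imagery, flattened to one row per pixel;
    # classify every pixel and compute the share covered by trees
    image_pixels = np.array([[0.11, 0.26, 0.08, 0.58],
                             [0.38, 0.35, 0.30, 0.19],
                             [0.13, 0.27, 0.10, 0.57]])
    tree_cover = clf.predict(image_pixels).mean()
    print(f"Estimated tree-cover share: {tree_cover:.0%}")

Running the same calculation on before-and-after imagery for treatment and control villages is what lets the researchers compare rates of tree loss.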

By doing that, we could see that in the control villages over this two-year period, 9 percent of the tree cover that existed at the beginning was gone. That’s a really rapid rate of deforestation. ... By comparison, in the villages with this program, the rate of tree loss was cut in half, closer to 4 to 5 percent. There’s still tree loss—not everybody wanted to participate in the program—but the program made a pretty big dent in the problem.

Another thing the high-resolution imagery shows is the pattern of tree-cutting, and that showed that we’ve been underestimating the rate of deforestation in poor countries. On relatively low-resolution satellite imagery, we could see clear-cutting of acres and acres of land. That is an important problem. But recent estimates suggest that, especially in Africa, half of the deforestation is smaller landholders who are cutting four or five trees in a year to pay for a hospital bill, say. That adds up.

Sunday, February 14, 2021

The Coin Shortage: Velocity Stories

In high school "velocity" referred to distance travelled divide by time. In economics, "velocity" refers to the speed with which money circulates. The formula is V= GDP/M: that is, take the size of the GDP for a year and take a measure of the money supply. Then velocity will tell you how many times that money circulated through the economy in a given year. 

During the pandemic, the velocity of money has slowed way down. One manifestation is the shortage of coins at many retailers. Tim Sablik tells the story in "The COVID-19 pandemic disrupted the supply of many items, including cold hard cash" (Econ Focus: Federal Reserve Bank of Richmond, Fourth Quarter 2020, pp. 26-29). One signal came from the coin laundries. Sablik writes: 
"I started getting a few phone calls from members asking, 'Is it just me, or are more quarters walking out the door than before?'" says Brian Wallace, president of the Coin Laundry Association. Of the roughly 30,000 self-service laundromats in the United States, Wallace says that a little more than half take only quarters as payment to operate washers and dryers. Before the pandemic, some of these coin-operated businesses would take in more quarters each week than they gave out, meaning that most customers brought their own change to the laundromat rather than exchanging bills for quarters. But as the pandemic intensified, many of those business owners who had been used to ending the week with a surplus of quarters suddenly found they had a deficit. They turned to their local bank to purchase more, but the banks had no change to spare either.
In June 2020, the Federal Reserve started rationing the supply of coins. In an absolute sense, there didn't seem to be an overall shortage of coins. There are about $48 billion of coins in circulation, and that total didn't fall. Instead, with people paying more bills online and with debit or credit cards, the velocity of circulation for coins dropped, falling by about half. 

You may not have been aware, as I was not, that the Fed created a "US Coin Task Force" to get those coins moving again, nor that last October was "Get Coin Moving Month." However, "[o]ne aquarium in North Carolina shuttered by the pandemic put its employees to work hauling 100 gallons of coins from one of its water fixtures that had served as a wishing well for visitors since 2006."

Of course, the drop in velocity of money isn't just coins, but involves the money supply as a whole. The Federal Reserve offers several textbook definitions of the money supply, with differing levels of breadth. 
There are several standard measures of the money supply, including the monetary base, M1, and M2.
  • The monetary base: the sum of currency in circulation and reserve balances (deposits held by banks and other depository institutions in their accounts at the Federal Reserve).
  • M1: the sum of currency held by the public and transaction deposits at depository institutions (which are financial institutions that obtain their funds mainly through deposits from the public, such as commercial banks, savings and loan associations, savings banks, and credit unions).
  • M2: M1 plus savings deposits, small-denomination time deposits (those issued in amounts of less than $100,000), and retail money market mutual fund shares.
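
Since velocity is just GDP divided by whichever money measure you pick, the same year of GDP produces a different velocity for each measure. A quick illustration, again with hypothetical magnitudes:

    # The same GDP divided by different money measures gives different velocities.
    # Magnitudes below are hypothetical round numbers, for illustration only.
    nominal_gdp = 21.0e12
    money_measures = {"monetary base": 5.0e12, "M1": 18.0e12, "M2": 19.0e12}

    for name, m in money_measures.items():
        print(f"Velocity of {name}: {nominal_gdp / m:.2f}")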
Here's one figure showing velocity of M1 over time, and another showing velocity of M2. In both figures, you can see that velocity has been on a downward path--although the path looks different depending on the measure of the money supply. You can also see the abrupt additional fall in velocity when the pandemic recession hit. 


There was a time, back in the 1970s, when the velocity of M1 looked fairly steady and predictable, climbing slowly over time. Thus, some monetary economists, most prominently Milton Friedman, argued that the Federal Reserve should just focus on having the money supply grow steadily over time to suit the needs of the economy. But when M1 velocity first flattened out and then started jumping around in the 1980s, it was clear that focusing on M1 was not a good policy target, and when M2 velocity started moving around in the 1990s, it didn't look like a suitable target either. At present, velocity is not especially interesting as a direct part of Fed policy, but it continues to be interesting for what it tells us about changes in how transactions and payments flow across the economy. 

Saturday, February 13, 2021

What Gets Counted When Measuring US Tax Progressivity

The "progressivity"  of a tax refers to whether those with higher incomes pay a higher share of income in taxes than those with lower incomes. The federal income tax is progressive in this sense. 

However, other federal taxes, like the payroll taxes that support Social Security, are regressive rather than progressive, because the tax applies only to income up to a limit (set at $142,800 in 2021). The justification is that Social Security taxes combine a degree of redistribution with a sense of contributing to one's own future Social Security benefits. Taking it one step further, one justification for the Earned Income Tax Credit for lower-earning families and individuals is that it serves in part to offset the Social Security payroll taxes paid by this group. 
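
To see how the wage cap creates regressivity, here is a minimal sketch using the 6.2 percent employee-side Social Security rate and the 2021 cap; the wage levels themselves are hypothetical.

    # Effective Social Security payroll tax rate at different wage levels,
    # using the 6.2 percent employee-side rate and the 2021 wage cap of $142,800.
    SS_RATE = 0.062
    WAGE_CAP = 142_800

    for wages in (50_000, 142_800, 500_000):
        tax = SS_RATE * min(wages, WAGE_CAP)
        print(f"Wages ${wages:,}: effective Social Security tax rate {tax / wages:.1%}")

The effective rate is flat at 6.2 percent up to the cap and then falls as income rises above it.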

So with all this taken into account, how progressive is the federal tax system, and how has the degree of progressivity shifted in recent decades? David Splinter tackles these questions in "U.S. Tax Progressivity and Redistribution" (National Tax Journal, December 2020, 73:4, 1005–1024).

It's worth emphasizing that any measure of tax progressivity is based on an underlying set of assumptions about what is counted as "income" or as "taxes." Let me give some examples: 

The Earned Income Tax Credit is "refundable." Traditional tax credits can reduce the taxes you owe down to zero, but a "refundable" credit means that you can qualify for a payment from the government above and beyond any taxes you owe: indeed, the purpose of this tax credit is to provide additional income, and an additional incentive to work, for low-income workers. This tax credit cost about $70 billion in 2019. But for purposes of categorization, here's a question: Should these payments from the federal government to low-income individuals be treated as part of the progressivity of the tax code? Or should they be treated as a federal spending program? Of course, treating them as part of the tax code tends to make the tax code look more progressive. 
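
For readers who want the mechanics spelled out, here is a tiny illustration of the difference between a refundable and a non-refundable credit, with made-up numbers:

    # Refundable vs. non-refundable tax credits, with hypothetical numbers
    def tax_after_credit(tax_owed, credit, refundable):
        if refundable:
            return tax_owed - credit          # can go below zero: a net payment to the filer
        return max(tax_owed - credit, 0.0)    # non-refundable: liability stops at zero

    tax_owed, credit = 1_000, 3_000
    print(tax_after_credit(tax_owed, credit, refundable=False))   # 0: part of the credit goes unused
    print(tax_after_credit(tax_owed, credit, refundable=True))    # -2000: a $2,000 payment to the filer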

Here's another example: When workers pay taxes for Social Security and Medicare, employers also pay a matching amount. However, a body of research strongly suggests that the amount paid by employers leads to lower wages for workers; in effect, workers "pay" the employer share of the payroll tax in the form of lower take-home pay, even if employers sign the check. (After all, employers care about the total cost of hiring a worker. They don't care whether that money is paid directly to the worker or whether some of it must be paid to the government.) So when looking at the taxes workers pay, should the employer share of the payroll tax be included? 

Here's another example: Many of us have retirement accounts, where our employer signs the checks for the contributions to those accounts and the total in the account is invested in a way that provides a return over time. Do the employer contributions to retirement get counted as part of annual income? What about any returns earned over time? 

Or here's another example: Say that I own a successful business. There are a variety of ways I can benefit from owning the business other than the salary I receive: for example, the business might buy me a life insurance policy, or pay for a car or other travel expenses, or make donations to charities on my behalf. Are these counted as income? 

There are many questions like these, and as a result, measurements of average taxes paid for each income group will vary. Here's a selection of six recent estimates, as collected by Splinter: 

Again, these are just federal tax rates, not including state and local taxes. Notice that the estimates vary, but also that they are broadly similar. 

One way to boil down the progressivity of the federal tax code into a single number is to use the Kakwani index. The diagram illustrates how it works. The horizontal axis is a cumulative measure of all individuals; the vertical axis is a cumulative measure of either income received or taxes paid by society as a whole. The dashed 45-degree line shows what complete equality would look like: that is, along that line, the bottom 20 percent of individuals get 20 percent of income and pay 20 percent of taxes, the bottom 40 percent get 40 percent of income and pay 40 percent of taxes, and so on. 

The idea is to compare the real-world distribution of income and taxes to this hypothetical line of perfect equality. The lighter solid line shows the distribution of income. Roughly speaking, the bottom 50 percent of individuals received about 20 percent of total income in 2016. The area from the lighter gray line to the 45-degree perfect-equality line measures what is called the "Gini index"--a standard measure of the inequality of the income distribution. 

The dark line carries out a similar calculation for the share of taxes paid. For example, the figure shows that the bottom 50 percent of the income distribution paid roughly 10 percent of total federal taxes in 2016. If the distribution of taxes paid exactly matched the distribution of income, the tax code would be proportional to income. Because the tax line falls below the income line, federal taxes are, overall, progressive. The area between the income line and the tax line is the Kakwani index, which measures the amount of progressivity. 
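
For the computationally inclined, here is a rough sketch of the calculation, using the common definition of the Kakwani index as the concentration coefficient of taxes minus the Gini coefficient of income. The quintile shares below are hypothetical, not Splinter's numbers.

    # Rough sketch of the Kakwani calculation with hypothetical quintile shares
    import numpy as np

    def concentration_coefficient(shares):
        # shares: each equal-sized group's share of the total (income or taxes),
        # with groups ordered from poorest to richest by income
        cum = np.concatenate(([0.0], np.cumsum(shares)))   # cumulative share curve
        pop = np.linspace(0.0, 1.0, len(cum))              # cumulative population share
        area_under_curve = np.sum((cum[1:] + cum[:-1]) / 2 * np.diff(pop))
        return 1.0 - 2.0 * area_under_curve                # twice the area between the 45-degree line and the curve

    income_shares = [0.05, 0.10, 0.15, 0.25, 0.45]   # hypothetical quintile shares of pre-tax income
    tax_shares    = [0.01, 0.05, 0.12, 0.22, 0.60]   # hypothetical quintile shares of taxes paid

    gini_income = concentration_coefficient(income_shares)
    conc_taxes = concentration_coefficient(tax_shares)
    kakwani = conc_taxes - gini_income   # positive: taxes are distributed more unequally than income, i.e. progressive
    print(f"Income Gini {gini_income:.2f}, tax concentration {conc_taxes:.2f}, Kakwani {kakwani:.2f}")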

How has the Kakwani index shifted over recent decades? Splinter writes:

Between 1979 and 1986, the Kakwani index decreased 34 percent (from 0.14 to 0.10) ... Between 1986 and 2016, the Kakwani index increased 120 percent (from 0.10 to 0.21) ... For the entire period between 1979 and 2016, the Kakwani index increased 46 percent (from 0.14 to 0.21) ...
In short, the progressivity of federal taxes fell early in the Reagan administration, rose fairly steadily up to about 2009, and then was more or less flat through 2016. 

Splinter uses the Congressional Budget Office estimates of income and taxes in making these calculations. CBO estimates are mainstream, but of course that doesn't place them beyond question. In particular, a key assumption here is that payments made through the Earned Income Tax Credit are treated as part of the tax system, rather than as a spending program, and that explains a lot of why the progressivity of the tax code increased by this measure. As Splinter writes: 

U.S. federal taxes have become more progressive since 1979, largely due to more generous tax credits for lower income individuals. Though top statutory rates fell substantially, this affected few taxpayers and was offset by decreased use of tax shelters, such that high-income average tax rates have been relatively stable. ... Over the longer run, earlier decreases suggest a U-shaped tax progressivity curve since WWII, with the minimum occurring in 1986.
For more arguments and details about how to measure income and wealth, a useful starting point is a post from a couple of months ago on "What Should be Included in Income Inequality?" (December 23, 2020). 

Thursday, February 11, 2021

Judges and Ideology

When judges are going through confirmation hearings, they tend to make comments about how they will act as a neutral umpire, not taking sides and simply following the law. As one representative example, here's a comment from the statement of current US Supreme Court Chief Justice John Roberts when he was nominated back in 2005:
I have no agenda, but I do have a commitment. If I am confirmed, I will confront every case with an open mind. I will fully and fairly analyze the legal arguments that are presented. I will be open to the considered views of my colleagues on the bench, and I will decide every case based on the record, according to the rule of law, without fear or favor, to the best of my ability, and I will remember that it’s my job to call balls and strikes, and not to pitch or bat.
I will not here try to peer inside the minds of judges and determine the extent to which such statements are honest or cynical. But I will point out that there is strong evidence that many judicial decisions have a real ideological component, in the sense that it's easy to find judges who reach systematically different conclusions, even when they have all promised to follow the rule of law without fear or favor. The Winter 2021 issue of the Journal of Economic Perspectives includes two papers on this topic: one by Adam Bonica and Maya Sen on how judicial ideology can be measured, and one by Daniel Hemel on possible structural reforms of the Supreme Court. 
Bonica and Sen lay out the many approaches social scientists have used to measure judicial ideology. I'll mention some of the approaches here. It will be immediately obvious that none of the approaches is bulletproof. But the key point to remember is that when one compares these rather different ways of measuring judicial ideology, one gets reasonably similar answers about which judges fall into which categories. 

 For example, the Supreme Court Database at Washington University in St. Louis classifies all Supreme Court decisions back to 1946 using various rules: 
As an example, the liberal position on criminal cases would be the one generally favoring the criminal defendant; in civil rights cases, the liberal position would be the one favoring the rights of minorities or women, while in due process cases, it would be the anti-government side. For economic activity cases—which make up a perhaps surprisingly large share of the Supreme Court’s docket—the liberal position will be the pro-union, anti-business, or pro-consumer stance. For cases involving the exercise of judicial power or issues of federalism, the liberal position would be the one aligned with the exercise of federal power, although this may depend on the specific issues involved. Finally, some decisions are categorized as “indeterminate,” such as a boundary dispute between states.
Another approach looks at the process by which a judge is appointed, which can include the party of the president doing the appointing; for federal judges appointed to district or appeals courts, one might also take into account the party of the US senators from that area. A more sophisticated version of this approach seeks to estimate the ideology of the president or the senators involved, thus recognizing that not all Republicans and Democrats are identical. Yet another approach categorizes judges according to the political campaign contributions they made before being appointed, or according to the contributions made by those the judges choose as law clerks. Another line of research looks at newspaper editorials about Supreme Court nominees during their confirmation hearings, and how they match up with other measures like the categories above. There is also some recent work using text-based analysis to categorize the ideology of judges according to their use of certain terms. 

Yet another approach ignores the content of judicial decisions and instead just looks at voting patterns. An approach called Martin-Quinn scores is based on the idea that the ideology of judges can be positioned along a line. At one extreme, if there were two groups of judges that always agreed within their own group but always disagreed with the other group, they would sit at opposite ends of the line. If there were some other judges who voted 50:50 with one extreme group or the other, they would be in the middle of the line. Using these kinds of calculations and the voting records for each term, one can even see how a judge may evolve over time away from the extremes and toward the middle, or vice versa. 
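
The actual Martin-Quinn scores come out of a Bayesian dynamic item response model, but the underlying intuition can be illustrated with a much simpler exercise: take a (hypothetical) matrix of votes and extract a one-dimensional position for each judge from nothing but who tends to vote with whom.

    # Crude one-dimensional scaling of a hypothetical vote matrix (illustration only;
    # the real Martin-Quinn scores use a Bayesian dynamic item response model)
    import numpy as np

    # Rows are judges, columns are cases; 1 = sided with coalition A, 0 = with coalition B
    votes = np.array([
        [1, 1, 1, 1, 1, 1, 0, 1],   # nearly always votes with bloc A
        [1, 1, 1, 1, 0, 1, 1, 1],
        [1, 0, 1, 0, 1, 0, 1, 0],   # splits roughly 50:50 between the blocs
        [0, 0, 0, 0, 1, 0, 0, 0],
        [0, 0, 0, 1, 0, 0, 0, 0],   # nearly always votes with bloc B
    ])

    # First principal component of the centered vote matrix: a one-dimensional
    # "position" for each judge, based only on agreement patterns
    centered = votes - votes.mean(axis=0)
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    positions = U[:, 0] * S[0]
    print(np.round(positions, 2))   # the two blocs land at opposite ends; the swing judge sits closer to the middle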

Here's what the Martin-Quinn scores for Supreme Court justices look like going back to 1946. Blue lines are judges appointed by Democrats; red lines, by Republicans. There is a different score for each judicial term, and the scores can thus evolve over time. The members of the Supreme Court as it stood before the appointment of Amy Coney Barrett last fall are labeled by name. Again, remember that these scores are not based on anyone making decisions about what is "conservative" or "liberal," but only on similarities in actual voting patterns. 
It's perhaps not a surprise that the red and blue lines tend to be separated. But looking back a few decades, you can also see some overlap in the red and blue lines. That overlap has now gone away. 

While research on the Supreme Court gets the most attention, there is also ongoing work looking at lower-level federal courts as well as state and local court systems. For example, a body of academic research points out that some judges are known to be less likely to grant bail or more likely to impose severe sentences. Given that judges are often assigned to cases at random, depending on which judge is available, this means that there are cases where very similar defendants are treated quite differently by the courts--just based on which judge they randomly got. For justice, this is a bad outcome. For researchers, it can be a useful tool for figuring out whether those who randomly got bail, or got a shorter sentence, have different long-term outcomes in terms of recidivism or other life outcomes than those who randomly did not get bail or ended up with a longer sentence. 

At some basic level, it isn't shocking that judges are human and have ideological differences. Indeed, sports fans will know that even when talking about referees, there are some who are more likely to call penalties, or among baseball umpires, some who are more likely to call strikes.  Indeed, the reason we need judges is because laws and their application are not fixed and indisputable. One might even argue that it's good to have a distribution of judges with at least somewhat different views, because that's how the justice system evolves. 

That said, is there some kind of judicial reform that might reduce the role of judicial ideology and/or turn down the temperature of judicial confirmation hearings? Hemel talks through a range of proposals, including ideas like a mandatory retirement age or fixed 18-year terms for Supreme Court justices. For various reasons, he's skeptical that the situation is as historically unique as is sometimes suggested, or that most of the proposals will make much difference. 

One of the interesting facts that Hemel points out along the way is that the US Supreme Court has experienced "the fall of the short-term justice." He writes: "Over the court’s history, 40 justices have served for ten years or less. None of these quick departures occurred in the last half-century. Several factors have contributed to the fall of the short-term justice. Fewer are dying young, and no justice since Fortas in 1969 has been forced to depart in disgrace. The justices also are now less likely to leave the court to pursue political careers in other branches. Contrast this with Charles Evans Hughes, who left the court in 1916 to accept the Republican nomination for president, and James Byrnes, who would serve as US Secretary of State and Governor of South Carolina in his post-judicial life." We have instead evolved to a situation where justices leave the court only because of infirmity or death. 

Hemel offers a different kind of reform that he characterizes as a "thought experiment," which is at least useful for expanding the set of policy options. The idea is to break the rule that a judge can only be added to the court when another judge leaves. Hemel writes:
Decoupling could be implemented as follows. Each president would have the opportunity to appoint two justices at the beginning of each term, regardless of how many vacancies have occurred or will occur. Those justices would join the bench at the beginning of the next presidential term. For example, President Trump, upon taking office in January 2017, would have had the opportunity to make two appointments. Those appointees—if confirmed—would receive their commissions in January 2021. The retirement or death of a justice would have no effect on the number of appointments the sitting president could make. Justices would continue to serve for life. Decoupling thus shares some similarities with the norm among university faculties, where senior members enjoy life tenure but the departure of one does not automatically and immediately trigger the addition of a new member.
The decoupling proposal would result in an equal allocation of appointments across presidential terms, though that is not its principal advantage. It would create new opportunities for compromise when the White House and Senate are at daggers drawn: Because appointments would come in pairs, a Democratic president could resolve an impasse with a Republican Senate (or vice versa) by appointing one liberal and one conservative. It would significantly reduce the risk that a substantial number of justices would be subject to the loyalty effect, since no more than two justices would ever be appointees of the sitting president (and only in that president’s second term). The loyalty effect could be eliminated entirely by modifying the plan so that justices receive their commission only after the president who appointed them leaves office (that is, if Trump had been reelected in 2020, none of his appointees would join the court until January 2025).
The plan would likely have a modest effect on the size of the court. The mean tenure of justices who have left the court in the last half-century (since 1970) is 26.4 years, though one might expect tenure to be shorter if appointees had to wait four (or eight) years between confirmation and commission. If justices join the court at a slightly faster rate than they depart, the gradual growth in the court’s size would be tolerable. ... A larger court would serve the objective sometimes cited by term-limit proponents of reducing the influence of any individual jurist’s idiosyncrasies over the shape of American law. It would also likely lessen the macabre obsession with the health of individual older justices.
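As a back-of-the-envelope check on that court-size arithmetic, here is a small simulation of the decoupling idea: two appointments at the start of each four-year presidential term, with tenures drawn around the recent 26-year average. The distributional assumptions are mine, purely for illustration, and I ignore the plan's four-year wait between confirmation and commission.

    # Back-of-the-envelope simulation of court size under the decoupling idea.
    # Assumptions (mine, illustrative): two appointments per 4-year term,
    # tenures drawn around the ~26-year recent average, appointees counted
    # as joining immediately.
    import random
    random.seed(0)

    def simulate(years=400, appointments_per_term=2, mean_tenure=26, sd_tenure=8):
        departure_years = []   # the year each sitting justice is expected to leave
        sizes = []
        for year in range(years):
            if year % 4 == 0:  # a new presidential term begins
                for _ in range(appointments_per_term):
                    tenure = max(1, round(random.gauss(mean_tenure, sd_tenure)))
                    departure_years.append(year + tenure)
            departure_years = [d for d in departure_years if d > year]  # drop departed justices
            sizes.append(len(departure_years))
        return sizes

    sizes = simulate()
    print("Long-run average court size:", round(sum(sizes[100:]) / len(sizes[100:]), 1))
    # Steady state is roughly (appointments per year) x (average tenure) = 0.5 x 26 = 13
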
Hemel also argues that these changes could be implemented via ordinary legislation. I don't have a well-developed opinion on this kind of proposal, but I had not heard the proposal before, and it seemed as worthy of consideration as some of the better-known ideas.