Thursday, February 25, 2021

India: Pivoting from the Pandemic to Economic Reforms

Each year, the Economic Division in India's Ministry of Finance publishes the Economic Survey of India; the most recent edition appeared in January 2021. The first volume is a set of chapters on different topics; the second volume is a point-by-point overview of the past year's developments in fiscal, monetary, and trade policy, along with developments in main sectors like agriculture, industry, and services. Here, I'll cherry-pick some points that caught my eye in looking over the first volume. 

Of course, any discussion of a country's economy in 2020 will start with the pandemic. All statements about what "worked" or "didn't work" during 2020 are of course subject to revision as events evolve. As a country with many low-income people living in high-density cities, and high absolute numbers of elderly people, India looked as if it might experience large health costs in the pandemic. But the report argues that, at least for 2020, India's COVID-19 response worked well. (For those not used to reading reports from India, "lakh" refers to 100,000, and a "crore" is 100 lakh, or 10 million.)
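Since several figures below are quoted in these units, here is a minimal sketch of the conversion arithmetic. The conversions themselves are standard; the test counts are the ones quoted from the Survey:

```python
# Indian numbering units: 1 lakh = 100,000; 1 crore = 100 lakh = 10,000,000.
LAKH = 100_000
CRORE = 100 * LAKH

# Figures quoted from the Survey, restated in Western notation:
daily_tests = 10 * LAKH        # "10 lakh tests were being conducted per day"
cumulative_tests = 17 * CRORE  # "cumulative testing of more than 17 crore"

print(f"{daily_tests:,} tests per day")          # 1,000,000 tests per day
print(f"{cumulative_tests:,} cumulative tests")  # 170,000,000 cumulative tests
```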

India was amongst the first of the countries that imposed a national lockdown when there were only 500 confirmed cases. The stringent lockdown in India from 25th March to 31st May was necessitated by the need to break the chain of the spread of the pandemic. This was based on the humane principle that while GDP growth will come back, human lives once lost cannot be brought back. 

The 40-day lockdown period was used to scale up the necessary medical and para-medical infrastructure for active surveillance, expanded testing, contact tracing, isolation and management of cases, and educating citizens about social distancing and masks, etc. The lockdown provided the necessary time to put in place the fundamentals of the '5 T' strategy - Test, Track, Trace, Treat, Technology. As the first step towards timely identification, prompt isolation & effective treatment, higher testing was recognized as the effective strategy to limit the spread of infection. At the onset of the pandemic in January, 2020, India did less than 100 COVID-19 tests per day at only one lab. However, within a year, 10 lakh tests were being conducted per day at 2305 laboratories. The country reached a cumulative testing of more than 17 crore in January, 2021. ... 

The districts across India, based on number of cases and other parameters were classified into red, yellow and green zones. Across the country, ‘hotspots’ and ‘containment zones’ were identified – places with higher confirmed cases increasing the prospect of contagion. This strategy was increasingly adopted for intensive interventions at the local level as the national lockdown was eased. ... 

India was successful in flattening the pandemic curve, pushing the peak to September. India managed to save millions of ‘lives’ and outperform pessimistic expectations in terms of cases and deaths. It is the only country other than Argentina that has not experienced a second wave. It has among the lowest fatality rates despite having the second largest number of confirmed cases. The recovery rate has been almost 96 per cent. India, therefore, seems to have managed the health aspect of COVID-19 well.

India's economy seems to have experienced a V-shaped recession, with a sharp decline during the 40-day lockdown period but then a return to pre-pandemic levels by the end of 2020. 

Other chapters in the report look at other issues that have become more salient as a result of the pandemic. For example, India's economy has labored for years under what has been called the "license raj," referring back to the British colonial period for a metaphor to describe how an extraordinarily intrusive level of licensing and regulation limits flexibility and growth in India's economy. 

Elements of the "license raj" still exist. As one example, the report notes: 

International comparisons show that the problems of India’s administrative processes derive less from lack of compliance to processes or regulatory standards, but from overregulation. ... [T]he issue of over-regulation is illustrated through a study of time and procedures taken for a company to undergo voluntary liquidation in India. Even when there is no dispute/ litigation and all paperwork is complete, it takes 1570 days to be struck off from the records. This is an order of magnitude longer than what it takes in other countries. ... 

The ‘World Rule of Law Index’ published by the World Justice Project provides cross-country comparison on various aspects of regulatory enforcement. The index has various sub-categories, which capture compliance to due processes, effectiveness, timelines, etc. In 2020, India’s rank is 45 out of 128 countries in the category of ‘Due process is respected in administrative proceedings’ (proxy for following due process). In contrast, in the category ‘Government regulations are effectively enforced’ (proxy for regulatory quality/effectiveness), the country’s rank is 104 (Table 1). India stands at 89th rank in ‘Administrative Proceedings are conducted without unreasonable delay’ (proxy for timeliness) and 107th in ‘Administrative Proceedings are applied and enforced without improper influence’ (proxy for rent seeking).

Another example looks back at some aftereffects of policies taken during the Great Recession back in 2008-2009. During that time, India's banking and financial regulators instituted a policy of "forbearance," meaning that they wouldn't crack down on financial institutions that were in a shaky position during a deep recession. This policy can make sense in the short-term: if regulators crack down on banks during a recession, it can propagate a deeper recession. But soon after the recession, this policy of forbearance needs to stop--and in India that's not what happened.  

During the GFC [global financial crisis], forbearance helped borrowers tide over temporary hardship caused due to the crisis and helped prevent a large contagion. However, the forbearance continued for seven years though it should have been discontinued in 2011, when GDP, exports, IIP [Index of Industrial Production] and credit growth had all recovered significantly. Yet, the forbearance continued long after the economic recovery, resulting in unintended and detrimental consequences for banks, firms, and the economy. Given relaxed provisioning requirements, banks exploited the forbearance window to restructure loans even for unviable entities, thereby window-dressing their books. The inflated profits were then used by banks to pay increased dividends to shareholders, including the government in the case of public sector banks. As a result, banks became severely undercapitalized. Undercapitalization distorted banks’ incentives and fostered risky lending practices, including lending to zombies. As a result of the distorted incentives, banks misallocated credit, thereby damaging the quality of investment in the economy. Firms benefitting from the banks’ largesse also invested in unviable projects. In a regime of injudicious credit supply and lax monitoring, a borrowing firm’s management’s ability to obtain credit strengthened its influence within the firm, leading to deterioration in firm governance. The quality of firms’ boards declined. Subsequently, misappropriation of resources increased, and the firm performance deteriorated. By the time forbearance ended in 2015, restructuring had increased seven times while NPAs [non-performing assets] almost doubled when compared to the pre-forbearance levels.

But with these kinds of ongoing issues duly noted, India has also seized the opportunity of the pandemic to carry out some long-promised structural reforms. For example, one change is that farmers are now allowed to sell their crops to anyone, anywhere, rather than being required to sell only to a designated local agency. Another issue of long standing is that India has long offered a range of subsidies to smaller firms, which sounds fine until you realize that a small firm contemplating growth into a larger firm would face losing its government subsidies. These size-based thresholds in labor regulations have been substantially loosened, and the number of regulations pared back. 

The increase in the size thresholds from 10 to 20 employees to be called a factory, 20 to 50 for contract worker laws to apply, and 100 to 300 for standing orders enable economies of scale and unleash growth. The drastic reductions in compliance stem from (i) 41 central labour laws being reduced to four, (ii) the number of sections falling by 60 per cent from about 1200 to 480, (iii) the maze due to the number of minimum wages being reduced from about 2000 to 40, (iv) one registration instead of six, (v) one license instead of four, and (vi) de-criminalisation of several offences.
In the next few years, it will be interesting to see if these changes make a real difference, or if they have just rearranged the furniture, with the same regulatory burden reconfigured. 

Another aftereffect of the pandemic is to raise the visibility of public health programs in India. These were already on the rise. For example, the report estimates that "an increase in public [health care] spend from 1 per cent to 2.5-3 per cent of GDP – as envisaged in the National Health Policy 2017 – can decrease the Out-Of-Pocket Expenditures from 65 per cent to 30 per cent of overall healthcare spend." There are programs to expand telemedicine and the infrastructure needed to support it. 

Also, India's government launched a program in 2018 aimed at providing more access to health care (which is mostly privately provided in India) to the low-income population. 
In 2018, Government of India approved the Ayushman Bharat Pradhan Mantri Jan Arogya Yojana (AB-PM-JAY) as a historic step to provide healthcare access to the most vulnerable sections in the country. Beneficiaries included approximately 50 crore individuals across 10.74 crores poor and vulnerable families, which form the bottom 40 per cent of the Indian population. The households were included based on the deprivation and occupational criteria from the Socio-Economic Caste Census 2011 (SECC 2011) for rural and urban areas respectively. The scheme provides for healthcare of up to INR 5 lakh per family per year on a family floater basis, which means that it can be used by one or all members of the family. The scheme provides for secondary and tertiary hospitalization through a network of public and empanelled private healthcare providers. It also provides for three days of pre-hospitalization and 15 days of posthospitalization expenses, places no cap on age and gender, or size of a family and is portable across the country. It covers 1573 procedures including 23 specialties (see Box 1 for details). AB-PM-JAY also aims to set up 150,000 health and wellness centres to provide comprehensive primary health care service to the entire population.
Finally, in India as in so many countries, there is often a policy question as to whether the country should be striving for additional economic growth or for a reduction in inequality; or more specifically, what the tradeoffs would be in prioritizing one of these goals over the other. The Survey looks at potential tradeoffs and data across the states. It finds that in the context of India, there doesn't seem to be a conflict. 

[T]he Survey examines if inequality and growth conflict or converge in the Indian context. By examining the correlation of inequality and per-capita income with a range of socio-economic indicators, including health, education, life expectancy, infant mortality, birth and death rates, fertility rates, crime, drug usage and mental health, the Survey highlights that both economic growth – as reflected in the income per capita at the state level –and inequality have similar relationships with socio-economic indicators. Thus, unlike in advanced economies, in India economic growth and inequality converge in terms of their effects on socio-economic indicators. Furthermore, this chapter finds that economic growth has a far greater impact on poverty alleviation than inequality. Therefore, given India’s stage of development, India must continue to focus on economic growth to lift the poor out of poverty by expanding the overall pie. Note that this policy focus does not imply that redistributive objectives are unimportant, but that redistribution is only feasible in a developing economy if the size of the economic pie grows.

For some previous posts on India's economy, see:

The first link discussed a three-paper "Symposium on India" in the Winter 2020 issue of the Journal of Economic Perspectives (where I work as Managing Editor). 

Wednesday, February 24, 2021

Robert J. Gordon: Thoughts on Long-Run US Productivity Growth

Leo Feler has a half-hour interview with Robert J. Gordon on "The Rise and Fall and Rise Again of American Growth"  (UCLA Anderson Forecast Direct, February 2021, audio and transcript available). The back-story here is that Gordon has been making the argument for some years now that modern innovations, like the rise of information technologies and the internet, have not had and will not have nearly the same size of effect on productivity as some of the major technologies of the past, like the spread of electricity or motor vehicles (for some background, see here and here). 

Here, Gordon makes a distinction worth considering between growth in productivity and growth in consumer welfare.
Let’s divide the computer age into two parts. One is the part that developed during the 1970s and 80s and came to fruition in the 1990s, with the personal computer, with faster mainframe computers, with the invention of the internet, and the transition of every office and every business from typewriters and paper to flat screens and the internet, with everything stored in computer memory rather than filing cabinets. That first part of the computer revolution brought with it the revival of productivity growth from the slow pace of the 70s and 80s to a relatively rapid 2.5% to 3% per year during 1995 to 2005. But unlike the earlier industrial revolution where 3% productivity growth lasted for 50 years, this time it only lasted for ten years. Most businesses now are doing their day-to-day operations with flat screens and information stored in the cloud, not all that different from how they did things in 2005. In the last 15 years, we’ve had the invention of smartphones and social networks, and what they’ve done is bring enormous amounts of consumer surplus to everyday people of the world. This is not really counted in productivity, it hasn’t changed the way businesses conduct their day-to-day affairs all that much, but what they have done is change the lives of citizens in a way that is not counted in GDP or productivity. It’s possible the amount of consumer welfare we’re getting relative to GDP may be growing at an unprecedented rate.
To understand the distinction here, say that you pay a certain amount for access to television and the internet. Now say that over time, the amount of content you can access in this way--including shows, games, shopping, communication with friends, education, health care advice, and so on--rises dramatically, while you continue to pay the same price for access. In a productivity sense, nothing has changed: you pay the same for access to television and internet as you did before. But from a consumer welfare perspective, the much greater array of more attractive and easier-to-navigate choices means that you are better off. 
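This distinction can be made concrete with a stylized calculation. The numbers below are purely hypothetical illustrations, not measurements: measured output tracks what is paid, while consumer surplus tracks the value received beyond the price.

```python
# Stylized sketch of Gordon's distinction (all numbers hypothetical).
# GDP and productivity record the price paid; consumer surplus is the gap
# between the value a consumer places on the content and that price.

subscription_price = 50.0  # monthly price for TV + internet, unchanged over time

# Hypothetical value of the accessible content to one consumer:
value_of_content = {2005: 60.0, 2020: 200.0}

for year, value in sorted(value_of_content.items()):
    measured_output = subscription_price           # what shows up in GDP
    consumer_surplus = value - subscription_price  # what GDP misses
    print(year, measured_output, consumer_surplus)
# Measured output is flat at 50.0 in both years, while consumer
# surplus grows from 10.0 to 150.0
```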

The expression "timepass" is sometimes used here. One of the big gains of information technology is that, for many people, it seems like a better way of passing the time than the alternatives. 

Gordon also points out that the shift to working from home and via the internet could turn out to involve large productivity gains. But as he also points out, a shift in productivity--literally, producing the same or more output with fewer inputs--is an inherently disruptive process for the inputs that get reduced. 
This shift to remote working has got to improve productivity because we’re getting the same amount of output without commuting, without office buildings, and without all the goods and services associated with that. We can produce output at home and transmit it to the rest of the economy electronically, whether it’s an insurance claim or medical consultation. We’re producing what people really care about with a lot less input of things like office buildings and transportation. In a profound sense, the movement to working from home is going to make everyone who is capable of working from home more productive. Of course, this leaves out a lot of the rest of the economy. It’s going to create severe costs of adjustments in areas like commercial real estate and transportation.
When asked about how to improve long-run productivity, Gordon's first suggestion is very early interventions for at-risk children:  
I would start at the very beginning, with preschool education. We have an enormous vocabulary gap at age 5, between children whose parents both went to college and live in the home and children who grow up in poverty often with a single parent. I’m all for a massive program of preschool education. If money is scarce, rather than bring education to 3 and 4 year olds to everyone in the middle class, I would spend that money getting it down as low as age 6 months for the poverty population. That would make a tremendous difference. ... This isn’t immediate. These children need to grow into adults. But if we look out at what our society will be like 20 years from now, this would be the place I would start.
For some of my own thoughts on very early interventions, well before a conventional pre-K program, see here, here, and here. 

Tuesday, February 23, 2021

Including Illegal Activities in GDP: Drugs, Prostitution, Gambling

The current international standards for how a country should compute its GDP suggest that illegal activities should be included. Just how to do this, given the obvious problems in collecting statistics on illegal activity, isn't clear. The US Bureau of Economic Analysis does not include estimates of illegal activities in GDP. However, there is ongoing research on the subject, described by Rachel Soloveichik in "Including Illegal Market Activity in the U.S. National Economic Accounts" (Survey of Current Business, February 2021).

It's perhaps worth noting up front that crime itself is not included in GDP. If someone steals from me, there is an involuntary and illegal redistribution, but GDP measures what is produced. Both public and private expenditures related to discouraging or punishing crime are already included in GDP. This is of course one of the many reasons why GDP should not be treated as a measure of social welfare: that is, social welfare would clearly be improved if crime was lower and money spent on discouraging and punishing crime could instead flow to something that provides positive pleasures and benefits. 

Thus, adding illegal activities to GDP requires adding the actual production of goods and services which are illegal. Soloveichik focuses on "three categories of illegal activity: drugs, prostitution, and gambling." 
These three categories are not equal in their recent economic impact. Consumer spending on illegal drugs was $153 billion in 2017, compared to $4 billion on illegal prostitution and $11 billion on illegal gambling in the same year. Furthermore, tracking illegal drugs raises the average real GDP growth rate between 2010 and 2017 by 0.05 percentage point per year and raises the average private-sector productivity growth rate between 2010 and 2016 by 0.11 percentage point per year. In contrast, neither tracking illegal prostitution nor tracking illegal gambling has much influence on recent growth rates.
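The order of magnitude of that 0.05 percentage point figure can be sanity-checked with share-weighted growth arithmetic. The shares and growth rates below are hypothetical round numbers chosen only to illustrate the mechanism; they are not Soloveichik's estimates.

```python
# Aggregate growth is roughly the share-weighted sum of component growth
# rates, so a small but fast-growing component nudges the aggregate up.
# All numbers here are hypothetical illustrations.

share_drugs = 0.008   # suppose illegal drugs are ~0.8% of measured spending
growth_drugs = 0.08   # suppose that category grows at 8% per year
growth_rest = 0.02    # suppose everything else grows at 2% per year

growth_with = share_drugs * growth_drugs + (1 - share_drugs) * growth_rest
growth_without = growth_rest

print(f"{(growth_with - growth_without) * 100:.3f} percentage points")
# With these made-up numbers, the aggregate growth rate rises by roughly
# 0.05 percentage point, the same order of magnitude as in the article.
```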
To me, the most interesting part of the essay is about some historical patterns of spending on illegal activities and drug prices. For example, here's a figure showing spending on illegal drugs over time. The line to the far right shows spending on alcohol during Prohibition. The very high level of spending in the 1980s is especially striking, remembering that you need to add the different categories of illegal drugs to get the total. 

Soloveichik writes: 

Chart 1 shows that the expenditure shares for all three broad categories of illegal drugs grew rapidly after 1965 and peaked around 1980. In total, this analysis calculates that illegal drugs accounted for more than 5 percent of total personal consumption expenditures in 1980. This high expenditure share is consistent with contemporaneous news articles and may explain why BEA chose to study the underground economy in the early 1980s (Carson 1984a, 1984b). Chart 1 also shows that illegal alcohol during Prohibition accounted for almost as large a share of consumer spending as illegal drugs in 1980 and changed faster. Measured nominal growth in 1934, the first year after Prohibition ended, is badly overestimated when illegal alcohol is excluded from consumer spending.

Here's a similar graph for total spending on illegal prostitution and gambling services. Spending on gambling was especially high up until about the 1960s, when first legal state lotteries and then casinos arrived. 
It may seem counterintuitive that the US can be suffering through an opioid epidemic in the last couple of decades, but still have what looks like relatively low spending on illegal drugs. But remember that the start of the opioid epidemic up to about 2010 largely involved legally sold prescription drugs (as discussed here and here)--which would have been included in GDP. Total spending is a combination of quantity purchased and price. In addition, price must be adjusted for quality. Thus, what the data shows is that we are living in a time of cheap and powerful heroin and fentanyl. As Soloveichik writes: 
Opioid potency has rapidly increased due to the recent practice of mixing fentanyl, an extremely powerful opioid, with heroin. Marijuana potency has gradually increased due to new plant varieties that contain higher concentrations of the main psychoactive chemical in marijuana, tetrahydrocannabinol (THC).
With those patterns taken into account, here's a figure showing estimated drug prices over time, relative to the prices for legal consumption goods. Drug prices for opioids and stimulants fell sharply in the 1980s and have more or less stayed at that lower level since then, which makes the rise in nominal expenditures on drugs shown above even more striking. 


Soloveichik writes: "Readers should also note that illegal drugs are a large enough spending category to influence aggregate inflation. Between 1980 and 1990, average personal consumption expenditures price growth falls by 0.7 percentage point per year when illegal activity is tracked in the NIPAs."

If you are interested in data sources for these illegal goods and services and what assumptions are needed to estimate prices and output levels, this article is a good place to start. 

Monday, February 22, 2021

The Dependence of US Higher Education on International Students

US higher education in recent decades has become ever more dependent on rising inflows of international students--a pattern that was already likely to slow down and is now being dramatically interrupted by the pandemic. John Bound, Breno Braga, Gaurav Khanna, and Sarah Turner describe these shifts in "The Globalization of Postsecondary Education: The Role of International Students in the US Higher Education System" (Journal of Economic Perspectives, Winter 2021, 35:1, 163-84). They write: 
For the United States, which has a large number of colleges and universities and a disproportionate share of the most highly ranked colleges and universities in the world, total enrollment of foreign students more than tripled between 1980 and 2017, from 305,000 to over one million students in 2017 (National Center for Education Statistics 2018). This rising population of students from abroad has made higher education a major export sector of the US economy, generating $44 billion in export revenue in 2019, with educational exports being about as big as the total exports of soybeans, corn, and textile supplies combined (Bureau of Economic Analysis 2020).
Here's a figure showing the rise in international students from 2000-2017. Notice in particular the sharp rise in international students in master's degree programs. 
Bound and co-authors write: 
[F]oreign students studying at the undergraduate level are most numerous at research-intensive public universities (about 32 percent of all bachelor’s degrees), though they also enroll in substantial numbers at non-doctorate and less selective private and public institutions. ... The concentration of international students in master’s programs in the fields of science, technology, engineering, and mathematics is even more remarkable: for example, in 2017 foreign students received about 62 percent of all master’s degrees in computer science and 55 percent in engineering. ... Many large research institutions now draw as much as 20 percent of their tuition revenue from foreign students (Larmer 2019).
This table shows destinations of international students from China, India, and South Korea, three of the major nations for sending students to the US. 
However, Bound and co-authors note that the US lead as a higher education destination has been diminishing: "Although the United States remains the largest destination country for students from these countries, the US higher education system is no longer as dominant as it was 20 years ago. As an illustration, student flows from China to the United States were more than 10 times larger than the flows to Australia and Canada in 2000; by 2017, those ratios fell to 2.5 to 1 and 3.3 to 1, respectively."

This pattern of rising international enrollments in US higher ed was not likely to continue on its pre-pandemic trajectory. Other countries have been building up their higher education options. In addition, if you were a young entrepreneur or professional from China or India, the possibilities for building your career in your home country look a lot better now than they did, say, back in about 1990. But the pandemic has taken what would have been a slower-motion squeeze on international students coming to US higher education and turned it into an immediate bite. Bound and co-authors write: 
Visas for the academic year are usually granted between March (when admissions decisions are made) and September (when semesters begin). Between 2017 and 2019, about 290,000 visas were granted each year over these seven months (United States Department of State 2020). Between March and September 2020, only 37,680 visas were granted—an extraordinary drop of 87 percent. Visas for students from China dropped from about 90,000 down to only 943 visas between March and September 2020. A fall 2020 survey of 700 higher education institutions found that one in five international students were studying online from abroad in response to the COVID-19 pandemic. Overall, new international enrollment (including those online) decreased by 43 percent, with at least 40,000 students deferring enrollment (Baer and Martel 2020).
Overall, it seems to me an excellent thing for the US higher education system and the US economy to attract talent from all over the world. But even if you are uncertain about those benefits, it is an arithmetic fact that the sharp declines in international students are going to be a severe blow to the finances of US higher education. 

Saturday, February 20, 2021

The Minimum Wage Controversy

Why has the economic research of the last few decades had a hard time getting a firm handle on the effects of minimum wages? The most recent issue of the Journal of Economic Perspectives (where I have worked as managing editor for many years) includes a set of four papers that bear on the subject. The short answer is that the effects of a higher minimum wage are likely to vary by time and place, and are likely to include many effects other than reduced employment. In this post, I'll offer a longer elaboration. For reference, the four JEP papers are:

Manning starts his paper by pointing out that mainstream views on the minimum wage have shifted substantially in the last 30 years or so. He writes: 

Thirty years ago, ... there was a strong academic consensus that the minimum wage caused job losses and was not well-targeted on those it set out to help, and that as a result, it was dominated by other policies to help the working poor like the Earned Income Tax Credit. ... [P]olicymakers seemed to be paying attention to the economic consensus of the time: for example, in 1988 the US federal minimum wage had not been raised for almost a decade and only 10 states had higher minima. Minimum wages seemed to be withering away in other countries too. ... In 1994, the OECD published its view on desirable labor market policies in a prominent Jobs Study report, recommending that countries “reassess the role of statutory minimum wages as an instrument to achieve redistributive goals and switch to more direct instruments” (OECD 1994).

The landscape looks very different today. ... In the United States, the current logjam in Congress means no change in the federal minimum wage is immediately likely. However, 29 states plus Washington, DC have a higher minimum wage. A number of cities are also going their own way, passing legislation to raise the minimum wage to levels (in relation to average earnings) not seen for more than a generation ... Outside the United States, countries are introducing minimum wages (for example, Hong Kong in 2011 and Germany in 2015) or raising them (for example, the introduction of the United Kingdom’s National Living Wage in 2016, a higher minimum wage for those over the age of 25). Professional advice to policymakers has changed too. A joint report from the IMF, World Bank, OECD, and ILO in 2012 wrote “a statutory minimum wage set at an appropriate level may raise labour force participation at the margin, without adversely affecting demand, thus having a net positive impact especially for workers weakly attached to the labour market” (ILO 2012). The IMF (2014) recommended to the United States that “given its current low level (compared both to US history and international standards), the minimum wage should be increased.” The updated OECD (2018) Job Strategy report recommended that “minimum wages can help ensure that work is rewarding for everyone” (p. 9) and that “when minimum wages are moderate and well designed, adverse employment effects can be avoided” (p 72).

Why the change? From a US point of view, one reason is surely that the real (inflation-adjusted) level of the minimum wage peaked back in 1968. Thus, it makes some intuitive sense that studies looking at labor market data from the 1960s and 1970s would tend to find big effects of a higher minimum wage, but that as the real value of the federal minimum wage declined over time, later studies would tend to find smaller effects. Here's a figure from the Fishback and Seltzer paper showing the real (solid yellow) and nominal (dashed blue) value of the minimum wage over time: 
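The real-versus-nominal comparison in that figure rests on a simple deflation step. As a sketch, using the actual 1968 federal minimum of $1.60 but only approximate CPI values (so the result is a rough illustration, not an official series):

```python
# Express a past nominal wage in recent dollars by scaling with the CPI ratio.

def real_wage(nominal_wage, cpi_then, cpi_now):
    """Nominal wage from an earlier year restated at a later year's price level."""
    return nominal_wage * cpi_now / cpi_then

# The 1968 federal minimum wage was $1.60/hour; CPI levels are approximate.
cpi_1968, cpi_2020 = 34.8, 258.8
print(f"${real_wage(1.60, cpi_1968, cpi_2020):.2f}")  # roughly $11.90 in 2020 dollars
```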

Another long-recognized problem in trying to get evidence about the effects of the minimum wage based on changes over time is that lots of other factors affect the labor market, too. For example, the dashed blue line shows that the most recent jump in the federal minimum wage was phased in from 2007 to 2009. Trying to disentangle the effects of that rise in the minimum wage from the effects of the Great Recession is likely a hopeless task. 

One more problem with studying the effects of minimum wage changes over time is that who actually receives the minimum wage has been shifting. Manning offers this table. It shows, for example, that teenagers accounted for 32.5% of the total hours of minimum wage workers in 1979, but now account for only 9.6% of those hours. 

Rather than trying to dig out lessons from changes in the gradually declining real minimum wage over time, lots of research in the last few decades has instead tried to look at US states or cities where the minimum wage increased over time. Then the study either does a before-and-after comparison of trends, or looks for a comparison location where the minimum wage didn't rise. 

But this kind of analysis is subject to the basic problem that the states or cities that choose to raise their minimum wages are not randomly selected. They are usually places where average wages and wages for low-skill workers are already higher. As an extreme example, the minimum wage in various cities near the heart of Silicon Valley (Palo Alto, San Francisco, Berkeley, Santa Clara, Mountain View, Sunnyvale, Los Altos) is already above $15/hour. But in general, wages are also much higher in those areas. Asking whether these higher minimum wages reduced low-skill or low-wage employment in these cities is an interesting research topic, but no sensible person would extrapolate the answers from Silicon Valley to how a $15/hour minimum wage would affect employment in, say, Mississippi, where half of all workers in the state earn less than $15/hour. 

Many additional complexities arise. Clemens goes through many of the possibilities in his paper. Here are some of them. 

1) Economists commonly divide workers into the "tradeable" and the "nontradeable" sector. An example of a "nontradeable" service is working at a coffee shop, where you compete against other coffee shops in the same immediate area, but not against coffee shops in other states or countries. A "tradeable" good might be produced in a manufacturing job where your output is shipped to other locations, so you do compete directly against producers from other locations. 

If you work in a tradeable-sector job and the state-level or local-level minimum wage rises, it may cause real problems for the firm, which is competing against outsiders. But many low-skilled jobs are in the "nontradeable" sector: food, hotels, and others. In those situations, a rise in the minimum wage means higher costs for all the local competing firms--in which case it will be easier to pass those costs along to consumers in the form of higher prices. Of course, if an employer can pass along the higher minimum wage to consumers, any employment effects may be muted. 
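As a rough back-of-the-envelope on pass-through: if labor is a given share of a firm's costs and only a fraction of hours are paid near the minimum, full pass-through raises prices by roughly the cost-share-weighted wage increase. A sketch, with purely illustrative numbers:

```python
# Back-of-the-envelope price pass-through from a minimum wage hike.
# All shares below are invented for illustration.

labor_cost_share = 0.30   # labor as a share of the firm's total costs
affected_share = 0.50     # share of labor hours paid at or near the minimum
wage_increase = 0.20      # a 20% minimum wage rise

# Under full pass-through to consumers, the implied price increase is
# roughly the wage rise weighted by its share of total costs:
price_increase = labor_cost_share * affected_share * wage_increase
print(f"Implied price increase: {price_increase:.1%}")
# prints "Implied price increase: 3.0%"
```

If all local competitors face the same cost shock, a price rise of this modest size can absorb much of the wage increase, which is one reason employment effects in nontradeable sectors can be muted.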

2) An employer faced with a higher minimum wage might try to offset this change by paying lower benefits (vacation, overtime, health insurance, life insurance, and so on). The employer might also try to get more output from workers by, for example, offering less job flexibility or pushing them harder in the workplace. 

3) A higher minimum wage means an increased incentive for employers and workers to break the law and to evade the minimum wage. Clemens cites one "analysis of recent minimum wage changes [which] estimates that noncompliance has averaged roughly 14 to 21 cents per $1 of realized wage gain."

4) An employer faced with a higher minimum wage might, for a time, not have many immediate options for adjustment. But over a timeframe of a year or two, the employer might start figuring out ways to substitute high-paid labor for the now pricier minimum wage labor, or to look for ways of automating or outsourcing minimum wage jobs. Any study that focuses on effects of a minimum wage during a relatively small time window will miss these effects. But any study that tries to look at long-run effects of minimum wage changes will find that many other factors are also changing in the long run, so sorting out just the effect of the minimum wage will be tough. 

5) A higher minimum wage doesn't just affect employers, but also affects workers. A higher wage means that workers are likely to work harder and less likely to quit. Thus, a firm that is required to pay a higher minimum wage might recoup a portion of that money from lower costs of worker turnover and training. 

There is ongoing research on all of these points. There is some evidence supporting each of them and some evidence against, and the evidence again often varies by place, time, occupation, and which comparison group is used. The markets for supply and demand of labor are complicated places. 

I don't mean to whine about it, but figuring out the effects of a higher minimum wage from the existing evidence is a genuinely difficult task. But of course, no one weeps for the analytical problems of economists. Most people just want a bottom line on whether a $15/hour minimum wage is good or bad, so that they know whether to treat you as friend or foe--depending on whether you agree with their own predetermined beliefs. I'm not a fan of playing that game, but here are a few thoughts on the overall controversy. 

  • It's worth remembering the old adage that "absence of evidence is not evidence of absence." That is, just because it's hard to provide ironclad statistical proof that a minimum wage reduces employment doesn't prove that the effect is zero--it just means that getting strong evidence is hard. 
  • Since the federal minimum wage was enacted in the 1930s, some states have always set a higher minimum wage; the more recent shift is toward cities setting a higher minimum wage than their state. Thus, the effects of raising the federal minimum wage to $15/hour will not (mostly) be felt in places where the minimum wage is already at or near that level: instead, they will be felt in all the other locations. 
  • Many minimum wage workers are also part-time workers. Thus, it's easy to imagine an example where, say, the minimum wage rises 20% but a certain person's hours worked are cut by 10%. This is a situation where the minimum wage led to fewer hours worked, but the worker still has higher annual income.  
  • To the extent that a higher minimum wage does affect the demand for low-skilled labor, such effects will be less perceptible in a strong or growing economy when employment is generally expanding for other reasons, and more perceptible in a weak or recessionary economy, when fewer firms are looking to hire. 
  • Everyone agrees that a smaller rise in the minimum wage will have smaller effects, and a larger rise in the minimum wage will have larger effects. I know a number of liberal-leaning, Democratic-voting economists who are just fine with the tradeoffs of raising the federal minimum wage to some extent, but who also think that a rise to $15/hour for the national minimum wage (as opposed to the minimum wage in high-wage cities and states) is too much. 
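The arithmetic in the part-time bullet above is worth making concrete: a 20% wage rise combined with a 10% cut in hours still raises earnings, since 1.20 × 0.90 = 1.08. A tiny sketch with made-up numbers:

```python
# Worked example: a minimum wage rise paired with a cut in hours.
# Wage and hours figures are invented for illustration.

old_wage, old_hours = 10.00, 30        # $/hour and hours/week before the change
new_wage = old_wage * 1.20             # minimum wage rises 20%
new_hours = old_hours * 0.90           # employer trims hours 10%

old_weekly = old_wage * old_hours      # $300.00
new_weekly = new_wage * new_hours      # $324.00

print(f"Weekly earnings: ${old_weekly:.2f} -> ${new_weekly:.2f}")
# prints "Weekly earnings: $300.00 -> $324.00"
```

The worker's earnings are 8% higher even though hours fell, which is why "effect on hours" and "effect on incomes" can point in different directions.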

True gluttons for punishment who have read this far may want some recent minimum wage studies to look at. In this case at least, your wish is my command: 

"Wages, Minimum Wages, and Price Pass-Through: The Case of McDonald’s Restaurants," by Orley Ashenfelter and Štěpán Jurajda (Princeton University Industrial Relations Section, Working Paper #646, January 2021). "We find no association between the adoption of labor-saving touch screen ordering technology and minimum wage hikes. Our data imply that McDonald’s restaurants pass through the higher costs of minimum wage increases in the form of higher prices of the Big Mac sandwich."

"Myth or Measurement: What Does the New Minimum Wage Research Say about Minimum Wages and Job Loss in the United States?" by David Neumark and Peter Shirley (National Bureau of Economic Research,  Working Paper 28388,  January 2021). "We explore the question of what conclusions can be drawn from the literature, focusing on the evidence using subnational minimum wage variation within the United States that has dominated the research landscape since the early 1990s. To accomplish this, we assembled the entire set of published studies in this literature and identified the core estimates that support the conclusions from each study, in most cases relying on responses from the researchers who wrote these papers. Our key conclusions are: (i) there is a clear preponderance of negative estimates in the literature; (ii) this evidence is stronger for teens and young adults as well as the less-educated; (iii) the evidence from studies of directly-affected workers points even more strongly to negative employment effects; and (iv) the evidence from studies of low-wage industries is less one-sided."

"Seeing Beyond the Trees: Using Machine Learning to Estimate the Impact of Minimum Wages on Labor Market Outcomes," by Doruk Cengiz, Arindrajit Dube, Attila S. Lindner and David Zentler-Munro (National Bureau of Economic Research Working Paper 28399, January 2021). "We apply modern machine learning tools to construct demographically-based treatment groups capturing around 75% of all minimum wage workers—a major improvement over the literature which has focused on fairly narrow subgroups where the policy has a large bite (e.g., teens). By exploiting 172 prominent minimum wages between 1979 and 2019 we find that there is a very clear increase in average wages of workers in these groups following a minimum wage increase, while there is little evidence of employment loss. Furthermore, we find no indication that minimum wage has a negative effect on the unemployment rate, on the labor force participation, or on the labor market transitions."

"The Budgetary Effects of the Raise the Wage Act of 2021," Congressional Budget Office (February 2021). "CBO projects that, on net, the Raise the Wage Act of 2021 would reduce employment by increasing amounts over the 2021–2025 period. In 2025, when the minimum wage reached $15 per hour, employment would be reduced by 1.4 million workers (or 0.9 percent), according to CBO’s average estimate. In 2021, most workers who would not have a job because of the higher minimum wage would still be looking for work and hence be categorized as unemployed; by 2025, however, half of the 1.4 million people who would be jobless because of the bill would have dropped out of the labor force, CBO estimates. Young, less educated people would account for a disproportionate share of those reductions in employment."

Thursday, February 18, 2021

Rural Poverty

Rural poverty is often overlooked. In the Spring 2021 issue of the Stanford Social Innovation Review, Robert Atkins, Sarah Allred, and Daniel Hart discuss "Philanthropy’s Rural Blind Spot," about how philanthropies have typically put much more time and attention on urban poverty than rural poverty. They write: 

Most large foundations are located in metropolitan areas and have built relationships with institutions and organizations in those communities. ... [M]any grant makers assume that urban centers have higher rates of poverty than rural areas. Moreover, many funders believe that they maximize impact and do more good when their grants go to addressing distress in densely populated areas. The rates of poverty, however, are higher in rural areas than in urban areas. In addition, it would be difficult to demonstrate that a grant going to a metropolitan community to improve high school graduation rates, increase the food security of agricultural workers, or reduce childhood lead poisoning assists a greater number of individuals than if the same grant goes to a nonmetropolitan community. In other words, giving to more densely populated areas does not clearly result in a greater equity return on investment for the grant maker.
The authors point to a resource with which I had not been familiar, the Multidimensional Index of Deep Disadvantage produced by H. Luke Shaefer, Silvia Robles, and Jasmine Simington of the University of Michigan, using methods also developed by Kathryn Edin and Tim Nelson at Princeton University. They collect a combination of economic, health, and social mobility data on counties and the 500 largest cities in the United States. You can find an interactive map at the website, along with a full list of the 3,617 areas. They then rank the areas. In an overview of the results, Shaefer, Edin, and Nelson write:

When we turn the lens of disadvantage from the individual to the community, we find that five geographic clusters of deep disadvantage come into view: The Mississippi Delta, The Cotton Belt, Appalachia, the Texas/Mexico border, and a small cluster of rust belt cities (most notably Flint, Detroit, Gary, and Cleveland). Many Native Nations also score high on our index though are not clustered for historic reasons. ...

The communities ranking highest on our index are overwhelmingly rural. Among the 100 most deeply disadvantaged places in the United States according to our index, only 9 are among the 500 largest cities in the United States, which includes those with populations as small as 42,000 residents. In contrast, 19 are rural counties in Mississippi. Many of the rural communities among the top 100 places have only rarely, if ever, been researched. Conversely, Chicago, which has been studied by myriad poverty scholars, doesn’t even appear among the top 300 in our index. Our poverty policies suffer when social science research misses so many of the places with the greatest need. ...

How deep is the disadvantage in these places? When we compare the 100 most disadvantaged places in the United States to the 100 most advantaged, we find that the poverty rate and deep poverty are both higher by greater than a factor of six. Life expectancy is shorter by a full 10 years, and the incidence of low infant birthweight is double. In fact, average life expectancy in America’s most disadvantaged places, as identified by our index, is roughly comparable to what is seen in places such as Bangladesh, North Korea, and Mongolia, and infant birth weight outcomes are similar to those in Congo, Uganda, and Botswana.

It should be noted that a list of this sort is not an apples-to-apples comparison, in part because the population sizes of the areas are so very different. Many counties have only a few thousand people, while many cities have hundreds of thousands, or more. Thus, the data for a city will average out both better-off and worse-off areas, while a low-population, high-poverty rural county may not have any better-off places. 

But the near-invisibility of rural poverty in our national discourse is still striking. For example, when talking about improving education and schooling, what should happen with isolated rural schools rarely makes the list.  When talking about how to assure that people have health insurance, the issues related to people who are a long way from a medical facility are often not on the list of topics. When talking about raising the national minimum wage to $15/hour, much of the discussion seems to assume an area relatively dense in population, employers, and jobs, where various job-related adjustments can take place, not a geographically isolated and high-poverty area with few or no major employers. These issues aren't new. Many of the current high-poverty areas (rural and urban) have been poor for decades.

Wednesday, February 17, 2021

Robert Shiller on Narrative Economics

Robert J. Shiller (Nobel '13) delivered the Godley-Tobin Lectures, an annual lecture delivered at the Eastern Economic Association meetings, on the subject of “Animal spirits and viral popular narratives” (Review of Keynesian Economics, January 2021, 9:1, pp. 1-10).

Shiller has been thinking about the intertwining of economics and narrative at least since his presidential address to the American Economic Association back in 2017. He suggests, for example, that the key feature distinguishing humans may be our propensity to organize our thinking into stories, rather than just intelligence per se. Indeed, there are many examples in all walks of life (politics, investing, expectations of family life, careers, reactions to a pandemic) where people will often cleave to their preferred narrative rather than continually question and challenge it with their intelligence. He begins the current essay in this way: 

John Maynard Keynes's (1936) concept of ‘animal spirits’ or ‘spontaneous optimism’ as a major driving force in business fluctuations was motivated in part by his and his contemporaries' observations of human reactions to ambiguous situations where probabilities couldn't be quantified. We can add that in such ambiguous situations there is evidence that people let contagious popular narratives and the emotions they generate influence their economic decisions. These popular narratives are typically remote from factual bases, just contagious. Macroeconomic dynamic models must have a theory that is related to models of the transmission of disease in epidemiology. We need to take the contagion of narratives seriously in economic modeling if we are to improve our understanding of animal spirits and their impact on the economy.
Thus, this lecture emphasizes the parallels between how narratives spread and epidemiology models of how diseases spread:
Mathematical epidemiology has been studying disease phenomena for over a century, and its frameworks can provide an inspiration for improvement in our understanding of economic dynamics. People's states of mind change through time, because ideas can be contagious, so that they spread from person to person just as diseases do. ...

We humans live our lives in a sea of epidemics all at different stages, including epidemics of diseases and epidemics of narratives, some of them growing at the moment, some peaking at the moment, others declining. New mutations of both the diseases and the narratives are constantly appearing and altering behavior. It is no wonder that changes in business conditions are so often surprising, for there is no one who is carefully monitoring the epidemic curves of all these drivers of the economy.

Since the advent of the internet age, the contagion rate of many narratives has increased, with the dominance of social media and with online news and chats. But the basic nature of epidemics has not changed. Even pure person-to-person word-of-mouth spread of epidemics was fast enough to spread important ideas, just as person-to person contagion was fast enough to spread diseases into wide swaths of population millennia ago.
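The epidemiology framework Shiller invokes is, at its simplest, the classic SIR (susceptible-infected-recovered) model. Here is a minimal sketch of a "narrative epidemic" curve; the contagion and forgetting rates are purely illustrative, not estimated from any data:

```python
# Minimal SIR-style contagion model, of the kind Shiller borrows from
# epidemiology to describe the spread of a popular narrative.
# The beta and gamma values are illustrative, not estimated.

def simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, steps=200, dt=1.0):
    """Euler integration of the SIR equations; returns the 'infected' path,
    i.e., the fraction of the population currently telling the narrative."""
    s, i, r = s0, i0, 0.0
    path = [i]
    for _ in range(steps):
        new_tellers = beta * s * i * dt   # contagion: hearers become tellers
        dropouts = gamma * i * dt         # tellers lose interest
        s -= new_tellers
        i += new_tellers - dropouts
        r += dropouts
        path.append(i)
    return path

path = simulate_sir()
peak = max(path)
print(f"Narrative peaks at {peak:.1%} of the population, "
      f"at step {path.index(peak)}")
```

Varying beta relative to gamma determines whether a narrative fizzles out or goes viral, peaks, and then fades, which is exactly the hump-shaped rise and decline Shiller describes.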
As one illustration of the rise and fall of economic-related narratives, Shiller uses "n-grams," which track how often certain terms are used in news media. Examples of such terms shown in this graph include "supply-side economics," "welfare dependency," "welfare fraud," and "hard-working American."
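The n-gram counts behind such a graph amount to simple phrase-frequency tallies over a dated corpus. A toy sketch, using an invented corpus rather than Shiller's actual news data:

```python
# Toy version of an n-gram frequency count of the sort Shiller plots:
# for each year, the share of documents containing a given phrase.
# The corpus below is invented for illustration.

corpus = {
    1980: ["supply-side economics will cut taxes",
           "debate over supply-side economics continues"],
    1990: ["welfare dependency is in the news",
           "supply-side economics mentioned once more"],
    2000: ["hard-working american families"],
}

def phrase_frequency(phrase):
    """Share of documents in each year containing the phrase."""
    freq = {}
    for year, docs in corpus.items():
        hits = sum(phrase in doc.lower() for doc in docs)
        freq[year] = hits / len(docs)
    return freq

print(phrase_frequency("supply-side economics"))
# prints {1980: 1.0, 1990: 0.5, 2000: 0.0}
```

Plotting a series like this over many years gives exactly the waxing-and-waning curves Shiller reads as the epidemic trajectories of narratives.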


Shiller's theme is that if we want to understand macroeconomic fluctuations, it won't be enough just to look at patterns of interest rates, trade, or innovation, and it won't be enough to include factors like real-life pandemics, either. The underlying real factors matter, of course. But the real factors are often translated into narratives, and it is those narratives which then affect economic actions about buying, saving, working, starting a business, and so on. Shiller writes: "As this research continues, there should come a time when there is enough definite knowledge of the waxing and waning of popular narratives that we will begin to see the effects on the aggregate economy more clearly."

I'll only add the comment that there can be a tendency to ascribe narratives only to one's opponents: that is, those with whom I disagree are driven by "narratives," while those with whom I agree are of course pure at heart and driven only by facts and the best analysis. That inclination would be a misuse of Shiller's approach. In many aspects of life, enunciating the narratives that drive our own behavior (economic and otherwise) can be hard and discomfiting work. 

For some additional background on these topics: 

For a readable introduction to epidemiology models aimed at economists, a useful starting point is the two-paper "Symposium on Economics and Epidemiology" in the Fall 2020 issue of the Journal of Economic Perspectives: "An Economist's Guide to Epidemiology Models of Infectious Disease," by Christopher Avery, William Bossert, Adam Clark, Glenn Ellison and Sara Fisher Ellison; and "Epidemiology's Time of Need: COVID-19 Calls for Epidemic-Related Economics," by Eleanor J. Murray.

For those who would like to know more about "animal spirits" in economics, a 1991 article in the Journal of Economic Perspectives by Roger Koppl discusses the use of the term by John Maynard Keynes and then gives a taste of the intellectual history: for example, Keynes apparently got the term from Descartes, and it traces back to the second century Greek physician Galen.