
Saturday, February 29, 2020

The Pneumococcal Vaccine: A Success for Advance Market Commitment

Sometimes an argument by an academic economist helps to trigger a process that saves 700,000 lives. This is the story of what's called an "advance market commitment"--a contractual agreement by governments, international organizations, and nonprofits to purchase a certain amount of a vaccine or drug, if and when that product is developed. An advance market commitment launched in 2009 helped lead to the development and distribution of pneumococcal vaccines for low-income countries: three vaccines that have been used to immunize 150 million children, saving an estimated 700,000 lives.

Michael Kremer, Jonathan D. Levin, and Christopher M. Snyder tell the story in "Advance Market Commitments: Insights from Theory and Experience" (February 2020, NBER Working Paper 26775). This is a more polished follow-up to a working paper presented at a session of the Allied Social Science Associations (ASSA) meetings in January.

Kremer, in particular, has been pushing the idea of advance market commitments for several decades. For example, back in the Fall 2002 issue of the Journal of Economic Perspectives (where I work as Managing Editor), he wrote an article about "Pharmaceuticals and the Developing World" (pp. 67-90). He pointed out that a number of diseases had their primary effect in low-income countries and that drug companies in high-income countries had a limited incentive to focus on these diseases. Kremer wrote:
However, the most severe distortions in developing country pharmaceutical markets probably involve dynamic issues. Pharmaceutical firms are reluctant to invest in R&D on the diseases that primarily affect developing countries not only because the poverty of the potential users reduces their willingness to pay, but also because the potential revenue from product sales is far smaller than the sum of customers’ potential willingness to pay due to the lack of intellectual property protection and the tendency for governments to force prices down after firms have sunk their research and development costs. ... 
Programs to encourage R&D can take two broad forms. “Push” programs subsidize research inputs—for example, through grants to researchers or R&D tax credits. “Pull” programs reward research outputs, for example, by committing in advance to purchase a specified amount of a desired product at a specified price. Both approaches have important roles, but current policy underutilizes pull programs. ...

[U]nder pull programs, the public pays nothing unless a viable product is developed. Pull programs give researchers incentives to self-select projects with a reasonable chance of yielding a viable product and to focus on developing a marketable product. Under pull programs, governments do not need to “pick winners” among R&D proposals—they simply need to decide what success would be worth to society and offer a corresponding reward. Moreover, appropriately designed pull programs can help ensure that if new products are developed, they will reach those who need them. One kind of pull program is a purchase commitment in which sponsors would commit to purchase a specified number of doses at a specified price if a vaccine meeting certain specifications were developed. ... An example of a purchase commitment would be for developed countries or private foundations to commit to purchase malaria vaccine at $5 per immunized person and to make it available to developing countries either free or for a modest copayment.
A working group under the auspices of the Center for Global Development, chaired by Ruth Levine, Michael Kremer, and Alice Albright, thought in more concrete terms about how to design advance market commitments so that they would be enforceable contracts, and published its report in 2005.

Kremer, Levin, and Snyder summarize what happened next:
In 2007, five countries and the Gates Foundation pledged $1.5 billion toward a pilot AMC targeting a pneumococcal conjugate vaccine (PCV). The World Health Organization (WHO) estimated pneumococcus killed more than 700,000 children under five in developing countries annually at that time (WHO 2007). A PCV covering disease strains prevalent in developed countries already existed, and PCVs covering the strains in developing countries were in late-stage clinical trials; so this was a technologically close target.
In 2009, the AMC launched under the supervision of GAVI (formerly the Global Alliance for Vaccines and Immunizations). The design called for firms to compete for ten-year supply contracts capping price at $3.50 per dose. A firm committing to supply X million annual doses (X/200 of the projected 200 million annual need) would secure an X/200 share of the $1.5 billion AMC fund, paid out as a per-dose subsidy for initial purchases. The AMC covered the 73 countries below an income threshold for GAVI eligibility. Country co-payments were set according to standard GAVI rules.
GSK, Pfizer, and the Serum Institute of India have all received payments from the advance market commitment contract. By 2018, about half of all children in these 73 countries had received the vaccine, although India had not yet rolled out a full nationwide program. In general, the World Health Organization says that an intervention is cost-effective if it avoids the loss of a "disability-adjusted life year" (DALY) at a cost of less than three times per capita GDP, and very cost-effective if it avoids the loss of a DALY at a cost of less than per capita GDP (for discussion, see here). By one early estimate, the pneumococcus vaccination avoided the loss of a disability-adjusted life year at a cost of $83--making it an extreme success even from a pure cost-benefit perspective.
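To make the mechanics concrete, here is a minimal Python sketch of the two calculations above: the pro-rata share of the AMC fund, and the WHO cost-effectiveness rule of thumb. The 40-million-dose commitment and the $1,000 per capita GDP figure are hypothetical numbers for illustration, not taken from the paper.

```python
def amc_fund_share(doses_millions, projected_need_millions=200, fund_billions=1.5):
    """A firm committing X million annual doses secures an X/200 share
    of the $1.5 billion AMC fund, paid out as a per-dose subsidy.
    Returns the firm's slice of the fund in $ millions."""
    return (doses_millions / projected_need_millions) * fund_billions * 1000

def who_cost_effectiveness(cost_per_daly, gdp_per_capita):
    """WHO rule of thumb: 'very cost-effective' below per capita GDP per
    DALY averted, 'cost-effective' below three times per capita GDP."""
    if cost_per_daly < gdp_per_capita:
        return "very cost-effective"
    elif cost_per_daly < 3 * gdp_per_capita:
        return "cost-effective"
    return "not cost-effective"

# A hypothetical firm committing 40 million annual doses:
print(amc_fund_share(40))                # 300.0 ($ millions of the fund)
# The $83-per-DALY estimate, against an assumed $1,000 per capita GDP:
print(who_cost_effectiveness(83, 1000))  # very cost-effective
```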

There are reasonable and hard-headed questions to ask about how to value the benefits of the advance market commitment approach. The pneumococcal vaccine was a "technologically close" target, taking vaccines that already existed in high-income countries and accelerating their development and use for the strains of disease in low-income countries. Just to be clear, no one is arguing that the advance market commitment is a magic bullet that, all by itself, can substitute for other "push" policies encouraging research and development. The argument is that it focuses priorities and speeds up what is possible, not just in the development of the vaccine or drug, but also in avoiding a protracted negotiation over what the price will be, and in having countries ready and prepared to deliver the vaccine or drug through their health care systems.

But speeding up a public health process matters. As a comparison, Kremer, Levin, and Snyder point out that a rotavirus vaccine was developed at about the same time, and was relevant to much the same group of countries, but did not have an advance market commitment. The rotavirus vaccine spread through the population about five years more slowly, and shortages of rotavirus vaccine were far more common.

Friday, February 28, 2020

Is It Useful to Call Access to Electricity a "Right"?

There is a long-standing philosophical dispute over what should be called a "right," which often breaks down into a discussion of "negative" and "positive" rights. The US Bill of Rights offers a number of examples of "negative rights," which are typically phrased in terms of what is not allowed. For example, the First Amendment begins with "Congress shall make no law ..." before referring to freedom of religion, speech, the press, assembly, and petitioning for redress of grievances. In this view, a "right" is something that cannot be taken away from you.

On the other side, the UN Declaration of Human Rights includes a number of "positive" rights, in which a common phrasing is that "everyone has a right to ..." For example, Article 19 says: "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers." Other articles hold that everyone has a right to "the economic, social and cultural rights indispensable for his dignity and the free development of his personality" (Article 22), "the right to work, to free choice of employment, to just and favourable conditions of work and to protection against unemployment .... to just and favourable remuneration ensuring for himself and his family an existence worthy of human dignity, and supplemented, if necessary, by other means of social protection" (Article 23), "the right to rest and leisure, including reasonable limitation of working hours and periodic holidays with pay" (Article 24), "the right to a standard of living adequate for the health and well-being of himself and of his family, including food, clothing, housing and medical care and necessary social services, and the right to security in the event of unemployment, sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond his control" (Article 25). Many of these "positive" rights suggest that there is a duty or responsibility from government or society to provide certain goods, without specifying how this is to be done.

Of course, the line between "negative" and "positive" rights can be blurry in specific situations. But there does seem to me to be a meaningful distinction here. Moreover, I tend to agree with a long-ago comment by E.B. White, when the UN Declaration of Human Rights was under discussion, that referring to all human desires as human "rights" can lead to unwanted outcomes. As White wrote in 1953:
There is, I believe, a very real and discernible danger, to a country like ours, in an international covenant that equates human rights with human desires, and that attempts to satisfy, in a single document, governments and philosophies that are essentially irreconcilable. I do not think it safe or wise to confuse, or combine, the principle of freedom of religion or the principle of freedom of the press with any economic goal whatsoever, because of the likelihood that in guaranteeing the goal, you abandon the principle. This has happened over and over again. ... If you were to pack croquet balls and eggs in a single container, and take them travelling, you would probably end your journey with some broken eggs. I believe that if you put a free press into the same bill with a full belly, you will likely end the journey with a controlled press.
But let us slide sideways out of the questions of philosophy, and instead phrase the question in terms of practicality. My experience is that many of those who want to wrap various policy goals in the language of "rights" believe that this designation will serve as a useful aspirational push for society to achieve these goals. Is that in fact true?

Robin Burgess, Michael Greenstone, Nicholas Ryan, and Anant Sudarshan offer a counterexample in "The Consequences of Treating Electricity as a Right," in the Winter 2020 issue of the Journal of Economic Perspectives. They are discussing the broader provision of electricity in developing countries like India. They write:
How can treating electricity as a right undermine the aim of universal access to reliable electricity? We argue that there are four steps. In step 1, because electricity is seen as a right, subsidies, theft, and nonpayment are widely tolerated. Bills that do not cover costs, unpaid bills, and illegal grid connections become an accepted part of the system. In step 2, electricity utilities—also known as distribution companies—lose money with each unit of electricity sold and in total lose large sums of money. Though governments provide support, at some point, budget constraints start to bind. In step 3, distribution companies have no option but to ration supply by limiting access and restricting hours of supply. In effect, distribution companies try to sell less of their product. In step 4, power supply is no longer governed by market forces. The link between payment and supply has been severed: those evading payment receive the same quality of supply as those who pay in full. ...
The consequences for electricity consumers, both rich and poor, are severe. There is only one electricity grid, and it becomes impossible to offer a higher quantity or quality of supply to those consumers who are willing and sometimes even desperate to pay for it. Socially beneficial transactions are therefore prevented from occurring. This interaction of the social norm that electricity is a right and the technological constraint of a common grid for all parties makes it impossible to ration service to person by person, and firm by firm, making the consequences of treating electricity as a right more severe than for other private goods. Though private alternatives to grid electricity exist, like diesel generators and solar panels, these substitutes are inferior to grid electricity in terms of price and load (Burgess et al. 2019). In fact, the only reason these substitutes are competitive at all is that the quality of the service the grid provides is so poor. 
Their article goes through these four steps with detailed evidence and analysis in the context of providing broad access to electricity in low-income countries. But it is worth noting that the belief that a good or service is a "right" is often accompanied by a belief that providers of that good or service should not expect to receive full--or perhaps any--direct payment from those receiving the service. But when payment and supply become separated, then a need arises for legislatures or courts or regulators or nongovernment institutions to figure out how supply will be funded, managed, and organized. Calling something a "right" does not answer these questions, and may inflame them.

Notice that the argument here is not philosophical, but pragmatic. It doesn't ask whether access to electricity (or some other good) should for some set of philosophical/ethical/moral reasons be added to the UN Declaration of Human Rights. It simply argues on pragmatic grounds that designating electricity as a "right" triggers a set of expectations and actions that are not useful if the practical goal is to expand access to electricity.

For more on the economics of access to electricity in developing countries, see:

Thursday, February 27, 2020

Left-Number Bias: What Firms Haven't Quite Figured Out

"Left- number bias" refers to when people focus on the left-hand number--and thus, why so many prices in stores take the form of $X and 99 cents, rather than rounding up that extra penny. Avner Strulov-Shlain offers some additional evidence on this well-known phenomenon and then draws out some lesser-known implications in "More than a Penny's Worth: Left-Digit Bias and Firm Pricing" (December 2019, Chicago Booth Research Paper No. 19-22).  The Chicago Booth Review offers a short readable overview here.

Strulov-Shlain estimated demand curves for products. However, in the model he uses, raising the price by a certain amount (say, by a penny or a dime) is allowed to have a bigger effect when it changes the left-hand dollar value than when it doesn't. He writes:

"To estimate demand, I use a sample of 1710 popular products in 248 stores of a single US retailer over 3.5 years, and another sample of 12 products in AC Nielsen RMS data across more than 60 chains and 11,000 stores, over 9 years. ... I find that consumers are biased, to the extent of treating a 1 cent increase from a 99-ending price as if it were a 15-25 cent increase. Next, I estimate retailer pricing behavior. .. Firms seem to underestimate the magnitude of the bias significantly. From the firm’s perspective, I estimate that they act as if a 99-ending price is treated by consumers as being only 1.5-3 cents lower than the round price. ... I find that firms do better than pricing as if there is no bias at all, but not half as good as possible. I estimate that they lose 1%-3% of gross profits, or $60 million annual revenue on regular price sales."

The underlying logic here is how firms should react to a strong left-number bias. For example, no prices should be set at an even dollar amount, because if a firm were to cut that price by a single penny, so that it instead ended with 99 cents, consumers would react as if the price had been cut by 15-25 cents, and the resulting increase in sales would more than make up for the small price cut. Indeed, pushing this logic a little further, firms should not set prices at an amount not far above the even-dollar amount, either. For example, consider a price of $3.29. If that price is cut to $2.99, then consumers will react to the actual price cut of 30 cents, plus an additional 15-25 cents for changing the left-hand digit. Again, the increase in quantity sold as a result of this price cut should (for most products) lead to an increase in profits.
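As a stylized sketch of that logic in Python (not Strulov-Shlain's actual estimating model): suppose consumers perceive a price change as the true change plus an extra 15-25 cents whenever the left-hand dollar digit changes. The 20-cent premium below is an assumed value within that estimated range.

```python
def perceived_change(old_price, new_price, left_digit_premium=0.20):
    """Perceived price change under left-digit bias: the actual change,
    plus an extra premium whenever the left-hand dollar digit changes.
    The 0.20 premium is an assumed value in the estimated 15-25 cent range."""
    actual = new_price - old_price
    digit_shift = int(new_price) - int(old_price)  # change in the dollar digit
    return round(actual + left_digit_premium * digit_shift, 2)

print(perceived_change(2.99, 3.00))  # 0.21: a 1-cent hike feels like 21 cents
print(perceived_change(3.29, 2.99))  # -0.5: a 30-cent cut feels like 50 cents
```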

So why don't we see even more prices ending in 99 cents than we already do? Maybe the model used by Strulov-Shlain misses some important factor; for example, perhaps if even more prices were set to end in 99 cents, then some of the left-number bias might wear off. Or perhaps it just seems to retailers too aggressive and risky to, say, cut prices from $3.29 to $2.99.

In another paper, Strulov-Shlain looks at the effects of a law in Israel that required all prices to be set "to the dime"--that is, you could end a price with 90 cents, but not 99 cents. If the US eliminated pennies and nickels as currency, this would in effect be the result for retailers. He writes: "Before the reform about 40%-50% of prices ended with 99, suggesting substantial levels of perceived and actual [left-hand number] bias." When the law passed, a lot of firms reacted at first by pushing the prices that ended with 99 up to 00. But this didn't last, and within 6-12 months Israeli firms were ending their prices with 90, presumably again because of left-hand number bias.

For another study on left-number bias, see this discussion of "Left Number Bias in Used Car Prices" (October 4, 2011), which looks at how prices for used cars drop more sharply depending on the left-hand numbers on the mileage on the car.  Or here's a discussion of the related phenomenon of "round number" bias, in "One Million Page Views and Round Number Bias" (October 18, 2013). 

Wednesday, February 26, 2020

Global Corporate Bond Markets and the China Problem

The last two US recessions were both linked to financial markets: that is, the dot-com boom-and-bust of the late 1990s leading up to the recession of 2001, and the housing market boom-and-bust that worked its way through the financial system in the lead-up to the Great Recession of 2007-2009. One could argue that financial markets actually led the way into the last three US recessions, depending on how one views the meltdown of the US savings and loan industry in the lead-up to the 1990-91 recession. Thus, when looking around for how the next recession might arise, it's natural to scan financial markets, and corporate bonds keep coming up as a potentially worrisome area.

S. Celik, G. Demirtaş and M. Isaksson look at this topic from a global perspective in “Corporate Bond Markets in a Time of Unconventional Monetary Policy” (OECD Capital Market Series, February 2020). Here's some background.
Between 2008-2018 global corporate bond issuance averaged USD 1.7 trillion per year, compared to an annual average of USD 864 billion during the years leading up to the financial crisis. As a result, the global outstanding debt in the form of corporate bonds issued by non-financial companies reached almost USD 13 trillion at the end of 2018. This is twice the amount in real terms that was outstanding in 2008. The United States remains the largest market for corporate bonds. But non-financial companies from most other economies, including Japan, the United Kingdom, France and Korea, have all increased their use of corporate bonds as a means of borrowing. On a global scale, the most significant shift has been the rapid growth of the Chinese corporate bond market. The People’s Republic of China (China) has moved from a negligible level of issuance prior to the 2008 crisis to a record issuance amount of USD 590 billion in 2016, ranking second highest in the world. 
Much of this rise in corporate debt was desired by policy-makers and beneficial to the world economy. After all, when central banks reduce interest rates, the hope is to stimulate borrowing that will raise aggregate demand in the economy. When policy-makers pass regulations to limit the risks taken by banks, they are in effect pushing some of that borrowing out of the banking sector and into bond markets--where the risks will be carried by private investors.


But when borrowing rises sharply, there are also natural questions to ask. Is the overall level of risk associated with these loans rising or falling? Are the borrowers actually planning to repay, or are they planning to take out more loans in the future--thus raising the possibility of "roll-over risk" if it becomes harder for them to borrow in the future? Or to make these questions concrete, think about how the coronavirus news affects the risks of the bonds already issued in the enormous surge of borrowing by Chinese corporations.

Even before the coronavirus, it looked as if the riskiness of corporate bonds as an overall category was on the rise. Credit rating agencies grade corporate bonds as either "investment grade" or "non-investment grade." Regarding the "investment grade" bonds, the OECD report points out:
Our more detailed analysis of the composition of the investment grade category reveals a marked continuous increase in BBB rated bonds, which is the rating just above non-investment grade. While BBB rated bonds made up about 30% of all investment grade bonds issued in 2008 they accounted for almost 54% in 2018. This relative increase in lower rated investment grade bonds has come at the expense of a decrease in AA and AAA rated bonds. ... This prolonged decline in bond quality points to the risk that a future downturn may result in higher default rates than in previous credit cycles.
Another way of looking at risk is to look at the "covenant protection," which refers to the legal language in the bond contracts and how much power it gives to those who purchased the bonds if repayment isn't made on time. These protections have been weakening, too.
Compared to the pre-2008 period there has been a marked decrease in the use of key covenants for non-investment grade bonds. ... While lower levels of covenant protection may allow companies to escape default for a longer time, the expectation of a company’s default and achievable recovery rates may still affect investor portfolios negatively. Moreover, historical data shows that low quality covenants have a significant negative effect on recovery rates.
Yet another way of looking at risk in corporate bonds is to look at how much the companies need to repay in the relatively near-term of the next few years.
As of December 2018, companies in advanced economies need to pay or refinance USD 2.9 trillion within 3 years and their counterparts in emerging economies USD 1.3 trillion. At the 1-, 2- and 3-year horizons, advanced and emerging market companies have the highest corporate bond repayments since 2000. Notably, for emerging market companies, the amount due within the next 3 years has reached a record of 47% of the total outstanding amount; almost double the percentage in 2008.
The OECD report has lots more detail about specific categories of corporate bonds and their risks. Here, I'll just add that from a macroeconomic perspective, the issue here isn't the safety or riskiness of specific corporate bonds, or even the corporate bond sector as a whole. The issue is that corporate debt is a magnifier in both good and bad economic times. If the world economy receives a sufficiently large negative shock, corporations with more debt--and more risky debt--are going to find themselves in a more fragile financial position. As a result, they will be more likely to cut back on investing in expanding production through new plant and equipment, research and development, and hiring additional workers.


For China's economy, one way in which disruptions from the coronavirus are going to percolate through to the rest of the economy is through China's corporate bond market. For the US economy, at least some monetary and banking policymakers are already doing some advance thinking about how to react if the US corporate bond market comes under stress.

For some previous posts and links to reports and commenters worried about corporate debt, see:

Tuesday, February 25, 2020

Spending Comparison: Pet Care Industry and National Elections

Americans spent $5.8 billion on pet care services in 2017, according to recent estimates from economists at the US Bureau of the Census. To be clear: "The pet care services industry (NAICS code 812910) includes services such as grooming, boarding, training and pet sitting. It does not include veterinary services, boarding horses, transporting pets, pet food or other pet supplies."

My first thought on seeing the pet care services article was to be reminded, yet again, of the enormous size and richness of the US economy. My second thought was about costs of US national elections.

According to the OpenSecrets website run by the Center for Responsive Politics, total spending for federal elections in 2016--including the presidential campaign, as well as races for the House and Senate--was $4.6 billion. One suspects that billionaires Michael Bloomberg and Tom Steyer tossing around money will raise total election campaign spending for 2020. (Bloomberg has reportedly put $464 million into his campaign so far, and Steyer has put $267 million into his campaign.) But even so, Americans will probably spend roughly the same on pet care services in 2020 as the total amount spent on all campaigns for the presidency and Congress.

As another comparison, the US corporations that spent most heavily on advertising in 2018 were Comcast ($6.12 billion), AT&T ($5.36 billion), Amazon ($4.47 billion), and Procter & Gamble ($4.3 billion). It's very likely that in 2020 perhaps 5-7 large companies will each individually have an advertising budget above the total spent by all presidential candidates (in 2016, $1.4 billion), and the top 1-2 corporate advertising budgets will exceed the total spent on all campaigns for the presidency and Congress combined.

On one side, the total amounts spent on national election campaigns do seem large. But given the power that politicians wield over the $4.6 trillion federal budget, not to mention over the passage of domestic regulations and foreign policy, all in the context of a US GDP of $22 trillion in 2020, the amounts spent on campaigning don't seem vastly out of line.

From this perspective, what's remarkable is not how much the US spends on elections, but how little. This isn't a new observation: back in 2003, the Journal of Economic Perspectives ran an article called "Why Is There so Little Money in U.S. Politics?" by Stephen Ansolabehere, John M. de Figueiredo and James M. Snyder Jr. They argued that the evidence does not support a view of campaign contributions as an investment by special interests expecting a return; instead, campaign contributions are better viewed as a form of consumption spending, in which contributors enjoy a sense of connectedness and participation.

Ultimately, it's not the total amount of spending on campaigns that annoys or concerns me. It's that the ads are banal, uncreative, information-free, and not gently but obviously full of spin. In addition, the campaigns see fit to hit you with the same ads over and over again. By Election Day, it is hard for me to avoid a feeling that the advertising agencies for the candidates--and by extension the candidates themselves--feel disdain or contempt for the electorate.

My other concern is that while people obsess over total campaign spending, which mostly happens out in the open through live candidate events and media advertising, the totally different category of lobbying expenses flies under the radar. The OpenSecrets website also collects data on lobbying expenses; in 2019, for example, over 11,000 registered lobbyists spent $3.5 billion. My suspicion is that this total is an underestimate, because a lot of what might reasonably be called lobbying is not registered and recorded. But we know that lobbying happens every year, whether there is an election or not. How this lobbying is directed is murky, and what it accomplishes in adjusting the fine print of legislation or nipping certain proposals in the bud is opaque. Here's a figure from OpenSecrets:

Monday, February 24, 2020

Untangling India's Distinctive Economic Story

It's easy enough to explain why China's economic development has gotten more attention than that of India. China's growth rate has been faster. China's effect on international trade has created more of a shock for the rest of the global economy. In geopolitical terms, China looks more like a rival. Also, China's basic story-line of trying to liberalize a centrally-planned economy while keeping a communist government is fairly easy to tell.

But whatever the plausible reasons why China's economy has gotten more attention than India's, it seems clear to me that India's economic developments have gotten far too little attention. A symposium in the Winter 2020 issue of the Journal of Economic Perspectives offers some insights:
I'll also mention an article on "Caste and the Indian Economy," by Kaivan Munshi, which appears in the December 2019 issue of the Journal of Economic Literature, a sibling journal of the JEP (that is, both are published by the American Economic Association).

Lamba and Subramanian point out that over the 38 years since 1980 (when India started making some pro-business reforms), India is one of only nine countries in the world to have averaged an annual growth rate of 4.5%, with no decadal average falling below 2.9% annual growth. (The nine, listed in order of annual growth rates during this time with highest first, are Botswana, Singapore, Korea, Taiwan, Malta, Hong Kong, Thailand, India, and Malaysia.) Of course, one can tweak these cutoffs in various ways, but no matter how you slice it, India's growth rate over the last four decades has been remarkable. Moreover, India's population is likely to exceed China's in the near future.

But India's path to rapid growth has been notably different from that of many other countries. India is ethnically fractionalized, especially when the caste system is taken into account. In addition, India's path to development has been "precocious," as Lamba and Subramanian put it, in two ways.

One involves the "modernization hypothesis" that economic development and democracy evolve together over time. In India, universal suffrage arrived all at once when India became independent in 1947. For a sense of how dramatic this difference is, the graph below shows per capita GDP on the horizontal axis and degree of democracy on the vertical axis. The lines show the path of countries over time. Clearly, India defies the modernization hypothesis by having full democracy before development. China defies the modernization hypothesis in the other direction, by having development without democracy.
The other precocious factor for India is that economic development in most countries involves a movement from agriculture to manufacturing to services. However, India has largely skipped the stage of low-wage manufacturing, and moved directly toward a services-based economy. One underlying factor is India's "license raj"--the interlocking combinations of rules about starting a business, labor laws, and land use that have made it hard for manufacturing firms to become established. A related factor is that in global markets, India's attempts at low-wage manufacturing over the decades were outcompeted by Korea, Thailand, China--and now by the rise of robots.

The good side of this "precocious servicification" is that high-income economies are primarily services and services are a rising part of international trade. The bad side is that this services economy works much better for the relatively well-educated in urban areas, and offers less opportunity for others--thus leading to greater inequality.

India faces a range of other issues as well. Environmental problems in India are severe: when it comes to air pollution for example, "22 of the top 30 most polluted cities in the world are in India." The role of women in India's economy and society is in some ways moving backward: "Female labor force participation in India has been declining from about 35 percent in 1990 to about 28 percent in 2015. For perspective, the female labor force participation rate in Indonesia in 2015 was almost 50 percent; in China, it was above 60 percent. In addition, the gap between India’s labor force participation rate and the rate of countries with similar per capita GDP is widening, not narrowing. ... India’s sex ratio at birth increased from 1,060 boys born for every 1,000 girls in 1970 to 1,106 in 2014, widening its gap from the biological norm of 1,050."

The capabilities of India's government are shaped by these underlying background factors. Devesh Kapur writes in JEP:
India’s state performs poorly in basic public services such as providing primary education, public health, water, sanitation, and environmental quality. While it is politically effective in managing one of the world’s largest armed forces, it is less effective in managing public service bureaucracies. The research literature on India has many discussions of programs that fail to deliver meaningful outcomes, or that are victims of weak implementation and rent-seeking behavior of politicians and bureaucrats, or that are vitiated by discrimination against certain social groups ...

But on the other side, the Indian state has a strong record in successfully managing complex tasks and on a massive scale. It has repeatedly conducted elections for hundreds of millions of voters—nearly 900 million in the 2019 general elections—without national disputes. In this decade, it has scaled up large programs such as Aadhaar, the world’s largest biometric ID program (which crossed one billion people enrolled within seven years of its launch). Most recently, it has implemented the integrated Goods and Services Tax (GST), one of the most ambitious tax reforms anywhere in recent times. India ranks low on its ability to enforce contracts, but its homicide rate has dropped markedly from 5.1 in 1990 to 3.2 (per 100,000) in 2016 ... 
[T]he Indian state has delivered better in certain situations and settings: specifically, on macroeconomic rather than microeconomic outcomes; where delivery is episodic with inbuilt exit, rather than where delivery and accountability are quotidian and more reliant on state capacity at local levels; and on those goods and services where societal norms and values concerning hierarchy and status matter less, rather than in settings where these norms and values—such as caste and patriarchy—are resilient.

Kapur traces these issues back to the ethnic fractionalization, social cleavages, and caste system in India, combined with India's early adoption of democracy. Moreover, India is a country with a low tax/GDP ratio and a relatively small number of taxpayers. He also points out that most government positions in India require a difficult civil-service examination, and by international standards India's government does not appear overstaffed. A pattern has evolved in which India's government is relatively effective on big-picture projects like electrification, but much less effective on local issues that are related to social expectations about caste and gender: for example, reforms related to education, or the welfare of children and women. In countries as different as the United States and China, about 60% of all government employees are at the local level; in India, it's less than 20%.

India continues to have issues with caste differences, as explored in the article by Kaivan Munshi. He writes:
Caste continues to play an important role in the Indian economy. Networks organized at the level of the caste or jati provide insurance, jobs, and credit for their members in an economy where market institutions are inefficient. Affirmative action for large groups of historically disadvantaged castes in higher education and India’s representative democracy has, if anything, made caste more salient in society and in the public discourse. Newly available evidence with nationally representative data indicates that there has been convergence in education, income, occupations, and consumption across caste groups over time. ... The available evidence indicates that caste discrimination, at least in urban labor markets, is statistical, that is, based on differences in socioeconomic characteristics between upper and lower castes. ... Given the strong intergenerational persistence in human capital, the key variable driving convergence, it will be many generations before income and consumption are equalized across caste groups.
The caste-based economic networks that currently serve many functions will also disappear once markets begin to function efficiently. These networks continue to be active in the globalizing Indian economy because information and commitment problems are exacerbated during a period of economic change. In the long run, however, the markets will settle into place and the caste networks will lose their purpose. This has certainly been the experience in many developed countries. In the United States, for example, ethnic networks based on a European country (region) of origin supported their members through the nineteenth century into the middle of the twentieth century. Ultimately, however, these networks no longer served a useful role and today, outside of a few pockets, European ethnic identity in the United States is largely symbolic. We might expect caste to similarly lose its salience as India develops into a modern market economy, and there is some evidence that this process may have already begun.
 Amartya Lahiri takes up yet another issue: "On November 8, 2016, India demonetized 86 percent of its currency in circulation." Specifically, India declared that people needed to turn in their large-denomination bills at banks, and that the existing bills would be worthless moving forward. They would then be replaced with new currency. The policy had several goals, like making it impossible for organized crime to hide its accumulated gains in the form of cash, and bringing people into the banking system and the digital economy. But Lahiri argues that these larger goals were not much affected by the change. Instead, the main effect of the demonetization was causing short-term hardship and higher unemployment in the areas where the demonetization led to temporary cash shortages. I had not known that India had carried out similar demonetizations of large-denomination currency in 1946 and 1978--with, Lahiri argues, much the same minimal-to-negative effects.

India's record of sustained and strong economic growth appears to be in some danger from the "twin balance sheet challenge."  As Lamba and Subramanian put it:
The sustainability of growth—which in late 2019 has cratered to a near standstill— will be determined by structural factors salient amongst which is the “twin balance sheet challenge” initiated by the toxic legacy of the credit boom of the 2000s. Recently, the rot of stressed loans has spread from the public sector banks to the nonbank financial sector, and on the real side, from infrastructure companies to most notably the real estate sector with the latter threatening middle class savings. This contagion owes both to overall weak economic growth and slow progress in cleaning up bank and corporate balance sheets. A failure to resolve this challenge could mean a reprisal of the Japanese experience of nearly two decades of lost growth, but at a much lower level of per capita income. India’s development experience could end up being a transition from socialism without entry to capitalism without exit because weak regulatory capacity and lack of social buy-in will have impeded the necessary creative destruction.
Thus, India's economy finds itself at a pivotal moment, facing the short-run challenges of the twin balance sheet problem, the longer-run economic problems of appropriate reforms to create an environment in which India's businesses can function and grow, the challenges of building transportation, energy, and communications infrastructure, and the social policy challenges of improving education and health care. Challenges never come singly.

Friday, February 21, 2020

Some Economics of Refugees

Refugee policy is defined differently from immigration policy. With immigration policy, a nation makes a decision about what number and kinds of immigration (family-based, skill-based) would benefit itself. But refugee policy, at least under the standard definition from the 1951 Refugee Convention, applies to someone who, “owing to well-founded fear of being persecuted for reasons of race, religion, nationality, membership of a particular social group or political opinion, is outside the country of his nationality and is unable or, owing to such fear, is unwilling to avail himself of the protection of that country…”

In theory, refugee policy is based on the need of the refugees themselves, not on some judgment about whether letting them in will benefit the receiving society. But the receiving society does retain the power to make decisions about the extent to which people have a "well-founded fear of being persecuted" in the sense of the term that would make them refugees, or whether they are trying to use refugee status to do an end-run around immigration limits. Evaluating that distinction will often involve some degree of subjectivity and politics.

For an overview of what we know about the magnitudes, drivers, and legalities of refugee flows and assimilation, I recommend the two-paper symposium in the Winter 2020 issue of the Journal of Economic Perspectives. (Full disclosure: I work as Managing Editor of JEP.)

As Hatton notes: "The United Nations High Commissioner for Refugees (UNHCR) estimates the total number of refugees worldwide at the end of 2018 at 20.1 million. This is less than one-third of the total of 70.8 million 'forcibly displaced persons,' which also includes those displaced within their home country (41.3 million) and Palestinians (5.5 million) who come under a separate mandate (UNHCR 2019, 2). In 2018, refugees were 7.6 percent of the stock of all international migrants (defined as those living outside their country of birth). ... As of 2018, two-thirds of refugees are from just five countries: Syria, Afghanistan, South Sudan, Myanmar, and Somalia. Of the total, 85 percent of refugees are located in developing countries, often just across the border from the origin country, and about 30 percent of these languish in organized refugee camps."

Here's a figure from Hatton showing the number of asylum claims by refugees over time. The figure makes obvious why concerns over refugee policy have been high in the European Union, and also in the US.  About one-third of asylum applicants are granted refugee status. 

As one might expect, the drivers of refugee patterns are less about economic differences, and more about political terror in sending countries and proximity and access to another country. Hatton writes: 
Several studies have assessed the push and pull forces behind asylum applications to industrialized countries by analyzing panel data on the number of applicants by origin, by destination, and over time. The most important origin-country variables are political terror and lack of civil liberties; civil war matters less, perhaps because war per se does not necessarily confer refugee status (Hatton 2009, 2017a). There is weaker evidence that declines in origin-country income per capita leads to more asylum applications, which offers modest support to the view that economic migration is part of the story. Proximity and access are important in determining the volume of asylum applications. Countries that are small but nearby can generate large flows—as with a quarter of a million Cubans moving to the United States in the 1970s and 400,000 Serbians and Montenegrins moving to the European Union in 1995−2004—provided that the door is left ajar. But the growth of transit routes and migrant networks have fueled the upward trend of applications from more distant origins. For example, travel in caravans through Mexico combined with violence and drought at home, a growing diaspora, and mixed messages about future US policy all combined to boost migration from Central America (Capps et al. 2019).

Brell, Dustmann and Preston present evidence on assimilation of refugees--which is often rather different from the pattern of assimilation by immigrants. For example, one striking pattern is that refugees are often slower to find employment than immigrants. In Germany, about 10% of refugees have found employment two years after arriving, while about 60% of immigrants have found employment within two years. This shouldn't be a surprise: remember, immigrants come because they are seeking something, while refugees are escaping something. The gap between employment of refugees and immigrants does tend to close over time. The outlier here is the United States, where employment rates for refugees and immigrants are much the same, right from the start. As the authors write: "It is not entirely clear why the US experience appears so different ... possible explanations could relate to the nature of the US labor market or to the nature of the settlement process in the United States, but require further investigation."

There is also usually a wage gap between refugees and other immigrants, which also exists in the US.
"For instance, while average wages of refugees who had been in the United States for two years amounted to 40 percent of native wages and 49 percent of other immigrants’ average wages, after 10 years, average wages had improved to 55 percent of natives and 70 percent of other immigrants in the same position."

What helps refugees to assimilate faster? Brell, Dustmann and Preston offer some non-obvious insights. One is that many refugees have experienced substantial trauma, and their fear of being persecuted is based on recent experience. Thus, there is some evidence that paying attention to their mental and physical health needs soon after arriving can help assimilation start on a better path. Speeding up the asylum process itself, so that people do not languish for several years without being able to start their new adjustment, can help. Another issue is that politicians sometimes divide up refugees among many locations. However, social networks within a group can offer an important method of learning about jobs and opportunities and more generally how to function in a new society, so settling refugees from a certain place in decent-sized groups can help assimilation.

One interesting US pattern involves acquisition of language skills: "[R]efugees arrive with lower levels of language proficiency than other migrants—at the time of migration, only about 44 percent of refugees speak English 'well' or better, compared with 64 percent of other immigrants. However, while other immigrants do not tend to see particularly strong gains in English speaking skills over time, refugees rapidly improve and even overtake other migrants’ speaking abilities around ten years after arriving in the United States." One hypothesis is that immigrants may be living within an extended culture from their country of origin, and perhaps travelling back and forth now and then. But refugees are often more isolated, and they aren't going back, so their incentives to learn English are different.

(In the discreet shade of these parentheses, I'll just note at the end of this post that it's annoying to me when public attention focuses so heavily on refugees who are seeking to enter the US and Europe, while largely ignoring refugees elsewhere or the group of displaced persons more broadly. At the peak a few years ago, the number of those making asylum claims in high-income countries was 1.5 million, a small share of the 70.8 million "forcibly displaced persons." Concern over the living conditions of the forcibly displaced population should not kick in only after they reach the border of a high-income country.)

Thursday, February 20, 2020

The US Rental Housing Market

The US rental housing market is in the middle of some major shifts, as outlined by the Joint Center for Housing Studies of Harvard University in its report "America's Rental Housing 2020" (January 2020). Here are some of the changes.

The "rentership rate"--the share of households renting--rose sharply from about 2004 to about 2016, before leveling out the last few years.

From 2000 to 2010, most of the growth in the housing rental market was coming from those with relatively lower incomes. But in the last decade, most of the growth in the rental housing market is coming from those with relatively higher incomes. "But at 22 percent in 2019, rentership rates among households earning $75,000 or more are at their highest levels on record. Even accounting for overall income growth, rentership rates for households in the top decile jumped from 8.0 percent in 2005 to 15.1 percent in 2018 as their numbers more than doubled."
Rent is a big burden for many. The report looks at renters who are "cost burdened," referring to those who pay more than 30% of their income in rent. "Thanks to strong growth in the number of high-income renters, the share of renters with cost burdens fell more noticeably from a peak of 50.7 percent in 2011 to 47.4 percent in 2017, followed by a modest 0.1 percentage point increase in 2018. ... Meanwhile, 10.9 million renters—or one in four—spent more than half their incomes on housing in 2018." Another big shift is that there is a rise in the "cost-burdened renters" in middle-income groups (say, $30,000-$75,000 per year in annual income), especially in  "larger, high-cost metropolitan areas."

Vacancy rates for rentals are down, and are especially low for lower-cost, lower-quality rentals.
Meanwhile, rents are consistently rising faster than inflation.
The value of apartment properties has risen quickly, too.

Some background factors are also shifting. In the market for rental properties, the stock of rentals has been rising in two areas over the last 15-20 years: single-family homes, and multi-family buildings with 20 or more units. These changes represent a shift in the rental housing market away from individual landlords and toward corporate ownership of rentals. In the area of single-family homes, for example, a number of institutional investors bought houses as rental properties in the aftermath of the drop in housing prices around 2010. The report notes:
Ownership of rental housing shifted noticeably between 2001 and 2015, with institutional owners such as LLCs, LLPs, and REITs accounting for a growing share of the stock. Meanwhile, individual ownership fell across rental properties of all sizes, but especially among buildings with 5–24 units. Indeed, the share of mid-sized apartment properties owned by individuals dropped from nearly two-thirds in 2001 to about two-fifths in 2015. Given that units in these structures are generally older and have relatively low rents, institutional investors may consider them prime candidates for purchase and upgrading. These changes in ownership have thus helped to keep rents on the climb.
Another shift is that many renters seem happier being renters, and less likely to view a rental as a short-term stop on the path to homeownership. Renters are staying in place longer, too. The report notes:
Changes in attitudes toward homeownership may lead some households to continue to rent later in life. The latest Freddie Mac Survey of Homeowners and Renters reports that the share of genX renters (aged 39–54 in 2019) with no interest in ever owning homes rose from 10 percent in March 2017 to 17 percent in April 2019. ... Fully 75 percent of renters overall, and 72 percent of genX renters, stated that renting best fits their current lifestyle. ...
[M]any renters are staying in the same rental units for longer periods. Between 2008 and 2018, the share of renters that had lived in their units for at least two years increased from 36 percent to 41 percent among those under age 35, and from 62 percent to 68 percent among those aged 35–64. Similarly, the National Apartment Association reported a turnover rate of just 46.8 percent in 2018— the lowest rate of move-outs since the survey began in 2000.
The US rate of homeownership has often been in the range of 63-65%, going up above that range during the housing boom around 2006, back down after that, and then rebounding a bit in the last few years.  Looking at long-run trends of aging, marriage/parenthood, and income, the US Department of Housing and Urban Development organized a pro-and-con symposium a few years ago on the question of whether the US homeownership rate will have fallen to less than 50% by 2050. Homeownership rates for young adults and for blacks are especially low. The US rate of homeownership was about average by international standards 20-25 years ago, but now is below the average. For earlier posts on these themes, see:


With regard to the broader social issue of rental prices being so high for so many people, the economic answer is straightforward. For those with very low incomes, help them afford the rent. But for the market as a whole, the way to get lower prices is to raise supply. For example, it's an interesting question as to why the individual landlord has been in such decline, and the extent to which this drop has been due to additional administrative, regulatory, and zoning costs being imposed at the state and local level. It seems to me possible that we are in the middle of a social shift in which many households at a variety of income levels put less emphasis on homeownership--which in turn means greater public attention to conditions of supply and demand in housing rental markets.

Tuesday, February 18, 2020

The Herfindahl-Hirschman Index: Story, Primer, Alternatives

It seems clear that the concept of what is now called the Herfindahl-Hirschman Index was originated in 1945 by Albert O. Hirschman, who may be best-remembered today for his 1970 book Exit, Voice, and Loyalty, discussing the options available to a dissatisfied group member. However, the concept was then attributed to Orris Herfindahl, who wrote five years later in 1950, and further confusion arose when it was sometimes referred to as a Gini index. Here's a primer and the story.

The HHI, as it is often abbreviated, is a way of measuring industry concentration that is taught in every intro econ textbook. Assume that you have an industry where one big company has 50% of sales, three companies have 10% each, and 20 companies have 1% each. How can a researcher sum up the degree of concentration in this industry in a single number? 

One common approach is to use a "concentration ratio." Pick a number of firms, like the top 4 or the top 8 in an industry. Add up their market share. Thus, the 4-firm concentration ratio in this example would be 80% and the 8-firm concentration ratio would be 84%.

But this concentration ratio approach has an obvious problem. An industry where the four top firms each had 20% of the market would have the same 4-firm concentration ratio of 80% as the example above. An industry where the top eight firms each had 10.5% of the market would have the same 8-firm concentration ratio of 84%. It would seem odd to say that these counterexamples, with a number of firms of roughly equivalent size, have the same concentration as the original example, where the largest firm has a full half of the market. 

Thus, the HHI uses a different calculation. First you square the market shares of existing firms; then you add them up. Thus, in the original example the HHI would be 50² + 3(10²) + 20(1²) = 2,820. The maximum value for an HHI would be 10,000, for a single firm with 100% of the market. An industry with a large number of very small firms that each have less than 1% of the market could have an HHI lower than 100. (In some cases, the HHI is described on a scale from 0 to 1, instead of 0 to 10,000, which is what you get if the market shares are expressed as decimal fractions before being squared--thus, a 50% market share squared would be 0.25, not 2,500.)
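Here is a minimal Python sketch of these calculations for the example industry above (one firm with 50% of sales, three with 10% each, twenty with 1% each):

```python
def concentration_ratio(shares, top_n):
    """Sum of the market shares (in percent) of the top_n largest firms."""
    return sum(sorted(shares, reverse=True)[:top_n])

def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares."""
    return sum(s ** 2 for s in shares)

shares = [50] + [10] * 3 + [1] * 20  # one 50% firm, three 10% firms, twenty 1% firms

print(concentration_ratio(shares, 4))           # 80 (4-firm concentration ratio)
print(concentration_ratio(shares, 8))           # 84 (8-firm concentration ratio)
print(hhi(shares))                              # 2820, on the 0-to-10,000 scale
print(round(hhi([s / 100 for s in shares]), 4)) # 0.282, on the 0-to-1 scale

# With n equal-sized firms, the 0-to-1 HHI equals 1/n (a point Rosenbluth
# makes below): for example, four firms with 25% each give 0.25.
print(hhi([0.25] * 4))                          # 0.25
```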

The idea of measuring industry concentration in this way originated with Albert O. Hirschman in his 1945 book National Power and the Structure of Foreign Trade. There were already ways of measuring concentration at the time, with the Lorenz curve and the Gini coefficient for measuring inequality of income especially well-known. But as Hirschman points out (p. 158):
In various instances, however, the number of elements in a series the concentration of which is being measured is an important consideration. This is so whenever concentration means "control by the few," i.e., particularly in connection with market phenomena. Control of an industry by few producers can be brought about by an inequality of distribution of the individual output shares when there are many producers or by the fact that only few producers exist. One of the well-known conditions of perfect competition is that no individual seller should command an important share of the total market supply; this condition implies the presence of both relative equality of distribution and of large numbers. 
To put this point a little differently, imagine a market in which all producers are equal-sized--maybe a small number like two or three or four, or a large number like 100 or 1,000. A measure of equality like those used for income would point out only that all firms are the same size. In contrast, a measure of concentration would emphasize that the number of firms matters: four firms means more competition than two, and 100 firms means more competition than four. By squaring the market shares, Hirschman's measure gave greater weight to larger firms, thus emphasizing the idea that when it comes to concentration of an industry, large firms matter more.
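
(A quick way to see the point about the number of firms: for n equal-sized firms, the HHI works out to exactly 10,000/n, so the index falls as the number of equal competitors rises. A minimal sketch, reusing the function from above:)

```python
def hhi(shares):
    return sum(s ** 2 for s in shares)

# For n equal-sized firms, the HHI is 10,000/n: more firms, lower concentration.
for n in (2, 4, 100, 1000):
    print(n, hhi([100 / n] * n))  # 5000.0, 2500.0, 100.0, 10.0
```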

In these ways, Hirschman's proposed measure of industry concentration was fundamentally different from the common measures of income equality. In fact, it was such a good idea that five years later it was reinvented by Orris C. Herfindahl in his 1950 PhD dissertation, Concentration in the U.S. Steel Industry. Herfindahl mentions Hirschman's earlier work in a footnote.

There were surface differences between the Hirschman and Herfindahl measures. Hirschman was looking at the concentration of countries' exports and imports, by source and destination of international trade, while Herfindahl was applying the measure to the US steel industry. In addition, Herfindahl used (essentially) the measure described above, while Hirschman took the square root of that measure.

But that's not how the attribution evolved. Gideon Rosenbluth wrote a chapter called "Measures of Concentration," which appeared in a 1955 NBER conference volume called Business Concentration and Price Policy (pp. 57-95). Rosenbluth wrote:
But summary measures can be devised to measure concentration, just as they have been developed for other characteristics of size distributions. An ingenious measure of this type has been employed by O. C. Herfindahl in an investigation of concentration in the steel industry. It consists of the sum of squares of firm sizes, all measured as percentages of total industry size. This index is equal to the reciprocal of the number of firms if all firms are of the same size, and reaches its maximum value of unity when there is only one firm in the industry.
But a few years later, in a 1961 essay, "Remarks," in Die Konzentration in der Wirtschaft, Schriften des Vereins für Sozialpolitik (New Series, Vol. 22, pp. 391-92), Rosenbluth wrote:
The first point I want to make causes me some embarrassment. There is a good deal of discussion in the background material about "Herfindahl's Index." Actually, it is a mistake to ascribe this index to Herfindahl, and I believe my paper on measures of concentration, published in 1955, is the source of this mistake. I discovered later that the man who first proposed this index was Albert O. Hirschman in his book "National Power and the Structure of Foreign Trade," published by the University of California Press in 1945. Hirschman actually proposed the square root of what I call Herfindahl's Index, since this gives a more even distribution of values.
Hirschman made an attempt to lay out this chronology in a short note appearing in the American Economic Review in 1964 (54: 5, September, p. 761). He points out that in a number of recent papers, the index was being referred to as a "Gini index," although he had made some effort back in 1945, along with Herfindahl and Rosenbluth in later work, to be clear that it was not an index of equality. Hirschman writes: "Upon devising the index I went carefully through the relevant literature because I strongly suspected that so simple a measure might already have occurred to someone. But no prior inventor was to be found." He also points out that Rosenbluth had originally attributed the index to Herfindahl.  Hirschman concludes on a wry note: "The net result is that my index is named either after Gini who did not invent it at all or after Herfindahl who reinvented it. Well, it's a cruel world."

This problem of attribution is sometimes called Stigler's law: "No scientific discovery is named after its original discoverer." Of course, Stephen Stigler was quick to point out in his 1980 article that he didn't discover his own law, either (he credited the sociologist Robert K. Merton). In this case, it does not seem to me a grievous miscarriage of justice to have the names of both Hirschman and Herfindahl on the index, although Hirschman should probably come first.

Those who have read this far are probably the kind of people who would be interested in knowing that the justification and analysis of concentration indexes remains an ongoing task. A useful starting point for getting up to speed is the NBER working paper by Paolo M. Adajar, Ernst R. Berndt, and Rena M. Conti, "The Surprising Hybrid Pedigree of Measures of Diversity and Economic Concentration" (November 2019, #26512).
The characterization of industry structure and industry concentration has long been a task facing empirical economic researchers, for it is widely believed that market structure, market behavior and various market performance outcomes are important interrelated phenomena. Although a number of alternative measures of market concentration are commonly used, such as the k‐firm concentration measure and the Herfindahl‐Hirschman index (HHI), their foundations in economic theory and statistics are limited and have not been developed extensively, leaving their unqualified use as measures of market power potentially vulnerable to the criticism of “measurement without theory”.
For example, perhaps it makes sense at some intuitive level to give greater weight to the market share of large firms when measuring concentration. But why square the market shares? Why not adjust them in some other way? Indeed, there is a set of alternative concentration measures using different weights going back to the work of Gideon Rosenbluth, and known as Rosenbluth/Hall‐Tideman (RHT) metrics. Adajar, Berndt, and Conti offer an analytical basis for the idea that squaring the market shares makes sense, based on a conceptually similar diversity measure from ecology. They write:
In this paper, we have traced the pedigree of the much‐used Herfindahl‐Hirschman (HHI) economic concentration index to the Simpson Index of diversity originally developed in ecology, where an identical calculation to the HHI is interpreted as the probability of two organisms randomly selected from a sample habitat belonging to the same species (analogous in economics to the probability a pair of randomly and independently selected products are being marketed by the same manufacturer). This probabilistic foundation of the HHI to some extent shields it from the allegation that the sum of squared shares calculation is arbitrary and unscientific, even as its links to market power and antitrust competition analysis remain ambiguous. 
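
(The probabilistic interpretation is easy to verify by simulation. Here is a minimal Monte Carlo sketch, using the running example's shares on the 0-to-1 scale: draw pairs of products in proportion to market shares and count how often both come from the same firm. The simulated frequency should come out close to the HHI of 0.282.)

```python
import random

shares = [0.5] + [0.1] * 3 + [0.01] * 20  # shares on the 0-to-1 scale
hhi = sum(s ** 2 for s in shares)

# Draw many pairs of products, each independently in proportion to market
# share, and count how often the two products come from the same firm.
trials = 200_000
firms = range(len(shares))
same_firm = 0
for _ in range(trials):
    a, b = random.choices(firms, weights=shares, k=2)
    same_firm += (a == b)

print(hhi, same_firm / trials)  # the two numbers should roughly agree
```
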
For those wanting to dig deeper into alternative indexes of concentration, Adajar, Berndt, and Conti write:
We have also considered alternative proposed measures of concentrations, some of them mathematical generalizations of the HHI, others such as entropy originating from information theory in engineering and physics, another set that is developed axiomatically, and still others incorporating related concepts such as inequality and absolute population size. We have considered computational and interpretability aspects of the various concentration measures, and noted the extent to which they incorporate considerations not only of relative inequality such as the Gini coefficient and Lorenz curve, but also of absolute population size. 
Other things equal, markets with a large number of competitors suggest that barriers to entry are limited, and therefore such markets could plausibly be expected to be competitive. Therefore, to economists, concentration metrics incorporating both variability/relative inequality and absolute population size considerations are preferable, for if one believes that economic performance outcomes depend not only on relative sizes but also on the absolute number of competitors in a market, then one prefers a concentration measure that incorporates both features. The existing economic literature comparing the various concentration metrics on a priori statistical and axiomatic criteria appears to view the HHI and the closely related Rosenbluth/Hall‐Tideman (RHT) metrics most favorably. Choice between these two measures on a priori grounds is indeterminate, since the choice involves selection of weights and is therefore similar to choice among alternative index number formulas in economic index number theory.
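
(For readers who want a feel for the alternatives, here is a minimal sketch of two of them, using their standard textbook definitions as I understand them--consult the paper itself for the exact formulations it analyzes. Entropy treats low values as high concentration; the Rosenbluth/Hall-Tideman index weights shares by firm rank rather than by the shares themselves, and, like a 0-to-1 HHI, it equals 1/n when all n firms are the same size.)

```python
from math import log

shares = [0.5] + [0.1] * 3 + [0.01] * 20  # 0-to-1 scale, summing to 1

# HHI: sum of squared shares
hhi = sum(s ** 2 for s in shares)

# Entropy: -sum(s * ln s); a monopoly scores 0, more and evener firms score higher
entropy = -sum(s * log(s) for s in shares if s > 0)

# Rosenbluth/Hall-Tideman: rank firms from largest (rank 1) to smallest
ranked = sorted(shares, reverse=True)
rht = 1 / (2 * sum(rank * s for rank, s in enumerate(ranked, start=1)) - 1)

print(hhi, entropy, rht)  # 0.282, about 1.96, about 0.13
```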

Monday, February 17, 2020

Writing the Intro to Your Economics Research Paper

If you do academic research, whether in economics or other fields, you need to give an honest answer to a basic question: "Do you want readers for your research?" If the answer is "no," then read no further. If the answer is "yes," then you should probably be putting considerably more thought and work into the introduction of your paper. Barney Kilgore, a famous editor of the Wall Street Journal back in the 1950s and 1960s, posted a motto in his office: “The easiest thing in the world for a reader to do is to stop reading.” If the intro doesn't make readers want to proceed, they will often take the easy course and turn to something else.

Several writers of economics blogs have emphasized this theme recently. 

At the Center for Global Development blog, David Evans wrote "How to Write the Introduction of Your Development Economics Paper" (February 10, 2020). Evans writes:
You win or lose your readers with the introduction of your economics paper. Your title and your abstract should convince people to read your introduction. Research shows that economics papers with more readable introductions get cited more. The introduction is your opportunity to lay out your research question, your empirical strategy, your findings, and why it matters. Succinctly. ...
Invest in your introduction. One reason that so many introductions in top journals have a similar pattern is that it’s clear: you tell the reader why the issue you studied is important, you tell them what you did, you tell them what you learned, and you tell them how it builds on what we already knew. You might tell them how it relates to policy or what the limitations of your work are. Interested readers can dive into the details of the paper, but good introductions give casual readers a clear sense of what they’ll get out of your paper. Your introduction is your kingdom. Rule it well.
Evans looks at 15 recent economic development papers published in prominent journals and discusses the ways in which their introductions follow a common pattern:
  1. Motivate with a puzzle or a problem (1–2 paragraphs)
  2. Clearly state your research question (1 paragraph)
  3. Empirical approach (1 paragraph)
  4. Detailed results (3–4 paragraphs)
  5. Value-added relative to related literature (1–3 paragraphs)
  6. Optional paragraphs: robustness checks, policy relevance, limitations
  7. Roadmap (1 paragraph)
Evans also points to a couple of other recent discussions of introductions in economic research. For example, Keith Head presents his own view of "The Introduction Formula," which starts like this:
1. Hook: Attract the reader’s interest by telling them that this paper relates to something interesting. What makes a topic interesting? Some combination of the following attributes makes Y something worth looking at.
  • Y matters: When Y rises or falls, people are hurt or helped.
  • Y is puzzling: it defies easy explanation.
  • Y is controversial: some argue one thing while others say another.
  • Y is big (like the service sector) or common (like traffic jams).
Things to avoid:
  • The bait and switch: promising an interesting topic but delivering something else, in particular, something boring.
  • “all my friends are doing it” : presenting no other motivation for a topic than that other people have written papers on it.
2. Question: Tell the reader what this paper actually does. Think of this as the point in a trial where, having detailed the crime, you now identify a perpetrator and promise to provide a persuasive case. The reader should have an idea of a clean research question that will have a more or less satisfactory answer by the end of the paper. Examples follow below. The question may take two paragraphs. At the end of the first (2nd paragraph of the paper) or possibly beginning of the second (3rd paragraph overall) you should have the “This paper addresses the question” sentence.
Claudia Sahm at the Macromom blog spent last fall reading job market papers, and gave vent to her reactions in "We need to talk MORE ..." (September 19, 2019).
This post is for job market candidates. You need to spend more time editing your abstract and introduction. It will be worth more than your fourth robustness check. Promise. ... Sadly, it is clear that economics departments and dissertation committees are NOT teaching their doctoral students how to communicate their research. ... EVERY job market paper I read lacked a well-structured, well-written introduction and abstract. Many of these papers are from top schools and from native English speakers.
Sahm offers an intro structure as well, closely related to the others. She begins this way:

Structure of Introduction (in order):

THIS IS A VERY IMPORTANT PART OF YOUR PAPER

1) Motivation (1 paragraph)
  • Must be about the economics.
  • NEVER start with literature or new technique (unless econometrics).
  • Be specific and motivate YOUR research question.
2) Research question (1 paragraph)
  • Lead with YOUR question.
  • THEN set YOUR question within most relevant literature.
  • My favorite is an actual question: “My paper answers the question …”
  • Popular and acceptable: “My paper [studies/quantifies/evaluates/etc] …”
3) Main contribution (2-3 paragraphs, one for each contribution)
  • YOUR main contribution
    • MUST be about new economic knowledge.
    • Lead with YOUR work, then how it extends the literature.
  • New model, new data, new method, etc.:
    • Can be second or third contribution.
    • Tools are important, not most important.
  • Each paragraph begins with a sentence stating one of YOUR contributions.
  • THEN follow with three or four sentences setting YOUR contribution in literature.
  • Most important should be first (preferred) or last (sometimes most logical).
  • YOUR contributions are very important. Make them clear, compelling, and correct.
These posts caught my eye in part because they emphasize a theme I have also tried to stress when talking about writing. A substantial part of my value-added as Managing Editor of the Journal of Economic Perspectives is to sharpen up the introductions of papers. Most of the time, all the ingredients for a strong introduction are already there. But it's not unusual for an excellent lead-in or "hook" to be buried several pages into the paper, or even at the start of the conclusion, rather than right up front. It's not unusual to have intros that are either so long that only the author's parents will persevere to the end, or so short that the reader might just as well flip to a random page in the middle of the essay and start there.

Here's a quote from an essay of my own, "From the Desk of the Managing Editor," written on the occasion of the 100th issue of the Journal of Economic Perspectives back in Spring 2012. I wrote:   
Invest more time in the stepping-stones of exposition: introductions, opening paragraphs of sections, and conclusions. Introductions of papers are worth four times as much effort as they usually receive. The opening paragraph of each main section of a paper is worth three times as much effort as it usually receives. Conclusions are worth twice as much effort as they usually receive. This recommendation emphatically does not call for long introductions with a blow-by-blow overview of each subsection of the paper to come. It doesn’t mean repeating the same topic sentences over and over again, in introduction and section headings and conclusion. It means making a genuine effort to attract the attention of the reader and let the reader know what is at stake up front, to signpost the argument as it develops, and to tell the reader the state of the argument at the end.

Friday, February 14, 2020

Telephone Switchboard Operators: Rise and Fall

In 1950, there were 342,000 telephone switchboard operators working for the Bell Telephone System and the independent phone companies, as well as another 1 million or so telephone switchboard operators who worked at private locations like office buildings, factories, hotels, and apartment buildings. Almost all of these switchboard operators were female. To put it another way, about one out of every 13 working women in 1950 was a telephone operator. But by 1984, national employment as an operator in the telecommunications industry was down to 40,000, and now it's less than 2,000 (according to the Bureau of Labor Statistics).

David A. Price sketches the history of this rise and fall in "Goodbye, Operator," appearing in Econ Focus (Federal Reserve Bank of Richmond, Fourth Quarter 2019, pp. 18-20). The story provokes some thoughts about the interaction of workers with new and evolving technologies.

For more than a half-century, from the late 19th century up to 1950, technology was creating jobs for telephone operators. From the phone companies' point of view, customers needed personal assistance and support if they were to incorporate this new technology into their lives. The workers with what we would now call the "soft skills" to provide this interface between technology and customers were reasonably well-rewarded. Price writes:
In the early decades of the industry, telephone companies regarded their business less as a utility and more as a personal service. The telephone operator was central to this idea, acting as an early version of an intelligent assistant with voice recognition capabilities. She got to know her 50 to 100 assigned customers by name and knew their needs. If a party didn't answer, she would try to find him or her around town. If that didn't succeed, she took a message and called the party again later to pass the message along. She made wake-up calls and gave the time, weather, and sports scores. During crimes in progress or medical emergencies, a subscriber needed only to pick up the handset and the operator would summon the police or doctors. ...

While operators were not highly paid, the need to attract and retain capable women from the middle classes led telephone companies to be benevolent employers by the standards of the day — and in some respects, of any day. Around the turn of the century, the companies catered to their operators with libraries, athletic clubs, free lunches, and disability plans. Operators took their breaks in tastefully appointed, parlor-like break rooms, some with armchairs, couches, magazines, and newspapers. At some exchanges, the companies provided the operators with a community garden in which they could grow flowers or vegetables. In large cities, company-owned dormitories were offered to night-shift operators.
But even as the number of telephone operator jobs was growing rapidly, the job itself evolved dramatically. By 1950, the hyper-personal touch seems to have greatly diminished, and the essential skill was the ability to handle "the board"--plugging and unplugging several hundred connections per hour.

Looking back, the slow diffusion of automatic telephone switching technology seems a little puzzling. It's a standard story that the switchboard operators were replaced by automation. But why weren't they replaced by automation much earlier? Part of the answer seems to be that digital technology differs in some fundamental ways from earlier methods of automation: the automated telephone-switching systems of the first half of the 20th century did not actually display economies of scale. Price writes:
With the electromechanical systems of the day, each additional customer was more, not less, expensive. Economies of scale weren't in the picture. To oversimplify somewhat, a network with eight customers needed eight times eight, or 64, interconnections; a network with nine needed 81. "You were actually getting increasing unit costs as the scope of the network increased," says Mueller. "You didn't get entirely out of the telephone scaling problem until digital switching in the 1960s."
This pattern of technology led to a situation where small-scale independent phone companies were more likely to use automated switching in the early part of the 20th century, while the giant Bell company continued to rely heavily on combinations of automatic switching with oversight from human switchboard operators--especially for long-distance calls.
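
(The scaling problem in the quote is easy to illustrate with a toy calculation. Like the quote itself, this is a deliberate oversimplification--real exchanges did not wire every pair of subscribers directly--but it conveys why each additional customer made the electromechanical network more expensive per customer, not less.)

```python
# Toy illustration: if every pair of subscribers needs a potential
# interconnection, connections grow roughly with the square of the
# subscriber count, so the burden *per subscriber* keeps rising.
for n in (8, 9, 100, 1000):
    pairs = n * (n - 1) // 2              # distinct subscriber pairs
    print(n, pairs, round(pairs / n, 1))  # per-subscriber count grows with n
```
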
More broadly, diffusion of technology is important in many contexts. Some well-known historical examples of important technologies that diffused slowly, over decades, include tractors and electricity. In the modern economy, a prominent pattern across many industries is that a few leading "superstar" firms are jumping farther ahead in terms of productivity, and their example of how to achieve such productivity gains is apparently not diffusing quickly to other firms. There's an old economic lesson here: for purposes of economic growth, just inventing a new technology is not enough. Many participants in the economy need to change their behavior, in both simple and more fundamental ways, to take full advantage of that technology.

Back in 1964, even knowledgeable industry observers thought that the decline in telephone operators from about 1950 to 1960 was a one-time and temporary shift. Elizabeth Faulkner Baker wrote in  her 1964 book, Technology and Women's Work:
In sum, it is possible that the decline in the relative importance of telephone operators may be nearing an end. It seems that in the foreseeable future no machines will be devised that can completely handle person-to-person calls, credit-card calls, emergency calls, information calls, transient calls, messenger calls, marine and mobile calls, civilian defense calls, conference calls, and coin-box long-distance calls. Indeed, although an executive vice-president of the American Telephone and Telegraph Company has said that the number of dial telephones will reach almost 100 percent in the next few years and that there will be an increasing amount of customer dialing of long-distance calls: "Yet we will still need about the same number of operators we need now, perhaps more."
Again, the underlying notion was that the job of telephone operator would evolve, but that there would be a continuing need for people who could make the use of telecommunications technology easier for customers. When it comes to the specific job of telephone operator, this prediction was clearly off-base. (Although as a college student in the late 1970s and early 1980s, I remember the days when, if you really needed to call home, you could just grab a public phone, dial zero for "operator," and be answered by a person, to whom you would recite your home phone number and request a collect call.) But when thinking more broadly about the interaction between workers and technology, the central question remains as to what areas, now and in the future, will continue to benefit from human support at the interface between new technologies and ultimate users.