Tuesday, February 25, 2020

Spending Comparison: Pet Care Industry and National Elections

Americans spent $5.8 billion on pet care services in 2017, according to recent estimates from economists at the US Bureau of the Census. To be clear: "The pet care services industry (NAICS code 812910) includes services such as grooming, boarding, training and pet sitting. It does not include veterinary services, boarding horses, transporting pets, pet food or other pet supplies."

My first thought on seeing the pet care services article was to be reminded, yet again, of the enormous size and richness of the US economy. My second thought was about costs of US national elections.

According to the OpenSecrets website run by the Center for Responsive Politics, total spending for federal elections in 2016--including the presidential campaign, as well as races for the House and Senate--was $4.6 billion. One suspects that billionaires Michael Bloomberg and Tom Steyer tossing around money will raise total election campaign spending for 2020. (Bloomberg has reportedly put $464 million into his campaign so far, and Steyer has put $267 million into his.) But even so, Americans will probably spend roughly the same on pet care services in 2020 as the total amount spent on all campaigns for the presidency and Congress.

As another comparison, the US corporations that spent most heavily on advertising in 2018 were Comcast ($6.12 billion), AT&T ($5.36 billion), Amazon ($4.47 billion), and Procter & Gamble ($4.3 billion). It's very likely that in 2020 perhaps 5-7 large companies will each individually have an advertising budget above the total spent by all presidential candidates (in 2016, $1.4 billion), and the top 1-2 corporate advertising budgets will exceed the total spent on all campaigns for the presidency and Congress combined.

On one side, the total amounts spent on national election campaigns do seem large. But given the power that politicians wield over the $4.6 trillion federal budget, not to mention over the passage of domestic regulations and foreign policy, all in the context of a US GDP of $22 trillion in 2020, the amounts spent on campaigning don't seem vastly out of line.

From this perspective, what's remarkable is not how much the US spends on elections, but how little. This isn't a new observation: back in 2003, the Journal of Economic Perspectives ran an article called "Why Is There so Little Money in U.S. Politics?" by Stephen Ansolabehere, John M. de Figueiredo and James M. Snyder Jr. They argued that the evidence does not support a view of campaign contributions as an investment by special interests expecting a return; instead, campaign contributions are better viewed as a form of consumption spending, in which contributors enjoy a sense of connectedness and participation.

Ultimately, it's not the total amount of spending on campaigns that annoys or concerns me. It's that the ads are banal, uncreative, information-free, and full of spin that is not gentle but blatant. In addition, the campaigns see fit to hit you with the same ads over and over again. By Election Day, it is hard for me to avoid a feeling that the advertising agencies for the candidates--and by extension the candidates themselves--feel disdain or contempt for the electorate.

My other concern is that while people obsess over total campaign spending, which mostly happens out in the open through live candidate events and media advertising, the totally different category of lobbying expenses flies under the radar. The OpenSecrets website also collects data on lobbying expenses; in 2019, for example, over 11,000 registered lobbyists spent $3.5 billion. My suspicion is that this total is an underestimate, because a lot of what might reasonably be called lobbying is not registered and recorded. But we know that lobbying happens every year, whether there is an election or not. How this lobbying is directed is murky, and what it accomplishes in adjusting the fine print of legislation or nipping certain proposals in the bud is opaque. Here's a figure from OpenSecrets:

Monday, February 24, 2020

Untangling India's Distinctive Economic Story

It's easy enough to explain why China's economic development has gotten more attention than that of India. China's growth rate has been faster. China's effect on international trade has created more of a shock for the rest of the global economy. In geopolitical terms, China looks more like a rival. Also, China's basic story-line of trying to liberalize a centrally-planned economy while keeping a communist government is fairly easy to tell.

But whatever the plausible reasons why China's economy has gotten more attention than India, it seems clear to me that India's economic developments have gotten far too little attention. A symposium in the Winter 2020 issue of the Journal of Economic Perspectives offers some insights:
I'll also mention an article on "Caste and the Indian Economy," by Kaivan Munshi, which appears in the December 2019 issue of the Journal of Economic Literature, a sibling journal of the JEP (that is, both are published by the American Economic Association).

Lamba and Subramanian point out that over the 38 years from 1980 (when India started making some pro-business reforms), India is one of only nine countries in the world to have averaged an annual growth rate of 4.5%, with no decadal average falling below 2.9% annual growth. (The nine, listed in order of annual growth rates during this time with highest first, are Botswana, Singapore, Korea, Taiwan, Malta, Hong Kong, Thailand, India, and Malaysia.) Of course, one can tweak these cutoffs in various ways, but no matter how you slice it, India's growth rate over the last four decades has been remarkable. Moreover, India's population is likely to exceed China's in the near future.

But India's path to rapid growth has been notably different than that of many other countries. India is ethnically fractionalized, especially when the caste system is taken into account. In addition, India's path to development has been "precocious," as Lamba and Subramanian put it, in two ways.

One involves the "modernization hypothesis" that economic development and democracy evolve together over time. In India, universal suffrage arrived all at once when India became independent in 1947. For a sense of how dramatic this difference is, the graph below shows per capita GDP on the horizontal axis and degree of democracy on the vertical axis. The lines show the path of countries over time. Clearly, India defies the modernization hypothesis by having full democracy before development. China defies the modernization hypothesis in the other direction, by having development without democracy.
The other precocious factor for India is that economic development in most countries involves a movement from agriculture to manufacturing to services. However, India has largely skipped the stage of low-wage manufacturing, and moved directly toward a services-based economy. One underlying factor is India's "license raj"--the interlocking combinations of rules about starting a business, labor laws, and land use that have made it hard for manufacturing firms to become established. A related factor is that in global markets, India's attempts at low-wage manufacturing over the decades were outcompeted by Korea, Thailand, China--and now by the rise of robots.

The good side of this "precocious servicification" is that high-income economies are primarily services-based and services are a rising part of international trade. The bad side is that this services economy works much better for the relatively well-educated in urban areas, and offers less opportunity for others--thus leading to greater inequality.

India faces a range of other issues as well. Environmental problems in India are severe: when it comes to air pollution for example, "22 of the top 30 most polluted cities in the world are in India." The role of women in India's economy and society is in some ways moving backward: "Female labor force participation in India has been declining from about 35 percent in 1990 to about 28 percent in 2015. For perspective, the female labor force participation rate in Indonesia in 2015 was almost 50 percent; in China, it was above 60 percent. In addition, the gap between India’s labor force participation rate and the rate of countries with similar per capita GDP is widening, not narrowing. ... India’s sex ratio at birth increased from 1,060 boys born for every 1,000 girls in 1970 to 1,106 in 2014, widening its gap from the biological norm of 1,050."

The capabilities of India's government are shaped by these underlying background factors. Devesh Kapur writes in JEP:
India’s state performs poorly in basic public services such as providing primary education, public health, water, sanitation, and environmental quality. While it is politically effective in managing one of the world’s largest armed forces, it is less effective in managing public service bureaucracies. The research literature on India has many discussions of programs that fail to deliver meaningful outcomes, or that are victims of weak implementation and rent-seeking behavior of politicians and bureaucrats, or that are vitiated by discrimination against certain social groups ...

But on the other side, the Indian state has a strong record in successfully managing complex tasks and on a massive scale. It has repeatedly conducted elections for hundreds of millions of voters—nearly 900 million in the 2019 general elections—without national disputes. In this decade, it has scaled up large programs such as Aadhaar, the world’s largest biometric ID program (which crossed one billion people enrolled within seven years of its launch). Most recently, it has implemented the integrated Goods and Services Tax (GST), one of the most ambitious tax reforms anywhere in recent times. India ranks low on its ability to enforce contracts, but its homicide rate has dropped markedly from 5.1 in 1990 to 3.2 (per 100,000) in 2016 ... 
[T]he Indian state has delivered better in certain situations and settings: specifically, on macroeconomic rather than microeconomic outcomes; where delivery is episodic with inbuilt exit, rather than where delivery and accountability are quotidian and more reliant on state capacity at local levels; and on those goods and services where societal norms and values concerning hierarchy and status matter less, rather than in settings where these norms and values—such as caste and patriarchy—are resilient.

Kapur traces these issues back to the ethnic fractionalization, social cleavages, and caste system in India, combined with India's early adoption of democracy. Moreover, India is a country with a low tax/GDP ratio and a relatively small number of taxpayers. He also points out that most government positions in India require a difficult civil-service examination, and by international standards India's government does not appear overstaffed. A pattern has evolved in which India's government is relatively effective on big-picture projects like electrification, but much less effective on local issues that are related to social expectations about caste and gender: for example, reforms related to education, or the welfare of children and women. In countries as different as the United States and China, about 60% of all government employees are at the local level; in India, it's less than 20%.

India continues to have issues with caste differences, as explored in the article by Kaivan Munshi. He writes:
Caste continues to play an important role in the Indian economy. Networks organized at the level of the caste or jati provide insurance, jobs, and credit for their members in an economy where market institutions are inefficient. Affirmative action for large groups of historically disadvantaged castes in higher education and India’s representative democracy has, if anything, made caste more salient in society and in the public discourse. Newly available evidence with nationally representative data indicates that there has been convergence in education, income, occupations, and consumption across caste groups over time. ... The available evidence indicates that caste discrimination, at least in urban labor markets, is statistical, that is, based on differences in socioeconomic characteristics between upper and lower castes. ... Given the strong intergenerational persistence in human capital, the key variable driving convergence, it will be many generations before income and consumption are equalized across caste groups.
The caste-based economic networks that currently serve many functions will also disappear once markets begin to function efficiently. These networks continue to be active in the globalizing Indian economy because information and commitment problems are exacerbated during a period of economic change. In the long run, however, the markets will settle into place and the caste networks will lose their purpose. This has certainly been the experience in many developed countries. In the United States, for example, ethnic networks based on a European country (region) of origin supported their members through the nineteenth century into the middle of the twentieth century. Ultimately, however, these networks no longer served a useful role and today, outside of a few pockets, European ethnic identity in the United States is largely symbolic. We might expect caste to similarly lose its salience as India develops into a modern market economy, and there is some evidence that this process may have already begun.
Amartya Lahiri takes up yet another issue: "On November 8, 2016, India demonetized 86 percent of its currency in circulation." Specifically, India declared that people needed to turn in their large-denomination bills at banks, and that the existing bills would be worthless moving forward. They would then be replaced with new currency. The policy had several goals, like making it impossible for organized crime to hide its accumulated gains in the form of cash, and bringing people into the banking system and the digital economy. But Lahiri argues that these larger goals were not much affected by the change. Instead, the main effect of the demonetization was to cause short-term hardship and higher unemployment in the areas where the demonetization led to temporary cash shortages. I had not known that India had carried out similar demonetizations of large-denomination currency in 1946 and 1978--with, Lahiri argues, much the same minimal-to-negative effects.

India's record of sustained and strong economic growth appears to be in some danger from the "twin balance sheet challenge."  As Lamba and Subramanian put it:
The sustainability of growth—which in late 2019 has cratered to a near standstill— will be determined by structural factors salient amongst which is the “twin balance sheet challenge” initiated by the toxic legacy of the credit boom of the 2000s. Recently, the rot of stressed loans has spread from the public sector banks to the nonbank financial sector, and on the real side, from infrastructure companies to most notably the real estate sector with the latter threatening middle class savings. This contagion owes both to overall weak economic growth and slow progress in cleaning up bank and corporate balance sheets. A failure to resolve this challenge could mean a reprisal of the Japanese experience of nearly two decades of lost growth, but at a much lower level of per capita income. India’s development experience could end up being a transition from socialism without entry to capitalism without exit because weak regulatory capacity and lack of social buy-in will have impeded the necessary creative destruction.
Thus, India's economy finds itself at a pivotal moment, facing the short-run challenge of the twin balance sheet problem, the longer-run economic problem of crafting reforms to create an environment in which India's businesses can function and grow, the challenge of building transportation, energy, and communications infrastructure, and the social policy challenges of improving education and health care. Challenges never come singly.

Friday, February 21, 2020

Some Economics of Refugees

Refugee policy is defined differently from immigration policy. With immigration policy, a nation makes a decision about what number and kinds of immigration (family-based, skill-based) would benefit itself. But refugee policy, at least under the standard definition from the 1951 Refugee Convention, applies to anyone who, “owing to well-founded fear of being persecuted for reasons of race, religion, nationality, membership of a particular social group or political opinion, is outside the country of his nationality and is unable or, owing to such fear, is unwilling to avail himself of the protection of that country…”

In theory, refugee policy is based on the need of the refugees themselves, not on some judgment about whether letting them in will benefit the receiving society. But the receiving society does retain the power to make decisions about the extent to which people have a "well-founded fear of being persecuted" in the sense of the term that would make them refugees, or whether they are trying to use refugee status to do an end-run around immigration limits. Evaluating that distinction will often involve some degree of subjectivity and politics.

For an overview of what we know about the magnitudes, drivers, and legalities of refugee flows and assimilation, I recommend the two-paper symposium in the Winter 2020 issue of the Journal of Economic Perspectives. (Full disclosure: I work as Managing Editor of JEP.)

As Timothy Hatton notes in one of the papers: "The United Nations High Commissioner for Refugees (UNHCR) estimates the total number of refugees worldwide at the end of 2018 at 20.1 million. This is less than one-third of the total of 70.8 million `forcibly displaced persons,' which also includes those displaced within their home country (41.3 million) and Palestinians (5.5 million) who come under a separate mandate (UNHCR 2019, 2). In 2018, refugees were 7.6 percent of the stock of all international migrants (defined as those living outside their country of birth). ... As of 2018, two-thirds of refugees are from just five countries: Syria, Afghanistan, South Sudan, Myanmar, and Somalia. Of the total, 85 percent of refugees are located in developing countries, often just across the border from the origin country, and about 30 percent of these languish in organized refugee camps."

Here's a figure from Hatton showing the number of asylum claims by refugees over time. The figure makes obvious why concerns over refugee policy have been high in the European Union, and also in the US.  About one-third of asylum applicants are granted refugee status. 

As one might expect, the drivers of refugee patterns are less about economic differences, and more about political terror in sending countries and proximity and access to another country. Hatton writes: 
Several studies have assessed the push and pull forces behind asylum applications to industrialized countries by analyzing panel data on the number of applicants by origin, by destination, and over time. The most important origin-country variables are political terror and lack of civil liberties; civil war matters less, perhaps because war per se does not necessarily confer refugee status (Hatton 2009, 2017a). There is weaker evidence that declines in origin-country income per capita leads to more asylum applications, which offers modest support to the view that economic migration is part of the story. Proximity and access are important in determining the volume of asylum applications. Countries that are small but nearby can generate large flows—as with a quarter of a million Cubans moving to the United States in the 1970s and 400,000 Serbians and Montenegrins moving to the European Union in 1995−2004—provided that the door is left ajar. But the growth of transit routes and migrant networks have fueled the upward trend of applications from more distant origins. For example, travel in caravans through Mexico combined with violence and drought at home, a growing diaspora, and mixed messages about future US policy all combined to boost migration from Central America (Capps et al. 2019).

Brell, Dustmann and Preston present evidence on assimilation of refugees--which is often rather different from the pattern of assimilation by immigrants. For example, one striking pattern is that refugees are often slower to find employment than immigrants. In Germany, about 10% of refugees have found employment two years after arriving, while about 60% of immigrants have found employment within two years. This shouldn't be a surprise: remember, immigrants come because they are seeking something, while refugees are escaping something. The gap between employment of refugees and immigrants does tend to close over time. The outlier here is the United States, where employment rates for refugees and immigrants are much the same, right from the start. As the authors write: "It is not entirely clear why the US experience appears so different ... possible explanations could relate to the nature of the US labor market or to the nature of the settlement process in the United States, but require further investigation."

There is also usually a wage gap between refugees and other immigrants, which also exists in the US.
"For instance, while average wages of refugees who had been in the United States for two years amounted to 40 percent of native wages and 49 percent of other immigrants’ average wages, after 10 years, average wages had improved to 55 percent of natives and 70 percent of other immigrants in the same position."

What helps refugees to assimilate faster? Brell, Dustmann and Preston offer some non-obvious insights. One is that many refugees have experienced substantial trauma, and their fear of being persecuted is based on recent experience. Thus, there is some evidence that paying attention to their mental and physical health needs soon after arriving can help assimilation start on a better path. Speeding up the asylum process itself, so that people do not languish for several years without being able to start their new adjustment, can help. Another issue is that politicians sometimes divide up refugees among many locations. However, social networks within a group can offer an important way of learning about jobs and opportunities and, more generally, how to function in a new society, so settling refugees in decent-sized groups can help assimilation.

One interesting US pattern involves acquisition of language skills: "[R]efugees arrive with lower levels of language proficiency than other migrants—at the time of migration, only about 44 percent of refugees speak English `well' or better, compared with 64 percent of other immigrants. However, while other immigrants do not tend to see particularly strong gains in English speaking skills over time, refugees rapidly improve and even overtake other migrants’ speaking abilities around ten years after arriving in the United States." One hypothesis is that immigrants may be living within an extended culture from their country of origin, and perhaps travelling back and forth now and then. But refugees are often more isolated, and they aren't going back, so their incentives to learn English are different.

(In the discreet shade of these parentheses, I'll just note that it's annoying to me when public attention focuses so heavily on refugees who are seeking to enter the US and Europe, while largely ignoring refugees elsewhere or the group of displaced persons more broadly. At the peak a few years ago, the number of those making asylum claims in high-income countries was 1.5 million, a small share of the 70.8 million "forcibly displaced persons." Concern over the living conditions of the forcibly displaced population should not kick in only after they reach the border of a high-income country.)

Thursday, February 20, 2020

The US Rental Housing Market

The US rental housing market is in the middle of some major shifts, as outlined by the Joint Center for Housing Studies of Harvard University in its report "America's Rental Housing 2020" (January 2020). Here are some of the changes.

The "rentership rate"--the share of households renting--rose sharply from about 2004 to about 2016, before leveling out the last few years.

From 2000 to 2010, most of the growth in the rental housing market came from those with relatively lower incomes. But in the last decade, most of the growth has come from those with relatively higher incomes. "But at 22 percent in 2019, rentership rates among households earning $75,000 or more are at their highest levels on record. Even accounting for overall income growth, rentership rates for households in the top decile jumped from 8.0 percent in 2005 to 15.1 percent in 2018 as their numbers more than doubled."
Rent is a big burden for many. The report looks at renters who are "cost burdened," referring to those who pay more than 30% of their income in rent. "Thanks to strong growth in the number of high-income renters, the share of renters with cost burdens fell more noticeably from a peak of 50.7 percent in 2011 to 47.4 percent in 2017, followed by a modest 0.1 percentage point increase in 2018. ... Meanwhile, 10.9 million renters—or one in four—spent more than half their incomes on housing in 2018." Another big shift is that there is a rise in the "cost-burdened renters" in middle-income groups (say, $30,000-$75,000 per year in annual income), especially in  "larger, high-cost metropolitan areas."

Vacancy rates for rentals are down, and are especially low for lower-cost, lower-quality rentals.
Meanwhile, rents are consistently rising faster than inflation.
The value of apartment properties has risen quickly, too.

Some background factors are also shifting. In the market for rental properties, the stock of rentals has been rising in two areas over the last 15-20 years: single-family homes, and multi-family buildings with 20 or more units. These changes represent a shift in the rental housing market away from individual landlords and toward corporate ownership of rentals. In the area of single-family homes, for example, a number of institutional investors bought houses as rental properties in the aftermath of the drop in housing prices around 2010. The report notes:
Ownership of rental housing shifted noticeably between 2001 and 2015, with institutional owners such as LLCs, LLPs, and REITs accounting for a growing share of the stock. Meanwhile, individual ownership fell across rental properties of all sizes, but especially among buildings with 5–24 units. Indeed, the share of mid-sized apartment properties owned by individuals dropped from nearly two-thirds in 2001 to about two-fifths in 2015. Given that units in these structures are generally older and have relatively low rents, institutional investors may consider them prime candidates for purchase and upgrading. These changes in ownership have thus helped to keep rents on the climb.
Another shift is that many renters seem happier being renters, and less likely to view a rental as a short-term stop on the path to homeownership. Renters are staying in place longer, too. The report notes:
Changes in attitudes toward homeownership may lead some households to continue to rent later in life. The latest Freddie Mac Survey of Homeowners and Renters reports that the share of genX renters (aged 39–54 in 2019) with no interest in ever owning homes rose from 10 percent in March 2017 to 17 percent in April 2019. ... Fully 75 percent of renters overall, and 72 percent of genX renters, stated that renting best fits their current lifestyle. ...
[M]any renters are staying in the same rental units for longer periods. Between 2008 and 2018, the share of renters that had lived in their units for at least two years increased from 36 percent to 41 percent among those under age 35, and from 62 percent to 68 percent among those aged 35–64. Similarly, the National Apartment Association reported a turnover rate of just 46.8 percent in 2018— the lowest rate of move-outs since the survey began in 2000.
The US rate of homeownership has often been in the range of 63-65%, going up above that range during the housing boom around 2006, back down after that, and then rebounding a bit in the last few years.  Looking at long-run trends of aging, marriage/parenthood, and income, the US Department of Housing and Urban Development organized a pro-and-con symposium a few years ago on the question of whether the US homeownership rate will have fallen to less than 50% by 2050. Homeownership rates for young adults and for blacks are especially low. The US rate of homeownership was about average by international standards 20-25 years ago, but now is below the average. For earlier posts on these themes, see:

With regard to the broader social issue of rental prices being so high for so many people, the economic answer is straightforward. For those with very low incomes, help them afford the rent. But for the market as a whole, the way to get lower prices is to raise supply. For example, it's an interesting question as to why the individual landlord has been in such decline, and the extent to which this drop has been due to additional administrative, regulatory, and zoning costs being imposed at the state and local level. It seems to me possible that we are in the middle of a social shift in which many households at a variety of income levels put less emphasis on homeownership--which in turn means greater public attention to conditions of supply and demand in housing rental markets.

Tuesday, February 18, 2020

The Herfindahl-Hirschman Index: Story, Primer, Alternatives

It seems clear that the concept of what is now called the Herfindahl-Hirschman Index originated in 1945 with Albert O. Hirschman, who may be best remembered today for his 1970 book Exit, Voice, and Loyalty, discussing the options available to a dissatisfied group member. However, the concept was later attributed to Orris Herfindahl, who wrote five years later in 1950, and further confusion arose when it was sometimes referred to as a Gini index. Here's a primer and the story.

The HHI, as it is often abbreviated, is a way of measuring industry concentration that is taught in every intro econ textbook. Assume that you have an industry where one big company has 50% of sales, three companies have 10% each, and 20 companies have 1% each. How can a researcher sum up the degree of concentration in this industry in a single number? 

One common approach is to use a "concentration ratio." Pick a number of firms, like the top 4 or the top 8 in an industry. Add up their market shares. Thus, the 4-firm concentration ratio in this example would be 80% and the 8-firm concentration ratio would be 84%.

But this concentration ratio approach has an obvious problem. An industry where the four top firms each had 20% of the market would have the same 4-firm concentration ratio of 80% as the example above. An industry where the top eight firms each had 10.5% of the market would have the same 8-firm concentration ratio of 84%. It would seem odd to say that these counterexamples, with a number of firms of roughly equivalent size, have the same concentration as the original example, where the largest firm has a full half of the market. 
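The ambiguity is easy to see in a few lines of code. This is just an illustrative sketch (the function name and example markets are mine, not from any standard library):

```python
def concentration_ratio(shares, k):
    """Sum of the k largest market shares, with shares given in percent."""
    return sum(sorted(shares, reverse=True)[:k])

# Original example: one firm at 50%, three firms at 10%, twenty firms at 1%
market_a = [50] + [10] * 3 + [1] * 20
# Counterexample: four equal firms at 20% each
market_b = [20] * 4

print(concentration_ratio(market_a, 4))  # 80
print(concentration_ratio(market_b, 4))  # 80 -- same ratio, very different structure
```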

Thus, the HHI uses a different calculation: square the market share of each firm, then add up the squares. In the original example, the HHI would be (50)² + 3 × (10)² + 20 × (1)² = 2,820. The maximum value for an HHI is 10,000, for a single firm with 100% of the market. An industry with a large number of very small firms that each have less than 1% of the market could have an HHI lower than 100. (In some cases, the HHI is described on a scale from 0 to 1, instead of 0 to 10,000; these are the numbers you get if the market shares are expressed as decimals before being squared--thus, a 50% market share squared would be .25, not 2,500.)
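For comparison, here is the same pair of markets run through the HHI. Again a minimal sketch, with names of my own choosing:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares.
    With shares in percent, the maximum is 10,000 (a pure monopoly)."""
    return sum(s ** 2 for s in shares)

market_a = [50] + [10] * 3 + [1] * 20   # one dominant firm
market_b = [20] * 4                      # four equal firms

print(hhi(market_a))  # 2820
print(hhi(market_b))  # 1600 -- the HHI separates markets the 4-firm ratio treats alike

# On the 0-to-1 scale: express shares as decimals before squaring
print(hhi([s / 100 for s in market_a]))  # about 0.282
```

The 4-firm concentration ratio was 80% for both of these markets, while the HHI registers the dominant-firm market as far more concentrated.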

The idea of measuring industry concentration in this way originated with Albert O. Hirschman in his 1945 book National Power and the Structure of Foreign Trade. There were already ways of measuring concentration, with the Lorenz curve and the Gini coefficient for measuring inequality of income especially well-known. But as Hirschman points out (p. 158):
In various instances, however, the number of elements in a series the concentration of which is being measured is an important consideration. This is so whenever concentration means "control by the few," i.e., particularly in connection with market phenomena. Control of an industry by few producers can be brought about by an inequality of distribution of the individual output shares when there are many producers or by the fact that only few producers exist. One of the well-known conditions of perfect competition is that no individual seller should command an important share of the total market supply; this condition implies the presence of both relative equality of distribution and of large numbers. 
To put this point a little differently, imagine a market of equal-sized producers--maybe a small number like two or three or four, or a large number like 100 or 1,000. A measure of equality like those used for income would point out that all firms are of equal size. In contrast, a measure of concentration would emphasize that the number of firms matters: four firms means more competition than two, and 100 firms means more competition than four. By squaring the market shares, Hirschman's measure gave greater weight to larger firms, emphasizing the idea that when it comes to concentration of an industry, large firms matter more.
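This point falls straight out of the formula: with n equal-sized firms, each share is 100/n percent, so the HHI is n × (100/n)² = 10,000/n, which falls as the number of firms grows. A quick check, reusing the same illustrative sketch:

```python
def hhi(shares):
    """Sum of squared market shares (shares in percent)."""
    return sum(s ** 2 for s in shares)

# With n equal-sized firms, the HHI works out to 10,000/n:
# more firms means lower measured concentration.
for n in (2, 4, 100):
    print(n, hhi([100 / n] * n))  # 2 -> 5000.0, 4 -> 2500.0, 100 -> 100.0
```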

In these ways, Hirschman's proposed measure of industry concentration was fundamentally different from the common measures of income equality. In fact, it was such a good idea that it was reinvented five years later by Orris C. Herfindahl in his 1950 PhD dissertation, Concentration in the U.S. Steel Industry. Herfindahl mentions Hirschman's earlier work in a footnote.

There were surface differences between the Hirschman and Herfindahl measures. Hirschman's study was looking at concentrations of exports and imports of countries, both according to sources and destinations of international trade, while Herfindahl was applying the measure to the US steel industry. In addition, Herfindahl used (essentially) the measure described briefly above, while Hirschman took the square root of that measure.

Since Hirschman's work came first, one might have expected the index to carry his name. But that's not how it evolved. Gideon Rosenbluth wrote a chapter called "Measures of Concentration," which appeared in a 1955 NBER conference volume called Business Concentration and Price Policy (pp. 57-95). Rosenbluth wrote in 1955:
But summary measures can be devised to measure concentration, just as they have been developed for other characteristics of size distributions. An ingenious measure of this type has been employed by O. C. Herfindahl in an investigation of concentration in the steel industry. It consists of the sum of squares of firm sizes, all measured as percentages of total industry size. This index is equal to the reciprocal of the number of firms if all firms are of the same size, and reaches its maximum value of unity when there is only one firm in the industry.
But a few years later, in a 1961 essay, "Remarks," in Die Konzentration in der Wirtschaft, Schriften des Vereins für Sozialpolitik (New Series, Vol. 22, pp. 391-92), Rosenbluth wrote:
The first point I want to make causes me some embarrassment. There is a good deal of discussion in the background material about "Herfindahl's Index." Actually, it is a mistake to ascribe this index to Herfindahl, and I believe my paper on measures of concentration, published in 1955, is the source of this mistake. I discovered later that the man who first proposed this index was Albert O. Hirschman in his book "National Power and the Structure of Foreign Trade," published by the University of California Press in 1945. Hirschman actually proposed the square root of what I call Herfindahl's Index, since this gives a more even distribution of values. 
Hirschman made an attempt to lay out this chronology in a short note appearing in the American Economic Review in 1964 (54: 5, September, p. 761). He points out that in a number of recent papers, the index was being referred to as a "Gini index," although he had made some effort back in 1945, along with Herfindahl and Rosenbluth in later work, to be clear that it was not an index of equality. Hirschman writes: "Upon devising the index I went carefully through the relevant literature because I strongly suspected that so simple a measure might already have occurred to someone. But no prior inventor was to be found." He also points out that Rosenbluth had originally attributed the index to Herfindahl.  Hirschman concludes on a wry note: "The net result is that my index is named either after Gini who did not invent it at all or after Herfindahl who reinvented it. Well, it's a cruel world."

In the world of economics, this problem of attribution is sometimes called Stigler's law: "No scientific discovery is named after its original discoverer." Of course, Steve Stigler was quick to point out in his 1980 article that he didn't discover his own law, either! In this case, it does not seem to me a grievous miscarriage of justice to have the names of both Hirschman and Herfindahl on the index, although Hirschman should probably come first.

Those who have read this far are probably the kind of people who would be interested in knowing that the justification and analysis of concentration indexes is an ongoing task. A useful starting point for getting up to speed is the NBER working paper by Paolo M. Adajar, Ernst R. Berndt, and Rena M. Conti, "The Surprising Hybrid Pedigree of Measures of Diversity and Economic Concentration" (November 2019, #26512).
The characterization of industry structure and industry concentration has long been a task facing empirical economic researchers, for it is widely believed that market structure, market behavior and various market performance outcomes are important interrelated phenomena. Although a number of alternative measures of market concentration are commonly used, such as the k‐firm concentration measure and the Herfindahl‐Hirschman index (HHI), their foundations in economic theory and statistics are limited and have not been developed extensively, leaving their unqualified use as measures of market power potentially vulnerable to the criticism of “measurement without theory”.
For example, perhaps it makes sense at some intuitive level to give greater weight to the market share of large firms when measuring concentration. But why square the market shares? Why not adjust them in some other way? Indeed, there is a set of alternative concentration measures using different weights going back to the work of Gideon Rosenbluth, and known as Rosenbluth/Hall‐Tideman (RHT) metrics. Adajar, Berndt, and Conti offer an analytical basis for the idea that squaring the market shares makes sense, based on a conceptually similar diversity measure from ecology. They write:
In this paper, we have traced the pedigree of the much‐used Herfindahl‐Hirschman (HHI) economic concentration index to the Simpson Index of diversity originally developed in ecology, where an identical calculation to the HHI is interpreted as the probability of two organisms randomly selected from a sample habitat belonging to the same species (analogous in economics to the probability a pair of randomly and independently selected products are being marketed by the same manufacturer). This probabilistic foundation of the HHI to some extent shields it from the allegation that the sum of squared shares calculation is arbitrary and unscientific, even as its links to market power and antitrust competition analysis remain ambiguous. 
For those wanting to dig deeper into alternative indexes of concentration, Adajar, Berndt, and Conti write:
We have also considered alternative proposed measures of concentrations, some of them mathematical generalizations of the HHI, others such as entropy originating from information theory in engineering and physics, another set that is developed axiomatically, and still others incorporating related concepts such as inequality and absolute population size. We have considered computational and interpretability aspects of the various concentration measures, and noted the extent to which they incorporate considerations not only of relative inequality such as the Gini coefficient and Lorenz curve, but also of absolute population size. 
Other things equal, markets with a large number of competitors suggest barriers to entry are limited, and therefore such markets could plausibly be expected to be competitive. Therefore, to economists, concentration metrics incorporating both variability/relative inequality and absolute population size considerations are preferable, for if one believes that economic performance outcomes depend not only on relative sizes but also on the absolute number of competitors in a market, then one prefers a concentration measure that incorporates both features. The existing economic literature comparing the various concentration metrics on a priori statistical and axiomatic criteria appears to view the HHI and the closely related Rosenbluth/Hall‐Tideman (RHT) metrics most favorably. Choice between these two measures on a priori grounds is indeterminate, since the choice involves selection of weights and is therefore similar to choice among alternative index number formulas in economic index number theory.

Monday, February 17, 2020

Writing the Intro to Your Economics Research Paper

If you do academic research, whether in economics or other fields, you need to give an honest answer to a basic question: "Do you want readers for your research?" If the answer is "no," then read no further. If the answer is "yes," then you should probably be thinking and working considerably harder on the introduction to your paper. Barney Kilgore, a famous editor of the Wall Street Journal back in the 1950s and 1960s, posted a motto in his office: “The easiest thing in the world for a reader to do is to stop reading.” If the intro doesn't make readers want to proceed, they will often take the easy course and turn to something else. 

Several writers of economics blogs have emphasized this theme recently. 

At the Center for Global Development blog, David Evans wrote "How to Write the Introduction of Your Development Economics Paper" (February 10, 2020). Evans writes:
You win or lose your readers with the introduction of your economics paper. Your title and your abstract should convince people to read your introduction. Research shows that economics papers with more readable introductions get cited more. The introduction is your opportunity to lay out your research question, your empirical strategy, your findings, and why it matters. Succinctly. ...
Invest in your introduction. One reason that so many introductions in top journals have a similar pattern is that it’s clear: you tell the reader why the issue you studied is important, you tell them what you did, you tell them what you learned, and you tell them how it builds on what we already knew. You might tell them how it relates to policy or what the limitations of your work are. Interested readers can dive into the details of the paper, but good introductions give casual readers a clear sense of what they’ll get out of your paper. Your introduction is your kingdom. Rule it well.
Evans looks at 15 recent economic development papers published in prominent journals and discusses the ways in which  their introductions have a common pattern:  
  1. Motivate with a puzzle or a problem (1–2 paragraphs)
  2. Clearly state your research question (1 paragraph)
  3. Empirical approach (1 paragraph)
  4. Detailed results (3–4 paragraphs)
  5. Value-added relative to related literature (1–3 paragraphs)
  6. Optional paragraphs: robustness checks, policy relevance, limitations
  7. Roadmap (1 paragraph)
Evans also points to a couple of other recent discussions of introductions in economic research. For example, Keith Head presents his own view of "The Introduction Formula," which starts like this:
1. Hook: Attract the reader’s interest by telling them that this paper relates to something interesting. What makes a topic interesting? Some combination of the following attributes makes Y something worth looking at.
  • Y matters: When Y rises or falls, people are hurt or helped.
  • Y is puzzling: it defies easy explanation.
  • Y is controversial: some argue one thing while others say another.
  • Y is big (like the service sector) or common (like traffic jams).
Things to avoid:
  • The bait and switch: promising an interesting topic but delivering something else, in particular, something boring.
  • “all my friends are doing it” : presenting no other motivation for a topic than that other people have written papers on it.
2) Question: Tell the reader what this paper actually does. Think of this as the point in a trial where having detailed the crime, you now identify a perpetrator and promise to provide a persuasive case. The reader should have an idea of a clean research question that will have a more or less satisfactory answer by the end of the paper. Examples follow below. The question may take two paragraphs. At the end of the first (2nd paragraph of the paper) or possibly beginning of the second (3rd paragraph overall) you should have the “This paper addresses the question” sentence.
Claudia Sahm at the Macromom blog spent last fall reading job market papers, and gives vent to her reactions in "We need to talk MORE ..." (September 19, 2019). 
This post is for job market candidates. You need to spend more time editing your abstract and introduction. It will be worth more than your fourth robustness check. Promise. ... Sadly, it is clear that economics departments and dissertation committees are NOT teaching their doctoral students how to communicate their research. ... EVERY job market paper I read lacked a well-structured, well-written introduction and abstract. Many of these papers are from top schools and from native English speakers.
Sahm offers an intro structure as well, closely related to the others. She begins this way:

Structure of Introduction (in order):


1) Motivation (1 paragraph)
  • Must be about the economics.
  • NEVER start with literature or new technique (unless econometrics).
  • Be specific and motivate YOUR research question.
2) Research question (1 paragraph)
  • Lead with YOUR question.
  • THEN set YOUR question within most relevant literature.
  • My favorite is an actual question: “My paper answers the question …”
  • Popular and acceptable: “My paper [studies/quantifies/evaluates/etc] …”
3) Main contribution (2-3 paragraphs, one for each contribution)
  • YOUR main contribution:
      • MUST be about new economic knowledge.
      • Lead with YOUR work, then how it extends the literature.
  • New model, new data, new method, etc.:
      • Can be second or third contribution.
      • Tools are important, not most important.
  • Each paragraph begins with a sentence stating one of YOUR contributions.
  • THEN follow with three or four sentences setting YOUR contribution in literature.
  • Most important should be first (preferred) or last (sometimes most logical).
  • YOUR contributions are very important. Make them clear, compelling, and correct.
These posts caught my eye in part because they are a theme I have also tried to emphasize when talking about writing. A substantial part of my value-added as Managing Editor of the Journal of Economic Perspectives is to sharpen up the introductions of papers. Most of the time, all the ingredients for a strong introduction are already there. But it's not unusual for an excellent lead-in or "hook" to be buried several pages into the paper, or even at the start of the conclusion, rather than right up front. It's not unusual to have intros that are either so long that only the author's parents will persevere to the end, or so short that the reader might just as well flip to a random page in the middle of the essay and start there. 

Here's a quote from an essay of my own, "From the Desk of the Managing Editor," written on the occasion of the 100th issue of the Journal of Economic Perspectives back in Spring 2012. I wrote:   
Invest more time in the stepping-stones of exposition: introductions, opening paragraphs of sections, and conclusions. Introductions of papers are worth four times as much effort as they usually receive. The opening paragraph of each main section of a paper is worth three times as much effort as it usually receives. Conclusions are worth twice as much effort as they usually receive. This recommendation emphatically does not call for long introductions with a blow-by-blow overview of each subsection of the paper to come. It doesn’t mean repeating the same topic sentences over and over again, in introduction and section headings and conclusion. It means making a genuine effort to attract the attention of the reader and let the reader know what is at stake up front, to signpost the argument as it develops, and to tell the reader the state of the argument at the end.

Friday, February 14, 2020

Telephone Switchboard Operators: Rise and Fall

In 1950, there were 342,000 telephone switchboard operators working for the Bell Telephone System and independent phone companies, as well as another 1 million or so telephone switchboard operators who worked at private locations like office buildings, factories, hotels, and apartment buildings. Almost all of these switchboard operators were female. To put it another way, about one out of every 13 working women in 1950 was a telephone operator. But by 1984, national employment as an operator in the telecommunications industry was down to 40,000, and now it's less than 2,000 (according to the Bureau of Labor Statistics). 

David A. Price sketches the history of this rise and fall in "Goodbye, Operator," appearing in Econ Focus (Federal Reserve Bank of Richmond, Fourth Quarter 2019, pp. 18-20). The story provokes some thoughts about the interaction of workers with new and evolving technologies. 

For more than a half-century from the late 19th century up to 1950, technology was creating jobs as telephone operators. From the phone company point of view, customers needed personal assistance and support if they were to incorporate this new technology into their lives. The workers with what we would now call the "soft skills" to provide this interface between technology and customers were reasonably well-rewarded. Price writes:
In the early decades of the industry, telephone companies regarded their business less as a utility and more as a personal service. The telephone operator was central to this idea, acting as an early version of an intelligent assistant with voice recognition capabilities. She got to know her 50 to 100 assigned customers by name and knew their needs. If a party didn't answer, she would try to find him or her around town. If that didn't succeed, she took a message and called the party again later to pass the message along. She made wake-up calls and gave the time, weather, and sports scores. During crimes in progress or medical emergencies, a subscriber needed only to pick up the handset and the operator would summon the police or doctors. ...

While operators were not highly paid, the need to attract and retain capable women from the middle classes led telephone companies to be benevolent employers by the standards of the day — and in some respects, of any day. Around the turn of the century, the companies catered to their operators with libraries, athletic clubs, free lunches, and disability plans. Operators took their breaks in tastefully appointed, parlor-like break rooms, some with armchairs, couches, magazines, and newspapers. At some exchanges, the companies provided the operators with a community garden in which they could grow flowers or vegetables. In large cities, company-owned dormitories were offered to night-shift operators.
But even as the number of telephone operator jobs was growing rapidly, the job of being a telephone operator evolved dramatically. By 1950, the hyper-personal touch seems to have greatly diminished, and the telephone operator skills involved being able to handle "the board," which involved plugging and unplugging several hundred connections per hour.

Looking back, the slow diffusion of automatic telephone switching technology seems a little puzzling. One reason is that digital technology differs in some fundamental ways from the earlier methods of automation. It's a standard story that the switchboard operators were replaced by automation. But why weren't they replaced by automation much earlier? Part of the answer seems to be that the automated telephone-switching systems in the first half of the 20th century did not actually display economies of scale. Price writes: 
With the electromechanical systems of the day, each additional customer was more, not less, expensive. Economies of scale weren't in the picture. To oversimplify somewhat, a network with eight customers needed eight times eight, or 64, interconnections; a network with nine needed 81. "You were actually getting increasing unit costs as the scope of the network increased," says Mueller. "You didn't get entirely out of the telephone scaling problem until digital switching in the 1960s."
This pattern of technology led to a situation where small-scale independent phone companies were more likely to use automated switching in the early part of the 20th century, while the giant Bell company continued to rely heavily on combinations of automatic switching with oversight from human switchboard operators--especially for long-distance calls.
More broadly, diffusion of technology is important in many contexts. Some well-known historical examples of important technologies that diffused slowly, over decades, include tractors and electricity. In the modern economy, a prominent pattern across many industries is that a few leading "superstar" firms are jumping farther ahead in terms of productivity, and their example of how to achieve such productivity gains is apparently not diffusing as quickly to other firms. There's an old economic lesson here, which is that for purposes of economic growth, just inventing a new technology is not enough: instead, many participants in the economy need to find ways to change their behavior in both simple and more fundamental ways to take full advantage of that technology. 

Back in 1964, even knowledgeable industry observers thought that the decline in telephone operators from about 1950 to 1960 was a one-time and temporary shift. Elizabeth Faulkner Baker wrote in  her 1964 book, Technology and Women's Work:
In sum, it is possible that the decline in the relative importance of telephone operators may be nearing an end. It seems that in the foreseeable future no machines will be devised that can completely handle person-to-person calls, credit-card calls, emergency calls, information calls, transient calls, messenger calls, marine and mobile calls, civilian defense calls, conference calls, and coin-box long-distance calls. Indeed, although an executive vice-president of the American Telephone and Telegraph Company has said that the number of dial telephones will reach almost 100 percent in the next few years and that there will be an increasing amount of customer dialing of long-distance calls: "Yet we will still need about the same number of operators we need now, perhaps more."
Again the underlying notion was that the job of telephone operator would evolve, but that there would still be a need for people who could make telecommunications technology easier for customers to use. When it comes to the specific job of telephone operator, this prediction was clearly off-base. (Although as a college student in the late 1970s and early 1980s, I remember the days when if you really needed to call home, you could just grab a public phone, dial zero for "operator," and be answered by a person, to whom you would recite your home phone number and request a collect call.) But when thinking more broadly about the interaction between workers and technology, the central question remains: what areas now and in the future will continue to benefit from human support at the interface between new technologies and ultimate users?