Tuesday, November 24, 2015

The Economics of the Retail Sector

Lots of economic analysis focuses on production, or on consumption. But there is less focus on the economic characteristics of what happens in between production and consumption--which is called the retail sector. In the Fall 2015 issue of the Journal of Economic Perspectives, Ali Hortaçsu and Chad Syverson look at "The Ongoing Evolution of US Retail: A Format Tug-of-War," while Bart J. Bronnenberg and Paul B. Ellickson take an international view in "Adolescence and the Path to Maturity in Global Retail." (Full disclosure: I've been Managing Editor of JEP since the first issue in 1987.)

Broadly understood, the retail sector includes all of the activities between the producer and the consumer: purchases by a wholesaler, the transportation costs of shipping to warehouses, the costs of holding that inventory for a time, the transportation costs of shipping to the retailer, and the costs of retailing itself, including physical facilities and inventory. The evolution of retailing means that these costs are reshuffled among different players. For example, if I order a giant bulk-pack of paper towels from an e-retailer, and then store it in the basement and use it for several months, I am bearing some of the storage and inventory costs that would otherwise be carried by a brick-and-mortar retailer. However, the transportation costs of delivering that bulk-pack from the e-retailer involve a relatively small shipment in a delivery van from a warehouse to my house. The transportation costs of buying it at a warehouse store involve a large truck shipment to the store, my own time and energy to pick up the bulk-pack and take it through check-out, and then the use of my own car to complete the delivery to my home. In various ways, the economics of retail involves issues of coordination, inventory-holding, and economies of scale, as well as questions of how much variety is provided.

Hortaçsu and Syverson point out that the US retail sector accounts for about 11% of all jobs, and about 6% of the economy, as measured by the economic value-added of the sector. If one measures productivity by value-added per employee, then productivity is relatively low in the retail sector, which helps to explain why the average job in retail is relatively low-paid.


The story in US retail over the last few decades is the appearance of two new sets of players, each with a powerful gravitational pull that has greatly disrupted traditional retailers. One new set of players is the big-box retailers, sometimes known as "warehouse clubs" and "supercenters," led by Walmart but also including Costco, Target, and others. The other new set is the e-commerce retailers, led by Amazon and eBay, and including many others. In different ways, both of these new sets of players are driven by changes in information technology. For the big-box retailers, information technology is how they manage their huge set of suppliers and their inventories, allowing them to take advantage of economies of scale and scope. For the e-commerce retailers, information technology creates their virtual stores, ties them to their customers, and coordinates their shipping and billing. Hortaçsu and Syverson summarize the situation in US retail in this way:

"One can imagine the future of the retail sector as being pulled in one direction by the growth of e-commerce, which involves smaller employment firms, less market concentration, more geographical dispersion, and higher productivity. At the same time, the sector is being pulled in another direction by the warehouse clubs and supercenters, with higher employment firms, very high market concentration, location near population centers, and lower productivity relative to online channels. While warehouse clubs/supercenters have had more influence on the sector to this point, e-commerce has had its own effects and may be growing in relative importance. Perhaps this concurrent expansion and strength of e-commerce and a physical format portends a retail future not dominated by either, but rather with a substantial role for a “bricks-and-clicks” hybrid. The formats may end up being as much complements as substitutes, with online technologies specializing in product search and discovery, and physical locations facilitating consumers’ testing, purchase, and returns of products ...."
The international view of retail in the article by Bronnenberg and Ellickson shows a related transformation of retail happening at different speeds and in different ways around the world. They emphasize a broad view of retailing that includes potential roles for customers and government, as well as for retail firms themselves. For example, if customers have cars for transporting goods, spacious living accommodations for storage, and sufficient income, they will be more likely to buy bulk-packs of goods from warehouse retailers. The value of consumers' time also affects retail: scarce time encourages, for example, e-commerce purchases that will be delivered.

A number of government policies affect retailing, including the road infrastructure, but also "the ease of obtaining building permits, the regulation of corruption, the availability of autos (through policies allowing the imports of used cars), and the minimum wage structure. ... In many emerging markets, another way in which government affects the retail sector lies in its ability to set policies regarding foreign direct investment."

Of course, large firms also play a role in what they call "modern retailing." They write:
"Firms are clearly the foremost strategic players driving the adoption of modern retailing technology. A modern chain of vertically integrated, large-format stores relies on an upstream distribution system of local producers, third-party logistics firms, and either third-party or integrated wholesalers who must all modernize together. Transactions that were often historically informal must be formalized through contracts with local suppliers and intermediaries. In a case study of Chile, Berdegué (2001) found that small farming cooperatives had to incur significant costs to deliver products of homogeneous quality, to coordinate harvest cycles, and to grade, sort, and package in a manner that met the downstream chain’s requirements. Also, adopting formal accounting processes makes previously informal transactions subject to taxes. ... Among the toughest coordination problems is the joint adoption of commonly used technology."
Overall: 
"In developed markets, the transition to modern retailing is nearly complete. In contrast, many low-income and emerging markets continue to rely on traditional retail formats, that is, a collection of independent stores and open air markets supplied by small-scale wholesalers, although modern retail has begun to spread to these markets as well. ... E-commerce is a notable exception: the penetration of e-commerce in China and several developing nations in Asia has already surpassed that of high-income countries for some types of consumer goods."
This graph shows the takeoff in e-commerce in China, as measured by retail in the two sectors of "Apparel/Footwear" and "Electronics/Appliances."

Bronnenberg and Ellickson agree that while e-commerce is going to be a big player in the future, it's not going to take over retail in general. Warehouses and supercenters will still play a large role, quite likely the dominant role, for some time to come. Other kinds of niche retailers--for example, those who specialize in a certain product, or those located in urban areas where huge store-spaces and parking places aren't available--will also play a role. In thinking about the tradeoffs of the various kinds of retail, they write: 
"Online purchases have benefits and costs that vary by product category. For example, online purchase of physical goods introduces a delay between purchase and delivery, but also gives consumers a greater opportunity to comparison-shop by lowering search costs and travel time and provides a seamless method of gathering information on the experience of previous customers (through online reviews). On the other hand, online retail offers less ability to inspect goods before purchase (and adds the risk of not having a product delivered at all), which renders the reputation of the firm all the more important. Whether a purchase is made online or in-store clearly depends on the frequency of purchase, the homogeneity of the product, and the number of products typically purchased in a given occasion, amongst other factors. Books fall at one end of this spectrum, and thus, in modern retailing systems, are primarily bought online, while groceries fall at the other end, and are typically bought in-store."




Monday, November 23, 2015

The Size of Automatic Stabilizers in the US Budget

The notions that bigger budget deficits (or smaller surpluses) can help to stimulate an economy in recession, and that smaller budget deficits (or bigger surpluses) can help to prevent an economy from being overstimulated into inflation, are the core ideas of countercyclical fiscal policy. As every intro econ textbook points out, this countercyclical fiscal policy can be "automatic" or "discretionary."

Discretionary fiscal policy is perhaps easier to understand: for example, it's when government passes new laws to raise spending or cut taxes in a recession. But automatic countercyclical fiscal policy--also called "automatic stabilizers"--happens without any new legislation being passed. When the economy heads south so that incomes and profits fall, less in taxes is collected automatically, with no need for new legislation. When the economy booms so that incomes rise, there is automatically less need for government programs like Medicaid or welfare payments, again with no need for new legislation.

How big are the automatic stabilizers? Frank Russek and Kim Kowalewski offer some estimates, along with lots of detail about how these calculations are made, in "How CBO Estimates Automatic Stabilizers," published in November 2015 as Congressional Budget Office Working Paper 2015-07. They write:
Most types of revenues—mainly personal, corporate, and social insurance taxes—are sensitive to the business cycle and account for most of the value of the automatic stabilizers. A relatively small part of total outlays—those for the programs that are intended to support people’s income and have a cyclical component—contribute to the value of the automatic stabilizers; those benefits include ones from unemployment insurance, Medicaid, and SNAP (the Supplemental Nutrition Assistance Program). The automatic stabilizers do not include discretionary spending because that spending (which requires legislation) is not automatic or interest payments because those outlays are not designed to provide income support. CBO’s estimates of the automatic stabilizers are based on the estimated cyclical elements of those revenues and outlays. The magnitude of the automatic stabilizers is zero when the economy is operating at its potential and grows as the economy operates further away from its potential.
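The logic CBO describes can be sketched in a few lines of code: the cyclical part of revenues is roughly the output gap scaled by a revenue elasticity, and it is zero when the economy is at its potential. This is only an illustrative toy with made-up numbers, assuming a hypothetical elasticity value; it is not CBO's actual method or figures.

```python
def cyclical_revenue(actual_gdp, potential_gdp, revenue, elasticity=1.5):
    """Illustrative estimate of the cyclical part of tax revenue:
    the output gap (as a share of potential) scaled by an assumed
    revenue elasticity. Zero when the economy is at potential, and
    growing as the economy moves further from potential."""
    output_gap = (actual_gdp - potential_gdp) / potential_gdp
    return revenue * elasticity * output_gap

# At potential, the cyclical component is zero
print(cyclical_revenue(actual_gdp=100, potential_gdp=100, revenue=18))

# An economy 4% below potential: revenues run below their structural level
print(cyclical_revenue(actual_gdp=96, potential_gdp=100, revenue=18))
```

The sign works the way the automatic-stabilizer story requires: revenues fall short of their structural level in a downturn, widening the deficit without any new legislation.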
To get a sense of the size of automatic stabilizers in the US economy, here are a few figures. The first shows how automatic stabilizers on the revenue side affect federal budget deficits. The second shows how automatic stabilizers on the spending side affect budget deficits.

A few patterns emerge from these figures. The timing of the automatic stabilizers is about right: for example, in the most recent recessions in 2001 and in 2007-2009, one can see the automatic drop in tax revenues and the automatic rise in spending. The automatic changes in tax revenues are typically larger than the spending changes. And taken together, the effects of automatic stabilizers are substantial, combining in the 2007-2009 recession to equal about 3% of GDP.

If you combine the automatic stabilizers on the revenue and spending sides and put them on a graph with actual deficits, here's what it looks like. The light blue line shows how the budget deficit actually changed, including both automatic stabilizers and discretionary changes. The dark blue line shows how the deficit would have changed with the automatic stabilizers subtracted out. Clearly, discretionary policies have a bigger effect on overall deficits than do automatic stabilizers. But in helping to counterbalance economic swings, the automatic stabilizers remain useful.

The existence of automatic stabilizers is one reason why it would be foolish to require that the federal budget be balanced in every year. Think about it: a recession arrives, and so tax revenues automatically fall and spending in recession-related categories automatically rises. A true believer that the budget should be balanced each year would have to argue that in the face of that recession, taxes should be hiked and spending cut to offset the changes of the automatic stabilizers.

Friday, November 20, 2015

Refugees, Displaced, Resettled: Some Global Snapshots

Each year the United Nations High Commissioner for Refugees publishes a "Global Trends" report. The report for 2014, published in June 2015, was titled: "World at War: Forced Displacement in 2014." The report doesn't have much to say about policy details: it's focused mainly on describing the scope of the problem. It's perhaps useful to lay out some terms that are used in specific ways in the report, although they are often used more-or-less interchangeably in media reports and conversation.

Displaced persons is an overall category for those who have been forced to move "as a result of persecution, conflict, generalized violence, or human rights violations." The UNHCR report notes:
"The year 2014 has seen continuing dramatic growth in mass displacement from wars and conflict, once again reaching levels unprecedented in recent history. One year ago, UNHCR announced that worldwide forced displacement numbers had reached 51.2 million, a level not previously seen in the post-World War II era. Twelve months later, this figure has grown to a staggering 59.5 million, roughly equalling the population of Italy or the United Kingdom. Persecution, conflict, generalized violence, and human rights violations have formed a ‘nation of the displaced’ that, if they were a country, would make up the 24th largest in the world."

According to the report, the total of 59.5 million displaced people can then be divided up into three groups: 19.5 million refugees, 38.2 million internally displaced persons, and 1.8 million asylum-seekers. This total does not include as many as 10 million "stateless" persons, who are living in a country but not recognized as citizens of that country--or of any other country. According to a 2014 UNHCR report, some examples of the stateless include
 More than two decades after the disintegration of the Soviet Union, over 600,000 people remain stateless. Some 300,000 Urdu-speaking Biharis were denied citizenship by the government of Bangladesh when the country gained its independence in 1971. A 2013 Constitutional Court ruling in the Dominican Republic led to tens of thousands of Dominicans, the vast majority of Haitian descent, being deprived of their nationality, and of the rights that flowed from it. More than 800,000 Rohingya in Myanmar have been refused nationality under the 1982 citizenship law and their freedom of movement, religion and education severely curtailed.
Refugees are defined in this way: "According to the 1951 Refugee Convention, a refugee is a person who is outside the country of his or her nationality and is unable or unwilling to avail him- or herself of the protection of that country because of a well-founded fear of being persecuted for reasons of “race”, religion, nationality, political opinion or membership of a particular social group in case of return. People fleeing conflicts or generalized violence are also generally considered as refugees, although sometimes under legal mechanisms other than the 1951 Convention."

Of the 19.5 million refugees, 5.1 million are Palestinians. Of the remaining 14.4 million, here are their countries of origin. While the large recent increase in refugees from Syria has moved that country to the top of this list in 2014, the top place on the list for the previous 30 years had been held by Afghanistan. Indeed, there are still 2.6 million refugees from Afghanistan living outside their country, some of whom have now been refugees for more than three decades, since the late 1970s and early 1980s.



Other than the countries on this list: "Other main source countries of refugees were Colombia, Pakistan, and Ukraine. The number of Colombian refugees (360,300) decreased by 36,300 persons compared to the start of the year, mainly as a result of a revision in the number in the Bolivarian Republic of Venezuela. In contrast, figures for both Pakistan and Ukraine increased dramatically. In Pakistan, some 283,500 individuals fled to Afghanistan as armed conflict in their country unfolded during the year; likewise, fighting in eastern Ukraine not only displaced more than 800,000 people within the country but also led to 271,200 persons applying for refugee status or temporary asylum in the Russian Federation."

The goal of the UNHCR is to find a "durable solution" for refugees: return to their original country; local integration into the country where they have ended up, which would eventually involve full legal recognition and citizenship; or resettlement in a different country. Return to the original country has traditionally been the way in which most refugee issues were ultimately resolved. However, in the last few years, return to the original country has diminished.


There doesn't seem to be good data on how many refugees end up being locally integrated in a given year. Such integration is often a slow and evolving process. The category of "resettlement" is what the current US disputes are about. The US has in recent years been the destination for about two-thirds of resettled refugees.
The cumulative number of resettled refugees (900,000) for the past decade is almost at par with the previous decade, 1995-2004 (923,000). Among the 105,200 refugees admitted during the year, Iraqi refugees constituted the largest group (25,800). This was followed by those from Myanmar (17,900), Somalia (11,900), Bhutan (8,200), the Democratic Republic of the Congo (7,100), and the Syrian Arab Republic (6,400). Under its resettlement programme, the United States of America continued to admit the largest number of refugees worldwide. It admitted 73,000 refugees during 2014, more than two-thirds (70%) of total resettlement admissions. Other countries that admitted large numbers of refugees included Canada (12,300), Australia (11,600), Sweden (2,000), Norway (1,300), and Finland (1,100).
This overview of displaced persons, refugees, and resettlement suggests that in a global perspective, US discussions of whether to resettle perhaps 10,000 refugees from Syria are heavier on symbolism and emotion than on addressing the actual underlying humanitarian situation. It won't make much of a dent in the total of 3.8 million refugees from Syria, or the 2.6 million Afghan refugees, or the millions of other refugees. It doesn't start to address the question of whether the US or the international community should seek to address the humanitarian needs of the 38.2 million internally displaced persons, who aren't counted in the total number of refugees.

Along with the immediate issues of how to support refugees and other displaced persons wherever they are, the ultimate question is about what the UNHCR calls the "durable solution." Is the vision here that most of the refugees will ultimately return to their home countries, or not?  The historical presumption has been that most refugees will return to their home countries, and resettlement elsewhere is for a few extreme situations. If there is a shift to a presumption that a high proportion of those refugees will be resettled in high-income countries, then the issue of refugees quickly becomes entangled in broader questions about freedom of international immigration.

Wednesday, November 18, 2015

Remembering Herbert Scarf: 1930-2015

Herbert Scarf, one of the giants of economic theory and operations research, died on November 15. I only met him in passing once or twice, but the Journal of Economic Perspectives (where I have toiled in the fields as Managing Editor since 1987) ran a couple of Scarf-related articles in the Fall 1994 issue. One was an overview of Scarf's career by Kenneth J. Arrow and Timothy J. Kehoe, called "Distinguished Fellow: Herbert Scarf's Contributions to Economics," because in 1991 Scarf was named a "Distinguished Fellow" of the American Economic Association. The other was an article by Scarf himself on one of the many theoretical subjects where his contributions loom large: "The Allocation of Resources in the Presence of Indivisibilities."

The article by Arrow and Kehoe lays out some of Scarf's most prominent work. For example, there is a theory of the optimal holding of inventories called the (S, s) theory: basically, the idea is that firms and stores don't re-order more supplies every day. They re-order in batches. They wait until the quantity on hand falls to some lower level s, and then place an order for a fixed amount which raises the quantity on hand up to the higher level S. How far apart s and S should be will depend on various measures of volatility and risk. The question of when or if (S, s) theory was the right way to think about inventory problems was a hot topic in the 1950s. Scarf provided an answer that was as close to definitive as these things get in economic theory, arguing that it was.
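The reorder rule described above can be written as a few lines of simulation code. This is a hedged sketch with arbitrary demand and parameter values, not anything drawn from Scarf's papers:

```python
import random

def simulate_Ss(s, S, periods=52, seed=0):
    """Simulate an (S, s) inventory policy: when the stock on hand
    falls to the trigger level s or below, re-order a batch that
    restores the stock to the target level S. Returns the number
    of orders placed over the horizon."""
    random.seed(seed)
    stock = S
    orders = 0
    for _ in range(periods):
        demand = random.randint(0, 10)   # random weekly demand
        stock = max(stock - demand, 0)   # sell down; no backorders
        if stock <= s:                   # trigger reached: re-order
            orders += 1
            stock = S                    # restock up to the target
    return orders

# A narrow (S - s) band means small, frequent orders;
# a wide band means large, infrequent batches.
frequent = simulate_Ss(s=20, S=30)
batched = simulate_Ss(s=5, S=60)
```

The gap between S and s is exactly the batch size the theory is about: widening it trades more inventory-holding cost for fewer fixed ordering costs, which is the tradeoff Scarf analyzed.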

It turns out that the (S, s) theory isn't just about business inventories. More broadly, it's a theory about how economic agents make decisions when there are costs of adjustment, which can often lead to long periods where nothing much seems to happen, followed by sharp changes. For example, this pattern often arises in business investment of many kinds, in hiring and firing decisions by firms, in consumer purchases of big durables like cars and houses, and even in small-scale decisions like taking a larger fixed amount of cash out of the ATM, rather than stopping by the machine every time you need $20. It further turns out that these sharp and lumpy changes can be related to overall macroeconomic business cycles.

For those who want an overview of (S, s) theory, useful starting points in JEP would be the article by Andrew Caplin and John Leahy in the Winter 2010 issue, "Economic Theory and the World of Practice: A Celebration of the (S, s) Model," and farther back, the article by Alan S. Blinder and Louis J. Maccini in the Winter 1991 issue, "Taking Stock: A Critical Assessment of Recent Research on Inventories."

Scarf was a major player in many of the central topics of research into economic theory for several decades after the 1950s. Arrow and Kehoe discuss his work on describing the "core" of an economy, on how to calculate a fixed point as the equilibrium of an overall economy (not just an individual market), on how the presence of increasing returns affected the existence of equilibrium, and on other issues. I would only embarrass myself by trying to summarize this work here, but I'll note that while Scarf's work was often deeply technical and mathematical, he also had a gift for suggesting straightforward phrases and analogies that clarified what was at issue.

Scarf's article in the Fall 1994 issue of JEP took up the topic of "indivisibilities," which overlaps with aspects of these issues of determining an optimal outcome in the presence of increasing returns and lumpy choices. Scarf wrote:
I am, I believe, not alone in thinking that the essence of economies of scale in production is the presence of large and significant indivisibilities in production. What I have in mind are assembly lines, bridges, transportation and communication networks, giant presses and complex manufacturing plants, which are available in specific discrete sizes, and whose economic usefulness manifests itself only when the scale of operation is large. If the technology giving rise to a large firm is based on indivisibilities, then this technology can be described by, say, an activity analysis model in which the activity levels referring to indivisible goods are required to assume integral values, like 0, 1, 2, . . . , only. When factor levels are specified and a particular objective function is chosen, we are led directly to that class of difficult optimization problems known as integer programs.
Of course, this problem of considering big lumpy changes is conceptually similar to the inventory problem, which also involved thinking about lumpy changes. Scarf uses a series of numerical examples in JEP to argue that when there are indivisibilities, the optimal answer will be a "neighborhood system"--which is to say that there often is not a single correct answer, but rather a group of closely related possibilities. Here's his conclusion in the JEP article. For those not initiated into economics, it may not carry a lot of meaning. For those of us who have drunk the economics Kool-Aid, it's an example of Scarf's facility for using the language of technical economics with a mixture of concreteness and fluidity that keeps the economic themes front and center:
But let us leave this example with only two discrete choices concerning types of plants, and remember that in a large manufacturing enterprise there will be many discrete choices involving a large menu of tasks and machinery, each of which has its own capacity, set-up cost and marginal cost. The equipment may be placed in a number of different locations on the shop floor; the work may be passed from one piece of machinery to another with complex requirements of scheduling and precedence, and the tasks may alter from one job lot to another as the product specification varies. Demands may be revised capriciously and unexpectedly over time; output may be shipped to many different regions. The enterprise may have a host of competitors or none at all. In the absence of internal market prices, combinatorial arguments and quantity tests are necessary to regulate the flow of activity inside the enterprise in an optimal fashion. 
My message boils down to a simple straightforward piece of advice; if economists are to study economies of scale, and the division of labor in the large firm, the first step is to take our trusty derivatives, pack them up carefully in mothballs and put them away respectfully; they have served us well for many a year. But derivatives are prices, and in the presence of indivisibilities in production, prices simply don't do the jobs that they were meant to do. They do not detect optimality; they aren't useful in comparative statics; and they tell us very little about the organized complexity of the large firm. Neighborhood systems are the discrete approximations to the marginal rates of substitution revealed by prices. They are relatively easy to compute, seem to behave pretty well under continuous changes in the technology, and will ultimately lead to even better algorithms than we have now. 
We know much more about the structure of neighborhood systems than I have been able to describe here—not enough, perhaps, to derive a really satisfactory theory of the internal organization of the large firm at the present time. But my own intuition is that this is an important way to proceed. I am confident that serious, ultimately useful insights about the large firm will eventually be obtained by thinking very hard and long about indivisibilities in production. 
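The integer program Scarf describes, with activity levels required to take whole-number values, can be illustrated with a toy brute-force example. All of the cost and capacity numbers below are invented for illustration; a real problem of this kind would use an integer-programming solver rather than enumeration:

```python
from itertools import product

# Two plant types, available only in whole units (the indivisibility).
# Hypothetical numbers: (setup cost, capacity per plant, marginal cost).
plants = [(100, 40, 2.0), (60, 25, 3.0)]
demand = 100

def total_cost(counts):
    """Cost of building counts[i] plants of each type and meeting
    demand, or infinity if total capacity falls short."""
    capacity = sum(n * cap for n, (_, cap, _) in zip(counts, plants))
    if capacity < demand:
        return float("inf")
    setup = sum(n * s for n, (s, _, _) in zip(counts, plants))
    # Produce at the cheapest marginal cost first, up to each capacity.
    remaining, variable = demand, 0.0
    for n, (_, cap, mc) in sorted(zip(counts, plants), key=lambda t: t[1][2]):
        used = min(remaining, n * cap)
        variable += used * mc
        remaining -= used
    return setup + variable

# Brute force over integer activity levels 0, 1, 2, ... -- a tiny integer program.
best = min(product(range(5), repeat=2), key=total_cost)
```

The point of the exercise matches Scarf's: with indivisible plants, the optimum is found by comparing whole-number combinations against their neighbors, not by setting a derivative to zero, because marginal conditions (prices) do not detect the optimum here.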

Tuesday, November 17, 2015

Why More Humanitarian Aid Should be Given in Cash

My mental image of humanitarian relief workers after a crisis is people who are unloading and handing out supplies. But for some purposes and in some cases, handing out cash might work better. That's the case made in the "Report of the High Level Panel on Humanitarian Cash Transfers," called "Doing Cash Differently: How Cash Transfers Can Transform Humanitarian Aid" (published in September 2015 by the Center for Global Development). The panel includes a selection of academics, representatives from humanitarian relief organizations, and organizations with experience transferring money into low-income countries. I'll list the membership of the group at the bottom of this post.

Let me start by sketching the challenge of humanitarian aid.
"The main components of international humanitarian action are donor governments, the United Nations and its implementing organisations, the Red Cross and Red Crescent Movement and international NGOs. While sometimes described as a ‘system’, it is actually a complicated and constantly evolving web of organisations. In 2014, the humanitarian system comprised some 4,480 operational aid organisations and more than 450,000 professional humanitarian aid workers. It had a combined expenditure of over $25 billion. ...  
"However, this system is under great and growing strain. The 2015 State of the Humanitarian System report concludes that international humanitarian action is at the ‘wrong scale and is structurally deficient to meet the multiple demands that have been placed upon it’. 2014 was an exceedingly difficult year, with simultaneous large-scale disasters in South Sudan, the Central African Republic, Syria and the Philippines, and in West Africa with the Ebola outbreak. ...  In 2014 there were nearly 60 million people around the world who had been displaced by conflict. Natural disasters affect on average 218 million people a year. Conflict in the Central African Republic has touched more than half its population. Almost 12 million people have been forced to flee their homes in Syria."
The Panel estimates that at present about 5-6% of humanitarian aid is distributed in the form of cash payments. "If sectors where cash is often less appropriate (health, water and sanitation) and not appropriate at all (mine action, coordination, security) are removed from the equation, then cash and vouchers were roughly 10% of the total."

It may seem counterintuitive to think of money as an answer to humanitarian crises. After all, isn't it obvious that in such a crisis, the problem is a shortage of food, shelter, water, tents, clothing, medical care, and the like? But actually, this point isn't at all obvious. If there is buying power, market forces are often extremely clever and flexible about making goods available. One of the most famous works by Amartya Sen, the 1998 Nobel laureate in economics, looked at causes of famine. As he pointed out, famines sometimes happened in places where there had been no drop in crop production; even in famine-stricken areas, large groups of people did get food; and in some cases, food was even being exported out of famine areas. In short, Sen pointed out that famine was often not a result of a physical shortage of food. Instead, it was the result of a large group of people who found themselves unable to pay for food, often because some sort of disaster had wiped out their way of making a living. If the government provided income to people, perhaps through make-work show-up-you-get-paid jobs, then that local buying power would often bring a supply of food to the area.

The Panel's top recommendation is "Give more unconditional cash transfers. The questions should always be asked: ‘why not cash?’ and, ‘if not now, when?’" Let me first run through some of the advantages of making greater use of cash in humanitarian relief, in no particular order, and then consider some objections.

Humanitarian cash relief offers greater flexibility for the recipients, who can prioritize what is most important to them.

"A consistent theme in research and evaluations is the flexibility of cash transfers, enabling assistance to meet a more diverse array of needs. In the Philippines, for example, people reported using the money for food, building materials, agricultural inputs, health fees, school fees, sharing, debt repayment, clothing, hygiene, fishing equipment and transport. Often people spend the vast majority of cash in fairly predictable ways – during the Somalia famine, cash transfers were mainly used to buy food and repay loans. Sometimes there are surprises. In Lebanon, for example, while UNHCR provided cash to Syrian refugees to cope with the harsh winter conditions as an alternative to ‘winterisation kits’, most directed their additional income towards food and water. It is not that they did not need fuel – it was that they needed other things more. The element of choice is critical. ... 
"The evidence shows that cash in humanitarian settings can be effective at achieving a wide range of aims – such as improving access to food, enabling households to meet basic needs, supporting livelihoods and reconstructing homes. ...  Cash impacts local economies and market recovery by increasing demand and generating positive multiplier effects. In Zimbabwe, every dollar of cash transfers generated $2.59 in income (compared to $1.67 for food aid). It can encourage the recovery of credit markets by enabling repayment of loans."
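The Zimbabwe figures quoted above can be read through a simple Keynesian local-spending multiplier, where each dollar transferred generates total local income of m = 1/(1 - c), with c the share of each dollar re-spent locally. The report does not say this is how its figures were produced, so this is only an illustrative sketch under that assumption:

```python
# Hedged sketch: back out the implied local re-spending share from a
# reported cash-transfer multiplier, using the simple geometric
# (Keynesian) multiplier m = 1 / (1 - c). This is an assumption for
# illustration, not the report's stated methodology.

def implied_local_spending_share(multiplier):
    """Invert m = 1/(1-c) to get c, the share re-spent locally each round."""
    return 1 - 1 / multiplier

cash_multiplier = 2.59   # $ of local income per $1 of cash transfers (Zimbabwe)
food_multiplier = 1.67   # $ of local income per $1 of food aid (Zimbabwe)

print(round(implied_local_spending_share(cash_multiplier), 3))  # 0.614
print(round(implied_local_spending_share(food_multiplier), 3))  # 0.401
```

On this reading, cash recipients re-spend roughly 61 cents of each marginal dollar in the local economy, versus about 40 cents of the resale value of food aid, which is one way to rationalize why the two forms of assistance generate different multipliers.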
Cash lets humanitarian dollars help more people, because the humanitarian organization doesn't need to gather, transport, and store physical objects.
"Cash transfers can also make limited humanitarian resources go further. ...  It usually costs less to get money to people than in-kind assistance because aid agencies do not need to transport and store relief goods. A four-country study comparing cash transfers and food aid found that 18% more people could be assisted at no extra cost if everyone received cash instead of food."
A focus on cash can simplify and reduce overlap among the many aid organizations. It can also let aid organizations focus on broader issues of reconstruction after a disaster.

In Lebanon in 2014, 30 aid agencies provided cash transfers and vouchers for 14 different objectives, including winterisation, legal assistance and food. People do not divide their needs by sectors and clusters. A more logical approach is to have fewer, larger-scale interventions providing unconditional cash grants using common delivery infrastructure where possible, complemented by other forms of humanitarian aid in sectors where cash is not appropriate. ...
Providing cash does not and should not mean that humanitarian actors lose a focus on a key public good that they are uniquely placed to provide: proximity, presence and bearing witness to the suffering of disaster-affected populations. On the contrary, streamlining aid delivery should allow them more time to focus on exactly that. Giving people cash, therefore, does not imply simply dumping the money and leaving them to fend for themselves. People receiving cash intended to help meet shelter needs may require help to secure land rights, build disaster-resistant housing or manage procurement and contractors. Where people use cash to buy agricultural inputs this can be complemented with extension advice.
Humanitarian cash payments can help build links between low-income people and the financial sector. As I noted in an earlier post: "For the individual, it provides safety for saving, a channel for receiving and making payments, and the possibility of getting a loan at a more reasonable rate than offered by an informal money-lender. An economy in which many people have bank accounts will find it easier to make transactions, both because buying and selling are easier and because whether a payment was in fact made can be verified by a third party." Many low-income countries are moving toward providing government payments through electronic accounts already. Providing humanitarian aid through these channels may be less prone to corruption, and easier to audit, than providing in-kind assistance.

Perhaps the main concern with using cash for humanitarian relief is that it won't benefit those in need. It could be skimmed off in some way, or those who receive it might spend it on local intoxicants rather than feeding their children. Cash assistance is surely susceptible to these problems, but so are other sorts of aid. There are plenty of stories of emergency supplies of food being stolen and sold. Those who get physical aid can sell what they have received on the black or gray market for cash, and then buy whatever else they want instead. The report cites one study finding that 70% of Syrian refugees in Iraq have sold or traded some of the in-kind aid they received.
"Evidence from humanitarian settings and from social protection overwhelmingly demonstrates that people receiving money tend to buy what they most need and do not spend it on alcohol or tobacco or for other anti-social purposes. There are inevitably some exceptions, because crises and disasters do not change the fact that there are some irresponsible people in the world, but the evidence is clear that cash is no more likely to be used irresponsibly than other kinds of assistance (which can be sold to buy other things, and often is).
Of course, humanitarian aid in the form of cash doesn't work all the time. But as the report notes:
Nobody expects cash to replace vaccines or therapeutic feeding for malnourished children, or that money alone can enable the safe rebuilding of shelters. But the times and contexts when cash isn’t appropriate are narrow and limited, and should not be used as excuses to continue providing in-kind assistance if cash becomes possible. Markets recover quickly after disasters and continue during conflicts. 

Friday, November 13, 2015

Uber: What are the Real Economic Gains?

A common accusation against Uber and other web-facilitated car-hire services is that what looks like a competitive advantage arises only because they operate under a different and more lax set of rules than regular taxicabs. In other words, the newfangled service looks great until you find yourself in an unsafe and undermaintained vehicle, with an untrained or underinsured driver. In "The Social Costs of Uber," Brishen Rogers points out two sources of genuine economic gains from Uber and similar firms (The University of Chicago Law Review Dialogue, 2015, 82: pp. 85-102). He also describes the evolving negotiations over rules that Uber and other companies seem sure to face.

A company like Uber offers two sources of genuine economic gains: reduced search costs for both passengers and drivers, and gains from horizontal and vertical integration. Here's Rogers on the mess that search costs on the part of both drivers and passengers create for conventional taxicab markets, and how Uber addresses them (with footnotes omitted).
"[B]oth regulated and deregulated taxi sectors suffer from high search costs. Riders have difficulty finding empty cabs when needed. Taxis therefore tend to congregate in spaces of high demand, such as airports and hotels. Deregulation arguably made this worse. Since supply went up, cab drivers had even greater incentives to stay in high-demand areas, and yet they had to raise fares to stay afloat.
High search costs and low effective supply may also reduce demand for cabs in two ways. First, if consumers have difficulty finding cabs because cabs are scarce, they may tend not to search in the first place. Second, high search costs may create a vicious cycle for phone-dispatched cabs. Riders who get tired of waiting for a dispatched cab may simply hail another on the street; drivers en route to a rider may also decide to take another fare from the street, rationally estimating that the rider who called may have already found another car. In some cities, the result is that dispatched cabs may never arrive—full stop.
Uber has basically eradicated search costs. Rather than calling a dispatcher and waiting, or standing on the street, users can hail a car from indoors and watch its progress toward their location. Drivers also cannot poach one another’s pre-committed fares. This is a real boon for consumers who don’t like long waits or uncertainty—which is to say everyone. Uber can also advise drivers on when to enter and exit the market—for example, by encouraging part-time drivers to work a few hours on weekend nights.
The article cites some evidence from a few years back in San Francisco that fewer than half of the attempts to dispatch a cab to a certain address ended up with a cab actually arriving.

For economists, "vertical integration" refers to whether a few or many economic actors are involved in the successive steps along the chain of production from start to finish. In contrast, "horizontal integration" refers to whether a few or many are involved in a particular stage of the production process. Rogers argues that the taxicab industry has evolved in ways that don't involve much vertical or horizontal integration, and that Uber and other ride-sharing services are creating efficiency gains by bringing greater integration along both dimensions. Rogers writes:
Uber is also extremely important for another reason that has received little attention: it is encouraging vertical and horizontal integration in the car-hire sector. ... In Chicago, for example, medallion owners often lease their operating rights to management companies; management companies in turn purchase or lease cars and outfit them as required per local regulations; drivers then lease those cars from management companies on a weekly, daily, or even hourly basis. Other cities have different licensing systems, but any licensing system that does not mandate owner operation or direct employment of drivers will encourage similar vertical fragmentation. Taxi companies will rationally (and lawfully) lease cars to drivers rather than employ drivers in order to avoid the costs associated with employment, which include minimum wage laws, unemployment and workers’ compensation taxes, and possible unionization. Uber is now reducing such vertical fragmentation, since it has a direct contractual relationship with its drivers. It is also integrating the sector horizontally as it gains market share within cities. Meanwhile, the company is compiling a massive database of driver and rider behavior. Those data are essential to Uber’s price-setting and market-making functions but would be all-but-impossible to compile in a fragmented industry. 
In short, the economics behind Uber and other ride-sharing services suggests the possibility of substantial and real economic gains. Rogers quickly mentions some other gains, as well: "For example, Uber reduces consumers’ incentives to purchase automobiles, almost certainly saving them money and reducing environmental harms. As consumers buy fewer cars, Uber also opens up the remarkable possibility of converting parking spaces to new and environmentally sound uses. Uber may also reduce drunk driving and other accidents."

But even if Uber isn't just a case of a firm gaining a cost advantage by sidestepping existing regulations, it is nonetheless true that Uber, like any company providing services to the public, is going to find itself facing some rules and regulations. For example, basic checks on driver competence, as well as rules about vehicle safety and appropriate insurance, seem to be on their way.

What is perhaps more interesting is that the web-enabled car-hire model raises some questions that didn't arise in the same way in the previous taxicab industry.

For example, there is a combination of old and new concerns about discrimination. The old concern is that taxis may not be available for hire in certain neighborhoods, or drivers may not pick up riders from certain racial or ethnic groups. A web-connected car-hire service seems likely to reduce this problem. The new concern is that Uber riders are expected to evaluate drivers. What if such evaluations carry a dose of racial/ethnic or gender prejudice?

Another issue is whether Uber drivers should be treated as "employees." Rogers doubts that Uber drivers will ultimately be treated this way, and notes that there are similar cases involving whether FedEx drivers are employees. He writes:
The most analogous recent cases, in which courts have split, involve FedEx drivers. Those that found for the workers have noted, for example, that FedEx requires uniforms and other trade dress, that it requires drivers to show up at sorting facilities at designated times each day, and that it requires them to deliver packages every day. Uber drivers are different in each respect. They use their own cars, need not wear uniforms, and most importantly they work whatever hours they please.
But ultimately, as these kinds of regulations are discussed and debated, the very success of Uber and similar services is likely to help in enacting and enforcing certain standards. As Rogers notes: "These developments could make it relatively simple to ensure that Uber complies with the law and plays its part in advancing public goals. The reason is simple: as scholars have documented, large, sophisticated firms can detect and root out internal legal violations—and otherwise alter employees’ and contractors’ behavior—far more easily than public authorities or outside private attorneys."

In other words, Uber and similar companies are not going to be both enormous commercial successes and also untouched by regulatory concerns. Instead, Uber's huge and growing database of drivers, fares, prices, times of day, locations, accidents, and evaluations of drivers by passengers and of passengers by drivers will all tend to provide information that can be used to monitor what happens and to motivate improvements where needed. Moreover, if enough potential customers or drivers are discontented with Uber and the existing web-enabled car-hire companies, the barriers to entry for other firms to start up Uber-like companies on a city-by-city basis are not very high. As Rogers writes:
Moreover, it is not clear that Uber’s position at the top of the ride-sharing sector is stable. While Uber’s app is revolutionary, it is also easy to replicate. Uber already faces intense competition from Lyft and other ride-sharing companies, competition that should only become more intense given Uber’s repeated public relations disasters. While Uber’s success relies in part on network effects—more riders and drivers enable a more efficient market—the switching costs for riders and drivers appear to be fairly minimal. Uber may become the Myspace or Netscape of ride sharing—that is, a pioneer that could not maintain its market position. Concerns about monopoly therefore seem premature.   

Those interested in this subject might also want to check out an earlier post on "Who are the Uber Drivers?" (February 18, 2015).

Thursday, November 12, 2015

How Many Deaths from Mistakes in US Health Care?

Back in 1999, the Institute of Medicine (part of the National Academy of Sciences) estimated in its report To Err is Human that in 1997 at least 44,000, and as many as 98,000, patients died in hospitals as the result of medical errors that could have been prevented. Current estimates are higher, as Thomas R. Krause points out in "Department of Measurement: Scorecard Needed" in the Milken Institute Review (Fourth Quarter 2015, pp. 91-94). Krause writes:
"You've seen the astounding numbers: hundreds of thousands of Americans die each year due to medical treatment errors. Indeed, the median credible estimate is 350,000, more than U.S. combat deaths in all of World War II. If you measure the “value of life” the way economists and federal agencies do it – that is, by observing how much individuals voluntarily pay in daily life to reduce the risk of accidental death – those 350,000 lives represent a loss exceeding $3 trillion, or one-sixth of GDP. But when decades pass and little seems to change, even these figures lose their power to shock, and the public is inclined to focus its outrage on apparently more tractable problems."
In case you're one of the vast majority who actually haven't seen those estimates, or at least haven't mentally registered that they exist, here are a couple of the more recent underlying sources.

The Agency for Healthcare Research and Quality (part of the US Department of Health and Human Services) published in May 2015 the 2014 National Healthcare Quality and Disparities Report. Here are some good news/bad news statistics from the report:
From 2010 to 2013, the overall rate of hospital-acquired conditions declined from 145 to 121 per 1,000 hospital discharges. This decline is estimated to correspond to 1.3 million fewer hospital-acquired conditions, 50,000 fewer inpatient deaths, and $12 billion savings in health care costs. Large declines were observed in rates of adverse drug events, healthcare-associated infections, and pressure ulcers.
The good news is 50,000 fewer deaths, along with health improvements and money saved. The bad news is that the rate of hospital-acquired conditions fell only from roughly one in every seven patients to one in every eight. Sure, hospital-acquired conditions will never fall to zero. But it certainly looks to me as if at least tens of thousands of lives were being lost each year because that rate remained so high, and that tens of thousands of additional lives could be saved by reducing it further. For another analysis in a different setting, here's a 2014 US government study about adverse and preventable effects of care in nursing care facilities.
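The back-of-the-envelope arithmetic behind the "one in seven to one in eight" claim, using only the AHRQ figures quoted above, can be sketched as:

```python
# Sanity check on the AHRQ figures quoted above (no outside data used).
rate_2010 = 145 / 1000   # hospital-acquired conditions per discharge, 2010
rate_2013 = 121 / 1000   # hospital-acquired conditions per discharge, 2013

print(round(1 / rate_2010, 1))  # 6.9 -> roughly one discharge in seven
print(round(1 / rate_2013, 1))  # 8.3 -> roughly one discharge in eight

# Implied deaths per avoided condition over 2010-2013:
fewer_conditions = 1_300_000
fewer_deaths = 50_000
print(round(fewer_deaths / fewer_conditions, 4))  # 0.0385 -> about 1 death per 26 conditions
```

The last ratio is a crude average across very different kinds of adverse events, but it gives a rough sense of why even modest further reductions in the rate could translate into tens of thousands of lives.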

John T. James published "A New, Evidence-based Estimate of Patient Harms Associated with Hospital Care" in the Journal of Patient Safety (September 2013, pp. 122-128). James reviews four studies of quality of care that focus on relatively small numbers of patients (three of the studies cover fewer than 1,000 patient records each; the fourth covers 2,300). The studies use a method called the Global Trigger Tool to flag cases where preventable errors might have occurred, and those cases are then examined by physicians. James describes the process this way:
The GTT depends on systematic review of medical records by persons trained to find specific clues or triggers suggesting that an adverse event has taken place. For example, triggers might include orders to stop a medication, an abnormal lab result, or prescription of an antidote medication such as naloxone. As a final step, the examination of the record must be validated by 1 or more physicians. As will be shown shortly, the methods used to find adverse events in hospital medical records target primarily errors of commission and are much less likely to find harm from errors of omission, communication, context, or missed diagnosis.
Projecting from four small studies to national patterns is obviously a little dicey, but for what it's worth, James finds:
Using a weighted average of the 4 studies, a lower limit of 210,000 deaths per year was associated with preventable harm in hospitals. Given limitations in the search capability of the Global Trigger Tool and the incompleteness of medical records on which the Tool depends, the true number of premature deaths associated with preventable harm to patients was estimated at more than 400,000 per year. Serious harm seems to be 10- to 20-fold more common than lethal harm.
My reactions to this body of evidence on the prevalence and costs of mistakes in the US health care system can be summarized in two bits of skepticism and one burst of outrage.

It seems sensible to be skeptical about the largest estimates of the size of the problem. There are obviously issues in deciding what was "preventable" or a "mistake."

The other bit of skepticism is that seeking to reduce the problem of medical errors is harder than it might at first sound. For example, Christine K. Cassel, Patrick H. Conway, Suzanne F. Delbanco, Ashish K. Jha, Robert S. Saunders, and Thomas H. Lee wrote about some efforts to measure and set guidelines for health care in "Getting More Performance from Performance Measurement," which appeared in the New England Journal of Medicine on December 4, 2014. They point out that there are often literally hundreds of measures of quality of care, some important, some not, and many that turn out to be useless or even harmful.
Many observers fear that a proliferation of measures is leading to measurement fatigue without commensurate results. An analysis of 48 state and regional measure sets found that they included more than 500 different measures, only 20% of which were used by more than one program. Similarly, a study of 29 private health plans identified approximately 550 distinct measures, which overlapped little with the measures used by public programs. Health care organizations are therefore devoting substantial resources to reporting their performance to regulators and payers; one northeastern health system, for instance, uses 1% of its net patient-service revenue for that purpose. Beyond the problem of too many measures, there is concern that programs are not using the right ones. Some metrics capture health outcomes or processes that have major effects on overall health, but others focus on activities that may have minimal effects. ...
Unfortunately, for every instance in which performance initiatives improved care, there were cases in which our good intentions for measurement simply enraged colleagues or inspired expenditures that produced no care improvements. One example of a measurement effort that had unintended consequences was the CMS quality measure for community-acquired pneumonia. This metric assessed whether providers administered the first dose of antibiotics to a patient within 6 hours after presentation, since analyses of Medicare databases had shown that an interval exceeding 4 hours was associated with increased in-hospital mortality. But the measure led to inappropriate antibiotic use in patients without community-acquired pneumonia, had adverse consequences such as Clostridium difficile colitis, and did not reduce mortality. The measure therefore lost its endorsement by the National Quality Forum in 2012, and CMS removed it from its Hospital Inpatient Quality Reporting and Hospital Compare programs.
But even after acknowledging that quantifying death and injury caused by health care mistakes is an inexact process, and fixing it isn't simple, the sheer scale of the issue remains.

The US economy will spend about $3 trillion this year on health care. As Krause noted at the start, the loss of 350,000 lives from preventable errors, if we value a life at about $9 million as federal regulators commonly do, means that the total cost of deaths from health care mistakes is about $3 trillion. On one side, perhaps this total is overstated. On the other side, it includes only the costs of deaths, not the health costs from serious but nonlethal harms (which James estimates are 10 to 20 times as common), and not the costs of resources used by the health care system in seeking to deal with mistakes already made.
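The value-of-statistical-life arithmetic behind Krause's $3 trillion figure is simple enough to lay out explicitly. The $9 million value per statistical life comes from the passage above; the rough $18 trillion figure for 2015 US GDP is my assumption, used only to recover the "one-sixth of GDP" comparison:

```python
# The value-of-statistical-life (VSL) arithmetic behind the $3 trillion figure.
deaths_per_year = 350_000          # median estimate quoted by Krause
value_of_statistical_life = 9e6    # ~$9 million, as commonly used by federal regulators

total_loss = deaths_per_year * value_of_statistical_life
print(total_loss / 1e12)           # 3.15 -> about $3.15 trillion per year

us_gdp_2015 = 18e12                # roughly $18 trillion (my assumption, not from the article)
print(round(total_loss / us_gdp_2015, 3))  # 0.175 -> about one-sixth of GDP
```

Note that this valuation is of the same order of magnitude as total annual US health care spending itself, which is the bitter comparison the article is driving at.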

There is considerable public debate over how to make sure all Americans have health insurance. But the issue of the enormous costs of the US health care system doesn't get the same airtime. Sure, there are arguments over how much or why the rate of growth of US health care spending has changed. In the meantime, the US continues to vastly outspend other countries. For example, a figure from the OECD shows US health care spending as a share of GDP running 50% higher than that of any other country, and roughly double the OECD average. Based on this data, the US is spending about $8,500 per person per year on health care, while Canada and Germany are spending about $4,400 per person per year, and the United Kingdom and Japan are spending about $3,300 per person per year.


I understand the reasons why high US health care spending doesn't buy better health. But it's a bitter irony indeed that the extremely high levels of US health care spending are actually causing at least tens of thousands, and quite possibly hundreds of thousands, of deaths each year.