Pages

Tuesday, June 30, 2015

Focusing on High-Cost Patients

There's a widespread belief that a large share of US health care spending goes to highly interventionist end-of-life care that does little or nothing to prolong the length of life, while quite possibly reducing the quality of remaining life. What share of health care costs is spent on those in the last year of life? More broadly, what are the possibilities for holding down the rise in health care costs over time by focusing on the patients who experience the highest level of costs?

Melissa D. Aldridge and Amy S. Kelley offer some facts and background for thinking about this question in their essay "Epidemiology of Serious Illness and High Utilization of Health Care." It appears as Appendix E in a 2015 National Academy of Sciences report called Dying in America: Improving Quality and Honoring Individual Preferences Near the End of Life.

Aldridge and Kelley write: "As of 2011, the top 5 percent of health care spenders (18.2 million people) accounted for an estimated 60 percent of all health care costs ($976 billion) ... In this high-cost subgroup, total annual costs ranged from approximately $17,500 to more than $2,000,000 per person." Just to be clear, this spending figure includes spending by private or public health insurance on a given patient: it's not a measure of out-of-pocket health care costs. Aldridge and Kelley suggest dividing those with high health expenditures into three groups: "individuals who experience a discrete high-cost event in one year but who return to normal health and lower costs; individuals who persistently generate high annual health care costs due to chronic conditions, functional limitations, or other conditions; and individuals who have high health care costs because it is their last year of life."

Here are a couple of diagrams to help envision this three-way division. First, here's the breakdown of the top 5% into these three groups. About half are those who experienced a high-cost event, but do not continue to be in the top group for health care expenses in the next year. Only 11% of this top-expenditures group was in the last year of life.

But here's another perspective on the same subject. It turns out that about 80% of those in the last year of life are indeed in the high-expenditures group. Thus, it is true that those in the last year of life often have high health care costs, but because in a given year many others also have high health care costs, the end-of-life group is a relatively small share of the overall high-cost group.
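Just to check that these two figures hang together, here's a quick back-of-the-envelope calculation. The only number not taken from the report is the rough total of US deaths per year, which I'm assuming to be about 2.5 million:

```python
# Back-of-envelope consistency check (not from the report itself).
top_spenders = 18.2e6           # people in the top 5% of health care spending (report figure)
share_last_year_of_life = 0.11  # share of the top 5% who are in their last year of life (report figure)
annual_deaths = 2.5e6           # assumed US deaths per year (approximate outside figure)

decedents_in_top_group = top_spenders * share_last_year_of_life
print(f"Top spenders in last year of life: {decedents_in_top_group/1e6:.1f} million")
print(f"Implied share of decedents who are high-cost: {decedents_in_top_group/annual_deaths:.0%}")
# ~2.0 million people, or about 80% of decedents -- consistent with the figure in the text.
```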

How might thinking about high-cost patients in a given year offer some guidance for holding down health care costs? As a starting point, think about those who have a high-cost event in one year, but not in the following year. One can imagine a survivor of a severe accident. Or as Aldridge and Kelley write: "Some examples of this illness trajectory might include people who have a myocardial infarction, undergo coronary bypass graft surgery, and return to stable good health after a period of rehabilitation; individuals who are diagnosed with early stage cancer, complete surgical resection and other first-line therapies, and achieve complete remission; or people who are waiting for a kidney transplant on frequent hemodialysis and then receive a transplant and return to stable health." They follow up by saying: "There may be relatively less opportunity for cost reductions in this population because many high-cost events may be unavoidable." Indeed, one might go farther and argue that this is exactly what health insurance is supposed to provide: you make payments year after year, hoping that nothing terrible will happen to you, but if it does, you have some financial protection.

What about health care practices and reimbursement policies directed toward those who had high health care expenditures in the last year of life?

The gains from reducing costs of end-of-life care shouldn't be overstated. The proportion of Medicare spending that goes to end-of-life care has been roughly the same for the last few decades at about 25%. This regularity suggests that while overall health care costs have been rising, end-of-life care is not an increasing part of that overall issue. Intriguingly, Aldridge and Kelley report: "Medicare expenditures in the last year of life decrease with age, especially for those aged 85 or older ... This is in large part because the intensity of medical care in the last year of life decreases with increasing age." Indeed, older adults as a group are a minority of those with the highest health care costs in any given year:
Our analyses of the association between older age and higher health care costs suggests that although individuals aged 65 and over are disproportionately in the top 5 percent of the population in terms of total health care spending ..., almost two-thirds of the top 5 percent spenders are younger than age 65. Although older age may be a risk factor for higher health care costs, older adults make up the minority of the high-cost spenders. Furthermore, the proportion of total annual health care spending for the population aged 65 or over (32 percent) has not changed in a decade despite the growth in the size of that population.
However, some evidence does suggest possibilities for reducing end-of-life costs. For example, in Appendix D of this same NAS report, Haiden A. Huskamp and David G. Stevenson discuss "Financing Care at the End of Life and the Implications of Potential Reforms." They point out that spending on end-of-life care varies a great deal across the country, in ways that don't seem to have anything to do with the health of patients. They write (citations omitted for readability):
Although spending on end-of-life care is uniformly high, the Dartmouth Atlas documented substantial geographic variation in use of end-of-life care services and spending by hospital referral region (HRR) over time, which researchers and policy makers viewed as evidence of wide regional differences in physician practice patterns. For example, in 2007, the average number of days spent in an ICU [intensive care unit] for chronically ill Medicare beneficiaries in the last 6 months of life varied from 0.7 in Minot, North Dakota, to 10.7 in Miami, Florida. In this same population, the percentage dying in a hospital varied from 12.0 percent in Minot, North Dakota, to 45.8 percent in Manhattan, New York, and the average number of days spent enrolled in hospice varied from a low of 6.1 in Elmira, New York, to a high of 39.5 in Ogden, Utah.
The standard prescription for reducing spending on end-of-life care is to make more use of care delivered through hospice and at home, and less use of expensive  hospital and ICU care. Many people favor such an approach in theory, but in practice, when you or your relative are involved, it can be hard to implement. One issue that should always be acknowledged in discussions of end-of-life care is that all the evidence is based on hindsight: that is, on looking back after someone has died. At the time health care decisions are actually being made, it's very difficult to figure out whether someone has a life expectancy of less than a year. As Huskamp and Stevenson write: "It is also important to note that calculations of spending in the last year of life can be made only by looking backward from the decedent’s date of death. These calculations do not necessarily reflect “real-time” decision making by patients and families about care in the final year of life, as 1-year survival is extremely difficult to predict."

For me, the biggest lesson in looking at this breakdown of the highest-cost patients is one that I've touched on before in this blog (for example, here and here), which is the importance of rethinking how the health care system deals with issues of chronic disease, especially when it is accompanied by functional limitations on behavior. Here's a breakdown from Aldridge and Kelley, showing that well over half of total health care costs are attributable to those who have both chronic conditions and functional limitations.




One clear-cut example is that a large share of those in nursing homes fall into these two categories: indeed, the average person in a nursing home has health care costs that put them into the top 5% of all high-cost patients. Aldridge and Kelley write: "As of 2011, there were 1.4 million Americans residing in nursing facilities. Thus, we estimate that the average annual health expenditure per nursing home resident is more than $200,000, which is significantly higher than the $17,500 minimum average annual health expenditure required to be in the top 5 percent of health care spenders ..."
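The arithmetic behind that comparison is easy to back out. This is just a rough sketch using only the numbers in the quotation:

```python
# Rough back-solve of the nursing home figures quoted above (illustrative only).
residents = 1.4e6                # nursing facility residents in 2011 (report figure)
spending_per_resident = 200_000  # "more than $200,000" average annual spending (report figure)
top5_threshold = 17_500          # minimum annual spending to be in the top 5% (report figure)

implied_total = residents * spending_per_resident
print(f"Implied spending on nursing home residents: ${implied_total/1e9:.0f}+ billion per year")
print(f"Average resident spends about {spending_per_resident/top5_threshold:.0f}x the top-5% threshold")
```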

More broadly, a lot of chronic conditions have the characteristic that if they are well-managed--say, with appropriate diet, drugs, and exercise--they will often have relatively low health care costs. However, if not well-managed, they can lead to high-cost episodes of hospitalization. The US health care system has traditionally been a lot better at providing the high-cost hospitalization than at supporting the best possible management of these conditions. Thus, Aldridge and Kelley calculate (citations omitted):
Analyses of data on chronic conditions and health care costs have found that, of the population with the highest health care costs, greater than 75 percent have one or more of seven chronic conditions, including 42 percent with coronary artery disease, 30 percent with congestive heart failure, and 30 percent with diabetes. The U.S. Department of Health and Human Services ...  reports that more than 25 percent of individuals in the United States have multiple chronic conditions, and the care of these individuals accounts for 66 percent of total health care spending. ... A recent commentary in the Journal of the American Medical Association suggests that an estimated 22 percent of health care expenditures are related to potentially avoidable complications, such as hospital admission for patients with diabetes with ketoacidosis or amputation of gangrenous limbs, or for patients with congestive heart failure for shortness of breath due to fluid overload. Reducing these potentially avoidable complications by only 10 percent would save more than $40 billion/year.
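It's worth pausing on that last number. Here's a rough back-solve of the spending base implied by the $40 billion estimate; nothing comes from outside the quotation except the observation that total US health spending at the time was well above the implied base:

```python
# Rough check of the "$40 billion from a 10 percent reduction" figure (illustrative back-solve).
avoidable_share = 0.22  # share of health care expenditures tied to potentially avoidable complications
reduction = 0.10        # hypothetical 10 percent reduction in those complications
savings = 40e9          # quoted savings of "more than $40 billion/year"

implied_spending_base = savings / (avoidable_share * reduction)
print(f"Implied total spending base: ${implied_spending_base/1e12:.1f} trillion")
# Roughly $1.8 trillion, which is below total US health spending in that period,
# so the $40 billion figure looks conservative rather than aggressive.
```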
Changes in end-of-life care and in the management of chronic conditions both require cultural change in the field of medicine, with more emphasis on non-hospital, non-high-tech alternatives. But the possibilities for improved patient health and satisfaction, along with substantial cost savings, are considerable.

Monday, June 29, 2015

The Internet of Things

Like most people, I tend to think of the Internet as digital, carrying information, images, text, music, and the like. But we seem to be standing on the edge of what is commonly called the "Internet of Things," in which physical objects--including machines, electrical systems, land, people, and animals--all become increasingly connected to online networks. A group of researchers at the McKinsey Global Institute--James Manyika, Michael Chui, Peter Bisson, Jonathan Woetzel, Richard Dobbs, Jacques Bughin, and Dan Aharon--discuss some of the possibilities and pitfalls in their June 2015 report: "Unlocking the potential of the Internet of Things." They write:
The Internet of Things is still in the early stages of growth. Every day more machines, shipping containers, infrastructure elements, vehicles, and people are being equipped with networked sensors to report their status, receive instructions, and even take action based on the information they receive. It is estimated that there are more than nine billion connected devices around the world, including smartphones and computers. Over the next decade, this number is expected to increase dramatically, with estimates ranging from 25 billion to 50 billion devices in 2025.
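To put those device counts in perspective, here's the implied annual growth rate, assuming the nine billion figure refers to roughly 2015 and thus a ten-year horizon to 2025:

```python
# Implied annual growth rate of connected devices (a rough calculation; the
# ten-year horizon is an assumption on my part).
devices_now = 9e9
horizon_years = 10

for devices_2025 in (25e9, 50e9):
    cagr = (devices_2025 / devices_now) ** (1 / horizon_years) - 1
    print(f"{devices_2025/1e9:.0f} billion devices by 2025 implies ~{cagr:.0%} growth per year")
# Roughly 11% to 19% per year -- rapid, but not outlandish for networked hardware.
```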
What are the potential gains from the Internet of Things? Here's a list, inevitably somewhat speculative, of nine areas where gains from the Internet of Things could be large. For example, sensors seem likely to help people manage illness and improve wellness. They seem likely to help retail stores with layout, checkout, and in-store customer support. They will help factories run equipment and manage supplies in ways that add to efficiency. They will help cities with traffic management, as well as managing resources from water to infrastructure repair to police time.

Some aspects of the Internet of Things may feel like science fiction. As the McKinsey writers emphasize, the development of Internet of Things capabilities will require continued dramatic developments in computing speed, wireless communication, and interoperability and interconnectedness across many systems and devices. But perhaps more difficult than the technological changes are some of the social risks and legal issues involved. Here are three examples:

Privacy and confidentiality. The types, amount, and specificity of data gathered by billions of devices create concerns among individuals about their privacy and among organizations about the confidentiality and integrity of their data. Providers of IoT [Internet of Things] enabled products and services will have to create compelling value propositions for data to be collected and used, provide transparency into what data are used and how they are being used, and ensure that the data are appropriately protected.
Security. Not only will organizations that gather data from billions of devices need to be able to protect those data from unauthorized access, but they will also need to deal with new categories of risk that the Internet of Things can introduce. Extending information technology (IT) systems to new devices creates many more opportunities for potential breaches, which must be managed. Furthermore, when IoT is used to control physical assets, whether water treatment plants or automobiles, the consequences associated with a breach in security extend beyond the unauthorized release of information—they could potentially cause physical harm.
Intellectual property. A common understanding of ownership rights to data produced by various connected devices will be required to unlock the full potential of IoT. Who has what rights to the data from a sensor manufactured by one company and part of a solution deployed by another in a setting owned by a third party will have to be clarified. For example, who has the rights to data generated by a medical device implanted in a patient’s body? The patient? The manufacturer of the device? The health-care provider that implanted the device and is managing the patient’s care?
My own sense is that these kinds of issues will tend to push us away from a world in which everything is continuously interconnected, because 24/7 interconnectedness is just too susceptible to problems of privacy and security, with too much information floating around loose. I can more easily imagine a world in which many objects connect and then disconnect from the Internet on an occasional basis as needed for their functionality, or a world in which the connectedness of things is mediated through local networks. This approach would allow most of the gains from the Internet of Things, but without setting up a situation where someone who hacks the local electricity company can look into individual homes and turn the lights on and off.

Friday, June 26, 2015

Expanding Health Insurance in 2014: How Much Progress?

One of the most prominent claims made by supporters of the Patient Protection and Affordable Care Act of 2010--now commonly called "Obamacare" by supporters and opponents alike--is that it would substantially reduce the number of Americans without health insurance. How is that working out? Probably the best source of information is the National Health Interview Survey that is conducted by the National Center for Health Statistics. The survey asks about a full range of health and insurance issues, and it is carried out continually through the year, so that results can be reported on a quarterly basis. In 2014, the sample size included about 110,000 people.

The most recent NHIS reports came out earlier this week. Robin A. Cohen and Michael E. Martinez authored "Health Insurance Coverage: Early Release of Estimates From the National Health Interview Survey, 2014," with a focus on annual data for 2014. However, the expansion of health insurance "exchanges" and expansions of Medicaid coverage under the Affordable Care Act started in January 2014. As the authors note: "The 2014 estimates after implementation are based on a full year of data collected from January through December 2014 and, therefore, are centered around the midpoint of this period." So in looking for patterns in the extent of health insurance coverage that are emerging through 2014, it is also useful to look at the more detailed NHIS data broken down by quarter--with the fourth quarter of 2014 being the most recent data available.

Here's the overall pattern for those lacking health insurance on an annual basis. The proportion of uninsured had peaked back around 2009 and 2010 in the immediate aftermath of the Great Recession, and had been declining since then. The decline does look more rapid in 2014, although of course the figure doesn't reveal how much is due to the improving economy and employment situation and how much is due to the provisions of the 2010 legislation that started to take effect in January 2014.

For a different perspective, here's the share of people in various age groups who received health insurance through the exchanges in the four quarters of 2014. It looks as if the share rose after the first quarter of 2014, but hasn't shown much trend since then.

Here's a more detailed quarter-by-quarter look through 2013 and 2014. The proportion of uninsured was already dropping in 2013, from 17.1% in 2013:Q1 to 16.2% by 2013:Q4. It then kept falling in 2014, down to 12.1% by 2014:Q4. (The numbers in parentheses are "standard errors." For those not initiated into the mysteries of statistics, a standard error conveys the precision of the estimate: as a rough rule of thumb, the true value is likely to lie within about two standard errors of the number given.)
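For readers who want a concrete illustration of how to read a standard error, here's a minimal sketch. The 12.1% estimate is from the text; the standard error value is hypothetical, since the actual values appear in the NHIS table rather than in my summary:

```python
# How to read a standard error: a rough 95% confidence interval is the estimate
# plus or minus about two standard errors.
estimate = 12.1       # percent uninsured, 2014:Q4 (from the text)
standard_error = 0.3  # hypothetical standard error, in percentage points

low = estimate - 1.96 * standard_error
high = estimate + 1.96 * standard_error
print(f"Approximate 95% confidence interval: {low:.1f}% to {high:.1f}%")
```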


Again, sorting out how much of this is due to the legislative changes and how much is due to an improving economy is a challenge. But a quick-and-dirty approach would note that the share of people receiving public health coverage rose by 0.8 percentage points from 2014:Q1 to 2014:Q4, and the share of people getting exchange-based private health insurance rose from nothing to 2.5% by 2014:Q4. You can't just add these percentages to get an effect from the 2010 legislation. In some cases, private firms may have decided not to offer health insurance in a way that pushed people into the exchanges. The share getting public health insurance would also have been affected by employer choices and the economy, along with the legislation. But until a more systematic study comes along, it seems fair to say as a rough estimate that during 2014, the Affordable Care Act increased the share of Americans with health insurance by 2-3 percentage points.
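Here is that quick-and-dirty arithmetic spelled out, using only the percentage-point changes mentioned above:

```python
# The "quick and dirty" calculation from the paragraph above, made explicit.
public_coverage_gain = 0.8    # percentage-point rise in public coverage, 2014:Q1 to 2014:Q4
exchange_coverage_gain = 2.5  # percentage-point share with exchange-based private coverage by 2014:Q4

upper_bound = public_coverage_gain + exchange_coverage_gain
print(f"Naive upper bound: {upper_bound:.1f} percentage points")
# The true effect is smaller than this simple sum, because some exchange enrollees
# would otherwise have had employer coverage and some of the public-coverage gain
# reflects the improving economy -- hence the rough 2-3 percentage point estimate.
```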

Those who favored the legislation will call this "success." Those who opposed the legislation will raise questions about the cost, emphasize that the law is nowhere near assuring health insurance for all, and point out that if the legislation had been sold as a moderate expansion of Medicaid and a building-up of private insurance exchanges, the law could have been a lot shorter. But for either side, this relatively modest reduction in the number of uninsured shouldn't come as a big surprise. Even supporters of the 2010 legislation predicted that it would only solve about 60% of the problem of uninsured Americans, while nonpartisan sources predicted that it would solve about 40%. So far, even that lower prediction of a 40% reduction in the share of uninsured has not been met.

Thursday, June 25, 2015

Banning Bottled Water: Unintended Consequences

Starting in 2012, the University of Vermont began a process of requiring that all campus locations selling beverages provide 30% "healthy" beverages, and then that all locations phase out all sales of bottled water. There were two hopes: 1) reduced use of bottles, once bottled water was no longer available, and 2) that healthier beverages would be consumed. In a vivid demonstration of the law of unintended consequences, bottle use rose and fewer healthy beverages were consumed. Elizabeth R. Berman and Rachel K. Johnson tell the story in "The Unintended Consequences of Changes in Beverage Options and the Removal of Bottled Water on a University Campus," appearing in the July 2015 issue of the American Journal of Public Health (105:7, pp. 1404-1408). This journal isn't freely available online, although some readers will have access through library subscriptions.

As a starting point, here's the description of the policy change from Berman and Johnson (footnotes omitted):
Policy changes related to the types of bottled beverages sold at the University of Vermont in Burlington, Vermont, provided an opportunity to study how changes in beverage offerings affected the beverage choices as well as the calorie and total and added sugar consumption of consumers. First, in August 2012, all campus locations selling bottled beverages were required to provide a 30% healthy beverage ratio in accordance with the Alliance for a Healthier Generation’s beverage guidelines. Then, in January 2013, campus sales locations were required to remove bottled water while still maintaining the required 30% healthy beverage ratio.
They collected data on the beverages shipped to the sellers at the University of Vermont campus, and used that data as a basis for estimating consumption of bottled beverages. The study didn't try to estimate consumption of other beverages, like fountain drinks or coffee served in cafeterias. They found:

The number of bottles per capita shipped to the university campus did not change significantly between spring 2012 (baseline) and fall 2012, when the minimum healthy beverage requirement was put in place. However, between fall 2012 and spring 2013, when bottled water was banned, the per capita number of bottles shipped to campus increased significantly. Thus, the bottled water ban did not reduce the number of bottles entering the waste stream from the university campus, which was the ultimate goal of the ban. Furthermore, with the removal of bottled water, people in the university community increased their consumption of other, less healthy bottled beverages. ...
Per capita shipments of bottled beverages did not change significantly between spring 2012 and spring 2013 but did increase significantly from 21.8 bottles per person in fall 2012 to 26.3 bottles per person in spring 2013 (P=.03; Table 1). Calories, total sugars, and added sugars shipped per capita also increased significantly between fall 2012 and spring 2013, as shown in Table 1 (P= .02, P = .02, and P=.03, respectively). Calories per bottle shipped increased significantly over the 3 semesters by an average of 8.76 calories per bottle each semester.
(For those who don't read statistics, the P numbers in parentheses are telling you that these changes after the policy took effect are statistically significant--that is, unlikely to have happened by chance.)

Here's a visual of the change, looking at patterns of different drinks. The orange line that drops to zero shows bottled water being phased out. The rising line at the top shows the rise in sugar-sweetened beverages. The red line in the middle that rises sharply shows the rise in sugar-free beverages. 

This finding is not an enormous surprise, because a reasonable amount of survey data suggests that many people switch from sugar-sweetened drinks to bottled water, and that if bottled water isn't available, many of them will switch back. Of course, one can always argue that with more time and better community education, more people will shift to carrying their own water bottles, so that bottle usage will indeed eventually fall and people will shift to healthier drinks. But remember, this policy change was enacted among university students in Burlington, Vermont, which, as the authors say, is "a midsized city that is notoriously invested in both environmental and physical well-being." Moreover, the authors report: "The university made several efforts to encourage consumers to carry reusable beverage containers. Sixty-eight water fountains on campus were retrofitted with spouts to fill reusable bottles, educational campaigns were used to inform consumers about the changes in policy, and free reusable bottles and stickers promoting the use of reusable bottles were given out at campus events."

It seems to me that true believers in the power of community education should see no particular need for proposals to ban water bottles or mandate a healthier mixture of drinks. It's only if you doubt the power of such education that bans on bottled water become a plausible option. The authors report that "[m]ore than 50 colleges and universities have banned the sale of bottled water." Time for a few more studies to find out whether such bans are having any environmental or health benefit.

 

Wednesday, June 24, 2015

Raisins: When Insiders Set the Rules

Earlier this week, the US Supreme Court in Horne et al. v. Department of Agriculture overturned an arrangement that had stood since 1937 for the sale of raisins. The case turned on what is apparently a non-obvious question, given that this program had been around for eight decades and lower courts had ruled differently: Does taking 47% of someone's crop count as a "taking" in the legal sense prohibited by the 5th Amendment to the US Constitution, which ends with the words "... nor shall private property be taken for public use, without just compensation." Chief Justice John Roberts wrote the decision for an 8-1 majority. He begins with a compact overview of past practice:

The Agricultural Marketing Agreement Act of 1937 authorizes the Secretary of Agriculture to promulgate “marketing orders” to help maintain stable markets for particular agricultural products. The marketing order for raisins requires growers in certain years to give a percentage of their crop to the Government, free of charge. The required allocation is determined by the Raisin Administrative Committee, a Government entity composed largely of growers and others in the raisin business appointed by the Secretary of Agriculture. In 2002–2003, this Committee ordered raisin growers to turn over 47 percent of their crop. In 2003–2004, 30 percent. 
Growers generally ship their raisins to a raisin “handler,” who physically separates the raisins due the Government (called “reserve raisins”), pays the growers only for the remainder (“free-tonnage raisins”), and packs and sells the free-tonnage raisins. The Raisin Committee acquires title to the reserve raisins that have been set aside, and decides how to dispose of them in its discretion. It sells them in noncompetitive markets, for example to exporters, federal agencies, or foreign governments; donates them to charitable causes; releases them to growers who agree to reduce their raisin production; or disposes of them by “any other means” consistent with the purposes of the raisin program. 7 CFR §989.67(b)(5) (2015). Proceeds from Committee sales are principally used to subsidize handlers who sell raisins for export (not including the Hornes, who are not raisin exporters). Raisin growers retain an interest in any net proceeds from sales the Raisin Committee makes, after deductions for the export subsidies and the Committee’s administrative expenses. In the years at issue in this case, those proceeds were less than the cost of producing the crop one year, and nothing at all the next. 
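To see what the reserve requirement meant for an individual grower, here's a stylized example. The 47 percent figure is from the decision; the crop size and price are made up for illustration:

```python
# A stylized illustration of the reserve requirement in the 2002-2003 crop year.
crop_tons = 100           # hypothetical grower's crop
reserve_share = 0.47      # share ordered into the reserve in 2002-2003 (from the decision)
free_price_per_ton = 800  # hypothetical price paid for free-tonnage raisins

reserve_tons = crop_tons * reserve_share
free_tons = crop_tons - reserve_tons
revenue = free_tons * free_price_per_ton
print(f"Reserve raisins: {reserve_tons:.0f} tons (handed over, compensation uncertain)")
print(f"Free-tonnage raisins: {free_tons:.0f} tons, revenue ${revenue:,.0f}")
# Any additional return depends on net proceeds from the Raisin Committee's disposal
# of the reserve -- which, in the years at issue, were little or nothing.
```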
Readers who want to plow through the discussions of "takings" and "just compensation" in the decision can feel free to do so. What's interesting to me, from an economic point of view, is that the marketing arrangement for raisins embodies a certain misguided notion of how to create a healthy economy--a notion that still has some resonance today.

In the midst of the Great Depression, firms were losing money and wages were falling. For politicians, the answer to low profits and low wages seemed straightforward: form organizations of producers that would limit competition and hold down production, thus pushing up prices and helping producers earn profits. On the labor side, set industry guidelines and later minimum wage laws to prevent wages from falling.

This economic philosophy was embodied in the National Industrial Recovery Act, passed in 1933. Back in my undergraduate days, I took a class in US economic history with Michael Weinstein, who had recently published his 1980 book, Recovery and Distribution Under the National Industrial Recovery Act. The book offered a careful statistical analysis to illuminate the underlying economic themes. When producers all group together to hold down output, the remaining incumbent firms might make higher profits on the sales that remain--but this is literally the opposite of economic growth. Also, it forces consumers to pay higher prices. Trying to push up wages in the middle of a Great Depression can help those who manage to keep their jobs, but when unemployment is in the neighborhood of 25%, it doesn't help the economy expand, either.

It is revealing that the Raisin Administrative Committee, which sets the proportion of "reserve raisins" to be taken from growers and handlers, lacks any meaningful representation from consumers, or other firms in related industries, or the public more broadly, or those who might wish to enter the market for raisins. Here's how the US Department of Agriculture described its membership:
Committee Structure: The Raisin Administrative Committee is comprised of 35 members representing producers; 10 members representing handlers of varying sizes; 1 member representing the Raisin Bargaining Association (RBA); and 1 public member. Members serve 2-year terms of office that begin on May 1. Producer and handler members are nominated at meetings and by mail ballots.
In short, the economic arrangements for raisins are an example of what so often happens when economic policy is set by a combination of government and existing firms: the focus tends to be on profits for those existing firms, backed up either by government regulations that function like implicit subsidies or by explicit subsidies. Economic growth ultimately comes from innovation and productivity, not from attempts to tilt the market to favored incumbent firms.

Finally, I'll just add that it's an opportune time to end the Raisin Administrative Committee and its National Raisin Reserve. The Raisin Administrative Committee reports in its Marketing Policy & Industry Statistics 2014 - 2015 Marketing Season:

The Committee met on August 14, 2014 and recognized the computed Trade Demand for Natural (sun‐dried) Seedless and all other varietal types ... The Committee voted to not establish volume regulations, thereby declaring Natural (sun‐dried) Seedless and all other varietal types 100% Free. This resulted in no trade demands or volume regulations for the 2014/15 crop year.
The Supreme Court case refers to the situation in 2002-2003 and 2003-2004. But if I'm reading the bureaucratese correctly, the percentage of reserve raisins now being taken by the US government is zero. The Court decision presumably means that it will stay at zero.

Friday, June 19, 2015

Access to the Financial Sector: A Global Perspective

The availability of a bank account is a big help to individuals, and to an economy. For the individual, it provides safety for saving, a channel for receiving and making payments, and the possibility of getting a loan at a more reasonable rate than an informal money-lender would offer. An economy in which many people have bank accounts will find it easier to carry out transactions, both because paying and getting paid are simpler and because a third party can verify whether a payment was in fact made. The record-keeping in a bank also helps to limit certain kinds of corruption, by identifying where money went. For all of these reasons, attachment to the formal financial sector is a useful metric of economic development.

Thus, the World Bank carries out a Global Financial Inclusion survey to find baseline evidence on financial systems around the world. Asli Demirguc-Kunt, Leora Klapper, Dorothe Singer, and Peter Van Oudheusden report the latest results in "The Global Findex Database 2014: Measuring Financial Inclusion around the World," published as World Bank Policy Research Working Paper 7255 (April 2015). The survey is described in this way:
The Global Financial Inclusion (Global Findex) database provides in-depth data showing how people save, borrow, make payments, and manage risk. It is the world’s most comprehensive set of data providing consistent measures of people’s use of financial services across economies and over time. The 2014 Global Findex database provides more than 100 indicators, including by gender, age group, and household income. The data collection was carried out in partnership with the Gallup World Poll and with funding by the Bill & Melinda Gates Foundation. The indicators are based on interviews with about 150,000 nationally representative and randomly selected adults age 15 and above in more than 140 economies.
Here are some of the headline results:
Between 2011 and 2014, 700 million adults became account holders while the number of those without an account—the unbanked—dropped by 20 percent to 2 billion. What drove this increase in account ownership? A growth in account penetration of 13 percentage points in developing economies and innovations in technology—particularly mobile money, which is helping to rapidly expand access to financial services in Sub-Saharan Africa. Along with these gains, the data also show that big opportunities remain to increase financial inclusion, especially among women and poor people. Governments and the private sector can play a pivotal role by shifting the payment of wages and government transfers from cash into accounts. There are also large opportunities to spur greater use of accounts, allowing those who already have one to benefit more fully from financial inclusion. In developing economies 1.3 billion adults with an account pay utility bills in cash, and more than half a billion pay school fees in cash. Digitizing payments like these would enable account holders to make the payments in a way that is easier, more affordable, and more secure.
The report presents a wealth of data on accounts, mobile money, saving, loans, credit and debit cards, online payments by employers and government, and more. Here, I'll just mention a few points that caught my eye. Here's an overall figure of the percentage of adults with a formal financial account in different regions.


Clearly, sub-Saharan Africa is an interesting case because of the number of people who have a "mobile money" account that can be accessed online, but not a formal bank account (the quotation below omits footnotes and references to figures):
In 13 countries around the world, penetration of mobile money accounts is 10 percent or more. Not surprisingly, all 13 of these countries are in Sub-Saharan Africa. Within this group, the share of adults with a mobile money account ranges from 10 percent in Namibia to 58 percent in Kenya. And in 5 of the 13 countries—Côte d’Ivoire, Somalia, Tanzania, Uganda, and Zimbabwe—more adults reported having a mobile money account than an account at a financial institution.
For a take on mobile money in Africa from a few years back, Jenny C. Aker and Isaac M. Mbiti wrote "Mobile Phones and Economic Development in Africa" for the Summer 2010 issue of the Journal of Economic Perspectives (24:3, pp. 207-32). (Full disclosure: I've been Managing Editor of the JEP since 1987.)
There's solid evidence that access to the formal financial sector affects patterns of lending.

Globally, 42 percent of adults reported having borrowed money in the past 12 months. The overall share of adults with a new loan—formal or informal—was fairly consistent across regions and economies, with Latin America and the Caribbean at the low end with 33 percent and Sub-Saharan Africa at the high end with 54 percent. But the sources of new loans varied widely across regions.
In high-income OECD economies a financial institution was the most frequently reported source of new loans, with 18 percent of adults reporting that they had borrowed from one in the past 12 months. In all other regions family and friends were the most common source of new loans. Overall in developing economies, 29 percent of adults reported borrowing from family or friends, while only 9 percent reported borrowing from a financial institution. In several regions more people reported borrowing from a store (using installment credit or buying on credit) than reported borrowing from a financial institution. Less than 5 percent of adults around the world reported borrowing from a private informal lender. 
Finally, it's worth remembering that lower-income households in the United States are more likely to lack a bank account than those in other high-income countries. The problem of the unbanked and underbanked isn't just a problem for the poor countries of the world.
For those who would like more detailed US data, the Federal Deposit Insurance Corporation does a biennial "National Survey of Unbanked and Underbanked Households." The most recent survey, published in 2014 with data for 2013, is available here. Some headline results are: "7.7 percent (1 in 13) of households in the United States were unbanked in 2013. This proportion represented nearly 9.6 million households. 20.0 percent of U.S. households (24.8 million) were underbanked in 2013, meaning that they had a bank account but also used alternative financial services (AFS) outside of the banking system." I offered some additional discussion of the previous round of this survey a couple of years ago in a post about "The Unbanked and Underbanked" (September 24, 2012).
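As a quick consistency check on those FDIC figures, the two percentages and household counts imply essentially the same total number of US households:

```python
# Quick consistency check on the FDIC figures quoted above.
unbanked_households = 9.6e6
unbanked_share = 0.077
underbanked_households = 24.8e6
underbanked_share = 0.20

print(f"Implied total US households (unbanked figures):    {unbanked_households/unbanked_share/1e6:.0f} million")
print(f"Implied total US households (underbanked figures): {underbanked_households/underbanked_share/1e6:.0f} million")
# Both work out to roughly 124-125 million households, so the two figures are consistent.
```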

Thursday, June 18, 2015

Revisiting the AIG Bailout

For me, the bailout of the AIG insurance company back in September 2008 always stood out from the other bailouts around that time. Whether bailing out large banks was a necessary step or not, at least it was obvious why the banks were in trouble: housing prices had dropped sharply, and lots more people than expected were failing to repay their mortgage loans. Similarly, it was obvious that the sharp drop in housing prices could cause severe troubles for Fannie Mae and Freddie Mac, the two biggest federal agencies that were buying mortgages, bundling them together, and then reselling them. The financial difficulties of GM and Chrysler made some sense, too: they were already hampered by high costs, declining market share, and tough competition, and when car sales collapsed during the Great Recession, they were hemorrhaging money. But what caused an insurance company like AIG to lose $100 billion in 2008? How did an insurance company become entangled in a crisis rooted in falling house prices and subprime mortgages?

Robert McDonald and Anna Paulson explain the financial picture behind the scenes in "AIG in Hindsight" in the Spring 2015 issue of the Journal of Economic Perspectives. Their explanation bears remembering in the light of the decision by the US Court of Federal Claims earlier this week that the federal government actions in taking over AIG were unconstitutional. Judge Thomas Wheeler's full decision is available here.  For news coverage summarizing the decision, a Washington Post story is here and a New York Times story is here.

In passing, I'll just mention that this same Spring 2015 issue of JEP includes articles about the other main bailouts, too. If you want a perspective on what happened in the car bailouts, Austan D. Goolsbee and Alan B. Krueger, who were working in the Obama administration at the time, offer "A Retrospective Look at Rescuing and Restructuring General Motors and Chrysler." (I offered my own perspective on "The GM and Chrysler Bailouts" back in May 2012.) W. Scott Frame, Andreas Fuster, Joseph Tracy, and James Vickery discuss "The Rescue of Fannie Mae and Freddie Mac." Charles W. Calomiris and Urooj Khan offer "An Assessment of TARP Assistance to Financial Institutions." Phillip Swagel reviews "Legal, Political, and Institutional Constraints on the Financial Crisis Policy Response."

In the case of AIG, McDonald and Paulson lay out how an insurance company got connected to the fall in housing prices. There were two main channels, both of which will require some explanation for the uninitiated.

There's a financial activity called "securities lending." It works like this. An insurance company needs to hold reserves, so that it will have funds when the time comes to pay out claims. Those reserves are invested in financial securities, like bonds and stocks, so that the insurance company can earn a return on the reserves. However, the insurance company can also lend out these financial securities. For example, perhaps a financial firm has a customer who wants to purchase a specific corporate bond, but the firm can't get a supply of the bond immediately. The financial firm can then borrow the bond from an insurance company like AIG. AIG continues to be the legal owner of the bond, and to receive all interest payments due on the bond. But the borrower of the bond deposits cash as collateral with the lender, in this case AIG. AIG can then also invest this cash and earn an additional return. When the borrower of the financial security returns it to AIG, then AIG has to return the cash collateral.

Securities lending is a normal everyday business for insurance companies, but AIG took a step that looks crazy. The usual practice is to take the cash received as collateral in securities lending and invest it in something very safe and liquid--perhaps Treasury bonds. After all, you're going to have to give that cash back! But AIG took 65% of the cash it had received as collateral for its securities lending, and invested it in assets linked to subprime mortgages! McDonald and Paulson write: "At the end of 2007, 65 percent of AIG’s securities lending collateral was invested in securities that were sensitive either directly or indirectly to home prices and mortgage defaults." Indeed, AIG became so desperate to generate more cash through additional securities lending that instead of requiring cash collateral of 102% of the value of the securities lent--the standard practice--it was requiring collateral of less than 100%.
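To make the mechanics concrete, here's a stylized securities-lending transaction. The 102% figure is the standard practice noted above; all the dollar amounts, the sub-100% collateral rate, and the 15% loss are hypothetical:

```python
# A stylized securities-lending transaction (numbers are hypothetical).
bond_value = 1_000_000
standard_collateral_rate = 1.02  # the customary 102% cash collateral noted above
aig_collateral_rate = 0.98       # hypothetical sub-100% collateral of the kind AIG accepted

standard_cash = bond_value * standard_collateral_rate
aig_cash = bond_value * aig_collateral_rate
print(f"Standard practice: borrower posts ${standard_cash:,.0f} in cash")
print(f"AIG's practice:    borrower posts ${aig_cash:,.0f} in cash")

# The run risk: suppose the cash is reinvested in mortgage-linked securities that
# lose 15% of their value (hypothetical), and the borrower then returns the bond
# and asks for the cash back.
reinvested_value = aig_cash * 0.85
shortfall = aig_cash - reinvested_value
print(f"Cash owed back: ${aig_cash:,.0f}; reinvestment now worth ${reinvested_value:,.0f}")
print(f"Shortfall AIG must fund from elsewhere: ${shortfall:,.0f}")
```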

When securities lending arrangements are stable, they may just be renewed for months at a time. But when those who had borrowed securities from AIG recognized what AIG was doing with their cash collateral, they started returning the securities they had borrowed and demanding their cash back. "On Monday, September 15, 2008, alone, AIG experienced returns under its securities lending programs that led to cash payments of $5.2 billion. ... Ultimately, AIG reported losses from securities lending in excess of $20 billion in 2008." Without the infusion of a government cash bailout, all the people owning life insurance policies through AIG would have been at risk. Insurance companies in the US are regulated primarily at the state level. Not all the state regulations are the same, and it's not clear what the state-level life insurance regulators would have or could have done.

The other main issue that linked insurance company AIG to the housing price meltdown was its portfolio of "credit default swaps." The easiest way to think about a credit default swap is as a kind of insurance against the value of a financial security dropping. Say that a bank or big financial institution owns a bunch of mortgage-backed securities, and it's worried that they might drop in value. It then buys a credit default swap from a seller like AIG. If a "credit event" happens--roughly, you can think of this as a default--then the company that sold the credit default swap needs to cover those losses. AIG had sold credit default swaps on corporate loans, corporate debt, mortgage-backed securities backed by prime loans, and mortgage-backed securities backed by subprime loans. (For a discussion of the role of credit default swaps in the financial crisis, Rene M. Stulz wrote on "Credit Default Swaps and the Credit Crisis" in the Winter 2010 issue of the Journal of Economic Perspectives (24:1, pp. 73-92).)

Obviously, any company that sold a lot of credit default swaps before the decline in housing prices was going to take big losses. But here's the real kicker. Say that an actual "credit event" or default hasn't happened yet, but the risk of a credit default is rising. Because credit default swaps are bought and sold, an increase in risk can be observed in how their prices change. When the risk of default on the underlying securities rises, AIG was required by its contracts to pay "collateral" to the companies that had bought the credit default swaps. If the risks had changed back in the other direction, the collateral would have been paid back. But that didn't happen. By September 12, 2008, AIG had already posted about $20 billion in collateral based on the expected future losses from its credit default swaps on securities based on subprime mortgages. On September 15, prices of these securities shifted again and AIG found on that day that it owed another $8.6 billion in collateral.
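Here's a highly simplified sketch of how those collateral calls work. Real CDS contracts involve thresholds and negotiated terms, and all the numbers below are hypothetical; the point is just the direction of the cash flows:

```python
# A highly simplified sketch of collateral calls on a sold credit-default-swap position.
# All numbers are hypothetical.
notional = 100e9        # face value of securities on which protection was sold
price_yesterday = 0.80  # market value per dollar of face value
price_today = 0.75      # value after further bad news about subprime mortgages

collateral_already_posted = notional * (1.00 - price_yesterday)
additional_collateral = notional * (price_yesterday - price_today)
print(f"Collateral already posted: ${collateral_already_posted/1e9:.0f} billion")
print(f"New collateral call:       ${additional_collateral/1e9:.0f} billion")
# As expected losses on the underlying securities grow, the protection seller must
# keep posting collateral -- even though no formal "credit event" has yet occurred.
```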

In short, in September 2008, the insurance company AIG had tied its fortunes to the price of subprime mortgages. As a result, AIG was going to fail to meet its financial obligations. It needed literally billions of dollars to cover the collateral for its securities lending and for its credit default swaps. Moreover, in the belly of the financial crisis at that time, no private party was going to lend AIG the billions or tens of billions of dollars it needed. Without a government bailout that according to McDonald and Paulson amounted to $182.3 billion, the firm would not have survived.

This discussion should help to clarify the issues with AIG, and also to raise a larger issue. For AIG, Judge Wheeler wrote that the Federal Reserve “possessed the authority in a time of crisis to make emergency loans to distressed entities such as AIG, but they did not have the legal right to become the owner of AIG. There is no law permitting the Federal Reserve to take over a company and run its business in the commercial world (in exchange) for a loan.” Thus, Wheeler ruled that the government action was an unconstitutional taking of property.

Ultimately, several years later, when housing prices had first stabilized and then recovered, the Federal Reserve and the US government were able to sell off the mortgage-backed securities that were owned or backed by AIG in a way that more than repaid the bailout funds. In the lawsuit, AIG used this fact to argue that the government rescue wasn't really needed. However, when it came to damages, Wheeler pointed out that without the government bailout, the shareholders of AIG would have lost everything anyway when the firm went bankrupt in fall 2008. Thus, he awarded damages of zero. Judge Wheeler's decision earlier this week is unlikely to be the final word in the AIG case. By deciding that the government had acted unconstitutionally, but that no damages would be paid, he has probably created a situation in which both sides will appeal.

I find it hard to second-guess decisions made in the very worst days of the financial crisis in September 2008. One often hears the federal government damned for not doing enough to rescue Lehman Brothers, and then also damned for bailing out other firms. But actions in the middle of a financial crisis are so difficult. MIT economist Ricardo Caballero said a few years back: "I still recall politicians and economists calling for the need to teach lessons (in a punitive sense) to the financial system in the middle of the crisis. In fact, I think Lehman happened to a large extent due to the political pressures stemming from this view. What timing! ... I draw an analogy between panics and sudden cardiac arrest. We all understand that it’s very important to have a good diet and good exercise in order to prevent cardiac arrest. But once you’re in a seizure, that’s a totally secondary issue. You’re not going to solve the crisis by improving the diet of the patient. You don’t have time for that. You need a financial defibrillator, not a lecture."

Whatever the outcome of the legal battle, there is a broader issue here about the complexity and interconnectedness of modern finance. For example, it's not clear that state life insurance regulators were looking with skepticism at the AIG securities lending operation. It's not clear that bank regulators, checking to see if banks were protected against risk, were checking on whether AIG could actually make good on the credit default swaps it had sold. When the crisis hit, both the private and public sectors were unprepared. No one wanted the need for a bailout to arise back in 2008, and no one wants a future bailout. But it isn't clear to me that financial regulation in the last few years has found a way to make future AIG situations impossible, or even much less likely.

Wednesday, June 17, 2015

Equality of Opportunity and Equality of Result

In discussions about inequality of income or wealth, it's common to hear an argument along the following lines: "I'm not much bothered by inequality of results, as long as there is fairly good equality of opportunity."

As a quick example of this distinction, consider two siblings of the same gender that grow up in the same family, attend the same schools and colleges, and get similar jobs. However, one sibling saves money for retirement, while the other does not. When the two of them reach retirement, one sibling can afford around-the-world cruises and extensive pampering of the grandchildren, while the other sibling can afford the early-bird discount diner buffet line. This inequality of after-retirement results between the two siblings doesn't seem especially bothersome, because of the earlier equality of opportunities.

However, the notion that the inequality resulting from different opportunities or discrimination can be more-or-less separated from the inequality that results from choices and effort, while appealing at an intuitive level, turns out to be quite difficult to apply in practice. Ravi Kanbur and Adam Wagstaff discuss the issues in "How Useful Is Inequality of Opportunity as a Policy Construct?" published as World Bank Policy Research Working Paper 6980 (July 2014). The authors have also written a recent short summary/overview of the arguments. As a starting point, they write:
In policy and political discourse, “equality of opportunity” is the new motherhood and apple pie. It is often contrasted with equality of outcomes, with the latter coming off worse. Equality of outcomes is seen variously as Utopian, as infeasible, as detrimental to incentives, and even as inequitable if outcomes are the result of differing efforts. Equality of opportunity, on the other hand, is interchangeable with phrases such as ‘leveling the playing field’, ‘giving everybody an equal start’ and ‘making the most of inherent talents.’ In its strongest form, the position is that equality of outcomes should be irrelevant to policy; what matters is equality of opportunity. ... However, attempts to quantify and apply the concept of equality of opportunity in a policy context have also revealed a host of problems of a conceptual and empirical nature, problems which may in the end even question the practical usefulness of the concept.
My sense is that their argument can be divided into two parts. One problem is that it's not easy to divide the inequalities that are observed in society into one portion based on differences in opportunity, which would be rooted in the circumstances in which people find themselves through no decision or fault of their own, and another portion based on the choices or efforts that people make. The other problem is that moral intuition suggests an aggressive role for acting against unequal opportunities in some cases, but not in others: for example, the argument for fighting race and gender discrimination in support of equality of opportunity seems considerably stronger than the argument for seeking to offset most differences in genetic talent as a way of ensuring equal opportunity. I won't attempt to summarize their arguments here, but instead just point out some key issues. In no particular order:

1) Opportunity is entangled with incentives. Those in a society who have greater opportunities--for whatever reason--will also typically have greater incentives to put in the work and effort needed to take advantage of those opportunities. There may be other cases where someone with limited opportunities becomes determined to work twice as hard, or someone with expansive opportunities is more willing to goof off. But in any of these cases, treating opportunities as separate from personal effort and choice seems like a treacherous starting point.

2) Some people are especially favored or disfavored in the labor market by traits like physical or intellectual talent, height or attractiveness--or the lack of these traits. Others are favored or disfavored by factors like race and gender. All of these factors create "inequality of opportunity," but it's not clear these various types of inequality should be of equal importance to policy-makers.

3) If government policies to reduce inequality are based on outcomes, they will also affect underlying incentives. For example, steps to equalize income or wealth levels will mean less incentive to earn at both ends of the income spectrum: that is, high income or wealth taxes reduce incentives at the upper end, and high levels of income-based support can reduce incentives to work at the lower end. Similarly, steps to assure or to equalize retirement income will mean less incentive to save during working life.

4) Thinking about opportunity and choice raises an intergenerational problem. Consider one group of parents who choose to put substantial time and energy into the skills of their children, and another group of parents who do not. It seems plausible that one's family and community have an effect on attitudes about work, saving, risk-taking, belief that effort is worthwhile, and so on. It seems implausible that true "equality of opportunity" should require that the distribution of these beliefs about work, saving, risk, and effort be randomly distributed across children of different family types and socioeconomic classes.

5) Many people may prefer to live in a society which allows a combination of risk-taking with an element of luck affecting the outcome. For example, the authors quote a comment from Milton Friedman: "Individuals choose occupations, investments and the like partly in accordance with their tastes for uncertainty. The girl who tries to become a movie actress rather than a civil servant is deliberately choosing to enter a lottery, so is the individual who invests in penny uranium stocks rather than government bonds.” This argument suggests that there will be inequalities of result that are the consequence of risk-taking and lottery-like outcomes, not of differences in either opportunity or in effort and choice.

6) It seems important to separate "inequality" in its literal sense from concerns over destitution or poverty. There are many ways, with many different implications for inequality, in which society can finance support for the impoverished. There is no intellectual inconsistency in favoring a safety net for the poor but also arguing that, other than that safety net, the government shouldn't worry much about remaining inequality.

The difficult bottom line here is that seeking to draw a distinction between equality of opportunity and equality of results hides a deeper question: What sources of unequal results should a society regard as acceptable or justified, and what sources of unequal results should we regard as unacceptable or unjustified? It's easy to claim that such a distinction exists, but knowing in practical terms where to draw the line can be difficult.

In a 1965 speech, President Lyndon Johnson discussed the importance of true equality of opportunity in a famous passage:
But freedom is not enough. You do not wipe away the scars of centuries by saying: Now you are free to go where you want, and do as you desire, and choose the leaders you please. You do not take a person who, for years, has been hobbled by chains and liberate him, bring him up to the starting line of a race and then say, "you are free to compete with all the others," and still justly believe that you have been completely fair. Thus it is not enough just to open the gates of opportunity. All our citizens must have the ability to walk through those gates.
Johnson's comment contains a deep truth, but the poetic phrasing about starting lines of races and walking through gates of opportunity offers a hint that practical difficulties are being sidestepped.

For example, it is straightforward to argue that children should have at least some minimum level of opportunity, a belief expressed in laws requiring compulsory and taxpayer-funded schooling and public health measures like vaccinations. But beyond that minimum, the extent to which society should intervene with parents or seek to counterbalance or offset parental decisions about raising children can become quite controversial. It is straightforward to argue that racial and ethnic discrimination should be banned. But beyond the essential step of banning explicit discrimination in employment or housing or public services, the extent to which society should act to offset the results of past discrimination becomes controversial, too. Similarly, it seems straightforward to argue that (at least in high-income societies) everyone should have access to health insurance. But beyond that minimum, the extent to which everyone should (or can) have access to all possible treatments is unclear. Moreover, many health conditions are a combination of accident or environmental effects, genetics, and personal choices, in a way that makes it difficult to draw a clear distinction between health conditions resulting from inequality of initial conditions (say, genetic heritage) and those resulting from differences in the personal choices and efforts that make up a healthy lifestyle.

Overall, it seems that the distinction between equality of opportunity and equality of result can be the starting point for some minimum level of public policy to reduce certain causes of unequal outcomes. But given the analytical problem of separating out why unequal results occur, the equality of opportunity/equality of result distinction is often not much help in resolving how aggressive such inequality-reducing policies should be.

Tuesday, June 16, 2015

Some Fresh Water Economics

The geological starting point for thinking about the economics of fresh water is understanding that while about 70% of Earth's surface is water, fresh water is much more scarce. As a result, how the available fresh water supplies are managed and priced--and even the potential for some cost-effective large-scale desalination plants--makes a huge difference in whether water is available when needed.

As a starting point, here's a graphic from the US Geological Survey with a planetary view of water. The bar on the far left shows that of all the water on the planet, 2.5% is fresh water. The second bar shows that out of that 2.5%, about two-thirds is in glaciers and ice caps. Most of the rest is groundwater, including underground aquifers. The surface water that we actually see--like lakes and rivers--is a tiny percentage of the available fresh water.



Water policy has many challenges. At a basic physical level, water can be viewed as an indestructible resource, cycling through the global ecosystem and never being used up. But when patterns of natural water supply shift (say, because of drought) or patterns of water demand shift (say, because of growth of population in certain areas and the food and industry supporting that population), then quantity supplied and quantity demanded can move out of alignment. Water can be stored, transferred, recycled, and used at different quality levels, all of which commonly relies on an infrastructure of reservoirs, canals, pipelines, pumps, and pipes, all tied together with underground aquifers and above-ground rivers and lakes. An IMF staff team offers a primer on many of the issues in their June 2015 report, "Is the Glass Half Empty or Half Full? Issues in Managing Water Challenges and Policy Instruments." The listed authors are Kalpana Kochhar, Catherine Pattillo, Yan Sun, Nujin Suphaphiphat, Andrew Swiston, Robert Tchaidze, Benedict Clements, Stefania Fabrizio, Valentina Flamini, Laure Redifer, and Harald Finger. Here are some facts and insights from the paper.

The quantity of water used by households and the price paid for water by households varies dramatically, both within and across countries. Here's a comparison using data from cities in a number of high-income countries.

This variation in price and quantity consumed reflects a deeper pattern: countries have made extremely different choices about water infrastructure and have evolved very different habits about water use. There are examples of countries that in the past seemed to have plentiful natural supplies of fresh water, but that, because of poor investments in infrastructure and underpricing of water supplies, now find their water sectors under considerable stress. Examples include Pakistan and the Democratic Republic of the Congo. Other countries that lack natural supplies of fresh water have combined investments in infrastructure with pricing practices that have led to much less stress. An example of a low-income country cited by the IMF is Burkina Faso. Here's the IMF report:
Lack of proper management exacerbates water challenges, even in countries with abundant water endowment. A case in point is Pakistan, where, despite an abundance of water a few decades ago, lagging policies have raised the prospect of water scarcity that could threaten all aspects of the economy. The bulk of Pakistan’s farmland is irrigated through a canal system, but canal water is vastly underpriced, recovering only one-quarter of annual operating and maintenance costs. Meanwhile, agriculture, which consumes almost all annual available surface water, is largely untaxed. The combination of these policies leads to overuse of water. In the Democratic Republic of the Congo (DRC), a country with an extensive system of rivers and lakes, years of poor management, conflicting water sector regulations, and low cost recovery have created a situation in which consumption of drinking water is far below the regional average and only a fraction of agricultural land is irrigated. ...
Experiences in some countries with naturally limited water resources have shown that sound water management can be achieved and water challenges are not insurmountable. ... One notable innovation in Burkina Faso is the Bagre “growth pole,” in which a huge manmade reservoir supports diverse activities, such as fishing and irrigation for crops. ... For example, Burkina Faso introduced a progressive tariff grid for drinking water based on the volume of use, with the higher tiers subsidizing the lowest tier as well as part of sanitation activities.
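The increasing-block tariff structure mentioned at the end of that excerpt is easy to illustrate. Here's a minimal sketch in Python with hypothetical tier boundaries and rates (the report does not spell out Burkina Faso's actual schedule), just to show how higher-volume users end up cross-subsidizing the lowest tier:

```python
# A minimal sketch of an increasing-block ("progressive") water tariff.
# The tier boundaries and rates below are hypothetical, chosen only to illustrate
# the structure the report describes; the actual Burkina Faso schedule differs.
TIERS = [
    (8,    0.20),  # first 8 cubic meters per month at a low "lifeline" rate ($/m^3)
    (30,   0.60),  # next block at a higher rate
    (None, 1.20),  # all consumption above 30 m^3 at the highest rate
]

def monthly_bill(volume_m3: float) -> float:
    """Compute a month's bill under the tiered schedule above."""
    bill, lower_bound = 0.0, 0.0
    for upper_bound, rate in TIERS:
        if upper_bound is None or volume_m3 <= upper_bound:
            bill += (volume_m3 - lower_bound) * rate
            break
        bill += (upper_bound - lower_bound) * rate
        lower_bound = upper_bound
    return bill

for v in (5, 20, 50):
    print(f"{v:>3} m^3 -> ${monthly_bill(v):.2f}")
```

Under a schedule like this, a household using 50 cubic meters pays an average rate roughly four times the lifeline rate, which is the cross-subsidy mechanism the report describes.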
One can of course add the western United States in the last few years as an example of a situation where drought and growing population, together with decades of underinvestment in water-related infrastructure, are combining to cause water-related stress for households and the economy. I offer some discussion of these issues in "The economics of water in the American West" (October 29, 2014).

The pricing of water makes a big difference. It provides a substantial share of the funding for water-related infrastructure, and it also shapes the incentives for water conservation. However, in many countries water is a subsidized good, often based on the argument that the poor should have access to water. Yet in low-income countries, such subsidies typically mostly benefit those with high incomes. From the IMF report:

Water subsidies, defined as the difference between actual water charges and a reference price that covers all supply costs, are inequitable. They benefit mostly upper-income groups in developing economies, as the poor often have limited or no access to piped water and improved sanitation. Even when the poor have access to piped water, lower levels of use mean they capture a smaller share of the benefits compared with other groups. For example, Cabo Verde, India, Nepal, and Nicaragua provide the richest households with $3 worth of subsidized water, on average, for every $1 worth provided to the poorest households. ... [W]ater subsidies are estimated at about US$456 billion, or about 0.6 percent of global GDP in 2012, the latest year for which data are readily available. ... Developing Asia has the largest subsidies in absolute terms (US$196 billion), with China accounting for more than two-thirds of that amount. Cost recovery is particularly low in South Asia despite its higher externalities from groundwater depletion. Subsidies are also substantial at the country level, reaching above 5 percent of GDP in seven countries: Azerbaijan, Honduras, Kyrgyz Republic, Mongolia, Tajikistan, Uzbekistan, and Zimbabwe.
Given a growing global population, water "withdrawals"--that is, the uses of fresh water for human purposes--are going to keep rising. Here are some historical figures. In effect, as water cycles through the natural ecosystem, people are tapping into that water supply more frequently and in more places.

As a result, stresses over water are going to keep rising. As the IMF staff writes:
Long-term scenarios forecast large increases in water use that, for many countries, cannot be met by existing supplies. With expected growth in population and economic activity, future global water use will far exceed today’s level. At the same time, freshwater availability is expected to remain more or less fixed in the coming decades. While expecting further improvements in efficiency is not unreasonable, their impact is highly uncertain. The consensus among analysts is that even substantial technological advances and investment would be insufficient to close the projected future gaps between water supply and water use.
Desalination is a current wild card in the economics of fresh water. In Israel, the long-term projections are that almost all of the future increase in water supplies will come through desalination. Technology Review lists "megascale desalination" as one of its 10 breakthrough technologies for 2015.
"On a Mediterranean beach 10 miles south of Tel Aviv, Israel, a vast new industrial facility hums around the clock. It is the world’s largest modern seawater desalination plant, providing 20 percent of the water consumed by the country’s households. ... The Sorek plant incorporates a number of engineering improvements that make it more efficient than previous RO [reverse osmosis] facilities. It is the first large desalination plant to use pressure tubes that are 16 inches in diameter rather than eight inches. The payoff is that it needs only a fourth as much piping and other hardware, slashing costs. The plant also has highly efficient pumps and energy recovery devices. ... Sorek will profitably sell water to the Israeli water authority for 58 U.S. cents per cubic meter (1,000 liters, or about what one person in Israel uses per week) ..."
(Incidentally, this description of why a larger desalination plant is more efficient matches one of my favorite examples for "Illustrating Economies of Scale" (May 21, 2012). In operations with lots of pipes, like chemical plants as well as desalination facilities, doubling the diameter of a pipe means that the circumference of the pipe doubles, which roughly doubles the cost of producing the pipe. However, when the diameter doubles, the cross-sectional area of the pipe rises by a factor of four. In this way, large-scale plants with lots of pipes have an economies-of-scale advantage over similar smaller-scale plants.)
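A quick back-of-the-envelope sketch of the geometry behind that argument, with the usual simplification that pipe cost scales roughly with circumference:

```python
import math

# Geometry behind the economies-of-scale argument: circumference (a rough proxy
# for material cost) scales linearly with diameter, while cross-sectional area
# (a rough proxy for flow capacity) scales with its square.
def pipe_stats(diameter_inches):
    circumference = math.pi * diameter_inches
    area = math.pi * (diameter_inches / 2) ** 2
    return circumference, area

c8, a8 = pipe_stats(8)     # the older 8-inch pressure tubes
c16, a16 = pipe_stats(16)  # the 16-inch tubes used at Sorek

print(f"circumference ratio: {c16 / c8:.1f}x")  # 2.0x -> roughly twice the material cost
print(f"cross-section ratio: {a16 / a8:.1f}x")  # 4.0x -> roughly four times the capacity
```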

Southern California is giving desalination a try, too. In the January 2015 issue of Technology Review (not freely available online), David Talbot describes some new steps in desalination in "Desalination out of Desperation": "San Diego County, hot, dry, and increasingly populous, offers a preview of where much of the world is headed. So too does a recent decision by the county government: it is building the largest seawater desalination plant in the Western Hemisphere, at a cost of $1 billion. The massive project, in Carlsbad, teems with nearly 500 workers in yellow hard hats. When it’s done next year, it will take in more than 100 million gallons of Pacific Ocean water daily and produce 54 million gallons of fresh, drinkable water. While this adds up to just 10 percent of the county’s water delivery needs, it will, crucially, be reliable and drought-proof—a hedge against potentially worse times ahead."

Desalination isn't going to make a major difference in global freshwater supplies in the near or medium term. But the price of desalination seems likely to keep falling, and in certain oceanside locations that can support billion-dollar infrastructure investments, it seems certain to become increasingly important.

Thursday, June 11, 2015

What Lessons from Part-Timers about the US Labor Market?

The US unemployment rate has been in the range of 5.4-5.5% since February, which is clearly a vast improvement from its peak of 10% in October 2009. But of course, no single number will capture the health of the labor market. For example, the overall unemployment rate doesn't capture the rise in long-run unemployment (discussed here, here, and here). It doesn't capture concerns over whether workers are dropping out of the official labor force, and thus are no longer counted as unemployed (discussed here and here). It doesn't look at the lack of widespread wage growth. And of course, the unemployment rate doesn't look at part-time workers--because after all, it counts them as being employed.

However, some workers choose part-time status, while others have part-time status thrust upon them. The difference matters. Those who are working part-time but would prefer to be working more hours can be thought of as both part-time employed and part-time unemployed. Fortunately, the survey on which the unemployment rate is based asks about this distinction: those who report working part-time are asked why. If they give a reason suggesting that they would prefer to work more hours but such jobs aren't available in their local economy, they are classified as part-time "for economic reasons." If they give a reason based more on their personal circumstances--for example, family or personal obligations, being in school, being partially retired, and so on--then they are classified as part-time "for noneconomic reasons."

Here's the breakdown of US part-time workers in a figure created with the ever-useful FRED website run by the Federal Reserve Bank of St. Louis. The blue line on top shows the share of the civilian labor force working part-time for any reason, rising from about 14% back around 1970 to about 18% in more recent years. The green line in the middle shows the share of the labor force working part-time for noneconomic reasons. That level doesn't show much trend: it was 12% of the civilian labor force for much of the 1970s and 1980s, up to about 13% in the 1990s, and then back down to 12% more recently. The red line on the bottom shows the share of the labor force working part-time for economic reasons. In good years for the economy, this rate seems to be about 2% of the workforce. In bad recessions, like 1982 or 2009, it rises to about 6% of the workforce.



This share of part-time workers for economic reasons has declined since the end of the Great Recession, but it's still at about 4% of the workforce, a couple of percentage points above the level when the labor market is at its most robust. These workers do at least have some connection to the labor market, unlike the 5.5% of the labor force that is unemployed. But being part of the part-time labor force when you would like to work more poses its own set of challenges.

But there's one additional distinction here. Is the higher level of part-time work for economic reasons because the broad labor market has not yet recovered? Or is it to some extent because employers have a greater preference for part-time workers, for reasons that have nothing to do with the sluggish recovery from the Great Recession? Rob Valletta and Catherine van der List tackle this issue in "Involuntary Part-Time Work: Here to Stay?" written as an "Economic Letter" for the Federal Reserve Bank of San Francisco (June 8, 2015). They do a cross-state comparison over time, looking at factors that might affect the desire of employers to hire part-time labor: for example, wage levels and minimum wage levels, industry mix, and the share of younger workers. Thus, they can capture to some extent whether part-time work for economic reasons is correlated with the unemployment rate, or with these structural economic factors. They write:
"From the base level of just under 3% in 2006, cyclical factors raised the rate of involuntary part-time work by slightly more than 2 percentage points at the peak in 2010, while structural factors raised it by a little over 1 percentage point. The cyclical component declined after 2010 and is likely to have continued falling beyond our sample period, while the structural component was relatively stable from 2009 through 2013."
For a final sense of perspective, here's one more figure generated from the FRED website. It adds up three categories. One is the overall unemployment rate. A second category is those employed part-time for economic reasons, as discussed above. The third category is the "marginally attached": that is, "persons who want a job, have searched for work during the prior 12 months, and were available to take a job during the reference week, but had not looked for work in the past 4 weeks." Because these people had not looked for work in the previous four weeks, they are counted as "out of the labor force," not as "unemployed," but their survey responses suggest that they would prefer to have work. The blue line shows the percentage of the labor force in these three categories combined. The red line shows the unemployment rate by itself. (The data on the "marginally attached" are only available back to 1994.)
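For readers who want to reproduce this kind of combined measure themselves, here is a minimal sketch using pandas_datareader to pull the series from FRED. The series IDs for the part-time-for-economic-reasons and marginally-attached headcounts are my best guesses at the relevant FRED codes, not something taken from the figure, so verify them on the FRED site before relying on the numbers.

```python
# A minimal sketch of building a broader labor-underutilization measure from FRED data.
# Assumption: the two headcount series IDs below are my guesses for the concepts named
# in the comments -- check them on fred.stlouisfed.org before use.
from pandas_datareader import data as pdr

start = "1994-01-01"  # marginally-attached data only go back to 1994

series_ids = [
    "UNRATE",       # unemployment rate, percent
    "CLF16OV",      # civilian labor force, thousands of persons
    "LNS12032194",  # employed part time for economic reasons, thousands (assumed ID)
    "LNU05026642",  # marginally attached to the labor force, thousands (assumed ID)
]

df = pdr.DataReader(series_ids, "fred", start)

# Express each headcount as a percent of the civilian labor force,
# then stack them on top of the official unemployment rate.
df["part_time_econ_pct"] = 100 * df["LNS12032194"] / df["CLF16OV"]
df["marg_attached_pct"] = 100 * df["LNU05026642"] / df["CLF16OV"]
df["combined_pct"] = df["UNRATE"] + df["part_time_econ_pct"] + df["marg_attached_pct"]

print(df[["UNRATE", "combined_pct"]].tail())
```

This is close in spirit to the U-6 measure that the Bureau of Labor Statistics publishes, although U-6 also adds the marginally attached to the denominator.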




The overall unemployment rate at 5.5% is fairly close to the sub-5% rates where it bottomed out in previous recoveries. But the gap between the combined measure and the unemployment rate tells a different story. At the end of the Great Recession in 2009, that gap had soared to seven percentage points. It has now fallen back to about 5 percentage points, but it is still well above the gap of about three percentage points that was common from the mid-1990s through the 2001 recession and up to the start of the Great Recession. In this way, the decline in the official unemployment rate isn't capturing some degree of continuing weakness in the labor market.

Wednesday, June 10, 2015

Power for Africa

When it comes to electrical power, much of sub-Saharan Africa is living in a different world. A report from the Africa Progress Panel, People, Power, Planet: Seizing Africa's Energy and Power Opportunities, provides a nice overview of the situation. The Africa Progress Panel is a group of 10 prominent individuals ranging from Kofi Annan to Bob Geldof. I presume that the actual report was mostly written by staff, including Caroline Kende-Robb, Kevin Watkins, and Maria Quattri.

The report offers some eye-catching facts about the lack of electrical power in sub-Saharan Africa. Here's a selection (footnotes omitted, along with references to figures and infographics):

Measured on a global scale, electricity consumption in Sub-Saharan Africa excluding South Africa is pitifully low, averaging around 162 kilowatt hours (kWh) per capita a year. ... One-third of the region’s population lives in countries where annual electricity use averages less than 100 kWh each. The global average consumption figure is 2,800kWh, rising to 5,700kWh in the European Union and 12,200kWh in the United States. Electricity consumption for Spain exceeds that of the whole of Sub-Saharan Africa (excluding South Africa).
To put the figures in a different context, 595 million Africans live in countries where electricity availability per person is sufficient to only light a single 100-watt light bulb continuously for less than two months. It takes the average Tanzanian around eight years to consume as much electricity as an American uses in one month. When American households switch on to watch the Super Bowl, the annual finale of the football season, they consume 10 times the electricity used over the course of a year by the more than 1 million people living in Juba, capital city of South Sudan. Ethiopia, with a population of 94 million, consumes one-third of the electricity supplied to the 600,000 residents of Washington D.C. ...
Sub-Saharan Africa is desperately short of electricity. Installed grid-based capacity is around 90 gigawatts (GW), which is less than the capacity in South Korea where the population is only 5 per cent that of Sub-Saharan Africa. Moreover, South Africa alone accounts for around half of power-generation capacity. With 12 per cent of the world’s population, the region accounts for 1.8 per cent of world capacity for generating electricity and the share is shrinking. 
Installed capacity figures understate Africa’s energy deficit. At any one time, as much as one-quarter of that capacity is not operational. In terms of real output, South Korea generates over three times as much electricity as Sub-Saharan Africa. ... Around 30 countries in the region have grid-connected power systems smaller than 500 megawatts (MW), while another 13 have systems smaller than 100MW. For purposes of comparison, a single large-scale power plant in the United Kingdom generates 2,000MW. It is not just comparisons with the rich world that highlight the gap. Nigeria has almost twice as many people as Vietnam but generates less than one-quarter of the electricity that Vietnam generates. 
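The 100-watt light bulb comparison in the excerpt above is easy to check with back-of-the-envelope arithmetic. Here's a quick sketch using only the per-capita figures quoted in the report:

```python
# Rough check of the report's light-bulb comparison: how long could the per-capita
# electricity supply keep a single 100-watt bulb lit continuously?
bulb_kw = 0.100  # a single 100-watt bulb

for kwh_per_year in (162, 100):  # regional average; and the under-100 kWh countries
    hours = kwh_per_year / bulb_kw  # hours of continuous light per year
    days = hours / 24
    print(f"{kwh_per_year} kWh/year -> about {days:.0f} days of one 100 W bulb")
```

The regional average of 162 kWh works out to a bit over two months of continuous light from a single bulb, and the countries averaging under 100 kWh per person, home to one-third of the region's population, fall well under the two-month mark.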
For those who prefer their striking facts in graphical form, here are a couple of examples. On average, 32% of the population in sub-Saharan Africa has access to electricity. As is the way with averages, in a number of countries the percentage is even lower. For comparison, 60% of the population in low-income Bangladesh has access to electricity.


Here's a figure comparing electricity generated on a per capita basis in sub-Saharan Africa with other regions of the world. Progress on power for Africa is slow, and the gap is widening.

Finally, here's a figure showing how households in a number of countries in sub-Saharan Africa get their light. Kerosene lamps and candles play a major role. In Ethiopia, more households get light from moonlight/firelight than from a light bulb in a socket or lamp.



The costs of this lack of electricity are enormous and far-reaching. For industry, it means a combination of continual power outages and the need to invest in expensive stand-alone generators. The report notes that the Power Holding Company of Nigeria (PHCN) has been baptized with the nickname “Please Have Candles Nearby.”
"Frequent power cuts result in losses estimated at 6 per cent of turnover for large firms and as much as 16 per cent for enterprises in the informal sector. Unreliable power supply has created a buoyant market in diesel-powered generators. Around 40 per cent of businesses in Tanzania and Ethiopia operate their own generators, rising to over 50 per cent in Kenya. In Nigeria, around four in every five SMEs [small and medium enterprises] install their own generators. On average, electricity provided through diesel-fuelled back-up generators costs four times as much as power from grid. Diesel fuel is a significant cost for enterprises across Africa, even in less energy-intensive sectors such as finance and banking. ...  Lack of reliable and cost-effective electricity is among the top constraints to expansion in the manufacturing sector in nearly every Sub-Saharan country."
With a lack of electricity, low-income households gather firewood, which takes time, inflicts environmental damage, and when burned leads to a household air pollution problem.

Data from 30 countries showed that the average share of household spending directed to energy was 13 per cent. The poorest households typically spend a larger share of their income on energy than richer households. In Uganda, the poorest one-fifth allocated 16 per cent of their income to energy, three times the share of their richest counterparts. Women and girls spend a lot of time collecting firewood and cooking with inefficient stoves. Factoring in the costs of this unpaid labour greatly inflates the economic costs that come with Africa’s energy deficits. Estimates by the World Bank put the losses for 2010 at US$38 billion or 3 per cent of GDP. ...
Africa is on the front line of the HAP [household air pollution] epidemic. The World Health Organization estimates that 600,000 Africans die each year as a result of it. Almost half are children under 5 years old, with acute respiratory tract infection the primary cause of fatality. If governments in Africa and the wider international community are serious about their commitment to ending avoidable deaths of children, then clean cooking facilities must be seen as a much higher priority. Put differently, achieving universal access to clean cooking stoves, allied to wider measures, could save 300,000 young lives a year. Apart from saving lives, reducing the use of biomass by 50 per cent would save 60-190 million tonnes of CO2- equivalent emissions, as production and use of solid fuels for cooking consumes over 300 million tonnes of wood annually in Sub-Saharan Africa.
The lack of electricity causes problems across many areas. It affects health care, because vaccines can't be refrigerated and equipment can't be run. It affects schools and the ability to read and study at home when there's no reliable light. The report makes a strong and persuasive claim about the overarching importance of electrical power across many dimensions:
[T]here is an abiding sense in which power generation is seen as a peripheral concern, in contrast to priorities in areas such as education, health, nutrition, water and sanitation. It is difficult to think of a more misplaced perception. Without universal access to energy services of adequate quality and quantity, countries cannot sustain dynamic growth, build more inclusive societies and accelerate progress towards eradicating poverty. Productive uses of energy are particularly important to economic growth and job creation. Energy services directly affect incomes, poverty and other dimensions of human development, including health and education. Expanded energy provision is associated with rising incomes, increased life expectancy and enhanced social well-being.
What needs to be done? In a physical sense, there seems little doubt that sub-Saharan Africa has abundant energy resources, including the conventional sources like coal, natural gas, and hydropower, as well as abundant possibilities in certain locations for geothermal, solar, and wind power. But a dramatic rise in electricity generation and distribution will require a dramatic rise in investment in this area. The report argues that the current plans for expansion of electricity production and distribution in Africa are wildly insufficient. "According to the International Energy Agency (IEA), 645 million Africans could still lack access to electricity in 2030." As an alternative vision of the future, the report argues: 
First, overall power generation needs to increase at least 10-fold by 2040 if Africa’s energy systems are to support the growth in agriculture, manufacturing and services needed to create jobs and raise living standards. Second, if governments are serious about the 2030 commitment of “energy for all”, they must adopt the strategies needed to extend provision through the grid and beyond the grid. ...  There is no shortage of evidence to demonstrate what is possible. Brazil, China and Indonesia have achieved rapid electrification over short time periods. Vietnam went from levels of access below those now prevailing in Africa to universal provision in around 15 years. The country expanded electricity consumption fivefold between 2000 and 2013. Bangladesh has increased electricity consumption by a factor of four over the same period. ...
Current spending on investment [in the electricity sector] is around US$8 billion a year, or some 0.49 per cent of GDP. Public financing accounts for around half of overall investment and Chinese investment, public–private partnerships and concessional development finance cover the rest. Covering the costs of investment in plant, transmission and distribution would require an additional US$35 billion annually. Adding the full costs of universal access would take another US$20 billion. The total investment gap of about US$55 billion a year represents around 3.35 per cent of GDP. This figure does not take into account spending on operations and maintenance.
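The investment-gap arithmetic in that excerpt hangs together. Here's a quick consistency check, a rough sketch using only the figures quoted above (the report rounds its percentages):

```python
# Consistency check on the investment-gap arithmetic quoted above.
current_investment_bn = 8.0    # current annual spending, US$ billions
current_share_of_gdp = 0.0049  # 0.49 percent of GDP

implied_gdp_bn = current_investment_bn / current_share_of_gdp  # roughly US$1.6 trillion

additional_plant_bn = 35.0  # plant, transmission, and distribution
universal_access_bn = 20.0  # full costs of universal access
gap_bn = additional_plant_bn + universal_access_bn  # US$55 billion

print(f"implied regional GDP: ${implied_gdp_bn:,.0f} billion")
print(f"gap as share of GDP: {100 * gap_bn / implied_gdp_bn:.2f}%")  # close to the report's 3.35 per cent
```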
Where is the money to come from? It seems clear that a hearty dose of private sector funds will be needed. Such funds can also bring the virtue of outside oversight and pressure on timelines and contracts. But private funds won't be forthcoming in sufficient quantity until it's clear that governments across Africa have the willingness, the capabilities, and the vision to support moving ahead with these large-scale investments. As one possible source for finance, and also for governments of Africa to show their commitment to expanded electricity production, the report points to the large subsidies often paid across Africa to power-sector utilities, as well as for fuel sources like gasoline. These subsidies disproportionately benefit those with high incomes. Just phasing out these subsidies could raise more than $20 billion per year that could be redirected to supporting generation and distribution of electricity for all.
Power-sector utilities constitute a major fiscal burden for many countries. In 2010, Sub-Saharan Africa’s energy utilities were operating with deficits estimated at 1.4 per cent of regional GDP, some US$11.7 billion. This represented five times the level of publicly financed investment in the energy sector. ... In addition to financing loss-making utilities, many governments subsidize kerosene. According to the International Monetary Fund (IMF), the average subsidy applied to kerosene and other oil-based products amounted to 45 per cent of its market price in 2013, or US$10 billion.
The report acknowledges in a number of places the importance of moving ahead with environmentally-friendly and low-carbon sources of energy where possible, even discussing that Africa might over time be able to "leapfrog" to these alternatives. But the report is also fairly blunt in pointing out that when development finance agencies start imposing rules that limit finance for coal or natural gas, they are imposing a double standard:

It is striking that there has been little debate over whether limiting development finance for fossil fuels, including coal, in the name of cutting greenhouse gas emissions might hamper efforts to achieve universal access to energy for all. Viewed from a Sub-Saharan African perspective, it is difficult to avoid being struck by some marked double standards. Coal-fired generation occupies an important share in the energy mix of countries such as Germany, the United Kingdom and the United States, where it has a far greater share than in most countries of Sub-Saharan Africa. Yet the same countries are able to use their shareholder domination of the World Bank to limit support to Africa. One perverse side-effect is to leave African governments without the finance that might enable them to invest in more efficient coal-fired power plants with lower emissions. ...
Or as Donald Kaberuka, the president of the African Development Bank, puts it: “It is hypocritical for Western governments who have funded their industrialization using fossil fuels, providing their citizens with enough power, to say to African countries, ‘You cannot develop dams, you cannot develop coal, just rely on these very expensive renewables’… To every single African country, from South Africa to the north, the biggest impediment to economic growth is energy, and we don’t have this kind of luxury of making this kind of choice.”