
Tuesday, May 31, 2016

The Economies of Africa: Will Bust Follow Boom?

The economies of sub-Saharan Africa face a big question. Growth of real GDP in the last 15 years has averaged about 5% per year, as compared to 2% per year back in the 1980s and 1990s. But was this rapid growth mainly a matter of high prices for oil and other commodities, combined with high levels of China-driven external investment? If so, then Africa's growth is likely to diminish sharply now that oil and commodity prices have fallen and China's growth has slowed. Or was Africa's rapid growth in the last 15 years built at least in part on sturdier and more lasting foundations? The June 2016 issue of Finance & Development, published by the International Monetary Fund, tackles this topic with a symposium of nine readable articles on "Africa: Growth's Ups and Downs." In addition, the African Economic Outlook 2016, an annual report produced by the African Development Bank, the OECD Development Centre and the United Nations Development Programme, provides an overview of the economic situation in Africa as well as a set of chapters on the theme of "Sustainable Cities and Structural Transformation."

The overall perspective seems to be that while growth rates across the countries of Africa seem certain to slow down, some of the rise in growth will persist--especially if various supportive public policy steps can be enacted. An article by Stephen Radelet in Finance & Development, "Africa's Rise--Interrupted?", provides an overview of this perspective.

In summing up the current situation, Radelet writes:
"At a deeper level, although high commodity prices helped many countries, the development gains of the past two decades—where they occurred— had their roots in more fundamental factors, including improved governance, better policy management, and a new generation of skilled leaders in government and business, which are likely to persist into the future. ... Overall growth is likely to slow in the next few years. But in the long run, the outlook for continued broad development progress is still solid for many countries in the region, especially those that diversify their economies, increase competitiveness, and further strengthen institutions of governance. ... The view that Africa’s surge happened only because of the commodity price boom is too simplistic. It overlooks the acceleration in growth that started in 1995, seven years before commodity prices rose; the impact of commodity prices, which varied widely across countries (and hurt oil importers); and changes in governance, leadership, and policy that were critical catalysts for change."

Here's a graphic showing some of the main changes across Africa in the last couple of decades.



Radelet emphasizes that the countries of Africa are diverse, and economic policies and development patterns across the countries will not be identical. But he offers five overall themes for continued economic progress in Africa with relatively broad applicability.
First up is adroit macroeconomic management. Widening trade deficits are putting pressure on foreign exchange reserves and currencies, tempting policymakers to try to artificially hold exchange rates stable. Parallel exchange rates have begun to emerge in several countries. But since commodity prices are expected to remain low, defending fixed exchange rates is likely to lead to even bigger and more difficult exchange rate adjustments down the line. As difficult as it may be, countries must allow their currencies to depreciate to encourage exports, discourage imports, and maintain reserves. At the same time, budget deficits are widening, and with borrowing options limited, closing the gaps requires difficult choices. ...
Second, countries must move aggressively to diversify their economies away from dependence on commodity exports. Governments must establish more favorable environments for private investment in downstream agricultural processing, manufacturing, and services (such as data entry), which can help expand job creation, accelerate long-term growth, reduce poverty, and minimize vulnerability to price volatility. ... The exact steps will differ by country, but they begin with increasing agricultural productivity, creating more effective extension services, building better farm-to-market roads, ensuring that price and tariff policies do not penalize farmers, and investing in new seed and fertilizer varieties. Investments in power, roads, and water will be critical. As in east Asia, governments should coordinate public infrastructure investment in corridors, parks, and zones near population centers to benefit firms through increased access to electricity, lower transportation costs, and a pool of nearby workers, which can significantly reduce production costs. ... At the same time, the basic costs of doing business remain high in many countries. To help firms compete, governments must lower tariff rates, cut red tape, and eliminate unnecessary regulations that inhibit business growth. Now is the time to slash business costs and help firms compete domestically, regionally, and globally.
Third, Africa’s surge of progress cannot persist without strong education and health systems. The increases in school enrollment and completion rates, especially for girls, are good first steps. But school quality suffers from outdated curricula, inadequate facilities, weak teacher training, insufficient local control, teacher absenteeism, and poor teacher pay. ... Similarly, health systems remain weak, underfunded, and overburdened ...
Fourth, continued long-term progress requires building institutions of good governance and deepening democracy. The transformation during the past two decades away from authoritarian rule is remarkable, but it remains incomplete. Better checks and balances on power through more effective legislative and judicial branches, increased transparency and accountability, and strengthening the voice of the people are what it takes to sustain progress. ...
Finally, the international community has an important role to play. Foreign aid has helped support the surge of progress, and continued assistance will help mitigate the impacts of the current slowdown. Larger and longer-term commitments are required, especially for better-governed countries that have shown a strong commitment to progress. To the extent possible, direct budget support will help ease adjustment difficulties for countries hit hardest by commodity price shocks. In addition, donor financing for infrastructure—preferably as grants or low-interest loans—will help build the foundation for long-term growth and prosperity. Meanwhile, this is not the time for rich countries to turn inward and erect trade barriers. Rather, wealthy nations should encourage further progress and economic diversification by reducing barriers to trade for products from African countries whose economies are least developed.
One possible reaction to a list like that one is "yikes." If countries of Africa need all of those things to go right, then optimism about Africa's economic future begins to look like foolhardiness. But the other possible reaction is that not everything needs to go right all the time for ongoing progress to happen.

The African Economic Outlook 2016 fleshes out many of these themes in more detail, and offers some of its own. One theme the report emphasizes is the centrality of urban areas to the development path in many African countries (citations omitted from the quotation):

The African continent is urbanising fast. The share of urban residents has increased from 14% in 1950 to 40% today. By the mid-2030s, 50% of Africans are expected to become urban dwellers ... However, urbanisation is a necessary but insufficient condition for structural transformation. Many countries that are more than 50% urbanised still have low-income levels. Urbanisation per se does not bring economic growth, though concentrating economic resources in one place can bring benefits. Further, rapid urbanisation does not necessarily correlate with fast economic growth: Gabon has a high annual urbanisation rate at 1 percentage point despite a negative annual economic growth rate of -0.6% between 1980 and 2011.
In addition, the benefits of agglomeration greatly depend on the local context, including the provision of public goods. Public goods possess non-rivalry and non-excludable benefits. Lack of sufficient public goods or their unsustainable provision can impose huge costs on third parties who are not necessarily involved in economic transactions. Congestion, overcrowding, overloaded infrastructure, pressure on ecosystems, higher costs of living, and higher labour and property costs can offset the benefits of concentrating economic resources in one place. These negative externalities tend to increase as cities grow. This is especially true if urban development is haphazard and public investment does not maintain and expand essential infrastructure. Dysfunctional systems, gridlocks, power cuts and insecure water supplies increase business costs, reduce productivity and deter private investment. In OECD countries, cities beyond an estimated 7 million inhabitants tend to generate such diseconomies of agglomeration. Hence, the balance between agglomeration economies and diseconomies may have an important influence on whether city economies continue to grow, stagnate or begin to decline.
The report also comments on what it calls "three-sector" development theory, which is the notion that economies move from being predominantly agricultural, to growth in manufacturing, to growth in services. In the context of African nations, it's not clear how economies with large oil or mineral resources fit into this framework, and in a world economy with rapidly growing robotics capabilities, it's not clear that low-wage manufacturing can work as a development path across Africa similar to the way that it did in, say, many parts of Asia. Here's a quick discussion of sectors of growth across Africa:
An examination of the fastest-growing African countries over the past five years reveals very different sector patterns (Table 1.2). In Nigeria, structural changes seem to be in accordance with traditional three-sector theory, as shares of the primary sector  declined while those of other sectors increased. The share of agriculture also declined in many other countries, but increased in Kenya and Tanzania. The share of extractive industries declined in some countries but increased in others as new production started and boosted growth (oil in Ghana and iron-ore mining in Sierra Leone). The share of manufacturing increased in only a few countries (Niger, Nigeria and Uganda), but remained broadly constant or even declined in many others. In contrast, the construction and service sectors were important drivers of growth in many countries. In short, African countries are achieving growth performance with quite different sectoral patterns. However, the simplistic three-sector theory can be misleading as productivity is not only raised by factor reallocation between sectors, but also through modernisation and reallocation within sectors, as well as via better linkages between sectors. In particular, higher productivity in agriculture can boost food processing and leather processing and manufacturing to the benefit of both sectors.
For me, the ongoing theme in all discussions of Africa's economic future is an oscillation between encouragement over the progress that has occurred and a disheartened recognition of how much remains to be done. For example, the report includes a figure showing that the number of hotel rooms across the countries of sub-Saharan Africa has grown by two-thirds in the last five years.

Hotels are in some ways a proxy for a certain level of business development, mobility between cities, local income levels, and tourism potential, so this rise is promising. On the other side, the total for all of sub-Saharan Africa is roughly 50,000 hotel rooms; for comparison, the city of Las Vegas alone claims to have almost 150,000 hotel/motel rooms.

For those who want more, here are links to the full list of articles about Africa in the June 2016 Finance & Development:

Monday, May 30, 2016

Allocation of Scarce Elevators

In a perfect world, an elevator would always be waiting for me, and it would always take me to my desired floor without stopping along the way. But economics is about scarce resources. What about the problem of scarce elevators?

Jesse Dunietz offers an accessible overview of how such decisions are made in "The Hidden Science of Elevators: How powerful algorithms decide when that elevator car is finally going to come pick you up," in Popular Mechanics (May 24, 2016). For those who want all the details, Gina Barney and Lutfi Al-Sharif have just published the second edition of Elevator Traffic Handbook: Theory and Practice, which with its 400+ pages seems to be the definitive book on this subject (although when I checked, still zero reviews of the book on Amazon). Some of the tome can be sampled here via Google. For example, it notes at the start: 
"The vertical transportation problem can be summarised as the requirement to move a specific number of passengers from their origin floors to their respective destination floors with the minimum time for passenger waiting and travelling, using the minimum number of lifts, core space, and cost, as well as using the smallest amount of energy." 
This problem of allocating elevators is complex in detail: not just the basics like number and size of elevators, the total number of passengers, and the height of the building, but also questions of the usual timing of peak loads of passengers. Moreover, the problem is complex because passengers prefer short wait and travel times, which are costs of time imposed on them, while building owners prefer a smaller cost for elevators, which they pay.  It turns out that many people would rather have a shorter waiting time for an elevator, even if it might mean a longer travel time once inside the elevator. But although the problem of allocating elevators may not have a single best answer, some answers are better than others.  

Of course, in the early days of elevators, they often had an actual human operator. When automated elevators arrived, and up until about a half-century ago, Dunietz explains in Popular Mechanics, many of them operated rather like a bus route: that is, they went up and down between floors on a preset timetable. This meant that passengers just had to wait for the elevator to cycle around to their floor, and the elevator ran even when it was empty. 

In the mid-1960s, the "elevator algorithm" was developed. Dunietz describes it with two rules:
  1. As long as there's someone inside or ahead of the elevator who wants to go in the current direction, keep heading in that direction.
  2. Once the elevator has exhausted the requests in its current direction, switch directions if there's a request in the other direction. Otherwise, stop and wait for a call.
Not only is this algorithm still pretty common for elevators, but it is also used to govern the motion of disk drive heads when facing read and write requests--and the algorithm has its own Wikipedia entry.
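To make the two rules concrete, here is a minimal sketch in Python of how they might be encoded, using made-up floor requests. It only illustrates the rules Dunietz describes, not any real controller's code.

```python
def next_stop(current_floor, direction, requests):
    """Apply the two rules: direction is +1 (up) or -1 (down); requests is a set of floors.

    Returns (next_floor, new_direction), or (None, direction) if there is nothing to do.
    """
    # Rule 1: keep heading the same way while a request lies ahead in that direction.
    ahead = [f for f in requests if (f - current_floor) * direction > 0]
    if ahead:
        return min(ahead, key=lambda f: abs(f - current_floor)), direction
    # Rule 2: once requests ahead are exhausted, reverse if there is a request behind;
    # otherwise stop and wait for a call.
    behind = [f for f in requests if (f - current_floor) * direction < 0]
    if behind:
        return min(behind, key=lambda f: abs(f - current_floor)), -direction
    return None, direction

# Example: a car at floor 5 heading up, with calls at floors 2, 7, and 9.
print(next_stop(5, +1, {2, 7, 9}))   # (7, 1): keep going up to the nearest request ahead
print(next_stop(9, +1, {2}))         # (2, -1): nothing left above, so reverse
```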

However, if you think about how the elevator algorithm works in tall buildings, you realize that it will spend a lot of time in the middle floors, and the waits at the top and the bottom can be extreme. Moreover, if a building has a bunch of elevators all responding to the same signals, all the elevators tend to bunch up near the middle floors, even leapfrogging each other and trying to answer the same signals. So the algorithm was tweaked so that only one elevator would respond to any given signal. Buildings were sometimes divided, so that some elevators only ran to certain groups of floors. Also, when an elevator was not in use, it would automatically return to the lobby (or some other high-departure floor).

By the 1970s, it became possible to encode the rules for allocating elevators into software, which could be tweaked and adjusted. For example, it became possible to use "estimated time of arrival" calculations (for example, here), which figure out which car can respond to a call first. Such algorithms can also take energy use or length of journey or other factors into account.
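As a rough illustration of what such an estimated-time-of-arrival comparison might look like, here is a small Python sketch under simplified assumptions (one second per floor traveled, a fixed ten-second penalty per committed stop along the way). Actual ETA formulas are more elaborate and vendor-specific.

```python
FLOOR_TIME = 1.0    # assumed seconds to travel one floor
STOP_TIME = 10.0    # assumed seconds spent at each committed stop along the way

def eta(car_floor, committed_stops, call_floor):
    """Crude estimate: travel time over the floors plus a penalty for each stop en route."""
    stops_on_way = [s for s in committed_stops
                    if min(car_floor, call_floor) < s < max(car_floor, call_floor)]
    return abs(call_floor - car_floor) * FLOOR_TIME + len(stops_on_way) * STOP_TIME

def assign(cars, call_floor):
    """Send the new hall call to whichever car can get there soonest."""
    return min(cars, key=lambda car: eta(car["floor"], car["stops"], call_floor))

cars = [{"id": "A", "floor": 1, "stops": [3, 5]},
        {"id": "B", "floor": 8, "stops": []}]
print(assign(cars, 6)["id"])   # "B": two empty floors to travel beats five floors plus two stops
```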
Another big step forward in the last decade or so is "destination dispatch," where when you call the elevator, you also tell it which floor you will be going to. The elevator system can then group together people heading for similar floors. An article by Melanie D.G. Kaplan  on ZDNet.com back in 2012 talks about how this kind of system created huge gains for the Marriott Marquis in Times Square in New York City. Before this system, people could wait 20-30 minutes for an elevator to show up. After the system was installed, there can still be some minutes of waiting at peak times, but as one measure, the number of written complaints about elevator delays went from five per week (!) to zero.
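A bare-bones way to picture the grouping step in destination dispatch: collect the destinations entered in the lobby, sort them, and batch riders headed to nearby floors into the same car. The Python sketch below is purely illustrative and ignores timing, direction, and everything else a real system optimizes.

```python
def group_by_destination(requests, car_capacity):
    """Batch lobby requests (rider_id, destination_floor) so riders headed to
    nearby floors share a car, filling each car up to capacity."""
    ordered = sorted(requests, key=lambda r: r[1])   # sort by destination floor
    cars, current = [], []
    for rider in ordered:
        current.append(rider)
        if len(current) == car_capacity:
            cars.append(current)
            current = []
    if current:
        cars.append(current)
    return cars

requests = [("p1", 12), ("p2", 3), ("p3", 11), ("p4", 4), ("p5", 12)]
for i, car in enumerate(group_by_destination(requests, car_capacity=2)):
    print(f"car {i}: {car}")
# car 0: floors 3 and 4; car 1: floors 11 and 12; car 2: the remaining rider for floor 12
```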

The latest thing, as one might expect, is "machine learning"--that is, define for the elevator system what "success" looks like, and then let the elevator system experiment and learn how to allocate elevators not just at a given moment in time, but to remember how elevator traffic evolves from day to day and adjust for that as well. The definition of "success" may vary across buildings: for example, "success" in a system of hospital elevators might mean that urgent health situations get an immediate elevator response, even if waiting time for others is increased. The machine learning approach leads to academic papers like "The implementation of reinforcement learning algorithms on the elevator control system," and ongoing research published in various places like the proceedings of the annual conferences of the International Society of Elevator Engineers, or publications like the IEEE Transactions on Automation Science and Engineering.
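For a sense of what "letting the system experiment and learn" can mean, here is a toy tabular Q-learning sketch for a single dispatch decision. The state and action encoding, reward, and transition are all invented for illustration; the research systems cited above are far more sophisticated.

```python
import random
from collections import defaultdict

# Toy setup (all invented for illustration):
#   state  = (floor of the new hall call, tuple of current car floors)
#   action = index of the car sent to answer the call
#   reward = negative of the answering car's travel distance, a stand-in for wait time
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
N_FLOORS, N_CARS = 10, 3
Q = defaultdict(float)   # Q[(state, action)] -> estimated value

def choose_car(state):
    """Epsilon-greedy choice: mostly pick the best-known car, sometimes explore."""
    if random.random() < EPSILON:
        return random.randrange(N_CARS)
    return max(range(N_CARS), key=lambda a: Q[(state, a)])

def step(state, action):
    """Toy transition: the chosen car travels to the call floor, then a new call arrives."""
    call, cars = state
    reward = -abs(cars[action] - call)
    cars = list(cars)
    cars[action] = call
    return reward, (random.randrange(N_FLOORS), tuple(cars))

def update(state, action, reward, next_state):
    """One-step Q-learning update toward reward plus discounted best next value."""
    best_next = max(Q[(next_state, a)] for a in range(N_CARS))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

state = (random.randrange(N_FLOORS),
         tuple(random.randrange(N_FLOORS) for _ in range(N_CARS)))
for _ in range(10_000):                     # let the dispatcher "experiment and learn"
    action = choose_car(state)
    reward, next_state = step(state, action)
    update(state, action, reward, next_state)
    state = next_state
```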

From an economic point of view, it will be intriguing to see how the machine learning rules evolve. In particular, it will be interesting to see if the machine learning rules that address the various tradeoffs of wait time, travel time, handling peak loads, and energy cost can be formulated in terms of the marginal costs and benefits framework that economists prefer--and whether the rules for elevator traffic find a use in organizing other kinds of traffic, from cars to online data. 

Friday, May 27, 2016

US Corporate Stock: The Transition in Who Owns It

It used to be that most US corporate stock was held by taxable US investors. Now, most corporate stock is owned by a mixture of tax-deferred retirement accounts and foreign investors. Steven M. Rosenthal and Lydia S. Austin describe the transition in "The Dwindling Taxable Share of U.S. Corporate Stock," which appeared in Tax Notes (May 16, 2016, pp. 923-934), and is available here at the website of the ever-useful Tax Policy Center.

The gray area in the figure below shows the share of total US corporate equity owned by taxable accounts. A half-century ago in the late 1960s, more than 80% of all corporate stock was held in taxable accounts; now, it's around 25%. The blue area shows the share of US corporate stock held by retirement plans, which is now about 35% of the total. The area above the blue line at the top of the figure shows the share of US corporate stock owned by foreign investors, which has now risen to 25%.


A few quick thoughts here:

1) These kinds of statistics require doing some analysis and extrapolation from various Federal Reserve data sources. Those who want details on methods should head for the article. But the results here are reasonably consistent with previous analysis.

2) The figures here are all about ownership of US corporate stock; that is, they don't have anything to say about US ownership of foreign stock.

3) One dimension of the shift described here is that the ownership of US stock is shifting from taxable to less-taxable forms. Stock returns accumulate untaxed in retirement accounts until the funds are actually withdrawn and spent, which can happen decades later and (because post-retirement income is lower) at a lower tax rate. Foreigners who own US stock pay very little in US income tax--instead, they are responsible for taxes back in their home country.

4) There is an ongoing dispute about how to tax corporations. Economists are fond of pointing out that a corporation is just an organization, so when it pays taxes the money must come from some actual person, and the usual belief is that it comes from investors in the firm. If this is true, then cutting corporate taxes a half-century ago would have tended to raise the returns for taxable investors. However, cutting corporate taxes now would tend to raise returns for untaxed or lightly-taxed retirement funds and foreign investors. The tradeoffs of raising or lowering corporate taxes have shifted.

Thursday, May 26, 2016

Lessons for the Euro from Early American History

The euro is still a very young currency. When watching the struggles of the European Union over the euro, it's worth remembering that it took the US dollar a long time to become a functional currency. Jeffry Frieden looks at "Lessons for the Euro from Early American Monetary and Financial Experience," in a contribution written for the Bruegel Essay and Lecture Series published May 2016. Frieden's lecture on the paper can be watched here. Here's how Frieden starts:
"Europe’s central goal for several decades has been to create an economic union that can provide monetary and financial stability. This goal is often compared, both by those that aspire to an American-style fully federal system and by those who would like to stop short of that, to the long-standing monetary union of the United States. The United States, after all, created a common monetary policy, and a banking union with harmonised regulatory standards. It backs the monetary and banking union with a series of automatic fiscal stabilisers that help soften the potential problems inherent in inter-regional variation.
Easy celebration of the successful American union ignores the fact that it took an extremely long time to accomplish. In fact, the completion of the American monetary, fiscal, and financial union is relatively recent. Just how recent depends on what one counts as an economic and monetary union, and how one counts. Despite some early stops and starts, the United States did not have an effective national currency until 75 years after the Constitution was adopted, starting with the National Banking Acts of 1863 and 1864. And only after another fifty years did the country have a central bank. Financial regulations have been fragmented since the founding of the Republic; many were federalised in the 1930s, but many remain decentralised. And most of the fiscal federalist mechanisms touted as prerequisites for a successful monetary union date to the 1930s at the earliest, and in some cases to the 1960s. The creation and completion of the American monetary and financial union was a long, laborious and politically conflictual process.
Frieden focuses in particular on some seminal events from the establishment of the US dollar. For example, there's a discussion of "Assumption," the policy under which Alexander Hamilton had "the Federal government recognise the state debts and exchange them for Federal obligations, which would be serviced. This meant that the Federal government would assume the debts of the several states and pay them off at something approaching face value." But after the establishment of a federal market for debt, the US government in the 1840s decided that it would not assume the debt of bankrupt states. A variety of other episodes are put into a broader context. In terms of overall lessons from early US experience for Europe as it seeks to establish the euro, the paper suggests that while Europe has created the euro, existing European institutions are not yet strong enough to sustain it:

One of the problems that Europe has faced in the past decade is the relative weakness of European institutions. Americans and foreigners had little reason to trust the willingness or ability of the new United States government to honour its obligations. Likewise, many in Europe and elsewhere have doubts about the seriousness with which EU and euro-area commitments can be taken. Just as Hamilton and the Americans had to establish the authority and reliability of the central, Federal, government, the leaders of the European Union, and of its member states, have to establish the trustworthiness of the EU’s institutions. And the record of the past ten years points to an apparent inability of the region’s political leaders to arrive at a conclusive resolution of the debt crisis that has bedevilled Europe since 2008. ...
The central authorities – the Federal government in the American case, the institutions of the euro area and the EU in the European case – have to establish their ability to address crucial monetary and financial issues in a way acceptable to all member states. This requires some measure of responsibility for the behaviour of the member states themselves, which the central authority must counter-balance against the moral hazard that it creates.  In the American case, the country dealt with these linked problems over a sixty-year period. Assumption established the seriousness of the central government, but also created moral hazard. The refusal to assume the debts of defaulting states in the 1840s established the credibility of the Federal government’s no-bailout commitment. Europe today faces both of these problems, and the attempt to resolve them simultaneously has so far failed. Proposals to restructure debts are rejected as creating too much moral hazard, but the inability to come up with a serious approach to unsustainable debts has sapped the EU of most of its political credibility. Both aspects of central policy are essential: the central authorities must instil faith in the credibility of their commitments, and do so without creating unacceptable levels of moral hazard.
This is not, of course, to suggest that the European Union should assume the debts of its member states. Europe’s national governments have far greater capacity, and far greater resources, than did the nascent American states. But the lack of credibility of Europe’s central institutions is troubling, and is reminiscent of the poor standing of the new United States before 1789.
The US monetary and financial architecture evolved over decades, but in a country that was somewhat tied together with a powerful origin story--and nevertheless had to fight a Civil War to remain a single country. The European Union monetary and financial organization is evolving, too, but I'm not confident that the pressures of a globalized 21st century economy will give them decades to resolve the political conflicts, build the institutions, and create the credibility that the euro needs if it is to be part of broadly shared economic stability and growth in Europe.

Wednesday, May 25, 2016

Interview with Matthew Gentzkow: Media, Brands, Persuasion

Douglas Clement has another of his thoughtful and revealing interviews with economists, this one with Matthew Gentzkow. It appeared online in The Region, a publication of the Federal Reserve Bank of Minneapolis, on May 23, 2016. For a readable overview of Gentzkow's work, a useful starting point is an essay by Andrei Shleifer titled "Matthew Gentzkow, Winner of the 2014 Clark Medal," published in the Winter 2015 issue of the Journal of Economic Perspectives. The Clark medal, for those not familiar with it, is a prestigious award given each year by the American Economic Association "to that American economist under the age of forty who is judged to have made the most significant contribution to economic thought and knowledge." Here are some answers from Gentzkow in the interview with Clement that caught my eye.

It seems to me that many discussions of politics neglect the entertainment factor. Politics isn't just about 30-page position papers and carefully worded statements. For lots of citizens and voters--and yes, for lots of politicians, too--it's a fun activity for observers and participants. Thus, when you think about how the spread of television (or newer media) affects voting, it's not enough just to talk about how media affect the information available to voters. It also matters if the new media just give the voters an alternative and nonpolitical source of entertainment. Here's a comment from Gentzkow on his research in this area:
I started thinking about this huge, downward trend that we’ve seen since about the middle of the 20th century in voter turnout and political participation. It’s really around the time that TV was introduced that that trend in the time series changes sharply, so I thought TV could have played a role.
Now, a priori, you could easily imagine it going either way. There’s a lot of evidence before and since that in many contexts, giving people more information has a very robust positive effect on political participation and voting. So, if you think of TV as the new source of information, a new technology for delivering political information, you might expect the effect to be positive. And, indeed, many people at the time predicted that this would be a very good thing for political participation.
On the other hand, TV isn’t just political information; it’s also a lot of entertainment. And in that research, I found that what seemed to be true is that the more important effect of TV is to substitute for—crowd out—a lot of other media like newspapers and radio that on net had more political content. Although there was some political content on TV, it was much smaller, and particularly much smaller for local or state level politics, which obviously the national TV networks are not going to cover.
So, we see that when television is introduced, indeed, voter turnout starts to decline. We can use this variation across different places and see that that sharp drop in voter turnout coincides with the timing of when TV came in. The more important effect of TV is to substitute for media that on net had more political content. So, we see that when television is introduced, indeed, voter turnout starts to decline. That drop is especially big in local elections. A lot of new technologies … are pushing people toward paying less attention to local politics, local issues, local communities.
People in different geographic areas show, on average, different consumption patterns. For example, Coke is more popular in some places, and Pepsi in others. Or imagine that someone moves from an area with high average health care spending to one with low average health care spending. Gentzkow and co-authors looked at people who moved from one geographic area to another, and how certain aspects of their consumption changed. Were people's preferences firmly established based on their previous location? Or did their preferences shift when they were in a new location? Here's how Gentzkow describes the differences between shifts in consumption related to brand preferences and shifts related to health care:
Well, imagine watching somebody move, first looking at how their brand preferences change; say they move from a Coke place to a Pepsi place and you see how their soft drink preferences change. Then imagine somebody moving from a place where there’s low spending on health care to a place with high spending, and you see how things change. In what way are those patterns different?
The first thing you can look at is how big the jump is when somebody moves. That’s sort of a direct measure of how important is the stuff you are carrying with you relative to the factors that are specific to the places. How important is your brand capital relative to the prices and the advertising? Or in a health care context, how important are the fixed characteristics of people that are different across places, relative to the doctors, the hospitals and the treatment styles across places. It turns out the jumps are actually very similar. In both cases, you close about half the gap between the place you start and the place you’re going, and so the share due to stuff people carry with them—their preference capital or their individual health—is about the same.
What’s very different and was a huge surprise to me, not what I would have guessed, is that with brands, you see a slow-but-steady convergence after people move; so, movers steadily buy more and more Pepsi the longer they live there. But in a health care context, we don’t see that at all; your health care consumption changes a discrete amount when you first move, but the trend is totally flat thereafter—it doesn’t converge at all.
Gentzkow's results on shifts in health care patterns may have some special applicability to thinking about how people react to finding themselves in a different and lower-spending health care system. Say that the change to this new system wasn't the result of a geographic shift--say, moving from a high-cost metro area where average spending on health care might be triple what it is in a low-cost area--but instead involved a change in policy. These results might imply that the policy reform would bring down health spending in a one-time jump, but then spending for the group that was used to being at a higher level would not continue to fall, as might have been predicted. 

Finally, here's an observation in passing from Gentzkow about social media. Are the new media a source of concern because they are not interactive enough (say, as compared to personal communication) or because they are too interactive and therefore addicting (say, as compared to television)? Here's Gentzkow:
A lot of people are complaining about social media now. But think back to what they were saying back when kids were all watching TV: “It’s this passive thing where kids sit there and zone out, and they’re not thinking, they’re alone, they’re not communicating!” Now, suddenly, a thing that kids are spending lots of their time doing is interacting with other kids. They’re writing text messages and posts and creating pictures and editing them on Instagram. It’s certainly not passive; it’s certainly not solitary. It has its own risks perhaps, but not the risks that worried people about TV. I think there’s a tendency, no matter what the new technology is, to wring our hands about its terrible implications. Kind of amazing how people have turned on a dime from worrying about one thing to worrying about its exact opposite.

Tuesday, May 24, 2016

The Tradeoffs of Parking Spots

Sometimes it seems as if every proposal for a new residential or commercial building in an urban or suburban area is neatly packaged with a dispute over parking. Will the new development provide a minimum number of parking spaces? Will it be harder for those already in the area to find parking? How should the flow of drivers in and out of the parking area be arranged? Of course, all of these questions presume that cars and drivers need and deserve to be placed front and center of development decisions.

Donald Shoup, an urban economist who focuses on parking issues, discusses this focus on parking in "Cutting the Cost of Parking Requirements," an essay in the Spring 2016 issue of Access, a magazine on surface transportation issues produced by a research center run by a number of University of California schools. Shoup starts this way:

At the dawn of the automobile age, suppose Henry Ford and John D. Rockefeller had hired you to devise policies to increase the demand for cars and gasoline. What planning regulations would make a car the obvious choice for most travel? First, segregate land uses (housing here, jobs there, shopping somewhere else) to increase travel demand. Second, limit density at every site to spread the city, further increasing travel demand. Third, require ample off-street parking everywhere, making cars the default way to travel.
American cities have unwisely embraced each of these car-friendly policies, luring people into cars for 87 percent of their daily trips. Zoning ordinances that segregate land uses, limit density, and require lots of parking create drivable cities but prevent walkable neighborhoods. Urban historians often say that cars have changed cities, but planning policies have also changed cities to favor cars over other forms of transportation.
Minimum parking requirements create especially severe problems. In The High Cost of Free Parking, I argued that parking requirements subsidize cars, increase traffic congestion and carbon emissions, pollute the air and water, encourage sprawl, raise housing costs, degrade urban design, reduce walkability, damage the economy, and exclude poor people. To my knowledge, no city planner has argued that parking requirements do not have these harmful effects. Instead, a flood of recent research has shown they do have these effects. We are poisoning our cities with too much parking. ...
Parking requirements reduce the cost of owning a car but raise the cost of everything else. Recently, I estimated that the parking spaces required for shopping centers in Los Angeles increase the cost of building a shopping center by 67 percent if the parking is in an aboveground structure and by 93 percent if the parking is underground.

Developers would provide some parking even if cities did not require it, but parking requirements would be superfluous if they did not increase the parking supply. This increased cost is then passed on to all shoppers. For example, parking requirements raise the price of food at a grocery store for everyone, regardless of how they travel. People who are too poor to own a car pay more for their groceries to ensure that richer people can park free when they drive to the store. ...
A single parking space, however, can cost far more to build than the net worth of many American households. In recent research, I estimated that the average construction cost (excluding land cost) for parking structures in 12 American cities in 2012 was $24,000 per space for aboveground parking, and $34,000 per space for underground parking
Shoup discusses California legislation that seeks to put a cap on minimum parking requirements. You can imagine how welcome this idea is. Another one of Shoup's parking projects is discussed by Helen Fessenden in "Getting Unstuck," which asks "Can smarter pricing provide a way out of clogged highways, packed parking, and overburdened mass transit?" Fessenden's article appears in the Fourth Quarter 2015 issue of Econ Focus, which is published by the Federal Reserve Bank of Richmond. On the subject of parking, she writes:

Economist Don Shoup at the University of California, Los Angeles has spent decades researching the inefficiencies of the parking market — including the high cost of minimum parking requirements — but he is probably best known for his work on street parking. In 2011, San Francisco applied his ideas in a pilot project to set up "performance pricing" zones in its crowded downtown, and similar projects are now underway in numerous other cities — including, later this spring, in D.C. ...
"I had always thought parking was an unusual case because meter prices deviated so much from the market prices," says Shoup. "The government was practically giving away valuable land for free. Why not set the price for on-street parking according to demand, and then use the money for public services?"
Taking a cue from this argument, San Francisco converted its fixed-price system for on-street parking in certain zones into "performance parking," in which rates varied by the time of day according to demand. In its initial run, the project, dubbed SFpark, equipped its meters with sensors and divided the day into three different price periods, with the option to adjust the rate in 25-cent increments, with a maximum price of $6 an hour. The sensors then gathered data on the occupancy rates on each block, which the city analyzed to see whether and how those rates should be adjusted. Its goal was to set prices to achieve target occupancy — in this case, between 60 percent and 80 percent — at all times. There was no formal model to predict pricing; instead, the city adjusted prices every few months in response to the observed occupancy to find the optimal rates.
The results: In the first two years of the project, the time it took to find a spot fell by 43 percent in the pilot areas, compared with a 13 percent fall on the control blocks. Pilot areas also saw less "circling," as vehicle miles traveled dropped by 30 percent, compared with 6 percent on the control blocks. Perhaps most surprising was that the experiment didn't wind up costing drivers more, on net, because demand was more efficiently dispersed. Parking rates went up 31 percent of the time, dropped in another 30 percent of cases, and stayed flat for the remaining 39 percent. The overall average rate actually dropped by 4 percent.
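The feedback rule described above, nudging the hourly meter rate in 25-cent steps toward the target occupancy band, can be sketched in a few lines of Python. The thresholds and caps below follow the description in the article; the actual SFpark adjustment schedule differed in its details.

```python
def adjust_rate(rate, occupancy, step=0.25, low=0.60, high=0.80,
                max_rate=6.00, min_rate=0.25):
    """Nudge the hourly meter rate toward the target occupancy band."""
    if occupancy > high:                 # block too full: raise the price
        return min(rate + step, max_rate)
    if occupancy < low:                  # block too empty: lower the price
        return max(rate - step, min_rate)
    return rate                          # inside the band: leave the rate alone

rate = 2.00
for observed in [0.92, 0.88, 0.75, 0.55, 0.70]:   # occupancy seen in each review period
    rate = adjust_rate(rate, observed)
    print(f"occupancy {observed:.0%} -> rate ${rate:.2f}")
# rates step up to $2.50 while blocks are overfull, then back down to $2.25
```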
A summary of the 2014 evaluation report for the SFpark pilot study is available here.

For many of us, parking spots are just a taken-for-granted part of the scenery. Shoup makes you see parking in a different way. Space is scarce in urban areas, and in many parts of suburban areas, too. Parking uses space. Next time you are circling a block looking for parking, or navigating a city street that is made narrower because cars are parked on both sides, or walking down a sidewalk corridor between buildings on one side and parked cars on the other, or wending your way in and out of a parking ramp, it's worth recognizing the tradeoffs of requiring and underpricing parking spaces.




Monday, May 23, 2016

Telemedicine

The American College of Physicians has officially endorsed "telemedicine," which refers to using technology to connect a health care provider and a patient who aren't in the same place. An official statement of the ACP policy recommendations and a background position paper, written by Hilary Daniel and Lois Snyder Sulmasy, appear in the Annals of Internal Medicine (November 17, 2015, volume 163, number 10). The same issue includes an editorial on "The Hidden Economics of Telemedicine," by David Asch, emphasizing that some of the most important costs and benefits of telemedicine are not about delivering the same care in an alternative way. For starters, here are some comments from the background paper (with footnotes and references omitted for readability):
Telemedicine can be an efficient, cost-effective alternative to traditional health care delivery that increases the patient's overall quality of life and satisfaction with their health care. Data estimates on the growth of telemedicine suggest a considerable increase in use over the next decade, increasing from approximately 350 000 to 7 million by 2018. Research analysis also shows that the global telemedicine market is expected to grow at an annual rate of 18.5% between 2012 and 2018. ... [B]y the end of 2014, an estimated 100 million e-visits across the world will result in as much as $5 billion in savings for the health care system. As many as three quarters of those visits could be from North American patients. ...

Telemedicine has been used for over a decade by Veterans Affairs; in fiscal year 2013, more than 600 000 veterans received nearly 1.8 million episodes of remote care from 150 VHA medical centers and 750 outpatient clinics. ... The VHA's Care Coordination/Home Telehealth program, with the purpose of coordinating care of veteran patients with chronic conditions, grew 1500% over 4 years and saw a 25% reduction in the number of bed days, a 19% reduction in numbers of hospital readmissions, and a patient mean satisfaction score of 86% ... 
The Mayo Clinic telestroke program uses a “hub-and-spoke” system that allows stroke patients to remain in their home communities, considered a “spoke” site, while a team of physicians, neurologists, and health professionals consult from a larger medical center that serves as the “hub” site. A study on this program found that a patient treated in a telestroke network, consisting of 1 hub hospital and 7 spoke hospitals, reduced costs by $1436 and gained 0.02 years of quality-adjusted life-years over a lifetime compared with a patient receiving care at a rural community hospital ... 
The Antenatal and Neonatal Guidelines, Education and Learning System program at the University of Arkansas for Medical Sciences used telemedicine technologies to provide rural women with high-risk pregnancies access to physicians and subspecialists at the University of Arkansas. In addition, the program operated a call center 24 hours a day to answer questions or help coordinate care for these women and created evidence-based guidelines on common issues that arise during high-risk pregnancies. The program is widely considered to be successful and has reduced infant mortality rates in the state. ...
An analysis of cost savings during a telehealth project at the University of Arkansas for Medical Sciences between 1998 and 2002 suggested that 94% of participants would have to travel more than 70 miles for medical care. ...  Beyond the rural setting, telemedicine may aid in facilitating care for underserved patients in both rural and urban settings. Two thirds of the patients who participated in the Extension for Community Healthcare Outcomes program were part of minority groups, suggesting that telemedicine could be beneficial in helping underserved patients connect with subspecialists they would not have had access to before, either through direct connections or training for primary care physicians in their communities, regardless of geographic location.
Most of this seems reasonable enough, except for that pesky estimate in the first paragraph that worldwide savings from telemedicine will amount to about $5 billion. The US health care system alone has average spending of more than $8 billion per day, every day of the year. Thus, this vision of telemedicine is that it will mostly just rearrange existing care--reach out to bring some additional people into the system, help reduce health care expenditures on certain conditions with better follow-up--but not be a truly disruptive force.

In his editorial essay in the same issue, David Asch points out: "If there is something fundamentally different about telemedicine, it is that many of the costs it increases or decreases have been off the books." He offers a number of examples:

"Some patients who would have visited the physician face to face instead have a telemedicine "visit." They potentially gain a lot. There are no travel costs or parking fees. They might have to wait, but presumably they wait at home or at work where they can do something else (like many of us do when placed on hold). There is no waiting at all in asynchronous settings (the photograph of your rash is sent to your dermatologist, but you do not need a response right away). The costs avoided do not appear on the balance sheets of insurance companies or providers ...  However, the costs avoided are meaningful even if they are not counted in official ways. There are the patients who would have forgone care entirely because the alternative was not a face-to-face visit but no visit. There are no neurologists who treat movement disorders in your region. The emergency department in your area could not possibly have a stroke specialist available at all times. ...  We leave patients out when we ask how telemedicine visits compare with face-to-face visits: all of the patients who, without telemedicine, get no visit at all.
Savings for physicians, hospitals, and other providers are potentially enormous. Clinician-patient time in telemedicine is almost certainly shorter, requiring less of the chitchat that is hard to avoid in face-to-face interactions. There is no check-in at the desk. There is no need to devote space to waiting rooms (in some facilities, waiting rooms occupy nearly one half of usable space). No one needs to clean a room; heat it; or, in the long run, build it. That is the real opportunity of telemedicine. ...

On the other hand, payers worry that if they reimburse for telemedicine, then every skin blemish that can be photographed risks turning from something that patients used to ignore into a payable insurance claim. Indeed, it is almost certainly true that if you make it easy to access care by telemedicine, telemedicine will promote too much care. However, the same concern could be reframed this way: An advantage of requiring face-to-face visits is that their inconvenience limits their use. Do we really want to ration care by inconvenience, or do we want to find ways to deliver valuable care as conveniently and inexpensively as possible?
I find myself wondering about ways in which telemedicine will be more disruptive. For example, consider the combination of telemedicine with technologies that enable remote monitoring of blood pressure, or blood sugar, or whether medications are being taken on schedule. Or consider telemedicine not just as a method of communicating with members of the American College of Physicians, but also as a way of communicating with nursing professionals, those who know about providing at-home care, various kinds of physical and mental therapists, along with social workers and others. There will be a wave of jobs in being the "telemedicine gatekeeper" who can answer the first wave of questions that most people ask, and then have access to resources for follow-up concerns. My guess is that these kinds of changes will be considerably more disruptive to traditional medical practice than a worldwide cost savings of $5 billion would seem to imply.

Homage: I ran across a mention of these reports at the always-interesting Marginal Revolution website.

Saturday, May 21, 2016

Rising Tuition Discount Rates at Private Colleges

Colleges and universities announce a certain price for tuition, but based on financial aid calculations, they often charge a lot less. The difference is the "institutional tuition discount rate." The National Association of College and University Business Officers (NACUBO) has just released a report with the average discount rate for 2015-16, based on a survey of 401 private nonprofit colleges (that is, not including branches of state university systems and not including for-profit colleges and universities), along with how that rate has been evolving over time.
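For readers who want the arithmetic spelled out, the discount rate is (roughly, in the NACUBO definition) institutional grant aid as a share of gross tuition and fee revenue. The Python sketch below uses made-up numbers for a hypothetical college.

```python
def discount_rate(gross_tuition_revenue, institutional_grant_aid):
    """Institutional grant aid as a share of gross tuition and fee revenue."""
    return institutional_grant_aid / gross_tuition_revenue

# Hypothetical college: $50 million in gross tuition, $24 million awarded back as grant aid.
print(f"{discount_rate(50_000_000, 24_000_000):.1%}")   # 48.0%
```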




The two lines in the figure imply that the level of financial help a student receives as a freshman, when making a choice between colleges, is going to be more than the financial help received in later years. Beware! More broadly, a strategy of charging ever more to parents who can afford it, while offering ever-larger discounts to those who can't, does not seem like a sustainable long-run approach.

Friday, May 20, 2016

Inequalities of Crime Victimization and Criminal Justice

Many Americans worry about high incarceration rates and a police presence that can be heavy-handed or worse in some communities. Many Americans also are worrying about crime. For example, here's a Gallup poll result from early March:


And law-abiding people in some communities, many of them predominantly low-income and African-American, can end up facing an emotionally crucifying choice. On one side, crime rates in their community are high, which is a terrible and sometimes tragic and fatal burden on everyday life. On the other side, they are watching a large share of their community, mainly men, becoming involved with the criminal justice system through fines, probation, or incarceration. Although those who are convicted of crimes are the ones who officially bear the costs, in fact the costs of paying fines, of lost income, and of visits that require a trip to a correctional facility are also shared with families, mothers, and children. Magnus Lofstrom and Steven Raphael explore these questions of "Crime, the Criminal Justice System, and Socioeconomic Inequality" in the Spring 2016 issue of the Journal of Economic Perspectives.

(Full disclosure: I've worked as the Managing Editor of the Journal of Economic Perspectives for 30 years. All papers appearing in the journal, back to the first issue in Summer 1987, are freely available online, compliments of the American Economic Association.)

It's well-known that rates of violent and property crime have fallen substantially in the US in the last 25 years or so. What is less well-recognized is that the biggest reductions in crime have happened in the often predominantly low-income and African-American communities that were most plagued by crime. Lofstrom and Raphael look at crime rates across cities with lower and higher rates of poverty in 1990 and 2008:
"However, the inequality between cities with the highest and lower poverty rates narrows considerably over this 18-year period. Here we observe a narrowing of both the ratio of crime rates as well as the absolute difference. Expressed as a ratio, the 1990 violent crime rate among the cities in the top poverty decile was 15.8 times the rate for the cities in the lowest poverty decile. By 2008, the ratio falls to 11.9. When expressed in levels, in 1990 the violent crime rate in the cities in the upper decile for poverty rates exceeds the violent crime rate in cities in the lowest decile for poverty rates by 1,860 incidents per 100,000. By 2008, the absolute difference in violent crime rates shrinks to 941 per 100,000. We see comparable narrowing in the differences between poorer and less-poor cities in property crime rates."
As another example, Lofstrom and Raphael refer to a study which broke down crime rates in Pittsburgh across the "tracts" used in compiling the US census. As overall rates of crime fell in Pittsburgh, predominantly African-American areas saw the biggest gains:
"The decline in violent crime in the 20 percent of tracts with the highest proportion black amounts to 54 percent of the overall decline in violent crime citywide. These tracts account for 23 percent of the city’s population, have an average proportion black among tract residents of 0.78 and an average proportion poor of 0.32. Similarly, the decline in violent crime in the poorest quintile of tracts amounts to 60 percent of the citywide decline in violent crime incidents, despite these tracts being home to only 17 percent of the city’s population."
It remains true that one of the common penalties for being poor in the United States is that you are more likely to live in a neighborhood with a much higher crime rate. But as overall rates of crime have fallen, the inequality of greater vulnerability to crime has diminished.

On the other side of the crime-and-punishment ledger, low-income and African-American men are more likely to end up in the criminal justice system. Lofstrom and Raphael give sources and studies for the statistics: "[N]early one-third of black males born in 2001 will serve prison time at some point in their lives. The comparable figure for Hispanic men is 17 percent ...  [F]or African-American men born between 1965 and 1969, 20.5 percent had been to prison by 1999. The comparable figures were 30.2 percent for black men without a college degree and approximately 59 percent for black men without a high school degree."

I'm not someone who sympathizes with or romanticizes those who commit crimes. But economics is about tradeoffs, and imposing costs on those who commit crimes has tradeoffs for the rest of society, too. For example, the cost to taxpayers is on the order of $350 billion per year, which in 2010 broke down as "$113 billion on police, $81 billion on corrections, $76 billion in expenditure by various federal agencies, and $84 billion devoted to combating drug trafficking." The question of whether those costs should be higher or lower, or reallocated between these categories, is a worthy one for economists.

But the costs explicitly imposed by the legal system are only part of the picture. For example, living in a community where it is common to experience or watch as people are regularly stopped and frisked is a cost, too. Lofstrom and Raphael discuss "collateral consequence" studies about how being in the criminal justice system affects employment prospects, health outcomes, and problem behaviors and depression among children of the incarcerated. In addition, many local jurisdictions have dramatically increased their use of fines in the last couple of decades, which can often end up being a high enough fraction of annual income for a low-income worker that they become nearly impossible to pay--then leading to additional fines or more jail time. The US Department of Justice Civil Rights Division report following up on practices in Ferguson, Missouri, noted an "aggressive use of fines and fees imposed for minor crimes, with this revenue accounting for roughly one-fifth of the city’s general fund sources." As Lofstrom and Raphael explain:
"Money is fungible. When fines and fees are imposed as part of a criminal prosecution, at least some of the financial burden will devolve on to the household of the person involved with the criminal justice system. When someone who is involved in the criminal justice system has reduced employment prospects, some of those financial costs will again be borne by others in their household. We have said nothing about the family resources devoted to replenishing inmate commissary accounts, the devotion of household resources to prison phone calls, time devoted to visiting family members, and the other manners by which a family member’s involvement with the criminal justice system may tax a household’s resources. To our knowledge, aggregate data on such costs do not exist."
I wrote a few weeks back about the empirical evidence on "Crime and Incarceration: Correlation, Causation, and Policy" (April 29, 2016). Yes, there is a correlation: incarceration rates have risen in the US as crime has fallen. But a more careful look at the evidence strongly suggests that while the rise in incarceration rates probably did contribute to bringing down crime rates in the 1980s and into the early 1990s, the continuing rise in incarceration rates since then seems to have brought diminishing returns--and at this point, near-zero returns--in reducing crime further.

Lofstrom and Raphael conclude:
"Many of the same low-income predominantly African American communities have disproportionately experienced both the welcome reduction in inequality for crime victims and the less-welcome rise in inequality due to changes in criminal justice sanctioning. While it is tempting to consider whether these two changes in inequality can be weighed and balanced against each other, it seems to us that this temptation should be resisted on both theoretical and practical grounds. On theoretical grounds, the case for reducing inequality of any type is always rooted in claims about fairness and justice. In some situations, several different claims about inequality can be combined into a single scale—for example, when such claims can be monetized or measured in terms of income. But the inequality of the suffering of crime victims is fundamentally different from the inequality of disproportionate criminal justice sanctioning, and cannot be compared on the same scale. In practical terms, while higher rates of incarceration and other criminal justice sanctions may have had some effect in reducing crime back in the 1970s and through the 1980s, there is little evidence to believe that the higher rates have caused the reduction in crime in the last two decades. Thus, it is reasonable to pursue multiple policy goals, both seeking additional reductions in crime and in the continuing inequality of crime victimization and simultaneously seeking to reduce inequality of criminal justice sanctioning. If such policies are carried out sensibly, both kinds of inequality can be reduced without a meaningful tradeoff arising between them." 
While accusations of police brutality are often the flashpoint for public protests over the criminal justice system, my own suspicion is that some of the anger and despair focused on the police arises because they are the visible front line of the criminal justice system. It would be interesting to watch the dynamics if protests of similar intensity were aimed at legislators who pass a cavalcade of seemingly small fines, which when imposed by judges add up to an insuperable burden for low-income families. Or if the protests were aimed at legislators, judges, and parole boards who make decisions about length of incarceration. Or if the protests were aimed at prisons and correctional officers. My own preference for the criminal justice system (for example, here and here) would be to rebalance the nation's criminal justice spending, with more going to police and less being collected in fines, and with the offsetting funding to come from reducing the sky-high levels of US incarceration. The broad idea is to spend more on tamping down the chance that crime will occur or escalate in the first place, while spending less on years of severe punishments after the crime has already happened.

Thursday, May 19, 2016

Ray Fair: The Economy is Tilting Republican

Ray Fair is an eminent macroeconomist, as well as a well-known textbook writer (with Karl Case and Sharon Oster) who dabbles now and again in sports economics. Here I focus on one of Fair's other interests: the connection from macroeconomic conditions to election outcomes, a topic on which he has been publishing an occasional series of papers since 1978. With time and trial-and-error, Fair has developed a formula where anyone can plug in a few key economic statistics and obtain a prediction for the election. A quick overview of the calculations, along with links to some of Fair's recent papers on this subject, is available at Fair's website.

Fair's equation to predict the 2016 presidential election is
VP = 42.39 + 0.667*G - 0.690*P + 0.968*Z

On the left-hand side of the equation, VP is the Democratic share of the presidential vote. Given that a Democrat is in office, a legacy of economic growth should tend to favor the Democratic candidate, while inflation would tend to work against the Democrat. On the right-hand side, G is the growth rate of real per capita GDP in the first three quarters of the election year (at an annual rate); P is the growth rate of the GDP deflator (a measure of inflation based on everything in the GDP, rather than just on consumer spending as in the better-known Consumer Price Index); and Z is the number of quarters in the first 15 quarters of the second Obama administration in which the growth rate of real per capita GDP is greater than 3.2 percent at an annual rate.

Obviously, some of these variables aren't yet known, because the first three quarters of 2016 haven't happened yet. But here are Fair's estimates of the variables as of late April: G=0.87; P=1.28; Z=3. Plug those numbers into the formula, and the prediction is that the Democratic share of the two-party presidential vote in 2016 will be 44.99%.

Fair offers a similar equation to predict the 2016 House elections. The formula is

VC = 44.09 + 0.372*G - 0.385*P + 0.540*Z

where VC is the Democratic share of the two-party vote in Congressional elections. Plugging in the values for G, P and Z, the prediction is 45.54% of the House vote for Democrats.
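For readers who want to check the arithmetic, here is a minimal sketch in Python that plugs Fair's late-April estimates into both equations. The function and variable names are my own shorthand for this illustration, not Fair's notation.

```python
# Minimal sketch: plug Fair's late-April 2016 estimates into his two equations.
# Function and variable names are my own shorthand, not Fair's notation.

def fair_vote_share(intercept, g_coef, p_coef, z_coef, G, P, Z):
    """Predicted Democratic share of the two-party vote, in percent."""
    return intercept + g_coef * G + p_coef * P + z_coef * Z

G, P, Z = 0.87, 1.28, 3  # growth, inflation, and count of strong-growth quarters

vp = fair_vote_share(42.39, 0.667, -0.690, 0.968, G, P, Z)  # presidential equation
vc = fair_vote_share(44.09, 0.372, -0.385, 0.540, G, P, Z)  # House equation

print(f"Presidential prediction: {vp:.2f}%")  # roughly 44.99
print(f"House prediction: {vc:.2f}%")         # roughly 45.54
```

The point of the sketch is only that the predictions are a mechanical calculation once the coefficients and the economic data are in hand; the hard work lies in estimating the coefficients from the historical record.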

Of course, these formulas raise a number of questions. Where do these coefficients come from? Why use these particular measures of economic growth rather than, say, the unemployment rate? Why measure inflation with the GDP deflator rather than with the Consumer Price Index?

The short answer to all these questions is that Fair's equations are chosen so that, looking back at historical election data from 1916 through 2014, the equation is both fairly simple and does a pretty good job of predicting the elections over time with the smallest possible error. The long answer to why these specific variables were chosen and how the equation is estimated is that you need to read the research papers at Fair's website.

Is there reason to believe that a correlation between the macroeconomy and election outcomes that has existed during the last century or so of national elections will also hold true in 2016? Of course, Fair isn't making any claim that the macroeconomy fully determines election outcomes. Every election has lots of idiosyncratic factors related to the particular candidates and the events of the time. Correlations are just a way of describing or summarizing earlier patterns in the data. Fair's equation tells how macroeconomic factors have been correlated with election outcomes, based on the past historical record, but it doesn't have anything to say about all the other factors in a national election. For example, the predictions of the equation for the Democratic vote were well below the actual outcome in 1992, when Bill Clinton was elected, and also in 2004, when George W. Bush was re-elected. On the other side, predictions from the equation of the Democratic share of the vote were too high in 1984 and 1988, when Ronald Reagan was re-elected and then George Bush was elected.

At the most basic level, Fair's equation is just saying that a slow rate of economic growth during 2016, along with the fact that there haven't been many rapid quarters of economic growth during the Obama presidency, will tend to make it harder for Democrats to win in 2016. But correlation doesn't prove causation, as Fair knows as well as anyone and better than most, and he would be the last one to overstate how much weight to give to these kinds of formulas. Back in 1996, Fair provided a nontechnical overview of this work in "Econometrics and Presidential Elections," appearing in the Journal of Economic Perspectives (where I work as Managing Editor). He wrote there: 
"The main interest in this work from a social science perspective is how economic events affect the behavior of voters. But this work is also of interest from the perspective of learning (and teaching) econometrics. The subject matter is interesting; the voting equation is easy to understand; all the data can be put into a small table; and the econometrics offers many potential practical problems. ... Thus, this paper is aimed in part at students taking econometrics, with the hope that it may serve as an interesting example of how econometrics can be used (or misused?). Finally, this work is of interest to the news media, which every fourth year becomes fixated on the presidential election. Although I spend about one week every four years updating the voting equation, some in the media erroneously think that I am a political pundit—or at least they have a misleading view of how I spend most of my days."

Wednesday, May 18, 2016

What Was Different About Housing This Time?

Everyone knows that the Great Recession was tangled up with a housing boom that went bust. But more precisely, what was different about housing in the most recent business cycle? Burcu Eyigungor discusses "Housing’s Role in the Slow Recovery" in the Spring 2016 issue of Economic Insights, published by the Federal Reserve Bank of Philadelphia.

As a starting point, here's private residential fixed investment--basically, spending on home and apartment construction and major renovations--as a share of GDP going back to 1947. Notice that this category of investment falls during every recession (shown by the shaded areas) and then usually starts bouncing back just before the end of the recession--except for the period after 2009.
The most recent residential building cycle looks different. Eyigungor explains:
The housing boom from 1991 to 2005 was the longest uninterrupted expansion of home construction as a share of overall economic output since 1947 (Figure 1). During the 1991 recession, private home construction had constituted 3.5 percent of GDP, and it increased its share of GDP without any major interruptions to 6.7 percent in 2005. This share was the highest it had been since the 1950s. Just like the boom, the bust that followed was also different from earlier episodes. During the bust, private residential investment as a share of GDP fell to levels not seen since 1947 and has stayed low even after the end of the recession in 2009. In previous recessions, the decline in residential construction was not only much less severe, but the recovery in housing also led the recovery in GDP. As Federal Reserve Chair Janet Yellen has pointed out, in the first three years of this past recovery, homebuilding contributed almost zero to GDP growth.
There are two possible categories of reasons for the very low level of residential building since 2009. On the supply side, it may not seem profitable to build, given the stock of housing already built before 2008 and the lower prices since then. On the demand side, one aftermath of the Great Recession could plausibly be that at least some people are feeling economically shaky and mistrustful of real estate markets, and so are not eager to buy.

Both supply and demand presumably played some role. But housing prices have now been rising again for about three years, and the "vacancy" rates for owner-occupied housing and rental housing are back to the levels from before the Great Recession. In that sense, it doesn't look as if an overhang of empty dwellings or especially low prices is the big problem for the housing market. Instead, Eyigungor argues that the demand side is what is holding the housing market back.

In particular, the demand for housing is tied up with the rate of "household formation"--that is, the number of people who are starting new households. The level of household formation was low for years after 2009 (and remember that these low levels are in the context of a larger population than several decades ago, so the rate of household formation would be lower still).
The rates of homeownership have now declined back to levels from the 1980s, and the share of renters has risen. "This decline has lowered overall housing expenditures, because homeowners on average spend more on housing than renters do because of the tax incentives of homeownership and holding a mortgage. Together, the declines in household formation and homeownership contributed to the decline in residential expenditures as a share of GDP."
Spending on housing usually helps lead the US economy out of recession, but not this time. The demand from new household formation hasn't been there. As I've pointed out in the past, both the Clinton administration with its National Homeownership Strategy and the Bush administration with its "ownership society" did a lot of bragging about the rise in homeownership rates from the mid-1990s up through about 2007. The gains to homeownership from those strategies have turned out to be evanescent, while some of the costs associated with those strategies have been all too real.

Tuesday, May 17, 2016

Mayfly Years: Thoughts on a Five-Year Blogging Anniversary

The first post on this blog went up five years ago, on May 17, 2011: the first three posts are here, here, and here. When it comes to wedding anniversaries, the good folks at Hallmark inform me that a five-year anniversary is traditionally "wood." But I suspect that blogs age faster than marriages. How much faster? There's an old and probably unreliable saying that a human year is seven dog-years. But when it comes to blogging, mayfly-years may be a more appropriate metric. The mayfly typically lives for only one day, maybe two. I've put up over 1,300 posts in the last five years, probably averaging roughly 1,000 words in length. Dynasties of mayflies have risen and fallen during the life of this blog.

Writing a blog 4-5 times each week teaches you some things about yourself.  I've always been fascinated by the old-time newspaper columnists who churned out 5-6 columns every week, and I wondered if I could do that. In the last five years, I've shown myself that I could.  The discipline of writing the blog has been good for me, pushing me to track down and read reports and articles that might otherwise have just flashed across my personal radar screen before disappearing. I've used the blog as a memory aid, so that when I dimly recall having seen a cool graph or read a good report on some subject, I can find it again by searching the blog--which is a lot easier than it used to be to search my office, or my hard drive, or my brain. My job and work-life bring me into contact with all sorts of interesting material that might be of interest to others, and it feels like a useful civic or perhaps even spiritual discipline to shoulder the task of passing such things along.

It's also true that writing a regular blog embodies some less attractive traits: a compulsive need to broadcast one's views; an obsession about not letting a few days or a week go by without posting; an egoistic belief that anyone else should care; a need for attention; and a desire to shirk other work. Ah, well. Whenever I learn more about myself, the lesson includes a dose of humility.

The hardest tradeoff in writing this blog is finding windows of time in the interstices of my other work and life commitments, and the related concern that by living in mayfly years, I'm not spending that time taking a deeper dive into thinking and writing that would turn into essays or books.

In a book published last year, Merton and Waugh: A Monk, A Crusty Old Man, and The Seven Storey Mountain, Mary Frances Coady describes the correspondence between Thomas Merton and Evelyn Waugh in the late 1940s and early 1950s. Merton was a Trappist monk who was writing his autobiographical book The Seven Storey Mountain. (Famous opening sentence: "On the last day of January 1915, under the sign of the Water Bearer, in a year of a great war, and down in the shadow of some French mountains on the borders of Spain, I came into the world.") Waugh was already well-known, having published Brideshead Revisited a few years earlier. Merton's publisher sent the manuscript to Waugh for evaluation, and Waugh both offered Merton some comments and also ended up as the editor of the English edition.

Waugh sent Merton a copy of a book called The Reader Over Your Shoulder, by Robert Graves and Alan Hodge, one of those lovely short quirky books of advice to writers that I think is now out of print. Here's a snippet from one of the early letters from Waugh to Merton:
With regard to style, it is of course much more laborious to write briefly. Americans, I am sure you will agree, tend to be very long-winded in conversation and your method is conversational. I relish the laconic. ... I fiddle away rewriting any sentence six times mostly out of vanity. I don't want anything to appear with my name that is not the best I am capable of. You have clearly adopted the opposite opinion ... banging away at your typewriter on whatever turns up. ...
But you say that one of the motives of your work is to raise money for your house. Well simply as a matter of prudence you are not going the best way about it. In the mere economics of the thing, a better return for labour results in making a few things really well than in making a great number carelessly. You are plainly undertaking far too many trivial tasks for small returns. ...
Your superiors, you say, leave you to your own judgment in your literary work. Why not seek to perfect it and leave mass-production alone? Never send off any piece of writing the moment it is finished. Put it aside. Take on something else. Go back to it a month later and re-read it. Examine each sentence and ask "Does this say precisely what I mean? Is it capable of misunderstanding? Have I used a cliche where I could have invented a new and therefore arresting and memorable form? Have I repeated myself and wobbled around the point when I could have fixed the whole thing in six rightly chosen words? Am I using words in their basic meaning or in a loose plebeian way?" ... The English language is incomparably rich and can convey every thought accurately and elegantly. The better the writing the less abstruse it is. ... Alas, all this is painfully didactic--but you did ask for advice--there it is.
In all seriousness, this kind of advice makes my heart hurt in my chest. Take the extra time to write briefly? Rewrite sentences six times? Put things away for a month and return to them? Bang away at the keyboard on whatever turns up? Far too many trivial tasks for small returns? Wobble around the point instead of hunting for six well-chosen words? Many of these blog posts are knocked out an hour before bedtime, and I often don't reread even once before clicking on "Publish."

Here are some snippets of Merton's response to Waugh:

I cannot tell you how truly happy I am with your letter and the book you sent. In case you think I am exaggerating I can assure you that in a contemplative monastery where people are supposed to see things clearly it sometimes becomes very difficult to see anything straight. It is so terribly easy to get yourself into some kind of a rut in which you distort every issue with your own blind bad habits--for instance rushing to finish a chapter before the bell rings and you will have to go and do something else.
It has been quite humiliating for me to find out (from Graves and Hodge) that my own bad habits are the same as those of every other second-rate writer outside the monastery. The same haste, distraction, etc. .... On the whole I think my haste is just as immoral as anyone else's and comes from the same selfish desire to get quick results with a small amount of effort. In the end, the whole question is largely an ascetic one! .....
Really I like The Reader Over Your Shoulder very much. In the first place it is amusing. And I like their thesis that we are heading toward a clean, clear kind of prose. Really everything in my nature--and in my vocation, too--demands something like that if I am to go on writing. ... You would be shocked to know how much material and spiritual junk can accumulate in the corners of a monastery and in the minds of the monks. You ought to see the pigsty in which I am writing this letter. There are two big crates of some unidentified printed work the monastery wants to sell. About a thousand odd copies of ancient magazines that ought to have been sent to the Little Sisters of the Poor, a dozen atrocious looking armchairs and piano stools that are used in the sanctuary for Pontifical Masses and stored on the back of my neck the rest of the time. Finally I am myself embedded in a small skyscraper of mixed books and magazines in which all kinds of surreal stuff is sitting on top of theology. ...
I shall try to keep out of useless small projects that do nothing but cause a distraction and dilute the quality of what I turn out. The big trouble is that in those two hours a day when I get at a typewriter I am always having to do odd jobs and errands and I am getting a lot of letters from strangers, too. These I hope to take care of with a printed slip telling them politely to lay off the poor monk, let the guy pray. 
I find myself oddly comforted by the thought that a monastery may be just as cluttered, physically and metaphysically, as an academic office. But I'm not sure what ultimate lessons to take away from these five-year anniversary thoughts. I don't plan to give up the blog, but it would probably be a good idea if I can find the discipline to shift along the quality-quantity tradeoff. Maybe trend toward 3-4 posts per week, instead of 4-5. Look for opportunities to write shorter, rather than longer. Avoid the trivial. Try to free up some time and see what I might be able to accomplish on some alternative writing projects. I know, I know, it's like I'm making New Year's resolutions in May.  But every now and again, it seems appropriate to share some thoughts about this blogging experience.  Tomorrow the blog will return to its regularly scheduled economics programming.

Homage: I ran into part of the Waugh quotation from above in the "Notable & Quotable" feature of the Wall Street Journal on May 3, 2016, which encouraged me to track down the book.

Monday, May 16, 2016

Tradeoffs of Cultured Meat Production

A major technological innovation may be arriving in a very old industry: the production of meat. Instead of producing meat by growing animals, the meat can be grown directly. The process has been happening in laboratories, but some are looking ahead to large-scale production of meat in "carneries."

This technology has a number of implications, but here, I'll focus on some recent research on how a shift away from conventionally produced meat to cultured or in vitro meat production could help the environment. Carolyn S. Mattick, Amy E. Landis, Braden R. Allenby, and Nicholas J. Genovese tackle this question in "Anticipatory Life Cycle Analysis of In Vitro Biomass Cultivation for Cultured Meat Production in the United States," published last September in Environmental Science & Technology (2015, v. 49, pp. 11941−11949). One of the implications of their work is that factory-style cultured meat production may offer real environmental gains relative to beef, but perhaps not relative to other meats.

Another complication is that not all production of vegetables has a lower environmental impact than, say, poultry or fresh fish. Michelle S. Tom, Paul S. Fischbeck, and Chris T. Hendrickson provide some evidence on this point in their paper, "Energy use, blue water footprint, and greenhouse gas emissions for current food consumption patterns and dietary recommendations in the US," published in Environment Systems and Decisions in March 2016 (36:1, pp. 92-103).

As background, one of the first examples of the new meat production technology happened back in 2001, when a team led by bioengineer Morris Benjaminson cut small chunks of muscle from goldfish, and then immersed the chunks in a liquid extracted from the blood of unborn calves that scientists use for growing cells in the lab. The New Scientist described the results this way in 2002:
"After a week in the vat, the fish chunks had grown by 14 per cent, Benjaminson and his team found. To get some idea whether the new muscle tissue would make acceptable food, they washed it and gave it a quick dip in olive oil flavoured with lemon, garlic and pepper. Then they fried it and showed it to colleagues from other departments. "We wanted to make sure it'd pass for something you could buy in the supermarket," he says. The results look promising, on the surface at least. "They said it looked like fish and smelled like fish, but they didn't go as far as tasting it," says Benjaminson. They weren't allowed to in any case--Benjamison will first have to get approval from the US Food and Drug Administration."

The first hamburger grown in a laboratory was served in London in 2013. As an article from Issues in Science and Technology reported at the time: "From an economic perspective, cultured meat is still an experimental technology. The first in vitro burger reportedly cost about $335,000 to produce and was made possible by financial support from Google cofounder Sergey Brin." But the price is coming down: a Silicon Valley start-up is now making meatballs from cultured meat at $18,000 per pound.

Mattick, Landis, Allenby, and Genovese provide an evaluation of environmental effects over the full life-cycle of production: for example, this means including the environmental effects of the agricultural products used to feed livestock. They compare existing studies of the environmental effects of traditional production of beef, pork, and poultry with a 2011 study of the environmental effects of in vitro meat production and with their own study. (The 2011 study of in vitro meat production is "Environmental Impacts of Cultured Meat Production," by Hanna L. Tuomisto and M. Joost Teixeira de Mattos, appearing in Environmental Science & Technology, 2011, 45, pp. 6117–6123.) They summarize the results of their analysis along four dimensions: industrial energy use, global warming potential, eutrophication potential (that is, addition of chemical nutrients like nitrogen and phosphorus to the ecosystem), and land use.

Here's the summary of industrial energy use, which they view as definitely higher for in vitro meat than for pork and poultry, and likely higher than for beef as well. They explain:
"These energy dynamics may be better understood through the analogy of the Industrial Revolution: Just as automobiles and tractors burning fossil fuels replaced the external work done by horses eating hay, in vitro biomass cultivation may similarly substitute industrial processes for the internal, biological work done by animal physiologies. That is, meat production in animals is made possible by internal biological functions (temperature regulation, digestion, oxygenation, nutrient distribution, disease prevention, etc.) fueled by agricultural energy inputs (feed). Producing meat in a bioreactor could mean that these same functions will be performed at the expense of industrial energy, rather than biotic energy. As such, in vitro biomass cultivation could be viewed as a renewed wave of industrialization." 
With regard to global warming potential, in vitro production of meat is estimated to be lower than for beef, but higher than for poultry and pork.


The other two dimensions are eutrophication and land use. Eutrophication basically involves effects of fertilizer use, which for traditional meat production involves both agricultural production and disposal of animal waste products. The environmental effects of in vitro meat production are quite low here, as is the effect of in vitro meat production on land use.

Of course, these estimates are hypothetical. No factory-scale production of cultured meat exists yet. But if the "carnery" does become a new industry in the next decade or so, these kinds of tradeoffs will be part of the picture.

As I noted above, it jumps out from these figures that traditional production of beef has a much more substantial environmental footprint than production of poultry or pork. In their paper, Tom, Fischbeck, and Hendrickson take on a slightly different question: what is the environmental impact of some alternative diet scenarios: specifically, fewer calories with the same mixture of foods, or the same calories with an alternative mixture of foods recommended by the US Department of Agriculture, or both lower calories and the alternative diet. The USDA-recommended diet involves less sugar, fat, and meat, and more fruits, vegetables, and dairy. But counterintuitively (at least for me), they find that the reduced-calorie, altered-diet choice has larger environmental effects than the current dietary choices. They write:
However, when considering both Caloric reduction and a dietary shift to the USDA recommended food mix, average energy use increases 38 %, average blue water footprint increases 10 %, and average GHG [greenhouse gas] emissions increase 6%.
Why does a shift away from meat and toward fruits and vegetables create larger environmental effects? The authors do a detailed breakdown of the environmental costs of various foods along their three dimensions of energy use, blue water footprint, and greenhouse gas emissions. Here's an overall chart. An overall message is that while meat (excluding poultry) is at the top on greenhouse gas emissions, when it comes to energy use and blue water footprint, meat is lower than fruit and vegetables.

As the authors write: "[T]his study’s results demonstrate how the environmental benefits of reduced meat consumption may be offset by increased consumption of other relatively high impact foods, thereby challenging the notion that reducing meat consumption automatically reduces the environmental footprints of one’s diet. As our results show food consumption behaviors are more complex, and the outcomes more nuanced." For a close-up illustration of the theme, here's a chart from Peter Whoriskey at the Washington Post Wonkblog, created based on supplementary materials from the Tom, Fischbeck and Hendrickson paper. A striking finding is that on the dimension of greenhouse gas emissions, beef is similar to lettuce. The greenhouse gas emissions associated with production of poultry are considerably lower than for yogurt, mushrooms, or bell peppers.
Again, the environmental costs of beef in particular are high. If cultured meat could replace production of beef in a substantial way, it might bring overall environmental gains. But making defensible statements about diet and the environment seems to require some nuance. Lumping beef, pork, poultry, shellfish, and other fish all into one category called "meat" covers up some big differences, as does lumping all fruits into a single category or all vegetables into a single category.

Addendum: This post obviously focuses on environmental tradeoffs, not the economic tradeoffs that cultured meat would pose for farmers or the animal welfare tradeoffs for livestock. Jacy Reese writes about "The Moral Demand for Cultured Meat" from an animal welfare perspective in Salon, February 13, 2016.