Saturday, October 11, 2014

More on the Origins of the Free Rider Idea

About a month ago, I posted on "How the Free Rider Idea Evolved," with an emphasis on how the "free rider" terminology was used about financial markets and labor union organizing in the 1940s and 1950s, before the term seeped over into its modern economic usage by way of James Buchanan and Mancur Olson. For those who enjoy tracking terms of art back to their burrows, here's some follow-up that varies from the pedestrian to the intriguing to the wonderful--but probably not true.

The earliest use of the "free rider" term seems to be straightforward, even boring. The Oxford English Dictionary offers this definition:
orig. U.S. Originally: a person who rides a train, bus, etc., without having paid for it (when others have). Now chiefly: a person who, or organization which, benefits (or seeks to benefit) in some way from the effort of others, without making a similar contribution.
The OED offers an example dating back to 1859 about a count of rail passengers, "not including commuters and free riders." Of course, someone who doesn't pay their fare on a mass transit system is a good example even for the modern classroom of a free rider.

Reader Charles Clarke sent me an intriguing example of "free ride" terminology in the economics literature from back in the 1920s. Specifically, John Maurice Clark wrote this in his 1926 book, The Social Control of Business (University of Chicago Press, pp. 110-111):

A person who does not have a job or any other source of income, and who does not know where to get one and how to go about canvassing the market effectively, does not possess the substance of liberty. That person is in a position to be exploited and to be forced to make contracts which are essentially made under duress. In addition to this equipment of knowledge, a person needs some reserve funds in order to be able to hold off from the market and see if the second or tenth or twentieth bargain that offers will not be better than the first. When pockets are empty this search may mean real privation. Often one of the chief obstacles to a real canvass of the market consists of the costs of transportation, in which case "liberty and the pursuit of happiness" may require a free ride on the railroad. If this is not forthcoming from public funds, the employer's private interest may be strong enough to furnish it. But when the employer foots the bill, his interest in the case is likely to end when he gets enough labor, without regard to what happens to the laborers after he is through with them. For example, in this country there are various ways of getting harvest hands into the fields without requiring them to pay their railroad fares, but there is no system for getting them back again after the harvest is in.
Beyond a sort of OCD compulsiveness about noting a place where the "free ride" terminology is employed in the economics literature, this reference is conceptually intriguing. In one way, it's just a reiteration of the already-established usage referring to those who ride trains without paying. But in another way, it focuses on the modern issue of addressing search costs for those seeking a new job. In my reading, it also offers just a hint that an industry like a railroad, with high fixed costs and low marginal costs, may sometimes charge more than is socially desirable in an attempt to cover its fixed costs, when something closer to marginal cost pricing might offer social benefits.

One final source of the "free rider" image may be an example of the "too good to check" phenomenon. In his Intermediate Microeconomics textbook (Scott, Foresman and Company, 1990 edition, p. 572), Heinz Kohler wrote:
This unwillingness of individuals voluntarily to help cover the cost of a pure public good, and their eagerness to let others produce the good so they can enjoy its benefits at a zero cost, is called the free-rider problem. The name has its origin in the Old West, in the days of cattle rustling. The ranchers of Dodge City banded together to form a vigilante group to catch (and hang) cattle thieves. Everyone contributed to the cost of the security force on horseback--that is, until rustling had been sufficiently discouraged by the existence of this group. Then individual ranchers began to withdraw, realizing that they could benefit as much if they didn't pay. They became "free-riders" instead. Before long, the security force collapsed, and cattle rustling resumed.
This story has a comforting concreteness, and certainly sounds as if it's referring to a real event. There are of course examples in the western United States of voluntary groups formed to fight cattle rustlers, with more or less success. It's a nice intuitive story of what the broader "free rider problem" means. But at least with a cursory search (the Oxford English Dictionary and some messing around with Google), I've not found any evidence that the actual term "free rider" originated in this context. Maybe some historian of the Old West can pass along a citation?
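Whatever its historical accuracy, the vigilante story maps neatly onto the standard public-good payoff logic. Here's a minimal sketch--all numbers hypothetical, chosen only for illustration--showing why each rancher does better by withdrawing no matter what the others do, even though everyone is worse off when all withdraw:

```python
# Hypothetical public-good game: N ranchers each choose whether to pay
# toward the security force (cost c) or free ride. The deterrence benefit
# is shared by all ranchers and grows with the number of contributors.
N = 10   # number of ranchers (hypothetical)
c = 100  # individual cost of contributing
b = 40   # per-rancher benefit generated by each contribution

def payoff(contributes: bool, others_contributing: int) -> int:
    """A rancher's net payoff given how many OTHERS contribute."""
    total = others_contributing + (1 if contributes else 0)
    return b * total - (c if contributes else 0)

# Because the private cost c exceeds the private benefit b of one's own
# contribution, free riding dominates regardless of what others do ...
for k in range(N):
    assert payoff(False, k) > payoff(True, k)

# ... yet universal contribution beats universal free riding:
print(payoff(True, N - 1))   # everyone pays: 40*10 - 100 = 300 each
print(payoff(False, 0))      # everyone free rides: 0 each
```

The security force collapses for exactly the reason Kohler describes: each rancher's withdrawal is individually rational, and the collectively good outcome unravels.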



Friday, October 10, 2014

Will U.S. Workers Start Going Abroad?

A few decades ago, there was little reason for American workers to think seriously about looking for jobs in another country. Sure, a stint abroad might add some broadening experience, or offer a chance to earn money and do some tourism at the same time. But looking ahead at the next few decades, an ever-growing share of the economic action and opportunity is going to be outside U.S. borders. A study published by the Boston Consulting Group and The Network on "Decoding Global Talent: 200,000 Survey Responses on Global Mobility and Employment Preference" offers some hints.

It's worth remembering that this survey is a not-designed-to-be-representative Internet survey, and so the results should be interpreted with caution. But the survey did receive 200,000 responses, mostly from people who are using on-line job boards in the 132 countries where The Network has a presence. For me, that's enough of a sample to make the conclusions interesting.

At present, the U.S. economy is still the single most preferred destination for the rest of the world. U.S. workers are still among the least likely to consider working abroad or to already be working abroad: "Among everyone in the world, people in the U.S. are the least enthusiastic about moving abroad for work. Only about 35 percent of Americans say they would consider such a move, compared to 64 percent of people worldwide ..." U.S. workers are still more likely than their counterparts in the rest of the world to say that working abroad is about factors like personal experience, culture, and challenge, while being less likely to say that working abroad would be about a higher standard of living, bigger salary, or career opportunity.  Here's a figure showing those already working abroad, or with an expressed willingness to do so, by country.



The report notes: "On the other hand, people in the U.S., Germany, and the UK—three economies that have rebounded more convincingly—aren’t nearly as willing to go abroad for work. Barely a third of U.S. respondents say they’d consider the idea, and only about 44 percent of those in the UK and Germany say they would be interested in taking a job in another country. The reasons for the lower numbers differ, but many people in these countries say economic stability and the comfort of home keep them from considering a job abroad." 

But what about younger workers? " In most countries, young people are more mobile than their older compatriots. One of the biggest differentials is in the U.S. At 59 percent, Americans 21 to 30 are far more willing than Americans in general to consider opportunities abroad, possibly because of the difficulty many of them have had in getting their careers started in the wake of the financial crisis. Partly in reaction to this, many educated young Americans now consider nontraditional starts to their careers, for instance, through temporary overseas assignments with nonprofits like Teach for All." 

The horizontal axis shows the willingness to work abroad for those in the 21-30 age bracket. US workers in this category are still near the lower end of the scale. But the vertical axis shows how those in the 21-30 age bracket compare with the national average. For most countries, many of which are already more integrated into the idea of an international economy than the United States, young workers have more-or-less the same willingness to work abroad. But for the United States, along with the United Kingdom, Canada, and Sweden, younger workers are expressing much greater willingness to work abroad than older workers.



People from countries all around the world have already become used to the idea that careers will often move across countries. This survey is at least a bit of evidence that young U.S. workers are headed that way, too. The report offers this thought: 
"In other words, there is likely to be a much freer flow of talent in the workplace of the future. If they want to be part of it, individuals may not have much choice but to spend parts of their careers in places that aren’t home. To do anything else could be career stifling. “To me, it’s a must,” says Harald Legros, a 39-year-old Frenchman who has worked in Singapore, Hong Kong, and the UK and is now back in his native country, living in ­Bordeaux and running an international trade business. “You have to be able to move to the locations where there might be jobs or new business opportunities. If you just say, ‘No, no, I’ll stay in my country forever,’ that might be complicated because in this day and age the world is pretty much open.”"

Acknowledgement: I ran across this report at the Real Time Economics blog run by the Wall Street Journal here.


Thursday, October 9, 2014

The Light Bulb Cartel and Planned Obsolescence

The old 1951 movie "The Man in the White Suit," starring Alec Guinness, is both an entertaining adventure/comedy and a meditation on technology and planned obsolescence. The Alec Guinness character invents a wonderful new fabric that will never get dirty and never wear out. He sees a future where ordinary people will save money on clothes and cleaning expenses. People marvel at the invention at first, but soon everyone is against him: the textile and clothing companies fear his cloth will put them out of business, the workers in those companies fear losing their jobs, and those who do the washing fear losing work, too. Near the end of the movie, one character notes wryly that markets won't function if the products work too well. He says: “What do you think happened to all the other things? The razor blade that doesn’t get blunt? The car that runs on water with a pinch of something else?”

It's harder to come up with a clear-cut real-world example of companies seeking to reduce the quality of a product in order to boost sales. After all, in real-world markets there should usually be a mixture of lower-quality, lower-price products and higher-quality, higher-price products, and what people want to buy will have a substantial effect on what gets produced. But in the October 2014 issue of IEEE Spectrum, Markus Krajewski tells the story of "The Great Lightbulb Conspiracy: The Phoebus cartel engineered a shorter-lived lightbulb and gave birth to planned obsolescence."

The lightbulb conspiracy refers to the Convention for the Development and Progress of the International Incandescent Electric Lamp. It was signed in 1924 by the world's major light bulb manufacturers, including Germany’s Osram, the Netherlands’ Philips, France’s Compagnie des Lampes, Hungary’s Tungsram, the United Kingdom’s Associated Electrical Industries, and Japan’s Tokyo Electric. As Krajewski explains: "The U.S. company GE, one of the prime movers behind the group’s formation, was itself not a member. Instead it was represented by its British subsidiary, International General Electric, and by the Overseas Group, which consisted of its subsidiaries in Brazil, China, and Mexico. Over the next decade or so, GE would acquire significant stakes in all the member companies that it did not already own. ... [T]he group founded the Phoebus cartel, a supervisory body that would carve up the worldwide incandescent lightbulb market, with each national and regional zone assigned its own manufacturers and production quotas. It was the first cartel in history to enjoy a truly global reach."

Of course, cartels were widespread in the early decades of the 20th century, when the legal concept of antitrust enforcement was just getting established (which is surely why GE kept its American-based fingerprints off the Phoebus cartel). Even today, international antitrust is only now becoming a hot topic.

What makes the Phoebus cartel especially interesting is not its standard cartel behavior in seeking to fix prices and quantities for sale, to assure higher prices. It's the effort of the cartel to shape the technological development of the light bulb, and in particular, to make light bulbs that would reliably burn out after about 1,000 hours--thus assuring additional future sales. Krajewski writes:


How exactly did the cartel pull off this engineering feat? It wasn’t just a matter of making an inferior or sloppy product; anybody could have done that. But to create one that reliably failed after an agreed-upon 1,000 hours took some doing over a number of years. The household lightbulb in 1924 was already technologically sophisticated: The light yield was considerable; the burning time was easily 2,500 hours or more. By striving for something less, the cartel would systematically reverse decades of progress. ...
[W]e found meticulous correspondence between the cartel’s factories and laboratories, which were researching how to modify the filament and other measures to shorten the life span of their bulbs. The cartel took its business of shortening the lifetime of bulbs every bit as seriously as earlier researchers had approached their job of lengthening it. Each factory bound by the cartel agreement—and there were hundreds, including GE’s numerous licensees throughout the world—had to regularly send samples of its bulbs to a central testing laboratory in Switzerland. There, the bulbs were thoroughly vetted against cartel standards. If any factory submitted bulbs lasting longer or shorter than the regulated life span for its type, the factory was obliged to pay a fine.
Much of the research on shortening the life expectancy of light bulbs focused on the materials and shapes used for the filament. One project at GE, for example, set out to reduce the life expectancy of flashlight bulbs, so that the bulb would need to be changed roughly each time the batteries were changed. At one point, some cartel members tried to sneak in some longer-lasting bulbs that would also require higher voltage. But the cartel snapped back.
After the Phoebus development department’s customary report of voltage statistics revealed such product “enhancements,” Anton Philips, head of Philips, complained to an executive at International General Electric: “This, you will agree with me, is a very dangerous practice and is having a most detrimental influence on the total turnover of the Phoebus Parties…. After the very strenuous efforts we made to emerge from a period of long life lamps, it is of the greatest importance that we do not sink back into the same mire by paying no attention to voltages and supplying lamps that will have a very prolonged life.”
As Krajewski points out, the common excuse from the light-bulb makers was that the shorter life expectancy was necessary for a higher quality or volume of light. But they didn't actually seek to research light bulbs with long life expectancy and better light--only light bulbs with shorter life expectancy. The efforts to reduce the life expectancy of light bulbs succeeded: "Over the course of nearly a decade, the cartel succeeded in this quest. The average life of a standard reference lightbulb produced in dozens of Phoebus members’ factories dropped by a third between 1926 and fiscal year 1933–34, from 1,800 hours to just 1,205 hours." 
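Krajewski's "dropped by a third" figure checks out against the hours he reports:

```python
# Average life of a standard reference bulb in Phoebus members' factories:
# 1,800 hours in 1926, down to 1,205 hours by fiscal 1933-34 (Krajewski).
before, after = 1800, 1205
drop = (before - after) / before
print(f"{drop:.1%}")  # prints 33.1% -- almost exactly one-third
```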

The light bulb cartel was staggered by the Great Depression and crashed for good during World War II. Of course, we now live in a world where the incandescent light bulb is being phased out, in favor of compact fluorescent and LED light bulbs, which often promise much longer life. But it's intriguing to wonder what capabilities incandescent bulbs might have developed if the early research and development focus had been on longer life, not brighter lights and planned obsolescence. And it's interesting to consider the merits of the current legally enforced technological tradeoff for light bulbs: that is, high up-front prices and low electricity consumption, but with a fair amount of consumer grumbling about the quality of light and whether the new bulbs will really last as long as promised.


Wednesday, October 8, 2014

Shadow Banking: U.S. Risks Persist

A regular bank gets deposits from customers, and then loans out the money to borrowers. But what happens if the loans aren't repaid? Since the 1930s, the U.S. banking system (and the banking system of most other high-income countries) have relied on a two-pronged method of ensuring stability in the banking system: 1) deposit insurance means that the overwhelming majority of bank depositors don't need to worry that their money will be lost; and 2) bank regulation stops banks from taking unreasonably large risks that could lead to the loss of deposits. On the whole, this approach worked fairly well for more than a half-century. But the financial and economic crisis of 2007-2009 revealed a large and growing loophole in these arrangements: the existence of "shadow banks."

A "shadow bank" is any financial institution that gets funds from customers and then in some way lends the money to borrowers. However, a shadow bank doesn't have deposit insurance. And while the shadow bank often faces some regulation, it typically falls well short of the detailed level of risk regulation that real banks face. In this post in May, I tried to explain how shadow banking works in more detail. Many of the financial institutions at the heart of the financial crisis were "shadow banks." For example, Reserve Primary Fund was a large money-market fund that had received money from depositors and had invested some of that money in debt issued by Lehman Brothers. When Lehman went broke, investors began to pull money out of Reserve Primary Fund--and indeed, about $300 billion flowed out of money market mutual funds--until the Federal Reserve and the Treasury Department stepped in with guarantees and emergency loan assistance. In turn, investment banks like Lehman Brothers were not standard commercial banks either, but they were relying on continual inflows of short-term borrowing (similar to deposits at a standard bank) and lending out and investing the funds. When their investments turned bad, their ability to receive short-term borrowing dried up at the same time, and they no longer had capital to function.

Five years past the end of the Great Recession, how vulnerable is the U.S. and the world economy to instability from shadow banking? One worrisome dynamic is that the more tightly actual banks are regulated, the more the financial industry comes up with other "shadow banking" institutions for making loans. The IMF devotes a chapter in its October 2014 Global Financial Stability Report to "Shadow Banking Around the Globe: How Large, and How Risky?" From the summary:

Although shadow banking takes vastly different forms across and within countries, some of the key drivers behind its growth are common to all: a tightening of banking regulation and ample liquidity conditions, as well as demand from institutional investors, tend to foster nonbanking activities. The current financial environment in advanced economies remains conducive to further growth in shadow banking. Many indications there point to the migration of some activities—such as lending to firms—from traditional banks to the nonbank sector. Shadow banking can play a beneficial role as a complement to traditional banking by expanding access to credit or by supporting market liquidity, maturity transformation, and risk sharing. It often, however, comes with banklike risks, as seen during the 2007–08 global financial crisis. Although data limitations prevent a comprehensive assessment, the U.S. shadow banking system appears to contribute most to domestic systemic risk; its contribution is much less pronounced in the euro area and the United Kingdom.

Along with money market mutual funds, the shadow banking sector includes hedge funds, non-bank finance companies and nonbank mortgage originators, broker-dealers, investment funds, real estate investment trusts, and the "structured finance" sector that includes asset-backed commercial paper, collateralized debt obligations, residential mortgage-backed securities, and structured investment vehicles. Many pension funds and insurance companies now do some direct lending of their funds to businesses. As one example of a financial innovation facilitated by banks, but ultimately where the funds are held outside banks, I posted about the leveraged loans sector a few days ago. These many sectors each have their own characteristics and pose their own risks, and it's an oversimplification to lump them together. That said, here's how the assets in shadow banking compare with conventional bank assets in various countries and areas. The U.S. ranks so highly, in part, because its financial system is less dominated by banks than is the financial system of many countries in the EU and elsewhere. Still, the difference is striking.



The IMF makes an effort to sort through the details of all the different shadow banking sectors and to evaluate the risks involved. The IMF notes:
So far, the (imperfectly) measurable contribution of shadow banking to systemic risk in the financial system is substantial in the United States but remains modest in the United Kingdom and the euro area. In the United States, the risk contributions of shadow banking activities have been rising, but remain slightly below precrisis levels. ...  In the United States, shadow banking accounts for at least a third of total systemic risk (measured as extreme losses to the financial system that occur with a very low probability), similar to that of banks. In the euro area and the United Kingdom, their contribution to systemic risk is much smaller relative to the risks arising from their banking system. This largely reflects the fact that the latter are still more bank-based financial systems.
It is discomforting to me to read that for the U.S., shadow banking risks are "slightly below precrisis levels." In general, the policy approach here is clear enough. As the IMF notes: "Overall, the continued expansion of finance outside the regulatory perimeter calls for a more encompassing approach to regulation and supervision that combines a focus on both activities and entities and places greater emphasis on systemic risk and improved transparency."

Easy for them to say! But when you dig down into the specifics of the shadow banking sector, not so easy to do. 

Tuesday, October 7, 2014

Spending on Necessities and Luxuries

The price of necessities hits us all in a vulnerable spot. If the price of airfares to New Zealand rises, it doesn't affect my life--except that my dream vacation to New Zealand looks a little less possible. But if the prices of food and gasoline rise, I notice it immediately, and it cuts into the family budget for entertainment and other pleasant activities. In an Economic Commentary written for the Federal Reserve Bank of Cleveland (October 6, 2014), LaVaughn M. Henry looks at "Income Inequality and Income-Class Consumption Patterns." By his measure, those with higher incomes can (unsurprisingly) spend a far smaller share of their income on necessities as compared to luxuries. However, all income groups are spending a lower share of their income on necessities than they did several decades ago.

Henry looks at data from the Consumer Expenditure Survey, which divides up what people buy into various categories. He then classifies some of the categories as more likely to be disproportionately "necessities" and others as disproportionately "luxuries." For example, necessities include food at home, rent, utilities, health care, education, gasoline, and household supplies. Luxuries include food away from home, entertainment, household furnishings, "other lodging," and others. His complete list appears at the bottom of the post. This division is clearly rough and ready, but as Henry explains: "A specific type of good or service is classified as a luxury if more of it is consumed, on a percentage basis, as real income levels increase (that is, going from lower to higher income quintiles). Similarly, a specific good or service is classified as a necessity if it accounts for a smaller percentage of consumption as real income levels increase."

Back in 1984, household spending was more-or-less evenly divided between necessities and luxuries. Luxuries then rose as high as 58% of annual spending before the Great Recession hit, and have now fallen back to 56% of annual spending. Here's the pattern:


What if we look across income groups? The following five graphs show the share of spending on luxuries with the blue line, and on necessities with the red line, across the five quintiles of the income distribution (that is, dividing the income distribution into five parts with equal numbers of households in each). The share of income spent on luxuries is much higher for the highest-income group, while the share of income spent on necessities is much higher for the lowest-income group. Of course, this is why people in lower income groups are so vulnerable to a rise in the price of necessities. But it's also intriguing to note that since 1984, the share of income spent on luxuries is rising for each income group, and the share of income spent on necessities is falling for each income group.


Many people, including me, have a tendency to feel that their hard-earned money should be spent on something fun, rather than having too much of it go to boring necessities like food and gas. One reason the pinch of the Great Recession has felt so severe, I think, is that people have been used to a world where over time they were able to spend more on luxuries. But that long-term trend halted during the Great Recession, and has not yet resumed.

Finally, here is Henry's overall list of consumption categories, classified as luxuries, necessities, and indeterminate.

Table 1. Average Share of Total Real Consumption, 1984-2012

Consumption category | Lowest | Second-lowest | Middle | Second-highest | Highest | Consumption type
Food away from home | 5.76 | 5.85 | 6.26 | 6.45 | 6.35 | Luxury
Owned dwellings | 8.39 | 8.89 | 10.47 | 12.76 | 15.17 | Luxury
Household furnishings, equipment | 2.58 | 2.65 | 2.92 | 3.13 | 3.54 | Luxury
Vehicles (net outlay) | 1.72 | 2.33 | 2.77 | 3.33 | 3.77 | Luxury
Cash contributions | 2.31 | 2.84 | 3.00 | 3.05 | 4.05 | Luxury
Entertainment | 3.29 | 3.36 | 3.58 | 3.87 | 4.17 | Luxury
Household operations | 1.51 | 1.53 | 1.47 | 1.62 | 2.15 | Luxury
Personal insurance, pensions | 2.32 | 4.73 | 7.86 | 10.72 | 13.92 | Luxury
Other vehicle expenses | 4.80 | 5.67 | 6.09 | 6.13 | 5.54 | Luxury
Public transportation | 0.92 | 0.82 | 0.81 | 0.83 | 1.22 | Luxury
Other lodging | 1.11 | 0.93 | 1.00 | 1.19 | 2.05 | Luxury
Food at home | 11.98 | 10.89 | 9.25 | 8.19 | 6.40 | Necessity
Rented dwellings | 14.17 | 11.34 | 8.56 | 5.04 | 2.03 | Necessity
Utilities, fuels, public services | 11.59 | 10.39 | 8.95 | 7.60 | 5.97 | Necessity
Healthcare | 8.58 | 9.00 | 7.24 | 5.97 | 4.71 | Necessity
Education | 4.28 | 1.76 | 1.62 | 1.90 | 3.11 | Necessity
Personal care | 1.40 | 1.41 | 1.34 | 1.29 | 1.20 | Necessity
Tobacco, smoking products | 2.53 | 2.24 | 1.92 | 1.50 | 0.82 | Necessity
Gas and motor oil | 4.89 | 5.21 | 5.27 | 4.87 | 3.72 | Necessity
Housekeeping supplies | 1.65 | 1.65 | 1.48 | 1.47 | 1.28 | Necessity
Alcoholic beverages | 0.99 | 0.96 | 1.04 | 0.99 | 1.00 | Indeterminate
Reading | 0.41 | 0.42 | 0.41 | 0.40 | 0.40 | Indeterminate
Apparel and services | 3.58 | 3.45 | 3.46 | 3.43 | 3.58 | Indeterminate
Source: Bureau of Labor Statistics, Consumer Expenditure Surveys, 1984-2012.
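Henry's rule of thumb is mechanical enough to apply in code. This sketch takes three rows from the table above and classifies each by comparing the top quintile's budget share to the bottom quintile's; the 5 percent "indeterminate" band is my own assumption for illustration, not part of Henry's procedure:

```python
# Budget shares by income quintile (lowest -> highest), from the table
# above. One category of each type, for illustration.
shares = {
    "Food away from home": [5.76, 5.85, 6.26, 6.45, 6.35],
    "Food at home":        [11.98, 10.89, 9.25, 8.19, 6.40],
    "Alcoholic beverages": [0.99, 0.96, 1.04, 0.99, 1.00],
}

def classify(s, band=0.05):
    """Luxury if the share rises going up the income distribution,
    necessity if it falls. The 5% band for calling a roughly flat
    pattern 'Indeterminate' is a hypothetical threshold, not Henry's."""
    ratio = s[-1] / s[0]  # highest quintile's share vs. lowest's
    if ratio > 1 + band:
        return "Luxury"
    if ratio < 1 - band:
        return "Necessity"
    return "Indeterminate"

for category, s in shares.items():
    print(f"{category}: {classify(s)}")
```

Run on these three rows, the rule reproduces the table's labels: food away from home comes out a luxury, food at home a necessity, and alcoholic beverages indeterminate.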

Saturday, October 4, 2014

Leveraged Loans: A Danger Spot?

In the aftermath of the Great Recession, we all learned to beware complex lending structures, which can crumble like a house of cards if repayments don't happen on time. A central ingredient in the financial crisis of 2007-9 was "collateralized debt obligations," which were essentially a way of putting a group of subprime mortgage loans into a financial security, structured in a way that the credit rating agencies would rate a large portion of their value as safe. Financial regulators and the Federal Reserve didn't pay enough attention to the dangers of this financial legerdemain.

Now yellow warning lights should be blinking in the area of "leveraged loans," which can be defined as "a large, variable-rate loan originated by a group of banks (sometimes called a syndicate) for a corporate borrower who is perceived to be riskier than most."  Alex Musatov and William Watts lay out the issues in "Despite Cautionary Guidance, Leveraged Loans Reach New Highs," in the September 2014 Economic Letter published by the Federal Reserve Bank of Dallas.

The issuance of leveraged loans spiked just before the financial crisis of 2007-9, then collapsed in 2008, but has now rebounded to new highs.

The basic idea of a leveraged loan is that because of the size of the loan and the risk posed by the borrowing firm, no individual bank wants to lend the money. However, a group of banks get together as a syndicate to organize the loan. "The banks retain portions of the loan on their own books, but the majority of it is packaged for other investors—typically finance companies, insurance companies and hedge funds." A leveraged loan can be organized in several ways, as Musatov and Watts explain:

Specific lending arrangements reflect the size of the loan and riskiness of the borrower. In an underwritten deal, the syndicate issues the full amount of the loan and then tries to sell portions to outside investors. Underwritten deals are generally the most attractive loans to borrowers because they ensure that the entire amount of needed capital is raised; the lead bank gets higher fees for the risk of holding the debt while looking for investors. A “club deal,” used for smaller loans, involves several banks raising the money within the group while splitting the fees charged to the borrower. Finally, in “best effort” syndication, the arrangers of the loan underwrite less than its entire value and attempt to raise the remainder in the credit market. This type of syndication is generally used for the riskiest borrowers or the most complex loan agreements.
Of course, there's nothing wrong or underhanded about leveraged loans. It's just one of the ways in which modern finance works. It's not easy for firms with a less-solid credit record to borrow, and this is one of the ways it can happen. Of course, because such firms pose greater risks, they also need to pay a higher interest rate. But if the syndicates are making too many of these loans, while taking the fees and then selling the loans along to investors who are eager for a higher return, then there is a danger that a bubble is rising in this market.

In March 2013, as Musatov and Watts note, financial regulators began to express concerns about the leveraged loan market: "[T]he Office of the Comptroller of the Currency (OCC), Federal Deposit Insurance Corp. (FDIC) and Board of Governors of the Federal Reserve System (FRS) issued “Interagency Guidance on Leveraged Lending” in March 2013, outlining principles of safe-and-sound leveraged lending activities ..." What are some of the danger signs in this kind of market?

One signal is that the firms that are borrowing in the market sign a "covenant" contract, in which they make various promises about the total amount that the borrower will borrow, the value of short-term assets on hand that can easily be sold, limits on long-term investments, and so on. However, more and more leveraged loans are using "covenant-lite" approaches, where these rules are loosened--thus making the loan a riskier one.

While covenant-lite loans are becoming more popular, the additional interest rate charged to the borrowing firms--to account for their higher risk--has been coming down. The red line shows how much the interest rate "spread" on leveraged loans, above the baseline rate on U.S. Treasury borrowing, has been sagging over time. The green line shows the default rate on such loans, which has been rising over time. In short, the risks of such borrowing seem to be rising while the extra interest rate charged to such borrowers to compensate for the risks is falling.
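To see why the combination of thinner spreads and rising defaults is worrisome, here is a rough back-of-the-envelope sketch in Python. The spreads, default rates, and the 60 percent loss-given-default figure are all illustrative assumptions, not numbers taken from the figure:

```python
# Illustrative only: all parameters are assumed, not market data.
def expected_excess_return(spread, default_rate, loss_given_default=0.6):
    """Rough expected return over Treasuries on a leveraged loan:
    collect the spread when the loan performs, eat a loss when it defaults."""
    return (1 - default_rate) * spread - default_rate * loss_given_default

# Wider spread, lower defaults: investors are compensated for the risk.
print(round(expected_excess_return(spread=0.05, default_rate=0.02), 4))  # 0.037
# Thinner spread, higher defaults: the risk premium can nearly evaporate.
print(round(expected_excess_return(spread=0.03, default_rate=0.04), 4))  # 0.0048
```

The point of the toy calculation is simply directional: if spreads fall while default rates rise, the cushion protecting investors shrinks from both sides at once.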


As noted earlier, financial regulators are keeping an eye on the leveraged loan market. As Janet Yellen testified before Congress in July 2014:

The Committee recognizes that low interest rates may provide incentives for some investors to "reach for yield," and those actions could increase vulnerabilities in the financial system to adverse events. While prices of real estate, equities, and corporate bonds have risen appreciably and valuation metrics have increased, they remain generally in line with historical norms. In some sectors, such as lower-rated corporate debt, valuations appear stretched and issuance has been brisk. Accordingly, we are closely monitoring developments in the leveraged loan market and are working to enhance the effectiveness of our supervisory guidance.
What ultimately concerns me is not the specifics of the leveraged loan market by itself, but the thought that there are likely to be similar niche markets out there, invisible to most of us in their day-to-day operations, but with some potential to melt down in a way that could cause broader financial and economic distress.

Thursday, October 2, 2014

Time for an Infrastructure Push?

The prospect of an infrastructure push is seductive. Economic and job growth has been sluggish. Interest rates and thus borrowing costs remain relatively low. At least some kinds of infrastructure might help to boost long-term growth. Thus, the October 2014 World Economic Outlook from the IMF includes a chapter on "Is It Time for an Infrastructure Push? The Macroeconomic Effects of Public Investment." The IMF writes:
[I]ncreased public infrastructure investment raises output in both the short and long term, particularly during periods of economic slack and when investment efficiency is high. This suggests that in countries with infrastructure needs, the time is right for an infrastructure push: borrowing costs are low and demand is weak in advanced economies, and there are infrastructure bottlenecks in many emerging market and developing economies. Debt-financed projects could have large output effects without increasing the debt-to-GDP ratio, if clearly identified infrastructure needs are met through efficient investment.
Like so many statements by economists, this may seem straightforward, but it is actually hedged with qualifiers. For example, the first sentence refers to the positive outcomes arising "when investment efficiency is high," and the closing line states "if clearly identified infrastructure needs are met through efficient investment." In the middle, there is a reference to "infrastructure bottlenecks in many emerging market and developing economies," but this sentence carefully does not claim that infrastructure bottlenecks are a first-order problem in advanced economies.

I take from this that the case for an infrastructure push is especially strong right now in those "many emerging and developing countries." Here's a figure showing current levels of electricity, roads, and phone lines by region. But of course, infrastructure would also include water and sewage, airports and seaports, rail, wireless connections, natural gas and oil pipelines, and more.



Japan offers a mildly cautionary tale for advanced economies on the limits of infrastructure investment.


What about for the advanced economies? The IMF chapter goes through a variety of calculations that attempt to separate out the specific effect of a boost in public infrastructure spending. The report notes (footnotes and references to figures omitted):
The macroeconomic effects of public investment shocks are very different across economic regimes. During periods of low growth, a public investment spending shock increases the level of output by about 1½ percent in the same year and by 3 percent in the medium term, but during periods of high growth the long-term effect is not statistically significantly different from zero. Public investment shocks also bring about a reduction in the public-debt-to-GDP ratio during periods of low growth because of the much bigger boost in output.
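The claim that debt-financed public investment can reduce the debt-to-GDP ratio follows from simple arithmetic: if the output gain is large enough relative to the new borrowing, the denominator grows faster than the numerator. A minimal sketch, using made-up starting values and the roughly 3 percent medium-term output effect quoted above:

```python
# Illustrative arithmetic only; the starting values are assumptions, not IMF data.
debt, gdp = 100.0, 100.0            # begin at a debt-to-GDP ratio of 1.0
investment = 0.01 * gdp             # debt-financed public investment of 1% of GDP
output_gain = 0.03                  # ~3% medium-term output effect, per the quote above

new_debt = debt + investment        # borrowing raises the numerator to 101
new_gdp = gdp * (1 + output_gain)   # the output gain raises the denominator to 103
print(round(new_debt / new_gdp, 3))  # about 0.981: the ratio falls despite more debt
```

Of course, this is exactly why the IMF's qualifiers matter: if the investment is inefficient and the output gain is small, the same arithmetic pushes the ratio up instead of down.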
What about infrastructure in the United States? Here's a figure from the ever-useful FRED website run by the St. Louis Fed showing total public construction spending in the U.S. In rough terms, about one-third of this is highways and streets and another one-tenth is other transportation, one-quarter is education-related construction, about one-sixth is sewage and water. Other categories include power, public safety, and conservation and development. I'm not surprised to see a boost in infrastructure spending during the Great Recession; after all, that was part of the "economic stimulus" package that passed in 2009. But I had not realized that public construction spending had actually been rising fairly briskly since about 2005, well before the recession hit.


As someone who lives in Minnesota, I favor more infrastructure spending. After all, about seven years ago a major bridge in Minneapolis collapsed during rush hour. After the brutal Minnesota winter, the potholes on the roads are large enough to swallow dogs, and sometimes appliances. But like a lot of economists, I have two concerns about how to focus and direct a push for more infrastructure, so that it means more than a nice shiny bridge-to-nowhere in every Congressional district.

1) There's always a tension between civil engineers and economists. The engineers often look at every infrastructure limitation and see a building project that ought to happen. Economists often look at the same problem and see ways that the existing infrastructure might be used more effectively. For example, instead of just automatically trying to build more roads, water-mains, electrical capacity, and the like, how about looking for ways to conserve on the need for this infrastructure? Many economists favor finding ways to charge more at peak usage times as a way of spreading the use of infrastructure over a wider time period each day.
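The peak-pricing logic can be illustrated with a toy constant-elasticity demand curve. The elasticity and prices below are hypothetical numbers chosen only to show the direction of the effect, not estimates for any actual road or utility:

```python
# Toy sketch, not an estimate: a constant-elasticity demand curve for, say,
# peak-hour road or electricity use. All parameters are assumed for illustration.
def demand(price, base_quantity=100.0, elasticity=-0.3, base_price=1.0):
    """Usage at a given price, relative to a baseline of 100 units at price 1."""
    return base_quantity * (price / base_price) ** elasticity

uniform_peak = demand(1.0)     # flat pricing: peak usage stays at 100
priced_peak = demand(2.0)      # doubling the peak price trims peak usage to ~81
print(round(uniform_peak, 1), round(priced_peak, 1))
```

Even a modest response like this can matter, because capacity has to be built for the peak: trimming the peak by a fifth can defer a whole round of construction.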

2) I have a hard time believing that U.S. economic prosperity in the 21st century is going to be built on concrete and asphalt. As I said, I'm all for fixing bridges and potholes, updating the municipal water pipes, and the like. But what about infrastructure for the 21st century? Some of this infrastructure may be funded directly by the government, but most of it will require a fairly high level of government support and cooperation if it is going to happen. For example, as we repair current highways and bridges, how about starting to build the capacity for smart highways and self-driving cars? What about a smart electricity grid, both to facilitate the use of decentralized renewable energy sources and to implement higher prices for large users at peak times? The U.S. needs to update its rail-freight system, which could move large numbers of trucks off the highways--thus reducing congestion and saving on road repair costs. The U.S. needs to update its network of oil and gas pipelines. When the IMF talks about "clearly identifiable needs" and "efficient investment," these seem to me to be some of the main U.S. infrastructure issues, although the list could doubtless be lengthened.

When talking about how infrastructure spending could boost a sluggish economy, the case of Japan often comes up. After all, didn't Japan boost infrastructure spending in the 1990s in an attempt to boost economic growth, but with little effect? The IMF report notes that the patterns of what happened in Japan are more complex:

It is true that Japan briskly increased public investment in the early 1990s, but the increase was unwound after just a few years to finance higher social security spending for a rapidly aging population. In particular, after the bursting of the bubble economy in the early 1990s, the government increased public investment spending by 1½ percent of GDP, with such spending reaching a peak of 8.6 percent in 1996. After that, the ratio of public investment to GDP steadily declined, picking up only recently in the aftermath of the global financial crisis, the 2011 earthquake, and the start of Abenomics. In the 20 years after 1992, the last year in which Japan recorded a fiscal surplus, social spending increased by 10.6 percent of GDP, and public investment declined by 2.3 percent of GDP.
And of course, this is one of the harsh truths about infrastructure spending when budget deficits are already high and public debt has been rising: in the long run, a commitment to higher public infrastructure spending will have to compete with other spending priorities, like health care, payments to the elderly, and defense spending, all in a context of rising interest payments owed on past government borrowing.