
Friday, February 28, 2014

Will We Look Back on the Euro as a Mistake?

For the last few months, the euro situation has not been a crisis dominating headlines. But the economic situation surrounding the euro remains grim and unresolved. Finance and Development, published by the IMF, offers four angles on Europe's road ahead in its March 2014 issue. For example,
Reza Moghadam discusses how Europe has moved toward greater integration over time, Nicolas Véron looks at plans and prospects for a European banking union, and Helge Berger and Martin Schindler
consider the policy agenda for reducing unemployment and spurring growth. But I was especially drawn to "Whither the Euro?" by Kevin Hjortshøj O’Rourke, because he finds himself driven to contemplate whether the euro will survive. He concludes:
The demise of the euro would be a major crisis, no doubt about it. We shouldn’t wish for it. But if a crisis is inevitable then it is best to get on with it, while centrists and Europhiles are still in charge. Whichever way we jump, we have to do so democratically, and there is no sense in waiting forever. If the euro is eventually abandoned, my prediction is that historians 50 years from now will wonder how it ever came to be introduced in the first place.
To understand where O'Rourke is coming from, start with some basic statistics on unemployment and growth in the euro-zone. Here's the path of unemployment in Europe through the end of 2013, with the average for all 28 countries of the European Union shown by the black line, and the average for the 17 countries using the euro shown by the blue line. 


In the U.S. economy, we agonize (and rightfully so!) over how slowly the unemployment rate has fallen from its peak of 10% in October 2009 to 6.6% in January 2014. In the euro zone, unemployment across countries averaged 7.5% before the Great Recession, and has risen since then to more than 11.5%. And remember, this average includes countries with low unemployment rates: for example, Germany's unemployment rate has plummeted to 5.1%. But Greece has unemployment of 27.8%; Spain, 25.8%; and Croatia, Cyprus, and Portugal all have unemployment rates above 15%.

Here's the quarterly growth rate of GDP for the 17 euro countries, for all 28 countries in the European Union, and with the U.S. economy for comparison. Notice that the European Union and the euro zone actually had two recessions: the Great Recession, which was deeper than the U.S. recession, and a follow-up period of negative growth from early 2011 to early 2013. As O'Rourke writes: "In December 2013 euro area GDP was still 3 percent lower than in the first quarter of 2008, in stark contrast with the United States, where GDP was 6 percent higher. GDP was 8 percent below its precrisis level in Ireland, 9 percent below in Italy, and 12 percent below in Greece."
 


For American readers, try to imagine what the U.S. political climate would be like if unemployment had been rising almost continually for the last five years, and if the rate was well into double-digits for the country as a whole. Or contemplate what the U.S. political climate would look like if instead of sluggish recovery, U.S. economic growth had actually been in reverse for most of 2011 and 2012.

O'Rourke points out that this dire outcome was a predictable, and in fact predicted, result of standard economic theory before the euro was put in place. And he points out that there is no particular reason to think that the EU is on the brink of addressing the underlying issues.

The relevant economic theory here points out that if two areas experience different patterns of productivity or growth, some adjustment will be necessary between them. One possibility, for example, is that the exchange rate adjusts between the two countries. But if the countries have agreed to use a common currency, so that an exchange rate adjustment is impossible, then other adjustments must occur. For example, some workers might move from the lower-wage to the higher-wage area. Instead of a shift in exchange rates cutting wages and prices in global markets, wages and prices themselves could fall in an "internal devaluation." Or a central government might redistribute some income from the higher-income to the lower-income area.

But in the euro-zone, these adjustments are either not-yet-practical or impossible. With the euro as a common currency, exchange rate changes are out. Movement of workers across national borders is not that large, which is why unemployment can be 5% in Germany and more than 25% in Spain and Greece. Wages are often "sticky downward," as economists say, meaning that it is unusual for wages to decline substantially in nominal terms. The EU central government has a relatively small budget and no mandate to redistribute from higher-income to lower-income areas. Without any of these adjustments, the outcome is that certain countries have depressed economies with high unemployment and slow or negative growth, and no near-term way out.

Sure, one can propose various steps that in time might work. But for all such proposals, O'Rourke lays two unpleasantly real facts on the table.
First, crisis management since 2010 has been shockingly poor, which raises the question of whether it is sensible for any country, especially a small one, to place itself at the mercy of decision makers in Brussels, Frankfurt, or Berlin. ... Second, it is becoming increasingly clear that a meaningful banking union, let alone a fiscal union or a safe euro area asset, is not coming anytime soon.
Given the unemployment and growth situations in the depressed areas of Europe, it's no surprise that pressure for more extreme political choices is building up. For Europe, sitting still while certain nations experience depression-level unemployment for years and other nations experience booms, waiting for the political pressure for extreme change to become irresistible, is not a sensible policy. O'Rourke summarizes in this way:
For years economists have argued that Europe must make up its mind: move in a more federal direction, as seems required by the logic of a single currency, or move backward? It is now 2014: at what stage do we conclude that Europe has indeed made up its mind, and that a deeper union is off the table? The longer this crisis continues, the greater the anti-European political backlash will be, and understandably so: waiting will not help the federalists. We should give the new German government a few months to surprise us all, and when it doesn’t, draw the logical conclusion. With forward movement excluded, retreat from the EMU may become both inevitable and desirable.

Thursday, February 27, 2014

Death of a Statistic

OK, I know that only a very small group of people actually care about government statistics. I know I'm a weirdo.  I accept it. But data is not the plural of anecdote, as the saying goes. If you care about deciphering real-world economic patterns, you need statistical evidence. Thus, it's unpleasant news to see the press release from the US Bureau of Labor Statistics reporting that, because its budget has been cut by $21 million down to $592 million, it will cut back on the International Price Program and on the Quarterly Census of Employment and Wages.

I know, serious MEGO, right? (MEGO--My Eyes Glaze Over.)

But as Susan Houseman and Carol Corrado explain, the change means the end of the export price program, which calculates price levels for U.S. exports and thus allows economists "to understand trends in real trade balances, the competitiveness of U.S. industries, and the impact of exchange rate movements. It is highly unusual for a statistical agency to cut a so-called principal federal economic indicator." As for the other program, BLS notes: "The Quarterly Census of Employment and Wages (QCEW) program publishes a quarterly count of employment and wages reported by employers covering 98 percent of U.S. jobs, available at the county, MSA [Metropolitan Statistical Area], state and national levels by industry." That survey is being reduced in scope and frequency, not eliminated. If you don't think that a deep and detailed understanding of employment and wages is all that important, maybe cutting back funding for this survey seems like a good idea.

These changes seem part of a series of sneaky little unpleasant cuts. Last year, the Bureau of Labor Statistics saved a whopping $2 million by cutting the International Labor Comparisons program, which produced a wide array of labor market and economic data within a common conceptual framework, so that one could meaningfully compare, say, "unemployment" across different countries. And of course, some of us are still mourning the decision of the U.S. Census Bureau in 2012 to save $3 million per year by ending the U.S. Statistical Abstract, which since 1878 had provided a useful summary and reference work for locating a wide array of government statistics.

The amounts of money saved with these kinds of cuts are tiny by federal government standards, and the costs of not having high-quality statistics can be severe. But don't listen to me. Each year, the White House releases an Analytical Perspectives volume with its proposed federal budget, and in recent years that volume usually contains a chapter on "Strengthening Federal Statistics." As last year's report says:
"The share of budget resources spent on supporting Federal statistics is relatively modest—about 0.04 percent of GDP in non-decennial census years and roughly double that in decennial census years—but that funding is leveraged to inform crucial decisions in a wide variety of spheres. The ability of governments, businesses, and the general public to make appropriate decisions about budgets, employment, investments, taxes, and a host of other important matters depends critically on the ready and equitable availability of objective, relevant, accurate, and timely Federal statistics."
I wish I had some way to dramatize the foolishness and loss of these decisions to trim back on government statistics. After all, doesn't the death of a single statistic diminish us all? Ask not for whom the statistics toll; they toll for thee. It's not working, is it?

It won't do to blame these kinds of cutbacks in the statistics program on the big budget battles, because in the context of the $3.8 trillion federal budget this year, a few tens of millions are pocket change. These cuts could easily be reversed by trimming back on the outside conference budgets of larger agencies. But all statistics do is offer facts that might get in the way of what you already know is true. Who needs the aggravation?


Wednesday, February 26, 2014

Highways of the Future

Highways, roads, and bridges are still mostly an early to mid-20th century technology. Clifford Winston and Fred Mannering point to some of the directions for highways of the future in "Implementing technology to improve public highway performance: A leapfrog technology from the private sector is going to be necessary," published in the Economics of Transportation. They set the stage like this (citations and notes omitted throughout):

"The nation's road system is vital to the U.S. economy.Valued at close to $3 trillion, according to the Bureau of Economic Analysis of the U.S. Department of Commerce, 75 percent of goods, based on value, are transported on roads by truck, 93 percent of workers' commutes are on roads by private automobiles and public buses, and by far the largest share of non-work and pleasure trips are taken by road. Indeed, roads can be accurately characterized as the arterial network of the United States. Unfortunately,the arteries are clogged: the benefits that commuters, families,truckers,and shippers receive from the nation's road system have been increasingly compromised by growing congestion, vehicle damage, and accident costs."
These costs are high. Estimates of the value of time and fuel wasted on congested roads run about $100 billion per year. Poor road conditions cost American car drivers $80 billion in operating costs and repairs. And some 30,000 Americans die in traffic accidents each year.

Many of the policy recommendations are familiar enough. For example, the traditional economist's answer to road congestion is to charge tolls for driving during congested times. "[P]oor signal timing and coordination, often caused by outdated signal control technology or reliance on obsolete data on relative traffic volumes, contributes to some 300 million vehicle hours of annual delay on major roadways." Earlier work by Winston emphasized that roads and bridges are primarily damaged by heavier trucks, not cars: "Almost all pavement damage tends to be caused by trucks and buses because, for example, the rear axle of a typical 13-ton trailer causes over 1000 times as much pavement damage as that of a car." Thus, charging heavy vehicles for the damage they cause is a natural prescription. For greater safety, enforcement of laws against drunk driving and driving-while-texting can be a useful step.
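To see how a 1000-to-1 ratio like that can arise, here is a minimal sketch using the standard "fourth-power" approximation from pavement engineering, under which damage per axle pass scales roughly with the fourth power of axle load. The specific axle loads below are illustrative assumptions of mine, not figures from Winston and Mannering:

```python
# Sketch of the fourth-power approximation for pavement damage:
# damage per axle pass ~ (axle load)^4. Axle loads are assumed
# for illustration, not taken from the paper.

def relative_damage(axle_load_tons, reference_tons=1.0):
    """Damage of one axle pass, relative to a 1-ton reference axle."""
    return (axle_load_tons / reference_tons) ** 4

car_axle = 1.0      # a 2-ton car carries roughly 1 ton per axle
trailer_axle = 6.5  # a 13-ton trailer load split over 2 rear axles

ratio = relative_damage(trailer_axle) / relative_damage(car_axle)
print(f"one trailer axle ~ {ratio:,.0f}x the damage of a car axle")
# prints roughly 1,785x, consistent with "over 1000 times"
```

Under this kind of scaling, nearly all pavement wear is attributable to the heaviest axles, which is exactly what makes per-axle-weight charges the natural prescription.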

But as Winston and Mannering note, new technologies are expanding possibilities for the highway of the future. Certain technologies, like automated collection of tolls from cars that don't need to stop, are already widespread. The combination of GPS technology and information about road conditions is already helping many drivers find alternative routes through congestion. But more is coming. As they write:
"Specific highway and vehicle technologies include weigh-in-motion capabilities, which provide real-time information to highway officials about truck weights and axle configurations that they can use to set efficient pavement-wear charges and to enforce safety standards efficiently; adjustable lane technologies,which allow variations in the number and width of lanes in response to real-time traffic flows; new vehicle attributes, such as automatic vehicle braking that could decrease vehicle headways and thus increase roadway capacities; improved construction and design technologies to increase pavement life and to strengthen roads and bridges; and photo-enforcement technologies that monitor vehicles' speeds and make more efficient use of road capacity by improving traffic flows and safety. ... The rapid evolution of material science (including nanotechnologies) has produced advances in construction materials, construction processes, and quality control that have significantly improved pavement design, resulting in greater durability, longer lifetimes, lower maintenance costs, and less vehicle damage caused by potholes."
Of course, ultimately, the driverless car may dramatically change how cars and roads are used. (Indeed, driverless trucks are already in use in places like an iron ore mine in Australia, comfortably far from public roads--at least so far.)

But the roads and bridges are not a competitive company, trying out new technologies in the hope of attracting new customers and raising profits. They are run by government bureaucracies that are set in their old ways. The federal fuel tax isn't raising enough money for new investments in road technology, partly because it is fixed in nominal terms and inflation keeps eating away at its real value, and partly because higher fuel economy means that a fuel tax collects less money. Lobbies for truckers oppose charges that would reflect road damage; lobbies for motorists oppose charges that would reflect congestion. Stir up all these ingredients, and the result is not a big push for applying new technology to America's roads and bridges.

Winston and Mannering offer an ultimately optimistic view in which private investments in the driverless car trigger a wide array of other technological investments in roads and bridges. Maybe they will be proven right. I believe the social gains from applying all kinds of technology to roads and bridges could be very large. But I also envision a complex array of interrelated and potentially costly technologies, confronting a thorny tangle of political and regulatory obstacles at every turn and straightaway.


Tuesday, February 25, 2014

Financial Services From the US Postal Service?

About 8% of Americans are "unbanked," meaning they have no bank account, while another 21% are "underbanked," which means that they have a bank account but also use alternative financial services like payday loans, pawnshops, non-bank check cashing, and money orders. The Office of Inspector General of the U.S. Postal Service has published a report asking if the Post Office might be a useful mechanism in "Providing Non-Bank Financial Services for the Underserved." The report points out that the average unbanked or underbanked household spends $2,412 each year just on interest and fees for alternative financial services, or about $89 billion annually across all such households.
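As a rough consistency check on those numbers, multiply the underserved share by a ballpark count of U.S. households; the household count here is my assumption, not a figure from the Inspector General's report:

```python
# Back-of-the-envelope check (assumed household count, not from the report).
households = 120e6               # assumed total U.S. households
underserved_share = 0.08 + 0.21  # unbanked plus underbanked
avg_fees = 2412                  # dollars per underserved household per year

total = households * underserved_share * avg_fees
print(f"${total / 1e9:.0f} billion per year")  # ~$84 billion
```

That lands in the same neighborhood as the report's $89 billion, so the per-household and aggregate figures hang together.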

I admit that, at first glance, this proposal gives me a sinking feeling. The postal service is facing severe financial difficulties in large part because of the collapse in first-class mail, and it has been scrambling to consider alternatives. After watching the tremors and collapses in the U.S. financial system in recent years, providing financial services seems like a shaky railing to grasp.

But as the Inspector General report points out, a connection from the post office to financial services isn't brand-new.  For example, "The Postal Service has played a longstanding role in providing domestic and international money orders. The Postal Service is actually the leader in the U.S. domestic paper money order market, with an approximately 70 percent market share. This is a lucrative business line and demonstrates that the Postal Service already has a direct connection to the underserved, who purchased 109 million money orders in fiscal year (FY) 2012. ... While its domestic and international money orders are currently paper-based, the Postal Service does offer electronic money transfers to nine Latin American countries through the Dinero Seguro® (Spanish for “sure money”) service." For several years now, the Post Office has been selling debit cards, both for American Express and for specific retailers like Amazon, Barnes & Noble, Subway, and Macy’s.

In many countries, the postal service takes deposits and provides financial services. The Universal Postal Union published a report in March 2013 by Alexandre Berthaud and Gisela Davico, "Global Panorama on Postal Financial Inclusion: Key Issues and Business Models," which among more detailed findings notes that 1 billion people in 50 countries around the world do at least some of their banking through postal banking systems. The Universal Postal Union also oversees the International Financial System, which is software that allows a variety of fund transfers across postal operators in more than 60 countries. The U.S. Postal Service is not currently a member. Indeed, the US Inspector General report notes that among high-income countries, postal services earn on average about 14% of their income from financial services.

The U.S. Postal Service even used to take deposits: "[F]rom 1911 to 1967, the Postal Savings System gave people the opportunity to make savings deposits at designated Post Offices nationwide. The system hit its peak in 1947 with nearly $3.4 billion in savings deposits from more than 4 million customers using more than 8,100 postal units. The system was discontinued in 1967 after a long decline in usage." Essentially, the post office collected deposits and then lent them on to local banks, taking a small cut of the interest.

There's of course a vision that in the future, everyone will do their banking over the web, often through their cellphones. But especially for people with a weak or nonexistent link to the banking system, the web-based financial future is still some years away. Until then, they will be turning to cash and physical checks, exchanged at physical locations. However, as the Inspector General report notes, "Banks are closing branches across the country (nearly 2,300 in 2012). ... The closings are heavily hitting low-income communities, including rural and inner-city areas — the places where many of the underserved live. In fact, an astounding 93 percent of the bank branch closings since late 2008 have been in ZIP Codes with below-national median household income levels." Conversely, there are 35,000 Post Offices, stations, branches, and contract units, and "59 percent of Post Offices are in ZIP Codes with one or no bank branches."

There are at least two main challenges in the vision of having the U.S. Postal Service provide nonbank financial services. First, the USPS should do everything on a fee basis. It should not in any way, shape or form be directly making investments or loans--just handling payments. However, it could have partners to provide other kinds of financial services, which leads to the second challenge. There is an anticompetitive temptation when an organization like the Post Office creates partnerships to provide outside services. Potential partners will be willing to pay the Post Office more if they have an exclusive right to sell certain services. Of course, the exclusive right also gives them an ability to charge higher fees, which is why they can pay the Post Office more and still earn higher profits. But the intended beneficiaries of the financial services end up paying higher fees. Thus, if the US Postal Service is going to make space for ATMs, or for selling and reloading debit cards, or for cashing checks, it should always seek to offer a choice among three or more providers of such services, not just a single financial services partner. To put it another way, if the Postal Service is linked to a single provider of financial services, then the reputation of the Postal Service is hostage to how that provider performs. It's much better if the Postal Service acts as an honest broker, collecting its fees for facilitating payments and transactions in a setting where people can always switch among multiple providers.

Finally, there is at least one additional benefit worth noting. Many communities lack safe spaces: safe for play, safe for walking down the street, safe for carrying out a financial transaction with minimal fear of fraud or assault.  Having post offices provide financial services could be one part of an overall social effort for adding to the number of safe spaces in these communities.




Monday, February 24, 2014

From BRICs to MINTs?

Back in 2001, Jim O'Neill--then chief economist at Goldman Sachs--invented the terminology of BRICs. As we all know more than a decade later, this shorthand is a quick way of discussing the argument that the course of the world economy will be shaped by the performance of Brazil, Russia, India, and China. Well, O'Neill is back with a new acronym, the MINTs, which stands for Mexico, Indonesia, Nigeria, and Turkey. In an interview with the New Statesman, O'Neill offers some thoughts about the new acronym. If you would like more detail on his views of these countries, O'Neill has also recorded a set of four radio shows for the BBC on Mexico, Indonesia, Nigeria, and Turkey.

In the interview, O'Neill is disarmingly quick to acknowledge the arbitrariness of these kinds of groupings. About the BRICs, for example, he says: "If I dreamt it up again today, I’d probably just call it ‘C’ ... China’s one and a half times bigger than the rest of them put together.” Or about the MINTs, apparently his original plan was to include South Korea, but the BBC persuaded him to include Nigeria instead. O'Neill says: “It’s slightly embarrassing but also amusing that I kind of get acronyms decided for me.” But even arbitrary divisions can still be useful and revealing. In that spirit, here are some basic statistics on GDP and per capita GDP for the BRICs and the MINTs in 2012.




What patterns jump out here?

1) The representative growth economy for Latin America is now Mexico, rather than Brazil. This change makes some sense. Brazil has had four years of sub-par growth, its economy is in recession, and international capital is fleeing. Meanwhile, Mexico is forming an economic alliance with the three other nations with the fastest growth, lowest inflation, and best climates for business in Latin America: Chile, Colombia, and Peru.

2) All of the MINTs have smaller economies than all of the BRICs. If O'Neill would today just refer to C, for China, rather than the BRICs as a group, it's still likely to be true that C for China is the key factor shaping the growth of emerging markets in the future.

3) O'Neill argues that although the MINTs differ in many ways, their populations are both large and relatively young, which should help to boost growth. He says: "That’s key. If you’ve got good demographics that makes things easy." Easy may be overstating it! But there is a well-established theory of the "demographic dividend," in which countries with a larger proportion of young workers are well-positioned for economic growth, as opposed to countries with a growing proportion of older workers and retirees.

4) One way to think about the MINTs is that they are standing as representatives for certain regions. Thus, Mexico, although its economy is half the size of Brazil's, represents the future for Latin America. Indonesia, although smaller than India's economy and much smaller than China's, represents the growth potential for Factory Asia--that group of countries building international supply chains across the region. Turkey represents the potential for growth in Factory Europe--the economic connections happening around the periphery of Europe. Nigeria's economy looks especially small on this list, but estimates for Nigeria are likely to be revised sharply upward in the near future, because the Nigerian government statistical agency is "re-basing" its GDP calculations so that they represent the structure of Nigeria's economy in 2014, rather than the previous "base" year of 1990. Even with this rebasing, Nigeria will remain the smallest economy on this list, but it is expected to become the largest economy in sub-Saharan Africa (surpassing South Africa). Thus, Nigeria represents the possibility that, at long last, economic growth may be finding a foothold in Africa.

I'm not especially confident that MINTs will catch on, at least not in the same way that BRICs did. But of the BRICs, Brazil, Russia, and to some extent India have not performed to expectations in the last few years. It's time for me to broaden the number of salient examples of emerging markets that I tote around in my head. In that spirit, the MINTs deserve attention.



Saturday, February 22, 2014

Intuition Behind the Birthday Bets

The "birthday bets" are a standard example in statistics classes. How many people must be in a room before it is more likely than not that two of them were born during the same month? Or in a more complex form, how many people must be in a room to make it more likely than not that two of them share the same birthday?

The misguided intro-student logic usually goes something like this. There are 12 months in a year. So to have more than a 50% chance of two people sharing a birth month, I need 7 people in the room (that is, 50% of 12, plus one more). Or there are 365 days in a year. So to have more than a 50% chance of two people sharing a birthday, we need 183 people in the room. In a short article in Scientific American, David Hand explains the math behind the 365-day birthday bet.

Hand argues that the common fallacy in thinking about these bets is that people think about how many people it would take to share the same birth month or birthday with them. Thus, I think about how many people would need to be in the room to share my birth month, or my birth date. But that's not the actual question being asked. The question is about whether any two people in the room share the same birth month or the same birth date.

The math for the birth month problem looks like this. The first person is born in a certain month. For the second person added to the room, the chances are 11/12 that the two people do not share a birth month. For the third person added to the room, the chances are 11/12 x 10/12 that all three of the people do not share a birth month. For the fourth person added to a room, the chances are 11/12 x 10/12 x 9/12 that all four of the people do not share a birth month. And for the fifth person added to the room, the chances are 11/12 x 10/12 x 9/12 x 8/12 that none of the five share a birth month. This multiplies to about 38%, which means that in a room with five people, there is a 62% chance that two of them will share a birth month.

Applying the same logic to the birthday problem, it turns out that when you have a room with 23 people, the probability is greater than 50% that two of them will share a birthday.
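Hand's logic translates directly into a few lines of code. This sketch multiplies out the "no match so far" probabilities and reports the group size at which a shared category first becomes more likely than not:

```python
# Exact calculation of the birthday bets: keep multiplying the
# probability that each newcomer avoids every date taken so far,
# and stop once the "no match" probability falls below 50%.

def people_needed(categories: int) -> int:
    """Smallest group where P(some two share a category) > 1/2."""
    p_no_match = 1.0
    n = 1
    while p_no_match >= 0.5:
        # person n+1 must avoid the n categories already taken
        p_no_match *= (categories - n) / categories
        n += 1
    return n

print(people_needed(12))   # 5  -> shared birth month
print(people_needed(365))  # 23 -> shared birthday
```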

I've come up with a mental image, or metaphor, that seems to help in explaining the intuition behind this result. Think of the birth months, or the birthdays, as written on squares on a wall. Now blindfold a person with very bad aim, and have them randomly throw a ball dipped in paint at the wall, so that it marks where it hits. The question becomes: If a wall has 12 squares, how many random throws will be needed before there is a greater than 50% chance of hitting the same square twice?

The point here is that after you have hit the wall once, there is one chance in 12 of hitting the same square with a second throw. If that second throw hits a previously untouched square, then the third throw has one chance in six (that is, 2/12) of hitting a marked square. If the third throw hits a previously untouched square, then the fourth throw has one chance in four (that is, 3/12) of hitting a marked square. And if the fourth throw hits a previously untouched square, then the fifth throw has one chance in three (4/12) of hitting a previously touched square.

The metaphor helps in understanding the problem as a sequence of events. It also clarifies that the question is not how many throws it takes to match the first throw (or the birth date of the first person entering the room), but whether any two match. And it helps in understanding that in a reasonably long sequence of events, even if none of the events individually has a greater than 50% chance of happening, it can still be likely that somewhere during the sequence the event will actually happen.

For example, when randomly throwing paint-dipped balls at a wall with 365 squares, think about a situation where you have thrown 18 balls without a match, so that approximately 5% of the wall is now covered. The next throw has about a 5% chance of matching a previous hit, as does the next throw, as does the next throw, as does the next throw. Taken together, all those roughly 5% chances one after another mean that you have a greater than 50% chance of matching a previous hit fairly soon--certainly well before you get up to 183 throws!
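For readers who prefer to see the metaphor in action, here is a quick Monte Carlo version of the paint-throwing experiment (my own construction, not from Hand's article):

```python
# Simulate throwing paint balls at a wall of `squares` squares and
# count the throws until some square is hit a second time.

import random

def throws_until_repeat(squares: int) -> int:
    hit = set()
    while True:
        square = random.randrange(squares)
        if square in hit:
            return len(hit) + 1  # this throw made the first repeat
        hit.add(square)

results = sorted(throws_until_repeat(365) for _ in range(100_000))
print("median throws until a repeat:", results[len(results) // 2])
# typically prints 23 -- far below the naive guess of 183
```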


Friday, February 21, 2014

The Health of US Manufacturing

The future of the U.S. manufacturing sector is a matter of legitimate concern. After all, manufacturing accounts for a disproportionate share of research and development, innovation, and the leading-edge industries of the future. A healthy manufacturing sector not only supports well-paying jobs directly, but also supports a surrounding nimbus of service-sector jobs in finance, design, marketing, sales, and other areas. On the world stage, manufacturing is still most of what is traded in the world economy. If the U.S. wants to downsize its trade deficits, a healthier manufacturing sector is part of the answer. But perceptions of U.S. manufacturing, along with the reasons for concern, vary across authors.

Oya Celasun, Gabriel Di Bella, Tim Mahedy and Chris Papageorgiou focus on the perhaps surprising strength of U.S. manufacturing in the immediate aftermath of the Great Recession, in "The U.S. Manufacturing Recovery: Uptick or Renaissance?" published as an IMF working paper in February 2014. They note that this is the first post-recession period since the 1970s in which manufacturing value-added rebounded starting a couple of years after the end of the recession.

In addition, they note that while U.S. manufacturing as a share of world manufacturing was falling from 2000 to 2007, in the last five years the U.S. share of world manufacturing seems to have stabilized at about 20%. Interestingly, China's share of world manufacturing, which had been on the rise before the recession, also seems to have stabilized since then at about 20%.


How does one make sense of these patterns? The IMF economists emphasize three factors: a lower real exchange rate of the U.S. dollar, which boosts exports; restraint in the growth of labor costs for U.S. manufacturing firms; and cheaper energy costs and expanding oil and gas drilling activity, which matter considerably for many manufacturing operations. They write: "The contribution of manufacturing exports to growth could exceed those of the recent past, fueled by rising global trade. U.S. manufacturing exports have proven resilient during the crisis. Further increases will require that the U.S. diversify further its export base towards the more dynamic world regions."

The Winter 2014 issue of the Journal of Economic Perspectives has several articles about U.S. manufacturing, with the lead-off article by Martin Neil Baily and Barry P. Bosworth, "US Manufacturing: Understanding Its Past and Its Potential Future." (Full disclosure: I've been Managing Editor of the JEP since 1987.) They point out that when measured in terms of value-added, manufacturing has been a more-or-less constant share of the U.S. economy for decades. The share of U.S. employment in manufacturing has been dropping steadily over time, but as they write:
"The decline in manufacturing employment as a share of the economy-wide total is a long-standing feature of the US data and also a trend shared by all high-income economies. Indeed, data from the OECD indicate that the decline in the share of US employment accounted for by the manufacturing sector over the past 40 years—at about 14 percentage points—is equivalent to the average of the G -7 economies (that is, Canada, France, Germany, Italy, Japan, and the United Kingdom, along with the United States)."



Of course, there are reasons for concern as well. For example, manufacturing output has held its ground in large part because of rapid growth in computing and information technology, while many other manufacturing industries have had a much harder time. But Baily and Bosworth argue that the real test for U.S. manufacturing is how well it competes in the emerging manufacturing industries of the future, including robotics, 3D printing, materials science, biotechnology, and the "Internet of Things," in which machinery and buildings are hooked into the web. It also depends on how U.S. manufacturing interacts with the recent developments in the U.S. energy industry, with its prospect of lower-cost domestic natural gas. In terms of public policy, they argue that the policies most important for U.S. manufacturing are not specific to manufacturing, but instead involve more basic policies like opening global markets, reducing budget deficits over time, improving education and training for U.S. workers, investing in R&D and infrastructure, adjusting the U.S. corporate tax code, and the like.

In a companion article in the Winter 2014 JEP, Gregory Tassey offers a different perspective in "Competing in Advanced Manufacturing: The Need for Improved Growth Models and Policies." Tassey's focus is less on the manufacturing sector as a whole and more on cutting-edge advanced manufacturing. He notes: "One result has been a steady deterioration in the US Census Bureau’s “advanced technology products” trade balance (see http://www.census.gov/foreign-trade/balance/c0007.html) over the past decade, which turned negative in 2002 and continued to deteriorate to a record deficit of $100 billion in 2011, improving only slightly to a deficit of $91 billion in 2012."

In Tassey's discussion of advanced manufacturing, he discusses "how it differs from the conventional simplified characterization of such investment as a two-step process in which the government supports basic research and then private firms build on that scientific base with applied research and development to produce “proprietary technologies” that lead directly to commercial products. Instead, the process of bringing new advanced manufacturing products to market usually consists of two additional distinct elements. One is “proof-of-concept research” to establish broad “technology platforms” that can then be used as a basis for developing actual products. The second is a technical infrastructure of “infratechnologies” that include the analytical tools and standards needed for measuring and classifying the components of the new technology; metrics and methods for determining the adequacy of the multiple performance attributes of the technology; and the interfaces among hardware and software components that must work together for a complex product to perform as specified."

Tassey argues that "proof-of-concept research" and "infratechnologies" are not going to be pursued by private firms acting alone, because the risks are too high, and will not be pursued effectively by the public sector acting alone, because the public sector is not well-suited to focusing on desired market products. Instead, these intermediate steps between basic research and proprietary applied development need to be developed through well-structured public-private partnerships. Further, he argues that without such partnerships, many advanced manufacturing technologies that show great promise in basic research will enter a "valley of death" and never be transformed into viable commercial products.

Of course, the various perspectives described here are not mutually exclusive. U.S. manufacturing can be benefiting from a short-term bounceback in cars and durable goods in the aftermath of the Great Recession, as well as from a weaker U.S. exchange rate and lower energy prices. It could probably use both broad-based economic policies and support for public-private partnerships. But the bottom-line lesson is that in a rapidly globalizing economy, a tautology has sharpened its teeth: U.S.-based manufacturing will succeed only to the extent that it makes economic sense to do the manufacturing in the United States.

Thursday, February 20, 2014

The Modest Effect of a Higher Minimum Wage

The mainstream arguments about "The Effects of a Minimum-Wage Increase on Employment and Family Income" are compactly laid out in a report released earlier this week by the Congressional Budget Office. On one side, the report estimates that about 16.5 million workers would see a rise in their average weekly income if the minimum wage were raised to $10.10/hour by the second half of 2016. On the other side, the higher minimum wage would reduce employment (in their central estimate) by 500,000 jobs, which from one perspective is only 0.3% of the labor force, but from another perspective is a loss of jobs concentrated almost entirely at the bottom, low-skilled end of the wage distribution. I've laid out some of my thoughts about weighing and balancing these and related tradeoffs here and here.

In this post, I want to focus on a different issue: the modest effect of raising the minimum wage on helping the working poor near and below the poverty line. The fundamental difficulty is that many of the working poor suffer from a lack of full-time work, rather than working for a sustained time at a full-time minimum wage job. As a result, many of the working poor aren't much affected by raising the minimum wage. Here are the CBO estimates:
"Families whose income will be below the poverty threshold in 2016 under current law will have an average income of $10,700, CBO projects ... The agency estimates that the $10.10 option would raise their average real income by about $300, or 2.8 percent. For families whose income would otherwise have been between the poverty threshold and 1.5 times that amount, average real income would increase by about $300, or 1.1 percent. The increase in average income would be smaller, both in dollar amounts and as a share of family income, for families whose income would have been between 1.5 times and six times the poverty threshold."
Of course, these are averages, and families who are now working many hours at the minimum wage would see larger increases, if they keep their jobs. But the higher minimum wage actually sends an amount of money to these workers that is relatively small in the context of other government programs to assist the working poor. CBO estimates that families below the poverty line, as a group, would receive an additional $5 billion in income from raising the minimum wage to $10.10/hour, while families with incomes between the poverty line and three times the poverty line would receive a total of $12 billion.

To put those numbers in context, consider a quick and dirty list of some other government programs to assist those near or below the poverty line.

Of course, this list doesn't include unemployment insurance, disability insurance, Social Security, Medicare, and other programs that may sometimes assist households with low incomes, along with their extended families.

A few thoughts:

1) Of course, the fact that raising the minimum wage has a relatively small effect in the context of these other programs doesn't make it a bad idea. But it does suggest some caution for both advocates and opponents about over-hyping the importance of the issue to the working poor.

2) In particular, it's fairly common to hear people talk about the rise in U.S. inequality and a need to raise the minimum wage in the same breath--as if one were closely related to the other. If only such a view were true! If only it were possible to substantially offset the rise in inequality over the last several decades by bumping up the minimum wage by a couple of bucks an hour! But the rise in inequality of incomes at the tip-top of the income distribution is far, far larger (probably measured in hundreds of billions of dollars) than the $17 billion the higher minimum wage would distribute to the working poor and near-poor below three times the poverty line. To put it another way, the problems of low-wage workers in a technology-intensive and globalizing United States are far more severe than a couple of dollars on the minimum wage.

3) A number of the current programs to help those with low incomes either didn't exist or existed in a much smaller form a decade or two or three ago, including the Earned Income Tax Credit, the Child Tax Credit, the expansion of Food Stamps,  and the rise in Medicaid spending. It seems peculiar to offer simple-minded comparisons of the hourly minimum wage now to, say, its inflation-adjusted levels of the late 1960s or early 1970s without taking into account that the entire policy structure for assisting those with low incomes has been dramatically overhauled since then, largely for the better, in ways that provide much more help to the working poor and their families than would a higher minimum wage.

4) For me, it's impossible to look at this list of government programs that provide assistance to those with low incomes and not notice that the cost of the U.S. health care system, in this case as embodied in Medicaid, is crowding out other spending. To put it another way, if lifting the minimum wage to $10.10/hour raises incomes for those at less than three times the official poverty line by $17 billion per year, that would be about what Medicaid spends every two weeks.
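That closing comparison is easy to verify with back-of-the-envelope arithmetic; the ballpark figure for combined federal and state Medicaid spending below is my assumption, not a number from the CBO report:

```python
# Rough check: $17 billion per year vs. two weeks of Medicaid spending.
minimum_wage_gain = 17e9  # CBO: $5B below the poverty line + $12B up to 3x
medicaid_annual = 450e9   # assumed annual federal + state Medicaid outlays

per_two_weeks = medicaid_annual / 26
print(f"Medicaid spends ~${per_two_weeks / 1e9:.0f} billion every two weeks")
# ~$17 billion -- about the size of the entire minimum-wage gain
```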


Wednesday, February 19, 2014

Behavioral Investors and the Dumb Money Effect

Individual stock market investors often underperform the market averages because of terrible timing: in particular, they often buy after the market has already risen, and sell after the market has already fallen, and this pattern means that they end up buying high and selling low. Michael J. Mauboussin investigates this pattern, and what investors might do about it, in "A behavioural take on investment returns," one of the essays appearing at the start of the Credit Suisse Global Investment Returns Yearbook 2014. He explains (citations omitted):

Perhaps the most dismal numbers in investing relate to the difference between three investment returns: those of the market, those of active investment managers, and those of investors. For example, the annual total shareholder returns were 9.3% for the S&P 500 Index over the past 20 years ended 31 December 2013. The annual return for the average actively managed mutual fund was 1.0–1.5 percentage points less, reflecting expense ratios and transaction costs. This makes sense because the returns for passive and active funds are the same before costs, on average, but are lower for active funds after costs. ... But the average return that investors earned was another 1–2 percentage points less than that of the average actively managed fund. This means that the investor return was roughly 60%–80% that of the market. At first glance, it does not make sense that investors who own actively managed funds could earn returns lower than the funds themselves. The root of the problem is bad timing. ... [I]nvestors tend to extrapolate recent results. This pattern of investor behavior is so consistent that academics have a name for it: the “dumb money effect.” When markets are down investors are fearful and withdraw their cash. When markets are up they are greedy and add more cash.
Here's a figure illustrating this pattern. The MSCI World Index, with annual changes shown by the red line, covers large and mid-sized stocks in 23 developed economies, representing about 85% of the total equity market in those countries. The blue bars show inflows and outflows of investor capital. Notice, for example, that investors were still piling into equity markets for a year after stock prices started falling in the late 1990s. More recently, investors were so hesitant to return to stock markets after 2008 that they pretty much missed the bounceback in global stock prices in 2009, as well as in 2012.
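To see how bad timing alone can drive a wedge between fund returns and investor returns, consider this toy calculation (my own construction, not from Mauboussin's essay). The fund alternates up and down years; the return-chaser adds cash only after seeing an up year, so more of the chaser's dollars sit through the down years:

```python
# Toy illustration of the "dumb money effect": same fund, same total
# contributions, but the chaser's timing destroys most of the return.

fund_returns = [0.30, -0.10, 0.30, -0.10, 0.30, -0.10]

def ending_wealth(contributions, returns):
    """contributions[i] is cash added at the start of year i."""
    wealth = 0.0
    for cash, r in zip(contributions, returns):
        wealth = (wealth + cash) * (1 + r)
    return wealth

steady = [300, 0, 0, 0, 0, 0]      # invests once, up front, and holds
chaser = [0, 100, 0, 100, 0, 100]  # adds cash only after each up year

print(ending_wealth(steady, fund_returns))  # ~480: the fund's full +60%
print(ending_wealth(chaser, fund_returns))  # ~319: same $300 in, only +6%
```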




What's the right strategy for avoiding this dumb money effect? Mauboussin explains:

"More than 40 years ago, Daniel Kahneman and Amos Tversky suggested an approach to making predictions that can help counterbalance this tendency. In cases where the correlation coefficient is close to zero, as it is for year-to-year equity market returns, a prediction that relies predominantly on the base rate is likely to outperform predictions derived from other approaches. ... The lesson should be clear. Since year-to-year results for the stock market are very difficult to predict, investors should not be lured by last year’s good results any more than they should be repelled by poor outcomes. It is better to focus on long-term averages and avoid being too swayed by recent outcomes. Avoiding the dumb money effect boils down to maintaining consistent exposure."

There are two other essays of interest at the start of this volume, both by Elroy Dimson, Paul Marsh, and Mike Staunton. In the first, "Emerging markets revisited," they write: "We construct an index of emerging market performance from 1900 to the present day and document the historical equity premium from the perspective of a global investor. We show how volatility is dampened as countries develop, study trends in international correlations and document style returns in emerging markets. Finally we explore trading strategies for long-term investors in the emerging world." In the second essay, "The Growth Puzzle," Dimson, Marsh, and Staunton explore the question of why stock prices over time have not measured up to economic growth in the ways one might expect. The report also offers a lively brief country-by-country overview of investment returns often back to 1900 in a wide array of countries and regions around the world.

Tuesday, February 18, 2014

Moore's Law: At Least a Little Longer

One can argue that the primary driver of U.S. and even world economic growth in the last quarter-century is Moore's law--that is, the claim first advanced back in 1965 by Gordon Moore, one of the founders of Intel Corporation, that the number of transistors on a computer chip would double every two years. But can it go on? Harald Bauer, Jan Veira, and Florian Weig of the McKinsey Global Institute consider the issues in "Moore’s law: Repeal or renewal?" a December 2013 paper. They write:

"Moore’s law states that the number of transistors on integrated circuits doubles every two years, and for the past four decades it has set the pace for progress in the semiconductor industry. The positive by-products of the constant scaling down that Moore’s law predicts include simultaneous cost declines, made possible by fitting more transistors per area onto silicon chips, and performance increases with regard to speed, compactness, and power consumption. ... Adherence to Moore’s law has led to continuously falling semiconductor prices. Per-bit prices of dynamic random-access memory chips, for example, have fallen by as much as 30 to 35 percent a year for several decades.
As a result, Moore’s law has swept much of the modern world along with it. Some estimates ascribe up to 40 percent of the global productivity growth achieved during the last two decades to the expansion of information and communication technologies made possible by semiconductor performance and cost improvements."

The authors argue that technological advances already in the works are likely to sustain Moore's law for another 5-10 years. As I've written before, the power of doubling is difficult to appreciate at an intuitive level, but it means that each increase is as big as everything that came before. Intel is now etching transistors at 22 nanometers, and as the company points out, you could fit 6,000 of these transistors across the width of a human hair; or if you prefer, it would take 6 million of these 22-nanometer transistors to cover the period at the end of a sentence. Also, a 22-nanometer transistor can switch on and off 100 billion times in a second.
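The arithmetic of doubling is simple to verify. This short snippet checks that each doubling adds as much as all previous growth combined, and that a decade's worth of two-year doublings compounds to roughly a thousandfold increase:

```python
# The power of doubling: at one doubling every two years, each step
# adds as much as everything that came before, and 20 years ~ 1000x.

count = 1
for year in range(0, 20, 2):
    previous = count
    count *= 2
    assert count - previous == previous  # the step equals all prior growth

print(count)  # 2**10 = 1024 after 20 years of Moore's law
```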

The McKinsey analysts point out that while it is technologically possible for Moore's law to continue, the economic costs of further advances are becoming very high. They write: "A McKinsey analysis shows that moving from 32nm to 22nm nodes on 300-millimeter (mm) wafers causes typical fabrication costs to grow by roughly 40 percent. It also boosts the costs associated with process development by about 45 percent and with chip design by up to 50 percent. These dramatic increases will lead to process-development costs that exceed $1 billion for nodes below 20nm. In addition, the state-of-the art fabs needed to produce them will likely cost $10 billion or more. As a result, the number of companies capable of financing next-generation nodes and fabs will likely dwindle."

Of course, it's also possible to have performance improvements and cost decreases on chips already in production: for example, the cutting edge of computer chips today will probably look like a steady old cheap workhorse of a chip in about five years. I suspect that we are still near the beginning, and certainly not yet at the middle, of finding ways for information and communications technology to alter our work and personal lives. But the physical problems and  higher costs of making silicon-based transistors at an ever-smaller scale won't be denied forever, either.  




Monday, February 17, 2014

Jousting over the One Percent

Robert Solow vs. Greg Mankiw, jousting over inequality. What more could those who enjoy academic blood sports desire? Their exchange is in the "Correspondence" section of the Winter 2014 issue of the Journal of Economic Perspectives.  Solow is writing in response to Mankiw's article in the Summer 2013 issue of JEP, called "Defending the One Percent." (All articles in JEP are freely available and ungated, courtesy of the American Economic Association.) Here's a quick taste of the exchange, to whet your appetite for the rest.

Solow's opening paragraph:

"The cheerful blandness of N. Gregory Mankiw’s “Defending the One Percent” (Summer 2013, pp. 21–34) may divert attention from its occasional unstated premises, dubious assumptions, and omitted facts. I have room to point only to a few such weaknesses; but the One Percent are pretty good at defending themselves, so that any assistance they get from the sidelines deserves scrutiny."

Mankiw's opening paragraph:

"Robert Solow’s scattershot letter offers various gripes about my paper “Defending the One Percent.” Let me respond, as blandly and cheerfully as I can, to his points."

Solow's closing paragraph:

"Sixth, who could be against allowing people their “just deserts?” But there is that matter of what is “just.” Most serious ethical thinkers distinguish between deservingness and happenstance. Deservingness has to be rigorously earned. You do not “deserve” that part of your income that comes from your parents’ wealth or connections or, for that matter, their DNA. You may be born just plain gorgeous or smart or tall, and those characteristics add to the market value of your marginal product, but not to your just deserts. It may be impractical to separate effort from happenstance numerically, but that is no reason to confound them, especially when you are thinking about taxation and redistribution. That is why we may want to temper the wind to the shorn lamb, and let it blow on the sable coat." 

Mankiw's closing paragraph:

"Sixth, and finally, Solow asks, who could be against allowing people their “just deserts”? Actually, much of the economics literature on redistribution takes precisely that stand, albeit without acknowledging doing so. The standard model assumes something like a utilitarian objective function and concludes that the optimal tax code comes from balancing diminishing marginal utility against the adverse incentive effects of redistribution. In this model, what people deserve plays no role in the formulation of optimal policy. I agree with Solow that figuring out what people deserve is hard, and I don’t pretend to have the final word on the topic. But if my paper gets economists to focus a bit more on just deserts when thinking about policy, I will feel I have succeeded."

Full disclosure: I've been Managing Editor of the JEP since 1987, so there is a distinct possibility that I am prejudiced toward finding the contents of the journal to be highly interesting.

Saturday, February 15, 2014

How the 2009 Tax Haven Agreement Failed

Back in April 2009, a summit of the G20 countries agreed to lean hard on tax haven nations to sign treaties to exchange information with other countries. News stories made much of the agreement (for examples, here and here). But what effect did the agreement actually have? Niels Johannesen and Gabriel Zucman tackle this question in "The End of Bank Secrecy? An Evaluation of the G20 Tax Haven Crackdown," which appears in the most recent issue of the American Economic Journal: Economic Policy (6:1, pp. 65-91). The journal isn't freely available online, but many readers will have access through library subscriptions.

The short answer is that the crackdown didn't work very well. The tax haven countries were encouraged to sign bilateral treaties with other nations, and they went ahead and signed 300 or so of these treaties. But not every tax haven has a treaty with every country, and so the overall effect has been a relocation of money between tax havens. Here's the data they had available:

"For  the purpose of our study, the Bank for International Settlements (BIS) has given us  access to bilateral bank deposit data for 13 major tax havens, including Switzerland,  Luxembourg, and the Cayman Islands. We thus observe the value of the deposits  held by French residents in Switzerland, by German residents in Luxembourg, by  US residents in the Cayman Islands and so forth, on a quarterly basis from the  end of 2003 to the middle of 2011."
The full list of the 13 tax havens is Austria, Belgium, the Cayman Islands, Chile, Cyprus, Guernsey, the Isle of Man, Jersey, Luxembourg, Macao, Malaysia, Panama, and Switzerland. These 13 jurisdictions account for about 75% of all the deposits in tax havens that report to the Bank for International Settlements. The authors also have data grouped together for five other tax havens: the Bahamas, Bahrain, Hong Kong, the Netherlands Antilles, and Singapore. They write:

"We obtain two main results. First, treaties have had a statistically significant but quite modest impact on bank deposits in tax havens: a treaty between say France and Switzerland causes an approximately 11 percent decline in the Swiss deposits held by French residents. Second, and more importantly, the treaties signed by tax havens have not triggered significant repatriations of funds, but rather a relocation of deposits between tax havens. We observe this pattern in the aggregate data: the global value of deposits in havens remains the same two years after the start of the crackdown, but the havens that have signed many treaties have lost deposits at the expense of those that have signed few. We also observe this pattern in the bilateral panel regressions: after say France and Switzerland sign a treaty, French deposits increase in havens that have no treaty with France."

As with most studies, there are complications of interpretation. Are front companies being used to hide the movement of funds in a way that doesn't show up in these statistics? Might some people be responding to the treaties by reporting more of their tax-haven income to domestic tax authorities? This data set doesn't allow one to address those questions. But the evidence from this study strongly suggests that trying to deal with tax havens through bilateral agreements is likely to be a very long-running game, one that is ultimately unlikely to make much difference in how companies and individuals in the rest of the world are able to make use of tax havens.

Finally, James Hines wrote "Treasure Islands" for the Fall 2010 Journal of Economic Perspectives, which makes an effort to look at both the concerns over tax havens and some possible benefits they might convey. From the abstract: "The United States and other higher-tax countries frequently express concerns over how tax havens may affect their economies. Do they erode domestic tax collections; attract economic activity away from higher-tax countries; facilitate criminal activities; or reduce the transparency of financial accounts and so impede the smooth operation and regulation of legal and financial systems around the world? Do they contribute to excessive international tax competition? These concerns are plausible, albeit often founded on anecdotal rather than systematic evidence. Yet tax haven policies may also benefit other economies and even facilitate the effective operation of the tax systems of other countries."

Full disclosure: The AEJ:EP is published by the American Economic Association, which also publishes the Journal of Economic Perspectives, where I work as Managing Editor. All JEP articles, like the Hines article mentioned above, are freely available courtesy of the AEA.

Friday, February 14, 2014

A German Employment Miracle Narrative

Germany's unemployment rate peaked at 12.1% in March 2005 (based on OECD statistics) and has declined more or less steadily since then, with only a small hiccup during the Great Recession. Here's a figure to illustrate, from the ever-useful FRED website run by the Federal Reserve Bank of St. Louis. How did Germany--the world's fourth-largest national economy--do it? Are there lessons to learn?

Graph of Registered Unemployment Rate for Germany

There are essentially three categories of explanation that have been suggested for Germany's remarkable labor market performance during the Great Recession: 1) decentralization of wage bargaining in Germany starting in the 1990s; 2) the "Hartz reforms" implemented in the mid-2000s; and 3) how the adoption of the euro influenced Germany's economic situation.

A nice statement of the first point of view, decentralization of German wage bargaining, appears in "From Sick Man of Europe to Economic Superstar: Germany’s Resurgent Economy," by Christian Dustmann, Bernd Fitzenberger, Uta Schönberg, and Alexandra Spitz-Oener, in the Winter 2014 issue of the Journal of Economic Perspectives. (Full disclosure: I've been the Managing Editor of the JEP since 1987.) Dustmann et al. start their story in the early 1990s. Germany was facing the enormous costs and disruptions of reunification, in which higher-wage West Germany found itself part of the same country as lower-wage East Germany. In addition, the fall of the Soviet Union offered German firms access to imports produced by lower-wage eastern European workers, many of whom already had educational, economic, or cultural ties to Germany. German industry began a "factory Europe" approach, building international supply chains across the countries of eastern Europe, as well as the rest of the world.

Under these pressures, German unions at the industry- and firm-level showed considerable flexibility. The number of German workers covered by unions declined: "From 1995 to 2008, the share of employees covered by industry-wide agreements fell from 75 to 56 percent, while the share covered by firm-level agreements fell from 10.5 to 9 percent." Wages rose more slowly than productivity, and so, starting around 1994, Germany's labor costs rose more slowly than those in other European countries, as well as the United States. In addition, Germany's wages became markedly more unequal. Here's a figure showing wage growth at the 85th percentile of wages, the 50th percentile, and the 15th percentile.


Dustmann, Fitzenberger, Schönberg, and Spitz-Oener argue that Germany's labor market institutions, which emphasize consensus bargaining at the firm-level and industry-level, actually turned out to be much more flexible than labor market institutions in other countries like France and Italy, where union wage negotiations happen at a national level. Another sign of Germany's labor market flexibility is that the country has no minimum wage.
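To see why wages lagging productivity matters for competitiveness, here's a stylized back-of-the-envelope calculation; the growth rates are invented for illustration, not estimates for Germany:

```python
# Stylized unit-labor-cost arithmetic, not German data: when wages
# grow more slowly than productivity, the labor cost of producing a
# unit of output falls. Both growth rates are illustrative assumptions.

wage_growth = 0.01           # assumed annual nominal wage growth
productivity_growth = 0.025  # assumed annual productivity growth
years = 14                   # roughly the 1994-2008 period discussed above

ulc = 1.0  # unit labor cost index, normalized to 1.0 at the start
for _ in range(years):
    ulc *= (1 + wage_growth) / (1 + productivity_growth)

print(f"Unit labor cost index after {years} years: {ulc:.2f}")  # about 0.81
```

Compounded over 14 years, even that modest gap cuts unit labor costs by nearly a fifth relative to a competitor whose wages simply keep pace with productivity.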

A second set of explanations for Germany's strong labor market performance in recent years emphasizes the "Hartz reforms" that were undertaken between 2003 and 2005. Ulf Rinne and Klaus F. Zimmermann offer a nice exposition of this point of view in "Is Germany the North Star of Labor Market Policy?" in the December 2013 issue of the IMF Economic Review. (This journal isn't freely available online, but readers may have access through library subscriptions.) They summarize the reforms this way:
"First, the reforms reorganized existing employment services and related policy measures. Importantly, unemployment benefit and social assistance schemes were restructured, and a means-tested flat-rate benefit replaced earnings-related, long-term unemployment assistance. Second, a significant reduction of long-term unemployment benefits—in terms of both amount and duration—and stricter monitoring activities were implemented to stimulate labor supply by providing the unemployed with more incentives to take up a job. Third, massive deregulation of fixed-term contracts, agency work, and marginal part-time work was undertaken to stimulate labor demand. The implementation of the reforms in these three areas was tied to an evaluation mandate that systematically analyzed the effectiveness and efficiency of the various measures of ALMP [active labor market policy]."
To put all this a little more bluntly, it was strong medicine. Early retirement options were phased out. Unemployment benefits were limited in eligibility, size, and duration. For example, the unemployed had to prove periodically that they were really looking for work. Also, remember that Germany was enacting many of these policies right around 2005 when its economy was going through a deep recession that spiked the unemployment rate.  During the recession, a number of German firms avoided layoffs by using the flexibility of the Hartz reforms to reduce hours worked--and wages paid.

The third set of explanations for Germany's lower unemployment rate focuses on the creation of the euro and the pattern of German trade surpluses that has resulted. The figure shows Germany's trade surplus. Notice that around 2001, when the euro moves into general use, Germany's trade surplus takes off. This has been called the "Chermany problem"--that is, after about 2000, both China and Germany had exchange rates at a low enough level to generate large and rising trade surpluses.


Graph of Current Account Balance: Total Trade of Goods for Germany

But notice that Germany's unemployment rate was rising from about 2000 to 2005, even as the trade surpluses rose. Then the trade surpluses declined after about 2008, as the global financial crisis hit, and haven't yet rebounded to their peak--at a time when Germany's unemployment rate has been falling. In short, outsized trade deficits and surpluses can lead to economic problems of various kinds, but trade imbalances often don't have a tight link to unemployment rates. (In the US economy, for example, trade deficits were quite high when unemployment was low during the height of the housing bubble back around 2006, but since the Great Recession US trade deficits have been lower while the unemployment rate has been higher.)

While academics and policymakers will continue to dispute the reasons for Germany's stellar performance in reducing unemployment in the last few years, I'll just note that none of the possible answers look easy. Having productivity growth outstrip wages over time, so that labor costs fall relative to competitors, isn't easy. Reorganizing industry around global supply chains that include suppliers from lower-wage economies isn't easy. Increasing inequality of wages isn't easy. "Structural labor market reforms" that include trimming back on early retirement and unemployment insurance aren't easy. U.S. discussions of economic policy sometimes make it sound as if the government can just "create jobs" with large enough spending and/or tax cuts, or low enough interest rates. But real and lasting solutions for reducing unemployment and keeping it low aren't that easy.

Thursday, February 13, 2014

Lesser Known Improvements in the US Labor Market

The headline unemployment rate gets almost all of the attention, but the U.S. Bureau of Labor Statistics also publishes the results of the Job Openings and Labor Turnover Survey (JOLTS), which gives more detail on underlying patterns of hiring and firing. The JOLTS numbers for December 2013 came out on Tuesday, and here are a few of the highlights that caught my eye.

One useful measure of the state of the labor market is the number of unemployed people per job opening. After the 2001 recession this ratio reached nearly 3:1, and during the worst of the Great Recession there were nearly 7 unemployed people for every job opening. But by December 2013, the ratio was back down to 2.6--not quite as healthy as one would like, but still a vast improvement.
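For readers who want to see where a number like 2.6 comes from, the ratio is just unemployed persons divided by job openings. The figures below are round numbers roughly consistent with the December 2013 levels, used here only for illustration:

```python
# Illustrative round figures, not the precise BLS counts.
unemployed = 10.4e6   # assumed number of unemployed persons
openings = 4.0e6      # assumed number of job openings

print(f"Unemployed per job opening: {unemployed / openings:.1f}")  # 2.6
```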


Another measure is the ratio of quits to layoffs/discharges. Quits are when a person leaves a job voluntarily; layoffs and discharges are when people are separated from their jobs involuntarily. In a healthy economy, more people quit than are forced to leave, so the ratio is above 1. In the recession, voluntary quits dwindled as people held onto the jobs they had, and involuntary layoffs rose, so the ratio fell below 1. We have now returned to an economy where those who leave their jobs are more likely to have done so by quitting voluntarily than by being laid off or discharged involuntarily.



Finally, the Beveridge curve shows a relationship between job openings and unemployment in an economy. The usual pattern is that when job openings are few, unemployment is higher, and when job openings are many, unemployment is lower. As the illustration shows, the data for the U.S. economy sketched out this kind of Beveridge curve as the 2001 recession arrived, as the labor market recovered, and then as the Great Recession hit. But since the recession ended, the U.S. economy has not moved back up the same Beveridge curve. Instead, the data since the end of the recession is tracing out a new Beveridge curve to the right of the previous one. The shift in the Beveridge curve means that for a given level of job openings (shown on the vertical axis) the corresponding unemployment rate (shown on the horizontal axis) is higher. This outcome is often described as saying that the economy isn't doing as good a job of "matching." But as the BLS writes: "For example, a greater mismatch between available jobs and the unemployed in terms of skills or location would cause the curve to shift outward, up and toward the right."
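To make the shift concrete, here's a stylized sketch; the hyperbolic functional form and the constants are invented for illustration, not fitted to BLS data:

```python
# A stylized Beveridge curve: job openings and unemployment trace a
# downward-sloping curve, and an outward shift means higher
# unemployment at any given level of job openings. The constants
# 12 and 16 are illustrative assumptions.
import numpy as np

unemployment = np.linspace(4.0, 10.0, 7)  # unemployment rate, percent

openings_old = 12.0 / unemployment  # pre-recession curve (stylized)
openings_new = 16.0 / unemployment  # shifted curve: the same openings
                                    # rate goes with higher unemployment

for u, old, new in zip(unemployment, openings_old, openings_new):
    print(f"u = {u:4.1f}%   openings before: {old:4.2f}%   after shift: {new:4.2f}%")
```

Reading across the rows: after the shift, any given unemployment rate coexists with more unfilled vacancies, which is what worse "matching" means.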


The JOLTS report just presents these statistics; it isn't about analyzing the possible underlying causes of such a mismatch. Here is a blog post from August 2012 with some additional background on Beveridge curves, historical patterns, and their application to the U.S. economy in recent years.

Wednesday, February 12, 2014

Can Other Lenders Beat Back Payday Lending?

It's easy to have a knee-jerk reaction that payday lending is abusive. A payday loan works like this. The borrower writes a check for, say, $200. The lender gives the borrower $170 in cash, and promises not to deposit the check for, say, two weeks. In effect, the borrower pays $30 to receive a loan of $170, which looks like a very steep rate of "interest"--although it's technically a "fee"--for a two-week loan.
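Before going further, it's worth translating that fee into an annualized interest rate. Here's a quick calculation using the illustrative numbers above (they're from the example, not data on any actual lender):

```python
# Implied annualized cost of the payday loan described above:
# a $30 fee to receive $170 in cash for two weeks.
fee = 30.0         # dollars paid for the loan
principal = 170.0  # cash the borrower actually receives
term_weeks = 2

period_rate = fee / principal        # about 17.6% per two-week period
periods_per_year = 52 / term_weeks   # 26 two-week periods in a year

simple_apr = period_rate * periods_per_year  # ~459%, ignoring compounding
effective_apr = (1 + period_rate) ** periods_per_year - 1  # ~6,700% compounded

print(f"Per-period rate: {period_rate:.1%}")
print(f"Simple APR:      {simple_apr:.0%}")
print(f"Effective APR:   {effective_apr:.0%}")
```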

Sometimes knee-jerk reactions are correct, but economists at least try to analyze before lashing out. Here and here, I've looked at some of the issues with payday lending from the standpoint of whether laws to protect borrowers make sense. It's a harder issue than it might seem at first. If the options are to take out a payday loan, which is quick and easy, or pay fees for bank or credit card overdrafts, or have your heat turned off because you are behind on the bills, or not get your car fixed for a couple of weeks and miss work as a result, the payday loan fee doesn't look quite as bad. People can abuse payday loans, but if we're going to start banning financial products that people abuse, my guess is that credit cards would be the first to go. Sure, it would be better if people had other options for short-term borrowing, but many people don't.

James R. Barth, Priscilla Hamilton, and Donald Markwardt tackle a different side of the question in "Where Banks Are Few, Payday Lenders Thrive," which appears in the Milken Institute Review, First Quarter 2014. The essay is based on a fuller report, published last October, available here. They suggest the possibility that banks and internet lending operations may be starting to provide short-term uncollateralized loans that are similar to payday loans, but at a much lower price. In setting the stage, they write:

"Some 12 million American people borrow nearly $50 billion annually through “payday” loans – very-short-term unsecured loans that are often available to working individuals with poor (or nonexistent) credit. ... In the mid-1990s, the payday loan industry consisted of a few hundred lenders nationwide; today, nearly 20,000 stores do business in 32 states. Moreover, a growing number of payday lenders offer loans over the Internet. In fact, Internet payday loans accounted for 38 percent of the total in 2012, up from 13 percent in 2007. The average payday loan is $375 and is typically repaid within two weeks."
Barth, Hamilton, and Markwardt collect evidence showing that across the counties of California, where there are more banks per person, there are fewer payday lenders per person. They also point to several experiments and new firms suggesting that somewhat larger loans, extended over several months rather than several days or a couple of weeks, may well be a viable commercial product. For example, the Federal Deposit Insurance Corporation ran a pilot program to see if banks could offer "small-dollar loans" or SDLs.

"The FDIC’s Small-Dollar Loan Pilot Program has yielded important insights into how banks can offer affordable small-dollar loans (SDLs) without losing money in the process. Under the pilot program concluded in 2009, banks made loans of up to $1,000 at APRs of less than one-tenth those charged by payday loan stores. Banks typically did not check borrowers’ credit scores, and those that did still typically accepted borrowers on the lower end of the subprime range. Even so, SDL charge-off rates were comparable to (or less than) losses on other unsecured forms of credit such as credit cards. Note, moreover, that banks featuring basic financial education in the lending process reaped further benefits by cutting SDL loss rates in half. The success of the banks’ SDLs has been largely attributed to lengthening the loan term beyond the two-week paycheck window. Along with reducing transaction costs associated with multiple two-week loans, longer terms gave borrowers the time to bounce back from financial emergencies (like layoffs) and reduced regular payments to more manageable sums. ... In the FDIC pilot, a majority of banks reported that SDLs helped to cross-sell other financial services and to establish enduring, profitable customer relationships."

What about if the financial lender can't use the small-dollar loan as a way of cross-selling other financial products? Some companies seem to be making this approach work, too.

"Another newcomer, Progreso Financiero, employs a proprietary scoring system for making small loans to underserved Hispanics. Progreso’s loans follow the pattern that emerged in the FDIC pilot program – larger loans than payday offerings with terms of many months rather than days and, of course, more affordable APRs. Moreover, the company has shown that the business model works at substantial scale: it originated more than 100,000 loans in 2012. LendUp, an online firm, makes loans available 24/7, charging very high rates for very small, very short-term loans. But it offers the flexibility of loans for up to six months at rates similar to credit cards, once a customer
has demonstrated creditworthiness by paying back shorter-term loans. It also offers free financial education online to encourage sound decision-making."
In short, the high fees charged by payday lenders may be excessive not just in the knee-jerk sense, but also in a narrowly economic sense: they seem to be attracting competitors who will drive down the price.


Tuesday, February 11, 2014

Lyndon Johnson's War on Poverty Speech

Lyndon Johnson declared "war on poverty" during his State of the Union address on January 8, 1964. A half-century later, here are a few things that struck me in looking back at that speech.

1) Johnson is frontal and direct in declaring the War on Poverty. As one example of several, he says: "This administration today, here and now, declares unconditional war on poverty in America. ... It will not be a short or easy struggle, no single weapon or strategy will suffice, but we shall not rest until that war is won. The richest Nation on earth can afford to win it. We cannot afford to lose it. ... Poverty is a national problem, requiring improved national organization and support. But this attack, to be effective, must also be organized at the State and the local level and must be supported and directed by State and local efforts. For the war against poverty will not be won here in Washington. It must be won in the field, in every private home, in every public office, from the courthouse to the White House."

2) Johnson's announced strategy in the War on Poverty is focused on offering a fair opportunity to all, not on redistribution of income. He said, "Our chief weapons in a more pinpointed attack will be better schools, and better health, and better homes, and better training, and better job opportunities to help more Americans, especially young Americans, escape from squalor and misery and unemployment rolls where other citizens help to carry them. Very often a lack of jobs and money is not the cause of poverty, but the symptom. The cause may lie deeper in our failure to give our fellow citizens a fair chance to develop their own capacities, in a lack of education and training, in a lack of medical care and housing, in a lack of decent communities in which to live and bring up their children."

3) Johnson combines the War on Poverty with a pledge for lower spending, lower budget deficits, and reduced federal employment. Johnson said: "For my part, I pledge a progressive administration which is efficient, and honest and frugal. The budget to be submitted to the Congress shortly is in full accord with this pledge. It will cut our deficit in half--from $10 billion to $4,900 million. It will be, in proportion to our national output, the smallest budget since 1951. It will call for a substantial reduction in Federal employment, a feat accomplished only once before in the last 10 years. While maintaining the full strength of our combat defenses, it will call for the lowest number of civilian personnel in the Department of Defense since 1950."

These promises were largely kept. The budget deficit was 0.9% of GDP in 1964, and LBJ pledged to make it still smaller. Indeed, the deficits were 0.2% of GDP in 1965 and 0.5% of GDP in 1966. After the enormous levels of debt taken on to fight World War II, the economy grew faster than the government debt through the 1950s and 1960s. In particular, the ratio of federal debt held by the public to GDP had been around 108% just after World War II; it had already declined to about 40% by 1964, and it fell further to 28% by 1970.
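The arithmetic of outgrowing a debt is worth a quick sketch. The growth rates here are assumptions chosen only to trace a path roughly like the one just described, not historical series:

```python
# Stylized arithmetic, not historical data: if nominal GDP grows
# faster than nominal debt, the debt-to-GDP ratio falls even with
# no repayment. Both growth rates are illustrative assumptions.

ratio = 1.08        # debt held by the public, roughly 108% of GDP after WWII
debt_growth = 0.01  # assumed nominal debt growth per year
gdp_growth = 0.067  # assumed nominal GDP growth per year

for year in range(1946, 1971):
    if year in (1946, 1964, 1970):
        print(f"{year}: debt/GDP = {ratio:.0%}")
    ratio *= (1 + debt_growth) / (1 + gdp_growth)
# Prints roughly 108%, 40%, and 29%: close to the actual path.
```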

4) Johnson's War on Poverty comes when the U.S. economy is about to run white-hot. In January 1964, the unemployment rate was 5.6% as the U.S. economy emerged from a recession that had bottomed out in February 1961. By February 1966, the monthly unemployment rate would fall under 4%, and it would stay there through January of 1970. There has been a widespread belief among economists that the "guns and butter" policies of that time (that is, a combination of the Vietnam war and new social programs) helped pave the way for some of the inflationary pressures of the 1970s. But the powerful economic growth made it an ideal time to seek to reduce poverty.

5) Johnson's War on Poverty speech is much shorter than modern State of the Union addresses, checking in at about 3,200 words. For comparison, Barack Obama's 2014 State of the Union address ran more than twice as long, at over 6,700 words.

6) The "War on Poverty" as defined in the 1960s has largely been won. Yes, the official poverty rate remains high, but the poverty rate is prone to some well-known difficulties: it doesn't take non-cash assistance programs like food stamps and Medicaid into account. Nor does it take into account that those with low-incomes today have access to technologies that affect their lives in so many ways, including household appliances, health, transportation, diet, and many more. When these kinds of factors are taken into account, we have largely won the war on poverty as the poverty level was defined it in 1964.

But this victory is a slippery one. Poverty is always defined in the context of a place and time. After all, Johnson could have pointed out that the U.S. had already largely won the war on poverty as poverty would have been defined a half-century before his speech, back in 1914. What it means to be poor in 2014 is in some ways quite different than in 1964, after the passage of Medicaid, the expansion of food stamps, the passage of the Civil Rights Act of 1964, and more. However, poverty in 2014--especially in terms of lack of opportunity for many Americans who lack support in their communities, schools, local economy, and sometimes their families--is a real and genuine problem of our own time and place.

Monday, February 10, 2014

Division of Labor: GM, Toyota and Adam Smith

One long-standing critique of capitalism portrays an alienated worker, mindlessly carrying out a repetitive task hour after hour. For example, here is a description of working on the assembly line at General Motors a few decades ago:
"In the 1960s and 1970s, jobs on the General Motors assembly line were very narrowly defined; a worker would perform the same set of tasks—for example, screwing in several bolts—every 60 seconds for eight to ten hours per day. Workers were not expected or encouraged to do anything beyond this single task. Responsibility for the design and improvement of the assembly system was vested firmly in the hands of supervisors and manufacturing engineers, while vehicle quality was the responsibility of the quality department, which inspected vehicles as they came off the assembly line. GM’s managers were notorious for believing that blue collar workers had little—if anything—to contribute to the improvement of the production process ..."

The quotation is from Susan Helper and Rebecca Henderson in "Management Practices, Relational Contracts, and the Decline of General Motors," which appears in the most recent issue of the Journal of Economic Perspectives. (Like all JEP articles back to the first issue in 1987, it is freely available on-line courtesy of the American Economic Association.) Of course, back in the 1960s in particular, General Motors was an enormously successful and profitable firm, and so the extreme division of labor seemed to be working fine from the company's point of view.

Both the notion that the division of labor can lead to enormous gains in output, and the concern that an extreme division of labor can be bad for workers, are themes in Adam Smith's The Wealth of Nations. (Here, I'll refer to the ever-useful version available online at the Library of Economics and Liberty.) Smith's well-known story of the division of labor in a pin factory comes almost immediately: the first chapter of Book I is entitled "Of the Division of Labour." Smith discusses how the manufacturing of pins can be broken up into 18 separate tasks, and how, after such a division of labor, a small operation of 10 or so people (some doing multiple tasks) could then produce 48,000 pins per day. Smith hypothesizes that one person working alone, not knowing much about the specifics of pin manufacture, might be able to make only 20 pins per day--maybe only one.
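The arithmetic behind Smith's example is worth making explicit; the numbers below are the ones just quoted, with the 20-pins-a-day figure as Smith's more generous solo estimate:

```python
# Smith's pin-factory arithmetic, using the numbers quoted above.
output_with_division = 48_000  # pins per day from a ten-person shop
workers = 10
output_alone = 20              # Smith's generous estimate for one worker

per_worker = output_with_division / workers  # 4,800 pins per worker per day
gain = per_worker / output_alone             # a 240-fold productivity gain

print(f"Per worker with division of labor: {per_worker:,.0f} pins/day")
print(f"Productivity multiple vs. working alone: {gain:,.0f}x")
```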

But a lesser-known passage back in Book V of The Wealth of Nations discusses the perils of too great a division of labor. Smith writes:
"In the progress of the division of labour, the employment of the far greater part of those who live by labour, that is, of the great body of the people, comes to be confined to a few very simple operations, frequently to one or two. But the understandings of the greater part of men are necessarily formed by their ordinary employments. The man whose whole life is spent in performing a few simple operations, of which the effects are perhaps always the same, or very nearly the same, has no occasion to exert his understanding or to exercise his invention in finding out expedients for removing difficulties which never occur. He naturally loses, therefore, the habit of such exertion, and generally becomes as stupid and ignorant as it is possible for a human creature to become. The torpor of his mind renders him not only incapable of relishing or bearing a part in any rational conversation, but of conceiving any generous, noble, or tender sentiment, and consequently of forming any just judgment concerning many even of the ordinary duties of private life. Of the great and extensive interests of his country he is altogether incapable of judging, and unless very particular pains have been taken to render him otherwise, he is equally incapable of defending his country in war. The uniformity of his stationary life naturally corrupts the courage of his mind, and makes him regard with abhorrence the irregular, uncertain, and adventurous life of a soldier. It corrupts even the activity of his body, and renders him incapable of exerting his strength with vigour and perseverance in any other employment than that to which he has been bred. His dexterity at his own particular trade seems, in this manner, to be acquired at the expence of his intellectual, social, and martial virtues. But in every improved and civilized society this is the state into which the labouring poor, that is, the great body of the people, must necessarily fall, unless government takes some pains to prevent it."
You'll occasionally hear someone say that "Adam Smith never really considered the downside of the division of labor," which is clearly incorrect. Indeed, Smith goes on to make this concern one basis for his argument that government should support public education for everyone.

For GM, the extreme division of labor ended up not working well. Helper and Henderson quote an autoworker named Joel Smith about what it was like working at GM back in those days. Smith said:

In the old days, we fought for job security in various ways: “Slow down, don’t work so fast.” “Don’t show that guy next door how to do your job—management will get one of you to do both of your jobs.” “Every now and then, throw a monkey wrench into the whole thing so the equipment breaks down—the repair people will have to come in and we’ll be able to sit around and drink coffee. They may even have to hire another guy and that’ll put me further up on the seniority list.”
Management would respond in kind: “Kick ass and take names. The dumb bastards don’t know what they’re doing.” . . . Management was looking for employees who they could bully into doing the job the way they wanted it done. The message was simply: “If you don’t do it my way I’ll fire you and put somebody in who will. There are ten more guys at the door looking for your job.”
In contrast, Toyota plants were built on division of labor, too, but the treatment of workers was quite different. Helper and Henderson explain:
Jobs on Toyota’s production line were even more precisely specified: for example, standardized work instructions specified which hand should be used to pick up each bolt. However, Toyota’s employees had a much broader range of responsibilities. Each worker was extensively cross-trained and was expected to be able to handle six to eight different jobs on the line. They were also responsible for both the quality of the vehicle and for the continual improvement of the production process itself. Each worker was expected to identify quality problems as they occurred, to pull the “andon” cord that was located at each assembly station to summon help to solve them in real time, and if necessary to pull the andon cord again to stop the entire production line. Workers were also expected to play an active role in teams that had responsibility for “continuous improvement” or for identifying improvements to the process that might increase the speed or efficiency of the line. As part of this process, workers were trained in statistical process control and in experimental design.
A similar difference applied in how GM and Toyota dealt with suppliers. GM told suppliers what to make with a detailed blueprint, and then more-or-less told the suppliers to shut up and make it. Toyota and other Japanese firms fostered long-term relationships with suppliers, in which they collected more data and used it to encourage innovation, while still bargaining hard on price. Helper and Henderson make a persuasive case that GM saw its US market share fall from 50% in the 1960s and 1970s to less than 20% in recent years, and needed to be rescued from bankruptcy with a government bailout in 2009, in large part because of its inability to change its culture and build productive relationships with its workers and suppliers.

Maybe the division of labor isn't the enemy: instead, maybe it's particular managers who have little or no respect for what the actual workers can know or contribute. Of course, that problem isn't particular to capitalism. Managers of factories in government-owned firms or planned economies were quite susceptible to it as well. Indeed, one might argue that modern firms that outsource tasks to foreign producers, treating them as nothing more than a cheap pair of hands, have often ended up discovering the potential downsides of such a relationship. As the sociologist Emile Durkheim argued in his 1893 book, The Division of Labor in Society, a division of labor in a modern society (especially when backed by the support for education and training that Adam Smith advocated) can be a way of both strengthening social ties and offering freedom and opportunity for individuals.