
Saturday, August 30, 2014

Richard Timberlake and the Case for Monetary Rules

Renee Haltom interviewed Richard Timberlake, perhaps best-known as a staunch supporter of fixed rules rather than government discretion for monetary policy, in Econ Focus, a publication of the Federal Reserve Bank of Richmond (First Quarter 2014, pp. 24-29). Here's a sample of Timberlake's views:

He argues that the Fed is inevitably subject to political influence:
"Until maybe 10 or 20 years ago, economists who studied money felt that they could prescribe some logical policy for the Federal Reserve, and ultimately the Fed would see the light and follow it. That proved illusory. A central bank is essentially a government agency, no matter who “owns” it. The Fed’s titular owners are the member banks, but the national government has all the controls over the Fed’s policies and profits. And as with all government agencies, the Fed is subject to public choice pressures and motives."
If the Federal Reserve followed a firm rule, he argues, asset bubbles would be unlikely.

The Fed shouldn’t pay any heed at all to asset bubbles. If it followed rigorously a constrained price level, or quantity-of-money rule, I don’t think there would be bubbles. Markets would anticipate stability. Markets today, however, anticipate, with good reason, all the government interventions that lead to bubbles. If we had a stable price level policy and everybody understood it and believed it would continue, there wouldn’t be any serious bubbles. We don’t even know whether the 1929 “bubble” was even a bubble, because after the Fed’s unwitting destruction of bank credit, no one could distinguish in the rubble what was sound from what might have been unsound.

If lender of last resort services are needed, he argues, the private sector could provide them.
Private institutions will always furnish lender of last resort services if markets are free to operate and if there are no government policies in place that cause destabilization. In the last half of the 19th century, the private clearinghouse system was a lender of last resort that worked perfectly. Its activities demonstrated that private markets handle the lender of last resort function better than any government-sponsored institution.

The overall impression from the interview is that Timberlake is open to a variety of monetary rules, as long as the rules are written in stone. He offers positive remarks about a gold standard, about a monetary policy focused solely on the price level, and about a monetary policy that would involve a fixed rate of growth in the money supply. As one example, he describes his reaction to the rule Milton Friedman proposed in the 1970s for a fixed rate of growth in the money supply.

"Friedman recommended a steadily increasing quantity of money — that is, bank checking deposits and currency —between 2 and 5 percent per year. Prices might rise or fall a little, but everybody would know that things were going to get better or be restrained simply because the Fed had to follow a quantity-of-money rule. I wrote him a letter at the time and remarked, “I agree with your idea of a stable rate of increase in the quantity of money, and I suggest a rate of 3.65 percent per year, and 3.66 percent for leap years — 1/100 of 1 percent per day.”
I can feel the pull of Timberlake's view, swirling around my ankles, but I am not persuaded. When you lash yourself to the mast,  as Odysseus did to resist the call of the Sirens, you are indeed constrained from giving in to temptation. But if an unforeseen problem arises while you have lashed yourself to the mast, you are incapacitated from dealing with the problem. As Timberlake readily concedes, having the Federal Reserve surrender all discretion is not at all likely. Thus, the pragmatic questions are about what kinds of constraints on the Fed, including a continual process of transparency and self-explanation, are most useful.

As a coda, Timberlake has a nice story about Milton Friedman offering him some key advice when he was a graduate student.

I recall the time when I presented a potential Ph.D. thesis proposal at Chicago to the economics department. The audience included professors and many able graduate students. I could feel that my presentation was not going over very well. After the ordeal was over, Friedman said to me, “Come back up to my office.” When we were there, he said, “The committee and the department think that your thesis proposal has less than a 0.5 probability of acceptance.” I knew that was coming, and I despondently replied that I had had a very frustrating time “finding a thesis.” My words suggested that a thesis was a bauble that one found in a desert of intellect that no one else had discovered. It was then that Milton Friedman turned me around and started me on the road to being an economist. “Dick,” he said, “theses are formed, not found.” It was the single most important event in my professional life. I finally could grasp what economic research was supposed to be.

Friday, August 29, 2014

The Secular Stagnation Controversy

For economists, the word "secular" isn't about a lack of religious belief. Instead, it refers to whether a condition is expected to last for a long and indefinite period--and in particular, a period not related to whether the economy is entering or exiting a recession. Thus, the concept of "secular stagnation" is the idea that the U.S. economy is not just suffering through the aftereffects of the Great Recession, but is for some reason entering a longer-term period of stagnant growth. Coen Teulings and Richard Baldwin have edited a useful e-book of 13 short essays with a variety of perspectives, Secular Stagnation: Facts, Causes and Cures. In the overview, they write: "Secular stagnation, we have learned, is an economist's Rorschach Test. It means different things to different people."

I've taken a couple of previous cracks at secular stagnation on this blog. I discussed the
original theory of secular stagnation as put forward in 1938 in "Secular Stagnation: Back to Alvin Hansen" (December 12, 2013). Hansen was concerned that in the depressed economy of his time, with lower birthrates and a lack of discoveries of new resources and territories, the push of new inventions would not be enough to keep investment levels high and the economy growing. I have also discussed "Sluggish U.S. Investment" (June 27, 2014) in the context of a discussion of secular stagnation by Larry Summers. Here, let me give a sense of how a range of economists are looking at different aspects of the "secular stagnation" issue by quoting (without prejudice against the other essays!) a few sentences from seven of the essays.

Larry Summers: "This chapter explains why a decline in the full-employment real interest rate (FERIR) coupled with low inflation could indefinitely prevent the attainment of full employment.  . . . Broadly, to the extent that secular stagnation is a problem, there are two possible strategies for addressing its pernicious impacts. ... The first is to find ways to further reduce real interest rates. These might include operating with a higher inflation rate target so that a zero  nominal rate corresponds to a lower real rate. Or it might include finding ways such  as quantitative easing that operate to reduce credit or term premiums. These strategies have the difficulty of course that even if they increase the level of output, they are also likely to increase financial stability risks, which in turn may have output consequences. ... The alternative is to raise demand by increasing investment and reducing saving. ... Appropriate strategies will vary from country to country and situation to situation. But they should include increased public investment, reductions in structural barriers to private investment and measures to promote business confidence, a commitment to maintain basic social protections so as to maintain spending power, and measures to reduce inequality and so redistribute income towards those with a higher propensity to spend."

Barry Eichengreen: "Pessimists have been predicting slowing rates of invention and innovation for centuries, and they have been consistently wrong. This chapter argues that if the US does experience secular stagnation over the next decade or two, it will be self-inflicted. The US must address its infrastructure, education, and training needs. Moreover, it must support aggregate demand to repair the damage caused by the Great Recession and bring the long-term unemployed back into the labour market."

Robert J Gordon: "US real GDP has grown at a turtle-like pace of only 2.1% per year in the last four years, despite a rapid decline in the unemployment rate from 10% to 6%. This column argues that US economic growth will continue to be slow for the next 25 to 40 years – not because of a slowdown in technological growth, but rather because of four ‘headwinds’: demographics, education, inequality, and government debt."

Paul Krugman: "Larry Summers’ speech at the IMF’s 2013 Annual Research Conference raised the
spectre of secular stagnation. This chapter outlines three reasons to take this possibility seriously: recent experience suggests the zero lower bound matters more than previously thought; there had been a secular decline in real interest rates even before the Global Crisis; and deleveraging and demographic trends will weaken future demand. Since even unconventional policies may struggle to deal with secular stagnation, a major rethinking of macroeconomic policy is required."

Edward L Glaeser: "US investment and innovation – the most standard ingredients in long-run economic growth – are not declining. The technological world that surrounds us is anything but stagnant. Yet we can have little confidence that the continuing flow of new ideas will solve the US’s most worrying social trend: the 40-year secular rise in the number and share of jobless adults. ... The massive secular trend in joblessness is a terrible social problem for the US, and one that the country must try to address. I do not believe that this is a macroeconomic problem that can be solved with more investment or tax cuts alone.  . . . Alongside targeted investments in education and training, radical structural reforms to America’s safety net are needed to ensure it does less to  discourage employment."

Gauti B. Eggertsson and Neil Mehrotra: "Japan’s two-decade-long malaise and the Great Recession have renewed interest in the secular stagnation hypothesis, but until recently this theory has not been explicitly formalised. This chapter explains the core logic of a new model that does just that. In  the model, an increase in inequality, a slowdown in population growth, and a tightening of borrowing limits all reduce the equilibrium real interest rate. Unlike in other recent models, a period of deleveraging puts even more downward pressure on the real interest rate so that it becomes permanently negative."

Richard C. Koo: "The Great Recession is often compared to Japan’s stagnation since 1990 and the Great Depression of the 1930s. This chapter argues that the key feature of these episodes is the bursting of a debt-financed asset bubble, and that such ‘balance sheet recessions’ take a long time to recover from. There is no need to suffer secular stagnation if the government offsets private sector deleveraging with fiscal stimulus. However, until the general public understands the fallacy of composition, democracies will struggle to implement such policies during balance sheet recessions."

Volumes like this feel a bit like the parable of the blind men and the elephant, where each man grabs one part of the elephant and then declares what an elephant feels like, depending on whether he has hold of a leg, tail, trunk, ear, tusk, side, or belly. It's easy to grab hold of one part of the economy, but it can be difficult to see the interactions across the parts, or to see the economy as a whole.





Thursday, August 28, 2014

Outsource Corporate Boards?

Since at least 1932, when Adolf A. Berle, Jr., and Gardiner C. Means wrote a book called "The Modern Corporation and Private Property," many economists have been distinctly uncomfortable with the notion of a company owned by shareholders but run by corporate executives hired by a board of directors. The early decades of the 20th century saw a huge transformation of the ownership of large U.S. companies, away from being owned (or effectively controlled) by a family or an individual, and toward being owned by shareholders. As Berle and Means wrote:

"In 1928, when the project was launched, the financial machinery was developing so rapidly as to indicate that we were in  the throes of a revolution in our institution of private property, at least as applied to industrial economic uses.  ... The translation of perhaps two-thirds of the Industrial wealth of the country from individual ownership to ownership by the large, publicly financed corporations vitally changes the lives of property owners, the lives of workers, and the methods of property tenure. The divorce of ownership from control consequent on that process almost necessarily involves a new form of economic organization of society." 


The "separation of ownership and control," as it is often called, has been an ongoing problem ever since. The well-founded concern is that the board of directors, which is supposed to function on behalf of the shareholders who technically own the company, is instead effectively chosen by corporate management. There have been periodic pushes for corporate board to have broader representation, or members from outside the circles of that industry, or with greater independence from management. But ultimately, most board members are part-timers who parachute in a few times a year for board meetings. They often lack information and incentives to oversee or tow challenge corporate management effectively.

Stephen M. Bainbridge and M. Todd Henderson offer an alternative vision of how corporate boards might work in "Boards-R-Us: Reconceptualizing Corporate Boards," which appears in the May 2014 issue of the Stanford Law Review. They write (footnotes omitted):

Almost every corporate governance reform proposed over the past several decades has focused on the board of directors. . . .This battle is fought on the grounds of who board members are, whether they are independent, who appoints them, how they are elected, how they are compensated, what the standards for their conduct and liability are, whether there should be more independent directors, what the optimal board size is, and so forth. All of these reforms are an attempt to optimize the monitoring and governance role played by the board. Despite the long and zealous efforts of corporate law reformers to understand and improve the board of directors, there is a gaping hole in the corporate governance literature. No one has yet questioned a fundamental assumption of the current corporate governance model—that is, only individuals, acting as sole proprietors, should provide professional board services. 
Bainbridge and Henderson propose that when a firm is choosing a board of directors, instead of hiring a group of individuals to serve on the board, the firm should be allowed to hire a "board service provider," an outside company that would provide board of director services to the firm. They write:
In other words, just as companies outsource their external audit function to an accounting firm rather than multiple individuals, the board of directors function would be outsourced to a professional services company. To see our idea, imagine a firm, Boards-R-Us, Inc., serving as the board of Acme Co. Instead of Acme shareholders hiring a dozen or so individual sole proprietors to provide board functions, they instead hire one firm—a BSP—to provide those functions, whatever they may be. Boards-R-Us would still act through individual agents, but the responsibility for managing a particular firm, within the meaning of state corporate law, would be that of Boards-R-Us the entity. This means, for instance, that a suit by shareholders for breach of the board’s fiduciary duties would be against Boards-R-Us, and not against individuals
or groups of individuals.
A company acting as board service provider would continue to make all the same decisions as a current board of directors: that is, hiring and firing top management, setting compensation, having final approval over major decisions like takeovers and mergers, and so on. As the authors write: "the basic version of our proposal is substantially similar to the current board model, with the one key difference that the board consists of an “it” instead of a collection of individuals." Indeed, in choosing a board of directors, it would be possible to have a slate of individuals run against a board service provider--or against several different board service providers. It would be possible to have a board of directors that was, say, half made up of a board service provider, while the other half was the typical individual board members chosen separately by shareholders.

What's the case for believing that, at least for some companies, a board service provider company might be an improvement? One set of arguments is that current boards of directors often face problems of limited time, limited information, and a lack of specialist expertise. A board service provider might be well-positioned to have full-time providers of board services, with access to both internal and external sources of information, and the ability to draw on specialist expertise.

And what about the risk that if we are already worried about mutual backrubs between boards of directors and top management, the problem might get even worse if there was only a single board service provider? This concern seems legitimate, but it's worth remembering just how incestuous some of the current board situations are. Bainbridge and Henderson remind us that when the board of directors at Disney decided that Michael Eisner deserved $140 million for one year of work, the board included a number of Eisner's friends, "including actor Sidney Poitier, the principal of the elementary school Eisner’s children attended, and the architect who designed one of Eisner’s homes." More recently, the media conglomerate IAC, chaired by Barry Diller, "appointed thirty-one-year-old graduate student Chelsea Clinton to the board. ... [F]ormer board members of IAC include Diller’s wife, the fashion designer Diane von Furstenberg, and General Norman Schwarzkopf, and ... the current board also includes von Furstenberg’s son, Alex."

Given that the oversight provided by current boards of directors is often pretty weak, Bainbridge and Henderson argue that board service providers "would be more accountable than the group of individuals currently providing board services; indeed, we believe that the accountability of the whole would be greater than the sum of the liabilities of the parts." They argue that a board service provider might worry more about reputation than a random individual board member, and also that a company providing board services might be more susceptible to legal oversight and liability.

Allowing companies to become board service providers is no magic potion to solve all the problems of corporate governance. But more than 80 years after Berle and Means described the problems that arise from a separation of corporate ownership and control, any new proposals for addressing it are welcome.





"

From The Economist, August 16.

Wednesday, August 27, 2014

Does Economics Education Teach Students to Trust?

Last March, I discussed some of the studies on the question, "Does Economics Make You a Bad Person?" (March 31, 2014). In the Spring 2014 issue of the American Economist, Bryan C. McCannon offers some additional evidence on the question in "Do Economists Play Well With Others? Experimental Evidence on the Relationship between Economics Education and Pro Social Behavior" (59:1, pp. 27-33). The journal is not freely available on-line, although many readers will have access through a library subscription.

The heart of the paper is an experiment with 147 students, "conducted with undergraduate students at a small, private university in upstate New York." McCannon teaches at St. Bonaventure University, so you can draw your own conclusions about the identity of the school. Some of the students had already taken "a significant amount of coursework in economics," some were planning to study economics but had not yet taken economics courses, and some had neither taken economics classes nor planned to take them.

The students participated in a "trust game," which has two players. The first player is given a certain amount of money--in this study, $5. The first player decides how much to give to the second player. But here's a twist: the amount given to the second player is tripled. Then, the second player decides whether to give some money back to the first player. The game ends there. The students played the game five times, but with a random and changing selection of opponents each time.

Clearly, if the first player fully trusts the second one, the first player will give the full $5 to the second player. The amount will be tripled in transit, and the second player will be able to return the full $5, plus more, to the first player. However, a first player who is less trusting may give less than the full $5, or nothing at all, to the second player, because after all, the second player may just hold on to all the money and not return any of it. Thus, the question is whether students who have taken a lot of economic classes tend to be more or less trusting than other groups.

A typical finding in a trust game is that the first player gives half the money to the second player. The second player then returns about 80% of the money invested, and keeps the rest. Thus, trust often does not pay off for the first player--which helps to explain why they venture to pass along only half of the original sum.
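To make the arithmetic concrete, here's a minimal sketch of one round in Python, using the stylized numbers above (a $5 stake, tripling in transit, half sent, 80% of the invested amount returned). The function and parameter names are my own illustration, not anything from McCannon's paper.

```python
# A minimal sketch of one round of the trust game, using the stylized
# numbers from the text. Names here are illustrative, not McCannon's.

ENDOWMENT = 5.00   # first player's initial stake
MULTIPLIER = 3     # amount sent is tripled in transit

def trust_game_round(share_sent=0.5, share_returned=0.8):
    """Return final payoffs (player 1, player 2) for one round.

    share_sent: fraction of the endowment the first player sends.
    share_returned: fraction of the *invested* amount the second
                    player sends back (typical finding: about 80%).
    """
    sent = share_sent * ENDOWMENT           # $2.50
    pot = MULTIPLIER * sent                 # $7.50 lands with player 2
    returned = share_returned * sent        # 80% of $2.50 = $2.00
    payoff1 = ENDOWMENT - sent + returned   # $4.50: trust did not pay
    payoff2 = pot - returned                # $5.50
    return payoff1, payoff2

print(trust_game_round())           # (4.5, 5.5)
print(trust_game_round(1.0, 1.5))   # full trust plus an even split: (7.5, 7.5)
```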

In this study, it turns out that when taking the role of the first player, "[e]ach class a student takes contributes approximately ten cents more." When taking the role of the second player, "[t]aking more economics courses is associated with escalated rates of reciprocation. Approximately fifteen more cents is given back if given all five dollars, which represents a 3.5% increase." McCannon also gave the participants an attitudinal survey before playing the game, and when analyzing the survey results together with the game results, he argues that those who select themselves into economics classes are more likely to practice trust and reciprocity.

This study follows several common patterns in this literature. The group being studied is a relatively small group of students at one institution, so there is a reasonable question about whether the results would generalize to a broader population. The engine of inquiry is a structured "laboratory experiment," in this case the trust game, and so there is a reasonable question about whether the motivations revealed in such studies would show up in other behaviors and contexts.

But although the results of these kinds of studies shouldn't be oversold, it's not shocking to me to find that those who study economics may be more likely to look at a trust game and see it as an opportunity for a cooperative exchange that can benefit both parties. Indeed, economists may well be more prone than non-economists to seeing the world as a place full of voluntarily agreed transactions that can represent a win for both parties.

Tuesday, August 26, 2014

New Business Establishments: The Shift to Existing Firms

A new business "establishment" occurs when a firm opens up at a new geographical location. A new "establishment" can thus occur for one of two reasons: either it's a brand-new firm started by an entrepreneur (say, if I start my own fast-food restaurant), or it's a new or additional location for an existing firm (say, when Subway or McDonald's opens a new store). In "The Shifting Source of New Business Establishments and New Jobs," an "Economic Commentary" written for the Federal Reserve Bank of Cleveland (August 21, 2014), Ian Hathaway, Mark E. Schweitzer, and Scott Shane make the point that when it comes to new establishments in the U.S. economy, existing firms are playing a larger role.

As a starting point, consider how the rate at which new establishments of both kinds are being born has changed over time. Hathaway, Schweitzer, and Shane note: "As the figure shows, in 1978, Americans created 12.0 new firms per 100 business establishments. By 2011, the latest year data are available, they generated new firms at roughly half that rate—6.2 new firms per 100 existing business establishments. By contrast, in 1978, Americans created 1.7 new outlets per 100 existing establishments, while in 2011 they created 2.6—an increase of more than half."


I wrote about the decline of the top line--the line showing entrepreneurs starting new firms--in "The Decline of U.S. Entrepreneurship" earlier this month.

Is this shift toward establishments started by existing firms just what one might call a "Walmart effect"--that is, big-box stores in the retail industry driving out smaller Mom-and-Pop operations? Apparently not. The share of new establishments that are new outlets of existing firms is rising across all industries, not just retail.

This change has implications for the sources of job creation in the U.S. economy. Hathaway, Schweitzer, and Shane write: "To give a sense of the magnitude of the changing sources of job creation, we can estimate the number of new jobs that new firms would have created had they continued to generate jobs at the rate they did back in 1978 and the number of new jobs that new outlets would have created had they continued to generate jobs at the rate they did back in 1978. At the 1978 rate of new firm job creation, new firms would have produced an additional 2.4 million jobs in 2011, or 90 percent more. At the 1978 rate of new outlet job creation, new outlets would have produced 828,000 fewer jobs in 2011, or 34 percent less."

As to underlying reasons for the change, these authors note that their underlying data doesn't allow them to pinpoint any particular cause. However, "[W]e can offer one hypothesis: Growth in information and communication technologies since the late 1970s have facilitated the coordination of multiple establishments, offering existing businesses an advantage over new firms when setting up new establishments to meet the need for new business locations."

This explanation seems plausible to me, but some other possibilities seem plausible, too. The new information and communications technology may also make it easier to establish brand names, so that firms in sectors like Finance, Insurance, and Real Estate or in Services are more likely to be part of a national firm, rather than opening up a stand-alone personal shop. I also suspect that the regulatory burden of starting a business has grown heavier over time, in terms of the rules, regulations, and permits that must be followed for the physical property of the business, for dealing with employees, for meeting requirements about the quality of the products provided, and for taxes and accounting. There's a case for each individual rule and regulation. But when you pile them all up, the burden can become discouragingly high for a potential entrepreneur.

Monday, August 25, 2014

Property Rights and Saving the Rhino

South Africa is home to 75% of the world's population of black rhinos and 96% of the world's population of white rhinos. There must be some lessons for conservationists behind those statistics.
Michael 't Sas-Rolfes tells the story in "Saving African Rhinos: A Market Success Story," written as a case study for the Property and Environment Research Center (PERC).

The story isn't just about markets. In 1900, the white rhinoceros had been hunted almost to extinction, with about 20 remaining in a single game preserve in South Africa. The population slowly recovered a bit, and by the middle of the 20th century, there were enough to start relocating breeding groups of white rhinos to other national parks in South Africa, as well as private game ranches. In 1968, the first legal hunt of a white rhino was authorized.

But by the 1980s, Sas-Rolfes reports, a strange disjunction had emerged. In 1982, the Natal Parks Board had a list price for a white rhino of about 1,000 South African rands, but the average price paid by a hunter for a rhino trophy that year was 6,000 rands. Private game preserves were quick to take advantage of the arbitrage opportunity. The Natal Parks Board soon began auctioning its rhinos. In 1989, it was selling rhinos for 49,000 rand, but the average price to a hunter for a rhino trophy had risen to 92,000 rand. There were obvious questions about whether this system of raising and hunting rhinos was a useful tool from a broader environmental perspective.

But property rights and markets enter the story in a different way in 1991.
Before 1991, all wildlife in South Africa was treated by law as res nullius or un-owned property. To reap the benefits of ownership from a wild animal, it had to be killed, captured, or domesticated. This created an incentive to harvest, not protect, valuable wild species—meaning that even if a game rancher paid for a rhino, the rancher could not claim compensation if the rhino left his property or was killed by a poacher. . . . Recognizing the problems associated with the res nullius maxim, the commission drafted a new piece of legislation: the Theft of Game Act of 1991. This policy allowed for private ownership of any wild animal that could be identified according to certain criteria such as a brand or ear tag. The combined effect of market pricing through auctions and the creation of stronger property rights over rhinos changed the incentives of private ranchers. It now made sense to breed rhinos rather than shoot them as soon as they were received.
For a sense of how much difference these issues of property rights and incentives can make to conservation, consider the difference in populations between black and white rhinos. Sas-Rolfes explains: "Figure 2 shows trends in white rhino numbers from 1960 until 2007. Contrast those
numbers with the black rhino, which mostly lived in African countries with weak or absent wildlife market institutions such as Kenya, Tanzania, and Zambia. In 1960, about 100,000 black rhinos roamed across Africa, but by the early 1990s poachers had reduced their numbers to less than 2,500. . . . Unprotected wild rhino populations are rare to non-existent in modern Africa. The only surviving African rhinos remain either in countries with strong wildlife market institutions (such as South Africa and Namibia) or in intensively protected zones."



A strong demand for rhino horn remains, and especially since about 2008, rhinos across Africa face a rising risk from illegal poaching. Here's a figure from the conservation group Save the Rhino showing the level of rhino poaching in South Africa:


Along with the existing choices of "intensively protected zones"--which imply costly and not-very-corruptible protectors--and allowing for private game preserves, the other option is to seek to undercut the black market for rhino horn with a legal market. Other, more controversial options discussed at the Save the Rhino website include de-horning rhinos, to make them less attractive to poachers, and perhaps even allowing legal sale of these rhino horns, to undercut the prices paid to poachers. Rhino horns are made of keratin, similar to the substance in fingernails and hair, and the horn could be removed every year or two. There are strong arguments on both sides of allowing legal sale of rhino horn: rather than undercutting the illegal market, it might instead make it easier for poachers to sell their illegally obtained rhino horn. In the end, given that South Africa is now home to most of the world's rhinos, I suspect that South Africa will end up making the decision about whether to proceed with these options.

Those interested in how property rights might be one of the tools for helping to protect endangered species might also want to check this post on "Saving Jaguars and Elephants with Property Rights and Incentives" (December 19, 2011).  

Friday, August 22, 2014

Analyzing Fair Trade

"Fair Trade" is often little more than a slogan. If you'd like a look at the analysis and reality behind that slogan, as it's working out in the real world, a good starting point is "The Economics of Fair Trade,"  by Raluca Dragusanu, Daniele Giovannucci, and Nathan Nunn, in the Summer 2014 issue of the Journal of Economic Perspectives. (Full disclosure: I've worked as Managing Editor of JEP since the first issue in 1987.)

Fair Trade is the practice whereby a nonprofit organization puts a label on certain products, certifying that certain practices were followed in the production of that product. Common required practices include standards for worker pay, worker voice, and environmental protection. The biggest of these certifying organizations is Fairtrade International. There is a parallel label for the U.S. called Fair Trade USA. Other labelling standards, each with their own priorities, include Organic and Rainforest Alliance. A producer who joins Fair Trade receives several benefits. When a Fair Trade producer sells to a Fair Trade buyer, they can receive a minimum price, which includes a premium now set at 20 cents per pound for coffee. Fair Trade buyers are also supposed to be more willing to agree to long-term purchasing contracts and to provide credit to producers.

At some level, Fair Trade and other certification programs are just a case of the free market at work. With the certification, consumers who are willing to pay something extra to purchase products produced in a certain way become able to find those products. A variety of evidence suggests that at least some consumers value this option. For example, in one study the researchers were able to add Fair Trade labels, or not, and alter prices, or not, for bulk coffee sold in 26 U.S. groceries. They found that at a given price, sales were 10% greater when the coffee was labeled as Fair Trade, and that demand for Fair Trade coffee was less sensitive to increases in price.

So what concerns or issues might be raised about Fair Trade? I'll list some of the issues here as I see them, based on evidence from the Dragusanu, Giovannucci, and Nunn paper. As they note, "the evidence is admittedly both mixed and incomplete"--so some of the concerns are tentative.

Fair Trade and other certification programs affect a relatively small number of workers.

The most important Fairtrade products, measured by the number of producers and workers involved in growing them, are coffee (580,000 producers and workers covered), tea (258,000), and cocoa (141,000). Fair Trade standards also cover smaller numbers of producers in seed cotton, flowers and plants, cane sugar, bananas, fresh fruit, and nuts. Obviously, compared to the total number of low-income agricultural producers in developing and emerging economies--measured in billions of workers--the share of production covered by Fair Trade certification is quite small.

Fair Trade does seem to provide higher prices and greater financial stability, at least when farmers can sell at the minimum price. 

A variety of small-scale studies in many countries suggest that Fair Trade farmers do earn more. However, there is a difficult problem of cause-and-effect here. If the more sophisticated and motivated farmers who are well-positioned by their crops and land to carry out Fair Trade practices are the ones who sign up, perhaps they would have been able to receive higher prices even without the certification. There are a variety of methods to adjust for differences across farmers: age of farmer, education of farmer, size of crop, before-and-after comparisons of entering a certification program, and the like. After such adjustments, a few studies no longer find that Fair Trade farmers earn more, but the most common finding remains that a price premium continues to exist.

The research in this area also points out that just because a producer is Fair Trade-certified does not mean that the producer can necessarily sell all of their crop as Fair Trade. The buyer determines what quantity of certified product to purchase at the Fair Trade price. In addition, while some buyers provide credit, there is some evidence that buyers who then sell to big firms like Starbucks and Costco are less likely to offer credit or long-term purchasing contracts. Again, farmers overall do seem to gain financial stability from Fair Trade certification, but what they gain in reality is often less than a simple recitation of the guidelines might suggest.

Fair Trade does seem to promote improved environmental practices. 

Again, the evidence is from small-scale studies in various countries, but Fair Trade certified producers do seem more likely to use composting, to use contouring and terraces to reduce erosion, to have systems for purifying runoff from fields, to make use of windbreaks and shade trees, and so on.

While Fair Trade helps producers, the effects on workers and work organizations are more mixed.

Fair Trade organizations sometimes operate through cooperatives, in which farmers pass their output to the cooperative, which then negotiates the sales. A variety of studies find higher levels of tension between farmers and Fair Trade cooperatives, with farmers complaining about lack of communication and poor decision-making.

In addition, many producers of Fair Trade products hire outside workers, at least seasonally. As Dragusanu, Giovannucci, and Nunn write: "The evidence on the distribution of the benefits of Fair Trade remains limited, but the available studies suggest that, within the coffee industry, Fair Trade certification benefits workers little or not at all." A couple of months ago, I blogged on a recent study making this point in "Does Fair Trade Reduce Wages?" However, there is also some evidence that in non-coffee crops, often grown in plantation agriculture, the certification standards can improve working conditions and reduce the use of child labor.

How might entry by producers affect Fair Trade and other certification programs in the long run? 

If producers who operate in a certain way can earn higher profits, then any economist will predict that more producers will choose to operate in that way. But as more producers enter and the supply of the product produced in that way rises, it will tend to drive down the market price, until the opportunities for higher profits are competed away. At least so far, this doesn't seem to have happened for Fair Trade. But as Dragusanu, Giovannucci, and Nunn write: "This link between free entry and rents provides an interesting dilemma for certification agencies. On the one hand, they wish to induce the spread of socially and environmentally responsible production as much as possible. On the other hand, they may also wish to structure certain limits to entry so that they can continue to maintain higher-than-average rents for certified producers."

How might entry by additional certification organizations affect Fair Trade in the long run?

There is considerable overlap between the various certification organizations: for example, 80% of the Fair Trade-certified producers are also certified as Organic producers. But multiple certifications mean multiple reports and audits, which can be a real burden for farmers in low-income countries. Some for-profit companies are starting their own certification programs, rather than deal with an outside certification organization. At some point, there is a risk that farmers become unwilling to deal with a plethora of organizations, and that consumers become cynical about whether many of these organizations represent something meaningful.


Thursday, August 21, 2014

Using Twitter for Perceiving Unemployment in Real Time

The official unemployment rate, released early each month, is based on a monthly survey. It's a good survey, even an excellent survey, but the data is inevitably a month old. In addition, any survey is somewhat constrained by the specific wording of its questions and definitions. Would it be possible to get a faster and reasonably accurate view of labor market conditions by looking at mentions of certain key terms on Twitter and other social media? The University of Michigan Economic Indicators from Social Media project has started a research program on this topic. The first research paper up at the site is "Using Social Media to Measure Labor Market Flows," by Dolan Antenucci, Michael Cafarella, Margaret C. Levenstein, Christopher Ré, and Matthew D. Shapiro, which is based on data from 19.3 billion Twitter messages sent between July 2011 and November 2013--about 10% of all the tweets sent in that time.

For those who want detail on how the official unemployment rate is calculated, the Bureau of Labor Statistics published a short memo on "How the Government Measures Unemployment" in June 2014. Basically, the government has been conducting the Current Population Survey (CPS) every month since 1940. As the BLS describes it:

There are about 60,000 eligible households in the sample for this survey. This translates into approximately 110,000 individuals each month, a large sample compared to public  opinion surveys, which usually cover fewer than 2,000 people. The CPS sample is  selected so as to be representative of the entire population of the United States ... Every month, one-fourth of the households in the sample are changed, so that no  household is interviewed for more than 4 consecutive months. After a household is  interviewed for 4 consecutive months, it leaves the sample for 8 months, and then is  again interviewed for the same 4 calendar months a year later, before leaving the sample  for good. As a result, approximately 75 percent of the sample remains the same from  month to month and 50 percent remains the same from year to year. This procedure  strengthens the reliability of estimates of month-to-month and year-to-year change in the data.  Each month, highly trained and experienced Census Bureau employees contact the 60,000 eligible sample households and ask about the labor force activities (jobholding and job seeking) or non-labor force status of the members of these households during the  survey reference week (usually the week that includes the 12th of the month).
Although the headline unemployment rate and total jobs number get most of the attention, the survey also tries to explore whether those not looking for jobs are "discouraged" workers who would actually like a job but have given up looking, or whether they are part-time workers who would prefer a full-time job.

At present, perhaps the main source of data on labor markets that comes out more frequently than the unemployment rate itself is the data on initial claims for unemployment insurance, which comes out weekly (for example, here). However, this data can be an imperfect indicator--or as economists would say, a "noisy" indicator--of the actual state of the labor market. Not everyone who becomes unemployed applies for unemployment insurance or is eligible for it, and many of the long-term unemployed are no longer eligible for unemployment insurance. So the practical question about using Twitter or other social media to look at labor markets is not whether they offer a perfect picture, but whether the information from such estimates is less "noisy" and more useful than the data from the initial claims for unemployment insurance.

The University of Michigan researchers searched the 19.3 billion tweets for terms of four words or less related to job loss. Some examples would include four-word blocks of text that include the words axed, canned, downsized, outsourced, pink slip, lost job, fired job, been fired, laid off, and unemployment. Some experimentation and analysis is involved in choosing terms. For example, it turned out that "let go" was used much more frequently than any other term on this list, presumably because there were many four-word blocks of text that used "let" and "go" but weren't related to labor market issues.
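As a rough illustration of this kind of matching, here is a sketch in Python. The term list and function are mine, for exposition only; they are not the Michigan group's actual code or full term list.

```python
import re

# Sample signal terms from the text; the paper's real list was refined
# with experimentation, e.g. dropping noisy terms like "let go".
SINGLE_TERMS = {"axed", "canned", "downsized", "outsourced", "unemployment"}
PAIR_TERMS = {("pink", "slip"), ("lost", "job"), ("fired", "job"),
              ("been", "fired"), ("laid", "off")}

def mentions_job_loss(tweet):
    """True if any four-word block of the tweet contains a signal term."""
    words = re.findall(r"[a-z']+", tweet.lower())
    blocks = [words[i:i + 4] for i in range(len(words) - 3)] or [words]
    for block in blocks:
        if SINGLE_TERMS.intersection(block):
            return True
        if any(a in block and b in block for a, b in PAIR_TERMS):
            return True
    return False

print(mentions_job_loss("ugh, got laid off from the plant today"))  # True
print(mentions_job_loss("let go and enjoy the weekend"))            # False
```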

Each week, the Michigan group plans to publish a comparison between the official unemployment insurance claims data and a prediction based purely on its Twitter-based methodology. Here's the current figure:


As you can see, the patterns are similar, which is somewhat remarkable: the social-media index tracks the movements of the official statistics fairly closely. The patterns are not identical, which is unremarkable, because they are, after all, measuring different things. The interesting question then becomes: is there some additional information or value to be gained about the state of the labor market from looking at the social-media-based index?

In certain specific cases, the answer seems clearly to be "yes." For example, the authors explain that the official data on unemployment insurance claims showed a big drop in September 2013 that occurred because of a data processing issue in California--that is, it wasn't a real effect. The social media prediction shows no such decline. More broadly, the authors look at the predictions from market experts a few days before the data comes out on unemployment insurance claims, and they find that the social media measure would improve these predictions.

The researchers are looking at how social media might reflect various other measures of labor markets, including job search, job postings, and how labor markets react to short-term events like Hurricane Sandy. Of course, the goal is to develop methods that give a reasonably reliable real-time sense of how the economy is evolving based on immediately available data.

 For those interested in doing their own research project based on collecting publicly available data from the web, a useful overall starting point is the article by Benjamin Edelman, "Using Internet Data for Economic Research," in the Spring 2012 issue of the Journal of Economic Perspectives, where I have worked as Managing Editor since the first issue back in 1987. As with all JEP articles, it is freely available on-line compliments of the American Economic Association. Social science researchers are busily writing programs that collect data on search queries, on how prices change in a wide variety of databases, and much more.

Wednesday, August 20, 2014

Homeownership Rates Come Back Down the Mountain

Back in the mid-1990s, I thought of the U.S. homeownership rate as fairly constant, holding at about 64-65% most of the time. In the fourth quarter of 1995, for example, the homeownership rate was a bit above this range at 65.1%. But looking back at Census Bureau data for the fourth quarter of various years (see Table 14 here), the homeownership rate had been 64.1% in 1990, 63.5% in 1985, 65.5% in 1980, 64.5% in 1975, 64.0% in 1970, and 63.4% in 1965.

Since 1995, U.S. homeownership rates have climbed a mountain--speaking graphically--and have now come back down. Here's a figure from the Census Bureau's July 29 report on "Residential Vacancies and Homeownership in the Second Quarter 2014." The homeownership rate checked in at 64.7% in the second quarter of 2014.


Here's a slightly different perspective from the same report, looking at the vacancy rate--that is, what share of rental housing and of homes are vacant. At about the same time that the homeownership rate was rising in the first half of the 2000s, the vacancy rate for homes was also rising--which suggests that an enormous boom in residential construction was occurring at the time.

It's worth remembering that as homeownership rates climbed up one side of the mountain from about 1995 to 2004, the change was viewed as a success by both parties. Bill Clinton had a National Homeownership Strategy, which pushed to make it easier for people with lower incomes to own a home. As Clinton said in announcing the initiative:

You want to reinforce family values in America, encourage two-parent households, get people to stay home? Make it easy for people to own their own homes and enjoy the rewards of family life and see their work rewarded. This is a big deal. This is about more than money and sticks and boards and windows. This is about the way we live as a people and what kind of society we're going to have. ...  The goal of this strategy, to boost home ownership to 67.5 percent by the year 2000, would take us to an all-time high, helping as many as 8 million American families across that threshold. ... Our home ownership strategy will not cost the taxpayers one extra cent. It will not require legislation. It will not add more Federal programs or grow Federal bureaucracy. It's 100 specific actions that address the practical needs of people who are trying to build their own personal version of the American dream, to help moderate income families who pay high rents but haven't been able to save enough for a downpayment, to help lower income working families who are ready to assume the responsibilities of home ownership but held back by mortgage costs that are just out of reach, to help families who have historically been excluded from home ownership.


The Clinton initiative, together with the booming U.S. economy in the second half of the 1990s, reached that goal of a 67.5% homeownership rate by the year 2000. When George W. Bush became president, he pushed for an "ownership society," with policies to help people with down payments on a home and increase the number of minority homeowners. As Bush said in a 2003 speech:
"This Administration will constantly strive to promote an ownership society in America. We want more people owning their own home. It is in our national interest that more people own their own home. After all, if you own your own home, you have a vital stake in the future of our country."

When the homeownership rate peaked at 69.4% in the second quarter of 2004, and for some months afterward, there was strong bipartisan support for the policies that had raised homeownership rates. At the time, existing homeowners were largely delighted as well with the swelling price of their homes.

Of course, the underlying problems have now become obvious. It's hard to oppose policies that give low-income people a better chance to own a home. But if those policies involve encouraging those with lower incomes to take out subprime mortgages, so that the people you are claiming to help will actually be carrying overly large debt burdens and become highly vulnerable to a downturn in housing prices, then this way of pushing for higher rates of homeownership is a poisoned chalice. I'm very supportive of building institutions and laws that will make it easier for those with low and medium incomes to accumulate financial and nonfinancial assets, including a home. But let's focus on ways of encouraging actual saving, not ways of encouraging excessive borrowing.

Tuesday, August 19, 2014

US Becomes Oil and Gas Production Leader

OK, I admit that it's arbitrary to compare countries according to their oil and gas production, setting aside coal, hydroelectric, nuclear, and renewables like solar and wind. Still, as someone who started paying attention to economic issues during the OPEC-related oil price shocks of the 1970s, this figure shows an outcome that I never expected to see. Taking oil and gas together, the U.S. has now surpassed Russia and Saudi Arabia as the world's leading producer.

This figure was produced by the Stanford Institute for Economic Policy Research (SIEPR) as part of its annual "Facts at a Glance" chartbook. For purposes of this comparison, natural gas has been converted into an energy-equivalent amount of oil: specifically, 5,800 cubic feet of natural gas is equal to about 1 barrel of oil.
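The conversion arithmetic is simple enough to sketch in a few lines of Python; the production figure in the example is hypothetical, just to show the units at work.

```python
# Oil-equivalence conversion used in the chart: roughly 5,800 cubic
# feet of natural gas counts as one barrel of oil equivalent (BOE).

CUBIC_FEET_PER_BOE = 5_800

def gas_to_boe(cubic_feet):
    """Convert a natural gas volume to barrels of oil equivalent."""
    return cubic_feet / CUBIC_FEET_PER_BOE

# Hypothetical example: 24 trillion cubic feet of gas in a year...
print(f"{gas_to_boe(24e12) / 1e9:.1f} billion barrels of oil equivalent")
# -> 4.1 billion barrels of oil equivalent
```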

Of course, the economic consequences of being the largest energy producer will be different for the U.S. than for Russia or Saudi Arabia. For example, the enormous US economy uses more energy than it produces, and thus remains an energy importer, while the economies of Saudi Arabia and Russia depend on energy exports. But before I try to figure out what it all means, I need to spend some time just wrapping my head around the idea of the U.S. as the world's leading oil and gas producer.

Monday, August 18, 2014

International Minimum Wage Comparisons

How does the level of the minimum wage relative to other wages compare across higher-income countries around the world? Here are a couple of figures generated from the OECD website, using data for 2012.

As a starter, here's a comparison of minimum wages relative to average wages. New Zealand, France, and Slovenia are near the top, with a minimum wage equal to about half the average wage. The United States (minimum wage equal to 27% of the average wage) and Mexico (minimum wage equal to 19% of the average wage) are near the bottom.


However, average wages may not be the best comparison. The average wage in an economy with relatively high inequality, like the United States, will be pulled up by the wages of those at the top. Thus, some people prefer to look at minimum wages relative to the median wage, where the median is the wage level where 50% of workers receive more and 50% receive less. For wage distributions, which always include some extremely large positive values, the median wage will be lower than the average--and this difference between median and average will be greater for countries with more inequality.
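A toy example shows how this works; the wage figures below are invented for illustration, not OECD data.

```python
import statistics

# Invented wages for eight workers; one very high earner skews the mean.
wages = [20_000, 25_000, 30_000, 35_000, 40_000, 50_000, 60_000, 500_000]

print(f"average: {statistics.mean(wages):,.0f}")    # 95,000 -- pulled up by the top earner
print(f"median:  {statistics.median(wages):,.0f}")  # 37,500 -- unaffected by the top earner

# The same $10,000 minimum wage looks very different against each benchmark:
minimum = 10_000
print(f"{minimum / statistics.mean(wages):.0%} of the average wage, "
      f"{minimum / statistics.median(wages):.0%} of the median wage")
# -> 11% of the average wage, 27% of the median wage
```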

Here's a figure comparing the minimum wage to the median wage across countries. The highest minimum wage by this standard is Turkey's (71% of the median wage), followed by France and New Zealand (about 60% of the median wage). The lowest three are the United States (38%), the Czech Republic (36%), and Estonia (36%).


This post isn't the place to rehearse arguments over the minimum wage one more time: if you want some of my thoughts on the topic, you can check earlier posts like "Minimum Wage and the Law of Many Margins" (February 27, 2013), "Some International Minimum Wage Comparisons" (May 29, 2013), and "Minimum Wage to $9.50? $9.80? $10?" (November 5, 2012). Moreover, minimum wages across countries should also be evaluated in the context of other government spending programs or tax provisions that benefit low-wage families.

However, I will note for US readers that the international comparisons here can give aid and comfort to both sides of the minimum wage argument in this country. Those who would like the minimum wage raised higher can point to the fact that the U.S. level remains relatively low compared to other countries. Those who would prefer not to raise the minimum wage can take comfort in the fact that, even after the minimum wage increase signed into law by President Bush in May 2007 and then phased in through 2009, the U.S. minimum wage relative to average or median wages remains comparatively low.






Friday, August 15, 2014

What's the Difference Between 2% and 3%?

If you calculated that the difference between 2% and 3% is 1%, you are of course arithmetically correct, but in an economic sense, you are missing the point. Herb Stein explained the difference in a 1992 essay about the work of Edward Denison on economic growth. Stein wrote:
The difference between 2 percent and 3 percent is not 1 percent but 50 percent. That, of course, is not the result of research--at least, not Denison's--but it is an often-neglected and important proposition that he emphasized. Its significance is that what seems a small increase in the growth rate--say, from 2 to 3 percent--is really a large increase. As a first approximation, such an increase in the growth rate would require an increase of 50 percent in all the resources, effort, and attention that went into generating the 2 percent growth rate.

Denison had died in 1992, and Stein's short remembrance, "Memories of a Model Economist," was published in the Wall Street Journal, November 23, 1992. It was reprinted in On the Other Hand ... (pp. 235-239), a 1995 collection of Stein's popular essays and writings published by the AEI Press.

One of the challenges of teaching basic economics is to explain why small differences in the annual rate of economic growth are so important. Stein's comment from Denison is one way to focus attention on these issues. In the short run of a single year, the difference between 2% and 3% is indeed 1%, but when the issue is how to bring down the unemployment rate, that extra point of growth, and the additional workers needed to produce it, is a big deal. In the longer run of a decade or two, the key point to remember is that economic growth accumulates, year after year, so losing 1% every year means losing (approximately, not adjusted for compounding of growth rates) 10% after a decade and 20% after two decades.
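Here's the exact compounding arithmetic behind that approximation, as a quick back-of-the-envelope check:

```python
# How much larger the economy is after growing at 3% per year rather
# than 2% per year -- exact compounding, not the rule-of-thumb sums.

for years in (1, 10, 20):
    gap = (1.03 / 1.02) ** years - 1
    print(f"after {years:2d} years: {gap:.1%} larger")

# after  1 years: 1.0% larger
# after 10 years: 10.2% larger
# after 20 years: 21.5% larger
```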

When a nation falls behind in productivity growth over a sustained period of time, it takes decades to make up that foregone growth. (If you doubt it, consider the experience of the United Kingdom or Argentina during the earlier parts of the 20th century, or think about how a quarter-century of lethargic growth has affected perceptions and the reality of Japan's economy.) No matter what your public policy goal--more for social programs, tax cuts, deficit reduction, rescuing Social Security and Medicare--the task is politically easier if the growth rate has been on average higher and the economic pie is therefore substantially larger. In the last few years, U.S. economic policy has for good reason been focused on the aftereffects and lessons of the Great Recession. But looking ahead a couple of decades, the single most important factor for the health of the U.S. economy is whether we create an economic climate in which the rate of per capita growth can be 1 or 2% faster per year.

Thursday, August 14, 2014

Is the Division of Labor a Form of Enslavement?

The idea that an economy functions through a division of labor, in which we each focus and specialize in certain tasks and then participate in a market to obtain the goods and services we want to consume, is fundamental to economic analysis. Indeed, the very first chapter of Adam Smith's 1776 classic The Wealth of Nations is titled "Of the Division of Labor," and offers the famous example of how dividing up the tasks involved in making a pin is what makes a pin factory so much more productive than an individual who is making pins.

But what if the division of labor, with its emphasis on focusing on a particular narrow job, runs fundamentally counter to something in the human spirit? Karl Marx raised this possibility in The German Ideology (1846, Section 1, "Idealism and Materialism," subsection on "Private Property and Communism"). Marx wrote:

“Further, the division of labor implies the contradiction between the interest of the separate individual or the individual family and the communal interest of all individuals who have intercourse with one another. … The division of labor offers us the first example of how, as long as man remains in natural society, that is, as long as a cleavage exists between the particular and the common interest, as long, therefore, as activity is not voluntarily, but naturally, divided, man's own deed becomes an alien power opposed to him, which enslaves him instead of being controlled by him. For as soon as the distribution of labor comes into being, each man has a particular, exclusive sphere of activity, which is forced upon him and from which he cannot escape. He is a hunter, a fisherman, a shepherd, or a critical critic, and must remain so if he does not want to lose his means of livelihood; while in communist society, where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, shepherd or critic. This fixation of social activity, this consolidation of what we ourselves produce into an objective power above us, growing out of our control, thwarting our expectations, bringing to naught our calculations, is one of the chief factors in historical development up till now.”
Like so much of Marx's writing, this passage seems to me to give voice to a difficult concept that contains a substantial slice of truth; indeed, I had this quotation up on my office door for a time. But also like a lot of Marx, it seems to ignore or evade counterbalancing arguments.

I suspect we all know people who at times feel trapped by the division of labor. I can think offhand of several friends who aren't happy being lawyers, and a doctor who would have preferred not to become a doctor. When you're grinding out the quarterly reports or the semi-required stint of overtime, it's easy to feel trapped by the narrowness of the job.

But on the other side, the division of labor contains within it an opportunity to learn and specialize--to be the expert in your own field of study. This matters to me both as a consumer and as a worker. As a consumer, I don't want the noontime appointment with a doctor who was a shepherd this morning, a social critic this afternoon, and is planning to try a different set of jobs tomorrow. I want a doctor who works hard at being a doctor. I also want a car made by workers who have experience in their jobs, and I want to drive that car across bridges designed by engineers who spend their working time focused on engineering. As a consumer, I like dealing with goods and services produced by specialists.

As a worker, being stuck in one narrow occupation may feel like a trap. But fluttering from job to job can be a trap of a different kind--a trap of a string of shallow experiences. I don't mean to knock shallow experience: there are a lot of things worth trying only once, or maybe a few times. But you can't get 10 years of experience at any job if you switch jobs every year, or, in Marx's illustration, several times per day. There's probably a happy medium here of finding some variation in one's tasks and building expertise in different areas, both in work and in hobbies, over a lifetime. But to me, Marx's advice sounds like telling an ADHD worker to "find your bliss," and then watching that person flit like a butterfly on amphetamines.

Marx's challenge to the division of labor also sidesteps some practical issues. His implication seems to be that what you choose to do as a worker can be detached from what society needs. It's not clear what a society does if, on a given day, not enough people feel like showing up to be garbagemen or day care providers. Markets, pay, and defined jobs are a mechanism for coordinating what is produced and consumed, and for allowing that mix to evolve over time according to the range of jobs that people want to do as providers (given a certain wage) and the goods and services that people want in their economic role as consumers.

The division of labor can be constraining, but another fundamental principle of economics is that all choices involve giving up an opportunity to do something else. A world without a division of labor would just be constraining in a different and arguably less attractive way. If you would like some additional ruminations on moral issues surrounding labor markets, one starting point is this post from last month, "Are Labor Markets Exploitative?"


Wednesday, August 13, 2014

Characteristics of U.S. Minimum Wage Workers

Set aside for a few heartbeats the vexed question of just how a minimum wage would affect employment, and focus on a more basic set of facts: What are some characteristics of U.S. workers who receive the minimum wage? The statistics here are from a short March 2014 report from the U.S. Bureau of Labor Statistics, "Characteristics of Minimum Wage Workers, 2013." Of course, the facts about who is receiving the minimum wage also reveal who will be most directly affected by any changes.

How many workers are paid at or below the minimum wage?

The BLS reports that 75 million American workers were paid at an hourly rate in 2013, out of about 136 million total employed workers. Of that total, 3.3 million, or about 4.3%, were paid at the minimum wage or less. A figure from an April 3, 2014, BLS newsletter puts that level in historical context--that is, the share of hourly-paid workers receiving the federal minimum wage is lower than in most of the 1980s and 1990s, but a little higher than in much of the 2000s. Of course, shifts in the share of workers receiving the minimum wage in large part reflect changes in the level of the minimum wage. When the federal minimum wage increase signed into law by President Bush in 2007 was phased in during 2008 and 2009, more workers were affected by the higher minimum wage.

Percentage of hourly paid workers with earnings at or below the federal minimum wage, by sex, 1979–2013 annual averages

What's the breakdown of those being paid the minimum wage by age? In particular, how many are teenagers or in their early 20s? 

Of the 3.3 million minimum-wage workers in 2013, about one-quarter were between the ages of 16 and 19, another one-quarter were between the ages of 20 and 24, and half were age 25 or older.

What's the breakdown of those being paid the minimum wage by full-time and part-time work status? 

Of the 3.3 million minimum-wage workers in 2013, 1.2 million were full-time, and 2.1 million were part-time--that is, roughly two-thirds of minimum-wage workers were part-time.

What's the breakdown of those being paid the minimum wage across regions? 

For the country as a whole, remember, 4.3% of those being paid hourly wages get the minimum wage or less. If the states are divided into nine regions, the share of hourly-paid workers getting the minimum wage in each region varies like this: New England, 3.3%; Middle Atlantic, 4.8%; East North Central, 4.3%; West North Central, 4.6%; South Atlantic, 5.1%; East South Central, 6.3%; West South Central, 6.3%; Mountain, 3.9%; Pacific, 1.5%.

The BLS has state-by-state figures, too. There are two main reasons for the variation. Average wages can vary considerably across states, and in areas with lower wages, more workers end up at the minimum wage. In addition, 23 states have their own minimum wage set above the federal level. In those states, fewer workers (with exceptions often made for certain categories, like food service workers who receive tips) are paid below the federal minimum wage. It's an interesting political dynamic that many of those who favor a higher federal minimum wage live in states where the minimum wage is already above the federal level; in effect, they are advocating that states that have not adopted the policy preferred in their own state be required to do so.

In what occupations are hourly-paid workers most likely to receive the minimum wage? 

Percentage of hourly paid workers with earnings at or below the federal minimum wage, by occupation, 2013 annual averages

Whatever one's feelings about the good or bad effects of raising the minimum wage, it seems fair to say that those effects will be disproportionately felt by a relatively small share of the workforce, disproportionately young and part-time, and disproportionately in southern states.

Tuesday, August 12, 2014

Why Longer Economics Articles?

Articles in leading academic economics journals have roughly tripled in length over the last 40 years. Here's a figure from the paper by David Card and Stefano DellaVigna, "Page Limits on Economics Articles: Evidence from Two Journals," which appears in the Summer 2014 issue of the Journal of Economic Perspectives (28:3, 149-68). Five of the leading research journals in economics over the last 40 years are the Quarterly Journal of Economics, the Journal of Political Economy, Econometrica, the Review of Economic Studies, and the American Economic Review (AER). The authors do a "standardized" comparison that accounts for variations over time and across journals in page formatting. A typical article in one of these leading economics journals was 15-18 pages back in 1970, and now is about 50 pages.


I admit that this topic may be of more absorbing interest to me than to most other humans on planet Earth. I've been Managing Editor of the JEP since the start of the journal in 1987, and the bulk of my job is to edit the articles that appear in the journal. The length of JEP articles hasn't risen much at all during the last 27 years, while the length of articles in other journals has roughly doubled in that time. Am I doing something wrong? In an impressionistic way, what are some of the plausible reasons for additional length?

Ultimately, longer papers in academic research journals reflect an evolving consensus about what constitutes a necessary and useful presentation of research results. Over time, it is plausible that journal editors and paper referees have become more aggressive in requesting that additional materials be presented, additional hypotheses be considered, additional statistical tests be run, and the like.

An economics research paper back in the 1960s often made a point, and then stopped. An academic research paper in the second decade of the 21st century is more likely to spend a few pages setting the stage for its argument and its central question, to give some sense of the main results in the introduction, to include a section discussing previous research, to include another section laying out the background theory, and so on.

Changes in information and computing technology have pushed economics papers to become longer. There is vastly more data available than in 1970, so academic papers need to spend additional space discussing data. There has been a movement in the last couple of decades toward "experimental economics," in which economists vary certain parameters--either in a laboratory with a bunch of students, or often in a real-world setting--and then report in the research paper what was done and what data was collected. With cheaper computing power and better software, it is vastly easier to run a wide array of statistical tests, which means that space is needed to explain which tests were run, the differing results of the tests, and which results the author finds most persuasive.

In the past, the ultimate constraint on the length of academic journals was the cost of printing and postage. But in web-world, where we live today, distribution of academic research can have a near-zero cost. Editors of journals that are primarily distributed on-line have less incentive to require short articles.

Finally, one should mention the theoretical possibility that academic writing has become bloated over time, filled with loose sloppiness, unneeded and lengthy excursions into technical jargon, and occasional bouts of unrestrained pompousness.

Whatever the underlying cause of the added length of articles in economics journals, it creates a conflict between the underlying purposes of research publications. One purpose of such publications is to create a record of what was done, so that the data, theory, and arguments are spelled out in detail. However, another purpose is to allow findings to be disseminated among other researchers, as well as students and policy-makers, so that the results can be more broadly considered and understood. Longer articles probably do a better job of creating a record of what was done, and why. But given that time limits are real for us all, it now takes more time to read an economics article than it did four decades ago. The added length of journal articles means that many more pages of economics research articles are published, and a smaller proportion of those pages are read. I skim many economics articles, but having the time and space to read an article from beginning to end feels like a rare luxury, and I suspect I'm not alone.

The challenge is how to strike the right balance between the competing purposes of full documentation of research (which if unrestrained could easily run to hundreds of pages of data, statistics, and alternative theoretical models for a typical research paper), and the time limits faced by consumers of that research. Many modern research papers are organized in a way that allows or even encourages skimming to hit the high spots: for example, if you don't need to know right now about the details of the data collection, or the details of the theoretical model, or the details of the statistics, you can skip past those sections.

Another option mentioned by Card and DellaVigna is for some journals to go back to the old model, focusing on the presentation of key results, with full details available elsewhere. They write: "There may be an interesting parallel in the field of social psychology. The top journal in this field, the Journal of Personality and Social Psychology, publishes relatively long articles, as do other influential journals in the discipline. In 1988, however, a new journal, Psychological Science, was created to mirror the format of Science. Research papers submitted to Psychological Science can be no longer than 4,000 words. ... Psychological Science has quickly emerged as a leading journal in its area. In social psychology, journals publishing longer articles coexist with journals specializing in shorter, high-impact articles."

My own journal, the Journal of Economic Perspectives, offers articles that are meant as informed essays on a subject, and thus typically meant to be read from beginning to end. We hold to a constraint of about 1,000 published pages per year. (But even in JEP, we are increasingly likely to add on-line appendices with details about data, additional statistical tests, and the like.) I sometimes say that JEP articles are a little like giving someone a tour of a house by walking around and looking in all the windows. You can get a good overview of the house that way. But if you really want to know the place, you need to go into all the rooms and take a closer look.

The growing length of articles in economic research journals means that the profession has been giving greater priority to full presentation of the back-story of research, at the expense of readers. In one way or another, the pendulum is likely to swing back, in ways that make it easier for consumers of academic research to obtain a somewhat nuanced view of a range of research, without necessarily being buried in an avalanche of detail--but while still having that avalanche of detail available when desired.




Monday, August 11, 2014

Who is Holding the Large-Denomination Bills?

Most currency in major economies around the world is held in the form of large-denomination bills that ordinary people rarely use--or even see. Kenneth Rogoff documents the pattern as part of his short essay, "Costs and benefits to phasing out paper currency," presented at a conference at the National Bureau of Economic Research in April 2014.

I think I've held a $100 bill in my hand perhaps once in the last decade (and my memory is that the bill belonged to someone else). But the U.S. has $924.7 billion worth of $100 bills in circulation, which represents by value about 77% of all U.S. currency in circulation. In round numbers, say that the U.S. population is 300 million. That works out to roughly $3,100 in $100 bills (about 31 bills) for every man, woman, and child in the United States. Here's the table:
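The per-person figure is simple division, sketched here using only the rounded numbers just cited:

    # Back-of-the-envelope: $100 bills per U.S. resident
    value_in_100s = 924.7e9    # dollars in circulation as $100 bills
    population = 300e6         # rounded U.S. population
    per_person = value_in_100s / population
    print(per_person)          # about $3,082, or roughly 31 bills per person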

It's not just a U.S. phenomenon, either. Here are numbers for the euro. The euro has larger-denomination currency in common circulation than does the U.S., including 200- and 500-euro notes. More than half of all the euro notes in circulation, by value, are worth 100 euros or more, and 500-euro notes alone make up 30% of all euros in circulation.


One possible explanation for this phenomenon is that lots of currency is being held outside the borders of the United States and Europe. Given the large size of the bills, it probably isn't being used for ordinary transactions: most people wouldn't hand a $100 bill to a cab driver in Jakarta. But it could be used for holding wealth in a liquid but safe form in countries where other ways of holding wealth might seem risky. However, the widespread belief is that, compared to the U.S. dollar and the euro, the Japanese yen is used much less widely outside its home country. Even so, a hugely disproportionate share of Japan's currency in circulation is in large-denomination bills: a full 87% of Japanese currency in circulation by value is in the form of 10,000-yen notes (roughly comparable in value to a $100 bill).

Rogoff points out that the same pattern arises in Hong Kong as well. A Hong Kong dollar is worth about 13 cents U.S., so a HK$1,000 note is worth roughly US$130. More than half of all Hong Kong dollars in circulation by value are $1,000 bills.

It's easy enough to hypothesize explanations as to why so many large-denomination bills are in circulation, but the truth is that we don't really know the answer. It probably has something to do with the extent of tax evasion or illegal transactions, or a need for secrecy, or a fear of other wealth being expropriated. Adding up the value of the large-denomination bills in the U.S., Europe, and Japan, the total is in the neighborhood of $3 trillion. I find it hard to avoid the conclusion that there are some extraordinarily large stashes of cash, in the form of large bills, scattered around the world. And it is hard to imagine how this currency will ever be reintegrated into the banking system--there's just so much of it. This would seem to be a fertile field to plow for some Hollywood movie-maker looking for a real-world hook for a movie with underworld connections, a daring heist, lots of pictures of enormous amounts of cash, chase scenes, multiple double-crosses and triple-crosses, and a "what do you do with it now that you have it?" ending.

For some previous posts about large denomination bills, see "Who is Using $1 Trillion in U.S. Currency?" (October 25, 2011) and "The Soaring Number of $100 Bills" (June 10, 2013).