
Thursday, May 31, 2012

U.S. Imprisonment in International Context: What Alternatives?

The May 19 issue of the Economist magazine, in an article about California's budget problems and high prison costs, tossed in the following factoid: "Excessive incarceration is an American problem. The country has about 5% of the world’s population but almost 25% of its prisoners, with the world’s largest number of inmates and highest per capita rate of incarceration."

This comment sent me scampering to the website of the International Centre for Prison Studies, and based on data from their World Prison Brief, I put together the following table. The table lists the 20 countries around the world that imprison the greatest numbers of people, and the first column shows the total for each country. The second column shows how many people are imprisoned in the country per 100,000 population. Either way you slice it, the U.S. leads the way with its 2,266,832 prisoners and an imprisonment rate of 730 per 100,000 population.
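As a quick sanity check on how those two figures fit together (my own arithmetic, not from the World Prison Brief), the rate per 100,000 is just the prisoner count divided by population, so the two numbers together imply a population close to the actual 2012 U.S. figure:

```python
# Back out the population implied by the two reported figures.
prisoners = 2_266_832
rate_per_100k = 730

implied_population = prisoners / rate_per_100k * 100_000
print(f"Implied U.S. population: {implied_population / 1e6:.0f} million")  # ~311 million
```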

I posted back on November 30, 2011, about "Too Much Imprisonment," and that post has details, largely based on U.S. Department of Justice statistics, about the rapid rise in U.S. imprisonment over recent decades, the share of this rapid rise related to nonviolent offenses, and the cost. Here, I want to raise a different question: If not prison, then what?

Here's an example from the local news: A woman named Amy Senser (wife of former Minnesota Vikings player Joe Senser) was convicted of two counts of criminal vehicular homicide: failing to immediately call for help and leaving the scene. Apparently she was taking an off-ramp from the highway, and the victim, Anousone Phanthavong, was putting gas into his stalled vehicle. It seems clear that Senser was driving the car when it hit him: her defense was that although she knew she had hit something, she didn't know it was a person. Senser may end up spending four years in prison.

Let me stipulate that I'm utterly unsympathetic to drivers who leave the scene of accidents, and I'm broadly unsympathetic to many of those who commit crimes. But whatever the ins and outs of the Senser case, it seems to me highly unlikely that she is going to drive a vehicle that hits someone else who is putting gas in their stalled car on a highway off-ramp. Moreover, my lack of sympathy with criminal behavior collides with other feelings. My skinflint spending tendencies note that imprisonment costs about $50,000 per year in the United States. My hard-headed practicality notes that most people who are imprisoned will re-enter society at some point, and we don't want to make that re-entry harder than it already is. Finally, my general soft-heartedness notes that those convicted of crimes also have children, parents, spouses, lovers and friends. When someone goes to prison, their community of human connections suffers as well.

For those who use violence in committing crimes against strangers, imprisonment seems appropriate. But for offenders who pose little or no danger of future violence, America needs to think about alternatives rather than blowing state and local budgets on imprisonment. Of course, those alternatives need to be chosen with care.

While fines or monetary penalties have their place, they aren't enough for me. I don't want the wealthy, or those with wealthy relatives, to be able to buy their way out of their misdeeds. I want to take the person's time, not their bank account. 

I'm also not especially interested in the creative penalties that one sometimes reads about, where someone convicted of drunk driving needs to give talks to high school students, or attend the funerals of drunk driving victims, or spend weekend evenings in an emergency ward as casualties arrive. I doubt that it's practical to have tens of thousands of convicted criminals being shipped around from high school to YMCA to hospital emergency room. I don't like the legal system to be in the business of coercing half-hearted apologies. And I suspect that these "creative" penalties tend to apply more to those who are articulate and well-to-do and connected, and I see no reason why that group should get a break.

My thoughts about other alternatives are not well-formed. But in a world where we are deluged with concerns that technology is allowing us to be tracked and invading our privacy all the time, often without us knowing, it seems peculiar to me that our technology for dealing with criminals is a slightly more hygienic version of a penalty that has been around for millennia.

I find my thoughts turning to the "rubber rooms" where, as Steven Brill discussed in the New Yorker magazine back in 2009, the New York City public school system was warehousing 600 public school teachers too incompetent to be returned to the classroom. These teachers must punch a time-clock at the beginning and end of the day, and in between, they stay in the room while the teachers' union appeals their disciplinary action. The average person has been there for three years--at full pay, of course. I also think about the jurisdictions where you see people in orange jumpsuits picking up trash by the side of the freeway. I think about ankle bracelets and applying advanced technology to old-fashioned house arrest, which might include monitoring or blocking of communication.

Put pieces of these together, and I imagine an alternative system that would serve many of the functions of punishment and incapacitation of the current prison system. It would combine requirements to report to supervised rooms for long periods of time, with an option to do certain kinds of physical labor around the community, but it would also send people home for most of the 24-hour day, under house arrest. There might be some flexibility where after a minimum time served, it would be possible for those in such a system to go to work, or to have a day or two off from reporting or surveillance each week. Those who didn't comply could of course end up in the traditional prison system. Such a system would still involve heavy and punitive restrictions on personal freedom. But it could be vastly cheaper for taxpayers, while also recognizing the reality that most of those convicted of most crimes will be walking, driving, working and living among us for most of their lives.

Wednesday, May 30, 2012

Household Production: Levels and Trends

Since the early days of GDP accounting, and in every intro econ class since then, a standard talking point is that measures of economic output leave out home production. Further, if two neighbors stopped doing home production and instead hired each other to do housework and yardwork, total GDP would rise because those activities were now part of paid market exchange, even though the quantity of housework and yardwork actually done didn't rise. But how much is household production actually worth in the U.S. economy, and how has it changed over time? Benjamin Bridgman, Andrew Dugan, Mikhael Lal, Matthew Osborne, and Shaunda Villones tackle this question in "Accounting for Household Production in the National Accounts, 1965–2010," which appears in the May 2012 issue of the Survey of Current Business. (I found this study at Gene Hayward's HaywardEconBlog.) Here are a few points that jumped out at me (footnotes omitted).

How can one estimate the value of home production?  
Get an estimate of hours devoted to home production, and then multiply by the wage that would be paid to domestic labor. "To measure the value of nonmarket services, we make use of two unique surveys that track household labor activities and apply a wage to the total number of hours spent in home production. One of these surveys is the Multinational Time Use Survey (MTUS), which combined a number of time use surveys conducted by academic institutions into a single data set. These surveys were taken sporadically between 1965 and 1999. The other is the American Time Use Survey (ATUS) produced by the Bureau of Labor Statistics (BLS). This survey was taken annually between 2003 and 2010. ..."
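In other words, the method is replacement cost: hours of home production times the wage a housekeeper would be paid. A minimal sketch of that calculation, with every input below an illustrative placeholder rather than the paper's actual data:

```python
# Replacement-cost valuation of home production.
# All numbers are illustrative placeholders, not the paper's inputs.

weekly_hours_per_adult = 22   # average weekly hours of home production (illustrative)
adults = 230e6                # number of adults (illustrative)
domestic_wage = 10.0          # hourly wage for domestic workers, in dollars (illustrative)
nominal_gdp = 15.0e12         # nominal GDP, in dollars (illustrative)

value_of_home_production = weekly_hours_per_adult * 52 * adults * domestic_wage

print(f"Value of home production: ${value_of_home_production / 1e12:.1f} trillion")
print(f"Addition to GDP: {value_of_home_production / nominal_gdp:.1%}")
```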

How does the value of home production relate to GDP?
"We find that incorporating home production in GDP raises the level of GDP 39 percent in 1965 and
25.7 percent in 2010."

Why has the value of home production fallen over time?
Fewer hours are spent in home production over time, and the wage of household workers has fallen relative to other workers in the economy. "The impact of home production has dropped over time because women have been entering the workforce. This trend is driven by an increasing trend in the wage disparity between household workers and employees (that is, the opportunity cost of household labor)."

How would including home production in national output alter the growth rate of this expanded definition of GDP over time?
"Because standard GDP does not account for home production, some of the increase over time in GDP will be due to women switching from home production to market-based production. Our adjusted GDP measure includes the unmeasured home production, so the increase in GDP that occurs due to substitution from home production to market-based production will be smaller. During 1965 to 2010, the annual growth rate of nominal GDP was 6.9 percent. When household production is included, this growth rate drops to 6.7 percent."

How does time spent in home production vary with income level? How would including home production in output affect the inequality of income?
"We find that home production hours do not vary with family income: for women, who contribute to the bulk of home production hours, the correlation between family income and home production is about 0.01. Therefore, adding home production income to family income is essentially the same as adding a constant number to family income, which will raise the income of low income families proportionately more than high income families, leading to a decrease in inequality. This finding is consistent with earlier work in this literature ..."

What are the gender patterns for time spent in home production?
"In 1965, men and women spent an average of 27 hours in home production, and by 2010, they spent 22 hours. This overall decline reflects a drop in women’s home production from 40 hours to 26 hours, which more than offset an increase in men’s hours from 14 hours to 17 hours."

What is the connection between income and hours of household production?
Those with more income tend to spend slightly less time on home production. "Averaged over the years 2003 to 2010, the home production for women (men) in the lowest income category was 32.2 (23.3) hours per week, while in the highest income category it was 26.3 (19.0) hours per week."

Tuesday, May 29, 2012

The Shifting U.S.-China Trade Picture

The standard story of U.S.-China trade over the last decade or so goes like this: Huge U.S. trade deficits, matched by huge Chinese trade surpluses. One underlying reason for the imbalance is that China has been acting to hold its exchange rate unrealistically low, which means that within-China production costs are lower compared to the rest of the world, thus encouraging exports from China, and outside-China production costs are relatively high, thus discouraging imports into China.

This story is simplified, of course, but it holds a lot of truth. But it's worth pointing out that these developments are fairly recent--really just over a portion of the last decade--and not a pattern that China has been following since its period of rapid growth started back around 1980. In addition, these developments seem to be turning: the U.S. trade deficit is falling, China's trade surplus is declining, and China's exchange rate is appreciating. Here's the evolution in graphs that I put together using the ever-helpful FRED website from the St. Louis Fed.

First, here's a look at China's balance of trade over time. The top graph shows China's current account balance since 1980, roughly when China's process of economic growth got underway. Notice that China had a trade balance fairly close to zero until the early 2000s, when the surpluses took off. Because the first graph stops in 2010, the second graph shows China's trade balance from 2010 through the third quarter of 2011. Clearly, China's trade surplus has dropped in the last few years, and in all likelihood will be lower in 2011 than in 2010.



China's pattern of trade surpluses loosely follows its exchange rate. Here is China's exchange rate vs. the U.S. dollar since 1980. When an economy is experiencing extremely rapid productivity growth, the expected economic pattern is that its currency will appreciate over time--that is, become more valuable. However, as China's growth took off in the 1980s and into the 1990s, its currency depreciated--on the graph, it took more Chinese yuan to equal $1 U.S. than before. In about 1994, there is an especially sharp depreciation of the yuan, as it went very quickly from about 5.8 yuan/$1 to about 8.6 yuan/$1. During the booming U.S. economy of the late 1990s, this change had relatively little effect on the balance of trade, but by the early 2000s, it began to pump up China's trade surplus. However, notice also that the value of China's exchange rate has dropped quite substantially over the last few years, with much of the change coming before the Great Recession hit (U.S. recessions are shown with shaded gray vertical bands in the figure).



The U.S. balance of trade in the last decade or so looks like China's pattern, in reverse. As China's trade surplus takes off around 2000 or so, the U.S. trade deficit balloons at about the same time. As China's trade surplus diminishes in the last few years, the U.S. trade deficit also diminishes.


So what about the story with which I started: an undervalued Chinese currency, leading to huge Chinese trade surpluses and correspondingly huge U.S. trade deficits? At a minimum, the story is much less true than it was a few years back. Indeed, William R. Cline and John Williamson at the Peterson Institute for International Economics argue that the U.S.-China exchange rate has largely returned to the fundamental value justified by productivity and price differences between the two economies. Their argument appears in a May 2012 Policy Brief called "Estimates of Fundamental Equilibrium Exchange Rates, May 2012."

They point out that China's trade surpluses are likely to be much smaller than the IMF, for example, was predicting a few years ago. And while they believe China's currency is still slightly undervalued, and needs to continue appreciating over time, they estimate that its current value is not far from their "fundamental equilibrium exchange rate" or FEER. They write:

"China is still judged undervalued by about 3 percent ... Thus, whereas a year ago we estimated that the renminbi needed to rise 16 percent in real effective terms and 28.5 percent bilaterally against the dollar (in a general realignment to FEERs), the corresponding estimates now are 2.8 and 7.7 percent, respectively. It is entirely possible that future appreciation will bring the surplus [China's trade surplus] down to less than 3 percent of GDP. But China still has fast productivity growth in the tradable goods industries, which implies that a process of continuing appreciation is essential to maintain its current account balance at a reasonable level."
In short, the episode of an undervalued Chinese currency driving huge trade imbalances may be largely behind us. The current U.S. trade deficits are thus more rooted in an economy which continues to save relatively little and to consume more than it produces domestically--thus drawing in imports.

Friday, May 25, 2012

Is Wikipedia Politically Biased?

Wikipedia aspires to a neutral point of view. How well does it succeed? Shane Greenstein and Feng Zhu tackle this question in the May 2012 issue of the American Economic Review. (The article is not freely available, but many academics will have access through their library websites.) They conclude:

"To summarize, the average old political article in Wikipedia leans Democratic. Gradually, Wikipedia’s articles have lost that disproportionate use of Democratic phrases, moving to nearly equivalent use of words from both parties, akin to an NPOV [neutral point of view] on average. The number of recent articles far outweighs the number of older articles, so, by the last date, Wikipedia’s articles appear to be centered close to a middle point on average. Though the evidence is not definitive about the causes of change, the extant patterns suggest that the general tendency toward more neutrality in Wikipedia’s political articles largely does not arise from revision. There is a weak tendency for articles to become less biased over time. Instead, the overall change arises from the entry of later vintages of articles with an opposite point of view from earlier articles."

How do they reach this conclusion? Greenstein and Zhu focus on entries that bear on topics of importance in U.S. politics; in particular, they begin by selecting all articles in January 2011 that include "republican" or "democrat" as keywords. This procedure generates about 111,000 articles, and when they have dropped the articles that aren't about U.S. politics, they have about 70,000 articles remaining.

They then rely on a process from earlier research, which selects "1,000 phrases based on the number of times these phrases appear in the text of the 2005 Congressional Record, applying statistical methods to identify phrases that separate Democratic representatives from Republican representatives, under the model that each group speaks to its respective constituents with a distinct set of coded language. In brief, we ask whether a given Wikipedia article uses phrases favored more by Republican members or by Democratic members of Congress."

Some of their 70,000 articles don't include any of these phrases, and so can't be evaluated by this method. For the 28,000 articles they can evaluate, they find on average a Democratic slant. "[W]hen they have a measured slant, articles about civil rights tend to have a Democrat slant (-0.16), while the topic of trade tends to have a Republican slant (0.06). At the same time, many seemingly controversial topics such as foreign policy, war and peace, and abortion are centered at zero [that is, no slant]."
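A minimal sketch of what a phrase-based slant measure of this general kind looks like in practice; the phrase lists below are invented placeholders, not the 1,000 coded phrases from the underlying research, and the actual index is scaled differently:

```python
# Toy phrase-based slant score: negative leans Democratic, positive leans
# Republican, zero is balanced. Phrase lists are hypothetical placeholders.

democratic_phrases = ["civil rights", "minimum wage"]
republican_phrases = ["tax relief", "death tax"]

def slant(article_text: str) -> float:
    text = article_text.lower()
    d = sum(text.count(p) for p in democratic_phrases)
    r = sum(text.count(p) for p in republican_phrases)
    return 0.0 if (d + r) == 0 else (r - d) / (d + r)

print(slant("The act promised tax relief."))                           # 1.0
print(slant("The bill expanded civil rights and the minimum wage."))   # -1.0
```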

They then look back at the earlier revisions of their 70,000 articles, and to keep the numbers manageable, when an article has more than 10 revisions they look only at 10.  This gives them 647,000 entries, but again many of them don't use any of the key phrases, leaving 237,000 that do include some of those phrases. They find that older revisions tend to lean more Democratic, while newer revisions and newer entries are more balanced.

Wikipedia is in many ways an extraordinary success. Greenstein and Zhu write:
"As the largest wiki ever and one of the most popular websites in the world, Wikipedia accommodates a skyrocketing number of contributors and readers. At the end of 2011, after approximately a decade of production, Wikipedia supports 3.8 million articles in English and well over twenty million articles in all languages, and it produces and hosts content that four hundreds of millions of readers view each month. Every ranking places Wikipedia as the fifth or sixth most visited website in the United States, behind Google, Facebook, Yahoo!, YouTube, and, perhaps, eBay. In most countries with unrestricted and developed Internet sectors, Wikipedia ranks among the top ten websites visited by households." 
Any semi-serious researcher (and here I include junior-high-school students) knows that while Wikipedia can be a useful starting point, it should never be an endpoint. Instead, it can serve as a useful shortcut to finding links to other sources. But the Greenstein and Zhu evidence suggests that Wikipedia on average has found a reasonable level of political balance--although you may need to read a few related entries on the same broad topic to achieve it.

Thursday, May 24, 2012

Lemley on Fixing the U.S. Patent System

Mark Lemley has written "Fixing the Patent Office" for SIEPR, the Stanford Institute for Economic Policy Research (Discussion Paper No. 11-014, published May 21, 2012).  Lemley has an interesting starting point for thinking about the U.S. patent system. He writes (footnotes omitted):

"Most patents don’t matter. They claim technologies that ultimately failed in the marketplace. They protect a firm from competitors who for other reasons failed to materialize. They were acquired merely to signal investors that the relevant firm has intellectual assets. Or they were lottery tickets filed on the speculation that a given industry or invention would take off. Those patents will never be licensed, never be asserted in negotiation or litigation, and thus spending additional resources to examine them would yield few benefits."

"Some bad patents, however, are more pernicious. They award legal rights that are far broader than what their relevant inventors actually invented, and they do so with respect to technologies that turn out to be economically significant. Many Internet patents fall into this category. Rarely a month goes by that some unknown patent holder does not surface and claim to be the true inventor of eBay or the first to come up with now‐familiar concepts like hyperlinking and e‐commerce. While some such Internet patents may be valid--someone did invent those things, after all--more often the people asserting the patents actually invented something much more modest. But they persuaded the Patent Office to give them rights that are broader than what they actually invented, imposing an implicit tax on consumers and thwarting truly innovative companies who do or would pioneer those fields.

"Compounding the problem, bad patents are too hard to overturn. Courts require a defendant to provide “clear and convincing evidence” to invalidate an issued patent. In essence, courts presume that the Patent Office has already done a good job of screening out bad patents. Given what we know about patents in force today, that is almost certainly a bad  assumption."

"The problem, then, is not that the Patent Office issues a large number of bad patents. Rather, it is that the Patent Office issues a small but worrisome number of economically significant bad patents and those patents enjoy a strong, but undeserved, presumption of validity."
Long-time devotees of my own Journal of Economic Perspectives may recognize this argument, because it is similar to what Lemley argued with co-author Carl Shapiro in "Probabilistic Patents" in the Spring 2005 issue. (JEP articles are freely available to all courtesy of the American Economic Association.) As Lemley argues, the problems of the patent system aren't as simple as taking longer to examine patent applications, hiring more patent examiners, or being more stingy in granting patents. Instead, the goal should be to give greater attention to patents that are likely to end up being more important. How might this be done?

One approach is to give patent applicants a method of signalling whether they believe the patent will be important. The idea here is that patent applicants can apply under the current system, in which case their patent would have only the usual legal presumption in its favor if challenged in court, or they can pay a substantial amount extra for a more exhaustive patent examination, which would have a much stronger presumption in its favor if challenged in court. Lemley writes:

"[A]pplicants should be allowed to “gold plate” their patents by paying for the kind of searching review that would merit a strong presumption of validity. An applicant who chooses not to pay could still get a patent. That patent, however, would be subject to serious—maybe even de novo—review in the event of litigation. Most likely, applicants would pay for serious review with respect to their most important patents but conserve resources on their more speculative entries. That would allow the Patent Office to focus its resources on those self-selected patents, thus benefiting from the signal given by the applicant’s own self‐interested choice. The Obama campaign proposed this sort of tiered review, and the PTO [Patent and Trademark Office] has recently implemented a scaled‐down version, in which applicants can choose the speed but not the intensity of review.Adoption has been significant but modest ... [I]t appears to be performing its intended function of distinguishing some urgent applications from the rest of the pack."

Another approach would be to allow other parties to pay a substantial fee to the Patent Office to re-examine the grounds for a recently granted patent. Lemley again:

"Post‐grant opposition is a process by which parties other than the applicant have the opportunity to request and fund a thorough examination of a recently issued patent. A patent that survives collateral attack should earn a presumption of validity ... [P]ost‐grant opposition is attractive because it harnesses private information; this time, information in the hands of competitors. It thus helps the PTO to identify patents that warrant serious review, and it also makes that review less expensive by creating a mechanism by which competitors can share critical information directly with the PTO. A post‐grant opposition system is part of the new America Invents Act, but it won’t begin to apply for another several years,  and the new system will be unavailable to many competitors because of the short time limits for filing an opposition. ... But the evidence from operation of similar systems in Europe is encouraging."

Finally, the traditional way to focus on the 1-2% of patents that really matter, and where the parties can't agree, is to litigate. Lemley argues that such litigation will continue to be quite important, and that the underlying legal doctrine should acknowledge that many patents do not deserve a strong presumption of validity--unless it has been earned through an especially exhaustive process at the Patent and Trademark Office. Lemley one more time:

"[W]e will continue to rely on litigation for the foreseeable future as a primary means for weeding out bad patents. Litigation elicits information from both patentees and competitors through the adversarial process, which is far superior to even the best‐intentioned government bureaucracy as a mechanism for finding truth. More important, litigation is focused on the very few patents (1-2 percent) that turn out to be important and about which parties cannot agree in a business transaction. Litigation can be abused, and examples of patent litigation abuse have been rampant in the last two decades. But a variety of reforms have started to bring that problem under control,
and the courts have the means to continue that process.  ... Courts could modulate the presumption of validity for issued patents. A presumption like that embraced by the current “clear and convincing” standard must be earned, and under current rules patent applicants do not earn it. ... The current presumption is so wooden that courts today assume a patent is valid even against evidence that the patent examiner never saw, much less considered, a rule that makes no sense."

None of this is to say that it doesn't make sense to rethink training and expectations for patent examiners themselves, and Lemley has some interesting evidence about how patent examiners tend to turn down fewer patents the longer they are on the job, and how they often rely on the background that they personally gather, rather than on background collected by others--including others in the patent office itself. But the idea that patent reform shouldn't focus on trying to review every application exhaustively, but instead on how to give greater attention to the applications that have real-world importance, seems to me a highly useful insight.

Wednesday, May 23, 2012

Dimensions of U.S. College Attendance

Alan Krueger, chairman of President Obama's Council of Economic Advisers, gave a lecture at Columbia University in late April on "Reversing the Middle Class Jobs Deficit." A certain proportion of the talk is devoted to explaining how all the good economic news is due to Obama's economic policies and how all of Obama's economic policies have benefited the U.S. economy. Readers can evaluate their own personal tolerance for that flavor of exposition. But the figures that accompany such talks are often of independent interest, and in particular, my eye was caught by some figures about U.S. college attendance.  (Full disclosure: Alan was editor of my own Journal of Economic Perspectives, and thus my direct boss, from 1996-2002.)

First look at the share of U.S. 55-64 year-olds in 2009 who have a post-secondary degree of some sort. It hovers around 40% of this age group, highest in the world, according to OECD data. Then look at the share of U.S. 25-34 year-olds in 2009 who have a post-secondary degree of some sort. It's also right around 40% for this age group. Although one might expect that a higher proportion of the younger generation would be obtaining post-secondary degrees, this isn't actually true for the United States over the last 30 years. However, it is true for many other countries, and as a result, the U.S. is middle-of-the-pack in post-secondary degrees among the 25-34 age group. This news isn't new--for example, I posted about it in July 2011 here--but it's still striking. It seems to me possible to have doubts about the value and cost of certain aspects of post-secondary education (and I do), but still to be concerned that the U.S. population is falling back among its international competitors on this measure (and I am).


Krueger also points out that the chance of completing a bachelor's degree is strongly affected by the income level of your family. The horizontal axis shows the income distribution divided into fourths. The vertical axis shows the share of those who complete a bachelor's degree by age 25. The lower red line is for those born between 1961-1964--that is, those who started attending college roughly 18 years later in 1979. The upper line is for those born from 1979-1982--that is, those who started attending college in 1998.
Here are a few observations based on this figure:

1) Even for those from top-quartile groups in the more recent time frame, only a little more than half are completing a bachelor's degree by age 25. To put it another way, the four-year college degree has never been the relevant goal for the median U.S. high school student. Given past trends and the current cost of such degrees, it seems implausible to me that the U.S. is going to increase dramatically the share of its population getting a college degree. I've posted at various times about how state and local funding for public higher education is down; about how the U.S. plan for expanding higher education appears to involve handing out more student loans, which then are often used at for-profit institutions with low graduation rates; and about how alternatives to college like certification programs, apprenticeships,  and ways of recognizing nonformal and informal learning should be considered.


2) Those from families in lower income quartiles clearly have a much lower chance of finishing a four-year college degree. My guess is that this difference is only partly due to the cost of college, while a major reason for the difference is that those with lower incomes are more likely to attend schools and to come from family backgrounds that aren't preparing them to attend college. Moreover, the gap in college attendance between those from lower- and higher-income families hasn't changed much over the two decades between the lower and the higher line in the figure, so whatever we've been doing to close the gap doesn't seem to be working.

3) It's a safe bet that many of those in the top quarter are families where the parents are college graduates, supporting and pushing their children to be college graduates. It's also a safe bet that many of those in the bottom quarter are families where the parents are not college graduates, and their children are not getting the support of all kinds that they need to become college graduates. In this way, it seems likely that college education is serving a substantial role in causing inequality of incomes to pass from one generation to the next.  Krueger has referred to this pattern of high income inequality at one time leading to high inequality in the future as the "Great Gatsby Curve," as I described here.

Tuesday, May 22, 2012

Lawyers without Licenses?

I wrote a few days back about how widespread state-level requirements for occupational licenses limit the job market opportunities for many low-skilled workers. But of course, many other occupations are licensed, too. In their book First Thing We Do, Let's Deregulate All the Lawyers, Clifford Winston, Robert Crandall, and Vikram Maheshri make the case for lawyers without licenses. (The Brookings Institution Press page for ordering the book is here; the Amazon page is here.) Cliff Winston has a nice readable overview of their argument, "Deregulate the Lawyers," which appears in the Second Quarter 2012 issue of the Milken Institute Review (which is ungated, although a free registration is required).

Just to be clear, the proposal here isn't for abolishing law schools or law degrees. Instead, the proposal is that it should be legal, if the buyer so desires, to hire people without such degrees to do legal work. Here are a few of the points they make:

  • The U.S. has about one million lawyers. "[T]oday, all but a handful of states – the notable exception being California – require bar applicants to be graduates of ABA-accredited law schools. And every state except Wisconsin (which grants free passes to graduates of the state’s two major law schools) then requires them to pass a bar exam."
  • "State governments (and state appellate courts) have also gone along with the ABA’s [American Bar Association's] wish to prohibit businesses from selling legal services unless they are owned and managed by lawyers. And not surprisingly, the group’s definition of the practice of law is expansive,
    including nearly every conceivable legal service, including the sale of simple standard form
    wills."
  • In the book, Winston, Crandall, and Maheshri attempt to estimate how much more income lawyers are able to receive, above and beyond the alternative jobs for those with similar levels of education, as a result of these licensing rules. They argue that about 50% of the income of lawyers is a result of the licensing limits. I view this number as closer to an educated guess than a precise valuation, but given that the U.S. spends about $200 billion per year on legal services, even half that amount would be a very large dollar value. 
  • In usual markets, more supply drives down the price. But when a society has more lawyers, and more of those lawyers end up in political and regulatory jobs, it is plausible that, under the current regulatory restrictions, lawyers as a group are making more work and generating billable hours for each other.
  • It's never easy to predict what would happen if it became legal to hire those with less than a three-year law degree from an accredited institution to do legal work. But it seems plausible that a lot of jobs done by lawyers could be done by someone with fewer years of education and fewer student loans to pay off: basic wills; criminal defense in simple cases of DWI or public intoxication; basic divorce and bankruptcy; simple incorporation papers; real estate transactions; and other situations. One can imagine that the skills needed in these cases might be taught as part of an undergraduate major in law, or law schools might offer one-year and two-year degrees along with the full three-year degree, or even as part of an apprenticeship program. National firms might seek to establish brand names and reputations in these areas, like H&R Block does for tax preparation services. In some cases, like certain kinds of legally required corporate disclosure filings, perhaps sophisticated forms and information technology could substitute for a lawyer filling in the blanks. 
  • Certainly, some of these steps might drive down wages for existing lawyers. But on average, lawyers receive pay that is well above the U.S. average, with unemployment rates below the U.S. average. Indeed, the U.S. economy as a whole might be better off if some of those who now work as lawyers entered other parts of the private sector--perhaps starting and managing businesses. "There is little doubt that some people who become attorneys would have chosen to work in other occupations – and possibly made greater contributions to society – if they were not attracted to law by the prospect of inflated salaries."
  • Many people with low incomes end up without legal representation because of cost. "Surely, many of the currently unrepresented litigants would be better off even if they gained access only to uncredentialed legal advocates."
  • Perhaps the quality of legal representation would decline without the licensing laws, but it isn't obviously true. "[T]he American Bar Association’s own Survey on Lawyer Discipline Systems reported that, in 2009, some 125,000 complaints were logged by state disciplinary agencies – one complaint for every eight lawyers practicing in the United States. Note that this figure is a lower bound on client dissatisfaction because it includes only those individuals who took the time to file a complaint." A deregulated environment for lawyers might well produce other methods of ensuring quality: warranties; money-back guarantees; brand-name reputation; and firms that monitor or rate providers of legal services.
  • Deregulation in the airline industry back in the 1970s occurred partly because it was possible to observe how airline competition was actually working within the states of California and Texas. Might there be some example of deregulating the lawyers that would have similar effect? "One state – perhaps Arizona, whose legislature has declined to re-enact its unauthorized practice statute, or California, whose bar indicated it would not initiate actions under its statute – may realize benefits that build support elsewhere. And perhaps England’s and Australia’s recent efforts to liberalize regulation of their legal services will attract attention here."
My own sense is that while the U.S. economy does need a certain number of big-time lawyers, many law students spend years of class-time and tens of thousands of tuition dollars on classes that bear no relationship to the law that they will actually practice. Back in college, one of my economics professors used to have a nice pre-packaged rant against regulations that were intended to ensure high quality, because he believed that everyone should have the right to buy cheap and low-quality stuff if it was what they wanted--or all they could afford. The legal services that most of us need most of the time could be provided far more cheaply, and at least as reliably, without requiring that every provider get a four-year college degree and then spend three more years in law school.


Monday, May 21, 2012

Illustrating Economies of Scale

The concept of "economies of scale" has been lurking around economics since Alfred Marshall's Principles of Economics back in 1890 (see Book IV, Ch. VIII, from the 1920 edition here). It's one of the few semi-technical bits of economics-speak to make it into everyday discussions. But in explaining the concept to students I don't always have as many good concrete examples as I would like. Here are some of the examples I use. But if readers are aware of sound citations to academic research to back up these examples, or other examples with a sound research backing, I'd be delighted to hear about them.

A number of examples of economies of scale are plausible real-world examples. Why are there only two major firms producing airplanes: Boeing and Airbus? A likely answer is that economies of scale make it difficult for smaller firms to get more than a very specialized niche of the market. Why are there two big cola soft-drink companies: Coca-Cola and Pepsi? Why are there a relatively small number of national fast-food hamburger chains: McDonald's, Burger King, Wendy's? A likely explanation is that there are economies of scale for such firms, partly in terms of being able to afford a national advertising and promotion budget, partly in terms of cost advantages of buying large quantities of inputs. Why is there only one company providing tap water in your city? Because there are economies of scale to building this kind of network, and running duplicative sets of pipes for additional water companies would be inefficient.

While I believe that these examples are a reasonable enough approximation of an underlying truth to pass along to students, I confess that I'm not familiar with solid economic research establishing the existence and size of economies of scale in these cases.
  
In the second edition of my own Principles of Economics textbook, I give one of my favorite examples of economies of scale: the "six-tenths rule" from the chemical manufacturing industry. (If you are an instructor for a college-level intro economics class--or you know such an instructor!--the book is available from Textbook Media. The price ranges from $20 for a pure on-line book to $40 for a black-and-white paper book with on-line access as well. In short, it's a good deal--and on-line student questions and test banks are available, too.) The research on this rule actually goes back some decades. Here's my one-paragraph description from the textbook (p. 178):

"One prominent example of economies of scale occurs in the chemical industry. Chemical plants have a lot of pipes. The cost of the materials for producing a pipe is related to the circumference of the pipe and its length. However, the volume of gooky chemical stuff that can flow through a pipe is determined by the cross-section area of the pipe. ... [A] pipe which uses twice as much material to make (as shown by the circumference of the pipe doubling) can actually carry four times the volume of chemicals (because the cross-section area of the pipe rises by a factor of four). Of course, economies of scale in a chemical plant are more complex than this simple calculation suggests. But the chemical engineers who design these plants have long used what they call the “six-tenths rule,” a rule of thumb which holds that increasing the quantity produced in a chemical plant by a certain percentage will increase total cost by only six-tenths as much. "

A recent related example of how pure size can add to efficiency is a trend toward even larger container ships. For a press discussion, see "Economies of scale made steel: The economics of very large ships," in the Economist, November 12, 2011. The new generation of ships is 400 meters long and 50 meters wide, with the largest internal combustion engines ever built, driving a propeller shaft that is 130 meters long and a propeller that weighs 130 tons. Running this ship takes a crew of only 13 people, although they include a few more for redundancy. Ships with 20% larger capacity than this one are on the way.

One useful way to help make economies of scale come alive for students is to link it with antitrust and public policy concerns. For example, a big question in the aftermath of the financial crisis is whether big banks should be broken up, so that the government doesn't need to face a necessity to bail them out because they are "too big to fail." I posted about this issue in "Too Big To Fail: How to End It?" on April 2, 2012. One piece of evidence in the question of whether to break up the largest banks is whether they might have large economies of scale--in which case breaking them up would force consumers of bank services to pay higher costs. However, in that post, I cite Harvey Rosenblum of the Dallas Fed arguing: "Evidence of economies of scale (that is, reduced average costs associated with increased size) in banking suggests that there are, at best, limited cost reductions beyond the $100 billion asset size threshold." Since the largest U.S. banks are a multiple of this threshold, the research suggests that they could be broken up without a loss of economies of scale.

Another recent example of the interaction between claims about economies of scale and competition policy came up in the recently proposed merger between AT&T and T-Mobile. The usual counterclaims arose in this case: the companies argued that the merger would bring efficiencies that would benefit consumers, while the antitrust authorities worried that the merger would reduce competition and lead only to higher prices. Yan Li and Russell Pittman tackle the question of whether the merger was likely to produce efficiencies in "The proposed merger of AT&T and T-Mobile: Are there unexhausted scale economies in U.S. mobile telephony?", a discussion paper published by the Economic Analysis Group of the U.S. Department of Justice in April 2012.


"AT&T’s proposed $39 billion acquisition of T-Mobile USA (TMU) raised serious concerns for US policymakers, particularly at the Federal Communications Commission (FCC) and the Antitrust Division of the Justice Department (DOJ), which shared jurisdiction over the deal. Announced on March 20, 2011, the acquisition would have combined two of the four major national providers of mobile telephony services for both individuals and businesses, with the combined firm’s post-acquisition share of revenues reportedly over 40 percent, Verizon a strong number two at just under 40 percent, and Sprint a distant number three at around 20 percent. ...

All of this raises the crucial question: How reasonable is it to assume that under current (i.e. without the merger) conditions, AT&T and T-Mobile enjoy substantial unexhausted economies of density and size of national operations? Recall that the fragmentary estimates made public suggest claims of at least 10-15 percent reductions in cost, and perhaps 25 percent or more. Absent an econometric examination of mobile telephony for the US as a whole as well as for individual metropolitan areas, what can we infer from the existing literature? The literature on at least one other network industry is not particularly supportive. ... Most of the existing empirical literature features observations at the firm level, with output measured as number of subscribers or, less frequently, revenues or airtime minutes. These studies tend to find constant returns to scale or even decreasing returns to scale for the largest operators – i.e., generally U-shaped cost curves. ...


[I]t is unlikely that T-Mobile, and very unlikely that AT&T, are currently operating in a range where large firm-level economies related to activities such as procurement, marketing, customer service, and administration would have been achievable due to the merger. Regarding both measures, the presence of “immense” unexhausted economies for the two firms seems unlikely indeed. On this basis (and on this basis alone), our results support the decision of DOJ to challenge the merger and the scepticism expressed by the FCC staff."


Li and Pittman also raise the useful point that very large firms should perhaps be cautious about claiming huge not-yet-exploited economies of scale are available if only they could merge with other very large firms. After all, if economies of scale persist to a level of output where only one or a few mega-firms can take advantage of them, then an economist will ask whether this is a case of "natural monopoly," and thus whether there is a case for regulation to assure that the mega-firm, insulated from competitive challenge because it can take advantage of economies of scale, will not exploit its monopoly power to overcharge consumers. As Li and Pittman write of the proposed merger between AT&T and T-Mobile: "[W]e may justifiably ask whether if one believes the evidence of “immense” economies presented by the merging companies, one should take the next step and consider whether mobile telephony in U.S. cities is a “natural monopoly”, with declining costs throughout the relevant regions of demand?"

Finally, an intriguing thought--that economies of scale may become less important in the future, at least in some areas--comes from the new technology of manufacturing through 3D printing. Here's a discussion from the Economist, April 21, 2012, in an article called "A third industrial revolution":


"Ask a factory today to make you a single hammer to your own design and you will be presented with a bill for thousands of dollars. The makers would have to produce a mould, cast the head, machine it to a suitable finish, turn a wooden handle and then assemble the parts. To do that for one hammer would be prohibitively expensive. If you are producing thousands of hammers, each one of them will be much cheaper, thanks to economies of scale. For a 3D printer, though, economies of scale matter much less. Its software can be endlessly tweaked and it can make just about anything. The cost of setting up the machine is the same whether it makes one thing or as many things as can fit inside the machine; like a two-dimensional office printer that pushes out one letter or many different ones until the ink cartridge and paper need replacing, it will keep going, at about the same cost for each item.
"Additive manufacturing is not yet good enough to make a car or an iPhone, but it is already being used to make specialist parts for cars and customised covers for iPhones. Although it is still a relatively young technology, most people probably already own something that was made with the help of a 3D printer. It might be a pair of shoes, printed in solid form as a design prototype before being produced in bulk. It could be a hearing aid, individually tailored to the shape of the user’s ear. Or it could be a piece of jewellery, cast from a mould made by a 3D printer or produced directly using a growing number of printable materials."

Right now, 3D printing is a more expensive manufacturing technology than standard mass production, but it is also vastly more customizable. For uses where this flexibility matters, like a hearing aid or other medical device that exactly fits, or making a bunch of physical prototypes to be tested, 3D printing is already beginning to make some economic sense. As the price of 3D printing falls, it will probably become integrated into a vast number of production processes that will combine old-style mass manufacturing with 3D-printed components. One suspects that a high proportion of the value-added and the price that is charged to consumers will be in the customized part of the production process.
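The underlying cost structure is easy to sketch: mass production spreads a large fixed setup cost over the batch, while 3D printing has almost no setup cost but a higher per-unit cost. The numbers below are invented purely for illustration:

```python
# Average cost per item under the two technologies (illustrative numbers).

def mass_production_avg_cost(quantity, setup=10_000.0, unit_cost=2.0):
    return setup / quantity + unit_cost

def printing_avg_cost(quantity, unit_cost=15.0):
    return unit_cost  # roughly flat: no mould or machining setup to amortize

for q in (1, 10, 1_000, 100_000):
    print(q, round(mass_production_avg_cost(q), 2), printing_avg_cost(q))
# 3D printing wins for one-offs; mass production wins at scale.
```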

Friday, May 18, 2012

Capital Controls: Evolution of the IMF and Conventional Wisdom

Back when I was first being doused with economics in the late 1970s and early 1980s, the idea that a country might impose controls on the inflow or outflow of international capital had a sort of fusty, past-the-expiration-date odor around it. Sure, such controls had existed back before World War II, and persisted for some years after the war. But wasn't it already time, or slightly past time, to phase them out? Not coincidentally, this was also the position of the IMF at around this time. But when the IMF was founded back in the late 1940s, it accepted the necessity for capital controls, and now, the IMF is returning to an acceptance of capital controls--but with a twist. Let me walk through this evolution.

At the founding of the IMF back in the late 1940s, "almost all members maintained comprehensive capital controls that the drafters of the Articles assumed would remain in place for the foreseeable future," reports the IMF in a 2010 paper on "The IMF's Role Regarding Cross-Border Capital Flows." The perspective was that "bodies such as the General Agreement on Tariffs and Trade (GATT—now the World Trade Organization, WTO) were to be responsible for the liberalization of trade in goods and now services, while the Fund would ensure that members liberalized the payments and transfers associated with such trade." However, while the IMF would encourage liberalizing payments for trade, it would not especially encourage international capital flows for purposes of financial investment, based on a "rather negative view of capital flows that then prevailed, premised on the belief that speculative capital movements had contributed to the instability of the prewar system and that it was necessary to control such movements."

By the 1970s, the position of the IMF (and many mainstream economists) had changed. "Many advanced economies were liberalizing their capital accounts and it was recognized that international capital movements had begun to play an important role in the functioning of the international monetary system ..."
The IMF began to actively encourage countries to allow free movement of capital for investment purposes, although it stopped short of requiring such a change as a condition for loans. The general tenor of the IMF advice was that if there were problems with international capital flows, they could typically be resolved in other ways, like through flexibility of exchange rates or alterations in fiscal and monetary policy. By the 1990s, the IMF was proposing that all of its countries should gradually but surely remove all capital controls.
For more details on the IMF history with respect to capital controls, see the April 2005 report from the IMF's Independent Evaluation Office, Report on the Evaluation of the IMF's Approach to Capital Account Liberalization.

But this proposal never took effect, and one big reason was the East Asian financial crisis of 1997-98. The East Asian "tiger" economies like South Korea, Thailand, Malaysia, Indonesia and Taiwan had been growing ferociously in the 1980s and into the 1990s. They were viewed as genuine economic success stories: rapid productivity growth, and fairly well-managed fiscal and monetary policies. But they attracted a wave of international investment capital in the early 1990s that pumped up their currencies and stock markets to unsustainable levels, and when the bubble burst and the international financial capital rushed out, it left behind a financial crisis and a deep recession. Of course, the recent difficulties in small European economies like Greece, Ireland, Portugal and Spain follow a similar pattern: international financial capital rushed in, promoted a boom, and then rushed out, leaving financial crisis and recession.


If you are in an economy that is small by global standards--which is most of them--then international capital markets have a tendency to dramatically overreact. It's as if, when I said "I'm hungry," someone dumped a bathtub full of spaghetti over my head, and then when I said "that's too much," they starved me for a week. When small national economies look like a good place to invest, international money floods in and can lead to price bubbles and unsustainable booms. When the economic problems become apparent, then at some point a "sudden stop" occurs and international money floods out, leading to financial and economic crises.

Even before the East Asian crisis hit, some folks in the IMF and many outside it were rethinking the notion that capital controls should only be viewed as obstructions to be removed, and began trying to develop a more nuanced view. The most recent IMF effort along these lines is a series of papers on "capital flows and the policies that affect them." The first paper in the series is the 2010 paper cited above. The fourth paper came out in March 2012, called "Liberalizing Capital Flows and Managing Outflows." Here are a few highlights (references to figures and citations omitted):

Removing capital controls has theoretical benefits, but in the real world often has costs
"In perfect markets with full information and no externalities, liberalization of capital flows can benefit both source and recipient countries by improving resource allocation. The more efficient global allocation of savings can facilitate investment in capital-scarce countries. In addition, liberalization of capital flows can promote risk diversification, reduce financing costs, generate competitive gains from entry of foreign investors, and accelerate the development of domestic financial systems."

The main cost of removing capital controls is the risk of sudden stops
"The principal cost of capital account openness stems from the vulnerability to financial crises triggered by sudden stops in capital flows, and from currency and maturity mismatches. Systemic risk-taking can increase investment, leading to higher growth but also to a greater incidence of crises. Many empirical studies have established the strong association between surges in capital inflows (and their composition) and the likelihood of debt, banking, and currency crises in emerging market countries. Other studies, however, do not find a systematic association between crises and capital account openness, but find that the relationship hinges on the level of financial sector development, institutional quality, macroeconomic policy, and trade openness ... "

The main policy recommendations still propose a gradual movement toward fewer restrictions on capital movements, but this recommendation now comes hedged about with qualifications. Three of these seem especially important to me.

1) "In low-income countries, the benefits of capital flows arise mainly from foreign direct investment (FDI). In many countries, FDI has helped to boost investment, employment, and growth. Low-income countries generally need to strengthen their institutions and markets in order to safely absorb most other types of capital flows, which carry substantial risks until such thresholds are met." A common recommendation is that countries should allow foreign direct investment, and then gradually open up to international investment in equity markets, and then gradually open up to international lending and borrowing.


2) The pace at which this gradual opening-up to international capital happens should depend on the prior development of a nation's economic conditions and political institutions.

3) When using capital constraints, focus more on limiting international inflows than on outflows. This recommendation is a reversal from the early days of the IMF, when constraints on capital outflows were common, but constraints on inflows were almost unheard of. But the recommendation makes sense. Limits on capital outflows are very hard to enforce in an interconnected modern world economy, and they are only needed once a financial crisis has already occurred. Limits on capital inflows, like encouraging foreign direct investment but discouraging dependence on short-term capital inflows from abroad, help to prevent a financial bubble from inflating in the first place.

This advice seems sensible to me, if perhaps difficult to implement. After all, would it have been politically or economically possible for Greece or Ireland or Spain to have restricted inflows of international financial capital back in 2007 or 2008? In response to such questions, it's worth repeating an honest confession from near the front of the March 2012 IMF report: "[T]he theoretical and practical understanding of capital flows remains incomplete. Capital flows are a financial phenomenon, and many of the unresolved analytical and policy questions related to the financial sector carry over to capital flows."


Thursday, May 17, 2012

China: Does 8% Growth Cause Less Satisfaction?

China's economy grew at extraordinary annual rates of 8% or more, on a per capita basis, in the two decades from 1990 to 2009. Using the old "rule of 72" that is sometimes taught to approximate the effect of growth rates--take 72, divide it by the annual growth rate, and the result tells you (roughly) how many years it takes for the original quantity to double--an 8% growth rate means China's per capita GDP doubles in 9 years, and quadruples in 18 years. In the two decades from 1990-2009, average per person GDP in China at least quadrupled. The number of Chinese living below the international poverty line of $1.25 in consumption per day fell by 662 million from the early 1980s up to 2008, according to World Bank estimates.
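Just to make the arithmetic concrete, here is a minimal Python sketch comparing the rule-of-72 approximation with the exact doubling time implied by compound growth. The 8% growth rate comes from the discussion above; everything else is only an illustration.

    import math

    def doubling_time_rule_of_72(rate_pct):
        # Rule-of-72 approximation: years to double at a given annual growth rate (%)
        return 72 / rate_pct

    def doubling_time_exact(rate_pct):
        # Exact doubling time from compound growth: solve (1 + r)^t = 2 for t
        return math.log(2) / math.log(1 + rate_pct / 100)

    rate = 8  # China's approximate annual per capita growth rate, 1990-2009
    print(doubling_time_rule_of_72(rate))   # 9.0 years
    print(doubling_time_exact(rate))        # about 9.01 years
    print((1 + rate / 100) ** 18)           # about 4.0: a quadrupling in 18 years

The rule of 72 is remarkably accurate at growth rates in this range, which is why it survives as a classroom shortcut.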

But survey researchers ask people in China (and all over the world): "All things considered, how satisfied are you with your life as a whole these days? Please use this card to help with your answer:
1 “dissatisfied” 2 3 4 5 6 7 8 9 10 “satisfied”."
These researchers find that people in China are not, on average, more satisfied in 2009 than in 1990. How can this be?

This finding is an example of the "Easterlin paradox." Back in 1974, Richard Easterlin wrote a paper called "Does Economic Growth Improve the Human Lot?" which appeared in a conference volume (Nations and Households in Economic Growth: Essays in Honor of Moses Abramovitz, edited by Paul A. David and Melvin W. Reder). The paper is available here. Easterlin found that in a given society, those with more income tended to report higher happiness or satisfaction than those with less income. However, he also found that the average level of happiness or satisfaction on a 10-point scale didn't seem to rise over time as an economy grew: for example, in the U.S. economy between 1946 and 1970. He argued: "The increase in output itself makes for an escalation in human aspirations, and thus negates the expected positive impact on welfare."

But can this effect hold true even when the standard of living is rising as dramatically as in China? Easterlin, still going strong at USC, looks at the data with co-authors Robson Morgan, Malgorzata Switek, and Fei Wang in "China's life satisfaction, 1990-2010," just published in the Proceedings of the National Academy of Sciences. Here is a (slightly messy) graph showing results from six different surveys of satisfaction or happiness in China: the World Values Survey, a couple of Gallup surveys, and surveys by Pew, Asiabarometer, and Horizon. The surveys use different scales: 1-10, 0-10, 1-4, 1-5, so the vertical axes of the graph are a mess. But remember, this is a time frame when per capita GDP more than quadrupled! It's hard to look at this data and see a huge upward movement.
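One standard trick for eyeballing surveys that use different scales, sketched here purely as an illustration (it is not necessarily how Easterlin and co-authors handle the data, and the average scores below are hypothetical), is to rescale each score onto a common 0-to-1 range:

    def rescale(score, lo, hi):
        # Map a score on a [lo, hi] survey scale onto the interval [0, 1]
        return (score - lo) / (hi - lo)

    # Hypothetical average scores from surveys that use different scales
    surveys = {
        "World Values Survey (1-10)": (7.3, 1, 10),
        "Gallup (0-10)":              (5.1, 0, 10),
        "Pew (1-4)":                  (2.9, 1, 4),
        "Asiabarometer (1-5)":        (3.6, 1, 5),
    }
    for name, (avg, lo, hi) in surveys.items():
        print(f"{name}: {rescale(avg, lo, hi):.2f}")

Even after that kind of normalization, of course, the surveys still differ in wording and sampling, so any comparison across them stays rough.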

Easterlin and co-authors summarize the patterns this way: "According to the surveys that we analyzed, life satisfaction in the Chinese population declined from 1990 to around 2000–2005 and then turned upward, forming a U-shaped pattern for the period as a whole (Fig. 1). Although a precise comparison over the full study period is not possible, there appears to be no increase and perhaps some overall decline in life satisfaction. A downward tilt along with the U-shape is evident in the WVS, the series with the longest time span."

Indeed, Easterlin and co-authors point out that the happiness trend may be biased upward, because over this time there was a large rise in the "floating population" (persons living in places other than where they are officially registered) in urban areas. This group tends to have lower life satisfaction. "Between 1990 and 2010, the floating population rose substantially, from perhaps 7% to 33% of the total urban population ... If the floating population is not as well covered in the life satisfaction surveys as their urban-born counterparts, then this negative impact is understated, and thus the full period trend is biased upward."

Why has satisfaction not flourished in China alongside such rapid GDP growth? Surely one reason is what some call the "aspirational treadmill": the more you have, the more you want. But the reason emphasized by Easterlin's group is that "the high 1990 level of life satisfaction in China was consistent with the low unemployment rate and extensive social safety net prevailing at that time. Urban workers were essentially guaranteed life-time positions and associated benefits, including subsidized food, housing, health care, child care, and pensions, as well as jobs for grown children ..." However, urban unemployment in China rose sharply from about 1990 into the early 2000s, though it has fallen somewhat since the mid-2000s. In addition, "Although incomes have increased for all income groups, China’s transition has been marked by a sharp increase in income inequality. This increasing income inequality is related to the growing urban–rural disparity in income, increased income differences in both urban and rural areas, and the significant increase of unemployment in urban areas associated with restructuring ..."

Intriguingly, the rise in income inequality in China is mirrored by greater inequality in reported life satisfaction. "In its transition, China has shifted from one of the most egalitarian countries in terms of distribution of life satisfaction to one of the least egalitarian. Life satisfaction has declined markedly in the lowest-income and least-educated segments of the population, while rising somewhat in the upper SES [socioeconomic status] stratum." For example, here's a graph that divides the population into thirds by income level. The figure shows the share of each group giving an answer from 7-10 on the World Values Survey satisfaction question. Notice that in 1990, all three income groups are clustered together. By 2007, they have separated, with the highest income group remaining at about the same level and the other groups declining in reported satisfaction--despite the fact that incomes for all groups are much higher.
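The underlying computation for a figure like this is straightforward. Here is a minimal sketch, with made-up individual-level records standing in for the actual World Values Survey data:

    import pandas as pd

    # Hypothetical survey records: annual income and a 1-10 satisfaction score
    df = pd.DataFrame({
        "income":       [500, 900, 1500, 2200, 3100, 4800, 7000, 9500, 14000],
        "satisfaction": [4,   6,   5,    7,    6,    8,    7,    9,    8],
    })

    # Split respondents into income terciles, then compute the share answering 7-10
    df["tercile"] = pd.qcut(df["income"], 3, labels=["low", "middle", "high"])
    share_high = df.groupby("tercile")["satisfaction"].apply(lambda s: (s >= 7).mean())
    print(share_high)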


Taken to an extreme, the Easterlin paradox and these results from China might seem to suggest that economic growth is a waste of time. After all, economic growth doesn't seem to be making people more satisfied! But Easterlin would not make this argument, and it doesn't quite fit the survey results.
When answering on a scale of 1-10 or 1-5, most people will make a choice thinking about the present. They don't answer by thinking:  "Wow, I'm sure glad that I wasn't born a Roman slave 2000 years ago, and compared to that, I'm a 10 in satisfaction." Nor do they think: "Wow, compared to people who live 100 years from now, I'm living a short and deprived life, so I'm a 1 in satisfaction."

When people in China answered a "satisfaction" survey in 1990, they were not all that far removed in time from a period of brutal repression, and so it's not shocking to me that many in the lower and middle part of the income distribution told surveyors (who, after all, might have a government connection) that they were really quite satisfied. It's not just the economy that has grown in China since 1990; it's also the willingness and ability of many ordinary people to express dissatisfaction or discontent. I suspect that not many people in China would view their 1990 standard of living as similar or preferable to their current standard of living.


People do seem to answer satisfaction questions with some perspective on the rest of the world. For example, in a Spring 2008 article in my own Journal of Economic Perspectives, Angus Deaton presents evidence that if you look across the countries of the world in 2003, the level of satisfaction seems to rise steadily each time per capita GDP doubles.
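Deaton's cross-country pattern amounts to saying that satisfaction is roughly linear in the logarithm of per capita GDP. Here is a minimal sketch of that idea, using hypothetical numbers rather than Deaton's actual data:

    import numpy as np

    # Hypothetical cross-country data: per capita GDP (USD) and average satisfaction (0-10)
    gdp = np.array([1000, 2000, 4000, 8000, 16000, 32000])
    satisfaction = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5])

    # Fit satisfaction = a + b * log2(GDP); a constant slope b means each
    # doubling of per capita GDP adds the same increment to satisfaction
    b, a = np.polyfit(np.log2(gdp), satisfaction, 1)
    print(f"each doubling of GDP adds about {b:.2f} points of satisfaction")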

It would be unwise to use survey data from different points in time, measured on scales that offer only the same limited range of choices, to argue that people do not receive greater satisfaction or happiness from economic growth. If I were rating a 1970 standard of living in 1970, I might give it a similar numerical satisfaction score to the one I would give a 2012 standard of living in 2012--but that doesn't mean I would be equally happy in 2012 with a 1970 standard of living!

However, the results of the satisfaction surveys do highlight that when people are asked about their satisfaction, they take many factors into account along with average income levels: health, education, personal and political freedom, economic security, risk of unemployment, inequality, and others. When China allowed a greater degree of economic and political freedom, it unleashed an extraordinary rate of economic growth, but it also created a public space for many other potential sources of dissatisfaction. A record of past economic growth, even when exceptionally rapid, doesn't trump present concerns in people's minds--nor should it.

Wednesday, May 16, 2012

McWages Around the World

It's hard to compare wages in different countries, because the details of the job differ. A typical job in a manufacturing facility, for example, is a rather different experience in China, Germany, Michigan, or Brazil. But for about a decade, Orley Ashenfelter has been looking at one set of jobs that are extremely similar across countries--jobs at McDonald's restaurants. He discussed this research and a broader agenda of "Comparing Real Wage Rates" across countries in his Presidential Address last January to the American Economic Association meetings in Chicago. The talk has now been published in the April 2012 issue of the American Economic Review, which will be available to many academics through their library subscriptions. But the talk is also freely available to the public here as Working Paper #570 from Princeton's Industrial Relations Section.

How do we know that food preparation jobs at McDonald's are similar? Here's Ashenfelter:  

"There is a reason that McDonald’s products are similar.  These restaurants operate with a standardized protocol for employee work. Food ingredients are delivered to the restaurants and stored in coolers and freezers. The ingredients and food preparation system are specifically designed to differ very little from place to place. Although the skills necessary to handle contracts with suppliers or to manage and select employees may differ among restaurants, the basic food preparation work in each restaurant is highly standardized. Operations are monitored using the 600-page Operations and Training Manual, which covers every aspect of food preparation and includes precise time tables as well as color photographs. ... As a result of the standardization of both the product and the workers’ tasks, international comparisons of wages of McDonald’s crew members are free of interpretation problems stemming from differences in skill content or compensating wage differentials."

Ashenfelter has built up McWages data from about 60 countries. Here is a table of comparisons. The first column shows the hourly wage of a crew member at McDonald's, expressed in U.S. dollars (using the then-current exchange rate). The second column is the wage relative to the U.S. wage level, where the U.S. wage is 1.00. The third column is the price of a Big Mac in that country, again converted to U.S. dollars. And the fourth column is the McWage divided by the price of a Big Mac--as a rough-and-ready way of measuring the buying power of the wage.
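The arithmetic behind the fourth column is simple enough to sketch in a few lines of Python; the numbers below are hypothetical placeholders, not values from Ashenfelter's actual table:

    # Hypothetical rows in the spirit of Ashenfelter's comparisons:
    # (country, hourly McWage in USD, Big Mac price in USD)
    rows = [
        ("United States",  7.25, 3.50),
        ("Western Europe", 9.00, 4.00),
        ("India",          0.70, 1.60),
    ]

    US_WAGE = 7.25
    for country, wage, price in rows:
        bmph = wage / price        # Big Macs per hour: a rough real-wage measure
        relative = wage / US_WAGE  # wage relative to the U.S. level (U.S. = 1.00)
        print(f"{country}: relative wage {relative:.2f}, BMPH {bmph:.2f}")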

Ashenfelter sums up this data, and I will put the last line in boldface type: "There are three obvious, dramatic conclusions that it is easy to draw from the comparison of wage rates in Table 3.  First, the developed countries, including the US, Canada, Japan, and Western Europe have quite similar wage rates, whether measured in dollars or in BMPH.   In these countries a worker earned between 2 and 3 Big Macs per hour of work, and with the exception of Western Europe with its highly regulated wage structure, earned around $7 an hour.  A second conclusion is that the vast majority of workers, including those in India, China, Latin America, and the Middle East earned about 10% as much as the workers in developed countries, although the BMPH comparison increases this ratio to about 15%, as would any purchasing-power-price adjustment.   Finally, workers in Russia, Eastern Europe, and South Africa face wage rates about 25 to 35% of those in the developed countries, although again the BMPH comparison increases this ratio somewhat.  In sum, the data in Table 3 provide transparent and credible evidence that workers doing the same tasks and producing the same output using identical technologies are paid vastly different wage rates."

In passing, it's interesting to note that McWage jobs pay so much more in western Europe than in the U.S., Canada and Japan. But let's pursue the highlighted theme: How can the same job with the same output and the same technology pay more in one country than in another? One part of the answer, of course, is that you can't hire someone in India or South Africa to make you a burger and fries for lunch. But at a deeper level, the higher McWages in high-income countries are not about the skill or human capital in those countries; instead, they reflect the fact that the entire economy is operating at a higher productivity level.
 

Here is an illustrative figure. The horizontal axis shows the "McWage ratio": that is, the U.S. McWage is equal to 1.00, and the McWages in all other countries are expressed in proportion. The vertical axis is "Hourly Output Ratio." This is measuring output per hour worked in the economy, again with the U.S. level set equal to 1.00, and the output per hour worked in all other countries expressed in proportion. The straight line at a 45-degree angle plots the points in which a country with, say, a McWage at 20% of the U.S. level also has output per hour worked at 20% of the U.S. level, a country with a McWage at 50% of the U.S. level also has output per hour worked at 50% of the U.S. level, and so on. 
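For readers curious how such a figure is put together, here is a minimal matplotlib sketch; the country points are hypothetical, chosen only to show how observations on and off the 45-degree line read:

    import matplotlib.pyplot as plt

    # Hypothetical (McWage ratio, output-per-hour ratio) pairs, U.S. = 1.00 on both axes
    countries = {
        "United States": (1.00, 1.00),
        "Country A":     (0.10, 0.12),  # low wage, low productivity
        "Country B":     (0.30, 0.28),
        "Country C":     (1.20, 0.85),  # above-U.S. McWage, below-U.S. productivity
    }

    fig, ax = plt.subplots()
    for name, (mcwage, output) in countries.items():
        ax.scatter(mcwage, output)
        ax.annotate(name, (mcwage, output))
    ax.plot([0, 1.3], [0, 1.3], linestyle="--")  # 45-degree line: wage ratio equals productivity ratio
    ax.set_xlabel("McWage ratio (U.S. = 1.00)")
    ax.set_ylabel("Hourly output ratio (U.S. = 1.00)")
    plt.show()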

The key lesson of the figure is that the differences in McWages across countries line up with the overall productivity differences across countries. The main exceptions, in the upper right-hand part of the diagram, are countries where the McWage is above U.S. levels but output-per-hour for the economy as a whole is below U.S. levels: New Zealand, Japan, Italy, Germany. These are countries with minimum wage laws that push up the McWage. 

Ashenfelter emphasizes in his remarks how real wages can be used to assess and compare the living standards of workers. I would add that these measures show that the most important factor determining wages for most of us is not our personal skills and human capital, or our effort and initiative, but whether we are using those skills and human capital in the context of a high-productivity or a low-productivity economy.