Back in January 1969, the story goes, U.S. Treasury Secretary Joseph Barr testified before the Joint Economic Committee of Congress that 155 Americans with income of over $200,000 had paid no income tax in 1967. Adjusted for inflation, $200,000 in 1967 income would be equal to about $1.4 million in 2013. Back in 1969, members of Congress received more letters from constituents about 155 non-taxpayers than they did about the Vietnam War.
The public outrage notwithstanding, it's not obvious to me that 155 high-income people paying no income taxes is a problem that needs a solution. After all, a few high-income people will have made very large charitable donations in a year, knocking their tax liability down to zero. A few taxophobes will invest all their funds in tax-free municipal bonds. A few may have high income this year, but be able to offset it for tax purposes with large losses from previous years. It only takes a few dozen people in each of these and similar categories to make a total of 155 high-income non-taxpayers.
The ever-useful Tax Policy Center has now published some estimates of what share of taxpayers will pay no income taxes in 2013 and future years. Just to be clear, these estimates are based on their microsimulation model of the tax code and taxpayers--the actual IRS data for 2013 taxpayers won't be available for a couple of years. But the estimates are nonetheless thought-provoking. Here's a summary table:
At the high end, the top 0.1% of the income distribution would kick in at about $1.5 million in annual income for 2013--not too different, adjusted for inflation, from the $200,000 level that created such controversy back in 1969. Of the 119,000 "tax units" in the top 0.1% of incomes, 0.2% paid no income tax--so about 200-250 people. Given the growth of population in the last four decades, it's a very similar number to those 155 non-taxpayers that caused such a stir back in 1969.
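For readers who like to check the arithmetic, here is a minimal back-of-the-envelope sketch; the calculation is my own, not the Tax Policy Center's, and the CPI levels are approximate.

```python
# Rough arithmetic check of the figures above; CPI levels are approximate
# annual averages, so treat the inflation adjustment as a ballpark number.
top_units = 119_000           # tax units in the top 0.1%
share_paying_no_tax = 0.002   # 0.2% reported as paying no income tax
print(round(top_units * share_paying_no_tax))    # about 238 tax units

cpi_1967, cpi_2013 = 33.4, 233.0                 # approximate CPI-U levels
print(round(200_000 * cpi_2013 / cpi_1967))      # roughly $1.4 million
```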
The big change, of course, is that after 1969 an Alternative Minimum Tax was enacted in an attempt to ensure that all those with high incomes would pay something in taxes. But with about 162 million tax units in the United States, and an income tax code that has now reached 4 million words, it's not a big shock to me that a few hundred high-income people would be able to find legitimate and audit-proof ways of knocking their tax liability down to zero.
Rather than getting distracted by the lack of tax payments by a few hundred outliers, I'd much rather focus on the tax payments of all 119,000 in the top 0.1%, or all 1,160,000 in the top 1%. As I've written before on this blog, I'm open to policies that would raise marginal tax rates on those with the highest incomes, especially if they are part of an overall deal to reduce the future path of U.S. budget deficits. But I would prefer a tax reform approach of seeking to reduce "tax expenditures," which is the generic name for all the legal deductions, exemptions and credits with which those with higher incomes can reduce their taxes (for earlier posts on this subject, see here, here, and here).
The other pattern that jumps out when looking at those who pay zero in income taxes is that the overwhelming majority of them have low incomes. The table shows that 87% of those in the lowest income quintile, 52% of those in the second income quintile, and 28% of those in the middle income quintile owed nothing in federal income taxes. This situation is nothing new, of course. The original federal income tax back in 1913 was explicitly aimed only at those with high incomes, and only about 7% of households paid income tax. Even after the income tax expanded during World War I, and then was tweaked through the 1920s and into the 1930s, only about 20-30% of households owed federal income tax in a given year.
The standard explanation for why the federal income tax covers only a portion of the population is that it is the nation's main tax for having those with higher incomes pay a greater share of income in taxes. With payroll taxes for Social Security and Medicare, as well as with state and local sales taxes, those with higher incomes don't pay a higher share of their income--in fact, they typically pay a lower share of income.
The standard argument for why a higher share of people should pay into the income tax is that democracy is healthier if more people have "skin in the game"--that is, if tax increases and tax cuts affect everyone, and aren't just a policy that an untaxed or lightly-taxed majority can impose on a small share of the population. I recognize the theoretical power of this argument, but in practical terms, it doesn't seem to have much force. If those with low incomes made minimal income tax payments so that they had "skin in the game," and then saw those minimal payments vary by even more minimal amounts as taxes rose and fell, it wouldn't much alter their incentives to impose higher taxes on others. Also, it doesn't seem like the US has been subject to populist fits of expropriating the income of the rich, so I don't worry overmuch about it. Sure, it would be neat and tidy if we could reach a broad social agreement on how the income tax burden should be distributed across income groups, and then with that agreement in hand, we could raise or lower all taxes on everyone together. But I'm not holding my breath for such an agreement on desirable tax burdens to be reached.
Thursday, August 29, 2013
Will a Computer at Home Help My Children in School?
How much do students benefit from having access to a computer at home? Obviously, one can't just compare the test performance of students who have a computer at home with those who don't, because families and households that do provide a computer at home to their students are likely to be different in many ways from families that do not do so. It's possible to make statistical adjustments for these differences, but such adjustments only account for "observable" factors like income, ethnicity, gender, family structure, employment status of parents, and the like. Differences across families that aren't captured in readily-available statistics will remain.
Thus, an alternative approach is a social experiment. Take a substantial number of families who don't have a home computer, and randomly give some of them a home computer. Then compare the results for those with and without a computer. Robert W. Fairlie and Jonathan Robinson report the results of the largest field experiment thus far conducted along these lines in "Experimental Evidence on the Effects of Home Computers on Academic Achievement among Schoolchildren," recently published in the American Economic Journal: Applied Economics (5:3, pp. 211–240). (This journal is not freely available on-line, although many readers will have access through library subscriptions or their membership in the American Economic Association.) Here's the conclusion from their abstract:
"Although computer ownership and use increased substantially, we find no effects on any educational outcomes, including grades, test scores, credits earned, attendance, and disciplinary actions. Our estimates are precise enough to rule out even modestly-sized positive or negative impacts. The estimated null effect is consistent with survey evidence showing no change in homework time or other “intermediate” inputs in education."And here's a bit more detail on their results. They note: "There are an estimated 15.5 million instructional computers in US public schools, representing one instructional computer for every three schoolchildren. Nearly every instructional classroom in these schools has a computer, averaging 189 computers per school ... [M]any children do not have access to a computer at home. Nearly 9 million children ages 10–17 in the United States (27 percent) do not have computers with Internet connections at home ..."
The sample for this study includes students in grades 6–10 in 15 different middle and high schools in 5 school districts in the Central Valley area of California, during the two school years from 2008 to 2010. The researchers surveyed students at the beginning of the school year about whether they had a computer at home. After going through parental consent forms and all the paperwork, they ended up with about 1,100 students, and they gave half of them a computer at the beginning of the year and half a computer at the end of the year. Everyone got a home computer--but the researchers could study the effect of having one a year earlier.
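To make the logic of such a randomized comparison concrete, here is a minimal sketch with made-up numbers (not the study's data): with random assignment, a simple difference in mean outcomes, plus a confidence interval, is enough to estimate the effect of getting the computer a year earlier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical standardized test scores, for illustration only.
# "treatment" = got the home computer at the start of the year;
# "control" = got it at the end of the year.
treatment = rng.normal(loc=0.0, scale=1.0, size=550)
control = rng.normal(loc=0.0, scale=1.0, size=550)

# Random assignment means the difference in means is an unbiased estimate
# of the effect of a year of home computer access.
diff = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / treatment.size
             + control.var(ddof=1) / control.size)
print(f"estimated effect: {diff:.3f}, 95% CI [{diff - 1.96*se:.3f}, {diff + 1.96*se:.3f}]")
```

A confidence interval that hugs zero is what lets the authors describe their null result as precise rather than merely noisy.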
Having a computer at home increased computer use. Students without a computer at home (the "control group") reported using a computer (at school, the library, or a friend's house) about 4.2 hours per week, while students who now had a computer at home (the "treatment group") used a computer 6.7 hours per week. Of that extra computer time, "Children spend an additional 0.8 hours on schoolwork, 0.8 hours per week on games, and 0.6 hours on social networking."
Of course, any individual study is never the final say. Perhaps having access to a home computer for several years, rather than just one year, would improve outcomes. Perhaps in the future, computer-linked pedagogy will improve in a way where having a computer at home makes a demonstrable difference to education outcomes. Perhaps there is some overall benefit from familiarity with computers that pays off in the long run, even if not captured in any of the outcomes measured here. It's important to remember that this study is not about use of computers in the classroom or in education overall, just about access to computers at home. My wife and I have three children ranging in age from grades 6 to 10--the same age group as represented in this study--and they have access to computers at home. The evidence suggests that while this may be more convenient for them in various ways, I shouldn't be expecting it to boost their reading and math scores.
(Full disclosure: The American Economic Journal: Applied Economics is published by the American Economic Association, which also publishes the Journal of Economic Perspectives, where I work as the managing editor.)
Wednesday, August 28, 2013
The Myth Behind the Origins of Summer Vacation
Why do students have summer vacation? One common answer is that it's a holdover from when America was more rural and needed children to help out on the farm, but even just a small amount of introspection suggests that answer is wrong. Even if you know very little about the practical side of farming, think for just a moment about what are probably the most time-sensitive and busiest periods for a farmer: spring planting and fall harvest. Not summer!
I'm not claiming to have made any great discovery here that summer vacation didn't start as a result of following some typical pattern of agricultural production. Mess around on the web a bit, and you'll find more accurate historical descriptions of how summer vacation got started (for example, here's one from a 2008 issue of TIME magazine and here's one from the Washington Post last spring). My discussion here draws heavily on a 2002 book by Kenneth M. Gold, a professor of education at the City University of New York, called School's In: The History of Summer Vacation in American Public Schools.
Gold points out that back in the early 19th century, US schools followed two main patterns. Rural schools typically had two terms: a winter term and a summer one, with spring and fall available for children to help with planting and harvesting. The school terms in rural schools were relatively short: 2-3 months each. In contrast, in urban areas early in the first half of the 19th century, it was fairly common for school districts to have 240 days or more of school per year, often in the form of four quarters spread over the year, each separated by a week of official vacation. However, whatever the length of the school term, actual school attendance was often not compulsory.
In the second half of the 19th century, school reformers who wanted to standardize the school year found themselves wanting to lengthen the rural school year and to shorten the urban school year, ultimately ending up by the early 20th century with the modern school year of about 180 days. Indeed, Gold cites an 1892 report by the U.S. Commissioner of Education William Torrey Harris which sharply criticized "the steady reduction that our schools have suffered" as urban schools had reduced their school days down toward 200 per year over the preceding decades.
With these changes, why did summer vacation arise as a standard pattern during the second half of the 19th century, when it had not been common in either rural or urban areas before that? At various points, Gold notes a number of contributing factors.
1) Summer sessions of schools in the first half of the 19th century were often viewed as inferior by educators at that time. It's not clear that the summer sessions were inferior: for example, attendance didn't seem to drop off much. But the summer sessions were more often taught by young women, rather than by male schoolteachers.
2) School reformers often argued that students needed substantial vacation for their health. Horace Mann wrote that overtaxing students would lead to "a most pernicious influence on character and habits ... not infrequently is health itself destroyed by overstimulating the mind." This concern over health seemed to have two parts. One was that schoolhouses were unhealthy in the summer: education reformers of the time reminded teachers to keep windows open, to sprinkle floors with water, and to build schools with an eye to good air ventilation. Mann wrote that "the small size, ill arrangement, and foul air, of our schoolhouses, present serious obstacles to the health and growth of the bodies and minds of our children." The other concern over health was that overstudy would lead to ill-health, both mental and physical. An article in the Pennsylvania School Journal expressed concern that children "were growing up puny, lank, pallid, emaciated, round-shouldered, thin-breasted, all because they were kept at study too long." Indeed, there was an entire medical literature of the time holding that "mental strain early in life" led to lifelong "impairment of mental and physical vigour."
Of course, these arguments were mainly deployed in urban areas as reasons for shortening the school year. In rural areas where the goal was to lengthen the school year, an opposite argument was deployed, that the brain was like a muscle that would develop with additional use.
3) Potential uses of a summer vacation for teachers and for students began to be discussed. For students, there were arguments over whether the brain was a muscle that should be exercised or relaxed during the summer. But there was also a widespread sense at the time, almost a social mythology, that summer should be a time for intense interaction with nature and outdoor play. For teachers, there was a sense that they also needed summer: as one writer put it, "Teachers need a summer vacation more than bad boys need a whipping." There was a sense in both urban and rural areas that something like a 180-day school year, with a summer vacation, would make teaching the sort of job that would be attractive to talented individuals and well-paid enough to be a full-time career. For teachers as well, there was a conflict as to whether they should spend summers working on lesson plans or relaxing, but the slow professionalization of teaching meant that more teachers were using the summer at least partially for work.
4) More broadly, Gold argues that the idea of a standard summer vacation, as widely practiced by the start of the 20th century, grew out of a tension in the ways that people thought about the annual pattern of time in the late 19th century. On one side, time was viewed as an annual cycle, not just for agricultural purposes, but as a series of community practices and celebrations linked to the seasons. On the other side, time was starting to be industrial, in a way that made seasons matter much less and the smooth coordination of production effort matter more. A standard school year with a summer vacation coordinated society along the lines of industrial time, while still offering a respect for seasonality.
Monday, August 26, 2013
C-Sections: Trends and Comparisons
It would be comforting to believe that medical decisions are always made based on a clean, clear evaluation of the health of the patient. But when it comes to births by Caesarean section, it's hard to believe that this is the case. For example, here's a comparison of C-section rates across countries. (The figure was produced for a briefing book distributed by the Stanford Institute for Economic Policy Research, using OECD data.)
I am unaware of any evidence about the health of mothers and children that would explain why the U.S. rate of C-sections is similar to the rates in Germany and Portugal, twice as high as Sweden's, but only two-thirds the rate in Mexico. China seems to be the world leader, with nearly half of all births occurring via C-section.
In the U.S., the rate of C-sections has risen dramatically over time, as Michelle J.K. Osterman and Joyce A. Martin lay out in "Changes in Cesarean Delivery Rates by Gestational Age: United States, 1996–2011," a National Center for Health Statistics Data Brief released in June.
In the US, C-sections were 21% of all births in 1996, but 33% of all births by 2009, although the rate has not increased since then. To be sure, the calculation of costs and benefits for doing a C-section will evolve over time, as the surgery gradually becomes safer. But this sharp increase doesn't seem to be driven by health calculations. As Osterman and Martin point out, "the American College of Obstetricians and Gynecologists developed clinical guidelines for reducing the occurrence of nonmedically-indicated cesarean delivery and labor induction prior to 39 weeks." And the much higher rates of C-sections in countries where surgery can be less safe than in the U.S., like China and Mexico, are clearly not driven by concerns over the health of mother and child.
Some C-sections are necessary and even life-saving. But to me, the high and rising rates of C-sections have the feeling of a boulder rolling downhill: as C-sections have become more popular, they have become more expected and acceptable for a broader range of reasons, which in turn has made them even more popular, and so on. It won't be easy to push that boulder back up its hill.
Friday, August 23, 2013
Looking Back at the Baby Boom
The trend toward lower fertility rates seems like an inexorable long-run trend, in the U.S. and elsewhere. The U.S. total fertility rate--that is, the average number of births per woman--is about 2 right now, and the long-run projections published by the Social Security Administration assume that it will hold at about 2.0 over the next 75 years or so.
But I recently saw a graph that raised my eyebrows. Here is the fertility rate for US women (albeit only for white women for the early part of the time period) going back to 1800, taken from a report done for the Social Security Administration. If you were making projections about fertility rates in about 1940, and you had access to this data, you might have predicted that the rate of decline would level off. But it would have taken a brash forecaster indeed to predict the fertility bump that we call the "baby boom."
Two thoughts here:
1) The baby boom was a remarkable demographic anomaly. It gave the U.S. economy a "demographic dividend" in the form of a higher-than-otherwise proportion of working-age adults for a time. But the aging of the boomers is already leading to financial tensions for government programs like Medicare and Social Security.
2) There's really no evidence at all that another baby boom might happen: but then, there was no evidence that the first one was likely to happen either. Someone who is wiser and smarter than I about social trends--perhaps a science fiction writer--might be able to offer some interesting speculation about what set of factors and events could lead to a new baby boom.
Thursday, August 22, 2013
Thoughts on the Diamond-Water Paradox
Water is necessary to sustain life. Diamonds are mere ornamentation. But getting enough water to sustain life typically has a low price, while a piece of diamond jewelry has a high price. Why does an economy put a lower value on what is necessary to sustain life than on a frivolity? This is the "diamond-water paradox," a hardy perennial of teaching intro economics since it was incorporated into Paul Samuelson's classic 1948 textbook. Here, I'll offer a quick review of the paradox as it originated in Adam Smith's classic The Wealth of Nations, and then some thoughts.
Adam Smith used the comparison of diamonds and water to make a distinction between what he called "value in use" and "value in exchange." The quotations here are taken from the version of the Wealth of Nations that is freely available on-line at the Library of Economics and Liberty website. Smith wrote:
"The word VALUE, it is to be observed, has two different meanings, and sometimes expresses the utility of some particular object, and sometimes the power of purchasing other goods which the possession of that object conveys. The one may be called 'value in use ;' the other, 'value in exchange.' The things which have the greatest value in use have frequently little or no value in exchange; and on the contrary, those which have the greatest value in exchange have frequently little or no value in use. Nothing is more useful than water: but it will purchase scarce any thing; scarce any thing can be had in exchange for it. A diamond, on the contrary, has scarce any value in use; but a very great quantity of other goods may frequently be had in exchange for it."
In the classroom, the example is often then used to make two conceptual points. One is that economics is about value-in-exchange, and that value-in-use is a fuzzy concept that Smith (and the class) can set aside. The other is to explain the importance of scarcity and marginal analysis. Diamonds are high-priced because the demand is high relative to the limited quantity available. Water is inexpensive because it is typically fairly abundant, but if one is dying of thirst, then it would have a much higher value-in-exchange--conceivably even greater than diamonds.
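As a purely illustrative way to see the marginal point, here is a stylized sketch with invented numbers (nothing in Smith or the textbooks specifies these values): price tracks the value of the last unit consumed, while value-in-use is the sum over all units.

```python
# Invented marginal valuations per successive unit: the first units of water
# are priceless, but water is abundant enough that consumption runs out to
# units with almost no marginal value; diamonds stop at the first, scarce unit.
water_marginal_value = [10_000.0, 1_000.0, 50.0, 5.0, 0.50, 0.05]
diamond_marginal_value = [5_000.0, 4_500.0, 4_000.0]

units_of_water = 6      # abundant: consumed out to the sixth unit
units_of_diamonds = 1   # scarce: only one unit available

# Price is pinned down at the margin...
print("water price ~", water_marginal_value[units_of_water - 1])        # 0.05
print("diamond price ~", diamond_marginal_value[units_of_diamonds - 1]) # 5000.0

# ...while total value-in-use tells the opposite story.
print("total value of water consumed:", sum(water_marginal_value[:units_of_water]))
```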
It now seems possible that Uranus and Neptune may have oceans of liquid carbon, with diamond icebergs floating in them. (For a readable overview, see here. For an underlying scientific paper, see J. H. Eggert et al. 2010. "Melting temperature of diamond at ultrahigh pressure." Nature Physics 6, pp. 40-43.) On such planets, the scarcity and price of water and diamonds might well be reversed!
But at a deeper level, Michael V. White pointed out a decade ago in an article in the History of Political Economy that Smith wasn't thinking of this as a "paradox" ("Doctoring Adam Smith: The Fable of the Diamonds and Water Paradox," 2002, 34:4, pp. 659-683). White traces the references to the value and price of diamonds and water through Jeremy Bentham, David Ricardo, William Stanley Jevons, Alfred Marshall, and other luminaries. In various ways, these writers all deconstructed Smith's paragraph to argue that value could not depend on use alone, that "use" would vary according to scarcity, that supply must be included, and so on.
Of course, Smith was aware of the importance of scarcity. A few chapters later in the Wealth of Nations, he revisited the subject of the price and value of diamonds in several other passages. He wrote:
"Their highest price, however, seems not to be necessarily determined by any thing but the actual scarcity or plenty of those metals themselves. It is not determined by that of any other commodity, in the same manner as the price of coals is by that of wood, beyond which no scarcity can ever raise it. Increase the scarcity of gold to a certain degree, and the smallest bit of it may become more precious than a diamond, and exchange for a greater quantity of other goods. ...The demand for the precious stones arises altogether from their beauty. They are of no use, but as ornaments; and the merit of their beauty is greatly enhanced by their scarcity, or by the difficulty and expence of getting them from the mine."
"Smith's `failure' to correctly analyze utility was attributed to his personality, which, Douglas asserted, reflected a national stereotype. The inability to follow the `hints' of his predecesssors (Locke, Law, and Harris) was due to Smith's `moralistic sense. ... in his thrifty Scottish manner with it sopposition to ostentation as almost sinful he concluded that diamonds 'have scarce any value in use.' The stingy Scot had thus managed to `divert' English (!) political economists `into a cul-de-sac from which they did not emerge ... for nearly a century. Smith on value and distribution was embarrassing: `it might seem to be the path of wisdom to pass these topics by in discreet silence.'"
Paul Samuelson was a student of Paul Douglas at the University of Chicago, and Samuelson inserted the diamond-water question into his 1948 textbook, where it has remained a standard example--and for all the ambiguity and complexity, I think a useful piece of pedagogy--since then.
Monday, August 19, 2013
American Women and Marriage Rates: A Long-Term View
Julissa Cruz looks at some long-run patterns of marriage rates for U.S. women in "Marriage: More Than a Century of Change," written as one of the Family Profile series published by the National Center for Family and Marriage Research at Bowling Green State University. Here are some striking patterns.
"The proportion of women married was highest in 1950 at approximately 65%. Today, less than half 47%) of women 15 and over are married— the lowest percentage since the turn of the century."
"The proportion of women married has declined among all racial/ ethnic groups since the 1950s. This
decline has been most dramatic for Hispanic and Black women, who experienced 33% and 60% declines in the proportion of women married, respectively."
Back in 1940, education level made relatively little difference to the likelihood that a woman was married, but women with less education were more likely to be married. Those patterns have now changed. Education levels now show a much larger correlation with whether a woman is married, and women with less education have become much less likely to be married.
I'll forbear from offering a dose of pop sociology about the changing nature of marriage and what it all means. But clearly, the changes over recent decades are substantial.
"The proportion of women married was highest in 1950 at approximately 65%. Today, less than half 47%) of women 15 and over are married— the lowest percentage since the turn of the century."
"The proportion of women married has declined among all racial/ ethnic groups since the 1950s. This
decline has been most dramatic for Hispanic and Black women, who experienced 33% and 60% declines in the proportion of women married, respectively."
Back in 1940, education level made relatively little difference to the likelihood that a woman was married, but women with less education were more likely to be married. Those patterns have now changed. Education levels now show a much larger correlation with whether a women is married, and women with less education have become much less likely to be married.
I'll forebear from offering a dose of pop sociology about the changing nature of marriage and what it all means. But clearly, the changes over recent decades are substantial.
Friday, August 16, 2013
John Maynard Keynes, Investment Innovator
When I think of John Maynard Keynes as an investor, a few images and thoughts run through my mind.
One image is an insouciant and self-satisfied global citizen, making global investments while sipping tea in bed. In his 1919 essay, The Economic Consequences of the Peace, Keynes painted a picture of what the world economy looked like before 1914. He wrote:
"The inhabitant of London could order by telephone, sipping his morning tea in bed, the various products of the whole earth, in such quantity as he might see fit, and reasonably expect their early delivery upon his doorstep; he could at the same moment and by the same means adventure his wealth in the natural resources and new enterprises of any quarter of the world, and share, without exertion or trouble, in their prospective fruits and advantages; or he could decide to couple the security of his fortunes with the good faith of the townspeople of any substantial municipality in any continent that fancy or information might recommend."
A second thought is how very difficult it is to make money in the stock market, because you are essentially trying to pick the stock today that other people think will want to buy at a higher price tomorrow. In the General Theory, Keynes offers a famous metaphor:
"[P]rofessional investment may be likened to those newspaper competitions in which the competitors have to pick out the six prettiest faces from a hundred photographs, the prize being awarded to the competitor whose choice most nearly corresponds to the average preferences of the competitors as a whole, not those faces which he himself finds prettiest, but those which he thinks likeliest to match the fancy of the other competitors, all of whom are looking at the problem from the same point of view."Finally, I also think of Keynes as a legendarily successful investor, and in particular how he grew the endowment of King's College. I don't think he did all of it sipping tea in bed, nor by thinking about investing as a beauty contest. In fact, I had no real idea how Keynes succeeded as an investor until reading the article by David Chambers and Elroy Dimson, "Retrospectives: John Maynard Keynes, Investment Innovator," in the most recent issue of the Journal of Economic Perspectives. (Like all articles appearing in JEP, it is freely available on-line courtesy of the American Economic Association. Full disclosure: I'm the managing editor of the journal.) Here are a few insights from their article (as always, with citations and notes omitted for readability):
Over the entire time period, Keynes was wildly successful as an investor.
"When John Maynard Keynes managed the endowment of King’s College at Cambridge University, the actively managed part of his portfolio beat the performance of the British common stock index by an average of 8 percentage points per year from 1921 to 1946."
Keynes was one of the first institutional investors to move heavily into stocks.
"Keynes was among the first institutional managers to allocate the majority of his portfolio to the then-alternative asset class of equities. In contrast, most British (and American) long-term institutional investors of a century ago regarded ordinary shares or common stocks as unacceptably risky and shunned this asset class in favor of fixed income and real estate. ... To our knowledge, no other Oxbridge colleges made a substantial allocation to equities until the second half of the twentieth century. In the United States, the largest university endowments allocated less than 10 percent to
common stock in the 1920s (on a historical cost-weighted basis), and this total only rose above 20 percent in the late 1930s."
However, Keynes performed poorly as an investor for the first eight years or so.
"The year-by-year results also show that Keynes underperformed [a comparable market index] in only six out of the 25 financial years and that four of those years occurred in the first eight years of his management of the Discretionary Portfolio. By August 1929, he was lagging the UK equity market by a cumulative 17.2 percent since inception. In addition, he failed to foresee the sharp fall in the market the following month."
Keynes dramatically shifted his investment philosophy around 1930, switching from a market-timing macroeconomic approach to becoming one of the first "value" investors.
"Keynes independently championed value investing in the United Kingdom at around the same time as Benjamin Graham was doing so in the United States. Both Keynes’ public statements and his economic theorizing strongly suggest that he did not believe that “prices of securities must be good indicators of value” (Fama 1976). Beginning as a top-down portfolio manager, seeking to time his allocation to stocks, bonds, and cash according to macroeconomic indicators, he evolved into a bottom-up investor from the early 1930s onwards, picking stocks trading at a discount to their “intrinsic value”—terminology he himself employed. Subsequently, his equity investments began to
outperform the market on a consistent basis."
Keynes' overall portfolio was far from an index fund: he concentrated on a relatively small number of small and medium-sized firms in certain industries.
"[T]he majority of his UK equity holdings were concentrated in just two sectors, metal mining—tin mining stocks in the 1920s and gold mining stocks in the following decade—and commercial and industrial firms ... Banking carried an index weight of 20 percent, and Keynes had little or no exposure in this sector. ... Keynes’ substantial weighting in commercial and industrial stocks began in the early and mid-1920s with a diversified portfolio of industrial names. However, soon thereafter he concentrated his exposure in this sector on the two leading British automobile stocks, Austin Motors and Leyland Motors. In the context of the time, these would have been viewed as “technology” stocks. In terms of firm size, Keynes had a decided tilt towards mid-cap and small-cap stocks."
Thursday, August 15, 2013
A Euro Narrative
When a timetable for putting the euro in place was announced by the Delors Commission back in 1988, I didn't believe it would ever happen. Germany was going to give up the deutsche mark? Really? Europe was already reaping gains from a more-free flow of goods, services, labor, and capital across borders, as well as from economic cooperation in other areas. Would a single currency increase those economic gains by much? Of course, the euro somehow managed to disregard my personal doubts and proceed to be implemented on schedule. But as the euro-zone has done a zombie stagger for the last few years, I've been feeling more vindicated in my earlier doubts. The most recent Summer 2013 issue of the Journal of Economic Perspectives contains four articles putting what has happened in the euro-zone in perspective. (Courtesy of the American Economic Association, all articles in the JEP are freely available on-line going back to the first issue in 1987. Full disclosure: I am predisposed to think that JEP articles are worth reading because I've been the managing editor since 1987.) Here's my own narrative of what has happened with the euro, drawing on those articles.
To set the stage for what needs explaining here, consider some patterns of unemployment. When the euro went into full effect early in 2002, unemployment dropped across the euro zone, but since early in 2008 it has been on the rise, now at about 12%. Notice that unemployment in the euro-zone has been higher, and has risen faster, than unemployment in the European Union as a whole.
Moreover, if one goes beyond the overall euro-average and looks at specific countries, the unemployment rates become even more disturbing. Unemployment rates in Greece and Spain, the two countries to the far right of the graph, are agonizingly high at above 26%.
So what explains the pattern of unemployment first falling when the euro came into effect in 2002, and then rising? In their JEP paper, Jesús Fernández-Villaverde, Luis Garicano, and Tano Santos offer a couple of striking graphs looking at borrowing patterns. Before the euro, countries in the periphery of Europe like Greece, Spain, Ireland, and Portugal had to pay higher interest rates than Germany. But with the advent of the euro, international capital lenders apparently decided that all countries borrowing in euros posed the same risk, and interest rates across these countries converged.
As the countries of the periphery experienced this sharp drop in interest rates, they went on something of a borrowing binge. In Greece and Portugal, the binge involved large government deficits. In Ireland and Spain, it involved large borrowing to finance an unsustainable housing bubble. But in all of these countries, their indebtedness to foreign investors rose dramatically. Meanwhile, much of this inflow of foreign investment was coming from Germany, where foreign assets were piling up.
The rise in borrowing in the periphery countries was unsustainable, and the rubber hit the road when the recession hit in 2008. These economies had been living high on inflows of international capital. Average wages had risen substantially for workers. Governments had made lots of promises about spending for pensions and other areas. One hope of the euro had been that it would force countries on the periphery of Europe to run more sensible macroeconomic policies: after all, the plan was that they would hand over their central banking to the professionals at the European Central Bank, and various euro-wide rules would limit government borrowing. But instead, countries on the periphery of Europe were deluged with easy money in the form of unsustainable inflows of foreign capital. Now, those explicit and implicit promises of prosperity have fallen apart.
The current hope is that it may be possible to cobble together some set of arrangements that can sustain the euro. For example, there is talk of a unified banking system across Europe, with common deposit insurance, bank regulation, and rules for closing down insolvent banks. There is talk of swapping some of the debt of countries for debt that would be repaid by the euro area as a whole, to avoid the "sovereign-debt doom loop" that arises when banks in a country mainly own the government debt of that country. In that situation, problems of bank bailouts turn into large increases in government debt (as in Ireland) and problems of government debt turn into sick banks--which in turn makes the economy worse. In their JEP article, Stephanie Schmitt-Grohé and Martin Uribe discuss how price inflation of perhaps 20%, spread over 4-5 years, could serve to reduce real wages in the countries with high unemployment and thus make them economically more competitive. There are even whispers of a hint of a shadow of a possibility of steps like a shared unemployment insurance fund across Europe.
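A rough illustration of that internal-devaluation arithmetic (my own numbers, not Schmitt-Grohé and Uribe's): if nominal wages are held flat while the price level rises by about 20 percent, real wages fall by roughly one-sixth.

```python
# Illustrative arithmetic only: nominal wages frozen while prices rise.
cumulative_inflation = 0.20   # roughly 20% spread over 4-5 years
nominal_wage_growth = 0.00    # wages held flat in euro terms

real_wage_change = (1 + nominal_wage_growth) / (1 + cumulative_inflation) - 1
print(f"real wage change: {real_wage_change:.1%}")   # about -16.7%
```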
But while I'm sure it's possible to cobble together a series of bailouts and steps like these to keep the euro going a while longer, the economic case that the euro may not be able to function very well remains. The economic case against the euro is based on the "optimal currency area" literature associated with the work of Robert Mundell back in the 1960s. (Mundell won the Nobel prize in 1999, and the Nobel prize website has a nice overview of his main contributions here.)
To understand the guts of the argument, consider a situation where one region of the U.S. economy performs well with rapid productivity and output growth, while another region does not. What happens? Well, substantial numbers of people migrate from the area with fewer jobs to the area with more jobs. Prices of nontraded goods like land and housing adjust, so that they are cheaper in the area that is performing less well and more expensive in the area that is performing better. Eventually, firms will see potential profits in these lower-cost areas and expand there. The U.S. tax code and spending programs reallocate some resources, too, because a progressive tax code will collect a greater share of the rising income in the area that is relatively better off, while a range of safety net programs and other spending will tend to favor the area that is relatively worse off. In short, when the modern U.S. economy has a situation where growth differs across regions, those differences are ameliorated to some extent by other economic shifts and policy choices. With these preconditions in place, a single currency works fairly well across the U.S. economy.
But what would happen if the two areas of the U.S. were not connected in these ways? What if one area performed well, another performed poorly, but there was very little movement of people, relatively little adjustment of wages and prices, no central government cushioning the differences in regional performance, and no eventual movement of business to the lower-cost region? In this case, the low-performance region could remain a depressed economy with high unemployment for a long, long time. In this situation, without the other offsetting economic changes, it would be useful for the two areas to have different currencies. Then the currency of the low-performance area could depreciate, which would make its workers and goods more attractive, and help the region climb out of its slump.
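A stylized numerical example (my own, not from the JEP papers) may help: suppose an hour of labor in the depressed region costs 10 units of its currency, which initially trades one-for-one with the currency of its trading partners.

    # Stylized comparison: depreciation versus direct wage cuts.
    local_wage = 10.0           # local-currency cost of an hour of labor
    exchange_rate = 1.0         # partner currency per unit of local currency

    cost_abroad_before = local_wage * exchange_rate                       # 10.0

    # With its own currency, a 20% depreciation makes local labor cheaper abroad
    # overnight, with no change in nominal wages at home.
    cost_abroad_after_depreciation = local_wage * (exchange_rate * 0.8)   # 8.0

    # Inside a currency union, the same gain in competitiveness requires an
    # outright 20% cut in nominal wages (or years of wage stagnation).
    cost_abroad_after_wage_cut = (local_wage * 0.8) * exchange_rate       # 8.0

    print(cost_abroad_before, cost_abroad_after_depreciation, cost_abroad_after_wage_cut)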
When you look at Germany and Greece, you don't see tens of thousands of Greek workers heading to Germany for jobs. You don't see German firms setting up shop in Greece to take advantage of lower-priced land and labor. The European Union government has quite a small budget relative to national economies, and does very little to offset cross-country economic differences. Back before the euro, when areas of Europe differed in their economic performance, adjustments in the exchange rate--which in effect change the prices of all labor and goods from that economy on world markets--could ease the adjustment process. But for countries in the euro-zone, adjusting the exchange rate is no longer possible, and the other possible economic adjustments are not very strong.
In this situation, the economic outcome is sometimes called an "internal devaluation." When countries like Greece, Spain, Portugal, and Ireland on the periphery of Europe can't cut their exchange rate and can't adjust in other ways, all that's left is the prospect of a long, miserable period where unemployment stays high and wages remain stagnant. In reviewing these kinds of economic dynamics, Kevin H. O'Rourke and Alan M. Taylor write in their JEP article:
"So where the eurozone needs to go in the long run, we argue, is towards a genuine banking union; a eurozone-wide safe bond to break the sovereign-bank doom loop; a central bank that is more flfl exible and willing to act as a true lender of last resort against such bonds and other assets as necessary; and a fiscal union at least sufficient to support the above. But the short-run problems facing countries in the periphery of Europe are now so great that politicians may never get a chance to solve these long-run problems because the eurozone may well have collapsed in theThe ultimate question here, perhaps, involves the desired destination of the euro-zone. In his article in the JEP symposium, Enrico Spolaore looks at the arguments of the "intergovernmentalists," who see European integration as a process in which national governments look to cooperate for mutual benefit, while remaining ultimately in charge, and the "functionalists," who see European integration as a long march toward a United States of Europe. Many Europeans seem to ambiguous on this choice, sometimes leaning one way, sometimes the other. As Spolaore writes: "This same ambiguity is present in the conflicting views about the euro among its supporters: is it a currency without a state yet, or is it a currency without a state ever?"
meantime."