The Conversable Economist--that's me--is taking the rest of the year off. For your delectation, here are 16 of the most-viewed posts that appeared in 2012, at least one from each month, listed here in reverse chronological order. Of course, I encourage you to spend your holidays surfing the archives as well.
"Paper Towels vs. Air Dryers." (December 10, 2012)
Somewhat to my surprise, this post was by far the most popular of 2012. My idea was to provide an example of the structured analysis of a tradeoff that might be especially useful to classroom teachers and of mild interest to others. However, the post also clearly touched a broader audience and generated a wave of heartfelt reactions from people who just plain love their paper towels.
"China's Economic Growth: A Different Storyline." (November 19, 2012)
The standard story of China's economic growth is that low wages, combined with an undervalued exchange rate, created huge trade surpluses that drove economic growth. This post pokes some holes in that story. China's very rapid economic growth in the 1980s and 1990s didn't involve trade surpluses, which only started expanding in the 2000s, just when China's wage growth was taking off. And China's currency was flat when the trade surpluses took off, and has now been strengthening for six years. The post proposes a different storyline for China's growth, rooted in how China's exports took off after China joined the World Trade Organization in 2001, while China's underdeveloped financial system had no way to turn all of these earnings by firms into national consumption.
"Marginal Tax Rates on the Poor and Lower Middle Class" (November 16, 2012)
Consider the situation of a low-income person who is eligible for various public support programs. However, each time that person earns an additional $1 in income, the amount of government support is reduced by, say, 30 or 40 cents. The economic incentives here are the same as those of a high marginal income tax rate. From this perspective, the marginal tax rates faced by the poor and the lower middle class are often just about as high as the marginal tax rates for those with high incomes.
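To see how the pieces stack up, here is a minimal Python sketch; the tax and phase-out rates below are hypothetical placeholders of mine, not figures from the post or from any actual program:

# Hypothetical illustration: the rates below are placeholders,
# not any actual U.S. program's benefit formula.
def effective_marginal_rate(statutory_rate, benefit_phaseout_rate):
    # Combined rate on one extra dollar of earnings: taxes owed on
    # the dollar plus benefits lost because of the dollar.
    return statutory_rate + benefit_phaseout_rate

# A low-income worker: 10% income tax plus 7.65% payroll tax, with
# benefits phasing out at 35 cents per extra dollar earned.
print(effective_marginal_rate(0.10 + 0.0765, 0.35))  # about 0.53

# A high earner in the 2012 top bracket, with no phase-outs.
print(effective_marginal_rate(0.35, 0.0))            # 0.35

On these made-up numbers, the low-income worker keeps barely half of an extra dollar earned, a higher effective marginal rate than the top statutory bracket of the time.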
"Hydraulic Models of the Economy." (November 12, 2012)
Two famous economists of the past built hydraulic models of the economy: that is, economic models where flows of spending and saving, as well as price levels, were revealed by liquid flowing through a system of tubes and containers. Bill Phillips--the originator of the Phillips curve--built his model back in the late 1940s. Irving Fisher, the originator of much of modern monetary economics, built his model as part of his dissertation back in 1891. This post tells the story of the models--with pictures.
"Driverless Cars." (October 31, 2012)
Driverless cars are coming: Google has already been testing prototypes on public roads. How might this invention change our lives? Fewer accidents. More productive or relaxing time spent in transit. More cars on the road, so less need for infrastructure. Greater energy efficiency. Remote parking--just tell your car to come and get you when you are ready. The possibility of shared cars, coming when you call. Greater mobility for those too young or too old to drive safely. Drive overnight, sleeping in your car, and arrive in the morning. The possibilities just keep coming.
"Are CEOs Overpaid?" (September 14, 2012)
It may seem that the answer should obviously be "yes," but a number of facts suggest a more nuanced answer. CEO pay relative to household income did spike back in the dot-com boom in the late 1990s, but since then, it is relatively
lower. CEO pay relative to the top 0.1%of the income distribution is now back to the levels common in the the 1950s. The pay of those at the top of other highly-paid occupations has grown
dramatically as well, like lawyers, athletes, and hedge fund managers. CEOs are fired sooner than they used to be, on average, especially when the stock price doesn't perform well.
"Are Groups More Rational than Individuals?" (August 30, 2012)
A body of evidence from laboratory economics experiments suggests: 1) Groups are often more rational and self-interested than individuals; and 2) This behavior doesn't always benefit the participants, because groups can be worse than individuals at setting aside self-interest when cooperation is more appropriate. The greater rationality of groups arises in part because several people working on a problem are more likely to discern the best solution than one person working alone. But in some situations cooperation can benefit all parties, and the same evidence suggests that individuals are often better than groups at putting aside narrow self-interest in favor of cooperative outcomes. One ultimate goal of this literature is to figure out when it is more useful for organizations to operate through groups, and when it is more useful to delegate decisions to individuals.
"What is a Beveridge Curve and What is it Telling Us?" (August 20, 2012)
A Beveridge curve is a graphical relationship between job openings and the unemployment rate. The Beveridge curve seems to have shifted out in the last few years, meaning that for a given number of job openings, the unemployment rate is higher than it used to be. Possible explanations include a mismatch between the skills of unemployed workers and the available jobs; extended unemployment insurance benefits that have dulled the incentive to take available jobs; and heightened uncertainty over the future course of the economy and economic policy. Over the medium term, these factors should fade, and the unemployment rate should then fall.
"The Improving U.S. Labor Market." (July 17, 2012)
In July, the unemployment rate seemed stuck at about 8%. However, certain more detailed measures of labor markets were showing signs of life. For example, the ratio of unemployed people per job opening had spiked above 6 at the worst of the recession, but by May 2012, the ratio had fallen to about 3.5. Hires had increased. Even the trend toward more people quitting their jobs in mid-2012 was probably good news, because people are more likely to quit when they perceive that other labor market options are available.
"Wealth by Distribution, Region, and Age." (June 13, 2012)
Once every three years the Federal Reserve carries out the Survey of Consumer Finances, which is the canonical source for data on household wealth. Results from the 2010 survey were just being released. One headline finding is that median household wealth fell from $126,000 in 2007 to $77,000 in 2010.
"McWages Around the World." (May 16, 2012)
The study underlying this May 16 post looked at one set of jobs that are largely identical in countries around the world: food preparation jobs at McDonald's. It provides strong evidence that workers with the same skills are being rewarded very differently in different countries. I wrote: "[T]hese measures show that the most important factor determining wages for most of us is not our personal skills and human capital, or our effort and initiative, but whether we are using those skills and human capital in the context of a high-productivity or a low-productivity economy."
"Why Does the U.S. Spend More on Health Care than Other Countries?" (May 14, 2012)
At the end of this post, I wrote: "The question of why the U.S. spends more than 50% more per person on health care than the next highest countries (Switzerland and the Netherlands), and more than double per person what many other countries spend, may never have a simple answer. Still, the main ingredients of an answer are becoming more clear. The U.S. spends vastly more on hospitalization and acute care, with a substantial share of that going to high-tech procedures like surgery and imaging. The U.S. does a poor job of managing chronic conditions, which then lead to episodes of costly hospitalization. The U.S. also seems to spend vastly more on administration and paperwork, with much of that related to credentialing, documenting, and billing--which is again a particularly important issue in hospitals. Any honest effort to come to grips with high and rising U.S. health care costs will have to tackle these factors head-on." I suspect that this post must have been assigned as reading in some classes, because the pageviews kept climbing steadily through the fall semester.
"The Price of Nails." (April 5, 2012)
Nails may seem like an everyday product, but this analysis shows how their price has fallen dramatically over time, by a factor of about 15 from the mid-1700s to the mid-1900s. Back around 1800, nails alone could represent 10% of the cost of a house, and household purchases of nails were of the same magnitude, relative to GDP, as current household purchases of computers or of airfares. Even in a seemingly simple product, technological innovation has been quite dramatic: hand-forged nails, cut nails, wire nails, and more recently the emergence of the nail gun.
"Top Marginal Tax Rates: 1958 and 2009." (March 16, 2012)
Top marginal income tax rates used to be much higher back in the 1950s and 1960s, as high as 91%. This post looks at how top tax rates, and the money collected through those rates, changed over time. The tip-top rates applied to only a small group, and so the share of income taxes paid by those in the top tax brackets is actually higher now than it was back in the 1960s. The marginal tax rates paid by those in the middle class were also often higher in the 1960s.
"Six Adults and One Child." (February 15, 2012)
The title of this post refers to a pattern observed in China after several generations of the one-child policy: that is, a single child walking around a park, closely followed by two parents and four grandparents. A fertility implosion is coming around the world, and family reunions of the future are likely to be made up of four and five generations of relatives, who will greatly outnumber the children on hand.
"Giffen Goods in Real Life." (January 4, 2012)
Every economics student at some point must confront the theory behind a Giffen good, the case in which a higher price for a good leads people to purchase more of that good. I have usually taught the example as a theoretical curiosity, but some plausible evidence has emerged that in certain very low-income parts of China, rice is a Giffen good. In these areas, rice is a major part of the diet of poor people. When the price of rice rises, the effective buying power of their income is reduced, which then pushes them to give up other items and consume even more rice.
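Here is a stylized numerical sketch of the mechanism (my own illustration, with made-up numbers, not taken from the China evidence): a consumer on a fixed budget must meet a calorie floor, where rice is the cheap source of calories and meat is preferred but expensive:

# Stylized Giffen-good example with hypothetical numbers: a consumer
# on a fixed budget must hit a calorie floor, relying on cheap rice
# for most calories while preferring expensive meat.
def consumption(price_rice, price_meat, cal_rice, cal_meat, budget, calories):
    # Solve the two binding constraints:
    #   price_rice*r + price_meat*m = budget
    #   cal_rice*r   + cal_meat*m   = calories
    det = price_rice * cal_meat - price_meat * cal_rice
    rice = (budget * cal_meat - price_meat * calories) / det
    meat = (price_rice * calories - cal_rice * budget) / det
    return rice, meat

before = consumption(price_rice=1.00, price_meat=4.0, cal_rice=2, cal_meat=1,
                     budget=10, calories=10)
after = consumption(price_rice=1.25, price_meat=4.0, cal_rice=2, cal_meat=1,
                    budget=10, calories=10)
print(f"rice bought before price rise: {before[0]:.2f}")  # about 4.29
print(f"rice bought after price rise:  {after[0]:.2f}")   # about 4.44 -- more rice

Meeting the calorie floor after the price increase forces spending away from meat and toward still more rice: the income effect of the price change overwhelms the substitution effect, which is the defining mark of a Giffen good.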
Monday, December 24, 2012
Real Tree or Artificial Tree?
My family always had real Christmas trees when I was growing up. I've always had real trees as an adult. Living in my own little bubble, it thus came as a shock to me to learn that, of the households that have Christmas trees, over 80% use an artificial tree, according to Nielsen survey results commissioned by the American Christmas Tree Association (which largely represents sellers of artificial trees). But in a holiday season where the focus is often on whether we are naughty or nice, what choice of tree has greater environmental impact?
There seem to be two main studies often quoted on this subject: "Comparative Life Cycle Assessment (LCA) of Artificial vs. Natural Christmas Tree," published by a Montreal-based consulting firm called ellipsos in February 2009, and "Comparative Life Cycle Assessment of an Artificial Christmas Tree and a Natural Christmas Tree," published in November 2010 by a Boston consulting firm called PE Americas on behalf of the aforementioned American Christmas Tree Association. Both studies assume the artificial tree is manufactured in China and transported to North America. (If readers know of other recent published studies, please send me a link!)
Here are some of the main messages I take away from these studies:
1) One artificial tree has greater environmental impact than one natural tree. However, an artificial tree can also be re-used over a number of years. Thus, there is a crossover point: if the artificial tree is used for long enough, its cumulative environmental effect is less than that of an annual series of natural trees. For example, the ellipsos study finds that an artificial tree would need to be used for 20 years before its greenhouse gas effects would be less than those of an annual series of natural trees. The PE Americas study offers a wide range of scenarios; here is its summary "for the base case when individual car transport distance for tree purchase is 2.5 miles each way. Because the natural tree provides an environmental benefit in terms of Global Warming Potential when landfilled, and Eutrophication Potential when composted or incinerated, there is no number of years one can keep an artificial tree in order to match the natural tree impacts in these cases. ... For all other scenarios, the artificial tree has less impact provided it is kept and reused for a minimum between 2 and 9 years, depending upon the environmental indicator chosen."
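The crossover arithmetic is just division: the artificial tree's one-time footprint over the natural tree's annual footprint. A minimal sketch, with placeholder per-tree numbers chosen only so that the break-even matches the roughly 20-year greenhouse-gas figure from the ellipsos study (the study's own figures differ in detail):

# Crossover logic sketch. The per-tree figures are hypothetical
# placeholders, chosen so the break-even lands near the ~20-year
# greenhouse-gas figure the ellipsos study reports.
import math

artificial_once_kg_co2e = 48.0   # one-time footprint of one artificial tree
natural_per_year_kg_co2e = 2.5   # footprint of one natural tree, each year

# Years of reuse needed before the artificial tree comes out ahead:
break_even_years = math.ceil(artificial_once_kg_co2e / natural_per_year_kg_co2e)
print(break_even_years)  # 20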
2) The full analysis needs to look at effects across the full life cycle of the tree, whether natural or artificial. This seems to involve the following steps.
- Under what conditions is the tree manufactured or cultivated, with what use of energy, fertilizer, and logging methods?
- By what combination of transportation mechanisms is the finished tree moved to the home? A substantial share of artificial trees are manufactured in China and then shipped to North America.
- What are the different issues in use of the tree, including use of water and emissions of fumes?
- What is the end-of-life for the tree? For example, the carbon in a natural tree will be stored for some decades if the tree goes into a landfill, but not if it is composted or incinerated.
3) The ellipsos study sums up the overall comparison this way: "When aggregating the data in damage categories, the results show that the impacts for human health are approximately equivalent for both trees, that the impact for ecosystem quality are much better for the artificial tree, that the impacts for climate change are much better for the natural tree, and that the impacts for resources are better for the natural tree ..."
4) In the context of many other holiday and everyday activities, the environmental effects of the tree are small. For example, the studies offer comparisons of the environmental effects of the tree with the electricity used to light the tree, the driving by a household to pick up the tree, and even the environmental effect of the tree stand.
Consider, for example, the Primary Energy Demand (PED) of the tree compared with the energy demand for lighting it. For an artificial tree, the PE Americas study reports: "The electricity consumption during use of 400 incandescent Christmas tree lights during one Christmas season is 55% of the overall Primary Energy Demand impact of the unlit artificial tree studied, assuming the worst‐case scenario that the artificial tree is used only one year. For artificial trees kept 5 and 10 years respectively, the PED for using incandescent lights is 2.8 times and 5.5 times that of the artificial tree life cycle." For a natural tree: "The life cycle Primary Energy Demand impact of the natural tree is 1.5 ‐ 3.5 times less (based on the End‐of‐Life scenario) than the use of 400 incandescent Christmas tree lights during one Christmas season."
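The quoted multiples appear to follow from simple scaling: if one season of lights equals 55% of the tree's one-time energy footprint, then n seasons of lights used with one reused tree amount to 0.55 × n of that footprint. A quick check under that reading (my inference, not a calculation from the report):

# Consistency check, assuming one season of 400 incandescent lights
# uses 55% of the artificial tree's life-cycle Primary Energy Demand.
lights_share_per_season = 0.55

for years in (5, 10):
    ratio = lights_share_per_season * years  # lights PED over n seasons vs. one tree
    print(years, f"{ratio:.2f}")  # 5 -> 2.75 (reported as 2.8), 10 -> 5.50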
In comparing the environmental effects of driving with those of the tree, ellipsos writes: "Due to the uncertainties of CO2 sequestration and distance between the point of purchase of the trees and the customer’s house, the environmental impacts of the natural tree can become worse. For instance, customers who travel over 16 km from their house to the store (instead of 5 km) to buy a natural tree would be better off with an artificial tree. ... [C]arpooling or biking to work only one to three weeks per year would offset the carbon emissions from both types of Christmas trees."
The PE Americas report strikes a similar theme: "Initially, global warming potential (GWP) for the landfilled natural tree is negative; in other words, the life cycle of a landfilled natural tree is a GWP sink. Therefore, the more natural trees purchased, the greater the environmental global warming benefit (the more negative GWP becomes). However, with increased transport to pick up the natural tree, the overall landfilled natural tree life cycle becomes less negative. When car transport becomes greater than 5 miles (one‐way), the overall life cycle of the natural tree is no longer negative, and there is a positive GWP contribution."
Even the tree stand for a natural tree has an environmental cost that can be considered alongside the cost of the tree itself. PE Americas: "The tree stand is a significant contributor to the overall impact of the natural tree life cycle with impacts ranging from 3% to 41% depending on the impact category and End‐of‐Life disposal option."
I would add that the environmental effect of the ornaments on the tree may be as large as or greater than the effect of the tree itself. Data from the U.S. Census Bureau shows that America imported $1 billion in Christmas tree ornaments from China (the leading supplier) from January to September 2012, but only $140 million worth of artificial Christmas trees. Thus, spending on ornaments is something like seven times as high as spending on trees. The choice of what kind of lights to put on the tree, or whether to drape the house and front yard with lights, is a more momentous environmental decision than the tree itself.
Of course, these kinds of comparisons don't even try to compare the environmental cost of the tree with the cost of the presents under the tree, or the long-distance travel to attend a family gathering. Thus, the PE Americas study concludes: "Consumers who wish to celebrate the holidays with a Christmas tree should do so knowing that the overall environmental impacts of both natural and artificial trees are extremely small when compared to other daily activities such as driving a car. Neither natural nor artificial Christmas tree purchases constitute a significant environmental impact within most American lifestyles." Similarly, ellipsos writes: "Although the dilemma between the natural and artificial Christmas trees will continue to surface every year before Christmas, it is now clear from this LCA study that, regardless of the chosen type of tree, the impacts on the environment are negligible compared to other activities, such as car use."
Certainly, celebrations at holidays and big events can sometimes be exorbitant and over the top. But the use of a Christmas tree, and the choice between a natural tree or an artificial tree, is a small-scale luxury. If the environmental issue is bothering you, even knowing these facts, make a resolution to use your artificial tree for a few more years, rather than replacing it, or to save some energy in January by driving less or being more vigilant about turning off unneeded lights. Gathering around the tree should be one less reason for moralizing around the holidays, not one more. So celebrate with good cheer and generous moderation.
Friday, December 21, 2012
The Sandy Hook Mass Killing: A Meditation on Living in the Global Village
I have three children, ages 14, 13, and 10, and so of course my wife and I, like so many other families, have been talking with the children about the mass killings at Sandy Hook Elementary School in Newtown, Connecticut. The conversations have made me think again about Marshall McLuhan's idea of the "global village," and the challenges that it poses in the 21st century for cognitively limited human beings.
When McLuhan wrote about the "global village" in the early 1960s, he was pointing out that in the pre-electronic age, people's main experience of the world involved those who lived nearby. Of course, other news filtered in by way of media and gossip. But the arrival of electronic technology created a common set of experiences and perceptions. The telegraph provided much faster transmission of news events. Radio broadcasts of sporting events, music, entertainment shows, presidential speeches, and news meant that many people across the country were sharing the common experience of the broadcast as it happened. Movies and television then added a visual component, so that people from all over the country, and in some cases the world, began to share a common set of mental images of what events and people were important and what those events and people looked like--all based on highly edited clips of film.
Of course, we have gone far beyond McLuhan's global village of the 1960s. In the internet age, anyone can post digital images and sound to the world. When a 24/7 media environment combines with social media, we now live in the global neighborhood, or perhaps even in a global extended family.
Nothing in the evolutionary history of humans particularly prepares us to process the information from living in this information environment. For example, did you know that the deadliest school massacre in U.S. history was a bomb attack on a school in Michigan back in 1927? But at that time, there was no national outcry, no presidential proclamations, no screaming news headlines all over the country. In 1927, mass killings at a school in Michigan seemed so far away for most of America; in 2012, the deaths in Connecticut feel so close for most of us.
This shift in the content and immediacy of the information we receive, together with the experience of receiving it simultaneously across the country, creates a severe challenge for how to think about it.
Daniel Kahneman, who shared the Nobel prize in economics back in 2002, writes about how humans think in his recent book, Thinking, Fast and Slow. I haven't yet finished reading the book, and for a summary I'll turn here to a review by Andrei Shleifer published in the December 2012 issue of the Journal of Economic Literature. Shleifer writes:
"Kahneman’s book is organized around the metaphor of System 1 and System 2 .... As the title of the book suggests, System 1 corresponds to thinking fast, and System 2 to thinking slow. Kahneman describes System 1 in many evocative ways: it is intuitive, automatic, unconscious, and effortless; it answers questions quickly through associations and resemblances; it is nonstatistical, gullible, and heuristic. System 2 in contrast is what economists think of as thinking: it is conscious, slow, controlled, deliberate, effortful, statistical, suspicious, and lazy (costly to use).... For Kahneman, System 1 describes “normal” decision making. System 2, like the U.S. Supreme Court, checks in only on occasion. Kahneman does not suggest that people are incapable of System 2 thought and always follow their intuition. System 2 engages when circumstances require. Rather, many of our actual choices in life, including some important and consequential ones, are System 1 choices, and therefore are subject to substantial deviations from the predictions of the standard economic model. System 1 leads to brilliant inspirations, but also to systematic errors."In the aftermath of the Sandy Hook shootings, my children's school district has been sending out emails and letters. One of them gave the statistics that there are 132,656 K-12 schools in the United States, and that including what happened last week, there have been 32 school shootings in the last 25 years. Of course, this is classic System 2 information, appealing to the conscious, controlled, statistical side of my brain. I find it hard even to read these kinds of statistics in the aftermath of the deaths; I can literally feel my brain wanting to escape back to automatic and effortless responses.
I find myself wondering about the possible effects of being a cognitively limited person living in a global neighborhood defined by the rapidly expanding capabilities of information and communications technology.
One possible outcome of living in a global village--or a global neighborhood--is that one has a sense of access and connection to a far larger number of people and experiences. I prefer to live in a world where I can grieve, even in my separate and unattached way, for the people of Newtown. A global neighborhood can be a world of greater empathy and connection.
But another possible outcome of living in a global neighborhood is that, given the ability to connect to every act of evil that occurs, we will be exposed to many more acts of evil. Even if the overall quantity of evil is not rising, our limited cognitive faculties combined with the surrounding information and media environment will cause us to perceive evil as rising sharply. In other words, instead of the global neighborhood giving us broader access and connection to the full range of human and natural experience, it expands our access to the evil, violent, grotesque, and sentimental.
Yet another possible outcome is that we become numbed and overwhelmed by the wide range of input that we are receiving, such that all electronic input seems to have a similar quality. Real-world violence merges into movie violence merges into video-game violence. A personal putdown on a situation comedy is like a personal putdown between two talking heads on a news commentary show is like a personal putdown via social media. Reactions blur between the real and the fictional, the impersonal and the personal. We have ever-heightened attention to events for an ever-shorter window of time, until nothing means very much for very long--until it is stoked by a news hook like new information or an anniversary.
I want to live in the global neighborhood, with a heightened sense of connection. I want to know what happened before the Sandy Hook killings, and what is happening since. (I confess that I have little taste for details of what happened during the actual episode.)
I don't want to be overwhelmed by the old, sad, true reality that there is always something terrible happening somewhere, just because it is now possible to consume a perpetual diet of such events. I don't want the details of the Sandy Hook killings to terrify my children, or to move me to tears (any more than they already have). I want to be a person who counts his blessings, not one who counts the world's disasters.
I want to have an attention span considerably longer and broader than the news cycle. I don't want to be a person who reacts to the horror of children being killed in some knee-jerk, automatic, sentimentalized fashion, although the controlled and deliberate side of my mind sheers away from contemplating the horror too closely. I don't want to forget the challenges and joys of the children at the 132,000 other schools across the country.
As a human being with limited cognitive abilities, I struggle with being who I want to be in the face of the Sandy Hook mass killing. I struggle in my roles as a parent, as a citizen, as a member of the human race.
Thursday, December 20, 2012
Africa: The Jobs Challenge
The economies of sub-Saharan Africa have been experiencing fairly rapid growth over the last decade. But for most people, the way that they share in economic growth is by having a steady wage-paying job. A report released in August by the McKinsey Global Institute considers the problem of "Africa at Work: Job Creation and Inclusive Growth."
As a starting point, Africa's economic growth sped up around 2000, and for the decade from 2000-2010, it was the second-fastest growing region of the world. If one counts a "consuming household" as a household with over $5000 per year in income, the number of African households in this category rose from about 59 million to 90 million over this decade.
But for sharing this growth broadly across the population, people need stable wage-paying jobs. For example, when GDP grows because of a rise in the mining sector, wage-paying jobs do not grow commensurately. McKinsey reports: "The continent's official unemployment rate is only 9 percent. Today, however, just 28 percent of Africa's labour force has stable wage-paying jobs." In some countries, like Ethiopia, Mali, and the Democratic Republic of Congo, fewer than 10 percent of adults have stable wage-paying jobs. McKinsey refers to those who live with subsistence agricultural jobs or informal self-employment as having "vulnerable employment," which strikes me as a nicely understated name for a very difficult life situation. Here's a figure with some information about the number of wage-paying jobs across countries.
Africa certainly has potential for creation of wage-paying jobs in areas like commercial farming, manufacturing, retail and hospitality--all labor-intensive sectors of the economy that in different ways can tap into world markets and export demand. In a McKinsey survey of employers in a number of countries, more than half named macroeconomic problems as a main factor holding back job growth, and 40 percent named political instability.
As the figure above shows, diversified economies are much more likely to have a larger share of their workers in stable, wage-paying jobs. McKinsey points out that when countries like South Korea, Thailand, and Brazil were at Africa's current stage of economic development, they were all more successful in creating wage-paying jobs. Ultimately, the difficulty is that an employer with employees is a particular form of social organization, which in turn is affected by political, legal, regulatory, and social factors. In many countries in Africa, the particular form of social organization that is a wage-paying firm with fairly stable and steady employment is not well-known or well-established. Africa's prospects for inclusive economic growth may well depend on its ability to foster the conditions for starting and growing such business organizations.
For some previous posts on whether Africa is at long last generating self-sustaining growth, see "Africa's Economic Development" (June 13, 2011), "Africa's Growing Middle Class" (September 19, 2011), and "Africa's Prospects: Half Full or Half Empty?" (December 15, 2011).
Wednesday, December 19, 2012
Can $12.1 Trillion be Boring? Thoughts on International Reserves
I knew that many countries were holding substantial international reserves, but I hadn't quite realized how large those reserves have become. Edwin Truman explains: "At the end of 2011, international reserve assets alone amounted to 17 percent of world GDP and an average of 29 percent of the national GDP of emerging market and developing countries. ... Including the international assets of SWFs [sovereign wealth funds] and similar entities would boost those percentages substantially above 20 percent and close to 40 percent respectively. At a 5 percent total return, those assets yield 1 to 2 percent of GDP per year."
Truman has some additional "Reflections on Reserve Management and International Monetary Cooperation" in remarks he delivered at a World Bank/Bank for International Settlements conference earlier this month. I saw the talk posted at the website of the Peterson Institute for International Economics. Truman compiled a useful table to show the rise in such reserves; here, I'll give a trimmed-down version of the table, along with some of the main patterns as I see them.
Here are a few of the patterns that jump out at me from the table.
1) International reserves have grown dramatically since 1990, and especially since 2000, rising from $2.3 trillion in 2000 to $12.1 trillion in 2011. To put it another way, international reserves were equal to roughly one-third of world trade in 1990 and 2000, but to about two-thirds of world trade in 2011.
2) Most of this increase in reserves has come from the emerging and developing economies. The share of world reserves held by advanced economies was about 80% from the 1960s up through about 1990, but since then has plummeted to 40%. For the emerging and developing countries, total reserves were about one-third of their trade from 1960 up through about 1990, but then rose to half of their volume of trade by 2000 and to 104% of their trade volume by 2011.
3) Countries used to hold their reserves in the form of gold, but now are more likely to hold their reserves in the form of foreign exchange. Back in 1960, the advanced economies held 70% of their reserves in the form of gold, and the emerging/developing countries held 44% of their reserves in the form of gold. In the last decade or so, advanced economies had 70-80% of their reserves in foreign exchange, and emerging/developing countries had 94% of their reserves in foreign exchange. (I trimmed from the table two relatively small categories of how reserves are held: special drawing rights and reserve position at the IMF.)
Truman offers a useful decade-by-decade sketch of how international reserves have evolved since the 1960s. For the most recent decade, Truman points out:
"The increased wealth in the hands of more and more governments has raised new concerns about the motivations, accountability, and transparency of the managers of that wealth. ... [E]nhancement of cooperative arrangements in this area is falling behind the need for them in the face of the explosion of the size and number of significant public investors, bringing in many non-traditional investors. This is a global issue. The notion that a country’s public investments are the exclusive concern of the country itself is analytically wrong and fundamentally dangerous. Two countries (at least) share an exchange rate. Similarly, two countries (at least) share the effects of cross-border public investments. ... The alternative to increased cooperation on public sector investment policies is a currency war. ... The greater risk is that restrictions and barriers will increase affecting not only cross-border official investments, but all cross-border financial transactions. Once we start down that path, a trade war would not be difficult to envisage, and the consequences for global growth and stability could be severe."
My own sense is that discussions of international reserves often seem to be deeply boring (if it's possible for discussions of $12.1 trillion to be boring!), but these financial flows are enormous enough to rock the world economy, and their management and purpose are often obscure. I suspect that, perhaps sooner rather than later, the ways in which these funds are managed and the choices they make will be the source of very prominent and not-at-all boring political and economic conflict.
Tuesday, December 18, 2012
Elinor Ostrom on the Commons
Elinor Ostrom shared the 2009 Nobel prize in economics. Her contribution, as described by the Nobel committee, was that she "[c]hallenged the conventional wisdom by demonstrating how local property can be successfully managed by local commons without any regulation by central authorities or privatization." Last March, she gave the Hayek Memorial Lecture at the Institute of Economic Affairs on the topic: "The Future of the Commons: Beyond Market Failure and Government Regulation." The IEA has now made her talk available as an e-book, together with some useful essays and commentary.
For example, Vlad Tarko offers some biographical and intellectual background on Ostrom. He sums up some main themes of her work in this way:
"The classic solution to the public goods problem has been to use taxes to pay for public goods, thus adjusting their supply level upwards (presumably towards the optimum). The classic solution to the ‘tragedy of the commons’ problem, provided by Hardin (1968), has been to transform the resource into a private good (either by privatising it or by turning it into government property with proper monitoring). One of the main reasons for which Elinor Ostrom received her Nobel Prize is the discovery that these classic solutions are not the only possible ones. What Ostrom discovered in her empirical studies is that, despite what economists have thought, communities often create and enforce rules against free-riding and assure the long-term sustainability of communal properties. Her ‘design principles’ explain under what conditions this happens and when it fails."
Ostrom's essay is especially useful as a reminder of how determinedly pragmatic she was in her work, and how unwilling to be pigeon-holed. She wrote:
"Challenge one, as I mentioned, is the panacea problem. A very large number of policymakers and policy articles talk about ‘the best’ way of doing something. For many purposes, if the market was not the best way people used to think that it meant that the government was the best way. We need to get away from thinking about very broad terms that do not give us the specific detail that is needed to really know what we are talking about.
"We need to recognise that the governance systems that actually have worked in practice fit the diversity of ecological conditions that exist in a fishery, irrigation system or pasture, as well as the social systems. There is a huge diversity out there, and the range of governance systems that work reflects that diversity. We have found that government, private and community-based mechanisms all work in some settings. People want to make me argue that community systems of governance are always the best: I will not walk into that trap."
"There are certainly very important situations where people can self-organise to manage environmental resources, but we cannot simply say that the community is, or is not, the best; that the government is, or is not, the best; or that the market is, or is not, the best. It all depends on the nature of the problem that we are trying to solve."
So what was Ostrom's framework for analysis? She tried to look at what she called "social-ecological systems," which was a sort of check-list of different categories. Here's an example from her essay of what she called first-tier and second-tier categories. There were third-tier categories, too!
This kind of table helps to illustrate why I often found Ostrom's work to be remarkably insightful and also frustrating at the same time. Her insistence on pragmatic analysis forced one to look at fine-grained detail in a way that often yielded fascinating insights. But in her approach, it often seemed hard to draw more general lessons, because it sometimes felt as if every individual study fell into its own category with its own rules. Ostrom acknowledges both reactions in her essay:
"I am going to warn you that when people see this for the very first time, there is a kind of worried reaction at its complexity. This looks very complex. ... Researchers can fall into the trap of pretending that their own cases are completely different from other cases. They refuse to accept that that there are lessons that one can learn from studying multiple cases. In reality, to diagnose why some social-ecological systems do self-organise in the first place and are robust, we need to study similar systems over time. We need to examine which variables are the same, which differ and which are the important variables so that we can understand why some systems of natural resource managementOstrom's work excelled at dispelling ideological certitudes: there was always an example to show that things might work differently. She had an extraordinary ability to postpone the easy answer, and to keep digging down into specific details.
are robust and succeed and others fail."
Monday, December 17, 2012
The Pill Over the Counter?
The ability of women to access the contraceptive pill is mediated through the health care profession: in particular, the pill is a prescription drug. Almost two decades ago, in August 1993, a doctor named David Grimes wrote in the American Journal of Public Health (footnotes omitted): "On public health grounds, oral contraceptives could be made available in vending machines and cigarettes by prescription only. ... Our society's approach to these two agents, both widely used by young women, is paradoxical. Cigarettes, which are readily available even to children, kill over a thousand persons each day. In contrast, oral contraceptives prevent unwanted pregnancy and improve women's health. Nevertheless, the medical profession poses numerous obstacles to this method of contraception, including a physical examination, a prescription, often a pharmacist, and an impenetrable package insert. ... [T]hese medical requirements neither serve nor protect women; they are merely impediments."
Some important voices in the health care profession seem to be coming around to this point of view. The Committee on Gynecologic Practice of the American College of Obstetricians and Gynecologists has now published its opinion concerning "Over-the-Counter Access to Oral Contraceptives." The committee begins:
"Unintended pregnancy remains a major public health problem in the United States. Over the past 20 years, the overall rate of unintended pregnancy has not changed and remains unacceptably high, accounting for approximately 50% of all pregnancies. The economic burden of unintended pregnancy has been recently estimated to cost taxpayers $11.1 billion dollars each year. According to the Institute of Medicine, women with unintended pregnancy are more likely to smoke or drink alcohol during pregnancy, have depression, experience domestic violence, and are less likely to obtain prenatal care or breastfeed. Short interpregnancy intervals have been associated with adverse neonatal outcomes, including low birth weight and prematurity, which increase the chances of children’s health and developmental problems.Of course, it's easy to toss out some potential reasons why offering birth control pills over-the-counter might pose some unwanted tradeoffs. Would women be appropriately aware of possible side effects? Would women use oral contraceptives regularly and thus effectively if they were available over the counter? If women could get the pill over-the-counter, might they then have fewer doctor visits that could focus on preventive health care? How would an over-the-counter pill interact with insurance reimbursement? Would pharmacists be involved in some way?
Many factors contribute to the high rate of unintended pregnancy. Access and cost issues are common reasons why women either do not use contraception or have gaps in use. Although oral contraceptives (OCs) are the most widely used reversible method of family planning in the United States, OC use is subject to problems with adherence and continuation, often due to logistics or practical issues. A potential way to improve contraceptive access and use, and possibly decrease the unintended pregnancy rate, is to allow over-the-counter access to OCs."
Just about every drug, including many over-the-counter drugs, can cause unwanted side effects for some people, or be misused or ineffectively used. The appropriate dividing line here is not to require perfect safety, but to make a judgment that the drug is safe enough that people can self-medicate. That 1993 essay in the American Journal of Public Health made the argument two decades ago: "More is known today about the safety of oral contraceptives than has been known about any other drug in the history of medicine. Thirty years of intense epidemiologic study have confirmed that oral contraceptives are very safe." On the other side, the factors that limit women from having access to effective contraception pose real and immediate risks to their own health, and often to the health of their children.
When the Committee on Gynecologic Practice of the American College of Obstetricians and Gynecologists speaks up, it is essentially a group of doctors who specialize in this area, saying that with all the health risks taken into account, the available evidence suggests that over-the-counter access to the pill makes sense.
Many of the non-health-related arguments about making the pill available over the counter are easily dismissed. For example, the concern that women with access to reliable contraception may not show up for preventive care is just old-style paternalism with a concerned face. Should we also require that condoms be sold via prescription, so that young men will be pressured to go to doctors' offices for their regular check-ups? Yes, figuring out how an over-the-counter pill would interact with insurance and with pharmacists is worth considering. But surely, those factors should not be the central ones in thinking about whether a drug should require a prescription.
The contraceptive pill has been a society-shaking innovation. In the "millennium issue" of the Economist magazine back at the very end of 1999, the contraceptive pill was described this way: "But there is, perhaps, one invention that historians a thousand years in the future will look back on and say, “That defined the 20th century.” It is also one that a time-traveller from 1000 would find breathtaking—particularly if she were a woman. That invention is the contraceptive pill."
Among academic economists, probably the best-known work on the pill is a paper by Claudia Goldin and Lawrence F. Katz, "The Power of the Pill: Oral Contraceptives and Women’s Career and Marriage Decisions," published in 2002 in the Journal of Political Economy. The academic paper is available here; a write-up of the material for a broader readership in the Second Quarter 2001 issue of the Milken Institute Review is available here. Goldin and Katz describe their work this way: "The fraction of U.S. college graduate women entering professional programs increased substantially just after 1970, and the age at first marriage among all U.S. college graduate women began to soar around the same year. We explore the relationship between these two changes and the diffusion of the birth control pill (“the pill”) among young, unmarried college graduate women." While Goldin and Katz are careful to point out that many other factors were in play around this time, they make a compelling case that the availability of the pill played an important role, too. As they put it: "The Pill thus enabled a larger group of women to invest in expensive, long-duration training without paying a high social price."
But while the pill has fundamentally altered the lives of women who have ready access to health care appointments and doctors who write prescriptions, there are also many women for whom the requirement to see a doctor regularly and to get a series of prescriptions presents real logistical and personal barriers. It's time to stop using the contraceptive pill as a sort of carrot-and-stick to encourage regular doctor visits by women. It should be available over the counter.
Acknowledgement: I ran across the 1993 article in the American Journal of Public Health in a March 2012 Bloomberg column by Virginia Postrel called, "Fight Birth-Control Battle Over the Counter."
Friday, December 14, 2012
Why Lobbyists Get Paid
One theory of why lobbyists get paid, the one often proffered by the lobbyists, is that the legislative arena is a complex place, and having someone who knows the ins and outs is useful. An alternative theory, often proffered by critics, is that lobbyists are paid because they deliver access to top politicians--in other words, it's not what they do, but who they know. Both theories doubtless hold some truth, but in the December 2012 issue of the American Economic Review, Jordi Blanes i Vidal, Mirko Draca, and Christian Fons-Rosen offer some fuel for the critics in their paper "Revolving Door Lobbyists" (102:7, pp. 3731-48). The AER isn't freely available on-line, but many academics will have access through a library subscription.
Blanes i Vidal, Draca, and Fons-Rosen point out that many lobbyists went through the "revolving door," meaning that they used to work for the federal government before becoming lobbyists.
"One important characteristic of the US lobbying industry is the extent to which it is dominated by the “revolving door” phenomenon—i.e., the movement of federal public employees into the lobbying industry. For example, 56 percent of the revenue generated by private lobbying firms between 1998 and 2008 can be attributed to individuals with some type of federal government experience. ... Reflecting this, a recent ranking of the 50 top Washington lobbyists identified 34 as having federal
government experience ..."
The authors divide the lobbyists into those who formerly worked directly with a member of Congress, and those who didn't. In addition, for lobbyists who worked with a member of Congress, they can look at how the revenue generated by that lobbyist changes when the member of Congress with whom they are personally connected leaves office. They write:
"Our main finding is that lobbyists connected to US senators suffer an average 24 percent drop in generated revenue when their previous employer leaves the Senate. The decrease in revenue is out of line with preexisting trends, it is discontinuous around the period in which the connected senator exits Congress, and it persists in the long term. Measured in terms of median revenue per staffer-turned-lobbyist, this estimate indicates that the exit of a senator leads to approximately a $182,000 per year fall in revenues for each affiliated lobbyist. We also find evidence that ex-staffers are less likely to work in the lobbying industry after their connected senators exit Congress. We regard the above findings as evidence that connections to powerful, serving politicians are key determinants of the revenue that lobbyists generate."
Along the way, they point out: "The average weighted revenue per lobbyist/year ranges around $349,000 for the subgroup of congressional staffers we consider. This figure is closely in line with the reported salaries of lobbyists in this group. For example, the Washington Post reported in 2005 that “[s]tarting salaries have risen to about $300,000 a year for the best-connected aides eager to ‘move downtown from Capitol Hill’ ” .... Obviously, our estimates are not easily extrapolated to lobbyists with no government experience, although they help to explain the fact that these lobbyists generate substantially less revenue and are known to command lower salaries."
A policy implication here is that "cooling off" periods, in which people who leave government employment are banned for a time from becoming lobbyists, might diminish this business of trading on personal access. They write: "One common instrument to regulate the revolving door phenomenon is to impose “cooling off ” periods to officials leaving public office (Ethics Reform Act of 1989; Honest Leadership and Open Government Act of 2007; for a review, see Maskell 2010). The perishable nature of ex-staffers’ assets suggests that such restrictions could in fact be quite useful to a legislator interested in significantly decreasing the attractiveness of a lobbying career for ex–government officials."
Back in September, I posted on "Campaign Contributions vs. Lobbying Expenses." Basically, my theme was that we would be wise to worry more about lobbyists than about campaign contributions. Year in, year out, more money is spent on lobbying than on campaign contributions, and just what happens with lobbyists behind the scenes is far more focused and secretive than what happens with contributions to a candidate or a party.
Thursday, December 13, 2012
Supplemental Security Income: Where the Program Stands
I have sometimes said that Supplemental Security Income, or SSI, is the federal program for those who are both old and low-income. But while that was an acceptable, if imprecise, shorthand a few decades ago, it's no longer appropriate. SSI does cover the low-income elderly, but it also covers low-income adults from ages 18-64 with disabilities, as well as disabled children under the age of 18 in low-income households. Back in 1980, about half of those receiving benefits were in the over-65, low-income group. But at present, only 25 percent of the people covered by SSI are elderly, and they receive only 19 percent of the payments from this program. The Congressional Budget Office offers this and other facts about the program in its just-released report: "Supplemental Security Income: An Overview."
Here's a figure from CBO showing the three main groups in the SSI program, and how their numbers have evolved over time.
Given this shift in SSI toward those who are disabled, an obvious question is how SSI relates to the other main federal program for those with disabilities, the Social Security Disability Insurance program. For a quick overview of that program with some suggestions for reform, see this post from August 2011 on "Disability Insurance: One More Trust Fund Going Broke." The CBO report explains the practical differences in this way:
"Social Security Disability Insurance (DI), the other major federal program that provides cash benefits to people with disabilities, uses the same disability standard for working-age adults that applies in SSI, but it differs from SSI in several respects. For example, DI is available only to adults (and their dependents) who have a sufficient record of work, but past work is not a requirement for SSI eligibility. DI also places no limits on beneficiaries’ income or assets, but SSI recipients must have low income and few assets. In addition, DI is funded primarily by means of a dedicated payroll tax, but SSI is funded out of general revenue."
Here's a quick overview of eligibility rules for the three main groups in the SSI program. For those in the single largest category of age 18-64, low-income, and disabled, the rules look like this:
"To qualify for SSI, those recipients must demonstrate that their disability prevents them from participating in “substantial gainful activity,” which in 2012 is considered to mean work that would produce earnings of more than $1,010 a month. (That amount is adjusted annually for average wage growth.) Older adults are more likely than younger adults are to receive payments: Fewer than 2 percent of people between the ages of 18 and 29 receive payments; slightly more than 3 percent of people between the ages of 50 and 64 do. Especially among younger adults, eligibility for the program is determined most commonly on the basis of mental disability: Three-quarters of participants ages 18 to 39 were awarded payments primarily because of a mental disorder. That share declines with age, as conditions such as spinal disorders and heart disease become more prevalent. Among SSI recipients between the ages of 60 and 64, for example, one-third receive payments because of mental disorders, one-quarter receive payments because of musculoskeletal disorders, and one-tenth receive payments because of circulatory disorders ... The share of adults ages 18 to 64 receiving SSI payments has increased over time, rising from slightly more than 1 percent of the population 30 years ago to more than 2 percent today."
For children to qualify for SSI, here are the standards:
"Children who qualify for SSI must be disabled and, in most cases, must live in a household with low income and few assets. To be considered disabled, a child must have a physical or mental impairment that results in marked and severe functional limitations and that is either expected to last for at least 12 consecutive months or to result in death. Most child recipients—three-quarters of recipients
between the ages of 5 and 17 and one-third of those under the age of 5—qualify because of a mental disorder.
And for the elderly, the rules for SSI are based on low income. The low-income elderly rely less on SSI than they used to, in part because of broader participation in Social Security--for example, more women now have an earnings history that brings a non-negligible amount of Social Security payments--and also because of how Social Security benefits have been indexed to rise over time.
"People age 65 or older can qualify for SSI on the basis of low income and assets alone; they need not be disabled. As a result, people in that age group are more likely than younger people are to qualify for the program; about 2.1 million, or 5 percent of the elderly population, do. (About half of those recipients qualified as disabled recipients before they turned 65.)
"The share of the aged population that receives payments has fallen by more than half since 1974 because of the increase in the share of that population eligible for Social Security and because of the real (inflation-adjusted) increase in the average Social Security benefit. Many more women now have had sufficient earnings to qualify for Social Security benefits based on their own work. In addition, the Social Security benefits that each new group of beneficiaries receives are linked to average wages in the economy, which generally increase faster than SSI benefits, which are linked to prices. As more people qualified for Social Security benefits and as the benefit amounts rose, fewer people met SSI’s income standard."
The SSI program will cost about $53 billion this year. Over the last 20 years or so, spending on the program expressed as a share of GDP has been fairly flat.
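For a rough sense of scale, that $53 billion can be set against the size of the economy. Here's a back-of-the-envelope calculation of my own (the GDP figure is my assumption, roughly $16 trillion of nominal GDP in 2012, and is not from the CBO report):

```python
# Rough scale check: SSI outlays relative to GDP.
ssi_outlays = 53e9       # ~$53 billion, from the CBO report
us_gdp_2012 = 16e12      # ~$16 trillion nominal GDP (my assumption)
print(f"SSI spending is about {ssi_outlays / us_gdp_2012:.2%} of GDP")  # ~0.33%
```

In other words, the program amounts to about one-third of one percent of GDP.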
As with any program oriented to those with disabilities or with low incomes, I'm sure there should be a continual process of re-considering just how "disability" is defined and what incentives to work at least part time are being provided by the benefit structure. But this program isn't one where I would expect even a fairly rabid budget-cutter to find substantial spending cuts.
Wednesday, December 12, 2012
Cautionary Details on U.S. Manufacturing Productivity: Susan Houseman
There's a basic and often-told story about output and employment in the U.S. manufacturing sector: I'm sure I've told it a time or two myself. The story begins by pointing out that the total quantity of U.S. manufacturing output has actually held up fairly well over recent decades, although it hasn't grown as quickly as the services sector. However, manufacturing productivity has been rising quickly enough that, even though manufacturing output has remained fairly strong, the number of jobs has been falling. The standard historical analogy is that just as rising agricultural productivity meant that fewer U.S. farmers were needed, now rising manufacturing productivity means that fewer manufacturing workers are needed.
That story isn't exactly wrong, at least not over the long run, but Susan Houseman has been digging down into the details and finding arguments which suggest that it is a seriously incomplete version of what's happening in the U.S. manufacturing sector. Houseman presented some of these arguments in a paper written with Christopher Kurz, Paul Lengermann, and Benjamin Mandel, called "Offshoring Bias in U.S. Manufacturing," which appeared in the Spring 2011 issue of my own Journal of Economic Perspectives. (Like all articles in JEP back to the first issue in 1987, it is freely available courtesy of the American Economic Association.) In turn, their JEP paper was a revision of a more detailed Federal Reserve working paper in 2010, available here. However, Houseman offers a nice overview of her arguments in an interview recently published in fedgazette, a publication of the Federal Reserve Bank of Minneapolis.
For background, here are four figures created by the ever-useful FRED website maintained by the Federal Reserve Bank of St. Louis. The first shows the level of manufacturing output, which since the official end of the recession in 2009 has recovered to its level in 2000. The second shows manufacturing employment, which has dropped off substantially over that time. The third shows annual rates of change in manufacturing productivity, which is volatile, but seems often to be rising at 2-3% per year. And the fourth shows the level of manufacturing compensation, which hasn't been rising since 2000--contrary to what one might have expected based on rising productivity in this sector.
After reading Houseman, when you hear the standard story about how high productivity in manufacturing is leading to reduced employment, the following thoughts should rattle through your head:
1) Most of the productivity growth in manufacturing is computers. Houseman: "First, a very important fact, but one I find most people don’t know—including some people who write a lot about the manufacturing sector—is that manufacturing growth in real [price-adjusted] value added and productivity wasn’t that strong without the computer and electronics industry. The computer industry is small—it only accounts for about 12 percent of manufacturing’s value added.... But we find that without the computer industry, growth in manufacturing real value added falls by two-thirds and productivity growth falls by almost half. It doesn’t look like a strong sector without computers."
2) Most of the productivity growth in manufacturing computers is because computers are becoming so much faster and better over time, and government statistics count that as productivity growth, not because an average worker is producing a dramatically greater quantity of computers. Houseman: "The standard argument is that the rapid productivity growth in computers is coming from product innovation. This year’s computers and semiconductors are faster and do more than last year’s models. And that product innovation essentially gets captured in the price indexes the government uses to deflate computer and semiconductor shipments. The price indexes for most products increase over time—that’s inflation. But, for example, the price indexes used to deflate computer shipments have actually fallen by a whopping 21 percent per year since the late 1990s. Those rapid price declines largely reflect adjustments for the growing power of computers. And that extraordinary decline in computer price indexes translates into extraordinary growth in real value added and productivity in the computer industry as measured in government statistics. So, in some statistical sense, today’s computer may be the equivalent of, say, 13 computers in 1998. ... The reason jobs in computers have been lost is not because productivity growth has crowded them out; not at all. It’s because much of the production has gone overseas...."
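As an aside, it's worth seeing how quickly a 21 percent annual price decline compounds. Here's a back-of-the-envelope sketch (my own arithmetic, not a calculation from Houseman or from the government statistics):

```python
# If the computer price index falls 21% per year, the deflator shrinks by a
# factor of 0.79 each year, so each nominal dollar of shipments is counted
# as 1/0.79 times more "real" output than the year before.
annual_decline = 0.21

for years in (5, 11, 14):
    factor = (1 / (1 - annual_decline)) ** years
    print(f"after {years:2d} years, 1 computer counts as ~{factor:.1f} base-year computers")

# After roughly a decade of such declines, one computer is counted as about
# 13 computers of late-1990s vintage -- the same ballpark as the quote's
# "say, 13 computers in 1998."
```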
3) A sizeable share of what looks like growth in manufacturing productivity is actually from importing less expensive inputs to production. Houseman: "[T]here’s been a lot of growth in manufacturers’ use of foreign intermediate inputs since the 1990s, and most of those inputs come from developing and low-wage countries where costs are lower. We point out that those lower costs aren’t being captured by statistical agencies, and so, as a result, the growth of those imported inputs is being undercounted. ... Suppose an auto manufacturer used to buy tires from a domestic tire manufacturer. Then it outsources the purchase of its tires to, say, Mexico, and the Mexicans sell the tires for half the price. That price drop—when the auto manufacturer switches to the low-cost Mexican supplier—isn’t caught in our statistics. And if you don’t capture that price drop, it’s going to look like, in some statistical sense, the manufacturer can make the same car but only needs two tires. ... Our statistical agencies try to measure price changes, but they miss them when the price drops because companies have shifted to a low-cost supplier. So because we don’t catch the price drop associated with offshoring, it looks like we can produce the same thing with fewer inputs—productivity growth. It also looks like we are creating more value here in the United States than we really are."
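To see the mechanism of this offshoring bias in miniature, here is a stylized version of the tire example (the prices are made up for illustration; they are not numbers from the Houseman et al. paper):

```python
# Stylized illustration of offshoring bias (made-up prices, not data from
# the paper). A car maker always uses 4 tires per car; offshoring halves
# the tire price, but the input price index never records that drop.
car_price = 20_000.0
tires_per_car = 4
domestic_tire_price = 100.0   # year 1: domestic supplier
offshore_tire_price = 50.0    # year 2: offshore supplier at half the price

inputs_y1 = tires_per_car * domestic_tire_price   # $400 nominal input bill
inputs_y2 = tires_per_car * offshore_tire_price   # $200 nominal input bill

# The statistical agency misses the price drop, so it deflates the year-2
# bill with the old price: $200 looks like only 2 "real" tires, and real
# value added appears to rise even though the same car uses the same 4 tires.
measured_real_va_y1 = car_price - inputs_y1       # $19,600
measured_real_va_y2 = car_price - inputs_y2       # $19,800 as measured
phantom_gain = measured_real_va_y2 / measured_real_va_y1 - 1

print(f"measured real value added: {measured_real_va_y1:,.0f} -> {measured_real_va_y2:,.0f}")
print(f"phantom 'productivity' gain: {phantom_gain:.1%}")   # about 1.0%
```

Nothing about actual production changed in this sketch, yet measured real value added--and hence measured productivity--rose.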
4) If productivity in manufacturing rises because of automation, then those gains in productivity may benefit the owners of the machines--that is, benefit capital rather than labor. Houseman: "And then another standard story has to do with automation. Basically, capital is substituting for labor. Automation can lead to job losses. And the returns from automation, or higher capital use, won’t necessarily be shared with workers."
5) If low-wage, labor-intensive manufacturing tasks are now more likely to happen overseas, and higher-wage tasks remain in the U.S., then it may appear as if the productivity of an average U.S. manufacturing worker is higher--but it's just a shift in the composition of U.S. manufacturing workers. Houseman: "Then, finally, there’s probably been some shifting in the sorts of production that occur here. In particular, less of the labor-intensive production is done in the United States, and that would result in job losses and higher labor productivity. Again, the gains from that productivity growth aren’t necessarily going to be shared with remaining workers. So part of the answer to the puzzle is that even if productivity gains are real, there’s really nothing that guarantees those gains will be broadly shared by workers."
Add all these factors up, and the condition of U.S. manufacturing looks more ominous than the standard story of high productivity and resulting job losses. For more on the future of global and U.S. manufacturing, see this November 30 post on "Global Manufacturing: A McKinsey View."
Tuesday, December 11, 2012
Rock-Bottom U.S. Mobility Rates
Everyone knows that Americans are a mobile society, moving toward opportunity and jobs, right? Not according to data from the U.S. Census Bureau, which show that rates of geographic mobility in 2011 were at their lowest level since the start of the data in 1948, and were only a tad higher in 2012. Here's the figure just released by the U.S. Census Bureau. The blue bars show the absolute number of moves, as measured on the left-hand axis. The black line shows the rate of mobility, as measured by the percentage of U.S. households that moved.
Another chart gives a sense of how far these moves go. Most moves are within a given county, or between nearby counties, while relatively few involve moves to another state or abroad.
Why is the mobility rate down? One potential set of explanations focuses on the Great Recession: with jobs scarce and home values declining in many areas, people stayed in place either because of a lack of jobs to move to, or because of the unexpectedly low price of their home, or both. But this explanation is at best a very partial one. The downward trend in U.S. mobility goes back well before the start of the recession. People who are unemployed are often more likely to move, not less likely, as a report accompanying these charts pointed out. And if the issue is declining home values, it's hard to explain why mobility rates are down both for renters and for homeowners.
The Census Bureau puts out the data, but often sidesteps much discussion of underlying causes. However, in the Spring 2011 issue of my own Journal of Economic Perspectives, Raven Molloy, Christopher L. Smith, and Abigail Wozniak fill this gap with a discussion of "Internal Migration in the United States." Like all articles in JEP back to the first issue in 1987, the article is freely available compliments of the American Economic Association.
Molloy, Smith, and Wozniak consider possible long-term explanations for a declining rate of mobility, like the possibility that an aging population is less likely to move. As they put it: "However, these differences across groups are not useful in explaining why migration has fallen in recent decades. The decrease in migration does not seem to be driven by demographic or socioeconomic trends, because migration rates have fallen for nearly every subpopulation ..."
They freely admit that there is not yet an answer in the economic research as to why geographic mobility has been declining, but they offer some hypotheses.
For example, one argument is that migration was high in the post-WWII years as part of a significant population shift to the South, a shift which has been diminishing ever since. But this factor doesn't seem to be significant enough, given the observed data on interregional migration.
Another hypothesis is that there are more two-earner families, and so when one person loses a job the household may be more reluctant to relocate. But this argument faces the problem that "the percentage of households with two earners has been quite stable over the last 30 years."
Yet another possibility "is that technological advances have allowed for an expansion of telecommuting and flexible work schedules, reducing the need for workers to move for a job." However, the data on telecommuting doesn't show that it is a large enough factor to explain the decline in mobility.
And yet another possibility "is that locations have become less specialized in the types of goods and services produced, making the types of available jobs more similar across space. ... A related idea is that the distribution of amenities has become more homogeneous across locations, making residence in any particular city less attractive." This explanation may have some truth in it, but it's proven difficult to gather data that would allow it to be tested in any definitive way.
Finally, it may just be that many Americans are shifting their preferences away from being willing to move. Molloy, Smith and Wozniak present evidence that "the secular decline in geographic mobility appears to be specific to the U.S. experience, since internal mobility has neither fallen in most other European economies nor in Canada—with the United Kingdom as a notable exception."
Whatever the reason behind the decline in geographic mobility, there are implications for the economy if the workforce becomes less flexible and less willing to move from areas where the economy is weaker to where it is stronger. In addition, lower mobility has broad implications for what it's like to live in America. People find it harder to envision their lives as involving a big move. Social networks are reshaped. When mobility drops, we become a country where you are less likely to end up living and working with people from other states, other counties, or even other parts of your own county.
Monday, December 10, 2012
Paper Towels v. Air Dryers
After washing your hands with anti-microbial soap, is it better to dry them with a paper towel or with an air dryer? Like many economists, I'm always on the lookout for persuasive analysis of the benefits, costs, and tradeoffs of life's difficult questions. Thus, I was delighted to run across "The Hygienic Efficacy of Different Hand-Drying Methods: A Review of the Evidence," by Cunrui Huang, Wenjun Ma, and Susan Stack, which appeared in the August 2012 issue of the Mayo Clinic Proceedings (87: 8, pp. 791-798).
Basically, paper towels win out over regular air dryers, jet air dryers, and cloth rollers, at least in settings like health care provision where hygiene is especially important. But here's a sketch of the arguments, based on a review of 12 studies on hand-drying since 1970. Summary statements are mine; quotations are from the study. As usual, footnotes and citations are omitted for readability.
Removing water from hands after hand-washing is an important part of killing the germs.
"For centuries, hand washing has been considered the most important measure to reduce the burden of health care–associated infection. ... Although studies have reported the importance of thorough hand drying after washing, the role of hand drying has not been widely promoted, and its relevance to hand hygiene and infection control seems to have been overlooked. Lack of attention to this aspect may negate the benefits of careful hand washing in health care."
Paper towels are the most hygienic of the hand-drying options: they dry skin faster, help remove contamination through friction, and don't risk spreading germs through the air.
"Although jet air dryers had drying efficiency similar to paper towels, their hygiene performance was still worse than paper towels. The differences in bacterial numbers after drying with air dryers and paper towels could be due to other factors rather than the percentage of dryness alone. Friction can dislodge microorganisms from the skin surface during both hand washing and drying. Antimicrobial agents in soaps have too little contact time to have bactericidal effects during a single use or with sporadic washings, making friction the most important element in hand drying. It is likely that paper towels work better because they physically remove bacteria from the hands, whereas hot air dryers and jet air dryers cannot. In many instances, however, rubbing hands with hot air dryers to hasten drying would only lead to greater bacterial numbers and airborne dissemination. It might be that rubbing hands causes bacteria to migrate from the hair follicles to the skin surface. Many studies have found friction to be a key component in hand drying for removing contamination. ..."Air dryers, and especially jet dryers, are noisier. They can irritate skin.
"Hot air dryers are generally not recommended for use in health care settings because such dryers are relatively slow and noisy and their hygiene performance is questionable. Cloth roller towels are not recommended because they can become common use towels at the end of the roll and can be a source of pathogen transfer to clean hands. Recently, jet air dryers have undergone independent certification within the food safety arena in Australia, attesting to their increased hygiene benefits as opposed to the traditional hot air-drying method. However, the criteria and process of obtaining this type of certification remain questionable. The health and safety aspects of jet air dryers for use in locations where hygiene is paramount should still be carefully examined by the scientific community. Therefore, this makes paper towel drying, during which little air movement is generated, the most hygienic option of hand-drying methods in health care."
"Air dryers, particularly jet air dryers, are obviously noisier than paper towels or cloth towels. ... [T]he mean decibel level of using a jet air dryer at 0.5 m was 94 dB, which is in excess of that of a heavy truck passing 3 m away. When 2 jet air dryers were used at the same time, the decibel level at a distance of 2 m was 92 dB. Therefore, in washrooms with jet air dryers, the noise level could constitute a potential risk to those exposed to it for long periods. ... "
"Use of air dryers may cause hands to become excessively dry, rough, and red. ... Affected persons often experience a feeling of dryness or burning; skin that feels rough; and erythema, scaling, or fissures. When the hands become irritated, health care workers may not wash their hands as often or as well. Concern regarding this effect of air dryers could become an important cause of poor acceptance of hand hygiene practices."The environmental effect of paper towels is slightly worse than air dryers, but only very slightly.
"[T]he paper towel method emits relatively higher greenhouse gases than the hot air dryer method (1377 vs 1337 kg of carbon dioxide equivalent). In terms of environment sustainability, the hot air dryer method surpasses the paper towel method with better scores for 6 indicators (respiratory organics, respiratory inorganics, ozone layer, ecotoxicity, acidification/eutrophication, and fossil fuels) compared with 5 indicators (carcinogens, climate change, radiation, land use, and minerals) for paper towels."
Paper towels cost slightly more than air dryers.
"Using paper towels is more costly than using air dryers. Paper towels must be replaced frequently, whereas air dryers usually require little maintenance. ... However, air dryers can be costly to purchase and install. Therefore, those responsible for facility management should perform a careful cost analysis to determine whether they are cost-effective in their building."
People prefer to use paper towels--and people's preferences have value in this overall calculation, too.
"Another survey of 2516 US adults in 2009 still found that most people preferred to dry their hands with paper towels. If they had a choice, 55% of respondents selected paper towels, 25% selected jet air dryers, 16% selected hot air dryers, 1% selected cloth roller towels, and 3% were not sure. ... Hence, given the strong preference for using paper towels, hand hygiene adherence would possibly decrease if paper towels are not available in washrooms."As the conclusion of academic studies often love to point out, there are vast possibilities for future research on this topic that go beyond the questions already discussed.
"Does the quality of paper towel have an effect on hand hygiene adherence? When recycled paper is used for hand drying, what kinds of studies are appropriate to assess the cost benefit of using recycled paper? Many questions remain unanswered. ... The maintenance of a clean environment around paper towels is also important. Paper towels deposited in bins could act as a bacteriologic reservoir if disposal is not managed properly. ... The risk of potential contamination among dispenser exits, paper towels, and hands should be considered in the design, construction, and use of paper towel dispensers. Architects working in the health care industry should also be aware of these issues when designing equipment for new facilities."
Friday, December 7, 2012
Some Facts on Foreign Aid
The OECD has just published its Development Co-operation Report 2012: Lessons in linking sustainability and development, which includes a number of essays about various aspects of foreign aid and its role in development. (Fair warning: Those looking for deeply skeptical viewpoints about foreign aid will not find them well-represented in this volume.) Here, I'll stick to some of the big-picture facts about patterns of foreign aid and present a few figures from the Statistical Annex. (And yes, I'm the sort of person who, when getting a report, has a tendency to read the Statistical Annex first.)
First, here's the trendline of official development assistance over time, expressed in constant 2010 dollars, with private capital flows shown for context. The heading refers to the DAC, which is the Development Assistance Committee, a group of the OECD countries that give most of the aid. The bottom blue area is official development aid. The two small ribbons in the middle are other official aid flows and grants from private voluntary organizations. The gray area at the top is private capital flows to these aid-recipient countries. Clearly, private capital flows fluctuate a lot, and it's always useful to remember that the countries that need aid the most are often not the countries that are especially attractive for private-sector investment. Still, it's striking that in most years over the last three decades, private capital flows to the group of countries receiving aid have been considerably larger than foreign aid.
This figure puts foreign aid in perspective in two other ways. In constant 2010 U.S. dollars, as measured on the left-hand axis, foreign aid from all countries in the world now exceeds $120 billion. In my checking account, this would be untold riches. But set against the world economy as a whole, it is not an especially large amount. The right-hand axis shows foreign aid as a percentage of the Gross National Income of the donor countries: since the 1960s, this share has sagged from about 0.5% of GNI to about 0.25-0.30% of GNI. To put it another way, the economies of donor countries have been growing faster than their foreign aid spending over the last half-century.
The final figure shows the sources of official development aid. Clearly, foreign aid is primarily a European project, although the U.S. also gives a significant share.
Many Americans wildly overestimate how much the federal government spends on foreign aid. For example, this 2010 survey found that Americans believe that the federal government spends 25% of its budget on foreign aid, and would like to cut that amount to 10%. In reality, only about 1% of federal spending is foreign aid. Maybe this 1% is still too much! But as a matter of arithmetic, trimming foreign aid would have an essentially negligible effect on the U.S. government's deficit problem.
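For readers who want to check that arithmetic, here is a minimal back-of-envelope sketch in Python. The budget figures are round illustrative assumptions (roughly fiscal 2012 magnitudes), not numbers taken from the survey or the OECD report.

```python
# Back-of-envelope: how much would eliminating foreign aid shrink the deficit?
# Round illustrative numbers (assumptions, not figures from the post's sources):
# total federal spending of roughly $3.5 trillion and a deficit of roughly
# $1.1 trillion, approximate fiscal 2012 magnitudes.
spending = 3.5e12   # total federal spending, in dollars
deficit = 1.1e12    # federal deficit, in dollars
aid_share = 0.01    # foreign aid as a share of federal spending (~1%)

aid = spending * aid_share
print(f"Foreign aid: about ${aid / 1e9:.0f} billion")
print(f"Eliminating it entirely would cut the deficit by about {aid / deficit:.0%}")
# Roughly $35 billion against a deficit of over $1 trillion -- a few percent at most.
```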