Tuesday, December 9, 2014

Biosciences Innovation

When thinking about future technology and how it may affect economic growth, it's common enough, for obvious reasons, to focus on the possibilities related to information and communications technology: new possibilities for the internet, robotics, driverless cars, and so much more. It's worth remembering that the powers of computation can combine with biological research to bring breakthroughs in other areas too. William Hoffman draws attention to "The Shifting Currents of Bioscience Innovation" in an article published earlier this year in Global Policy. Hoffman and Leo Furcht also have a just-published book on the subject called The Biologist's Imagination: Innovation in the Biosciences.

Here are a couple of figures from the Hoffman article to illustrate the shockingly rapid change in bioscience innovation in recent decades. This figure shows global population over time, with the timing of various important discoveries shown as well. Robert Fogel did an earlier version of this figure, while Hoffman added detail on biosciences innovations. Innovation in this area has gone from antibiotics and the polio vaccine to sequencing the human genome and synthetic bacterial cells in a little more than a half-century.




For a sense of the speed of change in this area, think about the gains from "Moore's law," the relationship first pointed out back in 1965 by Gordon Moore, one of the founders of Intel Corporation, that the number of transistors on a computer chip was doubling every two years. That rate of growth has held up pretty well since then and seems likely to continue for at least a few more years (for earlier discussions of Moore's law on this blog, see here and here). The results of the information technology revolution are all around us. The key point here is that the cost of sequencing a human-sized genome has been falling faster than Moore's law since about 2007.
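To see what "falling faster than Moore's law" means in practice, here is a minimal sketch comparing two exponential cost declines. The starting cost and the half-year halving time for sequencing are illustrative assumptions for the comparison, not actual figures from the sequencing-cost data.

```python
# Illustrative comparison only: Moore's law halves cost roughly every 2 years;
# post-2007 sequencing costs fell much faster. The $10 million starting cost
# and the 0.5-year sequencing halving time are assumed for illustration.

def cost_after(initial_cost, years, halving_time):
    """Cost after `years`, if cost halves every `halving_time` years."""
    return initial_cost * 0.5 ** (years / halving_time)

moore = cost_after(10_000_000, 7, 2.0)       # Moore's-law pace over 7 years
sequencing = cost_after(10_000_000, 7, 0.5)  # faster, sequencing-style pace

print(f"Moore's-law pace after 7 years: ${moore:,.0f}")
print(f"Faster sequencing-style pace:   ${sequencing:,.0f}")
```

Even over a single seven-year stretch, the shorter halving time drives the cost down by orders of magnitude more than the Moore's-law pace, which is why the two curves diverge so dramatically in the well-known sequencing-cost charts.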


Up until now, the best-known economic payoffs from the biosciences have been pharmaceuticals (the group of medicines and drugs often called "biologics") and genetically modified crops (which have effects both on food products and on related outputs like biofuels). But my sense is that a wide variety of industrial and even household applications may not be far behind. Here's a sample of Hoffman's argument:

Cutting-edge tools from genomics and bioinformatics, cellular technologies including stem cells, and synthetic biology, with assists from nanotechnology and automation, are poised to revolutionize bioscience productivity. These tools make it possible to sequence and synthesize DNA at an industrial scale, edit genes precisely, control the growth and differentiation of cells and seed them in three-dimensional (3D) constructs, and create microbial factories that produce medicines, chemicals, fuels and materials. They are transforming traditional models of drug discovery and development and diagnostic testing. The more DNA, RNA, and cellular components fall under the purview of bioengineers, the likelier we are to see large-scale production of renewable fuels, biodegradable materials, and safer industrial chemicals.
Genomics is opening a window on genetic alleles that enable food crops to adapt to a changing climate, and synthetic biology is being used to design novel environmental remediation systems. Using 3D printers puts science into the hands of people ‘whether in the far corners of Africa or outer space’ so that they can print drugs on demand. They can be modified to print cells including stem cells, which are key to cellular differentiation and tissue repair. Digitally enabled bioprinting means on-demand tissue and organ production for surgical modelling, medical therapy, drug testing and science education.
At the most basic level, the idea that useful materials can be grown as needed, not just manufactured from raw materials, seems to me a potentially vast breakthrough. A couple of years ago I read the article "Form and Fungus: Can mushrooms help us get rid of Styrofoam?" by Ian Frazier in the New Yorker magazine (May 20, 2013). He tells the story of Ecovative, which essentially uses fungus (clever mushrooms, in effect) as its production process to replace Styrofoam. Frazier writes:
The packing material made by their factory takes a substrate of agricultural waste, like chopped-up cornstalks and husks; steam-pasteurizes it; adds trace nutrients and a small amount of water; injects the mixture with pellets of mycelium [this is the fungus part]; puts it in a mold shaped like a piece of packing that protects a product during shipping; and sets the mold on a rack in the dark. Four days later, the mycelium has grown throughout the substrate into the shape of the mold, producing a material almost indistinguishable from Styrofoam in form, function, and cost. An application of heat kills the mycelium and stops the growth. When broken up and thrown into a compost pile, the packing material biodegrades in about a month.
It turns out that when you start thinking about the properties of funguses grown in shaped molds, all sorts of possibilities arise, like the potential for pieces of insulation or even replacements for wood. As the rapidly developing power of the biosciences begins to interact with this new mindset--which I think of as "grow it, don't manufacture it"--I suspect that a wide array of goods and services will be affected.

Hoffman is also up-front that the future "bioeconomy" may raise some difficult ethical and safety questions, which in some cases deserve deep consideration. But I'll add that there is no reason to believe that the answers to these questions will be determined by the politicians, regulators, scientists, and citizens of high-income countries like the United States or the European Union. Top-level biosciences research is happening all over the world, very much including China, India, Brazil, and other locations. A revolution in biosciences is coming, whether or not the U.S. decides to let its domestic researchers and domestic companies participate fully.

Monday, December 8, 2014

Milk Production: Economies of Scale, Agriculture, Management

I'm always on the lookout for real-world examples of economies of scale: that is, situations where expanding the scale of output leads to lower average costs of production. I offer a range of examples and thoughts on the subject here, and an example of economies of scale in financial asset management here. But a number of vivid examples of economies of scale come from the U.S. agricultural sector.

James MacDonald and Doris Newton offer an example in "Milk Production Continues Shifting to Large-Scale Farms," which appears in the December 1, 2014 issue of Amber Waves, published by the U.S. Department of Agriculture. They point out that the number of small dairy farms is falling, while the number of large farms is rising:

In 2012, there were still nearly 50,000 dairy farms with fewer than 100 cows, but that represented a large decline from 20 years earlier, when there were almost 135,000. Over the same period, the number of dairy farms with at least 1,000 cows more than tripled to 1,807 farms in 2012. Movements in farm numbers were mirrored by movements in cow inventories. Farms with fewer than 100 cows accounted for 49 percent of the country’s 9.7 million milk cows in 1992, but just 17 percent of the 9.2 million milk cows in 2012. Meanwhile, farms with at least 1,000 cows accounted for 49 percent of all cows in 2012, up from 10 percent in 1992.
If you graph the underlying data, the mean or average size of a dairy herd has more than doubled in the last 20 years, from 61 to 144. But the midpoint of the herd size--that is, the herd size where half of all cows are in a herd that is larger and half are in a herd that is smaller--has gone from 101 cows back in 1992 to 900 cows in 2012.
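The difference between the mean herd size and the "midpoint" can be sharp when a few big farms hold most of the cows. Here is a small sketch of the two calculations; the herd sizes are invented for illustration, not taken from the USDA data.

```python
# Mean herd size vs. the "midpoint" (the herd size of the middle cow:
# half of all cows are in larger herds, half in smaller ones).
# The herd sizes below are made up for illustration.

def mean_herd_size(herds):
    """Simple average across farms: each farm counts once."""
    return sum(herds) / len(herds)

def midpoint_herd_size(herds):
    """Cow-weighted median: each cow counts once, not each farm."""
    cows = sorted(size for size in herds for _ in range(size))  # one entry per cow
    return cows[len(cows) // 2]

herds = [50, 60, 70, 80, 2000]  # four small farms, one very large one
print(mean_herd_size(herds))      # modestly pulled up by the big farm
print(midpoint_herd_size(herds))  # most cows live on the big farm
```

With these invented numbers, the mean is 452 while the midpoint is 2,000, because the single large farm holds the great majority of the cows. That is the same logic by which the U.S. midpoint could climb from 101 to 900 even while tens of thousands of small farms remain.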




Perhaps unsurprisingly, the main driver behind this change is that larger dairy herds can produce milk at a lower average cost. Here's the pattern from MacDonald and Newton. They write: "While some small farms earn profits and some large farms incur losses, financial performance is linked to herd size. Most of the largest dairy farms generate gross returns that exceed full costs, while most small and mid-size dairy farms do not earn enough to cover full costs. Full costs include annualized costs of capital as well as the cost of unpaid family labor (measured as what they could earn off the farm), in addition to cash operating expenses. ... In 2012, dairy farms with at least 2,000 cows incurred costs that were 16 percent lower, on average, than farms with 1,000-1,999 cows, a difference that could provide a spur to further structural change to even larger farms. In 1992, there were just 31 farms with 3,000 or more milk cows; by 2012, there were 440, and many of them had 5,000 or more cows."




The economies of scale in dairy farms are just one example of larger scale in U.S. agriculture as a whole. Daniel A. Sumner explores this topic in "American Farms Keep Growing: Size, Production, and Policy," in the Winter 2014 issue of the Journal of Economic Perspectives. (Full disclosure: I've been Managing Editor of the JEP since the first issue in 1987. All JEP articles back to the first issue are freely available online courtesy of the American Economic Association.)

Sumner offers this image. The horizontal axis shows the proportion of farms, where you can think of farms as ranked by size from smallest in sales to largest. The vertical axis shows the proportion of agricultural output. Thus, the bottom 60% of farms represent about 5% of all agricultural revenue. The line for 1987 is highest, for 1997 is lower, and for 2007 is lowest, which shows that smaller farms are producing a lower share of total output over time.
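The curve Sumner plots is a concentration (Lorenz-style) curve: rank farms from smallest to largest by sales, then track the cumulative share of output produced by the bottom x% of farms. A minimal sketch of that calculation, with sales figures invented for illustration:

```python
# Concentration curve: cumulative output share of the smallest x% of farms.
# The sales numbers below are invented for illustration.

def cumulative_output_share(sales):
    """Return (farm share, output share) points, farms ranked smallest first."""
    ranked = sorted(sales)
    total = sum(ranked)
    points, running = [], 0.0
    for i, s in enumerate(ranked, start=1):
        running += s
        points.append((i / len(ranked), running / total))
    return points

sales = [1, 1, 2, 2, 4, 10, 80]  # a few large farms dominate output
for farm_share, output_share in cumulative_output_share(sales):
    print(f"bottom {farm_share:.0%} of farms -> {output_share:.1%} of output")
```

In this toy example the bottom five of seven farms produce only 10% of output, mirroring the real pattern Sumner reports, where the bottom 60% of farms account for about 5% of revenue. A curve that sags lower over time, as the 2007 line does relative to 1987, means output is concentrating in the largest farms.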


Sumner argues that farm subsidies can't explain this growing concentration, nor can contractual relationships in the agricultural chain of production. The size of farm operations is growing in agricultural sectors both with and without farm subsidies, and both with and without a high dependence on contractual relationships. He argues that the most likely factor driving economies of scale in agriculture is the interrelationship between technological developments and good management: that is, being the kind of strong manager who can take advantage of new technology has been a continual and even a growing advantage in U.S. agriculture. Sumner concludes along these lines (citations omitted):

The size of commercial farms is sometimes best-measured by sales, in other cases by acreage, and in still other cases by quantity produced of specific commodities, but for many commodities, size has doubled and doubled again in a generation. That does not mean that typical commercial farm operations are becoming large by any nonfarm corporate standard, or that there is any near-term prospect that these large firms will be able to exercise market power. For example, even as the typical herd size of dairy farms rises from 500 cows to 1,000 to 2,000, there will remain thousands of commercial farms operating the national milk cow herd of eight or nine million cows. The few dairy farms with 10,000 cows are located in several units in distinct locations and remain a small share of the relevant national and international market into which they deliver. ...

In some industries, such as intensive animal feeding, farms are often operated as franchises in which farms are connected closely with larger processing and marketing firms through contractual relationships. Many commodity industries have traditionally used contractual relationships between farms and processors or marketers to coordinate timing of shipments and commodity characteristics. For example, the processing tomato industry links growers and processors in annually negotiated contracts, and wineries work closely with contracted grape growers, often providing long-term guarantees to encourage vineyard development. Growth in farm size in these industries has occurred at roughly the same pace as for commodity industries with fewer contractual relationships. Economists do not yet have a good understanding of the relationships between contractual relationships, farm size patterns, and productivity, and this remains an area of active research.
Changes in farm size distributions and growth of farms seems closely related to technological innovations, managerial capability, and productivity. Opportunities for competitive returns from investing financial and human capital in farming hinge on applying managerial capability to an operation large enough to provide sufficient payoff. Farms with better managers grow, and these managers take better advantage of innovations in technology, which themselves require more technical and managerial sophistication. Farms now routinely use outside consultants for technological services such as animal health and nutrition, calibration and timing of fertilizers and pesticides, and accounting. The result is higher productivity, especially in reducing labor and land per unit of output. Under this scenario, agricultural research leads to technology that pays off most to more-capable managers who operate larger farms that have lower costs and higher productivity. The result is reinforcing productivity improvements.
Subsidy programs seem to be relatively unimportant in the evolution of farming in the United States. Farm sizes are growing, numbers of commercial farms are falling, and farm operations are transforming industries with and without commodity subsidies. In specific instances and for specific commodities, farm programs have affected the patterns of farm size and growth. 


Sunday, December 7, 2014

MONIAC in Action!

It is part of the lore of economics that Alban William Housego Phillips, better known as Bill Phillips, and still better known as the originator of the Phillips curve--which posits a tradeoff between unemployment and inflation--started his career by building a hydraulic model of the economy called the MONIAC.

MONIAC stood for Monetary National Income Analogue Computer, a bit of wordplay on ENIAC, the Electronic Numerical Integrator and Computer, which had been announced in 1946 as the first general-purpose electronic computer. The MONIAC is a physical model of the economy in which flows of consumption, saving and investment, taxes and government spending, imports and exports, and other economic forces are represented by liquid moving through tubes and pipes. You can tinker with different elements of the economy and see what effects they have.

What I had not known until running across this article by Klint Finley in the most recent issue of Wired magazine is that a Cambridge engineering professor, Allan McRobie, has restored a MONIAC. Moreover, McRobie offers a lively 45-minute demonstration of the MONIAC at work on video here. As he says at the start: "It is a fabulous pleasure to demonstrate this. It is a thing of wonder and joy, and I would give this talk to an empty room. It is a brilliant machine, and a privilege for me to work with it." If you have similar feelings about economics and economic models, you are likely to have similar feelings about his talk.

For more on the MONIAC, as well as how Irving Fisher also built a hydraulic model of an economy as part of his doctoral dissertation back in 1891, you can start with my post from a couple of years ago (November 12, 2012) on "Hydraulic Models of the Economy: Phillips, Fisher, Financial Plumbing." As I wrote there, after discussing the Phillips and Fisher hydraulic models:

The idea of a hydraulic computer seems anachronistic in these days of electronic computation, but I can imagine that as an illustrative teaching tool, watching flows of liquid rebalance might be at least as useful as looking at a professor sketching a supply and demand diagram. In addition, the notion of the economy as a hydraulic set of forces still has considerable rhetorical power. We talk about "liquidity" and "bubbles." The Federal Reserve publishes "Flow of Funds" accounts for the U.S. economy. When economists talk about the financial crisis of 2008 and 2009, they sometimes talk in terms of financial "plumbing." ... I find myself wondering what a hydraulic model of an economy would look like if it also included bubbles, runs on financial institutions, credit crunches--along with tubes that could break. Sounds messy, and potentially quite interesting.







Friday, December 5, 2014

Women, Mathematical Skills, Academia

Focus on the so-called STEM departments in academia: that is, science, technology, engineering, and mathematics. There is a fairly clear pattern that women are less well-represented in the academic departments that rely on higher mathematical skills. The harder question is explaining this phenomenon. Stephen J. Ceci, Donna K. Ginther, Shulamit Kahn, and Wendy M. Williams address these issues in their article "Women in Academic Science: A Changing Landscape," which appears in the December 2014 issue of Psychological Science in the Public Interest. Here's a taste of their conclusions:

We conclude by suggesting that although in the past, gender discrimination was an important cause of women’s underrepresentation in scientific academic careers, this claim has continued to be invoked after it has ceased being a valid cause of women’s underrepresentation in math-intensive fields. Consequently, current barriers to women’s full participation in mathematically intensive academic science fields are rooted in pre-college factors and the subsequent likelihood of majoring in these fields, and future research should focus on these barriers rather than misdirecting attention toward historical barriers that no longer account for women’s underrepresentation in academic science.
Here's a figure that illustrates the starting point for Ceci, Ginther, Kahn, and Williams. Each of the points is a STEM department. The authors divide their analysis into what they call the LPS departments, which are the three in the upper left with more females among those who get a PhD and relatively lower math scores on the GRE exam, and what they call the GEEMP departments, which are the departments to the bottom right with a smaller share of females among those who get a PhD in that field and higher average math scores on the GRE exam. These sorts of differences in PhDs granted to females are reflected in large gaps in the number of female professors in these fields.


Ginther and Kahn are economists, while Ceci and Williams are psychologists. Thus, the paper combines economics-style analysis of career development patterns with psychology-style analysis of how people learn. For economists, a standard approach is to look at the "pipeline" for producing tenured professors. For example, you can look at how many females major in STEM subjects in college, and what proportion go on to a PhD, and then to various academic jobs. The idea is to see where there is "leakage" in the pipeline--and thus identify the barriers to women professors. When they carry out this analysis, the authors offer a (to me) surprising conclusion: the LPS fields have relatively substantial "leakage" when comparing how females and males move from undergrad majors to grad school and professorships. But in the GEEMP areas--which include economics--women and men in recent years proceed from undergrad degrees through grad school and into professorships at similar rates. After reviewing a range of evidence, they write:
Thus, the points of leakage from the STEM pipeline depend on the broad discipline being entered—LPS or GEEMP. By graduation from college, women are overrepresented in LPS majors but far underrepresented in GEEMP fields. In GEEMP fields, by 2011, there was very little difference in women’s and men’s likelihood to advance from a baccalaureate degree to a PhD and then, in turn, to advance to a tenure-track assistant professorship. ... [O]nce women are within  GEEMP fields, their progress resembles that of male GEEMP majors. In contrast, whereas far more women than men major in LPS fields, in 2011, the gender difference in the probability of advancing from an LPS baccalaureate degree to a PhD was not trivial, and the gap in the probability of advancing from PhD to assistant professorship was particularly large, with fewer women than men advancing.
The message is that the most substantial barriers to women in economics and other GEEMP fields arise before college. Why might this be so? One set of explanations focuses on the higher scores for boys at the top of the distribution on a wide range of math tests. The other set of explanations focuses on social expectations about interests and careers. Of course, these explanations become entangled, because acquiring skills is interrelated with social expectations.

With regard to higher math scores for boys, the paper reviews evidence on how in utero exposure to androgen hormones is greater for boys, and how certain math-related abilities (like 3D spatial processing) appear to be greater for boys at young ages. I'll skip past that evidence here because: i) as the authors note, it's far from definitive; ii) I lack any particular competence to evaluate this evidence, anyway. Instead, let me stick to several points that seem well established.

In terms of the basic data from math scores themselves, it used to be true that math test scores for boys were higher than those for girls, but on average, high school girls have now caught up. The authors note: "However, by the beginning of the 21st century, girls had reached parity with boys—including on the hardest problems on the National Assessment of Educational Progress (NAEP) for high school students." It also seems true that at the top of the distribution of math test scores, boys substantially outnumber girls: "Thus, a number of very-large-scale analyses converged on the conclusion that there are sizable sex differences at the right tail of the math distribution." One of many studies they discuss looked at the "Programme for International Student Assessment data set for the 33 countries that provided data in all waves from 2000 to 2009. They, too, found large sex differences at the right tail: 1.7:1 to 1.9:1 favoring males at the top 5% and 2.3:1 to 2.7:1 favoring males at the top 1%."
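One standard statistical point behind right-tail ratios like these is that even with equal means, a modestly wider distribution for one group produces large imbalances far out in the tail. The sketch below illustrates that mechanism with normal distributions; the standard deviations are assumed purely for illustration and are not fitted to any of the test data discussed above.

```python
# Illustration only: how a small difference in spread (equal means) inflates
# male:female ratios far out in the right tail of a normal distribution.
# The standard deviations 1.05 and 0.95 are assumptions, not estimates.

import math

def tail_prob(threshold, mean, sd):
    """P(X > threshold) for a normal distribution with given mean and sd."""
    return 0.5 * math.erfc((threshold - mean) / (sd * math.sqrt(2)))

sd_m, sd_f = 1.05, 0.95  # assumed: slightly wider male distribution

for cutoff in (1.64, 2.33):  # roughly the top 5% and top 1% of a standard normal
    ratio = tail_prob(cutoff, 0.0, sd_m) / tail_prob(cutoff, 0.0, sd_f)
    print(f"cutoff {cutoff}: tail ratio = {ratio:.2f}")
```

The farther out the cutoff, the larger the ratio grows, which is why groups can be at parity on average while still differing substantially in the top 1%. This is a property of the arithmetic, not an explanation of why the distributions differ; as the next paragraph notes, the gaps themselves vary by cohort, nation, and test.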

There is an ongoing nature-vs.-nurture argument about how to interpret these higher math scores at the top. Not only have gender differences in math scores changed over time, but they also "vary by cohort, nation, within-national ethnic groups, and the form of test used. ... Moreover, mathematics is heterogeneous, comprising many different cognitive skills ..." At a minimum, these patterns suggest that gender gaps in test scores are quite sensitive to environmental factors. For example, in Iceland, Singapore, and Indonesia, more girls than boys scored at the top 1% of math tests at certain ages.

Some of the evidence the authors cite on the importance of social environment in affecting math scores comes from a Spring 2010 symposium in the Journal of Economic Perspectives on "Tests and Gender." (Full disclosure: I've been Managing Editor of JEP since its first issue in 1987. All JEP articles back to the first issue are freely available on-line at the journal's website.)

For example, in that issue of JEP, Devin G. Pope and Justin R. Sydnor look at "Geographic Variation in the Gender Differences in Test Scores" across U.S. states and regions. Here's an illustrative finding based on scores from 8th graders on the National Assessment of Educational Progress (NAEP). The vertical axis shows that in every region, the female-male ratio in the top 5% of reading scores is greater than 2, almost reaching 3 in the Mountain states. The horizontal axis shows the male-female ratio in the top 5% of math and science scores, which ranges from 1.3 in the New England states to 2.2 in the Middle Atlantic states. This finding confirms a difference in math test scores at the extreme. It also strongly suggests that such differences are strongly affected by where you live--and thus are strongly linked to social expectations.


In another paper in the 2010 JEP symposium, Glenn Ellison and Ashley Swanson look at "The Gender Gap in Secondary School Mathematics at High Achievement Levels: Evidence from the American Mathematics Competitions." In a striking finding, they note that most U.S. high school girls who participate in international math competitions come from a very small pool of about 20 high schools. This finding strongly suggests that many other girls, if they were in a different academic setting, would demonstrate high-end math skills. Ellison and Swanson write:
[W]e examine extreme high-achieving students chosen to represent their countries in international competitions. Here, our most striking finding is that the highest-scoring boys and the highest-scoring girls in the United States appear to be drawn from very different pools. Whereas the boys come from a variety of backgrounds, the top-scoring girls are almost exclusively drawn from a remarkably small set of super-elite schools: as many girls come from the 20 schools that generally do best on these contests as from all other high schools in the United States combined. This suggests that almost all American girls with extreme mathematical ability are not developing their mathematical talents to the degree necessary to reach the extreme top percentiles of these contests.
Finally, there is intriguing evidence that a number of women with equivalent math skills may not perform as well in the context of competitive and high-stakes math testing. In the 2010 JEP symposium, Muriel Niederle and Lise Vesterlund look at a range of evidence on "Explaining the Gender Gap in Math Test Scores: The Role of Competition." I was especially struck by this study:
They examine the performance of women and men in an entry exam to a very selective French business school (HEC) to determine whether the observed gender differences in test scores reflect differential responses to competitive environments rather than differences in skills. The entry exam is very competitive: only about 13 percent of candidates are accepted. Comparing scores from this exam reveals that the performance distribution for males has a higher mean and fatter tails than that for females. This gender gap in performance is then compared both to the outcome of the national high school graduation exam, and for admitted students, to their performance in the first year. While both of these performances are measured in stressful environments, they are much less competitive than the entry exam. The performance of women is found to dominate that of men, both on the high school exam and during the first year at the business school. Of particular interest is that females from the same cohort of candidates performed significantly better than males on the national high school graduation exam two years prior to sitting for the admission exam. Furthermore, among those admitted to the program they find that within the first year of the M.Sc. program, females outperform males.
A possible reason here is a well-known phenomenon called "stereotype threat"--that is, if reminded of a negative stereotype about a group to which you belong before a test, people often perform worse. Here's one study that Ceci, Ginther, Kahn, and Williams cite along these lines: "For example, female test takers who marked the gender box after completing the SAT Advanced Calculus test scored higher than female peers who checked the gender box before starting the test, and this seemingly inconsequential order effect has been estimated to result in as many as 4,700 extra females being eligible to start college with advanced credit for calculus had they not been asked to think about their gender before completing the test ..."

To recap the argument to this point, the basic question is why women are underrepresented in academic disciplines in certain STEM fields where math scores are higher. For current students, the main underlying reasons seem to trace back to the choices that college students make about undergraduate majors. In turn, a possible explanation is that more males than females get high scores on pre-college math tests. In turn, a substantial part of this difference seems to trace to social expectations about gender and math, and about gender and test-taking. If more women felt more positive about math before reaching college, then female majors in GEEMP areas would presumably tend to rise.

But there is also a different set of arguments about why fewer women sign up for the GEEMP disciplines as undergraduates, which suggests that the whole issue of math test scores may be a distraction. For example, it's not clear how much the gender difference in math scores at the extreme top end should matter for academia. As the authors point out, the typical GRE math scores for those in the math-oriented GEEMP fields were at about the 75th percentile--not the top 1%. Another intriguing fact is that women have been receiving 40-45% of math PhDs for the last few decades. This alternative view focuses less on math skills and more on perceptions about self and occupation. The Ceci, Ginther, Kahn, and Williams team points out (some citations omitted):

Psychologists have charted large sex differences in occupational interests, with women preferring so-called “people-oriented” (or “organic,” or natural science) fields and men preferring “things” (people- and thing-oriented individuals are also termed “empathizers” and “systematizers,” respectively). This people-versus-things construct ... is one of the salient dimensions running through vocational interests; it also represents a difference of 1 standard deviation between men and women in vocational interests. Lippa has repeatedly documented very large sex differences in occupational interests, including in transnational surveys, with men more interested in “thing”-oriented activities and occupations, such as engineering and mechanics, and women more interested in people-oriented occupations, such as nursing, counseling, and elementary school teaching. And in a very extensive meta-analysis of over half a million people, Su, Rounds, and Armstrong (2009) reported a sex difference on this dimension of a full standard deviation.
In other words, the reason that fewer women choose the GEEMP disciplines as undergraduates--and thus the reason that women are underrepresented as faculty in those areas--may be less related to math skills and more related to this distinction between people-oriented and thing-oriented.

In the context of economics, it seems to me true, and also deeply frustrating, that this distinction does capture something about how the field is perceived. Economics is the stuff of life: full of choices that people make about work, consumption, saving, parenthood, and crime, as well as about the structure and decisions of organizations like firms and government that affect people's daily lives in profound ways. But the perception that many students have of economics, which is sometimes unfortunately confirmed by how the subject is taught, can lose track of the people, instead viewing the economy as a thing.

Thursday, December 4, 2014

How Did Germany Limit Unemployment in the Recession?

Here's a puzzle: During the Great Recession, the total contraction in economic output was noticeably larger in Germany than in the United States, but the rise in the unemployment rate was noticeably higher in the United States than in Germany. How did Germany manage it? Shigeru Fujita and Hermann Gartner offer "A Closer Look at the German Labor Market ‘Miracle’" in the most recent issue of the Business Review published by the Federal Reserve Bank of Philadelphia (Q4 2014, pp. 16-24).

Let's start by stating the puzzle clearly. The top figure shows the change in unemployment rates for the U.S. and Germany during the recession. The bottom figure shows the fall in real output in each economy.


The authors consider two main alternative explanations for this puzzle, and at least from a U.S. perspective, they come from different ends of the political spectrum. One possible set of explanations is that German unemployment stayed relatively low because of government programs, like the short-time work program that helps firms adjust to shorter hours without firing employees. The other possible set of explanations is that German unemployment stayed relatively low because of earlier labor market reforms that reduced unemployment benefits and kept wages and benefits lower and more flexible, which in turn encouraged a growth of jobs. Fujita and Gartner argue that the second set of explanations is more plausible. 

Germany does have several government programs that encourage firms to reduce hours when business slows down, rather than firing employees. But Fujita and Gartner argue that these programs have existed in past recessions, and they didn't seem to have any particularly large effect in the most recent recession. They write: 

One is the short-time work program. When employees’ hours are reduced, the participating firm pays wages only for those reduced hours, while the government pays the workers a “short-time allowance” that offsets 60 percent to 67 percent of the forgone earnings. Moreover, the firm’s social insurance contributions on behalf of employees in the program are lowered. In general, a firm can use this program for at most six months. At the beginning of 2009, though, when the slowdown of the economy became apparent, the German government encouraged the use of the program by expanding the maximum eligibility period first to 18 months and then to 24 months and by further reducing the social security contribution rate. The usual eligibility requirements were also relaxed. 
An important thing to remember here is that these special rules had also been applied in past recessions and thus were not so special after all. True, the share of workers in the program increased sharply in 2009, and thus it certainly helped reduce the impact of the Great Recession on German employment. But a more important observation is that even at its peak during the Great Recession, participation in the program was not extraordinary compared with the levels observed in past recessions. Moreover, in previous recessions, the German labor market had responded in a similar manner to the U.S. labor market. 
Another German program that some have credited with staving off high unemployment is the working-time account, which allows employers to increase working hours beyond the standard workweek without immediately paying overtime. Instead, those excess hours are recorded in the working-time account as a surplus. When employers face the need to cut employees’ hours in the future, they can do so without reducing workers’ take-home pay by tapping the surplus account. German firms overall came into the recession with surpluses in these accounts. Thus, qualitatively speaking, this program certainly reduced the need for layoffs. However, less than half of German workers had such an account, and most working-time accounts need to be paid out within a relatively short period — usually within a year or less. According to Michael Burda and Jennifer Hunt, the working-time account program reduced hours per worker by 0.5 percent in 2008-09, accounting for 17 percent of the total decline in hours per worker in that period.
To understand the allure of the alternative explanation, consider this graph showing the German employment rate in recent decades. Notice that after around 2003, German employment starts steadily rising, and that trend shows only a hiccup during the Great Recession. 


What caused German employment to start rising around 2003? 

We argue that the underlying upward trend was made possible by labor market policies called the Hartz reforms, implemented in 2003-05. ... The Hartz reforms are regarded as one of the most important social reforms in modern Germany. The most important change was in the unemployment benefit system. Before the reforms, when workers became jobless, they were eligible to receive benefits equal to 60 percent to 67 percent of their previous wages for 12 to 32 months, depending on their age. When these benefits ended, unemployed workers were eligible to receive 53 percent to 57 percent of their previous wages for an unlimited period. Starting in 2005, the entitlement period was reduced to 12 months (or 18 months for those over age 54), after which recipients could receive only subsistence payments that depended on their other assets or income sources. Moreover, unemployed workers who refused reasonable job offers faced greater and more frequent sanctions such as cuts in benefits. To further lower labor costs and spur job creation, the size of firms whose employees are covered by unemployment insurance was raised from five to 10 workers. Also, regulation of temporary contract workers was relaxed. Furthermore, starting in 2004, the German Federal Employment Agency and the local employment agencies were reorganized with a stronger focus on returning the unemployed to work and by, for example, outsourcing job placement services to the private sector.

An earlier post from February 14, 2014,  "A German Employment Miracle Narrative," argues that the flexibility of German wages and labor market institutions starting in the mid-1990s started the rise in German employment. In this story, the Hartz reforms take on less importance, but the emphasis of the story is still on greater flexibility in markets, not government programs for sharing hours. Fujita and Gartner make a similar point: "In other words, in the boom leading up to the Great Recession, wage growth was much more muted than during previous booms, and thus this wage moderation was an important factor in creating the upward trend in employment."

A final point from Fujita and Gartner is that the comparison between the U.S. and Germany isn't apples-to-apples, because the underlying causes of the recessions were different. Germany didn't have a housing bubble; instead, it had an export bust. Both the kind of financial crisis that emerges and the incentives for laying off workers may be rather different in these two kinds of recessions. They write: 

The recession in Germany was brought about by a different shock than that which triggered the recession in the U.S. The U.S. economy suffered a decline in domestic demand as the plunge in home values reduced households’ net wealth, whereas Germany had experienced no housing bubble. Instead, the decline in German output was driven by a short-term plunge in world trade. Whether a recession is expected to be short or long-lasting is an important factor in firms’ hiring and firing decisions. If a firm expects a downturn to last only a short period, it may well choose not to cut its work force, even though it faces lower demand, especially if laying off and hiring workers is costly, as it is in Germany. Consistent with this possibility, Burda and Hunt point out anecdotal evidence that, especially by 2009, German firms were reluctant to lay off their workers because of the difficulty in finding suitable replacements.
Of course, the argument that German unemployment didn't rise as much because of reductions in unemployment benefits, low wage growth, and flexible labor markets doesn't prove that German innovations like the short-time allowance or working-time accounts are a bad idea. They may still be moderately helpful. But it doesn't look like they are the main explanation for Germany's success in limiting the rise in unemployment during and after the recession.



Wednesday, December 3, 2014

A Global Health Care Spending Slowdown: Temporary or Permanent?

The growth in health care spending has been putting pressure on government budgets all over the world, and in the U.S., it puts pressure on household budgets, too. However, the rise in U.S. healthcare costs has slowed in the last few years, which has led to a dispute. Is the slowdown in health care spending mainly a reflection of the Great Recession and the sluggish economic growth that has followed? Or does it represent the start of a potentially long-run slowdown in rising health care costs? Partisans of the Obama administration, such as the White House Council of Economic Advisers, like to argue that the Patient Protection and Affordable Care Act of 2010 may be an important part of slowing down health care costs, too.

As I've argued in the past (here and here), U.S. health care spending seemed to slow down in the mid-2000s, well before any cost-constraining measures of the 2010 legislation could take effect. In addition, the slowdown in health care costs has been international, which suggests that changes in U.S. law are not the driving factor. In the December 2014 issue of Finance & Development, Benedict Clements, Sanjeev Gupta, and Baoping Shang offer more explanation on the international dimensions of health care costs in high-income countries in their article, "Bill of Health."

Here is their figure showing the pattern of health care spending across OECD countries in the last few decades. Notice that there are several times when it looks as if health care costs are slowing for a few years, before they start rising again. In discussing the recent slowdown in the rise in health care costs, they note: "The slowdown in growth for all types of spending in nearly all advanced economies—and at about the same time—suggests that it was driven by a common factor. The common element appears to be the global financial crisis, which affected economic activity and governments’ capacity to finance continued health care spending growth."

Moreover, the authors point out that the countries where the global recession hit hardest, circled in red, have had the biggest slowdown in health care costs, while the countries where the recession was weakest, circled in green, have had less of a slowdown in health care costs.

Of course, it's theoretically possible that health care spending in almost every other country was slowing down because of the recession, and short-run cuts in government health care spending, while health care spending in the U.S. was slowing down for long-run reasons driven by the 2010 legislation. But it's pretty unlikely. Indeed, when the authors project how much public health spending will rise as a percent of GDP in the next 15 years, they project that the rise will be largest of all in the U.S.--thus squeezing government budgets even further. Their predictions separate out the aging of the population, in blue, from the rest of the increase in health care spending, in green. 


Monday, December 1, 2014

Automation and Job Loss: The Fears of 1964

A half-century ago, there was deep and widespread concern that automation and new technology were leading to chronically high levels of unemployment. In retrospect, we know the fear at that time was unfounded. But it is nonetheless fruitful to review the controversy.

To set the stage, the U.S. economy suffered 10 months of recession from April 1960 to February 1961. The unemployment rate rose from 5.0% in June 1959 to 7.1% by May 1961. A widespread fear was that the job losses were due to the arrival of automation and electronic technology. For example, here are some excerpts from a TIME magazine article on February 24, 1961, "The Automation Jobless."
The rise in unemployment has raised some new alarms around an old scare word: automation. How much has the rapid spread of technological change contributed to the current high of 5,400,000 out of work? ... While no one has yet sorted out the jobs lost because of the overall drop in business from those lost through automation and other technological changes, many a labor expert tends to put much of the blame on automation. ... Dr. Russell Ackoff, a Case Institute expert on business problems, feels that automation is reaching into so many fields so fast that it has become "the nation's second most important problem." (First: peace.)
The number of jobs lost to more efficient machines is only part of the problem. What worries many job experts more is that automation may prevent the economy from creating enough new jobs. ... Throughout industry, the trend has been to bigger production with a smaller work force. ... Many of the losses in factory jobs have been countered by an increase in the service industries or in office jobs. But automation is beginning to move in and eliminate office jobs too. ... In the past, new industries hired far more people than those they put out of business. But this is not true of many of today's new industries. ... Today's new industries have comparatively few jobs for the unskilled or semiskilled, just the class of workers whose jobs are being eliminated by automation.
Thus, President John F. Kennedy--who probably edged out Richard Nixon in the 1960 presidential race in substantial part due to the seemingly dicey state of the economy at the time--delivered a speech to a joint session of Congress on May 25, 1961. The speech has become best-known for Kennedy's call to put a man on the moon. But that was part IX of the speech. Much earlier in section II, Kennedy stated:
I am therefore transmitting to the Congress a new Manpower and Training Development program to train or retrain several hundred thousand workers particularly in those areas where we have seen chronic unemployment as a result of technological factors and new occupational skills over a four-year period, in order to replace those skills made obsolete by automation and industrial change with the new skills which the new processes demand. 
The U.S. unemployment rate had declined back to the range of 5.0% by August 1964, but concerns over how the U.S. economy might adapt to technology and automation remained serious enough that President Lyndon Johnson signed legislation creating a National Commission on Technology, Automation, and Economic Progress. The Commission eventually released its report in February 1966, when the unemployment rate had fallen to 3.8%.

Before reviewing the tone and findings of the Commission, I'll just note that when I run into people who are concerned that technology is about to decimate U.S. jobs, I sometimes bring up the 1964 report. The usual response is to dismiss the 1964 experience very quickly, on the grounds that the current combination of information and communications technology, along with advances in robotics, represents a totally different situation than in 1964. It's of course true that modern technologies differ from those of a half-century ago, but that isn't the issue. The issue is how an economy and a workforce make a transition when new technologies arrive. It is a fact that technological shocks have been happening for decades, and that the U.S. economy has been adapting to them. The adaptations have not involved a steadily rising upward trend of unemployment over the decades, but they have involved the dislocations of industries falling and rising in different locations, and a continual pressure for workers to have higher skill levels.

It is of course theoretically possible that the technological changes of our own time will be profoundly different from anything that has come before. There is of course no way of proving that something in the future either will or will not be completely different than what has come before, but I am highly wary of such claims. After all, history also reminds us that claims about how the present moment is utterly unique can sound plausible at the time, but look less plausible even just a few years or a decade later. What strikes me in looking back at the 1966 report is how much the description of the problem sounds quite modern, but how the policy recommendations sound fairly extreme by contemporary standards.

For a sample, here's an overall perspective on technology and jobs from Chapter Two of the 1966 Commission report:
We believe that the general level of unemployment must be distinguished from the displacement of particular workers at particular times and places, if the relation between technological change and unemployment is to be clearly understood. The persistence of a high general level of unemployment in the years following the Korean war was not the result of accelerated technological progress. Its cause was interaction between rising productivity, labor force growth, and an inadequate response of aggregate demand. This is firmly supported by the response of the economy to the expansionary fiscal policy of the last 5 years. Technological change, on the other hand, has been a major factor in the displacement and temporary unemployment of particular workers. Thus technological change (along with other forms of economic change) is an important determinant of the precise places, industries, and people affected by unemployment. But the general level of demand for goods and services is by far the most important factor determining how many are affected, how long they stay unemployed, and how hard it is for new entrants to the labor market to find jobs. The basic fact is that technology eliminates jobs, not work. It is the continuous obligation of economic policy to match increases in productive potential with increases in purchasing power and demand. Otherwise the potential created by technical progress runs to waste in idle capacity, unemployment, and deprivation.

My guess is that a lot of contemporary economists could still sign on to most of this sentiment, a half-century later, although there would be squabbling on a few points.  For example, economic discussions in the early 1960s put a heavy emphasis on Keynesian-style stimulation of aggregate demand, and at least some modern economists would put more emphasis on supply-side growth and adjustment problems. The focus here is primarily on job loss and unemployment, while a modern economist would be likely to focus at least as much on issues about rising inequality. And of course, the claim that "The basic fact is that technology eliminates jobs, not work" proved true for the 1960s, but there is controversy over whether it will continue to be true.

The 1966 Commission report offers a long list of recommendations, and I found it interesting to consider how many of the topics are still very much with us 50 years later.  It's worth remembering that this is a Commission appointed by a Democratic President at the heart of what came to be called Johnson's "Great Society" wave of legislation. That said, here's a sampling of the recommendations:

"We recommend a program of public service employment, providing, in effect, that the Government be an employer of last resort, providing work for "hard-core unemployed" in useful community enterprises."
"We recommend that economic security be guaranteed by a floor under family income. That floor should include both improvements in wage-related benefits and a broader system of income maintenance for those families unable to provide for themselves."
"We recommend compensatory education for those from disadvantaged environments, improvements in the general quality of education, universal high school education and opportunity for 14 years of free public education, elimination of financial obstacles to higher education, lifetime opportunities for education, training, and retraining ..." 
"We recommend the creation of a national computerized job-man matching system which would provide more adequate information on employment opportunities and available workers on a local, regional, and national scale. In addition to speeding job search, such a service would provide better information for vocational choice ..." 
"We recommend that present experimentation with relocation assistance to workers and their families stranded in declining areas be developed into a permanent program."
"We recommend ... regional technical institutes to serve as centers for disseminating scientific and technical knowledge relevant to the region's development ..." 
There's more, including discussion of how to encourage the use of technology to address health and environmental concerns, to improve workplace conditions, and to make government work better. Much of this list is more about overall goals ("improvements in the general quality of education") than about details of how public policy might address these goals. But viewed as a list of areas for concern, this list of priorities for helping a modern workforce adjust over time to changes in technology seems quite relevant today, a half-century later. The notion that this list still seems so relevant a half century later is in part, no doubt, because the underlying issues are hard ones. But it also seems a depressing commentary on some central inadequacies of public policy in the last half-century, and a grim commentary on the irrelevance of much of what passed for public debate in the 2014 election season.