Wednesday, August 24, 2016

How Much Slack is Left in US Labor Markets?

When the US monthly unemployment rate topped out at 10% back in October 2009, it was obvious that the labor market had a lot of "slack"--an economic term for underused resources. But the unemployment rate has been 5.5% or below since February 2015, and 5.0% or below since October 2015. At this point, how much labor market slack remains? The Congressional Budget Office offers some insights in its report, An Update to the Budget and Economic Outlook: 2016 to 2026 (August 23, 2016).

I'll offer a look at four measures of labor market slack mentioned by CBO: the "employment shortfall," hourly labor compensation, rates at which workers are being hired or are quitting jobs, and hours worked per week. The bottom line is that a little slack remains in the US labor market, but not much.

From the CBO report: "The employment shortfall, CBO’s primary measure of slack in the labor market, is the difference between actual employment and the agency’s estimate of potential (maximum sustainable) employment. Potential employment is what would exist if the unemployment rate equaled its natural rate—that is, the rate that arises from all sources except fluctuations in aggregate demand for goods and services—and if the labor force participation rate equaled its potential rate. Consequently, the employment shortfall has two components: an unemployment
component and a participation component. The unemployment component is the difference between the number of jobless people seeking work at the current rate of unemployment and the number who would be jobless at the natural rate of unemployment. The participation component is the difference between the number of people in the current labor force and the number who would be in the labor force at the potential labor force participation rate. CBO estimates that the employment shortfall was about 1.4 million people in the second quarter of 2016; nearly the entire shortfall (about 1.3 million people) stemmed from a depressed labor force participation rate."
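The decomposition in the passage above is simple arithmetic, and a short sketch may make it concrete. All of the input numbers below are hypothetical round numbers, not CBO's actual data; they are chosen only so that the totals land near the headline figures quoted above (a shortfall of about 1.4 million, nearly all of it from participation).

```python
# Illustrative sketch of CBO's employment-shortfall decomposition.
# Every input here is a hypothetical round number, not CBO data;
# only the structure of the calculation follows the report's definitions.

civilian_population = 253.0e6  # noninstitutional population 16+ (hypothetical)
actual_lfpr = 0.628            # actual labor force participation rate (hypothetical)
potential_lfpr = 0.633         # potential participation rate (hypothetical)
actual_urate = 0.049           # actual unemployment rate (hypothetical)
natural_urate = 0.048          # natural rate of unemployment (hypothetical)

actual_lf = civilian_population * actual_lfpr
potential_lf = civilian_population * potential_lfpr

# Participation component: people missing from the labor force itself.
participation_component = potential_lf - actual_lf

# Unemployment component: extra jobless people within the actual labor force,
# relative to how many would be jobless at the natural rate.
unemployment_component = actual_lf * (actual_urate - natural_urate)

employment_shortfall = participation_component + unemployment_component

print(f"participation component: {participation_component / 1e6:.2f} million")
print(f"unemployment component:  {unemployment_component / 1e6:.2f} million")
print(f"employment shortfall:    {employment_shortfall / 1e6:.2f} million")
```

With these made-up inputs, the participation component comes to roughly 1.3 million and the unemployment component to under 0.2 million, which mirrors the shape of CBO's estimate: almost all of the remaining shortfall sits in depressed participation rather than in unemployment.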

Here's a figure from the CBO measuring the employment shortfall in millions of workers. During the recession, the blue lines showing unemployment made up most of the employment shortfall. Now, nearly all of the shortfall consists of workers who would be expected to be working but are "out of the labor force"--that is, not counted as unemployed because they have stopped looking for work.
To get a better sense of what's behind this figure, it's useful to see the overall patterns of the labor force participation rate (blue line in graph below) and the employment/population ratio (red line). The difference between the two is that the "labor force" as a concept includes both the employed and the unemployed. Thus, you can see that the employment/population ratio veers away from the labor force participation rate during periods of recession, and then the gap declines when the economy recovers and employment starts growing again.

Looking at the blue line in the figure, notice that the labor force participation rate peaked around 2000, and has been declining since then. As I've discussed here before, some of the reasons behind this pattern are that women were entering the (paid) workforce in substantial numbers from the 1970s through the 1990s, but that trend topped out around 2000. After that, various groups like young adults and low-skilled workers have seen their participation rates fall, and the aging of the US workforce tends to pull down labor force participation rates as well. Thus, the CBO is estimating what the overall trend of labor force participation should be, and saying that it hasn't yet rebounded back to the long-term trend. But you can also see, if you squint a bit, that the drop in labor force participation has leveled out in the recent data. Also, the employment/population ratio has been rising since about 2010.
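The relationship between the two rates above is just a matter of who gets counted in the numerator, and a tiny sketch shows why the gap between them is exactly the unemployed share of the population. The figures below are hypothetical round numbers for illustration, not official BLS data.

```python
# LFPR vs. employment/population ratio. The two rates share a denominator;
# they differ only in whether the unemployed are counted in the numerator.
# All figures are hypothetical round numbers, not BLS data.

population = 253.0e6  # civilian noninstitutional population 16+ (hypothetical)
employed = 151.5e6    # (hypothetical)
unemployed = 7.8e6    # jobless and actively looking for work (hypothetical)

labor_force = employed + unemployed  # the "labor force" includes both groups

lfpr = labor_force / population      # labor force participation rate
epop = employed / population         # employment/population ratio

# In a recession, employment falls but many of those workers still count as
# unemployed, so the E/POP line drops away from the LFPR line and the gap
# (the unemployed share of the population) widens.
print(f"LFPR: {lfpr:.1%}  E/POP: {epop:.1%}  gap: {lfpr - epop:.1%}")
```

The gap closes during a recovery not only when the unemployed find jobs, but also when discouraged workers leave the labor force entirely, which is exactly why the participation component of the shortfall matters.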

A second measure of labor market slack looks at compensation for workers (including wages and benefits). The argument here is that when labor market slack is low, employers must compete harder to hire and retain workers, so compensation tends to rise more quickly. This figure from the CBO report shows the change in compensation with actual data through the end of 2015, and projections after that. There does seem to be a little bump in hourly labor compensation toward the end of 2015 (see here for earlier discussion of this point), so as data for 2016 becomes available, the question will be whether that increase is sustained.

One more measure of labor market slack is the rate at which workers are being hired, which shows the liveliness of one part of the labor market, and the rate at which workers are quitting. The quit rate is revealing because when the economy is bad, workers are more likely to hang onto their existing jobs. Both hiring and quits have largely rebounded back to pre-recession levels, as shown by this figure from the August 2016 release of the Job Openings and Labor Turnover Survey conducted by the US Bureau of Labor Statistics.
Finally, average hours worked per week is also a common measure of labor market slack. The CBO report notes that this measure has mostly rebounded back to its pre-recession level. Here's a figure from the US Bureau of Labor Statistics showing the pattern.

All economic news has a good news/bad news quality, and the fall in labor market slack is no exception. The good news is obvious: unemployment rates are down and wages are showing at least some early signs of rising. It wasn't obvious, back during the worst of the Great Recession in 2008-2009, how quickly or how much the unemployment rate would decline. As one example of the uncertainty, the Federal Reserve announced in December 2012 that “this exceptionally low range for the federal funds rate will be appropriate at least as long as the unemployment rate remains above 6½ percent," along with some other conditions, to reassure markets that its policy interest rate would remain low. But then the unemployment rate fell beneath 6.5% in April 2014, and the Fed decided it wasn't yet ready to start raising interest rates, so it retracted its policy from less than 18 months earlier.

The corresponding bad news is that whatever you dislike about the labor market can't really be blamed on the Great Recession any more. So if you're worried about issues like a lack of jobs for low-wage labor, too many jobs paying at or near the minimum wage, not enough on-the-job training, not enough opportunities for longer-term careers, loss of jobs in sectors like manufacturing and construction, too much part-time work, or inequality of the wage distribution, then one can no longer argue that these issues will be addressed naturally as the economy recovers. After all, labor market slack has now already declined to very low levels.

Monday, August 22, 2016

Automation and Job Loss: Leontief in 1982

Wassily Leontief is not especially well-known at present by the general public, but he was one of the giants of 20th century economics. (He died in 1999.) When the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel (commonly known as "the Nobel prize in economics") first started to be given in 1969, there was a backlog of worthy winners, and those who won the prize in its first decade or so formed a particularly elite group. Leontief won in 1973 "for the development of the input-output method and for its application to important economic problems".

Thus, it was a big deal when Leontief wrote an essay for Scientific American in September 1982 arguing that new trends in mechanization and computing were displacing jobs. The title and subtitle give a good sense of his themes: "The Distribution of Work and Income: When workers are displaced by machines, the economy can suffer from a loss of their purchasing power. Historically the problem has been eased by shortening the work week, a trend currently at a standstill." (The archives of Scientific American from these years are not readily available on-line, as far as I can tell, but many libraries will have back issues on their shelves.) That special issue of Scientific American contained seven other essays about how American jobs were being lost to the "mechanization of work," with articles discussing how mechanization was reducing jobs in a wide range of industries: manufacturing, design and coordination of manufacturing, agriculture, mining, commerce (including finance, transport, distribution, and communications), and information-based office work.

Leontief's concern was of course not a new one in 1982. Indeed, his essay starts by hearkening back to the Luddite movement of the early 19th century in which hand-weavers banded together to destroy some of the machines that were automating the textile industry. I've posted before on this website about other episodes in which concerns about automation and job loss ran especially high: for example, here's a discussion of "Automation and Job Loss: Fears of 1964" (December 1, 2014) and "Automation and Job Loss: Fears of 1927" (March 16, 2016). Joel Mokyr, Chris Vickers, and Nicolas L. Ziebarth provide a long-term perspective on these issues in "The History of Technological Anxiety and the Future of Economic Growth: Is This Time Different?" which appeared in the Summer 2015 issue of the Journal of Economic Perspectives.

Of course, Leontief knew perfectly well that in the past, technology had been one of the main drivers of disruptions that over time raised the average standard of living. Why would the effects of new technologies be different? In terms that seem very similar to the concerns raised by some current writers, Leontief wrote in 1982:
There are signs today, however, that past experience cannot serve as a reliable guide for the future of technological change. With the advent of solid-state electronics, machines that have been displacing human muscle from the production of goods are being succeeded by machines that take over the functions of the human nervous system not only in production but in the service industries as well ... The relation between man and machine is being radically transformed. ... Computers are now taking on the jobs of white-collar workers, performing first simple and then increasingly complex mental tasks. Human labor from time immemorial played the role of principal factor of production. There are reasons to believe human labor will not retain this status in the future.
Re-reading Leontief's 1982 essay today, with the benefit of hindsight, I find myself struck by how he sometimes hits, and then sometimes misses or sideswipes, what I would view as the main issues of how technology can lead to dislocation and inequality.

For example, Leontief expresses a concern that "the U.S. economy has seen a chronic increase in unemployment from one oscillation of the business cycle to the next." Of course, he is writing in 1982, after the tumultuous economic movements of the 1970s. The US unemployment rate was above 10% from September 1982 (when his article was published) through June 1983. But since then, there have been multiple periods (late 1980s and early 1990s, the mid-1990s, and mid-2000s, and since February 2015), when the monthly unemployment rate has been 5.5% or lower. With the benefit of three decades of hindsight since Leontief's 1982 essay, the issue of technological disruption has not manifested itself in a steadily higher unemployment rate, but instead in dislocation for workers and in the ways technology contributes to inequality of wages.

If one presumes (for the sake of argument) a continued advance in technology that raises output, then the question is what form these gains will take. More leisure? If not more leisure, will the income gains be broadly or narrowly based?  

Leontief emphasizes that one of the gains of technology in broad historic terms was a shorter work week. For example, he notes the "reduction of the average work week in manufacturing from 67 hours in 1870 to somewhat less than 42 hours" by the mid-1940s, and points out that the work week did not continue to decline at the same pace after that. This notion that economic gains from technology will lead to a dramatically shorter work week is not new to Leontief: for example, John Maynard Keynes in his 1930 essay "Economic Possibilities for Our Grandchildren" (available a number of places on the web, like here and here) wrote about how technology was going to be so productive that it would move us toward a 15-hour work-week.

The hidden assumption behind this prediction for less working time seems to be that production will, either soon or in the not-too-distant future, become sufficient to cover everyone's desires, so that as technology continues to increase production beyond that level, the total hours worked can decline substantially. Back in 1848, the greatest economist of his time, John Stuart Mill, was already arguing that the richer countries had plenty of production, and what was needed was just a more equal distribution of that production. Indeed, if society was mostly content with the mixture of goods and services available to middle-class England in 1848, or to Keynes in 1930, or to Leontief in 1982, then the work-week could be a lot shorter. But I don't see a lot of Americans out there who would be willing to settle for, say, the information technology or health care technology or the housing or transportation from those earlier times.

If technology doesn't just make the same things more cheaply, but also makes new goods and services that people desire, then the gains from technology may not lead to dramatically shorter work weeks. Very little in Leontief's essay discusses how technology can produce brand-new industries and jobs, and how these new industries provide consumers with goods and services that they value.

Concerning the issue of how technology can lead to greater inequality of incomes, Leontief offers some useful and thought-provoking metaphors. For example, here's his Adam and Eve comparison:
"Adam and Eve enjoyed, before they were expelled from Paradise, a high standard of living without working. After their expulsion they and their successors were condemned to eke out a miserable existence, working from dawn to dusk. The history of technological progress over the past 200 years is essentially the story of the human species working its way slowly and steadily back into Paradise. What would happen, however, if we suddenly found ourselves in it? With all goods and services provided without work, no one would be gainfully employed. Being unemployed means
receiving no wages. As a result until appropriate new income policies were formulated to fit the changed technological conditions everyone would starve in Paradise."
As noted earlier, the evidence since 1982 doesn't support a claim of steadily higher unemployment rates. But it does support a concern of increasing inequality, where those who find themselves in a position to benefit most from technology will tend to gain. One need not be worried about "starving in Paradise" to be worried that the economy could be a Paradise for those receiving a greater share of income, but not for those on the outside of Paradise looking in.

Leontief also offers an interesting image about what it means to be a worker who can draw on a larger pool of capital, using an example of an Iowa farmer. He writes:
What I have in mind is a complex of social and economic measures to supplement by transfer from other income shares the income received by blue- and white-collar workers from the sale of their services on the labor market. A striking example of an income transfer of this kind attained automatically without government intervention is there to be studied in the long-run effects of the mechanization of agriculture on the mode of operation and the income of, say, a prosperous Iowa farm.
Half a century ago the farmer and the members of his family worked from early morning until late at night assisted by a team of horses, possibly a tractor and a standard set of simple agricultural implements. Their income consisted of what essentially amounted to wages for a 75- or 80-hour work week, supplemented by a small profit on their modest investment. Today the farm is fully mechanized and even has some sophisticated electronic equipment. The average work week is much shorter, and from time to time the family can take a real vacation. Their total wage income, if one computes
it at the going hourly rate for a much smaller number of manual-labor hours, is probably not much higher than it was 50 years ago and may even be lower. Their standard of living, however, is certainly much higher: the shrinkage of their wage income is more than fully offset by the income earned on their massive capital investment in the rapidly changing technology of agriculture.
The shift from the old income structure to the new one was smooth and practically painless. It involved no more than a simple bookkeeping transaction because now, as 50 years ago, both the wage income and the capital income are earned by the same family. The effect of technological progress on manufacturing and other nonagricultural sectors of the economy is essentially the same as it is on agriculture. So also should be its repercussions with respect to the shortening of the work day and the allocation of income.
Leontief here is eliding the fact that the share of American workers in agriculture was about 2-3% back in 1982, compared to 25-30% about 50 years earlier. He is discussing a smooth transfer of new technology for a single family, but with the rise in agricultural output for that family, something like 90% of their neighbors from 50 years earlier ended up transferring out of farming altogether. When Leontief and other modern writers talk about how modern technology is fundamentally more disruptive than earlier technology, I'm not sure I agree. The shift of the US economy to mechanized agriculture was an extraordinarily disruptive change.

But Leontief also has his finger on a central issue here, which is that jobs that find ways to use technology and investment as a complement are more likely to prosper. Along these lines, I'm intrigued by the notion that when workers use web-based connectivity and applications, they are accessing a remarkable global capital infrastructure that complements their work--even though the Internet isn't physically visible in my side yard like a combine harvester.

A final Leontief metaphor might be called the "horses don't vote" issue. In a short article written at about this same time for a newsletter called Bottom Line Personal (April 30, 1983, 4:8, pp. 1+), Leontief wrote:
People cannot eat much more than they already do. They cannot wear much more clothing. But they certainly can use more services, and they begin to purchase more of them. This natural shift comes simultaneously with the technological changes. But in the long run, the role of labor diminishes even in service industries. Look at banking, where more and more is done electronically and automatically, and at secretarial areas, where staff work is being replaced by word processors.
The problem becomes: What happens to the displaced labor? In the last century, there was an analogous problem with horses. They became unnecessary with the advent of tractors, automobiles and trucks. And a farmer couldn't keep his horses and postpone the change to tractors by feeding them less oats. So he got rid of the horses and used the more productive tractor. After all, this doesn't precipitate a political problem, since horses don't vote. But it is more difficult to find a solution when you have the same problem with people. You do not need them as much as before. You can produce without them.
So the problem becomes the task of reevaluating the role of human labor in production as it becomes less important. It is a simple fact that fewer people will be needed, yet more goods and services can be produced. But the machinery and technology will not benefit everyone equally. We must ask: Who will get the benefit? How will the income be distributed? We are accustomed to rewarding people for work based on market mechanisms, but we can no longer rely on the market mechanism to function so conveniently.
As noted earlier, when Leontief says that it's "a simple fact" that fewer people will be needed, I think he is overstating his case. Since 1982, the prediction of steadily rising unemployment rates has not come true. However, the prediction of steadily rising inequality of incomes and diminished opportunity for low-skilled labor has occurred. 

The extent to which one views inequality as a problem isn't a matter of pure economics, but involves political and even moral or aesthetic judgments. The same can be said about preferred political solutions.  Leontief, who did his early college studies at the University of Leningrad and his Ph.D. work at the University of Berlin, both in the 1920s, had a strong bias that more government planning was a necessary answer. His essay is heavily sprinkled with comments about how dealing with distributional issues will require "close and systematic cooperation between management and labor carried on with government support," and with support for the German/Austrian economic policy model of the 1980s. 

With Leontief's policy perspective in mind, I was intrigued to read this comment from his 1982 essay: "In the long run, responding to the incipient threat of technological unemployment, public policy should aim at securing an equitable distribution of work and income, taking care not to obstruct technological progress even indirectly." My own sense is that if you take seriously the desire not to obstruct technological progress, even indirectly, then you need to allow for and even welcome the possibility of strong disruptions within the existing economy. In the world of US-style practical politics, you must then harbor grave doubts about a Leontief-style strong nexus of government along with the management and labor of existing firms.

I agree with Leontief that economic policy should seek to facilitate technological change and not to obstruct it, even indirectly. But rather than seeing this as a reason to support corporatist public policy, I would say that when technology is contributing to greater inequality of incomes, as it seems to be doing in recent decades, then we should address the inequality directly. Appropriate steps include taxes on those with higher incomes, direct subsidies to lower-income workers in ways that increase their wages, and indirect subsidies in the form of public spending on schools, retraining and job search; public transportation and public safety; and parks, libraries, and improvements in local living environments.

Friday, August 19, 2016

Convert Carbon Dioxide from the Air to Methanol?

When it comes to rising levels of carbon and other greenhouse gases in the atmosphere, I'm in favor of a consider-everything approach, including carbon capture and storage, geoengineering, noncarbon energy sources, energy conservation, and any other options that come to hand. But perhaps the most miraculous possibilities involve finding ways to absorb carbon dioxide from the air directly and then use it as part of a fuel source like methanol. This technology is not yet close to practical on any wide scale, but here are three examples of what's happening.

For example, researchers at Argonne National Laboratory and the University of Illinois Chicago have been working on what can be viewed as an "artificial leaf" for taking carbon dioxide out of the atmosphere. A press release from Argonne described it this way: "To make carbon dioxide into something that could be a usable fuel, Curtiss and his colleagues needed to find a catalyst — a particular compound that could make carbon dioxide react more readily. When converting carbon dioxide from the atmosphere into a sugar, plants use an organic catalyst called an enzyme; the researchers used a metal compound called tungsten diselenide, which they fashioned into nanosized flakes to maximize the surface area and to expose its reactive edges. While plants use their catalysts to make sugar, the Argonne researchers used theirs to convert carbon dioxide to carbon monoxide. Although carbon monoxide is also a greenhouse gas, it is much more reactive than carbon dioxide and scientists already have ways of converting carbon monoxide into usable fuel, such as methanol."  The research was just published in the July 29 issue of Science magazine, in "Nanostructured transition metal dichalcogenide electrocatalysts for CO2 reduction in ionic liquid," by a long list of co-authors headed by Mohammad Asadi (vol. 353, issue 6298, pp. 467-470).

As another example, researchers at the USC Loker Hydrocarbon Research Institute at the University of Southern California "have directly converted carbon dioxide from the air into methanol at relatively low temperatures," according to a summary of the research by Robert Perkins. "The researchers bubbled air through an aqueous solution of pentaethylenehexamine (or PEHA), adding a catalyst to encourage hydrogen to latch onto the carbon dioxide under pressure. They then heated the solution, converting 79 percent of the carbon dioxide into methanol." The researchers hope that the method might be viable at industrial scale in 5-10 years. The research was published earlier this year in the Journal of the American Chemical Society, "Conversion of CO2 from Air into Methanol Using a Polyamine and a Homogeneous Ruthenium Catalyst," by Jotheeswari Kothandaraman, Alain Goeppert, Miklos Czaun, George A. Olah, and G. K. Surya Prakash (2016, 138:3, pp 778–781).

One of the co-authors of the USC study, Alain Goeppert, points out in an article in the Milken Institute Review by Lawrence M. Fisher (Third Quarter, 2016, pp. 3-13) that a company in Iceland is already recycling carbon to make methanol and exporting it to Europe.
“A company in Iceland is already doing that: Carbon Recycling International,” Goeppert said. “There, they are recycling CO2 with hydrogen they obtain from water. They use geothermal energy, which is relatively cheap. They have been producing methanol that way for five years, exporting it to Europe, to use as a fuel. It’s still relatively small scale, but it’s a start.”
Methanol can easily be mixed into gasoline, as ethanol is today, or cars can be adapted fairly cheaply to run on 100% methanol. Diesel engines can run on methanol, too.

Of course, I don't know if carbon-dioxide-to-methanol can put a real dent into atmospheric carbon in any cost-effective way. But again, I'm a consider-everything kind of guy.  And before I get too skeptical about how fields of artificial leaves might work for this purpose, it's worth remembering that fields of solar collectors didn't look very practical as a method of generating electricity a couple of decades ago, either.

Thursday, August 18, 2016

Patterns in US Information Technology Jobs

Would you expect that the number of US jobs in information technology fields is rising or falling over time? On one hand, the growing importance of IT in so many areas of the US economy suggests that the job totals should be rising. On the other hand, one often reads warnings about how a combination of advances in technology and outsourcing to other countries are making certain jobs obsolete, and it seems plausible that a number of IT-related jobs could be either eliminated or outsourced to other countries by improved web-based software and more powerful and reliable computing capabilities. So which effect is bigger? Julia Beckhusen provides an overview in "Occupations in Information Technology," published by the US Census Bureau (August 2016, American Community Survey Reports ACS-35).

The top line is that US jobs in IT seem to be roughly doubling in each decade since the 1970s. Here's an illustrative figure.

What exactly are these jobs? Here's a breakdown for 2014. The top five categories, which together make up about three-quarters of all the IT jobs, are software developers, systems and applications software; computer support specialists; computer occupations, all other; computer and information systems managers; and computer systems analysts.

Are these IT jobs basically in the category of high-paying jobs for highly educated workers? Some are, some aren't. The proportion of workers in each of these IT job categories with a master's degree or higher is shown by the bar graphs on the left. The median pay for each job category is shown by the dot-graph on the right. Unsurprisingly, more than half of all those categorized as "computer and information research scientists" have a master's degree or higher; what is perhaps surprising here is that almost half of those in this job category don't have this level of education. But in most of these IT job categories, only one-quarter, and in many cases much less than one-quarter, of those holding such an IT job have a master's degree. Indeed, I suspect that in many of the lower-paid IT job categories, many do not have a four-year college degree either--there are a lot of shorter-term programs to get some IT training. In general, IT jobs do typically pay more than the average US job. But the highest-paid IT job category, "research scientists," also has the smallest number of workers (as shown in the graph above).

Finally, to what extent are these IT jobs held by those born in another country who have immigrated at least for a time to the United States? As the bars at the top of the figure show, 17% of all US jobs are held by foreign-born workers; among IT workers, it's 24%.

Beckhusen provides lots more detail in breaking down IT jobs along various dimensions. My own guess is that the applications for IT in the US economy will continue to be on the rise, probably in a dramatic fashion, and that many of those applications will turn out to be even more important for society than Twitter or Pokémon Go. The biggest gains in jobs won't be the computer science researchers, but instead will be the people installing, applying, updating, and using IT in an enormously wide range of contexts. If your talents and inclinations lead this way, it remains a good area to work on picking up some additional skills.

Tuesday, August 16, 2016

What are Motivated Beliefs?

"Motivated beliefs" is a relatively recent development economics which offers a position between traditional assumptions of rational and purposeful behavior and the conventional approaches of behavioral economics. It is introduced and explored in a symposium in the Summer 2016 Journal of Economic Perspectives. Nicholas Epley and Thomas Gilovich contribute an introductory essay in "The Mechanics of Motivated Reasoning." Roland Bénabou and Jean Tirole have written: "Mindful Economics: The Production, Consumption, and Value of Beliefs."  Russell Golman, George Loewenstein, Karl Ove Moene, and Luca Zarri look at one aspect of motivated beliefs in "The Preference for Belief Consonance."  Francesca Gino, Michael I. Norton, and Roberto A. Weber focus on another aspect in "Motivated Bayesians: Feeling Moral While Acting Egoistically." 

Of course, I encourage you to read the actual papers. I've worked as the Managing Editor of JEP for 30 years, so I always want everyone to read the papers! But here's an overview and a taste of the arguments.

In traditional working assumptions of microeconomics, people act in purposeful and directed ways to accomplish their goals. Contrary to the complaints I sometimes hear, this approach doesn't require that people have perfect and complete information or that they are perfectly rational decision-makers. It's fairly straightforward to incorporate imperfect information and bounded rationality into these models. But even so, this approach is built on the assumption that people act purposefully to achieve their goals and do not repeatedly make the same mistakes without altering their behavior.

Behavioral economics, as it has usually been practiced, is sometimes called the "heuristics and biases" approach. It points to certain patterns of behavior that have been well-demonstrated in the psychology literature: for example, people often act in a short-sighted or myopic way that puts little weight on long-term consequences; people have a hard time evaluating how to react to low-probability events; people are "loss averse" and treat a loss of a certain amount as a negative outcome that is bigger in absolute value than a gain of the same amount; people exhibit "confirmation bias," interpreting new evidence so that it tends to support previously held beliefs; and others. In this view, people can make decisions and regret them, over and over. Short-sighted people may fail to save, or fail to exercise, and regret it. People who are loss-averse and have a hard time evaluating low-probability events may be sucked into buying a series of service plans and warranties that don't necessarily offer them a good value. When decision-making includes heuristics and biases, people can make the same mistakes repeatedly.

The theory of motivated beliefs falls in between these possibilities. In these arguments, people are not strictly rational or purposeful decision-makers, but neither does their decision-making involve built-in flaws. Instead, people have a number of goals, which include feeling moral, competent, and attractive, fitting in with their existing social group, and achieving higher social status. As Epley and Gilovich explain in their introductory essay,
"This idea is captured in the common saying, “People believe what they want to believe.” But people don’t simply believe what they want to believe. The psychological mechanisms that produce motivated beliefs are much more complicated than that. ... People generally reason their way to conclusions they favor, with their preferences influencing the way evidence is gathered, arguments are processed, and memories of past experience are recalled. Each of these processes can be affected in subtle ways by people’s motivations, leading to biased beliefs that feel objective ...
One of the complexities in understanding motivated reasoning is that people have many goals, ranging from the fundamental imperatives of survival and reproduction to the more proximate goals that help us survive and reproduce, such as achieving social status, maintaining cooperative social relationships, holding accurate beliefs and expectations, and having consistent beliefs that enable effective action. Sometimes reasoning directed at one goal undermines another. A person trying to persuade others about a particular point is likely to focus on reasons why his arguments are valid and decisive—an attentional focus that could make the person more compelling in the eyes of others but also undermine the accuracy of his assessments. A person who recognizes that a set of beliefs is strongly held by a group of peers is likely to seek out and welcome information supporting those beliefs, while maintaining a much higher level of skepticism about contradictory information (as Golman, Loewenstein, Moene, and Zarri discuss in this symposium). A company manager narrowly focused on the bottom line may find ways to rationalize or disregard the ethical implications of actions that advance short-term profitability (as Gino, Norton, and Weber discuss in this symposium). 
The crucial point is that the process of gathering and processing information can systematically depart from accepted rational standards because one goal— desire to persuade, agreement with a peer group, self-image, self-preservation—can commandeer attention and guide reasoning at the expense of accuracy. Economists are well aware of crowding-out effects in markets. For psychologists, motivated reasoning represents an example of crowding-out in attention. In any given instance, it can be a challenge to figure out which goals are guiding reasoning ... 
In one classic study, mentioned in the overview and several of the papers, participants were given a description of a trial and asked to evaluate whether they thought the accused was guilty or innocent. Some of the participants were assigned to play the role of prosecutors or defense attorneys before reading the information; others were not assigned a role until after evaluating the information. Those who were assigned to be prosecutors before reading the evidence were more likely to evaluate the evidence as showing the defendant was guilty, while those assigned to be defense attorneys before reading the evidence were more likely to evaluate the evidence as showing the defendant to be not guilty. The role you play will often influence your reading of evidence.

Bénabou and Tirole offer a conceptual framework for thinking about motivated beliefs, and then apply the framework in a number of contexts. They argue that motivated beliefs arise for two reasons, which they label "self-efficacy" and "affective." In the self-efficacy situation, people use their beliefs to give their immediate actions a boost. Can I do a good job in the big presentation at work? Can I save money? Can I persevere with a diet? In such situations, people are motivated to distort their interpretation of information and their own actions in a way that helps support their ability to persevere with a certain task. In the "affective" situation, people get immediate and visceral pleasure from seeing themselves as smart, attractive, or moral, and they can also get "anticipatory utility" from contemplating pleasant future outcomes.

However, if your motivated beliefs do not reflect reality, then in some cases reality will deliver some hard knocks in response. They analyze certain situations in which these hard knocks, again through a process of motivated beliefs, make you cling to those beliefs harder than ever. Moreover, if you are somewhat self-aware and know that you are prone to motivated beliefs, then you may be less likely to trust your own interpretations of evidence, which complicates the analysis further. Bénabou and Tirole apply these arguments in a wide array of contexts: political beliefs (a subject of particular interest in 2016), social and organizational beliefs, financial bubbles, and personal identity. Here's one example of a study concerning political beliefs (most citations omitted).

The World Values Survey reveals considerable differences in beliefs about the role of effort versus luck in life. In the United States, 60 percent of people believe that effort is key; in Western Europe, only 30 percent do on average, with major variations across countries. Moreover, these nationally dominant beliefs bear no relationship to the actual facts about social mobility or how much the poor are actually working, and yet they are strongly correlated with the share of social spending in GDP. At the individual level, similarly, voters’ perceptions of the extent to which people control their own fate and ultimately get their just desserts are first-order determinants of attitudes toward inequality and redistribution, swamping the effects of own income and education. 
In Bénabou and Tirole (2006), we describe how such diverse politico-ideological equilibria can emerge due to a natural complementarity between (self-)motivation concerns and marginal tax rates. When the safety net and redistribution are minimal, agents have strong incentives to maintain for themselves, and pass on to their children, beliefs that effort is more important than luck, as these will lead to working hard and persevering in the face of adversity. With high taxes and generous transfers, such beliefs are much less adaptive, so fewer people will maintain them. Thus, there can coexist: i) an “American Dream” equilibrium, with just-world beliefs about social mobility, and little redistribution; and ii) a “Euro-pessimistic” equilibrium, with more cynical beliefs and a large welfare state. In the latter, the poor are less (unjustly) stigmatized as lazy, while total effort (annual hours worked) and income are lower, than in the former. More generally, across all steady-states there is a negative correlation between just-world beliefs and the size of the welfare state, just as observed across countries.
Golman, Loewenstein, Moene, and Zarri consider one aspect of motivated beliefs, the "preference for belief consonance," which is the desire to be in agreement with others in one's immediate social group. They endeared themselves to me by starting with a quotation from Adam Smith's first great work, The Theory of Moral Sentiments (Part VII, Section IV): "The great pleasure of conversation, and indeed of society, arises from a certain correspondence of sentiments and opinions, from a certain harmony of minds, which like so many musical instruments coincide and keep time with one another." They write:

Why are people who hold one set of beliefs so affronted by alternative sets of beliefs—and by the people who hold them? Why don’t people take a live-and-let-live attitude toward beliefs that are, after all, invisibly encoded in other people’s minds? In this paper, we present evidence that people care fundamentally about what other people believe, and we discuss explanations for why people are made so uncomfortable by the awareness that the beliefs of others differ from their own. This preference for belief consonance (or equivalently, distaste for belief dissonance) has far-ranging implications for economic behavior. It affects who people choose to interact with, what they choose to exchange information about, what media they expose themselves to, and where they choose to live and work. Moreover, when people are aware that their beliefs conflict with those of others, they often try to change other people’s beliefs (proselytizing). If unsuccessful in doing so, they sometimes modify their own beliefs to bring them into conformity with those around them. A preference for belief consonance even plays an important role in interpersonal and intergroup conflict, including the deadliest varieties: Much of the conflict in the world is over beliefs—especially of the religious variety—rather than property ... 
A substantial group of studies shows that if you ask people about their opinions on certain issues, and if you ask people about their opinions while telling them that certain other specific groups hold certain opinions, the patterns of answers can be quite different. Personally, I'm always disconcerted that for every opinion I hold, some of the others who hold that same opinion are people I don't like very much.

Gino, Norton, and Weber take on another dimension of motivated beliefs in their essay on "feeling moral while acting egoistically." They explain that when given some wiggle room to manage their actions or their information, people often choose to act in a way that allows them to feel moral while acting selfishly. Gino, Norton, and Weber write:
 In particular, while people are often willing to take a moral act that imposes personal material costs when confronted with a clear-cut choice between “right” and “wrong,” such decisions often seem to be dramatically influenced by the specific contexts in which they occur. In particular, when the context provides sufficient flexibility to allow plausible justification that one can both act egoistically while remaining moral, people seize on such opportunities to prioritize self-interest at the expense of morality. In other words, people who appear to exhibit a preference for being moral may in fact be placing a value on feeling moral, often accomplishing this goal by manipulating the manner in which they process information to justify taking egoistic actions while maintaining this feeling of morality.
They cite many studies of this phenomenon. Here's an overview of one: 
[P]articipants in a laboratory experiment distribute two tasks between themselves and another participant: a positive task (where correct responses to a task earn tickets to a raffle) and a negative task (not incentivized and described as “rather dull and boring”). Participants were informed: “Most participants feel that giving both people an equal chance— by, for example, flipping a coin—is the fairest way to assign themselves and the other participant to the tasks (we have provided a coin for you to flip if you wish). But the decision is entirely up to you.” Half of participants simply assigned the tasks without flipping the coin; among these participants, 90 percent assigned themselves to the positive task. However, the more interesting finding is that among the half of participants who chose to flip the coin, 90 percent “somehow” ended up with the positive task—despite the distribution of probabilities that one would expect from a two-sided coin. Moreover, participants who flipped the coin rated their actions as more moral than those who did not—even though they had ultimately acted just as egoistically as those who did not flip in assigning themselves the positive task. These results suggest that people can view their actions as moral by providing evidence to themselves that they are fair (through the deployment of a theoretically unbiased coin flip), even when they then ignore the outcome of that coin flip to benefit themselves.
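As a quick back-of-the-envelope check on just how implausible those reported flips are, here is a small calculation. The sample size of 20 coin-flippers is a hypothetical number of my own (the excerpt doesn't report one); the point is only that 90 percent favorable outcomes is wildly unlikely if everyone reported an honest flip.

```python
# How likely is it that 90 percent of honest fair-coin flips favor the
# flipper? Sample size of 20 is assumed for illustration, not taken from
# the study described above.
from math import comb

n = 20   # hypothetical number of participants who flipped the coin
k = 18   # 90 percent of them ending up with the positive task
p = 0.5  # chance of a favorable outcome from one fair flip

# P(X >= k) for X ~ Binomial(n, 0.5): sum the upper tail of the distribution
prob = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"P(at least {k} of {n} favorable flips) = {prob:.6f}")  # about 0.000201
```

So under honest flipping, an outcome this lopsided would occur roughly twice in ten thousand experiments, which is the arithmetic behind the authors' skeptical "somehow."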
The theory of motivated beliefs still views people as motivated by self-interest. However, the dimensions of self-interest expand beyond the standard concerns like consumption and leisure, and encompass how we feel about ourselves and the social groups we inhabit. In this way, the analysis opens up insights into behavior that is otherwise puzzling in the context of economic analysis, as well as building intellectual connections to other social sciences such as psychology and sociology.

Monday, August 15, 2016

Alfred Marshall and the Origin of Ceteris Paribus

When non-economists ask me questions, they often seem to be jumping from topic to topic. A question about the effects of raising the minimum wage, for example, shifts from how it will affect jobs, and earnings, and companies that hire minimum wage workers, and work effort, and automation, and the overall income distribution, and children of minimum wage earners, and so on. The questions are all reasonable. But I become aware that economists have trained themselves into a one-thing-at-a-time method of analysis, and so bouncing from one topic to another can feel somehow awkward.

The ceteris paribus or "other things equal" assumption involves an intellectual approach, common among economists, of trying to focus on one thing at a time. After all, many economic issues and policies have a number of possible causes and effects. Rather than hopscotching among them, economists often try to isolate one factor at a time, and then to move on to other factors, before combining it all into an overall perspective. The use of this approach in economic analysis traces back to Alfred Marshall's 1890 classic Principles of Economics.
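As a minimal sketch of this one-thing-at-a-time logic, here is a toy comparative-statics calculation with made-up linear demand and supply curves (the numbers are mine, not Marshall's): shift the demand curve while holding the supply curve, and everything else, fixed, and solve for the new equilibrium.

```python
# Ceteris paribus in miniature: change one thing (demand) while holding
# another (supply) in the "pound." All curves and numbers are hypothetical.

def equilibrium(demand_intercept, demand_slope, supply_intercept, supply_slope):
    """Solve a - b*p = c + d*p for the market-clearing price and quantity."""
    price = (demand_intercept - supply_intercept) / (demand_slope + supply_slope)
    quantity = demand_intercept - demand_slope * price
    return price, quantity

# Baseline market: Qd = 100 - 2p, Qs = 10 + 1p
p0, q0 = equilibrium(100, 2, 10, 1)   # p0 = 30.0, q0 = 40.0

# Ceteris paribus: demand rises to Qd = 130 - 2p, supply held constant
p1, q1 = equilibrium(130, 2, 10, 1)   # p1 = 40.0, q1 = 50.0

print(f"before: p={p0}, q={q0}; after demand shift: p={p1}, q={q1}")
```

Releasing other factors from the pound, in Marshall's phrase, would mean letting the supply curve or the other parameters change too, one step at a time.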

The Library of Economics and Liberty provides a useful place for finding searchable editions of many classic works in economics. The site provides the 8th edition of Marshall's Principles, published in 1920. In Book V, Chapter V, "Equilibrium of Normal Demand and Supply, Continued, With Reference To Long and Short Periods," Marshall describes the overall logic of looking at one thing at a time, offers some hypothetical examples from a discussion of supply and demand shocks in fish markets, and points out that the longer the time period of analysis, the harder it becomes to assume that everything else is constant. Marshall writes:
"The element of time is a chief cause of those difficulties in economic investigations which make it necessary for man with his limited powers to go step by step; breaking up a complex question, studying one bit at a time, and at last combining his partial solutions into a more or less complete solution of the whole riddle. In breaking it up, he segregates those disturbing causes, whose wanderings happen to be inconvenient, for the time in a pound called Cœteris Paribus. The study of some group of tendencies is isolated by the assumption other things being equal: the existence of other tendencies is not denied, but their disturbing effect is neglected for a time. The more the issue is thus narrowed, the more exactly can it be handled: but also the less closely does it correspond to real life. Each exact and firm handling of a narrow issue, however, helps towards treating broader issues, in which that narrow issue is contained, more exactly than would otherwise have been possible. With each step more things can be let out of the pound; exact discussions can be made less abstract, realistic discussions can be made less inexact than was possible at an earlier stage. ...

The day to day oscillations of the price of fish resulting from uncertainties of the weather, etc., are governed by practically the same causes in modern England as in the supposed stationary state. The changes in the general economic conditions around us are quick; but they are not quick enough to affect perceptibly the short-period normal level about which the price fluctuates from day to day: and they may be neglected [impounded in cœteris paribus] during a study of such fluctuations.

Let us then pass on; and suppose a great increase in the general demand for fish, such for instance as might arise from a disease affecting farm stock, by which meat was made a dear and dangerous food for several years together. We now impound fluctuations due to the weather in cœteris paribus, and neglect them provisionally: they are so quick that they speedily obliterate one another, and are therefore not important for problems of this class. And for the opposite reason we neglect variations in the numbers of those who are brought up as seafaring men: for these variations are too slow to produce much effect in the year or two during which the scarcity of meat lasts. Having impounded these two sets for the time, we give our full attention to such influences as the inducements which good fishing wages will offer to sailors to stay in their fishing homes for a year or two, instead of applying for work on a ship. We consider what old fishing boats, and even vessels that were not specially made for fishing, can be adapted and sent to fish for a year or two. The normal price for any given daily supply of fish, which we are now seeking, is the price which will quickly call into the fishing trade capital and labour enough to obtain that supply in a day's fishing of average good fortune; the influence which the price of fish will have upon capital and labour available in the fishing trade being governed by rather narrow causes such as these. This new level about which the price oscillates during these years of exceptionally great demand, will obviously be higher than before. Here we see an illustration of the almost universal law that the term Normal being taken to refer to a short period of time an increase in the amount demanded raises the normal supply price.  ...

Relatively short and long period problems go generally on similar lines. In both use is made of that paramount device, the partial or total isolation for special study of some set of relations. In both opportunity is gained for analysing and comparing similar episodes, and making them throw light upon one another; and for ordering and co-ordinating facts which are suggestive in their similarities, and are still more suggestive in the differences that peer out through their similarities. But there is a broad distinction between the two cases. In the relatively short-period problem no great violence is needed for the assumption that the forces not specially under consideration may be taken for the time to be inactive. But violence is required for keeping broad forces in the pound of Cœteris Paribus during, say, a whole generation, on the ground that they have only an indirect bearing on the question in hand. For even indirect influences may produce great effects in the course of a generation, if they happen to act cumulatively; and it is not safe to ignore them even provisionally in a practical problem without special study. Thus the uses of the statical method in problems relating to very long periods are dangerous; care and forethought and self-restraint are needed at every step. The difficulties and risks of the task reach their highest point in connection with industries which conform to the law of Increasing Return; and it is just in connection with those industries that the most alluring applications of the method are to be found.
For those who want more on the history of ceteris paribus (the modern spelling no longer uses the ligature version that ties together the o and e), Joseph Persky offers a nice introduction in his 1990 article "Retrospectives: Ceteris Paribus," which appeared in the Journal of Economic Perspectives (4: 2, pp. 187-193). Persky finds early uses of the term back in the 1600s, including a 1662 passage by the economist William Petty that was often quoted in the 19th century--and thus may have inspired Marshall's use of the term.

Persky notes the dueling concerns that economists may in some cases feel that they should avoid big-picture subjects in the global economy or historical analysis because the ceteris are not always paribus, or in other cases that economic research may be focusing on one factor while other important factors are also changing. But as Persky points out, the ceteris paribus assumption is not meant as a literal statement that nothing else has changed, but only to remind the reader that the analysis may be leaving something out. As Persky writes: "Economists could do much worse than to flag our fallibility with a bit of Latin."

Friday, August 12, 2016

The Future of DSGE Models in Macroeconomics

One of the hardest problems in studying the macroeconomy is that time keeps advancing. You can't go back to, say, 2001 or 2009, leave the Bush tax cuts or the Obama economic stimulus unenacted, and then re-run the economy and see what happens. Instead, researchers end up comparing effects of seemingly similar policies enacted at different times--but the policies and the circumstances are never quite identical, so room for dispute remains. Indeed, disagreements among macroeconomists are nearly proverbial. "Macroeconomists have predicted nine of the last five recessions." "Two macroeconomists, five opinions." "Economists are the experts who explain why the prediction they made yesterday didn't come true today."

I sometimes receive notes from readers asking for a sense of why macroeconomists disagree.  Olivier Blanchard opens up some of the central issues for useful discussion in a short and readable paper, "Do DSGE Models Have a Future?" written for the Peterson Institute for International Economics (Policy Brief 16-11, August 2016).

For the uninitiated, DSGE models of the macroeconomy are a method that is both well-established and the stuff of continuing controversy. DSGE stands for "dynamic stochastic general equilibrium model," which represents a broad class of macroeconomic models. In the jargon, "dynamic" means that the models show the evolution of a (hypothetical) economy over time. "Stochastic" means that the models show how the economy would respond if certain shocks occur, whether the shocks involve policy choices or economic events (like a rise or fall in the rate of productivity growth). "General equilibrium" means that these models don't look at the macroeconomy one sector at a time--say, first consumption, then investment, then foreign trade--but instead try to take all the interactions of these sectors into account.  Blanchard describes the models in this way:
"For those who are not macroeconomists, or for those macroeconomists who lived on a desert island for the last 20 years, here is a brief refresher. DSGE stands for “dynamic stochastic general equilibrium.” The models are indeed dynamic, stochastic, and characterize the general equilibrium of the economy. They make three strategic modeling choices: First, the behavior of consumers, firms, and financial intermediaries, when present, is formally derived from microfoundations. Second, the underlying economic environment is that of a competitive economy, but with a number of essential distortions added, from nominal rigidities to monopoly power to information problems. Third, the model is estimated as a system, rather than equation by equation as in the previous generations of macroeconomic models. ... [C]urrent DSGE models are best seen as large scale versions of the New Keynesian model, which emphasizes nominal rigidities and a role for aggregate demand."
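To give non-specialists a rough feel for what "dynamic" and "stochastic" mean in practice, here is a deliberately trivial toy, nothing like a real DSGE model: an AR(1) process in which output deviations decay geometrically after a shock. The persistence parameter and shock sizes are made up for illustration.

```python
# "Dynamic": the economy's state this period depends on last period.
# "Stochastic": random shocks hit the economy each period.
# This AR(1) toy captures both in one equation, y_t = rho*y_{t-1} + e_t.
import random

def impulse_response(persistence, shock, periods):
    """Deterministic path after a single one-time shock: the model's dynamics."""
    path = [shock]
    for _ in range(periods - 1):
        path.append(persistence * path[-1])
    return path

def simulate(persistence, shock_sd, periods, seed=0):
    """Path with a fresh random shock every period: the model's stochastics."""
    rng = random.Random(seed)
    y, path = 0.0, []
    for _ in range(periods):
        y = persistence * y + rng.gauss(0.0, shock_sd)
        path.append(y)
    return path

# A one-unit shock with persistence 0.9 fades out gradually
print([round(y, 3) for y in impulse_response(0.9, 1.0, 5)])
# [1.0, 0.9, 0.81, 0.729, 0.656]
```

Actual DSGE models replace this single ad hoc equation with systems derived from microfoundations, but the impulse-response exercise, trace out how the economy responds over time to a shock, is the same basic idea.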
Blanchard gives four main concerns about DSGE models along with some thoughts about each one. Thus, he writes:
There are many reasons to dislike current DSGE models. First: They are based on unappealing assumptions. Not just simplifying assumptions, as any model must, but assumptions profoundly at odds with what we know about consumers and firms.  ... Second: Their standard method of estimation, which is a mix of calibration and Bayesian estimation, is unconvincing. ... Third: While the models can formally be used for normative purposes, normative implications are not convincing. ... Fourth: DSGE models are bad communication devices. A typical DSGE paper adds a particular distortion to an existing core. It starts with an algebra-heavy derivation of the model, then goes through estimation, and ends with various dynamic simulations showing the effects of the distortion on the general equilibrium properties of the model.
You can read the details of Blanchard's responses in the paper, but I'd characterize his overall view of DSGE models as negative, ambivalent, and positive all at the same time. He writes: "I see the current DSGE models as seriously flawed, but they are eminently improvable and central to the future of macroeconomics." A snippet of his more detailed answer reads like this:
The pursuit of a widely accepted analytical macroeconomic core, in which to locate discussions and extensions, may be a pipe dream, but it is a dream surely worth pursuing. If so, the three main modeling choices of DSGEs are the right ones. Starting from explicit microfoundations is clearly essential; where else to start from? Ad hoc equations will not do for that purpose. Thinking in terms of a set of distortions to a competitive economy implies a long slog from the competitive model to a reasonably plausible description of the economy. But, again, it is hard to see where else to start from. Turning to estimation, calibrating/estimating the model as a system rather than equation by equation also seems essential. Experience from past equation-by-equation models has shown that their dynamic properties can be very much at odds with the actual dynamics of the system. 
It's worth unpacking this a bit. Blanchard's comment that the DSGE approach "may be a pipe dream, but it is a dream surely worth pursuing," is not calculated to inspire confidence in the results of such studies! This intellectual agenda of modelling the activities of real-world economic actors, with some combination of rational-choice and behavioral assumptions, involves many possible choices. The selection of possible frictions, like monopoly power, wages and prices that adjust in a sticky manner, the formation of expectations, and the issues raised by financial markets, adds another set of possible choices. Getting a workable quantitative result out of the model, by choosing plausible parameter values from other studies (that is, "calibrating" the model) and deciding which parts of the model to estimate using data, involves still more choices.

In addition, Blanchard discusses how DSGE modelling needs to be open to new insights from behavioral economics, from the use of big data, from issues about problems that can arise in financial markets, and more. He also suggests: "At one end, maximum theoretical purity is indeed the niche of DSGEs. For those models, fitting the data closely is less important than clarity of structure." This comment is not calculated to inspire confidence in the results of such studies either. He suggests that there is also a need for one set of related-but-different models for policy purposes, another set of related-but-different models for purposes of economic forecasting, and still other lessons that are most accessible through simpler ad hoc models (like the IS-LM model from intermediate-level macro textbooks).

In a way, what macroeconomists have been learning in the last few decades is a deeper understanding of how many different ingredients might be included in a macroeconomic model. But no model can look at everything at once, so macroeconomists are always trying to figure out which ingredients matter most. My own takeaway is that DSGE models will continue to matter a lot to high-powered researchers in macroeconomics, like Blanchard. But for the rest of us, the task is to keep track of how insights from those models filter down through the research literature and become practical lessons that can be explained and applied in more stripped-down contexts.