Friday, June 5, 2020

Exploding US Unemployment Rates: A Peek Inside

US unemployment rates have reached higher levels, and risen more dramatically, than at any time since the start of regular employment statistics in the late 1940s. Here's the basic picture. The unemployment rate was 14.7% in April and then dropped unexpectedly (to me, at least!) to 13.3% in May. Even so, looking back over the last 75 years, the monthly unemployment rate has never risen this fast or reached a level this high.  
The explosive rise in the unemployment rate has been accompanied by a sharper decline in jobs than the US economy has experienced in the last 75 years. The figure shows total US employees. As you see, the number rises gradually over the decades, keeping pace with the US population. The total number of jobs drops during or just after recessions, shown by the shaded gray bars. But whether it's the Great Recession of 2007-9 or the severe double-dip recession of the early 1980s, the US economy has not seen a drop in total jobs this fast and severe. The total number of jobs was 151 million in March and 130 million in April--a drop of about 14% in a single month--before a gain of about 2.5 million jobs in May. 
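The one-month arithmetic can be checked directly; here's a minimal sketch using the rounded totals quoted above:

```python
# Back-of-the-envelope check on the one-month employment drop,
# using the rounded job totals quoted in the text (millions of jobs).
march_jobs = 151.0
april_jobs = 130.0

one_month_drop = march_jobs - april_jobs          # 21 million jobs lost
pct_drop = one_month_drop / march_jobs * 100      # roughly 14%

print(f"Jobs lost in April: {one_month_drop:.0f} million ({pct_drop:.1f}%)")
```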
The key question about unemployment is whether there could be a quick bounceback. Are many of these employers poised to resume hiring? Are many of these workers poised to go back to work? One interesting tidbit of evidence here is the share of the unemployed who lost their jobs because of layoffs--which has some implication that they could be readily rehired. Here's another striking figure. The share of "job losers on layoff" is around 8-15% of the total unemployed from the mid-1980s up to early 2020. 
One of the shifting labor market patterns in the last 30 years or so has been the disappearance of the "layoff." If you look back at recessions in the 1970s and 1980s, you see that the share of "job losers on layoff" rises during recessions, and then falls. It was a much more common pattern for factories and other employers to lay off and then to rehire those same workers. But when you look at the recessions of 1990-91, 2001, and 2007-9, you don't see much of a rise in layoffs. Instead, the chance that an unemployed worker was laid off with a plausible prospect of being rehired, rather than just let go, got lower and lower. For example, look how low the percentage falls in the years after the Great Recession. 

But the share of "job losers" on layoff just spiked to 78% in April and 73% in May, which implies that large numbers of the unemployed could conceivably be rehired quickly. But of course, a "layoff" could become an empty promise, where most of these workers are not rehired, and instead need to find new jobs in the new socially distanced economy. 

I've also been struck by the difference between US and European unemployment data. When US unemployment was spiking to 14.7% in April, unemployment in the 27 countries of the European Union barely nudged up to 6.6% in April; for the subset of 19 countries in the euro zone, unemployment was 7.3% in April. Why did US unemployment spike to double European levels? The likely answer involves interactions between public policy and what is counted as "unemployment." 

One key policy choice is whether assistance to workers has been sent to them directly--say, via unemployment insurance--or whether assistance to workers was funneled through employers, so that workers who were not necessarily going to work still kept receiving a (government-funded) paycheck from their employer. Jonathan Rothwell describes the difference in "The effects of COVID-19 on international labor markets: An update" (May 27, 2020, Brookings Institution). 

Here's a figure from Rothwell showing the change in workers getting unemployment benefits. Notice that it's way up in Canada, Israel, Ireland, and the US. But in France, Germany, Japan, and Netherlands, there's essentially no rise in unemployment benefits. 
The reason is that in many countries, a number of workers are getting government assistance via their employers. In the unemployment stats for those countries, they are still counted as employed. Here's the figure from Rothwell: 
Another policy choice in the US has been to increase unemployment assistance substantially, so that it is closer to the actual pay that workers receive. Manuel Alcalá Kovalski and Louise Sheiner provide a quick background primer on "How does unemployment insurance work? And how is it changing during the coronavirus pandemic?" (Brookings Institution, April 7, 2020). As they write: 
Most state UI [Unemployment Insurance] systems replace about half of prior weekly earnings, up to some maximum. Before the expansion of UI during the coronavirus crisis, average weekly UI payments were $387 nationwide, ranging from an average of $215 per week in Mississippi to $550 per week in Massachusetts. ... The CARES Act—a $2 trillion relief package aimed at alleviating the economic fallout from the COVID-19 pandemic—extends the duration of UI benefits by 13 weeks and increases payments by $600 per week through July 31st. This implies that maximum UI benefits will exceed 90 percent of average weekly wages in all states.
In other words, rather than trying to keep laid-off or furloughed workers receiving much the same income via their employer, the US approach has been to do so via the unemployment insurance system. This has caused problems. For lower-wage US workers, the higher unemployment insurance payments cover a substantial part of their typical working income--in some cases, more than 100% of their previous pay. They have a financial incentive not to return to work, even if their employer would like to re-open, until these benefits run out. Of course, other unemployed workers receiving these higher benefits may not have an option to return. In the meantime, other low-wage workers who have kept working in grocery stores, warehouses, delivery services, and from home, are not receiving such payments at all. 
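The replacement-rate arithmetic behind that incentive is straightforward to sketch. The flat 50% state replacement rate and the $600 weekly top-up come from the Brookings primer quoted above; the specific wage levels below are illustrative assumptions:

```python
# Sketch of how the CARES Act top-up can push UI benefits above prior pay.
# Assumes a stylized 50% state replacement rate, per the Brookings primer;
# the wage levels are illustrative, not data.
def total_weekly_benefit(prior_weekly_wage, state_replacement=0.5, federal_topup=600):
    return prior_weekly_wage * state_replacement + federal_topup

for wage in [400, 600, 1000, 1500]:
    benefit = total_weekly_benefit(wage)
    print(f"Prior wage ${wage}/wk -> benefit ${benefit:.0f}/wk "
          f"({benefit / wage:.0%} of prior pay)")
```

For a worker previously earning $400 a week, the combined benefit is $800 a week, or double the prior paycheck; the replacement rate falls below 100% only for wages above $1,200 a week under these assumptions.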

Given that the US policy choice was to funnel assistance to workers through the unemployment system, it's not a big shock that the unemployment rate rose so high, so fast. A near-term policy question is whether to extend the higher unemployment payments, perhaps by another six months. The Congressional Budget Office (June 4, 2020) has just released some estimates of the effects of that choice. CBO writes: 
Roughly five of every six recipients would receive benefits that exceeded the weekly amounts they could expect to earn from work during those six months. The amount, on average, that recipients spent on food, housing, and other goods and services would be closer to what they spent when employed than it would be if the increase in unemployment benefits was not extended. ... In CBO’s assessment, the extension of the additional $600 per week would probably reduce employment in the second half of 2020, and it would reduce employment in calendar year 2021. The effects from reduced incentives to work would be larger than the boost to employment from increased overall demand for goods and services.
My own sense is that a blanket extension of the additional unemployment benefits is probably the politically easy choice. But the pragmatic choice would be to start thinking more carefully about how to structure these payments in a way that strikes a better balance between helping those who need it and preserving the incentive to return to work. 

There is a sense in which the very high US unemployment rates both understate and overstate the condition of US labor markets. Unemployment rates, by definition, leave out those who are "out of the labor force," perhaps because added family responsibilities have made it too difficult to work, or the bleak unemployment picture has made it difficult to seek a job. On the other side, some of the unemployed are hovering in place, ready and able to return to their previous employer, but receiving enhanced unemployment insurance payments in the meantime. 

Estimating these kinds of factors of course involves a bunch of judgement calls. But for an example of such analysis,  Jason Furman and Wilson Powell III have written "The US unemployment rate is higher than it looks—and is still high if all furloughed workers returned" (Peterson Institute for International Economics, June 5, 2020). Furman and Powell look at the rise in the number of people “not at work for other reasons” and the rise in the number of people who are out of the labor force. They write: "Adjusting for these factors our “realistic unemployment rate” was 17.1 percent in May, down from the April value but still higher than any other unemployment rate in over 70 years."

They also look at what the unemployment rate would be if those who say they are on layoff all returned to their jobs: "In total, an additional 14.5 million of the unemployed reported being on temporary layoff. If all of these people were immediately recalled back to work and the labor force adjusted accordingly—a very optimistic scenario—the “full recall unemployment rate” would still be a very elevated 7.1 percent."
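The mechanics of a "full recall" rate can be sketched with round numbers. The inputs below are illustrative assumptions, not Furman and Powell's: their published 7.1% figure starts from their adjusted "realistic" unemployment count and also reworks the labor force, so this sketch shows only the direction of the adjustment:

```python
# Stylized mechanics of a "full recall" unemployment rate. The round
# numbers below are illustrative assumptions, not the authors' inputs.
def unemployment_rate(unemployed_m, labor_force_m):
    """Unemployment rate in percent; inputs in millions."""
    return unemployed_m / labor_force_m * 100

unemployed = 21.0        # millions, roughly consistent with a 13.3% rate
labor_force = 158.0      # millions (illustrative)
on_temp_layoff = 14.5    # millions on temporary layoff, per Furman and Powell

print(f"Measured rate: {unemployment_rate(unemployed, labor_force):.1f}%")
print(f"If everyone on temporary layoff returned: "
      f"{unemployment_rate(unemployed - on_temp_layoff, labor_force):.1f}%")
```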

Either way, the US economy is clearly in the midst of a recession. The question is whether it turns out to be a deep-at-the-start-but-short recession, or deep-at-the-start-and-prolonged recession. The eventual outcome is only partly about economic policy: the coronavirus and public health policy will also play a big role. 

Thursday, June 4, 2020

Tales of Frank Ramsey: Economics, Wittgenstein, and More

Economists know Frank Ramsey (1903-1930) mostly through two classic papers written for the Economic Journal in 1927 and 1928, and also as a story of a genius who died at age 26. Cheryl Misak has written the first full biography of Ramsey: Frank Ramsey: A Sheer Excess of Powers, which I have not yet read. But I ran across the review/overview of the book by Anthony Gottlieb in the New Yorker (May 4, 2020), titled and subtitled "The Man Who Thought Too Fast: Frank Ramsey—a philosopher, economist, and mathematician—was one of the greatest minds of the last century. Have we caught up with him yet?"

Here, I'll rely on Gottlieb's account to give the barest taste of what was so extraordinary about Ramsey, and to remind economists of the contributions of his two great papers to our field. As Gottlieb notes:
Dons at Cambridge had known for a while that there was a sort of marvel in their midst: Ramsey made his mark soon after his arrival as an undergraduate at Newton’s old college, Trinity, in 1920. He was picked at the age of eighteen to produce the English translation of Ludwig Wittgenstein’s “Tractatus Logico-Philosophicus,” the most talked-about philosophy book of the time; two years later, he published a critique of it in the leading philosophy journal in English, Mind. G. E. Moore, the journal’s editor, who had been lecturing at Cambridge for a decade before Ramsey turned up, confessed that he was “distinctly nervous” when this first-year student was in the audience, because he was “very much cleverer than I was.” . ...

His contribution to pure mathematics was tucked away inside a paper on something else. It consisted of two theorems that he used to investigate the procedures for determining the validity of logical formulas. More than forty years after they were published, these two tools became the basis of a branch of mathematics known as Ramsey theory, which analyzes order and disorder. (As an Oxford mathematician, Martin Gould, has explained, Ramsey theory tells us, for instance, that among any six users of Facebook there will always be either a trio of mutual friends or a trio in which none are friends.) ...
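The Facebook example quoted above is the classic "party problem" result that R(3,3) = 6, and it is small enough to verify by brute force. A minimal check, written in Python for illustration:

```python
# Brute-force check of the "party problem" from Ramsey theory: among any
# six people, every way of marking each pair as friends or strangers
# contains three mutual friends or three mutual strangers (R(3,3) = 6).
from itertools import combinations, product

people = range(6)
pairs = list(combinations(people, 2))   # the 15 pairs among 6 people

def has_mono_triangle(coloring):
    """True if this friend/stranger assignment has a monochromatic triangle."""
    color = dict(zip(pairs, coloring))
    return any(color[(a, b)] == color[(a, c)] == color[(b, c)]
               for a, b, c in combinations(people, 3))

# Try all 2^15 = 32,768 friend/stranger assignments: none escapes.
assert all(has_mono_triangle(c) for c in product([0, 1], repeat=15))
print("Every 2-coloring of K6 contains a monochromatic triangle.")
```

The same check fails for five people (a pentagon of friends with strangers across the diagonals has no such trio), which is why six is the threshold.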

In 1926, Ramsey composed a long paper about truth and probability which looked at the effects of what he called “partial beliefs”—that is, of people’s judgments of probability. This may have been his most influential work. It ingeniously used the bets one would make in hypothetical situations to measure how firmly one believes a proposition and how much one wants something, and thus laid the foundations of what are now known as decision theory and the subjective theory of probability. ...
Economists now study Ramsey pricing; mathematicians ponder Ramsey numbers. Philosophers talk about Ramsey sentences, Ramseyfication, and the Ramsey test. 
For economists, two particular papers stand out: "A Contribution to the Theory of Taxation" (Economic Journal, 1927, 37:145, pp. 47–61) and "A Mathematical Theory of Saving" (Economic Journal, 1928, 38:152, pp. 543–559). John Maynard Keynes was editor of the EJ, and as Gottlieb notes: "John Maynard Keynes was one of several Cambridge economists who deferred to the undergraduate Ramsey’s judgment and intellectual prowess." 

Perhaps fortunately, dear reader, you need not rely on my personal efforts to summarize the influence and insights of these articles. Back in 2015, the Economic Journal on its 125th anniversary published a series of essays reflecting back on the most prominent papers that had appeared in the history of the journal, and two of the 13 papers deemed worthy of remembrance were by Ramsey. 

Joseph E. Stiglitz contributed "In Praise of Frank Ramsey's Contribution to the Theory of Taxation." He writes: 
Frank Ramsey's brilliant 1927 paper, modestly entitled, ‘A contribution to the theory of taxation’, is a landmark in the economics of public finance. Nearly a half century later, through the work of Diamond and Mirrlees (1971) and Mirrlees (1971), his paper can be thought of as launching the field of optimal taxation and revolutionising public finance. ... Here, he addresses a question which he says was posed to him by A. C. Pigou: given that commodity taxes are distortionary, what is the best way of raising revenues, i.e. what is the set of taxes to raise a given revenue which maximises utility. The answer is now commonly referred to as Ramsey taxes. ...

Ramsey showed that efficient taxation required imposing a complete array of taxes – not just a single tax. A large number of small distortions, carefully constructed, is better than a single large distortion. And he showed precisely what these market interventions would look like. (He even explains that the optimal intervention might require subsidies – what he calls bounties – for some commodities.)  ...

In particular, when there are a set of commodities with fixed taxes (including commodities that cannot be taxed at all), he shows that there should be an equi‐proportionate reduction in the goods for which taxes can be freely set. In the case of linear and separable demand and supply curves (quadratic utility functions) and small taxes, he shows that optimal taxes are inversely related to the compensated elasticity of demand and supply. ...  Ramsey, however, went beyond this into an exploration of third best economics. He asked, what happens if there are some commodities that cannot be taxed, or whose tax rates are fixed. He argues that the same result (on the equi‐proportionate reduction in consumption) holds for the set of goods that can be freely taxed. ...
To boil this down a bit, there is a common intuition that the "best" commodity tax will be a tax of the same rate across most or all goods. Ramsey instead emphasizes that if the goal of a tax is to collect money while having that tax distort other behavior as little as possible, then you need to think about demand and supply for each commodity and how they will be affected by a tax. This way of thinking about "optimal taxation" has turned out to have very broad applicability. 
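In its simplest textbook form (independent demands, small taxes), the inverse elasticity rule Stiglitz describes can be written as follows; the notation here is the standard modern one, not Ramsey's own:

```latex
% Inverse elasticity rule (stylized textbook case with independent demands):
% the optimal ad valorem tax on good i is inversely proportional to its
% compensated elasticity of demand.
\[
\frac{t_i}{p_i} \;\propto\; \frac{1}{\varepsilon_i},
\qquad
\varepsilon_i = -\frac{\partial q_i}{\partial p_i}\,\frac{p_i}{q_i}
\]
```

Intuitively, goods whose quantities barely respond to price can bear higher tax rates with little distortion, while taxing highly elastic goods drives large changes in behavior per dollar raised.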

Ramsey's basic model was not looking at issues of inequality, but his basic framework can readily be adapted to do so.  Stiglitz describes how "at the centre of modern optimal tax theory and the work growing out of Ramsey lies a balancing of distributional and efficiency concerns." Nor was Ramsey's model looking at problems with markets like issues of pollution externalities, which his adviser A.C. Pigou was already discussing at that time, but the idea of thinking about how taxes on goods can be adapted to address externalities flows naturally from Ramsey's framework. If there is a concern that taxes on labor might encourage some people to shift away from taxed labor to untaxed leisure, one can build on Ramsey's approach to advocate taxing goods that are associated with leisure. It turns out that when a government is thinking about how to regulate the prices charged by a public utility, Ramsey taxes become an important part of thinking about how to balance costs and benefits. 

Orazio P. Attanasio described the influence of Ramsey's 1928 paper, in "Frank Ramsey's A Mathematical Theory of Saving."  He writes: 
In 1928, Frank Ramsey, a British mathematician and philosopher, at the time aged only 25, published an article (Ramsey, 1928) whose content was utterly innovative and sowed the seeds of many subsequent developments.  ... The article sets out to answer an interesting and important question: ‘how much of its income should a nation save?’.
The basic tradeoff here is that more consumption in the present leads to less saving and investment in future growth. Over long periods of time, or successive generations, one wants to think about a rate of saving that makes sense from the standpoint of each generation. Moreover, Ramsey brings into the picture issues like technical progress, population growth, capital wearing out or being destroyed, and so on. He discusses what we now call an "overlapping generations" model, where even if individuals only care about their own lifetimes, the overlap of successive generations keeps propelling us forward with concerns about the future. As Attanasio points out, a number of later prominent ideas in economics are a reworking and extension of ideas from Ramsey's 1928 article. Here are some examples he mentions: 
The most obvious anticipation in the article is its central theme and result: the optimal growth model, as formulated by Ramsey, is very similar to what has become a basic workhorse of modern macroeconomics. In a 1998 interview Cass (1998) recounts that he read Ramsey's paper after writing the first chapter of his PhD dissertation in 1963, which eventually became the review of economic studies article (Cass, 1965). Talking about his celebrated 1965 article Cass (1998) says: ‘In fact I always have been kind of embarrassed because that paper is always cited although now I think of it as an exercise, almost re‐creating and going a little beyond the Ramsey model’ (p. 534). ... 

When considering the optimal saving problem, Ramsey uses as a first building block an intertemporal consumption problem which essentially defines the permanent income model. ... These intuitions and this way of modelling were written 30 years before the publication of Friedman's (1957) book and Modigliani and Brumberg's (1954) seminal paper on the life cycle model of consumption.

Analogously, the brief description of an economy populated by individuals with ‘different birthdays’ and how their individual savings aggregate into the supply of capital is essentially a description of the overlapping generation model which Samuelson (1958) developed 30 years later and subsequently enriched and studied by Diamond (1965).
Ramsey developed an abdominal infection, underwent surgery, but died in the hospital. He was an avid swimmer, and one possibility is that he picked up a liver infection from swimming in the river. His early death is one of the biggest intellectual what-if stories of the twentieth century. 


Wednesday, June 3, 2020

Are Firms Too Risk-Averse?

There's a plausible argument that from the point of view of investors, firms are too risk-averse. After all, an investor can diversify across lots of firms. If some firms do well and some go broke, the overall return on the investment portfolio can be just fine. But from the standpoint of the managers running a company, the picture looks rather different. They want to protect their own jobs, and the jobs of people working for them. For them, a risky strategy that might have a big upside, but also a substantial possibility of failure, will not be to their personal taste. Indeed, top managers may fear that negative news of even a single investment project that ended up doing poorly could end up being used as a reason for new management to take over. From the point of view of top managers, focusing on low-risk activities like minor tweaks to existing products, me-too versions of products from competitors, and cost-cutting may look personally more attractive than trying to develop and launch a new product. If managers of many companies follow this logic, the level of risk-taking and innovation in the economy as a whole will be reduced. 

Dan Lovallo, Tim Koller, Robert Uhlaner, and Daniel Kahneman present some evidence and arguments on this point in "Your Company Is Too Risk-Averse" (Harvard Business Review, March-April 2020). Some of the evidence is from surveys of managers. They write: 

In a 2012 McKinsey global survey, for example, two of us (Koller and Lovallo) presented the following scenario to 1,500 managers: You are considering a $100 million investment that has some chance of returning, in present value, $400 million over three years. It also has some chance of losing the entire investment in the first year. What is the highest chance of loss you would tolerate and still proceed with the investment? A risk-neutral manager would be willing to accept a 75% chance of loss and a 25% chance of gain; one-quarter of $400 million is $100 million, which is the initial investment, so a 25% chance of gain creates a risk-neutral value of zero. Most of the surveyed managers, however, demonstrated extreme loss aversion. They were willing to accept only an 18% chance of loss, much lower than the risk-neutral answer of 75%. In fact, only 9% of them were willing to accept a 40% or greater chance of loss. What’s more, the size of the investment made little difference to the degree of loss aversion. When the initial investment amount was lowered to $10 million, with a possible gain of $40 million, the managers were just as cautious ... 
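The break-even arithmetic in the survey scenario is easy to verify; here's a minimal sketch:

```python
# The survey's risk-neutral benchmark: a $100M stake with some chance of
# returning $400M in present value, else a total loss of the stake.
def expected_value(p_loss, stake=100, payoff=400):
    """Expected net value in $ millions at a given probability of loss."""
    return (1 - p_loss) * payoff - stake

# A risk-neutral manager breaks even at a 75% chance of loss:
p_break_even = 1 - 100 / 400
assert p_break_even == 0.75 and expected_value(p_break_even) == 0

# Yet only 9% of surveyed managers would accept a 40% chance of loss,
# even though the expected value there is strongly positive:
print(f"EV at 40% loss chance: ${expected_value(0.40):.0f} million")
```

Every rejected project with a loss probability between 18% and 75% is a positive-expected-value bet left on the table, which is the "risk aversion tax" the authors go on to describe.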
Their argument focuses in particular on middle- and lower-level managers. After all the CEO may be evaluated based on the overall corporate performance, with a mixture of successes and failures. But managers farther down the food chain may have oversight of only one or two main projects, and if their next promotion or bonus is going to be based on the success or failure of this one project, they have a strong incentive to play it (reasonably) safe. The authors offer anecdotal evidence that the "risk aversion tax" from their behavior may be as high as one-third. They write: 
So how much money is left on the table owing to risk aversion in managers? Let’s assume that the right level of risk for a company is the CEO’s risk preference. The difference in value between the choices the CEO would favor and those that managers actually make is a hidden tax on the company; we call it the risk aversion tax, or RAT. Companies can easily estimate their RAT by conducting a survey, like Thaler’s, of the risk tolerance of the CEO and of managers at various levels and units.

For one high-performing company we worked with, we assessed all investments made in a given year and calculated that its RAT was 32%. Let that sink in for a moment. This company could have improved its performance by nearly a third simply by eliminating its own, self-imposed RAT. It did not need to develop exciting new opportunities, sell a division, or shake up management; it needed only to make investment decisions in accordance with the CEO’s risk tolerance rather than that of junior managers.
Their solutions involve making risk explicit. In many companies, it can be hard to get a new project approved if you start talking about the full range of risks involved. It's a lot easier to set an expectation for what "success" might be, and to try to get that bar set relatively low, so that "success" is more likely to happen. But companies that are more up-front about the range of likely outcomes--from the probabilities of a negative return to the probabilities of large gains--are more likely to see benefits from taking additional risks. And it helps if middle- and low-level managers are evaluated only on what they can personally control: for example, the lower-level managers should get credit within the organization if a new project is carried through on-time and on-plan (the factors they can control), even if it ends up not making any money (the factors they couldn't control).

The overall goal, as the authors write, is that "companies need to switch from processes predicated on managing outcomes to those that encourage a rational calculation of the probabilities."

Tuesday, June 2, 2020

Some Economics of the 1968 US Riots

"The Kerner report was the final report of a commission appointed by the U.S. President Lyndon B. Johnson on July 28, 1967, as a response to preceding and ongoing racial riots across many urban cities, including Los Angeles, Chicago, Detroit, and Newark. These riots largely took place in African American neighborhoods, then commonly called ghettos. On February 29, 1968, seven months after the commission was formed, it issued its final report. The report was an instant success, selling more than two million copies. ...  The Kerner report documents 164 civil disorders that occurred in 128 cities across the forty-eight continental states and the District of Columbia in 1967 (1968, 65). Other reports indicate a total of 957 riots in 133 cities from 1963 until 1968, a particular explosion of violence following the assassination of King in April 1968 (Olzak 2015)."

The September 2018 issue of the Russell Sage Foundation Journal of the Social Sciences includes a 10-paper symposium from a range of social scientists concerning "The Fiftieth Anniversary of the Kerner Commission Report." The introductory essay by Susan T. Gooden and Samuel L. Myers Jr., "The Kerner Commission Report Fifty Years Later: Revisiting the American Dream" (pp. 1–17), does an excellent job of setting the historical context and contemporary reactions to the report, along with offering some comparisons that I at least had not seen before about differences between rioting and non-rioting cities over time.

[This post is republished from my earlier post of September 6, 2018, when this issue came out, with weblinks refreshed and a touch of editing.]

The opening paragraph above is quoted from the Gooden/Myers paper. As they point out, perhaps the most commonly repeated comment from the report was that it baldly named white racism as an underlying cause of the problems. As one example, to quote from the Kerner report: “What white Americans have never fully understood—but what the Negro can never forget—is that white society is deeply implicated in the ghetto. White institutions created it, white institutions maintain it, and white society condones it.”

Although the report was widely disseminated, it was not popular. As Gooden and Myers report:
"President Johnson was enormously displeased with the report, which in his view grossly ignored his Great Society efforts. The report also received considerable backlash from many whites and conservatives for its identification of attitudes and racism of whites as a cause of the riots. `So Johnson ignored the report. He refused to formally receive the publication in front of reporters. He didn’t talk about the Kerner Commission report when asked by the media,' and he refused to sign thank-you letters for the commissioners (Zelizer 2016, xxxii–xxxiii)."
Other contemporary critics of the report complained that by emphasizing white racism, the report seemed to imply that changes in the beliefs of whites should be the main topic, while not paying attention to institutions and behaviors. Gooden and Myers cite a pungent comment from the American political scientist Michael Parenti, who wrote back in 1970:
"The Kerner Report demands no changes in the way power and wealth are distributed among the classes; it never gets beyond its indictment of “white racism” to specify the forces in the political economy which brought the black man to riot; it treats the obviously abominable ghetto living conditions as “cause” of disturbance but never really inquires into the causes of the “causes,” viz., the ruthless enclosure of Southern sharecroppers by big corporate farming interests, the subsequent mistreatment of the black migrant by Northern rent-gouging landlords, price-gouging merchants, urban “redevelopers,” discriminating employers, insufficient schools, hospitals and welfare, brutal police, hostile political machines and state legislators, and finally the whole system of values, material interests and public power distributions from the state to the federal Capitols which gives greater priority to “haves” than to “have-nots,” servicing and subsidizing the bloated interests of private corporations while neglecting the often desperate needs of the municipalities. ... To treat the symptoms of social dislocation (e.g., slum conditions) as the causes of social ills is an inversion not peculiar to the Kerner Report. Unable or unwilling to pursue the implications of our own data, we tend to see the effects of a problem as the problem itself. The victims, rather than the victimizers, are defined as “the poverty problem.” It is a little like blaming the corpse for the murder." 
Gooden and Myers point to another issue with the report that social scientists immediately pointed out. The members of the Kerner Commission made personal visits to cities that had experienced rioting, and made an effort to talk with people in the affected communities. But they made essentially no effort to visit cities that had not experienced riots. It's hard to draw inferences about the causes of riots without making some effort to look at what differs across rioting and non-rioting cities. 

They offer a preliminary look at some of the economic differences across rioting and non-rioting cities. For example, this figure shows the black-white ratio of family incomes in rioting (blue) and nonrioting (orange) cities. The ratio hasn't moved much in the cities that had 1960s riots, while it increased substantially in the cities without riots. Indeed, the cities that did not riot have had slightly more equal black-white income ratios for most of the last few decades.  


These sorts of patterns are open to a range of interpretations. Perhaps cities were less likely to riot in the late 1960s if more immediate progress in black-white incomes was happening. Perhaps something about having a higher black-white income ratio at the start made rioting more likely. Perhaps rioting led to an outmigration of middle- and upper-class families of both races, which could contribute to a stagnation of the black-white ratio. The cities that rioted were mainly in the northeast, midwest, and west, and so political, social, and economic differences across the geography of the US surely also played a role. 

In other measures like the black-white ratios of unemployment rates, high school graduation rates, and poverty rates, the rioting and non-rioting cities look very similar. As Gooden and Myers write: 
"This evidence points to a possible flaw in the Kerner Commission’s report. Although the evidence clearly points to a divided America—a divide that continues today—the trajectories of the riot cities and the nonriot cities are remarkably similar. Thus, it is a bit more difficult to embrace the conclusion that this racial divide was the cause of the riots given that the racial divide was evident in both riot cities and nonriot cities and perhaps was even more pronounced in the nonriot cities than in the riot cities before the riots."
For a take on the Kerner Commission report earlier this year, see "Black/White Disparities: 50 Years After the Kerner Commission" (February 27, 2018). Here's the Table of Contents of this issue of the Russell Sage Foundation Journal, with links to the papers:

Monday, June 1, 2020

Sabotaging the Competition: A Home Construction Example

Why are monopolies bad? In a standard intro-econ textbook, the problem of monopolies is that because of the lack of competition, they can reduce output from what it would otherwise be, jack up prices, and thus earn higher profits. Some books also mention that monopolies may have less incentive for quality or innovation--again, because of a lack of competition.

James A. Schmitz, Jr. at the Federal Reserve Bank of Minneapolis refers to this standard intro-econ model as a "toothless" monopoly, because in that model, all the monopoly firm can do is raise prices. He argues that it doesn't capture what bothers most people about monopoly. There's also a concern that monopolies take actions to sabotage and even destroy their rivals--especially the rivals who might provide low-cost competition. Moreover, monopolies may form concentrations of power with other monopolies or with political allies to accomplish this goal, and in this way corrupt institutions of law and politics as well. 

Schmitz is in the middle of a substantial research project that encompasses both the intellectual history of these two views of monopoly and also a set of concrete examples. As a work-in-progress, he has posted "Monopolies Inflict Great Harm on Low- and Middle-Income Americans" (Federal Reserve Bank of Minneapolis, Staff Report No. 601, May 2020), which is nearly 400 pages long but described as the "first essay" in a collection of essays to be produced in the next year or two. It can usefully be  read as a preliminary overview of an ongoing research project. 

However, it's worth noting that Schmitz doesn't focus on the conventional everyday meaning of "monopoly"--that is, a super-big company that dominates sales within its market. Instead, he uses "monopoly power" to refer to situations in which a group (not just a single large firm) acts to restrict competition. Thus, his main examples are cases where existing producers have exerted political power to sabotage lower-cost competitors, including residential construction, credit cards, legal services, repair services, dentistry, hearing aids, eye care, and others.  

Here, I'll just focus on sketching his discussion of residential construction. Schmitz writes: "The most extensively used technology, by far, is often called the stick-built technology because sticks (two by fours) visually dominate the construction sites. This technology has been used for centuries. Homes are built outside, with a highly labor-intensive technology. It also requires highly-skilled-labor. The other technology is factory-production of homes. This technology substitutes capital for labor and also semi-skilled workers for highly skilled workers."

There has been a battle going back about a century between stick-built technology and factory technology for residential construction. Schmitz traces the early legal conflicts back to the late 1910s. Here's how Schmitz describes a 1947 article by Thurman Arnold, who had been Assistant Attorney General for Antitrust in the late 1930s and early 1940s: 
When Arnold left the DOJ, he did not stop challenging monopolies in traditional construction. He did not stop trying to protect producers of factory-built homes. In “Why We Have a Housing Mess,” Arnold (1947) began with a picture of a homeless Pacific War veteran, with his wife and five children, sitting on the street with their belongings (see Figure 2). The caption said: “This Pacific War veteran and his family are homeless because we have let rackets, chiseling and labor feather-bedding block the production of low-cost houses.” Arnold began his text this way: “Why can’t we have houses like Fords [i.e., automobiles]? For a long time, we have been hearing about mass production of marvelously efficient postwar dream houses, all manufactured in one place and distributed like Fords. Yet nothing is happening. The low-cost mass production house has bogged down. Why? The answer is this: When Henry Ford went into the automobile business, he had only one organization to fight [an organization with a patent] . . . But when a Henry Ford of housing tries to get into the market with a dream house for the future, he doesn’t find just one organization blocking him. Lined up against him are a staggering series of restraints and private protective tariffs."
Essentially, Arnold and others (including a substantial multi-author research project at the University of Chicago in the late 1940s) claimed that while no one explicitly passed rules to make factory-built housing illegal, building codes were carefully written in a way that had that effect. 

Some standard issues were that local building codes were different everywhere, which was fine for local stick-built construction firms, but posed a problem for a factory producer hoping to ship everywhere. There was often a distinction in building codes about living in "trailers" or in permanent structures, in which a "double-wide" home brought to the site in two parts was treated as a "trailer," even when it was installed permanently on-site and looked much the same as a stick-built home of similar size. 

By the 1960s, economic pressure had gathered in favor of factory-built homes, which are typically much cheaper on a per-square-foot basis. But in the 1970s, regulators pushed back hard, with the newly-created US Department of Housing and Urban Development playing a big role. Here are some snippets from how Schmitz tells the story. 
Many housing industry observers noted that stick-builders were facing such threats from factory-built home producers in the 1960s. Though they did not have direct measures of productivity, they compared the costs and prices of new, site-built homes to the costs and prices of other consumer durables. Alexander Pike (1967), an architect, compared the prices of new homes and the prices of new cars from the 1920s. Though he did not have productivity statistics, his point was clear: the productivity of construction badly lagged that of the car industry. At roughly the same time, the research department of Morgan Guaranty Trust Company (1969) wrote about this productivity divergence when discussing the potential for industrialized housing ... in “Factory-Built Houses: Solution for the Shelter Shortage?” They noted the serious problems facing the stick built industry as its productivity lagged. They showed that, over the period 1948-68, the prices of consumer durables rose roughly 22 percent, while residential construction costs rose roughly 100 percent.
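As a back-of-the-envelope check, the cumulative 1948-68 price changes quoted above can be converted into annualized growth rates. This is a simple arithmetic sketch: the 22 percent and 100 percent figures come from the Morgan Guaranty quote, and the 20-year horizon is the stated 1948-68 period.

```python
# Annualized price growth implied by the cumulative 1948-68 changes quoted
# from Morgan Guaranty Trust (1969): consumer durables up ~22 percent,
# residential construction costs up ~100 percent, over 20 years.
years = 20
durables_annual = (1 + 0.22) ** (1 / years) - 1
construction_annual = (1 + 1.00) ** (1 / years) - 1

print(f"Consumer durables:  {durables_annual:.1%} per year")    # ~1.0%
print(f"Construction costs: {construction_annual:.1%} per year")  # ~3.5%
# Construction costs were rising more than three times as fast, year by year.
```

Compounded over decades, even a gap of a couple of percentage points per year produces exactly the kind of divergence the Morgan Guaranty researchers were worried about.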
Modular construction of single-family homes took off in the 1960s: Schmitz cites statistics that annual production "increased from roughly 100,000 units to 600,000 units." "The share of factory production of single-family residential homes began growing in the mid 1950s, rising from about 10 percent of home production to nearly 60 percent of home production by the beginning of the 1970s (where total home production equals stick-built production plus factory production)."

But the stick-built industry, assisted by local and federal regulators, pushed back: 
While the sabotage of factory housing has been going on for 100 years, there was a dramatic surge in the ferocity of this sabotage in the middle 1970s. During this period, laws were passed, and regulations implemented, that sent the factory-built housing industry into a tailspin. These regulations, and additional harmful ones introduced since the 1970s, remain on the books and mean the industry is a shell of its former self. When this new sabotage was unleashed in the middle 1970s, the producers of factory homes were well aware of it, of course. They fought the HUD and NAHB monopolies to reverse the sabotage but lost the fight. Today the members of the factory-built housing industry are unaware of this history.
As Schmitz documents, the pushback came in many forms, including regulations and subsidies. As one example: "Who knows how high the factory share would have risen if new sabotage of factory production would not have commenced in 1968. At that time, a national subsidy program was started for households buying stick-built homes (see below). Under these programs, households purchased 430,000 stick-built homes (per year) in the early 1970s." There have been court battles, and the "is it a trailer, is it a house" battle has been refought many times. For example, there is often a rule that a manufactured home must be built on a permanent and unremovable chassis--like a trailer--even though that's not what many customers would want. 

For those with a taste for irony, there were also complaints from stick-built construction firms that manufactured housing was "unfair competition" because it could be built so much less expensively. Schmitz cites estimates from the US Census Bureau in 2007 that manufactured homes are one-third the price per square foot. One suspects that if manufactured housing were encouraged and allowed to flourish, the cost advantage from economies of scale would only increase. 

The US economy is widely acknowledged to have a shortage of affordable housing. It has also, for a century, had monopolizing, competition-reducing forces that have favored more expensive stick-built housing and sabotaged the economic prospects of manufactured housing. As Schmitz points out, whatever defense one wishes to offer for these kinds of competition-restricting rules, the unavoidable fact is that the costs of the rules are carried by those at low and middle income levels, who would benefit most from lower prices. 

For readers who are interested in antitrust discussions as they apply to the FAGA companies (Facebook, Amazon, Google, and Apple), here are a couple of earlier posts that offer a starting point. 

Sunday, May 31, 2020

"To be Happy at Home is the Ultimate Result of All Ambition"

As the time of recommended stay-at-home orders and shutdowns continues, I found myself remembering this post from a couple of years ago. Being happy at home is an ongoing challenge, albeit for different reasons at different times. It first appeared back around the holiday season; here, I've edited lightly to trim the holiday references. 
_____________

I sometimes reflect on how many of us put considerable time and energy into thinking about where to live and furnishing our home--but then rush off and travel to other places to vacation, celebrate, and meet with friends.

Back in 1750, Samuel Johnson wrote in the November 10 issue of his magazine, The Rambler, "To be happy at home is the ultimate result of all ambition, the end to which every enterprise and labour tends ..." It's a thought-provoking sentiment. Many people would not describe their ambitions in this way, but instead would focus on ambition in a role outside the home and on the idea of becoming a "star" in business, politics, entertainment, social activism, or some other arena. It is of course conceptually impossible for everyone to be recognized as a star by everyone else, and so a desire for public recognition of star-status will leave most people unhappy. Being happy at home can be a difficult goal in its own way, but it does have two virtues. One is that being happy at home is based on one's own feelings and one's own ungilded personality, rather than about how one is perceived and treated by those outside one's family and close friends. The other is that being happy at home is a more broadly achievable goal for many people, unlike the evanescent dreams of fame and celebrity.

Going back further in time, the philosopher Blaise Pascal discussed a related question in 1669. He argued that we cannot be happy in our homes because when we are alone, we fall into thinking about our "weak and mortal condition," which is depressing. Rather than face ourselves and our lives squarely and honestly, we instead rush off looking for diversion. Pascal writes of how people "aim at rest through agitation, and always to imagine that they will gain the satisfaction which as yet they have not, if by surmounting certain difficulties which now confront them, they may thereby open the door to rest. Thus rolls all our life away. We seek repose by resistance to obstacles, and so soon as these are surmounted, repose becomes intolerable."

I aspire to remember and to live out the value of happiness at home. But I recognize in myself the contradiction of aiming at rest through agitation. I know that my opinion of myself, along with that of those who have known me longest and most intimately, should matter most. But I recognize in myself a desire to receive attention and plaudits from those who barely know me at all.

Here's a longer version of the comments from Johnson and Pascal. First, from Samuel Johnson, from the November 10, 1750 issue of The Rambler:   
For very few are involved in great events, or have their thread of life entwisted with the chain of causes on which armies or nations are suspended; and even those who seem wholly busied in publick affairs, and elevated above low cares, or trivial pleasures, pass the chief part of their time in familiar and domestick scenes; from these they came into publick life, to these they are every hour recalled by passions not to be suppressed; in these they have the reward of their toils, and to these at last they retire.
The great end of prudence is to give chearfulness to those hours, which splendour cannot gild, and acclamation cannot exhilarate; those soft intervals of unbended amusement, in which a man shrinks to his natural dimensions, and throws aside the ornaments or disguises, which he feels in privacy to be useless incumbrances, and to lose all effect when they become familiar. To be happy at home is the ultimate result of all ambition, the end to which every enterprise and labour tends, and of which every desire prompts the prosecution.
It is, indeed, at home that every man must be known by those who would make a just estimate either of his virtue or felicity; for smiles and embroidery are alike occasional, and the mind is often dressed for show in painted honour, and fictitious benevolence. ... The most authentick witnesses of any man's character are those who know him in his own family, and see him without any restraint, or rule of conduct, but such as he voluntarily prescribes to himself. 
And here is the longer passage from Pascal: 
"When I have set myself now and then to consider the various distractions of men, the toils and dangers to which they expose themselves in the court or the camp, whence arise so many quarrels and passions, such daring and often such evil exploits, etc., I have discovered that all the misfortunes of men arise from one thing only, that they are unable to stay quietly in their own chamber. A man who has enough to live on, if he knew how to dwell with pleasure in his own home, would not leave it for sea-faring or to besiege a city. An office in the army would not be bought so dearly but that it seems insupportable not to stir from the town, and people only seek conversation and amusing games because they cannot remain with pleasure in their own homes.
But upon stricter examination, when, having found the cause of all our ills, I have sought to discover the reason of it, I have found one which is paramount, the natural evil of our weak and mortal condition, so miserable that nothing can console us when we think of it attentively.
Whatever condition we represent to ourselves, if we bring to our minds all the advantages it is possible to possess, Royalty is the finest position in the world. Yet, when we imagine a king surrounded with all the conditions which he can desire, if he be without diversion, and be allowed to consider and examine what he is, this feeble happiness will never sustain him; he will necessarily fall into a foreboding of maladies which threaten him, of revolutions which may arise, and lastly, of death and inevitable diseases; so that if he be without what is called diversion he is unhappy, and more unhappy than the humblest of his subjects who plays and diverts himself.
Hence it comes that play and the society of women, war, and offices of state, are so sought after. Not that there is in these any real happiness, or that any imagine true bliss to consist in the money won at play, or in the hare which is hunted; we would not have these as gifts. We do not seek an easy and peaceful lot which leaves us free to think of our unhappy condition, nor the dangers of war, nor the troubles of statecraft, but seek rather the distraction which amuses us, and diverts our mind from these thoughts. ...
They fancy that were they to gain such and such an office they would then rest with pleasure, and are unaware of the insatiable nature of their desire. They believe they are honestly seeking repose, but they are only seeking agitation.
They have a secret instinct prompting them to look for diversion and occupation from without, which arises from the sense of their continual pain. They have another secret instinct, a relic of the greatness of our primitive nature, teaching them that happiness indeed consists in rest, and not in turmoil. And of these two contrary instincts a confused project is formed within them, concealing itself from their sight in the depths of their soul, leading them to aim at rest through agitation, and always to imagine that they will gain the satisfaction which as yet they have not, if by surmounting certain difficulties which now confront them, they may thereby open the door to rest.
Thus rolls all our life away. We seek repose by resistance to obstacles, and so soon as these are surmounted, repose becomes intolerable. For we think either on the miseries we feel or on those we fear. And even when we seem sheltered on all sides, weariness, of its own accord, will spring from the depths of the heart wherein are its natural roots, and fill the soul with its poison.

Thursday, May 28, 2020

How Economists and Sociologists See Racial Discrimination Differently

Economists tend to see discrimination as based on actions of individuals, who in turn are interacting in markets and society. However, sociologists do not feel the same compulsion as economists to build their theories on purposeful decision-making by individuals: "Sociologists generally understand racial discrimination as differential treatment on the basis of race that may or may not result from prejudice or animus and may or may not be intentional in nature." The Spring 2020 issue of the Journal of Economic Perspectives illustrates the difference with a two-paper symposium on racial discrimination: one essay by Lang and Spitzer from the economics side, and one by Small and Pager from the sociology side. 
As most economists learned somewhere along the way, one can think of individual motivations for discrimination as coming in two flavors: taste-based discrimination in the oeuvre of Gary Becker (Nobel 1992) or "statistical discrimination" from the writings of Edmund Phelps (Nobel 2006) and Kenneth Arrow (Nobel 1972). One can dispute how economists discuss the subject of discrimination, but it would just be false to claim that it has not been a high-priority topic of top-level economists for decades.

Taste-based discrimination is the name given to racial prejudice and animus. Statistical discrimination refers to the reality that we all make generalizations about people. Sometimes the generalizations are socially useful: Lang and Spitzer mention that people are more likely to give up their seat on the bus or subway to a pregnant woman or an elderly person, based on the statistical generalization that they are more likely to need the seat, or that health care providers are more likely to emphasize breast-cancer screening for women than for men. However, statistical discrimination can also be harmful: say, if it is based on beliefs that those of a certain race who are applying to be hired for a job or to rent an apartment are more likely to be criminals. Moreover, when statistical discrimination is based on inaccurate statistics and exaggerated concerns, it begins to look functionally similar to taste-based discrimination. 
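The logic of statistical discrimination can be made concrete with a small sketch in the spirit of Phelps's signal-extraction model (all numbers here are invented for illustration). An employer observes a noisy signal of an applicant's productivity and shrinks it toward the believed group average:

```python
# A minimal sketch of statistical discrimination in the spirit of Phelps's
# signal-extraction model. The employer's best guess about productivity is a
# weighted average of the applicant's noisy signal and the group mean, with
# the weight determined by how reliable the signal is.
def inferred_productivity(signal, group_mean, var_ability=1.0, var_noise=1.0):
    weight = var_ability / (var_ability + var_noise)
    return weight * signal + (1 - weight) * group_mean

# Two applicants with the IDENTICAL signal, from groups the employer
# believes (rightly or wrongly) have different average productivity:
same_signal = 5.0
print(inferred_productivity(same_signal, group_mean=5.0))  # 5.0
print(inferred_productivity(same_signal, group_mean=3.0))  # 4.0
# The second applicant is rated lower purely because of group statistics--
# and if the employer's belief about the group mean is mistaken or
# exaggerated, this becomes inaccurate statistical discrimination.
```

The sketch also shows why noisier signals make things worse: as `var_noise` rises, the weight on the individual signal falls and group beliefs dominate the assessment.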

In addition, economists have long pointed out that the effects of discrimination may vary based on the parties involved: for example, in the context of labor market discrimination one can look separately at discrimination by employers, by co-workers, and by customers. If the issue is discrimination by employers, one possible result is firms that are segregated by race but all selling to the same consumers. If the issue is discrimination by consumers, one result may be that whites become more likely to hold the "front-facing" jobs that deal directly with customers.  

The economic approach to discrimination, with its focus on purposeful and intentional acts by individuals, can offer some useful insights, and Lang and Spitzer give a useful overview of the research. For example, while the basic statistics show that blacks are more likely to be arrested for traffic violations, how can we know whether this is linked to prejudiced behavior by the police? One line of research has looked at traffic violations at different times of day, when there is more or less daylight. The underlying idea is that racial prejudice is more likely to manifest itself when the police can see the driver! The evidence from these studies is mixed: one study found no effect of daylight on the racial mix of traffic stops, but another found that blacks were stopped more often at night on streets with better lighting. 
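The daylight research design (sometimes called the "veil of darkness" test) can be sketched with made-up data: compare the share of stopped drivers who are black in daylight versus darkness at the same clock hour, so that driving patterns are held roughly constant. Every record below is invented for illustration.

```python
# Hypothetical illustration of the daylight ("veil of darkness") design for
# detecting bias in traffic stops: within the same clock hour, compare the
# share of stops involving black drivers when it is light vs. dark.
stops = [
    # (hour, is_daylight, driver_is_black) -- invented records
    (18, True, True), (18, True, True), (18, True, False), (18, True, False),
    (18, False, True), (18, False, False), (18, False, False), (18, False, False),
]

def black_share(records, daylight):
    subset = [r for r in records if r[1] == daylight]
    return sum(r[2] for r in subset) / len(subset)

share_light = black_share(stops, True)   # officers can see the driver
share_dark = black_share(stops, False)   # driver's race is harder to see

print(f"Black share of stops in daylight: {share_light:.0%}")  # 50%
print(f"Black share of stops in darkness: {share_dark:.0%}")   # 25%
# If stops were race-neutral, the two shares would be similar; a higher
# share in daylight is the signature of bias this design looks for.
```

Real studies, of course, use thousands of stops and adjust for location and season; the point of the comparison is simply that darkness acts as a natural experiment that hides the driver's race from the officer.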

Studies of "ban-the-box" legislation also find unexpected effects, as Lang and Spitzer point out: 
Because a higher proportion of blacks have criminal records than whites do, one might expect that preventing employers from inquiring about criminal records, at least at an early stage, would increase black employment. However, if firms cannot ask for information about criminal records, they may rely on correlates of criminal history, including being a young black man. This concern is even greater if employers tend to exaggerate the prevalence of criminal histories among black men, thus leading to inaccurate statistical discrimination. Agan and Starr (2018) investigate “ban the box” legislation in which companies are forbidden from asking job applicants about criminal background. Before such rules took effect, employers interviewed similar proportions of black and white male job applicants without criminal records. Prohibiting firms from requesting this information reduced callbacks of black men relative to otherwise similar whites. Consistent with this, Doleac and Hansen (2016) find that banning the box reduced the employment of low-skill young black men by 3.4 percentage points and low-skill young Hispanic men by 2.3 percentage points. Similarly, occupational licensing increases the share of minority workers in an occupation despite their lower pass rates on such exams (Law and Marks 2009). Prohibiting the use of credit reports in hiring reduced black employment rather than increasing it (Bartik and Nelson 2019). Taken together, these studies provide strong evidence that statistical discrimination plays an important role in hiring.
As sociologists, Small and Pager have no direct issue with this kind of work in economics: as they point out, some sociologists work in a similar vein. But their essay emphasizes that discriminatory outcomes can emerge from reasonable-sounding institutional choices and from history. 

For example, many companies, when they are hiring, encourage current workers to refer their friends and neighbors. This practice is not overtly racial. But given US patterns of residential segregation and friendship, it means that new hires will tend to reinforce the earlier racial composition of the workforce. Or consider the standard practice that when doing layoffs, last hired will be first fired. If a company has only fairly recently started hiring minority groups, then the weight of layoffs will fall more heavily on these groups. As Small and Pager write: 
It is not surprising that a national study of 327 establishments that downsized between 1971 and 2002 found that downsizing reduced the diversity of the firm’s managers—female and minority managers tended to be laid off first. But what is perhaps more surprising is that those companies whose layoffs were based formally on tenure or position saw a greater decline in the diversity of their managers; net of establishment characteristics such as size, personnel structures, unionization, programs targeting minorities for management, and many others; and of industry characteristics such as racial composition of industry and state labor force, proportion of government contractors, and others (Kalev 2014). In contrast, those companies whose layoffs were based formally on individual performance evaluations did not see greater declines in managerial diversity (Kalev 2014).
In other cases, actions taken for discriminatory reasons in the past can have effects for long periods into the future. For example, blacks are much less likely to accumulate wealth through homeownership than whites, and one reason dates back to decisions made by federal agencies in the 1930s. 
However, the Home Owners Loan Corporation and Federal Housing Administration were also responsible for the spread of redlining. As part of its evaluation of whom to help, the HOLC created a formalized appraisal system, which included the characteristics of the neighborhood in which the property was located. Neighborhoods were graded from A to D, and those with the bottom two grades or rankings were deemed too risky for investment. Color-coded maps helped assess neighborhoods easily, and the riskiest (grade D) neighborhoods were marked in red. These assessments openly examined a neighborhood’s racial characteristics, as “% Negro” was one of the variables standard HOLC forms required field assessors to record (for example, Aaronson, Hartley, and Mazumder 2019, 53; Norris and Baek 2016, 43). Redlined neighborhoods invariably had a high proportion of African-Americans. Similarly, an absence of African-Americans dramatically helped scores. For example, a 1940 appraisal of neighborhoods in St. Louis by the Home Owners Loan Corporation gave its highest rating, A, to Ladue, an area at the time largely undeveloped, described as “occupied by ‘capitalists and other wealthy families’” and as a place that was “not the home of ‘a single foreigner or Negro’” (Jackson 1980, 425). In fact, among the primary considerations for designating a neighborhood’s stability were, explicitly, its “protection from adverse influences,” “infiltration of inharmonious racial or nationality groups,” and presence of an “undesirable population” (as quoted in Hillier 2003, 403; Hillier 2005, 217).
More recent research looks at the long-term effects of the boundaries that were drawn at the time. 
The results are consistent with the HOLC boundaries having a causal impact on both racial segregation and lower outcomes for predominantly black neighborhoods. As the authors write, “areas graded ‘D’ become more heavily African-American than nearby C-rated areas over the 20th century, [a] . . . segregation gap [that] rises steadily from 1930 until about 1970 or 1980 before declining thereafter” (p. 3). They find a similar pattern when comparing C and B neighborhoods, even though “there were virtually no black residents in either C or B neighborhoods prior to the maps” (p. 3). Furthermore, the authors find “an economically important negative effect on homeownership, house values, rents, and vacancy rates with analogous time patterns to share African-American, suggesting economically significant housing disinvestment in the wake of restricted credit access” (pp. 2–3).
While economists have not totally neglected the role of institutions and history in the transmission of racial discrimination, it's fair to say that it hasn't been their main emphasis, either.  My own sense is that through most of US history, the main issue of racial discrimination was explicit white prejudice. But the balance has shifted, and current differences in racial outcomes are a difficult combination of history, institutions, and social patterns. 

For example, one theme that has emerged from earlier research by both economists and sociologists is that discrimination can reduce the incentives to gain human capital. Indeed, a group that has experienced discrimination may end up with less human capital for interrelated reasons: less access to educational resources, reduced motivation to gain human capital (because of lurking future discrimination), and reduced expectations or less support from family and peer groups. Once this dynamic has unfolded, then even employers who have zero preference for taste-based discrimination, but just hire on the basis of observable background and skills, will end up with different labor market outcomes by race.