Monday, May 20, 2019

Daniel Hamermesh: How Do People Spend Time?

For economists, the idea of "spending" time isn't a metaphor. You can spend any resource, not just money. Among all the inequalities in our world, it remains true that every person is allocated precisely the same 24 hours in each day. In "Escaping the Rat Race: Why We Are Always Running Out of Time," the Knowledge@Wharton website interviews Daniel Hamermesh, focusing on themes from his just-published book Spending Time: The Most Valuable Resource.

The introductory material quotes William Penn, who apparently once said, “Time is what we want most, but what we use worst.” Here are some comments from Hamermesh:

Time for the Rich, Time for the Poor
The rich, of course, work more than the others. They should. There’s a bigger incentive to work more. But even if they don’t work, they use their time differently. A rich person does much less TV watching — over an hour less a day than a poor person. They sleep less. They do more museum-going, more theater. Anything that takes money, the rich will do more of. Things that take a lot of time and little money, the rich do less of. ...
I think complaining is the American national pastime, not baseball. But the thing is, those who are complaining about the time as being scarce are the rich. People who are poor complain about not having enough money. I’m sympathetic to that. They’re stuck. The rich — if you want to stop complaining, give up some money. Don’t work so hard. Walk to work. Sleep more. Take it easy. I have no sympathy for people who say they’re too rushed for time. It’s their own darn fault.

Time Spent Working Across Countries
Americans are the champions of work among rich countries. We work on average eight hours more per week in a typical week than Germans do, six hours more than the French do. It used to be quite a bit different. Forty years ago, we worked about average for rich countries. Today, even the Japanese work less than we do. The reason is very simple: We take very short vacations, if we take any. Other countries get four, five, six weeks. That’s the major difference. ...
What’s most interesting about when we work is you compare America to western European countries, and it’s hard to find a shop open on a Sunday in western Europe. Here, we’re open all the time. Americans work more at night than anybody else. It’s not just that we work more; we also work a lot more at night, a lot more in the evenings, and a heck of a lot more on Sundays and Saturdays than people in other rich countries. We’re working all the time and more. ...
It’s a rat race. If I don’t work on a Sunday and other people do, I’m not going to get ahead. Therefore, I have no incentive to get off that gerbil tube, get out of it and try to behave in a more rational way. ...  The only way it’s going to be solved is if somehow some external force, which in the U.S. and other rich countries is the government, imposes a mandate that forces us to behave differently. No individual can do it. ...
We have to force ourselves, as a collective, as a polity, to change our behavior. Pass legislation to do it. Every other rich country did that between 1979 and 2000. We think the Japanese are workaholics. They’re not workaholics. Compared to us, they work less than we do, yet 40 years ago they worked a heck of a lot more. They chose to cut back. ... It’s going to be a heck of a lot of trouble to change the rules so that people are mandated to take four weeks of vacation or to take a few more paid holidays. Other countries have done it. It didn’t just happen from the day the countries were born. They chose to do it. It’s a political issue, like the most important things in life.
Time and Technology, Money Chasing Hours
Time is an economic factor; economics is about scarcity more than anything else. Because our incomes keep on going up, whereas time doesn’t go up very much, time is the increasingly important scarce factor.  ...
There’s no question technology has made us better off. Think about going to a museum. When I went to the Museum of Science and Industry in Chicago as a kid, you’d pull levers. You did a few things. These days, it’s all incredibly immersive. Great technology. But you can’t go to the museum in any less time. You can’t cut back on sleep. A few things are easier to do more quickly because of technology: cooking, cleaning, washing. I don’t know if you’re old enough to remember the semi-automatic washing machine with a wringer. Tremendous improvements in the things you do with the house. Technology has made life better, but it hasn’t saved us much time. ... So, we are better off, but it’s not that we’re going to have more time; we’re going to have less time. But we have more money chasing the same number of hours.
For a longer, more in-depth and wide-ranging discussion of these subjects, listen to the hour-long EconTalk episode in which Russ Roberts interviews Daniel Hamermesh (March 25, 2019).

Friday, May 17, 2019

Time for a Return of Large Corporation Research Labs?

It often takes a number of intermediate steps to move from a scientific discovery to a consumer product. A few decades ago, many larger and even mid-sized corporations spent a lot of money on research and development laboratories, which focused on all of these steps. Some of these corporate laboratories, like those at AT&T, Du Pont, IBM, and Xerox, were nationally and globally famous. But the R&D ecosystem has shifted, and firms are now much more likely to rely on outside research done by universities or small start-up firms. These issues are discussed in "The changing structure of American innovation: Cautionary remarks for economic growth," by Ashish Arora, Sharon Belenzon, Andrea Patacconi, and Jungkyu Suh, presented at the "Innovation Policy and the Economy 2019" conference, held on April 16, 2019, hosted by the National Bureau of Economic Research, and sponsored by the Ewing Marion Kauffman Foundation.

On the importance of corporate laboratories in earlier decades, when US productivity growth was stronger, the authors note:
From the early years of the twentieth century up to the early 1980s, large corporate labs such as AT&T's Bell Labs, Xerox's Palo Alto Research Center, IBM's Watson Labs, and DuPont's Purity Hall were responsible for some of the most consequential inventions of the century such as the transistor, cellular communication, graphical user interface, optical fibers, and a host of synthetic materials such as nylon, neoprene, and cellophane.
But starting in the 1980s, firms began to rely more on universities and on start-ups to do their R&D. Here's one of many examples, the closing of the main DuPont research laboratory: 
A more recent example is DuPont's closing of its Central Research & Development lab in 2016. Established in 1903, DuPont Central R&D served as a premiere lab on par with the top academic chemistry departments. In the 1960s, the central R&D unit published more articles in the Journal of the American Chemical Society than MIT and Caltech combined. However, in the 1990s, DuPont's attitude toward research changed as the company started emphasizing business potential of research projects. After a gradual decline in scientific publications, the company's management closed the Experimental Station as a central research facility for the firm after pressure from activist investors in 2016.
The pattern shows up in broader trends. The authors write that "the number of publications per firm fell at a rate of 20% per decade from 1980 to 2006 for R&D performing American listed firms." Business-based R&D as a share of total R&D peaked back in the 1990s, and has been falling since then. The share of business R&D which is "research," as opposed to "development," has been falling, too. 

The authors tell the story of how so much research was based in corporations, or shared by corporations and universities, for the first six or seven decades of the 20th century, and how the shift toward a greater share of research happening at universities took place. One big change was the Bayh-Dole act of 1980 (citations omitted):
Perhaps the most widely commented on reform of this era is the Bayh-Dole Patent and Trademark Amendments Act of 1980, which allowed the results of federally funded university research to be owned and exclusively licensed by universities. Since the postwar period, the federal government had been funding more than half of all research conducted in universities and owned the rights to the fruits of such research, totaling in 28,000 patents. However, only a few of these inventions would actually make it into the market. Bayh-Dole was meant to induce industry to develop these underutilized resources by transferring property rights to the universities, which were now able to independently license at the going market rate.
As universities took on more research, corporations backed off. Here are a couple of examples: 
In 1979, GE's corporate research laboratory employed 1,649 doctorates and 15,555 supporting staff, while IBM employed 1,900 staff and 1,300 doctorate holders. The comparable figures in 1998 for GE was 475 PhDs supported by 880 professional staff, and 1,200 doctorate holders for IBM. Indeed, firms whose sales grew by 100% or higher between 1980 and 1990 published 20.6 fewer scientific articles per year. This contrast between sales growth and publications drop persists into the next two decades: firms that doubled in sales between 1990 and 2000 published 12.0 fewer articles. Publications dropped by 13.3 for such fast growth firms between 2000 and 2010.
A common pattern seems to be that the number of researchers and scientific papers is falling at a number of firms, but the number of patents at these same firms has been steadily rising.  Firms are putting less emphasis on the research, and more on development that can turn into well-defined intellectual property. This pattern seems to hold (mostly) across big information technology and computer firms. The pharmaceutical and biotech firms offer an exception of an industry that has continued to publish research--probably because published research is important in regulatory approval for many of their products. 
Overall, the new innovation ecosystem exhibits a deepening division of labor between universities that specialize in basic research, small start-ups converting promising new findings into inventions, and larger, more established firms specializing in product development and commercialization. Indeed, in a survey of over 6,000 manufacturing- and service-sector firms in the U.S. ... 49% of the innovating firms between 2007 and 2009 reported that their most important new product originated from an external source.
But in this new ecosystem of innovation, has something been lost? The authors argue that the outsourcing of business R&D has contributed to the sustained sluggish pace of US productivity growth. They write:
Spinoffs, startups, and university licensing offices have not fully filled the gap left by the decline of the corporate lab. Corporate research has a number of characteristics that make it very valuable for science-based innovation and growth. Large corporations have access to significant resources, can more easily integrate multiple knowledge streams, and their research is directed toward solving specific practical problems, which makes it more likely for them to produce commercial applications. University research has tended, more so than corporate research, to be curiosity-driven rather than mission-focused. It has favored insight rather than solutions to specific problems, and partly as a consequence, university research has required additional integration and transformation to become economically useful. This is not to deny the important contributions that universities and small firms make to American innovation. Rather, our point is that large corporate labs may have distinct capabilities, which have proved to be difficult to replace. Further, large corporate labs may also generate significant positive spillovers, in particular by spurring high-quality scientific entrepreneurship.
It's not clear how to encourage a resurgence of corporate research labs. Companies and their investors seem happy with the current division of R&D labor. But from a broader social perspective, the growing separation of companies from the research on which they rely suggests that the gap between scientific research and consumer products is growing, along with the possibility that economically valuable innovations are falling into that gap and never coming into existence.

Afterwords

Those interested in this argument might also want to check "The decline of science in corporate R&D," written by Ashish Arora, Sharon Belenzon, and Andrea Patacconi, published in the Strategic Management Journal (2018, vol. 39, pp. 3–32).

For those with an interest in the broader subject of US innovation policy, here's the full list of papers presented at the April 2019 NBER conference:

Thursday, May 16, 2019

Does the Federal Reserve Talk Too Much?

For a long time, the Federal Reserve (and other central banks) carried out monetary policy with little or no explanation. The idea was that the market would figure it out. But in the last few decades, there has been an explosion of communication and transparency from the Fed (and other central banks), consisting both of official statements and an array of public speeches and articles by central bank officials. On one side, a greater awareness has grown up that economic activity isn't just influenced by what the central bank did in the past, but also by what it is expected to do in the future. But does this "open mouth" approach clarify and strengthen monetary policy, or just muddle it?

Kevin L. Kliesen, Brian Levine, and Christopher J. Waller present some evidence on the changes in Fed communication and the results in "Gauging Market Responses to Monetary Policy Communication," published in the Federal Reserve Bank of St. Louis Review (Second Quarter 2019, pp. 69-92). They start by describing the old ways, quoting an exchange between John Maynard Keynes and Bank of England Deputy Governor Sir Ernest Harvey on December 5, 1929:
KEYNES: Arising from Professor Gregory's questions, is it a practice of the Bank of England never to explain what its policy is?
HARVEY: Well, I think it has been our practice to leave our actions to explain our policy.
KEYNES: Or the reasons for its policy?
HARVEY: It is a dangerous thing to start to give reasons.
KEYNES: Or to defend itself against criticism?
HARVEY: As regards criticism, I am afraid, though the Committee may not all agree, we do not admit there is a need for defence; to defend ourselves is somewhat akin to a lady starting to defend her virtue.
From 1967 to 1992, the Federal Open Market Committee released a public statement 90 days after its meetings. The FOMC then started releasing statements right after meetings, at first only sometimes. Here's a figure showing how the length of these statements has expanded over time, with the shaded area showing the period of "unconventional monetary policy" during and after the Great Recession.

As one example,

[F]ollowing the August 9, 2011, meeting, the policy statement stated the following:
"The Committee currently anticipates that economic conditions—including low rates of resource utilization and a subdued outlook for inflation over the medium run—are likely to warrant exceptionally low levels for the federal funds rate at least through mid-2013."
In this case, the FOMC's intent was to signal to the public that its policy rate would remain low for a long time in order to spur the economy's recovery.
Here's a count of the annual "remarks" (speeches, interviews, testimony) by presidents of the regional Federal Reserve banks, members of the Board of Governors, and the chair of the Fed:



Here are some comments about Fed communication from the article that stood out to me:
"Speeches have become important communication events. Chairman Greenspan's new economy speech in 1995 and his "irrational exuberance" speech in 1996 were among his more notable speeches. Chairman Ben Bernanke also gave notable speeches during his tenure. Two that standout are his "Deflation: Making Sure 'It' Doesn't Happen Here" speech in 2002 and his global saving glut speech in 2005. ...
One of the key communication innovations during the Bernanke tenure was the public release of individual FOMC participants' expectations of the future level of the federal funds rate. Once a quarter, with the release of the SEP [Summary of Economic Projections], each FOMC participant—anonymously—indicates their preference for the level of the federal funds rate at the end of the current year, at the end of the next two to three years, and over the "longer run." These projections are often termed the FOMC "dot plots." According to the survey, both academics and those in the private sector found the dot plots of limited use as an instrument of Fed communication (more "useless" than "useful"). One-third of the respondents found the dot plots "useful or extremely useful," 29 percent found them "somewhat useful," and 38 percent found them "useless or not very useful." ...
We find that Fed communication is associated with changes in prices of financial market instruments such as Treasury securities and equity prices. However, this effect varies by type of communication, by type of instrument, and by who is doing the speaking. Perhaps not surprisingly, we find that the largest financial market reactions tend to be associated with communication by Fed Chairs rather than by other Fed governors and Reserve Bank presidents and with FOMC meeting statements rather than FOMC minutes.
It's probably impossible for a 21st century central bank to operate with what used to be an unofficial motto attributed to the long-ago Bank of England: "Never explain, never apologize." Just for purposes of political legitimacy, and for maintaining the independence of the central bank, a greater degree of transparency and explanation is needed. But if the choice is between the risk of  instability from financial markets making predictions in a situation of very little central bank disclosure, or the risk of instability from financial markets making predictions in a situation with the current level of central bank disclosure, the current level seems preferable. The authors write:
The modern model of central bank communication suggests that central bankers prefer to err on the side of saying too much rather than too little. The reason is that most central bankers believe that clear and concise communication of monetary policy helps achieve their goals.

Wednesday, May 15, 2019

Alice Rivlin, 1931-2019, In Her Own Words

Alice Rivlin, who died yesterday, was a legend in the Washington policy community. In "Alice Rivlin: A career spent making better public policy," Fred Dews interviewed Rivlin for the Brookings Cafeteria Podcast on March 8, 2019.

If you would like some additional detail about Rivlin's career, there's a shorter interview from 1998 by Hali J. Edison, originally published in the newsletter of the Committee on the Status of Women in the Economics Profession (although a more readable reprint of the interview is here). A 1997 interview by David Levy of the Minneapolis Fed is here. If you want more Rivlin, here's an hour-long podcast she did with Ezra Klein, "Alice Rivlin, queen of Washington's budget wonks," from May 2016.

Rivlin was an economics major at Bryn Mawr College. From the Edison interview:
I wrote my undergraduate honors thesis on the economic integration of Western Europe, which was a pretty prescient topic choice in 1952. I even had a discussion of European monetary union! By then I was sufficiently hooked to be thinking about graduate school, but I went to Europe for a year first, where I had a junior job in Paris working on the Marshall Plan.
She entered Harvard's PhD program in economics in the 1950s. Here are some thoughts about graduate study and the academic job market at that time, from the Edison interview:
Harvard was having a hard time adjusting to the idea of women in the academy. Indeed, since I was already focused on policy, I applied first to the graduate school of public administration (now The Kennedy School), which rejected my application on the explicit grounds that a woman of marriageable age was a "poor risk." I then applied to the economics department, which had about 5 per cent females in the doctoral program. They were just working up their courage to allow women to be teaching fellows and tutors in economics. I taught mixed classes, but initially was assigned only women tutees. One of my tutees wanted to write an honors thesis on the labor movement in Latin America--a subject on which one of my male colleagues had considerable expertise. He was willing to supervise my young woman if I would take one of his young men. However, the boy's senior tutor objected to the switch on the grounds that being tutored by a woman would make a male student feel like a second class citizen. People actually said things like that in those days!

The second year that I taught a section of the introductory economics course, I was expecting a baby in March and did not teach the spring semester. The man who took over my class announced to the class that, since no woman could teach economics adequately, he would start over and the first semester grades would not count. It was an exceptionally bright class and I had given quite a few "A's," so the students were upset. The department chair had to intervene.

In retrospect, the amazing thing was that the women were not more outraged. I think we thought we were lucky to be there at all. Outwitting the system was kind of a game. One of the university libraries was closed to women, and its books could not even be borrowed for a female on inter-library loan. I don't remember being upset. If I needed a book, I just got a male friend to check it out for me. ...

Realistically, moreover, academic opportunities were limited for my generation of women graduate students. Most major universities did not hire women in tenure track positions. Early in my career (about 1962), the University of Maryland was looking for an assistant professor in my general area. I was invited by a friend on the faculty to give a seminar and then had an interview with the department chairman. He was effusive in his praise for my work and said how sorry he was that they could not consider me for the position. I asked why not, and he said that the dean had expressly forbidden their considering any women. That wasn't illegal at the time, so we both expressed our regrets, and I left with no hard feelings.
She ended up at the Brookings Institution. In the late 1960s came a stint at the Department of Health, Education and Welfare during the Johnson administration, then back to Brookings. In the mid-1970s, Congress created the Congressional Budget Office, which Rivlin ran from 1975 to 1983. Here's Rivlin's description of how she was chosen as the original director, from the Dews interview:
 I was the candidate of the Senate. They, rather stupidly, had two separate search processes, one in the Senate and one in the house. I told them they should never do that again, and they haven't. But that left them with two candidates. I was the candidate of the Senate and a very qualified man named Sam Hughes, who had been the deputy at OMB—no, at the Government Accounting Office— was the other candidate. But the chairman of the House Budget Committee was a man named Al Ullman, and Mr. Ullman had said in an off moment, over his dead body was a woman going to get this job. So, there was kind of a standoff, and then it was solved by an accidental event. The chairman of Ways and Means was a powerful congressman from Arkansas named Wilbur Mills, and he was a mover and shaker in the Congress and a very intelligent man. But he had a weakness—he was an alcoholic. And one night he and an exotic dancer named Fanne Fox were proceeding down Capitol Hill toward the Tidal Basin in his car and Fanne leapt out of the car and into the Tidal Basin. She didn't drown in the Tidal Basin—it's quite shallow—but it was a scandal and Wilbur Mills had to resign. And Al Ullman, chairman of the Budget Committee, was ranking member on Ways and Means, so he moved up. And that left a new chairman who wasn't committed to the previous process, Brock Adams, and he said to Senator Muskie, who was my sponsor, if you want Rivlin it's okay with me. So, I owe that job to Fanne Fox.
Rivlin later ran the Office of Management and Budget during the Clinton administration in the mid-1990s. From 1996 to 1999 she was vice chair of the Federal Reserve Board of Governors. Here's her description of the switch, from the Levy interview:
Off and on over my career, I've been asked if I wanted to be on the Federal Reserve, usually when I was doing something else that I loved doing. One time I was running the Congressional Budget Office. I was doing something very exciting that I wanted to go on doing. And then later, when I was in the Clinton administration, I was asked about the Fed, but I was fully engaged at the Office of Management and Budget and didn't want to leave that. But after I'd been there for almost four years, it did seem, perhaps, time for a change.
For some reason, that description makes me smile. For some people, being on the Fed is a once-in-a-lifetime opportunity. But if you have the capabilities and judgement of Alice Rivlin, it's an opportunity that gets offered to you every few years, until the time is right.  From 1998 to 2001, Rivlin was chair of the District of Columbia Financial Responsibility and Management Assistance Authority, which had legal authority to oversee the finances of the District of Columbia. 

Along the way, Rivlin went back to Brookings a few times, where she started her career 62 years ago in 1957. She taught classes at Georgetown and gave talks and wrote. Rivlin was working on one more book, hoping to publish it this fall. I hope it was close enough to complete that economists and everyone else can hear from her one more time. 

Tuesday, May 14, 2019

Are Firms Doing a Lousy Job in How They Hire?

In a lot of economic models, firms decide to hire based on whether they need more workers to meet the demand for their products; in the lingo, labor is a "derived demand," derived from the desired level of output. Beyond that, economic models often don't pay much attention to the details of how hiring happens, assuming that profit-maximizing firms will figure out relatively cost-effective ways of gathering and keeping the skills and workers they need. But what if that hypothesis is wrong?

Peter Cappelli thinks so, and writes "Your Approach to Hiring Is All Wrong" in the May-June 2019 issue of the Harvard Business Review.  He writes:
Only about a third of U.S. companies report that they monitor whether their hiring practices lead to good employees; few of them do so carefully, and only a minority even track cost per hire and time to hire. ... Employers also spend an enormous amount on hiring—an average of $4,129 per job in the United States, according to Society for Human Resource Management estimates, and many times that amount for managerial roles—and the United States fills a staggering 66 million jobs a year. Most of the $20 billion that companies spend on human resources vendors goes to hiring.

One big change that Cappelli emphasizes is a shift from filling job vacancies internally to filling them externally. The old working assumption was to hire from within, but in the last few decades, the working assumption seems to be that hiring from outside is preferable. Cappelli writes:
In the era of lifetime employment, from the end of World War II through the 1970s, corporations filled roughly 90% of their vacancies through promotions and lateral assignments. Today the figure is a third or less. When they hire from outside, organizations don’t have to pay to train and develop their employees. Since the restructuring waves of the early 1980s, it has been relatively easy to find experienced talent outside. Only 28% of talent acquisition leaders today report that internal candidates are an important source of people to fill vacancies—presumably because of less internal development and fewer clear career ladders. ... Companies hire from their competitors and vice versa, so they have to keep replacing people who leave. Census and Bureau of Labor Statistics data shows that 95% of hiring is done to fill existing positions. Most of those vacancies are caused by voluntary turnover. LinkedIn data indicates that the most common reason employees consider a position elsewhere is career advancement—which is surely related to employers’ not promoting to fill vacancies.
There doesn't seem to be evidence that hiring from outside is better. What evidence does exist suggests that internal hires get up the learning curve faster, and often don't need as much of an immediate pay bump. If you persuade someone to leave their current employer by offering more money, what you get is a worker whose top priority is "more money," rather than work challenges and career opportunities. ("As the economist Harold Demsetz said when asked by a competing university if he was happy working where he was: 'Make me unhappy.'”)

A common emphasis of modern labor markets is to have a big "funnel," with lots of people applying for jobs but only maybe 2% eventually getting a job. But making the funnel as big as possible means that you face the costs of sorting through a very large number of applicants. And it turns out that lots of managers who are perfectly fine at running a business aren't necessarily all that good at evaluating job applicants.

It turns out that college grades aren't a great predictor of future job performance. Interviews by managers aren't a great predictor, either. There tend to be lots of biases about who the interviewer would choose as a friend with shared interests and cultural background, but not necessarily about who will turn out to be the best hire. There are lots of newfangled machine learning techniques that purport to guide hiring, but they are recent enough that it's not clear what kind of workforces they ultimately end up producing.

So what does work?

1) Test the actual skills that will be useful on the job.

2) Consider promoting and filling positions from within.

3) Give applicants a realistic preview of what the job actually involves. This is old-style advice, but some companies like Google and Marriott have set up online games that give applicants a sense of the kinds of decisions and tasks the job would require of them.

4) Evaluate hiring by following up on how employees perform. Yes, employee performance in big organizations can be hard to measure, but some basic approaches are available and underused. Which employees quit? Which employees are absent a lot? Which employees qualify for performance-based raises? Or just ask the supervisor whether they would hire that person again.

In a nearby article in the same issue of HBR, Dane E. Holmes of Goldman Sachs describes how they hire 3,000 summer interns each year, thus collecting a talent pool they hope will drive the company in the future. Rather than having many different people try to carry out many different interviews at many different locations, Holmes describes a different approach:
"[W]e decided to use `asynchronous' video interviews—in which candidates record their answers to interview questions—for all first-round interactions with candidates. Our recruiters record standardized questions and send them to students, who have three days to return videos of their answers. This can be done on a computer or a mobile device. Our recruiters and business professionals review the videos to narrow the pool and then invite the selected applicants to a Goldman Sachs office for final-round, in-person interviews. (To create the video platform, we partnered with a company and built our own digital solution around its product.)"
This approach allows the company to reach out to a broader group of applicants, to standardize the interview process, to give applicants a sense of the sorts of issues that arise at this employer, to test the ability of applicants to respond to these sorts of issues, and to allow the first round of applicants to be evaluated in the same way. Goldman Sachs can also use the results to help match applicants to appropriate roles within the company.
We seem to be living in an economy with very low unemployment rates, and where lots of jobs are being advertised, but where actually being hired is often a costly process for both applicants and employers. Moreover, it's an economy that seems relatively full of outside options for shifting to other employers, but relatively light on inside options for expanding skills and building a career with one's current employer. A job market in a dynamic economy will always have some element of musical chairs, as people shift between jobs, but it should also encourage lasting matches between an employee and an employer when the fit is a good one.

Monday, May 13, 2019

The Origin of "Third World" and Some Ruminations

Back in the late 1970s when I was first reading about the world economy in any serious way, it was still common to describe the world as divided into "first world" market-driven high income economies, "second world" command-and-control economies, and "third world" low-income countries. Jonathan Woetzel offers a commentary on the sources of that nomenclature, and how outdated it has come to sound, in "From Third World To First In Class: Rapid economic growth is blurring the distinctions among developing, emerging and advanced countries," appearing in the most recent Milken Institute Review (Second Quarter 2019, pp. 22-33).  Woetzel writes:
When historians in the distant future look back at our era, the name Alfred Sauvy may appear in a footnote somewhere. Sauvy was a French demographer who coined the term “third world” in a magazine article in 1952, just as the Cold War was heating up. His point was that there were countries not aligned with the United States or the Soviet Union that had pressing economic needs, but whose voices were not being heard.
Sauvy deliberately categorized these countries as inferior: “tiers monde” (or third world) was an explicit play on “tiers état” (third estate), the ragged assembly of peasants and bourgeoisie under France’s ancien régime that was subservient to the monarchy (the first estate) and the nobility (the second). “The third world is ignored, exploited and mistrusted, just like the third estate,” Sauvy wrote. “The millennial cycle of life and death has become a cycle of misery.”
As a piece of editorial rhetoric based on the fetid geopolitical atmosphere of the time, Sauvy’s essay was on the mark. As prophecy about the course of economic progress, he could hardly have been more wrong. “Third world” today is politically incorrect as a phrase and economically incorrect as a concept, for it fails to take into account one of the biggest stories of the past half-century: the spectacular economic development that has taken place across the globe. Since Sauvy’s essay, some (but not all) of the countries he referred to have enjoyed very rapid growth and huge leaps in living standards, including in health and education. ... The changes have been so striking that we have reached a point where the very distinctions among “developing,” “emerging” and “advanced” countries have become blurred.
These other terms have been criticized for a lack of accuracy and political correctness, too. For example, if some countries are "advanced," then are other countries "backward" or "behind"? If some countries are described as "emerging," what are they emerging from, and what are they becoming? When countries were referred to as "developing," it sometimes seemed to be more of an optimistic outlook than an actual description, and referring to countries with rich and lengthy cultural, political and human inheritances as "undeveloped" seemed to put economic values ahead of all others.

Others have used acronyms, as I discussed in "From BRICs to MINTs" (February 24, 2014), but looking at clusters of four countries, whether it's Brazil, Russia, India, and China or Mexico, Indonesia, Nigeria, and Turkey, doesn't capture the breadth of the economic shift that is occurring.
Woetzel describes how the global economy is changing in response to four shifts: the rapid march of technological progress; the emerging “superstar” phenomenon, which is exacerbating inequalities; the rapidly changing dynamics of China’s economy; and the evolving nature of globalization itself. He draws on a report that he co-authored with Jacques Bughin, "Navigating a world of disruption" (McKinsey Global Institute, January 2019),  which describes the range and scope of economic success stories in countries around the world. That report notes: 
Among emerging economies, our research has identified 18 high-growth “outperformers” that have achieved powerful and sustained long-term growth—and lifted more than one billion people out of extreme poverty since 1990. Seven of these outperformers (China, Hong Kong, Indonesia, Malaysia, Singapore, South Korea, and Thailand) have averaged GDP growth of at least 3.5 percent for the past 50 years. Eleven other countries (Azerbaijan, Belarus, Cambodia, Ethiopia, India, Kazakhstan, Laos, Myanmar, Turkmenistan, Uzbekistan, and Vietnam) have achieved faster average growth of at least 5 percent annually over the past 20 years. Underlying their performance are pro-growth policy agendas based on productivity, income, and demand—and often fueled by strong competitive dynamics. The next wave of outperformers now looms, as countries from Bangladesh and Bolivia to the Philippines, Rwanda, and Sri Lanka adopt a similar agenda and achieve rapid growth.
It's certainly true that the old distinctions are breaking down. I've written before about how the world economy is different in "When High GDP No Longer Means High Per Capita GDP" (October 20, 2015).

Here's a list of high-income economies around the world, as classified by the World Bank. Some of the entrants on the list may surprise people. Argentina and Chile? Korea and Israel? Poland and Croatia? If you dig into the numbers on GDP per capita, you find that South Korea is ahead of Spain, Portugal, and Greece, and only a couple of notches behind Italy. Israel is ahead of France and the United Kingdom in per capita GDP.


Meanwhile, China ranks with Mexico, Brazil, Thailand, and others in the "upper middle income" category. India and Indonesia are in the "lower middle income group." Looking ahead at the next few decades, most of the growth in the global economy seems likely to be coming from countries that were still being called "third world" four or five decades ago.

Follow-up: A correspondent from France sent along some follow-up thoughts about the origins of "third world." Above, Woetzel writes: "`Tiers monde' (or third world) was an explicit play on `tiers état' (third estate), the ragged assembly of peasants and bourgeoisie under France’s ancien régime that was subservient to the monarchy (the first estate) and the nobility (the second)." My correspondent writes:
1 - In fact, the "first estate" was the clergy and the "second estate" was the nobility.
2 - The "Tiers Etat" was far from uniformly "ragged", it also included some of the largest fortunes of France.
3 - The play on words is much more subtle and less dismissive in French. "Tiers", in French legalese and in everyday speak, means "third party", so basically Sauvy was also implicitly referring to countries which were not engaged in the defining conflict of the era, ie the Cold War.

Also you may be interested to know that, yes, "Sauvy was a French demographer", that was his main job, but that he was also an economic historian, whose three-volume, 1,500-page textbook on the French economy between 1918 and 1939 was the standard - and fairly unwholesome - text ...

Friday, May 10, 2019

How To Cut US Child Poverty in Half

Back in the 1960s, the poverty rate for those over-65 was about 10 percentage points higher than the poverty rate for children under 18. For example, in 1970 the over-65 poverty rate was about 25%, while the under-18 poverty rate was 15%. But government support for the elderly rose substantially, and  in the 1970s, the over-65 poverty rate dropped below the under-18 rate. For the last few decades, the under-18 poverty rate has been 7-9 percentage points higher than the over-65 poverty rate. In 2017, for example, the under-18 poverty rate was 17.5%, while the over-65 poverty rate was 9.2%.   (For the numbers, see Figure 6 in this US Census report from last fall.)

Poverty is always distressing, but poverty for children has the added element that it shapes the lives of future citizens, workers, and neighbors. The National Academies Press has published A Roadmap to Reducing Child Poverty, edited by Greg Duncan and Suzanne Le Menestrel (February 2019). There is of course nothing magic about a specific "poverty line." Being just a little above the poverty line isn't all that different from being just a little below it. But the existence of such a line that is measured the same way over time can still be useful for analysis and policy.

In my own mind, there is a compelling case for reducing child poverty based on the importance of improving equality of opportunity in America. But even if that argument leaves you cold, there is a compelling case based on cold-blooded cost-benefit analysis.

The correlation between child poverty and later outcomes is unarguable. As one example, the report notes:
A study by Duncan, Ziol-Guest, and Kalil (2010) is one striking example. Their study uses data from a national sample of U.S. children who were followed from birth into their thirties and examines how poverty in the first six years of life is related to adult outcomes. What they find is that compared with children whose families had incomes above twice the poverty line during their early childhood, children with family incomes below the poverty line during this period completed two fewer years of schooling and, as adults, worked 451 fewer hours per year, earned less than half as much, received more in Food Stamps, and were more than twice as likely to report poor overall health or high levels of psychological distress. Men who grew up in poverty, they find, were twice as likely as adults to have been arrested, and among women early childhood poverty was associated with a six-fold increase in the likelihood of bearing a child out of wedlock prior to age 21.
But correlation isn't causation, of course, as economists (and this study) are quick to note. For example, say that there is a strong correlation between families in poverty and a lower education level for the parents. Perhaps a substantial share of the problems for children in poverty are not caused by lower family income, but by the lower education level of parents. If the root cause is lower parental education levels, then raising these families above the poverty line in terms of income won't have much effect on the long-term problems faced by children from these families.  

Making the case that various income-support programs will indeed address problems of children in poverty thus requires more detailed arguments, and the report goes through a number of studies in detail. But broadly speaking, raising families with children out of poverty affects the long-term outcomes for children in two ways. The report notes (citations omitted):
An “investment” perspective may be adopted ... emphasizing that higher income may support children’s development and well-being by enabling poor parents to meet such basic needs. As examples, higher incomes may enable parents to invest in cognitively stimulating items in the home (e.g., books, computers), in providing more parental time (by adjusting work hours), in obtaining higher-quality nonparental child care, and in securing learning opportunities outside the home. Children may also benefit from better housing or a move to a better neighborhood. Studies of some poverty alleviation programs find that these programs can reduce material hardship and improve children’s learning environments.
The alternative, “stress” perspective on poverty reduction focuses on the fact that economic hardship can increase psychological distress in parents and decrease their emotional well-being. Psychological distress can spill over into marriages and parenting. ... Parents’ psychological distress and conflict have in fact been linked with harsh, inconsistent, and detached parenting. Such lower-quality parenting may harm children’s cognitive and socioemotional development. 
These are ways in which additional income affects child development. Here are a couple of examples, chosen from many, of the evidence that has accumulated on this point. The report writes:

Neuroscientists have produced striking evidence of the effect of early-life economic circumstances on brain development. Drawing from Hanson et al. (2013), Figure 3-3 illustrates differences in the total volume of gray matter between three groups of children: those whose family incomes were no more than twice the poverty line (labeled “Low SES” in the figure); those whose family incomes were between two and four times the poverty line (“Mid SES”); and those whose family incomes were more than four times the poverty line (“High SES”). Gray matter is particularly important for children’s information processing and ability to regulate their behavior. The figure shows no notable differences in gray matter during the first nine or so months of life, but differences favoring children raised in high-income families emerge soon after that. Notably, the study found no differences in the total brain sizes across these groups—only in the amount of gray matter.
This study is again a correlation, not a proof of causality. As the report notes: "However, the existence of these emerging differences does not prove that poverty causes them. This study adjusted for age and birth weight, but not for other indicators of family socioeconomic status that might have been the actual cause of these observed differences in gray matter for children with different family incomes." But with all due caution rigorously observed, it seems to me a highly suggestive correlation. 

Other studies look at the long-term effects of existing government programs that have raised income levels for poor families. Here's another example:
In their 2016 study of possible long-term effects of Food Stamp coverage in early childhood on health outcomes in adulthood, Hoynes, Schanzenbach, and Almond focus on the presence or absence of a cluster of adverse health conditions known as metabolic syndrome. In the study, metabolic syndrome was measured by indicators for adult obesity, high blood pressure, diabetes, and heart disease. Scores on these indicators of emerging cardiovascular health problems increased (grew worse) as the timing of the introduction of Food Stamps shifted to later and later in childhood (Figure 3-4). The best adult health was observed among individuals in counties where Food Stamps were already available when these individuals were conceived. Scores on the index of metabolic syndrome increase steadily until around the age of five.

Add all these kinds of studies and factors up, and you can obtain a rough-and-ready estimate of the total cost of child poverty.

Holzer et al. (2008) base their cost estimates on the correlations between childhood poverty (or low family income) and outcomes across the life course, such as adult earnings, participation in crime, and poor health. ... Their estimates represent the average decreases in earnings, costs associated with participation in crime (e.g. property loss, injuries, and the justice system), and costs associated with poor health (additional expenditures on health care and the value of lost quantity and quality of life associated with early mortality and morbidity) among adults who grew up in poverty. ... Holzer et al. (2008) make a number of very conservative assumptions in their estimates of earnings and the costs of crime and poor health. ... All of these analytic choices make it likely that these estimates are a lower bound that understates the true costs of child poverty to the U.S. economy.
The bottom line of the Holzer et al. (2008) estimates is that the aggregate cost of conditions related to child poverty in the United States amounts to $500 billion per year, or about 4 percent of the Gross Domestic Product (GDP). The authors estimate that childhood poverty reduces productivity and economic output in the United States by $170 billion per year, or by 1.3 percent of GDP; increases the victimization costs of crime by another $170 billion per year, or by 1.3 percent of the GDP; and increases health expenditures, while decreasing the economic value of health, by $163 billion per year, or by 1.2 percent ...
McLaughlin and Rank (2018) build on the work of Holzer and colleagues by updating their estimates in 2015 dollars and adding other categories of the impact of childhood poverty on society. They include increased corrections and crime deterrence costs, increased social costs of incarceration, costs associated with child homelessness (such as the shelter system), and costs associated with increased childhood maltreatment in poor families (such as the costs of the foster care and child welfare systems). Their estimate of the total cost of childhood poverty to society is over $1 trillion, or about 5.4 percent of GDP. ...  They do make it clear that there is considerable uncertainty about the exact size of the costs of childhood poverty. Nevertheless, whether these costs to the nation amount to 4.0 or 5.4 percent of GDP—roughly between $800 billion and $1.1 trillion annually in terms of the size of the U.S. economy in 2018—it is likely that significant investment in reducing child poverty will be very cost-effective  over time.
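To make the orders of magnitude concrete, here is the back-of-the-envelope conversion from shares of GDP to dollars behind that range, assuming US GDP in 2018 of roughly $20 trillion (my round number for the calculation, not a figure taken from the report):

4.0 percent of GDP: 0.040 x $20 trillion ≈ $800 billion per year
5.4 percent of GDP: 0.054 x $20 trillion ≈ $1.1 trillion per year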
Of course, various programs are already reducing the number of children who live below the poverty line. The figure shows estimates of what the child poverty rate would have been without certain programs, including the Earned Income Credit, the Child Tax Credit, the Supplemental Nutrition Assistance Program ("food stamps"), Supplemental Security Income, Social Security, unemployment compensation, and others. (One warning about the figure: the poverty rate for children is given here as 13%, because the study is using a Supplemental Poverty Measure that (for example) includes a value for in-kind benefits like Medicaid.) 
What additional programs would it take to reduce US child poverty by half? The report looks at a range of programs and designs and combinations, seeking to provide a menu of options rather than a single recommendation. For example, one can look at general assistance linked directly to work, like the Earned Income Credit, or assistance like food stamps or housing vouchers. One could provide means-tested benefits only to the poor, or a universal benefit to all children--but where the value of that benefit would be treated as taxable income for the non-poor. But for example, here's one set of policies that would make a substantial difference, with their estimated effects and costs.

For example, if one chose the four top items on this list, the annual cost would be about $160 billion. The benefits later in life would be considerably larger. 

I don't propose spending $160 billion lightly. But I will point out that the expansion of health insurance under the Patient Protection and Affordable Care Act of 2010 costs the US government over $100 billion per year. Similarly, the Tax Cuts and Jobs Act passed in 2017 is projected to have an average cost of $100 billion per year or more. In short, our political system does seem fully capable of belching up expenditures of this size when the stars are properly aligned.

As the report points out, some of America's English-speaking cousins have already taken the plunge of committing to cut child poverty by half.
The United States spends less to support low-income families with children than peer English-speaking countries do, and by most measures it has much higher rates of child poverty. Two decades ago, child poverty rates were similar in the United States and the United Kingdom. That began to change in March 1999, when Prime Minister Tony Blair pledged to end child poverty in a generation and to halve child poverty in 10 years. Emphasizing increased financial support for families, direct investments in children, and measures to promote work and increase take-home pay, the United Kingdom enacted a range of measures that made it possible to meet the 50 percent poverty reduction goal by 2008—a year earlier than anticipated. More recently, the Canadian government introduced the Canada Child Benefit in its 2016 budget. According to that government’s projections, the benefit will reduce the number of Canadian children living in poverty by nearly half.
Personally, I would be a lot more comfortable with the extent of US inequality if the child poverty rate was considerably lower, and thus the starting points for American children were closer together. 

Thursday, May 9, 2019

Low-Skill Male Workers: A Black Spot on the Rosy Employment Outlook

The monthly unemployment rate in April fell to 3.6%, the lowest monthly rate since December 1969. It has now been at 4.0% or less for more than a year. But in this generally quite positive employment environment, low-skill male workers have been an ongoing sore spot. The issues are discussed in a three-paper symposium in the Spring 2019 issue of the Journal of Economic Perspectives.
Binder and Bound set the stage: 
During the last 50 years, labor market outcomes for men without a college education in the United States worsened considerably. Between 1973 and 2015, real hourly earnings for the typical 25–54 year-old man with only a high school degree declined by 18.2 percent, while real hourly earnings for college-educated men increased substantially. Over the same period, labor-force participation by men without a college education plummeted. In the late 1960s, nearly all 25–54 year-old men with only a high school degree participated in the labor force; by 2015, such men participated at a rate of 85.3 percent.
Here's a figure from their paper showing labor force participation by level of education for "prime-age" males in the 25-54 age group. In the late 1960s, prime-age men of all education levels had very high labor force participation. But it has sagged over time for all education levels, and sagged the most for those with lower education levels. 

This drop-off in labor force participation has been accompanied by a wave of other symptoms, as discussed in the paper by Coile and Duggan. As one example, consider mortality rates for prime-age men, using their table. 

The overall mortality rate for men (bottom row) dropped dramatically from 1980 to 2000, but barely budged from 2000-2016. In particular, from 2000-2016, the mortality rate rose for men age 25-34 and for white men in the 25-54 age group as a whole. Looking at cause of death, there are big falls in death rates for prime-age men from heart disease and cancer in the 1980s and 1990s, but much smaller falls since then. Meanwhile, death rates for this group from accidents, suicides, and homicides went up from 2000-2016. Data on cause of death doesn't include education level, but the authors go on to show that in areas with lower education levels, these rises in death rates were more pronounced.

When Coile and Duggan look instead at reporting of health problems, they find:
"There is a steep health gradient with respect to education—within each age group, the share in fair or poor health is roughly 2.5 times as large for men with a high school education or less than for men with some college or more. Men with less education are similarly more likely to report having a work-limiting disability, limitations in physical activity or ADLs/IADLs [Activities of Daily Living or Instrumental Activities of Daily Living], and obesity ...  Men’s health ... is getting worse over time. ... [T]he fraction of men reporting a health problem is higher in 2015 than in 2000 in nearly every case."
Coile and Duggan look at a variety of other patterns for prime-age men, focusing on lower skill levels where the data makes it possible. For example, they note the sharp rise in incarceration rates for men from 1980 to 2000. The pattern that emerges is that the incarceration rate for men in the 45-54 age group is higher in 2016 than in 2000, reflecting large numbers of younger men sentenced to prison in the 1980s and 1990s. However, the incarceration rate for men in the 25-34 and 35-44 age group is generally down in 2016 compared to 2000. As one example, the incarceration rate of black men ages 25-34 was 5.5% in 1980, 12.8% in 2000, and 7.4% in 2016. 

Marriage rates have been on a generally downward trend as well, although the drop-off from 2000 to 2016 is a lot smaller than the fall for the 1980-2000 period. Here's an illustrative table from Coile and Duggan: 
In some general way, all of these factors seem to combine into a shadowy picture. Low-skill men are working less, reporting worse health, were for a time more likely to be locked up, and seem less likely to form family ties. How do these factors connect?

Binder and Bound focus on the task of explaining the drop in labor force participation. They argue that the reduced demand for the labor of low-skill but prime-age men (perhaps because of shifts in technology or international trade) isn't nearly enough to explain the drop in their labor force participation. They also offer back-of-the-envelope estimates suggesting that while higher disability rates may affect men in the 45-54 age bracket, they aren't likely to explain lower labor force participation for the younger prime-age men. They write:
On its own, falling labor demand does not sufficiently explain the secular decline in less-educated male labor-force participation—at least, not without allowing for substantial adjustment frictions in the long run as well as the short run. Rising access to Disability Insurance is at most a partial explanation for the 45–54 year-old group and matters quite little for younger men and for high school dropouts. Rising exposure to prison may be a significant factor for dropouts and for blacks without college education, but labor-force participation for these groups began declining decades before prison populations skyrocketed. Certainly no single explanation can sufficiently explain the decline, and even in combination, the explanations appear insufficient.
We suspect that there is another factor at play. We will argue that the prospect of forming and providing for a new family constitutes an important male labor supply incentive; and thus, that developments within the marriage market can influence male labor-force participation. A decline in the formation of stable families produces a situation in which fewer men are actively involved in family provision or can expect to be involved in the future. This removes a labor supply incentive; and the possibility of drawing support from one’s existing family ... creates a feasible labor-force exit.
The paper by Edin, Nelson, Cherlin, and Francis, a group of sociologists, is based on in-depth interviews with working-class men who have children but are not married to, and do not live with, the mothers. They argue that low-skilled men are often trying to renegotiate their relationship to jobs, family, and religion--but that many of them are in a social setting where these attempts lead to "haphazard lives." They write (citations omitted):
[W]e show that working-class men are not simply reacting to changes in the economy, family norms, or religious organizations. Rather, they are attempting to renegotiate their relationships to these institutions by attempting to construct autonomous, generative selves. For example, these men’s desire for autonomy in jobs seems rooted in their rejection of the monotony and limited autonomy that their fathers and grandfathers experienced in the workplace, along with a new ethos of self-expression. Similarly, these working-class men focus on their ties to their children even when they have little relationship with the children’s mothers, and they seek spiritual fulfillment even though they disdain organized religion. ... In sum, these working-class men show both a detachment from institutions and an engagement with more autonomous forms of work, childrearing, and spirituality ... . Autonomy refers to independent action in pursuit of personal growth and development. Personal growth has come to be highly valued among middle class Americans but until recently has not been associated with the working class. ... [P]ast scholarship typically assumed that such forms of action would usually only be found among those so materially comfortable that they needn’t spend time worrying about their economic circumstances ...

Our interviews strongly suggest that the autonomous, generative self that many men described is also a haphazard self. For example, vocational aspirations usually remain nebulous and tentative, rarely taking the form of an explicit strategy. In the meantime, career trajectories are often replaced by a string of random jobs. These men typically transitioned to parenthood more by accident than design, and in the context of tenuous romantic relationships. ... Religious community and a systemic belief system have been replaced by a patched-together religious identity that holds little sway over behavior, especially as it is divorced from the communal aspects of faith that have adhered working-class men to a set of behavioral norms. ...

The optimistic reading of the developments we have described is that working-class men are now sharing in the autonomy and generativity that was largely the province of middle- and upper-class men in previous generations. Moreover, the interest they show in being involved as fathers and in helping others could represent a widening of the boundaries of masculinity in ways that are more consistent with contemporary family and work life. The pessimistic reading is that these men are pursuing goals that they are unlikely to achieve due to their lack of social integration. They must find their way without ties to steady work, stable families, and organized religion. Without social support, their chances of success diminish. Those who fail to achieve the autonomous, generative selves they crave will have little to fall back on and few people to prevent them from sinking into despair.
In other words, the problems of low-skilled men in US society are certainly not just a matter of income, and not just a matter of having a job, either. Instead, they are related to a more wide-ranging disconnectedness, which shows up across many domains of behavior and outcomes.

Wednesday, May 8, 2019

Snapshots of US Income Taxation Over Time

As Americans recover from our annual April 15 deadline for filing income taxes, here are a series of figures about longer-term patterns of taxes in the US economy. They are drawn from a series of blog posts by the Tax Foundation over the last few months. The Tax Foundation is a nonpartisan group whose analysis typically leans toward the view that taxes on those with high incomes are already high enough. However, the figures that follow are compiled from fairly standard data sources: IRS data, the Congressional Budget Office, and the like.

For example, here's a figure from Erica York showing the main sources of federal revenue over time. She writes: "Before 1941, excise taxes, such as gas and tobacco taxes, were the largest source of revenue for the federal government, comprising nearly one-third of government revenue in 1940. Excise taxes were followed by payroll taxes and then corporate income taxes. Today, payroll taxes remain the second largest source of revenue. However, other sources have shifted in relative importance. Specifically, individual income taxes have become a central pillar of the federal revenue system, now comprising nearly half of all revenue. Following an opposite trend, corporate income and excise taxes have decreased relative to other sources."



Indeed, for all the huffing and puffing over income taxes, it's worth remembering that 67.8% of US taxpayers in 2019 will pay more in federal payroll taxes (which fund Social Security, Medicare, and disability insurance) than in federal income taxes. Robert Bellefiore offers this figure, drawn from a Joint Committee on Taxation study, showing that this pattern holds on average for all income groups under $200,000.
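For a rough sense of why payroll taxes loom so large, here is a minimal sketch in Python using simplified, hypothetical 2019 parameters for a single filer with no children (a $12,200 standard deduction and the 10 and 12 percent brackets), and following the common convention of attributing both the employee and employer halves of the payroll tax to the worker; the actual Joint Committee on Taxation analysis is far more detailed.

```python
# Rough sanity check: payroll vs. income tax for a hypothetical middle-income worker.
# Simplified 2019-style parameters; single filer, no children, wage income only.

WAGES = 50_000

# Payroll taxes: 6.2% Social Security + 1.45% Medicare, with the employee and
# employer halves combined (the usual incidence assumption puts both on the worker).
payroll_tax = WAGES * 2 * (0.062 + 0.0145)

# Federal income tax: subtract the standard deduction, then apply the 10% and 12%
# brackets (this simplified calculation ignores higher brackets and all credits).
taxable = max(WAGES - 12_200, 0)
income_tax = 0.10 * min(taxable, 9_700) + 0.12 * max(taxable - 9_700, 0)

print(f"payroll tax: ${payroll_tax:,.0f}")   # roughly $7,650
print(f"income tax:  ${income_tax:,.0f}")    # roughly $4,340
```

Add a couple of children and refundable credits like the Earned Income Tax Credit, and the income tax bill shrinks further or turns negative, while the payroll tax bill does not, which is why the pattern holds so broadly below $200,000.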


Arguments over taxes often make fairness claims about the share of taxes paid by various income groups. Whatever one's ultimate conclusions about what should happen, it's useful to start from the basis of what is actually happening.

It's common to hear a complaint that those with high incomes are evading federal taxes. Some do, of course. It's a big country. If a very rich person puts all their money into tax-exempt bonds, with the associated lower interest rates for being tax-free, they won't pay taxes on that income. But on average, those with higher incomes do pay a much larger share of taxes. Robert Bellefiore offers a couple of illustrative graphs. The first figure focuses only on federal income taxes.



The second figure includes the share of all federal taxes: that is, income, payroll, corporate (as attributed to individuals who benefit from corporate profits), excise taxes on gasoline, tobacco, and alcohol, and so on. Again, those with higher income levels pay a larger share of total federal taxes.



One can of course still argue that the share of taxes paid by those with high incomes should be larger. But it is simply not true that those with high incomes don't already pay a larger share of federal taxes.

What about taxes paid at the very tip-top of the income distribution? Erica York offers this figure on the average tax rates paid by the top 0.1%. To be clear, the "average" tax rate is the actual share of income paid in taxes, which is different from the "marginal" tax rate charged on the highest $1 of income earned. Back in the 1950s, the highest marginal income tax rates sometimes reached 90%. The fact that the average tax rate is so much lower tells you that those very high marginal tax rates were largely for show, in the sense that they didn't actually apply to very much income. York writes: "The graph below illustrates the average tax rates that the top 0.1 percent of Americans faced over the last century, based on research from Thomas Piketty, Emmanuel Saez, and Gabriel Zucman. The blue line includes the impact of all federal, state, and local taxes on individual income, payroll taxes, estates, corporate profits, properties, and sales. The purple line shows income taxes only, including federal, state, and local." The overall pattern is that while effective tax rates on the top 0.1% were higher in the 1950s, they haven't shown much long-term trend one way or the other in the last half-century or so.
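To see concretely why a 90 percent marginal rate can coexist with a much lower average rate, here is a minimal sketch with an invented two-bracket schedule (the threshold and rates below are hypothetical, not the actual 1950s schedule, and real-world exclusions and deductions pushed average rates down further still):

```python
# Why a high marginal rate need not mean a high average rate:
# the top rate applies only to income above the top-bracket threshold.
# All numbers below are hypothetical and chosen purely for illustration.

TOP_BRACKET_START = 2_000_000   # invented threshold where the top rate kicks in
LOWER_RATE = 0.30               # invented rate on income below the threshold
TOP_MARGINAL_RATE = 0.90        # headline "90 percent" top rate

income = 2_500_000              # hypothetical high earner

tax = (LOWER_RATE * min(income, TOP_BRACKET_START)
       + TOP_MARGINAL_RATE * max(income - TOP_BRACKET_START, 0))

average_rate = tax / income
print(f"marginal rate: {TOP_MARGINAL_RATE:.0%}, average rate: {average_rate:.0%}")
# -> marginal rate: 90%, average rate: 42%
```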


When listening to arguments over tax policy, it's common to hear complaints about whether deductions should be limited for purposes like mortgage interest, state and local taxes, or charitable contributions. It's useful to remember that those deductions don't apply to most taxpayers. Erica York explains: "In 2016, barely a quarter of households with adjusted gross income (AGI) between $40,000 and $50,000 claimed itemized deductions when filing their taxes. In contrast, more than 90 percent of households making $200,000 and above itemized their deductions." One effect of the 2017 tax reform law is that the number of taxpayers who find it useful to itemize deductions will drop by as much as 60%.



The share of total federal taxes paid by those with high incomes has been rising over time. Part of that change is because the share of those who owe zero in federal income tax has also been rising. Robert Bellefiore provides a graph. One main reason for the rising share of taxpayers who owe zero is the expansion of refundable tax credits aimed at those with lower incomes, including the Earned Income Tax Credit and the Child Tax Credit. You can also see that the share owing zero income tax rose during the Great Recession.


In a different post, Robert Bellefiore offers a chart showing the overall effects of federal tax and transfer policy on the share of income received by different groups. He writes: "The lowest quintile’s income nearly doubles, while the second and middle quintiles experience relatively smaller increases in income. The fourth quintile’s income share remains constant, and only the highest quintile has a lower share of income after taxes and transfers. The top 1 percent’s share of income, for example, falls from 16.6 percent to 13.2 percent."



Again, one can argue that the amount of redistribution should be larger. But it would be untrue to argue that a significant amount of redistribution--like nearly doubling the after-taxes-and-transfers share of the lowest quintile--doesn't already happen.

Tuesday, May 7, 2019

The High Costs of Renewable Portfolio Standards

A "renewable portfolio standard" is a rule that a certain percentage of electricity generation needs to come from renewable sources.  Such rule have been spreading in popularity. But Michael Greenstone and Ishan Nath argue in "Do Renewable Portfolio Standards Deliver?" that they are an overly costly way of reducing carbon emissions (Becker Friedman Institute, University of Chicago, April 21, 2019). As they explain in the Research Summary (a full working paper is also available at the link):
"29 states and the District of Columbia have been successful in passing Renewable Portfolio Standards (RPS), which require that a percentage of the electricity generation come from renewable sources. These programs currently cover 64 percent of the electricity sold in the United States. 2. Until now, studies have suggested that RPS programs only marginally increase electricity costs, because they have only examined differences in the costs of generation. These studies fail to fully incorporate three key costs that the addition of renewable resources impose on the electricity system: 1) The intermittent nature of renewables means that back-up capacity must be added; 2) Because renewable sources take up a lot of physical space, are geographically dispersed and are frequently located away from population centers, they require the substantial addition of transmission capacity; and 3) In mandating an increase in renewable power, baseload generation is prematurely displaced, which imposes costs on ratepayers and owners of capital."
Their research design is straightforward. They compare states with and without RPS policies, using data over the quarter-century from 1990-2015. They find that the Renewable Portfolio Standards do increase the use of renewables in the generation of electricity, but at a cost.
Seven years after legislation creating an RPS program, retail electricity prices are 11 percent higher on average (1.3 cents per kWh), or about $30 billion annually across the 29 states. Twelve years afterward, prices are 17 percent higher on average (2 cents per kWh). In total, seven years after the start of the programs, consumers in the 29 RPS states paid $125.2 billion more for electricity than they would have in its absence. ... In states with RPS policies, renewables’ share of generation increased about 1.8 percent seven years after passage, and 4.2 percent twelve years afterwards. These figures are net of renewable generation that was already in place at the time an RPS was implemented.
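As a rough consistency check on those figures (a back-of-the-envelope sketch, not the authors' calculation), the 1.3 cents per kWh can be combined with the study's statement that RPS states cover about 64 percent of US electricity sales. The total-sales number below is my own round-number assumption of roughly 3.7 trillion kWh of annual US retail electricity sales:

```python
# Back-of-the-envelope: does 1.3 cents/kWh line up with ~$30 billion per year?
US_RETAIL_SALES_KWH = 3.7e12    # assumed: roughly 3.7 trillion kWh of annual US retail sales
RPS_STATE_SHARE = 0.64          # share of US electricity sold in RPS states (from the study)
PRICE_INCREASE_PER_KWH = 0.013  # 1.3 cents/kWh, seven years after RPS passage

extra_cost = US_RETAIL_SALES_KWH * RPS_STATE_SHARE * PRICE_INCREASE_PER_KWH
print(f"implied extra cost: ${extra_cost / 1e9:.0f} billion per year")  # roughly $31 billion
```

That rough arithmetic lands close to the $30 billion annual figure quoted above.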
Even the most ardent advocates of reducing carbon emissions should desire to do so at the lowest practical cost. By that standard, the Renewable Portfolio Standards have not been a success. Greenstone and Nath write:
In increasing the share of renewable generation, the states with an RPS policy saved 95 to 175 million tons of carbon emissions seven years after the start of the programs. This was driven by a decrease in the carbon intensity of electricity supply in RPS states. However, this study finds that the cost of reducing carbon emissions through an RPS policy is more than $130 per ton of carbon abated and as much as $460 per ton of carbon abated—significantly higher than conventional estimates of the social and economic costs of carbon emissions. For example, the central estimate of the Social Cost of Carbon (SCC) tallied by the Obama Administration is approximately $50 per ton in today’s dollars. A second point of comparison comes from the cost of abating a metric ton of CO2 in current cap-and-trade markets in the US: it is about $5 in the northeast’s Regional Greenhouse Gas Initiative (RGGI) and $15 in California’s cap-and-trade system.

For the record, because we live in a time when people obsess over the potential bias of researchers, Greenstone has been, among a number of other professional affiliations, "Chief Economist for President Obama’s Council of Economic Advisers, where he co-led the development of the United States Government’s social cost of carbon." Nath is a PhD student at the University of Chicago.

For discussion of cost-effective ways of reducing carbon emissions, a useful starting point is Kenneth Gillingham and James H. Stock. 2018. "The Cost of Reducing Greenhouse Gas Emissions." Journal of Economic Perspectives, 32 (4): 53-72.

Monday, May 6, 2019

Is Something Different this Time about the Effect of Technology on Labor Markets?

There's a well-worn conversation about the relationship between new technology and possible job displacement which goes something like this:

Concerned person: "New developments in information technology and artificial intelligence are going to threaten lots of jobs."

Skeptical person: "Economies in developed countries have been experiencing extraordinary developments and shifts in new technology for literally a couple of centuries. But as old jobs have been dislocated, new jobs have been created."

Concerned person: "This time seems different."

Skeptical person: "Every time is different in the specific details. But there's certainly no downward pattern in the number of jobs in the last two centuries, or the last few decades."

Concerned person: "Still, the way in which information technology and artificial intelligence replace workers seems different than the way in which, say, assembly lines replaced skilled artisan workers or combine harvesters replaced farm workers. "

Skeptical person: "Maybe this time will be different. After all, it's logically impossible to prove that something in the future will NOT be different. But based on the long-run historical pattern, the evidence that new technology leads to shifts in the labor market is clear-cut, while the evidence that it leads to permanent job loss for the population as a whole is nonexistent."

Concerned person: "Still, this current wave of technology seems different."

Skeptical person: "I guess we'll see how it unfolds in the next decade or two."

The most recent Spring 2019 issue of the Journal of Economic Perspectives has a symposium on "Automation and Employment." Two of the articles in particular offer concrete arguments about how the current new technologies are interacting differently with labor markets.

Daron Acemoglu and Pascual Restrepo discuss "Automation and New Tasks: How Technology Displaces and Reinstates Labor." They suggest a framework in which automation can have three possible effects on the tasks that are involved in doing a job: a displacement effect, when automation replaces a task previously done by a worker; a productivity effect in which the higher productivity from automation taking over certain tasks leads to more buying power in the economy, creating jobs in other sectors; and a reinstatement effect, when new technology reshuffles the production process in a way that leads to new tasks that will be done by labor.

In this approach, the effect of automation on labor is not predestined to be good, bad, or neutral. It depends on how these three factors interact. Acemoglu and Restrepo attempt to calculate the size of these three factors for the US economy in two time periods: 1947-1987 and 1987-2017. There is of course considerable technological change through all of this 70-year period. For example, I've written on this blog about "Automation and Job Loss: The Fears of 1964" (December 1, 2014) and "Automation and Job Loss: Leontief in 1982" (August 22, 2016). But the later period can be associated more closely with the rise of computers and information technology.

Their calculations suggest that in the 1987-2017 period, the effects of automation have involved a larger displacement effect, lower productivity growth, and a lower reinstatement effect. The lower demand for labor can be seen in stagnant wage growth over this period for lower- and medium-skilled workers. They argue that the real issue isn't whether automation displaces tasks and alters jobs--of course it does--but rather how those displacement effects compare with how automation leads to greater productivity and the possibility of new job-related tasks that reinstate labor. They argue that public policy has some power to affect how the forward movement of technology will affect demand for labor: for example, they argue that public policy has tended to favor investment in new equipment and machinery over investment in human capital, like on-the-job training by employers.

Another angle on new technology and labor markets in the same issue of JEP comes from Jeremy Atack, Robert A. Margo, and Paul W. Rhode in "`Automation' of Manufacturing in the Late Nineteenth Century: The Hand and Machine Labor Study."  The focus of their paper is on a remarkably detailed US government study done in the 1890s of how machines were replacing the tasks involved in specific jobs.

The new assembly line machines of that time clearly displaced large numbers of tasks previously done by workers. However, the productivity effects of this wave of automation were very large. In addition, the new automation technology of that time had a powerful reinstatement effect of creating new tasks to be done by workers. They write:
[T]he net effect of the introduction of new tasks on labor demand appears to have been positive. This is because the share of time taken up by new tasks in machine labor was larger than the share of time associated with hand tasks that were abandoned—indeed, five times larger. Among other activities, these new tasks included maintenance of steam engines, a foreman supervising large numbers of workers (discussed further below), and workers packaging products for distant markets.
Atack, Margo, and Rhode also offer a broader point about technology and labor that seems to me worth considering. They point out that back in the 1890s, with a much heavier use of machines in the production process, there was a shift toward a broader division of labor: that is, the study counted more overall tasks to be done when machines were used, as compared to before the machines were used. One implication for workers of that time is that the path to a steady and well-paid job was to focus on a very particular niche of the production process. Indeed, one broad description of labor markets at that time is that there was a shift away from artisan workers (say, blacksmiths) who carried out many tasks, and toward workers who focused on a smaller set of tasks.

The authors suggest that one way in which modern technology is different from the 1890s is that it does not reward or encourage this kind of extreme division of labor. They write: 
The massive division of labor documented front and center in the Hand and Machine Labor study dramatically affected the nature of the human capital investment decision facing successive cohorts of American workers contemplating whether to enter the manufacturing sector. Earlier in the nineteenth century, the human capital investment problem such workers faced was mastering the diverse set of skills associated with most or all of the tasks involved in making a product, along with managing the affairs of a (very) small business, an artisan shop. The human capital investment problem facing the prospective manufacturing worker in the 1890s was quite different. There was little or no need to learn how to fashion a product from start to finish; mastery of one or two tasks would do, and such mastery might be gained quickly on the job. The more able or ambitious might gravitate to learning new skills, such as designing, maintaining, or repairing steam engines, or clerical/managerial tasks, the demand for which had grown sharply as average establishment size increased over the century.
For many decades in the twentieth century, specialization was economically beneficial to workers—the costs of learning skills were relatively modest and the return on the investment—a relatively secure, highly paid job in manufacturing—made that investment worthwhile. The prospect of widespread automation has arguably changed this calculus. No single “job” is safe and the optimal investment strategy may be very different—a suite of diverse, relatively uncorrelated skills as insurance against displacement by robotics and artificial intelligence. This is perhaps the sense in which the history of how technology affects jobs is not repeating itself, and “this time” really is different.
In watching the cohort that includes my own children move from high school into young adulthood, this observation seems to me to contain a lot of truth. When it comes to training for a future job, many of us are still mentally in the 1890s, looking for one or a few particular focused skills that will guarantee a "good job." But modern technologies are likely to disrupt what tasks are actually done in a very wide array of jobs, which will put a premium on workers with the ability to shift flexibly as the job situation is reshaped. 

Saturday, May 4, 2019

How Single Payer Requires Many Choices

I sometimes hear "single payer" spoken as if it was a complete description of a plan for revising the US health care system. But "single payer" actually involves a lot of choices. The Congressional Budget Office walks through the options in their report "Key Design Components and Considerations for Establishing a Single-Payer Health Care System" (May 2019).

As a preview of some of these issues, it's worth noting that some prominent countries with universal health coverage and reasonably good cost control (at least by US standards!) use regulated multipayer systems, such as Germany, Switzerland, and the Netherlands. For those who like the sound of "Medicare for All," it's worth remembering that a certain number of analysts don't consider Medicare to be a single-payer system, because of the large role played by private insurers in the Medicare Advantage program, and because all of Medicare's drug benefits (in Part D of the program) are delivered by private insurers.

However, if one narrows the options to an actual single-payer plan, which is the label typically put on Canada, Denmark, the UK, Sweden, and others, a number of questions still need to be answered. Here's a chart from the report showing many of these questions, but because I fear it won't be readable in this blog format, I repeat a number of the questions below:


Would the plan be run by the federal government, the states, or some third-party administration? In Canada, for example, national health insurance is best-understood as 13 separate plans run by the provinces and territories. Would the single-payer plan use a single information technology infrastructure nationwide?

Who determines exactly what services are covered or not covered by the single payer plan? Who decides when new treatments would be covered? Would the mandated package of benefits cover outpatient prescription drugs? What about dental, vision, and mental health services? These are not mandated benefits in Canada.

Would there be cost-sharing for physician and hospital services? There is in Sweden, but not in the United Kingdom. How about a limit on out-of-pocket spending? There is such a limit in Sweden, but not in the UK. Would long-term care services be covered? The answer is "yes" in Sweden, "limited" in the UK, and "no" in Canada. If there is cost-sharing, would it take the form of deductibles, co-payments, or co-insurance?

Will supplemental health insurance be allowed? "In England, private insurance gives people access to private providers, faster access to care, or coverage for complementary or alternative therapies, but participants must pay for it separately in addition to paying their individual required tax contributions to the NHS. In Australia, private insurance covers services that the public plan does not, such as access to private hospitals, a choice of specialists in both public and private hospitals, and faster access to nonemergency care."

Would people be allowed to "opt out" of the government health insurance plan and purchase private insurance instead?

Will hospitals be publicly owned, privately owned, or a mixture? Will hospitals be paid with a global budget to allocate across patients, or by payments based on the conditions with which patients are diagnosed, or by fee-for-service? "Currently, about 70 percent of U.S. hospitals are privately owned: About half are private, nonprofit entities, and 20 percent are for-profit. Almost all physicians are self-employed or privately employed. A single-payer system could retain current ownership structures, or the government could play a larger role in owning hospitals and employing providers. In one scenario, the government could own the hospitals and employ the physicians, as it currently does in most of the VHA [Veterans Health Administration] system."

Will doctors be salaried public employees? If they are private providers, will they be paid on a fee-for-service basis, or receive a per-head or "capitation" payment based on the number of patients they serve? In many single-payer systems, the primary care physicians are private, but the outpatient specialist physicians are sometimes (Denmark) or always (UK) public and salaried.

How are prices to be determined for prescription drugs?

Does the financing for the system come from general tax revenues (Canada), an earmarked income tax (Denmark), a mixture of general revenues and payroll taxes (UK), or some other source?

The CBO report goes into these kinds of questions, and others, in more detail. My point here isn't to argue for or against "single payer." There are versions of single payer I would prefer to others, and although it's a story for another day, I like a lot of the elements of the German and Swiss multi-payer systems for financing health care. My point here is that if you are trying to describe a direction for reform of the US health care system, all $3.5 trillion of it, "single payer" is barely the beginning of a useful description; indeed, it sidesteps many of the tough decisions that would still need to be made.