Monday, May 20, 2019

Daniel Hamermesh: How Do People Spend Time?

For economists, the idea of "spending" time isn't a metaphor. You can spend any resource, not just money. Among all the inequalities in our world, it remains true that every person is allocated precisely the same 24 hours in each day. In "Escaping the Rat Race: Why We Are Always Running Out of Time," the Knowledge@Wharton website interviews Daniel Hamermesh, focusing on themes from his just-published book Spending Time: The Most Valuable Resource.

The introductory material quotes William Penn, who apparently once said, “Time is what we want most, but what we use worst.” Here are some comments from Hamermesh:

Time for the Rich, Time for the Poor
The rich, of course, work more than the others. They should. There’s a bigger incentive to work more. But even if they don’t work, they use their time differently. A rich person does much less TV watching — over an hour less a day than a poor person. They sleep less. They do more museum-going, more theater. Anything that takes money, the rich will do more of. Things that take a lot of time and little money, the rich do less of. ...
I think complaining is the American national pastime, not baseball. But the thing is, those who are complaining about the time as being scarce are the rich. People who are poor complain about not having enough money. I’m sympathetic to that. They’re stuck. The rich — if you want to stop complaining, give up some money. Don’t work so hard. Walk to work. Sleep more. Take it easy. I have no sympathy for people who say they’re too rushed for time. It’s their own darn fault.

Time Spent Working Across Countries
Americans are the champions of work among rich countries. We work on average eight hours more per week in a typical week than Germans do, six hours more than the French do. It used to be quite a bit different. Forty years ago, we worked about average for rich countries. Today, even the Japanese work less than we do. The reason is very simple: We take very short vacations, if we take any. Other countries get four, five, six weeks. That’s the major difference. ...
What’s most interesting about when we work is you compare America to western European countries, and it’s hard to find a shop open on a Sunday in western Europe. Here, we’re open all the time. Americans work more at night than anybody else. It’s not just that we work more; we also work a lot more at night, a lot more in the evenings, and a heck of a lot more on Sundays and Saturdays than people in other rich countries. We’re working all the time and more. ...
It’s a rat race. If I don’t work on a Sunday and other people do, I’m not going to get ahead. Therefore, I have no incentive to get off that gerbil tube, get out of it and try to behave in a more rational way. ...  The only way it’s going to be solved is if somehow some external force, which in the U.S. and other rich countries is the government, imposes a mandate that forces us to behave differently. No individual can do it. ...
We have to force ourselves, as a collective, as a polity, to change our behavior. Pass legislation to do it. Every other rich country did that between 1979 and 2000. We think the Japanese are workaholics. They’re not workaholics. Compared to us, they work less than we do, yet 40 years ago they worked a heck of a lot more. They chose to cut back. ... It’s going to be a heck of a lot of trouble to change the rules so that people are mandated to take four weeks of vacation or to take a few more paid holidays. Other countries have done it. It didn’t just happen from the day the countries were born. They chose to do it. It’s a political issue, like the most important things in life.
Time and Technology, Money Chasing Hours
Time is an economic factor; economics is about scarcity more than anything else. Because our incomes keep on going up, whereas time doesn’t go up very much, time is the increasingly important scarce factor.  ...
There’s no question technology has made us better off. Think about going to a museum. When I went to the Museum of Science and Industry in Chicago as a kid, you’d pull levers. You did a few things. These days, it’s all incredibly immersive. Great technology. But you can’t go to the museum in any less time. You can’t cut back on sleep. A few things are easier to do more quickly because of technology: cooking, cleaning, washing. I don’t know if you’re old enough to remember the semi-automatic washing machine with a wringer. Tremendous improvements in the things you do with the house. Technology has made life better, but it hasn’t saved us much time. ... So, we are better off, but it’s not that we’re going to have more time; we’re going to have less time. But we have more money chasing the same number of hours.
For a longer and more wide-ranging discussion of these subjects, listen to the hour-long EconTalk episode in which Russ Roberts interviews Daniel Hamermesh (March 25, 2019).

Friday, May 17, 2019

Time for a Return of Large Corporation Research Labs?

It often takes a number of intermediate steps to move from a scientific discovery to a consumer product. A few decades ago, many larger and even mid-sized corporations spent a lot of money on research and development laboratories, which focused on all of these steps. Some of these corporate laboratories, like those at AT&T, Du Pont, IBM, and Xerox, were nationally and globally famous. But the R&D ecosystem has shifted, and firms are now much more likely to rely on outside research done by universities or small start-up firms. These issues are discussed in "The changing structure of American innovation: Cautionary remarks for economic growth," by Ashish Arora, Sharon Belenzon, Andrea Patacconi, and Jungkyu Suh, presented at the "Innovation Policy and the Economy 2019" conference held on April 16, 2019, hosted by the National Bureau of Economic Research, and sponsored by the Ewing Marion Kauffman Foundation.

On the importance of corporate laboratories to earlier, much better decades of US productivity growth, the authors note:
From the early years of the twentieth century up to the early 1980s, large corporate labs such as AT&T's Bell Labs, Xerox's Palo Alto Research Center, IBM's Watson Labs, and DuPont's Purity Hall were responsible for some of the most consequential inventions of the century such as the transistor, cellular communication, graphical user interface, optical fibers, and a host of synthetic materials such as nylon, neoprene, and cellophane.
But starting in the 1980s, firms began to rely more on universities and on start-ups to do their R&D. Here's one of many examples, the closing of the main DuPont research laboratory: 
A more recent example is DuPont's closing of its Central Research & Development lab in 2016. Established in 1903, DuPont Central R&D served as a premier lab on par with the top academic chemistry departments. In the 1960s, the central R&D unit published more articles in the Journal of the American Chemical Society than MIT and Caltech combined. However, in the 1990s, DuPont's attitude toward research changed as the company started emphasizing the business potential of research projects. After a gradual decline in scientific publications, the company's management closed the Experimental Station as a central research facility for the firm after pressure from activist investors in 2016.
The pattern shows up in broader trends. The authors write that "the number of publications per firm fell at a rate of 20% per decade from 1980 to 2006 for R&D performing American listed firms." Business-based R&D as a share of total R&D peaked back in the 1990s, and has been falling since then. The share of business R&D which is "research," as opposed to "development," has been falling, too. 

The authors tell the story of how so much research was based in corporations, or shared by corporations and universities, for the first six or seven decades of the 20th century, and how the shift to a greater share of research happening at universities took place. One big change was the Bayh-Dole act of 1980 (citations omitted):
Perhaps the most widely commented on reform of this era is the Bayh-Dole Patent and Trademark Amendments Act of 1980, which allowed the results of federally funded university research to be owned and exclusively licensed by universities. Since the postwar period, the federal government had been funding more than half of all research conducted in universities and owned the rights to the fruits of such research, totaling 28,000 patents. However, only a few of these inventions would actually make it into the market. Bayh-Dole was meant to induce industry to develop these underutilized resources by transferring property rights to the universities, which were now able to independently license at the going market rate.
As universities took on more research, corporations backed off. Here are a couple of examples: 
In 1979, GE's corporate research laboratory employed 1,649 doctorates and 15,555 supporting staff, while IBM employed 1,900 staff and 1,300 doctorate holders. The comparable figures in 1998 were 475 PhDs supported by 880 professional staff for GE, and 1,200 doctorate holders for IBM. Indeed, firms whose sales grew by 100% or higher between 1980 and 1990 published 20.6 fewer scientific articles per year. This contrast between sales growth and publications drop persists into the next two decades: firms that doubled in sales between 1990 and 2000 published 12.0 fewer articles. Publications dropped by 13.3 for such fast growth firms between 2000 and 2010.
A common pattern seems to be that the number of researchers and scientific papers is falling at a number of firms, while the number of patents at these same firms has been steadily rising. Firms are putting less emphasis on research, and more on development that can turn into well-defined intellectual property. This pattern seems to hold (mostly) across big information technology and computer firms. The pharmaceutical and biotech firms are an exception, an industry that has continued to publish research--probably because published research is important in regulatory approval for many of their products.
Overall, the new innovation ecosystem exhibits a deepening division of labor between universities that specialize in basic research, small start-ups converting promising new findings into inventions, and larger, more established firms specializing in product development and commercialization. Indeed, in a survey of over 6,000 manufacturing- and service-sector firms in the U.S. ... 49% of the innovating firms between 2007 and 2009 reported that their most important new product originated from an external source.
But in this new ecosystem of innovation, has something been lost? The authors argue that this outsourcing of R&D by businesses has contributed to the sustained sluggish pace of US productivity growth. They write:
Spinoffs, startups, and university licensing offices have not fully filled the gap left by the decline of the corporate lab. Corporate research has a number of characteristics that make it very valuable for science-based innovation and growth. Large corporations have access to significant resources, can more easily integrate multiple knowledge streams, and their research is directed toward solving specific practical problems, which makes it more likely for them to produce commercial applications. University research has tended, more so than corporate research, to be curiosity-driven rather than mission-focused. It has favored insight rather than solutions to specific problems, and partly as a consequence, university research has required additional integration and transformation to become economically useful. This is not to deny the important contributions that universities and small firms make to American innovation. Rather, our point is that large corporate labs may have distinct capabilities, which have proved to be difficult to replace. Further, large corporate labs may also generate significant positive spillovers, in particular by spurring high-quality scientific entrepreneurship.
It's not clear how to encourage a resurgence of corporate research labs. Companies and their investors seem happy with the current division of R&D labor. But from a broader social perspective, the growing separation of companies from the research on which they rely suggests that the gap between scientific research and consumer products is growing, along with the possibility that economically valuable innovations are falling into that gap and never coming into existence.

Afterwords

Those interested in this argument might also want to check out "The decline of science in corporate R&D," written by Ashish Arora, Sharon Belenzon, and Andrea Patacconi, published in the Strategic Management Journal (2018, vol. 39, pp. 3–32).

For those with an interest in the broader subject of US innovation policy, the full list of papers presented at the April 2019 NBER conference is available from NBER.

Thursday, May 16, 2019

Does the Federal Reserve Talk Too Much?

For a long time, the Federal Reserve (and other central banks) carried out monetary policy with little or no explanation. The idea was that the market would figure it out. But in the last few decades, there has been an explosion of communication and transparency from the Fed (and other central banks), consisting both of official statements and an array of public speeches and articles by central bank officials. One reason is a greater awareness that economic activity isn't just influenced by what the central bank did in the past, but also by what it is expected to do in the future. But does this "open mouth" approach clarify and strengthen monetary policy, or just muddle it?

Kevin L. Kliesen, Brian Levine, and Christopher J. Waller present some evidence on the changes in Fed communication and the results in "Gauging Market Responses to Monetary Policy Communication," published in the Federal Reserve Bank of St. Louis Review (Second Quarter 2019, pp. 69-92). They start by describing the old ways, quoting an exchange between John Maynard Keynes and Bank of England Deputy Governor Sir Ernest Harvey on December 5, 1929:
KEYNES: Arising from Professor Gregory's questions, is it a practice of the Bank of England never to explain what its policy is?
HARVEY: Well, I think it has been our practice to leave our actions to explain our policy.
KEYNES: Or the reasons for its policy?
HARVEY: It is a dangerous thing to start to give reasons.
KEYNES: Or to defend itself against criticism?
HARVEY: As regards criticism, I am afraid, though the Committee may not all agree, we do not admit there is a need for defence; to defend ourselves is somewhat akin to a lady starting to defend her virtue.
From 1967 to 1992, the Federal Open Market Committee released a public statement 90 days after its meetings. The FOMC then started, sometimes, releasing statements right after a meeting. Here's a figure showing how the length of these statements has expanded over time, with the shaded area showing the period of "unconventional monetary policy" during and after the Great Recession.

As one example,

[F]ollowing the August 9, 2011, meeting, the policy statement stated the following:
"The Committee currently anticipates that economic conditions—including low rates of resource utilization and a subdued outlook for inflation over the medium run—are likely to warrant exceptionally low levels for the federal funds rate at least through mid-2013."
In this case, the FOMC's intent was to signal to the public that its policy rate would remain low for a long time in order to spur the economy's recovery.
Here's a count of the annual "remarks" (speeches, interviews, testimony) by presidents of the regional Federal Reserve banks, members of the Board of Governors, and the chair of the Fed:



Here are some comments about Fed communication that seem to me worth passing along:
"Speeches have become important communication events. Chairman Greenspan's new economy speech in 1995 and his "irrational exuberance" speech in 1996 were among his more notable speeches. Chairman Ben Bernanke also gave notable speeches during his tenure. Two that standout are his "Deflation: Making Sure 'It' Doesn't Happen Here" speech in 2002 and his global saving glut speech in 2005. ...
One of the key communication innovations during the Bernanke tenure was the public release of individual FOMC participants' expectations of the future level of the federal funds rate. Once a quarter, with the release of the SEP [Summary of Economic Projections], each FOMC participant—anonymously—indicates their preference for the level of the federal funds rate at the end of the current year, at the end of the next two to three years, and over the "longer run." These projections are often termed the FOMC "dot plots." According to the survey, both academics and those in the private sector found the dot plots of limited use as an instrument of Fed communication (more "useless" than "useful"). One-third of the respondents found the dot plots "useful or extremely useful," 29 percent found them "somewhat useful," and 38 percent found them "useless or not very useful." ...
We find that Fed communication is associated with changes in prices of financial market instruments such as Treasury securities and equity prices. However, this effect varies by type of communication, by type of instrument, and by who is doing the speaking. Perhaps not surprisingly, we find that the largest financial market reactions tend to be associated with communication by Fed Chairs rather than by other Fed governors and Reserve Bank presidents and with FOMC meeting statements rather than FOMC minutes.
It's probably impossible for a 21st century central bank to operate with what used to be an unofficial motto attributed to the long-ago Bank of England: "Never explain, never apologize." Just for purposes of political legitimacy, and for maintaining the independence of the central bank, a greater degree of transparency and explanation is needed. But if the choice is between the risk of instability from financial markets making predictions in a situation of very little central bank disclosure, and the risk of instability from financial markets making predictions in a situation with the current level of central bank disclosure, the current level seems preferable. The authors write:
The modern model of central bank communication suggests that central bankers prefer to err on the side of saying too much rather than too little. The reason is that most central bankers believe that clear and concise communication of monetary policy helps achieve their goals.

Wednesday, May 15, 2019

Alice Rivlin, 1931-2019, In Her Own Words

Alice Rivlin, who died yesterday, was a legend in the Washington policy community. In "Alice Rivlin: A career spent making better public policy," Fred Dews interviewed Rivlin for the Brookings Cafeteria Podcast on March 8, 2019.

If you would like some additional detail about Rivlin's career, there's a shorter interview from 1998 by Hali J. Edison, originally published in the newsletter of the Committee on the Status of Women in the Economics Profession (although a more readable reprint of the interview is here). A 1997 interview by David Levy of the Minneapolis Fed is here. If you want more Rivlin, here's an hour-long podcast she did with Ezra Klein, "Alice Rivlin, queen of Washington's budget wonks," from May 2016.

Rivlin was an economics major at Bryn Mawr College. From the Edison interview:
I wrote my undergraduate honors thesis on the economic integration of Western Europe, which was a pretty prescient topic choice in 1952. I even had a discussion of European monetary union! By then I was sufficiently hooked to be thinking about graduate school, but I went to Europe for a year first, where I had a junior job in Paris working on the Marshall Plan.
She entered Harvard's PhD program in economics in the 1950s. Here are some thoughts about graduate study and the academic job market at that time, from the Edison interview:
Harvard was having a hard time adjusting to the idea of women in the academy. Indeed, since I was already focused on policy, I applied first to the graduate school of public administration (now The Kennedy School), which rejected my application on the explicit grounds that a woman of marriageable age was a "poor risk." I then applied to the economics department, which had about 5 per cent females in the doctoral program. They were just working up their courage to allow women to be teaching fellows and tutors in economics. I taught mixed classes, but initially was assigned only women tutees. One of my tutees wanted to write an honors thesis on the labor movement in Latin America--a subject on which one of my male colleagues had considerable expertise. He was willing to supervise my young woman if I would take one of his young men. However, the boy's senior tutor objected to the switch on the grounds that being tutored by a woman would make a male student feel like a second class citizen. People actually said things like that in those days!

The second year that I taught a section of the introductory economics course, I was expecting a baby in March and did not teach the spring semester. The man who took over my class announced to the class that, since no woman could teach economics adequately, he would start over and the first semester grades would not count. It was an exceptionally bright class and I had given quite a few "A's," so the students were upset. The department chair had to intervene.

In retrospect, the amazing thing was that the women were not more outraged. I think we thought we were lucky to be there at all. Outwitting the system was kind of a game. One of the university libraries was closed to women, and its books could not even be borrowed for a female on inter-library loan. I don't remember being upset. If I needed a book, I just got a male friend to check it out for me. ...

Realistically, moreover, academic opportunities were limited for my generation of women graduate students. Most major universities did not hire women in tenure track positions. Early in my career (about 1962), the University of Maryland was looking for an assistant professor in my general area. I was invited by a friend on the faculty to give a seminar and then had an interview with the department chairman. He was effusive in his praise for my work and said how sorry he was that they could not consider me for the position. I asked why not, and he said that the dean had expressly forbidden their considering any women. That wasn't illegal at the time, so we both expressed our regrets, and I left with no hard feelings.
She ended up at the Brookings Institution. In the late 1960s came a stint at the Department of Health, Education and Welfare during the Johnson administration, then back to Brookings. In the mid-1970s it was decided to start the Congressional Budget Office, which Rivlin ran from 1975 to 1983. Here's Rivlin's description of how she was chosen as the original director, from the Dews interview:
 I was the candidate of the Senate. They, rather stupidly, had two separate search processes, one in the Senate and one in the house. I told them they should never do that again, and they haven't. But that left them with two candidates. I was the candidate of the Senate and a very qualified man named Sam Hughes, who had been the deputy at OMB—no, at the Government Accounting Office— was the other candidate. But the chairman of the House Budget Committee was a man named Al Ullman, and Mr. Ullman had said in an off moment, over his dead body was a woman going to get this job. So, there was kind of a standoff, and then it was solved by an accidental event. The chairman of Ways and Means was a powerful congressman from Arkansas named Wilbur Mills, and he was a mover and shaker in the Congress and a very intelligent man. But he had a weakness—he was an alcoholic. And one night he and an exotic dancer named Fanne Fox were proceeding down Capitol Hill toward the Tidal Basin in his car and Fanne leapt out of the car and into the Tidal Basin. She didn't drown in the Tidal Basin—it's quite shallow—but it was a scandal and Wilbur Mills had to resign. And Al Ullman, chairman of the Budget Committee, was ranking member on Ways and Means, so he moved up. And that left a new chairman who wasn't committed to the previous process, Brock Adams, and he said to Senator Muskie, who was my sponsor, if you want Rivlin it's okay with me. So, I owe that job to Fanne Fox.
Rivlin later ran the Office of Management and Budget during the Clinton administration in the early 1990s. From 1996-99 she was vice-chair on the Federal Reserve Board of Governors. Here's her description of the switch, from the Levy interview:
Off and on over my career, I've been asked if I wanted to be on the Federal Reserve, usually when I was doing something else that I loved doing. One time I was running the Congressional Budget Office. I was doing something very exciting that I wanted to go on doing. And then later, when I was in the Clinton administration, I was asked about the Fed, but I was fully engaged at the Office of Management and Budget and didn't want to leave that. But after I'd been there for almost four years, it did seem, perhaps, time for a change.
For some reason, that description makes me smile. For some people, being on the Fed is a once-in-a-lifetime opportunity. But if you have the capabilities and judgement of Alice Rivlin, it's an opportunity that gets offered to you every few years, until the time is right.  From 1998 to 2001, Rivlin was chair of the District of Columbia Financial Responsibility and Management Assistance Authority, which had legal authority to oversee the finances of the District of Columbia. 

Along the way, Rivlin went back to Brookings a few times, where she started her career 62 years ago in 1957. She taught classes at Georgetown and gave talks and wrote. Rivlin was working on one more book, hoping to publish it this fall. I hope it was close enough to complete that economists and everyone else can hear from her one more time.

Added later: For one more Rivlin interview, here's a 2002 interview which is part of an oral history of the Clinton presidency, and thus focused on the early and mid-1990s. The summary says: "Alice Rivlin discusses deficit reduction, working with the National Economic Council, North American Free Trade Agreement, 1995-1996 government shutdown, Haiti, and press relations."

Tuesday, May 14, 2019

Are Firms Doing a Lousy Job in How They Hire?

In a lot of economic models, firms decide to hire based on whether they need more workers to meet the demand for their products; in the lingo, labor is a "derived demand," derived from the desired level of output. Beyond that, economic models often don't pay much attention to the details of how hiring happens, assuming that profit-maximizing firms will figure out relatively cost-effective ways of gathering and keeping the skills and workers they need. But what if that hypothesis is wrong?

Peter Cappelli thinks so, and writes "Your Approach to Hiring Is All Wrong" in the May-June 2019 issue of the Harvard Business Review.  He writes:
Only about a third of U.S. companies report that they monitor whether their hiring practices lead to good employees; few of them do so carefully, and only a minority even track cost per hire and time to hire. ... Employers also spend an enormous amount on hiring—an average of $4,129 per job in the United States, according to Society for Human Resource Management estimates, and many times that amount for managerial roles—and the United States fills a staggering 66 million jobs a year. Most of the $20 billion that companies spend on human resources vendors goes to hiring.

One big change that Cappelli emphasizes is a shift from filling job vacancies internally to filling them externally. The old working assumption was to hire from within, but in the last few decades, the working assumption seems to be that hiring from outside is preferable. Cappelli writes:
In the era of lifetime employment, from the end of World War II through the 1970s, corporations filled roughly 90% of their vacancies through promotions and lateral assignments. Today the figure is a third or less. When they hire from outside, organizations don’t have to pay to train and develop their employees. Since the restructuring waves of the early 1980s, it has been relatively easy to find experienced talent outside. Only 28% of talent acquisition leaders today report that internal candidates are an important source of people to fill vacancies—presumably because of less internal development and fewer clear career ladders. ... Companies hire from their competitors and vice versa, so they have to keep replacing people who leave. Census and Bureau of Labor Statistics data shows that 95% of hiring is done to fill existing positions. Most of those vacancies are caused by voluntary turnover. LinkedIn data indicates that the most common reason employees consider a position elsewhere is career advancement—which is surely related to employers’ not promoting to fill vacancies.
There doesn't seem to be evidence that hiring from outside is better. What evidence does exist suggests that internal hires get up the learning curve faster, and often don't need as much of an immediate pay bump. If you persuade someone to leave their current employer by offering more money, what you get is a worker whose top priority is "more money," rather than work challenges and career opportunities. ("As the economist Harold Demsetz said when asked by a competing university if he was happy working where he was: `Make me unhappy.'”)

A common emphasis of modern labor markets is to have a big "funnel," with lots of people applying for jobs but only maybe 2% eventually getting a job. But making the funnel as big as possible means that you face the costs of sorting through a very large number of applicants. And it turns out that lots of managers who are perfectly fine at running a business aren't necessarily all that good at evaluating job applicants.

It turns out that college grades aren't a great predictor of future job performance. Interviews by managers aren't a great predictor, either: interviewers tend to be biased toward applicants who share their interests and cultural background and would make congenial friends, but those aren't necessarily the applicants who will turn out to be the best employees. There are lots of newfangled machine learning techniques that purport to guide hiring, but they are recent enough that it's not clear what kind of workforces they ultimately end up producing.

So what does work?

1) Give actual tests of the skills that will be useful in the job.

2) Think about promoting and filling positions from within.

3) Give applicants a realistic preview of what the job actually involves. This is old-style advice, but some companies like Google and Marriott have set up online games that give applicants a sense of the kinds of decisions and tasks the job would require.

4) Evaluate hiring by following up on how employees perform. Yes, employee performance in big organizations can be hard to measure, but some basic approaches are available and underused. Which employees quit? Which employees are absent a lot? Which employees qualify for performance-based raises? Or just ask the supervisor if they would hire that person again.

In a nearby article in the same issue of HBR, Dane E. Holmes of Goldman Sachs describes how they hire 3,000 summer interns each year, thus collecting a talent pool they hope will drive the company in the future. Rather than having many different people try to carry out many different interviews at many different locations, Holmes describes a different approach:
"[W]e decided to use `asynchronous' video interviews—in which candidates record their answers to interview questions—for all first-round interactions with candidates. Our recruiters record standardized questions and send them to students, who have three days to return videos of their answers. This can be done on a computer or a mobile device. Our recruiters and business professionals review the videos to narrow the pool and then invite the selected applicants to a Goldman Sachs office for final-round, in-person interviews. (To create the video platform, we partnered with a company and built our own digital solution around its product.)"
This approach allows the company to reach out to a broader group of applicants, to standardize the interview process, to give applicants a sense of the sorts of issues that arise at this employer, to test the ability of applicants to respond to these sorts of issues, and to allow the first round of applicants to be evaluated in the same way. Goldman Sachs can also use the results to help match applicants to appropriate roles within the company.
We seem to be living in an economy with very low unemployment rates, and where lots of jobs are being advertised, but where actually being hired is often a costly process for both applicants and employers. Moreover, it's an economy that seems relatively full of outside options for shifting to other employers, but relatively light on inside options for expanding skills and building a career with one's current employer. A job market in a dynamic economy will always have some element of musical chairs, as people shift between jobs, but it should also encourage lasting matches between an employee and an employer when the fit is a good one.

Monday, May 13, 2019

The Origin of "Third World" and Some Ruminations

Back in the late 1970s when I was first reading about the world economy in any serious way, it was still common to describe the world as divided into "first world" market-driven high income economies, "second world" command-and-control economies, and "third world" low-income countries. Jonathan Woetzel offers a commentary on the sources of that nomenclature, and how outdated it has come to sound, in "From Third World To First In Class: Rapid economic growth is blurring the distinctions among developing, emerging and advanced countries," appearing in the most recent Milken Institute Review (Second Quarter 2019, pp. 22-33).  Woetzel writes:
When historians in the distant future look back at our era, the name Alfred Sauvy may appear in a footnote somewhere. Sauvy was a French demographer who coined the term “third world” in a magazine article in 1952, just as the Cold War was heating up. His point was that there were countries not aligned with the United States or the Soviet Union that had pressing economic needs, but whose voices were not being heard.
Sauvy deliberately categorized these countries as inferior: “tiers monde” (or third world) was an explicit play on “tiers état” (third estate), the ragged assembly of peasants and bourgeoisie under France’s ancien régime that was subservient to the monarchy (the first estate) and the nobility (the second). “The third world is ignored, exploited and mistrusted, just like the third estate,” Sauvy wrote. “The millennial cycle of life and death has become a cycle of misery.”
As a piece of editorial rhetoric based on the fetid geopolitical atmosphere of the time, Sauvy’s essay was on the mark. As prophecy about the course of economic progress, he could hardly have been more wrong. “Third world” today is politically incorrect as a phrase and economically incorrect as a concept, for it fails to take into account one of the biggest stories of the past half-century: the spectacular economic development that has taken place across the globe. Since Sauvy’s essay, some (but not all) of the countries he referred to have enjoyed very rapid growth and huge leaps in living standards, including in health and education. ... The changes have been so striking that we have reached a point where the very distinctions among “developing,” “emerging” and “advanced” countries have become blurred.
These other terms have been criticized for a lack of accuracy and political correctness, too. For example, if some countries are "advanced," then are other countries "backward" or "behind"? If some countries are described as "emerging," what are they emerging from, and what are they becoming? When countries were referred to as "developing," it sometimes seemed to be more of an optimistic outlook than an actual description, and referring to countries with rich and lengthy cultural, political and human inheritances as "undeveloped" seemed to put economic values ahead of all others.

Others have used acronyms, as I discussed in "From BRICs to MINTs" (February 24, 2014), but looking at clusters of four countries, whether it's Brazil, Russia, India, and China or Mexico, Indonesia, Nigeria, and Turkey, doesn't capture the breadth of the economic shift that is occurring.
Woetzel describes how the global economy is changing in response to four shifts: the rapid march of technological progress; the emerging “superstar” phenomenon, which is exacerbating inequalities; the rapidly changing dynamics of China’s economy; and the evolving nature of globalization itself. He draws on a report that he co-authored with Jacques Bughin, "Navigating a world of disruption" (McKinsey Global Institute, January 2019),  which describes the range and scope of economic success stories in countries around the world. That report notes: 
Among emerging economies, our research has identified 18 high-growth “outperformers” that have achieved powerful and sustained long-term growth—and lifted more than one billion people out of extreme poverty since 1990. Seven of these outperformers (China, Hong Kong, Indonesia, Malaysia, Singapore, South Korea, and Thailand) have averaged GDP growth of at least 3.5 percent for the past 50 years. Eleven other countries (Azerbaijan, Belarus, Cambodia, Ethiopia, India, Kazakhstan, Laos, Myanmar, Turkmenistan, Uzbekistan, and Vietnam) have achieved faster average growth of at least 5 percent annually over the past 20 years. Underlying their performance are pro-growth policy agendas based on productivity, income, and demand—and often fueled by strong competitive dynamics. The next wave of outperformers now looms, as countries from Bangladesh and Bolivia to the Philippines, Rwanda, and Sri Lanka adopt a similar agenda and achieve rapid growth.
It's certainly true that the old distinctions are breaking down. I've written before about what it means to be in a world economy "When High GDP No Longer Means High Per Capita GDP" (October 20, 2015).

Here's a list of high-income economies around the world, as classified by the World Bank. Some of the entrants on the high-income list may surprise people. Argentina and Chile? Korea and Israel? Poland and Croatia? If you dig into the numbers on GDP per capita, you find that South Korea is ahead of Spain, Portugal, and Greece, and only a couple of notches behind Italy. Israel is ahead of France and the United Kingdom in per capita GDP.


Meanwhile, China ranks with Mexico, Brazil, Thailand, and others in the "upper middle income" category. India and Indonesia are in the "lower middle income group." Looking ahead at the next few decades, most of the growth in the global economy seems likely to be coming from countries that were still being called "third world" four or five decades ago.

Follow-up: A correspondent from France sent along some follow-up thoughts about the origins of "third world." Above, Woetzel writes: "`Tiers monde' (or third world) was an explicit play on `tiers état' (third estate), the ragged assembly of peasants and bourgeoisie under France’s ancien régime that was subservient to the monarchy (the first estate) and the nobility (the second)." My correspondent writes:
1 - In fact, the "first estate" was the clergy and the "second estate" was the nobility.
2 - The "Tiers Etat" was far from uniformly "ragged", it also included some of the largest fortunes of France.
3 - The play on words is much more subtle and less dismissive in French. "Tiers", in French legalese and in everyday speak, means "third party", so basically Sauvy was also implicitly referring to countries which were not engaged in the defining conflict of the era, ie the Cold War.

Also you may be interested to know that, yes, "Sauvy was a French demographer", that was his main job, but that he was also an economic historian, whose 3 volume, 1500 pp textbook on the French economy between 1918 and 1939 was the standard - and fairly unwholesome - text ...

Friday, May 10, 2019

How To Cut US Child Poverty in Half

Back in the 1960s, the poverty rate for those over-65 was about 10 percentage points higher than the poverty rate for children under 18. For example, in 1970 the over-65 poverty rate was about 25%, while the under-18 poverty rate was 15%. But government support for the elderly rose substantially, and  in the 1970s, the over-65 poverty rate dropped below the under-18 rate. For the last few decades, the under-18 poverty rate has been 7-9 percentage points higher than the over-65 poverty rate. In 2017, for example, the under-18 poverty rate was 17.5%, while the over-65 poverty rate was 9.2%.   (For the numbers, see Figure 6 in this US Census report from last fall.)

Poverty is always distressing, but poverty for children has the added element that it shapes the lives of future citizens, workers, and neighbors. The National Academies Press has published A Roadmap to Reducing Child Poverty, edited by Greg Duncan and Suzanne Le Menestrel (February 2019). There is of course nothing magic about a specific "poverty line." Being just a little above the poverty line isn't all that different from being just a little below it. But the existence of such a line that is measured the same way over time can still be useful for analysis and policy.

In my own mind, there is a compelling case for reducing child poverty based on the importance of improving equality of opportunity in America. But even if that argument leaves you cold, there is a compelling case based on cold-blooded cost-benefit analysis.

The correlation between child poverty and later outcomes is unarguable. As one example, the report notes:
A study by Duncan, Ziol-Guest, and Kalil (2010) is one striking example. Their study uses data from a national sample of U.S. children who were followed from birth into their thirties and examines how poverty in the first six years of life is related to adult outcomes. What they find is that compared with children whose families had incomes above twice the poverty line during their early childhood, children with family incomes below the poverty line during this period completed two fewer years of schooling and, as adults, worked 451 fewer hours per year, earned less than half as much, received more in Food Stamps, and were more than twice as likely to report poor overall health or high levels of psychological distress. Men who grew up in poverty, they find, were twice as likely as adults to have been arrested, and among women early childhood poverty was associated with a six-fold increase in the likelihood of bearing a child out of wedlock prior to age 21.
But correlation isn't causation, of course, as economists (and this study) are quick to note. For example, say that there is a strong correlation between families in poverty and a lower education level for the parents. Perhaps a substantial share of the problems for children in poverty are not caused by lower family income, but by the lower education level of parents. If the root cause is lower parental education levels, then raising these families above the poverty line in terms of income won't have much effect on the long-term problems faced by children from these families.  

Making the case that various income-support programs will indeed address problems of children in poverty thus requires more detailed arguments, and the report goes through a number of studies in detail. But broadly speaking, raising families with children out of poverty affects the long-term outcomes for children in two ways. The report notes (citations omitted):
An “investment” perspective may be adopted ... emphasizing that higher income may support children’s development and well-being by enabling poor parents to meet such basic needs. As examples, higher incomes may enable parents to invest in cognitively stimulating items in the home (e.g., books, computers), in providing more parental time (by adjusting work hours), in obtaining higher-quality nonparental child care, and in securing learning opportunities outside the home. Children may also benefit from better housing or a move to a better neighborhood. Studies of some poverty alleviation programs find that these programs can reduce material hardship and improve children’s learning environments.
The alternative, “stress” perspective on poverty reduction focuses on the fact that economic hardship can increase psychological distress in parents and decrease their emotional well-being. Psychological distress can spill over into marriages and parenting. ... Parents’ psychological distress and conflict have in fact been linked with harsh, inconsistent, and detached parenting. Such lower-quality parenting may harm children’s cognitive and socioemotional development. 
These are ways in which additional income affects child development. Here are a couple of examples, chosen from many, of the evidence that has accumulated on this point. The report writes:

Neuroscientists have produced striking evidence of the effect of early-life economic circumstances on brain development. Drawing from Hanson et al. (2013), Figure 3-3 illustrates differences in the total volume of gray matter between three groups of children: those whose family incomes were no more than twice the poverty line (labeled “Low SES” in the figure); those whose family incomes were between two and four times the poverty line (“Mid SES”); and those whose family incomes were more than four times the poverty line (“High SES”). Gray matter is particularly important for children’s information processing and ability to regulate their behavior. The figure shows no notable differences in gray matter during the first nine or so months of life, but differences favoring children raised in high-income families emerge soon after that. Notably, the study found no differences in the total brain sizes across these groups—only in the amount of gray matter.
This study is again a correlation, not a proof of causality. As the report notes: "However, the existence of these emerging differences does not prove that poverty causes them. This study adjusted for age and birth weight, but not for other indicators of family socioeconomic status that might have been the actual cause of these observed differences in gray matter for children with different family incomes." But with all due caution rigorously observed, it seems to me a highly suggestive correlation. 

Other studies look at the long-term effects of existing government programs that have raised income levels for poor families. Here's another example:
In their 2016 study of possible long-term effects of Food Stamp coverage in early childhood on health outcomes in adulthood, Hoynes, Schanzenbach, and Almond focus on the presence or absence of a cluster of adverse health conditions known as metabolic syndrome. In the study, metabolic syndrome was measured by indicators for adult obesity, high blood pressure, diabetes, and heart disease. Scores on these indicators of emerging cardiovascular health problems increased (grew worse) as the timing of the introduction of Food Stamps shifted to later and later in childhood (Figure 3-4). The best adult health was observed among individuals in counties where Food Stamps were already available when these individuals were conceived. Scores on the index of metabolic syndrome increase steadily until around the age of five.

Add all these kinds of studies and factors up, and you can obtain a rough-and-ready estimate of the total cost of child poverty.

Holzer et al. (2008) base their cost estimates on the correlations between childhood poverty (or low family income) and outcomes across the life course, such as adult earnings, participation in crime, and poor health. ... Their estimates represent the average decreases in earnings, costs associated with participation in crime (e.g. property loss, injuries, and the justice system), and costs associated with poor health (additional expenditures on health care and the value of lost quantity and quality of life associated with early mortality and morbidity) among adults who grew up in poverty. ... Holzer et al. (2008) make a number of very conservative assumptions in their estimates of earnings and the costs of crime and poor health. ... All of these analytic choices make it likely that these estimates are a lower bound that understates the true costs of child poverty to the U.S. economy.
The bottom line of the Holzer et al. (2008) estimates is that the aggregate cost of conditions related to child poverty in the United States amounts to $500 billion per year, or about 4 percent of the Gross Domestic Product (GDP). The authors estimate that childhood poverty reduces productivity and economic output in the United States by $170 billion per year, or by 1.3 percent of GDP; increases the victimization costs of crime by another $170 billion per year, or by 1.3 percent of the GDP; and increases health expenditures, while decreasing the economic value of health, by $163 billion per year, or by 1.2 percent ...
McLaughlin and Rank (2018) build on the work of Holzer and colleagues by updating their estimates in 2015 dollars and adding other categories of the impact of childhood poverty on society. They include increased corrections and crime deterrence costs, increased social costs of incarceration, costs associated with child homelessness (such as the shelter system), and costs associated with increased childhood maltreatment in poor families (such as the costs of the foster care and child welfare systems). Their estimate of the total cost of childhood poverty to society is over $1 trillion, or about 5.4 percent of GDP. ...  They do make it clear that there is considerable uncertainty about the exact size of the costs of childhood poverty. Nevertheless, whether these costs to the nation amount to 4.0 or 5.4 percent of GDP—roughly between $800 billion and $1.1 trillion annually in terms of the size of the U.S. economy in 2018—it is likely that significant investment in reducing child poverty will be very cost-effective  over time.
Of course, various programs are already reducing the number of children who live below the poverty line. The figure shows estimates of what the child poverty rate would have been without certain programs, including the Earned Income Credit, the Child Tax Credit, the Supplemental Nutrition Assistance Program ("food stamps"), Supplemental Security Income, Social Security, unemployment compensation, and others. (One warning about the figure: the poverty rate for children is given here as 13%, because the study is using a Supplemental Poverty Measure that (for example) includes a value for in-kind benefits like Medicaid.) 
What additional programs would it take to reduce US child poverty by half? The report looks at a range of programs and designs and combinations, seeking to provide a menu of options rather than a single recommendation. For example, one can look at general assistance linked directly to work, like the Earned Income Credit, or assistance like food stamps or housing vouchers. One could provide means-tested benefits only to the poor, or a universal benefit to all children--but where the value of that benefit would be treated as taxable income for the non-poor. Here, for example, is one set of policies that would make a substantial difference, with their estimated effects and costs.

For example, if one chose the four top items on this list, the annual cost would be about $160 billion. The benefits later in life would be considerably larger. 

I don't propose spending $160 billion lightly. But I will point out that the expansion of health insurance under the Patient Protection and Affordable Care Act of 2010 costs the US government over $100 billion per year. Similarly, the Tax Cuts and Jobs Act passed in 2017 is projected to cost an average of $100 billion per year (or more). In short, our political system does seem fully capable of belching up expenditures of this size when the stars are properly aligned.

As the report points out, some of America's cousins have taken the plunge and committed to reducing child poverty by half.
The United States spends less to support low-income families with children than peer English-speaking countries do, and by most measures it has much higher rates of child poverty. Two decades ago, child poverty rates were similar in the United States and the United Kingdom. That began to change in March 1999, when Prime Minister Tony Blair pledged to end child poverty in a generation and to halve child poverty in 10 years. Emphasizing increased financial support for families, direct investments in children, and measures to promote work and increase take-home pay, the United Kingdom enacted a range of measures that made it possible to meet the 50 percent poverty reduction goal by 2008—a year earlier than anticipated. More recently, the Canadian government introduced the Canada Child Benefit in its 2016 budget. According to that government’s projections, the benefit will reduce the number of Canadian children living in poverty by nearly half.
Personally, I would be a lot more comfortable with the extent of US inequality if the child poverty rate was considerably lower, and thus the starting points for American children were closer together.