Wednesday, February 28, 2018

Two Central Bankers Walk Into a Restaurant, and the Pawnbroker for All Seasons

Here's the set-up line for the story: Two central bankers walk into a London restaurant ...

Mervyn King tells the tale at the start of his lecture "Lessons from the Global Financial Crisis," in a speech given upon receipt of the Paul A. Volcker Lifetime Achievement Award from the National Association for Business Economics (February 27, 2018). I'm quoting from the prepared text for the talk. Of course, King was Governor of the Bank of England from 2003-2013, while Paul Volcker was Chair of the Federal Reserve from 1979-1987. This is how King tells the story of their first meeting back in 1991:
"I first met Paul in 1991 just after I joined the Bank of England. He came to London and asked Marjorie Deane of the Economist magazine to arrange a dinner with the new central banker. The story of that dinner has never been told in public before. We dined in what was then Princess Diana’s favourite restaurant, and at the end of the evening Paul attempted to pay the bill. Paul carried neither cash nor credit cards, but only a cheque book, and a dollar cheque at that. Unfortunately, the restaurant would not accept it. So I paid with a sterling credit card and Paul gave me a US dollar cheque. This suited us both because I had just opened an account at the Bank of England and been asked, rather sniffily, how I intended to open my account. What better response than to say that I would open the account by depositing a cheque from the recently retired Chairman of the Federal Reserve. I basked in this reflected glory for two weeks. Then I received a letter from the Chief Cashier’s office saying that most unfortunately the cheque had bounced. Consternation! It turned out that Paul had forgotten to date the cheque. What to do? Do you really write to a former Chairman pointing out that his cheque had bounced? Do you simply accept the financial loss? After some thought, I hit upon the perfect solution. I dated the cheque myself and returned it to the Bank of England. They accepted it without question. I am hopeful that the statute of limitations is well past. But the episode taught me a lifelong lesson: to be effective, regulation should focus on substance not form."
Maybe you need to be an economist to see the humor in the story. But consider: a recently retired Fed chairman trying to pay with a US dollar check in a London restaurant, and a future Governor of the Bank of England dating someone else's check. It's a story that should vanquish any doubts about whether central bankers are fallible human beings.

The rest of the lecture has some good nuggets, too. King points out that at this time 10 years ago, in early 2008, we were about two weeks away from the failure of Bear Stearns. He says:
"During the crisis we were vividly reminded of three old lessons. First, our system of banking is fragile, and reflects what I call financial alchemy. Banks that appear to be well-capitalised one day are not the next. Solvency is in the eye of the lender. Second, banks that borrow short and lend long are, as we saw in 2008, subject to runs that threaten the payments system and hence the wider economy. Third, regulatory reform is well-intentioned but has fallen into the trap of excessive detail. The complexity of the current regulation of financial services is damaging and unsustainable."
One of King's proposals is that central banks should become what he calls the Pawnbroker for All Seasons. The idea is that rather than putting central banks in a position where, in an emergency, they face a choice between uncontrolled lending and letting the financial system melt down, we need a plan in advance. King suggests that each bank work with the central bank ahead of time to assess the value of the bank's collateral--say, the mortgages and other loans the bank holds. The bank and the central bank together would agree that if the bank finds itself caught in a financial crisis, it could hand this collateral to the central bank in exchange for a loan of a predetermined amount. King says:
"My proposal replaces the traditional lender of last resort function, and the provision of deposit insurance, by making the central bank a Pawnbroker for all Seasons. ... In normal times, each bank would decide how much of its assets it would preposition at the central bank allowing plenty of time for the collateral to be assessed. ... Adding up over all assets that had been pre-positioned, it would then be clear how much central bank money the bank would be entitled to borrow – with no questions asked – at any instant. The pawnbroker rule would be that the credit line of the bank would have to be sufficient to cover all liabilities that could run within a pre-determined period of, say, one month or possibly longer. ...The scheme is not a pipedream. US banks have pre-positioned collateral with the Federal Reserve sufficient to produce a total lendable value of just under one trillion dollars. Together with deposits at the Federal Reserve, the cash credit line of banks is around one-quarter of total bank deposits."
King's point is that a bunch of complex financial regulations--where their very complexity means they can be gamed--isn't a real answer. Solemnly swearing that the central bank won't lend in a crisis, and will just let the financial system melt down, isn't an answer, either. But thinking in advance about how much a central bank would lend to a bank, based on the collateral that bank has available, could actually help cushion the next (and there will be a next) financial crisis.
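Mechanically, King's rule reduces to simple arithmetic: add up the haircut-adjusted value of the collateral a bank has pre-positioned in normal times, and check whether that pre-agreed credit line covers the liabilities that could run within the chosen window. Here is a minimal sketch of that calculation; the balance-sheet values and haircuts are hypothetical, chosen only for illustration.

```python
# A minimal sketch of the arithmetic behind King's "Pawnbroker for All Seasons" rule.
# All balance-sheet values and haircuts below are hypothetical, purely for illustration.

def credit_line(prepositioned_assets):
    """Lendable value: sum of pre-positioned collateral after agreed haircuts."""
    return sum(value * (1 - haircut) for value, haircut in prepositioned_assets)

# (collateral value, haircut) pairs, assessed in advance in normal times
assets = [
    (400.0, 0.30),  # mortgage book: value 400, 30% haircut
    (150.0, 0.05),  # government bonds: value 150, 5% haircut
    (100.0, 0.50),  # business loans: value 100, 50% haircut
]

line = credit_line(assets)   # cash the bank could draw "with no questions asked"
runnable = 320.0             # liabilities that could run within, say, one month

print(f"Pre-agreed credit line: {line:.1f}")
print(f"Runnable liabilities:   {runnable:.1f}")
if line >= runnable:
    print("Pawnbroker rule satisfied.")
else:
    print("Shortfall: the bank must pre-position more collateral or shrink its runnable funding.")
```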

Tuesday, February 27, 2018

Black/White Disparities: 50 Years After the Kerner Commission

The National Advisory Commission on Civil Disorders was appointed by President Lyndon Johnson in the summer of 1967 and issued its report in February 1968. US cities had been experiencing sporadic but severe riots for three years. The report became known as the Kerner Commission after its chair, Governor Otto Kerner, Jr. of Illinois. I'm not seeing a place where the full report is freely available online, which is a bit of a surprise to me, although the introduction and first 70 pages or so are available here. For the tone, here is some of the introduction: 
"The summer of 1967 again brought racial disorders to American cities, and with them shock, fear, and bewilderment to the Nation. The worst came during a 2-week period in July, first in Newark and then in Detroit. Each set off a chain reaction in neighboring communities. ...
"This is our basic conclusion: Our Nation is moving toward two societies, one black, one white--separate and unequal. Reaction to last summer's disorders has quickened the movement and deepened the division. Discrimination and segregation have long permeated much of  American life; they now threaten the future of every American. 
"This deepening racial division is not inevitable. The movement apart can be reversed. Choice is still possible. Our principal task is to define that choice and to press for a national resolution. 
"To pursue our present course will involve the continuing polarization of the American community and, ultimately, the destruction of basic democratic values. The alternative is not blind repression or capitulation to lawlessness. It is the realization of common opportunities for all within a single society. This alternative will require a commitment to national action--compassionate, massive, and sustained, backed by the resources of the most powerful and the richest nation on this earth.
"From every American it will rcquire ncw attitudes, ncw understanding, and, above all, new will. The vital needs of the Nation must be met; hard choices must be made, and, if necessary, new taxes enacted. 
"Violence cannot build a better society. Disruption and disorder nourish repression, not justice. They strike at the freedom of every citizen. The community cannot--it will not--tolerate coercion and mob rule. 
"Violence and destruction must be ended--in the streets of the ghetto and in the lives of people. 
"Segregation and poverty have created in the racial ghetto a destructive environment totally unknown to most white Americans. What white Americans have never fully understood--but what the Negro can never forget--is that white society is deeply implicated in the ghetto. White institutions created it, white institutions maintain it, and white society condones it."
From a modern standpoint, it's interesting to note that this report is being published in 1968, four years after the passage of the Civil Rights Act of 1964. The decade after that law did see some dramatic changes for African-Americans in employment, education, and voting. For details, see John J. Donohue III and James Heckman, "Continuous Versus Episodic Change: The Impact of Civil Rights Policy on the Economic Status of Blacks," Journal of Economic Literature (December 1991, 29:4, pp. 1603-1643). But much of that progress seemed to taper off by the mid-1970s.

Thus, the 50th anniversary of the Kerner Commission offers a chance for reflection. Janelle Jones, John Schmitt, and Valerie Wilson offer a useful starting point in the short report, 50 years after the Kerner Commission: African Americans are better off in many ways but are still disadvantaged by racial inequality, from the Economic Policy Institute (February 26, 2018). The report compares black and white America in 1968 and the present along a number of dimensions. (So that the table would be more readable on the blog, I've trimmed off the last column, which shows changes between the two years.)

For example, blacks have made dramatic gains to near-equality with whites in high school graduation rates. But in terms of college attendance, the racial gap remains very large. 

Black unemployment rates continue to be consistently higher than white rates. The black/white gaps in hourly pay, annual income, the poverty rate, and wealth have closed a bit, after 50 years (!), but remain large.  The homeownership rate for blacks has barely budged in 50 years.

Life expectancy is close to equal for whites and blacks. Infant mortality has dropped substantially for both blacks and whites in the last 50 years, but remains more than twice as high for blacks. Incarceration rates have more than doubled for both whites and blacks, and the black/white incarceration ratio has risen from 5:1 to 6:1 over the last 50 years. 
The authors of this Economic Policy Institute report aren't arguing over causes and reasons--at least not here--just putting some facts out there. But I'll add a few words.

The patterns mentioned here are of course intertwined. The fact that whites have far higher college graduation rates affects wages, income, poverty rates, and wealth. Lower education and income are also correlated with infant mortality rates and crime.  Differences in education level between groups can take a long time to play out. Someone who didn't finish high school back around 1970 has probably just retired in the last few years. Someone who didn't go to college in the 1970s or the 1980s or the 1990s has probably been working for decades in lower-paying jobs. When parents have less education, their children tend to have less education, too. And while racial discrimination in America doesn't operate with the same brute force as 50 years ago, it surely continues to play a role in patterns of housing, employment, and what society is willing to tolerate in terms of educational results.

The racial disparities in 2018 aren't happening in the same society as in 1968, or from the same causes, or against the same backdrop of urban rioting. But black/white racial disparities remain very large. Very large indeed. 

Friday, February 23, 2018

Some Thoughts About Economic Exposition in Math and Words

Analysis in economics and other social sciences often uses a combination of words and mathematics. There is an ongoing critique by those who argue that while the math may sometimes be useful, it is too often deployed with insufficient regard for whether it captures the underlying economic reality. In such cases, the argument goes, math is used as a way of pretending that an argument about the real-world economy has been definitively made, or settled, when in reality only an equation of limited application has been solved.

Paul Romer recently offered a pithy commentary on this longstanding dispute on his blog. Romer quotes from an earlier post by Dani Rodrik, who, in some lists of advice for economists and non-economists, includes the following:
"Make your model simple enough to isolate specific causes and how they work, but not so simple that it leaves out key interactions among causes. ... Unrealistic assumptions are OK; unrealistic critical assumptions are not OK. ...  Do not criticize an economist’s model because of its assumptions; ask how the results would change if certain problematic assumptions were more realistic. ... Analysis requires simplicity; beware of incoherence that passes itself off as complexity. ... Do not let math scare you; economists use math not because they are smart, but because they are not smart enough."
Romer has in recent years expressed concerns over "mathiness," in which predetermined conclusions masquerade behind what looks like an "objective" mathematical model.  As he wrote in a 2015 paper:
"The style that I am calling mathiness lets academic politics masquerade as science. Like mathematical theory, mathiness uses a mixture of words and symbols, but instead of making tight links, it leaves ample room for slippage between statements in natural versus formal language and between statements with theoretical as opposed to empirical content. ... [M]athiness could do permanent damage because it takes costly effort to distinguish mathiness from mathematical theory." 
In his December blog post, Romer adds some rules of his own: 
1/ The test is whether math adds to or detracts from clarity and precision. A writer can use either the words of everyday language or the symbols from math to make assertions that are clear and precise; or opaque and vague.
2/ The deep problem is intent, not ability or skill.
3/ Writers who want to make predictions use words and math to be clear and precise. Writers who want to make excuses use words and math to be opaque and vague.
4/ Compared to words, math and code tend to be both more precise and more opaque ...
I might quibble a bit with his rule #2, because my sense is that many of those who fail to communicate clearly, whether in math or in words, either have simply not put the time and effort into doing so, or lack the ability/skill to do so. But at least for me, the notion that math is "both more precise and more opaque" than words is an insight worth keeping.

Romer's comment reminded me of an earlier set of rules from the great economist Alfred Marshall, whose Principles of Economics, published in 1890 and the standard textbook of economics for several decades thereafter, was quite careful to keep almost all of the words in the text, with the underlying mathematics in footnotes. Marshall wrote in a 1906 letter to A. L. Bowley (reprinted in A.C. Pigou (ed.), Memorials of Alfred Marshall, 1925 edition, quotation on p. 427): 
"But I know I had a growing feeling in the later years of my work at the subject that a good mathematical theorem dealing with economic hypotheses was very unlikely to be good economics; and I went more and more on the rules -- (1) Use mathematics as a shorthand language; rather than as an engine of inquiry. (2) Keep to them till you have done. (3) Translate into English. (4) Then illustrate by examples that are important in real life. (5) Burn the mathematics. (6) If you can't succeed in 4, burn 3. This last I did often. ...
Mathematics used in a Fellowship thesis by a man who is not a mathematician by nature -- and I have come across a good deal of that -- seems to me an unmixed evil. And I think you should do all you can to prevent people from using Mathematics in cases in which the English language is as short as the Mathematical ..."
As a final example of challenges that arise when mixing mathematics into economic situations, consider this definition of "logic" taken from Ambrose Bierce's (1906) The Devil's Dictionary:
LOGIC, n. The art of thinking and reasoning in strict accordance with the limitations and incapacities of the human misunderstanding. The basis of logic is the syllogism, consisting of a major and a minor premise and a conclusion— thus:
Major Premise: Sixty men can do a piece of work sixty times as quickly as one man.
Minor Premise: One man can dig a posthole in sixty seconds; therefore—
Conclusion: Sixty men can dig a posthole in one second.
This may be called the syllogism arithmetical, in which, by combining logic and mathematics, we obtain a double certainty and are twice blessed.
An awful lot of writing in economics combines logic and mathematics, and as Bierce would sarcastically say, we thus obtain "a double certainty and are twice blessed."

Thursday, February 22, 2018

A Grasshopper View: The 2018 Economic Report of the President

The Council of Economic Advisers was established by the Employment Act of 1946 as a separate office in the White House. Along with an ongoing stream of reports and advice, the CEA also produces an annual report, and the 2018 Economic Report of the President is now available. The CEA is headed by three political appointees, which is fine by me. I expect every president to choose appointees who are broadly in agreement with administration policies. My (often unrequited) hope is that the president will then actually listen to such experts on the dangers of likely tradeoffs and the details of policy design.

If you want to read the reasonable person's defense of Trump's economic policies--either because you want to support these policies or you want to rebut them--the Economic Report thus offers a useful starting point. For example, the first chapter is a defense of the Tax Cuts and Jobs Act (TCJA) passed in December 2017, and the second chapter is a defense of the Trump administration's regulatory policy efforts.  I tend to skim over such partisan arguments, but I still appreciate the figures and tables surrounding these arguments. They often describe facts and patterns worthy of note, whatever one's policy beliefs about the appropriate reaction to them.

In that spirit, I'll offer a grasshopper view of the report, leaping with no particular rhyme or reason between figures that caught my eye, with a few sentences quoted from the report about each. Topics include: geographic mobility, road infrastructure, the US balance of trade since 1790, the fall in US manufacturing employment after 2000, the opioid epidemic, declining investment alongside rising payouts to shareholders, and the international slowdown in labor productivity over the last decade or so.

The Fall of Geographic Mobility 

"High levels of unemployment concentrated in particular locations may amplify exit from the labor force, especially when workers are unwilling or unable to move to a location with a stronger job market. People move to improve their life circumstances, but the share of Americans moving has been declining, and is currently at its lowest value on record (figure 3-21).
"In addition to family and social connections, deterrents to moving include (1) search time, (2) State-specific occupational licensing requirements, (3) local land use regulations that raise housing costs in places with the greatest potential growth, and (4) homeowners’ limited ability to sell their homes. Such obstacles to finding better employment opportunities, coupled with high local unemployment rates, may lead employees to ultimately exit the workforce. More research is needed to determine the direct relationship between geographic immobility and labor force participation, specifically exploring whether obstacles to moving have increased over time, and how prime-age workers in particular have been affected."
More Driving on the Same Roads


"For example, between 1980 and 2016, vehicle miles traveled in the United States more than doubled, while public road mileage and lane miles rose by only 7 and 10 percent, respectively (figure 4-1). Unsurprisingly, queuing caused by traffic congestion has risen, imposing both direct and indirect costs on business and leisure travelers alike ..."

Swings in the US balance of trade since 1790

"Figure 5-7 illustrates the U.S. trade balance from 1790 to the present, expressed as a share of GDP. From 1790 through 1873, the U.S. trade balance was volatile, in part due to the low trade volumes (Lipsey 1994). The trade balance swung back and forth between surplus and deficit, but was mostly in deficit. From 1873 through the 1960s, the trade balance was mostly in surplus. The largest historic surpluses were during the years 1916–17 and 1943–44, as wartime production and trade with allies predominated. Since 1976, the trade balance has been continually in deficit. The largest deficit as a share of GDP was nearly 6 percent in 2006, a share exceeded in only six other years in U.S. history and not seen since 1816."
The sharp fall in manufacturing employment after 2000

"Although employment in the sector has long exhibited procyclicality, it has seen changes since 2000. In the expansionary period from the end of 2000 through 2007, the economy failed to recover the manufacturing job losses it experienced during the previous recession years."

The Opioid Epidemic

"And since 1999, over 350,000 people have died of opioid-involved drug overdoses, which is 87 percent of the 405,399 Americans killed in World War II (DeBruyne 2017). The staggering opioid death toll has pushed drug overdoses to the top of the list of leading causes of death for Americans under the age of 50 and has cut 2.5 months from the average American’s life expectancy (Dowell et al. 2017). ... 
"The opioid epidemic evolved with three successive waves of rising deaths due to different types of opioids, with each wave building on the earlier one (Ciccarone 2017). In the late 1990s, in response to claims that pain was undertreated and assurances from manufacturers that new opioid formulations were safe, the number of opioid prescriptions skyrocketed (CDC 2017b). What followed was an increase in the misuse of and deaths related to these prescriptions (figure 6-2). As providers became aware of the abuse potential and addictive nature of these drugs, prescription rates fell, after peaking in 2011. Deaths involving prescription opioids leveled off, but were followed by a rise in deaths from illicit opioids: heroin and fentanyl. Heroin deaths rose first, followed by a rise in deaths involving fentanyl—a synthetic opioid that is 30 to 50 times more potent than heroin and has legitimate medical uses but is increasingly being illicitly produced abroad (primarily in Mexico and China) and distributed in the U.S., alone or mixed with heroin. In 2015, males age 25 to 44 (a core group of the prime-age workers whose ages range from 25 to 54) had the highest heroin death rate, 13 per 100,000. Fentanyl-related deaths surpassed other opioid-related deaths in 2016."
Investment is down, while firms pay out more to shareholders

"It is a matter of concern that net investment has been generally falling as a share of the capital stock during the past 10 years, which limits the economy’s productive capacity. In 2016, net investment as a share of the capital stock fell to a level previously seen only during recessions (figure 8-10). The Tax Cuts and Jobs Act is designed to increase the pace of net investment. As discussed further below, the slowdown in investment has also exacerbated the slowdown in labor productivity growth. ...
"The rate of payouts to shareholders by nonfinancial firms, in the form of dividends together with net share buybacks, has been gradually trending higher for several decades, although it fell in 2017 (figure 8-11). Nonfinancial corporations returned nearly half the funds that could be used for investment to stockholders in 2017. In a well-functioning capital market, when mature firms do not have good investment opportunities, they should return funds to their stockholders, so the stockholders can invest these funds in young and growing firms. Although it may be admirable for individual firms to thus return funds to their shareholders, the rising share of paybacks to shareholders suggests that investable funds are not being adequately recycled to young and dynamic firms. GutiƩrrez and Philippon (2016) find that firms in industries with more concentration and more common ownership invest less."
The International Decline in Labor Productivity
"Labor productivity growth has been slowing in the major advanced economies (figure 8-41), driven by an estimated decline of 0.3 percent in annual multifactor productivity growth relative to average precrisis growth of 1 percent a year. This slowdown is broad-based, also affecting developing and emerging market economies. Although the causes of this productivity slowdown are unclear, a number of hypotheses have been offered—including financial crisis legacies, such as weak demand and lower capital investment; the trade slowdown; the aging of the post–World War II Baby Boom generation in its various forms around the world; the rising share of low-productivity firms; and a widening gap between high- and low-productivity firms. The Organization for Economic Cooperation and Development (OECD) finds that the survival of abnormally low-productivity firms may have contributed to the slowdown in productivity growth." 

Wednesday, February 21, 2018

Brainstorming Anti-Poverty Programs

There are legitimate concerns about how the poverty line is set. It was set back in the early 1960s and has been adjusted for inflation since then, but without regard for other changes in the economy. It doesn't vary according to regional differences in the cost of living, and it doesn't include in-kind benefits like Medicaid and food stamps. It is based on income levels, rather than consumption levels (or see also here). At least for a bloodless social scientist, it's possible to become focused on these conceptual issues to such an extent that basic empathy for those living with very low income levels gets crowded out. But just as one should remember that the poverty line is an arbitrary but useful convention, one should also remember that households living with very low income levels have a hard time.

Thus, it seems to me a profoundly useful exercise, every now and then, to set aside the questions of how to measure poverty and instead to focus on what might be done about it. In that spirit, the Russell Sage Foundation Journal of the Social Sciences has put together a special double issue on the theme of "Anti-poverty Policy Initiatives for the United States" (February 2018, vol. 4, issues 2-3). After an overview essay by Lawrence M. Berger, Maria Cancian, and Katherine Magnuson, the two issues include 15 papers with a wide range of concrete proposals: focused on children in low-income families, the elderly, renters, food stamps, the earned income tax credit, the minimum wage, subsidizing or guaranteeing jobs, postsecondary training and higher education, contraception, and more.

There's a lot to contemplate in these issues, and I'll list the table of contents, with links to specific papers, below. A number of the papers are focused on how to adjust existing programs at a moderate cost. Here, I'll just sketch a couple of proposals that think much bigger. 

For example, a "universal child allowance" means that any household with children, regardless of income level, would receive a per child payment from the government. . Luke Shaefer, Sophie Collyer, Greg Duncan, Kathryn Edin, Irwin Garfinkel, David Harris, Timothy M. Smeeding, Jane Waldfogel, Christopher Wimer, Hirokazu Yoshikawa  lay out what such a proposal might look like in "A Universal Child Allowance: A Plan to Reduce Poverty and Income Instability Among Children in the United States."

They point out that the US has oriented much of its social safety net toward work, with the result that children in families where the adults don't work much can be very badly off. Their baseline proposal is that every US family with children would receive about $250 per month ($3,000 per year) per child. Because the allowance doesn't depend on income or hours worked, it is not reduced if an adult works more hours or gets higher pay. There is no need for a large bureaucracy to monitor eligibility and sliding-scale reductions in benefits.

Lots of other countries have enacted policies along these lines. They write (citations omitted):
Part of the reason that other nations have fewer poor children than the United States is that they provide what the OECD terms a universal child benefit—a cash grant that goes to all families with children. Austria, Canada, Denmark, Finland, France, Germany, Ireland, Luxembourg, the Netherlands, Norway, Sweden, and the UK have all implemented a version of a child benefit. Some call their measures child allowances (CA). Others implement their CA through the tax code as universal child tax credits. A notable feature of these universal child benefit plans is that they are accessible to all: families with children receive them regardless of whether parents work and whatever their income. The level of these child benefits varies by country. The benefit in U.S. dollars for two children in Belgium and Germany is about $5,600 per year; in Ireland $4,000, and in the Netherlands $2,400. Canada has a base child allowance, in U.S. dollars, of roughly $5,000 per child under six and $4,300 per child age six to seventeen ...  
To pay for the allowances, they would start by scrapping the existing "$1,000 per child per year Child Tax Credit and a $4,000 per child per year tax exemption (often referred to as the child deduction)," which, as they point out, mostly go to families with incomes well above the poverty line. This saves $97 billion. The total estimated cost of the baseline proposal is about $190 billion, so the additional spending needed would be $93 billion. For those who are skeptical about whether the money would actually benefit children, the authors point to a body of evidence that, for low-income families in particular, available cash seems to make a real difference in many measures of well-being. 
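For a rough sense of how those numbers fit together, here is a back-of-the-envelope sketch of the cost arithmetic. The count of eligible children is an assumption chosen only to roughly reproduce the figures quoted above; the $3,000 allowance and the $97 billion offset come from the text.

```python
# Back-of-the-envelope sketch of the child allowance cost arithmetic described above.
# The number of eligible children is assumed for illustration; the allowance and the
# $97 billion offset from repealed tax provisions are the figures quoted in the text.

allowance_per_child = 3_000          # dollars per year ($250 per month)
eligible_children = 63_000_000       # assumed number of children, for illustration

gross_cost = allowance_per_child * eligible_children / 1e9   # in billions of dollars
offset = 97                                                   # billions saved from current credits
net_new_spending = gross_cost - offset

print(f"Gross cost:       ~${gross_cost:.0f} billion per year")
print(f"Existing offsets:  ${offset} billion")
print(f"Net new spending: ~${net_new_spending:.0f} billion per year")
```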

Another aggressive proposal is that the federal government should guarantee a job to everyone who wants one. Mark Paul, William Darity Jr., Darrick Hamilton, and Khaing Zaw sketch a proposal along these lines, and respond to some of the common concerns, in "A Path to Ending Poverty by Way of Ending Unemployment: A Federal Job Guarantee." Their proposal is: "Any American wanting a job, at any time, would be able to obtain one through the public employment program."

Their proposal is that local, state and federal governments would "conduct an inventory of their needs and develop a jobs bank. ... At the federal level, we anticipate a wide array of major public investment activities, which may include fostering a transition to a green energy economy, extending access to high-speed rail, improvements in our public park service, revival and product diversification for the postal service, and an increase in general services across the economy. At the state level, we anticipate the states to undertake major infrastructure investment projects, as well as projects to improve the services they offer to their citizens. At the local level, we expect communities to undertake community development projects, provide universal daycare, maintain and upgrade their public school facilities, and improve and expand the services provided by their libraries."

They envision that workers would be paid a starting wage of $11.56 per hour, plus health and retirement benefits. The program would have some room for promotions and pay raises, so they envision that the average wage would be about 35% above that level. Total cost of course depends on how many people would come looking for these jobs, but they estimate that at a time of fairly low unemployment, like July 2016, the cost could run from $651 billion to $2.1 trillion. On the other hand, anyone who takes this kind of job would have reduced eligibility for other kinds of government assistance: for example, they would no longer get Medicaid.  
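To see where estimates of that size come from, here is a rough sketch of the per-worker labor cost implied by those wage figures. The hours worked, the benefits loading, and the program sizes are assumptions for illustration, and the paper's own estimates also include non-wage costs such as materials and administration, so the totals below only indicate an order of magnitude.

```python
# Rough sketch of the per-worker labor cost implied by the wage figures above.
# Hours, benefits loading, and program sizes are assumptions for illustration; the
# published estimates also include non-wage costs, so these totals are only indicative.

starting_wage = 11.56                     # dollars per hour, from the proposal
average_wage = starting_wage * 1.35       # average pay roughly 35% above the starting wage
hours_per_year = 2_080                    # full-time year, assumed
benefits_loading = 0.25                   # health and retirement benefits as a share of pay, assumed

labor_cost_per_worker = average_wage * hours_per_year * (1 + benefits_loading)
print(f"Labor cost per worker: ~${labor_cost_per_worker:,.0f} per year")

for participants in (10e6, 15e6, 20e6):   # hypothetical take-up levels
    total = labor_cost_per_worker * participants / 1e9
    print(f"{participants / 1e6:.0f} million participants: ~${total:,.0f} billion per year")
```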

As they emphasize, this job guarantee would be most useful to those who have the hardest time finding a job now. Moreover, the guaranteed federal job would become a floor for the rest of the labor market: if a private employer wanted to hire someone, it would need to offer at least as much as the federal job guarantee. Although they don't emphasize this point, it would be interesting to live in a US society where no one would ever be able to say that they just can't find a job. 

I tend to be an incrementalist at heart, so both of these proposals stretch beyond my comfort level. I'm OK with the idea of a child allowance for a majority of families with children, but the "universal" part goes a little too far for me. The authors offer a number of choices that would hold down costs, and I'd be looking for more ways to do that. 

It would be fascinating to watch the politics of a job guarantee program evolve. There would certainly be concerns from a wide array of workers--from construction unions to day care workers--that the government would use the cheaper guaranteed jobs to cut back on wages. Politicians would bid for votes by offering higher pay to the guaranteed-job workers. The authors argue that concerns over phantom workers collecting paychecks are overblown, but I'm not so sure. Ultimately, I'm more comfortable with targeted job subsidies for employers (as discussed in other chapters). 

But whatever my specific preferences, these essays are the intellectual equivalent of splashing your face with cold water first thing in the morning. They offer a useful jolt and a wake-up call about the shape of a serious anti-poverty agenda. 

_________________________________
Table of Contents with links:

Volume 4, Number 2

"Anti-poverty Policy Innovations: New Proposals for Addressing Poverty in the United States," by
Lawrence M. Berger, Maria Cancian, Katherine Magnuson (4:2, pp. 1–19)

"A Universal Child Allowance: A Plan to Reduce Poverty and Income Instability Among Children in the United States," by H. Luke Shaefer, Sophie Collyer, Greg Duncan, Kathryn Edin, Irwin Garfinkel, David Harris, Timothy M. Smeeding, Jane Waldfogel, Christopher Wimer, Hirokazu Yoshikawa (4:2, pp. 22–42)

"Cash for Kids," by Marianne P. Bitler, Annie Laurie Hines, Marianne Page (4:2, pp. 43–73)

"A Targeted Minimum Benefit Plan: A New Proposal to Reduce Poverty Among Older Social Security Recipients A Targeted Minimum Benefit Plan," by Pamela Herd, Melissa Favreault, Madonna Harrington Meyer, Timothy M. Smeeding (4:2, pp.74–90)

"Reforming Policy for Single-Parent Families to Reduce Child Poverty," by Maria Cancian, Daniel R. Meyer (4:2, pp. 91–112)

"Reconstructing the Supplemental Nutrition Assistance Program to More Effectively Alleviate Food Insecurity in the United States." by Craig Gundersen, Brent Kreider, John V. Pepper (4:2, pp. 113–130)

"A Renter’s Tax Credit to Curtail the Affordable Housing Crisis," by Sara Kimberlin, Laura Tach, Christopher Wimer (4:2, pp. 131–160)

"The Rainy Day Earned Income Tax Credit: A Reform to Boost Financial Security by Helping Low-Wage Workers Build Emergency Savings," by Sarah Halpern-Meekin, Sara Sternberg Greene, Ezra Levin, Kathryn Edin (4:2, pp. 161–176)

Volume 4, Number 3

"Anti-poverty Policy Innovations: New Proposals for Addressing Poverty in the United States," by Lawrence M. Berger, Maria Cancian, Katherine Magnuson (4:3, pp. 1–19)

"Coupling a Federal Minimum Wage Hike with Public Investments to Make Work Pay and Reduce Poverty," by Jennifer Romich, Heather D. Hill (4:3, pp. 22–43)

"A Path to Ending Poverty by Way of Ending Unemployment: A Federal Job Guarantee," by Mark Paul, William Darity Jr., Darrick Hamilton, Khaing Zaw (4:3, pp. 44–63)

"Working to Reduce Poverty: A National Subsidized Employment Proposal," by Indivar Dutta-Gupta, Kali Grant, Julie Kerksick, Dan Bloom, Ajay Chaudry (4:3, pp. 64–83)

"A “Race to the Top” in Public Higher Education to Improve Education and Employment Among the Poor," by Harry J. Holzer (4:3,  84–99)

"Postsecondary Pathways Out of Poverty: City University of New York Accelerated Study in Associate Programs and the Case for National Policy," by Diana Strumbos, Donna Linderman, Carsosn C. Hicks (4:3, pp. 100–117)

"A Two-Generation Human Capital Approach to Anti-poverty Policy," by Teresa Eckrich Sommer, Terri J. Sabol, Elise Chor, William Schneider, P. Lindsay Chase-Lansdale, Jeanne Brooks-Gunn, Mario L. Small, Christopher King, Hirokazu Yoshikawa (4:3, pp. 118–143)

"Could We Level the Playing Field? Long-Acting Reversible Contraceptives, Nonmarital Fertility, and Poverty in the United States." by Lawrence L. Wu, Nicholas D. E. Mark (4:3, pp. 144–166)

"Assessing the Potential Impacts of Innovative New Policy Proposals on Poverty in the United States," by Christopher Wimer, Sophie Collyer, Sara Kimberlin (4:3, pp. 167–183)



Tuesday, February 20, 2018

Pollutant Taxes on Energy: Why So Focused on Roads?

Most countries of the world tax oil products, especially the gasoline used in the road sector, but impose only very low taxes on other fossil fuels.  The OECD compiles a Taxing Energy Use Database that includes energy use and energy taxes for 42 countries that make up 80% of global energy use, and not coincidentally, 80% of global carbon emissions. For the latest iteration of this database, the OECD has also published Taxing Energy Use 2018: Companion to the Taxing Energy Use Database (February 2018), which pulls together some of the overall patterns. Here are a couple of the figures that caught my eye.

This figure shows energy taxes on a country-by-country basis. The energy taxes are measured in terms of euros per ton of carbon dioxide emitted. The little horizontal notches show the total tax on carbon emissions from energy use. Switzerland, Luxembourg, Germany, and Norway have the highest taxes, while the US and China are down at the low end, along with Brazil, Indonesia, and Russia. The variation across countries is considerable.

The dark blue balls show the tax on oil products, including gasoline. As is quickly apparent, the effective tax rate on carbon emissions from oil is higher than the tax rate on carbon emissions from other sources of energy. In particular, coal goes essentially untaxed in most countries. As a result, the OECD finds that 81% of carbon emissions are not taxed at all.
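The effective rates in these figures come from a simple conversion: a tax levied per litre of fuel can be restated per tonne of CO2 using the fuel's emission factor. Here's a sketch of that conversion; the excise rate is hypothetical, while the emission factor of roughly 2.3 kg of CO2 per litre of gasoline is a standard figure.

```python
# Sketch of how a fuel excise tax translates into an effective tax per tonne of CO2,
# which is how the OECD figures express energy taxes. The excise rate is hypothetical;
# the emission factor (~2.3 kg of CO2 per litre of gasoline burned) is a standard figure.

excise_eur_per_litre = 0.60              # hypothetical gasoline excise tax
co2_kg_per_litre = 2.3                   # CO2 emitted per litre of gasoline

litres_per_tonne_co2 = 1_000 / co2_kg_per_litre
effective_rate = excise_eur_per_litre * litres_per_tonne_co2

print(f"Effective carbon price: ~EUR {effective_rate:.0f} per tonne of CO2")
# Roughly EUR 260 per tonne -- which is why road fuels sit so far above other fuels
# in these charts, while untaxed coal implies an effective rate of zero.
```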


Here's a similar story, but told in terms of sectors of the economy. The first section of the figure shows taxes on energy in the road sector, and the blue lines show carbon taxes on diesel and gasoline. But the next sections show energy taxes in industrial uses, in residential and commercial uses, and in electricity generation. From a global perspective, energy pollution arising from these other sources is largely untaxed.


If burning fossil fuels is enough of an environmental problem to justify taxes on oil products in the road sector, then it seems quite peculiar to leave fossil fuels burned in other sectors essentially untaxed. Indeed, there is a literature on the "co-benefits" of reducing air pollution, which points out that there are immediate short-term health gains to doing so, as well as a longer-term reduction in the risks posed by carbon emissions.

Monday, February 19, 2018

Some Patterns for Same-Sex Households

A couple of decades back, it was hard to find systematic and reliable information on the economic and family characteristics of same-sex couples. However, the US Census Bureau has been collecting baseline data for some years now, and just released an annual report, Characteristics of Same-Sex Couple Households: 2005 to Present. The report is just a few statistical tables. I've taken one of the tables, cut some of the rows, left out the columns showing statistical significance, and cut the footnotes. But there's enough left to note some patterns.

  • For the US as a whole, the Census Bureau counted 56.4 million married opposite-sex couples, 6.8 million unmarried opposite-sex couples, and almost 900,000 same-sex couples, more or less evenly split between male-male and female-female couples. 
  • By average age, the same-sex couples are much closer to the married opposite-sex couples than to the unmarried opposite-sex couples. 
  • Same-sex couples are much more likely to be interracial than married opposite-sex couples, especially male-male couples. 
  • Same-sex couples tend to have higher education levels than married opposite-sex couples, who in turn have higher education levels than unmarried opposite-sex couples. 
  • In terms of employment, unmarried opposite-sex couples rank highest, followed by the same-sex couples, while married opposite-sex couples are lowest. 
  • Married and unmarried opposite-sex couples are about equally likely to have children in the household, while same-sex couples--especially male-male couples--are much less likely to have children in the household. 
  • In terms of median household income, the rank from highest to lowest is: male-male couples, married opposite-sex couples, female-female couples, and unmarried opposite-sex couples. 


A table like this one is useful for showing some factual patterns, but it would be extremely unwise to draw any conclusions about causality from it. For example, it is certainly not true that if the unmarried couples were to marry, they would immediately become older, better educated, and with higher paychecks.

[Note added 3/1/18: Those interested in this topic might also want to check this Research Report from the Tax Policy Center: "Same-Sex Married Tax Filers After Windsor and Obergefell," by Robin Fisher, Geoff Gee, and Adam Looney (February 28, 2018). It looks at evidence on same-sex married couples from tax returns, and how the patterns shifted after key Supreme Court rulings affecting same-sex marriage in 2013 and 2015.]

A decade ago, Dan A. Black, Seth G. Sanders, and Lowell J. Taylor wrote about "The Economics of Lesbian and Gay Families" in the Spring 2007 issue of the Journal of Economic Perspectives (21:2, pp. 53-70). They noted that many of the observed patterns of same-sex couples are consistent with economic reasoning. For example, if a smaller share of same-sex couples are likely to have children, this will influence patterns of work, willingness to live in urban areas, and other factors. They wrote:
"The economic approach to thinking about families—with its focus on deliberative choices made by people with reasonably stable preferences—stresses the importance of constraints. That is, when economists seeks to understand systematic differences in the behavior of gays, lesbians, and heterosexuals, we do not start with a hypothesis of innate differences in preferences, but instead seek to understand how differences in constraints systematically alter incentives faced by gay, lesbian, and heterosexual people. This paper uses the economic approach to organize our thinking about gay and lesbian families, and thus implicitly assumes that, aside from sexual preference, other preferences do not systematically differ by sexual orientation. Based on that approach we provide evidence pertaining to some initial questions. 
"For example, do differing biological constraints faced by gay, lesbian, and heterosexual couples affect choices over children? Clearly they do; there are far fewer children in lesbian and gay families than heterosexual-partnered families. Gays and lesbians also face higher costs for adoption, which further reduces opportunities to raise children. 
"Do differences in fertility (or anticipated fertility), again owing to differences in constraints, influence where people live? We argue that because gay couples face a relatively high price for children, they will consume more non-child goods, and this increased consumption often takes the form of locating in expensive highamenity locations like San Francisco or New York. 
"Do same-sex couples have patterns of household specialization that differ in predictable fashion from heterosexual couples? Available evidence indicates that lesbian women have higher wages and greater labor force attachment than heterosexual women, while the opposite situation pertains for gay men relative to heterosexual men. A variant of Becker’s (1991) model of household specialization would predict this pattern. 
"Of course, many other interesting and fundamental questions await analysis, from economics and other social science disciplines, as better data sources become available."

"The Great Advantage of the Americans Consists in Their Being Able to Commit Faults Which They may Afterward Repair"

Why is it that the United States (or democracies in general) can in many cases pass foolish laws or make misguided selections of leaders--but nonetheless continue to flourish over a sustained time? Alexis de Tocqueville tackles that question in this passage from Democracy in America (Chapter XIV: Advantages American Society Derive From Democracy – Part I, available various places on the web like here and here). It's food for thought for today's President's Day holiday.

Tocqueville argues that aristocracies "are infinitely more expert in the science of legislation" than democracies, and also that those "who are entrusted with the direction of public affairs in the United States are frequently inferior, both in point of capacity and of morality, to those whom aristocratic institutions would raise to power." But in Tocqueville's view, these problems are more than counterbalanced by the fact that democracy over time is responsive to "the well-being of the greatest possible number," and that a democracy has checks and balances. From this perspective, Tocqueville offers one of his famous lines: "[T]he great advantage of the Americans consists in their being able to commit faults which they may afterward repair." Here's the full passage:
"Democratic laws generally tend to promote the welfare of the greatest possible number; for they emanate from the majority of the citizens, who are subject to error, but who cannot have an interest opposed to their own advantage. The laws of an aristocracy tend, on the contrary, to concentrate wealth and power in the hands of the minority, because an aristocracy, by its very nature, constitutes a minority. It may therefore be asserted, as a general proposition, that the purpose of a democracy in the conduct of its legislation is useful to a greater number of citizens than that of an aristocracy. This is, however, the sum total of its advantages.
"Aristocracies are infinitely more expert in the science of legislation than democracies ever can be. They are possessed of a self-control which protects them from the errors of temporary excitement, and they form lasting designs which they mature with the assistance of favorable opportunities. Aristocratic government proceeds with the dexterity of art; it understands how to make the collective force of all its laws converge at the same time to a given point. Such is not the case with democracies, whose laws are almost always ineffective or inopportune. The means of democracy are therefore more imperfect than those of aristocracy, and the measures which it unwittingly adopts are frequently opposed to its own cause; but the object it has in view is more useful.
"Let us now imagine a community so organized by nature, or by its constitution, that it can support the transitory action of bad laws, and that it can await, without destruction, the general tendency of the legislation: we shall then be able to conceive that a democratic government, notwithstanding its defects, will be most fitted to conduce to the prosperity of this community. This is precisely what has occurred in the United States; and I repeat, what I have before remarked, that the great advantage of the Americans consists in their being able to commit faults which they may afterward repair.
"An analogous observation may be made respecting public officers. It is easy to perceive that the American democracy frequently errs in the choice of the individuals to whom it entrusts the power of the administration; but it is more difficult to say why the State prospers under their rule. In the first place it is to be remarked, that if in a democratic State the governors have less honesty and less capacity than elsewhere, the governed, on the other hand, are more enlightened and more attentive to their interests. As the people in democracies is more incessantly vigilant in its affairs and more jealous of its rights, it prevents its representatives from abandoning that general line of conduct which its own interest prescribes. In the second place, it must be remembered that if the democratic magistrate is more apt to misuse his power, he possesses it for a shorter period of time. But there is yet another reason which is still more general and conclusive. It is no doubt of importance to the welfare of nations that they should be governed by men of talents and virtue; but it is perhaps still more important that the interests of those men should not differ from the interests of the community at large; for, if such were the case, virtues of a high order might become useless, and talents might be turned to a bad account. ... The advantage of democracy does not consist, therefore, as has sometimes been asserted, in favoring the prosperity of all, but simply in contributing to the well-being of the greatest possible number.
"The men who are entrusted with the direction of public affairs in the United States are frequently inferior, both in point of capacity and of morality, to those whom aristocratic institutions would raise to power. But their interest is identified and confounded with that of the majority of their fellow-citizens. They may frequently be faithless and frequently mistaken, but they will never systematically adopt a line of conduct opposed to the will of the majority; and it is impossible that they should give a dangerous or an exclusive tendency to the government.
"The mal-administration of a democratic magistrate is a mere isolated fact, which only occurs during the short period for which he is elected. Corruption and incapacity do not act as common interests, which may connect men permanently with one another. A corrupt or an incapable magistrate will not concert his measures with another magistrate, simply because that individual is as corrupt and as incapable as himself; and these two men will never unite their endeavors to promote the corruption and inaptitude of their remote posterity. The ambition and the manoeuvres of the one will serve, on the contrary, to unmask the other. The vices of a magistrate, in democratic states, are usually peculiar to his own person."

Friday, February 16, 2018

Homeownership Rates: Some International Comparisons

High-income countries vary considerably in the share of households that own their own homes. The US rate of homeownership was about average by international standards 20-25 years ago, but now is below the average. Here are some facts from Laurie S. Goodman and Christopher Mayer, "Homeownership and the American Dream," in the Winter 2018 issue of the Journal of Economic Perspectives (32:1, pp. 31-58).
"The United States does not rank particularly high among other high-income countries when it comes to homeownership. Table 1 compares the homeownership rate from 1990 to 2015 across 18 countries where we have been able to obtain somewhat comparable data over the entire time period. The United States was ranked tenth in 1990, at the middle of the pack and close to the mean rate. By 2015, the United States was the fifth-lowest, its homeownership rate of 63.7 percent falling well below the 18-country average of 69.6 percent. Over the 1990–2015 period,  13 of the 18 countries increased their homeownership rates. The five countries with declines in homeownership were Bulgaria, Ireland, Mexico, the United Kingdom—and the United States.
"In a broader sample of countries, many of which have missing data for some of the years in question, the United States homeownership rate in 1990 was slightly below the median and mean of the 26 countries reporting data. By 2015, the US ranked 35 of 44 countries with reliable data, and was almost 10 percentage points below the mean homeownership rate of 73.9 percent."


There are a lot of possible reasons for this variation, including "culture, demographics, policies, housing finance systems, and, in some cases, a past history of political instability that favors homeownership." They offer an interesting comparison of how homeownership rates in the UK and Germany evolved after World War II (citations and footnotes omitted):
"For example, consider the evolution of homeownership in (the former) West Germany and the United Kingdom. Both countries pursued a similar policy of subsidizing postwar rental construction to rebuild their countries. However, in intervening years, German policies allowed landlords to raise rents to some extent and thus finance property maintenance while also providing “protections” for renters. In the United Kingdom, regulation strongly discouraged private rentals, whereas the quality of public (rental) housing declined with undermaintenance and obtained a negative stigma. As well, German banks remained quite conservative in mortgage lending. The result was that between 1950 and 1990, West German homeownership rates barely increased from 39 to 42 percent, whereas United Kingdom homeownership rates rose from 30 to 66 percent. Interestingly, anecdotes suggest that many German households rent their primary residence, but purchase a nearby home to rent for income (which requires a large down payment but receives generous depreciation benefits). This allows residents to hedge themselves against the potential of rent increases in a system that provides few tax subsidies to owning a home."
By international standards, the US has had fairly generous mortgage interest deductions. Moreover, Goodman and Mayer walk through the question of whether owning a home in the US typically makes financial sense. Of course, buying a home at the peak of housing prices circa 2006 and then trying to sell that home in 2008 is a losing proposition. But they argue that if Americans buy a home at a typical price and are willing and able to hold on to it for a sustained time--say, buying in 2002 and holding on through 2015 or later--then housing pays off pretty well in comparison to alternative investments. They write:
"Our results suggest that there remain very compelling reasons for most American households to aspire to become homeowners. Financially, the returns to purchasing a home in a “normal” market are strong, typically outperforming the stock market and an index of publicly traded apartment companies on an after-tax basis. Of course, many caveats are associated with this analysis, including variability in the timing and location of the home purchase, and other risks and tradeoffs associated with homeownership. There is little evidence of an alternative savings vehicle (other than a government-mandated program like Social Security) that would successfully encourage low-to-moderate income households to obtain substantial savings outside of owning a home. The fact that homeownership is prevalent in almost all countries, not just in the United States, and especially prevalent for people near retirement age, suggests that most households still view homeownership as a critical part of a life-cycle plan for savings and retirement."
Thus, owning a house is a kind of self-discipline that encourages saving. Also, buying a house in which to live is an investment that offers two kinds of returns: the financial return when you sell, and the fact that you can live inside your owner-occupied house, but not inside a stock portfolio.

Thursday, February 15, 2018

Rising Interest Rates, but Easier Financial Conditions

The Federal Reserve has been gradually raising its target interest rate (the "federal funds interest rate") for about two years, since early 2016. This increase has been accompanied by a controversy that I think of as a battle of metaphors. By raising interest rates, is the Fed stepping on the brakes of the economy? Or is it just easing off on the accelerator pedal?

To shed light on this controversy, it would be useful to have a measure of financial conditions in the US economy that doesn't involve one specific interest rate, but instead looks at actual factors like whether credit is relatively available or not, whether leverage is high or low, and whether those who provide loans are able to raise money with relatively low risk. Fortunately, the Federal Reserve Bank of Chicago has been putting together a National Financial Conditions Index based on exactly these components. Here's a figure of the data going back to the 1970s.



This figure needs a little interpreting. Zero means that financial conditions are average. Positive numbers show that financial conditions are tight or difficult. For example, you can see that in the middle of the Great Recession there is an upward spike, showing that financial conditions were a mess and it was hard to raise capital or get a loan at that time. Several previous recessions show a similar spike. On the other side, negative numbers mean that, by historical standards, financial conditions are fairly easy: it is relatively easy to raise finance and to receive loans.

As the Chicago Fed explains: "The National Financial Conditions Index (NFCI) and adjusted NFCI (ANFCI) are each constructed to have an average value of zero and a standard deviation of one over a sample period extending back to 1971. Positive values of the NFCI have been historically associated with tighter-than-average financial conditions, while negative values have been historically associated with looser-than-average financial conditions."
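
In other words, the NFCI is a standardized (z-scored) index: take the underlying series, subtract the historical mean, and divide by the historical standard deviation, so that zero is average and each unit is one standard deviation. Here's a minimal sketch of that step using made-up numbers rather than the actual Chicago Fed data (the real index is estimated from many underlying indicators, so this only conveys the flavor of the construction):

```python
import numpy as np

# Hypothetical raw readings of a financial-conditions indicator over ten periods.
raw = np.array([1.8, 2.1, 2.4, 2.0, 3.5, 5.2, 4.1, 2.6, 2.2, 1.9])

# Standardize: mean zero, standard deviation one over the sample.
index = (raw - raw.mean()) / raw.std()

for period, z in enumerate(index, start=1):
    label = "tighter than average" if z > 0 else "looser than average"
    print(f"period {period:2d}: {z:+.2f}  ({label})")
```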

The interesting thing about our present time is that although the Fed has been raising its target interest rate since early 2016, financial conditions haven't gotten tighter. Instead, the National Financial Conditions Index is lower now than it was back in early 2016; indeed, this measure is at its lowest level in about 25 years. At least for the last two years, any concerns that a higher federal funds interest rate would choke off finance and lending have been misplaced. Instead, having the Fed move the federal funds rate back close to its historically typical levels seems to have helped convince financial markets that the crisis was past and normality was returning, so it was a good time to provide finance or to borrow.

The National Financial Conditions Index can also be broken down into three parts: leverage, risk, and credit. The Chicago Fed explains: "The three subindexes of the NFCI (risk, credit and leverage) allow for a more detailed examination of the movements in the NFCI. Like the NFCI, each is constructed to have an average value of zero and a standard deviation of one over a sample period extending back to 1973. The risk subindex captures volatility and funding risk in the financial sector; the credit subindex is composed of measures of credit conditions; and the leverage subindex consists of debt and equity measures. Increasing risk, tighter credit conditions and declining leverage are consistent with increases in the NFCI. Therefore, positive values for each subindex have been historically associated with a tighter–than–average corresponding aspect of financial conditions, while negative values indicate the opposite."

Here's a figure showing the breakdown of the three components. Although the three lines do tend to rise and fall together, it seems clear that the blue line--showing the extent of leverage or borrowing--plays an especially large role in the fluctuations over the last 25 years. But right now, all three parts of the index are comfortably down in the negative numbers.



Patterns can turn, of course. Perhaps if the Federal Reserve increases the federal funds rate at its next scheduled meeting (March 20-21), financial conditions will worsen in some substantial way. But at least for now, the Federal Reserve's move to raise interest rates back from the near-zero rates that had prevailed for seven years has had the (somewhat paradoxical) effect of being accompanied by looser financial conditions. And concerns over raising those rates at least a little further seem overblown.

Wednesday, February 14, 2018

Nudge Policies

A considerable body of evidence suggests that people's decisions are affected by how a choice is presented, or what the default option looks like. There's a reason that grocery stores put some products at eye level and some near the floor, that staples like milk and eggs are often far from the door (so you have to walk through the store to grab them), and that the checkout counters have nearby racks of candy. There's a reason that gas stations sometimes advertise that gas is 5 cents a gallon less if you pay cash, but never advertise that gas is 5 cents more per gallon if you don't pay cash. There's a reason that many people have their employer automatically deduct money from paychecks for their retirement accounts, rather than trying to make their own monthly or annual payments to that same account.

Once you have admitted that people's decisions are affected by these kinds of factors, an obvious question is whether public policy might make use of how decisions are presented to influence behavior. A decade ago, in 2008, Richard H. Thaler and Cass R. Sunstein brought this possibility to public attention in their book Nudge: Improving Decisions About Health, Wealth, and Happiness. For a sense of what has happened since then, Bob Holmes offers an overview in "Nudging grows up (and now has a government job)," which is subtitled, "Ten years after an influential book proposed ways to work with — not against — the irrationalities of human decision-making, practitioners have refined and broadened this gentle tool of persuasion" (Knowable Magazine, February 1, 2018).

(A side note here: Knowable Magazine is a publication of the good folks who publish the Annual Review volumes familiar to academics. There are now about 50 of these volumes across a wide array of topics, from economics to entomology and from analytical chemistry to vision science. Like the title says, there is one volume per year on each subject, with each volume containing an array of papers written by prominent experts in the field describing what is happening in their area of research. The articles in the magazine take the Annual Review papers of the last few years as a starting point and then draw out common themes--with references to the underlying papers for those who want the gritty details. In short, the magazine is a good place to get up to speed on a very wide range of topics across the sciences and social sciences in a hurry.)

As one example of a "nudge" policy, consider organ donation. In opinion polls, people overwhelmingly support being an organ donor. But in practice, fewer than half of adults are actually signed up. A nudge policy might suggest that all drivers be automatically enrolled as organ donors--with a choice to opt out if they wish to do so. In other words, instead of framing the choice as "do you want to sign up to be an organ donor?", the choice would become "do you want to opt out of being an organ donor?" As long as the choice is presented clearly, it's hard to argue that anyone's personal autonomy is being violated by the alternative phrasing of the question. But the alternative phrasing would lead to more organ donors--and the United States alone currently has about 100,000 people on waiting lists for organ transplants.

Perhaps the best-known example is that employers can either offer workers the option to enroll in a retirement savings plan, or they can automatically enroll workers in a retirement savings plan, with a choice to opt out. Phrasing the choice differently has a big effect on behavior. And a lot of people who never quite got around to signing up for the retirement plan end up regretting that choice when it's too late in life to do much about it.

[A graph accompanying the article shows that automatic enrollment leads to almost 100% of users contributing to their retirement savings, while “opt in” plans are much less successful.]


Once you start thinking about nudging, possibilities blossom. Along with applications to organ donation and saving, Holmes discusses nudges related to home energy use, willingness to use generic drugs, the choice of when to start receiving Social Security, and requirements for a common, simpler format for information about mortgages or phone contracts so that they are easier to comprehend and compare.


Holmes reports: "At last count, more than 60 government departments and international agencies have established “nudge units” tasked with finding and pulling the right behavioral levers to accomplish everything from increasing retirement savings to boosting diversity in military recruits to encouraging people to get vaccinated against flu. The United Kingdom's Behavioural Insights Team, one of the first and largest such units, has expanded from a handful of people in 2010 to about 100 today, with global reach. Clearly, nudging has moved into the mainstream."

Three broad concerns discussed by Holmes seem worth noting. First, nudges can often be very specific to context and detail. For example, when people in the UK got a letter saying that most people pay their taxes on time, the number of tax delinquents fell sharply, but the same nudge in Ireland had no effect. Sometimes small details of a government notification--like whether the letter includes a smiley face or not--seem to have a substantial effect. 

Second, the total effect of nudge policies may be only moderate. But saying that a policy won't totally solve, say, poverty or obesity hardly seems like a reason to rule out the policy. 

Finally, there is a legitimate concern over the line between "nudge" policies and government paternalism. The notion that government is purposely acting in subtle ways to shift our choices is mildly disturbing. What if you just sort of forget to opt out of being an organ donor--but you actually have genuine personal objections to doing so? What if you just sort of forget to opt out of the retirement savings account, but you know that you have a health condition that is extremely likely to give you a shortened life expectancy? A nudge policy can be beneficial on average, but still lead to less desirable choices in specific cases. 

Moreover, what if the goals of a nudge policy start to reach beyond goals like adequate retirement saving or use of generic drugs, and start edging into more controversial settings? One can imagine nudge policies to affect choices about abortion, or gun ownership, or joining the military, or enrolling your child in a charter school. No matter which direction these nudges are pushing, they would certainly be controversial.

In the Annual Review of Psychology for 2016, Cass Sunstein contributed an essay titled, "The Council of Psychological Advisers." It begins: "Many nations have some kind of council of economic advisers. Should they also have a council of psychological advisers? Perhaps some already do." For many people, the idea of a government council of psychological advisers seeking to set up your choices in such a way as to influence the outcome, in ways you don't even know are happening, will sound fairly creepy.

Like many people, I like to think of myself as someone who considers options and makes choices. But the reality of nudge policies calls this perception into doubt. For many real-world life choices, a truly neutral presentation of the options does not exist. There will always be a choice about the order in which options are presented, how the options are phrased, what background information is presented, and what choice serves as the default option. Even when no official nudge policy exists, and all of these choices have been made for other reasons, the setting of the choice will often influence the choice that is made. It will influence me, and it will influence you, too. Thus, there isn't any escape from nudge policies. There is only a choice as to what kinds of nudges will happen--and a need for all of us to be aware of how we are being nudged and when we want to shove back by making other choices.

Tuesday, February 13, 2018

Network Effects, Big Data, and Antitrust Issues For Big Tech

You don't need to be a weatherman to see that the antitrust winds are blowing toward the big tech companies like Amazon, Facebook, Google, Apple, and others. But an immediate problem arises. At least under modern US law, being a monopoly (or a near-monopoly) is not illegal. Nor is making high profits illegal, especially when it is accomplished by providing services that are free to consumers and making money through advertising. Antitrust kicks in when anticompetitive behavior is involved: that is, a situation in which a firm takes actions that have the effect of blocking actual or potential competitors.

For example, the antitrust case against Microsoft that was settled back in 2001 wasn't about the firm being big or successful, but rather about the firm engaging in the anticompetitive practice of "tying" together separate products, and in this way trying to use its near-monopoly position in the operating systems that run personal computers to gain a similar monopoly position for its internet browser--and in this way to drive off potential competitors.

In the case of big tech companies, a common theory is that they hold a monopoly position because of what economists call "network effects." The economic theory of network effects started with the observation that certain products are only valuable if other people also own the same product--think of a telephone or fax machine. Moreover, the product becomes more valuable as the network gets bigger. When "platform" companies like Amazon or Facebook came along, network effects got a new twist. The idea became that if a website managed to gain a leadership position in attracting buyers and sellers (like Amazon, OpenTable, or Uber), or users and providers of content (like Facebook, YouTube, or Twitter), then others would be attracted to the website as well. Any potentially competing website might have a hard time building up its own critical mass of users, in which case network effects are acting as an anticompetitive barrier. 
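
To put a toy number on that critical-mass logic, here is a stylized illustration of my own (not anything from the antitrust literature or the essay discussed below). Suppose each user's value from a platform is its intrinsic quality plus a network benefit that grows with the number of other users. An entrant with a better product can still look worse to any individual user as long as its network is small:

```python
# Toy model: a user's value from a platform = intrinsic quality + network benefit,
# where the network benefit grows (with diminishing returns) in the number of users.
# All parameter values are made up for illustration.
import math

def user_value(quality: float, users: int, network_weight: float = 1.0) -> float:
    return quality + network_weight * math.log10(max(users, 1))

incumbent = user_value(quality=5.0, users=100_000_000)   # established platform
entrant   = user_value(quality=7.0, users=10_000)        # better product, tiny network

print(f"value of incumbent to a typical user: {incumbent:.1f}")  # 5 + 8 = 13.0
print(f"value of entrant to a typical user:   {entrant:.1f}")    # 7 + 4 = 11.0
# With these numbers the incumbent still wins, which is the sense in which
# network effects can act as an entry barrier. If users can "multihome"--try
# the entrant without abandoning the incumbent--the barrier is much weaker.
```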

Of course, the idea that an already-popular meeting place has an advantage isn't limited to the virtual world: many shopping malls and downtown areas rely on a version of network effects, too, as do stock markets, flea markets, and bazaars.

But while it's easy to sketch an argument about network effects in the air, the question of how network effects work in reality isn't a simple one. David S. Evans and Richard Schmalensee offer a short essay on "Debunking the `Network Effects' Bogeyman: Policymakers need to march to the evidence, not to slogans" (Regulation magazine, Winter 2017-18, pp. 36-39).


As they point out, lots of companies that might at the time have seemed to enjoy an advantage of "network effects" have faltered: for example, eBay looked like the network Goliath back in 2001, but it was soon overtaken by Amazon. They write:
"The flaw in that reasoning is that people can use multiple online communications platforms, what economists call `multihoming.'   A few people in a social network try a new platform. If enough do so and like it, then eventually all network members could use it and even drop their initial platform. This process has happened repeatedly. AOL, MSN Messenger, Friendster, MySpace, and Orkut all rose to great heights and then rapidly declined, while Facebook, Snap, WhatsApp, Line, and others quickly rose. ...
"Systematic research on online platforms by several authors, including one of us, shows considerable churn in leadership for online platforms over periods shorter than a decade. Then there is the collection of dead or withered platforms that dot this sector, including Blackberry and Windows in smartphone operating systems, AOL in messaging, Orkut in social networking, and Yahoo in mass online media ... 
"The winner-take-all slogan also ignores the fact that many online platforms make their money from advertising. As many of the firms that died in the dot-com crash learned, winning the opportunity to provide services for free doesn’t pay the bills. When it comes to micro-blogging, Twitter has apparently won it all. But it is still losing money because it hasn’t been very successful at attracting advertisers, which are its main source of income. Ignoring the advertising side of these platforms is a mistake. Google is still the leading platform for conducting searches for free, but when it comes to product searches—which is where Google makes all its money—it faces serious competition from Amazon. Consumers are roughly as likely to start product searches on Amazon.com, the leading e-commerce firm, as on Google, the leading search-engine firm."
It should also be noted that if network effects are large and block new competition, they pose a problem for antitrust enforcement, too. Imagine that Amazon or Facebook was required by law to split into multiple pieces, with the idea that the pieces would compete with each other. But if network effects really are large, then one or another of the pieces will grow to critical mass and crowd out the others--until the status quo re-emerges.

A related argument is that big tech firms have access to Big Data from many players in a given market, which gives them an advantage. Evans and Schmalensee are skeptical of this point, too. They write:
"Like the simple theory of network effects, the “big data is bad” theory, which is often asserted in competition policy circles as well as the media, is falsified by not one, but many counterexamples. AOL, Friendster, MySpace, Orkut, Yahoo, and many other attention platforms had data on their many users. So did Blackberry and Microsoft in mobile. As did numerous search engines, including AltaVista, Infoseek, and Lycos. Microsoft did in browsers. Yet in these and other categories, data didn’t give the incumbents the power to prevent competition. Nor is there any evidence that their data increased the network effects for these firms in any way that gave them a substantial advantage over challengers.
"In fact, firms that at their inception had no data whatsoever sometimes displaced the leaders. When Facebook launched its social network in India in 2006 in competition with Orkut, it had no data on Indian users since it didn’t have any Indian users. That same year Orkut was the most popular social network in India, with millions of users and detailed data on them. Four years later, Facebook was the leading social network in India. Spotify provides a similar counterexample. When Spotify entered the United States in 2011, Apple had more than 50 million iTunes users and was selling downloaded music at a rate of one billion songs every four months. It had data on all those people and what they downloaded. Spotify had no users and no data when it started. Yet it has been able to grow to become the leading source of digital music in the world. In all these and many other cases the entrants provided a compelling product, got users, obtained data on those users, and grew.
"The point isn’t that big data couldn’t provide a barrier to entry or even grease network effects. As far as we know, there is no way to rule that out entirely. But at this point there is no empirical support that this is anything more than a possibility, which one might explore in particular cases."
Evans and Schmalensee are careful to note that they are not suggesting that online platform companies should be exempt from antitrust scrutiny, and perhaps in some cases the network and data arguments might carry weight. As they write:
"Nothing we’ve said here is intended to endorse a “go-easy” policy toward online platforms when it comes to antitrust enforcement. ... There’s no particular reason to believe these firms are going to behave like angels. Whether they benefit from network effects or not, competition authorities ought to scrutinize dominant firms when it looks like they are breaking the rules and harming consumers. As always, the authorities should use evidence-based analysis grounded in sound economics. The new economics of multisided platforms provides insights into strategies these firms may engage in as well as cautioning against the rote application of antitrust analysis designed for single-sided firms to multisided ones.

"It is time to retire the simple network effects theory—which is older than the fax machine—in place of deeper theories, with empirical support, of platform competition. And it is not too soon to ask for supporting evidence before accepting any version of the “big data is bad” theory. Competition policy should march to the evidence, not to the slogans."
For an introduction to the economics of multi-sided "platform" markets, a useful starting point is Marc Rysman's "The Economics of Two-Sided Markets" in the Summer 2009 issue of the Journal of Economic Perspectives (23:3, 125-43). 

For an economic analysis of policy, the underlying reasons matter a lot, because they set a precedent that will affect future actions by regulators and firms. Thus, it's not enough to rail against the size of Big Tech. It's necessary to get specific: for example, about how public policy should view network effects or online buyer-and-seller platforms, and about the collection, use, sharing, and privacy protections for data. We certainly don't want the current big tech companies to stifle new competition or abuse consumers. But in pushing back against the existing firms, we don't want regulators to set rules that could close off new competitors, either.

Monday, February 12, 2018

Four Examples from the Automation Frontier

Cotton pickers. Shelf-scanners at Walmart. Quality control at building sites. Radiologists. These are just four examples of jobs that are being transformed and even sometimes eliminated by the newest wave of automated and programmable machinery. Here are four short stories from various sources, which of course represent a much broader transformation happening across the global economy.
_____________________________

Virginia Postrel discusses "Lessons From a Slow-Motion Robot Takeover: Cotton harvesting is now dominated by machines. But it took decades to happen" (Bloomberg View, February 9, 2018). She describes a "state-of-the-art John Deere cotton stripper." It costs $700,000, and harvests 100-120 acres each day. As it rolls across the field, "every few minutes a plastic-wrapped cylinder eight feet across plops out the back, holding as much as 5,000 pounds of cotton ready for the gin." Compared to the hand-picking of cotton some decades back, the machine replaces perhaps 1,000 workers.

One main lesson, Postrel emphasizes, is that big technological changes take time, in part because they often depend on a group of complementary innovations becoming available. In this case: "Gins had to install dryers, for instance, because machine-harvested cotton retained more moisture. Farmers needed chemical defoliants to apply before harvesting so that their bales wouldn’t be contaminated with leaf trash. Breeders had to develop shorter plants with bolls that emerged at the same time, allowing a single pass through the fields." Previous farm innovations often took decades to diffuse, too: as I've mentioned before on this website, that was the pattern for previous farm breakthroughs like the McCormick reaper and the tractor.

The high productivity of the modern cotton stripper clearly costs jobs, but--and this is easy for me to say--these were jobs that the US is better off without. Cotton-picking by hand was part of a social system built on generations of low-paid, predominantly black workers. And inexpensive clothing, made possible by cotton harvested more efficiently, is important for the budgets of low-income families.

____________________
Another example mentioned by Postrel is the case of robots at Walmart that autonomously roam the aisles, "identifying when items are out of stock, locating incorrect prices, and detecting wrong or missing labels." Erin Winick tells the story in "Walmart’s new robots are loved by staff—and ignored by customers," which is subtitled "Bossa Nova is creating robotic coworkers for the retail world" (MIT Technology Review, January 31, 2018).


Again, these robots take jobs that a person could be doing. But the article notes that the robots are quite popular among the Walmart staff, who name the robots, make sure the robots are wearing their official Walmart nametags, and introduce the robots to customers. From the employee point of view, the robots are taking over the dull and menial task of scanning shelves--and the employees are glad to hand over that task. Apparently some shoppers are curious about the robots and ask about them, but lots of other shoppers just ignore them and navigate around them.

_______________________

An even more high-tech example is technology that uses lidar-equipped robots to do quality control on construction sites. Evan Ackerman explains in "AI Startup Using Robots and Lidar to Boost Productivity on Construction Sites," which is subtitled "Doxel's lidar-equipped robots help track construction projects and catch mistakes as they happen" (IEEE Spectrum, January 24, 2018).

On big construction projects, the tradition has been that at the end of the workday, someone walks around and checks how everything is going. The person carries a clipboard and a tape measure, and spot-checks key measurements. This technology instead sends in a robot at the end of the day, programmed to crawl all around the building site. It's equipped with lidar ("light detection and ranging"), which essentially means using lasers to measure distances. It can check exactly what has been installed, and whether it is installed in precisely the right place. Perhaps the area that is going to be the top of a staircase is not precisely aligned with the bottom? The robot will know. Any needed changes or corrections can thus happen much sooner, rather than waiting until a problem becomes apparent later in the building process.
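
As a side note on the measurement itself: the basic time-of-flight idea behind lidar is that distance equals the speed of light times the round-trip travel time of the laser pulse, divided by two. Here is a minimal sketch of that arithmetic--the numbers are hypothetical, and this says nothing about Doxel's actual hardware or processing:

```python
# Time-of-flight distance: a laser pulse travels to a surface and back,
# so distance = (speed of light * round-trip time) / 2.
C = 299_792_458.0  # speed of light in meters per second

def lidar_distance(round_trip_seconds: float) -> float:
    return C * round_trip_seconds / 2

# A pulse that returns after about 33.4 nanoseconds corresponds to a surface
# roughly 5 meters away--the kind of measurement a scanning robot repeats
# millions of times to build a 3-D point cloud of the site.
print(f"{lidar_distance(33.4e-9):.2f} m")
```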

As Ackerman writes: "[I]t may or may not surprise you to learn that 98 percent of large construction projects are delivered (on average) 80 percent over budget and 20 months behind schedule. According to people who know more about these sorts of things than I do, productivity in the construction industry hasn’t improved significantly in 80 years." In a pilot study on one site, this technology raised labor productivity by 38%--because workers could fix little problems now, rather than bigger problems later.

But let's be honest: At least in the immediate short-run, this technology reduces the need for employment, too, because fewer workers would be needed to fix problems on a given site. Of course, ripping out previous work and reinstalling it again, perhaps more than once, isn't the most rewarding job, either. And the ultimate result is not just a building that is constructed more efficiently, but a building that is likely to be longer-lasting and perhaps safer, too.
___________________

A large proportion of hospital patients have some kind of imaging scan: X-ray, MRI, CAT, and so on. Diagnostic radiologists are the humans who look at those scans and interpret them. Could most of their work be turned over to computers, with perhaps a few humans in reserve for the tough cases?

Hugh Harvey offers a perspective in "Why AI Will Not Replace Radiologists" (Medium: Towards Data Science, January 24, 2018). As Harvey notes: "In late 2016 Prof Geoffrey Hinton, the godfather of neural networks, said that it’s “quite obvious that we should stop training radiologists." In contrast, Harvey offers arguments as to "why diagnostic radiologists are safe (as long as they transform alongside technology)." The parenthetical comment seems especially important to me. Technology is especially good at taking over routine tasks, and the challenge for humans is to work with that technology while doing the nonroutine. For example, even if the machines can do a first sort-through of images, many patients will continue to want a human to decide which scans should be done, and with whom the results can be discussed. For legal reasons alone, no institution is likely to hand over life-and-death personal decisions to an AI program completely.

In addition, Harvey points out that as AI makes it much cheaper to do diagnostic scans, a likely result is that scanning technologies will be used much more often, and will be more informative and effective. Harvey's vision is that radiologists of the future "will be increasingly freed from the mundane tasks of the past, and lavished with gorgeous pre-filled reports to verify, and funky analytics tools on which to pour over oceans of fascinating ‘radiomic’ data."
_____________________

The effects of technology will vary in important ways across jobs, and I won't twist myself into knots trying to draw out common lessons across these four examples. I will say that embracing these four technologies, and many more, is the only route to long-term economic prosperity.