Thursday, October 17, 2019

Some Income Tax Data on the Top Incomes

How much income do US taxpayers have at the very top? How much do they pay in taxes? The IRS has just published updated data for 2017 on "Individual Income Tax Rates and Tax Shares." Here, I'll focus on data for 2017 and "returns with Modified Taxable Income," which for 2017 basically means the same thing as returns with taxable income. Here are a couple of tables for 2017 derived from the IRS data.

The first table shows a breakdown for taxpayers from the top .001% to the top 5%. Focusing on the top .001% for a moment, there were 1,433 such taxpayers in 2017. (You'll notice that the numbers of taxpayers in the top .01%, .1%, and 1% rise by multiples of 10, as one would expect.)

The "Adjusted Gross Income Floor" tells you that to be in the top .001% in 2017, you had to have income of $63.4 million in that year. If you had income of more than $208,000, you were in the top 5%.

The total income for the top .001% was $256 billion. Of that amount, the total federal income tax paid was $61.7 billion. Thus, the average federal income tax rate paid was 24.1% for this group. The top .001% received 2.34% of all gross income, and paid 3.86% of all income taxes.
Of course, it's worth remembering that this table is federal income taxes only. It doesn't include state taxes on income, property, or sales.  It doesn't include the share of corporate income taxes that end up being paid indirectly (in the form of lower returns) by those who own corporate stock.
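The average rate quoted above is just total tax divided by total income; here is a minimal sketch of that arithmetic, using the dollar figures from the paragraph above:

```python
# A quick check of the average-tax-rate arithmetic for the top .001%,
# using the figures quoted above (in billions of dollars).
income = 256.0   # total adjusted gross income for the top .001% in 2017
tax = 61.7       # total federal income tax paid by the group

average_rate = tax / income
print(f"Average federal income tax rate: {average_rate:.1%}")
# prints "Average federal income tax rate: 24.1%"
```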

Here's a follow-up table showing the same information, but for groups ranging from the top 1% to the top 50%.
Of course, readers can search through these tables for what is of most interest to them. But here are a few quick thoughts of my own.

1) Those at the very tip-top of the income distribution, like the top .001% or the top .01%, pay a slightly lower share of income in federal income taxes than, say, the top 1%. Why? I think it's because those at the very top are often receiving a large share of their annual income in the form of capital gains, which are taxed at a lower rate than regular income.

2) It's useful to remember that many of those at the very tip-top are not there every year. It's not like they fall into poverty the next year, of course. But they are often making a decision about when to turn capital gains into taxable income, and they are people who--along with their well-paid tax lawyers--have some control over the timing of that decision and how the income will be received.

3) The average tax rate shown here is not the marginal tax bracket. The top federal tax bracket is 37% (setting aside issues of payroll taxes for Medicare and how certain phase-outs work as income rises). But that marginal tax rate applies only to an additional dollar of regular income earned. With deductions, credits, exemptions, and capital gains taken into account, the average rate of income tax as a share of total income is lower.

4) The top 50% pays almost all the federal income tax. The last row on the second table shows that the top 50% pays 96.89% of all federal income taxes. The top 1% pays 38.47% of all federal income taxes. Of course, anyone who earns income also owes federal payroll taxes that fund Social Security and Medicare, as well as paying federal excise taxes on gasoline, alcohol, and tobacco, and these taxes aren't included here.

5) This data is about income in 2017. It's not about wealth, which is accumulated over time. Thus, this data is relevant for discussions of changing income tax rates, but not especially relevant for talking about a wealth tax.

6) There's a certain mindset which looks at, say, the $2.3 trillion in total income for the top 1%, and notes that the group is "only" paying $615 billion in federal income taxes, and immediately starts thinking about how the federal government could collect a few hundred billion dollars more from that group, and planning how to spend that money. Or one might focus further up, like the 14,330 in the top .01%  who had more than $12.8 million in income in 2017. Total income for this group was $565 billion, and they "only" paid about 25% of it in federal income taxes. Surely they could chip in another $100 billion or so? On average, that's only about $7 million apiece in additional taxes for those in the top .01%. No big deal. Raising taxes on other people is so easy.
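The per-taxpayer arithmetic in this thought experiment is easy to reproduce; a quick sketch, where the $100 billion is the same hypothetical round number used above:

```python
# Back-of-the-envelope arithmetic for the top .01% thought experiment above.
taxpayers = 14_330        # returns in the top .01% in 2017
extra_revenue = 100e9     # hypothetical additional $100 billion in taxes

per_taxpayer = extra_revenue / taxpayers
print(f"About ${per_taxpayer / 1e6:.0f} million apiece")
# prints "About $7 million apiece"
```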

I'm not someone who spends much time weeping about the financial plight of the rich, and I'm not going to start now. It's worth remembering (again) that the numbers here are only for federal income tax, so if you are in a state or city with its own income tax, as well as paying property taxes and the other taxes at various levels of government, the tax bill paid by those with high incomes is probably edging north of 40% of total income in a number of jurisdictions.

But let's set aside the question of whether the very rich can afford somewhat higher federal income taxes (spoiler alert: they can), and focus instead on the total amounts of money available. The numbers here suggest that somewhat higher income taxes at the very top could conceivably bring in a few hundred billion dollars, even after accounting for the ability of those with very high income to alter the timing and form of the income they receive. To put this amount in perspective, the federal budget deficit is now running at about $800 billion per year. To put it another way, it seems implausible to me that plausible increases in taxes limited to those with the highest incomes would raise enough to get the budget deficit down to zero, much less to bridge the existing long-term funding gaps for Social Security or Medicare, or to support grandiose spending programs in the trillions of dollars for other purposes. Raising federal income taxes at the very top may be a useful step, but it's not a magic wand that can pay for every wish list.

Wednesday, October 16, 2019

Video Clips of Economists Explaining for Intro Econ Classes

I know a number of economics faculty who have been incorporating video clips into their classes. Sometimes it's part of a lecture presentation. Sometimes it's for students to watch before class. For intro students in particular, it can be a useful practice because it gives them a sense that they are being introduced to a universe of economists, not just to one professor and a textbook. The faculty member can also react to the video clip, and in this way offer students some encouragement to react and to comment as well--in a way that students might not feel comfortable doing if it meant confronting their own professor.

Amanda Bayer and Judy Chevalier have been compiling a list of video clips that may be useful for the standard intro econ class. It's available at the Diversifying Economic Quality ("Div.E.Q") website. Most are in the range of 3-6 minutes, although a few are longer or shorter. The economists are often talking about their own research, but in a way that lets the evidence be easily incorporated into an intro presentation.

Here are a few examples grabbed from lectures on micro topics. Kathryn Graddy talks about her work studying the Fulton Fish Market in New York City, and how even in a highly competitive and open environment, buyers sometimes pay different prices. (Graddy also wrote an article on this topic in the Spring 2006 issue of the Journal of Economic Perspectives.)

Petra Moser discusses her work showing that "copyright protection for 19th century Italian operas led to more and better operas being written, but the evidence also suggests that intellectual property rights may do more harm than good if they are too broad or too long-term."

Heidi Williams describes new data and empirical methodologies to study and advance technological change in health care markets.

Kerwin Kofi Charles looks at his empirical research on the extent to which prejudice leads to discrimination in the labor market and how it may affect wages of black workers.

Cecilia Rouse talks about her research on how the change to blind audition procedures for musicians auditioning for symphony orchestras led to more women being selected.

In short, the presenters in the video clips are top-quality economists describing their own research, in ways that spark interest among students. In addition, economics has an ongoing issue with attracting women and minorities. This list is heavily tilted toward presentations by economists from those groups, and there's some evidence that when intro students see economists who look more like them, they may feel more comfortable expressing interest in economics moving forward. 

Tuesday, October 15, 2019

Opinions about Semicolons

When you live your life as an editor, you develop strange preoccupations, like the semicolon. Thankfully, Cecilia Watson has removed any temptation I might have had to spend vast amounts of time on this subject by publishing Semicolon: The Past, Present, and Future of a Misunderstood Mark (Ecco, 2019).

If you're the sort of person who enjoys facts and commentary about punctuation, then welcome to our smallish club. For example, you will be able to answer the trivia question: What was the first book to use a semicolon, and who were its publisher, author, and typesetter? The semicolon originated in Venice in 1494, during a time of great innovation in symbols of punctuation. Many swirls and lines and dashes and other symbols of punctuation were invented, and mostly discarded. But apparently, a printer and publisher named Aldus Manutius was the first to combine the comma and colon, and thus to create the semicolon. The book was De Aetna, by Pietro Bembo, a dialogue about climbing Mount Etna. The Bolognese type designer Francesco Griffo created the shape of the semicolon.

I especially enjoyed some of the more grandiose denunciations of the semicolon. Watson's book reminded me of Paul Robinson's essay several decades ago in the New Republic, "The Philosophy of Punctuation: Against the semicolon; for the period" (April 26, 1980). Robinson wrote:
Semicolons are pretentious and overactive. These days one seems to come across them in every other sentence. “These days” is alarmist, since half a century ago the German poet Christian Morgenstern wrote a brilliant parody, “Im Reich der Interpunktionen,” in which imperialistic semicolons are put to rout by an “antisemikolonbund” of periods and commas. Nonetheless, if the undergraduate essays I see are representative we are in the midst of an epidemic of semicolons. I suspect that the semicolon is so popular because it is the first fancy punctuation mark students learn of, and they assume that its frequent appearance will lend their writing a properly scholarly cast. Alas, they are only too right. But I doubt that they use semicolons in their letters. At least I hope they don’t.
More than half of the semicolons one sees, I would estimate, should be periods, and probably another quarter should be commas. Far too often, semicolons, like colons, are used to gloss over an imprecise thought. They place two clauses in some kind of relation to one another, but relieve the writer of saying exactly what that relation is. Even the simple conjunction “and,” for which they are often a substitute, has more content, since it suggests compatibility or logical continuity. (“And,” incidentally, is among the most abused words in the language. It is forever being exploited as a kind of neutral vocalization connecting two things that have no connection whatever.)

In exasperation I have tried to confine my own use of the semicolon to demarking sequences that contain internal commas and therefore might otherwise be confusing. I recognize that my reaction is extreme. But the semicolon has become so hateful to me that I feel almost morally compromised when I use it.
Or if you prefer a pithier comment on the semicolon, here's one from Kurt Vonnegut's 2005 book, A Man Without A Country:
Here is a lesson in creative writing. First Rule: Do not use semicolons. They are transvestite hermaphrodites representing absolutely nothing. All they do is show you've been to college. 
June Casagrande puts the problem in more prosaic terms ("A Word, Please: Writers who use semicolons aren’t thinking about the reader," Los Angeles Times, July 23, 2015):
Here’s a fun thing you can do with your writing: Take any two simple, clear sentences and use a semicolon to mush them into one. For example, imagine you have a paragraph with just two sentences. “The alarm went off. Joe hit the snooze.” Through the magic of semicolons, you can make that just one sentence: “The alarm went off; Joe hit the snooze.” Isn’t that a great idea?
This works just as well for long sentences that you want to mush into super-long ones: “On a stormy morning in January of 2015, the alarm in Joe Jacobson’s swanky Santa Monica condo went off, ushering in the morning with an ugly screech; Joe, a hung-over stockbroker deeply immersed in a dark, disturbing dream about the woman who’d broken his heart, reached for the clock and pounded the snooze button with the force of a jackhammer.”
When you understand how semicolons work, you see that any pair of sentences can be made one. Then, when you’re done, those longer Frankenstein sentences can themselves be mushed together, and so on and so on, until every paragraph you write is just one long sentence! Neat, huh? ...

I’ll kill the facetiousness here and just be blunt: Semicolons are trouble. ... They’re favored by writers who are so proud they know how to use semicolons that they’ll happily shortchange readers to show off their knowledge. They’re also a popular crutch among writers who don’t know how to manage all the information they want to convey, so they use semicolons to cobble it all into a single monstrous sentence. ... 
So just about any time you have two sentences next to each other, you could make the case for using a semicolon to fashion them into one longer sentence. A lot of writers do. They do so not because they believe the results will be better for the reader. They do so because they forgot the reader. They saw an opportunity to put their punctuation savvy on proud display and forgot that, as every professional writer knows, short sentences are more digestible. That’s why, to me, semicolons cause more trouble than they’re worth.
Of course, the fact that a punctuation mark or a word can be misused doesn't mean that it can't be well-used. For example, Herman Melville's Moby Dick is perhaps the literary champion of semicolon use. Watson makes this case at some length, concluding: 
Moby Dick ... was ... around 210,000 words, but had 4,000 semicolons. That's one for every 52 words. The semicolons are Moby-Dick's joints, allowing the novel the freedom of movement it needed to tour such a large and disparate collection of themes.
She also points out that Martin Luther King Jr.'s Letter from a Birmingham Jail makes exquisite use of the semicolon, as a way of linking together and drawing out a painful meditation in a way that forces the reader to follow along without a full stop for breath. (For example, see the paragraph that starts, "We have waited for more than 340 years for our constitutional and God given rights.")

So yes, the semicolon can require care in handling. But it offers connectedness, continuation, and flexibility in situations when a period would create too definite and firm a break, while a comma isn't enough of a pause. Watson writes:
The semicolon represents a way to slow down, to stop, and to think; it measures time more meditatively than the catchall dash, and it can't  be chucked thoughtlessly into just any sentence in place of just any other mark. ... Semicoloned sentences cannot be dashed off.
The short book also offers an excuse to roam through other rules of grammar, like whether to split infinitives. Watson tends to be in favor of good writing, but against rules. Me, I'm in favor of good writing, but I'm also in favor of knowing the rules--in part so that you can know when it makes sense to break them.

Monday, October 14, 2019

Interview with Hal Varian: An Academic Goes to Google

Tyler Cowen conducts one of his rapid-fire, many-different-topics "Conversations with Tyler" with Hal Varian ("Hal Varian on Taking the Academic Approach to Business," June 19, 2019). Of course, Varian was a very prominent academic economist (and textbook author) for decades, but then 20 years ago became one of the first prominent economists to move over to the tech industry. He has now spent the last 20 years at Google. The conversation is full of highlights, but here are some snippets.

On textbook prices

COWEN: Why are textbooks still priced so high? Not all textbooks, but many.

VARIAN: They are priced remarkably high, and it’s a situation where I really would like to see lower prices because, obviously, there’s a durable goods monopoly problem there. As you have more and more competition from previous editions, each of the new editions has to differ markedly from the old edition to support the pricing model. But that’s getting harder and harder to do.

In fact, a friend of mine once told me, “Having a successful textbook is like being married to a very wealthy person you don’t like much anymore.”

On the high volume of trading in financial markets

COWEN: Why do people trade so much in financial markets? It doesn’t seem Bayesian rational, right? “Oh, you want to trade with me. I’ll take that off your back.” Yet trade volume is massive. ...

VARIAN: Yeah, I agree with your point that there is more trading than there should be by any reasonable model. Part of it is because people really do have differences of opinion, and they’re not fully Bayesian, so they may not find the other person’s opinion credible. We don’t really get the agreeing to disagree or, I guess, the converse. We don’t get this pushed into the model where you’ve got full agreement.

I actually did some work in this area several years ago, and it really came down to people do have a different model. We can’t agree on the model. If we don’t agree on the model, then we won’t get uniformity. ...

COWEN: I don’t trade, by the way, if you’re curious to know.

VARIAN: Well, that’s good. I would say, yeah, why trade? You shouldn’t be trading. We know that just as an empirical fact.

Billionaire envy

COWEN: Does the typical American envy more the billionaire or the next-door neighbor?

VARIAN: I think the next-door neighbor. It’s interesting, the billionaires are like our instance of royalty. The people want to see what they’re doing and where they’re going out and how they dress and all those kind of stuff. But I don’t think it’s actually envy.

By the way, I know a few billionaires, and there’s a lot of cost to being a billionaire in the sense that you can’t go out in public. Maybe you need bodyguards. Doing a trip here and there is a major undertaking because of the people that have to be informed. It’s much better to be a half a billionaire, I think, than to be a billionaire.

COWEN: Do the billionaires envy each other more than poorer people envy billionaires?

VARIAN: There seems to be something of a pecking order there. It depends. Not so much in tech, I would say, but maybe in finance and Wall Street guys, I think. It’s a motivation that’s different than we see on the West Coast.

Don't be too quick to read the experts

COWEN: You once wrote as advice to graduate students, “Don’t look at the literature too soon.” Is that still true?


COWEN: And why not?

VARIAN: Because if you look at the literature, you’ll see this completely worked-out problem, and you’ll be captured by that person’s viewpoint. Whereas, if you flounder around a little bit yourself, who knows? You might come across a completely different phenomenon. Now, you do have to look at the literature. I want to emphasize that. But it’s a good idea to wrestle with a problem a little bit on your own before you adopt the standard viewpoint.


On 5G

COWEN: How will 5G change my world?

VARIAN: Basically, you should think of 5G as Wi-Fi everywhere so that you’ve got a high-speed communication without having to go through any sort of special operations.

COWEN: But will it save me seven seconds a week, or will it deliver some new and exciting product that I haven’t thought of yet?

VARIAN: When you look at technologies like autonomous vehicles and things like that, they’re dealing with vast amounts of information. It’s often stored and manipulated locally, but sometimes it needs to be shared. Doing that kind of sharing will be easier if you have high-bandwidth 5G technology. But realistically speaking, for most of what you’re going to be doing, it will just save you a small amount of time.

Wooster, Ohio, and small cities

COWEN: Wooster, Ohio — I believe you’re from there. Is it economically inefficient? Should its population, over time, be reallocated to larger cities?

VARIAN: It’s funny you mention that because I grew up on a farm — apple orchard — outside of Wooster, Ohio, which is a town of about 20,000 people, and they have a nice college there. The College of Wooster. And it seems to be thriving. So, what happens when you look at these towns in the upper Midwest . . . if they have a hospital, they’ll probably survive. If they don’t have a hospital, they’re in big trouble.

COWEN: In Wooster, Ohio, do you think the value of Facebook and Google, relative to per capita income, is higher or lower than in, say, midtown Manhattan?

VARIAN: We have actually done a little research into this question but only on one aspect, namely, looking at online shopping. And I will tell you, if you live on a farm in the Midwest, you love online shopping. If you’re living in Manhattan, you’ve got a lot of opportunities to go shopping in the physical world. Those rural residents really like the internet for just that reason. The shopping, access to content, all sorts of things.

Saturday, October 12, 2019

When Hayek Opposed the Nobel Prize in Economics

The Nobel Prize in economics will be announced on Monday. Thus, it is perhaps an appropriate time to revisit this post from a couple of years ago.

As the pedants among us never tire of pointing out, the so-called "Nobel Prize in economics" is not literally a "Nobel prize." It was not established by the original bequest from Alfred Nobel, but instead was first given in 1969, with the prize money provided by a grant from Sweden's central bank as part of the 300th anniversary of the founding of the bank. Thus, the award is officially "The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel." (Justin Fox gives a nice brief overview of the history here.) Although I am pedantic in many matters, this doesn't happen to be one of them, so I will continue following the conventional usage in calling it the "Nobel prize in economics."

More interesting is that Friedrich Hayek, the co-winner of the sixth Nobel prize in economics (with Gunnar Myrdal), spoke at the prize banquet in 1974 as to why the establishment of the prize was mistaken. Here is Hayek's call to humility for economists from his speech at the Nobel banquet on December 10, 1974.
Your Majesty, Your Royal Highnesses, Ladies and Gentlemen,
Now that the Nobel Memorial Prize for economic science has been created, one can only be profoundly grateful for having been selected as one of its joint recipients, and the economists certainly have every reason for being grateful to the Swedish Riksbank for regarding their subject as worthy of this high honour.
Yet I must confess that if I had been consulted whether to establish a Nobel Prize in economics, I should have decidedly advised against it.
One reason was that I feared that such a prize, as I believe is true of the activities of some of the great scientific foundations, would tend to accentuate the swings of scientific fashion. This apprehension the selection committee has brilliantly refuted by awarding the prize to one whose views are as unfashionable as mine are.
I do not yet feel equally reassured concerning my second cause of apprehension. It is that the Nobel Prize confers on an individual an authority which in economics no man ought to possess.
This does not matter in the natural sciences. Here the influence exercised by an individual is chiefly an influence on his fellow experts; and they will soon cut him down to size if he exceeds his competence.
But the influence of the economist that mainly matters is an influence over laymen: politicians, journalists, civil servants and the public generally. There is no reason why a man who has made a distinctive contribution to economic science should be omnicompetent on all problems of society - as the press tends to treat him till in the end he may himself be persuaded to believe. One is even made to feel it a public duty to pronounce on problems to which one may not have devoted special attention.
I am not sure that it is desirable to strengthen the influence of a few individual economists by such a ceremonial and eye-catching recognition of achievements, perhaps of the distant past.
I am therefore almost inclined to suggest that you require from your laureates an oath of humility, a sort of hippocratic oath, never to exceed in public pronouncements the limits of their competence.
Or you ought at least, on conferring the prize, remind the recipient of the sage counsel of one of the great men in our subject, Alfred Marshall, who wrote: "Students of social science, must fear popular approval: Evil is with them when all men speak well of them".
Hayek is quoting a comment from Marshall which appears in "In Memoriam: Alfred Marshall," a speech given by A.C. Pigou in 1924 and published as part of a Memorials of Alfred Marshall volume in 1925 (pp. 81-90). The fuller quotation attributed to Marshall (on p. 89) is:
Students of social science, must fear popular approval: Evil is with them when all men speak well of them. If there is any set of opinions by the advocacy of which a newspaper can increase its sales, then the student who wishes to leave the world in general and his country in particular better than it would have been if he had not been born, is bound to dwell on the limitations and defects and errors, if any, in that set of opinions: and never to advocate them unconditionally even in ad hoc discussion. It is almost impossible for a student to be a true patriot and to have the reputation of being one in his own time.

Friday, October 11, 2019

The Economics Nobel: Who Might Have Won?

Next Monday the 51st Nobel prize in Economics will be awarded. Allen R. Sanderson and John J. Siegfried offer some perspective on the first 50 years of the economics award and some context from the other Nobel prizes in "The Nobel Prize in Economics Turns 50" (American Economist, 2019, 64:2, pp. 167–182). They offer background on the genesis of the prize, how its official name has evolved, and the ages, academic backgrounds, and big ideas that spanned several awards.

For those interested in more detail about past Nobel prize-winners in economics, I strongly recommend the Nobel website itself. Especially for winners in the last few decades, there is rich information about why the prize was given, often with an autobiographical essay from the winner, and of course the address given by the prize-winner.

Here, I'll pass along a couple of lists from Sanderson and Siegfried. The Nobel is only given to living people, so there are inevitably some economists worthy of consideration for the prize who died after 1969 without receiving the award. I was also intrigued by their list of how many Nobel prize-winners in economics had a direct tie to a previous winner.

Here's their list of economists who were alive in 1969, but died without receiving an economics Nobel, and "who certainly would have had advocates" for winning the prize.

  1. Frank Knight (1972). One of the founders of the “Chicago School of Economics,” he is best known for his 1921 book, Risk, Uncertainty and Profit.
  2. Alvin Hansen (1975). Macroeconomist and public policy adviser, often referred to as “the American Keynes,” he is most noted for development (with Hicks) of the “investment-savings” and “liquidity preference-money supply” (IS-LM) macroeconomics model.
  3. Oskar Morgenstern (1977). Princeton economist, coauthor of Theory of Games and Economic Behavior (1944, with John von Neumann).
  4. Joan Robinson (1983). Cambridge economist known for her work on monopolistic competition (The Economics of Imperfect Competition, 1933) and coining the term monopsony.
  5. Piero Sraffa (1983). Italian economist and considered the neo-Ricardian school founder owing to his Production of Commodities by Means of Commodities (1960).
  6. Fischer Black (1995), part creator of the Black–Scholes equation on options pricing, surely would have shared the 1997 Nobel with Scholes and Merton for devising a model for the dynamics of a financial market containing derivative investment instruments.
  7. Amos Tversky (1996). A cognitive psychologist, who undoubtedly would have shared the 2002 Nobel Prize with his friend and frequent collaborator Daniel Kahneman (and Vernon Smith).
  8. Zvi Griliches (1999). A student of Schultz and Arnold Harberger at Chicago, he is best known for work on technological change (the diffusion of hybrid corn in particular) and econometrics.
  9. Sherwin Rosen (2001). Labor economist with far-ranging contributions in microeconomics, he is perhaps best known for his 1981 American Economic Review article “The Economics of Superstars,” and his 1974 Journal of Political Economy article outlining how the market solves the problem of matching buyers and sellers of multidimensional goods.
  10. John Muth (2005). Doctoral advisee of Herbert Simon, he is considered—mainly formulated on the microeconomics side—as the originator of “rational expectations” theory.
  11. John Kenneth Galbraith (2006), long-time Harvard economist, was a prolific writer (The Affluent Society (1958), The New Industrial State (1967)), public intellectual, and liberal political activist.
  12. Anna Schwartz (2012). A National Bureau of Economic Research monetary and banking scholar, she was a coauthor with Milton Friedman of A Monetary History of the United States, 1867-1960 (1963).
  13. Martin Shubik (2018). A doctoral advisee of Morgenstern and collaborator with Nash, at Princeton, he was a long-time Yale professor of mathematical economics and outstanding game theorist.
To this list, one could certainly add more of their contemporaries, for example (in alphabetical order), Anthony Atkinson (2017), William Baumol (2017), Harold Demsetz (2019), Evsey Domar (1997), Rudiger Dornbusch (2002), Henry Roy Forbes Harrod (1978), Harold Hotelling (1973), Nicholas Kaldor (1986), Jacob Mincer (2006), Hyman Minsky (1996), and Ludwig von Mises (1973), among many others.
Sanderson and Siegfried also point out that a substantial number of Nobel laureates in economics had another laureate as a dissertation adviser. For example:

  • Jan Tinbergen was an adviser of Koopmans.
  • Paul Samuelson was an adviser for Klein and Merton.
  • Kenneth Arrow advised the research of Harsanyi, Spence, Maskin, and Myerson.
  • Wassily Leontief advised Samuelson, Schelling, Solow, and Smith.
  • Richard Stone supervised the research of both Mirrlees and Deaton.
  • Franco Modigliani was Shiller’s adviser.
  • James Tobin advised Phelps.
  • Merton Miller advised Eugene Fama’s dissertation, and Fama advised Scholes’s.
  • Robert Solow supervised the work of Diamond, Akerlof, Stiglitz, and Nordhaus.
  • Thomas Schelling was Spence’s adviser.
  • Edward Prescott advised Kydland, with whom he shared the 2004 Nobel Prize.
  • Eric Maskin advised Tirole.
  • Christopher Sims advised Hansen.
  • Simon Kuznets supervised both Friedman and Fogel.

Sanderson and Siegfried also pass along perhaps the most common joke about the economics Nobel prize:
[A]s a well-known quip has it, “economics is the only field in which two people can share a Nobel Prize for saying opposing things.” The 1972 Prizes awarded to Myrdal and Hayek spring to mind, as would the 2013 awards to Fama and Shiller.

Thursday, October 10, 2019

Foreign Exchange Markets: $6.6 Trillion Per Day

It's hard to understand at an intuitive level the difference between millions, billions, and trillions. I sometimes try to describe it this way. One million seconds in the past is about 11 days ago. One billion seconds is about 11,000 days, which is about 30 years ago. One trillion seconds is about 30,000 years ago, which would be the time period when early rock-paintings were done, when the main human inventions of the time were the oven, pottery, and twisting fibers to make rope.
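As a rough check on these comparisons, here is a quick back-of-the-envelope conversion (a sketch, assuming a 365.25-day year):

```python
SECONDS_PER_DAY = 60 * 60 * 24          # 86,400
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25

# How far back in time each quantity of seconds reaches
for label, n in [("million", 10**6), ("billion", 10**9), ("trillion", 10**12)]:
    days = n / SECONDS_PER_DAY
    years = n / SECONDS_PER_YEAR
    print(f"one {label} seconds is about {days:,.0f} days, or {years:,.1f} years")
```

One million seconds comes out to about 11.6 days, one billion to about 31.7 years, and one trillion to about 31,700 years, consistent with the rounded figures in the text.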

So the difference between a million and a billion is the difference between what happened the weekend before last, and what happened in 1989. The difference between a billion and a trillion is the difference between how long ago it was that world headlines were about Tiananmen Square demonstrations and the Berlin Wall coming down, compared with how long ago human culture was in its hunter-gatherer cave-dwelling stage.

Mull over that comparison with this fact: foreign exchange markets trade $6.6 trillion per day, up from $5.1 trillion per day in the previous survey. The authoritative statistics on foreign exchange markets come from the Triennial Central Bank Survey conducted by the Bank for International Settlements. The data for the latest round of the survey, completed in April 2019, is now available (September 16, 2019).

At first glance, this volume of trading seems like it must be a mistake. Total world exports of goods and services are about $25 trillion per year. Add in the investment flows across borders for 2018, which were $2 trillion in foreign direct investment, $1.9 trillion in portfolio investment, and $2 trillion in other financial transactions, mainly bank loans (according to UNCTAD). But these annual totals don't come anywhere close to the $6.6 trillion per day traded in foreign exchange markets.
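To see the scale mismatch directly, annualize the daily turnover and compare it with the annual trade and investment flows (a rough sketch; the 250 trading days per year is my assumption, not a BIS figure):

```python
daily_fx_turnover = 6.6   # $ trillion per day (BIS, April 2019)
trading_days = 250        # assumed trading days per year

annual_fx_turnover = daily_fx_turnover * trading_days  # $1,650 trillion/year

# Annual cross-border flows ($ trillions), from the figures above
exports = 25.0       # world exports of goods and services
fdi = 2.0            # foreign direct investment
portfolio = 1.9      # portfolio investment
other = 2.0          # other financial transactions, mainly bank loans
trade_and_investment = exports + fdi + portfolio + other  # $30.9 trillion/year

print(f"FX turnover:        ~${annual_fx_turnover:,.0f} trillion/year")
print(f"Trade + investment: ~${trade_and_investment:,.1f} trillion/year")
print(f"Ratio: roughly {annual_fx_turnover / trade_and_investment:.0f}x")
```

On these assumptions, annual foreign exchange turnover is on the order of 50 times the combined flows of trade and cross-border investment.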

The obvious conclusion is that most foreign exchange trading isn't about facilitating exports and imports, nor about facilitating flows of international investment. Instead, it's about financial transactions that seek to address the risks of shifts in foreign exchange rates, or to profit directly from those shifts. For example, about half the foreign exchange market is swaps contracts (that is a contract where one party is owed a certain amount in one currency, over some period of time, and another party is owed an amount in a different currency, over some different period of time, and they agree to swap these payments).

There's also an interesting insight into how foreign exchange markets work from looking at what currencies are used the most. It turns out that the US dollar is involved in 88% of all foreign exchange deals, either as the currency being sold or being bought. Again, this isn't about facilitating trade or investment related to the US economy. Instead, it's because if there is a deal happening between, say, the Brazilian real and the South African rand, the usual pattern of foreign exchange markets behind the scenes would be to convert both currencies into US dollars, and then to convert out to the desired currency. This pattern is one dimension of what is meant when people say that the US dollar is the global "reserve currency."

With regard to other currencies, the BIS report notes:
The US dollar retained its dominant currency status, being on one side of 88% of all trades. The share of trades with the euro on one side expanded somewhat, to 32%. By contrast, the share of trades involving the Japanese yen fell some 5 percentage points, although the yen remained the third most actively traded currency (on one side of 17% of all trades). ... As in previous surveys, currencies of emerging market economies (EMEs) again gained market share, reaching 25% of overall global turnover. Turnover in the renminbi, however, grew only slightly faster than the aggregate market, and the renminbi did not climb further in the global rankings. It remained the eighth most traded currency, with a share of 4.3%, ranking just after the Swiss franc.

Given the size and complexity of foreign exchange markets, and their potentially very rapid reaction times, it's little wonder that they often move in ways which have long been hard for economists to explain. Every now and then, I post on the bulletin board beside my office a quotation from Kenneth Kasa back in 1995: "If you asked a random sample of economists to name the three most difficult questions confronting mankind, the answers would probably be: (1) What is the meaning of life? (2) What is the relationship between quantum mechanics and general relativity? and (3) What's going on in the foreign exchange market. (Not necessarily in that order)."

Wednesday, October 9, 2019

Waste and Worse in US Health Care Spending

About 25% of all US health care spending is wasted, according to an article just published in the Journal of the American Medical Association by William H. Shrank, Teresa L. Rogstad, and Natasha Parekh ("Waste in the US Health Care System: Estimated Costs and Potential for Savings," October 7, 2019). They write:
In this review based on 6 previously identified domains of health care waste, the estimated cost of waste in the US health care system ranged from $760 billion to $935 billion, accounting for approximately 25% of total health care spending ...  Computations yielded the following estimated ranges of total annual cost of waste: failure of care delivery, $102.4 billion to $165.7 billion; failure of care coordination, $27.2 billion to $78.2 billion; overtreatment or low-value care, $75.7 billion to $101.2 billion; pricing failure, $230.7 billion to $240.5 billion; fraud and abuse, $58.5 billion to $83.9 billion; and administrative complexity, $265.6 billion.
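The six category ranges quoted above do sum to the headline figures; a quick check (all figures in $ billions):

```python
# (low, high) estimates from Shrank, Rogstad, and Parekh (2019)
waste = {
    "failure of care delivery":       (102.4, 165.7),
    "failure of care coordination":   (27.2, 78.2),
    "overtreatment or low-value care": (75.7, 101.2),
    "pricing failure":                (230.7, 240.5),
    "fraud and abuse":                (58.5, 83.9),
    "administrative complexity":      (265.6, 265.6),  # single point estimate
}

low = sum(lo for lo, hi in waste.values())
high = sum(hi for lo, hi in waste.values())
print(f"total estimated waste: ${low:.1f}B to ${high:.1f}B")
```

The low and high estimates total $760.1 billion and $935.1 billion, matching the study's $760 billion to $935 billion range.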
This isn't a new problem. For a few decades now, I've been seeing estimates that up to one-third of US health care spending is wasted. Also, the US estimate isn't actually all that different from international estimates: an OECD study a couple of years ago estimated that "around one-fifth of health expenditure makes no or minimal contribution to good health outcomes."  But since the US spends about 18% of GDP on health care, while other high-income countries spend about 11% of GDP on health care, wasteful health care spending hurts even more in the US.
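Expressed as a share of GDP, the difference is stark. A back-of-the-envelope calculation using the approximate figures above:

```python
us_health_share = 0.18       # US health spending as a share of GDP
other_health_share = 0.11    # other high-income countries, share of GDP
us_waste_fraction = 0.25     # share of US health spending wasted (JAMA estimate)
oecd_waste_fraction = 0.20   # OECD's "around one-fifth"

us_waste_gdp = us_health_share * us_waste_fraction        # share of US GDP wasted
other_waste_gdp = other_health_share * oecd_waste_fraction

print(f"US:    wasted health spending is about {us_waste_gdp:.1%} of GDP")
print(f"Other: wasted health spending is about {other_waste_gdp:.1%} of GDP")
```

On these rough numbers, wasted health care spending amounts to about 4.5% of GDP in the US, roughly double the 2.2% of GDP implied for other high-income countries.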

JAMA also offers some comments on these results. I was struck by Donald M. Berwick's essay: "Elusive Waste: The Fermi Paradox in US Health Care." 

In 1950, at lunch with 3 colleagues, the great physicist Enrico Fermi is alleged to have blurted out a question that became known as “the Fermi paradox.” He asked, “Where is everybody?” referring to calculations suggesting that extraterrestrial life forms are abundant in the universe, certainly abundant enough that many of them should have by then visited our solar system and Earth. But, apparently, none had.
Health care in the United States has its own version of the Fermi paradox. It involves the strong evidence of massive waste ... With US health care expenditures exceeding $3.5 trillion annually, 25% of the total would amount to more than $800 billion per year of waste (more than the entire 2019 federal defense budget, and as much as all of Medicare and Medicaid combined). Even 5% of the total cost is more than $150 billion per year (almost 3 times the budget of the US Department of Education).
That is worth repeating: by many pedigreed estimates, annual waste in US health care equals or exceeds the entire annual cost of Medicare plus Medicaid.
But, to paraphrase Fermi, “Where is it?” ... The paradox is that, in an era of health care when no dimension of performance is more onerous than high cost, when many hospitals and clinicians complain that they are losing money, when individuals in the United States are experiencing financial shock at absorbing more and more out-of-pocket costs for their care, and when governments at all levels find that health care essentially confiscates the money they need to repair infrastructures, strengthen public education, build houses, and upgrade transportation—in short, in an era when health care expenses are harming everyone—as much as $800 billion in waste (give or take a few hundred billion) sits untapped as a reservoir for relief. Why? ... 
What Shrank and colleagues and their predecessors call “waste,” others call “income.” People and organizations (for-profit and not-for-profit) making big incomes under current delivery models include very powerful corporations and guilds in a nation that tolerates strong influences on elections by big donors. Those donors now include corporations whose “right” to “free speech” as “persons” has been certified by the US Supreme Court, conferring on them an unlimited license to support political candidates financially. When big money in the status quo makes the rules, removing waste translates into losing elections. The hesitation is bipartisan. For officeholders and office seekers in any party, it is simply not worth the political risk to try to dislodge even a substantial percentage of the $1 trillion of opportunity for reinvestment that lies captive in the health care of today, even though the nation’s schools, small businesses, road builders, bridge builders, scientists, individuals with low income, middle-class people, would-be entrepreneurs, and communities as a whole could make much, much better use of that money.
I was also struck by the comments from Karen E. Joynt Maddox and Mark B. McClellan in their short essay, "Toward Evidence-Based Policy Making to Reduce Wasteful Health Care Spending."  They argue that various "incentive-based" or "value-based" systems that purport to provide incentives to reduce wasteful health care spending don't work all that well. These schemes have been complicated, not aligned across providers, without buy-in from clinicians, costly to implement--and in general have not led to any broad redesign of care. They sketch an alternative path to health care reform that looks like this:

The current piecemeal approach, which imposes complexity and additional implementation costs on clinicians, hospitals, and health systems, should evolve to a simpler and more holistic approach to value-based payment. Primary care should move toward a capitated payment system, with a streamlined set of quality measures and financial supports for keeping people healthy and out of the hospital. Specialty care will likely need a combination of a primary care–like chronic disease management track and add-on “bundles” for procedures, with quality measures relevant to specialized care comprising the core of quality measurement. Hospital care should be structured within such bundles where feasible, with clear quality measures around safety, and the move of accountable care organizations from fee-for-service–based models to organizations paid on a person level should continue.
Finally, although it's not part of this set of JAMA articles, I'll add that the issues of the US health care system go beyond wasted opportunities to make better use of resources. There have been prominent studies for a couple of decades now suggesting that medical errors in the US lead to the deaths of either tens of thousands or even several hundred thousand people every year. As one partial measure, the Agency for Healthcare Research and Quality (which is part of the US Department of Health and Human Services) publishes a "scorecard on hospital-acquired conditions." The AHRQ scorecard issued in January 2019 offers this good news/bad news report on "hospital-acquired conditions," or HACs:
The 2014 rate started at 99 HACs per 1,000 hospital discharges and is estimated at 86 HACs per 1,000 discharges for 2017. ... Based on the HAC reductions seen in 2015, 2016, and 2017 compared with 2014, AHRQ estimates a total of 910,000 fewer HACs occurred than if the 2014 rates had persisted through 2017. These HAC reductions lead to estimates of approximately $7.7 billion in costs saved and approximately 20,500 HAC-related inpatient deaths averted from 2015 through 2017. Data reported in 2016 estimated that from 2011 through 2014, HAC reductions totaled 2.1 million, and these reductions resulted in approximately $19.9 billion in cost savings and 87,000 fewer HAC-related inpatient deaths. 
So the good news is 87,000 fewer deaths along with other prevented health and monetary costs since 2010. The bad news is that the US health care system was causing those deaths and costs up through 2010, and with 86 hospital-acquired conditions per 1,000 discharges in 2017, it's still causing high costs. Of course, one could also add other costs with a close linkage to health care, like prescription drug overdoses.
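The AHRQ scorecard numbers can be put in percentage terms with simple arithmetic on the quoted rates:

```python
rate_2014 = 99   # hospital-acquired conditions per 1,000 discharges
rate_2017 = 86

decline = (rate_2014 - rate_2017) / rate_2014
print(f"HAC rate fell about {decline:.1%} from 2014 to 2017")

# Even after the improvement, 86 per 1,000 means roughly
# 1 in 12 hospital discharges still involves a hospital-acquired condition.
print(f"that is still about 1 in {round(1000 / rate_2017)} discharges")
```

A 13% decline over three years is real progress, but the remaining rate underscores the point that the system continues to impose high costs.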

A huge amount of US public attention has focused on the issue of providing health insurance coverage and health care to all, and rightly so. But there should be enough space in our brains to also consider the issues of high costs and wasteful healthcare spending and reducing the health costs that are being created by the US healthcare system.

Tuesday, October 8, 2019

The Jobs Problem in India

One of India's biggest economic challenges is how new jobs are going to be created. Venkatraman Anantha Nageswaran and Gulzar Natarajan explore the issue in "India’s Quest for Jobs: A Policy Agenda" (Carnegie India, September 2019). They write:
The Indian economy is riding the wave of a youth bulge, with two-thirds of the country’s population below age thirty-five. The 2011 census estimated that India’s 10–15 and 10–35 age groups comprise 158 million and 583 million people, respectively. By 2020, India is expected to be the youngest country in the world, with a median age of twenty-nine, compared to thirty-seven for the most populous country, China. In the 2019 general elections, the estimated number of first-time voters was 133 million. Predictably, political parties scrambled to attract youth voters. It is therefore not surprising that, according to several surveys, the parties’ primary concern was job creation. The burgeoning youth population has led to an estimated 10–12 million people entering the workforce each year.6 In addition, the rapidly growing economy is transitioning away from the agricultural sector, with many workers moving into secondary and tertiary sectors. Employing this massive supply of labor is, perhaps, the biggest challenge facing India
India's jobs in the future aren't going to be in agriculture: as that sector modernizes, it will need fewer workers, not more. A common assumption in the past was that India's new jobs would be in big factories, like giant assembly plants or manufacturing facilities. But manufacturing jobs all around the world are under stress from automation, and with trade tensions high around the world, building up an export-oriented network of large factories and assembly plants doesn't seem likely. As Nageswaran and Natarajan point out, most of India's employment is concentrated in very small micro-firms in informal, unregulated businesses. The challenge is to add employment in small and medium formal firms, often in industries with a service orientation.
The Sixth Economic Census of India, 2013, which combines all types of enterprises, shows that India had 58.5 million enterprises, which employed 131.9 million workers. Nonemployer, or own account firms, constituted 71.7 percent of these enterprises and 44.3 percent of workers. Further, 55.86 million (or 95.5 percent) of all the enterprises employed just 1–5 workers, 1.83 million (3.1 percent) employed 6–9 workers, and just 0.8 million (1.4 percent) employed ten or more workers ... Further, comparing India’s formal and informal manufacturing establishments to Mexico and Indonesia reveals the true scale of India’s challenge within this sector. Enterprises with fewer than ten workers make up nearly 70 percent of the employment share in India, compared to over 50 percent in Indonesia and just 25 percent in Mexico.
To put this in a bit of context, India's Census is finding employment of 131.9 million workers, mostly in very small firms. But India as a country has a workforce of over 500 million, and it's growing quickly. The other workers are either working for subsistence, in agriculture or cities, or in the informal economy. 

Why has India had such a hard time in creating new small- and medium-sized firms? Part of the answer is a heavy hand of government regulation. 
India is often considered one of the most difficult places to start and run a business. ... One of the biggest hurdles that potential enterprises in India face is the complexity of the registration system—all enterprises must register separately with multiple entities of the state and central governments. Under the state government, the enterprise has to register with the labor department (Shop and Establishment Act), the local government (municipal or rural council acts), and the commercial taxes department for indirect tax assessments. There are also several state-specific legislations—the labor department alone has thirty-five legislations. 
Under the central government, enterprises must register with the Ministry of Corporate Affairs for incorporation (Companies Act), the Central Board of Direct Taxes for direct tax assessments, and the labor department’s Employees’ Provident Fund Organization (EPFO) and Employees’ State Insurance Corporation (ESIC). Further, there are registrations specific to sector or occupational categories—for example, manufacturing enterprises with more than ten employees must register with the labor department under the Factories Act.
Based on the application or software employed for each registration, employers also must possess a multitude of numbers: for example, a labor identification number—used to register on the Shram Suvidha Portal, the Ministry of Labor and Employment’s single window for reporting compliances; a company registration number; and a corporate permanent account number. Employees must possess an Aadhaar biometric identity number, an EPFO member number, an ESIC identity number, and a universal account number.
According to current labor laws, service enterprises and factories must maintain twenty-five and forty-five registers, respectively, and file semi-annual and annual returns in duplicate and in hard copy. Furthermore, regular paperwork tends to be convoluted; salary and attendance documents should be simple but instead require tens of entries. In addition to the physical requirements of complying with these regulations—making payments, designing human resource strategies, or meeting physical infrastructure standards—enterprises also have onerous periodic reporting requirements. All these requirements add up to impose prohibitive costs that reduce the success of these businesses.
This regulatory environment offers a powerful incentive for small firms to remain informal, off-the-record, under-the-radar. A related issue arises because payroll taxes in India are very high--for workers in the formal sector, that is.
Manish Sabharwal, the chairman of TeamLease Services, a staffing company, wrote that salaries of 15,000 rupees a month end up as only 8,000 rupees after all deductions, from both the employer and employee sides. The employer makes deductions for pensions, health insurance, social security, and even a bonus, which are statutorily payable in India and would otherwise increase costs to companies. Consequently, the take-home pay for a worker earning less than 15,000 rupees a month is only 68 percent of their gross wages. Lower-wage workers are far more affected than higher-wage workers, who are protected by the maximum permissible deductions, which lowers the amount of deductions from their gross salary. Further, though international comparisons are often difficult and misleading, a cursory examination suggests that India’s deductions are among the highest in the world and are a deterrent to businesses starting or becoming formal.
Yet another issue is that there are many programs providing support and finance to very small firms. An unintended result is that these firms have an incentive to remain small--so they don't have to give up this support.
Gursharan Bhue, Nagpurnanand Prabhala, and Prasanna Tantri point out that firms are willing to forgo growth in order to retain their access to finances. That is, when certain easier financing access is provided to firms below a certain threshold (say, SME firms), they prefer to forgo growth opportunities that would allow them to cross this threshold: “firms that near the threshold for qualification slow down their investments in plant and machinery, other capital expenditure” and experience slower growth in manufacturing activity and output. The authors also point out that when banks are put under pressure to lend to micro, small, and medium enterprises, they fear the fallout of not meeting those lending targets and consequently encourage their borrowers to stay small.
Nageswaran and Natarajan argue that most of India's informal firms are "subsistence" firms, unlikely to grow. They cite evidence from Andrei Shleifer and Rafael La Porta that few informal firms ever make a transition to formal status. Instead, the goal needs to be to have more firms that are "born formal," and which are run by entrepreneurs who have a vision of how the firm can grow and hire. In India, this doesn't seem to be happening. They write:

Chang-Tai Hsieh and Peter Klenow’s latest work, “The Life Cycle of Plants in India and Mexico,” is instructive in its exploration of the life-cycle dynamic of firm growth across countries. They find that, in a sample of eight countries including the United States and Mexico, India is the only country where the average number of employees of firms (in the manufacturing sector) ages 10–14 years is less than that of firms ages 1–5 years. It is generally expected that, as firms remain in business for longer periods, they would naturally employ more workers. In India, however, the inverse has proven true—employment in older firms is less than in younger firms. Hsieh and Klenow also find that the typical Indian firm stagnates or declines over time, with only the handful that reach around twenty years of age showing very slight signs of growth.
What's to be done? As is common with emerging market economies, the list of potentially useful policies is a long one. Reforming government regulations, payroll taxes, and financial incentives with the idea of supporting small-but-formal businesses, and not hindering their growth, is one step. Nageswaran and Natarajan also point out that the time needed to fill out tax forms is especially onerous in India.

Ongoing increases in infrastructure for transportation, energy, and communications matter a lot. Along with overall support for rising education levels, it may be useful to take the idea of an agricultural extension service--which teaches farmers how to use new seeds or crop methods--and create a "business extension service" that helps teach small firms the basic managerial techniques that can raise their productivity. India's government might take steps to help establish an information framework for a national logistics marketplace, which would help organize and smooth the movement of business inputs and consumer goods around the country: "Amounting to 13 percent of India’s GDP, the country’s logistics costs are some of the highest in the world."

But in a broad sense, the job creation problem in India comes down to a more fundamental shift in point of view. Politicians tend to love situations where they can claim credit: a giant new factory opens, or a new power plant. Or at a smaller scale,  politicians will settle for programs that sprinkle subsidies among smaller firms, so those firms that receive such benefits can be claimed as a success story. But if the goal for India's future employment growth is to have tens of millions of firms started by well-educated entrepreneurs, this isn't going to happen with firm-by-firm direction and subsidies allocated by India's central or state governments. Instead, it requires India's government to be active and aggressive in creating a general business environment where such firms can arise of their own volition, and it's a hard task for any government to get the right mix of acting in some areas while being hands-off in others.

Monday, October 7, 2019

The Pedagogical Lessons and Tradeoffs of Online Higher Education

The Fall 2019 issue of Daedalus is on the subject "Improving Teaching: Strengthening the College Learning Experience," edited by Sandy Baum and Michael S. McPherson. There's a lot to digest in the issue, and I'll list the table of contents below. But I found myself especially interested by the comments on online education in "The Human Factor: The Promise & Limits of Online Education," by Baum and McPherson, as well as in "The Future of Undergraduate Education: Will Differences across Sectors Exacerbate Inequality?" by  Daniel I. Greenstein.

It was seven years ago, back in 2012, that companies like Coursera, Udacity, and edX announced their plans to revolutionize higher education with "massive open online courses," or MOOCs. While the use of online tools has clearly spread, it seems fair to say that the revolution has not yet arrived. Where does online higher education stand at this point?

On the spread of online classes to this point, Baum and McPherson write (footnotes omitted):
But MOOCs, as attention-getting as they have been, have never been the main source of online education. For-profit, career-oriented institutions and large public universities have been the major providers at the undergraduate level, although several private nonprofit institutions now enroll thousands of online students. Today, more than 40 percent of all undergraduate students take at least one course that is offered purely online; 11 percent–including 12 percent of those in bachelor’s degree programs–study entirely online.
What's the evidence on how well online courses teach? A key difference here seems to be that hybrid courses with high online content can work well, but pure online courses have some problems.  Baum and McPherson:
But studies that focus on course completion rates as opposed to test scores generally show weaker outcomes when courses are entirely online. Moreover, recent randomized controlled trials of semester-long college courses have found lower test scores for students in fully online courses than for similar students in traditional classroom settings–but no significant difference in outcomes between those in settings that mix technology with classroom experience and students in fully face-to-face courses. Economist David Figlio and colleagues compared a fully online course to a classroom course; economists William Bowen and Ted Joyce each had teams comparing traditional courses to those replacing some live instructor time with online learning; and labor economist William Alpert and colleagues studied all three models. The results of these studies are consistent. Classroom instruction time can be reduced without a negative impact on student learning. But eliminating the classroom and moving instruction entirely online appears to lead to lower course completion rates and worse outcomes, even when guidelines are followed for best practices for generating online discussion. The weaker results for students listening to lectures online instead of in a classroom with other students suggest that it may not be just personal attention, but being in a social environment that contributes to student learning. It is also possible that the more structured scheduling of classroom courses is important for some students.
The other big change in online higher education in the last decade or so has been a shift in who is most likely to be offering these courses. Back in 2009, it was mostly for-profits, but that has changed. Greenstein offers a comparison:
Unsurprisingly, by 2009, online instruction outside the for-profit sector was highly concentrated in a relatively small number of outlier institutions. In that year, Western Governors University (WGU), established in 1997 by the governors of nineteen states and with a significant grant from the Bill and Melinda Gates Foundation, offered fully online courses to over fifty thousand students, Penn State’s World Campus served twenty-five thousand (9,500 full-time equivalent) students, University of Maryland’s University College had twelve thousand online students, and there were one or two others operating outside the for-profit sector at something bigger than fledgling scale. There were also a number of headlining failures in the not-for-profit sector to point to, failures that reflected outright resistance to the genre, notably at the University of Illinois, where the Global Campus effort announced with great fanfare and with an investment of $10 million collapsed after only three years. By comparison, in the very same year–2009–the for-profit University of Phoenix was nearing its high watermark enrollment of nearly four hundred thousand online students.
Within a decade, the tables had turned. For-profits, under enormous pressure resulting from the Great Recession and a hostile regulatory environment, collapsed, losing as much as a half of all enrollments. Several of the biggest for-profits went out of business (Corinthian Colleges), were bought out by private equity firms (University of Phoenix), merged with not-for-profit institutions looking to accelerate their own online learning initiatives (Kaplan and Purdue Universities), or transitioned from for- to not-for-profit status. Large public universities and community colleges, meanwhile, moved in to pick up some of the slack. WGU grew to one hundred thousand enrollments and continues achieving 10 percent year-on-year growth. Arizona State University serves nearly the same number annually, and the University of Central Florida has grown to nearly sixty thousand students with almost one-third of all student credit hours taken online. Other evidence collected annually since 2002 has demonstrated how online learning has become part of the mainstream in higher education. Large public universities and colleges are particularly likely to offer a large share of student credit hours online. 
One of the hopes of online higher education was that it would be a low-cost way to make college classes widely available to underserved and at-risk student populations. This hope has gone largely unfulfilled. Baum and McPherson:
Two rigorous large-scale studies of community college students by the Community College Research Center (CCRC) found lower course persistence and program completion among students in online classes. These studies found that students who take online classes do worse in subsequent courses and are more likely than others not only to fail to complete these courses, but also to drop out of school. Males, students with lower prior GPAs, and Black students have particular difficulty adjusting to online learning. The performance gaps that exist for these subgroups in face-to-face courses become even more pronounced in online courses.
According to the CCRC, the differences are even greater for developmental courses than for college-level courses. In a study of online developmental English courses, failure and withdrawal rates were more than twice as high as in face-to-face classes. Students who took developmental courses online were also significantly less likely to enroll in college-level gatekeeper math and English courses. Of students who did enroll in gatekeeper courses, those who had taken a developmental education course online were far less likely to pass than students who had taken it face-to-face.
Thus, many of the current successes of online learning in higher education are for students who are pre-screened by high admissions standards, or highly motivated, or both. As one example, Baum and McPherson write:
Georgia Tech’s widely cited computer science master’s degree program is getting very positive reviews and appears to be opening opportunities to new students, rather than diverting them from face-to-face programs. Since this is a graduate program, all of the students have already earned bachelor’s degrees and, in the case of Georgia Tech, passed rigorous admission standards. Evidence about success in MOOCs confirms the reality that students from higher-income and more-educated backgrounds are most likely to participate and succeed in these courses.

Greenstein offers some other examples:
Two potentially very promising trajectories are beginning to take shape. The first is the use of hybrid modalities: modalities that mix face-to-face and online instruction. Where implemented well, they appear to lower costs and improve student outcomes. This at least is the experience at the University of Central Florida (UCF). With undergraduates taking nearly one-third of their credits online, UCF shows the best course outcomes for students in hybrid courses (with outcomes for face-to-face and fully online falling behind in that order). A second very promising development is seen in adaptive technology platforms and courseware that integrate data science to make machine-assisted learning directly responsive to individual students’ needs and their progress and pace in mastering explicitly specified course competencies. By the mid-2010s, results were more rather than less promising for the technology demonstrating improved student outcomes for students from all demographic groups.

I've heard enthusiasts for online education point out more than once that the possibilities for innovative technological progress in the form of a human delivering a live lecture are somewhat limited. In contrast, it's easy to imagine all kinds of potential for improvement in online higher education. It remains true that most online higher ed involves lecture-based presentations followed by online quizzes and tests. One can easily imagine that over time the interaction of an online class with a student will become more adaptive, flexible, and responsive. The methods of group participation online with other students and faculty will become more sophisticated. But after some years of watching online classes not cause a revolution in higher education, some hard questions are emerging.

1) It's easy to imagine online higher education getting better, but it's not going to happen easily or on the cheap. It's clear at this point that just recording some classroom lectures and linking students up to a multiple-choice online test-bank will work for a highly motivated few, but not for the many. The investment needed for really good online courses may be large, and it may be ongoing. The old model of finding a professor who teaches a course well, and then having the professor record some lectures or write a textbook, isn't going to suffice. Instead, there will be a need for experts in computer programming, psychology, artificial intelligence, and more. A highly evolved online education class is also not a one-time project, but instead is going to require ongoing cycles of learning, while respecting differences across topics. Teaching statistics online may look very different from teaching a foreign language or writing or chemistry or economics. Before we're too quick to assume that online higher education will soon get a lot better, it's important to remember that creating the highly evolved online education courses of the future isn't just a matter of jumping a few hurdles, but of overcoming a multidimensional obstacle course. It's not about a few incremental gains to the existing courses, but about evolution into a different kind of online experience that barely exists--or may not yet exist.

2) Who is going to make these costly, risky investments? Maybe it will be a few very well-to-do schools. It would be an interesting irony if those who attend huge-endowment, highly selective schools also ended up with access to much better online courses! Another possibility is that it will be schools with extremely large enrollments--probably larger than the enrollment of any specific campus. It would be interesting to see if some conferences, like the Big Ten, SEC, Pac-12, or the ACC, could put together a team along these lines. It's not at all clear how community colleges, smaller schools, or schools with lower levels of funding can afford to make large and ongoing investments in a dramatically better version of online education. Thus, it's not at all clear that these online courses of the future will be focused on at-risk or nontraditional students.

3) There are times when the discussion of online education seems to be based on a vision of education as something that can be downloaded or viewed online by individuals in isolation, who then absorb the necessary information. But most education has traditionally happened in groups, and the social and emotional structures of the group may matter--at least for most learners most of the time.  Thus, a challenge is to make online learning into a genuinely shared experience. I'll give Baum and McPherson the last word:
Behind the successive would-be revolutions in the technology of delivering college education seems to lie a desire to minimize, if not eliminate, the need for messy, often inconvenient, and always costly human interaction in the college-going experience. This desire is particularly evident when the concern is for mass higher education. A purely automated delivery system for much of higher education would appear to be very cheap and efficient, and perhaps even higher quality than traditional higher education because everyone could be exposed to the best lecturers. Unfortunately for this dream, developments in psychology and learning theory over the last two decades have made ever more clear how central the social, emotional, and interactional dimensions of learning are.

Here's the Table of Contents for the issue, with links to the articles:

Friday, October 4, 2019

Challenges Facing the "Arab Development Model"

Here's a description of the Arab "social contract" and "development model" according to a recent report by Adel Abdellatif, Paola Pagliani, and Ellen Hsu, "Leaving No One Behind Towards Inclusive Citizenship in Arab Countries" (July 2019). It is an Arab Human Development Report Research Paper, written for the Regional Bureau for Arab States in the UN Development Programme. They write:
The social contract that emerged from and continues to evolve as a result of contesting and bargaining stemmed from the state-building and formation after Arab states won their independence in the 1950s–1970s. The emergence of independent states was associated with a strong nationalistic sentiment and the idea that the state should be the provider and engine of social and economic development. Despite considerable variation across countries, which was affected by natural resources endowments, the dominant model of development from the 1950s onward was having limited political participation and civil and political liberties in exchange for material benefits such as services, subsidies and employment. The model was based on strong central states overseeing and driving economic and social priorities while implementing wide-scale policies for redistribution and equity. It rested on four main pillars:
  • Establishing a large bureaucracy to provide and deliver services.
  • Expanding security services and the army.
  • Setting up a large public sector of factories and companies.
  • Subsidizing basic foodstuffs and energy products.
In keeping with the development model, a large share of total employment--often more than 20%--is in the public sector.

This model can claim some successes. For example, life expectancy at birth in Arab countries was about 55 years in 1970, below the world average of 58 years. Now, life expectancy in Arab countries is about 76 years, above the world average of 73 years. However, human development gains for countries in the Arab world have generally slipped back since 2010. The big drop in global oil prices back around 2014 has meant a reduction in resources for oil-exporting countries in the Middle East, and lower spillover buying power for the non-oil exporters in the region.

The emphasis of the report is that many of the countries in the region do not have "inclusive citizenship." For example, females in Arab countries lag males by more than the usual average for emerging-market economies in areas like education and political representation. The report notes:
The greatest measurable disparities are economic: globally women’s income is 57 percent of men’s, but Arab women’s income is only 21 percent of Arab men’s. Unequal gendered division of labour—both in unpaid care and domestic work and in the labour market—is a major characteristic of gender economic inequality across the Arab region. Women’s participation in the formal labour market remains among the lowest globally because of both cultural norms and weak incentives
There are big gaps between rural and urban areas, and within urban areas, "[i]n at least seven countries with data, more than half of the urban population lives in slums." The concerns just keep coming:
Unaccountable and unresponsive public institutions as well as perceived widespread corruption often drive exclusion and disenfranchisement for large segments of the population. ... A substantial number of citizens believe that the institutions meant to take care of their needs are leaving them behind ... Trust in elected bodies, those that should be in charge of redesigning the social contract, is particularly low. Lack of trust is also reflected in low electoral turnouts—below 50 percent in most countries ... Perceptions of ineffective institutions seem confirmed by stagnating or narrowly based economic structures, high unemployment, young people facing difficult prospects to secure their future and uneven provision of social services and social protection nets. Unemployment, averaging 10 percent, almost double the world average, disproportionately affects young people, at 25 percent. ... 84% of the population is affected by or at risk of water scarcity. The decline of arable land and the dependency on food imports expose the population to risks of food insecurity ...
Part of what makes the report interesting is that it comes from the Regional Bureau for Arab States in the UN Development Programme. And the unmistakable theme is that the Arab development model and the associated social contract aren't working very well. Another part of what makes the report interesting is its hesitancy about suggesting alternative policy directions.

Yes, there's some discussion about how subsidies for energy prices, which end up mainly benefiting the well-to-do, who after all use more energy, could be converted to support for the poor. This is a problem in a lot of countries (for an overview, see this IMF working paper). But the challenges facing the Arab development model aren't about recalibrating some subsidies. The problem is that a "development model" based on high public employment, along with lots of social services and subsidies, needs substantial numbers of firms in a solid underlying economy to provide jobs and tax revenues and growth.

For some earlier posts on the economic outlook for the Middle East, see:

Thursday, October 3, 2019

The Dispersion of High- and Low-Productivity Firms Within an Industry

If you think about an economy as fairly stable and static, you would expect that any two companies within an industry would be fairly close in terms of productivity. After all, if Company A and Company B are selling similar products, and A has much higher productivity than B, it should drive B out of business. Thus, one might expect that at the end of this process, the competitors we observe within an industry in the real world should be fairly close in productivity level.

However, this expectation is dramatically wrong. Within an industry, it is a standard pattern to find a wide dispersion of productivity across firms in the industry. Academic researchers have been familiar with this pattern for at least 15 years. But now (pulse rate accelerates) there is systematic time series data across industries from 1997-2015!  "The Dispersion Statistics on Productivity (DiSP) is a joint experimental data product from the U.S. Bureau of Labor Statistics and the U.S. Census Bureau. The DiSP provide statistics on within-industry dispersion in productivity."

For example, here's a figure from Cheryl Grim of the US Census Bureau. The bar graphs show that if you take a firm at the 75th percentile of the shoe or the cement industry and compare it with a firm at the 25th percentile of the same industry, the firm at the 75th percentile will be about 1.5 times as productive. In the computer industry, a firm at the 75th percentile is about four times as productive as a firm at the 25th percentile.


The existence of such differences in productivity across industries has been known for some time. Cindy Cunningham, Lucia Foster, Cheryl Grim, John Haltiwanger, Sabrina Wulff Pabilonia, Jay Stewart, and Zoltan Wolf explain in "Dispersion in Dispersion: Measuring Establishment-Level Differences in Productivity" (Center for Economic Studies Working Paper CES 18-25R, September 2019).

They point out that research by Chad Syverson back in 2004, looking at data from manufacturing industries in 1977, found that firms at the 90th percentile of a given industry were about four times as productive as firms at the 10th percentile. In the more recent data: "Illustrating the properties of the new data product, we find large within-industry dispersion in labor productivity: establishments at the 75th percentile are about 2.4 times as productive as those at the 25th percentile on average."
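The dispersion ratios in these studies are straightforward to compute from establishment-level data. Here is a minimal sketch in Python; the productivity figures are made up for illustration, so the ratios it produces are hypothetical, not the published DiSP or Syverson numbers:

```python
# Illustrative within-industry dispersion calculation: ratio of the 75th
# to the 25th percentile (and the 90th to the 10th) of productivity.
import statistics

# Hypothetical output-per-hour figures for establishments in one industry.
productivity = [8, 10, 12, 15, 18, 22, 30, 36, 45, 60]

# statistics.quantiles with n=4 returns the three quartile cut points.
q25, _, q75 = statistics.quantiles(productivity, n=4)
print(f"75/25 ratio: {q75 / q25:.2f}")

# The Syverson-style comparison uses the 10th and 90th percentiles instead.
deciles = statistics.quantiles(productivity, n=10)
p10, p90 = deciles[0], deciles[-1]
print(f"90/10 ratio: {p90 / p10:.2f}")
```

One design note: the 90/10 ratio is always at least as large as the 75/25 ratio for the same data, which is why Syverson's "four times as productive" figure and the DiSP's 2.4 figure are not directly comparable.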

Why do such differences exist? The reasons are obvious enough, as Grim explains:
Producers within industries differ in many ways. They produce different products of varying quality and have different customers and markets. They use different technology and business practices to combine different amounts of materials and equipment to produce their products. Some businesses are also larger and/or older than other businesses. Their ability to adjust their scale and mix of operations may vary due to these differences. Experimenting with new products and processes can also contribute to productivity differences. Businesses that have successfully adopted new technologies are likely to be more “productive” (as measured by these differences in revenue per hour) compared to businesses that have not yet adopted such technologies. All of these factors can contribute to enormous variations in this measure of business performance.
The fact that firms in the same industry can be so different in productivity levels, and that these differences don't seem to fade away, has a number of interesting implications.

First, the pattern suggests that productivity growth doesn't always mean cutting-edge gains; indeed, there is enormous potential for economic growth if the firms now lagging in productivity can be brought up to speed, perhaps by merging with higher-productivity firms. In addition, one way that productivity growth happens for the economy as a whole is when high-productivity firms put low-productivity firms out of business.

Second, the persistence of these gaps suggests that some firms are protected from competition. For example, cement is not very transportable, and so competition in the cement industry is often limited to local firms. Why productivity differences persist in other industries is worth considering.

Third, there seems to be some evidence that productivity dispersion is widening, as "superstar" firms in various industries pull further ahead. Indeed, this may be an important factor contributing to growth of inequality of wages, because workers and managers at high-productivity firms are typically much better-paid than those at low-productivity firms.

Tuesday, October 1, 2019

Are CLOs the New CDOs?

CDOs, or "collateralized debt obligations," were at the heart of what broke down in the US financial system and helped put the "Great" in the "Great Recession." Is there another financial instrument out there that raises similar concerns? CLOs, or "collateralized loan obligations," have a similar structure and have now reached a similar size to the CDOs circa 2008.

How much should we be worried? As I've noted in past discussions of the subject, several Fed officials, including Lael Brainard of the Fed Board of Governors and Robert Kaplan of the Federal Reserve Bank of Dallas (who will rotate onto the Federal Open Market Committee in 2020), have raised concerns. Sirio Aramonte and Fernando Avalos offer a nice short discussion of this comparison in "Structured finance then and now: a comparison of CDOs and CLOs," which appears in the BIS Quarterly Review (September 2019, pp. 11-14). They write: "The rapid growth of leveraged finance and CLOs has parallels with developments in the US subprime mortgage market and CDOs during the run-up to the GFC. We examine the CLO market in light of that earlier experience."

Here's some backstory. The collateralized debt obligations of concern back in 2007 were a set of financial securities based on pools of subprime mortgages. There's nothing wrong with collecting mortgages into a pool, packaging them into a security, and then reselling them to investors like insurance companies, pension funds, hedge funds, and banks.

But the problem with creating a financial security based on subprime mortgages was that--by the definition of "subprime"--a relatively high percentage of these mortgages were going to default, so a financial security based on these subprime mortgages would be fairly risky. For example, banks would not be allowed by regulators to hold such securities. However, some financial wizardry solved that problem. The CDOs were divided up into sections, called "tranches," with some of the tranches being very risky and some being very safe. For example, if losses on the underlying subprime mortgages were in the range of 0-10% of the pool, all of those losses would fall on one set of investors in the highest-risk tranche. If losses fell in the range of 10-20%, the losses above the first 10% would fall entirely on another set of investors in the next-highest-risk tranche. With several of these tiers built into place, so that any losses would be concentrated on a subset of investors, the other tranches of the CDO appeared to be very safe: indeed, those tranches were rated AAA, and banks were allowed to hold them.

The current wave of collateralized loan obligations consists of financial securities that are also based on pools of debt--but in this case, the debts are corporate loans rather than subprime mortgages. Again, there's nothing wrong with collecting debt into a pool, packaging it into a security, and reselling it to investors. This kind of corporate debt is called a "leveraged loan." As Aramonte and Avalos write:
CDOs and CLOs are asset-backed securities (ABS) that invest in pools of illiquid assets and convert them into marketable securities. They are structured in tranches, each with claims of different seniority over the cash flows from the underlying assets. The most junior or so-called equity tranche is often unrated and earns the highest yields, but is the first to absorb credit losses. The most senior tranche, which is often rated AAA, receives the lowest yields but is the last to absorb losses. In between are mezzanine tranches, usually rated from BB to AA, which start to absorb credit losses once the equity tranche is wiped out. The larger the share of junior tranches in the capital structure of the pool, the more protected the senior tranche (for a given level of portfolio credit risk).
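The tranching mechanism described above can be sketched in a few lines of code. This is a stylized loss "waterfall" with invented attachment points (0-10%, 10-20%), not the structure of any actual CDO or CLO:

```python
# Stylized tranche loss waterfall: losses on the underlying pool are
# absorbed junior-tranche-first, up to each tranche's detachment point.

def tranche_losses(pool_loss_pct, tranches):
    """Allocate a pool loss (percent of the pool) across tranches.

    tranches: list of (name, attach_pct, detach_pct), junior first.
    Returns {name: loss as a percent of that tranche's own principal}.
    """
    losses = {}
    for name, attach, detach in tranches:
        # This tranche absorbs the slice of the pool loss falling
        # between its attachment and detachment points.
        absorbed = min(max(pool_loss_pct - attach, 0), detach - attach)
        losses[name] = 100 * absorbed / (detach - attach)
    return losses

structure = [
    ("equity",    0, 10),    # first-loss tranche, highest yield
    ("mezzanine", 10, 20),   # absorbs losses once equity is wiped out
    ("senior",    20, 100),  # often rated AAA; last to absorb losses
]

# A 12% loss on the pool wipes out equity and dents the mezzanine
# tranche, while the senior tranche is untouched.
print(tranche_losses(12, structure))
# → {'equity': 100.0, 'mezzanine': 20.0, 'senior': 0.0}
```

The arithmetic shows why the senior tranches looked safe: the pool has to lose more than 20% before they absorb anything. The 2007-2008 problem was that correlated subprime defaults pushed pool losses well past the levels the ratings had assumed.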
The market for collateralized loan obligations has grown quickly. For comparison, the size of the total market for CDOs in 2007 was $1.2 trillion-$2.4 trillion, and the size of the total market for CLOs at present is $1.4 trillion to $2.0 trillion. In addition, investors (facing low interest rates elsewhere) are eager to buy CLOs--which means that the credit standards for such loans have deteriorated. Aramonte and Avalos write:
For both CDOs and CLOs, strong investor demand led to a deterioration in underwriting standards. For example, US subprime mortgages without full documentation of borrowers’ income increased from about 28% in 2001 to more than 50% in 2006. Likewise, leveraged loans without maintenance covenants increased from 20% in 2012 to 80% in 2018. In recent years, the share of low-rated (B–) leveraged loans in CLOs has nearly doubled to 18%, and the debt-to-earnings ratio of leveraged borrowers has risen steadily. Weak underwriting standards can reduce the likelihood of defaults in the short run but increase the potential credit losses when a default eventually occurs. 
Here are a couple of images: one showing the rise in the leveraged loan market, the other showing that borrowers with more debt have an increasing share of the market and that "covenant-lite" loans with fewer protections for investors have been on the rise. 

Thus, the concern is over a scenario where the economy gets a negative shock. The risk of leveraged loans rises. Some investors start trying to sell off those loans, but in a situation where everyone is trying to sell, the prices are going to be low--which encourages even more investors to try to sell. Banks see that the value of their holdings of CLOs is falling, which raises concerns for bank regulators. Some banks also find that, although they had not quite realized it, they are connected to other parts of the financial industry through legal and reputational ties, or because they have open lines of credit outstanding to these other companies. Ultimately, companies find it much harder to borrow, and banks become less willing to lend to consumers, too. Say it all in one long breath, and it's a recipe for recession.

But while the parallels from CDOs to CLOs are suggestive, and reason for a moderate degree of concern, there are also meaningful differences.

The CDOs of 2007 were all based on housing, and thus were all vulnerable to a common shock. The CLOs of 2019 are more diversified because they are spread across industries, and not all industries are likely to become vulnerable in the same way at the same time. 

The CDOs of 2007 became entangled in other types of complexity. For example, the financial wizards started off with subprime mortgages and then created CDOs with tranches. But then they took tranches from separate CDOs and combined the tranches into a new CDO--sometimes called a CDO-squared--with tranches of its own. CDOs also became entangled with a market for "credit default swaps," a way of buying insurance against a decline in your CDO tranche. Selling that "credit default swap" insurance was a big part of what drove the insurance company AIG into bankruptcy and a federal bailout. The financial structure of the recent wave of CLOs has not (so far!) been complicated with these kinds of additional complications. If stress does occur in the CLO market, it will be a lot easier to identify the risks and who is facing them. 

Yet another issue is that back in 2008, banks were often investing in CDOs through another bit of financial wizardry called a "special-purpose vehicle," which was technically separate from the bank and thus off the bank's balance sheet, but where the bank would suffer if losses occurred. But banks that own CLOs are holding them directly and in the open, not through a veiled financial transaction. Again, if risks occur, those risks should be much more clear.

As Aramonte and Avalos note, it also seems that CLOs are less likely to be financed by short-term borrowing, and less likely to serve as collateral for short-term borrowing as well. Less of a connection to short-term financial markets means that the risk of a "run" on the asset is reduced.

Bottom line: CLOs aren't the new CDOs, at least not yet. But perhaps cast a weather eye in their direction, now and then, just in case.