
Tuesday, July 31, 2018

"Whoever Is Not a Liberal at 20 Has No Heart ..."

There's a saying along these general lines: "If you’re not a liberal when you’re 25, you have no heart. If you’re not a conservative by the time you’re 35, you have no brain." Who said it? One blessing of the web is that I can fiddle around with such questions without needing to spend three days in the library.

It's apparently not Winston Churchill. At least, there's no record of him having said or written it. And Churchill scholars point out that he was a conservative at 15 and a liberal at 35. 

Indeed, it seems the origins of the comment may be French, rather than English. The Quote Investigator website writes:
The earliest evidence located by QI appeared in an 1875 French book of contemporary biographical portraits by Jules Claretie. A section about a prominent jurist and academic named Anselme Polycarpe Batbie included the following passage [translated as] ... 
"Mr. Batbie, in a much-celebrated letter, once quoted the Burke paradox in order to account for his bizarre political shifts: “He who is not a républicain at twenty compels one to doubt the generosity of his heart; but he who, after thirty, persists, compels one to doubt the soundness of his mind.”
Quote Investigator has not found an actual record of Mr. Batbie's "much-celebrated letter." And although the "Burke paradox" seems most likely to refer to Edmund Burke, it isn't clear whether it's a reference to something not-yet-discovered that Burke wrote, or a reference to a pattern purportedly revealed by Burke's life and writings. 

But hearkening back to Burke is interesting, because Thomas Jefferson's journals contain an entry relevant to this subject from January 1799, when John Adams was president. Jefferson writes: 
"In a conversation between Dr. Ewen and the President, the former said one of his sons was an aristocrat, the other a democrat. The President asked if it was not the youngest who was the democrat. Yes, said Ewen. Well, said the President, a boy of 15 who is not a democrat is good for nothing, and he is no better who is a democrat at 20. Ewen told Hurt, and Hurt told me."
For a lengthy list of other places where something similar to this quotation has appeared, see here or here. While the quotation clearly has staying power, it seems overly facile to me. The distinction that liberals feel and conservatives think is silly and shallow, and shows little understanding of either. The strong beliefs of young people are easily dismissed as rooted only in feelings, but at least young people often show some flexibility about learning and adapting. The strong feelings of the middle-aged and elderly often seem based as much on being set in their ways, on confirmation bias, and on lessons learned in a rather different past as on any deeper weighing of facts, values, and experience. 

Herbert Stein, who held many positions as an economist in Washington, DC for more than 50 years, captured some of my own sense here in his 1995 collection of essays, On the Other Hand - Essays on Economics, Economists, and Politics (from pp. 1-2):
"An old saying goes that whoever is not a Socialist when young has no heart and whoever is still a Socialist when old has no head. I would say that whoever is not a liberal when young has no heart, whoever is not a conservative when middle-aged has no head, and whoever is still either a liberal or a conservative at age seventy-eight has no sense of humor. Obviously, orthodox certainty on matters about which there can be so little certitude must eventually be seen as only amusing."
If you can't learn from both liberals and conservatives, and also laugh at both liberals and conservatives, you might want to reconsider the vehemence of your partisan commitments. 

Monday, July 30, 2018

How Coalitional Instincts Make Weird Groups and Stupid People

I like to think of myself as an individual who makes up his own mind, but that's almost certainly wrong for me, and you, gentle reader, as well. A vast literature in psychology points out that, in effect, a number of separate personalities live in each of our brains. Which decision gets made at a certain time is determined in part by how issues of reward and risk are framed and communicated to us.  Moreover, we are members of groups. If my wife or one of my children is in a serious dispute, I will lose some degree of my sunny disposition and rational fair-mindedness. Probably I won't lose all of it. Maybe I'll lose less of it than a typical person in a similar situation. But I'll lose some of it. 

John Tooby, a professor of anthropology at the University of California-Santa Barbara, has written about what he calls "Coalitional Instincts" in a short piece for Edge.com (November 22, 2017). Tooby argues that human brains have evolved so that we have "a nearly insuperable human appetite to be a good coalition member." But to demonstrate clearly that we are part of a coalition, we are all drawn to "unusual, exaggerated beliefs ... alarmism, conspiracies, or hyperbolic comparisons." Here's Tooby (I have inserted the boldface emphasis): 
"Every human—not excepting scientists—bears the whole stamp of the human condition. This includes evolved neural programs specialized for navigating the world of coalitions—teams, not groups.  ... These programs enable us and induce us to form, maintain, join, support, recognize, defend, defect from, factionalize, exploit, resist, subordinate, distrust, dislike, oppose, and attack coalitions. ...
"Why do we see the world this way? Most species do not and cannot. ... Among elephant seals, for example, an alpha can reproductively exclude other males, even though beta and gamma are physically capable of beating alpha—if only they could cognitively coordinate. The fitness payoff is enormous for solving the thorny array of cognitive and motivational computational problems inherent in acting in groups: Two can beat one, three can beat two, and so on, propelling an arms race of numbers, effective mobilization, coordination, and cohesion.

"Ancestrally, evolving the neural code to crack these problems supercharged the ability to successfully compete for access to reproductively limiting resources. Fatefully, we are descended solely from those better equipped with coalitional instincts. In this new world, power shifted from solitary alphas to the effectively coordinated down-alphabet, giving rise to a new, larger landscape of political threat and opportunity: rival groups or factions expanding at your expense or shrinking as a result of your dominance.

"And so a daunting new augmented reality was neurally kindled, overlying the older individual one. It is important to realize that this reality is constructed by and runs on our coalitional programs and has no independent existence. You are a member of a coalition only if someone (such as you) interprets you as being one, and you are not if no one does. We project coalitions onto everything, even where they have no place, such as in science. We are identity-crazed.

"The primary function that drove the evolution of coalitions is the amplification of the power of its members in conflicts with non-members. This function explains a number of otherwise puzzling phenomena. For example, ancestrally, if you had no coalition you were nakedly at the mercy of everyone else, so the instinct to belong to a coalition has urgency, preexisting and superseding any policy-driven basis for membership. This is why group beliefs are free to be so weird. Since coalitional programs evolved to promote the self-interest of the coalition’s membership (in dominance, status, legitimacy, resources, moral force, etc.), even coalitions whose organizing ideology originates (ostensibly) to promote human welfare often slide into the most extreme forms of oppression, in complete contradiction to the putative values of the group. ... 
"Moreover, to earn membership in a group you must send signals that clearly indicate that you differentially support it, compared to rival groups. Hence, optimal weighting of beliefs and communications in the individual mind will make it feel good to think and express content conforming to and flattering to one’s group’s shared beliefs and to attack and misrepresent rival groups. The more biased away from neutral truth, the better the communication functions to affirm coalitional identity, generating polarization in excess of actual policy disagreements. Communications of practical and functional truths are generally useless as differential signals, because any honest person might say them regardless of coalitional loyalty. In contrast, unusual, exaggerated beliefs—such as supernatural beliefs (e.g., god is three persons but also one person), alarmism, conspiracies, or hyperbolic comparisons—are unlikely to be said except as expressive of identity, because there is no external reality to motivate nonmembers to speak absurdities.

"This raises a problem for scientists: Coalition-mindedness makes everyone, including scientists, far stupider in coalitional collectivities than as individuals. Paradoxically, a political party united by supernatural beliefs can revise its beliefs about economics or climate without revisers being bad coalition members. But people whose coalitional membership is constituted by their shared adherence to “rational,” scientific propositions have a problem when—as is generally the case—new information arises which requires belief revision. To question or disagree with coalitional precepts, even for rational reasons, makes one a bad and immoral coalition member—at risk of losing job offers, one's friends, and one's cherished group identity. This freezes belief revision.

"Forming coalitions around scientific or factual questions is disastrous, because it pits our urge for scientific truth-seeking against the nearly insuperable human appetite to be a good coalition member. "
The lesson I draw here is that although we all feel a strong need to join groups, we do have some degree of choice and agency over which groups we end up joining. Even within larger groups, like a certain religion or political party, there will be smaller groups with which one can have a primary affiliation. It may be wise to give an outlet to our coalitional nature by joining several different groups, or by pushing oneself to occasionally phase out one membership and join another.

In addition, we all feel a need to do something a little whacky and extreme to show our group affiliation, but again, we have some degree of choice and agency over what actions and messages define our group. Wearing the colors of a professional sports team, for example, is a different kind of whackiness than sending vitriolic social media messages. Humans want to join coalitional groups, but we can at least consider whether the way a group expresses solidarity is a good fit with who we want to be.

Friday, July 27, 2018

Difficulties of Making Predictions: Global Power Politics Edition

Making predictions is hard, especially about the future. It's a comment that seems to have been attributed to everyone from Nostradamus to Niels Bohr to Yogi Berra. But it's deeply true. Most of us have a tendency to make statements about the future with a high level of self-belief, avoid later reconsidering how wrong we were, and then make more statements.

Here's a nice vivid example from back in 2001. The Bush administration had just taken office, and Linville Wells, an official at the US Department of Defense, was reflecting on the then-forthcoming "Quadrennial Defense Review." He wanted to offer a pungent reminder that the entire exercise of looking ahead even just 10 years has often been profoundly incorrect. Thus, Wells wrote this memo (dated April 12, 2001):
  • If you had been a security policy-maker in the world's greatest power in 1900, you would have been a Brit, looking warily at your age-old enemy, France. 
  • By 1910, you would be allied with France and your enemy would be Germany. 
  • By 1920, World War I would have been fought and won, and you'd be engaged in a naval arms race with your erstwhile allies, the U.S. and Japan. 
  • By 1930, naval arms limitation treaties were in effect, the Great Depression was underway, and the defense planning standard said "no war for ten years." 
  • Nine years later World War II had begun. 
  • By 1950, Britain no longer was the world's greatest power, the Atomic Age had dawned, and a "police action" was underway in Korea. 
  • Ten years later the political focus was on the "missile gap," the strategic paradigm was shifting from massive retaliation to flexible response, and few people had heard of Vietnam.
  • By 1970, the peak of our involvement in Vietnam had come and gone, we were beginning détente with the Soviets, and we were anointing the Shah as our protégé in the Gulf region.
  • By 1980, the Soviets were in Afghanistan, Iran was in the throes of revolution, there was talk of our "hollow forces" and a "window of vulnerability," and the U.S. was the greatest creditor nation the world had ever seen. 
  • By 1990, the Soviet Union was within a year of dissolution, American forces in the Desert were on the verge of showing they were anything but hollow, the U.S. had become the greatest debtor nation the world had ever known, and almost no one had heard of the internet. 
  • Ten years later, Warsaw was the capital of a NATO nation, asymmetric threats transcended geography, and the parallel revolutions of information, biotechnology, robotics, nanotechnology, and high density energy sources foreshadowed changes almost beyond forecasting. 
  • All of which is to say that I'm not sure what 2010 will look like, but I'm sure that it will be very little like we expect, so we should plan accordingly.
The questions of how to predict what you don't expect, and how to plan for what you don't expect, are admittedly difficult. The ability to pivot smoothly to face new challenges may be one of the most underrated skills in politics and management. 

Thursday, July 26, 2018

The Need for Generalists

One can make a reasonable argument that the concept of an economy and the study of economics begins with the idea of specialization, in the sense that those who function within an economy specialize in one kind of production, but then trade with others to consume a broader array of goods. Along these lines, the first chapter of Adam Smith's 1776 Wealth of Nations is titled "Of the Division of Labor." But in the push for specialization, there can be a danger of neglecting the virtues of generalists. Even when it comes to assembly lines, specialization of tasks can be pushed so far that it becomes unproductive (as Smith recognized). In a broad array of jobs, including managers, journalists, legislators and politicians, and even editors of academic journals, there is a need for generalists who can synthesize and put in context the work of a range of specialists.

The need for generalists is not at all new, of course. Here's a commentary from 80 years ago on "The Need for Generalists" from AG Black, who was Chief of the Bureau of Agricultural Economics at the US Department of Agriculture, published in the Journal of Farm Economics (November 1936, 18:4, pp. 657-661, and available via JSTOR). Black is writing in particular about specialization within what is already a specialty of agricultural economics, but his point applies more broadly.
"The past generation, like several generations before it, has indeed been one of greater and greater specialization. This has resulted in great advances in agricultural economics. Our specialists have developed new technics of analysis, they have discovered new relationships, they have been able to give close attention to important facts and factors that might otherwise have escaped attention and by such escape might have led to wrong conclusions. Without this specialized attention our discipline in agricultural economics could not have attained the position it has reached today.
"This advance has not been attained without cost. The price has been the loss of minds, or the neglect to develop minds, trained to cope with the complex problems of today in the comprehensive, overall manner called for by such problems. Our specialists are splendidly equipped to solve a problem concerning the price of wheat, or of corn, or of cotton, or a technical question in cooperative marketing, farm management, taxation or international trade. But the more important problems almost never present themselves in those narrow terms; rather they may involve elements of all the above and perhaps several more. ...
"Increased specialization itself tends to raise barriers between fields. It tends to create a system of professional jealousies that is not conducive to the development of generalists. The specialist who burrows deeper and deeper into a narrower and narrower hole becomes convinced that no one who has been sapping in a neighboring tunnel can possibly know as much about the peculiar twists and turns of his burrow as he, himself. And he is right. He knows that he can readily confound and confuse a neighboring specialist if the latter strays from his own confines, and what is more, he will. One of the greatest joys of the specialist is to make an associate appear infantile and ridiculous on the occasions when the latter appears to be getting out of his field.
"The specialist stakes out his claim and guards it as jealously as ever did a prospector of the '40s, and woe to the unwary trespasser. As the specialist knows how he looks upon intruders, he knows how he would be treated if he had the temerity to wander outside his main field. Consequently he is usually quite willing to leave outside fields to other specialists.
"The development of the whole field takes on a honeycomb appearance with series upon series of well-marked and almost wholly isolated cells. These cell walls need to be broken down. There is need of men who can correlate and coordinate the specialized knowledge in the separate cells--men who can bring to bear on the larger problems the findings of the different specialists and who have sufficient perspective and sense of proportion to apply just the correct shade of emphasis to the contribution of each particular specialist. ...
"Our whole organization has developed on the assumption that the generalizing function is not important, that it does not require quite the ability and training of the specialist, that it can be satisfactorily done by almost anyone and that certainly there is nothing about it that demands the attention of really first class men. If generalizing be done at all, it can safely be committed to the specialist who can play with it as relaxation from the really serious and important demands of his specialty, or to the administrator who can give it all of the attention it requires between telephone calls and committee meetings.
"All of this, I suppose, leads to the conclusion that in agricultural economics we need another specialist, that is a "specialist" who is a "generalist." We need to make a place for the trained economist of highest ability who will be free from administrative demands as well as free from the tyranny of specialization, who will have the job of keeping abreast of the results of the various specialists and who can spend a good deal of time in analyzing findings having a bearing upon the ultimate solution of these same problems. ... In other words, students need training in analysis and in synthesis. Today the ability to synthesize facts, research results and partial solutions into a well rounded whole, is too infrequently available."
One of the many political cliches that makes me die a little bit inside is when someone claims that all we need to address a certain problem (health care, poverty, transportation, the Middle East, whatever) is to bring together a group of experts who will provide the common-sense solution that we have all been ignoring. Bringing together a group of specialist experts can provide a great deal of information and insight, but such a group is often not especially good at melding its specific insights into a general policy.

Homage: I ran across a mention of this article at Carola Binder's always-useful "Quantitative Ease" website last summer, and left myself a note to track it down. But given my time constraints and organizational skills, it took a while for me to do so.

Wednesday, July 25, 2018

"Half the Money I Spend on Advertising is Wasted, and the Trouble is I Don't Know Which Half"

There's an old rueful line from firms that advertise: "We know that half of all we spend on advertising is wasted, but we don't know which half." It's not clear who originally coined the phrase. But we do know that the effects of advertising have changed dramatically in a digital age. Half of all advertising spending may still be wasted, but now it's for a very different reason.

I was raised with the folklore that John Wanamaker, founder of the eponymous department stores, was the originator of the phrase at hand. But the attribution gets pretty shaky, pretty fast. David Ogilvy, the head of the famous Ogilvy & Mather advertising agency, wrote in his 1963 book Confessions of an Advertising Man (pp. 86-87): "As Lord Leverhulme (and John Wanamaker after him) complained, `Half the money I spend on advertising is wasted, and the trouble is I don't know which half.'"

So how about William Lever, Lord Leverhulme, who built a fortune in the soap business (with Sunlight Soap, and eventually Unilever)? Career advertising executive Jeremy Bullmore has looked into it, and wrote in the 2013 annual report of the British advertising and public relations firm WPP:
"There are at least a dozen minor variations of this sentiment that are confidently quoted and variously attributed but they all have in common the words ‘advertising’, ‘half’ and ‘waste’. Google the line and you’ll get about nine million results. ... As it happens, there’s little hard evidence that either William Lever or John Wanamaker (or indeed Ford or Penney) ever made such a remark. Certainly, neither the Wanamaker nor the Unilever archives contains any such reference. Yet for a hundred years or so, with no accredited source and no data to support it, this piece of folklore has survived and prospered." 
Bullmore makes some compelling points. One is that even 100 years ago, it was widely believed that advertising could be usefully shaped and targeted. He writes:
"Retail advertising in the days of John Wanamaker was mostly placed in local newspapers and was mainly used to shift specific stock. An ad for neckties read, ‘They’re not as good as they look, but they’re good enough. 25 cents.’ The neckties sold out by closing time and so weren’t advertised again. Waste, zero. Experiment was commonplace. Every element of an advertisement – size, headline, position in paper – was tested for efficacy and discarded if found wanting. Waste, if not eliminated, was ruthlessly hounded.
"Claude Hopkins published Scientific Advertising in 1923. In it, he writes, “Advertising, once a gamble, has… become… one of the safest of business ventures. Certainly no other enterprise with comparable possibilities need involve so little risk.” Even allowing for adman’s exuberance, it strongly suggests that, within Wanamaker’s lifetime, there were very few advertisers who would have agreed that half their advertising money was wasted."
Further, Bullmore points out that people are more comfortable buying certain products because "everyone knows" about them, and "everyone knows" because even those who don't purchase the product have seen the ads.
"A common attribute of all successful, mass-market, repeat-purchase consumer brands is a kind of fame. And the kind of fame they enjoy is not targeted, circumscribed fame but a curiously indiscriminate fame that transcends its particular market sector. Coca-Cola is not just a famous soft drink. Dove is not just a famous soap. Ford is not just a famous car manufacturer. In all these cases, their fame depends on their being known to just about everyone in the world: even if they neither buy nor use. Show-biz publicists have understood this for ever. When The Beatles invaded America in 1964, their manager Brian Epstein didn’t arrange a series of targeted interviews in fan magazines; he brokered three appearances on the Ed Sullivan Show with an audience for each estimated at 70 million. Far fewer than half of that 70 million will have subsequently bought a Beatles record or a Beatles ticket; but it seems unlikely that Epstein thought this extended exposure in any way wasted."
And of course, if large amounts of advertising are literally wasted, it seems as if we should be able to observe a substantial number of companies that cut their advertising budgets in half and suffered no measurable decline in sales. (In fact, if half of advertising is always wasted, shouldn't the firm then keep cutting the advertising budget by half, and half again, and half again, and so on down to zero? Seems as if there must be a flaw in this logic!)

Of course, one of the major changes in advertising during the last decade or two is that print advertising has plummeted, while digital advertising has soared. More generally, digital technology has made it much more straightforward to create systematic variations in the quantity and qualities of advertising--and to track the results. Bullmore writes: "And given modern measurements and the growth of digital channels, it’s easier than ever for advertising to be held accountable; to be seen to be more investment than cost."

But Bullmore is probably too optimistic here about how easy it is to hold advertising accountable, for a couple of reasons.

One problem is that the idea of targeting specific audiences for digital advertising is a lot more complicated in practice than it may seem at first. Judy Unger Franks of the Medill School of Journalism, Media, Integrated Marketing Communications at Northwestern University explained the issues in a short essay late last summer. She wrote:

"Programmatic Advertising enables marketers to make advertising investments to select individuals in a media audience as opposed to having to buy the entire audience. Advertisers use a wealth of Big Data to learn about each audience member to then determine whether that audience member should be served with an advertisement and at what price. This all happens in near real-time and advertisers can therefore make near real-time adjustments to their approach to optimize the return-on-investment of its advertising expenditures.
"In theory, Programmatic Advertising should solve the issue of waste. However, in our attempt to eliminate waste from the advertising value chain, we may have made things worse. We have unleashed a dark side to Programmatic Advertising that comes at a significant cost. Now, we know exactly which half of the money spent on advertising is wasted: it’s the half that marketers must now spend on third parties who have inserted themselves into the Programmatic Advertising ecosystem just to keep our investments clean. ... 
"How bad is it? How much money are advertisers spending on this murky supply chain? The IAB (Interactive Advertising Bureau) answered this for us when they released their White Paper, “The Programmatic Supply Chain: Deconstructing the Anatomy of a Programmatic CPM” in March of 2016. The IAB identified ten different value layers in the Programmatic ecosystem. I believe they are being overly generous by calling each a “value” layer. When you need an ad blocking service to avoid buying questionable content and a separate verification service to make sure that the ad was viewable by a human, how is this valuable? When you add up all the costs associated with the ten different layers, they account for 55% of the cpm (cost-per-thousand) that an advertiser pays for a programmatic ad. This means that for every dollar an advertiser spends in Programmatic Advertising over half (55%) of that dollar never reaches the publisher. It falls into the hands of all the third parties that are required to feed the beast that is the overly complex Programmatic Advertising ecosystem. We now know which half of an advertising investment is wasted. It’s wasted on infrastructure to prop up all those opportunities to buy individual audiences across the entire Programmatic Advertising supply chain."
In other words, by the time an advertiser has spent the money to do the targeting, and to make sure that the mechanisms to do the targeting work, and to follow up on the targeting, the costs can be so high that the reason for targeting in the first place is in danger of being lost.

The other interesting problem is that academic studies that have tried to measure the returns to targeted online advertising have run into severe problems. For a discussion, see "The Unfavorable Economics of Measuring the Returns to Advertising," by Randall A. Lewis and Justin M. Rao (Quarterly Journal of Economics, 130:4, November 2015, pp. 1941–1973, available here). They describe the old "half of what I spend in advertising is wasted" slogan in these terms (citations omitted):
"In the United States, firms annually spend about $500 per person on advertising. To break even, this expenditure implies that the universe of advertisers needs to casually affect $1,500–2,200 in annual sales per person, or about $3,500–5,500 per household. A question that has remained open over the years is whether advertising affects purchasing behavior to the degree implied by prevailing advertising prices and firms’ gross margins ..."
The authors look at 25 studies of digital advertising. They find that the variations in what people buy and how much they spend are very large. Thus, it's theoretically possible that if advertising causes even a small number of people to "tip" from spending only a little on a product to being big spenders on it, the advertising can pay off for the advertiser. But in a statistical sense, given that people vary so much in their spending on products and change so much anyway, it's really hard to disentangle the effects of advertising from the changes in buying patterns that happen anyway. As the authors write: "[W]e are making the admittedly strong claim that most advertisers do not, and indeed some cannot, know the effectiveness of their advertising spend ..."
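To make the break-even arithmetic in that quotation concrete, here is a minimal sketch. The $500-per-person figure comes from the quotation; the gross-margin range of roughly 23 to 33 percent and the household size are my own illustrative assumptions, chosen only to show how the per-person spending figure translates into the sales ranges the authors cite.

```python
# Rough break-even arithmetic for aggregate US advertising, following the framing
# in the Lewis-Rao quotation. The $500-per-person figure is from the quotation;
# the gross-margin range and household size are illustrative assumptions.
ad_spend_per_person = 500.0    # annual US ad spend per person (from the quotation)
people_per_household = 2.5     # rough US average, an assumption for illustration

for gross_margin in (0.23, 0.33):
    # To break even, incremental profit must cover the ad spend:
    #   required_sales * gross_margin >= ad_spend
    required_sales_per_person = ad_spend_per_person / gross_margin
    required_sales_per_household = required_sales_per_person * people_per_household
    print(f"margin {gross_margin:.0%}: advertising must causally generate about "
          f"${required_sales_per_person:,.0f} in sales per person "
          f"(~${required_sales_per_household:,.0f} per household)")
```

Under those assumed margins, the required effect works out to roughly $1,500–2,200 per person, which is the scale of causal impact the authors argue is very hard to verify statistically.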

Thus, the economics of spending on advertising remain largely unresolved, even in the digital age. Those interested in more on the economics of advertising might want to check my post on "The Case For and Against Advertising" (November 15, 2012).


Tuesday, July 24, 2018

Early Examples of Randomization in the Social Sciences

Randomization is one of the most persuasive techniques for determining cause and effect. Half of a certain group get a treatment; half don't. Compare. If the groups were truly chosen at random, and the treatment was truly the only difference between them, and the differences in outcomes are meaningful and the size of the samples are also large enough for drawing statistically meaningful conclusions, then the differences can tell you something about causes.
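As a concrete (and purely illustrative) sketch of that logic, the little simulation below assigns a hypothetical treatment by coin flip and then recovers the treatment's known effect from the simple difference in group means. All of the numbers are invented for illustration.

```python
import random

random.seed(1)

def run_trial(n=10_000, true_effect=2.0):
    """Simulate a randomized trial: coin-flip assignment plus a known treatment effect."""
    treated, control = [], []
    for _ in range(n):
        baseline = random.gauss(50, 10)   # each person's outcome absent treatment
        if random.random() < 0.5:         # random assignment to treatment
            treated.append(baseline + true_effect)
        else:
            control.append(baseline)
    # With random assignment, the difference in group means estimates the true effect.
    return sum(treated) / len(treated) - sum(control) / len(control)

print(f"estimated effect: {run_trial():.2f}")   # lands near 2.0 with a large sample
```

Shrink the sample, or let assignment depend on the baseline outcome, and the simple comparison of means stops being trustworthy, which is exactly why both randomness and sample size matter.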

Economists have surged into randomized experimental work in the last few decades. From the 1970s, up into the 1990s, such work was often focused on social policies, with experiments on different kinds of health insurance, job training and job search, changes in welfare rules, early childhood education, and others. More recently, such work has become very prominent in the development economics literature, as well as on a variety of focused economics topics like how incentive pay affects work or how charitable contributions could be increased. Running experiments is now part of the common tool-kit (for a small taste, see here, here, here, and the three-paper symposium in the Fall 2017 issue of the Journal of Economic Perspectives on "From Experiments to Economic Policy").

Thus, economists and other social scientists may find it useful to keep some historical examples of randomization near at hand. Julian C. Jamison provides a trove of such examples in "The Entry of Randomized Assignment into the Social Sciences"  (World Bank Policy Research Working Paper 8062, May 2017).

Jamison has a run-through of the classic examples of randomization over the centuries, which were often in a medical context. For example, he quotes the correspondence between poet/writers Petrarch and Boccaccio in 1364, in which Petrarch wrote:
"I solemnly affirm and believe, if a hundred or a thousand men of the same age, same temperament and habits, together with the same surroundings, were attacked at the same time by the same disease, that if one half followed the prescriptions of the doctors of the variety of those practicing at the present day, and that the other half took no medicine but relied on Nature’s instincts, I have no doubt as to which half would escape." 
Or consider Jan Baptist van Helmont, a doctor writing in the first half of the 1600s, who proposed that the two "cures" of the day--bloodletting vs. induced vomiting/defecation--be tested in this way:
"Let us take out of the Hospitals, out of the Camps, or from elsewhere, 200 or 500 poor People, that have Fevers, Pleurisies, etc. Let us divide them in halfes, let us cast lots, that one half of them may fall to my share, and the other to yours; I will cure them without bloodletting… we shall see how many Funerals both of us shall have."
There are cases from the 1600s, 1700s, and 1800s of randomization as a way of testing the effectiveness of treatments for scurvy or smallpox, or of salt-based homeopathic treatments. Perhaps one of the best-known experiments was done by Louis Pasteur in 1881 to test his vaccine for sheep anthrax. Jamison writes:
"He was attempting to publicly prove that he had developed an animal anthrax vaccine (which may not have been his to begin with), so he asked for 60 sheep and split them into three groups: 10 would be left entirely alone; 25 would be given his vaccine and then exposed to a deadly strain of the disease; and 25 would be untreated but also exposed to the virus .... [A]ll of the exposed but untreated sheep died, while all of the vaccinated sheep survived healthily."
There are LOTS of examples. But in Jamison's telling, the earliest example of randomization in an experiment within a subject conventionally thought of as economics was actually done by two non-economists working on game theory, psychologist Richard Atkinson and polymath Patrick Suppes, in work published in 1957:
"Atkinson and Suppes (1957), also not economists by training, analyzed different learning models in two-person zero-sum games, and they explicitly “randomly assigned” pairs of subjects into one of three different treatment groups. This is the earliest instance of random assignment in experimental economics, for purposes of comparing treatments, that has been found to date."
As to broader social experiments about the effects of policy interventions, the first one goes back to the 1940s:
"The first clearly and individually randomized social experiment was the Cambridge-Somerville youth study. This was devised by Richard Clarke Cabot, a physician and pioneer in advancing the field of social work. Running from 1942-45, the study randomized approximately 500 young boys who were at risk for delinquency into either a control group or a treatment group, the latter receiving counseling, medical treatment, and tutoring. Results (Powers and Witmer 1951) were highly disappointing, with no differences reported." 
Back in high school, we had to design and carry out our own experiment in a psychology class. I wrote up the same message (a request for some saccharine and meaningless information) on two sets of postcards. One of the sets of postcards was typed; the other was handwritten. I chose the first 60 households in the local phone directory, and sent the postcards out at random. My working hypothesis was that the typewritten notes would get a higher response (perhaps because they would look more "professional"), but actually the handwritten notes got a much higher response (probably because they reeked of high school student). Even at the time, it felt like a silly little experiment to me. But the result felt powerful, nonetheless.
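For what it's worth, here is how one might check whether a difference in response rates from a sample of only 60 postcards clears conventional statistical significance. The response counts below are hypothetical stand-ins, not my actual long-forgotten results; the calculation is a standard two-proportion z-test.

```python
from math import sqrt, erf

def two_proportion_z(successes1, n1, successes2, n2):
    """Two-proportion z-test: could the gap in response rates plausibly be chance?"""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal approximation
    return z, p_value

# Hypothetical counts: 18 of 30 handwritten postcards answered vs. 8 of 30 typed ones.
z, p = two_proportion_z(18, 30, 8, 30)
print(f"z = {z:.2f}, two-sided p ≈ {p:.3f}")
```

With a gap that wide, even 60 postcards is enough for the difference to look statistically meaningful; with a smaller gap, it would not be.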

Monday, July 23, 2018

The Modern Shape-Up Labor Market

I'm taking some family vacation the next 10 days or so. The lake country of northern Minnesota calls. My wife says that I get a distinctively blissful expression when I'm sitting in the back of a canoe with a paddle in my hand. While I'm gone, I've prescheduled a string of posts that look at various things I've been reading or have run across in the last few months that are at least loosely connected to my usual themes of economics and academia.

It’s not unusual to hear predictions that in the future, we will all have opportunities to run our own companies, or that jobs will become a series of freelance contracts. Here’s a representative comment from business school professor Arun Sundararajan (“The Future of Work,” Finance & Development, June 2017, pp. 7-11):
“To avoid further increases in the income and wealth inequality that stem from the sustained concentration of capital over the past 50 years, we must aim for a future of crowd-based capitalism in which most of the workforce shifts from a full-time job as a talent or labor provider to running a business of one—in effect a microentrepreneur who owns a tiny slice of society’s capital."
To me, this description is reminiscent of what used to be called the “shape-up” system of hiring, described by journalist Malcolm Johnson in his Pulitzer Prize-winning articles about crime on the docks of New York City in the late 1940s (Crime on the Labor Front, quotation from pp. 133-35), which is perhaps best remembered today for how it was depicted in the 1954 movie “On the Waterfront.” Johnson described the process by which a longshoreman sought and got a job in this way:

“The scene is any pier along New York’s waterfront. At a designated hour, the longshoremen gather in a semicircle at the entrance to the pier. They are the men who load and unload the ships. They are looking for jobs and as they stand there in a semicircle their eyes are fixed on one man. He is the hiring stevedore and he stands alone, surveying the waiting men. At this crucial moment he possesses the crucial power of economic life or death over them and the men know it. Their faces betray that knowledge in tense anxiety, eagerness, and fear. They know that the hiring boss, a union man like themselves, can accept them or reject them at will. He can hire them or ignore them, for any reason or for no reason at all. Now the hiring boss moves among them, choosing the man he wants, passing over others. He nods or points to the favored ones or calls out their names, indicating that they are hired. For those accepted, relief and joy. The pinched faces of the others reflect bleak disappointment, despair. …

“Under the shape-up, the longshoreman never knows from one day to the next whether he has a job or not. Under the shape-up, he may be hired today and rejected tomorrow, or hired in the morning and turned away in the afternoon. There is no security, no dignity, and no justice in the shape-up. … The shape-up fosters fear. Fear of not working. Fear of incurring the displeasure of the hiring boss.”

You can call it “crowd-based capitalism,” but to a lot of people, the idea of “running a business of one” does not sound attractive. Many people don’t want to apply for a new job every day, or every week, or every month. They don't want to be a "microentrepreneur who owns a tiny slice of society’s capital." They don’t want to be treated as interchangeable cogs, subject to the discretionary power of a modern hiring boss. All workers know that others have the power of economic life and death over them, but many prefer not to have that fact rubbed in their faces every day.

It seems to me that a lot of the concern about the modern labor market isn't over whether wage rates are going up a percentage point or two faster each year. It's about a sense that careers which build skills are harder to find, and that the labor market for many people feels like a modern version of the shape-up. 

Saturday, July 21, 2018

The Chicken Paper Conundrum

Harald Uhlig delivered a talk on "Money and Banking: Some DSGE Challenges" (video here, slides here) at the Nobel Symposium on Money and Banking recently held in Stockholm. He introduces the "Chicken Paper Conundrum," which he attributes to Ed Prescott.

I've definitely read academic papers, as well as listened to policy discussions, which follow this pattern.

Homage: I ran across this in the middle of two long blog posts by John Cochrane at his Grumpy Economist blog (here and here), which summarize and give links to many papers at this conference given by leading macroeconomists. Many have links to video, slides, and sometimes full papers. If you are interested in topics on the cutting edge of macroeconomics, it's well worth your time.

Friday, July 20, 2018

Early Childhood Education Fails Another Randomized Trial

Public programs for pre-K education have a worthy goal: reducing the gaps in educational achievement that manifest themselves in early grades. I find myself rooting for such programs to succeed. But there are now two major randomized control trial studies looking at the results of publicly provided pre-K programs, and neither one finds lasting success. Mark W. Lipsey, Dale C. Farran, and Kelley Durkin report the results of the most recent study in "Effects of the Tennessee Prekindergarten Program on children’s achievement and behavior through third grade" (Early Childhood Research Quarterly, 2018, online version April 21, 2018). 

The Tennessee Voluntary Pre-K Program offers support for families with low incomes. In 2008-2009, the program only had sufficient funding to cover 42% of applicants. Admission to the program was thus decided by a lottery. This selection procedure makes the eyes of academic researchers light up, because it means that there was random selection of who was in the program (the "treatment" group) and who was not (the "control" group). As these students continue into Tennessee public schools, there is follow-up information on how the two groups performed. The result was that students in the pre-K program had a short-term boost in performance, which quickly faded. The abstract of the study says:
"Low-income children (N = 2990) applying to oversubscribed programs were randomly assigned to receive offers of admission or remain on a waiting list. Data from pre-k through 3rd grade were obtained from state education records; additional data were collected for a subset of children with parental consent (N = 1076). At the end of pre-k, pre-k participants in the consented subsample performed better than control children on a battery of achievement tests, with non-native English speakers and children scoring lowest at base-line showing the greatest gains. During the kindergarten year and thereafter, the control children caught up with the pre-k participants on those tests and generally surpassed them. Similar results appeared on the 3rd grade state achievement tests for the full randomized sample – pre-k participants did not perform as well as the control children. Teacher ratings of classroom behavior did not favor either group overall, though some negative treatment effects were seen in 1st and 2nd grade."
The previous major randomized control trial study of an early childhood education program was done on the federal Head Start program. I discussed it here. Lipsey, Farran, and Durkin summarize the finding in this way:
"This study [of Head Start] began in 2002 with a national sample of 5000 children who applied to 84 programs expected to have more applicants than spaces. Children were randomly selected for offers of admission with those not selected providing the control group.The 4-year-old children admitted to Head Start made greater gains across the pre-k year than nonparticipating children on measures of language and literacy, although not on math. However, by the  end of kindergarten the control children had caught up on most achievement outcomes; subsequent positive effects for Head Start participants were found on only one achievement measure at the end of 1st grade and another at the end of 3rd grade. There were no statistically significant effects on social–emotional measures at the end of the pre-k or kindergarten years. A few positive effects appeared in parent reports at the end of the 1st and 3rd grade years, but teacher and child reports in those years showed either null or negative effects."
Of course, the fact that the two major studies of publicly provided pre-K find near-zero results by third grade doesn't prove that such programs never work or can't work. There are many studies of early childhood education programs. The results of the Tennessee and Head Start studies don't rule out the possibility of benefits from narrower programs specifically targeted at children with certain needs. Also, some studies with longer-term follow-up found that although measures of educational performance didn't move much, pre-K programs had longer-term effects on outcomes like high school graduation rates.

But the case for believing that publicly provided pre-K programs will boost long-term educational outcomes for the disadvantaged is not very strong. If positive results were clear-cut and of meaningful size, it seems as if they should have shown up in these major studies.

Some researchers in this area have suggested that interventions earlier than pre-K might be needed to close achievement gaps. For example, some evidence suggests that the black-white educational achievement gap is apparent by age 2. It may be, although the evidence on this point isn't clear, that some of the funding now being spent on pre-K programs would have a bigger payoff if spent on home visits to the parents of very young children. Greater attention to the health status of pregnant women--including both personal health and exposure to environmental risks--might have a substantial payoff for the eventual educational performance of their children, too.

Thursday, July 19, 2018

Conflict Minerals and Unexpected Tradeoffs

The cause seemed worthy, and the policy mild. Militia groups in the Democratic Republic of the Congo (DRC) were taxing and extorting revenue from those who mined minerals like tin, tungsten, and tantalum. Thus, Section 1502 of the Dodd-Frank Act of 2010 required companies to disclose the source of their purchases of such minerals. The hope was to reduce funding for the militias, and thus to benefit people in the area. Human rights advocacy groups supported the idea. The Good Intentions Paving Company was up and running.

But tradeoffs are no respecters of good intentions. Dominic Parker describes some research on the tradeoffs that occurred in "Conflict Minerals or Conflict Policies? New research on the unintended consequences of conflict-mineral regulation" (PERC Reports, Summer 2018, 37:1, pp. 36-40). Parker writes:
"First, Section 1502 initially caused a widespread, de facto boycott on minerals from the eastern DRC. Rather than engaging in costly due diligence to identify the sources of minerals—and risking being considered a supporter of rebel violence—some U.S. companies simply stopped buying minerals from the region. This de facto boycott had the intended effect of reducing funding to militias, but its unintended effect was to undercut families who depended on mining for income and access to health care. The decreases in mineral production rocked an artisanal mining sector that had supported an estimated 785,000 miners prior to Dodd-Frank, with spillovers from their economic activity thought to affect millions.
"Second, the legislation changed the relative value of controlling certain mining areas from the perspective of militias, who changed their tactics accordingly. Before the boycott, the militias could maximize revenue by taxing tin, tungsten, and tantalum at or near mining sites. They therefore had an interest in keeping mining areas productive and relatively safe for miners. After the legislation, the militias sought to make up for reduced revenue in other ways. According to the evidence, they started to loot civilians who were not necessarily involved in mining. They also started to fight for control over other commodities, including gold, which was in effect exempt from the regulation."
One result of the economic losses in the area was a sharp rise in infant mortality rates: "The combined evidence suggests that Dodd-Frank increased the probability of infant deaths (that is, babies who died before reaching their first birthday) from 2010 to 2013 for children who lived part of or all of their first year in villages targeted by the legislation and mining ban. The most conservative estimate is that the legislation increased infant mortality from a baseline average of 60 deaths per 1,000 births to 146 deaths per 1,000 births over this period—a 143 percent increase."


The level of violent conflict affecting civilians actually seemed to rise, rather than fall: "At the end of 2010, after the passage of Dodd-Frank, looting in the territories targeted by the mining policies became more common and remained that way through much of 2011 and 2012, when our study period ended. ... The incidence of violence against civilians also increased in the policy regions after the legislation ..."

One economic insight here is the "stationary bandit" theory that when a bandit remains in one location, there are incentives for the bandit to keep local workers and companies safe and productive.

The political insights are fuzzier. One can't rule out that if the Dodd-Frank provisions had been better thought-out or better targeted, maybe the effects would have been better, too. Or maybe this is a case where long-run benefits of these provisions will outweigh short-run costs. But it's also possible that an alternative strategy for bolstering the economy and human rights in the area might have worked better. And it's quite clear that those who supported this particular conflict mineral policy did not predict or acknowledge that their good intentions could have these adverse consequences.

Tuesday, July 17, 2018

On Preferring A to B, While Also Preferring B to A

"In the last quarter-century, one of the most intriguing findings in behavioral science goes under the unlovely name of `preference reversals between joint and separate evaluations of options.' The basic idea is that when people evaluate options A and B separately, they prefer A to B, but when they
evaluate the two jointly, they prefer B to A." Thus, Cass R. Sunstein begins his interesting and readable paper "On preferring A to B, while also preferring B to A" (Rationality and Society 2018,  first published July 11, 2018, subscription required)

Here is one such problem that has been studied: 

Dictionary A: 20,000 entries, torn cover but otherwise like new
Dictionary B: 10,000 entries, like new

"When the two options are assessed separately, people are willing to pay more for B; when they are assessed jointly, they are willing to pay more for A." A common explanation is that when assessed separately, people have no basis for knowing if 10,000 or 20,000 words is a medium or large number for a dictionary, so they tend to focus on "new" or "torn cover." But when comparing the two, people focus on the number of words.

Here's another example, which (as Sunstein notes) involves "an admittedly outdated technology":

CD Changer A: Can hold 5 CDs; Total Harmonic Distortion = 0.003%
CD Changer B: Can hold 20 CDs; Total Harmonic Distortion = 0.01%


"Subjects were informed that the smaller the Total Harmonic Distortion, the better the sound quality. In separate evaluation, they were willing to pay more for CD Changer B. In joint evaluation, they were willing to pay more for CD Changer A." When looking at them separately, holding 20 CDs seems more more important. When comparing them, the sound quality in Total Harmonic Distortion seems more important--although most people have no basis for knowing if this difference ins sound quality would be meaningful to their ears or not.

And one more example:

Baseball Card Package A: 10 valuable baseball cards, 3 not-so-valuable baseball cards
Baseball Card Package B: 10 valuable baseball cards


"In separate evaluation, inexperienced baseball card traders would pay more for Package B than for Package A. In joint evaluation, they would pay more for Package A (naturally enough). Intriguingly, experienced traders also show a reversal, though it is less stark." When comparing them, choosing A is obvious. But without comparing them, there is something about getting all valuable cards, with no less valuable cards mixed in, which seems attractive.

And yet another example:

Congressional Candidate A: Would create 5000 jobs; has been convicted of a misdemeanor
Congressional Candidate B: Would create 1000 jobs; has no criminal convictions

"In separate evaluation, people rated Candidate B more favorably, but in joint evaluation they preferred candidate A." When looking at them separately, the focus is on criminal history; when looking at them together, the focus is on jobs.
And one more: 

Cause A: Program to improve detection of skin cancer in farm workers
Cause B: Fund to clean up and protect dolphin breeding locations

"When people see the two in isolation, they show a higher satisfaction rating from giving to Cause B, and they are willing to pay about the same. But when they evaluate them jointly, they show a much higher satisfaction rating from A, and they want to pay far more for it." The explanation here seems to be a form of category-bound thinking, where just thinking about the dolphins generates a stronger visceral response, but when comparing the two directly, the human cause weighs more heavily. 

One temptation, in these and many other examples given by Sunstein, is to conclude that joint evaluation must be more meaningful, because there is more context for comparison. But he argues strongly that this conclusion is unwarranted. He writes: 
"In cases subject to preference reversals, the problem is that in separate evaluation, some characteristic of an option is difficult or impossible to evaluate—which means that it will not receive the attention that it may deserve. The risk, then, is that a characteristic that is important to welfare or actual experience will be ignored. In joint evaluation, the problem is that the characteristic that is evaluable may receive undue attention. The risk, then, is that a characteristic that is unimportant to welfare or to actual experience will be given excessive weight."
In addition, life does not usually give us a random selection of choices and characteristics for our limited attention spans to consider. Instead, choices are defined and described by sellers of products, or by politicians selling policies. They choose how to frame issues. Sunstein writes: 
"Sellers can manipulate choosers in either separate evaluation or joint evaluation, and the design of the manipulation should now be clear. In separate evaluation, the challenge is to show choosers a characteristic that they can evaluate, if it is good (intact cover), and to show them a characteristic  that they cannot evaluate, if it is not so good (0.01 Total Harmonic Distortion). In joint evaluation, the challenge is to allow an easy comparison along a dimension that seems self-evidently important, whether or not the difference along that dimension matters to experience or to people’s lives. ... Sellers (and others) can choose to display a range of easily evaluable characteristics (appealing ones) and also display a range of others that are difficult or impossible to assess (not so appealing ones). It is well known that some product attributes are “shrouded,” in the sense that they are hidden from view, either because of selective attention on the part of choosers or because of deliberative action on the part of sellers." 
We often think of ourselves as having a set of personal preferences that are fundamental to who we are--part of our personality and self. But in many contexts, people (including me and you) can be influenced by the framing and presentation of choices. Whether the choice is between products or politicians, beware.

Monday, July 16, 2018

Carbon Dioxide Emissions: Global and US

US emissions of carbon have been falling, while nations in the Asia-Pacific region have already become the main contributors to the rise in atmospheric carbon dioxide. These and other conclusions are apparent from the BP Statistical Review of World Energy (June 2018), a useful annual compilation of global trends in energy production, consumption, and prices. 

Here's a table from the report on carbon emissions (I clipped out columns showing annual data for the years from 2008-2016). The report is careful to note: "The carbon emissions above reflect only those through consumption of oil, gas and coal for combustion related activities ... This does not allow for any carbon that is sequestered, for other sources of carbon emissions, or for emissions of other greenhouse gases. Our data is therefore not comparable to official national emissions data." But the data does show some central plot-lines in the carbon emissions story.



A few thoughts: 

1) The US has often had the biggest declines in the world in carbon emissions in absolute magnitudes in recent years. Granted, this is in part because the quantity of US carbon emissions is so large that even a small percentage drop is large in absolute size. Still, better down than up. The BP report notes: "This is the ninth time in this century that the US has had the largest decline in emissions in the world. This also was the third consecutive year that emissions in the US declined, though the fall was the smallest over the last three years. ... Carbon emissions from energy use from the US are the lowest since 1992, the year that the UNFCCC came into existence."

2) Anyone who follows this topic at all knows that China leads the world in carbon emissions. Still, it's striking to me that China accounts for 27.6% of world carbon emissions, compared to 15.2% for the US. On a regional basis, the Asia Pacific region--led by China, India, and Japan, but also with substantial contributions from Indonesia, South Korea, and Australia--by itself accounts for nearly half of global carbon emissions. If you're concerned about carbon emissions, you need to think about proposals that would have strong effects on China and this region. 

3) Carbon emissions from the three regions of South and Central America, the Middle East, and Africa add up to 13.8% of the global total, so their combined total is less than that of either the United States or the European/Eurasian economies. However, if carbon emissions for this group of three regions keep growing at about 3% per year while US emissions keep falling at 1% per year, the group's emissions will outstrip the US in a few years (a rough back-of-the-envelope calculation appears below).
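Here is a minimal back-of-the-envelope sketch of that crossover calculation in Python. It uses the 15.2% and 13.8% shares cited above and the assumed growth rates of +3% and -1% per year; the actual crossover time of course depends on how those rates evolve.

```python
import math

# Illustrative crossover calculation (not taken from the BP report).
us_share = 15.2             # US share of global carbon emissions (percent)
three_regions_share = 13.8  # South/Central America + Middle East + Africa (percent)

g_regions = 0.03   # assumed annual growth of the three regions' emissions
g_us = -0.01       # assumed annual decline of US emissions

# Solve 13.8 * (1.03)^t = 15.2 * (0.99)^t for t.
t = math.log(us_share / three_regions_share) / math.log((1 + g_regions) / (1 + g_us))
print(f"Crossover in roughly {t:.1f} years")  # about 2.4 years
```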

4) In an interconnected global economy, it's worth remembering that the country where energy is used doesn't always reflect where the final product is consumed. If China produces something through an energy-intensive process that is later consumed in the US, it counts as energy use in China--but both countries play a role.

For a more US-specific view, here's a table from the Monthly Energy Review (June 2018) published by the US Energy Information Administration, showing total US carbon emissions, emissions per capita, and emissions relative to GDP, going back to 1950.


A few comments: 

1) US carbon emissions on this measure peaked around 2007 and have generally declined since then. An underlying pattern here is a reduction in the use of coal and a rise in the use of natural gas, along with greater use of renewables. US emissions are now back to the levels of the late 1980s and early 1990s. 

2) Carbon emissions per capita in the US economy have fallen back to the level of the early 1950s. 

3) Carbon emissions relative to GDP produced have been falling pretty steadily for the almost 70 years shown in this data. 


Friday, July 13, 2018

Time to Reform Unemployment Insurance

The best time to fix your roof is when the weather is sunny and warm, not when it's rainy, cold--and actually leaking. In a similar spirit, the best time to fix unemployment insurance is when the unemployment rate is low. Conor McKay, Ethan Pollack, and Alastair Fitzpayne offer some ideas in "Modernizing Unemployment Insurance for the Changing Nature of Work" (Aspen Institute, January 2018). They write:

"The UI [Unemployment Insurance] program — which is overseen by the U.S. Department of Labor and administered by the states — collects payroll taxes from employers to insure workers against unexpected job loss. Eligible workers who become unemployed through no fault of their own can receive temporary income support while they search for reemployment. In 2016, the program paid $32 billion to 6.2 million out-of-work individuals. UI is one of America’s most important anti-poverty programs for individuals and families, serving as a key counter-cyclical stabilizer for the broader economy. In 2009, the worst year of the Great Recession, the UI program kept 5 million Americans out of poverty, and prevented an estimated 1.4 million foreclosures between 2008 and 2012."

So what's wrong with UI as it stands? The system was designed for full-time workers who had been with an employer for some time and then lost their full-time jobs. It doesn't do a good job of covering independent contractors, freelancers, short-timers, and part-timers. Moreover, if you are receiving unemployment insurance and you take a freelance or part-time or self-employed job, your benefits usually stop. The share of unemployed workers who are actually covered by unemployment insurance has been falling over time.

Many of the reforms mentioned here have been kicked around before, but it's still useful to have them compiled in one place.

The main conceptual difficulty is that unemployment insurance needs people to pay into the system on a regular basis, but to withdraw money only when it's really needed. Some legislative creativity may be needed here. For example, independent and freelance workers could pay unemployment insurance premiums while employed, and if they did so for some period of time (maybe a year or more), they could become eligible for some level of unemployment insurance payouts. Other reforms could offer some protection to those who hold multiple jobs but are not currently eligible for unemployment insurance from any single employer, and at least some protection to self-employed and temporary workers.

Alternatively, nontraditional workers could be allowed to set up tax-free savings accounts that they would only use if they became unemployed. Such an account could be combined with a retirement account: basically, a worker with a short-term financial need could withdraw some money from the account, but only up to a certain maximum--while the rest stayed in the retirement account.

Finally, it seems wise not to be too quick to cut off unemployment benefits for those who try to work their way back with a part-time job or by starting their own company. Or unemployment insurance could be designed to encourage workers to conduct a long-distance job search and consider moving to another city or state. 

The OECD Employment Outlook 2018 that was just published includes a chapter on unemployment insurance issues across high-income countries. The problem of limited coverage of unemployment insurance is common.
"Across 24 OECD countries, fewer than one-in-three unemployed, and fewer than one-in-four jobseekers, receive unemployment benefits on average. Coverage rates for jobseekers are below 15% in Greece, Italy, Poland, Slovak Republic, Slovenia and the United States. Austria, Belgium and Finland show the highest coverage rates in 2016, ranging between approximately 45% and 60%: In countries with the highest coverage in the OECD, at least four-in-ten jobseekers still report not receiving an unemployment benefit."
Unemployment insurance systems differ quite a bit across countries: in the qualifications to receive benefits (like what kind of job you previously had, for how long, and how long you have been unemployed from that job), the level of benefits, time limits on benefits, whether you are required to get training or some kind of job search assistance while unemployed--and in how all of these factors were adjusted by political systems during and after the rise in unemployment during the Great Recession.

But the OECD report emphasizes that unemployment insurance doesn't just help those who are unemployed--it also provides a mechanism for government to focus on what kinds of assistance might help the unemployed get jobs again. The chapter in the OECD report ends (citations omitted):
"[U]nemployment benefits provide the principal instrument for linking jobless people to employment services and active labour market programmes to improve their job prospects. In the absence of accessible unemployment benefits, it can be difficult to reach out to those facing multiple barriers to employment, who therefore risk being left behind. In these cases, achieving good benefit coverage can be essential to make an activation strategy effective and sustainable. For this reason the new OECD Jobs Strategy calls for clear policy action to extend access to unemployment benefit within a rigorously-enforced `mutual obligation' framework, in which governments have the duty to provide jobseekers with benefits and effective services to enable them to find work and, in turn, beneficiaries have to take active steps to find work or improve their employability ..." 

Thursday, July 12, 2018

China Stops Importing Waste Plastic

For a few decades now, the US and Europe have been managing their plastic waste by shipping it to China and other countries in east Asia for recycling and reuse. But in recent years, China has been tightening up what it is willing to import, accepting only plastic waste that is uncontaminated. In 2017, China announced that it was banning future imports of nonindustrial plastic waste--that is, the plastic waste generated by households.

Amy L. Brooks, Shunli Wang, and Jenna R. Jambeck look at some consequences in "The Chinese import ban and its impact on global plastic waste trade," published in Science Advances (June 20, 2018). Here's a figure showing patterns of exports and imports of plastic waste, in quantities and values, based on UN data. In theory, of course, the lines for imports and exports should match exactly, but the data is collected from different countries and errors of classification and inclusion do creep in. Still, the overall pattern of a dramatic rise, leveling off in the last few years as China imposed additional restrictions, is clear.
Brooks, Wang, and Jambeck summarize in this way:
"The rapid growth of the use and disposal of plastic materials has proved to be a challenge for solid waste management systems with impacts on our environment and ocean. While recycling and the circular economy have been touted as potential solutions, upward of half of the plastic waste intended for recycling has been exported to hundreds of countries around the world. China, which has imported a cumulative 45% of plastic waste since 1992, recently implemented a new policy banning the importation of most plastic waste, begging the question of where the plastic waste will go now. We use commodity trade data for mass and value, region, and income level to illustrate that higher-income countries in the Organization for Economic Cooperation have been exporting plastic waste (70% in 2016) to lower-income countries in the East Asia and Pacific for decades. An estimated 111 million  metric tons of plastic waste will be displaced with the new Chinese policy by 2030. As 89% of historical exports consist of polymer groups often used in single-use plastic food packaging (polyethylene, polypropylene, and polyethylene terephthalate), bold global ideas and actions for reducing quantities of nonrecyclable materials, redesigning products, and funding domestic plastic waste management are needed."
The pattern of high-income countries sending their recycling to lower- and middle-income countries is common. The share of plastic waste going to China seems to actually be greater than the 45% mentioned above. The authors write: "China has imported 106 million MT of plastic waste, making up 45.1% of all cumulative imports. Collectively, China and Hong Kong have imported 72.4% of all plastic waste. However, Hong Kong acts as an entry port into China, with most of the plastic waste imported to Hong Kong (63%) going directly to China as an export in 2016." 

I don't have a solution here. The authors write: "Suggestions from the recycling industry demonstrate that, if no adjustments are made in solid waste management, and plastic waste management in particular, then much of the waste originally diverted from landfills by consumers paying for a recycling service will ultimately be landfilled." Other nations of east Asia don't have the capacity to absorb this flow of plastic waste, at least not right now. There doesn't seem to be much market for this type of plastic waste in the US or Europe, at least not right now. Substitutes for these plastics that either degrade or recycle more easily do not seem to be immediately available. But there is a mountain of plastic waste coming, so we will have a chance to see how the forces of supply, demand, and regulation deal with it. 

For short readable surveys of the study, I can recommend Ellen Airhart, "China Won't Solve the World's Plastics Problem Anymore," in Wired (June 20, 2018) and Jason Daley, "China’s Plastic Ban Will Flood Us With Trash," in Smithsonian (June 21, 2018).

Wednesday, July 11, 2018

When Growth of US Education Attainment Went Flat

Human capital in general, and educational attainment in particular, is one of the key ingredients for economic growth. But the US had a period of about 20 years, for those born through most of the 1950s and 1960s, when educational attainment barely budged. Urvi Neelakantan and Jessie Romero provide an overview in "Slowing Growth in Educational Attainment," an Economic Brief written for the Federal Reserve Bank of Richmond (July 2018, EB18-07).

Here's a figure showing years of schooling for Americans going back to the 1870s. You can see the steady rise for both men and women up until the cohorts born around 1950, when the educational gains for women slow down and those for men go flat.


In their essay, Neelakantan and Romero argue that this strengthens the case for improving K-12 education, and offer some thoughts. Here are a few related points I would emphasize.

1) Lots of factors affect productivity growth for an economy. But rapid US education growth starting back in the 19th century has been tied to later US economic growth. And it's probably not just a coincidence that when those born around 1950 were entering the workforce in the 1970s, there was a sustained slump in productivity that lasted about 20 years--into the early 1990s.

2) One reason for the rise in inequality of incomes that started in the late 1970s is that the demand for high-skilled workers was growing faster than the supply. For example, the wage gap between college-educated workers and workers with no more than a high school education increased substantially. As Neelakantan and Romero write: "This slowdown in skill acquisition, combined with growing demand for high-skill workers, contributed to a large increase in the `college premium' — the higher wages and earnings of college graduates relative to workers with only high school degrees." When educational attainment went flat, it also helped to create the conditions for US inequality to rise.

3) When a society has a period of a couple of decades where educational attainment doesn't rise, there's no way to go back later and "fix" it. The consequences like slower growth and higher inequality just march on through time. Similarly, the current generation of students--all of them, K-12, college and university--will be the next generation of US workers.

Monday, July 9, 2018

Three Questions for the Antitrust Moment

There seems to be a widespread sense that many problems of the US economy are linked to a lack of dynamism and competition, and that a surge of antitrust enforcement might be a part of the answer. Here are three somewhat separable questions to ponder in addressing this topic. 

1) Is rising concentration a genuine problem in most of the economy, or only in a few niches?

The evidence does suggest that concentration has risen in many industries. However, it also suggests that for most industries the rise in concentration is small and within recent historical parameters. For example, here's a figure from an article by Tim Sablik, "Are Markets Too Concentrated?" published in Econ Focus, from the Federal Reserve Bank of Richmond (First Quarter 2018, pp. 10-13). The HHI is a standard measure of market concentration: it is calculated by taking the market share of each firm in an industry, squaring it, and then summing the results. Thus, a monopoly with 100% of the market would have an HHI of 100² = 10,000. An industry with, say, two leading firms that each have 30% of the market and four other firms with 10% each would have an HHI of 2,200 (a quick sketch of the calculation appears below). The average HHI across industries has indeed risen--back to the level that prevailed in the late 1970s and early 1980s.
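Here is a minimal sketch of the HHI arithmetic in Python, using only the illustrative market shares from the paragraph above (these are not figures from Sablik's article):

```python
def hhi(shares_percent):
    """Herfindahl-Hirschman Index: sum of squared market shares, with shares in percent."""
    return sum(s ** 2 for s in shares_percent)

# A monopoly holding 100% of the market.
print(hhi([100]))                     # 10000

# Two leading firms with 30% each plus four firms with 10% each.
print(hhi([30, 30, 10, 10, 10, 10]))  # 2200
```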



A couple of other points are worth noting:

In some of the industries where concentration has risen, recent legislation is clearly one of the important underlying causes. For example, healthcare providers and insurance firms became more concentrated in the aftermath of restrictions and rules imposed by the Patient Protection and Affordable Care Act of 2010. The US banking sector became more concentrated in the aftermath of the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010. In both cases, supporters of the bill saw additional concentration as a useful tool for seeking to achieve the purported benefits of the legislation.

The rise in bigness that seems to bother people the most is the dominance of Apple, Alphabet, Amazon, Facebook, and Microsoft. The possibility that these firms raise anticompetitive issues seems to me a very legitimate concern. But it also suggests that the competition issues of most concern apply mostly to a relatively small number of firms in a relatively small number of tech-related industries.

2) Is rising concentration the result of pro-competitive, productivity-raising actions that benefit consumers, or anti-competitive actions that hurt consumers? 

The general perspective of US antitrust law is that there is no reason to hinder or break up a firm that achieves large size and market domination by providing innovative and low-cost products for consumers. But if a large firm is using its size to hinder competition or to keep prices high, then the antitrust authorities can have reason to step in. So which is it? Sablik writes:

"Several recent studies have attempted to determine whether the current trend of rising concentration is due to the dominance of more efficient firms or a sign of greater market power. The article by Autor, Dorn, Katz, Patterson, and Van Reenen lends support to the Chicago view, finding that the industries that have become more concentrated since the 1980s have also been the most productive. They argue that the economy has become increasingly concentrated in the hands of `superstar firms,' which are more efficient than their rivals." 
"The tech sector in particular may be prone to concentration driven by efficiency. Platforms for search or social media, for example, become more valuable the more people use them. A social network, like a phone network, with only two people on it is much less valuable than one with millions of users. These network effects and scale economies naturally incentivize firms to cultivate the biggest platforms — one-stop shops, with the winning firm taking all, or most, of the market. Some economists worry these features may limit the ability of new firms to contest the market share of incumbents.  ...  Of course, there are exceptions. Numerous online firms that once seemed unstoppable have since ceded their dominant position to competitors. America Online, eBay, and MySpace have given way to Google, Amazon, Facebook, and Twitter."
There is also international evidence that leading-edge firms in many industries are pulling ahead of others in their industry in terms of productivity growth. There seems to me reason for concern that well-established firms in industries with these network effects have found a way to establish a position that makes it hard--although clearly not impossible--for new competitors to enter. For example, Federico J. Díez, Daniel Leigh, and Suchanan Tambunlertchai have published "Global Market Power and its Macroeconomic Implications" (IMF Working Paper WP/18/137, June 2018). They write:

"We estimate the evolution of markups of publicly traded firms in 74 economies from 1980-2016. In advanced economies, markups have increased by an average of 39 percent since 1980. The increase is broad-based across industries and countries, and driven by the highest markup firms in each economic sector. ... Focusing on advanced economies, we investigate the relation between markups and investment, innovation, and the labor share at the firm level. We find evidence of a non-monotonic relation, with higher markups being correlated initially with increasing and then with decreasing investment and innovation rates. This non-monotonicity is more pronounced for firms that are closer to the technological frontier. More concentrated industries also feature a more negative relation between markups and investment and innovation."
In other words, firms may at first achieve their leadership and higher profits with a burst of innovation, but over time, the higher profits are less associated with investment and innovation.


An interrelated but slightly different argument is that the rise in concentration tells us less about the behavior of large firms and more about a slowdown in the arrival of new firms. For example, it's no surprise that concentration was lower in the 1990s, with the rise of the dot-com companies, and it's no surprise that concentration rose again after that episode. Jason Furman and Peter Orszag explore these issues in "Slower Productivity and Higher Inequality: Are They Related?" (June 2018, Peterson Institute for International Economics, Working Paper 18-4). They argue that the rise of "superstar" firms has been accompanied by slower productivity growth and more dispersion of wages, but that the underlying cause is a drop in the start-up rates of new firms and in the dynamism of the US economy. They write:
"Our analysis is that there is mounting evidence that an important common cause has contributed to both the slowdown in productivity growth and the increase in inequality. The ultimate cause is a reduction in competition and dynamism that has been documented by Decker et al (2014, 2018) and many others. This reduction is partly a “natural” reflection of trends like the increased importance of network externalities and partly a “manmade” reflection of policy choices, like increased regulatory barriers to entry. These increased rigidities have contributed to the rise in concentration and increased dispersion of firm-level profitability. The result is less innovation, either through a straightforward channel of less investment or through broader factors such as firms not wanting to cannibalize on their own market shares. At the same time, these channels have also contributed to rising inequality in a number of different ways ..." 
Here are a couple more articles I found useful in thinking about these issues, and in particular about the cases of Google and Amazon. 

Charles Duhigg wrote "The Case Against Google" in the New York Times Magazine (February 20, 2018). He notes that a key issue in antitrust enforcement is whether a large firm is actively undermining potential competitors, and offers some examples of small companies pursuing legal action because they feel undermined. If Google is using its search functions and business connections to disadvantage firms that are potential competitors, then that's a legitimate antitrust issue. Duhigg also argues that if Microsoft had not been sued for this type of anticompetitive behavior about 20 years ago, it might have killed off Google.

The argument that Google uses its search functions to disadvantage competitors reminds me of the longstanding antitrust arguments about computer reservation systems in the airline industry. Going back to the late 1980s, airlines like United and American built their own computer reservation systems, which were then used by travel agents. While in theory the systems listed all flights, the airlines had a tendency to list their own flights more prominently, and there was some concern that they could also adjust prices for their own flights more quickly. Such lawsuits continue up to the present. The idea that a firm can use search functions to disadvantage competitors, and that such behavior is anticompetitive under certain conditions, is well-accepted in existing antitrust law.

As Duhigg notes, the European antitrust authorities have found against Google. "Google was ordered to stop giving its own comparison-shopping service an illegal advantage and was fined an eye-popping $2.7 billion, the largest such penalty in the European Commission’s history and more than twice as large as any such fine ever levied by the United States." As you might imagine, the case remains under vigorous appeal and dispute.

As a starting point for thinking about Amazon and anticompetitive issues, I'd recommend Lina M. Khan's article on "Amazon's Antitrust Paradox" (Yale Law Journal, January 2017, pp. 710-805). From the abstract:
"Amazon is the titan of twenty-first century commerce. In addition to being a retailer, it is now a marketing platform, a delivery and logistics network, a payment service, a credit lender, an auction house, a major book publisher, a producer of television and films, a fashion designer, a hardware manufacturer, and a leading host of cloud server space. Although Amazon has clocked staggering growth, it generates meager profits, choosing to price below-cost and expand widely instead. Through this strategy, the company has positioned itself at the center of e-commerce and now serves as essential infrastructure for a host of other businesses that depend upon it. Elements of the firm’s structure and conduct pose anticompetitive concerns—yet it has escaped antitrust scrutiny.
"This Note argues that the current framework in antitrust—specifically its pegging competition to `consumer welfare,' defined as short-term price effects—is unequipped to capture the architecture of market power in the modern economy. We cannot cognize the potential harms to competition posed by Amazon’s dominance if we measure competition primarily through price and output. Specifically, current doctrine underappreciates the risk of predatory pricing and how integration across distinct business lines may prove anticompetitive. These concerns are heightened in the context of online platforms for two reasons. First, the economics of platform markets create incentives for a company to pursue growth over profits, a strategy that investors have rewarded. Under these conditions, predatory pricing becomes highly rational—even as existing doctrine treats it as irrational and therefore implausible. Second, because online platforms serve as critical intermediaries, integrating across business lines positions these platforms to control the essential infrastructure on which their rivals depend. This dual role also enables a platform to exploit information collected on companies using its services to undermine them as competitors."
This passage summarizes the conceptual issue.  In effect, it argues that Amazon may be good for consumers (at least in the short-run of some years), but still have potential "harms for competition." The idea that antitrust authorities should act in a way that hurts consumers in the short run, on the grounds that it will add to competition that will benefit consumers in the long run, would be a stretch for current antitrust doctrine--and if applied too broadly could lead to highly problematic results. Khan's article is a good launching-pad for that discussion.

3) Should bigness be viewed as bad for political reasons, even if it is beneficial for consumers?

The touchstone of antitrust analysis has for some decades now been whether consumers benefit. Other factors like whether workers lose their jobs or small businesses are driven into bankruptcy do not count. Neither does the potential for political clout being wielded by large firms. But the argument that antitrust should go beyond efficiency that benefits consumers has a long history, and seems to be making a comeback.

Daniel A. Crane discusses these issues in "Antitrust’s Unconventional Politics: The ideological and political motivations for antitrust policy do not neatly fit the standard left/right dichotomy," appearing in Regulation magazine (Summer 2018, pp. 18-22).
"Although American antitrust policy has been influenced by a wide variety of ideological schools, two influences stand out as historically most significant to understanding the contemporary antitrust debate. The first is a Brandeisian school, epitomized in the title of Louis Brandeis’s 1914 essay in Harper’s Weekly, “The Curse of Bigness.” Arguing for `regulated competition' over `regulated monopoly,' he asserted that it was necessary to `curb[...] physically the strong, to protect those physically weaker' in order to sustain industrial liberty. He evoked a Jeffersonian vision of a social-economic order organized on a small scale, with atomistic competition between a large number of equally advantaged units. His goals included the economic, social, and political. ... The Brandeisian vision held sway in U.S. antitrust from the Progressive Era through the early 1970s, albeit with significant interruptions. ...
"The ascendant Chicago School of the 1960s and 1970s threw down the gauntlet to the Brandeisian tendency of U.S. antitrust law. In an early mission statement, Bork and Ward Bowman characterized antitrust history as `vacillat[ing] between the policy of preserving competition and the policy of preserving competitors from their more energetic and efficient rivals,' the latter being an interpretation of the Brandeis School. Chicagoans argued that antitrust law should be concerned solely with economic efficiency and consumer welfare. `Bigness' was no longer necessarily a curse, but often the product of superior efficiency. Chicago criticized Brandeis’s `sympathy for small, perhaps inefficient, traders who might go under in fully competitive markets.' Preserving a level playing field meant stifling efficiency to enable market participation by the mediocre. Beginning in 1977–1978, the Chicago School achieved an almost complete triumph in the Supreme Court, at least in the limited sense that the Court came to adopt the economic efficiency/consumer welfare model as the exclusive or near exclusive goal of antitrust law ..."
As Crane points out, the intellectual currents here have been entangled over time, reflecting our tangled social views of big business. The Roosevelt administration trumpeted the virtues of small business, until it decided that large consolidated firms would be better at getting the US economy out of the Great Depression and fighting World War II. After World War II, there was a right-wing fear that large consolidated firms were the pathway to a rise of government control over the economy and Communism, and Republicans pushed for more antitrust. In the modern economy, we are more likely to view unsuccessful firms as needing support and subsidy, and successful firms as having in some way competed unfairly. One of the reasons for focusing antitrust policy on consumer benefit was that it seemed clearly preferable to a policy that seemed focused on penalizing success and subsidizing weakness.

The working assumption of current antitrust policy is that no one policy can (or should) try to do everything. Yes, encouraging more business dynamism and start-ups is a good thing. Yes, addressing concerns about workers who lose their jobs or companies that get shut down is a good thing. Yes, certain rules and restrictions on the political power of corporations are a good thing. But in the conventional view (to which I largely subscribe), antitrust is just one policy. It should focus on consumer welfare and on specific anticompetitive behaviors by firms, not become a sort of blank check for government to butt in and micromanage successful firms.