To get a sense of what "research design" means in this area, consider some examples. Imagine that you want to know whether a student does better by attending a public charter school. If the school is oversubscribed and holds a lottery (as often happens), then you can compare those attending the charter with those who applied but were not chosen in the lottery. Does being surrounded by high-quality peers help your education? You can look at students who were accepted to institutions like Harvard and MIT but chose not to attend, and compare them with the students who were accepted and did choose to attend. Of course, these kinds of comparisons have to be done with appropriate statistical care. But their results are much more plausibly interpreted as causal, not just as a set of correlations. Here are some comments from Angrist in the interview that caught my eye.
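To make the lottery logic concrete, here is a minimal sketch with simulated data (the sample size, enrollment rate, and effect size are all invented): compare everyone who won a lottery seat with everyone who lost, then scale that gap by the difference in actual enrollment rates to recover the effect of attending. This illustrates only the mechanics, not the estimator from any particular study.

```python
# Hypothetical lottery-based comparison with simulated data; all numbers are made up.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5000

# 'offer' is the randomized lottery result, 'attend' is actual charter enrollment
# (not everyone offered a seat enrolls), 'score' is a later test score.
offer = rng.integers(0, 2, n)                      # randomized by the lottery
attend = (offer == 1) & (rng.random(n) < 0.8)      # assume 80% of winners enroll
score = 0.2 * attend + rng.normal(size=n)          # true effect of attending set to 0.2

df = pd.DataFrame({"offer": offer, "attend": attend.astype(int), "score": score})

# Intention-to-treat effect: compare everyone offered a seat with everyone not offered,
# regardless of whether they actually enrolled. Randomization makes this comparison fair.
itt = df.loc[df.offer == 1, "score"].mean() - df.loc[df.offer == 0, "score"].mean()

# Wald-style scaling: divide the ITT by the gap in enrollment rates to back out
# the effect of actually attending (the lottery offer plays the role of an instrument).
first_stage = df.loc[df.offer == 1, "attend"].mean() - df.loc[df.offer == 0, "attend"].mean()
print(f"ITT: {itt:.3f}, effect of attending (Wald): {itt / first_stage:.3f}")
```

With real data the comparison would also involve covariates and standard errors; the point here is only that randomization is what makes the winner-versus-loser comparison fair.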
Peer Effects in High School?
I think people are easily fooled by peer effects. Parag Pathak, Atila Abdulkadiroglu, and I call it "the elite illusion." We made that the title of a paper. I think it's a pervasive phenomenon. You look at the Boston Latin School, or if you live in Northern Virginia, there's Thomas Jefferson High School for Science and Technology. And in New York, you have Brooklyn Tech and Bronx Science and Stuyvesant.
And so people say, "Look at those awesome children, look how well they did." Well, they wouldn't get into the selective school if they weren't awesome, but that's distinct from the question of whether there's a causal effect. When you actually drill down and do a credible comparison of students who are just above and just below the cutoff, you find out that elite performance is indeed illusory, an artifact of selection. The kids who go to those schools do well because they were already doing well when they got in, but there's no effect from exposure to higher-achieving peers.
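The comparison of students "just above and just below the cutoff" is a regression discontinuity design. A minimal sketch with simulated data (the exam scale, cutoff, bandwidth, and coefficients are all made up) shows how a large naive gap between admitted and rejected students can coexist with essentially no jump at the cutoff:

```python
# Hypothetical regression discontinuity sketch with simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 10000

entrance = rng.normal(70, 10, n)              # entrance exam score (running variable)
admitted = (entrance >= 75).astype(int)       # admission determined by a sharp cutoff
# Later outcome depends on baseline ability (proxied by the entrance score), NOT on admission:
outcome = 0.5 * entrance + rng.normal(0, 5, n)

df = pd.DataFrame({"entrance": entrance, "admitted": admitted, "outcome": outcome})

# Naive comparison: admitted students look far better, purely through selection.
naive = df.groupby("admitted")["outcome"].mean().diff().iloc[-1]

# RD comparison: local linear regression in a window around the cutoff,
# letting the slope differ on each side and reading off the jump at the cutoff.
window = df[(df.entrance > 70) & (df.entrance < 80)].copy()
window["centered"] = window.entrance - 75
rd = smf.ols("outcome ~ admitted + centered + admitted:centered", data=window).fit()

print(f"naive gap: {naive:.2f}, RD estimate at the cutoff: {rd.params['admitted']:.2f}")
```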
How Much Does Attending a Selective College Matter?

I teach undergrad and grad econometrics, and one of my favorite examples for teaching regression is a paper by Alan Krueger and Stacy Dale that looks at the effects of going to a more selective college. It turns out that if you got into MIT or Harvard, it actually doesn't matter where you go. Alan and Stacy showed that in two very clever, well-controlled studies. And Jack Mountjoy, in a paper with Brent Hickman, just replicated that for a much larger sample. There isn't any earnings advantage from going to a more selective school once you control for the selection bias. So there's also an elite illusion at the college level, which I think is more important to upper-income families, because they're desperate for their kids to go to the top schools. So desperate, in fact, that a few commit criminal fraud to get their kids into more selective schools.
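As a rough illustration of the Dale and Krueger logic (a simplified stand-in with simulated data, not their actual specification), compare an earnings regression that ignores the admissions process with one that controls for the selectivity of the offers a student received:

```python
# Hypothetical sketch: variable names and magnitudes are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 20000

ability = rng.normal(size=n)
# More able students are admitted to, and tend to attend, more selective schools.
best_admit = ability + rng.normal(scale=0.5, size=n)             # selectivity of best offer
attended = best_admit - np.abs(rng.normal(scale=0.5, size=n))    # some choose a less selective school
# Earnings depend on ability and on the offers a student could command, not on where they enrolled.
log_earnings = 0.5 * ability + 0.2 * best_admit + rng.normal(scale=0.3, size=n)

df = pd.DataFrame({"attended": attended, "best_admit": best_admit, "log_earnings": log_earnings})

# Naive regression: attending a more selective school appears to pay off.
naive = smf.ols("log_earnings ~ attended", data=df).fit()
# Controlling for the admission portfolio: the coefficient on where the student
# actually enrolled collapses toward zero in this simulation.
controlled = smf.ols("log_earnings ~ attended + best_admit", data=df).fit()

print(f"naive return: {naive.params['attended']:.3f}, "
      f"with admission controls: {controlled.params['attended']:.3f}")
```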
Charter Schools and Takeovers
The most common charter model is what we call a startup — somebody decides they want to start a charter school and admits kids by lottery. But an alternative model is the takeover. Every state has an accountability system that requires schools to meet certain standards. When schools fail to meet them, they're at risk of intervention by the state. Some states, including Massachusetts, have an intervention in which the public school is essentially taken over by an outside operator. Boston had takeovers. And New Orleans is actually an all-charter district now, but it moved to that model as individual schools were taken over by charter operators.
That's good for research, because you can look at schools that are struggling just as much but are not taken over or are not yet taken over and use them as a counterfactual. The reason that's important is that people say kids who apply to the startups are self-selected and so they're sort of primed to gain from the charter treatment. But the way the takeover model works in Boston and New Orleans is that the outside operator inherits not only the building, but also the existing enrollment. So they can't cherry-pick applicants. What we show is that successful charter management organizations that run successful startups also succeed in takeover scenarios.
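One simple way to operationalize this kind of takeover comparison, though not necessarily the estimator used in the papers Angrist describes, is a difference-in-differences on simulated school-by-year data: compare the before-and-after change at taken-over schools with the change at similarly struggling schools that were not taken over.

```python
# Simplified difference-in-differences sketch with simulated school-by-year data;
# the effect size and panel structure are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_schools, years = 200, [0, 1]   # year 0 = before any takeovers, year 1 = after

schools = pd.DataFrame({
    "school": range(n_schools),
    "taken_over": rng.integers(0, 2, n_schools),   # half the struggling schools are taken over
    "baseline": rng.normal(-1.0, 0.3, n_schools),  # all schools start out low-performing
})

rows = []
for year in years:
    df = schools.copy()
    df["year"] = year
    # Scores improve by 0.25 after takeover; comparison schools stay flat apart from noise.
    df["score"] = df.baseline + 0.25 * df.taken_over * year + rng.normal(0, 0.1, n_schools)
    rows.append(df)
panel = pd.concat(rows, ignore_index=True)

# The interaction term compares the before/after change at taken-over schools
# with the change at the not-taken-over comparison schools.
did = smf.ols("score ~ taken_over * year", data=panel).fit()
print(f"takeover effect (DiD): {did.params['taken_over:year']:.3f}")
```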
Angrist has developed the knack of looking for these ways of interpreting a given data set, sometimes called "natural experiments." For those trying to find such examples as a basis for their own research, he says:

One thing I learned is that empiricists should work on stuff that's nearby. Then you can have some visibility into what's unique and try to get on to projects that other people can't do. This is particularly true for empiricists who are working outside the United States. There's a temptation to just mimic whatever the Americans and British are doing. I think a better strategy is to say, "Well, what's special and interesting about where I am?"

Finally, as a bit of a side note, I was intrigued by Angrist's neutral-to-negative take on the potential for machine learning in econometrics:
I just wrote a paper about machine learning applications in labor economics with my former student Brigham Frandsen. Machine learning is a good example of a kind of empiricism that's running way ahead of theory. We have a fairly negative take on it. We show that a lot of machine learning tools that are very popular now, both in economics and in the wider world of data science, don't translate well to econometric applications and that some of our stalwarts — regression and two-stage least squares — are better. But that's an area of ongoing research, and it's rapidly evolving. There are plenty of questions there. Some of them are theoretical, and I won't be answering those questions, but some are practical: whether there's any value added from this new toolkit. So far, I'm skeptical.

Josh has written for the Journal of Economic Perspectives a few times. Interested readers might want to follow up with:
- Angrist, Joshua D., and Jörn-Steffen Pischke. 2010. "The Credibility Revolution in Empirical Economics: How Better Research Design Is Taking the Con out of Econometrics." Journal of Economic Perspectives, 24 (2): 3-30.
- Angrist, Joshua D., and Alan B. Krueger. 2001. "Instrumental Variables and the Search for Identification: From Supply and Demand to Natural Experiments." Journal of Economic Perspectives, 15 (4): 69-85.
- Angrist, Joshua D., and Jörn-Steffen Pischke. 2017. "Undergraduate Econometrics Instruction: Through Our Classes, Darkly." Journal of Economic Perspectives, 31 (2): 125-44.