Pages

Wednesday, March 30, 2016

Grade Inflation Update: A's Rule

There is no systematic national collection of data on the distribution of college and university grades; instead, such data are gathered by individual researchers. Perhaps the largest and most prominent dataset on college grades over time, now covering more than 400 schools with a combined enrollment of over four million undergraduate students, comes from Stuart Rojstaczer and Christopher Healy. I wrote about the previous update of their data back in August 2014. They now have a substantial update available at http://www.gradeinflation.com.

Their overall finding is that during the 30 years from 1983 to 2013, average grades for undergraduates at four-year colleges rose from about 2.85 to 3.15 on a 4.0-point scale--that is, the average used to sit halfway between a B- (2.7) and a B (3.0), and it is now halfway between a B (3.0) and a B+ (3.3).
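As a quick check of the arithmetic, both averages sit exactly halfway between adjacent letter grades, assuming the conventional 4.0-scale mapping (B- = 2.7, B = 3.0, B+ = 3.3; individual schools vary):

```python
# Conventional 4.0-scale values for the letter grades mentioned above.
grade_points = {"B-": 2.7, "B": 3.0, "B+": 3.3}

def midpoint(lower, upper):
    """GPA halfway between two letter grades."""
    return (grade_points[lower] + grade_points[upper]) / 2

# 1983 average: halfway between B- and B, i.e. about 2.85
avg_1983 = midpoint("B-", "B")
# 2013 average: halfway between B and B+, i.e. about 3.15
avg_2013 = midpoint("B", "B+")
```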

Along the way, A's became the most common grade back in the mid-1990s. The prevalence of C's and D's slumped back in the 1960s and has continued to slide since then. More recently, B's have been declining, too.

I've commented on the grade inflation phenomenon before, but perhaps a quick recap is useful here. I view grades as a mechanism for communicating information, and grade inflation makes that mechanism less useful--with consequences both inside academia and beyond.

For example, grade inflation is not equal across academic departments; it has been most extreme in the humanities and softer social sciences, and mildest in the sciences and the harder social sciences (including economics). Thus, one result of this differential grade inflation across majors is that a lot of freshmen and sophomores are systematically being told by their grades that they are worse at science than at other potential majors. The Journal of Economic Perspectives (where I work as Managing Editor) carried an article on this connection way back in the Winter 1991 issue: Richard Sabot and John Wakeman-Linn on "Grade Inflation and Course Choice" (pp. 159-170). For an overview of some of the additional evidence, see "Grade Inflation and Choice of Major" (November 14, 2011). In turn, when grade inflation influences the courses that students choose, it also influences the shape of colleges and universities--like which kinds of departments get additional resources or faculty hires.

Another concern within higher education is that in many classes, the range of potential grades for a more-or-less average student has narrowed, which means that extra effort can raise grades only modestly. With grade inflation, an average student is likely to perceive that they can get the typical 3.0 or 3.3 without much effort, so the potential upside from working hard is at most a 3.7 or a 4.0.

Grade inflation also makes grades a less useful form of information when students start sending out their transcripts to employers and graduate programs. As a result, the feedback that grades provide about the skills and future prospects of students has diminished, while other forms of information about student skills have become more important. For example, employers and graduate schools will give more weight to achievement or accreditation tests, when these are available, rather than to grades. Internships and personal recommendations also become more important, although these alternative forms of information about student quality depend on networks that are typically more available to students at colleges and universities with more resources and smaller class sizes.

As the data at the top suggest, efforts to limit grade inflation have not been especially successful. In "Grade Inflation: Evidence from Two Policies" (August 6, 2014), I wrote about a couple of efforts to reduce grade inflation. Wellesley College enacted a policy that the average grade in lower-level courses should not exceed 3.3, which was somewhat successful at reducing the gap between high-grading and low-grading departments. Cornell University took a different tack, deciding to publish student grades alongside median grades for each course, so that it would be possible to see how a student performed relative to the median. This plan seemed to worsen grade inflation, as students learned more about which courses were higher-grading and headed for those classes. For the Wellesley study, see Kristin F. Butcher, Patrick J. McEwan, and Akila Weerapana on "The Effects of an Anti-Grade-Inflation Policy at Wellesley College," in the Summer 2014 issue of the JEP. For the Cornell study, see Talia Bar, Vrinda Kadiyali, and Asaf Zussman on "Grade Information and Grade Inflation: The Cornell Experiment," in the Summer 2009 issue of the JEP.