New Yorker Writer Malcolm Gladwell Trashes Rankings

Writes Gladwell about the “Best Colleges” version of the survey: “Every year, the magazine sends a survey to the country’s university and college presidents, provosts, and admissions deans asking them to grade all the schools in their category on a scale of one to five. Those at national universities, for example, are asked to rank all 261 other national universities.” The magazine’s rankings editor, Robert Morse, told Gladwell that the typical respondent assigns grades to roughly half of the schools in his or her category.


“But it’s far from clear how any one individual could have insight into that many institutions,” concludes Gladwell. “Sound judgments of educational quality have to be based on specific, hard-to-observe features. But reputational ratings are simply inferences from broad, readily observable features of an institution’s identity, such as its history, its prominence in the media, or the elegance of its architecture. They are prejudices.”

To prove his point, Gladwell cites an analysis of another U.S. News ranking, its Best Hospitals list, which also relies heavily on reputation scores generated by professional peers. “Why, after all, should a gastroenterologist at the Ochsner Medical Center, in New Orleans, have any specific insight into the performance of the gastroenterology department at Mass General, in Boston, or even, for that matter, have anything more than an anecdotal impression of the gastroenterology department down the road at some hospital in Baton Rouge?…

“When U.S. News asks a university president to perform the impossible task of assessing the relative merits of dozens of institutions he knows nothing about, he relies on the only source of detailed information at his disposal that assesses the relative merits of dozens of institutions he knows nothing about: U.S. News. A school like Penn State, then, can do little to improve its position. To go higher than 47th, it needs a better reputation score, and to get a better reputation score it needs to be higher than 47th. The U.S. News ratings are a self-fulfilling prophecy.”

Gladwell may not have known that the most egregious example of this problem plagues U.S. News’ so-called specialty rankings of business schools, which are based solely on a survey of business school deans and directors who are asked to nominate up to 10 programs for excellence in each of a dozen or so categories, from accounting to management. The schools that get the most votes make the ranking.


Morse, U.S. News’ director of data research, counters this criticism. In response to a reporter, Morse once said: “U.S. News is not expecting people to have knowledge or be able to rate each school in its category. It’s based on the premise that since we have a big enough respondent base, enough people have some knowledge of enough schools that we get a statistically significant number of respondents for each school. There are subjective parts of education, parts that can’t be measured by just quantitative data. The peer survey tries to capture that part of it.”

One of the techniques Gladwell deploys in the article is to reorder certain schools by changing some of the metrics used to rank them. He revises a top ten list of schools by doing what U.S. News doesn’t: including the cost of tuition as a variable. U.S. News would include cost if value for the dollar were something it judged important. Simply by taking cost into account, the rankings of seven of the top schools changed, and three schools (Northwestern University, Columbia University, and Cornell University) dropped out of the top ten, replaced by the University of Alabama, the University of Texas, and the University of Virginia. Concludes Gladwell: “The U.S. News rankings turn out to be full of these kinds of implicit ideological choices.”
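Gladwell’s exercise can be sketched in a few lines of code. The example below uses entirely invented school names, quality scores, and tuition figures (not U.S. News data or methodology) simply to show how introducing cost as a variable can reshuffle an otherwise stable ordering:

```python
# Hypothetical illustration: the same four schools ranked two ways.
# All scores and tuition figures below are made up for demonstration.

schools = {
    # name: (quality_score, annual_tuition_usd)
    "School A": (95, 60000),
    "School B": (93, 58000),
    "School C": (90, 15000),
    "School D": (88, 12000),
}

def rank_by_quality(data):
    """Rank purely on the quality score (higher is better)."""
    return sorted(data, key=lambda s: data[s][0], reverse=True)

def rank_by_value(data):
    """Rank on quality per tuition dollar, a crude 'value' metric."""
    return sorted(data, key=lambda s: data[s][0] / data[s][1], reverse=True)

print(rank_by_quality(schools))
print(rank_by_value(schools))
```

With quality alone, the expensive high-scoring schools lead; once each score is divided by tuition, the cheaper schools jump to the top, even though no school’s underlying quality changed. That is the “implicit ideological choice” Gladwell describes: which variables the ranker admits determines who wins.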

