New Yorker Writer Malcolm Gladwell Trashes Rankings

New Yorker staff writer Malcolm Gladwell makes no bones about it: he hates college rankings, maybe even more than business school deans do.

Malcolm Gladwell does not like college rankings. In fact, he abhors them.

The bestselling author turns his attention to the subject in the latest edition of The New Yorker. The result of his exploration into this highly controversial topic: a scathing six-page critique of the value and intellectual honesty of all university ratings. Gladwell, known for bringing subtlety and finesse to the topics he writes about, takes a sledgehammer to U.S. News in particular and to rankings in general. “Who comes out on top, in any ranking system, is really about who is doing the ranking,” he concludes.

It’s something we at Poets&Quants have been saying ever since we launched six months ago (see “Turning the Tables: Ranking the Rankings”). And as far as B-school rankings go, U.S. News is far more transparent than most, and it measures factors more directly related to program quality than the rankings from either The Financial Times or The Economist. U.S. News, for example, uses average GMAT scores and GPAs to assess the quality of incoming students, and post-MBA compensation and employment data to assess the opportunities an MBA program affords its graduates.

Writing in the Feb. 14 & 21 issue of The New Yorker under the headline “The Order of Things: What College Rankings Really Tell Us,” Gladwell takes a deep dive into the college rankings published by U.S. News & World Report. He doesn’t specifically address business school rankings by U.S. News or any other media outlet, but his conclusions are just as applicable to the B-school ranking game as they are to U.S. News’ general college rankings.

His major conclusions:

  • Determinations of ‘quality’ turn on relatively arbitrary judgments about how much different variables should be weighted.
  • Asking deans and MBA program directors to rate other schools is less a measure of a school’s reputation than it is a collection of prejudices partly based on the self-fulfilling prophecy of U.S. News’ own rankings.
  • It’s extremely difficult to measure the variable you want to rank because of differences in how a specific metric can be compiled by different people at different schools in different countries.
  • Rankings turn out to be full of ‘implicit ideological choices’ by the editorial chef who cooks up the ranking’s methodology.
  • There is no right answer to how much weight a ranking system should give to any one variable (a quick sketch after this list shows just how decisive that choice can be).
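
To make that last point concrete, here is a minimal sketch in Python, using made-up scores for two hypothetical schools (none of these numbers, school names, or variable names come from U.S. News): swapping the weights on just two variables reverses which school comes out on top.

    # Hypothetical scores for two made-up schools (not U.S. News data).
    schools = {
        "School A": {"reputation": 4.6, "selectivity": 3.9},  # strong reputation
        "School B": {"reputation": 4.1, "selectivity": 4.8},  # strong selectivity
    }

    def score(metrics, weights):
        """Weighted sum of a school's metric scores."""
        return sum(weights[k] * v for k, v in metrics.items())

    # The same two schools, ranked under two different weighting schemes.
    for weights in ({"reputation": 0.7, "selectivity": 0.3},
                    {"reputation": 0.3, "selectivity": 0.7}):
        ranked = sorted(schools, key=lambda s: score(schools[s], weights),
                        reverse=True)
        print(weights, "->", ranked)

    # {'reputation': 0.7, 'selectivity': 0.3} -> ['School A', 'School B']
    # {'reputation': 0.3, 'selectivity': 0.7} -> ['School B', 'School A']

Neither weighting is “wrong”; the editor’s choice between them is what decides the winner.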

“The first difficulty with rankings is that it can be surprisingly hard to measure the variable you want to rank,” writes Gladwell, “even in cases where that variable seems perfectly objective.”

The writer then trots out a ranking of suicide rates per hundred thousand people, by country, to prove his point. At the top of the list, with what would seem to be the highest suicide rate of any country, is Belarus, with 35.1 suicides for every 100,000 people. At the bottom of this top-ten list is Sri Lanka, with 21.6 suicides per 100,000.

“This list looks straightforward,” writes Gladwell. “Yet no self-respecting epidemiologist would look at it and conclude that Belarus has the worst suicide rate in the world. Measuring suicide is just too tricky. It requires someone to make a surmise about the intentions of the deceased at the time of death. In some cases, that’s easy. In most cases, there’s ambiguity, and different coroners and different cultures vary widely in the way they choose to interpret that ambiguity.”

‘U.S. NEWS RANKINGS SUFFER FROM A SERIOUS CASE OF THE SUICIDE PROBLEM.’

What does this have to do with college rankings? “The U.S. News rankings suffer from a serious case of the suicide problem,” Gladwell writes. “There’s no direct way to measure the quality of an institution—how well a college manages to inform, inspire, and challenge its students. So the U.S. News algorithm relies instead on proxies for quality—and the proxies for educational quality turn out to be flimsy at best.”

Consider the most important variable in the U.S. News methodology, one that accounts for 22.5% of a college’s final score: reputation. In U.S. News’ business school ranking, it’s the single most important metric as well, carrying a weight of 25% of a B-school’s final ranking (the next most important variable in the B-school survey is a highly flawed survey of corporate recruiters, which accounts for 15%). This so-called “peer assessment score” comes from a survey of business school deans and directors of accredited master’s programs in business. They are asked to rate programs on a scale from “marginal” (1) to “outstanding” (5). A school’s score is the average of all the respondents who rate it. About 44% of the deans and directors surveyed by U.S. News responded in the fall of 2009.
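
For the mechanics, here is a minimal sketch of how a peer assessment score of this kind is computed, using invented ratings (the 1-to-5 scale and the roughly 44% response rate come from the article; every number in the code is hypothetical). The score is simply the mean of whatever ratings come back, so the deans who never respond play no part in it.

    # Invented ratings on the 1 ("marginal") to 5 ("outstanding") scale.
    ratings = [5, 4, 4, 3, 5, 4]   # deans and directors who responded
    non_respondents = 8            # those surveyed who never replied

    # A school's peer assessment score is the mean of the responses received;
    # here 6 of 14 surveyed replied, close to the real survey's ~44% rate.
    peer_score = sum(ratings) / len(ratings)
    print(f"Peer assessment score: {peer_score:.1f} "
          f"({len(ratings)} of {len(ratings) + non_respondents} surveyed)")
    # -> Peer assessment score: 4.2 (6 of 14 surveyed)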
