U.S. News Rankings: What Are Deans Really Assessing?


“The more things change, the more they stay the same.”

Philosopher Yogi Berra referred to this phenomenon as “déjà vu all over again.” Optimists describe it as the circle of life. And cynics call it “the system” (and argue it is stacked against us). Indeed, the status quo protects a lot of vested interests. And that’s especially true with the U.S. News & World Report business school rankings.


Picture this: In the 2010 rankings, the top five spots were held by six schools (in order): Harvard, Stanford, MIT Sloan, Northwestern Kellogg, Chicago Booth, and Wharton (with Booth and Wharton tied for fifth). In 2016, the rankings stacked up as follows: Stanford, Harvard, Wharton, Chicago Booth, and MIT Sloan (with Northwestern Kellogg placing sixth). In fact, if you go back to the 2003 rankings, you won’t find any schools besides these six in the top five (though Columbia, Dartmouth Tuck, and Berkeley Haas have each done a stint in the sixth spot ahead of Booth).

Think about leading companies or sports teams. Over a decade or more, how often do you find the same six holding their spots near the top? Rarely (in a free market, at least). So what’s behind these schools’ dominance? Here’s one theory: Rankings foster a virtuous cycle, where renown draws top faculty, students, employers, and support that ultimately feeds into each other. How about another: These six schools may simply execute better, by virtue of their foresight, investments, collective talent, and commitment.

Or perhaps, to some extent, these schools are getting a free ride.

When it comes to the U.S. News peer assessment scores, the top schools are virtually the academic equivalent of “made men.” Once they establish a certain renown, nothing can touch them. Their scores rarely vary, as if their evaluations were on autopilot. As a result, they are seemingly impossible to dislodge from their lofty positions.


Here’s how the U.S. News ranking works: 25% of a school’s ranking (more than any other single component measured by the magazine) is based on these peer assessments, which are filled out by “business school deans and directors of accredited master’s programs.” Using a scale of 1 (marginal) to 5 (outstanding), these peers evaluate the perceived quality of full-time MBA programs. In fact, U.S. News touts this sample (and its 40% response rate) as coming from the people in the best position to have an informed opinion: “academics who administer and teach in these programs.”
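To see how heavily that one survey weighs on the outcome, consider a rough sketch. U.S. News does not publish its exact aggregation formula, so the function below simply assumes a weighted sum of normalized components, with the remaining 75% (placement, salaries, selectivity, and so on) lumped together; the numbers are hypothetical and purely illustrative.

```python
# Illustrative sketch only: U.S. News does not publish its exact formula.
# Assumes a simple weighted sum of components normalized to a 0-1 scale.

def overall_score(components: dict[str, float], weights: dict[str, float]) -> float:
    """Combine normalized component scores (0-1) by their weights."""
    return sum(weights[name] * components[name] for name in weights)

weights = {
    "peer_assessment": 0.25,   # deans/directors survey, 1-5 scale (normalized)
    "other_components": 0.75,  # placement, salaries, selectivity, etc. (lumped)
}

# A hypothetical school with a 4.8 peer score (normalized as 4.8 / 5)
components = {"peer_assessment": 4.8 / 5, "other_components": 0.80}

print(round(overall_score(components, weights), 3))
```

The point the sketch makes concrete: a school that can bank on a 4.8 peer score every year starts each ranking cycle with a quarter of its overall score effectively locked in near the maximum.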

The truth suggests otherwise. Years ago, New Yorker writer Malcolm Gladwell argued that asking deans and MBA program directors to rate other schools is less a measure of a school’s reputation than a collection of prejudices, partly based on the self-fulfilling prophecy of U.S. News’ own ranking. In other words, deans fill out the magazine’s surveys by consulting the previous year’s rankings and simply filling in the blanks.

After all, how well do these peers really know the other schools? Take the Stanford Graduate School of Business, U.S. News’ top-ranked MBA program, for example. Since the 2003 rankings, Stanford has notched a 4.8 score for 14 consecutive years. That’s right: Stanford’s score hasn’t fluctuated up or down during this entire period. In other words, Stanford can count on the highest peer assessment score on record (4.8) to make up a quarter of its ranking.


And Stanford’s peers enjoy a similar leg up when it comes to their academic reputation. Harvard Business School, for example, earned a 4.8 score in 13 of 14 years, only slipping to 4.7 in the 2006 U.S. News rankings. Similarly, Wharton scored a 4.8 in all but one year (2016). MIT Sloan’s scores reflect a modicum of diversity, as the school yielded a 4.7 score in 10 years and a 4.8 in four others (though it has maintained a 4.7 for the past six years).

Even Chicago Booth, whose climb from 9th to 4th over the past 14 years validates the notion that schools can move up in the top 10, has also produced relatively torpid (albeit enviable) peer assessments, earning a 4.7 in seven of the past eight years (broken only by a 4.8 blip in the 2015 ranking). Kellogg follows a similar path: aside from a 4.4 low in 2013 and a 4.8 high in 2003, it has notched either a 4.6 or a 4.7 in the peer assessment over the past 14 years.

In fact, these six schools are the only ones to have merited a score of 4.7 or higher among peers. And only Berkeley Haas, which has consistently ranked seventh in U.S. News’ full-time MBA rankings, has scored a 4.6. In other words, peer assessment scores closely track (and may well drive) schools’ overall rankings. These seven schools seem impervious to slipping below the seventh spot, with the peer assessment scores acting almost as a safety net.


This same result plays out for schools outside the top seven. Beginning with the 2006 rankings, for example, the University of Michigan’s Ross School of Business has scored a 4.4 on the peer assessment seven times, with its 4.3 scores coming in four consecutive rankings (2011-2014). The same trend holds at Columbia, Duke Fuqua, Yale, Cornell Johnson, Carnegie Mellon Tepper, Emory Goizueta, Indiana Kelley, USC Marshall, and Notre Dame Mendoza: at these schools, just 0.1 of a point separates their top and bottom scores over the last 11 years. The trend is even more pronounced at UCLA Anderson, which received scores of either 4.1 or 4.2 during this same period, including two four-year stretches with an identical peer assessment score.