The Numbers Behind The Rankings

Robert Bruner, dean of the University of Virginia's Darden School of Business

Dean Robert Bruner of the University of Virginia’s Darden School of Business is an astute observer of graduate business education and the MBA game. In a recent post on his excellent blog, Bruner opined on the winner-take-all theory as it applies to higher education and business schools in particular.

“Today, higher education resembles a winners-take-all market,” he concludes. “To win is great, gratifying, and reinforcing.  But striving to be one of the winners is fraught with great difficulty: heavy investment, long duration with slow advancement, and serious temptations to err—it may place the school on a long and weary treadmill that is unsustainable and ultimately dangerous.”

The most fascinating part of his observations is a series of charts showing that the highest-ranked MBA programs tend to have the best stats on every key metric of quality, from average GMAT scores to the number of articles published by faculty in academic journals. In fact, perhaps the most surprising insight from his analysis is the remarkable correlation between a school’s ranking by Bloomberg BusinessWeek and the publication record of its faculty (see the following pages), especially because academic research accounts for only 10% of the BusinessWeek ranking.

It’s pretty much what you would expect to see. But reduced to chart form, it makes a strong point: the most highly ranked schools benefit greatly from their relative position on such lists, attracting better students, faculty and funding. It’s also why striving to get into a highly ranked school is more likely to pay off.


“As applications, quality of students, donations, and affirmative media coverage fall, so do rankings,” believes Bruner. “And the cycle continues: more doubt and disbelief; more declining relative performance; fewer rewards. The school stalls, and then nose-dives. This is a self-reinforcing pernicious cycle.”

Bruner points out that the two most prominent attributes of winner-takes-all markets are that “(1) assessment is based on relative, not absolute, performance, and (2) that rewards are concentrated in the hands of a few top performers. Higher education models these attributes well.”

Here’s his assessment (and his charts, with apologies for their less than perfect appearance):

Admissions selectivity. “Reward” could also be measured by the ability to recruit excellent students. Consider some evidence from US business schools. One metric would be the selectivity ratio of admissions (the number of offers of admission divided by the number of applicants). This graph shows that the higher the rank, the more selective the admissions.
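The selectivity ratio Bruner describes is simple arithmetic; a minimal sketch follows, using invented application and offer counts purely for illustration (these are not figures for any real school):

```python
# Selectivity ratio as defined above: offers of admission divided by
# applicants. A lower ratio means a more selective school.
def selectivity_ratio(offers: int, applicants: int) -> float:
    """Return the admissions selectivity ratio (offers / applicants)."""
    return offers / applicants

# Hypothetical illustration: 1,000 offers out of 8,000 applications.
ratio = selectivity_ratio(1000, 8000)
print(f"{ratio:.1%}")  # → 12.5%
```

By this convention, a school admitting 12.5% of applicants is more selective than one admitting 40%, which is the pattern Bruner’s chart ties to rank.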

Source: Robert F. Bruner blog

  • Geoff Law

    apologies for the delayed transmission…
    I am intrigued by your recent post, as I am currently examining the influence of improved rankings on MBA students’ choice to pursue their postgraduate studies with Milpark (South Africa). I would sincerely appreciate any further insights that you could offer on the influence that rankings have on MBA students’ enrolment decisions.
    Kind regards
    Geoff (

  • Geoff Law

    Dear John

    I am currently an MBA student at a private higher education institution, Milpark Business School (recently acquired by Apollo Education Group of Phoenix, AZ), and I am fascinated by your

  • bwanamia

    Bruner is wrong that it’s a winner-take-all market. It’s classic oligopoly. An oligopoly of two, three or five b-schools produces employees for the most oligopolistic or monopolistic firms, i.e., the investment banks, the consulting firms, the PE funds and Google. Though it wasn’t always so, the US has been headed down this road for roughly three decades. Foreign students love US b-schools because everywhere else on the planet finding a sinecure within an oligopoly/monopoly is the only way to go.

  • Please forgive me for copying a post I made in a different thread here on P&Q, but it seems to apply here too. A point that I made there was that many people are focused on the latest rankings and do not consider any historical data or trend analysis.

    So, I thought I’d take a look at the BW rankings over time and see what they describe. There are many ways to analyze this data, but here are a few observations (US data only, 1988–2012, 13 rankings).

    In the history of the BW rankings there have only been 11 different schools that have ever been ranked in the Top 5 (the number of times in the Top 5 and average ranking shown):

    – Kellogg (13/13 times, average rank 2.23)
    – Wharton (13/13 times, average rank 2.62)
    – Harvard (13/13 times, average rank 3.31)
    – Booth (10/13 times, average rank 3.77)
    – Ross (5/13 times, average rank 5.77)
    – Stanford (6/13 times, average rank 6.23)
    – Fuqua (1/13 times, average rank 9.08)
    – MIT Sloan (1/13 times, average rank 9.77)
    – Tuck (1/13 times, average rank 10.23)
    – Darden (1/13 times, average rank 11.15)
    – Johnson (1/13 times, average rank 11.23)

    If we arbitrarily eliminate the schools that broke into the Top 5 only once, that leaves only six US business schools that have managed to be ranked in the Top 5 by BW more than once. Moreover, 60% of the Top 5 slots in the US have not changed since 1988! That is, three schools have always been in the Top 5 (Kellogg, Wharton, Harvard), and three other schools have generally fought over the other two slots (Booth, Ross, Stanford) for 24 years.
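The tally above (Top 5 appearances and average rank per school) can be sketched in a few lines of Python. The data below are a small invented subset used only to show the computation; they are not the actual 1988–2012 BW rankings:

```python
# Count Top 5 appearances and compute the average rank per school across
# ranking years. The rankings dict is hypothetical illustration data.
from collections import defaultdict

rankings = {  # {year: {school: rank}} — invented, for illustration only
    1988: {"Kellogg": 1, "Wharton": 4, "Harvard": 2, "Booth": 11},
    1990: {"Kellogg": 1, "Wharton": 4, "Harvard": 3, "Booth": 9},
    1992: {"Kellogg": 2, "Wharton": 4, "Harvard": 1, "Booth": 5},
}

top5_counts = defaultdict(int)   # school -> number of Top 5 appearances
ranks = defaultdict(list)        # school -> list of ranks across years
for year, table in rankings.items():
    for school, rank in table.items():
        ranks[school].append(rank)
        if rank <= 5:
            top5_counts[school] += 1

# Report in ascending order of average rank, mirroring the list above.
for school in sorted(ranks, key=lambda s: sum(ranks[s]) / len(ranks[s])):
    avg = sum(ranks[school]) / len(ranks[school])
    print(f"{school}: {top5_counts[school]}/{len(rankings)} times, "
          f"average rank {avg:.2f}")
```

Run against the full 13-ranking history, this is exactly the kind of count-and-average summary the list above reports.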

    To be sure, there are many ways to analyze this data, as Dean Bruner’s blog demonstrates very well (e.g., the variability of one school’s ranking over time; the direction of the trend of one school’s ranking over time). I think the observations shared above, however, underscore how little new data really appear in any given ranking.