The Bloomberg Businessweek Ranking Is Determined Largely By Compensation


Bloomberg Businessweek Continues to Distort Stakeholder-Generated Weights 

In the last several years, Bloomberg Businessweek’s (BB) annual business school ranking has claimed that the weights they attach to the factors used in the ranking (‘indexes,’ as BB calls them) are determined by stakeholders: “graduating students, recent alumni, and companies that recruit MBAs.”  Yet, by failing to normalize the raw scores on these ‘indexes,’ BB’s ranking continues to distort their stakeholder-generated weights. 

The elaborate determination of these weights is described at length in BB’s methodology, but there is no acknowledgment that their ranking computation lays waste to the painstakingly determined weights.  The exercise of crowdsourcing index weights from survey respondents therefore remains pointless and deceptive.  Preserving the stakeholder-generated weights would result in a rank order considerably at variance with what they publish.  These distortions have been readily discernible in the data published by BB and have received considerable media coverage, both in P&Q (e.g., here, here, and here) and elsewhere.  The details below pertain to BB’s ranking of US schools, but the distortions are similarly pervasive in their ranking of European, Canadian, and Asia-Pacific schools.

BB’s Ranking is Determined Largely by Compensation 

BB computes its ranking by applying stakeholder-generated weights of the five indexes of the ranking (Compensation, Learning, Networking, Entrepreneurship, and Diversity) to the index scores they publish. But because the index scores are not normalized before the weights are applied, indexes with greater variability of scores carry a disproportionately large effective weight in the ranking. (It is easy to check whether a data vector is normalized; if it is, further normalization leaves it unchanged.) A simple comparison of the coefficients of variation (std. dev. divided by mean) reveals that BB’s Compensation scores have considerably greater variability than all the others:

[Chart: Coefficients of variation of the five BB index scores, with Compensation showing by far the greatest variability]
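The mechanism is easy to demonstrate with a small sketch. The numbers below are made up for illustration (they are not BB's data): one index with widely spread raw scores dominates an equally weighted sum of unnormalized scores, and the parenthetical normalization check is a one-liner.

```python
import numpy as np

# Illustrative scores for three hypothetical schools (not BB's data):
# Compensation spreads widely; Learning is tightly clustered.
compensation = np.array([90.0, 60.0, 30.0])
learning = np.array([71.0, 70.0, 69.0])

def cv(x):
    """Coefficient of variation: std. dev. divided by mean."""
    return x.std() / x.mean()

print(cv(compensation))  # ~0.41
print(cv(learning))      # ~0.01

# With nominal 50/50 weights, the unnormalized sum is driven almost
# entirely by Compensation, whose scores spread roughly 35x more.
overall = 0.5 * compensation + 0.5 * learning

# The normalization check: z-scoring an already z-scored vector
# leaves it unchanged.
def zscore(x):
    return (x - x.mean()) / x.std()

z = zscore(compensation)
print(np.allclose(zscore(z), z))  # True
```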

To see how much the published ranking distorts the stakeholder-generated weights, I computed z-scores from the published index scores and then used a constrained optimization model to determine the effective index weights which, when applied to the z-scores, produce a ranking that deviates least from the published one. The effective weight vector so computed replicates the published BB ranking of 75 US b-schools:

[Chart: Effective index weights implied by the published ranking, with Compensation far above its stakeholder-generated weight]
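The article does not publish the exact optimization model, so the sketch below is only one plausible formulation, run on synthetic data: weights are constrained to the simplex, and a smooth least-squares fit to the published rank order stands in for rank agreement. All names and numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rankdata, zscore

# Synthetic data: 10 hypothetical schools x 5 indexes, with the first
# column ("Compensation") far more variable than the rest (not BB's data).
rng = np.random.default_rng(0)
raw = rng.normal(size=(10, 5)) * np.array([20, 3, 3, 3, 3]) + 50

# A "published" ranking built, as BB does, from unnormalized scores
# under nominally equal weights of 0.2 each.
published_rank = rankdata(-(raw @ np.full(5, 0.2)))

Z = zscore(raw, axis=0)  # normalize each index to mean 0, std. dev. 1

def objective(w):
    # Smooth surrogate for rank agreement: least-squares fit of the
    # weighted z-score composite to the (negated, standardized)
    # published rank vector.
    target = zscore(-published_rank)
    return ((Z @ w - target) ** 2).sum()

res = minimize(
    objective,
    x0=np.full(5, 0.2),  # start from equal nominal weights
    bounds=[(0.0, 1.0)] * 5,
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    method="SLSQP",
)
effective_weights = res.x
# The effective weight on the high-variability first index comes out
# well above its nominal 0.2, mirroring the distortion described above.
```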

The overweighting of Compensation and the underweighting of all other indexes except Diversity is now a consistent feature of the BB ranking, as seen in the pattern of the previous three years:

[Chart: Effective vs. stakeholder-generated index weights over the previous three years]

An equivalent way to think about the distortion of stakeholder-generated weights is that if these weights were preserved (by using z-scores or normalized versions of raw data), the rank order of schools would have been substantially different from that published by BB.  For schools ranked in the top 25 in 2021-22, the table below shows the weight-preserving ranks (using z-scores) for the last four years, juxtaposed against the BB ranks in each of those years: 

[Table: Weight-preserving ranks (using z-scores) vs. published BB ranks over the last four years, for schools in the 2021-22 top 25]
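A toy comparison makes the "weight-preserving" idea concrete. The four schools, five index scores, and weights below are invented for illustration (they are not BB's data or weights): the same weights are applied once to raw scores and once to z-scored data.

```python
import numpy as np
from scipy.stats import rankdata, zscore

# Four hypothetical schools x five indexes (Compensation, Learning,
# Networking, Entrepreneurship, Diversity); data and weights are
# illustrative, not BB's.
raw = np.array([
    [120.0, 60.0, 55.0, 50.0, 70.0],
    [ 80.0, 75.0, 72.0, 65.0, 68.0],
    [ 70.0, 80.0, 78.0, 70.0, 66.0],
    [ 60.0, 85.0, 80.0, 75.0, 64.0],
])
weights = np.array([0.37, 0.25, 0.15, 0.13, 0.10])

# Unnormalized, as in the published ranking: the Compensation
# leader (row 1) comes out on top.
raw_rank = rankdata(-(raw @ weights), method="min").astype(int)
print(raw_rank)  # [1 2 3 4]

# Weight-preserving: z-score each index first, so every index
# contributes in proportion to its intended weight.
z_rank = rankdata(-(zscore(raw, axis=0) @ weights), method="min").astype(int)
print(z_rank)    # [4 3 1 2] -- the Compensation leader drops to last
```

The rank order flips substantially once the indexes are put on a common scale, which is the same qualitative effect the table above shows for the real data.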

The disproportionately large weight accorded to compensation in the BB ranking invites scrutiny. The compensation score is determined partly by published employment data (salaries, bonuses, employment rate, etc.) and partly by a survey of alumni.  Though BB does not break down the compensation score into its components, it seems likely that the year-to-year variation of a school’s compensation score (and hence of its overall rank) is largely due to the survey-based component.  It is worth noting that for 12 of BB’s top 20 schools in 2024, the median salary of graduates was identical ($175,000).  

It is, of course, BB’s prerogative to decide what factors to use in the ranking and what weights to ascribe to the factors. But it makes no sense to trumpet the stakeholders’ contribution to a process that ends up smothering the stakeholders’ preferences. Some readers may wonder why BB doesn’t employ the obvious fix: applying the stakeholder weights to normalized data would eliminate the distortion of weights.

Perhaps they don’t like the ranking that this fix would have produced, placing, for example, Wharton at #17, Columbia at #27, and NYU at #22, each well behind Dartmouth (#2), Carnegie Mellon (#4), and UCLA (#11) this year.  The other consequence of preserving the stakeholder weights would be greater year-to-year volatility in the ranks (evident in the table above), which would undermine the ranking’s credibility.  But, to reiterate what I have previously said, if BB did believe in letting their stakeholders define the relative importance of their indexes, they would have no qualms about publishing the resulting ranking no matter how unorthodox it may seem.  And for schools that currently get buried under the weight of current orthodoxy about rankings, it may create the opportunity to gain prominence.


Anjani Jain is the deputy dean of Yale School of Management. He joined the faculty of the Wharton School of the University of Pennsylvania in 1986 and served for 26 years before joining Yale SOM. Dean Jain’s research interests include the analysis and design of manufacturing systems, optimization algorithms, and probabilistic analysis of combinatorial problems. 

Listen to our Business Casual podcast: The 2024 Bloomberg Businessweek MBA Ranking
Our analysis of the new list which has Stanford on top for the sixth time in a row