The Bloomberg Businessweek 2022-23 MBA Ranking Remains Flawed

Bloomberg Businessweek (BBW) Signals a Change in the Ranking Methodology

After last year’s extensive media coverage (in P&Q and elsewhere) of problems with Bloomberg Businessweek’s 2021-22 B-school ranking, I was curious to see what methodological changes, if any, the magazine might make to the 2022-23 ranking.  On September 14th, Bloomberg Businessweek (BBW) sent a communication to business schools alerting us to a change in the methodology:

“Previously, schools’ raw scores of 1-7 for each index were weighted according to the stakeholder-determined weights for a final overall score of 1-7. This final overall score was then normalized onto a 0-100 scale. The school with the highest 1-7 raw score received a 100, while the school with the lowest received a 0. All other schools were scored proportionally in between.”

I was pleased to see the unambiguous acknowledgment that BBW’s previous rankings had applied the stakeholder-determined weights to the raw scores.  This was a question I had asked BBW several times last year without once receiving an answer.  A unique feature of the BBW ranking (much touted on their methodology page) is that the weights they apply to the five factors of the ranking (Compensation, Learning, Networking, Entrepreneurship, and Diversity) are generated through stakeholder surveys.  (BBW calls these factors ‘indexes.’)  Part of my criticism last year was that BBW’s failure to normalize the raw data before applying these stakeholder-generated weights results in a substantial distortion of the weights.  Conversely, to preserve the stakeholder-generated weights, they needed first to normalize the raw scores; but preserving the weights produces a ranking that is substantially at variance with, and considerably less plausible than, the one they publish.  Until 2021-22, BBW had published normalized versions of the index scores for each school, and the September 14th communication admitted for the first time that these normalized scores were never used in the ranking computation.  BBW’s September 14th message went on to suggest that the 2022-23 ranking might depart from applying the weights to raw scores:

“To better highlight the competitiveness between schools within and across each index, we’ve adjusted the scale. Schools’ raw scores of 1-7 for each index are first transformed into a 0-100 score, where 100 represents the theoretical maximum of 7, and 0 represents a 1. Then these 0-100 scores are weighted according to the stakeholder-determined weights for a final overall score of 0-100.”

This hint of reform was reinforced when BBW published the 2022-23 ranking on September 15th.  The index scores they published no longer suffered from the ‘replication problem’ of the last few years: applying the stakeholder-generated weights to the published (normalized) scores failed to reproduce the published ranking.  It was this replication problem that alerted me to the deeper problem with the BBW methodology: it distorts the index weights, and fixing the distortion substantially reshuffles the resulting ranking.

The Change Turns Out to Be a Fig Leaf …

A closer examination of BBW’s 2022-23 ranking and the accompanying data revealed that the putative change in methodology was a fig leaf.  I noticed that, though the published index scores appear to be on a 100-point scale, they are not normalized.  (It is easy to check whether a data vector is normalized: if it is, normalizing it again leaves it unchanged.)  Normalization is done as follows:

normalized score = 100 × (raw score − minimum raw score) / (maximum raw score − minimum raw score)

so that the school with the highest raw score receives 100, the school with the lowest receives 0, and all others fall proportionally in between.
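
To make the check concrete, here is a minimal Python sketch of min-max normalization and the idempotence test described above.  The raw scores are hypothetical, not BBW’s data:

```python
import numpy as np

def min_max_normalize(scores):
    """Map the highest value to 100 and the lowest to 0,
    with all other values scaled proportionally in between."""
    scores = np.asarray(scores, dtype=float)
    return 100 * (scores - scores.min()) / (scores.max() - scores.min())

# Hypothetical 1-7 raw index scores for five schools.
raw = [6.8, 6.1, 5.9, 5.2, 4.7]
normalized = min_max_normalize(raw)
print(normalized)  # [100.0, 66.67, 57.14, 23.81, 0.0]

# The idempotence check: a vector that is already normalized
# is left unchanged by a second normalization.
print(np.allclose(min_max_normalize(normalized), normalized))  # True
```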

Instead, the published index scores are obtained by multiplying the 1–7 raw scores by a constant approximately equal to 14 (so that a raw score near 7 lands in the 90s).  BBW carefully avoids the word ‘normalization’ in their description above of ‘transforming’ the 1–7 raw scores to the 0–100 scale.  The overall score for each school is obtained by applying the stakeholder-generated index weights to the school’s index scores.  This immediately solves the ‘replication problem’ while obfuscating the deeper flaw of the methodology: it either makes a lie of the stakeholder-generated index weights or, equivalently, if the weights are preserved, produces a ranking that departs substantially from the one published.  Re-scaling the raw data by a multiplicative factor does little to rectify the flawed ranking, and its appearance in place of the normalized index scores of previous years defies explanation except as a decoy.
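
A quick way to see why a uniform rescaling cannot fix anything: multiplying every index score by the same constant multiplies each school’s weighted overall score by that constant, so the resulting rank order is identical to the one obtained from the raw scores.  A minimal sketch (the scores and weights below are illustrative, not BBW’s actual data):

```python
import numpy as np

rng = np.random.default_rng(seed=42)
raw = rng.uniform(1, 7, size=(81, 5))   # hypothetical 1-7 scores: 81 schools x 5 indexes
weights = np.array([0.35, 0.22, 0.18, 0.15, 0.10])  # illustrative weights summing to 1

overall_raw = raw @ weights
overall_rescaled = (raw * 14.28) @ weights   # the ~14x rescaling of the published scores

# The same constant on every index scales all overall scores alike,
# so the ranking is unchanged.
assert np.array_equal(np.argsort(-overall_raw), np.argsort(-overall_rescaled))
```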

… While Deeper Problems Remain

To see how much the published ranking distorts the stakeholder-generated weights, I computed z-scores from the published index scores (the multiplication factor is immaterial for this computation) and then used a constrained optimization model to determine the effective index weights that, when applied to the z-scores, produce the smallest variance from the published ranking.  The effective weight vector so computed replicates the published BBW ranking of 81 US B-schools (with two minor exceptions caused by the limited precision of the available data):

[Table: effective index weights implied by the published 2022-23 ranking, alongside the stakeholder-generated weights for each index]
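
For readers who wish to reproduce this computation, here is one way to set it up in Python with scipy.  The function name and the least-squares objective are my choices for illustration; the index scores and overall scores would be loaded from BBW’s published data:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import zscore

def effective_weights(index_scores, overall_scores):
    """Find nonnegative weights summing to 1 that, applied to the
    z-scored index columns, come closest (in least squares) to the
    published overall scores."""
    z = zscore(index_scores, axis=0)   # per-index z-scores across schools
    target = zscore(overall_scores)

    def loss(w):
        return np.sum((z @ w - target) ** 2)

    n = index_scores.shape[1]
    result = minimize(
        loss,
        x0=np.full(n, 1.0 / n),        # start from equal weights
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    return result.x
```

Ranking the schools by the weighted sum of z-scores under the returned weights should then reproduce the published ranking, up to the precision caveats noted above.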

The analogous computation last year yielded a similar pattern of distorted weights:

[Table: effective index weights implied by the published 2021-22 ranking, showing a similar pattern of distortion]

An equivalent way to think about the ranking’s deep flaw is that if the stakeholder-generated weights are preserved (by using z-scores or normalized versions of raw data), the rank order of schools is substantially different from that published by BBW.  For schools ranked in the top 25 in 2021-22, the table below shows the corrected ranks (using z-scores) for both 2021-22 and 2022-23 juxtaposed against the BBW ranks: 

[Table: BBW ranks versus corrected (weight-preserving) ranks for the schools in the 2021-22 top 25, for both 2021-22 and 2022-23]
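
The corrected ranks themselves are straightforward to compute once the weights are fixed: apply the stakeholder-generated weights to the z-scored index columns and rank the weighted sums.  A minimal sketch (again with my own function name; the inputs would come from the published index scores and BBW’s stakeholder weights):

```python
import numpy as np
from scipy.stats import zscore

def corrected_ranks(index_scores, stakeholder_weights):
    """Weight-preserving ranking: apply the stakeholder weights to
    z-scored index columns and rank schools by the weighted sum."""
    overall = zscore(index_scores, axis=0) @ stakeholder_weights
    order = np.argsort(-overall)              # best school first
    ranks = np.empty(len(overall), dtype=int)
    ranks[order] = np.arange(1, len(overall) + 1)
    return ranks
```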

The full list of 81 US schools ranked by BBW in 2022-23, and their corrected (i.e., weight-preserving) rank order, is available in a spreadsheet that the interested reader can download.  Also included in the spreadsheet are calculations showing that the effective index weights stated above replicate the published 2022-23 BBW ranking when applied to z-scores.    

Bloomberg Businessweek’s Response

I contacted a senior editor at BBW to ask why they chose not to normalize the raw data.  Their rankings team obviously understands why normalization is necessary and how it is done, and has in fact been publishing normalized index scores for several years (without using them, as they now acknowledge).  I explained, as I did last year, how the lack of normalization severely distorts their stakeholder-generated index weights, and I shared with them the effective index weights one would have to use to replicate the ranking.  The reply was simultaneously revealing and non-responsive.  As to the lack of normalization, the editor said, “we don’t normalize before weighting because it exaggerates small differences between schools.”  I pointed out that not exaggerating small differences is tantamount to overriding their stakeholders’ collective judgment of what the index weights should be.  In particular, BBW’s failure to normalize the raw data causes considerably less weight to be placed on the indexes of Learning, Networking, and Entrepreneurship than their stakeholders wish, and their choice smothers the differences among schools on these factors.  These three indexes effectively receive a total of 20.6% weight compared to the stakeholder-generated 54.5%.  On the other hand, small differences in Compensation receive almost twice the importance that BBW’s stakeholders wish.  It goes without saying that BBW can choose to forgo normalization of the raw data if they wish.  But they cannot simultaneously claim that the stakeholder-generated weights are a distinctive and important feature of the ranking.  The long description of their methodology opens with the following statement:

“The Bloomberg Businessweek Best B-Schools ranking starts from the basic premise that the best judges of MBA programs are graduating students, recent alumni, and companies that recruit MBAs. We ask those stakeholders questions about everything from jobs to salaries, classroom learning to alumni networks. Their answers form the heart of this ranking. … Rather than define the relative weighting of the indexes ourselves, as most rankings do, we let the stakeholders define them.”

In light of BBW’s acknowledgment that they do not use normalization, the whole enterprise of soliciting index weights from stakeholders is meaningless, and their assertions about letting stakeholders determine the ranking’s index weights are disingenuous.

I asked another question of the editor and, as of this writing, have not received a response.  My question was about the sudden switch to publishing a rescaled version of the raw data (which reveals all the information in the raw data) instead of their years-long practice of publishing normalized data.  This editor and others at BBW had made emphatic claims to me last year that not divulging the raw data was essential to thwarting the ‘gaming’ and ‘reverse engineering’ of the ranking, and that my lack of access to the raw data was why my analysis last year was invalid.  (They apparently did not realize that I could nevertheless mathematically deduce the raw data from the information they had published, though I did not need it to unearth the ranking’s errors.)  The sudden willingness to divulge the raw data, re-scaled to look like normalized 0–100 scores, makes no sense except as a patch for the replication problem and a fig leaf for the deeper problem of distortion.  The only explanation offered in BBW’s methodology is that it allows them “[t]o better highlight the competitiveness between schools within and across each index.”  I await a more cogent explanation.

A concluding thought.  Some readers may wonder, as I have, why BBW doesn’t employ the obvious fix: apply the stakeholder weights to normalized data, which would eliminate the distortion of the weights.  I have not received an explanation from BBW (except repeated assertions that they “stand by [their] rankings”).  Presumably, they don’t like the ranking that this fix would produce, placing, for example, Wharton at #20 and Berkeley Haas at #22, each well behind USC (#9), Emory (#11), Indiana (#13), Washington University in St. Louis (#14), Georgia Tech (#15), and William and Mary (#17).  But if BBW truly believed the virtuous claims they make about letting their stakeholders define the relative importance of the indexes, they would have no qualms about publishing the resulting ranking, as unorthodox as it may seem to conventional observers.  And for schools that now get buried under the weight of the current orthodoxy about rankings, it may create an opportunity to gain prominence.

Author Anjani Jain is the Deputy Dean for Academic Programs and a Professor in the Practice of Management at the Yale School of Management. His research interests include the analysis and design of manufacturing systems, optimization algorithms, and the probabilistic analysis of combinatorial problems. A long-time observer of MBA rankings, Jain previously served as Director of Wharton’s MBA Program, Vice Dean and Director of Wharton’s Graduate Division, and Vice Dean of Wharton’s MBA Program for Executives.

DON’T MISS: Bloomberg Businessweek’s Ranking Distortions Persist For Non-U.S. Schools
