An analysis of the 2018 and 2019 rankings of U.S. business schools by Bloomberg Businessweek (BBW) reveals that the problem of irreproducibility identified in the 2021 ranking can be traced to at least 2018, when the magazine made major changes to its methodology and introduced the ‘crowd-sourcing’ of weights to place on the ranking’s components (‘indexes’ in BBW’s lexicon). Applying these weights to the ‘normalized’ index scores provided by BBW yields a ranking considerably different from the published version—for both 2018 and 2019.
Applying the weights to standardized scores (i.e., z-scores) computed from the normalized scores also fails to replicate the published rankings. The 2018 and 2019 rankings generated by applying BBW’s weights to their normalized index scores are shown at the end of this article.
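The replication check described above is a simple weighted sum: multiply each school’s normalized index scores by the corresponding weights, add them up, and rank by the composite. A minimal sketch in Python—with hypothetical scores for four schools standing in for BBW’s actual data, and the crowdsourced weights taken from one of the rows in the tables below:

```python
import numpy as np

# Hypothetical normalized index scores (4 schools x 4 indexes).
# These are stand-ins for illustration, not BBW's published data.
scores = np.array([
    [0.95, 0.70, 0.80, 0.60],
    [0.85, 0.90, 0.75, 0.70],
    [0.80, 0.60, 0.95, 0.90],
    [0.70, 0.85, 0.65, 0.95],
])

# Crowdsourced weights (one of the four-index weight rows below), summing to 1.
weights = np.array([0.385, 0.231, 0.279, 0.105])

# Composite score: weighted sum of each school's normalized index scores.
composite = scores @ weights

# Convert composites to ranks (rank 1 = highest composite score).
ranks = (-composite).argsort().argsort() + 1

# The same check on standardized scores: z-score each index (column) first,
# then apply the weights and rank.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
z_ranks = (-(z @ weights)).argsort().argsort() + 1
```

Neither variant reproduces BBW’s published ordering when run on the magazine’s actual normalized scores, which is the crux of the irreproducibility problem.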
I again computed the weight vectors which, when applied to the normalized scores, replicate the published rankings. These ‘true weights,’ shown in the tables below, reveal the extent to which the crowd-sourced weights must be distorted to produce the published ranking.
Bloomberg Businessweek Ranking Weights

| | | | | | |
|---|---:|---:|---:|---:|---:|
| BBW’s crowdsourced weights | 35.7% | 25.8% | 17.8% | 12.0% | 8.6% |
| True weights to replicate ranking | 58.5% | 7.7% | 10.5% | 7.9% | 15.4% |

| | | | | |
|---|---:|---:|---:|---:|
| BBW’s crowdsourced weights | 37.3% | 21.3% | 25.7% | 15.7% |
| True weights to replicate ranking | 61.9% | 11.6% | 14.9% | 11.6% |

| | | | | |
|---|---:|---:|---:|---:|
| BBW’s crowdsourced weights | 38.5% | 23.1% | 27.9% | 10.5% |
| True weights to replicate ranking | 60.2% | 20.3% | 10.1% | 9.4% |
The general pattern of distortion is similar across years: the compensation weight is inflated substantially at the expense of the other index weights, especially networking and learning. I do not believe that the published rankings are actually generated by applying the true weights shown above. Rather, the true weights, computed via an optimization model that minimizes deviations from the published ranking, are a measure of the distortion of weights induced by the inexplicable and non-replicable process that generates BBW’s published rankings. I have been seeking an explanation from BBW, but my messages to a number of senior editors and leaders over the last several days have so far yielded no acknowledgement, let alone an explanation.
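The idea behind the true-weight computation can be sketched as a small optimization: search the space of weight vectors (non-negative, summing to one) for the one whose induced ranking deviates least from the published ranking. The toy version below uses a crude random search over the weight simplex with hypothetical scores and a hypothetical published ranking; it illustrates the concept only and is not the exact optimization model used in my analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical normalized index scores (4 schools x 4 indexes) and a
# hypothetical published ranking -- stand-ins, not BBW's actual data.
scores = np.array([
    [0.95, 0.70, 0.80, 0.60],
    [0.85, 0.90, 0.75, 0.70],
    [0.80, 0.60, 0.95, 0.90],
    [0.70, 0.85, 0.65, 0.95],
])
published_rank = np.array([1, 2, 3, 4])

def ranks_from(weights):
    """Rank schools (1 = best) by the weighted sum of their index scores."""
    composite = scores @ weights
    return (-composite).argsort().argsort() + 1

def rank_deviation(weights):
    """Total absolute difference between induced and published ranks."""
    return np.abs(ranks_from(weights) - published_rank).sum()

# Random search over the weight simplex: Dirichlet(1,...,1) samples are
# uniform over non-negative weight vectors that sum to 1. A real model
# would use a proper optimizer, but the objective is the same.
best_w, best_dev = None, np.inf
for _ in range(20000):
    w = rng.dirichlet(np.ones(scores.shape[1]))
    d = rank_deviation(w)
    if d < best_dev:
        best_w, best_dev = w, d
```

When a weight vector exists that reproduces the published ranking exactly, the best deviation found is zero; the interesting quantity is how far that vector sits from the crowdsourced weights.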
Another measure of distortion is the artificial ‘boost’ in rank—or conversely the loss of rank—that BBW’s departure from the stated methodology has produced for most schools included in the ranking. The following table shows that the distortions of rank were roughly similar between 2018 and 2019. To compare the extent of distortion, I limited myself to the top 84 schools in the 2018 and 2019 rankings (because the 2021 ranking had only 84 U.S. schools).
(Editor’s Note: Bloomberg Businessweek continues to stand by its ranking and methodology and claims that Jain’s analysis is inaccurate. “By design, our proprietary ranking cannot be replicated or gamed using published data,” a spokesperson told Poets&Quants. Jain, in turn, calls Businessweek’s response “disingenuous and nonsensical.”)
The mean absolute deviation (which counts all differences, positive or negative, between the published rank and what the rank would have been without distortion) for the 84 schools was below 5 in both 2018 and 2019. The 2021 ranking’s distortion became more acute up and down the list of schools, with the mean absolute deviation increasing to 6.3. The most dramatic change from 2018/19 to 2021 was in the top 20 schools, for which the mean absolute deviation went from 1.5 in 2018 to 3.2 in 2019 to 5.1 in 2021. In other words, the distortions of rank became very large among the top tier schools, with Wharton, MIT, and Duke receiving boosts of 19, 14, and 14, respectively. On the other hand, UT Dallas, which would have ranked #9 with the correct application of BBW’s methodology, got pushed down by 22 ranks, and Emory by 8. The following table indicates which schools were the top five gainers and losers due to the distortion:
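The boost/loss and mean absolute deviation figures are straightforward to compute once both rankings are in hand. A sketch with hypothetical ranks for five schools (toy numbers, not the actual BBW data):

```python
# Hypothetical ranks keyed by school: the published rank versus the rank
# each school would have had under the stated methodology.
published = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}
recomputed = {"A": 3, "B": 1, "C": 2, "D": 5, "E": 4}

# Boost = recomputed rank minus published rank: a positive value means the
# school was pushed up (given a better published rank than its scores merit);
# a negative value means it was pushed down.
boost = {s: recomputed[s] - published[s] for s in published}

# Mean absolute deviation: average size of the rank distortion, ignoring sign.
mad = sum(abs(b) for b in boost.values()) / len(boost)
```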
Largest Ranking Boosts & Losses
| Largest boosts and losses | 2018 | 2019 | 2021 |
|---|---|---|---|
| Schools with 5 largest boosts | Rutgers (18), Florida (17), Boston College (16), UIUC (12) | Minnesota (16), Fordham (12), Boston Univ. (11), Washington U. (10), USC (10) | Wharton (19), Geo. Wash. (15), MIT (14), Duke (14), Rutgers (14), Ohio St. (14), Purdue (14), Boston Col. (14) |
| Schools with 5 largest losses | Willamette (-17), Mississippi (-17), Chapman (-15), Cincinnati (-12), Tampa (-12) | Tennessee (-16), Mississippi (-16), Utah (-13), San Diego (-11), Syracuse (-10) | UT Dallas (-22), Baylor (-18), Wm. & Mary (-15), Mississippi (-15), BYU (-12) |

Note: The numbers in parentheses indicate the boost (+ve) or loss (-ve) in ranking due to BBW's departure from its stated methodology.
This is a lamentable state of affairs with the BBW ranking. If BBW were to fix the error that causes the distortion, the ranking would produce a sharp break from its previous pattern. The situation brings to mind the Chinese proverb, “he who rides a tiger is afraid to dismount.”
Anjani Jain is the deputy dean for academic programs at Yale University's School of Management. His research interests include the analysis and design of manufacturing systems, optimization algorithms, and probabilistic analysis of combinatorial problems. He joined the faculty of the Wharton School of the University of Pennsylvania in 1986 and served for 26 years before joining Yale SOM.
(Go to the following pages to see Anjani Jain's full analysis of Bloomberg Businessweek's 2018 and 2019 MBA rankings)