Bloomberg Businessweek today (Oct. 14) rejected challenges to the credibility of its 2021 MBA ranking. The magazine’s editors said they are standing by the ranking and the methodology that produced it.
The statement comes after an in-depth analysis of the ranking by Anjani Jain, deputy dean overseeing academic programs at Yale University’s School of Management. Using Businessweek’s reported scores for each school, Jain found that applying the stated weights to the five metrics used by the magazine would lead to results that are “egregiously off-kilter when compared to the published ranking” (see his full analysis here). He calculated that the only way to replicate the ranks published by Businessweek is to apply a very different set of weights to the five metrics used to rank programs.
Jain’s recalculation would change the positions of 23 of the Top 25 business schools. The academic found that applying the stated weights to scores published by Businessweek would cause MIT’s Sloan School of Management to fall uncharacteristically from seventh to 21st; the Jindal School of Management at the University of Texas at Dallas to oddly skyrocket 22 places into the Top Ten, at ninth; and Emory University’s Goizueta Business School to achieve its highest rank ever in tenth place, eight spots better than its published rank of 18th.
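Jain’s check amounts to a straightforward weighted sum: multiply each school’s published index scores by the stated weights, add them up, and re-rank. The sketch below illustrates the mechanics with entirely hypothetical weights, schools, and scores, not Businessweek’s actual figures.

```python
# Illustration of Jain's replication check: apply stated index weights
# to each school's published (normalized) index scores and re-rank.
# All weights and scores below are made up for demonstration.

WEIGHTS = {"compensation": 0.35, "learning": 0.25, "networking": 0.20,
           "entrepreneurship": 0.10, "diversity": 0.10}  # sum to 1.0

SCORES = {  # hypothetical published 0-100 normalized index scores
    "School A": {"compensation": 95, "learning": 80, "networking": 70,
                 "entrepreneurship": 60, "diversity": 50},
    "School B": {"compensation": 70, "learning": 90, "networking": 85,
                 "entrepreneurship": 80, "diversity": 75},
    "School C": {"compensation": 60, "learning": 70, "networking": 95,
                 "entrepreneurship": 90, "diversity": 85},
}

def weighted_total(scores: dict) -> float:
    """Weighted sum of a school's index scores under the stated weights."""
    return sum(WEIGHTS[idx] * val for idx, val in scores.items())

# Rank schools by descending weighted total.
ranked = sorted(SCORES, key=lambda s: weighted_total(SCORES[s]), reverse=True)
for rank, school in enumerate(ranked, start=1):
    print(rank, school, round(weighted_total(SCORES[school]), 2))
```

If the order this produces differs sharply from a published ranking, as Jain found it did, then either the published scores, the stated weights, or the combining procedure cannot all be what the methodology claims.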
‘THE PREMISE OF ANJANI JAIN’S ANALYSIS AND YOUR ACCOMPANYING STORY IS INACCURATE’
But a spokesperson for Bloomberg News maintains that “the premise of Anjani Jain’s analysis and your accompanying story (published Oct. 8) on Bloomberg Businessweek’s B-school ranking is inaccurate. Both should be corrected. Neither Poets&Quants nor Yale had access to the raw scores that are used in calculating the ranking, which Businessweek pointed out to Yale multiple times prior to the analysis’ publication.”
The spokesperson suggested that those raw scores, not the reported scores for each of the five indexes used to rank MBA programs, are what Jain should have used in his analysis. “We calculate Bloomberg Businessweek’s B-school ranking by using raw scores, which are not published, and by applying index weights as exactly stated in the methodology,” the spokesperson added. “By design, our proprietary ranking cannot be replicated or gamed using published data. Disclosing our raw data would create the possibility that the rankings could be reverse-engineered or gamed by a school for an unfair advantage. In addition, our methodology is repeatedly and carefully vetted by multiple data scientists. We fully stand by our ranking and our methodology.”
Jain fires back in a statement provided to Poets&Quants, calling the magazine’s response, among other things, “a deceitful canard and a deflection.” “Bloomberg Businessweek’s response to the disclosure that its ranking of business schools is irreproducible from their published data and stated methodology is disingenuous and nonsensical,” wrote Jain. “In the two weeks since I brought the error to the editors’ attention, they offered no explanation and have refused to answer simple questions I asked repeatedly, which could have identified the source of the error, e.g., were the raw data of index scores normalized (linearly re-scaled to a 0-100 scale) or standardized (converted to z-scores) before applying the crowdsourced index weights?
‘THE PUBLISHED RANKING IS A SERIOUS DISTORTION OF WHAT ONE WOULD GET BY APPLYING BBW’S STATED METHODOLOGY’
“Instead,” he added, “I received non sequitur obfuscations such as: ‘By design, our proprietary ranking cannot be replicated or gamed using published data. Disclosing our raw data would create the possibility that the rankings could be reverse-engineered or gamed by a school for an unfair advantage.’ I pointed out that the distortions I identified in the published ranking were not the result of some secret element of the methodology successfully thwarting my attempts at reverse-engineering or gaming the ranking. My simple calculations, which anyone can replicate, merely reveal that the published ranking is a serious distortion of what one would get by applying BBW’s stated methodology in a statistically rigorous way to the index scores data they published.”
Jain says he never asked for, nor claimed to be in possession of, the magazine’s raw data. “What BBW published with its ranking was a re-scaled (or normalized) version of the data for each index,” he says. “But if BBW did the weighting of index scores in a statistically valid manner, the published normalized scores are a sufficient proxy for the raw data. The statistically valid way to preserve the weights is to apply them to either the normalized or standardized versions of the raw data. That is what I did, and found the resulting rankings to be widely divergent from what BBW published. To get the standardized index scores (i.e., z-scores), I used BBW’s normalized scores. It is an elementary statistical fact that the z-scores I computed from normalized data are identical to the z-scores one would get from the raw data (as long as the re-scaling is linear). So if BBW’s secret calculations are statistically valid, we have sufficient information in the published data to be able to replicate the ranking. BBW’s claim that my calculation is ‘inaccurate’ because I did not have access to raw data is a deceitful canard and a deflection.
“The only explanations that remain are grievously damaging to BBW’s claim that the ranking is valid. Either they applied the weights before normalizing or standardizing the raw data, which by their own admission is not scaled uniformly across indexes, or the published ranking was perturbed after computation. The first possibility is an inexcusable statistical error, which BBW can still correct in an amended ranking. The second is too troubling to contemplate. As I mentioned in the previous article, I don’t know which of these possibilities explains the non-replicability of the published ranking, and I would rather not speculate. But it is incontrovertible that the published ranking is a serious distortion of what a valid statistical procedure would produce.”
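The “elementary statistical fact” Jain invokes, that z-scores are unchanged by any linear re-scaling of the data, is easy to verify numerically. The sketch below uses made-up index scores; it shows that z-scores computed from a min-max re-scaled (0-100) version of a dataset match the z-scores of the raw data.

```python
# Demonstration that z-scores survive linear re-scaling: standardizing
# min-max-normalized data gives the same z-scores as standardizing the
# raw data. The raw scores below are made up for illustration.
from statistics import mean, pstdev  # population std dev, as in z-scoring a fixed cohort

def z_scores(data):
    """Convert a list of values to z-scores: (x - mean) / std."""
    m, s = mean(data), pstdev(data)
    return [(x - m) / s for x in data]

raw = [3.1, 4.5, 5.2, 6.8, 2.9]               # hypothetical raw 1-7 index scores
lo, hi = min(raw), max(raw)
rescaled = [100 * (x - lo) / (hi - lo) for x in raw]  # min-max re-scale to 0-100

# Min-max normalization is a linear map, so it shifts and stretches the
# data but leaves every z-score unchanged (up to floating-point noise).
z_raw = z_scores(raw)
z_norm = z_scores(rescaled)
print(all(abs(a - b) < 1e-9 for a, b in zip(z_raw, z_norm)))  # True
```

This is why, on Jain’s account, the published normalized scores suffice to replicate any ranking built on standardized raw scores.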
FROM THE BEGINNING, BUSINESSWEEK HAS DISPUTED THE ANALYSIS
Jain’s initial analysis of the ranking was published by Poets&Quants on Oct. 8. He concluded in an essay that the MBA ranking could not be replicated from the published data and stated methodology. An accompanying story on the analysis posed the following question: “Did Businessweek Botch Its Latest MBA Ranking?”
That article included an email exchange between Jain and Caleb Solomon, the editor in charge of the ranking, who maintained that Jain was in error.
“Sorry you went through all that,” wrote Solomon. “The reason that multiplying the weights assigned to the five components by their normalized scores doesn’t equal their final normalized score is because:
“For all indexes, schools first receive a raw score of 1 to 7 (to reflect the seven choices offered for each survey question). For “hard” data, like salaries and employment rates, figures are re-scaled 1 to 7 based on the minimum/maximum amounts in the entire cohort.
“These 1-7 scores for each index are then weighted (according to our index weightings) into a total raw score between 1 and 7. This final raw score is then re-scaled 0-100. The school with the lowest total raw score gets a 0, while the one with the highest gets a 100. All others are scored proportionally in between. So for example, if a school’s average Networking Index score was 4.5 out of 7, but that was the minimum score among all non-U.S. schools, it receives a 0 for its normalized score that we display.
“I hope that solves the riddle.”
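The pipeline Solomon describes, weighting the 1-7 index scores into a raw total and then min-max re-scaling the totals to 0-100, can be sketched as follows. The weights and scores are illustrative, not Businessweek’s actual figures.

```python
# Sketch of the pipeline Solomon describes (all numbers are made up):
# 1. Each school gets a raw 1-7 score on each index.
# 2. The 1-7 scores are combined via the index weights into a raw total.
# 3. Raw totals are min-max re-scaled to 0-100 across the cohort:
#    lowest school gets 0, highest gets 100, others fall in between.

WEIGHTS = [0.35, 0.25, 0.20, 0.10, 0.10]    # hypothetical index weights

raw_index_scores = {                         # hypothetical 1-7 scores per school
    "School A": [6.5, 5.0, 4.5, 4.0, 3.5],
    "School B": [5.0, 6.0, 5.5, 5.0, 4.5],
    "School C": [4.0, 4.5, 6.5, 6.0, 5.5],
}

# Step 2: weighted raw total, still on the 1-7 scale.
raw_totals = {s: sum(w * x for w, x in zip(WEIGHTS, scores))
              for s, scores in raw_index_scores.items()}

# Step 3: min-max re-scale the totals to 0-100.
lo, hi = min(raw_totals.values()), max(raw_totals.values())
displayed = {s: 100 * (t - lo) / (hi - lo) for s, t in raw_totals.items()}
print(displayed)
```

Note the order of operations: the weights here are applied to the raw 1-7 scores before any normalization, which is exactly the step Jain argues distorts the effective weights when the indexes are not on a uniform scale.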
‘WE CAN’T DISCLOSE ALL ASPECTS OF OUR METHODOLOGY BEYOND WHAT WE’VE ALREADY PUBLISHED’
Unsatisfied by that explanation, Jain noted that the raw data had to be standardized or normalized before the weights could be applied to the index scores; otherwise, the weighted raw scores would greatly distort the crowdsourced weights, as his analysis revealed. He followed up with further questions. Solomon, in turn, responded in an email. “We can’t disclose all aspects of our methodology beyond what we’ve already published,” he maintained. “As I’m sure you can appreciate, we invest a great deal of time and effort into creating a proprietary ranking that can be neither replicated nor gamed. As I said before, we stand by our rankings and our methodology.”
Since that exchange, Jain also went back to examine the magazine’s ranking of non-U.S. schools. Again, he was unable to replicate the results of the magazine’s European and Asian MBA rankings (see A Challenge To Businessweek’s European & Asian MBA Rankings).