‘I HOPE THIS SOLVES THE RIDDLE’
“Sorry you went through all that,” wrote Solomon. “The reason that multiplying the weights assigned to the five components by their normalized scores doesn’t equal their final normalized score is because:
“For all indexes, schools first receive a raw score of 1 to 7 (to reflect the seven choices offered for each survey question). For ‘hard’ data, like salaries and employment rates, figures are re-scaled 1 to 7 based on the minimum/maximum amounts in the entire cohort.
“These 1-7 scores for each index are then weighted (according to our index weightings) into a total raw score between 1 and 7. This final raw score is then re-scaled 0-100. The school with the lowest total raw score gets a 0, while the one with the highest gets a 100. All others are scored proportionally in between. So for example, if a school’s average Networking Index score was 4.5 out of 7, but that was the minimum score among all non-U.S. schools, it receives a 0 for its normalized score that we display.
“I hope that solves the riddle.”
BEST B-SCHOOLS EDITOR: ‘WE STAND BY OUR RANKINGS AND OUR METHODOLOGY’
For Jain, the riddle remained unsolved. So he immediately wrote back seeking more clarification. Ultimately, Jain was stonewalled. “We can’t disclose all aspects of our methodology beyond what we’ve already published,” maintained Solomon in an email. “As I’m sure you can appreciate, we invest a great deal of time and effort into creating a proprietary ranking that can be neither replicated nor gamed. As I said before, we stand by our rankings and our methodology.”
An email to Solomon from Poets&Quants also did not receive a direct reply. Instead, he referred Poets&Quants to a public relations representative who would presumably repeat what Solomon had told Jain by email.
‘MEDIA RANKINGS INFLUENCE TENS OF THOUSANDS OF PROSPECTIVE STUDENTS’
The explanation provided by Solomon was unconvincing to Jain because rescaling the scores would still preserve their ordering. Re-scaling cannot explain the wide divergence between BBW’s published ranks and the ranking Jain recalculated by applying the component weights to the re-scaled 0-100 scores. The only component weightings that come close to replicating the published ranking are still very different from Businessweek‘s stated methodology.
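Jain’s point is easy to verify: a positive linear rescale like the 0-100 min-max transform is monotonic, so it can never change the order of schools. A minimal check, using arbitrary illustrative scores (assuming, as in any real cohort, that the scores are not all equal):

```python
def rescale_0_100(scores):
    """Min-max rescale: lowest score -> 0, highest -> 100."""
    lo, hi = min(scores), max(scores)
    return [100 * (s - lo) / (hi - lo) for s in scores]

def ranking(scores):
    """Indices of schools from highest score to lowest."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

raw = [4.5, 5.45, 4.8, 5.1]      # hypothetical total raw scores
scaled = rescale_0_100(raw)

# Rank order is identical before and after rescaling.
assert ranking(raw) == ranking(scaled)
```

Because the transform only shifts and stretches the scale, any divergence between the published ranks and ranks recomputed from the weighted scores must come from somewhere other than the rescaling step.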
The editor’s lengthy explanation of the methodology for the ranking also offers no explanation for the differences. Concludes Jain: “The methodology described on BBW’s site is Byzantine, bearing greater resemblance to alchemy than to statistics in the various transformations and manipulations of data.”
Ultimately, Jain doesn’t know for sure how Businessweek screwed up its ranking, but that may be beside the point. The greater takeaway is what it means. “So what is the root of the error?” he asks in his analysis. “I don’t know and would rather not speculate. But it is indubitable and troubling that BBW’s published ranking cannot be replicated by their stated methodology. This state of affairs would be regrettable for any media publication; it is especially so for the magazine that launched b-school rankings and requires participating schools to ‘abide by Bloomberg’s strict code of ethics.’ Media rankings of B-schools influence tens of thousands of prospective students each year, and participating schools should indeed compile the requested data with unimpeachable integrity and diligence. A parallel expectation of methodological rigor, transparency, and data integrity rests upon media organizations producing the rankings and seeking public trust.”