The headlines attract many readers, including a sizable number of potential MBA applicants who then use those rankings to form judgments about the quality of business schools. But those more familiar with the world of MBA admissions know better: the primary purpose of MBA rankings is to generate page views and advertising revenue for the organizations that publish them, at times at the expense of objective assessment.
This state of affairs is perhaps best exemplified by the 2021 MBA ranking published by The Economist. Although over 40 highly regarded business schools declined to participate–including all of the M7 and European heavyweights such as LBS and INSEAD–The Economist released a woefully incomplete list that still claimed to present the “world’s best business schools.”
Even without this recent example of sensationalism, it has never been in an applicant’s best interest to pay too much attention to MBA rankings. Questionable assumptions, inherent biases, and out-of-context metrics plague the methodologies used by vendors–not to mention the difficulty of verifying the data supplied by schools. Applicants should always approach the lists with an abundance of caution.
Distorted Salary Metrics In MBA Ranking Methodologies
If you compare MBA rankings side-by-side, you will note dramatic differences between each list.
Of course, one source of minor variation between the rankings published by The Financial Times, Forbes, and U.S. News is the fact that The Financial Times considers international business schools while Forbes and U.S. News focus exclusively on US institutions.
And yet, as highlighted in the table below, the differences between these three rankings extend far beyond the inclusion of one or two foreign programs: reading across the table, you will rarely find agreement.
| Financial Times MBA Rankings | Forbes MBA Rankings | US News MBA Ranking |
|---|---|---|
| 15. NU Singapore | 15. UNC Kenan-Flagler | 15. Cornell – Johnson |
| 20. HKUST | 20. NYU Stern | 20. UNC Kenan-Flagler |
Note: The Financial Times and U.S. News data above are from 2020; the Forbes data are from 2019.
Taking a closer look at the methodologies employed by each vendor reveals the reason for the discrepancies: MBA rankings are determined by metrics that the vendor has decided to privilege over others, and thus each individual ranking is more a reflection of the bias inherent to the methodology than to overall MBA program quality.
What’s worse, not all metrics used by ranking vendors are fair in themselves, and many data points are taken out of context, giving certain schools distinct advantages over their peer institutions. This trend is especially apparent in two popular rankings–Forbes and the Financial Times–which will be examined here.
(For a complete analysis of the popular MBA rankings, visit our page: MBA Rankings – Useful Tools or Hot Garbage?.)
Forbes’ MBA Ranking
Based on a survey of over 4,000 MBA graduates, the Forbes MBA ranking zeroes in on compensation both pre- and post-MBA. From the responses collected, Forbes calculates a five-year gain per school: total compensation earned in the five years after graduation, minus tuition, fees, and forgone compensation. In short, financial return is the sole metric for the Forbes ranking.
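The five-year gain described above reduces to simple arithmetic. The sketch below is a hypothetical illustration of that formula; the function name and all figures are invented for the example, not Forbes’s actual data or methodology code:

```python
# A minimal, hypothetical sketch of the "five-year gain" arithmetic described
# above. All dollar figures are invented, illustrative numbers.

def five_year_gain(post_mba_salaries, tuition_and_fees, pre_mba_salary,
                   years_out_of_work=2):
    """Total compensation in the five years after graduation, minus
    tuition, fees, and the salary forgone while studying."""
    total_compensation = sum(post_mba_salaries)          # five years of pay
    forgone_compensation = pre_mba_salary * years_out_of_work
    return total_compensation - tuition_and_fees - forgone_compensation

# Hypothetical graduate: $85k pre-MBA, two-year program, $160k in tuition/fees
gain = five_year_gain(
    post_mba_salaries=[150_000, 160_000, 175_000, 190_000, 210_000],
    tuition_and_fees=160_000,
    pre_mba_salary=85_000,
)
print(gain)  # 555000
```

Note how sensitive the result is to the inputs: a higher pre-MBA salary or a longer program shrinks the gain, which is exactly the distortion discussed below.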
There are major problems with this approach, and to make matters worse, Forbes fails on multiple fronts to evaluate salaries fairly.
For one, Forbes only takes exercised stock options into account, which can disadvantage some career tracks–and thus certain programs–relative to others. For example, the lower ranking of Berkeley Haas compared to its position in U.S. News could be explained by the school’s high number of placements in the tech industry: some graduates take jobs at startups and receive valuable stock options that will not yet have been exercised within the five-year window that Forbes examines.
Similarly, Forbes normalizes salaries for cost of living, which inflates the rankings of certain MBA programs. Take, for example, business schools that place a significant number of graduates in Chicago–Booth, Kellogg, and Ross. Because the cost of living in Chicago is lower than in NYC, and starting salaries at companies like McKinsey, Bain, and BCG are the same across all offices nationally, Chicago-based programs receive a higher score for post-MBA compensation. But this does NOT reflect better career placements by Booth, Kellogg, and Ross when compared to East Coast schools such as HBS, Wharton, and Columbia. It is merely selection bias: MBA students who wish to work in Chicago are more likely to attend MBA programs in Chicago. No one believes that getting an MBB job in Chicago is a “better” career outcome versus getting an MBB job in NYC.
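The distortion above is easy to see with two identical offers run through a cost-of-living adjustment. The index values below are invented for illustration, not Forbes’s actual figures:

```python
# Hypothetical illustration of cost-of-living (COL) normalization.
# The COL indices are invented for this example, not Forbes's real data.

def col_adjusted_salary(salary, col_index):
    """Deflate a nominal salary by a cost-of-living index (1.0 = baseline)."""
    return salary / col_index

mbb_offer = 175_000  # same nominal MBB salary across US offices

chicago = col_adjusted_salary(mbb_offer, col_index=1.0)   # cheaper city
nyc = col_adjusted_salary(mbb_offer, col_index=1.3)       # pricier city

# Identical offers, yet the Chicago placement scores ~30% higher
# after adjustment, inflating the compensation metric for
# Chicago-feeding programs.
print(chicago > nyc)  # True
```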
This is not a question of whether salary metrics should be an important factor in MBA rankings: it is a question of simple fairness in measuring salaries in the first place. Separately, accounting for forgone earnings (i.e., opportunity cost) gives shorter programs an advantage, which particularly affects the publication’s international MBA ranking. Beyond that, it penalizes programs with high pre-MBA earners in the class (typically from finance, for example), even though most post-MBA salaries are standardized regardless of which school graduates are hired from. Opportunity cost tells us much more about candidates’ pre-MBA trajectories than about the quality of a business school.
The Financial Times’ MBA Ranking
I consider this ranking the gold standard of global MBA rankings: it has been running for decades and has the great merit of ranking programs across the world, dispensing with the segmentation into US and non-US. This in itself represents value for candidates, as it allows them to compare, along certain very specific metrics, “traditional” US options with some very attractive alternatives, mainly in Europe but also in Asia. Two other good reasons to take this ranking seriously are (1) its focus on gender balance, with three dedicated metrics (the percentages of female students, faculty, and board members), and (2) the fact that participating schools are periodically audited by (currently) KPMG. In an environment where little is known about how data quality is ensured, this is a refreshing effort at transparency and accuracy. Lastly, the FT has not been afraid to show its teeth: it excluded IE from its 2018 MBA ranking after reporting irregularities were uncovered. The IE MBA, previously in the top 10, never fully recovered.
Even so, the FT ranking is to be taken with a pinch of salt.
The Financial Times combines three categories: a student survey, school-supplied data, and research output (the number of articles published by faculty in 50 selected academic and practitioner journals, adjusted for faculty size). Whether the research output of a business school deserves a 15% weight in an MBA ranking is a debate in itself, and this last category is unique to the Financial Times, but the other categories are quite standard. And though the student survey does include qualitative questions, the majority of the questions cover topics like pre- and post-MBA salary.
In terms of quantitative data, the Financial Times treats metrics differently than other ranking bodies. For example, instead of focusing on GMAT scores and GPAs, the Financial Times places a distinct emphasis on international diversity: the percentage of students who come from a country other than the MBA program’s host country. Rewarding diversity is a great idea, but defining it this narrowly means that European MBA programs will almost always outperform American ones. And despite its remarkable focus on gender balance, the ranking fails to recognize other aspects of diversity, such as US MBA programs’ efforts to recruit more under-represented minority students.
Despite the FT ranking’s relative bias towards non-US programs, US programs have fared well historically, with Wharton and Harvard dominating the ranking: they have claimed the number one spot 10 and 6 times respectively. Among non-US programs, LBS and INSEAD are tied with Stanford, each having claimed the top spot 3 times. With INSEAD, LBS, and IESE regularly in the top 10, this ranking, despite its flaws, is currently the most solid tool for comparing US and European programs.
Follow Your Fit
Time and again, the consultants at Menlo Coaching have to dissuade MBA applicants from relying too heavily on rankings. And for all the reasons outlined above, it’s clear why they are not to be blindly trusted: rankings use arbitrary metrics to make biased and sometimes unrealistic comparisons across US and international business schools. If you count on MBA rankings to tell you which school is best for you, you’re bound to be disappointed.
Instead, take the time to think about your professional aspirations, your temperament, and your personality. Then research MBA programs to figure out which school is right for you–regardless of rankings. At the end of the day, highly personal factors–where you see yourself studying and working post-graduation, as well as cultural fit–are, alongside broader notions of prestige, what will likely drive your decision.
Pascal Michels, a senior admissions consultant for Menlo Coaching, has experienced every perspective in the world of business school admissions. After obtaining his own MBA from IESE Business School, Pascal began a career in financial auditing in Paris; for a time, he was actively involved in MBA recruiting for Citi in London. He eventually returned to IESE as director of MBA admissions. At Menlo Coaching, he has drawn on his varied background to help clients understand how their profiles might be perceived throughout the MBA admissions process and beyond, with particular emphasis on successful post-MBA career planning.
Comments or questions about this article? Email us.