The Financial Times MiM Ranking Shows What’s Wrong With Rankings


Ranking critics often focus on the volatility of the lists cranked out by media organizations every year. But yesterday’s Financial Times ranking of the best Master’s in Management programs has the opposite problem. The winner, the University of St. Gallen in Switzerland, topped the ranking for the tenth consecutive year.

It is a record of consistency virtually unheard of in business school rankings. Some might argue that the lack of movement at the top of this ranking is a sign of its reliability. Truth is, it’s more likely an indication of what is wrong with this and many other lists that purport to rate the quality of a business school and its individual programs.

Consider these facts: St. Gallen enrolls a mere 46 students in its MiM program, yet it is ranked alongside rival programs with more than ten, and in some cases more than 20, times that number. Delivering a quality educational experience to 490 students, as HEC Paris does, is dramatically different from educating a single class that doesn't even fill a classroom. Or consider Essec Business School's enrollment of 927 MiM students, or Edhec Business School's 986. In fact, five of the ranked programs enroll more than 1,000 students each.


Yet here you have schools with far larger programs, with more students, faculty, support staff, and alumni, in the same ranking as a program the size of a single class. You could easily argue that MiMs from St. Gallen or other schools with fewer than 50 students are graduating into alumni networks that are not nearly as robust, and therefore not as valuable, as those of schools with a critical mass of students. It's also well known that corporate recruiters favor schools where, like fishermen, they have more fish to catch. And the resources required to run a larger program are far more substantial than those needed for a single class.

So is it entirely fair to have these schools competing against each other in the same ranking? Probably not, especially when the disparities are that great. But the bigger problem with this ranking is that it only includes schools willing to cooperate with it. The master's in management is the predominant graduate business degree in Europe, so one would expect a ranking of these programs to be dominated by European schools. Still, it strains credulity that not a single U.S. school with a standalone program made this list of 90 schools.

The U.S. is the epicenter of business education. Prospective students from every corner of the world aspire to come to the U.S. for their degrees. And there are many world-class U.S. business schools with world-class MiM programs, including Northwestern Kellogg, MIT Sloan, Michigan Ross, Yale SOM, USC Marshall, and Duke Fuqua, to name a few. The complete absence of these major business school brands renders the list virtually worthless as a global ranking, just as U.S. News & World Report's annual U.S.-centric list of the best MBA programs is merely a list of U.S. options.


After all, when these U.S. schools are on the FT’s full-time MBA ranking, they generally outperform a good many of the highly ranked European schools. Wouldn’t the same outcome occur if MIT, Northwestern, Yale, Duke, and Michigan were included in this MiM ranking? In most cases, they share the same faculty, the same campus, and similar admission standards and career management offices. St. Gallen’s MBA program, for example, is ranked 68th. MIT Sloan is sixth. Kellogg is 11th. Yale is 14th. Duke Fuqua is 16th. Michigan Ross is 30th.

The same is true of the best European schools, nearly all of which rank above St. Gallen's MBA program. INSEAD's MBA program is ranked fourth, yet because the school just admitted its first MiM cohort, it is not ranked at all. London Business School is ranked seventh in the world for its MBA program, while HEC Paris is ranked ninth.

There’s more. Unlike some rankings, the Financial Times does not reveal the underlying index numbers used to produce the actual numerical ranks. This lack of transparency makes it impossible for would-be students to see how far ahead of or behind one another the programs are, or to determine whether the difference between two ranks is statistically meaningful. Instead of revealing the index metrics, the FT simply acknowledges the problem, noting that 180 points separate the top programme, at the University of St Gallen, from the school ranked 90. “The schools are divided into four groups, indicated by bold lines: those in groups one and two score above the average for the cohort, and groups three and four are below it,” according to the FT. “The difference in score between schools ranked consecutively is greater within groups one and four than in groups two and three.”

And then there is the methodology itself. The ranking neither attempts to measure the quality of a program's incoming students nor asks the alumni who are surveyed to assess the actual academic experience or whether they would recommend it to others. There is no assessment of the quality of teaching, the accessibility of the professors, impressions of one's classmates, or the quality of the course content. It is as if the academics don't even count. Instead, the salary of graduates three years after graduation is given the most weight, 20%, while the increase in salary between graduation and three years later is weighted 10%. The payoff of the program is obviously important, and it is reflected in those numbers, but the absence of any attempt to measure the academics or the incoming students leaves the ranking suspect.

The bottom line: The best way to use a ranking is to understand its limitations. And sometimes, the limitations outweigh the benefits. This is one of those times.
