When business education professionals get together, it is only a matter of time before talk turns to rankings. Academics and administrators, commentators and consultants all have an encyclopedic knowledge of the methodologies the various rankings use, along with their perceived strengths and weaknesses. Prospective students, at whom the lists are ostensibly aimed? Not so much.
Alumni often say that they wish they had understood them better at the start of their business school journey. We thought we would do something about that, by writing a short guide to using one of the most-followed lists, the Financial Times Global MBA ranking.
The aim is to help potential students understand what the numbers mean, show how to use the ranking to choose the right program, and point out some of its peculiarities and limitations. With the 2019 FT ranking just five days away from publication on Jan. 28, here’s our guide to understanding what the list really means.
Lesson #1: It’s complicated
The annual FT list is the best-known MBA ranking — at least in Europe and Asia — and international students tell us that it is the one they respect most. This year’s ranking will be the British newspaper’s 21st list. It is generally considered a formidable piece of work, synthesizing 20 different pieces of data: not just salary increases for alumni, but also more obscure measures such as the gender balance of programs, the international mobility of alumni, and the number of PhDs a school awards (not that this last metric should matter much to an MBA student). That gives the list at least a veneer of sophistication compared with some other rankings, even if many of the metrics the FT uses matter less to the overall quality of an MBA experience.
It’s also highly transparent. Previous rankings are easily accessible (something that is not true of U.S. News) and so are year-over-year changes in a school’s position. The FT also publishes a full explanation of the methodology it employs (you can read about it here), and the school-provided data is occasionally audited by the accounting firm KPMG. That’s not just window-dressing: in 2018, IE Business School in Spain was excluded because of irregularities in the data it provided. The list also covers a large share of the best business schools in the world. In 2018, 155 schools responded to the FT’s survey, and such is its reputation that a number of schools respond only to the FT and do not appear in other lists, making it the most comprehensive global ranking. Responses from alumni “inform” eight of the ranking criteria, while the remaining metrics are based on school-reported data. So, at least to some extent, a good portion of the data is independent of the schools themselves.
Will there be any changes to the methodology this year? Unlikely. Although a new statistician is in charge, another strength of the ranking is its “stability,” meaning that there are rarely radical changes in the methodology from year to year. (For the record, the FT declined to comment for this article.)
For all its admirable qualities, though, the FT ranking comes in for criticism in some circles. One big question: given that the ranking is made up of so many elements, does it actually measure anything meaningful? And there is no question that the criteria the newspaper uses to rank programs significantly favor schools outside the U.S. As a result, the list deliberately diminishes the true quality of U.S. MBA programs when compared with other schools. The FT ranking might charitably be thought of as something of a Rolls-Royce: impressive, but you’ll never quite understand what goes on under the bonnet.
Lesson #2: It is very money focused
Many criticize the FT ranking for putting too much weight on pay and placement. Although many business school deans sell their MBA programs as “transformational” experiences, 40% of a school’s FT score is linked directly to pay: 20% comes from the average alumni salary three years after graduation, and another 20% from the average increase over alumni’s pre-MBA salaries. Those who go to work in government or for NGOs are excluded from the data crunching, which evens things out somewhat but also means the overall figures are artificially inflated, especially once salaries are adjusted for purchasing power parity, a conversion that gives a sizable lift to schools whose graduates work in lower-cost countries.
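To make the arithmetic concrete, here is a minimal sketch of how the two pay criteria described above might combine into a score. Only the 20% + 20% weights come from the text; the normalization ceilings, sample salaries, and function names are invented for illustration and are not the FT’s actual formula.

```python
# Illustrative sketch of the two pay-linked criteria. The 0.20 weights
# are from the article; the $200,000 salary ceiling and 150% increase
# ceiling are invented normalization assumptions, not the FT's method.

def pay_component(salary_today_ppp, salary_pre_mba_ppp,
                  w_salary=0.20, w_increase=0.20):
    """Return the pay-linked share of a hypothetical 0-to-1 score."""
    # Criterion 1: PPP-adjusted alumni salary three years out,
    # normalized against an assumed $200,000 ceiling.
    salary_score = min(salary_today_ppp / 200_000, 1.0)
    # Criterion 2: percentage increase over the pre-MBA salary,
    # normalized against an assumed 150% ceiling.
    increase = (salary_today_ppp - salary_pre_mba_ppp) / salary_pre_mba_ppp
    increase_score = min(increase / 1.50, 1.0)
    return w_salary * salary_score + w_increase * increase_score

# A hypothetical alum earning $150k (PPP) after a $60k pre-MBA salary
# maxes out the increase criterion but not the salary one:
print(round(pay_component(150_000, 60_000), 3))  # 0.35
```

Even in this toy version, the point stands: up to 0.40 of the total score is reachable through pay alone, so a school whose alumni chase high salaries starts with a large built-in advantage.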
The upshot? Schools that concentrate on sustainability, work with governments, or focus on producing entrepreneurs — who might accept lower starting salaries — may rank lower than those that produce lots of investment bankers and consultants. And because the data is based on salary alone, excluding performance bonuses, stock options, and other equity awards, the compensation picture the FT gathers is woefully incomplete.
After all, 39% of this year’s graduating MBA class at Stanford reported receiving equity in their starting compensation packages, while 72% expected first-year performance bonuses averaging $64,529. Few schools in the world can boast graduates with those kinds of extras in their paychecks, and excluding these elements of pay, which accrue mostly to the elite U.S. business schools, makes competing programs look better than they really are. Even students who are not money-focused should be aware that salary plays a big part in the ranking. Prospective students should also understand that the figure records salaries three years after graduation, not immediately after it, and that the data comes from alumni who know the Financial Times is ranking their schools on the numbers they provide.
While 8,300 alumni from the class of 2014 completed the FT survey behind the 2018 ranking, that is a response rate of only 38%. It’s worth asking whether the respondents are representative or merely the most satisfied graduates and the highest earners, and whether they have a vested interest in reporting high salaries: alumni may feel that a high-ranking school on their resume is advantageous, which gives them an incentive to exaggerate a bit.
Lesson #3: Apples are not oranges
It’s obvious that schools near Wall Street or the City of London will place more people in high-paying jobs than those tied to mainstream industries in, say, Germany’s industrial heartland, and those high-paying jobs translate into higher rankings. But is the business education in London or New York really any better than the one you can get in Düsseldorf? And isn’t a one-year MBA program a very different experience from a two-year one? If so, the FT ranking can be accused of comparing apples and oranges.
Others quibble that converting salaries at purchasing power parity rates distorts them, and that because graduates work in many different countries, their salary levels can’t meaningfully be compared. How much value is there in comparing a pay packet in China with one in Denmark? Probably not a whole lot.
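For readers curious what that conversion actually does, here is a rough sketch of a purchasing power parity adjustment of the kind being quibbled over. The conversion factors and salaries below are invented round numbers, not real PPP data; they simply show how nominally very different pay packets can collapse into similar figures once converted.

```python
# Sketch of a PPP salary conversion. The factors below are invented
# illustrative values (local-currency units per "international dollar"),
# not real World Bank figures.

PPP_FACTOR = {
    "China": 4.0,    # hypothetical CNY per international dollar
    "Denmark": 7.0,  # hypothetical DKK per international dollar
}

def to_ppp_dollars(local_salary, country):
    """Convert a local-currency salary into PPP 'international dollars'."""
    return local_salary / PPP_FACTOR[country]

# Two pay packets that look wildly different on paper...
print(to_ppp_dollars(800_000, "China"))      # 200000.0
print(to_ppp_dollars(1_400_000, "Denmark"))  # 200000.0
# ...come out identical after the adjustment.
```

That is exactly the critics’ point: whether such an adjustment makes two careers genuinely comparable, or merely makes the numbers look comparable, is a judgment call baked into the ranking.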
The focus on salary gives rise to a lot of skepticism over the practice of ranking schools with different focuses, histories, and aims. How can Stanford’s Graduate School of Business, ranked first by the FT last year, really be compared to the Indian Institute of Management in Calcutta, ranked 78th, for example? Some argue that the act of ranking schools is largely irrelevant. “The differences between schools within rankings are so tiny that they’re effectively insignificant,” says Nick Barniville, associate dean of degree programs at ESMT in Berlin. “If you are looking for a mark of quality, it’s more important whether a school is ranked versus not ranked as an indication than what position the school gets.”