How Business School Deans Would Change MBA Rankings

Dean François Ortalo-Magné, London Business School. Courtesy photo

HOW DO YOU MEASURE CULTURE AND MISSION?

Along with being too convoluted, Jain adds, rankings take power out of the hands of users. His solution? Craft custom tools where applicants can assign their own weights to the variables that hold the greatest appeal to them and tag the results to particular programs. He touts the University of Washington’s “Do It Yourself” platform as an example of such an interface.

Another deep-rooted flaw with rankings involves differentiation. Ortalo-Magné points to a P&Q article on 10 Business Schools to Watch, which uses a mix of stats, cultural attributes, and recent initiatives to identify MBA programs on the rise. He then contrasts the story with rankings, which he says wrongly assume that business schools all pursue the same mission and haven’t differentiated themselves from their peers in the marketplace.

Rodriguez echoes Ortalo-Magné’s sentiments. “It can be hard to use a single measure like a ranking or a single group of measures without knowing something about a school’s particular aim,” he shares. “We don’t all aspire to the same goals – and therefore some rankings are better for some than others. My school – and my prior schools – have been fairly global, but that is certainly not true of a lot of schools. That may not be their appropriate mission or aim.”

Rodriguez points to employment metrics as an example. “The organizations to which schools supply graduates are probably rather coarsely measured. We all tend to be measured against a very national and even global group of companies – but some schools have something more narrow as their appropriate target.”

VOLATILITY UNDERCUTS CREDIBILITY

Then, there is the volatility of rankings, where a Duke Fuqua can bounce from 1st to 8th to 3rd over three years in the Bloomberg Businessweek rankings or USC Marshall can somehow leap 25 spots in just one year with The Economist. Such swings undermine the credibility of business school rankings in Jain’s view – and he knows exactly where the blame falls. “Some of that volatility may reflect real change underlying the school’s curricular experience, but some of it is simply the result of methodology, such as survey response rates being as low as 30% (or lower).”

Yale SOM Acting Dean Anjani Jain

Surveys are hardly the only issue, Jain notes. He argues that certain metrics also create loopholes in how data is measured or reported. For example, employment rates can give some schools a decided advantage over their peers. According to Jain, the MBA CSEA (Career Services and Employer Alliance) standards set the reporting requirement at 85%. In other words, a school can meet the threshold despite lacking placement data on 15% of its graduating class. In context, this means a major MBA program such as Wharton’s could conceivably omit more than 125 students from its reporting and still meet the standard.
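
A minimal back-of-the-envelope sketch in Python makes the scale concrete; the class size of 850 is an assumed, roughly Wharton-sized figure used only for illustration, not a number from the article:

    # Back-of-the-envelope check of the 85% reporting standard.
    # The class size is an assumed, illustrative figure, not a number from the article.
    class_size = 850          # hypothetical graduating class, roughly Wharton-sized
    reporting_floor = 0.85    # MBA CSEA minimum share of the class that must be reported

    max_unreported = class_size * (1 - reporting_floor)
    print(f"Graduates who can go unreported under the standard: {max_unreported:.0f}")
    # Prints roughly 128, consistent with the "over 125 students" figure above.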

It turns out this was more than a theoretical weakness. In the 2018 U.S. News rankings, Jain notes, a couple of schools lacked data on nearly 10% of the class. That creates quite a dilemma when pay and placement constitute 35% of the ranking’s weight. “The population of non-reports is not likely to be an unbiased sample of all students,” Jain argues. “It’s quite likely that the non-reporting percentage of the class has a lower employment rate. By excluding that group completely, schools end up artificially inflating their employment rates.”

The solution, Jain adds, is already in place in the Financial Times ranking. “They use not just the employment rate; they multiply the employment rate by the percentage of the class reporting. By doing this, they are saying that you cannot effectively claim that those who are not reporting are in employment. They penalize schools and close the loophole.”
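
To see how that adjustment closes the loophole, here is a minimal sketch with made-up figures for two hypothetical schools (neither the numbers nor the school labels come from the Financial Times):

    # Sketch of the adjustment Jain describes: the employment rate is multiplied
    # by the share of the class that reported, so non-reporting graduates
    # effectively count as not employed. All figures are hypothetical.
    def adjusted_employment_rate(employment_rate: float, reporting_rate: float) -> float:
        return employment_rate * reporting_rate

    school_a = adjusted_employment_rate(employment_rate=0.95, reporting_rate=1.00)  # full reporting
    school_b = adjusted_employment_rate(employment_rate=0.95, reporting_rate=0.90)  # 10% unreported

    print(f"School A adjusted rate: {school_a:.1%}")  # 95.0%
    print(f"School B adjusted rate: {school_b:.1%}")  # 85.5%, the missing 10% drags the rate down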

‘YOU CAN ONLY GO TO ONE SCHOOL’

Which ranking metrics most concern the deans? Ross’ DeRue, for one, worries about the harm that can come from focusing on pay – and for good reason. In his view, pay correlates more with which industries hire the most grads than with program quality. “The rankings presume programs that average a higher salary level are somehow better programs relative to those programs that average a lower salary, yet salary differences are mostly a function of industry rather than program quality. The intense focus on salary also creates perverse incentives for schools to find legally permissible but ethically questionable ways to inflate salary information.”

Jain also worries that pay data hurts programs that accept applicants from outside the elite professions and firms. “A number of surveys measure the difference between salary at graduation and 3-5 years out,” he observes. “Sometimes the same surveys are using incoming salaries, which is a different measure in itself. So if a school tends to attract students who end up working in lower-paying jobs like the public sector or non-profits, they’ll get penalized if the entering salary is measured.”

Surveys, such as U.S. News’ ranking of business programs by academics, also rankle deans to an extent. Rodriguez jokes that no student goes to two MBA programs, which makes him wonder how deans or MBA directors can possess the first-hand knowledge to judge the success of far-away peer schools. “There is a big brand effect,” he admits. “It’s particularly true for schools that are smaller, younger or more regional. That will be interpreted as being less familiar.”

METHODOLOGIES USED TO GENERATE PAGE VIEWS

Harvard Business School

This “brand effect” also creates a self-fulfilling prophecy that weighs on respondents who evaluate other programs, Rodriguez adds. “I think the analogy is probably from banking: If you owe the bank $1000 and you can’t pay, that’s your problem. If you owe the bank a billion dollars and you can’t pay, that’s the bank’s problem. If HBS doesn’t show up high enough on your ranking, that’s the rankings’ problem. It just makes it hard to think about the programs. The problem with peer rankings is that we just don’t know enough about each other (to make valid assessments).”

A lack of familiarity and potential bias aren’t the only issues dogging ranking surveys. Like Jain, DeRue points to volatility, though he attributes it more to nurture than nature. “Some ranking agencies have concluded that they need or want volatility in the rankings to drive engagement (e.g., readership),” he asserts. “In some cases, this results in tweaking criteria without any clear reason.”

At the same time, DeRue wonders whether certain rubrics, such as scores from corporate recruiters, truly differentiate programs from one another. “Most rankings create a ‘black box’ around the use of ordinal rankings,” he says. “For example, if school A’s employer survey comes back with a 4.7 out of 5.0 score, and school B’s survey comes back with a 4.8 out of 5.0 score – is school B really better than school A? Any basic data analytics course would tell us no and that these are statistically the same result … yet we treat them as different when we create the ranking.”
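
DeRue’s 4.7-versus-4.8 example can be checked with a rough sampling calculation. The sketch below assumes each score averages about 50 recruiter responses with a standard deviation of 0.5; both figures are assumptions for illustration, not numbers from any ranking:

    import math

    # Rough check of whether a 4.7 vs. 4.8 employer-survey gap stands out from sampling noise.
    # Sample size and standard deviation are assumed for illustration only.
    mean_a, mean_b = 4.7, 4.8
    std_dev = 0.5   # assumed spread of individual recruiter ratings
    n = 50          # assumed number of recruiter responses per school

    # Standard error of the difference between two independent sample means.
    se_diff = math.sqrt(std_dev**2 / n + std_dev**2 / n)
    t_stat = (mean_b - mean_a) / se_diff

    print(f"Difference: {mean_b - mean_a:.1f}, standard error: {se_diff:.2f}, t = {t_stat:.1f}")
    # t comes out around 1.0, well below the roughly 2.0 usually needed to call the gap
    # statistically significant, yet an ordinal ranking still places school B ahead of school A.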
