How Business School Deans Would Change MBA Rankings

Want to stir up an argument in the faculty lounge? It only takes one word: rankings.

They are the bane of academia, a crude instrument for measuring learning – an elusive and evolving outcome that can take years to fully materialize. Some rankings are structured, often inadvertently, to confer advantages on established brands. Others offer perverse incentives for schools to “game the system” and calibrate their investments to maximize their return-on-ranking. Indeed, rankings are often lagging indicators, diffuse and fickle, that invariably cost some educators their jobs.

MAKING WHAT’S INTANGIBLE INTO SOMETHING LINEAR

Anjani Jain, the acting dean at the Yale School of Management, frames the conundrum represented by rankings this way: “How do you take this multidimensional and multifaceted complexity of a higher educational institution and reduce it to a linear order?”

Leading outlets apply a mix of methodologies to do just that. Bloomberg Businessweek leans heavily on surveys, with responses from students, alumni, and recruiters accounting for 80% of the ranking’s weight. U.S. News is far more quantitative, with inputs (GMATs and GPAs) and outputs (pay and placement) supplemented with recruiter and academic surveys. Forbes attempts to exclusively measure return on investment, while the Financial Times and The Economist rankings are a potpourri of nearly every conceivable benchmark.

Dean Glenn Hubbard, Columbia Business School

Three years ago, Columbia Business School’s Dean Glenn Hubbard even crafted his own ranking, which emphasized demand (applications per seat), choice (yield), average pay, and job offers. Predictably, his school lost a spot under his own methodology. Still, it was a good-faith effort to grapple with the painful tradeoffs inherent in compiling an MBA ranking. Recently, Poets&Quants surveyed the deans of four leading business schools to get their take on lists that cause so much angst. Notably, they shared the data they consider relevant, if not indispensable. They also outlined the variables and assumptions that produce volatility and distortion. At the same time, they offered alternative ways for students to measure school performance and identify the best fits for themselves.

STUDENT AND ALUMNI SURVEYS RIDDLED WITH CONFLICTS OF INTEREST

Make no mistake: business school deans parse rankings very carefully. Peter Rodriguez, dean of Rice University’s Jones Graduate School of Business, is among them. For him, an overall numerical rank is just window dressing. The real value comes from the collection of underlying “raw” data, such as placement rates. An added benefit? The data is broken into columns so he can easily compare schools side by side. “All of us look at the rankings because they often measure things we care about in the absence of rankings,” he tells P&Q. “They give you a sense of what the marketplace looks like and your place within it.”

Notably, Rodriguez assesses data closely related to student capabilities, such as GMATs and GPAs. Yield rate – a school’s ability to convert accepted applicants into full-time students – is another piece of data he values. However, he pays special attention to such outcomes as starting pay. “I have more confidence in the market outcomes than I do with some of the ways that quality is measured that don’t necessarily reflect what students want,” he explains. “I always feel like the quality of the school is best measured by rolling up student choices, employer choices, and research outcomes that are hard and measurable.”

Rankings can also tip deans off to how their school brand is perceived in the marketplace. François Ortalo-Magné, dean of the London Business School, tells P&Q that alumni surveys spark his curiosity. That said, he is under no illusions about this tool. In his experience, alumni surveys embody a conflict of interest, where the sample carries a vested interest in the outcome. “The survey of opinions – it can be as valuable as a beauty contest,” he says. “There is a complication with asking people for their opinions when they know their opinions will be used for rankings.”

Rice’s Rodriguez feels a similar push-pull with alumni surveys. “They’ll say, ‘Sure, it was a great experience’ even if it wasn’t if they know they are better off doing that. It’s probably why students want first-hand accounts as much as they can. They know the number doesn’t tell them quite enough.”

METHODOLOGIES AND WEIGHTS IMPLY SOME BIAS

Scott DeRue, Dean of the Ross School of Business at the University of Michigan

Scott DeRue, dean at the University of Michigan’s Ross School of Business, has a soft spot for employer surveys in rankings. He views the relationship between business schools and employers in terms similar to supply and demand, with programs being accountable for the quality of talent they produce. “No single metric is perfect in [regards to quality], especially considering that organizations define quality differently based on their unique values and needs,” he concedes in a written statement to P&Q. “For this reason, employer surveys that assess the quality of talent are particularly insightful. Employers are also the only source of information that sees across programs, which is a vital perspective for evaluating relative quality.”

While rankings collect comparative data and conduct surveys of key stakeholders, they also come with several drawbacks, according to the deans. One stems from the nature of rankings themselves. Regardless of intent, LBS’ Ortalo-Magné observes that rankings are designed with certain biases toward particular measures. “The weights and aggregation – that’s a bit more complicated because that starts with a certain value function on the value of certain pieces of data,” he explains. “It implies tradeoffs across the data. The way the data aggregates implies a particular stance on the variation of one metric as opposed to variation in the other metric. That I find much less valuable.”

The number of variables measured – and the weights assigned to them – is also a concern for Yale’s Jain. He calls simplicity “a virtue,” contending that a narrower focus is best because users often place different weights on different variables themselves. “Making this calculus overly complicated – relying on factors of data that are not easily measurable or relying on surveys that tend to get lower response rates – is not always helpful. It makes the calculation too elaborate. A more parsimonious design of a ranking in terms of what variables are being measured is likely to be more robust.”
