WHAT MAKES A PROGRAM TRULY “GLOBAL”?
DeRue refers to such tricks as a “disservice,” one that portrays cracks as chasms. Another area where that may be true involves how “international” a program truly is. This “global presence” is often measured by how many students and faculty hail from outside the institution’s country. Rodriguez considers this a misnomer, as it doesn’t necessarily account for geographic breadth. It also doesn’t factor in context, adds Jain.
“A number of surveys will measure the proportion of faculty who are international,” he says. “That gives rise to, ‘How do you define international?’ Is it passports? Would people like me count as an American? Do I have an accent? Does living experience outside a home country count? What I teach (modeling) is a conventional subject where the degree to which I provide an international perspective may not be particularly relevant. Whether I am considered U.S. or not, my citizenship does not reflect in the material I teach or how I teach it.”
While the deans were happy to lay out the limitations of rankings, they weren’t shy about sharing solutions and alternatives, either. For Rodriguez, customer satisfaction – namely, student satisfaction – is one of the best indicators of quality, and one where institutions hold far greater control.
ALL ABOUT CUSTOMER SERVICE
“What could be better measured is the teaching experience; the way one is treated as a whole by the organization; and the overall student experience – they all matter once students get to campus,” Rodriguez says. “After you’ve made your choice, they really form the basis; you’re captive and the school should treat you really well once you’re no longer on the open market. Too often, we’re measuring them at the point of decision. We’re measuring everything about them while they’re here. The rest is harder to measure.”
DeRue labels such a benchmark as “the process,” a qualitative means to better spotlight the experiential considerations alongside the usual inputs and outputs. “The process would consist of the educational experience, the culture and community of people, and the general access to career and professional development opportunities,” he reveals. “Too often, prospective students use rankings to generate a consideration set of schools based on input and output metrics, and then spend considerable resources trying to figure out and make sense of the experience.”
Customer satisfaction comes in other forms, too. For DeRue, this means highlighting satisfaction rates through the lens of the ultimate consumer: employers. He urges outlets that produce rankings to better factor in the perspective of employers. In DeRue’s view, cognitive measures like GMAT remain important. However, he worries that programs sometimes spend too much energy on inputs at the expense of intangibles like a “diverse student body” that both enriches the learning experience and supplies a “more diverse talent pool.” True to form, he also believes ranking outlets should devote greater attention to what employers need from MBAs.
HOW MUCH IS AN MBA WORTH OVER THE LONG HAUL?
“Every single employer that I talk with indicates a strong need for greater analytical capability, more teamwork and collaboration skills, and enhanced leadership-related capabilities,” DeRue says. “What if ranking agencies surveyed employers about what they need, and then aligned their employer surveys (and their rankings) with those needs? This would be consistent with the premise that business schools are suppliers of talent, and our supply of talent needs to align with and meet the needs of these employers.”
Hiring an MBA is a six-figure investment for most firms. That’s one reason why DeRue, again looking at rankings from an employer’s perspective, would build a length-of-tenure metric into his MBA ranking, coupled with a “subjective assessment” of the “overall value” that each school’s MBAs deliver over time.
“The value of talent to employers is a function of (a) how much value the person offers and (b) how long the person stays at the organization,” he explains. “If an organization hires an MBA from a top school, my understanding from employers is that the return on their investment is negative until two or three years in. The market value (salary) is greater than the delivered value until the person has moved up the learning curve. After two or three years, the person begins to add value that is commensurate with his or her salary. If true, the average number of years a person stays with the employer is a key metric of value.”
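DeRue’s framing lends itself to simple arithmetic. The sketch below, using entirely hypothetical salary and learning-curve figures (none appear in the article), shows how cumulative return on an MBA hire can stay negative for the first couple of years and only break even once delivered value overtakes salary:

```python
# Sketch of DeRue's talent-value framing with hypothetical numbers:
# an MBA hire's delivered value starts below salary and ramps up the
# learning curve, so the employer's ROI turns positive only after a
# few years of tenure.

def cumulative_net_value(salary, start_value, ramp_per_year, years):
    """Sum (delivered value - salary) over each year of tenure."""
    total = 0.0
    for year in range(years):
        delivered = start_value + ramp_per_year * year  # linear ramp (assumption)
        total += delivered - salary
    return total

if __name__ == "__main__":
    salary = 150_000       # hypothetical market salary
    start_value = 90_000   # hypothetical value delivered in year 1
    ramp = 60_000          # hypothetical learning-curve gain per year
    for tenure in range(1, 6):
        print(tenure, cumulative_net_value(salary, start_value, ramp, tenure))
```

With these made-up inputs, cumulative net value is negative after one year, reaches zero after three, and is positive thereafter – which is why, on DeRue’s logic, average tenure becomes a key metric: employers only capture the upside if the hire stays past the break-even point.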
TRACKING CHOICES TO PICK WINNERS AND LOSERS
The deans also shared ideas that would be game-changers in theory, but might be more difficult to pull off in practice. Jain’s emphasis on ‘revealed preferences’ is a case in point. A concept from economic theory, revealed preference examines how subjects actually behave when faced with a choice. In the case of MBA programs, it would measure how many students chose a particular school after they weighed it against an equal set of options (i.e., being accepted into other programs with equal financial aid offerings). Mind you, schools are loath to share such information – and there is no centralized means to break down the thousands of decisions that revealed preferences would encompass. However, it presents an enticing alternative to similar, albeit hazy, measures like applications-per-seat and yield.
“There are many attributes relevant to the business school experience and prospective students think about them with different weights,” Jain explains. “If you look at the schools that students choose – as far as School A, School B or School C – their revealed preferences – voting with their feet – capture a lot of what is relevant to aggregate populations of students about all of the attributes of the school that gives rise to its reputation, prominence, and quality of the educational experience.”
These revealed preferences serve another function too, Jain adds. “They also capture students’ expectations of future benefits that they will derive from the education – the network and so on. In some sense, what is relevant, at least to an aggregate population, is captured in the choices they end up making. A very simple way to do a survey is to look at the win-loss ratios – What fraction go to School A vs. School B. You could also do tiers of schools or between schools as well.”
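Jain’s win-loss ratio is straightforward to compute if the enrollment decisions were ever made available. A minimal sketch, using made-up admit data (school names and decisions are hypothetical, not from the article):

```python
# Minimal sketch of Jain's "revealed preference" win-loss idea:
# among students admitted to both School X and School Y, what
# fraction enrolled at each? The head-to-head win rate is a crude
# aggregate signal of which school students prefer.

from collections import Counter

# Each record: (set of schools that admitted the student, school chosen).
# All data here is hypothetical.
decisions = [
    ({"A", "B"}, "A"),
    ({"A", "B"}, "A"),
    ({"A", "B"}, "B"),
    ({"A", "B", "C"}, "A"),
    ({"B", "C"}, "C"),
]

def head_to_head(decisions, x, y):
    """Fraction choosing x among students admitted to both x and y."""
    wins = Counter()
    for admits, choice in decisions:
        if x in admits and y in admits and choice in (x, y):
            wins[choice] += 1
    total = wins[x] + wins[y]
    return wins[x] / total if total else None

print(head_to_head(decisions, "A", "B"))  # A wins 3 of 4 head-to-head
```

The same function applied across tiers of schools, rather than pairs, would give the broader comparison Jain describes – but, as noted above, the underlying decision data is exactly what schools are reluctant to share.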
DOES LONG-TERM PERFORMANCE HAVE A PLACE?
Another gap in rankings, deans say, involves an outlook that is all too short-term. Law schools, for example, can measure learning (to an extent) by bar passage rates and long-term success by the number of alumni in key positions like judgeships. Jain, for one, would love to find a way to measure an education’s impact over 5-10 years – and even longer. Rather than wrestling with the chicken-or-the-egg argument – was success the reflection of inputs or education quality – Jain simply urges ranking outlets to factor in the business impact made by business schools.
A herculean task, no doubt, which may be why DeRue would scale it back to an extent. “If rankings continue to survey alumni, in my opinion, the focus should be on career success over some time period, or maybe even multiple time periods (short and long term).”
Overall, the deans urged ranking outlets to place less emphasis on GMATs, with Jain being the lone exception. In his experience, he has found a very tight correlation between GMAT and GPA scores and academic success. In many corners, Jain notes, popular opinion treats academic success as “inconsequential to the MBA experience” – a place where, Jain jests, “the top third of the class academically will work in the middle in companies started by the bottom third.”