The dean of Columbia Business School believes that rankings of business schools really matter and has proposed a simplified approach to rank MBA programs that may well make a lot of sense.
In an essay published by fortune.com yesterday (April 6), Columbia Dean Glenn Hubbard argues that the most effective way to judge the quality of a business school isn’t in the number of prizes a faculty wins, or the number of alumni who are CEOs of companies, or, for that matter, what other business deans think. Some of those attributes are metrics calculated by U.S. News & World Report, which surveys business school deans, and The Financial Times, which measures publication of articles by faculty in academic journals.
Instead, Hubbard makes a convincing case that the quality of an MBA program can best be measured by the students. “Every business school dean, myself included, will tell you that their school is the best, so as much as it pains me to say, you should probably look past the deans,” writes Hubbard. “Instead, look to the students. It’s in the student network that you will find the metrics that matter for assessing any business school: inputs and outputs. (Sorry for the econspeak, but I am an economist!).”
MEASURE STUDENT INPUTS AND OUTPUTS
The dean then suggests several metrics to gauge a school’s student quality. “By inputs I mean applications. It’s valuable to know how many applications a given business school receives during a year. It’s also valuable to know whether the volume of applications is trending up or down over the long term. It stands to reason that the marketplace of prospective students will send the most applications to the best schools, which will, in turn, have more selective admission rates. If you study the data on applications—which some rankings provide as part of their research—you will have a key piece of the puzzle,” believes Hubbard.
“It’s just as important, however, to know what happens to students when they leave school—the outputs. If the job market deems the students to have received a valuable education, they will receive good job offers. That should hardly come as a surprise—employers want the best employees they can get. Schools that routinely graduate classes at full or near-full employment, with good job satisfaction, have reason to believe they’re receiving a vote of confidence from the market. Rankings that provide data on job offers, salary levels, and other “value added” criteria are providing another critical piece of the puzzle.”
Hubbard invites applicants to “put these pieces together…and the picture that emerges might be shocking to some.” So that is exactly what Poets&Quants did, starting with our own ranking of the top 25 U.S. business schools. To measure inputs, we collected two sets of data: The number of applications an MBA program receives against the number of class seats available (which is preferable to total applications because smaller schools obviously would be at an unfair disadvantage) and yield, the percentage of applicants who enroll in the MBA program once admitted (because this number tells you which schools are preferred by candidates and which ones are not). We then compared those two “inputs” against the “outputs” recommended by Dean Hubbard: salary and job offers, or more specifically average starting salary and bonus and the percentage of a class employed three months after graduation. All the data is for the Class of 2014, which graduated last spring.
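One simple way to meld two input metrics and two output metrics into a single ranking is to normalize each metric to a common scale and average them. Poets&Quants does not publish an exact formula, so the sketch below is purely illustrative: it assumes equal weights and min-max normalization, and the school names and numbers are placeholders, not the actual Class of 2014 figures.

```python
# Hypothetical sketch of an input/output ranking in the spirit of Hubbard's
# proposal: min-max normalize each metric to [0, 1], then average.
# The data below is made up for illustration only.

def normalize(values):
    """Rescale a list of numbers so the minimum maps to 0 and the maximum to 1."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# metrics per school: apps per seat, yield %, avg salary + bonus ($),
# % of class employed three months after graduation
schools = {
    "School A": (9.0, 65.0, 140_000, 95.0),
    "School B": (7.5, 55.0, 130_000, 92.0),
    "School C": (6.0, 45.0, 120_000, 88.0),
}

names = list(schools)
columns = list(zip(*schools.values()))                 # one list per metric
rows = list(zip(*[normalize(c) for c in columns]))     # normalized per school

composite = {n: sum(row) / len(row) for n, row in zip(names, rows)}
ranking = sorted(composite, key=composite.get, reverse=True)
print(ranking)  # School A ranks first here: it leads on every metric
```

Min-max normalization is one of several reasonable choices; z-scores or rank-ordering each metric would work equally well and can produce slightly different orderings when schools trade strengths across metrics.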
THE HANDFUL OF B-SCHOOLS THAT DOMINATE THE UPPER TIER HAVE THE INPUT & OUTPUT NUMBERS YOU WOULD EXPECT
The benefit of this system is obvious. Many of the surveys of students, graduates, recruiters and deans done by every organization from Bloomberg Businessweek to The Financial Times have several flaws built into them. The first and foremost problem is survey bias. The people who complete those surveys know they are being used for a ranking. As a result, their choices are likely to be influenced by that fact. A student or alum of a school is less likely to downgrade the value of his degree by providing negative feedback on the school. Many deans have enough trouble knowing what is going on in their own schools that it’s a real stretch to ask them to name the best rival schools. Secondly, sample sizes and response rates vary from year to year and can have a significant impact on the results of these surveys. Some schools, in fact, are disqualified from some rankings if the response rate falls below a set level.
The metrics that Hubbard suggests using are all self-reported by the schools. While some schools have been caught fudging the data from time to time, these are hard numbers that are by and large reliable and standardized across the schools. That makes them a solid and consistent foundation to hang a ranking on.
When you go through this exercise, the results–especially at the top–are not shockingly different. As Hubbard himself notes, “there are, in fact, top business schools—consistently so—and there is a quantitative way to differentiate them from the rest. The handful of business schools that dominate the upper tiers of today’s rankings no doubt see the input and output numbers you would expect. And because they do, they enjoy the cascading effects of being a top business school, like the programmatic adaptability that comes with financial health, and the ability to build or maintain an extraordinary faculty.”
AN ON-THE-GROUND REALITY IS CAPTURED BY JUST A FEW KEY DATA POINTS
His conclusion: “So all rankings of business schools, at least in part, reflect an on-the-ground reality—if they are taking into consideration the inputs and outputs that are key data points. Admittedly, the aggregate difference between schools in the top five or 10 can be slight. However, being ranked #1 versus #10 can significantly impact the perspective of the marketplace. It’s up to each school and each prospective applicant to discern the signal from the noise and act accordingly.”
It’s fascinating to look at how the schools compare to each other on each of these metrics, but also to meld them together to come up with the ranking that Hubbard proposes. As is often the case, gathering a single set of statistics can result in some peculiar anomalies, or to use Hubbard’s words, “noise.” The University of Wisconsin’s business school, for example, has a higher yield rate at 73.0% than Wharton or MIT. Is that an indication of the school’s quality or merely a likely outcome of Wisconsin being more of a regional school attracting candidates who want to stay in and around the state of Wisconsin? Similarly, the University of Maryland’s Smith School has more applicants per class seat, at 9.4, than Columbia or Duke. Is that merely a reflection of the small class size at Smith or a true measure of market preference? For that reason, we’ve limited our analysis to Poets&Quants’ Top 25 business schools in the U.S. A later update will more broadly expand the methodology and reach deeper into the top pool of MBA programs.