What If Business Schools Were Rated Instead Of Ranked?

For years, many business deans have argued that their MBA programs shouldn’t be ranked. They should be rated, in much the same way that credit rating agencies such as S&P or Moody’s assess corporate bonds.

The strongest argument for such an approach is that the underlying index numbers used to rank MBA programs are so closely clustered that a program’s numerical position is often statistically meaningless. So a school ranked 18th is really no better than one ranked 26th.

That is a legitimate assertion. Just look at the index numbers behind the ranks of MBA programs in Bloomberg Businessweek’s latest list. At a score of 77.5, the MBA programs at Indiana University and the University of North Carolina are in a dead tie at a rank of 22nd.

RATINGS VS. RANKINGS FOR MBA PROGRAMS?

Little more than a single point difference separates those schools from Rice University and Vanderbilt, both ranked 29th with a score of 76.3. Any statistician will tell you that Businessweek’s methodology cannot show a significant difference between schools ranked seven places apart. That is true up and down the ranking.

This point came up in a recent prep call I had with three business school deans in advance of what promises to be a provocative panel discussion I will moderate on rankings at Rutgers Business School’s annual Innovations in Business Education Conference this October. The deans on that call believed that ratings, rather than rankings, made a lot more sense.

The Financial Times, which of course puts a number against each of the 100 MBA programs it annually ranks, admits as much. In a footnote to its list, the FT states clearly that “schools are divided into four tiers. Business schools in tiers I and II score above the average for the cohort, and tiers III and IV are below it. The difference in scores between schools ranked consecutively is greater within tiers I and IV than in tiers II and III. Tier I includes 18 schools from Columbia to Virginia’s Darden. Tier II includes schools from New York’s Stern, ranked 19, to Edhec at 47. Tier III, headed by Fudan, spans schools ranked 48 to 81. Tier IV includes schools from Sungkyunkwan at 82 to Eada at 100.”

So borrowing the credit rating logic, wouldn’t it make sense for the Tier I schools to get the highest rating, AAA, reserved for programs of the highest quality and lowest risk for their students and graduates? Tier IV schools would then get a rating of Baa, denoting medium-grade programs with moderate risk.

JUNK BONDS AND MBA PROGRAMS

And what of the programs that under this scheme would fall to a non-investment grade rating? Securities rated lower than Baa3 by Moody’s or BBB- by S&P are in this category and are often called junk bonds by investors. What business school dean would want any program to be labeled junk? You might as well write an obituary for that program.

No less important, if you lead Harvard, Stanford, or Wharton, would you want the same AAA rating that would be given—based on the FT’s current tiers—to HEC Paris, Cornell’s Johnson Graduate School of Business, and Duke University’s Fuqua School of Business?

Or if you are the dean of New York University’s Stern School of Business or the University of Southern California’s Marshall School, would you want to be rated below the University of Virginia’s Darden School?

Probably not.

And who would agree on where the line gets drawn between an AAA MBA and an AA MBA, or even more divisively, between an A and a BBB?

THE LARGER CHALLENGE: GETTING A ROOMFUL OF DEANS TO AGREE ON ANY ONE WAY TO RANK SCHOOLS OR PROGRAMS

Which leads us to an important point for deans who know that the way current rankings are put together is severely flawed and intellectually dishonest: Moving to ratings would do little to allay their concerns, because the bigger issue is the metrics used in these rankings, not the ranking itself, however aggravating it may be to be reduced to a single number.

The larger challenge here is getting a roomful of deans to agree on any one rankings reform. Everyone in the room will understandably act in their own self-interest.

In an essay published earlier this year in the Financial Times, three of the most prominent business school deans—Geoffrey Garrett at USC Marshall, Ann Harrison at UC-Berkeley Haas, and Andrew Karolyi at Cornell’s SC Johnson College of Business—made a strong argument for an alternative way to assess the quality of a business school.

“One important mission of business schools is to increase the upward socio-economic mobility of their students,” they wrote. “Business school rankings should therefore measure this ‘societal value added’. Today, the rankings concentrate too much on the prior accomplishments of incoming students and too little on how much schools help to enhance their skills and improve their opportunities by the time they graduate.”

SOME DEANS ARGUE FOR LESS EMPHASIS ON GPAS AND STANDARDIZED TEST SCORES

They particularly emphasize the need for better undergraduate business school rankings, since business degrees account for a fifth of all undergraduate degrees awarded in the U.S., and likely a higher proportion in Asia and Europe. Here again, the deans believe that rankings put too much emphasis on the prior accomplishments of incoming students and too little on how much schools help to enhance their skills and improve their opportunities by the time they graduate.

What would they actually change? “First,” they conclude, “the rankings should focus on the ‘value added’ of undergraduate business programmes in transforming the lives of the students they educate. Second, they should use metrics based on readily accessible and comparable data that can be externally validated and verified independently. Finally, they should limit emphasis on input measures (such as Grade Point Averages and SATs) reflecting the perceived quality of incoming students and focus on opportunities, outputs and longer-term outcomes.”

The deans go into deeper detail, arguing for metrics that have nothing to do with the quality of a school or program but instead with its affordability and accessibility. “Access measures could include the percentage of students from traditionally under-represented groups, who are the first in their families to attend university, receive government aid, and transfer to four-year universities from community colleges,” they add. “In order to assess the affordability of undergraduate business schools, we suggest rankings should compare the ‘sticker price’ — ie, the nominal official price — of degrees with the net price after all scholarships, both need-based and merit-based, are taken into account.”

PUTTING THE FOCUS ON AFFORDABILITY AND ACCESS RATHER THAN SELECTIVITY AND BRAND VALUE

Getting agreement on this change alone would be pretty difficult, particularly from deans of schools that are both highly selective and pricey, which, after all, tend to be the schools that most prospective students want to attend. This is all about what is taught as religion in most business schools: the market speaks, so let the market decide. If more candidates apply to a school due to its reputation, it will naturally limit access. If the price is high, it’s because a school is leveraging its brand value and reputation in the marketplace to charge more for its degrees. There is a difference between a $21,700 Toyota Corolla and a $108,490 Tesla Model X Plaid, and it’s not merely the $86,790 that separates the two in price. Sure, the Corolla is more affordable and therefore more accessible. But you would be hard-pressed to find any car expert who would say the Toyota is better than the Tesla.

Can rankings be substantially improved? Yes, if they are put together in a thoughtful way by unbiased professionals who know and understand what they are ranking. Sadly, that is rarely the case.

Switching to ratings rather than rankings may well please some business school deans and more fairly represent the small statistical differences that separate schools, but it would likely draw far fewer readers. Most importantly, a switch to ratings would do little to diminish the love-hate relationship we all have with this imperfect system that has become an obsession for far too many.

Poets&Quants’ Editor-in-Chief John A. Byrne confesses to being the Dr. Frankenstein who unleashed the MBA rankings monster on the world. In 1988, as management editor at Businessweek magazine, he created the first of the regularly published MBA rankings from satisfaction surveys of the latest graduates and corporate recruiters.