How A Dean Would Rank Business Schools

Columbia Business School Dean Glenn Hubbard

The dean of Columbia Business School believes that rankings of business schools really matter and has proposed a simplified approach to ranking MBA programs that may well make a lot of sense.

In an essay published by fortune.com yesterday (April 6), Columbia Dean Glenn Hubbard argues that the most effective way to judge the quality of a business school isn’t the number of prizes its faculty wins, or the number of alumni who are CEOs of companies, or, for that matter, what other business school deans think. Some of those attributes are metrics calculated by U.S. News & World Report, which surveys business school deans, and The Financial Times, which measures publication of articles by faculty in academic journals.

Instead, Hubbard makes a convincing case that the quality of an MBA program can best be measured by the students. “Every business school dean, myself included, will tell you that their school is the best, so as much as it pains me to say, you should probably look past the deans,” writes Hubbard. “Instead, look to the students. It’s in the student network that you will find the metrics that matter for assessing any business school: inputs and outputs. (Sorry for the econspeak, but I am an economist!).”

MEASURE STUDENT INPUTS AND OUTPUTS

The dean then suggests several metrics to gauge a school’s student quality. “By inputs I mean applications. It’s valuable to know how many applications a given business school receives during a year. It’s also valuable to know whether the volume of applications is trending up or down over the long term. It stands to reason that the marketplace of prospective students will send the most applications to the best schools, which will, in turn, have more selective admission rates. If you study the data on applications—which some rankings provide as part of their research—you will have a key piece of the puzzle,” believes Hubbard.

“It’s just as important, however, to know what happens to students when they leave school—the outputs. If the job market deems the students to have received a valuable education, they will receive good job offers. That should hardly come as a surprise—employers want the best employees they can get. Schools that routinely graduate classes at full or near-full employment, with good job satisfaction, have reason to believe they’re receiving a vote of confidence from the market. Rankings that provide data on job offers, salary levels, and other “value added” criteria are providing another critical piece of the puzzle.”

Hubbard invites applicants to “put these pieces together…and the picture that emerges might be shocking to some.” So that is exactly what Poets&Quants did, starting with our own ranking of the top 25 U.S. business schools. To measure inputs, we collected two sets of data: the number of applications an MBA program receives against the number of class seats available (which is preferable to total applications because smaller schools would otherwise be at an unfair disadvantage) and yield, the percentage of admitted applicants who enroll in the MBA program (because this number tells you which schools candidates prefer and which ones they do not). We then compared those two “inputs” against the “outputs” recommended by Dean Hubbard: salary and job offers, or more specifically average starting salary and bonus and the percentage of a class employed three months after graduation. All the data is for the Class of 2014, which graduated last spring.
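As a rough illustration of how those four metrics can be combined into one ranking, here is a minimal sketch in Python. The school names and figures are invented for illustration only, not actual Class of 2014 data, and the method is an assumption: each metric is scaled against the best value in the field (indexed to 100) and the four scaled values are averaged.

```python
# Hypothetical sketch of the input/output composite described above.
# All figures are invented placeholders, not actual school data.

schools = {
    # name: (apps_per_seat, yield_pct, avg_salary_and_bonus, pct_employed_3mo)
    "School A": (10.2, 89.0, 144_000, 94.0),
    "School B": (8.1, 80.0, 142_000, 96.0),
    "School C": (6.5, 68.0, 145_000, 97.5),
}

def composite_scores(data):
    """Scale each metric against the field's best value (index = 100),
    then average the four scaled metrics into a single score."""
    n_metrics = len(next(iter(data.values())))
    tops = [max(vals[i] for vals in data.values()) for i in range(n_metrics)]
    return {
        name: sum(100 * vals[i] / tops[i] for i in range(n_metrics)) / n_metrics
        for name, vals in data.items()
    }

ranking = sorted(composite_scores(schools).items(), key=lambda kv: -kv[1])
for rank, (name, score) in enumerate(ranking, start=1):
    print(f"{rank}. {name}: {score:.1f}")
```

Because every metric is indexed to the leader, a school can win one column (School C has the top salary and employment rate here) and still trail overall if its input numbers lag.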

THE HANDFUL OF B-SCHOOLS THAT DOMINATE THE UPPER TIER HAVE THE INPUT & OUTPUT NUMBERS YOU WOULD EXPECT

The benefit of this system is obvious. Many of the surveys of students, graduates, recruiters and deans done by every organization from Bloomberg Businessweek to The Financial Times have several flaws built into them. The first and foremost problem is survey bias. The people who complete those surveys know they are being used for a ranking. As a result, their choices are likely to be influenced by that fact. A student or alum of a school is less likely to downgrade the value of his degree by providing negative feedback on the school. Many deans have enough trouble knowing what is going on in their own schools that it’s a real stretch to ask them to name the best rival schools. Secondly, sample sizes and response rates vary from year to year and can have a significant impact on the results of these surveys. Some schools, in fact, are disqualified from some rankings if the response rate falls below a set level.

The metrics that Hubbard suggests using are all self-reported by the schools. They are hard and generally reliable numbers. While some schools have been caught fudging the data from time to time, the figures are by and large reliable and standardized across schools. That makes them a solid, consistent basis on which to hang a ranking.

When you go through this exercise, the results–especially at the top–are not shockingly different. As Hubbard himself notes, “there are, in fact, top business schools—consistently so—and there is a quantitative way to differentiate them from the rest. The handful of business schools that dominate the upper tiers of today’s rankings no doubt see the input and output numbers you would expect. And because they do, they enjoy the cascading effects of being a top business school, like the programmatic adaptability that comes with financial health, and the ability to build or maintain an extraordinary faculty.”

AN ON-THE-GROUND REALITY IS CAPTURED BY JUST A FEW KEY DATA POINTS

His conclusion: “So all rankings of business schools, at least in part, reflect an on-the-ground reality—if they are taking into consideration the inputs and outputs that are key data points. Admittedly, the aggregate difference between schools in the top five or 10 can be slight. However, being ranked #1 versus #10 can significantly impact the perspective of the marketplace. It’s up to each school and each prospective applicant to discern the signal from the noise and act accordingly.”

It’s fascinating to look at how the schools compare to each other on each of these metrics, but also to meld them together to come up with the ranking that Hubbard proposes. As is often the case, gathering a single set of statistics can result in some peculiar anomalies, or to use Hubbard’s words, “noise.” The University of Wisconsin’s business school, for example, has a higher yield rate, at 73.0%, than Wharton or MIT. Is that an indication of the school’s quality or merely a likely outcome of Wisconsin being more of a regional school attracting candidates who want to stay in and around the state of Wisconsin? Similarly, the University of Maryland’s Smith School has more applicants per class seat, at 9.4, than Columbia or Duke. Is that merely a reflection of the small class size at Smith or a true measure of market preference? For that reason, we’ve limited our analysis to Poets&Quants’ Top 25 business schools in the U.S. A later update will broaden the methodology and reach deeper into the pool of top MBA programs.

  • fidel305

    Yes. Independently wealthy and/or sorting through investment or other ops.

  • K

    Exactly! Financier douches commenting on P&Q are annoying. Very few top students want to go to Finance, DEAL WITH IT. Even w/o finance, Wharton is still a great school, but most def not in the level of H/S.

  • JohnAByrne

    Sure.

  • kailash

    John,

    Could you also put together a list of all schools that are increasing their class sizes this year. TIA

  • fidel305

    Move Wharton to what is now your third grouping and make it group 2.

    Drop tuck and Haas into your fifth grouping and make it group 3

    And that pretty much does it

  • fidel305

    Nailed it

  • fidel305

Salary measures skew the rankings toward schools that send grads into finance and away from those that graduate a greater percentage of people who actually create something or go into nonprofits (as such, it’s a useless, self-serving metric you would expect CBS to employ).

Percentage employed does the same to some extent and definitely skews against schools whose grads are more likely to actually create something. Again, it’s no wonder CBS likes the stat.

  • fidel305

    PE isn’t rocket science, contrary to what some seem to believe, and even a rudimentary finance background is sufficient for PE.

HBS provides its students with many more important skills and opportunities than mere finance courses. And besides, if you happen to want some especially heavy number crunching on something, you can always just hire someone from Wharton or Booth to do it. It’s just a commodity.

  • fidel305

Basically, finance is garbage. Myopic Wharton types just haven’t received the memo yet. But if that’s what you want to do, get a one-year degree from MIT or LSE, or better yet just go to MIT undergrad and major in math.

  • fidel305

    H and W have both slipped

  • Click

I think it is too late to apply these metrics for building rankings, because most students, no matter how good the school actually is, would choose the best-ranked universities. And like the deans, most graduates will not say anything bad about their universities. It is normal; no one would publicly underrate his or her own university. This suggestion is worth nothing.

  • someone

I don’t believe these things. All top 20-25 schools can get you where you want to go. As a current first year at a top 15/20 school, I’ve realized that there really is not a significant difference in career outcomes once you’re in the Top 20. It’s really on the individual to put in the work, take on the leadership opportunities available, and recruit at the companies the school is best known for. To me, go for the school you see yourself fitting in at best, choose a school based on post-MBA outcomes, and make sure you’ll have the resources to make it happen (talk to current students to get the inside information on these things). For example, if you want to go into tech vs. CPG vs. an i-banking job, the list of schools you consider should be widely different.

  • The Good Doctor

    An exceptional analysis. If everyone thought with your power and frequency oh what a world this would be.

  • Also..

I would add Tepper to the Tech and Consulting list as well. As a % of the class, over half go into those two industries.

  • The Way it Is

    Excellent – finally some people calling the way it really is out there…
    Nice post!

  • nothing_to_see_here

Totally agree. P&Q, USNWR, and other rankings more or less perpetuate the lie that you can stack all these schools next to one another and get a nice, neat ranking from top to bottom. In fact, pretty much all top 15 schools are going after the same body of students and funnel their graduates to more or less the same employers. Yes, there are exceptions for PE / hedge funds and other uber-exclusive careers, but your chances of landing a job at the major banks, major tech firms, or top consulting firms are by and large the same, with maybe a few % points’ difference.

Honestly, these days, as someone who is actively trying to recruit talent and find people with the best fit, there is absolutely zero difference in my mind between the top 15-20 schools. Every school, even H/S/W, has its duds and bad apples; nobody throws out a resume or makes a hire/no-hire decision based on rankings.

  • nothing_to_see_here

It really depends on the industry you’re looking to go into… HBS/SGSB/Wharton/Booth grads can more or less recruit successfully across all industries. Wharton, Columbia, Tuck, and NYU have a bit of a leg up in Finance / PE, Stanford too for PE on the West Coast. I’ve seen consulting recruit very heavily out of Booth, Kellogg, Duke, Haas, and Darden (on top of H/S/W). For Tech – Haas, Duke Fuqua, and actually UT McCombs are some of the best funnel schools.

  • nothing_to_see_here

Newer, less established b-schools also have an incentive to keep class sizes large to expand their alumni base. It’s always a fine balance between maintaining a healthy class size and not compromising on quality. Even S/H/W have their fair share of duds and bad apples.

  • Fill

    Harvard’s subpar and scrambled culture is already being handily exposed. No one wants to go to a restaurant where the atmosphere is unappealing and overrated!

    I’ve witnessed this difference first/second hand!

  • Truth

    Wow. How can anyone take your response seriously when you use outdated and offensive terminology like that?

  • Financier

    This is why Johns reasoning of relying on applicant trends is retarded. Only a retard would fail to see the blatant fallacies in this method.

  • Financier

    There was a P&Q article just a few weeks ago that showed Wharton had more PE/VC than H+S combined. Idiot. Look it up, educate yourself, then come back. Everyone has realized H finance is garbage.

  • Sauron

From what I understand, W has historically been much more selective. Application trends these days are influenced by so much more – which is why GSB is the most selective and most difficult to get into, why cross-admits overwhelmingly choose GSB over HBS, why HBS gets called GSB’s safety school, and why HBS is ranked below Stanford in USNWR. But historically, it must be conceded that W has been at the top. And it looks like in finance it will be at the top forever.

  • Financier

    Wharton finance has always ranked #1 in every finance ranking in the world.

    HBS finance has never made the top 10 list. Stop being stupid with your dumb long posts.

  • shakeitoff

    dude, don’t know what you’re talking about but wharton’s PE/VC placement is lower than HBS and GSB….

  • polk

    “Absolutely no one takes HBS finance seriously. No one. Period.”

    What are you talking about? Have you ever looked at the HBS employment report? Sounds like the following financial services firms take HBS pretty seriously:

    * Accel Partners
    * Bain Capital
    * Bessemer Venture Partners
    * Blackrock
    * Blackstone
    * Bridgewater
    * Cerberus
    * Fortress
    * Google Capital
    * Index Ventures
    * Khosla Ventures
    * Kleiner Perkins
    * Maverick
    * Paulson & Co
    * RRE Ventures
    * Soros
    * Thrive
    * Trinity Ventures
    * Viking

    That’s only a sampling of the employers and that’s excluding Goldman, MS, JPM, and KKR, among several others, because I assume most HBS students looking at jobs in finance have already worked at one of those places previously.

    Hmm, now let’s look at the Wharton list:
    http://mbacareers.wharton.upenn.edu/statistics/files/cr13.pdf

    The three overlaps that aren’t investment banks are Blackstone, Perella Weinberg, and Warburg Pincus, the latter two of which I didn’t even include on the HBS list because students at HBS probably already worked at those firms before matriculating.

    Sure, maybe you learn a marginally better theoretical framework of finance at Wharton than at HBS, but when it comes time to pound the pavement, it’s not even close.

  • Guest

    That is incorrect based on my experience in the past, but maybe that’s changed since

  • JohnAByrne

    Actually, schools with larger class sizes have advantages in the MBA job market because they have the critical mass necessary to make it worthwhile for companies to come to campus to recruit their MBAs. In fact, several schools today, including the University of Toronto’s Rotman School and Oxford University’s Said School, are increasing class sizes for that very reason.

  • Truth

Totally agree with everything you said. There should be a disclaimer, however, that these rankings (whether from P&Q or USN) should not be the only factor in one’s decision. It would be impossible to create rankings that apply to every individual, for all the reasons you stated. I turned down a higher-ranked school, and I know there were people who didn’t understand my decision, but those people also didn’t do the hours and hours of research and soul-searching that I did.

  • LOL

    Nothing in the article suggests he’s looking for a scapegoat.

  • LOL

Exactly – this is the fundamental problem with this website and John’s so-called “logic” – it should be based on solid factors like employment, salaries, international alumni presence, research contributions and the like. These are the factors that matter.
    Oh, and also whether you actually graduate having learned something tangible.

  • LOL

H/S only gained from the global financial crisis. W has more finance hires than both combined, the highest employment statistics, the highest salaries, and already higher GMAT selectivity than H.
No one will even be disputing this issue once finance picks up again.
I think it’s pretty clear to conclude that H’s embarrassingly pathetic finance program has been exposed permanently.

  • Recent Grad

    Based on what this guy said the yield should actually be computed inversely. If the yield is high, the school is getting the best students, hence the output will be good even if the school doesn’t add any value.
    I would also take class size into consideration. The bigger the class size, the more challenging it is to get everybody a good job.

  • Unemployment

    All of those categories are accounted for, not reported as unemployed.

  • Guest

    … I’m assuming you didn’t go to one of these programs. What it means is that they started a company, are looking for something incredibly specific that only hires as needed (VC for example) or dont report because it’s family business

  • Buffett

    So if I were to make a #1 ranked school:

I’d form a class of 1 person; target 10 candidates and make sure he/she accepts, contingent on lining up a $150k job beforehand with scholarships. Not so hard if I were Jack Welch or Warren Buffett, for instance!

  • Killa

    And consulting stats are identical.
    This difference and W’s top position will become even more pronounced as everyone realizes the difference in the quality of education and international reputation for rigor. It’s already made a huge comeback, and it’s led to H slipping in every single ranking, while W reclaims the throne..

  • Killa

    Everyone knows fear of Wharton’s curve and fierce quantitative approach artificially lowers yield %’s. Doesn’t mean a thing.

It should be based entirely on employment statistics – Wharton’s IB/PE/VC is twice H+S combined.

  • LOL you went to BOOTH

    Lol. Intentional unemployment. After an MBA?

  • Common sense

    This is precisely what is wrong with this site.
    Application statistics change every year. So you’re saying a single snapshot based on applicant behavior determines the quality of the school?

    Ridiculous. John, your deductive reasoning leaves a lot to be desired.

  • Truth

    Agreed. If you want to do Marketing, Kellogg > Wharton. If you want to go into Tech, then there is a viable reason for choosing Berkeley over Tuck.

  • LOL you went to BOOTH

    Depends on what you want to study. If you want finance/wall street, rankings are:

    Wharton
    .
    .
    .
    The rest

    Same for international brand prestige in UK and especially Asia, where Wharton has always dominated historically. Their perceptions weren’t distorted by USNWR. USNWR is toilet paper outside the U.S.

  • sid

    Obviously ! hahaha..Isn’t that obvious. Especially based on the methodology described above!

  • 2cents

Would say the most questionable is jobs after graduation – if you don’t have one at 3 months from a top school, it’s intentional.

  • 2cents

Makes sense… CBS was hit by subjective scoring. More interesting in these rankings is the clustering that emerges. You see what many consider the coastal premier schools (H/S), followed by the current hot field of tech (MIT/Haas), and then NYC-centric finance (W/C/Stern).

  • Rebecca Robot

    That’s why yield is also used in this equation. The two together provide far better insight than a combination metric like admit rate.

  • dd

    It boggles my mind that no one produces a weighted salary measure. It would be so easy, and so much more indicative of the typical salary a graduate can expect to receive. Average salary is effectively meaningless. John? That article would be hugely popular. Do NYU graduates really make more than Yale graduates, or do Yale graduates just go into nonprofit at a higher rate? It could very well be that Yale graduates, on average, make more than NYU graduates when they enter PE, because of something like oversupply of NYU applicants vs. the five applicants Yale puts up, but we don’t know because no one has produced weighted salary numbers.

    It’s a huge disservice to applicants.

  • dd

Hubbard? The same Hubbard? The one who has steadily dropped Columbia in rankings? The one who has failed to diversify Columbia beyond Wall Street even the tiniest bit? The one who has allowed his school to be considered a peer of Stern, a perennial backup school for investment bankers? The nation’s most ideological dean, who is only marginally respected in his profession? What world are you living in?

  • Caio

    Agree! Maybe Tuck side by side with Berkeley though?

  • TCasg2014

    Sure — UCLA in the last group.

  • Picky_dean

Saloner and Hubbard are the most successful business school deans by far. They have an outstanding track record. Sadly, the same can’t yet be said of other deans, namely Nohria, Garrett or Kumar.

  • Orange1

    Totally agree but throw UCLA into the last grouping.

  • TCasg2014

    Doesn’t matter how you try to cut it, there are 8-15 schools in the world worth going to, depending on your background and goals. Of those 15, it goes something like this:

    Harvard and Stanford
    Wharton
    Booth, Kellogg, Tuck, Columbia, and MIT
    Berkeley
    Duke, Darden, NYU, Yale, Ross, Cornell

    *schools within the same line are appx. equal / decision to go based on personal choice — personally wouldn’t have Wharton separated, but most others would.

  • 4cents

    It is certainly the most reasonable and transparent ranking that we have ever seen. The only questionable metric is the yield.

  • _

    That metric has nothing to do with “competition per seat”. In any given year regardless of whether or not they choose to attend all those who have gained admission to the program have won the “competition for a seat”. Given that you can’t have 1000 winners for a seat when under your regime of measurement there are only 900 seats, your measurement of seats must be incorrect.

  • hmm

    Emory>Georgetown ?

  • Chinee Foo

I disagree with these ratings, not least because there needs to be an adjustment to Yale’s salary for the number of people (in a small graduating class) who go into government or non-profit. Also, the yield rates may be higher for mid-tier schools because that’s the best the applicants could do, and the employment numbers higher because said schools do have local/regional market value.

  • Esuric

No it isn’t. First of all, the ranking published here isn’t “his ranking.” It’s a ranking loosely based on a methodology he vaguely describes. Second, Columbia is ranked 6th here while its ranking in USN, which is the only ranking that anyone really cares about, is 8th. Columbia’s drop in that pivotal ranking is probably what triggered his reaction (blaming the rankings rather than his own terrible performance).

  • 2cents

    I’m actually… agreeing with these rankings?

  • Rebecca Robot

    Students who enroll is the correct definition of seats. This is different from the inverse of admit percentage because of admitted students who do not attend.

    The schools will, in fact, fill the seats no matter what. Whether that be through R2, R3, or the waitlist. The number of seats is known in advance and does not change, even with variance in yield.

  • Folly

The J-Term is no longer a problem for CBS full-time students. Employers now recognize the selectivity involved in the full 2-year MBA program. Therefore, CBS students are consistently ranked among the best at almost every firm (e.g. GS, McK, BCG), as opposed to their J-Term counterparts.

  • huh?

The number of seats DOES change. Take the Wharton example, assuming Wharton admits 1000 people.

    100% yield: the denominator is 1000
    90% yield: the denominator is 900
    80% yield: The denominator is 800

    This is because his definition of seats available is “students who enroll” not “students who are admitted”. If the denominator didn’t change then the competition per seat figure would be the inverse of “percent admitted” (aka “total applicants/seats offered”).

Essentially, the only way your logic would work is if you assume most admissions are made in rd 1 and subsequent rounds are just fillers (when in most cases rd 2 is the largest, and some schools only have 2 rds).

  • iknow

LOL because most Columbia and Harvard MBAs are starting companies and hedge funds and not in some avg 200K job

  • bwanamia

    I would like to see more granular information about the applicant pool at each of the schools. At a minimum, there should be a matrix corresponding to US, non-US vs. non-URM, URM. It’s meaningless to talk about applications per seat if there are thousands of applications from India that don’t stand a chance of being accepted.

    Moreover, I might create another matrix for US applicants: 700+ GMAT, 700- GMAT, non-URM, URM. Again, it’s meaningless to talk about applications per seat if there are thousands of applications from 700- GMAT, non-URM applicants who don’t stand a chance of being accepted.

    What we need is something that enables us to identify the comet’s tail of non-competitive applicants. Meaningful applications per seat can only be based on applications that are competitive with regard to GMAT, GPA, undergraduate pedigree and work experience.

  • JohnAByrne

    Scores are scaled from the top-ranked value.

  • Rebecca Robot

    Again, the # of seats in the class does not change, despite yield.

  • Rebecca Robot

    No, the denominator (# of seats) does not change. The school will fill them even with a low yield (waitlist, R2, R3, etc).

  • Rebecca Robot

    I would argue that applicants absolutely factor in a school’s selectivity when deciding to apply. Both safety and reach schools can be chosen based on viability rather than applicant preference.

    For example, it’s difficult for Haas to be a safety school because so few people get in, but Booth is a good safety pick.

    Similarly, if a lower-tier applicant is going to shoot for a top-tier reach school, they are more likely to pick Harvard over Stanford.

  • Rebecca Robot

    His school is ranked lower in his own ranking than in “official” published rankings… He’s not blaming anything.

  • Rebecca Robot

    Student preference. Many top students don’t want $350k starting salaries because they don’t want to work in PE or some other dreadful role. They would rather work in retail or media for $110k.

    I would argue that the demand for MBB consulting positions is much, much higher at rank 10-15 schools than at H/S, where few people (relatively speaking) apply for those positions, and everyone who does gets one.

  • Rebecca Robot

    As a fellow psych grad: could be. However, I think it’s more likely that scores are scaled linearly from the top-ranked value rather than using value distributions.

    John?

  • Roland

In true dismal-scientist fashion, Glenn is missing the forest for the trees. His stress on selectivity would imply that the caliber of students going to places like Harvard is far and away better. But when you look at the outcomes, the top 13 schools all have salaries above $135K and mighty Harvard is only $144K. Big whoop. I’m disappointed. Harvard is only worth a few thousand dollars more? What’s the big deal? Either the students at Harvard aren’t appreciably better or the quality of the Harvard education is appreciably worse. And how about Columbia being in the center of the financial universe and having lower starting salaries than someplace like Michigan? What gives? Either the top 10,000 MBA graduates are interchangeable or some of these sloppy seconds schools are doing a hell of a job. Occam’s razor points to interchangeable. Face it. Once you establish a certain level of intelligence and diligence, it’s virtually impossible to know which candidates will have that magical combination of conformity, kiss-assery and ruthlessness to climb the corporate heap.

  • Bliu

IIFT’s MBA class of 2014 earned a “highest international CTC (cost to company)” of 88,500 USD. That drops IIFT off this top 25 list entirely.

    http://edu.iift.ac.in/iift/docs/FinalPlacementReport2014.pdf

  • Fred

    This methodology isn’t perfect, but it does make a reasonable effort to address the bucket differences. Given that we can only speculate, this is a good rule of thumb for me. Berkeley might not steal admits from Columbia, but MIT could more easily. These same results can happen as you walk up the rankings!

    Depends on what circles you talk to. Overall, you can’t go wrong, but the higher up the ladder you climb, the better. There are noticeable differences between #3 and #8!

  • Dr_Ads

    HBS MBA Class of 2014
    9,500 applicants, 950 enrolled, yield 88%, 97% employed within three months

    Indian Institute of Foreign Trade MBA Class of 2014
    62,029 applicants, 258 enrolled, yield 98%, 100% employed within three months

    So there you have it, a small government-backed BS in New Delhi is (according to Hubbard) clearly superior to HBS…

  • Emas

Your analysis is wrong in practice. At the moment you have two main buckets: HBS, Stanford and Wharton (even though its popularity is decreasing) and B, K, H, T, M, C. If you applied to business school this year and got in touch with peers, you’d have noticed that students are choosing among schools in the second bunch without paying much attention to the rankings.
All of them are great schools, and if you have to choose among them, you take into account your career path and your fit with the culture. You aim at marketing? Kellogg. You like a big school or aim at finance? Booth. You aim at consulting? Tuck…
Rankings are clearly very important, but my impression is that lots of “rankings nerds” are giving them a mystical importance. I’ve been admitted to Wharton, Booth, Kellogg and Tuck and, having to decide among them, I’m really not interested in rankings at all.
#3, #4, #8… who cares, we are talking about the best schools in the world. They offer the same opportunities.

  • Fred

Using 2-yr average P&Q rankings, I’ve calculated the adjusted YIELD % using the premium system described above. This is how I read Dean Glenn Hubbard’s student “demand” signal reference: adjusted yield paired w/ application volume (NOT apps per seat). By these measures, student demand signals (1) Harvard, (2) Stanford, (3) Wharton, (4) Columbia, (5) Kellogg, (6) MIT, (7) Booth. These combined “input rankings” should be paired with the salary and job placement stats to determine the overall ranking based on Glenn Hubbard’s suggested methodology.

    PQ RNK / Yld% / Prem-Disc / Adj Yld% / Adj Yld% RNK / Aps#
    H (#1PQ) – 88.8 + 10 = 98.8 (#1 & #1 application vol ~9400)
    S (#2PQ) – 78.7 + 9 = 87.7 (#2 & #2 application vol ~7500)
    W (#T3PQ) – 68.0 + 8 = 76.0 (#3 & #3 application vol ~6100)
    B (#T3PQ) – 59.4 + 8 = 67.4 (#6 & #7 application vol ~4000)
    K (#5PQ) – 63.9 + 6 = 69.9 (#5 & #6 application vol ~4400)
    C (#6PQ) – 70.4 + 5 = 75.4 (#4 & #4 application vol ~5900)
    M (#7PQ) – 62.3 + 4 = 66.3 (#7 & #5 application vol ~4900)
    T (#8PQ) – 52.2 + 3 = 55.2 (#8) (lost interest at this point…)
    D (#9PQ) – 50.9 + 2 = 52.9 (#10)
    B (#10PQ) – 52.5 + 1 = 53.5 (#9)
    M (#11PQ) – 50.9 +0 = 50.9 (#12)
    D (#T12PQ) – 45.8 – 1 = 44.8 (#15)
    C (#T12PQ) – 52.6 – 1 = 51.6 (#11)
    U (#14PQ) – 48.2 – 3 = 45.2 (#14)
    Y (#15PQ) – 49.5 – 4 = 45.5 (#13)
    N (#16PQ) – 48.7 – 5 = 43.7 (#16)
    C (#17PQ) – 46.6 – 6 = 40.6 (#17)
    N (#18PQ) – 37.9 – 7 = 30.9 (#22)
    T (#19PQ) – 44.4 – 8 = 36.4 (#19)
    I (#20PQ) – 45.6 – 9 = 36.6 (#18)
    E (#21PQ) – 43.5 -10 = 33.5 (#20)
    G (#22PQ) – 34.5 -11 = 23.5 (#23)
    W (#T23PQ)- 33.0 -12 = 21.0 (#24)
    W (#T23PQ) – 30.9 -13 = 17.9 (#25)
    V (#T25PQ) – 46.1 – 14 = 32.1 (#21)
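    Fred's arithmetic above can be expressed as a short sketch (a minimal illustration of his premium system, assuming the yield figures in the table and a one-point premium step per P&Q rank crossing zero around rank 11; the function name is hypothetical):

```python
# Minimal sketch of the adjusted-yield arithmetic in the table above.
# The premium steps down one point per P&Q rank: +10 for the #1 school,
# zero near the middle of the pack (rank 11 here), negative below that.
def adjusted_yield(raw_yield_pct, pq_rank, neutral_rank=11):
    premium = neutral_rank - pq_rank
    return raw_yield_pct + premium

# A few rows from the table (raw yield %, P&Q rank):
for name, (yld, rank) in {"H": (88.8, 1), "S": (78.7, 2), "W": (68.0, 3)}.items():
    print(name, adjusted_yield(yld, rank))  # 98.8, 87.7, 76.0
```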

  • Fred

    Applicants do not factor in # of seats when applying to their target programs. TOTAL APPLICATIONS is the true comparable measure of demand for a school’s brand (not seats). TOTAL APPLICATION volume plus YIELD % should be most telling.

    However, YIELD % is a flawed metric as well when used to compare schools, due to the "competition tiers." For instance, HSW compete for applicants in one bucket while Tuck/NYU/Darden compete in another, largely separate bucket. One should ask: "which schools are stealing X school's admits?" For the most part, if two schools have identical YIELD %, then one should reward the higher-tiered school with a YIELD PREMIUM or, conversely, assess a YIELD DISCOUNT to the lower-tiered school.

    This should be obvious to most people and it should be thought of as a rule of thumb. Using this assumption, take the P&Q ranking (averaged over the past 3 years) to rank schools then assign the highest premium to the #1 school, a slightly lower premium to the #2 school, slightly lower for #3, etc. Eventually, you’ll get to the middle of the pack where NO premium or discount should be assigned. Let’s call this school #10. Then, at school #11, an increasing discount should be applied as far down in the ranking as you go.

  • Dr. Z

    A note from a humble Psych grad: Is the statistical tool used to create your indexed scores also known as a “Z score” transformation?

  • Esuric

    Olin ranks 2nd, not 3rd, for employment rates (three months after graduation). P&Q really needs to proofread its content before releasing it.

  • Esuric

    How shocking… the dean who tanks his school's ranking blames the rankings!!

  • Dan

    There are multiple serious flaws in the analyses discussed in this article. Business school rankings have been analyzed to a ridiculous degree, but one finding has been obvious for some time: in most cases the most significant determinant of the number of applications a school receives is the ranking of the school. To rank schools based on the number of applications received has this completely backward, or at least uses circular logic.

    Another key driver of application numbers is the location of the school. Is the school in a great place to live for 1-2 years, or a great place to pursue a job, or a great place to live for the long term? Does the school's location offer a trailing spouse good employment prospects and/or a decent quality of life, at least for the duration of the program? Other key drivers of application numbers include the level of scholarship aid available, the cost of living in the area, and issues of family safety, living accommodations, and services in the area. A pledge to offer many "full ride" scholarships will increase applications. My own school is among those in this P&Q list. Location is an advantage for my school, as it is for Stanford, UC Berkeley, and certainly Columbia as well. Location is a disadvantage for some other schools. Either way, though all these factors affect application numbers, none of them has anything to do with the quality of the education delivered by the school.

    Further, the increasingly common argument that placement numbers and salary should drive rankings is also flawed. These are inadequate as the only metrics for business school outcomes. Again, my school scores exceptionally well on these two metrics and, yes, they deserve consideration by candidates. However, salary and placement percentages are driven by many factors other than the quality of education delivered by the school. Do graduates of the school pursue jobs in the highest-paying industries and/or in the highest cost-of-living cities? Do graduates pursue the immediate rewards of a high corporate salary or seek the potential for higher future income with a start-up? Are graduates of the school drawn to international or non-profit positions for personal reasons, even though salaries there are lower than in New York or Silicon Valley or with national for-profit companies? Are the students at the school hungry for the first good job offer they can land, or are some students more likely to scour all options for the best possible fit, even if it delays a job acceptance until more than three months after graduation, making them count as unemployed and "still seeking"?

    It is a fantasy that students always choose to attend the highest-ranked and therefore "best" school where they are accepted. It is a fantasy that all the "best" graduates from the "best" schools are employed fastest and earn the highest salaries. These simplistic assumptions lie behind the major business school rankings. Despite the frequent disclaimer advising candidates to look for a personal "fit" when selecting a program, the rankings effectively assume that neither students nor recruiters actually look for "fit" in making decisions. The fact that they do look for fit spreads quality students across many schools and many different employers, making it very hard to put a reliable rank order on the quality of the education based only on student data or career outcomes.

  • Yatta Yatta

    Forget the rankings entirely. Does P&Q really need them?
    If P&Q keeps Wharton off some of its lists, who will take them seriously anyway? You are right: mixing biased ranks doesn't necessarily make a weighted ranking better.
    The rankings are meaningless – right out of an Edward Albee play.
    P.S. Lest I seem like a pessimist, please remember that a pessimist is a seriously profound optimist who just knows too much. On behalf of so many older, experienced business professionals, Houdini said: "My professional life has been a constant record of disillusion, and many things that seem wonderful to most men are the every-day commonplaces of my business."
    I hope the deans 'chill' about rankings. They have students to care for.

  • midwestern

    Very interesting. I wonder if this will become the P&Q 2015 ranking criteria?
    Because… mixing biased ranks doesn't necessarily make a weighted ranking better, you know…
    @JohnAByrne

  • Yatta Yatta

    Forget business school and law school – seriously, this is good advice from a salty ole' CEO. Get a degree in contemporary philosophy with many multidisciplinary courses that will help you have a life as close as possible to the life you want. But learn a bit from the late Joseph Campbell: "'If the path before you is clear, you're probably on someone else's,' 'The privilege of a lifetime is being who you are,' 'We must be willing to let go of the life we planned so as to have the life that is waiting for us,' and 'Life has no meaning. Each of us has meaning and we bring it to life. It is a waste to be asking the question when you are the answer.'"

    Measuring business schools is metaphysical – silly – the top business schools have the biggest endowments and brand names. No one knows how the rest of the business schools make any one of the myriad of lists. The explanations are just too surreal and nonsensical to us businessmen. It just does not matter: business school helps you get your first job – that's all s/he wrote. Thereafter, the name is an ego boost at cocktail or pot parties.

    “The most farsighted thinkers from around the world addressed seemingly intractable global problems and found science has shown that in only one time out of nine, when faced with preventable conditions like heart attacks, are people able to change. The lesson translates across all realms of human activity. Confronted with radical changes from outside their walls, businesses find themselves unable to adapt. If they hope to thrive, corporate leaders need a strategy for continuous mental rejuvenation and new learning.” (Alan Deutschman, Change or Die, 2005)

    First, the business school needs to talk with the vast percentage of its graduates who fail at business, and therefore often at life, because they learned little to nothing that prepared them for the Darwinian world they joined after graduation. Measure that!

    Then a business school has to have the courage to change by “building a visionary organization that counterbalances its fixed core ideology with a relentless drive for progress. While core ideology provides continuity, stability, and cohesion, the drive for progress promotes change, improvement, innovation, and renewal. An organization must be prepared to change everything about itself except its basic beliefs as it moves through life. An organization must preserve its core ideology while allowing room for the manifestations of the core ideology to change.” (Jim Collins and Jerry Porras, Built to Last: Successful Habits of Visionary Companies, 1994)

    Then that transcendent business school will not have to be ranked – it will be obvious to everyone! And it sure won't be Harvard or MIT. They are too busy clapping for themselves, celebrating the reputation their predecessors handed them, and raising another $1 billion (HBS), rather than seeking the strategy for continuous mental rejuvenation and new learning that they have a DUTY to pursue for their universities. P.S. Games and contests are for kids.

    Get rid of tenure and every business school will get better fast!

  • huh x2

    The applications per seat metric that you use is completely irrelevant and misleading. It penalizes the schools that have high yield rates (such as HBS).

    To give you an example of why it is a garbage metric, let’s say that I start a business school tomorrow. I convince 20 people to apply and accept all of them. However, I can only get 1 person to enroll. By your applications per seat metric, I would now be the #1 school on your list.

    Why not just use a metric for total number of applicants divided by number of admitted students? This is a much more useful metric.
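    The toy example above is easy to check in code (a minimal sketch using the commenter's hypothetical numbers, not real admissions data):

```python
# The commenter's hypothetical: a brand-new school with 20 applicants,
# 20 admits, and only 1 enrolled student.
applications, admits, enrolled = 20, 20, 1

apps_per_seat = applications / enrolled   # 20.0 -- looks extremely "in demand"
apps_per_admit = applications / admits    # 1.0  -- exposes the 100% admit rate

print(apps_per_seat, apps_per_admit)
```

    The two metrics tell opposite stories about the same school, which is the commenter's objection in a nutshell.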

  • Matt

    Re: Columbia — P&Q always has to “explain away” the high yield as being due to early decision. In reality, I think the bigger factor in yield is the J-term. Anyone applying to the J-term is likely not considering many other options, and the school has little competition from other programs due to the off-cycle entry.

    How many people really sign up for ED who would otherwise have been admitted to, and attended, another school were it not for the ED clause? The number is not meaningful. In reality, people renege or just gain some signaling pick-up because CBS is their first choice anyway.

    Furthermore, the J-term seriously disadvantages Columbia on the applicants-per-seat metric, as I have to imagine the J-term program is significantly less competitive than the fall program (there are just fewer people for whom it is a good fit, not that they're lower quality). Intuitively, the J-term yield pickup would be offset by the worse applicants-per-seat number.

  • Hell_Biker

    Not that useful to a potential applicant, though. My decision about where to apply would be based on one factor alone: is the school likely to get me the job I want and prepare me to succeed in that job?

    Anything after that is a bonus.

  • Matt

    Then aren’t you double-counting yield? The “seats available” metric takes yield into account, as does the pure yield metric. A school with a better yield gets the benefit both ways (a better yield metric and a lower denominator on the seats metric).

  • YeahRight

    Before you go to business school you need to learn something real… after which there is zero reason to go to business school.

  • huh?

    Additionally, if that is how you’re viewing “seats available,” then you’re no longer measuring competition per seat. The “competition” will always be for the 1,000 seats in either case, and an applicant’s subsequent decision (stay or go) has no bearing on “competition.” In fact, as you mentioned, the negative delta should hurt a school rather than help it (which is the opposite of what a lower yield does in your competition-per-seat equation).

  • huh?

    By that logic, if in 2014 only 400 of 1,000 admits came to Wharton, then the “seats available” would be 400… and that’s too far outside the realm of possibility.

  • VetCEO

    Researching and keeping metrics on the number of graduates who lived a complete, holistic life. The rest of the stuff doesn’t count! Steve Jobs: “Remembering that I’ll be dead soon is the most important tool I’ve ever encountered to help me make the big choices in life. Because almost everything – all external expectations, all pride, all fear of embarrassment or failure – these things just fall away in the face of death, leaving only what is truly important.” And Derek Bok: “As time goes by, the technical and practical skills that vocational majors learn in college become less important to continued success [in the workplace]. Such abilities as communication skills, human relations, creativity, and ‘big picture thinking’ matter more.” (Harvard president, ret., Our Underachieving Colleges: A Candid Look at How Much Students Learn and Why They Should Be Learning More, Princeton, NJ: Princeton University Press, 2006.) Charles McGrath (The New York Times, 2006): “In the Bok view, American colleges and universities are victims of their own success: they answer to so many constituencies and are expected to serve so many ends that no one can agree on even a few common goals.” Business school is a liberal arts education in the field of business and cannot be measured by business metrics.

  • JohnAByrne

    If Wharton accepts 1,000 people and 900 enroll, that does mean that only 900 seats were available, because 1,000 seats were never available in the first place. Only 900 were, and admissions is managing the number to account for people who won’t come. The 100 who turn Wharton down are also a reflection on Wharton, because those candidates are rejecting Wharton to go elsewhere for a variety of reasons.

  • huh?

    I get what the metric purports to show. But a more accurate measure is the ACTUAL applicants-per-seat number, not the ratio to the seats that were ultimately filled. Given that market demand from students is not a function of which admits accept (at least in the same year), separating the variables would lead to a better metric of demand per seat.

    Ex. If Wharton accepts 1,000 people and 900 matriculate (100 go somewhere else), that doesn’t mean that applicants were competing for 900 seats. The competition per seat should not be a function of what those 100 people chose to do, because person 901 choosing not to come didn’t make it any harder or easier for person 899 to get in.

  • JohnAByrne

    Not at all. Applications received for each seat in the class is a reflection of the market demand for the program. Yield shows how successful a school is in converting admits into enrolled students who most likely have other choices.

  • huh?

    Given that metric, I guess the HBS ranking is still confusing. Shouldn’t the competition per seat match up with the % admitted, given that they are inverses… and if it doesn’t, doesn’t that mean something is wrong with the competition-per-seat metric (principally that yield differences across schools are mucking it up)?

  • JohnAByrne

    We calculated applicants per seat by taking the total number of applicants and dividing by the number of enrolled students in the latest class.
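    The divergence the thread keeps circling can be seen directly from this formula (a minimal sketch with made-up numbers, not actual school data; the function name is hypothetical):

```python
# Why applicants-per-seat and admit rate can rank schools differently:
# enrolled = admits * yield, so a high-yield school fills its class with
# fewer admits and posts a LOWER applicants-per-seat figure at the same
# selectivity. Numbers below are illustrative only.
def demand_metrics(applications, admits, yield_rate):
    enrolled = admits * yield_rate
    apps_per_seat = applications / enrolled
    admit_rate = admits / applications
    return apps_per_seat, admit_rate

high_yield = demand_metrics(9000, 1000, 0.90)  # (10.0, ~0.11)
low_yield = demand_metrics(9000, 1000, 0.50)   # (18.0, ~0.11)
```

    Both hypothetical schools admit the same ~11% of applicants, yet the low-yield school shows far more "applicants per seat," which is why the two measures need not mirror each other.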

  • Stuart

    It’s interesting that on the inputs side at least, this is what schools are essentially optimizing for — and I guess it’s not surprising that a Dean’s ranking would reflect these, a kind of market-based metric around student selection by schools and school selection by students. West Coast small schools “win” the apps per seat, if not necessarily “the yield.” I would have thought Stanford had the highest yield in addition to highest apps per seat. Oh well.

  • huh?

    How are you calculating applicants per seat? E.g., how can HBS have the 2nd-lowest admit rate and only the 5th-highest applicants per seat? Shouldn’t the two mirror each other? Also, is that figure based on seats in the incoming class or on offered seats (i.e., does it take into account the later numbers on yield)? Thought the article was pretty interesting. Thanks!