Are Academics Out Of Step With Recruiters & Students?


Grade inflation isn’t just for students anymore. Turns out, academics give their fellow MBA programs high marks, too. Question is, are those grades based on rigorous examination — or just slipshod opinion?

That’s the question at the heart of the annual U.S. News & World Report Best Business School Rankings, which uses academic opinion as its foundation. On paper, business academics seem well positioned to evaluate their peers. Unlike their liberal arts kin, who are traditionally lampooned as insulated fusspots, business profs are often regarded as thought leaders who help shape their field.

Being attuned to the macro trends of industry, however, doesn’t necessarily translate into literacy in the micro developments at individual business schools.


U.S. News

It’s no secret: “Peer” assessments can make or break a business school’s annual U.S. News ranking, the most closely followed ranking of U.S. MBA programs. These assessments account for 25% of a ranking, but the process is shrouded in mystery. According to U.S. News’ methodology, business school deans and directors of accredited MBA programs score schools using a scale from 1 (Marginal) to 5 (Outstanding). As a caveat, U.S. News notes that respondents — the 41% who actually participated — are told to just mark “Don’t Know” for schools where they have limited exposure.

You can imagine the issues this creates.

For one, the actual size of the sample is never disclosed by U.S. News, let alone the number of evaluations a school received. That means readers have no idea just how much weight each assessment carried — particularly considering U.S. News’ narrow five-point system, which could be a boon or albatross for schools with lower response rates. More importantly, the scoring is vague. Are respondents appraising academic acumen, student quality, or outputs like salary and placement — or some mixture that may not be consistent from one respondent to the next?

Above all, the survey’s simplistic requirements rouse concerns about respondents’ actual exposure to schools where they neither work nor (in all likelihood) retain a regular presence. As a result, the scores risk being tainted by chatter or bias — or, most likely, by the prior year’s rankings.


Stanford University

This distance between evaluator and subject may be one reason for the gap between peer and recruiter assessments in U.S. News. Both rely on the same five-point scale, yet their school averages are noticeably different. Take U.S. News’ Top 10 MBA programs in the 2017 rankings. At seven of the 10 schools, two- or three-tenths of a point separate peer and recruiter scores. Notably, academics gave Stanford a 4.8 average, tying the GSB with Harvard Business School for the top score. Recruiter scores for Stanford and Harvard, however, averaged 4.5, the same as Kellogg, Booth, Sloan, and Wharton. A similar result occurs with UC-Berkeley’s Haas School of Business (4.6 peer versus 4.3 recruiter) and the Columbia Business School (4.4 versus 4.1). Not to mention, academics scored four top 10 programs — Harvard, Sloan, Wharton, and Booth — 0.2 of a point higher than recruiters did.

It doesn’t stop with the elite schools. Among U.S. News’ top 25 schools overall, there were 12 schools where academics’ averages ran 0.2 of a point above recruiters’, with another four schools sporting a 0.3-point gap. In this segment, just three schools came away with the same average in both categories: Yale (4.3), Cornell (4.1), and Notre Dame (3.5).

Big picture, these disparities are even more pronounced. Academics confer higher scores, as a whole, by a 4.32-to-4.20 margin among top 25 schools. However, there were exceptions to academics rewarding perceived top schools with higher scores — notably among schools ranked below 15th. They included Texas and Georgetown (both 0.2 of a point better with recruiters) and Emory, Vanderbilt, and Rice (+0.1 among recruiters).

(Go to next page to see peer assessments from 2006-2017)

  • Thanks for this info.

  • Certainly money. I am not sure if it is still the case, but I was once told by someone from BusinessWeek that their MBA ranking issue is consistently their best selling (and highest advertising revenue generator). Accordingly, I expect that all MBA ranking publications are sensitive to messing with a proven revenue generating product.

    From the school perspective, I actually expect that many schools would welcome a personalized ranking system rather than the one-size-fits-all systems we have today with the publications choosing the criteria and weights. Since the rankings rarely contain new information (i.e., top schools stay around the top, year after year), personalized rankings would allow mid-tier schools to potentially surprise some candidates based on a fit that wasn’t obvious looking at the standard rankings.

  • Orange 1

    Would love to see this Daniel but we all know that there is too much effort, money and ego involved to get rid of the current ratings system.

The ranking of academic institutions and/or programs has always been criticized for the measures or indicators used, the methods used to collect data for those measures, and the choices made in aggregating data from multiple measures into some sort of overall score. Today, we have a solution to the question of selecting measures and aggregating them that was not as simple to offer when these publications began ranking b-schools years ago, in a pre-World Wide Web era: personalized rankings.

Publications should move away from publishing rankings to publishing databases. Then, on a convenient website, individuals could select the specific measures they believe are important (e.g., starting salaries, salary differentials, recruiter opinions, academic opinions, class demographics, selectivity, average GMAT, etc…), assign a weight to each, and have their own, personalized ranking based on their personal criteria. The digital, interactive world we live in today does not require journalists to choose the criteria to measure and the weights to assign when comparing MBA programs – we can do that for ourselves if provided the data.
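The personalized ranking this commenter describes is essentially a weighted sum over normalized measures. A minimal sketch, assuming hypothetical school names, measures, and weights (none of this is real U.S. News data):

```python
# Sketch of a personalized ranking: each user picks measures and weights,
# and schools are ranked by a weighted sum of min-max normalized values.
# All school names and figures below are illustrative, not real data.

def personalized_ranking(schools, weights):
    """Rank school names by a weighted sum of min-max normalized measures."""
    measures = weights.keys()
    # Normalize each measure to [0, 1] across schools so units don't matter.
    lo = {m: min(s[m] for s in schools.values()) for m in measures}
    hi = {m: max(s[m] for s in schools.values()) for m in measures}

    def score(stats):
        total = 0.0
        for m, w in weights.items():
            span = hi[m] - lo[m]
            norm = (stats[m] - lo[m]) / span if span else 0.0
            total += w * norm
        return total

    return sorted(schools, key=lambda name: score(schools[name]), reverse=True)

# Hypothetical inputs: one candidate cares mostly about salary,
# another mostly about average GMAT.
schools = {
    "School A": {"starting_salary": 145000, "avg_gmat": 730},
    "School B": {"starting_salary": 125000, "avg_gmat": 710},
    "School C": {"starting_salary": 135000, "avg_gmat": 690},
}

print(personalized_ranking(schools, {"starting_salary": 0.7, "avg_gmat": 0.3}))
# → ['School A', 'School C', 'School B']
print(personalized_ranking(schools, {"starting_salary": 0.2, "avg_gmat": 0.8}))
# → ['School A', 'School B', 'School C']
```

Note how the two weightings reorder the mid-tier schools: that reordering is exactly the "surprise" the earlier comment suggests personalized rankings could surface for candidates.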