Few Trust B-School Rankings, But Nearly All Admit Their Impact


Rankings are a huge part of any business school applicant’s due diligence. For many, how a school’s MBA program ranks compared to its peers may be the first thing they look up in their pre-application research. For some, it may even be the most important factor.

But new research suggests that few are willing to credit rankings as accurate reflections of MBA programs’ worth. In a new online survey of B-school professionals and MBA students, graduates, and employers by the Association of MBAs and its sister organization the Business Graduates Association, 9 in 10 say they believe rankings “have a fair amount of influence” on demand for a school’s MBA program — but only 1 in 10 say rankings reflect the true performance of the programs “very well.”

Rankings can, of course, be done the right way. Leaving aside the questionable methodology and data gathering techniques of some of the major rankings, part of the reason for a lack of trust in them among the business school community is, no doubt, a proliferation of scam rankings. But there is also a widespread perception of rankings as the product of click-hungry editors making arbitrary judgments about what to weigh and what to discount about an education — that they are, in short, out of touch with reality.

“The findings from this study suggest that MBA rankings are largely seen to be out of touch with the delivery of MBAs on the ground,” says Will Dawes, AMBA & BGA’s research and insight manager. “One cause might be that the measurements used to assess the quality of MBAs are outdated — with the recent developments in program design, student compositions, and the evolution of what an MBA means to students and schools — and that further consideration needs to be given to rankings modernization.”



AMBA & BGA conducted the online survey of 1,291 stakeholders between August and October 2018. It found that while only 11% think MBA rankings reflect the true performance of MBA programs “very well,” approximately a third (34%) think the opposite — that rankings do not reflect an MBA’s performance very well or at all well, a contrast that AMBA described as “a perceived lack of alignment between MBA rankings and the quality of actual MBA delivery.” Business school professionals, including deans and MBA directors, were the least likely group to agree that rankings reflect MBA performance effectively. Just 4% think rankings do this “very well,” though more than half (52%) say rankings measure MBA programs “well.”

Stakeholders overall are not particularly confident that MBA rankings help students determine the quality of an MBA, with just 29% strongly agreeing while 51% “tend to agree” that this is the case. The majority (59%) have views on changes they would make to MBA rankings; just 12% of stakeholders do not think any changes are needed, while 29% do not know what changes should be made.

Survey respondents were asked to state what percentage they would give to each of five broad MBA rankings criteria when deciding the overall composition of an MBA ranking. Overall, there is a relatively even split between the weight they believe should be applied to each of the criteria. The highest mean weighting was given to “the quality of management faculty e.g. qualifications, quality of research” (25%), closely followed by “the overall student experience” (24%), “the outcomes of MBA graduates e.g. career progression and salary” (21%), and “the diversity, breadth, and quality of the MBA cohort e.g. background of MBAs” (18%). Meanwhile, “alumni engagement e.g. frequency and quality of the network” is given a suggested weighting of 13% among this range of criteria.
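To illustrate how such weightings would work in practice, here is a minimal sketch of combining the survey's mean suggested weightings into a single composite score. The per-criterion scores for a school are hypothetical, invented purely for illustration; note that the reported mean weightings sum to 101% because of rounding, so the sketch normalizes them.

```python
# Mean suggested weightings from the AMBA & BGA survey (as reported above).
weights = {
    "faculty_quality": 0.25,
    "student_experience": 0.24,
    "graduate_outcomes": 0.21,
    "cohort_diversity": 0.18,
    "alumni_engagement": 0.13,
}

# Hypothetical per-criterion scores (0-100 scale) for an illustrative school.
scores = {
    "faculty_quality": 82,
    "student_experience": 75,
    "graduate_outcomes": 68,
    "cohort_diversity": 71,
    "alumni_engagement": 60,
}

# The reported weightings sum to 1.01 due to rounding, so normalize them
# before computing the weighted average.
total_w = sum(weights.values())
composite = sum(weights[k] / total_w * scores[k] for k in weights)
print(round(composite, 1))  # → 72.6
```

The point of the sketch is simply that each criterion's contribution is its weight times its score; shifting weight from, say, alumni engagement to graduate outcomes changes which schools a ranking rewards, which is exactly the judgment call respondents questioned.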

For each of these criteria, participants were also asked about the importance of a range of sub-criteria. A little over a third, 34%, consider salary to be one of the top two most important factors among graduate outcome measurement criteria — a category that, as noted above, received a mean suggested weighting of just 21% of a ranking’s overall score.



As Dawes says, “Another issue raised in the survey is that excessive use of salary data in rankings provides an inadequate picture of the impact of an MBA program — specifically, that this measure is unable to reflect what students want from their MBAs, what the MBA program is seeking to achieve, overall student satisfaction with the program, and a rounded view of how it impacts on the student.”

One respondent, a program manager, suggests that salary measurements are too short-term. “Rankings,” he says, “often put too much emphasis and weight on financial gain and often within three months of graduation.” Other respondents pointed to students whose objectives, such as entrepreneurship, are not conducive to earning significantly higher salaries. Salary measurements, they say, subsume a number of factors that are intrinsically linked to the personal attributes of the student, rather than the input of the business school.

Stakeholders also highlight measurement issues with salaries, such as the performance of local economies or living costs, which may distort measurements of the value added from the program. As one business school professional put it, “The emphasis on criteria relating to income levels pre- and post-MBA disadvantages schools which, for example, have lots of students moving into self-employment or the third sector. The course might be an excellent personal and business education, but its true performance in relation to AMBA criteria might never be recognized due to the destinations of its graduates.”

Another B-school professional adds: “The majority of rankings are determined by financial data, and that depends upon the location of the respondents. A great program in a country that has a low purchasing power parity value is disadvantaged. Also, they don’t recognize programs that have a focus on areas such as innovation management or entrepreneurship, where managers are neither likely to, nor inclined to, (want) executive positions and thus won’t see the huge salary increases.”


There always will be questions about who is qualified to assess business schools. But the AMBA & BGA survey also found circumspection about the criteria being used in the assessments — and a lack of transparency in how those criteria were chosen. “Rankings are based on only a few criteria,” one survey respondent, a B-school decision maker, says. “But why those and not others? Who decides which criteria are the important ones?” Another B-school leader adds: “Criteria are not being made explicit and the rationale behind the choice of criteria is never debated. While most rankings try to assess the financial success of graduates, few are really looking into the relevance and the efficacy of the hard and soft skills being taught in the program.”

A significant number of MBA stakeholders are also unconvinced about the evidence-collection techniques employed by rankings agencies. “Sometimes, the methodologies of rankings are ambiguous, or the questions are open (to) interpretation, making it difficult to know whether schools are providing comparable answers,” one B-school leader says. Another adds: “Questionnaires and the response data being provided can be crafted in such a way as to skew the data to the ends of either the surveying organization or the responding school. A survey may not be asking the right questions to get to the true areas of excellence of a school and schools may have to manipulate their data to fit the needs of the surveying organization.”

Dawes, summarizing, says the perceived lack of transparency and resulting ambiguity could be associated with wider cynicism about the accuracy of MBA rankings. “Much of the feedback generated in the survey points to low trust in the way responses are collected,” he says, “with stakeholders suggesting that self-reporting of evidence could potentially lead to distorted results.

“Others highlight methodological issues, such as the quality of questionnaire design, skewed sampling of survey respondents, and inherent biases towards larger programs, which have a better opportunity to reach minimum response requirements. Other concerns relate to data verification processes, including the implementation of feedback back-checking and the opportunity for schools to manipulate data (for example, to retrospectively take advantage of currency fluctuations).”

For a breakdown of how B-school stakeholders believe rankings should be composed, see the full report here.