Rating B-Schools For Societal Impact: Fundamental Questions About Rankings — And A Very Different Approach

In his August commentary “What if Business Schools Were Rated Instead of Ranked?” Poets&Quants Editor-in-Chief John Byrne discusses two fundamental questions concerning the structure and purpose of MBA rankings.

First, he asks whether rankings should not be replaced by ratings, similar to credit ratings: assuming a total of four groups of programs, the group of best-rated programs would receive a “AAA” and the group of lowest-rated programs a “BAA.” Programs would thereby be assigned a rating, not a ranking. The reason, he writes, is that in the existing rankings, “the programs are so closely clustered together that a program’s position is often statistically meaningless.” The consequences of the rankings may thus be severe even though their foundations are shaky.

Second, Byrne raises an even more fundamental question about the metrics used in the rankings, thereby addressing their underlying purpose: What should be measured in rankings, and who should be served by them? It is striking that this question is rarely addressed or clarified in the major rankings. Readers are usually left to work out for themselves what the rankings are all about, and why they are structured and presented the way they are.



Thomas Dyllick

In this article, I want to shed some light on these fundamental questions about rankings and present a new and very different model for an international business school rating, the Positive Impact Rating for Business Schools. The PIR has been launched for the fourth time in 2023 with 71 participating global business schools after it had been presented for the first time in 2020 at the World Economic Forum in Davos. (See Poets&Quants‘ coverage of the latest PIR.)

In discussing the fundamentals of rankings, I want to address first the purpose of rankings, the “what-for” question: What have they been created for? Then we will address the “how” question: How do they go about collecting and assessing their data? And how do they report the results? This sequence is important, because the what-for determines the how. Without clarifying the former, it is difficult to make sense of the latter.

Established rankings like The Financial Times‘ 2023 Master in Management ranking have typically been designed to serve students in selecting their school. The makers of the FT’s MiM ranking seem to assume that business school students have clear and dominant preferences for financial income and career success. By measuring the economic performance of alumni from different business schools, they identify the more or less promising schools, from which incoming students can then choose. We may thus conclude that the FT ranking has been created to serve students in selecting their business school, using mostly economic performance criteria such as financial income and career success.

In his August piece, Byrne refers to an essay, “Business school rankings must measure the societal value added,” published in The Financial Times by three prominent business school deans: Geoffrey Garrett of USC Marshall School of Business, Ann Harrison of UC-Berkeley Haas School of Business, and Andrew Karolyi of Cornell’s SC Johnson College of Business. They suggest a very different way of assessing the quality of a business school. “One important mission of business schools,” they write, “is to increase the upward socio-economic mobility of their students.” B-school rankings should therefore measure this “societal value added” in transforming the lives of the students they educate. Here we recognize a very different purpose for a ranking: not catering to students who focus on their financial and career success, but to schools that want to increase their societal impact by strengthening access and affordability for students from traditionally under-represented groups. The purpose here is defined very differently. It is not student financial and career success as measured by economic criteria, but school success as measured by societal impact; the three deans suggest a focus on students’ upward socioeconomic mobility. It should be obvious that such a ranking, and its results, will look very different from the FT ranking. This has become clearly visible in the PIR as well, which also addresses positive societal impact, though in a different way.


Different norms and values apply not only to the purpose of the ranking, the what-for question, but also to the methods used to measure the schools’ performance and to report the results: the how question. What do these implicit norms and values in the methodology look like?

In the case of the FT MiM ranking, two separate surveys are used to collect the data: one completed by alumni who finished their MiM program three years earlier, the second by the business schools. The ranking has 19 criteria. Alumni responses inform 8 criteria that together contribute 56% of the ranking’s total weight. The remaining 11 criteria are calculated from school data and account for 44% of the weight. Alumni data focus on salary, value for money, career progress, and aims achieved. School data are used to measure the diversity of teaching staff, students, and board members by gender and citizenship, as well as the international reach of the program. The weighted results are then positioned on a relative scale between the best and the poorest of all 100 schools ranked in the same year, and published as a ranking, with each school occupying a clear position that differentiates it from its competitors.

Critical questions need to be raised here. Why are schools not measured differently? For example, by teaching quality and learning success, by the contributions of students and alumni to innovation and development, or by societal value added? And why are the schools being compared within an ever more tightly knit community of highly ranked schools? John Byrne rightly asks: “Does every MBA league table have to include the same ten schools at the top? The heavyweights of the Dow Jones have changed greatly in the last 20 years. Why should MBA rankings be static — not from year to year but over a 5- to 10-year period?”

Byrne concludes that switching to ratings “may represent more fairly the small statistical differences among school ranks, but it would likely draw far fewer readers.” He may be right, but that cannot be a reason to continue with a system that misrepresents negligible differences between schools. Moreover, the system also forces homogeneity and competition among schools on the criteria chosen by the ranking, where diversity and cooperation may be much better for students, for the overall business school system, and for society.


Over the last quarter century, business schools have been the major success story in the world of higher education, at least when measured in terms of growing demand. However, business schools have been heavily criticized by Rakesh Khurana for having successively become the objects of a “tyranny of the faculty” and a “tyranny of the markets,” at the expense of a professional orientation and a contribution to society. Today, there are growing societal pressures for business to take the lead in finding solutions to big societal problems such as the climate crisis, resource shortages, migration, and poverty. As this pressure mounts, Peter Tufano, dean at the University of Oxford’s Saïd Business School, acknowledged in a Harvard Business Review article that “the traditional business school model is looking dated. The pace of change in business schools is far slower than in business, with the result that MBAs are increasingly less well prepared for the complex challenges of leading companies.” To keep their relevance and remain a “force for good” for the coming decades, business schools need to orient their teaching and research accordingly.

Meanwhile, societal impact and sustainability have become increasingly influential topics across the business school landscape. This is being supported by international institutions like PRME, a UN-supported initiative with over 800 signatory business schools worldwide engaged in transforming management education; NBS, the Network for Business Sustainability, a research network dedicated to making business school research more practically relevant and business more sustainable; and RRBM, an organization of scholars and B-schools in pursuit of providing “Science for Better Business and a Better World.” But, more importantly, this transformation is also supported by international accreditations like EQUIS and AACSB, where topics such as ethics, sustainability, and societal impact have made their way into the revised business school standards.

In 2013, EQUIS established criteria for integrating ethics, responsibility, and sustainability transversally into business school management. The new standards demand that ethical, responsible, and sustainable behavior be made an integral part of a business school’s strategy and governance, and be reflected in its regular research, teaching, continuing education, and service activities. AACSB published its revised Standards for Business Accreditation in 2020. These set the expectation that all accredited schools demonstrate their “societal impact,” reflected across all core elements of the standards, ranging from the school’s mission and strategic plan to its curriculum and intellectual contributions, and covering issues of policy as well as practice. A new Standard 9 addresses the school’s societal engagement and impact directly: “(It) demonstrates positive societal impact through internal and external initiatives and/or activities, consistent with the school’s mission, strategies, and expected outcomes.”


The Positive Impact Rating for Business Schools builds on these changing demands to integrate societal impact and sustainability into business schools and the education they offer. Many students care deeply about making a positive difference through their professional lives, yet they do not necessarily know how to find the right business school to prepare them. The PIR has been designed as a tool for this next generation of change agents and as a response to widespread demands for business schools to have a positive societal impact. Its focus is on the school, not on a particular program. Its overall intention is “to enable business schools to move from being the best in the world to becoming the best for the world.”

The PIR responds to a very different vision of the business school’s purpose. Which schools are the true leaders in creating societal impact and are ready to develop students as change agents? Which schools have effective study programs in place and employ state-of-the art learning methods? Which schools effectively walk their talk in creating impact? 

PIR assessments are determined through surveys of an often neglected but powerful group of stakeholders: the school’s own students. It is designed as a rating conducted by students and for students. Students matter as important current stakeholders of business schools, but even more as the next generation, which will inherit the world.

The PIR measures the school’s positive impact along seven dimensions. It asks how the school energizes itself to move ahead through its governance and culture. It measures the school’s educational role in preparing students to become responsible leaders through the programs offered, the learning methods applied, and the support given to students to engage in society. And it measures the school’s active engagement in earning the trust of students and society, as well as its status as a respected public citizen. At the end of the survey, which comprises twenty closed questions, students are asked two open questions: What do you want your school to STOP and START doing to improve its impact? The schools have learned a great deal from this direct feedback.

Rankings position business schools in a highly differentiated league table, thereby pitting one school against another. This creates competition between schools, much like in a soccer or football league. Naturally, the participating schools try everything to end up as leaders, thereby “beating” their competitors. At the same time, rankings are criticized for creating differences between schools that are too small to be meaningful. Unfortunately, rankings therefore discourage rather than support cooperative and collective action between schools: there is no reason to cooperate with a competitor. As a rating, the PIR follows a very different philosophy. Ratings position schools in categories. The PIR places schools on five levels according to their overall scores, and schools are listed alphabetically within each level, not in order of their performance, to further reduce the sense of competition. The PIR publishes only the schools on the top three levels, purposefully reinforcing those that are succeeding in their transformation rather than shaming those that are not (yet) there.

Established rankings assess schools relative to each other, with the best and poorest performing participating schools defining the range. The PIR compares all schools against an absolute ideal, a top rating in the eyes of their students, thereby showing the potential for improvement even for leading schools. The PIR is designed as a tool for improvement and transformation, and classifying schools into groups gives participants some protection.

Traditional rankings serve a single purpose: to measure and rank business schools against each other. The PIR is different, and it is unique. It is designed to serve a dual purpose: first, it rates business schools; second, it serves as a tool for continuous societal impact measurement and improvement. This fits the demands of the global accreditations. Several practical features make the PIR uniquely useful. It offers every participating school a personalized dashboard featuring the survey results in full detail and transparency. A two-page snapshot of the school’s results can be downloaded from the dashboard to communicate them easily, and the school’s data can be downloaded as an Excel file for further detailed analysis.

New features integrated into the PIR survey in 2023 are the options to sign up for “AACSB-compatible questions” and “school-specific questions.” The former comprise four pre-defined questions related to AACSB Standard 9 on “Engagement and Societal Impact”; the latter are four open questions defined by the interested schools themselves. Schools can use the answers to both to report to AACSB, EQUIS, or PRME on their impact as perceived by their students. They offer an individualization of the survey, a feature the schools had asked for. For the 2024 edition, the PIR is offering a new feature: an evaluation of the participating school’s PRME-related profile.

Thomas Dyllick is a member of the Supervisory Board of the Positive Impact Rating Association and a co-founder of the PIR. He is a professor emeritus at the University of St. Gallen in Switzerland.
