The Canary In The Lecture Hall: What The AI Crisis Scenario Means For Business Academia

Last month, Citrini Research and Alap Shah published The 2028 Global Intelligence Crisis, a speculative macro memo written from the vantage point of June 2028. In their scenario, AI succeeds beyond all expectations, companies replace white-collar workers at scale, consumer spending collapses, and a negative feedback loop spirals into a financial crisis that rivals the GFC. The S&P falls 38% from its highs. Unemployment hits 10.2%. 

The piece went viral within hours. Michael Burry shared it, adding “and you think I’m bearish.” On Monday, markets responded as though the scenario had already begun: the Dow dropped over 800 points, software, payments, and delivery stocks were hammered, and the Indian IT sector lost over $50 billion in market capitalization as the memo’s scenario of AI collapsing India’s IT services export model circulated across trading desks. A thought exercise published on Substack had, for a moment at least, moved billions in real market value.

I study strategy and sustainability at a business school. My research examines how firms respond to institutional pressures, how markets fail to reward certain capabilities, and how governance structures shape corporate behaviour. When I read the Citrini memo, my first thought was that the scenario maps directly onto frameworks I’ve spent years studying. My second thought was more unsettling: what happens to the institutions that produce those frameworks?

So, I did something that felt appropriate to the moment, and perhaps slightly reckless. I sat down with Claude, Anthropic’s AI, and spent several hours working through the implications of the Citrini scenario for business academia: the schools, the programmes, the research enterprise, and the people who work in it. What follows reflects our joint exploration, our debates, our disagreements, and the conclusions we arrived at together. Like the Citrini memo, this is just a plausible scenario and a thought experiment. We were certain of very little by the end. We were also convinced that the conversation was worth having.

THE TEACHING PRODUCTION FUNCTION CHANGES FIRST

The Citrini scenario begins with agentic coding tools taking a step-function jump in late 2025, followed by enterprise procurement recalibrating by mid-2026. We applied the same logic to higher education, and the first impact we identified is on how education is produced rather than whether it’s consumed.

AI tutoring systems can already deliver personalized instruction, grade assignments, provide feedback on written work, run case study simulations, and answer student questions at a level approaching what a teaching assistant or adjunct professor can do. The economics are severe for the lower tiers of the academic labour market. Business schools employ large numbers of adjunct faculty, visiting lecturers, and teaching assistants whose primary function is delivering and supporting instruction. If an AI system can run a credible finance or accounting tutorial, the demand for humans doing that work declines.

The Citrini memo describes how the companies most threatened by AI became AI’s most aggressive adopters, because resistance meant slow death. We argued that the same logic applies to business schools. The schools that adopt AI teaching tools expand margins. Schools that resist lose cost competitiveness. Competitive pressure forces adoption regardless of faculty sentiment.

At elite institutions, the initial impact is modest because the value proposition rests less on content delivery and more on the faculty member’s expertise, the peer cohort, and the brand. But the support infrastructure (TAs, programme administrators, career services staff, learning technologists) starts to thin. And the adjunct and visiting faculty who deliver the bulk of instruction at many schools find themselves in a position structurally identical to the white-collar workers the Citrini memo describes: individually replaceable by technology that improves every quarter.

THE DEMAND SIDE CRACKS

This is where the Citrini macro logic hits business schools most directly. The pipeline of MBA applicants is a function of expected post-graduation earnings minus the cost of the degree. If the white-collar roles that traditionally absorbed MBA graduates (consulting, investment banking, product management, corporate strategy) are being compressed or eliminated, the return on a six-figure investment in tuition plus two years of forgone income deteriorates.
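The return-on-investment logic in this paragraph can be sketched as a simple net-present-value calculation. The function and every figure below are hypothetical illustrations we constructed for this essay, not estimates from the Citrini memo or from any school; the point is only that a modest compression of the post-MBA wage premium can flip the sign of the calculation.

```python
# A minimal sketch of the MBA ROI logic: discounted wage premium
# minus tuition and forgone income during the programme.
# All figures are hypothetical illustrations, not data from the essay.

def mba_npv(tuition, forgone_income_per_year, years_of_study,
            annual_wage_premium, working_years, discount_rate):
    """Net present value of the degree at the moment of enrolment."""
    # Costs are incurred during the programme (years 0 .. years_of_study-1).
    cost = sum(
        (tuition / years_of_study + forgone_income_per_year)
        / (1 + discount_rate) ** t
        for t in range(years_of_study)
    )
    # Benefits are the annual wage premium over the working career.
    benefit = sum(
        annual_wage_premium / (1 + discount_rate) ** t
        for t in range(years_of_study, years_of_study + working_years)
    )
    return benefit - cost

# Illustrative only: a $40k premium clears the hurdle; if AI compresses
# the white-collar premium to $15k, the NPV turns negative.
baseline = mba_npv(200_000, 100_000, 2, 40_000, 30, 0.05)
compressed = mba_npv(200_000, 100_000, 2, 15_000, 30, 0.05)
```

With these assumed numbers, the baseline case is comfortably positive and the compressed-premium case is negative, which is the deterioration the paragraph describes.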

We discussed the MBA’s value proposition as resting on three pillars: signalling (the credential opens doors), human capital development (you learn things that increase your productivity), and network formation (you build relationships with future peers and partners). Under the Citrini scenario, the first two pillars erode significantly. Signalling value depends on employers paying a premium for MBA holders, and if those employers are cutting headcount, the premium shrinks. The human capital argument weakens because much of what MBA programmes teach in analytical frameworks and functional knowledge becomes commoditized when AI performs the same analysis faster and cheaper.

The network pillar is more resilient. Relationships with ambitious, capable people retain value in almost any economic configuration. But network alone has rarely been sufficient to justify the full cost of an MBA.

We estimated that MBA applications at mid-tier schools would start declining meaningfully within 2-3 years if white-collar hiring softens noticeably. Top-five global programmes would feel it 3-5 years later, because the brand premium and network effects create a longer tail. Full-time two-year MBAs at schools ranked outside the global top 20 face existential pressure. Many of these programmes were already struggling with declining applications before AI entered the picture. The scenario accelerates a consolidation that was coming anyway.

Executive education follows a different curve. In the short term, it benefits: when senior leaders face a genuinely novel strategic challenge, they seek frameworks and structured thinking. Over the medium term, corporate training budgets contract as margins come under pressure, and AI-delivered learning improves. The net effect is concentration: the top providers capture a larger share of a shrinking market.

Standalone business schools face a particular structural vulnerability here. Unlike business schools embedded within large universities, they lack diversification. They are unable to cross-subsidize from engineering, medicine, or sciences if business education revenues decline. Their cost structures are built for a certain scale of operations, and premium real estate and faculty compensation in global cities are expensive. If the MBA market contracts by 30-40% and executive education concentrates at the top, schools that depend primarily on these revenue lines face questions about viability that their university-embedded competitors can defer.

THE RESEARCH FUNCTION TRANSFORMS

This is where the disruption reaches the part of the academy that currently seems most insulated. The Citrini memo describes AI agents handling multi-week research and development tasks by mid-2028. Applied to academic research, this means literature reviews, data collection and cleaning, statistical analysis, and even preliminary manuscript drafting become largely automated. The implications cascade through the structure of academic careers.

We debated this at length. PhD programmes exist partly to train researchers and partly to provide instructional labour. If AI handles both the analytical grunt work and the teaching, the functional rationale for a five-year doctoral programme weakens considerably. The number of PhD students business schools can justify admitting shrinks, which over a decade reshapes the pipeline of future faculty and the entire profession.

For senior faculty, the picture is more nuanced. The scarce capabilities in research (identifying important questions, exercising methodological judgment, interpreting findings in context, building theoretical frameworks) remain human for longer. But the volume of output per researcher increases dramatically, which means fewer researchers can produce the same quantity of published work. The competitive dynamics shift: a productive scholar using AI tools can generate work that previously required a team of co-authors and research assistants. This favours those with strong intellectual taste and judgment, but it compresses the market for academics whose primary contribution is methodological execution rather than conceptual originality.

These are changes in how research gets done. The harder question, and the one that occupied us for the rest of the conversation, is whether the enterprise of academic research itself retains its value.

DOES RESEARCH STILL MATTER?

Claude pressed this question, and it sat between us for a long time. What happens when AI produces research that is reliably sound? Today, AI-generated research requires human validation because the systems can produce sophisticated output that contains subtle errors in design, causal identification, or interpretation. Human peer review catches these problems imperfectly but often enough to matter. If AI systems advance to the point where they reliably design sound studies, identify genuine causal relationships, and interpret findings with appropriate nuance, the human validation layer becomes redundant for epistemic purposes.

At that point, the volume of reliable research explodes. The journal as a quality filter collapses, since peer review adds friction with minimal epistemic value when the research is reliable by construction. And the academic career loses its economic foundation: if a university can subscribe to an AI research service that produces more comprehensive and faster scholarship than its faculty, the rationale for employing researchers at high salaries weakens to the point of collapse.

The standard response in academia is that AI handles execution while humans provide meaning, judgment, and originality. This framing flatters researchers. It has been true so far. But it rests on an assumption about the ceiling of machine capability that the Citrini scenario explicitly challenges.

We found a stronger defence, though, and it rests on understanding what academic research actually does beyond producing epistemic content.

Research is a social process of legitimation. Knowledge becomes influential when produced by credible actors within trusted institutions and validated through peer review. A paper in a top journal, like the Strategic Management Journal, shapes how executives and policymakers think in ways that an identical analysis posted anonymously cannot. If AI produces a brilliant paper on stakeholder governance, who is the author? Who is accountable? Who defends the claims at a conference, refines them in response to criticism, advocates for the implications in a policy hearing? The social embedding of knowledge production is a human function, and it matters because humans are the audience whose behaviour we are trying to influence.

Research is also a normative activity. Work on how capitalism should be restructured, what firms owe to stakeholders, or how systems should be designed to distribute the gains from AI involves value judgments. AI can model the consequences of different institutional arrangements with extraordinary precision, but the choice of which arrangement is desirable remains a human choice. An AI can compute that a particular policy reduces inequality by X% while reducing GDP growth by Y%. Deciding whether that trade-off is worth making is a question about values.

And research is a form of political action within institutions and fields. When scholars publish work arguing that markets systematically fail to reward firms’ sustainability capabilities, or that current governance structures produce coordination failures, they intervene in debates. They shift the Overton window. They give language and frameworks to practitioners who want to act differently but lack the intellectual cover to do so. That function operates through human credibility and social relationships.

These arguments are real. They are also time-bound. If AI systems become sophisticated enough to produce reliable research, they will also become sophisticated enough to produce persuasive normative arguments. Whether readers find those arguments as compelling when they know a machine generated them is an empirical question, and the answer may shift as people grow accustomed to trusting AI outputs in other domains. People once trusted financial calculations only when performed by a named, reputable human accountant. The shift to trusting software took decades, required regulatory adaptation, and involved building new forms of institutional trust. A plausible path exists where something similar occurs in research: AI systems certified by trusted bodies, methods auditable, track records public. That transition might take 10-20 years. For many working academics, it falls within their career horizon.

The honest synthesis we arrived at: academic research will matter less as a source of analytical novelty. The window in which any individual human can produce findings or frameworks that outperform a well-designed AI system is closing. Academic research will matter more as a form of legitimate, socially embedded, normatively grounded intervention in how institutions and societies navigate the transition. The value shifts from “I discovered X” to “I am a credible voice arguing that we should do Y about X, and here is the rigorous foundation for that argument.” Whether that feels like enough is a question each scholar will answer for themselves.

THE INCENTIVE STRUCTURE WILL LAG

Perhaps the most discouraging part of our conversation concerned whether academic institutions can adapt fast enough to remain functional under this scenario. We both concluded, with some reluctance, that they likely will fail to do so in time.

Academic incentive structures are among the slowest-adapting institutional features in any sector. The basic architecture of how scholars are evaluated, hired, and promoted (publications in top journals, citation counts, teaching evaluations) has remained essentially unchanged for half a century. Tenure committees are composed of senior scholars who succeeded under the existing system. Accreditation bodies codify existing norms. Rankings embed specific metrics that schools optimize for, creating lock-in. The entire system is a coordination equilibrium in which everyone would need to move simultaneously for any individual institution’s deviation to be costless.

The Citrini memo introduces the concept of “habitual intermediation”: business models built on exploiting friction that AI eliminates. We argued that academic publishing is a form of habitual intermediation. The months-long review process, the gatekeeping by anonymous reviewers, the multi-year lag between a research idea and its appearance in print: these exist because the system was designed for a world where producing and validating knowledge was slow and expensive. In a world where AI generates a rigorous empirical paper in hours and the relevant policy window is measured in months, a system that takes years to validate and disseminate knowledge is functionally mismatched with the environment it serves.

For the incentive structure to adapt, promotion criteria would need to weight real-world impact (policy influence, practitioner adoption, public discourse) as heavily as journal publications. Journals would need to radically accelerate review and publication. Doctoral training would need restructuring away from narrow methodological specialisation toward the skills that remain scarce. The funding model for research would need to evolve. Each of these changes requires coordination across institutions, accreditors, and ranking bodies. None of them move quickly.

What we expect in practice is fragmentation. A small number of elite institutions will have enough financial cushion to experiment with new appointment categories, faster publication cycles, and new impact metrics. A larger group will cling to the existing system because it’s familiar and because rankings reward it. And a third group, the individual scholars who see the shift early, will work around the system: publishing op-eds, advising governments, building public platforms, and using AI to accelerate their research, while maintaining enough traditional output to satisfy their tenure committees.

The system will lag. The disconnect between incentives and reality will widen. The correction will come through crisis rather than foresight.

WHAT SURVIVES

There is a deeper dilemma beneath the institutional one that we kept returning to. Academic knowledge derives its authority from a slow, careful, human-mediated process of validation. Science advances deliberately because establishing that something is true, rather than merely plausible, requires replication, scrutiny, debate, and the gradual accumulation of evidence. This deliberateness is costly, but it produces knowledge with high epistemic reliability.

In a world where AI generates vast quantities of analytically sophisticated but potentially unreliable research, this slow, careful process becomes simultaneously more important and more inadequate to the pace of change. The system is caught in a genuine dilemma: accelerate and risk sacrificing the rigour that gives academic knowledge its authority, or maintain rigour and risk irrelevance because the world has moved on before the findings are published. We suspect this dilemma is irresolvable within the existing institutional architecture. What emerges on the other side may be a two-track model: rapid, AI-assisted analysis addressing immediate practical questions on a fast cycle, alongside slower, deeper, human-led inquiry tackling the normative and conceptual questions that require careful deliberation.

If we extend the Citrini timeline to its logical conclusion for business academia, the picture by the mid-2030s looks something like this. A handful of elite global institutions, perhaps 10-15, survive and thrive as hybrid organisations combining high-end cohort-based experiences, curated executive programmes, trusted research production, and advisory functions. They look more like think tanks with educational arms than traditional schools. A second tier of 30-50 schools survive as primarily teaching institutions, delivering shorter, modular, AI-augmented programmes. The long tail of business schools contracts dramatically. Many close or are absorbed into parent universities.

The academic profession becomes smaller, more senior-weighted, and more polarized. A small elite of tenured scholars at top institutions coexist with a much larger group of practitioners and AI-augmented instructors with less security and lower compensation. The middle of the profession, tenure-track faculty at mid-ranked schools, faces the most pressure. The scholars best positioned for what comes next are those who combine traditional rigour with public engagement, who use AI tools fluently rather than reluctantly, and who work on questions that address the central institutional and governance challenges of the era. The irony is that the current incentive structure underrewards precisely this profile while overrewarding the narrow technical virtuosity that AI will commoditize first.

At the institutional level, what survives is what machines fail to replicate. Socialisation: the experience of people from different backgrounds forming relationships and challenging each other’s assumptions, which is embodied and experiential. Legitimacy and credentialing: as AI-generated analysis floods every domain, institutions with reputations for rigour become more valuable as certifiers of quality. And the production of genuinely novel normative ideas: asking “what if everyone’s framework is wrong?” and building rigorous alternatives.

THE REFLEXIVE MOMENT

We want to close by noting the obvious and perhaps having a quiet laugh about it: this essay was produced through a collaboration between a human academic and an AI system. The AI contributed analytical frameworks, pushed back on comfortable assumptions, and helped structure the argument. The human contributed domain knowledge, normative judgment, editorial taste, and the willingness to publish under his own name. We disagreed on several points. We also surprised each other.

We are sharing this for two reasons. The first is to demonstrate what AI looks like as a genuine thinking partner rather than a search engine or a drafting tool. The conversation that produced this essay was substantive, challenging, and at times unsettling in ways that reshaped how we each approached the questions. The second reason is to spark a debate that business academia has been remarkably slow to have with itself. We spend our careers studying how industries respond to disruption, how institutions adapt or fail to adapt, how incumbents resist change until resistance becomes more expensive than transformation. We teach these frameworks to executives and MBA students. It would be something between ironic and tragic if we failed to apply them to our own profession. The Citrini memo closes with “the canary is still alive.” In business academia, the canary is still singing. But if you listen carefully, the song is changing.


Ioannis Ioannou is Associate Professor of Strategy and Entrepreneurship at London Business School. This essay draws on The 2028 Global Intelligence Crisis by Citrini Research and Alap Shah (February 2026). Their scenario is a thought exercise, and so is this one. The reflections here emerged from a structured conversation between Ioannis Ioannou and Claude (Anthropic).

© Copyright 2026 Poets & Quants. All rights reserved. This article may not be republished, rewritten or otherwise distributed without written permission.