When Algorithms Decide: Are Business Schools Preparing Leaders Or Training Managers To Obey Machines?

by Dr. Raul Villamarín Rodríguez & Dr. Hemachandran K, Woxsen University | March 10, 2026

In many MBA classrooms, decision-making is still taught using the same familiar tools: case discussions, spreadsheets, and frameworks for evaluating risk and return. In the second year, a course on analytics or AI is added. Students learn how models can classify customers, predict churn, and price loans more accurately than any individual manager. The message is clear: algorithms are now part of serious business processes.

What is less openly discussed is the quiet shift in who, or what, is actually deciding. A risk model indicates that a customer should be dropped. A recommendation engine says that this product should be pushed to that segment. The fraud system flags a transaction to be blocked. A manager, often an MBA graduate, clicks "approve" because the model is 94% accurate and the dashboard is green. The question that should trouble business schools is simple: in these moments, are they producing leaders who make decisions or managers who obey machines?

A NEW KIND OF AUTHORITY IN THE ROOM

In earlier generations, decision-making authority was embodied in titles and people: the regional head, the credit committee, the senior partner. Their judgment was not perfect, but it was visible and came with a name attached. Today, in many organizations, authority flows through models that are deeply embedded in platforms and dashboards. The people building these systems may sit in a different country; the people using them may never see the training data, the objective functions, or the assumptions that went into tuning them.

For frontline managers, the experience is deceptively simple. The dashboard lights up with a score or recommendation. Green means go. Red means stop. Amber means escalate. This is what can be called algorithmic authority: a situation in which the output of a model becomes the default decision and human judgment is reduced to confirming or executing what the machine proposes.

In consumer finance, credit approvals and pricing are heavily mediated by automated scoring. In HR, screening tools decide which résumés are seen and which never leave the database. In marketing, bidding and personalization engines choose which customers see which messages. In supply chains, forecasting and routing algorithms shape production and delivery schedules.

None of these are inherently bad. In fact, many of these systems reduce errors, reveal hidden patterns, and save time and money when used appropriately. The problem is not that algorithms are powerful; it is that, in practice, they are increasingly treated as if they were neutral and unquestionable.
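To make the pattern concrete, here is a minimal, hypothetical sketch of what algorithmic authority often looks like in code: the model's score sets the default action, and human judgment is invoked only by exception. The thresholds, labels, and function name are illustrative assumptions, not any specific vendor's system.

```python
# Hypothetical triage logic behind a "green / amber / red" dashboard.
# Thresholds and names are invented for illustration.

def route_decision(model_score: float) -> str:
    """Map a model score to the action a frontline manager sees."""
    if model_score >= 0.90:
        return "auto-approve"   # green: no human judgment involved
    if model_score >= 0.60:
        return "escalate"       # amber: a human reviews the exception
    return "auto-reject"        # red: again, no human in the loop

# In this design, a person only ever sees the amber cases. For the
# green and red majority, the model's output *is* the decision.
for score in (0.97, 0.72, 0.41):
    print(score, "->", route_decision(score))
```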
HOW BUSINESS SCHOOLS ARE RACING TO ADD AI

Business schools have not stood still. Globally, MBA programs are redesigning their curricula to respond to the AI moment. Poets&Quants has chronicled how AI is reshaping the MBA, from core courses that now include analytics and machine learning to new electives on AI strategy and product management. Schools are building AI labs, launching specialist master's programs, and weaving AI content into finance, marketing, and operations courses.

A recent Poets&Quants report highlighted how IE Business School won a "Best in Class" award for artificial intelligence, with AI taught across programs and embedded in real projects. Separate coverage has tracked the rise of AI concentrations at leading U.S. and European business schools, often marketed as the path to "future-ready leadership." Panels and video series on "The MBA in an AI World" now feature deans and professors explaining how they are preparing graduates to work alongside intelligent systems.

At the same time, MBA students are asking for more. Survey work reported by Poets&Quants notes that many students want AI in the core – not just as an elective – and feel that their programs are not yet delivering enough depth or integration.

On the surface, this is encouraging. It means schools recognize that AI is not a niche topic; it is part of how marketing, finance, and operations actually work. However, there is a risk hidden under this enthusiasm. In the rush to teach AI as a skill, schools may underinvest in AI as a responsibility.

SKILLS WITHOUT SOVEREIGNTY

Most AI-related offerings in business schools fall into three categories: courses that show students how models work at a conceptual level (supervised vs. unsupervised learning, training and test data, basic metrics); tool-focused workshops where students learn to prompt large language models, run experiments with analytics software, or build simple dashboards; and strategy and innovation electives on what AI makes possible in business models and industries.

All three are valuable. Graduates who do not understand how AI systems work – or what they can and cannot do – will be at a disadvantage in almost every sector. However, understanding how a model works is not the same as understanding when to resist it. In many programs, there is still relatively little structured practice in saying: "The model recommends this, but here is why we will not follow it." There are even fewer graded experiences in which a student is rewarded not for using AI more aggressively but for defining acceptable boundaries and trade-offs.

The danger is the emergence of a new archetype: the data-literate manager who can speak fluently about accuracy, lift, and A/B tests but quietly treats the model as the real decision-maker. When things go well, the numbers take the credit. When things go badly, the model takes the blame. In this script, leadership has been quietly downgraded to model supervision by exception.

LEADERSHIP ABDICATION IN PRACTICE

To see how easily this can happen, consider three situations that are already common in many industries.

1. The hiring filter that never meets the candidate

An AI-based screening system is trained on historical hiring data to predict "success" in a given role. It learns that candidates from certain institutions, regions, or career paths have higher retention or performance scores because the firm hired more of them in the past. When the next hiring cycle begins, the model gives a low score to applicants from non-traditional backgrounds. The recruiter sees a long list of "recommended" profiles at the top of the dashboard and a much shorter list of "outliers" below it. The shortlisting process is completed within minutes.

No one explicitly decided to filter out certain groups. However, in practice, the system has done exactly that. All it took was a silent slide from "the model helps us" to "the model decides for us."
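A minimal sketch can make this mechanism visible. The following Python example, built entirely on synthetic data with invented numbers, assumes past "success" labels were nudged upward for traditional-background hires; a standard model then learns to score an equally skilled non-traditional candidate lower. It illustrates the general phenomenon, not any real firm's system.

```python
# Hypothetical sketch: a screening model inheriting bias from its labels.
# All data is synthetic; the 0.8 "bias bonus" is an illustrative assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
traditional = rng.integers(0, 2, n)   # 0/1 background flag
skill = rng.normal(0, 1, n)           # true, background-neutral ability

# Historical "success" ratings: driven by skill, but inflated for
# traditional-background hires. This is the bias the model will learn.
success = (skill + 0.8 * traditional + rng.normal(0, 1, n)) > 0.5

X = np.column_stack([traditional, skill])
model = LogisticRegression().fit(X, success)

# Two equally skilled applicants, differing only in background:
candidates = np.array([[1, 0.0],   # traditional background
                       [0, 0.0]])  # non-traditional background
print(model.predict_proba(candidates)[:, 1])  # first scores visibly higher
```

Nothing in the pipeline flags this as a problem: the model is "accurate" on its own data, and the dashboard simply ranks the candidates it was taught to prefer.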
2. The pricing engine that punishes the vulnerable

Dynamic pricing algorithms can optimize revenue by charging higher prices when demand is strong and lowering them when demand weakens. If left unchecked, they may also learn that certain neighborhoods, times, or customer profiles are more tolerant of higher prices than others. From the dashboard's perspective, this is revenue optimization. From a citizen's perspective, it may appear as systematic overcharging in the places that can least afford it.

Again, the key question is not whether the model is technically accurate. It is whether any human in the chain is equipped – and empowered – to ask, "Is this acceptable?" and insist on constraints that reflect the organization's values.
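One way to "insist on constraints" is to write them directly into the pricing logic. The sketch below is a hypothetical illustration, assuming a simple demand-based multiplier and a hard surge cap; the base price, multiplier, and cap are invented numbers standing in for a leadership decision.

```python
# Hypothetical dynamic pricing with an explicit, human-set guardrail.
# The demand multiplier and the 25% cap are illustrative assumptions.

BASE_PRICE = 100.0
MAX_SURGE = 1.25   # a value judgment: no customer pays >25% above base

def optimized_price(demand_index: float) -> float:
    """Unconstrained revenue optimization: price scales with demand."""
    return BASE_PRICE * (1.0 + 0.5 * demand_index)

def governed_price(demand_index: float) -> float:
    """Same model output, but bounded by the organization's stated limit."""
    return min(optimized_price(demand_index), BASE_PRICE * MAX_SURGE)

for demand in (0.1, 0.6, 1.0):
    print(f"demand={demand:.1f}  raw={optimized_price(demand):7.2f}  "
          f"capped={governed_price(demand):7.2f}")
```

The point of the cap is not technical; it encodes a value judgment that someone with a name and a title has chosen to own.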
3. The retention model that erodes trust

Customer churn models are designed to predict who is likely to leave and to trigger retention efforts. When combined with generative AI for message drafting and targeting, they can deploy a steady stream of personalized messages. In isolation, each message is harmless. However, as the system scales, customers may feel surveilled, pressured, or manipulated. What began as a rational attempt to reduce churn quietly erodes long-term trust.

In all three cases, the immediate business logic was compelling. The risk-adjusted numbers often look better with the model than without it. This is precisely why these are leadership problems rather than purely technical ones. Leadership abdication occurs when no one in the room feels responsible for defining the limits of what the model is allowed to do.

WHAT AI-ERA LEADERSHIP SHOULD REALLY MEAN

If AI is now part of how decisions are made, it must also be part of how leadership is defined. For MBA programs, this requires a subtle but important shift of emphasis in the curriculum. Instead of asking only, "Can our graduates work with AI?" schools must also ask, "Can our graduates set the terms on which AI is allowed to operate in their organizations?" Three capabilities, in particular, deserve to be treated as core learning outcomes.

1. Interrogating, not just interpreting, models

Interpreting a model means understanding what a score or recommendation says. Interrogating a model means asking deeper questions: What data were used to train this system, and from which period and geography? What objective was optimized – short-term profit, long-term value, risk reduction, or something else? Which types of errors are considered acceptable, and which are treated as intolerable? Who is most likely to be harmed if the model is wrong in a particular direction?

These questions do not require MBA students to become data scientists. They require the confidence to challenge the framing of a problem and the humility to recognize that a technically elegant solution can still be ethically flawed. Case discussions, simulations, and projects can all be redesigned to make this interrogation explicit: students should not only present what "the data says" but also defend why that particular framing of the data is appropriate.

2. Practicing ethical override

An ethical override is a conscious decision to reject or modify a model's recommendation because it conflicts with an organization's values, stakeholder commitments, or long-term trust. In medicine, it is taken for granted that a doctor can override a protocol when they judge that a particular patient does not fit the pattern. In aviation, pilots are trained for the rare scenarios in which the right decision is to turn off the autopilot.

In business education, however, there are still relatively few structured opportunities where students must say, "We will not follow the model here – and here is the reasoning." Ethics modules and workshops exist, and interest in AI ethics has grown, but they are often siloed from the quantitative courses where models are built, tuned, and celebrated. The result is a split curriculum: numbers over here, values over there. A more integrated approach would bring ethical overrides into finance, marketing, and operations classrooms. Students can be graded not only on the financial impact of their decisions but also on how they balance model outputs against considerations of fairness, dignity, and societal impact.

3. Owning system-level accountability

AI systems blur the lines of responsibility. Engineers design them, data teams feed them, vendors maintain them, and managers use them to make decisions. When something goes wrong, it is tempting for everyone involved to say, "The problem is somewhere else in the stack." For senior leaders, this is not an acceptable response.

An MBA curriculum that takes AI seriously must therefore include the basics of AI governance: data provenance, documentation of model assumptions, monitoring for drift, escalation paths for unexpected behavior, and clear statements of who is accountable for what. Graduates should be able to sit in a meeting about deploying an AI system and ask: "Who signs off on this? Who is informed when the system behaves unexpectedly? How will affected customers and employees raise concerns? What would count as a red line?" These questions do not slow down business; they prevent the kinds of failures that destroy value and reputation.
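Those governance questions become enforceable only when the answers are written down. Here is a minimal, hypothetical sketch of a deployment record that captures them in code; every field name and value is an illustrative assumption, not an industry standard.

```python
# Hypothetical deployment record answering the governance questions above.
# Field names and all values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ModelGovernanceRecord:
    model_name: str
    data_provenance: str       # where the training data came from
    objective: str             # what the model actually optimizes
    accountable_owner: str     # who signs off on deployment
    escalation_path: str       # who is told when behavior is unexpected
    red_lines: list[str] = field(default_factory=list)  # hard limits

record = ModelGovernanceRecord(
    model_name="churn_retention_v2",
    data_provenance="2023-2025 CRM exports, EU and US customers",
    objective="minimize 90-day churn, subject to contact-frequency caps",
    accountable_owner="VP Customer Experience",
    escalation_path="model-risk committee, within 24 hours",
    red_lines=["no targeting by protected attributes",
               "max 2 retention messages per customer per week"],
)
print(record.accountable_owner, "|", record.red_lines[0])
```

The value of such a record is less the code than the conversation it forces: someone must put their name in the accountable_owner field before the system goes live.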
A NEW MANDATE FOR BUSINESS SCHOOLS

If today's MBAs are to be in charge of AI-enabled organizations, business schools themselves carry a new mandate. They must move from treating AI as a set of tools that students should master to treating AI as an environment in which graduates will be morally and strategically tested.

Some responses are already visible. Institutions are being recognized for ambitious AI strategies that cut across programs. AI appears not only in electives but also in core analytics, strategy, and technology courses. Dedicated concentrations and tracks signal to employers that graduates can navigate AI-heavy roles.

The next step is to redesign leadership education around ownership of algorithmic decisions. That could mean running simulations in which teams face realistic AI-generated recommendations and must decide not only whether to follow them but also under what conditions they would refuse; creating joint modules between analytics and ethics centers so that every model-building exercise is accompanied by a model-questioning exercise; and bringing in interdisciplinary voices – from law, philosophy, sociology, and public policy – to help students see AI decisions as part of a broader social fabric, not just a firm-level optimization problem.

MBA students themselves signal that they want this depth. In surveys, many say they expect AI to be part of core management education and are looking for programs that can deliver both technical fluency and ethical grounding. Business schools that respond only to the skills dimension may produce graduates who can do impressive work with AI tools. Business schools that respond to the responsibility dimension will produce graduates who can be trusted with the systems that shape entire markets and communities.

TWO KINDS OF MBAs IN AN ALGORITHMIC AGE

Imagine two graduates, ten years from now, both leading teams in AI-heavy organizations.

The first is fluent with tools. When faced with a complex decision, they open the dashboard, review the latest model output, and ask, "What does the system say we should do?" They are efficient, quick, and rarely deviate from the recommended path.

The second is equally fluent, but their first instinct is different. They ask, "What problem did we ask this model to solve? Whose perspective shaped this objective? What might we be missing if we follow this recommendation blindly?" When necessary, they are willing to slow the process down, bring more voices into the room, and accept responsibility for going against the model's recommendation.

Both MBAs will list "AI-enabled decision-making" on their résumés. Only one will truly lead. Right now, business schools are quietly choosing which of these archetypes to send into boardrooms, startups, banks, and ministries. The choice is not only about adding more AI content; it is about the deeper question of what it means to teach leadership in a world where algorithms increasingly frame the options.

When algorithms decide, the real test of an MBA is not how fast they can follow or how many tools they can use. It is whether they have the judgment – and the courage – to know when the most important decision is to say, "Not this time."

Dr. Raul Villamarín Rodríguez is vice president of Woxsen University in Hyderabad, India. Dr. Hemachandran K is director of Woxsen's AI Research Centre.

© Copyright 2026 Poets & Quants. All rights reserved. This article may not be republished, rewritten or otherwise distributed without written permission.