AI Is Disrupting Entrepreneurship. Business Schools Must Learn Faster

By Levi Belnap, Nate Sharp & Shrihari Sridhar, Mays Business School, Texas A&M University | May 13, 2026 | 9 minute read

Hundreds of business schools and universities run student venture competitions every year. Collectively, they award millions of dollars in prize money, draw thousands of student teams, and absorb enormous faculty, staff, and mentor time. For decades, the model has been remarkably stable: write the plan, polish the pitch, present with confidence, and win. AI should make us ask whether that model still rewards what students most need to learn.

Last year, our own competition looked successful by conventional measures. Students wrote plans, built presentations, stood on stage, and made compelling cases for ambitious ventures. But afterward, we wondered whether we had placed too much weight on how convincingly students could describe a venture and not enough on how quickly and honestly they could test what mattered most.

That question matters more now because artificial intelligence is changing the speed at which ideas can be tested. A student with a laptop can now research a market, synthesize feedback, build a prototype, analyze competitors, write code, model pricing, and revise a product concept in hours instead of weeks.

AI SHOULD CHANGE WHAT COMPETITIONS REWARD

For a long time, student founders often had to persuade others before they could act. They needed money, technical help, research support, design capacity, or access to specialized tools before they could test much of what they believed. That made the pitch disproportionately important. AI does not eliminate the need to persuade. Founders will always need to communicate clearly and earn trust. But AI lowers the cost of action.
More students can now test parts of an idea before they ask others to believe in it. The scarce skill is no longer only producing a polished plan. The scarce skill is knowing which assumptions matter, testing them honestly, learning from evidence, and changing direction before too much time and money are wasted.

That was the insight behind the AI Venture Velocity Challenge at Mays Business School at Texas A&M University. We took a traditional student business plan competition and rebuilt it around learning velocity. Instead of asking students to spend weeks producing a long business plan, we asked for a short Venture Snapshot: What problem are you solving? Why might your solution be different? What are the critical assumptions that must be true for this venture to work? How will you test them? And how will you use AI to learn faster than would have been possible before?

Click here to see a public interactive map and summary of the 531 applications from 160 institutions in the AI Venture Velocity Challenge, including geography, institution type, and idea categories.

FROM BUSINESS PLAN TO LEARNING VELOCITY

Then we made the challenge open to undergraduate and graduate students at any U.S. degree-granting institution: research universities, regional campuses, liberal arts colleges, community colleges, technical colleges, and everything in between. That choice was intentional. AI does not make all students equal. It does not erase differences in networks, capital, mentorship, or prior experience. But it does reduce the cost of trying. It gives more students the ability to research, prototype, analyze, and iterate in ways that previously required more time, money, or technical support.

We had about 50 days to redesign the challenge, explain it, and invite students across the country. In the inaugural year of this redesigned challenge, 531 student teams applied from 160 institutions across the United States.
The response did not come only from the schools one might expect to dominate student entrepreneurship or from institutions most often associated with elite computer science or coastal startup ecosystems. Applications came from large research universities, regional campuses, liberal arts colleges, master's-focused institutions, community colleges, and technical colleges across the country.

AI LOWERS THE COST OF TRYING

The ideas were just as varied, and often more practical than the public conversation about AI would suggest. Students were working on healthcare workflows, agricultural productivity, robotics, manufacturing, energy, education, financial services, consumer products, and AI-native software. Many teams are not simply building "AI companies." They are using AI to move faster on problems they already care about.

The next phase is not a pitch round. It is an experiment round. Every team is eligible to submit structured learning logs over the next two months. Each log asks what assumption they tested, why that assumption mattered, how they used AI, what evidence they collected, what they learned, and what they will do next. The top teams will not be selected because their original idea sounded compelling. They will be selected because they show disciplined progress: testing the right things, learning from real evidence, and using AI to move faster without confusing activity for validation.

WHEN ACTION GETS CHEAPER, JUDGMENT MATTERS MORE

The obvious objection is that AI makes it easy to fake learning. A team can ask a chatbot to generate a plausible experiment log, manufacture customer interview quotes, and present synthesized output as field evidence. We know this. The challenge is built around it. Teams are evaluated on the specificity of their assumptions, the realism of their evidence, and the coherence between what they claim to have tested and what they show they have learned.
Whether that judging discipline holds up is itself part of what we are testing.

When AI makes it easier to build quickly, planning does not matter less. It may matter more. Students need to become better at deciding what is worth building, what assumption should be tested first, and what evidence would cause them to stop, pivot, or admit that a beloved idea is wrong.

At the end, finalist teams will come to College Station, Texas. They will present, but we do not want theater. We want the journey: what they believed at the beginning, what they tested, what changed, what AI made possible, and where human judgment still mattered most.

WHAT AI CAN – AND CAN'T – TEACH

We are also watching the process itself. Because teams submit learning logs over time, we can see not only what they build, but how their thinking changes: which assumptions they test first, how quickly they revise their beliefs, when AI accelerates real learning, and when it merely produces more activity. Those patterns may offer an early view of what entrepreneurship looks like when build-test-learn cycles become dramatically faster.

The goal is not to teach students that AI replaces judgment. It is to teach them that judgment becomes more important when AI makes action cheaper. When students can generate a market analysis in minutes, they need to know whether the analysis is any good. When they can build a prototype overnight, they need to know whether it tests the right assumption. When they can create customer personas, landing pages, financial models, code, surveys, and sales emails almost instantly, they need to learn the difference between output and evidence.

WHY THIS MATTERS BEYOND ENTREPRENEURSHIP

That distinction matters far beyond entrepreneurship. Students are entering a labor market being reshaped in real time. Employers will not simply need graduates who have heard about AI.
They will need people who can work with AI to ask better questions, test ideas, evaluate results, and act responsibly under uncertainty.

Entrepreneurship is a powerful training ground for that future because entrepreneurship makes uncertainty unavoidable. You cannot hide forever behind a slide deck. Someone has to care. Someone has to buy. Something has to work. Evidence eventually wins.

This does not mean every student should become a founder. It means more students should practice founder-like learning. They should learn how to identify assumptions, test them quickly, talk to customers, interpret weak signals, use technology responsibly, and revise their beliefs. They should learn that AI can accelerate both good thinking and bad thinking, depending on the human using it.

A TEST, NOT A FINISHED ANSWER

We are reluctant to present this challenge as a finished answer. It is not. It is a test. Some teams will use AI brilliantly. Some will mistake AI-generated output for real evidence. Some will move quickly in the wrong direction before learning how to slow down and ask better questions. Some will surprise us entirely.

Business schools cannot help students navigate the AI era by pretending we already know exactly what the future of work, entrepreneurship, and learning will look like. We do not. No one does. But we can build environments where students, faculty, employers, investors, and institutions learn together in public. We do not believe Texas A&M has all the answers. We do believe large public universities have a responsibility to ask better questions at scale.

THE PITCH IS NOT THE PROOF

What should business education reward now? What kinds of practice prepare students for an AI-shaped economy? How do we make sure students at community colleges, technical colleges, regional universities, and flagship institutions all have access to serious opportunities to build?
How do we teach students to use AI not as a shortcut around thinking, but as a force multiplier for disciplined learning? We are sharing this not because we have solved the problem, but because we think these questions are too important for any one school to answer alone.

A business plan, in this context, is not dead. It is a hypothesis. A pitch still matters, but it is not the proof. If our argument is right, the students who emerge strongest from this process will not simply have better ventures. They will be better at identifying which of their own assumptions are wrong, learning from evidence, and changing course before sunk cost makes them stubborn.

In this new moment, the students who learn fastest may be the ones who teach the rest of us what business education needs to become.

Levi Belnap is Executive Director of the Center for Applied Entrepreneurship and Innovation at Mays Business School. Nate Sharp is Dean of Mays Business School. Shrihari Sridhar is Senior Associate Dean and a professor of marketing at Mays.

© Copyright 2026 Poets & Quants. All rights reserved.