How Carnegie Mellon Is Putting AI To Work In Business


Willem-Jan van Hoeve is the senior associate dean for education at Carnegie Mellon University’s Tepper School of Business


An operations professor at Carnegie Mellon’s Tepper School of Business, Willem-Jan van Hoeve is also the senior associate dean of education. In that role, he is leading the initiative to embed artificial intelligence into Tepper’s DNA.

His research focuses on developing new methodology for mathematical optimization, data mining, and machine learning, with applications to network design, scheduling, vehicle routing, health care operations, and analytical marketing.

In this interview with Poets&Quants Founder John A. Byrne, Van Hoeve details how Tepper is using AI in research and in the classroom and the changes the technology will bring to business and society. The transcript has been edited for clarity.

John A. Byrne: Willem, let’s start our interview by examining the full implications that artificial intelligence poses for business. There’s been a lot of discussion and a lot of fear, but I wonder if you might put some perspective around these conversations and what you see happening in the years ahead.

Willem Van Hoeve: I think anytime there is something big going on, such as AI in this case, people are uncomfortable. Most people feel some fear because of the unknown. At the same time, we’ve been living with this already for a decade or so, with Siri in our Apple phones and so forth. I don’t think we take it for granted, but I think most people have embraced it.

That’s actually state-of-the-art AI technology right there: natural language processing and so forth. At first, these new technologies will be met with fear. As people get more comfortable with them, see the benefits, and realize the risks are not that bad, I think they will be happy to embrace them. I expect the same to happen in many business applications.

The impact and the risks are bigger than an individual product such as the Siri app on your phone, so it will take a little bit more time, I think, to fully embrace AI in society. But overall, the same trend will happen, and adoption of these technologies will go very quickly, much more quickly than it did with earlier technologies. That’s my prediction.

Byrne: Generally, when people look at the future of AI, they see it as either a grave challenge or a major opportunity. Let’s deal with the opportunity first. The opportunity is that it will eliminate a lot of drudge work and free us up to do more high-value, more fulfilling work. And the challenge is that it will result in major job loss. If you listen to the most cynical observers, Elon Musk among them, they would say that AI has the opportunity to tell humans what to do instead of humans telling machines what to do.
How do you size up the challenges and the opportunities?

Van Hoeve: I’m a positive person when it comes to the use of AI. I realize that many of the tasks currently done by humans could be done by machines or AI systems. I don’t necessarily see that as a bad thing. It has happened before, and I know it has had an economic impact on many people. It may be a phase that we as a society have to deal with. I expect politicians to take responsibility and put in place social responses from an employability and taxation point of view.

I don’t think that’s undoable. And it doesn’t have to happen overnight. I think it will be a gradual process. At the same time, I also see that the workforce is already changing. We need different skills now than we needed 10 or 20 years ago. We cannot pretend that this comes out of nowhere. So I believe we are already in a transition period. What we need is for people to take that seriously and help guide the transition in a safe and responsible way. I’m definitely an optimist on this.

Byrne: Obviously Carnegie Mellon and its business school have had a long history in management science and have taken that as the approach to teaching management and business. That makes me think you are well positioned to dive deep into the implications of AI and to teach people how to harness it for the positive good. Might you talk a little bit about your initiative and what kinds of things you’re doing?

Van Hoeve: For sure. Carnegie Mellon was actually one of the places on earth where AI was invented. Herb Simon, one of the founders of Carnegie Mellon University as an intellectual research environment, was also one of the founders of AI and of so many other fields as well. So we are well positioned, as the Tepper School but also with our other schools across campus, including computer science, engineering, and so forth, to put AI to work in business. As soon as new technology comes out, we typically embrace it as faculty in our research and then adopt it into the classroom.

Many of these things, like machine learning models or neural networks, have been in our courses for years. So this is not new for us; we are doing this almost by default, and it’s very natural for us to embrace these technologies. Concretely, we are using AI in, say, a marketing analytics course: in addition to traditional approaches such as predictive analytics or standard statistical models, we would also use machine learning-based models for similar tasks in marketing. We’ve been doing this for years.

What is now new, of course, are large language models, which are a whole other level of interaction with AI systems. We can now simulate almost a human in a text-based environment. You can have a ChatGPT-like simulation where you can create anything you want. I can do marketing research not with humans but with LLM-based (large language model) agents. I can create one market segment using LLM 1, then another market segment using LLM 2, and I would be able to see how the decisions I make as a manager would influence these different market segments.
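
To make the idea concrete, here is a minimal sketch of what such LLM-based market research agents could look like in code. It is illustrative only, not Tepper’s actual setup: it assumes the OpenAI Python client and an API key in the environment, and the model name, persona prompts, and survey question are all made up for the example.

```python
# Minimal sketch: simulating two market segments with LLM "agents".
# Assumes the OpenAI Python client (pip install openai) and an API key in
# OPENAI_API_KEY; model name and persona prompts are illustrative only.
from openai import OpenAI

client = OpenAI()

# Each "segment" is just a persona prompt that conditions the model's answers.
SEGMENTS = {
    "LLM 1 - budget-conscious students": (
        "You are a price-sensitive graduate student choosing a laptop."
    ),
    "LLM 2 - enterprise IT buyers": (
        "You are an IT procurement manager buying laptops for a large firm."
    ),
}

def ask_segment(persona: str, question: str) -> str:
    """Pose one survey question to a simulated respondent from a segment."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    question = "Would a 10% price increase change your purchase decision? Why?"
    for name, persona in SEGMENTS.items():
        print(f"--- {name} ---")
        print(ask_segment(persona, question))
```

In this pattern, a manager’s decision (here, a price change) is posed to each simulated segment, and the differences in the generated responses stand in for differences in how real segments might react.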

The drawback there is that these models are not as well understood as the traditional ones. That’s a research challenge that we are also very happy to take on. What we can apply safely and soundly, we will use in research and in the classroom. I think what we’re doing now is taking the latest and greatest of AI and pushing it in research as well as in education.

Byrne: I understand that in your AI initiative you’re working in collaboration with other schools and departments at Carnegie Mellon.

Van Hoeve: If you want to do it well, I think that is almost unavoidable and we really rely on our partners from across campus for their thought leadership and collaboration on this.

Byrne: I should point out that not all that long ago, the Tepper School opened a brand new building and it just happens to be smack in the center of Carnegie Mellon University. One of the goals of that new structure was to embrace the broader university in all of its expertise.

Van Hoeve: Absolutely. It is indeed physically located at the center of campus. So we have informal interactions with people much more so than when we were still on the edge of campus, so to speak.

Byrne: As the senior associate dean of education at Tepper, one of your core jobs is to basically embed AI tools into the curriculum for use by students and faculty. How do you do that?

Van Hoeve: I distinguish two sides of AI. First, AI is a tool that we can use as an educational tool, just as we are using Zoom right now. For example, we can use it for automated assessments or to create an environment where a student trains by interacting with an AI bot. The instructor can prompt the AI bot to respond in a certain way to trigger responses from the students. For example, it may play an employee with a problem, or perhaps there’s a conflict in a team. So that is AI as a tool used for educational purposes.

But of course, we’re also very much interested in the use of AI for business applications. What’s really interesting from an academic point of view is: what courses can we create? What’s new? Where is it being adopted right now in industry? Which faculty are working on this in their research? And that is really, really nice. So we have the foundations. If we teach people statistics or optimization, now we’ll also teach them neural networks. That’s a box to check so they have the foundation for applying AI in marketing, operations, finance, and even accounting. You can use these latest AI tools to automatically scan financial statements and analyze that data, which can in turn be useful for accounting purposes or for financial risk analysis. Those courses are being developed, and we are running some very new courses this year and last year on these topics.

For example, one on language processing from an accounting point of view. That’s the application of AI to business domains. So the first element is foundations, the second is business domains, and the third is how we do this responsibly. What is the impact? What is the managerial view of using AI in business? We’re not just applying it, taking an off-the-shelf package and running it on an operations problem. We explore: what are we going to invest in? Where is the field going? What are the ethical considerations? We have a core course specifically on ethics and AI. I think it’s extremely important that our students are trained to apply AI responsibly, and that’s really a core choice we made for our curriculum. So those are the three elements through which I try to push AI into our educational curriculum: foundations, business domains, and the managerial aspects, including economic impact and ethics.
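
As a rough illustration of the accounting-oriented language processing he mentions (not the actual course material), a first exercise might simply flag risk-related language in a filing’s text before handing anything to a more sophisticated model. Everything in the sketch below, from the keyword list to the sample sentence, is hypothetical.

```python
# Toy illustration (not a production method): flagging risk-related language
# in a financial statement's text. A real exercise would use an LLM or a
# trained classifier instead of simple keyword counts.
import re
from collections import Counter

RISK_TERMS = ["impairment", "litigation", "default", "going concern",
              "restatement", "material weakness"]

def risk_profile(filing_text: str) -> Counter:
    """Count occurrences of each risk term (case-insensitive, whole words)."""
    counts = Counter()
    lowered = filing_text.lower()
    for term in RISK_TERMS:
        counts[term] = len(re.findall(rf"\b{re.escape(term)}\b", lowered))
    return counts

sample_filing = (
    "The company recorded a goodwill impairment charge and disclosed "
    "pending litigation; management identified a material weakness in "
    "internal controls."
)
print(risk_profile(sample_filing))
```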

Byrne: In addition to adding elective courses on AI, you’re also embedding this topic in more traditional courses, as you mentioned earlier with the marketing course, right? So you’re tackling this on multiple levels: you are embedding it throughout the entire curriculum, and you are giving students the choice to dive deeper into the subject through elective courses.

Van Hoeve: Many of our courses have already naturally adopted some of these technologies, but it would not be called out in the course title. It would probably be in the syllabus; it’s so natural for us that we have it basically everywhere to some extent. But there are some special electives that are really about this latest form of AI in business.

Byrne: Are you using experiential learning to teach AI applications to students?

Van Hoeve: Well, we have experiential learning in many of our courses, and it can take different forms. We have had, for example, experiential learning in the context of entrepreneurship, where we think about emerging technologies and their adoption in industry from that entrepreneurial lens. We have a capstone course that typically touches on those elements; it’s extremely experiential. The students don’t necessarily have to code themselves, although they do, but it’s a different type of experiential learning. We also have an elective course on programming in R and Python, where they use those skills to code AI tools. It’s almost like an engineering version of experiential learning. Some of our MBAs like that a lot because they really see it in action. And honestly, it’s very good practice for understanding exactly what a tool can do. Once you have an appreciation of that, you can understand what the limitations are in an actual application.
Not all our students are interested in coding, obviously, but many are. So that’s the other side of experiential learning that we at least offer to our students.

Byrne: What do you like the most about AI? Is there anything that you fear?

Van Hoeve: What I like most is that for decades after we invented management science in the 1950s, we used fairly rigid mathematical and analytical models to model the behavior of humans. The data we use is very structured, and the models are very strong; you can make all sorts of analyses, but they’re also very limited. We have a very simplistic view of the rational agents that model, say, consumers. The models now being used are very non-traditional in the sense that they are not analytical in the traditional sense. You cannot easily prove things about them, but they have something intangible, something almost human, so you can simulate things in a very different way than you could with traditional methods. I’m super excited about using this technology to do things we have never been able to do with quantitative methods: simulate agents that act like real humans rather than rational agents that are supposed to act like a human.

I don’t know where it will lead us, but we have already seen very nice examples of that. The challenge is that this is a very powerful technology. We have seen examples in the past of it being used for the wrong reasons or with unfortunate outcomes, for example in politics, influencing people with false information. That’s very dangerous, and there’s also a huge danger in the data that we draw everything from. Data can be biased almost by definition. So getting results that can be used responsibly is a big challenge. I’m not afraid of that; I think it’s a challenge we will just have to deal with.

It is, in my opinion, almost unavoidable, and we’ll be able to do it; there’s no reason why we cannot. With any new technology, there are pros and cons, but we are primed to deal with them. Our researchers are ready to take on those challenges, so I actually turn those challenges into opportunities, personally. I think it’s great to be at Tepper at this time.

Byrne: True. I think a lot of the early use of AI, at least by consumers and faculty for that matter, has been with ChatGPT, maybe Bard, and it has shown its limitations. There’s the interesting term “hallucinations” for when AI comes up with things that are completely erroneous or made up. I wonder what AI can’t do today that you think it will be able to do in three to five years?

Van Hoeve: Everything I say here is forward-looking. But we’re not going to give these techniques or tools to our students if we don’t trust them, and if we don’t, we tell the students why. So you’re absolutely right: certain things are currently out of reach for these AI tools, especially simple things like calculus-style reasoning. Simple calculations are sometimes surprisingly missed by these tools. ChatGPT-style tools have no logical reasoning built in natively, but I know that some teams are working on that, and I’ve seen some examples of ChatGPT-like systems that are much better at handling this. If I use ChatGPT to build an optimization model that would provide the best solution to a business problem, it typically gives me garbage. It only knows language; it doesn’t know calculus or anything like that. There are other intelligent systems, not necessarily language-based, that can be based on very sophisticated mathematical models for optimization or statistics. By embedding them into those ChatGPT-style tools, there will be a mechanism to give you what you want. Now, that has to be curated, has to be validated, and so forth. But I think that is what will happen in the next five years.
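
A minimal sketch of the division of labor he describes, under a hypothetical product-mix scenario: the language model would only translate the business question into the structured inputs below, while the actual optimization is delegated to a conventional solver whose answer can be validated. The numbers are made up, and SciPy’s linear programming routine is used purely as an example of such a back end.

```python
# Sketch: the LLM front end would produce these structured inputs; the math
# itself is handled by a solver, not by the language model. Requires SciPy.
from scipy.optimize import linprog

def solve_product_mix(profit, labor_hours, material, labor_cap, material_cap):
    """Maximize profit subject to labor and material capacity constraints."""
    c = [-p for p in profit]            # linprog minimizes, so negate profits
    A_ub = [labor_hours, material]      # resource use per unit of each product
    b_ub = [labor_cap, material_cap]    # available capacity
    result = linprog(c, A_ub=A_ub, b_ub=b_ub,
                     bounds=[(0, None)] * len(profit), method="highs")
    return result.x, -result.fun

# Hypothetical two-product example: per-unit profits and resource usage.
quantities, best_profit = solve_product_mix(
    profit=[40, 30], labor_hours=[2, 1], material=[1, 2],
    labor_cap=100, material_cap=80)
print("Optimal quantities:", quantities, "Profit:", best_profit)
```

The curation and validation he mentions would sit around this hand-off: checking that the model the LLM proposed actually reflects the business question before trusting the solver’s answer.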

Byrne: Will AI have staying power? In other words, if schools do not address this topic in a serious and thoughtful way, do they run the risk of becoming irrelevant?

Van Hoeve: I think so, but I still think you don’t always have to use AI, obviously, to be a successful manager. Quantitative methods have been very, very important in business. Do you really need to be a quant whiz or a mathematician to make use of them and to manage people and industries well? I don’t think so. You can also do without it. It’s a bit odd for me to say that at a very quantitatively oriented school, but I recognize there are other MBA programs in general management that are very successful. I think there’s a place for everyone. Similarly, I can imagine not all MBA programs need to be so heavily involved in AI, but I cannot imagine a world where they completely ignore it. They will obviously have to embed it at some point via their educational tools, because it’s just available to them. And the degree to which they integrate it into their curriculum will depend on what their MBA program is meant to do. We’ll see how it pans out, right? I think our MBAs will be trained well with these tools, and they will make changes in the world. I think they will take advantage of this technology for a long time in their careers. So I think most MBA programs will have to adopt some form of AI.

Byrne: Well, Willem, thank you so much for your time. I really appreciated the conversation. I think we covered a lot of ground and got some really great insights.

DON’T MISS: THE P&Q INTERVIEW: CARNEGIE MELLON TEPPER DEAN ISABELLE BAJEUX-BESNAINOU ON AI