Looking At An AI MBA? Consider Ethics And Education In Collaborating With AI

by: Professor Nathan Colaner, Seattle University's Albers School of Business and Economics
September 16, 2024

The more artificial intelligence transforms society, the more economic anxiety it provokes. One way we try to comfort ourselves is by looking to history, which is replete with examples of entire industries gobbled up by some new technology, only for later analysis to reveal that the jobs lost were replaced by more, and arguably better, jobs. This is not much comfort to the actual workers whose lives were disrupted, but there is meaningful solace in that kind of macroeconomic story.

However, there are two problems with assuming we should expect another version of this story in the case of AI: first, we have never seen a technology change so many industries at once, and second, we have never seen a technology create change so rapidly. Comparing the technological upheaval we face now to, for example, the advent of the automobile and its displacement of horse-powered transportation is therefore more contrast than comparison. History allows us to be hopeful that AI will have an overall positive effect on the labor market, but critical thinking will not allow us to be confident about it.

As important as it is, the question of economic displacement by artificial intelligence is just one of a set of ethical questions about human collaboration with AI. These include questions about physical safety in collaboration (e.g., autonomous vehicles) and the potential for unnecessary destruction enabled by collaboration (e.g., combat drones).

The Albers School of Business and Economics is certainly keeping an eye on these issues, but our focus on teaching students about the transformational nature of AI has pushed us through our own internal reflection and experimentation. That work has unearthed other questions about how collaboration with AI is shaping the educational experience: how it affects learning, growth, critical thinking, emotional intelligence, and productivity. These questions are more subtle and less discussed than those about job displacement and killer machines, but they may turn out to be just as important.

Since these issues are remarkably complex, it is fortunate that we are not rudderless as we explore them. We are not only a business school but a Jesuit business school, which means we work under a mission statement that incorporates values such as educating the whole person.

As we create an educational paradigm that is not just about the intellect, we have both the freedom and the obligation to ask whether collaboration with AI is actually good for the character formation of our students, even if it increases their productivity, and how the use of AI tools affects their sense of self, even if it makes them more valuable in the workforce. These priorities help guide the selection and adoption of AI-powered learning activities.

Here are some examples of how professors are incorporating AI into the classroom:

- Encouraging students to leverage AI-driven platforms that offer personalized learning experiences. These tools allow students to progress at their own pace based on their individual performance and needs.
- Immersing students in a 3D virtual world where they interact with characters in realistic situations, encouraging more empathic responses.
- Using AI to enrich the contextual and background information in case studies. This adds depth to the cases, making them more comprehensive and realistic for students.
- Using AI to generate thought-provoking questions or scenarios to stimulate and deepen class discussions.
- Using AI to generate activities that require meaningful interactions between students related to the topics being covered.

What is the relationship between the user and an LLM? How will students view and ultimately use content supplied by generative AI? They can become clerks: those who cut and paste AI output and include it in "their own work." Or they can be analysts: those who use generative AI to help them think through their current project and produce a superior one. Assignments requiring or permitting use of an LLM should encourage students to become analysts, not clerks.

These are important and dynamic questions that leave the future of education unwritten. But while we do not know the future, we do know our values in educating the whole person, and that is enough to provide guidance for the immediate future.

Professor Nathan Colaner is a Teaching Professor in the Department of Management at Seattle University, where he is also Director of Business Analytics. His research and teaching focus on the ethical implications of the development and use of analytics and artificial intelligence. He also consults for the Vatican, the National Science Foundation, the National Institute of Standards and Technology, and various other business and nonprofit organizations.