On Nov. 30, 2022, tech company OpenAI released ChatGPT, a large language model chatbot that could respond to a wide range of user prompts with startling accuracy. This new tool, with its capability to generate text responses so lifelike as to be virtually indistinguishable from human writing, disrupted classrooms from elementary school to higher education seemingly overnight.
Dr. Joe Wall knows the truth: the revolution didn’t occur overnight. It was bubbling under the surface for years, and he witnessed it happening.
“I technically have been involved in the artificial intelligence space since 2016,” says Wall, executive director of Accelerating Ingenuity in Markets and the Flynn Chair of Accounting Ethics and Disruptive Technologies. “I was introduced to ChatGPT and Google Bard before they were mainstream. Everybody thought AI was neat, but people in the industry didn’t think it was going to catch on in the public for at least another five or 10 years.”
When those predictions proved false, faculty and students alike found themselves confronting new, pressing questions. If a student prompts ChatGPT to write a paper, is it plagiarism? Should faculty implement a “no AI” policy in class, or is it better to treat it as another legitimate learning tool? Even if a code of conduct around AI is created, will it still make sense six months from now, when even more powerful tools have hit the market?
Instructor of practice and AI engineer Hunter Sandidge, who designed much of AIM’s financial technology curriculum, sees echoes of old debates in this new one over AI. Specifically, Sandidge thinks of the pocket calculator: an invention that today’s students may not regard as revolutionary, but one that sparked immense controversy when it debuted decades ago.
“So many professors said that calculators would not be allowed in their classrooms, and they weren’t wrong to think that way. The thought was one would miss an element of learning math by using a calculator because one wouldn’t understand how the answer was determined,” Sandidge says. “But ultimately, if there’s a tool that’s available to people in the real world, it’s not helpful to ignore it in the classroom. It’s better to acknowledge it, then teach students when and how to apply the tool.”
Wall draws on his background in AI research to apply a set of best practices he developed for ethical AI use in the classroom. One of his favorites is to ask students to submit a log of the prompts they gave ChatGPT to complete an assignment, which provides a clear window into each student’s problem-solving efforts. When Wall asked his fintech class to build their own chatbots, he was pleased to review the logs and see that no two applications were made with the same methodology.
“If you ask ChatGPT to provide ideas for a paper instead of writing it for you, and if you do your own research, write the paper, and ask ChatGPT to clean up the spelling and grammar, then submit it along with all the logs, you’ve probably written the best paper you could while also learning a lot about what a new technology can do for you,” Wall says.
The speed at which AI programs change and new ones are released complicates an already hard-to-grasp innovation. Unlike the pocket calculator, which existed unchanged for years and left time for debate about its proper use, AI changes daily. Wall tested the efficacy of GPTZero, a popular tool that claims to detect text written by chatbots, and found it to be much less accurate now than it was just eight months ago. Why? The language ChatGPT uses to respond to queries had already changed. Based on his findings, Wall decided not to use GPTZero to check student assignments.
Sandidge, who leads data science efforts for Collins Aerospace’s space portfolio, including projects such as the next generation spacesuit, says this pace of growth will only accelerate.
“The thing about AI is that it grows on an exponential curve,” Sandidge says. “At the beginning, you’re moving slow and not much is changing, but then you hit that elbow point and find that there’s new crazy stuff happening every day. We’ve passed that elbow point in the last two years.”
“No matter what you’re teaching, I would adopt an approach that assumes more students are going to use this technology,” Wall says. “Part of our charge and our ethical duty as professors is to harness new technology, not shy away from it. We have an opportunity at Marquette to provide leadership on how to use technology in ways that protect people and privacy while achieving the best outcomes we can for humankind.”
Wall and Sandidge reflect that philosophy in the assignments they give the first-ever fintech students accepted into the AIM program. From constructing chatbots to using predictive AI to forecast stock prices, Wall’s students learn both the tools and the principles behind those tools. Students graduate from AIM with portfolio-ready projects that mirror assignments they’d be asked to complete in the real world.
When visiting large banks, Wall eagerly explains why students with AI expertise are invaluable employees. One Marquette graduate schooled in modern AI methods could sit at the intersection of several trading desks, constantly making their processes more efficient and generating greater returns for the firm. Amazon’s AI team hired a Class of 2025 AIM student for a coveted internship in Seattle, as did multinational consulting firm Ernst & Young.
However, one does not have to aspire to a career in AI to find value in it.
“You can prompt ChatGPT to adapt to your exact circumstances,” Sandidge says. “Tell it that you’re a professor and ask it to generate an AI-inclusive syllabus, or that you’re a student looking for rules about how to use it ethically. The biggest thing I would tell people is to experiment. Mess around with the tool.”
Over the next 12 to 18 months, newer tools may gain the ability to execute tasks instead of just responding to prompts. Wall uses the example of planning a vacation: today’s chatbots will give you a suggested itinerary and links for more information, while tomorrow’s will book the airline tickets, make the dinner reservations and send the whole plan to your phone. Large companies, initially leery of using popular AI tools because of data security concerns, have begun constructing in-house versions of ChatGPT that exist only on internal servers, a trend Wall and Sandidge both expect will intensify.
Ultimately, AI isn’t going anywhere, and higher education will have to adapt.
“I’d rather be a thought leader in this space and have those crucial discussions instead of putting my head in the sand,” Wall says. “History tends to punish people who ignore new technologies and reward people who find good ways to use them.”