Marquette Business

New MBA course teaches AI best practices alongside ethics  

Students will learn to solve business problems with AI while weighing ethical concerns. 

William Caraher, instructor of practice at Marquette University

Business leaders across every industry continue to grapple with the question of how to embrace and implement AI technology—and how to do so ethically. A new course in the Graduate School of Management will prepare students to address these questions in their own workplaces while sharpening their practical AI skills.  

“Applied AI: From Business Case to Ethical Deployment” debuts in spring 2026. The course provides “an overview of AI and will focus on the ethical best practices and ethical application of AI,” says Instructor of Practice William Caraher, who is also chief information officer and director of operations at von Briesen & Roper law firm. “With Marquette’s mission and values rooted in ethics, that was an important distinction when creating this course.”  

Marquette Today spoke with Caraher about how the eight-week course will help students dissect AI’s opportunities and challenges alongside ethical questions around bias, hallucinations and more. 

How is the course structured to teach about AI, which is a constantly evolving technology?  

Since it’s such a new topic, there aren’t many published case studies that one can refer to. This is a very real-time, right-now topic. Students will bring news, stories, articles and case studies that they’ve seen in the last week to kick off each week’s discussion. I find that in all my technology courses, this is a great way to start off the topic for the day, talking just about what happened in the last week. A lot happens in a week. Rooting the course in real time is an important factor, and students get a lot out of it. 

The course covers practical business uses for AI. How will that look in action for students?  

I envision students bringing their laptops and tablets to class and not just following along with the lecture, but also getting hands-on with the platforms we’re talking about. For example, if we’re looking at AI biases, we’ll look at examples all together.

Students will also do a deep dive on an industry or vertical that they’re working in or are interested in while doing an analysis of the current state of AI. They’ll look at how well it has been adopted, any pitfalls or challenges and specific use cases. The other deliverable will be a SWOT (strengths, weaknesses, opportunities, threats) analysis in which students will pick an AI platform and analyze the technology company, the IP, the ownership structure, the investors — the whole AI ecosystem.    

One challenge in business right now is that companies don’t have unlimited resources, so many are struggling to decide which platforms to adopt to maximize ROI. Students will research some of the platforms to determine which ones are a good investment for a company. If they’re in manufacturing, supply chain or real estate, for example, are there specific tools that would be better suited for those industries?

Which AI tools will the course cover?  

The usual suspects: OpenAI’s ChatGPT, Claude by Anthropic and Google Gemini are top of mind for me — and yes, even Grok from xAI. We’ll look at who owns what and who are the investors funding these programs. We’ll also dig into the infrastructure behind the AI data center boom, in which Wisconsin is playing such a big part. One aspect getting a fair share of attention, especially from locals, is the environmental impact of AI data centers. Among the reasons data centers are coming here are our fresh water and expansive land available for re-use, but are these concerns we should be thinking about?

How will this course equip students with an ethical compass for managing AI in the workplace?  

A lot of AI’s answers and responses are self-generated but need human guidance and influence. But who’s giving that guidance and influence? That’s kind of a slippery slope. These are the questions students will ask as they use AI platforms to hold the owners and the creators of these systems accountable for representing diverse thought and unbiased results.   

Whether an AI answer is right or wrong is for you to determine. AI gives you its answer, but are we questioning it? Are we thinking of alternatives? Or are we just accepting it? That’s a danger and risk that we have to talk through. What about the impact on jobs and careers? There is a lot to unpack.

From your perspective working at von Briesen & Roper, what are some AI challenges you’re seeing in the legal field?  

In legal, we’re constantly evaluating AI tools. The number one area where AI has failed in the legal industry is legal research, citations and case law. There are at least six industry examples of AI hallucinations submitted as fact by well-intentioned attorneys across the U.S. in court proceedings and court filings. The good intent does not make up for the fact that they have created a mess for the courts and their clients — and judges are not amused either.

Von Briesen & Roper is aware that AI has had a few early bumps in the road. The firm has implemented thoughtful policies and procedures to prevent those things from happening while not stifling innovation. Missteps with cutting-edge technology are bound to happen, but if you have solid guardrails and an understanding of the limitations, the results can be transformative.