As artificial intelligence becomes a bigger part of higher education, subject matter experts are paying closer attention to its ethical use. Dr. Terence Ow, WIPLI Fellow in AI and professor of information systems and analytics in the College of Business Administration, has thought extensively about how higher education institutions can ensure artificial intelligence is used responsibly.
Past predicts the future
Ow describes artificial intelligence, particularly large language models, as tools for pattern recognition. AI recognizes an input or detects patterns in data, compares them to previous instances in its training data and then predicts the most likely output based on that information.
However, an AI’s ability to do this well relies on having a strong, unbiased data set.
“If you have a bias or any kind of skew in your past data, your end result is going to be inaccurate and need correction,” Ow says. “It’s going to take time for people who work on these things to refine the data set and fix errors.”
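The mechanism Ow describes, and how skew in past data carries through to the output, can be sketched with a toy frequency-based predictor. The miniature "corpus" below is invented purely for illustration; real language models are vastly more complex, but the principle is the same: the model completes the most common pattern it has seen.

```python
from collections import Counter

# Hypothetical toy corpus (invented for illustration): pairs of
# (word, following word). The data is deliberately skewed.
biased_corpus = [
    ("nurse", "she"), ("nurse", "she"), ("nurse", "she"),
    ("nurse", "he"),
]

def predict_next(corpus, word):
    """Predict the next word by picking the most frequent follower
    of `word` in the training corpus -- pure pattern completion."""
    followers = Counter(nxt for prev, nxt in corpus if prev == word)
    return followers.most_common(1)[0][0]

# The prediction simply mirrors whatever skew the data contains.
print(predict_next(biased_corpus, "nurse"))
```

Correcting the output here means correcting the data itself, which is Ow's point: refining the data set takes time and human effort.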
Distinguishing fact from opinion
Large language models currently produce “hallucinations”: responses that are presented as fact but are nonetheless inaccurate. These reflect artificial intelligence’s limitations. For instance, AI has trouble placing its output into context.
“Artificial intelligence struggles with whether something is fact or opinion, for instance, and if you replicate something a million times and it’s wrong, the AI is going to label it a fact because it most often completes the pattern. That’s a big flaw right now,” Ow says.
People who can use the right AI tool to augment their own critical thinking skills and independent judgment will be best positioned for tomorrow’s job market.
Ethical application
While artificial intelligence unlocks broad possibilities for positive change, unethical actors have access to these same tools. For instance, companies hoping to grow cigarette sales can target people who are prone to smoking or trying to quit with greater precision. Deepfake videos allow scam callers to imitate the faces and voices of loved ones.
In this world, it is more important than ever that students be trained on the limits of AI and its proper use cases.
“We need to think about the societal impact of artificial intelligence; who gets this data, what it’s being used for and how we steer people toward value-creating activities,” Ow says. “Using AI has the potential to improve your life and to provide insights and opportunities for the individual, the community and society. It balances the field and offers hope of greater social mobility; you come to Marquette because you want to use technology for these purposes.”
To learn more, join us on Nov. 21 at Marquette University for the inaugural AI Ethics Symposium, entitled “From Policy to Practice,” sponsored by the Northwestern Mutual Data Science Institute and the Marquette Center for Data, Ethics and Society.