Communication

How AI can help — and hurt — when people fundraise for urgent medical needs 

Generative AI chatbots excel at making medical crowdfunding campaigns more effective, says new Marquette research — at least until donors learn that AI is involved.

Despite a few well-publicized stumbles, artificial intelligence models such as ChatGPT have been impressively handling tasks ranging from researching term papers and writing computer code to composing sonnets about climate change in the style of Shakespeare. So, it’s intriguing to consider the difference they could make contributing to something where the needs are urgent and the stakes are often life and death. 

As a scholar focused on the intersection of communication, data science and technology, Dr. Larry Zhiming Xu, assistant professor of strategic communication in the Diederich College of Communication, has conducted research for years on one of those high-stakes endeavors: people’s use of crowdfunding platforms to access broad online donor bases for help affording crucial medical care. 

While his early work focused on crowdfunding’s remarkable ability to overcome the usual reluctance people have about giving money to strangers, often by leveraging affinities such as shared experiences with the same disease, more recent studies have revealed the all-too-human struggles that cause these campaigns to falter. “If I have to raise money for a family member, I must be desperate,” Xu explains. “Probably, I don’t have the time, energy and resources needed to write a good story, take pictures or maybe shoot a video. Yet those actually are very important things to persuade people that I’m doing this for legitimate reasons.”

So, it was natural for Xu to wonder about the role generative AI could play in improving people’s crowdfunding efforts — and the practical and ethical issues that assistance could introduce. 

These are no longer strictly academic questions, Xu notes. Crowdfunding platforms such as AngeLink and GiveAsia incorporate built-in tools to help users hone their campaigns with “AI-powered storytelling.” And people facing major medical bills are free to run their fundraising drafts through an AI assistant such as ChatGPT for writing help. 

That was all the more reason for Xu to launch into action, recruiting faculty colleagues from computer science and information systems to partner on two research studies funded by a $50,000 grant from the Northwestern Mutual Data Science Institute. Marquette is an anchor institution of the NMDSI, along with the University of Wisconsin–Milwaukee and Northwestern Mutual, and Xu and his research partners are NMDSI-affiliated faculty. Xu credits the institute with facilitating collaboration across disciplines, and in these studies that collaboration proved productive: the researchers identified significant ways AI can improve fundraising appeals, while also uncovering an unwelcome erosion of trust when AI involvement is disclosed to potential donors.

In the first study, Xu served as principal investigator and partnered with Dr. Praveen Madiraju, professor of computer science, and Dr. Kambiz Saffarizadeh, assistant professor of management, to examine what are known as “alignment problems” in adapting generative AI to crowdfunding efforts, or, as Xu puts it: “Now that AI has this potential to help people with these campaigns, when it is actually used, is it really helpful? Does it increase effectiveness and trustworthiness, or do some unintended consequences occur?”

To conduct the study, the research team visited the popular GoFundMe site and accessed all of its publicly available medical crowdfunding projects, retrieving more than 900 campaigns written by humans. The researchers then used ChatGPT to rewrite the stories in a way the chatbot considered more effective. (Having absorbed vast swaths of the internet as training material, ChatGPT is familiar with crowdfunding projects and the attributes of successful ones, says Xu.)
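To make that rewriting step concrete, here is a minimal sketch of how a campaign story could be passed to a chatbot for revision, assuming the OpenAI Python SDK. The model name, prompt wording and rewrite_campaign helper are illustrative assumptions, not the research team’s actual pipeline.

```python
# Minimal sketch, assuming the OpenAI Python SDK; not the study's pipeline.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def rewrite_campaign(story: str) -> str:
    """Ask the model to rewrite a crowdfunding story for effectiveness."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": (
                "You are helping improve a medical crowdfunding appeal. "
                "Rewrite the story to be as persuasive as possible while "
                "keeping every factual detail intact.")},
            {"role": "user", "content": story},
        ],
    )
    return response.choices[0].message.content

print(rewrite_campaign("My sister needs treatment we cannot afford..."))
```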

Using established textual analysis tools to measure the presence of elements associated with fundraising success, Xu and his partners generally observed AI outperforming the human writers. The AI-enhanced writing was found to be 8% more analytical, 16% more likely to use goal-directed language related to money and, notably, 10.5% more emotionally charged than the human originals. “People assume AI is more robotic, but that’s not true,” says Xu. “AI can use a better vocabulary with more emotional words, because in this context apparently empathy is very important. People need to be emotionally moved to make their decisions.”

ChatGPT achieved these impressive measures while using 37% fewer words, but the analysis did yield a caveat. The AI-improved stories measured 15.4% less authentic than the human originals, based on a system that scans for signs of the spontaneous or sometimes flawed communication seen in real life. “AI was deemed less authentic,” says Xu, “maybe a little too polished, a little too perfect.”
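The article doesn’t name the software the team used, but established tools in this space (LIWC is one well-known example) work by counting the share of words that fall into validated dictionary categories. The toy sketch below, with an invented mini-lexicon and invented sample texts, shows how such category rates and the percent differences cited above might be computed.

```python
# Toy sketch of dictionary-based scoring in the spirit of tools like LIWC.
# The mini-lexicons and sample texts are invented for illustration only.
EMOTION_WORDS = {"hope", "love", "afraid", "grateful", "fight", "heartbroken"}
MONEY_WORDS = {"donate", "donation", "cost", "bills", "fund", "funds", "afford"}

def category_rate(text: str, lexicon: set[str]) -> float:
    """Share of words in the text that belong to a dictionary category."""
    words = [w.strip(".,:;!?").lower() for w in text.split()]
    return sum(w in lexicon for w in words) / max(len(words), 1)

def percent_change(ai_score: float, human_score: float) -> float:
    """Percent difference of the AI version relative to the human original."""
    return 100 * (ai_score - human_score) / human_score

human = "We cannot afford the hospital bills and we are afraid for our son."
ai = "Please donate today, your gift funds his fight and gives us hope."

for name, lexicon in [("emotion", EMOTION_WORDS), ("money", MONEY_WORDS)]:
    h, a = category_rate(human, lexicon), category_rate(ai, lexicon)
    print(f"{name}: human {h:.3f} vs. AI {a:.3f} ({percent_change(a, h):+.1f}%)")
```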

For the next study, Xu partnered with Dr. Terence Ow, professor of information systems and analytics in the College of Business Administration, and went further, recruiting 600 human judges to compare their reactions to traditional and AI-enhanced fundraising appeals, and to test whether knowledge of AI’s involvement affected their willingness to support a campaign.

The short version, says Xu, is that the judges couldn’t tell from linguistic markers alone whether projects had been enhanced with AI. But when one group was informed of AI involvement in a project and another was left in the dark, their reactions diverged. “If we tell people that something was generated by AI, all of a sudden they believe that it is inferior, and their trust is eroded,” Xu says. When participants learned of AI’s role in a project, their donation amounts dropped by 22.5%.
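As a rough illustration of how such a disclosure effect can be tested, the sketch below compares average pledges between a group told of AI involvement and a group left uninformed. All of the numbers are synthetic stand-ins, not the study’s data.

```python
# Hedged sketch of a between-groups comparison; all numbers are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
uninformed = rng.normal(loc=40.0, scale=12.0, size=300)  # dollars pledged
told_of_ai = rng.normal(loc=31.0, scale=12.0, size=300)  # ~22.5% lower mean

drop = 100 * (uninformed.mean() - told_of_ai.mean()) / uninformed.mean()
t_stat, p_value = stats.ttest_ind(uninformed, told_of_ai)
print(f"Drop in mean pledge: {drop:.1f}% (t = {t_stat:.2f}, p = {p_value:.3g})")
```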

Amid growing calls for full transparency around AI-generated text, images and video (a movement Xu supports), these findings raise questions that won’t be easy to resolve, he says. “We are seeing this somewhat ethically paradoxical finding that honesty and transparency is costly and that it is punished. If I’m raising money for someone in my family, using generative AI does not necessarily make my practice unethical. But the question becomes whether fundraisers want to incur an additional financial cost for being transparent.”

Through their involvement in NMDSI (where Xu also co-chairs the group’s regional talent subcommittee) and an increasingly robust data science community at Marquette that spans disciplines such as communication, business, health studies and computer science, Xu and colleagues will be well positioned to further explore these and other challenges associated with AI use, and to provide critical guidance along the way.