Sports Illustrated came under fire recently for surreptitiously publishing a series of AI-generated articles credited to authors who don’t exist. The ensuing public furor suggests a looming challenge for publishers that have adopted or are currently experimenting with artificial intelligence, at a time when trust in the news is already at an all-time low. If there’s a single takeaway from the SI fiasco, it’s this: Media organizations debating the use of AI-generated content in their newsrooms should proceed with extreme caution.

Case in point: Most American news consumers perceive articles written by AI as less trustworthy and want publishers to disclose when they’ve used AI in news stories, according to a study conducted by researchers at the University of Minnesota’s Hubbard School of Journalism & Mass Communication and the University of Oxford’s Oxford Internet Institute.

The study, which analyzed audience perceptions of AI-generated news content, suggests that the use of AI in newsrooms remains an unpopular idea. The largest share of respondents polled (40 percent) said they believe AI technologies do a “worse job than humans” when it comes to producing news content, compared to a third (33 percent) who said they do “about the same job.”

Only about one in ten (11 percent) said they think AI does “a better job than humans” in a newsroom, while the remaining 16 percent said they “don’t know.”

In addition, 81 percent said they believe news organizations should “alert readers or viewers” anytime AI is used in the creation of news content, and 78 percent of those who advocated for this disclosure also believe news organizations “should provide an explanatory note describing how AI was used.”

Half of respondents (50 percent) said they’re in favor of news organizations providing bylines on stories “attributing the work to AI.”

Perhaps most damning is the study’s finding that people are less likely to trust AI-written articles regardless of the content those articles contain or the claims they make. Respondents were asked to read news articles covering a variety of political topics, some of which were labeled as AI-generated, with many of those AI-labeled articles accompanied by a list of the news sources used. Overall, respondents rated the AI-labeled stories as less trustworthy than the articles without such a label, even though they didn’t judge those articles to be any less accurate or more biased.

One silver lining, and perhaps a potential path forward for the use of generative AI in newsrooms, is that AI-produced content fared better with audiences when those articles provided a list of sources. Researchers found that the “negative effects associated with perceived trustworthiness are largely counteracted” when AI-written articles cite the sources they drew on.

Some experts have suggested that the use of AI in newsrooms, or the algorithmic curation of content in stories, might reduce the public’s perception of bias. However, the study found that those who already distrust the news media or aren’t very knowledgeable about journalism weren’t swayed in their convictions when articles came with an AI label. Making matters worse, the study also found that people who generally trust the media and understand what reporting entails appear to be the most negatively affected by the presence of AI labels. This suggests that AI could deepen America’s news trust crisis by eroding confidence among the remaining share of U.S. news consumers who still trust the institution.

More than a quarter of respondents (28 percent) said they’d heard or read “a lot” about news organizations using generative AI to write articles and report on events, while nearly two-thirds (63 percent) said they’d heard “a little.” Fewer than one in ten (9 percent) said they’d heard “nothing at all” about the phenomenon. The study pointed out that respondents who said they’d heard or read “a lot” about news organizations using generative AI were almost twice as likely to say they think AI does a better job than humans in writing news articles (16 percent versus 9 percent).

The study, titled “‘Or they could just not use it?’: The Paradox of AI Disclosure for Audience Trust in News,” surveyed nearly 1,500 U.S.-based participants and was conducted in September. It is currently a preprint that hasn’t yet been peer-reviewed.