COLUMBIA, Mo. — Throughout history, technology has repeatedly transformed the news media industry: the telegraph, radio, television and then the Internet. Yet through each of these evolutions, technology remained the medium and human journalists the messengers.
The introduction of Artificial Intelligence has changed that model.
Today, AI systems designed to perform the communicator role are generating news content independently of humans. That means AI is both the medium and the messenger, giving human journalists a new synthetic partner programmed to aid in news gathering.
But a new study from the Missouri School of Journalism has found many Americans are unaware of the role AI plays in their world, including in the production of news.
The findings come at a challenging time for the news media industry, which faces historically low levels of public trust. More Americans than ever report getting their news from the very social media platforms that use AI to deliver it, even as journalists report on the dangers AI can pose to privacy, fairness, equality, safety and security. It’s a conundrum.
“So, how do we help people trust AI but still report on companies that use AI to the detriment of the public?” asked Chad S. Owsley, a doctoral student at the School of Journalism and co-author of the recent study. “How do you separate the two?”
Using an online survey conducted in 2020, Owsley and co-author Keith Greenwood, an associate professor, found that fewer than half of respondents (48%) were certain they had read, seen or heard something about AI in the past year; another 40% could say only that it was possible. Just 25% of participants perceived AI as capable of writing or reporting news as well as or better than human journalists.
The 48% figure was consistent with a European study conducted three years earlier, which Owsley said was striking given the high rate of technology use participants reported in the 2020 study. For example, 61% reported owning a smartphone.
He expected that as people’s use of technology increased, so would their awareness of AI.
“We seemed to have stalled for three years and that is worth asking why and how,” Owsley said, adding that the study did not seek to answer those questions, which he is addressing in his dissertation. “Some of this stuff is pretty geeky. There might be a general disinterest in a high level of detail [about AI]. People might be thinking: ‘I just want it to work. I don’t care how it works.’”
Despite this lack of awareness, some forms of AI are replacing journalists, while others serve as an aid to newsgathering.
In 2018, Forbes introduced a publishing platform called “Bertie,” which uses AI to aid reporters by identifying trends, suggesting headlines and providing visual content to match relevant stories. The Washington Post and the Associated Press are also using AI to perform the role of journalist. Financial reporting is among the topics most often aided by AI.
“The way an AI machine thinks or operates is based on constructed data,” Owsley said. “It has to have the information in tabulated form, such as spreadsheets and tables.”
AI then takes that knowledge and generates a human language story.
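As a rough illustration of the template-driven approach Owsley describes, a minimal sketch of turning one row of tabulated earnings data into a human-language sentence might look like the following. The function, field names and figures here are hypothetical examples, not code from any newsroom system:

```python
# Hypothetical sketch: generating a news sentence from tabulated data.
# All names, fields and numbers are illustrative.

def generate_earnings_story(row: dict) -> str:
    """Turn one spreadsheet-style row of earnings data into a sentence."""
    direction = "rose" if row["revenue"] >= row["prior_revenue"] else "fell"
    pct_change = abs(row["revenue"] - row["prior_revenue"]) / row["prior_revenue"] * 100
    return (
        f"{row['company']} reported quarterly revenue of "
        f"${row['revenue']:,}, which {direction} {pct_change:.1f}% "
        f"from ${row['prior_revenue']:,} a year earlier."
    )

row = {"company": "Acme Corp", "revenue": 1_250_000, "prior_revenue": 1_000_000}
print(generate_earnings_story(row))
# → Acme Corp reported quarterly revenue of $1,250,000, which rose 25.0% from $1,000,000 a year earlier.
```

Production systems are far more sophisticated, but the principle is the same: structured, tabulated input in, natural-language story out, which is why data-rich beats such as financial and sports reporting were automated first.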
Greenwood said a longstanding criticism of innovations is that news organizations often jump into the “next best thing” without important considerations, such as “what does our audience think and how will they interact with this?”
“If organizations are going to be thinking about adopting these AI technologies, one of the things they need to be asking is, ‘OK, how does this fit with what our audience needs or expects, or how does it fit with who they think we are?’” he said. “Rather than thinking, ‘Well, this is the future, and we need to go that way.’”
“Awareness and perception of artificial intelligence operationalized integration in news media industry and society,” by Chad S. Owsley and Keith Greenwood was published in AI & Society.