Artificial Intelligence (AI) applications can be powerful and useful tools for journalists and media professionals, but AI requires thorough, well-informed, and ethically responsible humans to take control of it and supervise the outcome.
These were the general conclusions of a panel of journalism and media production specialists, as well as news controllers and journalism ethics experts.
They were invited by the School of Communication of Hong Kong Baptist University (HKBU) to a symposium on Wednesday that debated the innovations and risks AI technology presents for the profession.
The symposium, titled “When Journalism Meets AI,” discussed the challenges and opportunities AI tools pose for both journalists and media organizations.
The symposium noted that, in an era where AI is reshaping industries, journalism is at a crossroads.
While AI promises to revolutionize news production, enhance storytelling, and redefine audience engagement, it also poses challenges and ethical considerations.
Though they discussed different aspects of AI according to their fields of expertise and professional experience, all the speakers agreed AI can help journalists and media professionals do their jobs in many different ways.
These include tasks that are traditionally time-consuming and tedious.
But new technologies still need human judgment, demanding more attentive and ethically responsible professionals to control them.
Haris Amiri, senior machine learning engineer at Ground News, said AI helps journalists and the public understand the tone and political inclination of a news report.
It can help journalists produce more balanced and trustworthy news stories by finding original sources and original quotes that may have been distorted as the information passed through different hands.
Amiri said the technology is great and useful but it “needs to rely on guardrails.”
He said that in some cases, when asked to balance a story that had no sources from a particular point of view, AI tools generated what he termed “hallucinations”: a “false” and “non-real” view contrary to the one originally suggested.
Eric Wishart, Standards and Ethics Editor at Agence France-Presse, raised the same issue, noting the difficulty of using AI-generated images in news stories.
Wishart said there is a risk AI might “amplify stereotypes in images.”
“Human oversight is the basic principle. AI can be a great tool, very useful, but needs human oversight,” he said.
He noted AI is now so significant that it has forced most news agencies to update their charters.
He said that while AI can help with many time-consuming aspects of journalistic work, including research, fact-checking, and translation, it cannot do the one thing that is crucial: replace the reporter in the field.
He noted that a study from Trusting News and the Online News Association found 94% of people want journalists to disclose their use of AI in the production of news stories.
A very significant portion of news consumers tends to distrust content created using AI, he said.
The study notes that while consumers are mostly “very comfortable” with the use of AI to check spelling and grammar, this comfort drops drastically when AI takes on news anchor or presenter tasks, and drops even further when it comes to writing headlines, even with human review in place.
Wishart showed how media companies that use AI-generated images to report on news stories have lost consumer trust.
“I don’t think it does much for your credibility and your media organization [if you publish AI-generated images] unless it is a story about AI-generated images,” he said.
“Doing real news with AI images raises a lot of questions and distrust from users. It needs to be real and not how AI thinks it should look.”
Wang Yi, deputy director of the Technology Research and Development Centre of the Xinhua News Agency, and Echo Wai, Controller (Production Services) at RTHK, showed how both organizations are using AI to enhance some aspects of reporting.
They showed how AI can help restore and enhance the quality of old, low-resolution archive material, as well as prepare topics and choose the best images to illustrate a given story.
Wang also showed how Xinhua uses AI tools to track copyrighted materials, for example, and to trace the original source of an image used in written content.
She showed how the same agency uses AI to make its content available in many languages to reach a broader audience.
Wang and Wai joined other speakers in highlighting the need to “double-check” stories, with AI “aiding and assisting rather than replacing a human’s job.”
The symposium convener, Professor Raymond Wong of HKBU, opened the discussion by saying that intelligence, being by its very nature a human quality, can never be artificial.
“I always advise my students, don’t trust Google,” said Wong.
“You can use Google as a reference, but once you get the references, go back to the original sources; otherwise you will end up with misleading statements every day of the week.”
“Likewise, what is artificial intelligence? These two words are contradictory. If it is artificial, it isn’t intelligence.”
“If you want good journalism, you need to do it the hard way, ask the sources, and don’t go to Google, Wikipedia, or ChatGPT. Trust yourselves, use your intelligence, not artificial intelligence.”