At a webinar attended by journalists, academics and industry stakeholders, Impress introduced the guidance before hosting a Q&A session with experts from Microsoft, the BBC and Byline Times.
“Stories about AI yo-yo in and out of the news most days,” Andrea Wills, chair of the Impress Code Committee, told the webinar. “But it's the unethical uses of generative AI models that concern us most.
“The Impress Code Committee had already incorporated clauses into the revised Standards Code to ensure human editorial review of all AI-generated content and the transparent use of AI tools.
“But in such a complex area, we wanted to do more and this best practice note is the result. It's about giving our publishers the confidence to adopt and use AI tools in an ethical and responsible way.”
The guidance has been designed to provide clarity on the opportunities, limitations and precautions that should be taken by journalists when utilising AI in their news production, added Impress.
Wills was joined at the webinar by special guest speakers Matthew Eltringham, the BBC’s senior advisor on editorial policy; Krishna Sood, assistant general counsel at Microsoft; and Peter Jukes, co-founder and executive editor, Byline Times.
Tom Spencer, Impress’s regulatory executive, and Veronica Gordon, a member of the Impress Code Committee, were also on hand to guide attendees through the new guidance.
Impress says the guidance was developed after a global review of AI policies, consultations with experts in natural language processing and the regulation of AI content, and a six-week public consultation.
While it was developed with members of Impress in mind as a supplement to the Impress Standards Code, added Impress, the guidance provides a framework for ethical AI adoption in journalism that can be widely adopted across the industry.
What does the guidance say?
AI has become a focal point for much debate in the world of journalism and media. While many laud the increased efficiencies it could bring to the profession, plenty continue to warn of the risks of inaccuracy, misinformation and discrimination it could perpetuate.
Impress says it is important that publishers and journalists do not feel restrained when it comes to embracing technological advances.
However, it is crucial that ethical practices and journalistic integrity remain at the forefront of news and content production. With this in mind, the key recommendations of Impress’ latest guidance include:
- Always use human editorial oversight for any content produced using AI.
- Be transparent with your audience about AI use, including clear labelling whenever the technology has been utilised.
- Fact-check the information AI produces: it can be inaccurate and misleading.
- Consider what data you input into AI tools: anything you put in may be used to train future models and infringe your copyright.
- Do not use generative AI to depict real events or people.
- Do not share Personally Identifiable Information when using AI. This is particularly important for investigative journalists or those working in war zones, in order to protect both yourself and your sources.
You can read the full guidance here.
