At a panel hosted by the News Media Association, speakers discussed the dangers of AI in spreading misinformation, the vital role journalism will play in combatting it, and what regulation must do to address those dangers.
The panel featured John Stevens, political editor at the Daily Mirror; Alex Davies-Jones, Labour MP and Shadow DSIT Minister; Sam Sharps, executive director of policy at the Tony Blair Institute; Matt Rogerson, director of public policy at Guardian News and Media; Henry Parker, global head of government affairs at Logically; and Laura Foster, head of policy, technology and innovation at techUK.
Speaking on AI regulation, Matt Rogerson said: “If copyright laws are relaxed, creators of generative AI large language models can take anything written and use it to build their models. We don’t think that’s right. The owners of the IP should have the right to decide who uses their content.
“For output, you need full transparency around the downsides of AI. GenAI tools should be clear that they do not produce journalism, and that what they provide to users is a best guess drawn from crawled information.
“We need warnings to let people know they should check the information they receive.”
Alex Davies-Jones said: “How do we protect news sources? That can be through strong competition laws and regulations and making sure the platforms are held accountable for what is being spread algorithmically on their platforms.”
Alex also said it was essential that remuneration for news publishers be prioritised, pointing to Australia’s approach. She said: “The regulation they brought in has fostered and grown regional and local journalism there, because of the steps they’ve taken to make sure publishers get equitable remuneration for trusted journalism.”
Speaking on worries that AI will ramp up misinformation online, Sam Sharps said: “The reality is we know that, for all the positive and ambitious stories we tell around AI, the responsibility that goes along with it is to ensure it is introduced in the right way, with regard to the fundamental safety and protection of people.
“The speed of access to tools and the level of convincing fakery that can go on are things we must think about and address. The regulatory tools available to us in addressing these issues are complex and knotty to work through.”
Matt added: “The AI we are worried about is generative AI, and that can take the form of images, video, audio, and text. This can have ramifications for journalists’ resources, in terms of working out if those outputs are real.
“A lot of people trust Google Search. In the US, Google is experimenting with integrating a version of generative AI into their search engine, so you may no longer get a list of links to verify whether a story is true; instead, Google would provide an answer, to suggest they know what the news is. That is really detrimental.
“This technology is not reliable. Generative AI is a misbranding of the technology. It is not intelligent. It is a tool that crawls and extracts lots of information, usually from journalists and without a commercial licence to do so, and then regurgitates that information in response to a query. It is not intelligent. It is not a journalist. It is not a human.”
When asked about the role of journalism in combatting these issues, Matt said: “Journalists will do what they do: they will look at the facts and the evidence, double check, and get in touch with anyone affected. That is all baked into the training of a journalist and what is in our editorial code.
“Everything our journalists have written in the history of digital publication has been used to train these tools without permission. That is quite a scary phenomenon; none of us gave permission for this to happen, and we’re all trying to catch up with the greatest heist of intellectual property the world has ever seen.
“Journalists have limited resources. A lot of journalistic organisations are constrained in their resources nowadays, for various reasons: changing business models and media habits, pressures that are particularly acute at a local level.
“There are 650 constituencies, so 650 battlegrounds for AI to be misused. There aren’t 650 fact-checkers that are going to be able to address those small cases of disinformation.”
Speaking about the threat AI poses to democracy and national security, Alex Davies-Jones said: “We have to be able to tackle it, to call it out and identify it, and also ensure we have robust policies in place to counteract it and to regulate how we deal with this in the mainstream.
“There are 44 elections happening around the world next year. It is going to be the biggest year for global democracy in generations, so this poses a huge threat, not just in the UK, but around the world.”