New research from the News Media Association and Newsworks – published as global leaders gather at Bletchley Park for the inaugural AI safety summit starting today – shows that the spread of misinformation and fake news is the public’s main concern with AI technology.
According to a nationally representative OnePoll survey of the public commissioned by Newsworks, the spread of misinformation and fake news was identified as the public’s main concern with AI (67 per cent), ahead of the lack of human creativity and judgment (63 per cent) and the loss of human jobs (61 per cent).
Additionally, a YouGov poll of editors and MPs by the NMA shows three quarters of MPs agree that trusted journalism created by news publishers is critical in minimising the risk of misinformation ahead of a potential general election next year. Among Labour MPs, 85 per cent agree, a higher proportion than among Conservative MPs (69 per cent).
Ninety-seven per cent of news brand editors who responded to the NMA survey agree that the risk to the public from AI-generated misinformation ahead of a potential election next year is greater than ever before, while 60 per cent of MPs agree with the same statement.
Published during Journalism Matters week, the findings strongly reflect the views of the public: 64 per cent of people believe AI could increase the risk of misinformation ahead of future political elections.
Media Minister Sir John Whittingdale said: “At a time when AI can rapidly fuel the spread of fake news, trusted journalism has never been more important. We are in ongoing discussions with news industry leaders on the steps we can take to protect journalism from the risks of AI while harnessing its benefits, and through the UK’s upcoming AI Safety Summit we are working to encourage global cooperation on the responsible use of this powerful technology."
People still overwhelmingly value content produced by humans – 72 per cent of the public surveyed would prefer to read content solely created by humans, while 59 per cent believe AI could erode trust and credibility in online information sources.
Confidence in identifying AI-generated content is low, with 74 per cent of people unsure whether they could, according to the OnePoll findings. Eighty-six per cent of people believe we should have guidelines or regulations in place for AI-generated content on the web, while a similar number believe online content that is wholly or partly generated by AI should be clearly highlighted.
NMA chief executive Owen Meredith said: “As global leaders gather in Bletchley Park to discuss the future of AI, it is essential that the importance of protecting trusted journalism from the damaging effects of this technology is not overlooked. Society values trusted journalism and it is essential governments do all they can to support a free and sustainable press.
“Robust IP rights are fundamental to sustained investment in journalism and tools must be developed to ensure that publishers can fully protect their content from being exploited by AI companies who rely on journalist works to train their systems.”
Newsworks chief executive Jo Allan said: “Our research shows that the spread of fake news and misinformation is the number one concern the public has with AI and the confidence they have in identifying AI-generated material is also very low.
“AI is the next exciting era but it is clear that we must tread with caution. The insights from our study show that the craft of journalism, and news brands in general, should play an even bigger and more important role in our democratic society than ever before.”