FEATURE 

AI Governance in publishing: ensuring ethical, compliant content creation

As AI technologies become embedded in the workflows of media and publishing, they bring opportunities to enhance the efficiency and quality of content creation. They also bring a host of pressing governance issues, says Media Systems’ Paul Driscoll.

By Paul Driscoll

From streamlining repetitive editing tasks to optimising content distribution based on audience engagement, AI can elevate the reach and impact of publishers. However, the increasing reliance on AI in these processes also introduces significant ethical and regulatory challenges. Inaccurate, biased, or manipulative AI-produced content can weaken audience trust, and non-compliance with data protection regulations can lead to severe legal consequences. AI governance — the structured approach to managing AI’s role responsibly — is essential to mitigate these risks and safeguard the integrity of AI-assisted content.

The importance of AI Governance in media and publishing

AI governance is the framework of policies, practices, and oversight designed to ensure that AI systems are used responsibly, ethically, and in alignment with organisational and societal values. In publishing, it is essential for maintaining the integrity of content and protecting publishers against legal risks and reputational damage. Publishers using AI for content creation should check AI output vigilantly to ensure it aligns with editorial standards and audience expectations. Without clear governance, publishers run the risk of producing and publishing content that falls short of the ethical standards and trustworthiness needed to retain reader loyalty.

For example, AI-generated content may unintentionally reinforce stereotypes, overlook diverse perspectives, or misinterpret data, producing biased or even misleading information. With AI governance, publishers create a protective structure that guides how AI is applied and sets boundaries that protect both their content quality and their readers.

AI challenges in content creation and distribution

1. Bias and accuracy risks

AI models rely heavily on the data on which they are trained. If that data is biased (e.g. culturally or along gender lines), AI may perpetuate or even amplify that bias in the content it generates. In publishing, inaccurate content can damage a publication’s reputation and erode trust among readers. It can also have broader social implications, especially if content undervalues certain groups or spreads misinformation.

2. Intellectual property, privacy, and data protection concerns

With AI-driven personalisation and targeted content, publishers aim to use audience data to make content distribution more effective. These practices must align with data protection regulations, such as the European GDPR. Without governance, publishers risk infringing users’ privacy, inviting legal sanctions and reputational damage. Governing AI can help ensure data is used ethically and in compliance with applicable regulations.

A related factor to consider is copyright infringement. UK publishers, much like their counterparts in other countries, believe that AI companies such as OpenAI, the developer of ChatGPT, scrape the internet and use copyrighted content to train their systems. Dan Conway, CEO of the Publishers Association in the UK, put it this way: “AI is not being developed in a safe and responsible, or reliable and ethical way. And that is because LLMs are infringing copyrighted content on a massive scale.” Governments are now being pushed to intervene. At some point, legislation regarding AI will cover copyright infringement, which will also affect AI governance and its dynamics.

3. Maintaining editorial standards

AI-generated content does not automatically meet the editorial standards audiences have come to expect. It may lack nuance, emotional resonance, or contextual understanding. AI can be of great value in content creation, but publishers must set clear boundaries to guarantee that content aligns with the tone, quality, and editorial standards their audiences expect from their publications.

Key strategies for effective AI governance in publishing

Implementing an AI governance framework can help publishers overcome these challenges and use AI responsibly and effectively. With the new EU AI Act, the European Union is already on course to regulate the use of AI, and the UK government is following suit with its 2023 AI regulation white paper, which aims to set legal standards and boundaries for the use of AI. Whatever shape legislation ultimately takes, publishers should consider the following practical strategies:

1. Establish clear ethical guidelines for AI use

The first step is to set clear ethical guidelines that define how AI will be used in content creation and distribution. What counts as acceptable practice, from data handling to content generation standards, and which measures prevent AI from producing potentially manipulative, harmful, or offensive content? Answering these questions provides a foundation for using AI in line with the publisher’s commitment to ethical standards and audience trust.

Ethical guidelines can also dictate what types of content are unacceptable for AI to generate. A publisher may refrain from using AI in an area like news reporting, where accuracy and neutrality are crucial. With such rules firmly in place, AI output remains aligned with brand values and audience expectations.
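To make guidelines like these enforceable in day-to-day workflows, some teams encode them as a machine-readable policy that tooling can check automatically. Below is a minimal sketch in Python; the content areas, category names, and actions are illustrative assumptions, not an established standard.

```python
from dataclasses import dataclass, field

# Hypothetical policy encoding a publisher's ethical guidelines.
# Area names and required actions are illustrative only.
@dataclass
class AIUsagePolicy:
    # Content areas where AI-generated drafts are not allowed at all.
    prohibited_areas: set = field(default_factory=lambda: {"news_reporting"})
    # Areas where AI drafts are allowed but require human sign-off.
    review_required_areas: set = field(default_factory=lambda: {"features", "newsletters"})

    def check(self, content_area: str, ai_generated: bool) -> str:
        """Return the action the guidelines require for a piece of content."""
        if not ai_generated:
            return "publish"
        if content_area in self.prohibited_areas:
            return "reject"        # guidelines forbid AI use here
        if content_area in self.review_required_areas:
            return "human_review"  # AI allowed, but an editor signs off
        return "human_review"      # default to review when in doubt

policy = AIUsagePolicy()
print(policy.check("news_reporting", ai_generated=True))  # -> "reject"
```

Defaulting unlisted areas to human review, rather than automatic publication, keeps the policy conservative when new content types appear before the guidelines catch up.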

2. Implement bias detection and content review processes

AI models may unintentionally reproduce biases from their training data, so bias detection tools are an essential component of good AI governance. Using tools to catch potential biases or inaccuracies in AI-generated content, backed by human review processes as an extra safeguard, should be standard procedure. Together they help publishers mitigate the risk of bias and ensure the accuracy and cultural sensitivity of content.

A content review process should be part of a larger workflow that allows editors to oversee AI-generated content before it is published. This step doesn’t just help catch potential errors or biases but also ensures that AI’s output aligns with the publication’s quality standards. Human editors could be the final gatekeepers of AI-generated content in this workflow, ready and able to make adjustments to maintain content integrity.
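One way such a gate might be wired together is sketched below in Python. The phrase-matching "detector" is a deliberately simple stand-in; a real pipeline would call a dedicated fairness or toxicity classifier. The function names and flagged phrases are assumptions for illustration.

```python
import re

# Illustrative stand-in for a real bias/accuracy detector; a production
# pipeline would call a dedicated classifier or fairness-testing service.
FLAGGED_PHRASES = [r"\ball\s+\w+\s+people\b", r"\bobviously\b", r"\beveryone knows\b"]

def automated_bias_check(text: str) -> list[str]:
    """Return a list of flagged patterns found in AI-generated text."""
    return [p for p in FLAGGED_PHRASES if re.search(p, text, re.IGNORECASE)]

def review_gate(draft: str, editor_approves) -> bool:
    """Gate a draft: automated checks first, human editor as final gatekeeper."""
    issues = automated_bias_check(draft)
    if issues:
        print(f"Draft flagged for review: {issues}")
    # The editor always has the final say, even when nothing was flagged.
    return editor_approves(draft, issues)

# Usage: the editor callback represents the human sign-off step.
approved = review_gate(
    "Everyone knows this genre sells better.",
    editor_approves=lambda draft, issues: len(issues) == 0,
)
print("published" if approved else "held for revision")
```

The key design point is that the automated check only flags; it never publishes. The human callback sits at the end of the gate, matching the "final gatekeeper" role described above.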

3. Prioritise transparency with audiences

An effective way to build audience trust is to be fully transparent about the role of AI in content creation. Publishers should be upfront about how AI contributes to their content production and distribution; doing so demonstrates a commitment to responsible use of technology and solidifies audience trust.

Transparency is equally relevant to AI’s role in content personalisation and targeted distribution. When audiences understand how AI influences the content they see, they are more likely to engage with it positively. This openness builds a stronger relationship between publisher and audience and boosts readers’ confidence in the integrity of the content they consume.
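In practice, disclosure can be as simple as attaching an AI-contribution record to each article’s metadata and rendering it alongside the byline. A minimal sketch follows; the field names and model label are illustrative assumptions, not any established disclosure standard.

```python
# Hypothetical article metadata carrying an AI-contribution disclosure.
article = {
    "headline": "Autumn publishing trends",
    "byline": "Jane Smith",
    "ai_disclosure": {
        "ai_used": True,
        "role": "drafting assistance",  # e.g. drafting, summarising, translation
        "human_reviewed": True,
        "model": "internal-llm-v2",     # illustrative model name
    },
}

def disclosure_line(meta: dict) -> str:
    """Render a reader-facing note from the disclosure metadata."""
    d = meta.get("ai_disclosure", {})
    if not d.get("ai_used"):
        return ""
    note = f"AI was used for {d.get('role', 'parts of this article')}"
    if d.get("human_reviewed"):
        note += "; a human editor reviewed the final text"
    return note + "."

print(disclosure_line(article))
# -> AI was used for drafting assistance; a human editor reviewed the final text.
```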

4. Regularly monitor and update AI models

AI governance is an ongoing process that requires continuous monitoring. Regularly review and update AI models to make sure they reflect ethical and regulatory standards. When new data becomes available or regulations change, publishers should adapt their AI systems and governance practices accordingly. That way, AI remains valuable and compliant, and AI governance stays on point.

Regular monitoring also helps identify areas where AI performance could improve. If an AI model consistently generates content that requires human revision, retraining it to better meet editorial standards may be necessary.
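One simple, concrete signal to monitor is the share of AI drafts that editors end up heavily rewriting. The sketch below assumes a hypothetical log of review outcomes and an arbitrary threshold; both would need tuning to a real publication’s workflow.

```python
# Hypothetical review log: one record per AI-generated draft.
review_log = [
    {"draft_id": 101, "heavily_revised": False},
    {"draft_id": 102, "heavily_revised": True},
    {"draft_id": 103, "heavily_revised": True},
    {"draft_id": 104, "heavily_revised": False},
]

REVISION_RATE_THRESHOLD = 0.4  # arbitrary governance threshold, tune per publication

def needs_retraining(log: list[dict]) -> bool:
    """Flag the model for retraining when editors keep rewriting its output."""
    if not log:
        return False
    rate = sum(r["heavily_revised"] for r in log) / len(log)
    print(f"Heavy-revision rate: {rate:.0%}")
    return rate > REVISION_RATE_THRESHOLD

if needs_retraining(review_log):
    print("Revision rate exceeds threshold: schedule model review/retraining.")
```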

5. Promote industry collaboration

AI governance should not be left to individual publishers alone; a shared vision of what governance should be and do, and how it should support a healthy and responsible use of AI in publishing, requires collective effort. Collaboration between industry players, regulators, and tech developers will be crucial to establishing shared best practices and standards for responsible AI use. By cooperating, the media industry can develop collective guidelines for AI governance that ensure fairness, transparency, and compliance with legal frameworks.

Conclusion: ensuring responsible AI use in publishing

AI offers publishers exciting opportunities to optimise content creation and distribution, but these benefits come with legal, ethical and regulatory responsibilities. Effective AI governance provides a framework that allows publishers to use AI responsibly, protecting both the integrity of their content and the trust of their audiences. Establishing ethical guidelines, implementing bias detection and review processes, prioritising transparency, and regularly updating AI models enables publishers to mitigate risks and confidently embrace AI’s potential.

Interested in learning more about AI governance? For more extensive information, check out this article about the ins and outs of AI governance.

About us

Media Systems Ltd (MSL) provides solutions to the corporate, media and publishing industries. We are software platform implementation, systems integration and development specialists who provide sales, production, multichannel editorial workflow and digital asset management solutions to the content creation and publishing industries.

We believe mutual success comes from working in partnership with our clients. With a client base that encompasses national press and regional newspapers, magazine, educational training and book publishers, MSL brings 25 years of publishing experience and world-class solutions backed by outstanding customer service.

Website: www.mediasys.co.uk