Digital Content Next issues AI principles

Digital Content Next, a US-based trade body, this week issued seven principles for the development and governance of generative AI.

The seven ‘Principles for Development and Governance of Generative AI’ published this week by Digital Content Next are:

1) Developers and deployers of GAI must respect creators’ rights to their content. Developers and deployers of Generative Artificial Intelligence (GAI) systems—as well as legislators, regulators and other parties involved in drafting laws and policies regulating GAI—must respect the value of creators’ proprietary content.

2) Publishers are entitled to negotiate for and receive fair compensation for use of their IP. Use of original content by GAI systems for training, surfacing or synthesizing is not authorized by most publishers’ terms and conditions, or contemplated by existing agreements (for example, for search). GAI system developers and deployers should not be crawling, ingesting or using publishers’ proprietary content for these three stages without express authorization.

3) Copyright laws protect content creators from the unlicensed use of their content. Like all other uses of copyrighted works, use of copyrighted works in AI systems is subject to analysis under copyright and fair use law. Most use of publishers’ original content by AI systems, for both training and output purposes, would likely be found to go far beyond the scope of fair use as set forth in the Copyright Act and established case law. Exceptions to copyright protections for text and data mining (TDM) should be narrowly tailored so that they do not damage content publishers or become pathways for uses that would otherwise require permission.

4) GAI systems should be transparent to publishers and users. Strong regulations and policies imposing proportionate transparency requirements are needed to the extent necessary for publishers to enforce their IP rights where publishers’ copyright-protected content is included in training datasets. Generative outputs that use publisher content should include clear and prominent attributions in a way that identifies to users the original sources of the output (not third-party news aggregators) and encourages users to navigate to those sources. Users should also be provided with comprehensible information about how such systems operate to make judgments about system quality and trustworthiness.
5) Deployers of GAI systems should be held accountable for system outputs. GAI systems pose risks for competition and public trust in publishers’ content. This can be compounded by GAI systems generating content that improperly attributes false information to publishers. Deployers of GAI systems should be legally responsible for the output of their systems.
6) GAI systems should not create, or risk creating, unfair market or competition outcomes. Regulators should be attuned to ensuring GAI systems are designed, trained, deployed, and used in a way that is compliant with competition laws and principles.
7) GAI systems should be safe and address privacy risks. Collection and use of personal data in GAI system design, training and use should be minimal, disclosed to users in an easily understandable manner, and in line with Fair Information Practice Principles (FIPPs). Systems should not reinforce biases or facilitate discrimination.
