The guidance below, originally adopted on March 14, was issued by The New York Times Deputy Managing Editor Sam Dolnick and Editorial Director for A.I. Initiatives Zach Seward:
As we embark on experiments that make use of generative A.I. in the newsroom and Opinion, these principles will guide our work and clarify why and how we plan to use the technology:
- As a tool in service of our mission. Generative A.I. can assist our journalists in uncovering the truth and helping more people understand the world. Machine learning already helps us report stories we couldn’t otherwise, and generative A.I. has the potential to bolster our journalistic capabilities even more. Likewise, The Times will become more accessible to more people through features like digitally voiced articles, translations into other languages and uses of generative A.I. we have yet to discover. We view the technology not as some magical solution but as a powerful tool that, like many technological advances before it, may be used in service of our mission.
- With human guidance and review. The expertise and judgment of our journalists are competitive advantages that machines simply can't match, and we expect they will become even more important in the age of A.I. Our talent is what makes The Times the world's best resource for curious people. Generative A.I. can sometimes help with parts of our process, but the work should always be managed by and accountable to journalists. We are always responsible for what we report, however the report is created. Any use of generative A.I. in the newsroom must begin with factual information vetted by our journalists and, as with everything else we produce, must be reviewed by editors.
- Transparently and ethically. The first principles of journalism should apply just as forcefully when machines are involved. Readers must be able to trust that any information presented to them is factually accurate, meets the high standards of The Times and follows our Handbook for Ethical Journalism. We should tell readers how our work was created and, if we make substantial use of generative A.I., explain how we mitigate risks, such as bias or inaccuracy, with human oversight.