The rules are set out in the first AI ethics policy from Cambridge University Press and apply to research papers, books and other scholarly works.
They include a ban on AI being treated as an ‘author’ of academic papers and books published by Cambridge University Press.
The move provides clarity to academics amid concerns about flawed or misleading use of powerful large language models such as ChatGPT in research, alongside excitement about their potential.
Mandy Hill, managing director for academic at Cambridge University Press & Assessment, said: “Generative AI can enable new avenues of research and experimentation. Researchers have asked us for guidance to navigate its use.
“We believe academic authors, peer reviewers and editors should be free to use emerging technologies as they see fit within appropriate guidelines, just as they do with other research tools.
“Like our academic community, we are approaching this new technology with a spirit of critical engagement. In prioritising transparency, accountability, accuracy and originality, we see as much continuity as change in the use of generative AI for research.
“It’s obvious that tools like ChatGPT cannot and should not be treated as authors.
“We want our new policy to help the thousands of researchers we publish each year, and their many readers. We will continue to work with them as we navigate the potential biases, flaws and compelling opportunities of AI.”
R. Michael Alvarez, professor of political and computational social science at the California Institute of Technology, said: “Generative AI introduces many issues for academic researchers and educators. As a series editor for Cambridge University Press, I appreciate the leadership the Press is taking to outline guidelines and policies for how we can use these new tools in our research and writing. I anticipate that we will be having this conversation about the opportunities and pitfalls presented by generative AI for academic publishing for many years to come.”
Professor Alvarez and his Caltech collaborators use AI, including LLMs, to detect online harassment, trolling and abusive behaviour on social media platforms and in videogames such as Call of Duty, as well as to combat misinformation.
Professor Alvarez is co-editor of Quantitative and Computational Methods for Social Science, published by Cambridge University Press.
The publisher says the Cambridge principles for generative AI in research publishing include that:
Each year Cambridge University Press publishes tens of thousands of research papers in more than 400 peer-reviewed journals, as well as 1,500 research monographs, reference works and higher education textbooks.