
Guidance On The Use Of AI In Scholarly Publishing
Journal editors issue guidance on the use of AI in scholarly publishing: A firm ban on chatbots as authors, but permission to use AI-generated text and illustrations
Editors at seven scholarly journals published recommendations on the use of generative artificial intelligence tools by authors, reviewers, and editors. The recommendations ban listing generative AI as an author but allow its use to generate text and illustrations.
“These constraints are needed in part to protect high-quality scholarship, as other statements have noted, but they are also vital for wider social reasons,” said Gregory E. Kaebnick, lead author of the recommendations and editor of the Hastings Center Report.
These tools “have the potential to transform scholarly publishing in ways that may be harmful but also valuable,” says the statement, published in several of the bioethics and humanities journals edited by the authors and signatories.
Signatories include Karen J. Maschke, editor of The Hastings Center’s journal Ethics & Human Research, and Laura Haupt, managing editor of the Hastings Center Report and Ethics & Human Research. Six additional editors are signatories.
The five recommendations are as follows:
- LLMs (large language models) or other generative AI tools should not be listed as authors on papers.
- Authors should be transparent about their use of generative AI, and editors should have access to tools and strategies for ensuring authors’ transparency.
- Editors and reviewers should not rely solely on generative AI to review submitted papers.
- Editors retain final responsibility for selecting reviewers and should exercise active oversight of that task.
- The final responsibility for editing a paper lies with human authors and editors.
While these recommendations are consistent with positions taken by the Committee on Publication Ethics (COPE) and many journal publishers, they differ in some respects. For one thing, they address the responsibilities of reviewers to authors. In addition, the new statement takes a different position from that of Science magazine, which holds not only that a generative AI tool cannot be an author but also that “text generated by ChatGPT (or any other AI tools) cannot be used in the work, nor can figures, images, or graphics be the products of such tools.”
“Such a proscription is too broad and may be impossible to enforce, in our view,” the new statement says. The authors stress that the recommendations are preliminary.
“We do not pretend to have resolved the many social questions that we think generative AI raises for scholarly publishing, but in the interest of fostering a wider conversation about these questions, we have developed a preliminary set of recommendations about generative AI in scholarly publishing,” the statement says. “We hope that the recommendations and rationales set out here will help the scholarly community navigate toward a deeper understanding of the strengths, limits, and challenges of AI for responsible scholarly work.”