Don't use AI for research until you've watched this...NEW Rules

Introduction

As generative AI technologies increasingly permeate academic research, several prominent journals have established guidelines governing their use. These rules have emerged in response to debates over the legitimacy and ethical implications of employing AI in research. Here, we'll walk through the key regulations that major journals have set for the responsible use of generative AI tools.

Importance of Disclosure

One of the pivotal rules is the requirement for authors to disclose their use of AI in their manuscripts. Publishers such as Elsevier and Wiley specify that authors must transparently acknowledge any AI or AI-assisted technologies used in their research. Such a disclosure typically needs to name the specific tool — for example, a large language model like GPT-4 or Claude — and explain how it was used, such as for writing or editing the manuscript. Authors should also note where the disclosure belongs: usage that affects the research itself goes in the methods section, while writing assistance is usually acknowledged in a dedicated statement or the acknowledgments section.

Limits on Original Research

AI's role in research must be treated with caution, especially regarding original research. Journals are clear: AI tools must not be used to create or alter images, nor to fabricate or manipulate results. Instead, they may be used for language-related improvements — enhancing readability, clarity, and academic tone. Tools like Grammarly or spell checkers generally do not require disclosure, as they assist with language checking rather than content creation.

Responsibility for Content

Researchers retain full responsibility for the language and ideas presented in their work, regardless of whether AI assisted in producing it. The use of AI must never lead to the misrepresentation of primary research data. Journals emphasize that all statements in a manuscript must represent the authors' own ideas, and that any claims made must be backed by reliable references.

Authorship Limitations

It is crucial to understand that AI cannot be listed as an author on a manuscript. Journals such as Nature explicitly assert that large language models like ChatGPT do not meet authorship standards due to their inability to be held accountable for the content they generate. Authors must refrain from including AI in the author list or referring to it as a co-author.

Peer Review Considerations

Lastly, AI should not be used to conduct peer review. Feeding a manuscript into an AI tool to generate a review can breach confidentiality and compromise the integrity of the evaluation process. While some journals permit reviewers to use AI to polish the clarity of feedback they have already written, the substantive evaluation must rest on the reviewer's own expertise and experience. Peer review must remain an expert judgment, not an automated assessment.

Understanding and adhering to these guidelines will ensure that research maintains its integrity and credibility in an evolving academic landscape.


Keywords

  • Disclosure
  • AI Tools
  • Original Research
  • Responsibility
  • Authorship
  • Peer Review
  • Academic Integrity
  • Guidelines

FAQ

Q: Why do I need to disclose my use of AI in my research?
A: Journals require disclosure to maintain transparency in the research process and to ensure the credibility of the work.

Q: Can I use AI to create or modify images for my research?
A: No, AI tools should not be used to create or alter images; their role should be limited to language-related tasks.

Q: What happens if I don't disclose AI usage?
A: Failure to disclose the use of AI can lead to problems with journal submission, including rejection, potential retraction, or allegations of academic misconduct.

Q: Can AI be an author of a research paper?
A: No, AI cannot be listed as an author. Only human researchers who can be held accountable for the content and research can receive authorship credit.

Q: Is it acceptable to use AI in the peer review process?
A: Generally, no. While AI may be used to improve the clarity of feedback a reviewer has already written, relying on AI to evaluate a manuscript undermines the integrity and confidentiality of the peer review process.