𝗔𝗜 𝗮𝗻𝗱 𝗔𝗰𝗮𝗱𝗲𝗺𝗶𝗰 𝗪𝗿𝗶𝘁𝗶𝗻𝗴: 𝗘𝘁𝗵𝗶𝗰𝗮𝗹 𝗜𝗻𝘁𝗲𝗴𝗿𝗶𝘁𝘆
Artificial intelligence tools can make academic writing easier. However, the rise of generative AI tools such as ChatGPT, Claude, and other large language models (LLMs) has raised important questions about ethics, authenticity, and transparency in research communication.
As researchers, we pursue novel scientific contributions while maintaining high standards of academic integrity, and GenAI can help us communicate that work. At the same time, journal editors must uphold rigorous standards in the face of potential misuse of AI in the publication process, and many are now developing policies in response.
𝙈𝙮 𝙥𝙧𝙤𝙥𝙤𝙨𝙖𝙡:
Just as we share data in supplemental information, we should record and be prepared to 𝘀𝗵𝗮𝗿𝗲 𝗼𝘂𝗿 𝗽𝗿𝗼𝗺𝗽𝘁 𝗵𝗶𝘀𝘁𝗼𝗿𝘆 (a record of the instructions and queries given to the AI during the writing process) when using tools like ChatGPT; a minimal sketch of what such a log could look like follows the list below. This practice offers:
- 𝗧𝗿𝗮𝗻𝘀𝗽𝗮𝗿𝗲𝗻𝗰𝘆: Prompt logs differentiate researcher-authored ideas from AI-generated text, supporting clarity and accountability about author contributions.
- 𝗘𝘁𝗵𝗶𝗰𝘀 𝗮𝗻𝗱 𝗘𝗱𝗶𝘁𝗼𝗿𝗶𝗮𝗹 𝗥𝗲𝘃𝗶𝗲𝘄: Sharing prompt logs with reviewers can support ethical standards in academic publishing and add transparency to the editorial review process, fostering trust among authors, reviewers, and editors.
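To make the proposal concrete, here is a minimal sketch of how a prompt history could be captured for a supplemental file. The file name, field names, and JSON Lines format are illustrative assumptions on my part, not an established standard or any journal's requirement.

```python
# prompt_log.py -- hypothetical sketch of recording a prompt history
# as supplemental information; the schema below is illustrative only.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("supplemental_prompt_history.jsonl")  # assumed file name

def log_prompt(prompt: str, response: str, tool: str, section: str) -> None:
    """Append one AI interaction to a JSON Lines prompt history."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                    # e.g. "ChatGPT" or "Claude"
        "manuscript_section": section,   # e.g. "Introduction"
        "prompt": prompt,                # instruction or query given to the AI
        "response": response,            # text the AI returned
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

if __name__ == "__main__":
    # Example entry: logging a language-polishing request.
    log_prompt(
        prompt="Shorten this paragraph without changing its meaning: ...",
        response="(AI-suggested rewrite pasted here)",
        tool="ChatGPT",
        section="Discussion",
    )
```

An append-only, plain-text log like this is easy to keep alongside a manuscript and easy for reviewers to read, but any format that records what was asked, what was returned, and where it was used would serve the same purpose.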
Co-authors, reviewers, and editors are understandably concerned about the role of generative AI in manuscript writing. Questions about plagiarism and unintended misattribution are real and valid.
That said, AI tools can offer significant advantages. Beyond efficiency, they can 𝗵𝗲𝗹𝗽 𝗻𝗼𝗻-𝗻𝗮𝘁𝗶𝘃𝗲 𝗘𝗻𝗴𝗹𝗶𝘀𝗵-𝘀𝗽𝗲𝗮𝗸𝗶𝗻𝗴 𝗿𝗲𝘀𝗲𝗮𝗿𝗰𝗵𝗲𝗿𝘀 publish in predominantly English-language journals and 𝘀𝘂𝗽𝗽𝗼𝗿𝘁 𝗿𝗲𝘀𝗲𝗮𝗿𝗰𝗵𝗲𝗿𝘀 𝘄𝗶𝘁𝗵 𝗱𝗶𝘀𝗮𝗯𝗶𝗹𝗶𝘁𝗶𝗲𝘀 by breaking down some writing barriers. Scientific editors have long supported authors' writing; AI can be an analogous, complementary tool in our toolkit.
By keeping prompt logs, we can address some of the concerns about AI misuse, promote transparency, and foster trust in AI-assisted research.


