GPT (Ghost-Penman-Typist): Has Generative Artificial Intelligence (GenAI) Paved Its Way into Scholarly Writing?

— by Fanny Liu

Introduction

Researchers have been using Generative Artificial Intelligence (GenAI) in their work, for example to write code, brainstorm research ideas, and draft manuscripts (Van Noorden & Perkel, 2023). While recognising its negative impacts, including the proliferation of misinformation, plagiarism that is easier to commit and harder to detect, and inaccuracies introduced into research texts, many researchers believe that large language models (LLMs) help those who do not speak English as a first language improve the grammar and style of their papers.

[Image courtesy of いらすとや]

GenAI tools as research article authors

COPE (the Committee on Publication Ethics) provides guidance and support to publishers, editors, and authors on ethical issues in scholarly publishing; its members include major publishers such as Elsevier, Springer Nature, Taylor & Francis, and Wiley. In its position statement on authorship and AI (COPE Council, 2024), COPE clearly states that AI tools cannot be listed as an author of a paper, for the following reasons:

  • AI tools cannot take responsibility for the submitted work.
  • AI tools cannot assert the presence or absence of conflicts of interest nor manage copyright and license agreements.

Other organisations, including the World Association of Medical Editors (WAME) (Zielinski et al., 2023) and the JAMA Network (Flanagin et al., 2023), also state that AI tools do not qualify for authorship.

[Image courtesy of いらすとや]

Usage and disclosure

A survey found that scientists’ views on using AI to write papers are split (Kwon, 2025). Only 10% of respondents think AI-assisted language editing is unacceptable under any circumstances, whereas 35% think generating an entire paper text with AI is unacceptable.

Traces of GenAI use have indeed been found in the scholarly corpus. Kobak et al. (2025) showed that the advent of LLMs coincided with a sudden rise in the frequency of particular style words across more than 15 million biomedical abstracts indexed by PubMed between 2010 and 2024. Based on this excess word usage, the authors estimated that at least 13.5% of 2024 abstracts had been processed with LLMs. Similarly, Liang et al. (2024) analysed 950,965 papers from arXiv, bioRxiv, and Nature Portfolio journals and found a steady increase in LLM usage between January 2020 and February 2024, with the largest and fastest growth in Computer Science papers (up to 17.5%).
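The intuition behind such detection can be sketched in a few lines of code: track how often certain "style" words appear in abstracts year by year and look for a sudden jump. The sketch below uses a toy corpus and a small illustrative marker list; it is not the authors' actual statistical method, which models excess usage against an extrapolated pre-LLM baseline across the full vocabulary.

```python
# Minimal sketch of marker-word frequency tracking (toy data, illustrative marker list).
abstracts = {
    2022: ["we measure protein levels in tissue samples",
           "results indicate a modest effect on growth"],
    2024: ["we delve into the interplay of protein levels",
           "results underscore a pivotal effect on growth"],
}

# Words whose frequency reportedly surged in abstracts after LLMs appeared.
MARKERS = {"delve", "intricate", "pivotal", "underscore", "interplay"}

def marker_rate(texts):
    """Fraction of abstracts containing at least one marker word."""
    hits = sum(any(word in MARKERS for word in text.split()) for text in texts)
    return hits / len(texts)

for year in sorted(abstracts):
    print(year, marker_rate(abstracts[year]))
```

On this toy corpus, the 2022 abstracts contain no marker words while both 2024 abstracts do; a comparable jump at scale, relative to the trend expected from earlier years, is what the excess-vocabulary analysis quantifies.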

[Image courtesy of いらすとや]

Publishers’ policies

GenAI can be useful in scholarly writing, but its use is subject to the policies and ethical standards of specific publishers, institutions, and academic fields. Some may permit its use with proper disclosure, while others may restrict or prohibit it altogether.

Below is a quick summary of the policies of four popular publishers.

Note: The data presented below were collected in July 2025 and may be subject to updates or revisions. Authors are advised to verify the latest information prior to making decisions.

Elsevier (Elsevier)
  • Authorship: Authors should not list AI and AI-assisted technologies as an author.
  • Usage: The use of GenAI or AI-assisted tools to (1) create or alter images in submitted manuscripts or (2) produce artwork, such as graphical abstracts, is not permitted.
  • Language editing: AI and AI-assisted technologies should only be used to improve the readability and language of the work.
  • Disclosure: Authors should disclose in their manuscript the use of AI and AI-assisted technologies.

Nature Portfolio (Springer Nature)
  • Authorship: Large language models (LLMs) do not satisfy the authorship criteria.
  • Usage: GenAI images are not permitted.
  • Language editing: AI-assisted improvements to human-generated texts for readability and style, and to ensure that the texts are free of errors in grammar, spelling, punctuation, and tone, do not need to be declared.
  • Disclosure: Use of an LLM should be documented in the manuscript.

Taylor & Francis (Informa plc)
  • Authorship: Generative AI tools must not be listed as an author.
  • Usage: Supported responsible uses of generative AI tools include idea generation, online search with LLM-enhanced search engines, literature classification, and coding assistance. The use of generative AI to create or manipulate images, figures, or original research data is not permitted.
  • Language editing: Taylor & Francis supports the responsible use of generative AI tools for language improvement.
  • Disclosure: Authors must clearly acknowledge any use of generative AI tools.

Wiley (Wiley)
  • Authorship: AI tools cannot be listed as an author of an article.
  • Usage: Authors may only use AI technology as a companion to their writing process, not a replacement. GenAI tools must not be used to create, alter, or manipulate original research data and results.
  • Language editing: Tools used to improve spelling, grammar, and general editing fall outside the scope of the disclosure guidelines.
  • Disclosure: Authors must document and disclose use of AI technologies.

While current policies vary among publishers, STM (the International Association of Scientific, Technical and Medical Publishers) has developed a classification of AI use in manuscript preparation to help publishers develop relevant policies.

Conclusion

The use of GenAI technologies is evident in scholarly writing. However, such technologies cannot fulfil the responsibilities of an author. Authors remain responsible for ensuring the originality, accuracy, and integrity of their submitted work. When using generative AI tools, authors should do so responsibly, and should document and disclose their use in compliance with the journal’s editorial guidelines.

References

COPE Council. (2024). COPE position – Authorship and AI – English. Retrieved 2 July 2025 from https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools

Elsevier. Generative AI policies for journals. Retrieved 3 July 2025 from https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals#0-about

Flanagin, A., Bibbins-Domingo, K., Berkwits, M., & Christiansen, S. L. (2023). Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge. JAMA, 329(8), 637-639. https://doi.org/10.1001/jama.2023.1344

Informa plc. AI Policy. Retrieved 3 July 2025 from https://taylorandfrancis.com/our-policies/ai-policy/

Kobak, D., González-Márquez, R., Horvát, E.-Á., & Lause, J. (2025). Delving into LLM-assisted writing in biomedical publications through excess vocabulary. Science Advances, 11(27), eadt3813. https://doi.org/10.1126/sciadv.adt3813

Kwon, D. (2025). Is it OK for AI to write science papers? Nature survey shows researchers are split. Nature, 641(8063), 574-578. https://doi.org/10.1038/d41586-025-01463-8

Liang, W., Zhang, Y., Wu, Z., Lepp, H., Ji, W., Zhao, X., Cao, H., Liu, S., He, S., & Huang, Z. (2024). Mapping the increasing use of LLMs in scientific papers [Preprint]. arXiv:2404.01268.

Springer Nature. Editorial policies – Artificial Intelligence (AI). Retrieved 3 July 2025 from https://www.nature.com/nature-portfolio/editorial-policies/ai

Van Noorden, R., & Perkel, J. M. (2023). AI and science: what 1,600 researchers think. Nature, 621(7980), 672-675. https://doi.org/10.1038/d41586-023-02980-0

Wiley. (12 March 2025). Best Practice Guidelines on Research Integrity and Publishing Ethics. Retrieved 3 July 2025 from https://authorservices.wiley.com/ethics-guidelines/

Zielinski, C., Winker, M., Aggarwal, R., Ferris, L., Heinemann, M., Lapeña, J., Pai, S., Ing, E., Citrome, L., Alam, M., Voight, M., & Habibzadeh, F. (2023). Chatbots, Generative AI, and Scholarly Manuscripts. WAME Recommendations on Chatbots and Generative Artificial Intelligence in Relation to Scholarly Publications. Retrieved 2 July 2025 from https://wame.org/page3.php?id=106
