Published: 10:29, February 15, 2023 | Updated: 10:33, February 15, 2023
Journal bans listing AI chatbot as co-author
By Zhang Zhihao

Text from the ChatGPT page of the OpenAI website is shown in this photo, in New York, Thursday, Feb 2, 2023. (PHOTO / AP)

A prestigious Chinese academic journal has banned listing ChatGPT as a co-author on papers, joining several publishers that have restricted the use of the artificial intelligence chatbot to prevent inaccuracies and plagiarism in academic research.

The Jinan Journal (Philosophy & Social Science Edition), published by Jinan University since 1936, announced on Friday that it will turn down submissions that list AI tools based on "large language models", such as ChatGPT, as co-authors on papers.

If contributors have used such technology to write their papers, they must explain in detail how the AI tools were used and show proof of their work's originality, the journal's editorial board said in a statement.

The journal reserves the right to reject or retract papers whose contributors omit these details. It also said it will seek a detailed explanation if an article references papers written by AI tools.

Since its launch in November, ChatGPT — developed by US software company OpenAI — has attracted over 100 million users worldwide by creating realistic and intelligent texts in response to user prompts.

The chatbot's large language model is a deep-learning algorithm that can recognize, summarize, translate, predict and generate text and other content in a coherent and conversational manner.

Scholars and students have used ChatGPT in academia for tasks ranging from writing research-paper abstracts to editing manuscripts. This has led to several papers crediting the AI tool as a co-author, raising questions about academic integrity.

Last month, Holden Thorp, editor-in-chief of the journal Science, banned the use of texts from ChatGPT and explicitly stated in a new editorial policy that the program cannot be listed as an author.

"In a recent study, abstracts created by ChatGPT were submitted to academic reviewers, who only caught 63 percent of these fakes," Thorp wrote in an opinion piece. "That's a lot of AI-generated text that can find its way into the literature soon. For the Science journal, the word 'original' is enough to signal that text written by ChatGPT is not acceptable."

Many publishers have made similar changes. Last month, German-British academic publishing company Springer Nature Group stated that ChatGPT cannot be listed as an author since any attribution of authorship carries accountability for the work, which an AI tool cannot provide.

However, these AI tools can still be used in the preparation of a paper if full details about their use are provided in the manuscript, the publisher said.

"As researchers dive into the brave new world of advanced AI chatbots, publishers need to acknowledge their legitimate uses and lay down clear guidelines to avoid abuse," Nature journal said in an opinion piece. "Research must have transparency in methods, and integrity and truth from authors. This is, after all, the foundation that science relies on to advance."

Dutch academic publishing company Elsevier's latest guidelines allow AI tools to improve the readability and language of a research article, but the tools are not allowed to perform critical tasks such as drawing scientific conclusions.

A Beijing-based professor of information technology, who requested anonymity, said publishers must take action to regulate the use of AI technology in scholarly writing to "prevent cutting corners and producing junk science".

The professor emphasized that the adoption of AI tools in academia is inevitable and regulations must be drafted to ensure the honest and responsible use of such technologies.

zhangzhihao@chinadaily.com.cn