Top scientific journals, including Science and the Springer-Nature group, have announced new editorial policies that ban or curtail researchers from using advanced artificial intelligence bots like ChatGPT to write scientific studies.
OpenAI’s ChatGPT chatbot gained prominence in December for its ability to respond to user queries with human-like output, with many experts warning that there could be significant disruptions caused by the breakthrough technology.
Some AI researchers have praised the language model as a major advancement that may revolutionise entire industries and might even replace tools like Google’s search engine.
Google’s management also reportedly issued a “code red” for the company’s search engine business following the release of the experimental chatbot.
The AI chatbot has also demonstrated the ability to summarise research studies, reason through logical questions, and, more recently, pass business school and medical exams that are crucial hurdles for students.
However, users of the AI chatbot have also flagged that it sometimes provides plausible-sounding but incorrect responses, some containing glaring mistakes.
Holden Thorp, editor-in-chief of Science journals, noted that the publishing group is updating its policies to specify that any text generated by “ChatGPT (or any other AI tools) cannot be used in the work, nor can figures, images, or graphics be the products of such tools”.
“An AI programme cannot be an author,” he noted.
The journal editor noted that violation of these policies may constitute scientific misconduct in the same league as plagiarism or unfairly manipulating study images.
He said data sets legitimately and intentionally generated by AI in research papers for study purposes are not covered by the policy change.
Springer-Nature, which publishes nearly 3,000 journals, also expressed concern in an editorial on Tuesday that people using the model may pass off AI-written text as their own or produce incomplete literature reviews using such systems.
The publishing group pointed to several already written but yet-to-be-published studies that credit ChatGPT as a formal author.
It announced that such language model tools “will not be accepted as a credited author on a research paper”, reasoning that AI tools cannot take on the accountability and responsibility that human authors do.
Researchers using such tools in the course of a study should also document their use in the “methods” or “acknowledgements” sections of the scientific paper, Springer-Nature noted.
Other publishing groups, such as Elsevier, which publishes over 2,500 journals, have also revised their policies on authorship after ChatGPT gained prominence.
Elsevier announced that such AI models can be used “to improve the readability and language of the research article, but not to replace key tasks that should be done by the authors, such as interpreting data or drawing scientific conclusions”.