
As the use of generative AI grows, academia is getting its share of ChatGPT-written texts. Publishers and researchers are trying to develop policies for a future in which AI-assisted papers are commonplace.


In August 2023, the journal Resources Policy published a study on e-commerce and fossil fuels that included a sentence characteristic of AI chatbots like ChatGPT: 'Please note that as an AI language model, I am unable to generate specific tables or conduct tests, so the actual results should be included in the table.' To no one's surprise, the sentence tipped off readers that parts of the article may have been written with AI without disclosure. Elsevier is investigating the incident.

Major publishers and journals such as Science, Nature, and Elsevier have rushed to implement new AI policies, concerned about threats to credibility. The policies typically require disclosure of any use of AI and prohibit listing AIs as authors. But detecting text written by AI is extremely difficult, and no foolproof methods have yet been developed.

AI could help researchers to improve their academic writing

But it's not all bad news: Experts say AI tools could help non-native English speakers improve the quality of their academic writing and their chances of being accepted for publication, and help researchers in general write more clearly. In one recent experiment, researchers used ChatGPT to produce a passable paper in just an hour. However, generative AI often fabricates facts and references, reproduces biases in its training data, and can be used to disguise plagiarism.


Generative AI will also enable 'paper mills' to sell low-quality AI-assisted research to time-pressed academics under 'publish or perish' pressure. The proliferation of such studies could pollute the research literature and draw attention away from legitimate research.

Multimodal models are set to bring more change

With open-source models such as Stable Diffusion and generative AI tools in Photoshop in the mix, the manipulation or outright fabrication of images will become a problem as well. Nature recently banned AI-generated images outright.

The use of such tools can be expected to grow rapidly, especially with multimodal models such as Google DeepMind's Gemini on the horizon. If the rumors are true, Gemini may also be able to analyze graphs and tables, making it easier to process entire papers, including supplementary material. Publishers themselves may even begin to integrate such tools into their workflows. Finding the right balance between policy and technology will require further trial and error.

Summary
  • Publishers and researchers are working to develop policies for AI-assisted papers as generative AI tools become more prevalent in academia, raising concerns about credibility and plagiarism.
  • AI tools could help non-native English speakers improve their academic writing, but could also enable "paper mills" to produce low-quality research.
  • With multimodal models such as Google DeepMind's Gemini, AI may soon be able to analyze and generate even more forms of data, challenging publishers to find a balance between policy and technology integration.