
The International Conference on Machine Learning (ICML) is the world's leading academic conference on machine learning. Its program chairs are now speaking out against AI-generated texts in science.


In the Call for Papers for ICML 2023, the following sentence sparked discussion among experts: "Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis."

The rapid advances in large language models could have unforeseen consequences and raise many questions, the ICML said, such as copyright in generated content and plagiarism.

"There is also a question on the ownership of text snippets, images or any media sampled from these generative models: which one of these owns it, a user of the generative model, a developer who trained the model, or content creators who produced training examples?" the program chairs write.


ICML wants to act "careful and somewhat conservative"

While these questions may be answered over time as generative AI becomes part of everyday life, the situation is currently unclear, the ICML said. The answers, however, would have a direct impact on the peer review process, and thus on the research community and on careers. For now, the ICML intends to act "careful and somewhat conservative."

The ban applies to this year's conference. The organizers expect that the rules will change with better understanding of large language models and their potential impact.

Unfortunately, we have not had enough time to observe, investigate and consider its implications for our reviewing and publication process. We thus decided to prohibit producing/generating ICML paper text using large-scale language models this year (2023).

ICML, Program Chairs

AI as a writing aid is still allowed - but it's a fine line

The use of AI tools, including LLMs, for tasks such as spelling correction or translation is still allowed. These semi-automatic tools are permitted as long as they are used to improve text written by the author. Using ChatGPT as a creative writing aid, however, falls outside this exemption if the generated text is simply taken over and merely edited.

The ICML acknowledges that it is difficult to determine whether a text was generated by AI. The conference therefore does not plan to deploy a detection tool this year to screen submitted papers for possible violations at scale. However, it will follow up on specific suspicions.

"Any submission flagged for the potential violation of this LLM policy will go through the same process as any other submission flagged for plagiarism," program officials write.


OpenAI itself is working on a kind of watermark for GPT-generated text, and there are tools that claim to recognize AI-written text. However, there are no serious studies on their reliability.

Science is particularly sensitive to the problems of LLMs

The use of LLMs is highly controversial in science, where false information, fabricated citations, and plagiarism are particularly serious and could, so to speak, poison human knowledge at the source.

Just how controversial the topic is was demonstrated by the release of Meta's scientific language model Galactica. Shortly after launch, parts of the scientific community sharply criticized it for its hallucinations and classified it as a danger, and Meta took it offline again within a few hours.

Summary
  • The ICML is the leading academic conference on machine learning. In the 2023 Call for Papers, it argues against fully AI-generated texts.
  • AI tools for language editing, such as translations or grammar, are still allowed.
  • The ICML cannot automatically check possible violations, but wants to investigate suspicions.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.