The current review phase for the next major AI conference reveals deep cracks in the scientific enterprise. While researchers from elite universities submit papers padded with fabricated, AI-generated sources, frustrated authors are withdrawing their submissions because reviewers apparently no longer read them and instead let AI write the critiques.
Preparations for the upcoming International Conference on Learning Representations (ICLR) 2026 show AI-shaped cracks in the academic peer-review system. Posts on Reddit and discussions on the OpenReview platform reveal how generative AI is eroding trust in research from both sides: authors are inventing sources, and reviewers are letting AI write their reports. A study published in 2024 adds critical context, showing that these incidents reflect deep structural pressure rather than simple laziness.
One case that highlights the problem on the author side is "BrainMIND", a paper from researchers at the Georgia Institute of Technology and China's Tsinghua University. The study promised an interpretable mapping of brain activity but fell apart after reviewers discovered numerous fake citations. The reference list contained completely fabricated titles and placeholder names like "Jane Doe" as co-authors. A reviewer flagged the obvious use of a language model and issued a "Strong Reject" recommendation. The authors revised the manuscript and references, but additional errors surfaced, leading them to withdraw the paper altogether.
In another case, "Efficient Fine‑Tuning of Quantized Models via Adaptive Rank and Bitwidth", the authors withdrew their submission in protest after receiving four rejections. They accused reviewers of using AI tools to generate feedback without reading the paper. The reviews faulted the paper for missing experiments, such as GSM8K benchmarks, and for supposedly unspecified methods that, according to the authors, were clearly described in both the main text and the appendix. In their withdrawal statement, they called this behavior "flagrant desecration of the reviewer's sacred duty" and condemned what they described as AI-induced reviewer laziness.
Within the community, few criticize the use of AI itself; what draws condemnation is the reckless way it is applied. Using large language models for editing and language polishing, especially by non-native speakers, is generally accepted.
Systemic pressure behind academic misconduct
These incidents reflect deeper structural issues that go far beyond individual papers. A study in Research Ethics by Xinqu Zhang and Peng Wang examines how government programs like China’s "Double First-Class" initiative create toxic incentive systems at top universities.
The researchers describe a mechanism they call cengceng jiama: a stepwise intensification of pressure through layers of academic bureaucracy. National policymakers set vague goals such as achieving "world-class status." University leaders interpret those goals as ranking targets and pass them down as publication requirements, which faculty deans, anxious to meet expectations, tighten even further. What begins as encouragement to produce research often ends in mandatory quotas for publications in SCI-indexed journals.
According to Zhang and Wang, this creates what they call "goal-means decoupling." To meet unrealistic productivity requirements, researchers disconnect their work from ethical standards. The study documents cases where junior scientists admitted they had "no choice" but to falsify data or hire ghostwriters to keep their positions in a publish-or-perish environment. They also cite data from publisher Hindawi, which in 2023 retracted more than 9,600 papers—about 8,200 of them co-authored by researchers from China.
Perhaps most troubling is how academic institutions themselves respond. To avoid damaging their external reputation or losing position in rankings, university administrators often tolerate unethical behavior as long as results look good. One dean quoted in the study cited a Chinese proverb: "Where the water is too clean, there are no fish." The implication: punishing misconduct too strictly would impair research efficiency. The prevailing strategy is to turn big problems into small ones and ignore small problems, unless a scandal becomes public.