AI conferences tighten LLM-generated paper bans as submissions flood review systems

Major AI conferences, including ICML and ICLR, are enforcing strict policies against LLM-generated paper text, permitting LLM use only for editing. The move follows exponential growth in AI paper submissions (83% of the most-cited 2023 AI papers focused on LLMs) that is overwhelming peer-review capacity and enabling low-quality 'AI slop' submissions.

The Policy Reality

ICML 2026 maintains its prohibition on entirely LLM-generated paper text, permitting use only for editing or experimental analysis. This continues a pattern established across major AI conferences since late 2022, when ChatGPT's launch triggered a submission surge.

The numbers tell the story: AI publications grew exponentially in 2023-2024 (R² = 0.979, p < 0.001), and 83% of the 100 most-cited AI papers of 2023 focused on LLMs. Review systems designed for pre-ChatGPT submission volumes are buckling.

What Conferences Allow

The policies are more nuanced than outright bans. ACM and IEEE permit disclosed LLM use by both authors and reviewers, but prohibit exposing confidential submissions to these tools. Authors can use LLMs to polish their own writing; they just can't have the LLM write the paper.

The concern isn't theoretical. Conferences face what some researchers call a 'DDoS attack' via junk papers: low-effort submissions that consume reviewer time without advancing the field. One analysis found that LLMs had a larger measurable effect on biomedical writing than major global events did.

The Implementation Challenge

Submission caps alone haven't solved the problem. Critics argue they intensify competition without addressing the root issue: distinguishing between legitimate LLM-assisted editing and wholesale generation is difficult at scale.

Some researchers are testing whether LLMs could help identify paper limitations, suggesting the tools might have constructive roles in peer review. The policies will likely evolve—current restrictions reflect caution rather than settled consensus.

Why This Matters

For enterprise tech leaders funding or consuming AI research, these policies signal quality-control concerns at the research level. If conferences struggle to maintain paper quality, that affects the research pipeline your teams rely on. The pattern also previews challenges other professional communities will face: how do you maintain standards when AI can generate plausible-but-shallow content at scale?

Worth noting: Initial ICLR 2023 data showed minimal LLM-generated submissions, but that was early days. The problem has scaled with adoption.