13 Comments
Dr Sam Illingworth

LLM screening actually seems like a super smart move. I wonder if it could be applied only to people who themselves used LLMs in their manuscript prep (via self-declaration)?
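
For illustration only, here is a minimal sketch of what that kind of self-declaration gate could look like; the Submission fields, run_llm_screen(), and triage() are hypothetical names, not any journal's actual system:

```python
# Hypothetical sketch: route a manuscript to LLM screening only when the
# authors self-declare that LLMs were used in preparing it. The Submission
# type, its fields, and run_llm_screen() are invented for illustration and
# do not refer to any real editorial system.
from dataclasses import dataclass


@dataclass
class Submission:
    manuscript_id: str
    declared_llm_use: bool  # self-declaration checkbox on the submission form


def run_llm_screen(submission: Submission) -> str:
    """Placeholder for whatever automated screening a journal might run."""
    return f"LLM-screened {submission.manuscript_id}"


def triage(submission: Submission) -> str:
    # Only self-declared LLM-assisted manuscripts get the extra screening pass;
    # everything else goes straight to the normal editorial queue.
    if submission.declared_llm_use:
        return run_llm_screen(submission)
    return f"{submission.manuscript_id} routed to standard editorial queue"


print(triage(Submission("MS-001", declared_llm_use=True)))
print(triage(Submission("MS-002", declared_llm_use=False)))
```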

James Butcher

Many publishers saw a ~20% increase in submissions last year. Everyone expects the same to happen this year, perhaps more. Working out how to deal with the influx of AI-generated slop is keeping a lot of people awake at night within the academic publishing industry.

Your article focuses on genuine papers, produced by "good actors". There are also many bad actors out there (e.g. paper mills) that are looking to publish at scale to make quick cash.

Here are a couple of articles, published recently, that discuss this topic:

The emerging submission crisis in behavioral science

https://www.sciencedirect.com/science/article/pii/S2211949326000013

Ninety-seven ignored: A personal reflection on the hidden struggles of an academic editor

https://ese.arphahub.com/article/182020/

This quote from the second article sums up what many editors are experiencing:

"Managing approximately 20 manuscripts required sending dozens to over a hundred reviewer invitations per article, often yielding few or no responses and prolonging decision timelines. The relentless, voluntary nature of this work resulted in significant mental fatigue, intrusion into personal life, and eventual burnout, culminating in my resignation from both roles. This reflective account highlights the demanding, often invisible labor of editors and calls for greater empathy and support within the scholarly community."

Submission fees get discussed a lot by publishers. I expect more publishers to roll them out, although they will be wary of introducing barriers for LMIC (low- and middle-income country) authors.

This is an alternative take that's worth considering:

https://scholarlykitchen.sspnet.org/2025/11/04/manuscript-submissions-are-up-thats-good-right/

scott cunningham

Yes, but that's under traditional chatbots, not AI agents, right? The AI slop gets managed much better by Claude Code, and the skills needed to do it well are trivial to acquire.

James Butcher

Oh yes. The situation is only going to get harder to deal with. No doubt there.

We're living in a perfect storm:

- Academics have to publish or perish

- AI tools make it easier to write and submit

- Publishers make more cash by publishing more papers under open access business models

scott cunningham

I haven't even touched on the predatory journal model, or the global gaming of publication metrics like the h-index. We have already seen gray markets emerge globally where people pay for articles to be written that cite them in these weird journals. Even before AI, some authors were "producing" at a rate of five articles a day, judging by their vitas, which is literally not possible. And that's before AI agents. Who knows what sort of milling you get now, except I bet it will be better work. Why? Because it's zero marginal cost to make it better, as the agents are better at it than those hackers.

Erick Gong

These are questions worth discussing at all journals. There are some journals that prohibit the use of AI when reviewing papers, which seems odd if authors are using AI already. I also wonder if, in addition to raising fees, you could limit the number of submissions an author can make in a year.

Richard Devine

One additional idea I'd like to add (I thought you might propose it yourself): a token system where you earn submission tokens by completing high-quality reviews (perhaps with no token earned if the review is poor quality). Certain journals could form pacts with token reciprocity, e.g., 5 reviews at top-5 journals earns you 1 submission at any top journal, since top-tier journals may not want to recognize review tokens from lower-tier journals (and vice versa). This would slow down the flow and make authors more thoughtful about which ideas they want to spend their precious tokens to submit, since review tokens would presumably be hard earned through time and attention. This, combined with some other modifications (e.g., AI sorting of desk rejects), is one potential path.
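
Purely as an illustration of the bookkeeping, here is a minimal sketch of such a token ledger, using the 5-reviews-for-1-submission exchange mentioned above; the TokenLedger class, the pact membership, and all names are hypothetical:

```python
# Hypothetical sketch of the review-token idea: reviewers earn credit for
# non-poor-quality reviews at pact journals, and five such reviews convert
# into one submission token usable at any journal in the pact. The class,
# the pact membership, and all names are illustrative assumptions.
from collections import defaultdict

REVIEWS_PER_TOKEN = 5  # e.g., 5 reviews at top-5 journals -> 1 submission token


class TokenLedger:
    def __init__(self, pact_journals):
        self.pact_journals = set(pact_journals)
        self.review_credits = defaultdict(int)     # author -> reviews banked
        self.submission_tokens = defaultdict(int)  # author -> tokens available

    def record_review(self, reviewer, journal, quality_ok=True):
        # Poor-quality reviews earn nothing; reviews at non-pact journals
        # are not recognized by this pact.
        if quality_ok and journal in self.pact_journals:
            self.review_credits[reviewer] += 1
            if self.review_credits[reviewer] >= REVIEWS_PER_TOKEN:
                self.review_credits[reviewer] -= REVIEWS_PER_TOKEN
                self.submission_tokens[reviewer] += 1

    def spend_token(self, author, journal):
        # A submission at any pact journal costs one token.
        if journal not in self.pact_journals:
            raise ValueError(f"{journal} is not in this reciprocity pact")
        if self.submission_tokens[author] == 0:
            return False  # no tokens left; be more selective about submitting
        self.submission_tokens[author] -= 1
        return True


ledger = TokenLedger(["Journal A", "Journal B", "Journal C"])
for _ in range(5):
    ledger.record_review("alice", "Journal A")
print(ledger.spend_token("alice", "Journal B"))  # True: 5 reviews -> 1 token
print(ledger.spend_token("alice", "Journal B"))  # False: tokens exhausted
```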

scott cunningham

Your more general point, I think, is worth emphasizing: forcing authors to be judicious and selective about their best work, or making the work better, is the goal. Whatever the mechanism to do it, I think we should say that's a normative priority.

dr. doug liebe

Maybe I'm wrong, but I get the feeling that an AI layer to filter papers will only briefly (if ever) reduce the burden on reviewers. As soon as journals state their intention to use such a layer, authors will also use it (or reverse-engineer something close to it), which will again raise the quality of the average submission. A pre-submission AI check requires only marginal effort from researchers, if they're not already doing something like this.

This may even cause a bigger bottleneck, as submissions will require more and more (AI-generated) context for the final approver to read, given how close to the cutline all these papers will be. There will be less AI-generated "don't accept this"-type feedback.

scott cunningham

Oh, it's absolutely a band-aid and a short-term fix. The average quality is going to go up, but the ability to evaluate the papers won't, so it's a matter of getting from A to B as painlessly as possible. Raising prices to generate the revenue to purchase a solution is almost certainly key to this, but using AI to deal with the burden now is a possible candidate.

Sarah Rosenberg

As a junior researcher without any job security, I am really glad you're talking about this. I also think that, collectively, using AI in research can be hugely beneficial for society and we should do it, but individually it's scary.

Jason Godfrey

Loved this, and loved the push towards greater transparency in methods through repo inclusion as a baseline for submissions.

This all assumes that human authors and editors will remain the gold standard. Given how slow academia is to adopt new norms, this is likely to be the case; however, I believe there is room for a larger, system-level rethinking of how knowledge is produced and validated, one that scrutinizes submissions based on their rigor and merit more than their provenance. That may be a bigger project... an immodest proposal?

Alexander Kustov

This is great. What do you think of Kevin Munger et al.'s proposal to cap the number of submissions (e.g., no more than 2-3 submissions per author to a given journal)?

https://kevinmunger.substack.com/p/peer-review-2027