A surge in high-quality grant proposals written by agentic AI threatens to overwhelm peer review, and Nature warns that funders must act before sorting wheat from chaff becomes impossible. This is not a distant prospect: funders are already seeing significant increases in submission volumes, with many proposals exhibiting impeccable technical polish while lacking the depth and originality of genuine research. The scientific community faces a crossroads: adapt its evaluation mechanisms or risk a degradation in the quality of funded science.


Agentic AI in Science: Grant Funding Under Siege

Generative AI has already transformed academic writing. But the new wave of 'agentic AI' (systems that not only draft text but also research, structure, and optimize entire proposals) poses an existential challenge to funding systems. Nature, in its April 27, 2026 edition, sounds the alarm over the rise in high-quality proposals written by AI models. These systems can analyze thousands of previous proposals, identify patterns associated with success, and generate text that maximizes reviewer scores. Their speed and output far exceed human capacity, threatening to saturate the peer-review system.

[Image: scientist reviewing documents in a lab]

The problem is not just quantitative but qualitative. AI-generated proposals can be technically flawless yet lack the intuition, creativity, and context that human reviewers value, and funders fear it will become impossible to distinguish genuinely innovative ideas from algorithmically generated ones. Moreover, there is a risk that overwhelmed reviewers will come to rely on superficial signals, such as textual fluency or formal structure, rather than evaluating the actual scientific content. That shift would favor AI-generated proposals, which are optimized precisely for those indicators.