
The Speed of Innovation Surpasses Peer Review
Over the past decade, AI research has evolved from a niche scientific endeavor into a global enterprise driving advances across disciplines. As of 2019, artificial intelligence preprints were submitted to the open-access repository arXiv at a rate exceeding three per hour, a 148-fold increase over 1994; deep learning preprints alone were submitted every 0.87 hours, a 1,064-fold increase over the same period (1). Yet this rapid advancement creates a paradox: the peer-reviewed literature struggles to keep pace with the speed at which validated AI technologies enter clinical practice. A systematic review by Andersen et al. found substantial variability in publication turnaround among biomedical journals, with mean times from submission to publication ranging from 91 to 639 days, delays that can hinder the timely dissemination of research findings (2).
Technologies that have been validated in real-world settings, approved by regulatory bodies, and adopted in clinical practice may still be awaiting publication. Meanwhile, readers of even the most prestigious scientific periodicals are often digesting studies that describe a landscape already reshaped by the time of print. As an illustration of AI’s rapid integration into healthcare, FDA data as of September 2024 indicate that 1,016 AI or machine learning-enabled medical devices have been authorized since the first approval in 1995 (3). The pace has accelerated sharply: only six such devices were approved in 2015, compared with 224 in 2023 alone. A recent scoping review analyzing 692 FDA-approved AI or machine learning-enabled medical devices from 1995 to 2023 found notable gaps in available validation data (4). Comprehensive performance study results were reported for only 46.1% of these devices, and scientific publications providing detailed safety and efficacy data were linked for just 1.9%. These observations underscore the importance of timely, thorough, peer-reviewed validation to support clinical trust, efficacy, and equitable application of AI technologies.
While AI adoption expands, ethical concerns remain, including the potential for algorithmic bias, data privacy issues, and accountability in clinical settings. As Geoffrey Hinton and Yoshua Bengio, both Turing Award-winning AI researchers, have noted, “Society’s response, despite promising first steps, is incommensurate with the possibility of rapid, transformative progress that is expected by many experts” (5). Such perspectives underscore the critical need for scholarly discourse to evolve in step with technological advancements.
Legacy Peer-Review Practices and Ethical Complexities
Traditional peer review, long considered the cornerstone of scholarly rigor, was not designed to accommodate rapid technological iteration. Conventional manuscript evaluation, from submission through multiple revisions, often spans months, a timeline acceptable in slower-evolving fields but increasingly impractical for rapidly evolving biomedical AI research.
While some biomedical journals have initiated AI-supported processes, many remain cautious due to ethical complexities. Li et al. underscore a critical gap: although approximately 78% of biomedical journals provide guidance on AI use, nearly 59% explicitly prohibit AI-assisted peer review, primarily citing confidentiality concerns (96%), transparency issues (69%), and attribution challenges (62%) (6). These findings highlight the urgent need for scholarly discourse and editorial policies to evolve alongside the technology.
Reimagining Peer Review with the Help of AI
AI should play an essential supporting role in the editorial workflow. Already, natural language models can analyze manuscripts for structural clarity, methodological soundness, statistical integrity, and logical consistency. Tools that provide structured, multidimensional reviews within minutes are now commercially available, offering insights akin to those of human reviewers but without fatigue or delay (7).
Imagine the potential of an editorial workflow where the first pass of every submitted manuscript includes a structured evaluation by a language model configured to highlight strengths, surface methodological concerns, and flag missing disclosures or errata. This initial feedback, reviewed by human editors, could prioritize which manuscripts merit full peer review, streamline the revision cycle, and support junior reviewers in refining their assessments. Additionally, large-scale models could assist with reviewer matching, detect redundant publications, and even assess novelty based on existing indexed literature.
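The first-pass triage described above can be sketched in a few lines of code. This is a minimal illustration, not a production system: the `first_pass_review` stub below only checks for required sections by keyword, where a real deployment would call a language model, and all names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class FirstPassReview:
    """Structured output of an automated first-pass evaluation."""
    strengths: list = field(default_factory=list)
    missing_disclosures: list = field(default_factory=list)

# Sections an editor might require before full peer review.
REQUIRED_SECTIONS = ["methods", "limitations", "conflict of interest"]

def first_pass_review(manuscript_text: str) -> FirstPassReview:
    """Hypothetical first pass. A real system would query a language
    model here; this stub only flags missing required sections."""
    text = manuscript_text.lower()
    review = FirstPassReview()
    for section in REQUIRED_SECTIONS:
        if section in text:
            review.strengths.append(f"includes a {section} section")
        else:
            review.missing_disclosures.append(f"no {section} section found")
    return review

def triage(reviews: dict) -> list:
    """Rank manuscript IDs for human editors: fewest flags first."""
    return sorted(reviews, key=lambda m: len(reviews[m].missing_disclosures))
```

The essential design point is that the automated pass only orders the editor's queue; every manuscript still reaches a human, preserving oversight while concentrating attention where it is most needed.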
Integrating AI into Editorial Workflows: Scope and Practical Considerations
AI could responsibly enhance editorial workflows, with the applicability of these innovations varying across medical disciplines depending on the complexity of data, the availability of standardized data sets, and regulatory considerations. Disciplines heavily reliant on structured data, such as radiology, pathology, and genetics, are particularly suited for early adoption of AI-supported review because clearly defined, consistent datasets are available. Conversely, disciplines involving complex qualitative data or ethically sensitive issues, such as psychiatry or bioethics, may require more cautious, human-led approaches to integrating AI.
The study by Liang et al. highlights exciting possibilities for transforming scientific publishing through AI integration (8). Their large-scale analysis demonstrates that GPT-4 can provide manuscript feedback closely aligning with human peer reviewers, achieving overlap rates comparable to inter-reviewer agreements among humans. In a prospective evaluation conducted by the same group involving 308 researchers, 57.4% rated GPT-4-generated feedback as helpful or very helpful, and 82.4% considered it superior to feedback from at least some human reviewers. These promising findings suggest AI’s potential to significantly enhance editorial workflows, offering timely, constructive evaluations of structural clarity, methodological rigor, statistical integrity, and logical coherence. If implemented responsibly with clear guidelines and human oversight, large language models could streamline peer review, reduce reviewer burdens, and enhance efficiency in scholarly communication.
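The overlap rate reported by Liang et al. can be illustrated with a simple set-based comparison. This is a deliberate simplification of their semantic matching pipeline: it assumes each reviewer's comments have already been mapped to shared topic labels, a preprocessing step the sketch does not implement.

```python
def comment_overlap(reviewer_a: set, reviewer_b: set) -> float:
    """Fraction of reviewer A's comment topics also raised by reviewer B.
    Comments are represented as sets of topic labels, a simplification
    of the semantic matching used in the actual study."""
    if not reviewer_a:
        return 0.0
    return len(reviewer_a & reviewer_b) / len(reviewer_a)

# Hypothetical topic labels extracted from two reviews of one manuscript.
gpt4_topics = {"sample size", "missing baseline", "unclear ablation"}
human_topics = {"sample size", "missing baseline", "writing quality"}
```

Here two of GPT-4's three points are shared with the human reviewer; comparing such overlap against the overlap between two human reviewers is what lets the study claim machine-human agreement comparable to human-human agreement.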
From Static Documents to Living Knowledge
Beyond accelerating review, AI offers a path to rethinking what constitutes a scientific article. Rather than viewing publication as a static endpoint, journals might evolve into platforms for versioned, living documents. Articles could be updated as new data emerge, structured in modular formats that separate validated findings from ongoing exploration, and paired with supplementary materials such as code, datasets, and explainable model outputs. Structured summaries, powered by generative models, could ensure accessibility for diverse audiences (clinicians, policymakers, technologists), each engaging with content tailored to their roles.
This transformation would not diminish the integrity of peer review but enhance it. Connecting research to dynamic citations, version control, and community commentary could make publishing more robust and responsive.
Conclusion: Aligning Scholarly Publishing with AI Innovation
This commentary has notable limitations. First, several quantitative figures, such as journal-policy percentages and FDA device counts, are drawn from single, cross-sectional sources, so broader or more recent sampling could change these estimates. Second, the AI-publishing ecosystem is evolving so rapidly that the recommendations offered here will need regular reevaluation to remain relevant across disciplines and editorial workflows.
As AI technologies become more integral to medical practice and decision-making, scientific journals must transition from passive dissemination channels to proactive participants in facilitating innovation. Journals can enhance credibility and relevance by integrating AI responsibly, balancing efficiency with ethical diligence, and maintaining rigorous standards.
By proactively embracing AI within clearly defined, discipline-specific workflows and carefully implemented dynamic publication formats, scientific publishers can continue to fulfill their foundational mission, advancing health through trustworthy and timely knowledge dissemination. Navigating rapid technological advancements involves significant challenges and opportunities; addressing these proactively ensures peer review remains rigorous and responsive in the age of AI.
References
1. Tang X, Li X, Ding Y, Song M, Bu Y. The pace of artificial intelligence innovations: speed, talent, and trial-and-error. Journal of Informetrics. 2020;14(4):101094.
2. Andersen MZ, Fonnes S, Rosenberg J. Time from submission to publication varied widely for biomedical journals: a systematic review. Curr Med Res Opin. 2021;37(6):985-93. doi:10.1080/03007995.2021.1905622
3. FDA. Artificial intelligence and machine learning (AI/ML)-enabled medical devices. 2025. Available from: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices [Accessed 04.11.2025].
4. Muralidharan V, Adewale BA, Huang CJ, Nta MT, Ademiju PO, Pathmarajah P, et al. A scoping review of reporting gaps in FDA-approved AI medical devices. NPJ Digit Med. 2024;7(1):273.
5. Bengio Y, Hinton G, Yao A, Song D, Abbeel P, Darrell T, et al. Managing extreme AI risks amid rapid progress. Science. 2024;384(6698):842-5.
6. Li ZQ, Xu HL, Cao HJ, Liu ZL, Fei YT, Liu JP. Use of artificial intelligence in peer review among top 100 medical journals. JAMA Netw Open. 2024;7(12):e2448609.
7. Nature News. AI is transforming peer review — and many scientists are worried. 2025. Available from: https://www.nature.com/articles/d41586-025-00894-7 [Accessed 04.11.2025].
8. Liang W, Zhang Y, Cao H, Wang B, Ding DY, Yang X, et al. Can large language models provide useful feedback on research papers? A large-scale empirical analysis. NEJM AI. 2024;1(8):AIoa2400196.
© 2025 José A. Acosta. All rights reserved.