This is the second of two parts summarizing the 16th NAS Journal Summit. Part 1 addressed research integrity, stakeholder collaboration, and Open Access challenges. Part 2 explores the transformative role of artificial intelligence (AI) in scientific discovery and scholarly publishing, building on themes of trust, transparency, and evolving systems introduced on Day 1. The summit was conducted under the Chatham House Rule, so names and affiliations have been intentionally omitted.
AI’s Disruption of Science and Research Practices
Day 2 opened with an exploration of AI’s rapidly growing influence on the scientific process. AI is already transforming laboratory research, including automating nanosystem design and DNA origami, where generative models create intricate nano-structures beyond human design capacity. By scripting experiments, managing full provenance, and enabling large-scale production of materials like DNA armor for medical use, AI is pushing science into new realms of complexity and speed.
In biomedical sciences, AI is generating hypotheses and performing administrative tasks like summarizing manuscripts or ambient notetaking. Yet, speakers warned that AI does not replace the need for empirical experiments and validation. Datasets risk being strip-mined by AI systems if incentives to generate new data remain weak. Tools capable of detecting spin in clinical literature were flagged as promising, but the same AI technologies could fabricate fraudulent science that is difficult to detect.
Concerns over explainability and AI’s tendency to hallucinate facts were raised, reinforcing the need for human oversight. AI’s ability to create plausible but false information could blur the line between fraud and novel discovery. Ensuring reproducibility and provenance was a consistent theme—participants stressed linking research outputs to clear processes and protocols, though the volume of AI-generated data may overwhelm traditional publishing formats.
The Impact of AI on Journals, Peer Review, and Provenance
The final session examined AI’s impact on scholarly publishing, from authorship to peer review. Only about 2% of authors currently disclose AI use, though some fully AI-generated papers have already appeared. Journals are scrambling to create guidelines that mandate human accountability and transparency around AI involvement.
While AI could help ease peer review bottlenecks—matching reviewers, detecting fraud, or assisting with statistical analysis—there are risks of amplifying biases and overwhelming editorial systems. As AI accelerates discovery, concerns were raised that peer review capacity may not keep pace. Furthermore, AI tools may begin prioritizing content based on optimization algorithms, leading to potential new forms of “AI agent optimization,” similar to search engine optimization (SEO).
Panelists debated whether journals should focus on certifying individual articles or the knowledge itself, especially as AI models consume content faster than humans. The energy demands of AI tools also risk creating inequities between well-resourced and under-resourced researchers and institutions.
Rethinking Integrity, Trust, and the Role of Human Oversight
The discussions consistently returned to themes of trust, integrity, and the role of humans in an increasingly AI-mediated research landscape. Preserving provenance, improving transparency, and ensuring equity will be central challenges. Participants stressed that AI should augment human judgment, not replace it. Similarly, integrity cannot be automated—it requires cultural, institutional, and systemic commitments.
Summit Conclusion
The summit concluded with a strong consensus: trust in science is not automatic but must be continuously earned. Addressing research integrity challenges, navigating Open Access complexities, and integrating AI responsibly will require collaboration, new frameworks, and a rethinking of incentives across the research ecosystem. By focusing on transparency, accountability, and impact, the scholarly community can adapt to disruption while reinforcing the credibility of the scientific record.
Read the previous part