“At the same time that [the Big Tech industrialists] are trying to sweetheart us into thinking that AI tools and products are good for us, they’re also undermining [the] expertise that we all really value within our careers and industries.”
– Lisa Messeri
Description
As AI is integrated into scientific practice, the practice of science itself is changing. AI models that summarize, categorize, simulate, and predict not only stand to accelerate scientific research; they now sit inside these practices, alternately enhancing and eroding craft, shifting how questions are posed, what counts as evidence, and how tacit judgment is taught and exercised, and reshaping trust in results.
In a conversation moderated by AI on the Ground Program Director Ranjit Singh, Kristin M. Branson, senior group leader at the Howard Hughes Medical Institute’s Janelia Research Campus; Lisa Messeri, associate professor of sociocultural anthropology at Yale University; and Nicole C. Nelson, associate professor in the Department of Medical History and Bioethics at the University of Wisconsin–Madison discussed the impact of machine learning tools on the nature of proof, inference, uncertainty, and error in scientific workflows today.
Speakers
Dr. Kristin M. Branson is a senior group leader at the Howard Hughes Medical Institute’s (HHMI) Janelia Research Campus in Ashburn, Virginia.
Dr. Lisa Messeri is an associate professor of sociocultural anthropology at Yale University.
Dr. Nicole C. Nelson is an associate professor in the Department of Medical History and Bioethics at the University of Wisconsin–Madison.
Moderator
Dr. Ranjit Singh is the program director of the AI on the Ground program at Data & Society.
Resources
References
- Porter, Theodore M. Trust in Numbers: The Pursuit of Objectivity in Science and Public Life, Princeton University Press, 2020.
- Ulpts, Sven, Sheena Fee Bartscherer, Bart Penders, and Nicole Nelson. “Epistemic Oligarchies: Capture and Concentration Through Science Reform,” Zenodo, September 22, 2025.
- Dzieza, Josh. “The Laid-off Scientists and Lawyers Training AI to Steal Their Careers,” New York Magazine, 2026.
Readings
- Kapoor, Sayash, and Arvind Narayanan. “Leakage and the Reproducibility Crisis in Machine-Learning-Based Science,” Patterns 4, no. 9 (2023): 100804.
- MacKenzie, Donald A. Mechanizing Proof: Computing, Risk, and Trust, MIT Press, 2001.
- Messeri, Lisa, and M. J. Crockett. “Artificial Intelligence and Illusions of Understanding in Scientific Research,” Nature 627, no. 8002 (2024): 49–58.
- Singh, Ranjit. “AI in Science: A Series Exploring How AI Systems Are Being Taken up in Scientific Practice,” Points, Data & Society.
- Stevens, Hallam. Life Out of Sequence: A Data-Driven History of Bioinformatics, University of Chicago Press, 2013.
- Collins, Harry. “Interactional Expertise as a Third Kind of Knowledge,” Phenomenology and the Cognitive Sciences 3 (2004): 125–143.
Credits
Production: Rigoberto Lara Guzmán
Post-Production: Tunika Onnekikami
Design: Surbhi Chawla
Editorial: Eryn Loeb
