As scientists begin to use AI systems as proxies for graduate students, the future of scientific research itself is at stake.
April 29, 2026
Over the last few months, the comparison between AI systems and graduate students has emerged again and again in the conversations I’ve been having about the impact of AI on science. Sometimes it is offered casually, almost admiringly: I’ve been told that using AI feels like working with “a very good graduate student,” except the responses come back in minutes instead of a week. One of my interlocutors cautions that in using AI systems as “proxy graduate students” — cheaper, and fast enough for many routine tasks such as brainstorming ideas and scoping the state of research on a new topic of interest — we may be putting the training pipeline of academic fields at risk. This could be “the last generation of graduate students as we knew them,” he warns.
This comparison is worth taking seriously, though I am wary of framing it simply in terms of machines replacing people. The anxieties it carries speak to a real and consequential shift in how academic work is being reorganized, specifically around the role of graduate students in the sciences. A PhD program is one of the main ways an academic discipline reproduces itself, by turning students into practitioners. Graduate students are trained through apprenticeship. They do work, but they are also meant to learn how to inhabit a field: how to recognize a good question, how to choose the right methods for pursuing it, how to judge when a result needs more work before it can be published, and how to live inside the slower, often frustrating process by which scholarly judgment takes shape.
The shift in the organization of academic work is not only visible in the everyday work of laboratories; it is also beginning to reshape the conditions under which students enter academic life. Students in some disciplines are encountering a more restricted PhD landscape, where admissions have been reduced or paused because of funding uncertainty. Others are entering programs where AI use is becoming part of the ordinary infrastructure of research: literature search, drafting, summarizing, revising, and presentation work. Graduate students are using AI tools for precisely this range of tasks, while also worrying that these tools could erode the skills a doctorate is meant to build. The result is a change in what it means to be prepared for graduate school. For some, this might open access to forms of support that were previously unevenly distributed. For others, it shifts the burden of apprenticeship, asking students to become competent users and evaluators of AI tools before they have fully learned the disciplinary judgment those tools are supposed to assist.
There are scientists who now use AI systems the way they would work with a student or junior collaborator: bouncing ideas off them, asking them to attempt a proof, refine a formulation, or explore a direction that may or may not pan out. Using AI in this way changes the tempo of research. What used to take a graduate student a week or a month can now come back in minutes, lowering the threshold for trying things out and compressing the rhythm of exploratory work. Speed also changes who gets to participate in the making of a problem. When a faculty member turns to an AI system to test an idea rather than handing it to a student to work through, a critical moment in which students learn problem formulation begins to disappear. What is lost is one of the ordinary settings in which disciplinary judgment is formed.
There is already a wider technical world built around this kind of acceleration. Anthropic’s own account of Claude Code presents it as more than a niche coding assistant; it is now woven into everyday work across the company, used not only by engineers but by teams doing analysis, design, security, product development, and even legal coordination. Public discussion around that internal uptake has gone even further; Anthropic’s co-founder and head of policy Jack Clark recently claimed that “Some of our systems like Claude Code are almost entirely written by Claude.” In this world, “graduate students” are often invoked as a kind of shorthand — a way of describing expertise as something that can be built into technical systems. The comparison becomes more troubling, though, when you follow it one step further into the world of academia. What appears in the tech industry as evidence of organizational efficiency becomes, in university settings, a question about apprenticeship and the reproduction of a field.
Time is one of the most crucial conditions for building expertise. Expertise emerges through sustained engagement with the hardest problems in a field, through years of learning what matters and how to communicate it to a community of peers. This process now sits uneasily within universities and research labs where incentives increasingly favor more output. The strain is especially visible in discussions of computer science review culture, where peer review has become noisy enough that quantity begins to look like a rational strategy: submit more papers, increase your odds, flood the system a little further, and make careful reading harder for everyone. Under these conditions, it becomes harder to tell what counts as a real contribution, even as the practices through which judgment is formed are steadily weakened.
This does not mean there is no meaningful or valuable use of AI in research. Researchers already use these tools heavily, especially for search and editing; non-native English speakers, in particular, often experience them as helpful in navigating a research culture where fluency in the presentation of findings matters. But what happens when institutions stop valuing the cumulative work of forming vocation and judgment — the kind of work that takes time?
Graduate students have never only been workers. Their role in academic programs has also been one of the ways fields induct new people into their practical and interpretive culture. As AI systems take on more of the visible intellectual labor of research, they begin to reconfigure students’ relations to the objects they study, and to the sensibilities through which they come to inhabit a discipline.
I have seen this tension in my ongoing conversation with a graduate student in ecology, whose dissertation centers on studying the behavior of fish groups around coral reefs by tracking their movement using computer vision. Because computer vision models still perform unevenly on underwater footage, her project keeps drifting from the ecological question of what can be inferred about collective behavior toward the narrower technical problem of how to make the model work better. The pressure this creates raises a consequential question for her research: is the dissertation becoming a study of fish behavior, or a project in improving computer vision for underwater imagery? There have long been doctoral projects organized around making a tool or an instrument work before it became part of ordinary practice in a discipline. What her case suggests, though, is that tool-building can begin to crowd out the work of learning how to inhabit a field as a vocation. As she said to me in one moment of frustration, “I am an ecologist and I want to be an ecologist.”
Within university settings, the comparison between graduate students and AI systems is ultimately a debate over the purpose of graduate education, which has always contained a tension between disciplinary formation and labor. Students are apprenticed into a field, but they also teach and keep research projects moving. AI systems can take over many of the visible tasks through which graduate work has long been justified, while leaving unresolved the apprenticeship by which students acquire disciplinary feel and develop responsibility for the knowledge they produce. If student labor is no longer required to move research forward, involving students can begin to look less like the ordinary way scholarship gets done and more like an act of mentorship that has to be consciously chosen despite the slower pace it imposes. This changes the relationship between research and training, separating the production of research outputs from the intellectual formation of judgment. What is at stake is the future of scientific research itself: whether it can gain speed through automation without losing the social and intellectual conditions that allow it to endure.