Dawn Nafus
17/03/2026 | Report Back
Our field is charged with the close examination of science and technology in the broadest possible sense. We are not easily swayed by hype, or known for indulging in teleological progress myths. Yet we have somehow managed to devote approximately a quarter of our collective research effort to a phenomenon that, for all its material reality, contains many aspects that have been credibly framed as a con. Cons, traps, and talismans can certainly hold scholarly interest, but when they become lazy excuses for brute-force resource grabs, neither they nor the crude exercise of power axiomatically requires analysis.2 They demand resistance and action, yes, but that is not the same as posing multitudes of compelling research questions worth a quarter of our collective research labor. It is remarkable that so many of us choose to stay with this trouble.
The objects we study shape how we know the world, and what we know about it. If, as the famous phrase goes, "it matters what matters we use to think other matters with,"3 then the preoccupation with this particular matter demands our curiosity. With a quarter of us doing research as if AI does matter, STS itself cannot be unaffected. What are we holding onto when we hold onto "AI," even at a critical distance?
My own fraught relationship with AI puts me in an unusual position to open up this conversation. Until recently, I worked not at a university but in a tech industry AI lab, where, by definition, there was no issue worth researching that did not involve machine learning. In that role, I directly experienced how AI technologies--the objects and the baggage they come with--enroll people into a particular way of seeing the world. From computer vision to multimodal foundation models and agents, extended wrestling with AI technologies inevitably becomes a lens for knowing, even in the very act of resisting their worst tendencies, even when you "know better," and even when you actively try to unlearn that lens.
It is with some hesitance that I talk about AI as if it had coherence. Lucy Suchman recently warned against treating AI as one single thing, or even a thing at all. There is a difference between actually existing computational systems marketed as "AI" and assuming there exists "a thing" that could credibly be called artificial intelligence. My counting of 4S papers risks this false equivalence, but I proceeded anyway because those 329-ish papers intertwine in ways that do carve patterns in scholarly attention. Even if we treat AI as a floating signifier, its constant use nevertheless ties together narratives, arguably at the expense of other intellectual pathways. At 4S there were clear themes of state and corporate surveillance, the Californian Ideology, AI applications within the sciences, automation, and the material extractions of AI infrastructure. These are all serious matters, but our knowledge of them benefits from long-standing bodies of literature, and I left the meeting wondering what they had crowded out.
We have doubtless had waves of interest in technical areas that ebb and flow over time: nanotechnology and nuclear energy come to mind as examples, though I have not traced these flows with any rigor. Still, in the last two decades I cannot recall one that became this widespread, which is why I did the count. What I can say is that, like every other site of knowledge production, STS makes social choices about what is worth knowing, and these stabilize and evolve over time. Matters of concern propagate through pre-existing social relations--through students and their advisors, departments, conferences, and so on. A collective direction is clearly afoot.
National funding priorities and their politics also play a role. Governments, foundations, and private sector funders have steered policy activity and resources towards all manner of inquiry about AI. Large tech companies often dominate the imaginations of sometimes credulous, sometimes captured institutional actors. Many scholars cannot afford to do without funding ring-fenced for AI. Universities are also centering AI by uncritically championing faculty and student adoption without regard for their own employees' expertise. To the extent there is a con happening, it seems to be working. Between public policy, media hype, and institutional adoption choices, there are now vast suites of AI controversies for STS scholars to map. We might attend to them out of anger, fascination, necessity, or all three.
With media, public policy, funding bodies, and university employers collectively framing AI as mattering a priori (with "for what" a secondary concern), an institutional isomorphism and cultural homogeneity form and stabilize. That isomorphism yields opportunities for public scholarship, but at a cost. When funding bodies see fit to determine which technologies are matters of public interest, other potential research areas have already lost. Research policy "influenc[es] not only what technologies are built but which questions can be asked about their consequences." Even funding schemes that invite the social sciences to sand down the worst edges of problematic technologies leave little room to direct resources towards more beneficial enterprises. In turn, many of us do not pursue those directions either. That we respond to these dictates with so much of our labor is yet another sign of our precarity. It would not go too far to interpret the proliferation of faculty job advertisements for AI specialists not as opportunity but as subordination, where we must smuggle more worthy interests into the empty signifier of AI as best we can. I myself have finely honed the art.
I did not leave 4S Seattle thinking that our community is doing a poor job of critically analyzing AI. There was no shortage of papers that studiously avoided Suchman's false "thingness," or "criti-hype," the parasitic form of critique that gains its power by inflating the capacities of that which it critiques. The issue is not the sophistication of the critique, but its cost. Lots of STS goes undone as a result. What conceptual frameworks are not being built? What patterns of critical thinking are being hardened and stabilized by returning again and again to the study of prediction machines? Perhaps this 24% can be compared to a plant growing in a pot. There comes a point when the pot--the direction-limiting homogeneity--forces densely entangled roots to wrap around themselves. Unable to send new shoots outside the pot, they curve into eventual self-strangulation.
It is one thing to value technodiversity and designs for the pluriverse; it is quite another to actually work towards them under intense political-economic pressures towards homogeneity coming from all sides. Even research grounded in critical refusal of harmful data practices, beautifully exemplified by the Feminist Manifest-No, is not fully liberated from that which it refuses. Nor should it be: it is a reaction against, a response to. Imagine what STS scholarship might be like if we were freed from the need to refuse at all. Would computational systems be so interesting at that point? What about them would remain interesting? What else would capture our attention instead--and shouldn't some of us start to turn our efforts to those things now, before the bursting of the AI investment bubble erodes the edifice of consensus that AI is self-evidently interesting?
One cost of devoting this much collective energy to AI is that it incentivizes research in what can be called the "muddled middle." Here, I see a parallel with user experience research (UX). UX--once a major non-academic career path for social scientists--is often a compromise between what researchers deem interesting and what employers deem useful. This compromise produced a glut of UX research that is neither detailed and focused enough to be of instrumental use nor conceptually robust enough to think deeply with.
STS might be forming a similar middle, where our political motivations and scholarly ambitions do not always line up in the way that we hope. At one end of the spectrum, there is no shortage of conceptually ambitious work on AI, where the significance goes beyond the case at hand. At the other end, some STS scholars make a direct, material difference in political arenas where the stakes are immediate. Alondra Nelson, who led the Biden administration's Office of Science and Technology Policy in the US, is one of STS's most powerful examples. These ends of the scale are playing entirely different games, though. In the middle, conflicting incentives and desires swirl within a cultural web that puts AI at the center. It then becomes too easy to take AI's importance as read, and to neither move the political needle nor bring readers to a fresh point of view while hoping to do both.
As a counterexample, in The Body Multiple, Annemarie Mol could not assume widespread interest in atherosclerosis: there was no STS genre built around the topic. Instead, she invited her readers to see what atherosclerosis was a case of. In this particular moment in AI, one does not need to show why the object of interest justifies researcher attention, because the social world is now built to make its significance appear self-evident. Important authorial work--both justification and historicization--gets cut short as a result. This short-cutting also means we have already lost the first site of struggle: we are limited in how much we can question whether it matters at all. We are not in a position to conclude that it matters, when and if that is the case.
Moving out from the muddled middle, and exploring both edges, can help clarify whether, and why, a particular object is worthy of its researcher. When it is not, it might still be worth our time as vocal adversaries. When we ask, "does this need my analysis or my advocacy?" we avoid unwittingly channeling others' agendas. We honor our own expertise when we question what constitutes a deserving matter of STS attention. While few of us can avoid the traps of funding requirements, some researchers do have the capacity to cultivate stronger technodiversity, and to sit with curiosities that feel somehow peripheral or irrelevant or otherwise not in keeping with one's professional self-narrative. Scholars who live in democracies--not just as individuals but as collectivities and associations--could also use the approaching burst of the AI bubble as a frame to draw policymakers and grantmakers towards what is worth knowing more about, and not just what is worth resisting.
Dawn Nafus is affiliate faculty in the Department of Anthropology at Oregon State University. Before that, she was Principal Engineer and Research Manager at Intel, where she led a sociotechnical research lab and a responsible AI governance program. She can be found on Bluesky, LinkedIn, and Medium.
The author would like to thank Laura Watts, Gwen Ottinger, Shaozeng Zhang, and Ludovico Rella for vital input. The title is partially derived from a personal conversation with David Widder. Any objectionable statements are squarely my own.
Published: 03/30/2026