AI after Simondon: Individuation, Technicity, and Milieu

Fabio Iapaolo, Susana Aires, Ludovico Rella 
Edited by: Christine Haskell
04/21/2026 | Report-backs


What does Gilbert Simondon’s philosophy offer at a moment when AI systems are increasingly framed as “agents,” ubiquitous across infrastructures, and credited with generative capacities? We brought this question to the 10th STS Italia Conference, held at the Politecnico di Milano from 11–13 June 2025, through our panel “Simondon and AI: A Collective Individuation in the Year of His Birth Centenary.” Bringing together perspectives from STS, philosophy, media studies, geography, and technically engaged AI research, the panel approached AI less as a set of bounded systems or quasi-human actors than as a process unfolding across technical, social, and spatial milieus.

Our aim was not historical recovery but to test how Simondon’s concepts continue to work in the present, especially in relation to AI. We kept the call deliberately open in its themes, letting shared lines of inquiry surface from the submissions themselves, an approach that felt true to Simondon’s own emphasis on emergence and collective individuation. Across two sessions and eleven presentations, four themes emerged: the genealogy of Simondon’s thought; individuation and agency; technicity and ethics; and the spatiality of AI.
 
Call for Abstracts poster for the “Simondon and AI” panel at the 10th STS Italia Conference, Milan, June 2025.


Situating Simondon

Two interventions provided the key historical framing. Isabella Consolati revisited Simondon's engagement with cybernetics as the moment when technology first appeared as a “science of the social order,” and asked whether AI represents a new turning point or largely continues that cybernetic trajectory. Freya Häberlein situated Simondon within the post-war Gestalt debates over whether perception could be mechanized, reading his refusal of both cybernetic automatism and Gestalt holism as occupying a middle ground between the machine that thinks and the mind that cannot be mechanized—a position where questions about AI cognition become most interesting. Together, they offered an important note of caution: while Simondon's processual philosophy offers real traction for understanding AI, his ideas were nonetheless developed in response to specific machines and mid-twentieth-century disputes. Bringing them to contemporary AI, therefore, requires care about where the parallels hold and where they do not.

 

Individuation without Anthropomorphism

A second theme centered on individuation and agency. Participants repeatedly asked how Simondon's account of individuation—the process through which an entity comes into being in relation to a milieu—might be mobilized without sliding into claims about machine personhood. Three contributions approached this from different angles. Ludovico Rella asked what sort of “individual” an algorithm could be, and whether the current shift toward agentic systems marks, in Simondon's sense, a passage from one mode of existence to another. Matt Ratto and Sarah Gram shifted attention to the co-individuation of humans and machines, suggesting, via Karen Barad’s notion of “intra-action,” that generative AI reorganizes the relations through which agency is distributed, rather than simply confronting human agency from the outside. Francesco Bentivegna carried this relational emphasis into the sensory and aesthetic domain through synthetic voice. He proposed that synthetic voice occupies a “pre-individual” zone in Simondon’s terms—the field of potentials that precedes any formed individual—and noted, in a post-humanist vein, that performance practices are already exploring what it means to speak not only in or of the machine, but with it. Read in dialogue, these contributions framed individuation as a way to make sense of AI’s transformative effects without anthropomorphizing it, taking the redistribution of agency across human–machine milieus as the starting point of analysis.

Gilbert Simondon. ©ArchivesSimondon.


 

Technicity and AI Ethics

Another recurring concern focused on ethics, understood neither as an add-on nor as a checklist for AI, but as inseparable from technicity itself. Luuk Stellinga’s critique of Human-Centred AI showed how appeals to “the human” can reinstate an instrumental view of technology, treating AI systems as tools to be steered rather than as socio-technical assemblages that already materialize values. Drawing on decolonial studies, Diego Vicentin juxtaposed deep learning with Simondon's notion of technologie approfondie, or in-depth technology. For Vicentin, giving AI ethical depth requires reckoning with the epistemic violence embedded in these systems, or else critique risks becoming complicit in what he called an “ethics of destruction.”

From there, the discussion shifted from critique to the cultivation of what Simondon called a technical culture, understood as an active engagement with how technical systems function and what forms of life they enable or foreclose. For Tyler Reigeluth, this meant reframing our relationship to AI technologies as a move from mastery to maintenance, emphasizing collective care and upkeep rather than control. Susana Aires extended this orientation into pedagogy, asking not whether AI belongs in classrooms, but how learners can be equipped to understand its operations rather than merely use its outputs. In dialogue, these contributions challenged the assumption that AI can be governed from a position of sovereign oversight, shifting attention from control to how we live with, care for, and take responsibility for technical systems whose operations exceed individual command.

 

The Spatiality of AI: From Latent Space to City

A fourth thread asked where AI actually is, and suggested that the answer shifts with scale. Rather than treating AI as placeless computation, two papers examined how it takes shape within specific technical and material environments. Raffaele Andrea Buono traced the trajectory of a single purple pixel moving through a Variational Autoencoder's latent space of reds and blues. Its transformation exposed a fundamental tension between human perception, which amplifies difference to construct meaning, and machine learning models, which tend to flatten and enclose it. This close technical reading echoed Simondon's own method, illustrating a concrete instance of what he meant by “transduction,” while raising the important question of whether his vocabulary actually maps onto machine learning operations. At another scale, Fabio Iapaolo offered a geographical reading of Simondon through the case of self-driving cars. Developing the concept of “techni(city),” he showed how autonomous vehicles do not simply operate within cities but reorganize urban space by redistributing perception, decision-making, and responsibility across sensors, infrastructures, regulatory frameworks, and road users. The city, in this account, is not the backdrop for intelligent systems but an active milieu that conditions action and subjectivity. Read together, these contributions suggested that AI's spatiality cannot be reduced to model space or urban infrastructure alone, but unfolds across layered and interdependent milieus.

 

Conclusion

The panel’s discussions underscored Simondon’s continued relevance to scholars inside and beyond STS. As algorithmic systems and “agents” proliferate and become increasingly infrastructural, his vocabulary continues to provide a way to ask sharper questions about how technical systems take shape, how they couple with socio-spatial milieus, and what consequences this has for collective life. The panel also brought out what perhaps remains most generative in Simondon’s broader orientation: his insistence on technical literacy for social scientists and his wariness of “facile humanism” in technological critique still carry the same force today as when he first articulated them. At the same time, as many papers made clear, drawing on Simondon does not mean applying his concepts wholesale, but testing, extending, and, where needed, revising them in light of what contemporary AI brings into view. We hope the panel marks a step towards a “Simondonian STS” community in the making: not a unified agenda but a shared effort to develop and adjust concepts and methods adequate to what today’s forms of technicity demand of us.

 


Fabio Iapaolo is an urban geographer and digital media scholar whose work brings together spatial, political, and computer science perspectives to examine how AI and machine learning transform cities and society. He is currently a Postdoctoral Fellow and Adjunct Professor in the Department of Design at the Polytechnic University of Milan, and a Visiting Fellow at the Centre for AI, Culture, and Society at Oxford Brookes University. He also serves as an Associate Editor of AI & Society.

Susana Aires is a researcher in the Global Governance, Regulation, Innovation and Digital Economy (GRID) unit at CEPS (Centre for European Policy Studies) in Brussels, where she works on AI and technology policy. She holds a PhD in Digital Humanities from King’s College London, with research at the intersection of AI, explainability in complex models, and the philosophy of technology. Previously, she worked on EU research, innovation, and education policy in both the private sector and the European Parliament.

Ludovico Rella is a Research Associate at Durham University on the ERC Algorithmic Societies project. His research examines AI’s infrastructural materiality and blockchain technologies. He has published in Political Geography, Social Studies of Science, and Big Data & Society, and has a monograph forthcoming with Manchester University Press, based on his PhD dissertation, which received the Financial Geographies Doctoral Dissertation Prize in 2021. He is an editor for Digital Geography and Society and Backchannels (the 4S blog), and chairs Durham’s Digital Geographies Thematic Group.

 


