The dominant governance discussions around autonomous systems and artificial intelligence flow from an assumption of autonomy: What happens when AI takes over our jobs? How should a self-driving car decide whom to kill in the event of a crash? What are the existential risks of superintelligence? Tech billionaires like discussing these questions precisely because they sidestep the real politics of AI.
STS suggests a very different set of questions, shifting attention to the attachments of innovation rather than its autonomy: Given that AI will not be as independent as its proponents suggest, under what conditions could it benefit particular groups? How might the world need to change to accommodate robots? Who will pay for the infrastructure that supports AI? What role remains for public resistance and reconfiguration?