Alastair Iles, UC Berkeley; Patrick Baur, University of Rhode Island
Techno-optimistic accounts of automation have become ubiquitous in nearly every domain of public and private life. Despite occasional concerns raised over the consequences of adoption, the default position in public discourse seems to be passive acceptance that automation will inevitably drive progress: impartiality, speed, efficiency, and uniformity -- properties that bureaucracies, investors, industrial supply chains, and producers all treasure. Automation is said to remove human error, decision-making bias, unruly workers, and other undesirable things. These projections and promises bear the hallmarks of 'techno-chauvinism', which Meredith Broussard (2018) describes as 'the assumption that computers are superior to people, or that a technological solution is superior to any other'. By dispensing with human decision-makers, inequitable (re)distributions of power, risk, and benefit can go unseen, and human decision-makers can be freed from accountability. Without active, ongoing scrutiny, automated systems can generate social, health, and environmental problems that go unrecognized as users (e.g., farmers, bureaucrats, managers, journalists, researchers) put their trust in those systems.

We welcome papers in the following areas:

- The immaculate conception of automation (cf. Bronson 2022). How does automation remain largely unchallenged while becoming ever more widely used and tolerated? Why and how does automation come to trump situated human expertise and judgment?
- Labor. How are knowledge, expertise, skill, and agency eroded or redefined in the pursuit of automation? How can labor be made visible again?
- Governance. How can automation efforts be tempered and meaningfully governed to prioritize human dignity? What institutions or principles should guide the design, implementation, and use of automation?
- Resistance. How can resistance movements push back against automation?