Kaya Akyüz, University of Vienna; Mónica Cano Abadía, BBMRI-ERIC; Melanie Goisauf, BBMRI-ERIC; Michaela Th. Mayrhofer, BBMRI-ERIC
At a time of increased attention to artificial intelligence (AI) - especially amid the hype around AI-based tools such as ChatGPT or apps that use facial expression recognition to assess emotion or disease - we invite a critical re-thinking of imaginaries of trust and sharing. Concepts such as trustworthiness and explainability are often brought up as normative qualities to be inscribed into AI technologies. Furthermore, AI raises a wide range of issues, from justice to privacy, amid conflicting imaginaries of dystopia and hyped trust, techno-optimism and techno-solutionism. Scholars like McQuillan observe colonialism in AI, rooted in its intellectual background and its practices, while, as Adams notes, a decolonial approach rejects any form of 'dividing practice'. This includes subverting established (racialized, gendered, etc.) taxonomies and considering the relationship between AI-related practices and power relations, extractivism, and labor relations. Rationalistic, individualistic, Western, and colonial intellectual frameworks inform the imaginaries of doom and/or trust, often under the guise of scientific progress. In this panel we invite empirical research and theoretical/critical discussions on trust in data infrastructures and AI technologies and 'how it could be otherwise'. As in the shift from the public understanding of science, through critical discussions of the deficit model, towards more engagement-focused, bottom-up approaches to the (techno)science-society relationship, we believe STS has much to offer to these discussions.