Advances in natural language processing have led to the development of 'large language models' (LLMs) capable of generating text through predictive algorithms, 'writing' convincing essays, computer code, and even poetry. Their performance has prompted claims that they can replace human authorship, and even that AIs have become 'sentient' beings. However, a range of risks has been identified: that they contribute to environmental degradation, that synthetic text reinscribes discriminatory language and viewpoints, that they encourage plagiarism, and, more fundamentally, that they threaten the concept of human authorship itself. This development has profound implications for society, education, knowledge, and our relationship to texts, learning, and authority, and it raises questions about human co-existence with these technologies as their capacities increase. This proposal is for a panel to consider this 'more-than-human' form of authorship, providing insights into the manifold implications of LLMs. The panel will generate theoretical and practical insights, setting the agenda for this emergent field of enquiry in STS, interrogating the following themes and more:
What sociotechnical imaginaries are expressed regarding LLMs in society?
How do LLMs reinscribe or amplify social and epistemic injustices?
What are the risks to 'sea, sky and land' brought about by LLMs?
How are LLMs changing texts and the ways we generate, interpret, and use them?
What are the effects in terms of knowledge practices in education and beyond?
What are the ontological implications for human authorship and texts themselves?
What are the future research agendas for STS in terms of theory and methodology?