The politics of curriculum: Inflecting knowledge in computer science education

Benedetta Catanzariti
01/23/2024 | Reflections


AI and data ethics are burgeoning fields of inquiry. But despite growing public awareness of the social, political, and ethical implications of data and algorithms, the role that education plays in building just and responsible computing cultures is often overlooked within critical scholarship. In this post, I reflect on my experience as an STS scholar embedded in a core undergraduate computer science program at the University of Edinburgh. Between 2020 and 2021, I joined the teaching team of a project-based course aimed at equipping students with practical experience of building large-scale systems that incorporate human-robot interaction while working collaboratively as members of a team. As a social scientist researching the social and political dimensions of AI and automation, I collaborated with instructors to evaluate student demos and encourage critical reflection on the potential harms or unintended impacts of the students' ongoing projects.


In recent years, critical scholars have repeatedly presented evidence of the societal harms, particularly racial discrimination, propagated by AI and data-driven systems (Benjamin 2019; Eubanks 2018; Abdurahman 2022). Importantly, efforts in this area point to the political role that AI and data practices play in automating bias and harm (Costanza-Chock 2018; Buolamwini and Gebru 2018; Hutchinson et al. 2021; Birhane 2021). However, mainstream approaches to ethics and responsible innovation within computer science training tend to treat ethics as a siloed discipline that practitioners apply to their own practice, often without deeper conceptual engagement with different skills and epistemologies. This can lead to what Raji et al. call “the myth of the ethical unicorn” (Raji, Scheuerman, and Amironesei 2021): the assumption that computer science practitioners can acquire ‘ethical expertise’ by simply enrolling in a few data ethics modules. Technical training, with its emphasis on abstraction and formalism, contributes to this sense of exceptionalism by building ideological barriers between computer science practice and social responsibility (Malazita and Resetar 2019; Agre 1992). Finally, there are concerns about the broader political economy of technology development (and, particularly, of AI and machine learning), in which material and conceptual resources, along with the ethics agendas that govern them, are concentrated in the hands of a few large corporations (Whittaker 2021; Cath and Keyes 2022).


Conscious of these challenges, I aimed through my involvement in the course to translate ethical and societal concerns into the practical contexts of early technology design and development while, at the same time, probing students’ (as well as my own) assumptions about computer science practice and knowledge.


Curriculum as infrastructure


Academic curricula carry political weight. As STS scholar Gary Lee Downey notes, normative choices about what counts as ‘core’ and what counts as ‘elective’ knowledge within degree pathways can stabilise and reinforce students’ assumptions about engineering knowledge and practice, as well as their ethical bearings. Here, ‘basic science’ is often prioritized over ‘social’ knowledge. By passing through these “curricular infrastructures” (Downey 2021), students are implicitly expected to accept this hierarchy of values as given.


In this sense, the curricular infrastructure of the course posed two main challenges, in that it prioritized (1) technical training and (2) project marketability. Technical training was elevated through the course’s learning outcomes, one of which was the ability to design a complex system capable of solving a ‘practical and useful problem’. While the definition of what constitutes a ‘practical’ or ‘useful’ problem was not subject to discussion, students were offered examples from the domain of ‘assistive technologies’, described here as systems capable of performing an autonomous task ‘in the real world.’ Such examples included robotic assistance for the visually impaired, smart appliances, autonomous cleaning devices, and robotic chess opponents. Based on this guidance, students were encouraged to prioritize the implementation of complex features over more mundane technical choices. Complexity, however, often came at a cost: it resulted in features that were not necessarily useful to the end user and that often required the collection of more data (for example, the use of camera sensors for navigation or recognition purposes over less intrusive methods). While this might reflect real-world expectations placed on engineers by managers and investors, it can also normalize the pervasive culture of surveillance that underlies most data-driven practices and artifacts.


Moreover, the course placed great emphasis on project marketability, which risks promoting a unidimensional view of technology as a viable business case rather than a social practice. This focus on profit often reinforced – inadvertently or not – students’ assumptions that marketable projects should prioritize cost-efficiency over quality of service, care, and environmental sustainability. Similarly, discussions of technology trade-offs were often absent from the claims made during student demonstrations. In real-world scenarios, projects that truly improve the quality of users’ lives, or the quality of the service offered, rarely prioritize cutting costs or saving time (and vice versa).


This was a crucial moment in student training, where students’ assumptions and expectations about the computing profession were first constructed or reinforced. I hoped to counter these assumptions by offering a view of technology design and development as a socio-technical practice, rather than a mere technical skill, encouraging students to contextualize their contributions within larger socio-historical structures. However, this re-configuration of computer science’s core epistemology (or what Downey would term the “inflection” of “dominant images of engineering knowledge” (Downey 2021, 220–21)) requires a restructuring of wider institutional and political arrangements that tend to disincentivize cross-boundary collaboration while maintaining demarcations between technical and social knowledge. For example, the funding structures that tend to support computer science departments (often industry or military grants) do not prioritize socio-technical perspectives on technology development, nor do they promote interdisciplinary, research-led teaching interventions in computer science education. Here, STS analyses that critically examine the hierarchies of values supporting these projects might offer insights into the political and economic structures that shape knowledge formation within disciplines and institutions.


Charting new paths


STS scholarship has long identified teaching and learning spaces as sites of knowledge production. However, critical interventions in science and engineering fields remain difficult to implement, posing challenges to scholars sensitive to the risks of being instrumentalized to benefit corporate capital or state power (think, for example, of the phenomena of ethics- or green-washing within both industry and academic contexts). In response to these concerns, some have re-framed their involvement in engineering pedagogy and practice as “critical participation” (York 2018; Downey 2021): a form of participation aimed at opening up spaces for the reflexive production of socio-technical knowledge. However, alternative paths remain relatively untraveled. How can we mobilize and translate STS knowledge to counteract technological determinism and accelerationism? How can we bring more contingency and uncertainty to the curricular infrastructure that supports computer science education and create, for instance, opportunities for students and practitioners to resist and refuse participation in harmful data practices? And what institutional or societal constraints would shape these efforts? Similarly, when discussing ethics in the computer science classroom, how can we de-center notions of individual responsibility, ethics, and leadership and instead re-center the local and global labor arrangements and other collective power structures that underpin technology development and its governance? Little to no space is typically given within computer science curricula to the role of worker mobilization and its potential to resist or subvert hegemonic data practices.


In attending to these overlooked spaces of knowledge (and power) formation within computer science curricula, we can hopefully resist harmful data practices while exploring possibilities for more sustainable and just approaches to complex social and political problems. Approaching graduation, computer science students are often promised some of the highest-earning careers in the country, working for organisations across the technology, finance, healthcare, and public sectors. There, a multitude of social, political, and economic structures might shape their professional experience and further isolate their practice from questions of social and ethical responsibility. Elevating the political significance of computing knowledge and practice early in computer science education, then, becomes a timely and much-needed intervention.

Dr Benedetta Catanzariti is a British Academy Postdoctoral Fellow in Science, Technology and Innovation Studies at the University of Edinburgh. Her work explores the social, historical, and political dimensions of data-driven technologies, with a focus on machine learning and its related data practices. She is also a core member of the Edinburgh-based network AI Ethics & Society, and a postdoctoral affiliate of the Centre for Technomoral Futures at the Edinburgh Futures Institute.



