Amok & Coding Back: A Decolonial Epistemic Framework

Eirliani Abdul Rahman
12/11/2023 | Reflections
 
I was one of the three women who resigned from Twitter’s Trust and Safety Council in December 2022, speaking out against the meteoric rise in hate speech since Elon Musk’s purchase of the platform. In response, Musk dissolved the Council entirely four days later. In the wake of recent developments in artificial intelligence (AI), I write to underline the danger of AI being a vector of neocolonial harm in its deployment, diffusion, and adoption, as technology companies prioritize profits over safety.
 
I propose building on social critic bell hooks’ work with the concept of “coding back”. In “Talking Back: Thinking Feminist, Thinking Black”, hooks described “talking back” as an act of resistance that challenges the politics of domination that “would render us nameless and voiceless”. Coding back challenges the asymmetrical power dynamics embedded within the technology companies and institutions that develop artificial intelligence. As I will show, it is a decolonial epistemic approach in that it centers Indigenous mental models and ways of being that break away from narrow neocolonial knowledge production. In short, it offers a set of tools to fight back against the colonial erasure of the formerly colonized.
 
In the United States (US), a 2017 Stack Overflow survey of 11,445 developers found that 85.5% were men, a majority of whom were white. According to the US Bureau of Labor Statistics, in 2015 women held 25% of computing-related occupations in the US; of these roles, only 5% were held by Asian women, 3% by Black or African American women, and 1% by Latina or Hispanic women. Until we integrate these historically marginalized developers, social media platforms and tech companies will continue to design for a supposedly homogenous user base, failing to account for systems of social oppression and their historical impacts. At the same time, as Mahlet Zimeta has pointed out, the factors that enabled colonization, such as predatory business models and the violent imposition of hierarchies, remain at play today, and until we reckon with them they will continue to shape how AI is developed and deployed, making AI a vector for colonial harm. How then do we resist AI colonialism?
 
 
Refusal and Resistance
 
Eve Tuck and Wayne Yang have described practices of refusal as particular, each situated within a diversity of decolonizing frameworks. Refusal is not simply a categorical ‘no’ to the adoption of AI but a redirection toward ideas and values otherwise unacknowledged and uninterrogated. Coding back allows the option of talking back, of resisting by engaging. It also entails including commonly excluded cultural values and activities to catalyze conversations around the ethics of computing. While no single approach to decolonization exists, an AI design process centered on the decolonizing politics of refusal holds the potential to challenge the Western techno-utilitarian lens and its neocolonial knowledge production. I argue that we, Indigenous peoples and those who have historically been colonized and marginalized, need to engage meaningfully with technology to make our voices heard. Let me illustrate how we can do so through the example of amok.
 
The Malay word amok is used colloquially in English in the phrase “running amok,” referring to someone who is out of control. In my mother tongue, amok usually takes the verb form mengamuk, meaning to avenge those who have been wronged; there is honor and courage imbued in such an act. In classical Malay texts such as Hikayat Hang Tuah (Tales of Hang Tuah) and Sejarah Melayu (The Malay Annals), amok is portrayed as an intentional act to defend one’s honor and as a battle tactic deployed against colonial techniques and institutions of control.
 
These sovereign articulations do not agree with the anthropologies of the colonizers. With the arrival of the British in the Malay Archipelago (present-day Brunei, Indonesia, Malaysia, and Singapore), amok was reduced from a collective martial strategy by which the Malays defended their land to an individualized, irrational act of the Malay madman. In this racist and colonial epistemology, there could be no attribution of intent to such spontaneous, indiscriminate behavior. That the British chose to co-opt a Malay word was a cruel act, implying that such supposedly irrational behavior of the Malays could not be captured in the English language.
 
Enacting amok corrects this injustice, breathing new life into the word and rebuking the meaning forced upon it by the colonizers. Amok thus describes any action that enacts one’s individual and collective right to restore justice while affirming the dignity of those who have been historically oppressed. Amok is therefore integral to a decolonial epistemic approach.
 
Let me give an example. A month into Musk’s purchase of the platform, I enacted amok within the Trust and Safety Council, asking those who were concerned about developments within the company to resign en masse. Power would come in numbers, I argued. In the end, Anne Collier, Lesley Podesta, and I resigned from the Council. In our joint press release, I cited research by the Anti-Defamation League and the Center for Countering Digital Hate showing that slurs against Black Americans and gay men had jumped 202 percent and 58 percent, respectively, since Musk’s takeover, and that antisemitic posts had soared more than 61 percent in the two weeks after his acquisition of Twitter. Another red line for me was the reinstatement of previously banned accounts, including those of far-right figures and of people who had incited violence, such as former US President Donald Trump.
 
 
Coding Back
 
In coding back, we interrogate power and weave in resistance to the furious pace of technology. This resistance, whether performed at the individual or the collective level, takes cognizance of the inequity embedded within AI. Coding back therefore includes actions like enacting amok. Sociologist Ruha Benjamin, in her book “Race After Technology”, argues that coded inequity should be met with collective defiance, while recognizing that existing inequities mean not every person is in a position to refuse. She highlights that emphasizing technophobia or technological illiteracy draws focus away from the structural barriers to access and ignores the many forms of technical engagement and innovation in which people of color are involved.
 
How do we code back? First, we rally against the coded injustice within AI. One way is to reappraise the data sets used for machine learning, which have our biases encoded in them. In the words of Ruha Benjamin, we would need to train AI “like a child” and be more cognizant of the inherent biases within such data. For example, in public health in a country like the US, with its history of redlining, using racist data from the past as training sets is problematic because it entrenches those biases. In 2021, the maternal death rate for Black women in this country was 2.6 times higher than that for white women; according to the Centers for Disease Control and Prevention, many factors contribute to this, including structural racism and implicit bias. In sectors where there is evidence of redlining’s impact on historically vulnerabilized populations, especially healthcare, financial services, and the housing market, authorities would need to start afresh and collect new data. Vulnerabilized populations should have the right to inspect and provide feedback on data sets, and governments and research institutions should allocate funding to gather data in ways that might be more accountable to these communities.
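To make this reappraisal of data sets concrete, here is a minimal sketch in Python of one way a team might audit a historical data set for group-level outcome disparities before any model is trained on it. The file name, the column names (race, approved), and the four-fifths threshold are hypothetical assumptions for illustration, not a prescribed method.

```python
# Minimal sketch: audit a hypothetical historical data set for
# group-level outcome disparities before training a model on it.
# Assumptions (illustrative only): a CSV with a "race" column, a
# binary "approved" outcome column, and the common four-fifths
# rule of thumb as the disparity threshold.
import pandas as pd

def audit_outcome_disparity(path: str, group_col: str, outcome_col: str,
                            threshold: float = 0.8) -> pd.Series:
    """Report the favorable-outcome rate per group and flag any group
    whose rate falls below `threshold` times the highest group's rate."""
    df = pd.read_csv(path)
    rates = df.groupby(group_col)[outcome_col].mean()
    ratios = rates / rates.max()
    for group, ratio in ratios[ratios < threshold].items():
        print(f"WARNING: '{group}' receives favorable outcomes at "
              f"{ratio:.0%} of the best-served group's rate.")
    return rates

if __name__ == "__main__":
    # Hypothetical lending data shaped by decades of redlining.
    audit_outcome_disparity("loan_history.csv",
                            group_col="race", outcome_col="approved")
```

An audit like this does not repair the data; at best, it surfaces the disparities that make the case for starting afresh and collecting new data in partnership with the affected communities.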
 
Second, to counter the neocolonial structures within technology, coding back entails insisting on a different temporal framework, what Mark Rifkin calls “temporal sovereignty”: disrupting the linear, neoliberal Western understanding of temporality and centering instead Indigenous and decolonizing ways of engaging with time and relationships, including with data. This means giving all communities the tools to resist or refuse technology, foregrounding the right to data sovereignty and the responsible development of technology, including AI, that is non-extractive and accountable to all. Such temporal sovereignty could be demonstrated through calls to slow down the “ChatGPT race” so as to prioritize human safety and mental well-being. Karen Hao and others have shown how AI development has exploited labor in the Majority World, where workers labelling data sets are paid poorly and exposed to traumatizing and harmful content. Slowing down is not, on its own, enough to subvert the capitalist drive of AI corporations. It could, however, help illuminate how our consumption is always bound up in the oppressive conditions of AI production.
 
Third, we privilege Indigenous values in the mental models that inform coding and design. Jason Edward Lewis argues that we should attend to how humans draw on their socio-cultural contexts to learn and make decisions, and ask how such situated socio-cultural intelligences could be modeled in AI systems. One way to do this is to form advisory councils comprising people from historically vulnerabilized communities, including Indigenous peoples. Kimberly Christen and Jane Anderson have written about Indigenous knowledge systems, relating them to archival practices that are embodied and affective, in opposition to neocolonial practices of ordering and their emphasis on the written form. Their work invites us to ask: how could embodied and affective practices of sensing, remembering, and remaking be centered in developing AI as decolonial praxis?

Now is the time to wrest back control and ask the fundamental question of what we want AI to do for our children, for humanity. Enacting amok is the right to rebel in the cause of justice. To code back is to elevate Indigenous and decolonizing ways of doing, including embodied and dynamic practices; to privilege temporalities that are unruly and nonlinear; and to realize the data sovereignty of those who have been historically oppressed. In coding back, we center the transformative potential of Indigenous and decolonizing imaginaries. For all our futures.
 

Eirliani Abdul Rahman was a founding member of Twitter’s Trust and Safety Council and is currently a doctoral student at Harvard University. She would like to thank Dr. Jesse B. Bump, Dr. Ad Maulod, and Dr. Trystan Goetze for the conversations and their ideas related to this article. 

Research for this work was made possible with the support of the Heinrich-Böll-Stiftung Washington, DC.
