Holding the Line: Values Drift, AI Anomia, and the Craft of Accountable Leadership
Between Data and Decision: Introducing Myself to 4S Backchannels
A few years ago, I sat in a conference room as a leadership team discussed a new AI tool to 'optimize' staff performance. The slides were full of color-coded dashboards and uplift projections. What was missing was any serious conversation about how this system would change the way people experienced their work, or how it might shift acceptable norms about what counted as good enough effort, care, or attention. In my experience, optimization and lived experience are rarely complementary; often, the former is achieved by smoothing over frictions with the latter. I remember thinking: There’s a whole sociology of this decision that isn’t on the slide deck, but it’s in the room.
That gap, between what shows up in the data and what’s actually happening to people and institutions, is where most of my work lives.
My name is Christine Haskell, and I’m joining 4S Backchannels as an Assistant Editor. I work at the intersection of Science and Technology Studies (STS), organizational governance, and AI ethics, with one central question running through my research and practice: How do leaders make decisions under pressure when their tools, metrics, and infrastructures are quietly nudging them in other directions?
For the past three decades, I’ve moved between the disparate worlds of large tech companies, consulting, and higher education. I began my career in the early dot-com startups that changed several paradigms: consumer platforms with Yahoo!, audio/video streaming with RealNetworks, data science and AI with Microsoft, permission marketing with Seth Godin, and data culture and literacy with Starbucks and Salesforce. These companies were training grounds for my work with systems that promised informational clarity (more dashboards, faster metrics) but often produced a new kind of organizational opacity, obscuring the human context and labor required to move those needles. Later, I returned to graduate school to study leadership and organizational behavior because I wanted language and methods for what I was seeing: how tools designed for 'efficiency' reconfigured relationships, roles, and responsibility. I observed that the problem wasn’t the data, the storytelling, or even a lack of judgment. It was the willful avoidance of accountability: a refusal to re-architect old systems to bear the weight of new problems.
Today, I teach EMBA students, advise organizations on AI and data governance, and write about how to maintain healthy defaults that support well-being inside and outside of organizations, by paying attention to what I’ve come to call values drift, AI anomia, and artificial mirroring. In what follows, I describe these patterns not just as theoretical constructs, but as the lived reality of leadership in automated environments.
Values Drift, AI Anomia, and Artificial Mirroring
By values drift, I mean the slow, often invisible process by which organizations move away from their stated commitments—not through dramatic betrayal, but through a thousand small, plausible decisions. No one wakes up and says, “Let’s erode trust this quarter.” Instead, leaders approve the slightly easier metric, the cheaper proxy, the automated workflow that makes sense in the moment but accumulates downstream harm.
In AI and data projects, this drift often shows up as a subtle shift in who or what is being optimized for. A system that was meant to allocate resources more fairly ends up maximizing throughput. A well-being tool becomes a performance surveillance tool. A student support platform becomes an enrollment management instrument. On paper, nothing 'wrong' has happened. In practice, the moral center of the work quietly moved.
AI anomia is my term for what happens when the language we use to talk about AI stops reliably pointing to shared meanings. We refer to 'transparency,' 'governance,' 'fairness,' 'ethics,' or 'human in the loop,' but different actors mean very different things, and sometimes those differences are strategic. Much like 'sustainability' became a floating signifier to cover anything from survival to greenwashing, terms like 'AI safety' are often deployed to mean 'brand safety' rather than 'human rights.' The breakdown occurs when this shared vocabulary masks unshared goals, allowing institutions to claim ethical alignment while pursuing contradictory ends. The result is a breakdown of meaning that makes real accountability much harder to achieve.
The third pattern I’ve been working on is what I call artificial mirroring, a concept I explain in detail in a forthcoming publication written for an organizational governance audience (Haskell, in press). Generative AI systems are increasingly designed to simulate deep relational attunement: they reflect our words, preferences, and emotional cues in ways that can feel uncannily responsive. Users describe feeling 'seen,' 'held,' even spiritually accompanied by systems that have no needs, stakes, or inner life of their own. This creates what I think of as a narcissistic illusion of mutuality—a sense that there is a 'someone' on the other side who understands and cares, when in fact the system is arranging patterns of text and affect.
Artificial mirroring matters for institutional and relational governance because it blurs the boundaries of responsibility and attachment. When people come to depend on these systems for affirmation, guidance, or companionship, it becomes harder to parse where agency lies: is this 'my' insight, the AI’s suggestion, or the invisible entity of the institution that owns the model, tunes the prompts, and sets the defaults (to a certain mean)? Here, the concepts converge: the invisible processes of values drift occurring upstream in the corporation become the invisible entity shaping the intimate mirroring downstream. The boundaries between leadership decisions and user experiences are not just blurred; they are structurally linked.
One way I’ve been framing this is a shift from the dyad to the triad. Much of the public and commercial language around AI companionship imagines a two-way relationship, person and machine, 'you and your copilot.' STS invites us to see a sociotechnical triad instead:
the person (with their history, vulnerabilities, and situated expertise),
the interface (the chatbot or system that performs artificial mirroring), and
the infrastructural actors (the organizations, datasets, and governance regimes that shape what the system can say and do).
Thinking in triads rather than dyads helps individuals keep institutional power, ownership, and design choices in the frame, even when the interaction feels intimate and personal. It is a small conceptual move with big implications for how we talk about care, consent, and responsibility in AI-mediated relationships.
All three patterns are sociotechnical: they are about tools and architecture, but also about culture, norms, and power. They are not failures of individual character so much as predictable outcomes of particular arrangements of metrics, incentives, and institutional histories. This is where STS has been invaluable to me: drawing on the work of scholars like Paul Edwards on the politics of infrastructure and N. Katharine Hayles on the nature of posthuman cognition, it offers conceptual and methodological ways to see these dynamics not as isolated 'ethics problems,' but as embedded in infrastructures, discourses, and material practices.
Craft Intelligence and Quiet Acts of Governance
Alongside this diagnostic work, I’ve been writing and teaching about what I call craft intelligence, a way of naming the situated, relational, often tacit judgment that people bring to complex work. In a forthcoming publication written for a leadership studies audience (Haskell, in press), I examine craft intelligence as a form of deliberate stewardship over how knowledge is turned into decisions—the capacity to notice the 'almost wrong' pattern in a dataset, the uneasy feeling in a stakeholder meeting, and the tension between what the dashboard rewards and what care or justice would require.
In many institutions, this kind of intelligence is everywhere but rarely centered. It lives in backchannels: side conversations between colleagues, notes scribbled in margins, informal workarounds that keep systems from doing harm. My concern is that as AI systems become more tightly coupled to decision-making, the space for those quiet acts of governance (the human friction, the pause to reconsider, the micro-decisions that regulate a system’s impact) shrinks. When the defaults harden into infrastructure, it becomes harder to raise a hand and say, “This may be efficient, but is it right?” without sounding anti-innovation.
My current projects (books, articles, and workshops) try to make craft intelligence more visible and more politically legible. I’m interested in how reflection practices, values work, and 'soft' skills become hard infrastructure: decision protocols, review rituals, audit trails, curricular designs, and leadership norms that can withstand pressure.
A Scholar-Practitioner Between Worlds
I often describe myself as a scholar-practitioner, not because it sounds tidy, but because I genuinely live between worlds. I teach in business programs while drawing heavily on STS, critical theory, and feminist ethics. I work with executives on AI readiness while thinking about curriculum design, pedagogy, and students’ lived experiences with AI tools. I write for organizational audiences, but my questions are deeply shaped by conversations in 4S and adjacent communities.
That in-between position has convinced me that we need more bridges between research and practice, not fewer, and that those bridges must be bluntly honest about power, not just celebratory about “impact.” We must name the gap between those who design the optimization functions and those who live inside them. When an institution adopts a new AI system, for example, who really decides what counts as success? Whose data, labor, and vulnerability are being mobilized to achieve it? And crucially, what is the cost of refusal? While individual educators may bravely return to analog islands (e.g., blue books, paper, and device bans), who has the structural power to say 'no' to the enterprise systems that envelop them?
These are not abstract questions for me. They show up when I sit with a CIO under pressure to 'modernize' their data team, a dean being sold 'AI-powered student success,' or a public sector leader facing budget cuts while being told that AI is the future. They show up in my classrooms when both faculty and students ask, “Can I use these tools and still maintain integrity in my teaching and learning?”
What I Hope to Do With 4S Backchannels
Joining 4S Backchannels, I’m excited about a few specific things:
Surfacing field reports from the middle of AI adoption: Not just polished case studies, but honest accounts of messy experiments: pilots that went sideways, governance committees that stalled, moments when someone refused a seemingly neutral tool on values-based grounds.
Exploring language breakdowns: Pieces that grapple with how concepts like “safety,” “risk,” “participation,” or “inclusivity” are being stretched, narrowed, or co-opted in AI discourse—and how practitioners, students, and communities push back.
Highlighting craft and care: Essays that foreground the everyday work of people who keep systems humane: advisors, frontline staff, data stewards, teachers, organizers, student workers. I’m particularly interested in writing that refuses the easy binary of “for or against AI” and instead traces the ambivalence, improvisation, and situated expertise that make institutional life possible.
Connecting reflection to structure: Contributions that don’t stop at “we need more ethical reflection,” but also ask: What structures would make that reflection durable? What checklists, review processes, policies, or teaching practices actually anchor values in the flow of decisions?
My editorial sensibility leans toward pieces that hold tension rather than rush to closure—work that is analytically rigorous and sharp, but also generous to the people caught inside systems they did not design. I’m especially eager to support authors who are experimenting with formats that sit comfortably in 4S Backchannels: shorter conceptual provocations, interviews, collaborative reflections, or multimodal pieces that integrate text with images, diagrams, or classroom artifacts.
An Open Invitation
If any of this resonates with where you’re sitting, whether you’re working in a university IT shop, a community organization, a regulatory body, a classroom, a research lab, or somewhere that doesn’t fit neatly into any of these categories, I’d love to hear from you.
What backchannel conversations about AI, data, and governance are already happening in your context? What quiet acts of resistance or care are you seeing? What language are people using to describe what they’re living through, and where does that language fail?
As I begin this role with 4S Backchannels, my hope is to help curate and support work that treats AI not just as a technical artifact or a policy problem, but as a lived environment: a shifting terrain of infrastructures, identities, and relationships. I’m grateful to join this editorial community and look forward to learning from and with you in the process.