Techno-Activism and the Vestiges of Hope
In 2008, Hope won an American presidency. In 2020, elections ring with coups.
Hope is a kindly sentiment that feels historically out of place—cheerful and glossy in a moment charred with grit. Viral death, political disorder, and the erosion of truth together push hope aside, while the resurgence of explicit White supremacy and the militarization of police against those who oppose it sweep hope out of mind, putting it away, for now, as a mood ill-suited to the present unraveling.
On a gut level, I find the notion of hope an affront: a protective and palliative escape, a way out of bearing witness. Like what’s happening now is some unfortunate detour on an otherwise smooth path, and once we get back on the main road everything will be fine again and the sun will shine again and people will have block parties, and small-shop owners will sell their wares to friendly customers from all walks of life. But that reality was never true, and people are dead, and people are dying, and the earth is literally on fire.
Yet as a sociologist, I’m nagged by the disciplinary truism that reality is most evident and most pliable when it comes undone. And anyway, what’s left once hope dissolves?
In this, the strangest of all years, conventions corrode and uncertainty saturates. Perhaps in this decay and uncertainty—indeed because of it—hope lingers. In what follows, I entertain the idea.
I’m not certain that there is hope, or that hopeful sentiments are even productive. But if there is, and if they are, then that “hope” has to be actionable; it must tangibly and materially matter. Hope has to be a means of interrogation and a mechanism of change. Hope isn’t something to have, but something to do—a critical and grounded praxis that attends to the systems that infuse and pervade daily life.
On a practical level, doing hope means identifying ports of entry. Where are social problems located and how can we demonstrably intercede? One place to start is with the technologies of the time.
Technologies embody and perpetuate social values, and shape daily life in ways profound and mundane. Left unchecked, technologies arc toward existing patterns of power and privilege, reinforcing and amplifying the status quo. The troubles of today are inextricably wrapped up with existing and emergent technological systems—misinformation and political propaganda, data economies that extract and surveil, and automated decision tools with prejudicial proclivities that put Black men in jail and keep women under-employed. However, these dynamics are not inevitable and hope, as an instrument of tenacity, may spur material efforts toward reform.
To be clear, I am not talking about a techno-utopia in which mechanization solves societal blights. Rather, I am exploring technology as a material point of action. I am talking about holding existing systems to account, adjusting them when possible and dismantling them when necessary. I am talking about building new systems with equitable, socially just foundations. I am talking about social action with, through, and against, sociotechnical ensembles. This is not a techno-optimism but a “techno-activism”—an activist approach that enters the social via the technical.
Techno-activism is not, and cannot be, a floating ideal. As a vehicle for hope, it is best articulated and enacted through specific projects that rectify identifiable harms. This means asking where is techno-activism most needed? How do we do it? And who is already on the task?
Techno-activism is needed wherever technical systems enact social harms. Because technologies and their effects are pervasive, it is useful to identify concrete points of intervention and specific technologies in use. I’ll highlight two here: predictive policing and facial recognition software. Both are tied to algorithmic prediction through machine learning, the “advancement” du jour.
Predictive policing uses tools of automation to anticipate where crimes are most likely to occur and who is most likely to commit them. Police personnel are deployed based on these predictions in an effort to prevent, rather than respond to, criminal activity. The rationale for these systems seems reasonable: efficient precision that optimizes policing resources and keeps communities safe. In practice, predictive policing is racist and classist in ways that directly and negatively affect poor communities of color, especially, but not exclusively, Black men.
Policing has always been raced and classed. As a matter of normative practice, police over-surveil and under-protect low-income neighborhoods populated by minority racial groups, while giving wide berth to affluent White neighborhoods. Moreover, once in the judicial system, White defendants receive more lenient rulings and lighter sentences. These are historical facts. The data from this history is what populates and drives today’s predictive policing algorithms. Concretely, this means that Black men of low socio-economic status are disproportionately defined as risky and thus monitored by police; police monitoring increases the likelihood of arrests and convictions; and those arrests and convictions feed back into the algorithmic system as data points that validate the original predictions and solidify a self-perpetuating cycle. Predictive policing intends to make society safe. Instead, it enacts and amplifies a fundamentally unjust carceral state.
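To see how self-perpetuating that cycle is, here is a minimal simulation sketch, written in Python with made-up numbers; the two neighborhoods, the offence rate, and the allocation rule are illustrative assumptions of mine, not a model of any deployed system. Both neighborhoods behave identically, but one begins with a larger arrest record, and patrols are allocated in proportion to past arrests:

```python
# Toy simulation of the predictive-policing feedback loop described above.
# Both neighborhoods have the SAME underlying offence rate; the only
# difference is the biased arrest history that seeds the "predictions."

TRUE_OFFENCE_RATE = 0.05                 # identical behavior in both places
POPULATION = 1000                        # residents per neighborhood
recorded_arrests = {"A": 100, "B": 10}   # biased history: A was over-policed

for year in range(20):
    total = sum(recorded_arrests.values())
    # "Prediction": next year's patrols are allocated in proportion to
    # past arrests, standing in for a risk-scoring algorithm.
    patrol_share = {n: count / total for n, count in recorded_arrests.items()}
    for n in recorded_arrests:
        # An offence is recorded only when police are present to observe it,
        # so more patrols yield more arrests at the same offence rate.
        observed = POPULATION * TRUE_OFFENCE_RATE * patrol_share[n]
        recorded_arrests[n] += round(observed)

total = sum(recorded_arrests.values())
for n, count in recorded_arrests.items():
    print(f"Neighborhood {n}: {count} recorded arrests, "
          f"{count / total:.0%} of next year's patrols")
```

Because new arrests accrue in proportion to old ones, the initial ten-to-one disparity never washes out: after twenty simulated years, neighborhood A still absorbs roughly ninety percent of patrols, and the data keep “confirming” that A is riskier even though the underlying behavior is identical.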
Facial recognition software and related biometric data products are used in multiple arenas, including social media, search engine queries, policing, and employment. In all cases, this suite of technologies has a race problem (and often a gender problem, too). Because facial recognition has so many applications, its harms are both representational and allocative. Representational harms are those that reinforce negative stereotypes about individuals and groups. Allocative harms are those that affect resource distributions. For example, when Google image search conflates Black people with gorillas, this is a representational harm that draws on and reconstitutes a history of White supremacy legitimated through hierarchies of dehumanization. When businesses use facial recognition and biometric tools to sort job candidates, the result is an allocative harm that penalizes women, people of color, and those for whom English is a second language, because they only distantly approximate a White masculine ideal.
In the realm of facial recognition software, this is referred to as the “pale male data problem.” The problem derives from training datasets disproportionately composed of White men’s faces, which increases accuracy for this particular demographic group at the expense of others and thus normalizes default Whiteness as the valued status quo. Such patterns of photographic (under)representation are not new, but endemic to technologies of human image capture. Take for example the Shirley Cards of the 1970s—literal reference cards used in chemical film development to calibrate color against the image of a woman named Shirley, who had brown hair and light skin. Although film and digital photography have since incorporated more diverse color capacities, these correctives have been retrofitted atop a troubled foundation, resulting in a skin-tone issue that continuously reasserts itself, as evinced in the examples above.
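The pale male data problem has a simple statistical core: error rates track representation in the training data. The sketch below illustrates this in Python with synthetic numbers; the group names, Gaussian features, and one-threshold “classifier” are illustrative assumptions of mine, not a description of any real face dataset or product. A single decision boundary is fitted to data that is 95 percent one group, and accuracy is then measured group by group:

```python
import random

random.seed(42)

# Toy illustration of the "pale male data problem": a model trained on data
# dominated by one demographic group is most accurate for that group.
# All numbers are synthetic; this stands in for face data, not a real system.

GROUP_OFFSET = {"over_represented": 0.0, "under_represented": 3.0}

def sample(group, n):
    """Generate (feature, label) pairs. The groups' feature distributions
    differ, so a single threshold cannot serve both groups equally well."""
    offset = GROUP_OFFSET[group]
    return [(random.gauss(offset + (1 if label else -1), 1.0), label)
            for label in random.choices([0, 1], k=n)]

# The training set is 95% one group: the skew at the heart of the problem.
train = sample("over_represented", 950) + sample("under_represented", 50)

# "Training": place the threshold midway between the two class means.
# With skewed data, this boundary is fitted to the dominant group.
mean0 = sum(x for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
mean1 = sum(x for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
threshold = (mean0 + mean1) / 2

for group in GROUP_OFFSET:
    test = sample(group, 2000)
    correct = sum((x > threshold) == bool(y) for x, y in test)
    print(f"{group}: accuracy {correct / len(test):.0%}")
```

Run it and the over-represented group scores far better than the under-represented one, even though the model was “trained” in good faith on all the data it had. Aggregate accuracy can look respectable while one group’s error rate is several times the other’s, which is why auditing these systems group by group, rather than in aggregate, matters.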
Mobilizing techno-activism is a complicated task with a simple underpinning: hold power to account. This means drawing on multiple resources and channeling them against injustice. Those who intervene need conceptual resources to articulate the problem and propose solutions, network resources to activate a critical mass of disaffected persons, and specialized knowledge of technology, legislation, and procedures of governance. Techno-activists also need tenacity, drive, and mutual social support. These resources won’t be contained in a single person—they rarely if ever are—but can be distributed among activist collectives who pool material and immaterial assets.
For predictive policing, this means drawing on specialized knowledge of algorithms and their operation, legal and historical knowledge of the carceral system, and an unflinching willingness to show up on social media, in the streets, and at the courthouse to unmask and unmake a technologically augmented police state.
For facial recognition software, this means checking, re-checking, sharing, and re-sharing image outputs that come back racist, sexist, ableist, colonialist, and otherwise harmful across intersectional axes of identity and oppression. It also means using broadcast, micro-cast, and legislative channels to call for moratoria on the use of these technologies in consequential decisions, as prominent figures and organizations in the field of AI have already done.
It is worth noting that although technology producers are central to the problem(s), they will also be integral to any solution. They are the ones who make the things. They can make them better. In my own work researching ethics in Australia’s tech startup sector, the practitioners with whom I speak are, to a person, eager, earnest, and ready to enact ethics in their practice, even within an incentive structure that prioritizes speed and profitability. This eagerness, however confounded, is a thread to hang onto.
Techno-activism is already in full swing, a vibrant force in scholarship, art, non-profits, and variously organized networks. Hope lingers here. These are the people and organizations who create a vehicle for hope in a time when hope hardly belongs. As reality is most evident and most pliable when it comes undone, feelings of collective vulnerability may in fact signal the precise moment for social re-imagining. These techno-activists recognize this opportunity-amidst-tumult, and they have committed to the fight. Rather than drone on abstractly about what techno-activism looks like, I’ll highlight a sampling of those leading the charge. For me, these are a place to start, and an impetus to keep going.
Be informed: Reading List
Algorithms of Oppression: How Search Engines Reinforce Racism
Anti-Social Media: How Facebook Disconnects Us and Undermines Democracy
Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor
Behind the Screen: Content Moderation in the Shadows of Social Media
Dark Matters: On the Surveillance of Blackness
Designs for the Pluriverse: Radical Interdependence, Autonomy, and the Making of Worlds
Design Justice: Community-Led Practices to Build the Worlds We Need
Distributed Blackness: African American Cybercultures
Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass
Hacking Diversity: The Politics of Inclusion in Open Technology Cultures
Made by Humans: The AI Condition
Race after Technology: Abolitionist Tools for the New Jim Code
Value Sensitive Design: Shaping Technology with Moral Imagination
Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy
How Artifacts Afford: The Power and Politics of Everyday Things*
*I’m the author of this book and its (non-alphabetized) inclusion is self-generous and aspirational. The book is informed by all of the authors above.
Get involved: Organizations/Data Justice Projects
Be inspired: Activist Art
Keith Obadike’s Blackness for Sale
Born or Built? Our Robotic Future
Data Visualizations by Erin Gallagher
CV Dazzle (facial recognition disruptor)
Bodies in Translation: Activist Art, Technology, and Access to Life
Jenny L. Davis (@Jenny_L_Davis) is a Senior Lecturer in the School of Sociology at the Australian National University, a Chief Investigator on the Humanising Machine Intelligence Project, and author of “How Artifacts Afford: The Power and Politics of Everyday Things” (MIT Press, 2020). Learn more here: JennyLDavis.com