Coded Displacement

During her graduate studies, black computer scientist Joy Buolamwini developed Aspire Mirror, an AI system that recognizes and reworks the user’s facial image with a chosen motif—e.g., an inspirational quote or an animal, like an owl representing wisdom—reflecting the user’s idealized self back at them. Unfortunately, the software failed to detect the contours of Buolamwini’s black face. To use her own machine, she resorted to donning a white mask or tagging in a white friend. She later coined the concept of “the coded gaze” to capture comparable instances of gendered and racial insensitivity in learning machines. In their 2018 paper “Gender Shades,” Buolamwini and Timnit Gebru, who would go on to co-lead Google’s Ethical AI team, caution that the coded gaze could result in someone being “wrongfully accused of a crime based on erroneous but confident misidentification of the perpetrator from security video footage analysis.”
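
For readers curious how such an audit works in practice, the central move of “Gender Shades” can be illustrated with a toy calculation: report a classifier’s error rate separately for each demographic subgroup rather than as a single aggregate figure, so that disparities hidden by overall accuracy become visible. The sketch below is a minimal illustration in Python; the records and subgroup labels are invented for the example and are not the paper’s benchmark data.

```python
# Illustrative sketch: a Gender Shades-style audit that reports error rates
# per demographic subgroup instead of a single aggregate accuracy.
# The records below are invented for illustration, not the paper's benchmark.

from collections import defaultdict

# Each record: (subgroup label, ground-truth label, classifier's prediction)
records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned female", "female", "female"),
    ("darker-skinned male", "male", "male"),
    ("darker-skinned female", "female", "male"),    # misclassification
    ("darker-skinned female", "female", "female"),
]

errors = defaultdict(int)
totals = defaultdict(int)
for subgroup, truth, predicted in records:
    totals[subgroup] += 1
    errors[subgroup] += int(truth != predicted)

# An aggregate accuracy figure would hide the disparity that the
# per-subgroup breakdown makes visible.
for subgroup in totals:
    rate = errors[subgroup] / totals[subgroup]
    print(f"{subgroup}: error rate {rate:.0%} ({errors[subgroup]}/{totals[subgroup]})")
```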

Their prediction came true for black Detroiter Robert Williams, arrested in 2020 by Detroit police after the department’s facial recognition system identified him as a burglar. Williams was not the actual thief, nor did the system’s vendor, DataWorks, consider its technology sufficient to establish “probable cause for arrest.” Nonetheless, to Detroit’s law enforcement, the algorithm operated as an investigative shortcut, the upshot of a local covenant between businesses and police entitled Project Green Light. Companies participating in Project Green Light can functionally jump the queue for police assistance by installing surveillance cameras on their properties. Live video data are then fed into the department’s “Real-Time Crime Center,” whose platform for aggregating and analyzing data is the product of a public-private partnership with Motorola Solutions. Through DataWorks’ algorithm, on Motorola’s platform, face data from these feeds are compared to face data from public databases (including driver’s license and mugshot datasets), and a list of potential identifications (sometimes hundreds, each with an associated probability score) is sent to a detective.
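
To make the shape of this pipeline concrete, here is a minimal, hypothetical sketch of the general technique: a face embedding extracted from a still image (the “probe”) is scored against a gallery of embeddings drawn from license and mugshot databases, and a ranked list of candidates, each with a score, is handed off for human review. Every name, number, and function in it is invented for illustration; it is not DataWorks’ or Motorola’s actual software. Note the design consequence that matters for Williams’ case: a nearest-match system always returns someone, even when the person in the footage is not in the gallery at all.

```python
# Hypothetical sketch of a probe-vs-gallery face matching pipeline.
# All data and names are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical gallery: person ID -> precomputed face embedding
# (standing in for license and mugshot databases).
gallery = {f"person_{i:04d}": rng.normal(size=128) for i in range(1000)}

# Hypothetical probe embedding extracted from a still image of camera footage.
probe = rng.normal(size=128)

# Rank every gallery identity by similarity to the probe. Even if the person
# in the footage is absent from the gallery, someone is always ranked first.
candidates = sorted(
    ((cosine_similarity(probe, emb), pid) for pid, emb in gallery.items()),
    reverse=True,
)

# Send the top candidates, with their scores, to a detective for review.
for score, pid in candidates[:5]:
    print(f"{pid}: similarity {score:.3f}")
```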

Williams’ arrest is a pressing harbinger of how law enforcement in American cities may integrate AI systems. Last year, black Detroiter Porcha Woodruff, eight months pregnant, was falsely arrested by officers using the same technology. Black Atlantan Randal Reid was wrongfully arrested for a crime in a different state, in a county he had never set foot in, on the strength of a misidentification. In Maryland, Alonzo Sawyer. In New Jersey, Nijeer Parks. In the interrogation room, after booking him and hence recapturing his image and fingerprints (a data trail difficult to expunge), officers show Williams a snapshot of the burglary he allegedly committed. Williams puts the photo beside his face and says, “I hope y’all don’t think all black people look alike.”

Now there surely is something wrong (as in unjust) about a system that not only disrupts and jeopardizes a life course, in an unfounded arrest, but also catalogues said course for system development. There too is something wrong (as in distorted) about a concept of wrong that cannot register such a victim’s experience of disruption and jeopardy. In her recently published memoir, Buolamwini writes, “Sitting in my office late at night and coding in a white mask in order to be rendered visible by a machine, I couldn’t help but think of Frantz Fanon’s Black Skin, White Masks. […] I was reminded I was still an outsider. I left my office feeling invisible.” We should, in lieu of recapping the revolutionary psychiatrist’s famous scene, tarry with this problem of algorithmic bias as it reveals how neither unfair predictions nor violated compacts encapsulate all that is concerning in Buolamwini and Williams’ cases. In fact, we may wish to follow Fanon the clinician, who might consider this complex of feelings, practices, and technologies wrong (as in sickening).

To have one’s aspirations thwarted. To be invisible or hypervisible. With Fanon as our guiding light, we ought to inquire about the injury accompanying these cases of misrecognition, where one is incapable of identifying oneself otherwise. Clearly these harms are disorienting. They all signal one’s powerlessness in making use not only of oneself but also of an environment increasingly saturated with statistical machines. Case in point: Buolamwini cannot appropriate the technology that she made—an alienating experience that is no mere skill issue but rather haunts much of computer vision. When you cannot help but contribute to a world in which you do not show up, how do you feel? Returning home from work, Williams, because his municipality has integrated a faulty technology (one that wasn’t voted for, for what it’s worth), is arrested in his driveway, brought to a standstill, where resistance (even out of curiosity or shock) could be fatal. How would any of us feel?

A social clinician like Fanon would not seek to assign blame but rather ask, “What permits all of this?” Individual or institutional racism may seem explanatory enough, but these bare terms cannot account for how technologically augmented the disruptions experienced by Williams and Buolamwini are. Officers don’t randomly stop and frisk Williams but triangulate him: they seek out, specifically, him. The cops might not think all black people look alike but plausibly refuse to trust their own eyes, given that the chief justification for these technologies in our social systems is how unshackled from individual and institutional bias they are. Hauntingly, such anti-rationalism, or second-guessing of human assumptions, is an upshot of some proposed police reforms, such as bias training. Further, if we focus all our attention on faulty models (or unrepresentative datasets), we open a hatch for individuals and institutions to escape liability. Technologies can be and often are scapegoats: for our algorithm to be fair, so the thought goes, we need better and more data! But are these learning machines good for us, for our institutions, for our societies as a whole?

Today, technology ethics circles around a consensus: we cannot ignore any of the above elements (individual, machine, data, institution). Instead, we must figure ourselves as part and parcel of sociotechnical systems. Algorithmic bias can, in this sense, be figured as a symptom of systemic dysfunction. For all its merits, this framing still cannot account for the bewilderment felt by those caught in our systems’ gears. To experience algorithmic bias is to be incapable of gathering yourself, of getting your bearings, of navigating and making use of your sociotechnical environment. A hostile environment is not broken but uninhabitable.

It is instructive to scale up this diagnosis of vertigo from the individual user to society writ large. Keeping up with technical advances is dizzying for critic, engineer, scientist, and public alike. One prominent reason is the precarious character of technological disruption. Nowadays it is as if all of society, like Williams, is merely going about its business only to be apprehended, suddenly, by a miseducated algorithm. One company can publicly release software in its alpha cycle and at once create innumerable crises within institutions the world over. Onlookers must get with the program (investors investing, students using, professionals mastering) or face the dangers that come with obsolescence. The same company can then shield itself from critique by hiring ethicists, lamenting an impending software apocalypse, and/or rhetorically steering the narrative of how everyone ought to interface with the commotion. Workers, threatened in their positions, may feel forced to machinate against bosses who are themselves threatened by competing firms more willing to reallocate wages to automation costs. Even if local and national governments work to moderate discontent and rearrange their own protocols, they are in some respects powerless against the original disruption. The young hopeful feels less inclined toward prospects that cannot clearly secure their future and gravitates toward studying the disruptive technology, which might itself be rendered obsolete overnight by the next disruption. In a world where all of this is permitted, it seems inaccurate to say that something like a misalignment between technical and social values is at fault. What’s sickening is the widespread sense of uncertainty and unsustainability, and, above all, the distress. All of the above frustrates, and none of it seems all that tenable. A doctor might declare our society unwell and conclude that our technology has something to do with it.

Let’s, however, return to the level of the city in question: Detroit. A 2020 report by Michigan State University’s Justice Statistics Center concluded that there are no “clear and consistent indications of crime declines associated with [Project Green Light] participation,” yet even so “700 [businesses] have ‘voted’ with their financial resources to enroll” in the wake of that year’s nationwide protests against police brutality. One of the supposed benefits of the protests for the city was that the police department would no longer use facial recognition technology in conjunction with Project Green Light. Yet in a 2022 interview with journalist Laura Herberg, Detroit Police Commander Ian Severy acknowledged that, though the approved Green Light cameras (supplied by private vendors) have no facial recognition software installed in them, “the department takes still images from Project Green Light footage and puts them into facial recognition software.” Nothing has really changed in the city of Robocop; the very next year, Porcha Woodruff was wrongfully arrested on a misidentification while eight months pregnant. If protest fails to actually change things, what use is philosophy in addressing these cases of machine bias?

A work of philosophy, respectful of its limitations, is a mirror image of Buolamwini’s Aspire Mirror: when successful, philosophy reveals to us what thwarts our aspirations. What then is necessary for such analysis? If we follow Fanon and take up the task of diagnosing our present, we would not shy away from but begin at the experience of bias—at the feeling of frustration from one’s arrest by a coded gaze or the despair at a system claiming to change while remaining the same. We would not hesitate before such feelings but work to comprehend the hurt as itself a symptom, microcosm, and product of all that permits it. That is, we would consider how the tools we take up, no matter our reasons for doing so, change not only what we do with them but also our understanding of what we’re doing. In seeking to understand suffering, we would realize how the changes that technology makes do not begin and end at the user, the machine, or the institution that incorporates the system. We must thereby track how these ripples affect all that surrounds these entities and how our algorithmic solutions can render problematic the conceptual, practical, and technical environments that we call society. This itself is resistance inasmuch as our present societies, in their supposed complexity, reject and curtail thought. To critically diagnose our sociotechnical systems, which are harmful and uninhabitable for so many, is to come to realize something painfully obvious: it is we who make these systems, we who permit the hurt. If we recognized the hurt of disruption as itself a system error, we could, together, go back to the drawing board and engineer a world where we all have a place we can call home. 

Jerome Clarke
Assistant Professor of Philosophy at American University
Jerome Clarke is an Assistant Professor of Philosophy at American University. He writes on Technology Ethics (esp. AI/ML), the Critical Philosophy of Race, and 20th-century Social Philosophy. His book manuscript (under construction) reformulates the theory of racism in light of algorithmic harms in contemporary, institutional life. The project prominently features a reevaluation of W.E.B. Du Bois' critique of empiricism in governance and social science. Dr. Clarke's other research intervenes in recent debates in Black Studies and the Philosophy of Technology under the principle of bridging conceptual discussion and applied inquiry.
