(Un)Fairness in AI: An Intersectional Feminist Analysis

Racial, Gender, and Intersectional Biases in AI

Artificial Intelligence (AI) is now an integral part of society. It is used to make high-stakes decisions, such as hiring, admissions, and loan approvals. AI-based decision-making is popular because it is believed to be faster, more accurate, and more consistent than human judgment.

However, there is a huge problem: AI is not neutral. It reproduces racism, sexism, and other forms of social injustice.

For example, a 2016 ProPublica article revealed that COMPAS, a recidivism prediction algorithm widely used in US courtrooms, was biased against Black people. In predicting who was likely to commit new crimes, this AI algorithm tended to mark Black defendants as higher risk, falsely labeling them as future criminals twice as often as their white counterparts.

Increasing concern about biased algorithms has led to a proliferation of studies on this topic. In particular, Joy Buolamwini and Timnit Gebru’s groundbreaking work “Gender Shades” (2018) drew attention to intersectional bias in AI. They found that commercial facial analysis algorithms from Microsoft, IBM, and Face++ misclassified up to roughly 3 out of 10 Black women’s faces. Not only did the algorithms perform better on men’s faces than on women’s (gender discrimination) and on white people’s faces than on Black people’s (racial discrimination), but they were also least accurate on Black women, far worse than on Black men or white women (the intersection of gender and racial discrimination).

Dominant View of Intersectional Fairness in the AI Literature

Since “Gender Shades,” a growing number of researchers have started to employ the concept of intersectionality for analyzing unfairness in AI and improving fairness. Intersectionality refers to the idea that racial, gender, and other forms of discrimination are not separate but intersect and mutually construct one another. Rooted in Black feminist thought and popularized by Kimberlé Crenshaw, intersectionality has long been a gold standard in feminist theory. Most computer science studies that now use it as a conceptual tool for measuring AI fairness (most notably, Kearns et al. and Foulds et al.) interpret “intersectional fairness” as follows:

The “PA” Definition of Intersectional Fairness: An AI algorithm is intersectionally fair if it achieves parity of a statistical measure among intersectional subgroups that are defined by different combinations of the protected attributes.

I call this dominant view of intersectional fairness in the AI literature “PA,” which stands for its two keywords: parity and attributes.

For example, let’s assume that a company uses an algorithm to predict who will be successful employees and makes hiring decisions based on those predictions. According to PA, the algorithm is “intersectionally fair” if each of the eight subgroups below has the same probability of getting hired regardless of their intersecting combination of the three attributes (i.e., race, gender, and disability).

[Figure: the eight intersectional subgroups defined by race, gender, and disability, each with an equal 30% probability of being hired under the PA-fair hiring algorithm]

This sounds pretty straightforward: every applicant group has an equal 30% chance of getting hired, no matter their identity. However, is such an algorithm really fair? Is PA a useful framework for implementing fairer AI for a fairer society?
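
To make the PA criterion concrete, here is a minimal sketch in Python of what checking it could look like: group an algorithm’s predictions by every combination of the protected attributes and compare the resulting hiring rates. The toy data, column names, and the pa_parity_gap helper are my own illustrative assumptions, not the formal metrics proposed by Kearns et al. or Foulds et al.

```python
import pandas as pd

# Toy predictions: 1 = the algorithm recommends hiring, 0 = it does not.
# The columns, values, and data below are illustrative assumptions only.
applicants = pd.DataFrame({
    "race":       ["Black", "Black", "white", "white", "Black", "white"],
    "gender":     ["woman", "man",   "woman", "man",   "woman", "man"],
    "disability": ["yes",   "no",    "no",    "yes",   "no",    "no"],
    "hired":      [1,        0,      1,       0,       0,       1],
})

def pa_parity_gap(df, attributes, outcome="hired"):
    """Largest difference in outcome rates across the intersectional
    subgroups defined by combinations of the given attributes."""
    rates = df.groupby(attributes)[outcome].mean()
    return rates.max() - rates.min(), rates

gap, rates = pa_parity_gap(applicants, ["race", "gender", "disability"])
print(rates)
print(f"PA parity gap: {gap:.2f}")  # 0.00 would mean exact parity, e.g., 30% for every subgroup
```

On this reading, PA counts the algorithm as intersectionally fair when the gap is (near) zero, that is, when every subgroup sits at the same rate, such as the 30% in the example above.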

Three Fundamental Problems with the Dominant View

I argue that the answer is “no,” and that there are three major problems with PA. I address these more thoroughly in a recent article, and will briefly discuss each problem here.

1. Overemphasis on Intersections of Attributes

First, PA is so preoccupied with the intersection of identity categories (e.g., race, gender, and disability) that it fails to address the intersection of oppressions (e.g., racism, sexism, and ableism), which is more central to intersectionality as a critical framework.

I am not suggesting that AI fairness research abandon the language of identity altogether. Instead, I am suggesting that we examine identity in its relationship to power, rather than as an independent unit of analysis in a vacuum. Black women are oppressed not because they have intersecting identities of “Black” and “woman” per se, but because these identities are shaped by and lived within the intersecting structure of racism and sexism.

Let’s consider why facial analysis algorithms are, as “Gender Shades” revealed, so bad at correctly classifying Black women’s faces. Possible reasons include: machine learning training datasets that contain few images of Black women and are composed mostly of white people and men; the implicit bias of the crowdworkers who collect and label images; the lack of racial and gender diversity in Big Tech companies; and the hierarchical and colonial labor market in high-tech industries.

All of these constitute the oppressive structure of US society and the global AI market.

As Marilyn Frye and Iris Marion Young first conceptualized it, to say that oppression is a “structure” is to say that all of the above (e.g., implicit bias and lack of diversity) are not abnormal phenomena but “normal” processes of everyday life. At the core of intersectionality is the idea that multiple forms of oppression intersect to form a normalized structure. By focusing so heavily on generating fine-grained combinations of attributes, the dominant view diverts attention away from structural analyses of how white supremacist capitalist patriarchy is embedded in the AI development pipeline and perpetuates the marginalization of Black women and other women of color.

2. Dilemma between Infinite Regress and Fairness Gerrymandering

The predominant focus of PA on attributes leads to another problem: How many attributes and subgroups should we consider to make algorithms intersectionally fair? In responding to this question, PA faces a dilemma.

On the one hand, if a fairness standard seeks parity among all subgroups defined by every possible combination of attributes, it must keep splitting groups into ever smaller subgroups (e.g., Black women who are working-class, queer, disabled, and so on) until there is no group left and the individual is the only cohesive unit of analysis. In this way, PA falls into an infinite regress.

On the other hand, if a fairness standard seeks parity only between relevant subgroups, it is susceptible to the problem called “fairness gerrymandering,” i.e., an arbitrary selection of protected attributes. As a solution to gerrymandering, some researchers have proposed considering only the statistically meaningful subgroups that computers can identify. They say, for example, that if race and gender make a statistically meaningful difference in the outcome while disability does not, it is justifiable to require parity among racial-gender groups (e.g., Black women, Black men, white women, and white men) without further dividing each group into those with and without disabilities.
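
For illustration only, here is a hedged sketch of what this “statistically meaningful subgroups” move could look like in code: an attribute is kept for defining subgroups only if a significance test detects that it makes a difference to the outcome. The toy data, the chi-square test, and the 0.05 threshold are my assumptions, not any particular researcher’s proposal.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Toy applicant pool; the columns, values, and outcomes are illustrative only.
pool = pd.DataFrame({
    "race":       ["Black"] * 20 + ["white"] * 20,
    "gender":     (["woman"] * 10 + ["man"] * 10) * 2,
    "disability": ["yes", "no"] * 20,
    "hired":      [0] * 15 + [1] * 5 + [1] * 12 + [0] * 8,
})

def statistically_meaningful(df, attribute, outcome="hired", alpha=0.05):
    """Chi-square test of independence: does this attribute make a
    statistically detectable difference in the outcome?"""
    table = pd.crosstab(df[attribute], df[outcome])
    _, p_value, _, _ = chi2_contingency(table)
    return p_value < alpha

# Subgroups get defined only by attributes that pass the test. Anything the
# system cannot observe or record in the first place (e.g., nonbinary gender
# in a male/female-only schema) never even enters the comparison.
kept = [a for a in ["race", "gender", "disability"]
        if statistically_meaningful(pool, a)]
print("Attributes treated as 'relevant':", kept)
```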

However, this kind of statistical turn misses the point: what is “relevant” is itself a political battleground. Sasha Costanza-Chock’s #TravelingWhileTrans anecdote illustrates this point clearly. Costanza-Chock, a “gender nonconforming, nonbinary trans feminine person,” describes how airport scanners falsely identify their nonbinary body as a risk to security.

In short, in an algorithm that labels humans as either ‘male’ or ‘female,’ ‘nonbinary’ has no place as a type of gender whose relevance can be statistically measured. For that reason, ‘nonbinary’ can never be assessed as “relevant” or “statistically meaningful.” It is an unobserved characteristic: a characteristic that matters in people’s experiences of discrimination, and yet remains unobserved because sociotechnical systems exclude it. Issues of relevance—or more precisely, what can be judged to be more or less relevant, and what is excluded even from the discourse of relevance—are thus political problems, not merely statistical ones that computers can calculate.

3. Narrow Understanding of Fairness as Parity

Lastly, I argue that PA fails to capture what it truly means for AI algorithms to be fair, in terms of both distributive and non-distributive fairness.

PA takes a distributive approach to fairness. Suppose that a philosophy graduate program uses an AI algorithm to make admissions decisions. According to PA, the algorithm is “fair” with respect to race and gender if it distributes admissions rates equally among racial-gender groups—for example, if Black women, Black men, white women, and white men all have an equal chance of getting in, say, 33%.

However, there are cases in which an unequal distribution is the more proper distribution. Philosophy in the US is a white- and male-dominated field, which can discourage Black female undergraduates from applying to philosophy graduate programs. With this in view, suppose that 30 white men make up the majority of the applicants in the case at hand, while only 3 Black women have applied. By PA’s definition, the algorithm is fair if about 10 white men (30 applicants × 33%) and 1 Black woman (3 applicants × 33%) are expected to be accepted. Is it really fair, though? It is not, because it reproduces the status quo underrepresentation of Black women. In order to actively mitigate the effects of systemic marginalization, the admissions algorithm may need to assign a higher acceptance probability to Black women (e.g., 66%, so that 2 out of 3 applicants are accepted, or 100%, so that all 3 are) than to white men.
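
For readers who want to check the arithmetic, here is a minimal sketch. The pool sizes and rates come from the example above; the expected_admits helper is a name I made up for illustration.

```python
def expected_admits(num_applicants, admit_rate):
    """Expected number of admitted applicants, rounded to the nearest whole person."""
    return round(num_applicants * admit_rate)

# PA's equal 33% rate for every subgroup:
print(expected_admits(30, 0.33))  # ~10 of the 30 white men
print(expected_admits(3, 0.33))   # ~1 of the 3 Black women

# An unequal, corrective distribution for Black women:
print(expected_admits(3, 0.66))   # 2 of 3
print(expected_admits(3, 1.00))   # 3 of 3
```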

Without attending to the contexts that systemically privilege certain groups while marginalizing others, a merely equal probability distribution does not achieve fairness. The dearth of Black women in philosophy is reproduced through a number of mechanisms, including professional philosophy’s culture of justification, which constantly pressures critical race, feminist, queer, and other “non-traditional” philosophers to justify that their work counts as philosophy. It is these cultural and institutional—that is, non-distributive—contexts that shape unfair distributive patterns, such as the racial-gender gap in admission rates. As Iris Young notes, an exclusive focus on distribution unduly restricts the scope of fairness because it fails to bring the non-distributive structure under scrutiny. I maintain that AI fairness should be examined through the lens of non-distributive justice, which Young defines as the elimination of structural oppression.

Rethinking AI Fairness: From Weak to Strong Fairness

A takeaway from my analysis is that we need a different framework to understand and implement intersectional fairness in AI. Purely mathematical and technological solutions to AI bias, such as enforcing statistical parity across the board, face the three problems discussed above. I suggest distinguishing a strong sense of fairness from the weak sense that is prevalent in the literature, and working toward the stronger one.

In a weak sense, AI fairness would mean passively and retroactively “debiasing” algorithms. The dominant PA approach that seeks to debias by creating an equal distribution among subgroups is a step forward. However, this alone cannot make algorithms substantively (as opposed to merely formally) fair. Because the intersecting structure of racial, gender, class, and other oppressions is reflected in and reproduced by AI algorithms, making algorithms substantively fair involves resisting and undermining the very structure of oppression that leads to biased algorithms in the first place.

Therefore, AI fairness in a stronger sense requires designing algorithms to actively and proactively challenge oppression and make society fairer. Strong fairness requires reframing the purpose of algorithms. To illustrate, let me return to the case of COMPAS, the recidivism prediction algorithm found to be biased against Black people. Questions to ask in order to promote strong fairness include: What is the purpose of developing and using this algorithm? Is the goal to put people in jail for more years and reproduce the current pattern of incarceration, or to change it? How can and should we redesign the algorithm if we reframe its purpose as challenging the mass incarceration of poor people of color and intersecting racial-economic inequality? The “we” here should refer, as Yolanda Rankin aptly notes, not only to those who are already at the table (researchers, engineers, and companies) but also to the marginalized (here, poor communities of color) who suffer the most from algorithmic bias yet have had no voice. How can we reform the AI development process so that marginalized groups participate as “co-producers” of algorithms rather than being included as mere tokens?

These questions might sound too radical or idealistic. While I acknowledge how difficult such changes would be, I believe that the path toward strong intersectional fairness in AI requires paradigm-shifting change. To break the pattern of AI reproducing discrimination under the guise of impartiality, we would have to reframe the purpose of algorithms: from accurately reflecting discriminatory realities to actively opposing them.

The Women in Philosophy series publishes posts on women in the history of philosophy, posts on issues of concern to women in the field of philosophy, and posts that put philosophy to work to address issues of concern to women in the wider world. If you are interested in writing for the series, please contact the Series Editor Adriel M. Trott or the Associate Editor Alida Liberman.

Youjin Kong

Youjin Kong is an incoming Assistant Professor in the Department of Philosophy at the University of Georgia. Previously, she was a Visiting Assistant Professor of Philosophy at Oregon State University. Located at the nexus of Ethics of Artificial Intelligence (AI), Social-Political Philosophy, and Feminist Philosophy, her research critically analyzes how AI reproduces gender and racial injustice and develops philosophical frameworks for improving fairness in AI. She is also committed to advancing Asian American feminist philosophy, which remains underrepresented in the philosophy literature.
