
Ableism and ChatGPT: Why People Fear It Versus Why They Should Fear It

Philosophers have been discouraging the use of ChatGPT and sharing ideas about how to make it harder for students to use this software to “cheat.” A recent post on Daily Nous represents the mainstream perspective. Such critiques fail to engage with crip theory, which brings to light ChatGPT’s potential to both assist and, in the long run, silence disabled people. On the one hand, ChatGPT could be used as assistive technology by the millions of people with a communication disability or difficulty. On the other hand, the widespread use of this technology, and the perception of algorithmic objectivity, could create a standard of “correct English” that further marginalizes and stigmatizes alternative modes of communication. The pros and cons of ChatGPT for disabled people have been widely neglected in debates that center on the potential for this technology to be used for deceptive purposes. These debates not only sideline disabled people, but also promote carceral techniques, like stricter policing and punishment, that disproportionately harm disabled and disadvantaged students. Educators already excessively discipline and punish racialized and disabled students, and stricter policing will exacerbate these disparities. Little attention has been paid to why these disparities exist: namely, because of elitist academic standards that uphold intersections of power and privilege.

In this post, I object to carceral responses to ChatGPT and defend a structural approach. I argue that we should value, preserve, and protect the rich variety of communication styles that exist in the human population. If we do this, then students will not feel pressured to use a chatbot to code-switch and assimilate into academic English instead of writing in their own voice.

ChatGPT: Cheating or Assistive Technology?

In my social media feed, many philosophers have been complaining that ChatGPT will allow students to cheat, plagiarize, and neglect their critical thinking skills. Some professors have banned ChatGPT, some are returning to more traditional grading methods like in-class exams and presentations, and some are using detection software to penalize students who use this technology. A headline from Inside Higher Ed declares, “ChatGPT is a plague upon education.”

Few critics have commented on what ChatGPT could mean for disabled students and teachers. In the U.S., an estimated 5-10% of people have a communication disability, though this estimate is conservative because it only tracks recorded cases of speech/language difficulties. In addition, an estimated 54% of American adults read below a sixth-grade level. People with motor disabilities like dysgraphia can struggle with spelling, grammar, and organizing thoughts, and may need more time to complete written assignments.

These are just a few examples of the types of disability and disadvantage that ChatGPT could help to equalize. Contrary to Inside Higher Ed’s cautionary tale, proponents of the technology say that ChatGPT could level the playing field and improve inclusion for people with disabilities. There are some caveats, of course. Each new iteration of the software should be designed to be more accessible than the last. The current model can already understand “poorly written” commands and summarize complex text, and these accessibility features should be expanded in subsequent versions. To this end, developers should consult with disabled users.

If ChatGPT can be used to make education more accessible, then why are so many educators against it? I suspect that part of the reason is that ChatGPT threatens to disrupt able-bodied privilege, which is an entrenched feature of the education system—something used to make decisions about grading, publishing, and hiring. This is particularly true of higher education, which is exceptionally elitist: it is designed to be exclusionary, and therefore assistive technologies that disrupt ableism (and other forms of exclusion) are seen as a threat. ChatGPT makes it harder to enforce academia’s bias against marginalized speakers. How can we grade our students’ papers if anyone can write well-formed English sentences, even people from the most underfunded school districts? How can we publish the “best” papers if anyone can write fluent English prose, even people who speak English as a second language? How can we hire the “best” job candidates if anyone can keep up with academia’s infamous publish-or-perish mandate, even people with dysgraphia and chronic fatigue syndrome and full-time caregiving responsibilities? Assistive technology makes it easier for disabled and disadvantaged people to compete in the “hunger games” of academic achievement. It also makes it harder for academics to exclude and punish the most oppressed students or to justify their own privilege.

Perhaps this is why so many academics are wary of it. It is telling that few philosophers have commented on ChatGPT’s potential to increase educational access. If it can do this, then is it cheating or is it assistive technology? The World Health Organization (WHO) defines assistive technology as any device that “enables and promotes inclusion and participation, especially [but not exclusively] of persons with disability, aging populations, and people with non-communicable diseases.” The WHO promotes the unimpeded and unstigmatized use of assistive technology in educational settings, as does American legislation including the Individuals with Disabilities Education Act, the Assistive Technology Act, and the Americans with Disabilities Act. Nonetheless, many people still view assistive devices with suspicion, saying things like “the technology is doing all the work,” “it’s an unfair advantage,” and “every student should have to work just as hard.” These objections rest on false assumptions about the value of independence while ignoring the fact that privileged people depend on social scaffolding for their success. (The built environment, after all, has been designed to assist and uplift them.) Studies confirm that the use of assistive technology is stigmatized, and there is pressure on disabled people to avoid using this technology so as to “pass as normal,” even if doing so carries social, emotional, and economic costs. If ChatGPT is a form of assistive technology—a system used to increase access to education—then it is unsurprising that educators would be skeptical of it, since people are skeptical of assistive technology in general. This skepticism may rest on a fear of losing able-bodied privilege and other advantages conferred by institutional hierarchies.

One might object that if nondisabled people use ChatGPT, then it is not assistive technology. But nondisabled people use screen readers and closed captions, and this does not preclude those tools from being classified as assistive products. Contrary to popular belief, assistive technology benefits everyone, and can be used to mitigate many structural inequalities besides ableism—for example, disparities in literacy, fluency, and free time. Assistive technology is defined by its ability to reduce barriers to education, not by its usefulness to disabled people per se.

To be clear, I do not mean to suggest that ableism (or elitism more generally) is the only reason for people’s skepticism of ChatGPT. There are many reasons to be wary of this technology, some of which I will raise in a moment. But the lack of attention to ChatGPT’s potential to increase educational access, along with the tenor of the objections lodged against it (it facilitates “cheating”), suggest that ableism may play a role in many people’s thinking on this subject. This theory gains credibility from the fact that ableism in academia is structural and systemic.

ChatGPT: Inclusion or Assimilation?

Having said this, I myself am averse to the widespread adoption of ChatGPT, though not for the reasons normally given. I am skeptical of this software because I fear that it will normalize and render compulsory the use of fluent Standard English to the exclusion of alternative modes of communication. That is, the use of ChatGPT—especially if it is seen as a model of objectively correct English—may stigmatize and potentially eliminate non-standard modes of communication, particularly those preferred by racialized, feminine, and disabled speakers.

To begin, ChatGPT is typically used to produce Standard English (SE), the preferred vernacular of wealthy white people. Few students will use AI to generate non-standard vernaculars, even though millions of Americans use them at home. Yet non-standard vernaculars are penalized by many educators due to institutional racism. The popular myth that there is a “correct” vernacular is called “linguistic prescriptivism,” and this belief “is not only a mark of class, ability, and educational privilege, but is also, particularly in the United States, entangled with racist, xenophobic, and White supremacist attitudes.” African-American Vernacular English (AAVE), in contrast to SE, is stereotyped as “unintelligent, lazy, and broken,” and this false perception contributes to racial disparities in education, healthcare, housing, and other public goods. It also leads to epistemic injustice—specifically, AAVE speakers are denied the credibility and respect that they deserve. ChatGPT could exacerbate these inequalities by making non-standard vernaculars seem “wrong” and “exotic” compared to its own algorithmic bias for SE. If marginalized speakers can use this technology to mask their natural voice, they may feel obligated to do so.

Second, ChatGPT is typically used to produce “strong” and “clear” sentences, which are considered a “masculine” style of communication. In contrast, “feminine rhetoric” is characterized by the use of hedging statements like “I wonder” and “you know”; tag questions like “isn’t it?”; and qualifying statements like “maybe” and “probably.” Note that feminine rhetoric is not necessarily preferred by women; it is favored by feminine speakers of any gender. The important point is that there is nothing wrong with using feminine rhetoric. On the contrary, it can be useful for such purposes as piquing interest, evincing humility, and showing compassion for an interlocutor. ChatGPT may unfairly marginalize people who prefer this style by making their speech seem “weak” and “indirect” compared to its own algorithmic standard. But this standard is based on exposure to online sources like Reddit, Wikipedia, and archived books, which already encode a masculine bias. The illusion of algorithmic objectivity may convince people that their rhetorical style is “incorrect” and that they would be better off using ChatGPT.

Third, ChatGPT is typically used to produce fluent Standard English. I doubt that anyone would use the software to produce dysfluent text, although many disabled and disadvantaged people communicate dysfluently. As more people use ChatGPT, fewer will write in dysfluent English. Yet communicating in one’s natural voice is a human right that ought to be protected and encouraged. This point is argued compellingly by Joshua St. Pierre, co-founder of the Did I Stutter? project, which aims to 1) resist speech assimilation and 2) advocate for dysfluency pride. ChatGPT threatens to eliminate the presence of dysfluent speech in academic writing, forcing more people to assimilate. Disabled speakers are entitled to write in their natural voice, and this entitlement is eroded by the widespread use of a device that produces fluent English. If students can write in fluent English, then they may be forced to do so as a condition of getting an A. Yet the notion that dysfluency is “wrong” is merely an ableist bias.

Another issue with ChatGPT is its content. Because ChatGPT was trained on mainstream websites, it reproduces the racist, sexist, ableist, and classist prejudices of its training data. For instance, when asked to describe the influential African-American blues singer Bessie Smith, ChatGPT could not provide as much information as it could for notable white and male artists. This is unsurprising given that Black women are underrepresented online. Wikipedia, for example, “acknowledges that systemic biases have led to the underrepresentation of women, minorities, and other demographic groups on its pages—and that the problem is particularly acute for biographies of living persons.” This is just one of ChatGPT’s many known algorithmic biases. In my own use, I have noticed that if I ask ChatGPT to solve an ethical dilemma, it will always, by default, produce a centrist response that ignores structural inequalities. That is, ChatGPT reproduces the “commonsense” of the white, male, able-bodied majority, thereby discrediting “the commonsense of the racialized, poor, queer, transgender, or disabled” as “philosophically irrelevant ‘ideology,’ ‘activism,’ or ‘delusion,’” to quote Robin Dembroff. But people will continue to use ChatGPT as long as its biases are considered objective fact in contrast to the experiential knowledge and interests of oppressed groups.

While it is true that the pushback against ChatGPT in education will militate against these risks, the reasons behind this pushback are wrong-headed. They are about protecting academia’s elitist standards by policing and penalizing people who break them, rather than eliminating the standards that discriminate against marginalized speakers. Punishing rule-breakers (or resisters) will uphold the status quo of white, able-bodied, class privilege. Furthermore, the use of carceral techniques to enforce academic norms already disproportionately harms racialized, disabled, and low-income students. This is one of the main objections to the police state, as articulated by Angela Davis and Ruth Wilson Gilmore: policing harms oppressed groups. I follow decarceral feminists in advocating for the abolition of regimes of surveillance and punishment.

My own worry about ChatGPT is that its routine use will further marginalize and stigmatize non-standard vernaculars, feminine rhetorical styles, and dysfluent communication, and, by extension, the marginalized speakers who use them. If bot-generated text is widely regarded as “normal” and “correct,” then alternative ways of communicating will be seen as wrong and inferior by comparison. (Disability studies scholars have raised similar worries about the use of gene-editing technologies to create “designer babies.” If enough people use this technology, then unedited human beings will be seen as inferior and “invalid” in comparison to their edited counterparts, even though there is nothing inherently wrong with being unedited, as all of us currently are. Over time, gene editing will become compulsory, as “invalids” are excluded from society. This is the premise of the film Gattaca, which has influenced biomedical policies.) The use of technology to eliminate stigmatized human variations can be seen as a form of eugenics.

Rosemarie Garland-Thomson describes eugenics as an “ideology and practice” that aims “to rid society of the human characteristics that we consider to be disabilities in the broadest sense and, often by extension, of people with disabilities.” She says “in the broadest sense” to denote that disability is a contingent social construct that shifts from one context to the next. “Disability,” as such, encompasses a variety of traits seen as undesirable at a given time and place.

On this interpretation, many other social classifications overlap with disability. As I have shown elsewhere, Blackness and queerness overlap with disability in that they are situated as disabling. (Blackness, for instance, was constructed under slavery as a form of deviancy or delinquency, and thus a disabling condition, something to be rehabilitated, contained, or eliminated). Garland-Thomson adds that eugenics “advances the modern project of designing technologies to produce the future we want, to include designing the future people we want.” ChatGPT qualifies as a eugenic technology on this description insofar as its standard usage serves to eliminate the modes of communication typically used or preferred by “socially undesirable” groups. Writing in fluent Standard English will become increasingly normalized and compulsory as more people use ChatGPT and regard it as a standard of acceptable speech. This is a form of eugenics in that it threatens to eliminate and stigmatize non-standard modes of communication, and the marginalized groups that use these modes in their day-to-day lives.

In opposition to eugenics, Garland-Thomson encourages us to “conserve the human variations we think of as disabilities because they are essential, inevitable aspects of human being and because these lived experiences provide individuals and human communities with multiple opportunities for expression, creativity, resourcefulness, relationships, and flourishing.” This is part of a broader project of conserving and valuing diversity in general. ChatGPT, in its standard usage, is not consistent with conservation. On the contrary, it advances the eugenic agenda of eliminating diversity of communication. This goal is epistemically, emotionally, and socially harmful to people who use alternative modes of communication. It is also harmful to academia, which benefits from diverse ways of communicating, knowing, and relating to others. ChatGPT standardizes and homogenizes human speech, producing a bland and boring sludge of “correct English.” Scholarly writing, even when done well, can be boring and hard to read. But if everyone were to write in this style, we would lose the richness and excitement of real-life communication, which includes speaking to people from different cultures and communities.

Why Juridical Solutions Don’t Work

The popular response to ChatGPT is to identify and punish people who use it. This is a band-aid solution that shifts blame onto the victim—the marginalized speakers who are already punished for using their non-standard, but perfectly acceptable, natural voices. If people are punished for using their own voice, they will naturally resort to techniques that allow them to code-switch and assimilate into the dominant culture. Elitist academic norms create a double-bind for marginalized speakers: either code-switch or use a chatbot to mask your native tongue.

The popular solution is doomed to fail because it doesn’t get to the heart of the problem: the elitist academic standards that incentivize and encourage the use of ChatGPT. Rather than punishing people who use this software, we should be addressing the structural injustices that motivate its use in the first place. When people are allowed to write in their own voice, they will not see ChatGPT as the best or only option for getting an A or a job or a grant. The real problem is not, as many academics believe, that students are “lazy,” “unintelligent,” or “dishonest,” but rather that academia’s elitist rules push students to adopt techniques of assimilation that silence and stifle their natural voice. As we know, epistemic injustice doesn’t just silence people; it motivates them to silence themselves. This is what is happening in higher education today. The master’s tools will never dismantle the master’s house. There is no reason to think that stricter anti-cheating, anti-plagiarism, and policing protocols will deter students from using ChatGPT. These carceral techniques punish the least well-off students. A structural, non-juridical approach is needed: we need to value and conserve the diversity of communication styles that exist in the human population, rather than insisting that everyone use the same voice. People like to express their thoughts and they want to feel heard. Students would not resort to using a chatbot so readily if their natural voices were respected and valued.

I have heard many objections to ChatGPT, but none that address its use as a tool of eugenics to marginalize “socially undesirable” communication styles and speakers. This oversight stems from systemic ableism and elitism in academia. At the same time, many people have advocated for stricter policing and punishment of students who use ChatGPT, which discounts the software’s potential as assistive technology, and, in practice, would exacerbate structural inequalities in education. This, again, underscores a deep-seated ableism. Academia’s lack of crip perspectives allows academics to ignore disabled people’s thoughts on the pros and cons of ChatGPT. While I personally feel most comfortable writing in a standard form of English that has been drilled into me from an early age (though fluency does not come naturally to me), I value and cherish the multiplicity of vernaculars and rhetorical styles that my students and friends use, and I do not want to see them silenced. I worry that as ChatGPT becomes more popular, fewer people will fight for the right to communicate in their natural voice, more people will assimilate, and the rich mosaic of natural communication will be dissolved in the melting pot of AI.

I am sure that many people will disagree with me. I suspect that many academics will defend the view that AAVE, dysfluency, and feminine rhetoric are “wrong” or “lesser.” But I hope that some of my colleagues will use their voices to defend and conserve the diversity of communication that real human beings use, in contrast to the bland and boring output of a robot.

Mich Ciurria

Mich Ciurria is a queer, disabled philosopher who works on Marxist feminism, critical disability theory, and critical race theory. She/they completed her PhD at York University in Toronto and subsequently held postdoctoral fellowships at Washington University in St. Louis and the University of New South Wales, Sydney. She is the author of An Intersectional Feminist Theory of Moral Responsibility (Routledge, 2019), and a regular contributor to BIOPOLITICAL PHILOSOPHY, the leading blog on critical disability theory.
