ChatGPT Reveals What We Value and What We Do Not

The promise of automation has traditionally been that machines will handle the work that we do not want to do so that we can focus our attention on work with greater value. Technologists and business leaders often capture this promise of automation using an alliteration: new technologies will steadily eliminate the need for human beings to perform dull, dirty, or dangerous work. Often other “Ds” are added to distinguish particular authors’ concerns. For example, MIT business and technology scholars Andrew McAfee and Erik Brynjolfsson add dear, by which they mean expensive. Some typical examples of dull, dirty, or dangerous work include mining, construction, manufacturing, customer service, transportation, agriculture, and security. McAfee and Brynjolfsson add middle management and analysis as examples of “dear” work that is important but costly and thus desirable to automate.

While automation once referred to mechanical means of replacing human labor, artificial intelligence (AI) has recently taken the spotlight. Indeed, when OpenAI released ChatGPT—their AI-enabled text-generating chatbot—the company’s CEO, Sam Altman, tweeted that this is only the beginning:

soon [sic] you will be able to have helpful assistants that talk to you, answer questions, and give advice. later [sic] you can have something that goes off and does tasks for you. eventually [sic] you can have something that goes off and discovers new knowledge for you.

Left unsaid was what exactly Altman hopes will be automated and why we would want it to be. Other tweets of his make clear that his goal is a general-purpose automation tool: an AI application intelligent and powerful enough to do whatever we wish of it, including solving the most pressing social, political, and ecological problems of our time.

Since Altman's messianic vision lacks specifics, people have filled in the details on their own, based on the current capabilities of his company's product. The conversations I recall from the winter of ChatGPT's release were of three general kinds:

  1. Fearful: “This will facilitate plagiarism!”
  2. Hopeful: “This will save me time and effort writing emails!”
  3. Sardonic: “Finally, I can have the computer write letters of recommendation for my students!”

Each of these responses to the automation of text generation reveals something about what we value in our working lives—or what we take others to value.

Consider the fearful responses. Many academics have opinions about ChatGPT and how it is disrupting education. Some of us think there is pedagogical value in traditional essay writing and are frustrated that students have an easy avenue for sidestepping—in whole or in part—the intellectual process of wrangling sources and regimenting thoughts into prose. Others among us have always been suspicious of old-fashioned essay assignments and have used the threat of AI-powered plagiarism to argue that all professors should devise more creative and engaging ways of evaluating their students.

Either way, the discourse follows the contours of a familiar debate in education over what kind of academic work is or is not valuable, from both teacher and student perspectives, and how to achieve these valued ends. The traditionalist educator claims that the essay writing process reinforces the learning of course material and the development of critical thinking and communication skills, both of which have intrinsic and instrumental value. The radical educator might find instrumental and intrinsic value in the same skills and knowledge but reject the idea that traditional assignments are the only or best ways to achieve these ends. Finally, at least some (but, one hopes, few) students find no intrinsic value in their education at all. For the profoundly disengaged student, at best, education provides instrumentally valuable workplace skills and the credentials and connections necessary to secure a well-paying job. Anything else—such as an essay assignment—is merely an obstacle.

The hopeful and sardonic responses do something similar, but in a way that reveals contours of values that might otherwise remain unvoiced. Most of us write, read, and respond to too many emails. Moreover, many academics find the system of requesting, writing, and reading letters of reference—most of which are substantially identical and based on too little knowledge of a student’s character and abilities—to waste everyone’s time and energy. Nevertheless, these views rarely find their way to the surface in formal contexts since, on some level, we accept that the systems we depend upon require these forms of communication.

When ChatGPT and other forms of automation expose our values like this, we can respond in two different ways. On the one hand, we can adopt the technology, automating away such disvalued activities. This approach leaves things essentially unchanged on a structural level. For some individuals, it might mean having additional time to focus on activities they find more valuable—professors can write more papers if they write fewer emails and letters of recommendation; students can focus more on their post-baccalaureate careers if they spend less time writing essays. It might also mean, however, that some roles instantly become redundant.

Writing emails and updating other people’s schedules is dull and could perhaps be automated with an AI tool—eliminating the need to hire human personal assistants for deans and big-shot professors. What is more, if Altman is correct that some future AI could discover new knowledge, university boards might decide that it is not worth employing so many professors, especially if students are automating away their education. Professors are expensive (or “dear”), after all.

The other line of response we can take to AI and automation is to use their current or near-future ability to replace human work as an opportunity to reflect. Is essay writing or the skills it purportedly develops really valuable? Do letters of recommendation actually serve a valuable purpose? If we answer “yes,” then even if the activity can be automated, we might have a reason not to. But, even if we find that some automatable activity is disvalued, that does not force the conclusion that we should automate it. Before making that decision, we should have more honest conversations about why this kind of work is so prevalent to begin with. Otherwise, in a rush to automate aspects of our work, we risk entrenching problematic systems while simultaneously reducing opportunities for human oversight. We might find that by rethinking our social structures, we can achieve valuable ends more consistently without automation. Instead of dealing with automated essay writing, we could find assignments with the same learning outcomes that are more personally meaningful to our students, pre-empting the desire to automate in the first place. Instead of automating the writing of letters of recommendation, we could remove them from admissions and hiring altogether and rely on the many other, maybe fairer means we have to evaluate candidates.

Finally, we need to ask not just what is (dis)valuable but also for whom. There is a whiff of classism to many automation projects—a failure to recognize that some kinds of work have intrinsic value to some individuals and groups despite their perception by business and technology elites as being “dull,” “dangerous,” or “dirty.” Moreover, there often seems to be little thought about what sort of work might replace these jobs that would still align with these workers’ values. Communities whose local economy is tied to agriculture, aquaculture, manufacturing, or resource extraction are sometimes in this position.

Another example is care work. Our recent pandemic experience shows that there are not enough full-time carers to meet the needs of elders, people with disabilities, and children. Seeing this unmet need, several companies have sprung up to provide technologies that supplement or replace human caregiving for these populations with robots or other AI applications. Implicit in this business model is the assumption that being cared for is valuable, but caring is not. After all, care work involves much drudgery, is frequently unsanitary, and is occasionally dangerous. Alternatively, perhaps we see the value of care work but not enough to pay its “dear” cost, individually or collectively. So, what choice do we have but to automate?

Instead, we could use the possibility of automating care work to reflect on what we value in this regard. Are the views of technology and business leaders—primarily male and so less likely to be called upon to do care work than the women in their lives—reflective of a lack of value for care work more generally? Or are the rest of us sincere when we claim to value care work, either for the good of the cared-for or for the intrinsic value of caring itself? If the latter, perhaps we should consider restructuring aspects of our society to make care work a more viable career and a more widely available service. For example, perhaps feminist Marxists, such as Selma James and Angela Davis, have been right all along that caregiving should be paid for or organized by the state.

No technological development is inevitable, no matter how much quasi-religious zeal technology leaders may inject into their sales pitches and congressional testimony. As individuals and as a society, we always have a choice when presented with some new technology that promises to automate away some activity. When we make that choice, we must be as transparent as possible about our values. If what we truly value is incompatible with automation, we can—and must—say no.

The Current Events Series of Public Philosophy of the APA Blog aims to share philosophical insights about current topics. If you would like to contribute to this series, email rbgibson@utmb.edu or sabrinamisirhiralall@apaonline.org.

Trystan S. Goetze
Postdoctoral Fellow in Embedded EthiCS at Harvard University

Trystan S. Goetze (they/he/she) is a Postdoctoral Fellow in Embedded EthiCS at Harvard University. Their research interests include epistemic injustice, moral responsibility, and the ethics of technology. Most recently, they have taught courses and modules on the ethics of computing and artificial intelligence. As of July 2023, they will be Senior Lecturer and Director of the Sue G. and Harry E. Bovay Program in the History and Ethics of Professional Engineering at Cornell University.

1 COMMENT

  1. Thanks for the thoughts, Trystan. I wholly agree with your main point–namely, that ChatGPT, other AI tools, and automation in general reveal the values of all involved. You do a good job pointing out a lot of interesting dimensions/cases in which this is true.

    However, I find myself disagreeing with some of your smaller points, at least as I understand them. For instance, it is not at all clear that /only/ “old-fashioned essay assignments” can be completed with AI tools. The insinuation is that only those professors who are traditionalists or “behind the times” will have to deal with students using AI tools to complete their assignments. As we have shown at our newsletter/blog, many very clever and innovative take-home assignments can be completed with a mixture of current AI tools–and more powerful tools are being released every month (see, e.g., OpenAI’s Code Interpreter). This is one reason we started our “AI-immunity challenge”. In general, we are finding that professors are significantly overconfident in this dimension. Even in cases where we cannot crack a clever/demanding assignment with AI tools, we can still get passing grades with little effort (see, e.g., https://automated.beehiiv.com/p/failed-plagiarize-economics-project-ai).

    Another point I disagree about is the assumption that we teach in contexts where we have the privilege of getting students to intrinsically care about our subjects. For many professors in the trenches of teaching these days–especially adjuncts teaching non-majors, general education courses, asynchronous courses, etc.–this can at best be a small part of the solution. For us, the ship has sailed, try as we might to bring it back to port.

    Finally, regarding a few of your points about whether automation is good for those whose tasks are being automated, there is some preliminary evidence that it is: https://economics.mit.edu/sites/default/files/inline-files/Noy_Zhang_1.pdf

    I hope I haven’t misrepresented your position in my comments. Please let me know if I have.
