I went to a conference on academic integrity at Randolph College back in 2018. There was one hot topic at the time, and it was not AI. It was paper mills. What could be done about them? If a student could simply pay someone to write an original paper, how could anyone show that someone else had actually written it? A disaster!
Times change quickly. I doubt many people are very worried about paper mills anymore, beyond their use in, of all things, academic publishing. Now, instead of Googling “college papers on demand,” students can simply open ChatGPT 4 (for free!) and likely produce a passing paper.
Within roughly five years, the “cutting edge” way to cheat has radically changed. How, then, can we philosophers, the truest teachers of good argumentative writing, still achieve our pedagogical goals?
It is the assumption behind that dramatic question that I want to challenge. The concern around AI is somewhat misplaced: why do we care so much about writing specifically?
We often place too much emphasis on writing over the wider range of skills that we, as philosophers, can instill in our students. Writing is just one path to enlightenment. Students never end a semester praising how much really great writing they did. They say that they liked being able to think for themselves. I have even had students say it was the first time they had ever been invited to really think and judge for themselves. Writing need not carry so much weight.
What skills are students actually taking away from philosophy? Making “good philosophical writing skills” the goal in itself is, I think, a mistake: the greatest end of philosophy as an educational endeavor is equipping students to engage sincerely with difficult questions as a shared undertaking. That is the skill that best enables them to navigate their rich futures.
To be clear, I am not saying that writing has no place in how we teach students. Writing has value, as should be clear from the fact that I am writing this! Rather, my question to my fellow educators is more reflective: why are students using ChatGPT on their assignments? It may not be easy to admit, but some common approaches to written assignments are simply not very engaging. Nearly everyone knows the classic Canvas “Post 2 responses to your peers.” All of us are familiar with the summarize-and-respond paper. When writing comes off as a chore rather than an interesting occasion to reflect, students will be drawn to the easy out. The written assignment that serves merely as a way for students to demonstrate comprehension is not all that helpful, and it is, to them and to you as the grader, neither efficient nor interesting. Especially at the introductory level, written assignments should be fun, or at least engaging to students where they are. Where I think philosophers go wrong is in failing to invite students to reflect on their own prior beliefs.
I’d estimate about 1-in-30 of my students end up using ChatGPT dishonestly. Some of you might think I am just underreporting or unaware. Maybe. I have seen reports of much, much higher rates than that. One can find testimony of rates as high as 30% or more!
Where we go wrong, at times, is in demanding that students write out an argument. They might not have sincere reasons for judgment. Space for suspending judgment is vital, and it is something that, in my experience, academics tend to take for granted after years spent arguing about the most difficult problems they can imagine.
The obvious reply here is that some philosophy requires students to engage with unobvious questions about which they have few priors. I acknowledge that. But why write about it? What makes writing so special? If the point is to be able to reproduce famous problems and answers, why not just design a traditional exam? Why not an oral examination? Why not a set of short responses? I include a low-stakes midterm in my usual course. Most pass, and some fail; I then figure out how to address those cases. I believe I achieve the same pedagogical ends as a short written assignment would.
What about AI? I say that AI cannot replace the philosopher as a person. If anything, AI is going to make those important personal connections rarer in young people’s lives than ever. Now, I am not saying every philosophy teacher has to be deeply invested in the personal well-being of their students. But what I am saying is that in the lecture hall or seminar room, whether or not we educators realize it, we are serving as role models. We should aspire to be inspiring.
AI can play a role in this story. Consider: why are we so often focused on readings? “Sacrilege! They must read!” part of me thinks. But part of me thinks otherwise.
Imagine, instead, that they get a one-page primer like “Jeremy Bentham was a utilitarian who thought…” and are then told to have an LLM-based “discussion” with “Bentham” about a selected topic and to bring “his” answers to the following class. The educator then leads from there, with students equipped to start from somewhere rather than nowhere. Students thus get a lecture (or further discussion) about the discussions they have already had.
This kind of technology is still emerging (Lucretius-GPT is one example), but I suspect it will become more and more available as time goes on. We should keep an open mind about this possible narrow use of AI.
Written assignments can be designed to do this, too. They need not use AI tools specifically, but an ideal written assignment (insofar as it disincentivizes the dishonest use of AI/LLMs) is conversational. A prompt can be written so as to be, tacitly, responding to you as the educator. Responding to a basic distinction, for example, is one way of doing this. Instead of “Respond to X’s argument for Y,” try “We talked about A or B; explain what we talked about and defend some form of A or B.” The latter is, whether or not we actively think of it that way, more personal. Replace mandatory quotations and page-number citations with a more fluid recall of what has happened in the classroom.
This, importantly, makes the endeavor shared rather than just words on a page responding to words on a page. That is one of the reasons young people are drawn to ChatGPT in the first place: LLMs “sound” like a person. It is okay to make space for that in our pedagogy.
That’s my optimistic take. My views about AI overall, however, are more mixed, so let me briefly mention my pessimistic one. The greatest threat AI poses to all of us, in my view, is the collapse of social skills. Perhaps academics, philosophers included, are honestly not the best people to counteract that. Forget worrying about the collapse of writing skills; students are becoming incapable of even thinking for themselves. Why think when one can just prompt?
Here, I return to the idea of the educator as a model. I tend to be fairly personable as a lecturer. That is not just a matter of personality; it is, in part, an intentional decision. How can I expect students to be excited about the hard problems if I am not? The impartial and distant lecture can mislead. Do not be afraid to be dramatic: sometimes what is at stake is dramatic.
Discussions are not important just for the material they contain; they also teach how to discuss hard questions together in an honest and charitable way. That last skill, in my view, is the single most important one we philosophers can practice and pass along. Forget writing just for its own sake; we should be teaching dialogue above all else, in all its forms. Writing need not play as dominant a role in that as some of us think.
Joshua Paschal
Joshua Paschal (he/him) is a doctoral candidate and currently Nelson Dissertation Fellow at Indiana University. His dissertation research is on the philosophy of criminal law, more specifically the setting of standards, the negligence debate, and the hate crimes debate.