Why should your students do the work?

Most philosophy teachers we know have adopted strikingly defensive positions on AI use in their classes. One faction—call them Luddites—rejects these novel “labor-saving” technologies altogether. After all, many students are just copy/pasting dubious-quality AI output without internalizing anything. In response, Luddites often end up reinstituting in-class essays for students who may never again write out an extended essay by hand. Might those students benefit more from learning to co-write with AI instead? That’s an empirical question, but it’s one the Luddites are not particularly interested in exploring.

At the other extreme are teachers—call them Surveillants—adopting sophisticated technological countermeasures to ensure their students are doing the work. Again, this makes sense: We don’t want our students to cheat themselves out of learning. So keep your camera on during class, and do all your work while Revision History watches! But even if these efforts succeed today, they won’t tomorrow, and Surveillants are running themselves ragged trying to stay ahead of the AI curve. Meanwhile, students rightly feel belittled, overpoliced, and even resentful.

We find the responses of both Luddites and Surveillants completely understandable. As philosophy teachers, we don’t just want to teach students the content of philosophy; we want to help them learn to think for themselves. And the way we learned to think for ourselves was to write for ourselves. We were taught to articulate our own thoughts by staring at a blank page and drinking coffee until words began to pour out. So how can we expect our students to become clear and careful thinkers if AI simply writes their papers for them?

But let’s be honest: The Luddites and the Surveillants are both stuck in AI denial. If their goal is to travel back to halcyon times when students simply plagiarized from Google or paid someone else in beer to write for them, those days are long gone. Trying to turn back the clock is psychologically unrealistic, because AI is here to stay—there’s been way too much money behind it from the start.

We want to highlight an assumption the Luddites and Surveillants share: that learning is fundamentally about doing the work. Somewhere in our academic careers, many of us became convinced that education must be earned through labor. We proved ourselves early on by memorizing and regurgitating, later on by synthesizing and producing. And we repeated that cycle again and again through high school, college, grad school—and, for the lucky few, all the way through tenure.

This model of education as work has always had problems, long before AI. But now, these problems have become unavoidable. Instead of trying to compel students to complete assignments without AI, by either handwriting in front of us or typing under technological surveillance, maybe we should ask why our assignments have failed to motivate students.

Were those assignments just… boring?

In any event, this motivation problem can’t be solved by withdrawing or introducing more technology. And honestly, if fancy autocomplete can effortlessly complete the assignments of years past, maybe we were never really assessing student understanding as deeply as we thought. (Are we really just upset that we can’t reliably distinguish a chatbot’s mechanical reproduction from a student’s own robotic regurgitation?)

It’s worth separating teachers’ surprising complaint:

“Oh no, AI is making it so that no one has to work!”

from their pedagogical response:

“We need to ensure our students are working!”

Let’s step back for a moment. As teachers, we want our students to learn. But when students don’t put work into our assignments, we can’t tell if they’re getting any learning out of them. That’s how AI has come to appear as such a threat, given that large language models are basically tailor-made to create derivative, prompt-satisfying work with minimal or even no human engagement.

But how much can we blame students for efficiently navigating the accreditation vending machine of the contemporary university? Maybe this points more to the weakness of our institutions or assignments than our students’ moral characters. And if our job is to prepare our students for the world they’ll be living in, they do have a legitimate complaint: Why are you making me do this—write an essay on Descartes by hand while you watch? Okay, should I use a chatbot to generate an outline the night before and memorize it? Is that really how you plan to “assess my learning”?

The truth is, our students are learning something—how to engage with the least capable AI models they’ll ever see in their lifetimes. They’re going to live in a very different world than the one we grew up in. Against that background, calls for students to “do the work themselves” sound at least as antiquated as the calculator analogies we keep rehashing. But tellingly, our knee-jerk responses recall what David Graeber calls the “Paradox of Modern Work”:

  1. Most people’s sense of dignity and self-worth is caught up in working for a living.
  2. Most people hate their jobs.

Hmm. Why do so many of us spend vast amounts of time and energy on work we despise?

Tragically, Graeber thinks, many of us “grown-ups” feel the need to prove our adulthood by demonstrating our incredible capacity to grind, hustle, or flat-out suffer for our careers. (Any philosophers perking up yet?) That’s just responsible adulting! And when we unthinkingly pass on our own inherited values and practices around education as work, we end up teaching our students to punish themselves similarly, whether or not these values and practices will be useful for them.

But what a lousy model for learning anything, let alone maturing into a healthy, well-adjusted adult.

Why can’t learning be more like play: more self-directed, spontaneous, and fun?

Today, many students 1) don’t want to do their own work, and 2) have realized they don’t have to. But to solve this motivation problem, it’s time to drop the conflation between education and suffering. In the age of AI, the new pedagogical frontier is to create assignments that students find intrinsically motivating and enjoyable, not to find ways of reinstituting instrumentally useful drudgery. If we make learning less like work and more like play, more students will choose to engage in it, and we won’t have to fight them for it.

There’s certainly a need for larger institutional changes. Employers need accreditation, so registrars need grades, and education-as-work practices have emerged in response, compelling students to jump through years of stressful hoops to develop their skills, prove their compliance, and keep their funding. There’s a serious conversation to be had about how our society conceives of the purpose and nature of education itself.

But in the meantime, it’s worth thinking about the smaller-scale changes we can make in our own classrooms in 2025. How can we redesign our assignments, syllabi, and courses to be more

  1. Fun,
  2. Realistic, and
  3. Formative for students who will live in a very different world than the one we grew up in?

Here are a few ideas we loved to help get you thinking:

  1. Make assignments tangibly interactive

Eli Shupe (UT Arlington) directs Make Philosophy, which develops 3D prints and lesson plans you can access for free right now. Confronting students with a physical model of the Ship of Theseus or the Experience Machine draws them in and gets them thinking, imagining, and sharing in ways our sloppy chalkboard sketches just can’t match.

Even if you feel a bit silly bringing “toys” into classrooms, the enthusiastic responses from students are undeniable. Grounding your discussion day with the surprise of a physical model can make it far more memorable and even formative.

  2. Make assignments genuinely cooperative

The group projects you’re remembering were awful because they weren’t cooperative enough. So Frederick Choo (Rutgers) has been experimenting with team tests, where groups of students reason and answer together in class. The instructor can walk from team to team, hearing their justifications and even giving hints until a team decides to lock in their answer on a shared scratch-off card—which replicates the feeling of a lottery scratcher. If they get it right the first time, they get full credit; if not, students still get to discuss and re-attempt for lesser credit, which they really appreciate.

Even if you can’t invest in scratch-offs, Frederick’s been experimenting with other methods. But students love that team tests are more fun and less anxiety-producing than traditional tests. They’re also more realistic because they get students engaging with each other. And those are the formative skills students need going forward—not the ability to write in perfect isolation, but the ability to communicate clearly with each other and arrive at and justify joint decisions.

  3. Or, make assignments competitive (like most games!)

Why not lean into the competition? Layla Williams (University of Oklahoma) has students race to piece together arguments that are missing key words. Which team can lay out the right words from a physical stack of options first? In conversation, Layla explicitly connects the thrill of competition with the philosophical virtues and skills students are developing in action, which they’ll need whether they’re working with AI or not.

You don’t have to throw out your old syllabus and transform your whole class into an Introduction to Philosophy through Video Games—though as Javier Gomez-Lavin is proving at Purdue, you certainly can! But if your students are disengaged enough to use AI as a labor-saving device, you might try assigning at least some tasks they won’t find so dreadfully laborious. Who knows, you might even have a bit more fun teaching and grading them.

So, you want more philosophy majors? Make more playful philosophy classes! Why have we been making them laboriously rewrite our lectures into formulaic essays all this while? How did we manage to make their first-ever philosophy classes unengaging in the first place?!

Maybe we can learn to see this “crisis” as a blessing in disguise.

Ricky Mouser

Ricky Mouser is a Hecht-Levi Fellow at the Johns Hopkins Berman Institute of Bioethics where he studies ethical tradeoffs in AI and bioethics by attempting to reconstruct where our values and practices come from, and how they’ve shaped each other over time. He also works in social and political philosophy, philosophy outreach, aesthetics, and philosophy of games and sports.

Savannah Pearlman

Savannah Pearlman is a Lecturer in the Philosophy Department at Howard University, where she works in normative epistemology with an emphasis on feminist philosophy and philosophy of race. In addition to her philosophical interests, she also has published work on college teaching, with a focus on curricular design and inclusive pedagogy.
