Professor Reflection Series

Reflections on Teaching in the AI Age

I kind of miss plagiarism. Some turn of phrase in a student’s paper would sound a bit off, and some googling would uncover that the paper had been pieced together by copy and paste from the Internet Encyclopedia of Philosophy and a random blog post. I feel nostalgic for the days I spent hunting around Quizlet, Chegg, and CourseHero, searching for leaked banks of my old quiz questions, deciding whether to submit a DMCA request or to rewrite the quiz.

That all seems so old-fashioned, so dated, so 2022. Over a year ago, ChatGPT popularized AI writing. My students have no use for the old ways anymore.

I remember the clues I could lean on when grading. Word count meant time. Writing quality meant effort. Rebutting objections meant rational thought. Relevant citations meant research. Getting the facts right and using terminology correctly meant learning. Even if the ideas weren’t always interesting, writing meant thinking. That was the point. At least students were thinking!

That’s all meaningless now. No signals in the text can distinguish days of hard work from a few minutes of prompt engineering – except those brief, so very fleshly sorts of errors and eccentricities which suggest to me that the author is not just rational but a fellow animal. My first glance at a paper now isn’t to check for citations and coherence, but to check for any breaths of authenticity.

Generative AI writing tools have some benefits. I like being able to understand what more of my students have to say. Much like the impact spell check had on my own generation, AI writing tools have given a voice to students who have always had something to say but could never quite piece the words together. AI gives my students a first-round tutor on their writing, an instant round of critique, a coach through writer’s block – and my own feedback can stay focused on the substance now, instead of the structure and style. Maybe I can assign future students to get in a live debate with pseudo-Lucretius. It’s not all bad.

But it’s still pretty bad. About half of students are using it, some for good, and some for bad. Students with solely extrinsic motivations for signing up for philosophy classes will always find ways to do what they have always done. What I care about are those who are intrinsically motivated, who want to learn how to think better, who love knowledge – and who have to negotiate their precious study time against the demands of overbearing families and co-workers begging them to cover their shift. For them, it’s hard to justify taking a day off to explore the relevant literature and mull over an interesting argument if they know that they could get away with half an hour of playing around with ChatGPT prompts. Especially if they suspect their peers are getting away with it.

I’ve tried a lot of things that don’t work. In December of 2022, I thought AI writing would have noticeable “tics” and “tells”. (It did for a little while, and then it didn’t). There are AI detection tools! (But too many false positives). It doesn’t have personal experiences! (But it can pretend very well). It produces fake citations! (Just supplement it with an AI “Source Finder”). It is vague and indecisive on philosophical issues! (Just prompt it to “take a stand”). It might know the internet, but it doesn’t know the contents of my notes and lectures! (Just upload a transcript). I even tried writing vague prompts and subtle references that would only make sense to those who had studied. I found that ChatGPT was better at guessing what I was looking for, based on contextual clues, than my students were.

Some things still work. It still works to give handwritten exams in the classroom without access to electronic devices. My online students accept video proctoring of their exams, and I’m ditching typed online discussion boards in favor of asynchronous video discussions. It still works to remind students that, on small “formative” writing assignments and online quizzes, “outsourcing” the “busy work” is just depriving oneself of practice opportunities for those in-class or proctored big assignments.

The challenges are term papers and the classic “take home” essay exam. Relying only on timed exams loses the depth which comes from reviewing notes and spending time thinking through one’s own original responses and critiques. Good philosophy requires time to reflect, to struggle, to travel down roads that go nowhere and to discover new ideas on the way. I still value having learned to do arithmetic by hand, even though I grew up surrounded by calculators. My students still value learning how to do philosophy through writing. The pedagogical purpose of writing is in the process rather than the product.

The transition AI is forcing me to make is from evaluating writings to evaluating writers. I am accustomed to grading essays impersonally with an objective rubric, treating the text as distinct from the author and commenting only on the features of the text. I need to transition to evaluating students a bit more holistically, as philosophers – to follow along with them in the early stages of the writing process, to ask them to present their ideas orally in conversation or in front of their peers, to push them to develop the intellectual virtues that they will need if they are not going to be mastered by the algorithms seeking to manipulate them. That’s the sort of development I’ve meant to encourage all along, not paragraph construction and citation formatting. If my grading practices incentivize outsourcing to a machine intelligence, I need to change my grading practices.

One method I experimented with in 2023 was requiring an oral defense of a written essay. In five to ten minutes of a live oral defense, I can confirm that a student probably knows what the author of the essay knows. The results were encouraging and surprising: nearly all of the students had learned what their submitted essays suggested they had learned. (Of course, it was a self-selected sample, since there were increased withdrawals and failures due to skipping the exam). I did not issue separate grades for the oral and written components; instead, the oral defense largely served to confirm the essay grade, except in rare cases where the oral exam forced me to re-read the essay, or when a student didn’t “remember” what they had “written”. That was rare, though. The mere prospect of the oral defense was sufficient to incentivize actually learning and not just submitting text.

But I teach a 4/4 load at a large state university, with some very large class sizes. An oral defense for every exam in every class is infeasible. For now, I’m trying out a three-part exam model: first, an extensive take-home essay; second, a live, timed, proctored short answer exam with questions corresponding directly to points on the take-home essay; third, an optional oral defense of the take-home essay. By default, the live exam is weighted heavily over the take-home essay, since those who put effort into the take-home essay should be fully prepared for the live exam. The few who opt for the oral defense earn the right to weigh their take-home essay more heavily instead. We’ll see how it goes.
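For readers curious about the arithmetic, here is a minimal sketch of how that weighting might work, assuming, purely for illustration, a 30/70 essay-to-live-exam split by default and a 70/30 split for students who complete the oral defense; the numbers and the function are placeholders of my own, not a settled policy.

```python
def exam_grade(essay: float, live_exam: float, oral_defense_taken: bool) -> float:
    """Combine take-home essay and live-exam scores (each 0-100) into one grade.

    By default the live, proctored exam dominates; completing the optional
    oral defense shifts the weight onto the take-home essay. The 0.3/0.7
    split is an illustrative placeholder, not an actual grading policy.
    """
    essay_weight = 0.7 if oral_defense_taken else 0.3
    return essay_weight * essay + (1 - essay_weight) * live_exam


# Same scores, with and without the optional oral defense.
print(exam_grade(essay=95, live_exam=80, oral_defense_taken=False))  # 84.5
print(exam_grade(essay=95, live_exam=80, oral_defense_taken=True))   # 90.5
```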

One fear I have is that the purported efficiencies of artificial intelligence will suggest to certain decision-makers that faculty should be able to teach more students in the future, not that they need to teach fewer in order to offer more personal focus. We naturally develop and maintain self-discipline in response to social pressure from conscious beings like ourselves. The reactive attitudes of my dog are more effective at getting me moving in the morning than the automated notifications on my phone. People sign up for exercise classes because social pressures facilitate achieving their goals, not because there aren’t plenty of exercise videos online. Automated tutors could be helpful, but students will also need intellectual coaches, interactions with experienced faculty who encourage them to develop themselves as readers, reasoners, and writers, active witnesses who can attest to their learning. I worry that the pursuit of efficiencies will instead subject students to even more automated feedback, more digital nagging, more of a sense of anonymity. Outsourcing your academic tasks doesn’t feel like deception if what you really know doesn’t really matter to anybody, anyway.

It has been a wild year. At this pace, I’m certain I will look back on my reflections here wistfully, as warm memories of the last fantasies of the age of innocence, by sometime next June.

I would be interested to hear a bit more information about how others weight the written work and oral defense of that work. This seems like a promising direction, but I haven’t been able to figure out how I want to balance the written and oral components for determining their grade on the “take-home exam.”

Jeffrey Watson

Jeffrey Watson is an Associate Teaching Professor at Arizona State University, where he has taught Philosophy for the last ten years, both on campus and as part of the online program. His interests are primarily in Metaphysics and Philosophy of Mind.
