Questions about the relationship between information technology and education are among the oldest in philosophy, dating back to Plato’s famous criticisms of writing in the Phaedrus. But they have a renewed importance now that AI dominates the educational landscape. Indeed, the correlation between ChatGPT use and the academic calendar suggests that some of the most pressing issues about AI are precisely those concerning its use in education. Some economists’ predictions suggest the same. According to one analysis, 13 of the top 20 occupations affected by the advent of large language models are post-secondary teachers of various kinds. (Philosophy and religion teachers ranked 6th, behind telemarketers and our colleagues in English, foreign language, history, and law.) The moment thus calls for thoughtful reflection on the nature of AI and teaching.
The issues go far beyond the bane of many instructors’ existence, ChatGPT-enabled plagiarism. Yes, the infamous LLM can produce an OK-looking term paper. But it can also be used to generate ideas, edit papers, flesh out an outline, or play devil’s advocate in a dialogue—all of which have pedagogical potential. And ChatGPT is just the tip of the AI-powered EdTech iceberg. Some AI tools promise to alleviate the burden of grading. Some are designed to produce lesson plans. Some offer tutoring to students. Some could help students navigate murky academic literature for the first time. Some can produce visual aids for our lectures. Some aim to help teachers promote in-class engagement. Nearly all raise important questions about the mechanics of teaching philosophy:
- What is an appropriate AI policy for regulating student use of these tools? How can we enforce it or motivate students to abide by it?
- What kinds of assignments should we use in the AI era? How can we AI-proof them?
- What are some creative uses of this technology that can help us achieve our teaching goals?
These are questions we must address, ones that this series hopes to help answer. But the ubiquity of AI in education raises more general, more philosophical questions. So, beyond the practicalities of teaching, the series aims to address questions along these lines:
- What constitutes plagiarism? Does the use of AI differ in important ways from traditional plagiarism?
- What is the point of learning to write when large language models can, and presumably will, take on an increasing share of that task?
- In general, what philosophical skills are important to learn, given that AI can perform some (e.g. basic writing) that are often central to our teaching? Should we discard any that have been a traditional focus of our pedagogy?
- Should the aim of our teaching be to help students learn how to use AI tools effectively?
- What pedagogical tasks are appropriate for us to automate, and which should remain human? What, if anything, would be lost were teachers to be replaced by AI assistants?
Even these don’t exhaust the scope of the series. The technology is so new and so rapidly changing that it’s unclear what the most important questions in this area even are. But we’re philosophers, dammit. If there’s one thing we do well, it’s ask questions. Indeed, this very skill is important to impart to our AI-using students—LLMs can only provide us with quality outputs if we know what questions to ask of them. The series thus welcomes contributions that raise new questions about AI and teaching.
No philosopher is without thoughts on AI and teaching. If you’d like to share yours, please send submissions to zwebera@uncw.edu.
Adam Zweber
Adam Zweber (Series Editor, AI and Teaching) earned his PhD in Philosophy from Stanford in 2023 and is currently a lecturer at UNC-Wilmington. He is interested in questions that run the gamut of value theory, from metaethical naturalism to the ethics of AI use in education. His research on such topics has been published in Philosophical Studies, European Journal of Philosophy, and Teaching Philosophy. He is especially passionate about getting students to “see” philosophical questions as they arise outside the classroom. When he’s outside the classroom, he enjoys pondering philosophy while painting, clothes-making, and marveling at the Sonoran Desert.
This topic has definitely been top-of-mind for me lately and becomes more pressing with each passing semester! I have been thinking about ways to incorporate AI into my classroom without fighting against emerging technology, but I also recognize the real threat to student learning that could result from overuse and/or improper use. It also becomes a threat to my time when I have to spend my grading hours reaching out individually to students about suspected improper AI use, not to mention the mental burden of deciding which infractions are worth pursuing further. I appreciate you creating this series, and look forward to following along!