AI Can’t Replace Teachers (Entirely)

Alpha School in Austin, Texas, has no teachers. Students instead spend two hours a day studying core subjects “taught” by an AI-powered software program. True to Silicon Valley’s educational ideals, the school argues this approach gives students time to practice “life-skills” like coding, entrepreneurship, and public speaking.

The school claims its students perform well on standardized tests. A perhaps-ChatGPT-authored bullet point on the website reads “99th Percentile: The majority of students consistently outperform national averages.” Who knows what to make of such a statement? But to the school’s credit, one can see how a personalized AI tutor would be in many ways beneficial to student learning. Such a system can provide instant feedback, personalized learning plans, and extra time for students to explore academic interests.

Let us thus grant that Alpha School’s approach is more effective than traditional schooling at improving standardized test scores. Let us even grant a far more general and contentious assumption: that personalized AI instruction outperforms traditional classroom teaching when it comes to improving students’ performance on any ordinary pen-and-paper assessment. Let us also set aside the non-epistemic social skills students might miss in this style of education. Under these conditions—a near best-case scenario for AI in education—would it be wise to surrender the role of the teacher to AI?

It will come as no surprise that a human philosophy teacher will argue that we are irreplaceable, even in the circumstances described. First, purely AI-instructed students would fail to engage in inquiry together. (In the parlance of the Philosophy for Children movement, they would fail to form communities of inquiry.) They would thus miss out on an activity valuable both instrumentally and intrinsically. Second, students might develop an unhealthy relationship to AI, habitually treating it with deference rather than critically. But from here on out, I’ll focus on a third reason for AI-skepticism: teachers are irreplaceable intellectual role models.

Imagine a teenager in 1960s London seeing the young Rolling Stones perform live. Amid the exhilaration would be thoughts like “That looks fun! I wonder if I could do that. I want to be able to make people feel like this!” One is easily inspired to pursue excellence upon bearing witness to it in human form.

Contrast this experience with listening to AI-generated music. Suppose the music is just as good as anything a human could produce. Suppose one could even attend AI concerts where an embodied AI sings and dances onstage. It is easy to imagine being impressed by such a performance, but difficult to imagine feeling inspired by it in the same way as before. (People take up track in response to watching the Olympics, not documentaries about cheetahs.)

This is one edge teachers have over AI, even as it improves. We are motivated by seeing another person do something excellently and by being able to imagine ourselves doing it too. If we can never see ourselves in AI, teachers will always be more inspirational in this respect.

But there is a further reason why human teachers are better situated to serve the intellectual role-model function of teaching. Two of the most important habits teachers model, virtuous thought processes and intellectual motivation, are absent from current AI.

LLMs don’t share our thought processes. (At the coarsest level, we obviously don’t generate our words by predicting the likelihood of the next ones.) AI can provide multiple succinct explanations of a concept. It can offer individualized instruction. It can ask Socratic questions. It can say the exact same words as someone who is explaining their intellectually virtuous thought process. But as far as I can tell, current AI isn’t engaging in the real thing. If students know this, one would expect them to be less inspired to engage in virtuous thought processes themselves: exposure to the real deal often has a stronger influence on us than known mimicry.

The same goes for motivation. We are not simply dog trainers, succeeding when our students can perform intellectual tricks on command. We want our students to genuinely want to find the truth for its own sake, to enjoy inquiry, and to see philosophy as important. (As an aside: Alpha School’s attitude toward core subjects seems perverse in this respect—they are treated as a chore to be done efficiently so you can move on to more important things, like coding.) A time-honored way to inculcate this in students is to authentically exhibit these motivational states ourselves. When students see our passion, they respond in kind. Again, we have this advantage over cold, indifferent AI. We thus remain the only genuine models of such motivations.

To be clear, we should not go into the cloning business. Our intellectual role-modeling need not extend to having students share our most contentious, substantive philosophical views merely because we hold them. Nor should students mimic all of our habits of thinking. But they should certainly try to emulate some and be inspired by the rest. Return to the music analogy: one can easily be inspired to form a rock band by the Rolling Stones. But this need not be a cover band. Our role can be similar—inspiring students to pursue the truth in a philosophically respectable way and, as they mature, develop their own philosophical style.

In philosophy, we do not have laboratory experiments to perform in front of our classes as our colleagues in the sciences do. But we can still give live, authentic demonstrations of our own thinking and our own motivations, demonstrations which students can easily imagine themselves replicating. This alone is reason enough to set hard limits on how much teaching ought to be turned over to AI.

Adam Zweber

Adam Zweber (Series Editor, AI and Teaching) earned his PhD in Philosophy from Stanford in 2023 and is currently a lecturer at UNC-Wilmington. He is interested in questions that run the gamut of value theory from metaethical naturalism to the ethics of AI use in education. His research on such topics has been published in Philosophical Studies, European Journal of Philosophy, and Teaching Philosophy. He is especially passionate about getting students to “see” philosophical questions as they arise outside the classroom. When he’s outside the classroom he enjoys pondering philosophy while painting, clothes-making, and marveling at the Sonoran Desert.
