AI and Teaching

Large language models (LLMs) are often poor philosophers. But these shortcomings make them more useful for teaching, not less.

There are a few reasons why a philosophy instructor might hesitate to use assignments with AI. One worry is that AI will just encourage students to cheat or otherwise turn their brains off.

Another worry (and source of contempt!) is that current LLMs are just bad philosophers. They make basic logical errors. They traffic in caricatures. They fail to maintain a consistent position. They repeat themselves. They have trouble focusing on one issue. They stumble when you push the discussion past the surface. They hallucinate. Worst of all, they are bullshitters: machines that sound smart without any regard for the truth.

These complaints are sometimes overblown. With practice, you can design assignments that mitigate the above problems. And even if you think that—at their core—LLMs are mere bullshitters, the fact remains that they often do not act like mere bullshitters.

Still, I agree that current LLMs are not especially impressive philosophers. Is this a reason for not using assignments with AI? It depends on the form those assignments take.

On one model, students collaborate with AI to produce better work. We might imagine the LLM as a helpful coauthor, suggesting arguments the student may not have considered. If this is your model, then the shortcomings of current LLMs are a problem.

But this may not be the best model anyway. After all, we want students to think through arguments on their own. Perhaps LLMs could help certain students with this. But other students will just mindlessly copy whatever their “collaborator” spits out. This is useless for everyone. (And, besides, nothing is more soul-crushing than reading page after page of AI-produced text.)

On my preferred model, students primarily critique the philosophical efforts of LLMs. For these assignments to work, LLMs do not need to be especially good philosophers. (Of course, they can’t be total garbage either. But they’re not.) In fact, the flaws of LLMs make them all the more useful for teaching philosophy. I will explain this by describing some of my recent assignments.

1) Teaching students to be relentless.

I ask students to have debates with ChatGPT; they might begin such a debate as follows:

There are certain recurring problems with ChatGPT’s argumentation. For example, when you try to sharpen the argument, ChatGPT often tries to slide away from the fundamental issue. It will move the goalposts, bring up unrelated issues, repeat itself, say something vaguely conciliatory before ignoring the problem, or …

At first, I found this annoying. But then I realized that chasing an eel is actually a great philosophical practice. I now tell my students: “Hold ChatGPT to account! Don’t let it muddy the waters. Keep turning the ratchet. Don’t blink. Get to the bottom of the issue!”

Before I developed these assignments, I don’t think I had paid much explicit attention to the philosophical virtue of relentlessness. Debates with ChatGPT, flawed as it is, are great practice for cultivating this virtue.

2) Teaching students to distinguish what is good from what merely looks good.

In class, I give ChatGPT a paper prompt, such as:

After we read ChatGPT’s essay, I ask the class how good it is. They all think it is amazing. But then we go through it carefully. The students notice that section 2 does not actually address section 1. They notice that, by section 3, ChatGPT has retreated from utilitarianism to some mixed position. And so on. They give the essay a C+.

This helps teach students about what really matters when writing philosophy. The students learn that good philosophy isn’t about having polished writing, or having a big list of arguments, or … It is about thinking through an argument with real depth.

3) Testing students’ command of subtle distinctions.

In another assignment, I ask ChatGPT to summarize the arguments of a philosophical paper. After reading the paper themselves, students then critique ChatGPT’s summary, identifying mistakes and misrepresentations. (In some cases, I highlight places where ChatGPT makes mistakes so that students know where to focus.) Here is an example excerpt:

These assignments help students approach readings critically and actively. They also test students’ grip on subtle distinctions. Someone may think they understand (say) epistemic vs. metaphysical possibility. But the real test is when they must identify what is wrong with ChatGPT’s vaguely plausible summaries.

4) Teaching students what it really means to “use AI skillfully.”

Many students will have to use AI in the future. What, exactly, is required to be “skilled at using AI”? It isn’t some set of clever tricks for writing prompts: any such tricks will soon be outdated. What is really important is for students to be skillful judges of what AI produces. Someone with these skills can know when to trust AI, when to experiment further, and when to take the reins themselves.

5) Encouraging students in an unsettling time.

Many students are intimidated by AI: it writes (and codes, and does math, etc.) so much better than they can. Students wonder: will I be able to get a job? Why put all this effort into learning stuff when AI is already so smart?

When faced with these doubts, it is good for students to remember that LLMs aren’t God. Sure, they do many marvelous things. But students have very marvelous brains of their own. When critiquing AI outputs, they are in control: they are the ones competent to evaluate how good or bad an LLM actually is. The above types of assignments help students to remember their marvelous brains.

6) Teaching students to confront bullshit.

It is often said that LLMs are bullshitters. Suppose that this is so. Well—philosophers have always had to confront bullshit. There have always been sophists offering fine words and pleasing speeches.

If LLMs are bullshitters, then we will soon have quite the crisis: LLMs spit out more fine words than any sophist ever could. We will need students who can confront bullshit and who can distinguish what is true from what is pleasing. With assignments in which students critique AI outputs, we can train them to turn a stern eye toward LLMs.

Robert Smithson

Robert Smithson is an Associate Professor in Philosophy at UNC-Wilmington. He received his Ph.D. from UNC Chapel Hill in 2016. His research interests are in metaphysics, philosophy of mind, and the philosophy of science. His most recent work focuses on both theoretical and ethical issues raised by large language models.
