ChatGPT After Six Months: More Practical Reflections

When ChatGPT was released to the public late last year, its impact was immediate and dramatic. In the six months since, most people have barely had time to understand what ChatGPT is, yet its core model has already been upgraded (from GPT-3.5 to GPT-4) and a competitor has been released (Bard, from Google).

I’ll assume that those reading this are generally familiar with ChatGPT; if not, I provide a basic explanation of it in a previous post. Here, my focus is on the current state of affairs. How is ChatGPT being used by students? How is it understood by faculty and administrators? What has been the policy response? While many questions about ChatGPT can’t yet be answered, I’ve noticed some consistent themes in written articles and discussions with colleagues. In short: students are using it, some quite strategically; there is no clear policy response, either nationally or at many individual institutions; faculty themselves are split on ChatGPT; and it is rapidly forcing us to rethink key aspects of teaching.

Student Use of ChatGPT

A recent article written by a Columbia student manifests the central worry of ChatGPT pessimists: given an easy way to avoid doing assignments, many students take it. This fits with what some faculty, myself included, are overhearing from students on their campuses; I have heard students openly discuss using ChatGPT right outside my office. Not every student is using it, but many are.

Further, some know how to use it strategically. The student cited above doesn’t just copy and paste a ChatGPT response to a prompt. He asks it for sample thesis statements, then for an outline of the statement that appeals to him, then for information on each point in the outline. After some tweaking and filling in a few examples, he has what looks like an essay written entirely on his own.

This student, who is at a well-regarded university and writes op-eds, does not represent all or even most students. But he shows how effectively ChatGPT can be used to circumvent meaningful effort. The “work” the student does in using ChatGPT, he makes clear, is far less than what is involved in actually doing the assignment; it becomes more like a fill-in-the-blanks exercise. The skill practice and content engagement that instructors usually care about disappear from the process.

The student’s bit of extra effort is also likely to get him around chatbot detectors. A range of detectors exist, though initial testing suggests that most are pretty flawed. Even if they weren’t, someone who spends a few minutes editing the output, or simply runs it through a “text spinner,” could probably fool them. This also speaks against the idea of making “bot-proof” prompts and take-home assignments, an approach I argued against in my previous post. ChatGPT can already do things like include personal pronouns and revise its own work, and it and other chatbots are only getting better. Chatbot users are getting better as well, and many will be aware of ways to avoid detection.

National and Institutional Responses

The endless discussion about ChatGPT has contrasted with near silence when it comes to policies, or even clear recommendations, at the national level, such as from accrediting agencies or national associations. This isn’t entirely a bad thing; strong recommendations or requirements could raise concerns about academic freedom and faculty autonomy. At the same time, the lack of a coherent response pushes responsibility downward, increasing the burden on institutions and ultimately on those of us who teach.

At the level of institutions themselves, many have a webpage devoted to discussing ChatGPT, but these pages are too often light on policy. Take the University of Texas at Austin’s page. It discusses what ChatGPT is, some downsides to it, and so on. All useful information. But the only policy discussion is a note that, “The university is in the process of refreshing our honor code and honor code affirmation to renew our commitment to supporting students in their journey to master complex knowledge and skills.” Translation: we’ll get back to you. (To be fair, the page links to a statement from their Faculty Writing Committee that is more helpful, but it too has no force of policy and is largely tentative.) Looking at a range of other cases: UCLA, Yale, and Penn State make only passing references to academic integrity on their ChatGPT pages. Nashua Community College in New Hampshire makes no mention of policy, while McLennan Community College in Texas links to its academic dishonesty policy. For many institutions, I can find no page on ChatGPT at all, including large and visible schools like the University of Michigan and Vanderbilt (the latter having made the news for using ChatGPT to produce a statement on a school shooting).

My own institution’s ChatGPT page is, I think, better than many. It directly links ChatGPT to the university’s academic integrity policy and suggests syllabus language that mentions AI-generated content. Yet even in this case, the policy elements are very brief and reference existing policies that do not themselves discuss AI-generated content. Given the newness and pervasiveness of ChatGPT, and given that many students may not link it to cheating and plagiarism in their own minds (credit to my colleague Cassie Herbert for this important observation), just pointing to policies written in a world before ChatGPT is unlikely to be sufficient.

Stronger policies and resources may emerge in the future, of course. But that does not change the fact that, six months in, we remain in many ways in the Wild West.

Faculty Responses

Even turning to fellow faculty may not provide clear guidance, as faculty positions range from implacable opposition to ChatGPT to full advocacy of it. The many pieces lamenting ChatGPT are matched by many pieces suggesting it be proactively included in the classroom.

Put aside for now whether one group is correct. The practical implication is that, so long as there are no definite policies and so long as professors themselves disagree, standards will likely differ greatly from classroom to classroom. Some will ban all chatbots outright. Others will require ChatGPT for assignments. Students will experience both, which could lead to confusion about policies and about deeper priorities. This means that faculty must be very clear from day one about their own class policies; we just don’t know what else students are seeing and hearing.

As for what individual faculty are doing, I have found that if you talk to ten faculty, you will get ten different responses. Limiting myself just to philosophy colleagues I’ve heard from or spoken to, responses range from (on the GPT-skeptic end) eliminating out-of-class assignments or submitting student work to multiple chatbot detectors, to (on the ChatGPT-advocate end) incorporating ChatGPT into assignments by having it create outlines or write sample arguments for students to critique, to simple indifference or defeatism. There are also many who still know little about what ChatGPT is or how it works; remember, it appeared only six months ago!

If there’s one thing I would emphasize in this whole article, it’s this: a line in the syllabus and a brief mention of ChatGPT on day one of class are not sufficient. Explain your policy to students, explain why you have it, and make clear that what applies in other classes may not apply in yours. Reiterate this when the first major assignment comes up, and probably at later points as well. Students need to know exactly where you stand.

Reflections on Assignments and Assessment

Like few things before it, ChatGPT is challenging many teaching practices long taken for granted. Whether they are for or against it, few faculty think it will have no impact on their teaching, and many discussions focus on how it forces us to rethink assignments and assessments. Much of what we’ve historically done may have to be rethought entirely if students can simply generate written work automatically, and what’s done in the classroom will have to shift in response. Some of the issues that have arisen likely reflect long-standing problems now made intolerable. As a result, dealing with chatbots may lead us to change practices long overdue for change. And of course, chatbots could provide positive benefits, such as increasing accessibility within education.

It’s too early to say what the upshot of these changes will be, or whether they’ll be good overall. It could turn out that some good assignments or approaches are no longer feasible. More than a few believe that the old-fashioned philosophical essay, in which students present a full-length, structured discussion carefully developed over time, provides unique and important value. Can that be replaced, or improved upon? How does it compare to (for instance) oral assignments, or various forms of in-class work? What were once theoretical questions have become far more urgent.

It’s difficult to predict what sort of balance will be reached with chatbots in education. For disciplines like philosophy, where reasoning through language is so central, the implications can feel almost existential. What is clear is that after six months our education system as a whole remains unprepared, and faculty must work to make up the difference.

Derek O’Connell

Derek O’Connell is Assistant to the Department Chair in the Department of Philosophy at Illinois State University, where his roles include instructor and academic advisor. His current interests center on philosophical pedagogy and philosophy of education.

