In the following clip from “The Measure of a Man,” episode nine of the second season of Star Trek: The Next Generation (1989), we see a dramatic demonstration of several philosophical arguments for granting rights to intelligent robots – an issue we may soon have to grapple with as a society. In the episode, the android officer Lieutenant Commander Data (Brent Spiner) faces a hearing to determine whether he is legally a person, entitled to the same rights as other intelligent species in the United Federation of Planets, or merely the property of Starfleet, in which case he cannot refuse to be dismantled for research by cybernetics expert Commander Bruce Maddox (Brian Brophy). Captain Jean-Luc Picard (Patrick Stewart) defends Data; Commander William Riker (Jonathan Frakes) is ordered to argue Starfleet’s case; and the hearing is presided over by Sector Judge Advocate General Officer Captain Phillipa Louvois (Amanda McBroom).
The closing unit in the computer ethics course I taught at Dalhousie University (recently featured in the Blog of the APA’s Syllabus Showcase series) concerns the ethics of machine learning and artificial intelligence (AI). When most people think about AI, they tend to picture characters from science fiction, such as Sonny from the 2004 film I, Robot starring Will Smith. The possibility of creating a generally intelligent robot or AI raises questions about whether such an entity counts as a person, whether they have moral rights similar to those borne by human beings, and whether it would be possible to have a true friendship or romantic relationship with them.
These issues are fascinating and exciting, but they can distract from the actual, pressing AI ethics issues we face today. So, in part to engage the students and in part to set these issues aside, I use them to introduce the topic of AI ethics before getting into the issues AI developers are grappling with now. These include sexist and racist machine learning systems, unclear liability when robots cause harm, and autonomous weapons.
The above clip, and the rest of the episode from which it is taken, dramatizes several ethical arguments we can make in favour of recognizing rights for AI. In the clip, Picard begins by asking Maddox what would be required for Data to be sentient and therefore a person deserving to have his rights protected. Maddox gives three criteria: (1) Intelligence, (2) Self-awareness, and (3) Consciousness. Picard proceeds to apply these criteria to Data, compelling Maddox to admit that Data meets at least (1) and (2). He also emphasizes that if Data meets all three, to rule that he is property and not a person would “condemn him and all who come after him to servitude and slavery.” Faced with this possibility, Maddox is left flustered and humbled, and Louvois issues a ruling in Data’s favour.
We might criticize Picard for not being as careful as he could have been, at times favouring courtroom rhetoric over philosophical substance. But there is a deeper, perhaps more important point to Picard’s overall strategy. He opens his line of questioning by demanding that Maddox prove to the court that he, Picard, is sentient. Maddox dismisses the demand as absurd, since “we all know” that Picard is sentient. And I think part of Picard’s point – echoed by Louvois in her ruling – is that these are perhaps not questions that can be resolved empirically. That is to say, we can give a philosophically convincing account of what sentience is and why it marks the line between persons and non-persons, but it may still be difficult or impossible to determine which creatures actually meet those criteria. And since the harm we risk by answering wrongly is so great, settling whether an entity actually meets those criteria is perhaps beside the point.
In my computer ethics class, I use this clip in a lecture on AI and robot rights, in which I also discuss a paper by Mark Coeckelbergh. He argues that while the criteria for personhood and for deserving moral rights are philosophically interesting and important, when we decide how to treat other creatures, including robots, what may matter more is whether we can form morally significant relationships with them. That is to say, the right question is not “Is this robot sentient?” but rather “Is this robot my friend, my colleague, a part of my family?” When it comes to questions about relationships, Coeckelbergh argues, it doesn’t matter whether the robot (or any other entity) actually meets the criteria of personhood; it suffices that they appear, pre-theoretically, to meet those criteria to the human beings in those relationships. Simply being in such a relationship is enough to ground an important kind of moral status.
As I suggest in the lecture, this is precisely the conclusion that Picard urges Louvois to reach. In his questioning of Maddox, he emphatically makes the point that Data appears, albeit not beyond doubt, to meet the criteria for sentience. And, in an earlier scene, Picard shows how Data has formed significant relationships with others by asking Data to explain several items from his quarters: military medals he has earned, a book gifted to him by Picard, and a holographic portrait of his first lover. That Data at least seems to be a person, and has shown that he can form deep and morally significant bonds with people, is what really matters when considering whether he deserves the moral regard owed to rights-bearing persons.
The lecture then closes with an open line of inquiry. We might wonder whether the line of argument pursued by Coeckelbergh (and Picard) can be extended. Perhaps pets, or spirits, or features of the natural landscape can enter similar relationships with human beings, and so also deserve to have their rights recognized.
Possible Readings
- Coeckelbergh, Mark. 2010. “Robot Rights? Towards a Social-Relational Justification of Moral Consideration.” Ethics and Information Technology 12: 209–21. https://doi.org/10.1007/s10676-010-9235-5.
- Coeckelbergh, Mark. 2021. “Does Kindness towards Robots Lead to Virtue? A Reply to Sparrow’s Asymmetry Argument.” Ethics and Information Technology, forthcoming.
- Clarke, Roger. 1993. “Asimov’s Laws of Robotics: Implications for Information Technology, Part I.” Computer 26 (12): 53–61.
- Clarke, Roger. 1994. “Asimov’s Laws of Robotics: Implications for Information Technology, Part II.” Computer 27 (1): 57–66.
- Müller, Vincent C. 2021. “Ethics of Artificial Intelligence and Robotics.” In The Stanford Encyclopedia of Philosophy (Summer 2021 Edition), edited by Edward N. Zalta. https://plato.stanford.edu/archives/sum2021/entries/ethics-ai/.
Other Sources
Additional Star Trek clips on similar themes could be taken from the remainder of “The Measure of a Man,” as well as the following episodes and series:
- Star Trek: The Next Generation, Season 2, Episode 3, “Elementary, Dear Data” (1988)
- Star Trek: The Next Generation, Season 3, Episode 16, “The Offspring” (1990)
- Star Trek: Voyager, Season 5, Episode 11, “Latent Image” (1999)
- Star Trek: Voyager, Season 6, Episode 4, “Tinker Tenor Doctor Spy” (1999)
- Star Trek: Voyager, Season 7, Episode 20, “Author, Author” (2001)
- Star Trek: Picard (2020–), much of which takes direct inspiration from “The Measure of a Man”
The Teaching and Learning Video Series is designed to share pedagogical approaches to using video clips, and humorous ones in particular, for teaching philosophy. Humor, when used appropriately, has been shown empirically to correlate with higher retention rates. If you are interested in contributing to this series, please email the Series Editor, William A. B. Parkhurst, at parkhurw@gvsu.edu.
Trystan S. Goetze
Trystan S. Goetze (they/he/she) is a Postdoctoral Fellow in Embedded EthiCS at Harvard University. Their research interests include epistemic injustice, moral responsibility, and the ethics of technology. Most recently, they have taught courses and modules on the ethics of computing and artificial intelligence. Beginning in July 2023, they will be Senior Lecturer and Director of the Sue G. and Harry E. Bovay Program in the History and Ethics of Professional Engineering at Cornell University.