
Interview with John Tasioulas: The Institute for Ethics in AI

The Philosophy and Technology series attempts to construe the question of technology in the broadest possible sense, assessing its impact on the discipline, as well as on science and our culture. Perhaps nothing is more topical than the emergence of AI. To help frame the debate, we published a piece this summer by John Tasioulas that emphasized the contributions of the arts and humanities. This month, we feature an interview with John on his important work for the Institute for Ethics in AI at the University of Oxford. John also discusses the recently advertised post for a moral philosopher interested in AI.

John, thanks so much for your time today and for this follow-up to your vital piece on the contribution of the arts and the humanities. To start, can you describe the genesis of the Institute—what motivated its inception and how it came to life?

It’s a pleasure to have the opportunity to speak with you, Charlie. The Institute has its origins in a £150m donation made by Stephen A. Schwarzman to Oxford University in 2019—the largest single donation received by Oxford since the Renaissance. The purpose of the donation is to house, for the first time in its history, virtually all of Oxford’s humanities departments in a state-of-the-art, purpose-built Humanities Centre. But in addition to the humanities departments, the Schwarzman Centre for the Humanities will also include a Humanities Cultural Programme, which will bring practitioners in music, theatre, poetry, and so on into our midst, and also the Institute for Ethics in AI, which will connect the humanities to the rapid and hugely consequential developments occurring in AI and digital technology. So, the underlying vision is one in which the humanities play a pivotal role in our culture, engaging in a mutually beneficial dialogue with artistic practice on the one hand and science and technology on the other. It was evident to me when I applied for the job of director of the Institute that a lot of deep thought had gone into its conception, that it could potentially make an important intellectual and social contribution, and that Oxford, with its strong philosophical tradition and exceptional commitment to interdisciplinary engagement, was the ideal environment for this project. 

Secondly, please summarize the charter of the organization, as ethical challenges posed by AI seem to surface all the time—from facial recognition to voter profiling, brain-machine interfaces to weaponized drones. And, perhaps most important of all, there is the question of how AI will impact global employment.

The fundamental aim of the Institute is to bring the rigour and the intellectual depth of the humanities to the urgent task of engaging with the wide range of ethical challenges posed by developments in Artificial Intelligence. These challenges range from the fairly specific, such as whether autonomous weapons systems should ever be deployed and, if so, under what conditions, to more fundamental challenges such as the implications of AI systems for humans’ self-conception as possessors of a special kind of dignity in virtue of our capacity for rational autonomy. The Institute is grounded in the idea that philosophy is the central discipline when it comes to ethics, but we also believe that it has to be a humanistic form of philosophy—one enriched by humanities disciplines such as classics, history, and literature. A humanistic approach is imperative, given Anglo-American philosophy’s own unfortunate tendency to lapse into a form of scientism that hampers it in playing the critical role it should be playing in a culture in which scientistic and technocratic modes of thought are already dangerously ascendant. In addition to being enriched by exchanges with other humanities colleagues, the Institute has also forged close connections with computer scientists at Oxford to ensure that our work is disciplined by attentiveness to the real capacities and potentialities of AI technology. Especially here, in a domain rife with hype and fear-mongering, it is important to resist the lure of philosophical speculations that escape the orbit of the feasible. Finally, I think you are right to highlight the issue of the impact of AI on work. Work consumes so much of our lives, but where are the rich and sophisticated discussions on the nature and value of work, its contribution to our individual well-being or to our status as democratic citizens? These issues have been neglected by contemporary philosophers, so in this way AI is doing philosophy a great service in redirecting our attention to important questions that have been unjustly sidelined. I think similar observations apply to the topic of democracy, which for many years was way down the list of priorities in political philosophy, but which rightly has assumed considerable salience in the ethics of AI.

To expand on the charter, how do you think “AI ethics” can truly become a field, comparable to medical ethics?

I think there are both positive and negative lessons to be learnt from the instructive example of medical ethics. As my Oxford colleague Julian Savulescu has emphasized, medical ethics tends to become intellectually thin—it tends to shift into a bureaucratic, committee-sitting mode—when disconnected from a deeper disciplinary grounding, especially in philosophical ethics. Indeed, it is notable that the key contributors to medical ethics, figures such as Onora O’Neill, Jonathan Glover, Tom Beauchamp, and Mary Warnock, pursued their work in medical ethics as part of a much broader philosophical agenda, both in moral philosophy and beyond. Similar remarks can be made about another interdisciplinary field in which Oxford has had notable success in recent decades, that of the philosophy of law. So, it is important to ensure that AI ethics remains grounded in philosophy and other disciplines, rather than thinking of it as a self-standing discipline. We also have to recognize that AI ethics has a distinctive challenge of its own stemming from the all-pervasive nature of AI technology, which impacts not only medicine but also law, the arts, the environment, politics, warfare, and so on. The idea that one can credibly be an ethical expert across all these multifarious domains is a non-starter. So, one must combine a serious disciplinary grounding with real expert knowledge of specific domains and their distinctive configuration of salient values. This is key to the growing maturity and intellectual respectability of the field, and I think we are already seeing the field evolve in this direction.

In this vein, how do you plan to build the capabilities of the Institute? Indeed, please describe the exciting new position you recently publicized for moral philosophers interested in AI.

An important aspect of the Institute is that our members are not reliant upon soft money; instead, they have established positions that give them extensive freedom to pursue the issues that grip them and that ensure they are accepted as genuine peers in the Oxford philosophical community, rather than people engaged in ‘parallel play’ as they call it in kindergartens. We have already filled three of our five Associate Professor / Professor posts. The next one is the recently advertised post for a moral philosopher to be based at St Edmund Hall. AI ethics is still at an early stage of its development, so we try not to be highly prescriptive in our job specifications, but we are on the lookout for someone who combines an excellent research track record in moral philosophy with a genuine and demonstrable interest in the ethical challenges of AI. It’s personally gratifying for me that the appointee to this post will be effectively a successor of one of my former teachers, the late Susan Hurley, who made important contributions in ethics, political philosophy, and the philosophy of mind. The Institute is keenly aware that many of the issues in AI ethics demand an interdisciplinary response. We have already appointed one social scientist—Dr. Katya Hertog—who does research on the impact of AI on work, especially domestic work. It’s likely that our fifth post will be in political science or law. We also have four postdoctoral fellows attached to the Institute working on a shifting array of topics ranging from autonomous weapons systems to the impact of AI on human autonomy.

Further, what kind of partnerships do you envision—in the public and private sectors—that can advance the work of the Institute?

We want to attract the brightest graduate students to this area. But this cannot be done single-handedly by any one institution, however illustrious, since it involves creating an intellectual infrastructure that can assure young would-be academics that there are genuine opportunities for career progression. This is one reason we have partnered with colleagues at the Australian National University, Harvard, Princeton, Stanford, and Toronto to create the Philosophy, AI, and Society (PAIS) Network, under the wise and energetic leadership of Seth Lazar. This will help foster a shared culture of cooperation and exchange across leading English-speaking philosophy departments in AI ethics. PAIS is working on a doctoral thesis colloquium to be held in Oxford early next year and also on an annual conference. The Institute is also developing formats that will enable us to engage in a responsible fashion both with policy-makers and with the tech industry, given that so many innovative AI research developments take place in the private sector. One project in the pipeline, which I am working on with my colleague Dr. Linda Eggert, is a Summer Academy to be held annually at Oxford aimed at key decision-makers.

How do you think the Institute, and this kind of effort generally, fits into the discipline? Is it fair to say it exemplifies philosophy’s increasing relevance—that the discipline can address practical and critically important questions?

I think Hilary Putnam had it right when he said that philosophy, at its best, addresses both issues of an abstract and foundational character as well as more practical, urgent issues that confront us as citizens. I think this is a general truth. But I also believe it has been rendered all the more vivid by our present political and cultural situation, with the rise of ideological polarization, the declining faith in democracy especially among the young, the erosion of old certainties, and the anxieties and perplexities that come with rapid technological advances. In this new environment, even less can be taken for granted than before. This means that we need to resist the temptation to throw phrases like ‘the rule of law’ or ‘democracy’ as rhetorical missiles aimed at our opponents. We need more than ever to do the hard work of articulating these notions, especially in a way that brings out the genuine values they capture, how they relate to each other, and their practical significance in contemporary circumstances. AI, and the problems and opportunities its applications throw up, is one key site at which this vital philosophical work needs to be done. Of course, it does not fall to philosophers to decide these questions, as they have no political authority to do that. But the hope is that philosophical discussion can help improve the quality of the democratic discourse on these urgent topics, if only by modeling civil discourse and encouraging the idea that value disagreements are in some measure amenable to rational inquiry.

I wrote a polemic for the Common Good that focused on aspects of the law that could be used to constrain technology—as it has become, in Heidegger’s warning, a form of being in the modern world, a kind of structuring that permeates thinking and even our history. What do you think are the biggest challenges for the Institute and the broader effort to manage technological developments?

I think the biggest challenges that confront the Institute are twofold. First, fostering genuine interdisciplinary dialogue and understanding, especially across the humanities and the sciences. On this front, I am very optimistic, not least because of the extremely warm reception the Institute has received from computer scientists at all levels in Oxford. I am especially grateful to Sir Nigel Shadbolt, who is now our Distinguished Senior Scientist and who played a key role in founding the Institute, and Mike Wooldridge, the former head of the Computer Science Department. But the future lies with those young scholars who, from an early stage, will make themselves highly literate across the sciences and the humanities. The second challenge is, I think, even more difficult, and that is trying to inject a humanistic approach to ethics—one that is attentive to the full range of ethical values in their complexity and richness—into a public discourse that is dominated, whatever the rhetorical window-dressing, either by a scientistic and technocratic self-understanding that flattens out the domain of value or by the cynical notion that ethical and political disagreements are simply power struggles in which reason is a helpless bystander.

Finally, as the stakes are existential, are you hopeful about the prospects for controlling/harnessing AI—and what would success look like?

One always has to retain hope. What makes me especially hopeful are the brilliant and conscientious young people who are increasingly attracted to this field—people like Carina Prunkl, Linda Eggert, Charlotte Unruh, Divya Siddarth, Kyle van Oosterum, and Jen Semler at Oxford. I think real success would be to make some contribution to the preservation and the advance of a genuine democratic culture both at home and abroad. This is a culture in which the profound challenges posed by AI as well as the other existential challenges confronting humanity, such as climate change and nuclear proliferation, are genuinely addressed by informed democratic publics in which free and equal citizens deliberate about the shape of the common good and its realization. I don’t think that’s too much to hope for.

John Tasioulas
Director of the Institute for Ethics in AI

John Tasioulas is Professor of Ethics and Legal Philosophy and Director of the Institute for Ethics in AI. John joined as Director in October 2020 and was previously Chair of Politics, Philosophy and Law and Director of the Yeoh Tiong Lay Centre for Politics, Philosophy & Law at King’s College London. He is also a Distinguished Research Fellow of the Oxford Uehiro Centre and an Emeritus Fellow of Corpus Christi College, Oxford.

John is a member of the International Advisory Board of the Panel for the Future of Science and Technology (STOA) at the European Parliament and a member of the Greek Prime Minister's High-Level Advisory Committee on AI.

Charlie Taben

Charlie Taben graduated from Middlebury College in 1983 with a BA in philosophy and has been a financial services executive for nearly 40 years. He studied at Harvard University during his junior year and says one of the highlights of his life was taking John Rawls’ class. Today, Charlie remains engaged with the discipline, focusing on Spinoza, Nietzsche, Kierkegaard, and Schopenhauer. He also performs volunteer work for the Philosophical Society of England and is currently seeking to incorporate practical philosophical digital content into US corporate wellness programs. You can find Charlie on Twitter @gbglax.

