Recently Published Book Spotlight: Self-Improvement

Mark Coeckelbergh is Professor of Philosophy of Media and Technology at the University of Vienna, and the former Vice Dean of the Faculty of Philosophy and Education. He has served on the High-Level Expert Group on AI of the European Commission and other advisory bodies. His past work includes AI Ethics (MIT Press), Introduction to Philosophy of Technology (Oxford University Press), and The Political Philosophy of AI (Polity Press). His most recent publication, Self-Improvement: Technologies of the Soul in the Age of Artificial Intelligence, showcases his broader interests in ethics, politics, and the philosophy of self.

Why did you feel the need to write this work?

I felt that many people today are obsessed with all kinds of self-improvement techniques and saw that artificial intelligence (AI) also offers possibilities for this. I wanted to write a book that is critical about the current self-improvement craze and at the same time offers some insight into what AI could mean for self-improvement.

Which of your insights or conclusions do you find most exciting or important?

For me, the key conclusions of the book are that our modern focus on the self and self-improvement is problematic and that we should reconnect with different, perhaps more ancient, ideas about the self in order to arrive at a more relational conception of it. And my main message is: focus less on changing the self and more on changing society. If we are too focused on the self, we forget that many of our problems cannot be solved at the individual level because they are not just the fault and responsibility of individuals. What we are and the problems we have are shaped by our social environment, and technology and capitalism play important roles in that environment. If we want improvement, we should stop simply blaming individuals and look at the bigger picture. Self-improvement risks becoming a distraction from the problems we face as a society, problems that influence our personal happiness and well-being.

Consider the current economic and environmental crisis, for example. Sure, we can do yoga and meditation. We can eat more healthily. We can try to cope. But coping is not enough. We need to tackle the disease, not just the symptoms. I’m very impatient with the idea that you just need to change your mentality, your mind, your body, and so on. Of course, the ancients were right that it is important to control your mind. Buddhists said that, and so did Western thinkers such as the Stoic philosophers. But we also need to change society if we want real change. Our individualist thinking is, in the end, harmful and self-destructive. It’s time to move on to a more social and relational way of thinking. The current self-improvement culture, and especially the form it takes under capitalism and neoliberalism, is more toxic than helpful since it is a barrier to real improvement.

How has technology affected our quest for self-improvement?

Technology always tempts us to say that we can have a quick, technological fix for our problems. Do we want to improve our minds? Take drugs, visit virtual worlds, or listen to specific sounds. Want to be healthier? Let AI feed you data about your eating habits and give you physical exercises. Some researchers and entrepreneurs even suggest we implant chips in our brains. We can “enhance” ourselves by means of technology. AI is the new thing. But is this real improvement? Is this sufficient? Is it even good? There is a high chance that we become even more obsessive about reaching our self-improvement goals (goals which are often set by the technology) and that we forget the world around us. And that world is dominated by a socioeconomic system that encourages self-obsession and self-distraction. Self-improvement means money. Not for the improvers, but for the industry that benefits from all those courses and videos and personal training sessions. In the meantime, there is not much improvement. In the best case, we survive. We may improve a bit. But we keep suffering without addressing the problems, problems that are not just mine but shared. Solutions, therefore, also need to be collective. The rest is wellness capitalism.

You admit in the book that, like it or not, technology is here to stay, and suggest a narrative solution to the hermeneutical gap that technology has created. You encourage us to understand our selves as relational and part of a larger community/collective. You also encourage moving beyond anthropocentric notions of self-improvement to view our selves as part of a larger natural and political environment which our attempts at improvement must also necessarily address. Do you have any visions for what sort of narratives could replace technosolutionism and dataism? To free ourselves from the grip of these narratives and of wellness capitalism in turn, what “better” stories could we use our technologies to tell?

Good question… and a difficult one to answer in the abstract. Your summary of the last part of my book indicates the direction: we need technonarratives that fully recognize our relational and ecological nature, for example. But what those stories are, exactly, must be worked out in a collective and preferably democratic way in the face of today’s crises.

I see opportunities in the current crises. The climate crisis, the pandemic, the geopolitical situation, and the political turns towards authoritarianism: they are harmful, dangerous, and disastrous, but they also challenge us to re-think our ways of living and thereby improve ourselves and our societies in ways that are more ecologically and socially sustainable, more meaningful, and hopefully also a lot more fun.

What directions would you like to take your work in the future?

In my current work, I go in a more “political” direction. I argue that the ethics of AI needs to recognize the political nature of technology and that we need more open and sophisticated discussions about that. In my recent book The Political Philosophy of AI, I propose using resources from political philosophy to do precisely that. And now I am working on the theme of AI and democracy. Everywhere, democracies are under threat. Democracy is a very vulnerable system. It relies on fundamental rights and freedoms, and it requires a delicate balance of power and knowledge on the part of citizens. Currently, none of those is secure, and AI may contribute to these threats. Think about surveillance; Shoshana Zuboff has written about that. But there are also problems with the way knowledge is organized. AI enables fake news, can be used for manipulation, and can help to maintain knowledge bubbles: people often tend to talk only to others who agree with them. How can we strengthen democracy in the face of these problems and given these technological possibilities? Philosophers can help to analyze these issues and help policymakers.

What writing tips do you have?

Writing is not magic. It’s a skill. It’s like riding a bicycle or playing a musical instrument. The more you do it, the more it becomes routine and the better you get at it. Of course, you also need structure and content. But writing as such is a lot about practice. Musicians often play every day for many hours. So, my advice to beginning book writers is: play as much as you can.

Mark Coeckelbergh

Mark Coeckelbergh is Professor of Philosophy of Media and Technology at the University of Vienna, and the former Vice Dean of the Faculty of Philosophy and Education. He was also President of the Society for Philosophy and Technology, and has been a member of the High-Level Expert Group on AI of the European Commission and other advisory bodies. He is the author of AI Ethics (MIT Press), Introduction to Philosophy of Technology (Oxford University Press), and The Political Philosophy of AI (Polity Press).

Maryellen Stohlman-Vanderveen is the APA Blog's Diversity and Inclusion Editor and Research Editor. She graduated from the London School of Economics with an MSc in Philosophy and Public Policy in 2023 and currently works in strategic communications. Her philosophical interests include conceptual engineering, normative ethics, philosophy of technology, and how to live a good life.

