Anthony Elliott is Distinguished Professor of Sociology at the University of South Australia, where he is Executive Director of the Jean Monnet Centre of Excellence in Digital Transformation. His research focuses on the digital revolution, lifestyle change and social theory. He is the author and editor of some 50 books translated into 17 languages, and The New Republic has described his research breakthroughs as “thought-provoking and disturbing.” In this Book Spotlight, Elliott discusses his recent book Algorithms of Anxiety: Fear in the Digital Age, how the book connects with his broader intellectual project, the philosophers and social theorists who have influenced his writing of this book, and the impact he hopes the work will have in policy circles as well as in business and industry.
What is your work about and how does it connect with your larger research project?
My recent work confronts the digital revolution, above all artificial intelligence. My argument, broadly speaking, is that the rise of AI is generating a new sense of personal identity—“new subjects” are demanded and delivered in the age of advanced machine intelligence. These “new subjects” are people oriented towards using digital technologies—smartphones, laptops, chatbots, virtual assistants—to alleviate the anxieties of a fast-paced, information-overloaded world. In this dependence on intelligent algorithms and predictive technologies, the goal of users (whether they are using Netflix, Spotify, or Uber) is to avoid uncertainty and escape the intensive demands and emotional torments of interpersonal relationships. Apps, algorithms and automation are the new gods that promise pathways from uncertainty to certainty. My work addresses these conundrums, rethinking the relations between self and society in the age of AI.
In Algorithms of Anxiety, I pose the question of whether people can still make autonomous decisions and lead the kind of lives they want to lead in our age of smart algorithms. My research indicates that people might well be as much bewitched as stimulated by their delegation of decisions and tasks to artificial agents, and that greater heed needs to be paid to the resulting depersonalized world of automated societies. Automated actions change the dynamics of social life: delegating personal decisions to smart machines can result in various functional benefits and new opportunities, but the very act of delegation can also narrow our interests or constrain our desires.
This relates to my larger research project in quite specific ways. There are three interrelated dimensions to this project. First, some twenty years ago, I conceived the idea of interrogating the contributions of European social theorists and philosophers for contemporary problems of human subjectivity and human agency. So, part of my project, in books including Concepts of the Self and Subject To Ourselves, has been concerned with developing an adequate account of the human agent for the social sciences and humanities. Second, I was particularly interested in the analysis of modernity. The rise of globalization, the issue of postmodernity and the mobility of capital, which has now left labour far behind, are themes I’ve addressed in books such as On Society with Bryan Turner and Mobile Lives with John Urry. Connected to this, third, is the digital revolution—which, I argue, represents a new phase of modernity in which calculability, prediction, quantification and datafication are raised to the second power. I call this the advent of “algorithmic modernity”, and my trilogy—The Culture of AI, Making Sense of AI and Algorithmic Intimacy—develops a critique of economy, society, and politics in the contemporary digital age.
Who has influenced your arguments in Algorithms of Anxiety the most?
From one angle, my book might be seen as an update on the arguments of the late French philosopher and sociologist Jacques Ellul for the contemporary age. In his magisterial The Technological Society, Ellul famously argued that “technique” (scooping up machines, technical devices and other rationally ordered processes which render human action more efficient) advances because it advances. Techniques, he said, precede goals: the mere technical possibility that something can be done increasingly serves as the inspiration for undertaking it. In our own time of artificial intelligence, this “technological fix” has become deeply interwoven with machine learning algorithms, Big Data, neural networks, blockchain technology, and the Internet of Things. Just think of what we are witnessing with the rise of automated weapons systems. In the age of AI, the means of waging war employed by nation-states now involve literally outsourcing killing to smart machines. While there aren’t precise figures for this, Sir Roger Carr (former chairman of BAE Systems) has estimated that some forty countries have developed capacities in killer robot technology. This is an extraordinary set of global developments, one anticipated by the late French philosopher Cornelius Castoriadis, who wrote of technoscience as the intersecting forces of technoscientific rationality, omnipotent thinking and the autonomization of decision-making. In the age of AI, this automation of decision-making is becoming explosive—literally!
More specifically, my book seeks to develop and deepen arguments recently advanced in social theory and modern European thought. From the late philosopher Bernard Stiegler, I appropriate an emphasis on the speed of digital flows, which exceed the capacities of human perception. From the science and technology studies pioneer Helga Nowotny, I develop further (and try to refine) our understanding of the performativity of predictive algorithms. Neither Nowotny nor Stiegler directly addresses the theme of intimacy. Still, they both have crucial things to say about the unseen, the unthought and the largely invisible dimensions of predictive algorithms, which imprint upon the inner fabric of the self, on interpersonal relations and on our relations with the social world. What I have termed “algorithmic intimacy” seeks to position AI as a specific emissary for significant elements of emerging technologies, most dramatically in the “re-writing” of our private, internal, intimate lives as well as our public, social lives in these times.
What topics do you discuss in the work, and why do you discuss them?
Algorithms of Anxiety is somewhat frenetic and shows a disdain for academic boundaries, crossing as it does from Hannah Arendt to Amazon, the politics of techno-surveillance to Netflix’s Squid Game, and Herbert Marcuse to the Metaverse. In pitching the book somewhere between heavy-duty theory and spicy social commentary, I aimed to underscore the utter centrality of the digital revolution to our lives in these times.
The book’s overarching theme is fear: what it looks like in the digital age and how menacing fears and forebodings about AI are transforming our lives in the here and now. In today’s algorithmic version of modernity, we are increasingly afraid of any number of things: fear of information overload, fear of the Metaverse, fear about social media’s harmful effects on teenagers, fear of technological unemployment, fear of algorithmic bias, fears about ChatGPT, fears that artificial general intelligence will outstrip human intelligence, fears over deepfakes and misinformation, fears of drowning in data deluge, fears of robots, fears of rapid cyberattacks, fears about the development of autonomous drones, fears of CCTV cameras and surveillance capitalism, fears that AI will erode privacy, and the fear that AI will make us all redundant. There is, then, the ‘mother of all AI fears,’ the fear that artificial intelligence will destroy humanity: this is the existential risk of AI, the fear of an abrupt end to the world as we know it.
In the book, I try to set contemporary fears in the context of the bigger picture. I start with our automated meta-power society of algorithmic software and global flows, a complex interplay of smart machines and clever people, which gives a new shape to the world of fear, uncertainty and insecurity. Our present global order, I argue, is based upon new coordinates for taming anxiety along with new life strategies for containing today’s ubiquity of fears. Today, people are tormented by no greater anxiety than the need to find, and swiftly, apps, algorithms and other kinds of digital automation to which they can ‘outsource’ the capacity for decision-making and freedom that was long fought for and previously thought secure. Automated decisions, actions and controls in the form of smart home assistants, smart health devices, algorithm-enabled dating apps, therapy bots, lifelogging, virtual shopping assistants, chatbot companions, customer care bots and smart support agents have increasingly become the very air that people in our digitally-minted times breathe.
Today’s hi-tech culture conditioned by smart algorithms reiterates what each of us absorbs, either by design or default, from our circumstances and affairs. It presents the world as a series of computational calculations and quantified data, with life-pursuits encoded into machine learning algorithms and marked by the social and technical conditions of outsourcing, offshoring, fragmentation, discontinuity and inconsequentiality. In an algorithmic society, decisions are outsourced to smart machines daily, and the demands of thinking about decisions vanish, only occasionally requiring further fleeting attention or consent at the click of a mouse. Problems commanding attention may continually arise but disappear once outsourced to automatic calculating machines, only to be replaced by the next cycle of decision-making outsourcing. In this complex entanglement of humans and machines, the limited capacity of individuals to exercise autonomy becomes threateningly frail as algorithms iteratively learn, compose, generate and authorize actions based on the informational attributes of people, data and other algorithms.
How is your work relevant to the contemporary world?
The digital revolution is the most transformative force in the world today—period. The relevance of the digital revolution is not so much about the future as the here and now. Our lives are already saturated with AI—evidenced by the rise of chatbots, Google Maps, Uber, Amazon recommendations, email spam filters, robo-readers, and AI-powered personal assistants such as Siri, Alexa, and Echo. Not only is the digital revolution well underway, but from a sociological angle, it is important to stress that it is unfolding in complex and uneven ways across the globe.
In terms of how the work I’ve been doing is relevant to the contemporary world, it is essential to emphasize that the digital revolution is not simply an “out-there” phenomenon, such as the technological field of machine learning algorithms or emotion recognition technologies. Automated digital technologies also pervade personal life, re-organizing the nature of self-identity and the fabric of social relations in the broadest sense. Much of what we do in everyday life is organized and mediated by AI. However, AI transforms the fabric of everyday life in largely unnoticed ways. Like electricity, AI is essentially invisible. AI functions automatically, operating “behind the scenes”—so that airport doors automatically open (or not!), GPS navigation gets us home, and virtual personal assistants help in our daily lives. And just like electricity, AI has fast become a general-purpose technology—that is to say, a technology enabling the development of a range of further innovative applications.
What effect do you hope your work will have?
I am calling for renewed efforts—by governments and big tech, as well as users, consumers and citizens—to come together to create emotional literacy in how we, as a society, engage with automated smart machines. In the age of AI, it is tremendously difficult for people to get a sense of what they might want in life, given the sway of predictive analytics. Computational predictions and calculated probabilities encoded in algorithms serve as substitutes for individual agency and rein in people’s capacity for autonomous action in the world. We urgently need fresh thinking for this global challenge, and I hope my research serves as a stimulus—a small step forward on a gigantic path—towards breaking the current cycle.
I’ve been fortunate that my work has struck a chord not only in policy circles but also with industry and enterprise. My research has directly influenced policy on AI. An example is the Horizon Scanning Project on AI undertaken by the Australian Council of Learned Academies, where I was an invited member of the Expert Working Group at the request of the Chief Scientist of Australia and the Department of the Prime Minister and Cabinet. I’ve also been deeply engaged with emerging industry initiatives on the socio-ethical value and implications of automated technologies and their applications. My industry work has largely been with European-based companies, but increasingly also with companies in Japan.