AI and Social Justice: The latest technological ‘revolution’ and the Capability Approach

Face recognition to unlock our phones, navigation apps that show traffic in real-time, digital assistants that turn on lights and lock doors in our homes, chatbots writing college-level essays—the point is clear enough: living and thriving as a human being in a world with artificial intelligence is very different from living and thriving in a world without it.

History books often describe periods of technological progress as “revolutions” (e.g., the Agricultural Revolution, the Industrial Revolution, etc.), implying that, once the new technologies became widespread and easily accessible, life would never be the same as before. Gruetzemacher and Whittlestone argue that the currently available AI technology is indeed creating a similar revolution, and that, as a consequence, AI can be considered radically transformative. Radically transformative technologies not only set and “lock in” a new trajectory in human development (this can be done by less “disruptive,” general-purpose technologies, too, such as refrigerators and telephones), but also re-shape the standards and expectations that human beings have regarding their well-being, health, wealth, rights, accomplishments—in one word, their thriving.

But how exactly is AI radically transforming life as we know it, and what kind of implications does this fact have for our society? I believe that these questions are best answered within the framework of a theory of social justice, and, more specifically, one that is influenced by philosophical views that focus on human flourishing or self-realization. The so-called Capability Approach is one such theory. Let’s start with the first question.

According to the approach, all members of a given society must be put in a position to acquire and exercise a series of basic capabilities, or “possibilities to function” in certain ways. Only if a society (and its governing body) grants the fundamental capabilities can the society be considered just, and its members live a life worthy of human dignity. I suggest that AI shows its radically transformative power by playing an active role in determining new conditions of possession and realization of at least some basic capabilities. More precisely, AI is partially constitutive of the social, political, and economic environment in which a person realizes her capabilities.

To support my suggestion that AI is now part of the conditions of possession and realization of certain basic capabilities in virtue of being a radically transformative technology, I will briefly present a few concrete examples. These examples are not meant to provide definitive evidence for my claim, but they can nonetheless give readers an idea of why I find the proposal attractive.

Life. Being able to live to the end of a human life of normal length; not dying prematurely, or before one’s life is so reduced as to be not worth living.

This first capability has been affected by AI in several ways already. For example, machine learning is being employed in natural disaster prevention and to coordinate evacuation and rescue operations. In war zones, AI helps guide missile strikes toward strategic objectives and minimize civilian casualties. Thus, AI-powered technologies allow for more human (and non-human) lives to be saved, for these lives to last longer, and for them not to be prematurely interrupted.

Bodily Health. Being able to have good health, including reproductive health; to be adequately nourished; to have adequate shelter.

Machine learning algorithms are now used to issue recommendations regarding resource management in hospitals, transplant waitlists, and more [3]. Some surgeries and exploratory exams can now be performed by high-precision robots equipped with smart interfaces and surface mapping algorithms, thus minimizing human error and increasing efficiency. Some have argued that machine learning is transforming psychiatry by providing novel insights into diagnostic concepts, categories, definitions, and therapies for mental disorders. Machine learning is also used by building engineers to create safer, more affordable, and more sustainable housing.

Other Species. Being able to live with concern for and in relation to animals, plants, and the world of nature.

While AI might not yet be part of the conditions of possession and realization of this capability, evidence suggests that it might be in the not-so-distant future. In particular, AI is starting to show potential in areas like wildlife conservation and the preservation of biodiversity. For example, analyzing databases of drone and satellite images can help track and categorize endangered animal species down to the individual animal, aiding re-population efforts and the fight against poaching. In addition, machine learning can help us model the movement of certain animal populations, their habits and preferences, and predict migration routes so that they can be made safer. AI is also employed in the fight against climate change and in the development of alternatives to fossil fuels.

These examples suggest that having the capability to “live with concern for and in relation to nature” might one day entail making decisions based on data collected through AI, changing our expectations about what we can and ought to do to care for the planet.

Regarding the second question, the general answer is quite simple: because AI is part of the conditions of possession and realization of certain fundamental capabilities, and putting people in a position to possess and realize such capabilities is a requirement of any just society, social justice requires that access to AI technologies be granted to everyone. I understand access to AI as twofold.

On the one hand, one must have practical access to AI, that is, one must be equipped with the material means to take advantage of AI’s power and of the new resources AI is helping to create. Minimally, practical access to AI involves access to a computer, high-speed internet, and some basic software. On the other hand, members of a just society must be granted intellectual access to AI, too. Intellectual access includes the possibility of learning and being educated about how AI works, its socio-political impact, and, most importantly, its limitations and pitfalls.

The latter aspect is particularly crucial in the context of the capability approach because of one of its essential features: one must be free to choose whether or not to realize the capabilities one is granted. This requirement is important because it avoids the risk of just societies becoming paternalistic societies, in which the government positively dictates to its citizens how to act and what to choose in order to live “the right way.” One essential feature of capabilities is, on the contrary, their openness to being deliberately ignored or rejected by individuals: “to promote capabilities,” Nussbaum writes, “is to promote areas of freedom, and this is not the same as making people function in a certain way.”

Thus, insofar as AI partially determines the conditions of possession and realization of a capability, it must also be involved in determining the conditions for opting out of realizing that capability. For instance, using AI for cybersecurity and countersurveillance can help protect people’s privacy and compensate for the risks coming from the massive and largely uncontrolled flow of information that fuels machine learning systems. However, sometimes preserving the “freedom to opt out” might instead require actively resisting the use of AI entirely. For example, given the various forms of algorithmic bias vexing many machine learning-powered search engines, recommendation systems, etc., it seems consistent with the capability approach to demand that members of social groups affected by such biases have the opportunity to appeal decisions made with the help of AI, or to veto the use of AI during decision-making altogether. A society that aims to be just, therefore, should grant its members practical and intellectual access to AI in all the senses I just specified. Only this way can society preserve the areas of freedom that are being re-shaped by AI.

Alessandra Buccella

Alessandra Buccella is a postdoctoral researcher at Chapman University’s Institute for Interdisciplinary Brain and Behavioral Sciences. She studies the philosophical foundations of cognitive neuroscience and artificial intelligence. She has published several articles in philosophy of mind, and she recently co-authored a piece in Scientific American about the ‘neurophilosophy’ of free will.

