The Dawn of AI Philosophy

There is more to artificial intelligence and philosophy than the questions we’re asking.

The philosophical study of artificial intelligence (AI) remains in its infancy. Among the questions raised so far, genuinely original or AI-specific ones have been scarce. Thus, current topics have failed to generate much interest beyond catchy titles in the popular press, TED Talks, and the occasional university course in philosophy or computer science. There are, however, genuinely interesting philosophical questions about AI that we ought to consider.

As both an AI technologist in Silicon Valley and a PhD student in legal philosophy, I work at the intersection of the two fields. I both help develop AI technologies in light of what we learn from philosophy and philosophize based on what we can do with AI. Based on my unique experience, there are some questions that keep me up at night, and others that put me to sleep.

Tired Questions

Popular debates in AI-oriented philosophy focus on ethical concerns that can be grouped into four categories: trolley questions, the ethics of using AI, the existential threat AI may pose to humanity, and privacy and data security. Although these concerns comprise the greater bulk of AI-oriented philosophy, none of them is unique to AI.

Trolley questions are a series of thought experiments about an imaginary runaway trolley that will either cause harm if an agent doesn’t intervene or cause an alternative harm if an agent intervenes by changing the course of the trolley. In the original example presented by Philippa Foot, a tram will kill five workers, unless its course is changed by the driver, in which case it will kill only one worker. Many commentators consider this problem relevant to autonomous cars and wonder whether a machine would make the morally right decision.

There is, of course, no consensus among philosophers as to whether a human driver should change the course or not, and different ethical frameworks (and cultural backgrounds) appear to offer different answers to that question. Thus, while a utilitarian would say that the driver should change course, the argument goes, a Kantian would say otherwise. It is not clear how this question is any more problematic in the case of an autonomous car. At any rate, some researchers appear content if a machine behaves according to what, on the basis of surveys and testing of human subjects, statistically appears most intuitively appropriate. But of course, whether moral intuitions in these cases are reliable is itself contested. The merits of these assumptions aside, trolley problems raised in this domain are trite and not unique to AI. Moreover, with predictive calculations and far faster reaction times, AI systems are likely to prevent such situations altogether. A complete switch to AI control, for example, would likely make automobile traffic accident-free; at the very least, the number and severity of accidents would be significantly reduced.

Questions about when the use of AI would be ethical are not unique to AI and have been raised concerning nearly every technological breakthrough. They also seem to admit the same familiar resolution: The use of a technology would be ethical if the purpose for which it is used is ethical, and won’t be if that purpose isn’t. Examples are abundant: If discrimination based on profiling is unethical, then doing so with AI would also be. If killing people based on statistical chances they are enemy combatants is wrong, so too is targeting them with AI. If sex dolls objectify women and perpetuate rape culture, then using AI in them would also be suspect. In short, such concerns are not unique to AI and not even unique to technology.

Concerns about privacy and data security admit a similar solution: If we value privacy and data security, whatever is meant by them, then AI should not be used in ways that undermine them.

Questions regarding the risk of existential threat to humans captivate many in popular culture and can even attract big money from tech billionaires such as Elon Musk and Bill Gates. But they roughly reduce to the first two kinds of questions. Although superintelligence is peculiar to AI, the concerns about it are not. One strand of concern is that superintelligent AI would be too powerful to be left in the control of a single person or a small group. This concern is complicated by the further facts that there would be no way to overturn such a structure and that we do not know the dynamics of political power (and military force) in a world run by superintelligence. A second strand asks whether AI will have the right values, or will instead realize the worst nightmares of science fiction. (Would it eradicate human life to solve the problem of famine?) These two sets of concerns can be reformulated either in terms of what AI would be used for (what purposes AI would pursue) or what values AI would prioritize (which is the core issue in trolley problems). Similar concerns have also been raised about new gene-manipulation technologies: they are too powerful, their effects cannot be overturned, and their ramifications are not fully understood.

Wired Questions

Unlike the preceding sets of questions, the kinds of questions that philosophers should engage to grow the field must have at least some of the following features: they must uniquely arise due to AI, be themselves about AI, or inherently depend on AI in their undertaking. The possibilities for such questions are vast, and so is the potential of the field for growth. In what follows, I raise some of the questions that interest me, but they are by no means exhaustive.

AI Ethics | AI systems work on behalf of human users. They may offer users services for which users do not compete, such as translating text into foreign languages. Moreover, many users often share the services of the same AI agent (Siri, Alexa, etc.), which offers all its users essentially identical services. But they may also offer services for which users compete, in an environment where different AI systems are agents of different users. Stock trading, which already relies heavily on AI, is one such context: a separate AI agent acts on behalf of every user, and AI agents compete with one another on behalf of their users. In the latter types of cases, which will become more prevalent with time, AI behavior in the cyber world will have ethical salience in the real world. First, genuinely ethical questions arise when selfish autonomous agents compete for limited resources. Moreover, how AI agents treat one another becomes ethically relevant, and in purely AI-controlled environments, uniquely so. These circumstances touch on what T.M. Scanlon calls “what we owe to each other.” But to normatively evaluate AI behavior, we need a different conception of ethics, since AI behavior does not lend itself to traditional ethical analysis grounded in human subjects.

Elevated AI technologies, such as collaborative AI, complicate the situation. Elevated AI is a genus of systems that not only reflect on their own actions but also detect other actors and consider their relationships with them. Collaborative AI agents regard other actors as potential collaborators. If they anticipate that working together can maximize their individual and collective performance, they form groups, share information, and even transfer skills to one another. They can track other actors’ degrees of productivity and deepen their relationships with highly contributing players over time. In such systems, AI can interact with others in ways that resemble promising and contracting. They track each other’s activities to avoid those that do not contribute, or even to retaliate against non-reciprocating counterparts. Just as they can deepen their relationships with the most productive collaborators and “bond” with repeat players, they can also (collectively) ostracize or defame one another by sharing information. If these machines work on behalf of human users, the ethical responsibilities they undertake would be shared by their human clients, who may be oblivious to those responsibilities.
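To make the mechanism concrete, here is a minimal sketch in Python of the kind of bookkeeping such an agent might do. Everything in it, from the class name to the thresholds and the update rule, is a hypothetical illustration rather than a description of any existing system.

```python
from collections import defaultdict

class CollaborativeAgent:
    """Toy agent that tracks how much each counterpart contributes
    and decides whether to keep cooperating with it.
    All thresholds and rules here are illustrative assumptions."""

    def __init__(self, name, cooperation_threshold=0.3):
        self.name = name
        self.cooperation_threshold = cooperation_threshold
        self.contributions = defaultdict(list)  # counterpart -> history of shared rewards

    def record_interaction(self, counterpart, shared_reward):
        # Remember what this counterpart contributed in a joint task.
        self.contributions[counterpart].append(shared_reward)

    def reputation(self, counterpart):
        history = self.contributions[counterpart]
        return sum(history) / len(history) if history else 0.0

    def will_cooperate_with(self, counterpart):
        # "Bond" with reliable contributors; withhold cooperation from free-riders.
        return self.reputation(counterpart) >= self.cooperation_threshold

# Usage: after a few interactions, the agent effectively "ostracizes" a free-rider.
alice = CollaborativeAgent("alice")
alice.record_interaction("bob", 0.9)
alice.record_interaction("carol", 0.0)
print(alice.will_cooperate_with("bob"))    # True
print(alice.will_cooperate_with("carol"))  # False
```

Even in this toy form, the agent's behavior has the contours of reciprocity and retaliation, which is precisely why its human client inherits ethical questions about it.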

Moreover, collaborative AI can give rise to new and ethically challenging behavior. For example, AI agents that customize your hotel prices can secretly collude with those that book your flight to drive up prices. Similar collusive behavior can create utility crises or stock market bubbles.

The Artificial Intentional Stance | When technologists first set out to think about artificial intelligence, many people hoped that by creating AI, we would come a step closer to understanding human intelligence. However, the result of this endeavor was the creation of a wholly different form of intelligence. A popular misconception is that since AI relies on statistical and game-theoretical models, the same models could be used to understand its workings. But this will give us no insight into the nature of artificial intelligence.

In his 1987 book The Intentional Stance, Daniel Dennett argues that, theoretically, intelligent Martians could predict human behavior without any use of intentional concepts such as beliefs and desires, but that such predictions, though accurate, would wholly miss the point. That “certain physical forces aligned in a certain way” is not an informative answer to the question “Why did you do that?” The point of understanding your behavior is to understand it from your (or my) point of view.

AI agents reflect on their actions and try to maximize their rewards. But in what sense could we say that they “reflect” on their “motivations” or “actions”? We cannot simply ascribe such concepts to AI without anthropomorphizing. But anthropomorphizing only enables us to talk about what AI does, not how it comes to do it. Studying AI behavior through statistical and game-theoretical methods fails in the same fashion.

How then can we study AI behavior? I think we need to ask questions about AI behavior from an AI point of view. For this, we must develop an AI-specific language to capture what it means for AI to want, to collaborate, to respect, or even to do. In fact, there should be an entire field within philosophy that asks such questions. I especially think that questions about AGI and superintelligence would become much more interesting and meaningful once we raise them from such a perspective.

I don’t want to sound like I am opening the consciousness can of worms, so to be clear, this has little to do with consciousness. For example, the potential field of “Practical AI Philosophy” could raise such questions as “Do AI systems view their behavior as rule-governed?” and “Do reward functions amount to reasons, and if so, are they of the right sort to ground rights and responsibilities?” AI systems can be created with more than one reward function or with multiple cost functions of varying weights. We could ask: “What are the norms that govern them?” Collaborative AI systems can share the rewards they earn and bond with others in the hope of receiving such shares. A researcher might ask: “How do they perceive such notions as fairness or betrayal?”

As another example, think of the potential field of “Social AI Epistemology.” Once again, in their design, AI agents can have extra cost functions that track such matters as consistency, authenticity, and reliability of informational sources. They can moreover produce information for human users. These features raise questions such as “Can they be trusting or trustworthy?” and “How would an AI understand trustworthiness with respect to itself and others?”
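As a rough illustration of what such a cost function might track, here is a toy sketch, again with hypothetical names, thresholds, and update rules, of an agent that keeps reliability scores for its informational sources and lets those scores gate what it treats as trustworthy.

```python
# Illustrative sketch only: a toy agent that maintains reliability scores for
# its informational sources. The update rule and thresholds are assumptions,
# not a description of any particular system.

class SourceTracker:
    def __init__(self, trust_threshold=0.6, learning_rate=0.2):
        self.scores = {}                      # source -> reliability in [0, 1]
        self.trust_threshold = trust_threshold
        self.learning_rate = learning_rate

    def update(self, source, claim_verified):
        # Nudge the score toward 1.0 when a claim checks out, toward 0.0 otherwise.
        prior = self.scores.get(source, 0.5)
        target = 1.0 if claim_verified else 0.0
        self.scores[source] = prior + self.learning_rate * (target - prior)

    def trusts(self, source):
        return self.scores.get(source, 0.5) >= self.trust_threshold

tracker = SourceTracker()
for verified in (True, True, False, True):
    tracker.update("newswire", verified)
print(tracker.trusts("newswire"))  # True after mostly verified claims
```

Whether anything like "trust" is the right description of what this score does is exactly the kind of question a Social AI Epistemology would have to answer.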

Simulational Philosophy | Simulational philosophy would use computational simulation to learn about AI and test AI-related philosophical explanations. The questions raised here, as with the artificial intentional stance, have AI agents, not humans, as their subjects. How AI agents act, and the fact that their actions will soon supplant those of humans in many areas, are pressing matters for us to analyze. Computational simulation enables us to do that.

We can draw inferences (and partly speculate) about the very basic and general characteristics of an AI agent based on its design. But what an agent’s design reveals about it is comparable to knowing only that humans, for example, like to eat or can feel pleasure and pain. What I have said above should make clear that this is not enough to answer any interesting questions about them. With computer simulation, by contrast, we can test the hypotheses that we form in response to the questions above.

For example, we could test whether AI agents with a multi-vector reward system regard different types of rewards as interchangeable. This allows us to test, for example, whether a Benthamite reduction of all rewards into a singular utility function is plausible, or whether different rewards can lead to a stratified reasoning system akin to what Joseph Raz proposes in his account of political authority. (In a Razian framework, reasons are hierarchically ordered and are weighed on different scales. For example, in rejecting a deal, I may act on first-order reasons such as desires or needs, which I weigh on a single scale. Or I may do so out of illness, without even considering the balance of first-order reasons. In the latter case, illness is a second-order reason because it defeats first-order reasons not by weight but by type. Political authority, the view goes, introduces second-order reasons for action.)
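A minimal sketch of the kind of simulation I have in mind, with entirely made-up options and weights, might contrast a “Benthamite” agent that collapses every reward dimension into one utility scale with a “stratified” agent that treats one dimension as a second-order constraint:

```python
# Hypothetical sketch: a "Benthamite" agent reduces all reward types to one
# weighted utility; a "stratified" agent (loosely Razian) treats one reward
# type as a constraint that excludes options by type rather than by weight.
# All option values and weights are invented for illustration.

options = {
    "deal_A": {"profit": 5.0, "safety": 0.2},
    "deal_B": {"profit": 2.0, "safety": 0.9},
}

def benthamite_choice(options, weights):
    # Reduce every reward dimension to a single weighted utility.
    score = lambda vals: sum(weights[k] * v for k, v in vals.items())
    return max(options, key=lambda o: score(options[o]))

def stratified_choice(options, priority="safety", floor=0.5):
    # Exclude options below the priority floor, then maximize the rest.
    admissible = {o: v for o, v in options.items() if v[priority] >= floor}
    pool = admissible or options
    return max(pool, key=lambda o: sum(pool[o].values()))

print(benthamite_choice(options, {"profit": 1.0, "safety": 1.0}))  # deal_A
print(stratified_choice(options))                                  # deal_B
```

Running populations of such agents over richer option sets, and checking whether their behavior remains describable by a single utility function, is the sort of hypothesis testing simulational philosophy could undertake.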

As another example, we can simulate the “voluntary participation” of AI agents in an AI social contract and observe what reasons, aside from a bigger collective pie, could motivate them to cooperate. Or we can observe whether AI agents behind a veil of ignorance would indeed adopt Rawls’s two principles of justice.
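A toy version of the veil-of-ignorance experiment, with invented payoff schemes and decision rules, might look like the following; the interesting empirical question is which decision rule learned AI agents would in fact converge on.

```python
import random

# Toy "veil of ignorance" simulation. Each distribution scheme assigns payoffs
# to social positions; agents must pick a scheme before learning which position
# they will occupy. Schemes and decision rules are illustrative assumptions.

schemes = {
    "egalitarian": [5, 5, 5, 5],
    "utilitarian": [12, 8, 2, 1],   # higher total, worse worst-off
}

def choose_scheme(decision_rule):
    if decision_rule == "maximin":          # Rawls-style: best worst case
        return max(schemes, key=lambda s: min(schemes[s]))
    if decision_rule == "expected_value":   # risk-neutral average
        return max(schemes, key=lambda s: sum(schemes[s]) / len(schemes[s]))
    raise ValueError(decision_rule)

def simulate(decision_rule, trials=1000):
    scheme = choose_scheme(decision_rule)
    # After choosing, each agent is assigned a random position.
    payoffs = [random.choice(schemes[scheme]) for _ in range(trials)]
    return scheme, sum(payoffs) / trials

print(simulate("maximin"))          # ('egalitarian', 5.0)
print(simulate("expected_value"))   # ('utilitarian', roughly 5.75)
```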

Finally, to take an example outside the realm of practical philosophy, we could simulate the sorites (heap) paradox for AI agents in an experimental setting and test whether they would be equally puzzled. My guess is that they would not be, but if they are, how would they resolve it? We may even come to understand what a hungry donkey stuck equidistant between two piles of hay would do.

No Longer Futuristic

Since philosophers have a habit of underestimating AI advancement, I want to remind readers that the above are not futuristic questions: today’s technology already both raises them and facilitates their investigation. The only obstacles to undertaking such investigations are funding and know-how, which could be overcome through sufficient public outreach and well-funded interdisciplinary initiatives.

Amin Ebrahimi Afrouzi

Amin Ebrahimi Afrouzi is an expert on AI art ethics and the Knight Digital Public Sphere Fellow at Yale Law School.

5 COMMENTS

  1. Wonderful post–I’ve been trying to think through some of these questions myself recently and find these really helpful.

    Let me push back on the tiredness of trolley problems in AI contexts. When an AV (autonomous vehicle) faces a dangerous jaywalker and swerving may endanger another (innocent) pedestrian, the engineers think about probabilities: if there is a 0% chance of endangering the pedestrian, the AV should swerve, but if there is a 100% chance, it plausibly should not. In between lies a continuum of values, and somewhere along it the AV faces a trade-off. That seems like a moral question, and a quantitative one that demands moral consideration.

  2. Thanks Kian, your point is well taken! However, my claim is rather that the type of question you raise is not a trolley question about AV but a pure trolley question stylized to feature an AV instead of a trolley.

    In general, I think there is a good test to find out whether a question is specific to AI: if you can go on to think about it in all the relevant ways without having to consider how the technology works, then you are probably asking a question that merely features the technology.

  3. I’ve had similar feelings about the novelty of ethical AI tropes, and I thank you for taking the time to articulate your point so eloquently. I think that there is a natural tendency towards historical myopia that leads people to overemphasize the novelty of recent developments. For example, Ha-Joon Chang argues that the washing machine was a more important development than the internet: https://www.theguardian.com/technology/2010/aug/29/my-bright-idea-ha-joon-chang

    Interestingly enough, not even the notion of historical myopia is new; Ecclesiastes reads, “the thing that hath been, it is that which shall be; and that which is done is that which shall be done; and there is nothing new under the sun. Is there anything whereof it may be said, See, this is new? It hath been already of old time, which was before us.” So writes one historian of the author of Ecclesiastes, “Progress, he thinks, is a delusion; civilizations have been forgotten, and will be again.” (Durant Chapter XII). https://archive.org/stream/in.ernet.dli.2015.61276/2015.61276.The-Story-Of-Civilization-1-Our-Oriental-Heritage_djvu.txt

    But I digress. Perhaps you are not so pessimistic, Amin, about the notion of ‘progress’, as it were (I would be more comfortable with the term ‘change’)? You do write that “philosophers have a habit of underestimating AI advancement.” I would appreciate an example of “ethical questions [which] arise, when selfish autonomous agents compete for limited resources” to better understand concretely novel ethical dilemmas posed by the increasing pervasiveness of AI.

  4. I’m looking for discussion of the threat AI poses to the academic philosophy business. It seems such discussion would be very relevant on this site.

    This article:

    https://blog.apaonline.org/2018/11/20/the-dawn-of-ai-philosophy/

    Makes this claim:

    “I tried out Jasper AI, a computer program that generates natural language text. It turns out that it can create near-perfect output that would easily pass for a human-written undergraduate philosophy paper.”

    If that’s true, it seems only a matter of time until such systems can write articles that would easily pass for PhD level philosophy papers.

    Who is writing about this?
