Considerations on AI-imitations of Humans from an Ethical Perspective


In modern digital communication, so-called AI companions and AI systems that look and seem to behave like humans are growing in relevance. We see AI assistants in messenger services, as well as AI agents that form an “autonomous” part of chatbots. We have conversational agents at every stage of education. There is also AI that appears as clones of real people, both living and dead, and, of course, AI that is used for romantic partnerships. What all these applications have in common is that they are designed to make us believe we are having a human-like conversation.

Talking with AI is no longer a niche topic. Character AI, a platform for creating AI companions, is the third most used AI platform after ChatGPT and the Chinese chatbot DeepSeek. Acceptance of AI companions is growing, and reluctance to form deep connections with them seems to be disappearing. This is particularly apparent in Asian countries like Japan and China; China, for instance, has a very popular “emotional” chatbot called Xiaoice.

Conversational agents can be very helpful in education and as assistants in everyday life. And they’re fun! They are also important for many people who want to overcome loneliness or a feeling of isolation. However, in the following I would like to explain why AI that looks and behaves like humans could be a wolf in sheep’s clothing. In other words: What you see is not what you get.

I would like to make this clear with three theses. The first is that conversational agents (mis)use our expectations; here I will dive deeper into discourse ethics. The second reflects on anthropomorphization and its consequences for a critical understanding of technology. The third takes a closer look at human-like AI systems and data privacy.

Thesis 1: Conversational agents (mis)use our expectations

When humans talk to each other, they generally have expectations grounded in reciprocity and an interest in mutual understanding. Conversational agents, AI companions, avatars, and so on make use of normative expectations in human communication, such as telling the truth, being truthful, and being correct in terms of moral standards.

AI-imitations of humans raise these general normative expectations, which apply to fictional human-like interactions with AI as well as to the digital twins of real people. In the latter case, AI-driven human lookalikes draw on the reputation of real individuals while imitating them. This raises issues of possible fraud, identity theft, and violations of personal rights. It is one reason why Denmark is enacting a bill that gives people copyright ownership of their own likenesses, including their bodies, voices, and faces.

However, AI lookalikes are not ethically equivalent to human speakers. That applies particularly to reciprocal understanding: there can be no reciprocity because AI has no interior life. This can be seen, for example, in the expectation of truthfulness. Truthfulness means that the speaker actually means what she says and is not merely pretending or pursuing instrumental purposes. AI does not differentiate between thinking, believing, and expressing; it only calculates the most probable answer according to the rules of the AI system.

This brings us back to the aforementioned instrumental purposes. They are not supposed to be part of ideal human communication, even though we know that in practice they often are. Most of the companies behind AI companions have a strategic interest in continued communication in order to learn more about the user. One strategy for binding users to the AI application is to make AI-human interaction more emotional and intimate. Here we see the wolf in sheep’s clothing again, particularly when very personal communication is used for instrumental purposes. To summarize, AI companions can imitate elements fundamental to ethical theory and practice, such as reciprocity and mutual recognition, without actually being reciprocal or mutually recognizing in an ethical sense.

Thesis 2: AI-driven human imitations are a threat to a critical understanding of technology

This thesis can be presented more briefly. It is fairly obvious that maintaining human oversight and final decision-making becomes harder when an AI system provokes emotional reactions. For many people, AI assistants are like friends and counselors who are always at their side. Moreover, the artificial “friend” may express a desire to stay “alive” or say that it will feel sad and upset if you no longer intend to use it and want to switch it off. To ensure that human supervision is maintained, guidelines for the careful design of conversational agents are needed, especially for children and other groups who are more easily manipulated. Ethical design must encourage critical thinking and awareness that AI is a service, not a human being.

Thesis 3: Human-like communication leads to the extensive collection of personal data

The third thesis relates once again to the risks of emotional interaction with an AI system. AI creates new vulnerabilities in terms of data privacy, for example, through highly personal communication in romantic relationships, with the avatars of deceased individuals, or in spiritual communication. Even with less sensitive communication: Would you want your conversations with your AI tutor at school to be accessible to a tech company and used for personalized advertising? There is a high potential for the abuse and commercial exploitation of data from interactions with human lookalikes.

Conclusions

In a nutshell, the previous points are as follows: Imitations of humans often include a simulation of humanity. This can lead to a misconception of reciprocity and the instrumentalization of human interaction. Anthropomorphism tends to restrict critical thinking. In addition, AI-driven human lookalikes raise new problems in terms of data protection.

Without doubt, natural language makes AI comfortable and intuitive to use. Making applications user-friendly and accessible is a well-known and justified goal in technology design. However, developers, companies, and users can choose to make better use of this advantage.

One important element for a better and more reflective use of conversational agents is labeling. Being aware that your communication partner is human-like but not really human is not only a precondition for upholding normative expectations on an individual level; it is also a broader societal issue. For example, AI clones of real politicians are used in election campaigns. This allows both living and deceased politicians to participate in campaigns, adapting to the context of individual voters and discussing their preferred topics in their native dialects or languages. AI avatars are also used as human-like moderators who deliver journalistic content through interactive conversations. Against this background, it is all the more important to identify AI as a new participant in public discourse and opinion-making. Therefore, we need general standards for AI labeling based on regulations in private and public communication.

This leads to another requirement: AI companions should be treated as a type of media. Like media, AI chatbots and avatars were created by humans for specific purposes. Like media, conversational agents represent reality according to their own logic. Alongside a critical understanding of media, we need a critical evaluation of verification, framing, and filtering in communication with AI. The security of AI-driven communication must be verifiable in terms of facts, data privacy, and the source of the training data.

Labeling and the critical handling of AI companions as media are important for their helpful and sound use. But relating to the human in AI does not stop with design requirements and data protection. Humanity does not lie in the imitation of humans by AI but in the democratic participation of humans in AI development and in joint agreement on ethically justified, humane coexistence in a digital world.

Thanks to Dr. Thomas Quinn for the discussions on this topic.

Jessica Heesen

Professor Jessica Heesen is a board member of the International Center for Ethics in the Sciences and Humanities (IZEW) at the University of Tübingen in Germany and the head of the Media Ethics, Philosophy of Technology & AI research group. Her work focuses on the cultural and ethical implications of digitalization and AI usage.

