
LuFlot: The first philosopher-powered chatbot

Portraits by Mara Lavitt; Image courtesy of Yale University

Students now have a new tool at their disposal. The Luciano Floridi Bot, also known as LuFlot, is an AI-powered online tool designed to democratize access to philosophical material and foster engagement with the works of philosopher and Director of Yale’s Digital Ethics Center (DEC) Luciano Floridi. The chatbot, which was trained on Floridi’s body of work, answers user questions based on his more than thirty years of writing. The bot not only synthesizes material from multiple sources but also provides in-text citations, which are useful for double-checking its work. Like other AI chatbots, LuFlot is not immune to the occasional hallucination.

In the following interview, I talk with Floridi about the process of creating the bot, the limitations of chatbots more broadly, and their ethical implications.

How did this project come to be? I understand that Nicolas Gertler, a first-year student at Yale College and research assistant at Yale’s Digital Ethics Center (DEC), partnered with Rithvik “Ricky” Sabnekar, a high school junior and skilled developer from Texas, to create the Luciano Floridi Bot (aka ‘LuFlot’). 

Nicolas had the idea, and we soon started working with Ricky to implement it. They deserve all the credit. I provided my writings, and some advice on design and communication strategies, but it is their project. I only share the responsibility.

What was the process like for creating it?

A typical process of progressive refinement, as often happens in computer science. After the project became clear, we tried several implementations with some free and not-so-expensive tools. Nicolas recommended we find a more sophisticated platform, so we ended up using GPT-4 as the basic engine. Then there was figuring out how we could optimize the bot to respond to user queries about my writings. We decided to implement retrieval-augmented generation (RAG), as it enables the bot to ground its syntheses in my writings and even quote directly from them. I have learned a lot just by following the creation of the bot.
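The LuFlot codebase itself is not public, but the pipeline Floridi describes (retrieve the most relevant passages from a corpus, then have the model answer from them, citing sources) can be sketched in a few lines of Python. The following is a minimal illustration under assumptions, not LuFlot's implementation: embed() is a hypothetical stand-in for whatever embedding model the real system uses, and the two-item corpus is a toy.

```python
import numpy as np

# Toy corpus: in a real RAG system these would be chunks of Floridi's
# writings, each tagged with its source to support in-text citation.
corpus = [
    {"text": "The infosphere denotes the whole informational environment.",
     "source": "Floridi 2014"},
    {"text": "Digital ethics studies moral problems raised by digital technologies.",
     "source": "Floridi 2023"},
]

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: a real system would call an embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def retrieve(query: str, k: int = 2) -> list[dict]:
    """Rank corpus chunks by cosine similarity to the query embedding."""
    q = embed(query)
    scored = []
    for chunk in corpus:
        c = embed(chunk["text"])
        sim = float(q @ c / (np.linalg.norm(q) * np.linalg.norm(c)))
        scored.append((sim, chunk))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [chunk for _, chunk in scored[:k]]

def build_prompt(query: str) -> str:
    """Assemble retrieved, source-tagged passages into the prompt for the LLM."""
    passages = "\n".join(f'- "{c["text"]}" ({c["source"]})' for c in retrieve(query))
    return (
        "Answer using only the passages below, citing sources in-text.\n"
        f"Passages:\n{passages}\n\nQuestion: {query}"
    )

print(build_prompt("What is the infosphere?"))
```

The key design point is that the model's answer is constrained to retrieved, source-tagged passages, which is what makes the in-text citations mentioned above possible.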

You’ve said that the AI drew connections between some of your temporally distant works. What surprised you most about the AI’s results?

For the bot, all my writings are “now.” It’s like looking at things on a table: it does not matter when they were put there or by whom; you see the distances and relations among them. I have my own “vision,” but of course it is more selective and “narrative,” not synchronous but more historical. To be able to see the links between nodes (ideas, thoughts, remarks, arguments, topics, etc.) in thousands of pages that can be decades apart is quite surprising.

What do you see as the potential for similar generative AI chat models? 

If used properly (and they can easily not be) they can be great tools for learning (Nicolas is working on a bot for a cognitive science course, for example) and for research. In the latter case, one can discover and explore connections (including contradictions or inconsistencies, but also correspondences or new aspects) and changes in a conceptual space that until recently we could not navigate as easily, or sometimes at all.

What are some of the ways that chatbots can be used improperly?

The improper use of chatbots includes privacy violations, manipulation and deception (using chatbots to shape users’ decisions or opinions, e.g. for political influence or spreading misinformation), spamming, phishing and fraud (e.g. by pretending to be trustworthy entities to extract sensitive information like passwords, credit card numbers, or other personal details for fraudulent purposes), and harassment and abuse (e.g. to insult individuals or disseminate hate speech and discriminatory content). Finally, I would add that overreliance on a technology can lead to a decrease in critical thinking (including problem-solving and writing skills).

The important point to stress is that this is all about human users’ unethical or illegal behavior, not chatbots.

What are their current limitations, and do you see these being resolved?

Chatbots have social limitations. They typically require access to the internet and a good level of digital literacy to be used effectively. This creates a divide where individuals without internet access or digital skills cannot benefit from the services provided by chatbots. The next problem is accessibility and usability, e.g., design that makes them difficult or impossible to use for people with different abilities, relying only on a few major languages, or excluding non-native speakers. There are then costs (development and maintenance) that can be a real barrier for many actors. Other issues concern the abrupt replacement or displacement of workers (e.g., in customer service roles), which can exacerbate unemployment and underemployment. I list other problems above, which are more ethical than purely social: privacy, bias, misinformation, manipulation, etc.

If we skip the social problems (costs, accessibility, etc.), the real issue is reliability, the so-called “hallucinations.” A journalist friend recently looked at what the bot would say about him and it fabricated an article we never wrote together. We laughed about it, but this is a problem. There is a lot of expertise that goes into using and managing these tools properly, and that can be underestimated. One of the things we plan to research at Yale’s DEC is exactly how to promote that expertise, which is definitely not just technical, but cultural, historical, contextual, critical, and semantic.

Are there any ethical concerns that you have regarding chatbots like ChatGPT?

Oh, there are so many the list would be quite long. By now, some are classic, like bias, plagiarism, copyright infringement, privacy, etc. Others are less obvious but are becoming pressing, like individual autonomy, disinformation, and weaponization. We need more regulations, education, and ethics.

What do you plan to do next with the AI?

It’s a secret!

Aww, really? Can we have a hint?

OK 🙂 we are looking into a voice and image interface, like a real avatar.

Wow! I can’t wait to see that! What has been your biggest takeaway from the experience of making the first bot?

It was wonderful to collaborate with two bright students like Nicolas and Ricky. Their ideas, skills, genuine enthusiasm, and a free sense of “doability” have been contagious.

Luciano Floridi

Luciano Floridi is the founding Director of the Digital Ethics Center and Professor in the Cognitive Science Program at Yale University. His research concerns the digital revolution and its philosophical issues. His most recent books are: The Ethics of Artificial Intelligence – Principles, Challenges, and Opportunities (OUP, 2023) and The Green and The Blue – Naive Ideas to Improve Politics in the Digital Age (Wiley, 2023).

Maryellen Stohlman-Vanderveen is the APA Blog's Diversity and Inclusion Editor and Research Editor. She graduated from the London School of Economics with an MSc in Philosophy and Public Policy in 2023 and currently works in strategic communications. Her philosophical interests include conceptual engineering, normative ethics, philosophy of technology, and how to live a good life.
