
Digital Ethics, Facebook, and Bentham’s Panopticon: Interview with Luciano Floridi

In January, the European Data Protection Supervisor (EDPS) announced the appointment of a new Ethics Advisory Group on the ethical dimensions of data protection. APA member Luciano Floridi is one of six members of the group, which will develop a new framework of digital ethics aimed at protecting the rights and freedoms of individuals, their privacy, and their data.

What is the ethics advisory group, and why was it formed?

The European Commission is working on updated data protection legislation, the General Data Protection Regulation (GDPR). The GDPR is reaching the very final stage of the acceptance process and should be announced soon. But it’s not enough to have new legal rules in place. We also need to understand the ethos and the ethical implications of the legislation. I don’t see this as merely an analytic exercise that makes the problems clear and extracts all their implications. That’s only half the job. It’s like buying ingredients for a dish: if you don’t cook them, they’re useless. So the second part of the task is coming up with potential strategies to deal with the problems.

For example, freedom of speech, the right to information, and the right to privacy are clashing on personal, corporate, and political levels. Recently, the European Court of Justice required Google to remove links to personal information that an individual found disturbing. Google wanted to know exactly what should be done—not only for legal compliance, but what would be the right thing to do. When is it right, and when wrong, for Google to refuse to remove links to information? What if the person is a public figure? What if it’s in the public interest to have the information available? Law and ethics don’t always overlap. I was on the Advisory Council to Google, which addressed these questions, as well as the right to be forgotten. The guidelines—which, as far as I know, Google took on board—are freely available online.

The point is that it is not enough only to analyze issues that new data protection legislation will raise. We need to know: what do we do about it? Philosophy, as I imagine it, is a great way of coming up with solutions.

Why do we need a new ethos and why do we need it now?

We need it now because we are living in a new environment, and the shift is happening very, very quickly. Consider the print revolution. Between the first mention of printing and the first magazine appearing in Europe, there were three centuries. The car revolution took about one century. The digital revolution is transforming our lives daily in cultural, intellectual, ethical, and philosophical ways. We haven’t had centuries to adapt. That’s why we need it now.

Why do we need a new ethos? I like to say we need to add a new chapter to the big philosophy book. It’s a great book, but a lot of the stuff we need to deal with today is not in it, specifically the philosophy and ethics of information. Do we understand the ethics of this information flow, the new environment, the transformation of personal identity—what we think we are and could become—within the infosphere? No. Do we need to? Yes. It’s time to upgrade philosophy so that it can speak to a changed world.

You’ve mentioned some data-related ethical issues already, such as the tension between privacy and freedom of expression. What other ethical issues are there?

There is almost nothing in life that is not touched by data flows: education, conflict, cyber war, health, entertainment, and jobs are all affected. The queen of all problems is privacy and how we use big data as a resource. Privacy is absolutely vital. And consent goes hand in hand with privacy. We want to protect our privacy so that we have control over our information. We want to be informed about what we consent to in terms of our data.

Yet it’s also vital that we don’t lose sight of all the other issues, such as our responsibility toward future generations. What responsibility do we have to use data to improve human health and living standards, for example? And then there’s the digital divide, that is, inequality. How can we make sure that we create a fair environment in which everyone has fair access to the resources? It’s not only a financial problem concerning who can afford to be online and pay for access to information, but it’s also educational, social, political, and even ethnic, because some groups are not comfortable being online.

My hope is that a philosophical perspective can help to deal with all of these issues systematically.

Isn’t it a good thing that companies are using big data to tailor the offerings they sell to us? What are the ethical issues?

Companies use data to tailor products to individuals, and that’s not so bad. For example, I love playing squash. Suppose a company knows that there is a new racquet that I like and sends me a message that it’s on sale. Yes, anytime, please! However, the same path can be followed for discriminatory purposes. What if I receive a message saying that my insurance premium just went up, because the company knows that I play a lot of squash, have a medical condition, and therefore am at a much higher risk of filing a claim? A great racquet at a discount? Fantastic! Insurance with a higher premium? Not so cool. Unfortunately, it’s the same information channel, but used for different purposes. When this happens, there’s a higher risk of discrimination. I’m not a profile. I’m an individual. It’s unfair, because we are all unique, but we also use shortcuts to discriminate against each other.

To what extent are individuals responsible for protecting and curating their own data, rather than relying on regulation to do it?

I belong to a rather unorthodox party here. The distinction is between being empowered with the right to check what details companies have about me, and being burdened with the duty to check. For example, you cannot join a particular social media platform until you’re thirteen years old. The company says that it wants to empower all parents to be able to oversee whether their child is using it. This is not empowering users. This is the company offloading its responsibility. That’s not the way it works for anything else. If you get on a plane or a train, you assume that all the safety points and checks have been done. It’s not up to the passenger to make sure that the engineers did their job.

I want to know that someone somewhere has done all the checking possible so that it’s safe. I don’t want to spend my life being a slave to the system, having to monitor whether anyone is messing with my data. I want the company to check, and someone in charge to ensure that the company did check. It is the duty of the shop not to sell alcohol to children, and of the police to ensure that it doesn’t; it is not the duty of the parents to ensure that their children do not buy it at the shop (rather, it is the right of the parents that their children’s safety be ensured by the shop).

Do you think something like Plato’s Ring of Gyges, to make us invisible at will, would be useful or too risky?

As Plato already said in his own time, it’s too risky. We need to get rid of the false impression that when we’re online we’re wearing some kind of Harry-Potter-style invisibility cloak or ring. I wish! It’s exactly the opposite. Imagine that when you go online, you’re opening a window on your life. Imagine that the screen is a big hole through which 7 billion people are watching you. It’s not true. I hope it’s not true. How boring that would be! But it’s possible.

Are you suggesting that we should act as if we are in Jeremy Bentham’s panopticon?

We shouldn’t be too paranoid, but we should act as if we’re on record. Assume that every time you speak, every time you click, every time you go to a website, a record is being kept. Maybe no one will ever use that record. But somewhere, somehow, there is a record of that click, of that purchase, of your having visited that website. If we had a better sense of what it is to exist online, we would behave a little more intelligently.

How has the digital revolution transformed philosophical questions?

It has generated interesting new philosophical problems, but old issues are also being reinterpreted, and gain an extra twist, every time we move forward. For example, what is information? What ethics do we need for the infosphere and digital environments? What does it mean to be yourself in a context where you constantly monitor yourself, and you’re constantly monitored by society? What does it mean to have a world that is so deeply interpreted, metaphysically, via information? Do we have an informational grounding for a theory of knowledge? These questions have implications for epistemology, philosophy of mind, and ontology. And the list could be much longer, including political philosophy, the philosophy of the social sciences, philosophy of language, and aesthetics.

There’s a huge need for philosophical understanding and problem-solving. Philosophers are good at analysis and synthesis of conceptual issues. We’re at the nexus of what I like to call “conceptual design.” We need to understand which conceptual design is the good one for the time being. That’s philosophy to me. Philosophers could do a lot to make a difference, and we have a responsibility to do so. Otherwise, the vacuum that we leave unfilled is occupied by obscurantism, fundamentalism, and other dubious narratives.

In 2014, Facebook conducted a mood experiment in which it secretly tinkered with the news feeds of 689,003 users, and it was widely criticized for doing so. What will guidelines like yours mean for such experiments?

Had good guidelines been in place, this experiment would never have happened. I’m still quite astonished. It just baffles me that they thought it was okay to modify almost 700,000 people’s news feeds and treat them as lab rats, without a moment of consultation. My grandmother could have told us that if you get bad news, your mood goes down, and if you get good news, your mood goes up. There was no need to make hundreds of thousands of people’s lives slightly more miserable to prove that.

It’s also very dangerous. Suppose there’s a referendum, such as the recent Scottish referendum. Imagine that, if you feel upbeat and optimistic, you’re going to vote yes. If you feel down and as though life is too tough, you might think it’s better to stick together and vote no. There are about 5 million people in Scotland, and most of them are on Facebook. If Facebook runs a mood experiment on the same day that people vote, they have the power to change history. That worries me.

Whenever there’s an important election, The Economist gives its opinion on which candidate it would vote for. Social media hasn’t played politics yet. But what if Facebook decides to have a voice and spreads its analysis to over one billion accounts? Doable? Yes. Illegal? No. Influential? I can’t think of any bigger political earthquake. I find it amazing that there’s so little regulation. To keep our fingers crossed and hope that nothing happens is not a good strategy.

When I was younger, Microsoft was the bad guy because it had a monopoly. Google is now the bad guy. The next bad guy is going to be Facebook, because it has lots of data about lots of people, and that data is very personal. It’s not only silly and funny pictures that we are handing over. There are much bigger political risks.

Some people are afraid that artificial intelligence will destroy the world, and I know you’re skeptical about that. Why do you think their concerns are unjustified?

We can always be worried about everything. You can be worried about losing your job, even if you have the safest job in the world. “What if…” is a source of worries, no matter what you say or what you do. What if Martians land on Earth? What if someone hacked into the nuclear missiles in the United States and launched them right now? Is that impossible? As in married-bachelor impossible? It isn’t. But I’m not worried, because it’s such a remote possibility.

Elon Musk said the highest threat to humanity is artificial intelligence. Try explaining that to the almost 700 million people who don’t have access to clean water. Strong or True AI—the Terminator sort of thing—is a rich kid’s way of entertaining themselves by scaremongering people into believing that tomorrow’s robots are going to dominate the world. It’s a bad joke. What about the apocalypse? It’s the same risk. I am not going to lose sleep over it tonight. The APA Newsletter on Philosophy and Computers published a short piece of mine on the subject, in which I try to treat the topic lightly.

How do you think philosophers can contribute and be more involved in contemporary issues like the ones that we have been discussing?

I like to think of great philosophy as being 100% theoretical and applicable. I did not say applied. Plato and Aristotle established universities. That’s a lot of involvement in the real world! All the great philosophers were deeply engaged with their time, but they did it with some abstraction. I’m talking about Plato, Aristotle, Augustine, Aquinas, Descartes, Locke, Hume, Hegel, Heidegger, Wittgenstein, and Russell. Their lessons were universal and could be exported to other contexts. So let’s keep one eye on great theory and the other on what difference our conceptual design can make to society. That recipe is 25 centuries old. But don’t get too involved with the real world; that’s not really our job. Let’s just make sure our ideas have the kind of deep traction that others can use and run with. Creating and shaping the right ideas is the most important contribution we can make to making the world a better place.

 

Luciano Floridi is Professor of Philosophy and Ethics of Information at the University of Oxford, where he is also Director of Research of the Oxford Internet Institute. Find out more about Floridi on his website.

*


Skye C. Cleary PhD MBA is a philosopher and author of How to Be Authentic: Simone de Beauvoir and the Quest for Fulfillment (2022), Existentialism and Romantic Love (2015) and co-editor of How to Live a Good Life (2020). She was a MacDowell Fellow (2021), awarded the 2021 Stanford Calderwood Fellowship, and won a New Philosopher magazine Writers’ Award (2017). She teaches at Columbia University and the City College of New York and is former Editor-in-Chief of the APA Blog.
