Question: how can you be dishonest with deepfakes? Answer: create and share deepfakes. OK, that’s much too glib. But it turns out, in my view, it’s not all that far from the truth. It’s more difficult than you might think to use deepfakes honestly. Let me explain.
What’s a deepfake?
First, what are deepfakes? I’ll use the term “deepfake” fairly loosely to refer to any realistic digital video, audio, or image media generated by contemporary machine learning techniques. (Strictly speaking, that characterization is a bit too broad, but it fits common usage. For a more detailed taxonomy and terminology, see Raphaël Millière’s “Deep Learning and Synthetic Media.”)
In a classic deepfake, the likeness of one person originally appearing in a source video has been replaced with the likeness of another person who didn’t appear in that source video. But it’s also becoming common to train AI models to generate wholly new digital video/audio/image media from the ground up.
What do people do with deepfakes? The bad.
Deepfakes are probably still best known for their bad uses. Within a couple of years of deepfakes hitting the internet in 2017, the vast majority of them were pornographic, often depicting famous people doing things they hadn’t actually been recorded doing.
Later, deepfakes were deployed for political destabilization, even as a wartime disinformation tactic. For instance, not long after Russia invaded Ukraine in 2022, a deepfake spread on social media falsely depicting Ukrainian President Volodymyr Zelensky telling his people to surrender.
More generally, there are justified worries that the proliferation of deepfakes—or perhaps even knowledge about deepfakes—is likely to erode social trust in digital media as reliable sources of evidence supporting critical parts of our social and political infrastructure, e.g., the role of video evidence in legal proceedings. That’s all bad. But are there good uses for deepfakes? Yes, but I’ll come back to those shortly.
A framework for thinking about dishonesty in deepfakery
Arguably, the root-level moral concern about deepfakery is dishonesty. Here’s how I’ll define honest and dishonest actions, drawing from (but slightly modifying) the definitions my co-author Christian Miller and I gave in a recent paper titled “Deepfakes and Dishonesty”:
A person acts honestly when she does not intentionally distort the facts, as she understands them, to others whom she reasonably foresees will be in her audience.
A person acts dishonestly when she does do that.
Of course, even if doing something would be dishonest, that doesn’t necessarily mean that what you’re doing is all-things-considered morally wrong. Lying to the proverbial Nazis at the door about the Jews you’re harboring would be, by the lights of many people, morally permissible.
In the article noted above, Miller and I develop a framework for evaluating honesty and dishonesty in deepfakery by combining our definitions of honest and dishonest actions with a “phase-agent analysis” of deepfakes. There’s not space here to lay out this analysis fully, but here’s a brief sketch. We find three general phases in any deepfake’s life cycle. First is the production phase, during which the deepfake is planned, source materials are gathered, AI models are trained, and finally the deepfake is generated. Second is the distribution phase, when the deepfake is made available to people via social media, email, etc. In the third and final phase, people view (or listen to) the deepfake.
In each of these phases, various people play various agent roles. For example, the deepfaker creates the deepfake. Anyone who makes the deepfake available to others is a distributor. A faked-in person is someone who didn’t appear in the original, authentic recordings but whose likeness is added to the deepfake, often replacing the likeness of another person (whom we call a faked-out person). And so on, with several more types of roles that people (or even organizations) can play in a deepfake’s life cycle.
With this analysis in one hand and our definitions of honest and dishonest actions in the other, we can ask clearer, more precise questions: whether the use of a given deepfake is dishonest, who was (or wasn’t) dishonest, and why exactly.
Consider an example. It’s election season, and Doug wants to produce a deepfake depicting Ethel, a politician he dislikes, enjoying offensive jokes with Frank, a brazenly racist social media creator. Problem: Ethel has neither met Frank nor enjoyed any racist jokes. But no problem: Doug finds some videos of Frank laughing over racist jokes in an interview with Gertrude, who happens to be of roughly the same height and weight as Ethel. So, Doug trains an AI model on the Frank-Gertrude videos as well as some videos of Ethel—easy to find, since she’s a public figure. Soon Doug has produced a deepfake that appears to show Ethel laughing at racist jokes in an interview with Frank. Doug promptly plasters the deepfake all over social media. Hank, an average guy, sees the deepfake, believes it to be genuine, and, shocked to learn of Ethel’s bad character, shares it on his own social media profile, causing others to believe falsely in the events depicted.
Who was or wasn’t dishonest, and why? Clearly Doug, both the deepfaker and a distributor, was dishonest. He intended to distort the facts as he understood them to a broad audience. What facts did he distort? More than you might think. Obviously, Ethel’s laughing at racist jokes (which didn’t happen). But more besides: Ethel’s having met and laughed with Frank, and Frank’s having met and laughed with Ethel (neither of which happened); Ethel’s having been recorded doing these things (she wasn’t); her having consented to be recorded (she didn’t); and so on. These are all distinct facts distorted by Doug’s deepfake.
What about Hank, who distributed the deepfake? Hank’s social media post was unfortunate and deceived others, but it wasn’t dishonest, since he didn’t intentionally distort any facts as he understood them. Our framework can be used to analyze this case further, of course, and can be applied to more complex cases.
What about well-intentioned deepfakes?
But wait: aren’t deepfakes also used for good purposes? Yes, they are. But if so, surely it’s difficult to be dishonest with well-intentioned (or at least non-ill-intentioned) deepfakes, right? Well, the opposite is true, in my view. Consider just two examples.
First, deepfakes have been used to raise awareness about worthy social causes. In an early example, David Beckham was deepfaked as speaking in a number of languages, encouraging world leaders to work toward ending malaria. But the deepfake contained no disclosures. So, despite the laudable intentions, its production and initial distribution—which distorted facts about what Beckham said and could say—likely fell short of honesty, since its producers presumably intended to distort the facts, even if they did so for a good cause.
Second, there’s a burgeoning industry, led by companies like Synthesia.io, offering customizable deepfakes for purposes such as marketing, advertising, and education. Customers can select an AI model trained on a real actor’s visual and audio likeness, give it a script, and, like a digital human puppet, it’ll say whatever they tell it (subject to some content restrictions). But as these digital puppets become increasingly realistic—and some already are quite realistic—many viewers will believe they’re watching genuine recordings. Absent disclosures, which many of these companies don’t require, creating and distributing these sorts of deepfakes will often fall short of honesty, since doing so involves intentionally distorting the facts to one’s audience—many of whom, it’s easy to foresee, will believe what’s depicted is authentic. And all this is true even if one’s ultimate intentions are laudable.
What about labels or disclosures?
But what if we include a visible or audible disclosure, e.g., “This contains deepfaked material!”? Such a disclosure would prevent intentional distortion of the facts, right? Possibly, but it’s not as simple as you might think. We can easily foresee that most of the main methods of disclosure would not actually notify many (perhaps most) viewers or listeners that the recording is a deepfake. Typical methods might be a printed label on physical packaging, a statement in the closing credits, or a notice on a film’s title screen on a streaming service. But we know beforehand that many viewers—maybe most—won’t see these notices, and it’s hard not to intend to distort facts for viewers when you know that’s precisely what you’re doing. What about prominent, on-screen disclosures during the deepfaked portions of the film? That would do the trick: someone using this method is very likely trying to undistort, for the viewer, whatever facts the deepfake distorts. The trouble is, virtually no one would use this method: it would be distracting and would ruin a film’s aesthetic experience. So, disclosures aren’t as obvious an honest route as you might have thought.
What if we’ve got consent?
Finally, what if we first obtain the consent of the deepfaked persons? If the faked-in person consents to be depicted doing or saying things they didn’t actually do or say, isn’t that enough for honesty? As I’ll be arguing in a new project in progress, the answer again is “no, often not.” While the consent of the depicted could avoid dishonesty in some cases—e.g., perhaps consenting to be deepfaked as saying “I consented to being deepfaked as saying I consented”—most of the time distributing a deepfake will still involve distorting a number of facts. But I’ll say more about this in the near future.
In closing
I’ve offered a sketch of a framework for thinking about honesty and deepfakery. But in closing I think it’s worth clarifying two things I’m not arguing. First, I’m not arguing that producing and distributing deepfakes is always morally wrong. Even if it turns out that using a particular deepfake would be dishonest, perhaps it’s still morally permissible, maybe even something you ought to do. Depending on the fundamental moral theory to which you subscribe, honesty might be just one, potentially overridable factor to consider—albeit surely an important factor—when making moral decisions.
Second, I’m not arguing that deepfakery is always dishonest. It’s just that it’s hard not to be dishonest with deepfakes, because it’s hard to engage in deepfakery—even well-intentioned deepfakery—without intentionally distorting the facts as you understand them to an audience. Context matters, of course. If a particular context makes it obvious to you and your audience that your deepfake is indeed fake, then it’s less likely you’re being dishonest, since in that context you’re less likely to be intentionally distorting the facts for your audience. Or, if in the future our digital media environment becomes so saturated with highly realistic AI-generated media that no one anymore believes by default that any digital media are authentic, then perhaps deepfakery will rarely be dishonest. But if we get to that point, I suspect we’ll have bigger worries.

Tobias Flattery
Tobias Flattery is Assistant Teaching Professor of Philosophy at Wake Forest University. He researches and writes on issues in the ethics of emerging technologies as well as the history of philosophy. He received his Ph.D. in philosophy from the University of Notre Dame. Prior to academia, he was a data warehouse engineer and business intelligence analyst in the private technology sector.