by Robert Greenleaf Brice
In his final State of the Union speech, one topic that President Obama did not discuss was his use of drone strikes in the so-called “War on Terror.” Perhaps this is not surprising, as the President and the CIA have permitted drone strikes to occur under an unknown set of rules, supported by an unknown set of reasons. But as someone who works in epistemology, I find the level of uncertainty here reckless, and as a citizen, I find it terrifying.
On January 14, 2015, a U.S. drone strike inadvertently killed two hostages: a 73-year-old American, Warren Weinstein, and a 37-year-old Italian, Giovanni Lo Porto. Although President Obama said he grieves “when any innocent life is taken,” he also said that preliminary assessments indicate that the strike “was fully consistent with the guidelines under which we conduct our counterterrorism efforts.” Included among these guidelines is a strict policy—mentioned briefly in a speech at the National Defense University and more fully articulated in the President’s Counterterrorism Policy and Procedure Directive—that requires “near certainty that no civilians will be killed or injured.” But what does it mean to be “nearly certain”? Is such a level of assessment even attainable?
Philosophers have been evaluating the requirements that must be met for a person to claim that they “know something” at least since Plato first raised the issue in his dialogue the Theaetetus. There, knowledge is defined as “true belief combined with a logos,” or “justification.” That is, knowledge is justified true belief. A person’s belief may or may not be true, but their claim of belief doesn’t require any additional proof. It is enough that they say they believe it. Knowledge, however, bears a greater burden of justification than mere belief. If, for instance, one were asked how he knew that the first aerial bomb was dropped over Ain Zara, Libya, on November 1, 1911, he should be able to respond. Perhaps he is justified in his response because he studies the history of Italian aviation and regularly teaches and writes about aerial warfare.
As Ludwig Wittgenstein observed in his posthumously published notes On Certainty, “‘I know’ seems to describe a state of affairs which guarantees what is known, guarantees it as a fact.” The use of “I know…”—a shortened version of “I know that p”—has come to represent a technical phrase in epistemology. Epistemologists refer to this as propositional knowledge (sometimes, “knowing that”). Propositional knowledge covers all those cases in which a person knows that some sentence or proposition is the case: “I know that I have two hands,” “I know that the world exists.” Knowing something with certainty, however, implies a degree of support that is even stronger than the traditional knowledge claim.
G.E. Moore once offered a “long list of propositions…every one of which,” he says, “I know, with certainty, to be true.” David Hume, while discussing human conduct and necessity, says, “I shall say that I know with certainty that my friend will not put his hand into the fire and hold it there until it is consumed.” Even René Descartes, in introducing his radical skepticism, says, “I will…put aside everything that admits of the least of doubt…until I know something certain, or, if nothing else, until I at least know for certain that nothing is certain.” For Moore, Hume, Descartes, and much of the Western philosophical tradition, knowing with certainty represents our highest level of confidence. To possess certainty seems to mean that you not only know something, but that you know it without the possibility of error.
The history of human knowledge, however, suggests otherwise. Sometimes what we think we know—even that which we think we know with certainty—turns out to be wrong. For instance, we once thought we knew that the Earth was flat; or that we were located at the center of the universe; or that we were categorically distinct from other animals. These and other sorts of knowledge claims were even once thought to be certain. But as Wittgenstein reminds us, “One always forgets the expression ‘I thought I knew’” (1972, §12).
In On Certainty, Wittgenstein makes an important distinction between a “mistake” and a “mental disturbance.” While we can be mistaken about the date of a particular battle or the result of a chemistry experiment, we would be considered mad if we doubted that the earth is very old, or that physical objects exist. To be mistaken about historic battles or chemistry experiments may elicit a kind of doubt in us the next time we are asked, say, when the Battle of New Orleans took place, or when asked to explain what occurs when one mixes a small amount of sodium chlorate and sugar with a few drops of sulphuric acid. Still, this kind of doubt is just the sort we might expect in our everyday lives; it is typical, even ordinary. But to doubt whether the earth is old, or whether physical objects exist, is not. Someone whose errors are so abnormal that they threaten the very core of our commonsense judgments is not mistaken, but suffering from a kind of “mental disturbance” (1972, §71). While many regard acknowledging our fallibility within a commonsense, practical framework as natural and healthy, questions and doubts that exceed common sense and cease to be practical are no longer healthy; in many instances, says Wittgenstein, they are no longer sane.
Let’s return to the President’s Counterterrorism Directive. When President Obama says he knows, or that he knows with “near certainty,” he is under an obligation to have reasons; he must have justification to support his claim. So how might one go about determining that “no civilians will be killed or injured”? How does one attain “near certainty” here?
Legal issues aside, a reasonable place to begin would be to establish precisely who is a civilian and who is a terrorist. While the U.S. Intelligence Community is guided by the definition of terrorism contained in Title 22 of the U.S. Code, Section 2656f(d), to say definitively and absolutely what a terrorist is can elicit serious interpretive challenges. Our notions of terrorism and terrorist (and, by extension, extremism and extremist) are social/cultural concepts that have been—and in many cases still are—contested. To list but one example, according to the Department of Defense’s own Training Manual, even “the [American] colonists who sought to free themselves from British rule” could be seen as “extremists.”
Let’s say, however, that we could objectively demarcate terrorists from non-terrorists. It seems the next logical step before launching the strike would be to acquire surveillance of the confirmed target. If the terrorist enters a facility, the intelligence received from surveillance must determine that it is free of civilians. If one or more civilians are within the facility, or if there is a chance that one or more civilians are in harm’s way, the strike must be aborted.
A troubling issue emerges, however, when we consider one of the means of justification used by the President and the CIA for targeted assassination by drones: the “signature strike.” “Signature strikes” are drone strikes based on “life patterns”: vague, general patterns of behavior, e.g., attending a certain mosque, associating with particular individuals, and being of a certain age. These “signatures” appear to be enough to place one on the government’s “kill list.” What “signature strikes” are not based on is specific intelligence. As Jeremy Scahill details in Dirty Wars, the kill list is
a form of “pre-crime” justice in which individuals [are] considered fair game if they [meet] certain life patterns of suspected terrorists. Utilizing signature strikes, it was no longer necessary for targets to have been involved with specific thoughts or actions against the United States. Their potential to commit future acts could be a justification for killing. At times, simply being among a group of “military-age males” in a particular region of Pakistan would be enough evidence of terrorist activity to trigger a drone strike.
Former CIA case officer Philip Giraldi told Scahill that the President has a system in place,
where people are being killed, you don’t know what the evidence is, and you have no way to redress the situation…It’s not that there aren’t terrorists out there, and every once in a while one of them is going to have to be killed for one good reason or another, but I want to see the good reason. I don’t want to see someone in the White House telling me, ‘you’ll have to trust me.’
Under even the most charitable of interpretations, it is hard to imagine how justification for a “signature strike” could be considered acceptable, much less a sound example of an assessment criterion that is “near certain.” This sort of “justification” is disturbing, to say the least.
Perhaps the most unsettling aspect of the entire drone program is the apparent role that the Joint Special Operations Command (or JSOC) plays. JSOC is a “sub-unified command” of the U.S. Special Operations Command, and is charged with, among other things, the planning and conducting of special operations in the Middle East. But JSOC also “has its own intelligence operations inside Pakistan and, at times, [has] conducted its own drone strikes.” This elicits an obvious set of questions: what, if any, sort of justificatory calculus is JSOC using? Does it include a “near certain” criterion to avoid civilian casualties? Who oversees JSOC? Who is accountable here? Unfortunately, as Scahill’s source told him,
JSOC personnel, working under a classified mandate, are not [overseen by Congress], so they just don’t care. If there’s one person they’re going after and there’s thirty-four [other] people in the building, thirty-five people are going to die. That’s the mentality. [T]hey’re not accountable to anybody and they know that.
In his press conference, however, the President said that “as Commander-in-Chief, I take full responsibility for all our counterterrorism operations.” So should we hold the President accountable for JSOC’s actions and justifications for those actions?
As we have seen, the President’s and the CIA’s criteria for determining who should die by drone strike are terribly flawed. They clearly do not rise to the level of knowledge: “signature strikes” are based on a belief—a belief that “life patterns” provide sufficient justification for a lethal strike. This is not knowledge, and it is obviously not knowledge with “near certainty.” But even more chilling is the fact that the President and the CIA have permitted JSOC to operate drone strikes under an unknown set of rules, supported by an unknown set of reasons. The level of uncertainty here is reckless and terrifying.
The New York Times quoted a senior administration official as saying, “It makes you wonder whether the intelligence community’s definition of ‘near certainty’ is the same as everybody else’s.” It makes me wonder, however, whether their definition is the same as anybody else’s.
† Two paragraphs above were initially published in my book, Exploring Certainty: Wittgenstein and Wide Fields of Thought (2014). My thanks to Lexington Books for allowing me to reprint them here.
Robert Greenleaf Brice is an assistant professor in the department of Philosophy at Loyola University New Orleans, and the author of Exploring Certainty: Wittgenstein and Wide Fields of Thought. He is currently working on a guidebook to Wittgenstein’s On Certainty for Springer Publishing.