
Dystopian Futures: Anthropic and the Department of Defense


Fantastical imaginings of a bleak and desolate end-state of mankind, characterized by environmental disasters, tyrannical governments, or some other cataclysmic decline, have been around for hundreds of years. And as the rapidly advancing capabilities of AI technologies have become increasingly apparent, so too have these visions of potential dystopian scenarios proliferated. We have all come across them: depictions ranging from humans losing control to all-powerful AGIs, to Orwellian regimes empowered by mass surveillance, to doomsday-level weapons falling into the hands of rogue states or terrorists, to the somewhat more sedate erosion of human capability through deskilling and learned dependence.

Although I have studied some of these projected possibilities—in particular AI consciousness and the value alignment problem—I have largely dismissed dystopian theories as too far in the future to worry about. Instead, I have focused on the numerous and varied, but more immediate, harms, behavioral pressures, and social distortions that technology is already producing across different areas of society.

A recent clash between the leading AI company Anthropic and the Department of Defense, however, gave me pause and made me wonder whether, in setting aside these dramatic dystopian theories, I had also been ignoring larger constitutional questions that are becoming more significant as AI systems grow more powerful and pervasive: namely, how to control and constrain them.

Anthropic versus the Department of Defense

In July 2025, the Department of Defense (DoD) awarded Anthropic a two-year prototype agreement to develop frontier AI capabilities for national security, involving custom models tailored for military use and for handling classified material. But early 2026 saw the Pentagon abruptly terminate Anthropic’s contract over a clash about controls and governance. At the center of this dispute was Anthropic’s refusal to relax ethical red lines around the use of its technology for mass domestic surveillance and fully autonomous weapons.

The DoD wanted unfettered access to Anthropic’s models for “all lawful use,” arguing that this extended to both of the functions around which Anthropic had placed safety guardrails. But Anthropic refused to comply on ethical grounds, citing mass domestic surveillance as incompatible with democratic values and maintaining that the use of its technology for fully autonomous weapons was currently “outside the bounds of…what it could safely and reliably do.” Both fair and understandable arguments.

For this, the Pentagon not only terminated the contract but also, in an extraordinary retaliatory move unprecedented against a U.S. company, blacklisted Anthropic as a “supply-chain risk,” a designation usually reserved for “enemies of the state” (think Chinese and Russian companies) that prevents a company from working with any arm or partner of the DoD. OpenAI then quickly stepped in and took over the contract, maintaining, against something of a public backlash, that it could do so without abandoning its own ethical boundaries.

Why does this matter?

Back to the dystopian scenarios of Orwellian authoritarian regimes, mass surveillance, and destructive weapons getting into the wrong hands…

The Pentagon insists that it has no intention at present of using AI for either mass domestic surveillance or fully autonomous weapons. Rather, its core objection was that a private company was unilaterally imposing binding restrictions on the government. And while I cannot be the only one who has lost confidence in this administration, given the gap between what it says and what it actually does (never mind trust in the integrity expected of public office), the deeper significance of this incident does not depend on whether we trust this administration, the last one, or the next one.

The question this incident forces us to confront is whether, as AI becomes more capable and embedded in both public and private life, any one institution (be it a government department, a military agency, or a private technology company) should be able to decide alone what safeguards apply, what risks are acceptable, and what limits can be overridden, without any reference to the broader constitutional system that represents the will of the people and is designed to ensure that no single power is exercised without meaningful checks against abuse.

The answer to that question, if we follow the rationale of the Constitution and the separation of powers, is “no.”

How does AI fit into the U.S. Constitution?

I am not a political philosopher, nor am I an expert on the U.S. Constitution, but I can see that we need to work out how to incorporate artificial intelligence and algorithmic technologies into constitutional and governmental systems around the world, which exist to protect basic rights and to stop public power from becoming arbitrary, concentrated, or abusive. 

Historically, most important powers (rule-making, rule-enforcing, and rule-judging) have sat within public institutions, so they have been naturally covered by the Constitution’s checks and balances. This, of course, is changing, and rapidly. More and more private companies, both technology companies and those driven by technology, now undertake activities once far more closely associated with public institutions than private ones: setting and enforcing speech rules across vast public spaces; determining what can be seen, amplified, or suppressed online; ranking, classifying, and recommending the information we see; and, of course, designing algorithms that make high-stakes decisions that significantly impact lives, for better or worse. All of this, and more, happens through opaque, automated systems (such as social media engagement algorithms) that companies are not legally required to disclose and that offer no explanation or redress to users or those harmed.

A growing number of legal scholars are beginning to consider this problem, labeling it “digital constitutionalism.” Matej Avbelj, for example, describes AI as posing one of the most significant contemporary tests for constitutionalism precisely because the scale, speed, and opacity of its technological power do not fit neatly within inherited constitutional models.

How the power of AI needs to be incorporated into a constitutional set of checks and balances, I do not know. But we certainly need to put serious thought into how constitutional systems might be reworked to ensure that power is constrained where it needs to be.

If digital technologies were subject to constitutional-style constraints, it would mean clearer limits on what they are permitted to do, a stronger articulation of individual rights, more transparent and reviewable procedures, and avenues for appeal and redress. Importantly, it would also mean that users of powerful technologies, like the Department of Defense, would themselves be constrained in how they are permitted to use them.

As it stands, there are no established and constitutionally legitimized rules around even the most dangerous technology: the Pentagon has gone so far as to threaten to invoke the Defense Production Act, potentially forcing Anthropic to adapt its models to the Pentagon’s needs without any safeguards. With no accountability, oversight, or constraint on these technologies or on how people use them, it is easy to imagine some of the dystopian scenarios I mentioned playing out sooner than we might have thought.

Alexandra Frye
The Digital Ethos Group

Alexandra Frye edits the Tech & Society series, where she brings philosophy into conversations about tech and AI. With a background in advertising and a master’s in philosophy focused on tech ethics, she now works as a responsible AI consultant and advocate.

