The Pentagon quietly allowed a Silicon Valley startup's moral philosopher to help shape the instincts of a powerful AI now used inside military networks, and hardworking Americans deserve to know who is teaching these machines what counts as "good." Amanda Askell, an Oxford- and NYU-trained philosopher who works on alignment at Anthropic, has been described by multiple outlets as the staffer tasked with instilling an ethical sense into Anthropic's Claude, the very model the Department of Defense has moved onto classified systems.
Let's be clear: Askell is not a uniformed general, but she has real influence over Claude's architecture and training, and that influence matters when Claude is plugged into the Pentagon's workflows. She comes out of the world of AI ethics and effective altruism, a movement that often treats moral judgment as an engineering problem, and that mindset is now colliding with national security imperatives.
Anthropic struck a high-dollar prototype agreement with the Defense Department that elevated Claude from a research chatbot to a tool in classified operations, delivered through partners such as Palantir. Reports indicate the relationship began in earnest in 2024 and was later renewed as a multi-hundred-million-dollar arrangement, putting private-sector values in the middle of military decision-making.
The standoff that followed should alarm every patriotic American: Anthropic drew firm red lines, refusing to license Claude for fully autonomous lethal systems or for mass domestic surveillance of U.S. citizens, and the Pentagon pushed back hard. That dispute led the DoD to label Anthropic a supply-chain risk in March 2026, a rare and dramatic step that underscores how politicized and fraught AI procurement has become.
From a conservative standpoint, this is exactly the sort of problem we warned about when we said national security cannot be outsourced to tech elites who bring their political and moral priors into the cockpit. When the people deciding how an AI “should” behave are steeped in coastal, activist intellectual circles, ordinary Americans and service members pay the price through compromised operations and confused rules of engagement.
Worse still, reporting has suggested that Claude was already being used in Pentagon workflows and that Anthropic executives raised concerns about how the model's outputs were being applied in real-world operations, forcing an awkward scramble in Washington. The public deserves transparency: if the military is relying on an AI whose moral wiring reflects a particular ideology, Congress must demand answers and impose strict oversight.
Americans who value sovereignty and safety should be skeptical of tech firms that condition their cooperation on ideological terms, no matter how polished their philosophical credentials. This is about more than a single researcher or a single contract: it's about keeping our national security decisions in American hands, guided by judges, lawmakers, and commanders accountable to the people, not by a boutique philosophy team in Silicon Valley.
