The Pentagon spent last Friday trying to bully an AI company that had dared to require human oversight of surveillance and autonomous weapons. On Sunday, automated air defense demonstrated exactly why that company was right to insist.
On Friday, February 27, 2026, Defense Secretary Pete Hegseth designated Anthropic, the AI company behind Claude, a “supply chain risk,” a classification meant for foreign adversaries. The company’s offense was refusing his demand that it strip out the safeguards requiring human responsibility for lethal autonomous weapons systems and prohibiting mass domestic surveillance. Hegseth called the refusal “a master class in arrogance and betrayal,” apparently without irony.
Emil Michael, the ex-Uber executive forced out amid that company’s scandals and now Hegseth’s Undersecretary for Research and Engineering, called Anthropic CEO Dario Amodei “a liar” with a “God-complex” who “wants nothing more than to personally control the US Military.”
The demand was that the AI be made available for “all lawful purposes,” with no exceptions. Anthropic refused because, Amodei argued, the technology is not yet reliable enough to be trusted with autonomous lethal decisions, however lawful.