Trump Moves to Ban Anthropic From the US Government
President Donald Trump’s sudden order comes after the Defense Department pressured Anthropic to drop restrictions on how its AI can be used by the military.
US President Donald Trump announced Friday that he was instructing every federal agency to “immediately cease” use of Anthropic’s AI tools. The move comes after Anthropic and top officials clashed for weeks over military applications of artificial intelligence.

“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War,” Trump said in a post on Truth Social.

Trump said that there would be a “six month phase out period” for agencies using Anthropic, which could allow time for further negotiations between the government and the AI startup.

The Pentagon and Anthropic did not immediately respond to requests for comment.

Shortly after the president’s announcement, Defense Secretary Pete Hegseth said that Anthropic would also be designated a “supply chain risk,” a move normally reserved for foreign businesses considered a danger to American national security. The designation will bar the US military and its contractors and suppliers from working with the AI company.

Hegseth also lashed out at Anthropic and its CEO, Dario Amodei, over the company’s refusal to agree to the Pentagon’s demands. “Cloaked in the sanctimonious rhetoric of ‘effective altruism,’ they have attempted to strong-arm the United States military into submission—a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives,” Hegseth wrote on X.

The Department of Defense has sought to change the terms of a deal struck with Anthropic and other companies last July, eliminating restrictions on how AI can be deployed and instead permitting “all lawful use” of the technology. Anthropic objected to the change, arguing that it could allow AI to be used to fully control lethal autonomous weapons or to conduct mass surveillance on US citizens.

The Pentagon does not currently use AI in these ways and has said it has no plans to do so. However, top Trump administration officials have voiced opposition to the idea of a civilian tech company dictating military use of such an important technology.

Anthropic was the first major AI lab to work with the US military, through a $200 million deal signed with the Pentagon last year. It created several custom models, known as Claude Gov, that have fewer restrictions than its regular ones. Google, OpenAI, and xAI signed similar deals around the same time, but Anthropic is the only AI company currently working with classified systems.

Anthropic’s models are available for classified military work through platforms provided by Palantir and through Amazon’s cloud services. Claude Gov is currently used largely for run-of-the-mill tasks, like writing reports and summarizing documents, but it is also used for intelligence analysis and military planning, according to one source familiar with the situation who spoke to WIRED on condition of anonymity because they are not authorized to discuss the matter publicly.

In recent years, Silicon Valley has gone from largely avoiding defense work to increasingly embracing it, with some companies becoming full-blown military contractors. The fight between Anthropic and the Pentagon is now testing the limits of that shift.
This week, several hundred workers from OpenAI and Google signed an open letter supporting Anthropic and criticizing their own companies’ decisions to remove restrictions on military use of AI.

In a memo sent to OpenAI staff today, CEO Sam Altman said that the company agreed with Anthropic and also viewed mass surveillance and fully autonomous weapons as a “red line.” Altman added that the company would try to reach a deal with the Pentagon that would let it continue working with the military, The Wall Street Journal reported.

The public spat between the Pentagon and Anthropic began after Axios reported that US military leaders used Claude to assist in planning the operation to capture Venezuela’s president, Nicolás Maduro. After the operation, an employee at Palantir relayed concerns from an Anthropic staffer to US military leaders about how its models had been used. Anthropic has denied ever raising concerns or interfering with the Pentagon’s use of its technology.

The dispute between Anthropic and the Department of Defense has escalated in recent days, with officials publicly trading barbs with the AI company on social media.

Hegseth met with Amodei earlier this week and gave the company until Friday to commit to changing the terms of its contract to allow “all lawful use” of its models. Hegseth praised Anthropic’s products during the meeting and said that the Department of Defense wanted to continue working with Anthropic, according to one source familiar with the interaction who was not authorized to discuss it publicly.

Some experts say that the dispute boils down to a clash over vibes rather than concrete disagreements over how artificial intelligence should be deployed. “This is such an unnecessary dispute in my opinion,” says Michael Horowitz, an expert on military use of AI and former deputy assistant secretary for emerging technologies at the Pentagon. “It is about theoretical use cases that are not on the table for now.”

Horowitz notes that Anthropic has supported all of the ways the Department of Defense has proposed using its technology thus far. “My sense is that the Pentagon and Anthropic agree at present about the use cases where the technology is not ready for prime time,” he adds.

Anthropic was founded on the idea that AI should be built with safety at its core. In January, Amodei penned a blog post about the risks of powerful artificial intelligence that touched on the dangers of fully autonomous AI-controlled weapons. “These weapons also have legitimate uses in the defense of democracy,” Amodei wrote. “But they are a dangerous weapon to wield.”

Additional reporting by Paresh Dave.

Update 2/27/26 6:30 pm ET: This story has been updated to include additional information from Defense Secretary Pete Hegseth’s post on X.

Will Knight is a senior writer for WIRED, covering artificial intelligence. He writes the AI Lab newsletter, a weekly dispatch from beyond the cutting edge of AI. He was previously a senior editor at MIT Technology Review, where he wrote about fundamental advances in AI and China’s AI ...