  1. Nearly two hours after President Donald Trump announced on Truth Social that he was banning Anthropic products from the federal government, Secretary of Defense Pete Hegseth took it one step further and announced that he was now designating the AI company as a “supply-chain risk”.

    After a week of tense negotiations over the company’s acceptable use policies, the Pentagon gave Anthropic an ultimatum: agree by Friday, 5:30 PM EST, to let the Pentagon use Claude for “all legal purposes,” including for autonomous lethal weapons without human oversight and mass surveillance, or be designated a supply-chain risk. The designation, which is typically used for companies with ties to foreign governments that pose national security risks to the United States, will bar any company that uses Anthropic products from working with the Department of Defense.

    Read more: [https://www.theverge.com/policy/886632/pentagon-designates-anthropic-supply-chain-risk-ai-standoff](https://www.theverge.com/policy/886632/pentagon-designates-anthropic-supply-chain-risk-ai-standoff)

  2. TheCaptainDamnIt on

    Man they really want to use AI to target, spy on and kill non-white people so they can claim it wasn’t their doing huh.

  3. Patrón Kegseth had his lil baby dick fee fees hurt, so he’s chosen to throw a temper tantrum and abuse his power.

    Pretty par for the course for the Trump Reich.

  4. literallytwisted on

    Since Whisky Pete is demanding the company enable things the DOD is not legally allowed to do in the first place Anthropic will probably take them to court.

    They also likely have a contract that the drunk is trying to change so this whole thing is probably a waste of taxpayer money.

  5. Does he realize how many companies are probably already using Claude to write and test code? He claims their TOS is not in line with American principles? How is it un-American to specify how your company’s product is used? Or to say we won’t let our software be used to decide to kill people without human oversight?

  6. Three_Froggy_Problem on

    So wait, am I understanding this correctly? This company did not agree to let the Pentagon use its product for autonomous weapons, so the Department of Defense is now blacklisting them and essentially declaring them a national security risk?

  7. I hope they use Grok. I hope Elon insists on being there when it launches. I hope it then goes terribly wrong…

  8. Pete was very enthusiastic about AI until he found out that it doesn’t mean Alcohol Inebriated.

  9. Weren’t Republicans whining about how any guardrails on AI would cause China to dominate? This has TACO written all over it

  10. Not that I love Anthropic, but this is such a blatant shakedown it should give even the Republicans pause. But it won’t, of course.

  11. iamliterallyonfire on

    How many signal chat leaks are you up to now Pete?

    Not exactly the best track record when it comes to risk assessment.

  12. So what does this actually mean?

    Federal agencies aren’t allowed to use Claude? Businesses that do direct work with the feds can’t use Claude? Anyone who takes federal money can’t use Claude?

    Depending on how wide that ban goes, their enterprise subscriptions could evaporate immediately.

    E: As I dig deeper, it’s [absolutely the nuclear option here](https://www.cnbc.com/2026/02/27/anthropic-pentagon-ai-policy-war-spying.html):

    >The [supply chain risk] label would force DoD vendors and contractors to certify that they don’t use Anthropic’s models.

    So if there’s even one programmer at any third party (e.g. Xitter/Grok) who’s running Claude, that shuts THEM out too.

  13. So the government has to immediately stop using the only AI system that they have a contract with.

    This guy has to be the stupidest person on the planet, and they say women are too emotional to be in power. We’re now the only military on the planet with no AI.

  14. Guy who added a reporter to a signal group detailing an upcoming strike in Middle East says what’s a risk now?

  15. It’s already illegal for them to use AI for mass surveillance or autonomous weapons

    But they’re asking for Anthropic to remove the guard rails for … “trust us / reasons”

    Anthropic says why tho, no

    Govt gets mad

    Make it make sense Peter

  16. The Trump administration is waging a war against the American people and American companies with a backbone. They are gunning citizens down in the streets, exiling people without due process, taxing without representation, trying to shut down elections, and violating every constitutional right you can imagine. All in an attempt by the Republican party to install a king. The Trump administration and all those that still support Trump are traitors to the Republic.

    Join the no kings movement and help take our country back from the monarchists.

    No kings.

  17. simplethingsoflife on

    Anthropic should file civil lawsuits directly against them as individuals. They’re abusing their power to hurt the company’s business.

  18. I look forward to seeing this piece of shit in court serving out his final days of freedom defending his deplorable position as a sycophant of insurrection and criminal behavior to Trump, and ultimately sentenced for his complicity in war crimes and unconstitutional actions.

  19. If this fucking idiot thinks they’re a risk, then that means you should immediately download it because it’s the safest option available.

  20. Syphillisdiller1 on

    How do they intend to portray themselves as the good guys here? I don’t think whining that they’re being “strong-armed” in the name of corporate virtue-signaling is gonna cut it.

    “We’re blackballing them because they won’t let us allow the AI to decide who to kill” is a pretty tough sell.
