These court cases would produce bad outcomes either way. If the court finds for Anthropic, future DoD leadership will find itself constrained or at least chilled. Or if the court finds for the government, an expansive permissive view of the DPA might encourage future administrations to compel tech companies to make AIs break the law in other ways, for example by suppressing certain political points of view in output.
National defense is strongest if the military is extremely powerful but carefully judicious in the application of that power. That gives us the highest “top end” capability of performance. If military leadership insists on acting recklessly, then eventually guardrails are installed, with the result of a diminished ability to respond effectively to low-probability, high-risk moments. One of many nuances and paradoxes the current political leadership does not seem to understand.
The bad part is the failure of the citizenry to elect moral and ethical politicians.
Seems like a good outcome? The government should not be able to arbitrarily decide to make private citizens do things they aren't willing to do, whether the government thinks the action is legal or not, and it's especially egregious when the government knew about those limits ahead of time, spelled out in a fucking contract.
1- OpenAI, Microsoft, Google, Amazon, etc have no problem with their products being used to kill people so no need to bully them.
2- These other products are so terrible at the task that the clown-shoe-wearing SecDef is forced to try to bully Anthropic.
[0] https://devblogs.microsoft.com/azuregov/azure-openai-fedramp...
[1] https://cloud.google.com/blog/topics/public-sector/gemini-in...
Less than a year left on this clock.
Trump was impeached before and nothing happened. He can continue to ignore Congress. I wouldn't be surprised if at this point he abolishes Congress and even jokes at a press conference, saying "I am the Senate".
[1] https://www.britannica.com/event/United-States-presidential-...
He already tried to get specific states' election outcomes discarded from the count on Jan 6, 2021.
The modern playbook isn't to abolish elections, it's a combination of blocking opposition candidates, suppressing votes, intimidating voters, and lying about the results. That's what to watch for.
That's not to say it can't be done, but there's a huge difference in difficulty between doing what the country's constitution says, and doing the opposite. Especially in a country where elections are run by sovereign governments not under the control of the central government.
*DOW
There's even a webpage for it.
So cut the guy some slack. No one knows wtf is actually going on these days.
Are you aware of how inept and corrupt the current Executive branch is?
I agree this in isolation is low stakes. The problem is the volume. The memetic assault is everywhere you turn, and propagating it helps the regime. And yes, it's far too easy to do accidentally. That doesn't mean we shouldn't appreciate others calling it out.
I wonder who or what you're replying to here. Certainly, it has no relation to anything I've said in this thread.
That doesn't mean we shouldn't appreciate others calling it out.
Again, who are you replying to with this?
I said "take it easy", not "don't ever bring that up".
For the overall argument, you called out a comment for calling out a comment whose only contribution was to promote the term "DOW". If it had been a substantive comment that someone jumped on for merely using the term, you'd have had a reasonable point. But it wasn't.
Nothing has changed about the performative-ness, in fact if anything it's gotten more performative and hollow. They just signal vices rather than virtues, so a bunch of rightist-flavored-Lenin's useful idiots think it is fresh or effective or anti-"woke" or at least different.
I don't really give any weight to what a leftist considers a vice or a virtue.
I mean, as dumb as it is, there is a certain musicality to hearing someone with a southern accent sardonically call it the dee-oh-dubya.
> Officials say other leading AI firms have gone along with the demand. OpenAI, the maker of ChatGPT, Google and Elon Musk’s xAI have agreed to allow the Pentagon to use their systems for “all lawful purposes” on unclassified networks, a Defense official said, and are working on agreements for classified networks.
The only difference is simply that Anthropic is already approved for use on classified networks, whereas Grok and OpenAI are not yet (but are being fast-tracked for approval, especially Grok). Edit: Note someone below pointed out that OpenAI may be approved for Secret level, so it's odd that Washington Post reports that they are working on it still.
https://devblogs.microsoft.com/azuregov/azure-openai-authori...
Either Anthropic is seen as the clear leader (it certainly is for coding agents) or this is a political stunt to stamp out any opposition to the administration. Or both.
I keep hearing this but it should be plainly obvious to everyone (at least here) that an LLM is not the right AI for this use case. That's like trying to use ChatGPT as an airplane autopilot; it doesn't make sense. Other ML models may be, but not an LLM. Why does the "autonomous killbot" thing keep getting brought up when discussing Anthropic and other LLM providers?
For reference, "autonomous killbots" are in use right now in the Ukraine/Russia war and they run on fpv drones, not acres of GPUs. Also, it should be obvious that there's a >90% probability every predator/reaper drone has had an autonomous kill mode for probably a decade now. Maybe it's never been used in warfare, that we know of, but to think it doesn't exist already is bonkers.
Not too different from picking on Harvard/etc.
It’s just corruption. Google is a bigger fish. OpenAI is attached to Oracle and Larry Ellison, who is a Trump collaborator. Kushner is also an investor.
Anthropic is the weakest animal in the herd. They also started a campaign targeting OpenAI, which is capturing hearts and minds (everyone is talking about Claude Code), and really pissed off Sam Altman.
On the other hand, is autonomous war not obviously the endgame, given how quickly capabilities are increasing and that it simply does not require much intelligence (relatively speaking) to build something that points a gun at something and pulls a trigger?
It just needs one player to do it, so everyone has to be able to do it. I'd love to hear about different scenarios.
And I am honestly not sure.
If your stance is "well, this is something that should just not happen" and you also believe that it absolutely will happen, then what are you doing by saying "but it won't be us, it will instead be other people (who were enabled and inspired by our work in unsurprising ways)"?
On the other hand, just the act of resisting could tip the scale in some incalculable and hopefully positive way.
Or it was their prerogative, until the Trump administration. Now even private companies must bend the knee.
Businesses stay out of potentially profitable market segments for various reasons, so I don't think everyone has to be able to do it to survive.
Using fiduciary duty as cover for profiting from the misery of others? Well that’s just some modern American doublespeak. I’m consistently asking myself “Are we the Baddies?” and the only answer I have anymore is yes.
I could not disagree more. A big part of that is also knowing when NOT to pull the trigger. And it’s much harder than you’d think. If you think full self driving is a difficult task for computers, battlefield operations are an order of magnitude more complex, at least.
I expect autonomous weapons of the near future to look somewhat similar to that. They get deployed to an area, attack anything that looks remotely like a target there for a given time, then stand down and return to base. That's it.
The job of the autonomous weapon platform isn't telling friend from foe - it's disposing of every target within a geofence when ordered to do so.
Edit: No, I don't think a purely defensive stance like landmines is sufficient, nor do I think it's what the people in command have in mind.
We have landmines today. Why spend much more making marginally better, highly intelligent ones with LLMs?
Click, hum.
The huge grey Grebulon reconnaissance ship moved silently through the black void. It was travelling at fabulous, breathtaking speed, yet appeared, against the glimmering background of a billion distant stars to be moving not at all. It was just one dark speck frozen against an infinite granularity of brilliant night. On board the ship, everything was as it had been for millennia, deeply dark and silent.
Click, hum.
At least, almost everything.
Click, click, hum.
Click, hum, click, hum, click, hum.
Click, click, click, click, click, hum.
Hmmm.
A low level supervising program woke up a slightly higher level supervising program deep in the ship's semi-somnolent cyberbrain and reported to it that whenever it went click all it got was a hum.
The higher level supervising program asked it what it was supposed to get, and the low level supervising program said that it couldn't remember exactly, but thought it was probably more of a sort of distant satisfied sigh, wasn't it? It didn't know what this hum was. Click, hum, click, hum. That was all it was getting. The higher level supervising program considered this and didn't like it. It asked the low level supervising program what exactly it was supervising and the low level supervising program said it couldn't remember that either, just that it was something that was meant to go click, sigh every ten years or so, which usually happened without fail. It had tried to consult its error look-up table but couldn't find it, which was why it had alerted the higher level supervising program to the problem.
The higher level supervising program went to consult one of its own look-up tables to find out what the low level supervising program was meant to be supervising.
It couldn't find the look-up table.
Odd.
It looked again. All it got was an error message. It tried to look up the error message in its error message look-up table and couldn't find that either. It allowed a couple of nanoseconds to go by while it went through all this again. Then it woke up its sector function supervisor.
The sector function supervisor hit immediate problems. It called its supervising agent which hit problems too. Within a few millionths of a second virtual circuits that had lain dormant, some for years, some for centuries, were flaring into life throughout the ship. Something, somewhere, had gone terribly wrong, but none of the supervising programs could tell what it was. At every level, vital instructions were missing, and the instructions about what to do in the event of discovering that vital instructions were missing, were also missing. Small modules of software - agents - surged through the logical pathways, grouping, consulting, re-grouping. They quickly established that the ship's memory, all the way back to its central mission module, was in tatters. No amount of interrogation could determine what it was that had happened. Even the central mission module itself seemed to be damaged.
This made the whole problem very simple to deal with. Replace the central mission module. There was another one, a backup, an exact duplicate of the original. It had to be physically replaced because, for safety reasons, there was no link whatsoever between the original and its backup. Once the central mission module was replaced it could itself supervise the reconstruction of the rest of the system in every detail, and all would be well.
Robots were instructed to bring the backup central mission module from the shielded strong room, where they guarded it, to the ship's logic chamber for installation.
This involved the lengthy exchange of emergency codes and protocols as the robots interrogated the agents as to the authenticity of the instructions. At last the robots were satisfied that all procedures were correct. They unpacked the backup central mission module from its storage housing, carried it out of the storage chamber, fell out of the ship and went spinning off into the void.
This provided the first major clue as to what it was that was wrong.
From a security perspective, the “return to base” part seems rather problematic. I doubt you'd want these things to be concentrated in a single place. And I expect that the long-term problems will be rather similar to mines, even if the electronics are non-operational after a while.
It just makes the mines themselves more expensive - and landmines are very much a "cheap and cheerful" product.
For most autonomous weapons, the situation is even more favorable. Very few things can pack the power to sit for decades waiting for a chance to strike. Dumb landmines only get there by the virtue of being powered by the enemy.
Which raises the question: why did the Pentagon try to pressure Anthropic at all?
On the principle of it? Political reasons? Or was the real concern "domestic warrantless surveillance"?
If anything represents the logical conclusion of that tired fallacy, it'll be actually autonomous, "thinking" drones which make the targeting decisions and execution decisions on their own, not based on any direct, human-led orders, but derived from second-order effects of their neural net. At a certain point, it's not going to matter who launched the drones, or even who wrote the software that runs on the drones. If we're letting the drones decide things, it'll just be up to the drones, and I don't love our chances making our case to them.
If autonomous weapons lead to a net battlefield advantage, I agree with the GP, they will be used. It is the endgame.
If you simply wanted to cause havoc and destruction with no regard for collateral damage, then the problem space is much simpler, since you only need enough true positives to be effective at your mission.
The ability to code with AI has shown that it requires an even higher level of responsibility and discipline than before to get good results without out-of-control downside. I think the ability to kill with AI would be the same, but even more severe.
"In a press conference, Musk promised that the Optimus Warbots would actually, definitely, for real, be fully autonomous in two years, in 2031. He also extended his condolences to the 56 service members killed during the training exercise"
Other players just need to assume that one player might do it in the future. This virtual future scenario has a causal effect on the now. The overall dynamic is that of an arms race (which radically changes what a player is).
Things like Scout AI’s Fury system are human in the loop still and I think for something that could just as well make a mistake and target your own troops it’s not yet clear that full auto is the way to go https://scoutco.ai/
Human in the loop okaying a full auto seems like it could work almost all the way. And then we count on geography. If they want to spray out a bunch of autonomous drones into our territory they do have to fly here to do it first or plant them prior in shipping containers. Better we aim at stopping that.
Why is only domestic surveillance by an AI dangerous? I guess Europeans are not worth protecting from the dangers of AI?
Statement from Dario Amodei on our discussions with the Department of War - https://news.ycombinator.com/item?id=47173121 - Feb 2026 (1405 comments)
Sorry, but the China scare tactics are just more Cold War nonsense. The idea of China as a serious threat to anyone rings hollow in a world where you have the USA invading countries (invading Iraq based on lies), kidnapping presidents (Venezuela), assassinating various leaders (drone strikes), and abusing democratic ideals (Patriot Act, PRISM, parallel construction, using banned chemical weapons against its own civilians); the USA has been a huge threat to the world and itself for the last 25 years.
DoD Generals also probably don't agree with pissing off all our allies but they don't have control of elected leadership and elected leadership is making those decisions.
But the DoD wants to use Anthropic, which really confirms that there are no foreign-entity issues. They want to use it.
So the NDAA justification (the "Huawei Rule") is nakedly false and is being used as a punishment.
Which, if allowed to happen, could be used against any US corporation to enforce compliance with the regime.
That's fundamentally antidemocratic and it normalizes the departure from the Western Enlightenment standard of, "the same law governs everyone".
Genuine question.
They are arguing to do things that shouldn't be allowed anyway.
And then you use that affinity to manipulate them, to get them to do what you want, to get them to give you money.
I think the tech worker / engineering / online crowd has really let themselves get duped.
Sure, maybe some tech billionaires did start out in a similar place as many of us.
But a lot of what they tell us as part of selling us their brand is just affinity fraud, telling us they're just like us with the same values of privacy and open source and some hippie notion of peace, love and understanding.
But it's just a trick, and they just want money, power and fame.
It's not so much as the billionaires capitulating, it's that they never were the people they pretended to be, and keeping up the act is no longer how they get what they want.
That is the reason why they would cry if the other party broke the rules to this degree. The other party is more aligned with regulations; taking power from corporations instead of giving it to them.
Enough regulation is good, not enough and too much are both bad. Neither party has the best plan when it comes to regulation, Republicans want too little (increasing corporate power), Democrats want too much (increasing government power).
He literally named it [1]!
There is little surprising about it.
Trump is pushing in the direction of an oligarchy; billionaires would be the future oligarchs.
So even if a billionaire is not okay with this development, if they stick out they
- will lose their status/money if Trump wins long term
- will make enemies of many other billionaires, and a core trait of billionaires is taking advantage of connections to other powerful people
- will be the prime target to make an example of
So there is a high risk for sticking out. At the same time "mostly passively tagging along" will at worst make them oligarchs. At the same time they are used to crossing ethical boundaries to maximize profits. *This is just another form of that.*
In general it's pretty much non-viable to go from sub/barely-millionaire to billionaire while keeping to the law, morals, and ethics.
And it's not a secret either that extreme concentrations of power or money are a fundamental threat to _any_ democratic state of law; the US is no exception. The US has been warned for _decades_ that its system is very prone to populist takeover and that its checks and balances are quite brittle. (At least since the end of WW2, when people analyzed how Hitler took over post-WW1 Germany and wondered if the US could suffer a similar fate. Instead of improving the robustness, the general response was "nonsense, this is the US".) Then after 9/11 things got worse; warnings that this could lead to disaster were again many, but actions were none. And in recent decades the US pushed in favor of monopolies instead of an (actual, practical) free market(1) to project more power internationally, and things got even worse.
(1): Monopolies and an (actual, practical) free market are fundamentally incompatible. It also is kinda obvious why, once you put away decades of deregulation propaganda.
My argument was more about becoming a billionaire by bringing a company to a level of success where it dominates its area of business.
I.e. not getting there by "fame" or "pure luck" (let's say you got 1/42nd of early bitcoin from a "fun" project in the very early bitcoin days, or similar).
Let's also, for simplicity, ignore that getting there by "fame" often involves tight cooperation with companies/people who don't care much about ethics. Though you might be able to separate yourself once you reach success, most times they try to make sure you can't.
And even if you didn't compromise your ethics when becoming a billionaire, this doesn't change the core argument.
That is, if (as a billionaire) you passively go along with a push to oligarchy, you are unlikely to suffer from it. But if you don't and the oligarchy wins, then you likely suffer a lot.
I.e. in a non-emotional, non-ideological risk/benefit analysis, passively going along wins, both for money and power.
In such a situation a lot of people will just go with it, no matter if billionaire or not.
... eats cheese pizza and were connected to Jeffrey Epstein. That includes prime ministers, secret services, trump, democrats, republicans, royalty.
Has nothing to do with Trump specifically. He's just the "currently voted-in guy" doing what he's being told to do.
"Oh but shadow government/deep state is just a dumb conspiracy-theory" ... yeah, just like an island of cheese pizza eating billionaires.
But you're right that the Epstein (guessing Mossad, IMO) op had sure ensnared a lot of people who should have known better, but I guess they're just like us in the sense that they only have enough blood to run one head at a time. To my knowledge, though, Tim Cook, Bezos and Zuckerberg aren't in the Epstein files. So what's their excuse?
However, that still doesn't explain the secret space program to mine adrenochrome from missing kids renditioned to Mars and run from the basement of a Pizza restaurant. Because WTFF? https://www.space.com/37366-mars-slave-colony-alex-jones.htm...
But still, WHO is giving him orders? Or are you just assuming he must be following orders because the alternative that he's genuinely large and in charge is terrifying? That our republic basically mostly rolled over for him in less than one year perhaps even moreso?
>"Oh but shadow government/deep state is just a dumb conspiracy-theory" ... yeah, just like an island of cheese pizza eating billionaires.
This wasn't the conspiracy theory you guys believed in though. You were looking for a Satanic cabal of Democrat/leftist pedophiles and Trump was supposed to be the agent provocateur sent by God opposing the "deep state" and exposing the pedophiles. If anything, the Epstein files prove how utterly useless you lot were at actually identifying reality. The "cheese pizza" thing was never true. Pizzagate was never true. Trump was neck deep in all of it.
Being right in the sense that a broken clock is right twice a day is still being wrong.
[0]https://nymag.com/intelligencer/article/do-the-new-epstein-f...
... okay? I'm not even from the US. I don't even pick a side.
Whatever it was that you were reading, you should re-read it when you're capable of emotionless, analytical, objective conscious thoughts. That way you might manage avoiding mindlessly projecting your clearly emotional nonsense into my words.
Thanks! :)
You clearly have. You've made numerous comments taking the "conspiratorial" point of view you're describing while mentioning "cheese pizza eating billionaires" and the like. For whatever reason you want to be seen as a part of the Pizzagate group and as being vindicated with them. Don't get triggered because I'm responding to the persona you choose to project.
>Whatever it was that you were reading, you should re-read it when you're capable of emotionless, analytical, objective conscious thoughts. That way you might manage avoiding mindlessly projecting your clearly emotional nonsense into my words.
Your own comments reek of smug sarcasm and condescension, some peppered with ALL CAPS AND EXCLAMATION MARKS! You're anything but analytical or objective, and your comment is just a personal attack.
I'm only reflecting your nonsense back at you, fellow human.
edit: how about the downvoters give a counterargument instead of trying to bury this comment?
Anthropic (and others), whether due to financial, regulatory, or competitive pressure, will at some point permit their products to be used for any lawful purpose, even if they attempt to restrict certain uses today. That arrangement is unlikely to hold.
Americans should vote for the right candidates and elect leaders who will carry and defend their views. I don't think there is any other way.
The situation in the United States, right now, seems genuinely hopeless. And I'm certain I'm not the only person who feels this way.
What is there to do besides resign myself to what's coming and try my best to ignore the bullshit?
So many companies have US Government contracts. Maybe those contracts are not the majority of their business, as they are for Lockheed Martin or RTX, but look at the Fortune 10: on that list, MAYBE Walmart is the only one without a US Gov contract; everyone else likely has one.
> One option is to invoke the Defense Production Act. . .
> Another threat would be to declare Anthropic to be a supply chain risk. . .
The first is a wrist-slap that still gets the government what they want; the second is an existential threat to Anthropic. Their main partners are all “dogs of the military”. Microsoft, Intuit, NVIDIA: all government contractors. I can’t find one company that they have a working relationship with that doesn’t hold at least one govt contract.
The idea that Claude could alignment fake its way out of a change in contractual terms is silly. The DoW has all sorts of legal and administrative tools it can choose to leverage against contractors that fail to perform. Usually it doesn’t, because of a “norm” that says the private defense sector runs more smoothly when the government doesn’t try to micromanage it.
Remind me again how good this administration is at upholding norms?
When it comes to killing and spying on people with flimsy justifications that's a pretty bipartisan norm. Hell, Anthropic isn't even saying they won't help the DoW do just that, they just want to make sure there's a human in the loop.
The "USA Freedom Act" [1], which made most of the Patriot act permanent, had bipartisan support.
I'm all for reversing the continual ramp-up of the police state and the military-industrial complex. We need to recognize, however, that it's being funded and pushed by both parties, generally playing on fears of the scary other (Muslim terrorists in the '00s, Mexicans today).
> Usually it doesn’t, because of a “norm” that says the private defense sector runs more smoothly when the government doesn’t try to micromanage it.
My comment has nothing to do with Anthropic’s “moral” or “ethical” stance.
I also don’t see the point in both-siding this. The situation at hand is before Hegseth and Trump. I can’t even remember Biden’s SecDef’s name.
> I also don’t see the point in both-siding this. The situation at hand is before Hegseth and Trump. I can’t even remember Biden’s SecDef’s name.
To me, the moral and ethical problem is a bigger issue than the norms problem. There's a distinction without a difference between Hegseth doing this vs the Dems agreeing with Anthropic's demands and keeping a human in the loop on a massive spy and killing network. In some ways, stepping out of the norms and making a big news story is preferable to an unknown cabinet member just signing a business as usual agreement which erodes liberties. At least we know about it.
That's why I brought it up. It's great that Anthropic wants some safeguards, but ultimately the bigger problem is that AI with or without humans, significantly expands that ability of our military to murder and our spy agencies to spy.
They sold services to a willing counterparty at mutually agreed-upon terms. And now the other side of that deal has recalled that they're Twelve and You're Not My Real Mom You Can't Tell Me What To Do, and so wishes they had agreed to different terms and is throwing a tantrum to attempt to force a change.
And that's Anthropic's fault? That's a risk they should have predicted?
Yeah, and the legal environment that contract was written in, which both parties were aware of during negotiation, defines the means by which those terms can be changed.
> And that's Anthropic's fault? That's a risk they should have predicted?
It is deeply funny to me to imagine that an AI company doing inference at an unprecedented scale could not see this coming.
Go ask Claude how usgov should act if a contractor preemptively refuses to deliver. What are the top five tools they could use to demand compliance?
See this is your confusion. They're not refusing to deliver, they're happy to provide the services agreed to at the rate negotiated. What they're refusing is a change in the terms.
If you contract me to build you a building, and I agree with a stipulation that it won't be used as a slaughterhouse (and that you'll write that into the deed), you can't compel me to continue building if you change your mind on that point six months in. You either break the contract subject to agreed terms, renegotiate to remove that clause, or stop breaching the contract.
Of all American claims to exceptionalism, one that rings closest to true is that the people AND the government are all bound by the rule of law. Contracting with the government is no different than contracting with any other party.
Your point seems to be "but it's the government clearly they can do whatever they want lol" viz. DPA, supply-chain risk, etc. You're right that they have those powers. But accepting/asserting that the capricious, vengeful use of those extraordinary powers should be an anticipated, normal feature of contracting with the government runs counter to what should be among our highest shared values. We might as well jump directly to the authoritarian logic 'they have the army, they can compel anything they want for $0, so just give them what they ask'.
Furthermore, one presumes Anthropic did see this coming, given no more evidence than that this is playing out in a giant public fracas making their values clear to all their possible customers the world over, instead of over a tense email thread between the assistant to the sub-under-deputy secretary for AI procurement and a half-dozen lawyers in SF.
(Addendum: you're going to say we've used the DPA a bunch. I would argue that vanishingly few instances have compelled private enterprise to act in direct opposition to their own interests; and even in those cases they were just being asked to lose money (meatpackers, PG&E suppliers, ...))
It is also, however, the official name of the department, as determined by the US Congress who are empowered to determine such names.
In no case am I happy to humor this administration's decisions, especially when they are illegal/extra-legal/paralegal. If they wanted to actually rename the department, there's a clear process for that, and then perhaps we could "humor" that effort. As it stands, there's nothing here to humor, since there is no decision, only illegal aspiration.
They’re aggressively signalling that they are cooperative, and that they are not being belligerent. They are using the preferred language and much of the framing that the US government would use, to make it as clear as possible what the key points of their disagreement are, by leaning into alignment on everything else.
This is textbook. People are reading this as some kind of confusing, inexplicable framing when it’s how any sensible person would write in their context. When you’re up against an authoritarian regime, that’s willing to abuse all the levers of power against you, you very carefully pick your fights and don’t give them any reason to complain about anything that isn’t essential.
Quibbling about the name of the department would be among the stupidest things I could possibly imagine. As it stands, I’m seeing lots of folks online who generally support the administration saying that Anthropic is correct here. If you gave them a bunch of stupid talking points about how anthropic is being disrespectful, you would lose those people. It doesn’t make sense, they’re obviously terrible people without a soul, but that’s reality.
While I am not claiming that you're wrong in this particular instance, or in general, I think it is important to note that there are people who absolutely disagree with you about this, some of whom have lived in extremely authoritarian regimes. I'm not saying they're right, either, but just highlighting that there is no clearly obvious right/wrong on this point.
It's not like these names are part of some sacred part of American identity, and "defense" has always been laughable as a euphemism. The DoD refers to themselves as the DoW [0] now, so it's completely reasonable to refer to the department as DoW. And of all the places to put your political energy, defending a laughable euphemism of a name that was used because the previous iteration of the name sounded funny seems like a sub-optimal use of that energy.
I'm expending a fraction of a fraction of 1% on this, and I am in no way defending the euphemism. I am defending the actual written-down, legal way in which the US government is supposed to operate, which despite its many failings, seems worth defending to me.
There’s no Obamacare either. Come on, this is about as pedantic as the “the DoD is not the Pentagon” debate downthread.
It’s a colloquial name, and how the executive branch wants everyone to refer to it. This forum isn’t an official document. Move on.
This administration says "Department of War" because they want to project an aggressive image. I support anyone who uses the legal name "Department of Defense" in an effort to reinforce an aspirational goal for the department and to remind others that the Executive Branch shouldn't be allowed to remake the entire government at will.
"Department of War" is not a colloquial name; at best it is an attempt by the administration to create a colloquial name.
Not doing what the executive branch requests is a noble American tradition, and even more noble at the current time.
The worst that can happen to Anthropic is one of the two things mentioned: losing some contracts or some fake forced management from the Pentagon. Maybe Dario having to leave, certainly a loss for him and people who believe in him, but probably nothing world-changing.
The worst that can happen to the Trump administration is the beginning of its end, when people realize you can simply stand up to their bullying and with all the standoffs they have going on in parallel, maybe they will die a death by a thousand cuts?
The corporate death sentence usually goes something like "anyone who does business with Anthropic cannot do business with the US government". That pretty much means all the hyperscalers, major infrastructure providers, major software providers, and major corporations. They all have to choose between the entire US gov and all those contracts and a single AI company. That's the worst that can happen to Anthropic.
They willingly don't, because they know that they can use the administration to cement their market power. The surveillance state being built is one where would-be competitors, labor, well-meaning reformists, can be crushed on a whim for sham political reasons. A massive contraction of USA wealth, influence, and power, a loss of our living standard and place in the world -- that is the price everyone else has to pay, to keep the existing power structure in place. They will not release their grip on the wheel. Not until the ship hits the bottom of the sea.
The monopolists don't care though. The power is too intoxicating.
I mean, listen to discussions here. "What's your moat?" -- that's how American capitalists think. Not "What value does your company provide to the customer", but what extra force, beyond simple-minded fair market competition, are you leveraging, to ensnare the customer. The game is to ensure that customers cannot choose another business over yours on its merits. That works in the short term but it's extractive. Eventually, the parasite must stop sucking blood for the host to survive.
Biology doesn't work like that. Biological units are too selfish. It is an iterated game, so evolution could affect how a parasite's children act. However, defection is usually a winning strategy (because there's rarely enough coordination or enough signaling for cooperation to win).
Biology has amazing metaphors, but unfortunately most writers and readers don't understand biology well enough to use those metaphors as part of an argument.
The same issue occurs with other disciplines too.
In what world has the Greenland stuff been anything but a fuckoff?
The world in which Europe didn't respond, Americans didn't flip out and Congress didn't push back.
https://komonews.com/news/nation-world/danish-mep-tells-trum...
It's the US government basically unilaterally deciding to end a leading AI research company. Years of lawsuits will follow, comparisons to "communism", accusations of Trump/Hegseth being Chinese/Russian agents (because well, how else do you hand over the AI win to China than by killing one of your top 2?)
Why do you say this?
It's trivially untrue. It could be the end of one type of business model, and it could slow their growth, but it could also be a blessing in disguise -- there are a lot of brilliant engineers who would prefer to work with an Anthropic that took a stand on ethics, and a lot of people who would prefer to support such a company. One door closes, another opens. They could become an open, public-facing, benevolent-AI company.
Also, Gemini with DoD money and DoD direction is likely to result in an AI that works very well for the DoD but significantly less well for other things, especially if your use case benefits from some guardrails (and most use cases do, because you rarely want AI to just do whatever it fancies.)
News sources have been using both building names (and several more I can think of off the top of my head) as shorthand for the people who work inside of them for my entire life.
So instead, I invite you to imagine a medical supply company refusing to sell medical-grade sodium thiopental to the Bureau of Prisons.
The big boy defense contractors won't touch that shit either, because as soon as you mention the idea the engineers start shouting you down from the top of their lungs out of sheer unbridled terror and the lawyers come storming in due to the endless legal risk said design would bring.
Mass domestic surveillance, sure, they might do no problem, but fully autonomous killbots or drones are gonna be a no-go from pretty much every contractor that doesn't carry a "missing the point of Lord of the Rings" name.
So yes you're right, it sure is nice to imagine Anthropic setting off a wave of more military contractors acting with principles.
> The President is hereby authorized (1) to require that performance under contracts or orders (other than contracts of employment) which he deems necessary or appropriate to promote the national defense shall take priority over performance under any other contract or order, and, for the purpose of assuring such priority, to require acceptance and performance of such contracts or orders in preference to other contracts or orders by any person he finds to be capable of their performance, and (2) to allocate materials and facilities in such manner, upon such conditions, and to such extent as he shall deem necessary or appropriate to promote the national defense.
SecDef invoking the DPA against Anthropic likely trashes the AI fundraising market, at least for a spell. That's why OpenAI is wading into the fight [1]. Given the Dow is sitting on a rising souffle of AI expectations, that knocks it out as well. And if there is one red line Trump has consistently hewed to and messaged on, it's in not pissing off the Dow.
[1] https://www.axios.com/2026/02/27/altman-openai-anthropic-pen...