Anthropic were very vocal, well before this happened, that they were against the use case.
I don't blame them. These use cases are like blaming MySQL for storing the lat/long of the school. AI can't be held accountable and the company was trying to protect us and, yes, it was too late.
Why shouldn't it also go to jail, the same way a human does?
Just because it's difficult doesn't mean it can't be done. If you're claiming your machine should be treated like a human, then let's treat it like a human.
Except Arabic or Ethiopian Jews. They have a bit of leeway that way.
> I don't blame them. These use cases are like blaming MySQL for storing the lat/long of the school. AI can't be held accountable and the company was trying to protect us and, yes, it was too late.
They weren't trying to protect squat, and were not against this use case. Their only two red lines are "no mass domestic surveillance" and "no fully autonomous killing until the AI gets good enough to do it". Assuming the story is true, there's no chance this was a fully autonomous act; it was most certainly approved and executed by people.
A storage layer versus a decision making system? What a ridiculous comparison.
We’ve seen AI tools used for tons and tons of inappropriate things over the last year. Reviewing research grants, aid programs, and regulations? Why not? Publishing propaganda on Twitter? Sure thing! Finding “fraud” in state benefits? Absolutely!
There’s a belief amongst these people that AI tools are better than human judgement and represent an inevitable future where CEO kings operate the world. Why not also apply it to war?
https://www.washingtonpost.com/technology/2026/03/04/anthrop...
https://www.nonzero.org/p/iran-and-the-immorality-of-openai
It uses this Washington Post article as a source:
https://www.washingtonpost.com/technology/2026/03/04/anthrop...
(Non paywall: https://archive.is/bOJkE)
As far as I know, wasn't Claude banned from use in the Pentagon a few days ago, exactly for taking a weak stance against this kind of thing?
> Even if Amodei’s scruples had somehow magically prevented the bombing of that school, Claude would still be an accomplice to mass murder.
This point from the nonzero blog I take issue with. If they had used Google Maps to pick targets, would that make Maps an accomplice?
The people who pushed the button to launch the missiles that hit the school, and the people who ordered them to do that, are fully responsible here, not the tools they used.
Absolutely. A real issue here is the normalizing of "AI scapegoating".
The real failure? Not following through on human verification of a "strong lead".
The Iran school site absolutely was _once_ a target, in the distant past - it's sited on and within a former Iranian Guard post with airstrip, etc.
The part that needed strong checking was "history since last identified as a target" - and that site has a history of disrepair and abandonment.
The debatable issue was whether the larger site did indeed store significant military assets underground, etc. which was entirely possible.
Not exactly, you might want to reread the news to understand what's actually happening.
So: a map is a static reference; a calculator is deterministic computation; an LLM is probabilistic generation.
In high-stakes environments like military planning, tools that generate new claims rather than reference known data introduce a different class of risk.
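The distinction between a deterministic tool and a generative one can be sketched with a toy example (the function names and the sampled "claims" here are purely illustrative, not any real targeting system or LLM API):

```python
import random

def calculator(a, b):
    # Deterministic computation: identical inputs always
    # produce the identical output, every run.
    return a + b

def toy_generator(prompt, seed=None):
    # Probabilistic generation: the output is *sampled*, so
    # repeated calls can differ unless the seed is pinned.
    # Real LLM APIs generally don't guarantee reproducibility
    # even then.
    rng = random.Random(seed)
    claims = ["site is active", "site is abandoned", "site is a school"]
    return rng.choice(claims)

# The calculator is trivially auditable and reproducible.
assert calculator(2, 3) == calculator(2, 3)

# The generator emits a *new claim* about the world each call;
# without an external check, nothing distinguishes a correct
# claim from a hallucinated one.
print(toy_generator("assess target"))  # may vary run to run
```

The point of the sketch: a reference tool can only relay data that was put into it, while a generative model can assert things no source ever contained, which is exactly the new risk class in high-stakes planning.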
Yes, everyone is responsible for their own decisions. But circle back to risk: how can the planners be sure they aren't dealing with hallucinations, questionable data, outputs that shift with the prompt, and a long list of other things?
I'm not sure they care nor do I know who holds stealth bombers accountable. We're back in the might makes right world.
Surely nobody is arguing that an Anthropic AI, with perfect knowledge that it's a school, and that students would be present, chose to knowingly murder children. Assuming this was a US military strike and not a false flag, surely nobody is arguing that the failure here was in relying on outdated intelligence about an ex-military building.
The use of AI here is simply not relevant.
The criticism I have for the current US government is massive, and my disgust for the current leadership is as intense as anyone else's here, I'd wager. But there's also no doubt in my mind that if they had known it was a school, they wouldn't have targeted it. By contrast, Russia's government shows who they are when they target civilians in Ukraine. That distinction is important and we muddy it at our own peril.
- Hostage politics: in medieval times, royal families of different kingdoms would exchange family members to live with the other royal family as a form of hostage politics; supposedly this would prevent or discourage wars. How did the current regime in Iran rise to power? By taking hostages. How has it repeatedly responded to spontaneous internal pressure toward regime change? Hostage politics. Every time they feel threatened they take hostages in some form or another: by throwing a protestor into some torture prison, they keep that person's relatives in line ("behave or your niece will have a bad day in infamous prison X"). It goes both ways: they also hold the "free" relatives hostage by threatening to harm a protestor's family if they don't pretend everything is fine. And it's not just internal freedom of speech. I write from Belgium: when the protests surrounding Mahsa Amini's death occurred and the video of her collapse was released, it affected even my freedom of speech. From the video it was clear they used hydrogen cyanide, but would I be allowed to share that on international media while "free" nations were desperately trying to negotiate back their citizens held hostage by the regime in Iran?
- The wrongs and mistakes made in Europe during WW1 (lobbing chemicals at each other) were repeated by Iran without any lessons learned. Iran is a signatory to the chemical weapons ban treaty, yet the Mahsa Amini video (which even aired on Iranian national television) subtly leaks the information that she was killed with hydrogen cyanide.
There is no valid defense of the IRGC and the Iranian regime.
Bullshit. While many experts opposed the move, many were in favor of it too. And nonchalantly deciding that it paved the way for Putin's senseless attack on Ukraine is a dumb Russian talking point.
Technology has generally been driven by war, and now is no different.
There's been a lot of pro-Claude jerking on HN lately, but anything against it gets buried?
As a result, sadly, it's become basically a Reddit-style echo chamber, where negative news is suppressed. Often the justification is "it's politics!", as perhaps might be the case here, despite the fact that Silicon Valley's products, and Silicon Valley itself, are becoming more entangled with "politics" and the US government than ever.
There are better tools than Reddit to see what gets swept under the moderation rug, at least.
Why would I believe anything they say about this school is true?