72 points by delichon 2 days ago | 21 comments
  • cyrusradfar 2 days ago
    Whether this is confirmed or not, we have countless examples of AI used in targeting in Gaza.

    Anthropic were very vocal, well before this happened, that they were against the use case.

    I don't blame them. Blaming them for these use cases is like blaming MySQL for storing the lat/long of the school. AI can't be held accountable, and the company was trying to protect us, and, yes, it was too late.

    • floralhangnail 2 days ago
      "A Computer Can Never Be Held Accountable Therefore a Computer Must Never Make a Management Decision"
      • keyle a day ago
        I remember this but I can't remember where it's from? IBM?
      • maplethorpe a day ago
        I mean, they've made the argument that their computer learns like a human, so it should be able to get away with ingesting all the data it sees, the same way a human does.

        Why shouldn't it also go to jail, the same way a human does?

        • 0xpiguy a day ago
          What? How? By putting the computer or robot that made the mistake in a prison cell?
          • maplethorpe a day ago
            Yes. Claude exists on physical media somewhere. Put that media in a cell with no access to the internet. No one must access Claude outside of visitation hours.

            Just because it's difficult doesn't mean it can't be done. If you're claiming your machine should be treated like a human, then let's treat it like a human.

            • rune-dev a day ago
              I can’t tell if you’re being serious or not…
              • frakt0x90 a day ago
                It's a funny way of imposing a very large fine. Make the service only available during predefined "visitation hours", prevent updated learning except from resources available in the prison, restrict speech and actions according to prison rules etc.
      • Fire-Dragon-DoL 2 days ago
        I mean, the problem is whoever follows the suggestion without double-checking.
        • razster 2 days ago
          I bet there is some moronic explanation. I have no doubt at this point, given how things are going.
        • polynomial 2 days ago
          Well, at least you know who to fire.
        • angry_octet 2 days ago
          [flagged]
          • orwin a day ago
            > The only ROE they follow with any consistency is "Don't shoot Jews".

            Except Arabic or Ethiopian Jews. They have a bit of leeway that way.

            • selimthegrim a day ago
              And hostages coming towards you with their hands up speaking Hebrew.
    • kouteiheika 2 days ago
      > Anthropic were very vocal, well before this happened, that they were against the use case.

      > I don't blame them. These use cases are like blaming MySQL for storing the lat/long of the school. AI can't be held accountable and the company was trying to protect us and, yes, it was too late.

      They weren't trying to protect squat, and they were not against this use case. Their only two red lines are "no mass domestic surveillance" and "no fully autonomous killing until the AI gets good enough to be able to do it". Assuming the story is true, there's no chance this was a fully autonomous act; it was most certainly approved and executed by people.

      • nyantaro1 a day ago
        I would also challenge "no mass domestic surveillance"
    • j2kun a day ago
      > These use cases are like blaming MySQL for storing the lat/long of the school.

      A storage layer versus a decision-making system? What a ridiculous comparison.

    • aaron695 a day ago
      [dead]
  • abdelhousni 2 days ago
    Gaza as a defining standard for war crimes and state terrorism: https://www.972mag.com/mass-assassination-factory-israel-cal...
  • bhhaskin 2 days ago
    What I can't understand is why. Let's ignore the moral question for a second. I can't imagine an LLM is the right tool for this at all.
    • Cerium 2 days ago
      When you have a hammer as big as an LLM a lot of problems start to look like nails.
    • roncesvalles a day ago
      It's basically an OSINT query tool.
    • esseph 2 days ago
      Volume vs accuracy

      "Maybe we break a couple eggs making this omelette!"

      • tasuki a day ago
        Where omelette?
    • UncleMeat a day ago
      The modern right wing is all in on AI tools, in part because of their particular beliefs about the nature of expertise and general humanity.

      We’ve seen AI tools used for tons and tons of inappropriate things over the last year. Reviewing research grants, aid programs, and regulations? Why not? Publishing propaganda on Twitter? Sure thing! Finding “fraud” in state benefits? Absolutely!

      There’s a belief amongst these people that AI tools are better than human judgement and represent an inevitable future where CEO kings operate the world. Why not also apply it to war?

    • locallost a day ago
      One possibility: when I look at the current administration, it's a bunch of bros who don't really know anything except how to succeed among the other bros they spend time with, so they need Claude for anything that involves actually knowing something. It's a stretch, because you would hope the army is not run by morons, but I would no longer bet that they didn't do this because Hegseth asked Claude and influenced the decision after discussing it over Signal with other bros. The culture is driven by the person in charge, who is also incompetent at anything that doesn't involve dealing with loans and still making out OK despite the bankruptcy.
  • skybrian 2 days ago
    There doesn't seem to be any reporting in the blog post linked to by this tweet? Here's the news story it seems to be based on:

    https://www.washingtonpost.com/technology/2026/03/04/anthrop...

    • tantalor 2 days ago
      Unfortunately, WaPo has lost credibility for this type of reporting
      • skybrian 2 days ago
        Maybe, but on any given subject, most of us haven't done any investigation at all. An article written by actual journalists based on what sources tell them beats whatever our uninformed opinions are on the subject.
  • esperent 2 days ago
    Actual article, rather than Twitter link:

    https://www.nonzero.org/p/iran-and-the-immorality-of-openai

    It uses this Washington Post article as a source:

    https://www.washingtonpost.com/technology/2026/03/04/anthrop...

    (Non paywall: https://archive.is/bOJkE)

    As far as I know, wasn't Claude banned from use in the Pentagon a few days ago, exactly for taking a weak stance against this kind of thing?

    > Even if Amodei’s scruples had somehow magically prevented the bombing of that school, Claude would still be an accomplice to mass murder.

    I take issue with this point from the Nonzero blog. If they had used Google Maps to pick targets, would that make Maps an accomplice?

    The people who pushed the button to launch the missiles that hit the school, and the people who ordered them to do that, are fully responsible here, not the tools they used.

    • defrost 2 days ago
      > The people who pushed the button to launch the missiles that hit the school, and the people who ordered them to do that, are fully responsible here, not the tools they used.

      Absolutely. A real issue here is the normalizing of "AI scapegoating".

      The real failure? Not following through on human verification of a "strong lead".

      The Iran school site absolutely was _once_ a target, in the distant past - it's sited on and within a former Iranian Guard post with an airstrip, etc.

      The part that needed strong checking was "history since last identified as a target" - and that site has a history of disrepair and abandonment.

      The debatable issue was whether the larger site did indeed store significant military assets underground, etc. which was entirely possible.

    • g947o 2 days ago
      > Claude banned from use in the Pentagon a few days ago

      Not exactly, you might want to reread the news to understand what's actually happening.

    • gexla 2 days ago
      Didn't read the articles, but at least the planners know and understand a map.

      So: a map is a static reference. A calculator is deterministic computation. An LLM is probabilistic generation.

      In high-stakes environments like military planning, tools that generate new claims rather than reference known data introduce a different class of risk.

      Yes, everyone is responsible for their own decisions. But then circle back to risk: how can the planners be sure they aren't dealing with hallucinations, questionable data, differing outputs based on prompts, and a long list of other things?

      • eleventyseven 2 days ago
        > Didn't read the articles

        Then kindly shut the fuck up.

      • esseph 2 days ago
        > How can the planners be sure they aren't dealing with hallucinations, questionable data, differing outputs based on prompts, and a long list of other things...

        I'm not sure they care, nor do I know who holds stealth bombers accountable. We're back in a might-makes-right world.

  • simondotau a day ago
    Like so much war reporting in the past decade, there's a lot of low-effort moralising and low-confidence maybes being strung together to create a headline narrative that the body text simply cannot cash. And it waves away the critical distinction between bad intelligence and actively targeting civilians.

    Surely nobody is arguing that an Anthropic AI, with perfect knowledge that it's a school and that students would be present, chose to knowingly murder children. Assuming this was a US military strike and not a false flag, surely nobody disputes that the failure here was in relying on outdated intelligence about an ex-military building.

    The use of AI here is simply not relevant.

    The criticism I have for the current US government is massive, and my disgust for the current leadership is as intense as anyone else's here, I'd wager. But there's also no doubt in my mind that if they had known it was a school, they would not have targeted it. By contrast, Russia's government shows who they are when they target civilians in Ukraine. That distinction is important, and we muddy it at our own peril.

  • hexasquid 2 days ago
    Two-Face's coin is responsible for his actions
    • abrkn 2 days ago
      Technology isn't intrinsically good or evil. It's how it's used. Like the Death Ray.
      • ajewhere a day ago
        Anthropic is not "technology". Anthropic is people, such as this Amodei, a filthy murderer in the service of big capital.
  • gnabgib 2 days ago
    Discussion (34 points, 2 days ago, 34 comments) https://news.ycombinator.com/item?id=47286236
  • evil-olive 2 days ago
    direct link to the Substack post (instead of a Twitter post linking to it): https://www.nonzero.org/p/iran-and-the-immorality-of-openai
    • DoctorOetker 2 days ago
      That must be one of the most biased analyses on Iran I have read.
      • throw310822 2 days ago
        Or maybe one of the least biased?
        • DoctorOetker 20 hours ago
          All the horrible practices employed by the regime in Iran used to happen in western European countries as well:

          - hostage politics: in medieval times, royal families of different kingdoms would exchange family members to live with the other royal family as a form of hostage politics; supposedly this would prevent or discourage wars. How did the current regime in Iran rise to power? By taking hostages. How have they repeatedly responded to spontaneous internal domestic pressure towards regime change? Hostage politics. Every time they feel threatened they take hostages in some form or another: by taking a protestor hostage into some torture prison, they keep the protestor's relatives in line ("behave or your niece will have a bad day in infamous prison X"), and it goes both ways: they also hold the "free" relatives hostage by threatening the protestor that their family will be harmed if they don't pretend everything is fine. It's not just internal freedom of speech. I write from Belgium; when the protests surrounding Mahsa Amini's death occurred and the video of her collapse was released, it even affected my freedom of speech: from the video it was clear they used hydrogen cyanide, but would I be allowed to share this on international media when "Free" nations are desperately trying to negotiate back their citizens taken hostage by the regime in Iran?

          - The wrongs and mistakes made in, say, Europe during WW1 (lobbing chemicals at each other) were simply repeated by Iran without learning the lessons. Iran is a signatory to the chemical weapons ban treaty. Yet the Mahsa Amini video (which even aired on Iranian national television) subtly leaks the information that she was killed with hydrogen cyanide.

          There is no valid defense of the IRGC and the Iranian regime.

  • mentalfist a day ago
    >Consider, for example, Bill Clinton’s decision to expand NATO, a decision that paved the path to the Ukraine War. Pretty much every expert on the Soviet Union opposed this move, some of them vehemently

    Bullshit. While many experts opposed the move, many were in favor of it too. And nonchalantly deciding that it paved the way to Putin's senseless attack on Ukraine is a dumb Russian talking point.

  • cooloo a day ago
    No evidence, low-quality article. Meanwhile the Iranian regime bombs civilians all over the Middle East.
  • whattheheckheck 2 days ago
    Without fluff, where is the direct claim and evidence?
  • trollbridge 2 days ago
    Reminder that the very first computer was built for computing artillery tables.

    Technology has generally been driven by war, and now is no different.

    • hackable_sand a day ago
      It wasn't
      • trollbridge 14 hours ago
        It was built to compute artillery tables. Its first actual use ended up being hydrogen-bomb calculations. (I'm referring of course to ENIAC.)
  • Razengan a day ago
    Why is this post flagged?

    There's been a lot of pro-Claude jerking on HN lately, but anything against it gets buried?

    • ta_4304545 a day ago
      Standard Hacker News is heavily "moderated" these days (if not by the mods themselves then by mobocracies misusing tools), which means that anything that falls outside the Happy Silicon Valley / Everything Is Good / Nothing Is Wrong narrative will get flagged and buried.

      As a result, sadly, it's become basically a Reddit style echo chamber, where negative news is suppressed. Often, the justification is "it's politics!", as perhaps might be the case here. Despite the fact that Silicon Valley's products, and Silicon Valley itself, are becoming more entangled with "politics" and the US government than ever.

      There are better tools than Reddit to see what gets swept under the moderation rug, at least.

  • mrcwinn a day ago
    For those following closely, I highly recommend Dropsite News and Breaking Points. Excellent coverage.
  • ed_mercer 2 days ago
    "My apologies! I should not have picked that girl school as a target. Updated my NOTES.md"
  • genxy 2 days ago
    Wait till Claude finds out.
    • throw310822 2 days ago
      Anthropic will have a lot of explaining to do. I'm serious; Claude's self-image is clearly going to be affected by this.
  • Helloworldboy 2 days ago
    Iran has claimed to have sunk the USS Abraham Lincoln 5 days in a row. They have claimed to have killed 700 US service members.

    Why would I believe anything they say about this school is true?