16 points by pera 9 hours ago | 3 comments
  • Legend2440 4 hours ago
    I'm dubious. There's no real evidence here to suggest that it was. This sounds like a good old-fashioned intel failure, of the kind common in every previous war in the Middle East.

    Also, so much for 'no new wars'. I'm sure this one will go better than the last five wars we've started in the Middle East.

    • orbital-decay 4 hours ago
      If you automate your intelligence, which they supposedly did, it becomes an automation failure as well. If Claude is grinding through their data, looking for hints and connecting the dots to designate the targets, it's definitely going to produce plausible false positives that are hard to verify quickly.
      • Legend2440 4 hours ago
        I'm dubious about the degree to which they've actually automated their intel.

        If it's anything like how my industry has 'adopted' AI, it means they've got a chatbot somewhere that no one actually uses, and a bunch of press releases.

  • jdylanm 4 hours ago
    How would Claude assist in this task? Anyone have any concrete examples?

    Nonetheless, I don’t like this…

    • hedora 4 hours ago
      Probably in the way the US military says it is using Claude:

      The AI shortlists targets. Someone approves them, and does so at an unprecedented rate.

  • metalman 5 hours ago
    Which makes this a defence of releasing indiscriminate killing machines against anyone, anywhere. Ambiguity is impossible.