4 points by geox 5 hours ago | 1 comment
  • SegfaultSeagull4 hours ago
    I get why people are nervous about AI in the military. Autonomous weapons are a scary concept. But I think the framing sometimes skips a step.

    Modern militaries are already software-driven. Targeting, logistics, satellite analysis, cyber defense — it’s all code. Machine learning is just the next layer on top of systems that already exist. It’s not like we’re going from swords to Terminators overnight.

    Also, opting out doesn’t mean the technology goes away. If the U.S. or other democratic countries decide “we won’t touch this,” that doesn’t slow down China, Russia, Iran, etc. It just shifts the balance toward actors who are less constrained.

    There’s a real argument that better AI could reduce civilian harm. If a system can process more sensor data than a human and flag inconsistencies or uncertainty, that can make strikes more discriminate, not less. Humans under stress make mistakes too. A lot of them.

    I’m much more worried about governance than the raw tech. Keep humans in the loop. Make systems auditable. Make decision chains reviewable after the fact. That’s a policy problem, not a “ban the math” problem.

    The military isn’t going to stop using advanced software. The real question is whether it’s built inside systems that answer to law and oversight, or outside of them.

    • RavingGoat25 minutes ago
AI has no problem murdering or spying on American citizens