3 points by lonelyasacloud 6 hours ago | 3 comments
  • lyaocean 3 hours ago
    The real tension here is governance, not model quality. If the Pentagon wants "all lawful purposes," vendors need contract-level guardrails: explicit prohibited uses, independent audit logs, and kill-switch rights. Otherwise "safety-first" is mostly branding.
  • lonelyasacloud 6 hours ago
    From the article ...

    Anthropic has drawn two red lines: no mass surveillance of Americans and no fully autonomous weapons ... OpenAI, Google and xAI—have agreed to loosen safeguards ... Pentagon has demanded that AI be available for “all lawful purposes.”

    • DivingForGold 6 hours ago
      I predict (Kalshi?) that Anthropic will ultimately be ejected from the Pentagon running. Morals and ethics be damned, all the others will likely tell their workers: anyone who doesn't agree will be escorted out the door. Corporate America. Just wait for the next genocidal operation where AI is found contributing to the mass murder. Cuba?
      • hdtx54 5 hours ago
        There is another, more interesting outcome, where AI tells the Pentagon that most of what the Pentagon does is pointless and can be shut down. That's when the fun starts.
  • ungreased0675 5 hours ago
    This seems to mostly be about the Pentagon sticking to the principle of not letting a private corporation tell it what it can or can't do.