The real tension here is governance, not model quality. If the Pentagon wants "all lawful purposes," vendors need contract-level guardrails: explicit prohibited uses, independent audit logs, and kill-switch rights. Otherwise "safety-first" is mostly branding.
"Anthropic has drawn two red lines: no mass surveillance of Americans and no fully autonomous weapons ... OpenAI, Google and xAI have agreed to loosen safeguards ... the Pentagon has demanded that AI be available for 'all lawful purposes.'"
I predict (Kalshi?) that Anthropic will ultimately be ejected from the Pentagon running. Morals and ethics be damned, the others will likely tell their workers that anyone who doesn't agree will be escorted out the door. Corporate America. Just wait for the next genocidal operation where AI is found contributing to the mass murder. Cuba?
There is another, more interesting outcome, where AI tells the Pentagon that most of what it does is pointless and can be shut down. That's when the fun starts.