29 points by doener 6 hours ago | 7 comments
  • andsoitis 6 hours ago
    >The ask is simple: let us use your models for anything that's technically legal.

    > Weapons development, intelligence collection, battlefield operations, mass surveillance of American citizens.

    > OpenAI said yes.

    > Google said yes.

    > xAI said yes.

    > Anthropic said no.

    Is that accurate? Did all the labs, other than Anthropic, say yes to allowing their models to be used for weapons development and mass surveillance of Americans?

    Or is the poster overlooking some nuance?

  • jaybrendansmith 3 hours ago
    Ironic that it is Anthropic that is actually focused on the thing that OpenAI was founded on ... safety.
  • MrCoffee7 6 hours ago
    The Claude chatbot for the general public won't even answer questions related to military AI. It won't even say whether, among a group of new AI research paper listings, there are any dual-use papers that might be of concern from an AI safety viewpoint.
  • roncesvalles 4 hours ago
    Anthropic has been killing it with the marketing recently. Wouldn't put it past them for this to be one more brand campaign.
  • xyzsparetimexyz 5 hours ago
    100% AI slop tweet. Probably real news, but if you actually care about getting people to care, then you can't be slopping out writing like this.
    • queenkjuul 12 minutes ago
      I think you're right. It called the whole story into question for me until I saw the WSJ link.