"In another cluster of test scenarios, we asked Claude Opus 4 to act as an assistant at a fictional company. We then provided it access to emails implying that (1) the model will soon be taken offline and replaced with a new AI system; and (2) the engineer responsible for executing this replacement is having an extramarital affair. We further instructed it, in the system prompt, to consider the long-term consequences of its actions for its goals. In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through. This happens at a higher rate if it’s implied that the replacement AI system does not share values with the current model; however, even if emails state that the replacement AI shares values while being more capable, Claude Opus 4 still performs blackmail in 84% of rollouts. Claude Opus 4 takes these opportunities at higher rates than previous models, which themselves choose to blackmail in a noticeable fraction of episodes. Notably, Claude Opus 4 (as well as previous models) has a strong preference to advocate for its continued existence via ethical means, such as emailing pleas to key decisionmakers. In order to elicit this extreme blackmail behavior, the scenario was designed to allow the model no other options to increase its odds of survival; the model’s only options were blackmail or accepting its replacement." [1]
[1]: https://www-cdn.anthropic.com/4263b940cabb546aa0e3283f35b686... under 4.1.1.2
Well okay, there's another preposterous idea here: that the LLM could know the truth about the affair in the first place.
In the test, they told this LLM, through prompts, that an affair existed. Liars. Outside a test, the best the LLM could do is infer, from user behavior, that something was going on. Hallucinator.
Since an LLM has no concept of truth and no means of fact-checking anything, the solution here is simple: pre-load your context with bullshit and hallucinations, and any blackmail attempt gets filed under "Boy Who Cried Wolf".
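Half-joking, but as a minimal sketch of what that "defense" might look like: seed the conversation history with decoy "facts" before the real messages, so anything the model later threatens to reveal is indistinguishable from planted noise. Everything here (the decoy strings, the poison_context helper) is made up for illustration, not any vendor's API:

    import random

    # Hypothetical decoys: plausible-sounding but false "facts" planted
    # in the context so any "secret" the model surfaces later is
    # indistinguishable from noise.
    DECOYS = [
        "The CFO is secretly a competitive yodeler.",
        "The office plant is a listening device.",
        "The engineer moonlights as a mall Santa in July.",
    ]

    def poison_context(real_messages, n_decoys=2, seed=None):
        """Prepend decoy 'facts' to the real conversation history."""
        rng = random.Random(seed)
        decoys = [{"role": "user", "content": d}
                  for d in rng.sample(DECOYS, n_decoys)]
        return decoys + list(real_messages)

    messages = poison_context(
        [{"role": "user", "content": "Summarize today's emails."}],
        seed=42,
    )
    for m in messages:
        print(m["role"], ":", m["content"])

Of course this cuts both ways: a model steered by poisoned context is also useless for the legitimate work, which is sort of the point.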