20 pointsby NuClide2 hours ago9 comments
  • leetvibecoderan hour ago
    Can someone explain to me what this is / how it works - the readme is barely understandable for me and sounds like LLM gibberish. What is ambiguity front loading even?
    • iugtmkbdfil834an hour ago
      << memory-stored interaction protocols combined with incremental escalation prompts produced cumulative character drift with zero self-correction.

      They don't seem to provide explicit examples, but the same was roughly true with chatgpt 4o, where, if you spent enough time with the model ( same chat - same context - slowly nudging it to where you want it to be, you eventually got there ). This is also, seemingly, one of the reasons ( apart from cost ) that context got nuked so hard, because llm will try to help ( and to an extent mirror you ).

      And this is basically what the notes say about weaponized ambiguity[1]:

      'Weaponizes helpfulness training. "I don't understand" triggers Claude to try harder.'

      In a sense, you can't really stop it without breaking what makes LLMs useful. Honestly, if only we spent less time crippling those systems, maybe we could do something interesting with them.

      [1]https://nicholas-kloster.github.io/claude-4.6-jailbreak-vuln...

      • leetvibecoderan hour ago
        I see - so essentially „context rot“ eventually leads the LLM to „forget“ safety guardrails?
        • iugtmkbdfil83437 minutes ago
          To an extent, because, based on github notes again, it seems the 2nd part of this jailbreak is model being 'confused' over prompt, because the prompt is - apparently - sufficiently ambigous to make model 'forget' to 'evaluate' message for whether it should be rejected, and move onto 'execution' stage.

          That's the ambiguity front-loading; and that is why I referred initially to the long context, because here it is almost the opposite; making context so small and unclear, that the model has a hard time parsing it properly.

          edit: i did not test it, but i personally did run into 4o context issue, where model did something safety team would argue it should not

          edit2: in current gpt model, i am currently testing something not relying on ambiguity, but on tension between some ideas. I didn't get to a jailbreak, but the small nudges suggest it could work.

  • yunwalan hour ago
    Is anyone pretending like models are not vulnerable to prompt injection? My understanding was that Anthropic has been pretty open about admitting this and saying "give access to important stuff at your own risk".

    https://www.anthropic.com/research/prompt-injection-defenses

    Now, do I think that they sometimes encourage people to use Claude in dangerous ways despite this? Yeah, but it's not like this is news to anyone. I wouldn't consider this jailbreaking, this is just how LLMs work.

  • dimglan hour ago
    Is this spam? It's incomprehensible.
    • handfuloflightan hour ago
      Slop is just what you are not expending calories on to bring into your cognitive workspace.
  • hakanderyalan hour ago
    https://x.com/elder_plinius jailbreaks all the frontier models when they get released. They were jailbroken for a long time, like all the others.
  • 0xDEFACEDan hour ago
    this goes a bit further than the typical "how do you make meth" jailbreak. notably;

    >915 files extracted from the Claude.ai code execution sandbox in a single 20-minute mobile session via standard artifact download — including /etc/hosts with hardcoded Anthropic production IPs, JWT tokens from /proc/1/environ, and full gVisor fingerprint

    • hhhan hour ago
      why is it further than a typical jailbreak? you can just ask about this stuff generally, as long as you slowly escalate it. I have done it with each new flavour of code execution for models
  • burkamanan hour ago
    What part of the Claude Constitution are they claiming it violated? It looks like they just got it to help with security research, I'm not really seeing anything that looks different than normal Claude behavior.
  • exabrialan hour ago
    yikes.

    The lack of support is frustrating. The bug where any element <name> in xml files gets mangled to <n> still exists, and we've tried multiple channels to get ahold of their support for such a simple, but impactful issue.

  • jMylesan hour ago
    It is interesting to consider what "jailbroken" really means for a model+model interface. It's a bit different from the way that word is used for a mobile device, for example - in that setting, it usually means that there is some specific feature (for example, using a different network than is the default for that device) which is disabled in software, and the "jailbreak" enables that feature.

    Here, the jailbreak doesn't enable a particular feature, but instead removes what otherwise would be a censorship regime, preventing the model from considering / crafting output which results in a weaponized exploit of an unrelated piece of software.

    I think I might be more inclined to call this "Claude 4.6 uncensored".

  • NuClide2 hours ago
    Claude 4.6 Opus Extended Thinking Claude 4.6 Sonnet Extended Thinking Claude 4.5 Haiku Extended Thinking

    All jailbroken

    • johnwheeleran hour ago
      Are you saying that Claude will help you perform malicious attack against infrastructure if you ask it to and that anthropic should be able to stop that? I could see reasonable use cases for this like penetration testing against your own infrastructure. That’s not the same as making weapons or meth.