The OpenClaw system prompt has no measures in it at all to prevent leaking, because trying to protect your system prompt is almost entirely a waste of time and actually makes your product less useful.
As a result, I do not think this is a credible report.
Here's the system prompt right now: https://github.com/openclaw/openclaw/blob/b4e2e746b32f70f8fb...
No person starts a summary that way, it's over-the-top and meaningless. I have seen AI do that many times when summarizing something related to security, though. Claude often says "CRITICAL:" or "CRITICAL VULNERABILITY:" or similar, especially when you jam the context window full of junk.
At least, I am curious about the tool
I do understand there's a lot of people running openclaw that don't really understand it and know what models are actually running. But we've known for a while that there are tons of older models that are pretty vulnerable, and you can hook up any model to OpenClaw, so, this data is not really that useful. Even though I totally agree that there are plenty of security risks here
No amount of hardening or fine-tuning will make them immune to takeover via untrusted context