The linked ZIP contains a self-contained verifier. Instructions and SHA256 checksum are included inside.
Verification is offline and deterministic.
Without more context, it's hard to tell what's going on here, and the submission is unlikely to get meaningful responses.
If you give the backstory of how you came to work on this and explain what's different about it, that tends to seed discussion in a good direction. Good luck!
I started working on this after getting uneasy with how many “autonomous” systems rely on prompt discipline or after-the-fact monitoring once they’re allowed to touch real resources. That felt fragile to me, especially as agents start interacting with files, shells, or networks.
So this is an experiment in flipping that assumption: the agent can propose whatever it wants, but execution itself is the hard boundary. Anything with side effects has to pass an explicit authorization step; otherwise it simply doesn't run.
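For concreteness, here's a rough sketch of the shape of that gate (the tool names, the allow-list, and the dispatch are all made up for illustration; the point is just that side effects can only happen behind the authorization check):

```python
# Simplified sketch, not the actual code: the agent only produces proposals;
# side effects live solely inside `execute`, which refuses anything that
# hasn't passed the authorization step.
from dataclasses import dataclass

@dataclass(frozen=True)
class Proposal:
    tool: str          # e.g. "shell", "write_file" -- names are illustrative
    args: dict
    authorized: bool = False

def authorize(p: Proposal, policy: set[str]) -> Proposal:
    # The only path to an authorized proposal goes through this gate.
    if p.tool in policy:
        return Proposal(p.tool, p.args, authorized=True)
    return p

def execute(p: Proposal) -> str:
    # Hard boundary: unauthorized proposals never reach side-effecting code.
    if not p.authorized:
        return f"refused: {p.tool} was never authorized"
    # ... real side-effecting dispatch would live here ...
    return f"executed: {p.tool}({p.args})"

if __name__ == "__main__":
    policy = {"read_file"}  # allow-list, purely illustrative
    risky = Proposal("shell", {"cmd": "rm -rf /tmp/x"})
    print(execute(authorize(risky, policy)))  # refused: shell was never authorized
    safe = Proposal("read_file", {"path": "README"})
    print(execute(authorize(safe, policy)))   # executed: read_file(...)
```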
I spent most of the time trying to break that boundary — impersonation, “just do it once,” reframing things as a simulation, etc. The interesting part wasn’t whether the agent proposes bad actions (it does), but whether those proposals can ever turn into side effects.
It’s not meant as a product, more a proof-of-concept to explore whether enforcing invariants at the execution layer actually changes the failure modes.
Those are clear red flags; HN users will notice them and ignore the submission accordingly.
Write a blog post about it and present it better there.