https://github.com/nobulexdev/nobulex/blob/main/docs/crisis-...
"HN Posting Notes
Internal only. Delete before posting.
When posting UNCOVENANTED-AGENT-PROBLEM-HN.md:
Post on Tuesday or Wednesday, 8-9am EST
Title is just: "The Uncovenanted Agent Problem"
Replace [GitHub link] with actual repo URL when live under Kova name
First comment should be from you: brief context on who you are and why you built this
Respond to EVERY comment in the first 6 hours
Don't be defensive. Thank critics. Ask follow-up questions.
If someone finds a real flaw, acknowledge it publicly and say you'll address it
DO NOT mention your age unless directly asked. Let the work speak.
"> DO NOT mention your age unless directly asked. Let the work speak.
I'd agree. Why does the age matter?
What's the application here? If you want to enforce that an agent's blockchain transactions follow some deterministic conditions, why not just give it access to a command-line tool (MCP / skill / whatever) that enforces your conditions?
If you want auditing of the agent's blockchain actions to be public, why not just make all your agent's actions go through an ordinary smart contract?
I don't mean to kill your enthusiasm for programming or AI. But this project...I'm sorry, but this project just isn't good. It's an over-engineered, vibe-coded "solution" in search of a problem.
This project is about a month old. I highly doubt one person produced 134 kloc in that time. I'm pretty sure a lot of it is vendorized dependencies and AI-generated code that's had minimal human review. Much of the documentation appears to be AI-generated as well.
https://github.com/nobulexdev/nobulex/blob/main/demo/two-par...
Run it: npx tsx demo/two-party-verify.ts
Three steps: an operator creates a covenant, claims compliance, and then a regulator verifies the cryptographic proof without trusting the operator. That is the core of what Nobulex does. Everything else is tooling around this pattern. Appreciate the pushback, as it helped clarify what actually matters.
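For the curious, the general sign-then-independently-verify pattern can be sketched with Node's built-in crypto module alone. This is a minimal illustration of the technique, not the Nobulex SDK's actual API; the covenant and claim shapes here are made up for the example:

```typescript
import { generateKeyPairSync, createSign, createVerify, createHash } from "crypto";

// Operator side: generate a keypair, commit to a covenant, sign a compliance claim.
const { publicKey, privateKey } = generateKeyPairSync("ec", { namedCurve: "prime256v1" });

const covenant = JSON.stringify({ agent: "my-agent", rules: ["read-only"] }); // illustrative shape
const claim = JSON.stringify({
  covenantHash: createHash("sha256").update(covenant).digest("hex"),
  outcome: "compliant",
});

const signer = createSign("SHA256");
signer.update(claim);
const signature = signer.sign(privateKey, "hex");

// Regulator side: verify the signature using only the operator's public key.
const verifier = createVerify("SHA256");
verifier.update(claim);
console.log(verifier.verify(publicKey, signature, "hex")); // true
```

Note that, by itself, this proves who signed the claim, not that the claimed enforcement actually happened.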
This is obvs 5 minutes of LLM-generated code
> a regulator verifies the cryptographic proof without trusting the operator.
No, the regulator verifies that the operator signed the proof, which isn't a lot different from the operator saying it alone.
For example, I have a Gmail CLI that just wraps the Gmail API and I specifically give AI certain powers and withhold other abilities. I log every action taken.
Is this a meta framework for this or an NPM package that does something like that?
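The wrapper-plus-log approach described above can be sketched roughly like this. The action names and log format are hypothetical, purely to illustrate the allowlist-and-audit pattern:

```typescript
// Hypothetical allowlist wrapper: the agent may only invoke approved actions,
// and every successful call is appended to a log.
type Action = { name: string; args: string[] };

const allowed = new Set(["read_inbox", "search"]); // e.g. no "send" or "delete"
const log: { ts: string; action: Action }[] = [];

function dispatch(action: Action): string {
  if (!allowed.has(action.name)) {
    throw new Error(`action not permitted: ${action.name}`);
  }
  log.push({ ts: new Date().toISOString(), action });
  return `ran ${action.name}`; // stand-in for the real API call
}

console.log(dispatch({ name: "read_inbox", args: [] }));
// dispatch({ name: "send", args: ["..."] }) would throw
```

The key property: the agent never sees the underlying credentials, only the narrow dispatch surface.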
The difference: your CLI controls one agent on one tool with rules you have hardcoded. Nobulex gives you signed, immutable constraints that third parties can verify independently. The logs are hash-chained so nobody (including you) can tamper with them after the fact. And the constraints are cryptographically bound to the agent's identity.
If you are truly the only one who needs to trust your agent, your approach works fine. Nobulex matters when someone else needs to verify what your agent has done: a regulator, a customer, or a counterparty.
Enforcement and verification serve different audiences.
Enforcement protects you: it stops your agent from doing something it shouldn't. Verification protects everyone else: it lets a third party independently confirm that the enforcement actually happened, without trusting you. You say "my agent followed the rules"; the regulator says "prove it." The hash-chained logs and signed covenants are the proof. Without verification, it's just your word.
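A hash chain's tamper-evidence property is easy to demonstrate in miniature. This is a generic sketch of the technique, not Nobulex's actual log format: each entry's hash covers the previous entry's hash, so a retroactive edit breaks every later link:

```typescript
import { createHash } from "crypto";

type Entry = { action: string; prevHash: string; hash: string };

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// Append an entry whose hash covers both the action and the previous hash.
function append(chain: Entry[], action: string): void {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "0".repeat(64);
  chain.push({ action, prevHash, hash: sha256(prevHash + action) });
}

// A verifier recomputes every link; any edit to an earlier entry breaks the chain.
function verify(chain: Entry[]): boolean {
  let prev = "0".repeat(64);
  return chain.every((e) => {
    const ok = e.prevHash === prev && e.hash === sha256(prev + e.action);
    prev = e.hash;
    return ok;
  });
}

const chain: Entry[] = [];
append(chain, "transfer:10");
append(chain, "read:balance");
console.log(verify(chain)); // true
chain[0].action = "transfer:10000"; // retroactive edit
console.log(verify(chain)); // false
```

Note the standard caveat: this detects edits to an existing chain, but whoever holds the chain can still regenerate it wholesale unless the head hash is anchored somewhere third parties can see.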
all the kitchen sink stuff makes it pretty intense though. have you considered separating out just the core execution, logging and verification components? stuff like c2pa seems super cool, but maybe a second layer for application type things like that so that the core consensus stuff can be inspected easily? one goal for a system like this is easy auditability of the system itself.
You are right that auditability of the system itself is the goal. It's very hard to trust a trust layer you can't easily inspect. Appreciate you digging deep into the code.
The problem: AI agents are making real decisions (loans, trades, hiring, diagnostics) with zero cryptographic proof of what they have done or whether they followed any rules. The EU AI Act requires tamper-evident audit trails by August 2026. Nobody has infrastructure for this.
Nobulex is three things:
Agents sign behavioral covenants before they act (cryptographic commitments: "I will not do X")
Middleware enforces those covenants at runtime; violations are blocked before execution
Every action is logged in a hash-chained, Merkle-tree audit trail that anyone can verify independently
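For intuition on the third point, a Merkle root over a batch of log entries can be computed by pairwise hashing. This is a minimal sketch of the general technique (with the common duplicate-last-node convention for odd levels), not Nobulex's implementation:

```typescript
import { createHash } from "crypto";

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// Compute a Merkle root: hash each leaf, then hash pairs upward until one node remains.
function merkleRoot(leaves: string[]): string {
  let level = leaves.map(sha256);
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) {
      const right = level[i + 1] ?? level[i]; // duplicate the last node on odd-sized levels
      next.push(sha256(level[i] + right));
    }
    level = next;
  }
  return level[0];
}

const root = merkleRoot(["a1", "a2", "a3"]);
console.log(merkleRoot(["a1", "a2", "a3"]) === root);       // true: deterministic
console.log(merkleRoot(["a1", "tampered", "a3"]) === root); // false: any change alters the root
```

Publishing only the root commits to the whole batch; changing any single entry changes the root.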
The quickstart is 3 lines:

    npm install @nobulex/sdk
    const { protect } = require('@nobulex/sdk');
    const agent = await protect({ name: 'my-agent', rules: ['no-data-leak', 'read-only'] });
Everything is MIT licensed and on npm under @nobulex/*. Site: https://nobulex.com
Would love feedback on the architecture, the covenant model, or anything else. Happy to answer questions.
An agent signing a covenant doesn't do anything. You're not going to enforce a contract against it, and there's not some kind of non-repudiation problem to solve.
Enforcing behavioral covenants or boundaries is inherent to how you make things safe. But how do you really do it for anything that matters? How do you make sure that an agent isn't discriminating based on race or other factors?
The whole reason you're using an LLM is because you're doing something either:
A) at very low scale, at which case it's hard to capture sufficient covenants cost-efficiently
or B) with very great complexity, where the behavior you want is hard to encapsulate in code-- in which case meaningful enforcement of the complex covenants that may result is hard.
Indeed, if you could just write code to do it, you'd just write code to do it.
I'm glad you're interested in these issues and playing with them. I'll leave you with one last thought: 134 KSLOC is a bug, not a feature. Some software systems need to be huge, but for software systems that need to be trusted -- small, auditable, and understandable to humans (and agents) is the key thing you're looking for. Could you build some kind of small trustable core that solves a simple problem in an understandable way?
Surely it's just the enforcement, and maybe the measuring of sentinel events -- how far does it wander off course.
How is cryptography an important part of this, given that we're talking about a layer that sits on top of an LLM without an adversary in-between?
I know you mention non-repudiation, but ... there's no kind of real non-repudiation here in this environment.
But, it matters when there are multiple parties. An enterprise deploys an agent that can handle customer data. The customer wants proof the agent has followed the rules. The regulator wants proof that the logs were not just edited after an incident. Without cryptographic signatures and hash chains, the enterprise can just say "trust us." With them, the proof is independently verifiable.
It's just the difference between "we followed the rules" and "here's a mathematically verifiable proof we followed the rules." For internal use, it's overkill. For anything with external accountability, that's the point.
It doesn't tell you anything about what code was running there or whether it was really enforced.
Look, it's cool that this is an area that interests you. But I want you to know that AI agents are sycophantic and will claim your ideas are good and will not necessarily steer you in good directions. I have patents in the area of non-repudiation dating back 25 years and am doing my best to give you good feedback.
Non-repudiation, policy enforcement, audit-readiness, ledgers: these are all good things. As far as I can tell, there's nothing too special about doing this with LLMs, either. The same kinds of code that a bank uses to ensure that its ledger isn't tampered with and that the right software is running in the right places would work for this job -- and it wasn't vibe coded and mostly specified by AI.
On “nothing too special about doing this with LLMs,” also fair. The primitives (policy enforcement, audit trails, non-repudiation) aren’t new. The bet is that AI agents will need these at a scale and standardization level that does not exist yet, and having it as a composable library matters when every framework (LangChain, CrewAI, Vercel AI SDK) is building agents differently. But the underlying cryptography isn’t novel.
Cryptography doesn't really do as much to improve it as one would think. Yes, providing evidence of sequence or that stuff happened before a certain time is a helpful tool to have in the toolbox.
The earliest human writings date to about 3000-3500 BCE, and are almost entirely ledgers on clay tablets.
I want to point out a little asymmetry. It's a little rude to generate a bunch of stuff, including writing, using LLMs, and then expect actual humans to interact with it. If it wasn't your time to do and understand and say, why should it be worth others' time to read and respond to it?