2 points by amjadfatmi1 | 14 days ago | 3 comments
  • niyikiza | 8 days ago
    Working on this problem: https://github.com/tenuo-ai/tenuo

    Different angle than policy-as-YAML. We use cryptographic capability tokens (warrants) that travel with the request. The human signs a scoped, time-bound authorization. The tool validates the warrant at execution, not a central policy engine.
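
    Roughly, a warrant carries everything the tool needs to decide on its own (Python sketch; field names are illustrative, not our actual wire format):

        from dataclasses import dataclass
        import time

        @dataclass
        class Warrant:
            capabilities: list[str]      # e.g. ["fs.read"]
            constraints: dict[str, str]  # e.g. {"path": "/data/reports/*"}
            expires_at: float            # unix timestamp; time-bound by design
            holder: str                  # identity the warrant is bound to
            signatures: list[bytes]      # delegation chain, issuer outward

            def expired(self) -> bool:
                return time.time() > self.expires_at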

    On your questions:

    Canonicalization: The warrant specifies allowed capabilities and constraints (e.g., path: /data/reports/*). The tool checks if the action fits the constraint. No need to normalize LLM output into a canonical representation.
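
    As a sketch (fnmatch stands in here for whatever constraint language the warrant actually uses):

        import fnmatch

        def fits(capability: str, path: str,
                 allowed: set[str], pattern: str) -> bool:
            # The tool checks the concrete action against the constraint
            # directly; the LLM's phrasing never needs a canonical form.
            return capability in allowed and fnmatch.fnmatch(path, pattern)

        fits("fs.read", "/data/reports/q3.csv", {"fs.read"}, "/data/reports/*")  # True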

    Stateful intent: Warrants attenuate. Authority only shrinks through delegation. You can't escalate from "read DB" to "POST external" unless the original warrant allowed both. A sub-agent can only receive a subset of what its parent had, cryptographically enforced.
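
    The attenuation rule itself is small (illustrative sketch, ignoring the signing step):

        def attenuate(parent_caps: set[str], parent_expiry: float,
                      child_caps: set[str], child_expiry: float) -> tuple[set[str], float]:
            # A child warrant must be a subset of its parent's authority
            # and can never outlive it.
            if not child_caps <= parent_caps:
                raise PermissionError("delegation cannot escalate authority")
            return child_caps, min(child_expiry, parent_expiry)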

    Latency: Stateless verification, ~27μs. No control plane calls. The warrant is self-contained: scope, constraints, expiry, holder binding, signature chain. Verification is local.
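
    Verification is just expiry plus signature checks, done in-process. A single-link sketch using Ed25519 from the Python 'cryptography' package (the real check walks the whole delegation chain):

        import time
        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

        def verify(payload: bytes, sig: bytes, expires_at: float,
                   issuer: Ed25519PublicKey) -> bool:
            # No control-plane round trip: everything needed is local.
            if time.time() > expires_at:
                return False
            try:
                issuer.verify(sig, payload)
                return True
            except InvalidSignature:
                return False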

    The deeper issue with policy engines: they check rules against actions, but they can't verify derivation. When Agent B acts, did its authority actually come from Agent A? Was it attenuated correctly?

    Wrote about why capabilities are the only model that survives dynamic delegation: https://niyikiza.com/posts/capability-delegation/

  • yaront111 | 14 days ago
    i just built Cordum.io .. should give you 100% deterministic security, open sourced and free :)
    • amjadfatmi1 | 14 days ago
      Hey @yaront111, Cordum looks like a solid piece of infrastructure, especially the Safety Kernel and the NATS-based dispatch.

      My focus with Faramesh.dev is slightly upstream from the scheduler. I’m obsessed with the Canonicalization problem. Most schedulers take a JSON payload and check a policy, but LLMs often produce semantic tool calls that are messy or obfuscated.

      I’m building CAR (Canonical Action Representation) to ensure that no matter how the LLM phrases the intent, the hash is identical. Are you guys handling the normalization of LLM outputs inside the Safety Kernel, or do you expect the agent to send perfectly formatted JSON every time?
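
      The structural half is easy to sketch; the semantic layer on top is the hard part (toy example, not the actual CAR spec):

          import hashlib
          import json

          def car_hash(tool: str, args: dict) -> str:
              # Sorted keys + fixed separators: different phrasings of
              # the same call collapse to one byte string, one hash.
              canonical = json.dumps({"tool": tool, "args": args},
                                     sort_keys=True, separators=(",", ":"))
              return hashlib.sha256(canonical.encode()).hexdigest()

          assert car_hash("rm", {"force": True, "recursive": True}) == \
                 car_hash("rm", {"recursive": True, "force": True})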

      • yaront111 | 14 days ago
        That’s a sharp observation. You’re partially right: CAP (our protocol) handles the structural canonicalization. We use strict Protobuf/Schematic definitions, so if an agent sends messy JSON that doesn't fit the schema, it’s rejected at the gateway. We don't deal with 'raw text' tool calls in the backend.

        But you are touching on the semantic aliasing problem (e.g. rm -rf vs rm -r -f), which is a layer deeper. Right now we rely on the specific Worker to normalize those arguments before they hit the policy check, but having a universal 'Canonical Action Representation' upstream would be cleaner.

        If you can turn 'messy intent' into a 'deterministic hash' before it hits the Cordum Scheduler, that would be a killer combo. Do you have a repo/docs for CAR yet?
        • amjadfatmi1 | 14 days ago
          Spot on, Yaron. Schematic validation (Protobuf) catches structural errors, but semantic aliasing (the 'rm -rf' vs 'rm -r -f' problem) is exactly why I developed the CAR (Canonical Action Representation) spec.
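
          A toy version of that aliasing pass (real commands need per-tool knowledge, so treat this as illustration only):

              def normalize_flags(argv: list[str]) -> list[str]:
                  # Split bundled short flags and sort them, so "rm -rf x"
                  # and "rm -r -f x" normalize to the same token list.
                  cmd, *rest = argv
                  flags, args = [], []
                  for tok in rest:
                      if tok.startswith("-") and not tok.startswith("--"):
                          flags.extend(f"-{c}" for c in tok[1:])
                      else:
                          args.append(tok)
                  return [cmd, *sorted(flags), *args]

              assert normalize_flags(["rm", "-rf", "x"]) == normalize_flags(["rm", "-r", "-f", "x"])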

          I actually published a 40-page paper (DOI: 10.5281/zenodo.18296731) that defines this exact 'Action Authorization Boundary.' It treats the LLM as an untrusted actor and enforces determinism at the execution gate.

          Faramesh Core is the reference implementation of that paper. I’d love for you to check out the 'Execution Gate Flow' section; it would be a massive win to see a Faramesh-Cordum bridge that brings this level of semantic security to your orchestrator.

          Code: https://github.com/faramesh/faramesh-core

  • kxbnb | 13 days ago
    Your framing of the problem resonates - treating the LLM as untrusted is the right starting point. The CAR spec sounds similar to what we're building at keypost.ai.

    On canonicalization: we found that intercepting at the tool/API boundary (rather than parsing free-form output) sidesteps most aliasing issues. The MCP protocol helps here - structured tool calls are easier to normalize than arbitrary text.
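
    Concretely, interception is just a wrapper at the tool boundary (sketch; the check hook is whatever policy you plug in):

        def guarded(check):
            # Policy runs on structured arguments at the tool boundary,
            # never on the model's free-form text.
            def wrap(tool_fn):
                def inner(**kwargs):
                    if not check(tool_fn.__name__, kwargs):
                        raise PermissionError(tool_fn.__name__)
                    return tool_fn(**kwargs)
                return inner
            return wrap

        @guarded(lambda name, args: name == "read_file")
        def read_file(path: str) -> str:
            with open(path) as f:
                return f.read()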

    On stateful intent: this is harder. We're experimenting with session-scoped budgets (max N reads before requiring elevated approval) rather than trying to detect "bad sequences" semantically. Explicit resource limits beat heuristics.
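
    A session budget is just an explicit counter (sketch, not our exact implementation):

        class SessionBudget:
            def __init__(self, max_reads: int):
                self.max_reads = max_reads
                self.reads = 0

            def allow_read(self) -> bool:
                # Past the budget, the call goes to an approval flow
                # instead of a semantic "is this sequence bad?" guess.
                self.reads += 1
                return self.reads <= self.max_reads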

    On latency: sub-10ms is achievable for policy checks if you keep rules declarative and avoid LLM-in-the-loop validation. YAML policies with pattern matching scale well.
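
    For example, a declarative rule plus a glob match (the policy schema here is made up for illustration; PyYAML for loading):

        import fnmatch
        import yaml

        POLICY = yaml.safe_load("""
        rules:
          - tool: fs.read
            path: "/data/reports/*"
            effect: allow
        """)

        def check(tool: str, path: str) -> bool:
            for rule in POLICY["rules"]:
                if rule["tool"] == tool and fnmatch.fnmatch(path, rule["path"]):
                    return rule["effect"] == "allow"
            return False  # default deny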

    Curious about your CAR spec - are you treating it as a normalization layer before policy evaluation, or as the policy language itself?