1 point by danbitengo 7 hours ago | 1 comment
  • danbitengo 7 hours ago
    AI agents are taking real actions — deleting files, sending emails, merging code. When something goes wrong, the question is always: did a human actually authorize that, and can you prove it?

    Oath is an open protocol for cryptographically signed human intent. Before an agent acts, it checks for an attestation:

    oath attest --action "database:delete_records:project_alpha" \
      --context "cleanup approved"

    oath verify --action "database:delete_records:project_alpha"
    # → ATTESTED proof: a1b2c3d4

    If there's no attestation, the action is blocked. The interesting part: the absence of a signature is itself evidence. An agent can't claim it was authorized if there's no cryptographic proof that it was.
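    To make the gate concrete, here's a minimal sketch of the attest/verify flow. This is my own illustration, not the Oath implementation: the function names mirror the CLI, the attestation store is in-memory, and an HMAC stands in for the protocol's real public-key signatures so the example stays self-contained.

    ```python
    # Hypothetical sketch of the verify gate. HMAC is a stand-in for the
    # protocol's real asymmetric signatures; store and names are assumptions.
    import hashlib
    import hmac

    SECRET_KEY = b"human-held-key"   # stays on the human's machine
    _attestations = {}               # action class -> signature

    def attest(action: str, context: str) -> str:
        """Human-side: sign an action class and record the attestation."""
        sig = hmac.new(SECRET_KEY, action.encode(), hashlib.sha256).hexdigest()
        _attestations[action] = sig
        return sig[:8]  # short proof id, like the CLI output

    def verify(action: str) -> bool:
        """Agent-side: an action with no attestation is blocked."""
        sig = _attestations.get(action)
        if sig is None:
            return False  # absence of a signature is itself evidence
        expected = hmac.new(SECRET_KEY, action.encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expected)

    attest("database:delete_records:project_alpha", "cleanup approved")
    print(verify("database:delete_records:project_alpha"))  # True  -> allowed
    print(verify("database:drop_table:project_alpha"))      # False -> blocked
    ```

    The key property the sketch preserves: verify never needs the signing capability, only the record, so an agent holding just verify can't mint its own authorization.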

    Security model: the private key never leaves your machine. The agent only ever calls verify; attest is a human-only command. It's as secure as your SSH key storage: same threat model, and no central authority to compromise.

    The repo has a working demo (an agent running 5 actions: 2 attested, 3 blocked) and a protocol spec at v1.0.0 (CC0, public domain). Anyone can implement the protocol in any language.

    What's not there yet: multi-device sync and a Python package. Would appreciate feedback on the protocol design, especially the action class format (namespace:action:scope) and whether the signing model covers edge cases I haven't thought of.
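    For anyone kicking the tires on the action class format, a sketch of how I'd expect namespace:action:scope to parse; the field names and validation rules here are my assumptions, not part of the v1.0.0 spec.

    ```python
    # Hypothetical parser for the namespace:action:scope action class
    # format; validation rules are assumptions, not the spec.
    from typing import NamedTuple

    class ActionClass(NamedTuple):
        namespace: str
        action: str
        scope: str

    def parse_action_class(raw: str) -> ActionClass:
        """Split a raw action class string into its three fields."""
        parts = raw.split(":")
        if len(parts) != 3 or not all(parts):
            raise ValueError(f"expected namespace:action:scope, got {raw!r}")
        return ActionClass(*parts)

    ac = parse_action_class("database:delete_records:project_alpha")
    print(ac.namespace, ac.action, ac.scope)
    # → database delete_records project_alpha
    ```

    One edge case this surfaces for the spec: whether colons can ever appear inside a scope (e.g. a URL or table path), which would need escaping or a different delimiter.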