The problem we're solving: every AI agent framework today (LangChain, AutoGen, CrewAI, MCP, AWS Bedrock) treats incoming payloads as legitimate by default. There's no cryptographic verification that a payload was actually sent by the agent that claims to have sent it, that it hasn't been modified in transit, or that it hasn't been replayed from a previous session.
This creates what we call the Payload Trust Gap. All the upstream security layers (orchestration, tool schemas, sandboxing, permissions, guardrails, logging) operate on the assumption that the payload is trustworthy. If it isn't, every one of those controls is working from a bad premise.
A2SPA sits at the execution boundary (Layer 5 of the agent stack) and enforces:
- Payload signing: a SHA-256 digest of the payload, signed with the sending agent's private key
- Nonce-based replay protection with a 24-hour TTL
- Per-agent permission mapping with instant on/off toggle
- Tamper-evident audit logging of every agent interaction
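To make the first three checks concrete, here's a minimal sketch of the verification flow. This is illustrative, not the A2SPA API: `sign_payload` and `verify_payload` are hypothetical names, and HMAC-SHA256 stands in for the per-agent asymmetric signature (the real system signs with each agent's private key).

```python
import hashlib
import hmac
import json
import secrets
import time

TTL_SECONDS = 24 * 3600          # 24-hour replay window
_seen_nonces: set[str] = set()   # replay cache; production would persist and expire entries

def sign_payload(key: bytes, body: dict) -> dict:
    """Wrap a payload body in a signed envelope with a fresh nonce and timestamp."""
    envelope = {
        "body": body,
        "nonce": secrets.token_hex(16),
        "ts": int(time.time()),
    }
    msg = json.dumps(envelope, sort_keys=True).encode()
    envelope["sig"] = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return envelope

def verify_payload(key: bytes, envelope: dict) -> bool:
    """Reject tampered, expired, or replayed envelopes."""
    env = {k: v for k, v in envelope.items() if k != "sig"}
    msg = json.dumps(env, sort_keys=True).encode()
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, envelope.get("sig", "")):
        return False                              # signature mismatch: tampered or wrong key
    if time.time() - envelope["ts"] > TTL_SECONDS:
        return False                              # outside the TTL window
    if envelope["nonce"] in _seen_nonces:
        return False                              # nonce already used: replay
    _seen_nonces.add(envelope["nonce"])
    return True
```

The ordering matters: the signature is checked before the nonce is recorded, so an attacker can't burn a legitimate nonce by submitting a forged copy first.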
It's framework-agnostic and priced at $0.01 per verification: pay as you go, no minimums.
A few things I'd genuinely love feedback on:
1. Is the "Payload Trust Gap" framing accurate to how you think about agent security, or is there a better mental model?
2. Are there attack scenarios we haven't accounted for?
3. For those running agents in production — is this a problem you've already solved internally, and if so how?
Happy to get into the technical details of the implementation. Thanks for taking a look.