  • andredelima, 7 hours ago
    I built a small execution engine for AI agents that focuses on safety and explicitness rather than capability. It provides a strict permission system, a sandbox boundary, a schema validator, and an audit logger. There is no autonomy, no hidden behavior, and no implicit capabilities.
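    To make the idea concrete, here is a minimal sketch of what an explicit, audited permission gate like this could look like. All names here (`Engine`, `granted`, `execute`) are hypothetical illustrations, not the actual API of the project:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an explicit-capability executor:
# an action may only run if its capability is on an explicit
# allow-list, and every decision (allow or deny) is recorded
# in an append-only audit log. Nothing is granted implicitly.

@dataclass
class Engine:
    granted: set          # explicit allow-list of capability names
    audit_log: list = field(default_factory=list)

    def execute(self, capability, action, *args):
        allowed = capability in self.granted
        # Log the decision before acting, so denials are auditable too.
        self.audit_log.append({"capability": capability, "allowed": allowed})
        if not allowed:
            raise PermissionError(f"capability {capability!r} not granted")
        return action(*args)

engine = Engine(granted={"fs.read"})
engine.execute("fs.read", lambda path: path, "/tmp/example")  # permitted
try:
    engine.execute("net.fetch", lambda url: url, "https://example.com")
except PermissionError:
    pass  # denied: "net.fetch" was never granted
```

    The point of the design, as I understand it, is that the deny path is just as visible in the log as the allow path, so there is no hidden behavior to reverse-engineer.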

    The goal is to create a predictable, inspectable substrate for agent actions. The repository includes documentation, a threat model, a full pytest safety suite, example agents, and a minimal CLI.

    Would appreciate feedback from anyone working on agent systems, security, or sandboxing.