  • cspinetta · 4 hours ago
    If an AI agent can execute code, it should be treated as untrusted code execution.

    So we built VoidBox around that assumption: instead of running agents as host processes or in containers, it runs each stage in a disposable microVM (KVM on Linux, Virtualization.framework on macOS).

    The basic model is:

    - one VM per stage
    - explicit capabilities
    - no ambient host access unless provisioned
    - host/guest communication over vsock
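    To make the vsock point concrete, here is a minimal guest-side sketch. vsock addresses are (CID, port) pairs, and CID 2 always refers to the host, so a guest can reach the host supervisor without any network configuration. The port number and framing below are illustrative assumptions, not VoidBox's actual protocol.

```python
import socket

VMADDR_CID_HOST = 2   # well-known vsock CID for the host
CONTROL_PORT = 5005   # hypothetical port for stage control messages

def send_to_host(payload: bytes, port: int = CONTROL_PORT) -> bytes:
    """Guest-side: open a vsock stream to the host, send one message,
    and return the host's reply. Requires a vsock-capable guest kernel."""
    with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s:
        s.connect((VMADDR_CID_HOST, port))
        s.sendall(payload)
        return s.recv(4096)
```

    The host side listens on the same port bound to VMADDR_CID_ANY, which is what makes the channel work without giving the guest any ambient network access.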

    We put together a separate repo with reproducible security labs:

    https://github.com/the-void-ia/ai-agent-security-labs

    The labs compare common container / shared-kernel setups with the same probes running inside VoidBox.

    This is still early. We'd especially value feedback from people who have worked on:

    - sandboxing
    - containers or VMs
    - agent runtimes
    - security boundaries for tool-using agents

    Interested in pushback too, especially if we're overstating the security benefit, missing obvious escape paths, or solving the wrong layer of the problem.

    Repo: https://github.com/the-void-ia/void-box