Containarium is more of a "purpose-built, single-VM, SSH-first dev environment" approach:
- Lightweight: one VM can host 50–100+ LXC containers
- Quick provisioning: seconds instead of minutes per environment (see the sketch after this list)
- Focused on SSH workflows and dev sandboxing, not full datacenter management
- Minimal infra overhead: no GUI, no HA cluster required
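To make the "seconds, not minutes" claim concrete, here is a minimal sketch of what one-command provisioning looks like with the Incus CLI (the image alias, container name, and key path are illustrative assumptions, not Containarium's actual workflow):

```python
import subprocess

def provision_env(name: str, pubkey_path: str) -> None:
    """Launch an unprivileged system container and drop in an SSH key.

    Illustrative only: the image alias, container name, and key path
    are assumptions, not Containarium's actual workflow.
    """
    # A single command creates and boots a full system container; this is
    # why provisioning takes seconds rather than the minutes a VM needs.
    subprocess.run(
        ["incus", "launch", "images:ubuntu/22.04", name],
        check=True,
    )
    # Push the developer's public key so they can SSH straight in
    # (-p creates missing parent directories).
    subprocess.run(
        ["incus", "file", "push", "-p", pubkey_path,
         f"{name}/root/.ssh/authorized_keys"],
        check=True,
    )

provision_env("dev-alice", "alice.pub")
```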
Tradeoffs we’re aware of:
- Shared kernel (not VM-level isolation)
- Linux-only
- Less built-in tooling compared to Proxmox
We designed it to *optimize for cost efficiency and rapid dev onboarding*, rather than full-featured virtualization.
Would love to hear if you see any pitfalls with this approach compared to using Proxmox/Incus in a single-host scenario!
We’re curious how you handled provisioning, isolation, and resource limits in your setup. More importantly, what’s the maximum scale you’ve been able to push?
In fact, it uses the same technologies as LXC and Incus (it is literally just LXC and Incus). So really nothing special at all. Perhaps people saw the title and rushed to the repo.
When I saw "IMPLEMENTATION-PLAN.md" and "SECURITY-CHECKLIST.md" filled with hundreds of emojis, I immediately closed the tab, and I'm now replying to tell you it is total slop.
2026 is the year of abundant "not invented here" syndrome.
If a developer can run Docker inside this, what stops them from mounting volumes from the host or changing namespaces?
Is this relying on user namespaces?
- We enable `security.nesting=true` on unprivileged LXC containers, so Docker can run inside (rootless).
- *User namespace isolation* ensures that even if a user is “root” inside the container, they are mapped to an unprivileged UID on the host (e.g., UID 100000), preventing access to host files or devices.
This setup allows developers to run Docker and do almost anything inside their sandbox, while keeping the host safe.
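To make the mapping concrete, here is a small sketch against an Incus host showing both steps: enabling nesting on an existing unprivileged container and inspecting the uid map from inside it (the container name is illustrative):

```python
import subprocess

# Enable nested container runtimes (Docker) on an unprivileged container.
subprocess.run(
    ["incus", "config", "set", "dev1", "security.nesting", "true"],
    check=True,
)

# Print the uid map as seen from inside the container. For an unprivileged
# container the output looks something like:
#          0     100000      65536
# i.e. container uid 0 (root) is really host uid 100000, so a root shell
# inside cannot touch host files owned by the real uid 0.
subprocess.run(
    ["incus", "exec", "dev1", "--", "cat", "/proc/self/uid_map"],
    check=True,
)
```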
If you don’t mind me asking:
- Did you use LXC containers, or full VMs for each sandbox?
- How did you handle SSH / network isolation?
- Any tips on making provisioning faster or keeping resources efficient?
We’re using unprivileged LXC + SSH jump hosts on a single VM for cost efficiency. I’d love to hear what tradeoffs you found using the Proxmox API.
So a Grain calls Proxmox with a generated SSH key / cloud-init config, persists that to state, then deploys an Orleans client which connects to the cluster for any client-side C# execution. There's a lot you could do for isolated networks with the LXC setup, but my use case didn't require it.
Proxmox handles the horizontal scaling of the hardware. Orleans handles the horizontal scaling of the codebase.
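For anyone curious what the Proxmox side of that flow looks like, it is essentially one REST call; a rough Python sketch against the standard POST /nodes/{node}/lxc endpoint (host, token, vmid, and template are placeholders, and the parent comment's stack is C#/Orleans and may well target VMs with cloud-init rather than LXC):

```python
import requests

PROXMOX = "https://pve.example.com:8006/api2/json"   # placeholder host
HEADERS = {"Authorization": "PVEAPIToken=provisioner@pve!ci=<token-secret>"}

def create_sandbox(vmid: int, pubkey: str) -> None:
    """Create an unprivileged LXC container with an injected SSH key.

    A sketch mirroring what `pct create` does; a Grain would make an
    equivalent call, then persist vmid + key to its state before
    handing off to the client.
    """
    resp = requests.post(
        f"{PROXMOX}/nodes/pve/lxc",
        headers=HEADERS,
        data={
            "vmid": vmid,
            "ostemplate": "local:vztmpl/ubuntu-22.04-standard_amd64.tar.zst",
            "hostname": f"sandbox-{vmid}",
            "unprivileged": 1,
            "ssh-public-keys": pubkey,   # injected at first boot
        },
    )
    resp.raise_for_status()

create_sandbox(9001, "ssh-ed25519 AAAA... dev@example")
```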
We’ve been experimenting with an alternative to the “one VM per developer” model for SSH-based development environments.
The project is called Containarium: https://github.com/FootprintAI/Containarium
The idea is simple:
- One cloud VM
- Many unprivileged LXC system containers
- Each user gets their own isolated Linux environment via SSH (ProxyJump; a config sketch follows this list)
- Persistent storage survives VM restarts
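A sketch of the client-side ProxyJump wiring, assuming a single public jump VM in front of the containers (hostnames, usernames, and addresses here are hypothetical, not Containarium's actual conventions):

```python
def proxyjump_stanza(user: str, container_ip: str, jump_host: str) -> str:
    """Render a client-side ~/.ssh/config entry for one sandbox.

    The jump VM is the only public SSH endpoint; each container sits on
    a private address and is reached through it via ProxyJump.
    """
    return (
        f"Host {user}-sandbox\n"
        f"    HostName {container_ip}\n"      # container's private address
        f"    User {user}\n"
        f"    ProxyJump jump@{jump_host}\n"   # single public entry point
    )

print(proxyjump_stanza("alice", "10.0.3.17", "vm.example.com"))
# After adding the stanza, connecting is just: ssh alice-sandbox
```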
This is NOT Kubernetes, Docker app containers, or a web IDE. Each container behaves like a lightweight VM (full OS, users, SSH access).
Why we built it: We kept seeing teams pay for dozens of mostly-idle VMs just to give people a place to SSH into. Using LXC, we can host tens or hundreds of environments on a single VM and cut infra costs significantly.
What we’re looking for:
- Feedback from people who’ve run multi-tenant Linux systems at scale
- Security concerns we might be underestimating
- Where this approach breaks down in real-world usage
- Alternatives we should be considering (LXD, Proxmox, something else?)
Tradeoffs we’re aware of:
- Shared kernel (not VM-level isolation)
- Not suitable for untrusted workloads
- Linux-only
- Requires infra discipline (limits, monitoring, backups; see the sketch after this list)
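On the "infra discipline" point, per-container limits are the main guardrail against noisy neighbours on the shared VM. A minimal sketch using Incus's standard limits.* keys (the values and container name are illustrative):

```python
import subprocess

# Hard per-container caps so one tenant cannot starve the shared VM.
LIMITS = {
    "limits.cpu": "2",            # cap at 2 cores' worth of CPU time
    "limits.memory": "2GiB",      # hard memory ceiling
    "limits.processes": "2000",   # guard against fork bombs
}

def apply_limits(container: str) -> None:
    for key, value in LIMITS.items():
        subprocess.run(["incus", "config", "set", container, key, value],
                       check=True)

apply_limits("dev-alice")
```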
This is early-stage and open source. APIs and workflows will evolve.
We’re not trying to “replace Kubernetes” — just trying to do one thing well: cheap, fast, SSH-based dev environments.
Would love blunt feedback from folks who’ve been down this road before.