dank-py takes an existing agent project, inspects it, generates config, locks dependencies, validates everything in an isolated environment, and turns the agent into a Dockerized HTTP microservice.
The output is meant to be production-ready by default: a standardized runtime contract, input/output validation, health/status/metrics endpoints plus logs and traces, and support for running agents either as separate containers or bundled into a single multi-agent container.
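To make "runtime contract with input/output validation" concrete, here is a small sketch of the general shape such a contract might take. This is my own illustration, not dank-py's actual schema or endpoint names: it just shows a validated invoke-style call where malformed payloads are rejected before the agent runs.

```python
# Hypothetical illustration of a validated invoke contract.
# Field names ("input", "status", "output") are assumptions for this sketch,
# not dank-py's documented schema.
import json


def validate_request(raw: str) -> dict:
    """Parse the request body and reject payloads missing a string 'input'."""
    payload = json.loads(raw)
    if not isinstance(payload.get("input"), str):
        raise ValueError("field 'input' must be a string")
    return payload


def invoke(raw: str) -> dict:
    """Validate, then run the wrapped agent (echoed here for illustration)."""
    payload = validate_request(raw)
    # A real service would call the wrapped agent here; we uppercase the
    # input as a stand-in so the example is self-contained and runnable.
    return {"status": "ok", "output": payload["input"].upper()}
```

The point of validating at the boundary is that every agent, regardless of framework, presents the same request/response shape to callers.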
It’s framework-agnostic: you describe how the agent is invoked in `dank.config.json`, so it works with LangChain, LangGraph, CrewAI, PydanticAI, LlamaIndex, or custom agents that call LLMs directly. The CLI can usually bootstrap the config automatically.
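For a sense of the general shape, a minimal `dank.config.json` might point the tool at an entrypoint like this. The field names here are my guess at the rough structure, not the tool's documented schema:

```json
{
  "agent": {
    "entrypoint": "my_agent.main:run",
    "framework": "langchain"
  }
}
```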
If you try it, I’d be especially interested in whether it fits naturally into your existing project structure, and whether the runtime contract covers the functionality you’d want from agent microservices.