2 points by jmartenka 5 hours ago | 1 comment
  • jmartenka 5 hours ago
    Hey HN,

    I built LunarGate because I was tired of scattering API keys and routing logic across every app that talks to an LLM.

    It's a single self-hosted binary (Go) that sits between your apps and LLM providers. You get one OpenAI-compatible endpoint, and behind it: multi-provider routing, fallback chains, retries, caching, rate limiting, and complexity-aware autorouting that sends cheap prompts to cheap models automatically.
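    To make "OpenAI-compatible endpoint" concrete, here is a minimal sketch of what a client request to the gateway would look like, using only the Python standard library. It assumes the gateway exposes the standard /v1/chat/completions path on localhost:8080 (the path and model name are assumptions based on the OpenAI wire format, not taken from the docs).

```python
# Sketch: pointing an OpenAI-style request at a local gateway.
# The /v1/chat/completions path and the model name are assumptions
# based on the OpenAI wire format.
import json
import urllib.request

GATEWAY_URL = "http://localhost:8080/v1/chat/completions"

payload = {
    # Any model name your config routes; the gateway decides which
    # provider actually serves the request.
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}],
}

req = urllib.request.Request(
    GATEWAY_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send it -- left out here since it
# needs a running gateway instance.
```

    The point is that nothing app-side changes except the base URL: any OpenAI client library should work the same way.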

    The privacy angle: prompts and responses stay local by default. Observability is opt-in, EU-hosted, and I'm working on e2e encryption so even we can't read your data.

    - Stack: Go binary via Homebrew or Docker, YAML config, hot-reloadable. Works with OpenAI, Anthropic, DeepSeek, Ollama, Abacus, and more.
    - Quick start is literally: brew install, save a config.yaml, run it, point your OpenAI client at localhost:8080.
    - Gateway: https://github.com/lunargate-ai/gateway
    - Docs: https://docs.lunargate.ai/
    - Site: https://lunargate.ai/
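    As a rough illustration of the quick-start flow, a config.yaml might look something like this. The field names below are hypothetical, not copied from the actual schema; check https://docs.lunargate.ai/ for the real format.

```yaml
# Hypothetical config.yaml sketch -- key names are illustrative only,
# not taken from the real LunarGate schema.
providers:
  openai:
    api_key: ${OPENAI_API_KEY}
  ollama:
    base_url: http://localhost:11434
routing:
  # Try providers in order; fall through on errors or rate limits.
  fallback:
    - openai
    - ollama
```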

    Would love feedback on the routing model and what providers/features you'd want next.