3 points by headkit | 5 hours ago | 1 comment
  • headkit | 5 hours ago
    We built AKI.IO because we needed a way to run open-source models (like Minimax, GLM, Qwen3, Llama3, Flux, etc.) in production without managing our own GPU clusters, while keeping data within EU jurisdiction. It's a managed API designed as a drop-in replacement for the OpenAI/Anthropic API specs, so you can switch the base URL in your existing code and leave the rest unchanged. Under the hood, it routes requests through a job queue to distributed GPU nodes in certified European data centers (ISO 27001/TÜV). Key technical details:
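
    To make the "switch the base URL" claim concrete, here's a minimal stdlib sketch of an OpenAI-spec chat-completions request. The base URL, auth scheme, and model ID are assumptions for illustration, not confirmed details — check the AKI.IO docs for the real values.

    ```python
    import json
    import urllib.request

    BASE_URL = "https://api.aki.io/v1"  # hypothetical endpoint, assumed for this sketch
    API_KEY = "YOUR_API_KEY"            # placeholder

    def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
        """Build an OpenAI-spec /chat/completions request without sending it."""
        payload = {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }
        return urllib.request.Request(
            f"{BASE_URL}/chat/completions",
            data=json.dumps(payload).encode("utf-8"),
            headers={
                "Authorization": f"Bearer {API_KEY}",
                "Content-Type": "application/json",
            },
            method="POST",
        )

    req = build_chat_request("qwen3", "Hello!")
    # urllib.request.urlopen(req) would actually send it; omitted here since
    # it needs a real API key and network access.
    ```

    Pointing existing OpenAI-SDK code at a different provider usually amounts to changing only `BASE_URL` and the key.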

        No vendor lock-in: You can switch models via an API parameter without changing application logic.
        Data handling: Inputs/outputs are not used for training; everything stays in the EU.
        Stack: We rely on open-source components and standard web protocols.
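
    The no-lock-in point above boils down to the model being just one field in the request body. A minimal sketch (the model identifiers here are illustrative, not the provider's actual IDs):

    ```python
    def chat_payload(model: str, prompt: str) -> dict:
        """OpenAI-spec chat payload; swapping models changes only one field."""
        return {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }

    # Illustrative model names -- real IDs come from the provider's docs.
    a = chat_payload("qwen3", "Summarize this.")
    b = chat_payload("llama3", "Summarize this.")

    # The two requests differ only in the "model" field, so application
    # logic (prompt construction, response handling) stays identical.
    assert {k for k in a if a[k] != b[k]} == {"model"}
    ```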
    
    We're currently optimizing latency in the job-queue system and would appreciate feedback. We're also giving away free token credits!

    Link: https://www.aki.io (I'm one of the developers behind this. Happy to answer questions about the infra or the model selection.)