1 point by tsunamayo 9 hours ago | 2 comments
  • tsunamayo 7 hours ago
    Some questions I'm anticipating:

    *"How is this different from Open WebUI / AnythingLLM?"*

    Both are excellent tools, but they're manual model selectors — you pick one model and chat. Helix automates the routing: the cloud model never sees your raw data during Phase 2, and local models never need to handle planning/synthesis. The pipeline runs with a single button press.
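A minimal sketch of that routing, assuming a three-phase shape (plan, local execution, synthesis). All function and variable names here are illustrative, not Helix's actual API; the point is only that the raw data is handed to the local model alone:

```python
# Hypothetical sketch of the mixAI routing described above: the cloud
# model plans and synthesizes, the local model is the only one that
# ever receives the raw data. Names are illustrative, not Helix's API.

def run_pipeline(query, raw_data, cloud_chat, local_chat):
    plan = cloud_chat(f"Plan how to answer: {query}")         # Phase 1 (cloud, no data)
    findings = local_chat(f"{plan}\n\nDATA:\n{raw_data}")     # Phase 2 (local only)
    return cloud_chat(f"Answer {query!r} using: {findings}")  # Phase 3 (cloud, findings only)

# Stub models to demonstrate that raw_data never reaches the cloud side:
seen_by_cloud = []

def fake_cloud(prompt):
    seen_by_cloud.append(prompt)
    return "cloud-output"

def fake_local(prompt):
    return "local-findings"

result = run_pipeline("summarize Q3", "SECRET ledger rows", fake_cloud, fake_local)
print(all("SECRET" not in p for p in seen_by_cloud))  # the cloud saw no raw data
```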

    *"Why PyQt6 desktop instead of Electron or a pure web app?"*

    The desktop shell is intentional. The WebSocket server and React UI are embedded, so you get LAN access from phones/tablets on the same network without any Docker or separate server setup. The desktop app is the server. This lets it work offline (for local-only mode) while still being accessible network-wide.
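The "desktop app is the server" pattern boils down to binding an embedded server to all interfaces instead of loopback. A sketch of just that binding idea (Helix embeds a WebSocket server and React UI; this uses only the stdlib HTTP server to keep the example self-contained, and the port is arbitrary):

```python
# Sketch of the LAN-accessible embedded server pattern: bind to 0.0.0.0
# so phones/tablets on the same network can reach the desktop app.
# Stdlib HTTP server stands in for Helix's embedded WebSocket server.
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("0.0.0.0", 8765), SimpleHTTPRequestHandler)

# Serve on a background thread so it can run alongside the Qt event loop
# instead of blocking it.
threading.Thread(target=server.serve_forever, daemon=True).start()

# Other devices on the LAN can now load http://<desktop-ip>:8765
print("serving on port", server.server_address[1])
```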

    *"Do I need Ollama / a GPU?"*

    No. You can run it in cloudAI-only mode (direct chat with Claude/GPT/Gemini). Ollama and a GPU are only needed for the mixAI pipeline's Phase 2. A mid-range GPU (8-12GB VRAM) handles 7-14B models fine for most tasks.
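One way to picture the cloud-only fallback: probe the local Ollama server and only offer the mixAI pipeline when it answers. The endpoint below is Ollama's documented REST API (`/api/tags` on port 11434); the mode names are just illustrative labels:

```python
# Sketch of mode selection: use mixAI only if a local Ollama server is
# reachable, otherwise fall back to direct cloud chat. /api/tags is
# Ollama's documented model-listing endpoint; mode names are illustrative.
import urllib.request
import urllib.error

def pick_mode(ollama_url="http://127.0.0.1:11434"):
    try:
        urllib.request.urlopen(f"{ollama_url}/api/tags", timeout=1)
        return "mixAI"    # local models available for Phase 2
    except (urllib.error.URLError, OSError):
        return "cloudAI"  # no Ollama / no GPU: direct cloud chat only

print(pick_mode())
```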

    *"What's the catch?"*

    Windows-only for now (PyQt6 + some Windows-specific paths). Phase 2 quality depends on your local model selection — a 4B model won't match a 27B one. And you still pay for 2 cloud API calls per pipeline run, which adds up if you run hundreds of queries.
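Back-of-envelope math for that last point, with placeholder numbers (token counts and the per-million-token price below are assumptions, not Helix's measurements; check your provider's current rates):

```python
# Rough cost estimate for "2 cloud API calls per pipeline run".
# tokens_per_call and price_per_mtok are placeholder assumptions.
def pipeline_cost(runs, tokens_per_call=2000, price_per_mtok=3.00):
    calls = 2 * runs                       # Phase 1 plan + Phase 3 synthesis
    total_tokens = calls * tokens_per_call
    return total_tokens / 1_000_000 * price_per_mtok

# e.g. 300 runs -> 600 calls -> 1.2M tokens -> $3.60 at $3/Mtok
print(f"${pipeline_cost(300):.2f}")
```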

    Happy to answer other questions.
