1 point by mihailyonchev 3 days ago | 1 comment
  • mihailyonchev 3 days ago
    I travel a lot and got tired of ChatGPT being unusable on flights (no Wi‑Fi), so I built a browser-based AI chat that runs locally. It downloads a small open model once (cached in IndexedDB), then runs inference on-device via WebGPU in a Web Worker, so it works offline and nothing leaves your machine. Trade-offs: smaller models (not GPT‑4), the first load is a big download, and older hardware and mobile can struggle. Still, it works, and it fits the broader trend toward open-source AI and smaller, smarter models. I'd love feedback on the UX for model downloads, and on whether people think browser-local AI is a viable direction. Rough sketch of the flow below.
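    A minimal sketch of the "download once, run offline" flow described above, using only standard browser APIs (IndexedDB, fetch, Web Workers). This is a hypothetical illustration, not the app's actual code: the DB name, the weights URL, the inference-worker.ts file, and the worker message protocol are made-up placeholders, and the WebGPU inference engine itself (the hard part) is assumed to live inside the worker.

    ```ts
    // Assumed names: DB_NAME, STORE, MODEL_URL, inference-worker.ts, and the
    // {type, ...} message shapes are all illustrative, not the real app's code.
    const DB_NAME = "model-cache";
    const STORE = "weights";
    const MODEL_URL = "/models/small-model.bin"; // placeholder weights URL

    function openDb(): Promise<IDBDatabase> {
      return new Promise((resolve, reject) => {
        const req = indexedDB.open(DB_NAME, 1);
        req.onupgradeneeded = () => req.result.createObjectStore(STORE);
        req.onsuccess = () => resolve(req.result);
        req.onerror = () => reject(req.error);
      });
    }

    function getCachedWeights(db: IDBDatabase): Promise<ArrayBuffer | undefined> {
      return new Promise((resolve, reject) => {
        const req = db.transaction(STORE).objectStore(STORE).get("model");
        req.onsuccess = () => resolve(req.result as ArrayBuffer | undefined);
        req.onerror = () => reject(req.error);
      });
    }

    async function loadWeights(): Promise<ArrayBuffer> {
      const db = await openDb();
      const cached = await getCachedWeights(db);
      if (cached) return cached; // offline path: downloaded on a past visit

      // First visit: one big download, then persist for future offline use.
      const weights = await (await fetch(MODEL_URL)).arrayBuffer();
      await new Promise<void>((resolve, reject) => {
        const tx = db.transaction(STORE, "readwrite");
        tx.objectStore(STORE).put(weights, "model");
        tx.oncomplete = () => resolve();
        tx.onerror = () => reject(tx.error);
      });
      return weights;
    }

    async function main() {
      if (!("gpu" in navigator)) throw new Error("WebGPU not supported here");
      const weights = await loadWeights();
      // Inference stays off the main thread so the UI never janks;
      // the worker owns the WebGPU device and the model.
      const worker = new Worker(
        new URL("./inference-worker.ts", import.meta.url),
        { type: "module" },
      );
      worker.postMessage({ type: "init", weights }, [weights]); // transfer, don't copy
      worker.onmessage = (e) => console.log("token:", e.data);
      worker.postMessage({ type: "prompt", text: "Hello from seat 23A" });
    }

    main().catch(console.error);
    ```

    Passing [weights] as the transfer list in postMessage hands the buffer to the worker instead of structured-cloning it, which matters when the weights run to hundreds of megabytes.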
    • ar_turnbull 3 days ago
      Just curious: would it make more sense as an app (even if it's just a wrapper)? Is that on the roadmap? I know you can technically download large files in the browser, but do normal people understand that?

      On iOS, with its memory management, I find that browser pages always feel... brittle?

      • mihailyonchev 3 days ago
        Hi, great question. There are many reasons why this fits better as an app, and it's definitely something I'll consider. The initial scope is to be as accessible as possible, meaning no installation required whatsoever: just open a link and run it. I'm building this in public, so once I publish the content around it on my channels (YouTube and LinkedIn), I'll seriously evaluate this option. Would you like me to keep you in the loop?