26 points by kstonekuan 4 days ago | 5 comments
  • james_marks 2 days ago
    Looks interesting!

    How are you feeling about Tauri as you take this to a larger audience?

    I’ve dabbled with it and thought it was compelling, but I haven’t shipped a release with it.

    • kstonekuan 2 days ago
      I started off trying Electron, but I am liking Tauri more. It seems Rust has better support for system-level integration, like controlling audio and keys. Are there other alternatives you are exploring?
      • james_marks 2 days ago
        Yeah, Tauri struck me as much more powerful and lighter than Electron.

        My hobby project with Tauri died after I managed to set an OS-wide shortcut, which is an amazing capability.

        Except the shortcut broke many other apps, and I just never got back to it.

  • popalchemist 3 days ago
    The critiques about local inference are valid if you're billing this as an open-source alternative to existing cloud-based solutions.
    • kstonekuan 3 days ago
      Thanks for the feedback; I probably should have been clearer in my original post and in the README. Local inference is already supported via Pipecat: you can use Ollama or any custom OpenAI-compatible endpoint. Local STT is also supported via Whisper, which Pipecat will download and manage for you.
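
      For anyone who wants the fully local setup, here is a minimal sketch of the service construction in Pipecat. Treat it as a sketch under assumptions: module paths and the `Model` enum vary across Pipecat versions, and the model names are only examples.

      ```python
      import os

      # Module paths assume a recent Pipecat release.
      from pipecat.services.ollama.llm import OLLamaLLMService
      from pipecat.services.whisper.stt import WhisperSTTService, Model

      # Local STT: Pipecat downloads and caches the Whisper weights on first use.
      stt = WhisperSTTService(model=Model.DISTIL_MEDIUM_EN)

      # Local LLM: Ollama exposes an OpenAI-compatible API on port 11434.
      llm = OLLamaLLMService(
          model="llama3.1",
          base_url=os.getenv("OLLAMA_BASE_URL", "http://localhost:11434/v1"),
      )

      # Both services then slot into the existing pipeline in place of
      # cloud-hosted STT and LLM services.
      ```
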
  • bryanwhl 3 days ago
    Does this work on macOS?
    • kstonekuan 3 days ago
      Yup, the desktop app is built with Tauri, which is cross-platform, and I have personally tested it on macOS and Windows.
  • lrvick 4 days ago
    Is there a way to do this with a local LLM, without any internet access needed?
    • kstonekuan 3 days ago
      Yes, Pipecat already supports that natively, so this can be done easily with Ollama. I have also exposed it through the `OLLAMA_BASE_URL` environment variable.

      About Ollama in Pipecat: https://docs.pipecat.ai/server/services/llm/ollama

      Also, any provider they support can be onboarded in a few lines of code, as in the sketch below.
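
      For example, pointing Pipecat's OpenAI service at any OpenAI-compatible server looks roughly like this (the endpoint URL, key, and model name are placeholders, not part of the project):

      ```python
      from pipecat.services.openai.llm import OpenAILLMService

      # Works with any OpenAI-compatible server: vLLM, llama.cpp's server,
      # LM Studio, and so on. URL, key, and model below are placeholders.
      llm = OpenAILLMService(
          base_url="http://localhost:8000/v1",
          api_key="unused-by-most-local-servers",
          model="my-local-model",
      )
      ```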

  • grayhatter 4 days ago
    I don't think I'd call anything that only works with a proprietary, Internet-hosted LLM (one you need an account to use) open source.

    This is less voice dictation software and much more a shim to [popular LLM provider].

    • kstonekuan 3 days ago
      Hey, sorry if the examples given were not robust. Because this is built on Pipecat, you can very easily swap to a local LLM if you prefer, and the project is already set up to allow that via environment variables (see the sketch at the end of this comment).

      The integration to set up the WebRTC connection, get voice dictation working seamlessly from anywhere, and input text into any app took a long time to build out, and that's why I want to share this as open source.
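
      For the curious, the swap amounts to something like this sketch. `OLLAMA_BASE_URL` is the variable mentioned above; the fallback provider, API key variable, and model names are placeholders rather than the project's exact wiring.

      ```python
      import os

      from pipecat.services.ollama.llm import OLLamaLLMService
      from pipecat.services.openai.llm import OpenAILLMService

      def make_llm():
          """Choose a local or cloud LLM from the environment, no code changes."""
          ollama_url = os.getenv("OLLAMA_BASE_URL")
          if ollama_url:
              # Fully local: no account or internet access required.
              return OLLamaLLMService(model="llama3.1", base_url=ollama_url)
          # Otherwise fall back to a cloud provider keyed by OPENAI_API_KEY.
          return OpenAILLMService(api_key=os.environ["OPENAI_API_KEY"], model="gpt-4o")
      ```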