182 points by Curiositry 7 hours ago | 10 comments
  • HorizonXP 4 hours ago
    If folks are interested, @antirez has published an open C implementation of Voxtral Mini 4B here: https://github.com/antirez/voxtral.c

    I have my own fork here: https://github.com/HorizonXP/voxtral.c where I’m working on a CUDA implementation, plus some other niceties. It’s working quite well so far, but I haven’t got it to match Mistral AI’s API endpoint speed just yet.

    • Ygg2 2 hours ago
      There is also another Mistral implementation: https://github.com/EricLBuehler/mistral.rs Not sure what the difference is, but it seems to just be better received overall.
      • NitpickLawyer 2 hours ago
        mistral.rs is more like llama.cpp, it's a full inference library written in rust that supports a ton of models and many hardware architectures, not just mistral models.
  • zaptheimpaler 41 minutes ago
    I don't know anything about these models, but I've been trying Nvidia's Parakeet and it works great. For a model like this that's 9GB for the full model, do you have to keep it loaded into GPU memory at all times for it to really work realtime? Or what's the delay like to load all the weights each time you want to use it?
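On the load-time part of the question, a rough answer: cold-loading weights is bandwidth-bound, so dividing model size by link speed gives a ballpark. A minimal sketch, using assumed typical bandwidth figures (not measurements of any particular machine):

```python
# Back-of-envelope cold-load estimate for a ~9 GB model.
# All bandwidth numbers below are assumed typical values, not measured.
model_gb = 9.0

# Rough sequential-read / transfer bandwidths in GB/s:
bandwidth_gbps = {
    "SATA SSD": 0.55,
    "NVMe SSD (PCIe 4.0)": 7.0,
    "host RAM -> GPU over PCIe 4.0 x16": 25.0,
}

for source, bw in bandwidth_gbps.items():
    print(f"{source}: ~{model_gb / bw:.1f} s")
```

So a reload from NVMe is on the order of a second or two, while keeping the weights resident in GPU (or page-cached in RAM) avoids even that; the first load from a slow disk is the painful case.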
  • simonw 2 hours ago
    I tried the demo and it looks like you have to click Mic, then record your audio, then click "Stop and transcribe" in order to see the result.

    Is it possible to rig this up so it really is realtime, displaying the transcription within a second or two of the user saying something out loud?

    The Hugging Face server-side demo at https://huggingface.co/spaces/mistralai/Voxtral-Mini-Realtim... manages that, but it's using a much larger (~8.5GB) server-side model running on GPUs.

    • refulgentis 2 hours ago
      It's not fast enough to be realtime, though with a more advanced UI and a ring buffer you could have it work as you describe. (e.g. I do this with Whisper in Flutter, and also run GGUF inference in llama.cpp via Dart)

      This isn't even close to realtime on M4 Max. Whisper's ~realtime on any device post-2022 with an ONNX implementation. The extra inference cost isn't worth the WER decrease on consumer hardware, or at least, wouldn't be worth the time implementing.
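The ring-buffer approach mentioned above can be sketched roughly as follows: keep a sliding window of the most recent audio and re-transcribe it on a timer, displaying the result as it stabilizes. This is a minimal illustration, not Voxtral's or Whisper's actual API; `transcribe` is a hypothetical stand-in for whatever model you run:

```python
# Sliding-window ("ring buffer") transcription sketch.
# New audio chunks push out the oldest samples automatically.
from collections import deque

SAMPLE_RATE = 16_000      # 16 kHz mono PCM, common for ASR models
WINDOW_SECONDS = 10       # how much recent audio the model sees at once

ring = deque(maxlen=SAMPLE_RATE * WINDOW_SECONDS)  # drops oldest samples

def on_audio_chunk(samples):
    """Called by the audio-capture callback with each new chunk."""
    ring.extend(samples)

def transcribe(window):
    # Hypothetical stand-in for running the ASR model over the window.
    return f"<transcript of {len(window)} samples>"

def tick():
    """Call every second or so from a timer to refresh the display."""
    return transcribe(list(ring))

# Feed 15 s of silence; only the most recent 10 s are retained.
on_audio_chunk([0.0] * (SAMPLE_RATE * 15))
print(len(ring) // SAMPLE_RATE)
```

The catch, as noted above, is that each `tick` must finish faster than the interval between ticks, which is where a slower model stops being viable for realtime display.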

  • mentalgear 2 hours ago
    Kudos, this is where it's at: open models running on-premise. Preferred by users and businesses. Glad Mistral's got that figured out.
  • Jayakumark 5 hours ago
    Awesome work. Would be good to have it work with handy.computer. Also, are there plans to support streaming?
    • JLO64 4 hours ago
      Just tried out Handy. This is a much better and more lightweight UI than the previous solutions I've tried! I know it wasn't your intention, but thank you for the recommendation!

      That said, I now agree with your original statement and really want Voxtral support...

      • ChadNauseam 4 hours ago
        Handy is awesome! and easy to fork. I highly recommend building it from source and submitting PRs if there are any features you want. The author is highly responsive and open to vibe-coded PRs as long as you do a good job. (Obviously you should read the code and stand by it before you submit a PR, but I just mean he doesn't flatly reject all AI code like some other projects do.) I submitted a PR recently to add an onboarding flow to Macs that just got merged, so now I'm hooked
  • Retr0id 5 hours ago
    hm, seems broken on my machine (Firefox, Asahi Linux, M1 Pro). I said hello into the mic, and it churned for a minute or so before giving me:

    panorama panorama panorama panorama panorama panorama panorama panorama� molest rist moundothe exh� Invothe molest Yan artist��������� Yan Yan Yan Yan Yanothe Yan Yan Yan Yan Yan Yan Yan

  • jszymborski 5 hours ago
    Man, I'd love to fine-tune this, but alas the Hugging Face implementation isn't out as far as I can tell.
  • refulgentis 4 hours ago
    Notably, this isn't even close to realtime on an M4 Max.
  • Nathanba 4 hours ago
    I just tried it. I said "what's up buddy, hey hey stop" and it transcribed this for me: "وطبعا هاي هاي هاي ستوب". No, I'm not in any Arabic-speaking or Middle Eastern country. The second test was better; it detected English.
    • 7moose 4 hours ago
      FWIW, that is a right-ish transliteration into Arabic. It just picked the wrong language to transcribe to lol
  • sergiotapia 6 hours ago
    >init failed: Worker error: Uncaught RuntimeError: unreachable

    Anything I can do to fix/try it on Brave?