186 points by darkolorin 7 months ago | 21 comments
  • TheMagicHorsey 7 months ago
    Amazing!

    How was your experience using Rust on this project? I'm considering a project in an adjacent space and I'm trying to decide between Rust, C, and Zig. Rust seems a bit burdensome compared to C and Zig; its complexity reminds me of C++ (although it's not as bad). I find it difficult to walk through and understand a complicated Rust repository. I don't have that problem with C and Zig for the most part.

    But I'm wondering if I just need to invest more time in Rust. How was your learning curve with the language?

    • adastra22 7 months ago
      You are confusing familiarity with intrinsic complexity. I had 20 years of experience with C/C++ before switching to Rust a few years ago. After the initial hurdle, it is much easier and simpler to follow.
      • TheMagicHorsey 7 months ago
        Are you generally able to quickly understand what is going on in somebody else's codebase written in Rust? I find it quite difficult to understand other people's Rust code. Is this just a familiarity thing? I have not written anything particularly huge or complex in Rust, but I have written a few CLI utilities. With an equivalent level of Go exposure, I find it much easier to understand code written in Go, compared to code written in Rust.

        I'm quite proficient in C/C++ (started coding in C/C++ in 1997) but I still have a much harder time understanding a new C++ project compared to a C project.

  • giancarlostoro 7 months ago
    Hoping the author can answer; I'm still learning how this all works. My understanding is that inference is "using the model," so to speak. How is this faster than established inference engines, specifically on Mac? Are models generic enough that an inference engine focused on, say, AMD or even Intel GPUs would achieve reasonable performance? I always assumed that because Nvidia is king of AI you had to suck it up, or is it just that most inference engines in use are married to Nvidia?

    I would love to understand how universal these models can become.

    • darkolorin 7 months ago
      Basically “faster” means better performance, e.g. tokens/s, without losing quality (benchmark scores for the models). So when we say faster, we provide more tokens per second than llama.cpp. That means we effectively utilize the hardware APIs available (for example, we wrote our own kernels) to perform better.
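
      As a rough illustration of the metric (a hypothetical decode loop and timing harness, not our actual API):

          use std::time::Instant;

          // Times how long it takes to produce `n` tokens and reports
          // throughput; this is the tokens/s number being compared.
          fn tokens_per_second(mut next_token: impl FnMut() -> u32, n: usize) -> f64 {
              let start = Instant::now();
              for _ in 0..n {
                  let _token = next_token(); // one forward pass per generated token
              }
              n as f64 / start.elapsed().as_secs_f64()
          }

          fn main() {
              let fake_decode = || {
                  std::thread::sleep(std::time::Duration::from_millis(4)); // pretend a decode step takes ~4 ms
                  0u32
              };
              println!("{:.1} tok/s", tokens_per_second(fake_decode, 256));
          }
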
  • zackangelo 7 months ago
    We also wrote our inference engine in Rust for mixlayer; happy to answer any questions from those trying to do the same.

    Looks like this uses ndarray and mpsgraph (which I did not know about!); we opted to use candle instead.
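
    For anyone weighing backends, here's a minimal candle sketch (just plain CPU tensor ops with the candle-core crate, nothing mixlayer-specific):

        use candle_core::{Device, Result, Tensor};

        fn main() -> Result<()> {
            // CPU for portability; with the metal feature you could try Device::new_metal(0)? on Apple Silicon
            let device = Device::Cpu;
            let a = Tensor::randn(0f32, 1.0, (4, 64), &device)?;
            let b = Tensor::randn(0f32, 1.0, (64, 8), &device)?;
            let c = a.matmul(&b)?; // (4, 64) x (64, 8) -> (4, 8), the op inference spends most of its time in
            println!("{:?}", c.shape());
            Ok(())
        }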

  • nodesocket 7 months ago
    I just spun up an AWS EC2 g6.xlarge instance to do some LLM work. The GPU is an NVIDIA L4 24GB and it costs $0.8048 per hour. Starting to think about switching to an Apple mac2-m2.metal instance at $0.878 per hour. The big question is that the Mac instance only has 24GB of unified memory.
    • khurs 7 months ago
      Unified memory doesn't compare to an Nvidia GPU; the latter is much better.

      Just depends on what performance level you need.

  • homarp 7 months ago
    Can you explain the types of quantization you support?

    Would https://docs.unsloth.ai/basics/kimi-k2-how-to-run-locally be faster with mirai?

  • khurs 7 months ago
    Have you added it to Homebrew and other package managers yet?

    Also, any app deployed to prod but developed on a Mac needs to be consistent, i.e. work on Linux / in a container.

  • floam 7 months ago
    How does this compare to https://github.com/Anemll/Anemll?
  • smpanaro 7 months ago
    In practice, how often do the models use the ANE? It sounds like you are optimizing for speed, which in my experience always favors the GPU.
    • AlekseiSavin 7 months ago
      You're right: modern edge devices are powerful enough to run small models, so the real bottleneck for a forward pass is usually memory bandwidth, which defines the theoretical upper limit on inference speed. Right now we've figured out how to run computations in a granular way on specific processing units, but we expect the real benefits to come later, when we add support for VLMs and advanced speculative decoding, where you process more than one token at a time.
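
      As a back-of-the-envelope illustration of that bandwidth ceiling (illustrative numbers, not our benchmarks): at batch size 1, every generated token has to stream all of the weights through memory once, so a rough upper bound is bandwidth divided by model size:

          // Rough upper bound on decode throughput implied by memory bandwidth.
          fn max_tokens_per_second(bandwidth_gb_s: f64, model_size_gb: f64) -> f64 {
              bandwidth_gb_s / model_size_gb
          }

          fn main() {
              // e.g. ~100 GB/s of unified memory bandwidth and an 8B-parameter
              // model quantized to ~4 bits per weight (~4 GB of weights)
              println!("~{:.0} tok/s ceiling", max_tokens_per_second(100.0, 4.0));
          }
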
      • J_Shelby_J 7 months ago
        VLMs = very large models?
        • mmorse1217 7 months ago
          Probably vision language models.
  • ewuhic 7 months ago
    >faster than llama cpp in all of the use cases

    What's your deliberate, well-thought-out roadmap for achieving adoption similar to llama.cpp's?

    • pants2 7 months ago
      Probably getting acquired by Apple :)
    • khurs 7 months ago
      Ollama is the leader, isn't it?

      Brew stats (downloads, last 30 days):

      Ollama - 28,232
      Llama.cpp - 7,826

      • DiabloD3 7 months ago
        Ollama isn't an inference engine, it's a GUI slapped onto a perpetually out-of-date vendored copy of Llama.cpp underneath.

        So, if you're trying to actually count Llama.cpp downloads, you'd combine those two. Also, I imagine most users on macOS aren't using Homebrew; they're getting it directly from the GitHub releases, so you'd also have to count those.

        • imtringued 7 months ago
          Actually, Ollama has stopped using llama.cpp and is using ggml directly nowadays.
  • greggh 7 months ago
    "trymirai": every time I hear the word Mirai I think of the large IoT DDoS botnet. Maybe it's just me, though.
    • fnord77 7 months ago
      I think of the goofy Toyota fuel cell car. I think a grand total of about 6 have been sold (leased) in California.
  • zdw 7 months ago
    How does this bench compare to MLX?
    • jasonjmcghee 7 months ago
      I use MLX in LM Studio and it doesn't have whatever issues llama.cpp is showing here.

      Qwen3-0.6B at 5 t/s doesn't make any sense. Something is clearly wrong for that specific model.

  • rnxrx 7 months ago
    I'm curious why the performance gains mentioned were so substantial for Qwen vs. Llama.
    • AlekseiSavin 7 months ago
      It looks like llama.cpp has some performance issues with bf16.
  • sharifulin 7 months ago
    Wow! Sounds super interesting.
  • skybrian 7 months ago
    What are the units on the benchmark results? I’m guessing higher is better?
  • dcreater 7 months ago
    Somewhat faster on small models. Requires new format.

    I'm not sure what the goal is for this project. I'm not seeing how it offers enough benefit to get adopted by the community.

    • worldsavior 7 months ago
      It's utilizing the Apple ANE and probably other optimizations provided by Apple's frameworks. Not sure if llama.cpp uses them, but if it doesn't, the benchmark on GitHub says it all.
    • koakuma-chan 7 months ago
      Written in Rust is a big one for me.
  • mintflow 7 months ago
    Just curious, will it be supported on iOS? It would be great to build a local LLM app with this project.
  • slavasmirnov 7 months ago
    That's exactly what we are looking for, so as not to waste money on APIs. I wonder how significant the trade-offs are.
  • iglushenkov 7 months ago
    cooollll
  • ednevsky 7 months ago
    nice
  • cwlcwlcwlingg 7 months ago
    Wondering why they used Rust rather than C++.
    • adastra22 7 months ago
      Why use C++?
      • khurs 7 months ago
        So C++ users don't need to learn something new.
    • khurs 7 months ago
      The recommendation from the security agencies is to prefer Rust over C++, as there is less risk of exploits.

      I checked: llama.cpp uses C++ (obviously) and Ollama uses Go.

    • bee_rider 7 months ago
      I wonder why they didn’t use Fortran.
    • outworlder 7 months ago
      Why use C++ for greenfield projects?
    • giancarlostoro 7 months ago
      ...or D? Or Go? Or Java? C#? Zig? They chose what they were most comfortable with. Rust is fine; it's clearly not for everyone, but those who use it produce high-quality software. I would argue the same for Go, without all the unnecessary mental overhead of C or C++.