How was your experience using Rust on this project? I'm considering a project in an adjacent space and I'm trying to decide between Rust, C, and Zig. Rust seems a bit burdensome with its complexity compared to C and Zig. Reminds me of C++ in its complexity (although not as bad). I find it difficult to walk through and understand a complicated Rust repository. I don't have that problem with C and Zig for the most part.
But I'm wondering if I just need to invest more time in Rust. How was your learning curve with the language?
I'm quite proficient in C/C++ (started coding in C/C++ in 1997) but I still have a much harder time understanding a new C++ project compared to a C project.
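To make the "hard to walk through" point concrete, here's a toy example (my own, not from any project mentioned here) of the kind of signature that makes skimming a Rust repo slower than skimming C: generics, trait bounds, and lifetimes all pile up before you even get to the body.

```rust
// Toy example: find the item maximizing some key function.
// The signature carries a lifetime ('a), a generic type (T),
// and a closure trait bound (F) -- all of which a reader must
// parse before understanding what the function does.
fn max_by_key_ref<'a, T, F>(items: &'a [T], key: F) -> Option<&'a T>
where
    F: Fn(&T) -> usize,
{
    items.iter().max_by_key(|item| key(*item))
}

fn main() {
    let words = ["rust", "c", "zig"];
    let longest = max_by_key_ref(&words, |w| w.len());
    assert_eq!(longest, Some(&"rust"));
    println!("{:?}", longest); // the equivalent C would be a plain loop with a comparator
}
```

The C equivalent is just a loop over a pointer and a length, which is part of why a new C codebase tends to be easier to skim even when the Rust version is safer.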
I would love to understand how universal these models can become.
Looks like this uses ndarray and mpsgraph (which I did not know about!), we opted to use candle instead.
Just depends on what performance level you need.
would https://docs.unsloth.ai/basics/kimi-k2-how-to-run-locally be faster with mirai?
Also, any app deployed to PROD but developed on a Mac needs to behave consistently, i.e. work on Linux / in a container.
https://github.com/trymirai/uzu-swift?tab=readme-ov-file#qui...
What's your deliberate, well-thought-out roadmap for achieving adoption similar to llama.cpp's?
Brew stats (downloads last 30 days)
Ollama - 28,232
llama.cpp - 7,826
So, if you're trying to actually count llama.cpp downloads, you'd combine those two. Also, I imagine most users on macOS aren't using Homebrew, they're getting it directly from the GH releases, so you'd also have to count those.
Qwen3-0.6B at 5 t/s doesn't make any sense. Something is clearly wrong for that specific model.
Not sure what the goal is for this project? Not seeing how it offers enough benefit to get adopted by the community.
Checked, and llama.cpp uses C++ (obviously) while Ollama uses Go.