9 points by UmYeahNo 4 hours ago | 4 comments
  • StevenNunez 4 hours ago
    I do! I have an M3 Ultra with 512GB. A couple of opencode sessions running work well. Currently running GLM 4.7 but was on Kimi K2.5. Both great. Excited for more efficiencies to make their way to LLMs in general.
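    For anyone curious how a coding agent talks to a locally hosted model, here is a minimal Python sketch, assuming an OpenAI-compatible local server (llama.cpp's llama-server, LM Studio, Ollama and similar tools all expose one); the URL, port, and model name are placeholders for illustration, not opencode's actual configuration:

      # Minimal sketch: query a locally served model through an
      # OpenAI-compatible endpoint. URL, port, and model name are assumptions.
      from openai import OpenAI

      client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
      resp = client.chat.completions.create(
          model="glm-4",  # whatever name the local server exposes (assumed)
          messages=[{"role": "user", "content": "Summarize this diff for me."}],
      )
      print(resp.choices[0].message.content)
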
  • satvikpendem 4 hours ago
    There are some people on r/LocalLlama using it [0]. Seems like the consensus is that while it does have more unified RAM for running models, up to half a terabyte, token generation speed can be slow enough that it might just be better to get an Nvidia or AMD machine (a rough bandwidth comparison is sketched after the link).

    [0] https://old.reddit.com/r/LocalLLaMA/search?q=mac+studio&rest...
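    A back-of-the-envelope sketch of that trade-off: single-stream token generation is roughly memory-bandwidth-bound, so decode speed is about bandwidth divided by the bytes read per token. All numbers below are illustrative assumptions (approximate M3 Ultra and discrete-GPU bandwidths, an arbitrary quantized model size), not benchmarks:

      # Rough sketch: per-token decode speed for a single stream is bounded by
      # memory_bandwidth / bytes_read_per_token, since each generated token has
      # to stream the (active) model weights from memory.
      # All numbers are illustrative assumptions, not measurements.

      def est_tokens_per_sec(model_gb: float, bandwidth_gb_s: float) -> float:
          """Upper-bound estimate of single-stream decode speed."""
          return bandwidth_gb_s / model_gb

      model_gb = 200.0          # e.g. a large model quantized to ~4 bits (assumed size)
      m3_ultra_bw = 800.0       # GB/s, approximate unified-memory bandwidth
      discrete_gpu_bw = 1800.0  # GB/s, approximate high-end discrete GPU bandwidth

      print(est_tokens_per_sec(model_gb, m3_ultra_bw))      # ~4 tok/s
      print(est_tokens_per_sec(model_gb, discrete_gpu_bw))  # ~9 tok/s, if the model even fit in VRAM
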

  • giancarlostoro 3 hours ago
    Not a Mac Studio, but I use a basic MacBook Pro laptop with 24 GB of RAM (16 usable as VRAM) and I can run a number of models on it at decent speed. My main bottleneck is context window size, but if I am asking single-purpose questions I am fine.
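    A small sketch of why the context window becomes the bottleneck at that memory size: the KV cache grows linearly with context length and competes with the model weights for the ~16 GB usable as VRAM. The layer, head, and dtype numbers below are illustrative assumptions for a mid-size model, not any particular one:

      # Rough KV-cache size: 2 (K and V) * layers * kv_heads * head_dim
      # * bytes_per_element * context_length. Numbers are illustrative assumptions.

      def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                      ctx_len: int, bytes_per_elem: int = 2) -> float:
          return 2 * layers * kv_heads * head_dim * ctx_len * bytes_per_elem / 1e9

      # e.g. a mid-size model with grouped-query attention (assumed shape)
      print(kv_cache_gb(layers=40, kv_heads=8, head_dim=128, ctx_len=32_768))   # ~5.4 GB
      print(kv_cache_gb(layers=40, kv_heads=8, head_dim=128, ctx_len=131_072))  # ~21 GB
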
  • mannyv 3 hours ago
    Mine is an M1 Ultra with 128 GB of RAM. It's fast enough for me.