There are some people on r/LocalLLaMA using it [0]. The consensus seems to be that while it does offer much more unified RAM for running models (up to half a terabyte), token generation can be slow enough that an Nvidia or AMD machine may still be the better buy.
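For intuition on why big RAM doesn't fix this: single-stream decoding is roughly memory-bandwidth-bound, since a dense model reads approximately its whole weight set per token. A back-of-the-envelope sketch, with assumed (not measured) numbers: ~800 GB/s is the usual figure cited for Ultra-class unified memory, and the model sizes are illustrative.

    # Rough upper bound: tokens/sec ~= memory bandwidth / bytes read per token.
    # For a dense model, bytes per token is roughly the model's size in memory.
    def tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
        """Bandwidth-bound estimate for single-stream decoding of a dense model."""
        return bandwidth_gb_s / model_size_gb

    # Assumed figures, not benchmarks:
    print(tokens_per_sec(800, 200))  # ~4 tok/s for a 200 GB dense model
    print(tokens_per_sec(800, 40))   # ~20 tok/s for a 40 GB quantized model

So the huge RAM pool lets you load models that won't fit on a consumer GPU at all, but the bigger the model, the slower each token comes out, which matches what the Reddit threads report.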
[0] https://old.reddit.com/r/LocalLLaMA/search?q=mac+studio&rest...