Seconding the "Gee, must be nice to be able to set money on fire" sentiment. Personally, I haven't quite converged on a config where the tooling successfully talks to my locally hosted models and actually runs, albeit slowly. Yet. I'm damn well not using other people's money to subsidize some cloud provider's build-out. So no, I suppose I don't have your problem. I also don't know if I'll ultimately stick with the tooling. It's so different from how I'm used to working that I'm not sure I'll ever use most LLMs as anything but boilerplate generators, summarizers, or delvers into the latent space of humanity that bring back breadcrumbs and validate understanding, analyze, cross-reference, etc. For everything else, there's grep.
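For anyone else fighting the same local-model config battle: most of these tools just speak the OpenAI chat-completions wire format, and both llama.cpp's llama-server and Ollama expose a compatible endpoint. A minimal sketch of the request body involved, assuming an illustrative local host/port and model name (not any particular tool's actual config):

```python
import json

# Sketch of the JSON most coding tools POST to a local model server.
# Assumes an OpenAI-compatible endpoint; the URL and model name below
# are placeholders -- substitute whatever your local server registers.
BASE_URL = "http://localhost:8080/v1/chat/completions"

payload = {
    "model": "local-model",  # name your local server advertises
    "messages": [
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "Summarize this diff."},
    ],
    "stream": False,
}

body = json.dumps(payload)
print(body)
```

If the tool lets you override the API base URL (many honor an environment variable for it), pointing that at the local server is usually the whole trick; the slow part is the model, not the plumbing.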