3 points by curiousaboutml 14 hours ago | 8 comments
  • al_borland 14 hours ago
    My wildly uneducated guess is that they are getting to the point where they need to figure out how to profit off all this investment, and releasing self-hosted open-source models isn’t going to help them do that.
    • curiousaboutml 14 hours ago
      Possibly, but it's not just the release of new models. It seems the community itself has lost interest in self-hosted models.
  • bityard 14 hours ago
    HN only covers a very small slice of interesting things that happen in tech every day. If it's your only source of tech news and information, you are missing out on a LOT.

    There are plenty of self-hosted models being released all the time; they just don't make it to HN. For that, you need to find a community that is passionate about testing and tinkering with self-hosted models. A very popular one is "/r/localllama" on Reddit, but there are a few others scattered around.

    • doublerabbit 2 hours ago
      Could you recommend other sites? I use HN exclusively, but would be keen on decent tech news sites without having to sift through the sludge of Google.

      The Register, Slashdot, and Hackaday I already know of.

  • gnosis67 14 hours ago
    Ollama has changed. Early versions were raw, then they were optimized (I'm on a laptop with 64GB RAM), and then they fell to shit. Optimized for someone else's home rig, I suppose.

    And my old favorite models broke, so I have to pin different versions. nous-hermes2-mixtral, I miss your sage banter.

    Now everything runs with excessive lag.

  • nacozarina 14 hours ago
    Investors need everyone to avoid self-hosted models and pay premium subscriptions for large centralized models, else they will never earn the profits they want. Self-hosted models spoil their revenue forecasts.
  • softwaredoug 12 hours ago
    One thing that happened is that the providers got better at hosting smaller, cheaper models. So you could self-host, or just get your work done with GPT-5 nano.
  • electroglyph 14 hours ago
    There are still tons of models being released. Even some non-Qwen ones!
  • jaggs 9 hours ago
    There are a lot of local models being released every week. You really need to log into /r/localllama to stay up to date.
  • potsandpans 10 hours ago
    They're still going. I just bought a 5090 for myself this Christmas to do more interesting things.

    I mostly use them for game assets.

    TRELLIS.2 is very cool. I've managed to put together an SDXL -> TRELLIS -> UniRig pipeline to generate 3D characters with Mixamo skeletons that's working pretty well.
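    The three-stage flow described here (SDXL text-to-image, TRELLIS image-to-3D, UniRig auto-rigging) is essentially file plumbing: each stage's output becomes the next stage's input. The sketch below is purely illustrative; the stage functions are placeholders that write stub files to show the hand-off, not the real SDXL/TRELLIS/UniRig APIs (each of those is its own tool with its own invocation).

```python
from pathlib import Path


def run_sdxl(prompt: str, out_dir: Path) -> Path:
    # Placeholder: real SDXL inference would write an actual PNG here.
    img = out_dir / "character.png"
    img.write_text(f"image for: {prompt}")  # stand-in for image bytes
    return img


def run_trellis(image: Path, out_dir: Path) -> Path:
    # Placeholder: TRELLIS would lift the image to a textured 3D mesh.
    mesh = out_dir / (image.stem + ".glb")
    mesh.write_text(f"mesh from {image.name}")
    return mesh


def run_unirig(mesh: Path, out_dir: Path) -> Path:
    # Placeholder: UniRig would attach a Mixamo-style humanoid skeleton.
    rigged = out_dir / (mesh.stem + "_rigged.fbx")
    rigged.write_text(f"rigged {mesh.name}")
    return rigged


def pipeline(prompt: str, work_dir: Path) -> Path:
    # Chain the stages; each output file feeds the next tool.
    work_dir.mkdir(parents=True, exist_ok=True)
    img = run_sdxl(prompt, work_dir)
    mesh = run_trellis(img, work_dir)
    return run_unirig(mesh, work_dir)
```

    In practice each placeholder would shell out to (or import) the corresponding project, but the overall orchestration stays this simple.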

    On the LLM front, DeepSeek and Qwen are still cranking away. Qwen3 A22B Instruct, IMHO, does a better job than Gemini in some cases with OCR and translation of handwritten documents.

    The problem with these frontier open-weight models is that running them locally is not exactly tenable. You either have to get a cloud GPU instance or go through a provider.

    - https://github.com/microsoft/TRELLIS.2
    - https://github.com/VAST-AI-Research/UniRig