63 points by Xiol 5 hours ago | 6 comments
  • wasabinator 5 hours ago
    This should be a warning to those who feel that it's ok to offload your creativity to a subscription service. You always need a local model in some form.
    • avaer 3 hours ago
      You could judge the costs of the AI products you're using by the standard API pricing, not promotional subscription offers.
      • kadoban 2 minutes ago
        For me, it's not even cost necessarily. If they decide to change the product they offer, the old one is gone. I refuse to use anything for personal use that's not at least _available_ as model weights.
      • glimshe 2 hours ago
        Not even that way, given that the price is still highly subsidized by investors and circular deals.
    • para_parolu 2 hours ago
      There is very little vendor lock-in. We can keep using a subsidized model until it's no longer subsidized, then switch to the next subsidized model.
      • ares623 2 hours ago
        It's like chairs!
    • rvz 3 hours ago
      I keep telling them, and they still want to spend money on tokens at the Anthropic casino, even though Anthropic is egregiously price gouging and applying usage caps so you spend more on tokens.

      Sometimes you can't help gamblers who want to gamble tokens hoping to hit the jackpot on fixing a typical issue that a local model, or even reading the documentation, could handle.

    • locusofself 3 hours ago
      Are there local models that are anywhere near as good at coding as Opus 4.6?
      • kadoban a few seconds ago
        Not really. Qwen 3.5, Gemma, and a couple of others are quite good though, and the quants are _very_ runnable on a good GPU.
      • jasonjmcghee 3 hours ago
        People will insist otherwise, but I haven't seen anything close to Sonnet 4.6 that can be run locally.
        • Incipient 2 hours ago
          I don't think anyone can honestly say a huge frontier model is actually going to be matched by something running on 64GB locally.
          • urig an hour ago
            You don't have to use the most recent bleeding edge model to succeed. A local FOSS coding agent coupled with a reasonably priced LLM could yield the optimal ROI.
          • jasonjmcghee 2 hours ago
            I have read many comments claiming that various ~30B Qwen 3.5 models, ~30B Gemma 4 models, and now Qwen 3.6 are "better than Sonnet".

            I don't know how large Sonnet and Opus are, but the rumor is 1T and 5T parameters respectively.

    • ratg13 an hour ago
      This doesn’t affect existing users.

      This is a simple supply and demand curve.

      Higher demand means the price goes up; this has been true since before SaaS, and before computers.

    • jazz9k 4 hours ago
      The 'local model' is called your brain.
      • mingus88 3 hours ago
        I’m sorry but that’s just dumb. An LLM is a tool. Your brain is not a substitute for an LLM in the same way your fingers are not a substitute for a wrench.

        The year is 2026, and if you are using your brain on chore work like one-off scripts, refactoring, or boilerplate test code, then you are wasting time and money and I don't want to work with you.

        Local models are fine for this and can do it in a fraction of the time your brain will take to even get bootstrapped.

        • adithyassekhar 2 hours ago
          The year is 2026, and the average RAM for the most common type of developer's machine (web) is 16GB; 8GB will be the lower end. Tell me which model one can run locally on that machine?
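          The RAM question above can be answered with a back-of-envelope estimate. This sketch assumes the common rule of thumb that resident memory is roughly parameters × bits-per-weight ÷ 8, plus ~20% overhead for KV cache and runtime; both the rule and the overhead factor are assumptions, not measurements of any particular model:

          ```python
          # Back-of-envelope: can a quantized model fit in a given amount of RAM?
          # Assumption: weights occupy params * bits_per_weight / 8 bytes,
          # with ~20% extra for KV cache, activations, and the runtime.

          def est_ram_gb(params_billion: float, bits_per_weight: float,
                         overhead: float = 1.2) -> float:
              """Rough resident-memory estimate in GB for a quantized model."""
              weight_bytes = params_billion * 1e9 * bits_per_weight / 8
              return weight_bytes * overhead / 1e9

          # A 7B model at 4-bit quantization: ~4.2 GB, fits in a 16GB machine.
          print(round(est_ram_gb(7, 4), 1))
          # A ~30B model at 4-bit: ~18 GB, does not fit in 16GB.
          print(round(est_ram_gb(30, 4), 1))
          ```

          By this estimate, the ~30B models praised elsewhere in the thread need more than a 16GB machine even at 4-bit quantization, while ~7B models fit comfortably.
          
          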
  • ghstinda an hour ago
    They've been folding under government pressure for months. I think they lost control of their own company. Still, they have a nice writing voice, but I think Google will be the last man standing when this is all over.
  • F7F7F7 3 hours ago
    He mentions Max as another place where they didn't properly predict plan and pricing relative to usage. I'd bet the farm that it's the next to be 'A/B' tested.
  • muyuu 5 hours ago
    Maybe those already on $20 a month plans won't be nerfed much more?

    It's yet another austerity move, pretty much in line with the recent ones.