4 points by sahli 7 hours ago | 8 comments
  • LatencyKills 7 hours ago
    I've been an engineer for almost 30 years (@ MS & Apple). I've been using Claude to perform code reviews for several of my macOS apps (that I wrote without AI).

    A few weeks ago, its performance was impressive and helpful.

    In the last week, it has been unusable. It is now getting confused, suggests architecture changes that make absolutely no sense, and has even started ignoring my stop hooks (and then arguing with me that they "aren't necessary").

  • jqpabc123 7 hours ago
    Is Anthropic trying to limit usage or drive people away?

    They are most likely attempting to become profitable.

    AI companies have consumed eye-watering amounts of venture capital that they are increasingly under pressure to justify. To do this, they will have to either increase rates or degrade performance, or both.

    A lot of people don't seem to grasp the epic proportions of what is taking place here. Consultants at Bain & Co. estimated that justifying current AI spending will require $2 trillion in annual AI revenue by 2030.

    By comparison, this is more than the combined revenue of Amazon, Apple, Alphabet, Microsoft, Meta and Nvidia, and more than five times the size of the entire global subscription software market.
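    The scale of that comparison can be sanity-checked with a quick back-of-the-envelope sum. This is an illustrative sketch only; the revenue figures are approximate fiscal-2023/2024 numbers from public filings, rounded to the nearest billion.

    ```python
    # Rough check of the Bain comparison: does $2T/year exceed the combined
    # annual revenue of the six big tech companies named above?
    # Figures are approximate, from public annual reports (USD billions).
    revenues_bn = {
        "Amazon": 575,     # FY2023
        "Apple": 383,      # FY2023 (ended Sep 2023)
        "Alphabet": 307,   # FY2023
        "Microsoft": 245,  # FY2024 (ended Jun 2024)
        "Meta": 135,       # FY2023
        "Nvidia": 61,      # FY2024 (ended Jan 2024)
    }

    combined_bn = sum(revenues_bn.values())
    bain_target_bn = 2000  # Bain's $2 trillion annual AI revenue estimate

    print(f"Combined big-tech revenue: ${combined_bn}B")
    print(f"Bain target exceeds it by: ${bain_target_bn - combined_bn}B")
    ```

    With these rounded figures the combined total comes to roughly $1.7 trillion, so the $2 trillion target does indeed exceed it, though "more than five times the global subscription software market" is Bain's claim, not something checkable from these numbers alone.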

    For most companies, this means that AI will have to become their primary technology expense, far exceeding their current budgets.

  • palata 7 hours ago
    It was working really well, so I paid for a yearly subscription. Two weeks later, it got to the point where I wouldn't pay for it if I had the choice.

    I wonder if the business model is: "make it great, get people to pay for yearly subscriptions, and then make it bad again". At least I won't make that mistake ever again; it proved that such services cannot be trusted.

    I'll only pay for monthly subscriptions in the future, so that when they screw me I can stop paying.

    • jqpabc123 7 hours ago
      "make it great, get people to pay yearly subscriptions, and then make it bad again".

      This is called "bait and switch".

  • the_inspector 7 hours ago
    At the very least, I could not get Claude Projects working, after it worked perfectly about two months ago. And because Anthropic introduced a chatbot as the only line of support, help is not on its way.
  • niobe 7 hours ago
    100%. Claude is, I'm sorry to say, basically nerfed.

    I downgraded from Max to Pro this month and will cancel my subscription next month. I would suggest others who feel similarly do the same. The only way to signal to these companies that this model enshittification cycle is unacceptable is to vote with your feet.

    • jqpabc123 6 hours ago
      vote with your feet.

      All vendors are under the same sort of pressure. What you've experienced is likely to be duplicated elsewhere.

  • throwawayffffas 7 hours ago
    My guess is they are trying lower quantization. I haven't noticed it myself; I recently began trying qwen 3.5 locally.
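    The quantization guess above can be sketched in a few lines. This is a toy illustration of symmetric integer quantization, not Anthropic's actual serving stack: storing weights in fewer bits shrinks memory and cost, but the round-trip error grows as the bit width drops.

    ```python
    # Toy symmetric quantization: map floats to signed ints with `bits` bits,
    # then map back, and measure the worst-case round-trip error.
    def quantize_roundtrip(weights, bits):
        qmax = 2 ** (bits - 1) - 1                 # 127 for int8, 7 for int4
        scale = max(abs(w) for w in weights) / qmax
        return [round(w / scale) * scale for w in weights]

    weights = [0.42, -1.3, 0.07, 0.9, -0.55]

    for bits in (8, 4):
        approx = quantize_roundtrip(weights, bits)
        err = max(abs(a - w) for a, w in zip(approx, weights))
        print(f"int{bits}: max round-trip error = {err:.4f}")
    ```

    With int4 the error here is more than an order of magnitude larger than with int8, which is the intuition behind "lower quantization makes the model dumber".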
  • loolhahalmao 7 hours ago
    vibing prod has its consequences. i don't believe they're purposefully trying to make their products worse, but it's the result of not reviewing / testing their code and then trying to stem costs, resulting in higher costs for users with worse quality.
  • jazz9k 7 hours ago
    It's most likely an intentional downgrade, so they can sell a better model to corporate and enterprise clients. This was bound to happen, especially since all of these companies are bleeding cash.