I keep getting the sense that people feel like they have no idea if they are getting the product that they originally paid for, or something much weaker, and this sentiment seems to be constantly spreading. Like when I hear Anthropic mentioned in the past few weeks, it's almost always in some negative context.
- Banning OpenClaw users (within their rights, of course, but bad optics)
- Banning 3rd party harnesses in general (ditto)
(claude -p still works on the sub but I get the feeling like if I actually use it, I'll get my Anthropic acct. nuked. Would be great to get some clarity on this. If I invoke it from my Telegram bot, is that an unauthorized 3rd party harness?)
- Lowering reasoning effort (and then showing up here saying "we'll try to make sure the most valuable customers get the non-gimped experience" (paraphrasing slightly xD))
- Massively reduced usage limits (apparently a bug?). The other day the same task cost 21x more usage on Claude than on Codex.
- Noticed a very sharp drop in response length in the Claude app. Asked Claude about it and it mentioned several things in the system prompt related to reduced reasoning effort, keeping responses as brief as possible, etc.
It's all circumstantial but everything points towards "desperately trying to cut costs".
I love Claude and I won't be switching any time soon (though with the usage limits I'm increasingly using Codex for coding), but it's getting hard to recommend it to friends lately. I told a friend "it was the best option, until about two weeks ago..." Now it's up in the air.
I have been wondering if it's more geared toward reducing resource usage, given the known constraint on AI datacenter expansion at the moment. Perhaps they are struggling to meet demand?
It only makes sense for them to get users to use their ecosystem, rather than other tools.
Yes, definitely, they’re gracefully failing to meet demand. They could also deny new customers, but it would probably be bad for business.
100% this, I’ve posted the same sentiment here on HN. I hate the chilling effect of the bans and the lack of clarity on what is and is not allowed.
I don’t think they could have done much better, though.
Another thing is branding: Their CLI might be the best right now, but tech debt says it won’t continue to be for very long.
By enforcing the CLI you enforce the brand value — you’re not just buying the engine.
Maybe there’s some truth to that, but then why haven’t OpenAI made the same move? I believe the main reason is platform control. Anthropic can’t survive as a pipeline for tokens, they need to build and control a platform, which means aggressively locking out everybody else building a platform.
OpenAI has never shied away from burning mountains of cash to try and capture a little more market share. They paid a billion dollars for a vibe-coded mess just for the opportunity to associate themselves with the hype.
Claude Code uses a bunch of best practices to maximize cache hit rate. Third-party harnesses are hit or miss, so they often use a lot more tokens for the same task.
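For context, the main lever here is the Messages API's explicit prompt caching: a harness that keeps a large, stable prefix (system prompt, tool definitions) byte-identical across turns gets cheap cache reads, while one that reshuffles the prompt pays full price every time. A minimal sketch of a cache-friendly request payload (field names follow the public Messages API; the model name and prompt text are illustrative):

```python
# Sketch of a cache-friendly Anthropic Messages request payload.
# The stable, expensive prefix goes first and gets a cache breakpoint;
# only the short per-turn message changes between requests.
payload = {
    "model": "claude-sonnet-4-5",  # illustrative model name
    "max_tokens": 1024,
    "system": [
        {
            "type": "text",
            "text": "You are a coding agent. <long, stable instructions>",
            "cache_control": {"type": "ephemeral"},  # cache breakpoint
        }
    ],
    "messages": [
        {"role": "user", "content": "Fix the failing test in utils.py"}
    ],
}
```

Everything up to the `cache_control` breakpoint is eligible for reuse on subsequent requests; changing even one byte of that prefix invalidates the cached portion, which is the mistake careless harnesses tend to make.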
Most users of those third-party harnesses care just as much about hitting the cache and getting more usage.
If you're paying normal API prices they'll happily let you use whatever harness you want.
It's a bug only if it gets a harsh public response; otherwise it becomes a feature.
I've used it with a sub a lot. Concurrency of 40 writing descriptions of thousands of images, running for hours on sonnet.
I have a lot of complaints. I've cancelled my $200 subscription and when it runs out in a few days I'll have to find something else.
But claude -p is fine.
... Or it was two weeks ago. Who knows if they've silently throttled it by now?
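For anyone curious, a batch run like that concurrency-of-40 job doesn't need anything fancy; a semaphore-bounded fan-out is enough. A sketch, with a placeholder coroutine standing in for the actual `claude -p` subprocess call:

```python
import asyncio

async def describe(path: str, sem: asyncio.Semaphore) -> str:
    # Placeholder for the real work: swap this body for an
    # asyncio.create_subprocess_exec call that invokes
    # `claude -p "Describe this image"` on the given path.
    async with sem:
        await asyncio.sleep(0)  # simulated work
        return f"description of {path}"

async def run_all(paths, limit=40):
    # Never more than `limit` calls in flight at once.
    sem = asyncio.Semaphore(limit)
    return await asyncio.gather(*(describe(p, sem) for p in paths))

results = asyncio.run(run_all([f"img_{i}.png" for i in range(1000)]))
print(len(results))
```

The semaphore caps in-flight requests at 40 while `gather` keeps the pipeline full for hours-long runs over thousands of images.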
Not sure how that's enforced, though. I was in the OpenClaw Discord a while ago and enforcement seemed a bit random.
I'll try to find the source, I might have gotten the details mixed up.
Just tmux and use that.
If they drop -p, people will just vibe-code a way to type inside it remotely in five minutes, similar to their own built-in remote access tool. Seems like a losing game on Anthropic's side.
Claude seems to be getting nerfed every week since we've switched. I wonder how our EVP is feeling now.
It kind of reminds me of the joke where a plumber charges $500 for a 5 minute visit. When the client complains the plumber says it's $50 for labor and $450 for knowing how to fix the problem.
In a bustling restaurant, an excited patron recognized the famous artist Picasso dining alone. Seizing the moment, the patron approached Picasso with a simple request. With a plain napkin and a big smile, he asked the artist for a drawing. He promised payment for his troubles. Picasso, ever the creator, didn’t hesitate. From his pocket, he produced a charcoal pencil and he brought to life a stunning sketch of a goat on the napkin—a clear mark of his unique style. Proudly, he presented it to the patron.
The artwork mesmerized the patron, who reached out to take it, only to be stopped by Picasso’s firm hand. “That will be $100,000,” Picasso declared.
Astonished, the patron balked at the sum. “But it took you just a few seconds to draw this!”
With a calm demeanor, Picasso took back the napkin, crumpled it, and tucked it away into his pocket, replying, “No, it has taken me a lifetime.”
Whether it's due to bugs or actual malice, it's not a good look. I genuinely can't tell if it's buggy, if it's been intentionally degraded, if it's placebo or if it's all just an elaborate OpenAI psyop.
https://news.ycombinator.com/item?id=47664442
Configuration and environment variables seem to have improved things somewhat but it still seems to be hit or miss.
So the trick is to always set to max, and then begin every task with “this is an extremely complex task, do not complete it without extensive deep thinking and research” or whatever.
You’re basically fighting a battle to make the model think more, against the defaults getting more and more nerfed to save costs.
It's pretty clear that OpenAI has consistently used bots on social networks to peddle their products. This could just be the next iteration, mass spreading lies about Anthropic to get people to flock back to their own products.
That would explain why a lot of users in the comments of those posts are claiming that they don't see any changes to limits.
(FWIW I have definitely noticed a cognitive decline with Claude / Opus 4.6 over the past month and a half or so, and unless I'm secretly working for them in my sleep, I'm definitely not an Anthropic employee.)
In short, it looks like nothing has been nerfed, but sentiment has definitely been negative. I suspect some of the OpenClaw users have been taking out their frustrations.
You definitely shouldn't trust me, as we're way beyond the point where you can trust ANYTHING on the internet that has a timestamp later than 2021 or so (and even then, of course people were already lying).
Personally I use Claude models through Bedrock because I work for Amazon, and I haven't noticed any decline. Instead it's always been pretty shit, and what people now describe as the model getting lost in infinite loops of talking to itself has happened since the very start for me.
I'm on the enterprise team plan so a decent amount of usage.
In March I could use Opus all day and it was getting great results.
Since the last week of March and into April, I've had sessions where I maxed out session usage in under 2 hours because it got stuck in overthinking loops: multiple turns of realising the same thing, dozens of paragraphs of "But wait, actually I need to do x" with slight variations of the same realisation.
This is not the 'thinking effort' setting in Claude Code; I noticed this across multiple sessions with the same thinking-effort settings. There was clearly some unpublished underlying change that made the model get stuck in thinking loops longer and more often, with no escape hatch to stop and prompt the user for additional steering when it gets stuck.
Although it seems that enterprise wasn’t included, so maybe not in your case.
https://support.claude.com/en/articles/14063676-claude-march...
In all seriousness though, I've observed the same thing with my own usage.
It is not in Anthropic's interest to screw its customer base. Running a frontier lab comes with tradeoffs between training, inference and other areas.
How would Anthropic increase future profits without satisfying customers?
The weakest signal to me is investor money, because when you think about it, investors are betting on a future that may or may not materialize. Heck, even trends aren't guaranteed; "past performance is no guarantee", etc.
1. Build AGI
2. Use said AGI to tell us how to become profitable
3. Profit!
Anthropic seems to be going all in on enterprise sales. Which means they don't actually have to please customers, or it's what ThePrimeagen humorously calls a "yacht problem": a problem that only needs a solution after the IPO. For now all they have to do is convince corporate leadership that this is the future of work and sow enough FOMO to close those sales contracts, and their projected sales, and stock valuation, go through the roof.
Of course that value will collapse if they go without delivering on their promises long enough. That's why they call it a bubble. But by then, hopefully, Dario and the early investors will be long gone and even richer than they were to start. Their only competitor, OpenAI, is confronted with the same issues: the scalability problems won't go away, and addressing them doesn't drive stock valuation the way promising high rollers that AGI and total workforce automation are just around the corner does.
Demand is way up and compute supply is extremely limited because data center buildouts can't keep up with demand.
In the face of rising demand and insufficient compute, their only practical options (other than refusing new business until demand can be met) are significantly raising the price of tokens (and more tightly limiting subscription options) or doing behind-the-scenes inference optimizations that are likely to make the model dumber.
It is very easy to believe that they took the route of inference optimizations that reduced the quality of the service, and that this is where the perceived enshittification is coming from.
The ideal time to make your product worse is probably not at the same point that all of your competitor's customers are looking. Anthropic really, really fucked up here.
And beyond that, there's a ton of people who are just regular 9-5 Claude CLI users with an enterprise subscription who are getting punished with a worse model at the same price just as if we were Claw users. This kind of thing does not make one feel warm and fuzzy. I feel like I just got a boot to the teeth.
Phase 1: $200/mo prosumer engineer tool
Phase 2: AI layoffs / "it's just AI washing"
Phase 3: $20,000/mo limited release model "too dangerous" to use
Phase 4: Accelerated layoffs / two person teams. Rehiring of certain personnel at lower costs.
Phase 5: "Our new model can decompile and rewrite any commercial software. We just wrote a new kernel after looking at Linux (bye, bye GPL!) We also decompiled the latest Zelda game, ported the engine to Rust, and made a new game with it. Source code has no value. Even compiled and obfuscated code is a breeze to clone."
Phase 6: $100k/mo model that replicates entire engineering teams, only large companies can afford it. Ordinary users can't buy. More layoffs.
Phase N: People can't afford computing anymore. Everything is thin clients and rented. It's become like the private railroad industry. End of the PC era. Like kids growing up on smartphones, there's nothing to tinker with anymore. And certainly no gradient for entrepreneurship for once-skilled labor capital.
Anthropic used to be cool before they started gating access. Limiting Claw/OpenCode was strike one. Mythos is strike two.
Y'all should have started hating on their ethics when they started complaining about being distilled, given the training they themselves conducted on materials they did not own.
We need open weights companies now more than ever. Too bad China seems to be giving up on the idea.
"You wouldn't distill an Opus."
You will be backstabbed
You will be squeezed for all they can.
And you will be betrayed.
> Phase N: People can't afford computing anymore. Everything is thin clients and rented. It's become like the private railroad industry. End of the PC era. Like kids growing up on smartphones, there's nothing to tinker with anymore. And certainly no gradient for entrepreneurship for once-skilled labor capital.
Thankfully none of them actually makes money; they just run on investment, so there's a good chance the bubble will pop and the price of PC equipment will... continue to rise as the US gives up Taiwan to China.
Anthropic is a private company but nevertheless, the sentiment is accurate and applies to all kinds of corporations.
I've been using GLM for over 6 months and pretty happy.
Releasing open weights has basically been a PR move; the moment those companies need to actually make money, they'll cut it, as it reduces their client base.
They DO NOT want you to run AI. They want you to pay them to do it
z.ai did go public on the HK exchange. They are under pressures similar to other public companies.
I know that China models are increasingly being trained and run using Huawei chips instead of Nvidia. I know China has a surplus of electricity from renewables (wind, solar, hydro).
So, it makes a lot of sense to get people a "demo" and claim the paid product is better.
I think a lot of people have no idea how capable local models are at the moment.
The AI landscape in China is larger than just Qwen and Alibaba.
The first one is just incredibly naive, the second might be true for some people, for some tasks, but it's not going to capture the majority who're chasing the latest and greatest to "keep up".
It all boils down to a brilliant but extremely expensive technology. Both to build and to run.
We've been sold a product with heavy subsidy. The idea (from Sam): scale out and see what happens.
Those who care to read between the lines can see what's happening: a perfect storm of demand that attracts VCs who can't understand that they themselves are the real customers. Once they understand that, it will be too late.
Regarding open weight models: eventually we will, as humanity, benefit from the astronomical capital poured into developing a technology ahead of its time. In a few years this and even more will run on edge.
Written by open source developers, likely former openai and anthropic employees who got so much cash in the bank they don't need to worry about renting their knowledge.
I think it has something to do with mode collapse (although Claude certainly has its own "tells"), but I'm not sure.
It sounds trivial, but even for agentic use I found the writing style to be really important. When you give Claude a persona, it sounds like the thing. When you give GPT a persona, it sounds like GPT half-assedly pretending to be the thing.
---
Some other interesting points about Anthropic's models. I don't know if any of these relate to my LLM style question, but seems worth mentioning:
Claude models also use far fewer tokens for the same task (on ArtificialAnalysis, they are a clear outlier on this metric).
And there's a much stronger common sense, subjectively. (Not sure if we have a good way to actually measure that, though.) It takes context and common sense into account, to a much greater degree.
(Which ties in with their constitution. Understanding why things are wrong at a deeper level, rather than just surface level pattern matching.)
Opus is great but it should be bigger. You notice the difference between Sonnet and Opus, but with heavy use you notice Opus's limitations, too.
If your objective is to democratize AI, sure. But those who are fed up with it, and with the devastating effects it's having on students, for example, can opt to actively avoid paying for products with AI (I say this as someone who uses it every day, guilty). At some point large companies will see that they're bleeding money for something most people don't seem to want, and cancel those $100k/mo deals. I've already experienced one AI-developer-turned company crash and burn.
Personally, I don't think this LLM-based AI generation will have any significant positive impacts. Time, energy (CO2) and money would have been far better spent elsewhere.
This one seems too far fetched. Training models is widespread. There will always be open weight models in some form, and if we assume there will be some advancements in architecture, I bet you could also run them on much leaner devices. Even today you can run models on Raspberry Pis. I don't see a reason this will stop being a thing, there will be plenty of ways to tinker.
However, keep in mind the masses don't care about tinkering and never have. People want a ChatGPT experience, not a pytorch experience. In essence this is true for all tech products, not just AI.
The SI symbol for minutes is "min", not "M".
A compromise would be to use the OP notation "m".
I'm aware of that, and thought that "downgraded" was the wrong word to use when going from 1h to 5 months.
1. I guess longer caching means more stale data, which is why it's a downgrade? 2. Maybe this isn't the TTL I thought it was? 3. Maybe this isn't the cache I thought?
Then I clicked on the link and realized I had been misled by the title.
I'm seeing reports that the effort selector isn't necessarily working as intended and that the model is regressing in other ways: over-emphasizing how "difficult" a problem is and declining it because of the "time" it would take (quoted in human effort), or suggesting the "easier" path forward even when it's a hack or kludge-filled solution.
I heard a while back Claude refused to attempt a task for days, saying it would take weeks of work. Eventually the user convinced it to try, and it one-shotted it in 30 seconds.
Totally true; tokens also seem to burn through much faster. More parallelism could explain some of it, but where I could work on 3-5 projects at once on the Max plan a month ago, I can't even get one to completion now on the same Opus model before the 5h session locks me out.
I point it to example snippets and web documentation, but the code it generates won't work at all, not even close.
Opus 4.6 is a tiny bit less wrong than Codex 5.4 xhigh, but still pretty useless.
So, after reading all the success stories here and everywhere, I'm wondering if I'm holding it wrong or if it just can't solve everything yet.
What you're doing is more specialized and these models are useless there. It's not intelligence.
Another NFT/Crypto era is upon us so no you're not holding it wrong.
Obviously it cannot. But if you give the AI enough hints, clear spec, clear documentation and remove all distracting information, it can solve most problems.
One of these is better.
https://www.newyorker.com/magazine/2026/04/13/sam-altman-may...
You know you can just google his name yourself, right?
</tinfoil>
But more likely they are constrained on GPUs and can't get them fast enough.
(My guess having no understanding of how this industry actually works.)
So I can't continue my claude code session I started yesterday.
https://www.anthropic.com/engineering/a-postmortem-of-three-...
Since caching can really only be judged at scale, across many users, I can only assume that Anthropic looked at their infra load and impact and made a very intentional change.
They can't really revolutionize AI again so they make the product worse and worse and then offer you a "better" one
So you'd need some adaptive algorithm to decide when to keep the cache and when to purge it wholesale, possibly on the client side. But if you give the client control, people will make it use as much cache as possible just to chase diminishing returns, so fine-grained control here isn't all that easy. Another option is a cache size per account, purged intelligently instead of relying on TTL alone.
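The "cache size per account, purged intelligently instead of relying just on TTL" idea can be sketched as an LRU store with a per-entry TTL layered on top. This is a toy illustration of the policy, not a claim about how Anthropic's cache actually works:

```python
import time
from collections import OrderedDict

class BoundedTTLCache:
    """LRU cache with both a per-entry TTL and a total-size cap."""

    def __init__(self, max_entries: int = 4, ttl_seconds: float = 3600.0):
        self.max_entries = max_entries
        self.ttl = ttl_seconds
        self._store: OrderedDict = OrderedDict()  # key -> (value, expiry)

    def put(self, key, value, now=None):
        now = time.monotonic() if now is None else now
        self._evict_expired(now)
        self._store[key] = (value, now + self.ttl)
        self._store.move_to_end(key)
        # Size cap: purge least-recently-used entries first.
        while len(self._store) > self.max_entries:
            self._store.popitem(last=False)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        item = self._store.get(key)
        if item is None or item[1] < now:
            self._store.pop(key, None)  # expired or missing
            return None
        self._store.move_to_end(key)  # refresh recency on hit
        return item[0]

    def _evict_expired(self, now):
        for k in [k for k, (_, exp) in self._store.items() if exp < now]:
            del self._store[k]
```

The per-account budget (`max_entries`) evicts by recency while the TTL still bounds staleness; the `now` parameter is only there to make the behaviour deterministic in examples.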
The hardware VM model is almost identical. Each session can start anywhere, but a live session can't just be routed anywhere without a penalty.
I mean, you are investing a lot (infrastructure and capital) into something that is essentially not yours. You claim credit for the offspring (the solution) simply because it resides in your workspace. You accept foreign code to make your project appear more successful and populated than you could manage alone. Your over-reliance on a surrogate for the heavy lifting leads to the loss of your own survival skills (coding and debugging). Last but not least, you handle the grunt work of territory defense (clients and environments) while the AI performs the actual act of creation (Displaced Agency).
Why the FUD?
I've noticed an interesting shift in public opinion since Anthropic passed OpenAI in revenue.
>> Was there a change? Yes — March 6, intentional, part of ongoing cache optimization. You pinpointed the date correctly.
The entire issue lays out how and why it's a silent downgrade. It was also silent in that it just happened, without any announcement.
I don't understand how this is FUD.