We've been investigating these reports, and a few of the top issues we've found are:
1. Prompt cache misses when using 1M token context window are expensive. Since Claude Code uses a 1 hour prompt cache window for the main agent, if you leave your computer for over an hour then continue a stale session, it's often a full cache miss. To improve this, we have shipped a few UX improvements (eg. to nudge you to /clear before continuing a long stale session), and are investigating defaulting to 400k context instead, with an option to configure your context window to up to 1M if preferred. To experiment with this now, try: CLAUDE_CODE_AUTO_COMPACT_WINDOW=400000 claude.
2. People pulling in a large number of skills, or running many agents or background automations, which sometimes happens when using a large number of plugins. This was the case for a surprisingly large number of users, and we are actively working on (a) improving the UX to make these cases more visible to users and (b) more intelligently truncating, pruning, and scheduling non-main tasks to avoid surprise token usage.
In the process, we ruled out a large number of hypotheses: adaptive thinking, other kinds of harness regressions, model and inference regressions.
We are continuing to investigate and prioritize this. The most actionable thing for people running into this is to run /feedback, and optionally post the feedback IDs either here or in the GitHub issue. That makes it possible for us to debug specific reports.
Jeff Bezos famously said that if the anecdotes are contradicting the metrics, then the metrics are measuring the wrong things. I suggest you take the anecdotes here seriously and figure out where/why the metrics are wrong.
Baking deeper analytics into CC would be helpful... similar to ccusage perhaps: https://github.com/ryoppippi/ccusage
The way your tone and complaints come across reminds me of this. As a paying customer ($5k spend per month in my corporate job), I’d rather Anthropic keep doing what they’re doing — innovating and shipping useful stuff at blinding speed — and not index on your feedback. I think the cost of the tradeoffs that would require far outweighs the benefit.
Thank you Boris.
Also, why is there no SLA?
My clients demand one, so there is one.
Anthropic has many customers despite the fact that they have occasional problems. They’re not suing Anthropic because Anthropic isn’t promising in its agreement something they can’t deliver.
I think you’re reading into the agreement something that isn’t there, and that’s the cause of your confusion.
Does it exist?
Just because people pay for things doesn't mean they know or understand what they are paying for. Nor is there the legal precedent to actually understand where the rub lies or how that impacts business.
I believe, respectfully, that’s precisely what is happening in this thread because you keep complaining about the absence of an SLA that was never in the agreement, as though it is—or is supposed to be—there, and therefore the existence of some “rights” that would flow from that.
Maybe, just maybe, they didn't put him here; rather, he's just a normal guy who reads HN, who is passionate about his role, and is here on his own time.
So that means we just eject any critical thinking when it comes to companies, especially where there is no liability or obligation for them (Boris or Anthropic) to be honest?
Other than 'trust'.
It is a horrible error of judgement to insert a complex request for such a basic ability. It is also an error of judgement to let Claude decide whether it wants to improve the code at all.
It is so bad that I stopped working on my current project and went to try other models. So far Qwen is quite promising.
a6edd0d1-a9ed-4545-b237-cff00f5be090 / https://github.com/anthropics/claude-code/issues/47027
I'm happy to provide any other info that can be useful (as long as I'm not sharing any information about the code or tools we use in a public GitHub issue).
Please:
1. Upgrade to the latest: claude update (seems like you did this already)
2. Start a new conversation (resuming an old convo may trigger this bug again in that convo)
2. Can we pay more/do more rigorous KYC to disable it if it's active?
EDIT: prompt caching behavior -did- change! 1hr -> 5min on March 6th. I'm not sure how starting a fresh session fixes it, as it's just rebuilding everything. Why even make this available?
It feels like the rules changed and the attitude from Anth is "aw I'm sorry you didn't know that you're supposed to do that." The whole point of CC is to let it run unattended; why would you build around the behavior of watching it like a hawk to prevent the cache from expiring?
This is not accurate. The main agent typically uses a 1h cache (except for API customers, who can enable 1h, but it is not on by default because it costs more). Sub-agents typically use a 5m cache.
“This seems like a good opportunity to wrap it up and continue in a fresh context window.”
“Want to continue in a fresh context window? We got a lot of work done and this next step seems to deserve a fresh start!”
If there’s a cost problem, fix the pricing or the architecture. But please stop the model and UI from badgering users into smaller context windows at every opportunity. That is not a solution, it’s service degradation dressed as a tooltip.

This seems a bit awkward vs the 5 hour session windows.
If I get rate limited once, I'll get rate limited immediately again on the same chat when the rate limit ends?
Any chance we can get some form of deferred cache, so anything on a rate limited account gets put aside until the rate limit ends?
I have yet to see Anthropic doing the same. Sorry but this whole thing seems to be quite on purpose.
I use Claude Code about 8hrs every work day extensively, and have yet to see any issues.
It really does seem like PEBKAC.
For example, I don't pull in tons of third-party skills, preferring to have a small list of ones I write and update myself, but it's not at all obvious to me that pulling in a big list of third-party skills (like I know a lot of people do with superpowers, gstack, etc...) would cause quota or cache miss issues, and if that's causing problems, I'd call that more of a UX footgun than user error. Same with the 1M context window being a heavily-touted feature that's apparently not something you want to actually take advantage of...
With a new version of Claude Code pretty much each day, constant changes to their usage rules (2x outside of peak hours, temporarily 2x for a few weeks, ...), hidden usage decisions (past 256k it looks like your usage consumes your limits faster) and model degradation (Opus 4.6 is now worse than Opus 4.5 as many reported), I fail to see how it can be a user error.
The only user error I see here is still trusting Anthropic to be on the good side tbh.
If you need to hear it from someone else: https://www.youtube.com/watch?v=stZr6U_7S90
This is false. My guess is what is happening is #1 above, where restarting a stale session causes a 256k cache miss.
That said, I hear the frustration. We are actively working on improving rate limit predictability and visibility into token usage.
To avoid 1M issues, this week I have also intentionally used the 256k context model, disabled adaptive thinking and did the same "plans in multiple short steps with /clear in-between" to minimize context usage, and yet nothing helps. It just feels like ~2x to ~3x fewer tokens than before, and a lot less smart than in February.
Nowadays every time I complete a plan I spend several sessions afterwards saying things like "we have done plan X, the changes are uncommitted, can you take a look at what we did", and every time it finds things that were missed or outright (bad) shortcuts/deviations from the plan, despite my settings.json having a clear "if in doubt ask the user, don't just take the easy way out". As a random data point, just today Opus halfway through a session told me to make a change to code inside a pod then rollout restart it to use said change, and when called out on it, it of course said that I was right and of course that wouldn't work...
It is understandable that given your incredible growth you are between a rock and a hard place and have to tweak limits, compute does not grow on trees, but the consistent "you are holding it wrong" messaging is not helpful. I am wondering if realistically your only option is to move everybody to metered, with clear token usage displayed, and maybe have pro/max 5/max 20 just be a "your first $x of tokens is 50/75% off". Allow folks to tweak the thinking budget, and change the system prompt to remove things like "try the easy solution first" which anecdotally has been introduced in the past while, and allow users to verify on prompt if the prompt would cause the whole context to be sent or if cache is available.
They introduced a 1M context model semi-transparently without realizing the effects it would have, then refused to "make it right" to the customer, which is a trait most people expect from a business when they spend money on it, especially in the US, and especially when the money spent is often in the thousands of dollars.
Unless Anthropic has some secret sauce, I refuse to believe that their models perform anywhere near the same on >300k context sizes as they do on 100k. People don't realize it, but even a small drop in success rate becomes very noticeable if you're used to having near 100%, i.e. 99% -> 95% is more noticeable than 55% -> 50%.
I got my first Claude sub last month (it expires in 4 days) and I've used it on some biggish projects with opencode; it went from compacting after 5-10 questions to just expanding the context window. I personally notice it deteriorating somewhere between 200-300k tokens, and I either just fork a previous context or start a new one after that, because at that size even compacting seems to generate subpar summaries. It currently no longer works with opencode so I can't attest to how well it worked the past week or so.
If the 1M model introduction is at fault for this mass user perception that the models are getting worse, then it's Anthropic's fault for introducing confusion into the ecosystem. Even if there were zero problems introduced and the 1M model was perfect, if your response when the users complain is to blame it on the user, then don't expect the user to be happy. Nobody wants to hear "you're holding it wrong", but it seems that Anthropic is trying to be the Apple of LLMs in all the wrong ways as well.
That said, I feel that things started to feel a bit off usage-wise after the introduction of 1M context.
I'd personally be happy to disable it and go back to auto-compacting because that seems to have been the happy medium.
Given I'm running two max accounts to get the usage I want, can we get a 25x and 40x tier? :-)
Maybe using a heartbeat to detect live sessions to cache longer than sessions the user has already closed. And only do it for long sessions where a cache miss would be very expensive.
1. Poor cache utilization. I put up a few PRs to fix these in OpenClaw, but the problem is their users update to new versions very slowly, so the vast majority of requests continued to use cache inefficiently.
2. Spiky traffic. A number of these harnesses use un-jittered cron, straining services due to weird traffic shape. Same problem -- it's patched, but users upgrade slowly.
We tried to fix these, but in the end, it's not something we can directly influence on users' behalf, and there will likely be more similar issues in the future. If people want to use these they are welcome to, but subscription clients need to be more efficient than that.
And I’m using Claude on a small module in my project; the automations that read more than they need and take up more context are a scam.
- I wrote an extension in Pi to warm my cache with a heartbeat.
- I wrote another to block submission after the cache expired (heartbeats disabled or run out)
- I wrote a third to hard limit my context window.
- I wrote a fourth to handle cache control placement before forking context for fan out.
- my initial prompt was 1000 tokens, improving cache efficiency.
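For the curious, the heartbeat idea is roughly this at the API level. A minimal sketch only, not the actual Pi extension; it assumes the standard Anthropic Python SDK, and the model id and file name are illustrative:

    import time
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # The big, stable prefix you want kept warm (system prompt, skills, project docs, ...).
    CACHED_PREFIX = open("system_prefix.txt").read()

    def heartbeat(interval_s=240, max_beats=8):
        # Re-touch the cached prefix every ~4 minutes so a 5-minute cache never lapses.
        # Each beat bills as a cache read plus one output token, which is cheap compared
        # to rebuilding the whole prefix after a miss. max_beats caps the spend.
        for _ in range(max_beats):
            client.messages.create(
                model="claude-opus-4-6",  # illustrative model id; match whatever the session uses
                max_tokens=1,
                system=[{
                    "type": "text",
                    "text": CACHED_PREFIX,
                    "cache_control": {"type": "ephemeral"},
                }],
                messages=[{"role": "user", "content": "ping"}],
            )
            time.sleep(interval_s)

The hard part is that the prefix has to match byte-for-byte what your harness actually sends, otherwise the heartbeat warms a cache nothing else ever hits.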
Anthropic is STOMPING on the diversity of use cases of their universal tool, see you when you recover.
I don’t understand this. I frequently have long breaks. I never want to clear or even compact because I don’t want to lose the conversations that I’ve had and the context. Clearing etc causes other issues like I have to restate everything at times and it misses things. I do try to update the memory which helps. I wish there was a better solution than a time bound cache
But my understanding is that we're talking about ~60GB of data per session, so it sounds unrealistic to do...
When a user walks away during the business day but CC is sitting open, you can refresh that cache up to 10x before it costs the same as a full miss. Realistically it would be <8x in a working day.
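Rough arithmetic behind that claim, assuming (as I understand Anthropic's published prompt-caching pricing) cache reads bill at about 10% of base input and a 5-minute cache write at about 125%:

    base_input  = 1.00  # relative cost of processing the prompt with no cache at all
    cache_read  = 0.10  # relative cost of one refresh that hits the cache
    cache_write = 1.25  # relative cost of rebuilding the cache after a miss (5m tier)

    print(base_input / cache_read)   # 10.0  -> ~10 cache-hit refreshes per uncached pass
    print(cache_write / cache_read)  # 12.5  -> ~12 refreshes per full cache rebuild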
The only people who are going to run into issues are superpower users who are running this excessively beyond any reasonable measure.
Most people are going to be quite happy with your service. But at the same time, and this is just a human nature thing, people are 10 times more likely to complain about an issue than to compliment something working well.
I don't know how to fix this, but I strongly suspect this isn't really a technical issue. It's more of a customer support one.
This seems really useful!
I'm surprised that "Opus 4.6" (200K) and "Opus 4.6 1M" are the only Opus options in the desktop app, whereas in the CLI/TUI app you don't seem to even get that distinction.
I bet that for a lot of folks something like 400k, 600k or 800k would work as better defaults, based on whatever task they want to work on.
"To experiment with this now, try: CLAUDE_CODE_AUTO_COMPACT_WINDOW=400000 claude."
Maybe try changing the 4 to a 3 and see if that works for you?
It's all "cache_control": { "type": "ephemeral" }; there is no "ttl" anywhere.
// edit: cc_version=2.1.104.f27
> By default, the cache has a 5-minute lifetime. The cache is refreshed for no additional cost each time the cached content is used.
>
> If you find that 5 minutes is too short, Anthropic also offers a 1-hour cache duration at additional cost.
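For reference, the 1-hour variant is requested by adding a ttl field to that same block; per the prompt-caching docs it looks roughly like this (exact spelling may differ by API version):

    "cache_control": { "type": "ephemeral", "ttl": "1h" }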
- More configurations and environments we need to test
- Given an edge/corner case, it is more likely a significant number of users run into it
- As the ecosystem has grown, more people use skills and plugins, and we need to offer better tools and automation to ensure these are efficient
We do actually dogfood rate limits, so I think it's some combination of the above.
"CLAUDE_CODE_AUTO_COMPACT_WINDOW=400000"
Running Claude Cowork in the background will consume tokens, and it might not be the most efficient use of them.
Last, but not least, turning off 1M token context by default is helpful.
The reply seems to be: oh huh, interesting. Maybe that's a good thing since people sometimes one-shot? That doesn't feel like the messaging I want to be reading, and the way it conflicts with the message here that cache is 1 hour is confusing.
https://news.ycombinator.com/item?id=47741755
Is there any status information or not on whether cache is used? It sure looks like the person analyzing the 5m issue had to work extremely hard to get any kind of data. It feels like the iteration loop of people getting better at this stuff would go much much better if this weren't such a black box, if we had the data to see & understand: is the cache helping?
Is this really an improvement? Shouldn't this be something you investigate before introducing 1M context?
What is a long stale session?
If that's not how Claude Code is intended to be used it might as well auto quit after a period of time. If not then if it's an acceptable use case users shouldn't change their behavior.
> People pulling in a large number of skills, or running many agents or background automations, which sometimes happens when using a large number of plugins.
If this was an issue, there should have been a cap on it before the feature was released, and only increased once you were sure it was fine? What is "a large number"? Then how do we know what to do?
It feels like "AI" has improved speed but is in fact just cutting corners.
I think this may be the biggest concern for people building tools on the API: https://github.com/anthropics/claude-code/issues/46829
I would argue that KV caching is a net gain for Ant and a well-maintained cache is the biggest thing that can generate induced demand and a thriving third party ecosystem. https://safebots.ai/papers/KV.pdf
Can you explain why Opus 4.6 keeps coming up with stupid solutions, only arriving at a good one when you point out that it is trying to defraud you?
I have a feeling the model is playing dumb on purpose to make user spend more money.
This wasn't the case weeks ago when it was actually working decently.
What's the right way to work on a huge project then? I've just been saying "Please continue" -- that pops the quota?
* Anthropic is in some way trying to run a business (not a charity) and at least (eventually?) make money and not subsidize usage forever
* "What a steal/good deal" the $100-$200/mo plans are compared to if they had to pay for raw API usage
and less on "how dare you reserve the right to tweak the generous usage patterns you open-ended-ly gave us, we are owed something!"
If Anthropic is allowed to alter the deal whenever, then I'd expect to be able to get my money back, pro-rata, no questions asked.
I ended up buying the $100 Codex plan. So far it has been much more generous with usage and more accurate than Claude for the kind of work I do.
That said, Codex has its own issues. Its personality can be a bit off-putting for my taste. I had to add extra instructions in Agents.md just to make it less snarky. I was annoyed enough that I explicitly told it not to use the word “canonical.”
On UI/UX taste, I still think current Codex is behind the Jan/Feb era of Claude Code. Claude used to have much better finesse there. But for backend logic, hard debugging, and complex problem-solving, Codex has been clearly better for me. These days I use Impeccable Skillset inside Codex to compensate for the weaker UI taste, but it still does not quite match the polish and instinct Claude Code used to have.
I used to be a huge Claude Code advocate. At this point, I cannot recommend it in good conscience.
My advice now is simple: try the $20 plans for Codex and Cursor, and see which one matches your workflow and vibes best
I tested on a previous version (2.1.68) and it still ran into this neverending loop BUT at least the token count kept steadily increasing.
So we are seeing 1. some sort of model degradation is my guess (why it can't break a thinking loop on some problems), as well as 2. a clear drop in thinking token UI transparency.
My best guess is this is the result of the companies running "experiments" to test changes. Or it's just all in my head :)
It’s not under load either; it’s just fully downgraded. Feels more like they’re dialing in what they can get away with, but are pushing it very far.
> So we are seeing 1. some sort of model degradation is my guess (why it can't break a thinking loop on some problems), as well as 2. a clear drop in thinking token UI transparency
When I left it running overnight, it finally sent a message saying it exceeded the 64000 output token limit.
This is what I'm working on proving now.
It is more that there is a confidence score while thinking. Opus will quit if it is too high and will grind on if the confidence score is close to the real answer. Haiku handles this well too.
If you give Sonnet a hard task, it won't quit when it should.
Nonetheless, that issue has been fixed with Opus.
I'll try to show that using Opus on tasks that have medium to hard difficulty is consistently the same price or cheaper than running them with Haiku and Sonnet, while easier tasks, the busy work that is known, are cheaper to run with Haiku.
Still, in comparison with Claude Code, the quota of Codex is a much better deal. However, they should not make it worse...
At the same time, they’ve been giving out a ton of additional quota resets seemingly every other week (and committed to an additional reset for every million additional users until they hit 10mil on codex).
So they’ve really set a high bar for people’s expectations on their quota limits.
Once they drop the 2x promotion for good and stop the frequent resets, there are going to be a lot of complaints.
My experience is limited only to CC, Gemini-cli, and Codex - not Aider yet, trying different combinations of different models.
But, from my experience, CC puts everything else to shame.
How does Cursor compare? Has anyone found an Aider combination that works as well?
It was pretty much first for CLI agents and had a benchmark that was the go-to at the start of LLM coding. Now the benchmark doesn't get updated and Aider never gets a mention in discussions about CLI tools, until now.
Give it a custom sandbox and context for the work, so it has no opportunity to roam around when not required. AI agentic coding is hugely wasteful of context and tokens in general (compared to generic chat, which is how most people use AI), there's a whole lot of scope for improvement there.
It does seem like a cynical attempt to make more money.
When will people realize this is the same as vendor lock-in?
"Maybe if I spend more money on the max plan it will be better" > no it will be the same "Maybe if I change my prompt it will work" > no it will be the same "Maybe if I try it via this API instead of that API it will improve" > no it will be the same.
Claude, ChatGPT, Gemini etc all of these SOTA models are carefully trained, with platforms carefully designed to get you to pay more for "better" output, or try different things instead of using a different product.
It's to keep you in the ecosystem and keep you exploring. There is a reason you can't see the layers upon layers of scaffolding they have. And there's a reason why, 2 weeks after a major update, the model is suddenly "bad" and "frustrating". It's the same reason it's done with A/B testing: when you complain, someone else has no issues; when they complain, you have no issues. It muddies the water intentionally.
None of it is because you're doing anything wrong, it's not a skill issue, it's a careful strategy to extract as much engagement and money from customers as possible. It's the same reason they give people who buy new gun skins in call of duty easier matches in matchmaking for the first couple games.
The only mistake you made was paying MORE, hoping it would get better. It won't, that's not what makes them money. Making people angry and making people waste their time, while others have no issues, and making them explore and try different things for longer so they can show to investors how long people use these AI tools is what makes them money.
When competitors have a better product, these issues go away. When a new model is released, these issues don't exist.
I was paying a ton of money for Claude; once I stopped and cancelled my subscription entirely, suddenly Sonnet 4.6 is performing like Opus and I don't have prompts using 10% of my quota in one message despite being the same complexity.
Codex consumes way fewer resources and is much snappier.
OpenCode is great though, and can (for now) use an OpenAI subscription.
TDD was never really my natural style, but LLMs are great at generating the obvious test cases quickly. That lets me spend more of my attention on the edge cases, the invariants, and the parts that actually need judgment.
Frontend is another area where they help a lot. It’s not my strongest side, so pairing an LLM with shadcn/ui gets me to a decent, responsive UI much faster than I would on my own. Same with deployment and infra glue work across Cloudflare, AWS, Hetzner, and similar platforms.
I’m basically a generalist with stronger instincts in backend work, data modeling, and system design. So the value for me is that I can lean into those strengths and use LLMs to cover more ground in the areas where I’m weaker.
That said, I do think this only works if you’re using them as leverage, not as a substitute for taste or judgment.
Here’s what I’ve done to mostly fix my usage issues:
* Turn on max thinking on every session. It saves tokens overall because I’m not correcting it or having it waste energy on dead paths.
* Keep active sessions active. It seems like caches are expiring after ~5 minutes (especially during peak usage). When the caches expire it seems like all tokens need to be rebuilt; this gets especially bad as token usage goes up.
* Compact after 200k tokens as soon as I reasonably can. I have no data but my usage absolutely skyrockets as I get into longer sessions. This is the most frustrating thing because Anthropic forced the 1M model on everyone.
Good chance it's not real or misdiagnosed. But it gives me some degree of schadenfreude to see it happening to the Claude Code repo.
Vibes, indeed.
They also silently raised the usage that input tokens consume, so it's a double whammy.
At least up until recently the 1M model was separated into /model opus[1M]
https://github.com/anthropics/claude-code/commit/48b1c6c0ba0...
This is spot on. It would be great (and very easy for them) to have a setting where you can force compaction at a much lower value, eg 300k tokens.
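If the env var mentioned upthread does what it says on the tin, that setting effectively already exists; something like:

    CLAUDE_CODE_AUTO_COMPACT_WINDOW=300000 claude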
> * Keep active sessions active. It seems like caches are expiring after ~5 minutes (especially during peak usage). When the caches expire it seems like all tokens need to be rebuilt; this gets especially bad as token usage goes up.
Is this as opaque on their end as it sounds, or is there a way to check?
This is definitely true. Ever since I realized there is an /effort max option I am no longer fighting it that much and wasting hours.
For those not in the Google Gemini/Antigravity sphere, over the last month or so that community has been experiencing nothing short of contempt from Google when attempting to address an apparent bait and switch on quota expectations for their pro and ultra customers (myself included). [1]
While I continue to pay for my Google Pro subscription, probably out of some Stockholm Syndrome, beaten-wife-level loyalty and false hope that it is just a bug and not Google being Google and self-immolating a good product, I have since moved to Kiro for my IDE and Codex for my CLI and am as happy as a clam with this new setup.
[1] https://github.com/google-gemini/gemini-cli/issues/24937
However, I've found that the flash quota is much more generous. I have been building a trio drive FOC system for the STM32G474 and basically prompting my way through the process. I have yet to be able to run completely out of flash quota in a given five hour time window. It is definitely completing the work a lot faster than I could do myself -- mainly due to its patience with trying different things to get to the bottom of problems. It's not perfect but it's pretty good. You do often have to pop back in and clean up debris left from debugging or attempts that went nowhere, or prompt the AI to do so, but that's a lot easier than figuring things out in the first place as long as you keep up with it.
I say this as someone who was really skeptical of AI coding until fairly recently. A friend gave me a tutorial last weekend, basically pointing out that you need to instruct the AI to test everything. Getting hardware-in-loop unit tests up and running was a big turning point for productivity on this project. I also self-wired a bunch of the peripherals on my dev board so that the unit tests could pretend to be connected to real external devices.
I think it helps a lot that I've been programming for the last twenty years, so I can sometimes jump in when it looks like the AI is spinning its wheels. But anyway, that's my experience. I'm just using flash and plan mode for everything and not running out of the $20/mo quota, probably getting things done 3x as fast as I could if I were writing everything myself.
There's a lot of angles you take from that as a starting point and I'm not confident that I fully understand it, so I'll leave it to the reader.
The parent's argument is that the marginal cost of inference is minimal. However, the fundamental flaw is that he's separating inference from the high cost frontier models. It's a cross-subsidy that can't be ignored.
IMO they need as many users before their IPO - then the changes will really begin.
I'm dying to see S-1 filing for Anthropic or OpenAI. I don't actually think inference is as cheap as people say if you consider the total cost (hardware, energy, capex, etc)
1. the 80% margin from 2025 was theoretical,
2. they're relying on distillation/synthetic data for training,
3. and have been very opaque about cross-subsidization of R&D with their models.
The distillation alone adds a big asterisk for comparisons.
Huh?
The reddit summary comment makes no sense. How are they getting revenues without ads or paying customers?
"After" makes more sense.
FTA:
>The company has yet to show a profit and is searching for ways to make money to cover its high computing costs and infrastructure plans.
You also can't put ads in code completion AIs because the instant you do the utility to me of them at work drops to negative. Guess how much money companies are going to pay for negative-value AIs? Let's just say it won't exactly pay for the AI bubble. A code agent AI puts an ad for, well, anything and the AI accidentally puts it into code that gets served out to a customer and someone's going to sue. The merits of the case won't matter, nor the fact the customer "should have caught it in review", the lawsuit and public reputation hit (how many people here are reading this and salivating at the thought of being able to post an angrygram about AIs being nothing but ad machines?) still cost way too much for the AI companies creating the agents to risk.
Valuations have already reached the point where these companies can run their own nuclear power stations, fund development of new hardware and techniques, and boost the capabilities of their models by 10x.
That's also ignoring that nuclear power plants also consume quite a bit of water, which may be a more difficult bottleneck in and of itself even without trying to add nuclear into the mix.
How many companies will generate profit in the end, and what will happen with all those power stations and data centers?
A huge difference is early computers were not subsidized. It took decades until most people could afford to own a computer at home.
Can confirm, I initially enjoyed the 5-hour limits on Gemini CLI and Antigravity so much that I paid for a full year, thinking it was a great decision
In the following months, they significantly cut the 5-hour limits (not sure if they even exist anymore), introduced the unrealistically bad weekly limit that I can fully consume in 1-2 hours, introduced the monthly AI credits system, and added ads to upgrade to Ultra everywhere.
At the very least the Gemini mobile app / web app is still kinda useful for project planning and day-to-day use I guess. They also bumped the storage from 2TB to 5TB, but I don't even use that
Unfortunately, at least for those of us in the US, there isn't legally much that can be done. It's simply not possible to make a contract that would obligate a company to fulfill its promises on this type of sale.
Looks like enshittification on steroids, honestly.
I only did the $20/month subscription since 9/2025
It was great for about 5 months, amazing in fact. I under utilized it.
For the past month, it’s basically unusable, both Claude code and just Claude chat. 1-2 prompts and I’m out. Last week I prob sent a total of 15 messages to Claude and was out of daily and weekly usage each day.
I get that the $20/month subscription isn’t a money maker for them, and they probably lose money. But the experience of using Claude has been ruined
"effortLevel": "high",
"autoUpdatesChannel": "stable",
"minimumVersion": "2.1.34",
"env": {
"DISABLE_AUTOUPDATER": 1,
"CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING": 1
}
I also had to: 1. Nuke all other versions within ~/.local/share/claude/versions/ except 2.1.34. 2. Link ~/.local/bin/claude -> ~/.local/share/claude/versions/2.1.34.
This seems to have fixed my problem of running out of quota quickly. I have periods of intense use (nights, weekends) and no use (day job). Before these changes, I was running out of quota rather quickly. I am on the same $100 plan.
I am not sure the adaptive thinking setting is relevant for this version, but in the future it will help once they fix all the quota & cache issues. Seriously thinking about switching to Codex though. Gemini is far behind from what I have tried so far.
    export CLAUDE_CODE_MAX_OUTPUT_TOKENS=64000
    export MAX_THINKING_TOKENS=31999
    export DISABLE_AUTOUPDATER=1
    export CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING=1
> The March 6 change makes Claude Code cheaper, not more expensive. 1h TTL for every request could cost more, not less
Feels very AI.

> Restore 1h as the default / expose as configurable? 1h everywhere would increase total cost given the request mix, so we're not planning a global toggle.
They won't show a toggle because it will increase costs for some unknown percentage of requests?
There must be a better way to do this. The consumer option is the pricing difference. If they’d make cache writes the same price as regular writes, that would solve the whole problem. If you really want to push it, use that pricing only for requests where number of cache hits > 0 (to avoid people setting this flag without intent to use it), and you solved the whole issue.
And if you can't stomach OpenAI, GLM 5.1 is actually quite competent. About Opus 4.5 / GPT 5.2 quality.
Anthropic sells you 'knowledge' in the form of 'tokens' and you spend money rolling the dice, spinning the roulette wheels and inserting coins for another try. They later add limits and dumb down the models (which are their gambling machines), so you pay for the wrong answers.
Once you hit your limit or Anthropic changes the usage limits, they don't care and halt your usage for a while.
If you don't like any of that, just save your money and use local LLMs instead.
Fair transactions involve fair and transparent measurements of goods exchanged. I'm going to cancel my subscription this month.
Running non-deterministic software for deterministic tasks is still an area where efficiency can improve.
I'm curious what are people doing that is consuming your limits? I can't imagine filling the $200 a month plan unless I was essentially using Claude code itself as the api to mass process stuff? For basic coding what are people doing?
As of now, I’m consistently hitting my 5 hour limit in less than 1 hour during N/A business hours. I’m getting to the point where I basically can’t use CC for work unless I work very early or late in the day.
"Usage remains unchanged" between 8am and 2pm.
I feel the Claude subreddits are mostly full of speculation and dramatics, not much productive discussion, like endless exaggerated complaining about downtime. Pretty much the same as a pretty significant chunk of reddit nowadays.
Edit: the rumor was probably stemming from this https://www.theregister.com/2026/03/26/anthropic_tweaks_usag...
It does look pretty bad, especially not announcing it on a primary channel, but also they claim it's balanced out by efficiency gains and would affect 7% of users overall and 2% of 20x users.
This is an official post by Anthropic:
> Your weekly limits remain unchanged. During peak hours (weekdays, 5am–11am PT / 1pm–7pm GMT), you'll move through your 5-hour session limits faster than before. Overall weekly limits stay the same, just how they're distributed across the week is changing.
I'm Eastern time and peak usage works out as 8am-2pm (the bulk of my work day). It's nice that Europe gets to use it in the morning and Pacific gets to use it in the afternoon, but this is completely bullshit and infuriating. I would have no problem if it were 2x outside peak but that's NOT what they're saying.
If you start to parallelize and you have permission prompts on you're likely missing cache windows as well.
Think Twitter's fail-whale problems. Sometimes you are lucky, sometimes you aren't. Why? We won't know until Anthropic figures it out and from the outside it sure looks like they're struggling.
Either they decimated the limits internally, or they broke something.
Tried all the third-party tricks (headroom, etc.), switched to 200k context window, switched back to 4.5.
I hope 4.5 will help, but the rest of the efforts didn’t move the needle much
I suspect I was getting rate limited very aggressively on Thursday last week. It honestly infuriated me, because I'm paying $200 a month for this thing. If it's going to rate limit me, at least tell me what it's doing instead of just making it seem like it's taking 12 hours to run through something that I would expect to be 15 minutes. The worst part is that it never even finished it.
My gut feeling is this is not enough money for them by far (not to mention their investors), and we'll eventually get ratcheted up inline with dev salaries. E.g. "look how many devs you didn't have to hire", etc.
I have a day job, a side business, actively trade shares options and futures, and have a few energy credit items.
All were given the same copied folder containing all the needed documents to compose the return, and all were given the same prompt. My goal was that if all three agreed, I could then go through it pretty confidently and fill out the actual submission forms myself.
5.4 nailed it on the first shot. Took about 12 minutes.
3.1 missed one value, because it decided to only load the first 5 pages of a 30 page document. Surprisingly it only took about 2 minutes to complete though. A second prompt and ~10 seconds corrected it. GPT and Gemini now were perfectly aligned with outputs.
4.6 hit my usage limit before finishing after running for ~10 minutes. I returned the next day to have it finish. It ran for another 5 minutes or so before finishing. There were multiple errors and the final tax burden was a few thousand off. On a second prompt asking to check for errors in the problem areas, it was able to output matching values after a couple more minutes.
For my first time using CC and 4.6 (outside of some programming in AG), I am pretty underwhelmed given the incessant hype.
My only point here is it sure seems the same activity / use case can have wildly different results across sessions or users. Customer support and product development in the age of non-deterministic software is a strange, strange beast.
Obviously, accounting is "spreadsheet math" intensive, so Claude wrote some python scripts for that which kept the math very stable. But there were some complex nuances that had taken the accountant and I quite a bit of work to track down and clarify. Claude quickly had a very accurate read on the situation and knew all the right clarifying questions.
I'm not yet ready to ever sign a return that's been entirely AI prepared, but I left the exercise pretty impressed.
https://docs.github.com/en/copilot/concepts/billing/copilot-...
This clearly isn't true for agentic mode though. This document is extremely misleading. VSCode has the `chat.agent.maxRequests` option which lets you define how many requests an agent can use before it asks if you want to continue iterating, and the default is not one. A long running session (say, implementing an openspec proposal) can easily eat through dozens of requests. I have a prompt that I use for security scanning and with a single input/request (`/prompt`) it will use anywhere between 17 and 25 premium requests without any user input.
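If you want to rein that in, the option mentioned above lives in VS Code's settings.json; a lower cap is blunt but at least makes the iteration count visible (the value here is just illustrative):

    "chat.agent.maxRequests": 10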
The overall context windows are smaller with Copilot I believe, but it doesn't appear to be hurting my work.
I'm using it for approx 4 hours a day most days. Generally one-shotting fun ideas I thoroughly plan out in planning mode first, and I have my own version of the idea -> plan -> analyse -> document implementation phases -> implement via agent loop. Simulations, games, stuff I'm curious about, and resurrecting old projects that never really got off the ground.
Yet, there must obviously be something different for so many people to be reporting these issues.
I feel for the Anthropic devs that have to deal with this, having to figure out what setup everyone has, what their usage patterns are to filter out the valid reports, and then also deal with the backlash from people that were just pulling obvious footguns like having a ton of skills/MCPs polluting their context window.
https://www.reddit.com/r/ClaudeAI/comments/1s4idaq/update_on...
It’s been unusable for me as my daily coding agent. I run out of credits in the pro account in an hour or so. Before that I had never reached the session limit. Switched back to Junie with Gemini/chatgpt.
Now a single question consistently uses around 15% of my quota
Once people won't be able to think anymore and businesses expect the level of productivity witnessed before, we will have no choice but to cough up whatever providers bill us.
Is that bad? After all, even if they hiked the price to infinity, you wouldn't be worse off than if AI didn't exist, because you could still code by hand. Moreover, if it's really in a "business" (employment?) context, the tools should be provided by your employer, not least for compliance/security reasons. The "expectation" angle doesn't make sense either. If it's actually more efficient than coding by hand, people will eventually adopt it, word will get around and expectations will rise irrespective of whether you used it or not.
My argument was not about AI. Rather about the practice of Anthropic and the likes.
This was addressed by the words that you perhaps mistakenly omitted from your quote:
> Once people won't be able to think anymore...
People who aren't able to think anymore, can't still code by hand. Think "Idiocracy".
OpenAI and Anthropic have been getting stingy with their plans, and it's only been what, 1 year, maybe 2, since vibecoding was widely used in a professional context (i.e. not just hacking together an MVP for a SaaS side hustle in a weekend)? I doubt people are going to lose their ability to think in that timespan.
Online advertising is now ubiquitous, terrible, and mandatory for anyone who wants to do e-commerce. You can't run a mass-market online business without buying Adwords, Instagram Ads, etc.
AI will be ubiquitous, and then it will get worse and more expensive. But we will be unable to return to the prior status quo.
I suspect more customers are lost a lot faster when you increase prices, compared to enshittifying the product. It's also a lot more directly attributable to an action, and thus easier for an executive to be blamed if they choose the former over the latter.
It occurred to me that an outright rejection of these tools is brewing but can't quite materialise yet.
OP wrote "I pay for the lowest plan", so that’s the $20/mo one.
How is this normal?
But the opacity itself is a bit offensive to me. It feels shady somehow.
a) quotas will get restricted
b) the subscription plan prices will go up
c) all LLMs will become good enough at coding tasks
I just open sourced a coding agent https://github.com/dirac-run/dirac
The entire goal is to be token efficient (over 50% cheaper), and by extension, take advantage of LLM's better reasoning at shorter context lengths
This really started as an internal side project that made me more productive, I hope it will help others too. Apache 2.0
Currently it still can't compete with the subsidized coding plan rates using Anthropic API pricing though (even though it beats CC while both use an API key), which tells me that all subscription plan operators are losing money on such plans.
Opus is not worth the moat, there are multiple equivalent models, GLM 5.1 and Kimi K2.5 being the open ones, GPT 5.4 and Gemini 3.1 Pro being closed. https://llm-stats.com/ https://artificialanalysis.ai/leaderboards/models https://benchlm.ai/
Even API use (comparatively expensive) can be cheaper than Anthropic subscriptions if you properly use your agents to cache tokens, do context-heavy reading at the beginning of the session, and either keep prompt cache alive or cycle sessions frequently. Create tickets for subagents to do investigative work and use smaller cheaper models for that. Minimize your use of plugins, mcp, and skills.
Use cheaper models to do "non-intelligent" work (tool use, searching, writing docs/summaries) and expensive models for reasoning/problem-solving. Here's an example configuration: https://amirteymoori.com/opencode-multi-agent-setup-speciali... A more advanced one: https://vercel.com/kb/guide/how-i-use-opencode-with-vercel-a...
I've been using the free model via chat from the beginning. This is the first time I'm seriously considering moving away from Claude. Before last month, Claude's Sonnet model was consistent in quality, but now the responses are all over the place. It's hard to replicate the issue as it happens once in a while. I rarely encountered hallucinations from Claude models with questions from my domain; however, since last month I have observed an abundance of them.
I am getting bored of having to plan my weekends around quota limit reset times...
To try things out you can use llama.cpp with Vulkan or even CPU and a small model like Gemma 4 26B-A4B or Gemma 4 31B or Qwen 3.5 35-A3B or Qwen3.5 27B. Some of the smaller quants fit within 16GB of GPU memory. The default people usually go with now is Q4_K_XL, a 4-bit quant for decent performance and size.
https://huggingface.co/unsloth/gemma-4-26B-A4B-it-GGUF
https://huggingface.co/unsloth/gemma-4-31B-it-GGUF
https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF
https://huggingface.co/unsloth/Qwen3.5-27B-GGUF
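As a concrete starting point, serving one of those quants with llama.cpp's bundled server looks something like this (the file name is illustrative, matching the quant naming above; -ngl offloads layers to the GPU and -c sets the context size):

    llama-server -m gemma-4-26B-A4B-it-Q4_K_XL.gguf -ngl 99 -c 32768 --port 8080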
Get a second hand 3090/4090 or buy a new Intel Arc Pro B70. Use MoE models and offload to RAM for best bang for your buck. For speed try to find a model that fits entirely within VRAM. If you want to use multiple GPUs you might want to switch to vLLM or something else.
You can try any of the following models:
High-end: GLM 5.1, MiniMax 2.7
Medium: Gemma 4, Qwen 3.5
https://unsloth.ai/docs/models/minimax-m27
https://unsloth.ai/docs/models/glm-5.1
https://unsloth.ai/docs/models/gemma-4
You can run smaller models on much more modest hardware but they aren't yet useful for anything more than trivial coding tasks. Performance also really falls off a cliff the deeper you get into the context window, which is extra painful with thinking models in agentic use cases (lots of tokens generated).
I don't have metrics, so I could be imagining this, or finally noticing extra lag of the Claude Code client. On the other hand, the API was giving me range anxiety, I won't be pushing a 300k context window into that anytime soon, like I occasionally need to do in Claude Code.
Please unsubscribe to these services and see how they perform:
"Maybe if I spend more money on the max plan it will be better" > no it will be the same "Maybe if I change my prompt it will work" > no it will be the same "Maybe if I try it via this API instead of that API it will improve" > no it will be the same.
Claude, ChatGPT, Gemini etc all of these SOTA models are carefully trained, with platforms carefully designed to get you to pay more for "better" output, or try different things instead of using a different product.
It's to keep you in the ecosystem and keep you exploring. There is a reason you can't see the layers upon layers of scaffolding they have. And there's a reason why, 2 weeks after a major update, the model is suddenly "bad" and "frustrating". It's the same reason it's done with A/B testing: when you complain, someone else has no issues; when they complain, you have no issues. It muddies the water intentionally.
None of it is because you're doing anything wrong, it's not a skill issue, it's a careful strategy to extract as much engagement and money from customers as possible. It's the same reason they give people who buy new gun skins in call of duty easier matches in matchmaking for the first couple games.
Stop paying more, stop buying these pro max plans, hoping it will get better. It won't, that's not what makes them money. Making people angry and making people waste their time, while others have no issues, and making them explore and try different things for longer so they can show to investors how long people use these AI tools is what makes them money.
When competitors have a better product, these issues go away. When a new model is released, these issues don't exist.
I was paying a ton of money for Claude; once I stopped and cancelled my subscription entirely, suddenly Sonnet 4.6 is performing like Opus and I don't have prompts using 10% of my quota in one message despite being the same complexity.
I am tired of all the astroturf articles meant to blame the user with “tips” for using fewer tokens. I never had to (still don’t) think of this with Codex, and there has been a massive, obvious decline between Claude 1 month ago and Claude today.
We're generating all of the code for swamp[1] with AI. We review all of that generated code with AI (this is done with the anthropic API.) Every part of our SDLC is pure AI + compute. Many feature requests every day. Bug fixes, etc.
Never hit the quota once. Something weird is definitely going on.
But for people who go >5 minutes between prompts and get no cache hits, usage is eaten up quickly. Especially when passing in hundreds of thousands of tokens of conversation history.
I know my quota goes a lot further when I sit down and keep sessions active, and much less far when I’m distracted and let it sit for 10+ minutes between queries.
It’s a guess. But n=1 and possible confirmation bias noted, it’s what I’m seeing.
What it does for you is simple: if you want to automate something, it does. Load the AI harness of your choice, tell it what to automate, and swamp builds extensions for whatever it needs in order to accomplish your task.
It keeps a perfect memory of everything that was done, manages secrets through vaults (which are themselves extensions it can write) and leaves behind repeatable workflows. People have built all sorts of shit - full vm lifecycle management, homelab setups, manage infrastructure in aws and azure.
What's also interesting is the way we're building it. I gave a brief description in my initial comment.
The sociotechnical stuff with System Initiative was made by your CEO? The guy who is really into music? And I don't even know how long that product was a thing before the pivot. Not long!
System Initiative was a thing for ~6.5 years. I talked to every person who ever used it or was interested in using it in the last 2.5 years. Thousands of them.
Swamp is better by every metric; has a lot more promise, is a lot more interesting.
I'm using another tool, not claude code, but I don't think that matters much.
There's this honeymoon period with Claude you experience for a month or two followed by a trough of disillusionment, and then a rebound after a model update (rinse and repeat). It doesn't help that Anthropic is experiencing a vicious compute famine atm.
It’s further frustrating that I have committed to certain project deadlines knowing that I’d be able to complete it in X amount of time with agent tooling. That agentic tooling is no longer viable and I’m scrambling to readjust expectations and how much I can commit to.
Add to that the fact that we are being taken for fools with dramatic announcements and FOMO messages. I even suspect some reaction farms are going on to boost posts from people boasting about Claude models.
These don't happen for Codex. Nor for Mistral. Nor for DeepSeek. It can't just be that Claude Code is so much better.
There are open weight models that work perfectly fine for most cases, at a fraction of the cost. Why are more people not talking about those? Manipulation.
I often compare with Gemini. Sure, those Google servers are super fast. But I can't see that it's better. Qwen and DeepSeek simply work better for me.
Haven't tested Mistral in a while, you may be right.
People try out and feel comfortable using U.S. models (I can see the logic), but mostly for brand recognition. Anthropic and OpenAI are the best, aren't they? When the models jam, they blame themselves.
For context, with Google AI Pro, I can burn through the Antigravity weekly limit in 1-2 hours if I force it to use Gemini 3.1 Pro. Meanwhile Gemini 3 Flash is basically unlimited but frequently produces buggy code or fails to implement things how I personally would (felt like it doesn't "think" like a software dev).
I also tried VS Code + Cline + OpenRouter + MiniMax M2.7. It's quite cheap and seems to be better than Gemini 3 Flash, but it gets really pricey as the context fills up, because prompt caching is not supported for MiniMax on OpenRouter. The result itself usually needs 3-6 revisions on average, so the context fills up pretty often.
Eventually I got Claude Max 5x to try for a month. VS Code + Claude Code extension on a ~15k lines codebase, model set to "Default", and effort set to "Max". So far it's been really good: 0-2 revisions on average, and most of the time it implements things exactly how I would or better. And, like I said, I can only consume 40-60% of the 5-hour limits no matter how hard I try
Granted, I'm not forcing it to use Opus like OP (nor do I use complicated skills or launch multiple tasks at the same time), but I feel like they really nailed the right balance of when to use which model and how to pass context between them. Or at least enough that I haven't felt the need to force it to use Opus all the time.
it has been reported that it behaves very differently depending on those factors, presumably because people are placed in best-effort buckets, who knows
The thing is, if it's going to be this expensive it's not going to be worth it for me. Then I'll rather do it myself. I'm never going to pay for a €100 subscription, that's insane. It's more than my monthly energy bill.
Maybe from a business standpoint it still makes sense because you can use it to make money, but as a consumer no way.
Ask claude code to give you all the memories it has about you in the codebase and prune them. There is a very high chance that you have memories in there which are contradicting each other and causing bad behavior. Auto-saved memories are a big source of pollution and need to be pruned regularly. I almost don't let it create any memories at all if I can help it.
Disclaimer: I'm also burning through usage very quickly now - though for different reasons. Less than 48 hours to exhaust an account, where it used to take me 5-6 days with the same workload.
To be fair I have a pretty loose harness and pattern but it’s been enough to pull in 20k in bounties a month for a long time without going over plan with very little steering (sometimes days of continuous work)
That being said I’ve figured this was coming for a long time and have been slowly moving to local models. They’re slower but with the right harnesses and setup they’re still finding much the same amount in bounties.
I still review and make a decision about every report though.
In contrast I think a lot of people are just pointing agents at websites and then telling them to create and send a report which is a great way to produce trash and a ban.
In theory the /stats command tells you how many tokens you've used, which you could use to compute how much you are getting for your subscription, but in practice it doesn't contain any useful info, it may be counting what is printed to the terminal or something - my stats suggest my claude code usage is a tiny amount of tokens, but they must be an extremely underestimated token count, or they are charging much more for the subscription than the API per token (which is not supposed to be the case).
Last week's free extra usage quota shed some light on this. It seems like the reported tokens are probably between 1/30th and 1/100th of the actual tokens billed, from looking at how they billed (/stats went up 10k tokens and I was billed $7.10). With the API it should be $25 for a million tokens.
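Back-of-envelope on those numbers, taking the figures in that comment at face value:

    reported_tokens  = 10_000    # what /stats showed going up during the free-quota week
    billed_dollars   = 7.10      # what the quota accounting charged for that usage
    api_usd_per_mtok = 25.0      # the assumed blended API price quoted above

    implied_tokens = billed_dollars / api_usd_per_mtok * 1_000_000  # ~284,000 tokens
    print(reported_tokens / implied_tokens)  # ~0.035, i.e. /stats reports roughly 1/30th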
For general queries and investigation I will use whatever public/free model is available without being logged in. Not having a bunch of prior state stacked up all the time is a feature for me. This is essentially my google replacement.
For very specific technical work against code files, I use prepaid OAI tokens in VS copilot as a "custom" model (it's just gpt5.4).
I burn through maybe $30 worth of tokens per month with this approach. A big advantage of prepaying for the API tokens is that I can look at everything copilot is doing in my usage logs. If I use the precanned coding agent products, the prompts are all hidden in another layer of black box.
Taking a second opinion has significantly helped me design the system better, and it helped me uncover my own and Claude's blind spots.
Also, agreed that it spends and wastes a lot of tokens on web search and often gets stuck in loops.
Going forward I will always use all three of them. My main coding agent is still Claude for now, but I'm happy to see this field evolving so fast; it's easy to switch and use others on the same project.
No network effects or lock-in for the customer. Great to be living in this period of time.
Anthropic is not incentivized to reduce token use, only to increase it, which is what we are seeing with Opus 4.6, and now they are putting the screws on.
On the flip side: using Opus with a Baby Billy Freeman persona has never been more entertaining.
For something I spend all my time using- I’d rather iterate with Claude. The personality makes a big difference to me.
Honestly when I get codex to review the work that Claude does (my own or my coworker's) it consistently finds terrible terrible bugs, usually missing error handling / negative conditions, or full on race conditions in critical paths.
I don't trust code written by Claude in a production environment.
All AI code needs review by a human, and often by other AIs, but Opus 4.6 is the worst. It's way too "yeet".
The opus models are for building prototypes, not production software.
GPT 5.4 in codex is also way more efficient with tokens or budget. I can get a lot more done with it.
I don't like giving money to sama, but I hate bugs even more.
It does seem like this new routing is worse for the consumer in terms of code quality and token usage somehow.
But like most challenges with Claude, if you can just express them clearly, there are usually ways of optimizing further.
Since then, I've been seeing increased critique of Anthropic in particular (several front-page posts on HN, especially in the past few days), either due to it being nerfed or just straight up eating up usage quota (which matches my personal experience). It appears that we're once again getting hit by enshittification of sorts.
Nowadays I rely a lot on LLMs on a daily basis for architecture and writing code, but I'm so glad that majority of my experience came from pre-AI era.
If you use these tools, make sure you don't let it atrophy your software engineering "muscles". I'm positive that in long run LLMs are here to stay. The jump in what you can now self-host, or run on consumer hardware is huge, year after year. But if your abilities rely on one vendor, what happens if you come to work one day and find out you're locked out of your swiss army knife and you can no longer outsource thinking?
I don’t understand why people insist on these subscriptions and CC.
Fanboyism is a bit too hardcore at this point. Apple fanboys look extremely prudent compared to this behavior.
So you just aren't in the same realm of usage. Maybe that is why you don't understand?
I strongly believe Google's legs will allow it to sustain this influx of compute and still not do the rug-pull that OAI or Anthropic will be forced to do as more people come onboard the code-gen use case.
What I wish for right now is for open-weight models and hardware companies (looking at you Apple) to make it possible to run local models with Opus 4.6-level intelligence.
@Anthropic I've cancelled my subscription. Good luck :)
It is hard now to hit the limit...
No FOMO
What I did instead is tune the prompt for gemma 4 26b and a 3090. Worked like a charm. Sometimes you have to run the main prompt and then a refinement prompt or split the processing into cases but it’s doable.
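For anyone wondering what "main prompt then refinement prompt" looks like in practice, here's a minimal sketch against a local OpenAI-compatible server; the endpoint, model name, filenames, and prompt wording are placeholders, not my actual setup:

    import requests

    URL = "http://localhost:8080/v1/chat/completions"  # llama.cpp/vLLM-style local server (assumption)
    MODEL = "gemma-26b"                                 # placeholder model name

    def ask(prompt: str) -> str:
        # Standard OpenAI-style chat completion request against the local server.
        r = requests.post(URL, json={"model": MODEL,
                                     "messages": [{"role": "user", "content": prompt}]})
        r.raise_for_status()
        return r.json()["choices"][0]["message"]["content"]

    # Stage 1: main prompt does the bulk of the work.
    draft = ask("Summarize the bug report below and propose a fix:\n" + open("report.txt").read())
    # Stage 2: refinement prompt tightens the first pass.
    final = ask("Tighten this, remove speculation, keep only actionable steps:\n" + draft)
    print(final)

Splitting into cases just means routing to different stage-1 prompts before the same refinement step.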
Now I’m waiting for anyone to put up some competition against NVIDIA so I can finally be able to afford a workstation GPU for a price less than a new kidney.
I’ve moved away from Claude and toward open-source models plus a ChatGPT subscription.
That setup has worked really well for me: the subscription is generous, the API is flexible, and it fits nicely into my workflow. GPT-5.4 + Swival (https://swival.dev) are now my daily drivers.
Either you are using it wrong or you are working in a totally different field.
> As the Codex promotion on Plus winds down today
Any highlights you can share here? I'm always looking to improve my setup.
Especially when it's on purpose.
Cache reads cost $0.31
Cache writes cost $105
Input tokens cost $0.04
Output tokens cost $28.75
The total spent in the session is $134.10, while the Pro Max 5x subscription is $100.
Even taking Anthropic's API pricing, we arrive at $80.58. Below the subscription price, but not by much.
It's just the end of the free tokens, nothing to see here. It's easy to feel like you're doing "moderate" or even "light" usage because you use so few input tokens, but those "agentic workflows" are simply not viable financially.
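For what it's worth, those line items do add up; a quick sanity check with the figures exactly as given above:

    # Figures copied from the comment above; nothing here is an official price list.
    cache_reads, cache_writes = 0.31, 105.00
    input_tokens, output_tokens = 0.04, 28.75
    subscription = 100.00

    total = cache_reads + cache_writes + input_tokens + output_tokens
    print(f"session total:  ${total:.2f}")          # $134.10
    print(f"subscription:   ${subscription:.2f}")   # cache writes alone exceed it

Note that cache writes dominate everything else combined.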
They inflated how much their tools burn tokens from day one, pretty much. Remember all the stupid research and reports Claude always wanted to do, no matter what you asked it? Other tools are much smarter, so this is not such a big deal.
More importantly, these moves tend to reverberate in the industry, so I expect others will clamp down on usage a lot, and this will spoil my joy of using AI without counting every token.
Burning tokens doesn't just waste your allotment, it also wastes your time. This gave rise to the turbo offering, where you get responses faster but burn 2x your tokens.
Probably a combination of it being vibe-coded shit and something in the backend, I expect.
But for high-ish quality translations of substantive texts, you typically want a harness that's pretty different from Claude Code. You want a glossary of technical terms or special names, a structured summary of the wider context, a concise style guide, and you have to chop the text into chunks to ensure nothing is missed. Even with super long context models, if you ask them to translate much at once they just translate an initial portion of it and crap out.
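Roughly, the harness ends up looking like this; the chunking rule, prompt wording, and glossary format here are just illustrative assumptions, not a specific tool:

    # Split a long source text into chunks and build one translation prompt per chunk,
    # each carrying the glossary, context summary, and style guide.
    def build_prompts(text: str, glossary: dict[str, str], summary: str,
                      style: str, chunk_chars: int = 4000) -> list[str]:
        paragraphs = text.split("\n\n")
        chunks, current = [], ""
        for p in paragraphs:
            # Start a new chunk when the current one would grow past the budget.
            if current and len(current) + len(p) > chunk_chars:
                chunks.append(current)
                current = ""
            current += p + "\n\n"
        if current:
            chunks.append(current)

        gloss = "\n".join(f"{src} -> {dst}" for src, dst in glossary.items())
        return [
            f"Context summary:\n{summary}\n\nStyle guide:\n{style}\n\n"
            f"Glossary (always use these renderings):\n{gloss}\n\n"
            f"Translate the following chunk completely, omitting nothing:\n{chunk}"
            for chunk in chunks
        ]

Then you send each prompt separately and stitch the outputs back together, which is what keeps the model from silently dropping the tail of the text.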
Are you using it for localization or short strings of text in an app? I wonder what you can do to get better results out of smaller models. I'm confident there's a way.
Demand is higher than supply; it is just the start of the bubble.
Everyone and their dog is burning tokens on stupid shit that would be freed up if they instead asked for deterministic code for the task and then ran that code. OpenAI and Anthropic are cutting free use and decreasing limits because they are not able to meet the demand.
When the general public catches up with how to really use it, demand will fall and today's built-out supply will become oversupply. That's when the bubble will burst.
I say 5 more years.
However, his response gaslights us, because the OP's math in his opening post demonstrates this is not true: it shows reads 26x higher, so at least in his case the cache is not doing what the Anthropic employee describes.
Clearly we are being charged for less optimization here and being given the message (from my perspective, by Anthropic) that if you are in a special situation your needs don't matter and we will close your thread without really listening.
It's also in the interest of the users to keep certain params private; we are meant to deduce that. Did you not?
During core US business hours, I have to actively keep a session going or I risk a massive jump in usage while the entire thread rebuilds. During weekend or off-hours, I never see the crazy jumps in usage - even if I let threads sit stale.
Are there any other $50B+ Valuation companies that care about special situations? If so, who?
This is by design, of course. Anyone who has been paying even the slightest bit of attention knows these subscriptions are not sustainable, and the prices will have to go up over time. Quietly reducing the usage limits that they were never specific about in the first place is much easier than raising the prices of the individual subscription tiers, with the same effect.
If you want to know what kind of prices you'll be paying to fuel your vibe coding addiction in a few years, try out API pricing for a bit, and try not to cry when your $100 credit is gone in 2 days.
> People need to understand a few things: vague questions make the models roam endlessly “exploring” dead ends.
> If people were considerably more willing to aggressively prune their context and scope tasks well, they could get a lot more done with it
If this were the problem, people would've encountered this when they started using Claude. The problem is not that they can't get anything done. It's being able to get things done for months, but suddenly hitting rate limits way too easily and response quality being clearly degraded, so they can't get things done that used to be possible.
The ecosystem is evolving super quickly, so our own experiences and workflows must keep adapting with it: experiment, find the limitations, and arrive at the "tightest possible scope" that still allows you to get things done, because it is possible.
Another example: the pre-paid monthly subscription aggregates usage across the web app and Claude Code. So if you're checking holiday itineraries over your lunch break, then decide to sit down and ask a team of agents to refactor a giant codebase with hundreds or thousands of files, context will be exhausted quickly.
I see this "context economy" as a new way of managing your "mental models": every token counts, and every token must bear its weight for the task at hand, otherwise, I'm "wasting budget". I am also still learning how to operate in this new way of doing things, and, while there have been genuine issues with Claude Code, not every single issue that people encounter is an upstream problem.
No they can't. When I buy an annual subscription and prepay for the year, they can't just go "ok now you get one token a month" a day in. I bought the plan as I bought it. They can't change anything until the next renewal.
So no new models, no new features?
If they're selling me compute and bundling the features in, they better not go back on the compute I paid for.
If your limits stay "the same", but you then use Opus 4.6, your quota will be exhausted much faster, it's just how it works.
Note that some features are simply NOT made for these Pro, Max, Max 5x or whatever pre-paid plans. I'm pretty sure this is by design and not an accident or a bug: If you have 6/7 MCP servers configured or if you want to use this new feature of "Agent Teams", you will exhaust your entire quota before ANY work is even done. This is not a bug. Each agent has its own context window and tools and they all count separately.
MCP servers, when active, add A LOT of context to your sessions before you even use them, etc, etc.
It feels to me that people want to have their cake and eat it too, but that would NOT be a sustainable business model. You cannot complain about the tools if you don't understand them in depth.
I want to state that I don't think Anthropic are fully aware of the ramifications that ANY small change in ANY of their models might have, because their entire ecosystem is a bit messy atm, but I'm certain they're aware that if people don't like it, they will cancel the subscription and flock to a competitor very quickly, since there's no real moat anymore. So it's in their own interest to keep things minimally usable even on the "cheaper plans".
I have seen people with 5-10 "active MCP servers" that they "wanted to try out", then they forget about them and wonder why their context is always full... C'mon, that's almost bad faith.
I don't fully defend Anthropic, as they've had several issues with degraded model quality after releasing "the latest model", and CLI usability issues that cost me real money and real tokens, so there's a lot of room for improvement. But a claim that quota gets exhausted after 1h points either to some forgotten MCP servers, skills, or giant files being accidentally read in, or to some sort of misuse that these limits were put in place to prevent.
There's a very thin line between "quota is exhausted in a regular, normal session after 1h and I think there's a bug" and "I had 3-4 MCP servers active that I'm not using at all but forgot to disable, and my CLAUDE.md file is 1,000 lines..."
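If you want to audit that quickly, something along these lines helps; it assumes project-scoped servers live in a .mcp.json at the repo root under an "mcpServers" key, which is how I've seen them configured, but your setup may differ:

    # Rough sketch: list whatever MCP servers are configured for this project
    # so you can spot the ones you forgot about. File name and key are assumptions.
    import json
    from pathlib import Path

    cfg = Path(".mcp.json")
    if cfg.exists():
        servers = json.loads(cfg.read_text()).get("mcpServers", {})
        print(f"{len(servers)} MCP server(s) configured:")
        for name, spec in servers.items():
            print(f"  - {name}: {spec.get('command', spec)}")
    else:
        print("no project-scoped .mcp.json found")

Anything in that list you haven't used in weeks is context you're paying for on every session.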
I guess this is fitting when the person who submitted the issue is in "AI | Crypto".
Well, there's no crying at the casino when you exhaust your usage or token limit.
The house (Anthropic) always wins.