In other words this is about Anthropic subsidizing their own tools to keep people on their platform. OpenClaw is just a good cover story for that. You can maximize plans just as easily w/ /loop. I do it all the time on max 20x. The agent consuming those tokens is irrelevant.
For what it's worth I don't use OpenClaw and don't intend to, but I do use claude -p all the time.
You're paying for that limit but only able to use it some of the time. There are 5-hour windows while you're asleep when you can't use it. There are weekend limits.
Theoretically you can max out every 5 hour window, but they lose money on that.
It's structured so users can have bursts of unlimited usage, and spend ~15% of the theoretical max cap, and that's still cheaper than a subscription for that user.
An OpenClaw user can use 6, 7, or 8 times what a human subscriber uses.
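To make the economics above concrete, here's a toy calculation. All figures are invented for illustration; only the ~15% utilization idea comes from the comment:

```python
# Toy numbers illustrating the claim above (all figures invented):
# a subscriber who averages ~15% of the theoretical max can be profitable
# even when the cap itself, fully consumed, would cost more than the fee.
monthly_fee = 200.0
api_cost_of_full_cap = 800.0   # what maxing every window would cost at API rates
avg_utilization = 0.15         # typical human usage vs. theoretical max

expected_cost = api_cost_of_full_cap * avg_utilization   # ~$120
margin = monthly_fee - expected_cost                     # ~$80 per subscriber
```

An automated agent pushing utilization toward 100% flips that margin negative, which is the whole dispute in a nutshell.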
Perhaps because your Claude agent usage is not representative of the average user, and closer to the average OpenClaw user levels...
With data, it's an engineering target.
They could just 429 badly behaved clients.
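For what that could look like from the client side, here's a generic sketch of 429 handling with exponential backoff. This is not Anthropic's API or any specific SDK, just the standard pattern a well-behaved client applies when a server rate-limits it:

```python
import time

def backoff_delays(base=1.0, cap=60.0, retries=6):
    """Exponential backoff schedule: base, 2*base, 4*base, ... capped at `cap`."""
    return [min(cap, base * (2 ** attempt)) for attempt in range(retries)]

def call_with_backoff(do_request, retries=6):
    """Retry do_request() while it keeps returning HTTP 429 (rate limited)."""
    for delay in backoff_delays(retries=retries):
        status = do_request()
        if status != 429:
            return status       # success (or some other non-rate-limit status)
        time.sleep(delay)       # back off before the next attempt
    return 429                  # still rate limited after all retries
```

A client that ignores the 429s and hammers anyway is exactly the "badly behaved" case the comment is talking about.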
More users spinning up OpenClaw shifts that balance toward more users maxing out their tokens, which raises the average, so I think their explanation still makes sense.
I downgraded from my $200 a month plan to my $20 plan and hit limits constantly. I try to use the API access I purchased separately, and it doesn't work with Claude Code (something about the 1 million context requiring extra usage), so I have to use Continue. Then I get instantly rate limited when it's trying to read 1-2 files.
It just sucks. This whole landscape is still emerging, but if this is what it's like now, pre-enshittification, when these companies have shitloads of money - it's going to be so much worse when they start to tighten the screws.
Right now my own incentive is to reduce my dependence on Claude as much as I can, as quickly as I can.
Either you get a flat rate fee based on certain allowed usage patterns or everyone has to be billed à la carte.
Your comparisons are all also "unlimited" situations to Claude's very much limited situation. You can't buy a plan for Claude that is marketed as being unlimited. They're already selling people metered usage. They're just also adding restrictions on top of that.
Not the best example. The upkeep cost of a gym is pretty flat regardless of how much people use the facilities. Two people can't use a single machine at the same time, so they don't make it wear out twice as fast. The price of memberships is not correlated to usage; it's inversely correlated to the number of memberships sold.
The machine doesn't care about the number of people using it. If it's constantly being used, it will wear out faster. You are conflating "we price based on expected under-utilization" with "costs don't scale with usage." Those are different things.
The inverse correlation you mention isn't relevant here: people buy gym memberships intending to go, feel good about the intention, and then don't follow through. The business model is built on that gap. That's pretty specific to fitness and a handful of similar industries where aspiration drives purchase.
Anthropic doesn't sell based on a "golly gee I hope people don't use this" gap - they sell compute. Different business.
So they further restricted the metered caps, which were only ever offered on the assumption that few users would actually reach them.
Simple as that.
Then they should figure out how to structure an offering that accommodates this type of usage, not just blanket-ban it.
I'm sorry, is there anything even close to Sonnet, much less Opus, that can be run on a 4080? Or in 64 GB of RAM, even slowly?
Whether it's human token use, or future OpenClaws
I think an LLM trained to communicate in telegram style might even be faster and way cheaper.
.- -. -.. / .. --..-- / ..-. --- .-. / --- -. . --..-- / .-- . .-.. -.-. --- -- . / --- ..- .-. / -. . .-- / - . .-.. . --. .-. .- -- -....- -... .- ... . -.. / --- ...- . .-. .-.. --- .-. -.. ...
Terse.
This mainly just affects hobbyists.
Then it's not priced correctly. As I said, you can do all of this without OpenClaw; Claude Code ships with everything you need to max out the limits.
I mean, you can. Electricity is already sold that way. Subscribers with uncharacteristic usage spikes don't get blackouts, they get a slightly larger bill, and perhaps get moved up a tier.
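To sketch how that electricity-style model works, here's a minimal tiered-billing function. The thresholds and rates are invented for illustration, not any real utility's tariff:

```python
def tiered_bill(usage_kwh, tiers=((500, 0.10), (1000, 0.15)), top_rate=0.25):
    """Price usage against ascending tiers, electricity-style.

    Each tier is (upper_bound_kwh, rate); usage beyond the last bound pays
    top_rate. Thresholds and rates here are invented for illustration.
    A usage spike doesn't mean a blackout - it just lands in a pricier tier.
    """
    bill, prev_bound = 0.0, 0.0
    for bound, rate in tiers:
        in_tier = max(0.0, min(usage_kwh, bound) - prev_bound)
        bill += in_tier * rate
        prev_bound = bound
    bill += max(0.0, usage_kwh - prev_bound) * top_rate
    return round(bill, 2)
```

The point of the design: heavy users pay proportionally more instead of being cut off or banned.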
Just because outliers can be money-losing doesn’t mean you should raise the price for everyone.
If they are losing money then it's not priced correctly. That's what I responded to.
Yes, subscriptions work as you say. Plenty of people underutilize subscriptions, from Prime to credit cards to Netflix. But if those companies lost money overall, they too would raise prices. Because that's how economics works: shortage of capacity, high demand, raise prices until equilibrium.
There's other knobs beyond ToS. They just didn't choose those options.
From Anthropic's perspective, everyone pays to be in bins with a given max.
And to everyone's benefit, there is a wide distribution of actual use. Most people pay for the convenience of knowing they have a max if they need it, not so they always use it.
So Anthropic does something nice, and drops the price for everyone. They kick back some of the (actual/potential) savings to their customers.
But if everyone automates the use of all their tokens Anthropic must either raise prices for everyone (which is terribly unfair for most users, who are not banging the ceiling every single time), or separate the continuous ceiling thumpers into another bin.
That's economics. Service/cost assumptions change, something has to give.
And of the two choices, they chose the one that is fair to everyone. As opposed to the one that is unfair (in different directions) to everyone.
Instead, you can prioritize people "earnestly" bursting to the usage limits, like the users who are actually sitting at their computer using the service over someone's server saturating the limit 24/7.
The goal is to have different tiers for manual users vs automated/programmatic tools. Not just Anthropic, this is how we design systems in general.
As you said, I would imagine where the token usage comes from is irrelevant - you are generating the same load whether you do it from Claude Code or some other agent. So it seems like the rules are more about encouraging Claude Code usage, rather than Claude model usage.
Do you have an example of how they have advertised or sold the plan that way? I don’t recall ever seeing any advertisement that their plan is simply prepaying for tokens.
Subscriptions are crazy subsidized.
So you can’t use OpenClaw, OpenCode, etc., because they take you outside their applications/lock-in and their ability to easily monetize in the future.
This is so wrong.
The subscription is to Claude (the app, Claude code, etc) not the API.
Anthropic subsidizes Claude code because they collect a ton of super useful telemetry and logs so they can improve… Claude code.
Wanting to pay for a subscription to Claude and treat it like an API discount is like going to an all you can eat buffet and asking them to bring unlimited quantities of raw ingredients to you so you can cook at home. Ok, not a perfect analogy, but you get the idea.
You just paraphrased my argument
If Anthropic miscalculated the amount of tokens, or simply pushed too hard to capture market share, that is a costly mistake because people in this market are very sensitive to price hikes.
They have to be honest about what they can offer for $200. Sure, people don't max their subscriptions but when they're large they make the best of it, or they will likely cancel it. The typical subscription works well below capacity because it's cheap enough that the optionality may be worth it. $200 is not the typical subscription.
Isn't that exactly what they just did?
Their expectation must have been a human using the service at a human capacity.
This is different from an automated agent orchestrating a ton of different agents at the same time doing a lot of things.
There is a difference.
They already have the regular subscription plans (Pro, Max) and a separate billing process for direct API usage. They could absolutely introduce another type of plan optimized for this kind of usage, or just accept that it's a dumb pipe that is being paid for. Having these random arbitrary limitations just makes things more confusing and is a bad plan for the future.
Sure there is a difference. It's like when most mobile companies wouldn't allow tethering because then people would actually use the service.
You can try to stop that, but people will price in those inconveniences. They will simply learn that the fee pays for much less than the token limit and that the company is enforcing some unwritten limits by adding extra limitations to usage.
We will see it play out.
The whole point is that the users can have it doing shit for them instead of them having to babysit the computer.
The fact that users still have to sit there and argue with it erodes their value proposition - the proposition that you can pay fewer salaries.
For now, too many people will use AI for stuff where deterministic, stupid code would be much more efficient.
This is (almost) universally true of flat rate subscriptions; but there are usage-billed ones, too (and even those often have an aspect of subsidies).
A great example of the shakeup is when dial-up went from "connect, do the thing, disconnect" to "leave the computer online all the time" - they had to change the billing model because it wasn't built for continuous connections.
Sucks to be pushed back to Claude Code with opaque system behavior and inconsistency. I bet many would rather pay more for stability than less for gambling on the model intelligence.
Or maybe I’ll just get a Codex subscription instead. OpenAI has semi-officially blessed usage of third party harnesses, right?
"Developers should code in the tools they prefer, whether that's Codex, OpenCode, Cline, pi, OpenClaw, or something else, and this program supports that work."
https://developers.openai.com/community/codex-for-oss
Obviously, the context is that OpenAI is telling open source developers who are using free subscriptions/tokens from the Codex for Open Source program that they can use any harness they want. But it would be strange for that to not extend to paying subscribers.
It seems that installing Claude Code directly from npm shields you from some of the current issues.
Honestly, this just looks like what Dylan of SemiAnalysis suggested on Dwarkesh – that they've massively under-provisioned capacity / under-spent on infrastructure.
That would honestly be a comforting answer if true, because I would gladly take 'we can't afford to do this right now' over 'we are self-preferencing, and the FTC should really take a look at us, even if we're technically not a monopoly right now, since we're the only strongly-instruction-following model in town and we clearly know it'.
You can use these tools with most providers today, just no subscription plan. If you have enough spend, you can likely get bulk deals.
Tell me you have zero clue what a monopoly is or what the law is, without telling me.
Monopoly law relies on broad categories, not narrow ones. You can’t call Microsoft a monopoly because they are the only company that makes Windows. You can’t call Amazon a monopoly because they are the only company that makes AmazonBasics. You can’t call Anthropic a monopoly because their product is 20% better for your use case, otherwise by definition no company has any incentive to do a good job at anything.
Monopoly law is subject to reinterpretation over time, and anybody who has studied the history of it knows this. The only people who argue for "strict" interpretations of current monopoly law are those who currently benefit from the status quo.
> Monopoly law relies on broad categories, not narrow ones.
And this is currently a gigantic problem. Because of relying on broad categories to define "monopoly", every single supply chain has been allowed to collapse into a small handful of suppliers who have no downstream capacity thanks to Always Late Inventory(tm). This prevents businesses from mounting effective competition since their upstream suppliers have no ability to support such activities thanks to over-optimization.
To be effective on the modern incarnation of businesses, monopoly law needs to bust every single consolidated narrow vertical over and over and over until they have enough downstream capacity to support competition again.
Then don’t make BS up like implying Anthropic is a monopolist for the crime of competence.
> tell me you don't understand how a small quantitative gap can result in a step change in capability
The law does not give a darn about this. Being a good competitive option does not make you a league of your own. If I invent a new flavor of shake, the Emerald Slide, am I a monopolist in shakes because I’m the only one selling Emerald Slides? If you go and then start a local business reselling shakes and I’m your only supplier, am I a monopolist then? Absolutely not.
We have a similar situation in mobile where Apple may not be considered a monopoly, but people have walked around for a decade with a supercomputer in their pocket that is wildly underused.
Things have gotten faster; things are different than they were decades ago when a lot of this was devised.
The reality of the matter is that some of us just want to see innovation actually happen apace, and not see 5, 10, or 30 years of slowdown while we litigate whether or not such a company is holding all the cards, while everyone is collectively waiting at the spigot for a company to get its shit together because we're not allowed to fix the situation.
For what it's worth, I'm hopeful that the other model providers will catch up and put us in a situation where this conversation is irrelevant.
What I'm afraid of is a situation where we see continued divergence, and we end up with another Apple situation.
That is not calling out that they are “absolutely not a monopoly by the law” in any way, shape, or form. You’re framing it as though they aren’t by a technicality, when they aren’t anywhere near discussion by even the most extreme of legal theories. You won’t find Lina Khan or Margarethe Vestager, both ousted for going too far, complaining about Anthropic.
> “We have a similar situation in mobile where Apple may not be considered a monopoly, but people have walked around for a decade with a supercomputer in their pocket that is wildly underused.”
In that we can’t run a torrent client to download illegally redistributed media 99% of the time? Otherwise, in what way are they underused? Given the degree of public addiction, a more underutilized phone would be a social benefit.
I'm looking forward. Things are moving very quickly. As I said above, I'm afraid of us diverging into another Apple situation in the future. If I suggest that they should be looked at and thought about, it's not for today, it's for tomorrow. If divergence continues. Because as with everything in AI, it might hit us a lot faster than people expect. Hell, given their approach to morality, I suspect that Anthropic folks have already thought deeply about these sorts of concerns. That's why it's actually a lot more in character for them to be doing this not due to self-preferencing, but due to unaffordability, which - if you look at my first post - is what I said seems to be happening.
Suffice to say that I have a graveyard of things that I think phones could have been, where unfortunately we've ended up with these - as you say - addicting consumerist messes.
Gonna stop here so I don't flood the thread. We're getting very off topic.
I haven't tried it to see if it's any good but it's $20/mo.
Dealing with Claude going into stupid mode 15 times a day, constant HTTP errors, etc. just isn't really worth it for all it does. I can't see myself justifying $200/mo. on any replacement tool either, the output just doesn't warrant it.
I think we all jumped on the AI mothership with our eyes closed and it's time to dial some nuance back into things. Most of the time I'm just using Opus as a bulk code autocomplete that really doesn't take much smarts comparatively speaking. But when I do lean on it for actual fiddly bug fixing or ideation, I'm regularly left disappointed and working by hand anyway. I'd prefer to set my expectations (and willingness to pay) a little lower just to get a consistent slightly dumb agent rather than an overpriced one that continually lets me down. I don't think that's a problem fixed by trying to swap in another heavily marketed cure-all like Gemini or Codex, it's solved by adjusting expectations.
In terms of pricing, $200 buys an absolute ton of GLM or Minimax, so much that I'd doubt my own usage is going to get anywhere close to $200 going by ccusage output. Minimax generating a single output stream at its max throughput 24/7 only comes to about $90/mo.
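As a back-of-envelope check on that 24/7 figure: the throughput and per-token price below are assumed placeholders I picked for illustration, not Minimax's actual published rates - swap in real numbers to reproduce the comment's estimate.

```python
# Back-of-envelope for a single output stream running at full tilt 24/7.
# TOKENS_PER_SEC and PRICE_PER_MTOK are assumptions, not published rates.
TOKENS_PER_SEC = 50            # assumed sustained output throughput
PRICE_PER_MTOK = 0.70          # assumed $ per million output tokens

seconds_per_month = 60 * 60 * 24 * 30
tokens_per_month = TOKENS_PER_SEC * seconds_per_month          # 129,600,000
monthly_cost = tokens_per_month / 1_000_000 * PRICE_PER_MTOK   # ~$90
```

Even saturating a cheap model around the clock only reaches on the order of $100/mo under these assumptions, which is the comparison being made against the $200 plan.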
2 weeks ago, I had only hit my limit a single time and that was when I had multiple agents doing codebase audits.
They didn’t do a great job of explaining it. I wonder how many people got used to the 2X limits and now think Anthropic has done something bad by going back to normal.
Would really love some path forward where the AI parts only poke out as single fields in traditional user interfaces and we can forget this whole episode
And video calling did take off: plenty of people use FaceTime, and almost everybody working in an office uses some form of video calls. Criticizing the early attempts at video calling because they hadn't taken off yet misses the point (I remember them being advertised on "video phones" with 56k modems); of course someone was going to have the idea and implement it before it was quite practical.
To help with understanding that perspective, I cannot imagine a scenario where I would ask a device connected to the internet to turn off the lights. I literally never wanted this. A physical switch is 100% non-negotiable for me. I feel the same way about non-mechanical car doors.
Perhaps due to that outlook I was always puzzled by the entire idea of an "assistant". It's interesting for me to see that there are people out there who actually want that "assistant".
Oh no, there's plenty of us willing to say we told you so.
What's more interesting to me is what it's going to look like if big companies start removing "AI usage" from their performance metrics and cease compelling us to use it. More than anything else, that's been the dumbest thing to happen with this whole craze.
I’m kind of confused by these takes from HN readers. I could see LinkedIn bros getting reality checked when they finally discover that LLMs aren’t magic, but I’m confused about how a developer could go all-in on AI and not immediately realize the limitations of the output.
OpenClaw was still using Claude Code as the harness (via claude -p)[0]. I understand why Anthropic is doing this (and they’ve made it clear that building products around claude -p is disallowed) but I fear Conductor will be next.
[0]: See “Option B: Claude CLI as the message provider” here https://docs.openclaw.ai/providers/anthropic#option-b-claude...
Imagine not being able to connect services together or compose building-blocks to do what you want. This is absolute insanity that runs counter to decades of computing progress and interoperability (including Unix philosophy); and I'm saying this as someone who doesn't even care for using AI.
The disrespect Anthropic has for their user base is constant and palpable.
I'm not sure what to say. You're either listening to the actions of these companies, or you're not in a place where you feel the need to be concerned by their actions.
I'm in a place where I'm concerned by their actions, and the impact that their claims and behavior have on the working environment around me.
Or are you also upset about the modern plight of the telephone operator, farrier, or coal miner?
It is not a class of labor ... it is all digital labor. Do you or do you not understand this?
It is digital knowledge itself, and then all communication labor, and then all physical labor with robotics.
Is this clear to you?
When they shut down OpenCode, I thought it was a lame move and was critical of them, but I could at least understand where they're coming from. With this though, it's ridiculous. Claude Code's core tools are still being used in this case. Shelling out to it is no different from what a normal user would do themselves.
If this continues, I'll be taking my $200 subscription over to OpenAI.
OpenAI will soon do the same thing, don't be delusional.
But OpenClaw is not a product. It's just a pile of open source code that the user happens to choose to run. It's the user electing to use the functionality provided to them in the manner they want to. There's nothing fundamental to distinguish the user from running claude -p inside OpenClaw from them running it inside their own script.
I've mostly defended Anthropic's position on people using the session ids or hidden OAuth tokens etc. But this is directly externally exposed functionality and they are telling the user certain types of uses are banned arbitrarily because they interfere with Anthropic's business.
This really harms the concept of it as a platform - how can I build anything on Claude if Anthropic can turn around and say they don't like it and ban me arbitrarily.
Where it leaves me is sort of like the DoD - nobody should use Claude for anything. Because Anthropic has established the principle here that if they don't like what you do, they will interfere with your usage. There is no principle to guide you on what they might not like and therefore ban next. So you can't build anything you need to be able to rely on. If you need to rely on it, don't use Claude Code.
And to be clear, I'm not arguing at all against using their API per-token billed services.
Try this one: https://code.claude.com/docs/en/overview#run-agent-teams-and...
Or perhaps: https://code.claude.com/docs/en/overview#pipe-script-and-aut...
You know what they say about looking and quacking.
When this happens I will have to look at other providers and downgrade my subscription. Conductor is just too powerful to give up. It’s the whole reason why I’m on a max plan.
Also, what's the point of `claude -p` if not integration with third-party code? (They have a whole agents SDK which does the same thing... but I think that one requires per-token pricing.) I guess they regret supporting subscription auth on the `-p` flag.
That's a ridiculous position to take - Gemini and others work just great with claw...
EDIT: confused by downvotes. In this thread people are saying it runs on top of `claude -p` and others saying it's on pi.
The `claude -p` option is allowed per https://x.com/i/status/2040207998807908432 so I really don't understand how they're enforcing this.
The lines drawn by their consumer vs commercial TOS was clear and I never subscribed because of it.
For a good existing example developed by a known company, check Cline Kanban: https://cline.bot/kanban
They don't have the MCP-bundling idea that I'm experimenting with, however.
I imagine how they treat these things will be contextual and maybe inconsistent. There aren't really hard lines between what they probably want editors that integrate with them to do and generic tools that try to sit a layer above the vendors' agent TUIs.
If you started plugging tools into GPT5.4 you may soon discover that you don't need anything beyond a single conversation loop with some light nesting. A lot of the openclaw approach seems to be about error handling, retry, resilience and perspectives on LLM tool use from 4+ months ago. All of these ideas are nice, but it's a hell of a lot easier to just be right the first time if all you need is a source file updated or an email written. You can get done in 100 tokens what others can't seem to get done in millions of tokens. As we become more efficient, the economic urgency around token smuggling begins to dissipate.
https://news.ycombinator.com/item?id=46936105 Billing can be bypassed using a combo of subagents with an agent definition
> "Even without hacks, Copilot is still a cheap way to use Claude models"
20260116 https://github.blog/changelog/2026-01-16-github-copilot-now-...
https://github.com/features/copilot/plans $40/month for 1500 requests; $0.04/request after that
https://docs.github.com/en/copilot/concepts/billing/copilot-... Opus uses 3x requests
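The pricing quoted in those links can be sketched as a small calculator. The figures ($40 base, 1500 included requests, $0.04 overage, 3x Opus multiplier) are taken as quoted above; check GitHub's pages for current numbers:

```python
def copilot_monthly_cost(requests, opus_requests=0,
                         included=1500, base=40.0,
                         overage=0.04, opus_multiplier=3):
    """Estimate monthly cost under the quoted plan: $40 covers 1500 premium
    requests, extras cost $0.04 each, and Opus requests count 3x.
    Figures as quoted in the thread, not verified against current pricing."""
    billed = requests + opus_requests * opus_multiplier
    extra = max(0, billed - included)
    return base + extra * overage
```

So, e.g., 500 regular plus 500 Opus requests bills as 2000 requests, putting you $20 over the base plan.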
Subscriptions assume “human usage” — bursty, limited, mostly interactive. Agent systems are closer to autonomous infrastructure load running continuously.
OpenClaw is a good example of this. Once agents operate freely, they don’t behave like users — they behave like infrastructure.
That’s why this kind of restriction isn’t too surprising.
Long term, it seems likely this pushes things toward:
- API-first usage
- or local / open models
rather than agents sitting on top of subscription-based UIs.
AKA when you fully use the capacity you paid for, that's too much!
Similarly, on a home internet connection you might pay for a given size of pipe, but most residential ISPs don't allow running publicly accessible servers on your connection because you'll typically use way more of the bandwidth.
That argument would have been valid back when the 5-hour blocks were unlimited.
If you are not aware, ACP creates a persistent session for steering rather than using the models directly.
And you don't have to get anyone's permission to use tmux.
I think the usage patterns of a lot of harnesses are pushing against their planned capacity. I would say they can certainly explain themselves a lot better.
Is it infrastructure? Are they unable to control costs?
Everyone else is spending money like water to try to get adoption. Claude has it, and is dialing back utility to the point that its most passionate users will probably leave.
I don’t understand this move.
For SaaS, use the SaaS API. For product, use the product.
They subsidize the product with "don't care how much" pricing so they have users to build out features without users worrying about cost. If it's not actual users using the product, then features will be built in OpenClaw instead of Claude.
The earlier they draw this line, the better.
However, announcing it the day before it is effective is a huge unforced error, even if it were just a consequence of the TOS. They gain nothing by making people scramble.
It would also have been better to announce, at the same time, new ways to support plugging in to Claude Code - something to encourage integration/cooperation. No fences unless the field inside is flowering.
Despite their power, frontier models are threatened by open-source equivalents. If AGI is not on the horizon and model performance is likely not going to be enough of a differentiator to keep the momentum going, the only other way is to go horizontal - enterprise solutions, proprietary coding agent harnesses, market capture, etc.
If AGI is in sight, none of these short-term games really matter. You just need to race ahead.
I switched OpenClaw to MiniMax 2.7. This combined with Claude over telegram does enough for me.
OpenClaw used to burn through all my Claude usage anyway.
The problem Anthropic is running into is that OpenClaw made it easy for everyone to become one of those folks that washes their car three times a week or more.
I’m sure they were losing money on subscriptions in general but now they are really losing money. Shutting off OpenClaw specifically probably helps stem some of the bleeding.
So this change has actually forced a reckoning of sorts. Maybe the best option is to outsource the thinking to another model, and then send it back to Opus to package up.
Ironically this is how the non-agent works too to an extent.
1. Make a better product/alternative to OpenClaw and start eating their user base. They hold the advantage because the ones "using their servers too much" are already their clients, so they could reach out and keep trying to convert them. OpenClaw literally brought customers to their door.
2. Screw everyone over royally and drive them off their platform - with a strong feeling of dislike or hatred towards Anthropic.
Let's see how 2 goes for them. This is not the space to be treating your clients this way.
Why hatred btw? They're not even banning accounts left and right like Google?
There's a good chance they do not have the infrastructure to do that.
For example...
We recently moved a very expensive Sonnet 4.6 agent to step-3.5-flash and it works surprisingly well. Obviously step-3.5-flash is nowhere near the raw performance of Sonnet, but step works perfectly fine for this case.
Another personal observation is that we are most likely going to see a lot of micro coding agent architectures everywhere. We have several such cases. GPT and Claude are not needed if you focus the agent to work on specific parts of the code. I wrote something about this here: https://chatbotkit.com/reflections/the-rise-of-micro-coding-...
> Obviously step-3.5-flash is nowhere near the raw performance of sonnet
I feel like these two statements conflict with each other.
inb4 skill issue I could probably beat you coding by hand with you using Claude code
Claude Code is subsidized because of data collection.
I'm hitting rate limits within 1:45 during afternoons.
I can’t justify extra usage since it’s a variable cost, but I can justify a higher subscription tier.
My guess is a plan with double the limits would need to be 5-10x as expensive.
There's gotta be a limit; nobody can afford to have tons of users who are losing them money every month.
Time to compete on value with the Chinese.
https://support.claude.com/en/articles/12429409-manage-extra...
It's like if I were a graphic designer and my finance company said "Photoshop is too expensive." I wouldn't be mad at Adobe for it.
Claude Code seems designed to terminate quickly - mine always finds excuses to declare victory prematurely when given a task that should take hours.
Graceful handling from Anthropic
Our engineering team averages $1.5k per dev per month in credit costs, without busting Max limits today.
I understand people from the US will have an anti-Chinese reaction, but for those of us in the "third world" who can use both techs, the openness is always good.
Forgive me if someone asked this already and I can't find it in the comments.
headers['X-Title']
You can change that
The other simple method is to only accept certain system prompts
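A hypothetical sketch of the two detection methods mentioned in these replies: matching a known client header (X-Title is a convention some API gateways use for app attribution) or allowlisting system prompts. The names and values here are illustrative, not Anthropic's actual checks - and both signals are trivially spoofable client-side, which is the point being made.

```python
# Illustrative server-side fingerprinting: block known third-party harness
# headers, and only accept system prompts that look like the first-party tool.
# All names/values below are made up for the sketch.
KNOWN_CLIENT_TITLES = {"openclaw"}
ALLOWED_PROMPT_PREFIXES = ("You are Claude Code",)  # hypothetical prefix

def is_blocked(headers, system_prompt):
    title = headers.get("X-Title", "").lower()
    if title in KNOWN_CLIENT_TITLES:
        return True   # matched a known third-party harness header
    # otherwise accept only system prompts resembling the first-party tool
    return not system_prompt.startswith(ALLOWED_PROMPT_PREFIXES)
```

Since any client can change its headers and system prompt, enforcement built on these signals is more of a speed bump than a wall.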
I've been meaning to do some dumb little proxy system where all your i/o can pass through any specified system such as a web page, harness, whatever...
Essentially, a local model tool-calls to an "Oracle", which is just something like a wrapper around Claude Code or anything else you've figured out how to scrape, and then you talk to the small model that mostly uses the Oracle and... there you go.
There's certainly i/o shuffling and latency but given model speeds and throughput it'll be relatively very small
Now people probably care
Doesn't mean I know how to market it, I'll certainly fail at that, but at least I can build it
I can do that now with claude code and a "while true" bash loop.
Or with the built-in "/schedule" in claude code to set an agent to run say once every few minutes.
But couldn't I use this in off times only?
Extra usage is very sneaky: you don't get any notice that you are using extra usage, and you could end up with unnecessary costs in cases where you would have preferred to wait an hour or so.
UPDATE: per a reply on X from Thariq (@trq212): only flagged accounts, but you can still claim the credit.
So, to me it's a "we built it into our world, use ours" move.
Edit: FWIW I am an avid hater of all claw things; they're a security nightmare.
Btw even at insane markups $200/mo means GPUs break even pretty fast.
It's simply identical to how people use Claude Code locally.
We are paying for a certain amount of token consumption
Why then, is this an outsized strain on your system Anthropic?
It's like buying gasoline from Shell, and then Shell's terms of service forcing you to use that gas in a Hummer that does 5 MPG, while everyone else wants to drive any other vehicle.
To use your analogy, if Shell sold you a subscription to fill up your Hummer up to 30 times a month, they wouldn't let you use that subscription to fill gas cans with a GMC logo taped to the side. They couldn't, without overcharging the people who just want to average out their cost of driving.
You don't get to sell a subscription described primarily as being for some quantity of X and then change the terms every time people find creative ways to use the stream of X they believe themselves to have purchased from you. People thought they were purchasing in bulk.
> We are paying for a certain amount of token consumption
I don't think you are. The specific arrangement you have is that you pay for a subscription to be used with Claude Code. It isn't access to raw tokens that you can do whatever you please with.
---
An analogy would be a refillable cup for soda at a restaurant. They will let you refill however many times you want, but only using the store-provided cup; you can't bring your own 2L hydroflask or whatever. You're paying not just for the liquid, but for the entire setup.
It would be like the restaurant saying "you can buy the 2-liter soda pack" and then getting all uppity when you bring your own 2L hydroflask in.
They have a per-token payment option where you can use any tool you like
The plans do not say how many tokens you get. People are paying for access. Higher plans get more usage. The marketing and support material of the plans only use the word "usage" and never "tokens."
Interestingly, it looks like I haven't received a receipt email from them since August 2025.
edit: see Boris' tweet about it https://x.com/bcherny/status/2040206443094446558
Say goodbye to my $600/month, Anthropic.
No, Anthropic, just because you added a clause that says "we can change these terms whenever" doesn't make it right. I'm paying you a set amount of money a month for a set amount of tokens (that's what limits are), and I should be able to use these tokens however I want.
Luckily, there are alternatives.
ChatGPT thought it was a great idea, said I could use Claude for planning, and gave me instructions on how to best hand off the building part. Claude told me it's a horrible idea.
Claude also burns through tokens much more liberally, e.g. reading entire irrelevant docs.
OpenClaw is great for resolving this, since I have much more control over which work goes where, and it also gives a much better user experience without all the back and forth to understand what context it has (my use case is building things from my phone while I'm in senseless meetings at my day job).
Fully agree on the alternatives. In the end Claude’s experience is worse, while it still makes bad decisions if you let it. Better to get a good workflow on a less capable model.
Anthropic not allowing Claude Code subscriptions to be used with other projects isn't "pulling the rug out"; you paid for a subscription to use Claude Code, and now you're using it for a different purpose and a different product.
If Tesla offered $10/month charging for your Tesla, and then a bunch of people used their Tesla Charge subscription to charge all different electric vehicles and battery packs, and also hooked up a crypto mining rig to it, would you be surprised if Tesla said "Nope, we're cutting this off. You can only use your Tesla Charge subscription for your Tesla vehicle"?
> If Tesla offered $10/month charging for your Tesla
No, "if Tesla offered $10/month for 100 kWh of charging", and yes, I expect to use those 100 kWh with any vehicle I want, because there's a limit on the resource I'm paying for.
I can understand caps on unlimited, I can't understand caps when there are strict limits.
It's like if I buy a hot dog every month and they tell me they're raising the price next month, or discontinuing honey mustard. Inconvenient but they're not doing anything wrong.
Especially since, given my back of the napkin math, they're giving us a pretty decent discount on the subscription plans.
If you haven't been paying attention, Anthropic burned a lot of developer goodwill in the last two weeks, with some combination of bugs and rate limits.
But the writing is on the wall about how bad things are behind the scenes. The circa 2002 sentiment filter regex in their own tool should have been a major clue about where things stand.
The question everyone should be asking at this point is this: is there an economic model that makes AI viable? The "bitter lesson" here is in AI's history: expert systems were amazing, but they could not be maintained at cost.
The next race is the scaling problem, and Google, with their memory-savings paper, has given a strong signal about what the next two years of research will focus on: scaling.
How is what you are asking for different from what they are saying?
I'm doing a side-by-side with GPT-5.4 for $20/mo and Sonnet for $20/mo and I can tell you that all my 5 hour tokens are eaten in 30 minutes with Claude. I still haven't used my tokens for OpenAI.
Code quality seems fine on both. Building an app in Go.
Only thing now is that the cheaper (worse) Chinese-model coding plans have huge limits, so I lean on those now. Requires a lot more hand-holding though.
I think using it to write small documentation or small scripts would be a good use case, but for serious development work you hit the usage limits way too fast.
I've been calling for local LLMs as owning the means of production. I ain't wrong.
> you’ll no longer be able to use your Claude subscription limits for third-party harnesses including OpenClaw.
My understanding is that Conductor and others aren't using it.
The Anthropic casino wants you to keep gambling tokens at their casino, on their machines (Claude Code) only, by giving promotional offers such as free spins, $20 bets, and more free tokens at the roulette wheels and slot machines.
But you cannot repurpose your subscription on other slot machines that are not owned by Anthropic and if you want it badly, they charge you more for those credits.
The house (Anthropic) always wins.
You can use your Claude Code subscription with third-party tools, but you have to use the Claude Code harness. Or, you use the API. OpenClaw could use the Claude Code harness, but they don't.
Just look at how Sam Altman has led OpenAI step by step to dominate—and choke out—Anthropic, a company founded by the group of engineers who were once part of the turmoil at OpenAI.
Anthropic's product thinking is terrible even though it is technically very good.
OpenAI seems to mostly be chasing the consumer market, but not doing great at it.