Apparently nobody gets the Anthropic move: they are only good at coding and that's a very thin layer. Opencode and other tools are in the game of collecting inputs and outputs that can later be used to train their own models - not necessarily being done now, but they could - Cursor did it. Also, Opencode makes it all easily swappable: just eval something by popping in another API key and see whether Codex or GLM can replicate the CC solution. Oh, it does! So let's cancel Claude and save big bucks!
Even though CC the agent supports external providers (via the ANTHROPIC_BASE_URL env var), they are working hard on making it impossible for other models to support their ever-increasing agent feature set (skills, teleport and remote sessions, LSP, Chrome integration, etc). The move totally makes sense, like it or not.
It's all easily swappable without OpenCode. Just symlink CLAUDE.md -> AGENTS.md and run `codex` instead of `claude`.
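A minimal sketch of that swap, assuming a repo that already has a CLAUDE.md and that the Codex CLI is installed (filenames follow the comment above; which file is real and which is the symlink is a matter of taste):

    # make AGENTS.md the canonical instructions file, keep CLAUDE.md as a pointer to it
    mv CLAUDE.md AGENTS.md
    ln -s AGENTS.md CLAUDE.md

    # same repo, different agent
    codex   # instead of: claude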
> they are working hard on making it impossible for other models to support their ever-increasing agent feature set (skills, teleport and remote sessions, LSP, Chrome integration, etc).
Every feature you listed has an open-source MCP server implementation, which means every agent that supports MCP already has all those features. MCP is so epic because it has already nailed that coffin firmly shut: the feature layer is commoditized. Besides, Anthropic has way less funding than OAI or Google. They wouldn't win the moat-building race even if there were one.
That said, the conventional wisdom is that lowering switching costs benefits the underdogs, because the incumbents have more market share to lose.
They're after the enterprise market - where office / workspace + app + directory integration, security, safety, compliance etc. are more important. 80% of their revenue is from enterprise - less churn, much higher revenue per W/token, better margins, better $/user.
Microsoft adopting the Anthropic models into copilot and Azure - despite being a large and early OpenAI investor - is a much bigger win than yet another image model used to make memes for users who balk at spending $20 per month.
Same with the office connector - which is only available to enterprises[0] (further speaking to where their focus is). There hasn't yet been a "claude code" moment for office productivity, but Anthropic are the closest to it.
[0] This may be a mistake as Claude Code has been adopted from the ground up
You can usually spot it when someone complains about "call us" pricing that is targeted at enterprise. People who complain about it are most likely not the customers the company wants to cater to.
... an extremely popular marketing tool ... sending an equally excessive amount of data above what they were paying for. They were far less adamant about the product, and on some days I didn't even want them as a customer. If there was a minor blip in the service, they were the first to complain. Reminder, [Sentry] was still a side project at the time so I had a day-job. That meant it was often stressful for Chris and I to deal w/ customer support, and way more stressful dealing with outages.
We had one customer who loved the product, and one who didn't. Both of these customers had such extreme volumes of data that it had a tangible infrastructure cost associated with hosting them. We knew the best thing to do was to find a way to be able to charge them more money for the amount of data they sent. So we set off to build that and then followed up with each customer.
To our surprise, the customer that loved the product didn't want to pay more. The customer who was constantly complaining immediately jumped on the opportunity. What's the lesson to take away from this?
... when I was a teenager I worked at Burger King, and there was an anecdote I will never forget: for every customer that complains, there are nine more with a similar experience. I've cemented this in my philosophy around development, to the point where I now believe over rotating on negative feedback is actually just biasing towards the customers who truly see the value in what you're offering. The customer that was complaining really valued our product, whereas the customer that was happy was simply content.
A $7 Subscription, https://cra.mr/a-seven-dollar-subscription / https://archive.vn/IWS0A (2023).

Most of the time you want to cut off 'non customers' as soon as possible and don't leave 'big fish' without having direct contact person who can explain stuff. People just clicking around on their own will make assumptions that need to be addressed in a way no one wastes time.
If you mean this literally, one of the best ways to turn non-customers into customers is to give them a way to pay you. Which means telling them the price. If you're implying something else by ‘non customers’ then I'm missing the implication.
> and don’t leave ‘big fish’ without having direct contact person who can explain stuff
You can give a contact person and have a list of prices.
> People just clicking around on their own will make assumptions that need to be addressed in a way no one wastes time.
Making everyone call you to negotiate is going to waste time.
I am curious how big of a chance they have. I could imagine many enterprises that are already (almost by default) Microsoft customers (Windows, Office, Entra etc.) will just default to Copilot (and maybe Azure) to keep everything neatly integrated.
So an enterprise would need to be very dedicated: use everything Microsoft, but then go through the trouble of using Claude as their AI just because it is slightly better at coding.
I have a feeling I am missing something here though, I would be happy for anyone to educate me!
Can't hold a candle to Opus 4.5, which can now create and modify financial models from PDFs, augmented with web search and the Excel skill (GPT-5.2 can do this too). That said, the market IS smaller.
Anthropic is rather obnoxious about training on user data, and I wonder if enterprises (and small businesses!) will grow up soon and start using competing products instead.
(Not that Google is amazing in this regard — their purchasable product options are all over the place to the point where it might be nearly a full time (human!) job to keep track of how to correctly purchase Gemini. Gemini itself seems incapable of figuring this out, or at least I haven’t found the right prompt yet. Gemini is absolutely amazing at hallucinating Google product offerings. OpenAI, on the other hand, seems to have nailed this.)
But that means they lose on inference. Which isn't good.
Making this mistake could end up being the AI equivalent of choosing Oracle over Postgres
It's a supposedly professional tool with a value proposition that requires being in your workflow. Are you going to keep using a power drill on your construction site that bricks itself the last week or two of every month?
An error message says contact support. They then point you to an enterprise plan for 150 seats when you have only a couple dozen devs. Note that 5000 / 25 = 200 ... coincidence? Yeah, you are forbidden to give them more than Max-like $200/dev/month for the usage-based API that's "so expensive".
They are literally saying "please don't give us any more money this month, thanks".
I imagine a combination of stop loss and market share. If larger shops use up compute, you can't capture as many customers by headcount.
// There was a figure around o3, an astonishing model punching far above the weights (ahem) of models that came after, suggesting that its thinkiest mode cost on the order of $3500 to do a deep research run. Perhaps OpenAI can afford that, while Anthropic can't.
But this is not the equivalent of Oracle over Postgres, as those are different technology stacks that each implement an independent relational database. Here we're talking about Opencode, which depends on Claude models to work "as a better Claude" (according to the enraged users in the webs). Of course, one can still use OC with a bazillion other models, but Anthropic is saying that if you want the Claude Code experience, you gotta use the CC agent, period.
Now put yourself in the Anthropic support person's shoes, and suppose you have to answer an issue from a Claude Max user who is mad that OC is throwing errors when calling a tool during a vibe session, probably because the multi-million dollar Sonnet model is telling OC to do something it can't, because it's not the Claude agent. Claude models are fine-tuned for their agent! If the support person replies "OC is an unsupported agent for Claude Code Max" you get an enraged customer anyway, so you might as well cut the whole thing off at the root.
The client is closed source for a reason and they issued DMCA takedowns against people who published sourcemaps for a reason.
Claude Code runs into usage limits for everyone at every tier. The API is too expensive to use and it's _still_ subsidized.
I keep repeating myself but no one seems to listen: quadratic attention means LLMs will always cost astronomically more than you expect after running the pilot project.
Going from 10k loc to 100k loc isn't a 10x increase, it's a 99x increase. Going from 10k loc to 1m loc isn't a 100x increase, it's a 9999x increase. This is fundamental to how transformers work and is the _best case scenario_. In practice things are worse.
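For what it's worth, here is the arithmetic behind those numbers, assuming cost grows with the square of the context size (using LoC as a rough proxy for tokens, and ignoring chunking and caching, per the commenter's best-case framing):

    cost ∝ n^2
    10k -> 100k LoC: (100k / 10k)^2 = 100x the original cost, i.e. 99x more
    10k -> 1M  LoC:  (1M / 10k)^2   = 10,000x the original cost, i.e. 9,999x more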
So what you say is not true: cost does not scale linearly with LoC.
What do you mean by this? I know plenty of people who never hit the upgraded Opus 4.5 limits anymore even on the $100 plan, even those who used to hit the limits on the $200 plan w/ Opus 4 and Opus 4.1.
>The API is too expensive to use and it's _still_ subsidized.
What do you mean by saying the API is subsidized? Anthropic is a private company that isn't required to (and doesn't) report detailed public financial statements. The company operating at a loss doesn't mean all inference is operating at a loss, it means that the company is spending an enormous amount of money on R&D. The fact that the net loss is shrinking over time tells us that the inference is producing net profit over time. In this business, there is enormous up front cost to train a model. That model then goes on to generate initially large, but subsequently gradually diminishing revenue until the model is deprecated. That said, at any given snapshot-in-time, while there is likely large ongoing R&D expenditure on the next model causing the overall net profit for the entire company to still be negative, it's entirely possible that several, if not many or even most of the previously trained models have fully recouped their training costs in inference revenue.
It's fairly obvious that the monthly subscriptions are subsidized to gain market share the same way Uber rides were on early on, but what indication do you have that the PAYG API is being subsidized? How would total losses have shrunk from $5.6B in 2024 to just $3B in 2025 while ARR grew from ~$1B to ~$7B over the same time period (one where usage of the platform dramatically expanded) if PAYG API inference wasn't running at a net profit for the company?
>quadratic attention means LLMs will always cost astronomically more than you expect after running the pilot project
This is only true as long as O(n²) quadratic attention remains the prevailing paradigm. As Qwen3-Next and Nemotron 3 Nano have shown with hybrid linear attention + sparse quadratic layers and a hybrid Mamba SSM, not all modern, performant LLMs need to rely strictly on O(n²) quadratic attention. Sure, these aren't frontier models competitive with Opus 4.5 or Gemini 3 Pro or GPT 5.2 xhigh, but neither are they experimental tiny toy models like RWKV or Falcon Mamba that serve as little more than PoCs for alternative architectures. Qwen3-Next and Nemotron 3 Nano are solid players in their respective local weight classes.
Why is that their "huge asset"? The gist of this complaint is that Opencode et al replace everything but the LLM, so it seems like the latter is the true "huge asset."
If Claude Code is being offered at or near operational breakeven, I don't see the advantage of lock-in. If it's being offered at a subsidy, then it's a hint that Claude Code itself is medium-term unsustainable.
“Training data” is a partial but not full explanation of the gap, since it’s not obviously clear to me how Anthropic can learn from Claude Code sessions but not OpenCode sessions.
Right now, most enterprises are experimenting with different LLMs, and once they choose they will be locked in for a long time. If they can't choose because their coding agent doesn't let them, they'll be locked to that agent.
Maybe they can hope to murder opencode in the meantime with predatory pricing and build an advantage that they don't currently have. It seems unlikely though - the fact that they're currently behind proves the barrier to building this sort of tool isn't that high, and there's lots of developers who build their own tooling for fun that you can't really starve out of doing that.
I'm not convinced that attempting to murder opencode is a mistake - if you're losing you might as well try desperate tactics. I think the attempt is a pretty clear signal that Anthropic is losing, though.
I mean I guess they could do a bait and switch (drop prices so low that Anthropic goes bankrupt, then raise prices) but that's possible in literally any industry, and seems unlikely given the current number of competitors.
If Anthropic ended up in a position where they had to beg various client providers to be integrated (properly), had to compete with other LLMs on the same clients, and could be swapped out at a moment's notice, they would just become a commodity and lose all leverage. They don't want to end up in that situation. They need to control the delivery of the product end-to-end to ensure that they control the customer relationship and the quality.
This is also going to be KEY in terms of democratizing the AI industry for small startups because this model of ai-outside-tools-inside provides an alternative to tools-outside-ai-inside platforms like Lovable, Base44 and Replit which don't leave as much flexibility in terms of swapping out tooling.
The types of people who would use this tool are precisely the types of people who don't pay for licenses or tools. They're in a race to the bottom and they don't even know it.
> and that's a very thin layer
I don't think Anthropic understands the market they just made massive investments in.
The CLI tool is terrible compared to opencode.
That is the unfortunate reality; we are now having Claude Code foisted on us. :( I wish they would just fork opencode.
that's just it, it has been proven over and over again with alternatives that CC isn't the moat that Anthropic seems to think it is. This is made evident by the fact that they're pouring R&D into DE/WM automation while CC has all the same issues it has had for months/years -- it's as if they think CC is complete.
if anything MCP was a bigger moat than CC.
also : I don't get the opencode reference. Yes, it's nice -- but codex and gemini-cli are largely compatible with cc generated codebases.
There will be some initial bumpiness as you tell the agent to append the claude.md file to all agent reads -- or better yet, just merge it into the agents file -- but that's about as rough as it'll get.
I don't understand, why would other models not be able to support any, or some, or even a particular single one of these? I don't even see most of these as relevant to the model itself, but rather the harness/agentic framework around it. You could argue these require a base degree of model competence for following instructions, tool calling, etc, but these things are assumed for any SOTA model today, we are well past this. Almost all of these things, if not all, are already available in other CLI + IDE-based agentic coding tools.
the reason i got the subscription wasn't to use claude code. when i subscribed you couldn't even use it for claude code. i got it because i figured i could use those tokens for anything, and as i figured out useful stuff, i could split it off onto api calls.
now that exploration of "what can i do with claude" will need to happen elsewhere, and the results of a working thing will want to stay with the model it's working on.
I'd be pretty happy if Anthropic acquired Midjourney
I use CC as my harness but switch between third party models thanks to ccs. If Anthropic decided to stop me from using third party models in CC, I wouldn't just go "oh well, let's buy another $200/mo Claude subscription now". No. I'd be like: "Ok, I invested in CC—hooks/skills/whatever—but now let's ask CC to port them all to OpenCode and continue my work there".
Claude, ChatGPT, Gemini, and Grok are all more or less on par with each other, or a couple months behind at most. Chinese open models are also not far behind.
There's nothing inherent to these products to make them "sticky". If your tooling is designed for it, you can trivially switch models at any time. Mid-conversation, even. And it just works.
When you have basically equivalent products with no switching cost, you have perfect competition. They are all commodities. And that means: none of them can make a profit. It's a basic law of economics.
If they can't make a profit, no matter how revolutionary the tech is, their valuation is not justified, and they will be in big trouble when people figure this out.
So they need to make the product sticky somehow. So they:
1. Add a subscription payment model. Once you are paying a subscription fee, then the calculus on switching changes: if you only maintain one subscription, you have a strong reason to stick with it for everything.
2. Force you to use their client app, which only talks to their model, so you can't even try other models without changing your whole workflow, which most people won't bother to do.
These are bog standard tactics across the tech industry and beyond for limiting competitive pressure.
Everyone is mad about #2 but honestly I'm more mad about #1. The best thing for consumers would be if all these model providers strictly provided usage-based API pricing, which makes switching easy. But right now the subscription prices offer an enormous discount over API pricing, which just shows how much they are really desperate to create some sort of stickiness. The subscriptions don't even provide the "peace of mind" benefit that Spotify-like subscription models provide, where you don't have to worry about usage, because they still have enforced usage limits that people regularly hit. It's just purely a discount offered for locking yourself in.
But again I can't really be that mad because of course they are doing this, not doing it would be terrible business strategy.
If they're going to close the sub off to other tools, they need to make very strong improvements to the tool. And I don't really see that. It's "fine" but I actually think these tools are letting developers down.
They take over too much. They fail to give good insights into what's happening. They have poor stop/interrupt/correct dynamics. They don't properly incorporate a basic review cycle which is something we demand of junior developers and interns on our teams, but somehow not our AIs?
They're producing mountains of sometimes-good but often unreviewable code and it isn't the "AI"'s fault, it's the heuristics in the tools.
So I want to see innovation here. And I was hoping to see it from Anthropic. But I just saw the opposite.
I myself have been building a special-purpose vibe-coding environment and it's just astounding how easy it is to get great results by trying totally random ideas that are just trivial to implement.
Lots of companies are hoping to win here by creating the tool that everyone uses, but I think that's folly. The more likely outcome is that there are a million niche tools and everyone is using something different. That means nobody ends up with a giant valuation, and open source tools can compete easily. Bad for business, great for users.
I have no idea what JetBrains' financials are like, but I doubt they're raking in huge $$ despite having very good tools & unfortunately their attempts to keep abreast of the AI wave have been middling.
Basically, I need Claude Code with a proper review phase built in. I need it to slow-the-fuck-down and work with me more closely instead of shooting mountains of text at me and making me jam on the escape key over and over (and shout WTF I didn't ask for that!) at least twice a day.
IMHO these are not professional SWE tools right now. I use them on hobby projects but struggle to integrate them into professional day jobs where I have to be responsible in a code review for the output they produced.
And, again, it's not the LLM that's at fault. It's the steering wheel driving it missing a basic non-yeet process flow.
It sounds like you want Codex (for the second part)
It's irresponsible to your teammates to dump giant finished pieces of work on them for review. I try to impress that on my coworkers, I don't appreciate getting code reviews like that for submission, and I'd feel bad if I did the same.
Even worse if the code review contains blocks of code which the author doesn't even fully understand themselves because it came as one big block from an LLM.
I'll give you an example -- I have a longer-term, bigger task at work for a new service. I had discussions and initial designs I fed into Claude. "We" came to a consensus and ... it just built it. In one go, mainly. It looks fine. That was Friday.
But now I have to go through that and say -- let's now turn this into something reviewable for my teammates. Which means basically learning everything this thing did, and trying to parcel it up into individual commits.
Which is something that the tool should have done for me, and involved me in.
Yes, you can prompt it to do that kind of thing. Plan is part of that, yes. But plan, implement, review in small chunks should be the default way of working, not something I have to force on it externally.
What I'd say is this: these tools right now are programmer tools, but they're not engineer tools.
I expect that from all my team mates, coworkers and reports. Submitting something for code review that they don't understand is unacceptable.
i immediately see that the most important thing to have understand a change is future LLMs, more than people. we still need to understand what's going on, but if my LLM and my coworker's LLM are better aligned, chances are my coworker will have a better time working with the code that i publish than if i got them to understand it well but without their LLM understanding it.
with humans as the architects of LLM systems that build and maintain a code-based system, i think the constraints are different, and that we don't have a great idea of what the actual requirements are yet.
it certainly mismatches with how we've been doing things, publishing small change requests that each only do a part of a whole.
Or to put it another way -- understandable piecemeal commits are a best practice for a fundamental human reason; moving away from them is risking lip-service reviews and throwing AI code right into production.
Which I imagine we'll get to (after there are much more robust auto-test/scan wrap-arounds), but that day isn't today.
In 2024: ~$725M total revenue, ~$119M net profit.
Well, no. It just means no single player can dominate the field in terms of profits. Anthropic is probably still losing money on subscribers, so other companies "reselling" their offering does them no good. Forcing you to use their TUI at least gives them back control of how you interact with the models. I'm guessing, but since they've gone full send into the developer tooling space, their pitch to investors likely highlights the # of users on CC, not their subscriber numbers (which, again, lose money). The move makes sense in that respect.
Using openrouter myself I find the costs of APIs to be extremely low and affordable? I don't send the whole codebase to every question, I just ask about what I need, and everything is actually ridiculously cheap? $20 lasts about 3 months.
Meanwhile copy/pasting those shells in OpenRouter's Chat and asking the same question resulted in a single API request costing a tenth of a cent.
I could probably try tuning everything to keep costs down, but idk if it's worth the efforts.
I tried the same with OpenRouter and I used up 2.5 dollars in a day using Sonnet 4.5. Similar use on Copilot would maybe use 10% of my quota (and that's being generous to OpenRouter).
I think GitHub Copilot is way more affordable than OpenRouter.
Maybe another symptom of Silicon Valley hustle culture — nobody cares about the long term consequences if you can make a quick buck.
In any case, the long-term solution for true openness is to be able to run open-weight models locally or through third-party inference providers.
The reason to subsidize is the exact reason you are worried about. Lock in, network effects, economies of scale, etc.
I mean, this is the playbook of every tech company for the past 30 years. You sell something at a huge loss to gain market share and force your competitors to exit, and then you begin value extraction from your, now captive, customer base. You lower quality, raise prices, and cut support, and you do it slowly enough that nobody is hit with enough friction at one time to walk.
If you expect anything else, I don't know what to tell you. This is very much the standard. In fact it's SO much the standard that companies don't even have a choice. If you choose not to do this, then the people who are doing this will just undercut you and run you out.
The key piece in this is that, once the value extraction begins, it can't just strive for profitability. No, it also has to make up for the past 10 or 15 years of losses on top of that. So it's not like the product will just get expensive enough to sustain itself like you'd expect with a typical product. It'll get much more expensive than that.
We've collectively forgotten because a large enough number of professional developers have never experienced anything other than a thriving open source ecosystem.
As with everything else (finance and politics come to mind in particular), humans will have to learn the same lessons the hard way over and over. Unfortunately, I think we're at the beginning of that lesson and hope the experience doesn't negatively impact me too much.
Hate to break it to you, but the vast majority never did. See any thread about Linux on HN. Maybe the Open Source wave was before my time, but ever since I came into the industry around 2015, "caring about open source" has been the minority view. It's Windows/Mac/Photoshop/etc all the way up and down.
If all is equal, I pick the open option. In this case it's not equal, Claude Code + Opus 4.5 is better than Opencode + Opus 4.5.
In all seriousness, I really don't think it should be a controversial opinion that if you are using a company's servers for something, they have a right to dictate how and under what terms. It is up to the user to determine whether that is acceptable or not.
Particularly when there is a subscription involved. You are very clearly paying for "Claude Code" which is very clearly a piece of software connected to an online component. You are not paying for API access or anything along those lines.
Especially when they are not blocking the ability to use the normal API with these tools.
I really don't want to defend any of these AI companies but if I remove the AI part of this and just focus on it being a tool, this seems perfectly fine what they are doing.
1. The company did something the customers did not like.
2. The company's reputation has value.
3. Therefore highlighting the unpopular move online, and throwing shade at the company so to speak, is (alongside "speaking with your wallet") one of the few levers customers have to push companies to do what they want them to do.
I could write an article complaining about Taco Bell not selling burgers and that is perfectly within my rights, but that is something they are clearly not interested in doing. So me saying I am not going to give them money until they start selling burgers is meaningless to them.
Everything I have seen about how they have marketed Claude Code makes it clear that what you are paying for is a tool that is a combination of a client-side app made by them and the server component.
Considering you need to tell the agent that the tool you are using is something it isn't, it is clear that this was never intended to work.
Sure, but that's because you're you. No offense, but you don't have a following that people use to decide what fast food to eat. You don't have posts about how Taco Bell should serve burgers, frequently topping one of the main internet forums for people interested in fast food.
HN front page articles do matter. They get huge numbers of eyeballs. They help shape the opinions of developers. If lots of people write articles like this one, and it front pages again and again, Anthropic will be at serious risk of losing their mindshare advantage.
Of course, that may not happen. But people are aware it could.
> It is up to the user to determine if that is acceptable or not.
It sounds like you understand it perfectly.
While Anthropic was within their right to enforce their ToS, the move has changed my perspective. In the language of moats and lock-ins, it all makes sense, sure, but as a potential sign of the shape of things to come, it has hurt my trust in CC as something I want to build on top of.
Yesterday, I finally installed OpenCode and tried it. It feels genuinely more polished, and the results were satisfactory.
So while this is all very anecdotal, here's what Anthropic accomplished:
1) I no longer feel like evangelizing for their tool.
2) I installed a competitor and validated that it's as good as others are claiming.
Perhaps I'm overly dramatic, but I can't imagine I'm the only one who has responded this way.
It's too soon to tell if that's true or not.
One of the features of vertical integration is that there will be folks complaining about it. Like the way folks would complain that it's impossible or hard to install macOS on anything other than a Mac, and impossible or hard to install anything other than macOS on a Mac. Yet, despite those complaints, the Mac and macOS are successful. So: the fact that folks are complaining about Anthropic's vertical integration play does not mean that it won't be successful for them. It also doesn't mean that they are clueless.
A lot of the comments revolve around how much they will be locked in and how much the base models are commoditized.
Google is pretty clearly OK with being an infrastructure/service provider for all comers. Same is true for OpenAI (especially via Azure?). I guess Anthropic does not want to compete like that.
I think they do see vertical integration opportunities on product, but they definitely want to compete to power everything else too.
They're probably losing money on each pro subscription so they probably won't miss me!
looool
Maybe the LLM thing will be profitable some day?
As it will continue to be. Unless we get an Opus 5 or GPT-6 that blows everything out of the water, all major progress will be in the UX/DX of the tools and in what tools each harness will let the agent use and how.
For now Claude is the best at this, MS is trying to keep up with Copilot in VSCode and Codex ... exists.
I think Anthropic took a look at the market, realized they had a strong position with Claude Code, and decided to capitalize on that rather than joining the race to the bottom and becoming just another option for OpenCode. OpenAI looked at the market and decided the opposite, because they don’t have strong market share with Codex and they would rather undercut Claude, which is a legitimate strategy. Don’t know who wins.
I feel like Anthropic is probably making the right choice here. What do they have to gain by helping competitors undercut them? I don’t think Anthropic wants to be just another model that you could use. They want to be the ecosystem you use to code. Probably better to try to win a profitable market than to try to compete to be the cheapest commodity model.
But there are specific subreddits and communities who did, /r/linux and related being the biggest ones, who moved to Lemmy.
As for Twitter blocking the API, they just killed all of the fun bots people made (two of mine) - the actual government propaganda troll-bots never went away; they just paid the $10 for the checkmark to get to the top of everyone's replies and kept running as-is.
And if they've made a business decision to do this, rolling it out without announcement is even worse.
Did they think no one would notice?
Plus I’m the one who compared them to Reddit. They certainly didn’t issue a statement that said “well it worked for Reddit”.
Plus its product utility scaled with user count.
To the upthread/sibling conversations about substitutability of LLMs (and therefore pricing power).
> Plus its product utility scaled with user count.
Which should mean it's more impactful for Reddit to lose a set of engaged users. The value that Reddit brings to its customers is directly proportional to how many customers it maintains. The same is not true for Anthropic.
What was the scaled pre-existing topic-divided community-moderated discussion space?
It's difficult to say Claude Code has first mover advantage when these discussions are littered with people talking about their alternative preferred toolchains.
> The value that Reddit brings to its customers is directly proportional to how many customers it maintains. The same is not true for Anthropic.
I'd question Anthropic's ability to fund Claude Code engineering vs their peer competitors, should they slip user count.
Reddit did not have community discussions originally. That came later (but before the Digg exodus).
It’s totally valid that people who used OpenCode with Claude would be annoyed, but less valid to act shocked.
It’s CC with Qwen and KLM and other OSS and/or local models.
And if I was making any money, the Max tiers would be pennies in the bucket.
It is blocking the usage of subsidized subscriptions, which are intended to be used with Claude Code, with third-party tools. Those third-party tools can still use Claude's API, but paying API rates, which are not subsidized, or at least are a lot less subsidized.
You can use the Anthropic API in any tool, but these users wanted to use the claude code subscription.
Isn't this what they just explicitly banned?
API Error: 529 {"type":"error","error":{"type":"overloaded_error","message":"Overloaded"},"request_id":"req_011CX42ZX2u
If they want to prioritize direct Anthropic users like me, that's fine. Availability is a feature to me. Also, you can still use OpenCode with API access... so no, they didn't lock anything down. Basically people just don't want to pay what is fair and are whining about it.
It looks like they need to update their FAQ:
Q: Do I need extra AI subscriptions to use OpenCode?

A: Not necessarily, OpenCode comes with a set of free models that you can use without creating an account. Aside from these, you can use any of the popular coding models by creating a Zen account. While we encourage users to use Zen, OpenCode also works with all popular providers such as OpenAI, Anthropic, xAI etc. You can even connect your local models.
What's changed is that I thought I was subscribing to use their API services, claude code as a service. They are now pushing it more as using only their specific CLI tool.
As a user, I am surprised, because why should it matter to them whether I open my terminal and start up using `claude code`, `opencode`, `pi`, or any other local client I want to send bits to their server.
Now, having done some work with other clients, I can kind of see the point of this change (to play devil's advocate): their subscription limits likely assume aggregate usage among all users doing X amount of coding, which, when used with their own CLI tool, works especially well with client-side and service caching and tool-call log filtering -- something 3rd party clients also do with varying effectiveness.
So I can imagine a reason why they might make this change, but again, I thought I was subscribing to a prepaid account where I can use their service within certain session limits, and I see no reason why the cli tool on my laptop would matter then.
Just pay per token if you want to use third party tools. Stop feeling entitled to other people's stuff.
Anthropic hasn't changed their licensing, just enforcing what the licensing always required by closing a loophole.
Business models aside - what is interesting is whether the agent :: model relationship requires a proprietary context and language such that, without that mutual interaction, coding accuracy and safety would somehow be degraded. Or will it be possible for agentic frameworks to plug and play with models and still generate similar outcomes?
So far, we tend to see that the former is needed -- that there are improvements to be had when the agentic framework and the model's language understanding are optimized for each other's unique properties. Not sure how long this distinction will matter, though.
that and they "stole" my money
In building my custom replacement for Copilot in VS Code, Anthropic's knowledge sharing on what they are doing to make Claude Code better has been invaluable
They simply stopped people from abusing an accessibility feature that they created for their own product.
> > > one word: repositories view
> > what do you mean?
> It's possible, and the solution is so silly that I laughed when I finally figured it out. I'm not sure if I should just post it plainly here since Anthropic might block it which would affect opencode as well, but here's a hint. After you exhaust every option and you're sure the requests you're sending are identical to CC's, check the one thing that probably still isn't identical yet (hint: it comes AFTER the headers).
I guess Anthropic noticed.
But they also have shown a weakness by failing to understand why people might want to do this (use their Max membership with OpenCode etc instead).
People aren't using opencode or crush with their Claude Code memberships because they're trying to exploit or overuse tokens or something. That isn't possible.
They do it because Claude Code the tool itself is full of bugs and has performance issues, and OpenCode is of higher quality, has more open (surprise) development, is more responsive to bug fixes, and gives them far more knobs and dials to control how it works.
I use Claude Code quite a bit and there isn't a session that goes by where I don't bump into a sharp edge of some kind. Notorious terminal rendering issues, slow memory leaks, or compaction related bugs that took them 3 months to fix...
Failure to deal with quality issues and listen to customers is hardly a good sign of company culture, leading up to IPO... If they're trying to build a moat... this isn't a strong way to do it.
If you want to own the market and have complete control at the tooling level, you're simply going to have to make a better product. With their mountain of cash and army of engineers at their disposal ... they absolutely could. But they're not.
But to me the appeal of OpenCode is that I can mix and match APIs and local models. I have DeepSeek R1 doing research while KLM is planning and doing code reviews and o4 mini breaking down screenshots into specs while local QWEN is doing the work.
My experience with bugs has also been the exact opposite of what you described.
And you let local QWEN write the code for you? Is the output any good or comparable to frontier models?
- ships with a web server that has a UI you can open in your browser, and even access remotely with Tailscale or the like
- the server has an API to do almost all things, allowing you to build on top of it; the people at Ramp built a very advanced internal agent on top of OpenCode https://builders.ramp.com/post/why-we-built-our-background-a...
- Google cutting off using search from other than their home page code. (At one time there was an official SOAP API for Google Search.)
- Apple cutting off non-Apple hardware in the Power PC era. ("We lost our license for speeding", from a third party seller of faster hardware.)
- Twitter cutting off external clients. (The end of TweetDeck.)
But it was only a matter of time before: a) Microsoft reclaimed its IDE b) Frontier model providers reclaimed their models
Sage advice: don’t fill potholes in another company’s roadmap.
Re: b) "frontier" models can reclaim all they want; bring it. that's not a moat.
The truth is Opencode didn’t have to bake this in. People who can will proxy Claude’s API anyways through other means.
Honestly, it seems like this played out in Opencode's favor. They are getting press for this and people who are used to Opencode now and can't use their Claude plan might use GLM 4.7 or Minimax M2, models they offer for free.
Anthropic should be profitable from the inference alone. That's their product...but they (like others) aren't.
This makes some sense now why they want to control usage/distribution. I bet they have a very good chunk of subscribers to Claude Code who aren't using their credits. So they probably don't have any chance at being profitable without this. Not a great place to be.
I remember when the story used to be the other way around - "just a wrapper", "wrapper AI startups" were everywhere, and nobody trusted that they could make it.
Maybe being "just a model provider" or "just a LLM wrapper" matter less than the context of work. What I mean is that benefits collect not at the model provider, nor at the wrapper provider, but where the usage takes place, who sets the prompts and uses the code gets the lion share of benefits from AI.
Being "just a wrapper" wouldn't be a risky position if the LLMs would be content to be "just a model." But they clearly wouldn't be, and so it wasn't.
It's a trivial violation until it isn't. Competitors need to be fought off early else they become much harder to fight in the future.
Power users?
Any such users in the thread? I used third-party clients for a little while but I did not see the benefit.
(I was more likely to do the opposite, and run Claude Code with a proxy which allows me to use it with other models. Though after much experimentation I ended up back on Claude.)
This will be completely forgotten in like a week.
And if you leave because of this, more support for those that abide by the TOS and stay.
This is akin to someone selling/operating a cloud platform named Blazure and it’s just a front for Azure.
My view to everyone is to stop trying to control the ecosystem and just build shit. Fast.
when i signed up for a subscription it was with the understanding that i'd be able to use those tokens on whichever agent i wanted to play with, and that as i got to something i wanted to have persistently running, i'd switch that to be an api client. i quickly figured out that claude code was the current best coding agent for the model, but seeing other folks calling opus now i'm not actually sure that's true, in which case that subsidized token might be more expensive to both me and anthropic, because it's not the most token-efficient route over their model.
i dislike that now i won't be able to feed them training data using many different starting points and paths, which i think will have a bad impact on their models, making them worse over time.
I have a gut feeling that the real top dog harness (profitability, sticky users, growth) is VSCode + Copilot.
This is really the salient point for everything. The models are expensive to train but ultimately worthless if paying customers aren't captive and can switch at will. The issue is that a lot of the recent gains are in the prefill inference, and in the model's RAG, which aren't truly a moat (except maybe for Google, if their RAG includes Google Scholar). That's where the bubble will pop.
That is it. That is the problem. Everyone wants vertical integration and to corner the market, from Standard Oil on down. And everyone who wants that should be smacked down.
Opencode was spoofing itself as the official Claude Code CLI to get access to the subscription tier.
What I learned from all this is that OpenAI is willing to offer a service compatible with my preferred workflow/method of billing and Anthropic clearly is not. That's fine but disappointing, I'm keeping my Codex subscription and letting my Claude subscription lapse but sure, it would be nice if Anthropic changed their mind to keep that option available because yes, I do want it.
I'm a bit perplexed by some comments describing the situation as though OpenCode users were getting something for free and stealing from CC users, when the plan quota was enforced either way and they were paying the same amount for it. Or why you seem to think this post pointing out that Anthropic's direct competitor endorses that method of subscription usage is somehow malicious or manipulative behavior.
Commerce is a two-way street and customers giving feedback/complaining/cancelling when something changes is normal and healthy for competition. As evidenced by OpenAI immediately jumping in to support OpenCode users on Codex without needing to break their TOS.
I think I just understand that companies only offer heavily subsidized services in return for something - in this case Anthropic gets a few things - to tell investors how many daily actives are on CC, and a % of CC users opting into data sharing. Plus control of their UX, more feedback on their product, future opportunities to show messages, etc. It's really just obvious and normal and I don't get why anyone would be upset that they removed OC access.
What do people expect from them?
(@dang often doesn't work, I just happened to see this. If you want guaranteed message delivery it's best to email hn@ycombinator.com)
what? that's a thing ? why would a vibe coder be "renowned"? I use Claude every day but this is just too much.
https://clawd.bot/ https://github.com/clawdbot/clawdbot
He's also the guy behind https://github.com/steipete/oracle/
Archaeologist.dev Made a Big Mistake
If guided by this morality column, Archaeologist should immediately stop using pretty much everything they use in their life. There's no company today that doesn't have its hands dirty. Life is a dance of choosing the least bad option, not radically cutting off anything that looks "bad".
The best pressure on companies comes from viable alternatives, not from boycotts that leave you without tools altogether.
That said, the author is deluding themselves if they think OpenAI is supporting OpenCode in earnest. Unlike Anthropic, they don't have explicit usage limits. It's a 'we'll let you use our service as long as we want' kind of subscription.
I got a paid plan with GPT 5.2 and after a day of usage was just told 'try again in a week'. Then in a week I hit it again and didn't even get a time estimate. I wasn't even doing anything heavy or high reasoning. It's not a dependable service.
Or maybe they did consider it but were capital/inference-capacity constrained to keep serving at this price point. Pretty sure that without any constraints they would eagerly go for 100% market share.
CC users give them the reins to the agentic process. Non-CC users take (mostly indirect) control themselves. So if you are forced to slow growth, where do you hit the brake (by charging de facto more per (API) token)?
I think these third party clients put their customers at risk. Most of them likely did not realize that the tools were doing something that violated ToS. Using these tools put many of those users at risk of account bans and risk Anthropic pulling the plug entirely and raising prices, which would be bad for everyone
To be clear, I’ve seen this sentiment across various comments not just yours, but I just don’t agree with it.