Subscription prices have API rates as their ceiling (and, realistically, sit way below it - why would you even subscribe if you could just go pay-what-you-use instead?), and those rates already carry a big margin for Anthropic. What still costs them a fuckton of money by comparison is training, but that is only going to get more efficient with more purpose-built hardware on the way.
Basically, I don’t see much reason to hike subscription prices dramatically. I don’t think they’ll stay at $100/$200, but anyone who’s paying that already knows how much value they’re getting out of it and probably wouldn’t mind paying more.
It worked for cloud services :-)
For the foreseeable future, frontier models will either be exclusively available from servers or significantly more affordable to run on servers than on local alternatives.
The moat is only
a) post-training magic for the elusive UX "vibes"
b) stickiness of the Claude UIs.
The first part will eventually (give it a couple of years) be solved by a LoRA marketplace.
The second is not relevant because existing UIs are very sticky already, and Claude won't be able to overcome decades of inertia anyway.
People with titles like
Giga Chad, MBA, CSS, CKAD, XXX, PQRS
are gonna love this.
In no time, HR will start slapping “10 years of certified Claude Code experience required” on job listings.
Nowadays I just paste a test, build, or linter error message into the chat, and the clanker knows immediately what to do, where the error originated, and looks into the causes. Oftentimes I come back to the chat and find a working explanation together with a fix.
Before, I had to actually explain why I wanted it to change some implementation in a particular direction, otherwise it would refuse with "no, I won't do that, because abc". Nowadays I can just pass the raw instruction - "please move this into its own function", etc. - and it follows.
So yeah, a lot of these skills become outdated very quickly. The technology is changing so fast that one constantly needs to revisit whether what one had to do a couple of months earlier is still required, and whether the limits of the technology are still in the same place or have moved further out.
Being the same age as Linus Torvalds, I'd say that it can be the opposite.
We are so used to "leaky abstractions" that we have just accepted this as another imperfect new tech stack.
Unlike less experienced developers, we know that you have to learn a bit about the underlying layers to use the high level abstraction layer effectively.
What is going on under the hood? What was the sequence of events which caused my inputs to give these outputs / error messages?
Once you learn enough about how the underlying layers work, you'll get far fewer errors because you'll subconsciously avoid them. Meanwhile, people with an "I only work at the high level" mindset keep trying to feed the high-level layer different inputs more or less at random.
For LLMs, it's certainly a challenge.
The basic low level LLM architecture is very simple. You can write a naive LLM core inference engine in a few hundred lines of code.
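For the curious, here's roughly what that looks like - a toy sketch in plain numpy with a single attention layer and random weights. Real engines load trained weights and stack dozens of such layers, so treat this purely as an illustration of the core loop, not anyone's actual implementation:

    import numpy as np

    rng = np.random.default_rng(0)
    VOCAB, D = 256, 64  # toy vocabulary and embedding size

    # "Model weights" -- random here, trained in a real model.
    W_emb = rng.normal(0, 0.02, (VOCAB, D))
    W_q, W_k, W_v, W_o = (rng.normal(0, 0.02, (D, D)) for _ in range(4))
    W_out = rng.normal(0, 0.02, (D, VOCAB))

    def softmax(x):
        x = x - x.max(axis=-1, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=-1, keepdims=True)

    def forward(token_ids):
        """One causal self-attention layer, then project to next-token logits."""
        x = W_emb[token_ids]                                  # (T, D) embeddings
        q, k, v = x @ W_q, x @ W_k, x @ W_v
        scores = (q @ k.T) / np.sqrt(D)                       # (T, T) attention scores
        mask = np.triu(np.full(scores.shape, -np.inf), k=1)   # hide future tokens
        att = softmax(scores + mask) @ v
        h = x + att @ W_o                                     # residual connection
        return h[-1] @ W_out                                  # logits for next token

    def generate(prompt_ids, n_tokens):
        ids = list(prompt_ids)
        for _ in range(n_tokens):
            logits = forward(np.array(ids))
            ids.append(int(np.argmax(logits)))                # greedy decoding
        return ids

    print(generate([1, 2, 3], 5))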
But that is like writing a logic gate simulator and feeding it a huge CPU gate list + many GBs of kernel+rootfs disk images. It doesn't tell you how the thing actually behaves.
So you move up the layers. Often you can't get hard data on how they really work. Instead you rely on empirical and anecdotal data.
But you still form a mental image of what the rough layers are, and what you can expect in their behavior given different inputs.
For LLMs, a critical piece is the context window. It has to be understood and managed to get good results. Make sure it's fed with the right amount of the right data, and you get much better results.
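To make "the right amount of the right data" concrete, here's a minimal sketch of context budgeting: keep the system prompt plus the newest messages that still fit, and drop the oldest ones. The four-characters-per-token estimate is a crude stand-in for a real tokenizer:

    def approx_tokens(text: str) -> int:
        # Rough heuristic; real tools count with the model's own tokenizer.
        return max(1, len(text) // 4)

    def fit_context(system: str, messages: list[str], budget: int) -> list[str]:
        used = approx_tokens(system)
        kept: list[str] = []
        for msg in reversed(messages):   # walk from newest to oldest
            cost = approx_tokens(msg)
            if used + cost > budget:
                break                    # everything older gets dropped
            kept.append(msg)
            used += cost
        return [system] + list(reversed(kept))

Real coding agents are fancier about it (summarizing old turns, retrieving relevant files on demand), but the underlying budget constraint is the same.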
> Nowadays I just paste a test, build, or linter error message into the chat and the clanker knows immediately what to do
That's exactly the right thing to do given the right circumstances.
But if you're doing a big refactoring across a huge code base, you won't get the same good results. You'll need to understand the context window and how your tools/framework feeds it with data for your subagents.
> “Good at explaining requirements, needs handholding to understand complex algorithms, picky with the wording of comments, slightly higher than average number of tokens per feature.”
I’m not saying this would be good at all, but the data (/insights) and the opportunity are clearly there.
But yeah, if the recruiters start asking for "10 years experience with Claude Code", then I guess a tongue-in-cheek answer would be "sure, I did 10 projects in parallel in one year".
Adding more people to a project doesn’t improve throughput past a certain point. Communication and coordination overhead (between humans) is the limiting factor. This has been well known in the industry for decades.
Additionally, I’d much rather hire someone who worked on a handful of projects but actually _wrote_ a lot of the code, maintained the project for a couple of years after shipping it, and has stories about what worked and didn’t, and why. Especially a candidate who worked on a “legacy” project. That type of candidate will be much more knowledgeable and able to steer an AI agent in the best direction far more effectively, taking various trade-offs into account. It’s all too easy to just ship something and move on in our industry.
Brownie points if they made key architecture decisions and if they worked on a large scale system.
Claude building something for you isn’t “learning” in my opinion. That’s like saying I can study for a math exam by watching a movie about someone solving math problems. Experience doesn’t work like that. You can definitely learn with AI, but it’s a slow process, much like learning the old-fashioned way.
Maybe “experience” means different things to us…
Linked from here: https://claude.com/partners
In interview/hiring situations where they're not expected or effectively required, they make for great chat fodder and a really good opportunity to exhibit awareness about yourself, the industry, and how the person on the other side of the table might perceive certifications given the context.
Great perspective. I'm going to do this. Haha.
Bruh lol these courses are marketing material designed by fresh grad communications majors. You're falling for exactly the scam they want you to fall for by giving so much benefit of the doubt to entities which deserve none.
Edit: no I don't do this kind of work but my mother does so I know exactly how the sausage is made.
Startups / technology companies that expect employees to be self-starters who can be set free to frolic amongst the problems are an aberration.
Or governments/large organizations performing box checking exercises
Doesn't stop them from being useless, though - like giving an electric drill to a chimp and telling it to build a house... lots of action, a lot of screeching, not much work.
One of the mistakes with AI is that people believe it will turn lead into gold: if you give AI bad prompts, AI will produce bad work.
Are you sure? What about all those AWS, Azure, etc certifications that many places require their engineers to have?
"Must have a degree or certification in Claude."
"Must hold an OpenClaw 2026 Grade II Certificate"
In fact, if you look at basically every major AI/LLM player, you'll see a similar "alliance" or "partnership". It's a sales channel of high-end referrals.
Businesses that are already in conversations about building partnerships and training with Anthropic.
The real revenue that foundation model companies like Anthropic, OpenAI, Google DeepMind, and others generate comes from enterprise deals with a smattering of government - not consumer.
Consumer usage is largely a loss leader used as a training/refining tool, and it's best to view the economics of foundational model providers through the same lens you would a hyperscaler.
A major component of AWS's rise was the ecosystem built around training and teaching people how to use AWS, thanks to the AWS certification program. Same for K8s via the Linux Foundation.
By building a partnership and training motion, Anthropic can get the WITCHes, Deloittes, PWCs, Accentures, KPMGs, and others to start offering turnkey services, which is why Anthropic has been working on building co-sell relationships with those kinds of companies.
And let's not even discuss the vacuity of their new cash-machine certifications. "Architect"? Come on...
E.g., "find where the method X is called and what arguments are passed".
That can be useful for refactoring or debugging.
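You can also cross-check what the LLM reports with a static lookup. A quick sketch using Python's ast module - "process" here is just a placeholder method name, and the embedded snippet stands in for real source files:

    import ast
    import textwrap

    # Placeholder code to search; in practice you'd read actual source files.
    SOURCE = textwrap.dedent("""
        obj.process(1, timeout=5)
        other.process(data)
        unrelated()
    """)

    tree = ast.parse(SOURCE)
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "process"):
            args = [ast.unparse(a) for a in node.args]          # Python 3.9+
            args += [f"{kw.arg}={ast.unparse(kw.value)}" for kw in node.keywords]
            print(f"line {node.lineno}: .process({', '.join(args)})")

Of course this only covers one language and literal call sites; the LLM handles the fuzzier cases, which is exactly where it earns its keep.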
Coding is the worst way to use an LLM though.
It is bullshit all the way down.
Doesn't make sense.
The partner network isn't really about certifications. It's about distribution.
Right now, if a business wants to "use AI," they have two options: 1) hire expensive AI engineers who understand the tools, or 2) fumble around with ChatGPT and hope for the best. The partner network creates option 3: pay a consultancy that's been trained to deploy Claude effectively.
Is this good for developers? Probably not - it'll create a layer of "certified Claude consultants" who know less than you do but charge 10x more. Is it good for Anthropic's revenue? Absolutely. Enterprise sales runs on relationships and trust signals, not technical merit.
The real play here is making Claude the "safe enterprise choice" - the AI equivalent of "nobody ever got fired for buying IBM." AWS did the same thing with their certification ecosystem and it worked incredibly well.
The certifications themselves are probably worthless. But the sales channel they create is worth $100M easily.