> apologies for the confusion
response from @thariq
> Model access: When using a Pro plan with Claude Code, you will only be able to use Opus models after enabling and purchasing extra usage.
And your extremely slow-loading archive link is just the same page with the same text.
The live page - last edited 15 minutes after your comment - no longer has that passage.
After a few weeks they'll settle on the documentation for raised garden bed and the implementation of a home defence sprinkler system.
All while leaving you with a $10,000 bill at the end of the month.
Ain't life grand.
There is simply nothing that can compete with their open models. At the same time, more and more corps have gotten "AI addicted", so they will either have to pay ridiculous amounts of money or use the Chinese stuff.
My usage might be an outlier here, but 200->400 is an average case.
At least for coding, there's little correlation between token spend and the quality (and impact) of the resulting AI suggestion.
This is fine when inference prices are capped (eg via a monthly subscription plan or self-hosting), but rapidly discombobulates the relationship between provider and user otherwise.
It still seems like OpenAI has no moat and neither does anyone else, as the only reasonable way to use the coding slot machines is going to be via open source models on inference-optimized hardware.
Still better than the secret lobotomization they were doing on subscription plan models though.
My sentiment exactly! I have a very similar scaffold for each of my prompts, and feel I provide similar context files; however, sometimes I get a truly inspired, complex, and functionally complete response... and sometimes I'd have been better off running lorem ipsum through a python interpreter.
I can't find any rhyme or reason to success. I'm not sure if prompting is significantly more nuanced than I realise, or if it's the statistical magic that's having a laugh at me.
>open source models on inference-optimized hardware.
Is this actually a thing? Or are you talking about some hypothetical "opus 4.7 ASIC"?
Anthropic is making changes to its help and support pages for what looks like the next pricing change regarding how Pro plan users will access Opus models.
it is rough, but it has taught me to treat every prompt and process with care, since I watch the pennies and dollars burn instead of tokens, which is a good habit to get into anyway
Personally I'm anticipating agentic coding will be out of my price bracket (a single agent run costing US$20+ is far beyond what I can justify, especially with how often it fails). I'm planning on going back to optimised prompts on one-pass edits.
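To see how a single agent run can plausibly reach US$20+, here is a back-of-envelope sketch. The per-million-token prices and the turn/token counts below are hypothetical placeholders for illustration, not actual provider rates; the key point is that an agent loop re-sends a growing context every turn, so input tokens dominate the bill.

```python
# Back-of-envelope cost estimate for one agentic coding run.
# Prices below are ASSUMED placeholders, NOT actual provider rates.
INPUT_PRICE_PER_MTOK = 15.00   # USD per million input tokens (assumed)
OUTPUT_PRICE_PER_MTOK = 75.00  # USD per million output tokens (assumed)

def turn_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single model call, given token counts."""
    return (input_tokens / 1e6) * INPUT_PRICE_PER_MTOK \
         + (output_tokens / 1e6) * OUTPUT_PRICE_PER_MTOK

# Hypothetical run: 30 agent turns, each averaging 40k input tokens
# (the accumulating conversation plus tool output) and 2k output tokens.
total = sum(turn_cost(40_000, 2_000) for _ in range(30))
print(f"${total:.2f}")  # → $22.50
```

Under these assumptions, each turn costs $0.75 and the run totals $22.50; the run cost grows roughly with turns times average context size, which is why long-running agents get expensive quickly.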
The writing is on the wall.
As it stands now, there is so much FUD surrounding their offerings, I'm not sure what they could do in the short term to turn things around.
They need to start shifting from "move fast and break things" to "move faster by slowing down". Their public communication, feature set, and organization as a whole needs to start matching the scale and level they're competing at. They won many hearts and minds by being better and are losing them by being chaotic. Different outcomes from the same internal behavior because they needed to change gears and haven't.
Investors might move from funding the model providers to funding the enterprises that use those models. That is, they might move from funding the cost of the experiment to funding the value of the result. No funding if there are no demonstrable AI gains.
This is a reasonable shift if this happens. If enough gains have been demonstrated, then investors might go back to funding the model providers. Investors always move towards the highest leverage point.
As long as AI delivers, this would be the rhythm.