I switched a few days ago and work has been much less frustrating. Feels like CC did back in February before they started playing games.
It also doesn't eat nearly as many tokens, so it's saving me $100/mo.
For coding it's fine. I haven't experimented much with Amazon Bedrock myself, but I just might soon to check for any limitations.
It can use playwright, web fetch, etc…
I use bedrock at work and Claude subscription at home. They are pretty much exactly the same in my experience
Or do you mean the Claude in chrome plugin? Bedrock doesn’t have that, but in my experience it doesn’t work that well.
Neither does the Claude managed agents or ultra plan.
I say this as someone working for a tech company who does not have to foot the bill (in the >$1k per month bracket)
I also experienced and accept the 1990s levels of unreliability; that's my "internet generation". My first access was lifting a handset and placing it on a speaker/mic cradle.
Programmers these days are fucking spoiled. If it’s $220 worth of value for $200 - I get it. But I’m getting $100k of value for $10k and so I’ll put up with some shit.
Wrong comparison. If a competitor gives you $230 of value for $200, of course you shouldn't pick the $220 one.
I am an API user, and while it being down is super annoying, it isn't really as big of a hit to my overall usage as I can just prepare a bunch of stuff to run in parallel when it does come back up.
Is this just the API and I'm too much of luddite to actually use the API?
It's starting to feel like a lot of comments on here and other social media outlets that are anecdotal about their experience with x model and y tool are astroturfing. They add almost zero value to the conversation.
This is a multi-billion-dollar market and battleground, so I'm skeptical of anyone telling me that this isn't happening at a decent clip. I think moderators on the site should definitely consider how to approach this, because it's devaluing this space as a place for actual discourse.
My mind also considers that, this being one of Altman's old stomping grounds, he may place a higher value on winning here than elsewhere.
[0] I say December because that's around the time the models got good enough that non-AI folks started to notice.
I don't really blame Anthropic here.
I couldn’t find any public data on GitHub, but Google Trends shows a sharp increase starting in December.
That could be due in part to people complaining about the outages, but more people than ever are writing code with AI.
Hence the parallel to Eternal September – code volume is up, quality is down, and programming is never going to return to how it was (difficult for “normal” people to interface with).
There's a live Claude status board in the corner so you know when it's time to get back to work.
Yikes.
We can use several different topologies (2 or 3 agents, etc.), but currently we primarily use pair-programming teams consisting of an opus4.7 for implementation and a codex5.5 for plan and code reviews, plus a codex5.5 run-manager that pushes the agent lanes along, keeps things moving if they get stuck, and handles escalated reviews.
Escalation to the run-manager is a pretty regular thing, as codex5.5 is generally picky and thorough and opus4.7 pushes back at times; after three codex rejections we allow opus4.7 to escalate to a run-manager decision to settle it. Usually opus4.7 agrees and will continue iterating, but it's not unusual for it to push back and escalate.
I've found codex5.5 is extremely capable. I just now finished a large multi-phase orchestrated swarm run with codex5.5 (xhigh) as the run-manager, presiding over 8 paired lanes, with 8 opus4.7 (high) implementers and 8 codex5.5 (high) reviewers, so 16 agents orchestrated and working in a swarm together. Codex5.5 managed that run perfectly for 14 hours with zero intervention needed by me.
Overall, I prefer to let opus4.7 draft the plans and then let codex5.5 offer git-diff style change feedback on plans, then let opus implement and codex review/manage. This seems to get the best result for me.
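The three-rejections-then-escalate rule described above can be sketched as a small loop. This is purely illustrative: the function names (`implement`, `review`, `run_manager_decide`) and the string verdicts are made up, and the actual agent calls are stubbed out.

```python
# Hypothetical sketch of one implementer/reviewer lane with the
# "escalate after three rejections" rule. All agent interfaces here
# are invented for illustration; real orchestration would call out
# to the respective model harnesses.

MAX_REJECTIONS = 3

def run_lane(task, implement, review, run_manager_decide):
    """Drive one implementer/reviewer pair until approval or a ruling."""
    change = implement(task, feedback=None)
    rejections = 0
    while True:
        verdict, feedback = review(change)
        if verdict == "approve":
            return change
        rejections += 1
        if rejections >= MAX_REJECTIONS:
            # After three rejections, the run-manager settles the dispute.
            ruling = run_manager_decide(task, change, feedback)
            if ruling == "accept":
                return change
            rejections = 0  # keep iterating under the manager's guidance
        change = implement(task, feedback=feedback)
```

A swarm run would just fan this out over N lanes, with the run-manager also watching for stuck lanes.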
It also fits nicely. Claude plans better, and Codex has way higher limits.
Anthropic seems to have killed their advantage, squandering the immense goodwill they built up with the developer community by blundering over and over these last few months.
Tonight, for instance, after the incident had recovered, I restarted my work. On my Max account, my usage quota was completely exhausted after 4 minutes of sonnet subagent work. This was long after prime time, and the workload was a fraction of what I normally do.
These days I run codex concurrently and have adapted my marketplaces, plugins, and MCPs to it (other than the agents, which I do lean heavily on), and I generally find it a capable replacement. Anthropic needs to take notice and get their house in order.
*But* I don't work with the defaults -- I work with my own prompt framework based off of superpowers.
Given sufficient prompt scaffolding, I've found the models relatively interchangeable -- I might be getting some of this for free by basing my own system off of superpowers, which is used across various harnesses. In other words, achieving this kind of portability may be a lot harder than it looks, and I'm benefiting from other people's work.
To get an idea of what I'm talking about, you could install https://github.com/obra/superpowers/ into both Codex and Claude Code -- You'll find that the behavior is remarkably similar if you A/B compare them on the same problems. CC occasionally misses things that Codex gets and vice versa.
Overall the output structure and final code are remarkably similar... which is pretty different from what you get if you just run them with their default system prompts. I'd throw codex out the window with its default outputs.
It's fine for Claude to be unavailable when there is no work at these hours. However, the problem is Claude gave no notice.
At this rate, Claude being unavailable every day is no better than a human working a 9-to-5 job.
I worked with 4.6 and found some improvements in planning that sustained us, but I agree with some posters that 4.7 is slower and prone to overthinking.
What I expect is for frontier models to get bigger and more expensive (especially fast modes, like on Cerberus), and for most of us to get much smaller distillations on the more generous subscription tiers.
99.02% uptime
However, when there is an incident it is immediately "human error", not Claude.
> Can’t they prompt Mythos to give them better uptime?
Anthropic is currently "vibe coding" the situation right now.
Normally I'd just have it write out what it's doing to a file, if I need to transfer context, but if it goes down mid-session that's a no-go.
I think people have built tools for this, and of course you could reasonably vibe one yourself, but I don't really trust something like that to work reliably or in an ongoing manner.
Maybe it should just be a skill.
Anthropic has, however, blocked usage of your subscription with third-party harnesses.
This is the main reason I use different harnesses, but I also expect (could be wrong) that codex is better with the codex harness (due to training on its specific tools) than with other harnesses. I use opencode for everything that's not claude/codex.
Still, it's pretty crazy that Claude is down to 1 nine.
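To put "one nine" in perspective, here's a rough downtime-budget calculation, assuming a flat 30-day month (the function and constant names are made up for illustration; real SLA accounting depends on the measurement window):

```python
# Rough downtime budget per "nine" of uptime, assuming a 30-day month.
HOURS_PER_MONTH = 30 * 24  # 720

def monthly_downtime_hours(uptime_percent: float) -> float:
    """Hours of allowed downtime per month at a given uptime percentage."""
    return HOURS_PER_MONTH * (1 - uptime_percent / 100)

for nines, pct in [(1, 90.0), (2, 99.0), (3, 99.9), (4, 99.99)]:
    print(f"{nines} nine(s) ({pct}%): {monthly_downtime_hours(pct):.2f} h/month")
```

One nine allows roughly 72 hours of downtime a month, versus about 43 minutes for three nines, which is why sliding from "three nines" territory toward one reads as such a dramatic regression.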
We can now shop around easily. They almost all do the same thing now. The models are "Just Enough".
[unknown] missing EndStreamResponse
It's impossible to tell these days whether 4.7 is stuck because it's thinking and Anthropic suppressed all output (seriously, 4.7 will just start making changes without explaining any reasoning - how is that an upgrade?) or because the underlying infrastructure is having issues.
4.5 -> 4.7 feels like going from working with a coach-able, junior engineer that does well with clear guidance to working with a cocky mid-level that will spend too long on pointless tangents and make confidently incorrect changes without any discussion.
Many such cases with humans too (given that we keep comparing LLMs to humans these days, which you really can't).