No reason to expect the capabilities of models are going to stop improving.
They are saying "trust me, bro, I have a superhacker model" and then proceed to show zero evidence that it is what they are hyping it to be.
E.g. even though LLMs can generate code and we have agents, the software engineering profession is not being destroyed. Demand for software engineers in the labour market is still strong.
Also, a thing that isn't said loudly (and for good reason) is that code is never perfect - bugs and vulnerabilities are always there. And the reality is, shipping imperfect code is often the optimal choice: if it weren't done, software releases and the reallocation of resources towards other projects would slow down. Aka slowing down economic activity.
In my view the naysayers have always simply been moving the goalposts, and they never admit when they were wrong. "AI just produces slop" -> "AI can't write useful code" -> "AI can't take SWE jobs" -> [we are here]
You, just like 'them', are no better. It would be better if everyone on the extreme ends could be muted - the truth is closer to the middle.
Didn't Oracle just lay off 30k people? Meta plans on laying off thousands more this year, and Microsoft and Amazon have already done layoffs in the name of AI.
And it's impossible for new grads in CS to find a job. You would have to be insane to study CS in college today. My big tech org hasn't hired interns in over a year.
I don't think anyone claimed "all SWE jobs would disappear by April 2026," but two things have come true:
- AI can write nearly all new code today, and it does at many companies. (People were very skeptical when Dario claimed this would happen just over a year ago)
- AI has fundamentally devalued programming. Code is cheap.
So at what point should naysayers update their priors? How many times must people be proven wrong before they think "maybe I have the wrong perspective here after all"?
Full disclosure, I used to be a skeptic myself in the early days, but I think being a skeptic today is pure stubbornness, not rationality.
What matters is aggregate activity in the labour market - and there, demand for software engineers is still healthy.
So?
Modern software is designed with a defense-in-depth model, so a successful exploit often requires chaining multiple vulnerabilities. But individual vulnerabilities still need finding and fixing, because someone might later find vulnerabilities in the other isolation layers.
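To make the chaining point concrete, here's a toy sketch (all names and checks are made up for illustration, not real exploit code): under defense in depth, a payload that defeats only one layer still fails overall, so patching any single layer breaks the whole chain.

```python
# Toy model of defense in depth: an "exploit" succeeds only if the
# payload defeats BOTH layers. Both checks are deliberately silly
# stand-ins for real controls.

def input_filter(payload: str) -> bool:
    """Layer 1: reject obviously malicious input."""
    return "<script>" not in payload

def sandbox_allows(payload: str) -> bool:
    """Layer 2: even filtered input runs with restricted privileges."""
    return "sudo" not in payload

def exploit_succeeds(payload: str) -> bool:
    # The attacker needs a bypass for layer 1 AND layer 2 together.
    return not input_filter(payload) and not sandbox_allows(payload)

# Beating only the input filter isn't enough - the sandbox still holds:
print(exploit_succeeds("<script>steal()</script>"))       # False
# Only a chained payload defeating both layers "works":
print(exploit_succeeds("<script>sudo steal()</script>"))  # True
```

This is also why fixing an individually "unexploitable" bug still matters: it removes one link from every future chain that would have needed it.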
I swear every time an LLM does something useful, the usual band of skeptics bends over backwards trying to invent reasons to dismiss it.
Also, I don't believe it is fair to dismiss skeptics as inventing reasons. If anything, the "believers" are bending over backwards to praise Anthropic even though they didn't actually release anything.