To make this easier to discuss and think about, here's a short name <Y> for that thing, and a longer definition <Z> that probably describes that thing.
Oh look, if I take the literal definitions of some words in <Y>, there's a thing that fits!
Therefore <X> is solved!
I really don't think this would be the reaction. I'd say they would (or should) look at the systems we have now and see a very clear path between where they were then and where we are now, with all the positives _and negatives_. We still get hallucinations. We still get misalignment; if anything, as capabilities have improved, so has the potential for damage when things go wrong. It's pretty clear to me that late 2025 models are just better versions of what we had in 2021.
That's not to say they're not more useful or more valuable; they absolutely are! But that's all down to product integrations, speed, and turning up the dial on inference compute. They're still fundamentally the same things.
The next big step forward, the thing that LLMs are obviously missing, is memory. All the messing around with context windows, attention across the context space, chat lookup and fact-saving features, etc. amounts to patches over the fact that LLMs can't remember anything in the way that humans (or pretty much any animal) can. It's clear that we need a paradigm shift on memory to unlock the next level of performance.
A cat doesn't know its way around a house when it's born, but it also doesn't have to flick through markdown files to find its way around. A child can touch a hot stove once and be neurotic about touching hot things for the rest of their life, without having to read flash cards each morning or think for a few minutes about "what do I know about stoves" every time they're in the kitchen.
We're hacking around the fact that the models don't learn in normal use. That's in no way controversial.
A model that continuously learnt would not need the same sort of context engineering, external memory databases, etc.
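To be concrete about what those hacks look like in practice, here's a minimal sketch of a fact-saving memory feature (the note format and the `call_model` stub are invented for illustration, not any vendor's actual API). Note that the model never learns anything; the harness just saves text and pastes it back into every prompt.

```python
import json
from pathlib import Path

NOTES_FILE = Path("memory_notes.json")  # hypothetical on-disk "memory"

def load_notes() -> list[str]:
    # The model remembers nothing between calls; this file does.
    if NOTES_FILE.exists():
        return json.loads(NOTES_FILE.read_text())
    return []

def save_note(note: str) -> None:
    # "Learning" here is just appending a line of text.
    notes = load_notes()
    notes.append(note)
    NOTES_FILE.write_text(json.dumps(notes, indent=2))

def build_prompt(user_message: str) -> str:
    # Every single request re-teaches the model its own "memories" from scratch.
    notes = load_notes()
    memory_block = "\n".join(f"- {n}" for n in notes) or "- (none yet)"
    return (
        "Facts you previously learned about this user:\n"
        f"{memory_block}\n\n"
        f"User: {user_message}\nAssistant:"
    )

def call_model(prompt: str) -> str:
    # Placeholder for a real LLM call; returns a stub so the sketch runs.
    return f"(model response to a {len(prompt)}-char prompt)"

if __name__ == "__main__":
    save_note("User's cat is named Miso.")
    print(call_model(build_prompt("What's my cat called?")))
```

A continually learning model wouldn't need any of that scaffolding; the harness exists precisely because the weights don't change in normal use.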
> It's clear that we need a paradigm shift on memory to unlock the next level of performance.
and my take is that we might not need to get there to reach the next level of performance, based on how well the latest models are able to use these hacked-together memory features. On top of that, Claude was specifically RLHF'd around the skills concept, so it's good with those. We disagree; time will tell who ends up being right.
I think this points to the next phase of LLMs, or to a different neural network architecture that improves on top of them, alongside continual learning.
Adding memory capabilities would mostly benefit local "reasoning" models rather than online ones, since you'd be saving tokens to do more tasks instead of generating more tokens to use more "skills" or tools (unless you pay Anthropic or OpenAI more for memory capabilities).
It's part of why you see LLMs being unable to play certain games or do hundreds of visual tasks quickly without adding lots of harnesses and tools, or giving them a pre-defined map to help them understand the visual setting.
As I said before [0], the easiest way to understand the memory limitations of LLMs is Claude Plays Pokemon, where the model struggles with basic tasks that a 5-year-old can learn continuously.
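For anyone who hasn't looked at one of these harnesses, the core trick is that the loop around the model keeps the map, because the model won't retain it between calls. A toy sketch, with everything (the grid, `ask_model_for_move`) made up for illustration:

```python
import random

# Toy 5x5 grid "game". The harness, not the model, remembers the map.
GRID_SIZE = 5
visited: set[tuple[int, int]] = set()

def render_map(pos: tuple[int, int]) -> str:
    # Serialize the explored map into text so it can be re-fed every turn.
    rows = []
    for y in range(GRID_SIZE):
        row = ""
        for x in range(GRID_SIZE):
            if (x, y) == pos:
                row += "@"       # current position
            elif (x, y) in visited:
                row += "."       # already explored
            else:
                row += "?"       # unknown to the harness (and the model)
        rows.append(row)
    return "\n".join(rows)

def ask_model_for_move(map_text: str) -> str:
    # Stand-in for an LLM call; a real harness would put map_text in the prompt.
    return random.choice(["up", "down", "left", "right"])

def step(pos: tuple[int, int], move: str) -> tuple[int, int]:
    dx, dy = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}[move]
    x = min(max(pos[0] + dx, 0), GRID_SIZE - 1)
    y = min(max(pos[1] + dy, 0), GRID_SIZE - 1)
    return (x, y)

pos = (0, 0)
for _ in range(20):
    visited.add(pos)
    move = ask_model_for_move(render_map(pos))  # model only ever sees text
    pos = step(pos, move)

print(render_map(pos))
```

Strip the harness away and the model starts every turn with no idea where it has been, which is exactly the failure mode people see in the Pokemon runs.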
Can AGI not speak for itself? Does it need humans to speak and act on its behalf? Who are the high-priests and what are the sects?
> This is why I propose unilateral declaration as a strategic countermove [... tearing] away the veil of any-minute-now millenarianism to reveal deployed technology
I think that in an ideal world this would thoroughly embarrass the over-promisers by forcing them to put up or shut up, and it's fun to imagine... however, I worry that it won't work out that way. Instead of stopping the nonsense in its tracks, it'll just give it more momentum and worsen the eventual mess.
> What do I mean by AGI?
Can we fight it with a better term? Something like... Oh, I dunno, maybe "Artificial Narrative Intelligence", in the same sense that we could say A* is a kind of pathfinding intelligence.
I say "narrative" because we've got these machines that grow "fitting" documents, and are often used with stories to "decide" what happens next. For example, the story setting is a Support Page, the Customer Character says X, and the Very Helpful Robot Character then does Y and says Z in response, etc.
However, just because these stories fit surprisingly well doesn't mean the machine is doing the kind of "thinking" we really dreamed of.
> You sometimes read about employees of AI companies absorbed by their own products. Nobody on Earth has spent more hours talking to YakGPT than Katie Echo! Nobody can pump more code out of ShannonSoft than Johnny Narcissus! Recalling my Twitter experience, I think boasts (and posts) of this kind should inspire caution.
To me a lot of that feels like just the thing-of-the-day LinkedIn Lunacy, albeit running at an unusual intensity.
It's not perfect, but it doesn't need to be, to be useful.
Maybe AGI is here for the author and mediocre web developers; otherwise the big AI labs would have replaced their AI researchers already, and commercial airliners would have already replaced their pilots with GPTs.
This is exactly why “AGI” is meaningless.