I expect this definition will be proven incorrect eventually. This definition would best be described as a "human-level AGI", rather than AGI. AGI is a system that matches a core set of properties, but it's not necessarily tied to capabilities. Theoretically one could create a very small resource-limited AGI. The amount of computational resources available to the AGI will probably be one of the factors that determines whether it's, e.g., cat level vs human level.
"AGI is reached when it’s no longer easy to come up with problems that regular people can solve … and AIs can’t."
Currently, AGI is defined in a way where it is truly indistinguishable from superintelligence. I don’t find that helpful.
[1] https://www.noemamag.com/artificial-general-intelligence-is-...
Yes, that's more versatile than most of us, because most of us are not at or above the median practiced person in a wide range of tasks. But it's not what I think of when I hear "superintelligence," because its performance on any given task is likely still inferior to the best humans.
AI is already better than a 50th percentile human on many/most intellectual tasks. Chess, writing business plans, literature reviews, emails, motion graphics, coding…
So, if we say "AI is not AGI" because (1) it can't do physical tasks, or (2) it can't yet replace intellectual human labor in most domains (for various reasons), or (3) <insert reason for not being AGI>, then it stands to reason that by the time we reach AGI, it will already be superintelligent (smarter than humans in most domains).
> > Yes, that's more versatile than most of us, because most of us are not at or above the median practiced person in a wide range of tasks. But it's not what I think of when I hear "superintelligence," because its performance on any given task is likely still inferior to the best humans.
> AI is already better than a 50th percentile human on many/most intellectual tasks. Chess, writing business plans, literature reviews, emails, motion graphics, coding…
Note the caveat above of "with some practice." That's much less clear to me.
> That seems like a personal definition for super intelligence.
I was giving a definition for artificial general intelligence as distinguished from super-intelligence, since the poster above said that most definitions of AGI were indistinguishable from super-intelligence.
To me, a computer doing as well as a practiced human at a wide swath of things is AGI. It's artificial, it's intelligence, and it's at least somewhat general.
I think the goalposts for "AGI" will keep moving so current AI doesn't match it.
I've thought of it as human-level, but already people are saying beating the average human isn't enough and it has to beat the best and be Nobel Prize worthy.
If we were dogs, we'd invent a basic computer and start writing scifi films about whether the computers could secretly smell things. We'd ask "what does the sun smell like?"
Then Kurzweil became my manager's peer at Google in 2014 or so (actually 2 managers). I remember he was mocked by a few coworkers (and maybe deservedly so, because they had some mildly funny stories).
So I have been wondering with all the AGI talk why Kurzweil isn’t talked about more. Was he vindicated in some sense?
I did get a partial answer - one reason is that doomer AGI prophecies are better marketing than Kurzweil’s brand of AGI, which is about merging with machines
And of course both kinds of AGI prophecies are good distractions from AI ethics, which is more likely to slow investment than to grow it
No. He's still saying AGI will demand political rights in 2029. Like Geoffrey Hinton, Kurzweil gets a pass because he's brilliant and accomplished. But also like Hinton, he's wrong about this one issue. With Hinton it appears to be fear driving his fantasies. With Kurzweil it's probably over-confidence.
> With Kurzweil it's probably over-confidence.
It's his fear of mortality, which also helps explain the "merging" emphasis.
Every new Kurzweil "prediction" involves technologies that are just amazing-enough and the timeline just aggressive-enough that they converge into a future where a guy of Ray Kurzweil's age just manages to hop onto the first departure of the train to immortality.
If y'all have seen any exception to that pattern, please let me know, I'm genuinely curious.
> a scheme that’s flexible enough to sustain belief even when things don’t work out as planned; the promise of a better future that can be realized only if believers uncover hidden truths; and a hope for salvation from the horrors of this world.
Sometimes 90% of the "hidden truths" are things already "known" by the believers, an elite knowledge that sets them apart from the sheeple. The remaining 10% is acquiring some McGuffin that finally proves they were Right-All-Along so that they can take a victory lap.
> Superintelligence is the hot new flavor—AGI but better!—introduced as talk of AGI becomes commonplace.
In turn, AGI was the hot new flavor—AI but better!—companies pivoted to as consumers started getting disappointed/jaded experiencing "AI" that wasn't going to give them robot butlers.
> When those people are not shilling for utopia, they’re saving us from hell.
Yeah, much like how hatred is not really the opposite of love, the "AI doom" folks are really just a side-sect of the "AI awesome" folks.
> But what if there are, in fact, shadowy puppet masters here—and they’re the very people who have pushed the AGI conspiracy hardest all along? The kings of Silicon Valley are throwing everything they can get at building AGI for profit. The myth of AGI serves their interests more than anybody else’s.
Yes, the economic engine behind all this, the potential to make money, is what really supercharges everything and lifts it out of niche communities.
There is...chanting in team meetings in the US?
Has this been going on for long now, or is this some new trend picked up in Asia or something like that?
[1] https://www.mercurynews.com/2020/11/25/theranos-founder-holm...
If we define it as "a machine that can match humans on a wide range of cognitive tasks," that raises the questions: which humans? Which range? What cognitive tasks? I honestly think there is no answer you could give to these three alone that wouldn't cause everything to break down again:
For the first question, if you say "all humans," how do you measure that?
Do we use IQ? If so, then you have just created an AI which is able to match the average IQ of whatever "all" happens to be. I'm pretty sure (though have no data to prove it) that the vast super-majority of people don't take IQ tests, if they've ever even heard of them. So that limits your set to "all the IQ scores we have". But again... Who is "we"? Which test organization? There are quite a few IQ testing centers/orgs, and they all have variations in their metrics, scoring, weights, etc.
If you measure it by some other thing, what's the measurement? What's the thing? And, does that risk us spiraling into an infinite debate on what intelligence is? Because if so, the likelihood of us ever getting an AGI is nil. We've been trying to define intelligence for literally thousands of years and we still can't find a definition that is even halfway universal.
If you say anything other than all, like "the smartest humans" or "the humans we tested it against," well... Do I really need to explain how that breaks?
For the second and third questions, I honestly don't even know what you'd answer. Is there even one? Even if we collapse the second and third questions into "what wide range of cognitive tasks?", who creates the range of tasks? Are these tasks ones any human from, let's say, age 5 onward would be capable of doing? (Even if you answer yes here, what about those with learning disabilities or similar who may not be able to do whatever tasks you set at that age because it takes them longer to learn?) Or, are they tasks a PhD student would be able to do? (If you do this, then you've just broken the definition again.)
Even if we rewrite the definition to be narrower and less hand-wavy, like, an AI which matches some core properties or something, as was suggested elsewhere in these comments, who defines the properties? How do we measure them? How do we prove that us comparing the AI against these properties doesn't cause us to optimize for the lowest common denominator?
Let's say the AGI true believers, the really big players out there, believe two things simultaneously that seem impossible together:
1) AGI will revolutionize humanity and propel us to unimaginable progress
2) AGI will destroy humanity
What if they actually believe 2) is a precursor to 1)? A lot of them seem to be building bunker fortresses on private islands right now. It seems like there is a certain class of very rich person that thinks yes, this current iteration of civilization is doomed (whether you believe it to be nuclear war, climate change, AGI, whatever), but, by virtue of them being Very Smart and Wealthy Human Beings, they can ride that out, and create a new civilization built on what was left behind.
I haven't done a deep dive into a lot of the writings by these guys, but this kind of attitude seems to ooze from them. They're not concerned, because they think they're the ones that won't be affected. I think history says otherwise - when civilizations undergo catastrophic events, it tends to be the minority ruling classes that get eaten alive, not the other way around. I guess we'll see either way, right or wrong.
Also, in retrospect, something doesn't quite add up about the 'AI winter' narrative. It's hard to believe that so many people were studying and working on AI and it took so long given that ultimately, attention is all you need(ed).
I studied AI at university in Australia over a decade ago and did the introductory course, which was great; we learned about decision trees, Bayesian probability and machine learning, and we wrote our own ANNs from scratch. Then I took on the advanced course, expecting to be blown away by the material, but the whole course was about mathematics, no AI theory; even back then there was a lot of advanced material they could have covered (e.g. evolutionary computation) but didn't... I dropped out after a week or two because of how boring it was.
In retrospect, I feel like the course was made boring and irrelevant on purpose. I remember I even heard someone in my entourage mention that AI winter is not real... While we were supposedly in the middle of it.
Also, I remember thinking at the time that evolutionary computation combined with ANNs was going to be the future... So I was kind of surprised how evolutionary computation seemingly disappeared out of view... In retrospect though, I think to myself; progress in that area could potentially lead to unpredictable and dangerous outcomes so it may not be discussed openly.
Now I think: take an evolutionary algorithm, combine it with modern neural nets with attention mechanisms, and you'd surely get some impressive results.
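To make the idea concrete, here's a toy sketch of that combination (my own illustration, nothing from any course or paper): a plain evolutionary loop that mutates the flattened weights of a tiny network against a made-up fitness task (XOR). A real attempt would evolve something far larger, attention layers included, but the loop would have the same shape.

```python
import numpy as np

# Toy fitness task: XOR. Fitness = negative squared error of a tiny 2-layer net.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def unpack(genome):
    # Genome is a flat vector holding all weights:
    # 2x4 hidden weights, 4 hidden biases, 4 output weights, 1 output bias.
    W1 = genome[:8].reshape(2, 4)
    b1 = genome[8:12]
    W2 = genome[12:16]
    b2 = genome[16]
    return W1, b1, W2, b2

def fitness(genome):
    W1, b1, W2, b2 = unpack(genome)
    hidden = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(hidden @ W2 + b2)))  # sigmoid output
    return -np.mean((out - y) ** 2)                  # higher is better

rng = np.random.default_rng(0)
pop = rng.normal(size=(50, 17))  # 50 random genomes

for gen in range(200):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-10:]]                   # keep the 10 fittest
    children = np.repeat(parents, 5, axis=0)                  # clone them
    children += rng.normal(scale=0.1, size=children.shape)    # mutate
    pop = children

best = max(pop, key=fitness)
print("best fitness:", fitness(best))
```

No gradients anywhere; selection plus mutation does all the work, which is exactly why it scales so badly compared to backprop, and also why people keep wondering what it could do with modern compute.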
I don't think Netflix used any of the algorithms. I suspect they have more data on the user and movies than was presented in the contest.
I agree, if all of the content is garbage, then you don't need a model, you could simply pick something at random.
That the claims appear extreme and apocalyptic doesn't tell us anything about correctness.
Yes, there are tons of people saying nonsense, but look back at events. For a while it seemed as though AI was improving extremely quickly. People extrapolated from that. I wouldn't call that extrapolation irrational or conspiratorial, even if it proves incorrect.
If they discussed what a future moon landing might be like or how it could work, they would be a futurist.
If they were raising funds for a moon landing that they are currently working on, and success is surely imminent, despite not having any evidence that they can achieve it, or that they have beaten the technical hurdles necessary to do so, then they would be seen as a fraud.
It doesn’t really matter that at some point in the future the moon landings happened.
So, if you assume that AGI is fake and impossible, it's... A conspiracy. Sure.
Though, if you just finished quoting Turing (and folks like von Neumann), who thought it is possible, it would be good form for you to offer some reasoning that it's impossible, without alluding to the ineffable human soul or things like that.
It is the ultimate example of always having to be on guard against argumentum ad verecundiam.
That seems like a bad straw-man for "AI boosterism has the following hallmarks of conspiratorial thinking".
> offer some reasoning that it's impossible
Further on, the author has anticipated your objection:
> And there it is: You can’t prove it’s not true. [...] Conspiracy thinking looms again. Predictions about when AGI will arrive are made with the precision of numerologists counting down to the end of days. With no real stakes in the game, deadlines come and go with a shrug. Excuses are made and timelines are adjusted yet again.
No more than yelling "electricity is conspiracy thinking/Satan's plaything!" repeatedly would have stopped engineers in the 19th century from studying and building with it.
What's this, a second straw-man? So quickly after the first?
TFA never condemned invention or hard work, nor does it agree with the "doomers" who consider the target-invention to be fundamentally bad. At most it's a critique of a set of beliefs/rationalizations plus choices made by investors.
> makes you angry [...] people who have thought about it a lot more than you [...] repeatedly yelling [...] "you're a fool!"
Who's angry? Who's making it personal?
I think reading the article made you angry... and you're projecting it onto everybody else.
We don't have to save everybody, but only by trying to do we save some.
For AGI, that's very far from being true.
I don't believe LLMs with DLC will reach AGI, but I assume it will happen at some point in the future.
Wattlash /ˈwɒt-læʃ/
n. The fast, localized backlash that erupts when AI-era data centers spike electricity demand—triggering grid constraints, siting moratoriums, bill-shock fears, and, paradoxically, a rush into fixes like demand-response deals, waste-heat reuse, and nuclear/fusion PPAs.
I just don't think that's true. People used to say this kind of thing about computer vision - a computer can't really see things, only compute formulas on pixels, and "does this picture contain a dog" obviously isn't a mathematical formula. Turns out it is!
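For what it's worth, that "formula" framing is literal: a trained image classifier is just a chain of multiply-adds and nonlinearities applied to pixel values. Here is a minimal sketch of that shape of computation; the weights are random stand-ins for what training would produce, so it detects nothing, it only shows what the function looks like.

```python
import numpy as np

# Random weights standing in for trained ones: the point is the shape of the
# computation, not a working detector.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(64 * 64 * 3, 256)) * 0.01  # pixels -> 256 features
w2 = rng.normal(size=256) * 0.01                 # features -> one score

def contains_dog(image):
    """image: a 64x64 RGB array of pixel values in [0, 1]."""
    x = image.reshape(-1)                 # flatten pixels into one long vector
    h = np.maximum(0.0, x @ W1)           # multiply-add plus ReLU
    score = h @ w2                        # a single number
    return 1.0 / (1.0 + np.exp(-score))   # squashed into a "probability"

print(contains_dog(rng.random((64, 64, 3))))
```

Training is nothing more than searching for weights that make this kind of formula output high numbers on dog pictures and low numbers on everything else.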
At the time there was no obvious reason not to trust that OpenAI was trying to act for the benefit of society, per their charter, so it seemed like an abundance of caution, and this level of LLM capability was new to most of us, so it was hard to guess how dangerous it actually was...
However, in retrospect, seeing how OpenAI continues to behave, it may well have just been to get publicity.
This whole "Be warned, we're about to release something that will destroy society!" shtick seems to be a recurring thing with the AI CEOs, specifically Altman and Amodei (who switched into hardcore salesman mode about a year ago).
The latest Twitter "warning" from Altman is to claim that their AI will soon be at the level of their AI developers, and so we should be prepared (for the self-accelerating singularity I suppose). Maybe this inspired someone to write him another trillion dollar check?
Another quote: "Trying GPT-4.5 has been much more of a 'feel the AGI' moment among high-taste testers than I expected!"
That's the most important paragraph in the article. All of the self-serving, excessive exaggerations of Sam Altman and his ilk, predicting things and throwing out figures they cannot possibly know. "AI will cure cancer, and dementia! And reverse global warming! Just give more money to my company, which is a non-profit and is working for the good of humanity. What is that? Do you mean to say you don't care about the good of humanity?" What is the word for such behaviour? It's not hubris; it's a combination of wild prophecy and severe main character syndrome.
I heard once, though I have no idea if it's true, that he claims he carries a remote control around with him to nuke his data centres if they ever start trying to kill everyone. Which is obviously nonsense, but is exactly the kind of thing he might say.
In the meantime they're making loads of money by claiming expertise in a field which doesn't even exist and, in my opinion, never will, and that's the main thing I suppose.
That would be quite useless even if it existed, since now that you've said it, the AGISGIAIsomething will surely know about it and take appropriate measures!
Why would anyone subject themselves to so much hatred? Have some standards.
The days of plain text Google AdWords are long, long gone.
In fact, generating ad views and not purchasing things from them reduces the value of the ads to the website.
They have, including multiple times in this very article, but the author's not willing to listen. As he says later:
> But set aside the technical objections—what if it doesn't continue to get better?—and you’re left with the claim that intelligence is a commodity you can get more of if you have the right data or compute or neural network. And it’s not.
Modern AI researchers have proven that this is not true. They routinely increase the intelligence of systems by training on different data, using different compute, or applying different network architectures. But the author is absolutely convinced that this can't be so, so when researchers straightforwardly explain that they have done this, he's stuck trying to puzzle out what they could possibly mean. He references "Situational Awareness", an essay that includes detailed analyses of how researchers do this and why we should expect similar progress to continue, but he interprets it as a claim that "you don’t need cold, hard facts" because he presumes that the facts it presents can't possibly be true.