As I've learned today: confidence is more influential than actual facts. So Altman has confidently grifted his way into a position where he might find a way to foot the bills, even if that way is just a government bailout - clever, but hardly the fault of the people saying "putting AI in charge is a bad idea".
And yes, we're nowhere near AGI, and, personally, I don't think our current trajectory leads there. Something fundamental has to change to reach that point. LLMs might be tools that an AGI uses, but in the same way that I am not a car (it's a tool I use, and it cannot work alone - it requires some intelligent direction), an AGI would not be a token-predictor. There's more to it than that, as easily evidenced by the hit/miss rate.
I'm not saying "don't use the tools". I'm saying "don't _trust_ the tools" - because they are probabilistic, not deterministic. They have no actual understanding. They can string tokens together well enough to fool humans into feeling like there's a person at the other end (and some people are fooled enough to believe AGI is in the making).
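To make "probabilistic, not deterministic" concrete: at each step an LLM produces scores over candidate next tokens and then *samples* one. Here's a toy sketch in plain Python - the vocabulary and scores are made up for illustration, not from any real model:

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Turn raw scores into a probability distribution over tokens.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(vocab, logits, temperature=1.0):
    # Draw one token according to its probability - the same input
    # can yield different outputs on different runs.
    probs = softmax(logits, temperature)
    return random.choices(vocab, weights=probs, k=1)[0]

# Hypothetical model scores for the next token after "The sky is":
vocab = ["blue", "clear", "falling", "green"]
logits = [4.0, 2.5, 0.5, -1.0]

random.seed(0)
samples = [sample_token(vocab, logits) for _ in range(1000)]
# "blue" dominates, but lower-probability tokens still come out sometimes -
# which is exactly why a fluent answer is not a guaranteed-correct answer.
print({tok: samples.count(tok) for tok in vocab})
```

The point of the sketch: even when the "right" token is the most likely one, the sampling step means you get the wrong one some fraction of the time - that's the hit/miss rate, baked in by design.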