2 points by HipstaJules 4 hours ago | 4 comments
  • SirensOfTitan an hour ago
    We need to define terms precisely first, and the industry seems allergic to that, likely because precise terms would undermine the hype marketing necessary for companies like Anthropic to justify their valuations.

    We need clear definitions and clear ways of evaluating toward those definitions, as human evaluation of LLMs is rife with projection.

    Generally speaking, scaling alone is clearly not going to get LLMs there, and most of the gains over the past year or so have come from either reasoning or domain-specific training and application.

    I do think world models are the future, and we’ll likely see some initial traction toward that end this year. Frontier AI labs will have to prove they can run sustainable businesses in pursuit of the next stage, though, so I’d anticipate at least one major lab goes defunct or gets acquired. It may very well be that the labs that brush up against AGI, according to conventional definitions, are still at a nascent stage. And there’s a distinct possibility of another AI winter if none of the current labs can prove sustainable businesses on the back of LLMs.

    I think much of the West is in the early stages of a Kuhnian paradigm shift, so I’ve found it difficult to take the signaling from the macro environment and put it to work in my decision-making.

  • tim-tday 2 hours ago
    If LLMs are a dead end, then we have no idea how far we are.

    Many respectable AI experts think LLMs can’t cut it. (Look up what Yann LeCun has to say about it.)

    There are probably five respectable efforts toward AGI in Silicon Valley by people who might pull it off in less than a decade. (People smarter than me, doing work I’m not qualified to assess until after they have a successful breakthrough or fail to do so.) I’d guess China has a mirror set of efforts: more of them, with less of what we’d traditionally think is needed, but more chaos and spirit.

    My own thought is that if we can solve some things around slow thinking, nonverbal reasoning, and systems thinking, the LLM can be a large part of AGI, but it can’t be the foundation or the part that actually solves difficult problems.

    If we did build a simulacrum of a person using current technology, they’d be a charismatic, fast-talking liar capable of faking it in 80% of situations, but incapable of difficult reasoning or of moving math, science, or technology forward. So no worse than most people I encounter, but not useful for the sort of explosion of innovation some people are predicting.

  • nathanaldensr 3 hours ago
    To know "how far," you'd have to know the exact location of your destination. We don't know that; therefore, your question is not possible to answer.
  • damnitbuilds 4 hours ago
    LLMs will not lead to AGI.

    So we are back to the answer we had before LLMs: we don't know if mankind will ever be able to develop an AGI system.