32 points by flail 12 hours ago | 6 comments
  • JuniperMesos 11 hours ago
    Interesting that in an article entitled "Why I'm betting against AGI hype", the author doesn't actually say what bet he is making - i.e. what specific decisions is he making, based on his prediction that AGI is much less likely to arise from LLMs than the probability the market is implicitly pricing in suggests. What assets is he investing in or shorting? What life decisions is he making differently than he otherwise would?

    I say this not because I think his prediction as stated here is necessarily wrong or unreasonable, but because I myself might want to make investment decisions based upon this prediction, and translating a prediction about the future into the right trades today is not trivial.

    Without addressing his argument about AGI-from-LLMs - because I don't have any better information myself than listening to Sutskever on Dwarkesh's podcast - I am somewhat skeptical that the current market price of AI-related assets is actually pricing in a "60-80%" chance of AGI from LLMs specifically, rather than all the useful applications of LLMs that are not AGI. But this isn't a prediction I'm very confident in myself.

    • karmakaze 11 hours ago
      Armchair commentary.

      > I’ve listened to the optimists—the researchers and executives claiming [...]

      Actually researchers close to the problem are the first ones to give farther out target dates. And Yann LeCun is very vocal about LLMs being a dead end.

      • nomel 7 hours ago
        > farther out target dates

        And, that's why there's so much investment. It's more of a "when" question, not an "if" question (although I have seen people claim that only meat can think).

      • klysm 9 hours ago
        He is starting a business that depends on them being a dead end
        • techblueberry 9 hours ago
          Sounds like he’s putting his money where his mouth is.
      • arisAlexis 6 hours ago
        Same guy that predicted LLMs couldn't do something in 5000 years, and they did it the next year? (Google this, seriously)
        • sidereal1 5 hours ago
          Couldn't do what? You haven't told us what to search for.
  • drpixie 6 hours ago
    Summary of the current situation...

    LLMs have shown us just how easily we are fooled.

    AGI has shown us just how little we understand about "intelligence".

    Stand by for more of the same.

  • m463 7 hours ago
    I don't think there's a lot of "AGI hype".

    I think all the hype is more about AI replacing human effort in more ambiguous tasks than computers helped with before.

    A more interesting idea would be - what would the world do with AGI anyway?

    • arisAlexis 6 hours ago
      Can't you imagine what a world with a species smarter than humans could be like? Yeah, it's difficult
    • fragmede 6 hours ago
      Hire digital employees rather than human ones. When all your interaction is digital, replacing the human on the other end with a theoretically just as capable AI is one possibility. Then, have the AI write docs for your AI employee, spin up additional employees like EC2 instances on AWS. Spin up 30 to clear out your Trello/Monday.com/Jira board, then spin them back down as soon as they've finished, with no remorse, because they're just AI robots. That's what you could do with such a technology anyway.

      That's for regular human-level AGI. The issue becomes more stark for ASI, artificial super intelligence. If the AI employee is smarter than most, if not all, humans, why hire humans at all?

      Of course, this is all theoretical. We don't have the technology yet, and have no idea what it would even cost if/when we reach that point.

  • FrankWilhoit 11 hours ago
    "...a philosophical confusion about the nature of intelligence itself...."

    That is how it is done today. One asks one's philosophical priors what one's experiments must find.

  • arisAlexis 6 hours ago
    Contrarianism as a mental property of humans
  • NedF 7 hours ago
    [dead]