26 points by ryan_j_naughton 9 hours ago | 2 comments
  • gbnwl 2 hours ago
    Not the first to notice this, I'm sure, but it feels like there's an insane amount of pressure pushing capital towards anything with a hint of AI legitimacy. It's as if asset owners across the planet have come to a consensus that the only industry that will matter going forward is this one (fair enough, I guess), but this intense systemic pressure squeezes insane amounts of money toward literally any AI-shaped outlet that opens up. It's starting to feel like "scared and desperate" money more than "smart money".
    • iqp 2 hours ago
      > the only industry that will matter going forward is this one (fair enough I guess)

      Housing, healthcare, and food production all spring to mind as industries that matter waaaay more than AI! (≧ᗜ≦)

      • FuckButtons an hour ago
        Not if all human labor becomes surplus to requirements.
    • ktallett an hour ago
      Is it not a case of many funders not wanting to risk missing out on the next big thing? A loss of a few billion now is better than the loss of many billions down the line, plus the loss of control of the future.
      • gbnwl 40 minutes ago
        Of course the motivation makes sense on the surface. What I'm getting at is that the supply of capital vs. the supply of potential "control of the future" plays feels incredibly imbalanced. Money seems so desperate to move into AI that it's lost all prudence (the particular people and company mentioned in the OP notwithstanding; maybe they do deserve 1B).

        "Not wanting to risk missing out" is essentially just FOMO, right? "Smart" money feels more like FOMO money these days. We literally have shoe companies saying they're going to pivot to AI and seeing their market caps increase in multiples as a reward.

        • ktallett 9 minutes ago
          I don't think Silicon Valley has been smart money for a decade plus. Quantum computing is becoming the exact same with academic and government funding, with a lot of cash being spent on long shots or no-hopers.
  • 7777777phil an hour ago
    AlphaZero worked because chess and Go have terminal rewards and positions you can prove are right or wrong. General intelligence has neither, and the leap from self-play in a well-defined game to self-play in arbitrary environments is the hard part Silver isn't really demoing. Sara Hooker's stuff on scaling laws lines up here. (1)

    (1) https://philippdubach.com/posts/the-most-expensive-assumptio...
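The distinction 7777777phil draws can be made concrete. Below is a minimal sketch (my own illustration, not anything from AlphaZero or Silver's work): random self-play tic-tac-toe, where every episode ends in a terminal reward you can verify by checking the board against the win lines. Arbitrary open-ended environments give you no such provable end-of-episode signal to train against.

```python
import random

# The eight winning lines on a 3x3 board, indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def play_random_tictactoe():
    """Self-play one game with random moves.

    Returns a terminal reward that is provably correct: +1 if X wins,
    -1 if O wins, 0 on a draw. This verifiable end-of-game signal is
    what makes self-play training tractable in games like chess and Go.
    """
    board = [None] * 9
    for player in "XOXOXOXOX":  # nine moves fill the board
        move = random.choice([i for i in range(9) if board[i] is None])
        board[move] = player
        for a, b, c in LINES:
            if board[a] == board[b] == board[c] is not None:
                return 1 if player == "X" else -1  # provable terminal reward
    return 0  # full board, no line: draw

# In an arbitrary environment there is no analogue of LINES: nothing in
# the trajectory tells you, provably, whether the episode "went well".
rewards = [play_random_tictactoe() for _ in range(1000)]
```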