57 points by fzliu 10 hours ago | 22 comments
  • jerf 5 hours ago
    I don't know about "Winter". The original "AI Winter" was near-total devastation. But it's probably reasonable to think that after the hype train of the last year or two we're due to be headed into the Trough of Disillusionment for LLM-based AI technologies on the standard Gartner hype cycle: https://en.wikipedia.org/wiki/Gartner_hype_cycle
    • mewpmewp2 4 hours ago
      Maybe, but I already find modern AI immensely useful in a lot of ways, and I see things constantly improving. E.g. realistic video generation, music generation, OpenAI's advanced voice mode - it's still wild to me how good these are and how well LLMs can perform.

      I still remember thinking, even when I first saw GPT-3.5, that what it could do must be impossible, that there had to be some sort of trickery involved. But no.

      I'm still impressed and amazed daily by what AI can do now.

      • foogazi 23 minutes ago
        It can't be economically sustainable if this is it, right?
    • brotchie 3 hours ago
      Feels different to past hype cycles (Internet bubble, Crypto bubble).

      LLMs with meaningful capabilities arrived very quickly: one week they were not that useful, the next week they were.

      A function that takes text and returns text isn't that useful without it being integrated into products, and this takes time.
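      A minimal sketch of that integration point, in Python (the names and the stubbed `llm` function are hypothetical; a real system would call an actual model API):

```python
# A bare text -> text function only becomes useful once it is wired
# into a concrete business workflow. The "LLM" here is a stub that
# stands in for a real model call.
def llm(prompt: str) -> str:
    # Hypothetical stand-in for a model API call.
    if "refund" in prompt.lower():
        return "route:billing"
    return "route:general"

def triage_ticket(ticket_text: str) -> str:
    """Integrate the text->text function into a support workflow:
    classify the ticket, then map the label onto a queue name."""
    label = llm(f"Classify this support ticket: {ticket_text}")
    return {"route:billing": "billing-queue",
            "route:general": "general-queue"}[label]

print(triage_ticket("I want a refund for my order"))  # billing-queue
print(triage_ticket("How do I reset my password?"))   # general-queue
```

      The value is in the `triage_ticket` wrapper, not the model call itself - which is the product-integration work that takes time.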

      Next 12-24 months will be the AIfication of many workflows: that is, discovering and integrating LLM-based reasoning into business processes. Assuming even a gradual improvement in capabilities of LLMs over time, all of these AI enhanced business processes will simply get better.

      Diffusion of technology is slow slow slow, and then fast. As I become more capable with AI (e.g. learning which of my tasks as an engineer it actually helps with), I'm getting better and better at it. So there's a non-linear learning curve: as you learn to use the technology better, you unlock more productivity.

    • contravariant 4 hours ago
      Honestly I think we're already there, it just takes a bit before the realisation trickles down.

      The successful uses of LLMs don't seem to depart too far from the basic chatbot that started the whole hype. And the truly 'magic' uses seem to fail in practice because even a small error rate is way too high for a system that cannot learn from its mistakes (quickly).

      • GaggiX 4 hours ago
        > don't seem to depart too far from the basic chatbot that started the whole hype.

        Is ChatGPT-3.5 a basic chatbot now? It's been less than two years since it was SOTA.

    • urbandw311er 3 hours ago
      Nicely put
  • i-cjw 5 hours ago
    > Or take Ed Thorp, who invented the option pricing model and quietly traded on it for years until Black-Scholes published a similar formula and won a Nobel Prize

    Hardly quietly. Thorp published "Beat the Market" in 1967, detailing his formulas six years before Black and Scholes published theirs in 1973 (the Nobel came much later, in 1997).

  • janalsncm 5 hours ago
    I like the distinction between producers and promoters. This is why I am naturally skeptical of polished demos and people posting in their real name. If you post in your real name, you are at a minimum promoting yourself (generally boils down to “I am a smart, employable person”).

    I wish I had a better heuristic, but the best I've found on Twitter is pseudonymous users with anime profile pics. These are people who don't care about boosting a product. They're possibly core contributors to a lesser-known but essential Python library. They deeply understand a single thing very well. They don't post all day because they are busy producing.

    • pajeets 4 hours ago
      I also second that anime pfps and other checkmark-less accounts on X produce far more grounded takes (and not just on AI).

      X is a very good microcosm of Dalio's producer/promoter model, except that the promoters seemingly make up the entirety, and they are so extremely loud that it trumps all common sense and reasoning.

      It's also very tiring to scroll through "I made $XXXX in 30 days with AI and I'm only a 17-year-old high school student" or "we shipped a ChatGPT wrapper and used dark patterns for subs".

      On LinkedIn it's far worse: everybody is a genius, and everybody's message is "pay attention to me", on the remote chance a recruiter from big tech will reach out and pay a large salary for that impression management.

      All in all, it really feels like the American economy is running on pure hopium and fumes. This cannot be good for it in the long run.

      • HKH2 2 hours ago
        > On LinkedIn it's far worse: everybody is a genius, and everybody's message is "pay attention to me", on the remote chance a recruiter from big tech will reach out and pay a large salary for that impression management.

        Right. So much content, but it feels so empty. Do people actually network there?

    • larodi 4 hours ago
      These, indeed, are the actual accounts worth following - the ones still bearing some resemblance to the early internet adopters who were there for the fun, not the profit. I never thought about the name perspective, though; it's something I can only agree with you on, and it immediately cancels out people such as Lex Fridman and Alex Volkov - which seems like the right thing to do, tbh. Some very obscure accounts are, to me, the real opinion leaders; they know how to ride the viral wave on repeat. Grimes doesn't, though.
    • Terr_ 4 hours ago
      Soon: "LLM, take my self-promotional content and rephrase it as if I was a producer."
  • zcw100 5 hours ago
    People warning about a coming AI winter are almost as annoying as people doomsaying about AGI. It's going to be somewhere in between. It can be disappointing and revolutionary at the same time. We had the dot-com crash, and yet out of it grew some of the largest corporations in the world: Microsoft, Facebook, Apple, Amazon, etc.
    • janalsncm 4 hours ago
      The article is less about a winter for the field than a winter for AI boosters, who will soon move on to become “experts” in The Next Big Thing.

      For people working in the field, deep learning has already proven itself to be self-funding. It’s the main source of Google’s profits. It’s TikTok’s algorithm. Et cetera.

    • threeseed 5 hours ago
      Every one of those companies predates the dot com bomb by quite some time.

      And AGI is science fiction with no credible plan of how to get there. If you can even get everyone to agree on the same definition.

      An AI winter is something that can be measured and is factual, e.g. lacklustre spending on AI products and the drying up of VC funding.

      • janalsncm 4 hours ago
        Kind of depends on what you call “AI”. Large language models, maybe. AI is a lot more than that though. Deep learning isn’t going anywhere.

        The fact that VCs aren’t throwing millions of dollars after every CS undergrad who figured out how to make an API call to OpenAI means they are wising up. The main question is why it took this long.

      • archgoon 4 hours ago
        Microsoft and Apple preceded the dot com bomb by several decades. (Microsoft 1975, Apple 1976)

        Amazon was a company that was around and survived the dot com bomb (founded in 1994, roughly around the time of the beginning of the bubble) [though its stock took about 7 years to recover]

        Facebook was post dot com bomb. (founded 2004)

      • mindcrime 3 hours ago
        > And AGI is science fiction with no credible plan of how to get there.

        I mean... you can't really have a (strict) plan for how to build something that nobody knows how to build (yet). But that doesn't necessarily mean it's "science fiction". There are credible reasons[1] to believe that AGI will happen - eventually. To me, the biggest question is around timeline, not "will it happen or not". Now granted, that allows for anything from "tomorrow" up to "the heat death of the universe", so you can accuse me of dodging the issue if you'd like. But I'd bet money on it happening closer to "tomorrow" than "the heat death of the universe".

        [1]: among others - the progress on AI that's already been made. While we may not have AGI, it's hard to deny that we have AI that's a far sight better than what we had in 1956. The other is that, unless you believe in magic, the human brain is an existence proof that human-level AGI is achievable on a deterministic machine that operates according to the physical laws of the universe. It would seem to follow that it should be possible (albeit perhaps very difficult) to achieve that same level of intelligence on some other deterministic machine. And note that even if "Penrose is right" about the brain relying on quantum mechanical phenomena, there's no particular reason to think those can't also be mirrored in a human-made machine.

  • aiforecastthway 4 hours ago
    The original "AI Winter" was primarily a government funding phenomenon [1]. There was no "bubble" in the private sector. I.e., the winter was the result of responsible people in government realizing the hype was over-extended and standing up for the taxpayer. Progress would be made, eventually, but not in that moment. (Those people were correct, btw.)

    > But beneath the surface, there are rampant issues: citation rings, reproducibility crises, and even outright cheating. Just look at the Stanford students who claimed to fine-tune LLaMA3 to be multimodal with vision at the level of GPT-4v, only to be exposed for faking their results. This incident is just the tip of the iceberg, with arXiv increasingly resembling BuzzFeed more than a serious academic repository.

    Completely agreed. Academia is terminally broken. The citation rings don't bother me. Bibliometrics are the OG karma -- basically, fake internet points. Who cares?

    The much bigger problem is that those totally corrupt circular influence rings extend into program director positions and grant review committees at federal funding agencies. Most of those people are themselves academics (on leave, visiting, etc.) who depend on money from the exact sources they are reviewing for. So this time it's their friends' turn, and next time it's their own. And don't dare tell me that this isn't how it works. I've been in too many of those rooms.

    It's gotten incredibly bad in ML in particular. Our government needs to cut these people off. I am sick of my tax money going to these assholes (via the NSF, DARPA, etc.). Just stop funding the entire subfield for a few years, tbh. It's that bad.

    On the private sector side, I think that the speculative AI bubble will deflate, but also that some real value is being created, and many large institutions are actually behaving quite reasonably compared to previous nonsense cycles. You just have to realize we're mid-to-late cycle, and companies/groups that aren't finding PMF with LLM tech in the next 2-3 years are probably not great bets.

    --

    [1] https://en.wikipedia.org/wiki/Lighthill_report

    • Animats 4 hours ago
      > There was no "bubble" in the private sector.

      There was a small bubble.

      There were 1980s AI startups: IntelliCorp and Teknowledge. IntelliCorp pivoted from expert systems to UML and was acquired. Teknowledge seems to have disappeared. (The outsourcing company called Teknowledge today seems to be unrelated.) There were the LISP machine companies, Symbolics and LMI. There were a few others, mostly forgotten now.

  • mindcrime 3 hours ago
    An "AI Fall" maybe. But "AI Winter"? I really doubt it. And the author of this piece presents very little in the way of compelling arguments for the advent of said AI Winter.

    For all the valid criticisms of "AI"[1] today, it's creating too much value to disappear completely and there's no particular reason[2] to expect progress to halt.

    [1]: scare quotes because a lot of people today are mis-using the term "AI" to exclusively mean "LLMs", and that's just wrong. There's a lot more to AI than LLMs.

    [2]: yes, I'm aware of neural scaling laws and some related charts showing a slow-down in progress, and the arguments around not having enough (energy|data|whatever) to continue to scale LLMs. But see [1] above - there is more to AI than LLMs.

  • pinkmuffinere 4 hours ago
    > This is how we’re headed for another AI winter, just as we saw with the fall of data science, crypto, and the modern data stack.

    The fall of data science??? When did that happen? I’m not squarely in the field, but I thought I would have heard about it

    • mindcrime 3 hours ago
      > The fall of data science??? When did that happen?

      It didn't. "Data science" may not be the latest, trendy, catchy "buzzword of the day", but nothing holds onto that title forever. Losing that crown to the trendy tech du jour isn't the same as "falling off", IMO.

  • hu3 5 hours ago
    A similar phenomenon, on a smaller scale, is happening with what I call meta-cloud PaaS, which facilitates web app deployments/provisioning. These usually run on top of AWS or other large clouds, hence meta-cloud.

    It started with Heroku, but now it has gained VC attention in the form of Next/Vercel, Laravel Cloud, Void(0), Deno Deploy and Bun's yet-to-be-announced solution. I'm probably forgetting one or two.

    Don't get me wrong, they are legit solutions. But the VC money currently being poured into influencers to push these solutions makes them seem much more appealing than they would be otherwise.

    • grokkedit 5 hours ago
      Heroku has been around for almost 20 years, Vercel was Zeit ~10 years ago, and both have always been widespread solutions, so I wouldn't say there is hype only now.

      I cannot vouch for Laravel Cloud or Void, since I've never used them, nor will I comment on Deno/Bun, since they are far more recent.

  • navaed01 3 hours ago
    I appreciate a good original perspective, but much of this seems overblown…

    “Meanwhile, data scientists and statisticians who oftentimes lack engineering skills are now being pushed to write Python and “do AI,” often producing nothing more than unscalable Jupyter Notebooks”

    Most data scientists are already well versed in Python. There are so many platforms emerging that abstract away a lot of the infra required to build semi-scalable applications.

  • incognito124 5 hours ago
    > That leading edge research paper is most probably someone’s production code.

    Very powerful, albeit sad, statement.

  • Agentus 4 hours ago
    So the argument is: during a gold rush there are scammers selling pyrite and misleading prospective prospectors toward quarries with no gold, and because all of this is happening, the gold rush must therefore be nearly over. Okay. Good article otherwise. But Geoffrey Hinton takes the opposite stance (so does Eric Schmidt), recently stating that the last 10 years of AI development have been unexpected and that the trend will continue for the next 10 years. But perhaps that could be handwaved off as cheerleading/promoting.
  • 8note 3 hours ago
    > as we saw with the fall of data science, crypto, and the modern data stack.

    Has data science or the modern data stack fallen? And what relevance does crypto (I assume cryptocurrency) have to an AI winter?

  • hackable_sand 5 hours ago
    Here is the thesis at the end

    > the real producers will keep moving forward, building a more capable future for AI.

    This is one of many signal flares going up.

    Do something or cash out of the AI space. Engineers are tired.

  • swyx 4 hours ago
    our take on this from the industry POV: https://www.latent.space/p/mar-jun-2024 (there is a podcast version too if you click through)

    broadly agree, but I think predicting an AI winter isn't as useful as handicapping how deep it will be, and still building useful things regardless.

  • teddyh 4 hours ago
    At least we got a new keyboard Super modifier key out of it. Or maybe we should make it the Compose key?
  • mizzao 4 hours ago
    Interesting that this article is right next to one making the opposite point:

    https://news.ycombinator.com/item?id=41813268

  • synapsomorphy 4 hours ago
    Just like most other criticisms I've seen of AI, this seems to be criticizing the hype around AI, not the technology itself. It isn't clear whether the author conflates those, but a lot of people wrongly do. AI isn't one-to-one with NFTs; there being a lot of grift around something doesn't make it useless or mean it won't change the world.
    • mindcrime 3 hours ago
      I hate to post a reply that amounts to a long-winded "this". But you nailed it, IMO. I agree with everything you said here.
  • RobRivera 4 hours ago
    Again?

    If anyone had this knowledge, they wouldn't tell us; they'd keep their market edge and bet on it out of their own selfish greed.

    Anything else is PR

    Discuss amongst yourselves: Rhode Island, neither a road nor an island.

  • woopwoop 4 hours ago
    I am a pure mathematician by training. I _hate_ machine learning. The entire field seems to me like a bunch of unprincipled recipes and random empirics. The fact that it works is infuriating, and genuinely seems like a tragedy to me. The bitter lesson is very bitter indeed.

    But I've been hearing the refrain of this article for a decade now. I just don't believe it anymore.

  • bluesounddirect 5 hours ago
    Awesome.
  • pajeets 5 hours ago
    chatgpt wrapper startups are ngmi
  • ebabchick 4 hours ago
    who's going to tell him #feeltheagi