23 points by cermicelli 8 hours ago | 13 comments
  • deafpolygon 3 minutes ago
    because it’s all about the adjusted gross income
  • ilyaizen 6 hours ago
    Don't believe the hype, this is tribal thinking. Everybody seems to have these widely diverging opinions on AI lately. What does AGI have to do with stochastically predicting the next token like a parrot? Oh, people say you can brute-force AGI, if only enough things are answered correctly. I get that, and I still see SOTA models sometimes fail like babies. I also mostly see them perform with much higher intelligence and a better work ethic than I can manage, but maybe I'm too hard on myself.

    Anyway, here's something I've recently built that shows the HN consensus when it comes to AI coding (spoiler: they say it's quite good): https://is-ai-good-yet.com/ Is AI “good” yet? – A survey website that analyzes Hacker News sentiment toward AI coding.

    • razodactyl 18 minutes ago
      Brilliant website. It should be a post of its own.
    • viking123 an hour ago
      We are decades if not hundreds of years away from any real AGI. For many tech people, AGI is what God is for most people in the world: a feel-good thing to believe that within their lifetime there will be an intelligence that can be cloned unlimited times, solve essentially any problem thrown at it, and liberate them from aging and death.

      I think one day this kind of system will probably exist in some form; I don't see any fundamental reason why not. But believing fraudsters like Amodei and Altman telling you that in 6 months the world will possibly end and the AIs will take over is just nonsense fed to boomer investors who have 0 clue.

      Many crypto scammers have pivoted to AI because there are new useful idiots to scam now that crypto is not doing too hot.

      Frankly, the whole world is just scam after scam nowadays. They noticed that the Elon Musk strategy of continuously promising something "next year" actually works, so they are all doing it, stringing people along.

      • razodactyl 24 minutes ago
        I don't want to believe you are right. Not that I refuse to...

        You could be wrong, hopefully. I'll just remain optimistic.

        It's good that critical thinking still exists to keep us all grounded.

        Edit: I think you meant to replace AGI with ASI.

        What we have now is approaching the ability to solve problems generally, with considerable effort, but superintelligence is definitely out of reach for the time being.

    • user3939382 4 hours ago
      We have good research showing we think in language. So the seed is there. I’m working on methods (hardware and software) that give us insane speed and compression, so you get orders of magnitude greater performance.
      • PurpleRamen 18 minutes ago
        > We have good research showing we think in language.

        Source? To my knowledge, we do not "think in language"; rather, we learn to fine-tune our thinking so it can be expressed in the form of words. Unless you consider pictures a language; after all, "a picture is worth a thousand words".

      • GOD_Over_Djinn 3 hours ago
        That implies that nonverbal people are unable to think, no?
      • ilyaizen 3 hours ago
        What? What does it have to do with anything that I said? Wow, I think HN has lost its spark.
  • bananaflag an hour ago
    Ten years ago I believed we'd have AGI/end-of-the-world/Singularity circa 2040, and that meanwhile, in the 2020s, we would chill out in a futuristic, booming world of un-smart innovations like 3D printing, VR and the Metaverse.

    Then, in March 2023, with GPT-4, I said that we'd have AGI only ten years later, and the progress in the last few years (multimodal stuff, reasoning, coding agents) hasn't changed this view.

  • comex 4 hours ago
    Probably the biggest thing that serious predictions are relying on is the METR graph:

    https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...

    It shows a remarkably consistent curve for AI completing increasingly difficult coding tasks over time. In fact, the curve is exponential, where the X axis is time and the Y axis is task difficulty as measured by how long a human would take to perform the task. The current value for an 80% success rate is only 45 minutes, but if it continues to follow the exponential curve, it will only take 3 years and change to get to a full 40-hour human work week's worth of work. The 50% success rate graph is also interesting, as it's similarly exponential and is currently at 6 hours.
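
    A back-of-the-envelope check of that "3 years and change" figure (a rough sketch, assuming the roughly 7-month doubling time METR reports keeps holding for the 80% horizon; not their methodology):

        from math import log2

        # Sketch: extrapolate the 80%-success task horizon forward, assuming
        # it keeps doubling roughly every 7 months (the METR trend).
        DOUBLING_MONTHS = 7
        current_minutes = 45        # today's 80%-success horizon
        target_minutes = 40 * 60    # a 40-hour human work week

        doublings = log2(target_minutes / current_minutes)
        months = doublings * DOUBLING_MONTHS
        print(f"{doublings:.1f} doublings, ~{months / 12:.1f} years")
        # -> 5.7 doublings, ~3.3 years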

    Of course, progress could fall off as LLMs hit various scaling limits or as the nature of the difficulty changes. But I for one predicted that progress would fall off before, and was wrong. (And there is nothing saying that progress can't speed up.)

    On the other hand, I do find it a little suspicious that so many eggs are in the one basket of METR, prediction-wise.

    • SkiFire13 4 hours ago
      > It shows a remarkably consistent curve for AI completing increasingly difficult coding tasks over time.

      I'm not convinced that "long" is equivalent to "difficult". Traditional computers can also solve tasks that would take extremely long for humans, but that doesn't make them intelligent.

      This is not to say that this is useless, quite the opposite! Traditional computers showed that being able to shorten the time needed for certain tasks is extremely valuable, and AI has shown this can be extended to other (but not necessarily all) tasks as well.

    • zeroonetwothree 4 hours ago
      Wouldn’t actual “AGI” require an ~80-year timeframe ;)? After all, most humans are able to achieve the task of “survival” over that period.
      • bnt123 2 hours ago
        Very interesting thought! TY for sharing
  • techblueberry 7 hours ago
    I think what we have is mostly AGI. It’s artificial, it’s intelligence, and most importantly it’s general. It may never get an IQ above 75 or so, but it’s here.
    • hombre_fatal 4 hours ago
      Yeah, LLMs fulfill any goalpost I had in my mind years ago for what AGI would look like, like the starship voice AI in Star Trek, or merely a chat bot that could handle arbitrary input.

      Crazy how fast people acclimate to sci-fi tech.

      • bitwize 4 hours ago
        The Mass Effect universe distinguishes between AI, which is smart enough to be a person—like EDI or the geth—and VI (virtual intelligence), which is more or less a chatbot interface to some data system. So if you encounter a directory on the Citadel, say, and it projects a hologram of a human or asari that you can ask questions about where to go, that would be VI. You don't need to worry about its feelings, because while it understands you in natural language, it's not really sentient or thinking.

        What we have today in the form of LLMs would be a VI under Mass Effect's rules, and not a very good one.

        • dangus 3 hours ago
          This is a great analogy.

          The term AGI so obviously means something way smarter than what we have. We do have something impressive but it’s very limited.

          • handoflixue 2 hours ago
            The term AGI explicitly refers to something as smart as us: humans are the baseline for what "General Intelligence" means.
  • smt88 7 hours ago
    No serious person thinks LLMs will be the method to create AGI. Even Sam Altman gave that up.

    Anyone still saying they'll reach AGI is pumping a stock price.

    Separately and unrelated, companies and researchers are still attempting to reach AGI by replacing or augmenting LLMs with other modes of machine learning.

    • viking123 an hour ago
      AGI is just a meme at this point, sold to midwits on Reddit and X.
      • rcore 33 minutes ago
        Especially to those on HN.
  • nsonha 2 hours ago
    I am a believer in agentic LLMs, and aside from a few downsides, they have been immensely useful for me.

    Having said that, I could not care less about AGI and don't see how it's at all relevant to what I wanna do with AI.

  • segmondy 5 hours ago
    AGI is already here and arrived without a bang; it arrived last year. To each his own and their own reality.
    • dangus 3 hours ago
      Sure, to each their own and their own reality, but I think most people would consider something with a bold name like “artificial general intelligence” to at least match an average employee peon.

      We aren’t even really in “minimum wage job” territory yet, never mind a median salaried employee.

      I’m still being paid a small fortune even though AGI is supposedly available for the cost of a monthly Starbucks habit.

      I recently had to talk to a real human to get a cell phone activation/payment to go through even though supposedly an AI should be better at communicating with digital payment systems than a human clicking around with a mouse. The only purpose the AI in the system had was to regurgitate support articles and discourage me from contacting the human who could solve my problem.

  • t312227 5 hours ago
    hello,

    idk ... even sam altman talked a lot about AGI *) recently ...

    *) ads generated income

    *bruhahaha* ... ;^)

    just my 0.02€

  • empressplay 3 hours ago
    While large language models don't have enough nuance for AGI, there is some promise still in multi-modal models, or models based purely on other high-bandwidth data like video. So probabilistic token-based models aren't entirely out of the running yet.

    Part of the problem with LLMs in particular is ambiguity -- it is poisonous to a language model, and English is full of it. So another avenue being explored is translating everything (with proper nuance) into a more precise language, or rewriting training data to eliminate ambiguities by using more exact English.

    So there are ideas and people are still at it. After all, it usually takes decades to fully exploit any new technology. I don't expect that to be any different with models.

  • rvz 6 hours ago
    Look carefully at the 'why' behind the person/influencer making the claim and you've almost answered your own question.

    > I don't know if the investments in AI are worth it but am I blind for not seeing any hope for AGI any time soon.

    > People making random claims about AGI soon is really weakening my confidence in AI in general.

    The "people" that are screaming the loudest and making claims about AGI are the ones that have already invested lots of money into hundreds of so-called AI companies and then create false promises about AGI timelines.

    Deepmind was the one that took AGI seriously first which it actually meant something until it became meaningless, when every single AI company after OpenAI raised billions in funding rounds over it.

    No one can agree as to what "AGI" really means, It varies depending who you ask. But if you look at the actions made by these companies invested in AI, you can figure out what the true definition converges to, with some hints [0].

    But it is completely different to what you think it is, and what they say it is.

    [0] https://news.ycombinator.com/item?id=46668248

  • teaearlgraycold 8 hours ago
    I think modern agentic tools let you take bigger steps when programming. They’re still fallible and you need to be mentally engaged when using them. But they’re a programmer’s power drill.
    • bigstrat2003 6 hours ago
      Sure, if a power drill would randomly create a hole twice the size of the bit you put in, or go drill in a direction that you didn't point it in. The reason power drills are a good tool and LLMs are not is that the former works reliably whereas the latter does not and never will.
      • nitroedge 4 hours ago
        If the context window is managed properly, that offsets almost all of these "random drill holes". I see so many people just filling up the buffer, then compacting and complaining, making ambitious, huge task requests without any kind of system that breaks a job into multiple mini tasks (rough sketch of what I mean below).

        Context rot is real, but as for people who complain about AIs hallucinating and running wild: I don't see it when the context window is managed properly.
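
        Something like this shape, where run_agent() is just a hypothetical stand-in for whatever coding agent or CLI you actually use; the point is only that each mini task gets a fresh, small context instead of one ever-growing buffer:

            # Hypothetical sketch: break one big job into mini tasks, each run
            # with a fresh context instead of a single ever-growing buffer.
            def run_agent(prompt: str) -> str:
                raise NotImplementedError("plug in your coding agent here")

            def run_job(goal: str, subtasks: list[str]) -> list[str]:
                results = []
                for task in subtasks:
                    # Each call starts from a clean slate: just the shared goal
                    # plus this one mini task, so the context stays small.
                    prompt = f"Overall goal: {goal}\n\nCurrent task: {task}"
                    results.append(run_agent(prompt))
                return results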

      • SkiFire13 3 hours ago
        Luckily in programming you can very quickly undo a wrongly drilled hole if you notice it. It does require some effort and it's not always clear whether it's worth it, but it's a tool that can definitely be helpful in some situations.
      • teaearlgraycold 4 hours ago
        Your analogy does not apply. Remember the old saying?

        > If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization.

        I think you’re taking my analogy too literally. I just mean they help you go faster. When building software you have a huge advantage in that there is very little risk in exploring an idea. You can’t hurt yourself in the process. You can’t waste materials. You can always instantly go back to a previous state. You can explore multiple options simultaneously with a favorable cost to doing so.

        You don’t have to let your standards drop. Just consider AI coding an interactive act of refinement. Keep coding manually where you meet too much resistance. Accept the LLM can only do so much and you can’t often predict when or why it will fail. Review its output. Rewrite it if you like.

        Everything always has a chance of being wrong whether or not you use AI. Understand that an AI getting something wrong with your code because of statistical noise is not user error. It’s not a complete failure of the system either.

        It’s a mega-library that either inlines an adjustment of a common bit of code or makes up something it thinks looks good. The game is in finding a situation and set of rules that provide a favorable return on the time you put into it.

        Imagine if LLMs were right 99% of the time, magically doing most tasks of a certain complexity 10x faster than you could do them. Even when it’s wrong, you will only waste so much time fixing that 1% of the AI’s work, so it’s a net positive. Find a system that works for you and lets you identify where it makes sense to use it. Maybe even 50% accuracy at 3x your speed makes it make sense.
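
        Rough expected-value math for those two scenarios (a sketch; it assumes a failed attempt costs you the attempt plus doing the task yourself, which is pessimistic but simple):

            # Sketch: expected time per task when delegating to an AI,
            # normalized so doing it yourself costs 1.0. Assumes a failed
            # attempt costs the attempt itself plus redoing the task manually.
            def expected_time(p_success: float, speedup: float, manual_time: float = 1.0) -> float:
                attempt = manual_time / speedup
                return p_success * attempt + (1 - p_success) * (attempt + manual_time)

            print(expected_time(0.99, 10))  # ~0.11: roughly 9x faster overall
            print(expected_time(0.50, 3))   # ~0.83: still a modest net win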

        In some domains you can absolutely learn some basic rules for AI use that make it a net positive right away, like as a boilerplate writer or code-to-code translator. You can find other high-success-likelihood tasks and add them to the list of things you’ll use AI for. These categories can be narrow or wide.

        • SkiFire13 3 hours ago
          > Imagine if LLMs were right 99% of the time

          This is a hypothetical that's not here yet.

          Of course if LLMs had human-level accuracy they would be ideal.

    • bigbuppo 4 hours ago
      It's like looking over the shoulders of a bunch of junior devs, fresh out of the certification mills that went belly up 15 years ago when their business model became illegal. You know, the sort of people that demanded a six-figure starting salary because they are a certified PHP developer and a CCNA (sales-oriented).