10 points by ajax33 3 hours ago | 5 comments
  • guillego 16 minutes ago
    There might be a really good conclusion in this article, but I had to give up halfway through. The LLM writing, chapter after chapter, is unbearable: full of short sentences leading into paragraphs that read like LinkedIn posts.

    > AlphaFold solved protein structure prediction, a fifty-year problem, not in decades but in a fraction of the time traditional research would have required. Not by thinking like a biologist. By finding patterns at a scale no human could reach. That is a domain detonation. Not progress. A before-and-after. The same logic is now moving through radiology, legal research, financial analysis, drug discovery, software engineering.

    If you have good ideas, good insights and good stories, they deserve your own words. If you can't respect your own ideas enough to spend time writing them down and forming them into paragraphs and sentences, why should I respect them any more?

  • effable 30 minutes ago
    The core idea of ASI arriving before AGI seems to be true: we have already seen that with chess programs, LLMs, etc.

    However, what caught my eye, and what to me does reflect the lens through which the author sees the world (unless I am completely misunderstanding their point), is this:

    "Most of the world's important problems have never been modelled at the precision AI requires to act on them. Pollution, traffic, healthcare, taxation, public infrastructure, water distribution."

    Pollution, traffic, healthcare, and public infrastructure, however, are not really problems that require "clever" solutions; rather, they are problems of political will: regulating industry and moving to cleaner energy sources. For example, we have known about human-caused climate change for decades, and carbon emissions are only just hitting their peak now.

    • roysting 4 minutes ago
      The irony is that I think the author may have meant granularity, not precision. You could have the highest-precision model (not the AI type) of any given topic or domain and not only be totally inaccurate but also categorically flawed, i.e., you're not even shooting at the right target.

      From his statement, it seems what he is really saying is that the granularity of the data is insufficient for an AI model to accurately or precisely evaluate a problem and then presumably solve it, assuming a solution exists at all, let alone a human-acceptable one.
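
      To make the distinction concrete, here is a minimal Python sketch (all numbers and names are hypothetical, chosen purely for illustration): a model can be precise in the sense of low variance while being inaccurate in the sense of high bias, and vice versa.

        import statistics

        # Hypothetical target and estimates, invented for this sketch.
        true_value = 100.0

        # Precise but inaccurate: estimates cluster tightly (low variance)
        # around the wrong answer (high bias): consistently shooting at
        # the wrong target.
        precise_but_wrong = [42.1, 42.0, 42.2, 41.9, 42.0]

        # Imprecise but accurate: noisy estimates (high variance)
        # scattered around the right answer (low bias).
        noisy_but_right = [88.0, 113.0, 95.0, 107.0, 99.0]

        for name, estimates in [("precise-but-wrong", precise_but_wrong),
                                ("noisy-but-right", noisy_but_right)]:
            bias = statistics.mean(estimates) - true_value  # accuracy
            spread = statistics.stdev(estimates)            # precision
            print(f"{name}: bias={bias:+.1f}, stdev={spread:.1f}")

        # Prints:
        #   precise-but-wrong: bias=-58.0, stdev=0.1  (precise, useless)
        #   noisy-but-right: bias=+0.4, stdev=9.8     (accurate, usable)

      Tightening the first model's precision does nothing to fix its bias; that is the "wrong target" failure mode.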

      As I mentioned, you can have the most precisely modeled problem in the world and it won't make a difference if it's not accurate. And there is a very uncomfortable reality starting to face us, at least in the West: all the little lies we were told and perpetuated, because we have been trained on them from birth, across generations now, are simply wrong. They have polluted our minds to such a degree that many people could never accept it if AI told them they're wrong, and that everything they believe they know, and have known all their life, is wrong.

      On top of that, it shatters people’s narcissistic self-image of having been the good guy, because accepting what AI tells them is actually the truth means accepting that they were abusive to those who were right all along, meaning they are actually the bad guy.

      And if we definitely know anything as good guys, it’s that the majority is always right, because that is what we were taught is the democratic way. The majority is always right and you always have to trust the minority that are experts! Right? Right!

  • maplethorpe 40 minutes ago
    > What Moravec was describing was a difference in how skills are stored, not how complex they are. Physical skills are encoded in the body, almost impossible to put into words. But knowledge work, the analysis, the diagnosis, the strategy, the legal argument, is stored in text. Humans wrote it all down. Every framework, every protocol, every insight accumulated across every profession for centuries, captured in documents, papers, books, case files, and reports.

    I don't think this is true. Text is a lossy form of communication. There's no way to get the sum of my knowledge from my brain over to your brain purely through text.

    Also, anyone who has ever had to deal with incomplete documentation knows that humans did not, in fact, write it all down.

    • strogonoff 18 minutes ago
      All communication is inherently lossy, and text is extremely so. Knowledge, insight, and the like are never captured in their entirety in communication. Indeed, there is no direct contact between human minds, not in the models we currently have.

      Communication builds on simplified shared maps over the ineffable territory of human experience. It always presents a particular model, one that is necessarily wrong (as all models are), good for one purpose but neutral or harmful for another.

      However, models and maps are not the only way in which humans attend to reality. Even though it is compelling to talk only about this way (talking is communication; see above), we also have direct experience, which is impossible to convey. Over the past one or two thousand years, as humanity has become more of an interconnected anthill, this experiencing has arguably taken an increasing backseat to the map-driven, communication-driven frame of attention, but it still exists and is part of what makes us human.

      LLMs, as correctly noted, build only on our communication. What I don't think is noted is that this means they build on those (inevitably faulty) models and maps; LLMs fundamentally have no access to the experiencing aspect, and the territory-to-map workflow is inaccessible to them. What happens when wrong maps overstay their welcome?

  • bamboozled 33 minutes ago
    There is a jarring assumption in this article, which is that LLMs are performing much, much better than they are. They are awesome tools, but they just aren't so good that I'd replace my accountant with anything like an LLM. And personally, as a software engineer, the more I use these tools, the more I realize I need to understand software better than I ever have before to actually be proficient with them. Maybe we're agreeing to some degree, because the author seems to think there will still be a need for certain skill sets, even with AGI, but I think we're still in the figuring-shit-out phase.

    If anything, they've made my job much, much more stressful, because I'm dealing with 10x the amount of code to reason about than before. And again, if I weren't a proficient programmer, I'd be in the shit.

  • mkdelta221 2 hours ago
    Fascinating article. Everyone knows they will be replaced by AI but nobody wants to talk about it.
    • pjmlp 37 minutes ago
      Worse are the folks claiming how much more productive they are with AI tools, without understanding that it means companies will need fewer of us to do the same job.

      As in so many scenarios, they always think the victims will be other people.

    • Atomic_Torrfisk 24 minutes ago
      Based on what? Do you have data for that, or is it just a feeling, or just what you want to be true?
    • Traubenfuchs 27 minutes ago
      I really hope it won't take my job, and I am very afraid, but:

      Why hasn't it happened yet? Why hasn't the job market imploded? What's missing? Why do my colleagues, my friends, and I still have our bullshit jobs? Why didn't my company's output explode through our unlimited Claude access? What about all the other companies?