5 points by sammy0910 | 3 hours ago | 1 comment
  • MisterKent | 2 hours ago
    Having started a greenfield project recently, almost entirely as LLM-driven development, I have concluded that there is no process or step that actually enables them to understand a problem.

    Humans can think abstractly, parsing a problem space and condensing it into a solution. That is, I can take 10 seemingly unrelated problems, determine they're all the same class of problem, and then create a novel(ish) solution for that class. LLMs simply cannot build that world model, AND even when explicitly told the solution, they can't actually hold the model in mind while authoring code.

    They're fake thinking, and no amount of harnesses or additional models is going to change that.

    There was a post here before about how tech companies have become adept at taking a thick thing (like friendship) and boxing it up to give you the same surface-level feelings, but it's not _real_; it's a thin version (following an influencer, social media). You feel like you're friends with these people, but real friendships are not so simple; they're messy.

    LLMs are robbing us of thick thinking and convincing us that thin thinking is something real. I don't think anyone can legitimately claim that LLMs are improving their product's actual problem modeling, beyond serving as a randomness / entropy generator from which a human can actually choose.

    Human-written books, code, and art are fundamentally shaped by a true understanding of the abstract problem space, compressed down in a way that makes sense to other humans. LLMs are just word salad masquerading as intelligence. Letting them into our world is a mistake that will take years to correct.