10 points by hooch · 9 hours ago · 5 comments
  • ColinEberhardt · 7 hours ago
    I used to find Gary Marcus a good antidote to the AI hype, and followed his critique. But honestly, his more recent writings are clutching at straws. This article feels like desperation.

    It’s a bit like saying that driving cars still requires human muscles to operate the controls, so human strength has ‘won’, when it is clearly the internal combustion engine that has created the speed advantage of the car.

  • llbbdd · 6 hours ago
    The generated header image stinks to high heaven, and the first couple lines of LLM crap prose make me sick. I've seen this guy's name on HN a few times and almost never in a good light, seemingly for good reason. Who is he, and why does he show up here so much?
    • anon7000 · 5 hours ago
      Well yeah, the LLM header image is basically a joke on his part.

      I think the tldr is that Gary Marcus has been hating on LLMs since ChatGPT came out, mostly because of the hype around them. His core theory is that pushing LLM tech just with more training is not going to accomplish AGI. He does have some essays with good writing (not this one), and he typically talks about how we’ll need different techniques to solve things like hallucinations.

      I’ve read articles of his which made genuinely good points, and which go against the grain of what the big LLM companies are saying.

      The reason there’s a lot of drama is that the LLM hype train (which includes some prominent people) really hated on HIM for saying anything negative about LLM technology, and he responded by keeping the flame war going for the past 4 years (as you can see in this article).

      So when any companies do anything that looks like using these other techniques (neurosymbolic AI, world models), he basically tosses a quick article out about how vindicated he feels. Because the companies were all like “attention is all we need” and “we can just build 4x bigger data centers and that extra compute will solve all of our problems with more training,” and he was like “that’s BS”

      So, I really don’t mind him showing up, because we do get plenty of BS on here from the AI companies too. So… Gary Marcus is at least a balancing kind of BS, in a way. (For example, it’s hard to trust anything Anthropic says about Mythos, because they have so much money riding on it being insanely capable.)

      But that situation isn’t ideal. What we actually need is more thoughtful, critical research which is NOT tied to impossible business goals. And that doesn’t describe Gary Marcus OR OpenAI/Anthropic.

  • bob1029 · 5 hours ago
    There was a time when I thought he might be right. That time has come and gone. I fear that Gary may have to find a different shtick this year.
  • chvid · 8 hours ago
    Is this the function he is referring to:

    https://github.com/yasasbanukaofficial/claude-code/blob/main...

    ?

    How is that “neurosymbolic”?

    It just looks like poorly structured, overly verbose AI-generated code.

    • cheevly · 6 hours ago
      I cannot find a single aspect of this file that even remotely hints at 'neurosymbolic' intelligence. And the post by Gary Marcus truly exhibits the type of person he is.
    • ColinEberhardt · 7 hours ago
      Can someone please use AI to explain this code smell?
      • chvid · 7 hours ago
        I am not sure it is inherent to LLM code generation as much as to the training data and the tuning of the model, which emphasize verbose code with lots of explicit explanation, possibly the stuff you see in CS textbooks. And probably lots of vibe-code-style edits, where the LLM fixes a bug and always adds further complexity to the code.

        The funny thing is you could create measurable criteria for what is wrong with the code, e.g. function line count or cyclomatic complexity, and then let those guide the code generation.
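
        The metric-guided check described above can be sketched with Python's stdlib `ast` module; the thresholds and function names here are arbitrary illustrations, not anything from the linked repository:

```python
import ast

# Illustrative thresholds, not established limits.
MAX_COMPLEXITY = 10
MAX_LINES = 50

def complexity(fn: ast.FunctionDef) -> int:
    """Rough cyclomatic complexity: 1 + number of branching nodes."""
    branches = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)
    return 1 + sum(isinstance(n, branches) for n in ast.walk(fn))

def review(source: str) -> list[str]:
    """Return complaints that could be fed back to a code generator."""
    issues = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            lines = node.end_lineno - node.lineno + 1
            c = complexity(node)
            if c > MAX_COMPLEXITY:
                issues.append(f"{node.name}: complexity {c} > {MAX_COMPLEXITY}")
            if lines > MAX_LINES:
                issues.append(f"{node.name}: {lines} lines > {MAX_LINES}")
    return issues
```

        In a generation loop, a non-empty `review()` result would be appended to the prompt as refactoring feedback until the output passes.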

        • ColinEberhardt · 7 hours ago
          Very true, with the right feedback loop AI would do a wonderful job of refactoring.

          But if AI is the primary author and consumer of this code, that would be an unnecessary step. No need to clean it up for our feeble little human minds.

          I was just interested in what this file actually does - and am finding it hard to grok, scrolling through on a mobile device!

          • chvid · 4 hours ago
            I think it does all sorts of random things. And I doubt it is particularly easy for an LLM to work with compared to a more sanely structured piece of code.
  • tucnak · 7 hours ago
    Wow, well this was a waste of time.