It’s a bit like saying that driving cars still requires human muscles to operate the controls, so human strength has ‘won’, when it is clearly the internal combustion engine that has created the speed advantage of the car.
I think the tldr is that Gary Marcus has been hating on LLMs since ChatGPT came out, mostly because of the hype around them. His core theory is that pushing LLM tech just with more training is not going to accomplish AGI. He does have some essays with good writing (not this one), and he typically talks about how we’ll need different techniques to solve things like hallucinations.
I’ve read articles of his which made genuinely good points, and which go against the grain of what the big LLM companies are saying.
The reason there’s a lot of drama is that the LLM hype train (which includes some prominent people) really hated on HIM for saying anything negative about LLM technology, and he responded by keeping the flame war going for the past 4 years (as you can see in this article).
So when any companies do anything that looks like using these other techniques (neurosymbolic AI, world models), he basically tosses out a quick article about how vindicated he feels. Because the companies were all like “attention is all you need” and “we can just build 4x bigger data centers and that extra compute will solve all of our problems with more training,” and he was like “that’s BS.”
So, I really don’t mind him showing up, because we do get plenty of BS on here from the AI companies too. So… Gary Marcus is at least a balancing kind of BS, in a way. (For example, it’s hard to trust anything Anthropic says about Mythos, because they have so much money riding on it being insanely capable.)
But that situation isn’t ideal. What we actually need is more thoughtful, critical research which is NOT tied to impossible business goals. And that doesn’t describe Gary Marcus OR OpenAI/Anthropic.
https://github.com/yasasbanukaofficial/claude-code/blob/main...
?
How is that “neurosymbolic”?
It just looks like poorly structured, overly verbose AI-generated code.
Funny thing is, you could define measurable criteria capturing what is wrong with the code — e.g. function line count or cyclomatic complexity — and then let those guide the code generation.
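A minimal sketch of what that could look like, using only Python's stdlib `ast` module. The branch-node list and the "1 + branches" complexity heuristic are my own rough approximation of McCabe complexity, not any particular linter's rules; a real pipeline would more likely use a tool like radon and feed violations back into the generation loop.

```python
import ast
import textwrap

# Rough branch nodes for a McCabe-style estimate (illustrative choice,
# not an authoritative definition).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def function_metrics(source: str) -> dict:
    """Return {function_name: {"lines": n, "complexity": n}} for a source string."""
    tree = ast.parse(textwrap.dedent(source))
    metrics = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Line span of the function, including its def line.
            lines = node.end_lineno - node.lineno + 1
            # 1 + number of branching constructs inside the function.
            complexity = 1 + sum(
                isinstance(n, BRANCH_NODES) for n in ast.walk(node)
            )
            metrics[node.name] = {"lines": lines, "complexity": complexity}
    return metrics

sample = """
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(3):
        if x > 10:
            return "big"
    return "small"
"""

print(function_metrics(sample))
# A generator loop could reject any function exceeding, say,
# 50 lines or complexity 10, and regenerate with that feedback.
```

The point is just that both metrics are cheap to compute mechanically, so "is this function too long / too branchy" can be an automated gate rather than a human judgment.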
But if AI is the primary author and consumer of this code, that would be an unnecessary step. No need to clean it up for our feeble little human minds.
I was just interested in what this file actually does - and am finding it hard to grok, scrolling through on a mobile device!