In doing so, he rules out much of mathematics (e.g. matrix multiplication, in which the elements have no meaning in isolation; geometry, where points, lines and planes are explicitly left undefined; formal logic; etc.). This "turtles all the way down" stricture largely creates the problem that he then addresses.
What is interesting about current LLM-based systems is that they follow exactly the model suggested by this paper, bolting together neural systems and symbol-manipulation systems - to quote the paper: "connectionism can be seen as a complementary component in a hybrid nonsymbolic/symbolic model of the mind, rather than a rival to purely symbolic modeling."
They are clearly also kludges. As you say, they are built on shaky foundations. But the success - at least compared to anything that has gone before - of kludged-together neural/symbolic systems suggests that the approach is more fertile than any of its predecessors. They are also still far, far away from the AGI that has been predicted by their most enthusiastic proponents.
My best guess is that future systems that successfully solve hard problems will combine neurosymbolic processing with formal theorem provers: the neurosymbolic layer generates candidate proofs, which are then submitted to a symbolic prover to check for success.
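For concreteness, a minimal sketch of the loop I have in mind (Python, with `propose_candidates` and `formally_verify` as hypothetical stand-ins for the neural proposer and a symbolic prover such as Lean or Coq used purely as a checker):

```python
from typing import Callable, Iterable, Optional

def solve(goal: str,
          propose_candidates: Callable[[str, int], Iterable[str]],
          formally_verify: Callable[[str, str], bool],
          max_rounds: int = 5,
          samples_per_round: int = 8) -> Optional[str]:
    """Ask the neural layer for candidate proofs of `goal` and return the
    first one that the symbolic prover accepts, or None if all fail."""
    for _ in range(max_rounds):
        for candidate in propose_candidates(goal, samples_per_round):
            if formally_verify(goal, candidate):  # the prover is the final judge
                return candidate
    return None
```

The point of the design is that the neural side can be as unreliable as it likes; soundness comes entirely from the symbolic checker.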
I agree with you that purely symbolic AI systems had severe limitations (just think of the expert systems of the past), but the direction must be not only towards higher-level symbolic provers but also towards lower-level integration of sensory data.