I confess I found it disappointing. Their main claim seems to be that thinking consists of pattern matching and pattern completion, which lets them say that LLMs do resemble something we humans do. But that's essentially the idea behind the connectionist movement of the 1980s, the one from which current DNN models emerged. Perhaps a proponent of 1960s symbolic AI would be unhappy with that claim, but few of those are around anymore (Gary Marcus is often misrepresented as one, but his actual view is that models should be hybrid, not purely symbolic).
Nowadays, the question of whether LLMs are "actually" doing something similar to human thinking revolves around other dimensions, such as whether they rely on emergent world models. Whether such world models would require symbolic reasoning is a separate matter.