I haven’t met an AI that can think clearly in a systems setting. I’m guessing it might be a decade or two away. As an example, ask an LLM to do some 10th-grade math and inspect its thinking process. It can regurgitate the procedure and the rules, but it cannot reliably perform them.
Same with troubleshooting.
It seems to me that the solution is simply RL: train the language model to recognize when a step is a calculation and delegate it to the appropriate tool rather than attempt it in-token.
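To make the delegate-don't-compute idea concrete, here is a minimal sketch. All names here (`calc`, `answer`, the `<calc>…</calc>` tag convention) are hypothetical; a real system would RL-train the model to decide *when* to emit the tool call, while the tool itself stays a deterministic evaluator:

```python
import ast
import operator

# Deterministic "calculator tool": safely evaluates pure arithmetic,
# so the model never has to produce the digits itself.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def calc(expr: str) -> float:
    """Evaluate an arithmetic expression string via the AST (no eval())."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.operand))
        raise ValueError("not pure arithmetic")
    return ev(ast.parse(expr, mode="eval"))

def answer(model_output: str) -> str:
    """Replace a hypothetical <calc>...</calc> tool call in the model's
    draft with the tool's result, mimicking one delegation round-trip."""
    start = model_output.find("<calc>")
    end = model_output.find("</calc>")
    if start == -1 or end == -1:
        return model_output  # model chose not to call the tool
    expr = model_output[start + len("<calc>"):end]
    return model_output[:start] + str(calc(expr)) + model_output[end + len("</calc>"):]
```

The point of the split is that the model only needs to learn the easy part (spotting that `17*23` is a calculation) while the hard part (getting `391` right every time) is handled by ordinary deterministic code.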
The reason they got replaced isn't that the problem became deterministic (like a calculator). It's that the error rate and the cost dropped to an acceptable level compared with the cost of a human quant.