Not true. AI has been around far longer than modern LLMs and has performed well in non-generative areas, often with orders of magnitude fewer parameters.
Neural nets are unstable during training, and dynamic weights amplify the problem: a network whose weights keep shifting can end up in a totally unusable state at inference time.
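A toy sketch of that instability (a hypothetical one-dimensional example, not any particular system): with weights updated online by SGD, a step size past the stability threshold makes the weight diverge instead of converge, leaving the "model" useless at inference time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D linear model y = w*x, trained online with SGD on squared error.
# For this problem the update is w <- w - lr * x^2 * (w - 3), so each step
# multiplies the error by (1 - lr*x^2): contracting for small lr,
# explosive once lr*x^2 regularly exceeds 2.
def online_sgd(lr, steps=100):
    w = 0.0
    for _ in range(steps):
        x = rng.normal(0.0, 2.0)       # inputs with E[x^2] = 4
        y = 3.0 * x                    # true weight is 3
        grad = (w * x - y) * x         # gradient of 0.5*(w*x - y)^2
        w -= lr * grad
    return w

stable = online_sgd(lr=0.02)   # well under the stability threshold
unstable = online_sgd(lr=1.0)  # past it: weights blow up

print(stable)    # close to the true weight, 3
print(unstable)  # astronomically large (or NaN): unusable at inference
```

The same mechanism, scaled up to millions of continuously-updated weights, is why a network left to drift can silently degrade into garbage between training and serving.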