And not just in the legal field. The problem comes down to the fundamental fact that AI is based on probability and statistics.
In many cases the results cannot be legally defended because they are not derived from a reasonable, logical, reproducible decision-making process.
Here is an example from the medical industry. Really *important* results shouldn't be decided by a dice roll.
https://pub.towardsai.net/the-air-gapped-chronicles-the-cour...
Legal AI tools are generally not used for decision making but for improving the productivity of legal professionals, for example by helping to review large volumes of contracts. Ultimately, lawyers have to review the work and take responsibility for its correctness. We actually aim to reduce error rates overall. In the case of CompleteFlow, our agentic platform for contract and document processing, we also expose all of the settings the judge demanded in this case: since we offer single-tenant private cloud hosting, customers can select the models and settings they want for their own cost/performance trade-off.
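To make the reproducibility point concrete, here is a minimal sketch of what such per-tenant model settings might look like. This assumes a hypothetical configuration interface; the class, field, and model names are illustrative and are not CompleteFlow's actual API.

```python
# Hypothetical sketch of per-tenant model settings; names and values
# are illustrative only, not a real platform interface.
from dataclasses import dataclass

@dataclass
class ModelSettings:
    model: str          # which LLM this tenant has chosen to run
    temperature: float  # 0.0 trades variety for more reproducible output
    seed: int | None    # a fixed seed further pins down sampling, where supported
    max_tokens: int     # cost lever: caps spend per document

# Example trade-off: a cheaper, faster model for bulk contract triage...
bulk_review = ModelSettings(
    model="small-fast-model", temperature=0.0, seed=42, max_tokens=1024
)

# ...and a larger model, still configured deterministically, for
# documents flagged for closer review.
escalation = ModelSettings(
    model="large-accurate-model", temperature=0.0, seed=42, max_tokens=4096
)
```

The point of the sketch is that settings like temperature and seed, which govern how deterministic the output is, sit with the customer rather than being hidden inside the vendor's stack.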
That said, an unscrupulous or lazy judge can already betray the duties of their position by quietly outsourcing decisions to an LLM in a personal capacity.