Failures are common in these domains now, almost expected. The hard part comes afterward, when something goes wrong and you have to explain it. The questions are basic:
Which inputs did the model actually weigh? Which did it ignore? And where was it confident versus uncertain? Those questions rarely have easy answers.
The way I've come to think about it: accuracy is what gets a system deployed in the first place, but visible reasoning is what keeps it defensible to regulators afterward.
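To make "visible reasoning" concrete, here is a minimal sketch of what a per-decision audit record could look like. It assumes a plain scikit-learn logistic regression; the feature names, the `explain_decision` helper, and the JSON record format are all illustrative, not any standard, and the coefficient-times-value attribution only makes sense for linear models.

```python
# Minimal sketch: per-decision audit record for a linear classifier.
# Assumes a binary scikit-learn model; feature names and the record
# layout are hypothetical examples, not a regulatory standard.
import json
from datetime import datetime, timezone

import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["income", "debt_ratio", "num_late_payments"]  # hypothetical

# Toy training data, only so the example runs end to end.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain_decision(model, x, feature_names):
    """Return a structured record: prediction, confidence, and per-feature
    contributions (coefficient * value, valid for linear models only)."""
    proba = model.predict_proba(x.reshape(1, -1))[0, 1]
    contributions = dict(zip(feature_names, (model.coef_[0] * x).tolist()))
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prediction": int(proba >= 0.5),
        "confidence": float(proba),          # where the model was sure or not
        "contributions": contributions,      # what the model actually weighed
        "intercept": float(model.intercept_[0]),
    }

record = explain_decision(model, X[0], FEATURES)
print(json.dumps(record, indent=2))  # the artifact a reviewer sees later
```

The point of the sketch is the shape of the artifact, not the model: something written at decision time that a reviewer can read months later without rerunning anything.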
For anyone working in finance, healthcare, or similarly regulated fields: what kind of visibility has actually worked in practice, and where does it still fall short? My sense is that some areas are further along than others, but I'm not sure.
This, of course, is only because there can be no "malfunction" in a system that carries the qualifier "AI can make mistakes."