Unlike in black-box neural models, every inference step in this system is structurally inspectable and is derived by explicit formal rules.
Would you consider this kind of symbolic, rule-based, fully transparent reasoning a candidate for what could be called the “core” of Explainable AI?
How would you position it relative to current post-hoc explainability methods used in machine learning, such as feature-attribution techniques like LIME or SHAP?
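To make "structurally inspectable" concrete, here is a minimal sketch, not taken from the linked repository: a tiny forward-chaining rule engine (rule format, predicates, and function names are purely illustrative) that records, for every derived fact, which rule and premises produced it, so the full derivation tree can be replayed as the explanation rather than approximated after the fact.

```python
# Illustrative sketch only: forward chaining with a per-fact derivation trace.
from dataclasses import dataclass


@dataclass(frozen=True)
class Rule:
    name: str
    premises: tuple   # facts that must all hold
    conclusion: str   # fact derived when they do


@dataclass
class Derivation:
    fact: str
    rule: str         # "given" for asserted facts
    premises: tuple = ()


def forward_chain(facts, rules):
    """Derive all reachable facts, keeping a structural trace per fact."""
    trace = {f: Derivation(f, "given") for f in facts}
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule.conclusion in trace:
                continue
            if all(p in trace for p in rule.premises):
                trace[rule.conclusion] = Derivation(rule.conclusion, rule.name, rule.premises)
                changed = True
    return trace


def explain(fact, trace, depth=0):
    """Print the full derivation tree for a fact -- this tree is the explanation."""
    d = trace[fact]
    print("  " * depth + f"{d.fact}  [{d.rule}]")
    for p in d.premises:
        explain(p, trace, depth + 1)


if __name__ == "__main__":
    rules = [
        Rule("r1", ("bird(tweety)",), "has_wings(tweety)"),
        Rule("r2", ("has_wings(tweety)", "healthy(tweety)"), "can_fly(tweety)"),
    ]
    trace = forward_chain({"bird(tweety)", "healthy(tweety)"}, rules)
    explain("can_fly(tweety)", trace)
```

The point of the sketch is the contrast in where the explanation lives: here it is the derivation itself, whereas post-hoc methods fit a separate surrogate or attribution on top of an already-trained opaque model.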
Source (for context): https://github.com/JAnicaTZ/TreeOfKnowledge