2 points by JAnicaTZ 3 hours ago | 1 comment
  • JAnicaTZ 3 hours ago
    I built a recursive logic-tree engine for first-order logic, based on an explicit AST → NNF (negation normal form, via De Morgan's laws and quantifier duality) decomposition, with full step-by-step GUI visualization.

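    For concreteness, here is a minimal sketch of the kind of AST → NNF rewrite I mean (illustrative Python, not the actual code from the repo; the type names and structure are simplified assumptions):

      # Push negations inward via De Morgan's laws and quantifier duality.
      from dataclasses import dataclass

      @dataclass
      class Var: name: str                        # atomic formula / predicate instance
      @dataclass
      class Not: arg: object
      @dataclass
      class And: left: object; right: object
      @dataclass
      class Or: left: object; right: object
      @dataclass
      class ForAll: var: str; body: object
      @dataclass
      class Exists: var: str; body: object

      def nnf(f):
          if isinstance(f, Not):
              g = f.arg
              if isinstance(g, Not):    return nnf(g.arg)                               # ~~p      =>  p
              if isinstance(g, And):    return Or(nnf(Not(g.left)), nnf(Not(g.right)))  # ~(p & q) =>  ~p | ~q
              if isinstance(g, Or):     return And(nnf(Not(g.left)), nnf(Not(g.right))) # ~(p | q) =>  ~p & ~q
              if isinstance(g, ForAll): return Exists(g.var, nnf(Not(g.body)))          # ~forall  =>  exists ~
              if isinstance(g, Exists): return ForAll(g.var, nnf(Not(g.body)))          # ~exists  =>  forall ~
              return f                  # negated atom: already in NNF
          if isinstance(f, And):    return And(nnf(f.left), nnf(f.right))
          if isinstance(f, Or):     return Or(nnf(f.left), nnf(f.right))
          if isinstance(f, ForAll): return ForAll(f.var, nnf(f.body))
          if isinstance(f, Exists): return Exists(f.var, nnf(f.body))
          return f                      # atom

      # Example: ~forall x. (P(x) & Q(x))  =>  exists x. (~P(x) | ~Q(x))
      print(nnf(Not(ForAll('x', And(Var('P(x)'), Var('Q(x)'))))))

    Every recursive call corresponds to one rewrite step, which is what makes the full trace straightforward to render in a step-by-step GUI.
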
    Unlike black-box neural models, every inference step is structurally inspectable and derived by formal rules.

    Would you consider this kind of symbolic, rule-based, fully transparent reasoning a candidate for what could be called the “core” of Explainable AI?

    How would you position it relative to current post-hoc explainability methods used in machine learning (e.g., LIME, SHAP, saliency maps)?

    Source (for context): https://github.com/JAnicaTZ/TreeOfKnowledge