1 point by christianrth 5 hours ago | 2 comments
  • christianrth 5 hours ago
    Author here. Some additional technical details:

    *Why symbolic reasoning?*

    Most production AI doesn't need the full flexibility of LLMs. Fraud detection, compliance checking, rule validation - these have well-defined logic. Using an LLM for "if amount > €5000 AND velocity > 0.8 then flag" is overkill.
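
    That rule, written as plain deterministic code (a simplified sketch for illustration, not project code):

    ```python
    # The same check as a few lines of deterministic code - no model needed.
    def flag_transaction(amount_eur: float, velocity: float) -> bool:
        return amount_eur > 5000 and velocity > 0.8

    print(flag_transaction(9200, 0.95))  # True  -> flag
    print(flag_transaction(120, 0.10))   # False -> pass
    ```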

    *The DSL:*

    ```aspera
    concept fraud_risk {
      signals: ["amount", "velocity", "location"];
      weight: 1.0;
    }

    inference detect_high_velocity {
      when (signals.velocity > 0.8 AND signals.amount > 5000) {
        then increase concept: "fraud_risk" by 1.0;
      }
    }

    intention block_transaction {
      trigger: "fraud_detected";
      strategy: [
        if (concepts.fraud_risk > 0.7) then "block";
        if (concepts.fraud_risk > 0.4) then "review";
        else "approve";
      ];
    }
    ```
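
    A rough Python equivalent of the example rules above, to show the evaluation order (a simplified sketch of this one example, not the actual runtime or API):

    ```python
    # Illustrative only: hand-translates the DSL example into plain Python.
    def evaluate(signals: dict) -> str:
        concepts = {"fraud_risk": 0.0}

        # inference detect_high_velocity: bump the concept score when the condition holds
        if signals["velocity"] > 0.8 and signals["amount"] > 5000:
            concepts["fraud_risk"] += 1.0

        # intention block_transaction: thresholds checked in order
        if concepts["fraud_risk"] > 0.7:
            return "block"
        if concepts["fraud_risk"] > 0.4:
            return "review"
        return "approve"

    print(evaluate({"amount": 9200, "velocity": 0.95, "location": "DE"}))  # -> block
    print(evaluate({"amount": 120, "velocity": 0.10, "location": "DE"}))   # -> approve
    ```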

  • ImPrajyoth 5 hours ago
    >> Code/paper: [if public, otherwise say "available upon request for validation"]

    At least proofread before pasting.