2 points by sentinel_ai_act 4 hours ago | 1 comment
    Hi HN,

    I’ve been working on a way to automate the heavy lifting of EU AI Act compliance for engineering teams. Most "AI Governance" tools are just high-level dashboards, so I built Sentinel-AI-Compliance.

    It’s a GitHub Action powered by a WASM-compiled binary. It follows a 90/10 rule: 90% of the audit is solved deterministically (regex and Tree-sitter parsing) at $0 inference cost, and an LLM is used as a "higher instance" only for the remaining 10%.
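    To make the 90/10 split concrete, here is a minimal sketch of the idea (the section names, patterns, and deferral heuristic are my own illustrations, not Sentinel's actual rules): deterministic pattern checks produce findings directly, and only items they cannot classify are queued for the second-stage review.

```python
import re

# Illustrative patterns only -- Sentinel's real checks are not public here.
REQUIRED_SECTIONS = {
    "art10_data_governance": re.compile(r"^data[_-]?governance:", re.M | re.I),
    "art14_human_oversight": re.compile(r"^human[_-]?oversight:", re.M | re.I),
}

def deterministic_audit(manifest_text):
    """Return (findings, unresolved); unresolved items go to the AI stage."""
    findings, unresolved = [], []
    for name, pattern in REQUIRED_SECTIONS.items():
        if not pattern.search(manifest_text):
            findings.append(f"missing:{name}")
    # Free-form policy prose can't be judged by a regex; defer it.
    if "policy" in manifest_text and "TODO" in manifest_text:
        unresolved.append("policy_text_needs_review")
    return findings, unresolved

findings, unresolved = deterministic_audit("data_governance:\n  policy: TODO\n")
```

    The cheap, repeatable checks run on every push; only the small `unresolved` remainder would ever need a model call.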

    To test it, I ran Sentinel against 265 high-profile AI repositories (including vLLM, Dify, and Microsoft projects). The result: over 90% scored 100/100 on the risk scale, primarily due to missing Art. 10 (Data Governance) and Art. 14 (Human Oversight) manifests.
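    A score of 100/100 from just two missing manifests is easy to reproduce with a simple weighted model. The article weights below are illustrative assumptions, not Sentinel's actual scoring:

```python
# Hypothetical weights -- chosen so the two headline gaps alone max out the score.
ARTICLE_WEIGHTS = {
    "art10_data_governance": 50,
    "art14_human_oversight": 50,
}

def risk_score(missing_articles):
    """Sum the weights of missing manifests, capped at 100."""
    return min(100, sum(ARTICLE_WEIGHTS.get(a, 0) for a in missing_articles))

score = risk_score(["art10_data_governance", "art14_human_oversight"])  # -> 100
```

    Under any additive scheme like this, a repo with neither manifest hits the ceiling, which matches the pattern seen across the 265 scanned repositories.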

    Why WASM?

    Privacy-first: The binary runs entirely within your CI/CD pipeline. Your code never leaves your infrastructure.

    Speed: Scans manifest files in milliseconds.

    Deterministic: No LLM hallucinations in the compliance trail.

    This is an independent engineering project focused on making compliance a "git push" away rather than a legal nightmare.
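    As a sketch of what "a git push away" could look like in a workflow file (the action coordinates, inputs, and threshold below are hypothetical, not Sentinel's documented interface):

```yaml
# Hypothetical workflow -- names and inputs are illustrative only.
name: ai-act-audit
on: [push]
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Placeholder for the Sentinel action; real coordinates may differ.
      - uses: example/sentinel-ai-compliance@v1
        with:
          fail-on-score: 80   # fail the build above this risk score
```

    Because the WASM binary runs inside the job's container, this shape keeps the privacy claim intact: nothing leaves the runner.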

    I’d love to hear your thoughts on the deterministic vs. AI approach to compliance!