Quick context on the pilot:
30-day test with 5 compliance analysts at a UK fintech doing standard individual onboarding (Know Your Customer checks).
Before: 95 minutes per case (manual searches across OFAC, UN, EU sanctions lists, Companies House, adverse media, PEP databases)
After: 27 minutes per case (Claude orchestrates the searches, analyst reviews at 17 mandatory checkpoints)
Key architectural decision: NO auto-approvals. Every decision requires explicit analyst approval + notes.
Legal team spent 3 weeks reviewing before approving the pilot. Their main concern was the audit trail, solved with immutable markdown logs.
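One way to make a markdown log tamper-evident is to hash-chain the entries; this is my assumption about how "immutable" could be achieved, not a description of the pilot's actual implementation:

```python
import hashlib
from datetime import datetime, timezone

def append_entry(log_path: str, step: str, decision: str, analyst: str, notes: str) -> str:
    """Append one markdown audit entry and return the new file hash.

    Each entry records the SHA-256 of the file's prior contents, so any
    later edit to an earlier entry breaks the chain and is detectable.
    """
    try:
        with open(log_path, "rb") as f:
            prev = hashlib.sha256(f.read()).hexdigest()
    except FileNotFoundError:
        prev = "0" * 64  # genesis entry
    ts = datetime.now(timezone.utc).isoformat()
    entry = (
        f"\n## {step}: {decision}\n"
        f"- analyst: {analyst}\n"
        f"- notes: {notes}\n"
        f"- timestamp: {ts}\n"
        f"- prev_hash: {prev}\n"
    )
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(entry)
    with open(log_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()
```

The log stays human-readable markdown for regulators, while the hash chain gives an integrity check you can verify offline.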
Demo slides show the full workflow: https://github.com/vyayasan/kyc-analyst/blob/main/docs/demo-...
Happy to answer questions about:
- The 17 stage gates
- How risk scoring works (deterministic, not a black box)
- Regulatory requirements (FCA / MLR 2017)
- What works vs. what doesn't
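"Deterministic, not a black box" can be as simple as summing published per-factor weights. The weights and thresholds below are invented for illustration (they are not the pilot's model or anything from MLR 2017), but the shape is the point: same inputs, same score, every time.

```python
# Illustrative weights only; a real table would cite its regulatory source.
RISK_WEIGHTS = {
    "pep_match": 40,
    "sanctions_hit": 100,
    "adverse_media": 25,
    "high_risk_jurisdiction": 30,
}

def risk_score(flags: set[str]) -> tuple[int, str]:
    """Sum per-factor weights and map to a review band. Fully deterministic."""
    score = sum(RISK_WEIGHTS[f] for f in flags)
    if score >= 100:
        band = "refer"      # e.g. mandatory EDD and senior sign-off
    elif score >= 40:
        band = "enhanced"
    else:
        band = "standard"
    return score, band
```

Because the scoring is a lookup table plus addition, every score in the audit log can be recomputed and defended line by line.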
Built this to test whether foundation models can commoditize compliance middleware.
The thesis: If foundation models can reason through structured workflows, a lot of "middleware" categories (compliance, GRC, back-office automation) are just orchestrating free public data + applying published formulas. The "platform" becomes commoditized.
For KYC specifically:
- Data is free (OFAC, UN, EU sanctions lists, Companies House, etc.)
- Formulas are published (MLR 2017, FinCEN CDD rules)
- Workflows are well documented
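Screening against those free lists is, at its core, string matching. A toy sketch with stdlib `difflib` (the sample names are invented, and real screening also needs transliteration, aliases, and date-of-birth disambiguation):

```python
from difflib import SequenceMatcher

# Toy consolidated list; in practice this would be loaded from the
# published OFAC/UN/EU files.
SANCTIONS_LIST = ["Ivan Petrov", "Acme Trading FZE", "Jane Doe"]

def screen(name: str, threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return potential matches above a similarity threshold, best first.

    Deliberately over-inclusive: anything near the threshold goes to an
    analyst rather than being silently discarded.
    """
    hits = []
    for listed in SANCTIONS_LIST:
        ratio = SequenceMatcher(None, name.lower(), listed.lower()).ratio()
        if ratio >= threshold:
            hits.append((listed, round(ratio, 2)))
    return sorted(hits, key=lambda h: -h[1])
```

The hard part isn't the matching; it's tuning the threshold so false negatives are near zero while analysts aren't drowned in false positives.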
So what are teams paying £60K/year for? Orchestration + audit trail.
If Claude can orchestrate and markdown can audit, the economics shift dramatically.
Goals:
1. Prove open source can compete with commercial platforms (for standard workflows)
2. Make compliance accessible to smaller teams who can't afford £60K licenses
3. Test whether this pattern applies to other regulated categories (legal, accounting, HR compliance)
Not building a company or raising money. Just want to see if expertise-as-code can disrupt vertical SaaS in regulated industries.
What do you think? Does this pattern apply to other categories you've seen?