It’s written in ~175K lines of no_std Rust. I built the whole thing solo.
The system includes its own boot path, page tables, interrupt handlers, and NVMe driver. It runs in long mode at ring 0. There’s no syscall boundary between the reasoning engine and hardware, and no scheduler preemption separating decision logic from execution.
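For a sense of what the bottom of the stack looks like, the entry path has the usual freestanding-Rust shape. This is a minimal sketch, not the actual boot code; kernel_main, the bootloader handoff, and the halt loops are illustrative:

```rust
#![no_std]
#![no_main]

use core::panic::PanicInfo;

// Hypothetical entry point, called by the bootloader once long mode and the
// initial page tables are set up (the symbol name and handoff are illustrative).
#[no_mangle]
pub extern "C" fn kernel_main() -> ! {
    // ... install interrupt handlers, bring up the NVMe driver, and start the
    //     reasoning engine directly, with no syscall boundary in between ...
    loop {
        core::hint::spin_loop();
    }
}

// A no_std binary supplies its own panic handler; halting here matches the
// fail-stop posture described below.
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {
        core::hint::spin_loop();
    }
}
```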
The goal is bounded, auditable autonomy for safety-critical environments.
The reasoning engine is symbolic and refusal-first. Every conclusion must resolve to an explicitly grounded knowledge-base state. If it can’t, execution halts. Unsupported assertions don’t propagate into automated action.
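Concretely, the refusal-first contract reduces to something like the sketch below. This is simplified, not the engine's real representation; the type names and string fact IDs are illustrative:

```rust
// Simplified sketch of refusal-first resolution: a conclusion is accepted only
// if every premise resolves to an explicitly grounded fact in the knowledge
// base; otherwise the engine refuses and execution halts. Types and fact IDs
// are illustrative, not the real KB representation.

#[derive(Debug)]
enum Resolution {
    Grounded,        // every premise resolves to an explicit KB fact
    Refused(usize),  // index of the first ungrounded premise
}

struct KnowledgeBase {
    facts: &'static [&'static str], // grounded facts as opaque IDs (illustrative)
}

impl KnowledgeBase {
    fn is_grounded(&self, premise: &str) -> bool {
        self.facts.iter().any(|f| *f == premise)
    }
}

fn resolve(kb: &KnowledgeBase, premises: &[&str]) -> Resolution {
    for (i, p) in premises.iter().enumerate() {
        if !kb.is_grounded(p) {
            // Unsupported assertion: refuse rather than let it propagate
            // into automated action.
            return Resolution::Refused(i);
        }
    }
    Resolution::Grounded
}
```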
There’s no probabilistic inference path in the decision layer. All reasoning is local and deterministic. Identical inputs produce byte-identical reasoning chains across reboots, and those chains are integrity-verified.
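To illustrate the integrity check, assuming the chain is serialized to a canonical byte sequence, a digest can be recomputed and compared across reboots. The FNV-1a hash below is only a stand-in for whatever primitive the system actually uses:

```rust
// Stand-in integrity check over a canonically serialized reasoning chain.
// FNV-1a is used here only because it needs no allocator or external crate;
// it is not a claim about the system's actual digest.

const FNV_OFFSET_BASIS: u64 = 0xcbf2_9ce4_8422_2325;
const FNV_PRIME: u64 = 0x0000_0100_0000_01b3;

fn chain_digest(serialized_chain: &[u8]) -> u64 {
    let mut hash = FNV_OFFSET_BASIS;
    for &byte in serialized_chain {
        hash ^= byte as u64;
        hash = hash.wrapping_mul(FNV_PRIME);
    }
    hash
}

// Because identical inputs produce byte-identical chains, re-deriving the
// chain and comparing digests detects divergence or tampering across reboots.
fn verify_chain(serialized_chain: &[u8], expected_digest: u64) -> bool {
    chain_digest(serialized_chain) == expected_digest
}
```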
I model epistemic state explicitly. There are six discrete knowledge states, each mapped to a corroboration level. Automation executes only at full corroboration; contested or degraded states block autonomous action.
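In sketch form, the gate reduces to the following. The variant names are simplified stand-ins; only the full-corroboration, contested, and degraded distinctions are taken from the description above:

```rust
// Illustrative state set: the real system has six discrete knowledge states,
// but only full corroboration, contested, and degraded are named above; the
// remaining variant names here are placeholders.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum KnowledgeState {
    Unknown,
    Asserted,
    PartiallyCorroborated,
    FullyCorroborated,
    Contested,
    Degraded,
}

// Gate for autonomous action: only full corroboration may execute; every
// other state, including contested and degraded, blocks.
fn may_execute(state: KnowledgeState) -> bool {
    matches!(state, KnowledgeState::FullyCorroborated)
}
```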
So far I’ve run 358 validation tests across 25 phases. The validation footage is uncut, and I documented what each phase is intended to verify:
https://www.jou-labs.com/proof
Some limitations up front:
Capability is bounded by knowledge-base completeness.
The system isn’t formally verified.
Hardware trust is currently rooted in firmware.
The design intentionally excludes probabilistic inference in the decision layer.
I’m particularly interested in feedback around:
Formal verification pathways for systems like this
Deterministic autonomy models
How to make autonomous systems legally defensible