While using it daily, I realized the same architecture that solves "AI forgets everything" also solves "AI has no auditable safety record." If every decision is already written to human-readable files in real time, you have an audit trail a regulator can actually read. That became the foundation.
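To make the idea concrete: a minimal sketch of what real-time, human-readable decision logging can look like. This is my illustration, not the actual system; the record fields and file format are assumptions.

```python
import datetime
import json

def log_decision(path, actor, decision, reason):
    """Append one decision record as a single human-readable line.
    Append mode plus one-line records means the file is readable at any
    moment, even mid-run, which is what makes it auditable in real time."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,
        "decision": decision,
        "reason": reason,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: one decision, one line in the trail.
log_decision("audit.log", "planner", "reroute", "obstacle detected in aisle 3")
```

The point of the append-only, line-per-decision shape is that a regulator can open the file in a text editor and read it top to bottom; no database export or replay tooling required.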
The hardware side came from a simple question: why does every AI safety system run as software inside the system it's supposed to constrain? Industrial automation solved this decades ago — Safe Torque Off gives a safety controller physical authority over motor power. The motor can't override it because there's no software path between them.
SASM applies that principle to AI compute. Dedicated safety processor on its own power rail. The AI gets zero electricity until the safety processor boots and passes its self-test. During operation, the safety processor can cut AI power in under 10 ms. No software command, no API call: GPIO pins driving MOSFET gates directly.
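The sequencing matters as much as the hardware, so here is a software simulation of the interlock logic only. In the real design the cut is a GPIO pin driving a MOSFET gate; here a boolean stands in for the power rail, and all names are illustrative, not from the spec.

```python
import time

class SafetyInterlock:
    """Simulated safety-processor interlock: the AI rail is off at boot,
    comes up only after self-test passes, and any fault drops it
    immediately with no software path through the AI side."""

    def __init__(self):
        self.ai_power = False  # rail defaults to off until proven safe

    def self_test(self):
        # Placeholder for real checks (clock, watchdog, sensor sanity).
        return True

    def boot(self):
        # Power is granted only as a consequence of a passing self-test.
        if self.self_test():
            self.ai_power = True
        return self.ai_power

    def fault(self):
        # Direct cut: nothing is asked of the AI, nothing can veto it.
        start = time.monotonic()
        self.ai_power = False
        return (time.monotonic() - start) * 1000.0  # cut latency in ms

interlock = SafetyInterlock()
interlock.boot()              # rail comes up after self-test
latency_ms = interlock.fault()  # rail drops on fault
```

In software the "cut" is trivially fast; the under-10 ms figure in the hardware comes from the GPIO-to-MOSFET path having no OS, no scheduler, and no network hop in between.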
The EU AI Act goes fully into effect August 2, 2026. Every robot near humans needs auditable decisions, transparent operation, and human override capability. Nobody has a published standard that meets all three. That's what we filed.
15 PPAs covering 134 claims, filed February 4-17, 2026. All of it designed in collaboration with AI tools using the memory system I built.
Open to questions about the hardware spec, the safety architecture, the patent process, or what it's like building a patent portfolio from a living room in rural Pennsylvania.