The idea is to have AI translate intent directly to LLVM IR and skip programming languages entirely.
I've gone through a few iterations, starting with simple numerical operations; then I went a little crazy with it and decided to build a (mostly) self-contained web server. The project site (semcom.ai) runs on a server built this way: LLVM IR, raw Linux syscalls, no libc, a static binary in a scratch Docker container. The IR was generated by the semcom system, with some interventions from Claude Code.
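To give a flavor of the no-libc approach, here's a minimal sketch in C (for readability; the actual artifact is LLVM IR, and the names `sys3` and `write_stdout` are mine, not the project's): a raw x86-64 Linux syscall issued directly with inline assembly, bypassing the C runtime entirely.

```c
#include <stddef.h>

/* Raw x86-64 Linux syscall with three arguments, no libc involved.
   The generated IR does the equivalent with a `syscall` inline-asm
   fragment. */
static long sys3(long nr, long a0, long a1, long a2) {
    long ret;
    __asm__ volatile ("syscall"
                      : "=a"(ret)                          /* rax: return value   */
                      : "a"(nr), "D"(a0), "S"(a1), "d"(a2) /* rax, rdi, rsi, rdx  */
                      : "rcx", "r11", "memory");           /* clobbered by syscall */
    return ret;
}

/* write(1, buf, len) -- syscall number 1 on x86-64 */
static long write_stdout(const char *buf, size_t len) {
    return sys3(1, 1, (long)buf, (long)len);
}
```

Linked with `-nostdlib -static` and a hand-written `_start`, code in this style needs nothing from the C runtime, which is what makes a `FROM scratch` container possible.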
The system has four parts: intent layer (structured natural language), meaning compiler (intent → IR), semantic bridge (observes running system, produces behavioral model), and alignment engine (compares intent vs. behavior, outputs semantic deltas).
Claude and I put together a white paper (it's on the site) with my thoughts on what this might look like. Check it out if you're interested!