Why not directly have the LLM write ISA assembly? We're still grading based on results / theory proofs, and, for example, certifications for cryptographic government use are based on the binary code, not the sources.
Edited:
Why not go further and print the chips and PCBs directly via 3D printing from LLM instructions?
Edited (joke):
Why not go the furthest and turn the entire Earth into a computer and grey goo?
I thought the latest advance in computing (spring 2025 - last year) was self-play / reinforcement learning, since we ran out of training data a few years ago.
https://github.com/OpenPipe/ART
Reinforcement learning where the large language model devises puzzles that it then solves, graded via LLM-as-judge.
The definition of LLM-as-judge: your LLM generates 8-12 trajectories and a different LLM judges the results. For the problem of ISA assembly generation, I'd replace the judge with an oracle: actual execution on a Windows or Linux operating system.
The winning entries are used to train the large language model.
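Roughly, that loop might look like the sketch below. To be clear, none of these names are ART's actual API; toy_model and run_in_sandbox are hypothetical stand-ins, and a real oracle would assemble the candidate and execute it under qemu or a sandboxed OS.

```python
import random

def run_in_sandbox(asm: str) -> str:
    # Stand-in oracle. In a real setup this would assemble the program and run
    # it under an actual OS or emulator (qemu, a Linux container), returning
    # the captured stdout.
    return "42" if "correct" in asm else ""

def toy_model(puzzle: str) -> str:
    # Stand-in for an LLM sampling one assembly "trajectory" for a puzzle.
    return random.choice(["correct: mov eax, 42", "wrong: xor eax, eax"])

def self_play_step(model, puzzles):
    # One RL iteration: sample 8-12 trajectories per puzzle, grade each with
    # the execution oracle, keep the winners as training data for next round.
    winners = []
    for puzzle, expected in puzzles:
        candidates = [model(puzzle) for _ in range(random.randint(8, 12))]
        scored = [(1.0 if run_in_sandbox(c) == expected else 0.0, c)
                  for c in candidates]
        best_score, best = max(scored)
        if best_score > 0:
            winners.append((puzzle, best))
    return winners

print(self_play_step(toy_model, [("print the answer", "42")]))
```

The key design choice is that the reward comes from running the code, not from another model's opinion, which sidesteps judge hallucination for a domain where ground truth is cheap to check.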
The honest, earnest answer to that is it's a bad idea because it is not portable. Unfortunately for Intel, they don't have the dominance they once did, so you have to pick between ARM, x86, or something more exotic, and then be attached to that specific ISA. It's an interesting thought tho.
https://bellard.org/jslinux/ - Bellard is notable for this approach, where you write a RISC-V (or x86) execution layer and then run Windows / Linux / DOS etc. on top of it.
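To make "execution layer" concrete, here's a toy fetch-decode-execute loop covering just two RV32I instructions (ADDI and ADD, ignoring funct3 and everything else a real emulator handles). JSLinux obviously does vastly more than this; it's only the shape of the idea.

```python
def sext12(x: int) -> int:
    # Sign-extend a 12-bit immediate.
    return x - 0x1000 if x & 0x800 else x

def run(program: list[int]) -> list[int]:
    regs, pc = [0] * 32, 0
    while pc < len(program) * 4:
        inst = program[pc // 4]
        opcode = inst & 0x7F
        rd  = (inst >> 7) & 0x1F
        rs1 = (inst >> 15) & 0x1F
        rs2 = (inst >> 20) & 0x1F
        if opcode == 0x13:        # ADDI rd, rs1, imm
            regs[rd] = (regs[rs1] + sext12(inst >> 20)) & 0xFFFFFFFF
        elif opcode == 0x33:      # ADD rd, rs1, rs2
            regs[rd] = (regs[rs1] + regs[rs2]) & 0xFFFFFFFF
        regs[0] = 0               # x0 is hardwired to zero
        pc += 4
    return regs

# addi x1, x0, 5 ; addi x2, x0, 7 ; add x3, x1, x2
regs = run([0x00500093, 0x00700113, 0x002081B3])
print(regs[3])  # 12
```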
I have decided to play around with clankers to produce the most stupid PoC, to find out how far I can go with "vibing" out some "context-compact", unreadable programming language. So the motivation of this post is to discuss: why do we need to produce human-readable code at all? If modern programming language semantics are optimised for humans, and society (at least in all these hype posts) is being pushed toward generated code no one reads, why do we waste compute on something made for humans to read? What's the point of this compute waste?