Examples here: https://corewar.co.uk/evolving.htm
The difference here is that instead of using a typical genetic algorithm written in a programming language, it uses LLM prompts to do the same thing.
I wonder if the authors tried some of the existing "evolvers" to compare to what the LLM gave out.
Speaking from that experience: the LLM is likely to do drastically better. Most of the prior work, mine included, took a genetic-algorithm approach, but an LLM is more likely to make coherent multi-instruction modifications.
It's a shame they didn't compare against some of the standard Core War benchmarks to facilitate comparison with prior work, though; that makes it hard to say for sure that they're better. https://corewar.co.uk/bench.htm
Given fixed opposition, finding a warrior that performs the best is an optimization problem. Maybe, for very small core sizes like a nano core, it would be possible to find the optimum directly by SAT or SMT instead of using evolution? Or would it be impractical even for those core sizes?
For the nano hill[1], the constants are: each warrior has a max of five lines of code, core size is 80 instructions, and a match lasts a maximum of 800 cycles.
Taking N to be the number of cycles the match runs (and thus how far a SAT/SMT encoding would have to unroll): for N = 1, it's clear that the best you can do is drop a bomb at a fixed location and hope you hit, so that is mostly a tie. For N = 2, it's probably still not possible to do anything useful. With N = 10, perhaps a quickscan is possible. N = 800, the full match length -- who knows?
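For a sense of why even nano is daunting for exact methods, here's a back-of-envelope count of the raw warrior space. The instruction-set counts are my assumption of roughly ICWS '94 numbers (about 16 opcodes, 7 modifiers, 8 addressing modes), and this ignores symmetries and obviously dead programs; a SAT/SMT solver wouldn't enumerate these one by one, but on top of a space like this it would still have to unroll up to 800 cycles of two self-modifying programs.

```python
# Back-of-envelope size of the raw nano warrior space, assuming ICWS '94-style
# counts: ~16 opcodes, 7 instruction modifiers, 8 addressing modes per operand,
# and operand values reduced modulo the core size (80).
opcodes, modifiers, modes = 16, 7, 8
core_size, max_lines = 80, 5

per_instruction = opcodes * modifiers * (modes * core_size) ** 2
per_warrior = per_instruction ** max_lines

print(f"~{per_instruction:.1e} possible instructions")   # ~4.6e+07
print(f"~{per_warrior:.1e} possible 5-line warriors")     # ~2.0e+38
```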
https://en.wikipedia.org/wiki/Tierra_(computer_simulation)
https://github.com/adamierymenko/nanopond
There are lots of these evolving-bug, Core War-style systems around.
I think the interesting thing with this one is that they're having LLMs create the evolving agents instead of using blind evolution or a similar ML system.
I know you can still do that today, but… something has changed. I don't know what it is. (Maybe I changed.)
Anyway, I was unable to track down PDF versions of the original articles, but, for the curious and newcomers to Core Wars, they're transcribed here:
I am one of the authors from Sakana AI and MIT. We just released this paper where we hooked up LLMs to the classic 1984 programming game Core War. For those who haven't played it, Core War involves writing assembly programs in a language called Redcode that battle for control of a virtual computer's memory. You win by crashing the opponent's process while keeping yours running. It is a Turing-complete environment where code and data share the same address space, which leads to some very chaotic self-modifying code dynamics.
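To make the shared code/data point concrete, here is a toy, heavily stripped-down core stepping the classic "Dwarf" bomber. This is not a faithful MARS and not our simulator; it only supports the three opcodes and addressing modes the Dwarf needs, just to show a program rewriting the same memory it executes from.

```python
# Toy core stepping the classic Dwarf bomber (illustration only, not a real MARS).
CORE_SIZE = 20  # real nano cores are 80 cells; shrunk here for readability

def ins(op, a_mode, a, b_mode, b):
    return {"op": op, "a_mode": a_mode, "a": a, "b_mode": b_mode, "b": b}

core = [ins("DAT", "#", 0, "#", 0) for _ in range(CORE_SIZE)]

# The Dwarf: bomb every 4th cell with a DAT while stepping its own pointer.
dwarf = [
    ins("ADD", "#", 4, "$", 3),   # add 4 to the B-field of the DAT 3 cells ahead
    ins("MOV", "$", 2, "@", 2),   # copy that DAT to wherever its B-field points
    ins("JMP", "$", -2, "#", 0),  # loop back to the ADD
    ins("DAT", "#", 0, "#", 0),   # the bomb, doubling as the bombing pointer
]
core[:len(dwarf)] = dwarf

def resolve(pc, mode, val):
    """Turn a (mode, value) operand into an absolute core address."""
    addr = (pc + val) % CORE_SIZE
    if mode == "@":                           # indirect via the target's B-field
        addr = (addr + core[addr]["b"]) % CORE_SIZE
    return addr

pc = 0
for cycle in range(12):
    cur = core[pc]
    if cur["op"] == "ADD":                    # add immediate A to target's B-field
        core[resolve(pc, cur["b_mode"], cur["b"])]["b"] += cur["a"]
    elif cur["op"] == "MOV":                  # copy a whole instruction
        src = resolve(pc, cur["a_mode"], cur["a"])
        dst = resolve(pc, cur["b_mode"], cur["b"])
        core[dst] = dict(core[src])
    elif cur["op"] == "JMP":
        pc = resolve(pc, cur["a_mode"], cur["a"])
        continue
    else:                                     # executing a DAT kills the process
        break
    pc = (pc + 1) % CORE_SIZE

bombs = [i for i in range(len(dwarf), CORE_SIZE) if core[i]["b"] != 0]
print("cells bombed so far:", bombs)          # -> [7, 11, 15, 19]
```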
We did not just ask the model to write winning code from scratch. Instead, we treated the LLM as a mutation operator within a quality-diversity algorithm called MAP-Elites. The system runs an adversarial evolutionary loop where new warriors are continually evolved to defeat the champions of all previous rounds. We call this Digital Red Queen because it mimics the biological hypothesis that species must continually adapt just to survive against changing competitors.
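For readers who haven't seen MAP-Elites before, here is a rough sketch of that loop in Python. It is not our actual implementation: llm_mutate() and simulate_match() are stubs standing in for the LLM call and the MARS simulator, and the two behavior descriptors follow the axes discussed below (memory coverage and thread spawning).

```python
import random
from dataclasses import dataclass

@dataclass
class MatchResult:
    score: float            # 1.0 = win, 0.5 = tie, 0.0 = loss
    memory_coverage: float  # fraction of the core the warrior wrote to
    threads: int            # peak process count (via SPL)

SEED_WARRIOR = "MOV 0, 1"   # the classic Imp, just something to start from

def llm_mutate(warrior: str) -> str:
    """Stub: prompt an LLM to rewrite/mutate the given Redcode program."""
    return warrior

def simulate_match(warrior: str, opponent: str) -> MatchResult:
    """Stub: run a MARS match and collect behavioral statistics."""
    return MatchResult(random.random(), random.random(), random.randint(1, 8))

champions: list[str] = [SEED_WARRIOR]                 # champions of all prior rounds
grid: dict[tuple[int, int], tuple[float, str]] = {}   # behavior cell -> (fitness, warrior)

def evaluate(warrior: str) -> tuple[float, tuple[int, int]]:
    """Fitness = average score against every past champion; cell = behavior bins."""
    results = [simulate_match(warrior, c) for c in champions]
    fitness = sum(r.score for r in results) / len(results)
    coverage_bin = min(int(10 * sum(r.memory_coverage for r in results) / len(results)), 9)
    thread_bin = min(max(r.threads for r in results), 9)
    return fitness, (coverage_bin, thread_bin)

def run_round(iterations: int = 1000) -> None:
    for _ in range(iterations):
        parent = random.choice(list(grid.values()))[1] if grid else SEED_WARRIOR
        child = llm_mutate(parent)             # the LLM is the mutation operator
        fitness, cell = evaluate(child)
        if cell not in grid or fitness > grid[cell][0]:
            grid[cell] = (fitness, child)      # keep only the elite per cell
    # The round's best elite joins the opponent pool for all future rounds.
    champions.append(max(grid.values(), key=lambda e: e[0])[1])
```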
The most interesting result for us was observing convergent evolution. We ran independent experiments starting from completely different random seeds, yet the populations consistently gravitated toward similar behavioral phenotypes, specifically regarding memory coverage and thread spawning. It mirrors how biological species independently evolve similar traits like eyes to solve similar problems. We also found that this training loop produced generalist warriors that were robust even against human-written strategies they had never encountered during training.
We think Core War is an under-utilized sandbox for studying these kinds of adversarial dynamics. It lets us simulate how automated systems might eventually compete for computational resources in the real world, but in a totally isolated environment. The simulation code and the prompts we used are open source on GitHub.
Other links besides the blog post:
Paper (website): https://pub.sakana.ai/drq/
Interesting. So you're including past-generation champions in the "fights"? Intuitively, that would model a different kind of evolution than purely "current factors"-driven evolution.
> We also found that this training loop produced generalist warriors that were robust even against human-written strategies they had never encountered during training.
Nice. Curious: did you do any ablations comparing "all previous champions" vs. "current-generation champions"?
Since LLMs are text-based, a text-based game might be interesting. Something like Nomic?
Or a "meme warfare" game where each agent tries to prompt-inject its adversaries into saying a forbidden codeword, and can modify its own system prompt to attempt to prevent that from happening to itself.
AFAIK, the best results so far for fully computer-generated warriors have been on the nano and tiny formats (https://sal.discontinuity.info/hill.php?key=nano, https://sal.discontinuity.info/hill.php?key=tiny), with much shorter warriors (at most 5 or 20 instructions, respectively).