I created Schematra[1] and also a schematra-starter-kit[2] that Claude can spin up to create a project and get you ready in less than 5 minutes. I've created 10+ side projects this way and it's been a great joy. I even added a Scheme reviewer agent that is extremely strict and focuses on Scheme best practices (it's all in the starter kit, btw).
I don't think the lack of training material makes LLMs poor at writing Lisp. I think it's the lack of guidelines, and if you add enough of them, Lisp's inherently simple pattern & grammar makes it a prime candidate (IMO) for code generation.
The article mentions a REPL skill. I don’t do that: letting model+tools run sbcl is sufficient.
I haven't tried integrating it into a REPL or even command line tools though. The LLM can't experience the benefit of a REPL, so it makes sense that it struggled with it and preferred feeding entire programs into sbcl each time.
We should be using LLMs to translate from (fuzzy) human specifications to formal specifications (potentially resolving contradictions), and then solving the resulting logic problem with a proper reasoning algorithm. That would also guarantee correctness.
LLMs are a "worse is better" kind of solution.
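As a toy illustration of that pipeline (my own sketch, not the commenter's proposal): the fuzzy request "smallish even numbers bigger than ten" becomes a formal predicate, and a trivial complete search, standing in for a real solver like Z3, produces answers that are correct by construction.

```python
# Hypothetical toy example: an LLM translates the fuzzy request
# "give me the smallish even numbers bigger than ten" into a formal
# constraint; an exhaustive search (a stand-in for a proper reasoning
# algorithm such as an SMT solver) then finds every satisfying value.
def spec(n: int) -> bool:
    return n % 2 == 0 and 10 < n < 20

solutions = [n for n in range(100) if spec(n)]
print(solutions)  # [12, 14, 16, 18] -- provably satisfies the spec
```

The correctness guarantee comes from the solver, not the LLM: the model only has to get the translation right, which is a much narrower target than getting the whole program right.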
Agreed! This is why having LLMs write assembly or binary, as people suggest, is IMO moving in the wrong direction.
> then solving the resulting logic problem with a proper reasoning algorithm. That would also guarantee correctness.
Yes! I.e. write in a high-level programming language, and have a compiler, the reasoning algorithm, output binary code.
It seems like we're already doing this!
I think the biggest barrier to adoption of program synthesis is writing the spec/maintaining it as the project matures. Sometimes we don’t even know what we want as the spec until we have a first draft of the program. But as you’re pointing out, LLMs could help address all of these problems.
Perhaps you meant to say "coding", not "programming". AI is immensely helpful for programming. Coding is just the last, and in a proper programming session sometimes even unnecessary step - there are times when an adequate investigation requires deleting code rather than writing new one, or writing pages of documentation without a single code change.
You have to be a detective and know what threads to pull to rope in the relevant data, digging inductively and deductively - soaring high to get the "big picture" of things and diving into the depths of a single code line change.
I've been developing software for decades now (not claiming to be great, but at least I think I've built certain intuition and knack for it), and I always struggled with the "story telling" aspect of it - you need to compose a story about every bug, every feature request - in your head, your notes, your diagrams. A story with actors, with plot lines, with beginning, middle, and end. With a villain, a hero, and stakes. But software doesn't work that way. It's fundamentally an exploratory, iterative, often chaotic process. You're not telling what happened - you're constructing a plausible fiction that satisfies the format. The tension I felt for decades is that I am a systems thinker being asked to repeatedly perform as a narrator, and that is hard.
Modern AI is already capable of digging up the details for my narrative - I gave it access to everything - Slack, Jira, GitHub, Splunk, k8s, Prometheus, Grafana, Miro, etc. - and now I can ask it to explain a single line of code - including historical context, every conversation, every debate, every ADR, diagram, bug and stack trace - it's complete bananas.
It doesn't mean I don't have to work anymore; if anything, I have to work more now, because now I can - the reasons become irrelevant (see Steve Jobs' janitor vs. CEO quote). I didn't earn a leadership role - AI has granted it? Forced me into it? Honestly, I don't know anymore. I have mixed feelings about all of it. It is exciting and scary at the same time. Things that I dreamed about are coming true in a way that I couldn't even imagine and I don't know how to feel about all that.
This is definitely partly training data, but if you give an LLM a simple language to use on the fly it can usually do ok. I think the real problem is complexity.
Go and Java require very little mental modelling of the problem; everything is written down on the page really quite clearly (more so with Go, but still with Java).
In GCL however the semantics are _weird_, the scoping is unlike most languages, because it's designed for DSLs. Humans writing DSL content requires little thought, but authoring DSLs requires a fair amount of mental modelling about the structure of the data that is not present on the page. I'd wager that Lisp is similar, more of a mental model is required.
The problem is of course that LLMs don't have a mental model, or at least what they do have is far from what humans have. This is very apparent when doing non-trivial code, non-CRUD, non-React, anything that requires thinking hard about problems more than it requires monkeys at typewriters.
This is a weird moment in time where proprietary technology can hurt more than it can help, even if it's superior to what's available in public in principle.
Who owns the tech doesn't matter, what matters is whether there's a set of diverse examples of its use spread around the internet.
That's the reason I think it honestly depends more on the complexity to understand and the necessity of having a mental model of the code.
The main problem is the dynamic scoping (as opposed to lexical scoping like most languages), and the fact that lots of things are untyped and implicitly referenced.
If you take a hard look at that workflow, it implies a high degree of incompetence on the part of humans: the reason we generally don’t write thousands of lines without any automated feedback is because our mistake rate is too high.
I proceeded to spend about 45 minutes configuring Emacs. Not because Claude struggled with it, but because Claude was amazing at it and I just kept pushing it well beyond sane default territory. It was weirdly enthralling to have Claude nail customizations that I wouldn't have even bothered trying back in the day due to my poor elisp skills. It was a genuinely fun little exercise. But I went back to VS Code.
E.g. I work on a huge monorepo at this new company, and Emacs TRAMP was super slow to work with. With help of Claude, I figured out which packages were making it worse, added some optimizations (Magit, Project Find File), hot-loaded caching onto some heavyweight operations (e.g. listing all files in the project) without making any changes to the packages themselves, and while listing files I added keybindings to my minibuffer map to quickly add filters for the subproject I'm on. Could have probably done all this earlier as well, but it was definitely going to take much longer as I was never deep into the elisp ecosystem.
I learned Common Lisp years ago while working in the AI lab at the University of Toronto, and parts of this article resonated strongly with me.
However, if you abandon the idea of REPL-driven development, then the frontier models from Anthropic and OpenAI are actually very capable of writing Lisp code. They struggle sometimes editing it (messing up parens), but usually the first pass is pretty good.
I've been on an LLM kick the past few months, and two of my favorite AI-coded (mostly) projects are, interestingly, REPL-focused. icl (https://github.com/atgreen/icl) is a TUI and browser-based front end for your CL REPL designed to make REPL programming for humans more fun, whether you use it stand-alone, or as an Emacs companion. Even more fun is whistler (https://github.com/atgreen/whistler), which allows you to write/compile/load eBPF code in lisp right from your REPL. In this case, the AI wrote the highly optimizing SSA-based compiler from scratch, and it is competitive against (and sometimes beating) clang -O2. I mean... I say the AI wrote it... but I had to tell it what I wanted in some detail. I start every project by generating a PRD, and then having multiple AIs review that until we all agree that it makes sense, is complete enough, and is the right approach to whatever I'm doing.
For example I've been on the lookout for a better language than bash to use for shell scripting, but didn't like the options I was familiar with for various reasons (go, python, js, swift, etc). I did some research and Nim seemed to fit my needs perfectly. I was able to quickly convert some scripts I had to Nim using an LLM, where in the past I wouldn't have bothered to get used to a whole new language just for a few scripts.
Or right now I'm working on a personal full stack project and chose Go for the backend services, TypeScript/React for the frontend, and also have one service in Python because the library I need is easier to use there than in Go. Normally it would be frustrating to context-switch languages, but with LLMs I'm thinking more about the architecture and logic than specific syntax so it's been pretty frictionless.
I've generally always been one to want to use the best language/stack/platform for the job, so I'm probably biased, but I think LLMs actually make it easier to use languages you're less familiar with as long as you understand fundamental programming concepts. I'm hoping they end up promoting the usage or uptake of some of the less popular languages like Nim due to the lower learning curve needed to get useful output from them.
There are some issues of course. Sometimes, Claude Code gets into a "parenthesis counting loop", which is somewhat hilarious, but luckily this doesn't happen too often for me. In the worst case I fix the problematic fragment myself and then let it continue. But overall I'd say Claude Code is not bad at all with Lisps.
However, a large part of OP is about REPLs and on that I've also had a hard time with CC. I was working on it this evening in fact, and while I got something running, it's clunky and slow.
On Mac I can poke virtually any aspect of my system - my Hammerspoon config is written in Fennel and has a REPL.
On Linux, I have a babashka loop with nrepl, that "talks" to Hyprland's IPC through a socket - AI can diagnose the state of WM and move things around, change color temp, affect gamma, etc.
I have made little prototypes with nbb and Playwright, and the model had no difficulty understanding the REPL loop - it was able to inspect every DOM element by going through the REPL.
We have a few services written in Clojure, and we keep nrepl on the staging k8s cluster. I have vibe-coded, fixed and tested things on the go - the LLM can directly eval things there. Fixing bugs in Python, Java and Go takes a completely different kind of loop - sometimes it feels like the AI even gets excited when there's a REPL to mess around with.
If anything - being a lisper in AI-era only reinforced my belief that making a deliberate choice to learn and understand the philosophy of Lisp years ago was the best choice I could've made. I future-proofed myself for decades.
Working with Lisp for a human programmer requires mindset adjustment - AI is no different here - you just have to tell it where the REPL is.
I wasn't sure if I should expect great results relative to more popular languages with more code for the LLM to train on, but it looks like that's either not a big issue, or Clojure is over the popularity threshold for good results. I also previously expected languages with a lot of static guarantees like Rust to lead to consistently better results with LLM coding agents than languages like Clojure which have few, but that's untrue to the point that "bad AI rewrite in Rust" is a meme.
Everything in this area is moving so quickly that I haven't yet crystallized my thinking or settled on a working methodology but I am getting a lot of value out of running Claude Code with MCP servers for Common Lisp and Emacs (cl-mcp & emacs-mcp-server). Among other things this certainly helps with the unbalanced parentheses rabbit hole.
Along with that I am showing it plenty of my own Lisp code and encouraging it to adopt my preferred coding style and libraries. It takes a little coaching and reinforcement (recalcitrant intern syndrome) but it learns as it goes. It's really quite a pleasant experience to see it write Lisp as I might have written it.
Some are going to nitpick that Clojure isn't as lispy as, say, Common Lisp but I did experiment with Claude Code CLI and my paid Anthropic subscription (Sonnet 4.6 mostly) and Clojure.
It is okay-ish. I got it to write a topological sort and pure (no side effect) functions taking in and returning non-totally-trivial data structures (maps in maps with sets and counters etc.). But apparently it's got problems with...
... drumroll ...
The number of parentheses. It's so bad that the author of figwheel (a successful ClojureScript project) is working on a Clojure MCP that fixes parens in Clojure code spat out by AI (well, the project does more than that, but the description literally says it's "designed to handle Clojure parentheses reliably").
You can't make that up: there's literally an issue with the number of closing parens.
Now... I don't think giving an AI access to a Lisp REPL and telling it: "Do this by bumping on the guardrails left and right until something is working" is the way to go (yet?) for Clojure code.
I'm passing it a codebase (not too big, so no context size issue) and I know what I want: I tell it "Write a function which takes this data structure in and that other parameter, the function must do xxx, the function must return the same data structure out". Before that I told it to also implement tests (relatively easy since they're pure functions) for each function it writes and to run tests after each function it implements or modifies.
And it's doing okay.
> Are the parentheses in ((((()))))) balanced?
There was a thread about this the other day [1]. It's the same issue as "count the r's in strawberry." Tokenization makes it hard to count characters. If you put that string into OpenAI's tokenizer, [2] this is how they are grouped:
Token 1: ((((
Token 2: ()))
Token 3: )))
Which of course isn't at all how our minds would group them together in order to keep track of them.
[1] https://news.ycombinator.com/item?id=47615876 [2] https://platform.openai.com/tokenizer
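For contrast, the character-by-character scan that tokenization hides from the model is trivial to write out (a minimal sketch):

```python
def parens_balanced(s: str) -> bool:
    """Check parenthesis balance by scanning one character at a time --
    the per-character view that token grouping denies the model."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:        # a ')' with no matching '(' so far
                return False
    return depth == 0

print(parens_balanced("((((()))))"))    # True: five opens, five closes
print(parens_balanced("((((())))))"))   # False: one closing paren too many
```

A model seeing `((((`, `()))`, `)))` as three opaque tokens has to memorize or reason its way to the same count that this loop gets for free.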
Try to get your favourite LLM to read the time from a clock face. It'll fail ridiculously most of the time, and come up with all kinds of wonky reasons for the failures.
It can code things that it's seen the logic for before. That's not the same as counting. That's outputting what it's previously seen as proper code (and even then it often fails. Probably 'cos there's a lot of crap code out there).
Our brains also process text entire words at a time, not letter-by-letter. The difference is that our brains are much more flexible than a tokenizer, and we can easily switch to letter-by-letter reading when needed, such as when we encounter an unfamiliar word.
As an example, I asked claude 3.5 back when that was the latest to indent all the code in my file by four more spaces. The file was about 700 lines long. I got a busy spinner for two minutes then it said, "OK, first 50 lines done, now I'll do the rest" and got another busy spinner and it said, "this is taking too long. I'm going to write a program to do it", which of course it had no problem doing. The point is that it is superhuman at some things and completely brain-dead about others, and counting parens is one of those things I wouldn't expect it to be good at.
Edit: working on a lot of legacy code that needs boring refactoring, which Claude is great at.
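The throwaway script it reached for is a few lines (a hypothetical sketch of what it presumably wrote, assuming four-space indentation):

```python
# The kind of one-off script the model wrote instead of hand-editing
# 700 lines: add four more spaces of indentation to every non-blank line.
def indent(text: str, spaces: int = 4) -> str:
    pad = " " * spaces
    return "\n".join(
        pad + line if line.strip() else line  # leave blank lines alone
        for line in text.splitlines()
    )
```

Which is exactly the point: writing and running this program is a task the model is reliably good at, while performing the edit itself token by token is not.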
Things, on the whole, were fine, save for the occasional, rogue (or not) parentheses.
The AI would just go off the rails trying to solve the problem. I told it that if it ever encountered the problem to let me know and not try to fix it, I’d do it.
But most human languages—or at least the dominant ones that compose the vast bulk of the LLM training set—use more complex structuring rules for whatever evolutionary linguistic reasons. Easier error correction? Auditory disambiguation?
You could tell similar “just so” stories about computer language syntax, & why s-expressions didn’t win out over (say) XML-style tagging. And it turns out pseudo-XML is a great way to talk to LLMs.
EDIT: To be clear, by “s-expressions” I mean their typical use in Lisp programming of a function expression followed by a series of parameter expressions. The “grammar” is just eval/apply.
Yep. Language and libraries too.
1) use a running REPL session 2) ignore pre-compilation time (it will kill the running process, mistaking it for being stuck...)
Damn. And here I have a Gemini Pro subscription sitting unused for a year now.
I had some test functions where minimization could be optionally used, but wanted to do one where minimization was needed, like the Ackermann function. Most of the frontier models struggled with doing this, although I may have been prompting incorrectly. Although - if I had been prompting totally correctly, I probably could have gotten what I got out of a frontier LLM in early 2025 and before.
Incidentally the test function that tells you if a number is prime in Emacs Lisp with primitive recursion is
(defalias 'prime (c (c (c (r 's (c 'z (p 1))) (p 1) 'z) (c (r (p 1) (c 's (p 2))) (c (c (c (r 'z (c (c 's 'z) (p 1))) (p 1) 'z) (c (r (p 1) (c (c (r 'z (p 1)) (p 1) 'z) (p 2))) (p 1) (p 2))) (p 2) (p 1)) (c (c (c (r 'z (c (c 's 'z) (p 1))) (p 1) 'z) (c (r (p 1) (c (c (r 'z (p 1)) (p 1) 'z) (p 2))) (p 2) (p 1))) (p 2) (p 1)))) (c (c (r 'z (c (r (p 1) (c 's (p 2))) (c (c (r 'z (c (r (p 1) (c 's (p 2))) (p 2) (p 3))) (c (c (c (r 's (c 'z (p 1))) (p 1) 'z) (c (r (p 1) (c 's (p 2))) (c (c (c (r 'z (c (c 's 'z) (p 1))) (p 1) 'z) (c (r (p 1) (c (c (r 'z (p 1)) (p 1) 'z) (p 2))) (p 1) (p 2))) (p 2) (p 1)) (c (c (c (r 'z (c (c 's 'z) (p 1))) (p 1) 'z) (c (r (p 1) (c (c (r 'z (p 1)) (p 1) 'z) (p 2))) (p 2) (p 1))) (p 2) (p 1)))) (c (c (r (p 1) (c (c (r 'z (p 1)) (p 1) 'z) (p 2))) (c (r 'z (c (r (p 1) (c 's (p 2))) (p 2) (p 3))) (p 2) (c (r 'z (c (r (p 1) (c 's (p 2))) (p 2) (c (c (r 's (c 'z (p 1))) (p 1) 'z) (c (r 'z (c (r 'z (c (r (p 1) (c 's (p 2))) (p 2) (p 3))) (c 's (p 2)) (c (c (r 's (c 'z (p 1))) (p 1) 'z) (c (c (c (r 's (c 'z (p 1))) (p 1) 'z) (c (r (p 1) (c 's (p 2))) (c (c (c (r 'z (c (c 's 'z) (p 1))) (p 1) 'z) (c (r (p 1) (c (c (r 'z (p 1)) (p 1) 'z) (p 2))) (p 1) (p 2))) (p 2) (p 1)) (c (c (c (r 'z (c (c 's 'z) (p 1))) (p 1) 'z) (c (r (p 1) (c (c (r 'z (p 1)) (p 1) 'z) (p 2))) (p 2) (p 1))) (p 2) (p 1)))) (c 's (p 2)) (p 3))))) (c 's (p 1)) (p 3))))) (p 1) (p 2))) (p 1)) (p 1) (p 2)) (c 'z (p 1))) (c (c (r 'z (c (c 's 'z) (p 1))) (p 1) 'z) (p 1))) (p 3) (c 's (p 1))) (p 2))) (p 1) (p 1)) (p 1)) (c 's (c 's 'z))))
That's what you get with every language. So, not much to really be disappointed by in terms of Lisp performance.
You guys are depressing.
It's tough to steal what doesn't exist.
> but AI can write hundreds of lines in one go so that it just makes sense for the AI to use a language that doesn't use the REPL. It is orders of magnitude easier and cheaper to write in high-internet-volume languages like Go and Python
Python doesn't have a REPL?
Not really in the Lisp sense. If you consider how people typically develop and modify Python code (edit file -> run from beginning -> observe effects -> start over) and how people typically develop Lisp code (rarely do "start over" and "run from beginning" happen) it becomes obvious. Most Python development resembles Go or C++, you just get to skip the explicit "compile" step and go straight to "run". The Python "REPL" is nice for little snippets and little bits of interactive modification but the experience compared to Lisp isn't the same (and I think the experience is actually better/closer to Lisp in Java, with debug mode and JRebel).
How does the traditional human use of the REPL impact an AI's ability to use one language over another?
> Most Python development resembles Go or C++
How do you know this? Or what source are you using to arrive at this conclusion?
Again, super curious, if outright copyright theft _isn't_ the answer, why can't AI write lisp, then?
Contrary to the blog author I don't really believe this.
> How does the traditional human use of the REPL impact an AI's ability to use one language over another?
I don't think it does very much other than it's not the normal workflow for people vibe coding. Lisp doesn't require you to develop with an interactive mindset, but it enables it and it's very enjoyable if nothing else. Vibe coding workflow is prompt -> plan / code generation / edits -> maybe create and run some tests or run program from start -> repeat (sometimes the AI is in a loop until it hits some threshold, or has subagents, or is off on its own for long periods, and other complications). The layer of interactivity is with the AI tool, not with the program itself. You can use this workflow with Lisp just fine. Sometimes an MCP tool might offer some amount of interactivity to the AI at least, e.g. I've never tried to use an AI to do Blender work but I imagine there's an MCP that lets it do stuff in the running instance without having to constantly relaunch Blender. Blender has a Python API so the AI with no eyes might even be decent at some things nevertheless.
Others than the blog author report using something like https://github.com/cl-ai-project/cl-mcp that lets the AI develop more bottom-up style with the REPL, perhaps even configurable to use a shared REPL with the human, where programs evolve bit by bit without restarting. I trust their report that it works though I don't really have a desire to try it. If an AI barfs a bunch of changes across several Lisp files and I want to try them out without restarting, I can just reload the system (which reloads the necessary files) on my own separately. I also don't think representation in the training data is that important at least to frontier models because they express ever more general intelligence which lets them do more with less. This is further suggested by them being decently good at things like TLA+ and Lean proofs, which don't exactly have a lot of data either.
> How do you know this?
I've at times been a Java developer, a C++ developer, a Python developer, a PHP developer, a Lisp developer, and others. I read about and observe how people develop their programs and how commercial tooling advertises itself. Hot reloading tooling technically exists in a lot of places and gets you some of the way towards what Lisp provides out of the box but it's not used by the majority and usually comes with a lot of asterisks for where it will fail. I'd say one of the biggest differences with a lot of Python code vs. other langs is the prevalence of jupyter notebooks, but that's more similar to literate programming styles than Lisp styles, and unlike Lisp or literate programming (though I'm sure there's at least one exception) jupyter notebooks are typically used for tiny few-hundred-lines stuff at most, not large projects.
As an example of what's out of the box in Lisp: compile is a function you can call at runtime, not a separate tool; inspect is another function you can call at runtime that lets you view and modify data; and the mouthful update-instance-for-redefined-class method is part of the standard, so you have optional custom control over class redefinitions modifying existing objects, rather than just "invalidating" them and/or forcing them to keep using older copies of methods forever, or filling new fields with default values (though this is usually a fine default), or like default Java debug mode in Eclipse/IntelliJ saying "whoops, we can't reload this change, you have to restart!"

I like to advertise JRebel because it doesn't have as many limitations and goes very far indeed by working with the whole Java ecosystem. E.g. XML files that under normal development are used to configure and initialize objects at program start time, changes to which require a restart, are monitored by JRebel and when changed trigger reinitialization without having to restart anything.

That's the Lisp way, though in Lisp you'd have to set up your own XML file watchers for something like that. (Djula is a Django-inspired example for web template files; it does reloading by just checking file modification times. One could use something fancier on Linux like inotifywait. Though some Lisp developers just write their HTML with s-expressions, so changes to a page are just recompiling a function like normal development rather than saving a separate mostly-HTML template file. Lisp gives you many options of how you prefer to develop and deploy changes to a website. I like to ship a binary.)
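The modification-time check described for Djula is simple enough to sketch in a few lines (a hypothetical Python class illustrating the idea, not Djula's actual API):

```python
import os

class ReloadingTemplate:
    """Reload a template from disk when its modification time changes --
    a sketch of the Djula-style check, not Djula's real interface."""

    def __init__(self, path: str):
        self.path = path
        self.mtime = None    # mtime of the last version we loaded
        self.source = None

    def render(self) -> str:
        mtime = os.path.getmtime(self.path)
        if mtime != self.mtime:          # file changed, or first load
            with open(self.path) as f:
                self.source = f.read()   # "recompile" without restarting
            self.mtime = mtime
        return self.source
```

The same polling trick generalizes to any config or resource file; inotify-style watchers just replace the poll with a push notification.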
Now is the time to switch to a popular language and let the machines wrangle it for you. With more training data available, you'll be far more productive in JavaScript than you ever were in Lisp.