There’s a thing I’m whispering to myself constantly as I work on software: “if I had something that would make this easy, what would it look like?”
I do this continuously, whether I'm working in C++ or Python. Although the author was talking about Lisp here, the approach applies to any language. Split the problem up into an abstraction that makes it look easy. Then dive in and build that abstraction, ask yourself again what you'd need to make this level easy, and repeat.
Sometimes it takes a lot of work to make some of those parts look and be easy.
In the end, the whole thing looks easy, and your reward is someone auditing the code and saying that you work on a code base of moderate complexity and they’re not sure if you’re capable enough to do anything that isn’t simple. But that’s the way it is sometimes.
“if you can just trust that ChatGPT will later fill in whatever stub functions you write, how would you write this program?” — and you can quickly get going: “well, I guess I would have a queue; while the queue is not empty I pull an item from there and look up its responsible party in LDAP (I guess I need to memoize my LDAP queries, so let's @cache that LDAP stub); if that party is authorized we just log the access to our S3-document (oh yeah, I need an S3-document I am building up)... otherwise we log AND we add the following new events to the queue...”
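As a minimal sketch of exactly that stream of consciousness in Python (every helper name here is a hypothetical stub to be filled in later, not anything from a real codebase):

    # Stub-first style: every helper (lookup_owner_in_ldap, is_authorized,
    # S3Document, follow_up_events) is a hypothetical stub to fill in later.
    from collections import deque
    from functools import cache

    @cache
    def lookup_owner_in_ldap(item_id: str) -> str:
        """Stub: resolve an item to its responsible party via LDAP (memoized)."""
        raise NotImplementedError

    def is_authorized(party: str) -> bool:
        """Stub: is this party allowed to touch the item?"""
        raise NotImplementedError

    class S3Document:
        """Stub: the access log we're building up, eventually written to S3."""
        def log(self, party: str, item_id: str) -> None: ...

    def follow_up_events(item_id: str) -> list[str]:
        """Stub: new events triggered by an unauthorized access."""
        raise NotImplementedError

    def process(initial_items: list[str]) -> S3Document:
        doc = S3Document()
        queue = deque(initial_items)
        while queue:                              # while the queue is not empty
            item = queue.popleft()                # pull an item from it
            party = lookup_owner_in_ldap(item)    # memoized LDAP lookup
            doc.log(party, item)                  # we log either way
            if not is_authorized(party):
                queue.extend(follow_up_events(item))  # and enqueue follow-ups
        return doc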
It's not the technique that has most enhanced what I write (that's probably a variant on functional core, imperative shell), but it's pretty solid as a way to break the writer's block you face in any new app.
He also did another talk expanding the concept called Boundaries: https://www.destroyallsoftware.com/talks/boundaries
core/app_config.py (data structures to configure services)
core/events.py (defines the core Event data structure and such)
core/grouping.py (parses rule files for grouping Events to send)
core/parse_events.py (registers a bunch of parsers for events from different sources)
core/users.py (defines the core user data structures)
(There's also an __init__.py to mark it as a package, and so forth.) There is some subtlety; for instance, events.py contains the logic to turn an event into a Slack message string or an email, and an AppConfig contains the definition of what groups there are and whether they should send an email or a Slack message or both. But everything here is a deterministic transform. So, for instance, `parse_event` doesn't yet know what User to associate an event with, so `users.py` defines a `UserRef` that might be looked up to figure out more about a user, and there is a distinction between an `EventWithRef`, which contains a `list[UserRef]` of user-refs to try, and an `Event`, which contains a User (there's a small sketch of these types after the sum-type discussion below).

Then there's the services/ module, which is for interactions with external systems. These are intentionally as bare as possible:
services/audit_db.py (saves events to a DB to dedupe them)
services/config.py (reads live config params from an AWS environment)
services/notification.py (sends emails and slack messages)
services/user_lookup.py (user queries like LDAP to look up UserRefs)
If they need to hold onto a connection, like `user_lookup` holds an LDAP connection and `audit_db` holds a database connection, then these are classes whose __init__ takes some subset of an AppConfig to configure itself. Otherwise, like the email/Slack sends in the notification service, these are just functions that take part of the AppConfig as a parameter.

These functions are as simple as possible. There are a couple of audit_db functions which perform -gasp- TWO database queries, but it's for a good reason (e.g. lambdas can be running in parallel, so I want to atomically UPDATE some rows as "mine" before I SELECT them for processing notifications to send). They take core data structures as inputs and generate core data structures as outputs, and usually I've arranged for some core data structure to "perfectly match" what the service produces (Python's TypedDict is handy for this in JSON-land).
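For illustration, a "bare" service in this shape might look like the following sketch; the class name matches the description above, but the config fields and the sqlite3 stand-in driver are my own inventions:

    # Hypothetical sketch of a bare service; field names and the sqlite3
    # stand-in driver are invented for illustration.
    import sqlite3
    from typing import TypedDict

    class AuditDbConfig(TypedDict):
        """The slice of AppConfig this service needs (assumed fields)."""
        dsn: str
        retention_days: int

    class AuditDBService:
        """Holds one connection; methods take core data structures in and
        hand core data structures back, deciding as little as possible."""
        def __init__(self, config: AuditDbConfig) -> None:
            self.conn = sqlite3.connect(config["dsn"])

    def send_slack_message(webhook_url: str, text: str) -> None:
        """Stateless services stay plain functions that take their piece
        of the AppConfig (here, a hypothetical webhook URL) as an argument."""
        ...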
"Simple" can be defined approximately as "having if statements", you can say that basically all if/then logic should be moved to the functional core. This requires a bit of care because for instance a UserRef contains an enum (a UserRefType) and user_lookup will switch() on this to determine which lookup it should perform, should I ask LDAP about an email address, should I ask it about an Amazon user ID, should I ask this other non-LDAP system. I don't consider that sort of switch statement to be if/then complexity. So the rule of thumb is that the decision of what lookups to do, is made in Core code, and then actually doing one is performed from the UserLookupService.
If you grok type theory, the idea more briefly is, "you shouldn't have if/then/else here, but you CAN have try/catch and you CAN accept a sum type as your argument and handle each case of the sum type slightly differently."
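Concretely, using the names from above (the fields and lookup details are my own invention), the split might look like this:

    # Sketch of the core/service split; names like UserRef, UserRefType,
    # EventWithRef, Event, and User come from the description above, the
    # rest is invented for illustration. Requires Python 3.10+.
    from dataclasses import dataclass
    from enum import Enum, auto

    class UserRefType(Enum):
        EMAIL = auto()
        AMAZON_USER_ID = auto()
        OTHER_SYSTEM_ID = auto()

    @dataclass(frozen=True)
    class UserRef:
        type: UserRefType
        value: str

    @dataclass(frozen=True)
    class User:
        display_name: str

    @dataclass(frozen=True)
    class EventWithRef:            # parsed, user not yet resolved
        payload: dict
        user_refs: list[UserRef]   # candidate lookups to try, in order

    @dataclass(frozen=True)
    class Event:                   # fully resolved
        payload: dict
        user: User

    # Core decides WHICH lookups to attempt -- pure if/then logic,
    # deterministic and unit-testable:
    def lookup_plan(raw: dict) -> list[UserRef]:
        refs = []
        if email := raw.get("email"):
            refs.append(UserRef(UserRefType.EMAIL, email))
        if uid := raw.get("amazon_user_id"):
            refs.append(UserRef(UserRefType.AMAZON_USER_ID, uid))
        return refs

    # The service only dispatches on the sum type's tag, one case per
    # variant, making no business decisions of its own:
    class UserLookupService:
        def lookup(self, ref: UserRef) -> User | None:
            match ref.type:
                case UserRefType.EMAIL:
                    return self._ldap_by_email(ref.value)
                case UserRefType.AMAZON_USER_ID:
                    return self._ldap_by_amazon_id(ref.value)
                case UserRefType.OTHER_SYSTEM_ID:
                    return self._other_system(ref.value)

        def _ldap_by_email(self, email: str) -> User | None: ...
        def _ldap_by_amazon_id(self, uid: str) -> User | None: ...
        def _other_system(self, value: str) -> User | None: ...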
Finally there's the parent structure,
main.py (main entrypoint for the lambda)
migrator.py (a quick DB migration script)
../sql/ (some migrations to run)
../test/ (some tests to run)
Here's the deal: main.py is like 100 lines long, gluing the core to the services. So if you printed it, it's only three pages of reading, and then you know: "oh, this lambda gets an AppConfig from the config service, initializes some other services with that, does the database migrations, and then after all that setup is done, it proceeds in two phases. In the first, ingestion phase, it parses its event arguments to EventWithRefs, then looks up the list of user refs to a User and makes an Event with it; then it labels those events with their groups, checks those groups for an allowlist and drops some events based on the allowlist, and otherwise inserts those events into the database, skipping duplicates. Once all of that ingestion is done, phase two, reporting, starts: it reserves any unreported records in the database, groups them by their groups, and for each group tells the notification service, "here's a bunch of notifications to send"; for each successful send, we mark all of the events we were processing as reported. Last, we purge any records older than our retention policy and close the database connection." You get the story in broad overview in three pages of readable code.
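To make "three pages of glue" concrete, here's a skeletal impression of that story as code. Apart from `config_from_secrets_manager`, every import, name, and signature is reconstructed from the prose, so read it as the shape of the file, not the real one:

    # Skeletal impression of main.py; names are assumptions, not the real file.
    from core.events import make_event
    from core.grouping import apply_allowlist, group_by_group, label_groups
    from core.parse_events import parse_events
    from services.audit_db import AuditDBService
    from services.config import config_from_secrets_manager
    from services.notification import send_notifications
    from services.user_lookup import UserLookupService

    def resolve_user(event_with_ref, users):
        """Glue: try each candidate UserRef until the lookup service finds a User."""
        for ref in event_with_ref.user_refs:
            if (user := users.lookup(ref)) is not None:
                return make_event(event_with_ref, user)

    def handler(lambda_event, context):
        app_config = config_from_secrets_manager()
        audit_db = AuditDBService(app_config)
        users = UserLookupService(app_config)
        audit_db.run_migrations()

        # Phase 1: ingestion
        with_refs = parse_events(lambda_event)            # core: -> EventWithRefs
        events = [resolve_user(e, users) for e in with_refs]
        events = label_groups(events, app_config)         # core: tag with groups
        events = apply_allowlist(events, app_config)      # core: drop some events
        audit_db.insert_events(events)                    # skips duplicates

        # Phase 2: reporting
        unreported = audit_db.reserve_unreported()        # atomic UPDATE then SELECT
        for group, batch in group_by_group(unreported).items():
            if send_notifications(group, batch, app_config):
                audit_db.mark_reported(batch)

        audit_db.purge_older_than(app_config)
        audit_db.close()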
migrator.py adds about two printed pages more to do database migrations. In its current form it makes its own DB connections from strings, so it doesn't depend on core/ or services/; it's kind of an "init container" app, except AWS Lambda isn't containerized in that way.

The test folder is maybe the most important part, because based on this decoupling:

- The little pieces of logic that haven't been moved out of main.py yet can be tested by mocking. This can be reduced arbitrarily far; in theory there is no reason that a Functional Core, Imperative Shell program needs mocks. (Without mocks, the assurance that main.py works is that main.py looks like it works, worked previously, and hasn't changed; it's pure glue and high-level architecture. If it does need to change, the assurance that it works is that it was deployed to dev and worked fine there, so the overall architecture should be OK.)
- The DB migrations can be tested locally by spinning up a DB with some example data in it, and running migrations on it.
- The core folder can be tested exhaustively by local unit tests (a tiny example follows this list). This is why it's all deterministic transforms between data structures; that's actually, if you like, what mocking is: an attempt to take nondeterministic code and make it deterministic. The functional core is where all the business logic is, and because it's all deterministic, it can all be tested without mocking.
- The services can be tested pretty well by nonlocal unit "smoke"/"integration" tests, which just connect and verify that "if you send X and parse the response to data structure Y, no exception gets thrown and Y has some properties we expect", etc. This doesn't fully test the situations where the external libraries called by services throw exceptions that aren't caught. So you can easily test "remote exists" and "remote doesn't exist", but "remote stops existing halfway through" is untested and "remote times out" is tricky.
- The choice of what to test in services depends a lot on who has control over it. AuditDBService is always tested against a local DB in a Docker container with test data preloaded, because we control the schema, we control the data, and it's just a hotspot for devs to modify. config.py's `def config_from_secrets_manager()` is always run against the AWS Secrets Manager in dev. UserLookupService is always tested against live LDAP, because that's on the VPN and we have easy access to it. But NotificationService, while it probably should get some sort of API token and send to the real Slack API, we haven't put in the infrastructure for that and created a test Slack channel or whatever... so it's basically untested (we mock the HTTP requests library, I think?). But it's also something that basically nobody has ever had to change.
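Here's the promised example for core/: a hypothetical test of a pure function, with no mocks anywhere, because a deterministic transform needs none. The module, rule format, and signature are all invented for illustration:

    # Hypothetical pytest-style test of a pure core function.
    from core.grouping import label_groups   # assumed module and function

    def test_label_groups_assigns_matching_group():
        rules = {"db-alerts": {"source": "rds"}}
        event = {"source": "rds", "message": "disk almost full"}
        [labeled] = label_groups([event], rules)
        assert labeled["group"] == "db-alerts"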
Once you see that you can just exhaustively test everything in core/, it becomes really addictive to structure everything this way. "What's the least amount of stuff I can put into this shell service? How can I move all of its decisions to the functional core? Oh crap, do I need a generic type to hold either a User or a list[UserRef] of possible lookups?" etc.
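And for what it's worth, that generic might be nothing fancier than a union type (hypothetical names, matching the sketches above):

    # One way the "User or list[UserRef]" question could shake out: a plain
    # union type, with the branching kept in one small glue helper.
    Lookup = User | list[UserRef]

    def ensure_user(lookup: Lookup, users: UserLookupService) -> User | None:
        if isinstance(lookup, User):    # already resolved, nothing to do
            return lookup
        for ref in lookup:              # otherwise try each candidate ref
            if (user := users.lookup(ref)) is not None:
                return user
        return None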
George Bernard Shaw, Man and Superman
> What type of person successfully finds simplicity working in C++?
Be the change you want to see!
Every language has the raw materials available to turn the codebase into an inscrutable complex mess, C++ more than others. But it’s still possible to make it make sense with a principled approach.
In the same vein, Python looks simple on the surface, but in reality it is quite a deep language once people move beyond using it as a DSL for C/Fortran libraries or in introduction-to-programming scenarios.
(It's a classic legend. There is an Islamic legend that Allah gave the first pair of tongs to the first blacksmith because you need a pair of tongs to make a pair of tongs. There's a Nordic legend that Thor made the first tongs. In reality, somebody probably used a bent piece of green wood, which didn't last long, but could be easily replaced.)
His piece "Vibe Coding, Final Word"[1] is relevant right now.
[1] https://funcall.blogspot.com/2025/04/vibe-coding-final-word....
_The Perfectionists: How Precision Engineers Created the Modern World_
(alternately titled _Exactly_)
https://www.goodreads.com/work/editions/56364115-the-perfect...
and for further technical details see:
_Foundations of Mechanical Accuracy_ by Wayne R. Moore
https://mitpress.mit.edu/9780262130806/foundations-of-mechan...
See:
https://mooretool.com/about-us/publications/
for a form to request it.
https://en.wikipedia.org/wiki/Brokkr
Re: "funcall's vibe coding findings", it makes sense that human-style lisp (/tongs) would be too nonlinear for LLMs (or gods like Thor) to generate?
Edit: but in line with latter-day retcons it also makes sense that Thor would get credit for something good that Loki did
But a hammer! How do you make a hammer without a hammer?
Lisp, Jazz, Aikido and (now) Blacksmithing.
The distinction between Lisp and the programming languages widely adopted in industry is a bit like the distinction between artist blacksmiths and fabricators. Blacksmiths have the skills and techniques to transform the form of the metal they work with, while fabricators essentially rely upon the two operations of cutting and welding. Blacksmiths will use those two operations in their work, but also have the more plastic techniques of splitting, drifting, upsetting, fullering, etc.
https://old.reddit.com/r/lisp/comments/1eu9gd9/comment/likzw...
These additional basic tools are created from essentially the same working material, on the fly, just like the tongs in TFA.
Cf. Whitehead:
> Civilization advances by extending the number of important operations which we can perform without thinking of them.
https://ocw.mit.edu/courses/6-001-structure-and-interpretati...
The root object would be two rocks brought together in a bang heard 'round the world, then perhaps some sharpened sticks, all the way up to a Colchester lathe somewhere in Victorian England and the machinery that made whatever object we're looking at.
which is a multi-volume series based on the fact that a lathe is the only tool in a machine shop which can replicate itself, so the first volume has one make an aluminum casting foundry in one's backyard, the second how to use it to make a lathe, then one can use the rough-cast lathe to improve itself (or make a better lathe), and from there make the entirety of a machine shop.
Blacksmithing as noted in the original article is unique in that it is self-sufficient in a way that few other processes are, and downright elemental in what one needs.
Another book which touches on this sort of things is Verne's _The Mysterious Island_ which has a couple of escaped Civil War prisoners making things from essentially first principles:
https://www.gutenberg.org/ebooks/1268
Less on the nose are _Robinson Crusoe_ and _The Swiss Family Robinson_, though those have starter kits in the form of flotsam and jetsam (rather more than that for the latter).
That's it, not too complex.
Lisp peaked at a time when microcomputers had not reached the right affordability and power parameters.
People who were developing in Lisp turned their eyes to the microcomputer market and the business to be had there, if the stuff would only run. So there was some activity of rewriting Lisp stuff in languages like Bliss and C.
The transition from powerful workstations (where we can count Lisp machines) to microcomputers basically destroyed everything which couldn't make the jump nimbly.
The new crop of programmers who cut their teeth on micros simply had no knowledge or experience with anything that didn't run on micros.
Poof, just like that, a chunk of the computing sphere consisting of new people suddenly had amnesia about Cobol, Fortran, Snobol, PL/I, operating systems like TOPS/20 and VMS and whatnot.
Only Unix pulled through, pretty much --- and that's because Unix started on relatively weak hardware and was kept small. Unix started getting complicated approximately in step with micro hardware getting more complicated and powerful. E.g. a Unix kernel was around 50 kilobytes in 1980: not a good fit for some Apple II or Commodore PET, but not far off from the resources the IBM PC would have.
By the time micros were powerful enough to run the huge Lisp stuff with 20-megabyte images, we were into the 90s and starting to be overrun with crap dynamic languages.
Now, Lisp people could have buckled down and worked on promoting excellent Lisps for microcomputers. There were a few fledgling efforts along those lines, but they were not promoted well.
It seems that what Lisp programmers there were, were mostly wrapped up working on bigger problems on larger hardware, and ignored microcomputers.
It's very hard to promote anything today that most of your Generation X (now management class) didn't get to play with in the 80s and 90s.
Nobody in the Lisp world ever took the time to implement stuff that people wanted on those tiny machines. Or to demonstrate to people the cool stuff it could do.
You can see this in Dr. Dobbs Journal. People are doing things like drawing graphics, writing spell checkers and controlling modems. Assembly and BASIC are normal but C and Forth are mentioned regularly. Turbo Pascal pops up in 1984. Some of the names are famous enough that you recognize them even now, decades later.
Lisp just ... gets barely mentioned in passing sometimes. And nobody of note writes anything about it. Somebody could have built a word processor, a spell checker, a chess game, a reversi game, ANYTHING ... but nobody did.
But interestingly, Tcl is a good example of a language that took a piece of Lisp's domain. In fact, Tcl is quite Lisp-like, with a syntax distinct from Lisp. Tcl has been successful, and with the recent release of v9.0 it stands to gain traction among programmers.
OTOH CL and Scheme remain underused though current implementations are generally well-equipped to handle contemporary requirements. I've used Scheme to build website generators and other tools but there's a dearth of large-scale, visible Lisp/Scheme projects out there to attract developers.
Starting such a project is a big commitment, probably programmers are hoping somebody else will pick up the ball and run with it.
I'm a decent fan of both Tcl and CL, but Tcl has the big problem of being "almost" homoiconic and lacking good meta-programming tools like quasi-quoting. I say almost because comments break homoiconicity, whereas in CL they are discarded at read-time, never appearing in the parsed tree.
The stuff here sure looks like quasiquoting to me:
https://wiki.tcl-lang.org/page/Macro+Facility+for+Tcl
    # Defines a macro "mloop" that expands into a counted for-loop.
    mac mloop {idx cnt cmd} {
        return "for {set $idx 0} {\$[set $idx] < $cnt} {incr $idx} {$cmd}"
    }
The "..." with embedded $... reference is is a kind of quasiquote.Possible answers:
1. Blacksmiths enjoy making custom tools for each domain while welders just want to get on with solving their domain problem.
2. Blacksmithing is harder to learn. Welding using modern techniques is easy to learn. (Caveat: Welding well is quite difficult. But learning to weld good enough to repair a broken hitch on your tractor is easy.)
3. Welding can solve a very large chunk of metalwork problems. Not all of them--and not always with elegance--but it gets the job done quickly. Blacksmithing can solve a larger set of metalwork problems with more elegance but it also takes more time and skill.
For blacksmithing you need a forge, which immediately takes up more space and is somewhat more likely to start a fire, plus an anvil, tongs, and hammers. It's also a lot more physically demanding, even if you use a power hammer.
Your #2 and #3 are pretty key. Most importantly, most fabrication jobs are much happier to get quick work at reasonable precision using stock shapes. Once you start talking about real free-form hot shaping, you're immediately going up at least 10x in price/time. Welded table base: $500. Handcrafted wrought table base: $10,000.
Really it's that metalwork is mostly functional (fences, stairs, railings, walkways, enclosures, stainless for commercial kitchens, pipefitting, etc.). It's very difficult to stay in business as an actual craftsman making well-designed objects. Architectural metal is probably the easiest way in (wall coverings, nice-looking railings and stairs, lamps, and other decorative elements), and even there it's still dominated by fabrication processes (machining and welding of stock shapes), although nicer materials like bronze start to have their place.
Edit: you know, I left this thinking I was missing something, and I realized what it is. In welding you make shapes out of like-shapes, like making drawings in Figma. I don't think a lot of people have what it takes to learn to be a really good freehand artist. And even if you have the skill, being able to design those kinds of organic, arbitrary shapes so that they are emotive and attractive is another step up. Do you want a piece of art which is a direct expression of the concept held by the artist? Or do you want a 3x5', 32"-high workbench for 1/20 the cost?
Or you use a gas-powered forge which is smaller and produces no smoke. But gas-powered forges don't get as hot so you can't forge-weld with them. No big deal IMHO. That's what TIG is for.
Let me just clarify one thing: you can reasonably do _arc_ welding in your garage, not torch welding. Source: my house burned down once due to the guy next door torch welding in his garage.
Would you ride a bike frame forged by a blacksmith? Haha.
A bike frame forged by a blacksmith would be incredibly strong but it would also be an enormous amount of work.
One of the major trends in computing in the 80's and 90's is that high-end systems lost out to the growth in capabilities of low-end systems, and this happens in pretty much every level in the computing stack. Several people responded to this trend by writing articles sniffling that their high-end systems lost to mass market garbage, often by focusing on the garbage of the mass market garbage and conveniently avoiding analysis as to why the high-end systems failed to be competitive in the mass market. The wonders of Lisp is one of the major topics of this genre.
Most famously, Lisp was tarred by its association with AI during the concomitant collapse of AI that led to the AI Winter, though it's less often explored why AI failed. In short, it didn't work. But more than just AI at the time, people also felt that the future of programming in general was based around the concept of something like rules-based systems: you have a set of rules that correspond to all of the necessary business logic, and a framework of program logic that's making those rules actually take effect--you can see how a language like Lisp works very well in such a world. But programming doesn't have a clean separation between business logic and program logic in practice, and attempts to make that separation cleaner have largely failed.
So Lisp has a strong competitive advantage in a feature that hasn't proven to actually be compelling (separating business from program logic). Outside of that feature, most of its other features are rather less unique and have seeped into most mainstream programming languages. Functional paradigms, REPLs, smart debuggers, garbage collection--these are all pretty widespread nowadays. Where Lisp had good ideas, they've been extensively borrowed. Where those ideas haven't pulled their weight... they've languished, and most of the people wistfully wishing for a return to Lisp haven't acknowledged the limitations of these features.
So true. Lisp was designed to give individual programmers tremendous power. That means Lisp programmers sometimes prefer to reinvent solutions to problems rather than learn to use some existing solution. This tendency can be an absolute nightmare on a software engineering team.
Not that using Lisp on a software engineering team cannot be done, but it requires very strong discipline and leadership. The absence of strong discipline and leadership on a Lisp SWE team can lead to enormous amounts of wheel reinvention and technical debt.
Obviously discipline and leadership are necessary for any SWE team but languages like C don't encourage reinvention nearly as much as Lisp does, and Lisp programmers in general tend to be very resistant to the imposed discipline that SWE requires. (I say this as a diehard Lisp programmer, so I'm talking about myself.)
I've found that there's a world of difference between my tendency to wheel-reinvent when I'm messing around on my own vs. my tendency in an industrial setting. When I'm messing around on my own, Lisp gives me so much more reach it's incredible, and yeah, I kinda do want to reinvent the application server, or the TCP/IP stack, or something sometimes.
But when I'm getting paid and there are milestones and deadlines? Fuck it, I'll just use what's available to build what's needed. The difference is that in a Lisp codebase, some really smart people have come before and built some really cool abstractions. Like a test framework that makes automated testing so much simpler it's ridiculous. Like, two lines of code and you have a test for a new feature. You get access to tools and techniques that let you close the gap between "ticket lands in your lap" and "done" much faster than you would in Java.
Until you're two weeks into what you expected to be a 2-hour project, and you realize you can't meta-dot your tests, and you made too many assumptions about which equality functions to support, so you let the user just specify a lambda for the relation function and poof! Now you can't reason about your tests nearly as well.
And oh yeah I used a macro-centric approach when I should have used CLOS so again, I can't easily grovel my tests. Damn.
Designing a test framework in Lisp looks easy but doing it well is surprisingly hard. So using one of the better ones in Quicklisp is almost always a win.
Just curious: Which one is your favorite?
Elements so as not to judge in the void: https://github.com/azzamsa/awesome-lisp-companies/ (some are hiring). That's just the companies we know of, nothing official.
Lisp’s most successful commercial period was during the 1980s during an AI boom. Companies such as Symbolics, Texas Instruments, and Xerox sold workstations known as Lisp machines that were architecturally designed for running Lisp programs. They had corporate and institutional customers who were interested in AI applications developed under Lisp, including the United States government. Lisp was also standardized during this time period (Common Lisp). Lisp even caught the attention of Apple; Apple had some interesting Lisp and Lisp-related projects during its “interregnum” period when Steve Jobs was absent, most notably Macintosh Common Lisp, the original Newton OS (before C++ advocates won approval from CEO John Sculley), Dylan, and SK8.
However, the AI Winter of the late 1980s and early 1990s, combined with advances in the Unix workstation market where cheaper Sun and DEC machines were outperforming expensive Lisp machines at Lisp programs, severely hurt Lisp in the marketplace. AI would boom again in the 2010s, but this current AI boom is based not on the symbolic AI that Lisp excelled at, but on machine learning, which relies on numerical computing libraries that have C, C++, and even Fortran implementations and Python wrappers. Apple in the 1990s could have been a leading advocate of Lisp for desktop computing, but Apple was an unfocused beacon of creativity; many interesting projects, but no solid execution for replacing the classic Mac OS with an OS that could fully meet the demands for 1990s and 2000s computing. It took Apple to purchase NeXT to make this happen, and under Steve Jobs’ leadership Apple was a focused beacon of creativity with sharp execution. Of course, we ended up with Smalltalk-inspired Objective-C, not Common Lisp or Dylan, as Apple’s official language before Swift was released after the end of Jobs’ second reign.
Some other factors: 1. Lisp was truly unique in the 60s, 70s, and 80s, but it required expensive hardware to run. It would be hard to conceive of a Lisp running well on a 6502 or an 8086. Something like my NeXT Cube with a 68040 would do a much better job, but those machines cost roughly $6500 in 1989 dollars, out of reach for many developers.
2. By the time hardware capable of running Lisp acceptably became affordable, other languages started offering certain features that used to be unique to Lisp. Wanted garbage collection? In 1995 Java became available. Want object-oriented programming? You didn’t even have to wait until 1995 for that due to C++. Want anonymous functions and map()? Python’s popularity took off in the 2000s. Yes, Lisp still offers features that are not easily found in other languages (such as extensive metaprogramming), but the gap between Lisp and competing popular languages has been narrowing with each successive decade.
You'd be surprised. https://retrocomputing.stackexchange.com/questions/11192/wha... Of course something like FORTH was perhaps more suited to these smaller machines, but LISP implementations were around. Many users of 6502-based microcomputers were familiar with LOGO, which is just a LISP with different syntax.
Alas, I think MS saw the failure of Clojure within the Java ecosystem and foresaw the same if they made a similar effort.
At work I write a lot of TypeScript. At home I write a lot of Lisp. The Lisp is absolutely more ergonomic and extensible.
The ML crowd received F# and that’s practically the only reason anyone still uses anything ML-esque. I would like the same for Lisp. I know Rich Hickey tried to make Clojure for .NET first and failed, though, so I’m not holding my breath.
Atom/Pulsar, or Portacle (portable Emacs with SBCL + Quicklisp), or plain-common-lisp (2 clicks install for Windows), ALIVE for VSCode is getting there, also the newer Intellij plugin. And vim. LispWorks. Sublime, Lem, Jupyter notebooks, and more.
https://lispcookbook.github.io/cl-cookbook/editor-support.ht...
It's a tree. It's just a few operations to transform it as a structure.