He also mentioned Doctors Without Borders, who weren't seeing the outcomes they expected when it came to saving lives. One surprising reason? The medical teams often didn't speak the same language or even know each other's names.
The solution was simple: a pre-surgery checklist. Before any procedure, team members would state their name and role. This small ritual dramatically improved their success rates, not through better technique, but through better communication.
Aviation, Doctors Without Borders, and SQLite have good checklists. Checklists are simple, so it's easy to think "oh, I could do that too". But you never hear about the presumably countless companies and organizations whose checklists do nothing but waste people's time.
I wish there were more discussion of what makes a checklist good or bad. I suspect it's a bit like mathematics, where the good formulas look very simple but are very hard to discover without prior knowledge.
I've also seen enormously unhelpful checklists made by morons.
IMO it's paramount for whoever is making the checklist to have familiarity with the task at hand (both how to do it properly and which steps people tend to miss or get wrong), investment (would you find this tool indispensable if you were the one executing the task?), and a sense of pragmatism and conciseness.
The ability to recognize which things will be obvious or flow naturally from A to B helps eliminate redundant fluff. E.g., I train volunteer firefighters, and most canonical procedures for calling a Mayday include a step that is basically "tell the person on the other end what's wrong". You don't need a checklist item for that. When something goes seriously sideways and you need help, you will be very inclined to convey what the matter is.
Most if not all of the bad checklists I have encountered are bad for the same reasons: they were not tested, or they were poorly written, and most of the time both.
"Not tested" in the sense that the checklist was written by somebody who doesn't actually know how to do the whole project. Contrast that with professionals like doctors and pilots, who are well trained and understand that the checklist is a reminder: the rationale behind each item was taught to them, and even then they will question anything they don't understand, while most others in their field could immediately give a detailed answer.
Another example would be HR writing an onboarding checklist. 99% of the time, the checklists I have seen are intended to make HR's life easier, not the candidate's or applicant's.
A checklist is also a clear and distilled form of writing. As the saying goes, "I didn't have time to write you a short letter, so I wrote you a long one." Writing short points with clarity takes a long time, and it's not a skill set everyone possesses. Nor do they have the time to do it when it is not part of their job or KPI.
I'd argue this comes back to "written by people who do not have to follow them on a regular basis".
In case it helps anyone else, I wrote a small tool that helps maintain and execute them:
https://github.com/amboar/checklists/
It's just a shell script wrapped around $EDITOR and git. The intent is to write checklists in the style of GitHub-flavoured markdown. It has some tricks, such as envsubst(1) interpolation and support for embedding little scripts whose output is captured alongside the checklist execution.
Here's an example checklist that's fairly well-worn (though it's somewhat dated):
https://gist.githubusercontent.com/amboar/f85449aad09ba22219...
Where checklist entries are commands, I just copy/paste them into a shell session. Usually this is in a tmux split, with the checklist on one side and a shell on the other.
Could more of it be automated? Perhaps, though when some of the steps fail for various reasons, I find it's easier to recover or repair the situation if I have been invoking them myself sequentially. The embedded script support enables progressive automation where stability of the results is demonstrated over time.
I've always daydreamed of an IT organization that combined those with the decision-making procedures and leadership practices of modern armies, such as the US Army's.
I've re-read FM 22-100 multiple times; I find it strikingly modern and inspiring:
https://armyoe.com/wp-content/uploads/2018/03/1990-fm-22-100...
While I do understand that business leadership cannot compare to the high standards demanded by far more important stakes, I think there are many lessons to learn there too.
In the end, for most scenarios, "break fast, fix fast" is likely a more practical and cost-effective approach.
I have tests and CI and all that, sure. But I also have a deployment checklist in a markdown document that I walk through. I don’t preserve results or create a paper trail. I just walk the steps one by one. It’s just so little work that I really don’t get why I cannot convince anyone else to try.
That said, and without prejudice to SQLite's use of checklists, which I haven't considered deeply: while the conditions that make checklists the best choice are clearly present in aviation and surgery, processes around software tend to lend themselves to efficient and reliable automation. Non-transitory reliance on checklists is very often a process smell that, while not necessarily wrong, merits skepticism and inquiry.
It would be amazing if someone had a link to a page with the MSF story, as that is a great reference to have! My google-fu hasn’t helped me in this case.
Possibly popularised by Atul Gawande's "The Checklist Manifesto".
Meta-comment: LLMs continue to impress me with their ability to unearth information from imprecise inputs/queries.
The real answer is of a more philosophical nature: if you manually have to check A, B, C... Z, then you will have a better understanding of the state of the system you work with. If something goes wrong, at least the bits you checked can be disregarded, freeing you to check other factors. What if your system correctly reports a fault, yet your automatic checklist doesn't catch it?
Also, this manual checklist checks the operator.
You should be automating everything you can, but much care should be put into figuring out if you can actually automate a particular thing.
Automate away the process to deploy a new version of hn, what's the worst that can happen?
But don't automate the pre-flight checklist: if something goes wrong while the plane is in the air, people are going to die.
I think a less verbose version of the above is that a human can detect a fault in a sensor, while a sensor can't detect it is faulty itself.
How SQLite Is Tested - https://news.ycombinator.com/item?id=38963383 - Jan 2024 (1 comment)
How SQLite Is Tested - https://news.ycombinator.com/item?id=29460240 - Dec 2021 (47 comments)
How SQLite Is Tested - https://news.ycombinator.com/item?id=11936435 - June 2016 (57 comments)
How SQLite Is Tested - https://news.ycombinator.com/item?id=9095836 - Feb 2015 (17 comments)
How SQLite is tested - https://news.ycombinator.com/item?id=6815321 - Nov 2013 (37 comments)
How SQLite is tested - https://news.ycombinator.com/item?id=4799878 - Nov 2012 (6 comments)
How SQLite is tested - https://news.ycombinator.com/item?id=4616548 - Oct 2012 (40 comments)
How SQLite Is Tested - https://news.ycombinator.com/item?id=633151 - May 2009 (28 comments)
(Reposts are fine after a year or so; links to past threads are just to satisfy extra-curious readers)
Over a career of working at it, you get dramatically better at achieving higher levels of quality even in early passes, so the same added effort provides increasing rewards as you gain experience.
Nobody ever complains about the person who’s leaving everything they touch a little cleaner than how they found it.
In most companies, that’s not how it plays out. Once something works, you’re immediately moved on to the next task. If you’ve had the time and space to refine, polish, and carefully craft your code, you’ve been fortunate.
If you spend your career acquiescing to every request to “just ship it” then, yes, slowing down a second to do a quality pass will seem impossible. But you really can just do it.
That's not how it works in most big companies. You can't take arbitrarily long to finish a project. Before the project is greenlit you have to give an estimate of how long it will take. If your estimate is too big or seems unreasonable, the project dies then and there (or is given to someone else). Once the project starts you're held to the estimate, and if you're taking noticeably longer than your estimate you'd better have a good explanation.
I have never seen a software development shop where estimates were treated as anything other than loose, best guesses. Very infrequently are there actually ever genuinely immutable, hard deadlines. If you are working somewhere where that's repeatedly the case—and those deadlines are regularly unrealistically tight—failure is virtually inevitable no matter what you do. So sure, fine, if you're on a death march my suggestions won't work. But in that kind of environment nothing will.
This should be true, but it's not in my experience. Even small, clear improvements are rejected as off-mission or shunted to a backlog to be forgotten. Like, "cool, but let's hold off on merging this until we can be certain it's safe", or in other words "this is more work for me and I'd really rather not".
I have worked across a wide gamut of roles (full-stack eng, infosec, deploy infra, devops, infra eng, sysadmin), companies (big and small, startups and huge multibillion-dollar players), and industries (finance, datacenters, security products, gaming, logistics, manufacturing, AI) over a thirty-year career, and I have never felt the level of helplessness that people seem to be claiming. Some places have been easier, some have been harder, but I have never once found it to be as difficult or impossible as everyone laments is the case.
If you find yourself repeatedly working at places where the only option is to crank out garbage at breakneck pace, I don’t know what to tell you. If you stipulate as an axiom that it's impossible to write quality software at ${JOB}, then you're right, by definition there's nothing to be done. I just don't find that a particularly helpful mindset.
It's a silly product, but as far as being over engineered, it looks like it's about what I'd expect for those requirements.
If so, you may have noticed the jack you used didn't have several huge CNC machined aluminium parts, a seven-stage all-metal geartrain, or a 330v power supply and it probably didn't cost you $700. Probably it cost more like $40.
And sure, a consumer kitchen product needs to look presentable and you don't want trapping points for curious little fingers. But even given that, you could deliver a product that worked just as well for just as long at a far lower BOM cost.
You can achieve the same goal (getting juice from diced fruit without cleanup) much easier with different requirements. The post mentions that.
I have, however, seen dozens of projects where productivity grinds to a halt because even minor changes require ever-increasing effort, thanks to a culture of repeatedly shipping the first thing that vaguely seems to work.
The entire zeitgeist of software development these days is “move fast and break things”.
With that said, I find it strange how the official website seems to be making its way through the HN front page piecemeal.
EDIT: Haskell was early 2010s Zig, and Zig is in the traditional ~quarter-long downcycle, after the last buzzkill review post re: all the basic stuff it's missing, ex. a working language server protocol implementation. I predict it'll be back in February. I need to make a list of this sort of link, just for fun.
> The dbsqlfuzz engine is a proprietary fuzz tester.
It's interesting that an open-source (actually public-domain) piece of software uses some proprietary tests. It never occurred to me that this was a possibility, though in retrospect it's obviously possible as long as the tests are not part of the release.
Could this be an alternative business model for "almost-open-source" projects? Similar to open-core, but in this case the project would be easy to copy (open features) and hard to modify (closed tests).
Yes, it's viable. I do it for my company's projects, in addition to dual-licensing under the GPL. See "The unit tests and Unicode data generators are not public. Access to them is granted exclusively to commercial licensees." [1].
[1] https://github.com/railgunlabs/unicorn?tab=readme-ov-file#li...
I was thinking about SQLite's test landscape as described here, in relation to simonw's recent writing about porting/recreating the justHTML engine from Python to JS via Codex, nearly "automatically" with just a prompt and light steering.
Correctness testing is important, but given the way SQLite is used, performance drops in specific code paths or for specific types of queries could be really bad for apps that use it in critical paths.
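One cheap guard an application can add on its own side (this isn't something the SQLite project prescribes, just a sketch with a hypothetical table and index): assert on EXPLAIN QUERY PLAN output for hot queries in your test suite, so a plan regression such as an index silently going unused shows up in CI.

```sql
-- Hypothetical schema; the point is pinning down the plan of a hot query.
CREATE TABLE t(a INTEGER PRIMARY KEY, b TEXT);
CREATE INDEX t_b ON t(b);

-- In a test, capture this output and compare it against the expected plan.
-- Exact wording varies between SQLite versions, e.g.:
--   SEARCH t USING COVERING INDEX t_b (b=?)
EXPLAIN QUERY PLAN SELECT a FROM t WHERE b = 'x';
```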
SQLite supports a set of JSON functions that let you query and index JSON columns directly, which looks very convenient—but be careful:
1. `json('{"a/b": 1}') != json('{"a\/b": 1}')`
Although the two objects are identical in terms of JSON semantics, SQLite treats them as different.
2. `json_extract('{"a\/b": 1}', '$.a/b') is null`, `json_extract('{"\u0031":1}', '$.1') is null`, `json_extract('{"\u6211":1}', '$.我') is null`
This issue only exists in older versions of SQLite; the latest versions have fixed it.
In many cases you can't control how your JSON library escapes characters. For example, `/` doesn't need to be escaped, but some libraries will escape it as `\/`. So this is a rather nasty pitfall: you can end up failing to match keys during extraction for no apparent reason.
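A minimal probe for the pitfalls above, assuming the expressions behave as described (results depend on your SQLite version, so run it against your own build):

```sql
-- Note: SQLite string literals don't treat backslash specially, so the
-- escape sequences below reach the JSON functions exactly as written.
SELECT sqlite_version();                          -- which build is being probed
SELECT json('{"a/b": 1}') = json('{"a\/b": 1}');  -- 0: minified texts differ
SELECT json_extract('{"a\/b": 1}', '$.a/b');      -- NULL on affected versions
SELECT json_extract('{"\u0031": 1}', '$.1');      -- NULL on affected versions
```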
Truly one of the best software products! It is used on every single device, and it is just pure rock-solid.
Something better than CVS was needed. (I'm not being critical of CVS. I had to use the VCSes that came before, and CVS was amazing compared to them.) Monotone gave me the idea of doing a distributed VCS and storing content in SQLite, but Monotone didn't support sync over HTTP, which I definitely wanted. Git had just appeared, and was really bad back in those early years. (It still isn't great, IMO, though people who have never used anything other than Git are quick to dispute that claim.) Mercurial was... Mercurial. So I decided to write my own DVCS.
This turned out to be a good thing, though not in the way I expected. Since Fossil is built on top of SQLite, Fossil became a test platform for SQLite. Furthermore, when I work on Fossil, I see SQLite from the point of view of an application developer using SQLite, rather than in my usual role as a developer of SQLite. That change in perspective has helped me make SQLite better. Being the primary developer of the DVCS for SQLite, in addition to SQLite itself, also gives me the freedom to adapt the DVCS to the specific needs of the SQLite project, which I have done on many occasions. People make fun of me for writing my own DVCS for SQLite, but on balance it was a good move.
Note that Fossil is like Git in that it stores check-ins in a directed acyclic graph (DAG), though the details of each node are different. The key difference is that Fossil stores the DAG in a relational database (SQLite) whereas Git uses a custom "packfile" key/value store. Since the content is in a relational database, it is really easy to add features like tickets, and wiki, and a forum, and chat - you've got an RDBMS sitting there, so why not use it? Even without those bonus features, you also have the benefit of being able to query the DAG using SQL to get useful information that is difficult to obtain from Git. "Detached heads" are not possible in Fossil, for example. Tags are not limited by filesystem filename restrictions. You can tag multiple check-ins with the same tag (ex: all releases are tagged "release".) If you reference an older check-in in the check-in comment of a newer check-in, then go back and look at the older check-in (perhaps you bisected there), it will give a forward reference to the newer one. And so forth.
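To illustrate what "query the DAG using SQL" can look like, here is a toy sketch; the table and column names are hypothetical and only loosely echo Fossil's real schema:

```sql
-- Check-ins as rows, parent links as edges (hypothetical schema).
CREATE TABLE checkin(id INTEGER PRIMARY KEY, comment TEXT);
CREATE TABLE plink(pid INTEGER, cid INTEGER);  -- parent id -> child id

-- All ancestors of check-in 42, via a recursive common table expression.
WITH RECURSIVE ancestor(id) AS (
  SELECT pid FROM plink WHERE cid = 42
  UNION
  SELECT plink.pid FROM plink JOIN ancestor ON plink.cid = ancestor.id
)
SELECT checkin.id, checkin.comment
  FROM checkin JOIN ancestor USING (id);
```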
About Fossil, I really liked how everything is integrated into the VCS.
My friends also make fun of me for having tools that only I use. Somehow, understanding a tool down to the last little detail is satisfying in itself. We live in an era of software bloat that does not make much sense.
Anyway, thanks for SQLite. I use it for teaching SQL to students and for my custom small-scale monitoring system.
https://sqlite.org/althttpd/doc/trunk/althttpd.md
Just like Fossil vs Git, or SQLite vs $SomeRealSQLServer, I wish Althttpd would someday become a no-bullshit, self-contained replacement for Nginx/Apache/whatever bloated HTTP servers. It has already proven it works by serving Fossil/SQLite, but its configuration and features for serving an actual web site are not yet "real production quality", or at least that is how I feel.
Overall, what an amazing legacy this set of software has been to the world.
The homepage is available at: https://fossil-scm.org/home/doc/trunk/www/index.wiki
Not really. It's one of the early _distributed_ version control systems, released a little after git but before git gained widespread acceptance.
It has a built-in (optional) web UI, which is cool, and uses SQLite to store its state/history.
What a superb piece of software SQLite is.
Install and forget.