“A variable should mean one thing, and one thing only. It should not mean one thing in one circumstance, and carry a different value from a different domain some other time. It should not mean two things at once. It must not be both a floor polish and a dessert topping. It should mean One Thing, and should mean it all of the time.”
- Not realizing it's a very concrete theorem applicable in a very narrow theoretical situation, and that its value lies not in the statement itself but in the way of thinking that goes into the proof.
- Stating it as "pick any two". You cannot pick CA. Under the conditions of the CAP theorem it is immediately obvious that CA implies you have exactly one node. And guess what, then you have P too, because there's no way to partition a single node.
A much more usable statement (which is not a theorem but a rule of thumb) is: there is often a tradeoff between consistency and availability.
Structure code so that, in the ideal case, removing a feature is as simple as deleting a directory or file.
For example, each comment on HN has a line on top with buttons like "parent", "prev", "next", "flag", "favorite", etc., depending on context. Suppose I one day want to remove the "flag" functionality. Should each button be its own file? What about the "comment header" template file that references each of those button templates?
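One shape that makes this work (a sketch, not HN's actual templates; every name here is made up, in Python for illustration): each button registers itself from its own module, and the header only ever walks the registry. Deleting flag.py then deletes flagging.

    from typing import Callable, Optional

    # In a real project each section below would be its own file; they are
    # inlined here so the sketch runs as a single script.

    # --- buttons/registry.py ---------------------------------------------
    # name -> renderer; a renderer returns the button's HTML, or None to
    # hide the button in this context.
    BUTTONS: dict[str, Callable[[dict], Optional[str]]] = {}

    def button(name: str):
        """Decorator: register a button renderer under a stable name."""
        def register(render):
            BUTTONS[name] = render
            return render
        return register

    # --- buttons/flag.py (delete this file to remove flagging) ------------
    @button("flag")
    def render_flag(ctx: dict) -> Optional[str]:
        if not ctx.get("user_can_flag"):
            return None  # context decides visibility; the header never asks why
        return f'<a href="/flag?id={ctx["comment_id"]}">flag</a>'

    # --- buttons/parent.py -------------------------------------------------
    @button("parent")
    def render_parent(ctx: dict) -> Optional[str]:
        parent = ctx.get("parent_id")
        return f'<a href="/item?id={parent}">parent</a>' if parent else None

    # --- comment_header.py (knows the registry, no button in particular) ---
    def render_header(ctx: dict) -> str:
        rendered = (render(ctx) for render in BUTTONS.values())
        return " | ".join(html for html in rendered if html)

    print(render_header({"comment_id": 42, "parent_id": 41, "user_can_flag": True}))

The point is the direction of the references: each button knows about the registry, while the header knows about no button in particular, so removing one leaves no dangling reference behind.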
The resolution I've landed on: be strict in what you accept at boundaries you control (internal APIs, config parsing) and liberal only at external boundaries where you can't enforce client upgrades. But that heuristic requires knowing which category you're in, which is often the hard part.
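A minimal sketch of the two postures, with hypothetical keys and fields, assuming plain dicts at both boundaries:

    KNOWN_CONFIG_KEYS = {"host", "port", "timeout_s"}

    def parse_internal_config(raw: dict) -> dict:
        # Internal boundary: we control every producer, so an unknown key
        # is a bug (a typo, a stale deploy) and should fail loudly, now.
        unknown = set(raw) - KNOWN_CONFIG_KEYS
        if unknown:
            raise ValueError(f"unknown config keys: {sorted(unknown)}")
        return dict(raw)

    def parse_external_event(raw: dict) -> dict:
        # External boundary: clients we can't force to upgrade may send
        # fields we've never heard of; keep what we understand, drop the rest.
        return {k: raw[k] for k in ("event", "user_id") if k in raw}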
I've seen CompSci guys especially (I'm from an EEE background; we have our own problems, but this ain't one of them) launch conceptual complexity into the stratosphere just so that they could avoid writing two separate functions that do similar things.
Take the 5 Rings approach.
The purpose of the blade is to cut down your opponent.
The purpose of software is to provide value to the customer.
It's the only thing that matters.
You can also philosophize about why people with blades needed to cut down their opponents, and why we have to provide value to the customer, but that's beyond the scope of this comment.
The goal ought to be to aim for a local minimum of all of these qualities.
Some people just want to toss DRY away entirely, though, or be uselessly vague about when to apply it ("use it when it makes sense"), and that's not really much better than being a DRY fundamentalist.
A common "failure" of DRY is coupling together two things that only happened to bear similarity while they were both new, and then being unable to pick them apart properly later.
There should often be two points of truth: collapsing them into one would cost more in coupling than the deduplication saves.
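A toy illustration of that failure mode (the report functions are invented for the example):

    # The trap: two reports looked alike when both were new, so they were
    # merged; every divergence since has grown another flag.
    def render_report(rows, *, for_email=False, legacy_footer=False):
        lines = [f"{name}: {value}" for name, value in rows]
        if for_email:          # only the email caller ever sets this
            lines.insert(0, "Please find your report below.")
        if legacy_footer:      # only the dashboard caller ever sets this
            lines.append("-- generated by reportd v1 --")
        return "\n".join(lines)

    # Often cheaper: two points of truth, free to evolve apart. The shared
    # lines can be re-extracted later if they turn out to stay identical.
    def render_email_report(rows):
        lines = ["Please find your report below."]
        lines += [f"{name}: {value}" for name, value in rows]
        return "\n".join(lines)

    def render_dashboard_report(rows):
        lines = [f"{name}: {value}" for name, value in rows]
        lines.append("-- generated by reportd v1 --")
        return "\n".join(lines)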
Especially things like “every system grows more complex over time” — you can see it in almost any project after a few iterations.
I think the real challenge isn’t knowing these laws, but designing systems that remain usable despite them.
> Copy-paste is free; abstractions are expensive.
> 11. Abstractions don’t remove complexity. They move it to the day you’re on call.
"In analyzing complexity, fast iteration almost always produces better results than in-depth analysis."
Boyd invented the OODA loop.
9. Most software will get at most one major rewrite in its lifetime.
    complexity(system) = sum(
        complexity(component) * time_spent_working_in(component)
        for component in system
    )
The rule suggests that encapsulating complexity (e.g., in stable libraries that you never have to revisit) is equivalent to eliminating that complexity.

> complexity is not a function of time spent working on something.
But the complexity you observe is a function of your exposure to that complexity.
The notion of complexity exists to quantify the degree of struggle required to achieve some end. Ousterhout’s observation is that if you can move complexity into components far away from where you must do your work to achieve your ends, you no longer need to struggle with that complexity, and thus it effectively is not there anymore.
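Plugging toy numbers into the formula above makes the point (the scores and time fractions are invented):

    # Toy numbers, purely illustrative: an intrinsic complexity score for
    # each component and the fraction of working time spent inside it.
    components = {
        "business_logic": (3.0, 0.85),
        "crypto_library": (9.0, 0.00),  # hairy, but encapsulated: never touched
        "build_scripts":  (5.0, 0.15),
    }

    system_complexity = sum(c * t for c, t in components.values())
    print(system_complexity)  # -> 3.3 (the crypto library contributes nothing)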
Or develop the skill to make it correct, fast, and pretty in one or two attempts.
- Write a correct, pretty implementation
- Beat Claude Code with a stick for 20 minutes until it generates a fragile, unmaintainable mess that still happens to produce the same result, but in 300ms rather than 2500ms. (In this step, explicitly prompting it to test rather than just philosophising gets you really far; see the sketch after this list.)
- Pull across the concepts and timesaves from Claude's mess into the pretty code.
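The testing step is what makes step 3 safe: show the two implementations agree before porting anything across, then measure. A sketch of such a harness (both implementations here are stand-ins; the harness is the point):

    import random
    import timeit

    def pretty_impl(xs):   # stand-in for the readable version
        return sorted(set(xs))

    def fast_impl(xs):     # stand-in for the generated "mess"
        seen, out = set(), []
        for x in xs:
            if x not in seen:
                seen.add(x)
                out.append(x)
        out.sort()
        return out

    # Equivalence first: never port a speedup you haven't shown correct.
    for _ in range(1000):
        xs = [random.randrange(100) for _ in range(random.randrange(50))]
        assert pretty_impl(xs) == fast_impl(xs)

    # Then the timing claim, on identical input.
    xs = [random.randrange(1000) for _ in range(10_000)]
    print("pretty:", timeit.timeit(lambda: pretty_impl(xs), number=100))
    print("fast:  ", timeit.timeit(lambda: fast_impl(xs), number=100))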
Seriously, these new models are actually really good at reasoning about performance and knowing alternative solutions or libraries that you might have only just discovered yourself.
But yes, the scope and breadth of their knowledge goes far beyond what a human brain can handle. How many relevant facts can you hold in your mind when solving a problem? 5? 12? An LLM can take thousands of relevant facts into account at the same time, and that's their superhuman ability.
A couple are well-described/covered in books, e.g., Tesler's Law (Conservation of Complexity) is at the core of _A Philosophy of Software Design_ by John Ousterhout
https://www.goodreads.com/en/book/show/39996759-a-philosophy...
(and of course Brooks's Law is from _The Mythical Man-Month_)
Curious if folks have recommendations for less well-known books that cover these, other than the _Laws of Software Engineering_ book that the site is an advertisement for...
It’s not a great list. The good old c2.com has many more, better ones.
> Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
Anyway, the list seems like something AI scraped and has a strong bias towards "gotcha" comments from the likes of reddit.
https://lawsofsoftwareengineering.com/laws/premature-optimiz...
It leaves out this part from Knuth:
>The improvement in speed from Example 2 to Example 2a is only about 12%, and many people would pronounce that insignificant. The conventional wisdom shared by many of today’s software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers, who can’t debug or maintain their “optimized” programs. In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn’t bother making such optimizations on a one-shot job, but when it’s a question of preparing quality programs, I don’t want to restrict myself to tools that deny me such efficiencies.
Knuth thought an easy 12% was worth it, but most people who quote him would scoff at such efforts.
Moreover:
>Knuth’s Optimization Principle captures a fundamental trade-off in software engineering: performance improvements often increase complexity. Applying that trade-off before understanding where performance actually matters leads to unreadable systems.
I suppose there is a fundamental tradeoff somewhere, but that doesn't mean you're actually at the Pareto frontier, or anywhere close to it. In many cases, simpler code is faster, and fast code makes for simpler systems.
For example, you might write a slow program, so you buy a bunch more machines and scale horizontally. Now you have distributed systems problems, cache problems, lots more orchestration complexity. If you'd written it to be fast to begin with, you could have done it all on one box and had a much simpler architecture.
Most times I hear people say the "premature optimization" quote, it's just a thought-terminating cliche.
Look, I understand the intent you have, and I also understand the frustration at the lack of care with which many companies have acted with regards to personal data. I get it, I'm also frustrated.
But (it's a big but)...
Your suggestion is that we hold people legally responsible and culpable for losing a confrontation against another motivated, capable, and malicious party.
That's... a seriously, seriously different standard than holding someone responsible for something like not following best practices or good policy.
It's the equivalent of killing your general when he loses a battle.
And the problem is that sometimes even good generals lose battles, not because they weren't making an honest effort to win, or being careless, but because they were simply outmatched.
So to be really, really blunt - your proposal basically says that any software company should be legally responsible for not being able to match the resources of a nation-state that might want to compromise their data. That's not good policy, period.