We don't write everything in lambda calculus or SKI calculus or something like that; even though those are made up of very simple parts, complexity would emerge anyway. They are not the right choice for most tasks. How do you make it readable? Can you write something that is very clear and performant at the same time, without programming yourself into a corner? That's where you need to get really clever.
For example, suppose we have an algorithm that requires a key-value store with the typical semantics. For the purposes of our algorithm we could simulate that store using an array and straightforward search and insert routines that just loop through the array without trying to be smart. Then we could turn our attention to the details of that key-value store and use a more efficient approach, this time without thinking about our original algorithm at all, or perhaps with a clear understanding of its access pattern.
In both cases the task at hand won't be more complex than necessary. But if we try to do both at the same time, it will be way more complex.
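As a minimal sketch of that first step (the class and method names here are mine, purely for illustration), the stand-in store really can be as dumb as a list of pairs:

```python
# A deliberately naive key-value store: a list of (key, value) pairs with
# linear search and insert. It is good enough to get the surrounding
# algorithm working, and can later be swapped for something smarter
# without touching the callers.
class NaiveKV:
    def __init__(self):
        self._items = []  # list of (key, value) tuples

    def get(self, key, default=None):
        for k, v in self._items:
            if k == key:
                return v
        return default

    def put(self, key, value):
        for i, (k, _) in enumerate(self._items):
            if k == key:
                self._items[i] = (key, value)  # overwrite an existing key
                return
        self._items.append((key, value))       # or append a new pair
```

The second step then only has to preserve the `get`/`put` semantics; whether the store becomes a hash table or a B-tree is invisible to the original algorithm.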
Here the separation is clear, but in real programming it is not, and discovering these lines of separation is basically the essence of building a system. I think Brad Cox was occupied with this in his Software-IC concept, and I kind of share his view that it has yet to happen: the things we build are not as composable as they should be, or as composable as components are in other industries.
But a text shaping library does not need a UTF-8 decoder. The product it is used in will almost certainly have one already; or, if it works in UTF-16 or, like Python, uses a three-way internal encoding, it may not need one at all and would have to add a UTF-8 encoding step just to communicate with that library. A simpler design would be to remove the UTF-8 decoder and make the library accept Unicode characters as integers. If we need UTF-8, it is trivial to decode a string and feed the resulting code points into the shaper; if we don't, it is equally trivial to use the library with any other encoding.
(I guess I ended up with a slightly different example than I intended.) Anyway, removing the UTF-8 decoder here would result in a simpler and more universal design, although (this is an unexpected development) it may superficially look more complex to the many people who have a "standard" UTF-8 string and just need to get the job done.
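A rough sketch of what that boundary could look like, assuming a made-up `shape_codepoints` entry point standing in for a real shaper's API:

```python
def shape_codepoints(codepoints):
    """Hypothetical shaper entry point: takes Unicode code points as plain
    integers and never needs to know how the caller stored its text."""
    # Real shaping would map code points to positioned glyphs; here we just
    # echo them back to keep the sketch self-contained.
    return list(codepoints)

# A caller holding UTF-8 bytes decodes at the boundary, in one line...
utf8_bytes = "héllo".encode("utf-8")
glyphs = shape_codepoints(ord(c) for c in utf8_bytes.decode("utf-8"))

# ...while a caller that already has text in some other form passes code
# points directly, with no forced round-trip through UTF-8.
glyphs = shape_codepoints(ord(c) for c in "héllo")
```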
In the hardware world, it's fine to use devboards and Arduinos to prototype things, but then you're supposed to stop being a newbie, stop using breadboards, and actually design circuits using the relevant ICs directly, with a minimal amount of glue in between. Unfortunately, in software, manufacturing costs are too cheap to meter, so we're fine with using bench-top prototypes in production, because we're not the ones paying for the waste anyway; our users are.
(Our users, and hardware developers too, as they get the blame for "low battery life" of products running garbage software.)
Sometimes, 'clever' code is simply code that refuses to ignore the underlying reality of the hardware. The danger isn't cleverness itself, but unnecessary cleverness applied to problems where the bottleneck is human understanding rather than machine execution.
A good design often amounts to:

- One tiny piece of extremely clever abstraction.
- A huge number of simple pieces that would be more complex without that tiny piece.

In other words, the clever abstraction can be justified if it enables lots of simplicity. It has to do so right now, though, not at some point in the future.
If your kernel is complicated but writing drivers is simple, people won't even notice the abstractions. They will think of the system as "simple", without realizing there is some clever stuff making that "simple" possible.
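To make the shape of that trade-off concrete, here is a toy sketch in Python (pure illustration, not any real kernel's API): the registry and dispatch layer is the one "clever" piece, and every driver stays a few boring lines.

```python
# The "clever" part: one small registry plus dispatch. Drivers never see it.
_DRIVERS = {}

def register(device_id):
    def wrap(cls):
        _DRIVERS[device_id] = cls
        return cls
    return wrap

# The "simple" parts: each driver is a trivial class with two methods.
@register("uart0")
class UartDriver:
    def read(self):
        return b"uart data"

    def write(self, data):
        pass

@register("spi1")
class SpiDriver:
    def read(self):
        return b"spi data"

    def write(self, data):
        pass

def open_device(device_id):
    # All routing and bookkeeping lives here, once, instead of in every driver.
    return _DRIVERS[device_id]()
```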
Opaque code is code that requires you to form an unnecessarily large, detailed mental model of it in order to answer a particular question you may have about it.
People rarely read code in its entirety, like a novel. There is almost always a specific question they want to answer. It might be "how will it behave in this use case?", "how will this change affect its behaviour?" or "what change should I make to achieve this new behaviour?". Alternatively, it might be something more high level, but still specific, like "how does this fit together?" (i.e. a desire to understand the overall organisational principles of the code rather than a specific detail).
Opaque code typically:
* Requires you to read and understand large volumes of what should be irrelevant code in order to answer your question, often across multiple codebases.
* Requires you to do difficult detective work in order to identify what code needs to be read and understood to answer the question with confidence.
* Only provides an answer to your question with caveats/assumptions about human behaviour, such as "well, unless someone has done X somewhere, but I doubt anyone would do that, and I would have to read the entire codebase to be sure".
Of course, this doesn't yield some number as to how "opaque" the code is, and importantly it depends on the question you're asking. A codebase might be quite transparent to some questions and opaque to others. It can be a very useful exercise to think about what questions people are likely to seek answers for from a given codebase.
When you think about things this way, you come to realise a lot of supposedly good practices actually exacerbate code opacity, often for the sake of "reusability" of things that will never be reused. Dependency injection containers are a bête noire of mine for this reason. There's nothing wrong with dependency injection itself (giving things their dependencies rather than having them create them), but DI containers tend to end up being dependency obfuscators, and the worst ones import a huge amount of quirky, often poorly-documented behaviour into your system. They are probably the single biggest cause of having to spend an entire afternoon trawling through code, often including that of the blasted container itself (and runtime config!), to answer what should be a very simple and quick question about a codebase.
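For contrast, a minimal sketch of injection without a container (the class names are invented for illustration): every dependency is visible at the call site, so the "where does this come from?" question has an immediate answer.

```python
import time

class InMemoryRepository:
    def all(self):
        return ["a", "b", "c"]

class ReportService:
    # Plain constructor injection: dependencies are passed in explicitly.
    def __init__(self, repository, clock):
        self._repository = repository
        self._clock = clock

    def build(self):
        return f"{self._clock():.0f}: {len(self._repository.all())} records"

# Wiring happens in one obvious place, not inside an opaque container:
service = ReportService(InMemoryRepository(), time.time)
print(service.build())
```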
"Clever" is a different thing to "complicated" or "opaque", and it's not always a negative. People can certainly make code much more opaque by doing "clever" things, but sometimes (maybe rather too rarely) they can do the opposite. A small, well thought out bit of "clever" code can often greatly reduce the opacity of a much larger amount of code that uses it. Thinking about what a particular "clever" idea will do to the opacity (as defined above) of the codebase can be a good way to figure out whether it is worth doing.
The programming landscape 30+ years ago, with its severely constrained resources, strongly biased our idea of "good software" in favor of cleverness. Having been responsible for picking up someone else's clever code myself, I think we can say we know better now.
Energy is a resource. Mobile computing devices demonstrate this constraint already. I predict that what is old will become new again.