That's very simple. The balanced path depends directly on how much of the requirements and assumptions will change during the lifetime of the thing you are building.
Engineering is helpful only to the extent you can foresee future changes. Anything beyond that requires evolution.
You are able to comment on the complexity of that large company only because you are standing 50 years in the future from when those things started to take shape. If you were designing it 50 years ago, you would end up with the same complexity.
Nature's answer to it is to consolidate and compact. Everything that falls onto the earth gets compacted into solid rock over time by the huge pressure of its weight. All complexity and features are flattened out. Companies undergo similar dynamics, driven by pressures over time, not by big-bang engineering design upfront.
A system of interacting services, where many of them depend on each other in informal ways, may be a complex system. Especially if humans are also involved.
Such a system is not something you design. You just happen to find yourself in it. Like the road to hell, the road to a complex system is paved with good intentions.
If the definition of "complex" is instead something more like "a system of services that interact" that is "prone to multiple, coincidental failures", then I don't think it's impossible to design them. It's just very hard. Manufacturing lines would be examples; they are certainly designed.
The first is too ambitious and ends in an unmaintainable pile around a good core idea.
The second tries to "get everything right" and suffers from second-system syndrome.
The third gets it right, but this time only for a bunch of central business needs. You learned, after all. It is good exactly because it does not try to get _everything_ right like the second did.
The fourth patches up some more features to scoop up B and C prios and calls it a day.
Sometimes, often in BigCorp: the creators move on and it slowly deteriorates under maintenance...
The core point they're trying to make is that agile (or similar) practices are the incorrect way to approach consolidation of smaller systems into bigger ones when the overall system already works and is very large.
I agree with their assertion that being forced to address difficult problems earlier in the process ultimately results in better outcomes, but I think it ignores the reality that properly planning a re-write of a monumentally sized, already-in-use system is practically impossible.
It takes a long time (years?) to understand and plan all the essential details, but in the interim the systems you want to rewrite keep evolving, and some parts of the plan you thought you had completed are no longer correct. In essence, the goal posts keep shifting.
In this light, the strangler fig pattern is probably the pragmatic approach for many of these re-writes. It's impossible to understand everything up front, so understand what you reasonably can for now, act on that, deliver something that works and adds value, then rinse and repeat. The problem is that, for a sufficiently large system, this will take decades, and few software architects stick around at a single company long enough to see it through.
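To make the idea concrete, here is a minimal sketch of a strangler-fig facade, assuming a routing layer that sits in front of both systems; the path prefixes, handler functions, and request shape are all hypothetical:

```cpp
// Illustrative only: requests whose path prefix has already been migrated go
// to the rewritten service; everything else falls through to the legacy system.
#include <iostream>
#include <string>
#include <vector>

std::string legacy_handle(const std::string& path)    { return "legacy handled " + path; }
std::string rewritten_handle(const std::string& path) { return "new service handled " + path; }

std::string route(const std::string& path) {
    // The migrated list grows release by release until the legacy system
    // has been "strangled" and can be switched off.
    static const std::vector<std::string> migrated = {"/billing", "/reports"};
    for (const auto& prefix : migrated) {
        if (path.rfind(prefix, 0) == 0) return rewritten_handle(path);
    }
    return legacy_handle(path);
}

int main() {
    std::cout << route("/billing/invoice/42") << "\n";  // already migrated
    std::cout << route("/orders/7") << "\n";            // still on the old system
}
```

Each increment delivers something working, but the routing table also makes visible just how long the tail of un-migrated prefixes can be.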
A final remark: after only a few years as a full-time software developer, I've found that "writing code" is one of the easiest parts of the job. The hard part is knowing what code needs to be written; that requires effective communication with various people, including other software developers and (probably more importantly) non-technical people who understand how the business processes actually need to work. If you want to be a great software developer, learn to be good at this.
> The most prevalent one, these days, is that you gradually evolve the complexity over time. You start small and keep adding to it.
> The other school is that you lay out a huge specification that would fully work through all of the complexity in advance, then build it.
I think AI will drive an interesting shift in how people build software. We'll see a move toward creating and iterating on specifications rather than implementations themselves.
In a sense, a specification is the most compact definition of your software possible. The knowledge density per "line" is much higher than in any programming language. This makes specifications easier to read, reason about, and iterate on—whether with AI or with peers.
I can imagine open source projects that will revolve entirely around specifications, not implementations. These specs could be discussed, with people contributing thoughts instead of pull requests. The more articulated the idea, the higher its chance of being "merged" into the working specification. For maintainers, reviewing "idea merge requests" and discussing them with AI assistants before updating the spec would be easier than reviewing code.
Specifications could be versioned just like software implementations, with running versions and stable releases. They could include addendums listing platform-specific caveats or library recommendations. With a good spec, developers could build their own tools in any language. One would be able to get a new version of the spec, diff it against the current one, and ask AI to implement the difference, or discuss what is needed for you personally and what is not. Similarly, it would be easier to "patch" the specification with your own requirements than to modify ready-made software.
Interesting times.
We have yet to see a largely LLM-driven language implementation, but it is surely possible. I imagine it would be easier to tell the LLM to instead translate the Java implementation to whatever language you need. A vibe-coded language implementation could do major damage to a company's data.
This is a really good observation and I predict you will be correct.
There is a consequence of this for SaaS. You can imagine an example SaaS that one might need to vibecode to save money. The reason it's not possible now is not that Claude can't do it; it's that getting the right specs (like you suggested) is hard work. A well-written spec will not only contain the best practices for that domain of software but also all the legal compliance BS that comes along with it.
With a proper specification that is also modular, I imagine we will be able to see more vibecoded SaaS.
Overall I think your prediction is really strong.
One issue is that a spec without a working reference implementation is essentially the same as a pull request that's never been successfully compiled. Generalization is good, but you can't get away from actually doing the thing at the end of the day.
I've run into this issue with C++ templates before. Throw a type at a template that it hasn't previously been tested with and it can fall apart in new and exciting ways.
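For example, here is a minimal sketch of that failure mode; the `sum` template and `Point` type are hypothetical, purely for illustration. The template compiles happily for the types it was written against, but a type lacking an operation it silently assumes blows up at instantiation time, with errors deep inside the template rather than at the call site:

```cpp
// Illustrative only: a template that works for the types it was tested with,
// but "falls apart" for a type that lacks an operation it silently assumes.
#include <vector>

template <typename T>
T sum(const std::vector<T>& xs) {
    T total{};
    for (const auto& x : xs) total = total + x;  // silently assumes operator+
    return total;
}

struct Point { double x, y; };  // no operator+ defined

int main() {
    std::vector<int> ints{1, 2, 3};
    sum(ints);  // fine: the tested path

    std::vector<Point> pts{{1.0, 2.0}};
    // sum(pts);  // uncommenting this fails at instantiation time, with errors
    //            // pointing inside sum() rather than at the call site
    (void)pts;
}
```

C++20 concepts can state the assumption up front, but the broader point stands: the template's implicit requirements only surface once you actually instantiate it with a new type.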
> The WHATWG was based on several core principles, (..) and that specifications need to be detailed enough that implementations can achieve complete interoperability without reverse-engineering each other.
But in my experience you need more than a spec, because an implementation is not just something that implements a spec; it is also the result of making many architectural choices in how the spec is implemented.
Also, even with detailed specs, AI still needs additional guidance. For example, a couple of weeks ago Cursor unleashed thousands of agents with access to web standards and the shared WPT test suite: the result was total nonsense.
So the future might rather be like a Russian doll of specs: start with a high-level system description, and then support it with finer-grained specs of parts of the system. This could go down all the way to the code itself: existing architectural patterns provide a spec for how to code a feature that is just a variation of such a pattern. Then whenever your system needs to do something new, you have to provide the code patterns for it. The AI is then relegated to its strength: applying existing patterns.
TLA+ has a concept of refinement, which is kind of what I described above as Russian dolls but only applied to TLA+ specs.
Here is a quote that describes the idea:
There is no fundamental distinction between specifications and implementations. We simply have specifications, some of which implement other specifications. A Java program can be viewed as a specification of a JVM (Java Virtual Machine) program, which can be viewed as a specification of an assembly language program, which can be viewed as a specification of an execution of the computer's machine instructions, which can be viewed as a specification of an execution of its register-transfer level design, and so on.
Source: https://cseweb.ucsd.edu/classes/sp05/cse128/ (chapter 1, last page)
There are core differences in software engineering that set it apart from construction work:
- making changes is often cheaper
- we might not know beforehand everything that needs to be built, especially the unknown unknowns
I would still agree that the truth is somewhere in between, but I would argue that, for software, it's closer to the evolutionary approach.
In short: the tension described in "systems thinking" is the same as the one between "spec driven" and "iterative prompting".
IMO the problem isn't discussing the spec per se. It's that the spec doesn't talk back the way actual working code does. On a "big upfront design" project, there is a high chance you're spending a lot of time on moot issues and irrelevant features.
Making a good spec is much harder than making working software, because the spec may not be right AND the spec may not describe the right thing.
I suppose it's primarily a matter of experience. And, as the article alludes to, it's very important to deeply understand the subject matter. I highly value some of my non-programmer colleagues responsible for documentation. But I can't put my finger on what exactly they brought to the table that made their prose exceptionally good (clear, concise, spot on)...
Everything W3C does. Go is evolving through specs first. Probably every other programming language these days.
People already do that for humankind-scale projects where there have to be multiple implementations that can talk to each other. Iteration is inevitable for anything that gains traction, but it still can be iteration on specs first rather than on code.
AI assistance would seem to favor the engineering approach, as the friction of teams and personalities is reduced in favor of quick feasibility testing and complete planning.
That feels like a straw man to me. This is not a binary question. For each small design decision you have a choice about how much uncertainty you accept.
There are no "two schools". There is at least a spectrum between two extremes, and no real project was ever at either of the very ends of it. Actually, I don't think "spectrum" is even the right word, because this is not just a single dimension. For example, speed and risk often correlate, but they are also somewhat independent and sometimes they anti-correlate.
But if anything, all development is the search for the requirements. Some just value writing them down.
Designers need malleability; that is why they all want digital design systems.
It was discussed here intensively just two days ago.
How rapidly has business software changed since COVID? Yet how many skyscrapers remain partially unoccupied in big cities like London, because of the recent arrival of widespread hybrid working?
The buildings are structurally unchanged and haven't been demolished to make way for buildings that better support hybrid working. Sure, office fit-outs are more oriented towards smaller simultaneous attendance, with more hot desking. A new industry boom around team-building socials has also arrived: virtual skeet shooting or golf, for example.
On the whole, engineered cities are unchanged, their ancient and rigid specifications lacking the foresight to include the requirements that accommodate hybrid working. Software, meanwhile, has adapted and, as the OP says, evolved.