Up to 80% of a typical developer's time these days (depending on the company and the project) is spent idle: waiting for a build to fail again, waiting for a deployment to finish, waiting for all teammates to show up to another useless meeting, writing stupid performance reviews, arguing about secondary yet unavoidable issues in Slack and PR comments, etc. etc.
It reminds me of Kingman's Formula in queueing theory: As server utilization approaches 100%, the wait time approaches infinity.
We intuitively understand this for servers (you never run a CPU at 99% if you want responsiveness), yet for some reason, we decided that a human brain—which is infinitely more complex—should run at 99% capacity and still be expected to handle urgent interruptions without crashing.
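The blow-up is easy to see from Kingman's approximation for a single-server queue. Here's a minimal sketch (the service time and variability coefficients are illustrative assumptions, set to 1.0, i.e. roughly M/M/1 conditions):

```python
# Kingman's approximation for mean wait time in a G/G/1 queue:
#   W  ≈  (rho / (1 - rho)) * ((ca^2 + cs^2) / 2) * service_time
# where rho is utilization, and ca / cs are the coefficients of
# variation of inter-arrival and service times (1.0 assumed here).

def kingman_wait(utilization, service_time=1.0, ca=1.0, cs=1.0):
    return (utilization / (1 - utilization)) * ((ca**2 + cs**2) / 2) * service_time

for rho in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"utilization {rho:.0%}: avg wait ~ {kingman_wait(rho):.1f}x service time")
# utilization 50%: avg wait ~ 1.0x service time
# utilization 80%: avg wait ~ 4.0x service time
# utilization 90%: avg wait ~ 9.0x service time
# utilization 95%: avg wait ~ 19.0x service time
# utilization 99%: avg wait ~ 99.0x service time
```

Going from 50% to 99% utilization doesn't double the wait; it roughly hundred-folds it. That's the whole argument against running people (or CPUs) hot.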
For a couple of years I helped develop scheduling software for supply chains in the process industry. We frequently optimized for throughput or resource utilization, but sometimes for just-in-time delivery or minimal latency instead. So goals differ, but it kind of works in an industrial context.
Now, there has always been a tendency to also frame knowledge work like software development as though it's just industrial production. Hence (mostly futile) attempts to make things predictable, reproducible and "efficient". Where efficiency is bluntly taken to mean optimal utilization.
When you try to apply a process optimisation perspective from supply chains or manufacturing to software delivery, one key difference is that software delivery doesn't produce a stream of identical units that are independent of each other.
If we abstract the software situation, we can tell ourselves that it's a repeatable process producing an endless stream of independent features or fixes (measured in "story points", say) that get shipped to production. This mental model maybe works some of the time, until it doesn't.
In reality, each software change is often a bespoke, one-off modification or addition to an existing system. The work to deliver different features or fixes is not fungible, and the work items may not be independent -- changes can interfere with each other by touching overlapping components in the existing system and modifying them in incompatible ways. A more realistic mental model needs to acknowledge that there's a system there, that its existing architecture and accumulated cruft may heavily constrain what can be done, and that the system is often a one-off thing being changed in bespoke ways with each item of work that ships.
It seems like modern Agile has mutated into a tool for Manufacturing Predictability rather than Software Discovery. We are so obsessed with making the velocity graph look like a straight line that we stopped asking if we are even building the right thing.
Do you think that shift happened because non-technical management needed a metric they could understand (tickets closed), or did we do this to ourselves?
My ELIengineer take is that no bearing works without some slack - without it, it seizes. But seeing that still requires some understanding, and that is less and less available, especially at management levels.
Good managers will do all of this simultaneously.
Bad managers just try to cram in as much work as possible. Because they are so poor at evaluating the quality of what their employees are doing, the only thing they understand is maximising throughput at the expense of all else. If your manager is like this, leave ASAP.