I was in my last role for a year, and 90%+ of my time was spent investigating things that went "missing" at one of the many failure points between the many distributed components.
I wrote less than 200 lines of code that year and I experienced the highest level of burnout in my professional career.
The technical aspect that contributed the most to this burnout was both the lack of observability tooling and the lack of organizational desire to invest in it. Whenever I would bring up this gap I would be told that we can't spend time/money and wait for people to create "magic tools".
So far the culture in my new embedded (Rust, fwiw) position is the complete opposite. If you're burnt out working on distributed systems and you care about some of the same things that I do, it's worth giving embedded software dev a shot.
Plus: far worse performance ("but it scales smoothly" OK but your max probable scale, which I'll admit does seem high on paper if you've not done much of this stuff before, can fit on one mid-size server, you've just forgotten how powerful computers are because you've been in cloud-land too long...) and crazy-high costs for related hardware(-equivalents), resources, and services.
All because we're afraid to shell into an actual server and tail a log, I guess? I don't know what else it could be aside from some allergy to doing things the "old way"? I dunno man, seems way simpler and less likely to waste my whole day trying to figure out why, in fact, the logs I need weren't fucking collected in the first place, or got buried in some damn corner of our Cloud I'll never find without writing a 20-line "log query" in some awful language I never use for anything else, in some shitty web dashboard.
Fewer, or cheaper, personnel? I've never seen cloud transitions do anything but the opposite.
It's like the whole industry went collectively insane at the same time.
[EDIT] Oh, and I forgot, for everything you gain in cloud capabilities it seems like you lose two or three things that are feasible when you're running your own servers. Simple shit that's just "add two lines to the nginx config and do an apt-install" becomes three sprints of custom work or whatever, or just doesn't happen because it'd be too expensive. I don't get why someone would give that stuff up unless they really, really had to.
[EDIT EDIT] I get that this rant is more about "the cloud" than distributed systems per se, but trying to build "cloud native" is the way that most orgs accidentally end up dealing with distributed systems in a much bigger way than they have to.
But it's funny. The transition to distributed/cloud feels like the rush to OOP early in my career. All of a sudden there were certain developers who would claim it was impossible to ship features in procedural codebases, and then proceed to make a fucking mess out of everything using classes, completely misunderstanding what they were selling.
It is also not unlike what Web-MVC felt like in the mid-2000s. Suddenly everything that came before was considered complete trash by some people that started appearing around me. Then the same people disparaging the old ways started building super rigid CRUD apps with mountains of boilerplate.
(Probably the only thing I was immediately on board with was the transition from desktop to web, because it actually solved more problems than it created. IMO, IME and YMMV)
Later we also had React and Docker.
I'm not salty or anything: I also tried and became proficient in all of those things. Including microservices and the cloud. But it was more out of market pressure than out of personal preference. And like you said, it has a place when it's strictly necessary.
But now I finally do mostly procedural programming, in Go, on single servers.
Recently I was going to do a fairly big download of a dataset (45 TB) and when I first looked at it, figured I could shard the file list and run a bunch of parallel loaders on our cluster.
Instead, I made a VM with 120 TB of storage (using AWS with FSx) and ran a single instance of git clone for several days (unattended; just periodically checking in to make sure that git was still running). The storage was more than 2X the dataset size because git LFS requires 2X the disk space. A single multithreaded git process was able to download at 350 MB/sec and it finished at the predicted time (about 3 days). Then I used 'aws s3 sync' to copy the data back to S3, writing at over 1 GB/sec. When I copied the data between two buckets, the rate was 3 GB/sec.
That said, there are things we simply can't do without distributed computing, because there are hard limits on how many CPUs and how much local storage can be attached to a single memory address space.
But once you need that second server, everything about your application needs to work in distributed fashion.
I understand that it can not deal with FAANG scale problems, but those are relevant only to a small subset of businesses.
On distributed: QPS scaling isn't the only reason, and I suspect it's rarely the reason. It's mostly driven by availability needs.
It's also driven by organizational structure and teams. Two teams don't need to be fighting over the same server to deploy their code. So it gets broken out into services with clear API boundaries.
And ssh to servers might be fine for you. But systems and access controls are designed to protect against the bottom tier of employees who will mess things up when they tweak things manually. And tweaking things by hand isn't reproducible when they break.
I'm not entirely sure you understand the problem domain, or even the high-level problem. There is no, nor ever was, a "rush" to distributed computing.
What you actually have is this global epiphany that having multiple computers communicating over a network to do something actually has a name, and it's called distributed computing.
This means that we had (and still have) guys like you who look at distributed systems and somehow do not understand they are looking at distributed systems. They don't understand that mundane things like a mobile app supporting authentication or someone opening a webpage or email is a distributed system. They don't understand that the discussion on monolith vs microservices is orthogonal to the topic of distributed systems.
So the people railing against distributed systems are essentially complaining about their own ignorance and failure to actually understand the high-level problem.
You have two options: acknowledge that, unless you're writing a desktop app that does nothing over a network, odds are every single application you touch is a node in a distributed system, or keep fooling yourself into believing it isn't. I mean, if a webpage fails to load then you just hit F5, right? And if your app just fails to fetch something from a service you just restart it, right? That can't possibly be a distributed system, and those scenarios can't possibly be mitigated by basic distributed computing strategies, right?
Everything is simple to those who do not understand the problem, and those who do are just making things up.
this would lead to a pointless conversation, if it were to ever happen.
That's the point, isn't it? It's simply wrong to assert that there's a rush to distributed systems when they are already ubiquitous in the real world, even if this comes as a surprise to people like OP. Get acquainted with the definition of distributed computing, and look at reality.
The only epiphany taking place is people looking at distributed systems and thinking that, yes, perhaps they should be treated as distributed systems. Perhaps the interfaces between multiple microservices are points of failure, but replacing them with a monolith does not make it less of a distributed system. Worse, taking down your monolith is also a failure mode, one with higher severity. How do you mitigate that failure mode? Well, educate yourself about distributed computing.
If you look at a distributed system and call it something other than distributed system, are you really speaking a different language, or are you simply misguided?
Welcome to computing.
- OOP will solve all of our problems
- P2P will solve all of our problems
- XML will solve all of our problems
- SOAP will solve all of our problems
- VMs will solve all of our problems
- Ruby on Rails and by extension dynamically typed languages will solve all of our problems
- Docker [etc...]
- Functional programming
- node.js
- Cloud
- Kubernetes
- Statically typed languages
- "Serverless"
- Rust?
- AI
Some have more merit (IMO notably FP, static typing and Rust), some less (notably XML and SOAP)...
That sounds like an awful organizational ethos. 30hrs to make a "magic tool" to save 300hrs across the organization sounds like a no-brainer to anyone paying attention. It sounds like they didn't even want to invest in out-sourced "magic tools" to help either.
I wonder if these roles tend to attract people who get the most job enjoyment and satisfaction out of the (manual) investigation aspect; it might explain some of the reluctance to adopt or create more sophisticated observability tooling.
Either way, doing the same kinds of things, the same kind of ways, more than a few times, is an automation/tool/practice improvement opportunity lost.
I have yet to complete a single project I couldn't do much better, differently, if I were to do something similar again. Not everything is highly creative, but software is such a complex balancing act/value terrain. Every project should deliver some new wisdom, however modest.
also called by some other names, including NIH syndrome, protecting your turf, we do it this way around here, our culture, etc.
In my -very- humble opinion, you should wait at least a year before making big swinging changes or recommendations, most importantly in any big company.
After that honeymoon period, all but the most autistic people will learn the organisational politics, keep their head down, and “play the game” to be assigned trivial menial tasks in some unimportant corner of the system. At that point, only after two beers will they give their closest colleagues their true opinion.
I’ve seen this play out over and over, organisation after organisation.
The corollary is that you yourself are not immune to this effect and will grow accustomed to almost any amount of insanity. You too will find yourself saying sentences like “oh, it always has been like this” and “don’t try to change that” or “that’s the responsibility of another team” even though you know full well they’re barely even aware of what that thing is, let alone maintaining it in a responsible fashion.
PS: This is my purpose in a nutshell as a consultant. I turn up and provide my unvarnished opinion, without being even aware of what I’m “not supposed to say” because “it upsets that psychotic manager”. I’ll be gone before I have any personal political consequences, but the report document will remain, pointing the finger at people that would normally try to bite it off.
FWIW, academia has off-the-charts levels of "wtf" that newcomers will point out, though it's even more ossified than corporate culture, and they don't hire consultants to come in and fix things :)
Interfacing with IT, who thought they knew the "right" way to do everything but in reality had little to no understanding of our constraints, was always interesting.
I also see a single-mindedness about specific technical implementations, where a more mature view would be to see tech as a business and us less as artisans than as blue-collar workers.
> steve jobs on "You're right, but it doesn't matter" https://www.youtube.com/watch?v=oeqPrUmVz-o
My comment was a statistical observation of what typically happens in ordinary organisations without a strong-willed, technically capable leader at the helm.
Disclaimer: Also, I have a biased view, because as a consultant I will generally only turn up if there is something already wrong with an organisation that insiders are unable to fix.
That's weird. I love debugging, and so I'm always trying to learn new ways to do it better. I mean, how can it be any other way? How can someone love something and be that committed to sucking at it?
One of the engineers just quit on the spot for a better paid position, the other was demoted and is currently under heavy depression last I heard from him.
Unfortunately, some people confuse the two and believe they are paid to do the latter, not the former, simply because others look at those steps and go “wtf, we could make that hell more pleasant and easier to deal with”.
In the same vein, “creating perceived job security for yourself by willing to continuously deal with stupid bs that others rightfully aren’t interested in wasting time on.”
Sadly, you are ultimately right though, as misguided self-interest often tends to win over well-meant proposals.
Excessive drawing of boxes and lines, and the production of systems around them becomes a kind of Glass Bead Game. "I'm paid to build abstractions and then figure out how to keep them glued together!" Likewise, recomposing events in your head from logs, or from side effects -- that's somehow the marker of being good at your job.
The same kind of motivation underlies people who eschew or disparage GUI debuggers (log statements should be good enough or you're not a real programmer), too.
Investing in observability tools means admitting that the complexity might overwhelm you.
As an older software engineer the complexity overwhelmed me a long time ago and I strongly believe in making the machines do analysis work so I don't have to. Observability is a huge part of that.
Also many people need to be shown what observability tools / frameworks can do for them, as they may not have had prior exposure.
And back to the topic of the whole thread, too: can we back up and admit that a distributed system is questionable as an end in itself? It's a means to an end, and distributing something should be considered only as an approach when a simpler, monolithic system (that is easier to reason about) no longer suffices.
Finally I find that the original authors of systems are generally not the ones interested in building out observability hooks and tools because for them the way the system works (or doesn't work) is naturally intuitive because of their experience writing it.
I’ve seen a great many engineers become so used to provisioning compute that they forget that the same “service” can be deployed in multiple places. Or jump to building an orchestration component when a simple single process job would do the trick.
Weak tech leadership? Let's "fix" that with some microservices.
Now it's FUBAR? Conceal it with some cloud native horrors, sacrifice a revolving door of 'smart' disempowered engineers to keep the theater going til you can jump to the next target.
Funny, because distributed systems have been pretty much solved since Lamport, 40+ years ago [0][1][2].
The first one was a multi-billion-dollar unicorn that had everything converted to microservices, with everything customized in Kubernetes. One day I even had to fix a few bugs in the service mesh because the guy who wrote it had left and I was the only person not fighting fires who could write the language it was in. I left right after the backend-for-the-frontend failed to sustain traffic during a month when they literally had zero customers (Corona).
At the second one there was a mandate to rewrite everything to microservices, and it took another team 5 months to migrate a single 100-line class I wrote into a microservice. It just wasn't meant to be. Then the only guy who knew how the infrastructure worked got burned out after being yelled at too many times, then got demoted, and last I heard he's at home with depression.
Weak leadership doesn't even begin to describe it, especially the second.
But remembering it is a nice reminder that a job is just a means of getting a payment.
[0] https://lamport.azurewebsites.net/pubs/time-clocks.pdf
[1] https://lamport.azurewebsites.net/pubs/lamport-paxos.pdf
[2] https://lamport.azurewebsites.net/pubs/byz.pdf
Or, if you prefer wiki articles:
https://en.wikipedia.org/wiki/Lamport_timestamp
https://en.wikipedia.org/wiki/Paxos_(computer_science)
https://en.wikipedia.org/wiki/Byzantine_fault
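For anyone who hasn't read the paper, the core of a Lamport clock really is tiny. A minimal, single-process Go sketch (illustration only, not goroutine-safe):

```go
package main

import "fmt"

// LamportClock is Lamport's logical clock: every local event increments the
// counter, and every received message advances it to max(local, remote) + 1.
// This yields a partial order of events without synchronized wall clocks.
type LamportClock struct {
	time uint64
}

// Tick records a local event and returns its timestamp.
func (c *LamportClock) Tick() uint64 {
	c.time++
	return c.time
}

// Receive merges the timestamp carried by an incoming message.
func (c *LamportClock) Receive(remote uint64) uint64 {
	if remote > c.time {
		c.time = remote
	}
	c.time++
	return c.time
}

func main() {
	var a, b LamportClock
	send := a.Tick()                  // a stamps an outgoing message with 1
	recv := b.Receive(send)           // b observes it and moves to 2
	fmt.Println(send, recv, a.Tick()) // prints: 1 2 2
}
```

The hard part was never the data structure; it's convincing yourself (and your system) to order events by these timestamps instead of wall-clock time.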
I don't know that I would call it "solved", but he certainly contributed a huge amount to the field.
I've never once been granted explicit permission to try a different path without being burdened by a mountain of constraints that ultimately render the effort pointless.
If you want to try a new thing, just build it. No one is going to encourage you to shoot holes through things that they hang their own egos from.
It can be challenging to push through to a completed demo without someone cheering you on every morning. I find this to be helpful more than hurtful if we are interested in the greater good. If you want to go against the grain (everyone else on the team), then you need to be really sure before you start wasting everyone else's time. Prove it to yourself first.
One of the most significant "triumphs" of my technical career came at a startup where I started as a Principal Engineer and left as the VP Engineering. When I started, we had nightly outages requiring Engineering on-call, and by the time I left, no one could remember a recent issue that required Engineers to wake up.
It was a ton of work and required a strong investment in quality & resilience, but the even bigger impact came from observability. We couldn't afford APM, so we took a very deliberate approach to what we logged and how, and stuffed it into an ELK stack for reporting. The immediate benefit was a drastic reduction in time to diagnose issues, and it effectively let our small operations team triage incidents and identify app vs. infra issues almost immediately. Additionally, it was much easier to identify and mitigate fragility in our code and infra.
The net result was an increase in availability from 98.5% to 99.995%, and I think observability contributed to at least half of that.
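Not the exact setup described above, but for anyone wondering what a "deliberate approach to what we logged and how" looks like in practice: emitting structured JSON with a small, consistent set of fields (request ID, duration, outcome) is most of the battle, because ELK or anything similar can then slice on those fields. A rough Go sketch using the standard library's log/slog; the field names are invented for illustration:

```go
package main

import (
	"log/slog"
	"os"
	"time"
)

func main() {
	// One JSON logger for the whole service; a log shipper can ingest
	// stdout directly, and every record carries machine-parseable fields.
	logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
		Level: slog.LevelInfo,
	}))

	start := time.Now()
	// ... handle a request ...
	logger.Info("payment.charge",
		"request_id", "req-123", // propagate the same ID across services
		"customer_id", 42,
		"duration_ms", time.Since(start).Milliseconds(),
		"outcome", "ok",
	)
}
```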
Most hardware companies have zero observability, and haven't yet seen the light ("our code doesn't really have bugs" is a quote I hear multiple times a week!).
My experience with mid-size to enterprise is having lots of observability and observability-adjacent tools purchased but not properly configured. Or the completely wrong tools for the job being used.
A few I've seen recently: Grafana running on local Docker of developers because of lack of permissions in the production version (the cherry on top: the CTO himself installed this on the PMs computers), Prometheus integration implemented by dev team but env variables still missing after a couple years, several thousand a month being paid to Datadog but nothing being done with the data nor with the dog.
On startups it's surprisingly different, IME. But as soon as you "elect" a group to be administrator of a certain tool or some resource needed by those tools, you're doomed.
Or talk to a goat, sometimes
https://modernfarmer.com/2014/05/successful-video-game-devel...
Distributed systems require insanely hard math at the bottom (paxos, raft, gossip, vector clocks, ...) It's not how the human brain works natively -- we can learn abstract thinking, but it's very hard. Embedded systems sometimes require the parallelization of some hot spots, but those are more like the exception AIUI, and you have a lot more control over things; everything is more local and sequential. Even data race free multi-threaded programming in modern C and C++ is incredibly annoying; I dislike dealing with both an explicit mesh of peers, and with a leaky abstraction that lies that threads are "symmetric" (as in SMP) while in reality there's a complicated messaging network underneath. Embedded is simpler, and it seems to require less that practitioners become advanced mathematicians for day to day work.
That platform could be parallelized up to 6 units of its kind in a master-slave configuration (the platform in physical position 1 would assume the "master" role, for a total of 18 embedded chips and 6 Linux boards), on top of optionally having one more box with one more CPU in it for managing some other stuff and integrating with each of our clients' hardware. Each client had a different integration, but at least they mostly integrated with us, not the other way around.
Yeah it was MUCH more complex than your average cloud. Of course the original designers didn't even bother to make a common network protocol for the messages, so each point of communication not only used a different binary format, they also used different wire formats (CAN bus, Modbus and ethernet).
But at least you didn't need to know kubernetes, just a bunch of custom stuff that wasn't well documented. Oh yeah and don't forget the boot loaders for each embedded CPU, we had to update the bootloaders so many times...
The only saving grace is that a lot of the system could rely on the literal physical security because you need to have physical access (and a crane) to reach most of the system. Pretty much only the linux boards had to have high security standards and that was not that complicated to lock down (besides maintaining a custom yocto distribution that is).
Even more fun when multiple devices share a single communication bus, so you're basically guaranteed to not get temporally-aligned readings from all of the devices.
Here’s a great podcast on the topic which you will surely like!
https://signalsandthreads.com/clock-synchronization/
And a related HN thread in case you missed it:
You work on BMS stuff? That’s cool- a little bit outside my domain (I do energy modeling research for buildings) but have been to some fun talks semi-recently about BMs/BAS/telemetry in buildings etc. The whole landscape seems like a real mess there.
FYI that podcast I linked has some interesting discussion about some issues with PTP over NTP- worth listening to for sure.
I think this take is misguided. Most systems nowadays, especially those involving any sort of network calls, are already distributed systems. Yet the number of systems that come even close to touching fancy consensus algorithms is very, very limited. If you are in a position to design a system and you hear "Paxos" coming out of your mouth, that's the moment you need to step back and think about what you are doing. Odds are you are creating your own problems, and then blaming the tools.
And Paxos doesn't require much maths. It's pretty tricky to consider all possible interleavings, but in terms of maths, it's really basic discrete maths.
It's so funny how all of a sudden every single company absolutely must implement Paxos. No exception. Your average senior engineer at a FANG working with global deployments doesn't come close to even hearing about it, but these guys somehow absolutely must have Paxos. Funny.
From the other direction, Paxos, two generals, serializability, etc. are not hard concepts at all. Implementing custom solutions in this space _is_ hard and prone to error, but the foundations are simple and sound.
You seem to be claiming that you shouldn't need to understand the latter, that the former gives you everything you need. I would say that if you build systems using existing tools without even thinking about the latter, you're just signing up to handle preventable errors manually and treating this box that you own as black and inscrutable.
No one goes to review the transaction engine of Postgres.
- You work on postgres: you have to deal with the transaction engine's internals.
- You work in enterprise application integration (EAI): you have ten legacy systems that inevitably don't all interoperate with any one specific transaction manager product. Thus, you have to build adapters, message routing and propagation, gateways, at-least-once-but-idempotent delivery, and similar stuff, yourself. SQL business logic will be part of it, but it will not solve the hard problems, and you still have to dig through multiple log files on multiple servers, hoping that you can rely on unique request IDs end-to-end (and that the timestamps across those multiple servers won't be overly contradictory).
In other words: same challenges at either end of the spectrum.
This is built upon a framework of the network is either working or the server team / ops team is paged and will be actively trying to figure it out. It doesn't work nearly as well if you work in an environment where the network is consistently slightly broken.
If you're using traditional (p)threads-derived APIs to get work done on a message passing system, I'd say you're using the wrong API.
More likely, I don't understand what you might mean here.
- By "explicit mesh of peers", I referred to atomics, and the modern (C11 and later) memory model. The memory model, for example as written up in the C11 and later standards, is impenetrable. While the atomics interfaces do resemble a messaging passing system between threads, and therefore seem to match the underlying hardware closely, they are discomforting because their foundation, the memory model, is in fact laid out in the PhD dissertation of Mark John Batty, "The C11 and C++11 Concurrency Model" -- 400+ pages! <https://www.cl.cam.ac.uk/~pes20/papers/topic.c11.group_abstr...>
- By "leaky abstraction", I mean the stronger posix threads / standard C threads interfaces. They are more intuitive and safer, but are more distant from the hardware, so people sometimes frown at them for being expensive.
It pays poorly, the tooling more often than not sucks (more than once I've had to do some sort of stub for an out-of-date gcc), observability is non-existent unless you're looking at a device on your desk, in which case your observability tool is an oscilloscope (or bus pirate type of device, if you're lucky in having the lower layers completely free of bugs).
The datasheets/application notes are almost always incomplete, with errata (in a different document) telling you "Yeah, that application note is wrong, don't do that".
The required math background can be strict as well: RF, analog ... basically anything interesting you want to do requires a solid grounding in undergrad maths.
I went independent about 2 years ago. You know what pays better and has less work? Line of business applications. I've delivered maybe two handfuls of LoB applications but only one embedded system, and my experience with doing that as a contractor is that I won't take an embedded contract anymore unless it's a client I've already done work for, or if the client is willing to pay 75% upfront, and they agree to a special hourly rate that takes into account my need for maintaining all my own equipment.
Embedded does give you a greater feeling of control. When things aren't working, it's much more likely to be your own fault.
Would you agree that, technically (or philosophically?), both roles involved distributed systems (e.g. the world-wide-web of web-servers and web-browsers exists as a single distributed system) - unless your embedded boxes weren't doing any network IO at all?
...which makes me genuinely curious exactly what your aforementioned distributed-system role was about and what aspects of distributed-computing theory were involved.
It's often quite a challenge to get that class of engineer to adopt things that give them visibility and data to track things down as well. Sometimes it's a capability/experience gap, and sometimes it's just over-indexing on the perceived time to reach a solution vs. the time wasted on repeated problems and yak shaving.
https://en.m.wikipedia.org/wiki/Choreographic_programming
I’m curious to what extent the work in that area meets the need.
https://en.m.wikipedia.org/wiki/Shakespeare_Programming_Lang...
Some time has passed since then — and yet, most people still develop software using sequential programming models, thinking about concurrency occasionally.
It is a durable paradigm. There has been no revolution of the sort that the author of this post yearns for. If "Distributed Systems Programming Has Stalled", it stalled a long time ago, and perhaps for good reasons.
For the very good reason that the underlying math is insanely complicated and tiresome for mere practitioners (which, although I have a background in math, I openly aim to be).
For example, even if you assume sequential consistency (which is an expensive assumption) in a C or C++ language multi-threaded program, reasoning about the program isn't easy. And once you consider barriers, atomics, load-acquire/store-release explicitly, the "SMP" (shared memory) proposition falls apart, and you can't avoid programming for a message passing system, with independent actors -- be those separate networked servers, or separate CPUs on a board. I claim that struggling with async messaging between independent peers as a baseline is not why most people get interested in programming.
Our systems (= normal motherboards on one and, and networked peer to peer systems on the other end) have become so concurrent that doing nearly anything efficiently nowadays requires us to think about messaging between peers, and that's very-very foreign to our traditional, sequential, imperative programming languages. (It's also foreign to how most of us think.)
Thus, I certainly don't want a simple (but leaky) software / programming abstraction that hides the underlying hardware complexity; instead, I want the hardware to be simple (as little internally distributed as possible), so that the simplicity of the (sequential, imperative) programming language then reflects and matches the hardware well. I think this can only be found in embedded nowadays (if at all), which is why I think many are drawn to embedded recently.
So, because we discovered a lucrative, embarrassingly parallel problem domain, that's basically what the entire industry has been doing for the 15 years since multicore became unavoidable. We have web services and compilers being multi-core and not a lot in between. How many video games still run, like, three threads, each of them for completely distinct tasks?
What is this referring to? It sounds like a fascinating problem.
> What is this referring to?
30 = 8+6+4+4+3+2+1.5+1.5
On coroutines it’s not the network but the L1 cache. You’re better off running a function a dozen times and then running another than running each in turn.
Rust accepted the tradeoff and can do pure stack async;
there are things you can do in C++ to avoid the dynamic allocation to the heap, but it requires a custom allocator plus predefining the size of the coroutines.
https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines...
Pay a 100% premium on compute resources in order to pretend the 8 Fallacies of Distributed Computing don’t exist.
I sat out the beginning of Cloud and was shocked at how completely absent they are from conversations within the space. When the hangover hits it’ll be ugly. The Devil always gets his due.
However, I think there are great advantages to that style. It’s easier to analyze and test the sequential code for correctness. Then it writes a Kafka message or makes an HTTP call and doesn’t need to be concerned with whatever is handling the next step in the process.
Then assembling the sequential components once they are all working individually is a much simpler task.
The future is here, but it is not evenly distributed.
Since you were at AWS (?), you'd know that Erlang did get its shot at distributed systems there. I'm unsure what went wrong, but if not c/c++, it was all JVM based languages soon after that.
- MIT course with Robert Morris (of Morris Worm fame): https://www.youtube.com/watch?v=cQP8WApzIQQ&list=PLrw6a1wE39...
- Martin Kleppmann (author of DDIA): https://www.youtube.com/watch?v=UEAMfLPZZhE&list=PLeKd45zvjc...
If you can work through the above (and DDIA), you'll have a solid understanding of the issues in Distributed System, like Consensus, Causality, Split Brain, etc. You'll also gain a critical eye of Cloud Services and be able to articulate their drawbacks (ex: did you know that replication to DynamoDB Secondary Indexes is eventually consistent? What effects can that have on your applications?)
> Writing logic that spans several machines right next to each other, in a single function
> Surfacing semantic information on distributed behavior such as message reordering, retries, and serialization formats across network boundaries
Aren't these features offered by Erlang?
Unison seems to build on it further. Very cool
However the number of people that actually need a distributed system is pretty small. With the rise of kubernetes, the number of people who've not been burnt by going distributed when they didn't need to has rapidly dropped.
You go distributed either because you are desperate, or because you think it would be fun. K8s takes the fun out of most things.
Moreover, with machines suddenly getting vast IO improvements, the need for going distributed is much less than it was 10 years ago. (Yes, I know there is fault tolerance, but that adds another dimension of pain.)
Gosh, this was hard to parse! I’m still not sure I’ve got it. Do you mean “kubernetes has caused more people to suffer due to going distributed unnecessarily”, or something else?
K8s has unneeded complexity which is really not required at even decent enough scales, if you've put in enough effort to architect a solution that makes the right calls for your business.
People got burnt by kubernetes, and that pissed in the well of enthusiasm for experimenting with distributed systems
If your architecture is poor, k8s won't help you
One of the things that is most powerful about K8s is that it gives you a lot of primitives to build things with. This is also its biggest drawback.
If you are running real physical infrastructure and want to run several hundreds of "services" (as in software, not k8s services) then kubernetes is probably a good fit, but you have a storage and secrets problem to solve as well.
On the cloud, unless you're using a managed service, its almost certainly easier to either use lambdas (for low traffic services) or one of the many managed docker hosting services they have.
Some of them are even K8s API compatible.
but _why_?
At its heart, k8s is a "run this thing here with these resources" system. AWS also does this, so duplicating it costs time and money. For most people the benefit of running ~20 services, < 5 DBs, and storage on k8s is negative. It's a steep learning curve, a very large attack surface (you need to secure the instance and then k8s permissions), and it's an extra layer of things to maintain. For example, running a DB on k8s is perfectly possible, and there are a bunch of patterns you can follow. But you're on the hook for persistence, backup and recovery. Managed DBs are more expensive to run, but they cost 0 engineer hours to implement.
BUT
You do get access to helm, which means that you can copypasta mostly working systems into your cluster. (but again like running untrusted docker images, thats not a great thing to do.)
The other thing to note is that the networking scheme is batshit crazy and working with IPv6 is still tricky.
Event sourcing is a logical structure; you can implement it with SQLite or even flat files, locally, if your problem domain is served well by it. Adding Kafka as the first step is most likely a costly overkill.
This is in contrast to the fad-driven design and over-engineering that I'm speaking of (here I simply used ES as an example) that is usually introduced because someone in power saw a blog post or 1h talk and it looked cool. And Kafka will be used because it is the most "scalable" and shiny solution, there is no pros-vs-cons analysis.
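To make the "flat files" point concrete, here's roughly what a local, no-Kafka event store can look like: an append-only JSON-lines log that you replay to rebuild state. A sketch only; the package name and the Event shape are invented for illustration:

```go
package eventlog

import (
	"bufio"
	"encoding/json"
	"os"
)

// Event is the only thing ever persisted; current state is derived by replay.
type Event struct {
	ID   string          `json:"id"`
	Type string          `json:"type"`
	Data json.RawMessage `json:"data"`
}

// Append writes one event per line to an append-only file.
func Append(path string, e Event) error {
	f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()
	b, err := json.Marshal(e)
	if err != nil {
		return err
	}
	_, err = f.Write(append(b, '\n'))
	return err
}

// Replay feeds every stored event, in order, to a state-building function.
func Replay(path string, apply func(Event) error) error {
	f, err := os.Open(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil // empty log, empty state
		}
		return err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		var e Event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			return err
		}
		if err := apply(e); err != nil {
			return err
		}
	}
	return sc.Err()
}
```

Swap the file for a SQLite table when you want transactions and indexes; Kafka only starts earning its keep when several machines need to consume the same log.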
1) We are surrounded by distributed systems all the time. When we buy and sell B2B software, we don't know what's stored in our partners databases, they don't know what's in ours. Who should ask whom, and when? If the data sources disagree, whose is correct? Just being given access to a REST API and a couple of webhooks is all you need to be in full distributed systems land.
2) I honestly do not know of a better approach than event-sourcing (i.e. replicated state machine) to coordinate among multiple masters like this. The only technique I can think of that comes close is Paxos - which does not depend on events. But then the first thing I would do if I only had Paxos, would be to use it to bootstrap some kind of event system on top of it.
Even the non-event-sourcing technologies like DBs use events (journals, write-ahead-logs, sstables, etc.) in their own implementation. (However that does not imply that you're getting events 'for free' by using these systems.)
My co-workers do not put any alternatives forward. Reading a database, deciding what action to do, and then carrying out said action is basically the working definition of a race-condition. Bankers and accountants had this figured out thousands of years ago: a bank can't send a wagon across the country with queries like "How much money is in Joe's account?" wait a week for the reply, and then send a second wagon saying "Update Joe's account so it has $36.43 in it now". It's laughable. But now that we have 50-150ms latencies, we feel comfortable doing GETs and POSTs (with a million times more traffic) and somehow think we're not going to get our numbers wrong.
Like, what's an alternative? I have a shiny billion-dollar fully-ACID SQL db with my customer accounts in them. And my SAAS partner bank also has that technology. Put forward literally any idea other than events that will let us coordinate their accounts such that they're not able to double-spend money, or are prevented from spending money if a node is down. I want an alternative to event sourcing.
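Not a full answer to cross-institution coordination, but the property that makes exchanging events safe under retries and at-least-once delivery is idempotent application: record the event ID in the same transaction that moves the money. A hedged Go sketch; the table names and Postgres-style placeholders are assumptions, not anyone's real schema:

```go
package ledger

import (
	"database/sql"
	"fmt"
)

// ApplyTransfer applies a transfer event at most once, even if the same
// event is delivered many times: the processed-event marker and the balance
// update commit (or roll back) together.
func ApplyTransfer(db *sql.DB, eventID, account string, amountCents int64) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op after a successful Commit

	// A unique primary key on processed_events rejects duplicate deliveries.
	if _, err := tx.Exec(
		`INSERT INTO processed_events (event_id) VALUES ($1)`, eventID,
	); err != nil {
		return fmt.Errorf("event %s already applied or insert failed: %w", eventID, err)
	}

	if _, err := tx.Exec(
		`UPDATE accounts SET balance_cents = balance_cents + $1 WHERE id = $2`,
		amountCents, account,
	); err != nil {
		return err
	}
	return tx.Commit()
}
```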
In Freenet[1], we’ve been exploring a novel approach to consistency that avoids the usual trade-offs between strong consistency and availability. Instead of treating state as a single evolving object, we model updates as summarizable deltas—each a commutative monoid—allowing peers to merge state independently in any order while achieving eventual consistency.
This eliminates the need for heavyweight consensus protocols while still ensuring nodes converge on a consistent view of the data. More details here: https://freenet.org/news/summary-delta-sync/
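The "summarizable deltas that form a commutative monoid" idea, in its simplest textbook form, looks something like a grow-only counter CRDT: merge is associative, commutative, and idempotent, so peers can apply deltas in any order (and more than once) and still converge. A toy Go sketch, not Freenet's actual mechanism:

```go
package crdt

// GCounter is a grow-only counter: state is a per-node count, a delta is
// just another GCounter, and Merge takes the element-wise max. Because max
// is associative, commutative, and idempotent, replicas converge no matter
// what order, or how many times, deltas arrive.
type GCounter map[string]uint64

// Increment records n local events for the given node.
func (c GCounter) Increment(node string, n uint64) {
	c[node] += n
}

// Merge folds another counter (a "delta") into this one.
func (c GCounter) Merge(delta GCounter) {
	for node, count := range delta {
		if count > c[node] {
			c[node] = count
		}
	}
}

// Value is the total across all nodes.
func (c GCounter) Value() uint64 {
	var total uint64
	for _, count := range c {
		total += count
	}
	return total
}
```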
Would love to hear thoughts from others working on similar problems!
Each contract determines how state changes are validated, summarized, and merged, meaning you can efficiently implement almost any CRDT mechanism in WASM on top of Freenet. Another key difference is that Freenet is an observable KV store, allowing you to subscribe to values and receive immediate updates when they change.
That article just scratches the surface, if you'd like a good overview of the entire system check out this talk: https://freenet.org/news/building-apps-video-talk/
It feels like we're racing towards a level of complexity in software that's just impossible for humans to grasp.
But it's also more that the "other" kind of SWE work -- "backend" etc is frankly overpaid because of the copious quantities of $$ dumped into it by VC and ad money.
EDIT: Lol nvm the author is one of the authors of Hydro [1].
I work on hardware and concurrency is a constant problem even at that low level. We use model checking tools which can help.
Distributed systems are difficult to reason about.
Computer hardware today is very powerful.
There is a yo-yo process in our industry over the last 50 years between centralization and distribution. We necessarily distribute when we hit the limits of what centralization can accomplish because in general centralization is easier to reason about.
When we hit those junctures, there's a flush of effort into distributed systems. The last major example of this I can think of was the 2000-2010 period, when MapReduce, "NoSQL" databases, Google's massive arrays of supposedly identical commodity grey boxes (not the case anymore), the High Scalability blog, etc. were the flavour of the time.
But then, frankly, mass adoption of SSDs, much more powerful computers, etc. made a lot of those things less necessary. The stuff that most people are doing doesn't require a high level of distributed systems sophistication.
Distributed systems are an interesting intellectual puzzle. But they should be a means to an end not an end in themselves.
I did my MSc in Distributed Systems and it was always funny (to me) to ask a super simple question when someone was presenting distributed system performance metrics that they'd captured to compare how a system scaled across multiple systems: how long does it take your laptop to process the same dataset? No one ever seemed to have that data.
And then the (in)famous COST paper came out and validated the question I'd been asking for years: https://www.usenix.org/system/files/conference/hotos15/hotos...
Wow I love that.
Many people in our profession didn't seem to really notice when the number of IOPS on predominant storage media went from under 200 to well over 100,000 in a matter of just a few years.
I remember evaluating and using clusters of stuff like Cassandra back in the late 00s because it just wasn't possible to push enough data to disk to keep up with traffic on a single machine. It's such an insanely different scenario now.
10 to 15 years ago, you could argue, however implausibly, that hardware constraints meant vertical scaling was impossible, and you were forced to adopt a distributed architecture. Subsequent improvements in hardware performance mean that in 2025, vertical scaling is perfectly acceptable in nearly all areas, relegating distributed architecture to the most niche and marginal applications. The type of applications that the vast majority of businesses will never encounter.
When you frame the problem that way, unnecessary complexity seems like part of a healthy solution path. /h
Companies get reliability benefits from slack, but creative people abhor wasted slack. Some basic business strategy/wisdom for maintaining/managing creative slack is needed.
I've been writing distributed code now in industry for a long time and in practice, having worked at a some pretty high-scale tech companies over the years, most shops tend to favor static-location style models. As the post states, it's due largely to control and performance. Scaling external-distribution systems has been difficult everywhere I've seen it tried and usually ends up creating a few knowledgeable owners of a system with high bus-factor. Scaling tends to work fine until it doesn't and these discontinuous, sharp edges are very very painful as they're hard to predict and allocate resourcing for.
Are external-distribution systems dead ends then? Even if they can achieve high theoretical performance, operation of these systems tends to be very difficult. Another problem I find with external-distribution systems is that there's a lot of hidden complexity in just connecting, reading, and writing to them. So you want to talk to a distributed relational DB, okay, but are you using a threaded concurrency model or an async concurrency model? You probably want a connection pool so that TCP HOL blocking doesn't tank your throughput. But if you're using threads, how do you map your threads to the connections in the pool? The pool itself represents a bottleneck as well. How do you monitor the status of this pool? Tools like Istio strive to standardize this a little bit but fundamentally we're working with 3 domains here just to write to the external-distribution system itself: the runtime/language's concurrency model, the underlying RPC stack, and the ingress point for the external-distribution system.
Does anyone have strong stories of scaling an external-distribution system that worked well? I'd be very curious. I agree that progress here has stalled significantly. But I find myself designing big distributed architecture after big distributed architecture continuing to use my deep experience of architecting these systems to build static-location systems because if I'm already dealing with scaling pains and cross-domain concerns, I may as well rip off the band-aid and be explicit about crossing execution domains.
This is not true by most definitions of "snapshot". Most (all?) durable execution systems use event sourcing, so what they keep is effectively an immutable event log. And it records only the events with external side effects, which is enough to rebuild the state, not a copy of all state. While technically this is not free, it's far more efficient than capturing and storing a "snapshot" in the traditional sense.
> But this simplicity comes at a significant cost: control. By letting the runtime decide how the code is distributed [...] we don’t want to give up: Explicit control over placement of logic on machines, with the ability to perform local, atomic computations
Not all durable execution systems require you to give this up completely. Temporal (disclaimer: my employer) allows grouping of logical work by task queue which many users use to pick locations of work, even so far as a task queue per physical resource which is very common for those wanting that explicit control. Also there are primitives for executing short, local operations within workflows assuming that's what is meant there.
I don't know enough about it to map it to the author's distributed programming paradigms, but the Bloom features page [3] is interesting:
> disorderly programming: Traditional languages like Java and C are based on the von Neumann model, where a program counter steps through individual instructions in order. Distributed systems don’t work like that. Much of the pain in traditional distributed programming comes from this mismatch: programmers are expected to bridge from an ordered programming model into a disordered reality that executes their code. Bloom was designed to match–and exploit–the disorderly reality of distributed systems. Bloom programmers write programs made up of unordered collections of statements, and are given constructs to impose order when needed.
[1]: https://boom.cs.berkeley.edu
Things like re-entrant idempotence, software transactional memory, copy on write, CRDTs etc are going to have waste and overhead but can vastly simplify conceptually the ongoing development and maintenance of even non-distributed efforts in my opinion, and we keep having the room to eat the overhead.
There's a ton of bias against this, for good reason: the non-distributed concepts still just work without any hassle. But we'd be less in the mud in a fundamental way if we learned to let go of non-eventual consistency.
In the old days of databases, if you put all your data in one place, you could scale up (SMP) but scaling out (MPP) really was challenging. Nowdays, you (iceberg), or a DB vendor (Snowflake, Databricks, BigQuery, even BigTable, etc), put all your data on S3/GCS/ADLS and you can scale out compute to read traffic as much as you want (as long as you accept something like a snapshot isolation read level and traffic is largely read-only or writes are distributed across your tables and not all to one big table.)
You can now share data across your different compute nodes or applications/systems by managing permissions pointers managed via a cloud metadata/catalog service. You can get microservice databases without each having completely separate datastores in a way.
Regarding Laddad's point, building tools native to distributed systems programming might be intrinsically difficult. It's not for lack of trying. We've invented numerous algebras, calculi, programming models, and experimental programming languages over the past decades, yet somehow none has really taken off. If anything, I'd venture to assert that object storage, perhaps including Amazon DynamoDB, has changed the landscape of programming distributed systems. These two systems, which optimize for throughput and reliability, make programming distributed systems much easier. Want a queue system? Build on top of S3. Want a database? Focus on query engines and outsource storage to S3. Want a task queue? Just poll DDB tables. Want to exchange states en masse? Use S3. The list goes on.
Internally to S3, I think the biggest achievement is that S3 can use scalability to its advantage. Adding a new machine makes S3 cheaper, faster, and more reliable. Unfortunately, this involves multiple moving parts and is therefore difficult to abstract into a tool. Perhaps an arbitrarily scalable metadata service is what everyone could benefit from? Case in point, Meta's warm storage can scale to multiple exabytes with a flat namespace. Reading the paper, I realized that many designs in the warm storage are standard, and the real magic lies in its metadata management, which happens to be outsourced to Meta's ZippyDB. Meanwhile, open-source solutions often boast about their scalability, but in reality, all known ones have certain limits, usually no more than 100PBs or a few thousand nodes.
25 years ago: http://herpolhode.com/rob/utah2000.pdf (Time flies.)
This is what I see holding some applications back.
The relational model is flexible and sufficient for many needs but the ACID model is responsible for much of the complexity in some more recent solutions.
While only usable for one-to-many relationships, the hierarchical model would significantly help in some of the common areas like financial transactions.
Think IBM IMS fastpath, and the related channel model.
But it seems every neo-paradigm either initially hampers itself or grows to be constrained by Codd's normalization rules, which result in transitive closure at the cost of independence.
As we have examples like Ceph's RADOS, Kafka, etc., if you view the hierarchical file-path model as being intrinsic to that parent-child relationship, we could be distributed.
Perhaps materialized views could be leveraged to allow for SQL queries without turning the fast path into a distributed monolith.
SQL is a multi tool, and sometimes you just need to use a specific tool.
Would be interesting to see comparisons to other domains. Surely you could look at things like water processing plants to see how they build and maintain massive structures that do coordinated work between parts of it? Power generation plants. Assembly factories. Do we not have good artifacts for how these things are designed and reasoned about?
But almost all of the new innovation I'm familiar with in distributed systems is about training LLMs. I wouldn't say the programming techniques are "new" in the way this post is describing them, but the specifics are pretty different from building a database or data pipeline engine (less message oriented, more heavily pipelined, more low level programming, etc)
I think database space is still hot topic with many unsolved problems.
Agree with another commenter, observability tools do suck. I think that's true in general for software beyond a certain amount of complexity. Storing large amounts of data for observability is expensive.
But ultimately we pay it because it gives us incredibly valuable insights and has saved us countless hours in incident response, debugging, and performance profiling. It's lowered my stress level significantly.
Just learned that there is another discontinued attempt: https://serviceweaver.dev/
For me it was a required elective (you must take at least one of these 2-3 classes). And I went to college while web browsers were being invented.
When Cloud this and Cloud that started every university should have added it to the program. What the fuck is going on with colleges?
The reality is that most of this profession is learned on the job, and college acts as a filter at the start of the funnel. If someone is not capable of picking up the Paxos paper, then having had someone tell them about it 5 years ago when they had a hangover won't help.
The latter course (a) was built on a mathematical formalism that had been developed at the university proper and not used anywhere else, (b) used PVM: <https://www.netlib.org/pvm3/>, <https://en.wikipedia.org/wiki/Parallel_Virtual_Machine>, for labs.
Since then, I've repeatedly felt that I've seriously benefited from my formal languages courses, while the same couldn't be said about my parallel programming studies. PVM is dead technology (I think it must have counted as "nearly dead" right when we were using it). And the only aspect I recall about the formal parallel stuff is that it resembles nothing that I've read or seen about distributed and/or concurrent programming ever since.
A funny old memory regarding PVM. (This was a time when we used landlines with 56 kbit/s modems and pppd to dial in to university servers.) I bought a cheap second computer just so I could actually "distribute" PVM over a "cluster". For connecting both machines, I used linux's PLIP implementation. I didn't have money for two ethernet cards. IIRC, PLIP allowed for 40 kbyte/s transfers! <https://en.wikipedia.org/wiki/Parallel_Line_Internet_Protoco...>
- Consistency models (can I really count on data being there? What do I have to do to make sure that stale reads/write conflicts don't occur?)
- Transactions (this has really fallen off, especially in larger companies outside of BI/Analytics)
- Causality (how can I handle write conflicts at the App Layer? Are there Data Structures ie CDTs that can help in certain cases?)
Even basic things like "use system time/monotonic clocks to measure elapsed time instead of wall-clock time" aren't well known, I've personally corrected dozens of CRs for this. Yes this can be built in to libs, AI agents etc but it never seems to actually be, and I see the same issues repeated over-and-over. So something is missing at the education layer
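For the record, in Go the fix is nearly free, since time.Now() carries a monotonic reading and time.Since uses it; the trap is doing arithmetic on wall-clock timestamps yourself. A small sketch:

```go
package main

import (
	"fmt"
	"time"
)

func doWork() { time.Sleep(10 * time.Millisecond) }

func main() {
	// Wrong: wall-clock arithmetic. If NTP steps the clock (or someone
	// adjusts it) between these two reads, the "elapsed" value can be
	// wildly off or even negative.
	wrongStart := time.Now().UnixMilli()
	doWork()
	wrongElapsed := time.Now().UnixMilli() - wrongStart

	// Right: time.Now() also carries a monotonic clock reading, and
	// time.Since / t.Sub use it, so clock adjustments can't skew the
	// measurement.
	start := time.Now()
	doWork()
	elapsed := time.Since(start)

	fmt.Println(wrongElapsed, elapsed)
}
```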
I could just about create an associates degree around that and my graduates would run circles around any code camp you could name.
So to simplify, from 1985 to 2005 ish you could keep sequential software exactly the same and it just ran faster each new hardware generation. One CPU but transistors got smaller and (hand wavy, on chip ram, pipelining )
Then roughly around 2010 single CPUs just stopped magically doubling. You got more cores, but that meant parallel or distributed programming - your software that in 1995 served 100 people was the same serving 10,000 people in 2000. But in 2015 we needed new coding - we got NOSQL and map reduce and facebook data centres.
But the hardware kept growing
TSMC now has wafer-scale chips with 900,000 cores - but my non-parallel, non-distributed code won't run 1 million times faster - Amdahl's law just won't let me
So yeah - no one wants to buy new chips with a million cores because you aren’t going to get the speed ups - why buy an expensive data centre full of 100x cores if you can’t sell them at 100x usage.
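To put a number on that: Amdahl's law caps speedup at 1 / ((1 - p) + p/n) for a parallel fraction p on n cores, so even a 95%-parallel program tops out around 20x no matter how many cores you throw at it. A quick sketch:

```go
package main

import "fmt"

// amdahl returns the maximum speedup for a program whose parallel
// fraction is p, run on n processors: 1 / ((1-p) + p/n).
func amdahl(p, n float64) float64 {
	return 1 / ((1 - p) + p/n)
}

func main() {
	for _, p := range []float64{0.50, 0.95, 0.999} {
		fmt.Printf("p=%.3f on 1e6 cores -> max speedup %.1fx\n", p, amdahl(p, 1e6))
	}
	// p=0.500 -> 2.0x, p=0.950 -> 20.0x, p=0.999 -> ~999x:
	// a million cores buys almost nothing unless nearly all of the
	// program is parallel.
}
```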
This is the understatement of the article. There are two insanely difficult things to get right in computers. One is cryptography, and other is distributed systems. I'd argue the latter is harder.
The reason is simple enough to understand. In any program the programmer has to carry in his head every piece of state that is accessible at any given point, the invariants that apply to that state, and the code responsible for modifying that state while preserving the invariants. In sequential programs the code that can modify the shared state is restricted to inner loops and functions you call, and you have to verify every modification preserves the invariants. It's a lot. The hidden enemy is aliasing, and you'll find entire books written on the countermeasures like immutable objects, functional programming, and locks. Coordinating all this is so hard only a small percentage of the population can program large systems. I guess you are thinking "but a lot of people here can do that". True, but we are a tiny percentage.
In distributed systems those blessed restrictions a single execution thread gives us on what code can access shared state goes out the window. Every line that could read or write the shared state has to be considered, whether its adjacent or not, whether you called it here or not. The state interactions explode in the same way interactions between qubits explode. Both explode beyond the capability of human minds to assemble them all in one place. You have to start forming theorems and formulating proofs.
The worst part is that newbie programmers are not usually aware this explosion has taken place. That's why experienced software engineers give the following advice on threads: just don't. You don't have a feel for what will happen, and your code will appear to work when you test it while being a rabbit warren of disastrous bugs that will likely never be fixed. It's why Linux RCU author Paul McKenney is still not confident his code is correct, despite being one of the greatest concurrent programming minds on the planet. It's why Paxos is hard to understand despite being relatively simple.
Expecting an above-average programmer to work on a distributed system and not introduce bugs without leaning on one of the "but it is inefficient" tools he lists is an impossible dream. A merely average, experienced programmer has no hope. It's hard. Only a tiny, tiny fraction of the programmers on the planet can pull it off kind of hard.
God I want to dig a cave and live in it.
I am not sure that pointing out that today's models are going to be MUCH worse at reasoning about distributed code than serial code is "pitching".
Conversely, pointing out that the reason they are so bad at distributed is the lack of related information locality, the same problem humans often have, puts a reasonable second underline on the value of more locality in our development artifacts.