No, everything was already open source, and others had done it before too; they just made it in a way that a lot of "normal" users could start with. Then they waited too long and others created better products, or their own.
"Docker Swarm was Docker’s attempt to compete with Kubernetes in the orchestration space."
No, it was never intended like that. That some people built infra/businesses around it is something completely different, but Swarm was never intended to be a Kubernetes contender.
"If you’re giving away your security features for free, what are you selling?"
This is what is actually going to cost them their business. I'm extremely grateful for what they have done for us, but they didn't give themselves a chance. Their behaviour has been more akin to a non-profit's. Great for us, not so great for them in the long run.
Corporate customers didn't like the security implications of the Docker daemon running as root; they wanted better sandboxing and management (cgroups v2), wanted to be able to run their own internal registries, didn't want to have docker trying to fight with systemd, etc.
Docker was not interested (in the early years) in adopting cgroups v2 or daemonless / rootless operation, and they wanted everyone to pay to use Dockerhub on the public internet rather than running their own internal registries, so docker-cli didn't support alternate registries for a long long time. And it seemed like they disliked systemd for "ideological" reasons to an extent that they didn't make much effort to resolve the problems that would crop up between docker and systemd.
Because Docker didn't want to build the product that corporate customers wanted to use, and didn't accept patches when Red Hat tried to implement those features themselves, eventually Red Hat just went out and built Podman, Quay, and the entire ecosystem of tooling that those corporate customers wanted (and sold it to them). That was a bit of an own goal.
Corporate customers do not care about any of the things you mentioned. I mean, maybe some, but in general no. That's not what corps think about.
There was never "no interest" at Docker in cgv2 or rootless. Never. cgv2 early on was not usable; it lacked so much of the functionality that v1 had. It also didn't buy much, particularly because most Docker users aren't manually managing cgroups themselves.
Docker literally sold a private registry product. It was the first thing Docker built and sold (and no, it was not late, it was very early on).
That said, I've been really surprised to not see more first class CI support for a repo supplying its own Dockerfile and being like "stage 1 is to rebuild the container", "stage two is a bunch of parallel tests running in instances of the container". In modern Dockerfiles it's pretty easy to avoid manual cache-busting by keying everything to a package manager lockfile, so it's annoying that the default CI paradigm is still "separate job somewhere that rebuilds a static base container on a timer".
But if that's not the case (robotics, ML, gamedev, etc) or especially if you're dealing with a slow, non-parallel package manager like apt, that upfront dependency install starts to take up non-trivial time— particularly galling for a step that container tools are so well equipped to cache away.
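As a rough sketch of that lockfile-keyed pattern (file names here are just illustrative): copying the lockfile before the rest of the source means the expensive dependency-install layer only rebuilds when dependencies actually change, so a CI "stage 1: rebuild the container" step is usually an instant cache hit.

    # Dockerfile sketch: dependency layer keyed to the lockfile only
    FROM python:3.12-slim
    WORKDIR /app
    # Copy just the lockfile first so this layer's cache is busted only
    # when dependencies change, not on every source edit.
    COPY requirements.lock ./
    RUN pip install --no-cache-dir -r requirements.lock
    # Source changes only invalidate the layers from here down.
    COPY . .
    CMD ["python", "-m", "app"]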
I know depot helps a bunch with this by at least optimizing caching during build and ensuring the registry has high locality to the runner that will consume the image.
I’d be surprised if the systemd thing was not also true.
I think it’s quite likely Docker did not have a good handle on the “needs” of the enterprise space. That is Red Hat's bread and butter; are you saying they developed all of that for no reason?
I did large downloads all the time; I used to download 25GB games for my game consoles, for instance. I just had to schedule them and use tools that could resume downloads.
If I'd had a local docker hub I might have used docker but because I didn't it was dead to me.
crun just stamp-couples security profiles, as an example, so everything in the shared kernel that is namespace-incompatible is enabled.
This is why it is trivial to get unauditable communication between pods on a host, etc…
The former felt like a rug pull when they did it later, and the latter should have been obvious from the start. But it wasn't there in the beginning, too many alternatives from every cloud provider popped up to fill that gap, and by then it was too late.
There were a lot of cool ideas, and I think early on they were more focused on the cool ideas and less on how to make it a successful, long-lived business that didn't rely on VC funding and an exit strategy in order to succeed.
This is particularly amusing when considering they helped start the Open Container Initiative with others back in 2015.
What if Docker "the company" was just a long con to use VC bux to fund open source? I say mostly in jest.
As proven later when Kubernetes became container runtime agnostic.
Yes. It was a helpful UI abstraction for people uncomfortable with lower level tinkering. I think the big "innovations" were 1) the file format and 2) the (free!) registry hosting. This drove a lot of community adoption because it was so easy to share stuff and it was based on open source.
And while Docker the company isn't the behemoth the VCs might have wanted, those contributions live on. Even if I'm using a totally different tool to run things, I'm writing a Dockerfile, and the artifacts are likely stored in something that acts basically the same as Docker Hub.
Your comment is accurate for the original Swarm project, but a bit misleading regarding Swarm mode (released later on and integrated into docker).
I have worked on the original Swarm project and Swarmkit (on the distributed store/raft backend), and the latter was intended to compete with Kubernetes.
It was certainly an ambitious and borderline delusional strategy (considering the competition), but the goal was to offer a streamlined and integrated experience so that users wouldn't move away from Docker and would use Swarm mode instead of Kubernetes (with a simple API, secured by default, just docker to install, no etcd or external key-value metadata store required).
You can only go so far with a team of 10 people versus the hundreds scattered across Google/RedHat/IBM/Amazon, etc. There were so many evangelists and tech influencers/speakers rooting for Kubernetes already, reversing that trend was extremely difficult, even after initiating sort of a revolution in how developers deployed their apps with docker. The narrative that cluster orchestration was Google's territory (since they designed Borg that was used at a massive scale) was too entrenched to be challenged.
Swarm failed for many reasons (it was released too soon with a buggy experience and at an incomplete state, lacking a lot of the features k8s had, but also too late in terms of timing with k8s adoption). However, the goal for "Docker Swarm mode" was to compete with Kubernetes.
That would be news to the then Docker CTO, who reached out to my boss to try to get me in trouble, because I was tweeting away about [cloud company] and investing heavily in Kubernetes. The cognitive dissonance Docker had about Swarm was emblematic of the missteps they took during that era where Mesos, Kube and Swarm all looked like they could be The Winner.
At two places I worked their reps reached out to essentially ensnare the company in a sort of “gotcha” scheme where if we were running the version of Docker Desktop after the commercial licensing requirement change, they sent a 30 day notice to license the product or they’d sue. Due to the usual “mid size software company not micromanaging the developers” standard, we had a few people on a new enough version that it would trigger the new license terms and we were in violation. They didn’t seem to do much outreach other than threatening us.
So in each case we switched to Rancher Desktop.
The licensing cost wasn’t that high, but it was hard to take them in good faith after their approach.
This tracks with what I saw: one day an email went out telling everyone to make sure they didn't have Docker Desktop installed.
It was wild because we were on the heels of the containerize-all-the-things push and now we're winding down docker?? Sure, whatever you say, boss.
What exactly are you objecting to? Since you say “I’m fine with the idea that licensing Docker and Docker Desktop is a good thing to do” it’s not the change, so what is it? The 30 days, them saying they would sue after that, or the tone?
I haven’t seen the messages so I cannot comment on that, but if you accept that the licensing can be changed, what’s wrong with writing offenders to remind them to either stop using the product or start paying? And what’s wrong with giving them 30 days, since, in my memory, they announced the licensing change months in advance?
It reminds me of someone handing me something on the street then asking me to pay for it, whenever they do that I just throw whatever it is as far as I can and keep walking.
If they never changed that licensing, nobody would have had an incentive to put big effort into an alternative.
I think the hosted Docker registry should have been their first revenue source and then they should have created more closed source enterprise workflow solutions and hosted services that complement the docker tooling that remained truly open source, including desktop.
> Due to the usual “mid size software company not micromanaging the developers” standard

You didn't have a device management system or similar product managing software installs (SCCM in Windows land)? That's table stakes for any admin.

At one place there wasn’t and at the other it wasn’t well managed. I agree from a compliance point of view and have advocated for this, but I was not on the IT/Ops side of the business so I could only use soft power.
The CTO at the first company had a “zero hindrances for the developers” mindset and the latter was reeling from being the merger of five different companies. The latter did a better job of trying, to say the least, but wasn’t great about it. The outcome was the same nonetheless.
But outside of 'make sure the Oracle lawyers never contact us' they don't want us policing them, and they are admins on their own devices. For a lot of businesses their computer network has separate production and business zones, and the production zone is a YOLO type situation.
At my midsize company, our engineers could absolutely say something like “we don’t like Terraform Cloud, we want to switch to OpenTofu and env0” and our management would be okay with it and make it happen as long as we justify the change.
We wouldn’t even really have to ask permission if the change was no cost.
I think OP's point is they failed on this part. "Making it happen" should have been ensuring that a compliant and approved version of the software was the one made available to the developers. At a large scale that is done via device management, but even at a medium-sized enterprise that should have been done via a source management portal of some sort.
The tech is open source and free forever - that's somehow a problem? The company monetised enterprise features while keeping core and Hub free - also a problem? It is exploring AI tools, like everyone else is - should they not? Should they just stay stagnant? It has made hardened images free instead of making that a premium feature only for people in banks - and monetising SLAs, how is that a problem?
Docker is still maintaining the runtime that orbstack, podman, etc. are all using, and that all the cloud providers are using, but apparently at the same time Docker is deeply irrelevant and should not make money - while all of us on HN with well-paid tech jobs get to have high-minded thoughts on their every move to pay their employees and investors...
> Docker is still maintaining the runtime that orbstack, podman, etc. are all using, and that all the cloud providers are using
I need to fact check that one. runc was donated by Docker to OCI a while back. And containerd was created under the CNCF from a lot of Docker code and ideas. podman is sitting on the RedHat containers stack, which has their own code base. Docker itself uses runc and containerd, and so do most Kubernetes deployments. Many of these tools go to containerd directly without deploying the Docker engine.
No. containerd was created by Docker, as part of a refactoring of dockerd, then later donated to cncf. Over time it gained a healthy base of maintainers from various companies. It is the most successful of Docker's cncf contributions. But it was not created under the CNCF.
Podman? Podman appears to have reimplemented basically everything. What runtime are you talking about?
Look at the maintainer lists of containerd and moby, which are used by loads of others, several docker employees on those lists - I didn't check what their amount of involvement is compared to other companies, nor whether they are even sanctioned by docker to do the work, but afaik those projects came out of OCI with Docker as one of the primary backers.
Docker in its entirety was at risk of being wrapped as a commodity component. By spinning out lower-level components under a different brand, they (we) made it possible to keep control of the Docker brand, and use it to sell value-added products.
Source: I'm the founder of Docker.
Strongly disagree. The core Docker technology was an excellent product and as the article says, had a massive impact on the industry. But they never found a market for that technology at any price point that wasn't ~free, so they didn't have PMF. That technology also only took off in the way it did because it was free and open source.
AMA.
We heard the feedback that we should pick a lane between CI and AI agents. We're refocusing on CI.
We're making Dagger faster and simpler to adopt.
We're also building a complete CI stack that is native to Dagger. The end-to-end integration allows us to do very magical things that traditional CI products cannot match.
We're looking for beta testers! Email me at solomon@dagger.io
Only listen to your users and customers, ignore everyone else.
Don't hire an external CEO unless you're ready to leave. Hiring a CEO will not fix the loneliness of not having a co-founder.
Having haters is part of success. Accept it, and try to not let it get to you.
Don't partner with Red Hat. They are competitors even though they're not honest about it.
Not everyone hates you even though it may seem that way on hacker news and twitter. People actually appreciate your work and it will get better. Keep going.
In the early days we tried very hard to accommodate their needs (for example by implementing support for devicemapper as an alternative to aufs). But we soon realized our priorities were fundamentally at odds: they cared most about platform lock-in, and we cared most about platform independence. There was also a cultural issue: when Red Hat contributes to open source it's always from a position of strength. If a project is important to them, they need merge authority. Because of the diverging design priorities, they never got true merge rights on the repo: they had to argue for their pull requests like everyone else, and input from maintainers was not optional. Many pull requests were never merged because of fundamental design issues, like breaking compatibility with non-Red Hat platforms. Others because of subjective architecture disagreements. This was very frustrating to them, and led to all sorts of drama and bad behavior. In the process I lost respect for a company I once admired.
I think they made a mistake marketing podman as a drop-in replacement for Docker. This promise of compatibility limited their design freedom and I'm sure caused the maintainers a lot of headaches; compatibility is hard!
Ultimately the true priority of podman - native integration with the Red Hat platform - makes it impossible for it to overtake Docker. I'm sure some of the podman authors would like to jettison that constraint, but I don't think that's structurally possible. Red Hat will never invest in a project that doesn't contribute to their platform lock-in.
- take advantage of the current agentic wave and announce a Docker Sandbox runner product that lets you run agents inside cloud sandboxes
Things will actually change quite a bit. First of all, millions of people depend on Docker Desktop, and Podman Desktop is (as everything from RedHat is) a poor replacement for it. And the Docker CLI and daemon power a huge amount of container technology; Podman is, again, quite a poor replacement. If these solutions go away, a large amount of business and technology is gonna get left in the lurch.
Second, most of the containerized world depends on Docker Hub. If that went away, actually a huge swath of businesses would just go hard-down, with no easy fix. I know a million HNers will be crying out about the evils of "centralization", but actually the issue is it's corporate-run rather than an open body. The architecture should have had mirrors built-in from the start, but even without mirrors, the company and all its investment and support going away is the bigger rug-pull.
The industry and ecosystem have this terribly human habit of rushing at the path-of-least-resistance. If we don't plan an intelligent, robust migration strategy away from Docker, we'll end up relying on something worse.
There's a rootless [0] option, but that does require some sysadmin setup on the host to make it possible. That's a Linux kernel limitation on all container tooling, not a limitation of Docker.
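For reference, the one-time setup is roughly this; a sketch assuming a Debian/Ubuntu host with the upstream rootless-extras package installed (details vary by distro):

    # As root, once: install the uid/gid mapping helpers
    sudo apt-get install -y uidmap
    # As the unprivileged user: set up and run a per-user daemon
    dockerd-rootless-setuptool.sh install
    systemctl --user enable --now docker
    export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock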
> and that it keeps images between users in a database.
Not a traditional database, but content-addressable filesystem layers, commonly mounted as an overlay filesystem. Each of those layers is read-only and reusable between multiple images, allowing faster updates (when only a few layers change) and conserving disk space (when multiple images share a common base image).
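You can see that layer sharing directly from the CLI, for example (using nginx purely as an example image):

    # List the layers an image is built from; layers with identical digests
    # are stored once and shared by every image that uses them.
    docker history --no-trunc nginx:latest
    docker image inspect --format '{{json .RootFS.Layers}}' nginx:latest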
> I want to move things around using mv and cp, and not have another management layer that I need to be aware of and that can end up in an inconsistent state.
You can mount volumes from the host into a container, though this is often an anti-pattern. What you don't want to do is modify the image layers directly, since they are shared between images. That introduces a lot of security issues.
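A minimal sketch of the bind-mount approach (the host path is just an example):

    # Mount a host directory into the container instead of touching image
    # layers; edits on the host are visible at /data inside the container.
    docker run --rm -it -v "$HOME/projects/data:/data" alpine sh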
Yeah, I mean, what do you expect, or what is the alternative? If you have a process that needs access to something only root typically can do, and the solution has been to give that process root so it can do its job, you usually need root to be able to give that process permission to do that thing without becoming root. Doesn't that make sense? What alternative are you suggesting?
You have to be root to set it up, but after that you don't need any special privileges. With Docker the only option is to basically give everyone root access.
It's true that it requires root for some setup though. Unclear if OP was complaining about that.
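To make the "give everyone root" point concrete: with the default daemon, anyone allowed to talk to the Docker socket (e.g. anyone in the docker group) is effectively root on the host:

    # The daemon runs as root, so any user who can reach it can bind-mount
    # the host filesystem and act as root inside it:
    docker run --rm -v /:/host alpine chroot /host sh -c id
    # prints uid=0(root) gid=0(root) ... against the *host* filesystem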
Sure you cannot install docker or podman as a non-root user. But take your argument a bit further: what if the kernel is compiled without cgroups support? Then you will need root to replace the kernel and reboot. The root user can do arbitrarily many things to prevent you from installing any number of software. The root user can prevent you from using arbitrary already installed software. The root user can even prevent you from logging in.
It is astounding to me that someone would complain that a non-root user cannot install software. A much more reasonable complaint is that a non-root user can become root while using docker. This complaint has been resolved by podman.
Depends on what you mean by "install software".
If your definition is "put an executable in a directory that is in every other user's standard $PATH", then yes, this is an absurd complaint. Of course only root should be able to do this.
If your definition is "make an executable available to run as my user", then no, this is not absurd. You absolutely should not need root to be able to run software that doesn't require root privileges. If the software requires root, it's either doing something privileged, or it's doing it wrong.
> You absolutely should not need root to be able to run software that doesn't require root privileges.
But root can approve or disapprove you running that software. Have you heard of SELinux or AppArmor? The root user can easily and simply preventing you from running an executable even as your own user.
Malware can run as your own user and exfiltrate files you have access to. The malware does not need root privileges. Should root have the capability to prevent the malware from being installed? Regardless of what your definition of “install” is, the answer is unequivocally yes.
I would argue the reverse: that Docker's value was itself the product-market fit. Docker the technology was commoditized and open-source almost from its genesis, because its technology had been built by Borg engineers at Google. It provided marginally more than ergonomics, but ergonomics was all it needed - the missing link between theory and practice.
Just a note: I work for an org that sells enterprise software shipped as container images and publishes on Docker Hub and Red Hat. No issues migrating to apple/container.
"Deploying" one is as simple as `nixos-rebuild switch --flake .#hostName`
It will get a company started but if the tech has any success, that success is always replicable (even if the exact tech isn’t). IP protection is worthless and beside the point.
The only moat is the creativity of a company’s core staff when they spend a lot of time on valuable problems. Each thing they produce will grow, live, and die, but if the company has no pipeline it is doomed.
And VCs know this, which is why they want to pump startups up, and then cash out before they flop, even while founders talk about all the great things they can do next.
Naming your company after your one successful product is a pretty good sign of a limited lifespan.
Docker had a choice of markets to go after; the enterprise market was being dominated by the hyperscalers pushing their own Kubernetes offerings, so they pivoted to focus on the developer tooling market. This is a hard market to make work, particularly since developers are famous for not paying for tooling, but they appear to be making a profit.
With Docker Hub, it's always been a challenge to limit how much it costs to run. And with more stuff being thrown into ever larger images, I don't want to see that monthly bill. The limits they added hurt, but they also made a lot of people realize they should have been running their own mirror on-prem, if only to better handle an upstream outage when us-east-1 has a bad day.
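For what it's worth, pointing the daemon at a pull-through mirror for Hub images is a one-line daemon.json change (the mirror URL here is just a placeholder):

    {
      "registry-mirrors": ["https://hub-mirror.internal.example"]
    }

Restart the daemon and Hub pulls go through the mirror first, which softens both the rate limits and upstream outages.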
Everything else has been pushing into each of the various popular development markets, from AI, to offloading builds to the cloud, to Hardened Images. They release things for free when they need to keep up with the competition, and charge when enterprises will pay for it.
They've shifted their focus a lot over the years. My fear would be if they stayed stagnant, trying to extract rents without pushing into new offerings. So I'm not worried they'll fail this year, just like I wasn't worried any of the previous years when similar posts were made.
But the actual experience with developing on VSCode with Dev Containers is not great. It's laggy and slow.
I have also used them remotely (ssh and using tailscale) and noticed a little lag, but nothing really distracting.
First of all, it supports containers natively: Windows' own ones, and Linux on WSL.
Secondly, because Microsoft did not want to invent their own thing, the OS APIs are exposed the same way the Docker daemon would expect them.
Finally, with the goal of improving Kubernetes support and the ongoing changes to container runtimes in the industry, nowadays it exposes several touch points.
https://learn.microsoft.com/en-us/virtualization/windowscont...
If you open a native Windows folder in VSCode and activate the Dev Container, it will use the special drvfs mounts that communicate via Plan 9 with the host Windows OS to access native Windows files from the Docker distro. Since it is a network layer across two kernels, it is slow as hell.
FYI: if I were Docker, I'd stand up some bare-metal hosting (i.e., a Docker Cloud) designed around making it easier for novice developers to take containers and turn them into web applications, with a product similar to Supabase built around this cloud to let novice developers quickly prototype and launch apps without learning how to do deployments in more sophisticated clouds. Supabase and AI vibe coders pair well, but the hole in the market is vibe coders who want to launch a vibe-coded web app but don't know how to deploy containers to the cloud without a steep learning curve. That keeps many vibe coders trapped in AIO vibe coding platforms like Lovable and AI Studio.
Is it really a hole? I'm not the target user, but I keep coming across "Build & deploy your own platform/service/application with VibeCodingLikeThereIsNoTomorrow" and similar, maybe new one every week or so.
Gordon is the character from Half Life.
Docker is a piece of software. Don't anthropomorphize it.
Early 2017 was peak Docker and Docker Inc. Those were the days. Container hype was everywhere. Before moby. Before all the pivots.
Microsoft was embracing open source and the cloud. They were acquiring dev tools.
It was a missed opportunity for both companies.
Open infrastructure is hard to monetize. Old school robotics players have a playbook for this. You may or may not agree DBs are infra but Oracle has done well by capitalistic standards.
The reality is that in our economy exploitation is a basic requirement. Nothing says a company providing porcelain for Linux kernel capabilities has a right to exist. What has turned into OCI is great. Docker Desktop lost on Mac to OrbStack and friends (but I guess they have caught back up?). The article does make it clear they have tried hard to find a place to extract rent, and it is probably making enough for a 10-100 person company to be very comfortable, but 500-1000 seems very overgrown at this point.
They really should not have given up on Swarm just to come back to it. Kubernetes is overkill for so many people who use it just for a convenient deployment story.
If I wrote the best word processor in the world, I could probably sell it for a decent sum to quite a few people.
However if I expressed my revenue expectations as a percentage of revenue from the world's bestselling novels, I would be very quickly disappointed.
I worked in engineering software for a long time and because of who we sell to, there's always been a very hard cost-benefit analysis for customers of SaaS in that space. If customers didn't see a saving equal to more than the cost of the software in Y1 they could and would typically cancel.
But not impossible. Terraform seems to have paid its creator quite well.
Imagine if Docker the company could charge AWS and Google for their use of their technology.
Imagine if Redis, Elastic, and so many other technologies could.
Modern database companies will typically dual license their work so they don't have their lunch eaten. I've done it for some of my own work [3].
You want your customers to have freedom, but you don't want massive companies coming in and ripping you off. You'd also like to provide an "easy path" for payments that sustain the engineering, without requiring your users to be bound to you.
"OSI-approved" Open Source is an industry co-opt of labor. Amazon and Google benefit immensely with an ecosystem of things they can offer, but they in turn give you zero of the AWS/GCP code base.
Hyperscalers are miles of crust around an open source interior. They charge and make millions off of the free labor of open source.
I think we need a new type of license that requires that the companies using the license must make their entire operational codebases available.
[3] https://github.com/storytold/artcraft/blob/main/LICENSE.md
Big tech companies saw this as an opportunity to build proprietary value-add systems around open source, but not make those systems in turn open. As they scaled, it became impossible to compete. You're not paying Redis for Redis. You're paying AWS or Google.
To compete at offering infrastructure maybe, but what I would like is more capability to build solutions.
And I think that today one has many more open-source technologies that one can deploy with modest effort, so I see progress, even if some big players take advantage of people who don't want to, or are not able to, make even modest efforts.
Part of that was that the platform churn costs were a new thing for developers that needed to be priced in now. In the "old world" aka Windows, application developers didn't need to do much, if any at all, work to keep their applications working with new OS versions. DOS applications could be run up until and including Windows 7 x32 - that meant in the most ridiculous case about 42 years of life time (first release of DOS was 1981, end of life for Win 7 ESU was 2023). As an application developer, you could get away with selling a piece of software once and then just provide bug fixes if needed, and it's reasonably possible to maintain extremely old software even on modern Windows - AFAIK (but never tried it), Visual Basic 6 (!!!) still runs on Windows 11 and can be used to compile old software.
In contrast to this, with both major mobile platforms (Android and iOS) as an app developer you have to deal with constant churn that the OS developer forces upon you, and application stores make it impossible to even release bugfixes for platforms older than the OS developer deems worthy to support - for Google Play Store, that's Android 12 (released in 2021) [1], for iOS the situation is a bit better but still a PITA [2].
[1] https://developer.android.com/google/play/requirements/targe...
An "issue" is that Docker these days mostly builds on open standards and has well documented APIs. Open infrastructure like this has only limited vendor lock-in.
Building a Docker-daemon-compatible service is not trivial, but it was already mostly done with podman. It is compatible to the extent that the official docker CLI mostly works with it out of the box (having implemented the basic Docker HTTP API endpoints too). AWS/GCP could almost certainly afford to build a "podman" too, instead of licensing Docker.
This is not meant to defend the hyperscalers themselves, but should maybe put approaches like this in perspective. Docker got large, among other things, because it was free; monetizing after that is hard (see also Elasticsearch/Redis and the immediate forks).
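As a concrete example of that compatibility: podman can serve a Docker-style REST API on a socket that the stock docker CLI (or compose) can be pointed at, roughly like this on a systemd host:

    # Expose podman's Docker-compatible API on the per-user socket
    systemctl --user enable --now podman.socket
    # Point the unmodified docker CLI at it
    export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
    docker ps   # actually served by podman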
I can't imagine. Tell me one software project used in AWS/GCP that Amazon/Google pay for. Not donations (like for Linux), but PAID for.
Docker started as a wrapper over LXC, Amazon has enough developers to implement that in a month.
For a while, Docker seemed to focus on developer experience.
ahh yes, docker desktop, where the error messages are "something went wrong", and the primary debugging step is to wipe it, uninstall, and reinstall.

It has been a year without problems since I enabled the WSL2 engine for Docker.
Honestly they should make the WSL2 Docker engine mandatory because otherwise things barely work.
There is a lot more than a simple chroot to Docker though - with FreeBSD Jails being a stepping stone along the way. It's real innovation and why it won over alternatives was the tooling and infrastructure around the containers - particularly distributing them.
We got an amazing durable essential piece of software from someone investing billions of dollars.
Now, the fact that they didn't get their money back, well, who cares? Not me, it wasn't my money.
Sucks for them, maybe -- but that's far better than enshittification for everyone.
I tried to substitute docker-compose with Podman and Quadlets on a test server the other day, but was shocked at how badly the overall concept is described. Most materials I found skimmed over the ability to run it as root/user and how different that is in configuration, and repeated the same 4-6 command mantra.
Spent a few hours on it and just... failed to run a single container. systemctl never noticed my Quadlet definitions, even though podman considered my .container file registered.
A bit.. frustrating, I expected smoother sailing.
Then you can just create a few-line systemd unit definition, and it integrates as a normal systemd unit, with logs visible via journalctl etc.
Short of weeding through the docs, I found the "Play with Kube using Podman" talk on DevConf's YouTube channel helpful.
systemctl --user daemon-reload
to let systemd ingest the changes. And if you have configured your container to start on boot, you still have to start it by hand the first time, as you typically won't reboot during development. If you have multiple containers, it might be easiest to have them in one pod, so you only need to start the pod.

I agree that the documentation needs a good tutorial to show the complete concept as a starting point. There are multiple ones on the internet, though.
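For anyone else stuck at the same point, a minimal working Quadlet looks roughly like this (image and names are just examples); for the rootless case the file has to live under ~/.config/containers/systemd/ to be picked up:

    # ~/.config/containers/systemd/web.container
    [Unit]
    Description=Example nginx container

    [Container]
    Image=docker.io/library/nginx:latest
    PublishPort=8080:80

    [Service]
    Restart=always

    [Install]
    WantedBy=default.target

After a `systemctl --user daemon-reload`, the generated unit shows up as web.service and behaves like any other unit (systemctl --user start web.service, journalctl --user -u web.service).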
The gap with Podman is closing though, and most users don't need any of these in the first place.
What is the edge that docker provides these days?
Enterprise support and Docker Desktop make it nearly seamless to get set up using containers. I've tried Rancher/podman/buildah and the experience introduced too much friction for me without being on a Linux system.

I'll add that needing to be on the "right" Linux system is another strike against Podman. Last I checked, if I wasn't on a Red Hat derivative I was in the wilderness.
I found the signal-to-noise ratio better in Podland. As a newb to the docker space, I was overwhelmed: should I swarm, should I compose, what's this "register my thing"? And people are freaking out about the root stuff. I'm sure I still only use and understand about 10% of the pod(man) space, but it's way better than how I felt in the docker space.
I miss when software engineering put a high value on simplicity.
That you are not the average developer
Are you suggesting that docker provides an (unspecified) edge to developers who are better than average? Or to those who are mediocre? Or...
Jerry is a good friend of mine and I think a great VC; he comes from the VMware world and was part of building the VMware enterprise strategy. When all the container stuff was going down, I was trying to understand how DigitalOcean needed to play in the container space, so I spent a lot of time talking to people and trying to understand it (we decided we basically... shouldn't, although we looked at buying Hashi). But it was clear at the time that the docker team went with Jerry because they saw themselves either displacing VMware or doing a VMware-style play. Either way, we all watched them start the process of moving to a real enterprise footing out of just a pure-play devtool in 2014. It might have worked too (although frankly their GTM motions were very very strange), but Kubernetes... yah. You might recall Flo was on the scene too, selling his ideas at Mesosphere, and the wonderful Alex Polvi with CoreOS. It was certainly an interesting time; I think about that period often, and it is a bit of a shame what happened to docker. I like Solomon a lot and think he's a genuinely genius dude.
He'd always try to get us into various technologies, Docker was one of them. It wasn't really relevant for the job, but I could see its uses.
Now that I think about it, I don't think anything they did on the tech discovery front was useful. Got stuck on Confluence, which required us to save as a .pdf for our users to view lmao. Credit for being super smart with coding; he was a wiz on code reviews.