24 points by RedShift1 3 hours ago | 8 comments
  • raffraffraff 3 hours ago
    K3s + FluxCD. There's something nice about using git to add a Helm repo and a Helm release with a few values, then 'git push'. Shortly afterwards there's a new DNS record and a TLS cert, and I can hit https://mynewservice.example.com
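
    A minimal sketch of what that looks like in git, assuming a Flux v2 setup (the repo URL, chart name, and hostname are all placeholders):

    ```yaml
    # Hypothetical Flux manifests: a HelmRepository source plus a
    # HelmRelease. Exact apiVersions depend on your Flux version.
    apiVersion: source.toolkit.fluxcd.io/v1
    kind: HelmRepository
    metadata:
      name: example-charts
      namespace: flux-system
    spec:
      interval: 1h
      url: https://charts.example.com
    ---
    apiVersion: helm.toolkit.fluxcd.io/v2
    kind: HelmRelease
    metadata:
      name: mynewservice
      namespace: default
    spec:
      interval: 10m
      chart:
        spec:
          chart: mynewservice
          sourceRef:
            kind: HelmRepository
            name: example-charts
            namespace: flux-system
      values:
        ingress:
          enabled: true
          host: mynewservice.example.com
    ```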
    • frizlab 2 hours ago
      Flux is the best thing that ever happened to ops. I set it up a few years back at my previous company, and it was a revelation.
  • himata4113 3 hours ago
    > Kubernetes solves real problems for the 1% who need it. The other 99% are paying a massive complexity tax for capabilities they never use, while 87% of their provisioned CPU sits idle.

    This is where the author is just wrong:

    - abstracts away SSH - makes it pretty much unnecessary

    - RBAC multi-tenancy

    - better automation

    - orchestrating more than one cluster

    - better infrastructure as code

    - resource provisions are only as good as you make them; if you don't want them, use only limits

    - large mindshare; Bitnami was great

    I use k3s for my home network because it's simple and easy. Thinking that k8s is overengineered is just plain wrong - it's just different, especially if you compare k8s distributions designed for different things: k3s, for example, bundles CSI, CNI, kubectl, and ingress for you.

    I actually struggle with Compose (the 'orchestration' alternative) significantly more, since it usually requires complicated workarounds for missing features.

    I have been running 5 k8s-flavored clusters, ranging from 1 to 40 nodes, for more than half a decade.

    • NewJazz 3 hours ago
      The author claimed cert-manager as inherent k8s overhead (it's not) but then didn't mention certificate management with Docker Swarm at all. They lost me there.
      • SOLAR_FIELDS 2 hours ago
        This is the thing about Kubernetes that these short-sighted takes always seem to miss. Kubernetes is complicated because deployment is complicated. For every little knob in k8s there is a pretty good standard path. Need certs? cert-manager. Autoscaling? Cluster Autoscaler or KEDA. Load balancing? Handled. All wheels you would otherwise need to reinvent yourself.
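
        As a concrete example of how small the "standard path" glue is, a sketch of a plain HorizontalPodAutoscaler (the Deployment name and targets are made up):

        ```yaml
        # Hypothetical HPA: scale the "web" Deployment between 2 and
        # 10 replicas, targeting 70% average CPU utilization.
        apiVersion: autoscaling/v2
        kind: HorizontalPodAutoscaler
        metadata:
          name: web
        spec:
          scaleTargetRef:
            apiVersion: apps/v1
            kind: Deployment
            name: web
          minReplicas: 2
          maxReplicas: 10
          metrics:
            - type: Resource
              resource:
                name: cpu
                target:
                  type: Utilization
                  averageUtilization: 70
        ```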
      • k_roy 2 hours ago
        The author mostly lost me when he started doing comparative line counts between Docker Swarm and Kubernetes.

        And the Docker Swarm example didn't even accomplish the same thing.

      • mystifyingpoi 2 hours ago
        I agree. Honestly, this overhead doesn't exist in practice. I've never even checked what's inside the cert-manager namespace - it gets deployed for every new cluster, it works, someone automated this, so who cares?
        • k_roy 2 hours ago
          No kidding. Using cert-manager with my DNS on Cloudflare or GKE is about the easiest, most mindless, zero-friction Let's Encrypt implementation I've ever used.
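
          For reference, a sketch of that setup, assuming cert-manager with a Cloudflare DNS-01 solver (the email and Secret names are placeholders, and the API-token Secret must already exist):

          ```yaml
          # Hypothetical ClusterIssuer: Let's Encrypt via Cloudflare
          # DNS-01. Certificates then reference this issuer by name.
          apiVersion: cert-manager.io/v1
          kind: ClusterIssuer
          metadata:
            name: letsencrypt-prod
          spec:
            acme:
              email: admin@example.com
              server: https://acme-v02.api.letsencrypt.org/directory
              privateKeySecretRef:
                name: letsencrypt-account-key
              solvers:
                - dns01:
                    cloudflare:
                      apiTokenSecretRef:
                        name: cloudflare-api-token
                        key: api-token
          ```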
      • himata4113 3 hours ago
        [dead]
  • Taikonerd 3 hours ago
    > If you need granular control over every tiny aspect of your container orchestration — network policies, pod scheduling, resource quotas, multi-tenant isolation, custom admission controllers, autoscaling on custom metrics — Kubernetes gives you knobs for all of it.

    > The problem is that 99% of teams don't need any of those knobs.

    I keep hoping for a Docker Swarm revival. It's the right size for small-to-medium-size deployments with normal requirements.
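
    For a sense of what "right size" means, a sketch of an entire Swarm deployment as a stack file (image and port are placeholders), deployed with `docker stack deploy -c stack.yml web`:

    ```yaml
    # Hypothetical Compose file for Docker Swarm mode: three
    # replicas of one service behind the routing mesh.
    version: "3.8"
    services:
      web:
        image: example/web:1.0
        ports:
          - "80:8080"
        deploy:
          replicas: 3
          restart_policy:
            condition: on-failure
    ```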

    • nitinreddy88 3 hours ago
      Every enterprise team (at least those in the B2B business) needs this. The sheer number of security clearances (zero-trust boundaries) and security-compliance requirements makes it a must. Maybe in the B2C space you don't need that, depending on how secure you want to be given the data you hold.
      • NewJazz 3 hours ago
        Yeah, I was trying to give the post serious consideration, but the author just flatly dismissed network policies as not needed, suggesting that we just make new overlay networks for every set of containers that need to communicate. This post really doesn't resonate with me, even though I am on a small team using k8s at a small company.
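
        For comparison, a sketch of the kind of NetworkPolicy being dismissed (the namespace and labels are made up): only frontend pods may reach the API pods, and only on one port.

        ```yaml
        # Hypothetical NetworkPolicy: allow app=frontend -> app=api
        # on TCP 8080; all other ingress to app=api pods is denied.
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        metadata:
          name: allow-frontend-to-api
          namespace: default
        spec:
          podSelector:
            matchLabels:
              app: api
          policyTypes:
            - Ingress
          ingress:
            - from:
                - podSelector:
                    matchLabels:
                      app: frontend
              ports:
                - protocol: TCP
                  port: 8080
        ```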
    • SOLAR_FIELDS 2 hours ago
      ECS Fargate is basically this on AWS. It's just not cloud-agnostic. But Swarm itself, while cloud-agnostic, is a proprietary product as well, so you still get the lock-in, just at a different layer.
  • dwroberts 3 hours ago
    Can you control the docker swarm API from within a container that is running inside of it?

    I think one of the killer features of k8s is how simple it is to write clients that manipulate the cluster itself, even when they're running from inside of it. Give them the right role and you're done. You don't even have to write something as complete as an actual controller/operator - but that's an option too.
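
    "The right role" amounts to something like this sketch (the names and permitted verbs are made up): a ServiceAccount bound to a Role, which an in-cluster client picks up via its mounted token.

    ```yaml
    # Hypothetical RBAC for an in-cluster client that may read Pods
    # in its own namespace. Set spec.serviceAccountName on the pod;
    # the client libraries discover the mounted token automatically.
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: pod-watcher
      namespace: default
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: pod-reader
      namespace: default
    rules:
      - apiGroups: [""]
        resources: ["pods"]
        verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: pod-watcher-reads-pods
      namespace: default
    subjects:
      - kind: ServiceAccount
        name: pod-watcher
        namespace: default
    roleRef:
      kind: Role
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io
    ```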

    • itintheory 3 hours ago
      You can. I think there are a couple of approaches: bind-mount the Docker socket; expose it on localhost and use host networking for the consuming container; or use one of the various proxy projects for the socket. There may be other ways - curious if anyone else knows more.
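
      A sketch of the bind-mount variant in a Compose file (the service name and image are placeholders; note the security caveat in the reply below):

      ```yaml
      # Hypothetical service with the Docker socket bind-mounted.
      # Write access to this socket is effectively root on the host.
      version: "3.8"
      services:
        deployer:
          image: example/deployer:latest
          volumes:
            - /var/run/docker.sock:/var/run/docker.sock
      ```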
      • mystifyingpoi 2 hours ago
        > bind mount the docker socket

        Bind-mounting /var/run/docker.sock gives full root access to anyone that can write to it. It's a complete non-starter for any serious deployment and shouldn't even be considered.

      • NewJazz 3 hours ago
        That's not even close to the same as a well thought out rbac system, sorry.
  • k_roy 3 hours ago
    The author here repeatedly claims that teams would function identically on Swarm and are wasting resources using Kubernetes.

    You don’t even need to be a mid-sized team to need stuff like RBAC, service mesh, multi-cluster networking, etc.

    Claiming that kubernetes only "won" because of economic pressure is only true in the most basic sense, and claiming it's a resume padder is flat-out insulting to all its actual technical merits.

    The multi-tenant nature and innate capabilities are partly the economics of it, but operators, extensibility, and platform portability across different environments are actual technical merits.

    Claiming that autoscaling is optional and not required for most production environments is at best myopic.

    It also greatly undersells the operational complexity that autoscaling actually solves versus just a reactive script based solely on CPU: metrics pipelines, cluster-level resource constraints, and pod disruption budgets.
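
    For the unfamiliar, a sketch of one of those pieces, a PodDisruptionBudget (the label and threshold are made up):

    ```yaml
    # Hypothetical PDB: during voluntary disruptions (node drains,
    # upgrades), at least 2 pods labeled app=api must stay up.
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: api-pdb
    spec:
      minAvailable: 2
      selector:
        matchLabels:
          app: api
    ```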

    As for the repeated claim that it just "works": great. Not working is more a function of the application than the platform.

    I dunno, this whole article frames kubernetes as a massive overhead and monolithic beast rather than the programmable infrastructure that it is.

    It also tries to minimize many real-world needs like multi-team isolation, extensibility, and ecosystem integrations.

    • mystifyingpoi 2 hours ago
      > I dunno, this whole article frames kubernetes as a massive overhead

      The author describes his context as a setup with two $83/year VPS instances - a scale so incredibly minuscule compared to typical deployments that any of his arguments against one of the core cloud technologies falls flat.

      Of course he doesn't need Kubernetes. It's fine.

  • mzi 3 hours ago
    Was Betamax superior to VHS? https://www.youtube.com/watch?v=_oJs8-I9WtA
  • johnfn 3 hours ago
    This article is very clearly AI-generated. I'd rather read the prompts next time, thanks.
  • verdverm 3 hours ago
    https://k3s.io/ is my new go-to for this

    Docker Swarm doesn't have the mindshare for effective hiring.