283 points by jakelsaunders94 9 hours ago | 42 comments
  • 3np7 hours ago
    > I also enabled UFW (which I should have done ages ago)

    I recommend against UFW.

    firewalld is a much better pick in current year and will not grow unmaintainable the way UFW rules can.

        firewall-cmd --set-default-zone=block
        firewall-cmd --permanent --zone=block --add-service=ssh
        firewall-cmd --permanent --zone=block --add-service=https
        firewall-cmd --permanent --zone=block --add-port=80/tcp
        firewall-cmd --reload
    
    Configuration is backed by XML files in /etc/firewalld and /usr/lib/firewalld instead of the brittle pile of sticks that is UFW's rules files. Use the nftables backend unless you have your own reasons for needing legacy iptables.

    Specifically for docker it is a very common gotcha that the container runtime can and will bypass firewall rules and open ports anyway. Depending on your configuration, those firewall rules in OP may not actually do anything to prevent docker from opening incoming ports.

    Newer versions of firewalld give an easy way to configure this via StrictForwardPorts=yes in /etc/firewalld/firewalld.conf.
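
    A minimal sketch of what that looks like (assuming a firewalld version new enough to ship the option):

        # /etc/firewalld/firewalld.conf
        StrictForwardPorts=yes

        # apply the change
        systemctl restart firewalld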

    • exceptione7 hours ago

        > Specifically for docker it is a very common gotcha that the container runtime can and will bypass firewall rules and open ports anyway. 
      
      Like I said in another comment, drop Docker, install podman.
    • Ey7NFZ3P0nzAe15 minutes ago
      You might be interested in ufw-docker: https://github.com/chaifeng/ufw-docker
    • gus_6 hours ago
      It doesn't matter which netfilter frontend you use if you allow outbound connections from any binary.

      In order to stop these attacks, restrict outbound connections from unknown / not-allowed binaries.

      This kind of malware in particular requires outbound connections to the mining pools. Others download scripts or binaries from remote servers, or try to communicate with their C2 servers.

      On the other hand, removing exec permissions from /tmp, /var/tmp and /dev/shm is also useful.
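
      A common way to do that is mounting those as tmpfs with noexec, e.g. via /etc/fstab (a sketch; sizes are placeholders, and a few tools occasionally expect an executable /tmp):

          tmpfs  /tmp      tmpfs  defaults,noexec,nosuid,nodev,size=1G    0 0
          tmpfs  /var/tmp  tmpfs  defaults,noexec,nosuid,nodev,size=512M  0 0
          tmpfs  /dev/shm  tmpfs  defaults,noexec,nosuid,nodev            0 0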

      • 3abiton3 hours ago
        Is there an automated way of doing this?
        • 3np an hour ago
          Two paths:

          - Configuration management (ansible, salt, chef, puppet)

          - Preconfigured images (NixOS, packer, Guix, atomic stuffs)

          For a one-off: pssh

    • skirge15 minutes ago
      also docker bypasses ufw
    • rglover5 hours ago
      One of those rare HN comments that's just pure gold.
    • denkmoon7 hours ago
      I’ll just mention Foomuuri here. It’s a bit of a spiritual successor to Shorewall, and it has firewalld emulation to work with tools compatible with firewalld.
      • 3np6 hours ago
        Thanks! Would be cool to have it packaged for Alpine, since firewalld requires D-Bus. There is awall, but that's still on iptables and IMO a bit clunky to set up.
      • egberts1 2 hours ago
        Foomuuri is ALMOST there.

        I mean there are some payload-over-payload cases like GRE VPE/VXLAN/VLAN or IPsec that need to be written in raw nft if using Foomuuri, but it works!

        But I love the Shorewall approach, and its configuration gracefully encapsulates the Shorewall mechanics.

        Disclaimer: I maintain the vim-syntax-nftables syntax highlighter repo on GitHub.

    • lloydatkinson7 hours ago
      > Specifically for docker it is a very common gotcha that the container runtime can and will bypass firewall rules and open ports anyway. Depending on your configuration, those firewall rules in OP may not actually do anything to prevent docker from opening incoming ports.

      This sounds like great news. I followed some of the open issues about this on GitHub and it never really got a satisfactory fix. I found some previous threads on this "StrictForwardPorts": https://news.ycombinator.com/item?id=42603136.

  • esaym43 minutes ago
    So this is part of the "React2Shell" CVE-2025-55182 issue? I find it interesting that this seems to get so little publicity. Almost like the issue is normal or expected. And it looks like the affected versions go back a little over a year. So if you've deployed anything with Next.js over the last 12 months, your web app is now probably part of a million-node botnet. And everyone's advice is just "use docker" or "install a firewall".

    I'm not even sure what to say, or think, or even how to feel about the frontend ecosystem at this point. I've been debating leaving the whole "web app" ecosystem as my main employment venture and applying to some places requiring C++. C++ seems much easier to understand than whatever the latest frontend fad is. /rant

  • tgtweak8 hours ago
    Just a note - you can very much limit CPU usage on Docker containers by setting --cpus="0.5" (or cpus: 0.5 in docker compose) if you expect it to be a very lightweight container. This isolation can help prevent one rowdy container from hitting the rest of the system, regardless of whether it's crypto-mining malware, a DDoS attempt or a misbehaving service/software.
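
    For example, a quick sketch with docker run (the image name and limit values are just illustrative):

        # cap CPU, memory, and process count for one container
        docker run -d --name umami \
          --cpus="0.5" --memory="512m" --pids-limit=200 \
          ghcr.io/umami-software/umami:postgresql-latest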
    • tracker1 7 hours ago
      Another option is running containers in read-only mode, assuming they support this configuration... it will eliminate a lot of potential attack surface.
      • 3eb7988a1663 2 hours ago
        Never looked into this. I would expect the majority of images would fail in this configuration. Or am I unduly pessimistic?
        • hxtk25 minutes ago
          Many fail if you do it without any additional configuration. In Kubernetes you can mostly get around it by mounting `emptyDir` volumes to the specific directories that need to be writable, `/tmp` being a common culprit. If they need to be writable and have content that exists in the base image, you'd usually mount an emptyDir to `/tmp` and copy the content into it in an `initContainer`, then mount the same `emptyDir` volume to the original location in the runtime container.

          Unfortunately, there is no way to specify those `emptyDir` volumes as `noexec` [1].

          I think the docker equivalent is `--tmpfs` for the `emptyDir` volumes.

          1: https://github.com/kubernetes/kubernetes/issues/48912
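
          On the Docker side that might look like the following (a sketch; "my-image" is a placeholder, and unlike emptyDir, --tmpfs does accept noexec):

              docker run --read-only \
                --tmpfs /tmp:rw,noexec,nosuid,size=64m \
                my-image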

        • s_ting765 44 minutes ago
          Depends on the specific app use case. Nginx doesn't work with it, but valkey will.
    • freedomben8 hours ago
      This is true, but it's also easy to set at one point and then later introduce a bursty endpoint that ends up throttled unnecessarily. It's always a good idea to be familiar with your app's performance profile, but it can be easy to let that get away from you.
    • jakelsaunders94 8 hours ago
      This is a great shout actually. Thanks for pointing it out!
    • fragmede8 hours ago
      The other thing to note is that Docker is, for the most part, stateless. So if you're running something that has to deal with questionable user input (images and video, or more importantly PDFs), one approach is to stick it on its own VM, cycle the Docker container every hour and the VM every 12, and then still be worried about it getting hacked and leaking secrets.
      • tgtweak an hour ago
        Most of this is mitigated by running Docker in an LXC container (like Proxmox does), which grants a lot more isolation than Docker on its own - closer in nature to running separate VMs.
    • miladyincontrol7 hours ago
      Soft and hard memory limits are worth considering too, regardless of container method.
  • p0w3n3d8 minutes ago

      $ sudo ufw default deny incoming
      $ sudo ufw default allow outgoing
      $ sudo ufw allow ssh
      $ sudo ufw allow 80/tcp
      $ sudo ufw allow 443/tcp
      $ sudo ufw enable
    
    As a user of iptables this order makes me anxious. I used to cut myself off from the server many times by blocking everything first and only then adding exceptions. I can see that this is different here, as the last command commits the rules...
  • danparsonson7 hours ago
    No firewall! Wow that's brave. Hetzner will let you configure one that runs outside of the box so you might want to add that too, as part of your defense in depth - that will cover you if you make a mistake with ufw. Personally I keep SSH firewalled only to my home address in this way; if I'm out and about and need access, I can just log into Hetzner's website and change it temporarily.
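
    The on-box half of that is a one-liner with ufw (a sketch; 203.0.113.10 stands in for your home address):

        # allow SSH only from one trusted address, instead of a blanket 'ufw allow ssh'
        sudo ufw allow from 203.0.113.10 to any port 22 proto tcp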
    • Nextgrid7 hours ago
      But the firewall wouldn't have saved them if they're running a public web service or need to interact with external services.

      I guess you can have the appserver fully firewalled and have another bastion host acting as an HTTP proxy, both for inbound as well as outbound connections. But it's not trivial to set up especially for the outbound scenario.

      • danparsonson7 hours ago
        No, you're right - I didn't mean the firewall would have saved them; it was just a general point of advice. And yes, a second VPS running OPNsense or similar makes a nice cheap proxy, and then you can firewall off the main server completely. Although that wouldn't have saved them either - they'd still need to forward HTTP/S to the main box.
        • Nextgrid7 hours ago
          A firewall blocking outgoing connections (except those whitelisted through the proxy) would’ve likely prevented the download of the malware (as it’s usually done by using the RCE to call a curl/wget command rather than uploading the binary through the RCE) and/or its connection to the mining server.
          • denkmoon7 hours ago
            How many people do proper egress filtering though, even when running a firewall
          • drnick1 2 hours ago
            In practice, this is basically impossible to implement. As a user behind a firewall you normally expect to be able to open connections with any remote host.
    • tete6 hours ago
      Firewalls in the majority of cases don't get you much. Yes, they're a last line of defense if you do something really stupid and don't even know where or on what your services are configured to listen, but if you don't, the difference between running a firewall and not is minuscule.

      There are way more important things, like actually knowing that you are running software with a widely known RCE that, it seems, doesn't even use established mechanisms to sandbox itself.

      The way the author describes Docker being the savior appears to be sheer luck.

      • danparsonson an hour ago
        The author mentioned they had other services exposed to the internet (Postgres, RabbitMQ) which increases their attack surface area. There may be vulnerabilities or misconfigurations in those services for example.

        Good security is layered.

        • seszett an hour ago
          But if they have to be exposed then a firewall won't help, and if they don't have to be exposed to the internet then a firewall isn't needed either, just configure them not to listen on non-local interfaces.
    • figassis an hour ago
      Yup. All my servers are behind Tailscale. The only thing I expose is a load balancer that routes TCP (email) and HTTP. That balancer is running Docker, fully firewalled (incl. the Docker bypasses). Every server is behind Hetzner’s firewall in addition to the internal firewall.

      App servers run Docker, with images that run a single executable (no OS, no shell) and strict CPU and memory limits. Most of my apps only require very limited temporary storage, so usually there's no need to mount anything. So good luck executing anything in there.

      I used to run WordPress sites way back in the day. They would get hacked monthly, every possible way. Learned so much, including the fact that often your app is your threat. With WordPress, every plugin is a vector. Also, the ability to easily hop into an instance and rewrite running code (looking at you, scripting languages, incl. JS) is terrible. This motivated my move to Go. The code I compiled is what will run. Period.

    • jwrallie6 hours ago
      Password auth being enabled is also very brave. I don’t think fail2ban is necessary personally, but it’s popular enough that it always comes up.
    • 3abiton3 hours ago
      Honestly, fail2ban is amazing. I might do a write-up on the countless attempts on my servers.
  • V__9 hours ago
    > The Reddit post I’d seen earlier? That guy got completely owned because his container was running as root. The malware could: [...]

    Is that the case, though? My understanding was, that even if I run a docker container as root and the container is 100% compromised, there still would need to be a vulnerability in docker for it to “attack” the host, or am I missing something?

    • d4mi3n8 hours ago
      While this is true, the general security stance on this is: Docker is not a security boundary. You should not treat it like one. It will only give you _process level_ isolation. If you want something with better security guarantees, you can use a full VM (KVM/QEMU), something like gVisor[1] to limit the attack surface of a containerized process, or something like Firecracker[2] which is designed for multi-tenancy.

      The core of the problem here is that process isolation doesn't save you from whole classes of attack vectors or misconfigurations that open you up to nasty surprises. Docker is great, just don't think of it as a sandbox to run untrusted code.

      1. https://gvisor.dev/

      2. https://firecracker-microvm.github.io/

      • socalgal2 8 hours ago
        that's a really good point... but I think 99% of Docker users believe it is a sandbox and treat it as such.
        • freedomben8 hours ago
          And not without cause. We've been pitching docker as a security improvement for well over a decade now. And it is a security improvement, just not as much as many evangelists implied.
          • fragmede7 hours ago
            Must depend on who you've been talking to. Docker's not been pitched for security in the circles I run in, ever.
        • TacticalCoder5 hours ago
          Not 99%. Many people run a hypervisor and then a VM just for Docker.

          An attacker now needs a Docker exploit and then a VM exploit before getting to the hypervisor (and, no, pwning the VM ain't the same as pwning the hypervisor).

          • windexh8er an hour ago
            Agreed - this is actually pretty common in the Proxmox realm of hosters. I segment container nodes using LXC, and in some specific cases I'll use a VM.

            Not only does it allow me to partition the host for workloads, but I also get security boundaries. While it may be a slight performance hit, the segmentation also makes more logical sense in the way I view the workloads. Finally, it's trivial to template and script, so it's very low maintenance and allows me to kill an LXC and just reprovision it if I need to make any significant changes. And I never need to migrate any data in this model (or very rarely).

          • briHass3 hours ago
            'Double-bagging it' was what we called it in my day.
        • dist-epoch8 hours ago
          It is a sandbox against unintentional attacks and mistakes (sudo rm -rf /), but it will not stop serious malware.

      • hsbauauvhabzb8 hours ago
        Virtual machines are treated as a security boundary despite the fact that with enough R&D they are not. Hosting minecraft servers in virtual machines is fine, but not a great idea if they’re cohosted on a machine that has billions of dollars in crypto or military secrets.

        Docker is pretty much the same but supposedly more flimsy.

        Both have non-obvious configuration weaknesses that can lead to escapes.

        • hoppp7 hours ago
          Yeah, but why would somebody co-host military secrets or billions of dollars? It's a bit of a stretch.
          • hsbauauvhabzb7 hours ago
            I think you’re missing the point, which was that high-value targets adjacent to soft targets make escapes a legitimate threat, but in low-value scenarios VM escapes aren’t worth the R&D.
            • z3t4 2 minutes ago
              [delayed]
    • michaelt8 hours ago
      First, the attacker just wants to mine Monero with CPU; they can do that inside the container.

      Second, even if your Docker container is configured properly, the attacker gets to call themselves root and talk to the kernel. It's a security boundary, sure, but it's not as battle-tested as the isolation of not being root, or the isolation between VMs.

      Third, in the stock configuration, processes inside a Docker container can use loads of RAM (causing random things to get swapped to disk or OOM-killed), can consume lots of CPU, and can fill your disk up. If you consider denial-of-service an attack, there you are.

      Fourth, there are a bunch of settings that disable the security boundary, and a lot of guides online will tell you to use them. Doing something in Docker that needs to access hot-plugged webcams? Hmm, it's not working unless I set --privileged - oops, there goes the security boundary. Trying to attach a debugger while developing, and you set CAP_SYS_PTRACE? Bypasses the security boundary. Things like that.

    • cyphar an hour ago
      You really need to use user namespaces to get this kind of security protection -- running as root inside a container without user namespaces is not secure. Yes, breakouts often require some other bug or misconfiguration but the margin for error is non-existent (for instance, if you add CAP_SYS_PTRACE to your containers it is trivial to break out of them and container runtimes have no way of protecting against that). Almost all container breakouts in the past decade were blocked by user namespaces.

      Unfortunately, user namespaces are still not the default configuration with Docker (even though the core issues that made using them painful have long since been resolved).
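
      Enabling that is a small change in /etc/docker/daemon.json, followed by a daemon restart (a sketch; note that remapping changes file-ownership mappings, so existing containers/volumes need care):

          {
            "userns-remap": "default"
          }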

    • easterncalculus6 hours ago
      If the container is running in privileged mode, you can just talk through the Docker socket to the daemon on the host, spawn a new container with direct access to the root filesystem, and then change anything you want as root.
      • CGamesPlay3 hours ago
        Notably, if you run docker-in-docker, Docker is probably not a security boundary. Try this inside any dind container (especially devcontainers): docker run -it --rm --pid=host --privileged -v /:/mnt alpine sh

        I disagree with other commenters here that Docker is not a security boundary. It's a fine one, as long as you don't disable the boundary, which is as easy as running a container with `--privileged`. I wrote about secure alternatives for devcontainers here: https://cgamesplay.com/recipes/devcontainers/#docker-in-devc...

    • Nextgrid7 hours ago
      Container escapes exist. Now the question is whether the attacker has exploited it or not, and what the risk is.

      Are you holding millions of dollars in crypto/sensitive data? Better assume the machine and data is compromised and plan accordingly.

      Is this your toy server for some low-value things where nothing bad can happen besides a bit of embarrassment even if you do get hit by a container escape zero-day? You're probably fine.

      This attack is just a large-scale automated attack designed to mine cryptocurrency; it's unlikely any human ever actually logged into your server. So cleaning up the container is most likely fine.

    • ronsor9 hours ago
      There would be, but a lot of docker containers are misconfigured or unnecessarily privileged, allowing for escape.

      Also, if you've been compromised, you may have a rootkit that hides itself from the filesystem, so you can't be sure of a file's existence through a simple `ls` or `stat`.

      • miladyincontrol7 hours ago
        > but a lot of docker containers are misconfigured or unnecessarily privileged, allowing for escape

        Honestly, citation needed. It's very rare unless you're literally giving the container access to write to /usr/bin or other binaries the host is running, to reconfigure your entire /etc, access to sockets like Docker's, or some other insane level of overreach that I doubt even the least educated Docker user would allow.

        While containers should of course be scoped properly, people act like some elusive 0-day container escape will get used on their Minecraft server or personal blog that has otherwise sane mounts, non-admin capabilities, etc. You aren't that special.

        • cyphar an hour ago
          As a maintainer of runc (the runtime Docker uses), if you aren't using user namespaces (which is the case for the vast majority of users) I would consider your setup insecure.

          And a shocking number of tutorials recommend bind-mounting docker.sock into the container without any warning (some even tell you to mount it "ro" -- which is even funnier since that does nothing). I have a HN comment from ~8 years ago complaining about this.

        • fomine3 4 hours ago
          I've seen many articles with `-v /var/run/docker.sock:/var/run/docker.sock` without a scary warning
    • Havoc9 hours ago
      I think a root container can talk to the Docker daemon and launch additional containers... with volume mounts of additional parts of the file system, etc. Not particularly confident about that one though.
      • minitech9 hours ago
        Unintentional vulnerabilities in Docker and the kernel aside, it can only do that if it has access to the Docker API (usually through a bind mount of the Unix socket). Having access to the Docker API is equivalent to having root on the host.
        • czbond8 hours ago
          Well $hit. I have been using Docker for installing NPM modules in interactive projects I was testing out. I believed Docker blocked access to the underlying host (my computer).

          Thanks for mentioning it - but now... how does one deal with this?

          • minitech8 hours ago
            If you didn’t mount docker.sock or any directory above it (i.e. / or /run by default) or run your containers as --privileged, you’re probably fine with respect to this angle. I’d still recommend rootless containers under unprivileged users* or VMs for extra comfort. Qubes (https://www.qubes-os.org/) is good, even if it’s a little clunkier than it could be.

            * but if you’re used to bind-mounting, they’ll be a hassle

            Edit: This is by no means comprehensive, but I feel compelled to point it out specifically for some reason: remember not to mount .git writable, folks! Write access to .git is arbitrary code execution as whoever runs git.
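
            A quick way to audit running containers for the riskiest of these (a sketch using docker inspect's Go templates):

                docker inspect --format '{{.Name}} privileged={{.HostConfig.Privileged}} binds={{.HostConfig.Binds}}' $(docker ps -q)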

          • 3np8 hours ago
            As sibling mentioned, unless you or the runtime explicitly mount the docker socket, this particular scenario shouldn't affect you.

            You might still want to tighten things up. Just adding on the "rootless" part - running the container runtime as an unprivileged user on the host instead of root - you also want to run npm/node as an unprivileged user inside the container. I still see many people defaulting to running as root inside the container, since that's the default of most images. OP touches on this.

            For rootless podman, this will run as a user with your current uid and map ownership of mounts/volumes:

                podman run -u "$(id -u)" --userns=keep-id IMAGE
    • trhway7 hours ago
      >there still would need to be a vulnerability in docker for it to “attack” the host, or am I missing something?

      Not necessarily a vulnerability per se. A bridged adapter, for example, lets you do a lot - a few years ago there was a story about how a guy got root in a container, and because the container used a bridged adapter he was able to intercept traffic of account info updates on GCP.

    • TheRealPomax9 hours ago
      Docker containers with root have root-ish rights on the host machine too, because the user ID will just be 0 for both. So if you have, say, a bind mount that you play fast and loose with, the Docker user can create 0777 files outside the Docker container, and now we're almost done. Even worse if "just to make it work" someone runs the container with --privileged and then makes the terminal mistake of exposing that container to the internet.
      • V__9 hours ago
        Can you explain this a bit further? Wouldn't that 0777 file outside Docker still be executed inside the container and not on the host?
        • necovek8 hours ago
          I believe they meant you could create an executable that is accessible outside the container (maybe even as a setuid-root one), and depending on the path settings, it might be possible to get the user to run it on the host.

          Imagine naming this executable "ls" or "echo" and someone having "." in their PATH (which is why you shouldn't): as soon as you run "ls" in that directory, you've run compromised code.

          There are obviously other ways to get that executable to run on the host; this is just a simple example.

          • marwamc7 hours ago
            Another example: they would enumerate your directories, find the names of common scripts, and then overwrite one of your scripts. Or, to be even sneakier, they can append their malicious code to an existing script in your filesystem. Now each time you run your script, their code piggybacks.

            OTOH, if I had written such a script for Linux, I'd be looking to grab the contents of $(hist), $(env), $(cat /etc/{group,passwd})... then enumerate /usr/bin/, /usr/local/bin/ and the XDG_{CACHE,CONFIG} dirs - some plaintext credentials are usually here.

            Then $HOME/.{aws,docker,claude,ssh}.

            Basically the attacker just needs to know their way around your OS. The script enumerating these directories is the 0777 script they were able to write from inside the root-access container.

            • tracker1 7 hours ago
              If your chosen development environment supports it, look into distroless or empty base containers, and run as --read-only if you can.

              Go and Rust tend to lend themselves to these more restrictive environments a bit better than other options.

    • Onavo9 hours ago
      Either docker or a kernel level exploit. With non-VM containers, you are sharing a kernel.
  • aborsy27 minutes ago
    If I’m not wrong, a Hetzner VM by default has no firewall enabled. If you are coming from providers with different default settings, that might bite you: containers that you thought were not open to the internet have been open all this time.

    You have to define a firewall policy and attach it to the VM.

  • croemer6 hours ago
    Not proofread by a human. It claims more than once that the vulnerability was related to Puppeteer. Hallucination!

    "CVE-2025-66478 - Next.js/Puppeteer RCE)"

    • loloquwowndueo6 hours ago
      TFA mentions it’s mostly a transcript of a Claude session literally in the first paragraph.
      • themafia4 hours ago
        That was added as an edit. It does not cover the inaccuracies contained within. It should more realistically say "this article was generated by an LLM and may contain several errors which I didn't bother to find or correct."
  • grekowalski9 hours ago
    Recently, those Monero miners were installing themselves everywhere that had a vulnerable React 19. I had exactly the same problem.
    • qingcharles8 hours ago
      I had to nuke my Oracle Cloud box that runs my Umami server. It got hit. Was a good excuse to upgrade version and upgrade all my backup systems etc. Lost a few hours of data while it was returning 500 errors.
  • marwamc7 hours ago
    Hahaha, OP could be in deep trouble depending on what types of creds/data they had in that container. I had replied to a child comment, but I figured it best to reply to OP.

    From the root container, depending on volume mounts and capabilities granted to the container, they would enumerate the host directories and find the names of common scripts and then overwrite one such script. Or to be even sneakier, they can append their malicious code to an existing script in the host filesystem. Now each time you run your script, their code piggybacks.

    OTOH if I had written such a script for Linux, I'd be looking to grab the contents of $(hist), $(env), $(cat /etc/{group,passwd})... then enumerate /usr/bin/, /usr/local/bin/ and the XDG_{CACHE,CONFIG} dirs - some plaintext credentials are usually here. Then $HOME/.{aws,docker,claude,ssh}. Basically, the attacker just needs to know their way around your OS. The script enumerating these directories is the 0777 script they were able to write from inside the root-access container.

    • cobertos3 hours ago
      Luckily, umami in Docker is pretty compartmentalized. All data is in the DB, and the DB runs in another container. The biggest thing is the DB credentials. The default config requires no volume mounts, so no worries there. It runs unprivileged with no extra capabilities. IIRC the container doesn't even have bash; a few of the exploits that tried to run failed because the scripts they ran needed bash.

      Deleting and remaking the container will blow away all state associated with it. So there isn't a whole lot to worry about after you do that.

    • jakelsaunders94 7 hours ago
      Nothing in that container luckily, just what Umami needed to run, so no creds at all. Thanks for the info though!
  • heavyset_go8 hours ago
    I wouldn't trust that boot image or storage again, I'd nuke it for peace of mind.

    That said, do you have an image of the box or a container image? I'm curious about it.

    • jakelsaunders94 8 hours ago
      Yeah, I did consider just killing it. I'm going to keep an eye on it for a few days with a gun to it, just in case.

      I was lucky in that my DB backups were working, so all my persistence was backed up to S3. I think I could stand up another one in an hour.

      Unfortunately I didn't keep an image, no. I almost didn't have the foresight to investigate before yeeting the whole box into the sun!

      • muppetman6 hours ago
        Enable connection tracking (if it's not already) and keep looking at the conntrack entries. That's a good way to spot random things doing naughty stuff.
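
        For example, with conntrack-tools installed:

            # dump the current connection-tracking table
            sudo conntrack -L
            # or stream events for new connections as they appear
            sudo conntrack -E -e NEW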
  • seymon7 hours ago
    What's considered the best practice nowadays (in terms of security) for running self-hosted workloads with containers? Daemonless, unprivileged podman containers?

    And maybe updating container images with a mechanism similar to Renovate with "minimumReleaseTime=7days" or something similar!?

    • movedx7 hours ago
      You’ll set yourself up for success if you check the dependencies of anything you run, regardless of whether it’s containerised. Use something like Snyk to scan containers and repositories for known exploits and see if anything stands out.

      Then you need to run things with as little privilege as possible. Sadly, Docker and containers in general are an anti-pattern here because they’re about convenience first, security second. So the OP should have run the containers as read-only, with tight resource limits and ideally IP restrictions on access if it’s not a public service.

      Another thing you can do is use Tailscale, or something like it, to keep things behind a zero-trust, encrypted access model. Not suitable for public services of course.

      And a whole host of other things.

  • wnevets9 hours ago
    Sure does seem like the primary outcome of cryptocurrencies being released onto the world has been criminals making money.
    • BLKNSLVR7 hours ago
      Criminals and the porn industry are almost invariably early adopters of new technologies. For better or worse, their use-cases are proofs of concept that get expanded and built on, if successful, by more legitimate industries.

      Re: the Internet.

      Re: Peer-to-peer.

      Re: Video streaming.

      Re: AI.

      • lapetitejort7 hours ago
        What is the average length of time for new tech to escape porn and crime and integrate into real applications? Longer than 15 years?
        • BLKNSLVR6 hours ago
          Some kind of function of how quickly regulation comes to the technology.
    • nrhrjrjrjtntbt8 hours ago
      And fast malware detection.
    • dylan604 8 hours ago
      Is that really a surprise though?
      • venturecruelty8 hours ago
        Not for anyone who doesn't have a financial stake in said fraud, no.
  • LelouBil4 hours ago
    Something similar happened to me last year. It was an unsecured user account accessible over SSH with password authentication, something like admin:admin, that I had forgotten about.

    At least that's what I think happened, because I never found out exactly how it was compromised.

    The miner was running as root, and its file was even hidden when I ran ls! So I didn't understand what was happening; it was only after restarting my VPS with a rescue image and mounting the root filesystem that I found out that the file I was seeing in the process list did indeed exist.

  • hughw6 hours ago
    You can run Docker Scout on one repo for free, and that would alert you that something was using Next.js and had that CVE. AWS ECR has pretty affordable scanning too: 9 cents/image and 1 cent/rescan. Continuous scanning even for these home projects might be worth it.

    [*] https://aws.amazon.com/inspector/pricing/
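
    For a local spot check, the Scout CLI can be pointed at a single image, something like this (assuming the docker scout plugin is installed; the image is just an example):

        docker scout cves ghcr.io/umami-software/umami:postgresql-latest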

  • CGamesPlay3 hours ago
    I took issue with this paragraph of the article, on account of several pieces of misinformation, presumably courtesy of Claude hallucinations:

    > Here’s the test. If /tmp/.XIN-unix/javae exists on my host, I’m fucked. If it doesn’t exist, then what I’m seeing is just Docker’s default behavior of showing container processes in the host’s ps output, but they’re actually isolated.

    1. Files of running programs can be deleted while the program is running. If the program were trying to hide itself, it would have deleted /tmp/.XIN-unix/javae after it started. The nonexistence of the file is not a reliable source of information for confirming that the container was not escaped.

    2. ps shows program-controlled command lines. Any program can change what gets displayed here, including the program name and arguments. If the program were trying to hide itself, it would change this to display `login -fp ubuntu` instead. This is not a reliable source of information for diagnosing problems.

    It is good to verify the systemd units and crontab, and since this malware is so obvious, it probably isn't doing these two hiding methods, but information-stealing malware might not be detected by these methods alone.

    Later, the article says "Write your own Dockerfiles" and gives one piece of useless advice (using USER root does not affect your container's security posture) and two pieces of good advice that don't have anything to do with writing your own Dockerfiles. "Write your own Dockerfiles" is not useful security advice.

    • 3np an hour ago
      > "Write your own Dockerfiles" is not useful security advice.

      I actually think it is. It makes you more intimate with the application and how it runs, and can mitigate one particular supply-chain security vector.

      I agree that the reasoning is confused, but that particular advice is still good, I think.

  • egberts1 6 hours ago
    This Monero mining also happened with one of my VPSes over at interserv.net, when I forgot to log out of the root console in the web-based terminal and closed its browser tab instead.

    It has since been fixed. Lesson learned.

  • xp84 6 hours ago
    I wonder in a case like this how hard it would be to "steal" the crypto that you've paid to mine. But I assume these people are probably smart enough that everything is instantly forwarded to their C&C server to prevent that.
  • minitech9 hours ago
    > Here’s the test. If /tmp/.XIN-unix/javae exists on my host, I’m fucked. If it doesn’t exist, then what I’m seeing is just Docker’s default behavior of showing container processes in the host’s ps output, but they’re actually isolated.

      # the process keeps running even after its file is unlinked
      /tmp/.XIN-unix/javae &
      rm /tmp/.XIN-unix/javae
    
    This article’s LLM writing style is painful, and it’s full of misinformation (is Puppeteer even involved in the vulnerability?).
    • jakelsaunders94 9 hours ago
      Yeah, fair. I asked Claude to help because honestly this was a little beyond my writing skills. I'm real though. Sorry. Will change
      • minitech8 hours ago
        Seconding what others have said about preferring to read bad human writing. And I don’t want to pick on you – this is a broadly applicable message prompted by a drop in the bucket – but please don’t publish articles beyond your ability to fact check. Just write what you actually know, and when you’re making a guess or you still have open questions at the end of your investigation, be honest about that. (People make mistakes all the time anyway, but we’re in an age where confident and detailed mistakes have become especially accessible.)
      • sincerely9 hours ago
        Just a data point - I would rather read bad human writing than LLM output
      • croemer6 hours ago
        It still says Puppeteer in multiple places.
      • seafoamteal9 hours ago
        Hi Jake! Cool article, and it's something I'll keep in mind when I start giving my self-hosted setup a remodel soon. That said, I have to agree with the parent comment and say that the LLM writing style dulled what would otherwise have been a lovely sysadmin detective work article and didn't make me want to explore your site further.

        I'm glad you're up to writing more of your own posts, though! I'm right there with you that writing is difficult, and I've definitely got some posts on similar topics up on my site that are overly long and meandering and not quite good, but that's fine because eventually once I write enough they'll hopefully get better.

        Here's hoping I'll read more from you soon!

    • jakelsaunders94 9 hours ago
      I fixed it, apologies for the misinformation.
      • 3np7 hours ago
        It still says:

        > IT NEVER ESCAPED.

        You haven't confirmed this (at least from the contents of the article). You did some reasonable spot checks and confirmed/corrected your understanding of the setup. I'd agree that it looks likely that it did not escape or gain persistence on your host, but in no way have you actually verified this. If it were me, I'd still wipe the host and set up everything from scratch again [0].

        Also, your part about the container user not being root is still misinformed and/or misleading. The user inside the container, the container runtime user, and whether the container is privileged are three different things that are being talked about as one.

        Also, see my comment on firewall: https://news.ycombinator.com/item?id=46306974

        [0]: Not necessarily drop-everything-you-do urgently, but next time you get some downtime to do it calmly. Recovering like this is a good exercise anyway, to make sure you can if you get a more critical situation in the future where you really need to. It will also be less time and work than actually confirming that the host is uncontaminated.

        • jakelsaunders94 7 hours ago
          I did see your comment on the firewall, and you're right about the escape. It seems safe enough for now. Between the hacking and accidentally hitting the front page of HN, it's been a long day.

          I'm going to sit down and rewrite the article and take a further look at the container tomorrow.

          • 3np6 hours ago
            Hey, thanks for taking the time to share your learnings and engage. I'm sure there are HN readers out there who will be better off for it alongside you!

            (And good to hear you're leaving the LLMs out of the writing next time <3)

      • Eduard8 hours ago
        I still see Puppeteer mentioned several times in your post and don't understand what that has to do with Umami, nextjs, and/or CVE-2025-66478.
  • hoppp7 hours ago
    This Next.js vulnerability is gonna be exploited everywhere because it's so easy. This is just the start.
    • christophilus6 hours ago
      I didn’t think it was possible for me to dislike Next.js any more, but here we are. It’s the SharePoint of the JS ecosystem.
  • exceptione7 hours ago
    The first step I would take is running podman instead of Docker to prevent container escapes. Podman can be run truly rootless and doesn't mess with your firewall. Next, I would drop all caps if possible.
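
    As a sketch of that stance (run as a regular user; alpine is just a stand-in image):

        podman run --rm \
          --cap-drop=all \
          --security-opt=no-new-privileges \
          --read-only --tmpfs /tmp \
          docker.io/library/alpine:latest id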
    • doodlesdev7 hours ago
      What's the difference between running Podman and running Docker in rootless mode? (Other than Docker messing with the firewall, which apparently OP doesn't know about… yet). I understand Podman doesn't require a daemon, but is that all there is to it, or is there something I'm missing?
      • exceptione6 hours ago
        The runtime has been designed from the ground up to run daemonless and rootless. They also have a K8s runtime that has an extremely small surface, just enough to be K8s compliant.

        But podman also has great integration with systemd. With that, you could use a socket-activated systemd unit and stick the socket inside the container, instead of giving the container any network at all. And even if you want networking in the container, the podman folks developed slirp4netns, which is user-space networking, and now something even better: passt/pasta.

      • crimsonnoodle58 4 hours ago
        I found rootless Docker to be more compatible than podman. I experienced crash dumps in, say, MSSQL with podman, but not with rootless Docker.

        Also, rootless Docker does not bypass UFW the way rootful Docker does.

  • ryanto8 hours ago
    Sorry to hear you got hacked.

    I know we aren't supposed to rely on containers as a security boundary, but it sure is great hearing stories like this where the hack doesn't escape the container. The more obstacles the better I guess.

    • DANmode6 hours ago
      Hacks are humans. For like, ten more minutes anyway.

      If the human involved can’t escalate, the hack can’t.

  • elif4 hours ago
    This is a perfect example of how honeypots, anti-malware organizations, and blacklists are so important to security.

    Even if you are an OWASP member who reads daily vulnerability reports, it's so easy to think you are unaffected.

  • tolerance8 hours ago
    Was dad notified of the security breach? If not he may want to consider switching hosting providers. Dad deserves a proper LLM-free post mortem.
    • jakelsaunders94 8 hours ago
      Hahaha, I did tell him this afternoon. This is the bloke who has the same password for all his banking apps despite me buying him 1Password, though. The imminent threat from RCEs just didn't land.
      • dylan604 8 hours ago
        Buying someone 1Password, or the like, and calling it good is not enough. People who use password managers forget how long it takes to visit all of the sites you use, create each site's record, update the password to a secure one, and then log out and log back in with the new password to test that it's good. A lot of people who have a password manager bought for them are going to be over it after the second site. Just think about how many TikTok videos they could have been watching instead.
        • venturecruelty8 hours ago
          Yeah, mom and I sat down one afternoon and we changed all of her passwords to long, secure ones, generated by 1Password. It was a nice time! It also helped her remember all of the different services she needs to access, and now they're all safely stored with strong passwords. And it was a nice way to connect and spend some time together. :)
  • qingcharles8 hours ago
    As an aside, if you're using a Hetzner VPS for Umami, you might be over-specced. I just cut my Hetzner bill by $4/mo by moving my Umami box to one of the free Oracle Cloud VPSes after someone on here pointed out the option to me. It depends whether this is a hobby thing or something more serious, but that option is there.
    • ianschmitz8 hours ago
      I would pay $4/mo to stay as far away from Oracle as possible
    • angulardragon03 8 hours ago
      All fine and well, but Oracle will threaten to turn off your instance if you don’t maintain a reasonable average CPU usage on the free hosts, and will eventually do so abruptly.

      This became enough of a hassle that I stopped using them.

      • treesknees8 hours ago
        Do you mean if it’s idle, or if it’s maxed out? I’ve had a few relatively idle free-tier VMs with Oracle and I’ve not received any threats of shutoff over the last 3 years I’ve had them online.
      • qingcharles5 hours ago
        I assumed the same, but as long as you keep a credit card on file apparently they will let you idle it too. I went in and set my max budget at $1/mo and set alerts too, just in case.
    • jakelsaunders94 8 hours ago
      I've got a whole Hetzner EX41 bare-metal server, as opposed to a VPS. It's got like 20 services on it.

      But yeah, it is massively overspecced. Makes me feel cool load testing my Go backend at 8000 requests per second though!

    • spiderfarmer8 hours ago
      I pay for Hetzner because it’s an EU based, sane company without a power hungry CEO.
    • tgtweak8 hours ago
      The manageability of having everything on one host is kind of nice at that scale, but yeah you can stack free tiers on various providers for less.
  • pigbearpig8 hours ago
    You might want to harden those outbound firewall rules as another step. Did the Umami container need the ability to initiate connections? If not, removing that would eliminate the ability to do the outbound scans.

    It could also prevent something from exfiltrating sensitive data.
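
    With ufw, a default-deny egress policy might look like this (a sketch; which outbound ports to allow depends entirely on what the host legitimately needs):

        sudo ufw default deny outgoing
        sudo ufw allow out 53             # DNS
        sudo ufw allow out 123/udp        # NTP
        sudo ufw allow out 80,443/tcp     # package mirrors, APIs
        sudo ufw reload

    (With the caveat mentioned elsewhere in the thread: rootful Docker's own iptables rules can sidestep ufw for container traffic.)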

  • meisel8 hours ago
    Is mining via CPU even worthwhile for the hackers? I thought ASICs dominated mining
    • jsheard8 hours ago
      ASICs do dominate Bitcoin mining, but Monero's PoW algorithm is supposed to be ASIC-resistant. Besides, who cares if it's efficient when it's someone else's server?
    • tgtweak8 hours ago
      Monero's proof of work (RandomX) is very ASIC-resistant, and although it generates a very small amount of earnings, if you exploit a vulnerability like this with thousands or tens of thousands of nodes, it can add up (8 modern cores running 24/7 on Monero would be in the 10-20c/day per node range). OP's VPS probably generated about $1 for those script kiddies.
      • pixl97 7 hours ago
        Hit 1000 servers and it starts adding up. Especially if you live somewhere with a low cost of living.
      • asdff5 hours ago
        So $40 a year? Does that imply all Monero is mined like this, because it's clearly not cost-effective at all to mine legitimately?
        • beeflet an hour ago
          I think so, but it is hard to say. It could be a lot of people with extra power (or stolen power) but their own equipment. I mine myself with waste solar power.
    • rnhmjoj8 hours ago
      This is the PoW scheme that Monero currently uses:

      > RandomX utilizes a virtual machine that executes programs in a special instruction set that consists of integer math, floating point math and branches.

      > These programs can be translated into the CPU's native machine code on the fly (example: program.asm).

      > At the end, the outputs of the executed programs are consolidated into a 256-bit result using a cryptographic hashing function (Blake2b).

      I doubt that anyone has managed to create an ASIC that does this more efficiently and cost-effectively than a basic CPU. So, no, probably no one is mining Monero using an ASIC.

    • heavyset_go8 hours ago
      Yes, for Monero it is the only real viable option. I'd also assume that the OP's instance is one of many other victims whose total mining might add up to a significant amount of crypto.
    • edm0nd8 hours ago
      It's easily worth it, as they are not spending any money on compute or power.

      If they can enslave 100s or even 1000s of machines mining XMR for them, it's easy money, if you set aside the legality of it.

    • minitech8 hours ago
      Hard for it not to be worthwhile, since it’s free for them. Same automated exploit run across the entire internet.
    • Bender8 hours ago
      Optimal hardware costs money. Easy to hack machines are free and in nearly unlimited numbers.
    • justinsaccount8 hours ago
      If the effectiveness of mining is represented as profit divided by the cost of running the infrastructure, then a CPU that someone else is paying for is worth it as long as the profit is greater than zero.
  • eyberg4 hours ago
    a) containers don't contain

    b) if you want to limit your hosting environment to only the language/program you expect to run you should provision with unikernels which enforce it

  • zamadatix9 hours ago
    I don't use Docker for my containers at home, but I take it from the concern that user namespacing is not employed by them or something?
    • heavyset_go8 hours ago
      If you're root in a namespace and manage to escape, you can have root privileges outside of it.
      • zamadatix7 hours ago
        Are you referring to user namespaces and, if so, how does that kind of break out to host root work? I thought the whole point of user namespaces was your UID 0 inside the container is UID 100000 or whatever from the perspective of outside the container. Escaping the container shouldn't inherently grant you ability to change your actual UID in the host's main namespace in that kind of setup, but I'm not sure Docker actually leverages user namespaces or not.

        E.g. on my systemd-nspawn setup with --private-users=pick (enables user namespacing) I created a container and gave it a bind mount. From the container it appears like files in the bind mount created by the container namespace's UID 0 are owned by UID 0 but from outside the container the same file looks owned by UID 100000. Inverted, files owned by the "real" UID 0 on the host look owned by 0 to the host but as owned by 65534 (i.e. "nobody") from the container's perspective. Breaking out of the container shouldn't inherently change the "actual" user of the process from 100000 to 0 any more than breaking out of the container as a non-0 UID in the first place - same as breaking out of any of the other namespaces doesn't make the "UID 0" user in the container turn into "UID 0" on the host.

        • heavyset_go6 hours ago
          Users in user namespaces are granted capabilities that root has; user namespaces themselves need to be locked down to prevent that, but if a user with root capabilities escapes the namespace, they have those capabilities on the host.

          They also expose kernel interfaces that, if exploited, can lead to the same.

          In the end, namespaces are just for partitioning resources, using them for sandboxes can work, but they aren't really sandboxes.

  • kopirgan4 hours ago
    The only lesson seems to be: use ufw! (or equivalent)
    • scottyeager2 hours ago
      As others have mentioned, a firewall might have been useful in restricting outbound connections to limit the usefulness of the machine to the hacker after the breach.

      An inbound firewall can only help protect services that aren't meant to be reachable on the public internet. This service was exposed to the internet intentionally so a firewall wouldn't have helped avoid the breach.

      The lesson to me is that keeping up with security updates helps prevent publicly exposed services from getting hacked.

      • kopirgan an hour ago
        Yes thanks for the clarification.
  • Computer0 7 hours ago
    Still confused what I am supposed to do to avoid all this.
    • movedx7 hours ago
      Learning to manage an operating system in full, and having a healthy amount of paranoia, is a good first step.
      • doublerabbit6 hours ago
        Then, write all your own software to please the paranoia for the next 15 years.

        Next year is the 5th year of my current personal project. Ten to go.

  • OutOfHere7 hours ago
    You're lucky that Hetzner didn't delete your server and terminate your account.
    • croemer6 hours ago
      With which justification?
      • OutOfHere5 hours ago
        Cryptocurrency software usage. It is strictly against their policy. AFAIK, their policy does not differentiate between voluntary and involuntary use.

        They have done it to others.

  • guerrilla9 hours ago
    Whew, load average of 0 here.
  • mikaelmello9 hours ago
    This article is very interesting at first, but I once again got disappointed after reading clear signs of AI like "Why this matters" and "The moment of truth", and then the whole thing gets tainted with signs all over the place.
    • dinkleberg9 hours ago
      Yeah personally I’d much rather read a poorly constructed article with actually interesting content than the same content put into the formulaic AI voice.
      • venturecruelty8 hours ago
        Article's been edited:

        >Edit: A few people on HN have pointed out that this article sounds a little LLM generated. That’s because it’s largely a transcript of me panicking and talking to Claude. Sorry if it reads poorly, the incident really happened though!

        For what it's worth, this is not an excuse, and I still don't appreciate being fed undisclosed slop. I'm not even reading it.

  • codegeek9 hours ago
    tl;dr: He got hacked, but the damage was restricted to one Docker container running Umami (which is built on top of Next.js). Thankfully, he was running the Docker container as a non-privileged, non-root user, which saved him big time, considering that the attack surface was limited to within the container and the malware could not access the entire host/filesystem.
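
    For reference, the run-time version of that mitigation is a single flag (a sketch; 1000:1000 is an arbitrary unprivileged uid:gid and "some-image" a placeholder):

        docker run --user 1000:1000 some-image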

    Is there ever a reason someone should run a docker container as root ?

    • d4mi3n8 hours ago
      If you're using the container to manage stuff on the host, it'll likely need to be a process running as root. I think the most common form of this is Docker-in-Docker style setups, where a container orchestrates other containers directly through the Docker socket.
  • nodesocket7 hours ago
    I also run Umami, but I patched once the CVE fix was released. Also, I only expose the tracking JS endpoint and /api/send publicly via Caddy (though /api/send might be enough to exploit the vuln). To actually interact with the Umami UI, I use Twingate (similar to Tailscale) to tunnel into the VPC locally.
  • zrn900 4 hours ago
    Just use Hetzner managed servers? Very high specs, they manage everything, and you can install a lot of languages, apps etc.
  • iLoveOncall9 hours ago
    > ls -la /tmp/.XIN-unix/javae

    Unless run as root, this could return "file not found" because of missing permissions, and not just because the file doesn't actually exist, right?

    > “I don’t use X” doesn’t mean your dependencies don’t use X

    That is beyond obvious, and I don't understand how anyone would feel safe from reading about a CVE on a widely used technology when they run dozens of containers on their server. I have docker containers and as soon as I read the article I went and checked because I have no idea what technology most are built with.

    > No more Umami. I’m salty. The CVE was disclosed, they patched it, but I’m not running Next.js-based analytics anymore.

    Nonsensical reaction.

    • qingcharles8 hours ago
      Yeah, my Umami box was hit, but the time between the CVE disclosure and my box getting smacked was incredibly low. Umami patched it very quickly. And then patched it again a second time when the second CVE dropped right after.

      Nothing is immune. What analytics are you going to run? If you roll your own you'll probably leave a hole somewhere.

    • Hackbraten8 hours ago
      > No more Umami. I’m salty.

      But kudos for the word play!

  • whalesalad9 hours ago
    [flagged]
    • mrkeen9 hours ago
      Someone mined Monero on my server a few years ago. I was running Jenkins.
  • venturecruelty8 hours ago
    I still can't believe that there are so many people out here popping boxen and all they do is solve drug sudokus with the hardware. Hacks are so lame now.
  • j45 9 hours ago
    Never expose your server IP directly to the internet, VPS or bare metal.
    • palata8 hours ago
      Unless you need it to be reachable from the Internet, at which point it has to be... reachable from the Internet.
      • j45 5 hours ago
        Public-facing services routed through a firewall or WAF (Cloudflare), always.

        Backend access is trivial with Tailscale, etc.

    • sergsoares8 hours ago
      Not exposing the server IP is one practice (obfuscation) in a list of several options.

      But that alone would not solve the problem of an RCE over HTTP, which is why edge proxy providers like Cloudflare [0] and Fastly [1] proactively added protections to their WAF products.

      Even Cloudflare had an outage trying to protect its customers [2].

      - [0] https://blog.cloudflare.com/waf-rules-react-vulnerability/
      - [1] https://www.fastly.com/blog/fastlys-proactive-protection-cri...
      - [2] https://blog.cloudflare.com/5-december-2025-outage/

    • cortesoft7 hours ago
      Any server? How do you run a public website? Even if you put it behind a load balancer, the load balancer is still a “server exposed to the internet”
      • j45 5 hours ago
        Public-facing services routed through a firewall or WAF (Cloudflare), always.

        Backend access is trivial with Tailscale, etc.

        A public IP never needs to be used. You can just leave it as an internal IP if you really want.

        • cortesoft5 hours ago
          A firewall is a server, too, though.
    • mrkeen9 hours ago
      You're going to hate this thing called DNS
      • j45 5 hours ago
        Been running production servers for a long time.

        DNS is no issue. External DNS can be handled by Cloudflare and their WAF. Their DNS service can obfuscate your public IP, or ideally you don't need to use it at all with a Cloudflare tunnel installed directly on the server. This is free.

        Backend access trivial with Tailscale, etc.

        Public IP doesn't always need to be used. You can just leave it an internal IP if you really want.

    • miramba9 hours ago
      Is there a way to do that and still be able to access the server?
      • m00x9 hours ago
        Yes, cloudflare tunnels do this, but I don't think it's really necessary for this.

        I use them for self-hosting.

        • doublerabbit6 hours ago
          That server is still exposed to the internet on a public IP - just only known and routed through a 3rd party's castle.
          • j45 5 hours ago
            The tunnel doesn't have to use the public IP inbound; the Cloudflare tunnel calls outbound, and the public IP can be entirely locked up.

            If you are using Cloudflare's DNS, they can hide your IP on the DNS record, but it would still have to be locked down - some folks find ways to tighten that up too.

            If you're using a bare metal server it can be broken up.

            It's fair that it's a 3rd party's castle. At the same time until you know how to run and secure a server, some services are not a bad idea.

            Some people run Pangolin or nginx proxy manager on a cheap VPS, if it suits their use case, which will securely connect to the server.

            We are lucky that many of these ideas have already been discovered and hardened by people before us.

            Even when I had bare metal servers connected to the internet, I would put a firewall like pfsense or something in between.

      • Carrok9 hours ago
        Many ways. Using a "bastion host" is one option, with something like wireguard or tinc. Tailscale and similar services are another option. Tor is yet another option.
        • cortesoft7 hours ago
          The bastion host is a server, though, and would be exposed to the internet.
        • venturecruelty8 hours ago
          >Never expose your server IP directly to the internet, vps or baremetal.
      • sh3rl0ck9 hours ago
        Either via a VPN or a tunnel.
      • j45 5 hours ago
        Yes, of course.

        Free way - sign up for a Cloudflare account. Use the DNS on Cloudflare; they will put their public IP in front of your www.

        Level 2 is to install the Cloudflare tunnel software on your server, and then you never need to use the public IP.

        Backend access securely? Install Tailscale or Headscale.

        This should cover most web-hosting scenarios. If there are additional ports or services, tools like nginx proxy manager (web-based) or others can help. Some people put them on a dedicated VPS as a jump machine.

        This way, using the public IP can be almost optional and locked down if needed. This is all before running a firewall on it.

      • iLoveOncall9 hours ago
        Yes, Cloudflare Zero Trust. It's entirely free; I use it for loads of containers on multiple hosts and it works perfectly.
        • j45 5 hours ago
          It's really convenient. I don't love that it's a one-of-one service, but it's a decent enough placeholder.
    • procaryote9 hours ago
      As in "always run a network firewall" or "keep the IP secret"? Because I've had people suggest both and one is silly.
      • j45 5 hours ago
        A network firewall is mandatory.

        Keeping the IP secret seems like a misnomer.

        It's often possible to lock down the public IP entirely so it does not accept connections, except those initiated from the inside (like the Cloudflare tunnel or something else reaching out).

        Something like a Cloudflare+tunnel on one side, tailscale or something to get into it on the other.

        Folks other than me have written decent tutorials that have been helpful.