113 points by Splizard 7 hours ago | 10 comments
  • ValdikSS4 minutes ago
    `dlopen`'ing system libraries is an "easy" hack to try to maintain compatibility with a wide variety of libraries/ABIs. It's barely used (I know only of SDL, Small HTTP Server, and now Godot).

    Without dlopen (with regular dynamic linking), it's much harder to compile for older distros, and I doubt you can easily implement glibc/musl cross-compatibility at all in general.
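
    For example, a minimal sketch of the dlopen() approach (not taken from any of the projects above; the library and symbol names are just illustrative): resolve a system library and its symbols at runtime instead of linking against it, so the binary still starts when the library is missing or has a different ABI:

        /* Build with: cc demo.c -o demo (add -ldl on older glibc). */
        #include <dlfcn.h>
        #include <stdio.h>

        int main(void) {
            void *gl = dlopen("libGL.so.1", RTLD_NOW | RTLD_LOCAL);
            if (!gl) {
                fprintf(stderr, "no libGL (%s), falling back to software rendering\n", dlerror());
                return 0;
            }
            /* Look the symbol up by name; a real renderer would keep the pointer
               and only call it once a GL context is current. */
            void *fn = dlsym(gl, "glGetString");
            printf("glGetString %s\n", fn ? "resolved" : "missing");
            dlclose(gl);
            return 0;
        }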

    Take a look at what Valve does in the Steam Runtime:

        - https://gitlab.steamos.cloud/steamrt/steam-runtime-tools/-/blob/main/docs/pressure-vessel.md
        - https://gitlab.steamos.cloud/steamrt/steam-runtime-tools/-/blob/main/subprojects/libcapsule/doc/Capsules.txt
  • Rochus2 hours ago
    So what we need is essentially a "libc virtualization".

    But Musl is only available on Linux, isn't it? Cosmopolitan (https://github.com/jart/cosmopolitan) goes further: it is also available on Mac and Windows, and it uses e.g. SIMD and other performance-related improvements. Unfortunately, one has to cut through the marketing "magic" to find the main engineering value; stripping away the "polyglot" shell-script hacks and the "Actually Portable Executable" container (which are undoubtedly innovative), the core value proposition of Cosmopolitan is a platform-agnostic, statically linked C standard (plus some POSIX) library that performs runtime system call translation, so to speak "the Musl we have been waiting for".
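
    To illustrate what "runtime system call translation" means in principle (this is not Cosmopolitan's actual code, just the general idea; the numbers are the real write(2) syscall numbers for Linux and FreeBSD on x86-64):

        #include <stddef.h>

        enum host_os { HOST_LINUX, HOST_FREEBSD };
        static enum host_os g_os; /* detected once at startup in a real implementation */

        /* The same wrapper serves every OS; the syscall number is chosen at
           runtime instead of being baked in at compile time. */
        long my_write(int fd, const void *buf, size_t len) {
            long nr = (g_os == HOST_LINUX) ? 1 : 4;
            long ret;
            __asm__ volatile ("syscall"
                              : "=a"(ret)
                              : "0"(nr), "D"((long)fd), "S"(buf), "d"(len)
                              : "rcx", "r11", "memory");
            return ret;
        }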

    • VikingCoder20 minutes ago
      I desperately want to write C/C++ code that has a web server and can talk websockets, and that I can compile with Cosmopolitan.

      I don't want Lua. Using Lua is crazy clever, but it's not what I want.

      I should just vibe code the dang thing.

    • sidewndr4640 minutes ago
      At the rate things are going we'll need a container virtualization layer as well, a docker for docker if you know what I mean
      • miduil26 minutes ago
        Do you mean something like gVisor?
      • rafale20 minutes ago
        "All problems in computer science can be solved by another level of indirection"
  • amelius5 hours ago
    Is there a tool that takes an executable, collects all the required .so files and produces either a static executable, or a package that runs everywhere?
    • TheDong4 hours ago
      There are things like this.

      The things I know of and can think of off the top of my head are:

      1. appimage https://appimage.org/

      2. nix-bundle https://github.com/nix-community/nix-bundle

      3. guix via guix pack

      4. A handful of small, little-used projects that do this for docker images (e.g. https://github.com/NilsIrl/dockerc )

      5. A docker image (a package that runs everywhere, assuming a docker runtime is available)

      6. https://flatpak.org/

      7. https://en.wikipedia.org/wiki/Snap_(software)

      AppImage is the closest to what you want I think.

      • a022311an hour ago
        It should be noted that AppImages tend to be noticeably slower at runtime than other packaging methods, and also very big, since they bundle libraries that typical systems already include. They're good as a "compile once, run everywhere" approach, but you're really accommodating edge cases here.

        A "works in most cases" build should also be available for that that it would benefit. And if you can, why not provide specialized packages for the edge cases?

        Of course, don't take my advice as-is, you should always thoroughly benchmark your software on real systems and choose the tradeoffs you're willing to make.

      • gilli3 hours ago
        I wish AppImage was slightly more user friendly and did not require the user to specifically make it executable.
        • VadimPR3 hours ago
          We fix this issue by distributing ours in a tar file with the executable bit set. Linux novices can just double click on the tar to extract it, and double click again on the actual AppImage.

          Been doing it this way for years now, so it's well battle tested.

          • account422 hours ago
            That kind of defeats the point of an AppImage though - you could just as well have a tar archive with a classic collection of binaries plus an optional launcher script.
      • amelius4 hours ago
        AppImage looks like what I need, thanks.

        I wonder though, if I package say a .so file from nVidia, is that allowed by the license?

        • mdavid6263 hours ago
          Don't forget - an AppImage won't work if you package something built against glibc but run it on musl/uclibc.
        • c0balt4 hours ago
          Depends on the license and the specific piece of software. Redistribution of commercial software may be restricted or require explicit approval.

          You generally also have to abide by license obligations for OSS too, e.g. the GPL.

          To be specific for the example, Nvidia has historically been quite restrictive here (redistribution only on approval). Firmware has only recently been opened up a bit, and drivers continue to be an issue, iirc.

        • direwolf203 hours ago
          No, that's a copyright violation, and it won't run on AMD or Intel GPUs, or kernels with a different Nvidia driver version.
          • amelius2 hours ago
            But this ruins the entire idea of packaging software in a self-contained way, at least for a large class of programs.

            It makes me wonder, does the OS still take its job of hardware abstraction seriously these days?

            • holowoodman2 hours ago
              The OS does. Nvidia doesn't.
              • direwolf202 hours ago
                Does Nvidia not support OpenGL?
                • holowoodmanan hour ago
                  Not really. Nvidia's OpenGL is incompatible with all existing OS OpenGL interfaces, so you need to ship a separate libGL.so if you want to run on Nvidia. In some cases you even need separate binaries, because if you dynamically link against Nvidia's libGL.so, it won't run with any other libGL.so. Sometimes also vice versa.
                  • 1313ed018 minutes ago
                    How realistic is it to eventually get something like emulation of some 21st century Nvidia GPU (even if a few generations old) built into QEMU (or other free VM), the way pretty much all other hardware is emulated to make the guest OS able to run things without depending on what the host OS hardware is like? Would it have to wait for some patents to expire or something like that? Or is it just very difficult to do on that level?
                  • direwolf2033 minutes ago
                    Does AMD use a statically linked OpenGL?
                    • holowoodman28 minutes ago
                      AMD uses the dynamically linked system libGL.so, usually Mesa.
            • maccard36 minutes ago
              That’s a licensing problem, not a packaging problem. A DLL is a DLL - the only thing that changes is whether you’re allowed to redistribute it.
            • direwolf202 hours ago
              It does, and one way it does that is by dynamically loading the right driver code for your hardware.
    • lizknope2 hours ago
      15-30 years ago I managed a lot of commercial chip design EDA software that ran on Solaris and Linux. We had wrapper shell scripts for so many programs that used LD_LIBRARY_PATH and LD_PRELOAD to point to the specific versions of various libraries that each program needed. I used "ldd" which prints out the shared libraries a program uses.
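
      A rough C rendition of such a wrapper (the originals were shell scripts; the paths and program name here are made up for illustration): point the dynamic linker at the bundled library versions, then exec the real binary.

          #include <stdio.h>
          #include <stdlib.h>
          #include <unistd.h>

          int main(int argc, char **argv) {
              (void)argc;
              /* Prefer the libraries shipped with this tool over the system ones. */
              setenv("LD_LIBRARY_PATH", "/opt/edatool/1.2/lib", 1);
              /* Force-load a shim first, e.g. to override a single symbol. */
              setenv("LD_PRELOAD", "/opt/edatool/1.2/lib/libcompat-shim.so", 1);

              argv[0] = "edatool.real";
              execv("/opt/edatool/1.2/bin/edatool.real", argv);
              perror("execv"); /* only reached if the exec failed */
              return 127;
          }
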
      • mkoubaa2 hours ago
        Sounds painful. Better to distribute a separate bundle per platform and use RPATH.
    • ryan-c18 minutes ago
      (not an endorsement, I do not use it, but I know of it)

      https://www.magicermine.com/

    • fieu3 hours ago
      Ermine: https://www.magicermine.com/

      It works surprisingly well but their pricing is hidden and last time I contacted them as a student it was upwards of $350/year

    • alas442 hours ago
      There is this project, "actually portable executable"/Cosmopolitan libc (https://github.com/jart/cosmopolitan), that allows a compile-once, execute-anywhere style of C++ binary.
    • mdavid6264 hours ago
      You can "package" all .so files you need into one file, there are many tools which do this (like a zip file).

      But you can't take .so files and make one "static" binary out of them.

      • geocaran hour ago
        > But you can't take .so files and make one "static" binary out of them.

        Yes you can!

        This is more-or-less what unexec does

        - https://news.ycombinator.com/item?id=21394916

        For some reason nobody seems to like this sorcery, probably because it combines the worst of all worlds.

        But there's almost[1] nothing special about what the dynamic linker does to get those .so files into memory that couldn't be arranged into one big file ahead of time!

        [1]: ASLR would be one of those things...

      • fc417fc8023 hours ago
        Well not a static binary in the sense that's commonly meant when speaking about static linking. But you can pack .so files into the executable as binary data and then dlopen the relevant memory ranges.
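
        A sketch of that technique (assumptions: Linux, glibc >= 2.27 for memfd_create; the embedded_so symbols would come from e.g. objcopy and are hypothetical here). The embedded .so is written to an anonymous in-memory file and loaded via its /proc/self/fd path, since dlopen() itself only takes a path:

            #define _GNU_SOURCE
            #include <dlfcn.h>
            #include <stdio.h>
            #include <sys/mman.h>
            #include <unistd.h>

            extern const unsigned char embedded_so[];  /* hypothetical, e.g. generated by objcopy */
            extern const unsigned int embedded_so_len;

            void *load_embedded(void) {
                int fd = memfd_create("embedded.so", MFD_CLOEXEC);
                if (fd < 0)
                    return NULL;
                if (write(fd, embedded_so, embedded_so_len) != (ssize_t)embedded_so_len) {
                    close(fd);
                    return NULL;
                }
                char path[64];
                snprintf(path, sizeof path, "/proc/self/fd/%d", fd);
                void *handle = dlopen(path, RTLD_NOW);
                close(fd);  /* the mapping created by dlopen keeps the image alive */
                return handle;
            }
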
        • mdavid6262 hours ago
          Yes, that's true.

          But I'm always a bit sceptical about such approaches. They are not universal. You still need glibc/musl to be the same on the target system. Also, if you compile against a new glibc version but try to run on an old glibc version, it might not work.

          These are just strange and confusing from the end users' perspective.

    • secure2 hours ago
      https://github.com/gokrazy/freeze is a minimal take on this
    • formerly_proven4 hours ago
      I don't think you can link shared objects into a static binary, because you'd have to find all the places where the code reads the PLT/GOT (which can be arbitrarily mangled by the optimizer) and turn them back into relocations for the linker to resolve.

      You can change the rpath though, which is sort of like an LD_LIBRARY_PATH baked into the object, which makes it relatively easy to bundle everything but libc with your binary.

      edit: Mild correction, there is this: https://sourceforge.net/projects/statifier/ But the way this works is that it has the dynamic linker load everything (without ASLR / in a compact layout, presumably) and then dumps an image of the process. Everything else is just increasingly fancy ways of copying shared objects around and making ld.so prefer the bundled libraries.

    • aa-jv4 hours ago
      AppImage comes close to fulfilling this need:

      https://appimage.github.io/appimagetool/

      Myself, I've committed to using Lua for all my cross-platform development needs, and in that regard I find luastatic very, very useful...

  • pilifan hour ago
    Isn't this asking for the exact trouble musl wanted to spare you from by disabling dlopen()?
  • athrowaway3z4 hours ago
    I'd never heard of detour. That's a pretty cool hack.
    • ckbkr103 hours ago
      They were prominent in Windows game hacking around 2005.

      They made hooking into game code much easier than before.
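
      For anyone curious, a rough sketch of what that classic Detours-style hooking looks like on Windows (hooking MessageBoxW is just an illustrative target; link against detours.lib):

          #include <windows.h>
          #include <detours.h>

          /* Trampoline pointer: starts out pointing at the real function. */
          static int (WINAPI *TrueMessageBoxW)(HWND, LPCWSTR, LPCWSTR, UINT) = MessageBoxW;

          static int WINAPI HookedMessageBoxW(HWND hwnd, LPCWSTR text, LPCWSTR caption, UINT type) {
              (void)text;
              /* Tamper with the arguments, then call through to the original. */
              return TrueMessageBoxW(hwnd, L"hooked!", caption, type);
          }

          int main(void) {
              DetourTransactionBegin();
              DetourUpdateThread(GetCurrentThread());
              DetourAttach((PVOID *)&TrueMessageBoxW, (PVOID)HookedMessageBoxW);
              DetourTransactionCommit();

              MessageBoxW(NULL, L"original text", L"demo", MB_OK);  /* displays "hooked!" */

              DetourTransactionBegin();
              DetourUpdateThread(GetCurrentThread());
              DetourDetach((PVOID *)&TrueMessageBoxW, (PVOID)HookedMessageBoxW);
              DetourTransactionCommit();
              return 0;
          }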

      • sidewndr4639 minutes ago
        Aren't all DLLs on the Windows platform compiled with an unusual instruction at the start of each function? This makes it possible to somehow hot-patch the DLL after it is already in memory.
  • mgaunard3 hours ago
    It's funny how people insist on wanting to link everything statically when shared libraries were specifically designed to be a better alternative.

    Even worse are containers, which have the disadvantages of both.

    • arghwhat3 hours ago
      Dynamic libraries have been frowned upon since their inception as being a terrible solution to a non-existent problem, generally amplifying binary sizes and harming performance. Some fun quotes of quite notable characters on the matter here: https://harmful.cat-v.org/software/dynamic-linking/

      In practice, a statically linked system is often smaller than a meticulously dynamically linked one - while there are many copies of common routines, programs only contain tightly packed, specifically optimized and sometimes inlined versions of the symbols they use. The space and performance gain per program is quite significant.

      Modern apps and containers are another issue entirely - linking doesn't help if your issue is gigabytes of graphical assets or using a container base image that includes the entire world.

      • holowoodmanan hour ago
        Statically linked binaries are a huge security problem, as are containers, for the same reason. Vendors are too slow to patch.

        When dynamically linking against shared OS libraries, updates are far quicker and easier.

        And as for the size advantage, just look at a typical Golang or Haskell program. Statically linked, two-digit megabytes, larger than my libc...

        • zbentley33 minutes ago
          I've heard this many times, and while there might be data out there in support of it, I've never seen that, and my anecdotal experience is more complicated.

          In the most security-forward roles I've worked in, the vast, vast majority of vulnerabilities identified in static binaries, Docker images, Flatpaks, Snaps, and VM appliance images fell into these categories:

          1. The vendor of a given piece of software based their container image on an outdated version of e.g. Debian, and the vulnerabilities were coming from that, not the software I cared about. This seems like it supports your point, but consider: the overwhelming majority of these required a distro upgrade, rather than a point dependency upgrade of e.g. libcurl or whatnot, to patch the vulnerabilities. Countless times, I took a normal long-lived Debian test VM and tried to upgrade it to the patched version and then install whatever piece of software I was running in a docker image, and had the upgrade fail in some way (everything from the less-common "doesn't boot" to the very-common "software I wanted didn't have a distribution on its website for the very latest Debian yet, so I was back to hand-building it with all of the dependencies and accumulated cruft that entails").

          2. Vulnerabilities that were unpatched or barely patched upstream (as in: a patch had merged but hadn't been baked into released artifacts yet--this applied equally to vulns in things I used directly, and vulns in their underlying OSes).

          3. Massive quantities of vulnerabilities reported in "static" languages' standard libraries. Golang is particularly bad here, both because they habitually over-weight the severity of their CVEs and because most of the stdlib is packaged with each Golang binary (at least as far as SBOM scanners are concerned).

          That puts me somewhat between a rock and a hard place. A dynamic-link-everything world with e.g. a "libgolang" versioned separately from apps would address the 3rd item in that list, but would make the 1st item worse. "Updates are far quicker and easier" is something of a fantasy in the realm of mainstream Linux distros (or copies of the userlands of those distros packaged into container images); it's certainly easier to mechanically perform an update of dependency components of a distro, but whether or not it actually works is another question.

          And I'm not coming at this from a pro-container-all-the-things background. I was a Linux sysadmin long before all this stuff got popular, and it used to be a little easier to do patch cycles and point updates before container/immutable-image-of-userland systems established the convention of depending on extremely specific characteristics of a specific revision of a distro. But it was never truly easy, and isn't easy today.

    • fc417fc8023 hours ago
      Dynamic linking exists to make a specific set of tradeoffs. Neither better nor worse than static linking in the general sense.
    • vv_3 hours ago
      It's easier to distribute software fully self-contained, if you ignore the pain of statically linking everything together :)
    • flohofwoe2 hours ago
      Dynamic libraries make a lot of sense as an operating system interface when they guarantee a stable API and ABI (see Windows for how to do that) - the other scenario where DLLs make sense is plugin systems. But that's pretty much it; for anything else static linking is superior, because it doesn't present an optimization barrier (especially for dead code elimination).

      No idea why glibc can't provide API+ABI stability, but on Linux it always comes down to glibc-related "DLL hell" problems (e.g. not being able to run an executable that was created on a more recent Linux system on an older Linux system, even when the program doesn't access any new glibc entry points - the usually advised solution is to link against an older glibc version, but that's also not trivial unless you use the Zig toolchain).

      TL;DR: It's not static vs dynamic linking, it's glibc being an exceptionally shitty solution as an operating system interface.
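
      One concrete form of the "link against an older glibc" workaround, for anyone who hasn't seen it (a glibc/x86-64 specific sketch, per-symbol only, so not a general solution):

          /* Bind memcpy to the old GLIBC_2.2.5 version instead of the default
             memcpy@GLIBC_2.14, so the binary also runs on pre-2.14 systems.
             Compile with -fno-builtin-memcpy so the compiler can't bypass the
             symbol reference entirely. */
          __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

          #include <stdio.h>
          #include <string.h>

          int main(void) {
              char dst[16];
              memcpy(dst, "hello", 6);
              puts(dst);
              return 0;
          }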

    • RicoElectrico2 hours ago
      That would be a good point if said shared libraries did not break binary backwards compatibility and behaved more like winapi.
  • netbioserror36 minutes ago
    I've been statically linking Nim binaries with musl. It's fantastic. Relatively easy to set up (just a few compiler flags and the musl toolchain), and I get an optimized binary that is indistinguishable from any other static C Linux binary. It runs on any machine we throw it at. For a newer-generation systems language, that is a massive selling point.
  • einpoklum5 hours ago
    This seems interesting even regardless of Go. Is it realistic to create an executable which would work on very different kinds of Linux distros, e.g. 32-bit and 64-bit? Or maybe some general framework/library for building an arbitrary program at least for "any libc"?
    • quesomaster90005 hours ago
      Cosmopolitan goes one further: [binaries] that run natively on Linux + Mac + Windows + FreeBSD + OpenBSD + NetBSD + BIOS on AMD64 and ARM64.

      https://justine.lol/cosmopolitan/

      • oguz-ismail24 hours ago
        >Linux

        if you configure binfmt_misc

        >Windows

        if you disable Windows Defender

        >OpenBSD

        only older versions

        • account424 hours ago
          Yeah while APE is a technically impressive trick, these issues far outweigh the minor convenience of having a single binary.

          For most cases, a single Windows exe that targets the oldest version you want to support plus a single Glibc binary that dynamically links against the oldest version you want to support and so on is still the best option.

      • dontdoxxme4 hours ago
        Clearly a joke if it uses the .lol tld.
        • account424 hours ago
          It's his personal website lol.
          • hyperbolablabla3 hours ago
            Justine identifies as a woman.
            • hofrogs3 hours ago
              "identifies as" is an unnecessarily dismissive choice of words. She is a woman.
    • sambuccid4 hours ago
      AppImage exists; it packs Linux applications into a single executable file that you just download and open. It works on most Linux distros.
      • greyw4 hours ago
        I vaguely remember that AppImage-based programs would fail for me because of FUSE and glibc symbol version incompatibilities.

        I gave up on them afterwards. If I need to tweak dependencies, I might as well deal with the package manager of my distro.

    • iberator4 hours ago
      Yup. Just compile it as a static executable. Static binaries are very undervalued imo.
      • account424 hours ago
        As TFA points out at the beginning, it's not so simple if you want to use the GPU.
      • flohofwoe4 hours ago
        The "just" is doing a lot of heavylifting here (as detailed in the article), especially for anything that's not a trivial cmdline tool.
        • Xraider723 hours ago
          In my experience it seems to be an issue caused by optimizations in legacy code that relied on dlopen to implement a plugin system, or to help with startup, since you could lazy-load said plugins on demand and start faster.

          If you forego the requirement of a runtime plugin system, is there anything realistically preventing greenfield projects from just being fully statically linked, assuming their dependencies don't rely on dlopen?

          • flohofwoe3 hours ago
            It becomes tricky when you need to use system DLLs like X11 or GL/Vulkan (so you need to use the 'hacks' described in the article to work around that) - the problem is that those system DLLs then bring a dynamically linked glibc into the process, so suddenly you have two C stdlibs running side by side, and the question is whether this works just fine or causes subtle breakage under the hood (this is e.g. why statically linked MUSL doesn't support dlopen).

            E.g. in my experience: command line tools are fine to link statically with MUSL, but as soon as you need a window and 3D rendering it's not worth the hassle.

            • account422 hours ago
              X11 actually has a stable wire protocol so you don't strictly need any dynamic libraries for that - it's just that no one bothers because if you want X11 then you most likely also want GPU access where you do need to load hardware-specific libraries.
        • qznc4 hours ago
          Ack. I went down that rabbit hole to "just" build a static Python: https://beza1e1.tuxen.de/python_bazel.html
      • 3 hours ago
        undefined
      • pjmlp4 hours ago
        We had a time when static binaries were pretty much the only thing we had available.

        Here is an idea: let's go back to pure UNIX distros using static binaries, with OS IPC for any kind of application dynamism. I bet it will work out great; after all, it did for several years.

        Got to put that RAM to use.

        • flohofwoe2 hours ago
          The thing with static linking is that it enables aggressive dead code elimination (DLLs are a hard optimization barrier).

          Even with multiple processes sharing the same DLL, I would be surprised if the alternative of those processes only containing the code they actually need would increase RAM usage dramatically, especially since most processes that run in the background on a typical Linux system wouldn't even need to go through glibc but could talk directly to the syscall interface.

          DLLs are fine as operating system interface as long as they are stable (e.g. Windows does it right, glibc doesn't). But apart from operating system interfaces and plugins, overusing dynamic linking just doesn't make a lot of sense (like on most Linux systems with their package managers).

          • pjmlp2 hours ago
            At the same time it prevents extending applications; the alternative is multiple processes using OS IPC, all of which are much slower and heavier on resources than an indirect call into a dynamic library.

            We started there in computing history and, outside of Linux where this desire to return to the past prevails, moved on to better ways, including on other UNIX systems.

        • account422 hours ago
          I don't think dynamic libraries fail at "utilizing" any available RAM.
          • pjmlp2 hours ago
              Think of any program that uses dynamic libraries as an extension mechanism, and now replace it with standard UNIX processes, each using some form of UNIX IPC to talk with the host process instead.
            • account42an hour ago
              In theory there might be a different RAM usage with the two approaches. In practice there is not.
        • jacquesm3 hours ago
          I've been statically linking my executables for years. The downside, that you might end up with an outdated library, is no match for the upside: just take the binary and run it. As long as you're the only user of the system and the code is your own, you're going to be just fine.
  • Meneth4 hours ago
    That seems mostly useful for proprietary programs. I don't like it.
    • seba_dos14 hours ago
      Yeah, in my 20 years of using and developing on GNU/Linux the only binary compatibility issues I experienced that I can think of now were related to either Adobe Flash, Adobe Reader or games.

      Adobe stuff is of the kind that you'd prefer to not exist at all rather than have it fixed (and today you largely can pretend that it never existed already), and the situation for games has been pretty much fixed by Steam runtimes.

      It's fine that some people care about it and some solutions are really clever, but it just doesn't seem to be an actual issue you stumble on in practice much.

      • whizzter2 hours ago
        The solution for games is to run the Windows versions instead of Linux binaries.

        Basically, the way to the year of the Linux desktop is to become Windows.

        • seba_dos12 hours ago
          These days Linux binaries usually work fine, even older ones, and when they don't the reason is that they often don't get the same attention as their Windows counterparts.
    • juliangmp3 hours ago
      Why? FOSS software also benefits from less dependency hell.
      • breezykoi3 hours ago
        For distro-packaged FOSS, binary compatibility isn't really a problem. Distributions like Debian already resolve dependencies by building from source and keeping a coherent set of libraries. Security fixes and updates propagate naturally.

        Binary compatibility solutions mostly target cases where rebuilding isn't possible, typically closed source software. Freezing and bundling software dependencies ultimately creates dependency hell rather than avoiding it.

        • koffiezet2 hours ago
          It however shifts a lot of the complexity of building the application to the distro maintainer, or the software maintainer has to prioritize which distributions they choose to build and maintain packages for, because supporting them all is a nightmare and an ever-shifting target. And it's not just a distribution problem, it's even a distribution version/release problem.

          Look at the hoops you sometimes have to jump through, or the hacks you have to apply, to make something work on Nix, just because there is no standardization or because build processes assume library locations etc. And if you then raise an issue with the software maintainer, the response is often "but we don't support Nix". And if they're not Nix/NixOS users, can you blame them?

          If you've ever had to compile a modern/recent software package for an old distro (I've had to do this for old RH distros on servers which, due to regulations, could not be upgraded), you're in a world of pain. And both distro and software maintainers will say "not my problem, we don't support this" - and I fully understand their stance on that, because it is far from straightforward and only serves a limited audience.

        • account422 hours ago
          There is however also the long tail of open source software that isn't packaged for your favorite distribution.
          • breezykoi2 hours ago
            That is very true. But because it is open source, one can request packaging, contribute a package, use a third-party repository, or build it from source when needed.
  • weebull2 hours ago
    If you're using dlopen(), you're just reimplementing the dynamic linker.
    • 112233an hour ago
      That's cute, but dismissive, sort of like "if you use popen(), you are reimplementing bash". There is so much hair in ld that nobody wants to know about — parsing ELF, ctors/dtors, ...