57 points by tanganik, a day ago | 13 comments
  • pornel21 hours ago
    Has anyone here even read the article?! All the comments here assume they're building a package manager for C!

    They're writing a tool to discover and index all indirect dependencies across languages, including C libraries that were smuggled inside other packages and weren't properly declared as a dependency anywhere.

    "Please don't" what? Please don't discover the duplicate and potentially vulnerable C libraries that are out of sight of the system package manager?

    • imtringued4 hours ago
      Yeah it's pretty weird how people assume that -l<name> is supposed to work in gcc/clang across distributions, but somehow deriving which OS package gives you that lib<name>.so file is the devil.
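
      For example (a Debian-flavoured sketch; apt-file has to be installed once, and the source file is just an illustration):

          # the part everyone takes for granted
          cc main.c -lssl -lcrypto

          # the part that is apparently the devil: asking the distro which package ships that file
          apt-file search libssl.so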
  • rurbanan hour ago
    This comes up every ten years or so, and is a solved problem. Any decent distro has tools to scan the dependencies of each binary via ldd, to check if its deps are correct.

    His example, numpy shipping its own libblas.so, has the peculiarity that the library is loaded at runtime, so ldd will not find it, but the runtime dep is in the MANIFEST. And seeing that it is not in a standard path, you conclude that it is a private copy, which needs to be updated separately if it breaks.
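
    A sketch of that kind of scan (venv path and globs are illustrative; as noted, ldd only shows the statically recorded deps, not dlopen'd ones):

        # list private copies of native libraries hiding inside installed wheels
        find .venv -name 'lib*.so*'

        # and check what each extension module links against
        find .venv -name '*.so' | xargs -r ldd 2>/dev/null | grep -v 'linux-vdso'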

    No other hole than in his thinking and worrying.

  • etbebl3 hours ago
    I get that the scope of the article is a bit larger than this, but it's a pet peeve of mine when authors acknowledge the advantages of conda and then dismiss it for...silly? reasons. It kind of sounds like they just don't know many people using it, so they assume something must be wrong with it.

    > If you don’t need compiled extensions, Conda is more than you need.

    Am I missing something or isn't that exactly the problem we're talking about here?

    > And even when you do need it, conda environments are heavier than virtual environments and the resolver used to be infamously slow. Mamba exists largely because conda’s dependency resolution took forever on nontrivial environments.

    Like it says here, speed isn't a problem anymore - mamba is fast. And it's true that the environments get large; maybe there's bloat, but it definitely does share package versions across environments when possible, while keeping updates and such isolated to the current environment. Maybe there's a space for a language package manager that tries to be more like a system package manager by updating multiple envs at once while staying within version constraints to minimize duplication, but idk if many developers would think that is worth the risk.
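
    A quick way to see that sharing on disk (a sketch; env names, package, and install path are assumptions about a default Linux miniconda/mamba setup):

        mamba create -n a numpy
        mamba create -n b numpy
        # hard-link count > 1 means the file is shared with the package cache / other envs
        stat -c '%h %n' ~/miniconda3/envs/a/lib/python3*/site-packages/numpy/__init__.py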

    • elehackan hour ago
      Mamba is fast, and Pixi is also fast + sands a lot of the rough edges off the Conda experience (with project/environment binding and native lock files).

      Not perfect, but pretty good when uv isn't enough for a project or deployment scenario.

  • rwmja day ago
    Please don't. C packaging in distros is working fine and doesn't need to turn into crap like the other language-specific package managers. If you don't know how to use pkgconf then that's your problem.
    • hliyana day ago
      When I used to work with C many years ago, it was basically: download the headers and the binary file for your platform from the official website, place them in the header/lib paths, update the linker step in the Makefile, #include where it's needed, then use the library functions. It was a little bit more work than typing "npm install", but not so much as to cause headaches.
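
      Roughly this, for anyone who never lived it (everything here is hypothetical: library name, URL, and install prefix):

          # fetch and unpack the vendor's tarball
          curl -LO https://example.org/libfoo-1.2.tar.gz
          tar xf libfoo-1.2.tar.gz

          # drop the header and the prebuilt library into the search paths
          sudo cp libfoo-1.2/include/foo.h /usr/local/include/
          sudo cp libfoo-1.2/lib/libfoo.so /usr/local/lib/
          sudo ldconfig

          # then add -lfoo to the Makefile's link step and #include <foo.h> where needed
          cc -o app main.c -lfoo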
      • zbentleya day ago
        What do you do when the code you downloaded refers to symbols exported by libraries not already on your system? How do you figure out where those symbols should come from? What if it expects version-specific behavior and you’ve already installed a newer version of libwhatever on your system (I hope your distro package manager supports downgrades)?

        These are very, very common problems; not edge cases.
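
        The usual forensics, for what it's worth (library name hypothetical):

            # which shared objects the thing wants and can't find
            ldd ./libwhatever.so | grep 'not found'

            # which symbols it expects someone else to provide
            nm -D --undefined-only ./libwhatever.so | less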

        Put another way: y'all know we got all these other package management/containerization/isolation systems in large part because people tried the C-library-install-by-hand/system-package-all-the-things approaches and found them severely lacking, right? CPAN was considered a godsend for a reason. NPM, for all its hilarious failings, even moreso.

        • JohnFena day ago
          > These are very, very common problems; not edge cases.

          Honestly? Over the course of my career, I've only rarely encountered these sorts of problems. When I have, they've come from poorly engineered libraries anyway.

          • bengarneya day ago
            Here is a thought experiment (for devs who buy into package managers). Take the hash of a program and all its dependencies. Behavior is different for every unique hash. With package managers, that hash is different on every system, including hashes in the future that are unknowable by you (i.e. future "compatible" versions of libraries).

            That risk/QA load can be worth it, but is not always. For an OS, it helps to be able to upgrade SSL (for instance).

            In my use cases, all this is a strong net negative. npm-based projects randomly break when new "compatible" versions of libraries install for new devs. C/C++ projects don't build because of include/lib path issues, or lack of installation of some specific version, or who knows what.

            If I need you to install the SDL 2.3.whatever libraries exactly, or use react 16.8.whatever to be sure the app runs, what's the point of using a complex system that will almost certainly ensure you have the wrong version? Just check it in, either by an explicit version or by committing the library's code and building it yourself.
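
            In practice that looks something like this (versions and repo URL are just examples):

                # pin the exact version the app was actually tested with
                npm install --save-exact react@16.8.6

                # or vendor the dependency's source and build it yourself
                git submodule add https://github.com/libsdl-org/SDL vendor/SDL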

            • sebastosa day ago
              Check it in and build it yourself using the common build system that you and the third party dependency definitely definitely share, because this is the C/C++ ecosystem?
        • tpoachera day ago
          You are conflating development with distribution of binaries (a problem which interpreted languages do not have, I hasten to add).

          1. The accepted solution to what you're describing, in terms of development, is passing appropriate flags to `./configure`, specifying the path for the alternative versions of the libraries you want to use (sketched at the end of this comment). This is as simple as it gets.

          As for where to get these libraries from in the event that the distro doesn't provide the right version, `./configure` is basically a script. Nothing stopping you from printing a couple of ftp mirrors in the output to be used as a target to wget.

          2. As for the problem of distribution of binaries and related up-to-date libraries, the appropriate solution is a distro package manager. A C package manager wouldn't come into this equation at all, unless you wanted to compile from scratch to account for your specific circumstances, in which case, goto 1.
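
          A sketch of point 1 (option name, paths, and versions are all illustrative; the exact --with-* flags depend on the package's configure script):

              ./configure --with-foo=/opt/foo-1.2 \
                  CPPFLAGS=-I/opt/foo-1.2/include \
                  LDFLAGS=-L/opt/foo-1.2/lib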

      • fredrikholma day ago
        And with header-only libraries (like stb) it's even less than that.

        I primarily write C nowadays to regain sanity from doing my day job, and the fact that there is zero bit rot and setup/fixing/middling to get things running is in stark contrast to the horrors I have to deal with professionally.
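
        For example, the whole "setup" for stb_image is roughly this (URL is the raw file in the nothings/stb repo; the one-liner test program is just a smoke check):

            curl -LO https://raw.githubusercontent.com/nothings/stb/master/stb_image.h
            printf '#define STB_IMAGE_IMPLEMENTATION\n#include "stb_image.h"\nint main(void){return 0;}\n' > demo.c
            cc demo.c -o demo -lm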

      • krautsauera day ago
        And then you got some minor detail different from the compiled library and boom, UB, because some struct is laid out differently or the calling convention is wrong or you compiled with a different -std or …
        • rwmja day ago
          Which is exactly why you should leave it to the distros to construct a consistent build environment. If your distro regularly gets this wrong then you do have a problem.
    • Joker_vDa day ago
      Well, if you're fine with using 3-year-old versions of those libraries packaged by severely overworked maintainers who at one point seriously considered blindly converting everything into Flatpaks and shipping those, simply because they can't muster enough manpower, sure.

      "But you can use 3rd party repositories!" Yeah, and I also can just download the library from its author's site. I mean, if I trust them enough to run their library, why do I need opinionated middle-men?

      • rwmja day ago
        If this is a concern (which it rarely is) then you can pitch in with distro packaging. Volunteers are always welcome.

        > "But you can use 3rd party repositories!"

        That's not something I said.

        • Joker_vD5 hours ago
          > That's not something I said.

          This was a pre-emptive rebuttal to the non-answer "well, you're not limited to the official repositories, so apt/yum/etc. is absolutely fine, use only it" I always get unless I include this very rebuttal.

          > then you can pitch in with distro packaging. Volunteers are always welcome.

          Or I can just do something more useful and less straining?

        • sebastosa day ago
          >(which it rarely is)

          You're saying it's _rare_ for developers to want to advance a dependency past the ancient version contained in <whatever the oldest release they want to support>?

          Speaking for the robotics and ML space, that is simply the opposite of a true statement where I work.

          Also doesn't your philosophy require me to figure out the packaging story for every separate distro, too? Do you just maintain multiple entirely separate dependency graphs, one for each distro? And then say to hell with Windows and Mac? I've never practiced this "just use the system package manager" mindset so I don't understand how this actually works in practice for cross-platform development.

    • JohnFena day ago
      I agree entirely. C doesn't need this. That I don't have to deal with such a thing has become a new and surprising advantage of the language for me.
      • sebastosa day ago
        I find this sentiment bewildering. Can you help me understand your perspective? Is this specifically C or C++? How do you manage a C/C++ project across a team without a package manager? What is your methodology for incorporating third party libraries?

        I have spent the better part of 10 years navigating around C++'s deplorable dependency management story with a slurry of Docker and apt, which had better not be part of everyone's story about how C is just fine. I've now been moving our team to Conan, which is also a complete shitshow for the reasons outlined in the article: there is still an imaginary line where Conan lets go and defers to "system" dependencies, with a completely half-assed and non-functional system for communicating and resolving those dependencies which doesn't work at all once you need to cross-compile.

        • spauldo19 hours ago
          You're confusing two different things.

          For most C and C++ software, you use the system packaging which uses libraries that (usually) have stable ABIs. If your program uses one of those problematic libraries, you might need to recompile your program when you update the library, but most of the time there's no problem.

          For your company's custom mission critical application where you need total control of the dependencies, then yes you need to manage it yourself.

          • sebastos18 hours ago
            Ok - it sounds like you’re right, but I think despite your clarification I remain confused. Isn’t the linked post all about how those two things always have a mingling at the boundary? Like, suppose I want to develop and distribute a c++ user-space application in a cross platform way. I want to manage all my dependencies at the language level, and then there’s some collection of system libraries that I may or may not decide to rely on. How do I manage and communicate that surface area in a cross platform and scalable way? And what does this feel like for a developer - do you just run tests for every supported platform in a separate docker container?
    • geraldcombsa day ago
      What "distro" package manager is available on Windows and macOS? vcpkg doesn't provide binary packages and has quite a few autotools-shaped holes. Homebrew is great as long as you're building for your local machine's macOS version and architecture, but if you want to support an actual user community you're SOL.
    • zbentleya day ago
      I mean … it clearly isn’t working well if problems like “what is the libssl distribution called in a given Linux distro’s package manager?” and “installing a MySQL driver in four of the five most popular programming languages in the world requires either bundling binary artifacts with language libraries or invoking a compiler toolchain in unspecified, unpredictable, and failure-prone ways” are both incredibly common and incredibly painful for many/most users and developers.

      The idea of a protocol for “what artifacts in what languages does $thing depend on and how will it find them?” as discussed in the article would be incredibly powerful…IFF it were adopted widely enough to become a real standard.

      • rwmja day ago
        Assuming that your distro is, say, Debian, then you'll know the answer to that is always libssl-dev, and if you cannot find it then there's a handy search tool (both CLI and web page: https://packages.debian.org) to help you.

        I'm not very familiar with MySQL, but for C (which is what we're talking about here) I typed mysql here and it gave me a bunch of suggestions: https://packages.debian.org/search?suite=default&section=all... Debian doesn't ship binary blobs, so I guess that's not a problem.
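
        Concretely, on Debian/Ubuntu (package names taken from that kind of search; they can differ by release):

            sudo apt-get install libssl-dev default-libmysqlclient-dev
            pkg-config --cflags --libs openssl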

        "I have to build something on 10 different distros" is not actually a problem that many people have.

        Also, let the distros package your software. If you're not doing that, or if you're working against the distros, then you're storing up trouble.

        • lstodda day ago
          Actually, "build something on 10 different distros" is not a problem either: you just make 10 LXC containers with those distros on a $20/mo second-hand Hetzner box, sic Jenkins with trivial shell scripts on them, and forget about it for a couple of years or so until a need for an 11th distro arrives, in which case you spend half an hour or so to set it up.
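
          Something like this, per distro (a sketch using LXC's download template; names and releases are arbitrary):

              for rel in bookworm trixie; do
                  lxc-create -n build-$rel -t download -- --dist debian --release $rel --arch amd64
              done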
      • fc417fc802a day ago
        > what is the libssl distribution called in a given Linux distro’s package manager?

        I think you're going to need to know that either way if you want to run a dynamically linked binary using a library provided by the OS. A package manager (for example Cargo) isn't going to help here because you haven't vendored the library.

        To match the npm or pip model you'd go with nix or guix or cmake and you'd vendor everything and the user would be expected to build from scratch locally.

        Alternatively you could avoid having to think about distro package managers by distributing with something like flatpak. That way you only need to figure out the name of the libssl package the one time.

        Really issues shouldn't arise unless you try to use a library that doesn't have a sane build system. You go to vendor it and it's a headache to integrate. I guess there's probably more of those in the C world than elsewhere but you could maybe just try not using them?

    • dupeda day ago
      > C packaging in distros is working fine

      GLIBC_2.38 not found

      • Joker_vDa day ago
        Like, seriously. It's impossible to run Erlang/OTP 21.0 on a modern Ubuntu/Debian because of libssl/glibc shenanigans, so your best bet is to take a container with the userspace of Ubuntu 16 (which somehow works just fine on a modern kernel, what a miracle! Why can't Linux's userspace do something like that?) and install it in there. Or just listen to the "JuST doN'T rUN ouTdaTED SoftWAre" advice. Yeah, thanks a lot.
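
        (The container trick, for reference — image tag is the stock one on Docker Hub; you then build the old OTP inside, against that era's libssl/glibc:)

            docker run --rm -it -v "$PWD":/work -w /work ubuntu:16.04 bash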
      • amiga386a day ago
        If you have a distro-supplied binary that doesn't link with the distro-supplied glibc, something is very very wrong.

        If you're supplying your own binaries and not compiling/linking them against the distro-supplied glibc, that's on you.

        • dupeda day ago
          Linking against every distro-supplied glibc to distribute your own software is as unrealistic as getting distributions to distribute your software for you. The model is backwards from what users and developers expect.

          But that's not the point I'm making. I'm attacking the idea that they're "working just fine" when the above is a bug that nearly everyone hits in the wild as a user and a developer shipping software on Linux. It's not the only one caused by the model, but it's certainly one of the most common.
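
          The five-minute diagnosis, for anyone who hasn't hit it yet (binary name is a placeholder):

              # the newest glibc symbol version the binary was linked against
              objdump -T ./some-binary | grep -o 'GLIBC_[0-9.]*' | sort -uV | tail -n1

              # versus what the host actually provides
              ldd --version | head -n1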

          • amiga386a day ago
            It's hardly unrealistic - most free software has been packaged, by each distro. Very handy for the developer: just email the distro maintainers (or post on your mailing list) that the new version is out, they'll get round to packaging it. Very handy for the user, they just "apt install foo" and ta-da, Foo is installed.

            That was very much the point of using a Linux distro (the clue is in the name!) Trying to work in a Windows/macOS way where the "platform" does fuck-all and the developer has to do it all themselves is the opposite of how distros work.

            • duped21 hours ago
              User now waits for 3rd party "maintainers" to get around to manipulating the software they just want to use from the 1st party developer they have a relationship with. If ever.

              I understand this is how distros work. What I'm saying is that the distros are wrong, this is a bad design. It leads to actual bugs and crashes for users. There have been significant security mistakes made by distro maintainers. Distros strip bug fixes and package old versions. It's a mess.

              And honestly, a lot of software is not free and won't be packaged by distros. Most software I use on my own machines is not packaged by my distro. ALL the software I use professionally is vendored independently of any distribution. And when I've shipped to various distributions in the past, I go to great lengths to never link anything if possible that could be from the distro, because my users do not know how to fix it.

              • em-bee4 hours ago
                distributions started out by solving the problem that most developers at that time didn't even bother to build ready-to-run packages. they couldn't, because there were too many different architectures that not everyone had access to. so developers had to rely on users to build the applications for themselves. distributions then organized around that to make this easier for users. that's how the ports system in BSD came about. linux distributions went a step further and built distributable binaries.

                the problem was not predicting that developers would want more control over the build of their applications (which, thanks to architectures consolidating, became easier, because now a single binary will reach the majority of your userbase), and the need to support multiple versions of the same library or app in the package manager. that support should have been there from the start, and now it's difficult to fix.

                so it's unfair to say distros are wrong. yes, it's not an ideal design, but this is more of an accident of history, some lack of foresight, and the desire to keep things simple by having only the newest version of each package.

                there is a conflict between the complexity of supporting multiple package versions vs the complexity of getting applications to work with the specific library versions the distro supports. when distros started out it looked like the latter would be better for everyone. distributions tended to have the latest versions of libraries and fixing apps to work with those benefited the apps in most cases.

    • amlutoa day ago
      I've contemplated this quite a bit (and I personally maintain a C++ artifact that I deploy to production machines, and I generally prefer not to use containers for it), and I think I disagree.

      Distributions have solved a very specific problem quite nicely: they are building what is effectively one application (the distro) with many optional pieces, it has one set of dependencies, and the users update the whole thing when they update. If the distro wants to patch a dependency, it does so. ELF programs that set PT_INTERP to /lib/ld-linux-[arch].so.1 opt in to the distro's set of dependencies. This all works remarkably well and a lot of tooling has been built around it.

      But a lot of users don't work in this model. We build C/C++ programs that have their own set of dependencies. We want to try patching some of them. We want to try omitting some. We want to write programs that are hermetic in the sense that we are guaranteed to notice if we accidentally depend on something that's actually an optional distro package. The results ... are really quite bad, unless the software you are building is built within a distro's build system.

      And the existing tooling is terrible. Want to write a program that opts out of the distro's library path? Too bad -- PT_INTERP really really wants an absolute path, and the one and only interpreter reliably found at an absolute path will not play along. glibc doesn't know how to opt out of the distro's library search path. There is no ELF flag to do it, nor is there an environment variable. It doesn't even really support a mode where PT_INTERP is not used but you can still do dlopen! So you can't do the C equivalent of Python venvs without a giant mess.
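
      The closest approximation I know of is rpath games, which only half-solves it (library name and layout are hypothetical; the interpreter problem described above remains):

          # at link time: look in ./libs next to the binary before the distro paths
          cc -o app main.c -L./libs -lwhatever -Wl,-rpath,'$ORIGIN/libs'

          # or retrofit an existing binary
          patchelf --set-rpath '$ORIGIN/libs' app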

      pkgconf does absolutely nothing to help. Sure, I can write a makefile that uses pkgconf to find the distro's libwhatever, and if I'm willing to build from source on each machine* (or I'm writing the distro itself) and if libwhatever is an acceptable version* and if the distro doesn't have a problematic patch to it, then it works. This is completely useless for people like me who want to build something remotely portable. So instead people use enormous kludges like Dockerfile to package the entire distro with the application in a distinctly non-hermetic way.

      Compare to solutions that actually do work:

      - Nix is somewhat all-encompassing, but it can simultaneously run multiple applications with incompatible sets of dependencies.

      - Windows has a distinct set of libraries that are on the system side of the system-vs-ISV boundary. They spent decades doing an admirable job of maintaining the boundary. (Okay, they seem to have forgotten how to maintain anything in 2026, but that's a different story.) You can build a Windows program on one machine and run it somewhere else, and it works.

      - Apple bullies everyone into only targeting a small number of distros. It works, kind of. But ask people who like software like Aperture whether it still runs...

      - Linux (the syscall interface, not GNU/Linux) outdoes Microsoft in maintaining compatibility. This is part of why Docker works. Note that Docker and all its relatives basically completely throw out the distro model of interdependent packages all with the same source. OCI tries to replace it with a sort-of-tree of OCI layers that are, in theory, independent, but approximately no one actually uses it as such and instead uses Docker's build system and layer support as an incredibly poorly functioning and unreliable cache.

      - The BSDs are basically the distro model except with one single distro each that includes the kernel.

      I would love functioning C virtual environments. Bring it on, please.

    • dminik15 hours ago
      No, just no.

      Using system/distro packages is great when you're writing server software and need your base system to be stable.

      But, for software distributed to users, this model fails hard. You generally need to ship across OSs, OS versions and for that you need consistent library versions. Your software being broken because a distro maintainer has decided that a 3 year old version of your dependency is close enough is terrible.

      • MiiMe1915 hours ago
        If your software is not being distributed by that distribution and is using some external download tool, it is inherently not supported, and the only way to make sure it works is to compile from source.
        • dminik5 hours ago
          If you compile from source, but your distro is shipping library version that is incompatible with the app, you're still screwed.

          This is why flatpaks/snaps/app images have been taking off. Devs don't have time for bugs caused by incompatible libraries. Distro packagers don't have time to properly test the thousands of packages they have to change to satisfy their 1 shared library version policy.

    • aa-jva day ago
      ^ This.

      Plus, we already have great C package management. It's called CMake.

      • rurban5 minutes ago
        That's not great, that's horrible. It only helps Windows users, who are used to even worse horrors. autotools and pkg-config are fine.
      • bluGilla day ago
        CMake is not a package management tool, it is a build tool. It can be abused to do package management, but that isn't what it is for.
        • aa-jv7 hours ago
          It's a perfectly cromulent package manager.
      • rwmja day ago
        I hate autotools, but I have stockholm syndrome so I still use it.
        • kergonatha day ago
          I hated autotools until I had to use cmake. Now, I still hate autotools, but I hate cmake more.
        • aa-jva day ago
          It's not so hard once you learn it. Of course, you will carry that trauma with you, and rightly so. ;)
  • CMaya day ago
    I don't trust any language that fundamentally becomes reliant on package managers. Once package managers become normalized and pervasively used, people become less thoughtful and investigative into what libraries they use. Instead of learning about who created it, who manages it, what its philosophy is, people increasingly just let'er rip and install it then use a few snippets to try it. If it works, great. Maybe it's a little bloated and that causes them to give it a side-eye, but they can replace it later....which never comes.

    That would be fine if it only affected that first layer, of a basic library and a basic app, but it becomes multiple layers of this kind of habit that then end up in multiple layers of software used by many people.

    Not sure that I would go so far as to suggest these kinds of languages with runaway dependency cultures shouldn't exist, but I will go so far as to say any languages that don't already have that culture need to be preserved with respect like uncontacted tribes in the Amazon. You aren't just managing a language, you are also managing process and mind. Some seemingly inefficient and seemingly less powerful processes and ways of thinking have value that isn't always immediately obvious to people.

  • krautsauera day ago
    Why is meson's wrapdb never mentioned in these kinds of posts, or even the HN discussion of them?
    • johnny2218 hours ago
      probably because meson doesn't have a lot of play outside certain ecosystems.

      I like wrapdb, but I'd rather have a real package manager.

  • conorbergina day ago
    I use a lot of obscure libraries for scientific computing and engineering. If I install one from pacman or manage to get an AUR build working, my life is pretty good. If I have to use a Python library, the faff becomes unbearable: make a venv, delete the venv, change python version, use conda, use uv, try to install it globally, change python path, source .venv/bin/activate. This is less true for other languages with local package management, but none of them are as frictionless as C (or Zig, which I use mostly). The other issue is that .venvs, node_modules and equivalents take up huge amounts of disk and make it a pain to move folders around, and no, I will not be using a git repo for every throwaway test.
    • auxyma day ago
      uv has mostly solved the Python issue. IME its dependency resolution is fast and just works. Packages are hard-linked from a global cache, which also greatly reduces storage requirements when you work with multiple projects.
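
      Easy to verify on a Linux box (paths are the defaults uv uses; hard-linking requires the cache and the project to be on the same filesystem):

          uv venv && uv pip install numpy
          # link count > 1 means the file is shared with uv's global cache rather than copied
          stat -c '%h %n' .venv/lib/python3.*/site-packages/numpy/__init__.py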
      • storystarlinga day ago
        uv is great for resolution, but it seems like it doesn't really address the build complexity for heavy native dependencies. If you are doing any serious work with torch or local LLMs, you still run into issues where wheels aren't available for your specific cuda/arch combination. That is usually where I lose time, not waiting for the resolver.
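
        Right — the usual dance when a matching wheel does exist is picking the right index (PyTorch's own wheel index; the cuXXX suffix has to match your CUDA setup, and when it doesn't, you're building from source):

            pip install torch --index-url https://download.pytorch.org/whl/cu121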
      • drowsspaa day ago
        You still need to compile when those libraries are not precompiled.
      • amlutoa day ago
        uv does nothing to help when you have old, crappy, barely maintained Python packages that don’t work reliably.
    • megolodana day ago
      compiling an open source C project isn't time consuming?
  • arkt816 hours ago
    The biggest difficulty is not that; it's the many assumptions you need when writing a makefile, and how to use different versions of the same library. LD_LIBRARY_PATH is treated as potentially risky. Not that it necessarily is... but assumptions from the past, like big monsters, are a barrier to simpler C tooling.
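
    For what it's worth, picking a library version at run time is one line (path is illustrative):

        LD_LIBRARY_PATH=/opt/foo-2.1/lib ./app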
  • josefxa day ago
    > Conan and vcpkg exist now and are actively maintained

    I am not sure if it is just me, but I seem to constantly run into broken vcpkg packages with bad security patches that keep them from compiling, cmake scripts that can't find the binaries, missing headers and other fun issues.

    • adzma day ago
      I've never had a problem with vcpkg, surprisingly. Perhaps it is just a matter of which packages we are using.
    • Piraty19 hours ago
      yes, i found conan appears to have lax rules regarding package maintenance, which leads to inconsistent recipes
    • fslotha day ago
      C++ community would be better off without Conan.

      Avoid at all cost.

  • advael21 hours ago
    I think system package managers do just fine at wrangling static library dependencies for compiled languages, and if you're building something that somehow falls through the cracks of them then I think you should probably just be using git or some kinda vcs for whatever you're doing, not a package manager

    But on the other hand, I am used to Arch, which both does package management à la carte as a rolling-release distro and has a pretty extensively used secondary open community ecosystem for non-distro-maintained packages, so maybe this isn't as true in the "stop the world" model the author talks about

  • Piratya day ago
    • Archit3ch9 minutes ago
      They lost me when they advocate for global dependencies instead of bundling. Are you supposed to have one `python` in your machine? One copy of LLVM (shared across languages!) ? One `cuda-runtime`?
    • One of my favorite blog posts; I enjoy it every time I read it. I've written two C package managers in my life and they... were fine. I think it's a genuinely hard thing to get right outside of a niche.

      The most recent one is mildly better than the first from a decade ago, but still not quite right. If I ever build one I think is good enough I'll share it, only to most likely learn about 50 edge cases I didn't think of :)

    • smwa day ago
      The fact that the first entry in his table says that apt doesn't have source packages is a good marker of the quality of this post.
  • dupeda day ago
    Missing in this discussion is that package management is tightly coupled to module resolution in nearly every language. It is not enough to merely install dependencies of given versions but to do so in a way that the language toolchain and/or runtime can find and resolve them.

    And so when it comes to dynamic dependencies (including shared libraries) that are not resolved until runtime, you hit language-level constraints. With C libraries the problem is not merely that distribution packagers chose to support single versions of dependencies because it is easy; it's that the loader (provided by your C toolchain) isn't designed to support multiple versions.

    And if you've ever dug into the guts of glibc's loader it's 40 years of unreadable cruft. If you want to take a shot at the C-shaped hole, take a look at that and look at decoupling it from the toolchain and add support for multiple version resolution and other basic features of module resolution in 2026.

    • pifa day ago
      > And if you've ever dug into the guts of glibc's loader it's 40 years of unreadable cruft.

      You meant: it's 40 years of debugged and hardened run-everywhere never-fails code, I suppose.

      • dupeda day ago
        No, I meant 40 years of unreadable cruft. It's not hard to write a correct loader. It's very hard to understand glibc's implementation.
  • C*** shaped?