Relative-path handling for bottle installs is still not perfect; it works for every bottle I have tested except rust. Getting bottles working 100% is very doable though, imo.
Building formulae from source is still pretty f*ed, and I do not know if it is really feasible given that the JSON API lacks information there and a full-on Ruby -> Rust transpiler is way out of scope. I will probably settle for automatic build-system detection based on archive structure. Maybe I will also do my own version of the .rb scripts, but in a more general, machine-readable format (not .rs lol).
Casks seem to work, but I have only tested some .dmg -> .app ones and .pkg installers so far. As with bottles, 100% doable.
Given that almost all formulae are available as bottles for modern ARM Macs, this could become a fully featured package manager. I actually didn't think so many people would look at it; I started building it for myself because Homebrew just isn't cutting it for what I want.
I started working on a declarative package + system manager for macOS because I feel Ansible is overkill for one machine (and not really made for that), and nix-darwin worms itself too deep into the system. Wrapping brew commands was abysmally slow though, so I started working on this, and by now I am deep enough in that I won't stop xD
Anyway, I am grateful for every bug report, issue, and well-meaning pull request.
Is there uv support?[0]
One of my biggest gripes about brew is how they manage dependencies. The devs have a conflicting philosophy that creates bloat: package maintainers must pin a Python version, and that version is supposed to be the latest one. It makes no sense. Maintainers won't update unless things break, so you get a bunch of Python versions running around. And it won't use the system Python!
uv seems to provide an elegant solution for this. You can build a venv for each package, and each venv will only have the specified deps. Since uv finds all your Python installs (and packages) and symlinks them, you have way less bloat and venvs become really useful. You can also use `uv run` and other tools to handle executables.
Plus it is also Rust, so good synergy ;)
> not dependencies for your own Python projects
I'm not doing that. Honestly, I'm not sure how to do that and it sounds like a real pain.

> the Python version does not matter
This is incorrect. Go check what versions of Python brew has installed for you. It's definitely not your system version... It's not "what works", it is "what the maintainer specified". And according to the brew devs, this is supposed to be /the latest version that works/, which was my point. People don't update just because of a Python change; that's not going to happen without automation. (I even suggested we be allowed to specify a minimum version and was told it's the maintainer's responsibility.) You can trivially find packages that can be used with newer versions of Python than their brew formulas specify.
* One of the main reasons Homebrew doesn't use the "system" Python is because Apple has repeatedly indicated that they want to remove it, and that integrators should not depend on it. This, plus per-package Python version requirements makes using a single system Python a non-starter.
* The "bloat" you're noting in Homebrew around multiple Python interpreters is present in `uv` and `pyenv` as well: `uv` handles multiple interpreters transparently, so you may not even realize how many you have installed. I think this is a good pattern: disk space is cheap relative to the timesink of connecting the right Python version to the right set of packages, which is why every distribution scheme (including both Homebrew and Debian) prefers to distribute multiple Python versions.
> because Apple has repeatedly indicated that they want to remove it
That's not a reason not to use it. There's absolutely zero reason for me to have two copies of the exact same Python version (e.g. 3.11.4). There is rarely a reason to have differing patch versions (3.11.3 vs 3.11.4).

> is present in `uv`
Are you guessing, or do you know? Try it out. Prove me wrong. I said they search for versions on your system and link them if found. Seriously, look into it.
> disk space is cheap
That's not correct. It's a thing you need to consider when you have to compromise, but it is not an infinite resource. And there's something cheaper than storage: infrequently scanning the system! The problem is that when everyone thinks this way, space is no longer cheap (especially with Apple!).
Tragedy of the commons
In fact, it's a big reason I live in the terminal: even on a modern M2 Air that bloat creeps in. My system is fast and snappy, and I have plenty of storage, but that wouldn't be true if I didn't. The creep still happens in terminal programs, but their nature lessens the blows, and the likelihood that people care about these things increases.
It is, in fact, an excellent reason not to use it. Homebrew runs on tens of millions of machines; we are absolutely not going to rely on things that we're told not to rely on unless absolutely necessary. Python is readily buildable and we already need to build multiple versions for reasons aforementioned, so this condition does not apply.
Also note that it's stronger than what I originally said: 10.15 doesn't ship with Python by default at all anymore. I only have it installed (at 3.9.6) because I also have Xcode installed. Homebrew supports being used without Xcode or the CLT, so that alone would be a hard blocker for system Python for us.
> Are you guessing or do you know? Try it out. Prove me wrong.
My `uv python list` shows that I have 3 uv-managed versions of Python installed, along with 5 pyenv versions and 2 Homebrew versions. I have more than most people because I test large matrices of Python versions at once, but I imagine a normal Python developer isn't too far off.
You can test this for yourself with the same command. If you're developing more than one Python library or application at a time, I strongly suspect `uv` or `pyenv` is using more space for Python versions than Homebrew is.
> an excellent reason not to use it
You're really missing my argument here.

>> there's something cheaper than storage: infrequently scanning the system!
The solution already exists: if the required Python version does not exist, download it.
This is already being done; the missing part is the scanning of the system. I need to make this abundantly clear: you do not need to rely on Apple for this to be a solution. Furthermore, even if Apple completely removes Python, making this change would still improve brew and reduce bloat. This is why I didn't care about that comment: it is inconsequential to what I'm suggesting. If they remove Python, you're in the exact same position as not having the right version of Python. It is equivalent. I'm not sure why you're harping on this.

> My `uv python list` shows
You're again missing the point. It isn't about having different versions; hell, I got 3.13, 3.12, 3.11, 3.10, 3.9, and 3.8 on my system. The problem is having redundant copies of minor (or even patch) versions. When I installed `uv` I uninstalled brew, purged everything, and reinstalled. This is because when I'm creating a new venv it isn't actually `uv` that is installing a new Python version; it is linking one that was installed by brew. When I was using Anaconda, conda didn't search for existing versions (and packages); it just installed its own. You might be thinking this is not a problem with brew but a problem with conda, but that's passing the buck. Brew is making the same error that conda was, and thinking brew is better and that the solution should be fixed by others is just idiotic. It's blaming those downstream for a problem that's being created upstream. If a "crime" is a "crime", then it doesn't matter who commits it.

We're getting off topic now, since this is beside the point. But hey, here are a few things that can help you look at the system and what's happening.
On my system, brew installed 3.13, 3.12, and 3.11 (you can guess when I purged). For good measure, try this:

    brew list | xargs -P8 -n1 sh -c 'brew cat "${0}" | grep "depends_on \"python@3." | sed -e "s/.*\(python@3\.[0-9]*\).*/\1/g"' 2> /dev/null | sort -u

This will go through all your brew packages and find the Python dependencies. My output shows 3.11 and 3.12. Interestingly, no Python 3.13, though brew installed it, and brew and brew-core don't contain Python code (according to GitHub). Maybe something installed it and then I removed it; I'm unsure. But let's edit our command and check only those python@3.11 instances:

    brew list | xargs -P8 -n1 sh -c 'brew cat "${0}" | grep "depends_on \"python@3.11" &> /dev/null && echo "${0}"' 2> /dev/null
I checked the formulas on GitHub; none of those even have Python 3.11 anymore. These are cryptography, libxml2, numpy, py3parser, py3cairo, python-cryptography, pygobjects3, and python-packaging. With the exception of python-cryptography and python-packaging (no GitHub formula), the brew formula on GitHub depends on both 3.12 and 3.13 for every one of these. There is no dep for 3.11! I did a `brew update` and `brew upgrade` and they're still 3.11. But hey, now the first brew list command reports 3.11, 3.12, and 3.13. Doing `brew info python-cryptography`, there's a green checkmark on `python@3.12` and `python@3.13` (`python@3.11` is not shown), while `python-setuptools` has a red x, despite the fact that I have setuptools installed in both 3.13 and 3.12 in different uv environments (it also isn't finding rust). So it certainly isn't finding packages and trying to reduce redundancy, which is the big problem `uv` tries to solve in the first place.

That kind of magical special-case behavior is not intuitive and creates more problems than it is worth.
For the issue I'm talking about you have to look. It's a non breaking bug. So unless you look you likely won't see the problem.
There are two types of bugs: 1) those that scream at you, and 2) those that hide. We're talking about the latter.
I haven't played around with this yet (I will this weekend!), but I know brew is way too verbose, and IO is a big slowdown. It probably doesn't matter much given that brew is written in Ruby, but it's an easy thing to miss, and it is quite common for people not to recognize it. Like when people use `tar xvf` instead of `tar xf`: there's a very noticeable speed difference for many packages lol. (Sorry if I'm preaching to the choir here. It's a pet peeve of mine given how common this is.)
The way I ended up using it was that `brew install` would temporarily install something, without adding it to my Brewfile. And a little `brew add` wrapper would add the package to my Brewfile to keep it on the system permanently. That part with the wrapper could have used some love and would be a nice fit for a new brew-compatible frontend IMO. Maybe you could expand on that for Sapphire, if that also scratches your declarative itch?
I'm guessing you're hoping that it is eventually more performant -- are there specific areas of current brew you have identified as performance bottlenecks likely to benefit from a Rust implementation?
Or any more info to share about assumptions/hopes that motivated this or any other motivations?
Thanks to Rust just being faster (slightly? significantly? no idea about Ruby's speed), plus concurrent downloading & pouring of bottles, most "regular" formula installs already feel a good bit faster than brew. It's mainly noticeable when installing multiple formulae at once.
Casks, especially those with pkg installers, seem to benefit a bit less here.
Performance was a reason, though not the main one. Like I said, I wanted, and still want, to build a declarative package + system management solution on top; the idea was to get into Rust with that. Imo, having the base written in the same language instead of wrapping commands also gives more flexibility there.
Another reason is that I never liked the way brew looks and feels. Right now the UI/UX for Sapphire is far from finished, more like a clusterf*k, and only the search command really looks the way I want. I'm aiming for something modern, clean, and information-rich without being overly verbose. I really like dnf5 and what AerynOS is doing and will probably take some inspiration there.
Like I mentioned, bottles and casks should be 100% doable, and that would cover most package needs on macOS. I do not see why I should also define a new repo and packaging ecosystem when such a big and popular one exists.
Source-build capability will probably stay (for easy integration of source building into the system-management part later) but will not be focused on brew formulae, as the Ruby DSL would be a horror to parse.
Well, and sh*t, I am not really trying to compete. This is the first time I'm building something with Rust, and I really, really had no idea what a giant, never-ending rabbit hole macOS package management is, or how massive and complex brew is.
This went from should I to can I pretty quick for me xD
The performance of apt/dnf in comparison is surreal, yet dnf (or at least yum, its predecessor) is written in Python, which has even worse performance characteristics than Ruby.
Clearly something is wrong, I wonder how different they are architecturally.
(There's still low-hanging fruit, but it's not like it was a few years ago when `brew list` took seconds to run. It now runs nearly instantaneously for me locally, like most of the other happy-path commands.)
However, the speed increase coincided with my upgrade to an M-Series laptop, so it's possible I just presumed there was a significant hardware speedup in the time we're talking about.
I fucking hate homebrew.
I hate the fact that the project still has the attitude of "sudo is le hard and we are le tired."
The project people are assholes.
The maintainers are often well behind current releases.
It maintains a cache of every installed version for no good reason, wasting ~10GB or more of my SSD space.
When it breaks it's impenetrable trying to figure out how it broke, there's nobody to ask for help, the documentation sucks, and the fastest thing is just to wipe the whole fucking directory and start over.
I could go on. I don't know a single person that likes using homebrew - it's just the package manager everyone resigned themselves to use.
> I hate the fact that the project still has the attitude of "sudo is le hard and we are le tired."
> The project people are assholes.
Have you considered that approaching us like this isn't productive and doesn't make anyone remotely interested in helping you?
Years ago I worked with someone who strongly disliked brew because it leaned far too heavily on magic. I was okay with it because it seemed to work well. Brew still uses magic but now it just seems like I'm fighting it every single step of the way. I am le tired.
Actually hearing about them (instead of just cursing at us) is genuinely helpful, and we genuinely appreciate it.
cmon, `brew cleanup --prune=0` is just part of the muscle memory by now.
But as a long time Linux user who always has to use a Mac at work I've been consistently floored by how painful Homebrew is to use, to the point that for my latest corp-issued MacBook I switched to home-manager and I'm not looking back.
Maybe that could be a place where sapphire differentiates?
If I may surface one use case: Several years ago I had to manage a bunch of Macs for CI jobs. The build process (Unreal's UAT) didn't support running more than one build process at a time, and Docker was really slow, so I'd hoped to use different user accounts to bypass that and get some parallelization gains. Homebrew made that very difficult with its penchant for system-wide installs. So a feature request: I'd love to see a competitive package manager that limits itself to operating somewhere (overridable) in the user's home directory.
I didn't check, but there is a chance that path is also hardcoded in (some) formulae, so even building from the source might not help here.
So doing `brew install` inside a container with the proper volumes is not sufficient to fix the issue; everything would have to run from within the container as well.
Please add knobs for the end user to manually configure this per package, plus a global default, before adding autodetection. As a user it is very frustrating to have to patch the package manager to override some well-intentioned automagic that didn't consider my setup, or to dig through sources to uncover some undocumented assumption. yarn is a cautionary example.
Last I checked (which was about a year ago), Homebrew had ~7000 formulas (not including casks).
I think it would be feasible to transcribe most of them to your format of choice with AI, run the build in a loop and ask the LLM to fix errors, and reserve manual intervention for the few cases that the LLM can't fix.
I'll probably open a poll on GitHub within the next couple of days about what to do for a real from-source packaging system with its own DSL: maybe just YAML/TOML, or something a bit more powerful like Lua via mlua. No matter the packaging approach, I would like to keep the generated packages "bottle compliant", at least with respect to the JSON API spec, since Sapphire also installs from there.
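For a sense of what the YAML/TOML route could look like, here is a purely hypothetical from-source package spec; every field name is made up for illustration, and nothing here is a committed format:

```toml
# Hypothetical from-source package definition (illustrative only)
[package]
name = "hello"
version = "2.12.1"
homepage = "https://www.gnu.org/software/hello/"

[source]
url = "https://ftp.gnu.org/gnu/hello/hello-2.12.1.tar.gz"
sha256 = "<sha256 of the tarball>"

[build]
system = "autotools"   # could fall back to archive-based autodetection if omitted
args = ["--disable-nls"]

[dependencies]
build = ["pkg-config"]
runtime = []
```

A declarative format like this stays machine-readable, and a generator could still emit JSON matching the bottle API spec on top of it.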
There seems to be at least some interest in my project, so even though I am very opinionated on a lot of things, I don't think I should decide on that completely alone, especially given that this is the first time I'm playing with packaging.
At its core, there are really two parts to Homebrew:
1. There's the client side, i.e. `brew`, within which 99.9% of users stick to happy paths (bottle installs, supported platforms). These users could be supported with relative ease by a small native-code installer, since the majority of the work done by the installer in the happy path is fetching bottles, exploding them, and doing a bit of relocation.
2. There's literally everything else, i.e. all of the developer, repository, and CI/CD machinery that keeps homebrew-core humming. This is the largely invisible infrastructure that makes `brew install` work smoothly, and it's very hard to RIIR (in a large part because it's tied heavily to the formula DSL, which is arbitrary Ruby).
(1) is a nice experimental space, because Homebrew does (IMO) a decent job of isolating the client-facing side from the complexity of (2). However, (2) is where the meat-and-potatoes of packaging happens, and where Homebrew's differentiators really lie (specifically, in how easy it is to contribute new packages and bump existing ones).
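To make (1) concrete, the fetch/explode/relocate happy path can be sketched in a few lines. The placeholder string and layout below are assumptions for illustration; real bottles also need Mach-O fixups, code signing, and so on:

```python
# Sketch of the client-side happy path: explode a bottle tarball and
# relocate it by rewriting a build-time placeholder prefix. The
# placeholder value and directory layout are made up for illustration.
import tarfile
from pathlib import Path

PLACEHOLDER = b"@@BUILD_PREFIX@@"  # hypothetical stand-in

def pour_bottle(tarball: Path, cellar: Path, prefix: bytes) -> None:
    cellar.mkdir(parents=True, exist_ok=True)
    # 1. explode the archive into the cellar
    with tarfile.open(tarball) as tf:
        tf.extractall(cellar)
    # 2. relocate: rewrite the build-time placeholder to the real prefix
    for f in cellar.rglob("*"):
        if f.is_file():
            data = f.read_bytes()
            if PLACEHOLDER in data:
                f.write_bytes(data.replace(PLACEHOLDER, prefix))
```

Fetching is omitted here; it's just an HTTP GET of the bottle URL before this step.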
Edit: Another noteworthy aspect here around performance: I mentioned this in another comment[1], but parallel downloads of things like bottles and DMGs is not an architectural limitation of Homebrew itself, but instead a conscious decision to trade some install speed for courtesy towards the services we fetch from (including GitHub itself). Smaller projects can sidestep this because they're not directing nearly the same degree of volume; I think this project will discover if/when its volumes grow that it will need to serialize downloads to avoid being throttled or outright limited.
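For what it's worth, fully serial and fully parallel aren't the only options in that trade-off; a common middle ground is bounded concurrency, capping in-flight requests. A sketch of the idea (not Homebrew's policy; `fetch` is a stand-in for a real download, and the cap of 3 is arbitrary):

```python
# Sketch of bounded-concurrency fetching: a semaphore caps in-flight
# requests so a client can parallelize without hammering the origin.
import threading

def fetch_all(urls, fetch, max_in_flight: int = 3):
    gate = threading.Semaphore(max_in_flight)
    results = {}
    lock = threading.Lock()

    def worker(url):
        with gate:                      # at most max_in_flight at once
            data = fetch(url)
        with lock:
            results[url] = data

    threads = [threading.Thread(target=worker, args=(u,)) for u in urls]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Tuning `max_in_flight` down to 1 recovers today's serial, courteous behavior, which is what makes it a knob rather than an either/or.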
RIIR - "Rewrite It In Rust" (maybe obvious in context? sharing in case not)
I also feel that there could be a lot of automation in the backend part, catching bugs early (maybe even on local machine before CI run) for example.
I wonder if anything changed substantially over the years to make you say that?
- GitHub release sniffing https://github.com/Homebrew/homebrew-core/blob/b331b99b9f24f...
- page scraping https://github.com/Homebrew/homebrew-core/blob/b331b99b9f24f... (and also per-content-type flavors json and presumably xml)
- links to other formulae (IOW cascading updates): https://github.com/Homebrew/homebrew-core/blob/b331b99b9f24f...
and then the $(brew livecheck) invocation which will do a subset, a curated list, or hypothetically all of them
I can't imagine why MacPorts or our new Sapphire friend couldn't adopt a similar strategy
Absolutely MacPorts and Sapphire can adopt the same strategy, but the point is that brew already has, so what exactly would the benefit be? e.g. if the language of choice is effectively meaningless, re-writing homebrew in Rust serves effectively no purpose. This is contrasted with systems and software where the performance or correctness is the most important feature, and therefore RIIR can be a big win.
Said another way: Brew was not "re-write MacPorts in Ruby", it had much loftier goals which it then executed on effectively. Sapphire mostly seems to be "re-write Brew in Rust", without much beyond that. So the only real gain is a bit more performance out of the CLI.
I can't remember the last time my "brewup" alias failed me
brew update && brew upgrade -g -f && brew cleanup --prune=all
I'm acutely aware someone's going to say LSP something or Rubymine something else but as for drive-by contributions, ... anyway, like I said, just for your consideration
That doesn't make sense. As you say you're directing a huge volume of traffic so it makes no difference exactly when a user downloads a byte. It all gets smeared out. Only the total amount of data matters and that is unaffected by parallelism.
Homebrew's traffic pattern is not a uniform distribution. Package updates go out, and users download those packages in structured ways: there are spikes for MDM-managed Homebrew installations, spikes for cronjobs and CI/CD systems, spikes at 9AM on different coasts when developers sign into their machines, etc.
(How much this matters is also not uniform: it matters somewhat less for GitHub Packages - they should have a hefty CDN - and it matters somewhat more for Casks, which tend to be large DMGs hosted on individual servers.)
If spikes at midnight etc. are an issue, just automatically disable parallel downloads around midnight, or only use them when running in a terminal. I really doubt it is an issue for GitHub, though.
(It’s not just midnight, or terminals, as mentioned, not that either of these is really a “just”. And it’s not just GitHub, as mentioned.)
I just don't think "it shouldn't be done" is really the true reason. It's very tempting to say that though because it sounds like a stronger defense than "I don't want to". I think some people that do it don't even realise.
These kinds of complications on a "same experience for everyone" flow are also nontrivial: single-threading on CI is hard to explain to users who see very fast local installs and slow CI installs - the latter costs them extra money! - and complicates the story with mirrors, download groups that include non-GH origins, etc.
That's not to say it's impossible; as linked in another thread, it's something that's being actively worked on. But I think it should be understandable why "just parallelize it and you're wrong/stupid/lazy because you haven't already" is neither a contextualized or charitable response.
I suspect they are wrong in this specific case based on the "every other package manager solved this" heuristic, but I have no doubt this is not a straightforward issue.
Personally, I dropped the matter after offering to send a PR and getting back a "we won't accept this; it's not because we didn't think of it; from where we're standing it's a bad idea". They may or may not be wrong, but I am definitely wrong here, knowing almost nothing about it.
I'm not a big fan of keeping the Homebrew terminology though. I never know what a formula, keg, cask, cellar, tap or bottle is. Why not keep to the standard terms of package and repository etc? I don't know beer brewing terminology or how beer brewing is analogous to package management, and I honestly wish that it wasn't something which my tools expect me to learn.
I don't want to memorise their twee names; I'd much rather the name tell me what the entity / operation does by itself.
It’s not that I like boring. But I really like descriptive names. I have other things to do with my time than figuring out what the hell a cask, a tap, or a bottle is. Like solving the problem that requires the damn software.
It's toned down lately and a ton of projects have been renamed, though not scrubbed, so the old names are still in the code and some documentation uses the old names. Also, you can't rewrite tens of thousands of forum posts from 2010 that use the older names that show up while searching for issues.
I'd agree that the current homebrew terms are inappropriately whimsical and hard to grasp, but you are right in your intuition and goal, IMO.
That is, taking care of the gears first and then carefully adjusting the public API.
No different than the Windows registry, which apparently uses a honeybee / hive metaphor because some Windows dev hated bees and their teammates liked trolling them.
https://learn.microsoft.com/en-us/windows/win32/sysinfo/stru...
That said, the registry has been around for ~30 years and the terminology is well known to Windows users. It didn't build off of previously available terminology.
Homebrew just made shit up on the spot based on the project's name to be internally consistent with itself.
Microsoft has its own fsckery of randomly named cute crap elsewhere, of course.
I wish they’d just call them binaries, macOS packages, packages, gui-packages, etc.
To be clear, all those words are also jargon but they’re reusable concepts across software.
Edit: And I got reminded that with MacPorts it was possible to install and run packages for and from multiple users. Brew took over /usr/local/bin and other /usr/local paths for the running user; managing a system with multiple users with brew was, and still is, hard. With MacPorts you needed sudo, like with most other package managers. The sudo-less nature was a huge deal for brew's adoption, and now maybe a security risk if you ask me.
Not OP, but I want my package manager to have good UX around package installs and updates, not to decide to update a Python major version because some random small thing I'm installing requests it.
What discourages me from using Homebrew is the intent and the mindset of its developers and packagers, who, I think, see their goal building an "unstable" distribution, as Debian defines it: "[a distribution that] is run by developers and those who like to live on the edge".
I am not blaming the Homebrew developers for building a Sid rather than Bookworm. Some people want just that. Heck, I used to run Debian Sid myself, but have lost my patience for maintaining my own computers since: I am kept busy enough by fixing the software I write, I don't want to spend more time fixing software I did not.
This is how I install Homebrew when I have to, and so far the only issue I ran into is that binary packages are often tagged as installable only into Homebrew's default folder, so Homebrew had to build the to-be-installed software from source instead, resulting in it taking longer and the computer fans spinning louder.
[0] https://docs.brew.sh/Installation#untar-anywhere-unsupported
I know why certain software has to be run with root privileges, but it's a bit hard for me to come up with a reasonable scenario where software would fail to run properly when installed to a directory owned by the same non-root user that launches it.
Lemme try to repro again when I'm home.
This issue is also brought up in the Github commit posted here: https://news.ycombinator.com/item?id=43766371 "I wouldn't worry about it not being root. We don't install anything base enough for it to be a concern (unlike MacPorts or Fink)."
With many macOS users coming from a different communities than Debian users, I really wonder how well that would go over with the folks whose software was being distributed.
Trying to manage nix was more work than I wanted to do.
I know it sounds dumb, but uv was smart to go shorter than pip, and "sapphire" feels heavier to type than "brew" no matter what it does after that.
This could be Sapphire -> sap
I am not sure if the lesson is to try harder to avoid offence, or live with the fact that words can have multiple meanings and we can be "professional" enough to ignore some of those meanings in some contexts.
A decent full package manager would support a simple, shell-like DSL like say Alpine or Arch, concurrent and parallel phases (such as downloads/builds/installs), multiple versions, reproducible builds, building from source, build acceleration, security auditing, patch management, and package cryptographic signatures (not hashes or signing of hashes).
Nix is theoretically amazing but the barrier-to-entry and gotchas for popular use make it self-limiting. Simplicity in particular areas is a price that is often paid for popularity.
I don't see a lot of engineers running Intel Macs anymore. I haven't seen any engineers who still use an Intel model, myself, and for quite some time. Especially when there are Apple Silicon options for well under $1,000 that highly outperform the Intel models.
I like it in ~/.brew where I have full permission to it and only my user.
> Technically, you can just extract (or git clone) Homebrew wherever you want.
However, Homebrew maintainers are dickish about essentially banning you from contributing to the project if your installation is nonstandard. They have a similarly discouraging warning if you are running a developer beta, which has turned me off of contributing fixes to broken formulas lest they reject them.
> If you decide to use another prefix: don’t open any issues, even if you think they are unrelated to your prefix choice. They will be closed without response.
I especially like their claim of being unprivileged. Very early stages just like Sapphire.
A couple purely superficial suggestions (echoing some other comments here):
- Lose the Brew terminology, especially if the name of the project isn't a synonym of "brew."

- Change the name in general. "Sapphire" makes me think of "Ruby." IMO the obvious name is MacPac :p
https://github.com/pkgxdev/pantry/pull/5360#issuecomment-233...
Edit: Oh, the Github issue, yeah that's screwed up.
As for Facebook tracking I 100% dislike that.
Interestingly, I always imagined that a would-be replacement would come written in Swift. I guess I was wrong.
Is it possible to have a shorter command name?
* excessive superfluous animations? hell yeah
* emojis everywhere? of course!
* forced updates on by default
* breaking updates? that's your problem, noob. Try to keep up.
* versioning? lol, this isn't NASA
* backwards compatibility? Lol. come on, we redesigned the standard, the right way, the 5th time this time.
It's easy to criticize open source when you're not contributing, but that's why I sent my dollars over there, since I couldn't donate my time. Frustrating.
Ruby seems fine for brew. Does this do anything else better? Ruby makes it easy to write recipes for it which is a huge boon for a package manager.
The main reason I find brew a bit slow is just the lack of parallelism in downloads and installs. Instead, brew alternates sequentially, D, I, D, I, D, I, when I wish it would just keep downloading in the background while it is doing the installs. That would cut brew upgrade time by 30% or more at the cost of more disk space used during the process.
But this one qualm I have isn't a result of its language implementation at all.
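The overlap described above is a classic two-stage pipeline; here's a minimal sketch with simulated `download`/`install` callables, nothing brew-specific:

```python
# Sketch of overlapping downloads with installs: one thread keeps
# downloading sequentially while the main thread installs whatever is
# ready. download/install here are stand-ins, not brew's code.
import queue
import threading

def pipeline(packages, download, install):
    ready: queue.Queue = queue.Queue()

    def downloader():
        for pkg in packages:          # still one download at a time
            download(pkg)
            ready.put(pkg)            # hand off to the installer
        ready.put(None)               # sentinel: nothing left

    threading.Thread(target=downloader, daemon=True).start()
    installed = []
    while (pkg := ready.get()) is not None:
        install(pkg)                  # overlaps with the next download
        installed.append(pkg)
    return installed
```

With n packages and roughly equal per-package download and install times t, strict alternation costs about 2nt, while the pipeline costs about (n+1)t, approaching a 50% saving for large n.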
(This is a perverse countereffect: small projects can make performance decisions that Homebrew and other larger projects can't make, because they don't have a large install base that reveals the limitations of those decisions.)
I have heard that before.
Hmm... I wonder if you can get away with not doing parallel downloads, but just keeping the sequential downloads going in the background while it is installing a package? The pause in downloads during an install is the main slowdown I see in brew.
I could be wrong, but I believe multiple people, including maintainers, have looked into exactly that :-)
(I also need to correct myself: there is some work ongoing into concurrent downloads[1]. That work hasn't hit `brew install` yet, where I imagine the question of concurrent traffic volume will become more pressing.)
(Among other technical challenges, like updating the P2P broadcast for each new bottle.)
Generating additional metadata at bottle build time doesn't appear to be much of a technical challenge either.
These are asymmetric: brew runs at a point in time, and most people decidedly do not want brew running in the background or blocking while leechers are still being serviced. They want it to exit quickly once the task at hand is done.
> Generating additional metadata at bottle build time doesn't appear to be much of a technical challenge either.
That's not the challenge. The challenge is distributing those updates. My understanding is that there's no standard way to update a torrent file; you re-roll a new file with the changes. That means staggered delivery, which in turn means a long tail of clients that see different, incompatible views of the same majority-equal files.
Kinda. You do create a new torrent, but you distribute it in a way that to a swarm member is functionally equivalent to updating an old one. Check out BEP-0039 and BEP-0046 which respectively cover the HTTP and DHT mechanisms for updating torrents:
https://www.bittorrent.org/beps/bep_0039.html
https://www.bittorrent.org/beps/bep_0046.html
If that updated torrent is a BEP-0052 (v2) torrent it will hash per-file, and so the updated v2 torrent will have identical hashes for files which aren't changed: https://www.bittorrent.org/beps/bep_0052.html
This combines with BEP-0038 so the updated torrent can refer to the infohash of the older torrent with which it shares files, so if you already have the old one you only have to download files that have changed: https://www.bittorrent.org/beps/bep_0038.html
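The per-file hashing property that makes v2 torrents cheap to update can be shown in miniature. Plain SHA-256 stands in here for BitTorrent v2's per-file merkle roots, and the file names are made up; the point is only that unchanged files keep identical hashes across two "torrent versions", so a client holding the old one fetches just the changed files.

```python
import hashlib

def file_hashes(files):
    """Hash each file's contents independently (v1 hashes fixed-size pieces across
    the concatenated payload, so one changed file perturbs its neighbours;
    v2 hashes per file)."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

old = file_hashes({"foo-1.0.bottle": b"old foo", "bar-2.0.bottle": b"same bar"})
new = file_hashes({"foo-1.1.bottle": b"new foo", "bar-2.0.bottle": b"same bar"})

# Files whose hashes match across both torrents need no re-download.
unchanged = [name for name in new if old.get(name) == new[name]]
```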
(There’s also still the state/seeding problem and its collision with user expectations around brew getting faster, or at least not any slower.)
I could see institutional seeders doing it as a way to donate bandwidth though, like a CDN that's built into the distribution protocol instead of getting load-balanced to Microsoft's nearest PoP when hitting a GitHub `ghcr.io` URI like Homebrew does today. Or even better, use that as an HTTP Seed (BEP-0019) to combine benefits of both :)
(My skepticism around whether this makes sense for Homebrew might be obscuring it, but I’m generally quite a big fan of distributed/P2P protocols, and I strongly believe that existing CDN dependencies in packaging ecosystems are a risk that needs mitigating.)
There is no state/seeding problem. The client downloads from the same https url as always but uses peers on an as-available basis to speed things up and reduce load on the origin.
> The client downloads from the same https url as always but uses peers on an as-available basis to speed things up and reduce load on the origin.
So some kind of hybrid scheme, which (to me) implies the worst of both worlds: clients are still going to hammer upstreams on package updates (since client traffic isn’t uniform), and every client pays a bunch of peering overhead that doesn’t pay off until the files are “hot.” In other words, upstreams still need to plan for the same amount of capacity, and clients have to do more work.
(The adjacent thread observes that none of this is necessary if CDNs or other large operators do this between themselves, rather than involving clients. That seems strictly preferable to me.)
Yes, brew exits when it is done installing, nothing would need to change about that if you used BT protocol to speed up downloads. I'm sure you do have some helpful users who would volunteer to seed their cache though, which would become feasible.
> That's not the challenge. The challenge is distributing those updates.
The metadata goes in the formula alongside the current metadata (URLs and hashes.)
You only need to re-distribute the original file that was downloaded, so the client can just keep advertising the original torrent it downloaded.
But as you said earlier, brew is a point in time command and this BitTorrent solution would only really work if brew switched to an always-on service. And I am not sure that many people want to do that, although I am sure some would.
Using BitTorrent is also a great way to get banned from company-owned laptops.
And the language doesn’t have much to do with that. This project looks to be someone just toying around with Rust or their own PM. Props for that, but the headline has extra implications on HN.
I recently rewrote a big portion of Atlas [1]. It’s a Nim-based dependency manager that clones Nim packages to a `deps/` folder. Initially I was worried about using reference types, etc., for performance reasons. It’s a general habit of mine. Then I remembered that stuff would be negligible compared to the download times and git overhead. Well, aside from the SAT solver.
To be fair I haven't noticed Brew being as tediously slow as Pip. Maybe I just use it way less.
1. Python packaging, unlike Homebrew, does have a compute-heavy phase in the form of dependency resolution. `uv` can make significant gains over `pip` in that phase.
2. `uv` performs parallel downloads and other operations, in part because of Rust's fearless parallelism.
Homebrew doesn't really have (1), since resolution is just a linearization of the dependency tree. And parallelism of downloads in (2) poses a problem for Homebrew that's been mentioned in other threads (whereas parallelism is not a significant problem for PyPI's CDN-fronted distribution).
If this is true, why are the Rust tools, on average, so much faster than the JS (or whatever) tools they replace?
Hint: the answer doesn't just have to do with the theoretical technical merits of the underlying platform, but also with what type of developer they attract.
What part of the Homebrew experience do you find slow?
eg...
% time brew upgrade
brew upgrade 0.75s user 0.16s system 68% cpu 1.337 total
% time brew list
brew list 0.01s user 0.02s system 57% cpu 0.054 total
2. It's a personal project.
3. It's explicitly declared as alpha software.
Doesn’t tell me how it differs. What makes this next generation? Just the programming language?
If it’s just a for fun personal project that no one else is supposed to use, I’m not sure why it’s on HN.
It's a cool piece of alpha-quality software. It may or may not be meant to be used, that's beside the point. As I see it HN isn't a platform for software recommendations, it's for discussing interesting geeky things. Which this definitely is, even if it was completely unusable today.
Not everything on HN has to come with a dissection and explanatory note - that's kind of the point of curiosity.
> If it’s just a for fun personal project that no one else is supposed to use, I’m not sure why it’s on HN.
Those are totally fine on HN too.
This. There's a wave of projects whose only value proposition is this vacuous "let's reinvent the wheel in Rust" sales pitch, where nothing of value is proposed beyond throwing around the Rust buzzword.
It would be interesting to know if there are other goals though, e.g. UX improvements.
There have to be more important reasons to replace a mature, widely used project like Homebrew.
The thread you posted is comical. How many times does anyone run Homebrew per day? Or per week? And you still have people complaining about sub-second execution times of a list command? In an app whose happy flow is downloading hundreds of MB off the internet and saving it to disk?
Is this your argument for a major rewrite?
I'm with you when the "source" project is C/C++ or something in that realm, but when we're coming from an already memory-safe language I do think some sort of explanation is helpful. I see Homebrew as more of a "glue" application where its own performance isn't exactly critical as it coordinates processes that are much slower so I don't really care if it's a bit faster.
By how much? Is homebrew really so slow, and used so often that an improvement would matter?
> more maintainable
[citation needed], especially for uv, which is a tool useful only for Python developers, so using a different language limits the pool of contributors.
That was it, that's why I switched, nothing more, nothing less.
(Beyond anything else, Homebrew's biggest "win" over MacPorts was and probably is still UX and DX. The core technology of a packaging ecosystem is rarely itself the differentiator.)
Homebrew is by far the worst package manager I have ever used. I’m still sour it somehow dragged away packagers from solutions which were better in every way by being promoted as the "default" solution.
Here are the comparisons to other package managers:
> Packages are brewed in individual, versioned kegs. Then symlinks are created to give a normal POSIX tree. This way the filesystem is the package database. Everything else is now easy. We are made of win.
vs MacPorts registry which used its own homebrewed (lol) Receipts files in 2009, and now uses a SQLite DB: https://guide.macports.org/chunked/internals.registry.html#i...
> I wouldn't worry about it not being root. We don't install anything base enough for it to be a concern (unlike MacPorts or Fink).
vs MacPorts installs to `/opt/local` as root.
> Why Not MacPorts?
> =================
> 1. MacPorts installs its own libz, its own openssl, etc. It is an autarky.
> This makes no sense to me. OS X comes with all that shit.
> 2. MacPorts support Tiger, and PPC. We don't, so things are better optimised.
There is no “Why Not Fink?” section.
And because I didn't know the word autarky: https://en.wiktionary.org/wiki/autarky
It was frustrating in the beginning to see so much marketing-driven shade being thrown from an ill-informed position.
Obviously that wasn’t you or the current maintainers of homebrew, and things have improved tremendously, but that’s the era from which frustration like the grandparent post originates.
The worst part is that MacPorts had already done the same thing, but used hardlinks to avoid the kinds of problems that emerge when (for example) `realpath` resolves a symlink to an unexpected versioned directory that’s supposed to be an implementation detail.
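The `realpath` difference is easy to demonstrate. The sketch below builds a toy Cellar-style layout (the paths are made up for illustration): the symlinked binary resolves back to the versioned directory, while a hardlink is just another directory entry, so there is no link left for `realpath` to follow.

```python
import os
import tempfile

# realpath the temp dir itself so macOS's /var -> /private/var symlink
# doesn't confuse the comparison below.
base = os.path.realpath(tempfile.mkdtemp())
cellar_bin = os.path.join(base, "Cellar", "foo", "1.2.3", "bin")
os.makedirs(cellar_bin)
real_binary = os.path.join(cellar_bin, "foo")
open(real_binary, "w").close()

bindir = os.path.join(base, "bin")
os.makedirs(bindir)

# Symlink (Homebrew-style): realpath leaks the versioned Cellar path.
sym = os.path.join(bindir, "foo-sym")
os.symlink(real_binary, sym)

# Hardlink (MacPorts-style): realpath stays where the user looked.
hard = os.path.join(bindir, "foo-hard")
os.link(real_binary, hard)

leaks_version = "1.2.3" in os.path.realpath(sym)
stays_put = os.path.realpath(hard) == hard
```

Any tool that canonicalizes paths (build systems, `#!` resolution, editors) sees the versioned path under the symlink scheme, which is exactly the implementation detail leak described above.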
There was a lot of FUD, dishonesty, and shallow understanding from the homebrew creators in the beginning.
I can't find these anywhere on the official blog, which goes back to the first 1.0 release of Homebrew. Links would be helpful.
I don’t think you are being fair. This question presupposes that the supposed problems can be solved by iterative changes, rather than being inherent in the chosen design/architecture of the software, which usually requires complete replacement thereof (as well as the leadership thereof, as people who choose poor solutions to problems often can’t appreciate arguments for superior solutions).
(Not that I’m trying to suggest that I agree that homebrew in particular is bad — just speaking generally.)
And, I’ll point out the irony in your directive: you take issue with people expressing criticism without taking action to completely resolve the respective issue, and then you come along to express criticism by telling people not to complain but instead offer up free labor, when you could go and solve the problem that resulted in the original complaint.
Also, using Git + GitHub instead of SVN + Trac was absolutely a winning pick for scaling project participation back in the 2000s.
https://www.finkproject.org/doc/users-guide/index.php
https://pdb.finkproject.org/pdb/index.php?phpLang=en
(Not a recommendation; seems pretty dead)
I still use MacPorts though
Yes, it's a steep learning curve, but once you have it set up it’s easy to sync across devices.