Installing a cross-platform targeting compiler toolchain is next level.
We usually keep archives of our software releases, even ones that are really, REALLY old and mostly out of service except for refurbs of old product. Being able to rebuild them, and more importantly to build a fixed version targeting the OS they originally targeted, is really nice.
You can also run QEMU if you want to build for ARM (although this announcement makes that unnecessary): https://github.com/aksiksi/ncdmv/blob/aa108a1c1e2c14a13dfbc0...
My OSS Go project runs tests in 18 different OS/architecture combinations.
Some native, some using QEMU binfmt (user mode emulation on Linux), others launching a VM. In particular, that's how I test the BSDs and Solaris.
* Configure a workflow with 1 job for each arch, each building a standalone single-arch image, tagging it with a unique tag, and pushing each to your registry
* Configure another job, which runs after the previous jobs complete, that creates a combined manifest containing each image using `docker manifest create`.
Basically, doing the steps listed in https://www.docker.com/blog/multi-arch-build-and-images-the-... under "The hard way with docker manifest".
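Concretely, the two-stage recipe above looks something like this. This is a hedged sketch, not a full workflow: the registry path, image name, and tag are placeholders, and the per-arch builds would normally run as separate CI jobs on native runners.

```shell
# Placeholder registry/image/tag -- substitute your own.
IMAGE=ghcr.io/example/app
TAG=v1.2.3

# Stage 1 (one job per arch, each on a native runner):
# build a single-arch image with a unique tag and push it.
docker build --platform linux/amd64 -t "$IMAGE:$TAG-amd64" .
docker push "$IMAGE:$TAG-amd64"

docker build --platform linux/arm64 -t "$IMAGE:$TAG-arm64" .
docker push "$IMAGE:$TAG-arm64"

# Stage 2 (fan-in job, after the per-arch jobs complete):
# stitch the pushed per-arch images into one manifest list
# under the user-facing tag, then push the manifest.
docker manifest create "$IMAGE:$TAG" \
  "$IMAGE:$TAG-amd64" \
  "$IMAGE:$TAG-arm64"
docker manifest push "$IMAGE:$TAG"
```

Pulling `$IMAGE:$TAG` then resolves to the right architecture automatically. (As noted in the edit below, you can also reference the per-arch images by digest instead of by tag.)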
Does anyone have a better approach, or some reusable workflows/GHA that make this process simpler? I know about Depot.dev which basically abstracts the runners away and handles all of this for you, but I don't see a good way to do this yourself without GitHub offering some better abstraction for building docker images.
Edit: I just noticed https://news.ycombinator.com/item?id=42729529 which has a great example of exactly these steps (and I just realized you can just push the digests, instead of tags too, which is nice).
Personally prefer just using Go/ko whenever possible ;)
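For Go services, ko sidesteps the whole manifest dance: it cross-compiles the binary per platform and publishes a multi-arch image with no Dockerfile and no emulation. A minimal sketch (the repo path and package are placeholders):

```shell
# Tell ko where to push images (placeholder registry path).
export KO_DOCKER_REPO=ghcr.io/example

# Build and push a multi-arch image straight from Go source;
# ko cross-compiles for each platform and assembles the manifest list.
ko build --platform=linux/amd64,linux/arm64 ./cmd/app
```

`--platform=all` also works if you want every platform the base image supports.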
Even with this, building multi-platform Docker images with fast persistent caching in GitHub Actions will still be slow in the worst case and tedious in the best case.
We've also expanded into GitHub Actions runners, bringing our fast caching and faster compute into the actual runner.
We've done some cool things like making caching and disk access faster using ramdisks, Ceph, and blob storage [1]. We're offering Intel, ARM, and macOS runners at half the cost of what GitHub offers to private repos. We're also focused on accelerating even more builds outside of the runner. [2]
[0] https://depot.dev/products/container-builds
[1] https://depot.dev/blog/introducing-github-actions-ultra-runn...
We charge $0.08/minute for macOS runners [0], each with 8 CPUs, 24 GB of memory, and 150 GB of disk. They run on M2 chips, so the closest GitHub-hosted macOS runner is the arm64 one with 6 CPUs at $0.16/minute [1].
It's also worth mentioning that we charge by the minute but track by the second, whereas GitHub rounds up to the nearest minute. So a 10-second build on Depot counts as 10 seconds, and you don't get charged a minute until you've accumulated a minute's worth of build time.
[0] https://depot.dev/docs/github-actions/runner-types#macos-run...
[1] https://docs.github.com/en/billing/managing-billing-for-your...
For 'large' instances, ARM64 is cheaper: https://docs.github.com/en/billing/managing-billing-for-your...
So what about regular instances?
https://wiki.debian.org/QemuUserEmulation
Many people don't know this, but on a correctly configured amd64 Linux box this just works:
$ GOARCH=s390x go test
The test is cross compiled, and then run with QEMU user mode emulation.
Configuring this for GitHub Actions is a single dependency: docker/setup-qemu-action@v3
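Locally, the same setup boils down to registering QEMU's binfmt_misc handlers once and letting Go cross-compile the test binary. A rough sketch, assuming Docker and Go are installed:

```shell
# One-time setup: register QEMU user-mode emulators for foreign
# architectures (this is what docker/setup-qemu-action does in CI).
docker run --privileged --rm tonistiigi/binfmt --install all

# Cross-compile the tests for linux/arm64 and run them; the kernel
# transparently hands the foreign binary to qemu-aarch64 via binfmt_misc.
GOOS=linux GOARCH=arm64 go test ./...
```

Swap in any GOARCH QEMU supports (s390x, riscv64, ppc64le, ...) to cover the rest of the matrix.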
Also, if you want to test different OSes, there are a couple of actions to accomplish it.
I'll probably be integrating these Linux ARM instances, but this workflow should give you an idea of what was already possible with the existing runners:
https://github.com/ncruces/go-sqlite3/blob/main/.github/work...
I switched from an Intel Mac to an Apple Silicon Mac a few months ago, and have been trying to do as much stuff as possible on ARM.
One thing this should do is make people think more about switching their cloud-based workflows to ARM CPUs, which are generally less expensive.
We also support spinning up self-hosted runners on your AWS/GCP/Azure in just a couple of clicks.
Paying someone for CI compute seems insane. The load is so variable that you never know if your monthly bill will be zero or several hundred/thousand dollars. I especially don't want my employees to consider that each and every push costs the company a nonzero amount of money. CI should be totally free and unrestricted. If a new employee has a really bad day and fires off a hundred CI runs (as we all have), I don't want to explain to accounting why there's an enormous spike in the bill.
It costs us a couple of my salaried hours a month to maintain our on-site infra. Far, far less than our present AWS bill. Most months it needs no attention. It just sits there and does its job. Hell, it's even solar powered.
You could:
- host your own set of static runners on AWS, which have a fixed monthly cost.
- pay a provider for hosted runners; most providers bill in CI minutes, so if jobs run amok you run out of minutes rather than running up your bill.
- Set up auto-scaling runners that ebb and flow based on demand. This case is the one that represents the risk you are describing of an unexpected bill increase.
Two of the three cases of "paying someone else for CI compute" are just as predictable cost-wise as your solution. Yours could be cheaper, but the risk of an unexpected bill increase is not really there.
I did have to move my repos into an organization because you can only use WarpBuild with organizations, not personal accounts, but I probably should have been doing that anyway.