I felt like Actions were a time sink that trick you into feeling productive - like you're pursuing 'best practice' - while stealing time that could otherwise be spent talking to users or working on your application.
Are you saying that the act of setting up the CI pipeline is time consuming? Or the act of maintaining it?
The only time I think about my CI pipeline is when it fails. If it fails then it means I forgot to run tests locally.
I guess I can see getting in the weeds with maintaining it, but that always felt more likely when not deploying Dockerized containers since there was duplication in environmental configs that needed to be kept synchronized.
Or are you commenting on the fact that all cloud-provided services can go down and are thus a liability?
Or do you feel limited by the time it takes to recreate environments on each deployment? I haven't bumped into this scenario that often. Usually the dominating variable in my CI pipeline is running the tests themselves, typically because of poor decisions around testing practices that make the test runner far slower than desired. Those issues would also exist locally, though.
But setting up a useful CI/CD pipeline was the worst part. The tl;dr is that installing everything and getting system tests (i.e. tests using a browser) to work was just excruciating, partly because of the minutes-long cycle time between making a change in the yaml, committing, pushing, clicking, and waiting for it to fail. The cycle time was the killer. (If you could reliably run GHAs locally, my opinion of them would be completely different - and I tried Act, but it had its own problems, mostly related to its image and dependencies being a bit different from those used in GHA.)
More details (linking to save repeating them): https://news.ycombinator.com/item?id=45530753
In my experience, I just set everything up inside my container locally, run tests locally, push the Dockerfile to GH, and re-run my CI off of dependencies declared in the Dockerfile. https://stackoverflow.com/questions/61154750/use-local-docke...
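A rough sketch of that flow, with made-up image and command names - the point being that CI runs against the exact same image the committed Dockerfile describes:

# locally: build an image from the committed Dockerfile and run the suite inside it
docker build -t myapp-ci .
docker run --rm myapp-ci ./run_tests.sh
# in CI: rebuild (or pull) the same image and run the same command, so the only
# environment definition to keep in sync is the Dockerfile itself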
I agree that debugging flakey tests locally is much easier, though, and flakey tests in a CI pipeline are really aggravating. Flakey tests are just aggravating in general, though.
I've also had frustrations where, if I didn't lock the versions of my actions, they'd start failing randomly and require intervention. Just getting into a good habit of not relying on implicit versioning for dependencies helped a lot.
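For example, pin to a tag or to a full commit SHA instead of a floating ref - the action name and version below are placeholders, not a specific recommendation:

uses: some-org/some-action@v1.2.3   # pinned tag instead of @latest
# or, stricter, pin to an exact commit:
uses: some-org/some-action@<full-commit-sha>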
uses: browser-actions/setup-chrome@latest
in my discarded yml file.

Regarding containers, nope, I know and love docker, but it's unnecessary complexity for a one person project. IME, projects that use docker move at half the pace of the projects that don't. (similar to Actions - lots of fun, feels like 'best practice', but velocity suffers - i.e. it steals time that should be spent talking to users and building).
If anything, with the advent of Claude Code et al., I've become an even stronger proponent of container-based development. I have absolutely zero interest in running AI on my host machine. It's reassuring to know that a rogue "rm -rf" will, at worst, just require me to rebuild my container.
This is a perplexing comment. Can you provide any specifics on what leads you to believe that Docker is a hindrance? I've been using Docker for ages, both professionally and in personal projects, and if anything it greatly simplifies any workflow. I wonder what experience you are having to arrive at such an unusual outcome.
For the personal home hacking projects I do, I often don't even make an external repo. I definitely don't do external CI/CD. Often a waste of time.
For more enterprise kind of development, you bet the final gold artifacts are built only by validated CI/CD instances and deployed by audited, repeatable workflows. If I'm deploying something from a machine I have in my hands and have an active local login for, something is majorly on fire.
My setup before was just build and scp.
Now it takes like 3 mins for a deploy; I haven't set up caching for builds etc., but that feels like a self-made problem.
My project is pretty simple, so that's probably why.
We're using the GitHub-hosted runners for pull requests and builds... the build process will build and attach .zip files to a release, and the deploy process runs on self-hosted runners on the target server(s). Tests for PRs take about 2-4 minutes depending on how long it takes to queue the job. Build/bundling takes about 3-5 minutes. The final deploys are under a minute.
The biggest thing for me is to stay as hands off from the deployed servers as possible. Having done a lot of govt and banking work, it's just something I work pretty hard to separate myself from. Automating all the things and staying as hands off as I can. Currently doing direct deploys, but would rather be deploying containers.
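For reference, a hedged sketch of that hosted/self-hosted split - job names, runner labels and scripts are placeholders, not the actual config:

on: [pull_request, push]
jobs:
  build:
    runs-on: ubuntu-latest               # GitHub-hosted runner for PRs and builds
    steps:
      - uses: actions/checkout@v4
      - run: ./build.sh                  # placeholder: builds and attaches the .zips to a release
  deploy:
    needs: build
    runs-on: [self-hosted, production]   # self-hosted runner on the target server(s)
    steps:
      - run: ./deploy.sh                 # placeholder deploy script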
For example, running tests before merge ensures you don't forget to. Running lints/formatters ensures you don't need to refactor later and waste time there.
For my website, it pushes main automatically, which means I can just do something else while it's all doing its thing.
Perhaps you should invest in simplifying your build process instead?
The day I forget to run tests before merging I'll set up CI/CD (hasn't happened before, unlikely, but not impossible).
My build process is gp && gp heroku main. Minor commits straight to main. Major features get a branch. This is manual, simple and loveable. And involves zero all-nighters commit-spamming the .github directory :)
If you want more complex functionality, that's why I suggested improving your build system: keep the logic there, so the actions themselves stay pretty simple.
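A minimal sketch of that shape - the workflow just delegates to the build system, and the file and script names are illustrative:

name: ci
on: [pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./build.sh test   # all the real logic lives in the build script, not in the yaml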
Where things get more frustrating for me is when you try using more advanced parts of Actions, like releases and artifacts, which aren't as simple as running a script and checking its output/exit code.
Just refreshed my memory by looking at mine. 103 lines. Just the glance brought back painful memories. The worst areas were:
- Installing ruby/bundler/gems/postgres/js libraries, dealing with versioning issues, and having them suddenly stop working every few months for some reason that had to be addressed in order to deploy (see the sketch after this list).
- Installing capybara and headless chrome and running system tests (system tests can be flakey enough locally, let alone remotely).
- Minor issue of me developing on a mac, deploying to heroku, so linux on GHA needs a few more things installed than I'm used to, creating more work (not the end of the world, and good to learn, but slow when it's done via a yaml file that has to be committed and run for a few minutes just to see the error).
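For what it's worth, a hedged sketch of what those setup steps tend to look like in a workflow - the action versions and commands below are illustrative, not the original file:

on: [push]
jobs:
  system-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          bundler-cache: true                   # installs gems and caches them between runs
      - uses: browser-actions/setup-chrome@v1   # headless Chrome for the capybara system tests
      - run: bundle exec rails test:system      # the real file also needed a postgres service and js deps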
It's true that outages are probably less frequent, as a consequence of never making any changes. However, when something does break - e.g. security forces someone to actually upgrade the 5-years-past-end-of-support Ubuntu version and it falls over - it may take days or weeks to fix, because nobody actually knows anything about the configuration: it was last touched 10 years ago by one guy who has long since left the company.
https://www.githubstatus.com/history
seems like Microsoft can't keep this thing from crashing at least three times a month. At this rate it would probably be cheaper just to buy out Gitlab.
Wondering when M$ will cut their losses and bail.
You can fix it through the API by generating an API token in your settings with notifications permission on and using this (warning: it will mark all your notifications as read up to the last_read_at day):
curl -L \
-X PUT \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer <YOUR-TOKEN>" \
-H "X-GitHub-Api-Version: 2022-11-28" \
https://api.github.com/notifications \
-d '{"last_read_at":"2025-10-09T00:00:00Z","read":true}'
Then you can click the checkbox at the top and then "select all", and it'll mark the phantom notifications as read.
GitHub has been experiencing mass waves of crypto-scam bots opening repos and mass-tagging tens of thousands of users on new issues, using the issue body to generate massive scam-marketing content.
This has been a known issue since at least 2021, which is ridiculous.
By accident I landed on https://us.githubstatus.com/ and everything was green. At first I thought, yeah sure, just report green - then I noticed "GitHub Enterprise Cloud" in the title. There is also an EU mirror: https://eu.githubstatus.com
Edit:
The report just updated with the following interesting bit.
> We identified a faulty network component and have removed it from the infrastructure. Recovery has started and we expect full recovery shortly.
If the current state of GH availability is without Azure-induced additional unreliability, I truly fear what it will be on Azure
Edit: Found the discussion about this https://news.ycombinator.com/item?id=45517173
Just be warned if you try it out that if you don't specify which workflow to run, it will just run them all!
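If that's referring to act (mentioned upthread), a hedged example of narrowing what runs - flags as I recall them, file and job names are illustrative:

act -W .github/workflows/tests.yml   # run only the jobs in this workflow file
act -j test                          # or run a single job by name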
Having everything in one service definitely increases interoperability between those solutions, but it also decreases stability. In addition, each of the bundled systems is not the best in its class (I really detest GH Actions, for example).
Why do so many solutions grow so big? Is it done to increase enterprise adoption?
Getting the same level of interoperability with a separate tool takes significantly more work on both sides, so monolithic approaches tend to thrive: they get out the door faster and better integrated.
Forgejo is doing the same thing with its actions. Honestly, I'd prefer if something like Woodpecker became the blessed choice instead, and really good integration with diverse tools was the approach.
I do agree there are issues with a single provider for too many components, but I am not sure you get any decreased stability with that versus having a different provider for everything.
That said, I agree that the execution of many features in GitHub has been lacking for some time now. Bugs everywhere and abysmal performance. We're moving to Forgejo at $startup.
Who else?
1. Why Self-Host?
2. GitHub Issues
Change directory to your local git repository that you want to share with friends and colleagues and do a bare clone `git clone --bare . /tmp/repo.git`. You just created a copy of the .git folder without all the checked out files.
Upload /tmp/repo.git to your linux server over ssh. Don't have one? Just order a tiny cloud server from Hetzner. You can place your git repository anywhere, but the best way is to put it in a separate folder, e.g. /var/git. The command would look like `scp -r /tmp/repo.git me@server:/var/git/`.
To share the repository with others, create a group, e.g. `groupadd --users me git`. You will be able to add more users to the group later with `groupmod`.
Your git repository is now writable only by your user. To make it writable by the git group, you have to change the group on all files in the repository to git with `chgrp -R git /var/git/repo.git` and enable the group write bit on them with `chmod -R g+w /var/git/repo.git`.
This fixes the shared access for existing files. For new files, we have to make sure the group write bit is always on by changing UMASK from 022 to 002 in /etc/login.defs.
There is one more trick. From now on, all new files and folders in /var/git will be created with the user's primary group. We could change users to have git as the primary group.
But we can also force all new files and folders to be created with the parent folder's group instead of the user's primary group. For that, set the setgid bit on all folders in /var/git with `find /var/git -type d -exec chmod g+s \{\} +`.
You are done.
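To recap the whole sequence (server name, repo name and users are placeholders):

# on your machine
git clone --bare . /tmp/repo.git
scp -r /tmp/repo.git me@server:/var/git/

# on the server
groupadd --users me git          # needs a recent shadow-utils; otherwise: groupadd git && usermod -aG git me
chgrp -R git /var/git/repo.git
chmod -R g+w /var/git/repo.git
find /var/git -type d -exec chmod g+s {} +
# plus UMASK 002 instead of 022 in /etc/login.defs

# anyone in the git group can now clone and push over ssh
git clone me@server:/var/git/repo.git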
Want to host your git repository online? Install caddy and point to /var/git with something like
example.com {
root * /var/git
file_server
}
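One caveat worth adding: a plain file server only supports git's "dumb" HTTP protocol, so the repository's auxiliary info files have to be kept up to date for clones to work - roughly:

git --git-dir=/var/git/repo.git update-server-info
# or enable the stock hook so it runs after every push:
mv /var/git/repo.git/hooks/post-update.sample /var/git/repo.git/hooks/post-update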
Your git repository will be instantly accessible via https://example.com/repo.git.

At least I'm pretty sure the runners are, our account rep keeps trying to get us to use their GPU runners but they don't have a good GPU model selection and it seems to match what azure offers.
Expecting more and more downtime and random issues in the future.
At the same time, self-hosting is great for privacy, cost, or customization. It is not great for uptime.