EDIT: You mention this with archive.org links! Love it! https://mrshu.github.io/github-statuses/#about
That's not at all how you measure uptime. The per-area measures are cool, but the top bar measuring across all services is silly.
I'm unsure what they are targeting; across the board it seems to be mostly 99.5+ with the exception of Copilot. Just doing the math, three 99.5% services (treated as independent, which I'm aware they aren't fully) bring you down to an overall "single 9" 98.5% healthy status, but that figure isn't meaningful to anyone.
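For what it's worth, a minimal sketch of that compounding math, assuming full independence (which, as noted, the services aren't):

    # Three services at 99.5% availability each, treated as independent:
    awk 'BEGIN { printf "combined availability: %.3f\n", 0.995 ^ 3 }'
    # combined availability: 0.985  (the "single 9" ~98.5% figure above)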
Copilot seems to be the worst offender, and 99% of people using Github likely couldn't care less.
If they don't get their ops house in order, this will go down as an all-time own goal in our industry.
Something this week about "oops we need a quality czar": https://news.ycombinator.com/item?id=46903802
Does this mean you are only half-sarcastic/half-joking? Or did I interpret that wrong?
It's extra galling that they advertise all the new buzzword-laden AI pipeline features while the regular website and Actions fail constantly. Academically, I know that it's not the same people building those as fixing bugs and running infra, but the leadership is just clearly failing to properly steer the ship here.
Pages and Packages completed in 2025.
Core platform and databases began in October 2025 and are in progress, with traffic split between the legacy Github data center and Azure.
All kinds of companies lose millions of dollars of revenue per day, if not per hour, if their sites are not stable: Apple, Amazon, Google, Shopify, Uber, etc.
Those companies have decided the extra complexity is worth the reliability.
Even if you're operating a tech company that doesn't need to have that kind of uptime, your developers probably need those services to be productive, and you don't want them just sitting there either.
I’m guessing they’re regretting it.
Our SOC2 doesn't specify GitHub by name, but it does require we maintain a record of each PR having been reviewed.
I guess in extremis we could email each other patch diffs, and CC the guy responsible for the audit process with the approval...
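If it came to that, a rough sketch with stock git tooling (branch name and addresses are placeholders, and git send-email needs SMTP configured separately):

    # Turn the branch into mailable patch files, then send them with the auditor CC'd.
    # (branch name and e-mail addresses below are placeholders)
    git format-patch origin/main..my-feature -o outgoing/
    git send-email outgoing/*.patch \
        --to=reviewer@example.com \
        --cc=auditor@example.com \
        --subject-prefix="PATCH for review"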
I have cleaned up more than enough of them.
But the inward-looking point is correct: git itself is a distributed technology, and development using it is distributed and almost always latency-tolerant. To the extent that github's customers have processes that are dependent on services like bug tracking and reporting and CI to keep their teams productive, that's a bug with the customer's processes. It doesn't have to be that way and we as a community can recognize that even if the service provider kinda sucks.
Not on the 2-4 hour latency scale of a GitHub outage though. I mean, sure, if you have a process that requires the engineering talent to work completely independently on day-plus timescales and/or do all their coordination offline, then you're going to have a ton of trouble staffing[1] that team.
But if your folks can't handle talking with the designers over chat or whatnot to backfill the loss of the issue tracker for an afternoon, then that's on you.
[1] It can obviously be done! But it's isomorphic to "put together a Linux-style development culture", very non-trivial.
That is what that feature does. It imports issues and code and more (not sure about "projects", don't use that feature on Github).
They literally have the golden goose: the training stream of all software development, dependencies, and trending tool usage.
In an age of model providers trying to train their models and keep them current, the value of GitHub should easily be in the high tens of billions or more. The CEO of Microsoft should be directly involved at this point; their franchise is at risk on multiple fronts now. Windows 11 is extremely bad. GitHub is going to lose its foundational role in modern development shortly, and early indications are that they hitched their wagon to the wrong foundational model provider.
(Actually there are 3 I'm currently working on, but 2 are patched already; still closing the feedback loop, though.)
I have a 2-hour window right now that is toddler free. I'm worried that the outage will delay the feedback loop with the reporter(s) into tomorrow and ultimately delay the patches.
I can't complain though -- GitHub sustains most of my livelihood so I can provide for my family through its Sponsors program, and I'm not a paying customer. (And yet, paying would not prevent the outage.) Overall I'm very grateful for GitHub.
Usually an outage is not a big deal, I can still work locally. Today I just happen to be in a very GH-centric workflow with the security reports and such.
I'm curious how other maintainers maintain productivity during GH outages.
As an alternative, I was thinking of it mainly as a secondary repo and CI in case GitHub stops being reliable, not only because of the current instability, but as a provider overall. I'm from the EU and recently catch myself evaluating every US company I interact with, and I'm starting to realize that mine might not be the only risk vector to consider. Wondering how other people think about it.
Not who you're responding to, but my 2 cents: for a popular open-source project reliant on community contributions there is really no alternative. It's similar to social media - we all know it's trash and noxious, but if you're any kind of public figure you have to be there.
Edit: Nevermind, looks like they migrated to github since the last time I contributed
Edit: oh, you probably meant an alternative to GitHub, perhaps...
[1] https://www.theverge.com/tech/865689/microsoft-claude-code-a...
> During this time, workflows experienced an average delay of 49 seconds, and 4.7% of workflow runs failed to start within 5 minutes.
That's for sure not perfect, but it also means there was a roughly 95% chance that if you re-ran the job, it would start and not fail. Another one is about notifications being late. I'm sure all the others have similar issues people notice, but nobody writes about them. So a simple "too many incidents" does not make the stats bad; only an unstable service does.
GitHub is under Microsoft’s CoreAI division, so that’s a pretty sure bet.
https://www.geekwire.com/2025/github-will-join-microsofts-co...
The inertia is not permanent.
Computers can produce spreadsheets even better and they can warm the air around you even faster.
* writing endless reports and executive summaries
* pretending to know things that they don't
* not complaining if you present their ideas as yours
* sycophancy and fawning behavior towards superiors
It has been a pretty smooth process, although we have done a couple of pieces of custom development:
1) We've created a Firecracker-based runner, which will run CI jobs in Firecracker VMs. This brings the Forgejo Actions experience much more closely into line with GitHub's environment (a VM rather than a container). We hope to contribute this back shortly, but also drop me a message if this is of interest.
2) We're working up a proposal[1] to add environments and variable groups to Forgejo Actions. This is something we expect to need for some upcoming compliance requirements.
I really like Forgejo as a project, and I've found the community to be very welcoming. I'm really hoping to see it grow and flourish :D
[0]: https://lithus.eu, adam@
[1]: https://codeberg.org/forgejo/discussions/issues/440
PS. We are also looking at offering this as a managed service to our clients.
Edit: Looks like they've got a status page up now for PRs, separate from the earlier notifications one: https://www.githubstatus.com/incidents/smf24rvl67v9
Edit: Now acknowledging issues across GitHub as a whole, not just PRs.
Investigating - We are investigating reports of impacted performance for some GitHub services. Feb 09, 2026 - 15:54 UTC
But I saw it appear just a few minutes ago; it wasn't there at 16:10 UTC.
Investigating - We are investigating reports of degraded performance for Pull Requests Feb 09, 2026 - 16:19 UTC
This should not be normal for any service, even at GitHub's size. There's a joke that your workday usually stops around 4pm, because that's when GitHub Actions goes down every day.
I wish someone inside the house cared to comment on why the services barely stay up and what actions they plan to take to fix this issue, which has been going on for years but has definitely accelerated in the past year or so.
That doesn't normally happen to platforms of this size.
There are probably tons of baked in URLs or platform assumptions that are very easy to break during their core migration to Azure.
ISTR that the lift-n-shift started like ... 3 years ago? That much of it was already shifted to Azure ... 2 years ago?
The only thing that changed in the last 1 year (if my above two assertions are correct (which they may not be)) is a much-publicised switch to AI-assisted coding.
One solution I see is, e.g., an internal forge (GitLab/Gitea/etc.) mirrored to GH for those secondary features.
Which is funny. If GH was better we'd just buy their better plan. But as it stands we buy from elsewhere and just use GH free plans.
Mirroring is probably the way forward.
* Deploy everything
* It explodes
* Roll back everything
* Spend two weeks finding the problem in one system and then fix it
* Deploy everything
* It explodes
* Roll back everything
* Spend two weeks finding a new problem that was created while you were fixing the last one
* Repeat ad nauseam
Migrating iteratively gives you a foundation to build upon with each component.
But you need to have pieces that are independent enough to run some here and some there, and ideally pieces that can fail without taking down the whole system.
1. Stateful systems (databases, message brokers) are hard to switch back-and-forth; you often want to migrate each one as few times as possible.
2. If something goes sideways -- especially performance-wise -- it can be hard to tell the reason if everything changed.
3. It takes a long time (months/years) to complete the migration. By doing it incrementally, you can start reaping the advantages of the new infra along the way, rather than maintaining two systems with no payoff until the very end.
---
All that said, GitHub is doing something wrong.
Business by spreadsheet is super hard for this reason - if you try to charge the maximum you can before people get angry and leave then you're a tiny outage/issue/controversy/breach from tipping over the wrong side of that line.
You did back it up, right? Right before you ran me with `--allow-dangerously-skip-permissions` and gave me full access to your databases and S3 buckets?
"Whoops, now that one is nuked too. You have any more backups I can practice my shell commands on?"
A few people have replied to you mentioning Codeberg, but that service is intended for open-source projects, not private commercial work.
Also very happy with SourceHut, though it is quite different (Forgejo looks like a clone of GitHub, really). The SourceHut CI is really cool, too.
Dunno about actions[1], but I've been using a $5/m DO droplet for the last 5 years for my private repo. If it ever runs out of disk space, an additional 100GB of mounted storage is an extra $10/m
I've put something on it (Gitea, I think) that has the web interface for submitting PRs, reviewing them, merging them, etc.
I don't think there is any extra value in paying more to a git hosting SaaS for a single user, than I pay for a DO droplet for (at peak) 20 users.
----------------------
[1] Tried using Jenkins, but alas, a $5/m DO droplet is insufficient to run Jenkins. I mashed up shell scripts + Makefiles in a loop, with a `sleep 60` between iterations.
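For anyone curious, that loop amounts to something like this minimal sketch (repo path, branch, and make target are placeholders):

    #!/bin/sh
    # Poor man's CI: poll the remote and rebuild only when the branch moves.
    # (repo path, branch, and make target below are placeholders)
    cd /srv/repos/myproject || exit 1
    last=""
    while true; do
        git fetch -q origin main
        head=$(git rev-parse origin/main)
        if [ "$head" != "$last" ]; then
            git checkout -q "$head"
            if make test; then echo "OK $head"; else echo "FAIL $head"; fi
            last=$head
        fi
        sleep 60
    done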
Distributed source control is distributable.
It's pretty nice if you don't mind it being some of the heaviest software you've ever seen.
I also tried gitea, but uninstalled it when I encountered nonsense restrictions with the rationale "that's how GitHub does it". It was okay, pretty lightweight, but locking out features purely because "that's what GitHub does" was just utterly unacceptable to me.
Ad hominem isn't a very convincing argument, and as someone who also enjoys Forgejo, it doesn't make me feel good to see it used as the justification by another recommender.
I personally use Gitea, so I'd appreciate some additional information.
Forgejo became a hard fork in 2024, with both projects diverging. If you're using it for local hosting I don't personally see much of a difference between them, although that may change as the two projects evolve.
All the more reason why they should be sliced and diced into oblivion.
Just add a new git remote and push. Less so for issues and pulls, but at least your dev team/CI doesn't end up blocked.
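Concretely, keeping a warm second remote is just a couple of commands (remote name and URL are placeholders):

    # One-time setup: add a fallback remote alongside origin.
    # (remote name and URL below are placeholders)
    git remote add fallback git@codeberg.org:myorg/myrepo.git

    # Then push branches and tags there whenever you push to origin (or from a cron job).
    git push fallback --all
    git push fallback --tags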
The engineers who built the early versions were folks at the top of their field, and compensated accordingly. Those folks have long since moved on, and the whole thing is maintained by a mix of newcomers and whichever old hands didn't manage to promote out, while the PMs shuffle the UX to justify everyone's salary...
Are the other providers (GitLab, CircleCI, Harness) offering much better uptime? Saying this as someone that's been GH-exclusive since 2010.
Github is down so often now, especially Actions, that I am not sure how so many companies are still relying on them.
Hosting .git is not that complicated of a problem in isolation.
"A better way is to self host". [0]
I am able to access api.github.com at 20.205.243.168 no problem
No problem with githubusercontent.com either
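If anyone wants to repeat that check against a specific address, curl can pin the hostname to an IP for a single request (using the address mentioned above; it will vary by region):

    # Force api.github.com to resolve to the given IP and show the response headers.
    # (the IP is the one reported above; yours may differ)
    curl -sI --resolve api.github.com:443:20.205.243.168 https://api.github.com/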
Github isn't the only source control software in the market. Unless they're doing something obvious and nefarious, it's doubtful the Justice Department will step in when you can simply choose one of many others like Bitbucket, Sourcetree, GitLab, SVN, CVS, Fossil, Darcs, or Bazaar.
There's just too much competition in the market right now for the govt to do anything.
I doubt policymakers in the early 1900s could have predicted the impact of technology and globalization on the corporate landscape, especially vis a vis “vertical integration”.
Personally, I think vertical integration is a pretty big blind spot in laws and policies that are meant to ensure that consumers are not negatively impacted by anticompetitive corporate practices. Sure, "competition" may exist, but market activity often shifts meaningfully in a direction that is harmful to consumers once the biggest players swallow another piece of the supply chain (or product concept), and not just their competitors.
The other change is reluctance to break up companies. The AT&T breakup was a big deal. Microsoft survived its antitrust trial without being broken up. Tech companies can only be broken up vertically, but maybe the forced competition would be enough.
Not really. It's a network effect, like Facebook. Value scales quadratically with the number of users, because nobody wants to "have to check two apps".
We should buy out monopolies like the Chinese government does. If you corner the market, then you get a little payout and a "You beat capitalism! Play again?" prize. Other companies can still compete but the customers will get a nice state-funded high-quality option forever.
Simple: the US stopped caring about antitrust decades ago.
There was just a recent case with Google to decide if they would have to sell Chrome. Of course the Judge ruled no. Nowadays you can have a monopoly in 20 adjacent industries and the courts will say it's fine.
If a company can build a monopoly (or oligopoly) in multiple markets, it can then use these monopolies to build stability for them all. For example, Google uses ads on the Google Search homepage to build a browser near-monopoly and uses Chrome to push people to use Google Search homepage. Both markets have to be attacked simultaneously by competitors to have a fighting chance.
Hopefully the hobbyists are willing to shell out for tokens as much as they expect.
Gerrit is the other option I'm aware of but it seems like it might require significant work to administer.
Codeberg gets hit by a fair few attacks every year, but they're doing pretty well, given their resources.
I am _really_ enjoying Worktree so far.
I would love to pay Codeberg for managed hosting + support. GitLab is an ugly overcomplicated behemoth... Gitea offers "enterprise" plans but do they have all the needed corporate features? Bitbucket is a joke, never going back to that.
Today, when I was trying to see the contribution timeline of one project, it didn't render.
Coincidence? I think not!
Radicle is the most exciting out of these, imo!
It's definitely some extra devops time, but claude code makes it easy to get over the config hurdles.
But I don't understand: if they're that good, why are we getting an outage every other week? AWS had an outage that went unresolved for about 9+ hours!
hopefully it's down all day. we need more incidents like this to happen for people to get a glimpse of the future.
Self hosting would be a better alternative, as I said 5 years ago. [0]
Maybe they need to get more humans involved, because GitHub has been down at least once a week for a while now.
The new-fangled copilot/agentic stuff I do read about on HN is meaningless to me if the core competency is lost here.
With the latter no longer a thing, and with so many other people building on Github's innovations, I'm starting to seriously consider alternatives. Not something I would have said in the past, but when Github's outages start to seriously affect my ability to do my own work, I can no longer justify continuing to use them.
Github needs to get its shit together. You can draw a pretty clear line between Microsoft deciding it was all in on AI and the decline in Github's service quality. So I would argue that for Github to get its shit back together, it needs to ditch the AI and focus on high-quality engineering.
Just when open source development has to deal with the biggest shift in years and maintainers need a tool that will help them fight the AI slop and maintain the software quality, GitHub not only can't keep up with the new requirements, they struggle to keep their product running reliably.
Paying customers will start moving off to GitLab and other alternatives, but GitHub is so dominant in open source that maintainers won't move anywhere, they'll just keep burning out more than before.
Any public source code hosting service should be able to subscribe to public repo changes. It belongs to the authors, not to Microsoft.
If we had a government worth anything, they ought to pass a law that other competitors be provided mirror APIs so that the entire world isn't shut off from source code for a day. We're just asking for a world wide disaster.
Incident with Pull Requests https://www.githubstatus.com/incidents/smf24rvl67v9
Copilot Policy Propagation Delays https://www.githubstatus.com/incidents/t5qmhtg29933
Incident with Actions https://www.githubstatus.com/incidents/tkz0ptx49rl0
Degraded performance for Copilot Coding Agent https://www.githubstatus.com/incidents/qrlc0jjgw517
Degraded Performance in Webhooks API and UI, Pull Requests https://www.githubstatus.com/incidents/ffz2k716tlhx
EDIT: my bad, seems to be their server's name.