360 points | by gen220 | 2 hours ago | 53 comments
  • AlexB1382 hours ago
    Github has published some incredible usage-growth numbers, which they ascribe to the rise of agentic coding. At some point, they are going to have to change rate limits, cut free-tier usage, or find some other path to reducing load. It's clear that their infrastructure can't keep up with this significant increase, and it's unlikely that they're going to just absorb the increased costs themselves.

    Very curious to see what the future holds for Github.

    • eddyg2 hours ago
      From the GitHub COO on April 3rd:

          Platform activity is surging. There were 1 billion commits in 2025.
          Now, it's 275 million per week, on pace for 14 billion this year if
          growth remains linear (spoiler: it won't.)
      
          GitHub Actions has grown from 500M minutes/week in 2023 to 1B minutes/week
          in 2025, and now 2.1B minutes so far this week.
      
          So we're pushing incredibly hard on more CPUs, scaling services, and
          strengthening GitHub’s core features.
      
      https://x.com/kdaigle/status/2040164759836778878
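
      As a sanity check on the quoted run rate (my arithmetic, not GitHub's), 275 million commits per week extrapolated linearly does land on roughly the "14 billion this year" figure:

```python
# Back-of-envelope check of the COO's quoted numbers (my arithmetic,
# not GitHub's): 275M commits/week, extrapolated linearly over a year.
commits_per_week = 275_000_000
annual_run_rate = commits_per_week * 52

print(f"{annual_run_rate / 1e9:.1f}B commits/year")  # → 14.3B commits/year
```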

      They also had a recent blog post about availability: https://github.blog/news-insights/company-news/an-update-on-...

      I don't envy the scaling issues the GitHub engineers are facing! #HugOps

      • munk-aan hour ago
        After the Microsoft acquisition GH marketing and pricing put an immense amount of effort[1] into trying to kill secondary platforms that integrated into github and move more corporate accounts fully on-platform. We recently dropped travis for github actions and dropped reviewable for github PRs (which are terrible).

        There's a portion of this that is agentic driven and there's a portion of this that's just github making their own bed.

        1. Arguably anticompetitive pricing like MSFT is used to doing with the office suite.

        • foolswisdom44 minutes ago
          In other words, the set of github core services has expanded because you don't use third party tooling for some of those services anymore.
          • munk-a40 minutes ago
            For us, yes - and likely for a lot of other users. I'm not certain who else has dealt with the headache of being migrated off their legacy pricing plan, but it ends up pushing those internal offerings a lot harder than the old approach did. So if they're seeing successful conversions, it's likely they're also seeing significantly more load from mature codebases with expensive CI/CD pipelines.
      • skylerwiernikan hour ago
        It's extremely interesting how fast this happened. Either AI use surged massively in the last quarter, or this is a very sneaky move by Anthropic. Looking at my own stats, I don't think I'm using Claude Code much more than I used to, but my commits have gone way up. I have a feeling they've tuned the models recently to commit more often, which gives the illusion of more work being done.
        • crystal_revengean hour ago
          > Either AI use surged massively in the last quarter

          December 2025 is considered by many people to be a major step function in agentic coding (both due to improvements in harnesses and LLMs themselves). I know my coding has forever changed since then.

          Before I was basically always hands on the keyboard while working with AI. Now I'm running experiments with multiple agents over the weekend, only periodically checking in if they have any questions or need further instruction.

          The last quarter is where I personally first started to see how this was all going to change things (despite having worked on both the research and product side of AI for the last few years).

          > I have a feeling they've tuned the models recently to commit more often, which gives the illusion of more work being done.

          Agents certainly are committing more often, but I know, at least for these projects, there really is work being done. An example: I had an agent auto-researching a forecast I was working on. This is something I've done manually for over a decade now. The iteration process is tedious and time consuming, and would often take weeks of setting up and ultimately poorly documenting many, many experiments to see what works. Now I can "set it and forget it", and get the same results I would have in hours (with much more surface area covered and much better documentation). Each experiment is a branch (or work-tree) so yes there are a lot of commits happening, but the results are measurably real.
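
          The branch-per-experiment setup described above can be sketched with git worktrees. This is a hypothetical helper (the function name and temp-dir layout are my own, not anything GitHub- or agent-specific); it assumes `git` is on PATH and the repo already has at least one commit:

```python
# Hypothetical sketch of a branch-per-experiment workflow using git
# worktrees, so parallel agents each get their own checkout while
# sharing a single object store.
import os
import subprocess
import tempfile

def add_experiment_worktree(repo_dir: str, branch: str) -> str:
    """Create a new branch from HEAD and check it out into its own worktree."""
    path = os.path.join(tempfile.mkdtemp(prefix="experiments-"), branch)
    # `git worktree add -b <branch> <path>` creates the branch and a
    # separate working directory backed by the same repository.
    subprocess.run(
        ["git", "-C", repo_dir, "worktree", "add", "-b", branch, path],
        check=True,
        capture_output=True,
    )
    return path  # each agent works here; its commits land on its own branch
```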

          I often think the big divide in success with agents is whether or not the quality of one's work can be objectively measured. For those of us doing work that can be measured, the impact of agents is still hard to comprehend.

        • martinaldan hour ago
          Many things at once I suspect:

          1. Models have got way better, which means you are far more likely to get something working. I used to have little 'tool'/'weekend project' ideas all the time that wouldn't get off the starting blocks before; now it often takes just a few minutes to build them, and once I've built them I tend to want to save them on github. Quite how useful they turn out to be is another question though...

          2. Related, because the models are a lot better I can generate far more code per unit time. On Sonnet last year I'd have to babysit the model and constantly 'steer' it, which meant a lot of the CC time was actually me reviewing it. Now with Opus 4.7 it can often just churn away for 10-30 minutes and get something reasonable.

          3. Most importantly, just the volume of new users to coding agents - loads of new developers shipping far more, far more frequently.

          4. Many users who were not on github, now signing up and pushing code to it. "Vibe coders" basically who don't have SWE experience and their agent tells them git would be a good idea.

          Each of these alone would be a big increase in scale, but combined it is very, very high.

        • tossandthrowan hour ago
          I don't think commits per se puts pressure on the infrastructure.

          More likely pulls and pushes, and, naturally, the ci minutes they identify as the main issue.

          • NewJazzan hour ago
            But CI only increased by a factor of 2 since last year. Did they really not foresee that happening? And how does that affect git and API operations?
            • munk-aan hour ago
              It really shouldn't. The technical summary they released[1] is a very interesting read from a software engineering perspective. They seem to have been blindsided by the increased traffic, and the post gives stats related to commits/PRs (which should be relatively cheap for github to process) without any insight into their web traffic or details on how much actions are costing them. If they were being fully transparent they'd release information about their request response times and the resourcing needed to fulfill them.

              Their current path to resolution is to migrate their codebase to a new language[2], continue to drop their in-house ops for Azure resources, and get off MySQL. Maybe one or two of those steps are legitimately a good idea - I don't have the inside scoop - but technology migrations are always fraught with issues. It's quite possible these changes are just the result of them vibe-coding a mature codebase into a new language.

              1. https://github.blog/news-insights/company-news/an-update-on-...

              2. I'll grant that Ruby isn't the best language to use at scale, but I think we're all old enough to realize that language choice is far less impactful on performance than code quality.

              • evanelias6 minutes ago
                > migrate their codebase to a new language[2], continue to drop their inhouse ops for Azure resources and get off MySQL

                The recent blog post you're linking to mentioned moving data only for webhooks off MySQL, not all relational data used by the entire site; and moving "performance or scale sensitive code out of Ruby", again not the entire codebase.

                Do you have an official source suggesting these migrations are more comprehensive than that?

              • hosh16 minutes ago
                Azure’s core hypervisor orchestrator was half-baked at launch and has never been fixed. This long-read blog series explains a lot for me — for example, why the FedRAMP certification process was never able to get a straight answer from Azure about how they handled secrets.

                https://isolveproblems.substack.com/p/how-microsoft-vaporize...

              • spockz28 minutes ago
                Re 2, I would generally agree, and there is a lot that can be done with caching. However, having since written services in Rust and Golang, there is a whole other tier of speed. Architecture matters, code quality also matters, but Golang and Rust help a lot in making very fast services.
                • munk-a23 minutes ago
                  Yeah, I don't disagree. To clarify: Rust, Golang, etc. give you a very noticeable advantage when it comes to writing good, performant software, with the assumption that you're putting in the effort on the design side. But poorly written Rust is likely going to be indistinguishable from poorly written Ruby.
      • siva729 minutes ago
        It's the end of the free lunch era. Subsidizing groups like students or new users to gain market share worked as long as there weren't billions of them at the same time eating all the compute from the paying customers. It's not working anymore for AI products.
      • wolfi12 hours ago
        I wonder how many of those actions are really necessary
        • PhilipRomanan hour ago
          And how many of those actions do uncached downloads instead of building self-contained offline images... Speaking of which, I wonder if GitHub has implemented any HTTP interception for common mirror sites, like those used by apt, etc.
          • everfrustrated14 minutes ago
            The GitHub and WarpBuild caches are so slow it is often faster to re-download hundreds of MB each run than to cache it properly.

            I so wish this wasn't the case.

          • spockz30 minutes ago
            Many downloads now go over HTTPS. Intercepting them would require having certificates for those domains. IIRC on the clouds the standard images do have a sources list that points to mirrors on the cloud’s network; I would presume GitHub Actions runners have the same.

            Not sure if something similar exists for NPM which is big for all things JS.

            • munk-a26 minutes ago
              Other CI/CD platforms usually push you towards using self-hosted mirrors for downloading large chunks of data (often aggressively so) but github is pretty hands off when it comes to actions. It is interesting to consider whether managing that traffic might be overwhelming them and if this can be traced back to a lack of forethought when it came to building out those tools.
        • bravetraveleran hour ago
          Or how many pushes those commits are spread across; oh, neat, big number.
      • hansmayer2 hours ago
        Wow, nice to see the relentless push for more AI slop finally paying some dividends back to the issuer.
    • amluto2 hours ago
      For literally decades, I’ve observed that there are systems that make each operation cheap and systems that work hard to scale out. The former frequently seems to wildly outperform the latter.

      GitHub, for example, seems to implement the main repository /pulls page as a search query, which is hinted at by the prefilled search bar and was mostly confirmed last week when the search backend failed and pull requests didn’t load. But it could have been implemented as a plain API call that just loads open pull requests, and that API exists and did not go down.

      If GitHub focused a bit on identifying their top 95% of high level operations (page loads including resulting API calls, for example) and making them efficient, I bet they could get a 5x or better reduction in backend load by simplifying them.

      (Don’t even get me started on the diff viewer. I realize that much of its awfulness is the horribly inefficient front end, which does not directly load the back end, but I expect there is plenty of room for improvement. The plain git command line features are very fast.)
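
      For reference, the plain endpoint in question is GitHub's documented `GET /repos/{owner}/{repo}/pulls` REST call, which lists open pull requests directly without touching the search backend. A minimal sketch (URL construction only; a real request would add an `Authorization` header, and the repo used in the example is arbitrary):

```python
# Minimal sketch of the "plain API call" path for listing open pull
# requests, using GitHub's documented REST endpoint rather than search.
from urllib.parse import urlencode

def list_open_pulls_url(owner: str, repo: str, per_page: int = 30) -> str:
    """Build the REST URL for listing a repo's open pull requests."""
    params = urlencode({"state": "open", "per_page": per_page})
    return f"https://api.github.com/repos/{owner}/{repo}/pulls?{params}"

# e.g. fetch with urllib.request.urlopen(...) plus an auth header
print(list_open_pulls_url("torvalds", "linux"))
# → https://api.github.com/repos/torvalds/linux/pulls?state=open&per_page=30
```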

      • stabblesan hour ago
        I noticed the same https://news.ycombinator.com/item?id=47940213. My working hypothesis is that, given that a filter was always required (PRs and issues are likely rows in the same database with a bool property to distinguish them), someone thought it'd be good to use the search API uniformly. But search operates on an index derived from the underlying data, in contrast to the specific APIs for listing issues and PRs.
        • munk-a30 minutes ago
          Working in an organization without a mono-repository, I've actually found it extremely difficult to keep tabs on PRs and issues across multiple repositories. For a problem that should be resolved by a "For me" page that just lists out all your active incoming and outgoing PRs, their multi-page solution involving search filters that often need to be reset feels extremely weak. I've worked on large multi-tenant solutions before, and a page where you can "SELECT * FROM everything LIMIT 10" is the absolute last thing you want to give to users.

          It is bizarre to me that so much of their tooling defaults to acting across the whole of github data points without guiding the user towards (or even making available as far as I can tell) a way to easily scope requests down outside of a complex search filter.

      • mnky9800n2 hours ago
        Are you telling me you don’t want a chat interface to greet you when you log in to GitHub?
        • amluto29 minutes ago
          That’s sort of orthogonal. But if GitHub actually invoked an LLM on initial page load, that would be about par for the course, and it would be amusing for GitHub to then complain that they’ve grown so quickly that their systems can’t keep up.
      • wavemodean hour ago
        Git itself is kind of a fundamentally computationally inefficient way to store and retrieve information. If the problem to solve were simply "store and version this text", 14 billion commits in a year would not even be considered a lot.

        In other words, a centralized version control system built from the ground up to operate at scale would do far more for scalability than anything GitHub could possibly do to optimize their Git operations. Every major tech company (Amazon, Meta, Google, etc) is already doing something like this internally.

        Though this would require people to start using a github-specific client rather than the traditional git+ssh. (Though the github client could still maintain a git repo locally, for compat.)

        • munk-a28 minutes ago
          I can guarantee you one thing - github's problem isn't coming from git.

          Considering all the CI/CD pipelines, PR & issue discussions, social media tracking, rich data, and everything else that github hosts, if their true issue were the actual meat and potatoes of running git, I would be gobsmacked.

        • stabblesan hour ago
          What are you referring to when you say it's "fundamentally computationally inefficient"? It's pretty efficient because it's content-addressed, plus optimizations to reduce storage and data transfer with packfiles.
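
          "Content-addressed" here is concrete: a git object's ID is just the SHA-1 of a small typed header plus the content, so identical content is stored exactly once. A quick illustration, reimplementing what `git hash-object` does for a blob:

```python
# Illustration of git's content addressing: a blob's object ID is the
# SHA-1 of the header "blob <size>\0" followed by the raw content.
# Identical content always hashes to the same ID, so it is stored once.
import hashlib

def git_blob_id(content: bytes) -> str:
    """Compute the git object ID for a blob, as `git hash-object` would."""
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()

# Matches `echo 'hello' | git hash-object --stdin`
print(git_blob_id(b"hello\n"))  # → ce013625030ba8dba906f756967f9e9ca394464a
```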
          • galangalalgol35 minutes ago
            I suspect they were referring to some of the things git allows for non-centralized version control. There are simplifications if you just want a centralized system like CVS had.
      • the_sleaze_2 hours ago
        I think you need to broaden your focus here - I can't really remember any significant downtime before the Microsoft acquisition and the data supports my memories.

          Microsoft bought Github and migrated it to Azure, which explains the findings. The query performance was fine before they started serving from Azure.

          I mean honestly, as though there isn't a single person competent enough to read some logs and horizontally scale a few read-only DBs to meet demand? That's not it.

        • AlexB1382 hours ago
          > I think you need to broaden your focus here - I can't really remember any significant downtime before the Microsoft acquisition and the data supports my memories.

          This is the opposite of my recollection, actually. I distinctly remember having conversations about Github struggling to scale well before MS was involved, and people claiming that MS had somehow saved Github because it had stabilized and begun adding features again.

          > The query performance was fine before they started serving from Azure.

          This may be correct though. The Azure migration seems more aligned with the timeline of struggling to scale.

        • nvme0n1p1an hour ago
          I don't know why this is downvoted. The data backs you up: https://damrnelson.github.io/github-historical-uptime/
        • philistinean hour ago
          I mean, are any of the other forges, which I presume are also seeing exponential increases in commits, failing as hard as Github?
    • graypegg2 hours ago
      IMO, they're reaching the point of no return. I don't think they can horizontally-scale their way out of the hole they dug themselves unless they separate their free and paid infra maybe... which doesn't seem likely considering how their other infra changes are going.

      In the same way you need to be 10x better for someone to consider switching to your product, if you get 10x worse your competitors get a free 10x by just standing still.

      • AlexB1382 hours ago
        I think there's a very good chance you're right. Their reputation is obviously severely harmed, and high profile projects like Ghostty leaving may be a canary in the coalmine.

        Something creative like separating their free and paid tiers may help them. I suspect the fact that all of this is happening to them along with their migration to Azure is probably complicating their ability to adapt their infrastructure.

      • dylan604an hour ago
        I wonder if AWS resurrecting CodeCommit might be related. "For all of our warts, we still have a higher rep score than github" would not be an extraordinary thought at this point. There had been some brief chat about moving to github, and I'm so glad we never did. A previous company did migrate to github with no real answer on what the benefit was, other than investors asking if your code is in github by name vs some other repo.
      • fastballan hour ago
        How can they not? Surely at GitHub scale there isn't a single component where they were relying on vertical scaling?
        • graypegg31 minutes ago
          For all of its history (up to and including now, possibly?) Github was a big Ruby on Rails monolith. [0] Obviously some things run in their own service, but I'm seeing the core github features fall apart, and those should be the features packed into the big monolith. If load is this much of a problem, not being able to independently scale just the processes that need the extra headroom is a big problem. Scaling horizontally by throwing more machines at it, or at least cordoning off some machines as "the ones that people actually pay for", is all I can think of for an application I can only describe as "accidentally working". Urgency is most definitely high, and that pushes decision-making towards permanently-temporary patches instead of actual infra/architecture improvements.

          [0] https://github.blog/engineering/architecture-optimization/bu...

      • jcgrilloan hour ago
        IIRC back in the day they used to have an on-prem Enterprise product? I've never heard of anyone who actually used it though. IMO that would make a lot of sense for a medium-large organization--you still get the familiar Github product but you can take responsibility for your own uptime--like with Jira, Jenkins (nee Hudson), PyPI/Maven/etc.
    • sh3rl0ck2 hours ago
      I've been a strong proponent of reallocating all LinkedIn server capacity to GitHub.
    • kqp31 minutes ago
      A week ago GitHub published a blog post saying this, a day later GitHub execs were in HN comments repeating it, and just like that it’s common knowledge that GitHub’s steady reliability decline from 2019 onward was actually caused not by the 2019 Microsoft integration, but by something that did not exist until 2023. PR works, y’all. Turns out the reason GitHub doesn’t work is because it’s just so good!
    • cdrnsf2 hours ago
      They can't really cite the situation as a problem given their hand in creating and continuing it.
      • nine_k2 hours ago
        It's hard to talk about "them" as a singular entity. I bet that the "Copilot all the things!!11" faction mostly does not consist of GitHub SREs.
        • Hamuko2 hours ago
          The GitHub SREs are working for the Copilot company.
          • cdud32 hours ago
            Satya Nadella at the LlamaCon event in April 2025: "I’d say maybe 20%, 30% of the code that is inside of our repos today and some of our projects are probably all written by software."

            In particular Github, with its copilot-next initiative, probably has so much AI-generated code inside today that fixing all these new performance problems will need lots of human developer brains.

            • PunchyHamsteran hour ago
              It literally started having problems the moment MS bought it, way before the AI gold rush.
      • petcat2 hours ago
        The sysadmins didn't make any of those decisions.
        • cdrnsf2 hours ago
          I suppose the idiocy of their parent company is their job security.
    • crotean hour ago
      It's a bit hard to blindly trust their numbers when they are trying very hard to sell Copilot to everyone.

      Sure, AI will undoubtedly have increased their workload, but how much of the shown figures is real, and how much is the PR department trying to make it look like Copilot & friends is a massive success?

    • munk-a36 minutes ago
      Have they published incredible usage rate numbers somewhere? I saw their recent blog post about the outages[1] and it has a graph without axis labels and without any context around usage before 2019 to indicate just how much this agentic acceleration has actually increased usage growth.

      1. https://github.blog/news-insights/company-news/an-update-on-...

    • bdashdash2 hours ago
      Isn't the data that flows through Github so valuable that they (Microsoft) are happy to eat the cost?

      I don't have a clear idea how that value can be captured, since it's going to be 90% AI generated code that anyone can scrape (public projects) or can't be used (private projects), so perhaps you're right.

      • Athas2 hours ago
        > Isn't the data they capture so valuable that they (Microsoft) are happy to eat the cost?

        Even if that is true, unless the value of the data corresponds to near-term revenue, the cost may eventually simply not be possible to meet. Or for that matter, the capital to manage the increasing load may simply not exist - it does not matter how much valuable data you have if the supply of hardware cannot keep up with your demand.

        Also, I suspect that most of the "data" obtained by the incessant hammering on GitHub is not very valuable. Most business code is routine, and getting Copilot to help out with generating enormous amounts of it may not contribute much in return.

      • petcat2 hours ago
        > 90% AI generated code

        And it isn't clear yet whether the AI-generated code is even particularly valuable, since it's legally ambiguous whether any human ownership can be attributed to it.

        The US Copyright Office has declined copyrightability for genAI artwork; it's only a matter of time before the same question comes up about code.

        • graemep2 hours ago
          Your claim is incorrect. Something purely AI generated may not be covered by copyright in the US. That would make it more valuable to MS as you can reuse it as you like.

          However, works with significant human input are covered by copyright, and most code does have such input. Human review, and correction is very common. There is a lot of AI generated code out there, and there are no cases challenging the copyright on it.

          You also need to look beyond US law. Software is a global business and most software businesses do not want to write software they can only sell in certain countries.

          • sofixa2 hours ago
            > However, works with significant human input are covered by copyright, and most code does have such input. Human review, and correction is very common. There is a lot of AI generated code out there, and there are no cases challenging the copyright on it.

            Legislation and court decisions are still pending. There are numerous lawsuits about copyrightability of output, and the right of LLMs to use copyrighted work, and both could have ramifications for code. I don't see how it's materially different to tell Claude Code to write you a function fetching an entry from a database, and telling ChatGPT to generate you a picture of a unicorn riding a bicycle. Both have the same level of input (desired end goal), both might go through review and updates (no, pink unicorn; no, cache the database connection).

            Legal challenges over code copyright are relatively rare nowadays, so I wouldn't take lack of high profile lawsuits as proof of legality / copyrightability.

            And yes, this will also depend on jurisdiction. Court decisions or laws can change that. Litigation over copyright infringement via training and reproduction is ongoing in multiple jurisdictions, and it wouldn't be shocking to me if at least some decide that it is indeed copyright infringement to pirate content to train LLMs that can reproduce it.

            • xp8428 minutes ago
              If I write a program of 1000 lines of code, with AI features turned off, then I turned the AI features on and use a completion to edit one function, can my program not be copyrighted? (I expect/hope you’ll say: “Of course it’s still eligible for copyright”)

              How about if I write 100 lines myself, turn the AI features on, vibe code 100 lines, and repeat this for five cycles? Half the functions are AI coded and half the functions I wrote myself. How about if I just tell Claude to write the program?

              And what if I tell Claude to write the program, and then spend six months tweaking most of the lines of code?

              I struggle to see a specific and obvious point where a line should be drawn. It seems intuitive to me that if I spend at least a few days worth of effort on a code base (whether tweaking, correcting, or directing AI to do targeted refactors), that is meaningful human authorship even if it has thousands of lines of generated code.

              I can, however, acknowledge the fairness that something which is simply one-shot output probably shouldn’t merit protection. But really, in any of these cases, it’s going to be pretty hard to prove after the fact exactly what the proportion of generated code to human authorship is, so idk how a court will really tell whether a repo with 20,000 LOC is one-shot or actually had a person spend a few weeks tweaking it.

            • graemepan hour ago
              If that function is all you ask it to write as a one off, maybe. However, if that function is part of a larger system that is human designed it is very different. If you review and correct the code in the system it is very different.

              Pages 27 and 28 of this are relevant to this: https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...

      • gpugreg2 hours ago
        > I don't have a clear idea how that value can be captured, since it's going to be 90% AI generated code that anyone can scrape (public projects) or can't be used (private projects), so perhaps you're right.

        The value is probably in knowing which AI-generated code ends up being pushed or discarded, which can't be derived from public projects. This information can then be used to finetune the next big model so it only generates the "good" code.

      • graemep2 hours ago
        It's easier for them to scrape than it is for anyone else. They also have a lot more metadata about the code, which may be useful.

        Do Github's terms entirely prevent them from making use of data in private projects?

      • desdenova2 hours ago
        > or can't be used (private projects)

        As if they cared about that

    • mohsen12 hours ago
      The same company operates the Xbox network, with more daily active users and more events per second.
      • brian-armstrongan hour ago
        Not comparable at all. Xbox would be mostly transient traffic. It's probably not much more than packet forwarding for a lot of traffic.

        Github is a giant complicated stateful mess with a lot of reads and writes. It also has a lot of features at this point. Hard to scale and hard to optimize.

        • pathartl26 minutes ago
          I think this is minimizing the Xbox platform. They are also a massive digital distribution platform where almost every game is a digital download now.

          That being said, you are correct. It is absolutely no surprise to me that Actions has the worst uptime.

      • incognito124an hour ago
        The Xbox network was _designed_ for such concurrency; GitHub is Ruby on Rails + Vitess (MySQL).
      • steve1977an hour ago
        Do they run Xbox network on Azure or is it a separate thing?
    • amarantan hour ago
      Huh, so vibe coding really is the reason GitHub has been down so much lately!
    • cedws29 minutes ago
      The disappointing thing is that if you do some digging, you'll find the majority of it is slop and just outright spam. There's a page on GitHub where you can see recently updated repositories, and it's very rare I see anything of quality on there.

      GitHub has become a dumping ground for broken code and it has more bots than ever. As much as I hate ID verification, it might be a necessary evil at this point because clearly their anti-bot measures aren't working.

    • shevy-java31 minutes ago
      It means that Skynet is winning.

      What you described above will piss off and alienate even more people. Eventually there will be a critical threshold crossed. Microslop will be the first victim of Skynet 11.0 (I lost track of its current version but you can see how much damage is caused by AI in general now - this was the beginning of skynet. Except that it sucks).

    • elAhmo2 hours ago
      Can you share where they published that?
    • pier25an hour ago
      Amazing that Microsoft didn't see this coming after aggressively pushing AI everywhere for years.
    • pydry2 hours ago
      Github naturally scales horizontally.

      Usage numbers are the PR reason. Vibe-coding insanity inside Microsoft is the more plausible actual culprit.

  • gejose2 hours ago
    Github has 84.92% uptime in the last 90 days according to https://mrshu.github.io/github-statuses

    I don't know how this is even remotely close to acceptable.

    • gen2202 hours ago
      IMO that site overcounts downtime. If you filter for major and critical outages (the kind that make the front page of HN), the story is still bad but it’s not 84.92% bad.

      https://isgithubcooked.com/?severities=major.critical

    • pluc2 hours ago
      It isn't. Lots of unacceptable things going on these days and everyone seems to be accepting them just fine.
      • steve1977an hour ago
        I think it's like some kind of collective inferiority complex. Nobody really understands things anymore but everyone is afraid to point out mistakes of others because they are scared to come under scrutiny themselves then.
      • afro88an hour ago
        Guarantee enterprises with SLAs aren't accepting them
        • maccard25 minutes ago
          The thing about an SLA is that once you’ve broken it you’ve lost the trust. It doesn’t _really_ matter what the cost is for breaking it, nobody chooses their platform based on the refund they’ll get if they’re down. But they absolutely do choose based on reliability and uptime. The enterprise SLA refund credit will show as a (big) metering blip, but the problem is the people who signed the contracts are going to be speaking to Gitlab now
      • booleandilemma24 minutes ago
        I think the default position people like to take generally is to just go with the status quo. GitHub has reached status quo level. As in "nobody ever got fired for choosing GitHub". It's the only forge I've seen advertisements for in the meatspace, and even non-technical people know about it. On job applications, companies ask for my GitHub URL. I think it'll be awhile now before they get abandoned. That said, I recently started moving my stuff over to Codeberg. The change needs to start with us, the people writing software.
      • tardedmemean hour ago
        We should make an alternative git site, but how to acquire users?
        • go_elmoan hour ago
          Make it nerdy enough to scare off agentic coders only. Also, blackjack and hookers are said to be helpful in such circumstances.
        • dd8601fnan hour ago
          Forgejo is a thing. But the headlines lately make it sound like it’s not in great shape either.
        • tantaloran hour ago
          What do you need users for?

          GH is not a social network

          • tardedmemean hour ago
            I don't know. Everyone seems to be using GitHub only because everyone else is using GitHub. Apparently that's important somehow. Me, I use "git init"
            • 01HNNWZ0MV43FFan hour ago
              https://en.wikipedia.org/wiki/Network_effect

              It's a lot easier to get bug reports and fixes when everyone is on the same auth system.

              That's why there is also a call for federated forges

              • tardedmemean hour ago
                Why is it easier to submit a bug report if my bug reporting system is run by the same company as your code repository? Why are those things even slightly related?
        • 01HNNWZ0MV43FFan hour ago
          codeberg is doing fine
      • Retr0idan hour ago
        I, for one, am not paying them enough money to expect any better.
    • tantaloran hour ago
      They can't even get two eights, let alone three nines.
    • amarantan hour ago
      Hey there's a nine in there, so it's fine!
    • croes2 hours ago
      At least one 9 … somewhere
    • veber-alexan hour ago
      ffs can we stop talking about that number and site already.

      It treats any service being down as the entire platform being down which is nonsense.

      It's just lying with statistics.

      • bspammer44 minutes ago
        The individual numbers for git operations, pull requests, and actions are all still single-nine.
  • indianhippie2 hours ago
    This is reaching an unacceptable level of performance. There isn't a week that work isn't interrupted by GH.
    • petcat2 hours ago
      AI agents have changed the scalability properties of basically the entire internet.

      It used to be that GitHub could rely on a finite number of people interacting with their platform in real human ways in real observable patterns. So I'm assuming that they scale for those patterns, and optimize for the UI and UX hotspots.

      But now everyone's got a moltbot running 24/7, sometimes many, and it's completely overloading a lot of services. Especially services like GitHub which are very much agent-centric nowadays.

      • njovinan hour ago
        Microsoft buys github.

        Microsoft forces AI usage down everyone's throats.

        AI bot usage takes down github.

        I have to assume that there are some serious fights going on between the poor SRE teams wanting to throttle bots, and MS not wanting to do anything to dissuade AI usage.

        • j_maffe21 minutes ago
          How do you throttle bots? Everyone will stop having commit messages mentioning LLM agents. Then what?
      • PunchyHamsteran hour ago
        GH was going downhill before the AI explosion. The start of the trend was MS buying it, not AI; that's just the final nail.
      • tardedmemean hour ago
        It's not new; it's just a DoS, which is a serious crime. Report the attacker to the police if they're in your country, or block their IP if not. If it's accidental it's likely not a crime, but the police will still scare them into stopping.
      • DetroitThrow2 hours ago
        >AI agents have changed the scalability properties of basically the entire internet.

        Why is GH the only service provider seeing such consistently bad availability then? Everyone has had to scale massively all the time, if GH is choosing moltbots capacity over basic availability for the rest of the humans, they have made the wrong choice.

        • mert-kurttutanan hour ago
          Some people really abuse the f out of the system in a way optimized to take GitHub down. They push every minute, or on every commit, instead of at sensible intervals (e.g. a few pushes a day per repo).

          I follow some of the accounts that run 24/7 agent sessions. Their projects aren't even that novel given the number of commits on their profiles. Many of the commits contain only the log of beads, a Claude session, etc. (no change to the actual code). Some are ports of existing projects to another language. AI will surely increase productivity, but the waste and noise some people are willing to commit ...

      • dingnuts2 hours ago
        [dead]
    • lbourdages2 hours ago
      It's been unacceptable for months, but now it's at the level of "we should actively look for alternatives".
      • cenal2 hours ago
        Any centralized solution like GitHub is going to suffer the same fate as vibe coding chokes these services. The only option to have high uptime is to self host and most organizations can't do that easily. Time will tell if GitHub can scale up enough to meet demand.
        • tardedmemean hour ago
          It's a nice thought but I think the revealed preference from the history of the internet is that people actually only want centralised services, no matter what they say they want.

          People love to clown on the fediverse because of having to choose a server. Which is no different from email. I guess the difference is that their ISP used to give them email.

        • ozgrakkurtan hour ago
          Not really. It is possible to implement systems that handle a lot more scale than github has. This is proven by systems that exist today.

          It might be hard to create such systems using ruby and microslop AI management though

    • baq2 hours ago
      A week? You're going to be happy with more than a day without an incident.

      I lost track which Monday morning PST in a row this is.

      • 2 hours ago
        undefined
    • Hamuko2 hours ago
      Things are a lot better in Europe. I stopped working hours before this incident started, and I can't really remember any major work-stopping incidents in the past months. I only remember one recent evening when trying to do hobby stuff was impacted.
    • enraged_camel2 hours ago
      I would say we are way past unacceptable.
  • Insanity2 hours ago
    At this point, "GH is down" posts are competing with "Newest LLM Hype" for the HN front-page week over week.

    For my personal project, I've been considering moving everything over to Codeberg. Stability of GH being one reason, but I also like the idea of an alternative that is not strictly tied to a big tech company.

    • SpyCoder772 hours ago
      Your name summarizes all the GitHub uptime crap.
    • eastbound2 hours ago
      And yet, you haven’t. That’s the problem with dominant platforms: Slight inconveniences + inertia are enough to ensure no-one moves (even without monopolistic abuse – and I’m talking about Microsoft here).
  • chao-2 hours ago
    It feels weird (sad?) that I'm starting to get a sixth sense for when GitHub is going to have a service disruption.

    About an hour ago, clicking "Resolve Conversation" in a Pull Request failed a few times with an error message that appeared lower on the page (outside the viewport), and which I did not see the first few times. I had to reload the page after every few actions to get the server to register new ones.

    I told a colleague, and added "Github might be having an issue with some other service, and it's just bleeding over to PR comments? Maybe it will snowball into a larger outage?"

    • romellem2 hours ago
      Literally just had this same signal with PR review comments. Checked the status page, saw it was green, and (correctly) assumed “not for long!”
    • thomas_viaelo2 hours ago
      [flagged]
    • throwaway6137462 hours ago
      [dead]
  • matthew_hre2 hours ago
    Hilariously, it looks like basically everything except Copilot is degraded. The jokes write themselves sometimes.
    • Kwpolskaan hour ago
      Copilot is fully independent of the code forge parts of GitHub, so I would imagine it’s running on completely different infrastructure, without any hard dependencies on the Rails monolith.
    • cdrnsf2 hours ago
      Copilot's full functionality is only fractionally useful compared to what's currently degraded.
  • gritzko2 hours ago
    2027: "GitHub is up!"
    • elevation2 hours ago
      "Three 8s of availability"
      • baq2 hours ago
        I literally laughed out then shed a tear, because I'd actually take three 8s today.
        • tardedmemean hour ago
          Three eights is more than a month of downtime every year. Today is the three eights.
          • Philpax32 minutes ago
            This felt wrong to my intuition, but, no, you're right: (1-0.888) * 12 = 1.344.
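            The arithmetic generalizes; a quick sketch of the downtime implied by an availability figure (values are the ones from this thread; the 365-day year is an assumption):

            ```shell
            # Downtime implied by an availability fraction (illustrative values).
            avail=0.888                                    # "three 8s"
            awk -v a="$avail" 'BEGIN {
              printf "%.1f days/year\n",   (1 - a) * 365   # ~40.9 days
              printf "%.2f months/year\n", (1 - a) * 12    # ~1.34 months
            }'
            ```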
  • jfrbfbreudhan hour ago
    Reduce the free tier.

    I’ve made 4000 commits in the last 2.5 months. That’s just to main. And I push up tons of artifacts daily for regression testing.

    For $0.

    • cphoover24 minutes ago
      If they do that, the open source projects that haven't already left will migrate.
    • PunchyHamsteran hour ago
      That's like 2 minutes of CPU usage for the repo part
    • timmgan hour ago
      Honestly, I hate "free tiers" for SAAS products like this.

      For about 5 minutes, Google had a pay-as-you-go service on GCP for git. I used to use that, because I wanted to own my stuff. But (I guess) because everyone used "free" github -- and like a lot of other Google services -- they sunsetted it.

      So now I'm using GitHub for free. But I would rather pay for my storage and usage with a (big) cloud provider.

    • Kwpolskaan hour ago
      GitHub should add a slop tax. Co-authored by Claude? Pay up. Em-dashes in comments? Pay up. A lot of code written in a short window of time? Pay up.
  • m_w_2 hours ago
    This is really getting ridiculous - although people sometimes dismiss the "missing" status page because it includes copilot, it's worth noting that pull requests (95.5%) are even lower availability than copilot (96.4%).

    How am I expected to comment "LGTM" if I can't even get to the PR?

  • int32_642 hours ago
    At least people are gaining knowledge of how to use the git remote command.
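    For anyone learning it now, a minimal sketch of keeping a fallback push target (the remote name and URL below are placeholders, not anything GitHub-specific):

    ```shell
    # Add a second remote as a fallback push target; "backup" and the
    # URL are hypothetical -- substitute your own mirror.
    git remote add backup ssh://user@example.com/srv/git/myproject.git
    git remote -v            # confirm both remotes are configured
    git push backup main     # push to the fallback when the primary is down
    ```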
  • 12 minutes ago
    undefined
  • nine_k2 hours ago
    The best time for GH to increase prices was 6 months ago, or so. No service is going to weather the storm of agentic code overload unscathed. But at least they could become an expensive-but-working solution instead of the sad comedy they're now, and thus keep their most lucrative customers.
  • ecshafer2 hours ago
    At this point we can just have a Bot repost this exact submission every day and it would be right more often than not.
    • bigbuppo2 hours ago
      It exists, but it runs as a github action, so it's not working right now.
    • tardedmemean hour ago
      We couldn't, because HN prevents duplicate posts.
  • faangguyindia2 hours ago
    I have been writing lots of code lately,

    and I still only have a local git repository.

    I don't like that git doesn't have a built-in bug/issue tracker and a kanban board or something.

    I wish a Git/Fossil hybrid had won.

    I've thought about using Fossil many times, but it seems Codex and Claude have deeper integration with git.

    I don't like installing software that keeps growing into an infinite-feature monster. Maybe I'll install Gitea or Forgejo, idk.

    That's the last piece of the puzzle remaining for me; I've already mastered deployment and HA on bare metal from OVH and Hetzner, and have already scaled to tons of users.

    • vaylian2 hours ago
      You might be interested in radicle: https://radicle.dev/faq#how-does-radicle-handle-issues-pull-...

      Radicle supports issues directly as part of the git object database.

    • nine_k2 hours ago
      GitHub is actively used for code review and bug tracking. There are a number of tools that offer those on top of git, in a distributed way, but it means you need to install them locally on every machine involved.

      What's worse, GitHub is widely used as a CI/CD solution; it runs a massive amount of build pipelines and test suites. There's a ton of players in this space, too.

      GitHub's main value proposition was having all these things in one place, as a convenient web app, for free or for moderate money. So they're crushed by the success of their model.

    • 2 hours ago
      undefined
    • WolfeReaderan hour ago
      "I thought about using fossil many times but it seems codex and claude have deeper integration with git."

      Don't let "agentic" "coding" be the reason to avoid fossil.

      Fossil and other VCS are much easier for humans to use than Git is; there's no reason to have an LLM burning up tokens and the environment to do tasks you'd do yourself quickly and correctly.

  • Benderan hour ago
    Would it reduce their load if every AI instance, both local and centralized, had its own local git repo (and/or Gerrit instance) and only pushed upstream when a specific sub-set of conditions was met?
  • hansmayer2 hours ago
    Is this finally the rise of the "multi-agentic os/platform" we've all been so eagerly awaiting?
  • baderb2 hours ago
    I keep wondering what's causing such bad service from GH lately. Is it the overuse of AI-generated code? Are they trying to cut costs on infrastructure?
    • sailingparrot2 hours ago
      Everyone producing magnitude more code with AI agents. Numbers from GH COO here: https://x.com/kdaigle/status/2040164759836778878
      • crotean hour ago
        So why aren't we seeing an explosion in feature shipping rate, or tech startups?

        If there is so much extra code, where is it going? Is everyone just creating giant piles of throwaway slop?

        • tibbar12 minutes ago
          I'm curious, do people around you use AI? Because in my own workplace, people use lots of AI, and they ship lots of PRs, which correspond to actual features on the roadmap. I've been doing this a long time, and there is a whole lotta stuff shipping. I'm a manager and in the handful of hours I have I'm shipping the equivalent of what I would have as a full-time eng years ago.
        • Barrin92an hour ago
          > Is everyone just creating giant piles of throwaway slop?

          yes, like the crypto or productivity 'industry', where every project exists to manage your other crypto and productivity projects, the dynamic of 'AI programming' seems to be to make software to manage your other two dozen AI things

          mysteriously enough, big open source projects that people actually use in the real world aren't seeing their issues closed at any higher rate, almost as if that requires actual maintenance and programming (as Fred Brooks told us half a century ago)

    • robertclaus2 hours ago
      There was a post from Github a few weeks ago showing commit volume exploded from linear to exponential growth about 6 months ago. I don't know for sure, but I think they weren't ready for the scale out. Whether that means actual scaling issues or cost cutting because of the scale out, who knows.
    • simoncion35 minutes ago
      Part of it is additional load. Part of it is their move of more and more of Github infrastructure to Azure.

      I've done a lot of "plain compute" work [0] with the Big Three Cloud Compute vendors. Azure is by far the worst. Mysterious resource creation failures, mysterious resource deletion failures, mysterious "incompatible schema" failures when talking to Azure provisioning and status infrastructure, mysterious and inexplicable performance problems, etc, etc, etc. Unless I was being paid a lot of money to use Azure, I'd take Google's legendarily nonexistent support over Azure's unreliability any day.

      [0] That is, "create a VM with persistent disks, Internet access, and maybe a load balancer in front and ignore all of the other features provided by the vendor"

    • grepfru_it2 hours ago
      Something something staff reductions something
  • p33p2 hours ago
    One of the most satisfying (and sad that it’s needed) GH CLI extensions I’ve (publicly) vibe coded has been https://github.com/Houstonwp/gh-down

    Although I think I still ironically end up finding out about the outages on HN first.

  • datadrivenangelan hour ago
    Edit: the status page is still all green for today, but the incident is there.

    pre-edit: They just removed the incident and set the status page to all green as of 10:01 PT???

    • DetroitThrowan hour ago
      They're trying to make it seem like it's not below 95% uptime really hard, so that might be the KPI they're trying to uphold.
  • mandeepjan hour ago
    I think there should be another page, "GitHub is NOT down". Needless to say, all the other times it's down.
  • samuelknight2 hours ago
    I couldn't merge earlier but navigation around my project worked. Maybe if I generate even more PRs some will get through!
  • throwaway8728144 minutes ago
    What's with Github Enterprise? At the workplace it's constantly broken as well.
  • mghackerladyan hour ago
    It'd be more useful to inform us when it is working at this point
  • ethinan hour ago
    Honestly, if GH keeps getting worse I may need to migrate away to Forgejo or something. The problem is GHA... Does anyone know of a service that is better and doesn't charge me an arm and a leg for runners (particularly MacOS ones)? I've been wanting to shift away from GHA for ages (because I hate it) but I don't know of any alternatives that are quite like it (or the costs involved).
    • tommy_axle2 minutes ago
      If self-hosting the runners too then it's doable. Not 100% sure for Forgejo but with Gitea and act_runner it's possible and pretty economical if you have an extra mac mini.
  • munificent2 hours ago
    I haven't seen Friendster-level stability like this in a long time.
  • pulkasan hour ago
    they are testing the limits of ruby on rails.

    "Ultimately, if more companies treated the framework as an extension of the application, it would result in higher resilience and stability. Investment in Rails ensures your foundation will not crumble under the weight of your application. Treating it as an unimportant part of your application is a mistake and many, many leaders make this mistake." https://github.blog/engineering/architecture-optimization/bu...

  • pickleballcourtan hour ago
    Is this trending in relation to Ghostty moving away from github?
  • alfg2 hours ago
    Wow, GitHub are no longer serious people. Unbelievable.
  • sergiotapiaan hour ago
    If you have clowns bombarding the backend with agentic volume that is 200x the average, start charging them for usage.

    This is not sustainable and hurts actual customers.

  • claudiusa2 hours ago
    worked for me pushing some commits about 20 minutes ago.
    • Insanity2 hours ago
      _Usually_ the blast radius isn't "GH is down globally across all functionality". So it can work for you while still being either down for other regions, or at least degraded.
      • bombcar2 hours ago
        Pushing commits over SSH is often the most "reliable" thing, though you can get some fun situations where a commit is pushed and runners never ran, causing downstream FUN eventually.
        • maccardan hour ago
          I’ve seen this behaviour but IMO it’s a fatal design flaw of Actions. It shouldn’t be possible.
    • tosti2 hours ago
      I don't do that anymore, but I can still access repositories and issues. Looks normal to me
  • testemailfordg2an hour ago
    Probably the solution is not using centralized GitHub for all work related to agentic coding, but rather a distributed/local git repository from the get-go. That way, only what hits centralized GitHub and becomes public is something vetted and signed off by the human in the loop, hopefully reducing the AI slop and increasing the quality of commits.
  • alansaberan hour ago
    Github vs cloudflare
  • Rzor2 hours ago
    Soon on your favorite prediction markets.
  • josefritzisherean hour ago
    Microslop happens.
  • winddude2 hours ago
    Are there Polymarket bets yet on whether GitHub will be up or down today?
  • pbiggaran hour ago
    CircleCI seems to not have the same problem: https://status.circleci.com/
  • dankobgd2 hours ago
    oh no, anyway...
  • fHr2 hours ago
    Selfhosted gitlab alpha versus the cloud Github beta
  • 2 hours ago
    undefined
  • Imustaskforhelp2 hours ago
    The amount of damage GitHub has done to its brand is so phenomenal that I think it might be studied in future case studies. Skype and GitHub were two Microsoft products that everyone used until Microsoft degraded them. Windows itself can now be included too.
  • tailscaler20262 hours ago
    > Co-authored by Co-pilot
  • Group_B2 hours ago
    I'm moving everything to self hosted GitLab CE now. Will just use GitHub as a backup. This is so ridiculous at this point.
    • Foorackan hour ago
      GitLab vs Forgejo? I personally run GitLab but am looking at moving, due to its massive size.
  • dickeeT2 hours ago
    really need competitor for gh rn
    • deathanatos2 hours ago
      Gitlab, Forgejo, maybe Codeberg all exist. Vote with your wallet, if you hold the purse strings.
      • PunchyHamsteran hour ago
        Doesn't matter if your dependencies are still using GH
    • Zambytean hour ago
      Then use one. There are so many.
    • tardedmemean hour ago
      it's git. Just do "git init" in some SSH accessible folder.
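      That really is the whole recipe; a sketch, with the hostname and server path as placeholders:

      ```shell
      # On any box you can SSH into, create a bare repo -- that's the "hosting":
      ssh user@server 'git init --bare /srv/git/myproject.git'

      # Locally, point a remote at it and push over plain SSH:
      git remote add origin user@server:/srv/git/myproject.git
      git push origin main
      ```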
  • shevy-java32 minutes ago
    Many of us predicted this.

    Microslop, 'xcuse me, Microsoft is killing off GitHub. Not necessarily by design or on purpose, but by slop-neglect effect. AI is eating the corporate brain that Microsoft used to have (well, a long time ago).

  • saltyoldman2 hours ago
    Is Git decentralized?
    • bigbuppo2 hours ago
      Git is, but with GitHub being treated as the golden repo for projects, it becomes the hub of other parts of development, like PRs and CI/CD pipelines.
    • keyboredan hour ago
      Yes. But in practice people need their `if any_tasks_failed: throw error` (“CI”) to be in the cloud and have three nines of uptime.

      Personally (but who am I) I don’t need to check whether my changes work on Windows 7 and SunOS in order to continue on a project most of the time.

  • SilverElfin2 hours ago
    If this was a business running on its own, without Microsoft’s capital and anti competitive practices, they would have been out of business by now.
  • fishgoesblub2 hours ago
    This is genuinely pathetic. I wish I could do my job this poorly and still be employed.
    • this_user2 hours ago
      • cdud3an hour ago
        It's A Trap! ~ Admiral Ackbar
    • tardedmemean hour ago
      Keep in mind that it's easy to call someone incompetent whenever their thing doesn't work, but we have no real idea what's going on behind the scenes. They could be very competent people in an unwinnable situation (like being forced to use Azure).
  • Veyg2 hours ago
    Here we go again...
  • 2 hours ago
    undefined
  • throwaway6137462 hours ago
    [dead]
  • 2 hours ago
    undefined
  • ykhan112 hours ago
    [dead]
  • zkmon2 hours ago
    The goals for code versioning as they existed decades back might be sliding into irrelevance now. Code is no longer a direct work artifact from humans, as it used to be. Back in the day, people wanted to persist their code and its changes, because code was hard to write and test. People didn't use a versioning system for their compiled binaries, though, because they were machine output and could be recreated from source. But there was no higher-level "source" for generating the code itself. Now things have changed. We need to ask what exactly is the human artifact that requires treasuring, persisting and versioning.
    • hansmayeran hour ago
      Flawed reasoning all around. Machine code and object code will look the same given the same source code, target platform, compilation and linking params, etc. How is AI-Slop-Code even close to that?
      • zkmonan hour ago
        "looking same" is not the requirement. "Working same" is. With proper test harness, the generated code can be controlled to "work the same".
        • dminik14 minutes ago
          And where does one store and version this harness?
        • hansmayeran hour ago
          :) :) Sure buddy - mind showing me yours ?