This has led to a game: each time a sales pitch claiming big wins for their AI tools arrives, ask “when will we see GitLab development return to pre-IPO speed?”
It’s a tough trade-off for a small team competing with a behemoth, and I guess their success indicates that they played their hand correctly. If you are going for the enterprise segment then checking off the feature requirements can be more important than making each feature perfect.
Ten years later the same problem remains, while Gitea / Forgejo have very few performance problems, and they will only get better once Go 1.26 is out, which is a much bigger release than the single-digit version bump suggests.
Yeah, from my experience administering a pretty small self-hosted instance, I think it's got to be this. Every single admin action you try to take on the backend is SLOWWWWW. Restarting the services takes a few minutes. Just to bring up an admin console on the server in a CLI takes well over a minute, which indicates a tremendous amount of overhead purely on the ruby side.
There's just something fundamentally slow about how it's all put together. I wouldn't blame the language or tech stack, exactly - but perhaps Ruby on Rails is not a great fit, anymore, for a fully featured forge of this scale?
So while Go by default will always be faster than Ruby on Rails, there are plenty of examples of decently fast Rails apps. Ten years of saying they will improve performance without delivering suggests problems with GitLab itself more than with RoR.
Basecamp is slow as hell, Shopify depends on some insanely specific tuning, and I don't use Hey but it's the same team as Basecamp IIRC.
Can anyone tell me why? What's changing with Go 1.26?
https://medium.com/@anand.hv123/go-1-26-is-around-the-corner...
I thought they're only struggling with the free public version, but no. GitLab run on a private box is the same bag of lags and loading spinners.
who thought showing markdown as markdown could be cool :)
As the author says, GitLab feels sluggish, and is bloated with 1,001 things I'd never use that just make the UI a pain. Despite all the features I don't need, some that I would benefit from are disabled in the free version.
Forgejo is simpler. It allows me to hide features per project that I don't need. But there are some tradeoffs. Updates on GitLab were great: I've been letting it self-update for years with no issues. This does not work on Forgejo. Forgejo is also a lot less polished, and some features just don't seem to work like they should.
This is the primary reason I dislike GitLab. It is so complicated to use.
Github issue trackers are so simple to use.
Perhaps GitLab offers more features for project owners, but as a user, I much prefer GitHub. Of course I'd prefer it even more if Microsoft didn't control GitHub ...
This is due to the fact that they don't do it well.
Even just finding the source code or issue tracker was hidden behind layer upon layer of navigation.
Now we're hosting on GitHub. Shit's never been easier. GitHub also has its ugly sides, but those ugly sides are still better than anything in GitLab.
I hate GitLab with a passion.
I still have proper CI, issue tracking, and all other features I care about, but the interface loads instantly and my screen isn't filled with many features I'll never use for my private projects.
The article mentions the container registry as a prime feature of GitLab. Forgejo has this too, btw.
In addition, the speed (of everything) is so good with Forgejo. The resource requirements (napkin math, but...) are maybe 10% of GitLab's.
I see no reason to ever use GitLab again.
There are two minor annoyances for me, but not deal breakers. First, I actually prefer the GitLab CI syntax; "GitHub Actions" is a mess. I suppose it makes sense to use the dominant gorilla (GitHub Actions), but converting to this CI was more trouble than it should have been.
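For illustration, the same one-job pipeline in both, as a minimal made-up sketch (job names, image, and commands are mine, not from any real project):

    # .gitlab-ci.yml
    build:
      image: node:20
      script:
        - npm ci
        - npm run build

    # .github/workflows/build.yml
    on: push
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4   # Actions needs an explicit checkout
          - run: npm ci
          - run: npm run build

Even in the simplest case, the Actions version needs the extra workflow/jobs/steps nesting and a checkout step that GitLab CI does implicitly.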
Also, the Forgejo API is much less developed. I did like exploring with GraphQL, which is totally missing in Forgejo. But you have direct access to the database (and can use sqlite or postgres, your choice), so you can really do whatever you want with a custom script. The Forgejo API and the infrastructure around it are just a bit more clunky, but nothing that was a major problem.
Codeberg (a public Forgejo-based forge) also offers Woodpecker CI. Their hosted Forgejo Actions is still in beta AFAIK, but you can also use your self-hosted runners.
GitLab is no Gerrit, but it does at least support stacked MRs, and at least seeing comments between forced pushes / rebases, if not tracking them.
I use Codeberg, and therefore Forgejo, for my open source project, but frankly the GH-style workflow is not appropriate for serious software development. It forces one to either squash all commits or use <gag> merge commits. Many people have developed Stockholm syndrome around this and can't imagine any other way. But it sucks.
The GH model encourages big-bang, all-at-once giant-PR development, and it's corrosive to productivity and review culture inside teams. And it leads to dirty commits in the git history ("fix for review comments." "merge." "fix for review comments." etc.)
I worked with GitLab for a year and a half on a job, and I prefer its review tool for functionality, though not necessarily UX.
Docker images don't have a size limit; there is a limit per layer. IIRC I've distributed a 100GB image through their free tier (I just had to keep each layer small enough).
Update: Sources
https://docs.gitlab.com/user/storage_usage_quotas/
https://gitlab.com/gitlab-org/container-registry/-/issues/10...
Hard disagree. Gitlab CI, while more powerful than some alternatives, is so so bad, its YAML-based syntax included. As I said in another thread[0]:
> I worked with Gitlab CI on the daily from 2021 till 2024 and I started curating a diary of bugs and surprising behavior I encountered in Gitlab. No matter what I did, every time I touched our CI pipeline code I could be sure to run into yet another Gitlab bug.
We also need custom runners anyway, because macOS and Windows are important, and getting those with graphical session access and/or CUDA hardware in the cloud is either $$$$ or severely limited. Even with our setup, we split the build and test phases so that CUDA hardware slots aren't wasted on running compilers. It also lets us easily test a single build under different environments, as sketched below.
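Roughly like this (runner tags, commands, and paths are ours, nothing GitLab built-in):

    # compile on a cheap CPU runner; occupy a GPU slot only for tests
    build:
      stage: build
      tags: [cpu-builder]
      script:
        - make -j"$(nproc)"
      artifacts:
        paths:
          - build/

    test-cuda:
      stage: test
      tags: [cuda]
      needs: [build]
      script:
        - ./build/run_tests --gpu

The same build artifact can then feed several test jobs with different tags for the different environments.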
So, yeah, I can see fighting with the breadth of features, but you'd need to restrict yourself like that with most other systems too. At least what we do is possible with GitLab CI.
I had such a better experience with gitlab CI than any other I have used. There are quirks, but they make sense after you learn them.
> every sufficiently complex CI system becomes indistinguishable from a build system.
But what alternatives are there that also integrate well with version control systems like GitLab/GitHub/Gitea/…?
For instance, Dagger works quite well but its UI is going to be completely separate from whatever CI system you're using.
What UI are you looking for outside of log streaming? If you want to see a DAG of your workflows and their progress you can use other systems as you say (Dagger has this), or your orchestration layer can implement that.
If you want to use the orchestration component of your CI tooling, you always can, and get your DAG viewer, but you have to accept all of the constraints that come with that choice.
Github is so much easier to use. I don't understand why gitlab wants to make everything so complicated.
Give it 1-2 years, feature quantity will take precedence over feature quality
That's been my experience as well and, in fact, it was totally a meme at my former client! See also my comment in another recent thread: https://news.ycombinator.com/item?id=46296816
It is slow as molasses, issues are oriented toward project management rather than coding, quality gates are virtually nonexistent, and builds are now slow. Builds are slow because instead of our beefy build servers they run on VMs that are undersized and have IOPS restrictions, because downloading the cache for maven/docker/npm is relatively fast but actually expanding it on disk is slow, and because even the simple orchestration to spawn a job is slow.
I would love to go back to gitlab and I would even dedicate some time to performance tune it and contribute back. I think gitlab does everything right. (Technically, not sure about pricing and tiering.)
To give an idea, this is a proprietary system that is extensible via scripts, but all of the artifacts are exported via XML files where the script source is escaped into one XML tag within the metadata. Same with presentation layers: the actual view XML is escaped into one attribute of the metadata file. The "view" XML may be thousands of lines, but it is escaped into a single line of the export file, so any change at all just shows that one line as changed in a diff. Even attempts at extracting and unescaping that data present problems, because when the XML is exported, the attributes within the schema often come out in a different order, etc.
Yes... and no.
Gitlab doesn't make sense for a low-volume setup (single private user or small org) because it's a big boat in itself.
But when you reach a certain org size (hundreds of users, thousands of repos), it's impressive how well it behaves with such modest resource requirements!
Just in case anyone else (like me) didn't get the reference:
> This page describes the GitLab reference architecture designed to target a peak load of 40 requests per second (RPS), the typical peak load of up to 2,000 users, both manual and automated, based on real data.
https://docs.gitlab.com/administration/reference_architectur...
Eventually, no amount of vertical scaling could help. This, for me, is the biggest problem with GitLab hosting: as soon as you hit a scale where a single machine with Omnibus doesn't cut it, the jump in complexity, cost, and engineering hours is significant.
They have their free fast-stats tool; you can run your logs through it to get statistics and identify hotspots.
I would never use Gitlab for my own needs, but at company level, it's impressive how well it behaves!
It's not as demanding as some of the other software out there, like a self-hosted Sentry install. Just look at all of the services: https://github.com/getsentry/self-hosted/blob/master/docker-... compared to GitLab's self-contained single-image install: https://docs.gitlab.com/install/docker/installation/#install...
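For reference, the whole single-node setup is roughly this much Compose (a sketch following the linked docs; the hostname and volume paths are placeholders, and you'd pin a real version tag in practice):

    services:
      gitlab:
        image: gitlab/gitlab-ce:latest
        hostname: gitlab.example.com
        ports:
          - "80:80"
          - "443:443"
          - "22:22"
        volumes:
          - ./config:/etc/gitlab   # configuration
          - ./logs:/var/log/gitlab # logs
          - ./data:/var/opt/gitlab # application data
        shm_size: "256m"

One container versus Sentry's dozens of services.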
At the same time it won't always have full feature parity with some of the other options out there, or won't be as in-depth as specialized software (e.g. Jira / Confluence), BUT everything being integrated can also be delightfully simple and usable.
I will say that I immensely enjoy working with GitLab CI at work (https://docs.gitlab.com/ci/). Even the colleagues on projects using Jenkins migrated over to it, and it seems like everyone prefers it: the last poll showed 0 teams wanting to use Jenkins over it. (Well, I might use Jenkins later for personal stuff, but that's more tool-hopping, the same way I also browser- and distro-hop, to see how things have changed.)
However, it was a bit annoying for me to keep up with the updates and the resource usage on a VPS, so my current setup is Gitea + Drone CI (might move over to Woodpecker CI) + Nexus instead of GitLab, which is way more lightweight and still has the features I need. Some people also might enjoy Forgejo or whatever; either way it's nice to have options!
Also the free version doesn't have PR requirements or multiple reviewers etc.
But they have a massive backlog and they seem to be focusing their development resources on customer requests, obviously. So it could definitely use improvement.
Also, if we're being honest, despite GitLab being the #2 platform, you're going to get fewer contributions than on GitHub, as people just aren't going to want to sign into a second service. Now, most of my public projects are like "I made this, I put it here to show off, use it if you like," so if people _don't_ use it, it's no big deal for me. But if you're in it for revenue or clout, or just like seeing usage numbers go up, it's clearly not the optimal choice.
The problem is that organising it on github was really really hard. Trying to find and group projects was notoriously difficult and the CI/CD offering was shit.
I joined a startup and it was on GitLab. The server was hosted but the runner was local. We could make arbitrary projects, and CI/CD was a dream. Very simple but powerful.
The downside was that, at the time, it was offline every other week. It was, however, far far easier to administer.
GitHub has improved: you can have organisations and chain them together. But it's a motherfucker to administer. They move everything about monthly, and make it very difficult to work out which child org has which power.
The CI/CD has vastly improved, we have private runners. But.
They are really nasty to administer and monitor, and the language you use to make jobs is not that intuitive. It doesn't feel like a shell script with a docker wrapper.
If I had a choice, I'd move us over, but it's a lot of faff, and we have so many workarounds.
Some builds produce a lot of output, and Gitlab simply truncates it. Your job failed? Good luck figuring out what went wrong :)
Showing the last N bytes makes so much more sense as a solution to the artificial problem of CI output being too large.
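Until then, one workaround sketch (job name and build script are made up): keep the complete log as an artifact so the truncated web view doesn't matter:

    build:
      script:
        - set -o pipefail             # keep the build's exit code through the pipe
        - ./build.sh 2>&1 | tee full-build.log
      artifacts:
        when: always                  # upload even (especially) when the job fails
        paths:
          - full-build.log

The `when: always` is the important bit, since it's exactly the failed jobs whose logs you need.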
The people who submit fake/AI issues and PRs for resume purposes seem to exclusively use GitHub, and this is a substantial expense for the real users (see the recent curl discussion). GitLab doesn't have those people (or at least it's a wildly smaller problem).
One social downside is that noobs will insist a project does not exist if it's not on GitHub, even if you send them the GitLab URL. You almost have to physically cut and paste the GitLab URL into their browser for them before they'll believe it. You can either do nothing, which filters them (not a bad idea), or create a clone or placeholder on GH that basically links back to the real repo on GL. I don't know if that is allowed in GH's ToS, but people certainly do it a lot.
Edit: perhaps it's a skill issue too, but I'm annoyed they don't have a jump-to-definition feature like GitHub does.
That said, looking at recent releases, there are nice things in both, and if I weren't running GHES, I'd be hard-pressed to choose between the two.
They sometimes do braindead moves like prohibiting no-expiry-date access tokens but otherwise it's pretty smooth sailing.
And with the recent migration to an SPA, GitLab feels quicker and quicker.