This is having your cake and eating it too. PR approval is a permission, so it's a boolean. Of course it is. Either the code can be merged or it can't.
What's really being described here is just something to make you feel slightly better about yourself whilst approving code you hate ("we should revisit this..."). Just open a new ticket.
-2: This is a bad idea, don't do that
-1: This is a good idea but needs improvement
+1: LGTM but I don't have enough knowledge or authority to approve
+2: Approved
This meant that they were completely unable to actually 'approve' a review, but were only able to reject it. They were juniors, so they'd eventually get to that point, but by then, everyone would be used to just ignoring their approvals.
This provides that middle ground.
Given that, what's wrong with simply commenting on the PR to document the concerns, issues, lack of knowledge, etc?
Unless you're using those +/-2 to achieve some sort of goal... but you can also do that with labels, tags, etc. on the PR.
sometimes I review something and say "approved", but sometimes I can only review part of it, and really need someone else to check what's out of my wheelhouse.
sort of "partially approved".
I have used systems that can set things like "requires 2 reviewers" or "bob, fred are required reviewers, elon and sam are optional reviewers".
also we had thumbs up / thumbs down, and some comments might have a "task" associated with them as a required fix before approval.
optionally, maybe before you say "approved" you have an overall comment, and see the comments of other reviewers.
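For the "required reviewers" part, a CODEOWNERS file is one common way to encode it on GitHub or GitLab (the paths and handles below are made up; the optional-reviewer half isn't expressible in CODEOWNERS and needs the forge's reviewer UI):

```
# CODEOWNERS sketch: with "require review from Code Owners" enabled
# in branch protection, changes under each path need an approval
# from one of that path's listed owners
/backend/   @bob @fred
/docs/      @elon @sam
```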
In many environments that depends on more than just code review, e.g. CI.
Unless you're implying codeReview is a score and a low code-review score can be offset by higher scores elsewhere, e.g. passing more tests?
The nuance is comments on the PR itself, rather than the state of the approval, which is binary (or ternary, if you want to count leaving it in an unknown state for extended periods of time).
Someone else knows the other portion well and sees the +1 and decides to +2.
In practice this ends the stalemate where partial owners don't feel confident enough to approve the whole thing.
Having several people each review separate parts without understanding the others' can cause interaction bugs. If such bugs cannot happen (say, due to modularity, type-safety guarantees, etc.), then you don't need partial approval in the first place.
I am not a fan of partial approval. Either you think the code is approvable, or it isn't.
You can require approvals from N domains from (potentially) different people.
Some people think that PR status can also communicate rationales and partial approvals.
Some think that should be done with tags and comments.
Lots of request systems have multiple stages between "open" and "resolved".
So you could require `Verified+2` (CI), `Code-Review+2`, and `Design+2`, for example.
[0]: <https://gerrit-review.googlesource.com/Documentation/config-...>
But with this, a non-maintainer could be allowed to give a +1 or -1, but not a -2 or +2, and it is clearer that a "+1" isn't sufficient for actually merging the PR.
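For reference, the scale being described is Gerrit's stock Code-Review label; in project.config it looks roughly like this (paraphrased from Gerrit's documentation, exact wording and functions vary by version):

```ini
[label "Code-Review"]
    function = MaxWithBlock
    value = -2 This shall not be submitted
    value = -1 I would prefer this is not submitted as is
    value =  0 No score
    value = +1 Looks good to me, but someone else must approve
    value = +2 Looks good to me, approved
```

`MaxWithBlock` is what makes -2 a hard veto: submission needs at least one +2 and no -2.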
To run the process smoothly, one can just hope that the team/tech lead is an ideal developer. Otherwise they are in a position where no one more senior than they are is available for code review, and any junior would just rubber-stamp their PRs.
1. Is the PR suitable, and therefore should be approved, and
2. Is this person suitable to make that decision.
If 2 is false then the person should remove themselves from the list of reviewers. Then 1 can follow its normal process.
Also:
> If [a person is not suitable to make the decision of whether the PR should be approved] then the person should remove themselves from the list of reviewers.
This doesn't reflect what sometimes happens in real life. Someone could have sufficient specialized knowledge to veto a PR without having sufficient broader knowledge to approve it. That person should definitely be left on the reviewer list, with the ability to veto, the obligation to state whether or not he vetoes, and the inability to definitively approve.
It is necessary for this specialist to notate "I have finished examining this PR, and there is nothing within my expertise that would cause me to veto it" before the PR is advanced.
Unfortunately, in a binary system, that often equates to him having to say "I approve" even though this does not truly capture the intent. Then you wind up with hacky work-arounds, like requiring a minimum number of approvals.
Not an intuitionist, I see.
yes but tangled.org really does do most of that!
1. JJ as the VCS: tangled supports stacked PRs using jj change-ids (https://blog.tangled.org/stacking); we use it a lot to build tangled itself: https://tangled.org/tangled.org/core/pulls
2. Raspberry Pi as a forge for a long time: also check; the git server shim is super lightweight, it's just an XRPC layer over git repositories plus an sqlite3 database. There are folks running it on a RISC-V board with 512 MB of RAM.
3. Actions are critical and they should be runnable on my local machine: IMO this ask is slightly misplaced. It is mostly your build system's job to be hermetic, run anywhere, handle cross-builds, etc. It would be really cool to "promote" the results of such builds to the forge itself.
I know that not all USB-to-SATA connectors are compatible with RPi – I got lucky with the first connector I tried (Unitek). Not sure if RPi 5 has a wider compatibility.
I do think it's just an awkward problem to solve though, because it essentially devolves to needing to run the entire system somewhere else, which is why every system I've seen like this ends up being trial-and-error.
yes, and... the idea here is that it would be neat to extend the hermetic-builds idea so that this can easily be run locally, or anywhere there's compute. The root problem being called out here is that iterating until the CI says it's green, with a change, commit, and network call in every cycle, is a pain in the ass. (The best way to avoid this churn cycle is to just never write bugs! TFIC ;P)
The point I am trying to make is, until you offer a user the ability to make a private repo for side projects, it's unlikely to take off.
What people want is the ability to make a private repo, go away for a few months and come back to find their repos right there waiting for them.
Private repos provide nothing to the site by definition. The value model here is, you must pay for private repos, either by paying a subscription or hosting your own node and bearing the related costs.
Grasp is actually pretty cool too, built on nostr, which is maybe a stronger platform in the end? I don't really know enough about it. Stronger as in, you're maybe opening up more interoperability by putting your stuff on an "anything network" vs Radicle's "p2p git data network".
To be honest they're all cool ideas; Tangled feels somehow corporate though.
https://tangled.org/did:plc:wshs7t2adsemcrrd4snkeqli/core/is...
You want blobless clones:
git clone --filter=blob:none <url>
Gets history and only fetches blobs on demand. Github has a great article on it: https://github.blog/open-source/git/get-up-to-speed-with-par...

I think the problem is that Microsoft committed to AI totally. There is no way back for them. And this also means that Github will suffer from this. Microsoft PR will tell people that AI is the solution to everything, but in reality it will lead to problems that keep coming up again and again. Now, people may say "but Github services being down does not have anything directly to do with AI" - while that may be true, the problem is that Microsoft has already shifted its strategy, so most of its considerations will be about top-down AI control. Whether people's workflow using Github is disrupted is at best of secondary interest to Microsoft - and that specific problem will keep resurfacing again and again. Perhaps it will be quiet for 3 months or so - but I am 100% certain that in the not-so-distant future, you'll have a new drama story about how Github is declining.
This is like step-wise deterioration. Ghostty won't be the last here.
Whether alternatives arise ... that will be interesting to see. I mean those alternatives need to not suck, but a lot of those websites etc... kind of suck.
The future might look something like instead of paid software or open source software what you get is a set of requirements documents for a code forge, like a recipe. You bake your own.
Then you alter it to your particular use case and set of preferences.
Some of the drawbacks include:
1. The time & effort you spend dicking around making something you could buy is time & effort not spent on your core business.
2. Understanding what to build is not trivial. Sure, the tool you built works for your use case, but does it work for other teams? CS? Legal with all their fiddly requirements? Congrats, you're a product manager now.
3. Understanding what to build is not trivial. Jira is not a trivial CRUD app, it's a workflow-engine builder.
4. User training and support is not something you can prompt away. The minute your software gets in anyone's critical path, you're on the hook for a lot of handholding. Congrats, you're user support now.
5. Congrats on your new role in ops and getting called when stuff goes down.
Any software engineer will tell you writing code is the easy part. Believe them lol.
Here's the thing: I don't think so in the age of LLMs.
> I’ve noticed that people who have never worked with steel have trouble seeing this—that the motorcycle is primarily a mental phenomenon. They associate metal with given shapes—pipes, rods, girders, tools, parts—all of them fixed and inviolable, and think of it as primarily physical. But a person who does machining or foundry work or forge work or welding sees “steel” as having no shape at all. Steel can be any shape you want if you are skilled enough, and any shape but the one you want if you are not. Shapes, like this tappet, are what you arrive at, what you give to the steel. Steel has no more shape than this old pile of dirt on the engine here.
Like the common person vs. the metalworker thinking about steel, I think we've all gotten this rigid view that the software we work with is fixed and unchangeable, and the LLM boom is going to change that by making ALL of the software we use "any shape we want".
I think libraries and open source software are going to have to move to looking more like building blocks with standards and instructions for modifications and people are actually going to DO those modifications to suit themselves instead of just being satisfied with whatever their SaaS providers want to give them.
And the pendulum of "we don't do it because it's not our core competence" is going to swing back to having developer tools teams that actually build and maintain developer tools.
The old advice about the time spent writing your tools is tempered by the fact that LLMs make it very much easier for a focused, smart team to build things.
> we've all gotten this rigid view that software we work with is fixed and unchangeable and the LLM boom is going to change that by making ALL of the software we use "any shape we want"
What? Literally nobody in software engineering has this view lol. We take open source code and libraries and adapt them all the time. And make new ones.
Your steel analogy is bad, because you're missing what's complicated about both manufacturing and coding.
I've taken welding and shop classes, I could make a motorcycle. Turning a part on a lathe isn't that hard. Bending steel just needs the right tools. So should I build instead of buying, if I want the motorcycle itself and not a hobby project? Haha absolutely not.
I'm not buying from Honda because I think vehicles are immutable and unchanging things, I'm buying from Honda because it'll be quicker, cheaper, safer, and far more reliable than anything I could do. If I want a hobby project, sure, but otherwise it's a bad idea. [0]
Same thing with code and for the same reason.
> The old advice about the time spent writing your tools is tempered by the fact that LLMs make it very very much easier for a focused smart team to build things.
Yeah, you misunderstand the problem lol. Building is the easy part. It was never the gate.
You can't prompt your way out of understanding what to build. It's so much harder than you think it is.
You also can't prompt your way out of the hassle of running a biz critical system, dealing with outages, supporting users, etc.
[0] After taking a welding class, you'll instantly understand why it's a trade. Making consistent, quality welds is not easy.
Uh, I'm not guessing what it will be like, I'm doing it: both reflecting on a better past, when organizations did much more of their computing in house, and advocating that modern organizations use the new tools at our disposal to return to building our own tools.
Maybe a way of facilitating "releases" with compiled binary assets (built locally and uploaded).
Forks can be handled by people cloning the repository and uploading a new project.
Part of the reason for not wanting bells and whistles is for the service to have less chance of dying under a heavy load.
It can be done incredibly easily: simply have a branch per review with a known prefix (although these will rapidly clog up the default branch namespace), implement it via git namespaces to keep reviews distinct from the main namespace, or maybe just use a special branch, e.g. ".reviews", that contains the commit IDs for the tip of each review branch.
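A minimal sketch of the known-prefix idea with plain git refs (the repo name and review number here are hypothetical):

```shell
set -e
# create a throwaway repo with one commit
git init -q review-demo && cd review-demo
git -c user.email=a@b.c -c user.name=demo commit -q --allow-empty -m init
# store the review tip under a dedicated ref prefix instead of refs/heads
git update-ref refs/reviews/42/head "$(git rev-parse HEAD)"
git for-each-ref refs/reviews   # the review tip lives here...
git branch --list               # ...and does not clutter the branch list
```

Any git client can fetch or push such refs explicitly, which is roughly how Gerrit's refs/changes/* works under the hood.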
It just needs someone who's invested enough to specify it and make a viable implementation, after which people might start adopting it. I guess the reason github and the various forges didn't take this approach is that keeping the review metadata within their ecosystem is what gives their platform value. If anyone could use any local tool they like for reviewing other people's code, there wouldn't be as much vendor stickiness.
EDIT: actually, I guess there are other reasons why you might want your review metadata in a different repository, such as access control and/or cross-repo reviews.
Also, as far as read-only access, Gerrit review data is actually accessible via Git[7] (for review ABCDE, pull refs/changes/DE/ABCDE/meta instead of one of the usual numbered refs under that prefix), and someone made the effort[8] to make it accessible via Git notes too (as mentioned in the post on Git notes that I linked above).
Also also, the Fossil SCM of SQLite fame somewhat famously does[9] do this kind of thing with its builtin bug tracker. It has been relegated to obscurity partly as an accident of history (Git won) and partly on the merits (it is aggressively hostile to the kind of history rewriting we are used to routinely—if not always wisely—performing in Git).
Going back to working on top of Git, though, I think that part of the problem is that you really want custom merge strategies when you’re trying to build a fancy datatype, and Git’s support for them requires a lot of wrapping to make it seamless (the location tracking stuff in git-annex[10] is the only success story I am aware of, and that’s a sizeable Haskell project). The existing porcelain is just too rigid.
[1] Can I have a viable replacement for PGP for that use case? Please stop telling me that I don’t exist and should screw off[2]? Please?..
[2] https://news.ycombinator.com/item?id=44239804
[3] https://github.com/aaiyer/bugseverywhere
[4] https://github.com/google/git-appraise
[5] https://tylercipriani.com/blog/2022/11/19/git-notes-gits-coo..., https://news.ycombinator.com/item?id=44345334 (579 points, 146 comments)
[6] https://github.com/git-bug/git-bug
[7] https://gerrit-review.googlesource.com/Documentation/note-db...
[8] https://gerrit.googlesource.com/plugins/reviewnotes/+/refs/h...
[9] https://fossil-scm.org/home/doc/trunk/www/bugtheory.wiki
What part of Gerrit is so different? Stacked PRs work fine, right (not in GitHub, but as a concept)?
By which I mean the discussion doesn't get broken between changes, and it makes it far more trivial to iterate on things in the review without breaking the discussion. And for the reviewer to see what's changed between revisions at the specific comment point they're looking at. And then have a nice clean commit at the end instead of some dog's breakfast of a merge commit with revision commits shoved into it.
But more importantly, Gerrit adds a "Change-Id" to the git commit which is independent of the git hash, so it can track the "logical" change separately from the physical commit.
Because of this, it is also able to show you what changed for a given commented section between those commits, allowing you to properly review the changes.
GitHub just acts like a dumbass and throws away comments or threads if the original commit is rewritten. And so forces you into this garbage workflow of endless "commit to address review comments" or "new version" commits, which then have to be either manually rewritten before merge, turned into a (garbage) merge commit, or squashed down into one commit.
1. Gerrit's approach requires a stable Change-Id in the commits, so it doesn't just work out of the box with stock git. It requires that the submitter's git configuration and the repository be set up to support this. (Note that JJ supports this out of the box.)
2. Cargo cult. We have a whole generation of software developers who grew up with GitHub, love it, and have never known anything else. The "PR" approach is considered orthodox. Unless they went and worked at a Google or somewhere like that, they've probably never been exposed to alternative processes.
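On point 1: the commit-msg hook that Gerrit setups install appends a trailer like this to every commit message, and that trailer is what survives rebases and amends (the subject line and hash here are illustrative):

```
Parser: fix null deref on empty input

Change-Id: I7a1f9e4c2b8d6a0e5f3c1d9b7a2e8f4c6d0b3a1e
```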
I personally think Gerrit works much better than whatever GitHub et al. have for code reviews. As for CI, I would try to keep that out of it as much as possible; just hooks to start a pipeline and to display the result and decide whether to allow a merge or not.
1. Code review
2. Source browsing
3. Ticket tracking
4. CI
It does a mediocre job at all 4. But it does a good job of integrating them all together.
So I agree Gerrit is the superior code-review model, but without the other 3 pieces you don't have a product. Even when I was at Google and working every day in Gerrit, I was dissatisfied with the poor integration between code search, code review, and CI.
Google3/Critique/Forge/etc -- Google's internal tooling -- did a much better job of tying that all together.
Reminded me of one benefit of the email-based workflow.
If I start reading email, that's usually because I'm in the right mood to do so. In such a mood, I'll be more focused, because I expect nothing else to interrupt my work.
My problem with notifications is that there's a pull towards clearing them as they show up. But there's no guarantee I have the right energy at that moment.
Also, I've found that most notification systems on the web are poor mimics of what email clients already achieved decades ago. Maybe the old folks really got it right in using email.
Or, just like what @atrus said, don't read emails if you have other, higher priorities.
In the mood for dealing with email, open client.
The PR review process is flawed; it adds something, but maybe not what it intends.
It's just not a great discussion platform, while also putting the discussion front and center as the default tab in the PR view.
But just a few inches earlier, the author stated:
> Everything tools always turn into crap.
This seems like a contradiction to me.
My approach is to use https://pre-commit.com/ to have all checks available to run locally on commit (or push), but leave it to contributors whether they want to run them or not. If they don't, the checks still run on the forge after pushing. The upside of this approach is that it still allows contributors to commit without internet access, or with the forge being down.
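A minimal .pre-commit-config.yaml in that spirit (this is just the standard starter example from the pre-commit docs; swap in your project's real checks):

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
```

Contributors who opt in run `pre-commit install` once; everyone else still gets the same checks from the forge's CI.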
> 3. PRs are too inflexible. I don't need 4 eyes on every change, especially in a universe where LLMs exist. The global GDP lost annually to senior engineers staring at a four-line PR waiting for someone — anyone — to type 'LGTM' could fund a moon mission.
Well, that's possible with Github and is just a matter of how permissions to merge PRs are configured. Just let every contributor merge changes without explicit approval. And if you want LLM approval, make that a Github Action whose success is mandatory for merging.
> 4. Stacked PRs are just better. […]
Seems like Github is working on this: https://github.github.com/gh-stack/
> 8. On the flip side, since I need to be online all the time to really work with a team […]
Sure, for communication you need internet access, but working on code can be much more efficient if you can do so without relying on internet access and the forge being available.
I'd even argue working on issues and reviewing PRs should be available entirely offline too with just the state getting synced whenever internet connectivity to the forge is available.
That works fine for some things, but it doesn't work for building and testing on other platforms. For example, if I am running on linux, pre-commit won't be able to check that my changes also work on Mac and Windows.
How would a pre-commit hook help? Would the developer not be crying and working late if their work was rejected by the pre-commit hook instead of the PR? Also, if the tests are so fast they wouldn’t block the terminal running `git commit` for more than a minute or two, you can just run the tests on the local machine, and you should be running them as part of your workflow.
> PRs are too inflexible. I don't need 4 eyes on every change, especially in a universe where LLMs exist. The global GDP lost annually to senior engineers staring at a four-line PR waiting for someone — anyone — to type 'LGTM' could fund a moon mission. A nice one. With legroom. Let me customize and more easily control this. If the person is a maintainer and the LLM says it's low risk/no risk, just let them go.
You can do this with the existing forges: you can give trusted people the right to bypass the rules. Or you could build your own small PR auto-approval bot, which hands the diff to an LLM, and if the LLM approves, the bot approves the PR on the forge.
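A sketch of the bot's shape (the `low_risk` function here is a naive keyword stub standing in for the LLM call, and the real approval step via the `gh` CLI is left as a comment, since it needs a live forge):

```shell
# stand-in for "ask the LLM whether this diff is low risk"
low_risk() { ! grep -qiE 'secret|password|auth' "$1"; }

printf 'docs: fix typo in README\n' > diff.txt
if low_risk diff.txt; then
  echo "approve"   # real bot: gh pr review "$PR" --approve --body "LLM: low risk"
else
  echo "needs human review"
fi
```

The interesting part is all in the policy, not the plumbing: `gh pr diff` and `gh pr review --approve` cover the forge side already.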
aren't you describing why you'd want a pre-commit for this? you do not have to remember to do so, and new people do not need to learn it.
The Linux kernel is not hosted on GitHub and uses cgit. Others use GitLab or Gitea, and there is also Forgejo (which Codeberg uses) that people use and can self-host.
This is why everyone is now realising that "centralising everything to GitHub" [0] was a terrible idea, and GitHub has (unsurprisingly) been run into the ground.
GitLab and Azure are a daily source of pain for us.
I'm sure if I used it more often that I would figure it out, but it's deeply off-putting for someone who only uses it twice a year or so.
UI is constantly inconsistent. You have to keep reloading the page to hope to see what’s up with your MR. Doesn’t help it’s super slow to load.
The backend infra is super unreliable, with actions failing to start, merge trains being stuck, their webhooks being overloaded.
I created a little Github Issues replacement for myself that puts the issues within the repo so that the work and the todos stay in sync. https://github.com/steviee/git-issues
And I bet there's numerous other projects like that.
Hope you get your submarine, man! ;)
The backend could be git on an SSH server, but if you have a slick design and 100% operational uptime, you'll capture 80% of the developers currently using Github.
Github, outages and all, is a known commodity with a reputation they can make money off of.
Any replacement would have to offer not just compelling feature improvements and uptime, but to have a history long enough to make it trustworthy to migrate to.
And uptime? Easy when you have a small number of customers. Much, much more difficult at a million+ customers (although Microsoft has really dropped the ball).
Seems like there are lots of answers: pre-commits, rebase squashes, merge squash...
git commit --no-verify
git commit --amend --no-edit
Feedback + commit is a loop. I often reply to comments w/ the commit sha that resolves them.

There is a fundamental contradiction here.
To make a replacement, not only do you have to improve on support for every major use case at a technical level (no easy task, to put it mildly), you also have to make it so compelling to use that Github users will abandon Github en masse.
Someone with an LLM assisted IDE has the theoretical potential to improve on all major Github features. But to make their replacement compelling enough to get folks to leave Github? Not a chance.
It's also very hard to appease everybody; from visual design to operational smoothness.
I'd like to add another idea: automatic PR merge contingent on another PR getting merged.
Why don't you see how far you get in a weekend with Claude?
From unregistered domain to website in hours.
I should have registered it myself.
If you want a certain app with a feature and the app isn't open source, then you may as well just clone the app and add the feature.
Claude Code and Codex (and other tools) have computer use and are perfectly capable of navigating, experimenting, cloning functionality, writing tests...
If the app is open source it's probably easier to just fork and add your features though. And cheaper.
Truly magical, it would have taken me months.
If no post is planned, please consider one - that's very "an app is a home-cooked meal", and I love it.
It’s mostly your original story of motivation, in brief prose, that does the heavy lifting of a satisfying post,
followed by exactly what spec and names of tools you used, mundane as they may feel,
your exact prompt(s) (because this is of technical interest in and of itself),
and screenshots of excerpts/link to output.
Things that stood out to you along the way would also stand out to others.
The comment alone will probably be the most intriguing one I read all day.
I agree with the author so much. Every git forge looks like Temu Github. It's boring and lame.
At the top of this year I mocked up a hypothetical forge I'd use[1]. I then found the domain eol.sh was available so I snagged it. I'm currently using cgit for my personal public repos but I cannot wait to work on EOL. I'm gonna steal OP's ideas too, they sound good.
---
Stuff happens in the wrong order. You know the PR. Commit 1: 'Feature.' Commit 2: 'fix.' Commit 3: 'fix.' Commit 4: 'actually fix.' Commit 5: 'please.' Commit 6, made at 11:47 PM on a Thursday: 'asdfasdf'. This person has a family. This person has hobbies. This person is, at this moment, crying. You don't want the feedback loop after the commit; you want it before. Let me do an enforced pre-commit hook to run the jobs remotely on the forge and provide the feedback to the user before they push.
Isn't this already totally possible? Or am I thinking of Subversion?

I think the implication is that a user doesn't host the CI locally. They are suggesting that there should be a configuration to call an API to submit the code changes for some partial/total CI check. This is only beneficial for orgs/individuals which somehow rank dev effectiveness based on how messy a branch's PR history is and how many times they have submitted code that passes/fails a build. Maybe due to build cost, maybe due to ego.
I understand what they are asking for, but it feels like misusing git or based on some org process rather than normal development flow. I don't understand the point.
Many of us were annoyed already when Microslop, 'xcuse me, Microsoft assimilated GitHub. But we have to be realistic - the alternatives often sucked. Sourceforge? I find creating issues there annoying to no end. I can use GitLab, which is a bit better than Sourceforge, but I also hate creating issues there. I recently saw that Codeberg appears to have updated its UI (I think?), but I also find it quite annoying.
What GitHub got right was, initially, the UI, and also a focus on the folks using GitHub, e.g. making things easy or easier for them. They did not get everything right though - I find the wiki support awful. I rarely use the wikis because they are so bad.
I think the really big problem is that there are commercial interests, aka private interests. Microsoft is just one example here; it is a problem literally everywhere on similar sites. In the past I pointed at the example of discussions in issues with regard to the xz-utils backdoor - the day after I participated in those discussions, Microsoft took it all down; though it also does not matter whether it was Microsoft or the repository owner. The problem is that individuals can too easily censor potentially useful information. The issue discussions WERE useful, and they were censored. If I remember correctly, the information from back then was never fully reinstated. Perhaps people mirrored it, but I did not see a link.

The point is that I think this shows that top-down control can be really detrimental. And let's be honest: how many of you trust Microsoft? We kind of need something that is decentralized, works reliably and well, and also has a good UI by default and a simple (or at least a good) workflow. And we also need to avoid the situation where private actors can hold everyone else hostage. I have absolutely no idea how to solve the above; perhaps it requires different approaches at the same time.
The www kind of changed and I feel that private interests - aka huge mega-mega corporations in particular - made things a lot worse in the last 10-15 years here. That needs to change.
Mega corps can fund that, but even large numbers of devs on small budgets don't have the money to do the same.
So any commercial project will inevitably trend towards supporting the interests of mega corps over the average person.
You have to 'push' the code to the forge to run it. This code is a 'branch' of the version that is in the repo.
> The PR is approved or it's not approved
The code is either merged or it's not. Sure you can trivially add a snooze feature...
> I don't need 4 eyes on every change, especially in a universe where LLMs exist.
Huh, I do. Anyone thinking LLMs replace human review, when LLMs are already replacing the coders, is just vibe-coding, not building a reliable library.
> Stacked PRs are just better.
I have no idea what this really means, honestly. You can stack multiple commits in a single PR. You can create PRs based on other PRs.
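For what it's worth, the usual meaning is a chain of small PRs, each based on the previous one. With plain git that's just branches built on branches (repo and branch names here are hypothetical):

```shell
set -e
git init -q stack-demo && cd stack-demo
# tiny helper so the demo commits work without global git config
ci() { git -c user.email=a@b.c -c user.name=demo commit -q --allow-empty -m "$1"; }
ci "base"
git checkout -qb part-1 && ci "part 1: refactor"
git checkout -qb part-2 && ci "part 2: feature on top"
# on a forge: open PR part-1 -> main, then part-2 -> part-1;
# when part-1 merges, retarget or rebase part-2
git log --oneline part-2
```

The complaint about GitHub is that reviewing and re-targeting such chains there is manual and fragile, whereas Gerrit-style tools treat each commit in the chain as its own reviewable change.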
> A forge shouldn't do everything. Issue tracking yes. Kanban board, probably not.
The board has to live in sync with the issues or it's not a board.
As a technology base to fork from, probably not ideal. But its flows are something to learn from.
The PR process in GitHub has always been garbage, and its cargo-cult adoption by the whole industry is sad. But also unnecessary: there were always alternatives. GH's refusal to do proper multi-round review, and its tendency to encourage giant messy merge commits with no ability to track discussions between changes, is an organizational nightmare, and now with LLMs it's even more terrifying.
Every company I've worked at since I left Google has had this problem with giant "take it or leave it" submissions. Dozens of commits in one "review". No ability to properly track changes between revisions. A mess of commits that all land at once.
I don't see how one can build a serious software team structure over top of this. It's a mess. And GitLab only makes it slightly better.