I host my own public Git repositories, but statically: read-only, no HTML views. I don't want to run any specialized server-side code, whether dynamic or preprocessed, as that's a security vector and a system-administration headache I'd rather avoid. You can host a Git repository as a set of static files using any web server, without any special configuration. Just clone a bare repo into an existing visible directory. `git update-server-info` generates the index files needed for `git clone https://...` to work transparently. I add a post-receive hook to my read-write Git repositories that does `cd /path/to/mirror.git && git fetch && git --bare update-server-info` to keep the public repo updated.
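A minimal sketch of that setup (all paths and the domain are hypothetical; I'm using `git clone --mirror` here so that a plain `git fetch` updates every ref):

```sh
# 1. publish a bare mirror under the web root
git clone --mirror /home/me/project.git /var/www/htdocs/project.git
cd /var/www/htdocs/project.git && git update-server-info
# clients can now: git clone https://example.com/project.git

# 2. a post-receive hook in the read-write repo keeps the mirror fresh
cat > /home/me/project.git/hooks/post-receive <<'EOF'
#!/bin/sh
cd /var/www/htdocs/project.git && git fetch && git --bare update-server-info
EOF
chmod +x /home/me/project.git/hooks/post-receive
```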
In theory something like gitweb could be implemented purely client-side, using JavaScript or WASM to fetch the Git indices and packs on-demand and generate the HTML views. Some day I'd like to give that a try if someone doesn't beat me to it. You could even serve it as index.html from the Git repository directory, so the browser app and the Git clone URL are identical.
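The "dumb" HTTP protocol underneath is nothing but static files, so a browser app could fetch exactly what a dumb clone fetches (example.com is a placeholder):

```sh
curl https://example.com/project.git/info/refs           # branch and tag tips
curl https://example.com/project.git/objects/info/packs  # available packfiles
```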
There are also various optimizations on the Git side, like bitmaps [1] and commit-graphs [2]. If this is a bare repo on the server side, it shouldn't be a problem to make sure it's in a particular format with receive hooks (a sketch follows the links).
That's just displaying a file listing though. Displaying what GitHub displays with the last change of each file is more complex, maybe the commit graph could be used so the client wouldn't have to fetch everything itself.
[1]: https://git-scm.com/docs/bitmap-format
[2]: https://git-scm.com/docs/commit-graph
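A guess at what that hook could run, assuming a bare server-side repo that's already being refreshed like the mirror above:

```sh
# possible maintenance step for the bare repo after each update
git repack -a -d --write-bitmap-index   # reachability bitmaps [1]
git commit-graph write --reachable      # commit-graph file [2]
git update-server-info                  # refresh the static-clone indices
```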
However, I never got around to finishing it, mainly because I couldn't decide where to stop: should I also generate pages for commits on all non-master branches, etc.?
I flirted with the idea of a browser-side repo viewer too, but re-implementing git packfile parsing in js didn't seem like something I'd want to spend my time on, so I moved on. Glad to see others pondering the same thing though.
That said, JavaScript libgit2 is a thing [1], so doing it "properly" in a client app is totally possible.
[1]: https://github.com/libgit2/libgit2/issues/4376 | https://github.com/petersalomonsen/wasm-git
Hmmm, I might give it a go one day.
Of course, there’s also the argument that if you’re self-hosting a repo, you’re more likely to care about users that have disabled JS (which is a good idea, tbh).
For a repo with thousands of files, it might make sense to limit entry pages to only directories and .md files (and other prose files), which are more likely to be linked to. You can also skip shipping a Markdown renderer this way!
So, if you want to serve it, just `fossil server file.fossil` or serve a whole directory of them. Or, if you want, you can just `fossil ui` and muck around yourself, locally. The server supports SSH and HTTPS interactions for cloning and pushing.
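Roughly like this (port and paths are examples):

```sh
fossil server foo.fossil --port 8080      # serve a single repo
fossil server /path/to/repos --repolist   # serve a directory of *.fossil files
fossil ui foo.fossil                      # local-only browsing
```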
All by the same people who make SQLite, Pikchr, and more.
It really depends on what you're optimizing for.
- The repo normally lives outside of the worktree, so remembering to 'fossil new foo.fossil && mkdir foo && cd foo && fossil open ../foo.fossil' took some getting used to. Easy enough to throw into some 'fossil-bootstrap' script in my ~/.local/bin (see the sketch after this list) to never have to remember again.
- For published repos, I've gotten in the habit of creating them directly on my webserver and then pulling 'em down with 'fossil clone https://${FOSSIL_USER}@fsl.yellowapple.us/foo'
- The "Fossil way" is to automatically push and pull ("auto-sync") whenever you commit. It feels scary coming from Git, but now that I'm used to it I find it nice that I don't have to remember to separately push things; I just 'fossil ci -m "some message"' and it's automatically pushed. I don't even need to explicitly stage modified files (only newly-created ones), because...
- Fossil automatically stages changed files for the next commit - which is a nice time-saver in 99% of cases where I do want to commit all of my changes, but is a slight inconvenience for the 1% of cases where I want to split the changes into separate commits. Easy enough to do, though, via e.g. 'fossil ci -m "first change" foo.txt bar.txt && fossil ci -m "everything else"'.
- 'fossil status' doesn't default to showing untracked files like 'git status' does; 'fossil status --differ' is a closer equivalent.
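The bootstrap script from the first bullet might look like this (name and location are of course up to you):

```sh
#!/bin/sh
# hypothetical ~/.local/bin/fossil-bootstrap
set -e
name="${1:?usage: fossil-bootstrap NAME}"
fossil new "$name.fossil"
mkdir "$name"
cd "$name"
fossil open "../$name.fossil"
```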
That'd be a deal breaker for me. Git's staging area is such a breath of fresh air compared to the old way (which Fossil still does) that it was one of the biggest reasons for me to switch to Git in the first place. It's completely freeing not to have to worry about things being accidentally added to commits that I didn't want them in.
There is a guide written for Git users:
Yeah, indeed. I have written but not submitted patches to a project (OpenSBI) because it made the submission process super complicated, requiring signing up to a mailing list and learning how to set up git send-email.
I don't see how he can think creating a GitHub account (which almost everyone already has) is a big barrier when he freely admits his process is incomprehensible.
I don't buy the GitHub locks you in either. It's pretty easy to copy issues and releases elsewhere if it really comes to it. Or use Gitlab or Codeberg if you must.
https://docs.codeberg.org/advanced/migrating-repos/
Any of those are far better than random mailing lists, bugzilla, and emailed patch files.
PuTTY is great, but please don't listen to this.
Worked easily enough for me; I could see myself using it for small patches here and there.
I did end up installing Forgejo in my homelab after all, but back then it sounded like federation was much closer than it actually was. I did kind of expect that, though; federation gets complex quickly.
Every time I log into Forgejo, I do see that juicy "proceed with OpenID" button, and I've looked into running my own OpenID provider a few times; sadly, I'm not seeing anything that would work for me yet. Honestly, I can't believe we went from "Facebook sign-in" to "Google sign-in" and are now going to "GitHub sign-in" without a single open standard that's gotten some adoption.
A traditionally configured mailing list allows posts from non-subscribers.
All the mailing lists I operate are like this.
If you have good anti-spam-fu, you can get away with it. Plus, it's possible to have posts from non-members be held for moderation, and you can whitelist non-members who write legitimate posts.
Projects which require people to sign up to their mailing lists to participate are erecting a barrier that costs them users; it's a stupid thing to do, and unnecessary.
Whenever I have to interact with some mailing list, I begin by just sending my query to the list address. If it bounces due to nonmembership, I usually move on, unless it's some important matter.
By the way, some modern lists allow posts from non-members but then rewrite the headers in such a way that the nonmember does not receive replies! This happens in more than one way, I think. One of them is Reply-To Munging: the list robot sets the Reply-To: header redirecting replies to be directed to the list address. The Reply-To throws away the content of the original To and Cc fields.
When this happens to me, I usually refrain from further interaction with the list. I check for replies in their archive. If I'm satisfied with what they said, that's the end of it.
Which one of the 4 preferred processes, not including the maligned git send-email, and infinite other accepted processes?
All these complaints and critiques sound like so much baby crying over nothing to me.
I'm sure I can figure out the archaic git email system, how hard can it possibly be? Same with the git bundle thing, this is the first time I've read about it but it seems usable. I don't expect anyone I'll ever directly work with to know what the hell a bundle file is but if a project wants their git commits in that format, it shouldn't be that much of a problem. The biggest hurdle will probably be spam filters, but that's an email problem and not necessarily a git problem.
Of course, the downside to all this funky command-line stuff is that you're applying a filter to the people who will ever contribute code. Plenty of people don't want to figure out git's many weird command-line flows and communication options. Plenty of developers don't care enough to learn about git beyond push/pull/rebase/merge. If you're only interested in the turbo nerds who enjoy using the many tricks git has to offer, you'll probably filter out most contributors; but realistically, how many contributors does a hobby project on a personal git server ever attract in the first place?
He's not writing about how you should run your project for his convenience.
No, but we can discuss and criticize policies all the same, regardless of how the original creator feels about such discourse.
I think his (Simon's) objection to git send-email emails could be addressed with better tooling, or better use of them. It's 'just' a matter of exporting the emails into a single mailbox file, right? (I'm not experienced with git's email-driven features.)
It seems to work smoothly for the Linux kernel folks; clearly it doesn't have to be clunky.
> because git format-patch doesn’t mention what commit the patches do apply against, I’m more likely to encounter a conflict in the first place when trying to apply them.
This is what the --base flag is for. [2] (A sketch follows the links.)
[0] https://git-send-email.io/
[2] https://git-scm.com/docs/git-format-patch#Documentation/git-...
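A minimal sketch of that flag in action; the branch and directory names are just examples:

```sh
git format-patch --base=origin/master -o outgoing/ origin/master..topic
# each generated patch now carries a "base-commit: <sha>" trailer, telling
# the receiver exactly which commit the series applies on top of
# (--base=auto works too when the branch has an upstream configured)
```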
As someone who works with a git-send-email workflow every day, I can tell you, it sucks. Email is not a good substrate for development.
If I were Linus I would be pestering the Linux Foundation to set up a bunch of managed Gerrit instances or something.
An argument could be made that developers shouldn't have to build their own workflow to do basic development tasks, but given Linus's past statements on C++ developers, I would assume that he would rather not have developers that cannot handle an email workflow working on Linux (that is, this is WAI).
sourcehut has some GUI around it (that I never actually used). I heard that there is some local terminal thing around the e-mail git flow...?
One thing I like - in theory - is how decentralized/federated it all is. E-mail is the original decentralization/federation! But I never had to actually use it.
There is Patchwork, which can help with managing the review workload and can also provide some CI feedback; some subsystems use that with some success, others don't. It's not really something an individual can adopt, so if you're working in an area that doesn't use it, you're out of luck.
There is also Patchew which I've never tried.
But overall everyone just has their own individual pile of shell scripts and mail client macros.
Why would you expect there to be a standardised set of tools used by the largest distributed project in the world? Do you think that this would be possible to enforce globally in a way that makes everyone happier to contribute?
You mentioned two tools that are used by some subsystems. b4[1] is another one (quick example after the links), and more are listed here[2]. So there _is_ tooling around it that works for many people. It's just not your preferred choice of tooling, which is... fine.
The fact that email is the lowest common denominator seems like a good thing to me. It allows everyone to use their tools of choice. As long as you can send and receive email, you can contribute. How you decide to integrate that into your development process is up to you. If you can't be bothered to set up a workflow from scratch, then you can adopt someone else's. I'd much rather have this choice, than be forced to use Gerrit or GitHub or whatever else.
[1]: https://b4.docs.kernel.org/
[2]: https://www.kernel.org/doc/html/v6.14-rc4/dev-tools/index.ht...
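For a taste of b4, the usual flow is something like this (the message-id below is made up):

```sh
b4 am 20240101-topic-v1-0-deadbeef@example.com  # fetch the series from lore.kernel.org
git am ./v1_*.mbx                               # apply the mbox that b4 wrote out
```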
> As long as you can send and receive email, you can contribute.
Sending and receiving email has so many barriers! The Linux Foundation literally has to manage a mail server that people who can't get access to a working mail setup can use! Saying that email is a sensible lowest common denominator is crazy. The reality of it is that the kernel community is majorly dependent on GMail and GMail isn't even a good mail service for the job!
Using a git forge is dramatically easier to use and easier to set up, host and maintain.
However, AFAIK there isn't a forge today that can actually meet the kernel's needs. Switching to one would be a significant project. (But if the core maintainers wanted it, it would be very feasible.)
I'm not familiar with kernel development, but after you pull the patches locally, can't you simply review the commits via `git diff` or with whatever viewer you use? This is how I often review code when using GitHub. I only use the GH interface for sending my comments, which is what email in the kernel workflow is for.
The only thing a web-based tool is helpful for is for grouping discussions, and being able to reference or find them later. This might be more tedious with typical web-based email UIs, but most offer some kind of threading and search support, so it can't be that bad.
> Sending and receiving email has so many barriers!
Email has many problems, but I don't see barriers to using it as one of them. There are literally thousands of providers to choose from that could be suitable for sending and receiving patches.
> The Linux Foundation literally has to manage a mail server that people who can't get access to a working mail setup can use!
linux.dev is not managed by the Linux Foundation but by Migadu. It's only offered as a convenience service for people on corporate networks who have no control over their email. They could just as well choose to use another provider.
> Using a git forge is dramatically easier to use and easier to set up, host and maintain.
You contradict this right in your next sentence. You're right that maintaining a centralized system at this scale would be a daunting task. Email being decentralized avoids this problem altogether.
The thing is that a "forge" provides several loosely-related services.
Sharing code is a basic one that Git already does quite well over HTTPS and SSH. What I don't understand is why the patches simply aren't shared this way instead of using email as the medium. The problems outlined in the original article are very real. Kernel development could follow a similar suggested model where URLs to pull from are shared instead, while keeping email strictly for reviews and discussions. This way everyone would be free to choose where they want to host their fork, and maintainers can simply pull from it. But I digress...
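Git even ships a helper for exactly this "share a URL to pull from" model; a hedged example with made-up names:

```sh
git request-pull v1.0 https://example.com/myfork.git my-feature
# prints a summary, a diffstat, and the URL/branch to pull from,
# ready to paste into a plain-text email
```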
The other things forges are used for are code reviews, CI, bug tracking, planning, etc. It's debatable how helpful any of these services are, and many developers would have their own preference. You might like Gerrit, but that's not a universal opinion. And I think most developers would agree that code reviewing in GitHub or Gitlab is painful. If it was up to me, I would choose a tool like git-appraise instead, but I'm stuck with GitHub because that's what the teams I've worked on have preferred.
So my point is that email is likely not the best choice for this, but it's a reasonable one that's flexible enough to be usable by anyone. Forcing any other tool on everyone would likely displease an equal or greater number of contributors, while also introducing maintenance and reliability problems.
No, it really, really wouldn't. The Linux workflow is optimized for the preferences of (a subset of) maintainers, notably Linus; it is not optimized for contributors.
If you want to deliberately exclude new developers (which is a stupid thing to want), you can do that. You don't have to additionally shoot yourself in the foot by sticking to a broken workflow as an instrumental goal.
Git was created in 2005, and GitHub was created in 2008. So we had 3 years of email-driven Git and 17 years of GitHub-style development.
People won’t use it if it ain’t on the web.
If you force them to use it anyway, very few people will use it.
DdV’s advocacy stems from the fact that he is a lone-wolf dev, building tooling for other lone-wolf devs. The social and collaborative features sucking is a feature, not a bug.
It falls flat on its face for larger projects and communities.
And that's perfectly fine. Not all things are for all people, and nor should they even try to be.
PRs are public and email patches are receiver's inbox only, right?
When I'm assessing the merit of any given project, I'd want to see if there's a backlog of PRs, right?
Do projects using email-patch workflows set up listservs for receiving patches?
Not necessarily. For a 'lone wolf' project that might be the case, but some projects have public mailing-list archives, e.g. https://lists.gnu.org/archive/html/lightning/
I think it would be necessary to do some manual copy+pasting, or else use custom scripts/tooling, to turn such mailing-list archive pages into patch files (for git apply) or mbox files (for git am).
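A very rough sketch of what that tooling might look like; the URL is hypothetical, though Pipermail-style archives usually do offer gzipped monthly mbox downloads:

```sh
curl -O https://lists.example.org/archive/somelist/2024-June.txt.gz
gunzip 2024-June.txt.gz
git am 2024-June.txt   # fine if the month's messages are all patches;
                       # otherwise prune non-patch mails from the mbox first
```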
Exactly. I used to have a GitHub account but as soon as it got bought out by Microsoft, I was gone.
I still refuse to create an account, even though there have been bugs I wanted to report or patches I wanted to contribute. Maybe some maintainers still have email addresses on their profile, many don't. Even if they do, I just don't get the motivation to email them.
People like to complain about email a lot, but I enjoy different mailing lists for open source software. You could have discussions with other users of that software, or keep track of development by following the "-devel" list. All you needed was something you already had: email. Sadly, they're becoming less popular. Even Python moved to Discourse, which (dun dun dun) requires an account. grumble grumble
I like SourceHut for many reasons: it's the fastest forge I've used, it's FOSS, and it doesn't try to copy the GitHub UI like every other Git forge these days. But by far the biggest reason I love it is _because_ it doesn't require creating an account to contribute. I think of it as gitweb, but nicer.
Requiring an account just locks you in, when the alternative exists (or existed before). SourceHut proves this is possible. Why not let people without accounts contribute?
I encourage you to try an experiment: pick three or four (or more) times a day to log out of your HN account, and only log back in the next time you need to perform some action that requires an account/authorization. Now do the same with GitHub and compare the experience. They've made merely logging in such a massive pain in the ass that it somehow goes beyond the anticipated pain of "here's a forced 2FA workflow you didn't ask for but have to run through anyway". All so you can be generous with your time to someone else's benefit, e.g. leave a signpost comment with answers to a shared problem in some neglected bug tracker; it's a real kicker when this interrupts a semi-flow state.
I don't agree, in my opinion it's easier than logging into HN because Github has passwordless auth with passkeys.
I don't even have to enter a username; I just click "Sign in with a passkey", use my passkey, and then I'm logged in. No "forced 2FA workflow".
> no "forced 2FA workflow"
What does "2FA" stand for?
> it's easier than logging into HN
You have your thumb on the scale (which seems to happen every time someone criticizes GitHub). You have already indicated a willingness/desire to use an authenticator. At that point, there is literally nothing stopping the authenticator from providing the exact same user experience, where instead of releasing your "passkey", it provides your password to HN's login form. And oh wait that's exactly how scores of password managers work, including the ones that are built in to every mainstream browser. (If you're somehow using one that for whatever reason doesn't do that, then it's self-inflicted, which is exactly opposite to the case of the forced 2FA flow that GitHub imposes.)
This is without even mentioning that you have to set all this up.
Two-factor authentication; I'm sure you can Google it.
> The passkeys that you (and GitHub) are talking about require a separate authenticator to use.
I'm not using anything other than my browser.
> You have already indicated a willingness and desire to use an authenticator
I'm only using my browser, it was 1 click.
> This is all before we even mention that you have to set all this up.
Yeah, once (which doesn't take longer than 15-20s), just like registering on HN.
Also as I stated it's my opinion, having a different opinion doesn't make me dishonest
The question was rhetorical, they are showing how a passkey is also a form of 2FA.
Your passkey could have 2FA locally (e.g., a Yubikey with a PIN), but that is up to your discretion. It may be single factor.
The passkey alone is not sufficient to log in. You must also provide a successful response to the WebAuthn challenge from an authenticator that has been registered/configured with that passkey.
> That's kinda the point, to reduce user toil.
It's almost as if letting people elect to enter their secure, never-written-down-anywhere-else passphrase would accomplish that.
Great. Now go ahead and try to argue the indefensible position that relying on an authenticator to supply a passkey is somehow not a form of two-factor auth.
> I'm not using anything other than my browser.
... as your authenticator. The fact that you're using your browser and its built-in support for this as your authenticator but are using the term "browser" when you're talking about it instead of the word "authenticator" (GitHub's term—here's their documentation about authenticators, which I'm sure you could have Googled: <https://docs.github.com/en/authentication/authenticating-wit...>) doesn't change its role.
> (which doesn't take longer than 15-20s)
Aside from the fact that the ~5 seconds that it takes to create an HN account is not even the same as the 15–20 second estimate that you're offering here, there's the minor problem that that estimate is bogus.
You are simply not being honest in your reckoning of the respective costs. Here's GitHub's own documentation for the process of adding a passkey to your account:
<https://docs.github.com/en/authentication/authenticating-wit...>
(I'm sure you could have Googled it.)
> as I stated it's my opinion, having a different opinion doesn't make me dishonest
Stating your opinion doesn't make you dishonest, but arguing about things that are matters of fact and not opinions—measurable, quantitative things—and doing it with bad quantities chosen in a dishonest way is, in fact, dishonest.
Here's the Wikipedia article about intellectual dishonesty:
<https://en.wikipedia.org/wiki/Intellectual_dishonesty>
I'm sure you could have Googled it.
Now, would HN be better without accounts? I believe it would; why not? I like lurking (and sometimes commenting) on HN, though, so I feel like creating an account is valid. Also, HN works fine without JS and has no trackers, which does tend to get me to create an account.
That’s not correct. You can write the email to an mbox file (your MUA lets you do that, right?) and then use `git am` to pull it all into git.
> Why I don’t like it: because the patch series is split into multiple emails, they arrive in my inbox in a random order, and then I have to save them one by one to files, and manually sort those files back into the right order by their subject lines.
The patch series arrives threaded, so your MUA should be able to display them in order. Export them to an mbox file, and then use `git am` again (sketch below).
There might be ways for someone to send you email such that the patches arrive broken and `git am` won't work, of course. I take no issue with that part of the argument.
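A minimal sketch of that flow, assuming the MUA exported the thread to series.mbox:

```sh
git checkout -b review-series
git am --3way series.mbox   # splits the mbox and applies each patch in order
# if one patch fails: fix it up and `git am --continue`, or `git am --abort`
```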
Of course, the classic response is "get a better MUA you luser", but that just adds even more steps for people who use webmail services for 99.9% of their email needs.
I’m just completing the picture by pointing out that for those who choose to use emails to jockey patches around by mutual agreement, including patches in emails really shouldn’t be a problem.
Git is distributed and allows you to work efficiently with poor connectivity, having full history available at any time, which is a big accessibility point for people with limited connectivity (and also helps people working while traveling, for example). If you do have any email client, you get all of this as well, plus arbitrarily powerful, low-latency filtering and searching. I recommend Greg KH's "Patches carved into stone tablets" talk [0].
Despite your "luser" strawman, people advocating for client-side MUAs mean well and have a point. Try replacing "webmail" by "Notepad" and "client-side MUA" by "emacs/vim" to see how your argument sounds. You probably spend a decent amount of time interacting with email, and the investment in setting up a fast, flexible and powerful environment (preferably reusing your text editor for composing messages) for doing so pays for itself soon.
As it happens, I'm the kind of masochist who uses Sublime Text with no plugins for most of my programming (and literal Notepad for most of my note-taking on Windows), so I find value in letting people stick to their familiar workflow, even if some might see that workflow as somewhere between 'grossly inferior' and 'literally unusable'.
The nice thing with remote Git repos is that you don't need to care at all about how they work internally: you can speak to them using the same Git client (or GUI wrapper, alternative Git-compatible client, etc.) that you use for everything else. Of course, many people would prefer not to use Git at all, but it's a necessary evil to have some client if you want source control, and it doesn't take much work to set up. (At this point, I've installed several source-control tools that I don't really use nor have to worry about.)
But setting up an MUA solely for a git-send-email based workflow is several steps beyond that. E.g., some of the Linux maintainers demand inline patches, which physically cannot be done through many webmail services. So you're left with the slog of finding the right incantations for git-send-email (or an MUA you don't need for anything else) to provide the right credentials to an obscure SMTP proxy. And then you have to worry about how well-protected those credentials are, unless you have some sort of keyring or 2FA integration.
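For the record, the incantations look something like this (every value below is a placeholder):

```sh
git config --global sendemail.smtpServer     smtp.example.com
git config --global sendemail.smtpServerPort 587
git config --global sendemail.smtpEncryption tls
git config --global sendemail.smtpUser       me@example.com
# then, per series:
git send-email --to=some-list@example.org outgoing/*.patch
```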
> You probably spend a decent amount of time interacting with email, and the investment in setting up a fast, flexible and powerful environment (preferably reusing your text editor for composing messages) for doing so pays for itself soon.
I'm a bit curious, how well do these tools handle HTML email? Webmail services come with WYSIWYG editors that I make liberal use of for formatted text. There's a big overlap between the "email patches are great!" and "HTML email is the worst!" crowds, but I'd be surprised if HTML email is totally anathema to today's MUAs.
I definitely think there are upsides to not tweaking your text editor config endlessly, so I understand your point :) What I meant with "vim/emacs" is mostly that sometimes you really want to automate a text editing task, and then it's really convenient to have a programmable text editor. It's also very much a case of [0].
> I'm a bit curious, how well do these tools handle HTML email?
In my case, I use mu4e in emacs to read my mail. Very basic HTML works by default via emacs's native HTML renderer (see, e.g., [1] for old screenshots). That's my preferred solution because I like the keyboard consistency (it's just an emacs buffer) and because there is a command to view the email in an external browser if needed, but it is also possible to render HTML email accurately in emacs by embedding a webkit widget [2]. As for writing, you can write in Org mode format (emacs markdown, if you will) and it gets converted to HTML on send.
[1] https://lars.ingebrigtsen.no/2015/02/10/eww-now-with-fonts/
[2] https://www.reddit.com/r/emacs/comments/l60p6a/howto_mu4e_an...
With Forgejo (Codeberg) you can toggle features such as pull requests, issues, etc.
You can also configure any external issue tracker or wiki apparently, though I've never tried it, because those included with the forge are good enough for me.
If anyone with an account on any other gitlab instance could automatically do things on our gitlab instance, it would be a nightmare. We'd probably disable federation if gitlab offered it.
I think the idea is the exact opposite, no? People wouldn't be able to do anything on your forge. They would only initiate actions on their own server and then send you notifications of PR requests to the ActivityPub inbox of your repository, and spammers would have no incentive to do this because nothing would end up in public view.
These days, Git forges left and right are even working on decentralizing things like issues and comments, something Git doesn't track or care about.
People flock to Github because it's free and easy. Very few people care about the peer-to-peer internet and decentralization that Git was built for.
I'm not hosting this on the public Internet, so maybe it's not a fair comparison, but thought it was worth mentioning that there are lighter/easier forge options than Gitlab.
I 100% agree. When people say things like "You have to be on GitHub because that's where the community is" I don't know what to say. Who cares? Is it really that hard to log into (or not) a different Git server? Do we really want to encourage this idea of community anyway, at a scale larger than an individual project?
But the author does a very good and reasonable job of explaining it.
It didn't convince me to do the same thing, but I can't help but nod along to the even-handed pros and cons that he lays out.
"Did you just tell me to go fuck myself?"
"I believe I did, Bob."
This could actually be useful for "open source but not open contribution" situations. It avoids the thing where people see that nice, easy pull-request button as somehow giving them the right to expect their contribution to be accepted.
"If you don't want contributors, why is it on [insert-forge-here] in the first place?" is a question I've seen asked in discussions about such things. Some people get personally insulted when their pull doesn't happen, even though the project states clearly ahead of time that this is how things are. Heck, some seem to be offended in advance that some hypothetical patch they might produce in future would be rejected.
https://github.com/artumi-richard/ssh-git-hosting
I stopped using it years ago. It had the additional advantage of no artificial limits (file sizes etc).
I never find this to be a big deal. The friction to create a new account is usually pretty low. Type in my email address, let Firefox generate and save a secure password, maybe do email verification, and I'm in. If I'm willing to spend possibly hours crafting a patch to contribute to some software I like, followed by some code-review back-and-forth, the few minutes spent setting up an account is nothing.
A more generic federated identity system that everyone could use/operate would of course be 1000% better. But slightly orthogonal to forge/no forge.
It would be so much nicer if there were a federated way where I, the user, could specify any OAuth identity provider (even a self-hosted one) rather than the predetermined list dictated by the relying party.
Funnily enough, someone recently asked me if I could comment on a Product Hunt post, and I was unable to do so since they only allow sign-in with Google, Twitter, Facebook, Apple, or LinkedIn; I have none of these accounts and would rather not create any of them. Oh well.
I'm not sure to what degree that natively allows for integration between Git and an MTA but it would still have most of the desired aspects.
I really disliked, e.g., pushing new artifacts to an F-Droid repo. Builds can be very long, and I do not plan to watch the command-line progress for 30 minutes to make sure it is ready to be pushed as a release.
I guess someone could automate this with cron jobs or bash scripts, file watchers, etc. But moving it to a new machine is so much trouble. With GitLab I can just make a backup and restore it on the other machine. With everything running in Docker it is much easier to get the environment running.
Or at least for me.
A Git repo can contain executable code! Git hooks are shell commands stored in the ".git/hooks" directory and executed on events such as merge, commit... Hooks are not duplicated by git clone; I am not sure about bundles. But I would be VERY careful about accepting something like a compressed Git repo from anyone! (Toy example below.)
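A toy illustration of how little it takes; the marker string is made up:

```sh
# a hook is just an executable file dropped into .git/hooks
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# refuse to commit anything still carrying the marker string XXX-WIP
if git diff --cached | grep -q 'XXX-WIP'; then
    echo "XXX-WIP marker found; aborting commit" >&2
    exit 1
fi
EOF
chmod +x .git/hooks/pre-commit
```

Since `git clone` doesn't copy hooks, the real risk is with anything that hands you a whole `.git` directory (a tarball, say), where a booby-trapped hook comes along for the ride.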
Do you know, for instance, that the playing card symbols in Unicode are narrow characters, and terminal emulators treat them so, yet the fonts draw them as 1.5 cells wide, which causes ugly and annoying overlaps?
You simply don't know what you are talking about.
I understand the article is thinking of occasional contributors from outside the project. For developers on your team (if any), I guess you could grant them SSH access to push, with git hooks enforcing your workflow (preventing direct commits to main, requiring feature-* branches, and so on); a sketch follows.
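For instance, a hypothetical server-side pre-receive hook along these lines:

```sh
#!/bin/sh
# Git feeds one "<old-sha> <new-sha> <refname>" line per updated ref on stdin.
while read old new ref; do
    case "$ref" in
        refs/heads/main)
            echo "Direct pushes to main are not allowed; use a feature branch." >&2
            exit 1
            ;;
    esac
done
exit 0
```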
The rest of it is reasonable advice as far as it goes. Learn how to replicate patches between raw repositories as a good practice, as you'll want to be able to do that anyway. Don't lean too hard on the GitHub abstraction of a "Pull Request" as that won't match what kernel people want, etc...
[1] Technically true, but in practice a ridiculous whopper.
What could possibly possess you to want to review and apply patches through email? The whole mentality is just utterly foreign to me.
It's the goth subculture of software development! Because it's popular, it must be eschewed.
Geez.
> It's the goth subculture of software development! Because it's popular, it must be eschewed.
is just
> it's popular, it must be eschewed.
after a ChatGPT pass.
You are right to be mindful of the similarities. But it could be a mistake to be dismissive of the deeper thinking that might be behind the statement if it isn't just AI spew.
It's crazy how many people today think git == github.
Unless you're really into git and email, it's just tiresome and a time sink having to work all this out.
FWIW, I've used PuTTY in the past and appreciate his efforts, if not his obscurantist tendencies.
Saying it is just that after reading the full text seems rather reductive and unfair.
That is certainly a likely side effect, of course. It may even be a desirable one, avoiding, or at least reducing, several classes of time-wasting contributions (like the glut of single-typo-fix pull requests that resulted from an ill-conceived contribution-based competition a while back), especially for "open source but not open contribution" projects.
Some projects are not wanting to optimise for the number of contributors above other considerations.
De-facto standards are not standards.
Standards are properly discussed, and consensus is detected among the participants.
There was never a discussion on making Github an "industry standard".
The only reason things appeared simple in the past is because of GitHub's monopoly. As soon as you want to get rid of that, life gets more complicated. That is just another tradeoff you have to make.