It just looks like a normal frontend+backend product monorepo, the only somewhat unusual part being the inclusion of the marketing folder.
maybe they could be encrypted, and you could say "well, it's everything but the encryption key, which is owned in physical form by the CEO."
there's a lot of power i think to having everything in one place. maybe GitHub could add the notion of private folders? but now that's ACLs... probably pushing the tool way too far.
How close do you think this is? Deploys everything but the actual backend/frontend code.
> maybe they could be encrypted, and you could say "well, it's everything but the encryption key, which is owned in physical form by the CEO."
I don't see how this is any different from most projects where keys and the like are kept in some form of secrets manager (AWS services, GHA Secrets, Hashi Vault, etc.). We used p4 rather than git, though.
However there's a big difference between development and releases. You still want to be able to cut stable releases that allow for cherrypicks for example, especially so in a monorepo.
Atomic changes are mostly a lie when talking about changes that cross an API boundary, e.g. a frontend talking to a backend. You should always define some kind of stable API.
Can you explain this comment? Are you saying to develop directly in the main branch?
How do you manage the various time scales and complexity scales of changes? Task/project length can vary from hours to years and dependencies can range from single systems to many different systems, internal and external.
The complexity comes from releases. Suppose you have a good commit 123 where all your tests pass for some project, you cut a release, and deploy it.
Then development continues until commit 234, but your service is still at 123. Some critical bug is found, and fixed in commit 235. You can't just redeploy at 235 since the in-between may include development of new features that aren't ready, so you just cherry pick the fix to your release.
It's branches in a way, but _only_ release branches. The only valid operations are creating new releases from head, or applying cherrypicks to existing releases.
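In other words, the whole release tooling can be a small script. A minimal sketch, assuming a `release/<version>` branch naming scheme and an `origin` remote (neither comes from the comment above):

```typescript
// Sketch of a "release branches only" model: cut a release from head,
// or cherry-pick a fix onto an existing release. Branch names, the
// "origin" remote, and the CLI shape are assumptions for illustration.
import { execSync } from "node:child_process";

function git(args: string): string {
  return execSync(`git ${args}`, { encoding: "utf8" }).trim();
}

// Operation 1: cut a new release from the current head of main.
function cutRelease(version: string): void {
  git("fetch origin");
  git(`checkout -b release/${version} origin/main`);
  git(`push -u origin release/${version}`);
}

// Operation 2: apply a single fix from main onto an existing release.
function cherryPickFix(version: string, commitSha: string): void {
  git(`checkout release/${version}`);
  git(`cherry-pick -x ${commitSha}`); // -x records the original SHA in the message
  git(`push origin release/${version}`);
}

const [action, version, sha] = process.argv.slice(2);
if (action === "cut") cutRelease(version);
else if (action === "fix") cherryPickFix(version, sha);
```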
So you can say that you have short-lived development branches that are always rebased on main. Along with the release branch and cherry-pick process, the workflow you describe is quite common.
They don’t do code reviews or any sort of parallel development.
They’re under the impression that “releases are complex and this is how they avoid it” but they just moved the complexity and sacrificed things like parallel work, code reviews, reverts of whole features.
Feature flags are a good idea, but they require a lot of discipline and maintenance. In practice, they tend to be overused, and provide more negatives than positives. They're a complement, but certainly not a replacement for VCS branches, especially in monorepos.
We use Unleash at work, which is open source, and it works pretty well.
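For reference, a minimal sketch of a flag check with the open-source unleash-client Node SDK; the URL, token, and flag name below are placeholders:

```typescript
// Minimal Unleash flag check using the unleash-client Node SDK.
// The URL, token, and flag name are placeholders, not real values.
import { initialize } from "unleash-client";

const unleash = initialize({
  url: "https://unleash.example.com/api/",
  appName: "checkout-service",
  customHeaders: { Authorization: "<client-api-token>" },
});

unleash.on("ready", () => {
  // Flags are evaluated against state polled from the Unleash server,
  // so toggling one does not require a deploy.
  if (unleash.isEnabled("new-pricing-limits")) {
    console.log("serving new pricing limits");
  } else {
    console.log("serving old pricing limits");
  }
});
```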
I'm using a monorepo for my company across 3+ products and so far we're deploying from stable release to stable release without any issues.
The moment you have two production services that talk to each other, you end up with one of them being deployed before the other.
Cherry picks are useful for fixing releases or adding changes without having to make an entirely new release. This is especially true for large monorepos which may have all sorts of changes in between. Cherry picks are a much safer way to “patch” releases without having to create an entirely new release, especially if the release process itself is long and you want to use a limited scope “emergency” one.
Atomic changes - assuming this is related to releases as well, it’s because the release processes for the various systems might not be in sync. If you make a change where a frontend release that uses a new backend feature ships alongside the backend feature itself, you can get version-drift issues unless everything happens in lock-step and you have strong regional isolation. Cherry picks are a way to circumvent this, but it’s better not to make these changes “atomic” in the first place.
A monorepo only allows you to reason about the entire product as it should be. The details of how to migrate a live service atomically have little to do with how the codebase migrates atomically.
Adding new APIs is always easy. Removing them not so much since other teams may not want to do a new release just to update to your new API schema.
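One common way to keep removals manageable is the expand/contract pattern: add the new shape alongside the old one, let consumers migrate on their own release schedule, and only delete the old shape once nothing reads it. A hedged sketch with invented field names:

```typescript
// Expand/contract sketch for evolving an API response without breaking
// older clients. The field names are invented for illustration.

// Step 1 (expand): serve both the old and the new shape side by side.
interface PriceResponse {
  price_cents: number;                            // old field, still populated
  price?: { amount: number; currency: string };   // new field, optional for now
}

function toResponse(amountCents: number, currency: string): PriceResponse {
  return {
    price_cents: amountCents,                 // old clients keep working
    price: { amount: amountCents, currency }, // new clients switch to this
  };
}

// Step 2 (migrate): consumers move to `price` at their own pace.
// Step 3 (contract): only after telemetry shows no reads of `price_cents`
// do you delete it -- a separate change, possibly many releases later.
```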
This seems like simply not following the rules of having a monorepo, because the DB cluster is not running the version in the repo.
Being 17 versions behind is an extreme example, but always having everything run the latest version in the repo is impossible, if only because deployments across nodes aren't perfectly synchronised.
Canary/Incremental, not so much
There's a bigger problem though: in practice there's almost always a client that you don't control, and can't switch along with your services, e.g. an old frontend loaded by a user's browser.
But (in my mind) even a frontend is going to get told it is out of date/unusable and needs to be upgraded when it next attempts to interact with the service. That means it will have to upgrade, which isn't "atomic" in the strictest sense of the word, but it's as close as you're going to get.
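One hedged sketch of that "you're out of date, please reload" handshake, assuming the server advertises a build version in a response header; the header name and the BUILD_VERSION constant are invented:

```typescript
// Client-side staleness check. The x-app-version header and the
// build-time BUILD_VERSION constant are assumptions for illustration.
declare const BUILD_VERSION: string; // injected by the bundler at build time

export async function checkForStaleClient(): Promise<void> {
  const res = await fetch("/api/health");
  const serverVersion = res.headers.get("x-app-version"); // hypothetical header
  if (serverVersion && serverVersion !== BUILD_VERSION) {
    // Not atomic: the old bundle keeps running until the user reloads,
    // so the API still has to tolerate at least one version of skew.
    alert("A new version is available. Please reload the page.");
  }
}
```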
My history ends up being:
- add feature x
- linting
- add e2e tests
- formatting
- additional comments for feature
- fix broken test (CI caught this)
- update README for new feature
- linting
With a squash it can boil down to just “added feature x” with smaller changes inside the description.
Where logical commits (also called atomic commits) really shine is when you're making multiple logically distinct changes that depend on each other. E.g. "convert subsystem A to use api Y instead of deprecated api X", "remove now-unused api X", "implement feature B in api Y", "expose feature B in subsystem A". Now they can be reviewed independently, and if feature B turns out to need more work, the first commits can be merged independently (or if that's discovered after it's already merged, the last commits can be reverted independently).
If after creating (or pushing) this sequence of commits, I need to fix linting/formatting/CI, I'll put the fixes in a fixup commit for the appropriate commit and meld them using a rebase. Takes about 30s to do manually, and can be automated using tools like git-absorb. However, in reality I don't need to do this often: the breakdown of bigger tasks into logical chunks is something I already do, as it helps me stay focused, and I add tests and run linting/formatting/etc. before I commit.
And yes, more or less the same result can be achieved by creating multiple MRs and using squashing; but usually that's a much worse experience.
every commit is reviewed individually. every commit must have a meaningful message, no "wip fix whatever" nonsense. every commit must pass CI. every commit is pushed to master in order.
Even if we squash it into main later, it’s helpful for reviewing.
Other than that, you're pretty free in how you write commit messages.
So one branch had 40x "Deploy to Dev" commits. And those got merged straight into the repo.
They added no information.
No information loss, and every commit is valid on their own, so cherry picks maintain the same level of quality.
Also, rebasing is just so fraught with potential errors - every month or two, the devs who were rebasing would screw up some feature branch with work they needed on it, and would look to me to fix it for some reason. Such a time sink for so little benefit.
I eventually banned rebasing, force pushes, and mandated squash merges to main - and we magically stopped having any of these problems.
The Linux kernel manages to do it for 1000+ devs.
I can spend hours OCDing over my git branch commit history.
-or-
I can spend those hours getting actual work done and squash at the end to clean up the disaster of commits I made along the way so I could easily roll back when needed.
But also, rewriting history only works if you haven't pushed code and are working as a solo developer.
It doesn't work when the team is working on a feature in a branch and we need to be pushing to run and test deployment via pipelines.
Weird, works fine in our team. Force with lease allows me to push again and the most common type of branch is per-dev and short lived.
It's harder to debug as well (this 3000-line commit has a change causing the bug... best of luck finding it AND why it was changed that way in the first place).
I, myself, prefer that people tidy up their branches such that their commits are clear on intent, and then rebase into main, with a merge commit at the tip (meaning that you can see the commits AND where the PR began/ended).
git bisect is a tonne easier when you have that
When I am ready to make my PR I delete my remote feature branch and then squash the commits. I can use all my granular commit comments to write a nice verbose comment for that squashed commit. Rarely I will have more than one commit if a user story was bigger than it should be. Usually this happens when more necessary work is discovered. At this stage each larger squashed commit is a fully complete change.
The audience for these commits is everyone who comes after me to look at this code. They aren’t interested in seeing it took me 10 commits to fix a test that only fails in a GitHub action runner. They want the final change with a descriptive commit description. Also if they need to port this change to an earlier release as a hotfix they know there is a single commit to cherry pick to bring in that change. They don’t need to go through that dev commit history to track it all down.
- You need to remove trash commits that appear when you need to rerun CI.
- You need to remove commits with that extra change you forgot.
- You want to perform any other kind of rebase to clean up messages.
I assume in this thread some people mean squashing from the perspective of a system like Gitlab where it's done automatically, but for me squashing can mean simply running an interactive rebase (or a fixup) and leaving only important commits that provide meaningful information to the target branch.
Serious question, what's going on here?
Are you using a "trash commit" to trigger your CI?
Is your CI creating "trash commits" (because build artefacts)?
Is there overhead to creating a branch?
At some point, you will have many teams. And one of them _will not_ be able to validate and accept some upgrade. Maybe a regression causes something only they use to break. Now the entire org is held hostage by the version needs of one team. Yes, this happens at slightly larger orgs. I've seen it many times.
And since you have to design your changes to be backwards compatible already, why not leverage a gradual roll out?
Do you update your app lock-step when AWS updates something? Or when your email service provider expands their API? No, of course not. And you don't have to lock yourself to other teams in your org for the same reason.
Monorepos are hotbeds of cross contamination and reaching beyond API boundaries. Having all the context for AI in one place is hard to beat though.
This isn't to say a monorepo is bad, but they're clearly naive about some things:
> No sync issues. No "wait, which repo has the current pricing?" No deploy coordination across three teams. Just one change, everywhere, instantly.
It's literally impossible to deploy "one change" simultaneously, even with the simplest n-tier architecture. As you mention, a DB schema is a great example. You physically cannot change a database schema and application code at the exact same time. You either have to ensure backwards compatibility or accept that there will be an outage while old application code runs against a new database, or vice-versa. And the latter works exactly up until an incident where your automated DB migration fails due to unexpected data in production, breaking the deployed code and causing a panic as on-call engineers try to determine whether to fix the migration or roll back the application code to fix the site.
To be a lot more cynical; this is clearly an AI-generated blog post by a fly-by-night OpenAI-wrapper company and I suspect they have few paying customers, if any, and they probably won't exist in 12 months. And when you have few paying customers, any engineering paradigm works, because it simply does not matter.
> you will have the old system using the old schema and the new system using the new schema unless you design for forwards-backwards compatible changes
Of course you design changes to be backwards compatible. Even if you have a single node and have no networked APIs. Because what if you need to roll back?
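In practice that usually means expand/contract migrations, where each step is independently deployable and reversible. A rough sketch with invented table/column names and a stand-in runMigration helper:

```typescript
// Expand/contract migration sketch: each step ships on its own, and old
// application code keeps working until the final contract step.
// The table/column names and runMigration helper are invented.
async function runMigration(sql: string): Promise<void> {
  // stand-in for whatever migration tool actually executes the statement
  console.log("would run:", sql);
}

export async function migrate(): Promise<void> {
  // 1. Expand: add the new column as nullable; old code simply ignores it.
  await runMigration("ALTER TABLE users ADD COLUMN display_name TEXT");

  // 2. Deploy app code that writes BOTH full_name and display_name,
  //    but still reads full_name. Safe to roll back.

  // 3. Backfill existing rows (in batches, in a real system).
  await runMigration(
    "UPDATE users SET display_name = full_name WHERE display_name IS NULL"
  );

  // 4. Deploy app code that reads display_name.

  // 5. Contract: only once every reader is gone, drop the old column.
  await runMigration("ALTER TABLE users DROP COLUMN full_name");
}
```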
> Maybe a regression causes something only they use to break. Now the entire org is held hostage by the version needs of one team.
This is an organizational issue not a tech issue. Who gives that one team the power to hold back large changes that benefit the entire org? You need a competent director or lead to say no to this kind of hostage situation. You need defined policies that balance the needs of any individual team versus the entire org. You need to talk and find a mutually accepted middle ground between teams that want new features and teams that want stability and no regressions.
If my code has to be backwards compatible to survive the deployment, then having the code in two different repos isn’t such a big deal, because it’ll all keep working while I update the consumer code.
Multiple repos shouldn't depend on a single shared library that needs to be updated in lockstep. If they do, something has gone horribly wrong.
It’s both. Furthermore, you _can_ solve organizational problems with tech. (Personally, I prefer solutions to problems that do not rely strictly on human competence)
We have a monorepo; we use automated code generation (openapi-generator) to build API clients for each service from an OpenAPI.json generated by the server framework. Service client changes cascade instantly. We have a custom CI job that trawls git and figures out which projects changed (including dependencies) to compute which services need to be rebuilt/redeployed. We may just not be at scale—thank God. We're a small team.
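A stripped-down sketch of that kind of change detection; the services/<name> directory layout and base ref are assumptions, and a real job would also follow dependency edges rather than just file paths:

```typescript
// Rough sketch of "which services changed?" detection for CI.
// Assumes a services/<name>/... layout and an origin/main base ref;
// a real job would also walk the dependency graph.
import { execSync } from "node:child_process";

function changedServices(baseRef = "origin/main"): Set<string> {
  const diff = execSync(`git diff --name-only ${baseRef}...HEAD`, {
    encoding: "utf8",
  });
  const services = new Set<string>();
  for (const file of diff.split("\n")) {
    const match = file.match(/^services\/([^/]+)\//);
    if (match) services.add(match[1]);
  }
  return services;
}

// CI would then fan out one build/deploy job per entry.
console.log([...changedServices()]);
```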
> We may just not be at scale—thank God. We're a small team.
It's perfectly acceptable for newer companies and small teams to not solve these problems. If you don't have customers who care that your website might go down for a few minutes during a deploy, take advantage of that while you can. I'm not saying that out of arrogance or belittlement or anything; zero-downtime deployments and maintaining backwards compatibility have an engineering cost, and if you don't have to pay that cost, then don't! But you should at least be cognizant that it's an engineering decision you're explicitly making.
Seems like a weird workaround, you could just clone multiple repos into a workspace. Agree with all your other points though.
The alternative of every service being on their own version of libraries and never updating is worse.
months-long delays on important updates, due to some large project doing extremely bad things and pushing off a minor refactor endlessly, have been the norm for me. but they're big, so they wield a lot of political power, so they get away with it every time.
or worse, as a library owner: spending INCREDIBLE amounts of time making sure a very minor change is safe, because you can't gradually roll it out to low-risk early adopter teams unless it's feature-flagged to hell and back. and if you missed something, roll back, write a report and say "oops" with far too many words in several meetings, spend a couple weeks triple checking feature flagging actually works like everyone thought (it does not, for at least 27 teams using your project), and then try again. while everyone else working on it is also stuck behind that queue.
monorepos suck imo. they're mostly company lock-in, because they teach most people absolutely no skills they'd need in another job (or for contributing to open source - it's a brain drain on the ecosystem), and all external skill is useless because every monorepo is a fractal snowflake of garbage.
And monorepo or not, bad software developers will always run into this issue. Most software will not have 'many teams'. Most software is written by a lot of small companies doing niche things. Big software companies with more than one team normally have release managers.
My tip: use architecture unit tests for external-facing APIs. If you are a smaller company, 24/7 doesn't have to be the thing; just communicate this to your customers. But overall, if you run SaaS software and still don't know how to do zero-downtime deployment in 2025/2026, just keep doing whatever you are doing, because man, come on...
I guess I could work with either option now.
I think it’s better to always ask your devs to be concerned about backwards compatibility, and sometimes forwards compatibility, and to add test suites if possible to monitor for unexpected incompatible changes.
And if it's not, it breaks everything. This is an assumption you can't make.
Crazy that nobody can be bothered to get rid of the obvious AI-isms "This isn't just for...", "The Challenges (And How We Handle Them)", "One PR. One review. One merge. Everything ships together." It's an immediate signal that whoever wrote this DGAF.
Not to say this post isn't AI generated but you might want a better tool (if one exists)
I've had a blog post kicking around about this for a while, it's CRAZY how much more expensive AI detection is than AI generation.
In my mind, content generated today with AI "tells" like the above and a general zero-calorie feel, which also trips an AI detector, is very likely AI generated.
A text either has value to you or it doesn’t. I don’t really understand what the level of AI involvement has to do with it. A human can produce slop, an AI can produce an insightful piece. I rely mostly on HN to tell them apart value-wise.
> Last week, I updated our pricing limits. One JSON file. The backend started enforcing the new caps, the frontend displayed them correctly, the marketing site showed them on the pricing page, and our docs reflected the change—all from a single commit.
Human articles on HN are largely shit. I would personally prefer to see either AI articles, or human articles by experts (which we get almost none of on HN)
It's almost as if when you seek to find patterns, you'll find patterns, even if there are none. I think it'd benefit these kinds of people to remember the scientific "rule" of correlation does not equal causation and vice versa.
wat. You are running the marketing page from the same repo, yet having an LLM make the updates? You have the data file available. Just read the pricing info from your config file and display it?
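For instance, a minimal sketch of the "just read the config" approach; the path and field names are invented, and it assumes resolveJsonModule is enabled:

```typescript
// Render pricing straight from the same config file the backend enforces.
// The path and field names are invented for illustration.
import pricing from "../config/pricing.json";

export function pricingLabel(plan: keyof typeof pricing): string {
  const limits = pricing[plan];
  return `${limits.maxSeats} seats, ${limits.maxProjects} projects`;
}
```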
AI didn’t magically uninvent “let’s have someone else check this over before it’s shipped”.
I'm wondering once the exceedingly obvious LLM style creeps more and more into the public mind if we're going to look back at these blog posts and just cringe at how blatant they were in retrospect. The models are going to improve (and people will catch on that you can't just use vanilla output from the models as blog posts without some actual editing) and these posts will just stand out like some very sore thumbs.
(ps all of the above 100% human written ;)
Company website in the same repo means you can find branding material and company tone from blogs, meaning you can generate customer slides, video demos, etc.
Going further than Docs + Code, why not also store Bugs, Issues, etc.? I wonder.
Here were the downsides we ran into:
- Getting buy in to do everything through the repo. We had our feature flags controlled via a yaml file in the repo as well, and pretty quickly people got mad at the time it took for us to update a feature flag (open MR -> merge MR -> have CI update feature flag in our envs), and optimizing that took quite a while. It then made branch invariants harder to reason about (everything in the production branch is what is in our live environments, but except for feature flags). So, we moved that out of the monorepo into an actual service.
- CI time and complexity. When we started getting to around 20 services that deployed independently, GitLab started choking on the size of our CI configuration and we'd see a spinner for about 5 minutes before our pipeline even launched. Couple that with special snowflakes like the feature flag system I mentioned above, and eventually it got to the point that only a few people knew exactly how rollout edge cases worked. The juice was not worth the squeeze at that point (the juice being - "the repo is the source of truth for everything")
- Test times. We ran some e2e UI tests with Cypress that required a lot of beefy instances, and for safety we'd run them every single time. Couple that with flakiness, and you'd have a lot of red pipelines when the goal was 100% green all the time.
That being said, we got a ton of good stuff out of it too. I distinctly remember one day that I updated all but 2 of our services to run on ARM without involving service authors and our compute spend went down by 70% for that month because nobody was using the m8g spot instances, which had just been released.
They had to open a whole epic in order to reduce the memory usage, but I think all that work just let us continue to use GitLab as the number of services we ran grew. They recommended we use something called parent/child pipelines, but it would have been a fairly large rewrite of our logic.
We build a user-friendly way for non-technical users to interact with a repo using Claude Code. It's especially focused on markdown, giving red/green diffs on RENDERED markdown files which nobody else has. It supports developers as well, but our goal is to be much more user friendly than VSCode forks.
Internally we have been doing a lot of what they talk about here, doing our design work, business planning, and marketing with Claude Code in our main repo.
For example, I can have one prompt writing Playwright tests for happy paths while another prompt is fixing a bug with duplicated rows in a table caused by a missing SQL JOIN condition.
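For example, the kind of happy-path test such a prompt might produce; the URL, route, and copy below are placeholders:

```typescript
// Happy-path Playwright test sketch. URL, selectors, and copy are
// placeholders, not taken from the comment above.
import { test, expect } from "@playwright/test";

test("signed-out visitor can reach signup from pricing", async ({ page }) => {
  await page.goto("https://app.example.com/pricing");
  await expect(page.getByRole("heading", { name: "Pricing" })).toBeVisible();
  await page.getByRole("link", { name: "Start free trial" }).click();
  await expect(page).toHaveURL(/signup/);
});
```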
"When a feature touches the backend API, the frontend component, the documentation, and the marketing site—why should that be four repositories, four PRs, four merge coordination meetings?
The monorepo isn't a constraint. It's a force multiplier."
Thank you Claude :)
Still averse to the monorepo though, but I understand why it's attractive.
You could easily scoff the same way about some number of API endpoints, class methods, config options, etc, and it still wouldn't be meaningful without context.
It's ok to split or lump as the team sees fit.
There may not be a universally correct granularity, but that doesn't mean clearly incorrect ones don't exist. 50+ services is almost always too many, except for orgs with hundreds or thousands of engineers.
“It’s all there Claude just read it.”
Ok…
Of course, it’s still a pretty rough and dirty way to do it. But it works for small/demo projects.
Never expose your storage/backend type. Whenever you do, any consumers (your UI, consumers of your API, whatever) will take dependencies on it in ways you will not expect or predict. It makes changes somewhere between miserable and impossible depending on the exact change you want to make.
A UI-specific type means you can refactor the backend, make whatever changes you want, and have it invisible to the UI. When the UI eventually needs to know, you can expose that in a safe way and then update the UI to process it.
It's tempting to return a db table type but you don't have to.
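A small sketch of that separation, with invented type and field names:

```typescript
// Keep the storage shape and the UI shape separate, with one mapping
// function in between. Types and fields are invented for illustration.

// Internal storage/backend row -- free to change with the schema.
interface UserRow {
  id: number;
  email: string;
  password_hash: string; // must never leak to the UI
  created_at: Date;
}

// UI-facing type -- only what the frontend actually needs.
interface UserView {
  id: number;
  email: string;
  memberSince: string; // pre-formatted for display
}

function toUserView(row: UserRow): UserView {
  return {
    id: row.id,
    email: row.email,
    memberSince: row.created_at.toISOString().slice(0, 10),
  };
}
```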
It's definitely not amazing; code generation in general will always have its quirks. But protobuf has some decent guardrails to keep the protocol backwards- and forwards-compatible (which was painful with Avro without tooling for enforcement), it can be used with JSON as a transport for marshaling if needed/wanted, and it is mature enough to have a decent ecosystem of libraries around it.
Not that I absolutely love it but it gets the job done.
I look forward to when we see the article about breaking the monorepo nightmare.
Fuck yes I love this attitude to transparency and code-based organization. This is the kind of stuff that gets me going in the morning for work, the kind of organization and utility I honestly aspire to implement someday.
As many commenters rightly point out, this doesn't run the human side of the company. It could, though, if the company took this approach seriously enough. My personal two cents, it could be done as a separate monorepo, provided the company and its staff remain disciplined in its execution and maintenance. It'd be far easier to have a CSV dictate employees and RBAC rather than bootstrapping Active Directory and fussing with its integrations/tentacles. Putting department processes into open documentation removes obfuscation and a significant degree of process politics, enabling more staff to engage in self-service rather than figuring out who wields the power to do a thing.
I really love everything about this, and I'd like to see more of it, AI or not. Less obfuscation and more transparency is how you increase velocity in any organization.
The people who say polyrepos cause breakage aren't doing it right. When you depend across repos in a polyrepo setup, you should depend on specific versions of things across repos, not the git head. Also, ideally, depend on properly installed binaries, not sources.
To be fair, this problem is not solved at all by monorepos. Basically, only careful use of gRPC (and similar technology) can help solve this… and it doesn’t really solve for application layer semantics, merely wire protocol compatibility. I’m not aware of any general comprehensive and easy solution.
In a polyrepo environment, either:
- B updates their endpoint in a backward compatible fashion, making sure older stuff still works
OR
- B releases a new version of their API at /api/2.0 but keeps /api/1.0 active and working until nothing depends on it anymore, releasing deprecation messages to devs of anyone depending on 1.0
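For the second option, a sketch with Express-style routing where both versions stay mounted until the old one has no callers; the paths and payload shapes are placeholders:

```typescript
// Keep /api/1.0 alive while /api/2.0 rolls out. Express is used for
// illustration; paths and payload shapes are placeholders.
import express from "express";

const app = express();

// Old contract: still works, but flagged as deprecated for remaining callers.
app.get("/api/1.0/orders/:id", (req, res) => {
  res.setHeader("Deprecation", "true");
  res.json({ id: req.params.id, total_cents: 1999 });
});

// New contract: different shape, adopted by clients on their own schedule.
app.get("/api/2.0/orders/:id", (req, res) => {
  res.json({ id: req.params.id, total: { amount: 1999, currency: "USD" } });
});

app.listen(3000);
```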
Also, are we just upvoting obvious AI gen marketing slop now?
This is something that is, of course, super relevant given context management for agentic AI. So there's great appeal in doing this.
And today, it might even be the best decision. But this really feels like an alpha version of something that will have much better tooling in the near future. JSON and Markdown are beautifully simple information containers, but they aren't friendly for humans compared with something like Notion or Excel. Again I'll say, I'm confident that in the near future we'll start to see solutions emerge that structure documentation in a way that is friendly to both AIs and humans.