If it's a simple change that is obviously correct, I'll try to unblock the author ASAP - often within minutes, even if it interrupts my flow.
If it's a giant PR with lots of risky changes in vital code, an awkward-to-unreadable diff, and/or maddeningly questionable design/coding decisions that require me to think a lot and compose some nuanced, reasoned comments to steer the author in a better direction, then, well, you might find yourself nagging me for a review 2 days later. (And I'll probably ask you to split up PRs like this in the future, e.g. to separate major refactoring from any logic changes.)
Yes, a lot of large codebases have things that look wrong. They're not always wrong. Trying to clean up what look like cobwebs at the core of a large codebase often means removing things that are there for a reason. The reason is usually counterintuitive and could be documented better. But often only a few people can really evaluate the change.
A small PR is likely to be accepted quickly, and a large PR is likely to take a while. No one gets upset by that, though.
The flip side is that a small PR to a critical part of the codebase is also likely to take a while and be treated as "default to no". It's often hard to see that from the "outside", though.
With that said, trying to go the extra mile and make things clear, concise, and better documented than before goes a long way in getting an MR reviewed quickly.
I've been doing this for 20 years and I've never seen an organization where this didn't exist to a significant degree.
I review my PRs myself line by line. And because of this I've literally never had a PR reviewer catch a major bug (a bug that would have required declaring an incident and paging an on call engineer had it made it through to prod). Not a single time.
So the vast majority of the time, I don't care that someone is going to rubber-stamp my PR. Occasionally, I'll ask someone who knows more than me to give it a thorough going over just in case.
I also very rarely have someone suggest a change in a PR that was worth bringing up. But most of the time if I'm doing anything non-trivial, I've already talked it over with someone else well before PR time. I think that is the real time when collaboration should be happening, not at the end of the process.
Essentially, the PR process at most tech companies is nothing more than a gatekeeping/compliance/CYA mechanism, and realizing this can definitely help your career.
I've been a software engineer for decades and even so, teammates will have good ideas sometimes. Nobody can think of every good idea every time.
I get those frequently, and they are usually reasonable suggestions, and I usually graciously accept them. But I say it's not worth bringing up in a PR because 99% of the time it doesn't actually matter in the long run. Both forms will have been used in the code base previously, and which one you think is better really comes down to which one you use more. It's the kind of thing you could change with a sufficiently opinionated linter. And the kind of thing that isn't worth the cost of mandatory PR reviews.
The 2nd is where someone suggests changes to overall architecture or changes to the way the feature or code works large scale.
These are much rarer, for several reasons:
1. PR reviews are almost always surface level and so are more likely to catch category 1 than category 2. The incentives at nearly every tech company push for this.
2. Very frequently one person isn't available to review all of the PRs that go into a feature, so the reviewers lack context.
3. Even if someone wants to dig deeper, it's very unlikely they have the free time to spend even 1% of the time the person who wrote the PR spent on it.
But the biggest reason I personally don't get many of the 2nd category is because I talk through non-trivial problems with other experienced engineers before I get to the PR stage.
Certainly I agree architectural decisions shouldn't be made in every PR and ideally should be discussed before that point.
"changes to overall architecture or changes to the way the feature or code works large scale"
I'm not saying there's never a reason to go back and redesign something after a PR review, but in my mind getting a design to that point and then actually needing to change it is a huge failure.
Far more common is the case where someone wants a big design change with no tangible benefits, just different tradeoffs.
I just don't think the occasional benefit is worth the cost of the process.
What I said was those kind of requests usually aren't meaningful or impactful long term. They very very rarely make or save anyone a single dollar. Let alone enough money to justify the time spent on the review process.
If you've already spent the time to suggest the change, if it's slightly better, sure I'll make it. Even if it's not any better, as long as it's not worse, if you feel strongly about it, there's a good chance I'll go along. I just don't think the process has a positive ROI, and I've yet to see any data to convince me otherwise.
You’d kill your team’s velocity if you did this for every PR.
I worked on a team once with… Perfectionist Petra. Petra would jump on a 1200-line refactor that was blocking the team and demand changes that would require it to be rewritten. Not changes that would save the code base from grievous error: Petra didn’t like the pattern used or didn’t approve of the design.
Sufficient tests? Check. Linted and formatted? Check. Does Petra approve? Big variable. I often wanted to ask Petra to just write the code out for me in a ticket so I could copy it for them. Instead I had to try to read Petra’s mind or hope they wouldn’t jump on my PR.
Sometimes you have to trust your teammate and not let the perfect plan interfere with a good enough one.
The point was that I'm a planner. I tend to discuss designs long before I begin writing code. By the time I get to a PR I don't expect to get feedback about the design. I'm looking for someone to check that the code is acceptable to merge. If you have problems with the design, it is too late!
However I work with folks who are not planners. They will write a PR as a proposal. Their expectation is that they may need to iterate on the code a bit based on that feedback; even the design and approach itself is up for review. If you have problems with the design, it's fine!
What has worked well for me in those situations in the past is to be up-front about what kind of feedback you're seeking. Mention in the PR description that you're looking for design crit, code review, etc. Whatever lingo works for your team.
Update: Realizing at the last minute that the design itself is wrong, and saying so, is a good thing -- and I do appreciate that. However it should be rare. My pet peeve is more with the folks who nitpick... whose design contributions are superficial and concerned with style more than substance.
That said, sitting down beforehand and planning is useful for the big picture, but one of the big benefits of PRs, as I explained earlier, is to eliminate lots of wasted time going back and forth in meetings about how to implement something, and instead empower engineers to go write code and then iterate on actual implementations rather than endlessly discussing ahead of time. So there is a balance: make sure everyone is aligned on what you're building, but also be open to suggestions on your PRs. Sometimes that may mean rewriting large chunks in exchange for drastically reducing pre-coding discussions.
I think I'm more concerned with nitpickers, who show up in different flavours.
I understand a functional requirement to rewrite a piece of code in a PR. Perhaps you spent the last 8 hours coding up a solution that passes all of the tests but if you use that data structure at scale it wouldn't perform acceptably. That's a rewrite and that sucks but it will be for the better.
It has value particularly when the team has newcomers or less-experienced developers. Writing it off as 'CYA' discards an opportunity for async feedback to be given on work.
However, I think that PR reviews after the code has been written are the absolute worst time to do this.
But at the same time not everything needs to be asynchronous, and if you try to make everything asynchronous, such that you never have ad hoc meetings or synchronous conversations, you're going to move much slower than is necessary. I've been exclusively remote for 10 years now, so I love doing things async when I can, but I recognize the limits.
By the time my PR is ready to merge, the assumption should be that it's ready to merge, unless you see something actually wrong. Not just "hey, this could be a little easier to reason about if you completely redesigned your solution."
9/10 when people do make those kinds of suggestions it's not objectively better, and 9/10 changing it won't move the needle with respect to maintainability or meaningful performance.
So given that we have a funnel of reviewers where most people are only going to do surface level reviews, of those that do deep dives most of them aren't going to find any problems, of those that do most of the solutions aren't going to make any kind of significant impact. Is it worth going through this whole process? My conclusion is that no it is not.
Comparing the software we produced before the whole PR review for every merge request thing became common, and the software we produce now, I don't think software quality has improved at all. So for me to change my mind, I'm going to need to see hard data that says the opposite.
The 9/10 number seems incongruent with my experience, but that could be because I often help set the PR review culture, and encourage people to identify the questions: is it "how" (…is it working), "what" (…is it doing), or "why" (…is it done this way). If you recognize those, it makes it way easier for the author to address the specifics without a total redesign. (How = hard to follow the logic, what = better names needed, why = comments needed).
But sometimes a redesign can be super compelling to everyone, and the question is: do we want to be stuck with the subpar version? Because really, the closer you are to remembering how it all works, the easier it is to rewrite it. Not later, when it has faded from memory and become overgrown with dependent code.
> Comparing the software we produced before the whole PR review for every merge request thing became common, and the software we produce now, I don't think software quality has improved at all. So for me to change my mind, I'm going to need to see hard data that says the opposite.
The kind of subjective reviews we're discussing (not security, performance, consistency, edge case issues, or major simplifications) are not really about quality, they're about your specific team understanding your specific code. Help your colleagues (just the small circle of maintainers) have more confidence in coming in later to change this code. If you're working alone, this isn't as big an issue.
To collect data on this, you would need to witness what happens when everyone architects their own way, and then you have to find the person who wrote that part to decipher it, and compare the amount of slowdown that can cause. Nearly impossible to measure in practice, I think.
> not everything needs to be asynchronous
I completely agree. PRs provide great tools for discussing code asynchronously so it's good to lean on them sometimes, and discuss the work on a call other times. This is however a big departure from your original statement that PRs are the absolute worst time to review the code.
I highly doubt you'd be able to do this outside of a small company, or a small self-contained team.
I've seen those exact questions in PR templates before, but they almost always end up ignored or answered in the most obvious surface level manner. I see no reason you'd be able to reliably get reviewers to do any better. Really understanding a PR takes time, and the time to do that is one of the first things that gets jettisoned when anyone is under any kind of pressure to deliver.
>But sometimes a redesign can be super compelling to everyone, and the question is: do we want to be stuck with the subpar version?
If it's really going to make a difference, sure. The point is: is this the case frequently enough for the process to have a positive ROI? I don't think it is.
>are not really about quality
If they aren't improving quality, then they better be improving velocity. I highly doubt this is the case. I've never personally seen it. Given the enormous cost, I want hard evidence to justify that cost.
>when everyone architects their own way,
That's not the alternative; before mandatory PR reviews, we still had design reviews, white board sessions, code reviews, pairing sessions, planning meetings, etc.
>This is however a big departure from your original statement that PRs are the absolute worst time to review the code.
Catching a problem just before merge after someone has written everything they plan to merge is self-evidently the worst time to catch a problem (next to after it has been merged of course). Now you can argue that it may be the most likely time to catch a problem, but there's no arguing that it wouldn't have been better if the problem had been caught earlier.
My argument is that in addition to being the worst time to catch the problem, it's not actually catching real problems enough to be useful.
Not templates. The culture HAS TO be for everyone to calm down and treat PR reviews as first class citizens, not unwanted distractions. If y'all stop stressing each other out that reviews are bs, I'm sure they'll stop being that.
> […] we still had, design reviews, white board sessions, code reviews, pairing sessions, planning meetings etc...
You know that feeling when you get a design idea, you start implementing, and a single plot twist makes you realize it doesn't fit? What do you think happens when a white board session bumps against reality? That's likely a session down the drain.
What if we replace it with the following process:
1) Discover all the plot twists up front by writing out the solution. Publish a PR.
2) When a reviewer comes in, give them time to grok it, welcome redesign ideas.
3) Easily assess and validate their ideas using the stupid amounts of expertise you temporarily gained by having written a working solution.
4) Discuss or proceed with fixups.
How can it get more concrete and grounded in reality than this? One of you knows every tree, the other one can still see the forest. I'd argue, you rarely need to talk before this point. This is the best point.
(Obviously you shouldn't just suffer in silence if you're stuck or your diff is getting out of control. Run it by somebody sooner than later.)
> I highly doubt you'd be able to do this outside of a small company, or a small self-contained team.
Thankfully most companies or teams are indeed small.
If you do have people who trust you, a slack DM when the PR goes up isn’t bugging them. Maybe they just don’t pay much attention to GH notifications.
When performance reviews came around I was criticized for not getting enough story points merged (because I couldn't get them reviewed) and the fact that I was single handedly responsible for reviewing almost all of the PRs was not deemed to be a valuable contribution to the team.
And I bet you learned to stop doing reviews so quickly.
Your coworkers may have already figured out where the disincentives were.
It's very strange that most companies (maybe GitHub is to blame for this) have something like a free-for-all where anyone can approve anything, as long as they themselves aren't the author. As other comments have mentioned, this reliably creates reviewer cliques as engineers seek out the path of least resistance to get changes in. Those who haven't figured out the game often lose hours of their time to nitpickers and pedants without anything better to do.
This also results in a slow decay since most of the code ends up being written by people who are good at getting code merged, and not people who are good at building the software.
At some employers, particularly the Scrum point counting ones, PRs were unrecognized work, so everyone avoided them unless hounded and then people traded PR approvals at the end to score more points as “done.”
At employers that didn’t care about points complete per sprint, it depended on the overall importance of the work, with team leads jumping in to get reviews done. But even then, as it wasn’t recognized as work in any way, during crunch it got abandoned.
My reviewing doesn't show up in Jira under the amount of work I completed.
No performance review of mine has ever mentioned reviewing code.
In summary, there is minimal credit to be had from doing the work and even when there is credit, nobody lets you exchange that credit for much.
Yes, as a rule reviewing is considered work, but it is not work anyone gives you much credit for doing. And since an individual's incentives aren't aligned with the company's most of the time, that makes it not worth prioritizing.
So I suppose it is recognized as work, but it is the least rewarded of the work you could be doing.
I have received praise for my review, I have had it mentioned by my manager, including in performance reviews, I have thankfully had leadership emphasise to everyone on the team how much it matters, and I've had great retrospectives on how we can improve the quality and speed of review that resulted in further meaningful improvements.
Changing culture on this is hard, so maybe the answer is just to find a better culture elsewhere, but I can assure you that it does exist.
In general for small PRs I consider a consistent latency of over 24 hours to be long and mention it negatively in peer review.
I find most of my PRs being reviewed by my team, on the same timezone, are reviewed within a few hours, but I suspect a big part of that is that most of my PRs are very small (they're actually CLs not PRs, and closer to a single commit in some ways, although hard to draw specific parallels to Git).
It depends on the language/ecosystem a bit, but normally I'd try to keep changes under 100 lines for an efficient PR process. Up to 500 lines is manageable but much slower. Over that is just impossible to do good review on.
Something that my previous company had a lot of success with was review by commit. Making sure that the series of commits in a PR "tells a story", with each one being simple and obvious, and then instructing reviewers to review commit by commit. It can speed things up and increase the quality of review substantially, but does require a bit more work from the author.
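For what it's worth, the mechanics of that are mostly just interactive rebase before requesting review. A rough sketch, where the base branch and test command are placeholders:

    # Reorganize the branch into a sequence of small, self-explanatory commits
    # before opening the PR. This rewrites local history, so do it pre-review.
    git rebase -i origin/main

    # In the todo list that opens, reorder/squash/reword until the history
    # reads as a story, e.g.:
    #   pick  <sha>  Extract client interface (no behavior change)
    #   pick  <sha>  Add retry logic to the client
    #   pick  <sha>  Wire the new client into the request handler

    # Optional sanity check that every commit builds and passes tests on its
    # own ("make test" stands in for whatever your test command is):
    git rebase -i --exec "make test" origin/main

Reviewers can then step through the PR one commit at a time instead of facing one big diff.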
I work with a guy who, until I said "calm down", would share his PR links in the team chat several times a day. He refused to learn rebase, or seemingly anything other than a standard Git workflow of checkout, commit, push, merge, repeat. He has since improved on both fronts.
Not knowing how to rebase means that in order to stay up-to-date with a shared branch you would have to merge it each time and thus produce something akin to a spruce tree in your commit history.
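For anyone following along, the two ways of keeping a feature branch current look roughly like this (branch names are placeholders):

    # Merge-based update: pulls main into the feature branch and leaves a
    # merge commit each time -- this is what produces the "spruce tree" history.
    git fetch origin
    git merge origin/main

    # Rebase-based update: replays the branch's commits on top of the latest
    # main, keeping the history linear.
    git fetch origin
    git rebase origin/main
    # Resolve any conflicts, then update the remote branch:
    git push --force-with-lease

The force-with-lease push is the usual safety net when rebasing a branch that only you push to.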
Obviously, there's going to be exceptions for large changes or big rewrites on the one hand, or for tiny label changes on the other hand.
PS. Reading the comments here, people seem to work in some pretty dysfunctional environments. Get the team together and establish some guidelines that everyone agrees on. Review time should be predictable, and the work should be reasonably shared.
I often find the reviews to be helpful. People come up with edge cases I haven't thought of, or notice things to clean up. I try to proofread everything before asking for reviews, but after I've stared at my own diff for a while, it's hard to read it with fresh eyes.
Sometimes we ask for reviews on more experimental PRs too, just to get early design feedback before wasting too much time on polishing the changes.
The change request must either make the code objectively better (i.e. performance, memory, security, adherence to a consistent architectural pattern, simplification), or subjectively clearer (it's hard for you to understand what is happening, how it happens, or why it's done this way).
If the reason is solid, then the feedback is usually well received. Except in unfortunate environments where PRs are treated as annoying disruptions. I hope I never have to work in that kind of place.
> Aim for a response within 1 business day. This need not be a full review, but could be a comment setting expectations about when you’ll be able to review the patch, pointing to other folks who might be able to review it sooner or would be more appropriate to review the patch.
https://firefox-source-docs.mozilla.org/browser/FrontendCode...
To speed up reviews that don’t require a particular individual’s approval, some Firefox teams create (Phabricator) reviewer groups for different feature areas (such as graphics or Windows code) so anyone in that group can approve a PR.
My most common response is "needs no further review if these points are addressed". On rare occasions my only response is "need more time". Some changes take time to settle in my mind. No good comes from rushed comments.
There is an expectation not to haggle over dumb shit. When you decide to pick a battle, pull in a second reviewer for their opinion.
I'm sure you've heard the "trick" of splitting up large merges to make them easier. Yes this is good up to a point. Other than that, it really is team commitment to timely reviews. No tricks to it.
We also have a dedicated thread for PR review requests in our private team chat channel, which helps bring visibility to our-team-specific PRs that can get lost in the churn of github notifications from all the repos we’re involved with.
My last company was much more methodical with actually reviewing code but honestly? I kind of like the trust in developers and we're able to get code out a lot quicker.
Lately, I don't want code to be perfect, because my code never was in the first place. I want it to be just good enough for me or anyone else in the team to iterate on.
It doesn’t matter how much you wait for review of a PR if you can just build on top of it.
Also, this workflow encourages smaller PRs that are faster and easier to review.
We implement this at my company through a tool called Graphite and review times usually don’t exceed 1-2 days.
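For anyone who hasn't seen the stacked workflow, the underlying git mechanics are roughly this (Graphite mostly automates the bookkeeping; branch names here are made up):

    # Part 1 is out for review on its own branch.
    git checkout -b feature-part-1 main
    # ...commit, then open a PR: feature-part-1 -> main

    # Don't wait for the review: stack part 2 on top of part 1.
    git checkout -b feature-part-2 feature-part-1
    # ...commit, then open a PR: feature-part-2 -> feature-part-1

    # Once part 1 merges, move part 2 onto main and retarget its PR.
    git fetch origin
    git rebase --onto origin/main feature-part-1 feature-part-2
    git push --force-with-lease

Each PR in the stack stays small, which is what keeps them fast to review.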
To me the ideal system would have a reviewer assigned at the start that is expecting to review and you keep them updated on the design.
I hate that. Mine have a much slower time to review; sometimes I eventually give in and share them too.
* After each change you have to review the whole MR, either because you have to rebuild the context or because it's not obvious how the new changes interact with the old ones. Hopefully the new review won't take as long as the first one, but it's still linear in the MR size, and the number of change requests is also likely linear in the MR size, so the total is already quadratic (see the sketch after this list).
* More iterations often mean higher review fatigue, at some point I don't want to rehash the same code over and over again, and I tend to postpone bigger MRs in favor of smaller ones.
* Larger MRs are correlated with the author not having a full grasp of what they are doing, and therefore being unable to segment the work properly and organize the code properly. Sometimes there are not many alternatives to a large MR, so that's not a hard rule, but there is certainly a decent correlation.
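To spell out the back-of-the-envelope arithmetic behind the first point (the constants are made up; only the scaling matters):

    \text{rounds of review} \approx a n, \qquad \text{cost per round} \approx c n
    \;\Rightarrow\; \text{total reviewer effort} \approx (a n)(c n) = \Theta(n^2)

where n is the MR size. Halving n cuts the expected review effort roughly by four, not by two, which is one more argument for splitting.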
In my experience (admittedly at one company, which is not necessarily statistically meaningful), when I began doing code review it helped with my own MRs getting reviewed (and also contributed positively to performance reviews). First and foremost because it helped me get a better understanding of the whole code base, not just the parts that I ended up touching; and then, I suppose, also because it made me more trustworthy in the eyes of the code maintainers.
I think it comes down to two main pieces: culture & tooling.
Culture:
- Reviews are part of the job. A solid senior developer isn't just writing code all day -- there are many aspects to the job, and reviews are an important one.
- Reviews shouldn’t be viewed negatively or as something we "have to do." They’re opportunities to learn, teach, collaborate, and improve as a team.
- Make small, well scoped PRs. Do not try to roll a bunch of changes into one PR. It makes it harder to review, riskier to merge, and harder to roll back.
- Reviews should be owned by the team. Often there doesn't have to be just one developer who can do a review. If a PR is waiting on a developer who has a morning full of meetings or is fixing a high priority bug, then another team member should pick up the review.
- Prioritize code ready for review over code still being written. Code in review represents significant business effort (customer feedback, planning, design, development), but its value isn’t realized until it’s merged. Prioritizing reviews can help deliver that value faster.
- It’s awesome as an author to get a quick review, so "treat others as you’d like to be treated."
- (Likely not feasible everywhere, but helpful for us) Skip reviews for trivial changes like copy updates, test fixes, or non-critical dependency upgrades.
Tooling:
- Automate as much as possible. Linting, tests, and other automated checks can reduce manual review effort.
- Make sure there's a process for everyone getting notifications quickly. It turned out some devs would use GitHub notifications and only checked those about once a day, while others got real-time Slack alerts (guess which group tends to complete reviews faster). A rough sketch of one way to do this follows below.
- Have a way to set and track your review time target as a team, and try to build an automated way to keep the team accountable and aware of where they're at.
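As a minimal sketch of the notification point above (the repo name and webhook are placeholders; it assumes the GitHub CLI, jq, and a Slack incoming webhook, and a real setup would likely use a proper Slack app instead):

    #!/usr/bin/env bash
    # Post a reminder of open PRs to a Slack channel; run from cron or a scheduled CI job.
    open_prs=$(gh pr list --repo your-org/your-repo --state open \
      --json number,title,url --jq '.[] | "#\(.number) \(.title) \(.url)"')

    if [ -n "$open_prs" ]; then
      text="PRs awaiting review:"$'\n'"$open_prs"
      curl -X POST -H 'Content-type: application/json' \
        --data "$(jq -n --arg text "$text" '{text: $text}')" \
        "$SLACK_WEBHOOK_URL"
    fi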
Obviously some of these things aren't easy to change and take time. I'm fortunate enough to be a TL so I was able to influence my team and the engineering department more directly, but I think a respected developer can bring forward this kind of change too. To also address some of these points for our department, I built a Slack <> GitHub app to help our team move reviews faster. Feel free to try it if you're interested: https://pullpro.dev I’ve been working on making it accessible outside our org and would love to hear any feedback if you give it a go!
It's a queuing problem. See: https://www.johndcook.com/blog/2008/10/21/what-happens-when-...
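To make the queuing point concrete, the textbook M/M/1 formula (an idealization, not a claim about any particular team) gives the expected wait in queue as

    W_q = \frac{\rho}{\mu (1 - \rho)}, \qquad \rho = \frac{\lambda}{\mu}

where λ is the arrival rate, μ the service rate, and ρ the reviewer utilization. Pushing utilization from 80% to 95% multiplies the expected wait by nearly five (0.8/0.2 = 4 vs 0.95/0.05 = 19, in units of the average review time), which is why keeping some slack matters.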
The slowness of processes that involve other people will frustrate you until you realize it is an ordinary part of the work.
My advice is to behave in a way that makes you the kind of person other people will want to work with. Turn your work around quickly. Be patient with other people's turn around time.
Good luck.
However, we're onboarding a lot of newer developers and actively trying to slow this pace now. CRs need a lot more scrutiny in this case.