It also shows a broken culture, one where the other reviewers were neither engaged with nor committed to the success of the feature. When a bug escapes, both the author _and_ the reviewer are at fault.
A broken culture would be expecting engineers to find every last bug in other people's code and blaming them for bugs.
Just trace the Whys:
1) Why did the bug happen? These lines of code.
2) Why wasn't that caught by tests? The test was broken / didn't exist.
3) How was the code submitted without a test? The reviewer didn't call it out.
Arguing about percentages doesn't matter. What matters is fixing the team process so this doesn't happen again. That means formally documented standards for test coverage, plus education and team-culture building so people understand why it matters.
It's not "just the author's fault". That kind of blame game is toxic to teams.
In a good culture it's "our code" where everyone shares the responsibility for quality, performance, and functionality.
I've found this to be key for both giving and receiving feedback. Even small breaking commits are better in a lot of cases, because they invite feedback in a way that 1000+ line changes don't.
Do you happen to know how your tool compares to spr? How production-ready is it?
I have been using this for a few months now, and it has served me well! I haven't spent much (any) time marketing it, so I haven't really had any feedback from other users yet. Feel free to check it out and let me know if you have any suggestions. It's also open source, so issues and PRs on GitHub are welcome.
In the same vein: the author says the other feature took several releases to become stable. Were those releases purely bug fixes, or did they give the engineer a chance to get early feedback and incorporate it into the feature?
It’s clear that the author prefers a slow and careful approach, and he judges “success” and “failure” by that metric. It sometimes is the right approach. It sometimes isn’t.
It also has some UX snafus that cause reviewers to write a number of comments and then forget to publish any of them, leading to a lot of interactions along the lines of "I thought you were going to review my PR?" "I already did?"
It's great if you have a team that does code reviews. It works less well for reviewing contributions from external contributors on an open-source project, as the contributor likely just wants to get their PR merged and doesn't want to learn a special reviewing tool.
No affiliation, just a happy customer.
For the most part, code reviews should be optional: if you want a review from someone, tag them on your PR and ask. If someone you didn't tag spots something after your PR has landed, you can always figure it out and follow up with a fix.
I'd make an exception for especially fragile parts of the code, but ideally you can refactor, build tests, or do something else that doesn't require a blocking code review to land changes.
By putting breakpoints in the code and seeing what the changed lines do, I can compare the output line by line with my mental model of what should be happening, which helps me thoroughly verify the changes.
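For what it's worth, here's a minimal sketch of that workflow in Python; the function is a made-up stand-in for the changed lines, and the built-in `breakpoint()` drops into pdb so you can compare actual values against your mental model as you step:

```python
# Hypothetical function standing in for the changed lines under review.
def normalize_scores(scores):
    total = sum(scores)
    breakpoint()  # drops into pdb here: `p total` to inspect, `n` to step a line
    return [s / total for s in scores]

# Run with a representative input and walk the changed lines one by one.
print(normalize_scores([2, 3, 5]))
```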
Maybe we're in a very different context, but to me if you can't understand what the code does without a debugger, then you ask the author to make the code clearer.
In the long term, the pressure to get work done faster always wins out over other concerns, and you end up with largely surface-level speed reviews that don’t catch much of anything. At best they tend to enforce taste.
In 20 years across many companies and thousands of PRs, I’ve never had a reviewer catch a single major bug (a bug that would have required declaring an incident) that would have otherwise gone out. I've pushed a few major bugs to production, but they always made it through review.
I’ve been doing this since well before everyone was using GitHub and requiring PR reviews for every merge. I haven’t noticed a significant uptick in software quality since the switch.
The cost is high enough that I’d like to see some hard evidence justifying it. It just seems to be something that people who have never worked any other way take on faith.
Good thing reviews aren't just about catching bugs.
However, catching bugs is always going to be at or near the top of the list, so clearly it’s at least partially about catching bugs.
I’d argue that catching bugs, along with ticking a compliance checkbox (which is only there because someone thinks reviews catch bugs and malicious code), are the two primary reasons the business side of the company cares about or requires code reviews in the first place.
At my current job, code review requirements are set on a per-folder basis, with many folders not requiring a review. People ask for a review because they want one (or, sometimes, they don't; for example, I don't ask someone to review every quick one-liner in the systems I'm an expert in).
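One way to get that kind of per-folder setup, on GitHub at least (other forges and internal tools differ), is a CODEOWNERS file combined with the "Require review from Code Owners" branch-protection option; the paths and team names below are made up:

```
# Hypothetical CODEOWNERS: PRs touching these paths need an owner's approval.
/billing/   @example-org/payments-team
/auth/      @example-org/security-team
# With required approvals set to 0, paths not matched by any rule
# can merge without a review.
```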
I saw no proof that the later review actually happens for all pushes.
> Code reviews have a bad rap: they are antagonistic in nature and, sometimes, pure red tape.
I wonder if folks know that this is a job? What are you gonna do, not do it? Cry at night because you forgot for the hundredth time to add token strings to translation files instead of hard-coding them? Come on.
An AI tool that could convert large-scale changes into a set of small commits would be amazing.