What I like most is that you’re inserting context at the moment of change (PR time) instead of relying on people to proactively search docs. That’s where most documentation systems fail — they require memory and initiative.
One question I have: how do you prevent this from becoming noisy over time? In my experience, the biggest risk with automated PR comments is that teams start ignoring them once the signal-to-noise ratio drops.
Re: noise — I agree, that’s the main failure mode for any PR bot.
A few things I did to keep signal high:
- Only trigger on explicit file patterns (no fuzzy matching by default)
- Idempotent comments: the Action updates its existing comment instead of spamming new ones
- If multiple decisions match, it groups them into a single comment
- Severity levels (info / warning / critical) so teams can tune strictness
- Optional "fail critical" mode, so it never blocks PRs unless explicitly configured to
The goal is that it behaves more like a linter than a chatbot — predictable and quiet unless something clearly matches.
If it ever becomes background noise, it's failed.
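For the curious, the idempotency bit is mostly bookkeeping: the bot embeds a hidden HTML marker in its comment body, then on each run it looks for an existing comment carrying that marker and updates it rather than posting a new one. A rough sketch of that decision step in TypeScript (the marker string, types, and function name are illustrative, not this Action's actual code):

```typescript
// Hidden HTML comment used to recognize the bot's own comment on later runs.
// (Illustrative marker; the real Action may use something different.)
const MARKER = "<!-- adr-bot -->";

interface PrComment {
  id: number;
  body: string;
}

type UpsertPlan =
  | { action: "update"; id: number; body: string }
  | { action: "create"; body: string };

// Pure function: given the PR's existing comments and the new report,
// decide whether to update the bot's old comment or post a fresh one.
function planUpsert(existing: PrComment[], report: string): UpsertPlan {
  const body = `${MARKER}\n${report}`;
  const mine = existing.find((c) => c.body.includes(MARKER));
  return mine
    ? { action: "update", id: mine.id, body }
    : { action: "create", body };
}
```

In the Action itself the two branches would map onto the GitHub REST calls for updating and creating issue comments; keeping the decision in a pure function makes the "never spam" behavior easy to test in isolation.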
Why do I want all this extra stuff?
With markdown in the repo and agents available everywhere... what makes this approach better? (PS: the practice of coding has fundamentally changed forever; we are at the beginning of a paradigm shift, the 4th wave for any Toffler fans.)
This is still just markdown in the repo. The Action doesn't replace ADRs; it just surfaces the relevant ones automatically in PRs so reviewers don't have to remember to look for them.
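Mechanically, "surfacing" can be as simple as it sounds: each ADR declares which file patterns it applies to, and the Action intersects those patterns with the PR's changed files. A sketch of that matching step in TypeScript (the glob handling and ADR shape here are hypothetical, not this Action's actual format):

```typescript
// Minimal ADR record: a title plus the file patterns it governs.
// (Hypothetical shape for illustration.)
interface Adr {
  title: string;
  patterns: string[];
}

// Convert a simple glob to a RegExp: "**" crosses directories, "*" does not.
function globToRegExp(glob: string): RegExp {
  const escaped = glob
    .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
    .replace(/\*\*/g, "\u0000")           // stash "**" so "*" doesn't eat it
    .replace(/\*/g, "[^/]*")              // "*" matches within one path segment
    .replace(/\u0000/g, ".*");            // "**" matches across segments
  return new RegExp(`^${escaped}$`);
}

// Return the ADRs whose patterns match any file changed in the PR.
function matchAdrs(changedFiles: string[], adrs: Adr[]): Adr[] {
  return adrs.filter((adr) =>
    adr.patterns.some((pattern) => {
      const re = globToRegExp(pattern);
      return changedFiles.some((file) => re.test(file));
    })
  );
}
```

The "no fuzzy matching by default" point falls out of this naturally: if no declared pattern matches a changed file, the Action stays silent.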
In teams where people consistently check ADRs, this probably isn’t useful.
In teams where the ADR exists but nobody remembers it during review, this helps reduce that friction.
And yeah — the Mongo example was dramatized. The real version was just re-explaining a past decision in a design doc. Not catastrophic, just wasted cycles.
Appreciate the sanity check.