So now you're looking at a PR that at face value looks good, but doesn't reflect the author's skill and understanding of the subject.
Which means you now shift more work to the owners of the codebase, as they have to go through those verification steps themselves.
I don't mind if a review is AI-assisted. I've always been a fan of the whole "human in the loop" concept in general. Maybe the AI helps them catch something they'd normally miss or gloss over. Everyone tends to have different priorities when reviewing PRs, and it's not like humans don't have lapses in judgement either (I'm not trying to anthropomorphise AI, but you know what I mean).
My stance is the same about writing code. I honestly don't mind if the code was written in `ed` on a Linux-powered toaster from 2005 with a 32x32 screen, or if they wrote it using Claude Code 9000.
At the end of the day, the person who's submitting the code (or signing off a review) is responsible for their actions.
So, in a roundabout way, to answer your question: I think AI as part of the review is fine. As impressive as their output can sometimes be, it can be both impressively good and impressively bad. So no, relying only on AI for review is not enough.
On the other hand, I haven't paid Node any money, and I believe many of us never have, so it feels weird to dictate their approach.
Such a PR should be rejected simply because of the sheer size of it, regardless of AI use. Seriously, who submits a 19k-line PR? Just make many small ones.
I suggest EVERYONE in this thread go read the GitHub PR in question. There are some good arguments for and against AI, and what it means for FOSS... But good lord, you will have to sift through the virtue signalling bullshit and have patience for the constant moving of goalposts.
Also, there's no mention at all of test coverage, or of the impact, if any, on existing code paths.
@indutny explains their views in that thread.
That to me seems to match the definition of survivorship bias quite well?
A person who posts slop for whatever reason, or runs bots that post slop, will simply ignore such a policy.
An honest person, who cares about the quality of their contribution and genuinely wants to improve the project, will be more limited in their choice of tools for doing so.
So, this policy only serves to limit honest contributors, while doing absolutely nothing to stop spammers/slopposters.
Stop treating this like it's going to go away. We need actual solutions for the FOSS community that make reviewing AI assisted work tractable.
I don't think it should be up to reviewers and maintainers to put in the work to figure that one out. You want to "disrupt" the open-source pipeline? Fine, then you must propose a solution for the problems that your disruption is now causing.
Come up with a system so that I, a maintainer, can review a large volume of AI-generated PRs where the contributor often has neither the inclination nor the skills to assess the quality of what they're proposing.
The system must be effective at preventing me from wasting time on very obvious slop, and it must also work offline and be free, because most maintainers are unpaid volunteers.
If you can offer that solution, I'm sure more projects would be open to giving carte blanche to AI-authored PRs.