88 points by philippta 6 days ago | 73 comments
  • viccis 3 hours ago
    Open source? Close it and ask them to resubmit a smaller one and justify the complexity of things like a DSL if they wanted it included.

    For work? Close it and remind them that their AI velocity doesn't save the company time if it takes me many hours (or even days depending on the complexity of the 9k lines) to review something intended to be merged into an important service. Ask them to resubmit a smaller one and justify the complexity of things like a DSL if they wanted it included. If my boss forces me to review it, then I do so and start quietly applying for new jobs where my job isn't to spend 10x (or 100x) more time reviewing code than my coworkers did "writing" it.

    • oarsinsync an hour ago
      > If my boss forces me to review it, then I do so and start quietly applying for new jobs where my job isn't to spend 10x (or 100x) more time reviewing code than my coworkers did "writing" it.

      Another equally correct approach (given the circumstances of the organisation) is to get a different AISlopBot to do the review for you, so that you spend as much time reviewing as the person who submitted the PR did coding.

      • ffsm84 3 minutes ago
        That only works if you're not personally responsible for the code you review, too.
        • throwup238 32 minutes ago
          Just don’t give the AI agent an “approve_pr” tool. It can only comment or reject.
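
          A minimal sketch of that idea, with a hypothetical tool registry (none of these names come from the thread): the review bot is handed tools for commenting and requesting changes, but no approval tool, so a human still has to click approve.

            // Hypothetical review-bot tool registry in TypeScript.
            // "approve_pr" is deliberately absent from the union type,
            // so the agent can comment or reject but never approve.
            type ReviewToolName = "comment_on_pr" | "request_changes";

            interface ReviewTool {
              name: ReviewToolName;
              run: (prId: string, body: string) => Promise<void>;
            }

            const reviewerTools: ReviewTool[] = [
              { name: "comment_on_pr", run: async (prId, body) => { /* call the forge API */ } },
              { name: "request_changes", run: async (prId, body) => { /* call the forge API */ } },
            ];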
          • kortilla 11 minutes ago
            But then what? At the end it’s still on you to approve and you have no idea what is hiding in the code.
    • krackers 2 hours ago
      > then I do so and start quietly applying

      In this job market? And where pretty much every company seems to be following the top-down push for AI-driven "velocity"?

      • viccis an hour ago
        That's why I would start applying instead of just quitting. There are plenty of companies that use AI responsibly or not much at all.
        • xeonmc an hour ago
          This is why we need a programmer union, so that coders can collectively reject reverse-centaur slopwork, like miners rejecting asbestos mines or workers refusing to fix dangerous machines while they're running.
          • immibis 15 minutes ago
            Are AI slop reviews threatening to your life?
      • nextlevelwizard 2 hours ago
        When you are applying from a job, you are more desirable and you aren't desperate, so you can take your pick. If your current job is bad then you can't really lose much.

        Otherwise you need to be the person at the company who cuts through the bullshit and saves it when the VibeCodeTechDebt bubble pops across the industry.

      • zwnow an hour ago
        The market only sucks for devs that lack experience or have a skillset that's oversaturated. If you only know React and Python I'm sorry, but there are like 20 million devs just like you, so the one that's willing to work for the smallest coin is going to win.
  • throwawayffffas 6 hours ago
    > How would you go about reviewing a PR like this?

    Depends on the context. Is this from:

    1. A colleague in your workplace. You go "Hey ____, that's kind of a big PR, I am not sure I can review this in a reasonable time frame. Can you split it up into more manageable pieces? PS: Do we really need a DSL for this?"

    2. A new contributor to your open source project. You go "Hey ____, Thanks for your interest in helping us develop X. Unfortunately we don't have the resources to go over such a large PR. If you are still interested in helping please consider taking a swing at one of our existing issues that can be found here."

    3. A contributor you already know. You go "Hey ___, I can't review this, it's just too long. Can we break it up into smaller parts?"

    Regardless of the situation be honest, and point out you just can't review that long a PR.

    • MartijnHols 20 minutes ago
      Telling a new contributor no thank you is hard. Open source contributors are hard to come by, and so I’ve always dealt with PRs like this (albeit before AI days but from people who had never written a line of code before their PR) by leaving a message that it’s a huge PR so it’s going to take a while to review it and a request to make smaller PRs in the future. A couple of times I ended up leaving over a hundred review comments, but most times they were all fixed and the contributor stuck around with many better PRs later.
  • jonchurch_ 6 hours ago
    We are seeing a lot more drive-by PRs in well known open source projects lately. Here is how I responded to a 1k line PR most recently before closing and locking. For context, it was (IMO) a well intentioned PR. It purported to implement a grab bag of perf improvements, caching of various code paths, and a clustering feature.

    Edit: I left out that the user got flamed by non-contributors for their apparently AI generated PR and description (rude), in defense of which they did say they were using several AI tools to drive the work. My response:

    We have a performance working group which is the venue for discussing perf based work. Some of your ideas have come up in that venue; please go make issues there to discuss your ideas.

    my 2 cents on AI output: these tools are very useful, please wield them in such a way that it respects the time of the human who will be reading your output. This is the longest PR description I have ever read and it does not sound like a human wrote it, nor does it sound like a PR description. The PR also does multiple unrelated things in a single 1k line changeset, which is a nonstarter without prior discussion.

    I don't doubt your intention is pure, ty for wanting to contribute.

    There are norms in open source which are hard to learn from the outside, idk how to fix that, but your efforts here deviate far enough from them in what I assume is naivety that it looks like spam.

    • jonchurch_ 6 hours ago
      Daniel Stenberg of curl gave a talk about some of what they've been experiencing, mostly on the security beg bounty side. A bit hyperbolic, and his opinion is clear from the title, but I think a lot of maintainers feel similarly.

      “AI Slop attacks on the curl project” https://youtu.be/6n2eDcRjSsk

  • yodsanklai 4 days ago
    You review it like it wasn't AI generated. That is: ask the author to split it into reviewable blocks. Or, if you don't have an obligation to review it, you leave it there.
    • ivanjermakov an hour ago
      My record is 45 comments on a single review. Merge conditions were configured so that every comment must be resolved.

      If the PR author can satisfy that, I'm fine with it.

      • cryptonym 26 minutes ago
        They will let AI somewhat satisfy it and ask you for further review.
    • userbinator 3 hours ago
      If you try to inspect and question such code, you will usually quickly run into the realisation that the "author" has basically no idea what the code even does.

      "review it like it wasn't AI generated" only applies if you can't tell, which wouldn't be relevant to the original question that assumes it was instantly recognisable as AI slop.

      If you use AI and I can't tell you did, then you're using it effectively.

      • ahtihn 2 hours ago
        If it's objectively bad code, it should be easy enough to point out specifics.

        After pointing out 2-3 things, you can just say that the quality seems too low and to come back once it meets standards. Which can include PR size for good measure.

        If the author can't explain what the code does, make an explicit standard that PR authors must be able to explain their code.

    • ashdksnndck 2 hours ago
      If you ask them to break it into blocks, are they not going to submit 10 more AI-generated PRs (each having its own paragraphs of description and comment spam), which you then have to wade through? Why sink even more time into it?
      • Buttons840 an hour ago
        Being AI-generated is not the problem. Being AI-generated and not understandable is the problem. If they find a way to make the AI-generated code understandable, mission accomplished.
        • ashdksnndck an hour ago
          How much of their time should open source maintainers sink into this didactic exercise? Maybe someone should vibe-code a bot to manage the process automatically.
    • gpm 6 hours ago
      Eh, ask the author to split it in reviewable blocks if you think there's a chance you actually want a version of the code. More likely if it's introducing tons of complexity to a conceptually simple service you just outright reject it on that basis.

      Possibly you reject it with "this seems more suitable for a fork than a contribution to the existing project". After all there's probably at least some reason they want all that complexity and you don't.

    • resonious 6 hours ago
      This is it. The fact that the PR was vibe coded isn't the problem, and doesn't need to influence the way you handle it.
      • gdulli 4 hours ago
        It would be willfully ignorant to pretend that there's not an explosion of a novel and specific kind of stupidity, and to not handle it with due specificity.
        • WalterSear 3 hours ago
          I contend that far and away the biggest difference between entirely human-generated slop and AI-assisted stupidity is the irrational reaction that some people have to AI-assisted stuff.
          • JoshTriplett 6 minutes ago
            Many of the people who submit 9000-line AI-generated PRs today would, for the most part, not have submitted PRs at all before, or would not have made something that passes CI, or would not have built something that looks sufficiently plausible to make people spend time reviewing it.
          • hatefulmoron 3 hours ago
            Calling things "slop" is just begging the question. The real differentiating factor is that, in the past, "human-generated slop" at least took effort to produce. Perhaps, in the process of producing it, the human notices what's happening and reconsiders (or even better, improves it such that it's no longer "slop".) Claude has no such inhibitions. So, when you look at a big bunch of code that you haven't read yet, are you more or less confident when you find out an LLM wrote it?
            • fragmede 19 minutes ago
              If you try to one-shot it, sure. But you can question Claude, point out the errors of its ways, tell it to refactor and ultrathink, point out that two things have similar functionality and could be merged. It can write unhinged code with duplicate unused variable definitions that don't work, and it'll fix it up if you call it out, or you can just do it yourself. (Cue questions of whether, in that case, it would just be faster to do it yourself.)
              • hatefulmoron 8 minutes ago
                I have a Claude Max subscription. When I think of bad Claude code, I'm not thinking about unused variable definitions. I'm thinking about the times you turn on ultrathink, allow it to access tools and negotiate its solution, and it still churns out an overcomplicated yet partially correct solution that breaks. I totally trust Claude to fix linting errors.
            • WalterSear 2 hours ago
              I have pretty much the same amount of confidence when I receive AI-generated or non-AI-generated code to review: my confidence is based on the person guiding the LLM, and their ability to do that.

              Much more so than before, I'll comfortably reject a PR that is hard to follow, for any reason, including size. IMHO, the biggest change that LLMs have brought to the table is that clean code and refactoring are no longer expensive, and should no longer be bargained for, neglected or given the lip service that they have received throughout most of my career. Test suites and documentation, too.

              (Given the nature of working with LLMs, I also suspect that clean, idiomatic code is more important than ever, since LLMs have presumably been trained on that, but this is just a personal superstition, that is probably increasingly false, but also feels harmless)

              The only time I think it is appropriate to land a large amount of code at once is if it is a single act of entirely brain dead refactoring, doing nothing new, such as renaming a single variable across an entire codebase, or moving/breaking/consolidating a single module or file. And there better be tests. Otherwise, get an LLM to break things up and make things easier for me to understand, for crying out loud: there are precious few reasons left not to make reviewing PRs as easy as possible.

              So, I posit that the emotional reaction from certain audiences is still the largest, most exhausting difference.

              • hatefulmoron 23 minutes ago
                I don't really understand your point. It reads like you're saying "I like good code, it doesn't matter if it comes from a person or an LLM. If a person is good at using an LLM, it's fine." Sure, but the problem people have with LLMs is their _propensity_ to create slop in comparison to humans. Dismissing other people's observations as purely an emotional reaction just makes it seem like you haven't carefully thought about other people's experiences.
              • grey-area 2 hours ago
                > clean code and refactoring are no longer expensive

                Are you contending that LLMs produce clean code?

                • WalterSear 2 hours ago
                  They do, for many people. Perhaps you need to change your approach.
                  • dmurray 39 minutes ago
                    If you can produce a clean design, the LLM can write the code.
                    • fragmede 17 minutes ago
                      Unless you're doing something fabulously unique (at which point I'm jealous you get to work on such a thing), they're pretty good at cribbing the design of things if it's something that's been well documented online (canonically, a CRUD SaaS app for a specific niche).
          • exe34 an hour ago
            Are you quite sure that's the only difference you can think of? Let me give you a hint: is there any difference in the volume for the same cost at all?
        • rablackburn 3 hours ago
          > It would be willfully ignorant to pretend that there's not an explosion of a novel and specific kind of stupidity

          I 100% know what you mean, and largely agree, but you should check out the guidelines, specifically:

          > Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative.

          And like, the problem _is_ *bad*. A fun, on-going issue at work is trying to coordinate with a QA team who believe chatgpt can write css selectors for HTML elements that are not yet written.

          That same QA team deeply cares about the spirit of their work, and is motivated by the _very_ relatable sentiment of: you DON'T FUCKING BREAK USER SPACE.

          Yeah, in the unbridled, chaotic, raging plasma that is our zeitgeist at the moment, I'm lucky enough to have people dedicating a significant portion of their life to trying to do quality assurance in the idiomatic, industry best-standard way. Blame the FUD, not my team.

          I would put it to you that they do not (yet) grok what, for lack of a more specific universally understood term, we are calling "AI" (or LLMs if you are Fancy, though none of these labels are quite right). People need time to observe, and learn. And people are busy with /* gestures around vaguely at everything */.

          So yes, we should acknowledge that long-winded trash PRs from AI are a new emergent problem, and yes, if we study the specific problem more closely we will almost certainly find ever more optimal approaches.

          Writing off the issue as "stupidity" is mean. In both senses.

      • cespare 3 hours ago
        It is 1995. You get an unsolicited email with a dubious business offer. Upon reflection, you decide it's not worth consideration and delete it. No need to wonder how it was sent to you; that doesn't need to influence the way you handle it.

        No. We need spam filters for this stuff. If it isn't obvious to you yet, it will be soon. (Or else you're one of the spammers.)

    • danenania 3 hours ago
      I’m curious how people would suggest dealing with large self-contained features that can’t be merged to main until they are production-ready, and therefore might become quite large prior to a PR.

      While it would be nice to ship this kind of thing in smaller iterative units, that doesn’t always make sense from a product perspective. Sometimes version 0 has bunch of requirements that are non-negotiable and simply need a lot of code to implement. Do you just ask for periodic reviews of the branch along the way?

      • wiseowise 15 minutes ago
        > I’m curious how people would suggest dealing with large self-contained features that can’t be merged to main until they are production-ready

        Are you hiding them from CIA or Al-Qaeda?

        Feature toggles, or just a plain Boolean flag, are not rocket science.

      • arachnid92 3 hours ago
        The way we do it where I work (large company in the cloud/cybersecurity/cdn space):

        - Chains of manageable, self-contained PRs each implementing a limited scope of functionality. “Manageable” in this context means at most a handful of commits, and probably no more than a few hundred lines of code (probably less than a hundred tbh).

        - The main branch holds the latest version of the code, but that doesn’t mean it’s deployed to production as-is. Releases are regularly cut from stable points of this branch.

        - The full “product” or feature is disabled by a false-by-default flag until it’s ready for production (a minimal sketch follows this list).

        - Enablement in production is performed in small batches, rolling back to disabled if anything breaks.
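
        As a rough illustration of that false-by-default flag (all names here are hypothetical, not from the comment), in TypeScript:

          // Hypothetical false-by-default feature flags: new code lands on
          // main but stays dark until the flag is flipped in production.
          type FeatureFlag = "newDslParser" | "perfCaching";

          const defaults: Record<FeatureFlag, boolean> = {
            newDslParser: false, // merged, but disabled until rollout
            perfCaching: false,
          };

          export function isEnabled(flag: FeatureFlag): boolean {
            // An environment override lets operators enable per-batch
            // and roll back to disabled if anything breaks.
            const override = process.env[`FLAG_${flag.toUpperCase()}`];
            return override !== undefined ? override === "true" : defaults[flag];
          }

          // Call site: the large feature ships dark until explicitly enabled.
          if (isEnabled("newDslParser")) {
            // runNewDslParser();
          }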

      • JonChesterfield 2 hours ago
        They come from people who have established that their work is worth the time to review and that they'll have put it together competently.

        If it's a newcomer to the project, a large self contained review is more likely to contain malware than benefits. View with suspicion.

      • foxglacier 3 hours ago
        The partial implementation could be turned off with a feature flag until it's complete.
      • exe34 an hour ago
        You line up 10-20 PRs and merge them into a temporary integration branch that gets tested/demoed. The PRs still have to be reviewed/accepted and merged into main separately. You can say "the purpose of this PR is to do x for blah, see top level ticket". Often there will be more than one ticket based on how self-contained the PRs are.
  • EagnaIonat 3 hours ago
    Everyone is talking about having them break it down into smaller chunks. With vibe coding there is a near guarantee the person doesn't know what the code does either.

    That alone should be reason enough to block it. Also, LLM generated code is not protected by copyright law, and by extension you can damage your code base's IP position.

    My company does not allow LLM generated code into anything that is their IP. Generic stuff outside of IP is fine, but every piece has to be flagged as created by an LLM.

    In short, these are just the next evolution of low quality PRs.

    • smsm42 42 minutes ago
      > With vibe coding there is a near guarantee the person doesn't know what the code does either.

      Accepting code into the project when only one person (the author) knows what it does is a very bad idea. That's why reviews exist. Accepting code that zero persons know what it does is sheer screaming insanity.

    • exe34 an hour ago
      > Everyone is talking about having them break it down into smaller chunks. With vibe coding there is a near guarantee the person doesn't know what the code does either.

      That's the point though: if they can't do it, then you close the ticket and tell them to fork off.

      • EagnaIonat an hour ago
        I agree, but you are potentially opening yourself up to 20+ PRs which are all vibe coded.
  • MikeNotThePope 6 hours ago
    How about this?

    “This PR is really long and I’m having a hard time finding the energy to review it all. My brain gets full before I get to the end. Does it need to be this long?”

    Force them to make a case for it. Then see how they respond. I’d say good answers could include:

    - “I really tried to make it smaller, but I couldn’t think of a way, here’s why…”

    - “Now that I think about it, 95% of this code could be pushed into a separate library.”

    - “To be honest, I vibe coded this and I don’t understand all of it. When I try to make it smaller, I can’t find a way. Can we go through it together?”

  • O-stevns an hour ago
    That's a lot of code for a PR, though I should admit I have made PRs half that size myself.

    Personally I think it's difficult to address these kinds of PRs, but I also think that git is terrible at providing solutions to this problem.

    The concept of stacked PRs is fine up to the point where you need to make changes throughout all your branches; then it becomes a mess. If you (like me) have a tendency to rewrite your solution several times before ending up with the final result, then having to split this into several PRs does not help anyone. The first PR will likely be outdated the moment I begin working on the next.

    Open source is also more difficult in this case because, contrary to working for a company with a schedule, deadlines etc., you can't (well, you shouldn't) rush a review when it's on your own time. As such, PRs can sit for weeks or months without being addressed. When you eventually need to reply to comments about how, why, etc., you have forgotten most of it and need to read the code yourself to reclaim the reasoning. At that point it might be easier to re-read a 9000 line PR over time rather than reading 5-10 PRs with maybe meaningful descriptions and outcomes.

    Also, if it's from a new contributor, I wouldn't accept such a PR, vibe coded or not.

  • ares623 16 minutes ago
    Ask them if they reviewed the AI’s output before opening the PR. If they didn’t, then ask them to at least review it first rather than having you do all the work. If they did, then is a 2nd review from you really necessary? ;)
  • LaFolle an hour ago
    There are good suggestions in the thread.

    One suggestion that possibly is not covered: you can document clearly how AI generated PRs will be handled, make it easy for contributors to discover that policy, and if/when such a PR shows up, point to the documented section to save yourself time.

  • lionkor 28 minutes ago
    Close them and report to your boss. If your boss doesn't care, look for a new job. Once you have a new job, quit the old and cite that specific case as the reason.
  • TriangleEdge 6 hours ago
    Amazon eng did some research and found the number of comments on a code review is proportional to the number of lines changed. Huge CRs get few comments. Small CRs get a lot of comments. At Amazon, it's common to have a 150 to 300 line limit on changes. It depends on the team.

    In your case, I'd just reject it and ensure repo merges require your approval.

    • kwk1 4 hours ago
      "Inversely proportional" for what it's worth
    • senderista 3 hours ago
      Also, some teams have CR metrics that can be referenced for performance evaluations.
    • zukzuk 6 hours ago
      That’s a great way to discourage anyone ever doing any large scale refactoring, or any other heavy lifting.
      • febusravenga 2 hours ago
        That's good. Because large refactorings are usually harmful. They are also usually unplanned, not scoped, and based on very unquantifiable observations like "I don't like how the code is structured, let's do it my way."
      • TriangleEdge 6 hours ago
        The review bots can be bypassed.
      • arachnid92 3 hours ago
        Just split up your work across multiple PRs.
  • rhubarbtree an hour ago
    In our company, you would immediately reject the PR based on size. There are a bunch of other quick bounce items it could also fail on, eg documentation.

    The PR would then be split into small ones up to 400 lines long.

    In truth, such a big PR is an indicator that either (a) the original code is a complete mess and needs reengineering or more likely (b) the PR is vibe coded and is making lots of very poor engineering decisions and goes in the bin.

    We don’t use AI agents for coding. They’re not ready. Autocomplete is fine. Agents don’t reason like engineers, they make crap PRs.

  • alexdowad 6 hours ago
    Be tactful and kind, but straightforward about what you can't/don't want to spend time reviewing.

    "Thanks for the effort, but my time and energy is limited and I can't practically review this much code, so I'm closing this PR. We are interested in performance improvements, so you are welcome to pick out your #1 best idea for performance improvement, discuss it with the maintainers via ..., and then (possibly) open a focused PR which implements that improvement only."

    • ivanjermakov an hour ago
      Depends on context of course, but in my book "my time and energy is limited" is not a valid reason for a reject. Get back once you have time; review in chunks.
      • wiseowise 12 minutes ago
        > is not a valid reason for a reject

        As a reviewer or as a submitter?

  • JohnFen 6 days ago
    I'd just reject it for being ridiculous. It didn't pass the first step of the review process: the sniff test.
    • brudgers 6 days ago
      Charitably, even though it is not what you or I would do, the pull request could be a best good faith effort of a real human being.

      So to me, it's less about being ridiculous (and "ridiculous" is a fighting word) and more a simple "that's not how this team does things because we don't have the resources to work that way."

      Mildly hurt feelings in the most likely worst case (no food for a viral over-the-top tweet). At best, recruitment of someone with cultural fit.

      • JohnFen 6 days ago
        My objection to a PR like this has nothing to do with whether or not a human wrote it. It's that the PR is too large and complex. The reason I'd give for rejecting it would be that. I wouldn't say "it's ridiculous" as the reason. I would 100% be thinking that, though.
        • brudgers 6 days ago
          That’s good.

          My experience is “too large/complex” provides an opening for argumentativeness and/or drama.

          “We don’t do it like this” does not so much. It is social, sufficient and not a matter of opinion (“too” is a matter of opinion).

          • BrenBarn 3 hours ago
            What about "this is large and complex enough to be not the way we do things"?
  • smsm42 44 minutes ago
    The only way such a PR can be reviewed is if it's accompanied by a detailed PRD and tech design documents, and at least half of that LOC count is tests. Even then it requires a lot of interactive work from both sides. I have seen PRs a third or a quarter of this size that took weeks to properly review and bring to production quality. Unless there's something artificially inflating the size of it (like auto-generated files or massive test fixtures, etc.), I wouldn't ever commit to reviewing such a behemoth without a very very good reason.
  • andreygrehov 3 hours ago
    That 10+ year old joke never gets old:

    10 lines of code = 10 issues.

    500 lines of code = "looks fine."

    Code reviews.

  • raincole 3 hours ago
    You ask questions. Literally anything, like asking them why they believe this feature is needed, what their code does, why they made a DSL parser, etc.

    The question itself doesn't matter. Just ask something. If their answer is genuine and makes sense, you deal with it like a normal PR. If their answer is LLM-generated too, then block.

  • Ask the submitter to review and leave their comments first or do a peer code review with them and force them to read the code. It's probably the first time they'll have read the code as well...
    • groguzt 3 hours ago
      I really like this. The reason vibe coded PRs are often bad is that people don't review them themselves first; they just look at the form, and if it looks vaguely similar to what they had in their mind, they'll just hit save and not ask the LLM for corrections.
  • dosinga 6 hours ago
    Ideally you have a document in place saying this is how we handle vibe coding, something like: if you have the AI write the first version, it is your responsibility to make it reviewable.

    Then you can say (and this is hard): this looks like it is vibe code and misses that first human pass we want to see in these situations (link); please review and afterwards feel free to (re)submit.

    In my experience they'll go away. Or they come back with something that isn't cleaned up and you point out just one thing. Or sometimes! they actually come back with the right thing.

  • fathermarz 2 hours ago
    Let me ask a different question. A large refactor ended up in a 60K line Python PR because the new lead didn’t feel like merging it in until it was basically done. He even asked other devs to merge into his branch so we would merge it all later.

    How does one handle that with tact and not lose their minds?

    • wiseowise 8 minutes ago
      You get a Leetcode subscription and start going through paths for a company that can match or exceed your salary.
    • JonChesterfield 2 hours ago
      Refuse to merge into their branch. If you have serious test coverage and the refactor doesn't change behaviour, it'll be fine.

      If you don't have test coverage, or if the "refactor" is also changing behaviour, that project is probably dead. Make sure there's a copy of the codebase from before the new lead joined so there's a damage mitigation roll back option available.

  • jeremyjh 6 hours ago
    I'd just close it without comment. Or maybe if I'm feeling really generous I'll make a FAQ.md that gives a list of reasons why we'll close PRs without review or comment and link that in the close comments. I don't owe anyone any time on my open source projects. That said, I haven't had this issue yet.
    • tracerbulletx 6 hours ago
      That's fine for an open source project, but many many companies are mandating AI use, they're putting it in performance reviews, they're buying massive Cursor subscriptions. You'd be cast as an obstructionist to AI's god like velocity ™.
  • calini 29 minutes ago
    Vibe review it using Copilot or equivalent, and then close it :)
    • cryptonym 18 minutes ago
      Prompt: be over-cautious on every code line; this is junior code and they can learn a lot from this PR. Generate many comments on why it shouldn't be merged as-is and make sure every corner case is covered. Be super paranoid; mistakes in the code could hurt the company or people.

      If you are lucky, they will also vibe fix it.

  • rvrs 4 days ago
    Enforce stacked PRs; reject PRs over 500-1k LoC (I'd argue even lower, but it's a hard sell).
  • locknitpicker an hour ago
    > How would you go about reviewing a PR like this?

    State the PR is too large to be reviewed, and ask the author to break it down into self-contained units.

    Also, ask which functional requirements the PR is addressing.

    Ask for a PR walkthrough meeting to have the PR author explain in detail to an audience what they did and what they hope to achieve.

    Establish max diff size for PRs to avoid this mess.
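
    Such a gate can be enforced mechanically in CI. A rough sketch in TypeScript (the 500-line budget and the git invocation are assumptions, not something the commenter specified):

      import { execSync } from "node:child_process";

      // Hypothetical CI check: fail the build when a PR's diff exceeds a budget.
      const MAX_CHANGED_LINES = 500; // assumed team limit

      // Compare the PR branch against main.
      const stat = execSync("git diff --shortstat origin/main...HEAD", {
        encoding: "utf8",
      }); // e.g. " 12 files changed, 340 insertions(+), 25 deletions(-)"

      const counts = stat.match(/\d+(?= insertions?\(\+\)| deletions?\(-\))/g) ?? [];
      const changed = counts.reduce((sum, n) => sum + Number(n), 0);

      if (changed > MAX_CHANGED_LINES) {
        console.error(`Diff is ${changed} lines; the limit is ${MAX_CHANGED_LINES}. Please split the PR.`);
        process.exit(1);
      }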

  • T_Potato 2 hours ago
    I have a tangent question: how do you deal with a team that spends days nitpicking implementation, double-speaking, and saying "I didn't actually expect you to implement this the way I said, I was just saying it would be nice if it was like this, can you undo it"? I spent 3 weeks on a code review because of the constant back and forth; and I wish, oh I wish, they would allow PRs to be small, but the rule is that the PR has to implement the full deliverable feature. And that can mean 20 files to constantly change and change and change and change. Oh, and then the "why did you use Lombok" question that comes up even though the project already uses Lombok, and so you are stuck defending the use of a library that's used in the project for no reason other than to flatter the egos of the gatekeepers who say "yes, this is good, but I want you to name this abc instead of ab before we merge", when in context it doesn't add or remove any value, not even clarity.
    • tjansen an hour ago
      Generally, my stance is that I add more value by doing whatever ridiculous thing people ask me to change than by wasting my time arguing about it. There are some obvious exceptions, like when the suggestions don't work or make the codebase significantly worse. But other than that, I do whatever people suggest, to save my time, their time, and deliver faster. And often, once you're done with their initial suggestions, people just approve.

      This doesn't help all the time. There are those people who still keep finding things they want you to change a week after they first reviewed the code. I try to avoid including them in the code review. The alternative is to talk to your manager about making some rules, like giving reviewers only a day or two to review new code. It's easy to argue for that because those late comments really hinder productivity.

    • dbetteridge 2 hours ago
      Doesn't help you much I imagine, but the one time we had a dev like this he was fired after multiple complaints to the team lead.
  • siwatanejo 6 hours ago
    Forget about code for a second. This all depends a lot on what goal the PR achieves. Does it align with the goals of the project?
    • appreciatorBus 4 hours ago
      How can you tell if it aligns with the goals of the project without reviewing 9000 lines of code first?
      • ivanjermakov an hour ago
        PRs rarely exist in a vacuum. Usually there is a ticket/issue/context which required a code change.
      • siwatanejo an hour ago
        Are you kidding me? You should be able to explain, from the user PoV, what the PR achieves: a new feature? A bugfix?

        That data point is waaaaaay more important than any other when considering if you should think about reviewing it or not.

        • wiseowise 7 minutes ago
          Okay, it does align. What next?
  • devrundown 4 days ago
    9000 LOC is way too long for a pull request unless there is some very special circumstance.

    I would ask them to break it up into smaller chunks.

  • ugh123 an hour ago
    Are there tests written? You could start by demanding tests pass and demonstrate some kind of coverage metric.
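
    If the project happens to use Jest, one concrete way to demand a coverage metric is a threshold that fails the test run outright; the numbers below are illustrative, not something the commenter specified:

      // jest.config.ts -- illustrative coverage gate for a TypeScript project.
      import type { Config } from "jest";

      const config: Config = {
        collectCoverage: true,
        coverageThreshold: {
          global: {
            branches: 70,
            functions: 80,
            lines: 80,
            statements: 80, // the run fails if coverage drops below these
          },
        },
      };

      export default config;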
  • le-mark 6 hours ago
    How long was this person working on it? Six months? Anything this big should’ve had some sort of design review. The worst is some junior going off and coding some garbage no one sees for a month.
    • jonchurch_ 6 hours ago
      You can churn this stuff out in about an hour these days though, seriously. That's part of the problem: the asymmetry of time to create vs time to review.

      If I can write eight 9k line PRs every day and open them against open source projects, even closing them, let alone engaging with them in good faith, is an incredible time drain versus the time investment to create them.

  • zigcBenx 6 days ago
    In my opinion no PR should have so many changes. It's impossible to review such things.

    The only exception is some large migration or version upgrade that requires lots of files to change.

    As far as it goes for vibe coded gigantic PRs, it's a straight reject from me.

  • johnnyanmac 6 hours ago
    Excuse me, 9000? If that isn't mostly codegen, including some new plugin/API, or a fresh repository, I'd reject it outright. LLMs or not.

    In my eyes, there really shouldn't be more than 2-3 "full" files worth of LOC for any given PR (which should aim to address 1 task/bug each; if not, maybe 2-3 at most), and general wisdom is to aim to keep "full" files around 600 LOC each (for legacy code, this is obviously very flexible, if not infeasible, but it's a nice ideal to keep in mind).

    An 1800-2000 LOC PR is already pushing what I'd want to review, but I've reviewed a few like that when laying scaffolding for a new feature. Most PRs are usually a few dozen lines in 4-5 files each, so it's far below that.

    9000 just raises so many red flags. Do they know what problem they are solving? Can they explain their solution approach? Give general architectural structure to their implementation? And all that is before asking the actual PR concerns of performance, halo effects, stakeholders, etc.

  • hsbauauvhabzb 27 minutes ago
    “Hey chatgpt, reject this PR for me. Be extremely verbose about the following topics:

    - Large PRs
    - Vibe coding
    - Development quality”

    • wiseowise 4 minutes ago
      Finally, advice from a 10x AI engineer.
  • dbgrman 5 hours ago
    TBH, depends on what is being reviewed. Is it a prototype that might not see light of day and is only for proof-of-concept? Did an RFC doc precede it and reviewers are already familiar with the project? Were the authors expecting this PR? Was there a conversation before the PR was sent out? Was there any effort to have a conversation after the PR was shared? Was this even meant to be merged into main?

    I'll just assume good intent first of all. Second, 9000 LOC spanning 63 files is not necessarily AI generated code. It could be a codemod. It could be a prolific coder. It could be a lot of codegen'd code.

    Finally, the fact that someone is sending you a 9000 LOC PR hints that they find this OK, and this is an opportunity to align on your values. If you find it hard to review, tell them: I find it hard to review, I can't follow the narrative, it's too risky, etc.

    Code review is almost ALWAYS an opportunity to have a conversation.

  • shinycode 28 minutes ago
    Don’t read it, approve it.
  • ojr 2 hours ago
    I would test whether the new features work and whether there are any regressions around critical business functions, and merge it if my manual tests pass.
  • throwaway106382 6 hours ago
    You don't.

    Was your project asking for all this? No? Reject.

  • aryehof 3 hours ago
    This is effectively a product, not a feature (or bug fix). Ask the submitter how you can determine whether this meets functional and non-functional requirements, to start with.
  • Roark66 2 hours ago
    Many people gave good tips, so let me answer in general.

    As someone on the "senior" side, AI has been very helpful in speeding up my work, as I work with many languages and many projects I haven't touched in months, and while my code is relatively simple, the underlying architecture is rather complex. So where I do use AI my prompts are very detailed. Often I spot mistakes that get corrected, etc. With this I still see a big speedup (at least 2x, often more). The quality is almost the same.

    However, I noticed many "team leads" try to use the AI as an excuse to push too difficult tasks onto "junior" people. The situation described by the OP is what happens sometimes.

    Then when I go to the person and ask for some weird thing they are doing I get "I don't know, copilot told me"...

    Many times I have tried to gently steer such AI users towards using it as a learning tool: "Ask it to explain things you don't understand", "Ask questions about why something is written this way" and so on. Not once have I seen it used like this.

    But this is not everyone. Some people have this skill which lets them get a lot more out of pair programming and AI. I had a couple of trainees in the current team 2 years ago who were great at this. This was "pre-AI" in this company, but when I was asked to help them they were asking various questions, and 6 months later they were hired on a permanent basis. Contrast this with: "So how should I change this code?" You give them a fragment, they go put it in verbatim and come back via Teams with a screenshot of an error message...

    Basically expecting you will do the task for them. Not a single question. No increased ability to do it on their own.

    This is how they try to use AI as well. And it's a huge time waster.

  • sshine 2 hours ago
    Same standard as if they had made it themselves: a sequence of logically ordered commits.
  • occz an hour ago
    Easy, you reject it.
  • abhimanyue1998 4 hours ago
    Vibe review it with AI, then run it with vibe production support. Simple.
  • wheelerwj 3 hours ago
    The same way you review a non vibe coded PR. What's that got to do with anything? A shit PR is a shit PR.
  • renewiltord 33 minutes ago
    It's basic engineering principle: you do not do work amplification. e.g. debouncing, request coalescing, back-pressure are all techniques to prevent user from making server do lots of work in response to small user effort.
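
    For illustration, here is debouncing, one of the techniques named above, as a minimal TypeScript sketch:

      // Minimal debounce: collapse a burst of calls into one, so a small
      // user effort cannot amplify into repeated expensive work.
      function debounce<A extends unknown[]>(fn: (...args: A) => void, waitMs: number) {
        let timer: ReturnType<typeof setTimeout> | undefined;
        return (...args: A) => {
          clearTimeout(timer);
          timer = setTimeout(() => fn(...args), waitMs);
        };
      }

      // Usage: 1000 rapid requests become a single piece of work.
      const summarize = debounce((file: string) => console.log(`summarizing ${file}`), 2000);
      for (let i = 0; i < 1000; i++) summarize("tiny.txt");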

    As example, you have made summarization app. User is try to upload 1 TB file. What you do? Reject request.

    You have made summarization app. User is try upload 1 byte file 1000 times. What you do? Reject request.

    However, this is for accidental or misconfigured user. What if you have malicious user? There are many technique for this as well: hell-ban, tarpit, limp.

    For hell-ban simply do not handle request. It appear to be handled but is not.

    For tarpit, raise request maker difficulty. e.g. put Claude Code with Github MCP on case, give broad instructions to be very specific and request concise code and split etc. etc. then put subsequent PRs also into CC with Github MCP.

    For limp, provide comment slow using machine.

    Assuming you're not working with such person. If working with such person, email boss and request they be fired. For good of org, you must kill the demon.

  • tacostakohashi 4 days ago
    Use AI to generate the review, obviously.
  • mort96 an hour ago
    Close them.
  • PeterStuer an hour ago
    Before review, ask for a rationale and justification. It might be just overcomplicated AI slop; it could also be that someone actually went beyond the basics and really produced something next level.

    A simple email could tell the difference.

  • cat_plus_plus 2 hours ago
    Vibe review with all the reasons it should not be merged obviously.
  • tayo42 2 hours ago
    You can't really review this. Rubber stamp it or reject it.
  • wengo314 6 days ago
    Reject outright. Ask to split it into a reasonable chain of changesets.
  • aaronrobinson 6 days ago
    Reject it
  • 999900000999 6 hours ago
    Reject it and tell them to actually code it.
  • throwaway290 3 hours ago
    Don't accept this PR. If it's bot generated, you are not here to review it. They can find a bot to review bot generated requests.
  • userbinator 3 hours ago
    If it's full of the typical vibe-coded nonsense that's easy to spot upon a quick-but-close inspection (unused functions, dead-end variables and paths that don't make sense, excessively verbose and inaccurate comments, etc.), I would immediately reject.
  • anarticle 2 hours ago
    No face, no case. They have to break it way down, just like at any org. In fact, I would ask for more tests than usual, with a test plan/proof they passed. 9k is a little spicy: separate PRs, or an ad hoc huddle with them rubber-ducking you through the code. Depends on whether you care about this that much or not.

    Unless you really trust them, it's up to the contributor to make their reasoning work for the target. Otherwise, they are free to fork it if it's open source :).

    I am a believer in using LLM codegen as a ride-along expert, but it definitely triggers my desire to over-test software. I treat most codegen as if the most junior coder had written it, and set up guardrails against as many things as the LLM and I can come up with.

  • atoav 2 hours ago
    Tell them to give you a phone call and have them explain the code to you : )
  • exe34 an hour ago
    Simple: ask them to break it down into smaller pieces with a clear explanation of what it does and why it's needed. Then set up an AI to drag them in the dirt with pointless fixes, or just close them as won't-fix.
  • vasan 2 days ago
    Just reflect upon it and see if you gave him too little time to complete it. I would just have a meeting with him and confront it.
  • ninetyninenine 3 hours ago
    You vibe review it. I’m actually only half kidding here.
  • ChrisMarshallNY 6 hours ago
    I write full app suites that have less than 9000 LoC. I tend toward fewer, large-ish source files, separated by functional domains.

    I once had someone submit a patch (back in the SVN days), that was massive, and touched everything in my system. I applied it, and hundreds of bugs popped up.

    I politely declined it, but the submitter got butthurt, anyway. He put a lot of work into it.

  • est an hour ago
    write another AI to hardcore review it and eventually reject it.
  • never_inline 6 hours ago
    close button.
  • bmitc 6 hours ago
    Reject it and request that the author make it smaller.

    PRs should be under 1000 lines.

    The alternative is to sit down with them and ask what they're trying to accomplish and solve the problem from that angle.

  • ripped_britches 6 hours ago
    Obviously by vibe reviewing it
  • CamperBob2 3 hours ago
    Please review this PR. Look carefully for bugs, security issues, and logical conflicts with existing code. Report 'Pass' if the PR is of sufficient quality or 'Fail' if you find any serious issues. In the latter case, generate a detailed report to pass along to the submitter.

    (ctrl-v)

  • hshdhdhehd 3 days ago
    With a middle finger
  • foxfired 6 hours ago
    It's funny, just today I published an article with the solution to this problem.

    If they don't bother writing the code, why should you bother reading it? Use an LLM to review it, and eventually approve it. Then of course, wait for the customer to complain, and feed the complaint back to the LLM. /s

    Large LLM generated PRs are not a solution. They just shift the problem to the next person in the chain.

    • throwawayffffas 6 hours ago
      How do you know they didn't bother to write it? For all we know the submitter has been quietly hammering away at this for months.
      • foxfired 6 hours ago
        The title says it is vibe-coded. By definition, it means they didn't write it.
        • throwawayffffas 6 hours ago
          But how do they know it's vibe-coded? It may have a smell to it, but the author might not know it for a fact. The fact it's vibe-coded is actually irrelevant; the size of the request is the main issue.
          • foxfired 5 hours ago
            I'm not gonna make assumptions on behalf of OP, but if you have domain knowledge, you can quickly tell when a PR is vibe-coded. In a real world scenario, it would be pretty rare for someone to generate this much code in a single PR.

            And if they did in fact spend 6 months painstakingly building it, it wouldn't hurt to break it down into multiple PRs. There is just so much room for error reviewing such a giant PR.

          • sunaookami 3 hours ago
            You can recognize it by the rocket emojis in the PR description ;)
  • exclipy 6 hours ago
    I made a /split-commit prompt that automatically splits a megacommit into smaller commits. I've found this massively helpful for making more reviewable commits. You can either run this yourself or send this to your coworker to have them run it before asking you to re-review it.

    Sometimes it doesn't split it among optimal boundaries, but it's usually good enough to help. There's probably room for improvement and extension (eg. re-splitting a branch containing many not-logical commits, moving changes between commits, merging commits, ...) – contributions welcome!

    You can install it as a Claude Code plugin here: https://github.com/KevinWuWon/kww-claude-plugins (or just copy out the prompt from the repo into your agent of choice)

  • ako 3 hours ago
    AI code generators are getting better fast; in the near future they will be able to produce good changes faster than you can review them. How will you deal with it then? Most vibe coding tools can also produce smaller PRs, but then you have to deal with 250+ PRs in 1 week. Is that more manageable? My guess is we need new tools to get the human out of the loop: more automated reviews, tests, etc.
    • ako an hour ago
      Instead of downvotes I would appreciate some insightful comments on this, as I'm currently struggling with this problem. In the last week I've vibe-coded (vibe-engineered?) a TypeScript project with 230+ commits, 64 TypeScript files, and 27k+ lines of code. Too much to actually read. Validation is mostly through testing: automated tests, architecture reviews (generated mermaid diagrams). I'm mostly reviewing the code structure and architecture, the libraries it uses, etc. It has 600+ unit and integration tests, but even reviewing those is too much...
      • shinycode 20 minutes ago
        Our problem is not coding. Our problem is knowledge. If no one reads it and no one knows how it works, and that's what the company wants because we need to ship fast, then the company doesn't understand what software is all about. Code is a language; we write stories that make a lot of sense and have consequences. If the company does not care that humans need to know and decide in detail the story and how it's written, then let it accept the consequences of a statistically generated story with no human supervision. Let it trust the statistics when there is a bug and no one knows how the system works, because no one read it and no one is there anymore to debug it. We'll see in the end if it's cheaper to let the code be written and understood only by statistical algorithms. Otherwise, just work differently instead of generating thousands of LOC; it's your responsibility to review and understand, no matter how long it takes.
      • smsm42 30 minutes ago
        > In the last week I've vibe-coded (vibe-engineered?) a TypeScript project with 230+ commits, 64 TypeScript files, and 27k+ lines of code. Too much to actually read.

        Congratulations, you discovered that generating code is only part of the software development process. If you don't understand what the code is actually doing, good luck maintaining it. If it's never reviewed, how do you know these tests even test anything? Because they say "test passed"? I can write you a script that prints "test passed" a billion times; would you believe it is a billion unit tests? If you didn't review them, you don't have tests. You have a pile of code that looks like tests. And "it takes too long to review" is not an excuse. It's like saying "it's too hard to make a car, so I just took a cardboard box, wrote FERRARI on it and sat inside it making car noises". Fine, but it's not a car. It's just pretending. If it's not properly verified, what you have is not tests; it's just pretending.