1 point by jemiluv8 4 hours ago | 2 comments
  • jemiluv8 4 hours ago
    I was reading the PR above, and a couple of others that were rejected, primarily because the person opening the PR didn't understand the problem to begin with.

    LLMs tend to make people feel like they can write code without understanding the problem at hand. In many cases they climb a ladder of AI-suggested designs and end up with something poorly designed that works anyway. That fuels their confidence and keeps them going - I get away with some of these things myself on most reactive UI frameworks.

    When doing systems programming, however, it is hard to get away with poorly designed, poorly conceived, and poorly executed work. That is especially true in open source projects, where a couple of maintainers have to retain context for the entire project over a long period so they can review changes and keep community contributions possible. These people tend to understand the product deeply, and they also tend to gate-keep the quality of the code that gets contributed. Without that gate-keeping, open source might simply not be sustainable.

    Today, with all these LLM tools, people feel like they can ai-slop their way to PRs on open source projects. This is a maintenance burden on open source maintainers, and I fear it will only grow over time.

    Is it perhaps time for GitHub to give maintainers a way to ban certain users from opening PRs?
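
    For what it's worth, GitHub's existing account blocking already prevents a blocked user from opening PRs against your repositories. If a repo-level blocklist were wanted instead, here is a minimal sketch of the idea using the public GitHub REST API - the BLOCKED set, REPO name, and close_pr_if_blocked function are hypothetical placeholders for illustration, not an existing GitHub feature:

        # Hypothetical sketch: close a newly opened PR when its author is on a
        # maintainer-curated blocklist. BLOCKED, REPO, and the token env var
        # are placeholder assumptions; the REST endpoints are GitHub's real
        # pulls API.
        import os
        import requests

        BLOCKED = {"example-user"}   # assumption: list curated by maintainers
        REPO = "owner/project"       # placeholder repository

        def close_pr_if_blocked(pr_number: int) -> None:
            headers = {
                "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
                "Accept": "application/vnd.github+json",
            }
            # Fetch the PR to learn who opened it.
            pr = requests.get(
                f"https://api.github.com/repos/{REPO}/pulls/{pr_number}",
                headers=headers,
                timeout=10,
            ).json()
            if pr["user"]["login"] in BLOCKED:
                # Close the PR without merging it.
                requests.patch(
                    f"https://api.github.com/repos/{REPO}/pulls/{pr_number}",
                    headers=headers,
                    json={"state": "closed"},
                    timeout=10,
                )

    In practice something like this could run from a webhook or a scheduled job, though blocking the account at the user or org level achieves the same effect with far less machinery.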