103 points by benswerd 10 hours ago | 16 comments
  • AgentOrange1234 5 hours ago
    "Every optional field is a question the rest of the codebase has to answer every time it touches that data,"

    This is a beautiful articulation of a major pet peeve when using these coding tools. One of my first review steps is just looking for all the extra optional arguments it's added instead of designing something good.
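
A toy sketch of the cost the quoted line describes (all names hypothetical): every `Optional` field forces each caller to re-answer the "what if it's missing?" question, while a design with required fields answers it once at construction.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserLoose:
    # Every Optional here is a question downstream code must keep answering.
    name: Optional[str] = None
    email: Optional[str] = None

def greeting(user: UserLoose) -> str:
    # The None case leaks into every call site.
    return f"Hello, {user.name or 'stranger'}!"

@dataclass
class User:
    # Tighter design: required fields are required; absence is rejected
    # once, at construction, instead of everywhere.
    name: str
    email: str

def greeting_strict(user: User) -> str:
    return f"Hello, {user.name}!"
```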

    • shepherdjerred an hour ago
      There's nothing specific to AI about this. Humans make the same mistake.

      To solve this permanently, use a linter and apply a "ratchet" in CI so that the LLM cannot use ignore comments.
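
A minimal sketch of such a ratchet, assuming suppression comments like `# noqa` are what you want to track (the marker list, file glob, and baseline handling are illustrative, not any particular tool's API):

```python
import pathlib
import sys

# Suppression markers the ratchet tracks (assumed conventions).
MARKERS = ("# type: ignore", "# noqa")

def count_suppressions(root: str) -> int:
    """Count lint-suppression comments in .py files under root."""
    total = 0
    for path in pathlib.Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        total += sum(text.count(m) for m in MARKERS)
    return total

def ratchet(root: str, baseline: int) -> int:
    """Fail CI if the count rises; return the new (possibly lower) baseline."""
    n = count_suppressions(root)
    if n > baseline:
        sys.exit(f"Ratchet violated: {n} suppressions > baseline {baseline}")
    return min(n, baseline)
```

The stored baseline only ever goes down, so neither humans nor agents can sneak new ignore comments past CI.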

  • ChrisMarshallNY 9 hours ago
    Because of the way that I use AI, I am constantly looking at the code. I usually leave it alone if I can, even if I don't really like it.

    I will often go back, after the fact, and ask for refactors and documentation.

    It works. Probably a lot slower than using agents, but I test every step, and it is a lot faster than I would do it unassisted.

    • benswerd 9 hours ago
      I don't think testing the product alone is good enough, because when you give it tests to pass, it prioritizes passing them at the expense of everything else — including code quality. I've seen it pull in random variables, break semantic functions, etc.
  • earljwagner 2 hours ago
    The concepts of Semantic Functions and Pragmatic Functions seem to be analogous to a Functional Core and Imperative Shell (FCIS):

    https://testing.googleblog.com/2025/10/simplify-your-code-fu...

    The key insight of FCIS is that complicated logic with large dependencies leads to a large test suite that runs slowly. The solution is to isolate the complicated logic in the functional core. Test that separately from the simpler, more sequential tests of the imperative shell.
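
A toy illustration of that split, with made-up names: the pure core can be tested exhaustively and fast, while the imperative shell only sequences I/O and needs just a few integration-style tests.

```python
# Functional core: pure logic, output depends only on inputs.
def apply_discount(total_cents: int, loyalty_years: int) -> int:
    rate = min(loyalty_years * 2, 20)  # 2% per year, capped at 20%
    return total_cents - (total_cents * rate) // 100

# Imperative shell: fetch, compute, persist. Dependencies are passed in
# so the shell can be exercised with fakes.
def checkout(order_id: str, fetch_order, save_invoice) -> None:
    order = fetch_order(order_id)
    discounted = apply_discount(order["total_cents"], order["loyalty_years"])
    save_invoice(order_id, discounted)
```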

  • xiaolu627 2 hours ago
    What changed for me isn’t that AI writes bad code by default, but that it lowers the friction to adding code faster than the team can properly absorb it. The dangerous part is not obvious bugs, it’s subtle erosion of consistency.
  • gravitronic 8 hours ago
    *adds "be intentional" to the prompt*

    Got it, good idea.

  • abcde666777 7 hours ago
    My intentionality is that I'll never let it make the changes. I make the changes. I might make changes it suggests, but only upon review and only written with my hands.
    • benswerd 7 hours ago
      I think this style of work will go away. I was skeptical but I now write the majority of my code through agents.
      • abcde666777 2 hours ago
        I don't think it will go away, I think there will remain a niche for code where we care about precision. Maybe that niche will get smaller over time, but I think it will be a hold out for quite a while. A loose analogy I've found myself using of late is comparing it to bespoke vs off the shelf suits.

        For instance, two things I'm currently working on:

        - A reasonably complicated indie game project I've been doing solo for four years.
        - A basic web API exposing data from a legacy database for work.

        I can see how the API could be developed mostly by agents - it's a pretty cookie cutter affair and my main value in the equation is just my knowledge of the legacy database in question.

        But for the game... man, there's a lot of stuff in there that's very particular when it comes to performance and the logic flow. An example: entities interacting with each other. You have to worry about stuff like the ordering of events within a frame, what assumptions each entity can make about the other's state, when and how they talk to each other given there's job based multi-threading, and a lot of performance constraints to boot (thousands of active entities at once). And that's just a small example from a much bigger iceberg.

        I'm pretty confident that if I leaned into using agents on the game I'd spend more time re-explaining things to them than I do just writing the code myself.

        • benswerd 21 minutes ago
          I write systems rust on the cutting edge all day. My work is building instant MicroVM sandboxes.

          I was shocked recently when it helped me diagnose a musl compile issue, fork a sys package, and rebuild large parts of it in 2 hours.

          Don't want to reveal the specific task, but it was a problem far outside the training data, and it took what would normally have been at least 2 weeks down to 2 hours.

          Since then I've been going pretty hard at maximizing my agent usage, and tend to have a few going at most times.

      • dougg 2 hours ago
        I see this a lot in research as well, unfortunately including myself. I do miss college, where I would hand write a few thousand lines of code in a month, but I'm just so much more productive now.
      • thepukingcat 6 hours ago
        +1 for this. Once you have a solid plan with the AI and prompt it to make one small change at a time, reviewing as you go, you can still be in control of your code without writing a single line.
  • mattacular 6 hours ago
    Code cannot and should not be self-documenting at scale. You cannot document "the why" with code. In my experience, that claim is only ever used by lazy developers as an excuse not to write actual documentation or use comments thoughtfully in the codebase.
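
A small, entirely hypothetical example of a "why" that code alone cannot express — the constant is legible, but its origin lives only in the comment:

```python
def retry_delay(attempt: int) -> float:
    # Why 7 seconds: our (hypothetical) upstream rate limiter resets on a
    # 7-second window, so shorter backoffs just burn attempts. Nothing in
    # the code itself could tell a reader where the 7 comes from.
    return 7.0 * attempt
```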
    • bdangubic 6 hours ago
      This always starts out right, but over the years the code changes and its documentation seldom does, even on the best of teams. The amount of code documentation I have seen that is just plain wrong (it was right at some point) far outnumbers the amount that was actually in sync with the code. 30 years in the industry, so large sample size. Now I prefer no code documentation in general.
      • derrak 5 hours ago
        Are there any good systems that somehow enforce consistency between documentation and code? Maybe the problem is fundamentally ill-posed.
        • sgc 2 hours ago
          I am not saying it doesn't matter, because it does, but how much does it matter now that we can get documentation on the fly?

          I started working on something today I hadn't touched in a couple years. I asked for a summary of code structure, choices I made, why I made them, required inputs and expected outputs. Of course it wasn't perfect, but it was a very fast way to get back up to speed. Faster than picking through my old code to re-familiarize myself for sure.

        • reverius42 2 hours ago
          Keeping the documentation in the repo (Markdown files) and using an AI coding agent to update the code seems to work quite well for keeping documentation up to date (especially if you have an AGENTS.md/CLAUDE.md in the repo telling it to always make sure the documentation is up to date).
        • jurgenburgen an hour ago
          Ultimately the code is the documentation.
          • benswerd 19 minutes ago
            This is correct. Comments serve a purpose too, but they should only be used when code fails to self-document, which should be the exception.
  • clbrmbr 9 hours ago
    Page not rendering well on iPhone Safari.

    Good content tho!

  • benswerd 10 hours ago
    I've seen a lot of people talking about how AI is making codebases worse. I reject that: people are making codebases worse by not being intentional about how their AI writes code.

    This is my take on how to not write slop.

    • peacebeard 9 hours ago
      Agreed. When you submit code you must take responsibility for its quality. Blaming AI for low-quality code is like blaming hammers for giant holes in the drywall. If you can't use AI tools with confidence that your code is high quality, you need to re-assess how you use those tools. I'm not saying AI tools are bad. They're great. But the prevalence of people pushing the tools beyond their limits is not a failure of the tools. Vibe coding may be fun, but tight-leash, high-oversight AI usage is underrated in my opinion.
      • newAccount2025 6 hours ago
        I think this is mostly right.

        In a blameless postmortem style process, you would look at not just the mistake itself but the factors influencing the mistake and how to mitigate them. E.g., the doctor was tired AND the hospital demanded long hours AND the industry has normalized this.

        So yes, the programmers need to hold the line, AND ALSO the velocity of the tool makes it easy to get tired, AND its confidence and often-good results promote laziness (or maybe folks just don't know better), AND it can thrash your context and bounce you around the codebase, making it hard to remember the subtleties, AND on and on.

        Anyway, strong agree on “dude, review better” as a key part of the answer. Also work on all this other stuff and understand the cost of VeLOciTy…

    • tabwidth 9 hours ago
      The intention part is right, but the bottleneck is review. AI is really good at turning your clean semantic functions into pragmatic ones without you noticing. You ask for a feature, it slips a side effect into something that was pure, and the tests still pass. By the time you catch it you've got three more PRs built on top.
      • peacebeard 9 hours ago
        In my experience trying to push the onus of filtering out slop onto reviewers is both ineffective and unfair to the reviewer. When you submit code for review you are saying "I believe to the best of my ability that this code is high quality and adequate but it's best to have another person verify that." If the AI has done things without you noticing, you haven't reviewed its output well enough yet and shouldn't be submitting it to another person yet.
        • skydhash 7 hours ago
          Code review should be a transmission of ideas and a way to help spot errors that slip in due to the author's excessive familiarity with the changes (errors that are often glaring to anyone other than the author).

          If you're not familiar with the patch enough to answer any question about it, you shouldn't submit it for review.

    • systemsweird 9 hours ago
      I think there’s just a lot of people who would love to push lower quality code for a variety of legitimate and illegitimate reasons (time pressure, cost, laziness, skill issues, bad management, etc). AI becomes a perfect scapegoat for lowered code quality.

      And you’re completely right, humans are still the ones in control here. It’s entirely possible to use AI without lowering your standards.

  • ares623 3 hours ago
    What if it's not _my_ codebase?
  • mrbluecoat 9 hours ago
    ...but unintentional AI (aka the Modern Chaos Monkey) is so much more fun!
    • benswerd 9 hours ago
      LOL fr. I've been talking with some friends about RL on chaos-monkeying the codebase, benchmarking feature isolation as a measure of good code.
  • mika-el 9 hours ago
    [flagged]
    • p1necone 9 hours ago
      I haven't really extensively evaluated this, but my instinct is to really aggressively trim any 'instructions' files. I try to keep mine at a mid-double-digit line count and leave out anything that's not critically important. You should also be skeptical of any instructions that basically boil down to "please follow this guideline that's generally accepted to be best practice" — most current models are probably already aware. Stick to things that are unique to your project, or value decisions that aren't universally agreed upon.
    • benswerd 8 hours ago
      Wrestled with this a bit. The struggle with this one in particular is that it's as much for people to read as it is for agents, and the agents are secondary in its case.

      I generally agree on this as best practice today, though I think it will become irrelevant in the next 2 generations of models.

    • keeganpoppen 8 hours ago
      it’s not that shorter rules are intrinsically better, it’s that longer rules tend to have irrelevant junk in them. ceteris paribus, longer rules are better. it’s just most of the time the longer rules fall under the Blaise Pascal-ian “i regret i didn’t have time to make this shorter”.
    • w29UiIm2Xz 8 hours ago
      Shouldn't all of this be implicit from the codebase? Why do I have to write a file telling it these things?
      • cjonas 8 hours ago
        For any sufficiently large codebase, the agent only ever has a very small % of the code loaded into context. Context engineering strategies like "skills" allow the agent to more efficiently discover the key information required to produce consistent code.
      • cyanydeez 8 hours ago
        Mostly because reading the codebase fills up the context window; as you aggregate context, you then need to synthesize the basics. These things aren't intelligent; they don't know what's useless and what's useful. They're only as accurate as the structure you surround them with.
    • slopinthebag 8 hours ago
      AI comments are against the rules. Fuck off, bot.