142 points by cratermoon 8 hours ago | 19 comments
  • dirkc an hour ago
    Two things I'd add

    1. Software doesn't only have tech maintenance - there is also user support, and it increases as the software grows.

    2. I'm not convinced maintenance costs scale linearly. And even if they do, you will eventually reach a point where maintenance takes up all your time.
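
    The second point can be made concrete with a back-of-the-envelope sketch (my numbers, purely illustrative): if each shipped feature adds a fixed weekly maintenance load, even linear scaling guarantees that maintenance eventually crowds out feature work.

```python
def weeks_until_maintenance_dominates(hours_per_week=40.0,
                                      maint_hours_per_feature=0.5,
                                      features_per_week=2.0,
                                      feature_time_floor=1.0):
    """Illustrative model only: assumes each feature adds a constant
    weekly upkeep cost (linear scaling). Returns the week when less
    than `feature_time_floor` hours/week remain for new features."""
    features = 0.0
    week = 0
    while True:
        # time left after maintaining everything shipped so far
        free = hours_per_week - features * maint_hours_per_feature
        if free < feature_time_floor:
            return week
        week += 1
        # feature output shrinks in proportion to remaining free time
        features += features_per_week * (free / hours_per_week)
```

    With these made-up defaults the free time decays geometrically, so "all your time" arrives in a few years of simulated weeks; heavier per-feature upkeep pulls the date in sharply.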

  • keithnz 6 hours ago
    In my experience, AI reduces maintenance costs. Context might matter here, though: I'm working on a multi-decade set of projects, and while there is a lot of greenfield feature development, the old code and older projects have suddenly become a lot easier to work with, modernize, and in a bunch of cases eliminate. Dependencies on old libraries and build tools have in some cases been updated and in others just eliminated; builds are faster and easier for developers. End-to-end testing has become a lot easier to set up and automate. DevOps has improved a lot, and diagnosing production issues has drastically improved: we have a ton of logs and information, and while we have various consolidated dashboards and monitoring to capture critical things, we can now do a lot more analysis on our deployed system (~50-ish projects)
    • theteapot 3 hours ago
      This rings true for me too, but I don't think it counts if you're just using AI to aid maintenance. The basic argument in the article is about how many hours of maintenance you have to do for each hour of "value-add" feature development. So A, you're only measuring maintenance costs, not the ratio, and B, the "old code" most likely wasn't written with AI in the first place.
  • richardbarosky 6 hours ago
    Insightful. Agree with this take.

    Unfortunately, maintainability is simply bucketed as a "non-functional" requirement.

    Maintainability (and similar NFRs) should instead be understood as what preserves and enables the delivery of future functional requirements -- in contrast to the usual framing, where non-functional requirements are merely "how" the software must do what it does, versus the "what" (the functional requirements) that "actually matters".

    From that standpoint, if a steady flow of features and improvements is important for a project, maintainability isn't really a non-functional requirement at all; in practice it amounts to a functional requirement over anything except the shortest of time horizons.

    • Jenk 5 minutes ago
      I've found the first, and most important, step for any team or organisation to eliminate concerns with NFRs, "tech debt", and whatever else it may be called, is to stop giving it a name.

      I'm being completely serious. By giving it some kind of distinct name, you are giving license to it being ring-fenced and de-prioritised by someone who doesn't (but, arguably, probably should) know better.

      Quality matters. It hits your P&L very quickly and very hard if you don't maintain it. So it is as important as any other factor.

    • bluefirebrand 2 hours ago
      > amounts to being a functional requirement, in practice, over anything except the shortest of time horizons

      Right! The unfortunate thing is that many software companies don't really seem to think much further than a quarter ahead.

      Sure, they might have a product roadmap that extends a year or two into the future, but let's be honest: often that roadmap is mostly for sales purposes, not engineering planning. Product and engineering will pivot if sales slump. The earlier in the company's lifespan, the more often this happens.

      However, if companies get out of this startup mode, they should start to stabilize... but many don't. They continue this pattern of short-sighted, short-term planning, which means product stability remains a low-priority effort.

      Ultimately, I guess many companies either do not have the resources to build good software or do not actually care to.

  • gitaarik 3 hours ago
    Yeah, but to be honest, I sometimes just tell Claude to clean up / refactor stuff. It finds a lot of things, discusses them with me, I approve the plan, and it churns away at my tokens for a while. I do this once in a while, and after over 6 months of it I don't feel like my development has significantly slowed down. My token usage is higher for sure, but so is my codebase, so I'm not worried about that. To me, AI seems to make maintenance very easy, like everything else. You just need to do it.

    Edit: I make it sound a bit simple, maybe. I also do more extensive refactors, where I'm more involved and opinionated. But I don't feel the need to do that very often or very deeply. Sometimes, though, it's definitely necessary to prevent the project from going off the rails.

    • tossandthrow 2 hours ago
      This is my experience exactly.

      I have reduced the response time on our API from 80ms to 30ms and gotten a setup we can comfortably grow into.

      I would not have had time to track down these optimizations without Claude Code.

  • joshka 2 hours ago
    I feel like AI might let us model some of the things that we initially didn't scope and that led to these problems (e.g. "decided not to fix every bug, or upgrade every dependency") -- being able to more easily ask a system that can dig into "how much time are we spending on stuff related to foo"

    AI tooling can also be a place where we start building our view of what maintainable software practices look like so we don't make decisions that have these same tail effort profiles. That can be things like building out tooling to handle maintenance updates

    I think the real thing that comes out of AI tooling is probably that the tooling needs to be trained (or steered) towards activities that enhance human attention management.

    • yurishimo an hour ago
      > AI tooling can also be a place where we start building our view of what maintainable software practices look like so we don't make decisions that have these same tail effort profiles. That can be things like building out tooling to handle maintenance updates

      This has been possible already, but from my vantage point, it doesn't look like anyone really did it? Sure, there already exist tons of OSS tools built for this case, even before AI, yet it always seems to come back to incentives. IMO, there is no incentive to write maintainable software (and I'm not sure there ever will be one at this pace). Businesses are only incentivized to write enough software to accomplish the task within their own defined SLAs and nothing further. But even that doesn't seem to be a blocker at this point, if GitHub is any example.

      Good software comes from people who care deeply about solving problems they are invested in. If your employees don't care about your product, you're already starting on the wrong foot. AI isn't going to incentivize bad-to-average developers to write better software, or a good developer to push back harder against their clueless manager. When they make the decision, AI might help (assuming it doesn't make a bigger mess), but it's not going to reduce technical debt in any meaningful way without a sea change in perspective from product managers around the world.

      So far, I just don't see it happening in theory or in practice. I hope I'm proven wrong!

      • joshka 40 minutes ago
        I think I have a different perspective on this because I've worked in places that do care about that sort of thing, on tools that do focus on those sorts of things. I think the long-term incentive for these tools to address tech debt as a goal comes from the AI eval benchmarks trending towards saturation. The advantage of one tool over another will be in the longer-context things, and this will naturally start to act as a forcing function for training to focus on the longer tail of software development. A good way of thinking about this: GPT-3.5 was good at lines of code and functions, 4 at functions and small apps, 5 seems adept at delivering apps and systems, and 6 will be systems and whole enterprise programs of work.
  • m463 7 hours ago
    Same with code reviews.

    I wonder if AI could make code reviews more presentable.

    For example, with human code reviews, developers quickly learn not to visually change code: reflowing code or comments, changing indentation (where the tools can't suppress it), moving functions around, removing lines, or making other spurious changes.

    And don't refactor code needlessly.

    Also, it could break reviews up into two reviews: functional changes and cosmetic changes.
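
    A crude sketch of that split (my own illustration, not a real review tool): treat a change as cosmetic when the two versions are identical after normalizing whitespace and blank lines. Real tooling would compare ASTs, especially for whitespace-significant languages.

```python
def is_cosmetic_change(old: str, new: str) -> bool:
    """Rough heuristic: True if two file versions differ only in
    whitespace, indentation, or blank lines. An AST-level diff would
    be needed to do this safely for languages like Python, where
    indentation carries meaning."""
    def normalized(text):
        # collapse runs of whitespace, drop blank lines
        return [" ".join(line.split())
                for line in text.splitlines()
                if line.strip()]
    return normalized(old) == normalized(new)
```

    A reviewer bot could route hunks that pass this check into a separate "cosmetic" review, leaving the functional review small.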

  • hona_mind 2 hours ago
    The article's framing around the maintenance-to-feature ratio resonates with something I've been noticing in my own workflow.

    One underappreciated aspect: the artifact surface area of an AI session grows much faster than the code surface area. For every hour of Claude Code output, you get not just code changes but screenshots, generated images, exported transcripts, spec drafts, downloaded model weights — all scattered across wherever Finder happened to drop them.

    The maintenance cost argument applies here too. If you can't quickly navigate to the right artifact at the right moment, you end up re-generating things you already have, or worse, losing context between sessions. The "maintenance" of your working environment is a real tax on the ratio the article is describing.

    I've been trying to address the file-side of this problem specifically, but the broader point stands: AI coding agents will only reduce net maintenance costs if the surrounding tooling (file management, context switching, artifact organization) keeps pace.

  • faangguyindia 2 hours ago
    With AI, you can hypothesise what could potentially break with each new addition (things your regression tests don't even capture at present). Then you can write tests for each of those hypotheses, ask AI to deploy a canary, ask AI to divert 5% of traffic to the canary, ask AI to analyse the logs for any signs of performance regression, and ask AI to roll it out to 100% if everything looks good. Congrats! At this point, you've become a slave to AI and cannot do without it. Even logging into a remote server now causes mental pain; having to do anything by hand causes pain. You just wait for your limit to reset so you can return to slavery. A master of a slave is as much a slave to his slave as the slave is to the master.
    • danielbln an hour ago
      My local model humming next to me will always be available. Is it as good as a foundational model? No. But it'll work just fine for most pedestrian tasks and I don't need to keep now useless mechanical knowledge in my brain.
  • philipp-gayret an hour ago
    This would be an interesting concept and read were it grounded in reality. Unfortunately, its data and graphs are pulled out of someone's imagination. The reality nowadays is that with the right skill set, you can take state-of-the-art AI tools, get a complete language rewrite and/or refactor, and be done the same afternoon.
    • pdhborges an hour ago
      At least if you have a test suite that doesn't have to be migrated. I too would like to migrate some services from Python to Rust, but my test suite is written in Python, so before doing the rewrite I would have to manually check that the test suite migration was correct (I can't even compile it!).
  • ianmarcinkowski 5 hours ago
    My low-value comment: this feels directionally correct to me. The problems I've been struggling with in my dev job for the past 6 months have been 80% maintenance/legacy code interfering with new feature development.

    Some of our developers are overly aggressive about using AI and I've started going down that path because I need to keep up and actually enjoy the flow of working with AI in my IDE.

    I put a lot of work into keeping my area of the codebase understandable and coherent, but I do not see that from the others on our team. I'm not perfect, but I am extremely sensitive to code that is incoherent or un-grok-able at a glance.

    Anyway, I like the novel (to me at least) framing of this article!

  • devinabox 2 hours ago
    Great article! I think we are ultimately heading towards a world where much better software will be created. This is the major roadblock we need to cross before that can be true, but I think it is a very tractable problem!

    I created a video that talks about this in more detail:

    https://www.youtube.com/watch?v=G3Q7Y-nrUbk

  • hamhamed 3 hours ago
    This is what I've been preaching to my team. With 5.5 and 4.7, the coding agents are good enough now to almost never take on any tech debt. Any new feature or fix should come with a cleanup or refactor, in the same PR.
    • esailija an hour ago
      That's better than 99.99999% humans. Where do I put my credit card details?
  • stevepotter 7 hours ago
    For me, if I can make a kickass testing system that people love so much that they actually build features with it and it’s not an afterthought, then maintenance becomes much easier. It’s often called test driven development but I’ve rarely seen it done in such a way that the dev ex is good enough for it to work.

    But say you have that. Then you have great profiling. At that point you can measure correctness and performance, implementation becomes less of a focal point, and that makes it a lot easier to concede coding to AI.

    • NotGMan 7 hours ago
      This is probably how things will work in the future: devs will shift to specifying features, which will be validated through tests.

      The AI will then be the middle layer that iterates until the tests pass.

      Layer 1: Specs (Humans)

      Layer 2: Code (AI mostly)

      Layer 3: Tests (AI + human checks).
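
      A minimal sketch of that middle layer (names like `generate_patch` are placeholders for whatever model call you use, not a real API):

```python
def iterate_until_green(spec, generate_patch, run_tests, max_rounds=5):
    """Layer 2 as a loop: regenerate code from the spec plus test
    feedback until the Layer 3 tests pass, or give up."""
    feedback = spec
    for _ in range(max_rounds):
        code = generate_patch(feedback)    # AI writes the code
        passed, report = run_tests(code)   # humans own/check these tests
        if passed:
            return code
        feedback = spec + "\nFix these failures:\n" + report
    raise RuntimeError("spec not satisfied after %d rounds" % max_rounds)
```

      The `max_rounds` cap matters: without it, an unsatisfiable spec burns tokens forever, which is exactly where the human checks in Layer 3 come in.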

      • visarga 4 hours ago
        Yes, that is how I see it too. What I would add is intent testing: collect user messages and check them against executed work from time to time. Every ask must be implemented and tested, and every piece of code must be justified by a user message.
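
        That bidirectional check could look something like this (a toy sketch; the ask-to-change mapping would have to come from commit messages or an agent's logs, which is the hard part):

```python
def intent_coverage(asks, changes):
    """Report asks with no implementing change, and changes with no
    justifying ask. `asks` is a list of ask ids; `changes` maps a
    change id to the set of ask ids it claims to implement."""
    implemented = set()
    for ask_ids in changes.values():
        implemented |= ask_ids
    unimplemented_asks = [a for a in asks if a not in implemented]
    unjustified_changes = [c for c, ids in changes.items()
                           if not ids or not ids & set(asks)]
    return unimplemented_asks, unjustified_changes
```

        Run periodically, a non-empty result on either side is the signal visarga describes: an ask that was dropped, or code nobody asked for.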
      • jplusequalt 5 hours ago
        What a boring fucking future.
        • bluefirebrand 2 hours ago
          No kidding. AI does all the interesting problem solving and humans...

          Write tests. The most boring activity on the planet

  • Jimmy0252 5 hours ago
    The maintenance-cost framing is the useful constraint. I’d rather see agents default to smaller diffs, test scaffolding, and explicit assumptions than maximize lines changed per prompt.
    • robotbikes 5 hours ago
      I think this is still the role of human oversight: these tools will forever be imperfect, and the instructions we give them as prompts will always be prone to inaccuracies and misinterpretation. I find it useful to evaluate the code and often ask for simpler solutions, and so far this has produced slightly more elegant results. They have a tendency to spawn helper functions for every problem, or to do things in a slightly weird or at least unconventional way when there is an easier, standard way that would create less code. Your ideas, if automated, would definitely make things more maintainable, but even code produced by machines requires a human to be responsible for verifying that it works.
  • aetherspawn 5 hours ago
    I think AI is great for the soul destroying boring stuff that makes me want to quit my job like wrapping legacy code in test cases. Hey I’ll take on any idiot who’s willing to do that job, even if he’s artificial.
    • WhereIsTheTruth 9 minutes ago
      You can only type at 50 WPM and read one file at a time; the LLM doesn't have those physical limits. Use it to your advantage so you can actually focus on the work that matters.
  • psychoslave 2 hours ago
    https://www.laws-of-software.com/laws/kernighan/ relates here.

    The incentives for remote LLMs are off, though, when it comes to providing defaults that optimize for maintainable, sound architecture. In the same way, Claude will produce overviews of the indexes of the summaries of comprehensive reports that no one is going to read. No doubt this feels like an excellent KPI for how much output was generated.

  • lovich 4 hours ago
    So what are all of these agentic based strategies going to do once the infinite money spigot of investment into AI ends and they need to start charging prices that actually make a profit?

    I get that most of the cost is in training and not inference, but I don't see how models stay useful once the world's software moves on a few months post-training, since the models can't learn without said training.

    Are we just going to have shops do the equivalent of old COBOL shops, where everything is built to one year's standards and the main language/framework is mostly set in stone?

    • tedbradley 2 hours ago
      Glad you asked. AI empowers people who couldn't do a job before to do a job. With more supply of qualified workers, these workers compete with each other by lowering the salary they'll take.

      So:

      * You get paid less.

      * The company might pay a similar amount due to LLM costs. Although it could be more or less as well, depending on how it works out.

      A couple of years ago, I saw a story about a guy writing two articles a day for a website. The boss asked him if he wanted to transition to being an AI-assisted writer for less pay. He said no. After a couple of weeks, he got canned. He checked the website, and it had a bunch of AI writing on it.

      LLMs are there to reduce your salary and increase the business owner's profits. Wealth inequality is only going to grow more and more, along with a ton of people fired across many different fields.

      • ehnto an hour ago
        That is one possibility (and it is playing out). Another one worth contrasting is the idea of AI as leverage for the worker. If you can take a regular developer and augment their output by 25%, then they have become more valuable to you, and you should pay them more. Why? Because the market rate will price in that they now provide more value, and you'll lose those workers to competitors if you don't.

        That's a pretty old economic idea, and it will be interesting to see if it holds up in this instance. I have no idea how this all plays out. I do think it won't be one size fits all though.
