10 points by MichaelRazum 2 hours ago | 9 comments
  • kf 12 minutes ago
    Yes, absolutely, if you don't use AI in coding you will be a legacy developer sooner rather than later.

    Everyone seriously doing it has a bunch of agents in a corporate-like structure doing code reviews. The bad AI code comes from someone using a single instance of Claude or ChatGPT, but when you have 50 agents competing to write the best code from a single prompt, it hits differently.
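A minimal sketch of the "competing agents" pattern described above: fan one prompt out to N candidate agents, score each candidate with a reviewer, and keep the winner. The `generate_candidate` and `review_score` functions here are hypothetical stubs standing in for real LLM calls; no actual agent API is assumed.

```python
import random

def generate_candidate(prompt, seed):
    # Hypothetical stub for one agent's attempt at the prompt.
    # A real system would call an LLM here with varied sampling.
    return {"code": f"# solution {seed} for: {prompt}", "seed": seed}

def review_score(candidate):
    # Hypothetical stub for a reviewer agent; returns a numeric score.
    # Seeding makes the toy scoring deterministic for this sketch.
    random.seed(candidate["seed"])
    return random.random()

def best_of_n(prompt, n=50):
    # Fan the same prompt out to n agents, then keep the review winner.
    candidates = [generate_candidate(prompt, s) for s in range(n)]
    return max(candidates, key=review_score)

winner = best_of_n("fix the sizing bug", n=50)
```

In practice the reviewer step is the hard part: the scores are only as good as the review agents, which is the same trust problem the thread is debating.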

  • baCist 2 hours ago
    I think all of this has a dark future. And this can be argued based on how AI works.

    AI systems learn from code on the internet that was written by humans: smart, clean code. What they produce, unreadable spaghetti code, is the maximum they can squeeze out of the best code written by humans.

    In the near future, AI-generated code will flood the internet, and AI will start training on its own code. On the other hand, juniors will forget how to write good code.

    And when these two factors come together in the near future, I honestly don’t know what will happen to the industry.

    • MichaelRazum 44 minutes ago
      Not sure tbh. The labs creating the AI definitely know what they are doing, and it's incredible. I'd just argue that AI will only get better in the future.
  • Daedren an hour ago
    It's a problem. Seniors with AI perform far better because they have the skills and experience to properly review the LLM's plans and outputs.

    Juniors don't have that skillset yet, but they're being pushed to use AI because their peers are using it. Where do you draw the line?

    What will happen when the current senior developers start retiring? What will happen when a new technology shows up that LLMs don't have human-written code to be trained on? Will pure LLM reasoning and generated agent skills be enough to bridge the gap?

    These are all very interesting questions about the future of the development process.

  • decasteve an hour ago
    Reviewing code becomes more arduous. Not only are the pull requests more bloated, but the developer who pushed them doesn't always understand the implications of their changes. It's harder to maintain and track down bugs. I spend way too much time explaining AI-generated code to the developer who "wrote" it.
    • MichaelRazum 40 minutes ago
      Agree, especially since a review is always a knowledge update/exchange and, for juniors, a learning experience. If it's AI generated, it's just not worth the time.
  • kpbogdan 2 hours ago
    Yea, the development process is changing rapidly now. We are in this transitional period. I have no idea where we will end up, but it will be a different place vs. where we were like 1 year ago.
  • coldtea 2 hours ago
    >So what do you guys think? Is this the future?

    Yes. The future is quickly produced slop. Future LLMs will train on it too, getting even more sloppy. And "fresh out of uni" juniors and "outsourced my work to AI" seniors won't know any better.

  • damnitbuilds 2 hours ago
    There seems to be a disconnect, with some people claiming they don't write code any more, only specs, and me trying to get Copilot to fix a stupid sizing bug in our layout engine and it Not Getting It.

    Is this because the guys claiming success are working in popular, well-known, more limited areas like JavaScript in web pages, while the people outside those, with more complex systems, don't get the same results?

    I also note that most of the "Don't code any more" guys have AI tools of their own to promote...

    • MichaelRazum 38 minutes ago
      Maybe try Claude. Also, people are orchestrating AI, for example with ralph. I think it's possible to write pretty decent, test-driven code with AI.
    • nazgu1 an hour ago
      In my opinion, these guys just don't give a sh** about "stupid sizing bugs". Those who care about how their software behaves and looks realise after a while that most AI claims are a scam.
  • vdelpuerto 29 minutes ago
    [dead]
  • johnwhitman 19 minutes ago
    [dead]