23 points by khvirabyan 4 months ago | 4 comments
  • danjl 4 months ago
    Future code bases will contain more detailed specifications, design documents, and company guidelines, and the code will become more of a byproduct, generated as needed. I already do this more and more with my LLM config files. This already happened a while ago with compilers replacing machine code, and more recently with open source libraries replacing proprietary code. So, yes, more "intent"-based documents, which represent a higher-level abstraction of the problem.
  • junto 4 months ago
    This is an interesting perspective, and it certainly makes sense that the “why” always needs the human. What seems clear for the moment is that AI coding helpers can’t deliver anything novel by themselves, although through luck they might stumble across a new invention by combining existing human thought. The question for me is: once the majority of AI code has been re-spidered and vectorized, and reused and reused, is there anything novel left?
  • williamcotton 4 months ago
    What if instead of static code our programs of the future are written/rewritten on the fly?

    Imagine a simple todo application. Instead of a conditional filter on completed items, the code itself changes to always and only return completed items upon request, spoken or otherwise.
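    A minimal sketch of that idea in Python (entirely hypothetical; the `rewrite_get_todos` helper and the data are made up): instead of keeping a conditional filter in the code, the program replaces its own query function when the request comes in.

```python
# Hypothetical sketch: a program that rewrites its own code path on request,
# rather than keeping a permanent conditional filter.
todos = [
    {"title": "buy milk", "done": True},
    {"title": "write report", "done": False},
]

# Static version: the conditional filter lives in the code forever.
def get_todos(only_completed=False):
    return [t for t in todos if t["done"]] if only_completed else list(todos)

# "Rewritten on the fly": on request, replace the function so it always
# and only returns completed items; the conditional disappears entirely.
def rewrite_get_todos():
    global get_todos
    def get_todos():
        return [t for t in todos if t["done"]]

rewrite_get_todos()
print(get_todos())  # only the completed item remains
```

    In a real system the rewrite would presumably be produced by the model itself, but the mechanics are the same: the program's behavior changes by changing the program, not by branching.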

    • nthingtohide 4 months ago
      Ephemeral, customised, personalised apps are the future. Imagine you have to plan a wedding or a big event. Just build an app for it. The app will fit your hand like a glove.
    • bibryam 4 months ago
      AI already does that. Send ChatGPT a question that requires analytics, and it will write Python code, execute it, and return you the result.
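      For example, here is the kind of throwaway snippet an LLM might generate and run to answer "what's the average order value per month?" (the data and field names are invented for illustration):

```python
# Illustrative only: a tiny analytics snippet of the sort an LLM
# might generate and execute to answer an ad-hoc question.
from collections import defaultdict
from statistics import mean

orders = [
    {"month": "2024-01", "value": 120.0},
    {"month": "2024-01", "value": 80.0},
    {"month": "2024-02", "value": 200.0},
]

# Group order values by month, then average each group.
by_month = defaultdict(list)
for order in orders:
    by_month[order["month"]].append(order["value"])

avg_per_month = {month: mean(values) for month, values in by_month.items()}
print(avg_per_month)  # {'2024-01': 100.0, '2024-02': 200.0}
```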
  • nthingtohide 4 months ago
    Wrt premature optimization: in the future, AI will write the most optimized code for every general and obscure case out of the box, eking out every last drop of performance from any kind of hardware.

    This will usher in the era of "hypermature optimization".

    • stuckinhell 4 months ago
      Already seeing this happen in a Salesforce implementation at work.

      It's pretty crazy how many of our problems are people trying to solve already-solved problems.

      • nthingtohide 4 months ago
        Can you share one or two anecdotes? I am curious.
        • stuckinhell 4 months ago
          I need to be vague, but let's just say CRUD apps are nearly a solved problem, even in some very technical fields.

          Not much changes between billing models or HR models, and even custom financial models aren't THAT custom these days.

          Consolidation of best practices across diverse technical fields has happened rapidly in the last 10 years.

          So the AI catches things that business managers, project managers, business analysts, and programmers miss during initial project scoping and design.

    • dartos 4 months ago
      What makes you say that besides blind faith in the exponential advancement of language models?

      I’ve seen no evidence that AI writes particularly performant code (especially with respect to specific hardware).

      Nor have I seen any big player showing off models which were tuned for performant code generation.

      Code performance work is almost always very specific to the codebase at hand and not at all general.

      • nthingtohide 4 months ago
        Is there a physical law which will limit AI progress? I don't have a timeline in mind but I think that is the future.
        • dartos 4 months ago
          > Is there a physical law which will limit AI progress?

          Probably the same law that limits compute power wrt space and energy.

          Look at how much energy went into training GPT-4.5 vs. its improvements (on our own, potentially bunk, benchmarks, granted).

          Also like… literally nothing in the universe advances exponentially forever.

          Whether or not any of that is valid, I’d like an answer to my question.

          Is a blind belief that AI will advance exponentially in all domains the only reason you think that?

          • nthingtohide 4 months ago
            If AI is a general-purpose pattern-recognition algorithm, and the real world has patterns, then as long as the data (i.e., the real world) has juice left, AI will keep improving. Recursive self-improvement is included in that domain. The only constraint will be how long it takes to gather feedback from the real world when AI runs experiments.
            • dartos 4 months ago
              I’m sorry, but I didn’t understand what you meant on and after the word “juice”.

              I’m not sure any of that was relevant to what I was saying.

              And you still didn’t answer my question.

              • nthingtohide 4 months ago
                I am only doubtful about the exponential part. I am hopeful that, yes, all fields will ultimately be subsumed by AI.
                • dartos 4 months ago
                  I believe exponential (as opposed to logarithmic) advancement would be needed for that to happen.

                  If the improvement curve is linear, it would be prohibitively expensive, and if logarithmic, it’d be impossible.
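                  A toy way to see the cost side of that argument (my framing, with invented numbers, not a model of any real system): if capability grows logarithmically with compute, then the compute needed for each extra unit of capability multiplies rather than adds.

```python
# Toy illustration (hypothetical numbers): capability that grows
# logarithmically in compute implies exponential compute per capability step.
def compute_needed(capability_units, base=10):
    # Inverse of capability = log_base(compute): compute = base ** capability.
    return base ** capability_units

# Each +1 step of capability costs 10x the compute of the previous step.
steps = [compute_needed(c) for c in (1, 2, 3)]
print(steps)  # [10, 100, 1000]
```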

                  • nthingtohide 4 months ago
                    Another point in your favour: the problem space itself will become exponentially large. Imagine getting a requirement to synthesize a material with 40 properties rather than 1 or 2.