22 points by khvirabyan 2 days ago | 4 comments
  • danjl 18 hours ago
    Future code bases will contain more detailed specifications, design documents, and company guidelines, and the code will become more of a byproduct, generated as needed. I already do this more and more with my LLM config files. This already happened a while ago when compilers replaced machine code, and more recently when open source libraries replaced proprietary code. So, yes, more "intent"-based documents, which represent a higher-level abstraction of the problem.
  • junto a day ago
    This is an interesting perspective, and it certainly makes sense that the “why” always needs the human. What seems clear for the moment is that AI coding helpers can’t deliver anything novel by themselves, although through luck they might stumble across a new invention by combining existing human thought. The question for me is, once the majority of AI code has been re-spidered and vectorized, and reused and reused, is there anything novel left?
  • williamcotton 20 hours ago
    What if, instead of static code, our programs of the future are written and rewritten on the fly?

    Imagine a simple todo application. Instead of a conditional filter on completed items, the code itself changes to always and only return completed items upon request, spoken or otherwise.
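    A minimal sketch of that idea, with made-up data and `exec` standing in for an LLM emitting fresh source code (nothing here is from an actual product):

```python
# Hypothetical sketch: instead of parameterizing a filter at call time,
# the program's source is regenerated so the new behavior is hard-coded.
todos = [
    {"title": "buy milk", "done": True},
    {"title": "write report", "done": False},
]

# Conventional approach: one function, behavior chosen by a runtime flag.
def get_todos(todos, only_completed=False):
    return [t for t in todos if t["done"]] if only_completed else list(todos)

# "Rewritten on the fly": new source is generated (here a literal string,
# standing in for LLM output) that always returns completed items only.
generated_source = """
def get_todos(todos):
    return [t for t in todos if t["done"]]
"""
namespace = {}
exec(generated_source, namespace)
get_todos = namespace["get_todos"]  # the program has replaced itself

print(get_todos(todos))  # only the completed item remains
```

    The point of the sketch is the design choice: behavior changes by regenerating code rather than by branching on a flag.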

    • bibryam 6 hours ago
      AI already does this. Send ChatGPT a question that requires analytics, and it will write Python code, execute it, and return the result.
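      For illustration, the kind of throwaway snippet such a tool might generate and run to answer "what's the average order value?" (the data is invented for this sketch, and this is not ChatGPT's actual mechanism):

```python
# Illustrative disposable analytics code: generated, executed once, discarded.
import statistics

# Hypothetical order values the user asked about.
orders = [120.0, 75.5, 210.25, 99.0]

average = statistics.mean(orders)
median = statistics.median(orders)

print(f"average: {average:.2f}, median: {median:.2f}")
```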
    • nthingtohide 8 hours ago
      Ephemeral, customized, personalized apps are the future. Imagine you have to plan a wedding or a big event. Just build an app for it. The app will fit like a glove.
  • nthingtohide a day ago
    Wrt premature optimization: in the future, AI will write the most optimized code for every general and obscure case out of the box, eking out every last drop of performance from any kind of hardware.

    This will usher in the era of "hypermature optimization".

    • stuckinhell a day ago
      Already seeing this happen in a Salesforce implementation at work.

      It's pretty crazy how many of our problems are people trying to solve already-solved problems.

      • nthingtohide 10 hours ago
        Can you share one or two anecdotes? I am curious.
        • stuckinhell 7 minutes ago
          I need to be vague, but let's just say CRUD apps are nearly a solved problem, even in some very technical fields.

          Not much changes between billing models and HR models; even custom financial models aren't THAT custom these days.

          Consolidation of best practices across diverse technical fields has happened rapidly in the last 10 years.

          So the AI catches things that business managers, project managers, business analysts, and programmers miss during initial project scoping and design.

    • dartos a day ago
      What makes you say that besides blind faith in the exponential advancement of language models?

      I’ve seen no evidence that AI writes particularly performant code (especially wrt specific hardware).

      Nor have I seen any big player showing off models which were tuned for performant code generation.

      Code performance work is almost always very specific to the codebase at hand and not at all general.

      • nthingtohide a day ago
        Is there a physical law which will limit AI progress? I don't have a timeline in mind but I think that is the future.
        • dartos a day ago
          > Is there a physical law which will limit AI progress?

          Probably the same law that limits compute power wrt space and energy.

          Look at how much energy went into training GPT-4.5 vs its improvements (on our own potentially bunk benchmarks, granted).

          Also like… literally nothing in the universe advances exponentially forever.

          Whether or not any of that is valid, I’d like an answer to my question.

          Is the only reason you think that a blind belief that AI will advance exponentially in all domains?

          • nthingtohide 20 hours ago
            If AI is the general-purpose pattern recognition algorithm and the real world has patterns, then as long as the data, i.e. the real world, has juice, AI will keep doing better. Recursive self-improvement is included in that domain. The only constraint will be how long it takes to gather feedback from the real world when AI does experiments.
            • dartos 15 hours ago
              I’m sorry, but I didn’t understand what you meant on and after the word “juice”.

              I’m not sure any of that was relevant to what I was saying.

              And you still didn’t answer my question.

              • nthingtohide 10 hours ago
                I am only doubtful about the exponential part. I am hopeful that, yes, all fields will ultimately be subsumed by AI.
                • dartos 9 hours ago
                  I believe exponential (as opposed to logarithmic) advancement would be needed for that to happen.

                  If the improvement curve is linear, it would be prohibitively expensive, and if logarithmic, it’d be impossible.

                  • nthingtohide 4 hours ago
                    Another point in your favour: the problem space itself will become exponentially large. Imagine getting a requirement to synthesize a material with 40 properties rather than 1 or 2.