7 points by WoodenChair 18 hours ago | 3 comments
  • chuckledog 16 hours ago
    > “As an aside, I think there may be an increased reason to use dynamic interpreted languages for the intermediate product. I think it will likely become mainstream in future LLM programming systems to make live changes to a running interpreted program based on prompts.”

    Curious whether the author is envisioning changing configuration of running code on the fly (which shouldn’t require an interpreted language)? Or whether they are referring to changing behavior on the fly?

    Assuming the latter, and maybe setting the LLM aspect aside: is there any standard safe programming paradigm that would enable this? I’m aware of Erlang (message passing) and actor pattern systems, but interpreted languages like Python don’t seem to be ideal for these sorts of systems. I could be totally wrong here, just trying to imagine what the author is envisioning.
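    A minimal sketch of what an actor-style answer might look like in Python (all names here are made up for illustration): the actor's behavior is just a function it holds, and a special "become" message swaps that function while the loop keeps running, roughly in the spirit of Erlang's hot code swapping.

    ```python
    import queue
    import threading

    log = []

    def actor_loop(mailbox: queue.Queue):
        # The actor's behavior is a plain function held in a local variable;
        # a "become" message replaces it while the loop keeps running.
        behavior = lambda msg: log.append(f"got: {msg}")
        while True:
            kind, payload = mailbox.get()
            if kind == "stop":
                break
            if kind == "become":  # hot-swap the behavior, Erlang-style
                behavior = payload
            else:
                behavior(payload)

    mailbox = queue.Queue()
    t = threading.Thread(target=actor_loop, args=(mailbox,))
    t.start()

    mailbox.put(("msg", "hello"))                             # old behavior
    mailbox.put(("become", lambda m: log.append(m.upper())))  # swap live
    mailbox.put(("msg", "hello"))                             # new behavior
    mailbox.put(("stop", None))
    t.join()
    ```

    Nothing here is specific to interpreted languages, but first-class functions and a message queue make the "change behavior on the fly" part almost trivial.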

    • handoflixue 16 hours ago
      I think at some point in the future, you'll be able to reconfigure programs just by talking to your LLM-OS: Want the System Clock to show seconds? Just ask your OS to make the change. Need a calculator app that can do derivatives? Just ask your OS to add that feature.

      "Configuration" implies a preset, limited number of choices; dynamic languages allow you to rewrite the entire application in real time.
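      As a toy illustration of that distinction (the `new_source` string stands in for hypothetical LLM output; none of this is from a real system): in a dynamic language, "add seconds to the clock" can be a literal rebinding of a function inside the running process, not a flag flip.

      ```python
      # A running "app" whose clock display is a module-level function.
      def format_time(h, m, s):
          return f"{h:02d}:{m:02d}"

      # Imagine the LLM-OS emitting new source in response to
      # "show seconds on the clock"; exec() rebinds the function in place.
      new_source = """
      def format_time(h, m, s):
          return f"{h:02d}:{m:02d}:{s:02d}"
      """
      exec(new_source, globals())

      print(format_time(9, 5, 42))  # now includes seconds: 09:05:42
      ```

      Configuration picks among choices the author anticipated; `exec` over the live namespace has no such limit (which is exactly why it is also a safety problem).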

    • WoodenChair 16 hours ago
      I was envisioning the latter (changing behavior on the fly). Think the hot-reload that Flutter/Dart provides, but on steroids and guided by an LLM.

      Interpretation isn’t strictly required, but I think runtimes that support hot-swap / reloadable boundaries (often via interpretation or JIT) make this much easier in practice.
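      A bare-bones version of that reloadable boundary in Python (`greeter.py` is a throwaway module name for this sketch, not anything from the article): write a module to disk, import it, edit the file, and `importlib.reload` swaps the code inside the live process.

      ```python
      import importlib
      import pathlib
      import sys

      # Skip bytecode caching so reload always re-reads the source file.
      sys.dont_write_bytecode = True

      # Create a tiny module on disk and import it.
      mod_path = pathlib.Path("greeter.py")
      mod_path.write_text("def greet():\n    return 'hello'\n")
      sys.path.insert(0, ".")
      importlib.invalidate_caches()
      import greeter
      assert greeter.greet() == "hello"

      # An edit lands on disk (by hand, by a tool, or by an LLM)...
      mod_path.write_text("def greet():\n    return 'HELLO!'\n")

      # ...and reload picks up the new behavior without a restart.
      importlib.reload(greeter)
      print(greeter.greet())
      ```

      The hard part in real systems isn't the reload call, it's migrating in-flight state across the boundary, which is where Erlang-style designs earn their keep.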

  • cadamsdotcom 17 hours ago
    The spec rarely has enough detail to deterministically create a product, so current vibecoding is a lottery.

    So we generate one or many changesets (in series or in parallel), then iterate on one. We force the "chosen one" to be the one true codification of the spec plus all the other stuff we didn't write down anywhere. Call it luck-driven development.

    But there’s another way.

    If we keep starting fresh from the spec, but keep adding detail after detail, regenerating from scratch each time... and the LLM has enough room in context to handle a detailed spec AND produce output, and the result is reasonably close to deterministic because the LLM makes "reasonable choices" for everything underspecified... that's a paradigm shift.

    • tjr17 hours ago
      At that level of detail, how far removed are we from “programming”?
      • cadamsdotcom 13 hours ago
        Far!

        But without the need to “program” you can focus on the end user and better understand their needs - which is super exciting.

  • SadWebDeveloper 17 hours ago
    This is another pointless article about LLMs... vibe coding is the present, not the future. The only sad part of it all is that LLMs are killing something important: code documentation.

    Every piece of documentation out there for new libs is AI generated, and that gets fed back into LLMs via MCP/Skills servers. The age of the RTFM gang is over, sigh.
