2 points by delduca · 9 days ago · 3 comments
  • serf · 9 days ago
    The compile loop still slows down an LLM, so as always it's mostly project-dependent.

    What you want are the errors a compiler provides as guidance for the LLM, ideally without the compile loop; so if your project supports it, Python plus a linter that is hooked into the LLM for deterministic firing is a very solid choice for lots of LLM projects.
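
    The hook described above can be sketched in a few lines: run a deterministic check over the model's output and feed any diagnostics straight back into its context. This is a minimal stdlib-only sketch using `ast.parse` as the checker; the function name `lint_feedback` is hypothetical, and a real setup would invoke an actual linter (e.g. ruff or pyflakes) instead.

    ```python
    import ast

    def lint_feedback(source: str) -> str:
        """Return a deterministic error report for LLM-generated Python,
        or an empty string if the snippet parses cleanly.

        Sketch only: a real hook would shell out to a linter such as
        ruff/pyflakes for richer, still-deterministic diagnostics.
        """
        try:
            ast.parse(source)
        except SyntaxError as e:
            # This report is what you'd append to the LLM's context
            # instead of waiting on a full compile loop.
            return f"line {e.lineno}: {e.msg}"
        return ""

    broken = "def f(:\n    pass"      # invalid syntax -> non-empty report
    clean = "def f():\n    return 1"  # parses cleanly -> empty report
    print(lint_feedback(broken))
    print(repr(lint_feedback(clean)))
    ```

    The point of the sketch is the shape of the loop: the check fires deterministically on every generation, and only its textual output ever reaches the model.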

    As for the Electron application: I sort of disagree as far as LLMs are concerned.

    Once your program takes care of as many things as Electron does under the hood, your LOC is going to be fairly large, and that metric itself is going to be the main thing that makes the codebase less and less compatible with LLM usage. Generally speaking, it's best to stick to really modular, non-monolithic codebases and make a best attempt at keeping each module small and well documented.

    Once you get a huge monolith somewhere in the codebase, LLMs will start losing themselves unpacking it, and the quality of work will drop elsewhere as context is eaten.

  • zahlman · 7 days ago
    > For example, why create a Python Lambda that has a large cold start, is slower, and ends up costing more, when I can build the same Lambda in C++?

    So that you can have less code, which is more readable and focused on the actual task. Auditing code matters if you care about code quality at all, which you had better do if you want to build significantly beyond what the AI can one-shot.

    > The same applies to bloated Electron applications

    This is actually completely different. I agree with you on avoiding bloat. Efficiency is not the opposite of bloat.

  • absynth · 9 days ago
    RISC-V via librisc implies a lot of untapped potential. There's C/C++ and a bunch of others usable inside that.

    Python/Ruby/Erlang et al. still have a lot of horsepower, so I'd not discount them yet.