2 points by 0-bad-sectors 4 hours ago | 4 comments
  • vibe42 3 hours ago
    Keep it simple and run a fresh, new context for each prompt.

    I use the pi-mono coding agent with several different new open models running locally.

    The simpler and more precise the prompt the better it works. Some examples:

    "Review all golang code files in this folder. Look for refactor opportunities that make the code simpler, shorter, easier to understand and easier to maintain, while not changing the logic, correctness or functionality of the code. Do not modify any code; only describe potential refactor changes."

    After it lists a bunch of potential changes, it's then enough to write "Implement finding 4. XYZ" and sometimes add "Do not make any other changes" to keep the resulting agent actions focused.
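A hypothetical sketch, in Go, of the kind of "finding" such a review prompt tends to surface and what implementing it looks like (the function names and logic here are invented for illustration, not taken from the comment):

```go
package main

import "fmt"

// Before: the verbose style an agent might flag — manual index loop,
// mutable flag, no early exit.
func containsVerbose(items []string, target string) bool {
	found := false
	for i := 0; i < len(items); i++ {
		if items[i] == target {
			found = true
		}
	}
	return found
}

// After: the suggested refactor — shorter, idiomatic range loop with an
// early return. Same logic, correctness, and functionality.
func contains(items []string, target string) bool {
	for _, item := range items {
		if item == target {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(containsVerbose([]string{"a", "b"}, "b"))
	fmt.Println(contains([]string{"a", "b"}, "c"))
}
```

The point of the two-step prompt is that the review pass only produces descriptions like "replace the manual loop in containsVerbose with a range loop and early return"; the follow-up "Implement finding N" prompt then makes exactly that change and nothing else.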

  • phoughton 3 hours ago
    I find describing the problem often works well, as opposed to specifying a solution / change. And if it has a means to validate its results, then it's significantly better as well.

    So maybe describe the problem and work first on a means to detect errors, second - then let it rip.
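A minimal sketch of that "means to detect errors first" step in Go — here the buggy `Clamp` function and its expected behaviour are both invented for illustration; the idea is that the checks exist before the agent touches anything, so its fix can be validated mechanically:

```go
package main

import "fmt"

// Clamp stands in for the code under repair: it should restrict v to
// [lo, hi], but the upper bound is not enforced yet (the described problem).
func Clamp(v, lo, hi int) int {
	if v < lo {
		return lo
	}
	return v // bug: should also cap at hi
}

// checkClamp is the error-detection step written up front: it encodes the
// expected behaviour, so any agent-produced change is judged by this, not
// by eyeballing a diff.
func checkClamp() []string {
	var failures []string
	cases := []struct{ v, lo, hi, want int }{
		{5, 0, 10, 5},
		{-3, 0, 10, 0},
		{42, 0, 10, 10}, // this case exposes the bug
	}
	for _, c := range cases {
		if got := Clamp(c.v, c.lo, c.hi); got != c.want {
			failures = append(failures,
				fmt.Sprintf("Clamp(%d,%d,%d) = %d, want %d", c.v, c.lo, c.hi, got, c.want))
		}
	}
	return failures
}

func main() {
	for _, f := range checkClamp() {
		fmt.Println("FAIL:", f)
	}
}
```

With the failing check in place, the prompt can describe the problem ("Clamp does not enforce the upper bound; make checkClamp pass") rather than dictate a solution, and "let it rip" until the check is green.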

    • vibe42 3 hours ago
      This. And when possible, first ask the AI to add more granular logging around the code where the problem is, then re-run the code and feed the new log into a new context.

      I've used this to debug some moderately complex bugs in golang and godot code and it works really well: a fresh context gets only the (sometimes overly) granular debug logging plus the required, specific source code.
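A sketch of what that first "add granular logging" pass might produce, assuming a made-up Go function as the suspect code (the function and the bug are hypothetical; only the logging pattern is the point):

```go
package main

import (
	"log"
	"os"
)

// splitBudget is a stand-in for the code under suspicion. Step one asks the
// agent only to wrap it in granular logging like this; the program is then
// re-run and the resulting log is fed to a fresh context for the actual
// debugging.
func splitBudget(total, parts int) []int {
	logger := log.New(os.Stderr, "splitBudget: ", log.Lshortfile)
	logger.Printf("enter total=%d parts=%d", total, parts)

	share := total / parts
	logger.Printf("share=%d remainder=%d", share, total%parts)

	out := make([]int, parts)
	for i := range out {
		out[i] = share
		logger.Printf("out[%d]=%d", i, out[i])
	}
	// The remainder is silently dropped; the log line below makes that
	// visible without anyone having to read the arithmetic.
	logger.Printf("exit sum=%d (expected %d)", share*parts, total)
	return out
}

func main() {
	splitBudget(10, 3)
}
```

Running this prints a trail like `enter total=10 parts=3`, `share=3 remainder=1`, ending with `exit sum=9 (expected 10)` — exactly the kind of log that, pasted into a new context alongside just this one function, points straight at the dropped remainder.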

  • jomon003 2 hours ago
    I let it make a detailed to-do list of the issues or questions it has, and discuss them one by one. Then fix and push to git. Works very reliably.
  • saltyoldman 3 hours ago
    It is really good at refactors. Saying things like "deprecate the following configuration keys and replace them with the concept of ..." just works, even in a codebase with over 1000 files.
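A hypothetical Go sketch of the shape that "deprecate these configuration keys" refactor usually takes — the key names and the new "limits" concept are invented here; the pattern is mapping old flat keys onto the new concept while still accepting them for backward compatibility:

```go
package main

import "fmt"

// deprecatedKeys maps each old flat key to its replacement under the new
// concept (a nested "limits" section in this made-up example).
var deprecatedKeys = map[string]string{
	"max_conns":  "limits.max_connections",
	"max_mem_mb": "limits.max_memory_mb",
}

// migrateConfig rewrites deprecated keys to their replacements and leaves
// already-current keys untouched, so old config files keep working.
func migrateConfig(cfg map[string]string) map[string]string {
	out := make(map[string]string, len(cfg))
	for k, v := range cfg {
		if newKey, ok := deprecatedKeys[k]; ok {
			out[newKey] = v
			continue
		}
		out[k] = v
	}
	return out
}

func main() {
	fmt.Println(migrateConfig(map[string]string{
		"max_conns": "100",
		"timeout":   "30s",
	}))
}
```

In a large codebase the agent's job is then the mechanical part: find every read of `max_conns`, point it at `limits.max_connections`, and route legacy values through a shim like this — tedious for a human across 1000+ files, but exactly the kind of uniform change agents handle well.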