19 points by mordymoop 2 days ago | 4 comments
  • waffletower a day ago
    I appreciate the thesis -- however, experience with iterative development via Claude Code, Gemini CLI, etc. shows that the statement "the characteristic LLM pattern has always been that they get the right answer almost immediately, or they never do" is simply incorrect.
  • zen928 2 days ago
    I lean toward agreeing with this notion as well; I see it as a trade-off between the practical use of my time and the payoff of the skills I'm gaining. I actually learned this a bit earlier, in the heyday of Stable Diffusion, when colossal effort was being spent on designing techniques for prompting, inpainting, and img2img, and on understanding the effects of qualifiers and keywords. Large swathes of the community found ways to subtly manipulate the input to obtain specific visual characteristics, and common tricks were passed around to produce mostly consistent results.

    Then LoRAs were conceptualized, designed, and implemented by researchers completely unrelated to the community (originally introduced to fine-tune LLMs, then adapted to work with diffusion models), and they almost instantly displaced most of those techniques. If you had spent any amount of time learning those methods, your knowledge was now actively tainted by outdated tidbits that needed to be unlearned. Foundational changes of this nature happened frequently.

    Many similar stories will go untold, I'm sure.

    • mordymoop a day ago
      I also had a good friend who was an absolute wizard with early Stable Diffusion. He could make the model do things that were supposedly impossible at the time. His prompts were works of art. Now any commercial image model goes far beyond what he could do. It's interesting to think about how there was this ephemeral art form of manipulating image models that existed for about a year.

      The same could be said of prompt engineering. Gone are the days of telling the model that it is an expert software engineer with a PhD in the most relevant subtopic. These days the common wisdom is to just clearly articulate what you want it to do. Huge amounts of energy put into prompt engineering have been completely swept away by incremental model advances.

  • mordymoop 2 days ago
    A post arguing that agent orchestration is not the future of agentic coding.
  • yieldcrv a day ago
    > The eventual incremental improvement of the core model intelligence always blows away whatever capabilities improvements can be obtained with advanced agent orchestration systems.

    Yep. I'm watching 5-month-old YouTube videos about all these MCP servers you can add to Claude Code, and subsequently how to manage the context window bloat, and I'm like ".... all these problems they're describing are solved out of the box?"

    And that's when I realize it's 5 months old and therefore not using Opus 4.5.