Watching the shift from hand-written prompts to systematically optimized prompt × model combinations is fascinating. It challenges how I’ve traditionally thought about prompt engineering and points toward a more systems-driven approach to reasoning.
Curious whether this becomes the default for LLM applications, where prompts aren’t static artifacts but continuously optimized components of the stack.
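To make the idea concrete, here is a minimal sketch of treating prompts as optimized components rather than static artifacts. Everything here is hypothetical: the prompt templates, model names, and the `score` function (a stand-in for running an eval set and averaging a task metric) are illustrative, not any particular framework's API.

```python
from itertools import product

# Hypothetical candidate prompts and models to search over.
prompts = {
    "terse": "Answer in one sentence: {q}",
    "cot": "Think step by step, then answer: {q}",
}
models = ["model-a", "model-b"]

def score(prompt_name: str, model: str) -> float:
    # Stand-in for evaluating the (prompt, model) pair on a held-out
    # eval set; these numbers are made up for illustration.
    fake_scores = {
        ("terse", "model-a"): 0.61,
        ("terse", "model-b"): 0.58,
        ("cot", "model-a"): 0.72,
        ("cot", "model-b"): 0.80,
    }
    return fake_scores[(prompt_name, model)]

# Exhaustive search over the prompt × model grid. Real optimizers search
# this space more cleverly (proposing and mutating prompts), but the
# principle is the same: the prompt is a tuned parameter, not a constant.
best = max(product(prompts, models), key=lambda pm: score(*pm))
print(best)  # → ('cot', 'model-b')
```

Rerunning this search whenever the model, eval set, or task drifts is what makes the prompt a continuously optimized component of the stack rather than a frozen artifact.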