I feel the problem of token waste a lot, and that was actually the first reason I introduced a hierarchy for instructions and the artifact indexes: to avoid wasting tokens. Then I realized that this approach also helps keep a lean context, which helps the AI agent deliver better results.
Consider that in the initial phase the token consumption is very limited: it is in the implementation phase that tokens are consumed fast, and that the project can proceed with minimal human intervention. You can try just the first requirement-collection phase to get a feel for the approach; the implementation phase is pretty boring and not innovative.
How can you tell if your prompt process works? I feel like the outputs of an SDLC process are so much more high-level than what evals can capture, but I am no eval expert.
How would you benchmark this?