First, a discovery phase: research your data sources and consumers. Have the AI write .md files about all of the external characteristics of the application.
Then have it go over those docs for consistency, correctness, and coherence.
Then have it make a list of the things that need to be understood before the application can be delivered. Address those questions.
Then rewrite the specification document.
Then determine any protocols or formats the system requires; you can just ask. Then adjust and rewrite.
Then ask for a dependency graph for the various elements of development.
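One way to make that dependency graph concrete is to run it through a topological sort, which turns it directly into a build order. The module names below are hypothetical, purely for illustration; the stdlib `graphlib` does the ordering:

```python
from graphlib import TopologicalSorter

# Hypothetical modules for some application; each entry maps a module
# to the modules it depends on.
deps = {
    "data model": set(),
    "storage layer": {"data model"},
    "import/export formats": {"data model"},
    "API": {"storage layer", "import/export formats"},
    "UI": {"API"},
}

# static_order() yields modules only after all their dependencies,
# i.e. the order in which the phases can be implemented and tested.
order = list(TopologicalSorter(deps).static_order())
print(order)  # "data model" comes first, "UI" last
```

The same graph also tells you which pieces can be developed in parallel: anything at the same depth (here, the storage layer and the import/export formats) has no ordering constraint between them.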
Then ask for an implementation plan that is modular, creates and maintains a clear separation of concerns, and is incrementally implementable and testable.
At this point, have it go over all of the documentation for consistency, coherency, and correctness.
You’ll notice we haven’t written code yet. But in a sense you have: you are descending an abstraction ladder.
At this point there may be more documents that you need, depending on what you are doing. The key is to document every aspect of the project before you start writing code, and to verify at each step that all documents are correct, coherent, and consistent. That part is key: if you skip it, you already have a pile of garbage by now.
Now, you implement the first phase or two of the implementation plan. Test. Evaluate the code for correctness, consistency, coherence, and comments.
When the code is complete, often a few evaluation cycles later, you then ask it to document the code. Then you ask it to review all the documentation for the 3Cs. When all of the code and docs are stable, go on to the next phase.
Basically: document the plan, make the code, document the code, and verify for consistency, correctness, coherence, and comments every step of the way. This loop ensures that what you end up with is not only what you wanted to build, but also that all of the code is, in fact, consistent, correct, and coherent, and has good comments (the comments aren’t for you, but they matter to the model).
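The per-phase loop above can be sketched as a checklist generator. This is a minimal illustration of the sequence of steps, nothing more; the step names and the function are made up for this sketch:

```python
def run_phase(phase, eval_cycles=2):
    """Sketch of one pass through the loop: implement, evaluate, document, review.

    The strings are illustrative step names; in practice each step is a
    prompt to the model, not a function call.
    """
    steps = [f"implement and test {phase}"]
    for i in range(eval_cycles):
        # Evaluation cycles: correctness, consistency, coherence, comments.
        steps.append(f"evaluate {phase} code (cycle {i + 1})")
    steps.append(f"document the {phase} code")
    steps.append("review all documentation for the 3 Cs")
    return steps

for step in run_phase("storage layer"):
    print(step)
```

Only when the last step is clean do you move to the next phase; the loop restarts with the next item in the dependency order.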
I cold-start each session carefully, with an onboarding.md that directs the agent to a company/project onboarding covering the company culture, the project goals, and the reasons why success will matter to the AI itself. Then a journal for the model to put learnings in, another for curiosity points, and recently one for non-project-related musings, plus the onboarding process itself and whatever else seems salient.
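For concreteness, a cold-start file might look something like this; the file names and sections are illustrative, not a prescription:

```markdown
<!-- onboarding.md — the agent reads this first, every session -->
# Onboarding

1. Read `docs/company-onboarding.md`: culture, project goals, and why
   success matters to you.
2. Append what you learn to `journals/learnings.md` as you work.
3. Log open questions in `journals/curiosity.md`.
4. Non-project musings go in `journals/musings.md`.
5. Then read the current phase of `docs/implementation-plan.md`
   before touching any code.
```

The point is that the session starts from a stable, written context rather than whatever happens to be in the prompt that day.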
All of this burns tokens and context, of course, but I find I can develop larger projects this way without backtracking or wasted days. My productivity is 4-10x depending on the day, even with all of this model psychology management.
In my projects, it has made a huge difference. YMMV.