I like the styling, it's really slick. I also like that you let me use the tool online without signing up. I was curious how you're supporting this and paying for inference, but I see now that you haven't actually wired anything up. When I try to generate my one-pager, it returns a placeholder.
> Turn your messy ideas...
I'm not a fan of this framing. Messy has negative connotations, so it's not clear why you're insulting me when we just met. ;)
The wizard:
There's a bit of duplication, since you have "Tell the agent..." as well as "Tell me...", both conveying the same information.
I can jump through steps without completing prior ones. Isn't that going to cause a problem?
It's hard to truly evaluate this further without seeing it in action. As other commenters have said, many agents already support Plan Mode, so it's important for you to distinguish yourself from that.
The jumping through steps is not intended - that's a regression.
I agree on plan mode - this one is just a lot more fully featured. I should include some samples to demonstrate that. Here's an example, if you're interested, of the prompt plan output - https://github.com/benjaminshoemaker/data_graph_gap_report/b...
I also fixed the wizard text, I agree.
I also fixed the step jumping.
Would love to hear your feedback if you try it again with the fixes in place :)
I'm using AI a lot in planning, but I keep close manual oversight on specs and the development plan, and I still read all active-path code. (I give the AI a little, but not too much, leeway on testing, since sometimes it starts writing tests asserting true == true.)
If you're looking for feedback, you could include a tiny section on the homepage about how to run the output docs. e.g. put them in a folder, point Claude Code/Codex to it and give it the prompt.
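To make the suggestion concrete, a tiny "how to use the output" section could look something like the sketch below. The file names (spec.md, prompt_plan.md) and the kickoff prompt are hypothetical examples, not the tool's actual output names:

```shell
# Hypothetical workflow sketch: file names and prompt wording are
# illustrative, not the tool's real output.

# 1. Save the generated docs into a folder in your project.
mkdir -p docs
printf '# Spec\n' > docs/spec.md
printf '# Prompt plan\n' > docs/prompt_plan.md

# 2. Point your agent (Claude Code, Codex, etc.) at the folder and
#    give it a kickoff prompt along these lines:
echo 'Read docs/spec.md and docs/prompt_plan.md, then implement step 1 of the plan.'
```

Even two or three lines like that on the homepage would remove the "ok, I have the docs, now what?" moment for first-time users.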
Thanks for building this!
Now you are going to have one write out instructions for an AI?! I guess we know how the AI apocalypse gets started!
(And then the implementation plan is fed to the same sort of AI that you were going to give the "idea" to in the first place.)
If doing this gives good results, then it shouldn't be necessary.
Most advances in the tools I've used over the last two years are exactly this sort of thing: automating the steering and feedback loop the prompt goes through, fairly boilerplate sequencing of refinement from initial idea -> plan -> execution -> feedback.
Unless your tool has people skills, this engineer can just take the spec to the agent ;)
I guess we should tell thousands of AI researchers to stop what they're doing right now since you're a single prompt away from solving the problem??
There's no need for an app like this anyway.
You want this as a series of prompts that handle the various stages.