I am an indie dev and I built this tool to scratch my own itch.
I spent a lot of time trying to replicate styles I saw on X (Twitter), but I realized that simply copying prompts doesn't work anymore. The problem is that every AI model now speaks a different "language." Midjourney relies heavily on parameters like --sref and --stylize, while Flux prefers structured data, and DALL-E just wants simple natural English.
Existing image-to-text tools usually just describe what is in the image. They don't give you a prompt that tells the model how to generate it.
So I built Prompt Lab to focus on model-specific tuning. When you upload an image, my tool analyzes the visual style and composition, then translates that data into the specific syntax for your target model.
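To give a feel for what "model-specific tuning" means, here's a rough sketch of the translation step. This is illustrative only, not Prompt Lab's actual code; the attribute names (`subject`, `descriptors`, `stylize`, etc.) are hypothetical stand-ins for whatever the image analysis extracts.

```python
# Illustrative sketch: the same extracted style attributes get rendered
# into different syntax depending on the target model.
# All names here are hypothetical, not Prompt Lab's real internals.

def build_prompt(style: dict, target: str) -> str:
    subject = style["subject"]
    descriptors = ", ".join(style["descriptors"])

    if target == "midjourney":
        # Midjourney favors terse tags plus CLI-style parameters.
        return (f"{subject}, {descriptors} "
                f"--ar {style['aspect_ratio']} --stylize {style['stylize']}")
    if target == "flux":
        # Flux gets a longer, structured description.
        return (f"Subject: {subject}. Style: {descriptors}. "
                f"Composition: {style['composition']}.")
    # DALL-E: plain natural-language sentence.
    return f"A {style['composition']} image of {subject}, {descriptors}."

style = {
    "subject": "a lighthouse at dusk",
    "descriptors": ["muted pastel palette", "film grain", "soft haze"],
    "composition": "wide-angle",
    "aspect_ratio": "16:9",
    "stylize": 250,
}

print(build_prompt(style, "midjourney"))
print(build_prompt(style, "dalle"))
```

Same analysis, three very different prompt strings: one of them is just parameter soup, and that's the point.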
It is definitely an MVP, so things might break. Please give it a spin and let me know what you think in the comments. I'm actively working on it today, so if you have ideas for improving the prompt syntax or spot any bugs, just drop them here.