Instead of running heavy ML models that would eat into the VRAM needed for generation, I implemented a clustering engine that runs entirely in Web Workers. It uses Jaccard similarity and Levenshtein distance with prompt normalization to automatically stack 50+ variations of the same prompt into a single view.
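To give a feel for how lightweight this is compared to an ML model, here is a minimal sketch of the two-signal similarity check. The function names, normalization rules, and thresholds (0.8 Jaccard, 0.2 normalized edit distance) are illustrative assumptions, not the app's actual code:

```typescript
// Normalize a prompt: lowercase, strip punctuation, collapse whitespace.
// (Illustrative rules; the real normalizer may differ.)
function normalize(prompt: string): string {
  return prompt.toLowerCase().replace(/[^\w\s]/g, " ").replace(/\s+/g, " ").trim();
}

// Jaccard similarity over token sets: |A ∩ B| / |A ∪ B|.
function jaccard(a: string, b: string): number {
  const setA = new Set(a.split(" "));
  const setB = new Set(b.split(" "));
  let inter = 0;
  for (const t of setA) if (setB.has(t)) inter++;
  const union = setA.size + setB.size - inter;
  return union === 0 ? 1 : inter / union;
}

// Levenshtein distance via the classic dynamic-programming table.
function levenshtein(a: string, b: string): number {
  const dp: number[][] = Array.from({ length: a.length + 1 }, (_, i) => [
    i,
    ...Array(b.length).fill(0),
  ]);
  for (let j = 0; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++)
    for (let j = 1; j <= b.length; j++)
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                  // deletion
        dp[i][j - 1] + 1,                                  // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
  return dp[a.length][b.length];
}

// Two prompts belong to the same stack when either signal clears a
// threshold (thresholds here are placeholders).
function samePromptFamily(p1: string, p2: string): boolean {
  const n1 = normalize(p1);
  const n2 = normalize(p2);
  if (jaccard(n1, n2) >= 0.8) return true;
  const maxLen = Math.max(n1.length, n2.length);
  return maxLen > 0 && levenshtein(n1, n2) / maxLen <= 0.2;
}
```

Since both measures are pure string math, each worker can churn through thousands of prompt pairs without touching the GPU.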
Parsing ComfyUI workflows is tricky because of the spaghetti node graph. The app includes a declarative traversal engine that traces parameters backwards from SINK nodes, but I also built a dedicated Custom Save Node that forces a clean metadata dump at generation time, bypassing that complexity entirely.
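As a rough sketch of what "declarative backward traversal" means here: in ComfyUI's API-format JSON, each node has a `class_type` and an `inputs` map, where a link is encoded as `[sourceNodeId, outputIndex]`. A traversal rule can then just name a sink type, a chain of input names to follow backwards, and a literal field to read at the end. The rule shape, function names, and the simplified example graph below are all assumptions for illustration, not the app's actual engine:

```typescript
type NodeId = string;
interface WFNode {
  class_type: string;
  inputs: Record<string, unknown>;
}
type Workflow = Record<NodeId, WFNode>;

// A link in API-format JSON looks like ["3", 0]: source node id + output index.
const isLink = (v: unknown): v is [NodeId, number] =>
  Array.isArray(v) && v.length === 2 &&
  typeof v[0] === "string" && typeof v[1] === "number";

// Declarative rule: start at a sink, follow this chain of input names
// backwards through links, then read a literal field on the final node.
interface TraceRule {
  sinkTypes: string[]; // node types treated as SINKs (e.g. SaveImage)
  path: string[];      // input names to hop through, back towards the source
  field: string;       // literal input to read where the path ends
}

function trace(wf: Workflow, rule: TraceRule): unknown {
  const sinkId = Object.keys(wf).find(id =>
    rule.sinkTypes.includes(wf[id].class_type)
  );
  if (!sinkId) return undefined;
  let node = wf[sinkId];
  // Walk each hop of the path backwards through the link references.
  for (const inputName of rule.path) {
    const v = node.inputs[inputName];
    if (!isLink(v) || !wf[v[0]]) return undefined;
    node = wf[v[0]];
  }
  return node.inputs[rule.field];
}

// Deliberately simplified graph (a real workflow has more hops, e.g. VAEDecode):
const wf: Workflow = {
  "9": { class_type: "SaveImage", inputs: { images: ["3", 0] } },
  "3": { class_type: "KSampler", inputs: { positive: ["6", 0], seed: 42 } },
  "6": { class_type: "CLIPTextEncode", inputs: { text: "a cat on a hill" } },
};
const prompt = trace(wf, {
  sinkTypes: ["SaveImage"],
  path: ["images", "positive"],
  field: "text",
}); // → "a cat on a hill"
```

The appeal of keeping rules declarative is that supporting a new custom node pack means adding data, not writing a new parser.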
I just released v0.13.0 with a focus on performance and organization logic.
In this update, I rewrote the file indexer (Phase B), switching from uncontrolled asynchronous header reads to synchronous reads with controlled concurrency. This eliminated disk contention on my SSD and cut per-file read times from ~800ms to ~10ms, making indexing dramatically faster.
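The "controlled concurrency" part can be sketched as a small lane-based pool: instead of firing off every read at once, a fixed number of lanes each pull the next unclaimed item, so the disk only ever sees a handful of requests in flight. The helper name and the lane count are illustrative, not the app's actual API:

```typescript
// Process items with at most `limit` operations in flight at a time.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  worker: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  // Each lane repeatedly claims the next index and processes it.
  // (Claiming via next++ is safe: JS runs this synchronously between awaits.)
  async function lane(): Promise<void> {
    while (next < items.length) {
      const i = next++;
      results[i] = await worker(items[i]);
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, lane)
  );
  return results;
}

// Hypothetical usage for header indexing, assuming a readHeader(path)
// function that reads only the first chunk of each file:
//   const headers = await mapWithConcurrency(paths, 4, readHeader);
```

With a small limit, the read queue stays short and the SSD stops thrashing between thousands of competing requests, which is where the contention win comes from.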
I maintain automated builds for Windows, macOS, and Linux (AppImage), so you can run it directly without needing to set up a Node environment.