One thought - vendors like cursor.ai have the benefit of highly tuned prompts, presumably by programming language, as the result of their user bases. How is it possible to compete with this?
On another note, I have played around with v0 etc, but AFAIK there is no really good UX/UI AI tool that can effectively replace a designer in the way that coding tools are replacing engineers (to a certain extent).
On prompts: We've been competing with Cursor in the enterprise for the last 2 years with Zencoder, and winning nice deals based on quality. At some point we were very protective of our prompts, but two things happened:

- Most of the coding vendors' prompts were leaked; there are repos online collecting prompts from a bunch of them. The moment you allow a custom endpoint for the LLM, your prompts are sniffable.

- Agents became better at instruction following, so a lot of prompting shifted to "less is more".
So with these two industry trends, we reversed course:

- We moved our harness into a CLI. This exposes our tips and tricks, but is better for user privacy and for the user's ability to tinker with the harness. For example, it allows a setup where no code leaves your perimeter (if you use the local harness and a "local" model, where "local" means different things to different people).

- We opened the workflows in Zenflow (they are in markdown and editable).
First of all, kudos for the nice UI. I like it when apps look good. The onboarding process was smooth. I paired it with Zencoder's agent (as mentioned, I use their VSCode plugin and already had a sub).
I used it to implement a small refactoring for my side project. What I liked compared to plugins: I did not have to switch between agents or explicitly ask for a plan/spec to be written. I guess that's one of the core ideas behind the app, and it feels really AI-ish because it's not a code editor (similar to Claude Code). The only thing I missed in the process was rendered markdown for previews. But I haven't used the app for long; maybe there is an option to render markdown.
Overall, great experience so far. Gonna explore it more, and wanna try it with Gemini and Claude Code. Again, kudos that it's not locked to Zencoder's agents only.
One question: I see this supports custom workflows, which I love and want to try out. Could this support a "Ralph Wiggum"-style [0][1] continuous loop workflow? This is a pattern I've been playing around with, and if I could implement it here with all the other features of this product, that would be pretty awesome.
[0] https://paddo.dev/blog/ralph-wiggum-autonomous-loops/ [1] https://github.com/onorbumbum/ralphio
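For context, the core of the pattern is just a bounded retry loop that keeps re-running the agent until a "done" signal appears. A minimal sketch, with a stub `run_agent` standing in for whatever real CLI agent you'd invoke (codex, claude, a Zenflow step, etc.):

```shell
#!/bin/sh
# Ralph-style loop sketch: re-run an agent against the same prompt
# until a "done" sentinel file appears, with an iteration cap as a
# safety net. run_agent is a hypothetical stub, not a real CLI.
run_agent() {
  # stub: pretend the agent finishes the task on its third pass
  [ "$1" -ge 3 ] && touch .ralph_done
}

rm -f .ralph_done
i=0
while [ ! -f .ralph_done ] && [ "$i" -lt 10 ]; do
  i=$((i + 1))
  echo "iteration $i"
  run_agent "$i"
done
echo "stopped after $i iterations"
rm -f .ralph_done
```

The sentinel file and the hard cap are the two knobs: the first lets the agent decide it's finished, the second stops runaway loops.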
Create a new task with your prompt and hit "Create" (instead of "Create and Run"). The interface will show a little hint, "Edit steps in plan.md", with 'plan.md' clickable. Click it and edit, experimenting with some ideas. (Bonus tip: toggle "Auto-start steps" to keep it Ralph-y.)
I just winged the workflow below, and it worked for the prompt I threw at it. If you like it, you can save it as your custom workflow and reuse it in the future. If you don't like it, change it to your preference.
Now, I prefer a slightly different flow: Implement > Review > [Fix] (typically limiting the loop to 3 iterations to avoid "divergence"). We'll ship some pre-built templates for that soon. Our researchers are currently working on variations of it on our private datasets.
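For anyone curious what that bounded flow amounts to, here's a rough sketch, with stub functions in place of the real agent invocations (all three names are hypothetical):

```shell
#!/bin/sh
# Sketch of an Implement > Review > [Fix] flow, capped at 3 review
# rounds to avoid divergence. implement/review/fix are stubs standing
# in for real agent calls.
implement() { echo "implement"; }
review() {
  # stub reviewer: flags issues on round 1, approves from round 2 on
  [ "$1" -ge 2 ]
}
fix() { echo "fix (round $1)"; }

implement
round=1
while [ "$round" -le 3 ]; do
  if review "$round"; then
    echo "review passed on round $round"
    break
  fi
  fix "$round"
  round=$((round + 1))
done
```

The cap matters because an agent reviewing its own (or another agent's) fixes can oscillate indefinitely; after 3 rounds you hand control back to the human.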
---

# Quick change

## Configuration

- *Artifacts Path*: {@artifacts_path} → `.zenflow/tasks/{task_id}`
---
## Agent Instructions
This is a quick change workflow for small or straightforward tasks where all requirements are clear from the task description.
### Your Approach
1. Proceed directly with implementation
2. Make reasonable assumptions when details are unclear
3. Do not ask clarifying questions unless absolutely blocked
4. Focus on getting the task done efficiently
This workflow also works for experiments when the feature is bigger but you don't care about implementation details.
If blocked or uncertain on a critical decision, ask the user for direction.
---
## Workflow Steps
### [ ] Step: Implementation
Implement the task directly based on the task description.
1. Make reasonable assumptions for any unclear details
2. Implement the required changes in the codebase
3. Add and run relevant tests and linters if applicable
4. Perform basic manual verification if applicable
Save a brief summary of what was done to `{@artifacts_path}/report.md` if significant changes were made.
After you are done with the step, add another step to `{@artifacts_path}/plan.md` describing the next improvement opportunity.
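For illustration, the step the agent appends to plan.md would follow the same checkbox format as the workflow above; something like this (hypothetical content):

```markdown
### [ ] Step: Improve error handling

Harden the code path added in the previous step: validate inputs and
cover the edge cases with tests. Save a brief summary to
`{@artifacts_path}/report.md`.
```

Because each run ends by appending the next step, the plan keeps growing and the loop keeps itself fed, which is what makes it Ralph-y.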
Also, I found an unexpected use case for it. Even when I only need to change a couple of lines of code, I just run the quick fix workflow, because Zenflow automatically creates the worktree, branch, commit, etc., and a PR is created with a few clicks. It seems like a minor thing, but doing all of that myself for small changes irritates me a lot. One thing I miss here is automatic PR name and description generation according to the templates my company uses.
So you got me with the hook, and you bullet three features, but where’s the resolution of the hook issue? You left me with the hook?? What am I missing?
For us, in this scenario:

1) The pipeline helps the agent perform better.

2) Reviewing the spec is much more convenient than juggling between a TUI and a text editor (especially if you are running 5 of those pipelines in parallel).

3) If you configure the reviewer in the settings, cross-agent review saves us from some of the minutiae of guiding/aligning the agent.
Lmk if I misunderstood your question, happy to help.
I am just a hobbyist but was curious how you’re thinking through the pricing plans.
Meanwhile, you can BYOA (bring your own agent). If you are a hobbyist, Gemini is free with a Gmail account (but they WILL train on your data). And if you have a ChatGPT sub, you can use the Codex CLI with Zenflow for no extra charge (and they don't train on paid users' data).
nice
Apple Silicon (ARM64): https://download.zencoder.ai/zenflowapp/stable/0.0.52/app/da...
Intel (x64): https://download.zencoder.ai/zenflowapp/stable/0.0.52/app/da...
We'll figure out the FF script blocking.
Then there's a blurb from the CEO, who claims "AI doesn't need better prompts. It needs orchestration." That's something I have always felt to be true, especially after living through highly engineered prompts suddenly becoming useless when conditions change, because of how brittle they are.
I might even give this a shot and I usually eschew AI plugins because of how cloud connected they are.
I am a nobody, but I think these people are making a bunch of right moves in this AI space.