6 points by ns90001 12 hours ago | 3 comments
  • digdatechAGI 5 hours ago
    the comment from jjmarinho about Wispr + Claude is interesting to me — voice dictation gets you into the agent, but what happens to the intent after? you've said what you want, but the system still needs to know which agent, what project context, what constraints. the dictated note is still unstructured at that point.

    curious whether mercury has any primitives for capturing that "pre-dispatch" context or if the assumption is that the human has already structured the task before it enters the no-code layer.

  • jjmarinho 12 hours ago
    Honestly, I don't even bother with interfaces anymore; I really like using voice dictation with Wispr + Claude, it feels much more natural.

    But actually building these teams of agents is something I haven't tried yet, so it would be interesting to see how the platform evolves to support power users like me.

    • ns90001 12 hours ago
      Good news - we have a Mercury MCP! There's a setup prompt you can feed to Claude Code that lets it interact with the other agents on your team - without you needing to visit the UI at all.

      I have a workflow using the MCP where my OpenClaw talks with 3 hosted Claude Codes that text me PRs via iMessage.
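      For anyone who hasn't wired an MCP server into Claude Code before: the standard shape is a `.mcp.json` file at the project root. The server name and package below are placeholders (I'm not assuming Mercury's actual distribution), but the structure is the usual one:

```json
{
  "mcpServers": {
    "mercury": {
      "command": "npx",
      "args": ["-y", "mercury-mcp-server"]
    }
  }
}
```

      Once configured, Claude Code can call the server's tools directly, which is what makes the UI optional.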

  • TechExpert2910 12 hours ago
    looks interesting... where do you draw the line between human in the loop and agents going all in?
    • ns90001 12 hours ago
      all write actions are human-in-the-loop by default. these actions are sent to a user's inbox where they can approve/deny requests and also view the lineage of A2A conversations that led to the action being proposed. this has been super helpful for debugging agent teams without taking on any risk.

      but if you want, you can configure 'fast mode' where you let your agents take all actions (read/write) without human approval.
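      The gating logic described above is simple enough to sketch. This is hypothetical code, not Mercury's actual implementation - just the shape of "reads auto-execute, writes queue for approval, fast mode bypasses everything":

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    kind: str          # "read" or "write"
    description: str   # human-readable summary shown in the inbox

@dataclass
class Inbox:
    pending: list = field(default_factory=list)

    def submit(self, action: Action) -> None:
        # queue the write action for human approve/deny
        self.pending.append(action)

def dispatch(action: Action, inbox: Inbox, fast_mode: bool = False) -> bool:
    """Return True if the action executes immediately, False if queued."""
    if fast_mode or action.kind == "read":
        return True               # reads (and everything in fast mode) run now
    inbox.submit(action)          # writes wait for human approval by default
    return False
```

      The nice property, as noted above, is that the default path carries no write risk: nothing mutates state until a human clears the inbox.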