3 points by astroanywhere 13 hours ago | 2 comments
  • The-Pebble 12 hours ago
    What stood out to me here is the idea of orchestrating multiple coding agents across machines rather than treating AI coding as a single-agent workflow. Most discussions about AI-assisted development still assume one tool running locally, but the approach described here (task decomposition + parallel execution across machines) feels closer to how distributed build systems evolved.

    The dependency-graph model is particularly interesting. If AI agents can operate on isolated git worktrees and resolve tasks in parallel, the bottleneck shifts from raw coding to how well the system can plan and coordinate tasks. In practice that probably means developers spend more time defining boundaries between tasks rather than writing every line themselves.

    Another challenge I’ve noticed when experimenting with these tools is deciding which agent to use for which task. Different coding agents behave very differently depending on the type of work (refactoring, feature building, test generation, etc.). Having a runner that can dispatch tasks to different agents and machines could make that experimentation much easier.

    For anyone exploring the broader ecosystem of agentic coding tools, this overview was useful as well: https://prommer.net/en/tech/guides/best-ai-agentic-coding-to...

    It compares several of the current tools and workflows that are emerging around multi-agent development.

    Curious how people think this model scales once teams start coordinating dozens of agents simultaneously.

    • astroanywhere 11 hours ago
      Thanks for your comments.

      Re Dependency graph: In real coding, sometimes even arguably the best agent -- Claude Code + Opus 4.6 + high reasoning -- still struggles, either because the task is very complicated or because human prompting cannot articulate the problem in a way the agent can understand and solve.

      We allow graph-based task decomposition, replanning if the user does not like the plan, and even more complex graph operations, such as (i) expanding a task into several nodes and (ii) rephrasing a subgraph of tasks into a new set of tasks.

      In this way, the gains are that (i) the agent is better at understanding the whole project, and (ii) task executions can run in parallel and be retried. Say the user wants to change the prompt of a particular step: none of the tasks before that step need to be re-run.
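      A minimal sketch of that selective re-run idea (the task names and graph below are invented for illustration, not Astro's actual implementation): when one node's prompt changes, only that node and its downstream dependents are invalidated, and everything upstream keeps its cached result.

```python
# Sketch: selective re-invalidation in a task DAG.
# Edges point from a task to the tasks that depend on it.
from collections import deque

def dirty_set(dependents, changed):
    """Return the changed task plus every downstream task (BFS)."""
    dirty, queue = {changed}, deque([changed])
    while queue:
        node = queue.popleft()
        for child in dependents.get(node, []):
            if child not in dirty:
                dirty.add(child)
                queue.append(child)
    return dirty

# Hypothetical graph: plan -> (api, ui); api -> tests; ui -> tests
dependents = {"plan": ["api", "ui"], "api": ["tests"], "ui": ["tests"]}

# Editing the prompt of "api" invalidates only api and tests;
# "plan" and "ui" keep their cached results.
print(sorted(dirty_set(dependents, "api")))  # ['api', 'tests']
```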

      Re Different models for different tasks: We don't support that yet, but it is in our pipeline. Claude Code does something like this internally; for example, context compaction is arguably done by Sonnet.

  • astroanywhere 13 hours ago
    Hi HN, I built Astro because I was frustrated with running AI coding agents one at a time. You describe what you want, sit there waiting, then manually feed the next task. The ceiling isn't capability — it's coordination.

    Astro sits above agents like Claude Code, Codex, and OpenClaw. You describe a goal once, it generates a dependency graph (DAG, not a flat list), and dispatches tasks in parallel across your machines. Each task runs in an isolated git worktree and opens a PR. Tasks that can run in parallel do — total time equals the longest path, not the sum of all tasks.
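    To make the "total time equals the longest path" claim concrete, here is a small sketch (task names and durations are invented for illustration, not from Astro) computing a DAG's makespan as its critical path rather than the sum of task times:

```python
# Sketch: wall-clock time of a parallel task graph is the longest
# (critical) path through the DAG, not the sum of all durations.
from functools import lru_cache

durations = {"plan": 1, "api": 5, "ui": 3, "tests": 2}
deps = {"plan": [], "api": ["plan"], "ui": ["plan"], "tests": ["api", "ui"]}

@lru_cache(maxsize=None)
def finish_time(task):
    """Earliest finish: own duration plus the slowest prerequisite."""
    start = max((finish_time(d) for d in deps[task]), default=0)
    return start + durations[task]

makespan = max(finish_time(t) for t in deps)
print(makespan)                 # 8  (critical path: plan -> api -> tests)
print(sum(durations.values()))  # 11 (serial: one agent at a time)
```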

    Key design decisions:

    - Your machines run the agents with your API keys. The Astro server never calls AI models and never sees your keys.
    - Every task dispatch is cryptographically signed by your browser. The agent runner verifies the signature before executing.
    - The agent runner is fully open source (this repo). The server provides the planning UI and dashboard at astroanywhere.com.
    - Works with multiple agents: Claude Code, Codex, OpenClaw, OpenCode. It auto-detects what's installed.
    - Dispatches to local machines, HPC clusters (Slurm), and cloud VMs. One `npx @astroanywhere/agent` command sets everything up.
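    For the auto-detection point, here is a hedged sketch of one way it could work (the CLI binary names below are assumptions for illustration, not Astro's actual probe list): check PATH for each agent's entry point.

```python
# Sketch: detect installed agent CLIs by probing PATH.
# Binary names are guesses, not Astro's actual probe list.
import shutil

CANDIDATES = {
    "claude-code": "claude",   # e.g. npm i -g @anthropic-ai/claude-code
    "codex": "codex",
    "opencode": "opencode",
}

def detect_agents():
    """Return the agents whose CLI entry point is found on PATH."""
    return {name: path
            for name, binary in CANDIDATES.items()
            if (path := shutil.which(binary))}

# Prints whichever of the candidate agents are installed here.
print(detect_agents())
```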

    We also ship built-in templates (stock analysis reports, academic paper review, presentation generation) that run as parallel task graphs out of the box.

    Quick start: install an agent (e.g. `npm i -g @anthropic-ai/claude-code`), register at astroanywhere.com, run `npx @astroanywhere/agent`, and you're connected.

    Website with walkthrough: https://astroanywhere.com

    Also see https://github.com/astro-anywhere/astro-examples for example outputs.

    Happy to answer questions about the architecture, the planning approach, or anything else.