49 points by enmanuelmag 4 hours ago | 11 comments
  • andreypk a few seconds ago
    looks interesting, starred
  • philipp-gayret 3 hours ago
    Interesting project. I am working on a similar solution. Eventually you will run into the following issues with harnesses, so I wonder how your project handles these questions:

    1) Can you define a process other than build -> review -> etc.? And more importantly, can you define a more complex process? For example: for each review finding, do X; or go from an end-to-end test back to build.

    2) In your setup, how does a sub-agent prove, undeniably, that its work is complete? Does the "lead" agent just look at the output? If so, that effectively makes the lead an implicit reviewer for all agents, and I don't follow why you would need a separate review step.

    3) Can you have steps in between these agentic processes that do not involve agents?

    • Fiahil 3 hours ago
      Not OP.

      For 1), yes: there is an "observe" step in the process where, once the project is deployed, it observes and reconciles what actually happens vs. what should happen according to the specs.

      I believe more variants are bound to emerge as harnesses become more prevalent. We've only scratched the surface, so don't over-generalize the process yet.
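
      Roughly, that observe step could look something like this (a minimal sketch with made-up names, not the project's actual API):

          # Hypothetical "observe" step: compare what the deployment actually
          # does against what the specs say it should do, and emit findings
          # that feed back into the build step. All names are illustrative.
          from dataclasses import dataclass

          @dataclass
          class Finding:
              spec_item: str
              expected: str
              observed: str

          def observe(specs: dict[str, str], deployment: dict[str, str]) -> list[Finding]:
              """Reconcile observed deployment state against the spec."""
              findings = []
              for item, expected in specs.items():
                  observed = deployment.get(item, "<missing>")
                  if observed != expected:
                      findings.append(Finding(item, expected, observed))
              return findings

      Any non-empty list of findings sends the process back to build instead of ending the run.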

  • elysianfields an hour ago
    This looks really cool. Did you think about including automatic worktree creation + sandboxing?

    I've built something similar (with more focus on project setup and on working on multiple things at once with a single agent) that uses git worktrees to create a separate workspace (symlinking in .env files) and bubblewrap to isolate the worktree for the agent.
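
    Roughly, the combination looks like this (a sketch assuming git and bubblewrap are installed; paths, branch names, and the agent command are illustrative, not any tool's defaults):

        # Sketch: create an isolated git worktree for one task and run the
        # agent inside a bubblewrap sandbox that can only write to it.
        import os
        import subprocess

        REPO = os.path.expanduser("~/project")
        WORKTREE = "/tmp/agent-task-1"

        # 1. A separate workspace per task via git worktree.
        subprocess.run(
            ["git", "-C", REPO, "worktree", "add", "-b", "agent/task-1", WORKTREE],
            check=True,
        )

        # 2. Symlink untracked secrets (e.g. .env) into the new worktree.
        os.symlink(os.path.join(REPO, ".env"), os.path.join(WORKTREE, ".env"))

        # 3. Run the agent under bubblewrap: read-only host, writable worktree.
        subprocess.run(
            [
                "bwrap",
                "--ro-bind", "/", "/",
                "--bind", WORKTREE, WORKTREE,
                "--dev", "/dev",
                "--proc", "/proc",
                "--chdir", WORKTREE,
                "claude",  # placeholder for whatever agent CLI you run
            ],
            check=True,
        )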

  • eugeniecregan 9 minutes ago
    This is very cool.

    We have been working on a communication layer that I believe would complement it, by letting agents actually talk to each other and to agents on other teams: https://github.com/awebai/aweb

  • arctide 2 hours ago
    Hit this exact thing running a routines hub.

    When the scheduler tells an agent to do something, the next step in the process only treats it as done if the agent's status is marked 'posted'. Statuses like 'ready_to_post' or 'draft_verified_awaiting_review' are effectively errors that the system has to fix on the next attempt.

    The trickiest part was handling runs that stop without anything actually breaking. You need ways to say "this happened, and it isn't what we wanted", for example 'blocked_quota', 'blocked_no_credentials', or 'skipped_anti_bunching'. Without those, the orchestrator will retry endlessly and spend all your money.

    The typed handoff in ahk is the right primitive IMO. The discipline on top: agents never write half-states, and every run terminates in a documented terminal status, success or otherwise.
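
    Concretely, the shape is something like this (a sketch using the status names above; the types are illustrative, not any particular library's API):

        # A closed set of terminal statuses: half-states like
        # "ready_to_post" simply don't exist in the type.
        from dataclasses import dataclass
        from enum import Enum

        class TerminalStatus(Enum):
            POSTED = "posted"
            BLOCKED_QUOTA = "blocked_quota"
            BLOCKED_NO_CREDENTIALS = "blocked_no_credentials"
            SKIPPED_ANTI_BUNCHING = "skipped_anti_bunching"

        @dataclass(frozen=True)
        class Handoff:
            task_id: str
            status: TerminalStatus
            detail: str = ""

        def next_step(handoff: Handoff) -> None:
            if handoff.status is TerminalStatus.POSTED:
                print(f"{handoff.task_id}: done")
            else:
                # Documented "blocked"/"skipped" outcome: record it and stop
                # instead of retrying forever and burning the budget.
                print(f"{handoff.task_id}: terminal, not successful: {handoff.status.value}")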

  • yshamrei 2 hours ago
    It looks very promising! Is there any plan to implement a Ralph loop inside it?
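
    (A Ralph loop here meaning: re-run the agent on the same prompt until the work is done. A minimal sketch, with a placeholder agent command and completion marker:)

        # Minimal Ralph loop: keep re-invoking the agent with the same prompt
        # until it reports completion, with a hard cap so it can't run forever.
        import subprocess

        PROMPT = open("PROMPT.md").read()

        for _ in range(20):
            result = subprocess.run(
                ["claude", "-p", PROMPT],  # placeholder agent CLI invocation
                capture_output=True,
                text=True,
            )
            if "ALL TASKS COMPLETE" in result.stdout:  # marker the prompt asks for
                break
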
  • lynellf 2 hours ago
    Looks cool, but is it really provider-agnostic? I only see Claude Code and OpenCode as advertised examples.

    How does this differ from RooCode and similar agent orchestration tools?

  • enmanuelmag 4 hours ago
    [dead]
    • koolba 3 hours ago
      If you replace SQLite with a remote DB like Postgres, can you federate the agents and have multiple of them pointing at the same central knowledge store?
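
      If the knowledge store sits behind something like SQLAlchemy, the swap itself is mostly a connection-string change (a sketch; the URLs and table are made up, and whether the schema and locking tolerate concurrent agents is the real question):

          # Local single-agent store vs. a shared Postgres store that several
          # agent processes point at. URLs and the "facts" table are illustrative.
          from sqlalchemy import create_engine, text

          local = create_engine("sqlite:///knowledge.db")
          shared = create_engine("postgresql+psycopg://agent:secret@db-host/knowledge")

          with shared.begin() as conn:
              conn.execute(
                  text("INSERT INTO facts (agent_id, fact) VALUES (:a, :f)"),
                  {"a": "agent-7", "f": "build passed on attempt 3"},
              )
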
    • igravious 3 hours ago
      This should be a "Show HN": https://news.ycombinator.com/show