13 points by paifamily 14 hours ago | 8 comments
  • jlongo78 a minute ago
    I juggle multiple agents for persistent tasks like coding and debugging; it makes context-switching a breeze. How have you optimized yours?
  • guerython 2 hours ago
    On our team we split the flow into six agents: scraper, classifier, context builder, summary writer, responder, and post-monitor. They never share a conversation; each pulls jobs tagged for it from a Postgres queue, locks the row with `SELECT ... FOR UPDATE`, hits a shared vector store for context, writes the result, and lets the orchestrator (an n8n flow) enqueue the next job. We keep the prompts tiny and deterministic, so the only state is the job row and the vector hash.

    This async job-as-library + policy layer is the only architecture that scales for us; what fails spectacularly is letting them all talk on a single Slack channel, because they start racing to be the decision-maker and contending for tool calls. The trick was to treat every tool as a service call with capacity controls, plus a watcher that unpicks deadlocks.
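    A hedged sketch of that row-claim step in SQL. Table and column names here are illustrative; the comment only specifies `SELECT ... FOR UPDATE`, and `SKIP LOCKED` is a common Postgres addition so idle workers skip already-claimed rows instead of blocking on them:

```sql
-- Illustrative schema: jobs(id, agent, status, payload).
BEGIN;
SELECT id, payload
  FROM jobs
 WHERE agent = 'classifier' AND status = 'pending'
 ORDER BY id
 LIMIT 1
   FOR UPDATE SKIP LOCKED;  -- lock the row; other workers skip it rather than wait
-- Mark the claimed row so the orchestrator can enqueue the next stage when it completes.
UPDATE jobs SET status = 'running' WHERE id = <claimed id>;
COMMIT;
```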
  • Horos 4 hours ago
    I've set up a fully async pattern: blobs chunked into SQLite shards.

    It's a blind fire-and-forget Go worker dance,

    which can be run monitored, or scaled out to multiple instances if needed, with simple parameters.

    Basically, it's a job-as-library pattern.

    If you don't need real time, it's bulletproof and very LLM-friendly,

    and a good token saver thanks to the batching abilities.

    • leandot 4 hours ago
      Curious to hear more details about this setup.
      • Horos 4 hours ago
        The "job as library" pattern is simple: instead of wiring jobs into main or a framework, you split it into three things.

        Your queue is a struct with New(db) — it knows submit, poll, complete, fail, nothing else.

        Your worker is another struct that loops on the queue and dispatches to handlers registered via RegisterHandler("type", fn). Your handlers are pure functions (ctx, payload) → (result, error), with dependencies carried by a dependency struct.

        Main just assembles: open DB, create queue, create worker, register handlers, call worker.Start(ctx). Result: each handler is unit-testable without the worker or network, the worker is reusable across any pipeline, and lifecycle is controlled by a simple context.Cancel().

        Bonus: here the queue is a SQLite table with atomic poll (BEGIN IMMEDIATE), zero external infra.

        The whole "framework" is 500 lines of readable Go, not an opaque DSL. TL;DR: every service is a library with New() + Start(ctx), the binary is just an assembler.

        The "all in connectivity" pattern means every capability in your system — embeddings, document extraction, replication, MCP tools — is called through one interface: router.Call(ctx, "service", payload).

        The router looks up a SQLite routes table to decide how to fulfill that call: in-memory function (local), HTTP POST (http), QUIC stream (quic), MCP tool (mcp), vector embedding (embed), DB replication (dbsync), or silent no-op (noop).

        You code everything as local function calls — monolith. When you need to split a service out, you UPDATE one row in the routes table, the watcher picks it up via PRAGMA data_version, and the next call goes remote.

        Zero code change, zero restart. Built-in circuit breaker, retry with backoff, fallback-to-local on remote failure, SSRF guard.

        The caller never knows where the work happens.

        That's the "all in connectivity" pattern: the boundary between monolith and microservices is a config row, not an architecture decision.

        https://github.com/hazyhaar/pkg/tree/main/connectivity

  • xpnsec 4 hours ago
    More interestingly, what frameworks/harnesses/architecture are people using to drive multi-agent workflows?
  • Irving-AI 7 hours ago
    How well is your agent performing?
  • Nancy0904 7 hours ago
    It sounds complicated. Is your Agent trying to solve everything?
  • mrothroc an hour ago
    [dead]
  • CodeBit26 10 hours ago
    [dead]