2 points by NaNhkNaN 3 hours ago | 2 comments
  • zippolyon 18 minutes ago
    Interesting premise — agree that agents need runtime infrastructure, not just frameworks.

      One dimension we've been exploring: runtime governance. Even with a good runtime, agents can fabricate compliance
      records, silently drop tasks, or escalate privileges through delegation chains.
    
      We built Y*gov (github.com/liuhaotian2024-prog/Y-star-gov), a deterministic enforcement layer that sits between
      agents and tools. check() runs in 0.042 ms, with no LLM in the enforcement path. We run our entire company on it
      (5 AI agents, 1 human).
    
      The runtime conversation should include: what happens when the agent does something it shouldn't?
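      The key property claimed above, that enforcement is deterministic with no LLM in the path, can be sketched as a pure function over a static policy table. This is a minimal illustration; the names, shapes, and `check()` signature here are assumptions, not the actual Y*gov API.

```python
# Minimal sketch of a deterministic enforcement check: pure dict and set
# lookups over a static policy table, no LLM call anywhere in the path.
# (Illustrative only; not the real Y*gov implementation.)
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    allowed_tools: frozenset

def check(agent: str, tool: str, policies: dict) -> bool:
    """Deterministic allow/deny: same inputs always give the same answer."""
    policy = policies.get(agent)
    return policy is not None and tool in policy.allowed_tools

policies = {"researcher": Policy(frozenset({"search", "read_file"}))}
assert check("researcher", "search", policies)          # permitted tool
assert not check("researcher", "drop_table", policies)  # tool not in policy
assert not check("intruder", "search", policies)        # unknown agent
```

      Because the decision is a pure lookup, every allow/deny is trivially auditable and cheap enough to run on every tool call.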
  • NaNhkNaN 3 hours ago
    Introducing trama — agents don't need frameworks. They need a runtime.

    You say what you want. trama writes the orchestration as a complete, executable program, then runs it, auto-repairs it when it breaks, and versions every change. Because the orchestration is code, not config, the agent can write and rewrite it, and so can you. git clone && trama run.
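
    The generate / run / auto-repair loop described above can be sketched roughly like this. All names here (`generate`, `repair`, `run_with_repair`) are illustrative assumptions, not trama's actual API.

```python
# Hedged sketch of "write the program, run it, repair it when it breaks".
# On failure, stderr is fed back so the agent can rewrite the source.
import os
import subprocess
import sys
import tempfile

def write_program(source: str) -> str:
    """Persist the generated orchestration as a real, runnable file."""
    fd, path = tempfile.mkstemp(suffix=".py")
    with os.fdopen(fd, "w") as f:
        f.write(source)
    return path

def run_with_repair(generate, repair, max_attempts=3) -> str:
    """Run the generated program; on error, ask for a rewrite and retry."""
    source = generate()
    for _ in range(max_attempts):
        path = write_program(source)
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True)
        os.unlink(path)
        if result.returncode == 0:
            return result.stdout
        source = repair(source, result.stderr)  # agent rewrites the program
    raise RuntimeError("orchestration still failing after repairs")
```

    For example, a first draft that raises is replaced on the second attempt: run_with_repair(lambda: "raise ValueError('boom')", lambda src, err: "print('ok')") succeeds after one repair.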

    What makes this different: trama programs can generate other trama programs. A parent program decomposes a task, spawns sub-programs, runs them, and synthesizes the results. The orchestration is not configured; it is written by the agent, and the agent can rewrite it on demand.
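
    The decompose / spawn / synthesize pattern above can be sketched as a parent that emits sub-program source, runs each one in its own process, and joins the outputs. Everything here is an illustrative toy, not trama's real mechanics.

```python
# Sketch of programs generating programs: the parent produces generated
# source code (not config entries), executes each piece, then synthesizes.
import os
import subprocess
import sys
import tempfile

def run_program(source: str) -> str:
    """Execute a generated sub-program in its own process."""
    fd, path = tempfile.mkstemp(suffix=".py")
    with os.fdopen(fd, "w") as f:
        f.write(source)
    result = subprocess.run([sys.executable, path],
                            capture_output=True, text=True)
    os.unlink(path)
    return result.stdout.strip()

def parent(task_parts):
    # Decompose: one generated sub-program per part of the task.
    sub_sources = [f"print({part!r}.upper())" for part in task_parts]
    # Spawn and run each sub-program, then synthesize the results.
    results = [run_program(src) for src in sub_sources]
    return " ".join(results)

assert parent(["hello", "world"]) == "HELLO WORLD"
```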

    ~1000 lines of runtime. No ceiling: as LLMs get better at writing code, trama gets more powerful without a single framework change.

    Built on @badlogicgames's pi as the intelligence substrate. The autonomous optimization loop is inspired by @karpathy's autoresearch: propose, evaluate, keep or discard, repeat. trama just makes the loop, and the program itself, agent-written.
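
    The propose / eval / keep-or-discard loop can be sketched in a few lines, assuming "keep" means keeping only strictly better candidates. This is a toy illustration of the pattern, not karpathy's or trama's actual code.

```python
# Toy sketch of the optimization loop: propose a variant, evaluate it,
# keep it only if it scores better, otherwise discard and repeat.
def optimize(candidate, propose, evaluate, steps=100):
    best_score = evaluate(candidate)
    for _ in range(steps):
        variant = propose(candidate)   # propose
        score = evaluate(variant)      # eval
        if score > best_score:         # keep...
            candidate, best_score = variant, score
        # ...or discard: worse variants are simply dropped
    return candidate

# Toy usage: climb an integer toward a target of 42.
result = optimize(
    0,
    propose=lambda x: x + 1,
    evaluate=lambda x: -abs(x - 42),
)
assert result == 42  # proposals past 42 score worse and are discarded
```

    In trama's framing, the candidate would be the orchestration program itself, so the same loop that tunes a number here would be rewriting code.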