1 point by sarkarsaurabh27 5 hours ago | 1 comment
  • sarkarsaurabh27 5 hours ago
    I'm the project owner. I've been running multiple AI coding agents simultaneously and had no way to answer basic questions: which one is using the most tokens? Why did that session end with 40 tool calls and no working code? Is any agent touching files it shouldn't?

    Riva is a local-first monitor for AI agents running on your machine. No cloud, no telemetry.

    What it does:

      - Detects running agents (Claude Code, Cursor, Kiro, Codex CLI, Gemini CLI, Cline, Windsurf, LangGraph, CrewAI, AutoGen, and more) by process name, exe path, and config dir
      - Live TUI (riva watch) and web dashboard with CPU, memory, uptime, child processes, and network connections
      - Session forensics — parses agent JSONL transcripts into turn chains with timeline, tool calls, dead-end detection, and efficiency metrics. Works for both interactive and headless/API sessions
      - Skills tracking — detects slash commands (/commit, /review-pr) across sessions and computes usage count, success rate, backtrack rate, avg tokens per invocation
      - Security audit — scans agent configs for exposed API keys, world-readable credential files, suspicious MCP stdio commands, and agents running as root
      - Boundary policies — define allowed file paths, network domains, and max child processes; violations fire configurable hooks
      - OpenTelemetry export — push metrics, logs, and traces to Grafana/Datadog/Jaeger
      - riva --mcp-help — outputs a Markdown tool description so any agent can introspect and use riva itself as a tool
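    To make the detection item above concrete, here is a minimal sketch of matching a process against known agent signatures. The signature table is illustrative, not riva's actual detection list, and a real monitor would enumerate live processes (e.g. via psutil) and also check config dirs rather than take (name, exe) pairs directly.

```python
# Illustrative agent signatures; riva's real table is larger and also
# matches on config directories.
AGENT_SIGNATURES = {
    "claude": "Claude Code",
    "cursor": "Cursor",
    "codex": "Codex CLI",
    "gemini": "Gemini CLI",
}

def match_agent(name: str, exe: str) -> str | None:
    """Return an agent label if the process name or exe path matches a signature."""
    haystack = f"{(name or '').lower()} {(exe or '').lower()}"
    return next(
        (label for needle, label in AGENT_SIGNATURES.items() if needle in haystack),
        None,
    )
```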
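    And a sketch of one security-audit check from the list above, assuming POSIX permission bits: flag credential files that other users can read. The candidate path is hypothetical, not riva's actual scan list.

```python
import os
import stat

CANDIDATE_FILES = ["~/.config/some-agent/credentials.json"]  # hypothetical path

def world_readable(path: str) -> bool:
    """True if the file grants read permission to 'other' users."""
    return bool(os.stat(path).st_mode & stat.S_IROTH)

def audit(paths=CANDIDATE_FILES):
    """Return findings for credential files with overly broad permissions."""
    findings = []
    for p in (os.path.expanduser(p) for p in paths):
        if os.path.exists(p) and world_readable(p):
            findings.append(f"{p}: world-readable credential file (chmod 600 recommended)")
    return findings
```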
    
    Install:

      pip install riva

    Run:

      riva            # TUI dashboard
      riva web start  # web dashboard at localhost:7821

    The session forensics piece turned out to be the most useful part in practice. You can replay any past session as a structured timeline and see exactly where an agent went in circles, what it read vs wrote, and what decisions it made in its thinking blocks.
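    The core of that replay can be sketched as a small fold over the transcript. The event schema here (role / type / name / tokens keys) is an assumption for illustration; real transcript formats differ per agent, and the dead-end heuristic is a stand-in for whatever riva actually uses.

```python
import json

def parse_session(lines):
    """Group JSONL events into assistant turns, attaching tool calls to the current turn."""
    turns = []
    for line in lines:
        event = json.loads(line)
        if event.get("role") == "assistant":
            turns.append({"tools": [], "tokens": event.get("tokens", 0)})
        elif event.get("type") == "tool_call" and turns:
            turns[-1]["tools"].append(event.get("name"))
    return turns

def dead_end(turns, min_repeats=3):
    """Crude dead-end heuristic: a turn that calls the same tool over and over."""
    return any(
        t["tools"]
        and len(set(t["tools"])) == 1
        and len(t["tools"]) >= min_repeats
        for t in turns
    )
```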

    Website: sarkar.ai/riva/