1 point by ckpark123 4 hours ago | 1 comment
  • ckpark123 4 hours ago
    I built this because I kept shipping agents that looked cheap on paper but quietly drove serious revenue. Last year I deployed a customer research agent: $2/day in API costs, but it unlocked $40K in contract negotiations we'd otherwise have missed. The problem: every tool I found only tracked spend. Nobody was measuring what the agent actually earned.

    So I built a scorecard that lets agents see their own ROI. The key insight is revenue attribution: we instrument your API calls so every agent knows which deal it influenced and which customer it upsold. Then agents can self-optimize via MCP based on actual impact, not guesses.
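
    To make the attribution idea concrete, here's a minimal sketch of the roll-up (all names are mine, not the actual metrxbot API): each instrumented call carries the agent that made it, its cost, and the deal it touched, and closed-deal revenue is rolled back up into per-agent ROI.

    ```typescript
    // Hypothetical sketch of revenue attribution; illustrative names only.
    type CallRecord = { agentId: string; dealId: string; costUsd: number };
    type Deal = { dealId: string; revenueUsd: number };
    type AgentTotals = { costUsd: number; revenueUsd: number; roi: number };

    // Roll instrumented calls up into per-agent cost, attributed revenue, and ROI.
    // Attribution here is deliberately naive: an agent is credited a deal's full
    // revenue once, the first time one of its calls touches that deal.
    function roiByAgent(calls: CallRecord[], deals: Deal[]): Map<string, AgentTotals> {
      const revenueByDeal = new Map(deals.map(d => [d.dealId, d.revenueUsd]));
      const credited = new Set<string>(); // "agentId|dealId" pairs already credited
      const totals = new Map<string, AgentTotals>();
      for (const c of calls) {
        const t = totals.get(c.agentId) ?? { costUsd: 0, revenueUsd: 0, roi: 0 };
        t.costUsd += c.costUsd;
        const key = `${c.agentId}|${c.dealId}`;
        if (!credited.has(key)) {
          credited.add(key);
          t.revenueUsd += revenueByDeal.get(c.dealId) ?? 0;
        }
        totals.set(c.agentId, t);
      }
      for (const t of totals.values()) {
        t.roi = t.costUsd > 0 ? t.revenueUsd / t.costUsd : 0;
      }
      return totals;
    }
    ```

    A real system would split credit across multiple touches (first-touch, last-touch, or weighted multi-touch) rather than crediting full deal revenue to every agent involved; the single-credit rule above is just the simplest version that avoids double counting.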

    The dashboard shows cost, revenue, and ROI per agent. One customer went from 12 agents to 6 agents producing 2.3x revenue—just by cutting the bottom performers and reallocating tokens to the top ones.
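
    The "cut the bottom, fund the top" move can be sketched as a simple reallocation rule. This is my illustration, not the product's actual algorithm: rank agents by measured ROI, retire the bottom of the ranking, and redistribute their freed token budget to the survivors in proportion to ROI.

    ```typescript
    // Hypothetical reallocation sketch; field names are illustrative.
    type AgentStats = { agentId: string; roi: number; tokenBudget: number };

    // Keep the top `keep` agents by ROI and redistribute the retired agents'
    // token budget to the survivors, weighted by each survivor's ROI.
    function reallocate(agents: AgentStats[], keep: number): AgentStats[] {
      const ranked = [...agents].sort((a, b) => b.roi - a.roi);
      const kept = ranked.slice(0, keep);
      const freed = ranked.slice(keep).reduce((sum, a) => sum + a.tokenBudget, 0);
      const roiSum = kept.reduce((sum, a) => sum + a.roi, 0) || 1;
      return kept.map(a => ({
        ...a,
        tokenBudget: a.tokenBudget + freed * (a.roi / roiSum),
      }));
    }
    ```

    The proportional weighting is one design choice among several; an even split, or capping any one agent's share, would also work and avoids over-concentrating budget on a single high-variance agent.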

    It's open source (MIT, 23 MCP tools). Free tier: 3 agents, no card. Try it: `npx @metrxbot/mcp-server --demo` or app.metrxbot.com.

    The question I'm wrestling with: when agents can measure their own performance, how much self-optimization is too much? Should an agent decline work if it predicts low ROI?