3 points by LuxBennu 5 hours ago | 2 comments
  • LuxBennu 4 hours ago
    I ran this on my own prompt history and three things surprised me: it found 3 API keys buried in copy-pasted stack traces (`reprompt privacy`); 35% of my agent sessions had error loops, with the agent retrying the same failing approach 3+ times (`reprompt agent`); and 50-70% of my conversation turns were filler like "ok try that" (`reprompt distill`).

        pip install reprompt-cli
        reprompt scan && reprompt
    
    Everything runs locally -- zero network calls, zero telemetry. Also works as an MCP server and GitHub Action.
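    For anyone curious how a local, no-LLM privacy pass can work at all: it can be as simple as fast regex sweeps over history files. A minimal sketch of that idea (hypothetical patterns, not reprompt's actual rule set):

```python
import re

# Hypothetical secret patterns -- illustrative only, not reprompt's real rules.
SECRET_PATTERNS = {
    "openai_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_secret) pairs found in a transcript chunk."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((name, m.group(0)))
    return hits

# A key hiding inside a pasted stack trace is still just a substring:
trace = "HTTPError: 401 ... headers={'X-Api-Key': 'AKIA0123456789ABCDEF'}"
print(scan_text(trace))
```

    Pure stdlib, no network -- the kind of check that stays fast even across thousands of turns.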
  • kiyeonjeon 3 hours ago
    Love the "no LLM calls" approach. Scoring prompts in <1ms locally is exactly the right tradeoff. Most tools overcomplicate this.
    • LuxBennu 2 hours ago
      Thanks! Turns out structural signals get you surprisingly far. An LLM catches more, but speed is the feature.
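      "Structural signals" can be as cheap as vocabulary checks. A toy sketch of a filler-turn detector in that spirit (hypothetical word list, not the actual heuristics):

```python
import re

# Hypothetical filler vocabulary -- illustrative, not reprompt's real list.
FILLER_WORDS = {"ok", "okay", "yes", "yeah", "sure", "try", "that", "it",
                "go", "ahead", "thanks", "please", "do"}

def is_filler(turn: str) -> bool:
    """A turn counts as filler if every word is in the filler vocabulary."""
    words = re.findall(r"[a-z']+", turn.lower())
    return bool(words) and all(w in FILLER_WORDS for w in words)

def filler_ratio(turns: list[str]) -> float:
    """Fraction of turns carrying no real instruction content."""
    return sum(map(is_filler, turns)) / len(turns) if turns else 0.0
```

      Set lookups over a few words per turn is why scoring can stay well under 1ms; an LLM would catch subtler filler, but at orders of magnitude more latency.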