3 points by Rohan51 8 hours ago | 1 comment
  • Rohan51 8 hours ago
    Hey HN — I built this after getting fed up with AI-generated PRs slipping through code review unnoticed. TODOs everywhere, placeholder variables, empty except blocks — the usual slop.

    roast-my-code scans your repo with static analysis rules specifically tuned for AI-generated code patterns, then calls an LLM (Groq free tier by default, so $0 to try) to generate a brutal, specific roast referencing your actual file names and issues.
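    (Not the tool's actual rules, but to make the idea concrete: here's a minimal sketch of the kind of AST-based check it runs, using only the stdlib. The function names are mine, not roast-my-code's.)

    ```python
    import ast

    SOURCE = '''
    def fetch(url):
        # TODO: add retries
        try:
            return open(url)
        except Exception:
            pass
    '''

    def find_empty_excepts(tree):
        """Line numbers of except handlers whose entire body is `pass`."""
        hits = []
        for node in ast.walk(tree):
            if isinstance(node, ast.ExceptHandler):
                if len(node.body) == 1 and isinstance(node.body[0], ast.Pass):
                    hits.append(node.lineno)
        return hits

    def find_todos(source):
        """Line numbers of lines containing a TODO marker."""
        return [i for i, line in enumerate(source.splitlines(), 1) if "TODO" in line]

    tree = ast.parse(SOURCE)
    print(find_empty_excepts(tree))  # except block that silently swallows everything
    print(find_todos(SOURCE))        # leftover TODO comment
    ```

    The real analyzer layers pattern-specific heuristics on top (placeholder names, etc.), but it's the same shape: walk the AST, flag the slop, hand the findings to the LLM for the roast.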

    Stack: Python + Typer + Rich + Jinja2. The HTML report exports a shareable shields.io badge with your score.
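    (The badge is just a shields.io static-badge URL built from your score. A hedged sketch, not the report's actual code; the color thresholds here are made up:)

    ```python
    def badge_url(score, max_score=100):
        """Build a shields.io static-badge URL for a roast score (illustrative)."""
        color = "red" if score < 40 else "yellow" if score < 70 else "brightgreen"
        # shields.io static badge path: /badge/<label>-<message>-<color>
        # literal dashes in the label are escaped as "--", "/" as "%2F"
        return f"https://img.shields.io/badge/roast--my--code-{score}%2F{max_score}-{color}"

    print(badge_url(12))
    # https://img.shields.io/badge/roast--my--code-12%2F100-red
    ```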

    Try it: pip install roast-my-code

    Would love to hear what patterns you'd add — especially if you've spotted AI slop in the wild that my analyzer doesn't catch yet.

    • ksaj 6 hours ago
      I see what you did there.

      I'm terrible about placeholder variables and functions. This thing might rip me to shreds.

      • Rohan51 4 hours ago
        Haha — it's surprisingly therapeutic to get roasted by your own tool. I ran it on the repo itself and it called out my own placeholder names in the test fixtures. The fallback roast lines weren't safe either.

        Let me know what score you get if you try it! The worst I've seen so far was a 12/100 on a legacy codebase with 200+ TODOs.