roast-my-code scans your repo with static-analysis rules tuned specifically for AI-generated code patterns, then calls an LLM (Groq's free tier by default, so it's $0 to try) to generate a brutal, specific roast that references your actual file names and issues.
Stack: Python + Typer + Rich + Jinja2. The HTML report exports a shareable shields.io badge with your score.
Try it: pip install roast-my-code
Would love to hear what patterns you'd add — especially if you've spotted AI slop in the wild that my analyzer doesn't catch yet.
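For a sense of the kind of rule I mean, here's a minimal sketch of a pattern detector — purely hypothetical, not roast-my-code's actual internals — that flags placeholder identifiers and leftover TODOs, two of the classics:

```python
import re

# Hypothetical rule sketch, NOT the tool's real API: flag placeholder
# names and TODO comments, two common markers of AI-generated slop.
PLACEHOLDER_NAMES = re.compile(
    r"\b(foo|bar|baz|temp|tmp|data2|result2|do_stuff|my_function)\b"
)
TODO_COMMENT = re.compile(r"#\s*TODO", re.IGNORECASE)

def scan_source(source: str) -> list[str]:
    """Return human-readable findings for one file's source text."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for match in PLACEHOLDER_NAMES.finditer(line):
            findings.append(f"line {lineno}: placeholder name '{match.group(0)}'")
        if TODO_COMMENT.search(line):
            findings.append(f"line {lineno}: TODO left behind")
    return findings

sample = "def do_stuff(temp):\n    # TODO: handle errors\n    return temp\n"
for finding in scan_source(sample):
    print(finding)
```

The real analyzer does more than regex matching, but even this much catches a surprising amount — which is partly the point.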
I'm terrible about leaving placeholder variables and functions in my own code, so this thing might rip me to shreds.
Let me know what score you get if you try it! The worst I've seen so far was a 12/100 on a legacy codebase with 200+ TODOs.