Code Intel is an experimental platform that combines:
- AST parsing (to extract structure, anti-patterns, duplication)
- Dependency graph analysis (circular imports, module cohesion)
- RAG over the full codebase (ChromaDB + MiniLM embeddings; rough indexing sketch below)
- Multi-agent LLM reasoning (Security/Performance/Architecture agents via GPT-4)
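To make the AST + RAG pieces concrete, here's a minimal sketch of how the indexing step could work, assuming function-level chunking with Python's `ast` module, MiniLM embeddings via sentence-transformers, and a ChromaDB collection. The names (`index_file`, `retrieve`, the collection name) are illustrative, not the project's actual API.

```python
# Hypothetical sketch: chunk a Python file at function/class level,
# embed the chunks with MiniLM, and index them in ChromaDB for retrieval.
import ast

import chromadb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
client = chromadb.Client()
collection = client.get_or_create_collection("codebase")

def index_file(path: str) -> None:
    with open(path, encoding="utf-8") as f:
        source = f.read()
    tree = ast.parse(source)
    chunks, ids = [], []
    # One chunk per top-level function/class keeps retrieval units coherent.
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            segment = ast.get_source_segment(source, node)
            if segment:
                chunks.append(segment)
                ids.append(f"{path}:{node.name}:{node.lineno}")
    if chunks:
        embeddings = model.encode(chunks).tolist()
        collection.add(ids=ids, documents=chunks, embeddings=embeddings)

def retrieve(question: str, k: int = 5):
    # Embed the query with the same model and pull the k nearest chunks.
    query_emb = model.encode([question]).tolist()
    return collection.query(query_embeddings=query_emb, n_results=k)
```

Chunking at function/class granularity keeps each retrieved unit aligned with how the agents actually reason about the code.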
All orchestrated via FastAPI + WebSockets, with a React frontend and one-click GitHub integration.
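For the orchestration layer, here's a rough sketch of what a FastAPI + WebSockets endpoint might look like under the assumption that agents run concurrently and stream findings to the React client as they finish; `AGENTS` and `run_agent` are placeholders, not the project's real names.

```python
# Hypothetical sketch: run each analysis agent concurrently and stream
# its findings over a WebSocket as soon as they're ready.
import asyncio

from fastapi import FastAPI, WebSocket

app = FastAPI()
AGENTS = ["security", "performance", "architecture"]

async def run_agent(name: str, repo_url: str) -> dict:
    # Placeholder for the real agent call (GPT-4 prompt + retrieved context).
    await asyncio.sleep(0)
    return {"agent": name, "findings": []}

@app.websocket("/ws/analyze")
async def analyze(ws: WebSocket):
    await ws.accept()
    repo_url = (await ws.receive_json())["repo_url"]
    # Kick off all agents at once; push each result as it completes.
    tasks = [asyncio.create_task(run_agent(name, repo_url)) for name in AGENTS]
    for task in asyncio.as_completed(tasks):
        await ws.send_json(await task)
    await ws.close()
```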
Right now it only supports Python; we deliberately scoped it narrowly to validate the core pipeline before expanding to JS/TS/Rust.
→ Try the live demo (no install): https://codebase-intelligence.vercel.app
→ Run locally: see the README (takes ~5 mins to set up with your OpenAI key & GitHub OAuth).
I’d genuinely love feedback on:
- What signals matter most when onboarding to a new codebase?
- How would you reduce hallucination risk in LLM-based code analysis?
- Would you prefer agent-based reasoning (like this) or fine-tuned smaller models?
(And yes — I’m bracing for the “just use Semgrep/CodeQL” comments — happy to explain how this complements them.)