The core idea: every time you correct Nova, it extracts a lesson, generates a DPO training pair, and when enough pairs accumulate, it automatically fine-tunes itself with A/B evaluation before deploying the new model.
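The loop above hinges on turning each correction into a preference pair. A minimal sketch of what such a pair might look like (the names `DPOPair`, `pair_from_correction`, and the JSONL layout are illustrative assumptions, not Nova's actual schema):

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class DPOPair:
    """One DPO preference pair: same prompt, a preferred (corrected)
    completion and a rejected (original) completion."""
    prompt: str
    chosen: str    # the corrected answer the user approved
    rejected: str  # the original answer that drew the correction

def pair_from_correction(prompt: str, original: str, corrected: str) -> DPOPair:
    return DPOPair(prompt=prompt, chosen=corrected, rejected=original)

def export_jsonl(pairs: list[DPOPair]) -> str:
    """Serialize pairs in the JSONL layout most DPO trainers accept."""
    return "\n".join(json.dumps(asdict(p)) for p in pairs)

pair = pair_from_correction(
    "What timezone am I in?",
    "You are in UTC.",
    "You are in Europe/Berlin.",
)
print(export_jsonl([pair]))
```

Once enough of these accumulate, they can be fed straight to a DPO trainer, which is what makes the "accumulate, then fine-tune" trigger cheap to implement.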
No other open-source AI assistant has this learning loop.
*What it does:*
- Correction detection (two-stage: regex + LLM) → lesson extraction → DPO training data → automated fine-tuning with A/B eval
- Temporal knowledge graph (20 predicates, fact supersession, provenance tracking)
- Hybrid retrieval (ChromaDB vectors + SQLite FTS5 + Reciprocal Rank Fusion)
- 21 tools, 4 messaging channels (Discord/Telegram/WhatsApp/Signal), 14 proactive monitors
- MCP client AND server (expose Nova's intelligence to Claude Code, Cursor, etc.)
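Reciprocal Rank Fusion is the piece that merges the vector and full-text result lists. A self-contained sketch (function name and the `k=60` default follow the original RRF paper; the doc IDs are made up):

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists: each doc scores sum(1 / (k + rank))
    across the lists it appears in, then docs are sorted by total score."""
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["doc3", "doc1", "doc7"]   # e.g. from ChromaDB
keyword_hits = ["doc1", "doc9", "doc3"]  # e.g. from SQLite FTS5
print(reciprocal_rank_fusion([vector_hits, keyword_hits]))
# → ['doc1', 'doc3', 'doc9', 'doc7']
```

Docs that appear high in both lists (doc1, doc3) fuse to the top, which is why RRF works without having to normalize vector similarity scores against BM25 scores.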
*What it's not:*
- Not a ChatGPT wrapper — runs Qwen3.5:27b locally via Ollama, zero cloud dependency
- Not a LangChain/LangGraph project — single async pipeline, ~74 files of plain Python
- Not a coding agent — it's a personal assistant (but you can connect it to coding agents via MCP)
*Security:* 4-tier access control, prompt injection detection (4 categories), SSRF protection, HMAC skill signing, Docker hardening (read-only root, no-new-privileges, all caps dropped). Built with OWASP Agentic Security in mind — unlike certain 200K-star projects that got CVE'd within weeks of launch.
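HMAC skill signing is the simplest of these measures to illustrate: a skill's source is tagged with a keyed hash and verified before loading, so a tampered skill file fails closed. A minimal sketch using only the stdlib (the key handling and function names are assumptions, not Nova's actual code):

```python
import hashlib
import hmac

SIGNING_KEY = b"replace-with-a-real-secret"  # hypothetical; load from env in practice

def sign_skill(source: bytes, key: bytes = SIGNING_KEY) -> str:
    """Return an HMAC-SHA256 tag over a skill's source code."""
    return hmac.new(key, source, hashlib.sha256).hexdigest()

def verify_skill(source: bytes, tag: str, key: bytes = SIGNING_KEY) -> bool:
    """Constant-time check before a skill is allowed to load."""
    return hmac.compare_digest(sign_skill(source, key), tag)

code = b"def run():\n    return 'hello'\n"
tag = sign_skill(code)
assert verify_skill(code, tag)
assert not verify_skill(code + b"# tampered", tag)
```

`hmac.compare_digest` matters here: a naive `==` comparison leaks timing information an attacker could use to forge tags byte by byte.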
*Stack:* Python, FastAPI, httpx, Ollama, ChromaDB, SQLite, React. 1,443 tests.
No GPU? Set `LLM_PROVIDER=openai` and use cloud inference while keeping Nova's stored data (memory, knowledge graph) local.
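For reference, the switch is a one-line env change; a sketch of what the config might look like (`LLM_PROVIDER` is from the line above, the other keys are illustrative placeholders — check the project's sample config for the real names):

```shell
# .env
LLM_PROVIDER=openai      # documented switch: cloud inference instead of Ollama
OPENAI_API_KEY=sk-...    # hypothetical key variable, value elided
```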