This is an early beta (v0.4), but I’m releasing it as a public repo because the results are surprisingly fun. There's a sample novel in the examples/ folder ("GitHub-Man Saves the Universe!") that I generated from just a title.
The Architecture
Instead of a single long context window, I built a recursive pipeline: Initial Prompt -> Story Bible Generator -> Outline Maker -> Chapter Writer -> Continuity Auditor -> Repeat.
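To make the loop concrete, here's a minimal TypeScript sketch of what one pass looks like; the function names and prompt wording are just illustrative placeholders, not the repo's actual code:

```typescript
// Hypothetical sketch of the chapter loop; function names and prompts are
// placeholders, not the repo's actual API.

// Stand-in for whatever LLM provider you wire up.
async function callLLM(prompt: string): Promise<string> {
  return ""; // e.g. fetch() against your provider's chat endpoint
}

async function generateNovel(title: string, chapterCount: number): Promise<string[]> {
  // Story Bible Generator: premise, cast, setting from just a title
  const bible = await callLLM(
    `Write a story bible (premise, cast, setting) for a novel titled "${title}".`
  );

  // Outline Maker: one beat per chapter
  const outline = (
    await callLLM(`Story bible:\n${bible}\n\nOutline ${chapterCount} chapters, one line each.`)
  )
    .split("\n")
    .filter(Boolean);

  const chapters: string[] = [];
  let continuityNotes = ""; // carried forward instead of one giant context window

  for (const beat of outline) {
    // Chapter Writer sees the bible, the current beat, and prior continuity notes
    const chapter = await callLLM(
      `Story bible:\n${bible}\n\nContinuity notes so far:\n${continuityNotes}\n\n` +
        `Write the chapter for this beat:\n${beat}`
    );
    chapters.push(chapter);

    // Continuity Auditor updates the notes that feed the next iteration
    continuityNotes = await callLLM(
      `Existing notes:\n${continuityNotes}\n\nNew chapter:\n${chapter}\n\n` +
        `Update the continuity notes (characters, timeline, open threads).`
    );
  }
  return chapters;
}
```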
The "Secret Sauce" (SCI) I noticed standard LLM characters sound the same. So I created a linguistic fingerprinting technique called SCI (Stylistic Compression Induction). I prompt the Story Planner to assign a "voice anchor" to each character (e.g., "Responds in apocalyptic revelations of cryptic brevity"). It forces the model to filter dialogue through that specific stylistic lens.
Anti-Slop Measures
I added a rolling "banned phrases" list. The Summarizer Agent reads the new chapter, identifies "AI-isms" (like "The air was thick with..."), and adds them to a negative constraint list for the next chapter. The final list of banned phrases is hilarious. It's basically a map of the model's own clichés.
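A minimal sketch of how the rolling list can work (callLLM is the same stand-in as above, and the prompts are illustrative, not the repo's):

```typescript
// Sketch of the rolling banned-phrases list; prompts and names are
// illustrative, not the repo's actual code.
async function callLLM(prompt: string): Promise<string> {
  return ""; // wire up your provider here
}

const bannedPhrases = new Set<string>(["The air was thick with"]);

// Summarizer Agent pass: spot fresh AI-isms in the new chapter and add them to the list.
async function updateBannedPhrases(chapterText: string): Promise<void> {
  const raw = await callLLM(
    `List any stock "AI-ism" phrases in this chapter, one per line, no commentary:\n${chapterText}`
  );
  raw
    .split("\n")
    .map(p => p.trim())
    .filter(Boolean)
    .forEach(p => bannedPhrases.add(p));
}

// The next Chapter Writer prompt gets the accumulated list as a negative constraint.
function bannedPhrasesConstraint(): string {
  return `Do NOT use any of these phrases or close variants:\n- ${Array.from(bannedPhrases).join("\n- ")}`;
}
```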
Meta-Note
I have a CS degree (Harvey Mudd '02), but I didn't write the React/TypeScript code by hand. I prompted Gemini's code assistant to build the whole stack in "free developer mode" while I acted as the PM/debugger. It was a wild way to build software: I can read the code well enough to debug it, but the AI wrote the syntax. The repo is open source (MIT). Would love to hear if the SCI technique works for your prompts.