What's worked for me: having separate sections for "things Claude needs every session" vs "reference material it can look up when needed". The former stays concise. The latter can grow.
The anti-pattern I see is people treating it like a growing todo list or dumping every correction in there. That's when you get rot. If a correction matters, it should probably become a linting rule or a test anyway.
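For example, if the recurring correction is "use the logger, not print()", that can live in a pre-commit hook instead of another CLAUDE.md bullet. A minimal sketch, assuming a git hook and a Python codebase; the banned pattern and paths are illustrative, not from any particular setup:

```bash
#!/usr/bin/env bash
# .git/hooks/pre-commit (illustrative): reject staged Python files
# that reintroduce print() where the logger should be used.
files=$(git diff --cached --name-only --diff-filter=ACM | grep '\.py$')
[ -z "$files" ] && exit 0
if echo "$files" | xargs grep -n 'print('; then
  echo "Use the logger instead of print()." >&2
  exit 1
fi
```

Once the rule is mechanical, the tooling does the correcting and the CLAUDE.md entry can be dropped.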
----

And I am back to manually including markdown files in the prompt.
Say I have a TypeScript.md file with formatting and programming guidelines for TypeScript. Sometimes Claude edits TypeScript files without consulting it, despite it being linked and referenced in CLAUDE.md.

If your CLAUDE.md approach is public, I'd be interested to learn from it.
----

Currently, I have Claude set up a directory with CLAUDE.md, a roadmap, a frequently used commands guide, detailed by-phase plans, and a session log with per-session insights and next steps.
After each phase is done, I have it update the documents and ask what could have been done to make the session easier—often the answer is clearer instructions, which eventually leads to topic-specific documents or a new Claude Skill.
(edit) These reflection tasks are spelled out in the CLAUDE.md and in the docs directory, so I don't have to type them. Each new session, I paste a guide for how Claude should access the information from the last session.
So probably complementary to what you're doing, not a replacement. If you're already doing post-phase reviews, you might catch most of this anyway.
----
New session:
## Prompt
Set up project documentation structure for this new project.
Create the following:
1. **CLAUDE.md** (project root) - Concise (<40 lines) entry point with:
- One-line hypothesis/goal
- "Start Here" pointing to docs/ROADMAP.md
- Quick Reference (key commands)
- Key Files table
- Documentation links
- "End of Session" protocol (append to docs/session_log.md)
2. **docs/ROADMAP.md** - Status dashboard:
- Current state table (what exists)
- Blockers table (with links to plans)
- Priority actions table (with links to plans)
- Success criteria (minimum + publishable)
3. **docs/plans/** directory with plan templates:
- Each plan: Priority, Status, Depends on, Problem, Goal, Steps, Success Criteria
- Name format: plan_descriptive_name.md (not numbered)
4. **docs/runbook.md** - Common operations:
- How to add new data sources
- How to run analysis
- How to run tests
5. **docs/session_log.md** - Work history:
- Template for session entries
- "Prior Sessions" section for context
6. **docs/archive/** - For completed/old docs
If this is a data pipeline project, also set up:
- SQLAlchemy + Pydantic architecture (see sqlalchemy_pydantic_template.zip)
- tests/ directory with pytest structure
- You can copy the file ./generic_claude_project_src.zip into the current folder and unzip the copy (see the sketch below). DO NOT modify the original file or try to move it.
Keep all docs concise. Link between docs rather than duplicating.
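(For clarity, the copy-and-unzip step above amounts to something like the following; the destination name is illustrative:)

```bash
cp ./generic_claude_project_src.zip ./src_copy.zip   # copy, never move the original
unzip ./src_copy.zip                                 # work only with the copy
```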
---
## After Setup Checklist
- [ ] CLAUDE.md is <50 lines
- [ ] ROADMAP.md fits on one screen
- [ ] Each plan has clear success criteria
- [ ] runbook.md has actual commands (not placeholders)
- [ ] session_log.md has first entry from setup session
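Most of that checklist can be spot-checked mechanically. A sketch, assuming the file layout from the prompt above; the "placeholder" grep is just a heuristic:

```bash
test "$(wc -l < CLAUDE.md)" -lt 50 || echo "CLAUDE.md too long"
for p in docs/plans/plan_*.md; do
  grep -q 'Success Criteria' "$p" || echo "missing success criteria: $p"
done
grep -qi 'placeholder' docs/runbook.md && echo "runbook still has placeholders"
tail -n 3 docs/session_log.md   # should end with the setup-session entry
```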
---
## Python
Use the py-PROJECT pyenv; ensure pyenv is installed.
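In practice I expect that to mean something like this (assumes the pyenv-virtualenv plugin; the base Python version is illustrative):

```bash
command -v pyenv >/dev/null || { echo "install pyenv first" >&2; exit 1; }
# Create the project env once if it doesn't exist yet.
pyenv versions --bare | grep -qx 'py-PROJECT' || pyenv virtualenv 3.12.3 py-PROJECT
pyenv local py-PROJECT   # writes .python-version so the env applies per-project
```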
## Example Structure
project/
├── CLAUDE.md # Entry point (<50 lines)
├── docs/
│ ├── ROADMAP.md # Status + plan links
│ ├── runbook.md # Common operations
│ ├── session_log.md # Work history
│ ├── plans/
│ │ ├── plan_data_loading.md
│ │ ├── plan_analysis.md
│ │ └── plan_validation.md
│ └── archive/
│ └── plans/ # Old plans with date prefix
├── src/ or project_name_src/ # Code
└── tests/
└── test_*.py
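A one-shot scaffold for that tree (names from the example above; the test file name is a placeholder):

```bash
mkdir -p docs/plans docs/archive/plans src tests
touch CLAUDE.md docs/ROADMAP.md docs/runbook.md docs/session_log.md
touch docs/plans/plan_data_loading.md \
      docs/plans/plan_analysis.md \
      docs/plans/plan_validation.md
touch tests/test_smoke.py
```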
## Session log example
## YYYY-MM-DD: Brief Title
**Goal**: What you aimed to do
**Done**: What was accomplished (be specific)
**Discovered**: Non-obvious findings about codebase/data/tools
**Decisions**: Choices made and brief rationale
**Deferred**: Work skipped and why (not just "next steps")
**Blockers**: Issues preventing progress (or "None")
The key shift: less about tasks completed, more about knowledge gained that isn't captured elsewhere.
## Continuation prompt

Review the project state and tell me what to work on:
1. Read docs/ROADMAP.md
2. Read docs/session_log.md (last entry)
3. Read the highest-priority plan in docs/plans/
Then summarize:
- Last session (1 line)
- Current blockers
- Recommended next step
Ask if I'm ready to proceed.
The zip archive above is just a standard Python module with __init__ and hints at how I'd like things to be named.

----

LLMs degrade as the context/prompt size grows. For that reason I don't even use a CLAUDE.md at all.
There are very few bits that I do need to routinely repeat, because those are captured by linters/tests, or prevented by subdividing the tasks in small-enough chunks.
Maybe at times I wish I could quickly add some frequently used text to prompts (e.g. "iterate using `make test TEST=foo`"), but otherwise I don't want to delegate context/prompt building to an AI - it would quickly snowball.
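The kind of helper I mean, sketched as a shell function; it assumes Claude Code's non-interactive `claude -p` print mode, and the boilerplate text is just my example:

```bash
# Prepend a frequently used instruction to whatever prompt I type.
ask() {
  claude -p "Iterate using \`make test TEST=foo\` until it passes. $*"
}
# Usage: ask "fix the failing date parsing in loader.py"
```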
This check from claude-reflect [1]

    # Check for POSITIVE patterns (new in v3)
    elif echo "$PROMPT" | grep -qiE "perfect!|exactly right|that's exactly|that's what I wanted|great approach|keep doing this|love it|excellent|nailed it"; then

is fanciful.

[1] https://github.com/BayramAnnakov/claude-reflect/blob/main/sc...
----

I use Claude Code extensively, and lately it does all the coding for me. I've been asking it to go through claude.md after making changes to see whether it needs updating.

I'm now thinking that I will let Claude Code write code but never touch claude.md. It needs to stay short, and I need full control over it. As I lead the direction of development, claude.md is my primary way of influencing it in every single session.
Wdyt?
----

Here is their privacy policy: https://privacy.claude.com/en/articles/10023580-is-my-data-u...

But, again, in my case this is more about my project/user-specific corrections and preferences.