The SCAN fix: put questions at the end of each prompt section. Before each task, the agent answers them: ~300 tokens that actively link the instructions to the current task. It's not re-reading the prompt passively, it's generating connections to it.
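A minimal sketch of that structure, assuming a simple two-part setup: a system prompt where each section ends with its recall questions, and a pre-task message that asks the agent to answer them before starting. Section names, rules, and questions here are hypothetical placeholders, not the actual prompt; the call that gets the agent's answers is whatever your stack already uses.

```python
# Hypothetical sections: each carries its rules plus the recall
# questions the agent must answer before every task.
SECTIONS = {
    "Tone": {
        "rules": ["Stay concise.", "No filler apologies."],
        "questions": ["Which tone rule is most at risk on this task?"],
    },
    "Sources": {
        "rules": ["Cite a source for every factual claim."],
        "questions": ["What will you cite for this task's claims?"],
    },
}

def build_system_prompt(sections: dict) -> str:
    """Render the prompt so each section ends with its questions."""
    parts = []
    for name, body in sections.items():
        parts.append(f"## {name}")
        parts.extend(f"- {rule}" for rule in body["rules"])
        parts.append("Before each task, answer:")
        parts.extend(f"  Q: {q}" for q in body["questions"])
    return "\n".join(parts)

def recall_step(sections: dict, task: str) -> str:
    """Pre-task message: answering the questions forces the agent to
    connect the rules to the task at hand, not just re-read them."""
    questions = [q for body in sections.values() for q in body["questions"]]
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, 1))
    return (
        f"Task: {task}\n"
        "Before starting, briefly answer (a few hundred tokens total):\n"
        + numbered
    )

print(build_system_prompt(SECTIONS))
print(recall_step(SECTIONS, "Draft the release notes"))
```

The point of `recall_step` is that the agent's answers land in the context right before the work begins, which is where the linking happens; the system prompt alone doesn't do it.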
Months of daily use across Claude and Kimi. No benchmarks, since attention weights can't be measured from outside. But the difference is obvious: without SCAN, agents lose rules by mid-session; with SCAN, they don't.
No dependencies, any model, open method. Full writeup in the gist.
On drift being "mostly gone" — depends on prompt complexity. With a simple system prompt, sure, modern models hold up fine. But with a large instruction set (mine is ~4000 tokens, 25+ rules across 7 sections) the drift is very much still there, even on Opus. The more rules you have, the more they compete for attention, and the easier it is for specific ones to drop off mid-session.
Also worth noting: this isn't limited to coding agents. Any long-running LLM workflow with complex instructions has the same problem. Customer support bots forget their tone policy, medical assistants stop citing sources, content moderation gets lenient over time. If you have a system prompt with rules and a session longer than 20 minutes, the rules will decay.