Though I'd love to see an analysis of pre-GPT writing to check whether these patterns were just as prevalent then and we simply lacked today's acute sensitivity to them.
There's also the possibility that AI started it, but people who read AI-generated text now organically propagate AI tropes in their own words because it's part of the writing they consume.
This changed my mental model of using Skills a bit, anyway.
> LLMs forget things as their context grows. When a conversation gets long, the context window fills up, and Claude Code starts compacting older messages. To prevent the agent from forgetting the skill’s instructions during a long thread, Claude Code registers the invoked skill in a dedicated session state.
> When the conversation history undergoes compaction, Claude Code references this registry and explicitly re-injects the skill’s instructions: you never lose the skill guardrails to context bloat.
If true, this means that over time a session can accumulate all or most skills, negating the benefit of progressive disclosure. I'd expect it would be better to let compaction do its thing, with the agent re-fetching a skill if it's needed again.
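A minimal sketch of the re-injection behavior the article describes, and why it accumulates. All names here (`compact`, `skill_registry`, the message dicts) are hypothetical illustrations, not Claude Code's actual internals:

```python
# Hypothetical sketch: skills registered in session state get their
# instructions re-injected after compaction drops older messages.
# This is NOT Claude Code's real API, just an illustration of the claim.

def compact(messages, keep_last, skill_registry):
    """Keep only the most recent messages, then re-inject every
    registered skill's instructions so guardrails survive compaction."""
    recent = messages[-keep_last:]
    reinjected = [
        {"role": "system", "content": instructions}
        for instructions in skill_registry.values()
    ]
    # Every skill ever registered comes back, regardless of relevance now.
    return reinjected + recent

# A long thread where the skill prompt would otherwise be compacted away:
registry = {"pdf-skill": "Always cite page numbers when quoting PDFs."}
history = [{"role": "user", "content": f"msg {i}"} for i in range(100)]
history = compact(history, keep_last=10, skill_registry=registry)
```

Because `compact` re-injects everything in the registry unconditionally, each invoked skill permanently occupies context from then on, which is exactly the accumulation problem above.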
I don't trust the article, though. It looks like someone just pointed an LLM at the codebase and asked it to write an article.