Curious how the token-optimization presets balance context-window cost against the depth of call-graph analysis. In my experience, aggressively pruning context to save input tokens degrades reasoning quality quickly once complex dependencies are involved.