my first instinct was to fix this upstream (tighter tool calls, explicit line limits) rather than filter downstream, and that helps a lot. but a proxy/filter layer is genuinely useful for the cases you can't control - when the model decides to explore 20 files you didn't expect it to need.
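for concreteness, the dumbest possible downstream filter is head+tail truncation of oversized tool results - a sketch, where the function name and limits are mine, not pruner's:

```python
def truncate_tool_output(text: str, max_lines: int = 40) -> str:
    """cap a tool result at roughly max_lines, keeping the head and tail.

    hypothetical illustration of a downstream filter layer - keeps the
    start (imports, signatures) and end (returns, errors) of a long
    result and drops the middle, with an explicit omission marker so
    the model knows content was cut.
    """
    lines = text.splitlines()
    if len(lines) <= max_lines:
        return text
    keep = max_lines // 2
    head, tail = lines[:keep], lines[-keep:]
    omitted = len(lines) - len(head) - len(tail)
    return "\n".join(head + [f"... [{omitted} lines omitted] ..."] + tail)
```

even this crude version already runs into the question below: the middle of a file is usually noise, except when it isn't.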
curious about the failure modes though. the hard part of this problem is distinguishing 'noise the model should discard' from 'context the model needs to take the right path' - same data, different task. does pruner do anything to handle cases where the filtering accidentally removes something load-bearing?