I've been noticing this creeping into my own AI coding suggestions lately. An LLM doesn't inherently understand "abandonware" or community health; it just sees a package that technically solves the logic puzzle in its context window. We've spent the last decade building CI/CD tooling to catch known CVEs, but we have no comparable guardrail against an AI confidently importing an eight-year-old unmaintained library whose clean vulnerability record exists only because nobody has audited it in years.
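To make the gap concrete: a staleness gate is not hard to sketch. Here's a minimal, hypothetical version assuming you can obtain each dependency's most recent release date (e.g. from a registry like PyPI, whose JSON metadata includes upload timestamps; the inventory below is hard-coded for illustration, and `STALE_AFTER_YEARS` is an invented policy knob, not any standard tool's setting):

```python
from datetime import datetime, timezone

STALE_AFTER_YEARS = 5  # hypothetical policy: no release in N years = abandonware risk

def is_stale(last_release: datetime, now: datetime, max_age_years: int = STALE_AFTER_YEARS) -> bool:
    """Flag a dependency whose newest release is older than the policy allows."""
    return (now - last_release).days > max_age_years * 365

# Hypothetical inventory: package name -> date of its most recent release.
# In a real pipeline this would be fetched from the registry, not hard-coded.
inventory = {
    "leftpad-ish": datetime(2016, 3, 1, tzinfo=timezone.utc),
    "requests-ish": datetime(2024, 5, 1, tzinfo=timezone.utc),
}

now = datetime(2025, 1, 1, tzinfo=timezone.utc)
stale = sorted(name for name, dt in inventory.items() if is_stale(dt, now))
print(stale)  # the CI step would fail the build if this list is non-empty
```

The point isn't this exact heuristic (last-release age is a crude proxy for community health), it's that "no known CVEs" and "actively maintained" are independent signals, and today's pipelines only check the first.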