1 point by rzk, 2 hours ago | 1 comment
  • david_iqlabs, 2 hours ago
    This feels similar to how fuzzing evolved. Early fuzzers threw random input everywhere and produced tons of noise. Modern fuzzing works best when you give it a structured harness and a narrow target. LLMs seem to behave the same way: the threat model acts like a fuzzing harness.
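    A toy sketch of that contrast (the `parse_kv` target, the key names, and the planted bug are all invented for illustration): a naive fuzzer sprays random bytes and almost never forms grammatical input, while a structured harness always emits well-formed input and mutates only the interesting part.

```python
import random

def parse_kv(line: str) -> tuple[str, str]:
    # Toy target with a planted bug: one specific key crashes the parser.
    key, _, value = line.partition("=")
    if key == "admin":
        raise RuntimeError("unhandled admin key")
    return key, value

def naive_fuzz(trials: int, seed: int = 0) -> int:
    # "Random input everywhere": almost never produces a grammatical
    # "key=value" line, so it essentially never reaches the buggy path.
    rng = random.Random(seed)
    crashes = 0
    for _ in range(trials):
        line = "".join(chr(rng.randrange(32, 127)) for _ in range(8))
        try:
            parse_kv(line)
        except RuntimeError:
            crashes += 1
    return crashes

def harnessed_fuzz(trials: int, seed: int = 0) -> int:
    # Structured harness: every input is well-formed, and only the
    # interesting part (the key) is mutated -- a narrow fuzz target.
    rng = random.Random(seed)
    crashes = 0
    for _ in range(trials):
        key = rng.choice(["user", "host", "admin"])
        try:
            parse_kv(f"{key}={rng.randrange(100)}")
        except RuntimeError:
            crashes += 1
    return crashes
```

    With the same budget, the naive loop finds nothing while the harnessed one hits the bug repeatedly; the harness plays the role the comment assigns to a threat model for an LLM.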