4 pointsby azonez2 hours ago1 comment
  • azonez2 hours ago
    Hi HN! We built PenStrike because teams using LLMs kept asking the same question: how do we secure these models before production? Our platform scans for prompt injections, jailbreaks, insecure configurations and emerging vulnerabilities - automatically.

    We wouldd love feedback from the community, especially around:

    scan types you’d want to see

    missing security signals

    integrations we should prioritize

    issues you run into

    We are here all day to answer questions!

    aziz@penstrike.io