3 pointsby andreadev3 hours ago1 comment
  • JoshBlythe2 hours ago
    In app builders using LLM's you would expect proper prompt injection procedures to be in place - but surprise surprise, it's not usually the case. AI tools tend to ship fast and security is alwasy an aferthought.

    I see this pattern constantly in my day job (I work in cyber for a FTSE 100 bank). I keep seeing tools that just prioritise developer experience over actual input validation, then act surprised when someone exploits it.

    I've also been building a drop in solution for this exact issue outside of work. Happy to see this stuff (in the best way possible) as it acts as affirmation that what I'm doing is valuable.