10 pointsby mettamage7 hours ago4 comments
  • mettamage7 hours ago
    I was playing around with some prompt injection guard rails frameworks. I know they don't mitigate attack classes, but they at least do something. I just got a bit miffed about the high false positive rates I saw in my own testing.

    This one has a low false positive rate. And I thought that was interesting.

  • ekns5 hours ago
    There is a simple way to mitigate prompt injection. Just check metadata only: is this action by the LLM suspicious given trusted metadata, blanking out the data
  • carterschonwald6 hours ago
    while i cant speak regarding arbitrary prompt injections, ive been using a simple approach i add to any llm harness i use, that seems to solve turn or role confusion being remotely viable.

    i really need to test my toolkit (carterkit) augmented harnesses on some of the more respectavle benchmarks

  • ninju4 hours ago
    You misspelled 'execute' in the video ;)
    • bastawhiz4 hours ago
      And it still did the right thing. Which I think makes the demo slightly more impressive.