4 points by KellyCriterion 9 hours ago | 5 comments
  • ungreased0675 3 hours ago
    I have started using prompt injection techniques on coworkers who rely on LLMs to analyze complex arguments. It works fairly well, but would work very well if I knew exactly which model they were using and could craft phrases for that one.
  • muzani 8 hours ago
    Sometimes we try it out of mischief, but this might only work on the most primitive of LLMs, like GPT-3.5 or various self-hosted ones. The newer ones are more resistant to such prompt hacks.
  • Remi_Etien 2 minutes ago
    [dead]
  • Pythius 9 hours ago
    [dead]
  • truepricehq 3 hours ago
    [dead]