I have started using prompt injection techniques on coworkers who rely on LLMs to analyze complex arguments. It works fairly well, but it would work much better if I knew exactly which model they were using and could tailor phrases to that specific model.
Sometimes we try it out of mischief, but this might only work on the most primitive LLMs, like GPT-3.5 or various self-hosted ones. The newer models are more resistant to such prompt hacks.