79 points by chilipepperhott 10 days ago | 4 comments
  • wunderwuzzi23 7 days ago
    Beware of ANSI escape codes where the LLM might hijack your terminal, aka Terminal DiLLMa.

    https://embracethered.com/blog/posts/2024/terminal-dillmas-p...
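
    A harmless way to see the class of problem (an assumed demo, not an example from the linked post): if a model reply containing raw escape bytes is printed unfiltered, the terminal interprets them. The OSC 0 sequence below silently rewrites the window title; OSC 52 can even write into the clipboard on terminals that support it.

        # Simulate echoing untrusted model output straight to the terminal:
        # the embedded OSC 0 sequence ("\033]0;...\007") changes the window title.
        printf 'Here is your summary.\033]0;not-your-title\007\n'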

    • chilipepperhott 6 days ago
      That's actually crazy and I'll keep it in mind. Right now, I am mostly using it for data generation, so no untrusted prompts are going in. I'll add a disclaimer to the repo.
    • thephyber 6 days ago
      Are there any projects to sanitize the output of LLMs before it is injected into Bash scripts or other source code?

      I get the feeling this will start to break into the OWASP Top 10 in the next few years…

      • jmholla 6 days ago
        While on the topic, does anybody have a good utility to sanitize things? I'm imagining something I can pipe to:

            xclip -selection clipboard -o | sanitize
        
        I've been meaning to throw something together myself, but I worry I'd miss something.
        • thephyber 2 days ago
          A previous company tried to do this with a single “clean_xss” function. It isn't possible, because different code contexts have different sanitization logic: JSON encoding, URL encoding, DOM sources and sinks, HTML attributes, SCRIPT tags, CSS, etc. are all escaped or sanitized in different ways. Trying to write a single function/script with no knowledge of the context just gives the developer a greater sense of security than actually exists.
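
          A quick illustration with jq (assumed tooling, purely for demonstration): the same untrusted string needs a different encoding in each target context.

              # One payload, three contexts, three different escapings.
              s='<img src=x onerror=alert(1)>&"'
              printf '%s\n' "$s" | jq -Rr @json   # JSON string context
              printf '%s\n' "$s" | jq -Rr @uri    # URL / query-parameter context
              printf '%s\n' "$s" | jq -Rr @html   # HTML text context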
          • jmholla 3 hours ago
            I should've been clearer. I just want to strip terminal escape sequences. It's probably as straightforward as I've imagined.
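
            A minimal sketch of such a filter (an assumed approach using perl; the name strip-escapes and the regexes are illustrative, not exhaustive):

                #!/bin/sh
                # strip-escapes: best-effort removal of terminal escape sequences from stdin.
                # Covers CSI (ESC [ ... final byte), OSC (ESC ] ... BEL or ESC \), and other
                # two-byte ESC sequences. Not a full terminal parser.
                exec perl -pe '
                    s/\e\[[0-9;?]*[ -\/]*[@-~]//g;   # CSI: colors, cursor movement, ...
                    s/\e\][^\a\e]*(?:\a|\e\\)?//g;   # OSC: window title, OSC 52 clipboard, ...
                    s/\e[@-Z\\\^_]//g;               # remaining single-character ESC sequences
                '

            Wired into the pipeline above: xclip -selection clipboard -o | strip-escapes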
  • TheDong 6 days ago
    I feel like the incumbent for running LLM prompts, including locally, on the CLI is llm: https://github.com/simonw/llm?tab=readme-ov-file#installing-...

    How does this compare?

  • zoobab 6 days ago
    I did a similar curl script to ask questions to Llama 3 hosted at DuckDuckGo:

    https://github.com/zoobab/curlduck
