7 points by tgalal 5 hours ago | 1 comment
  • lousyclicker 5 hours ago
    That's a cool trick, but piping potentially sensitive server data back to your local machine and through an external LLM API kind of defeats the purpose of "never granting SSH access". Also curious about latency.
    • tgalal 5 hours ago
      The point is to avoid installing tools or granting the LLM access and the "steering wheel" on the server itself. The data you pipe is the same data you'd copy-paste into ChatGPT or similar anyway. There is certainly a bit of latency when piping a lot of data into the context, since everything is tunneled through the local machine, but I'd argue that the context size being limited by the LLM itself makes it acceptable for most use cases.