4 points by steadeepanda · 6 hours ago · 4 comments
  • derrak · 4 hours ago
    > It's deterministic on purpose (doesn't include any Al layer)

    I wouldn’t use the word deterministic here; I would use the word symbolic. Determinism, meaning that you always get the same output for the same input, isn’t what you want here. For instance, you can run an LLM with temperature set to zero and its output will be deterministic. Moreover, if you had a symbolic, non-deterministic algorithm, you would probably also be happy to use that.

    • steadeepanda · 4 hours ago
      LLMs are probabilistic by nature, so running one without temperature doesn't remove that completely; it just narrows the output. Here, though, we're deliberately aiming for a predefined set of rules, with no LLM included in the decision workflow. You can't safely rely on an LLM for security; given the current nature of LLMs, it's contradictory, and that's one of the problems we're trying to propose a solution for. It is possible to include an LLM in the decision workflow, but it comes with cons that I was trying to mitigate with this solution.
      • derrak · 3 hours ago
        I think your solution is a good idea. I was just pushing back on why it’s a good idea. Determinism isn’t the crux. The crux is that you’re using a symbolic algorithm with well-defined formal semantics.

        I was trying to show that determinism is not the crux by pointing out that there are ways to get a deterministic output from an LLM. And that thought experiment shows that determinism isn’t what’s essential.

        And I will disagree about merely narrowing the outputs. If I download a local model and set the temperature to zero and give it the same prompt twice, I will get the same output. Not one of several outputs in a narrow set. LLMs are functions.
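A toy sketch of that point, using an invented three-token "model" (the score table is made up for illustration, not a real LLM): temperature-zero decoding is just argmax at each step, with no sampling anywhere, so the same prompt always produces the same output.

```python
# Hypothetical toy "language model": maps the last token to next-token scores.
# The table is invented for illustration only.
SCORES = {
    "the": {"cat": 2.1, "dog": 1.9, "end": 0.5},
    "cat": {"sat": 3.0, "ran": 1.2, "end": 0.8},
    "sat": {"end": 2.5, "down": 2.4},
}

def greedy_decode(prompt: str, max_steps: int = 5) -> list[str]:
    """Temperature-zero decoding: always pick the argmax token.
    No randomness anywhere, so this is a pure function of the prompt."""
    tokens = [prompt]
    for _ in range(max_steps):
        scores = SCORES.get(tokens[-1], {"end": 1.0})
        nxt = max(scores, key=scores.get)  # argmax, not sampling
        if nxt == "end":
            break
        tokens.append(nxt)
    return tokens

# Same prompt twice -> identical output, every time: an LLM run this
# way is a function, not "one of several outputs in a narrow set".
assert greedy_decode("the") == greedy_decode("the")
print(greedy_decode("the"))  # ['the', 'cat', 'sat']
```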

        • steadeepanda · 3 hours ago
          Ah okay, yeah, sure, you're right; I didn't mean it that way. I know we can get deterministic output from an LLM, but even then, LLMs are trained on large data sets that open a surface for prompt injection and other attacks, and no matter how strong your guardrails are, there's still a way to inject a prompt, even if you configure the model for deterministic output. So what I was going for with "determinism" was that the solution I made sits outside the LLM: it has nothing to do with the model's internal reasoning, and it checks each action safely and securely against the defined rules.

          Maybe I should emphasize the fact that it's external to any LLM? I don't know.
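One way to make that externality concrete (a minimal sketch; the action fields and rule set below are invented for illustration, not the project's actual schema): the policy check lives entirely outside the model, sees only the proposed action, and never touches the prompt or the model's reasoning, so a prompt injection can change what the LLM proposes but not what the gate permits.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """A proposed action, e.g. emitted by an LLM agent.
    The fields are hypothetical, chosen only for this sketch."""
    kind: str     # e.g. "read_file", "delete_file", "http_post"
    target: str

# The predefined rule set: a fixed allow-list, evaluated with no model involved.
ALLOWED = {
    ("read_file", "/srv/app/data"),
    ("http_post", "https://api.example.internal/log"),
}

def check(action: Action) -> bool:
    """Symbolic, deterministic check: same action in, same verdict out.
    The model's output is treated as untrusted input to this gate."""
    return (action.kind, action.target) in ALLOWED

assert check(Action("read_file", "/srv/app/data")) is True
assert check(Action("delete_file", "/srv/app/data")) is False
```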

  • jaylew1997 · 5 hours ago
    nice