3 points by EPendragon 6 hours ago | 1 comment
  • derrak 4 hours ago
    Let's solidify definitions. A procedure is deterministic iff, for every input, it always produces the same output on that input.
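A minimal sketch of that definition as an empirical check (the function names here are made up for illustration; note this check is necessary but not sufficient, since finitely many trials can't prove determinism):

```python
import random

def looks_deterministic(f, x, trials=100):
    """Empirically check that repeated calls f(x) all agree.
    Passing is evidence of determinism, not a proof."""
    first = f(x)
    return all(f(x) == first for _ in range(trials))

assert looks_deterministic(lambda x: x * x, 7)           # pure arithmetic
assert not looks_deterministic(lambda x: random.random(), 0)  # sampling
```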

    Now, I am going to be pedantic because words matter here. I agree with the author that LLMs have downsides that can be addressed with _symbolic_ tools. But _determinism_ has very little to do with this.

    > LLMs by nature are non-deterministic

    This is false. LLMs are functions. All appearances otherwise are an artifact of how we use them.

    This fact already suggests that determinism isn’t (entirely) what you want. Because even if you _could_ use LLMs as functions (I admit you can’t always do this with frontier models), that wouldn’t make you happy.
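One way to see "the nondeterminism is an artifact of how we use them": a model maps a prompt to scores deterministically, and randomness enters only at decoding. A toy sketch (all names hypothetical, not a real API):

```python
import math
import random

def toy_logits(prompt):
    # A fixed function from prompt to next-token scores: the "model".
    return {"yes": 2.0, "no": 1.0, "maybe": 0.5}

def greedy(prompt):
    # Argmax decoding: a fully deterministic way to *use* the model.
    scores = toy_logits(prompt)
    return max(scores, key=scores.get)

def sample(prompt, rng):
    # Sampling: the randomness lives in `rng`, not in the model itself.
    scores = toy_logits(prompt)
    tokens = list(scores)
    weights = [math.exp(scores[t]) for t in tokens]
    return rng.choices(tokens, weights=weights)[0]

assert all(greedy("q") == "yes" for _ in range(10))
# Even sampling is reproducible once the seed is pinned:
assert sample("q", random.Random(0)) == sample("q", random.Random(0))
```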

    > I want the output to always be predictable based on the behavior of the program and provided configuration.

    Here, I will argue that predictability is divorced from determinism. You want an output that has a certain semantic relationship with the input. E.g., if you give a spec as input, you want a program that satisfies that spec.

    Here it should be obvious that getting the same output on the same input is not very important. Who cares if the arguments to the function are renamed? Who cares if the function is implemented differently but essentially does what the spec asks?

    I argue the _only_ thing that matters is that the output satisfies the intended relationship with the input. And this is orthogonal to determinism.
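The "relation, not identity" point can be sketched concretely. Here a made-up spec admits many valid outputs, and two different (individually deterministic) implementations both satisfy it while disagreeing byte-for-byte:

```python
# Hypothetical spec: "return an even number >= n".
def spec_holds(n, out):
    return out >= n and out % 2 == 0

def impl_a(n):
    # Round up to the nearest even number.
    return n if n % 2 == 0 else n + 1

def impl_b(n):
    # Also valid, but overshoots on even inputs.
    return n + 2 if n % 2 == 0 else n + 1

for n in range(20):
    assert spec_holds(n, impl_a(n)) and spec_holds(n, impl_b(n))
assert impl_a(4) != impl_b(4)  # different outputs, both correct
```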

    Edit:

    It looks like the author has the same realization:

    > And while even this example shows how differently an LLM responds to the same query, it ends up producing a more reliable output.

    • EPendragon 3 hours ago
      Hey, @derrak! Thanks for the feedback! I appreciate your constructive criticism of the post.

      > > LLMs by nature are non-deterministic
      >
      > This is false. LLMs are functions. All appearances otherwise are an artifact of how we use them.

      I completely agree that LLMs are functions, and that they are essentially programs like any other. What I am trying to illustrate is that these functions are not pure: they are full of side effects, and thereby produce different results based on a large number of factors.
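The purity distinction being drawn here, as a minimal sketch (the functions are made up): a pure function depends only on its arguments, while an impure one reads hidden state, so equal inputs can yield unequal outputs.

```python
def pure_add(a, b):
    # Depends only on its arguments: same inputs, same output, always.
    return a + b

_counter = 0
def impure_add(a, b):
    # Hidden mutable state makes the output depend on call history.
    global _counter
    _counter += 1
    return a + b + (_counter % 2)

assert pure_add(2, 2) == pure_add(2, 2)      # always 4
assert impure_add(2, 2) != impure_add(2, 2)  # 5 on odd calls, 4 on even
```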

      > This fact already suggests that determinism isn’t (entirely) what you want. Because even if you _could_ use LLMs as functions (I admit you can’t always do this with frontier models), that wouldn’t make you happy.

      What do you mean here by "even if you could use LLMs as functions, that wouldn't make you happy"? What do you mean by using them as functions, and what would it take to make one happy?

      > Here it should be obvious that getting the same output on the same input is not very important. Who cares if the arguments to the function are renamed? Who cares if the function is implemented differently but essentially does what the spec asks?

      I would disagree here. Renaming arguments or implementing the function differently doesn't negate the fact that I still want "pure" output from a function. I don't want a function that once in a while produces 2+2=5 instead of 4. I want it to always be the same: the same two arguments (2, 2) should always result in the same output (4).

      > I argue the _only_ thing that matters is that the output satisfies the intended relationship with the input.

      I agree with you here, but with one addition: it must "always" satisfy the relationship with the input. As long as it always does, that is deterministic behavior.

      The definition of deterministic algorithm is as follows: "A deterministic algorithm is an algorithm that, given a particular input, will always produce the same output, with the underlying machine always passing through the same sequence of states." This is what I am essentially talking about and seeking in my interactions with AI systems.
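One common route toward the reproducibility this definition asks for is to pin every source of randomness behind a single seed. A hedged toy sketch (the `generate` function below is a stand-in, not a real model API; in practice, real model serving can still vary across runs due to floating-point and batching effects even with sampling controls fixed):

```python
import random

def generate(prompt, seed):
    # All randomness flows from the seed, so the call is a pure function
    # of (prompt, seed).
    rng = random.Random(seed)
    vocab = ["yes", "no", "maybe"]
    return " ".join(rng.choice(vocab) for _ in range(3))

# Same input and same seed -> same output, every time.
assert generate("2+2?", seed=42) == generate("2+2?", seed=42)
```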