8 points by lelanthran 5 hours ago | 4 comments
  • conorbergin 7 minutes ago
    LLMs are deterministic: the same model under the same conditions will produce the same output, unless some randomness is purposefully injected. Neural networks in general can be thought of as universal function approximators.
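
    To make that concrete, here's a toy decoding step in plain NumPy (a sketch, not any real model's API; the logits are made up): with temperature 0 you just take the argmax and get the same token every time, and randomness only enters when you deliberately sample. Even then, a fixed seed makes it repeatable.

        import numpy as np

        def softmax(logits):
            z = np.exp(logits - np.max(logits))
            return z / z.sum()

        def next_token(logits, temperature=0.0, rng=None):
            logits = np.asarray(logits, dtype=float)
            if temperature == 0.0:
                # greedy decoding: fully deterministic
                return int(np.argmax(logits))
            # temperature sampling: this is where randomness is injected
            probs = softmax(logits / temperature)
            rng = rng if rng is not None else np.random.default_rng()
            return int(rng.choice(len(probs), p=probs))

        logits = [2.0, 1.0, 0.5]
        print(next_token(logits))                                  # always 0
        print(next_token(logits, 1.0))                             # varies run to run
        print(next_token(logits, 1.0, np.random.default_rng(42)))  # repeatable with a fixed seed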
  • bigstrat2003 9 minutes ago
    You're right, but the reality is that the people who are excited about LLMs don't care about determinism. They are happy to hand off the thinking to a third party, even if it will give wrong answers they don't notice.
  • jqpabc123 4 hours ago
    In other words, LLMs are probabilistic, not deterministic.
    • sscaryterry 4 hours ago
      Dare I say, so are humans?
      • jqpabc123 3 hours ago
        This used to be a big reason why we used computers --- to help eliminate the probability of error.

        But apparently, not so much any more.

        • somewhereoutth an hour ago
          Right, it was the perfect match: humans for the fuzzy, touchy-feely stuff, computers for hard-edged, correct calculations. How have we managed to screw this up so badly?
  • cyanydeez an hour ago
    This makes sense, but you need to understand that you're ignoring the compiler once you're past the machine-code level, which isn't an abstraction, right? It's the root. So setting that part of the missive aside: going from C to Python, different compilers do emit different machine code.

    C and Python each have a bunch of different compilers, so if you take the same code, the output can be different. There's determinism within the same compiler (there's a rough sketch of this at the end of this comment). Add in different architectures, and the machine-code output is definitely more varied than presented.

    But that's still manageable; then what if you add in all the dependencies? Well, you get a more florid complexity.

    So really, it's a shitty abstraction rather than an inaccurate analogy. If you lined them up in levels, there could be some universe where they are a valid abstraction. But it's not the current universe, because we know the models function on non-determinism.

    I'd posit that if there were a 'turtles all the way down' abstraction for the LLM, it's simply coming from the other end, the one where the human mind might start entering the picture.
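
    To make the compiler point concrete, here's a rough sketch (assuming gcc and clang are both installed; the file name and flags are just for illustration): compile the same C source twice with each compiler and hash the object files. The same compiler with the same flags typically hands back the same bytes, while switching compilers gives you different machine code for the same program.

        import hashlib
        import pathlib
        import subprocess
        import tempfile

        SRC = "int add(int a, int b) { return a + b; }\n"

        def compile_and_hash(compiler, src, out):
            # -c: stop after the object file; -O2: a typical optimization level
            subprocess.run([compiler, "-c", "-O2", str(src), "-o", str(out)], check=True)
            return hashlib.sha256(out.read_bytes()).hexdigest()

        with tempfile.TemporaryDirectory() as tmp:
            tmp = pathlib.Path(tmp)
            src = tmp / "add.c"
            src.write_text(SRC)
            for cc in ("gcc", "clang"):  # assumes both are on PATH
                h1 = compile_and_hash(cc, src, tmp / f"{cc}_1.o")
                h2 = compile_and_hash(cc, src, tmp / f"{cc}_2.o")
                # same compiler, same flags: identical output
                print(cc, "repeatable:", h1 == h2, h1[:12])
            # gcc and clang will generally disagree with each other,
            # which is the "different compilers, different machine code" point.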