2 points by DanielVZ 2 hours ago | 1 comment
  • smccabe0 an hour ago
    There's some empirical backing to this if you consider what LLMs are doing as part of the same regime: an LLM takes a token stream and inflates it into the N-dimensional space of its embedding. We take a string of words and apply it to our model of the world, our understanding, and our memories. I've had a lot of success understanding the math through that lens.
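    The "inflation" step described above can be sketched concretely: each token ID indexes a row of an embedding matrix, turning a flat stream of integers into high-dimensional vectors. The vocabulary size, embedding dimension, and token IDs below are purely illustrative, not taken from any particular model.

    ```python
    import numpy as np

    # Hypothetical sizes; real models vary (e.g. GPT-2 uses 50257 x 768).
    vocab_size, embed_dim = 50_000, 768

    # A random embedding matrix stands in for learned weights.
    rng = np.random.default_rng(0)
    embedding = rng.normal(size=(vocab_size, embed_dim))

    # A short token stream of integer IDs.
    token_ids = np.array([17, 4052, 311])

    # The lookup "inflates" shape (3,) into shape (3, 768).
    vectors = embedding[token_ids]
    print(vectors.shape)  # (3, 768)
    ```

    In a real transformer this lookup is just the first layer; the point here is only the change of representation, from a one-dimensional symbol stream to points in a high-dimensional space.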