281 points by surprisetalk 4 days ago | 22 comments
  • helloplanets 18 hours ago
    For the visual learners, here's a classic intro to how LLMs work: https://bbycroft.net/llm
  • vivzkestrel 7 hours ago
    - while impressive, it still doesn't tell me why a neural network is architected the way it is, and that, my bois, is where this guy comes in: https://threads.championswimmer.in/p/why-are-neural-networks...

    - make a visualization of the article above and it would be the biggest aha moment in tech

  • tpdly 17 hours ago
    Lovely visualization. I like the very concrete depiction of middle layers "recognizing features", which makes the whole machine feel more plausible. I'm also a fan of visualizing things, but I think it's important to appreciate that some things (like a 10,000-dimension vector as the input, or even a 100-dimension vector as the output) can't be concretely visualized, and you have to develop intuitions in more roundabout ways.

    I hope they make more of these; I'd love to see a transformer presented more clearly.

  • chan1 10 hours ago
    Super cool visualization. Found this video by 3Blue1Brown super helpful for visualizing transformers as well: https://www.youtube.com/watch?v=wjZofJX0v4M&t=1198s
    • bilbo-b-baggins 4 hours ago
      Their series on LLMs, neural nets, etc., is amazing.
  • esafak 18 hours ago
    This is just scratching the surface -- where neural networks were thirty years ago: https://en.wikipedia.org/wiki/MNIST_database

    If you want to understand neural networks, keep going.

    • abrookewood 10 hours ago
      Which, if you are trying to learn the basics, is actually a great place to start ...
  • swframe2 11 hours ago
    This Welch Labs video is very helpful: https://www.youtube.com/watch?v=qx7hirqgfuU
  • 4fterd4rk 19 hours ago
    Great explanation, but the last question is quite simple. You determine the weights via brute force: simply run a large amount of data where you have the input as well as the correct output (handwriting to text, in this case).
    • ggambetta 18 hours ago
      "Brute force" would be trying random weights and keeping the best performing model. Backpropagation is compute-intensive but I wouldn't call it "brute force".
      • Ygg2 18 hours ago
        "Brute force" here is about the amount of data you're ingesting. It's no AlphaZero, which learns from scratch.
        • jazzpush2 15 hours ago
          What? Either option requires sufficient data. Brute force implies iterating over all combinations until you find the best weights. Back-prop is an optimization technique.
          • Ygg2 6 hours ago
            In the context of the grandparent's post:

                 > You determine the weights via brute force. Simply running a large amount of data where you have the input as well as the correct output

            Brute force just means guessing all possible combinations. A dataset containing most human knowledge is about as brute force as you can get.

            I'm fairly sure AlphaZero's data is generated by AlphaZero itself. But it's not an LLM.

            • fc417fc802 2 hours ago
              No, a large dataset does not make something brute force. Rather than backprop, an example of brute force might be taking a single input output pair then systematically sampling the model parameter space to search for a sufficiently close match.

              The sampling stage of Evolution Strategies at least bears a resemblance but even that is still a strategic gradient descent algorithm. Meanwhile backprop is about as far from brute force as you can get.
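
              As a sketch of the distinction being argued above (toy data and numbers, nothing from the article): "brute force" as trying random weights and keeping the best, versus gradient descent, fitting a single weight w so that w * x approximates y.

                  import numpy as np

                  rng = np.random.default_rng(0)
                  x = rng.normal(size=100)
                  y = 3.0 * x + rng.normal(scale=0.1, size=100)  # true weight is 3.0
                  loss = lambda w: np.mean((w * x - y) ** 2)

                  # Brute force: guess random weights, keep the best one found.
                  guesses = rng.uniform(-10, 10, size=1000)
                  w_brute = guesses[np.argmin([loss(g) for g in guesses])]

                  # Gradient descent: follow the analytic gradient of the loss.
                  w = 0.0
                  for _ in range(100):
                      grad = np.mean(2 * (w * x - y) * x)  # d(loss)/dw
                      w -= 0.1 * grad

                  print(w_brute, w)  # both land near 3.0; only one scales past a handful of weights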

  • droidist2 4 hours ago
    Really cool. The animations within a frame work well.
  • ge96 17 hours ago
    I like the style of the site; it has a "vintage" look.

    I don't think it's a moiré effect, but yeah, I keep looking at the pattern.

  • 8cvor6j844qw_d6 15 hours ago
    Oh wow, this looks like the 3D render of a perceptron from when I started reading about neural networks. I guess neural networks are essentially built on that idea? Inputs > a weight function to adjust the final output to desired values?
    • mr_toad 10 hours ago
      The layers themselves are basically perceptrons, not really any different to a generalized linear model.

      The ‘secret sauce’ in a deep network is the hidden layer with a non-linear activation function. Without that you could simplify all the layers to a linear model.
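
      A minimal NumPy sketch of that collapse (layer sizes and weights are arbitrary, nothing from the article): two stacked linear layers reduce to a single linear layer, while a ReLU between them breaks the equivalence.

          import numpy as np

          rng = np.random.default_rng(0)
          x = rng.normal(size=3)
          W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
          W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

          # Two stacked linear layers...
          two_linear = W2 @ (W1 @ x + b1) + b2
          # ...equal one linear layer with W = W2 @ W1 and b = W2 @ b1 + b2.
          one_linear = (W2 @ W1) @ x + (W2 @ b1 + b2)
          assert np.allclose(two_linear, one_linear)

          # With a non-linearity in between, no such collapse exists in general.
          relu = lambda v: np.maximum(v, 0.0)
          deep = W2 @ relu(W1 @ x + b1) + b2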

    • sva_ 11 hours ago
      A neural network is basically a multilayer perceptron

      https://en.wikipedia.org/wiki/Multilayer_perceptron

    • adammarples 13 hours ago
      Yes, vanilla neural networks are just lots of perceptrons
  • jazzpush2 15 hours ago
    I love this visual article as well:

    https://mlu-explain.github.io/neural-networks/

  • vicentwu 5 hours ago
    I like the CRT-like filter effect.
  • jetfire_1711 14 hours ago
    Spent 10 minutes on the site, and I think this is where I'll start my day next week! I just love visual-based learning.
  • cwt137 17 hours ago
    This visualization reminds me of the 3blue1brown videos.
    • giancarlostoro 17 hours ago
      I was thinking the same thing. It's at least the same description.
  • shrekmas 12 hours ago
    As someone who does not use Twitter, I suggest adding RSS to your site.
  • anon291 16 hours ago
    Nice visuals, but it misses the mark. Neural networks transform vector spaces and collect points into bins. This visualization shows the structure of the computation. This is akin to displaying a matrix-vector multiplication in Wx + b notation, except that W, x, and b get more exciting displays.

    It completely misses the mark on what it means to 'weight' (linearly transform), bias (affine transform), and then non-linearly transform (i.e., 'collect') points into bins.
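
    Read as code, those three steps might look like this sketch (the 2x2 matrix, bias, and inputs are made-up values for illustration):

        import numpy as np

        W = np.array([[1.0, 0.5],
                      [0.2, 1.0]])           # weight: linear transform
        b = np.array([-0.3, 0.1])            # bias: makes the map affine
        relu = lambda v: np.maximum(v, 0.0)  # non-linearity

        layer = lambda x: relu(W @ x + b)

        # Two distinct inputs whose pre-activations are all negative get
        # "collected" into the same bin (the zero vector) -- something no
        # purely affine map can do.
        x1 = np.linalg.solve(W, np.array([-1.0, -1.0]) - b)
        x2 = np.linalg.solve(W, np.array([-2.0, -0.5]) - b)
        assert np.allclose(layer(x1), layer(x2))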

    • titzer 15 hours ago
      > but misses the mark

      It doesn't match the pictures in your head, but it nevertheless presents a mental representation that the author (and presumably some readers) finds useful.

      Instead of nitpicking, perhaps pointing to a better visualization (like maybe this video: https://www.youtube.com/watch?v=ChfEO8l-fas) could help others learn. Otherwise it's just frustrating to read comments like this.

      • fc417fc802 2 hours ago
        It's not nitpicking to point out major missing pieces. Comments like this might tend to come across as critical but they are incredibly valuable for any reader that doesn't know what he doesn't know.
  • artemonster 16 hours ago
    I get 3 fps in Chrome, most likely due to disabled HW acceleration.
  • atultw 8 hours ago
    Nice work
  • pks016 16 hours ago
    Great visualization!
  • javaskrrt 17 hours ago
    very cool stuff