239 points by Anon84 3 months ago | 8 comments
  • smokel 3 months ago
    If you're more into videos, be sure to check out Stefano Ermon's CS236 Deep Generative Models [1]. All lectures are available on YouTube [2].

    [1] https://deepgenerativemodels.github.io/

    [2] https://m.youtube.com/playlist?list=PLoROMvodv4rPOWA-omMM6ST...

    • storus 3 months ago
      I wish Stanford still offered CS236, but they haven't run it for two years now :(
  • dvrp 3 months ago
    hn question: how is this not a dupe of my days-old submission (https://news.ycombinator.com/item?id=45743810)?
    • borski 3 months ago
      It is, but dupes are allowed in some cases:

      “Are reposts ok?

      If a story has not had significant attention in the last year or so, a small number of reposts is ok. Otherwise we bury reposts as duplicates.”

      https://news.ycombinator.com/newsfaq.html

      Also, from the guidelines: “Please don't post on HN to ask or tell us something. Send it to hn@ycombinator.com.”

      • dlcarrier 3 months ago
        I presume that email address is for when you want to ask something of Hacker News, not to ask something about Hacker News.

        For example, they probably don't want posts like "Hey Hacker News, why don't you call for the revival of emacs and the elimination of all vi users?" and would rather you email them so they can ignore it. But they also don't want emails asking "How do I italicize text in a Hacker News comment? Seriously, I can't remember, and I would have done so earlier in this comment if I could" and would rather you ask the community, who can answer without bothering anyone working at Y Combinator.

        • fragmede 3 months ago
          Are you saying this from experience, or are you projecting? In my experience (though not with asking how to italicize text using * characters), dang and tomhow are happy to answer all sorts of questions. Sometimes they do get bogged down by the reality of running a site of this size manually, as it were, but I can't remember a question that didn't eventually get answered. I'll even tell them I vouched for some bunch of dead comments and ask whether that was the right thing to do, and one of them will write back saying mostly yes, but just FYI comment xyz was more flamebait than idea, and thank you for asking and for working on calibrating your vouch-o-meter.
      • stathibus 3 months ago
        in other words - "it is lol, also go pound sand"
        • bondarchuk 3 months ago
          What's the problem? Someone submitted it for people to read but it didn't catch on; now it's resubmitted and people can read it after all. Everyone's happy. Don't be so attached to imaginary internet points.
        • borski 3 months ago
          That's not what I said, but okay.
  • JustFinishedBSG 3 months ago
    CTRL-F: "Fokker-Planck"

    > 97 matches

    Ok I'll read it :)

    • joaquincabezas 3 months ago
      why am I only getting 26 matches? where's the threshold then? :D
      • tim333 3 months ago
        It's all about the dashes: Fokker-Planck with a hyphen vs Fokker–Planck with an en dash.
        • dlcarrier 3 months ago
          PDF files often break up sentences in ways that the find utility can't follow, so even if they all have the same dash, it might not find them all. At least those names are uncommon enough that you could search for just one of them.
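          If you want a count that's robust to dash flavor and line breaks, a quick sketch on the extracted text might look like this (Python; the function name is made up):

            import re

            def count_fokker_planck(text):
                # Fold the Unicode hyphen/dash family (en dash, em dash,
                # minus sign, ...) down to a plain ASCII hyphen
                text = re.sub(r"[\u2010-\u2015\u2212]", "-", text)
                # Tolerate a PDF line break between the two names
                return len(re.findall(r"Fokker-\s*Planck", text))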
        • joaquincabezas 3 months ago
          AI is definitely related to dashes!!
  • gdmaher 3 months ago
    Cool (but long) text. I wanted an overview, so I used Claude to make a chapter-by-chapter summary; sharing it in case anyone else finds it useful:

    https://github.com/gmaher/diffusion_principles_summary

  • bondarchuk 3 months ago
    Is there something equivalent in scope and comprehensiveness for transformers?
  • scatedbymath 3 months ago
    I'm scared by the maths
    • BrokenCogs 3 months ago
      Are you sure you're not scated?
  • mlmonkey 3 months ago
    470 pages?!?!?!? FML! :-D
  • leptons 3 months ago
    Reading this reinforces that a lot of what makes up current "AI" is brute-forcing, not actually intelligent or thoughtful. Although I suppose our meat-minds could also be brute-forcing everything throughout our entire lives, and consciousness is like a chat prompt sitting on top of the machinery of the mind. But artificial intelligence will always be just as soulless and unfulfilling as artificial flavors.
    • dhampi 3 months ago
      Guessing you’re a physicist based on the name. You don’t think automatically doing RG flow in reverse has beauty to it?

      There’s a lot of “force” in statistics, but that force relies on pretty deep structures and choices.

    • Bromeo 3 months ago
      Are you familiar with the "Bitter Lesson" by recent Turing Award winner Rich Sutton? http://www.incompleteideas.net/IncIdeas/BitterLesson.html
    • tim333 3 months ago
      Always is a long time. It may get better.
    • theptip 3 months ago
      Intelligence is the manifold that these brute-force algorithms learn.

      Of course we don’t brute-force this in our lifetime. Evolution encoded the coarse structure of the manifold over billions of years. And then encoded a hyper-compressed meta-learning algorithm into primates across millions of years.

      • uecker 3 months ago
        Learning a manifold is not intelligence, as it lacks the reasoning part.
        • esafak 3 months ago
          Learning the manifold is understanding. Reasoning, which takes place on the manifold, is applying that understanding.
          • uecker 3 months ago
            I am not sure what definition of "understanding" you are applying here.
            • esafak 3 months ago
              I mean understanding physics and the universe of natural possibilities: what can happen. Then comes why.
              • uecker 3 months ago
                Fitting a manifold to a bunch of samples does not allow you to understand what can happen in the universe. For example, if you train a regular diffusion model on correct sudokus, it will produce sudokus with errors because it does not understand the rules.
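                To make "errors" concrete: a hard-constraint check along these lines (an illustrative Python sketch, not anything from the book) will reject many of the sampled grids, even though every training example passes it:

                  def valid_sudoku(grid):
                      # grid: 9x9 list of ints in 1..9; valid iff every row,
                      # column, and 3x3 box is a permutation of 1..9
                      def ok(cells):
                          return sorted(cells) == list(range(1, 10))
                      rows = grid
                      cols = [[grid[r][c] for r in range(9)] for c in range(9)]
                      boxes = [[grid[3*br + r][3*bc + c]
                                for r in range(3) for c in range(3)]
                               for br in range(3) for bc in range(3)]
                      return all(ok(g) for g in rows + cols + boxes)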
                • esafak 3 months ago
                  You raise a good point for the diffusion case, which trains only on positive examples, but generally speaking negative examples will warp the manifold appropriately.
                  • uecker 3 months ago
                    Sure, but show a human a few correct examples and they will quickly pick up the rules. And that is understanding.