15 points by ArchitectAI 3 months ago | 7 comments
  • codemusings 3 months ago
    Right. Because compute power and/or a physics-based model is the limiting factor for accurately predicting when a seismic event happens. Training on historic data is hardly the problem that needs solving.

    It's the leading indicators that are actually measurable that are missing. You know, the ones that allow for evacuations and other protective measures.

  • promptfluid 3 months ago
    This is wild — optimizing I/O and memory flow instead of brute-forcing with clusters is exactly the kind of rethink AI infrastructure needs. You basically inverted the whole scaling narrative. Curious if the zero disk reads trick could generalize to other physics-heavy domains (fluid sims, EM propagation, etc.) or if it depends on the dataset’s uniformity. Either way, killer proof that smarter beats bigger.
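    (For concreteness, a minimal sketch of what a "zero disk reads" training loop can look like, assuming the waveform archive can be memory-mapped and kept resident in RAM; the file name, shapes, and batch size below are illustrative, not from the post.)

      import numpy as np

      # Memory-map the training archive once; after the first pass the pages
      # sit in the OS page cache, so later epochs touch RAM rather than disk.
      data = np.load("waveforms.npy", mmap_mode="r")  # shape: (n_samples, n_features)

      batch_size = 256
      for epoch in range(10):
          for start in range(0, data.shape[0], batch_size):
              batch = np.asarray(data[start:start + batch_size])  # copy slice into RAM
              # ... forward/backward pass on `batch` would go here ...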
  • Grosvenor 3 months ago
    This is interesting. Can you share the model/github?
  • qmarchi 3 months ago
    Given the swath of sensors that Japan has, and the long history of a lot of them, I do wonder what the result of training on their datasets would be.
  • Woberto 3 months ago
    So when/where is the next big one coming?
  • NunoSempere 3 months ago
    How do I get access to this?
  • highd 3 months ago
    How are you doing your train/test split?
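    (The usual concern behind this question is temporal leakage: a random split lets the model train on events that occur after, or alongside, events in the test set. A minimal sketch of a chronological split, with an illustrative catalog file and column name that are not from the post:)

      import pandas as pd

      # Sort the event catalog by origin time and hold out the most recent
      # 20% of events, so nothing after the cutoff leaks into training.
      events = pd.read_csv("catalog.csv", parse_dates=["origin_time"])
      events = events.sort_values("origin_time").reset_index(drop=True)

      split = int(len(events) * 0.8)
      train, test = events.iloc[:split], events.iloc[split:]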