1 point by mooreds 2 hours ago | 1 comment
  • cong-or 2 hours ago
    The 2.5 days to scan 100 million cutouts is impressive, but what's more interesting is the approach: rather than training a model to recognize specific categories of objects, AnomalyMatch flags whatever deviates from the patterns it has learned. That's a fundamentally different problem from classification.
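
    As a loose illustration of that framing (not AnomalyMatch's actual pipeline, which is semi-supervised with a human in the loop), here's what anomaly scoring looks like with scikit-learn's IsolationForest on made-up feature vectors:

        import numpy as np
        from sklearn.ensemble import IsolationForest

        rng = np.random.default_rng(0)
        # Stand-ins for per-cutout embeddings: 100k "typical" sources
        # plus 50 injected oddballs sitting far from the bulk of the data.
        typical = rng.normal(size=(100_000, 64))
        oddballs = rng.normal(loc=6.0, size=(50, 64))
        features = np.vstack([typical, oddballs])

        # No class labels needed: the forest learns what "typical" looks
        # like and scores each sample by how easily it can be isolated.
        detector = IsolationForest(random_state=0).fit(features)
        scores = detector.score_samples(features)  # lower = more anomalous

        # Surface the most anomalous cutouts for human review,
        # rather than asserting what each one actually is.
        review_queue = np.argsort(scores)[:100]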

    This "anomaly detection" framing seems underexplored in astronomy compared to other fields. In fraud detection or infrastructure monitoring, flagging outliers is standard practice. The fact that this is the first systematic anomaly search of the Hubble archive suggests there's probably low-hanging fruit in other telescope datasets too.

    The "several dozen objects that defied classification altogether" is the most interesting part. Those are either noise, errors in the pipeline, or genuinely new phenomena. Would be curious what the false positive rate looked like during manual review.