A machine-learning pipeline for real-time detection of gravitational waves from compact binary coalescences

Astrophysics

Authors

Ethan Marx, William Benoit, Alec Gunny, Rafia Omer, Deep Chatterjee, Ricco C. Venterea, Lauren Wills, Muhammed Saleem, Eric Moreno, Ryan Raikman, Ekaterina Govorkova, Dylan Rankin, Michael W. Coughlin, Philip Harris, Erik Katsavounidis

Abstract

The promise of multi-messenger astronomy relies on the rapid detection of gravitational waves at very low latencies ($\mathcal{O}$(1\,s)) in order to maximize the amount of time available for follow-up observations. In recent years, neural-networks have demonstrated robust non-linear modeling capabilities and millisecond-scale inference at a comparatively small computational footprint, making them an attractive family of algorithms in this context. However, integration of these algorithms into the gravitational-wave astrophysics research ecosystem has proven non-trivial. Here, we present the first fully machine learning-based pipeline for the detection of gravitational waves from compact binary coalescences (CBCs) running in low-latency. We demonstrate this pipeline to have a fraction of the latency of traditional matched filtering search pipelines while achieving state-of-the-art sensitivity to higher-mass stellar binary black holes.

Concepts

gravitational waves, convolutional networks, signal detection, low-latency GW pipeline, inference-as-a-service, data augmentation, scalability, classification, model validation, scientific workflows, anomaly detection

The Big Picture

In gravitational-wave astronomy, seconds matter. When two black holes spiral into each other somewhere in the cosmos, the collision sends ripples through spacetime. Ground-based detectors like LIGO and Virgo can pick up these gravitational waves, but the real scientific payoff comes when optical and radio telescopes can point at the source while the aftermath is still unfolding.

Some of the most violent mergers involve neutron stars, the ultra-dense remnants of massive stars that have gone supernova. A neutron star collision produces gravitational waves, gamma-ray bursts, visible light, and X-rays all at once. To catch the full picture, astronomers need alerts within about a second of detection. Traditional search algorithms are thorough but too slow for that.

A team at MIT, LIGO, the University of Minnesota, and several other institutions has built the first fully machine-learning-based pipeline, called Aframe, that detects gravitational waves from colliding black holes in real time. It matches the sensitivity of the best existing algorithms on higher-mass black hole mergers while running at a fraction of their latency.

Key Insight: Aframe is the first end-to-end ML pipeline for gravitational wave detection that operates at low latency. It achieves millisecond-scale inference and matches state-of-the-art sensitivity for higher-mass binary black hole signals.

How It Works

Aframe watches two data streams at once, one from LIGO’s detector in Hanford, Washington and one from Livingston, Louisiana, and answers a single question: is there a gravitational wave signal hiding in this noise?

The architecture adapts ResNet34 (“Residual Network with 34 layers”), a deep learning model originally built for image recognition, to handle one-dimensional time-series data. Two modifications matter most:

  • 1D convolutions replace standard 2D convolutions, processing data that unfolds over time rather than across a two-dimensional image.
  • Group Normalization replaces Batch Normalization. Standard batch normalization tracks statistics across the full training batch, but training data is loaded with simulated signals while live inference sees mostly noise. Group Normalization computes statistics within smaller channel groups, keeping the network’s behavior consistent between training and deployment.
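To make the second bullet concrete, here is a minimal NumPy sketch of the statistic Group Normalization computes: mean and variance within small groups of channels of a single sample, with no dependence on the rest of the batch. The function name and shapes are our own illustration; the pipeline itself would use a deep-learning framework's built-in layer.

```python
import numpy as np

def group_norm(x, num_groups, eps=1e-5):
    """Normalize a single (channels, time) sample within channel groups.

    Because the statistics come from this sample alone, the network behaves
    identically whether it sees a training batch or a lone inference window.
    """
    c, t = x.shape
    g = x.reshape(num_groups, c // num_groups, t)
    mean = g.mean(axis=(1, 2), keepdims=True)
    var = g.var(axis=(1, 2), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(c, t)
```

Batch Normalization, by contrast, would fold statistics from every other example in the batch into this computation, which is exactly what causes the train/deploy mismatch described above.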

Figure 1

The pipeline runs in a sliding-window fashion at 4 Hz, evaluating a new window every 0.25 seconds. Raw output scores get smoothed with a one-second “top hat” filter (a moving average), converting noisy frame-by-frame scores into a stable detection statistic. That 4 Hz rate is frequent enough to catch transient signals without blowing up the compute budget.
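The smoothing step is just a boxcar (moving) average. A minimal sketch, assuming scores arrive at 4 Hz so a one-second top hat spans four samples; the helper name is ours, not the pipeline's:

```python
import numpy as np

def top_hat_smooth(scores, window=4):
    """Average raw network scores over a sliding one-second window.

    At a 4 Hz evaluation rate, window=4 corresponds to one second of output,
    turning spiky frame-by-frame scores into a stable detection statistic.
    """
    kernel = np.ones(window) / window
    return np.convolve(scores, kernel, mode="valid")
```

A single-frame spike gets diluted by a factor of the window length, while a signal that persists across the window keeps its full score, which is what suppresses one-off noise fluctuations.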

For throughput, Aframe uses an inference-as-a-service setup: the neural network runs as a dedicated, always-on service rather than being invoked on demand. NVIDIA’s Triton Inference Server and TensorRT handle the inference, while separate client processes manage data loading and preprocessing. Each component scales independently. A technique called snapshotting caches overlapping input data server-side, cutting out redundant transfers for the unchanged portions of each window.
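The idea behind snapshotting can be sketched in a few lines: the server keeps the current input window cached, and each client request carries only the newly arrived samples. This toy class (names and sizes are our own; the real pipeline implements this server-side with Triton) shifts the cached window and appends the fresh data:

```python
import numpy as np

class Snapshotter:
    """Server-side cache of the current input window (illustrative sketch).

    Clients stream only the samples that are new since the last request;
    the overlapping majority of each window never crosses the network twice.
    """

    def __init__(self, window_size):
        self.snapshot = np.zeros(window_size)

    def update(self, new_samples):
        n = len(new_samples)
        # Shift the old window left by n samples and write the new data
        # into the freed-up tail, yielding a full window ready for inference.
        self.snapshot = np.roll(self.snapshot, -n)
        self.snapshot[-n:] = new_samples
        return self.snapshot
```

With windows evaluated every 0.25 seconds, consecutive windows share most of their samples, so caching the overlap is a large bandwidth saving for essentially no extra compute.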

Training relies on heavy data augmentation. Simulated binary black hole waveforms are injected at varying signal strengths into real LIGO noise, forcing the network to deal with the non-stationary, glitchy character of actual detector data rather than idealized Gaussian noise. The loss function is binary cross-entropy, optimized with Adam and a one-cycle learning rate schedule with cosine annealing.
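A toy version of the injection augmentation, with hypothetical array names and a simple amplitude rescaling standing in for the paper's actual signal-strength handling: half of each batch is detector noise alone (label 0), and half is noise with a simulated waveform added at a randomized amplitude (label 1).

```python
import numpy as np

rng = np.random.default_rng(0)

def make_batch(noise, waveforms, batch_size=32):
    """Build a toy training batch of noise-only and injected examples.

    noise:     (n_segments, n_samples) array of real detector noise
    waveforms: (n_waveforms, n_samples) array of simulated BBH signals
    """
    x, y = [], []
    for i in range(batch_size):
        seg = noise[rng.integers(len(noise))].copy()
        if i % 2 == 0:
            # Background class: real detector noise with no injection.
            x.append(seg)
            y.append(0.0)
        else:
            # Signal class: inject a simulated waveform at a random
            # amplitude so the network sees varying signal strengths.
            wf = waveforms[rng.integers(len(waveforms))]
            amp = rng.uniform(0.5, 5.0)
            x.append(seg + amp * wf)
            y.append(1.0)
    return np.stack(x), np.array(y)
```

The binary labels feed directly into the binary cross-entropy loss mentioned above; because the noise segments are real detector data, the network learns to discriminate signals from glitches and non-stationarity, not just from idealized Gaussian background.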

Figure 2

Why It Matters

The immediate payoff is speed. Traditional matched filtering pipelines compare incoming data against a vast library of precomputed signal templates, like checking a fingerprint against every entry in a database. They’re exquisitely sensitive, but the computational overhead is real. Aframe delivers comparable sensitivity on higher-mass binary black holes while cutting time-to-alert to a fraction of what matched filtering requires.

In multi-messenger astronomy, that speed difference is everything. Gravitational waves combined with light and other signals give a far more complete picture of cosmic events, but only if follow-up happens fast enough. Catching a kilonova (the luminous explosion following a neutron star merger) at peak brightness rather than already fading can make or break the science from a single event.

The deeper significance here is infrastructural. The field has spent years watching ML proposals stall between the paper and the control room. By building a fully deployable pipeline and benchmarking it against the GWTC-3 catalog (the published record of confirmed detections from LIGO and Virgo’s third observing run), the team has produced something the community can actually run. Not just another proof-of-concept.

Binary neutron star mergers are the obvious next target. They spend far longer in the detector’s sensitive band and pose harder detection challenges. The inference architecture, normalization choices, and augmentation strategies developed here all transfer directly.

Bottom Line: Aframe shows that a fully ML-based gravitational wave search pipeline can operate in real time with competitive sensitivity and a fraction of traditional latency. Faster multi-messenger alerts are no longer hypothetical.


IAIFI Research Highlights

Interdisciplinary Research Achievement
This work connects deep learning infrastructure with gravitational-wave astrophysics, putting a neural network pipeline into production inside the LIGO computing environment and benchmarking it against established physics search algorithms.
Impact on Artificial Intelligence
The team's use of Group Normalization to resolve the training-inference distribution mismatch, combined with an inference-as-a-service architecture for scalable deployment, offers concrete lessons for any real-time ML system in scientific computing.
Impact on Fundamental Interactions
Aframe achieves state-of-the-art sensitivity to higher-mass binary black hole mergers at millisecond inference latency, enabling the sub-second gravitational-wave alerts that multi-messenger astronomy needs to maximize follow-up science.
Outlook and References
Future work targets binary neutron star detection and adaptation to next-generation detectors; the full pipeline and methods are described in [arXiv:2403.18661](https://arxiv.org/abs/2403.18661) (Marx et al., 2024).
