Infinite Variance in Monte Carlo Sampling of Lattice Field Theories
Authors
Cagin Yunus, William Detmold
Abstract
In Monte Carlo calculations of expectation values in lattice quantum field theories, the stochastic variance of the sampling procedure that is used defines the precision of the calculation for a fixed number of samples. If the variance of an estimator of a particular quantity is formally infinite, or in practice very large compared to the square of the mean, then that quantity can not be reliably estimated using the given sampling procedure. There are multiple scenarios in which this occurs, including in Lattice Quantum Chromodynamics, and a particularly simple example is given by the Gross-Neveu model where Monte Carlo calculations involve the introduction of auxiliary bosonic variables through a Hubbard-Stratonovich (HS) transformation. Here, it is shown that the variances of HS estimators for classes of operators involving fermion fields are divergent in this model and an even simpler zero-dimensional analogue. To correctly estimate these observables, two alternative sampling methods are proposed and numerically investigated.
Concepts
The Big Picture
Imagine measuring the average height of people in a room, but every so often a giant walks in, someone ten thousand feet tall, destroying your running average. You could take a million measurements and still never get a reliable answer. This is the statistical nightmare physicists face when calculating certain properties of the subatomic world using Monte Carlo sampling, a technique that throws random numbers at problems too complex to solve with pen and paper.
In quantum field theory, which describes particles and forces at their most fundamental level, physicists can’t just solve equations on a whiteboard. They chop spacetime into a discrete grid called a lattice and use Monte Carlo sampling to estimate physical quantities numerically.
It works well in many cases. But sometimes the structure of the theory causes the variance of the estimator (the spread in your answers) to be formally infinite. When that happens, no amount of computing power gives a reliable result: you can run your simulation indefinitely without ever obtaining a trustworthy error bar.
Cagin Yunus and William Detmold at MIT’s Center for Theoretical Physics have pinpointed exactly why this infinite-variance problem arises in specific theories, particularly the Gross-Neveu model and Lattice QCD. They also propose two concrete fixes.
Key Insight: In certain lattice field theories, standard Monte Carlo estimators for fermion-field operators have formally infinite variance. Not because of numerical errors, but because of the mathematical structure of the sampling procedure itself. With any finite number of samples, the usual statistical error estimates are meaningless.
How It Works
The problem starts with fermions: particles like quarks and electrons governed by the Pauli exclusion principle, which forbids two identical fermions from occupying the same quantum state. They’re notoriously hard to simulate directly. The standard workaround is the Hubbard-Stratonovich (HS) transformation. You introduce an auxiliary bosonic field (a fictional particle type without the Pauli restriction) to replace complicated four-fermion interactions with simpler quadratic ones. The transformation is mathematically exact, but it changes what you’re sampling over.
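To see what the HS transformation does in miniature, here is a quick numerical check of the underlying scalar identity (a toy illustration with arbitrary numbers a and x, not the field-theory version): a term quadratic in x is traded for a Gaussian integral over an auxiliary variable that couples to x only linearly.

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of the scalar HS identity (a and x are arbitrary
# values chosen for illustration):
#   exp(a*x**2/2) = (1/sqrt(2*pi)) * Integral ds exp(-s**2/2 + sqrt(a)*s*x)
a, x = 0.7, 1.3
lhs = np.exp(a * x**2 / 2)
rhs, _ = quad(lambda s: np.exp(-s**2 / 2 + np.sqrt(a) * s * x), -np.inf, np.inf)
rhs /= np.sqrt(2 * np.pi)
print(lhs, rhs)  # the two agree to quadrature precision: the quadratic
                 # term has become a linear coupling to the auxiliary s
```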
The Gross-Neveu model is a 2D quantum field theory that shares key structural features with QCD, the theory of the strong nuclear force. Physicists use it as a testbed. In this model, the HS transformation introduces a continuous auxiliary field, and Monte Carlo then samples configurations of that field.
Here’s the trouble. When the Dirac operator (the matrix governing fermion behavior) has eigenvalues close to zero, the estimator for fermion propagators blows up. Auxiliary-field configurations where this happens are called “exceptional configurations.” Yunus and Detmold prove this rigorously: for operators built from fermion fields, the HS estimators have divergent second moments. The variance isn’t merely large; it’s formally infinite. The consequences:
- The Central Limit Theorem no longer applies, so the error of a sample average no longer shrinks like one over the square root of the sample size
- Sample variance keeps growing with each new sample instead of stabilizing
- No finite number of Monte Carlo samples yields a statistically reliable result

To isolate the problem, the authors first study a zero-dimensional toy model that strips away spacetime entirely and keeps only the algebraic structure. Because the toy model is analytically tractable, they can verify their results against exact calculations before testing two proposed solutions on the full 2D theory.
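Here is a sketch of how the divergence looks in practice, using a hypothetical zero-dimensional caricature rather than the paper’s exact model: the sampling weight w(φ) ∝ φ exp(−φ²/2) on φ > 0 stands in for a fermion determinant, and the estimator 1/φ stands in for a propagator. Its mean is finite, √(π/2) ≈ 1.2533, but its second moment ∫ dφ exp(−φ²/2)/φ diverges logarithmically at φ = 0.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy infinite-variance estimator: phi is Rayleigh distributed,
# w(phi) ~ phi * exp(-phi**2/2), and we estimate the mean of 1/phi.
# Exact mean: sqrt(pi/2) ~ 1.2533.  Second moment: divergent.
for n in [10**3, 10**5, 10**7]:
    phi = rng.rayleigh(size=n)
    est = 1.0 / phi
    print(f"n={n:>8}  mean={est.mean():.4f}  sample variance={est.var():.2f}")
# The mean hovers near 1.2533, but the sample variance keeps growing
# (roughly like log n) instead of stabilizing.
```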
Solution 1: Discrete Hubbard-Stratonovich. Replace the continuous auxiliary field with one taking only a finite set of values. Variance becomes finite by construction, since no single sample can produce an infinite contribution. The tradeoff: the number of configurations grows exponentially with system size, limiting scalability. For small systems, though, it works.
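The paper builds a discrete HS transformation tailored to the Gross-Neveu model; a well-known cousin of the same idea is Hirsch’s transformation for the Hubbard interaction, which replaces the continuous Gaussian integral with a sum over a single Ising variable s = ±1. A numerical check on the four occupation-number eigenstates, shown here only to illustrate the discrete-HS idea:

```python
import numpy as np
from itertools import product

# Hirsch's discrete HS identity for the Hubbard interaction, checked on
# occupation-number eigenvalues nu, nd in {0, 1}:
#   exp(-dtau*U*(nu*nd - (nu + nd)/2)) = (1/2)*sum_{s=+-1} exp(lam*s*(nu - nd))
# with cosh(lam) = exp(dtau*U/2).
dtau, U = 0.1, 4.0
lam = np.arccosh(np.exp(dtau * U / 2))
for nu, nd in product((0, 1), repeat=2):
    lhs = np.exp(-dtau * U * (nu * nd - (nu + nd) / 2))
    rhs = 0.5 * sum(np.exp(lam * s * (nu - nd)) for s in (+1, -1))
    print((nu, nd), round(lhs, 12), round(rhs, 12))  # identical in all four states
```

Because the auxiliary variable takes only two values per site, every estimator is a finite sum of finite terms. That is why the variance is finite by construction, and also why the configuration space grows exponentially with the number of sites.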

Solution 2: Sequential Reweighting. For physical quantities that are always zero or positive (this covers many observables in these theories), you can write the mean of a high-variance quantity as a product of means of lower-variance quantities. Instead of estimating one wildly fluctuating number directly, you decompose it into a chain. Estimate the first factor, use those results to estimate the second conditioned on the first, and so on. Each factor has finite variance, even when the direct estimate does not.
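A minimal sketch of the chain on the toy problem above (an illustration of the telescoping idea under those toy assumptions, not the paper’s implementation; the helper sample_power_gauss is ours, for illustration). With Z_k = ∫ dφ φ^(1−k/n) exp(−φ²/2), the target ⟨1/φ⟩ equals Z_n/Z_0, which factors into n ratios Z_k/Z_{k−1}; each ratio is the mean of the mild observable φ^(−1/n) under a reweighted density ∝ φ^(1−(k−1)/n) exp(−φ²/2), which in this toy can even be sampled exactly through a Gamma draw.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_power_gauss(alpha, size):
    # Draw phi with density ~ phi**alpha * exp(-phi**2/2) on (0, inf):
    # with u = phi**2/2, u is Gamma-distributed with shape (alpha + 1)/2.
    u = rng.gamma(shape=(alpha + 1) / 2, size=size)
    return np.sqrt(2 * u)

n, nsamp = 10, 10**5
estimate = 1.0
for k in range(1, n + 1):
    # Sample the (k-1)-th reweighted density and estimate one ratio,
    # Z_k / Z_{k-1} = < phi**(-1/n) > under that density.
    phi = sample_power_gauss(1 - (k - 1) / n, nsamp)
    estimate *= np.mean(phi ** (-1.0 / n))
print(estimate, np.sqrt(np.pi / 2))  # chained estimate vs exact answer
```

Near φ = 0 the integrand of each factor’s second moment behaves like φ^(1−(k+1)/n), which remains integrable for every k ≤ n, so every link in the chain has finite variance even though the direct estimator does not.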

In numerical tests, sequential reweighting correctly recovers exact answers where the standard HS estimator fails completely. The failure of the standard approach is particularly dangerous because the sample variance can appear well-behaved, masking the underlying divergence.
Why It Matters
This isn’t abstract. Lattice QCD, the computational framework for the strong nuclear force that binds protons and neutrons, hits the same kind of infinite-variance problem, driven by zero modes of the Dirac operator. Nuclear correlation functions built from large numbers of quark fields are needed to understand atomic nuclei from first principles, and they run straight into this wall. The more quarks involved, the worse it gets.
The root cause is structural. Whenever you use a continuous HS transformation to handle fermionic interactions, you risk introducing infinite-variance estimators. The same problem turns up in condensed matter physics (the Hubbard model, central to theories of high-temperature superconductivity) and in particle physics. Sequential reweighting applies well beyond the Gross-Neveu model and remains practical even where discrete HS transformations become computationally infeasible.
Open questions remain. The discrete HS approach needs a way to scale to larger volumes, perhaps through importance sampling within the discrete space. Both methods also need testing in full 4D lattice QCD, where the computational stakes are highest.
Bottom Line: Yunus and Detmold have identified a fundamental statistical failure mode in Monte Carlo simulations of lattice field theories and offered two workable remedies. Their sequential reweighting method makes reliable first-principles calculations of fermion observables possible in regimes where standard estimators break down entirely.
IAIFI Research Highlights
This work connects rigorous statistical theory with lattice quantum field theory, using tools from probability and stochastic analysis to diagnose and fix a fundamental computational problem in nuclear and particle physics.
The sequential reweighting framework provides a general strategy for taming infinite-variance estimators in Monte Carlo methods. The same breakdown can occur in machine learning whenever importance sampling or stochastic estimation is involved.
By characterizing and resolving infinite-variance failures in the Gross-Neveu model and outlining the path to Lattice QCD, this work improves prospects for reliable first-principles calculations of hadronic and nuclear observables.
Future work will extend these methods to larger spacetime volumes and full 4D lattice QCD; the paper is available at [arXiv:2205.01001](https://arxiv.org/abs/2205.01001).
Original Paper Details
Infinite Variance in Monte Carlo Sampling of Lattice Field Theories
arXiv:2205.01001
Cagin Yunus, William Detmold