Fast estimation of the look-elsewhere effect using Gaussian random fields

We discuss the use of Gaussian random fields to estimate the look-elsewhere effect correction. We show that Gaussian random fields can be used to model the null-hypothesis significance maps of a large set of statistical problems commonly encountered in physics, such as template matching and likelihood ratio tests. Some specific examples are searches for dark matter using pixel arrays, searches for astronomical transients, and searches for fast radio bursts. Gaussian random fields can be sampled efficiently in the frequency domain, and the excursion probability can be fitted with these samples to extend any estimation of the look-elsewhere effect to lower p-values. In addition, in cases where the Gaussian random field is stationary and the parameter space is Euclidean, the look-elsewhere effect correction can be computed analytically. We demonstrate these methods using two example template matching problems. Finally, we apply these methods to estimate the trial factor of a 4^3 accelerometer array for the detection of dark matter tracks in the Windchime project.
When a global significance of 3σ is required, the estimated trial factor for such an accelerometer array is 10^14 for a one-second search, and 10^22 for a one-year search.


I. INTRODUCTION
In hypothesis testing problems with composite hypotheses, the correct frequentist p-value might not be the same as the p-value one would compute for fixed values of the composite hypothesis parameters [1]. This is referred to as the look-elsewhere effect. The correct p-value given composite hypotheses is often termed the global p-value, whereas the p-value computed with fixed parameters is termed the local p-value. The look-elsewhere effect correction is often parameterized by a trial factor, which is the ratio between the local and global p-values [2]. A simple approach to finding the trial factor would be to run a large number of null-hypothesis Monte Carlo simulations, often using simplified or "toy" models that retain the relevant statistical properties. An example of this approach can be found in [3].
However, data analysis and inference in modern experiments can be extremely computationally intensive, requiring dedicated computational infrastructure. Even then, with complex and high-dimensional problems, the use of toy Monte Carlo simulations can be too computationally expensive. This makes the estimation of trial factors for the purposes of sensitivity projections for future experiments difficult. Thus, the use of Gaussian random fields to directly generate null-hypothesis significance maps (termed 'null significance maps') and estimate the trial factor can be very useful.
Gaussian random fields are random functions over a domain, where the values at every finite collection of points on the domain are described by a multivariate Gaussian distribution. In parameter estimation problems, the domain would be the parameter space as defined by the relevant parameters, such as a finite two-dimensional Euclidean space for a search for a transient in an image. Such fields can be viewed as a higher-dimensional generalization of Gaussian processes, commonly used in Gaussian process regression [4]. Gaussian random fields are used for the estimation of the look-elsewhere effect in neuroimaging [5], and for the modelling of the matter distribution in the universe [6]. Work exists regarding the use of Gaussian random fields for the estimation of the look-elsewhere effect in physics [7,8]. The use of Gaussian random fields to estimate look-elsewhere effect corrections relies on computation of the excursion probability, which is the probability for samples of a Gaussian random field to exceed a given significance level.
In this paper, we detail techniques to use Gaussian random fields for the estimation of look-elsewhere effect corrections, with a focus on problems with underlying Gaussian random variables. We expand upon the work presented in [7] by including a way to estimate the look-elsewhere effect correction analytically in Euclidean parameter spaces without requiring computations of the Euler characteristic from data or simulation. Our work also differs from [8], which focuses on how to compute a covariance matrix using specially constructed mock datasets, termed Asimov datasets [9], and then estimating the trial factor from the covariance matrix as a 1D Gaussian process. Our work instead uses higher-dimensional Gaussian fields directly, hence allowing for direct computation of the trial factor in arbitrarily high-dimensional parameter spaces if the spectral moments of the Gaussian random field can be directly computed or expressed. These methods are complementary, as our method does not require the computation of the entire covariance matrix at high resolution, which might be difficult for high-dimensional parameter spaces, whereas [8] does not assume stationarity in the Gaussian process, which is useful for more complex statistical problems. We also include methods to sample stationary and Euclidean Gaussian random fields in the frequency domain to reduce computational expense.
In section II, we discuss the classes of problems that can be modelled by Gaussian random fields, overview the spectral method for efficient sampling of Gaussian random fields, and introduce an analytic approximation for the excursion probability. We then demonstrate these methods using a 2-dimensional template-matching problem with a Gaussian kernel, and a 1-dimensional template-matching problem with a non-Gaussian kernel, in sections III and IV respectively. Then, in section V, we demonstrate how these methods can be applied to a likelihood ratio test, where a check for asymptotic behaviour is needed. Finally, in section VI, we use these methods to estimate the trial factor when searching for dark matter tracks using an array of accelerometers.

II. STATISTICAL UNDERPINNINGS OF METHOD
This section is split into three parts. First, we explore why Gaussian random fields correctly model the significance maps of a large set of problems in section II A. Second, we discuss how to sample Gaussian random fields efficiently in section II B. Finally, in section II C we describe a fitting procedure that can be used to extend this method to small p-values using an analytic approximation of the excursion probability of a Gaussian random field, and a way to directly estimate the excursion probability using the Euler characteristic when certain conditions, elaborated upon below, are met.

A. Why Gaussian random fields can model a large set of problems
Let us consider a statistical problem where one searches for a fluctuation over a finely-spaced set of Gaussian random variables distributed in a parameter space. One example of this could be a template matching search for a transient over a regular grid of CCD pixels with Gaussian noise, as shown in Fig. 1. Such a setup might be encountered in searches for astronomical transients [10,11].
While Fig. 1 depicts a 2D grid with a simple template that is symmetric and does not vary with position, this is for ease of illustration, and these assumptions are not made in the following argument except where noted. We can see that at each possible template position, the recovered signal strength is a weighted sum of Gaussian random variables, where the weights correspond to the template amplitude at a given random variable. The significance map formed using such a template thus corresponds to the formal definition of a random field [12] where each point is Gaussian-distributed, and every collection of points represents a multivariate Gaussian distribution.
In addition, given independent underlying random variables, the covariance between two points can be computed from the template directly. Consider two points in the parameter space, (x_0, x_1). At each point, the recovered amplitude in the case of a signal-free dataset is given by:

$$Y_i = \sum_j \alpha_{i,j} X_j \quad (1)$$

where X_j are the underlying finely spaced Gaussian random variables, such as CCD pixels, and α_{i,j} refers to the template value at each underlying random variable for the template at x_i. The expected value of X_j, E(X_j) = 0, is taken without loss of generality, as the mean value can be subtracted. As such, the covariance would be given by:

$$K(x_0, x_1) = E(Y_0 Y_1) = \sum_j \sum_k \alpha_{0,j} \, \alpha_{1,k} \, E(X_j X_k) \quad (2)$$

This can be further simplified if the underlying Gaussian random variables are independent, as then E(X_j X_k) = 0 for j ≠ k. The covariance can then be computed:

$$K(x_0, x_1) = \sum_j \alpha_{0,j} \, \alpha_{1,j} \, E(X_j^2) \quad (3)$$

We can thus see that in the case of template matching with underlying Gaussian random variables, the significance map is modelled by a Gaussian random field, and the covariance function can be directly calculated from the template together with the measured properties of the underlying random variables. In addition, Gaussian random fields can also be used to model significance maps with underlying Gaussian random variables generated from likelihood ratio tests. This is because, for Gaussian data, the log-likelihood is, up to a constant, a sum of squared residuals normalized by the standard deviations:

$$\ln L(\theta) = -\frac{1}{2} \sum_j \frac{\left(X_j - \mu_j(\theta)\right)^2}{\sigma_j^2} + \mathrm{const} \quad (4)$$

so the log likelihood-ratio is a difference of two such sums.
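As an illustrative check of Equation (3), the following sketch matched-filters white noise with a boxcar template (the template shape, pixel count, and sample size here are arbitrary choices for illustration, not taken from the text) and compares the empirical covariance at two overlapping template positions with the prediction:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix = 200          # number of independent, unit-variance Gaussian "pixels"

def template_at(pos, width=5):
    # template weights alpha_{i,j}: a normalized boxcar centred at `pos`
    a = np.zeros(n_pix)
    a[pos:pos + width] = 1.0
    return a / np.linalg.norm(a)

a0, a1 = template_at(100), template_at(102)

# Equation (3): K(x0, x1) = sum_j alpha_{0,j} alpha_{1,j} E(X_j^2), with E(X_j^2) = 1
predicted = np.sum(a0 * a1)

# Empirical covariance from many signal-free realizations
X = rng.standard_normal((50_000, n_pix))
y0, y1 = X @ a0, X @ a1
empirical = np.mean(y0 * y1)

print(predicted, empirical)  # both close to 0.6
```

Because each template has unit norm, the significance map has unit variance and the covariance equals the correlation, as assumed throughout the text.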
Some other problems that do not involve underlying Gaussian random variables can still be represented approximately by Gaussian random fields in some circumstances. For example, if a likelihood ratio test is used, the distribution of the test statistic asymptotically approaches a χ² distribution due to Wilks' theorem [13] when the relevant conditions, detailed in [13], are satisfied. In such a situation, a signal-free significance map over a parameter space would represent a χ² random field [7]. This differs from a Gaussian random field. However, because a χ² random variable with one degree of freedom is defined as the square of a Gaussian random variable, a χ² random field can be sampled by sampling a Gaussian random field with the correct covariance, then squaring it. The excursion probability of such a random field is thus double the one-sided excursion probability described in section II C. The fact that a χ² random field can be represented using a Gaussian random field was noted by Ananiev and Read in [8].
Due to the Lindeberg and Lyapunov central limit theorems [14,15], this even holds for non-Gaussian underlying random variables such as Poisson noise, as long as Lindeberg's condition or Lyapunov's condition is satisfied [15]. More generally, our results are applicable whenever the underlying random variables are independent and sufficiently finely distributed to give a large sample size, as long as the criteria for the central limit theorems are met. Template matching problems can then be approximated by Gaussian random fields even if the underlying random variables are non-Gaussian. While a detailed discussion of these conditions is beyond the scope of this paper, verifying the Gaussianity of a given significance map numerically, or constraining it using the Berry-Esseen theorem [16], might be sufficient for practical purposes. Taken together, the methods we discuss in this paper are applicable to a large variety of statistical problems commonly encountered in experimental physics.
It should be noted that such methods based on Gaussian random fields are limited in usefulness in cases where the noise sources are not Gaussian in nature and the central limit theorem does not apply. Moreover, these methods do not account for other sources of false positives, such as experimental backgrounds.

B. Efficient spectral sampling of stationary Gaussian random fields

While for sufficiently complex problems Gaussian random fields might be easier to sample than toy Monte Carlo-based methods, even this might still be too computationally intensive. For example, consider a template-matching search in a flat two-dimensional parameter space with 10^2 bins per dimension: there would be 10^4 points that need to be correlated with each other, resulting in a covariance matrix with 10^8 entries. With higher-dimensional problems, populating such a covariance matrix (which is needed for naive sampling of Gaussian random fields) quickly becomes intractable.
A review of efficient methods for the sampling of Gaussian random fields can be found in [17]. In sections III and VI, we use the spectral method as described in [17] to efficiently sample from Gaussian random fields. This method requires the field to be weakly stationary, such that the covariance function can be described by a function of the displacement between two points. In that case,

$$K(x, x + s) = K_s(s) \quad (5)$$

where K_s(s) is the autocorrelation function for a random field of zero mean. For a field with unit variance, which is typical for a significance map scaled to represent the signal-to-noise ratio, K_s(s) can be scaled to have a maximum value of one. This case is considered without loss of generality, as one can scale any stationary Gaussian random field to have a variance of unity. The Fourier transform of the autocorrelation function of a weakly stationary process is the power spectral density (PSD), due to the Wiener-Khinchin theorem [18,19].
The spectral method for sampling Gaussian random fields makes use of this Fourier transform pair. Due to the Wiener-Khinchin theorem, instead of sampling a stationary Gaussian random field with a given autocorrelation function directly, it can be sampled in the frequency domain with the correct PSD, as given by the Fourier transform of the autocorrelation function. This can be done by multiplying the Fourier transform of white noise by the square root of the PSD, then performing an inverse Fourier transform to return the sample to the relevant parameter space. We can thus sample stationary Gaussian random fields in high-dimensional parameter spaces without having to populate extremely large covariance matrices. For the remainder of the text, we will refer to this method for sampling stationary Gaussian random fields as the spectral method.
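A minimal one-dimensional sketch of the spectral method, assuming a Gaussian autocorrelation function on a periodic grid (the grid size and correlation length below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, ell = 4096, 10.0                       # grid size and correlation length (illustrative)

# Target autocorrelation K_s(s) evaluated on the periodic grid of lags
s = np.fft.fftfreq(n, d=1.0) * n          # lags 0, 1, ..., -1 in grid units
K = np.exp(-s**2 / (2 * ell**2))

# Wiener-Khinchin: the PSD is the Fourier transform of the autocorrelation
psd = np.abs(np.fft.fft(K))               # abs() guards against tiny negative round-off

# Colour white noise: multiply its FFT by sqrt(PSD), then transform back
white = rng.standard_normal(n)
field = np.fft.ifft(np.fft.fft(white) * np.sqrt(psd)).real
field /= field.std()                      # normalize to unit variance

# The realized autocorrelation should follow the target K_s(s)
emp = [float(np.mean(field * np.roll(field, k))) for k in (0, 5, 10)]
print(emp)
```

The same construction carries over to higher dimensions by replacing the 1D FFT with an N-dimensional FFT, which is what makes the method attractive for large parameter spaces.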

C. Analytic approximation of excursion probability
Even when sampling Gaussian random fields with the spectral method, in high-dimensional parameter spaces, sampling can still be prohibitively expensive due to the curse of dimensionality. In this scenario, the look-elsewhere effect correction can be extended to lower p-values than would otherwise be computationally feasible using an analytic approximation of the excursion probability. For a random field {f(t) : t ∈ M}, the excursion probability over a confidence level u is defined as:

$$p_{\mathrm{excur}}(u) = \mathbb{P}\left\{ \sup_{t \in M} f(t) \geq u \right\} \quad (6)$$

It can be seen that the excursion probability represents the probability that any point on a random field exceeds u. The excursion probability for a smooth Gaussian random field on a locally convex space is given by [12]:

$$\left| p_{\mathrm{excur}}(u) - \Phi(u) - e^{-u^2/2\sigma^2} \sum_{i=0}^{N-1} C_i u^i \right| < O\!\left(e^{-\alpha u^2/2\sigma^2}\right) \quad (7)$$

where Φ(u) is the Gaussian tail distribution, N = dim(M), σ is the standard deviation of the Gaussian random field, and α > 1 is a constant describing the exponential suppression of the error term. We can see that Equation (7) contains a number of constants (C_i).
While these constants can be computed directly in some cases, this is sometimes non-trivial [12]. In these cases, Equation (7) can be used to fit the excursion probability directly using a set of samples of signal-free significance maps. This, combined with the spectral method of sampling Gaussian random fields shown in section II B, greatly reduces the computational cost of computing the look-elsewhere effect correction. It should be noted that Equation (7) is based on approximating the excursion probability using the mean Euler characteristic of an excursion set [12]. Thus, for particularly computationally challenging problems, methods from [7] for estimating the Euler characteristic can further reduce the number of samples needed to derive the look-elsewhere effect correction.
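The fitting procedure can be sketched as follows. Once Φ(u) is subtracted, Equation (7) is linear in the constants C_i, so ordinary least squares suffices; the toy field generator and all numerical choices below are stand-ins for illustration, not the examples used later in the text:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(2)

def gauss_sf(u):
    # Gaussian tail distribution Phi(u)
    return 0.5 * erfc(u / sqrt(2.0))

def sample_field(n=128, ell=4.0):
    # stand-in 2D stationary field: white noise smoothed with a Gaussian filter
    k = np.fft.fftfreq(n)
    k2 = k[:, None]**2 + k[None, :]**2
    f = np.fft.ifft2(np.fft.fft2(rng.standard_normal((n, n)))
                     * np.exp(-2 * (np.pi * ell)**2 * k2)).real
    return f / f.std()

# Empirical excursion probability from the maxima of 1000 signal-free samples
maxima = np.array([sample_field().max() for _ in range(1000)])
u_grid = np.linspace(2.0, maxima.max() - 0.2, 15)
p_emp = np.array([(maxima > u).mean() for u in u_grid])

# Fit Equation (7) with N = 2: p(u) ~ Phi(u) + exp(-u^2/2) * (C0 + C1*u)
design = np.exp(-u_grid**2 / 2)[:, None] * np.stack([np.ones_like(u_grid), u_grid], axis=1)
target = p_emp - np.array([gauss_sf(u) for u in u_grid])
(c0, c1), *_ = np.linalg.lstsq(design, target, rcond=None)

fit = np.array([gauss_sf(u) for u in u_grid]) + design @ np.array([c0, c1])
print(c0, c1)
```

Once fitted, the same functional form can be evaluated at larger u to extrapolate the excursion probability below the range directly reachable with the available samples.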
The Euler characteristic φ of an excursion set A_u can be interpreted as the number of isolated regions in the excursion set where the Gaussian random field f(t) exceeds some significance level u (f(t) > u), minus the number of holes in these isolated regions [5]; a precise mathematical definition can be found in [12]. If the Gaussian random field is on a rectangular Euclidean space M = ∏_{i=1}^{N} [0, M_i], the mean Euler characteristic of the excursion set of a unit-variance field can be expressed in terms of the dimensions of the Euclidean space and the derivatives of the random field [12]:

$$\mathbb{E}[\varphi(A_u)] = e^{-u^2/2} \sum_{k=1}^{N} \sum_{J \in \mathcal{O}_k} \frac{|J| \, |\Lambda_J|^{1/2}}{(2\pi)^{(k+1)/2}} H_{k-1}(u) + \Phi(u) \quad (8)$$

where O_k refers to the set of k-dimensional faces of M that contain the origin, |J| refers to the volume of a face, and H_{k-1} is the (k−1)-th probabilists' Hermite polynomial. Iterating over the k-dimensional faces that contain the origin can also be interpreted as iterating over k-dimensional slices of the space M. Λ_J is a k × k matrix containing the spectral moments of the Gaussian random field [12]. These can be computed explicitly using the second derivatives of the covariance function [12]:

$$\lambda_{ij} = -\left. \frac{\partial^2 K_s(s)}{\partial s_i \, \partial s_j} \right|_{s=0} \quad (9)$$

where λ_ij refers to the element of Λ_J in the i-th row and the j-th column, s_i refers to the i-th element of the displacement vector s, and K_s(s) is the covariance function as defined in Equation (5).
For a problem where the point response function is known, this can be computed directly using the following integral [5]:

$$\Lambda = \frac{\int \left(\nabla \alpha_s(s)\right) \left(\nabla \alpha_s(s)\right)^T \, ds}{\int \alpha_s(s)^2 \, ds} \quad (10)$$

where α_s(s) is the point response function, ∇ denotes the gradient with respect to s, and ^T denotes the transpose. This is equivalent to the template function used in Equation (11) where the field is stationary, such that α_s(x_0 − x⃗) = α(x_0, x⃗). For the case of a Gaussian kernel with covariance matrix Σ, we obtain Λ_J = Σ^{-1}/2 [5].
For problems with stationary fields and Euclidean parameter spaces where Equation (8) applies, the mean Euler characteristic can be interpreted as an approximation of p_excur. Thus, Equation (8) can be combined with either Equation (9) or Equation (10) to produce an estimate of p_excur.

III. DEMONSTRATION WITH A 2D TOY PROBLEM
The ideas introduced in section II can first be demonstrated using a search in a 2D parameter space. Here, we will model a 2D template matching search using a 2D Gaussian random field. 2D Gaussian random fields can be used to model the look-elsewhere correction for various experiments, such as searches for dark matter using pixel detectors [20,21] and searches for astronomical transients [10,11].
The expected signal shape is chosen to be a Gaussian kernel for this problem; hence, the template matching kernel can also be modelled using a Gaussian kernel. In such a situation, the normalized covariance function (the correlation function) is also a Gaussian kernel, with double the covariance matrix and √2 times the linear dimensions, as shown in Figure 2. This can be seen by expanding Equation (3) in the case of a position-independent kernel:

$$K(x_0, x_1) = \sum_j \alpha(x_0, \vec{x}_j) \, \alpha(x_1, \vec{x}_j) \, \sigma_j^2 \quad (11)$$

where α(x_0, x⃗_j) is the template weight for a sample at x⃗_j and a template at x_0, and σ_j² = E(X_j²). For the case of a stationary field, σ_j is the same for all j, so we call this simply σ. Then, taking the template function to be a Gaussian with covariance matrix Σ, we can apply the continuum approximation if each data sample takes up a volume of V_s. Thus, we arrive at:

$$K(x_0, x_1) \approx \frac{\sigma^2}{V_s} \int e^{-\frac{1}{2}(x_0 - \vec{x})^T \Sigma^{-1} (x_0 - \vec{x})} \, e^{-\frac{1}{2}(x_1 - \vec{x})^T \Sigma^{-1} (x_1 - \vec{x})} \, d\vec{x} \quad (12)$$

We can see that this is linearly proportional to the convolution of two Gaussian distributions. It is a well-known result that the convolution of Gaussian distributions is Gaussian, with a covariance matrix that is the sum of the covariance matrices of the individual Gaussian random variables [19]. Thus, if the templates in a search are Gaussian, the resulting Gaussian field from the template search has the covariance function:

$$K_s(s) \propto e^{-\frac{1}{2} s^T \Sigma'^{-1} s} \quad (13)$$

where s = x_0 − x_1 and Σ′ = 2Σ. One can then compute the normalization analytically using the convolution integral in Equation (12). In our case, we simply discard the normalization. This is not an issue, as the goal is to produce a null significance map for the purpose of calibrating the look-elsewhere effect, and such significance maps are normalized to have unity variance so that the local significance is directly given by the significance map. A single sample from the Gaussian random field described by Equation (13) is shown in Figure 3. The parameter space is divided into 60 bins in each dimension when generating this sample. We can now compare samples generated using a traditional toy Monte Carlo with Gaussian field samples. This is shown in Figure 4. The Gaussian random field is sampled both using the spectral method described in section II B and a naive method, where we directly sample a large covariance matrix describing the covariance between every pair of points as a multivariate Gaussian.
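The doubling of the covariance can be checked with a quick sketch; a 1D rather than 2D setup is used purely to keep the example short, and the widths and sample counts are arbitrary. White noise is correlated with a Gaussian template of width σ_t, and the empirical correlation function of the output is compared to a Gaussian of width √2 σ_t:

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma_t = 20000, 8.0          # series length and template width (arbitrary)

# Unit-norm Gaussian template: gives a unit-variance significance map
x = np.arange(-50, 51)
tpl = np.exp(-x**2 / (2 * sigma_t**2))
tpl /= np.linalg.norm(tpl)

# Significance map: matched filter of white noise with the template
smap = np.convolve(rng.standard_normal(n), tpl, mode="same")

# Empirical correlation vs. a Gaussian of width sqrt(2) * sigma_t
lags = np.array([0, 8, 16])
emp = np.array([np.mean(smap * np.roll(smap, k)) for k in lags])
pred = np.exp(-lags**2 / (2 * 2 * sigma_t**2))
print(emp, pred)
```

The prediction follows directly from Equation (13): the correlation function is the template's autocorrelation, and the autocorrelation of a Gaussian of width σ_t is a Gaussian of width √2 σ_t.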
As expected, the excursion probability obtained from toy Monte Carlo samples agrees with that obtained by sampling Gaussian random fields. This demonstrates how Gaussian random fields can be sampled to produce large numbers of null significance maps without a toy Monte Carlo, whereby mock data is generated and used to produce significance maps via template matching. Finally, we can test the use of Equation (7) to fit the excursion probability. As the error term in Equation (7) is exponentially suppressed at small excursion probabilities, the fit only uses data points above u² = 10, where the excursion probability is approximately 0.1. An estimate using the covariance of the Gaussian kernel and Equation (8) is also performed; these are shown in Figure 5.
Even though only 10^3 samples are used to fit the excursion probability, the fit matches the excursion probability expected from the toy MC samples. This demonstrates that the fitting procedure can extend the look-elsewhere effect correction to lower p-values with far fewer samples than a direct Monte Carlo estimate would require.

IV. DEMONSTRATION WITH A 1D TEMPLATE MATCHING PROBLEM
We showed in section III that in the case of a Gaussian search kernel and uniformly distributed underlying random variables, the covariance function can be easily computed. Indeed, while the derivation focused on Gaussian kernels, the result should hold in general for kernels that are closed under convolution, such as kernels that represent stable distributions [22].
In many cases, however, the covariance function might be easier to compute numerically. To demonstrate such an example, we consider a one-dimensional search with a non-Gaussian kernel, for an excess in time-series data due to a particle interacting via a long-range force passing by an accelerometer. This toy problem is inspired by the Windchime project [23], where the direct detection of dark matter particles with masses of around the Planck mass will be attempted. While technically challenging, it has been suggested that this might be possible with large accelerometer arrays [24]. In the case of a dark matter particle passing by an accelerometer, the transverse component of the force as a function of time is given by:

$$F(t) = \frac{G m_\chi m_s b}{\left(b^2 + v^2 t^2\right)^{3/2}} \quad (14)$$

where G is the gravitational constant, m_χ is the mass of the dark matter particle, m_s is the test mass of the sensor, b is the impact parameter of the dark matter track, and v is the velocity of the dark matter track [24].
For template matching purposes, a normalized template with the same shape as Equation (14) can be used:

$$\alpha(t) = \mathcal{N} \, \frac{b}{\left(b^2 + v^2 t^2\right)^{3/2}} \quad (15)$$

where $\mathcal{N}$ is a normalization constant.
In this demonstration, values of b = 3 mm and v = 3 × 10^5 m/s are used, and the sampling rate is 10^9 Hz. With this information, we can generate the template for template matching, and then compute the covariance function using Equation (3). As this system is also described by a stationary random field, this is done by computing the autocorrelation of the template. These are shown together with MC data samples in Figure 6.
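A sketch of this computation, using the stated values of b, v, and the sampling rate (the window length is an arbitrary choice, and overall physical constants are dropped since the template is normalized):

```python
import numpy as np

b, v, fs = 3e-3, 3e5, 1e9        # impact parameter [m], speed [m/s], sampling rate [Hz]
t = np.arange(-2000, 2001) / fs  # +/- 2 microseconds around closest approach (arbitrary)

# Shape of Equation (14); constant prefactors are dropped before normalizing
tpl = b / (b**2 + (v * t)**2) ** 1.5
tpl /= np.linalg.norm(tpl)       # normalized template, as in Equation (15)

# For a stationary field, the covariance function is the template autocorrelation
K = np.correlate(tpl, tpl, mode="full")
print(K.max())                   # peaks at 1 at zero lag for a unit-norm template
```

The resulting K plays the role of K_s(s) in Equation (5), and its Fourier transform gives the PSD used for spectral sampling.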
As in section III, the global p-values at 4σ and 5σ local significance are computed using 10^5 signal-free samples each from a toy MC, direct sampling of the Gaussian random field using the covariance function, and sampling of the Gaussian random field in frequency space, as well as a best-fit with 10^3 samples using Equation (7). The best-fit only uses data points above u² = 10, where the excursion probability is approximately 0.1. These results are shown in Table II.
As we expect, the different values agree within the computed uncertainty, demonstrating how the methods outlined in this paper can be used to estimate the look-elsewhere effect. While the example here uses a template that is relevant to the Windchime project, this procedure can be used in general to calibrate the look-elsewhere effect correction for problems involving template matching or matched filtering of time-series data, including sonar [25] and fast radio burst detection [26]. It should be noted that for cases involving multiple templates, correlations between templates would need to be computed as well to avoid underestimation of the significance of a signal.

V. THE LOOK-ELSEWHERE EFFECT IN A PEAK SEARCH WITH PROFILE LIKELIHOODS
Profile likelihoods are often used in particle physics experiments as part of likelihood ratio tests [13]. In this section, we explore the use of Gaussian random fields to model the look-elsewhere effect correction for a peak search using a likelihood ratio test. This type of statistical problem can be encountered in dark matter searches such as [3,27], or resonance searches in collider experiments [28,29].
To demonstrate how Gaussian random fields can be used to model the behaviour of the likelihood ratio test statistic, we consider a peak search in a dataset with no signal and with a background distribution that is flat, aside from an efficiency roll-off at the start, modelled using the cumulative distribution function of a gamma distribution. The background level is left as a nuisance parameter for the profile likelihood.
An unbinned extended likelihood is used, given by [13]:

$$L(E_{\mathrm{peak}}, \mu, \beta) = \frac{e^{-(\mu + \beta)}}{N!} \prod_{i=1}^{N} \left[ \mu \, \mathrm{Gauss}(E_i - E_{\mathrm{peak}}) + \beta \, B(E_i) \right] \quad (16)$$

where μ is the signal level, β is the background level, E_peak is the signal location, N is the number of events in the dataset, B(E) is the background distribution, and Gauss(E) represents the standard Gaussian distribution. The profile likelihood is then given by $L(E_{\mathrm{peak}}, \mu, \hat{\hat{\beta}})$, where $\hat{\hat{\beta}}$ is the best-fit value of β for fixed (E_peak, μ). The test statistic is thus:

$$q(E_{\mathrm{peak}}) = -2 \ln \frac{L(E_{\mathrm{peak}}, 0, \hat{\hat{\beta}})}{L(E_{\mathrm{peak}}, \hat{\mu}, \hat{\beta})} \quad (17)$$

where $(\hat{\mu}, \hat{\beta})$ are the global best-fit values at the given E_peak. A plot of q(E_peak) can be seen in Figure 7.
According to Wilks' theorem, the likelihood ratio test statistic should asymptotically approach a χ² distribution. However, as the null hypothesis is defined by μ = 0, and this lies on the edge of the parameter space μ ∈ [0, ∞), the asymptotic distribution is instead the mixture ½δ(q) + ½χ²₁ [13]. The probability of an upward fluctuation of this distribution above u² is equal to the probability of an upward fluctuation of a standard normal distribution above u. Before we can model the test statistic using Gaussian random fields, however, asymptotic behaviour must first be verified. This can be done by sampling the test statistic for different toy MC datasets at random values of E_peak, and verifying that the resulting distribution follows the expected distribution. This is shown in Figure 8.
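The boundary effect can be illustrated with a quick Monte Carlo sketch: if the estimator underlying the test is asymptotically standard normal and the signal rate is constrained to be non-negative, the test statistic behaves like q = max(z, 0)², and P(q > u²) matches the one-sided Gaussian tail probability:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(4)

# z plays the role of the asymptotically Gaussian signal estimator; enforcing
# mu >= 0 clips negative fluctuations, putting half the mass of q at zero and
# leaving the rest chi-squared with one degree of freedom
z = rng.standard_normal(1_000_000)
q = np.maximum(z, 0.0) ** 2

for u in (1.0, 2.0, 3.0):
    p_q = float(np.mean(q > u**2))
    p_z = 0.5 * erfc(u / sqrt(2.0))   # one-sided Gaussian tail probability
    print(u, p_q, p_z)
```

This is exactly the correspondence used in the text: an excursion of the test statistic above u² has the same probability as a one-sided Gaussian excursion above u.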
We can now compare the excursion probability from toy MC runs, such as that shown in Figure 7, with the excursion probability obtained using Gaussian random fields. This is shown in Figure 9. For this comparison, only 10^3 runs of the toy MC were done due to the computational expense. As the signal model is a Gaussian peak with σ = 1 keV, the covariance function is also modelled as a Gaussian peak with σ = √2 keV, as described in section III. The excursion probabilities obtained from sampling the Gaussian random field 10^5 times, both naively and using the spectral method, also agree. The results are summarised in Table III. The best-fit only uses data points above u² = 8, where the excursion probability is approximately 0.1. As with the earlier examples, the various methods agree within uncertainties. These methods can thus be used to determine the look-elsewhere effect correction for a likelihood ratio test using profile likelihoods, as long as one verifies that the test statistic follows the asymptotic distribution.

VI. APPLICATION TO WINDCHIME
Finally, these methods can now be applied to estimate the look-elsewhere effect correction needed for a dark matter direct detection experiment based on the Windchime concept [23]. In this section, we consider the detection of dark matter interacting via a long-range force using a 0.6 m array of 4^3 accelerometers, with a sampling rate of 10^7 Hz.
The force on a single sensor is given in Equation (14). However, for a particle passing through a sensor array, the impact parameter b would be different for each sensor, and additionally, the time of closest approach differs between sensors. Thus, instead of using the template for a single sensor, a template for the entire array is considered. As each template represents a track, we have to consider the parameterization of a track through the sensor array. We accomplish this using a bounding sphere that is larger than the accelerometer array, so that each track through the array intersects the bounding sphere twice and hence can be parameterized by two points on the sphere. Any given template can then be parameterized by 6 parameters: velocity (v), entry time, the spherical coordinates of the entry point (cos(θ_0), ϕ_0), and the spherical coordinates of the exit point (cos(θ_1), ϕ_1). The cosine of the θ angles is used because evenly-spaced bins in cos(θ) represent equal-sized areas on a sphere. Here, we consider a bounding sphere with a diameter of 1 m, so that it encloses the entire array.
For each set of the 6 parameters, a template is generated by considering the force on each sensor over a series of timesteps. At every timestep, the distance between the particle and every sensor is calculated, and the template for each sensor is computed using the inverse-square law. The template at the i-th timestep and the j-th sensor is thus:

$$\alpha_{i,j} \propto \frac{1}{|\vec{r}_{i,j}|^2} \quad (18)$$

where r⃗_{i,j} is the displacement between the particle position at the i-th timestep and the j-th sensor. After the computation of the entire template over the set of timesteps and all sensors, the template is divided by its sum to normalize it to unity. Finally, the covariance between two sets of parameters can be computed by summing over all sensors and timesteps using Equation (3). This allows the covariance function to be mapped out between one chosen template and other templates in the parameter space. Some 2D slices of the covariance function are shown in Figure 10.
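A sketch of this template construction for a single track (the array pitch, number of timesteps, and track endpoints below are illustrative assumptions, not the Windchime configuration):

```python
import numpy as np

n_side, pitch = 4, 0.15          # 4^3 sensors with a 0.15 m pitch (assumed geometry)
grid = (np.arange(n_side) - (n_side - 1) / 2) * pitch
sensors = np.array([[x, y, z] for x in grid for y in grid for z in grid])

R = 0.5                          # bounding sphere radius: 1 m diameter, as in the text
def sphere_point(cos_t, phi):
    s = np.sqrt(1 - cos_t**2)
    return R * np.array([s * np.cos(phi), s * np.sin(phi), cos_t])

# One arbitrary track, parameterized by entry and exit points on the sphere
entry, exit_ = sphere_point(0.3, 0.5), sphere_point(-0.4, 2.5)
n_steps = 200
path = entry + np.linspace(0, 1, n_steps)[:, None] * (exit_ - entry)

# Equation-(18)-style weights: inverse-square law per timestep and sensor
r2 = np.sum((path[:, None, :] - sensors[None, :, :]) ** 2, axis=-1)
template = 1.0 / r2
template /= template.sum()       # divide by the sum to normalize to unity

print(template.shape, template.sum())
```

The covariance between two parameter sets then follows by generating two such templates and summing their elementwise product over all timesteps and sensors, as in Equation (3).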
It can be seen from Figure 10 that the Gaussian random field representing this problem is not stationary. For a stationary field, the covariance function only depends on the displacement between points, as described in Equation (5). This implies that K(x, x − s) = K(x − s, x) = K(x, x + s); thus, for a stationary process, K_s(s) = K_s(−s) and the covariance function is symmetric, which is visibly not the case here. Unfortunately, this means that the Wiener-Khinchin theorem [18,19] does not apply, and spectral sampling of this covariance is not possible. To get an estimate of the look-elsewhere correction, we can approximate the covariance function using a symmetric functional form. Here, for the purposes of an order-of-magnitude estimate, we use a Gaussian kernel to approximate the covariance function. To do this, points are first sampled based on the 6D histogram shown in Figure 10. After that, the covariance of this large point cloud is computed to find the approximate covariance function. The 2D slices of this approximate covariance function are shown in Figure 11.
A random sample from the Gaussian random field represented by Figure 11, sampled using the spectral method, is shown in Figure 12. Random samples are generated by approximating the parameter space with a Euclidean parameter space of equal volume. This approximation is conservative: it ignores correlations across edges of the parameter space that should be connected, such as ϕ_0 = −π and ϕ_0 = π, and accounting for these correlations would only lower the trial factor. Thus, the trial factor inferred with this approximation is, if anything, an overestimate.
The excursion probability can now be fitted to samples such as Figure 12. The excursion probability estimated with 2000 such samples is shown in Figure 13.
We can now compute the trial factor using the fit in Figure 13. First, we need to compute the signal-to-noise ratio threshold needed for a search with confidence level 1 − α over time t. Here, we use α = 0.0027, corresponding to a significance level of 3σ. The signal-to-noise ratio threshold is then found by solving Equation (19) for u, where Ψ(u) is the fitted excursion probability function, V corresponds to the fraction of parameter space covered by the sampled Gaussian field, and T′/T corresponds to the search time covered by the random field divided by the desired search time. This procedure tells us that we need a signal-to-noise ratio threshold of at least 8.4 for a 1 s search time and 10.4 for a 1 yr search time. The trial factor, N_trials, is then computed from the excursion probability; this results in estimated trial factors of ∼10^14 for a 1 s search and ∼10^22 for a 1 yr search. We can see that, due to the high-dimensional search space, the Windchime experiment suffers from a rather high trial factor. Thus, thresholds much higher than the 5σ level customary in particle physics [1, 30, 31] are needed for a rare event search with an accelerometer array.
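The threshold-finding step can be sketched numerically. The scaling Ψ(u)/(V · T′/T) = α is our reading of Equation (19), and the trial factor is taken as the ratio of the global significance level to the local Gaussian tail probability at the threshold:

```python
import math

def normal_tail(u):
    """One-sided standard-normal tail probability, p_local(u)."""
    return 0.5 * math.erfc(u / math.sqrt(2.0))

def snr_threshold(psi, alpha, V, T_ratio, lo=0.0, hi=50.0):
    """Solve psi(u) / (V * T_ratio) = alpha for u by bisection.

    `psi` is the fitted excursion probability of the sampled field; V is the
    fraction of parameter space it covers, and T_ratio = T'/T the fraction of
    the desired search time.  Assumes psi is monotonically decreasing.
    """
    f = lambda u: psi(u) / (V * T_ratio) - alpha
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def trial_factor(alpha, u):
    """N_trials as the global significance level over the local tail
    probability at the threshold u."""
    return alpha / normal_tail(u)
```

With α = 0.0027 and u = 8.4, `trial_factor` gives a value of order 10^14, consistent with the 1 s estimate quoted above.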
A. The computational expense of calibrating for the look-elsewhere effect

For an N-dimensional parameter space with m bins per dimension, computing the covariance function requires comparing one set of parameters with the m^N − 1 others. Each comparison is equivalent to a single template matching computation or likelihood function evaluation in terms of computational effort. This is a significant speedup over sampling the random field directly via toy MC, as each individual toy MC sample would involve generating a mock dataset and then performing template matching across all sensors and timesteps for m^N different sets of parameters. In the case shown here, with 6 parameters and 11 bins per dimension, computing the covariance function required 1.7 × 10^6 such evaluations.
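This evaluation count is easy to verify:

```python
# Cost of mapping the covariance function: one reference template compared
# against every other point of an N-dimensional grid with m bins per dimension.
def covariance_evaluations(m, n_dims):
    return m ** n_dims - 1

# 6 parameters and 11 bins per dimension give 11^6 - 1 = 1,771,560
# evaluations, i.e. the ~1.7e6 quoted in the text.
```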
This also compares favourably with frequentist-Bayesian methods for the estimation of the look-elsewhere effect, such as [32], because such methods still require sampling a posterior distribution, which in turn requires a large number of likelihood evaluations. While a detailed analysis of the best-case number of likelihood evaluations required is beyond the scope of this paper, we found that searching for a track in simulated data for the sensor array described in this section required ∼10^7 likelihood evaluations using nested sampling with a slice sampler as implemented in [33]. However, it should be noted that if the posterior is to be sampled anyway as part of the data analysis and inference procedure, the method presented in [32] does allow one to calibrate for the look-elsewhere effect with minimal computational overhead.
We can also compute the look-elsewhere effect correction with minimal computational expense if a search for signals is conducted using the self-calibration method presented in [34]. However, the methods shown in this work allow us to compute this correction directly, without having to perform such a search. In the cases shown in Sections III, IV, and V, the look-elsewhere effect correction can be computed directly from Equation (8) using the search template or signal model; in this section, while the covariance function had to be computed numerically, this only required sampling a small subset of the parameter space. The self-calibration method of [34] is extremely performant and allows one to compute the look-elsewhere effect correction with almost no computational overhead once such a search is conducted. The methods presented in this work, by contrast, are more useful for quickly estimating the look-elsewhere effect for a sensitivity study, where such a search has not been conducted and may be prohibitively expensive without additional computational infrastructure.

VII. CONCLUSIONS
In this paper, we described and demonstrated the use of Gaussian random fields in the estimation of the look-elsewhere effect. The presented methods can be used to greatly reduce the computational requirements for the estimation of the look-elsewhere effect. This is particularly useful for high-dimensional and otherwise computationally complex problems. Our methods can also be helpful for sensitivity projections of future experiments, where the computational infrastructure needed for the data analysis of such an experiment does not yet exist.
We have shown that Gaussian random fields can be used to model a large set of statistical problems commonly encountered in physics. When it has been ascertained that a given significance map can be modelled by a Gaussian random field, three techniques can be used to reduce the computational cost of estimating the look-elsewhere effect correction for a local significance, as demonstrated in this paper. First, various methods exist for the efficient sampling of Gaussian random fields, such as the spectral method, where samples are generated in frequency space; a review of such methods can be found in [17]. These can allow Gaussian random fields to be sampled more efficiently than by directly sampling from a large covariance matrix. Second, an analytic approximation of the excursion probability, from [12], can be used to fit a small set of null significance map samples. Finally, given a stationary Gaussian field and a Euclidean parameter space, we have demonstrated that it is possible to directly compute the excursion probability based on the covariance function or the template matching kernel [5, 12]. These methods can be combined to further reduce the computational cost of estimating the look-elsewhere effect correction at low p-values.
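As an illustration of the first technique, the following sketch draws a sample of a stationary, periodic 1D Gaussian random field in the frequency domain; this is a minimal example of the spectral method, not the exact implementation used in this paper:

```python
import numpy as np

def spectral_sample(cov_row, rng=None):
    """Draw one sample of a stationary, periodic 1D Gaussian random field.

    `cov_row` is the covariance function evaluated on the grid (the first row
    of the circulant covariance matrix).  By the Wiener-Khinchin theorem its
    FFT is the power spectrum; shaping independent Fourier-mode noise by that
    spectrum and transforming back avoids factorizing the full covariance
    matrix.
    """
    rng = rng or np.random.default_rng()
    n = len(cov_row)
    spectrum = np.clip(np.fft.fft(cov_row).real, 0, None)  # guard tiny negatives
    # Complex white noise shaped by the sqrt of the spectrum; the real and
    # imaginary parts of the inverse FFT are two independent samples.
    noise = rng.normal(size=n) + 1j * rng.normal(size=n)
    field = np.fft.ifft(noise * np.sqrt(spectrum * n))
    return field.real
```

For a grid of n points, this costs O(n log n) per sample, compared with the O(n^3) Cholesky factorization needed to sample directly from the full covariance matrix.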
We then demonstrated these techniques on 2D and 1D toy problems. The 2D toy problem represents, for example, searches for dark matter using pixel detectors [20, 21] and searches for astronomical transients [10, 11]. The 1D toy problem represents searches in a 1D parameter space, such as searches for dark matter using accelerometers [23] or sonar [25], and fast radio burst detection [26]. Using 10^5 samples generated with each method, we showed that the look-elsewhere effect corrections derived using toy MC significance maps agree with those sampled from Gaussian random fields, both when the Gaussian random field covariance functions are directly sampled and when the Gaussian random fields are sampled using the spectral method. Finally, a much smaller sample of 10^3 null significance maps was used to fit the excursion probability. This analytic fit also agrees with the other approaches, allowing for a greater reduction in computational cost. We also demonstrated that the analytic fit matches the Euler characteristic computed directly using Equation (8).
Finally, we have applied these techniques to a 4^3 accelerometer array based on the Windchime concept. We find that when we require a global significance of 3σ, the estimated trial factor for such an accelerometer array is 10^14 for a 1 s search and 10^22 for a 1 yr search. Taken together, the methods we introduce can help speed up the computation of trial factors in high-dimensional statistical problems, such as that encountered in track finding for Windchime.

FIG. 1. Diagram depicting a template matching search for an excess over a grid of random variables. The grid of random variables is indicated by the blue points, and the template used to search for an excess is shown as the colored wireframe distribution.
FIG. 2. 2-dimensional toy problem template matching kernel and covariance function. The ellipse corresponding to the full width at half maximum (FWHM) of the covariance function is √2 larger than that of the kernel in linear dimension.

FIG. 3. Example of a single null random sample. The FWHM of the covariance function is overlaid as a black dashed ellipse for comparison.

FIG. 5. The fraction of null random samples showing false positives as a function of the significance threshold in units of σ^2. A fit using Equation (7) is shown (dotted purple), and we can see that a fit with only 10^3 samples agrees well with the excursion probability derived from 10^5 toy MC samples. We can see that the Euler characteristic computed using Equation (8) (dashed cyan) also agrees well with both the toy MC samples and the excursion probability fit.

FIG. 6. Top left: The template function of the 1D toy problem. Top right: The correlation function derived as the autocorrelation of the template. Bottom left: One random sample containing a true signal, with the signal truth expectation shown in dashed orange. Bottom right: Two significance maps generated using the toy MC procedure. The blue line contains a true signal, whereas the black dashed line does not.

FIG. 7. Top: One data sample from the MC simulation of a peak search, with the background expectation. 1σ error bars are shown. Bottom: The test statistic computed using Equation (17). The lower energy threshold for the peak search is indicated by the black dashed line. The width of the Gaussian peak is 1 keV for this search.

FIG. 9. The fraction of null random samples showing false positives as a function of the significance threshold in units of σ^2. A fit using Equation (7) is conducted on the toy MC samples. The toy MC was only run 10^3 times, due to the computational expense. We can see that the Euler characteristic computed using Equation (8) (dashed cyan) agrees well with both the toy MC samples and the excursion probability fit. In addition, the excursion probabilities obtained from sampling the Gaussian random field 10^5 times, both naively and using the spectral method, agree as well.

TABLE III. Global p-values at 4σ and 4.5σ local significance for the peak search with profile likelihoods. It can be seen that the p-values are consistent within the stated binomial errors, and both the best-fit value produced using a fit of the toy MC samples and the estimate produced with Equation (8) match the sampled values well. Local significances of 4σ and 4.5σ are shown here, instead of the 4σ and 5σ used for the earlier examples, as the look-elsewhere effect correction is smaller here, resulting in very large error bars at 5σ local significance.

FIG. 11. Gaussian kernel approximation of the slices of the 6D correlation function shown in Fig. 10.

FIG. 12. A random sample from the Gaussian random field of Figure 11, generated using the spectral method.

FIG. 13. The fraction of null random samples showing false positives as a function of the significance threshold in units of σ^2, with a fit using Equation (7) (dotted purple) and the Euler characteristic (Equation (8)) (dashed cyan). 2000 samples generated using the spectral method are shown here.

TABLE I. Global p-values at 4σ and 5σ local significance for the 2D toy problem. It can be seen that the p-values are consistent within the stated binomial errors, and both the best-fit value produced using a 1% sample size and the estimate produced with Equation (8) reproduce the simulated values well.
FIG. 4. The fraction of 10^5 null random samples showing false positives as a function of the significance threshold in units of σ^2. We can see that the different methods used to generate random fields produce global p-values that are in agreement.

TABLE II. Global p-values at 4σ and 5σ local significance for the 1D template matching problem. It can be seen that the p-values are consistent within the stated binomial errors, and both the best-fit value produced using a 1% sample size and the estimate produced with Equation (8) reproduce the simulated values well.