The Estimation of Depth to Basement Under Sedimentary Basins from Gravity Data: Review of Approaches and the ITRESC Method, with an Application to the Yucca Flat Basin (Nevada)

This paper reviews different approaches to the problem of estimating the shape of the basement buried under sedimentary basins from gravity data, and explores the applicability of a recently proposed method to a well-constrained real case, comparing the results with models computed using a variety of techniques. Many gravity inversion techniques for estimating the depth to basement, based on rather different approaches, have been proposed. As is well known, the interpretation of gravity data requires certain assumptions about the source, aimed at constraining the solution of an ambiguous problem. The different approaches imply different kinds of solutions, namely a density contrast distribution at depth, a grid defining the depth to basement over the entire study area, or a set of single depth estimates. Each approach has its own advantages and weaknesses. In this context, special attention is given to the recently proposed ITerative RESCaling (ITRESC) method. In this method, there is no need to assume a density function: it is estimated by a data-driven procedure and then used to generate a grid of the depth to basement. The technique is based on the depth-gravity relation plot, illustrating the link between the depth to basement, known at some control points (e.g., boreholes or interpretations of other geophysical data), and the values of the residual gravity anomaly. An important feature of the ITRESC method is that borehole control points are used globally rather than locally, providing constraints for all parts of the model. The main features of this innovative method are illustrated and evaluated through its application to the gravity anomalies of the Yucca Flat basin (Nevada). The results are compared with models obtained by previous gravity interpretations and by the processing of other geological and geophysical data.


Introduction
The analysis and interpretation of gravity anomalies are often aimed at inferring the morphology of the basement buried beneath sedimentary basins. Information about the depth to basement can be useful in several different geological contexts and applications (tectonic trend analyses, hydrocarbon exploration, hydrogeological studies, etc.). The gravity field is not a linear functional of the depth, thickness or shape of the source, and thus estimating the morphology of the undulating basement surface from gravity data represents a nonlinear inverse problem. A scheme of the commonly used approaches to basement depth gravity modelling is presented in Table 1. It is well known that gravity anomaly modelling suffers from nonuniqueness (e.g., Blakely 1996): many realistic models generated by data inversion can reproduce the same observed gravity data. The problem can be mitigated by constraining the sought-after solution within a subset of the possible ones, by supplying a priori information about its characteristics. Thus, one of the most important differences among the many methods proposed to obtain models of the basement depth under sedimentary basins concerns the a priori assumptions about the source.
A commonly used approach to this interpretation problem is the Parker-Oldenburg method (Oldenburg 1974), based on the Fast Fourier Transform (FFT). An extremely efficient FFT forward formula for the computation of gravity or magnetic fields of undulating layers (Parker 1972) is rearranged in an iterative scheme, assuming knowledge of the density contrast between sediments and basement rocks, as well as of the mean depth to the basement. Perhaps even more popular is Bott's method (Bott 1960). This method uses an iterative process in which an initial approximation of the sediment thickness at the location of each gravity datum is obtained using the slab formula (g = 2πγΔρt, where γ is the gravitational constant, Δρ is the density contrast and t is the slab thickness, so that t = g/2πγΔρ), assuming a density contrast. The misfit between the observed and predicted data is then converted into corrections to the initial model (once again using the slab formula), and the process is stopped when the misfit falls below a certain threshold. Here, the nonlinearity of the modelling is expressed in the iterative process (Silva et al. 2014). The method was later modified by various authors, with improvements concerning (1) the iteration step size, (2) the use of a density-depth function instead of a constant density and (3) the fitting function used:

1. Cordell and Henderson (1968) modified the step size of the iterative phase of Bott's method by introducing the ratio of the observed to the computed gravity anomalies generated by the model obtained from the previous iteration. The updated model is given by the product of this ratio and the previous model. Silva et al. (2014) proposed accelerating the convergence by dividing the residual (i.e.
the difference between observed gravity and the gravity computed by the current model) by an arbitrary and relatively large positive value (no longer calculated using the slab formula), accepting the resulting model if the L2 norm of the residual vector is smaller than that of the previous iteration.

2. The density of unconsolidated or lithified sediments filling a basin depends principally upon the percentage of volume occupied by voids. At depth, the increase in lithostatic pressure reduces the porosity and thus increases the density of the sediments. This means that the density contrast between the sediments and the basement rocks can become very small at depth. Based on various types of density measurements (drillhole gravity data, gamma-gamma density and saturated or unsaturated sample density measurements), several functions describing the variation of the density contrast with depth have been used in the modelling of gravity data. These include linear models (e.g., Phelps et al. 1999), or functions implying that the largest density increase occurs near the surface (quadratic, exponential, hyperbolic and parabolic models; e.g., Athy 1930; Granser 1987; Litinsky 1989). These various attempts to model the density contrast variation with depth reflect the fact that in different geological contexts the sediment density can vary in different ways, and that even when density data are available it is often not easy to synthesize all the information into a single reliable mathematical function. This difficulty arises because, on the one hand, borehole density data collected in a basin may differ from place to place (e.g., Phelps et al. 1999) and, on the other hand, gravity data may not be easily fitted by using a density contrast function estimated from density measurements at a few locations in a basin (e.g., Cai and Zhdanov 2015).

Table 1 Synthesis of the possible approaches to basement depth gravity modelling:
- Nonlinear inverse methods. Examples: Bott (1960), Parker-Oldenburg (1974), Martins et al. (2011), Feng et al. (2018). Resulting model: basement morphology model.
- Linear inversion methods. Examples: Li and Oldenburg (1998), Portniaguine and Zhdanov (1999), Sun and Li (2015). Resulting model: 3D density distribution, to be interpreted in terms of basement depths.
- Depth-to-basement estimates. Examples: Werner (1953). Resulting model: basement surface resulting from interpolation of single depth estimates.
As the density contrast is often assumed in the gravity modelling of sedimentary basins and strongly determines the solution, interpretation methods that do not involve this choice may be a convenient alternative.

3. The fitting function is the formula used to forward-compute the field generated by the current model in the iterative phase. As Silva et al. (2014) pointed out, the interpretation model of Bott's method is defined by the fitting functional used. This fitting functional may be the analytical formula for the gravity field of a 2D body with polygonal cross section or of 2D or 3D rectangular prisms, or an algorithm to compute the field due to complex undulating layers (Parker 1972). Bott's method (and its variants) is very efficient because it does not involve matrix multiplications or inversions. Thus, the computational efficiency of this approach depends mainly on the selected fitting function, with the Fourier-domain approaches being much faster than those operating in the space domain. To model the vertical variation of the density contrast, an appropriate fitting function must be chosen. The available tools include: analytical formulas for the gravity effect of prismatic bodies with parabolic (Chakravarthi et al. 2002) or cubic polynomial (Garcia-Abdeslem 2005) density contrast functions; formulas for the gravity of polyhedral bodies with a linear density contrast function (e.g., Holstein 2003); and algorithms for modelling the gravity of undulating layer sources, operating in the Fourier domain, where the density contrast varies with depth according to an exponential (Granser 1987) or a cubic polynomial law (Wu 2016; Wu and Lin 2017).
Jachens and Moring (1990) used a version of Bott's method incorporating a procedure to estimate and remove the gravity effect generated by density variations within the basement. It is based on gridding the gravity field measured on basement outcrops over the whole basin area and subtracting it from the observed gravity field. The method then proceeds with the iterative phase of Bott's method, the field predicted from the model being subtracted from the basement field until the difference becomes negligible. Blakely and Jachens (1991) applied this method to isostatic residual gravity anomalies in Nevada in order to define the morphology of the pre-Tertiary surface and to produce a basement gravity map reflecting variations of density within the pre-Tertiary basement. The same method was applied by Phelps et al. (1999) to the isostatic anomalies of the Yucca Flat basin (Nevada), and their results will be discussed in the following sections. Unfortunately, the success of the separation procedure described above depends critically on accurate knowledge of the density of the basin deposits and of the basement (Blakely and Jachens 1991). The possibility of isolating the field due to density variations within the basement from the field due to basin-fill density contrasts with the basement does appear interesting, although I am not aware of any published attempt to prove the effectiveness of the procedure on synthetic data.
Other gravity data interpretation approaches to modelling the basement morphology use the Tikhonov regularized inversion method (Tikhonov and Arsenin 1977) to obtain the basement geometry, assuming a density contrast. With these approaches, the subsurface is generally discretized into a grid of columnar prisms of known horizontal dimensions and known density, and the parameters to be estimated in the inversion model consist of the thicknesses of the columns. This inversion problem is ill-posed, and needs to be stabilized by minimizing a suitable objective function. The objective function includes a regularizing function (e.g., the L2 norm of the discrete first-order derivative of the model) which gives a smooth character to the solution. In the inversion for the basement depth, Martins et al. (2011) introduced the total variation functional (corresponding to the L1 norm of the discrete derivative of the model) capable of recovering a model where the abrupt changes of the morphology are not penalized. This objective function should therefore provide a reliable approximation of even steep morphological jumps along large-offset fault planes. Sun and Li (2014) developed a nonlinear inversion method for adaptively recovering both smooth and blocky features in the resulting model. This method assumes knowledge of the density contrast and requires two preliminary inversions to partition the study area into regions where the structure is expected to be smooth or blocky. In the final inversion, the appropriate Lp model norm, with different p values at different locations, is minimized. Feng et al. (2018) used a similar approach, implementing a nonlinear inverse method that minimizes an objective function containing a composite regularizing function, allowing variation of the smoothness of the model in the study area. The total variation stabilizer is used where the basin structure is estimated to be 'discontinuous', while a smooth stabilizer is used elsewhere. 
This areal distinction is automatically made by using a specific edge-analysis tool applied to the gravity anomalies or to a first-approximation model.
Another approach to the estimation of the basement morphology includes the nonlinear and non-iterative method proposed by Fedi (1997). The interesting feature of this approach is that a constant density contrast can be estimated from the comparison of the observed data with the data predicted from a model obtained independently of the density contrast. This method requires knowledge (or the assumption) of the minimum and maximum depths to the basement.
All these nonlinear methods, estimating just the basement morphology, can be useful to obtain a basic modelling of a sedimentary basin. However, the obtained models have certain limitations due to the necessary assumptions, the most critical being the density contrast function, including the choice of the mathematical model for its variation with depth (e.g., constant, linear, exponential or parabolic laws) and the absence of lateral variations of the density contrast in the basin sediments. On the other hand, their output clearly defines the basement surface, simplifying the interpretation of results.
Linear inversion algorithms are designed to produce a model describing the density contrast distribution in a subsurface volume (e.g., Li and Oldenburg 1998; Portniaguine and Zhdanov 1999). They could be used to obtain a more complete modelling of a sedimentary basin, avoiding the above-mentioned assumptions typical of basement estimator algorithms. However, this approach has its own complexities, associated with regularized inversion, such as the choice of the stabilizers in the objective function, which determine the main properties of the solution (e.g., smooth vs. blocky models), the choice of the regularization parameters and the bounds on the model parameters. This inversion requires solving for a much greater number of parameters than the simpler basement depth estimation, and thus the computation time is generally longer. Moreover, the final density contrast cube will rarely display in a clear and objective way the geometry of the geological boundary between the basement and the filling sediments. This problem has recently been addressed by clustering inversion techniques, which incorporate a priori knowledge such as petrophysical data and produce a geologically differentiated model as a final product (e.g., Sun and Li 2015). To my knowledge, this technique has not yet been applied to a basement morphology modelling problem.
Spectral methods are a classic tool for obtaining information about the average depth to an ensemble of sources (Naidu 1968; Spector and Grant 1970; Maus and Dimri 1995; Fedi et al. 1997). They are often used to depict basement undulations through a moving-window approach (e.g., Blakely 1983; Okubo et al. 1985). However, they can only provide average depth estimates relative to a wide area, so they are not useful when a detailed basin study is required. Spectral methods are mostly used for Curie depth estimation (magnetic case) and other regional applications (e.g., Maus et al. 1997).
Lastly, a (generally rough) basement morphology estimation can be obtained by interpolation of single depth estimates given by methods that are independent of the density contrast. These include Werner deconvolution (Werner 1953; Ku and Sharp 1983), Euler deconvolution (Reid et al. 1990; Nabighian and Hansen 2001), the Tilt-depth method (Salem et al. 2007) and multiscale methods (e.g., Fedi et al. 2012; Florio and Fedi 2014; Florio et al. 2009). These algorithms are generally valid for specific ideal source models (the so-called 'one-point' sources: spheres, lines, sheets or contacts; e.g., Reid et al. 1990), so their application requires the correct identification of the source model; otherwise, the accuracy of the depth determination is significantly degraded. However, some of them are model-independent, such as the multiscale geometric method (e.g., Fedi et al. 2009), or can simultaneously estimate the model type, so that no assumption is required (e.g., Nabighian and Hansen 2001; Florio and Fedi 2014). Moreover, the depth estimates are located according to the distribution of the anomaly gradients and of the source singular points, so they may not be optimally distributed in the study area, with a lack of depth estimates where the basement is flat or deep (Salem et al. 2014).
Considering the difficulty of use and all the assumptions required by the various approaches described above, the simplified solution of modelling just the surface of the basement may perhaps represent the best option for obtaining a spatially detailed estimation of its morphology. The only difficulty lies in the choice of the density contrast, or of the density contrast-depth function. This parameter, however, significantly affects the results: the estimated depth increases when the density contrast decreases, and vice versa. This is the Achilles' heel of most basement estimator methods, because a reliable value for the density contrast, or for its variation with depth, is difficult to obtain even when borehole data are available. For example, Phelps and Graham (2002), when modelling the gravity anomalies to find the depth to basement beneath Frenchman Flat (Nevada, USA), chose to modify the density profile derived from a gamma-gamma log until the gravity model matched the depth to pre-Cenozoic rocks at boreholes. Cai and Zhdanov (2015) had similar difficulties when modelling the isostatic gravity anomalies of the Deadman Lake and Surprise Spring basins in California (USA) for the morphology of the basement, using a density function model proposed by the U.S. Geological Survey (USGS) based on drilling information (Roberts et al. 2002). They found that a constant density contrast had to be used to obtain a good fit to the observed data, and concluded that the USGS density model may not be optimal.
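The sensitivity of depth estimates to the assumed density contrast can be quantified, to first order, with the slab formula introduced earlier (t = g/2πγΔρ): the estimated thickness scales inversely with Δρ. A small numeric sketch, with an invented anomaly value and invented candidate contrasts:

```python
import math

G = 6.674e-11                      # gravitational constant (SI units)
g = 10e-5                          # a 10 mGal anomaly, in m/s^2

def slab_depth(g, drho):
    """First-order basin thickness from the infinite-slab formula."""
    return g / (2 * math.pi * G * abs(drho))

t_400 = slab_depth(g, 400.0)       # assuming |drho| = 400 kg/m^3
t_300 = slab_depth(g, 300.0)       # assuming |drho| = 300 kg/m^3
# a 25% reduction of the density contrast deepens the estimate by 33%
print(round(t_400), round(t_300))  # roughly 596 and 795 m
```

The same inverse proportionality propagates, qualitatively, through the iterative basement estimators discussed above, which is why the choice of Δρ dominates their results.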
Recently, Florio (2018) proposed an almost fully automatic iterative basement modelling method (ITRESC, ITerative RESCaling), in which the only assumption required is the absence of lateral density contrasts inside the basin fill or the basement, an assumption common to many of the basement depth estimator methods described above. The ITRESC method relies on the availability of information about the depth to basement at a few points, so it represents a geologically constrained interpretation procedure. A distinctive feature of this method is its ability to estimate a constant density contrast or a density-depth function.
In the following sections, after briefly reviewing the ITRESC method, I will illustrate the performance of this method in an application to the Yucca Flat basin in Nevada, comparing the results with those obtained using other approaches.

The ITRESC Method
I will outline the basic principles of the ITRESC method. A full description and discussion of the method can be found in Florio (2018).

Basic Description
The ITRESC method relies on the availability of two datasets: (1) a residual Bouguer gravity anomaly map; (2) a set of depths to the basement within the investigation area, as obtained from analysis of borehole logs or from previous geophysical research (e.g., seismic data, electric/electromagnetic soundings or even gravity modelling, if the depth estimates have been obtained without assuming the density contrast).
A link is established between these geological (borehole) and gravity datasets by building the depth-gravity relationship, a simple plot of basement depths versus the gravity values observed at the same borehole locations. Florio (2018) showed that this relation is approximately linear when the density contrast is constant (homogeneous basins), while it assumes a concave-downwards shape in the more general case of a density contrast decreasing with depth. This relationship between basement depth and gravity values can be immediately used to transform, or rescale, the gravity anomalies into basement depths. A P-order polynomial is fitted to the depth-gravity plot and the estimated coefficients c_p are used to build a 'first-approximation' model, b_1(x_i):

b_1(x_i) = Σ_{p=0}^{P} c_p [g(x_i)]^p,    (1)

where x_i, i = 1, …, N, is the set of horizontal coordinates of the N gravity data g(x_i). It is important to emphasize that this first-approximation model is obtained without any knowledge of the density contrast. Using complex synthetic models, Florio (2018) showed that the first-approximation model closely reproduces the general shape of the basin, but it is a smooth model (in the case of a constant density contrast between sediments and basement, it has the same shape as the gravity field), so that it shows large errors in the steepest areas of the true buried morphology.
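The construction of the first-approximation model can be sketched in a few lines; the borehole gravity/depth pairs below are invented (and deliberately exactly linear, mimicking the homogeneous-basin case), as is the stand-in gravity grid.

```python
import numpy as np

# hypothetical control points: residual anomaly (mGal) and basement
# depth (m) at five boreholes; values invented for illustration
g_ctrl = np.array([-2.0, -8.0, -15.0, -22.0, -30.0])
z_ctrl = np.array([160.0, 640.0, 1200.0, 1760.0, 2400.0])

# fit the depth-gravity relationship (first order: constant contrast)
c = np.polyfit(g_ctrl, z_ctrl, deg=1)

# rescale the whole residual anomaly map into a depth model b1;
# g_grid stands in for the gridded gravity data of the study area
g_grid = np.linspace(-32.0, 0.0, 65)
b1 = np.polyval(c, g_grid)
```

With these numbers the fitted law is about −80 m per mGal, so a −10 mGal anomaly rescales to roughly 800 m of sediment; note that no density contrast entered the computation.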
To improve this first model, the ITRESC method needs an estimate of the density contrast function. If the depth-gravity relationship is well approximated by a first-order regression, a constant density contrast is implied, and its value can easily be found using the method proposed by Fedi (1997): the observed gravity is compared with the gravity predicted by the first-approximation model using a unit density, and the slope of the line stemming from a first-order regression of these data provides an estimate of the constant density contrast. If the density contrast varies with depth, the depth-gravity relationship is better fitted by a higher-order polynomial regression. In this case, the procedure to estimate a depth/density contrast function is as follows:
1. the plot of observed versus unit-density predicted data is fitted by a suitable polynomial;
2. the polynomial is simplified into several linear segments by the Douglas and Peucker (1973) polyline simplification algorithm;
3. the slope of each of the obtained linear segments is computed, representing a set of density contrasts at different depths;
4. the depths of the corresponding layers are found by recording the observed-gravity coordinates of the endpoints of the linear segments and looking for the corresponding depths in the depth-gravity relation plot.
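Steps 2 and 3 of this procedure can be sketched with a small recursive Douglas-Peucker implementation. The input polyline below (two straight segments) is an invented stand-in for the sampled polynomial of step 1, chosen so the expected output is obvious.

```python
import numpy as np

def douglas_peucker(pts, eps):
    """Recursively keep only vertices deviating more than eps from the
    chord joining the first and last point (Douglas and Peucker 1973)."""
    (x0, y0), (x1, y1) = pts[0], pts[-1]
    dx, dy = x1 - x0, y1 - y0
    # perpendicular distance of every point from the chord
    d = np.abs(dy * (pts[:, 0] - x0) - dx * (pts[:, 1] - y0)) / np.hypot(dx, dy)
    i = int(np.argmax(d))
    if d[i] > eps:
        left = douglas_peucker(pts[: i + 1], eps)
        right = douglas_peucker(pts[i:], eps)
        return np.vstack([left[:-1], right])
    return pts[[0, -1]]

# invented curve: two straight segments with slopes 0.5 and 0.2
x = np.linspace(0.0, 10.0, 21)
y = np.where(x <= 5.0, 0.5 * x, 2.5 + 0.2 * (x - 5.0))
verts = douglas_peucker(np.column_stack([x, y]), eps=0.01)

# slope of each retained segment = one density contrast per layer (step 3)
slopes = np.diff(verts[:, 1]) / np.diff(verts[:, 0])
print(verts[:, 0], np.round(slopes, 3))  # vertices at x = 0, 5, 10
```

The retained vertices mark the segment breakpoints, and the two recovered slopes (0.5 and 0.2) play the role of the layer density contrasts of step 3; step 4 would then read the corresponding depths off the depth-gravity plot.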
After obtaining an estimate of the density contrast, Δρ_est, the field of the first-approximation model, g(x_i; Δρ_est, b_1), can be recalculated and subtracted from the observed data g(x_i). This misfit is multiplied by the slope s of the line best fitting the depth-gravity relation (representing the step size of the iterative phase) and thereby rescaled into depths. The rescaled misfit is added to the first-approximation model, so that, at the j-th iteration, the model b_j(x_i) is given by

b_j(x_i) = b_{j-1}(x_i) + s [g(x_i) − g(x_i; Δρ_est, b_{j-1})].    (2)

This procedure is iteratively repeated until a stopping criterion is met (e.g., the misfit reaches the estimated measurement error).
The gravity field produced by the models is computed in the Fourier domain (e.g., Parker 1972), so that the computation time is generally short, even for large matrices.
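A minimal 1D-profile version of the Parker (1972) Fourier-domain forward computation can be sketched as below. This is a simplified illustration under my own assumptions, not the exact algorithm used in the paper: the series is truncated after a few terms, the density contrast is constant, and the layer is referenced to its top depth z0; the sanity check at the end compares a constant-thickness layer against the slab formula.

```python
import math
import numpy as np

G = 6.674e-11  # gravitational constant (SI units)

def parker_gravity(h, dx, drho, z0, n_terms=6):
    """Gravity (m/s^2) of a layer of thickness h(x) with its top at depth z0,
    via the truncated Parker (1972) series evaluated in the Fourier domain."""
    k = np.abs(2.0 * np.pi * np.fft.fftfreq(h.size, d=dx))
    F = np.zeros(h.size, dtype=complex)
    for n in range(1, n_terms + 1):
        # note: k**(n-1) evaluates to 1 at k=0 when n=1 (0.0**0 == 1.0)
        F += k ** (n - 1) / math.factorial(n) * np.fft.fft(h ** n)
    F *= 2.0 * np.pi * G * drho * np.exp(-k * z0)
    return np.real(np.fft.ifft(F))

# sanity check: a layer of constant thickness must reproduce the slab formula
h = np.full(256, 1000.0)                  # 1000 m thick layer
g = parker_gravity(h, dx=500.0, drho=400.0, z0=100.0)
# every station should equal 2*pi*G*drho*h
```

Because the whole forward computation reduces to a handful of FFTs, the cost per iteration grows only as N log N with the grid size, which is what makes the repeated forward calculations of the iterative phase affordable.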

The Depth/Gravity Relationship
The core of this method is the study of the depth/gravity plot, rarely considered in previous approaches to the interpretation of gravity data. In this plot, the interpretative results from other approaches (boreholes or geophysical studies) are naturally integrated with gravity data, and it is therefore possible to check their compatibility with the gravity data (assuming that the only density contrasts are at the basement interface). It is very important to highlight that in the ITRESC method the depth constraints are not used locally, as in standard interpretational procedures (e.g., when we want to force a gravity model to be consistent with the basement depth at some borehole locations). Instead, the available information on the depth to basement is used to define a general gravity-rescaling law to be applied to the whole basin, even in areas where no constraint is available.
It is interesting to note that even for a synthetic homogeneous basin there is a certain scattering of the points describing the depth/gravity relationship about the expected linear trend (e.g., see Florio 2018, Fig. 1b). When studying real sedimentary basins, the causes of the scattering in this plot may include:
• the local character of the constraints versus the 'global' character of the gravity field at those points: this produces an 'intrinsic scattering' of the depth/gravity relationship plot, as the gravity data at the borehole location also contain the contribution of distant sources;
• the presence of geological heterogeneities both in the basin infill and in the basement;
• the basement depth variability versus the gravity grid spacing (aliasing of the gravity changes due to short-wavelength oscillations of the basement depth);
• inaccuracies in the depth constraints (essentially when these constraints come from geophysical interpretations of the basement depth);
• inaccuracies in the gravity data (due to processing and measurement errors).
The distribution and number of depth constraints are important parameters that influence the possibility of a successful interpretation using the ITRESC method. The importance of the number of constraints is analysed in Florio (2018). If the depth to basement is known at points spanning the entire range of the gravity anomalies in the study area, these points may provide sufficient information for a good reconstruction of the basement even if they are few in number. The presence of constraints corresponding to the extreme values of the gravity anomaly range (or values close to them) enables extrapolation errors of the depth-gravity relationship to be avoided. Thus, the areal distribution of the depth constraints appears more important than their number.

The ITRESC Method is Almost Fully Automatic
The ITRESC procedure outlined above requires virtually no control by the user: it is only necessary to choose the regression models that best fit the depth/gravity relationship and the observed/predicted data plot (used to estimate the density contrast). In practice, the most critical choice regards the regression model for the depth/gravity relationship. Generally, useful polynomial models have a degree not greater than 4 in both cases.
Deciding which is the best approximating model for making inferences from the data is indicated philosophically by the 'principle of parsimony' and operationally by several information-theoretic criteria (e.g., Burnham and Anderson 2002). The principle of parsimony favours the model with the smallest possible number of parameters that still adequately represents the data. A proper model-selection criterion attempts to identify a model in which the error of approximation (bias) and the error due to random fluctuations in the data (variance) are well balanced. There is generally a trade-off between goodness of fit and parsimony: low-parsimony models (i.e. models with many parameters) tend to fit better than high-parsimony models. This is not necessarily a good thing: adding more parameters yields a good fit to the data at hand, but the same model will likely be useless for predicting other data sets or for extrapolating beyond the range of the available data.

[Fig. 1 a Bishop basement model (Williams et al. 2005). Black dots identify the position of 10 virtual boreholes encountering the basement. b Gravity field generated by the Bishop basement model in a, assuming a density contrast with the filling sediments defined by Eq. (6)]
In the specific case of the approximation of the depth/gravity relationship, the available depth constraints are often not distributed so as to cover the full gravity anomaly range of the studied area, so the fitting polynomial must be able to safely extrapolate the depth/gravity relationship up to the maximum and minimum gravity values of the area. Therefore, it is important to have some quantitative tools that help choose the best approximating model. I have found two different tools useful: Akaike's Information Criterion (AIC; Akaike 1973) and the Analysis of Variance (Davis 2002).
1. In the case of least squares estimation with normally distributed errors, Akaike's Information Criterion can be expressed as a simple function of the residual sum of squares (Burnham and Anderson 2002):

AIC = n ln(σ̂²) + 2K,    (3)

where n is the number of data, K is the total number of estimated regression parameters, σ̂² = Σ ε̂ᵢ²/n and ε̂ᵢ are the estimated residuals for a specific candidate model. Equation (3) expresses the trade-off between bias and variance (or between underfitting and overfitting), as the first term on the right-hand side decreases as more parameters are added to the approximating model, while the second term grows. When the sample size n is small with respect to the number of parameters K (e.g., n/K < 40; Burnham and Anderson 2002), a second-order bias adjustment to the AIC has been proposed (Hurvich and Tsai 1989):

AICc = AIC + 2K(K + 1)/(n − K − 1).    (4)

In practice, AICc is computed for each of the candidate models, the models are ranked accordingly, and the one with the smallest AICc is selected as the best. AIC and AICc will tend to select the same model when the ratio n/K is sufficiently large.

2. The Analysis of Variance can be used to evaluate the significance of added terms in curvilinear regressions (Davis 2002). A regression sum of squares, such as

SS_R = Σᵢ (ŷᵢ − Ȳ)²,    (5)

where ŷᵢ is the estimated value of yᵢ at specific values of xᵢ, Ȳ is the average of the regressed variable Y and n is the number of data, is computed for a linear fit (SS_R1) and for a quadratic fit (SS_R2). Then SS_R1 is subtracted from SS_R2, the resulting quantity providing a measure of the increase in fit due to the additional regression term. To evaluate the statistical significance of the added term, a standard F-test is carried out, and a given term is considered to contribute significantly to the regression only if the resulting F-value falls in the critical region (e.g., for a significance level of 5%).
These statistical tools can be a useful guide to justify the choice of a particular polynomial model approximating the depth-gravity relationship (or the observed versus predicted data plot). However, if the results derived from the chosen model (e.g., the density-depth function) do not make geological sense or are inconsistent with the available constraints, this is a reason to exclude it from the set of candidate models.
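The AICc ranking described above can be sketched as follows. The depth-gravity control points here are synthetic (a quadratic law plus noise, all numbers invented), and K counts the polynomial coefficients plus one for the estimated residual variance, following the convention of Burnham and Anderson (2002).

```python
import numpy as np

def aicc(y, y_hat, k):
    """Corrected Akaike criterion for a least-squares fit with k parameters."""
    n = len(y)
    sigma2 = np.sum((y - y_hat) ** 2) / n        # ML estimate of the variance
    aic = n * np.log(sigma2) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction

# synthetic depth-gravity control points: quadratic law plus noise
rng = np.random.default_rng(0)
g = np.linspace(-20.0, 0.0, 30)                                     # mGal
z = 100.0 - 60.0 * g + 1.5 * g ** 2 + rng.normal(0.0, 30.0, g.size)  # m

scores = {}
for degree in (1, 2, 3, 4):
    c = np.polyfit(g, z, degree)
    # K = (degree + 1) polynomial coefficients + 1 for the variance
    scores[degree] = aicc(z, np.polyval(c, g), degree + 2)

best = min(scores, key=scores.get)  # lowest AICc wins
```

As expected, the linear model is heavily penalized for its large residual variance, while the small-sample correction discourages the needless third- and fourth-order terms.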

Example of Application to the Gravity Field of a Synthetic Model
Before illustrating and discussing the application of the ITRESC method to the real case of the Yucca Flat basin (Nevada), I will briefly describe an application to a complex synthetic model with a continuous density-depth function. The structure is that of the Bishop basement (Fig. 1a; Williams et al. 2005), a synthetic basement model obtained by rescaling a real topography around Bishop (CA, USA). In this test, the density contrast varies with depth, simulating a progressive compaction of the sediments filling the basin, according to the following cubic polynomial law:

Δρ(z) = −0.01z³ + 4z² + 4z − 500,    (6)

where z is the depth in m and the density is expressed in kg/m³. The basement gravity field (Fig. 1b) was calculated using the algorithm developed by Wu (2016), which operates in the frequency domain and allows a polynomial function describing the vertical variation of the density contrast below each (x, y) point. Assuming the depth to basement to be known at 10 locations in the area (Fig. 1a), the depth-gravity relationship is built and, according to the application of Akaike's criterion, it is approximated by a fourth-order polynomial (Fig. 2a).
A first-approximation model of the depth to basement, independent of the density contrast, is readily obtained (Fig. 2b). To estimate the density-depth function, the plot of the observed gravity versus the gravity predicted by the obtained model using a unit density contrast is analysed (Fig. 3a). According to Akaike's criterion, the plot is approximated by a third-order polynomial, and a 15-layer density-depth function is obtained (Fig. 3b). The estimated density function reproduces the true density-depth function rather accurately, and the final model (Fig. 4) represents the true basement morphology with a good amount of detail.

Fig. 2 Application of the ITRESC method to a synthetic case. a Depth-gravity relationship plot. The black line is a regression using a fourth-order polynomial. b First-approximation basement model obtained by rescaling the gravity data using the polynomial function estimated in a. This model is a smooth representation of the true basement, obtained independently of the density contrast
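The rescaling step that produces such a first-approximation model can be sketched as follows. This is a Python illustration in which the control-point values and the gravity grid are hypothetical, not the Bishop model data:

```python
import numpy as np

# Hypothetical control points: residual gravity (mGal) and known depth
# to basement (m) at a few boreholes; values are illustrative only.
g_ctrl = np.array([-2., -5., -9., -14., -20., -26., -31., -35., -38., -40.])
z_ctrl = np.array([60., 180., 370., 620., 950., 1300., 1620., 1850., 2000., 2080.])

# Fit the depth-gravity relationship with a polynomial whose order
# would, in practice, be chosen by the AICc/ANOVA criteria above.
coeffs = np.polyfit(g_ctrl, z_ctrl, 2)

# Rescale an entire gravity grid into a first-approximation depth grid;
# note that no density contrast is assumed at this stage.
g_grid = np.linspace(-40.0, 0.0, 81).reshape(9, 9)
z_first = np.polyval(coeffs, g_grid)
print(z_first.min(), z_first.max())
```

Because the control points are used only to define the rescaling polynomial, they constrain the model globally rather than locally, which is the distinctive feature of ITRESC noted in the abstract.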

Geological Setting
Yucca Flat is a Tertiary extensional basin located in the NE corner of the Nevada National Security Site (NNSS, formerly the Nevada Test Site), Nevada, USA, in the Basin and Range province, about 150 km North-west of Las Vegas, Nevada. The oldest rocks outcropping at the edges of the area, or encountered at depth in boreholes, are upper Precambrian or Paleozoic limestones, dolomites, quartzites, argillites and siltstones, along with isolated late Mesozoic granodioritic intrusions. These rocks form an irregular surface ('basement') upon which upper Miocene volcanic rocks (mainly tuffaceous in nature; e.g., Byers et al. 1976) and Quaternary clastic rocks (colluvial fans of Tertiary and Quaternary sediments) were deposited. The basin is dominated by N-S trending, eastward-dipping normal faults, such as the Carpetbag fault and the Yucca fault, formed by eastward extension (Cole 1987; Fig. 5). The topography of the area slopes gently down towards the South-southwest, with a difference in elevation of about 400 m between the North-eastern and South-western basin boundaries (Fig. 5). Yucca Flat was one of four major nuclear test regions of the NNSS and had been the site of nuclear weapons tests since 1951. From 1957 onwards, the tests were limited to underground explosions, the last nuclear test being conducted in 1992.

Fig. 3 Application of the ITRESC method to a synthetic case. a Observed versus unit-density predicted gravity plot. The black line is a third-order polynomial fitting the data and the red dashed line is its simplification into 15 segments using the Douglas and Peucker (1973) method. b Estimate of the density-depth function (blue line) based on the slope of the segments of the red dashed line in a, following the procedure outlined in the text. The true density contrast (ochre line; Eq. 6) is shown for reference
Since the start of military operations in this area, there has been great interest in estimating the depth to the buried pre-Cenozoic basement rocks, and the gravity method has been successfully used for this purpose. The aim of this research has changed over time, from optimizing the depth of burial of the nuclear explosives (e.g., Healey 1966), to finding the optimal site for high-level nuclear waste storage (Healey et al. 1987) and, more recently, to supporting the modelling of the pathways of underground contaminants by constraining the basin configuration and identifying faults and other possible water conduits or barriers (Phelps et al. 1999). The Paleozoic carbonate basement rocks that lie at depth in the Yucca Flat basin are of particular interest, as these units are part of the regional lower carbonate aquifer, which offers the highest potential for transport of contaminants outside the basin (Wurtz, personal communication 2020).

Gravity and Borehole Data
Gravity anomalies in South-west Nevada mainly reflect the distribution at depth of pre-Cenozoic carbonate rocks (Ponce et al. 1999). Phelps et al. (1999) presented the most complete gravity anomaly dataset for Yucca Flat, a collection of almost 10,000 densely distributed gravity measurements acquired over the previous decades. The isostatic residual gravity anomalies used in the present study were digitized from Fig. 7a in Phelps et al. (1999). The resulting grid (Fig. 6a) has a spacing of 200 m in both the North-South and West-East directions (the same spacing as the original grid in Phelps et al. 1999). Ferguson et al. (1988) assume that the measurement error for the Yucca Flat gravity dataset ranges between 0.15 and 0.5 mGal. The isostatic gravity anomaly map (Fig. 6a) shows that a rather distinct gravity pattern characterizes the eastern and western Yucca Flat areas. The eastern area is associated with a gravity low, with minimum values in its southernmost part, while to the West of the Carpetbag fault a North-South elongated gravity high and low (not corresponding to any topographic relief) can be observed. Gravity highs are present along the basin boundary, marking the outcrops of basement rocks.

Fig. 6 Gravity anomalies of Yucca Flat. a Isostatic gravity anomalies at ground level. b Isostatic gravity anomalies upward continued to the constant level of 1660 m above sea level (a.s.l.). Gravity anomalies outside the basin boundaries are blanked
A large number of boreholes, drilled for exploratory reasons or for the emplacement of nuclear devices, are present in the area. More than 160 boreholes were found to encounter the basement rocks in Yucca Flat. Some of them are located very close to each other, falling within the same 200 × 200 m gravity grid cell. In such cases, the borehole data (location and depth to basement) were averaged within each 200 × 200 m gravity grid cell, and the final number of boreholes constraining the basement depth dropped to 141. The distribution of the boreholes reaching the basement is not uniform in the Yucca Flat area, with only a few in the Southern part of the basin (Fig. 5).
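The reduction of clustered boreholes to one control point per 200 × 200 m cell can be sketched as follows (Python; the coordinates and depths below are invented for illustration):

```python
import numpy as np

def average_in_cells(x, y, depth, cell=200.0):
    """Average borehole locations and depths that fall in the same
    cell x cell grid cell (here 200 m, matching the gravity grid)."""
    cells = {}
    for xi, yi, di in zip(x, y, depth):
        key = (int(xi // cell), int(yi // cell))
        cells.setdefault(key, []).append((xi, yi, di))
    out = []
    for pts in cells.values():
        arr = np.array(pts)
        out.append(arr.mean(axis=0))  # mean x, y and depth per cell
    return np.array(out)

# Illustrative coordinates (m): two holes share a cell, one stands alone
x = np.array([50.0, 120.0, 950.0])
y = np.array([60.0, 180.0, 400.0])
z = np.array([300.0, 500.0, 800.0])
merged = average_in_cells(x, y, z)
print(len(merged))
```

Applied to the Yucca Flat wells, this kind of cell averaging reduces the more than 160 basement-reaching boreholes to the 141 control points used in the modelling.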

Gravity Modelling
To correctly model the gravity anomalies with the ITRESC method, they should be referred to a constant altitude. In fact, the ITRESC algorithm includes the computation of the gravity field of the basement model using a 2D FFT method (Parker 1972), which implies that the data refer to a plane; this is not the case for land gravity data. For this reason, the data grid was differentially upward continued to the constant altitude of 1660 m (Fig. 6b), higher than the maximum elevation in the area, using a wavelet-domain spatially varying scale filter (Ridsdill-Smith and Dentith 2000). The resulting upward continued gravity anomalies are somewhat smoother than the original data, although the same general characteristics are retained (Fig. 6). The first step of the ITRESC procedure involves building the depth-gravity relationship. In Fig. 7a, the depth to basement, as estimated at 141 wells in Yucca Flat, is plotted against the gravity anomaly at the same locations. In this plot, since the gravity data are referred to the elevation of 1660 m a.s.l., the depths to basement are expressed as distances from the same altitude. The plot shows that, at the borehole locations, the gravity values follow a generally decreasing trend with increasing depth to the pre-Tertiary basement rocks. This confirms that the gravity anomalies mainly reflect the depth of the pre-Cenozoic carbonate rocks, and thus the applicability of the ITRESC method to the gravity anomalies of the Yucca Flat basin. The depth-gravity relationship (Fig. 7a) was fitted using a second-order polynomial (the black line in Fig. 7a), according to the results of the statistical analysis described in Sect. 2.3, and a first-approximation model was obtained independently of the density contrast (Fig. 7b).
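The paper uses a spatially varying wavelet-domain operator for the differential continuation; the following sketch shows only the standard plane-to-plane frequency-domain upward continuation that such operators generalize (Python; the grid values are synthetic):

```python
import numpy as np

def upward_continue(grid, dx, dy, h):
    """Upward-continue a gridded gravity field from one plane to a plane
    h metres higher by multiplying its spectrum by exp(-|k| h).
    Assumes the input data already lie on a horizontal plane."""
    ny, nx = grid.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    spec = np.fft.fft2(grid)
    return np.real(np.fft.ifft2(spec * np.exp(-k * h)))

# Synthetic anomaly on a 200 m grid, continued upward by 500 m
yy, xx = np.mgrid[0:64, 0:64] * 200.0
g = 10.0 * np.exp(-((xx - 6400.0) ** 2 + (yy - 6400.0) ** 2) / 2.0e6)
g_up = upward_continue(g, 200.0, 200.0, 500.0)
print(g.max(), g_up.max())
```

The exp(−|k|h) factor attenuates short wavelengths while leaving the mean (k = 0) untouched, which is why the upward continued Yucca Flat anomalies are smoother than the originals but retain the same general pattern.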
Figure 7b shows the thickness of the sediments, computed by subtracting from the obtained depth to basement (referred to the data altitude of 1660 m) the distance between the data altitude and the topographic surface. As expected, this model shows that the basin sediments reach their greatest thickness in the South-eastern area, while in the Western part of the basin their thickness is considerably smaller. As illustrated in the previous sections, to improve the resolution of this model it is necessary to estimate how the density contrast varies with depth, iteratively model the misfit between observed and computed anomalies, and add the correction term to this first model.
The estimate of the density contrast function is based on the plot of the field generated by the first-approximation model, assuming a unit density contrast, versus the observed data (Fig. 8a). The data points in this plot display a curved trend that was approximated by a second-order polynomial and then simplified by the Douglas and Peucker (1973) algorithm into eight linear segments, whose slopes, representing the density contrasts at different depths, were evaluated. The observed gravity values corresponding to the endpoints of these segments are used in the depth-gravity plot (Fig. 7a) to find the depth intervals. This process resulted in the density-depth function displayed in Fig. 8b. The density contrast model comprises eight layers, ranging from about 1000 kg/m³ for the shallowest layer to less than 500 kg/m³ for the deepest one. The overall shape of the density-depth function shows an almost linear decrease in the density contrast with depth. It is assumed that the minimum density contrast also extends to depths greater than those shown in the plot of Fig. 8b.
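The simplification of the observed-versus-predicted curve into linear segments, whose slopes yield the layered density contrasts, can be sketched with a minimal recursive Douglas-Peucker implementation (Python; the curve and the tolerance are illustrative, not the Yucca Flat values):

```python
import numpy as np

def douglas_peucker(pts, tol):
    """Recursive Douglas-Peucker polyline simplification."""
    p0, p1 = pts[0], pts[-1]
    if len(pts) < 3:
        return [p0, p1]
    # Perpendicular distance of every point from the chord p0-p1
    v = p1 - p0
    d = np.abs(v[0] * (pts[:, 1] - p0[1]) -
               v[1] * (pts[:, 0] - p0[0])) / np.hypot(v[0], v[1])
    i = int(np.argmax(d))
    if d[i] <= tol:
        return [p0, p1]
    left = douglas_peucker(pts[: i + 1], tol)
    right = douglas_peucker(pts[i:], tol)
    return left[:-1] + right

# Synthetic observed-vs-(unit-density-)predicted curve; the slope of
# each simplified segment is proportional to a layer density contrast.
x = np.linspace(0.0, 10.0, 101)
y = -0.05 * x**2 + 1.2 * x          # gently curved trend
pts = np.column_stack([x, y])
nodes = np.array(douglas_peucker(pts, tol=0.05))
slopes = np.diff(nodes[:, 1]) / np.diff(nodes[:, 0])
print(len(nodes) - 1, slopes)
```

For a downward-concave trend like this one, the segment slopes decrease monotonically, mirroring the decrease in density contrast with depth described in the text.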
The gravity field of the first-approximation model is then computed using the estimated density function and subtracted from the observed gravity. The residual is rescaled as described by Eq. (2), and the first-approximation model is updated. This iterative process converges in six iterations to a residual field with an RMS lower than the target value (0.15 mGal). The corresponding model is shown in Fig. 9a. The difference between the borehole depths to basement and the basement model depths has an average value of about 40 m, a standard deviation of about 190 m and a mean absolute deviation of about 160 m (Fig. 9b). The error is within ± 100 m for 38% of the control points and within ± 200 m for 69% of the control points. The field produced by this model reproduces the observed data well, with misfit values lower than 0.4 mGal in almost all of the basin area (Fig. 10).
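The misfit statistics quoted above can be computed for any model with a few lines of code (Python; the borehole and model depths below are simulated for illustration, not the Yucca Flat data):

```python
import numpy as np

def misfit_stats(z_borehole, z_model):
    """Summary statistics of the model-minus-borehole depth misfit:
    mean, standard deviation, mean absolute deviation, and the
    percentage of points within +/- 100 m and +/- 200 m."""
    err = z_model - z_borehole
    return {
        "mean": err.mean(),
        "std": err.std(ddof=1),
        "mad": np.abs(err).mean(),
        "pct_100": 100.0 * np.mean(np.abs(err) <= 100.0),
        "pct_200": 100.0 * np.mean(np.abs(err) <= 200.0),
    }

# Simulated depths (m) at 141 control points: bias plus scatter
rng = np.random.default_rng(2)
z_true = rng.uniform(100.0, 1200.0, 141)
z_model = z_true + rng.normal(40.0, 190.0, 141)
summary = misfit_stats(z_true, z_model)
print(summary)
```

Reporting both the tolerance percentages and the mean absolute deviation, as done in the text, guards against a few large outliers dominating the standard deviation.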
The main features of the model can be summarized as follows: (a) the maximum thickness of the basin sediments is about 1.2 km, in the southern area; (b) the basin appears divided into several distinct foundered areas separated by saddles; (c) in places, these saddles correspond to elongated buried ridges, as in the case of the NW-SE basement high in the central-eastern part of the basin; (d) a shallow North-South elongated ridge, buried at depths ranging from about 200 m to just a few tens of meters below the topography, separates the Western part of the basin from the Eastern one; (e) to the West of this ridge, other smaller basin areas are present, deepening to the North.
Several features of the basement model indicate a predominant structural control on the origin and evolution of the basin: the marked and localized basement elevation gradients, which are compatible with the main fault pattern mapped from surface geological data; the presence of distinct buried depocentres separated by thresholds; and their flat bottoms.

Comparison with Other Yucca Flat Basement Models
Other researchers have modelled the Yucca Flat gravity anomalies to estimate the pre-Tertiary basement morphology. Ferguson et al. (1988) inverted the data using both the Cordell-Henderson method (Cordell and Henderson 1968) and the Parker-Oldenburg approach (Oldenburg 1974), obtaining very similar results from these different techniques. Phelps et al. (1999) used the method of Jachens and Moring (1990), which is similar to Bott's method but includes a procedure for estimating and removing the gravity effect generated by density variations in the basement (see Sect. 1).

Fig. 8 Density contrast function estimation. a Observed versus predicted gravity data plot. The data points in this plot display a curved trend that was approximated by a second-order polynomial (black line) and then simplified by the Douglas and Peucker (1973) algorithm into eight linear segments (red dashed line). b Resulting density contrast function (red line) compared to the density functions used in other studies of the same area
A long-wavelength gravity field was removed in both studies, although it was estimated using different techniques. In both studies, the gravity anomalies are referred to the measurement surface, i.e. the topography, and the inversion techniques employed require the assumption of a density contrast. Ferguson et al. (1988) used a constant density contrast of 700 kg/m³ between the filling sediments and the basement rocks. Their model (Fig. 11a) shows a good agreement with the ITRESC model (Fig. 9a), estimating a maximum thickness of about 1100 m and displaying similar basement features, although with less resolution, since the sampling step of the gravity data used (610 m) is about three times that used in this study (200 m). The basement model developed by Phelps et al. (1999) (Fig. 11b) displays more detail than the ITRESC model, as the authors did not use upward continued data. However, the model is in general very similar to the ITRESC basement, except in the Southern area, where a considerably thicker Tertiary and Quaternary sedimentary layer is estimated (about 2500 m thick). In this modelling, the density contrast decreases with depth according to a linear function defined by taking into account the density values obtained from the interpretation of borehole gravity data (Phelps et al. 1999). This function (Fig. 8b) assigns a density contrast of 910 kg/m³ at the surface and very low values at depth. These very low values explain the great depth to basement estimated in the Southern area (about 2500 m below ground level; Fig. 11b).

Fig. 10 a ITRESC basement model computed using the density function of Fig. 8b. b Map of the difference between the observed data (Fig. 6b) and the data predicted by the ITRESC basement model in a
The ITRESC estimated density function displays some similarities with the density functions assumed in the above-mentioned studies (Fig. 8b). The constant density contrast value assumed by Ferguson et al. (1988) is in fact close to the average value of the ITRESC density function (700 vs. 740 kg/m 3 ), while the linear function assumed by Phelps et al. (1999) has practically the same slope as the ITRESC function, although with density contrast values lower by 100-200 kg/m 3 than the ITRESC function.
Other models of the Yucca Flat basement depth include the Hydrostratigraphic Framework Model (HFM; Bechtel Nevada 2006), built from a collection of geological data, such as drill holes and information from outcrops along the basin edges, and geophysical data, such as 2D seismic refraction profiles (indicating, in the southern area, a maximum depth of about 1.3 km; Healey 1966) and the gravity model by Phelps et al. (1999). The HFM shows a maximum depth to basement of about 1200 m in the South-eastern area, in agreement with the ITRESC basement model. The Phelps et al. (1999) gravity model was used selectively to define the basement in areas of poor well control (Prothro, personal communication 2020). One such area is the southern portion of Yucca Flat, where the Phelps et al. (1999) gravity interpretation shows the basement at a depth of 2500 m. However, based on geological considerations, the HFM Paleozoic basement surface was modelled at a much shallower depth (Prothro, personal communication 2020). Recently, Toney et al. (2019) presented seismic tomography models along two profiles across Yucca Flat. These authors consider their models to be in general agreement with the HFM, apparently assuming a bedrock velocity at the Paleozoic interface of about 4 km/s.

Fig. 11 a Yucca Flat basement model, as computed by Ferguson et al. (1988). b Yucca Flat basement model, as computed by Phelps et al. (1999). Both models are shown in terms of the thickness of the alluvial and volcanic formations overlying the Paleozoic basement

Spatial Analysis of the Misfit Between Model and Borehole Depth to Basement
Unlike other approaches that use gravity to estimate the basement depth, the ITRESC method does not force the estimated depth to basement to be equal to the depth constraints it uses. In fact, as mentioned above, the depth constraints are only required to define a rescaling law, valid across the whole study area, for transforming the gravity anomalies into depths to basement. Thus, the analysis of the spatial distribution of the misfit between the borehole depths to basement and the ITRESC model at these locations can provide useful information about the horizontal variability of the density function and can be exploited to improve the model. Where the ITRESC model is deeper than the basement found at boreholes, this difference is positive (Fig. 12). This situation is common near the northern and central border of the Yucca Flat basin. In contrast, the central and southern areas are mainly characterized by negative values, meaning that the ITRESC basement is shallower than the depth found at boreholes. In Fig. 12, the two areas are separated by a cyan line. To avoid any over-interpretation of the data, while drawing this line I tried to define geologically meaningful large areas instead of following isolated features of the basement-borehole misfit data. I therefore began by classifying the misfit into three clusters containing positive, negative and around-zero (± 50 m) values (Fig. 12). I then drew the boundary between the two domains by considering only the positive and negative values, implying that the points with an around-zero misfit could fall indifferently into either of the two areas.

Fig. 12 Spatial analysis of the misfit between model and borehole depths to basement. Coloured dots represent the differences between the depth to basement detected at boreholes and the ITRESC model depth at the same location. The cyan line separates two domains with different signs of the misfit. Black contours represent the topography
I considered the spatial consistency of the two domains more important than including every positive (negative) misfit point in the corresponding area. Indeed, Fig. 12 shows that one positive-misfit point is included in the central-southern area containing the negative misfits.
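The three-cluster classification of the misfits used to guide the domain boundary can be sketched as follows (Python; the misfit values are invented, and the ± 50 m band is the one quoted in the text):

```python
import numpy as np

def classify_misfit(err, band=50.0):
    """Classify borehole-model misfits into three clusters:
    'positive', 'negative' and around-'zero' (within +/- band metres),
    as done before drawing the domain boundary in Fig. 12."""
    labels = np.full(err.shape, "zero", dtype=object)
    labels[err > band] = "positive"
    labels[err < -band] = "negative"
    return labels

# Illustrative misfits (m): model depth minus borehole depth
err = np.array([120.0, -30.0, -220.0, 45.0, 300.0, -75.0])
labels = classify_misfit(err)
print(list(labels))
```

Only the 'positive' and 'negative' clusters constrain the boundary; the around-zero points may fall in either domain.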
Given the spatial consistency of the sign of this misfit, new modelling was carried out by separately considering the two groups of boreholes belonging to the two areas identified above (separated by the cyan line in Fig. 12). For each group, a depth-gravity relationship plot was obtained (Fig. 13a, b) and a distinct density function was estimated (Fig. 13c). Thus, the original density-depth function, computed with data from the entire basin area, is now replaced by two density-depth functions, one computed using data from the northern marginal areas and the other using data from the central and southern areas. The differences between these new density functions (Fig. 13c) are mainly observed (a) at shallow depths, where the density function for the northern marginal areas shows a contrast greater than 1100 kg/m³, and (b) at great depths, where the marginal areas seem to present a density contrast smaller than that of the central areas. The geological significance of these two differences may involve: (a) the presence at shallow depths of unconsolidated sediments, more evident when the density function is computed using data from only the Northern marginal areas; (b) the presence at great depths, in the Northern areas of the basin, of lithic debris denser than the volcanic tuffs widespread at the same depths in the central parts of the basin, derived from the steep basement reliefs outcropping North, North-west and North-east of the basin boundary. Finally, two basement models fitting the gravity data were computed. In both cases, six iterations were sufficient to obtain a model capable of generating a gravity field comparable to the observed gravity within the limits of the target RMS misfit value (0.15 mGal). These two models were then combined along the line separating the two areas using the GridKnit tool of the Geosoft software. The final basement model (Fig. 14a) thus also includes a lateral variation of the density-depth function, along the cyan line in Fig. 12. The gravity field produced by this model satisfactorily fits the observed data, with an RMS value of 0.16 mGal. While confirming the general description of the Paleozoic surface morphology given by the model of Fig. 9, it shows an improved fit to the borehole depths, with an average error of 13 m, a standard deviation reduced to 133 m and a mean absolute deviation of about 100 m (Fig. 14b). The error is within ± 100 m for 59% of the control points and within ± 200 m for 86% of the control points. The final ITRESC model compares very well with the Paleozoic interface as interpreted from the seismic tomography results (Toney et al. 2019) and as defined in the HFM (Bechtel Nevada 2006). Along the THOR1 transect (Fig. 15a; Toney et al. 2019), the ITRESC and Ferguson et al. (1988) models are similar, while the Phelps et al. (1999) model indicates a very deep basement to the SE, in contrast with the seismic and geologic models. In the westernmost part of the THOR2 transect (Fig. 15b; Toney et al. 2019), all three gravity models are consistent with the seismic tomography, showing a deeper basement than that depicted by the HFM, while along the rest of the profile they are in general agreement with the HFM and the seismic tomography model.
With respect to previous gravity interpretations at Yucca Flat, the ITRESC model displays the following improvements:

1. Unlike previous models, the density-depth functions used in the ITRESC modelling have been deduced by a data-driven procedure. Other models were computed with the difficult task of extracting a single density or a density function from often inconsistent borehole density measurements. Ferguson et al. (1988) solved these inconsistencies by finally selecting a constant density contrast. In general, a variation of just 100 or 200 kg/m³ can make a significant difference to the estimated depth, yet we can rarely estimate the density of geological units with this accuracy. This means that the estimates of the average density of sediments from borehole logs are uncertain, so algorithms relying on density contrast assumptions may yield fairly variable answers. The ITRESC approach resolves this potential ambiguity.

2. The ITRESC Yucca Flat model includes both a vertical and a lateral variation of the density contrast, and as such can be considered a better approximation to the true geological complexity.

3. Unlike all the previous gravity modelling, the data have here been rigorously treated by upward continuing them, as required by the FFT algorithm, to a single constant level. Ferguson et al. (1988) and Phelps et al. (1999) also used an FFT algorithm to forward calculate the model gravity anomaly, but they did not take into account the different elevations of the data. In fact, they considered a computation plane near the surface, although the observed gravity data are collected on an irregular topographic surface. This inevitably introduced some errors into the modelling.

4. Of the three gravity models, the ITRESC model is the one most similar to the HFM along the two profiles shown in Fig. 15, with a mean difference of 7 m and 36 m for the THOR1 and THOR2 profiles, respectively, a standard deviation of about 110 m in both cases, and a mean absolute deviation of about 90 m for both profiles. These figures are very similar to the differences between the borehole depths to basement and the basement model depths illustrated in Fig. 14b.

Conclusions
The estimation of the basement depth under sedimentary basins is one of the oldest applications of gravity and magnetic data. Many different approaches can be used, each offering some advantages at the cost of accepting several assumptions about the final model. In general, the simplest and fastest methods are designed to generate a grid describing the basement morphology by assuming a density contrast between the sediments filling the basin and the basement rocks. Thus, in this kind of modelling, the chosen density contrast plays a very important role. Within this class of methods, the ITRESC method overcomes the need to choose a density contrast function: the density function is not assumed, but is instead estimated by a data-driven process depending on gravity and borehole data. This feature of the method is extremely helpful, because the selection of a suitable density contrast is often not an easy task, even when borehole data on lithologies or densities at depth are available (e.g., Ferguson et al. 1988; Phelps et al. 1999; Cai and Zhdanov 2015).
The other main drawback of these methods derives from the assumption of lateral homogeneity of the density contrast across the basin. This assumption is also shared by the ITRESC method. The horizontal invariance of the density contrast involves both the sediment filling and the basement itself. The horizontal density variations in the basin fill may be related to differences in emplacement and the nature of the sediments. For example, in the Yucca Flat case, such horizontal density variations may be caused by different thicknesses of the Tertiary volcanic formations or by the presence of fanglomerates along the steeper borders of the basin. On the other hand, the heterogeneity of the basement lithology, and thus of the basement density, may cause a horizontally variable density contrast with the basin fill. Therefore, it may equally represent a possible source of error in the modelling of the basement depth by approaches such as ITRESC.
In the case of Yucca Flat, very little is known about these variations occurring below hundreds of meters of sediments, the only information coming from the evidence of different basement lithologies outcropping at the basin edges. However, density variations in the basin fill, occurring close to the measurement level, are likely to represent sources of error that are far more important than the possible density contrast variations at the bottom of the basin generated by basement geology variations.
Another distinctive feature of the ITRESC method is that the final model is not forced to match the depth to basement at the boreholes. This may be considered a disadvantage, but it actually allows for investigation of the horizontal variations of the density function across the basin.
The modelling of the gravity data in the Yucca Flat area proved to be a successful test for the ITRESC procedure. The high number of depth constraints is a challenging feature of this interpretational case. By using a two-step approach, the ITRESC method produced a final model incorporating a horizontal density contrast variation between the northern and central marginal areas and the central-Southern area.
These results demonstrate that the ITRESC method can be a valid approach for interpreting gravity anomalies in terms of depth to basement in all cases where independent estimates of this depth at certain places are available. These constraints may be borehole data or any other geophysical interpretation models. The ITRESC method can be equally applied to magnetic anomalies, provided they are correctly transformed into pseudo-gravity anomalies.

Availability of Data and Material
Bishop basement model data and their description are available at https://wiki.seg.org/wiki/Bishop_Model. Borehole data for the Yucca Flat area are available at https://www.sciencebase.gov/mercury/#/.

Code Availability All the codes used in the modelling were written by the author and are not available. The diverging colourmap used in Figs. 1b and 10b was created by Peter Kovesi (2015) and is available at https://peterkovesi.com/projects/colourmaps/. I used the recursive Douglas-Peucker polyline simplification algorithm as implemented by Wolfgang Schwanghart, downloadable from https://it.mathworks.com/matlabcentral/fileexchange/21132-line-simplification.

Compliance with Ethical Standards
Conflict of interest The author declares no conflict of interest or competing interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.