Abstract
In inverse distance weighting interpolation the interpolated value is a weighted mean of the sampled values, with weights decreasing with the distances. The most widely adopted class of distance functions is the class of negative powers of order \(\alpha \), and the appropriate choice of the smoothing parameter \(\alpha \) is a crucial issue. In this paper, we give sufficient conditions for the design-based consistency of the inverse distance weighting interpolator when \(\alpha \) is selected by cross-validation techniques, and a pseudo-population bootstrap approach is introduced to estimate the accuracy of the resulting interpolator. A simulation study is performed to empirically confirm the theoretical findings and to investigate the finite-sample properties of the interpolator obtained using leave-one-out cross-validation. Moreover, a comparison with the nearest neighbor interpolator, which is the limiting case for \(\alpha =\infty \), is performed. Finally, the estimation of the surface of the Shannon diversity index of tree diameter at breast height in the experimental watershed of Bonis forest (Southern Italy) is described.
1 Introduction
Successful management of natural, social and economic resources requires detailed information about their spatial pattern. For example, mapping soil composition and mineral concentration is essential in geology, and pollutant concentrations in ecology, while climatologists are interested in mapping atmospheric variables, such as temperature, humidity, and precipitation. In these cases, the study region is constituted by a continuous set of locations, conceptualized as a continuous spatial population, with the density of the survey variable at each location giving rise to a surface. Occasionally, the study area is partitioned into a finite population of areas, such as a network of regular polygons, as frequently happens in forest inventories, or into a collection of irregular patches, such as administrative districts. The survey variable is the total amount of an attribute within each area, such as the tree biomass in forestry or the volume of a specific agricultural production in economics. Finally, finite populations of units scattered over the study region, such as the factories in a district or the shrubs in a natural reserve, may be of interest. In this case, the survey variable is the value of an attribute attached to each unit.
Spatial prediction enables estimating the value of the survey variable or of its density at unsampled locations on the basis of a sample of locations, thus allowing the construction of wall-to-wall maps depicting the spatial pattern of the survey variable throughout the whole study area. The inverse distance weighting (IDW) interpolation is a technique extensively applied by practitioners, also owing to the availability of GIS tools automatically implementing the interpolation. Commonly, IDW interpolation is considered as a non-stochastic method of spatial prediction, and, as such, no measure of uncertainty is associated with it (Cressie 1993, Sect. 5.9). In the IDW interpolation, according to the first law of geography by Tobler (1970), values recorded at sampled locations do not contribute equally, since the interpolated value is achieved as a weighted mean of the observed values, with weights decreasing with the distances to the location where interpolation has to be performed. Any positive decreasing function of the distances obviously allows giving less weight to observed values further away from the location. In particular, the most widely adopted class of functions is the class of negative power distance functions of order \(\alpha \), \(\phi (d)=d^{-\alpha }\) (see e.g., Gong et al. 2014; Noori et al. 2014; Bărbulescu et al. 2021), where d is a positive real number representing the distance and \(\alpha \) is a positive real number playing the role of the smoothing parameter. Therefore, the appropriate choice of the smoothing parameter becomes a crucial issue. A default value of 2 is commonly adopted when GIS software is used; nevertheless, \(\alpha \) can also be selected by means of cross-validation techniques, such as the leave-one-out cross-validation (LOOCV) (see e.g., Hall and Robinson 2009; Wu et al. 2019).
Recently, the IDW interpolator has been investigated for continuous populations (Fattorini et al. 2018a), finite populations of areas (Fattorini et al. 2018b) and finite populations of units (Fattorini et al. 2019) under a design-based approach. In this framework, the uncertainty associated with the interpolated values stems only from the probabilistic sampling scheme adopted to select locations.
Conditions ensuring design-based consistency of the IDW interpolator with negative power distance functions have been proven to hold for any fixed finite \(\alpha >2\) and also for \(\alpha =\infty \), which leads to the well-known nearest neighbour (NN) interpolator (Fattorini et al. 2018a, b, 2019, 2021).
The purpose of this paper is to derive sufficient conditions to prove the design-based asymptotic properties of the IDW interpolator when \(\alpha \) is selected according to LOOCV. In particular, asymptotic results are achieved in a unifying approach that includes the three types of spatial populations, thus rendering the theoretical developments less burdensome. The finite-sample performance of the corresponding IDW interpolator is empirically compared to that of the NN interpolator. Indeed, the latter avoids the computational effort needed for implementing LOOCV, which can be very time-consuming with large study areas where interpolation must be performed for thousands of points, areas or units. Furthermore, a pseudo-population bootstrap approach is introduced to obtain an estimator of the accuracy of the corresponding data driven IDW interpolator. The paper is organized as follows. Notation and setting are given in Sect. 2. In Sect. 3 the IDW interpolator is introduced in a unifying approach for the three types of spatial populations. Section 4 is devoted to the choice of the smoothing parameter by means of LOOCV and to the design-based consistency of the corresponding data driven IDW interpolator. In Sect. 5 a pseudo-population bootstrap estimator of the precision of the data driven IDW interpolator is proposed. A simulation study and a case study are described in Sects. 6 and 7, respectively, while concluding remarks are reported in Sect. 8. The Appendix contains technical details and proofs. The Supplementary Information contains figures referring to the simulation study.
2 Notation and setting
Consider a study region A that is assumed to be a compact set of \({\mathbb {R}}^{2}\) and denote by \(\lambda \) the Lebesgue measure on \({\mathbb {R}}^{2}\). Interest is in the estimation of the value or density of a survey variable Y on a subset B of A, where B can be a continuum of points, a finite population of areas or a finite population of units. In order to deal with the three types of spatial populations in a unifying approach, consider a function f related to the Y-values, which, without loss of generality, is supposed to be a bounded and measurable function with values in [0, L], with \(L\in {\mathbb {R}}\) and \(L<\infty \).
In the case of continuous populations, B coincides with A and f(p) is the density of Y at any point \(p\in A\). When finite populations of areas are considered, B coincides with A and is partitioned into N areas \({ a}_{1},\ldots ,{ a}_{N} \). In this framework, \(y_{j} \) and \(f_j={y_{j} }/{\lambda ({ a}_{j} )}\) are the amount and the density of Y within \({ a}_{j}\), respectively, and f turns out to be
for each \(p\in B\), where I(E) is the indicator function of the event E. Since the area size \(\lambda (a_j)\) is usually known for each \(j=1,\ldots ,N\), the knowledge of the piecewise constant surface f(p) is equivalent to the knowledge of \(f_1,\ldots ,f_N\). Finally, when finite populations of units are considered, B is the set \(\{p_1,\ldots ,p_N\}\) of N unit locations and \(y_j=f(p_j)\) is the value of the survey variable for the unit j.
Let \(P_1,\ldots ,P_n\) be n random variables with values in B that represent the n locations selected from B by means of a probabilistic sampling scheme. In the case of continuous populations, \(P_1,\dots ,P_n\) denote n locations selected in the continuum B and \({f(P}_1),\dots ,f(P_n)\) are the densities of Y recorded at those locations. In the case of finite populations of areas, \(P_1,\dots ,P_n\) denote the centroids identifying the n sampled areas and \(f(P_1),\dots ,f(P_n)\) are the densities recorded within the corresponding areas. Finally, in the case of finite populations of units, \(P_1,\dots ,P_n\) denote the locations of n sampled units and \(f(P_1),\dots ,f(P_n)\) are the values of Y for these units. In all these three settings, the goal is the estimation of f(p) for any \( p \in B \) on the basis of the values of f recorded at the sampled locations \(P_1,\dots ,P_n\).
3 The IDW interpolator under negative power distance functions
When mapping is considered in a design-based approach and, consequently, uncertainty only stems from the adopted sampling scheme, the use of an assisting model is necessary. Indeed, when estimating f(p) at a single location \(p\in B\), either p is sampled and there is no need for estimation, or p is unsampled, so that no information about it is available for performing estimation. The very simple Tobler’s first law of geography, i.e. units that are close in space tend to have more similar properties than units that are far apart (Tobler 1970), can be exploited as an assisting model. To this end, let \(Q_p=\bigcup ^n_{i=1}{\{P_i=c(p)\}}\), where \(c(p)=p\) for continuous populations and populations of units, while c(p) is the centroid of the area containing p for populations of areas, and denote by \(\phi :[0,\infty )\rightarrow {\mathbb {R}}^{+}\) a non-increasing continuous function on \((0,\infty )\), with \(\phi (0)=0,\mathop {\lim }\limits _{d\rightarrow 0^{+} } \phi (d)=\infty \). The IDW interpolator can be expressed as
with weights \(w_i(p)\)s given by
The properties of the IDW interpolator have been investigated by Fattorini et al. (2018a, 2018b) and Fattorini et al. (2019) for continuous populations, populations of areas and populations of units, respectively. In particular, three asymptotic scenarios, all referring to the infill paradigm (Cressie 1993), have been considered and the design-based asymptotic consistency of the IDW interpolator has been proved. Without entering into technical details, design-based asymptotic consistency has been achieved at the cost of supposing: (i) some forms of smoothness of the survey variable throughout the study region; (ii) the use of a sampling design able to asymptotically achieve spatial balance of the selected locations, that is to ensure that the selected locations are well spread throughout the study region; (iii) in the case of finite populations, some sort of regularities, such as in the shape of areas; (iv) some mathematical properties of the distance functions adopted for weighting sampled observations.
It must be pointed out that the previous conditions are likely to be satisfied in most real environmental surveys. Indeed, the smoothness assumption (i) is very common and is at the basis of most interpolation techniques (e.g., Cressie 1993, Sect. 3.1). Moreover, this assumption reasonably holds when dealing with natural phenomena, where the density of an attribute usually changes smoothly throughout space; when it changes abruptly, this usually occurs along borders delineating variations in the characteristics of the study region (e.g., forest-meadow borders). Obviously, design-based consistency does not hold where discontinuities are present. However, since borders may be realistically approximated by curves, well approaching the theoretical condition of discontinuity over a region of zero measure, design-based consistency is preserved on the whole.
Regarding condition (ii), the asymptotic spatial balance is ensured under the schemes usually applied in environmental surveys. The achievement of spatial balance has been a main target for a long time, and it is ensured under very complex schemes (see e.g., Grafström and Tillé 2013; Stevens and Olsen 2004; Jauslin and Tillé 2020) but also under some familiar, widely applied schemes, such as systematic grid sampling (SGS) and tessellation stratified sampling (TSS) when dealing with continuous populations, one-per-stratum stratified sampling (OPSS) and systematic sampling (SYS) when dealing with populations of areas, and stratified spatial sampling with proportional allocation when dealing with populations of units.
Furthermore, as to condition (iii), and in particular referring to the regularity of the shape of areas, it is ensured by the fact that in most cases, especially in environmental surveys, areas are regular polygons.
Finally, as to the choice of the distance function, the mathematical condition on \(\phi \) ensuring the design-based consistency of (1) is
Condition (2) does not actually constitute an assumption, since the distance function is chosen by the user.
A widely applied class of distance functions is the class of negative powers distance functions of order \(\alpha \), given by
which, for any fixed \(\alpha >2\), satisfies (2) and gives rise to weights of type
Consequently, the corresponding IDW interpolator turns out to be
which obviously depends on the chosen \(\alpha \)-value. The choice of \(\phi _{\alpha }(d)=d^{-\alpha }\) is particularly appealing owing to its simplicity and because the interpretation of \({\alpha }\) as a smoothing parameter is rather straightforward. Indeed, as the weights are decreasing functions of \({\alpha }\), the larger the values of \({\alpha }\), the smaller the contributions of the sampled points with larger distances from p. As a matter of fact, as argued by Fattorini et al. (2021), for \(\alpha \rightarrow \infty \) the IDW interpolator reduces to the well-known NN interpolator \({\widehat{f}}_{\infty }(p)\), in which the interpolated value of f(p) is the value observed at the sampled location nearest to p. More precisely, the NN spatial interpolator of f is the piecewise constant function given by
where \(H_p=\{i:\ \Vert P_i-c(p)\Vert ={\min }_{h=1,\dots ,n}\Vert P_h-c(p)\Vert \}\) is the set of sampled locations that are nearest to c(p). Fattorini et al. (2021) prove that the design-based consistency of (4) continues to hold; therefore, any choice of \(\alpha >2\), including \(\alpha =\infty \), ensures the design-based consistency of the IDW interpolator. To avoid arbitrariness, the choice of \(\alpha \) should be performed using data driven procedures.
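As an illustrative sketch only, the interpolator (3) and its NN limiting case (4) may be coded as follows. The function name, the array-based interface and the use of Euclidean distances are assumptions of this sketch rather than part of the original formulation; ties among nearest locations, which formally require averaging over \(H_p\), are resolved by taking the first nearest location.

```python
import numpy as np

def idw_interpolate(p, sample_points, sample_values, alpha):
    """Sketch of the IDW interpolator with weights phi_alpha(d) = d**(-alpha).

    p             : (2,) array, location where interpolation is required
    sample_points : (n, 2) array of sampled locations P_1, ..., P_n
    sample_values : (n,) array of f(P_1), ..., f(P_n)
    alpha         : smoothing parameter (alpha > 2 for design-based consistency,
                    np.inf for the NN limiting case)
    """
    d = np.linalg.norm(sample_points - p, axis=1)
    if np.any(d == 0):
        # p coincides with a sampled location: return the observed value
        return sample_values[d == 0][0]
    if np.isinf(alpha):
        # NN interpolator: value observed at the nearest sampled location
        # (formally, ties should be averaged over the set H_p)
        return sample_values[np.argmin(d)]
    w = d ** (-alpha)
    return np.sum(w * sample_values) / np.sum(w)
```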
4 The choice of the smoothing parameter
An intuitive and widely applied data driven procedure for choosing smoothing parameters is based on LOOCV methods (e.g., Giraldo et al. 2011; Ignaccolo et al. 2014; Montanari and Cicchitelli 2014). LOOCV is a multipurpose general criterion consisting in removing one sampled location from the dataset, interpolating the value or the density of the Y-variable at the removed location using all the other locations, and then repeating this process for each sampled location. The interpolated values are then compared with the actual values at the omitted locations. Compared to other cross-validation techniques, LOOCV does not suffer from the random selection of the so-called training set, and interpolations at the removed sample locations are obtained on the basis of a sample reduced by only one point. More precisely, according to LOOCV, \(\alpha \) should be selected to minimize
where \({\widehat{f}}_{\alpha ,-i}(P_i)\) is the IDW interpolated value obtained by means of the sample of \(n-1\) locations resulting from the deletion of the i-th sampled location. It is worth noting that (5), up to a multiplicative constant depending on n or N, can be considered a naïve estimator of the overall measure of precision given by
where \(\mu \) is the Lebesgue measure when continuous populations and populations of areas are considered, while it is the counting measure for populations of units. In particular, when dealing with continuous populations, (6) reduces to the design-based counterpart of the mean integrated squared error. Furthermore, when populations of equal-sized areas are considered, (6) reduces to
where \(c_j \) is the centroid of \(a_j\). Finally, for populations of units, (6) turns out to be
Therefore, the choice of \(\alpha \) by means of LOOCV can be interpreted as a criterion that minimizes an estimate of the overall precision measures. Finally, since (7) and (8), up to a multiplicative constant, represent totals of mean squared errors, an alternative approach for choosing \(\alpha \) can be based on the minimization of a Horvitz-Thompson (HT) type estimate. In particular, the HT estimate is
where \(\pi _i\) is the first-order inclusion probability of the i-th area or of the i-th unit. Note that if the sampling design adopted to select the areas or the units ensures equal first-order inclusion probabilities, the two approaches are identical.
In the following, the IDW interpolator in which \(\alpha \) is chosen by means of LOOCV is denoted by \(\widehat{f}_{\widehat{\alpha }}\) and henceforth, for the sake of brevity, referred to as the data driven (DD) interpolator.
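A minimal sketch of the LOOCV selection of \(\alpha \) follows, reusing the illustrative idw_interpolate function sketched in Sect. 3; the grid \(\{2,\ldots ,21\}\) mirrors the one used later in the simulation study, and all names are illustrative assumptions.

```python
import numpy as np

def loocv_score(alpha, sample_points, sample_values):
    """Criterion (5): sum of squared leave-one-out interpolation errors."""
    n = len(sample_values)
    sq_err = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        pred = idw_interpolate(sample_points[i], sample_points[mask],
                               sample_values[mask], alpha)
        sq_err += (sample_values[i] - pred) ** 2
    return sq_err

def select_alpha(sample_points, sample_values, grid=np.arange(2, 22)):
    """Data driven choice: the grid value minimizing the LOOCV criterion (5)."""
    scores = [loocv_score(a, sample_points, sample_values) for a in grid]
    return float(grid[int(np.argmin(scores))])
```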
Thanks to Proposition 1, reported in Appendix A, the design-based consistency results for the IDW interpolator obtained by Fattorini et al. (2018a, 2018b, 2019) can be extended to the DD interpolator. Moreover, Proposition 1 allows the asymptotic properties of the IDW interpolator with fixed \(\alpha \) to be extended not only to the DD interpolator, but also to the IDW interpolator when the smoothing parameter is chosen by any data driven procedure.
5 Pseudo-population bootstrap estimation of precision
Pseudo-population bootstrap is one of the bootstrap methods adopted in design-based inference (Mashreghi et al. 2016). It is based on constructing a pseudo-population, able to mimic the characteristics of the unknown population, from which bootstrap samples are selected using the same sampling scheme adopted in the survey.
Recently, Conti et al. (2020) provided, in a unified framework, the theoretical justification of the use of pseudo-population bootstrap in a wide range of situations. More precisely, they derived conditions on pseudo-populations, ensuring that the bootstrap distributions of plug-in estimators, resampled from these pseudo-populations by means of suitable resampling schemes, asymptotically coincide with the actual distributions of the estimators. However, the crucial condition is that, as the population size increases, the sequence of designs converges to the rejective sampling design of maximum entropy. Unfortunately, as pointed out by Franceschi et al. (2022), these results cannot be exploited in spatial surveys. Indeed, the most effective sampling designs are aimed at achieving spatial balance and do not generally converge to the rejective design of maximum entropy. Thus, we propose to use the DD interpolator for constructing the pseudo-population from which bootstrap samples are selected by means of the sampling scheme adopted to select \(P_1, \ldots , P_n\) and to estimate the mean squared error of \(\widehat{f}_{\widehat{\alpha }}(p)\) by means of the mean squared error of the bootstrap distribution. The intuition behind this proposal is that, under conditions ensuring consistency of the DD interpolator, the pseudo-population converges to the true one in such a way that the bootstrap distribution should converge to the true distribution, thus providing reliable estimators of the mean squared error.
Accordingly, for each \(p\in B\), the pseudo-population bootstrap estimator of the root mean squared error of \(\widehat{f}_{\widehat{\alpha }}(p)\) is given by
where M is the number of bootstrap samples and \({\widehat{f}}_{{\widehat{\alpha }},m}^*(p)\) is the bootstrapped value of the IDW interpolator at \(p\in B\) based on \(\widehat{f}_{\widehat{\alpha }}(P^*_{1,m}),\dots ,\widehat{f}_{\widehat{\alpha }}(P^*_{n,m})\), i.e. for any \(p\in B\) and \(m=1, \ldots , M\)
where \(P^*_{1,m},\dots ,P^*_{n,m}\) are the locations selected in the m-th bootstrap resampling using the scheme adopted to select the original sample, \({Q}^*_{p,m}=\cup ^n_{i=1}\{P^*_{i,m}=c(p)\}\) and
where, with a slight abuse of notation, \({\widehat{\alpha }}\) is the smoothing parameter selected by means of LOOCV on the m-th bootstrap sample.
In Proposition 2 of the Appendix it is proven that, for n and M large enough, under suitable conditions,
that is, the pseudo-population bootstrap estimator (10) is not too conservative. Relation (11) continues to hold for random size sampling designs (such as 3P sampling, implemented in the simulation study for the population of units), when the expected value of the reciprocal of the sample size is sufficiently small (see the Remark in Appendix B). Moreover, it is worth noting that (11) holds not only for the DD interpolator but also for any IDW interpolator, whatever data driven procedure is adopted to choose the smoothing parameter, and for any fixed \(\alpha >2\).
Even if (11) may suggest a large overestimation of the true mean squared error, which may even mask the effectiveness of the DD interpolator, \(\sqrt{10}\) is just an upper bound and, as such, it should be viewed as a threshold limiting the possible overestimation. As to the conditions for (11) to hold, the first concerns the sampling scheme adopted to select the sample points and is basically satisfied by spatially balanced sampling schemes, under which the consistency of the DD interpolator is also ensured, while the second is a mathematical condition on f needed in the case of continuous populations and finite populations of areas. In particular, f is required to be differentiable at p with \(\nabla f(p)\ne 0\). Finally, the requirement of M large enough can be readily satisfied by simply increasing the computational effort.
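A sketch of the pseudo-population bootstrap estimator (10) is reported below, building on the illustrative functions of the previous sections. It assumes that the bootstrap mean squared error is computed around the pseudo-population value \(\widehat{f}_{\widehat{\alpha }}(p)\) and that draw_sample is a user-supplied routine implementing the original sampling scheme; both are assumptions of the sketch, not a definitive implementation.

```python
import numpy as np

def bootstrap_rmse(p, sample_points, sample_values, draw_sample, M=1000):
    """Sketch of the pseudo-population bootstrap RMSE estimator (10)."""
    # The DD-interpolated surface plays the role of the pseudo-population
    alpha_hat = select_alpha(sample_points, sample_values)
    f_hat_p = idw_interpolate(p, sample_points, sample_values, alpha_hat)

    sq_errors = np.empty(M)
    for m in range(M):
        boot_points = draw_sample()              # P*_{1,m}, ..., P*_{n,m}
        # pseudo-population values read off the DD-interpolated surface
        boot_values = np.array([idw_interpolate(q, sample_points,
                                                sample_values, alpha_hat)
                                for q in boot_points])
        # alpha re-selected by LOOCV on each bootstrap sample, as in Sect. 5
        alpha_star = select_alpha(boot_points, boot_values)
        f_star_p = idw_interpolate(p, boot_points, boot_values, alpha_star)
        sq_errors[m] = (f_star_p - f_hat_p) ** 2
    return np.sqrt(sq_errors.mean())
```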
6 Simulation studies
An extensive simulation study has been performed in order to empirically check the theoretical findings on the design-based consistency of the DD interpolator and to assess its finite-sample properties. Moreover, the performance of the DD interpolator has been compared to that of the NN interpolator, which avoids the intensive computational effort needed for the selection of the smoothing parameter. More precisely, the simulation study aims to evaluate the absolute bias (AB) and root mean squared error (RMSE) of the DD and NN interpolators at any location where interpolation is performed.
As to the estimation of the root mean squared error, the pseudo-population bootstrap estimator (10) has not been implemented in the simulation study owing to the unworkable increase of the computational effort, in addition to that involved in the LOOCV. However, the performance of the pseudo-population bootstrap estimator has already been investigated for the NN interpolator. In particular, referring to the same three population scenarios and to the same artificial surfaces adopted in this simulation study, an intensive simulation study suggested the conservative nature of the pseudo-population bootstrap estimator.
The three artificial surfaces, Surf1, Surf2 and Surf3, used for generating continuous populations, finite populations of areas and finite populations of units are, at any location \({p}=(p_{1},p_{2})\), respectively given by
where the constants \(C_1\), \(C_2\) and \(C_3\) ensure a maximum value of 10. The three surfaces are displayed in Fig. 1 of the Supplementary Information.
6.1 Continuous populations
Referring to the asymptotic scenario in Fattorini et al. (2018a), the three surfaces represent artificial population densities on the square study region \((0,1)\times (0,1)\) and samples of increasing size are considered. In particular, sampling is performed by selecting \(n=16, 36, 64, 100\) locations by means of uniform random sampling (URS), TSS and SGS, all ensuring consistency of the DD and NN interpolators. URS is the most straightforward scheme and consists in randomly and independently selecting the n locations. TSS consists of partitioning the study region into n spatial subsets of equal size and randomly and independently selecting a location in each subset. Moreover, if a regular tessellation of the study region into n regular polygons is considered, SGS, which consists of randomly selecting a location in one polygon and systematically repeating it in the remaining polygons, can also be performed.
For implementing the last two schemes, the unit square is partitioned into \(4\times 4\), \(6\times 6\), \(8\times 8\), \(10\times 10\) grids of equal-sized quadrats and a location is selected in each quadrat.
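For illustration only, the three schemes on the unit square may be simulated as follows for a \(k\times k\) grid (so that \(n=k^{2}\)); the function names and implementation details are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def urs(n):
    """Uniform random sampling: n locations drawn independently on (0,1)x(0,1)."""
    return rng.random((n, 2))

def tss(k):
    """Tessellation stratified sampling: one independent random location
    in each cell of a k x k grid of equal-sized quadrats."""
    cells = np.array([(i, j) for i in range(k) for j in range(k)], dtype=float)
    return (cells + rng.random((k * k, 2))) / k

def sgs(k):
    """Systematic grid sampling: a random location selected in one cell and
    repeated systematically (same within-cell offset) in all remaining cells."""
    cells = np.array([(i, j) for i in range(k) for j in range(k)], dtype=float)
    return (cells + rng.random(2)) / k
```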
6.2 Populations of areas
Following the asymptotic scenario in Fattorini et al. (2018b), the square study region \((0,1)\times (0,1)\) is partitioned into an increasing number N of areas of decreasing size and samples of increasing size are selected. More precisely, for each artificial surface, four populations of \(N=100, 400, 900, 1600\) areas are constructed by partitioning the unit square into grids of \(10\times 10\), \(20\times 20\), \(30\times 30\), \(40\times 40\) quadrats and taking, as Y-values, the integrals of the surface within quadrats. The Y-values are rescaled in such a way that the maximum value is 10. Sampling is performed by selecting \(n=0.1N\) quadrats by means of simple random sampling without replacement (SRSWOR), OPSS and SYS, all guaranteeing consistency. OPSS is implemented by partitioning the population into n blocks/strata of contiguous areas and randomly selecting an area from each block/stratum. When populations are constituted by grids of regular polygons, SYS can alternatively be performed by randomly selecting a polygon in one block and then repeating it in the remaining \(n-1\) blocks. The last two schemes are performed by partitioning grids into blocks of \(2{\times } 5\) contiguous quadrats and selecting one quadrat per block.
6.3 Populations of units
According to the asymptotic scenario in Fattorini et al. (2019), nested populations and samples of increasing size are considered on the study region \((0,1)\times (0,1)\). More precisely, three nested populations of \(N=500, 1000, 1500\) units are located on the unit square in accordance with four spatial patterns referred to as regular, random, trended and clustered. As to the regular pattern, populations are constructed by independently generating the first 500 locations at random but discarding those having distances smaller than \(0.5\times {500}^{-1/2}\) to those previously generated, then adding a further 500 locations at random but discarding those with distances smaller than \(0.5\times {1000}^{-1/2}\) to those previously generated, and, finally, randomly adding a further 500 locations but discarding those having distances smaller than \(0.5\times {1500}^{-1/2}\) to those previously generated. As to the random pattern, populations are constructed by independently generating 1500 locations at random and then assigning the first 500 to the smallest population, the first 1000 to the second one and all of them to the largest. As to the trended pattern, populations are constructed by independently generating 1500 pairs of random numbers \(u_{1}, u_{2}\) uniformly distributed on [0, 1], performing the transformation \((1-u_{1}^{2}, 1-u_{2}^{2})\) to determine locations, and then assigning the first 500 locations to the smallest population, the first 1000 to the second one and all of them to the largest. Finally, as to the clustered pattern, populations are constructed by independently generating 10 cluster centers at random and assigning to each cluster 50 locations generated from a spherical normal distribution centered at the cluster center with variance 0.025, then adding a further 50 locations to each cluster from the same distribution and, at last, adding a further 50 locations to each cluster from the same distribution. Points falling outside the unit square are discarded and newly generated. The three surfaces are used for assigning the Y-values in the populations.
As to the choice of the sampling scheme, only 3P sampling, from the acronym of “probability proportional to prediction”, is considered. Indeed, some of the most relevant populations of units whose interpolation is of interest are probably natural populations such as trees or shrubs. In such situations the list and the unit locations are usually not available and mapping is commonly precluded. The sole cases in which mapping is possible occur for populations located in study regions of limited size, where all the units can be visited, and therefore located and listed. In this case, 3P sampling is commonly performed.
Under 3P sampling, all the units of the population are visited by a crew of experts, a prediction \(x_j\) for the value of the survey variable is given by the experts for each unit j of the population and units are independently included in the sample with probabilities \({\pi }_j=x_j/L^*\) where \(L^*\) must be large enough to ensure that \({\pi }_j\le 1\) for each j (Gregoire and Valentine 2008).
Following Kinnunen et al. (2007), experts’ predictions for the \(y_j\)s are obtained by assuming the existence of a maximum error rate of prediction \(\rho \in (0,1)\) occurring at the extremes of the values of the survey variable, in such a way that small values near the lower bound l are overestimated and large values near the upper bound L are underestimated. In this case, the prediction ranges from \((1+\rho )l\) to \((1-\rho )L\) when the value of the survey variable is equal to l and L, respectively. Moreover, for simplicity, experts’ predictions for the \(y_j\)s are generated using the relationship \(x_j=a+by_j\) with \(b=1-\rho (L+l)/(L-l)\) and \(a=(1+\rho )l-bl\), where \(L=10\), \(\rho =0.10\) and \(l=4\). In this case, \(L^*=50\) is adopted. Units with Y-value smaller than \(l=4\) are discarded from the populations in order to ensure a lower bound of \({\pi }_0=0.08\) for the inclusion probabilities. The predictions, together with the choice of l, L and \(L^*\), ensure an expected sampling fraction ranging from about \(12\%\) to \(15\%\) in all cases, according to the spatial pattern of the units.
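The following sketch reproduces, under the settings stated above, the generation of the experts' predictions and the 3P (Bernoulli-type) selection; the function name and the return format are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def threep_sample(y, rho=0.10, l=4.0, L=10.0, L_star=50.0):
    """Sketch of 3P selection with linear experts' predictions (Sect. 6.3)."""
    b = 1 - rho * (L + l) / (L - l)          # slope of the prediction relationship
    a = (1 + rho) * l - b * l                # intercept
    x = a + b * np.asarray(y, dtype=float)   # experts' predictions x_j
    pi = x / L_star                          # first-order inclusion probabilities
    sampled = rng.random(len(x)) < pi        # independent Bernoulli selections
    return np.flatnonzero(sampled), pi
```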
6.4 Simulation implementation and results
For each combination of population, sampling scheme, and sample size, sampling is replicated 10,000 times. At each simulation run, for each \({p}\in B\), the DD interpolator is computed by considering the \(\alpha \) value which minimizes (5) for \(\alpha \) in \(\left\{ 2,\ldots ,21 \right\} \). In particular, for continuous populations, interpolation is performed on a regular grid of \(100\times 100\) locations on \((0,1)\times (0,1)\). As to populations of units, inaccurate and imprecise interpolations of the smallest values of the survey variable may occur (Fattorini et al. 2020). To overcome this drawback, the prediction errors, given by the differences between the Y-values and the corresponding experts’ predictions, are interpolated. Thus, the interpolated Y-values are given by the sum of the experts’ predictions and the interpolated errors, as sketched below.
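For populations of units, the interpolation of prediction errors described above amounts to the following sketch, again reusing the illustrative idw_interpolate function; under 3P sampling the expert prediction \(x_j\) is available for every unit, and the names below are hypothetical.

```python
def interpolate_unit_value(p_j, x_j, sample_points, sample_errors, alpha):
    """Interpolated Y-value at an unsampled unit j: expert prediction plus the
    IDW-interpolated prediction error, where sample_errors collects the
    differences y_i - x_i observed at the sampled units."""
    return x_j + idw_interpolate(p_j, sample_points, sample_errors, alpha)
```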
Since for larger values of \(\alpha \) the DD interpolator is practically indistinguishable from the NN interpolator, if the minimum is reached for \(\widehat{\alpha }=21\), NN interpolation is performed. Although for \(\alpha =2\) the asymptotic properties of the IDW interpolator are not proven, this value has been considered since it is a rather common choice for practitioners, being the default value of some widely applied GIS software, such as ArcGIS and Surfer. Furthermore, at each simulation run, the NN interpolator is also computed.
For each location where interpolation is performed, the AB and the RMSE of DD and NN interpolators are computed from the Monte Carlo distributions of the corresponding estimates. Furthermore, the mode of the Monte Carlo distribution of the finite values of \(\widehat{\alpha }\) selected by means of (5) and the percentage of simulation runs giving rise to \(\widehat{\alpha }=\infty \) (\(F_\infty \)) are calculated. For any combination of population, sampling scheme and sample size, Tables 1, 2 and 3 report the minima, maxima and means of AB and RMSE, together with the mode and \(F_\infty \) for the DD interpolator.
Additionally, the performance of the DD and NN interpolators is empirically compared by computing, for each location, a measure of relative efficiency (RE) as the ratio between the RMSE of the NN interpolator and that of the DD interpolator. The corresponding cumulative distribution functions are displayed in Figs. 1, 2 and 3. Figures 2–10 of the Supplementary Information show the spatial pattern of RE for all the populations, sampling schemes and sample sizes.
The simulation results confirm the theoretical findings. Indeed, from Tables 1, 2 and 3 it is at once apparent that, for each combination of population, surface and sampling scheme, both AB and RMSE generally decrease as the sample size increases, with very few exceptions for populations of areas when surface 3 and SYS are considered. As to the choice of the smoothing parameter, for continuous populations and for populations of areas, the DD interpolator generally reduces to the NN interpolator under SGS and SYS, respectively. When the two interpolators do not coincide, even if the AB averages tend to be smaller for the NN interpolator, the DD interpolator outperforms the NN interpolator in terms of averages of RMSE and in terms of RE. The performance of the two interpolators tends to be more comparable under surface 3, which presents some discontinuities. Indeed, from Figs. 1 and 2 it is apparent that, for surface 1 and surface 2, under URS and TSS in the case of continuous populations, and under SRSWOR and OPSS in the case of populations of areas, the percentage of points where RE is smaller than 1 is rather low, while, for surface 3, the percentage is rather close to 50%.
When populations of units are considered, the percentage of simulation runs in which the DD and NN interpolators coincide is mostly higher than \(70\%\) for all combinations of surfaces and sampling schemes (Table 3). However, the NN interpolator shows its superiority in terms of AB, RMSE and RE (Table 3; Fig. 3).
Therefore, in order to give some practical recommendations, when dealing with continuous populations and populations of areas the DD interpolator seems preferable to the NN interpolator, while, with populations of units, the NN interpolator should be adopted.
Finally, the DD interpolator has also been implemented with \(\alpha \) chosen to minimize (9), giving rise to very similar results, which are not reported for the sake of brevity.
7 Case study
The proposed mapping strategy was applied for the estimation of the surface of the Shannon diversity index of tree diameter at breast height (DBH) in the experimental watershed of Bonis forest (139 ha), located in the mountain area of Sila Greca (Southern Italy) and mainly characterized by pinewoods originating from artificial reforestation. Mapping the DBH Shannon diversity index is crucial for evaluating the re-naturalization processes ongoing to various degrees across the watershed, which in turn are related to structural heterogeneity. In particular, for any location p of the watershed, the surface value, given by the DBH Shannon diversity index computed on a circular plot of radius 20 m centered at p, was of interest.
Data from a survey implemented in 2016, shared by the Department for Innovation in Biological, Agro-food and Forest Systems (University of Tuscia), were adopted. More precisely, plot sampling was performed by locating 36 circular plots of radius 20 m by means of URS (see Fig. 4). For each plot, DBHs were recorded and then grouped into five diameter classes (less than 17.5 cm, from 17.5 to 35 cm, from 35 to 52.5 cm, from 52.5 to 70 cm, and more than 70 cm), and the Shannon diversity index was determined.
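For completeness, assuming the standard form of the Shannon diversity index applied to the five DBH-class proportions, the surface value at a location p is
\[
H(p)=-\sum _{k=1}^{5} q_{k}(p)\,\ln q_{k}(p),
\]
where \(q_{k}(p)\) denotes the proportion of stems falling in the k-th diameter class within the circular plot of radius 20 m centered at p, and terms with \(q_{k}(p)=0\) are set to zero.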
Surface estimation was performed at the centroids of 10,000 equal-sized polygons partitioning the study region by means of the IDW interpolator (3) with \(\widehat{\alpha }=3\) (see Fig. 5a). The selected smoothing parameter was obtained by the LOOCV procedure described in Sect. 4. From the resulting surface, 1000 bootstrap samples of size 36 were selected according to URS to estimate RMSEs by means of (10).
Owing to the relationship between the re-naturalization process and structural heterogeneity, Fig. 5a allows identifying areas characterized by more or less advanced re-naturalization. Furthermore, low uncertainty values in a very large portion of the study region can be easily detected from Fig. 5b. Therefore, the estimated map can be reasonably considered a very helpful tool for investigating re-naturalization processes and, more generally, for watershed management.
8 Conclusions
As pointed out by Maleika (2020) and Joseph and Kang (2011), a common choice for \(\alpha \) is 2, which also constitutes the default value in widely applied GIS software, and even smaller \(\alpha \)-values are considered in the literature (see e.g., Bărbulescu et al. 2021). Often, the value of \(\alpha \) is selected either by a visual inspection of the resulting map or by using a cross-validation approach. If the value is arbitrarily selected by the researcher, only values of \(\alpha >2\) should be considered, as they guarantee the design-based consistency of the IDW interpolator (Fattorini et al. 2018a, b, 2019). In this paper, design-based consistency is proven to hold for \(\alpha >2\) also when \(\alpha \) is obtained by optimizing any function of the sampled locations. In particular, when cross-validation techniques are adopted, minimization of the summary statistics quantifying the discrepancy between observed and predicted values should be performed considering only \(\alpha \) values greater than 2. Indeed, these consistency results add statistical rigor to extensively adopted cross-validation techniques for implementing IDW interpolation. Moreover, the empirical results suggest that, for finite sample sizes, the performance of the DD interpolator seems to be superior to that of the NN interpolator when continuous populations and populations of areas are considered. However, their performance seems to be more comparable when discontinuities are present and no systematic designs are considered. Thus, with real populations, where discontinuities are present but the set of discontinuity points has measure zero, both interpolators are still consistent from a design-based perspective, but the NN interpolator may become competitive. Finally, for the considered populations of units, under 3P sampling, the NN interpolator is undoubtedly preferable.
References
Bărbulescu A, Şerban C, Marina-Larisa I (2021) Computing the beta parameter in IDW interpolation by using a genetic algorithm. Water 13(6):863
Conti PL, Marella D, Mecatti F, Andreis F (2020) A unified principled framework for resampling based on pseudo-populations: asymptotic theory. Bernoulli 26(2):1044–1069
Cressie NA (1993) Statistics for spatial data. Wiley, New York
Fattorini L, Franceschi S, Corona P (2020) Design-based mapping of tree attributes by 3p sampling. Biom J 62(7):1810–1825
Fattorini L, Marcheselli M, Pisani C, Pratelli L (2018a) Design-based maps for continuous spatial populations. Biometrika 105(2):419–429
Fattorini L, Marcheselli M, Pisani C, Pratelli L (2019) Design-based mapping for finite populations of marked points. Electron J Stat 13(1):2121–2149
Fattorini L, Marcheselli M, Pisani C, Pratelli L (2021) Design-based properties of the nearest neighbor spatial interpolator and its bootstrap mean squared error estimator. Biometrics 78:1454
Fattorini L, Marcheselli M, Pratelli L (2018b) Design-based maps for finite populations of spatial units. J Am Stat Assoc 113(522):686–697
Franceschi S, Di Biase RM, Marcelli A, Fattorini L (2022) Some empirical results on nearest-neighbour pseudo-populations for resampling from spatial populations. Stats 5(2):385–400
Giraldo R, Delicado P, Mateu J (2011) Ordinary kriging for function-valued spatial data. Environ Ecol Stat 18(3):411–426
Gong G, Mattevada S, O’Bryant SE (2014) Comparison of the accuracy of kriging and IDW interpolations in estimating groundwater arsenic concentrations in Texas. Environ Res 130:59–69
Grafström A, Tillé Y (2013) Doubly balanced spatial sampling with spreading and restitution of auxiliary totals. Environmetrics 24(2):120–131
Gregoire T, Valentine H (2008) Sampling strategies for natural resources and the environment. CRC Press, Boca Raton
Hall P, Robinson AP (2009) Reducing variability of crossvalidation for smoothing-parameter choice. Biometrika 96(1):175–186
Ignaccolo R, Mateu J, Giraldo R (2014) Kriging with external drift for functional data for air quality monitoring. Stoch Environ Res Risk A 28(5):1171–1186
Jauslin R, Tillé Y (2020) Spatial spread sampling using weakly associated vectors. J Agric Biol Environ Stat 25(3):431–451
Joseph VR, Kang L (2011) Regression-based inverse distance weighting with applications to computer experiments. Technometrics 53(3):254–265
Kinnunen J, Maltamo M, Päivinen R (2007) Standing volume estimates of forests in Russia: how accurate is the published data? Forestry 80(1):53–64
Maleika W (2020) Inverse distance weighting method optimization in the process of digital terrain model creation based on data collected from a multibeam echosounder. Appl Geomat 12(4):397–407
Mashreghi Z, Haziza D, Léger C (2016) A survey of bootstrap methods in finite population sampling. Stat Surv 10:1–52
Montanari GE, Cicchitelli G (2014) Sampling theory and geostatistics: a way of reconciliation, contributions to sampling statistics. Springer, New York, pp 151–165
Noori MJ, Hassan HH, Mustafa YT (2014) Spatial estimation of rainfall distribution and its classification in Duhok governorate using GIS. J Water Resour Prot 6:75–82
Stevens DL Jr, Olsen AR (2004) Spatially balanced sampling of natural resources. J Am Stat Assoc 99(465):262–278
Tobler WR (1970) A computer movie simulating urban growth in the Detroit region. Econ Geogr 46:234–240
Wu CY, Mossa J, Mao L, Almulla M (2019) Comparison of different spatial interpolation methods for historical hydrographic data of the lowermost Mississippi river. Ann GIS 25(2):133–151
Acknowledgements
The authors acknowledge the support of NBFC to University of Siena, funded by the Italian Ministry of University and Research, PNRR, Missione 4 Componente 2, “Dalla ricerca all’impresa”, Investimento 1.4, Project CN00000033.
Funding
Open access funding provided by Università degli Studi di Siena within the CRUI-CARE Agreement.
Appendices
Appendix A
In order to derive the asymptotic design-based unbiasedness and consistency of the DD interpolator, following the proofs by Fattorini et al. (2018a, 2018b, 2019), it is enough to prove Proposition 1.
Proposition 1
Suppose there exists \(\alpha _0>2\) such that \(\widehat{\alpha }>\alpha _0\). For any \(\delta ,\delta ^\prime >0\) with \(\delta ^\prime <\delta \) and for each \(p\in B\)
where \(A_{i} (p,\delta )=Q_{p}^{c} \cap \left\{ \Vert c(p)-P_{i}\Vert >\delta \right\} \) and \(B_{n} (\delta ^\prime ,p)=\bigcap _{i=1}^{n}\{\Vert c(p)-P_{i}\Vert >\delta ^\prime \}\). Moreover, it holds
where \({{\mathcal {D}}}\) is a suitable countable subset of B.
Proof
Since \(\widehat{\alpha }\ge \alpha _0\) and it holds
for any \(i=1,\ldots ,n\), then
and
Adding (A3) and (A4) and taking expectation, inequality (A1) immediately follows. Similarly, inequality (A2) holds because
and
which imply
\(\square \)
Appendix B
Proposition 2
Suppose that, for a given sample size n, the sampling design ensures the existence of \(\delta _n>0\) such that
with \(\lim _n\delta _n=0\), and that there exists a vector \(a\in {\mathbb {R}}^{2}\), \(a \not =0\), and a function \(q\mapsto o(\Vert {q-p}\Vert )\) negligible with respect to \(\Vert {q-p}\Vert \), such that
Then, there exists \(n_0\) such that for any \(n\ge n_0\) and for M large enough, it holds
Proof
Since \((P^*_{1,m}, \ldots , P^*_{n,m})\) for \(m=1,\ldots , M\) are independent and identically distributed random vectors, owing to the strong law of large numbers, conditionally on \(P_1,\dots ,P_n\), as M increases, \({\widehat{V}}^*_{{\widehat{\alpha }},M}(p)\) converges almost surely to
where \(E^*\) denotes expectation conditionally on \(P_1,\dots ,P_n\).
Now consider the ratio
that, for M sufficiently large, is equivalent to
since \({\widehat{V}}^*_{{\widehat{\alpha }},M}(p)\) converges almost surely to \({V}^*_{\widehat{\alpha }}(p,{P}_1,\dots ,{P}_n)\).
From the elementary inequality \((a+b)^2\le 2(a^2+b^2)\) it follows that
Since
it is enough to prove that
To this aim, note that
where
Since \({\widehat{\alpha }}=\phi (P_1,\ldots ,P_n)\) and, for large n, \({\widehat{\alpha }}\approx \phi (P^*_{1,1},\ldots ,P^*_{n,1})\) it follows
Thanks to (B5) and (B6), the function f can be considered linear and, in this case,
Then, for large n, it holds
The proposition is thus proven. \(\square \)
Remark
When a random size sampling design is considered, Proposition 2 continues to hold under (B5) and (B6) if the expected value of the reciprocal of the sample size is sufficiently small. Indeed, let \({\mathcal {N}}\) be the r.v. denoting the sample size. Then
in such a way that the probability that \({\mathcal {N}}\) is greater than the threshold \(n_0\) is large.
Keywords
- Inverse distance weighting interpolator
- Pointwise and uniform consistency
- Pseudo-population bootstrap
- Spatial populations