Appendix A
Our aim is to build a function that describes the property variation, given by
$$ s(\textbf{x}) = \sum\limits_{j = 1}^{3} d_{j} p_{j}(\textbf{x}) + \sum\limits_{i = 1}^{N} c_{i} \phi(\|\textbf{x} - \textbf{x}_{i}\|), $$
(A.1)
where $p_{1}(\textbf{x}) = 1$, $p_{2}(\textbf{x}) = r$, $p_{3}(\textbf{x}) = z$, $\phi$ is the radial basis function, and $\textbf{c} = (c_{1}, c_{2}, \ldots, c_{N})$ and $\textbf{d} = (d_{1}, d_{2}, d_{3})$ are the unknown coefficients (to be determined). There are numerous forms of radial basis functions available, such as linear, Gaussian, and multiquadric (Buhmann 2004). Here, we use a thin-plate spline radial basis function, given by Wahba (1990)
$$ \phi(\rho) = \frac{1}{8 \pi} \rho^{2} \log(\rho), $$
(A.2)
where $\rho = \|\textbf{x} - \textbf{x}_{i}\|$ is the distance between a data point, $\textbf{x}_{i}$ (a centre of the RBF), and a point on the surface. Nonlocal bases, i.e. those for which $\phi(\rho) \rightarrow \infty$ as $\rho \rightarrow \infty$, may perform better than local bases. Furthermore, the thin-plate spline does not depend on a user-set shape parameter (Holmes and Mallick 1998), and is invariant under translation and rotation transformations (Franke 1982).
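As a concrete illustration, Eq. (A.2) may be implemented as in the following minimal Python sketch; the function name `tps_kernel` is our own choice, and the value at $\rho = 0$ is set to its limit of zero.

```python
import numpy as np

def tps_kernel(rho):
    """Thin-plate spline RBF of Eq. (A.2): phi(rho) = rho^2 log(rho) / (8 pi)."""
    rho = np.asarray(rho, dtype=float)
    out = np.zeros_like(rho)
    mask = rho > 0.0  # rho^2 log(rho) -> 0 as rho -> 0, so out stays 0 there
    out[mask] = rho[mask] ** 2 * np.log(rho[mask]) / (8.0 * np.pi)
    return out
```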
To obtain the unknown coefficients, $\textbf{c}$ and $\textbf{d}$, we begin by assuming the data, $y_{i}$, can be modelled as
$$ y_{i} = s(\textbf{x}_{i}) + \epsilon_{i}, \quad i = 1, \ldots, N, $$
(A.3)
where $\epsilon_{i} \sim N(0, \sigma^{2})$ is the error. We must solve the minimisation problem,
$$ \min_{\textbf{c}, \textbf{d}} \frac{1}{N} \sum\limits_{i = 1}^{N} \left( y_{i} - s(\textbf{x}_{i})\right)^{2} + \lambda J(s), $$
(A.4)
where λ is the smoothing parameter and J(s) is the penalty functional, given by
$$ J(s) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \left( s_{x_{1} x_{1}}^{2} + 2 s_{x_{1} x_{2}}^{2} + s_{x_{2} x_{2}}^{2} \right) dA. $$
(A.5)
For $\lambda = 0$, $s(\textbf{x})$ becomes a surface that interpolates the data, and as $\lambda \rightarrow \infty$ we obtain the linear least-squares solution (Wahba 1990).
Wahba (1990) shows that the solution to the minimisation problem (Eq. (A.4)) can be found by solving the linear system,
$$ (K + N\lambda I)\textbf{c} + P\textbf{d} = \textbf{y}, $$
(A.6)
$$ P^{T} \textbf{c} = \textbf{0}, $$
(A.7)
where $K$ is the $N \times N$ matrix with $(i,j)$ entry $\phi(\|\textbf{x}_{i} - \textbf{x}_{j}\|)$, $P$ is the $N \times 3$ matrix with $(i,k)$ entry $p_{k}(\textbf{x}_{i})$, $I$ is the $N \times N$ identity matrix, the superscript $T$ denotes the transpose, $\textbf{c} = (c_{1}, \ldots, c_{N})^{T}$, $\textbf{d} = (d_{1}, d_{2}, d_{3})^{T}$, $\textbf{y} = (y_{1}, \ldots, y_{N})^{T}$, and $\textbf{0}$ is the $3 \times 1$ zero vector.
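One convenient way to solve Eqs. (A.6) and (A.7) numerically, for a given $\lambda$, is to assemble them as a single $(N+3) \times (N+3)$ block system. The sketch below assumes each $\textbf{x}_{i} = (r_{i}, z_{i})$ is stored as a row of an $(N, 2)$ array and reuses the `tps_kernel` sketch above; `fit_tps` is an illustrative name, not a routine from the source.

```python
import numpy as np

def fit_tps(x, y, lam):
    """Solve Eqs. (A.6)-(A.7) for c and d via the equivalent block system
    [[K + N*lam*I, P], [P^T, 0]] [c; d] = [y; 0]."""
    N = x.shape[0]
    # Kernel matrix K with (i, j) entry phi(||x_i - x_j||).
    rho = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    K = tps_kernel(rho)
    # Polynomial matrix P with columns p_1 = 1, p_2 = r, p_3 = z.
    P = np.column_stack([np.ones(N), x[:, 0], x[:, 1]])
    A = np.block([[K + N * lam * np.eye(N), P],
                  [P.T, np.zeros((3, 3))]])
    b = np.concatenate([y, np.zeros(3)])
    sol = np.linalg.solve(A, b)
    return sol[:N], sol[N:]   # c = (c_1, ..., c_N), d = (d_1, d_2, d_3)
```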
We see from Eq. (A.6) that to account for the penalty term we simply adjust the diagonal elements of $K$. To compute the value of $\lambda$, we utilise generalised cross validation (GCV) (Wahba 1990). This involves minimising the GCV function, $V(\lambda)$, where
$$ V(\lambda) = \frac{N \|(I - A(\lambda))\textbf{y}\|^{2}}{\left[\text{Tr}(I - A(\lambda))\right]^{2}}. $$
(A.8)
Here, $\text{Tr}(\cdot)$ is the trace operator, and $A(\lambda)$ is known as the influence matrix, which can be calculated from (Wahba 1990)
$$ I - A(\lambda) = N \lambda Q_{2} \left( Q_{2}^{T} (K + N \lambda I) Q_{2} \right)^{-1} Q_{2}^{T}. $$
(A.9)
Here, $Q_{2}$ is computed from the QR decomposition of $P$, namely
$$ P = \left[\begin{array}{cc} Q_{1} & Q_{2} \end{array}\right] \left[\begin{array}{c} R \\ 0 \end{array}\right], $$
(A.10)
where $Q_{1} \in \mathbb{R}^{N \times 3}$, $Q_{2} \in \mathbb{R}^{N \times (N-3)}$, and $R \in \mathbb{R}^{3 \times 3}$ is an upper triangular matrix.
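For completeness, Eqs. (A.8)-(A.10) may be combined into a short routine that evaluates $V(\lambda)$. The following is a minimal sketch, assuming $K$, $P$, and $\textbf{y}$ are as assembled above; the concluding grid search over $\lambda$ is our illustrative choice of minimiser, not one prescribed by Wahba (1990).

```python
import numpy as np

def gcv_score(lam, K, P, y):
    """GCV function V(lambda) of Eq. (A.8), with I - A(lambda) from Eq. (A.9)."""
    N = K.shape[0]
    # Full QR decomposition of P (Eq. (A.10)); Q2 spans the null space of P^T.
    Q, _ = np.linalg.qr(P, mode='complete')
    Q2 = Q[:, 3:]                                     # N x (N - 3)
    M = Q2.T @ (K + N * lam * np.eye(N)) @ Q2
    ImA = N * lam * Q2 @ np.linalg.solve(M, Q2.T)     # I - A(lambda)
    r = ImA @ y
    return N * (r @ r) / np.trace(ImA) ** 2

# Illustrative selection of lambda by a simple grid search:
# lams = np.logspace(-8, 2, 60)
# lam = lams[np.argmin([gcv_score(l, K, P, y) for l in lams])]
```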