Radial Basis Function Network for Ore Grade Estimation


Abstract

This paper highlights the performance of a radial basis function (RBF) network for ore grade estimation in an offshore placer gold deposit. Several pertinent issues are addressed, including RBF model construction; division of the data into training, calibration, and validation sets; and the efficacy of the RBF network relative to kriging and multilayer perceptron models. The RBF model was constructed using an orthogonal least-squares (OLS) algorithm, whose efficacy was tested against a random selection algorithm; the OLS algorithm performed substantially better. The model was trained on the training data set, calibrated on the calibration data set, and finally validated on the validation data set. For accurate measurement of model performance, these three data sets should have similar statistical properties, and to achieve this statistical similarity an approach combining data segmentation and a genetic algorithm was applied. A comparative evaluation of the RBF model against kriging and the multilayer perceptron was then performed: the RBF model produced estimates with an R² (coefficient of determination) value of 0.39, against 0.19 for kriging and 0.18 for the multilayer perceptron.



Acknowledgments

This study was funded by the DST under the fast track scheme. This financial support from the DST is gratefully acknowledged.

Author information


Corresponding author

Correspondence to Biswajit Samanta.

Appendix: Mathematical Formulation of RBF Network

Construction of an RBF network model can be viewed as a special case of a regression model, in which the RBF network is formulated as

$$ y(x) = w_{0} + \sum\limits_{i = 1}^{m} {w_{i} } \phi_{i}(x) + \varepsilon(x) $$
(A1)

where y(x) can be regarded as the dependent variable, the w i’s as regression coefficients, \( \phi_{i}(x) \) as the regressors or independent variables, and \( w_{0} \) as a bias term.
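For concreteness, the following minimal sketch evaluates Eq. A1 for a single input. The Gaussian kernel and the shared width parameter are illustrative assumptions; the paper does not specify the basis function at this point.

```python
import numpy as np

def rbf_predict(x, centers, width, w0, w):
    """Evaluate y(x) = w0 + sum_i w_i * phi_i(x) (Eq. A1) at one input x.

    Assumes Gaussian basis functions with a common width (an illustrative
    choice, not necessarily the kernel used in the paper):
        phi_i(x) = exp(-||x - c_i||^2 / (2 * width**2))
    """
    d2 = np.sum((centers - x) ** 2, axis=1)   # squared distances to centers c_i
    phi = np.exp(-d2 / (2.0 * width ** 2))    # regressor values phi_i(x)
    return w0 + phi @ w                       # bias plus weighted sum
```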

For n data points in the training set, the above equation can be written in matrix form as

$$ y = \phi w + \varepsilon $$
(A2)

where

$$ \begin{aligned} y &= \left[ {y(x_{1} ),y(x_{2} ), \ldots ,y(x_{n} )} \right]^{\text{T}} \\ \phi &= \left[ {\phi_{1} ,\phi_{2} , \ldots ,\phi_{m} } \right] \\ \phi_{i} &= \left[ {\phi_{i} (x_{1} ),\phi_{i} (x_{2} ), \ldots ,\phi_{i} (x_{n} )} \right]^{\text{T}} \\ \varepsilon &= \left[ {\varepsilon (x_{1} ),\varepsilon (x_{2} ), \ldots ,\varepsilon (x_{n} )} \right]^{\text{T}} \end{aligned} $$
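To make the shapes concrete, the sketch below assembles the n × m regression matrix \( \phi \) from training inputs and candidate centers. As above, the Gaussian kernel is an illustrative assumption.

```python
import numpy as np

def design_matrix(X, centers, width):
    """Build the n x m regression matrix phi with entries phi_i(x_j):
    one row per training input x_j, one column per candidate center c_i.
    Gaussian basis assumed for illustration."""
    # Pairwise squared distances between the n inputs and the m centers
    d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))
```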

The regressor vectors \( \phi_i \) form a set of basis vectors, and the least-squares solution of w satisfies the condition that \( \phi w \) is the projection of y onto the space spanned by these basis vectors. As a result, the squared norm of \( \phi w \) represents the part of the variance of y that is explained by the regressors. However, since the RBFs are usually correlated, it is not clear how much each individual regressor contributes to this variance. The orthogonal least-squares algorithm helps resolve this problem: it transforms the set of \( \phi_i \) into a set of orthogonal basis vectors, making it possible to quantify the individual contribution of each basis vector to the total explained variance. The regression matrix \( \phi \) can be decomposed as \( \phi = RA \), where A is an m × m upper triangular matrix with 1’s on the diagonal, i.e.,

$$ A = \left[ \begin{array}{ccccc} 1 & \alpha_{12} & \alpha_{13} & \cdots & \alpha_{1m} \\ 0 & 1 & \alpha_{23} & \cdots & \alpha_{2m} \\ 0 & 0 & 1 & \ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & \alpha_{m-1,m} \\ 0 & 0 & \cdots & 0 & 1 \\ \end{array} \right] $$
(A3)

and R is an n × m matrix with mutually orthogonal columns r i, so that

$$ R^{\text{T}} R = H $$
(A4)

where H is a diagonal matrix with entries \( h_{i} = r_{i}^{\text{T}} r_{i} \).

The space spanned by the set of regressors \( \phi_i \) is the same as the space spanned by the set of orthogonal basis vectors r i. Hence, Eq. A2 can be rewritten as

$$ y = Rg + \varepsilon $$
(A5)

The orthogonal least-squares solution of g is given by

$$ \hat{g} = (R^{\text{T}} R)^{ - 1} R^{\text{T}} y $$
(A6)

Because \( R^{\text{T}} R = H \) is diagonal, this reduces element-wise to \( \hat{g}_{i} = r_{i}^{\text{T}} y/(r_{i}^{\text{T}} r_{i}) \).

The quantities \( \hat{g} \) and \( \hat{w} \) are related through the triangular matrix A:

$$ A\hat{w} = \hat{g} $$
(A7)

The classical Gram-Schmidt algorithm can be used to compute the decomposition \( \phi = RA \) and thus the LS estimate \( \hat{w} \). The Gram-Schmidt method computes one column of A at a time while orthogonalizing \( \phi \): at the kth stage, the kth column of the matrix is made orthogonal to each of the k − 1 previously orthogonalized columns, and the process is repeated for k = 2, …, m. The computational procedure can be represented as

$$ r_{1} = \phi_{1} $$
$$\left.\begin{array}{ll} \alpha_{ik} = r_{i}^{\text{T}} \phi_{k} /(r_{i}^{\text{T}} r_{i} )&1 \le i < k\\ r_k=\phi_{k} - \sum\limits_{i = 1}^{k - 1} {\alpha_{ik} r_{i} } &\end{array}\right\}\quad k = 2, \ldots,m$$
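As an illustration, here is a direct NumPy transcription of the procedure above, together with the OLS solution of Eqs. A6 and A7. Variable names follow the appendix’s notation, and Phi can be built with design_matrix from the earlier sketch; this is a sketch, not the paper’s implementation.

```python
import numpy as np

def classical_gram_schmidt(Phi):
    """Decompose the n x m regression matrix Phi as Phi = R A, where R has
    mutually orthogonal columns r_k and A is unit upper triangular."""
    n, m = Phi.shape
    R = np.zeros((n, m))
    A = np.eye(m)
    R[:, 0] = Phi[:, 0]                          # r_1 = phi_1
    for k in range(1, m):
        r_k = Phi[:, k].copy()
        for i in range(k):
            # alpha_ik = r_i^T phi_k / (r_i^T r_i)
            A[i, k] = R[:, i] @ Phi[:, k] / (R[:, i] @ R[:, i])
            r_k -= A[i, k] * R[:, i]             # remove projection onto r_i
        R[:, k] = r_k
    return R, A

def ols_weights(Phi, y):
    """g_i = r_i^T y / (r_i^T r_i) (Eq. A6, since R^T R is diagonal),
    then solve the triangular system A w = g (Eq. A7)."""
    R, A = classical_gram_schmidt(Phi)
    g = (R.T @ y) / np.sum(R * R, axis=0)
    w = np.linalg.solve(A, g)                    # back-substitution
    return w, g
```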

The OLS method has superior numerical properties to the simple LS method. However, its main use in this study is to select the most significant RBFs from the whole candidate set in a forward-regression manner. Because r i and r j are orthogonal when \( i \ne j \), the sum of squares of y can be written as

$$ y^{\text{T}} y = \sum\limits_{i = 1}^{m} {g_{i}^{2} r_{i}^{\text{T}} r_i} + \varepsilon^{\text{T}} \varepsilon $$
(A8)

If y is the desired output vector after the mean has been removed, then the variance of y is given by

$$ N^{ - 1} y^{\text{T}} y = N^{ - 1} \sum\limits_{i = 1}^{m} {g_{i}^{2} r_{i}^{\text{T}} r_i} + N^{ - 1} \varepsilon^{\text{T}} \varepsilon $$
(A9)

It can be seen that \( N^{ - 1} \sum\limits_{i = 1}^{m} {g_{i}^{2} r_{i}^{\text{T}} r_i} \) is the part of the desired output variance that is explained by the regressors, and \( N^{ - 1} \varepsilon^{\text{T}} \varepsilon \) is the unexplained error variance. Hence \( \frac{1}{N}g_{i}^{2} r_{i}^{\text{T}} r_{i} \) is the incremental variance explained by the regressor r i upon its introduction into the regression model, and an error reduction ratio due to r i can be defined as

$$ \left[ {\text{err}} \right]_{i} = {\frac{{g_{i}^{2} r_{i}^{\text{T}} r_{i} }}{{y^{\text{T}} y}}}\quad 1 \le i \le m $$
(A10)

This ratio offers a simple and effective means of selecting radial basis functions in a forward-regression manner.
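The sketch below shows how this ratio could drive greedy forward selection of RBF centers. It mirrors the standard OLS forward-regression procedure outlined in this appendix; the stopping rule (a fixed number of regressors) is an assumption for illustration.

```python
import numpy as np

def forward_select(Phi, y, n_select):
    """Greedily pick columns of Phi by the error reduction ratio (Eq. A10):
    orthogonalize each remaining candidate against the already-selected
    orthogonal vectors, then keep the candidate with the largest [err]_i."""
    n, m = Phi.shape
    yty = y @ y
    selected, R_sel = [], []
    for _ in range(min(n_select, m)):
        best_j, best_err, best_r = None, -1.0, None
        for j in range(m):
            if j in selected:
                continue
            r = Phi[:, j].copy()
            for r_prev in R_sel:                 # classical Gram-Schmidt step
                r -= (r_prev @ Phi[:, j]) / (r_prev @ r_prev) * r_prev
            rtr = r @ r
            if rtr < 1e-12:                      # (near-)dependent candidate
                continue
            g = (r @ y) / rtr                    # Eq. A6, diagonal case
            err = g * g * rtr / yty              # [err]_j, Eq. A10
            if err > best_err:
                best_j, best_err, best_r = j, err, r
        if best_j is None:                       # no independent candidates left
            break
        selected.append(best_j)
        R_sel.append(best_r)
    return selected
```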


Cite this article

Samanta, B. Radial Basis Function Network for Ore Grade Estimation. Nat Resour Res 19, 91–102 (2010). https://doi.org/10.1007/s11053-010-9115-z
