
Deep Reinforcement Learning for Mineral Prospectivity Mapping

Published in Mathematical Geosciences

Abstract

Machine learning algorithms, both supervised and unsupervised, have been widely used in mineral prospectivity mapping. Supervised learning algorithms require numerous known mineral deposits to ensure the reliability of the training results, whereas unsupervised learning algorithms can be applied to areas with rare or no known deposits. Reinforcement learning (RL) is a type of machine learning that differs from supervised and unsupervised learning in that learning proceeds through interaction between an agent and an environment. The environment provides the agent with reward signals and states, and the agent comprehensively evaluates the mineralization potential of each state based on these rewards. In this study, a deep RL framework was constructed for mineral prospectivity mapping, and a case study of mapping gold prospectivity in northwest Hubei Province, China, was used to test the framework. The deep RL agent extracted information about known mineralization by automatically interacting with the environment while simultaneously mining potential mineralization information from the unlabeled dataset. A comparison with random forest and isolation forest models demonstrates that deep RL performs better regardless of the number of known mineral deposits because of its unique reward and feedback mechanism. The delineated high-potential areas show a strong spatial correlation with known gold deposits and can therefore provide significant clues for future prospecting in the study area.
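
The abstract describes the agent–environment loop only at a high level. The sketch below illustrates one way such a loop could look for prospectivity mapping: an agent labels grid cells (each described by a vector of evidential features) as prospective or barren and receives a reward when its label agrees with a known deposit or non-deposit cell. Everything in this sketch, including the ProspectivityEnv and QNet classes, the ±1 reward, and the network size, is a hypothetical assumption for illustration only; the paper's actual environment design, reward scheme, and network architecture are not reproduced in this preview.

# Illustrative sketch only: the environment, reward scheme, and architecture
# below are hypothetical and are not taken from the paper.
import numpy as np
import torch
import torch.nn as nn

class ProspectivityEnv:
    """Toy environment: each 'state' is the evidential-feature vector of one grid
    cell; the agent labels it prospective (1) or barren (0) and is rewarded when
    the label matches the known deposit/non-deposit label of that cell."""
    def __init__(self, features, labels):
        self.features = features              # (n_cells, n_features) evidential layers
        self.labels = labels                  # 1 = known deposit cell, 0 = other cell
        self.order = np.random.permutation(len(features))
        self.i = 0

    def reset(self):
        self.i = 0
        self.order = np.random.permutation(len(self.features))
        return self.features[self.order[self.i]]

    def step(self, action):
        reward = 1.0 if action == self.labels[self.order[self.i]] else -1.0
        self.i += 1
        done = self.i >= len(self.features)
        next_state = None if done else self.features[self.order[self.i]]
        return next_state, reward, done

class QNet(nn.Module):
    """Small MLP mapping a cell's feature vector to Q-values for the two actions."""
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 2))
    def forward(self, x):
        return self.net(x)

def train(env, qnet, episodes=5, gamma=0.9, eps=0.1, lr=1e-3):
    """Epsilon-greedy Q-learning over repeated passes through the grid cells."""
    opt = torch.optim.Adam(qnet.parameters(), lr=lr)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            s = torch.tensor(state, dtype=torch.float32)
            q = qnet(s)
            action = np.random.randint(2) if np.random.rand() < eps else int(q.argmax())
            next_state, reward, done = env.step(action)
            with torch.no_grad():                # one-step bootstrapped target
                target = reward if done else reward + gamma * qnet(
                    torch.tensor(next_state, dtype=torch.float32)).max()
            loss = (q[action] - target) ** 2
            opt.zero_grad(); loss.backward(); opt.step()
            state = next_state
    return qnet

# Tiny synthetic demo: 200 cells, 8 evidential features, synthetic "deposit" labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8)).astype(np.float32)
y = (X[:, 0] + X[:, 1] > 1).astype(int)
qnet = train(ProspectivityEnv(X, y), QNet(8))
# One simple way to turn the learned Q-values into a prospectivity score per cell.
scores = qnet(torch.tensor(X)).softmax(dim=1)[:, 1].detach().numpy()

In this toy setup the prospectivity map is simply the per-cell score of the "prospective" action; how the actual framework defines states, actions, and rewards is detailed in the full article.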



Acknowledgements

We would like to thank the two reviewers for their comments and suggestions, which helped us improve this study. This study was supported by the IAMG Mathematical Geosciences Student Awards and the National Natural Science Foundation of China (42172326).

Author information


Corresponding author

Correspondence to Renguang Zuo.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Shi, Z., Zuo, R. & Zhou, B. Deep Reinforcement Learning for Mineral Prospectivity Mapping. Math Geosci 55, 773–797 (2023). https://doi.org/10.1007/s11004-023-10059-9

