
An efficient video-based rainfall intensity estimation employing different recurrent neural network models

  • Research
  • Published in: Earth Science Informatics

Abstract

Conventional approaches to measuring precipitation face several challenges, including high cost, limited geographic coverage, and reduced accuracy. Urban areas, by contrast, are now widely equipped with surveillance cameras, which, combined with advances in deep learning, make it possible to build an effective and precise model that estimates rainfall intensity from captured video. In this article, a hybrid regression model combining a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) is developed to estimate rainfall intensity from one-minute videos. Three recurrent units, SimpleRNN, GRU, and LSTM, are employed in the hybrid CNN+RNN model, each tested with the Adam, RMSProp, and SGD optimizers. The mean absolute percentage error (MAPE) ranges from 3.55% to 6.95%, substantially outperforming competing rainfall estimation models. The proposed model is also computationally efficient, processing 100 frames in 0.47 seconds. Such prompt and precise precipitation measurement has potential applications in managing water resources and mitigating flood risk. Unlike methods that only classify rainfall into discrete intensity levels, our regression model outputs rainfall intensity in a continuous space. Moreover, it takes one-minute videos as input, eliminating the need to interpolate labels to the multi-millisecond scale required by image-based algorithms.
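To make the described pipeline concrete, the following is a minimal sketch of a hybrid CNN+RNN video regressor in Keras, in the spirit of the abstract. The frame resolution, sequence length, layer widths, loss, and training settings are illustrative assumptions, not the authors' published configuration; only the overall structure (per-frame CNN features fed to a SimpleRNN/GRU/LSTM unit with a continuous regression output, evaluated with MAPE) follows the paper.

```python
# Illustrative sketch only: a hybrid CNN+RNN rainfall-intensity regressor.
# Frame size, sequence length, and layer widths are assumptions for demonstration.
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, H, W, C = 60, 64, 64, 3  # assumed: one frame per second of a one-minute clip

def build_model(rnn_unit: str = "gru") -> tf.keras.Model:
    # Per-frame CNN feature extractor, applied across time with TimeDistributed
    frames = layers.Input(shape=(SEQ_LEN, H, W, C))
    x = layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu"))(frames)
    x = layers.TimeDistributed(layers.MaxPooling2D())(x)
    x = layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu"))(x)
    x = layers.TimeDistributed(layers.GlobalAveragePooling2D())(x)

    # Recurrent unit over the per-frame features: the paper compares all three
    rnn = {"simplernn": layers.SimpleRNN, "gru": layers.GRU, "lstm": layers.LSTM}[rnn_unit]
    x = rnn(32)(x)

    # Single linear output: rainfall intensity as a continuous regression target
    intensity = layers.Dense(1)(x)

    model = models.Model(frames, intensity)
    # MAPE is the headline metric in the abstract; Adam is one of the three
    # optimizers compared (alongside RMSProp and SGD)
    model.compile(optimizer="adam", loss="mse",
                  metrics=[tf.keras.metrics.MeanAbsolutePercentageError()])
    return model

model = build_model("gru")
model.summary()
```

Swapping the `rnn_unit` argument and the optimizer string reproduces the kind of comparison grid the abstract describes (three recurrent units, each trained with three optimizers).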



Data Availability

The video dataset used in this work is publicly available at https://figshare.com/articles/dataset/Estimating_rainfall_intensity_with_a_high_spatiotemporal_resolution_using_an_image-based_deep_learning_model/14709423/1.


Funding

This project received financial support from the Regional Water Company of Qazvin through grant GZHW2-00001.

Author information


Contributions

Farshid Rajabi and Neda Faraji wrote and edited the main manuscript, developed the algorithm, and implemented it in Python. Masoumeh Hashemi proposed the main idea and the paper on which this work builds. Neda Faraji also revised the manuscript.

Corresponding author

Correspondence to Neda Faraji.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflict of interest.

Additional information

Communicated by: H. Babaie.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Rajabi, F., Faraji, N. & Hashemi, M. An efficient video-based rainfall intensity estimation employing different recurrent neural network models. Earth Sci Inform (2024). https://doi.org/10.1007/s12145-024-01290-x

