Red-green-blue to normalized difference vegetation index translation: a robust and inexpensive approach for vegetation monitoring using machine vision and generative adversarial networks

Abstract

High-resolution multispectral imaging of agricultural fields is expensive but helpful in detecting subtle variations in plant health and stress symptoms before visible indications appear. To aid precision agriculture (PA) practices, an innovative and inexpensive protocol for robust and timely monitoring of vegetation symptoms was evaluated. The protocol uses machine vision (MV) and generative adversarial networks (GANs) to translate red-green-blue (RGB) imagery captured with an unmanned aerial vehicle (UAV) into a valuable normalized difference vegetation index (NDVI) map. This study translated RGB imagery directly into NDVI, in contrast with similar studies that used GANs to translate into the near-infrared (NIR) domain. The protocol was tested by flying a fixed-wing eBee X UAV developed by senseFly Inc. (Cheseaux-sur-Lausanne, Switzerland), equipped with a RedEdge-MX sensor, to capture images of five potato fields located in Prince Edward Island, Canada, during the 2021 growing season. Images were captured throughout the season during the vegetative (15–30 DAP; days after planting), tuber formation (30–45 DAP), tuber bulking (75–110 DAP), and tuber maturation (>110 DAP) stages. NDVI was calculated from the UAV aerial surveys using the NIR and red bands to develop pairwise datasets for GAN training. Five hundred pairwise images were used (80% training, 10% validation, and 10% testing) for training and evaluating the GANs. Two well-known GANs, Pix2Pix and Pix2PixHD, were compared using various training and evaluation indicators. Pix2PixHD outperformed Pix2Pix, recording a lower root mean square error (RMSE) (5.40 vs. 13.73) and a higher structural similarity index measure (SSIM) score (0.90 vs. 0.69) during evaluation of the protocol. After model training, the results of this study can be applied to economical vegetation and orchard health monitoring: the trained GANs translate simple RGB imagery into useful vegetation index maps for variable-rate PA practices. This protocol can also translate remote sensing imagery of large-scale agricultural fields and commercial orchards into NDVI to extract useful information about plant health indicators.

Data Availability

The datasets generated and/or analyzed during the current study are available from the corresponding author on request.

Acknowledgements

The authors would like to thank the Department of Agriculture and Land, Government of Prince Edward Island, and the Natural Sciences and Engineering Research Council of Canada for providing the funding to conduct this research.

Author information

Corresponding author

Correspondence to Aitazaz A. Farooque.

Ethics declarations

Conflict of Interest

The authors declare no conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Farooque, A.A., Afzaal, H., Benlamri, R. et al. Red-green-blue to normalized difference vegetation index translation: a robust and inexpensive approach for vegetation monitoring using machine vision and generative adversarial networks. Precision Agric 24, 1097–1115 (2023). https://doi.org/10.1007/s11119-023-10001-3
