Abstract
The use of computer vision techniques based on machine learning (ML) and deep learning (DL) algorithms has increased as a cost-effective way to improve agricultural output. Researchers have applied ML and DL techniques to diverse agricultural applications such as crop classification, automatic crop harvesting, plant pest and disease detection, weed detection, land cover classification, soil profiling, and animal welfare. This chapter summarizes and analyzes the application of these algorithms to crop management activities such as crop yield prediction, disease and pest detection, and weed detection. The study presents the advantages and disadvantages of various ML and DL models and discusses the issues and challenges faced when applying them to different crop management activities. Moreover, the available agricultural data sources, data preprocessing techniques, ML algorithms and DL models employed by researchers, and the metrics used for measuring model performance are also discussed.
References
Adriano Cruz, J. (2014). Enhancement of growth and yield of upland rice (Oryza sativa L.) by Actinomycetes. Agrotechnology, S1. https://doi.org/10.4172/2168-9881.S1.008
Amara, J., Bouaziz, B., & Algergawy, A. (2017). A deep learning-based approach for banana leaf diseases classification. (BTW 2017)-Workshopband.
Arun Pandian, J., & Geetharamani, G. (2019). Data for: Identification of plant leaf diseases using a 9-layer deep convolutional neural network. Mendeley Data, V1. https://doi.org/10.17632/tywbtsjrjv.1
Badrinarayanan, V., Kendall, A., & Cipolla, R. (2017). SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(12), 2481–2495.
Bah, M. D., Hafiane, A., & Canals, R. (2018). Deep learning with unsupervised data labeling for weed detection in line crops in UAV images. Remote Sensing, 10, 1690. https://doi.org/10.3390/rs10111690
Benos, L., Tagarakis, A. C., Dolias, G., et al. (2021). Machine learning in agriculture: A comprehensive updated review. Sensors, 21, 3758. https://doi.org/10.3390/s21113758
dos Santos, F. A., Freitas, D. M., da Silva, G. G., et al. (2019). Unsupervised deep learning and semi-automatic data labeling in weed discrimination. Computers and Electronics in Agriculture, 165, 104963. https://doi.org/10.1016/j.compag.2019.104963
dos Santos, F. A., Matte Freitas, D., Gonçalves da Silva, G., et al. (2017). Weed detection in soybean crops using ConvNets. Computers and Electronics in Agriculture, 143, 314–324. https://doi.org/10.1016/j.compag.2017.10.027
Du, L., Zhang, R., & Wang, X. (2020). Overview of two-stage object detection algorithms. Journal of Physics: Conference Series, 1544, 012033. https://doi.org/10.1088/1742-6596/1544/1/012033
Ebrahimi, M. A., Khoshtaghaza, M. H., Minaei, S., & Jamshidi, B. (2017). Vision-based pest detection based on SVM classification method. Computers and Electronics in Agriculture, 137, 52–58. https://doi.org/10.1016/j.compag.2017.03.016
Fuentes, A., Yoon, S., Kim, S. C., & Park, D. S. (2017). A Robust deep-learning-based detector for real-time tomato plant diseases and pests recognition. Sensors, 17, 2022. https://doi.org/10.3390/s17092022
Gong, L., Yu, M., Jiang, S., et al. (2021). Deep learning based prediction on greenhouse crop yield combined TCN and RNN. Sensors, 21, 4537. https://doi.org/10.3390/s21134537
Hamadani, H., Rashid, S. M., Parrah, J. D., et al. (2021). Traditional farming practices and its consequences. In Dar, G. H., Bhat, R. A., Mehmood, M. A., & Hakeem, K. R. (Eds.), Microbiota and biofertilizers, Vol 2: Ecofriendly tools for reclamation of degraded soil environs (pp. 119–128). Springer International Publishing.
Haug, S., & Ostermann, J. (2015). A Crop/weed field image dataset for the evaluation of computer vision based precision agriculture tasks. In L. Agapito, M. M. Bronstein, & C. Rother (Eds.), Computer vision—ECCV 2014 workshops (pp. 105–116). Springer International Publishing.
Hughes, D. P., & Salathe, M. (2015). An open access repository of images on plant health to enable the development of mobile disease diagnostics. arXiv preprint arXiv:1511.08060.
Karthik, R., Hariharan, M., Anand, S., et al. (2020). Attention embedded residual CNN for disease detection in tomato leaves. Applied Soft Computing, 86, 105933. https://doi.org/10.1016/j.asoc.2019.105933
Kerkech, M., Hafiane, A., & Canals, R. (2020). Vine disease detection in UAV multispectral images using optimized image registration and deep learning segmentation approach. Computers and Electronics in Agriculture, 174, 105446. https://doi.org/10.1016/j.compag.2020.105446
Khattak, A., Asghar, M. U., Batool, U., et al. (2021). Automatic detection of citrus fruit and leaves diseases using deep neural network model. IEEE Access, 1–1. https://doi.org/10.1109/ACCESS.2021.3096895
Li, M., Zhang, Z., Lei, L., et al. (2020). Agricultural greenhouses detection in high-resolution satellite images based on convolutional neural networks: Comparison of faster R-CNN, YOLO v3 and SSD. Sensors, 20, 4938. https://doi.org/10.3390/s20174938
Liu, J., & Wang, X. (2021). Plant diseases and pests detection based on deep learning: A review. Plant Methods, 17, 22. https://doi.org/10.1186/s13007-021-00722-9
Lu, H., Cao, Z., & Xiao, Y., et al. (2015). Joint crop and tassel segmentation in the wild. In 2015 Chinese Automation Congress (CAC) (pp. 474–479).
Muruganantham, P., Wibowo, S., Grandhi, S., et al. (2022). A systematic literature review on crop yield prediction with deep learning and remote sensing. Remote Sensing, 14, 1990. https://doi.org/10.3390/rs14091990
Nguyen, G., Dlugolinsky, S., Bobák, M., et al. (2019). Machine learning and deep learning frameworks and libraries for large-scale data mining: A survey. Artificial Intelligence Review, 52, 77–124. https://doi.org/10.1007/s10462-018-09679-z
Olsen, A., Konovalov, D. A., Philippa, B., et al. (2019). DeepWeeds: A multiclass weed species image dataset for deep learning. Science and Reports, 9, 2058. https://doi.org/10.1038/s41598-018-38343-3
Picon, A., Seitz, M., Alvarez-Gila, A., et al. (2019). Crop conditional convolutional neural networks for massive multi-crop plant disease classification over cell phone acquired images taken on real field conditions. Computers and Electronics in Agriculture, 167, 105093. https://doi.org/10.1016/j.compag.2019.105093
Rashid, M., Bari, B. S., Yusup, Y., et al. (2021). A comprehensive review of crop yield prediction using machine learning approaches with special emphasis on palm oil yield prediction. IEEE Access, 9, 63406–63439. https://doi.org/10.1109/ACCESS.2021.3075159
Rauf, H. T., Saleem, B. A., Lali, M. I. U., et al. (2019). A citrus fruits and leaves dataset for detection and classification of citrus diseases through machine learning. Data in Brief, 26, 104340. https://doi.org/10.1016/j.dib.2019.104340
Rico-Fernández, M. P., Rios-Cabrera, R., Castelán, M., et al. (2019). A contextualized approach for segmentation of foliage in different crop species. Computers and Electronics in Agriculture, 156, 378–386. https://doi.org/10.1016/j.compag.2018.11.033
Sa, I., Chen, Z., Popović, M., et al. (2018). weedNet: Dense semantic weed classification using multispectral images and MAV for smart farming. IEEE Robotics and Automation Letters, 3, 588–595. https://doi.org/10.1109/LRA.2017.2774979
Senthilnath, J., Dokania, A., Kandukuri, M., et al. (2016). Detection of tomatoes using spectral-spatial methods in remotely sensed RGB images captured by UAV. Biosystems Engineering, 146, 16–32. https://doi.org/10.1016/j.biosystemseng.2015.12.003
Subeesh, A., Bhole, S., Singh, K., et al. (2022). Deep convolutional neural network models for weed detection in polyhouse grown bell peppers. Artificial Intelligence in Agriculture, 6, 47–54. https://doi.org/10.1016/j.aiia.2022.01.002
Venkataramanan, A., Laviale, M., Figus, C., et al. (2021). Tackling inter-class similarity and intra-class variance for microscopic image-based classification. In Computer Vision Systems: 13th International Conference, ICVS 2021, Virtual Event, September 22–24, 2021, Proceedings 13 (pp. 93–103). Springer International Publishing.
Wang, F., Jiang, M., Qian, C., et al. (2017). Residual attention network for image classification. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3156–3164).
Wolanin, A., Mateo-García, G., Camps-Valls, G., et al. (2020). Estimating and understanding crop yields with explainable deep learning in the Indian Wheat Belt. Environmental Research Letters, 15, 024019. https://doi.org/10.1088/1748-9326/ab68ac
Appendix: Literature Review Papers
# | Paper title | Dataset | Size of the dataset | Preprocessing steps | Main approach | Algorithm/model used | Performance metric used | Conclusion | Ref. |
---|---|---|---|---|---|---|---|---|---|
1 | Detection of tomatoes using spectral-spatial methods in remotely sensed RGB images captured by UAV | Video recorded by a camera mounted on a UAV | Images are extracted from the video based on regions of interest (ROI) | Image resizing | Three unsupervised spectral clustering methods are compared for grouping pixels into tomatoes and non-tomatoes | K-means, expectation maximization (EM), self-organizing map (SOM) | ROC parameters: precision, recall, and F1-score | EM proved better (precision: 0.97) than K-means (precision: 0.66) and SOM (precision: 0.89) | (Senthilnath et al., 2016) |
2 | A contextualized approach for segmentation of foliage in different crop species | 1. Carrot dataset 2. Maize dataset 3. Tomato dataset | 1. 60 images of carrot 2. 50 images of maize 3. 43 images of tomato | Color features were extracted by transforming the image into different color spaces like RGB, CIE Lab, CIE Luv, HSV, HSL, YCrCb, and 2G-R-B. Color features were also calculated using different color indices like ExG, 2G-R-B, VEG, CIVE, MExG, COM1, and COM2 | The color feature vector was provided to an SVM classifier to distinguish leaf from non-leaf areas in an image. Three approaches were compared: 1. CIE Luv + SVM, 2. CIVE + SVM, 3. COM2 + SVM | SVM | Quality of segmentation, accuracy of models | CIE Luv + SVM performs better than the others | (Rico-Fernández et al., 2019) |
3 | Deep learning based prediction on greenhouse crop yield combined TCN and RNN | Environmental parameters (CO2 concentration, relative humidity, etc.) and historical yield information from three different tomato greenhouses | — | A temporal sequence of data containing both historical yield and environmental information is normalized and provided to the RNN | Representative features are extracted using the LSTM + RNN layer and fed into the temporal convolutional network | LSTM-RNN and TCN | MSE, RMSE | Mean and standard deviation of RMSEs: 10.45 ± 0.94 for the dataset from greenhouse 1, 6.76 ± 0.45 for greenhouse 2, and 7.40 ± 1.88 for greenhouse 3 | (Gong et al., 2021) |
4 | Vine disease detection in UAV multispectral images using optimized image registration and deep learning segmentation approach | RGB and infrared images collected using UAV | 4 classes shadow, ground, healthy, and symptomatic class 17,640 samples for each class, among them 14,994 used for training and 2,646 for validation | The dataset was labeled using a semi-automatic method (a sliding window). Each block was classified by a LeNet5 network for pre-labeling. Labeled images are corrected manually, and labeled images are used for segmentation | Two SegNet models were trained separately for the RGB images and for the infrared images. Both models' outputs are combined in two ways “fusion AND” and “fusion OR” | SegNet | Precision, recall, F1-score, accuracy | “Fusion OR” approach provides a better accuracy (95.02%) over the “fusion AND” approach (88.14%), RGB image-based model (94.41%), and infrared image-based model (89.16%) | (Kerkech et al., 2020) |
5 | Attention embedded residual CNN for disease detection in tomato leaves | PlantVillage and augmented datasets | 95,999 tomato leaf images for training and 24,001 images for validation | Data augmentation techniques like central zoom, random crop and zoom, and contrast enhancement | CNN-based multiclass classification of tomato leaves into three disease classes (early blight, late blight, and leaf mold) and one healthy class | CNN and modified CNN | Accuracy | Accuracy of baseline CNN model: 84%; residual CNN model: 95%; attention embedded residual CNN model: 98% | (Karthik et al., 2020) |
6 | Crop conditional convolutional neural networks for massive multi-crop plant disease classification over cell phone acquired images taken on real field conditions | Own dataset, collected using a mobile phone | A total of 121,955 images of multiple crops like wheat, corn, rapeseed, barley, and rice | Image resizing | Three approaches were proposed to detect seventeen diseases and five healthy classes across five different crops: 1. Independent model for each of the five crops 2. Single multi-crop model for the entire dataset 3. Use of crop metadata (CropID) along with the multi-crop model | ResNet-50 CNN | AUC, sensitivity, specificity, balanced accuracy (BAC) | Independent single-crop models showed an average BAC of 0.92, whereas the baseline multi-crop model showed an average BAC of 0.93. The crop conditional CNN architecture performs best with 0.98 average BAC | (Picon et al., 2019) |
7 | Automatic detection of citrus fruit and leaves diseases using deep neural network model | Citrus dataset and PlantVillage dataset | 2293 images | Images are preprocessed (normalized and scaled) and then used for training, validation, and testing to classify diseases into five classes | 80% of the preprocessed images are used to train the CNN; the remaining 20% are used for validation and testing. The proposed model is also compared with other ML/DL-based models | CNN (two layers) | Test accuracy, training loss, training time, precision, recall | The proposed CNN model achieves 95.65% accuracy | (Khattak et al., 2021) |
8 | A deep learning-based approach for banana leaf diseases classification | Images of banana leaves (healthy and diseased) were obtained from the PlantVillage dataset | 3700 images | Images were resized to 60 × 60 pixels and converted to grayscale for the classification process | Classification of the leaf images into three classes using the LeNet architecture | LeNet architecture CNN | Accuracy, precision, recall, F1-score | The model performs better for color images than for grayscale images. Accuracy is 99.72% for the 50–50 train-test split | (Amara et al., 2017) |
9 | Vision-based pest detection based on SVM classification method | Images were obtained from a strawberry greenhouse using a camera mounted on a robot arm | 100 images | The non-flower regions are considered background and removed by applying gamma correction. Histogram equalization and contrast stretching were used to remove any remaining background | SVM with different kernel functions, using region-index and color-index features, is used for classification | SVM | MSE, RMSE, MAE, MPE | Pests are detected from images using SVM with a mean percentage error of less than 2.25% | (Ebrahimi et al., 2017) |
10 | Deep learning with unsupervised data labeling for weed detection in line crops in UAV images | Images were collected by UAV from two farm fields | Total: 17,044 (bean field), 15,858 (spinach field) | Background removal, skeletonization, Hough transformation for crop-row line detection | Images were labeled using unsupervised and supervised methods and used for crop/weed discrimination with a CNN | ResNet-18, SVM, RF | AUC | AUCs in the bean field are 91.37% for unsupervised data labeling and 93.25% for supervised data labeling. In the spinach field, they are 82.70% and 94.34%, respectively | (Bah et al., 2018) |
11 | Unsupervised deep learning and semi-automatic data labeling in weed discrimination | Grass-Broadleaf and DeepWeeds datasets | Grass-Broadleaf: Total 15,536 segments (3249 of soil, 7376 of soybean, 3520 of grass, and 1191 of broadleaf weeds). DeepWeeds: 17,509 images | Segmentation and image resizing | Joint unsupervised learning of deep representations and image clusters (JULE) and deep clustering for unsupervised learning of visual features (DeepCluster) | Inception-V3, VGG16, ResNet | Precision | Inception-V3 has better precision (0.884) for the Grass-Broadleaf dataset, and VGG16 has better precision (0.646) for the DeepWeeds dataset, as compared to ResNet | (dos Santos et al., 2019) |
12 | Deep convolutional neural network models for weed detection in polyhouse grown bell peppers | Images captured using a mobile phone camera | A total of 1106 images collected and augmented to increase the size of the dataset | Data augmentation, outlier detection, standardization, normalization | Four CNN-based models are compared for classifying images into bell pepper or weed classes | AlexNet, GoogLeNet, Inception-V3, Xception | Precision, accuracy, recall, F1-score | Inception-V3 performs well compared to the other three models | (Subeesh et al., 2022) |
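The unsupervised pixel clustering in row 1 (K-means, EM, SOM) can be illustrated with a minimal pure-Python K-means sketch that groups RGB pixels into tomato-like and background clusters. The toy pixel values, k = 2, and the deterministic spread initialization are illustrative assumptions, not details from the paper.

```python
def kmeans(pixels, k=2, iters=20):
    # Deterministic initialization: spread centroids across the input list.
    centroids = [pixels[i * (len(pixels) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        # Assignment step: each pixel joins its nearest centroid
        # (squared Euclidean distance in RGB space).
        clusters = [[] for _ in range(k)]
        for p in pixels:
            nearest = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its assigned pixels.
        for c in range(k):
            if clusters[c]:
                centroids[c] = tuple(
                    sum(channel) / len(clusters[c]) for channel in zip(*clusters[c])
                )
    return centroids

# Toy data: reddish (tomato-like) and greenish (foliage-like) RGB pixels.
pixels = [(200, 30, 30), (210, 40, 35), (190, 25, 28),
          (30, 150, 40), (25, 160, 45), (35, 140, 50)]
centroids = kmeans(pixels, k=2)
```

In practice the surveyed works run such clustering over every pixel of a UAV frame and then post-process the clusters spatially; this sketch only shows the core assign-and-update loop.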
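Row 2 lists several colour indices used as features for foliage segmentation; the excess-green index (ExG = 2G − R − B) is the simplest. The sketch below thresholds ExG to flag foliage pixels; the threshold of 0.2 and the toy pixels are illustrative assumptions, not values from the paper (which feeds such features into an SVM rather than thresholding directly).

```python
def excess_green(pixel):
    # Normalize 8-bit channels to [0, 1] before computing ExG = 2G - R - B.
    r, g, b = (c / 255.0 for c in pixel)
    return 2 * g - r - b

def is_foliage(pixel, threshold=0.2):
    # Green-dominant pixels score well above 0; dull/brown pixels near or below 0.
    return excess_green(pixel) > threshold

pixels = [(30, 180, 40), (120, 100, 90), (20, 200, 60)]
mask = [is_foliage(p) for p in pixels]  # [True, False, True]
```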
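Precision, recall, and F1-score recur as evaluation metrics throughout the table; for reference, they follow directly from true-positive, false-positive, and false-negative counts. The counts below are illustrative.

```python
def precision_recall_f1(tp, fp, fn):
    # Precision: fraction of predicted positives that are correct.
    precision = tp / (tp + fp)
    # Recall: fraction of actual positives that are recovered.
    recall = tp / (tp + fn)
    # F1: harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=30)  # 0.9, 0.75, ~0.818
```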
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this chapter
Vithlani, S.K., Dabhi, V.K. (2023). Machine Learning and Deep Learning in Crop Management—A Review. In: Chaudhary, S., Biradar, C.M., Divakaran, S., Raval, M.S. (eds) Digital Ecosystem for Innovation in Agriculture. Studies in Big Data, vol 121. Springer, Singapore. https://doi.org/10.1007/978-981-99-0577-5_2
DOI: https://doi.org/10.1007/978-981-99-0577-5_2
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-0576-8
Online ISBN: 978-981-99-0577-5
eBook Packages: Intelligent Technologies and Robotics; Intelligent Technologies and Robotics (R0)