
Machine Learning and Deep Learning in Crop Management—A Review

Chapter in: Digital Ecosystem for Innovation in Agriculture

Abstract

The use of computer vision techniques based on machine learning (ML) and deep learning (DL) algorithms has grown as a cost-effective way to improve agricultural output. Researchers have applied ML and DL techniques to agricultural problems such as crop classification, automatic crop harvesting, pest and disease detection in plants, weed detection, land cover classification, soil profiling, and animal welfare. This chapter summarizes and analyzes the application of these algorithms to crop management activities such as crop yield prediction, disease and pest detection, and weed detection. It presents the advantages and disadvantages of various ML and DL models and discusses the issues and challenges faced when applying ML and DL algorithms to different crop management activities. The available agricultural data sources, data preprocessing techniques, ML algorithms and DL models employed by researchers, and the metrics used to measure model performance are also discussed.


Author information

Correspondence to Sunil K. Vithlani.


Appendix: Literature Review Papers


Each entry below gives the paper title and reference, followed by the dataset, dataset size, preprocessing steps, main approach, algorithm/model used, performance metrics, and conclusion.

1. Detection of tomatoes using spectral-spatial methods in remotely sensed RGB images captured by UAV (Senthilnath et al., 2016)
Dataset: Video recorded by a camera mounted on a UAV.
Size: Images extracted from the video based on regions of interest (ROI).
Preprocessing: Image resizing.
Approach: Three unsupervised spectral clustering methods compared for grouping pixels into tomato and non-tomato classes.
Algorithm/model: K-means, expectation maximization (EM), self-organizing map (SOM).
Metrics: Precision, recall, F1-score.
Conclusion: EM performed better (precision: 0.97) than K-means (precision: 0.66) and SOM (precision: 0.89).

2. A contextualized approach for segmentation of foliage in different crop species (Rico-Fernández et al., 2019)
Dataset: Carrot, maize, and tomato datasets.
Size: 60 carrot images, 50 maize images, 43 tomato images.
Preprocessing: Color features extracted by transforming images into different color spaces (RGB, CIE Lab, CIE Luv, HSV, HSL, YCrCb, 2G-R-B) and computing color indices (ExG, 2G-R-B, VEG, CIVE, MExG, COM1, COM2).
Approach: The color feature vector was fed to an SVM classifier to separate leaf from non-leaf areas; three combinations were compared: (1) CIE Luv + SVM, (2) CIVE + SVM, (3) COM2 + SVM.
Algorithm/model: SVM.
Metrics: Quality of segmentation, model accuracy.
Conclusion: CIE Luv + SVM performs better than the other combinations.

3. Deep learning based prediction on greenhouse crop yield combined TCN and RNN (Gong et al., 2021)
Dataset: Environmental parameters (CO2 concentration, relative humidity, etc.) and historical yield data from three tomato greenhouses.
Size: —
Preprocessing: A temporal sequence containing both historical yield and environmental data is normalized and provided to the RNN.
Approach: Representative features are extracted by the LSTM-RNN layer and fed into a temporal convolutional network (TCN).
Algorithm/model: LSTM-RNN and TCN.
Metrics: MSE, RMSE.
Conclusion: Mean and standard deviation of RMSE: 10.45 ± 0.94 (greenhouse 1), 6.76 ± 0.45 (greenhouse 2), and 7.40 ± 1.88 (greenhouse 3).

4. Vine disease detection in UAV multispectral images using optimized image registration and deep learning segmentation approach (Kerkech et al., 2020)
Dataset: RGB and infrared images collected using a UAV; four classes (shadow, ground, healthy, symptomatic).
Size: 17,640 samples per class, of which 14,994 were used for training and 2,646 for validation.
Preprocessing: Semi-automatic labeling with a sliding window; each block was pre-labeled by a LeNet5 network and corrected manually, and the labeled images were used for segmentation.
Approach: Two SegNet models were trained separately on the RGB and the infrared images; their outputs were combined in two ways, "fusion AND" and "fusion OR".
Algorithm/model: SegNet.
Metrics: Precision, recall, F1-score, accuracy.
Conclusion: The "fusion OR" approach gives better accuracy (95.02%) than "fusion AND" (88.14%), the RGB-only model (94.41%), and the infrared-only model (89.16%).

5. Attention embedded residual CNN for disease detection in tomato leaves (Karthik et al., 2020)
Dataset: PlantVillage and augmented datasets.
Size: 95,999 tomato leaf images for training and 24,001 for validation.
Preprocessing: Data augmentation (central zoom, random crop and zoom, contrast adjustment).
Approach: CNN-based multiclass classification of tomato leaves into three disease classes (early blight, late blight, leaf mold) and one healthy class.
Algorithm/model: CNN and modified CNN.
Metrics: Accuracy.
Conclusion: Accuracy of the baseline CNN: 84%; residual CNN: 95%; attention embedded residual CNN: 98%.

6. Crop conditional Convolutional Neural Networks for massive multi-crop plant disease classification over cell phone acquired images taken on real field conditions (Picon et al., 2019)
Dataset: Own dataset of images collected with a mobile phone.
Size: 121,955 images in total, covering wheat, corn, rapeseed, barley, and rice.
Preprocessing: Image resizing.
Approach: Three approaches were proposed to detect seventeen diseases plus five healthy classes across five crops: (1) an independent model for each crop, (2) a single multi-crop model for the entire dataset, and (3) a multi-crop model conditioned on crop metadata (CropID).
Algorithm/model: ResNet-50 CNN.
Metrics: AUC, sensitivity, specificity, balanced accuracy (BAC).
Conclusion: Independent single-crop models averaged a BAC of 0.92 and the baseline multi-crop model 0.93, whereas the crop conditional CNN architecture reached an average BAC of 0.98.

7. Automatic detection of citrus fruit and leaves diseases using deep neural network model (Khattak et al., 2021)
Dataset: Citrus dataset and PlantVillage dataset.
Size: 2,293 images.
Preprocessing: Images were normalized and scaled, then used for training, validation, and testing to classify diseases into five classes.
Approach: 80% of the preprocessed images were used to train the CNN; the remaining 20% were used for validation and testing. The proposed model was also compared with other ML/DL-based models.
Algorithm/model: CNN (two layers).
Metrics: Test accuracy, training loss, training time, precision, recall.
Conclusion: The proposed CNN model achieves 95.65% accuracy.

8. A deep learning-based approach for banana leaf diseases classification (Amara et al., 2017)
Dataset: Healthy and diseased banana leaf images from the PlantVillage dataset.
Size: 3,700 images.
Preprocessing: Images were resized to 60 × 60 pixels; both color and grayscale versions were used for classification.
Approach: Classification of leaf images into three classes using the LeNet architecture.
Algorithm/model: LeNet architecture CNN.
Metrics: Accuracy, precision, recall, F1-score.
Conclusion: The model performs better on color images than on grayscale images; accuracy is 99.72% for a 50-50 train-test split.

9. Vision-based pest detection based on SVM classification method (Ebrahimi et al., 2017)
Dataset: Images captured in a strawberry greenhouse by a camera mounted on a robot arm.
Size: 100 images.
Preprocessing: Non-flower regions were treated as background and removed by gamma correction; histogram equalization and contrast stretching removed any remaining background.
Approach: SVM classifiers with different kernel functions, using region and color indices as features.
Algorithm/model: SVM.
Metrics: MSE, RMSE, MAE, MPE.
Conclusion: Pests are detected with a mean percentage error below 2.25%.

10. Deep learning with unsupervised data labeling for weed detection in line crops in UAV images (Bah et al., 2018)
Dataset: Images collected by UAV from two farm fields.
Size: 17,044 images (bean field) and 15,858 images (spinach field).
Preprocessing: Background removal, skeletonization, and Hough transform for crop row line detection.
Approach: Images were labeled with both unsupervised and supervised methods and used for crop/weed discrimination with a CNN.
Algorithm/model: ResNet-18, SVM, RF.
Metrics: AUC.
Conclusion: AUCs in the bean field are 91.37% (unsupervised labeling) and 93.25% (supervised labeling); in the spinach field, 82.70% and 94.34%, respectively.

11. Unsupervised deep learning and semi-automatic data labeling in weed discrimination (dos Santos et al., 2019)
Dataset: Grass-Broadleaf and DeepWeeds datasets.
Size: Grass-Broadleaf: 15,336 segments (3,249 soil, 7,376 soybean, 3,520 grass, 1,191 broadleaf weeds); DeepWeeds: 17,509 images.
Preprocessing: Segmentation and image resizing.
Approach: Joint unsupervised learning of deep representations and image clusters (JULE) and deep clustering for unsupervised learning of visual features (DeepCluster).
Algorithm/model: Inception-V3, VGG16, ResNet.
Metrics: Precision.
Conclusion: Compared with ResNet, Inception-V3 achieves better precision (0.884) on the Grass-Broadleaf dataset, and VGG16 better precision (0.646) on the DeepWeeds dataset.

12. Deep convolutional neural network models for weed detection in polyhouse grown bell peppers (Subeesh et al., 2022)
Dataset: Images captured with a mobile phone camera.
Size: 1,106 images, augmented to increase the dataset size.
Preprocessing: Data augmentation, outlier detection, standardization, normalization.
Approach: Four CNN-based models were compared for classifying images as bell pepper or weed.
Algorithm/model: AlexNet, GoogLeNet, Inception-V3, Xception.
Metrics: Precision, accuracy, recall, F1-score.
Conclusion: Inception-V3 performs better than the other three models.
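Several of the reviewed papers (e.g., entry 1) group raw pixel colors without labels. As a minimal sketch of that idea only, the following NumPy K-means separates synthetic tomato-red and foliage-green pixels into two clusters; the synthetic data, iteration count, and deterministic farthest-point initialization are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch: unsupervised pixel clustering in the spirit of the
# K-means baseline of entry 1. Synthetic data only.
import numpy as np

def kmeans_pixels(pixels, k=2, iters=20):
    """Cluster RGB pixels of shape (N, 3); returns (labels, centroids)."""
    # Deterministic farthest-point initialization: start from pixel 0,
    # then repeatedly add the pixel farthest from the chosen centroids.
    centroids = [pixels[0].astype(float)]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(pixels - c, axis=1) for c in centroids], axis=0)
        centroids.append(pixels[d.argmax()].astype(float))
    centroids = np.array(centroids)
    for _ in range(iters):
        # Assign each pixel to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean(axis=0)
    return labels, centroids

# Synthetic "image": 50 tomato-red pixels followed by 50 foliage-green pixels.
pixels = np.vstack([np.tile([200.0, 40.0, 30.0], (50, 1)),
                    np.tile([40.0, 160.0, 50.0], (50, 1))])
labels, _ = kmeans_pixels(pixels, k=2)
```

On real UAV frames, each cluster would then be inspected (e.g., by mean color) to decide which one corresponds to fruit.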
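Entry 2 relies on color indices such as excess green (ExG) to separate foliage from background. As a hedged sketch, the snippet below computes ExG on normalized chromaticities and thresholds it; the 0.1 threshold and the toy two-pixel patch are assumptions for illustration, not values from the paper.

```python
# Illustrative sketch: excess-green (ExG) vegetation masking, one of the
# colour indices listed in entry 2.
import numpy as np

def exg_mask(img, threshold=0.1):
    """img: float array (H, W, 3) with values in [0, 1]; returns a boolean
    vegetation mask where ExG = 2g - r - b exceeds the threshold."""
    total = img.sum(axis=2) + 1e-8        # guard against division by zero
    r = img[..., 0] / total               # normalized chromaticities
    g = img[..., 1] / total
    b = img[..., 2] / total
    return (2 * g - r - b) > threshold

# Tiny synthetic patch: one foliage-like green pixel, one soil-like grey pixel.
patch = np.array([[[0.1, 0.8, 0.1], [0.4, 0.4, 0.4]]])
mask = exg_mask(patch)
```

In the reviewed work such an index mask is a feature fed to an SVM rather than a final answer, but the index itself behaves as shown: green-dominant pixels score high, neutral pixels score near zero.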


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.


Cite this chapter

Vithlani, S.K., Dabhi, V.K. (2023). Machine Learning and Deep Learning in Crop Management—A Review. In: Chaudhary, S., Biradar, C.M., Divakaran, S., Raval, M.S. (eds) Digital Ecosystem for Innovation in Agriculture. Studies in Big Data, vol 121. Springer, Singapore. https://doi.org/10.1007/978-981-99-0577-5_2
