
SK-MobileNet: A Lightweight Adaptive Network Based on Complex Deep Transfer Learning for Plant Disease Recognition

  • Research Article - Computer Engineering and Computer Science
  • Published in: Arabian Journal for Science and Engineering

Abstract

Convolutional neural network approaches have attracted extensive interest in plant disease recognition because their performance is better than that of conventional machine learning methods. However, challenges remain. For instance, image backgrounds are sometimes complex: a model may detect plant lesions yet still find it difficult to locate the specific pest position. High model complexity is also not conducive to the deployment and development of mobile software. Moreover, datasets may suffer from problems such as labeling errors and few positive or negative samples, which restrict the development of disease recognition. In this study, we investigate deep convolutional networks based on deep transfer learning for plant disease recognition. We propose a model called Selective Kernel MobileNet (SK-MobileNet), which is lightweight enough to greatly reduce computing cost when deployed to servers. Experimental results show that the proposed approach reaches an accuracy of 99.28% on a public dataset and achieves a significant increase in efficiency with lower complexity compared to other existing methods.
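The abstract names the main building blocks of SK-MobileNet: a lightweight MobileNet backbone, Selective Kernel (SK) convolution, and deep transfer learning from pretrained weights. The sketch below is only a rough illustration of how such pieces can be combined in PyTorch, not the authors' published architecture: the torchvision MobileNetV2 backbone with ImageNet weights, the 38-class output head (matching the common PlantVillage split), the SK reduction ratio, and all layer sizes are assumptions introduced for illustration.

# Minimal sketch (assumptions, not the paper's code): a Selective Kernel block
# fusing 3x3 and 5x5 depthwise branches, attached to a pretrained MobileNetV2
# feature extractor whose classifier head is replaced and re-trained.
import torch
import torch.nn as nn
from torchvision import models


class SKBlock(nn.Module):
    """Selective Kernel block: two kernel sizes fused by channel-wise attention."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.branch3 = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        self.branch5 = nn.Sequential(
            nn.Conv2d(channels, channels, 5, padding=2, groups=channels, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        hidden = max(channels // reduction, 8)
        self.squeeze = nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(inplace=True))
        self.att3 = nn.Linear(hidden, channels)
        self.att5 = nn.Linear(hidden, channels)

    def forward(self, x):
        u3, u5 = self.branch3(x), self.branch5(x)
        s = (u3 + u5).mean(dim=(2, 3))              # global average pooling
        z = self.squeeze(s)
        w = torch.softmax(torch.stack([self.att3(z), self.att5(z)], dim=1), dim=1)
        w3 = w[:, 0].unsqueeze(-1).unsqueeze(-1)    # (B, C, 1, 1) weight per branch
        w5 = w[:, 1].unsqueeze(-1).unsqueeze(-1)
        return u3 * w3 + u5 * w5                    # soft selection between kernels


class SKMobileNetSketch(nn.Module):
    """Pretrained MobileNetV2 features + SK block + new classification head."""

    def __init__(self, num_classes: int = 38):      # 38 = PlantVillage classes (assumption)
        super().__init__()
        backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
        self.features = backbone.features           # transferred ImageNet features
        self.sk = SKBlock(1280)                     # MobileNetV2 ends with 1280 channels
        self.head = nn.Linear(1280, num_classes)    # re-trained on the plant dataset

    def forward(self, x):
        x = self.sk(self.features(x))
        x = x.mean(dim=(2, 3))                      # global average pooling
        return self.head(x)


if __name__ == "__main__":
    model = SKMobileNetSketch()
    logits = model(torch.randn(1, 3, 224, 224))
    print(logits.shape)                             # torch.Size([1, 38])

In a transfer-learning setup, the pretrained feature layers would typically be frozen or trained with a small learning rate while the SK block and the new head are trained on the plant-disease images; those training details are likewise not taken from the paper.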



Acknowledgements

This work was supported by the Heilongjiang Provincial Natural Science Foundation of China (Grant No. LH2020F044), the 2019 “Chunhui Plan” Cooperative Scientific Research Project of the Ministry of Education of China (Grant No. HLJ2019015), the Fundamental Research Funds for Heilongjiang Universities, China (Grant No. 2020-KYYWF-1014), and the National Innovation and Entrepreneurship Training Program for Chinese College Students (Grant No. 202110212027).

Author information


Corresponding author

Correspondence to Jialiang Peng.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.


About this article


Cite this article

Liu, G., Peng, J. & El-Latif, A.A.A. SK-MobileNet: A Lightweight Adaptive Network Based on Complex Deep Transfer Learning for Plant Disease Recognition. Arab J Sci Eng 48, 1661–1675 (2023). https://doi.org/10.1007/s13369-022-06987-z

