
Orchid classification using homogeneous ensemble of small deep convolutional neural network

  • Original Paper
  • Published:
Machine Vision and Applications

Abstract

Orchids are flowering plants in the large and diverse family Orchidaceae. Orchid flowers may share similar visual characteristics even when they belong to different species, so classifying orchid species from images is a highly challenging task. Motivated by the inadequacy of current state-of-the-art general-purpose image classification methods in differentiating the subtle differences between orchid flower images, we propose a hybrid model architecture to better classify orchid species from images. The architecture is composed of three parts: the global prediction network (GPN), the local prediction network (LPN), and the ensemble neural network (ENN). The GPN predicts the orchid species from global features of the orchid flower. The LPN examines local features, such as the organs of the orchid plant, via a spatial transformer network. Finally, the ENN fuses the intermediate predictions from the GPN and LPN modules to produce the final prediction. All modules are implemented on top of a robust convolutional neural network with transfer learning from notable existing models. Because of the interplay between the modules, we also outline the training steps necessary for achieving high predictive performance. Classification results on an extensive in-house Orchids-52 dataset demonstrate the superiority of the proposed method over the state of the art.
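The three-module pipeline described in the abstract can be sketched in miniature: two branch networks each emit a class distribution, and the ensemble module fuses the concatenated intermediate predictions into the final one. This is a minimal NumPy illustration, not the authors' implementation; the random linear maps stand in for the actual GPN/LPN convolutional backbones and the STN-based cropping, and the names (`gpn`, `lpn`, `enn`, `W_fuse`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES = 52  # the Orchids-52 dataset has 52 species

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical stand-ins for the trained backbones: each maps a
# 128-d feature vector to class logits (random projections here).
W_gpn = rng.normal(size=(128, NUM_CLASSES))
W_lpn = rng.normal(size=(128, NUM_CLASSES))

def gpn(features):
    # Global prediction: classifies from whole-flower features.
    return softmax(features @ W_gpn)

def lpn(features):
    # Local prediction: in the paper, an STN first localizes organs;
    # here we apply a second projection to the same features.
    return softmax(features @ W_lpn)

def enn(p_global, p_local, W_fuse):
    # Ensemble: fuse the two intermediate class distributions
    # through a learned layer to produce the final prediction.
    fused = np.concatenate([p_global, p_local], axis=-1)
    return softmax(fused @ W_fuse)

features = rng.normal(size=(4, 128))  # a batch of 4 images' features
W_fuse = rng.normal(size=(2 * NUM_CLASSES, NUM_CLASSES))
final = enn(gpn(features), lpn(features), W_fuse)
print(final.shape)  # (4, 52): one distribution over 52 species per image
```

The fusion step is what distinguishes this design from simple prediction averaging: `W_fuse` can learn, per class, how much to trust the global versus the local branch.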





Acknowledgements

This research was supported by Chiang Mai University.



About this article


Cite this article

Sarachai, W., Bootkrajang, J., Chaijaruwanich, J. et al. Orchid classification using homogeneous ensemble of small deep convolutional neural network. Machine Vision and Applications 33, 17 (2022). https://doi.org/10.1007/s00138-021-01267-6
