Shallow and Deep Model Investigation for Distinguishing Corn and Weeds

  • Yu Xia
  • Hongxun Yao
  • Xiaoshuai Sun
  • Yanhao Zhang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10735)


Nowadays, agriculture is developing rapidly. Corn yield is an important indicator and a major part of agriculture, which makes automatic weed removal a necessary and urgent task. Distinguishing corn from weeds poses many challenges, the biggest being the similarity in both color and shape between the two. Processing speed is also critical in practical applications. In this paper, we investigate two methods for this task. The first computes SIFT and Harris feature descriptors and then applies an SVM classifier to distinguish corn from weeds. The second is an end-to-end solution based on the Faster R-CNN model. In addition, we design a specific module to improve processing speed while maintaining accuracy. Experimental results on our dataset demonstrate that detection based on the improved Faster R-CNN model handles the problem better.
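The shallow pipeline above locates keypoints with hand-crafted detectors before descriptor extraction and SVM classification. As an illustrative sketch (not the authors' implementation), the Harris response used by such detectors is R = det(M) − k·tr(M)², where M accumulates gradient products over a local window; the synthetic image, window size, and k = 0.05 below are assumptions for illustration:

```python
import numpy as np

def harris_response(img, k=0.05, win=3):
    """Harris corner response R = det(M) - k * trace(M)^2,
    with a simple box window over gradient products (illustrative sketch)."""
    Iy, Ix = np.gradient(img.astype(float))  # axis 0 = rows (y), axis 1 = cols (x)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # sum each pixel's win x win neighbourhood via shifted copies
        pad = win // 2
        ap = np.pad(a, pad, mode="edge")
        out = np.zeros_like(a)
        for dy in range(win):
            for dx in range(win):
                out += ap[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace

# a bright square on a dark background: its corners should score highest,
# edges should score negative, and flat regions near zero
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
R = harris_response(img)
```

In the paper's pipeline, descriptors extracted around such high-response keypoints (SIFT, or Harris-based) would then be fed to the SVM classifier; only the detector stage is sketched here.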


SIFT/HARRIS · Faster R-CNN · Weed detection · Image classification



This work was supported by the National Natural Science Foundation of China under Project No. 61472103.


References

  1. Wu, S.G., Bao, F.S., Xu, E.Y., Wang, Y., Chang, Y., Xiang, Q.: A leaf recognition algorithm for plant classification using probabilistic neural network. CoRR abs/0707.4289 (2007)
  2. Jiang, Y., Ma, J.: Combination features and models for human detection. In: CVPR, pp. 240–248. IEEE Computer Society (2015)
  3. Fusek, R., Sojka, E., Mozdren, K., Surkala, M.: Energy-transfer features and their application in the task of face detection. In: AVSS, pp. 147–152. IEEE Computer Society (2013)
  4. Wang, J., Zhu, H., Yu, S., Fan, C.: Object tracking using color-feature guided network generalization and tailored feature fusion. Neurocomputing 238, 387–398 (2017)
  5. Girshick, R.B., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: CVPR, pp. 580–587. IEEE Computer Society (2014)
  6. Girshick, R.B.: Fast R-CNN. In: ICCV, pp. 1440–1448. IEEE Computer Society (2015)
  7. Ren, S., He, K., Girshick, R.B., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. In: NIPS, pp. 91–99 (2015)
  8. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556 (2014)
  9. Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft COCO: common objects in context. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8693, pp. 740–755. Springer, Cham (2014)
  10. Redmon, J., Divvala, S.K., Girshick, R.B., Farhadi, A.: You only look once: unified, real-time object detection. In: CVPR, pp. 779–788. IEEE Computer Society (2016)
  11. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60(2), 91–110 (2004)

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Yu Xia¹
  • Hongxun Yao¹
  • Xiaoshuai Sun¹
  • Yanhao Zhang¹

  1. School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
