Comparing Ellipse Detection and Deep Neural Networks for the Identification of Drinking Glasses in Images

  • Abdul Jabbar (Email author)
  • Alexandre Mendes
  • Stephan Chalup
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11754)

Abstract

This study compares a deep learning approach with the traditional computer vision method of ellipse detection on the task of detecting semi-transparent drinking glasses filled with water in images. Deep neural networks can, in principle, be trained until they exhibit excellent detection accuracy. However, their ability to generalise to different surroundings relies on large amounts of training data, whereas ellipse detection can work in any environment without requiring additional data or algorithm tuning. Two deep neural networks, trained on different image data sets containing drinking glasses, were tested in this study. Both networks achieved high levels of detection accuracy, independent of the test image resolution. In contrast, the ellipse detection method was less consistent, depending strongly on the visibility of the top and bottom of the glasses and on the water level. At lower resolutions it detected the top of the glasses in less than half of the cases, and its results for the water level and the bottom of the glasses were even worse at all resolutions.
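For illustration, the sketch below shows a generic classical ellipse-detection pipeline of the kind the study contrasts with deep networks: edges are extracted and ellipses are fitted to sufficiently long contours. It is a minimal sketch using OpenCV (assuming OpenCV 4.x), with assumed parameters (Canny thresholds, minimum contour length) and a hypothetical input file name; it is not the specific detector evaluated in the paper.

    import cv2

    def detect_ellipses(gray, min_contour_points=20):
        """Fit ellipses to edge contours; returns a list of ((cx, cy), (major, minor), angle)."""
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)
        edges = cv2.Canny(blurred, 50, 150)                   # assumed edge thresholds
        contours, _ = cv2.findContours(edges, cv2.RETR_LIST,  # OpenCV 4.x return signature
                                       cv2.CHAIN_APPROX_NONE)
        ellipses = []
        for contour in contours:
            if len(contour) >= max(5, min_contour_points):    # cv2.fitEllipse needs >= 5 points
                ellipses.append(cv2.fitEllipse(contour))
        return ellipses

    if __name__ == "__main__":
        image = cv2.imread("glass.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical test image
        for (cx, cy), (major, minor), angle in detect_ellipses(image):
            print(f"centre=({cx:.1f}, {cy:.1f}) axes=({major:.1f}, {minor:.1f}) angle={angle:.1f}")

In such a pipeline the rim, water line and base of a glass each project to a (partial) ellipse, which is why detection rates can be reported separately for the top, water level and bottom of the glasses, whereas a learned detector predicts a bounding box for the whole glass from training data.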

Keywords

Deep learning · Computer vision · Ellipse detection · Drinking glasses · Training data · Algorithm tuning · Deep neural networks

Notes

Acknowledgements

Abdul Jabbar was supported by a UNRSC 50:50 PhD scholarship from The University of Newcastle.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Abdul Jabbar (Email author) 1
  • Alexandre Mendes 1
  • Stephan Chalup 1

  1. School of Electrical Engineering and Computing, The University of Newcastle, Callaghan, Australia