On Recognizing Transparent Objects in Domestic Environments Using Fusion of Multiple Sensor Modalities

  • Alexander Hagg
  • Frederik Hegger
  • Paul G. Plöger
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9776)


Current object recognition methods fail on object sets that include diffuse, reflective, and transparent materials, although such materials are very common in domestic scenarios. We show that a combination of cues from multiple sensor modalities, including specular reflectance and unavailable depth information, allows us to capture a larger subset of household objects by extending a state-of-the-art object recognition method. This leads to a significant increase in recognition robustness over a larger set of commonly used objects.
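The two cues named in the abstract can be illustrated with a minimal sketch: regions where the depth sensor returned no measurement, and bright, low-saturation pixels suggestive of specular highlights. The function name, thresholds, and input layout below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def transparency_cues(depth, gray, sat,
                      highlight_thresh=230, sat_thresh=40):
    """Return two binary masks of candidate transparent regions.

    depth: depth image, with 0 or NaN where the sensor got no return
           (transparent surfaces often produce such holes)
    gray:  grayscale intensity image
    sat:   saturation channel (e.g. from an HSV conversion)
    Thresholds are illustrative, not taken from the paper.
    """
    # Cue 1: missing depth readings (sensor holes)
    missing_depth = np.isnan(depth) | (depth == 0)
    # Cue 2: specular highlights are very bright and nearly colorless
    specular = (gray >= highlight_thresh) & (sat <= sat_thresh)
    return missing_depth, specular

# Toy 4x4 frame: one all-zero depth column, one highlight pixel
depth = np.array([[1.0, 1.2, 0.0, 1.1]] * 4)
gray = np.full((4, 4), 100)
gray[2, 1] = 250
sat = np.full((4, 4), 120)
sat[2, 1] = 10
md, sp = transparency_cues(depth, gray, sat)
```

In a full pipeline these masks would be fused with the usual appearance and shape features before classification; here they only demonstrate the raw cues.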


Keywords: Object recognition · Transparency · Fusion · Modalities · Domestic robotics · Multimodal



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Alexander Hagg (1)
  • Frederik Hegger (1)
  • Paul G. Plöger (1)
  1. Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany
