Autonomous Robots, Volume 42, Issue 2, pp 197–208

A comparison of volumetric information gain metrics for active 3D object reconstruction

  • Jeffrey Delmerico
  • Stefan Isler
  • Reza Sabzevari
  • Davide Scaramuzza
Part of the following topical collections:
  1. Active Perception

Abstract

In this paper, we investigate the following question: when performing next-best-view selection for volumetric 3D reconstruction of an object by a mobile robot equipped with a dense (camera-based) depth sensor, what formulation of information gain is best? To address this question, we propose several new ways to quantify the volumetric information (VI) contained in the voxels of a probabilistic volumetric map, and compare them to the state of the art in extensive simulated experiments. Our proposed formulations incorporate factors such as visibility likelihood and the likelihood of seeing new parts of the object. The results of our experiments allow us to draw clear conclusions about the VI formulations that are most effective in different mobile-robot reconstruction scenarios. To the best of our knowledge, this is the first comparative survey of VI formulation performance for active 3D object reconstruction. Additionally, our modular software framework is adaptable to other robotic platforms and general reconstruction problems, and we release it as open source for autonomous reconstruction tasks.
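To make the occlusion-aware flavor of such metrics concrete, here is a minimal Python sketch of how a candidate view can be scored on a probabilistic voxel map: each voxel's occupancy entropy is weighted by the likelihood that a sensor ray actually reaches it. This illustrates the general idea only, not the specific VI formulations compared in the paper; the function names and toy occupancy values are hypothetical.

    import numpy as np

    def entropy(p):
        # Shannon entropy (bits) of a Bernoulli occupancy probability.
        p = np.clip(p, 1e-6, 1 - 1e-6)
        return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

    def ray_gain(occ_probs):
        # Occlusion-aware gain along one ray, ordered sensor outward:
        # each voxel's entropy is weighted by the probability that the
        # ray reaches it (product of emptiness of the voxels in front).
        visibility, gain = 1.0, 0.0
        for p in occ_probs:
            gain += visibility * entropy(p)
            visibility *= 1.0 - p  # ray survives this voxel with prob 1 - p
        return gain

    def view_gain(rays):
        # Total gain of a candidate view is the sum over its rays.
        return sum(ray_gain(r) for r in rays)

    # Toy map where p = 0.5 marks unknown voxels (OctoMap convention).
    view_a = [[0.5, 0.5, 0.9], [0.5, 0.5, 0.5]]  # rays into unexplored space
    view_b = [[0.9, 0.5, 0.5], [0.9, 0.5, 0.5]]  # rays blocked early by a surface
    scores = {"A": view_gain(view_a), "B": view_gain(view_b)}
    print("next best view:", max(scores, key=scores.get))

In this toy example view A scores higher, because its rays traverse unknown voxels unobstructed, whereas view B's rays are likely absorbed by an occupied voxel before reaching the unknown region; factors such as the likelihood of seeing new parts of the object would enter as additional per-voxel weights.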

Keywords

Active vision · Information gain · 3D reconstruction

Copyright information

© Springer Science+Business Media New York 2017

Authors and Affiliations

  1. Robotics and Perception Group, University of Zurich, Zurich, Switzerland
