A comparison of volumetric information gain metrics for active 3D object reconstruction
In this paper, we investigate the following question: when performing next best view selection for volumetric 3D reconstruction of an object by a mobile robot equipped with a dense (camera-based) depth sensor, what formulation of information gain is best? To address this question, we propose several new ways to quantify the volumetric information (VI) contained in the voxels of a probabilistic volumetric map, and compare them to the state of the art with extensive simulated experiments. Our proposed formulations incorporate factors such as visibility likelihood and the likelihood of seeing new parts of the object. The results of our experiments allow us to draw some clear conclusions about the VI formulations that are most effective in different mobile-robot reconstruction scenarios. To the best of our knowledge, this is the first comparative survey of VI formulation performance for active 3D object reconstruction. Additionally, our modular software framework is adaptable to other robotic platforms and general reconstruction problems, and we release it open source for autonomous reconstruction tasks.
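To make the notion of voxel-level volumetric information concrete, the following sketch shows one common way such a quantity can be computed: per-voxel occupancy entropy accumulated along a sensor ray, weighted by the likelihood that the ray is not occluded before reaching each voxel. This is an illustrative baseline only, not the specific formulations proposed in the paper; all function names and the grid representation are assumptions.

```python
import math

def voxel_entropy(p):
    """Shannon entropy (bits) of a voxel's occupancy probability p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0  # fully known voxel carries no information
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def ray_information_gain(occupancy_probs):
    """Expected information along one ray through the occupancy grid.

    Each voxel's entropy is weighted by the visibility likelihood:
    the product of emptiness probabilities of all voxels in front of
    it, since an occupied voxel blocks the ray with probability p.
    """
    visibility = 1.0
    gain = 0.0
    for p in occupancy_probs:
        gain += visibility * voxel_entropy(p)
        visibility *= (1.0 - p)  # chance the ray continues past this voxel
    return gain

def view_information_gain(rays):
    """Score a candidate viewpoint by summing over all cast rays."""
    return sum(ray_information_gain(r) for r in rays)
```

For example, a ray through two fully unknown voxels (p = 0.5 each) scores 1 + 0.5 = 1.5 bits, while a ray through already-known voxels scores zero; a next-best-view planner would pick the candidate view maximizing this score.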
Keywords: Active vision · Information gain · 3D reconstruction