
How to localize humanoids with a single camera?


Abstract

In this paper, we propose a real-time vision-based localization approach for humanoid robots that uses a single camera as the only sensor. To localize the robot accurately, we first build an accurate 3D map of the environment. The map is computed with stereo visual SLAM techniques based on non-linear least-squares optimization (bundle adjustment). Once we have computed a 3D reconstruction of the environment, which comprises a set of camera poses (keyframes) and a list of 3D points, we learn the visibility of the 3D points by exploiting all the geometric relationships between the camera poses and 3D map points involved in the reconstruction. Finally, we use the prior 3D map and the learned visibility prediction for monocular vision-based localization. Our algorithm is efficient, easy to implement, and more robust and accurate than existing approaches. By means of visibility prediction, we select for a query pose only the highly visible 3D points, greatly speeding up the data association between 3D map points and perceived 2D features in the image. In this way, we can solve the Perspective-n-Point (PnP) problem very efficiently, providing robust and fast vision-based localization. We demonstrate the robustness and accuracy of our approach in several vision-based localization experiments with the HRP-2 humanoid robot.
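A minimal sketch of the on-line localization step described above follows. It is not the authors' implementation: the learned visibility predictor is approximated here by a simple vote over the k keyframes nearest to a pose prior, the function names and the layout of the visibility table are assumptions made for the example, and the 2D-3D descriptor matching is assumed to have been done already. Only cv2.solvePnPRansac and cv2.Rodrigues are real OpenCV calls.

```python
# Illustrative sketch: given a prior 3D map (keyframe poses, 3D points, and a
# point-visibility table learned during reconstruction), predict which map
# points should be visible from a rough pose prior, then estimate the camera
# pose from 2D-3D matches with PnP + RANSAC.
import numpy as np
import cv2


def predict_visible_points(pose_prior, keyframe_positions, visibility, k=5):
    """Visibility prediction reduced to a k-nearest-keyframe vote (assumption).

    keyframe_positions : (M, 3) keyframe camera centres from the prior map.
    visibility         : (M, N) boolean table; visibility[i, j] is True when
                         map point j was observed from keyframe i.
    Returns indices of map points seen by a majority of the k keyframes
    closest to the pose prior.
    """
    dists = np.linalg.norm(keyframe_positions - pose_prior, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = visibility[nearest].sum(axis=0)
    return np.flatnonzero(votes > k / 2.0)


def localize(points_2d, points_3d, K):
    """Monocular pose estimation from 2D-3D matches via PnP + RANSAC.

    points_2d : (N, 2) image features matched against the predicted points.
    points_3d : (N, 3) corresponding 3D map points.
    K         : (3, 3) camera intrinsic matrix.
    Returns the world-to-camera rotation R and translation t, or None.
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float32),
        points_2d.astype(np.float32),
        K.astype(np.float32),
        None,                       # assume undistorted (rectified) images
        reprojectionError=3.0)
    if not ok or inliers is None:
        return None
    R, _ = cv2.Rodrigues(rvec)      # rotation vector -> rotation matrix
    return R, tvec
```

The actual system learns a per-point visibility model from all keyframe-point relationships in the reconstruction; the voting rule above only stands in for that learned predictor so that the PnP stage can be shown end to end.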







Acknowledgements

This work has been financed with funds from the Ministerio de Economía y Competitividad through the project ADD-Gaze (TRA2011-29001-C04-01), as well as from the Comunidad de Madrid through the project Robocity2030 (CAM-S-0505/DPI/000176). The authors would also like to thank the Joint French-Japanese Robotics Laboratory (JRL), CNRS/AIST, Tsukuba, Japan.

Author information


Corresponding author

Correspondence to Pablo F. Alcantarilla.

Electronic Supplementary Material

The online version of this article includes supplementary material: eleven video files (MPG), a presentation (PPT), and a text file (TXT).


About this article

Cite this article

Alcantarilla, P.F., Stasse, O., Druon, S. et al. How to localize humanoids with a single camera? Auton Robot 34, 47–71 (2013). https://doi.org/10.1007/s10514-012-9312-1


Keywords

Navigation