Robot Vision, Autonomous Vehicles, and Human Robot Interaction

Chapter in Active Lighting and Its Application for Computer Vision

Abstract

Sensors are indispensable to robots. Active-lighting sensors in particular operate robustly even under severe conditions and are used heavily for robot tasks such as object localization/recognition, navigation, and manipulation. In this chapter, we review two robotics applications: simultaneous localization and mapping (SLAM) for navigation tasks and learning-from-observation (LfO) for manipulation tasks.
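
To make the navigation side concrete, the sketch below illustrates the classical iterative-closest-point (ICP) step on which scan-registration pipelines of this kind are commonly built. It is a minimal NumPy sketch for illustration, not the chapter's own implementation; the function names and the brute-force nearest-neighbour matching are assumptions made for clarity.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid motion (R, t) mapping src onto dst via the
    SVD (Kabsch) solution; src and dst are (N, d) arrays of points
    already in correspondence."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, iters=30):
    """Plain ICP: alternate nearest-neighbour matching (brute force,
    for clarity only) with re-solving for the rigid motion."""
    R_acc, t_acc = np.eye(src.shape[1]), np.zeros(src.shape[1])
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matched = dst[d2.argmin(axis=1)]       # closest target point
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t                    # apply the increment
        R_acc, t_acc = R @ R_acc, R @ t_acc + t
    return R_acc, t_acc
```

In an active-lighting setting, src and dst would be consecutive laser range scans, and chaining the recovered transforms yields the sensor trajectory that a full SLAM system then refines.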

Author information

Correspondence to Katsushi Ikeuchi.

Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Ikeuchi, K. et al. (2020). Robot Vision, Autonomous Vehicles, and Human Robot Interaction. In: Active Lighting and Its Application for Computer Vision. Advances in Computer Vision and Pattern Recognition. Springer, Cham. https://doi.org/10.1007/978-3-030-56577-0_12

  • DOI: https://doi.org/10.1007/978-3-030-56577-0_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-56576-3

  • Online ISBN: 978-3-030-56577-0

  • eBook Packages: Computer Science; Computer Science (R0)
