
Spatial Understanding as a Common Basis for Human-Robot Collaboration

  • Conference paper
Advances in Human Factors in Robots and Unmanned Systems (AHFE 2017)

Part of the book series: Advances in Intelligent Systems and Computing ((AISC,volume 595))


Abstract

We are developing a robotic cognitive architecture to be embedded in autonomous robots that can safely interact and collaborate with people on a wide range of physical tasks. Achieving true autonomy requires increasing the robot’s understanding of the dynamics of its world (physical understanding), and particularly the actions of people (cognitive understanding). Our system’s cognitive understanding arises from the Soar cognitive architecture, which constitutes the reasoning and planning component. The system’s physical understanding stems from its central representation, which is a 3D virtual world that the architecture synchronizes with the environment in real time. The virtual world provides a common representation between the robot and humans, thus improving trust between them and promoting effective collaboration.
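The abstract's central idea — a 3D virtual world kept synchronized with the real environment, which a symbolic planner can then query — can be sketched in miniature. The classes, method names, and the reach threshold below are all illustrative assumptions for exposition, not the authors' actual system or the Soar API:

```python
# Hypothetical sketch: perception updates a shared 3D model ("virtual world"),
# and the reasoning component queries that model for physical understanding.
from dataclasses import dataclass, field


@dataclass
class Pose:
    """A tracked object's position, in meters, in the robot's frame."""
    x: float
    y: float
    z: float


@dataclass
class VirtualWorld:
    """Shared spatial model: one pose entry per tracked object."""
    objects: dict = field(default_factory=dict)

    def update(self, observations: dict) -> None:
        # Overwrite each object's pose with the latest perceptual estimate,
        # so the virtual world stays synchronized with the environment.
        for name, pose in observations.items():
            self.objects[name] = pose

    def within_reach(self, name: str, reach: float = 1.0) -> bool:
        # A physical-understanding query a planner could ask of the model:
        # is this object inside the robot's (assumed) workspace radius?
        p = self.objects[name]
        return (p.x ** 2 + p.y ** 2 + p.z ** 2) ** 0.5 <= reach


world = VirtualWorld()
world.update({"cup": Pose(0.4, 0.2, 0.1), "box": Pose(3.0, 1.0, 0.0)})
print(world.within_reach("cup"))  # True: the cup lies inside the 1 m radius
print(world.within_reach("box"))  # False: the box is out of reach
```

In the paper's framing, the same model would also be rendered for human partners, so robot and human reason over one common spatial representation rather than private ones.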



Author information

Correspondence to D. Paul Benjamin.


Copyright information

© 2018 Springer International Publishing AG

Cite this paper

Benjamin, D.P., Li, T., Shen, P., Yue, H., Zhao, Z., Lyons, D. (2018). Spatial Understanding as a Common Basis for Human-Robot Collaboration. In: Chen, J. (eds) Advances in Human Factors in Robots and Unmanned Systems. AHFE 2017. Advances in Intelligent Systems and Computing, vol 595. Springer, Cham. https://doi.org/10.1007/978-3-319-60384-1_3


  • DOI: https://doi.org/10.1007/978-3-319-60384-1_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-60383-4

  • Online ISBN: 978-3-319-60384-1

  • eBook Packages: Engineering (R0)
