Object Learning and Grasping Capabilities for Robotic Home Assistants

  • S. Hamidreza Kasaei
  • Nima Shafii
  • Luís Seabra Lopes
  • Ana Maria Tomé
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9776)

Abstract

This paper proposes an architecture designed to create a proper coupling between perception and manipulation for assistive robots. Such coupling is necessary not only to perform manipulation tasks in a reasonable amount of time, but also to adapt robustly to new environments by handling previously unseen objects. In particular, the architecture provides automatic perception capabilities that allow robots to (i) incrementally learn object categories from accumulated experiences and (ii) infer how to grasp household objects in different situations. To examine the performance of the proposed architecture, quantitative and qualitative evaluations were carried out. Experimental results show that the proposed system is able to interact with human users, learn new object categories over time, and perform object grasping tasks.
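The abstract describes two coupled capabilities: incremental learning of object categories from accumulated experiences, and inference of how to grasp a recognised object. The snippet below is a minimal, hypothetical sketch of such a coupling under simplifying assumptions, not the authors' implementation: it stores labelled feature vectors per category, recognises a new view by nearest-neighbour matching, and maps the predicted category to an illustrative grasp strategy. All class names, feature values, and grasp labels are assumptions for illustration only.

```python
# Minimal sketch (not the paper's system) of coupling open-ended object
# learning with grasp selection. Categories are learned incrementally from
# labelled "experiences"; recognition drives a category-to-grasp lookup.

import numpy as np


class OpenEndedObjectLearner:
    """Incrementally stores object views per category and classifies new views."""

    def __init__(self):
        self.memory = {}  # category name -> list of stored feature vectors

    def teach(self, category, features):
        """Add one labelled experience (e.g. provided by a human teacher)."""
        self.memory.setdefault(category, []).append(np.asarray(features, dtype=float))

    def recognise(self, features):
        """Return the category whose stored views are closest to the query view."""
        query = np.asarray(features, dtype=float)
        best_category, best_distance = None, float("inf")
        for category, views in self.memory.items():
            distance = min(np.linalg.norm(query - v) for v in views)
            if distance < best_distance:
                best_category, best_distance = category, distance
        return best_category


# Hypothetical category-to-grasp lookup standing in for the grasp inference module.
GRASP_STRATEGIES = {"mug": "side grasp at handle", "bottle": "top-down pinch"}


if __name__ == "__main__":
    learner = OpenEndedObjectLearner()
    learner.teach("mug", [0.2, 0.8, 0.1])     # toy shape descriptors
    learner.teach("bottle", [0.9, 0.1, 0.4])

    category = learner.recognise([0.25, 0.75, 0.15])
    print(category, "->", GRASP_STRATEGIES.get(category, "default top grasp"))
```

In this toy setup, teaching a new category requires no retraining of previously learned ones, which loosely mirrors the open-ended, experience-driven learning the abstract refers to.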

Keywords

Assistive robots, Object grasping, Object learning and recognition

Notes

Acknowledgement

This work was funded by National Funds through FCT project PEst-OE/EEI/UI0127/2016 and FCT scholarship SFRH/BD/94183/2013.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • S. Hamidreza Kasaei (1)
  • Nima Shafii (1)
  • Luís Seabra Lopes (1, 2)
  • Ana Maria Tomé (1, 2)

  1. IEETA - Instituto de Engenharia Electrónica e Telemática de Aveiro, Universidade de Aveiro, Aveiro, Portugal
  2. Departamento de Electrónica, Telecomunicações e Informática, Universidade de Aveiro, Aveiro, Portugal