Active Object Search Exploiting Probabilistic Object–Object Relations

  • Jos Elfring
  • Simon Jansen
  • René van de Molengraft
  • Maarten Steinbuch
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8371)

Abstract

This paper proposes an active object search approach based on probabilistic object–object relations. An important role of mobile robots will be to perform object-related tasks, and active object search strategies address the non-trivial problem of finding an object in unstructured and dynamically changing environments. This work builds on an existing approach that exploits probabilistic object–room relations to select the room in which an object is expected to be. Learnt object–object relations then make it possible to search for objects inside a room via a chain of intermediate objects. Simulations have been performed to investigate the effect of camera quality on path length and failure rate. Furthermore, a comparison is made with a benchmark algorithm based on the same prior knowledge but without a chain of intermediate objects. An experiment demonstrates the potential of the proposed approach on the AMIGO robot.
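The sketch below illustrates the general idea of such a search in Python: object–room relations select the room most likely to contain the target, and learnt object–object relations pick an intermediate object to look for first. The probability tables and the greedy selection rule are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
# Minimal sketch of an active object search driven by probabilistic relations.
# All numbers below are made-up example values, not learnt data from the paper.

# Object-room relations: prior probability of finding the target in each room.
p_room_given_target = {"kitchen": 0.7, "living_room": 0.2, "bedroom": 0.1}

# Learnt object-object relations: probability that the target is located
# near a given (typically larger, easier-to-detect) intermediate object.
p_target_near = {"table": 0.6, "counter": 0.3, "fridge": 0.1}

# Assumed detectability of each intermediate object from a distance.
p_detect = {"table": 0.9, "counter": 0.8, "fridge": 0.95}


def select_room(p_room):
    """Go to the room where the target object is most likely to be."""
    return max(p_room, key=p_room.get)


def select_intermediate(p_near, p_det):
    """Greedily pick the intermediate object that maximises the expected
    benefit: relatedness to the target times detectability."""
    return max(p_near, key=lambda obj: p_near[obj] * p_det[obj])


if __name__ == "__main__":
    room = select_room(p_room_given_target)
    intermediate = select_intermediate(p_target_near, p_detect)
    print(f"Search room: {room}")
    print(f"First intermediate object to look for: {intermediate}")
```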

Keywords

Mobile Robot; Target Object; Object Relation; Active Object; Real Robot

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Jos Elfring (1)
  • Simon Jansen (1)
  • René van de Molengraft (1)
  • Maarten Steinbuch (1)
  1. Faculty of Mechanical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
