Intelligent Service Robotics

Volume 12, Issue 4, pp 371–380

Active object search in an unknown large-scale environment using commonsense knowledge and spatial relations

  • Mingu Kim
  • Il Hong Suh
Original Research Paper


In this study, the goal is to efficiently and actively search for a target object in a previously unknown large-scale environment. To this end, we develop a probabilistic environment model that combines spatial commonsense knowledge with environment-specific spatial relations. The model evaluates the merit of exploring each possible viewpoint in the environment to find the target object. A path planning method then incorporates the estimated value of these viewpoints and the travel-time cost between them to generate an efficient search path that minimizes the total search time. We also describe a search space reduction method that improves the feasibility of the proposed approach in large-scale environments. To validate the approach, we compare the search times of the proposed method to those of human participants, a coverage-based search, and a random search in simulation experiments. The results show that the proposed method generates search paths with search times similar to those of human participants, while clearly outperforming the coverage-based and random search methods. We also demonstrate the applicability of the approach in real-world experiments, in which the robot found the target object without a single failure in 70 trials.
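The core idea — weighing each viewpoint's estimated value against the travel time needed to reach it — can be illustrated with a minimal sketch. This is a simple greedy value-per-time heuristic written for illustration only; the function names, the Euclidean travel-time model, and the greedy selection rule are assumptions, not the planner actually proposed in the paper.

```python
import math

def travel_time(a, b, speed=1.0):
    """Assumed travel-time model: straight-line distance at constant speed."""
    return math.dist(a, b) / speed

def greedy_search_path(start, viewpoints, values):
    """Order viewpoints by repeatedly visiting the unvisited one with the
    best ratio of estimated value to travel time from the current position.
    A small epsilon avoids division by zero for a zero-distance viewpoint."""
    path, pos = [], start
    remaining = set(range(len(viewpoints)))
    while remaining:
        best = max(
            remaining,
            key=lambda i: values[i] / (travel_time(pos, viewpoints[i]) + 1e-9),
        )
        path.append(best)
        pos = viewpoints[best]
        remaining.remove(best)
    return path

# Toy example: a nearby medium-value viewpoint is visited before a
# distant high-value one, because value is traded off against travel time.
vps = [(0.0, 0.0), (5.0, 0.0), (1.0, 1.0)]
vals = [0.1, 0.9, 0.5]
print(greedy_search_path((0.0, 0.0), vps, vals))  # → [0, 2, 1]
```

A greedy ratio rule like this is myopic; minimizing the *total* expected search time over the whole tour is a harder combinatorial problem, which is why more global optimizers are typically used in practice.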


Keywords: Active object search · Mobile robot · Probabilistic environment model




Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Electronics and Computer Engineering, Hanyang University, Seongdong-gu, Korea
