
Scenario Reduction for Probabilistic Robot Path Planning in the Presence of Preferences on Hidden States

  • Research Article: Computer Engineering and Computer Science
  • Published in: Arabian Journal for Science and Engineering

Abstract

In practical robot motion planning, robots usually do not have a full model of their surroundings, and hence no complete and correct plan can be prepared in advance. In other words, in real-world scenarios a mobile robot operates in a partially known environment: information about the true states of several ‘hidden’ variables, such as open or closed doors that may block the robot's path, is incomplete. Consequently, a robot may maintain an estimated probability distribution over the states of the hidden variables, as well as a preference over their possible values. In this paper, a linear programming model that incorporates the probability distribution of the hidden variables is developed for choosing optimal policies in off-line planning. Furthermore, a heuristic method based on optimistic assumptive planning is proposed for planning in the presence of numerous hidden variables, and it produces near-optimal policies.
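
To make the scenario-based idea concrete, here is a minimal sketch, not the paper's actual formulation: a four-node graph with one door whose open/closed state is hidden, two scenarios weighted by assumed probabilities (0.7 open, 0.3 closed), and a per-scenario shortest-path LP solved with scipy.optimize.linprog so that the expected path cost is minimized. All node names, edge costs, and probabilities are illustrative assumptions; in this toy the scenarios decouple into independent shortest paths, whereas the paper's model and its optimistic assumptive-planning heuristic address policies chosen before the hidden states are observed.

```python
# Illustrative sketch only: expected-cost LP over two scenarios of a hidden
# door state. Node names, costs, and probabilities are assumptions, not the
# paper's actual data or formulation.
import numpy as np
from scipy.optimize import linprog

# Nodes: 0 = start, 1 = via-door node, 2 = detour node, 3 = goal
edges = [(0, 1), (1, 3), (0, 2), (2, 3)]      # directed edges
base_cost = np.array([1.0, 1.0, 2.0, 2.0])    # nominal traversal costs

# Hidden variable: the door on edge (1, 3). Blocked edge gets a large cost.
scenarios = [
    {"p": 0.7, "cost": base_cost.copy()},                               # door open
    {"p": 0.3, "cost": np.where(np.arange(4) == 1, 1e3, base_cost)},    # door closed
]

n_e, n_s = len(edges), len(scenarios)
# Objective: sum over scenarios of p(scenario) * edge costs in that scenario.
c = np.concatenate([s["p"] * s["cost"] for s in scenarios])

# Flow conservation per scenario: one unit leaves the start, one enters the goal.
A_eq, b_eq = [], []
for s in range(n_s):
    for node in range(4):
        row = np.zeros(n_s * n_e)
        for e, (u, v) in enumerate(edges):
            if u == node:
                row[s * n_e + e] += 1.0
            if v == node:
                row[s * n_e + e] -= 1.0
        A_eq.append(row)
        b_eq.append(1.0 if node == 0 else (-1.0 if node == 3 else 0.0))

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, 1)] * (n_s * n_e), method="highs")
print("expected cost:", res.fun)
for s in range(n_s):
    used = [edges[e] for e in range(n_e) if res.x[s * n_e + e] > 0.5]
    print(f"scenario {s}: path edges {used}")
```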

Author information

Corresponding author

Correspondence to Ellips Masehian.

About this article

Cite this article

Movafaghpour, M.A., Masehian, E. & Moradi, H. Scenario Reduction for Probabilistic Robot Path Planning in the Presence of Preferences on Hidden States. Arab J Sci Eng 39, 2909–2928 (2014). https://doi.org/10.1007/s13369-014-0959-0

Keywords: Navigation