Abstract
In practical robot motion planning, robots rarely have a full model of their surroundings, and hence no complete and correct plan can be prepared in advance. In other words, a mobile robot in a real-world scenario operates in a partially known environment: information about the true states of several ‘hidden’ variables, such as open or closed doors that may block the robot’s path, is incomplete. The robot may, however, have a probability distribution over the states of these hidden variables, as well as preferences over their possible values. In this paper, to choose optimal policies for planning in off-line mode, a linear programming model is developed that incorporates the probability distribution of the hidden variables. Furthermore, a heuristic method based on optimistic assumptive planning is proposed for planning in the presence of numerous hidden variables; it produces near-optimal policies.
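The optimistic assumptive strategy mentioned above can be illustrated with a minimal sketch: the planner assumes every hidden edge (e.g., a door) is traversable, follows a shortest path, and replans whenever execution reveals a blockage. This is an illustrative reconstruction under stated assumptions, not the paper's linear programming formulation; the function names (`assumptive_navigate`, `door_state`) are hypothetical.

```python
import heapq

def dijkstra(graph, start, goal, blocked):
    """Shortest path in an undirected weighted graph, skipping blocked edges."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:                        # reconstruct path via predecessors
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return path[::-1]
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            if tuple(sorted((u, v))) in blocked:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return None

def assumptive_navigate(graph, start, goal, door_state):
    """Optimistically assume every door is open, follow the shortest path,
    and replan whenever a door is sensed to be closed on arrival
    (door_state[edge] == False)."""
    blocked, pos, route = set(), start, [start]
    while pos != goal:
        path = dijkstra(graph, pos, goal, blocked)
        if path is None:
            return None                      # no unblocked path remains
        nxt = path[1]
        edge = tuple(sorted((pos, nxt)))
        if not door_state.get(edge, True):   # hidden variable revealed here
            blocked.add(edge)                # closed door: replan from pos
            continue
        pos = nxt
        route.append(pos)
    return route
```

On a small graph where the optimistic shortest path runs through a door that turns out to be closed, the executed route backtracks and takes the detour; repeated replanning of this kind is what the heuristic trades against solving the full probabilistic problem off-line.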
Cite this article
Movafaghpour, M.A., Masehian, E. & Moradi, H. Scenario Reduction for Probabilistic Robot Path Planning in the Presence of Preferences on Hidden States. Arab J Sci Eng 39, 2909–2928 (2014). https://doi.org/10.1007/s13369-014-0959-0