Intelligent Service Robotics, Volume 5, Issue 4, pp 245–258

Analysis of solutions to the time-optimal planning and execution problem

  • Thomas Allen
  • Steven Scheding
Special Issue

Abstract

This paper analyses solutions to the time-optimal planning and execution (TOPE) problem, in which the aim is to minimise the total time required for an agent to achieve its objectives. The TOPE process provides a means of adjusting system parameters in real-time to achieve this aim. Prior work by the authors showed that agent-based planning systems employing the TOPE process can yield better performance than existing techniques, provided that a key estimation step can be run sufficiently fast and accurately. This paper describes several real-time implementations of this estimation step. A Monte-Carlo analysis compares the performance of TOPE systems using these implementations against existing state-of-the-art planning techniques. It is shown that the average case performance of the TOPE systems is significantly better than the existing methods. Since the TOPE process can be added to an existing system without modifying the internal processes, these results suggest that similar performance improvement may be obtained in a multitude of robotics applications.
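As a purely illustrative sketch of the idea described above (not the paper's implementation), the TOPE decision can be viewed as choosing the planner configuration that minimises estimated planning time plus estimated execution time. The candidate settings and the toy estimator below are hypothetical placeholders for the estimation step analysed in the paper:

```python
def estimate_total_time(plan_time_est, exec_time_est):
    """Total mission time: time spent planning plus time executing the plan."""
    return plan_time_est + exec_time_est

def tope_select(candidate_settings, estimator):
    """Pick the planner setting whose estimated planning-plus-execution
    time is smallest. `estimator` maps a setting to a
    (planning_time, execution_time) pair; both it and the candidate
    settings stand in for the paper's real-time estimation step."""
    return min(candidate_settings,
               key=lambda s: estimate_total_time(*estimator(s)))

# Toy estimator: a finer planner resolution costs more planning time
# but tends to yield a shorter (faster to execute) path.
def toy_estimator(resolution):
    planning = 0.5 / resolution          # finer grid -> slower planning
    execution = 10.0 + 6.0 * resolution  # coarser grid -> longer path
    return planning, execution

best = tope_select([0.05, 0.1, 0.2, 0.5], toy_estimator)
print(best)  # the resolution with the lowest combined time estimate
```

The key point the sketch captures is that neither the fastest planner nor the one producing the best path is optimal in isolation; TOPE trades the two off using the time estimates.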

Keywords

Agent-based systems · Autonomous agents · Control architectures and programming · Learning and adaptive systems · Adaptive control


Acknowledgments

This work is supported in part by the Australian Research Council (ARC) Centre of Excellence programme, funded by the ARC and the New South Wales State Government. The authors are grateful to the Rio Tinto Centre for Mine Automation for the use of approximately 70,000 hours of CPU time required to perform this analysis.


Copyright information

© Springer-Verlag 2012

Authors and Affiliations

  1. The Australian Centre for Field Robotics, The University of Sydney, Sydney, Australia
