Autonomous Robots, Volume 16, Issue 1, pp 49–79

Bayesian Robot Programming

  • Olivier Lebeltel
  • Pierre Bessière
  • Julien Diard
  • Emmanuel Mazer


We propose a new method for programming robots based on Bayesian inference and learning, called BRP for Bayesian Robot Programming. The capacities of this programming method are demonstrated through a series of increasingly complex experiments. Starting from the learning of simple reactive behaviors, we present instances of behavior combination, sensor fusion, hierarchical behavior composition, situation recognition and temporal sequencing. Together, these experiments constitute the incremental development of a complex robot program. The advantages and drawbacks of BRP are discussed alongside the experiments and summarized in the conclusion. These robot programs may be seen as an illustration of probabilistic programming, applicable whenever one must deal with uncertain or incomplete knowledge. The scope of possible applications is obviously much broader than robotics.

Keywords: Bayesian robot programming · control of autonomous robots · computational architecture for autonomous systems · theory of autonomous systems
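The sensor fusion the abstract describes can be illustrated by a minimal sketch (not taken from the paper): combining two conditionally independent sensor readings over a discrete set of hypotheses via Bayes' rule. The hypotheses, priors and likelihood values below are invented for illustration.

```python
def fuse(prior, likelihood_a, likelihood_b):
    """Posterior over hypotheses given two conditionally independent sensors.

    Implements P(h | a, b) proportional to P(h) * P(a | h) * P(b | h),
    followed by normalization over all hypotheses h.
    """
    unnorm = [p * la * lb for p, la, lb in zip(prior, likelihood_a, likelihood_b)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Three hypothetical headings toward a light source: left, ahead, right.
prior = [1 / 3, 1 / 3, 1 / 3]          # uniform prior over headings
sensor_a = [0.7, 0.2, 0.1]             # likelihoods: strongly favours "left"
sensor_b = [0.5, 0.4, 0.1]             # likelihoods: weakly favours "left"

posterior = fuse(prior, sensor_a, sensor_b)
# Both sensors agree, so the posterior concentrates on "left".
```

The same multiply-and-normalize step is the core of the behavior combination and situation recognition experiments the abstract lists; only the variables and the decomposition of the joint distribution change.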





Copyright information

© Kluwer Academic Publishers 2004
