Design of a Control Architecture for Habit Learning in Robots
Research in psychology and neuroscience has identified multiple decision systems in mammals, enabling the control of behavior to shift, with training and familiarity with the environment, from a goal-directed system to a habitual system. The former relies on the explicit estimation of the future consequences of actions through planning towards a particular goal, which makes decision time longer but allows rapid adaptation to changes in the environment. The latter learns to associate values with particular stimulus-response associations, leading to quick reactive decision-making but slow relearning in response to environmental changes. Computational neuroscience models have formalized this as a coordination of model-based and model-free reinforcement learning. Inspired by this, we hypothesize that such coordination could enable robots to learn habits, to detect when these habits are appropriate, and thus to avoid the long and costly computations of the planning system. We illustrate this in a simple repetitive cube-pushing task on a conveyor belt, where a speed-accuracy trade-off is required. We show that the two systems have complementary advantages in this task, which can be combined to improve performance.
Keywords: Adaptive Behaviour, Habit Learning, Reinforcement Learning, Robotic Architecture
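The coordination of the two systems can be sketched in code. The following is a minimal, hypothetical Python illustration of a model-free (habitual) learner and a model-based (goal-directed) learner sharing the same experience on a toy chain task; the environment, state space, and all parameters are assumptions for illustration, not the paper's actual cube-pushing setup.

```python
import random

# Toy deterministic chain MDP standing in for a repetitive task:
# states 0..3, actions 0 (left) and 1 (right), reward 1.0 on
# reaching the goal state 3. Environment and parameters are
# illustrative assumptions only.
N_STATES, N_ACTIONS, GOAL = 4, 2, 3
GAMMA, ALPHA = 0.9, 0.5

def step(s, a):
    ns = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return ns, (1.0 if ns == GOAL else 0.0)

# Habitual system: model-free tabular Q-learning
# (cheap to query, slow to relearn after environmental change).
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

# Goal-directed system: a learned world model plus planning
# (value iteration here), costly to query but quick to adapt.
T = {}  # (state, action) -> next state
R = {}  # (state, action) -> reward

def plan():
    V = [0.0] * N_STATES
    for _ in range(50):  # value iteration over the learned model
        for s in range(N_STATES):
            vals = [R[(s, a)] + GAMMA * V[T[(s, a)]]
                    for a in range(N_ACTIONS) if (s, a) in T]
            if vals:
                V[s] = max(vals)
    return V

random.seed(0)
for _ in range(200):                      # experience shared by both systems
    s = 0
    for _ in range(10):
        a = random.randrange(N_ACTIONS)   # random exploration suffices:
        ns, r = step(s, a)                # Q-learning is off-policy
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[ns]) - Q[s][a])
        T[(s, a)], R[(s, a)] = ns, r      # model learning for the planner
        s = ns
        if s == GOAL:
            break

V = plan()  # goal-directed values; Q holds the habitual values
```

In an architecture of the kind described above, an arbitration mechanism would then decide, state by state, whether the cheap habitual values in `Q` are reliable enough to act on, or whether the costly planning step is worth invoking; the criterion used for that decision is not shown in this sketch.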