During sleep and wakeful rest, the hippocampus replays sequences of place cells that were activated during prior experiences. These replays have been interpreted as a memory consolidation process, but recent results suggest an additional interpretation in terms of reinforcement learning. The Dyna family of reinforcement learning algorithms uses offline replays to improve learning. Under a limited replay budget, prioritized sweeping, which requires a model of the transitions to predecessor states, can be used to improve performance. We investigate whether such algorithms can explain the experimentally observed replays. We propose a neural network version of prioritized sweeping Q-learning, for which we developed a growing multiple-expert algorithm able to cope with multiple predecessors. The resulting architecture improves the learning of simulated agents confronted with a navigation task. We predict that, in animals, learning of the transition and reward models should occur during rest periods, and that the corresponding replays should be shuffled.
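To make the mechanism concrete, the tabular form of prioritized sweeping that the abstract builds on can be sketched as follows. This is a minimal illustration on a toy deterministic chain task (the task, state space, and hyperparameter values are illustrative assumptions, not taken from the paper): each experienced transition is stored in a learned model together with its predecessors, and replay updates are drawn from a priority queue, under a fixed budget, in order of expected value change.

```python
import heapq
from collections import defaultdict

# Toy deterministic chain: states 0..4, action 0 = left, 1 = right,
# reward 1 on reaching state 4. Purely illustrative.
N_STATES, N_ACTIONS = 5, 2
GAMMA, ALPHA, THETA = 0.9, 1.0, 1e-4

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

Q = defaultdict(float)
model = {}                       # (s, a) -> (s', r): learned transition model
predecessors = defaultdict(set)  # s' -> {(s, a)}: needed to sweep backward
queue = []                       # max-priority queue (priorities negated)

def learn(s, a, r, s2, budget=10):
    """Record one real transition, then replay up to `budget` updates."""
    model[(s, a)] = (s2, r)
    predecessors[s2].add((s, a))
    p = abs(r + GAMMA * max(Q[(s2, b)] for b in range(N_ACTIONS)) - Q[(s, a)])
    if p > THETA:
        heapq.heappush(queue, (-p, s, a))
    for _ in range(budget):
        if not queue:
            break
        _, qs, qa = heapq.heappop(queue)
        ns, nr = model[(qs, qa)]
        target = nr + GAMMA * max(Q[(ns, b)] for b in range(N_ACTIONS))
        Q[(qs, qa)] += ALPHA * (target - Q[(qs, qa)])
        # Predecessors of the updated state may now be surprising too.
        for (ps, pa) in predecessors[qs]:
            _, pr = model[(ps, pa)]
            pp = abs(pr + GAMMA * max(Q[(qs, b)] for b in range(N_ACTIONS))
                     - Q[(ps, pa)])
            if pp > THETA:
                heapq.heappush(queue, (-pp, ps, pa))

# A single rightward pass: the reward found at the goal is propagated
# back along the chain by the replayed (swept) updates.
s = 0
for _ in range(N_STATES - 1):
    s2, r = step(s, 1)
    learn(s, 1, r, s2)
    s = s2
```

After this single pass, the budgeted sweep has already propagated the goal reward back to the start state (Q(0, right) ≈ 0.9³), whereas one-step Q-learning would need several passes; this is the sample-efficiency gain that motivates using such replays under a limited budget.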
The authors thank O. Sigaud for fruitful discussions, and F. Cinotti for proofreading. This work has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 640891 (DREAM Project). This work was performed within the Labex SMART (ANR-11-LABX-65) supported by French state funds managed by the ANR within the Investissements d’Avenir programme under reference ANR-11-IDEX-0004-02.