
Drifting neuronal representations: Bug or feature?


Abstract

The brain displays a remarkable ability to sustain stable memories, allowing animals to execute precise behaviors or recall stimulus associations years after they were first learned. Yet, recent long-term recording experiments have revealed that single-neuron representations continuously change over time, contravening the classical assumption that learned features remain static. How do unstable neural codes support robust perception, memories, and actions? Here, we review recent experimental evidence for such representational drift across brain areas, as well as dissections of its functional characteristics and underlying mechanisms. We emphasize theoretical proposals for how drift need not only be a form of noise for which the brain must compensate. Rather, it can emerge from computationally beneficial mechanisms in hierarchical networks performing robust probabilistic computations.
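One way to see how unstable single-neuron codes can coexist with stable behavior is a toy linear-readout picture (our illustrative sketch, not a model from the reviewed literature): if downstream output is a fixed linear readout of population activity, the population can reconfigure freely within the readout's null space, so individual neurons' responses drift while the behavioral output is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                               # population size (arbitrary)
w = rng.standard_normal(n)           # fixed downstream readout weights
x = rng.standard_normal(n)           # population response to one stimulus
x0 = x.copy()
y0 = w @ x                           # behavioral output before drift

for _ in range(1000):
    step = 0.05 * rng.standard_normal(n)   # slow random reconfiguration
    step -= ((w @ step) / (w @ w)) * w     # project out the readout direction
    x += step                              # drift confined to the readout's null space

print(np.isclose(w @ x, y0))         # True: output is preserved
print(np.linalg.norm(x - x0) > 1.0)  # True: single-neuron responses changed substantially
```

This is only the simplest degenerate-solution picture; the review discusses richer mechanisms, including drift arising from beneficial probabilistic computations rather than from compensated noise.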


[Figs. 1–3: available in the full text]

Availability of data and materials

Not applicable.

Code availability

Not applicable.




Acknowledgements

We thank Cengiz Pehlevan and Venkatesh Murthy for support and mentorship. We also thank Matthew Farrell, Michael Goard, Siddharth Jayakumar, and Torben Ott for helpful comments on our manuscript.


Funding

PM was supported by a grant from the Harvard Mind Brain Behavior Interfaculty Initiative. SQ and JAZ-V were supported by the NIH (1UF1NS111697-01), the Intel Corporation (through the Intel Neuromorphic Research Community), and a Google Faculty Research Award. JAZ-V was also partially supported by the NSF-Simons Center for Mathematical and Statistical Analysis of Biology at Harvard, the Harvard Quantitative Biology Initiative, and the Harvard FAS Dean’s Competitive Fund for Promising Scholarship.

Author information

Authors and Affiliations



All authors contributed equally to conceptualization, literature review, and writing. They are listed alphabetically.

Corresponding author

Correspondence to Paul Masset.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Ethics approval

Not applicable.

Consent to participate

Not applicable.

Consent for publication

Not applicable.

Additional information

Communicated by Jean-Marc Fellous.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.



About this article


Cite this article

Masset, P., Qin, S. & Zavatone-Veth, J.A. Drifting neuronal representations: Bug or feature? Biol Cybern 116, 253–266 (2022).



Keywords

  • Representational drift
  • Bayesian learning
  • Neural networks
  • Representation learning