This special issue on “Complex Spatial Navigation in Animals, Computational Models and Neuro-inspired Robots” has its origins in a workshop held September 28, 2018, in Lyon, France, organized by Peter Ford Dominey, Jean-Marc Fellous, and Alfredo Weitzenfeld, and sponsored by a joint NSF–ANR CRCNS grant (award 1429937). The goal of the workshop was to discuss the latest advances in understanding the neural mechanisms of complex spatial navigation through experimental studies, computational modeling, and robotic evaluations, with an emphasis on work that relates at least two of the three approaches (animal experiments, computational neuroscience, and neuro-robotics). This issue brings together contributions from workshop participants as well as from additional researchers on the study of spatial cognition in complex environments.

The human brain is one of the most complex biological computing devices, with over 300 billion parallel processors linked by over 30 trillion connections. How does one study such a phenomenal machine? The classic scientific approach has always been to place biological complexity in a simple environment where most of the features are controlled and easily manipulated. Then, for each set of environmental parameters, repeatedly and painstakingly present well-designed sensory inputs and measure relevant behavioral outputs. The hope is that, on average, some interesting relationships between inputs and outputs will emerge, and that these relationships will give insight into the underlying neural computations. This method necessarily makes a number of strong assumptions, not the least of which is that the complexity of a system can be understood by collecting data from many simple computations and putting them together (somehow). We know this is unlikely to fully work: the brain is not just the sum of its parts. But this is the best we can do… so far.

Many domains involve complex neural computations. Some involve well-defined and controllable inputs (e.g., visual perception), others involve well-defined and measurable outputs (e.g., decision making). Spatial navigation strikes a trade-off in that the inputs are reasonably well defined (e.g., a maze, with walls and obstacles) and the outputs can be measured with some degree of precision (e.g., position, speed). Moreover, there is such a thing as simple navigation, as in going from point A to point B, and complex spatial navigation, as in finding your way out of a maze. Though measures of spatial navigation complexity do not yet exist, we (humans) share an understanding of what is spatially complex and what is not. That shared understanding comes, of course, from years of experience and thousands of years of evolution. Our brains have evolved so that we can learn to navigate in simple and complex environments with relative ease. (We do much better than robots, in that respect.) It may be time to step away from the reductionist approach for a while and ask the difficult questions: Can we understand the neural computations underlying spatial navigation in complex environments? Are they similar to the ones we use in simple environments? What is navigation complexity, and how do we measure it?

The first contribution, by Michael Arbib, provides a general overview of spatial navigation and links together the different papers included in this special issue. The full list of contributions follows:

  • Michael Arbib, From Spatial Navigation via Visual Construction to Episodic Memory and Imagination.

  • Tiffany Hwu, Jeffrey L. Krichmar, A Neural Model of Schemas and Memory Encoding.

  • Pablo Scleidorovich, Martin Llofriu, Jean-Marc Fellous, Alfredo Weitzenfeld, A Computational Model for Spatial Cognition Combining Dorsal and Ventral Hippocampal Place Field Maps: Multi-scale Navigation.

  • Stephen Hausler, Zetao Chen, Michael E. Hasselmo, Michael Milford, Bio-Inspired Multi-Scale Fusion.

  • Mehdi Khamassi, Benoit Girard, Modeling awake hippocampal reactivations with model-based bidirectional planning.

  • Nicolas Cazin, Pablo Scleidorovich, Alfredo Weitzenfeld, Peter Ford Dominey, Real-Time Sensory-Motor integration of Hippocampal Place Cell Replay and Prefrontal Sequence Learning in Simulated and Physical Rat Robots for Novel Path Optimization.

  • Joseph D. Monaco, Grace M. Hwang, Kevin M. Schultz, Kechen Zhang, Cognitive swarming in complex environments with attractor dynamics and oscillatory computing.

  • Zhuocheng Xiao, Kevin Lin and Jean-Marc Fellous, Conjunctive reward-place coding properties of dorsal distal CA1 hippocampus cells.

  • Mingda Ju, Philippe Gaussier, A model of path integration and representation of spatial context in the retrosplenial cortex.

Jean-Marc Fellous, Peter Ford Dominey, and Alfredo Weitzenfeld.

Editors.