The scope of this special issue is to celebrate the work of the late Professor John Taylor. John began his career in 1956 as a theoretical physicist and contributed many seminal papers and books to high-energy physics, black holes, quantum gravity and string theory. He held positions in physics and mathematics at leading universities in the UK, the USA and continental Europe. He created the Centre for Neural Networks at King’s College London in 1990 and served as its Director until his death. He was appointed Emeritus Professor of Mathematics of the University of London in 1996, after having spent 25 years there as a Professor of Mathematics. He was a Guest Scientist at the Research Centre Jülich, Germany, from 1996 to 1998, working on the analysis of brain images, and he acted as a consultant in neural networks to several companies. Until his death, he was Director of Research on Global Bond products and Tactical Asset Allocation for a financial investment company involved in time series prediction. He was the European Editor-in-Chief of the Neural Networks journal, and he served as President of the International Neural Network Society in 1995 and of the European Neural Network Society in 1993–1994. From 2009, he was the founding Chair of the Advisory Editorial Board of the Cognitive Computation journal.

John had worked in the field of neural networks since 1969, and he contributed to all aspects of neural networks and cognitive computation, including their applications to finance and robotics. The research topics Professor Taylor contributed to include, but are not limited to:

  • Noisy nets, synapses and the pRAM chip

  • Dynamics of learning processes

  • Mathematical analysis of neural networks and their hardware implementations

  • Neurocomputational models of perception, attention, learning and memory, decision making, motor control, cognitive control, observational learning, emotions, thinking, reasoning, conceptualization, knowledge representation, language and consciousness

  • Neural network applications to finance, robotics and brain imaging

To commemorate Professor Taylor’s work, the special issue considered original research articles, review articles, letters and commentaries from his former and current students and from his junior and senior colleagues. All submitted articles clearly stated in what way their work was based on Professor Taylor’s previous research and how they extended it. All submissions were rigorously peer-reviewed, and seven papers were finally accepted.

The Issue begins with the article by Bressloff and Coombes, who revisit the work of John Taylor on neural ‘bubble’ dynamics in two-dimensional neural field models. The focus of John’s work was on the properties of radially symmetric ‘bubbles,’ including their existence and radial stability, with applications to the theory of topographic map formation in self-organizing neural networks. In addition to reviewing John’s work in this area, the authors also include some recent results that treat more general classes of perturbations.
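For readers less familiar with this setting, the class of models in question is the standard two-dimensional neural field of Amari type (a representative textbook form, not an equation reproduced from the paper):

\[
\tau \frac{\partial u(\mathbf{r},t)}{\partial t} = -u(\mathbf{r},t) + \int_{\mathbb{R}^2} w\bigl(|\mathbf{r}-\mathbf{r}'|\bigr)\, f\bigl(u(\mathbf{r}',t)\bigr)\, d\mathbf{r}',
\]

where \(u(\mathbf{r},t)\) is the activity field, \(w\) is a lateral-inhibition (‘Mexican hat’) weight kernel and \(f\) is a firing-rate nonlinearity, often taken to be the Heaviside step \(f(u) = H(u - h)\) with threshold \(h\). A ‘bubble’ is then a localized, typically radially symmetric region on which \(u\) exceeds \(h\).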

Connors and Trappenberg propose a modified weight combination method for improved path integration. Their work analyzes the path integration mechanism with respect to the speed of the activity packet’s movement and the robustness of the field under strong rotational inputs. The analysis highlights the difficulty of controlling the packet size under strong rotational inputs, as well as the limited speed attainable with the existing path integration mechanism. Building on this analysis, the authors propose a novel modification of the weight combination method that provides a higher speed capability and increased robustness of the field; the proposed method is shown to more than double the attainable speed and to make the field resistant to breakdown under strong rotational inputs.
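To fix ideas, the sketch below illustrates the classic weight-combination mechanism in its simplest textbook form: a ring attractor whose recurrent kernel is a velocity-gated mixture of a symmetric component and a rotated copy. It is a minimal illustration only; the network size, kernel shape and all parameter values are our assumptions, not the authors’ modified method.

```python
import numpy as np

# Minimal ring-attractor sketch of the classic weight-combination
# path-integration mechanism (illustrative only: all parameters below
# are assumptions, not taken from Connors and Trappenberg's paper).

N = 100
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

def wrap(x):
    """Wrap angular differences to (-pi, pi]."""
    return np.angle(np.exp(1j * x))

def kernel(shift=0.0, sigma=0.5, inhib=0.3):
    """Gaussian excitation minus global inhibition, optionally rotated."""
    d = wrap(theta[:, None] - theta[None, :] - shift)
    return np.exp(-d**2 / (2.0 * sigma**2)) - inhib

W_sym = kernel()              # symmetric component: holds the packet
W_asym = kernel(shift=0.2)    # rotated component: drives packet movement

u = np.exp(-wrap(theta)**2)   # initial activity packet at theta = 0
dt, tau, gain = 0.1, 1.0, 10.0
v = 0.5                       # strength of the rotational (velocity) input

for _ in range(500):
    # weight combination: symmetric kernel plus velocity-gated rotated copy
    W = (1.0 - v) * W_sym + v * W_asym
    r = np.maximum(u, 0.0)    # rectified firing rates
    u += dt / tau * (-u + gain * (W @ r) / N)

print("packet centre after integration:", theta[np.argmax(u)])
```

In this baseline scheme, the larger the velocity input v, the more the rotated kernel dominates and the faster the packet drifts; the difficulty the authors analyze is that pushing v too hard distorts or destroys the packet.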

Bugmann and Taylor present a formal analytical description of activity propagation in a simple multilayer network of coincidence-detecting neuron models that receive and generate Poisson spike trains; simulations are also presented. The analyzed multilayer network exhibits stochastic propagation of neural activity, which has interesting features, such as information delocalization, that could explain backward masking. Stochastic propagation is normally only observed in simulations of networks of spiking neurons; one contribution of this paper is to offer a method for formalizing and quantifying such effects, albeit in a simplified system. An interesting feature of the proposed model is its potential to describe within a single framework a number of apparently unrelated characteristics of visual information processing, such as latencies, backward masking, synchronization and the temporal pattern of post-stimulus histograms. Owing to its simplicity, the model can easily be understood, refined and extended. This work has its origins in the nineties, but modeling latencies and firing probabilities in more realistic biological systems is still an unsolved problem.
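As a toy illustration of such stochastic propagation (a minimal sketch under assumed parameters, not the authors’ model), one can drive a feedforward stack of coincidence detectors with Poisson spike trains and track how the per-bin firing probability transforms from layer to layer, which is the kind of quantity the paper’s analysis formalizes:

```python
import numpy as np

# Toy sketch of stochastic activity propagation through layers of
# coincidence detectors driven by Poisson spike trains (illustrative
# only; all parameters are assumptions, not Bugmann and Taylor's).

rng = np.random.default_rng(0)

layers, width = 5, 200               # network depth, neurons per layer
fan_in, threshold = 10, 3            # inputs per neuron; spikes per bin needed
bins, dt, rate = 1000, 0.001, 40.0   # 1 ms bins, 40 Hz input Poisson rate

# Layer 0: independent Poisson spike trains (at most one spike per bin).
spikes = rng.random((width, bins)) < rate * dt

for layer in range(1, layers + 1):
    out = np.zeros_like(spikes)
    for i in range(width):
        pre = rng.choice(width, fan_in, replace=False)   # random fan-in
        # coincidence detection: fire when >= threshold inputs spike together
        out[i] = spikes[pre].sum(axis=0) >= threshold
    spikes = out
    print(f"layer {layer}: firing prob/bin = {spikes.mean():.5f}")
```

Depending on the threshold and input rate, activity either attenuates or saturates with depth, so the interesting regimes lie at the boundary between the two; this is the sort of behavior the paper characterizes analytically.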

Apolloni, Malchiodi and Taylor address the key step of bootstrap methods: generating a potentially infinite sequence of random data that preserves the properties of a distribution law, starting from a primary sample actually drawn from that distribution. The authors solve this task cooperatively within a community of generators, each of which improves its performance by analyzing the other partners’ production. Since this analysis is based on an a priori distrust of the other partners’ production, the authors call the partner ensemble a gossip community and the statistical procedure learning by gossip. They prove the procedure to be highly efficient when applied to the elementary problem of reproducing a Bernoulli distribution, provided the distrust rate is properly moderated when the absence of a long-term memory requires online estimation of the bootstrap generator parameters. This makes the procedure viable as a basic template for an efficient interaction scheme among social network agents.
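A minimal sketch of this learning-by-gossip idea, assuming a simple decaying-trust schedule in place of the authors’ exact moderated-distrust procedure, might look as follows:

```python
import random

# Minimal "learning by gossip" sketch for reproducing a Bernoulli(p)
# law from a primary sample (illustrative only: the update rule and
# the decaying-trust schedule are assumptions, not the authors'
# exact procedure).

random.seed(0)
p_true = 0.3
primary = [int(random.random() < p_true) for _ in range(25)]

K = 5                                    # community of generators
est = [sum(primary) / len(primary)] * K  # each starts from the primary sample

for t in range(1, 5001):
    bits = [int(random.random() < e) for e in est]   # each partner's output
    trust = 1.0 / (t + 10)               # moderated distrust: trust decays
    for k in range(K):
        peers = [b for j, b in enumerate(bits) if j != k]
        # fold the (distrusted) peer production into an online estimate
        est[k] += trust * (sum(peers) / len(peers) - est[k])

print([round(e, 3) for e in est])        # estimates cluster near p_true
```

With the trust rate decaying over time, the generators’ online estimates stay anchored near the primary-sample proportion, echoing the paper’s point that the distrust rate must be properly moderated in the absence of a long-term memory.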

Taylor, Cutsuridis, Hartley, Althoefer and Nanayakkara present a brief survey of observational learning, with particular emphasis on how it could be exploited in robots. The authors present a set of simulations of a neural model that fits recent experimental data and supports the basic idea that observational learning uses simulations of internal models to represent the observed activity, thereby allowing efficient learning of the observed actions. The authors conclude with a set of recommendations as to how observational learning might most efficiently be used in developing and training robots for their variety of tasks.

Mohan, Morasso, Sandini and Kasderidis describe the first developments in relation to the learning and reasoning capabilities of Darwin robots. The novelty of the computational architecture stems from the incorporation of recent ideas, firstly from the field of ‘connectomics’, which attempts to explore the large-scale organization of the cerebral cortex, and secondly from recent functional imaging and behavioral studies supporting the embodied simulation hypothesis. An example in which simulation of perception and action leads the robot to reason about how its world can change so as to become a little more conducive to the realization of its internal goal (an assembly task) is used to describe how ‘object’, ‘action’ and ‘body’ meet in the Darwin architecture and how inference emerges through embodied simulation.

Finally, Cutsuridis and Taylor show how aspects of brain processing such as visual perception, recognition, attention, cognitive control, value attribution, decision making, affordances and action can be melded together coherently in a cognitive control architecture of the perception–action cycle for the visually guided reaching and grasping of objects by a robot or an agent. The work is based on the notion that separate visuomotor channels are activated in parallel by specific visual inputs and are continuously modulated by attention and reward, which control a robot’s/agent’s action repertoire. The cognitive control architecture consists of a number of neurocomputational mechanisms heavily supported by experimental brain evidence: spatial saliency, object selectivity, invariance to object transformations, focus of attention, resonance, motor priming, spatial-to-joint direction transformation and volitional scaling of movement.

The Guest Editors would like to thank the anonymous reviewers for their rigorous reviews, which helped ensure the quality of the papers published in this Special Issue, and the journal’s Editorial Office for its timely assistance.