This journal strongly supports data sharing,Footnote 1, Footnote 2 and by extension model sharing.Footnote 3 The need for model sharing is particularly acute in computational neuroscience, a field that lags behind the broader field of systems biology in this respect.Footnote 4 At present ModelDBFootnote 5 is the main resource for model sharing in neuroscience.

The main argument in favor of model sharing is and remains scientific integrity: only by making freely available the complete code needed to run a model does the science done with that model become reproducible. But model databases can, of course, serve purposes other than replicating published science. The most important application at present is probably educational: having students work on real models is effective training and lets them focus on understanding computational principles rather than on building a model from scratch.

But, increasingly, existing models are also ‘recycled’ into newer research projects. This is sometimes called ‘plug-and-play’ modeling: one pulls a few models out of a database, ‘plugs’ them together into a new model and is ready to ‘play’. There is nothing new about this. In fact, the original Hodgkin-Huxley model of the squid giant axonFootnote 6 has been used in many models of cortical neurons, despite the fact that it does not spike at vertebrate body temperature (neither does the squid axon). Promoting the reuse of models has always been an explicit goal of the GENESIS simulator,Footnote 7 and, more recently, the Open Source BrainFootnote 8 offers a nice graphical overview of which models are available for ‘check-out’.

To make model sharing more effective, considerable resources have been put into the development of simulator-independent description languagesFootnote 4, Footnote 9, Footnote 10, Footnote 11 and accompanying ontologies.Footnote 12, Footnote 13 But in all these efforts to develop databases and languages the actual science has not always been at the forefront and, as we will see, these resources do not make it obvious that models were often tuned to simulate specific experiments and may not generalize well outside of that experimental context. This is not a problem if one combines models from only one source, as when the Blue Brain Project combined neuron models based on hundreds of slice experiments, performed in-house in a controlled way, to construct a network model of the cortical column.Footnote 14

But in most actual applications of ‘plug-and-play’ simulation the researcher combines models based on data from many different experiments, be it different channel models to construct a detailed neuron modelFootnote 15 or diverse neuron models to construct a network model.Footnote 16 In my own experience three major problems pop up when one does this and, at present, they can only be prevented by carefully checking either the model files or the original publication, because the necessary information is not shown in the model databases mentioned above.

The first problem, that of animal species and neuron type, is well known among old-timers. In the old days little quantitative data was available about ion channels and one often had to resort to data from different neuron types and/or different animal species, the worst case being the use of invertebrate data to model a vertebrate channel (and not just for the Hodgkin-Huxley model). This problem has become less severe because nowadays most data come from recordings in rat or mouse (though not exclusively). But while all model databases strongly distinguish between neuron types, often using neuron type as the prime classifier, they do not report animal species, despite potentially important differences between species.

The second problem is also well known in the context of making kinetic models of channel gating: the temperature at which the recording was performed. Because the underlying experiments were not all performed at the same temperature, when different channel kinetic equations are combined into a neuron model their temperature settings have to be brought to a common value by scaling the rate factors according to their ‘Q10’.Footnote 17 The third, less often recognized, problem is the electrode and bathing solutions used in the experiments. If these differed, the ionic Nernst potentialsFootnote 17 differed too, and this difference will be reflected in the corresponding channel models.
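To make the corrections concrete, the sketch below (plain Python, with purely illustrative parameter values) shows the two standard calculations involved: scaling a gating rate from its recording temperature to a common simulation temperature with a Q10 factor, and recomputing a Nernst potential from the ionic concentrations of a given bathing solution.

```python
import math

def q10_scale(rate, q10, t_recorded, t_sim):
    """Scale a gating rate constant from the temperature at which it was
    measured (t_recorded, deg C) to the simulation temperature (t_sim)."""
    return rate * q10 ** ((t_sim - t_recorded) / 10.0)

def nernst(conc_out, conc_in, valence, temp_c):
    """Nernst potential (mV) for an ion, given extracellular and
    intracellular concentrations (same units) and temperature in deg C."""
    R, F = 8.314, 96485.0            # J/(mol K), C/mol
    T = temp_c + 273.15
    return 1000.0 * (R * T) / (valence * F) * math.log(conc_out / conc_in)

# Illustrative values only: a rate measured at 23 deg C rescaled to 35 deg C,
# and a potassium Nernst potential for one choice of solutions.
alpha_35 = q10_scale(rate=0.5, q10=3.0, t_recorded=23.0, t_sim=35.0)
e_k = nernst(conc_out=2.5, conc_in=140.0, valence=1, temp_c=35.0)
print(f"alpha at 35 C: {alpha_35:.3f} ms^-1, E_K: {e_k:.1f} mV")
```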

While these problems are reasonably well understood for single-cell modeling, many network modelers are, in my experience, quite naive about them. The result is network models that combine conductance-based neuron models downloaded from a database, each with different temperature and Nernst potential settings. Obviously this outcome is not scientifically desirable. However, at present, there are no easy tools to help modelers identify such problems. And, even worse, the problem is often very difficult to solve: many single-neuron models turn out not to be very robust, and attempts to modify their temperature or Nernst potentials often cause them to stop functioning or to show unphysiological behavior.
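There is no standard tool for such checks yet, but even a crude consistency test over whatever metadata accompanies the neuron models would catch the grossest mismatches. The sketch below is a minimal example of that idea; the per-model metadata dictionary (species, recording temperature, potassium reversal potential) is hypothetical, since most shared model files do not actually expose these fields, which is precisely the problem.

```python
# Hypothetical metadata records for two models pulled from a database.
models = {
    "pyramidal_cell": {"species": "rat",   "temperature_c": 34.0, "e_k_mV": -90.0},
    "interneuron":    {"species": "mouse", "temperature_c": 23.0, "e_k_mV": -77.0},
}

def check_consistency(models, fields=("species", "temperature_c", "e_k_mV")):
    """Return a warning for every metadata field that differs between models."""
    warnings = []
    for field in fields:
        values = {name: meta.get(field) for name, meta in models.items()}
        if len(set(values.values())) > 1:
            warnings.append(f"{field} differs across models: {values}")
    return warnings

for w in check_consistency(models):
    print("WARNING:", w)
```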

In conclusion, ‘plug-and-play’ modeling sometimes combines incompatible neuron models; this may not be immediately obvious, and it may be quite hard to fix the problem. What can the field do to improve this situation? Both modelers and experimentalists can and should contribute to solving these problems.

On the experimental side it would be very helpful if experimental techniques were more standardized, with less variance in the methods used by different laboratories. Differences in electrode and bathing solutions, for example, often lead to substantial differences in recorded neuron physiology (such as afterhyperpolarization properties), and it can be a real challenge for a modeler to decide which is the ‘best’ neuron recording on which to base a new model.Footnote 18 Standardization of experimental protocols is one of the explicit goals of several of the big-science initiatives that have recently arisen in neuroscience,Footnote 19 but it remains to be seen how this will affect conditions in the thousands of neuroscience labs around the world.

On the modeling side it is obvious that model databases should provide better tools to prevent these problems from occurring. The initiative by publishers to standardize the reporting of resources used in researchFootnote 20 may provide a good context for organizing such information. And finally, modelers should become more concerned about the robustness of the models they make. There has been a lot of focus on finding proper model parameters, with a very extensive literature on automated parameter search methods to create new models,Footnote 21, Footnote 22 and a recent emphasis on the existence of large families of good models,Footnote 23, Footnote 24 but few studies have examined the robustness of models across different experimental or physiological conditions. In the absence of more robust models it may also be useful to develop automated parameter retuning techniques that can create variants of existing models able to operate under different conditions.
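As a very rough illustration of what such retuning could look like, the sketch below uses SciPy to rescale two maximal-conductance factors so that a model reproduces, under new temperature and Nernst potential settings, the feature values (firing rate, spike height) it produced under its original conditions. The measure_features function is a stand-in invented for this illustration; in a real workflow it would run the neuron model in a simulator and extract the features from the resulting traces.

```python
import numpy as np
from scipy.optimize import minimize

def measure_features(g_scale, new_conditions=True):
    """Stand-in for running the model and extracting features; here it is an
    arbitrary smooth function of the conductance scale factors."""
    g_na, g_k = g_scale
    rate = 40.0 * g_na / (0.5 + g_k) - (5.0 if new_conditions else 0.0)
    spike_height = 100.0 * g_na - 20.0 * g_k
    return np.array([rate, spike_height])

# Target features: what the original model did under its original conditions.
target = measure_features(np.array([1.0, 1.0]), new_conditions=False)

# Retune the conductance scale factors so the model matches those features
# under the new conditions.
def cost(g_scale):
    return float(np.sum((measure_features(g_scale) - target) ** 2))

result = minimize(cost, x0=np.array([1.0, 1.0]), method="Nelder-Mead")
print("rescaled conductances:", result.x)
```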