
Biological accuracy in large-scale brain simulations

Abstract

The advancement of computing technology makes it possible to build extremely accurate digital reconstructions of brain circuits. Are such unprecedented levels of biological accuracy essential for brain simulations to play the roles they are expected to play in neuroscientific research? The main goal of this paper is to clarify this question by distinguishing between various roles played by large-scale simulations in contemporary neuroscience, and by reflecting on what makes a simulation biologically accurate. It is argued that large-scale simulations may play model-oriented and prediction-oriented roles in brain research, and that the concept of biological accuracy can be interpreted as related to the plausibility of the theoretical model implemented in the simulation system, to the accuracy of the computer implementation, and to the level of detail of the implemented model. Building on these observations and distinctions, it is argued that biological accuracy is not essential for a computer simulation to play the epistemic roles it is expected to play in brain research.


Notes

  1.

    The concept of theoretical model, as outlined here, is not to be confused with the concept of mechanistic model (Glennan and Illari 2018). No particular position is taken here about the relationship between theoretical models and mechanistic models.

  2.

    The question of whether it is the hardware or the software of a computer that simulates the target system in computer simulation studies has been debated in the philosophical literature (Barberousse et al. 2009; Beisbart 2017). Here the simulation system is taken to be a programmed computer described symbolically, i.e., in terms of variables taking values and of relationships holding among them. However, different interpretations of the notion of "programmed computer" may be compatible with the analysis carried out here.

  3.

    As pointed out by Miłkowski (2015), who draws from Suppes (1962) and Bogen and Woodward (1988), large-scale simulationists—more generally, neuroscientists—deal with models of neurophysiological data, also called phenomena in the philosophical literature, not to be confused with raw observations (in the expression "model of data", the term "model" does not denote theoretical models as defined in Sect. 3; see Miłkowski 2015, for clarifications). Thus, in model-oriented analyses, the behaviour of the simulation system is compared to a model of the target system's behaviour (and not to raw data about it), while in prediction-oriented analyses, the behaviour of the simulation system is used to formulate a model of the behaviour that the target system would display in particular circumstances. Models of behavioural data incorporate some degree of abstraction and idealization (Miłkowski 2015).

  4.

    A detailed description of the dynamics of the neuron model, including reference to passive properties and conductance mechanisms, and of the MOO algorithm is provided in the “Methods” section of the cited article. The NEURON simulation environment was used to simulate the model (Hines and Carnevale 1997).
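    The details of the cited study's fitting procedure are in its "Methods" section; as a purely illustrative sketch (not the authors' algorithm), the core selection step of any multi-objective optimization is Pareto dominance: a candidate neuron model survives only if no other candidate is at least as good on every error objective and strictly better on one. The error vectors below are hypothetical.

    ```python
    # Minimal Pareto-dominance filter, the kind of selection step used in
    # multi-objective optimization (MOO) of neuron models. Illustrative only.

    def dominates(a, b):
        """True if candidate a is at least as good as b on every objective
        (lower error is better) and strictly better on at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(candidates):
        """Keep only candidates not dominated by any other candidate."""
        return [c for c in candidates
                if not any(dominates(other, c) for other in candidates if other is not c)]

    # Hypothetical error vectors: (spike-rate error, spike-timing error)
    models = [(0.1, 0.9), (0.2, 0.2), (0.9, 0.1), (0.5, 0.5)]
    front = pareto_front(models)
    # (0.5, 0.5) is dominated by (0.2, 0.2); the other three candidates survive.
    ```

    No single candidate need be best on all objectives at once, which is why such procedures return a front of models rather than a unique winner.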

  5.

    This schematic reconstruction of the two case studies is silent on some scientifically and methodologically interesting details. No description of the MOO algorithm used in the PC study is provided, and the criteria used by the authors of the LFP study to simplify the network in the various experimental sessions are not discussed. Explicitly discussing these and other details may be important to obtain a finer-grained reconstruction of the experimental procedures adopted within model-oriented and prediction-oriented neuroscientific analyses. Such a finer-grained reconstruction is left to future studies, the purpose of this article being to clarify the distinction between the two kinds of analyses and to bring this distinction to bear on the import of biological accuracy in brain simulations.

  6.

    It is one thing to describe the behaviour of the target system (e.g., to describe the electrical activity of some cells of a neural tissue); it is another to produce a theoretical model of that behaviour (e.g., to produce a non-concrete interpreted structure characterizing the target system in terms of properties which may predict or explain the electrical activity of some cells of a neural tissue). This distinction is consistent with the characterization of the notions of "theoretical model" and "behaviour of a system" offered in Sects. 2 and 3.1.

  7.

    The model-oriented strategy has sometimes been dubbed the "synthetic method" in Artificial Intelligence and cybernetics (Cordeschi 2002). Note that this strategy can only lead one to reduce or increase the space of the how-possibly theoretical models of B's behaviour. A's reproduction of the latter, per se, guarantees neither that M is the only possible model of it nor that it is explanatory.

  8.

    It is not claimed here that the distinction between model-oriented and prediction-oriented analyses is specific to neuroscience. It may well be found in other areas of scientific research. Here it is suggested only that it may be helpful to reflect on the import of biological accuracy in neuroscientific simulation studies.

  9.

    The distinction between implementation accuracy and model plausibility is akin to the distinction between validation and verification drawn by Winsberg in the following terms: "Verification … is the process of determining whether or not the output of the simulation approximates the true solutions to the differential equations of the original model. Validation, on the other hand, is the process of determining whether or not the chosen model is a good enough representation of the real-world system for the purpose of the simulation" (Winsberg 2010, pp. 19–20). This distinction is made there with reference to the computer simulation of differential equations representing physical systems, while this article is concerned with the computer simulation of theoretical models of biological systems, no restriction being made to models couched in terms of differential equations. Still, verification, as defined by Winsberg, concerns the relationship between the simulation system and the theoretical model it implements, while validation concerns the relationship between the theoretical model and the target system. Verification and validation can therefore be thought of as the processes of checking implementation accuracy and model plausibility, respectively.
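    A toy numerical example (not from the article) may make Winsberg's sense of "verification" concrete: if the implemented model is the decay equation dy/dt = −ky, its true solution y(t) = y₀e^(−kt) is known, so one can check directly whether the simulation's output approximates it—here, whether refining a forward-Euler scheme's step size shrinks the error. Validation, by contrast, would compare the model's predictions to data recorded from the target system.

    ```python
    import math

    def euler_decay(y0, k, dt, steps):
        """Forward-Euler integration of the model dy/dt = -k*y."""
        y = y0
        for _ in range(steps):
            y += dt * (-k * y)   # one Euler step
        return y

    y0, k, t = 1.0, 2.0, 1.0
    exact = y0 * math.exp(-k * t)                 # true solution of the model
    coarse = euler_decay(y0, k, dt=0.1, steps=10)
    fine = euler_decay(y0, k, dt=0.001, steps=1000)

    # Verification: the simulation output approaches the model's true
    # solution as the step size is refined. This says nothing about whether
    # the model itself is a good representation of any real system.
    assert abs(fine - exact) < abs(coarse - exact)
    ```

    The point of the sketch is that verification is entirely internal to the model–simulation pair; no empirical data appear in it at all.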

  10.

    The level of detail of the model is regarded here as one possible dimension of biological accuracy: this means that, in the present analysis, one may deem a simulation biologically accurate because the theoretical model implemented in the machine is highly detailed. A simulation may be highly detailed—thus biologically accurate, under this interpretation of the term—independently of whether the model is how-actually: a theoretical model may be highly detailed and utterly "wrong". In other words, the dimension of biological accuracy discussed here is independent of the dimension discussed in Sect. 4.1. This implies that one may deem a simulation biologically accurate even though the implemented theoretical model fails to correctly represent the target system, in the sense discussed in Sect. 2—which may sound odd according to an intuitive meaning of "accurate". There are good reasons to use the term in this way, though. In common parlance, a painting can be said to reproduce accurately (i.e., with lots of detail) something which does not exist. This is more or less the sense in which the discarded models in the PC study are accurate (detailed), even though, in fact, they are implausible models of PC cells.

  11.

    https://archive.nytimes.com/www.nytimes.com/external/idg/2009/11/24/24idg-ibm-cat-brain-simulation-dismissed-as-hoax-by-rival-39598.html (last visited on October 10th 2018).

  12.

    Discussing the relation between the level of detail of a model and its explanatory value was not among the goals of this article, which was chiefly focused on the question of whether biological accuracy is essential for a brain simulation to play the expected role in neuroscientific research. Indeed, here it has been argued that some simulation analyses aim to test explanatory models—and the philosophical literature on abstraction, idealization and explanation is surely relevant to reflect on the conditions under which a theoretical model is explanatory. However, the definition of model-oriented analyses provided in Sect. 3 does not rule out model-oriented analyses aiming to test non-explanatory theoretical models. Moreover, it has been argued that simulation systems can also be employed to predict the behaviour of the brain. Therefore, even though the aforementioned literature is surely relevant to reflect on some specific issues concerning one of the roles that brain simulations may play in neuroscience, the scope of the present analysis is somewhat more general and does not specifically concern the norms of (mechanistic) explanation.

References

  1. Ananthanarayanan, R., Esser, S. K., Simon, H. D., & Modha, D. S. (2009). The cat is out of the bag: Cortical simulations with 10⁹ neurons, 10¹³ synapses. In Proceedings of the conference on high performance computing networking, storage and analysis (SC '09) (p. 1).

  2. Barberousse, A., Franceschelli, S., & Imbert, C. (2009). Computer simulations as experiments. Synthese, 169(3), 557–574.

  3. Batterman, R. W., & Rice, C. C. (2014). Minimal model explanations. Philosophy of Science, 81(3), 349–376.

  4. Beisbart, C. (2017). Are computer simulations experiments? And if not, how are they related to each other? European Journal for Philosophy of Science, 8(2), 171–204.

  5. Bogen, J., & Woodward, J. (1988). Saving the phenomena. Philosophical Review, 97(3), 303–352.

  6. Bokulich, A. (2017). Models and explanation. In L. Magnani & T. Bertolotti (Eds.), Springer handbook of model-based science (pp. 103–118). New York: Springer.

  7. Boone, W., & Piccinini, G. (2016). Mechanistic abstraction. Philosophy of Science, 83(5), 686–697.

  8. Cordeschi, R. (2002). The discovery of the artificial: Behavior, mind and machines before and beyond cybernetics. Dordrecht: Springer.

  9. Craver, C. F. (2006). When mechanistic models explain. Synthese, 153(3), 355–376.

  10. Craver, C. F., & Kaplan, D. M. (2018). Are more details better? On the norms of completeness for mechanistic explanations. The British Journal for the Philosophy of Science. https://doi.org/10.1093/bjps/axy015.

  11. de Garis, H., Shuo, C., Goertzel, B., & Ruiting, L. (2010). A world survey of artificial brain projects, part I: Large-scale brain simulations. Neurocomputing, 74(1–3), 3–29.

  12. Dotko, P., Hess, K., Levi, R., Nolte, M., Reimann, M., Scolamiero, M., et al. (2016). Topological analysis of the connectome of digital reconstructions of neural microcircuits. arXiv, 1–28.

  13. Elgin, M., & Sober, E. (2002). Cartwright on explanation and idealization. Erkenntnis, 57, 441–450.

  14. Eliasmith, C., Stewart, T. C., Choo, X., Bekolay, T., DeWolf, T., Tang, C., et al. (2012). A large-scale model of the functioning brain. Science, 338(6111), 1202–1205.

  15. Frigg, R., & Nguyen, J. (2017). Models and representation. In Springer handbook of model-based science (pp. 49–102). Cham: Springer.

  16. Glennan, S. (2017). The new mechanical philosophy. Oxford: Oxford University Press.

  17. Glennan, S., & Illari, P. (Eds.). (2018). The Routledge handbook of mechanisms and mechanical philosophy. New York: Routledge.

  18. Hay, E., Hill, S., Schürmann, F., Markram, H., & Segev, I. (2011). Models of neocortical layer 5b pyramidal cells capturing a wide range of dendritic and perisomatic active properties. PLoS Computational Biology, 7(7), e1002107. https://doi.org/10.1371/journal.pcbi.1002107.

  19. Hines, M. L., & Carnevale, N. T. (1997). The NEURON simulation environment. Neural Computation, 9(6), 1179–1209.

  20. Kandel, E. R., Markram, H., Matthews, P. M., Yuste, R., & Koch, C. (2013). Neuroscience thinks big (and collaboratively). Nature Reviews Neuroscience, 14(9), 659–664.

  21. Kaplan, D. M. (2011). Explanation and description in computational neuroscience. Synthese, 183(3), 339–373.

  22. Lindén, H., Tetzlaff, T., Potjans, T. C., Pettersen, K. H., Grün, S., Diesmann, M., et al. (2011). Modeling the spatial reach of the LFP. Neuron, 72(5), 859–872.

  23. Markram, H. (2006). The blue brain project. Nature Reviews Neuroscience, 7(2), 153–160.

  24. Markram, H., Meier, K., Lippert, T., Grillner, S., Frackowiak, R., Dehaene, S., et al. (2011). Introducing the human brain project. Procedia Computer Science, 7, 39–42.

  25. Markram, H., Muller, E., Ramaswamy, S., Reimann, M. W., Abdellah, M., Sanchez, C. A., et al. (2015). Reconstruction and simulation of neocortical microcircuitry. Cell, 163(2), 456–492.

  26. Miłkowski, M. (2015). Explanatory completeness and idealization in large brain simulations: A mechanistic perspective. Synthese, 193(5), 1457–1478.

  27. Piccinini, G. (2007). Computational modeling versus computational explanation: Is everything a Turing machine, and does it matter to the philosophy of mind? Australasian Journal of Philosophy, 85(1), 93–115.

  28. Potochnik, A. (2015). Causal patterns and adequate explanations. Philosophical Studies, 172(5), 1163–1182.

  29. Potochnik, A. (2017). Idealization and the aims of science. Chicago: University of Chicago Press.

  30. Reimann, M., Anastassiou, C., Perin, R., Hill, S. L., Markram, H., & Koch, C. (2013). A biophysically detailed model of neocortical local field potentials predicts the critical role of active membrane currents. Neuron, 79(2), 375–390.

  31. Sharott, A. (2014). Local field potential, methods of recording. In Encyclopedia of computational neuroscience (pp. 1–3). New York: Springer.

  32. Suppes, P. (1962). Models of data. In E. Nagel, P. Suppes, & A. Tarski (Eds.), Logic, methodology, and philosophy of science: Proceedings of the 1960 international congress (pp. 252–261). Stanford: Stanford University Press.

  33. Weisberg, M. (2007). Three kinds of idealization. Journal of Philosophy, 104(12), 639–659.

  34. Weisberg, M. (2013). Simulation and similarity: Using models to understand the world. Oxford: Oxford University Press.

  35. Winsberg, E. B. (2010). Science in the age of computer simulation. Chicago: University of Chicago Press.


Author information

Corresponding author

Correspondence to Edoardo Datteri.



About this article


Cite this article

Datteri, E. Biological accuracy in large-scale brain simulations. HPLS 42, 5 (2020). https://doi.org/10.1007/s40656-020-0299-1


Keywords

  • Epistemology of large-scale simulations
  • Simulations in neuroscience
  • Biological accuracy of a simulation
  • Simulation-based model testing