
New Labels for Old Ideas: Predictive Processing and the Interpretation of Neural Signals


Abstract

Philosophical proponents of predictive processing cast the novelty of predictive models of perception in terms of differences in the functional role and information content of neural signals. However, they fail to provide constraints on how the crucial semantic mapping from signals to their informational contents is determined. Beyond a novel interpretative gloss on neural signals, they have little new to say about the causal structure of the system, or even what statistical information is carried by the signals. That means that the predictive framework for perception can be relabeled in traditional, non-predictive terms, with no empirical consequences relevant to existing or future data. To the extent that neuroscientific research based on predictive processing is both innovative and productive, it will be due to the framework’s suggestive heuristic effects, or perhaps auxiliary empirical claims about implementation, rather than a difference in the information-processing structure that it describes.


Notes

  1. Friston (2005, 2008, 2009, 2010), Adams et al. (2013).

  2. Hohwy et al. (2008).

  3. Part of the enthusiasm is due to the apparent unification of perception and action that predictive processing theories provide, and this is indeed a novel contribution of the predictive framework. However, in this paper I will focus just on the implications of the theory for perception, leaving aside prediction-based accounts of action (e.g. Adams et al. 2013). I will also only be considering the relevant models as models of perceptual inference on a short time scale, after model weights are already set, rather than as models of perceptual learning over slower time scales. That said, I do think it is reasonable to highlight the distinctive role of predictions and prediction error in learning (as in the reinforcement learning paradigm described in footnote 4 below).

  4. The reward prediction error model maps firing of dopamine neurons in the ventral tegmental area (VTA), a region that is active when an animal receives rewards, to the reward prediction error variable in a computational model of reinforcement learning. That model accurately predicts the firing rate of the VTA neurons in stimulus conditions of expected, unexpected, and less-than-expected reward. Sutton (1988); Schultz et al. (1997).
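
    To make the structure of that model concrete, here is a minimal TD(0) sketch of the prediction error variable (an illustration in the spirit of Sutton 1988, not code from the cited papers; all names and numbers are mine):

    ```python
    # Toy TD(0) chain: a cue leads through a few states to a reward.
    # delta = r + gamma * V(next) - V(current) is the prediction error:
    # positive for unexpected reward, near zero for a predicted reward,
    # negative (a "dip") when an expected reward is omitted.
    import numpy as np

    n_states = 5
    alpha, gamma = 0.1, 1.0
    V = np.zeros(n_states + 1)        # V[n_states] is the terminal state

    def run_trial(reward_at_end):
        """Walk the chain once, updating V; return per-state TD errors."""
        deltas = []
        for s in range(n_states):
            r = reward_at_end if s == n_states - 1 else 0.0
            delta = r + gamma * V[s + 1] - V[s]
            V[s] += alpha * delta
            deltas.append(delta)
        return deltas

    for _ in range(500):              # train until the reward is expected
        run_trial(1.0)

    print(run_trial(1.0))             # errors near 0: reward fully predicted
    print(run_trial(0.0))             # final error near -1: reward omitted
    ```

    The last two lines mirror the VTA findings: little response to a fully predicted reward, and a below-baseline response when a predicted reward fails to arrive.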

  5. Forward models for motor control were developed to solve the problem of controlling motor action given noise in the system and tight timing constraints. Conant and Ashby (1970); Wolpert and Miall (1996).
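
    The logic can be caricatured in a few lines (a sketch under invented assumptions, not the Conant and Ashby or Wolpert and Miall models themselves): the controller carries an internal model of the plant, uses it to predict the sensory consequence of each command before delayed feedback arrives, and uses the late-arriving discrepancy only to tune the model.

    ```python
    # Hypothetical one-dimensional plant and internal forward model.
    plant_gain = 2.0          # true dynamics (unknown to the controller)
    model_gain = 1.5          # the controller's learned internal model

    def forward_predict(command):
        return model_gain * command     # immediate, model-based prediction

    def plant(command):
        return plant_gain * command     # actual feedback, arrives late

    for _ in range(100):
        u = 0.5
        predicted = forward_predict(u)  # control can act on this right away
        actual = plant(u)               # delayed sensory consequence
        model_gain += 0.1 * (actual - predicted) * u   # slow model correction

    print(model_gain)                   # converges toward the true gain, 2.0
    ```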

  6. Linear predictive coding was originally a method of compressing audio and video signals for transmission – a way to deliver the very same information in a signal, but using less bandwidth. Atal and Schroeder (1970).
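
    The compression idea is easy to see in miniature (an illustrative toy, not Atal and Schroeder's actual scheme): the sender transmits only the residual between each sample and a linear prediction from the previous sample, and a receiver running the same predictor reconstructs the signal exactly.

    ```python
    # Toy linear predictive coding: transmit residuals, not raw samples.
    signal = [10, 11, 13, 14, 14, 15, 17, 18]
    a = 1.0   # predictor coefficient: next sample ~ a * previous sample

    # Encoder: first sample in the clear, then residuals only.
    residuals = [signal[0]] + [
        signal[i] - a * signal[i - 1] for i in range(1, len(signal))
    ]

    # Decoder: run the same predictor and add the residuals back.
    decoded = [residuals[0]]
    for r in residuals[1:]:
        decoded.append(a * decoded[-1] + r)

    assert decoded == signal   # the very same information...
    print(residuals)           # ...in values spanning a far smaller range
    ```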

  7. For important contributions to this traditional story, see Hubel and Wiesel (1959), Marr (1982), and Kanwisher et al. (1997). Marr in particular was very sensitive to the underdetermination of distal causes by proximal sensory input; an important part of his framework involved the positing of implicit assumptions regimenting visual processes (e.g. deriving surface information from 2D retinal images). See his discussion on pp. 265–267 of Vision. Priors in the predictive theory, I take it, are explicit – they are identified with patterns of top-down activity, rather than implicit in the functional architecture of the system (or in the “algorithms” that it implements). As I’ll argue later, however, the distinction between implicit and explicit is going to depend on the encoding scheme, and that, in turn, will depend on the eye of the beholder.

  8. The characterization of the flow of information as top-down/backwards or bottom-up/forwards immediately raises the question of how those directions should be understood. Once we move away from the peripheral sensory surfaces, the complex connectivity and organization of brain areas and cell populations can only be shoehorned into a strictly hierarchical model with difficulty, if at all. Although there are broad projections from population to population, from brain area to brain area, there is no clear pathway that all signals take that would allow us to tag one area as definitively “above” or “below” another for all signals or all tasks. Still, for the most well-studied portions of the visual system (for example), there is a canonical progression from the retina to the LGN, to V1, V2, V4, and IT that is taken to be the bottom-up sequence, and this is portrayed in Fig. 1.

    A more principled way to address the difficulty is, as Shea (2015) proposes, to replace the distinction between top-down and bottom-up with the more flexible contrast between direct and indirect processing. The idea is that the relevant distinction is between activity that depends primarily on current (or recent) input and activity that depends primarily on previously stored information. So bottom-up activity is generally activity based on current input, while top-down activity can be interpreted as the activation of previously stored information. This characterization is compatible with both traditional and predictive processing stories and is entirely neutral between them.

  9. Proponents of predictive processing theories really do say this! Here’s Anil Seth saying that perception is just hallucination (https://www.ted.com/talks/anil_seth_how_your_brain_hallucinates_your_conscious_reality), and in print, Seth (2017). See also Frith (2007): “our perceptions are fantasies that coincide with reality”; Hohwy (2013): “conscious experience is like a fantasy or virtual reality constructed to keep sensory input at bay”; and Grush (2004, p. 393), with the precursor to some of this work, quoting Ramesh Jain (in italics for emphasis): “perception is a controlled hallucination process.”

  10. The Bayesian/non-Bayesian distinction cross-cuts any divide between predictive and traditional theories. See Rescorla (2015, 2016).

  11. If at the level of neurons all this talk of representations, predictions, and encodings is merely rhetorical flourish, then we should understand the predictive theory as essentially mechanistic and not content-involving at all. That being so, “predictive” would mean nothing more than the attempt to physically match bottom-up input, through top-down channels, with the polarity reversed such that the vehicles literally cancel each other out. The setup would be something like that found in noise-cancelling headphones. As with those headphones, the circuit can be constructed in such a way as to minimize any remaining noise, and even to “learn” the ambient sound profile so that performance improves over time.

    At that level, the theories could be understood to be making very specific claims about neural vehicles, understood as spikes or average firing rates, and neural units, or the cells themselves. The subtraction process at this level then would not be a subtraction of contents, but rather a cancelling out of the physical vehicles themselves. Now the top-down activity will be a set of spikes that are supposed to cancel out an incoming set of spikes by some physical mechanism. Spikes cannot be negative (or the picture would be somewhat cleaner), but we could imagine inhibitory sub-threshold inputs arriving at the same time and cell as excitatory ones, and only an excess of excitatory inputs will then result in spikes that travel up the hierarchy. In contrast to the informational or semantic-level claims that I criticize, this is a very specific empirical claim that could in fact be verified once the relevant populations were identified.
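
    A toy version of the headphone-style picture (purely illustrative numbers): top-down inhibition is subtracted from bottom-up excitation cell by cell, and since spikes cannot be negative, only the rectified excess travels up.

    ```python
    # Mechanistic "prediction" as subtractive cancellation of vehicles.
    def forward_drive(bottom_up, top_down):
        """Excess excitation after subtractive inhibition, floored at zero."""
        return [max(0.0, b - t) for b, t in zip(bottom_up, top_down)]

    bottom_up = [5.0, 2.0, 7.0, 1.0]   # excitatory input per cell
    top_down  = [5.0, 2.0, 3.0, 4.0]   # inhibitory input per cell

    print(forward_drive(bottom_up, top_down))   # [0.0, 0.0, 4.0, 0.0]
    # Perfectly matched inputs are silenced; only mismatches propagate.
    ```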

  12. “What is most distinctive about this … proposal (and where much of the break from tradition really occurs) is that it depicts the forward flow of information as solely conveying error, and the backward flow as solely conveying predictions.” (Clark 2013, pp. 187–188).

  13. See Bialek et al. (1991) for an early exposition of this point. In addition, we presumably care not just about how the probabilities change, but about how much that matters to those making use of the signal. These functional notions of information go beyond the merely statistical, and make explicit what is only implicit in the purely statistical approach. However, they are too involved to discuss in detail here – see Cao (2012), Shea (2014, 2018), Shea et al. (2017), Mann (2018), and Fresco, Ginsburg, and Jablonka (2018) for more philosophical perspectives on this issue.

  14. That is, signals propagating (relatively) directly from the sensory periphery. See footnote 8 for the motivation for this gloss on “bottom-up”.

  15. Relative to a particular convention that specifies the content of a signal independently of how much it reduces a receiver’s uncertainty, the two may be distinguished (as in digital signal processing where engineers stipulate the conventional encodings). However, no such independently stipulated convention exists for neural signals, so we are left only with the appeal to what effect the signal has on a receiver.

  16. See for example Spratling (2008a) and Issa et al. (2018). Everywhere in this paper, “traditional” means only that signals are not interpreted as “predictions” or “errors”, not that there are no top-down signals.

  17. Quine (1960), Word and Object, ch. 2, p. 35.

  18. Clark (2013)

  19. Weilnhammer et al. (2017); Hohwy et al. (2008).

  20. Issa et al. (2018)

  21. The nomenclature is not standardized, so “classical” here does not mean feedforward-only. A feedforward-only model would be genuinely different.

  22. fMRI is only an indirect marker of neural activity because it measures changes in blood oxygenation level, which is a time-lagged, spatially distorted metabolic concomitant of neural activity. The exact nature of its relationship to action potential firing (as opposed to sub-threshold neural activity, for example) has also not been fully characterized. Nonetheless, BOLD often shows patterns of activity similar to those seen in direct neural recordings and other means of visualizing neural activity.

  23. I’m tempted to say that if positing expectations or priors is helpful in explaining the system, then we can reasonably say that the fatiguing process has been co-opted to serve this functional role. That is, the priors are encoded in the disposition of the cell to fire less over time, and so fatigue, understood as depletion of whatever physical resources the cell requires to fire, may be a mechanism by which the system’s priors come to influence the cell’s response.

  24. How long the stimulus has been on is inevitably confounded with how expected it is. Experimenters have attempted to manipulate expectations independently (e.g. by giving some other cue), but the effectiveness of the cue is still tied to past experience with the cue, and so produces some other confound. (See Summerfield and de Lange 2014 for a review of some of these attempts.)

  25. Dynamic coding has been a hallmark of predictive accounts, which emphasize the adaptability of the system to changing circumstances; e.g. “The retina adjusts its processing dynamically. The spatio-temporal receptive fields of retinal ganglion cells change after a few seconds in a new environment.” Hosoya et al. (2005). Here is Clark, quoting Friston and Price (2001): ‘the representational capacity and inherent function of any neuron, neuronal population, or cortical area is dynamic and context-sensitive [and] neuronal responses, in any given cortical area, can represent different things at different times’.

  26. Clark (2016), p. 44.

  27. Moore and Zirnsak (2017). Of course, not everyone agrees with my diagnosis; see Summerfield et al. (2014) for further discussion.

  28. In the actual brain, perhaps some of the priors are innate dispositions to certain patterns of activity. In principle, priors could be built into a Rao and Ballard style model as well – but that would undermine the part of the explanatory success that came from not needing to build anything in: units that exhibited activity patterns similar to those of V1 cells just “fell out” of the optimization process.

  29. There is a worry here (raised by Reviewer 2) that the weights are learned parameters of the model – and a feedforward model would not necessarily learn the same parameters. This seems plausible, but I am not here concerned with how the model is initially constructed or trained, but rather its responses to stimuli once trained (and once all the weights have been set). That is, I am not taking the model to be a model of how the brain learns, but rather how it infers or responds on the time scale of perception. So once the weights are fixed, we can interpret the numbers in the very same model as either error signals or stimulus information signals.
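
    The point can be made with a toy, fixed-weight stage (weights and inputs below are arbitrary illustrations, not Rao and Ballard's trained parameters): the model computes exactly one forward quantity, and nothing in the arithmetic settles which label it deserves.

    ```python
    # One linear stage with frozen weights: the same vector can be read
    # either as "prediction error" or as "residual stimulus information".
    import numpy as np

    W = np.array([[1.0, 0.0],
                  [0.5, 0.5],
                  [0.0, 1.0]])          # fixed (already-learned) weights
    r = np.array([0.8, 0.4])            # higher-level activity
    x = np.array([1.0, 0.7, 0.5])       # bottom-up input

    forward_signal = x - W @ r          # the one quantity the model computes

    print("read as prediction error:      ", forward_signal)
    print("read as residual stimulus info:", forward_signal)  # same numbers
    ```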

  30. Again (with apologies for the repetition…) “What is most distinctive about this … proposal (and where much of the break from tradition really occurs) is that it depicts the forward flow of information as solely conveying error, and the backward flow as solely conveying predictions.” (Clark 2013, pp. 187–188). But once we set the (content) labels aside, the two models are identical (i.e. not committed to any physical or causal differences). Different people applying the model to different systems might want to posit different implementation relations, or different ways of picking out the relevant causal structure, but these relations are not going to be part of the model itself.

  31. The labels can be thought of as intentional glosses; see Egan (2014).

  32. So it might just happen that those who prefer the predictive terminology also prefer one particular implementation while those who prefer the traditional terminology prefer another, different, implementation. But these implementation hypotheses are driven by considerations other than whether perception is predictive or not (such as, for example, what the connectivity patterns of cells in layer V vs. layer VI of cortex are).

  33. This is not only because of the possibility of relabeling just discussed; perhaps more scientifically salient, our best models of V1 responses now come from neural network models optimized for image recognition. See Cadena et al. (2019).

  34. “This difference signal obtained by predictive coding has a much smaller dynamic range than the raw image, and is therefore more suited for transmission through neural fibres with a limited firing rate.” (Hosoya et al. 2005). Strictly speaking this needn’t always be true; consider a case where values in a bitmap can go from 0 to 100 (arbitrary units). Then the range of potential error values would go from −100 to +100, which in fact comes to a greater dynamic range than the original signal required … but presumably we can make up for this by not needing to send as many pixels. Or if we expect errors to be small on average. Or any number of other small adjustments not specified. In general, which encoding scheme is more efficient depends on the kinds of data that you expect to be transmitted on average, and also on the other requirements and constraints on the system. Here is another typical statement of the efficiency claim: “Since top-down predictions suppress expected sensory input (i.e., reduce prediction error), expected stimuli lead to relatively little neuronal firing. Such a coding scheme has several advantages. First, it is metabolically efficient.” (Kok and de Lange 2015)
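
    The arithmetic is easy to check with made-up numbers: the worst-case range of differences is wider than the raw range, but for a smooth input the observed residuals cluster near zero and need far fewer bits per value.

    ```python
    # Toy version of the dynamic range comparison (illustrative values).
    import math

    row = [50, 52, 51, 53, 52, 54, 53, 55]   # one smooth image row, 0..100
    diffs = [b - a for a, b in zip(row, row[1:])]

    bits_raw_worst  = math.ceil(math.log2(101))   # 7 bits for 0..100
    bits_diff_worst = math.ceil(math.log2(201))   # 8 bits for -100..+100

    spread = max(abs(d) for d in diffs)           # observed residual spread
    bits_diff_typical = math.ceil(math.log2(2 * spread + 1))

    print(bits_raw_worst, bits_diff_worst, bits_diff_typical)   # 7 8 3
    ```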

  35. Indeed, the “dark room” problem has been much discussed in the context of predictive theories. The best way of eliminating error is to eliminate bottom-up signals, and one way to do that (a way that predictive and traditional theorists agree on!) is to sit quietly in a dark room, leaving the perceptual system unperturbed. On this description, it seems even clearer that so-called prediction error plays the same role that bottom-up sensory information plays in traditional models.

  36. Part of this might be a reflected glow from the enthusiasm for Bayesian theories more generally – and it may be that some promising Bayesian models of perception are cast as predictive, but the two terms are not the same and should not be used interchangeably (as Rescorla argues, forcefully, in his 2017 review of Clark’s Surfing Uncertainty).

  37. A predictive theorist might double down, and argue that one contribution of the predictive framework is to show how entities capable of the relatively bloodless activities characterized by traditional models are, under an alternative labeling scheme, engaged in activities that seem much more interesting and closer to the ones that we ourselves engage in. Or perhaps they point towards a reduction of predictions and errors to things that we can understand in purely causal terms, such as modulatory activity and stimulus-evoked responses. If so, this would be a radical claim, and one that is perhaps empirically testable by looking at neural correlates of behavioral surprise, but I am dubious about this proposal, because the personal-level experience (and practice) of prediction, expectation, and error seem very different from the subpersonal processes and signals sharing the same names. If I were constantly predicting and correcting during perception, it seems to me that looking at the world would feel much more like sight-reading music – a kind of frantic muddle to stay on top of what’s coming – rather than the immersive and effortless process that I actually experience. (Perhaps if I were better at sight-reading I would have different intuitions about this.)

  38. Of course, the same issues can be raised with respect to understanding evolution as a maximizing process, see e.g. Okasha (2019) for an overview of that debate. In our context, perception (traditionally) is supposed to maximize (or satisfice with respect to) the accuracy of its depiction of the world.

  39. See Grush (2001) and Cao (2018) for further discussion of this worry.

  40. I am grateful to Colin Allen, Marc Artiga, Lindy Blackburn, Ned Block, Tian Yu Cao, David Chalmers, Daniel Dennett, Shaul Druckmann, Gary Ebbs, Frankie Egan, Jon Gauthier, Peter Godfrey-Smith, Fred Keijzer, Daniel Kraemer, Enoch Lambert, Kirk Ludwig, Tim Maudlin, Michael Rescorla, Nick Shea, Aaron Sidford, Mark Sprevak, Michael Strevens, Jared Warren, John Han Wen, Adam White, Martha White, Daniel Yamins, and audiences at KCL, MIT, Tufts, Antwerp, and Bochum for helpful discussions and comments. I would also like to thank Reviewer 2 for careful and constructive criticism.

References

  • Adams, R.A., S. Shipp, and K.J. Friston. 2013. Predictions not commands: Active inference in the motor system. Brain Structure & Function 218: 611–643.

  • Atal, B.S., and M.R. Schroeder. 1970. Adaptive predictive coding of speech signals. The Bell System Technical Journal 49 (8).

  • Bialek, W., F. Rieke, R.R. de Ruyter van Steveninck, and D. Warland. 1991. Reading a neural code. Science 252 (5014): 1854–1857.

  • Bitzer, S., H. Park, F. Blankenburg, and S.J. Kiebel. 2014. Perceptual decision making: Drift-diffusion model is equivalent to a Bayesian model. Frontiers in Human Neuroscience 8: 102.

  • Cadena, S.A., G.H. Denfield, E.Y. Walker, L.A. Gatys, A.S. Tolias, M. Bethge, and A.S. Ecker. 2019. Deep convolutional models improve predictions of macaque V1 responses to natural images. PLoS Computational Biology 15 (4): e1006897.

  • Cao, R. 2012. A teleosemantic approach to information in the brain. Biology and Philosophy 27: 49–71.

  • Cao, R. 2018. Computational explanations and neural coding. In The Routledge handbook of the computational mind, ed. M. Sprevak and M. Colombo. London: Routledge.

  • Carlton, T., and A. McVean. 1995. The role of touch, pressure and nociceptive mechanoreceptors of the leech in unrestrained behaviour. Journal of Comparative Physiology A 177: 781–791.

  • Clark, A. 2013. Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences 36: 181–204.

  • Clark, A. 2015. Embodied prediction. In Open MIND: 7(T), ed. T. Metzinger and J.M. Windt. Frankfurt am Main: MIND Group.

  • Clark, A. 2016. Surfing uncertainty: Prediction, action, and the embodied mind. Oxford: Oxford University Press.

  • Conant, R.C., and W.R. Ashby. 1970. Every good regulator of a system must be a model of that system. International Journal of Systems Science 1 (2): 89–97.

  • Egan, F. 2014. How to think about mental content. Philosophical Studies 170: 115–135.

  • Fresco, N., S. Ginsburg, and E. Jablonka. 2018. Functional information: A graded taxonomy of difference-makers. Review of Philosophy and Psychology.

  • Friston, K. 2005. A theory of cortical responses. Philosophical Transactions of the Royal Society of London B: Biological Sciences 360 (1456): 815–836.

  • Friston, K. 2008. Hierarchical models in the brain. PLoS Computational Biology 4 (11): e1000211.

  • Friston, K. 2009. The free-energy principle: A rough guide to the brain? Trends in Cognitive Sciences 13 (7): 293–301.

  • Friston, K.J. 2010. The free-energy principle: A unified brain theory? Nature Reviews Neuroscience 11 (2): 127–138.

  • Friston, K., and C.J. Price. 2001. Dynamic representations and generative models of brain function. Brain Research Bulletin 54 (3): 275–285.

  • Frith, C.D. 2007. Making up the mind: How the brain creates our mental world. Oxford: Blackwell.

  • Grush, R. 2001. The semantic challenge to computational neuroscience. In Theory and method in the neurosciences, ed. P.K. Machamer, R. Grush, and P. McLaughlin, 155–172. Pittsburgh: University of Pittsburgh Press.

  • Grush, R. 2004. The emulation theory of representation: Motor control, imagery, and perception. Behavioral and Brain Sciences 27 (3): 377–396.

  • Hohwy, J. 2013. The predictive mind. New York: Oxford University Press.

  • Hohwy, J., A. Roepstorff, and K. Friston. 2008. Predictive coding explains binocular rivalry: An epistemological review. Cognition 108: 687–701.

  • Hosoya, T., S.A. Baccus, and M. Meister. 2005. Dynamic predictive coding by the retina. Nature 436: 71–77.

  • Hubel, D.H., and T.N. Wiesel. 1959. Receptive fields of single neurones in the cat's striate cortex. The Journal of Physiology 148 (3): 574–591.

  • Issa, E.B., C.F. Cadieu, and J.J. DiCarlo. 2018. Neural dynamics at successive stages of the ventral visual stream are consistent with hierarchical error signals. eLife 7: e42870.

  • Kanwisher, N., J. McDermott, and M.M. Chun. 1997. The fusiform face area: A module in human extrastriate cortex specialized for face perception. The Journal of Neuroscience 17 (11): 4302–4311.

  • Kok, P., and F.P. de Lange. 2015. Predictive coding in sensory cortex. In An introduction to model-based cognitive neuroscience, ed. B.U. Forstmann and E.-J. Wagenmakers, 221–244. New York: Springer.

  • Lewis, J.E., and W.B. Kristan Jr. 1998. Representation of touch location by a population of leech sensory neurons. Journal of Neurophysiology 80 (5): 2584–2592.

  • Mann, S.F. 2018. Consequences of a functional account of information. Review of Philosophy and Psychology.

  • Marr, D. 1982. Vision: A computational investigation into the human representation and processing of visual information. New York: Henry Holt and Co.

  • Moore, T., and M. Zirnsak. 2017. Neural mechanisms of selective visual attention. Annual Review of Psychology 68: 47–72.

  • Mumford, D. 1992. On the computational architecture of the neocortex. II. The role of cortico-cortical loops. Biological Cybernetics 66 (3): 241–251.

  • Poggio, T., and T. Serre. 2013. Models of visual cortex. Scholarpedia 8 (4): 3516.

  • Putnam, H. 1988. Representation and reality. Cambridge: MIT Press.

  • Quine, W.V.O. 1960. Word and object. Cambridge: MIT Press.

  • Rao, R.P.N., and D.H. Ballard. 1999. Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience 2: 79–87.

  • Rescorla, M. 2015. Bayesian perceptual psychology. In The Oxford handbook of the philosophy of perception, ed. M. Matthen. Oxford: Oxford University Press.

  • Rescorla, M. 2016. Bayesian sensorimotor psychology. Mind and Language 31: 3–36.

  • Rescorla, M. 2017. Review of Andy Clark's Surfing Uncertainty. Notre Dame Philosophical Reviews.

  • Schultz, W., P. Dayan, and P.R. Montague. 1997. A neural substrate of prediction and reward. Science 275: 1593–1599.

  • Serre, T., A. Oliva, and T. Poggio. 2007. A feedforward architecture accounts for rapid categorization. Proceedings of the National Academy of Sciences 104 (15): 6424–6429.

  • Seth, A. 2017. From unconscious inference to the beholder's share: Predictive perception and human experience. PsyArXiv preprint (forthcoming in European Review).

  • Shea, N. 2014. Reward prediction error signals are meta-representational. Noûs 48: 314–341.

  • Shea, N. 2015. Distinguishing top-down from bottom-up effects. In Perception and its modalities, ed. S. Biggs, M. Matthen, and D. Stokes, 73–91. Oxford: Oxford University Press.

  • Shea, N. 2018. Representation in cognitive science. Oxford: Oxford University Press.

  • Spratling, M.W. 2008a. Predictive coding as a model of biased competition in visual attention. Vision Research 48: 1391–1408.

  • Spratling, M.W. 2008b. Reconciling predictive coding and biased competition models of cortical function. Frontiers in Computational Neuroscience 2: 4.

  • Stanley, G. 2013. Reading and writing the neural code. Nature Neuroscience 16 (3).

  • Summerfield, C., and F.P. de Lange. 2014. Expectation in perceptual decision making: Neural and computational mechanisms. Nature Reviews Neuroscience 15: 745–756.

  • Summerfield, C., and T. Egner. 2009. Expectation (and attention) in visual cognition. Trends in Cognitive Sciences 13: 403–409.

  • Sutton, R.S. 1988. Learning to predict by the methods of temporal differences. Machine Learning 3: 9–44.

  • Sutton, R.S., and A.G. Barto. 1998. Reinforcement learning: An introduction. Cambridge: MIT Press.

  • Weilnhammer, V., H. Stuke, G. Hesselmann, P. Sterzer, and K. Schmack. 2017. A predictive coding account of bistable perception: A model-based fMRI study. PLoS Computational Biology 13 (5): e1005536.

  • Wolpert, D.M., and R.C. Miall. 1996. Forward models for physiological motor control. Neural Networks 9 (8): 1265–1279.


“Prediction is in effect the conjectural anticipation of further sensory evidence for a foregone conclusion. When a prediction comes out wrong, what we have is a divergent and troublesome sensory stimulation that tends to inhibit that once foregone conclusion, and so to extinguish the sentence-to-sentence conditionings that led to the prediction. Thus it is that theories wither when their predictions fail.” – W.V.O. Quine, Word and Object, ch. 1

Cao, R. 2020. New labels for old ideas: Predictive processing and the interpretation of neural signals. Review of Philosophy and Psychology 11: 517–546. https://doi.org/10.1007/s13164-020-00481-x

Download citation

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s13164-020-00481-x

Navigation