# Dendritic morphology predicts pattern recognition performance in multi-compartmental model neurons with and without active conductances

de Sousa, G., Maex, R., Adams, R. et al. J Comput Neurosci (2015) 38: 221. doi:10.1007/s10827-014-0537-1

## Abstract

In this paper we examine how a neuron’s dendritic morphology can affect its pattern recognition performance. We use two different algorithms to systematically explore the space of dendritic morphologies: an algorithm that generates all possible dendritic trees with 22 terminal points, and one that creates representative samples of trees with 128 terminal points. Based on these trees, we construct multi-compartmental models. To assess the performance of the resulting neuronal models, we quantify their ability to discriminate learnt and novel input patterns. We find that the dendritic morphology does have a considerable effect on pattern recognition performance and that the neuronal performance is inversely correlated with the mean depth of the dendritic tree. The results also reveal that the asymmetry index of the dendritic tree does not correlate with the performance for the full range of tree morphologies. The performance of neurons with dendritic tapering is best predicted by the mean and variance of the electrotonic distance of their synapses to the soma. All relationships found for passive neuron models also hold, even in more accentuated form, for neurons with active membranes.

### Keywords

Dendritic tree · Associative memory · Synaptic plasticity · Asymmetry index · Electrotonic length

## 1 Introduction

This paper studies functional aspects of dendritic morphologies. The dendritic trees that are present in the brain exhibit a great variety of morphologies, and different types of neurons (such as pyramidal cells, Purkinje cells, Golgi cells) are characterised by their specific dendritic structures. It is unlikely that this is accidental, and several hypotheses have been posited to explain the existence of these variable dendritic dimensions and branching structures. For example, it has been suggested that the dendritic morphology of a neuron is optimised so that the cost of propagating signals from the synapses to the soma is minimal (Cuntz et al. 2007; Wen and Chklovskii 2008). It is also thought that the dendritic topology, how the dendritic segments are connected, could relate to the firing pattern of the neuron (Mainen and Sejnowski 1996; Fohlmeister and Miller 1997; Krichmar et al. 2002; van Ooyen et al. 2002; van Elburg and van Ooyen 2010).

Dendrites have an important role in the information processing that takes place in a neuron. They are involved in the generation, propagation and integration of synaptic potentials, the back propagation of action potentials and the induction of synaptic plasticity (Gulledge et al. 2005; London and Häusser 2005; Cuntz et al. 2007; Wen and Chklovskii 2008). The latter has been implicated in learning and therefore in the functioning of associative memory [for example Chen et al. (2011) and Steuber et al. (2007)].

In the present study we take a model neuron, which may either be passive or contain active ion channels, and train it to perform a pattern recognition task. The synaptic strengths are set so that the neuron responds differently to patterns it has learnt than to purely random, novel patterns. Both the learnt and the novel patterns are sparse, binary patterns that do not change over time. To assess the effect of morphological variation in the dendrites, we generated a variety of dendritic trees. In our first experiment, using a small neuron with only 22 terminal points and 43 synapses, we were able to generate every possible dendritic structure with binary bifurcations and measure the performance of all of these model neurons. In the remaining experiments we used a bigger neuron with 128 terminal points and 255 synapses, for which the size of the morphological space meant that we could evaluate only a sample of all possible morphologies. There is a variety of metrics that can be used to characterise a tree structure, such as its symmetry or mean depth. Using these metrics, we examined how well the performance of a neuron can be predicted from its morphology.

## 2 Methods

We first present the neuron models and their biophysical parameters (Section 2.1), followed by four metrics used to quantify the morphological features of the dendrite (Section 2.2). Next, we describe the algorithms used for generating, in a systematic way, sample neurons that differed only in their dendritic topology (Section 2.3). Section 2.4 deals with the pattern recognition task (the patterns, their presentation and the metric used to assess neuronal performance), and the final Section 2.5 gives some implementation details. The simulation source code and neuronal model are freely available at https://code.google.com/p/evol-patrec.

### 2.1 The neuron model

The neuronal model used in this work is based on the model and dendritic morphologies described by van Ooyen et al. (2002). In that work, the authors built simple dendritic morphologies with identical electrophysiological properties but varying topological arrangements, which were shown to produce different firing patterns. The model was not based on an actual neuronal morphology, but rather represented members of an abstract morphology space. All morphologies were binary trees with a simplified structure, in which all dendritic segments had the same length. Because of the simplicity of the morphologies produced by this model, we chose them as the basis of our search for optimal morphologies for pattern recognition.

#### 2.1.1 Passive and active membrane properties

The passive membrane properties were \(C_{m} = 0.75~\mu\mathrm{F/cm}^{2}\), \(R_{m} = 30~\mathrm{k}\Omega\,\mathrm{cm}^{2}\) and \(R_{a} = 150~\Omega\,\mathrm{cm}\). For the active models, Hodgkin-Huxley-type kinetic descriptions of ion channels were also taken from van Ooyen et al. (2002), based on Mainen and Sejnowski's two-compartmental model (Mainen and Sejnowski 1996). Their conductance densities and reversal potentials are presented in Table 1. Note that a different \(E_{leak}\) value was used for the passive model (−65 mV), in agreement with a study on pattern recognition in a model hippocampal pyramidal cell by Graham (2001).

Table 1 Ion channel conductances from van Ooyen et al.'s (2002) model

| Conductance | Symbol | Soma (pS/*μ*m²) | Dendrite (pS/*μ*m²) | Reversal potential (mV) |
|---|---|---|---|---|
| Fast Na⁺ | \(\overline {g}_{Na}\) | 3000 | 15 | 60 |
| Fast non-inactivating K⁺ | \(\overline {g}_{Kv}\) | 150 | − | −90 |
| Slow voltage-dependent non-inactivating K⁺ | \(\overline {g}_{Km}\) | − | 0.1 | −90 |
| Slow Ca²⁺-activated K⁺ | \(\overline {g}_{KCa}\) | − | 3 | −90 |
| High voltage-activated Ca²⁺ | \(\overline {g}_{Ca}\) | − | 0.3 | 140 |
| Leak | \(\overline {g}_{Leak}\) | − | 0.33 | −70 |

#### 2.1.2 Compartmentalisation and synapses

Each edge of the binary tree representing the dendritic morphology was implemented as an isopotential compartment receiving one synapse. As a result (see below), the neurons simulated in the exhaustive search, having *m* = 22 terminal branches, counted 43 (= 2*m* − 1) dendritic compartments. Those sampled from the space of trees with 128 terminal branches had 255 compartments. The lengths and diameters of the soma and dendritic compartments were also based on the values used in van Ooyen et al. (2002). The soma was a cylinder of 20 *μ*m length and diameter. Each dendritic compartment had a diameter of 2.5 *μ*m. The passive models were simulated with dendritic compartmental lengths of either 10 *μ*m (all simulations apart from Fig. 8) or 5 *μ*m (Fig. 8, which contains a direct comparison between active and passive models in the same panels). Only 5 *μ*m long compartments were used in the active models, which required some initial tuning to make them appropriate for the pattern recognition task (see also Section 2.4). To analyse the effect of tapering on neuronal performance, we also introduced a new parameter called the *tapering factor*, defined as the ratio between the diameter of a child branch and that of its parent: \(\mathit{tapering}\: \mathit{factor}=\frac {diam_{child}}{diam_{parent}}\). Hence, a *tapering factor* of 1 means no tapering, and a *tapering factor* of 0.8 means that the diameter of each child branch is 20% smaller than that of its parent branch. To prevent the generation of unrealistically thin dendrites through tapering, a minimum dendritic diameter of 0.1 *μ*m was enforced.
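As an illustration of how the tapering factor shapes compartment diameters, the following Python sketch (with our own function and parameter names, not taken from the published code) applies the tapering rule down a path and clamps the result at the minimum diameter:

```python
def taper_diameter(parent_diam, tapering_factor=0.8, min_diam=0.1):
    """Diameter (um) of a child compartment given its parent's diameter.

    A tapering factor of 1.0 means no tapering; 0.8 makes each child
    branch 20% thinner than its parent. Diameters are clamped at
    min_diam to avoid unrealistically thin dendrites.
    """
    return max(parent_diam * tapering_factor, min_diam)

# Example: diameters along a path from a 2.5 um root to depth 15;
# the clamp takes effect once 2.5 * 0.8**k falls below 0.1 um.
diams = [2.5]
for _ in range(15):
    diams.append(taper_diameter(diams[-1]))
print([round(d, 3) for d in diams])
```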

The models were excited through activation of synapses of the AMPA-receptor type, one on each compartment. These synapses were modelled as time-varying conductances with a dual-exponential time course and implemented as *Exp2Syn* objects in the NEURON simulator (Carnevale and Hines 2006), with parameter values of 0.2 and 2 ms for the rise and decay time constants, respectively, and a reversal potential of 0 mV. The peak conductance amplitude of a naive synapse (before learning) was set to 1 nS for passive models and 1.5 nS for active models. After learning, these conductances were scaled by multiplying them with the resulting synaptic weights (see Section 2.4).
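In NEURON's Python interface, one such synapse can be set up as in the minimal sketch below. The section here is only a placeholder compartment, and note that NetCon weights for conductance-based point processes are specified in μS, so a 1 nS peak conductance corresponds to a weight of 0.001:

```python
from neuron import h

dend = h.Section(name="dend")      # placeholder dendritic compartment
dend.L, dend.diam = 10, 2.5        # 10 um long, 2.5 um diameter

syn = h.Exp2Syn(dend(0.5))         # dual-exponential conductance
syn.tau1 = 0.2                     # rise time constant (ms)
syn.tau2 = 2.0                     # decay time constant (ms)
syn.e = 0.0                        # reversal potential (mV)

stim = h.NetStim()                 # a single presynaptic event
stim.number, stim.start = 1, 5.0   # one spike at t = 5 ms

nc = h.NetCon(stim, syn)
nc.weight[0] = 0.001               # peak conductance in uS (= 1 nS)
```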

### 2.2 Morphological tree metrics

The first metric, the asymmetry index \(A_{t}\), quantifies the topological asymmetry of a tree. For a given tree \(\alpha^{n}\), with *n* terminal segments and *n* − 1 bifurcation points, it is defined as:

$$A_{t}(\alpha^{n}) = \frac{1}{n-1}\sum\limits_{j=1}^{n-1} A_{p}(r_{j},s_{j})$$

where the partition asymmetry \(A_{p}\) at a given vertex *j* is defined as:

$$A_{p}(r_{j},s_{j}) = \frac{|r_{j}-s_{j}|}{r_{j}+s_{j}-2} \qquad \text{for } r_{j}+s_{j}>2$$

where \(r_{j}\) and \(s_{j}\) are the numbers of terminal segments in the two subtrees of the vertex *j*, and \(A_{p}(1,1)\) is equal to zero. Given this equation, we find that the asymmetry index is zero for the most symmetric tree and close to one for the most asymmetric one.

The second metric is the mean depth of a tree. For a tree \(\alpha^{n}\) with *n* terminal segments, the mean depth \(P_{t}\) is defined as:

$$P_{t} = \frac{1}{2n-1}\sum\limits_{i=1}^{2n-1} P_{i}$$

where \(P_{i}\) is the total number of edges on the path from the *i*th segment to the soma. Notice that the mean depth is calculated over all 2*n* − 1 dendritic segments instead of just the terminal ones, as previously done in related work (van Ooyen et al. 2002). This is required as we need to consider the location of all synapses, which are uniformly distributed over all dendritic segments.

The third metric is the mean electrotonic path length. Each dendritic segment *i* has its length \(\ell_{i}\) normalised by an electrotonic length constant \(\lambda_{i}\), which is defined as:

$$\lambda_{i} = \sqrt{\frac{d_{i}\,R_{m}}{4\,R_{a}}}$$

where \(d_{i}\) is the diameter of the dendritic segment *i*. So, the normalised electrotonic length \(\Lambda_{i}\) is given as:

$$\Lambda_{i} = \frac{\ell_{i}}{\lambda_{i}}$$

To calculate the mean electrotonic path length of a tree with *n* terminal segments, the following equation is used:

$$\bar{\pi} = \frac{1}{2n-1}\sum\limits_{i=1}^{2n-1} \pi_{i}$$

where \(\pi_{i}\) is the sum of the electrotonic lengths \(\Lambda_{j}\) of all the dendritic segments on the path from dendritic segment *i* to the soma. It is important to notice that when all compartments have the same length and diameter (no tapering), the mean electrotonic path length is proportional to the mean depth metric. For this reason, the mean electrotonic path length is only reported in the final section, where tapering is examined.

The last metric, the variance of the electrotonic path length, is the variance of \(\pi_{i}\) across all synapses. This metric was introduced as it may correlate better with the signal-to-noise metric used to quantify neuronal performance, which also involves the calculation of variances (as explained in Section 2.4).
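To make the first two metrics concrete, the sketch below computes the asymmetry index and mean depth for a binary tree represented as nested tuples (a representation and set of helper names of our own choosing; a leaf is the integer 1, an internal node a pair of subtrees):

```python
def n_terminals(tree):
    """Number of terminal segments in a (sub)tree."""
    if tree == 1:
        return 1
    left, right = tree
    return n_terminals(left) + n_terminals(right)

def asymmetry_index(tree):
    """Mean partition asymmetry A_p over the n - 1 bifurcation points."""
    def partitions(t):
        if t == 1:
            return []
        left, right = t
        return ([(n_terminals(left), n_terminals(right))]
                + partitions(left) + partitions(right))

    def a_p(r, s):
        return 0.0 if r + s == 2 else abs(r - s) / (r + s - 2)

    parts = partitions(tree)
    return sum(a_p(r, s) for r, s in parts) / len(parts)

def segment_depths(tree, depth=1):
    """Edge counts from each of the 2n - 1 segments to the soma;
    the root segment itself has depth 1."""
    if tree == 1:
        return [depth]
    left, right = tree
    return ([depth] + segment_depths(left, depth + 1)
            + segment_depths(right, depth + 1))

def mean_depth(tree):
    d = segment_depths(tree)
    return sum(d) / len(d)

# Most symmetric vs most asymmetric tree with 4 terminal points
symmetric = ((1, 1), (1, 1))
asymmetric = (1, (1, (1, 1)))
print(asymmetry_index(symmetric), asymmetry_index(asymmetric))  # 0.0, ~0.67
print(mean_depth(symmetric), mean_depth(asymmetric))            # ~2.43, ~2.71
```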

### 2.3 Systematic tree generation

#### 2.3.1 Representation of dendritic trees

To represent and generate dendritic trees, the partition notation of van Pelt and Verwer (1985) was used. A partition at a bifurcation point in a binary tree is defined by a pair of numbers which denote the degree (number of terminal points) of each subtree. Each partition thus describes how the terminal points of the subtree rooted at a bifurcation are split between its left and right branches, and the topology of the whole tree can be characterised by the set of partitions at its bifurcation points. So, for example, the most asymmetric tree with 5 terminal points can be described by the partitions 5(1 4(1 3(1 2(1 1)))).

The number of trees \(T_{n}\) with *n* terminal points can be calculated using the following recursion:

$$T_{n} = \sum_{\substack{a+b=n\\ a,b>0}} T_{a}\,T_{b}, \qquad T_{1}=1$$

where *a* and *b* are the degrees of the two subtrees at the root.
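A direct transcription of this recursion, memoised for speed, shows how quickly the tree space grows with the number of terminal points (the code follows the recursion exactly as reconstructed above, so it should be read with the same caveat):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def n_trees(n):
    """Number of trees with n terminal points under the recursion
    above, with the base case T_1 = 1."""
    if n == 1:
        return 1
    return sum(n_trees(a) * n_trees(n - a) for a in range(1, n))

print(n_trees(22))   # tree count at the order used for the exhaustive search
```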

#### 2.3.2 Trees exhaustively generated

#### 2.3.3 Trees selectively generated

As it was not possible to simulate the whole range of neuronal morphologies for the desired dendritic tree order (128 terminal points), a second method was used, comparing randomly generated morphologies. To achieve this, we implemented an algorithm to produce samples of dendritic trees with a given number of terminal points (see the pseudocode given in Algorithm 2 in the “Appendix”). This algorithm differs from the exhaustive one mainly in its splitting function. Instead of generating the whole range of possible partitions for each pair of branch degrees *a* and *b*, the algorithm depends on a bias value, which controls the partition. In summary, a low bias makes trees with extreme asymmetry values more likely, that is, more symmetric or more asymmetric trees, whereas a bias equal to 0.5 makes the algorithm generate completely random trees.
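Algorithm 2 itself is given in the Appendix; the sketch below is only our illustrative reading of the bias mechanism described above, in which a bias of 0.5 yields uniformly random splits and lower values favour extreme (maximally symmetric or maximally asymmetric) partitions:

```python
import random

def biased_split(n, bias):
    """Split n terminal points into subtree degrees (a, b).

    With probability 1 - 2 * bias the split is pushed to an extreme,
    either maximally asymmetric (1, n - 1) or maximally symmetric
    (n // 2, n - n // 2); otherwise it is drawn uniformly. At
    bias = 0.5 the extreme branch is never taken, so all splits are
    uniform. Illustrative only, not the published Algorithm 2.
    """
    if random.random() < 1.0 - 2.0 * bias:
        return random.choice([(1, n - 1), (n // 2, n - n // 2)])
    a = random.randint(1, n - 1)
    return (a, n - a)

def sample_tree(n, bias=0.5):
    """Recursively generate a random tree with n terminal points."""
    if n == 1:
        return 1
    a, b = biased_split(n, bias)
    return (sample_tree(a, bias), sample_tree(b, bias))
```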

### 2.4 The pattern recognition task

The neuronal model was trained to discriminate between stored and novel spatial input patterns. A pattern was a random vector of binary numbers with one number for each compartment, a positive bit meaning that the associated synapse was to be activated. The patterns were sparse: only about 10% of the synapses were activated per pattern. The selectively generated neurons with 128 terminal branches (255 dendritic compartments) received 255-bit input patterns with 25 positive bits. In the exhaustive search of neurons with 22 terminal branches, 43-bit patterns with 4 positive bits were used. For every neuron and trial, each presented pattern was newly generated. Potential effects of this randomness were avoided by averaging, for selected neuron samples, over 100 trials (Figs. 7a, 8, 10 and 11).
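For example, a pattern of this kind can be generated with a few lines of NumPy (a minimal sketch; the sizes follow the text for the 128-terminal-point neurons):

```python
import numpy as np

rng = np.random.default_rng()

def make_pattern(n_synapses=255, n_active=25):
    """Sparse binary pattern: n_active positive bits (synapses to be
    activated) out of n_synapses, i.e. roughly 10% sparseness."""
    pattern = np.zeros(n_synapses, dtype=int)
    active = rng.choice(n_synapses, size=n_active, replace=False)
    pattern[active] = 1
    return pattern

stored = np.array([make_pattern() for _ in range(10)])  # to be learnt
novel = np.array([make_pattern() for _ in range(10)])   # for testing
```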

In the learning phase, the synaptic weights were set by Hebbian learning: if the *N* patterns to be learnt were \(\mathbf{x}^{\mu}\) (\(\mu = 1 \ldots N\)), the (dimensionless) weight at synapse *i* was given by \(w_{i}={\displaystyle {\sum }_{\mu }x_{i}^{\mu }}\). In the recall phase, the performance was measured by comparing the neuronal responses to the presentation of learnt and novel input patterns. In the passive models, the comparison was based on the somatic EPSP amplitudes, as shown in Fig. 4; in the active models, the number of evoked action potentials in a 100 ms time window was used, as shown in Fig. 5. It should be noted that the accuracy of this performance evaluation depends on the number of trials: the greater the number of trials, the more accurate the performance measurement.
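Continuing the pattern sketch above, the weight rule and the subsequent conductance scaling amount to a single sum over the stored patterns:

```python
import numpy as np

def learn_weights(stored_patterns):
    """Hebbian-type weights: w_i = sum over mu of x_i^mu."""
    return np.sum(stored_patterns, axis=0)

w = learn_weights(stored)   # 'stored' as generated in the sketch above
g_naive = 1.0               # naive peak conductance (nS), passive model
g_peak = g_naive * w        # scaled peak conductances used during recall
```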

Pattern recognition performance was quantified as a signal-to-noise ratio (s/n):

$$s/n = \frac{(\mu_{s}-\mu_{n})^{2}}{\frac{1}{2}\left({\sigma_{s}^{2}}+{\sigma_{n}^{2}}\right)}$$

where \(\mu_{s}\) and \(\mu_{n}\) represent the mean values and \({\sigma _{s}^{2}}\) and \({\sigma _{n}^{2}}\) the variances of the responses to stored and novel patterns, respectively. The histograms presented in Figs. 4 and 5 show a clear discrimination between stored (blue bars) and novel patterns (red bars), which results in high s/n ratios in both figures (23.76 and 16.20, respectively). Note that all signal-to-noise ratios mentioned in Section 3 are the averages of at least five complete trials of this learning and testing procedure.
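Given arrays of responses (EPSP amplitudes or spike counts) to the stored and novel patterns, the s/n ratio can be computed as in this short sketch:

```python
import numpy as np

def signal_to_noise(stored_responses, novel_responses):
    """s/n = (mu_s - mu_n)^2 / (0.5 * (var_s + var_n))."""
    mu_s, mu_n = np.mean(stored_responses), np.mean(novel_responses)
    var_s, var_n = np.var(stored_responses), np.var(novel_responses)
    return (mu_s - mu_n) ** 2 / (0.5 * (var_s + var_n))
```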

Given that the measured responses were different in passive and active neurons (EPSP amplitude versus number of spikes), a slightly different strategy of pattern presentation was used in the active models in order to obtain signal-to-noise ratios of similar magnitude. In particular, whereas in passive neurons all positive pattern bits activated their associated synapses simultaneously and only once, in active neurons these synapses were each activated by a train of 5 spikes with 3 ms interspike intervals. Moreover, as the output of the active neurons was discrete (their number of spikes), their range of responses was limited, and it was not uncommon for some neuronal topologies to generate identical spike numbers for all patterns, rendering the variances zero and hence the s/n ratio undefined. To avoid this problem, noise was added to the synapses of the active models. This noise comprised both a jitter on the timing of the spikes in the afferent train and a random background 1 Hz activation of each synapse in the tree with a strength of 0.5 nS. As a control, we applied in Fig. 8 the same afferent trains and background noise to both active and passive models.
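In NEURON, both the afferent 5-spike trains and a roughly 1 Hz random background activation can be produced with NetStim sources, as in the sketch below. Treating the background as a Poisson-like process is our assumption; the text above only specifies its 1 Hz rate and 0.5 nS strength:

```python
from neuron import h

dend = h.Section(name="dend")
syn = h.Exp2Syn(dend(0.5))    # synapse as in Section 2.1.2

train = h.NetStim()           # afferent train for the active models
train.number = 5              # 5 spikes ...
train.interval = 3.0          # ... with 3 ms interspike intervals
train.start = 10.0
nc_train = h.NetCon(train, syn)
nc_train.weight[0] = 0.0015   # 1.5 nS naive weight, in uS

bg = h.NetStim()              # background source, one per synapse
bg.interval = 1000.0          # mean interval 1000 ms -> ~1 Hz
bg.number = int(1e9)          # effectively unlimited events
bg.noise = 1                  # fully randomised (exponential) intervals
bg.start = 0
nc_bg = h.NetCon(bg, syn)
nc_bg.weight[0] = 0.0005      # 0.5 nS background strength, in uS
```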

### 2.5 Implementation details

All trees were generated by LISP programs, stored using their partition representation, and read into the NEURON simulator (Hines and Carnevale 1997) by custom-made routines implemented in C++. Simulating a neuronal morphology with 128 terminal points took approximately 7 seconds for passive models and 24 seconds for active models on a dual quad-core 2.8 GHz Intel Xeon machine with 8 GB of physical memory under Mac OS X 10.6. The most computationally intensive simulations, covering 155,000 passive morphologies, ran for about 10 days distributed over five dual quad-core computers. The data were analysed using MATLAB (MathWorks).

## 3 Results

In our exploration of the relationship between pattern recognition performance and dendritic morphology, and our search for the best metric to describe this relationship, we first compared all possible trees with 22 terminal points using a passive model neuron. Next we compared, using both passive and active neuronal models, an extensive sample of trees with 128 terminal points, for which the space of all morphologies is too large to be explored exhaustively. Finally, we studied the effects of tapering of the diameter of the dendritic compartments.

### 3.1 Comparing exhaustively generated trees

These results demonstrate that, for trees with 22 terminal points, the trees with lower values of the two metrics (asymmetry index and mean depth), which represent the more symmetric morphologies, are the ones with the better pattern recognition performance. The trends presented in Fig. 6 show a decrease of performance as the morphologies become more asymmetric. The fluctuations found in the last bins of each metric, as well as in the initial bins for the asymmetry index, result from the low number of trees contained in these bins.

### 3.2 Comparing selectively generated trees

As described in Section 2.3.3, the sampling algorithm had a parameter (*bias*) that could be set to ensure that trees with extreme morphologies (very symmetric and very asymmetric trees) were sampled as well. The scatter plot of Fig. 7a shows, for each sampled tree implemented as a passive neuron, the signal-to-noise ratio averaged over 5 trials of 20 patterns. Notice that the LISP program sampled the entire range of asymmetry indices, though not uniformly, generating the data shown in the blue scatter plot of Fig. 7a. Initially, the result seemed unpromising. However, a more detailed examination revealed a clearer pattern. As for each neuron 20 new patterns (10 acting as stored patterns and 10 acting as novel patterns) were randomly generated at each trial, and as each tree was assessed over only 5 trials, the difficulty of the pattern recognition task was expected to depend on the particular set of 5 times 20 patterns used for each neuron (for example, it should be easier for neurons to distinguish orthogonal or non-overlapping patterns). Thus, as the accuracy of the performance measure increases with the number of trials, we partitioned the tree space into bins of 0.01 width along the asymmetry index axis, randomly selected one neuron from each bin, and averaged its performance over 100 trials of the same pattern recognition task. The results, shown as red data points in Fig. 7a, allowed us to verify the overall trend visible in the data. The blue error bars in Fig. 7b plot the mean and standard deviation of the signal-to-noise ratio in each bin, and the smooth character of the curve arises from the large number of samples averaged in each bin. For Fig. 7c, a similar plot was produced in which the mean and standard deviation of the signal-to-noise ratio in each bin are plotted against the mean depth of the trees. The mean depth shows a better correlation with the pattern recognition performance, which can be explained by the fact that this metric is calculated from the distance of each synapse to the soma.

### 3.3 Robustness of the results

The advantage of the more symmetric morphologies was also robust to the addition of NMDA receptor channels at the synapses (mean ± standard deviation; NMDA/AMPA ratio 0.5: s/n = 32.59 ± 11.27 for fully symmetric trees, s/n = 24.59 ± 9.01 for fully asymmetric trees; NMDA/AMPA ratio 1: s/n = 29.31 ± 12.62 for fully symmetric trees, s/n = 23.18 ± 10.52 for fully asymmetric trees; compare Fig. 10d).

### 3.4 The effect of dendritic tapering

## 4 Discussion

The main result of this paper is that the dendritic morphology of a neuron has a major effect on its pattern recognition performance. To study how dendritic morphology affects pattern recognition performance, we generated all possible dendritic trees with 22 terminal points, and compared simulations of these smaller trees to a representative selection of larger trees with 128 terminal points. In both cases, the fully symmetric morphologies showed a better performance than the fully asymmetric ones. However, the results for the selectively generated trees with 128 terminal points showed that dendritic morphologies with an asymmetry index of up to 0.4 performed as well as the most symmetric ones. This insensitivity of performance to asymmetry among relatively symmetric trees can be explained by the fact that all trees with an asymmetry index up to 0.4 share the same, lowest mean depth, which results in a poor overall correlation between asymmetry index and neuronal performance.

The present study is an extension of previous work by us (Steuber et al. 2007) and others (Graham 2001) that has used input patterns with synapses that are activated synchronously by single pulses. The nature of the patterns of neuronal activity that are stored and recalled in real neuronal systems is not known, although the sparse activity that has been recorded in many neuronal systems suggests that some neurons have to decode patterns of single spikes or bursts of spikes (Chadderton et al. 2004). Other studies (Poirazi et al. 2003b) have used input patterns where synapses were activated by high-frequency spike trains. However, a companion paper by the same authors (Poirazi et al. 2003a) has shown that although the type of input pattern (single pulses vs spike train) affects the exact shape of the neuronal input-output relation, the type of arithmetic operations performed by the neurons is the same for both types of input patterns. Whilst we have used an evolutionary algorithm to optimise the number of spikes that each active synapse receives in another study (de Sousa et al. 2012), in the current study we have therefore used simple types of input patterns with one spike or a short burst of spikes for each synapse. Although the type of input pattern may affect the value of the signal-to-noise ratio and hence the pattern recognition performance, we never found it to affect the shape of the relationships between dendritic morphology and pattern recognition described in the present paper.

Although our conclusions are based on simulations of binary trees trained to recognise random (and hence uncorrelated) input patterns, we think they can be generalised to more anatomically constrained input configurations, like those which neurons receive for instance in layered structures such as neocortex. Indeed, in the present pattern recognition task, the neurons had to summate the weights of the synapses that were activated by the pattern, and at least in the active models, compare this sum to a reference, their spike threshold (Willshaw et al. 1969). As the input patterns caused transient responses, the neurons also had to act as coincidence detectors, to which symmetric neurons arguably are better suited. In a study of model neurons for interaural coincidence detection, Agmon-Snir et al. (1998) reported another advantage of neurons with symmetric trees, which holds even in the absence of learning: the threshold intensity needed to fire them is lower for spatially balanced than for unbalanced stimuli. However, this is not to say that asymmetric trees would always have a negative effect on pattern recognition. If neurons were to recognise asynchronous input patterns, asymmetric trees would offer an advantage by slowing down the propagation of the earliest EPSPs so as to synchronise their arrival at the soma with the EPSPs of later activated synapses (Rall 1964). Moreover, the optimal shape of a dendritic tree will be affected by other factors, such as the desire to maximise the number of possible connectivity patterns between dendrites and neighbouring axons (Wen et al. 2009).

From the present results, and more particularly from the inverse relationship between electrotonic distance and pattern recognition performance (Fig. 11), one might be inclined to conclude that smaller neurons would always be better pattern recognisers than large ones. In the limit, even a single-compartment neuron, though physically implausible, would be the most cost-effective. However, this conclusion is mistaken, as it is based on a comparison of trees built of compartments of a given fixed length. Indeed, when in another study we used genetic algorithms to optimise dendritic shape (de Sousa et al. 2012), treating compartmental length as a free parameter, we found no clear indication that neurons would minimise the length of their compartments, which would obviously further minimise both the electrotonic distance and its variance. The reason is that individual synapses must be sufficiently isolated, or compartmentalised, to prevent sublinearities in the generation and summation of their EPSPs, which inevitably arise due to shunting of the current at the synapse's reversal potential (Rall 1964). One could of course linearise the interaction between synapses by reducing their weights, but in actual neurons membrane noise may put a limit on this miniaturisation, as it does for axons (Faisal et al. 2005). Hence the trade-off between minimising the synapses' distance from the soma and preventing sublinear interference by maximising the distance between the synapses may be best satisfied by symmetric multi-compartmental trees. Another strategy for neurons, not covered by the present study, may be to enhance their computational capacity by taking advantage of dendritic nonlinearities and expanding them through localised, branch-specific interactions (Legenstein and Maass 2011; Poirazi and Mel 2001; Poirazi et al. 2003a, b; Caze et al. 2013).