# Motion processing with wide-field neurons in the retino-tecto-rotundal pathway

## Abstract

The retino-tecto-rotundal pathway is the main visual pathway in non-mammalian vertebrates and is heavily involved in visual processing. Despite the extensive receptive fields of tectal and rotundal wide-field neurons, pattern discrimination tasks suggest a system with high spatial resolution. In this paper, we address the problem of how global processing performed by motion-sensitive wide-field neurons can be brought into agreement with the concept of a local analysis of visual stimuli. As a solution to this problem, we propose a firing-rate model of the retino-tecto-rotundal pathway which describes how spatiotemporal information can be organized and retained by tectal and rotundal wide-field neurons while processing Fourier-based motion in the absence of periodic receptive-field structures. The model incorporates anatomical and electrophysiological experimental data on tectal and rotundal neurons, and the basic response characteristics of tectal and rotundal neurons to moving stimuli are captured by the model cells. We show that local velocity estimates may be derived from rotundal-cell responses via superposition in a subsequent processing step. Experimentally testable predictions that are both specific to and characteristic of the model are provided. Thus, a conclusive explanation can be given of how the retino-tecto-rotundal pathway enables the animal to detect and localize moving objects or to estimate its self-motion parameters.

### Keywords

Visual motion · Retino-tecto-rotundal pathway · Optic tectum · Nucleus rotundus · Optic flow

## 1 Introduction

Retinal axons belonging to the retino-tecto-rotundal pathway project in a precise retinotopic manner to the optic tectum (TO) (Mpodozis et al. 1996; Karten et al. 1997). The tectofugal projection arises exclusively from cells in the tectal layer 13 or stratum griseum centrale (SGC) and targets the thalamic nucleus rotundus (Rt) without maintaining a retinotopic organization (Fig. 1(a, b)) (Benowitz and Karten 1976; Engelage and Bischof 1993; Mpodozis et al. 1996; Marin et al. 2003). Nevertheless, deficits in pattern-discrimination tasks and the dramatic postlesional threshold variations in acuity measurements point to the existence of a system with high spatial resolution (Hodos and Karten 1966; Hodos 1969; Hodos and Bonbright 1974; Mulvanny 1979; Hodos et al. 1984; Macko and Hodos 1984; Bessete and Hodos 1989; Watanabe 1991; Güntürkün and Hahmann 1999; Laverghetta and Shimizu 1999; Nguyen et al. 2004). Hodos and Karten (1966) conducted behavioral experiments with pigeons which were trained to peck one of two discs on which visual stimuli were projected. They observed that lesions in the nucleus rotundus caused severe deficits in performance in brightness- and pattern-discrimination tasks. Later on, Laverghetta and Shimizu (1999) showed that lesions in the nucleus rotundus impaired the detection of small moving stimuli. Furthermore, lesions in the caudal ectostriatum, which is the telencephalic target of the tectofugal visual pathway, also caused severe to moderate deficits in visual acuity and motion processing tasks (Hodos et al. 1984; Nguyen et al. 2004).

Tectal neurons with somata in tectal layer 13 have large circular receptive fields spanning ≈ 10 − 60 degrees of the visual field (Luksch et al. 1998; Wu et al. 2005; Schmidt and Bischof 2001). A reconstruction of a representative neuron in the optic tectum is shown in Fig. 1(c). The distribution of dendritic endings is sparse, such that the summed receptive fields of the dendritic endings fill less than 1% of the total receptive field (Mahani et al. 2006). The anatomical organization corresponds to a spotty receptive-field fine structure (Troje and Frost 1998; Letelier et al. 2002; Mahani et al. 2006; Schmidt and Bischof 2001) (Fig. 1(d)). Tectal neurons respond vigorously to small moving stimuli, but they are only weakly selective for the orientation or direction of motion of the stimulus (Frost and Nakayama 1983; Sun et al. 2002). In the tecto-rotundal projection, rotundal neurons receive input from tectal neurons distributed throughout the entire tectum (Fig. 1(b)), whereby the precise point-to-point topography of the retino-tectal projection is completely lost (Benowitz and Karten 1976; Ngo et al. 1994; Karten et al. 1997; Deng and Rogers 1998; Hellmann and Güntürkün 2001; Marin et al. 2003). The tecto-rotundal projection is currently interpreted as implementing a transformation from a retinotopically-organized map into a functionally-organized map (Hellmann and Güntürkün 2001). The several anatomical subdivisions of the Rt correlate with neural populations that respond specifically to different visual modalities, such as two-dimensional motion and in-depth motion (Revzin 1970; Wang and Frost 1990; Wang et al. 1993).

From a theoretical point of view, the following questions arise: (i) How is spatial information organized in the retino-tecto-rotundal pathway, in view of the sparse but extensive receptive and dendritic fields of the neurons? (ii) How is sensitivity to direction of motion generated in the rotundus largely in the absence of periodically arranged subunits that account for motion sensitivity in neural models of other brain areas such as V1 or MT? (iii) How can local velocity estimates be retrieved from motion-sensitive neurons, i.e. rotundal neurons, that have receptive fields spanning up to 120 deg of visual angle?

The paper is structured as follows. In Section 2, we propose a model of the retino-tecto-rotundal pathway and investigate theoretically the spatial organization of this pathway. We also propose a neural network for the extraction of local-velocity fields from the rotundal neural population. In Section 3, we establish by means of computer simulations that the proposed model accounts for motion-sensitive responses of neurons in the optic tectum and the nucleus rotundus. We further provide experimentally testable predictions and demonstrate that local-velocity fields can be computed from the responses of rotundal model neurons. Finally, in Section 4, the results of the model are discussed and directions for future research are indicated.

## 2 Model

In this section, the retino-tecto-rotundal-pathway model is defined and its properties are analyzed theoretically. The basic organization is described in Section 2.1; motion-sensitive mechanisms of rotundal model neurons are then described in Section 2.2. In Section 2.3 we present a neural model for extracting local-velocity fields from rotundal neurons, serving as a proof of concept. In Section 2.4, the connectivity constraints of the model at the tecto-rotundal projection are explored theoretically. In Sections 2.5–2.7, a preprocessing filter is described, model parameters used in computer simulations are characterized, and an error measure for performance evaluation is defined, respectively.

### 2.1 Basic organization of the retino-tecto-rotundal pathway

We propose a firing-rate model of the retino-tecto-rotundal pathway. The tectal and rotundal neurons are modelled as summing units that integrate the responses of input neurons, followed by a rectification of the signal. The receptive/dendritic field properties of tectal and rotundal neurons are constructed from anatomical and electrophysiological data, and as such provide parameters to the model. Sensitivity to stimulus velocity is introduced into the model by including temporal filters at the stage of the tecto-rotundal projection in Section 2.2.

We model the response of a tectal neuron *i* to the visual input *I*(**x**, *t*) through a continuous, time-dependent firing-rate function

\[ r_{tc}^i(t) = \left[ \int_A R_{tc}^i(\mathbf{x})\, I(\mathbf{x},t)\, d\mathbf{x} \right]_{+}, \quad (1) \]

where *A* is the area of the visual field, \(R_{tc}^i(\mathbf{x})\) is the receptive field, **x** = (*x*, *y*) is the position vector, and *t* is the temporal dimension of the visual input. The visual input, integrated over the receptive field, is rectified (rectification being symbolized by [ ]_{ + }). According to the rectification model, [*a*]_{ + } = *a* if *a* > *τ*, and zero otherwise, where *τ* is a threshold parameter (Granit et al. 1963). The functional form of the model neurons is chosen according to standard firing-rate models (Dayan and Abbott 2005). A schematic of the sparse and random connectivity of a tectal neuron is presented in Fig. 2 (colored in red). The receptive fields of the model tectal neurons are assumed to have a spotty and random fine structure in accordance with experimental data. The parameters of the model are specified in Section 2.6 and Fig. 4.
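A minimal numerical sketch of this rectified-summation stage, with the continuous integral replaced by a discrete sum over 1 deg bins; the toy receptive field and input are illustrative assumptions, not model parameters:

```python
import numpy as np

def tectal_rate(R_tc, I, tau=0.0):
    """Tectal firing rate: input integrated over the receptive field,
    then rectified with threshold tau ([a]_+ = a if a > tau, else 0)."""
    a = np.sum(R_tc * I)   # discrete stand-in for the integral over A
    return a if a > tau else 0.0

# Toy spotty receptive field: two dendritic endings with signed weights.
R = np.array([[0.5, 0.0],
              [0.0, -0.25]])
print(tectal_rate(R, np.ones((2, 2))))           # 0.25
print(tectal_rate(R, np.ones((2, 2)), tau=1.0))  # below threshold -> 0.0
```

The rectification threshold plays the same role as *τ* in Eq. (1); setting it to zero reduces the unit to a half-wave rectifier.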

Let \(\tilde{R}_{tc}^i(\mathbf{k})\) denote the spatial Fourier transform of the receptive field of tectal neuron *i*. Multiplying the Fourier transforms of the tectal-cell receptive fields of *n*_{tc} tectal neurons by their respective firing-rate functions and summing, we obtain a representation (or map) of tectal responses in Fourier space,

\[ M_{tc}(\mathbf{k},t) = \sum_{i=1}^{n_{tc}} \tilde{R}_{tc}^i(\mathbf{k})\, r_{tc}^i(t), \quad (2) \]

where **k** = (*k*_{x}, *k*_{y}) is a wave vector with spatial frequencies *k*_{x} and *k*_{y}. It is important to note that the map *M*_{tc}(**k**, *t*) is only implicitly defined through the population of tectal neurons. This map serves as a mental construct in tracking the functional processing path of the neural system being modeled. In Section 3.1, we will employ computer simulations to show that for large numbers of tectal neurons, the map *M*_{tc}(**k**, *t*) closely approximates the spatial Fourier transform of the visual input.

Each wave vector **k** of the map *M*_{tc}(**k**, *t*) corresponds to a subpopulation of tectal neurons, while each tectal neuron can be a member of more than one subpopulation. We assume that rotundal neurons receive input from these subpopulations via an intermediate rotundal unit (depicted in green in Fig. 2). These mediating units are merely constructs to schematize the spatial-frequency processing of the rotundal neurons. We model the response of a rotundal neuron *j* by randomly sampling the responses of tectal subpopulations, weighted by the Fourier transform of a rotundal receptive field *R*_{rc}(**x**), assuring that \(r_{rc}^j(t)\) is real valued (Eq. (4)). Inserting Eq. (2) in Eq. (4) yields the rotundal response as a weighted sum of tectal firing rates, with a weight *w*_{ij} between each tectal neuron *i* and rotundal neuron *j*. Hence, according to our model, the connectivity pattern at the tecto-rotundal projection is determined by the receptive-field structure of tectal and rotundal neurons via Eq. (8). Consequently, within our model, function, expressed in neuronal response properties, is directly related to network connectivity. This is a characteristic feature of the model which may provide the opportunity in the future to test the underlying assumptions of the model directly.

The choice of the functional form of the projection can be motivated as follows. First, reconstruction of the stimulus at the tecto-rotundal projection ensures that at each layer of the pathway the stimulus can be encoded using the same number of neurons, as shown in Section 2.4, instead of requiring an ever-growing number of neurons along the pathway. Second, spatial frequencies are “exposed” at the projection, allowing spatiotemporal filtering to be employed, e.g. to obtain velocity sensitivity, *without* requiring periodic receptive fields, in accordance with experimental observation.

Finally, the visual input can be reconstructed by summing over the responses of all *n*_{rc} rotundal neurons (Eqs. (9)–(10)).

### 2.2 Motion processing with tectal and rotundal neurons

So far, we have described how spatial visual data is organized in our model of the retino-tecto-rotundal pathway. We have defined tectal subpopulations representing global Fourier components of the visual input. However, tectal and rotundal neurons also show selectivity for motion attributes. For example, tectal neurons have been shown to be selective for moving stimuli, while they are only weakly selective for direction of motion (Troje and Frost 1998). This property of tectal neurons might have its origin in the synaptic properties of tectal neurons that promote suppression for static stimuli (Luksch et al. 2004; Khanbabaie et al. 2007), and/or in retinal preprocessing that enhances stimulus contrast. Hence we preprocess the visual input with a high-pass filter (see Section 2.5) to account for spatiotemporal contrast enhancement effects, without going into more detail here.

We define the response of a rotundal model neuron *j* selective for a velocity **v** by filtering its tectal input with a temporal filter *T*_{v,k}(*t*) selective for a temporal frequency *ω* = **k**·**v**, which implements the motion constraint equation (Adelson and Bergen 1985; Barron et al. 1994). This equation states that all the nonzero power associated with a translating 2D pattern lies on a plane through the origin in Fourier space, whose orientation is determined by the pattern velocity vector. The pattern velocity itself can be derived from the nonzero Fourier components by finding the velocity for which the constraint lines of the Fourier components intersect.

This filtering again defines an effective, velocity-dependent weight between each tectal neuron *i* and rotundal neuron *j*. The temporal filter *T*_{v,k}(*t*) is defined for *t* ≥ 0 and zero otherwise. Its frequency profile \(\exp[-(\omega -\mathbf{k}\cdot \mathbf{v})^2/\xi|\mathbf{k}|^2]\) is a Gaussian of width *ξ*. This functional form allows us to adjust how strictly the motion constraint equation is enforced (by varying *ξ*). The parameter *ξ* has dimensions deg^{2}/s^{2}. The temporal filter contains a spatial-frequency-dependent weighting term, which ensures that the same number of cycles is sampled for each spatial frequency.
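The role of *ξ* can be illustrated directly from the Gaussian frequency profile given above; the numerical frequency and velocity values below are arbitrary example inputs:

```python
import numpy as np

# Gain of the Gaussian frequency profile exp[-(omega - k.v)^2 / (xi |k|^2)]:
# maximal on the constraint plane omega = k.v, falling off with a width
# that scales with |k| (same number of cycles per spatial frequency) and xi.
def filter_gain(omega, k, v, xi=0.6):
    k, v = np.asarray(k, float), np.asarray(v, float)
    return np.exp(-(omega - k @ v) ** 2 / (xi * (k @ k)))

k, v = [0.2, 0.0], [1.0, 0.0]     # cycles/deg, deg/frame (example values)
print(filter_gain(0.2, k, v))     # on the constraint plane -> gain 1.0
print(filter_gain(0.5, k, v) < filter_gain(0.3, k, v))  # falls off -> True
```

Increasing `xi` widens the Gaussian and thus relaxes how strictly the motion constraint is enforced.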

Each rotundal subpopulation selective for a velocity **v** segments the part of the image moving with the corresponding velocity. Reconstruction of the segmented entity in real space can be achieved by summing over all rotundal responses selective for the velocity **v**, giving the population response *M*_{rc,v}(**x**, *t*) (Eq. (18)).

### 2.3 Extracting local velocity from rotundal responses

In Section 2.2, we proposed that moving entities can be segmented by means of velocity-selective rotundal model neurons based on Eq. (18). In Section 3.4, we will support this proposition with computer simulations. Of course, real image sequences exhibit complex patterns of motion involving accelerated motions, including rotation. It is thus desirable to compute a local-velocity field (or optic-flow field) of the visual input, in which a velocity estimate is assigned to each point in the image sequence. In this subsection, we formulate a neural model that permits local-velocity estimates to be derived from populations of velocity-selective rotundal neurons, following up with an algorithmic implementation of the model.

The computation of local velocity requires a joint representation of velocity and position. However, rotundal neurons are selective for velocity but not for position. In Section 3.1, it is demonstrated that position can be retrieved from the rotundal responses by integrating the responses of certain subpopulations (see also Eqs. (9–10) and Eq. (18)). We assume that this operation is implemented by neurons positioned at a higher level in the visual pathway, which we call C1 neurons. These neurons might be located in the caudal ectostriatum (Gu et al. 2002; Nguyen et al. 2004).

The response of a C1 neuron *o* jointly selective for velocity **v** and position **x** is defined by superposing the weighted responses of velocity-selective rotundal neurons and fully rectifying the result, where [*a*]_{ + / − } = [*a*]_{ + } if *a* ≥ 0 and [ − *a*]_{ + } otherwise. Constructive interference, measured by taking the absolute value, i.e. full rectification, of the superposed weighted responses of velocity-selective rotundal neurons, leads to a joint selectivity for position and velocity.

At a secondary stage, C2 neurons smooth the C1 responses, where *α* is a smoothing parameter with dimension deg. This computational scheme is illustrated in Fig. 3. The smoothing operation at the secondary stage could alternatively be implemented by horizontal interactions between C1 neurons instead of a two-stage neural network.

We assign a velocity estimate **v**_{e}(**x**, *t*) to each point of the input sequence *I*(**x**, *t*) by finding the C2 neuron among the C2 subpopulation selective for **x** that shows the strongest response, thereby providing the local velocity estimate. In an algorithmic implementation of this scheme, the C1 responses can be expressed through the rotundal population map, where  ∗_{t} denotes a temporal convolution and \(\tilde{F}\) is the inverse spatial Fourier transformation. Keeping in mind that the total rectification performed in Eq. (20) is mathematically equivalent to taking the absolute value of *M*_{rc,v}(**x**, *t*), Eq. (23) can then be replaced by a spatial smoothing of \(|M_{rc,v}(\mathbf{x},t)|\), where  ∗_{x} denotes a spatial convolution. It has been demonstrated recently that algorithms utilizing global Fourier transformations for velocity estimation are not impaired by uncertainties arising in algorithms that utilize a local measurement window (Dellen and Wörgötter 2008). Furthermore, a confidence measure can be defined, and a threshold *τ*_{r} can be applied to select only the more reliable velocity estimates, which is a common strategy in computer vision (Barron et al. 1994).

### 2.4 Connectivity of the tecto-rotundal projection

The weight (connection strength) of a connection between a tectal and rotundal neuron is defined via Eq. (8). If the weight is zero or sufficiently close to zero, the connection can be considered as non-existing. Hence, according to our model, the number of connections is influenced by the choice of the model parameters (see also Section 2.6).

The total number of (non-zero) connections is also bounded by the upper limit *n*_{tc} of the sum of Eq. (7). To assure proper transmission of the spatial information of the stimulus, *n*_{tc} is assumed to be large, which implies that a rotundal neuron makes connections with many tectal cells. Experiments (in the cerebral cortex) indicate that a neuron receives input from about 10^{4} neurons (Koch 1999; Pakkenberg et al. 2003). The large dendritic fields of rotundal neurons may suggest an even higher number for the tecto-rotundal projection. However, we will show in the following that the number of required connections can be decreased by creating rotundal subpopulations which receive input from an exclusive subset of tectal neurons, distributed throughout the entire tectum.

Let us assume that each rotundal neuron *j* receives only input from a subpopulation *P*_{s} of *n*_{c} tectal neurons. For large numbers *n*_{tc} of tectal neurons, the spatial information of the stimulus is still transmitted properly if the condition *n*_{c} *n*_{q} = *n*_{tc} is fulfilled. Here, *n*_{q} denotes the number of subpopulations. It can be shown by computer simulations that for large numbers *n*_{a} of rotundal neurons, the full population response can be written (in some approximation) as a sum over the responses of rotundal subpopulations, where *P*′_{s} is a subpopulation of *n*_{a} rotundal neurons receiving input exclusively from tectal subpopulation *P*_{s}. Hence, we have a total number of *n*_{rc} = *n*_{a} *n*_{q} rotundal neurons. According to this derivation, the number of connections can be decreased without impairing function if a proportional number of rotundal subpopulations is created, each subpopulation receiving input from the same group of tectal neurons. Assuming a fixed total number of tectal neurons *n*_{tc} and constant *n*_{a}, system performance remains constant as long as *n*_{c} *n*_{q} = *n*_{tc} holds.

### 2.5 Preprocessing

The visual input is preprocessed with a spatiotemporal high-pass filter defined in Fourier space, where *τ*_{f} is a threshold parameter with dimension deg^{ − 2}, *k*_{x}, *k*_{y}, and *k*_{t} are the spatial and temporal frequencies of the image sequence, and \(f_s=\text{s}^2/\text{deg}^2\) converts temporal into spatial frequency units. This filter enhances the spatiotemporal contrast of the image sequence.
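The text fixes only the thresholding role of *τ*_{f} and the unit-conversion role of *f*_{s}; the ideal-cutoff form below is therefore an assumed sketch of such a spatiotemporal high-pass, not the paper's exact filter:

```python
import numpy as np

def highpass(I, tau_f=0.2, f_s=1.0):
    """Assumed ideal spatiotemporal high-pass: suppress Fourier components
    whose combined squared frequency k_x^2 + k_y^2 + f_s*k_t^2 falls
    below the threshold tau_f. Removes the static, low-frequency content."""
    T, H, W = I.shape
    kt = np.fft.fftfreq(T)[:, None, None]
    ky = np.fft.fftfreq(H)[None, :, None]
    kx = np.fft.fftfreq(W)[None, None, :]
    F = np.fft.fftn(I)
    F[kx**2 + ky**2 + f_s * kt**2 < tau_f] = 0.0
    return np.real(np.fft.ifftn(F))

I = np.random.default_rng(2).random((8, 16, 16))   # toy image sequence
J = highpass(I)
print(abs(J.mean()) < 1e-10)   # the DC (static, uniform) component is gone
```

Any filter of this family removes the mean (DC) component, consistent with the suppression of responses to static stimuli discussed above.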

### 2.6 Tectal and rotundal receptive-field parameters

The receptive fields of tectal model neurons are generated by randomly distributing points over the receptive-field area *A* with a uniform probability of 0.1 points/deg^{2}. This is done by tiling the potential receptive-field area into small surface elements of 1 deg by 1 deg. With probability 0.1, we locate a point (corresponding to a dendritic ending) in that element. This is repeated for all surface elements. Ideally, this should be done for elements of 1/*n* deg by 1/*n* deg and probability 0.1/*n* with *n* → ∞. In Fig. 4(a), the histogram of nearest-neighbor distances between the created points is shown, representing the distribution of dendritic endings of tectal cells. A visual angle of 1 deg corresponds approximately to 100 *μ*m on the tectal surface (Mahani et al. 2006). The resulting distribution of points (corresponding to dendritic endings) is thus in a realistic range. Having specified the point distribution (or dendritic endings), the corresponding weights to each point are determined by sampling a uniform distribution between − 0.5 and 0.5, as depicted in Fig. 4(b). The corresponding spatial pattern is presented in Fig. 4(c), and the energies of its spatial Fourier components are shown in Fig. 4(d).
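The generation procedure above translates directly into code; the 50×50 deg field size is an example value taken from the simulations in Section 3:

```python
import numpy as np

def make_tectal_rf(height, width, p=0.1, rng=None):
    """Spotty tectal receptive field as in Section 2.6: each 1x1 deg
    element holds a dendritic ending with probability p, and each ending
    is assigned a weight drawn uniformly from [-0.5, 0.5]."""
    rng = rng or np.random.default_rng()
    endings = rng.random((height, width)) < p
    weights = rng.uniform(-0.5, 0.5, (height, width))
    return np.where(endings, weights, 0.0)

rf = make_tectal_rf(50, 50, rng=np.random.default_rng(0))
fill = np.mean(rf != 0)          # fraction of the field covered by endings
print(round(fill, 2))            # close to p = 0.1
```

The small fill fraction reproduces the anatomical observation that the summed dendritic endings cover well under the full receptive-field area.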

The receptive fields of rotundal neurons are defined in spatial Fourier space and are generated by taking the spatial Fourier transform of a point distribution identical to the one used to generate tectal receptive fields (Fig. 4(c)). To our knowledge, there is no conclusive data available about the receptive-field structure of rotundal neurons. Using Eq. (8), we compute the distribution of weights representing the connection strength between a tectal and rotundal neuron. The histogram of the weights is shown in Fig. 4(e). Weights range between − 0.5 and 0.5 with a maximum at zero. Hence, for the chosen receptive fields, only a small fraction of connections are strong. This is in accordance with experimental data (Marin et al. 2003).

Our knowledge about the precise receptive-field properties of tectal neurons is limited. Tectal dendritic-field properties only provide an approximation of tectal receptive fields (Troje and Frost 1998; Mahani et al. 2006; Schmidt and Bischof 2001). However, potential hidden structures in tectal and rotundal receptive fields may have a considerable impact on tecto-rotundal connectivity. Experimental data by Schmidt and Bischof (2001) indicates that the sparse and spotty receptive fields of tectal neurons contain substructures, which are however not yet sufficiently quantified to draw further conclusions with respect to our model. Receptive-field parameters have also been measured for neurons in the superior colliculus (Prevost et al. 2007; Mooney et al. 1988), the mammalian homolog of the optic tectum, but mainly in superficial layers. Future measurements of responses of deep tectal neurons to grating stimuli, i.e. spatial frequency, would allow us to further constrain the model. For example, sparseness in the spatial Fourier space (instead of sparseness in real space) of tectal and rotundal receptive fields has a strong impact on the connectivity pattern predicted by the model. In Fig. 4(f, g) the corresponding spatial patterns of the receptive fields are shown in real space and in Fourier space, respectively. Most weights defining the tecto-rotundal projection are zero, resulting in a sharp peak in the histogram (Fig. 4(h)).

It is important to keep in mind that our model of the retino-tecto-rotundal pathway is completely defined through Eqs. (1)–(18). Experimentally measurable quantities, such as the receptive-field structure of tectal and rotundal neurons, are parameters to the model that can be adjusted according to current knowledge. Predictions of the model such as the distribution of weights of the tecto-rotundal connection are necessarily influenced by the parameter values.

### 2.7 Error measurements

Given estimated and true local-velocity fields of size *m* × *n*, error measures can be computed to quantify the performance of the algorithm. According to Barron et al. (1994), the angular error is defined as

\[ \psi_E = \arccos\left(\frac{\mathbf{v}_e \cdot \mathbf{v} + 1}{\sqrt{|\mathbf{v}_e|^2+1}\,\sqrt{|\mathbf{v}|^2+1}}\right), \]

where **v**_{e} is the estimated velocity and **v** is the true velocity, both embedded as 3D direction vectors (*v*_{x}, *v*_{y}, 1).
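The angular-error measure of Barron et al. (1994) can be sketched as follows; the two test velocities are arbitrary example inputs:

```python
import numpy as np

# Barron et al. (1994) angular error: embed each 2D velocity as the 3D
# direction (v_x, v_y, 1), normalize, and take the angle between the two.
def angular_error(v_e, v):
    a = np.append(np.asarray(v_e, float), 1.0)
    b = np.append(np.asarray(v, float), 1.0)
    c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

print(round(angular_error([1.0, 0.0], [1.0, 0.0]), 3))  # perfect -> 0.0
print(round(angular_error([0.0, 0.0], [1.0, 0.0]), 1))  # unit-speed miss -> 45.0
```

The homogeneous embedding keeps the measure finite even when one of the velocities is zero, which is why it is preferred over the plain angle between 2D vectors.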

## 3 Results

In this section, response properties of tectal and rotundal model neurons are computed by means of computer simulations using the model of the retino-tecto-rotundal pathway introduced in Section 2. We choose parameters *τ*_{f} = 0.2 deg^{2} and *ξ* = 0.6 frame^{2}/deg^{2} for the spatiotemporal filters of the retino-tecto-rotundal pathway (see Eqs. (17) and (36)). The 2D spatial functions defining the receptive fields of tectal and rotundal neurons, *R*_{tc}(**x**) and *R*_{rc}(**x**), respectively, are specified in Section 2.6. To obtain an intensity distribution with a zero mean for each image sequence, we subtract the mean intensity value from *I*(**x**,*t*). The rectification threshold *τ* (see Eq. (1)) is set to zero. We further show in Section 3.6 that local velocity fields can be computed from the output of the model of the retino-tecto-rotundal pathway using the computational scheme derived in Section 2.3. For the C2 neurons, we choose a smoothing parameter *α* = 10 deg (see Eqs. (26) and (22)). The model parameters are not altered unless indicated otherwise.

### 3.1 Organization of spatial information in the tectal and rotundal cell populations

The size of the visual input is given in deg^{2}, assuming that each pixel corresponds to 1 deg of visual angle. For computational reasons, the resolution is constrained to a maximum of 1 deg, which is below the resolution of the avian visual system. The receptive field of each model tectal cell is generated according to Section 2.6, assuming sparseness in real space (see Fig. 5(a–c)). For practicality, the receptive-field size was chosen such that the visual field is tiled into equally large parts. This detail has no effect on the results of the computation. For the tectal-cell populations, we computed the representation of the population response in Fourier space (Eq. (2)) and calculated the correlation coefficient of the Fourier transform *F*[*I*(**x**)] of the input image and *M*_{tc}(**k**). In Fig. 5(a), the correlation coefficient is plotted as a function of the number of tectal cells. For large numbers of cells, a correlation coefficient above 0.95 is obtained; the squared correlation coefficient, which measures the amount of variance reconstructed, is thus larger than 0.9, demonstrating that the original image can be largely retrieved from the tectal populations. The input image, shown as the left panel of the inset in Fig. 5(a) and also shown in Fig. 10(b), is a snapshot of the so-called taxi sequence. The reconstructed image from the tectal responses is juxtaposed as the right panel of the inset.

The receptive fields of rotundal neurons are generated according to Section 2.6, assuming sparseness in real space (see Fig. 4(c–e)), with a size of 95×128 deg. The representation of a representative rotundal receptive field in Fourier space is shown in Fig. 4(d). We can now reconstruct the visual input from the rotundal responses by computing *M*_{rc}(**x**,*t*), but replacing *M*_{tc}(**k**) by *F*[*I*(**x**)]. The correlation coefficient of the input image *I*(**x**) and *M*_{rc}(**x**) is then calculated and plotted in Fig. 5(b) as a function of the number of rotundal cells. For large numbers of cells, a correlation coefficient above 0.9 is obtained, demonstrating that the original image can be largely retrieved from the rotundal populations.

We investigate how noise in the connectivity between tectal and rotundal neurons affects the quality of reconstruction. According to Eq. (2), each tectal cell contributes to each subpopulation **k** with a weight *R*_{tc}(**k**). To each of these weights, we add a noise term \(f_n n_r \overline{|R_{tc}(\mathbf{k})|}\), where *f*_{n} is the noise factor and *n*_{r} is a random number drawn from a Gaussian distribution with a standard deviation of 1. The correlation coefficient of the reconstructed image and the original is plotted in Fig. 5(c). Noise in the connectivity impairs image-reconstruction performance for noise terms in the range of the average absolute connection strength, approximated by \(\overline{|R_{tc}(\mathbf{k})|}\). However, for a noise level of 50% of the average absolute connection strength, i.e. *f*_{n} = 0.5, performance drops only by about 10%. For a noise level of 400%, i.e. *f*_{n} = 4, performance decreases by about 40%. These values suggest robustness and graceful degradation of performance with network damage, which is typical for coarse-coding schemes (Hinton et al. 1986).

### 3.2 Response properties of tectal neurons

We also simulate the response of tectal neurons to various stimulus attributes, i.e. spatial frequency, orientation, and speed. The same parameters are used for the tectal neuron as in the previous subsection. The image sequence contains 20 frames of size 100×100 deg^{2}. We choose the size of tectal receptive fields to be 50×50 deg^{2}. The remaining receptive field parameters are not altered. The size of the rotundal-cell receptive fields is chosen as 100×100 deg^{2}. In the following, velocities are defined in deg/frame for convenience, and typical speeds of objects in this paper are chosen to be in the range of 0 to 5 deg/frame. The frame rate of the motion sequence allows translating the velocity units to deg/s. Typical frame rates are 24 frames/s. For example, a speed of 1 deg/frame corresponds to a speed of 24 deg/s.

We compute the response of tectal model neurons to a sine grating moving in the *x* direction with 1 deg/frame as a function of the spatial frequency of the grating. While the spatial-frequency tuning curves of individual model neurons exhibit multiple, seemingly random peaks (Fig. 6(a), left panel, blue lines), the mean response of the population (averaged over 200 neurons) shows a slight preference for high spatial frequencies (Fig. 6(a), thick red line). This result can be attributed to the preprocessing of the image sequence with a high-pass filter and conforms with the observation that tectal responses are suppressed by static stimuli. We calculate the number of peaks and the corresponding peak heights (above the mean) of the individual tuning curves. The histograms of the number of peaks and of the peak heights are plotted in the right upper and lower panels, respectively. Multiple peaks are commonly observed.

Next, we calculate the response of tectal neurons to a grating of a spatial frequency *k* = 0.2 cycles/deg moving with 1 deg/frame for different orientations of the grating. While the tuning curves of individual neurons show multiple peaks (Fig. 6(b), blue lines), the averaged tuning curve does not show selectivity for orientation (Fig. 6(b), thick red line). The histograms of the number of peaks and of the peak heights of the corresponding tuning curves are given in the right upper and lower panels, respectively.

Lastly, we compute the response of tectal model neurons to a solid square of size 10×10 deg^{2} moving along the *x* axis for different constant velocities. Individual tuning curves show sensitivity to stimulus speed and occasionally weak directional selectivity (Fig. 6(c), blue lines). The averaged tuning curve exhibits strong sensitivity to stimulus speed, but does not show any directional selectivity (Fig. 6(c), thick red line). This result is in accordance with experimental data (Troje and Frost 1998). We calculated the speed tuning index of a tectal neuron by computing [*r*_{1} − *r*_{0}]/*r*_{1} where *r*_{1} and *r*_{0} are the respective values of the tuning curve at *v*_{x} = 1 deg/frame and *v*_{x} = 0 deg/frame. All speed tuning indices smaller than zero were set to zero. The resulting histogram of the speed tuning index is presented in the right panel of Fig. 6(c). Most neurons of the population have a speed-tuning index close to 1.
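The speed-tuning index computation reads directly as code; the toy response values are assumed for illustration:

```python
# Speed tuning index as used for Fig. 6(c): (r1 - r0)/r1 from the values of
# the tuning curve at v_x = 1 and v_x = 0 deg/frame, clipped at zero below.
def speed_tuning_index(r1, r0):
    if r1 <= 0:
        return 0.0
    return max((r1 - r0) / r1, 0.0)

print(speed_tuning_index(10.0, 1.0))   # strong speed tuning -> 0.9
print(speed_tuning_index(2.0, 3.0))    # weaker moving response -> clipped to 0.0
```

An index near 1 means the neuron responds to the moving stimulus but hardly at all to the static one, matching the population histogram described above.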

To our knowledge, no experimental tuning curves to frequency and orientation of spatial Fourier components are available for deep tectal neurons of the tectofugal pathway. Spatial-frequency tuning has been measured only for neurons in superficial layers of the optic tectum. However, there is experimental evidence that deep tectal neurons are only weakly selective for direction of motion, but strongly tuned to speed (Troje and Frost 1998; Letelier et al. 2002).

### 3.3 Response properties of rotundal model neurons

We compute the response of rotundal model neurons selective for the velocity **v** = (1,0) deg/frame to a solid square of size 10×10 deg^{2} moving along the *x* axis at different constant velocities. The parameters of the tectal neurons are chosen as in the previous subsection. The rotundal-cell receptive fields are again of size 100×100 deg^{2}. For the temporal filter of the rotundal neurons, we take *ξ* = 0.6 frame^{2}/deg^{2}. The individual velocity tuning curves are presented in Fig. 7(a) (blue lines). Most of the tuning curves exhibit a pronounced sensitivity to direction of motion. This is reflected in the average tuning curve (based on 200 neurons), depicted in thick red. Directional selectivity has been observed for a certain class of rotundal neurons (Wang and Frost 1990; Wang et al. 1993).

For comparison, we compute the tuning curves for rotundal neurons *without* including the preprocessing step at the tectal level (Fig. 7(b)). The resulting tuning curves show strong selectivity for direction of motion, demonstrating that this rotundal property is not a consequence of the preprocessing operation. The selectivity for direction stems instead from the temporal filtering taking place at the tecto-rotundal projection.

To date, there is not sufficient data available to compare our results quantitatively to real velocity-tuning curves. Experiments however have shown that neurons in the ventral subdivision of the nucleus rotundus are sensitive to the direction of motion (Wang et al. 1993; Wang and Frost 1990).

For the rotundal population of Fig. 7(a), we computed the peak height and the number of peaks of the tuning curves. The histograms are plotted in Fig. 7(c, d). Most tuning curves exhibit only a single peak; however, double peaks are observed as well. We further calculated the direction tuning index of each rotundal neuron as [*r*_{1} − *r*_{ − 1}]/*r*_{1}, where *r*_{1} and *r*_{ − 1} are the respective values of the tuning curve at *v*_{x} = 1 and *v*_{x} = − 1 deg/frame. Most neurons of the population are directionally selective, with a direction tuning index larger than 0.5 (see Fig. 7(e)). A population of 12×10^{4} tectal neurons was chosen for this simulation.
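The two population statistics used here can be written down directly. The helper names and the toy tuning curve below are ours, chosen only to make the definitions concrete:

```python
import numpy as np

def direction_tuning_index(r_pos, r_neg):
    """[r(+1) - r(-1)] / r(+1): values above 0.5 indicate a
    directionally selective neuron."""
    return (r_pos - r_neg) / r_pos

def count_peaks(tuning_curve):
    """Number of interior local maxima of a sampled tuning curve."""
    tc = np.asarray(tuning_curve, float)
    is_peak = (tc[1:-1] > tc[:-2]) & (tc[1:-1] > tc[2:])
    return int(np.sum(is_peak))

# Toy tuning curve sampled over v_x, with a single peak at positive v_x
curve = [0.1, 0.2, 0.5, 1.0, 0.6, 0.2, 0.1]
n_peaks = count_peaks(curve)               # one interior maximum
dsi = direction_tuning_index(1.0, 0.2)     # strongly selective cell
```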

### 3.4 Motion segmentation through motion-sensitive rotundal subpopulations

We compute the response of a rotundal subpopulation with preferred velocity **v** = (1,0) deg/frame to a stimulus consisting of a camouflaged random-dot square moving with a speed of 1 deg/frame to the right, in front of a random-dot background pattern moving with a speed of 1 deg/frame to the left. A schematic of the stimulus is given in Fig. 8(a). Motion segmentation in the proposed model is simulated by computing *M*_{rc,v}(**x**, *t*) for the input image sequence based on a tectal cell population of 12×10^{4} neurons and a rotundal cell population of 8×10^{4} neurons. The response *M*_{r,v}(**x**, *t*) of the rotundal population in real space is depicted in Fig. 8(b). The moving camouflaged random-dot square has been segmented with sharp boundaries and precise spatiotemporal detail.

### 3.5 Local velocity computation and the aperture problem

The population tuning curve of the C2 model neurons peaks at *v*_{x} = 2 deg/frame, which is the velocity of the stimulus. The position tuning curve of a C2 neuron to the same stimulus with respect to its receptive-field center is shown in Fig. 9(b). The responsive region of the C2 neurons is approximately 20 deg in diameter. When using a larger stimulus of size 12×12 deg, the population tuning becomes less pronounced (see Fig. 9(c), solid line); however, the correct velocity estimate can still be extracted from the population response. If the area outside the responsive region of the model neuron is masked, the responses of the C2 neuron suggest an incorrect stimulus velocity of *v*_{x} = − 2 deg/frame in the *x*-direction (see Fig. 9(c), dashed line). This demonstrates that distributed global processing of velocity and subsequent reconstruction of position allow velocities to be reconstructed locally without introducing an aperture. The parameters of the C2 model neurons are *τ*_{f} = 0.2 deg^{2}, *ξ* = 0.6 frame^{2}/deg^{2}, and *α* = 3 deg.

### 3.6 Local-velocity fields from rotundal responses

In this section, we demonstrate that the proposed model enables the animal to compute optic flow from real image sequences. Using the algorithmic implementation of Eq. (26), we compute the local-velocity fields of four image sequences for parameter choices *τ*_{f} = 0.2 deg^{2}, *ξ* = 0.6 frame^{2}/deg^{2}, *α* = 10 deg, and *τ*_{r} = 5. For practical implementation reasons, the temporal filter here has been chosen to be non-causal, which is expected to have only a minor effect on the results. The image sequences selected are benchmark examples commonly used in the machine-vision community (Barron et al. 1994).
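Eq. (26) is not reproduced here, but the underlying idea — recovering a translation from global Fourier components rather than from local patches — can be illustrated with standard phase correlation. The sketch below is a generic stand-in for frequency-domain shift estimation, not the paper's algorithm, and all names are ours:

```python
import numpy as np

def phase_correlation_shift(frame1, frame2):
    """Estimate the dominant integer-pixel translation taking frame2
    into frame1, using only the global Fourier phases of the frames."""
    F1 = np.fft.fft2(frame1)
    F2 = np.fft.fft2(frame2)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12        # keep phase differences only
    corr = np.fft.ifft2(cross).real       # delta-like peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    dy = dy - h if dy > h // 2 else dy    # map wrapped indices to signed shifts
    dx = dx - w if dx > w // 2 else dx
    return int(dx), int(dy)

# A random frame and a copy shifted 3 pixels to the right
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, shift=(0, 3), axis=(0, 1))
dx, dy = phase_correlation_shift(shifted, frame)
```

Every pixel contributes to every Fourier component, yet the shift is recovered exactly — the same spirit in which the model derives local velocities from wide-field responses.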

In the SRI tree sequence, the camera translates along the *x*-axis (horizontal direction) in front of several trees. The velocities are as large as two pixels/frame, and the sequence contains 20 frames. A snapshot is depicted in the left panel of Fig. 10(a), while the right panel shows the estimated velocity field in the *x*-direction. The optic-flow field captures the predominant velocity pattern of the sequence and segments the images into foreground and background.

In the Hamburg taxi sequence, a street scene is shown with four moving objects: a taxi turning the corner, a car in the lower left driving from left to right, a van in the lower right driving from right to left, and a person walking in the upper left. The image speeds of the four moving objects are approximately 1.0, 3.0, 3.0, and 0.3 pixels/frame, respectively. The sequence contains 20 frames. A snapshot is shown in the left panel of Fig. 10(b). Adopting the same parameters as for the SRI sequence, our algorithm returns an optic-flow field in which the moving objects are clearly visible (Fig. 10(b), right panel). The velocity estimates are close to the true velocities of the objects.

The translating and diverging tree sequences are created by moving a camera sideways and towards an image of a tree, respectively. The algorithm returns flow fields with 97% density and angular errors of 1.19 deg_{a} for the translating-tree sequence and 3.83 deg_{a} for the diverging-tree sequence (Fig. 10(c)).
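The angular-error measure of Barron et al. (1994), which underlies these numbers, treats each flow vector as a direction in space-time by appending a unit temporal component. A minimal implementation (function name ours):

```python
import numpy as np

def angular_error_deg(u_est, v_est, u_true, v_true):
    """Angular error between estimated and true flow vectors, each
    embedded as a space-time direction (u, v, 1), in degrees."""
    a = np.array([u_est, v_est, 1.0])
    b = np.array([u_true, v_true, 1.0])
    cos_ae = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos_ae, -1.0, 1.0))))

# A perfect estimate scores 0 deg; reporting zero flow where the true
# flow is one pixel/frame scores 45 deg
perfect = angular_error_deg(1.0, 0.0, 1.0, 0.0)
missed = angular_error_deg(0.0, 0.0, 1.0, 0.0)
```

The appended 1 keeps the measure finite for zero-flow estimates, at the cost of weighting errors at small speeds more heavily than at large ones.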

We also apply the algorithm to the well-known Yosemite sequence (Fig. 10(d)). Each frame of the Yosemite sequence has been generated by mapping aerial photography onto a digital-terrain map. Speeds in the lower left corner go up to four pixels/frame, while the clouds translate with about one pixel/frame to the right. The algorithm achieves an angular error of 7.73 deg_{a} everywhere except for the area of the clouds. In the cloud area, the true motion is unknown, since the clouds are undergoing Brownian motion and changing shape.

## 4 Discussion

We have presented a firing-rate model of the retino-tecto-rotundal pathway for the processing of Fourier-based motion. In this model, responses of tectal neurons are obtained by integrating the visual input over the receptive field of the neuron, which, in accordance with experimental data, is assumed to consist of random dots sparsely distributed over a large area of the visual space. We have established that despite the lack of periodic structures, motion signals can be generated, giving rise to directionally-selective responses of neurons in the nucleus rotundus. Using biologically plausible model parameters, a characteristic distribution of direction-tuning indices for the rotundal population is predicted. Furthermore, spatial information is retained in the population response and can be retrieved at any stage of the processing stream. As a proof of concept, we showed that local velocity estimates may be derived from responses of the rotundal model-neuron population through superposition of rotundal responses by a neural network. This includes the prediction of neurons jointly selective for position and velocity, potentially located in the caudal ectostriatum (Nguyen et al. 2004). Motion-sensitive neurons in the caudal ectostriatum receive input from the nucleus rotundus and have large receptive fields (Nguyen et al. 2004; Gu et al. 2002). The emergence of so-called hot spots within the excitatory receptive field of ectostriatal neurons might indicate the onset of position reconstruction (Gu et al. 2002). Using an algorithmic equivalent of the model, local-velocity fields of four real sequences featuring complex motions have been computed for a fixed set of parameters, demonstrating the feasibility of the approach.

Considering the large receptive fields of tectal and rotundal neurons, a distributed representation of spatiotemporal information is a plausible choice to describe motion processing in the retino-tecto-rotundal pathway. The model results demonstrate that high spatial acuity is indeed in agreement with the specific properties of this pathway. This is also in agreement with work on coarse coding by Hinton et al. (1986), who showed that a stimulus can be represented more accurately by a collection of neurons with broad response functions than by a collection of neurons with more finely-tuned response functions.
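The coarse-coding argument can be made concrete with a toy population: broadly tuned units spaced 1.0 apart still localize a stimulus far more finely than their spacing when decoded together. All names and numbers below are illustrative and not taken from the paper:

```python
import numpy as np

def population_decode(stimulus, centers, width):
    """Encode a scalar stimulus with broad Gaussian tuning curves,
    then decode it as the response-weighted mean of the units'
    preferred values (a simple population-vector readout)."""
    responses = np.exp(-0.5 * ((stimulus - centers) / width) ** 2)
    return float(np.sum(responses * centers) / np.sum(responses))

centers = np.linspace(0.0, 10.0, 11)     # preferred values, spacing 1.0
estimate = population_decode(3.3, centers, width=1.0)
# the readout recovers 3.3 to well within the 1.0 unit spacing
```

No single unit can report the stimulus to better than its broad tuning width; the precision emerges only from the population, which is the sense in which wide-field tectal and rotundal neurons can support high spatial acuity.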

So far, we have confined the analysis and modeling to feedforward processes. However, neurons in the tecto-rotundal pathway exhibit contextual effects. For example, tectal responses to a moving stimulus are suppressed by a background moving in the same direction, and vice versa (Frost and Nakayama 1983; Sun et al. 2002). Contextual influences are thought to be mediated by lateral connections or through feedback from brain areas at a later processing stage (Nakayama 1985; Dellen and Wessel 2008). Our model of the retino-tecto-rotundal pathway might be extended to allow for such interactions. In the future, we aim to investigate the role of isthmo-tectal feedback on motion processing (Meyer et al. 2008). Further, there is evidence that certain classes of neurons in the nucleus rotundus compute various optical variables of looming objects (Wang et al. 1993; Sun and Frost 1998). The responses of these neurons could be modelled adequately by sampling over tectal subpopulations that encode radial spatial frequencies at the tecto-rotundal projection.

Representing the stimulus by distributed representations and performing motion processing in this global space offers specific advantages compared to local representations, such as representational efficiency, i.e. more entities can be encoded by the same number of neurons, and graceful degradation of performance in response to network damage or noise. For motion processing tasks, distributed representations have the advantage that local velocity responses obtained via superposition of responses of wide-field neurons are not constrained by the (measurement-window-induced) aperture problem (see Fig. 9), which is typically introduced when utilizing a small measurement window, i.e. small receptive fields. Hence, the proposed model and the respective optic-flow algorithm are fundamentally different from theories of motion processing culminating in the computation of optic flow that have been developed for the geniculocortical pathway in mammals (Adelson and Bergen 1985; Heeger 1988). In these models, velocity estimates are derived from simple and complex cells that feature periodically arranged *on* and *off* subunits, with each cell covering a local patch of the visual space. In our model, local velocity estimates arise from constructive interference effects in distributed representations. Constructive interference allows joint selectivity for position and velocity to arise, even though at previous processing steps velocity-sensitive rotundal neurons have received input from the entire visual field. The model shows that global transformations are not in conflict with the computation of local velocities.

## Notes

### Acknowledgements

The work has received support from the German Ministry for Education and Research (BMBF) via the Bernstein Center for Computational Neuroscience (BCCN) Göttingen under Grant No. 01GQ0430, the NIH/NEI, ROI EY015678, and the EU Project Drivsco under Contract No. 016276-2. JWC acknowledges support from Fundação para a Ciência e a Tecnologia of the Portuguese Ministério da Ciência, Tecnologia e Ensino Superior and Fundação Luso-Americana.

**Open Access**

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

### References

- Adelson, E. H., & Bergen, J. R. (1985). Spatiotemporal energy models for the perception of motion. *Journal of the Optical Society of America A, 2*, 284–299.
- Barron, J. L., Fleet, D. J., Beauchemin, S., & Burkitt, T. (1994). Performance of optical flow techniques. *International Journal of Computer Vision, 12*, 43–77.
- Benowitz, L. I., & Karten, H. J. (1976). Organization of the tectofugal visual pathway in the pigeon: A retrograde transport study. *Journal of Comparative Neurology, 167*, 503–520.
- Bessete, B., & Hodos, W. (1989). Intensity, color, and pattern discrimination deficits after lesions of the core and the belt regions of the ectostriatum. *Visual Neuroscience, 2*, 27–34.
- Dayan, P., & Abbott, L. (2005). *Theoretical neuroscience: Computational and mathematical modeling of neural systems*. Cambridge: MIT Press.
- Dellen, B., Clark, J., & Wessel, R. (2007). The brain's view of the natural world in motion: Computing structure from function using directional Fourier transformations. *International Journal of Modern Physics B, 21*, 2493–2504.
- Dellen, B., & Wessel, R. (2008). Visual motion detection. In L. Squire, et al. (Eds.), *The new encyclopedia of neuroscience* (pp. 291–295). Amsterdam: Elsevier.
- Dellen, B., & Wörgötter, F. (2008). A local algorithm for the computation of optic flow via constructive interference of global Fourier components. In *Proceedings of the British Machine Vision Conference 2008* (pp. 795–804).
- Deng, C., & Rogers, L. (1998). Organization of the tecto-rotundal and SP/IPS rotundal projection in the chick. *Journal of Comparative Neurology, 394*, 171–185.
- Engelage, J., & Bischof, H. (1993). The organization of the tectofugal pathway in birds: A comparative review. In H. Zeigler, & H. Bischof (Eds.), *Vision, brain, and behavior in birds* (pp. 137–158). Cambridge: MIT Press.
- Frost, B., & Nakayama, K. (1983). Single visual neurons code opposing motion independent of direction. *Science, 220*, 744–745.
- Granit, R., Kernell, D., & Shortess, G. K. (1963). Quantitative aspects of repetitive firing of mammalian motoneurones, caused by injected currents. *Journal of Physiology (London), 168*, 911–931.
- Gu, Y., Wang, Y., Zhang, T., & Wang, S. (2002). Stimulus size selectivity and receptive field organization of ectostriatal neurons in the pigeon. *Journal of Comparative Physiology A, 188*, 173–178.
- Güntürkün, O., & Hahmann, U. (1999). Functional subdivisions of the ascending visual pathways in the pigeon. *Behavioural Brain Research, 98*, 193–201.
- Heeger, D. (1988). Optical flow using spatiotemporal filters. *International Journal of Computer Vision, 1*, 279–302.
- Hellmann, B., & Güntürkün, O. (2001). Structural organization of parallel information processing within the tectofugal visual system of the pigeon. *Journal of Comparative Neurology, 429*, 94–112.
- Hennig, M., Funke, K., & Wörgötter, F. (2002). The influence of different retinal sub-circuits on the non-linearity of ganglion cell behavior. *Journal of Neuroscience, 22*, 8726–8738.
- Hinton, G. E., McClelland, J. L., & Rumelhart, D. E. (1986). Distributed representations. In D. E. Rumelhart, J. L. McClelland, & the PDP Research Group (Eds.), *Parallel distributed processing: Explorations in the microstructure of cognition* (Vol. 1, pp. 77–109). Cambridge: MIT Press.
- Hodos, W. (1969). Color-discrimination deficits after lesions of the nucleus rotundus in pigeons. *Brain, Behavior and Evolution, 2*, 185–200.
- Hodos, W., & Bonbright, J. (1974). Intensity difference thresholds in pigeons after lesions of the tectofugal and thalamofugal visual pathways. *Journal of Comparative & Physiological Psychology, 87*, 1013–1031.
- Hodos, W., & Karten, H. (1966). Brightness and pattern discrimination deficits after lesions of nucleus rotundus in the pigeon. *Experimental Brain Research, 2*, 151–167.
- Hodos, W., Macko, K., & Besette, B. (1984). Near-field acuity changes after visual system lesions in pigeons. II. Telencephalon. *Behavioural Brain Research, 13*, 15–30.
- Karten, H., Cox, K., & Mpodozis, J. (1997). Two distinct populations of tectal neurons have unique connections within the retinotectorotundal pathway of the pigeon (*Columba livia*). *Journal of Comparative Neurology, 387*, 449–465.
- Khanbabaie, R., Mahani, A., & Wessel, R. (2007). Contextual interaction of GABAergic circuitry with dynamic synapses. *Journal of Neurophysiology, 97*(4), 2802–2811.
- Koch, C. (1999). *Biophysics of computation: Information processing in single neurons*. New York: Oxford University Press.
- Laverghetta, A. V., & Shimizu, T. (1999). Visual discrimination in the pigeon (*Columba livia*): Effects of selective lesions of the nucleus rotundus. *Neuroreport, 10*, 981–985.
- Letelier, J., Marin, G., Karten, H., Fredes, F., Sentis, E., Weber, P., et al. (2002). Tectal ganglion cells in the pigeon (*Columba livia*): Microstructure of their motion sensitive receptive fields. *Society for Neuroscience Abstract, 28*, 761.17.
- Luksch, H., Cox, K., & Karten, H. (1998). Bottlebrush dendritic endings and large dendritic fields: Motion-detecting neurons in the tectofugal pathway. *Journal of Comparative Neurology, 396*, 399–414.
- Luksch, H., Khanbabaie, R., & Wessel, R. (2004). Synaptic dynamics mediate sensitivity to motion independent of stimulus details. *Nature Neuroscience, 7*(4), 380–388.
- Macko, K., & Hodos, W. (1984). Near-field acuity after visual system lesions in pigeons. I. Thalamus. *Behavioural Brain Research, 13*, 1–14.
- Mahani, A., Khanbabaie, R., Luksch, H., & Wessel, R. (2006). Sparse spatial sampling for the computation of motion in multiple stages. *Biological Cybernetics, 10*, 1–12.
- Marin, G., Letelier, J., Henny, P., Sentis, E., Farfan, G., Fredes, F., et al. (2003). Spatial organization of the pigeon tectorotundal pathway: An interdigitating topographic arrangement. *Journal of Comparative Neurology, 458*(4), 361–380.
- Meyer, U., Shao, J., Chakrabarty, S., Brandt, S., Luksch, H., & Wessel, R. (2008). Distributed delays stabilize neural feedback systems. *Biological Cybernetics, 99*(1), 79–87.
- Mooney, R. D., Nikoletseas, M. M., Ruiz, S. A., & Rhoades, R. W. (1988). Receptive-field properties and morphological characteristics of the superior colliculus neurons that project to the lateral posterior and dorsal lateral geniculate nuclei in the hamster. *Journal of Neurophysiology, 59*(4), 1333–1351.
- Mpodozis, J., Cox, K., Shimizu, T., Bischof, H., Woodson, W., & Karten, H. (1996). GABAergic inputs to the nucleus rotundus (pulvinar inferior) of the pigeon (*Columba livia*). *Journal of Comparative Neurology, 374*, 204–222.
- Mulvanny, P. (1979). Discrimination of line orientation by visual nuclei. In A. Granda, & J. Maxwell (Eds.), *Neural mechanisms of behavior in the pigeon* (pp. 199–222). New York: Plenum.
- Nakayama, K. (1985). Biological image motion processing: A review. *Vision Research, 25*, 625–660.
- Ngo, T., Davies, D., Egedi, G., & Tömböl, T. (1994). Phaseolus lectin anterograde tracing study of the tectorotundal projections in the domestic chick. *Journal of Anatomy, 184*, 129–136.
- Nguyen, A., Spetch, M., Crowder, N., Winship, I., Hurd, P., & Wylie, D. (2004). A dissociation of motion and spatial-pattern vision in the avian telencephalon: Implications for the evolution of visual streams. *Journal of Neuroscience, 24*, 4962–4970.
- Pakkenberg, B., Pelvig, D., Marner, L., Bundgaard, M. J., Gundersen, H. J., Nyengaard, J. R., et al. (2003). Aging and the human neocortex. *Experimental Gerontology, 38*(1–2), 95–99.
- Prevost, F., Lepore, F., & Guillemot, J. P. (2007). Spatio-temporal receptive field properties of cells in the rat's superior colliculus. *Brain Research, 1142*, 80–81.
- Revzin, A. (1970). Some characteristics of wide-field units in the brain of the pigeon. *Brain Research, 2*, 264–276.
- Schmidt, A., & Bischof, H. J. (2001). Neurons with complex receptive fields in the stratum griseum centrale of the zebra finch (*Taeniopygia guttata castanotis* Gould) optic tectum. *Journal of Comparative Physiology A, 187*(11), 913–924.
- Sun, H., & Frost, B. (1998). Computation of different optical variables of looming objects in pigeon nucleus rotundus neurons. *Nature Neuroscience, 1*, 296–303.
- Sun, H., Zhao, J., Southall, T., & Xu, B. (2002). Contextual influences on the directional responses of tectal cells in pigeons. *Visual Neuroscience, 19*, 133–144.
- Troje, N., & Frost, B. (1998). The physiological fine structure of motion sensitive neurons in the pigeon's tectum opticum. *Society for Neuroscience Abstract, 24*, 642.9.
- Wang, Y., & Frost, B. (1990). Functional organization of the nucleus rotundus of pigeons. *Society for Neuroscience Abstract, 16*, 1314.
- Wang, Y., Jing, S., & Frost, B. (1993). Visual processing in pigeon nucleus rotundus: Luminance, color, motion, and looming subdivisions. *Visual Neuroscience, 10*, 21–30.
- Watanabe, S. (1991). Effects of ectostriatal lesions on natural concept, pseudoconcept, and artificial pattern discrimination in pigeons. *Visual Neuroscience, 6*, 497–506.
- Wörgötter, F., & Koch, C. (1991). A detailed model of the primary visual pathway in the cat: Comparison of afferent excitatory and intracortical inhibitory connection schemes for orientation selectivity. *Journal of Neuroscience, 11*(7), 1959–1979.
- Wu, L., Niu, Y., & Wang, S. (2005). Tectal neurons signal impending collision of looming objects in the pigeon. *European Journal of Neuroscience, 22*, 2325–2331.