1 Introduction

The program of theoretical physics for understanding a given system is to specialize first principles to it and to solve the resulting equations. That program has been extremely successful and has defined our idea of an exact and quantitative science. The predictive power of first principles originates from the astonishing correspondence between experimental objects and mathematical structures: the mechanics of macroscopic objects corresponds to variational principles and differential equations, the behaviour of microscopic objects to operator theory in Hilbert spaces.

The biophysics and biochemistry of cells obey the first principles, too. But cells consist of many components and interactions, and specializing the fundamental equations of physics to a living cell is close to impracticable. The approach of theoretical biophysics is consequently what is usually called mathematical modelling. Instead of a derivation from first principles, a hypothesis on the components and interactions assumed to be most relevant for a specific process of interest defines the model equations. The assumptions need to be verified retrospectively by comparing model predictions with experimental results. Modelling has to find the balance between capturing all relevant components, manageable complexity and the purpose of the model. Within this balance, and in particular since modelling lacks the certainty of first principles, it is essential for predictive power to formulate the model equations within the mathematical structures to which the modelled system corresponds. Otherwise a model might capture the experiment used to develop it, but will very likely fail in predictions beyond this specific setting.

Only a few cellular dynamical systems are currently characterized well enough to identify the mathematical structure corresponding to them. Intracellular \(\hbox {Ca}^{2+}\) dynamics is one of them. The \(\hbox {Ca}^{2+}\) pathway translates extracellular signals into intracellular responses by increasing the cytosolic \(\hbox {Ca}^{2+}\) concentration in a stimulus-dependent pattern [7, 32, 94]. The concentration increase can be caused either by \(\hbox {Ca}^{2+}\) entry from the extracellular medium through plasma membrane channels, or by \(\hbox {Ca}^{2+}\) release from internal storage compartments. In the following, we will focus on inositol 1,4,5-trisphosphate (\(\hbox {IP}_3\))-induced \(\hbox {Ca}^{2+}\) release from the endoplasmic reticulum (ER), which is the predominant \(\hbox {Ca}^{2+}\) release mechanism in many cell types. \(\hbox {IP}_3\) sensitizes \(\hbox {Ca}^{2+}\) channels (\(\hbox {IP}_3\)Rs) on the ER membrane for \(\hbox {Ca}^{2+}\) binding, such that \(\hbox {Ca}^{2+}\) released from the ER through one channel increases the open probability of neighboring channels. This positive feedback of \(\hbox {Ca}^{2+}\) on its own release channel is called \(\hbox {Ca}^{2+}\)-induced-\(\hbox {Ca}^{2+}\)-release (CICR). Opening of an \(\hbox {IP}_3\)R triggers a \(\hbox {Ca}^{2+}\) flux into the cytosol due to the large concentration difference between the two compartments. CICR sometimes amplifies channel openings into a global release event and concentration spike. The released \(\hbox {Ca}^{2+}\) is removed from the cytosol either by sarco-endoplasmic reticulum \(\hbox {Ca}^{2+}\) ATPases (SERCAs) into the ER or by plasma membrane \(\hbox {Ca}^{2+}\) ATPases into the extracellular space.

\(\hbox {IP}_3\)Rs are spatially organized into clusters of up to about fifteen channels within an area with a diameter of 100–500 nm. These clusters are scattered across the ER membrane with distances of 1–7 \(\upmu \hbox {m}\)[10, 53, 59, 85, 92, 93]. CICR and \(\hbox {Ca}^{2+}\) diffusion couple the state dynamics of the channels. Given that the diffusion length of free \(\hbox {Ca}^{2+}\) is less than 2 \(\upmu \hbox {m}\) due to the presence of \(\hbox {Ca}^{2+}\) binding molecules in the cytoplasm and SERCAs, the coupling between channels in a cluster is much stronger than the coupling between adjacent clusters [96]. The structural hierarchy of \(\hbox {IP}_3\)Rs from the single channel to clusters and cluster arrays on cell level shown in Fig. 1 is also reflected in the dynamic responses of the intracellular \(\hbox {Ca}^{2+}\) concentration as revealed through fluorescence microscopy and simulations [10, 62, 97, 108]. Openings of single \(\hbox {IP}_3\)Rs (blips) may trigger collective openings of \(\hbox {IP}_3\)Rs within a cluster (puffs). \(\hbox {Ca}^{2+}\) diffusing from a puff site can then activate neighboring clusters, eventually leading to a global, i.e., cell wide, \(\hbox {Ca}^{2+}\) spike [35, 53, 62, 63]. Repetitive sequences of these \(\hbox {Ca}^{2+}\) spikes encode information that is used to regulate many processes in various cell types [7, 55, 73].

\(\hbox {Ca}^{2+}\) also exerts a negative feedback on the channel open probability, which acts on a slower time scale than the positive feedback and has a higher \(\hbox {Ca}^{2+}\) half-maximum value than CICR [10, 50, 63, 67, 99, 108]. This \(\hbox {Ca}^{2+}\)-dependent negative feedback helps terminate puffs. Therefore, the puff probability immediately after a puff is smaller than the stationary value but typically not 0. Channel clusters recover from this \(\hbox {Ca}^{2+}\)-dependent inhibition to the stationary puff probability within a few seconds [10, 50, 63, 67, 99, 108].

The negative feedback terminating global release spikes causes an absolute refractory period \({T}_{\mathrm{min}}\), lasting tens of seconds, as part of the interspike intervals (ISIs) [71, 100, 107]. The molecular mechanism of this feedback is pathway and cell type specific and not always known. A negative feedback on the \(\hbox {IP}_3\) concentration might be involved [5, 69]. Hence, the negative feedback that determines the time scale of interspike intervals is different from the feedback contributing to interpuff intervals, and it requires global (whole-cell) release events.

Fig. 1

Hierarchical organization of \(\hbox {IP}_3\) induced \(\hbox {Ca}^{2+}\) signalling with concentration signals of the corresponding structural level. The elementary building block is the \(\hbox {IP}_3\)R channel (bottom). It opens and closes stochastically. An open channel entails \(\hbox {Ca}^{2+}\) release into the cytosol due to the large concentration difference between the ER and the cytosol. Since channels are clustered, opening of a single channel, which is called a blip, leads to activation of other channels in the cluster, i.e., a puff (middle). The cluster corresponds to a region with \(\hbox {Ca}^{2+}\) release with a radius \({R}_{\mathrm{cl}}\) that is fixed by the number of open channels. The stochastic local events are orchestrated by diffusion and CICR into cell wide \(\hbox {Ca}^{2+}\) waves, which form the spikes on cell level (top). (Figure from ref. [83].)

Modelling of \(\hbox {Ca}^{2+}\) signalling has relied heavily on ordinary differential equations in the last decades, formulated as rate equations for the average fractions of \(\hbox {IP}_3\)Rs in the states of \(\hbox {IP}_3\)R state schemes and for spatially averaged \(\hbox {Ca}^{2+}\), \(\hbox {IP}_3\) and buffer concentrations [86,87,88, 104]. This approach neglects noise and fluctuations [89]. However, the experimental evidence on both puffs and sequences of global spikes demonstrates random behavior and, therefore, the relevance of higher moments. Additionally, most models do not distinguish between local and global processes and feedbacks. In the end, this entailed dependencies of system characteristics, e.g., the average interspike interval (period), on measurable parameters that deviate from experimental observations, or required parameter values not supported by measurements [88, 104]. The purpose of most models is to simulate cellular behavior, and ordinary differential equations are very convenient to that end. Their derivation, however, has to take the large fluctuations into account, i.e., has to start from stochastic theory as the mathematical structure corresponding to \(\hbox {Ca}^{2+}\) dynamics. We will illustrate with the Siekmann \(\hbox {IP}_3\)R model how this might be done.

An alternative to simulating cellular behavior by differential equations is to determine the distribution of cellular properties generated by the noise inherent to the system [38, 54]. Such an approach would correspond better to the noisy character of cell dynamics, but it will only take hold if the analysis of experimental results adopts such a view of cellular behavior and measures distributions and/or their moments [54]. We will discuss a concept for calculating the first two moments of the interspike interval distribution.

\(\hbox {Ca}^{2+}\) spikes and their statistical measures also bear some similarity to sequences of neural action potentials, the famous neural spike trains. We will also briefly discuss how concepts from computational neuroscience, such as multidimensional integrate-and-fire models and spike train power spectra, could be useful to model and analyze \(\hbox {Ca}^{2+}\) spiking.

2 Experimental results defining modelling concepts

2.1 Puffs are the local dynamics of \(\hbox {Ca}^{2+}\) signalling

2.1.1 Puff properties

Increasingly advanced experimental methods have provided a rich set of data that may guide modelling approaches [45, 59, 60, 79, 81, 100, 102, 106]. Single \(\hbox {IP}_3\)R channels within a cluster stochastically bind and unbind \(\hbox {IP}_3\) and \(\hbox {Ca}^{2+}\). In a state with high open probability, the channels open and close on a millisecond time scale in patch clamp experiments [43]. That time scale does not show up in the opening/closing steps of cellular puff measurements, probably because it is filtered out by the reaction-diffusion processes involved in dye-based recordings.

Due to CICR, a single \(\hbox {IP}_3\)R opening can increase the open probability of neighbouring channels until the complete cluster opens and releases \(\hbox {Ca}^{2+}\), creating a puff. But gating of \(\hbox {IP}_3\)Rs depends biphasically on [\(\hbox {Ca}^{2+}\)] at intermediate [\(\hbox {IP}_3\)]. Up to a few \(\upmu \hbox {M}\), the open probability increases with increasing [\(\hbox {Ca}^{2+}\)], while even higher [\(\hbox {Ca}^{2+}\)] inhibits the \(\hbox {IP}_3\)Rs, resulting in a bell-shaped open probability curve [8]. [\(\hbox {Ca}^{2+}\)] around or in clusters rises quickly during puffs, easily reaching tens of \(\upmu \hbox {M}\) or more, Fig. 2 [6]. These are concentrations in the inhibitory regime (see the microdomains discussed below), such that \(\hbox {Ca}^{2+}\) release also has a fast negative feedback component on clusters.
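A minimal way to visualize such a bell-shaped dependence is the product of an activating and an inhibiting Hill function. The sketch below is a generic phenomenological illustration with hypothetical dissociation constants, not the measured \(\hbox {IP}_3\)R curve and not the Siekmann model discussed later:

```python
import numpy as np

def p_open(ca, k_act=0.3, k_inh=30.0, n=2, m=2):
    """Bell-shaped open probability: Hill-type activation times Hill-type
    inhibition. All parameter values (in uM) are hypothetical placeholders."""
    activation = ca**n / (k_act**n + ca**n)
    inhibition = k_inh**m / (k_inh**m + ca**m)
    return activation * inhibition

ca = np.logspace(-2, 3, 400)          # cytosolic [Ca2+] in uM
po = p_open(ca)
print(f"peak open probability {po.max():.2f} at [Ca2+] = {ca[po.argmax()]:.1f} uM")
```

With these placeholder constants, the curve peaks at a few \(\upmu \hbox {M}\) and has fallen off substantially in the tens of \(\upmu \hbox {M}\) range reached in microdomains during puffs.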

Recent experiments on the puff behavior of all three isoforms of the \(\hbox {IP}_3\)R shed light on their local dynamics in the form of puff frequency, puff amplitudes, open channels per puff, rise and fall times, and duration [59]. The average puff duration (full duration at half-maximum, FDHM) is about 41 ms ± 3 ms for wild-type \(\hbox {IP}_3\)Rs. While the opening of clusters is explained by CICR within clusters, possible closing mechanisms of single \(\hbox {IP}_3\)Rs and clusters are still being discussed. Among possible puff termination mechanisms are stochastic attrition (there is always a probability for many channels to spontaneously close together within a short time window), local ER depletion (the ER locally runs out of \(\hbox {Ca}^{2+}\) and cannot support further cluster \(\hbox {Ca}^{2+}\) efflux), luminal activation (regulation by \(\hbox {Ca}^{2+}\) or other molecular species on the ER side of the \(\hbox {IP}_3\)R), or coupled gating (a single closing may trigger closing of the cluster due to coupled channel dynamics) [90]. High [\(\hbox {Ca}^{2+}\)] together with the biphasic \(\hbox {Ca}^{2+}\) dependency is also assumed to be at least a contributing factor to puff termination [106].

Various single channel behaviours in the course of a puff have been measured in experiments. While a steep rise of the fluorescence signal measured directly at puff sites, reflecting the quick opening of coupled channels, is common, the termination of puffs can be realized in numerous ways. Smooth decay, step-wise decay, or closing with infrequent re-openings or bursting re-openings are among the most common channel closing scenarios or puff shapes, respectively [106]. On some occasions, multiple \(\hbox {IP}_3\)Rs within one cluster close in near-synchrony, yielding the rarely occurring block puff [106]. This occurs more often than expected for sets of independently closing channels (stochastic attrition). Observation of neighbouring open \(\hbox {IP}_3\)Rs within clusters with either one or two open channels confirmed deviations from the behavior of pairs of independent channels. This overall behaviour cannot be explained by fast inhibition by high \(\hbox {Ca}^{2+}\) (biphasic open probability) or local ER depletion, suggesting an important but as yet unknown channel coupling mechanism leading to coupled gating that renders puff duration and channel-coupled puff termination robust.

While regulation of \(\hbox {IP}_3\)Rs by luminal \(\hbox {Ca}^{2+}\) content or other molecules inside the ER has been a seemingly intractable question for decades, recent experimental studies have found further support for the hypothesis of luminal control. \(\hbox {IP}_3\)Rs have been reported to be regulated by luminal [\(\hbox {Ca}^{2+}\)]\(_\mathrm{ER}\) and, likely, by the widely expressed luminal \(\hbox {Ca}^{2+}\) buffer protein annexin A1 (ANXA1), which together inhibit \(\hbox {IP}_3\)Rs at high [\(\hbox {Ca}^{2+}\)]\(_\mathrm{ER}\) [102].

New findings suggest that \(\hbox {IP}_3\)Rs have two distinct modes of \(\hbox {Ca}^{2+}\) release: a punctate liberation mode during the rise of the \(\hbox {Ca}^{2+}\) transient, followed by a diffuse mode that sustains global \(\hbox {Ca}^{2+}\) release. The punctate mode is terminated before the peak is reached, likely through a yet unknown mechanism regulated by [\(\hbox {Ca}^{2+}\)]\(_\mathrm{ER}\). These two modes could also target different effector species, regulating different downstream elements of the \(\hbox {IP}_3\)-induced \(\hbox {Ca}^{2+}\) signalling pathway [60].

Fig. 2

\(\hbox {Ca}^{2+}\) concentration profile around an \(\hbox {IP}_3\)R (cluster). While [\(\hbox {Ca}^{2+}\)] peaks at the cluster located at r=0 \(\upmu \hbox {m}\), [\(\hbox {Ca}^{2+}\)] at a distance from the cluster is one to two orders of magnitude smaller. Since \(\hbox {IP}_3\)R dynamics are governed by the [\(\hbox {Ca}^{2+}\)] in their very close proximity, this makes meaningful cell-wide spatial averaging difficult at best [6]

2.1.2 The dynamic regime of the local dynamics

Intracellular \(\hbox {Ca}^{2+}\) dynamics is a reaction-diffusion system. The reactions comprise release of \(\hbox {Ca}^{2+}\) from the ER, pumping by SERCAs, buffering and the binding/unbinding with other \(\hbox {Ca}^{2+}\) binding sites. The reaction dynamics is local, and diffusion provides the spatial coupling. The dynamic regime (excitable, bistable or oscillatory) of a reaction-diffusion system is dominated by the dynamic regime of the local dynamics. From a structural point of view, the local dynamics are the cluster dynamics.

[\(\hbox {Ca}^{2+}\)] profiles in the vicinity of single \(\hbox {IP}_3\)Rs and within clusters cannot be measured directly, but they can be simulated [96] or calculated analytically in good approximation [6, 96]. The [\(\hbox {Ca}^{2+}\)] at the cluster locations is about one to two orders of magnitude larger than spatially averaged concentration values, and it decreases steeply with increasing distance from the channel, Fig. 2. This leads to microdomains of large [\(\hbox {Ca}^{2+}\)] at clusters with open channels, which are only weakly coupled to neighboring clusters because of the steep concentration gradients. It is the local \(\hbox {Ca}^{2+}\) dynamics that affects cluster dynamics the most.
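The steepness of such profiles can be made explicit with the classic linearized (excess buffer) approximation for a point source, a textbook estimate rather than the detailed calculations of [6, 96]: for a single unsaturated buffer with binding rate \(k_\mathrm{on} B\), free \(\hbox {Ca}^{2+}\) diffusion coefficient D and source strength \(\sigma \) (all symbols introduced here only for this illustration), the stationary profile reads

$$\begin{aligned} c(r) \approx c_\mathrm{rest} + \frac{\sigma }{4 \pi D r} \ \mathrm {e}^{-r/\lambda }, \qquad \lambda = \sqrt{\frac{D}{k_\mathrm{on} B}} , \end{aligned}$$

i.e., [\(\hbox {Ca}^{2+}\)] decays like 1/r close to the source and exponentially on the length scale \(\lambda \), consistent with the short diffusion length of free \(\hbox {Ca}^{2+}\) mentioned above.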

The \(\hbox {Ca}^{2+}\) concentration at closed clusters is the resting concentration in the range of \(\le 100\) nM. Concentrations at open channels are >20 \(\upmu \hbox {M}\) [6, 96]. The dynamic range of the regulatory binding sites for both the positive and negative feedback of \(\hbox {Ca}^{2+}\) on the open probability extends from a few hundred nM to micromolar values below 10 \(\upmu \hbox {M}\) [43, 49, 95]. Oscillatory dynamics require concentration values in this dynamic range. However, with these large concentration changes, the system is essentially never in this dynamic range, and the regime of the deterministic limit of the cluster dynamics is either excitable or bistable (except in tiny parameter ranges) [97].

This conclusion is supported by an investigation into the time scales on the cluster level. Typical interpuff intervals last a few seconds [25, 26, 53, 99], while interspike intervals range from about 20 s to a few minutes. If the local dynamics were oscillatory and caused the sequence of spikes, the time scale of the ISI should be detectable as a temporal modulation of properties of the puff sequence at a given site. No such modulation of puff sequences on the ISI time scale has been detected, and no evidence of an oscillatory regime of the local dynamics has been observed [99]. The ISI time scale has only been observed on the cell level.

Replacing local \(\hbox {Ca}^{2+}\) concentrations with globally averaged [\(\hbox {Ca}^{2+}\)] values as the input for \(\hbox {IP}_3\)Rs, even though their values differ by orders of magnitude, leads to misleading \(\hbox {IP}_3\)R and \(\hbox {Ca}^{2+}\) dynamics [97]. Averaged global concentrations during spikes are in the dynamic range of the \(\hbox {IP}_3\)R regulatory binding sites, thus allowing for cluster-cluster coupling. Using them in mathematical models as the \(\hbox {Ca}^{2+}\) concentration experienced by the \(\hbox {IP}_3\)R entails oscillatory dynamics. However, that dynamic regime shrinks to negligible parameter ranges, high frequencies and tiny global amplitudes with realistic local concentrations [97], and it could not be verified by local measurements [99]. Thus, the \(\hbox {IP}_3\)R \(\hbox {Ca}^{2+}\) dissociation constants guarantee spatial coupling but do not allow for oscillatory local dynamics.

Fig. 3

A–D Fluorescence signals of \(\hbox {Ca}^{2+}\) spike trains (upper panels) and extracted ISI sequences (lower panels) from four different cell types. ISIs are irregular. F–G The standard deviation of ISIs against their average; each dot represents the data of one experiment, i.e., the measured spike train of one cell. The wide spread indicates large cell-to-cell variability, but a functional \(\sigma \)-\({T}_{\mathrm{av}}\) moment relation is visible as a linear fit. Plots from [81]

2.2 The cellular global dynamics of \(\hbox {Ca}^{2+}\) signalling

2.2.1 Interspike intervals of global spikes are random

Once a cluster of \(\hbox {IP}_3\)Rs opens to create a puff, the released \(\hbox {Ca}^{2+}\) diffuses within the cell. If it reaches neighbouring clusters, there is a probability of triggering follow-up puffs. This can become a self-amplifying process, until a critical number of open clusters is reached, resulting in a cell-wide \(\hbox {Ca}^{2+}\) release event, called a \(\hbox {Ca}^{2+}\) spike [61]. These global spikes can be measured similarly to local puffs and can be described with the same quantities, like interspike interval (ISI), duration, or amplitude. Measuring a sequence of \(\hbox {Ca}^{2+}\) spikes over a few minutes to hours yields a spike train from which we obtain the sequence of interspike intervals, Fig. 3. Just like blips and puffs, spike times are inherently random, and the ISI as a property of subsequent spike times is random as well. A global \(\hbox {Ca}^{2+}\) spike has an inhibitory effect on subsequent puff events. The recovery from that inhibition takes tens of seconds, i.e., it is a negative feedback on long time scales. It creates an absolute refractory period \({T}_{\mathrm{min}}\) during which no puffs occur.

We can quantify how random spike timing of a given spike train is by the relation between the standard deviation \(\sigma \) of ISIs and the average ISI, \({T}_{\mathrm{av}}\). We see in Fig. 3 that they are linearly related like

$$\begin{aligned} \sigma =\alpha \left( {T}_\mathrm{av} - {T}_\mathrm{min} \right) . \end{aligned}$$
(1)

Such a linear relation has been found for all cases investigated (8 cell types and 10 conditions [17, 28, 31, 81, 100], see also [68]). The coefficient of variation of the stochastic part \({T}_{\mathrm{av}}-{T}_{\mathrm{min}}\) of the ISIs is CV \(=\sigma /({T}_\mathrm{av} - {T}_\mathrm{min})=\alpha \). The larger the CV, the more stochastic is the output of a given process. A CV value equal to 1 indicates a Poisson process, which is maximally random. A vanishing CV indicates a deterministic process.
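Operationally, \(\alpha \) and \({T}_{\mathrm{min}}\) can be read off from population data as slope and axis intercept of the linear relation Eq. (1). A minimal sketch of this fit follows; the numbers are invented purely for illustration, one \(({T}_\mathrm{av},\sigma )\) pair per cell:

```python
import numpy as np

# Hypothetical population data: one (T_av, sigma) pair per measured cell, in seconds
t_av  = np.array([45.0, 80.0, 120.0, 150.0, 210.0, 300.0])
sigma = np.array([12.0, 28.0,  45.0,  55.0,  80.0, 115.0])

# Fit sigma = alpha*(T_av - T_min), i.e., sigma = alpha*T_av - alpha*T_min
slope, intercept = np.polyfit(t_av, sigma, 1)
alpha, t_min = slope, -intercept / slope
print(f"alpha = {alpha:.2f}  (CV of the stochastic ISI part),  T_min = {t_min:.1f} s")
```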

We determined the CV, i.e., \(\alpha \), both as the slope of the linear approximation to population data as in Fig. 3 and from two experimental conditions within individual cells. We found both values to agree [80, 82], turning \(\alpha \) into an observable not subject to cell variability (which is different from the results with puff sites in this respect [99]). Additionally, the value of \(\alpha \) turned out to be robust against changes of buffering conditions [81], stimulation strength and three pharmacological perturbations of the \(\hbox {Ca}^{2+}\) signalling system. That surprising robustness turns Eq. (1) into one of the equations defining \(\hbox {Ca}^{2+}\) signalling from the perspective of quantitative approaches. The value of \(\alpha \) is set by the time scale of recovery from the global negative feedback terminating the release spikes [98].

2.2.2 The relation between average interspike interval and stimulation

Cells are stimulated by extracellular agonists binding to receptors in the cell membrane. The strength of stimulation controls the intracellular concentration of \(\hbox {IP}_3\). In general, we observe only puffs at low stimulation, spikes at intermediate agonist concentration and, in some cell types and pathways, a maintained high \(\hbox {Ca}^{2+}\) concentration at very strong stimulation. Within the spiking regime, cells respond to an increase of agonist concentration with a decrease of the average ISI, \({T}_{\mathrm{av}}\) [36, 100]. It was found for all pathways tested that the population-averaged response could be well fitted by a single exponential function of the strength of the stimulus, given by the extracellular agonist concentration [A], Fig. 4, that is

$$\begin{aligned} {T}_\mathrm{pop} = {T}_\mathrm{ref} \ \mathrm {e}^{-\beta ([A]-[A_\mathrm{ref}])} + {T}_\mathrm{min}. \end{aligned}$$
(2)
Fig. 4

Relation between ISI population average \(T_\mathrm{pop}\) and agonist concentration stimulating the cells for three cell types and four pathways. The lines are fits to Eq. (2). Figure from [100]

Here \({T}_{\mathrm{min}}\) is the smallest ISI reached at strong stimulation, i.e., the absolute refractory period plus spike duration, and \({T}_{\mathrm{ref}}\) is the stochastic part of the ISI at a reference agonist concentration \([A_\mathrm{ref}]\). \(\beta \) is a constant for a given cell type and signalling pathway. We also found \(\beta \) to be the same for all individual cells. Hence, it is another observable defining \(\hbox {Ca}^{2+}\) signalling from the perspective of quantitative approaches. A third observable not subject to cell variability is \({T}_{\mathrm{min}}\).

All the cell-to-cell variability is represented by \({T}_\mathrm{ref} \mathrm {e}^{ \beta [A_\mathrm{ref}]}\). \([A_\mathrm{ref}]\) determines the position of the concentration response relation on the [A]-axis. It can be chosen to be the agonist concentration at the onset of spiking.

The exponential dependency on stimulation in Eq. (2) follows from paired stimuli experiments, i.e., it runs much deeper than a simple direct fit of an ansatz to experimental data. The change \(\varDelta {T_\mathrm{av}}\) of the average stochastic part of the ISI due to an agonist concentration step is proportional to the stochastic part \({T_\mathrm{av1}}-{ T_\mathrm{min}}\) of the average ISI \({T}_{\mathrm{av1}}\) at the lower agonist concentration [100]:

$$\begin{aligned} \varDelta {T_\mathrm{av}} = \beta _s\left( {T_\mathrm{av1}}-{ T_\mathrm{min}}\right) . \end{aligned}$$
(3)

In general, \(\beta _s\) depends on the agonist concentration [A] and the concentration step \(\varDelta [A]\). Experiments showed \(\partial \beta _s/\partial [A]=0\) and \(\partial \beta _s/\partial \varDelta [A]=\mathrm{const}=\beta \), which entails the exponential relation Eq. (2) with the same \(\beta \) for all individual cells [100].
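To make the connection explicit (a short consistency check rather than the original derivation in [100]): in the limit of small concentration steps, Eq. (3) turns into a differential relation for the stochastic part of the ISI, written here with the sign convention that ISIs shorten with increasing [A],

$$\begin{aligned} \frac{\mathrm {d}\left( {T}_\mathrm{av}-{T}_\mathrm{min}\right) }{\mathrm {d}[A]} = -\beta \left( {T}_\mathrm{av}-{T}_\mathrm{min}\right) \;\; \Rightarrow \;\; {T}_\mathrm{av}([A])-{T}_\mathrm{min} = \left( {T}_\mathrm{av}([A_\mathrm{ref}])-{T}_\mathrm{min}\right) \mathrm {e}^{-\beta ([A]-[A_\mathrm{ref}])} , \end{aligned}$$

which is Eq. (2) for the population average.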

2.2.3 Long time scales from slow global processes and small spike probabilities

With some cell types, individual cells or experimental situations, \({T}_{\mathrm{av}}\) is much longer than any time scale that is relevant for the state dynamics of clusters or even global cellular dynamics. From a dynamical systems point of view applying to deterministic models, this should not be possible, since each time scale requires a process setting it. However, long time scales might result simply from small probabilities and not from a slow process. Decay of a single radioactive atom for example happens at a random moment in time. If the atom is rather stable, decay is unlikely and it takes a long time on average to happen. But there is no process leading to the decay event. The state of the atom is stationary up to the time of the event. That may also apply to spike generation with the cell in the role of the atom and generation of a spike corresponding to the decay event. If the spike generation probability is small, we may observe long average ISIs and the state of the cell before the spikes is essentially stationary. There is no process setting the long time scale in that case.

Alternatively, there might be a slow process setting a long average ISI. The recovery from the negative feedback, which terminates spikes, is a prime candidate for such a slow process. The negative feedback might for example decrease [\(\hbox {IP}_3\)] [5], which then needs to recover before the next spike can occur. The inhibitory effect is a substantial decrease of the puff probability, which entails an absolute refractory period.

We can use the CV, or \(\alpha \), to assess the relative weight of small probability vs slow process in setting \({T}_{\mathrm{av}}\). If the CV is equal to 1, the ISIs follow an exponential distribution and are maximally random. There is no slow process setting the long time scale in that case, very similar to the radioactive decay of an atom. A CV of 0 would indicate a purely deterministic and noise-free process with vanishing deviation. If the CV value is between 0 (deterministic) and 1 (pure randomness), a slow process changes the spike probability without rendering spike generation deterministic. Note that the average ISI is not simply the inverse of the recovery rate in that regime [39]. Measured CVs are between 0.2 (e.g., stimulated hepatocytes) and 0.98 (e.g., spontaneous spiking in microglia).
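A simple limiting case makes CV = 1 concrete (an illustration, not a result of the cited studies): if, after the absolute refractory period, spikes were generated with a constant probability per unit time \(\lambda \), the ISI density would be a shifted exponential,

$$\begin{aligned} p(T) = \lambda \ \mathrm {e}^{-\lambda (T-{T}_\mathrm{min})}, \quad T \ge {T}_\mathrm{min}, \qquad {T}_\mathrm{av} = {T}_\mathrm{min} + \lambda ^{-1}, \quad \sigma = \lambda ^{-1}, \quad \frac{\sigma }{{T}_\mathrm{av}-{T}_\mathrm{min}} = 1 , \end{aligned}$$

i.e., the stochastic part of the ISI is maximally random, corresponding to \(\alpha = 1\) in Eq. (1).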

3 Open problems

We consider as open problems what is lacking for a theory able to derive the cellular signals from molecular properties. Here, the large cell-to-cell variability defines what can meaningfully be described by theory. An intuitive explanation for cell variability, among many other possibilities, might be differences in the relative cluster positions. However, this question, too, has not been exhausted yet.

The puff property distributions for amplitude, duration and interpuff interval (IPI) have been simulated or described by ansatzes by a variety of groups [14,15,16], but they have not been derived analytically yet. We cannot expect analytic expressions for realistic channel models (see below), but the distributions have not been written down even for strongly simplified models. Lock et al. recently demonstrated that all three \(\hbox {IP}_3\)R isoforms generate similar puff property distributions sampled from many puff sites [59]. Hence, the distributions cannot depend on detailed molecular properties, and a simplified approach as common ground would be a sensible starting point providing conceptual understanding.

The situation with respect to global signals is similar. The interspike interval distribution for ISI sequences normalized by the average has been measured for HEK cells and spontaneously spiking astrocytes [84] and simulated [37], but it has not been derived yet. The robustness of the coefficient of variation CV has been very well confirmed experimentally [81, 98, 100] and has been simulated [72, 82, 83, 98], but it has not been derived analytically either.

The concentration response curve of the average ISI shows an exponential dependency on the extracellular agonist concentration stimulating the cell [100]. The agonist sensitivity in the exponent is cell type and pathway specific [100]. The pre-factor of the exponential picks up all the cell-to-cell variability. This detailed knowledge of the concentration response curve also awaits its theoretical explanation.

Open problems with respect to methods mainly concern the role of fluctuations. The large values of the coefficients of variation on all structural levels demonstrate that fluctuations are not negligible. Their potential role becomes more tangible by considering intracellular \(\hbox {Ca}^{2+}\) signalling as a deterministic reaction-diffusion system. The dynamic regime is then fixed by the local dynamics. We have no experimental evidence for oscillatory local dynamics of intracellular \(\hbox {Ca}^{2+}\) signalling [99], and the whole literature on puffs suggests that the local dynamics exhibit only time scales of a few seconds. The experimental results are compatible with an excitable regime of the local dynamics. Consequently, spikes are due to fluctuations. Concepts for carrying fluctuations along in systems of ordinary differential equations (ODEs) exist [109], but they have not been applied to the system yet. We will also discuss them below.

4 Modelling concepts from molecular properties to global dynamics including fluctuations and noise

The essence of the \(\hbox {Ca}^{2+}\) signalling system is defined by its general properties, which are also the basic requirements models should meet:

  • The sequence of dynamic regimes with increasing stimulation: puffs, spikes, permanently elevated \(\hbox {Ca}^{2+}\). Depending on the pathway, a bursting regime may follow or replace the spiking regime.

  • The dynamics of individual clusters are not oscillatory on the time scale of ISI.

  • Cell-to-cell variability of average ISI is large.

  • The spiking regime obeys Eqs. (1), (2) and (3) with \({T}_{\mathrm{min}}\), \(\alpha \) and \(\beta \) being cell type and pathway specific but not subjected to cell variability.

  • ISIs depend sensitively on parameters of spatial coupling.

These general properties apply to all cells. Cells exhibit variability with respect to concentrations of the functional proteins, geometry of clusters and the cell-wide cluster array, ER luminal \(\hbox {Ca}^{2+}\) content etc. The general properties of \(\hbox {Ca}^{2+}\) signalling cannot depend on the details of these highly variable cellular characteristics, which calls for models as simple as possible but meeting the above requirements.

Puff models should start from the molecular properties of the \(\hbox {IP}_3\)R. Its random state changes are the source of noise. We will use one of the most recent models of the \(\hbox {IP}_3\)R to describe concepts, the Siekmann model [79], which is a Markov model based on single-channel data. We will discuss in that context how fluctuations might enter ODE-focused modelling approaches.

Puff property distributions form the basis for modelling of global dynamics. We will discuss concepts for calculating moments of ISI distributions. Most current models adapt molecular rate constants to global time scales to reproduce measured average ISI values. However, the time scales on the global level originate from global processes. We will sketch how to introduce these global processes into the coupling between the puff dynamics and the global dynamics, which allows for using realistic molecular parameters.

Fig. 5

State scheme of the Siekmann \(\hbox {IP}_3\)R model. \(C_i\) represent closed and \(O_i\) open states; q’s are transition rates connecting two adjacent states and indicating how fast an \(\hbox {IP}_3\)R switches between the two states. The entire structure comprises two parts: one is the high-activity part or drive mode, containing three closed states \(C_1, C_2, C_3\), and one open state \(O_6\). The other is the low-activity part or park mode, which includes one closed state \(C_4\) and one open state \(O_5\). Only the rates connecting these two modes are \(\hbox {Ca}^{2+}\) dependent [40, 79]

4.1 \(\hbox {IP}_3\)R clusters as ensembles of receptors described by the Siekmann model

Several channels (up to fifteen) form a cluster. The opening of one receptor channel within a cluster ('blip') increases the open probability of the other channels in the cluster due to strong channel coupling by \(\hbox {Ca}^{2+}\) diffusion, which may cause a puff. We consider a cluster consisting of a stochastic ensemble of N channels.

We denote the number of channels in state i according to Fig. 5 as \(n_i \ge 0\) with \(N=\sum _{i} n_i\), which effectively removes one degree of freedom. A puff occurs if some critical number of channels is in the open state, motivating the study of the expectation value \(\langle n_i \rangle \) of the number of channels in state i.

The total change in probability for a set \(\{n_i\} = \{ n_1, n_2, ..., n_6 \} = \{ n_1, n_2, ..., n_5, N-n_1-\cdots -n_5 \}\) is given by the probability fluxes for each single channel transition from state j to state k, resulting in a change of \(\{n_i\}\) like

$$\begin{aligned} n_j \rightarrow n_j -1 \quad \cap \quad n_k \rightarrow n_k +1. \end{aligned}$$

We write the Master equation for the probability \(P(\{n_i\},t) = P(n_1,n_2, ..., n_6,t) \) to find the cluster in configuration \(\{n_i\}\) at time t as

$$\begin{aligned}&\dot{P}(\{n_i\},t) = q_{12} (n_1+1) \ P(n_1+1, n_2-1, ..., t) \nonumber \\&\quad + q_{21} (n_2+1) \ P(n_1-1, n_2+1, ..., t) \nonumber \\&\quad + q_{62} (n_6+1) \ P(..., n_2-1, ..., n_6+1, t) \nonumber \\&\quad + q_{26} (n_2+1) \ P(..., n_2+1, ..., n_6-1, t) \nonumber \\&\quad + q_{32} (n_3+1) \ P(..., n_2-1, n_3+1, ..., t) \nonumber \\&\quad + q_{23} (n_2+1) \ P(..., n_2+1, n_3-1, ..., t) \nonumber \\&\quad + q_{42} (n_4+1) \ P(...,n_2-1, ..., n_4+1, ..., t) \nonumber \\&\quad + q_{24} (n_2+1) \ P(...,n_2+1, ..., n_4-1, ..., t) \nonumber \\&\quad + q_{45} (n_4+1) \ P(..., n_4+1, n_5-1, ..., t) \nonumber \\&\quad + q_{54} (n_5+1) \ P(..., n_4-1, n_5+1, ..., t) \nonumber \\&\quad - \left( q_{12} \ n_1 + q_{21} \ n_2 +q_{62} \ n_6 + q_{26} \ n_2 \right. \nonumber \\&\quad + q_{32} \ n_3 + q_{23} \ n_2 + \ q_{42}(n_5, n_6) \ n_4 \nonumber \\&\quad \left. + q_{24}(n_5, n_6) \ n_2 + q_{45} \ n_4 + q_{54} \ n_5 \right) \ P(\{n_i\},t) \end{aligned}$$
(4)

Coupling by diffusion within the cluster happens on a time scale below 1 ms, i.e., it is fast compared to the time scale of a few ms of the \(\hbox {Ca}^{2+}\)-dependent state changes of the Siekmann model. Hence, the concentration profile reaches its stationary state on the time scale of the channel state dynamics, and we can assume the local \(\hbox {Ca}^{2+}\) concentration to depend on the number of open channels \(n_5\) and \(n_6\) but not additionally on time. That renders the \(\hbox {Ca}^{2+}\)-dependent rates functions of \(n_5\) and \(n_6\): \(q_{42}(n_5, n_6)\), \(q_{24}(n_5, n_6)\). These rates can then not be taken out of the averages when calculating the moment dynamics.
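A cluster defined by Eq. (4) can be simulated directly with a stochastic simulation (Gillespie) algorithm. The sketch below uses the state scheme of Fig. 5, but with entirely hypothetical rate constants and hypothetical functional forms for \(q_{42}\) and \(q_{24}\); it is meant to illustrate the bookkeeping, not to reproduce the fitted Siekmann rates:

```python
import numpy as np

rng = np.random.default_rng(1)

STATES = ["C1", "C2", "C3", "C4", "O5", "O6"]
N = 10                                    # channels per cluster

# Ca2+-independent transition rates (1/s); all values are hypothetical placeholders.
q_const = {("C1", "C2"): 1.0,   ("C2", "C1"): 2.0,
           ("C2", "C3"): 1.5,   ("C3", "C2"): 3.0,
           ("C2", "O6"): 150.0, ("O6", "C2"): 100.0,
           ("C4", "O5"): 0.3,   ("O5", "C4"): 3.0}

def q42(n_open):   # park -> drive, increases with open channels (CICR); hypothetical form
    return 0.1 + 2.0 * n_open

def q24(n_open):   # drive -> park; hypothetical weak dependence on open channels
    return 0.05 + 0.1 * n_open

def gillespie(t_end=10.0):
    """Exact stochastic simulation of the cluster; returns a (time, open channels) trace."""
    n = dict.fromkeys(STATES, 0)
    n["C4"] = N                           # start with all channels in the park mode
    t, trace = 0.0, []
    while t < t_end:
        n_open = n["O5"] + n["O6"]
        rates = {edge: r * n[edge[0]] for edge, r in q_const.items()}
        rates[("C4", "C2")] = q42(n_open) * n["C4"]
        rates[("C2", "C4")] = q24(n_open) * n["C2"]
        edges = list(rates)
        weights = np.array([rates[e] for e in edges])
        total = weights.sum()
        t += rng.exponential(1.0 / total)
        src, dst = edges[rng.choice(len(edges), p=weights / total)]
        n[src] -= 1
        n[dst] += 1
        trace.append((t, n["O5"] + n["O6"]))
    return trace

trace = gillespie()
print("maximum number of simultaneously open channels:", max(o for _, o in trace))
```

Replacing the placeholder rates by the measured Siekmann rates, and the placeholder \(q_{42}\), \(q_{24}\) by the \(\hbox {Ca}^{2+}\)-dependent rates evaluated at the stationary concentration for \(n_5+n_6\) open channels, turns this sketch into a cluster model in the spirit described above.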

All existing ODE models for the \(\hbox {IP}_3\)R state dynamics are rate equations for the first moment of the state probabilities [33, 34]. They are derived from

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}t} \ \langle n_i \rangle = \sum _{\{n_i\}} n_i \dot{P}(\{n_i\}, t). \end{aligned}$$
(5)

Only edges connected to state i, as shown in Fig. 5, contribute terms to the moment dynamics of \(n_i\). We find

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}t} \ \langle n_1 \rangle =&- q_{12} \ \langle n_1 \rangle + q_{21} \ \langle n_2 \rangle \nonumber \\ \frac{\mathrm {d}}{\mathrm {d}t} \ \langle n_2 \rangle =&- (q_{21} + q_{26} + q_{23}) \ \langle n_2 \rangle + q_{12} \ \langle n_1 \rangle \nonumber \\&+ q_{62} \ \langle n_6 \rangle + q_{32} \ \langle n_3 \rangle - \langle n_2 q_{24} \rangle + \langle n_4 q_{42} \rangle \nonumber \\ \frac{\mathrm {d}}{\mathrm {d}t} \ \langle n_3 \rangle =&- q_{32} \ \langle n_3 \rangle + q_{23} \ \langle n_2 \rangle \nonumber \\ \frac{\mathrm {d}}{\mathrm {d}t} \ \langle n_4 \rangle =&- q_{45} \ \langle n_4 \rangle + q_{54} \ \langle n_5 \rangle - \langle n_4 q_{42} \rangle + \langle n_2 q_{24} \rangle \nonumber \\ \frac{\mathrm {d}}{\mathrm {d}t} \ \langle n_5 \rangle =&- q_{54} \ \langle n_5 \rangle + q_{45} \ \langle n_4 \rangle \nonumber \\ \frac{\mathrm {d}}{\mathrm {d}t} \ \langle n_6 \rangle =&- q_{62} \ \langle n_6 \rangle + q_{26} \ \langle n_2 \rangle . \end{aligned}$$
(6)

With the state number dependence of the ligand-dependent rates \(q_{42}(n_5, n_6)\) and \(q_{24}(n_5, n_6)\) showing up in the dynamics for \(\langle n_2 \rangle \) and \(\langle n_4 \rangle \), we see that second moments contribute to the dynamics of the first moments. Hence, we need to determine their dynamics as well. Using the master equation Eq. (4) for both higher moments we find

$$\begin{aligned}&\frac{\mathrm {d}}{\mathrm {d}t} \ \langle n_4 \ q_{42} \rangle = -\langle n_4 \ q_{42}^2 \rangle + \langle n_2 \ q_{24} \ q_{42} \rangle \nonumber \\&\quad +q_{62} \left( -\langle n_4 \ n_6 \ q_{42} \rangle + \langle n_4 \ (n_6 -1)\ q_{42}(n_6-1) \rangle \right. \nonumber \\&\quad \left. + \langle n_4 \ q_{42}(n_6-1) \rangle \right) \nonumber \\&\quad +q_{26} \left( - \langle n_2 \ n_4 \ q_{42} \rangle + \langle n_2 \ n_4 \ q_{42}(n_6-1) \rangle \right) \nonumber \\&\quad + q_{45} \left( - \langle n_4^2 \ q_{42} \rangle + \langle n_4^2 \ q_{42}(n_5+1) \rangle \right. \nonumber \\&\quad \left. - \langle n_4 \ q_{42}(n_5+1) \rangle \right) \nonumber \\&\quad + q_{54} \left( - \langle n_4 \ n_5 \ q_{42} \rangle + \langle n_4 \ (n_5-1) \ q_{42}(n_5-1) \rangle \right. \nonumber \\&\quad + \langle n_4 \ q_{42}(n_5-1) \rangle \nonumber \\&\quad \left. - \langle (n_5-1) \ q_{42}(n_5-1) \rangle - \langle q_{42}(n_5-1) \rangle \right) \end{aligned}$$
(7)
$$\begin{aligned}&\frac{\mathrm {d}}{\mathrm {d}t} \ \langle n_2 \ q_{24} \rangle = - q_{21} \langle n_2 \ q_{24} \rangle + q_{12} \langle n_1 \ q_{24} \rangle \nonumber \\&\quad + q_{62} \left( - \langle n_2 \ n_6 \ q_{24} \rangle + \langle n_2 \ (n_6-1) \ q_{24}(n_6-1) \rangle \right. \nonumber \\&\quad +\langle n_2 \ q_{24}(n_6-1) \rangle \nonumber \\&\quad \left. + \langle (n_6-1) \ q_{24}(n_6-1) \rangle + \langle q_{24}(n_6-1) \rangle \right) \nonumber \\&\quad + q_{26} \left( - \langle n_2^2 \ q_{24} \rangle + \langle n_2^2 \ q_{24}(n_6+1) \rangle \right. \nonumber \\&\quad \left. - \langle (n_6+1) \ q_{24}(n_6+1) \rangle + \langle q_{24}(n_6+1) \rangle \right) \nonumber \\&\quad + q_{32} \langle n_3 \ q_{24} \rangle - q_{23} \langle n_2 \ q_{24} \rangle \nonumber \\&\quad + \langle n_4 \ q_{24} \ q_{42} \rangle - \langle n_2 \ q_{24}^2 \rangle \nonumber \\&\quad +q_{45} \left( -\langle n_2 \ n_4 \ q_{24} \rangle + \langle n_2 \ n_4 \ q_{24}(n_5+1) \rangle \right) \nonumber \\&\quad + q_{54} \left( - \langle n_2 \ n_5 \ q_{24} \rangle + \langle n_2 \ (n_5-1) \ q_{24}(n_5-1) \rangle \right. \nonumber \\&\quad \left. + \langle n_2 \ q_{24}(n_5-1) \rangle \right) \end{aligned}$$
(8)

The occurrence of third moments here illustrates the hierarchy of moment equations, where the first moment’s dynamics depend on a combination of first and second moments (\(n_2\) and \(n_4\) dynamics in Eq. (6)), while the second moments depend on the combination of second and third moments (Eq. (7)), and so on.

Measured CV values of IPIs above 0.4 strongly suggest that higher moments are not negligible. Hence, experiments do not tell us where we can cut off higher moments to obtain a finite number of ODEs. However, higher moments might destabilize stable stationary states of the first moments [109] and thus drive the concentration dynamics. Hence, they are worth studying. All existing ODE models of \(\hbox {IP}_3\)R state dynamics approximate higher moments by products of first moments and replace averages of functions by functions of averages, like \(\langle n_4 q_{42}(n_5,n_6) \rangle =\langle n_4 \rangle q_{42}(\langle n_5 \rangle ,\langle n_6 \rangle )\), \(\langle n_2 q_{24}(n_5,n_6) \rangle =\langle n_2 \rangle q_{24}(\langle n_5 \rangle ,\langle n_6 \rangle )\). That allows for cutting the hierarchy of moment equations after the first moments, but it neglects fluctuations.

We suggest studying whether higher moments may drive puff dynamics and where the hierarchy of moment equations can be cut off. This might lead to a set of ODEs as a \(\hbox {Ca}^{2+}\) signalling model with realistic parameter values, which would establish the ability to simulate time courses with all the computational comfort ODEs provide.
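For comparison with the stochastic sketch above, the mean-field closure just described turns Eq. (6) into a closed set of six ODEs. The sketch below integrates them with the same hypothetical rates (again illustrative only):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical rate constants (1/s), matching the Gillespie sketch above.
q12, q21 = 1.0, 2.0
q23, q32 = 1.5, 3.0
q26, q62 = 150.0, 100.0
q45, q54 = 0.3, 3.0
q42 = lambda n5, n6: 0.1 + 2.0 * (n5 + n6)
q24 = lambda n5, n6: 0.05 + 0.1 * (n5 + n6)

def rhs(t, n):
    n1, n2, n3, n4, n5, n6 = n
    # mean-field closure: <n4 q42(n5,n6)> ~ <n4> q42(<n5>,<n6>), etc.
    j42 = n4 * q42(n5, n6)
    j24 = n2 * q24(n5, n6)
    return [-q12*n1 + q21*n2,
            q12*n1 - (q21 + q23 + q26)*n2 + q32*n3 + q62*n6 - j24 + j42,
            q23*n2 - q32*n3,
            -q45*n4 + q54*n5 - j42 + j24,
            q45*n4 - q54*n5,
            q26*n2 - q62*n6]

N = 10
sol = solve_ivp(rhs, (0.0, 50.0), [0, 0, 0, N, 0, 0], max_step=0.01)
print(f"mean number of open channels at t = 50 s: {sol.y[4, -1] + sol.y[5, -1]:.2f}")
```

Comparing such first-moment trajectories with ensemble averages of the stochastic simulation shows directly where the closure, i.e., neglecting fluctuations, breaks down.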

5 Spike generation as first passage process with time dependent transition probabilities

In this section, we present a concept for calculating the moments of the ISI distribution, as it naturally corresponds to the random spike timing. We also suggest a method to incorporate global processes modulating the local dynamics.

The stochastic element of such a formulation of spike generation is a single cluster described by its IPI, puff duration and amplitude distributions. Such an approach dispenses with detailed intracluster concentration dynamics [98]. A model in the same spirit set up to simulate time courses has been developed by Calabrese et al. [14]. Clusters open sequentially. Once a critical number \(N_\mathrm{cr}\) of open clusters has been reached, the remaining ones will open with near certainty due to coupling by \(\hbox {Ca}^{2+}\) diffusion and the positive feedback by CICR. There are many (\({N}_{\mathrm{paths}}\)) paths from all clusters closed to this critical number (see Fig. 6). In this approach, the ISI distribution is the distribution of first passage times from 0 to \(N_\mathrm{cr}\) open clusters.

The negative feedback terminating spikes entails a very small cluster open probability just after a global spike, from which all clusters slowly recover. Thus, slow time scales from global processes enter as a slow time dependence of the cluster IPI, puff duration and amplitude distributions.

Fig. 6

Starting from zero puffs, any \(\hbox {IP}_3\)R cluster may start the first puff randomly, increasing \(\hbox {Ca}^{2+}\) locally (orange iso-surfaces). This means going from state 0 to state 1. From there on, there are many different ways to reach the critical nucleus with \(N_\mathrm{cr}\) open clusters at which a global \(\hbox {Ca}^{2+}\) spike occurs. Averaging over all \({N}_{\mathrm{paths}}\) paths from 0 to \(N_\mathrm{cr}\) open clusters leads to a linear chain of states indexed by the number of open clusters and connected by transitions characterized by waiting time distributions \(\varPsi _{i,i\pm 1}\)

5.1 Linear chain of states

We suggest simplifying the problem radically, in order to obtain a system that describes general properties without assumptions restricting the validity of the results too much, and to arrive at equations that may be analytically tractable. We obtain a linear chain of states by averaging over all possible paths from 0 to \(N_\mathrm{cr}\) open clusters. That chain of states is indexed by the number of open clusters. The states are connected either by transition rate functions \(f_{i,i\pm 1}(t,\gamma )\) or by waiting time distributions \(\varPsi _{i,i\pm 1}(t,t-t',\gamma )\), both of which result from puff properties.

The transition probabilities pick up slow time scales through their dependence on the time t since the last global spike. The probability to leave the initial state 0 and to move further up at early times after a global spike is very small, such that no puffs occur early. The system can then essentially only move to the left in the linear chain. Figure 8 shows exemplary waiting time distributions with recovery from global negative feedback for initial and later times.

Recovery from global negative feedback is described by a transient with rate \(\gamma \). For the case of transition rates, we have

$$\begin{aligned} f_{i,i + 1}(t) = \lambda _{i, i + 1} \ (1 - e^{- \gamma t}), \qquad f_{i,i - 1} = \lambda _{i, i - 1} . \end{aligned}$$

After about \(t_r=5 \gamma ^{-1}\) the inhibitory effect vanishes and the system has recovered globally, i.e., \(f_{i,i + 1}(t>t_r) \approx \lambda _{i, i + 1}\).

The description with transition rates uses asymptotically Markovian rates that are asymmetric in the sense that the recovery from global negative feedback only affects the up-rates. This is the case because the negative feedback influences the probability of clusters opening, not closing. In the case of waiting time distributions, they are the probability distributions from which a time value is drawn that determines when to jump to the next state, i.e., the time to the next opening or closing of a cluster. They depend on the time t since the last global spike, the relative time spent in a state \(\varDelta t = t-t'\), where \(t'\) is the time of entering the current state, and the current and target state i and \(i\pm 1\), respectively. The direction of the jump is drawn from the splitting probabilities, which are the relative weights, i.e., time integrals over \(\varPsi _{i,i\pm 1}\) at t, of the possible outgoing transitions, and which add up to one. This allows evaluating the system also with double-exponentially distributed waiting times. Reaching the critical nucleus \(N_\mathrm{cr}\) for the first time is equivalent to generating a cell-wide spike. We are, therefore, interested in the moments of the first passage time distribution to reach \(N_\mathrm{cr}\).

Experiments show that interpuff intervals often do not follow a single exponential distribution, but rather a double exponential in some cases, Fig. 7. This requires using waiting time distributions instead of rate functions, and the general master equation, which is formulated more generally in terms of probability fluxes.

Fig. 7

Interpuff interval distributions for SH-SY5Y and HEK cells at resting [\(\hbox {Ca}^{2+}\)] with double exponential fits [99]

Apart from choosing the state variables of the state scheme, the \(\varPsi \)’s or f’s contain all the physics. This includes effects of stimulation, and positive and negative feedback from CICR on short time scales, but also recovery from global negative feedback on long time scales.

Positive feedback by CICR means that the more clusters are open, the larger the open probability of the closed clusters. In mathematical terms, the \(\lambda _{i,i+1}\) are increasing functions of i. One possible choice is

$$\begin{aligned} \lambda _{i,i+1} = \lambda _0 ({[\hbox {IP}_{{3}}]} ) \ (N_T -i) (1 + i)^k, \end{aligned}$$
(9)

where stimulation strength is included via the \(\hbox {IP}_3\)-sensitive puff frequency \(\lambda _0\) [26], \(N_T\) is the total number of clusters, and \(k \in \{1,2,3\}\) is a model parameter quantifying the strength of the positive feedback. Left-going rates in their simplest form account for the number of open clusters like

$$\begin{aligned} \lambda _{i,i-1} = i\ \lambda _{-}. \end{aligned}$$
(10)

with a single cluster closing rate \(\lambda _{-}\). The first and second moments of the first passage time distribution from 0 to \(N_\mathrm{cr}\) open clusters can be calculated for very general \(f_{i,i\pm 1}\) or \(\varPsi _{i,i\pm 1}\) with the method described in Falcke and Friedhoff [39]. The only requirement is that the \(f_{i,i\pm 1}\) or \(\varPsi _{i,i\pm 1}\) can be Laplace transformed. This is possible for the \(\varPsi _{i,i\pm 1}\) despite their dependency on t and \(t-t'\) if the t-dependency is exponential like \(\varPsi _{i,i\pm 1} \propto \psi _{i,i\pm 1}(t-t')e^{-\gamma t}\) [39]. Therefore, this method provides a basis for investigating a large variety of positive feedbacks by the choice of i-dependency of rates and waiting times, puff duration properties by the choice of left-going rates and \(t-t'\)-dependency, pathway properties by the choice of [\(\hbox {IP}_3\)]-dependency, etc.
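As a numerical complement to this analytical route, the first two ISI moments can also be estimated by direct Monte Carlo simulation of the chain. The sketch below uses the rates of Eqs. (9) and (10) with the recovery factor \(1-e^{-\gamma t}\) and entirely hypothetical parameter values; the time-dependent up-rates are handled by thinning (rejection sampling), and only the stochastic part of the ISI is computed, i.e., spike duration is ignored:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical parameters
N_T, N_cr = 20, 6             # total number of clusters, critical nucleus
lam0, lam_minus = 0.05, 2.0   # IP3-sensitive puff rate (1/s), cluster closing rate (1/s)
gamma, k = 0.1, 2             # recovery rate from global feedback (1/s), feedback exponent

def lam_up(i):                # Eq. (9)
    return lam0 * (N_T - i) * (1 + i)**k

def lam_down(i):              # Eq. (10)
    return i * lam_minus

def first_passage_time():
    """Time since the last spike until N_cr clusters are open for the first time.
    The time-dependent up-rates f = lam_up(i)*(1 - exp(-gamma*t)) are treated by
    thinning: candidate events at the bounding rate are accepted or rejected."""
    t, i = 0.0, 0
    while i < N_cr:
        total_max = lam_up(i) + lam_down(i)          # bound on the total rate in state i
        t += rng.exponential(1.0 / total_max)
        f_up = lam_up(i) * (1.0 - np.exp(-gamma * t))
        u = rng.uniform(0.0, total_max)
        if u < f_up:
            i += 1
        elif u < f_up + lam_down(i):
            i -= 1
        # otherwise: candidate rejected (thinning step), only time advances
    return t

isi = np.array([first_passage_time() for _ in range(2000)])
t_av, sigma = isi.mean(), isi.std()
print(f"T_av = {t_av:.1f} s, sigma = {sigma:.1f} s, CV = {sigma / t_av:.2f}")
```

Varying \(\gamma \), \(N_\mathrm{cr}\) and k in such a sketch allows exploring how the CV depends on recovery from the global negative feedback and on the strength of the positive feedback.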

Fig. 8

A \(\varPsi _{i,i+1}\) to go one state up vanishes at early times \(t'\) (dotted) for early times \(t-t'\). The global inhibitory effect allows fast upward transitions only after enough time has passed (solid line). B \(\varPsi _{i,i-1}\), downward transitions are not immediately affected by the global negative feedback, but they change due to the normalization condition \(\int _{t'}^\infty \mathrm{d}t\, (\varPsi _{i,i+1}(t,t-t') + \varPsi _{i,i-1}(t,t-t')) = 1\). Future modelling has to adapt waiting time distributions of this type to IPI and puff duration distributions

5.2 Calculating the CV

The state scheme presented in Fig. 6 and its transition rate functions (or waiting time distributions) define a (generalized) master equation, which can be solved using Laplace transforms to determine the moments of the first passage time distribution to reach state \(N_\mathrm{cr}\) [39]. The only requirement on the waiting time distributions is that their Laplace transform exists. In the case of transition rate functions, solving the master equation yields the Laplace transform (denoted by a tilde) of the probability vector \({\tilde{P}}_i(s)\) for a process that started in state i at \(t=0\) as

$$\begin{aligned} {\tilde{P}}_i(s)=\left[ \mathbb {1}s-E\right] ^{-1}r_i + \sum _{k=1}^\infty \prod _{j=0}^{k-1} {\tilde{B}}(s+j\gamma ) \ \big [\mathbb {1}(s+k\gamma )-E\big ]^{-1} \ r_i. \end{aligned}$$
(11)

Solving the generalized master equation including the waiting time distributions gives as a solution the Laplace transforms of the probability flux vector,

$$\begin{aligned} {\tilde{I}}(s)=&\big [\mathbb {1}-{\tilde{G}}(s)\big ]^{-1}{\tilde{q}}(s) +\sum _{k=1}^\infty \prod _{j=0}^{k-1} {\tilde{A}}(s+j\gamma )\nonumber \\&\times \big [\mathbb {1}-{\tilde{G}}(s+k\gamma )\big ]^{-1} \ {\tilde{q}}(s+k\gamma ) \end{aligned}$$
(12)

where A, B, E, and G are matrices that depend on the length of the chain of states N and the transition rates or waiting time distributions, \(f_{i, i \pm 1}\) and \(\varPsi _{i, i \pm 1}\), respectively, and \(r_i\) and \({\tilde{q}}\) contain the initial conditions, as explained in [39].
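Once the Laplace transform \({\tilde{F}}(s)\) of the first passage time density to reach \(N_\mathrm{cr}\) has been extracted from these solutions, the ISI moments follow from the standard property of Laplace transforms (not specific to [39]):

$$\begin{aligned} \langle T^n \rangle = (-1)^n \left. \frac{\mathrm {d}^n {\tilde{F}}(s)}{\mathrm {d}s^n}\right| _{s=0}, \qquad \sigma ^2 = \langle T^2 \rangle - \langle T \rangle ^2 , \end{aligned}$$

from which \({T}_\mathrm{av}=\langle T \rangle \), \(\sigma \) and hence the CV follow.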

Application of this theory to a chain with state-independent transitions, as it might result from a random walk, showed that the CV has a minimum for a certain resonant length \({\bar{N}}\). For a given set of parameters, CV(N) and, therefore, the value of \({\bar{N}}\) can be controlled by varying the rate of recovery from negative feedback \(\gamma \). The stochastic process of reaching state \({\bar{N}}\) for the first time is, therefore, more precise than reaching smaller or larger values of N for the first time. This is interesting in its own right and for the general theory of stochastic physics, but it does not resemble the robustness of the CV against changes in \(N_\mathrm{cr}\) found in \(\hbox {Ca}^{2+}\) signalling. Here the CV is constant and independent of the varying number of \(\hbox {IP}_3\)R clusters per cell found in experiments due to cell-to-cell variability. Hence, while the approach described in ref. [39] provides the tools, it has not solved the problem yet. Future modelling of \(\hbox {Ca}^{2+}\) signalling, therefore, needs to properly define the transition rate functions \(f_{i,i\pm 1}\) or the waiting time distributions \(\varPsi _{i,i\pm 1}\) to reproduce the measured properties of the CV, in particular its robustness against cell variability and variable conditions.

CICR and the spatial coupling of clusters have to be reflected by the transition probabilities to model \(\hbox {Ca}^{2+}\) spike generation. The probability of opening further clusters, derived from \(\varPsi _{i,i+1}(t,t-t',\gamma )\) or \(f_{i,i+1}(t)\), increases with the \(\hbox {Ca}^{2+}\) concentration due to CICR, i.e., it increases with the number of open clusters. Due to spatial coupling by \(\hbox {Ca}^{2+}\) diffusion, it also increases with the number of closed neighbors of open clusters and thus could pick up geometrical or spatial aspects. \(\hbox {Ca}^{2+}\) binding molecules in the cytosol decreasing \(\hbox {Ca}^{2+}\) diffusion would decrease the probability of opening more clusters. However, this still has to be worked out.

6 Similarities and differences to neural spiking

It is interesting and potentially useful to discuss in which respects \(\hbox {Ca}^{2+}\)-spiking resembles or differs from the spiking activity of neurons, a biological problem that has been quantitatively explored by mathematical modeling to an impressive extent [48, 52]. This concerns the single neuron’s spontaneous activity and its characterization by interspike interval (ISI) histograms, ISI correlation coefficients, and spike train power spectra, the autonomous activity of many neurons in recurrent networks, and the encoding of time-dependent stimuli.

Obvious differences between the two forms of spiking are (i) the physical quantity that undergoes spiking (\(\hbox {Ca}^{2+}\) concentration vs trans-membrane voltage), (ii) the time scales and typical mean ISIs (several seconds to minutes for \(\hbox {Ca}^{2+}\) spikes vs several to hundreds of ms for neurons), and (iii) the constancy of the spike form (the shape of \(\hbox {Ca}^{2+}\) spikes is more variable than that of neural action potentials). A technical but important difference is the typical length of experimental recordings: neural spike trains may contain many thousands of spike pulses in a quasi-stationary setting, whereas \(\hbox {Ca}^{2+}\) spike trains are mostly limited to less than a hundred spikes. This poses a severe limitation for the determination of certain higher order statistics, such as interspike interval correlations. Related to this, for many sensory neurons, researchers can systematically explore the information transmission by presenting well-defined sensory (e.g. acoustic, visual or electric) stimuli in the form of harmonic or broadband signals. This allows studying whether neurons preferentially encode information about slow, intermediate or fast stimulus components (see, for instance, [9, 27, 77]). In \(\hbox {Ca}^{2+}\) experiments, one is mostly concerned with presenting a certain amount of signaling molecules in a step-like manner, which resembles the first experiments in neuroscience, see e.g. the famous work of Lord Adrian [1]. This, however, seems to be only a consequence of current technical limitations, and the question of how the sequence of \(\hbox {Ca}^{2+}\) spikes encodes truly time-dependent signals may come into focus once more spikes can be recorded in experiment and stimuli can be better controlled.

Biophysically, it is interesting that both spiking phenomena rely on the opening and closing of ionic channels and that the positive and negative feedback loops are mediated by the \(\hbox {Ca}^{2+}\)- or voltage-dependence of the opening and closing rates of these channels. The main players in the neural dynamics are the \(\hbox {Na}^{+}\)- and \(\hbox {K}^+\)-selective voltage-dependent ion channels. This is described in the framework of the famous Hodgkin-Huxley model for the voltage across the neuron membrane V(t) (see standard textbooks on the topic, e.g., [24, 52])

$$\begin{aligned} C\frac{\mathrm{d}V}{\mathrm{d}t}&=I_\text {Ext} - I_\text {K} - I_\text {Na} - I_\text {L}, \end{aligned}$$
(13)
$$\begin{aligned} I_\text {K}&= g_\text {K} n^4(t)(V - E_\text {K}),\;\; I_\text {Na}= g_\text {Na} m^3(t)h(t)(V - E_\text {Na}),\;\; \nonumber \\ I_\text {L}&= g_\text {L}(V - E_\text {L}), \end{aligned}$$
(14)

where C is the membrane capacitance and \(I_\text {Ext}\) is an external current that can serve as a stimulus. The variables \(I_\text {K}\), \(I_\text {Na}\), \(I_\text {L}\) describe ionic potassium, sodium and leak currents, respectively. The parameters \(g_\text {K}\), \(g_\text {Na}\), \(g_\text {L}\) and \(E_\text {K}\), \(E_\text {Na}\), \(E_\text {L}\) are the corresponding maximal conductances and reversal potentials. The remaining variables m(t), n(t) and h(t) are the gating variables that are of particular importance for the generation of an action potential. They are described by

$$\begin{aligned} \tau _x(V)\frac{\mathrm{d}x}{\mathrm{d}t}&= x_\infty (V) - x \end{aligned}$$
(15)

where x stands for n, m or h. Much like in early modeling of CICR by \(\hbox {Ca}^{2+}\) channels, the variables m and n describe two fast binding processes that activate certain channels, while h describes a slow process that inactivates the sodium-selective channels after a depolarization of the membrane. In a \(\hbox {Ca}^{2+}\) channel model this would correspond to the fast activation due to the binding of activating \(\hbox {Ca}^{2+}\) and \(\hbox {IP}_3\) and the slow inactivation due to the binding of inhibitory \(\hbox {Ca}^{2+}\).
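For readers who prefer a concrete numerical illustration, the following minimal Python sketch integrates Eqs. (13)-(15) with a simple Euler scheme. The rate functions and parameter values are the standard squid-axon textbook values (see, e.g., [24, 52]) and are stated here as assumptions, not quantities taken from this review; Eq. (15) is written in the equivalent rate form \(\mathrm{d}x/\mathrm{d}t=\alpha _x(V)(1-x)-\beta _x(V)x\).

import numpy as np

# Standard squid-axon rate functions (assumed textbook values; V in mV, t in ms)
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))

# Maximal conductances (mS/cm^2), reversal potentials (mV), capacitance (uF/cm^2)
g_K, g_Na, g_L = 36.0, 120.0, 0.3
E_K, E_Na, E_L = -77.0, 50.0, -54.4
C = 1.0

def simulate_hh(I_ext=15.0, T=100.0, dt=0.01):
    """Euler integration of Eqs. (13)-(15) for a constant external current I_ext."""
    steps = int(T / dt)
    V, n, m, h = -65.0, 0.32, 0.05, 0.6           # approximate resting-state values
    trace = np.empty(steps)
    for i in range(steps):
        I_K = g_K * n**4 * (V - E_K)              # potassium current, Eq. (14)
        I_Na = g_Na * m**3 * h * (V - E_Na)       # sodium current, Eq. (14)
        I_L = g_L * (V - E_L)                     # leak current, Eq. (14)
        V += dt * (I_ext - I_K - I_Na - I_L) / C  # voltage dynamics, Eq. (13)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)   # gating dynamics, Eq. (15)
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        trace[i] = V
    return trace

voltage = simulate_hh()   # periodic action potentials for sufficiently strong I_ext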

The positive feedback loop that is essential to understand the upstroke of the action potential is the sodium dynamics: sodium is in excess outside the cell, and the open probability of the \(\hbox {Na}^+\)-selective channels increases upon depolarization. A small depolarization will thus lead to the opening of some channels, which causes \(\hbox {Na}^+\) ions to rush into the cell, which depolarizes the membrane further, leading to more channel openings and so forth. This positive feedback loop can be compared to the puff generation via \(\hbox {Ca}^{2+}\)-induced \(\hbox {Ca}^{2+}\) release (CICR) but also to the accelerated puff generation via the global \(\hbox {Ca}^{2+}\) concentration prior to a cell-wide \(\hbox {Ca}^{2+}\) spike.

Inactivation of \(\hbox {Na}^+\) channels and activation of \(\hbox {K}^+\)-selective channels (with potassium being in excess inside the cell) lead to the termination of the action potential. Put in mathematical terms, negative feedback loops on a somewhat slower timescale explain the second half of the neural spike; this again is very similar, on a mathematical level, to the mechanism at work in \(\hbox {Ca}^{2+}\) spiking.

There are features in the neural membrane dynamics that are sensitive to time-dependent input currents and there are features which are not. Among the latter is the exact shape of the action potential—once the voltage is sufficiently depolarized, a largely stereotypical action potential is generated. To simplify the description, one may cut out this stereotypical part of the dynamic response as it cannot contribute to the signal transmission property of a neuron and focus on what is really the signal-dependent part. This is what is done in an Integrate-and-Fire (IF) model:

$$\begin{aligned} \tau _V\frac{\mathrm{d}V}{\mathrm{d}t}=f(V)+s(t), \;\;\; \text {if}\;\; V(t)\ge V_\mathrm{T} \; \rightarrow \; V(t)=V_\mathrm{R} \end{aligned}$$
(16)

where the more involved dynamics of the different ion channels and corresponding currents are subsumed in a simplified function f(V) that describes the currents up to some threshold \(V_\mathrm{T}\). Interestingly, the particular shape of f(V) can be obtained experimentally [3, 4]. Brette [11] argues that the positive \(\hbox {Na}^{+}\) feedback that sets in after a particular voltage is crossed is so abrupt that a simple linear model \(f(V) = \mu - V\) with constant parameter \(\mu \), i.e., the famous leaky Integrate-and-Fire (LIF) model, describes the sub-threshold dynamics of a real neuron best.

The function s(t) could be a time-dependent signal or a stochastic process accounting for intrinsic and/or external noise. Indeed, especially the generation of the action potential is a stochastic process due to the presence of multiple sources of noise. These include channel noise, quasi-random input from other neurons (network noise) and the unreliability of synapses [101]. Many of these noise sources can be approximated by a Gaussian stochastic process, and often the simplifying assumption of strictly uncorrelated (white) Gaussian noise is made, as we will do in the remainder of this paper. We would like to mention the limitations of this assumption: for some sorts of channel noise [42, 75] and very often for network noise [30, 103], fluctuations display significant correlations. Furthermore, for strong synaptic connections, the shot-noise character of neural network noise invalidates the Gaussian approximation in some cases [70].

Hence, when we want to mimic spontaneous stochastic spiking, a simple choice for the driving current is to set \(s(t)=\sqrt{2D} \xi (t)\), i.e., to use a white Gaussian noise of intensity D with \(\langle \xi (t) \rangle =0\) and \(\langle \xi (t)\xi (t+\tau ) \rangle =\delta (\tau )\). For concreteness, we state again the standard stochastic model, the leaky integrate-and-fire model with white Gaussian noise:

$$\begin{aligned} \tau _V\frac{\mathrm{d}V}{\mathrm{d}t}=-V+\mu +\sqrt{2D} \xi (t), \;\;\; \text {if}\;\; V(t)\ge V_\mathrm{T} \; \rightarrow \; V(t)=V_\mathrm{R} \end{aligned}$$
(17)

Note that the spike is not explicitly modelled; instead, if V(t) reaches the threshold, a spike is said to be emitted at time \(t_i = t\) and the voltage variable is reset to the reset value \(V_\mathrm{R}\). The abstract spikes are described by delta functions \(\delta (t-t_i)\) and form the spike train, i.e., the sum of all spikes:

$$\begin{aligned} x(t)=\sum _i \delta (t-t_i) \end{aligned}$$
(18)

The spike train is the essential output of an IF model, and its different statistics under the influence of noisy stimulation currents have been the subject of many studies (see [13, 44, 51, 101] for reviews of stochastic IF models). We note that the reset after a spike may occur instantaneously or with some refractory period \(\tau _\text {ref}\) that accounts for the temporal extent of the action potential in a conductance-based model.
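To make the model concrete, a minimal Python sketch of an Euler-Maruyama simulation of Eq. (17) is given below; it records the spike times and estimates the firing rate and the coefficient of variation of the ISIs discussed in the next paragraph. All numerical choices (parameter values, integration step, the function name lif_spike_times) are illustrative assumptions.

import numpy as np

def lif_spike_times(mu=10.0, D=1.0, tau_V=1.0, V_T=1.0, V_R=0.0,
                    T=200.0, dt=1e-4, seed=0):
    """Euler-Maruyama integration of Eq. (17); returns the spike times t_i."""
    rng = np.random.default_rng(seed)
    V = V_R
    spikes = []
    for i in range(int(T / dt)):
        # increment of tau_V dV/dt = -V + mu + sqrt(2D) xi(t) over one time step dt
        V += dt / tau_V * (mu - V) + np.sqrt(2.0 * D * dt) / tau_V * rng.standard_normal()
        if V >= V_T:               # threshold crossing: emit a spike and reset, Eq. (17)
            spikes.append(i * dt)
            V = V_R
    return np.array(spikes)

spikes = lif_spike_times()
isi = np.diff(spikes)              # interspike intervals I_i = t_i - t_{i-1}
r0 = 1.0 / isi.mean()              # firing rate r_0 = 1/<I>
CV = isi.std() / isi.mean()        # coefficient of variation
print(f"r0 = {r0:.2f}, CV = {CV:.2f}")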

Several statistics of neural spike trains are also routinely studied for \(\hbox {Ca}^{2+}\) spikes. The stationary spike rate is given by an ensemble average, \(r_0=\langle x(t) \rangle \), but can also be determined via a time average, \(r_0 = \lim _{T\rightarrow \infty } 1/T \int _0^T \mathrm{d}t\, x(t) = \lim _{T\rightarrow \infty } N(T)/T\) (where N(T) is the number of spikes in the time interval T). Statistics of the interspike interval \(I_i = t_i - t_{i-1}\) (the time between two consecutive spikes) have already been discussed for \(\hbox {Ca}^{2+}\) spikes: the mean interval \(\langle I \rangle =1/r_0\), the coefficient of variation \(CV =\sqrt{\langle (I-\langle I \rangle )^2 \rangle }/\langle I \rangle \), and, of course, the most complete description of the single interval, the full ISI probability density function (PDF) \(\rho (I)\). There are, however, also a number of statistics that are not as common in the study of \(\hbox {Ca}^{2+}\) spikes but well established in the computational neuroscience community. These include (i) count statistics, especially the Fano factor \(F(T) = \langle (N(T)-\langle N(T) \rangle )^2 \rangle /\langle N(T) \rangle \) that compares the growth of the spike count’s variance to its mean (see, e.g., [21, 105] for studies that highlight the importance of the Fano factor and [84] for a study that investigates the Fano factor in the context of \(\hbox {Ca}^{2+}\) spiking), and (ii) the spike-train correlation function \(C(\tau ) = \langle x(t)x(t+\tau )\rangle - \langle x(t)\rangle ^2\) that describes the probability of finding a spike at time \(t_i + \tau \) if a reference spike occurred at time \(t_i\). This statistic bears information about the spike generation process; for instance, experimentally and theoretically obtained spike-train correlation functions usually show a decreased firing probability right after a spike has occurred, reflecting refractory processes similar to what is observed in \(\hbox {Ca}^{2+}\) puffs. Often, oscillatory activity is better characterized in the Fourier domain by the spike-train power spectrum:

$$\begin{aligned} S(f) =&\lim _{T\rightarrow \infty }\frac{\langle {\tilde{x}}(f){\tilde{x}}(f)^* \rangle }{T} \nonumber \\ =&\int _{-\infty }^{\infty } \mathrm{d}\tau \, e^{i2\pi f \tau }C(\tau ) \quad \text { with }\nonumber \\ {\tilde{x}}(f)=&\int _{0}^T \mathrm{d}t\, x(t)e^{i 2\pi f t}. \end{aligned}$$
(19)

According to the first defining equation, the power spectrum is given by the variance of the Fourier coefficients \({\tilde{x}}(f)\) of the spike train in a time window T. However, according to the second equation and the Wiener–Khinchine theorem [46], it is also given by the Fourier transform of the autocorrelation function. Turning back to the interspike intervals, we finally mention the serial correlation coefficient (SCC):

$$\begin{aligned} \rho _k = \frac{\langle (I_i -\langle I_i \rangle ) (I_{i+k}-\langle I_{i+k} \rangle ) \rangle }{\langle (I_i-\langle I_i \rangle )^2 \rangle } \end{aligned}$$
(20)

that puts the covariance between two ISIs lagged by an integer k in relation to the variance of the single interval, providing a number between \(-1\) and 1. Correlations among ISIs may reflect slower processes that are at work in the driving input or in the intrinsic dynamics of the neuron. For instance, a negative SCC of adjacent intervals indicates that an ISI longer than the mean is on average followed by an interval shorter than the mean and/or the other way around. Such correlations have been found in many neurons (see [2, 41] for reviews) and may lead to an improved information transmission [18, 19]. Many of these statistics are related, as can be easily demonstrated by means of the power spectrum.
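All of these statistics can be estimated from a measured or simulated set of spike times. The following Python sketch implements the Fano factor, the SCC of Eq. (20) and a periodogram estimate of the power spectrum of Eq. (19); binning and estimator choices are illustrative assumptions, and in practice the periodogram is averaged over many segments or realizations to reduce estimation noise.

import numpy as np

def fano_factor(spikes, T_win, T_total):
    """Fano factor F(T_win) = Var[N]/<N> from spike counts in non-overlapping windows."""
    edges = np.arange(0.0, T_total + T_win, T_win)
    counts, _ = np.histogram(spikes, bins=edges)
    return counts.var() / counts.mean()

def serial_correlation(isi, k):
    """Serial correlation coefficient rho_k of Eq. (20), assuming a stationary ISI sequence."""
    dev = isi - isi.mean()
    return np.mean(dev[:-k] * dev[k:]) / np.mean(dev**2)

def spike_train_spectrum(spikes, T_total, dt=1e-3):
    """Periodogram estimate of S(f), Eq. (19), from a binned spike train (ignore the f=0 bin)."""
    n_bins = int(T_total / dt)
    x, _ = np.histogram(spikes, bins=n_bins, range=(0.0, T_total))
    x = x / dt                              # each spike becomes a pulse of unit area
    xf = np.fft.rfft(x) * dt                # discrete approximation of the Fourier transform
    freqs = np.fft.rfftfreq(n_bins, d=dt)
    return freqs, np.abs(xf)**2 / T_total   # |x~(f)|^2 / T

# Example usage with the spike times from the LIF sketch above:
# F = fano_factor(spikes, 10.0, 200.0); rho1 = serial_correlation(np.diff(spikes), 1)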

Fig. 9 Power spectrum of an LIF model with strong input \({\dot{v}} = \mu - v + \sqrt{2D}\xi (t)\). The high-frequency limit reflects the mean firing rate \(r_0\), while the low-frequency limit bears information about the variability of the spike train. In the considered case of strong mean input \(\mu \gg v_\mathrm{T}\), the interspike interval PDF can be approximated by an inverse Gaussian distribution that is fully characterized by \(r_0\) and CV. Parameters: \(\mu = 10\), \(D=1\), \(v_\mathrm{R} = 0\) and \(v_\mathrm{T} = 1\)

Fig. 10 Power spectrum S(f) (left) and serial correlation coefficients \(\rho _k\) (right) of an adaptive LIF model with \(\mu = 10\), \(D=1.0\), \(\tau _m=1\), \(\tau _a=2\), \(\varDelta =3\). Power spectrum (blue line on the left) and SCCs (blue symbols on the right) are obtained from the original spike train of the adapting neuron; orange line (left) and symbols (right) are obtained from a shuffled version of the same spike train (ISIs in the sequence are randomly shuffled, which removes interval correlations and leads to a renewal spike train). Accordingly, the low-frequency limit of the power spectrum of the renewal spike train has a higher power compared to the nonrenewal case. Generally, a decreased power at low frequencies may improve the signal-to-noise ratio for potential low-frequency signals, hence improving the information transmission properties of the neuron

We have already pointed out the relation between the spike-train correlation function and the spike-train power spectrum via the Wiener–Khinchine theorem. The power spectrum, however, also contains information on the interval statistics (see [22]). In the high-frequency limit of a stationary stochastic spike train, the spectrum saturates at the firing rate (the inverse of the mean ISI), \(\lim _{f\rightarrow \infty } S(f) = r_0=1/\langle I \rangle \). If intervals are independent, i.e., if we deal with a renewal spike train, the spectrum also attains a simple form in the opposite limit of vanishing frequency:

$$\begin{aligned} \lim _{f\rightarrow 0} S(f) = r_0 CV^2, \end{aligned}$$
(21)

which means that by comparing the high- and low-frequency limits we can read off how regular the renewal spike train is. More generally, the full spike-train power spectrum of a renewal point process can be obtained from the knowledge of the interspike interval probability density, more specifically, its one-sided Fourier transform, \({\tilde{\rho }}(f)\), via the expression [91]

$$\begin{aligned} S(f)=r_0\frac{1-|{\tilde{\rho }}(f)|^2}{|1-{\tilde{\rho }}(f)|^2}. \end{aligned}$$
(22)

The spectrum can thus be calculated for the leaky IF model driven by white Gaussian noise [57] (using much earlier results for the Laplace transform of the first-passage-time density of an Ornstein–Uhlenbeck process [23]), because in this model, the reset of the voltage erases any memory about previous ISIs and the driving noise is uncorrelated and thus does not carry memory either. Since the exact result for the power spectrum involves special functions (parabolic cylinder functions), it is instructive to look for a further simplification, which can be achieved if the system is in the strongly mean-driven regime \(\mu \gg V_\mathrm{T}-V_\mathrm{R}\). In this case, the statistics of the LIF model is close to that of a perfect IF model with \(f(V)=\mu \) (omitting the leak term on the right-hand side of Eq. (17)). For this model, the ISI density is an inverse Gaussian probability density [47], the Fourier transform of which is a simple exponential function:

$$\begin{aligned} \rho (I)&=\frac{\vert V_\mathrm{T}-V_\mathrm{R} \vert }{\sqrt{4 \pi D I^3}} \cdot \exp \left[ -\frac{(V_\mathrm{T}-V_\mathrm{R}-\mu I)^2}{4DI}\right] , \end{aligned}$$
(23)
$$\begin{aligned} {\tilde{\rho }}(f)&=\exp \left\{ \frac{\mu }{2D}(V_\mathrm{T}-V_\mathrm{R})\left( 1-\sqrt{1-i\frac{8\pi f D}{\mu ^2}}\right) \right\} \end{aligned}$$
(24)

In Fig. 9, we display a simulated spike-train power spectrum for an LIF model with strong mean input (\(\mu =10\gg V_\mathrm{T}-V_\mathrm{R}=1\)) and highlight the limit cases, from which both the firing rate \(r_0\) and coefficient of variation CV can be readily obtained. The simulation is also compared to Eq. (22), using as an approximate description \({\tilde{\rho }}(f)\) from Eq. (24); the approximation agrees very well for this example, because the constant drift \(\mu \) is dominating the subthreshold dynamics so strongly that the LIF dynamics is close to that of a perfect IF model. Comparable power spectra have indeed been reported in \(\hbox {Ca}^{2+}\) spiking [82].
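Readers who wish to reproduce the theory curve of Fig. 9 can evaluate Eq. (24) and insert it into the renewal formula Eq. (22); a minimal Python sketch is given below. Note that it uses the perfect-IF approximation (with the corresponding rate \(r_0=\mu /(V_\mathrm{T}-V_\mathrm{R})\)) rather than the exact LIF result in terms of parabolic cylinder functions; all numerical choices are illustrative assumptions.

import numpy as np

def rho_tilde_perfect_if(f, mu=10.0, D=1.0, V_T=1.0, V_R=0.0):
    """Fourier transform of the inverse Gaussian ISI density, Eq. (24)."""
    return np.exp(mu * (V_T - V_R) / (2.0 * D)
                  * (1.0 - np.sqrt(1.0 - 1j * 8.0 * np.pi * f * D / mu**2)))

def renewal_spectrum(f, r0, rho_tilde):
    """Spike-train power spectrum of a renewal process, Eq. (22)."""
    rt = rho_tilde(f)
    return r0 * (1.0 - np.abs(rt)**2) / np.abs(1.0 - rt)**2

# Theory curve for the parameters of Fig. 9; r0 = mu/(V_T - V_R) is the perfect-IF rate
f = np.linspace(0.05, 30.0, 600)
S_theory = renewal_spectrum(f, r0=10.0, rho_tilde=rho_tilde_perfect_if)
# Limits for comparison: S -> r0 for large f, and S -> r0*CV^2 for f -> 0 (Eq. (21))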

Many extensions of the simple one-dimensional IF model have been studied analytically, such as models with a time-dependent threshold [56] or models with colored [12, 64, 74] or non-Gaussian noise [29, 65, 70]. In higher-dimensional stochastic IF models we can also reproduce non-renewal behavior observed in many neurons in the form of a non-vanishing serial correlation coefficient, \(\rho _k\ne 0\) for \(k>0\). As in \(\hbox {Ca}^{2+}\) spiking, in neural spiking there are often slower processes at work that steer the pulse-generating process, either as a simple external control or as a feedback of the spike train onto the spike generator. This can easily be incorporated into the Integrate-and-Fire framework by adding a slow variable. Consider the following example, where the membrane potential is affected by an additional negative adaptation current a(t):

$$\begin{aligned} \tau _v\frac{\mathrm{d}V}{\mathrm{d}t}&=\mu -V -a + \sqrt{2D}\xi (t), \end{aligned}$$
(25)
$$\begin{aligned} \tau _a \frac{\mathrm{d}a}{\mathrm{d}t}&= - a\nonumber \\&\text {If}\; V(t)>V_\mathrm{T}\; \text {then}\; V(t) \rightarrow V_\mathrm{R} \; \text {and}\; a\rightarrow a+\varDelta \end{aligned}$$
(26)

In the last line, we complemented the usual reset rule for the voltage by an incrementation rule for the adaptation variable a(t): it is increased by a value \(\varDelta \) when a spike occurs. In between spikes, according to Eq. (26), the adaptation variable decays exponentially with the time constant \(\tau _a\), which is typically larger than the membrane time constant or the mean interspike interval and ranges between 50 ms and several seconds. A sequence of spikes occurring in rapid succession (as observed, for instance, when the neuron is subject to a depolarizing current step) leads to a large value of the adaptation variable, which has an inhibiting effect on the voltage dynamics in Eq. (25): the response to the current step will initially be a rapid increase that is followed by an adaptation to a much lower value.

The IF model endowed with an adaptation current is conveniently termed an adaptive IF model and can be thought of as a simplification of a conductance-based model with a \(\hbox {Ca}^{2+}\)-gated \(\hbox {K}^{+}\) current. Since the adaptation current is not reset to a fixed value but increased upon spiking, it carries information about past ISIs and may lead to ISI correlations. Indeed, this model and related ones, such as the IF model with a dynamical threshold, have been shown to generate nonrenewal spike trains and, specifically, negative interspike interval correlations [20, 58]; analytical methods to calculate these correlations have been worked out in recent years [76, 78].
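A minimal Python sketch of Eqs. (25) and (26), with the lag-one SCC of Eq. (20) estimated from the resulting ISI sequence, is given below; the parameter values follow the caption of Fig. 10, while the remaining numerical choices (integration step, number of spikes, function name adaptive_lif_isis) are illustrative assumptions.

import numpy as np

def adaptive_lif_isis(mu=10.0, D=1.0, tau_v=1.0, tau_a=2.0, Delta=3.0,
                      V_T=1.0, V_R=0.0, n_spikes=2000, dt=1e-3, seed=1):
    """Euler-Maruyama simulation of Eqs. (25) and (26); returns the ISI sequence."""
    rng = np.random.default_rng(seed)
    V, a, t, t_last = V_R, 0.0, 0.0, 0.0
    isis = []
    while len(isis) < n_spikes:
        V += dt / tau_v * (mu - V - a) + np.sqrt(2.0 * D * dt) / tau_v * rng.standard_normal()
        a -= dt * a / tau_a               # Eq. (26): exponential decay between spikes
        t += dt
        if V >= V_T:                      # reset rule: reset V and increment a by Delta
            isis.append(t - t_last)
            t_last = t
            V = V_R
            a += Delta
    return np.array(isis)

isis = adaptive_lif_isis()
dev = isis - isis.mean()
rho_1 = np.mean(dev[:-1] * dev[1:]) / np.mean(dev**2)   # Eq. (20) for lag k = 1
print(f"rho_1 = {rho_1:.2f}")                           # negative for this adapting model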

As a consequence of the nonrenewal character of the spike train, the power spectrum is no longer described by Eq. (22) but by a more complex expression involving higher-order interval distributions (see, e.g., [51]). While the high-frequency limit of the spectrum is still given by the firing rate (black dashed line in Fig. 10), the zero-frequency limit now also involves the serial correlation coefficients:

$$\begin{aligned} \lim _{f\rightarrow 0}S(f) = r_0 CV^2 \left( 1 + 2\sum _{k=1}^\infty \rho _k\right) , \end{aligned}$$
(27)

which in the renewal case reduces to Eq. (21). The effect of the negative ISI correlations is thus to reduce power at low frequencies, which can be clearly seen by comparing the original spectrum to the power spectrum of the shuffled spike train. This effect is especially important for the transmission of slow stimuli (so far not included in the model Eqs. (25), (26)): the power spectrum of the spontaneous state is a good approximation for the noise background when a time-dependent signal (e.g., a low-frequency cosine signal) is present. If the noise power is reduced at low frequencies, this can increase the signal-to-noise ratio [18, 19].
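Under the same assumptions, the zero-frequency relation Eq. (27) can be checked directly against the renewal value of Eq. (21) using the ISI sequence generated by the adaptive LIF sketch above (shuffling the ISIs destroys the correlations and recovers the renewal value); the lag at which the sum is truncated is an illustrative choice.

import numpy as np

# isis and adaptive_lif_isis refer to the sketch following Eq. (26)
isis = adaptive_lif_isis()
r0, CV = 1.0 / isis.mean(), isis.std() / isis.mean()
dev = isis - isis.mean()
rho = [np.mean(dev[:-k] * dev[k:]) / np.mean(dev**2) for k in range(1, 21)]

S0_nonrenewal = r0 * CV**2 * (1.0 + 2.0 * sum(rho))   # Eq. (27), sum truncated at lag 20
S0_renewal = r0 * CV**2                               # Eq. (21): shuffled (renewal) train
print(S0_nonrenewal, S0_renewal)                      # the nonrenewal value is the smaller one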

It is conceivable that some of the concepts reviewed here for neural spike trains may become relevant and applicable to \(\hbox {Ca}^{2+}\) spike trains once longer recordings and better temporal control of stimuli become possible. Since, in particular, slower processes are at work in the intracellular \(\hbox {Ca}^{2+}\) dynamics, models like the adaptive Integrate-and-Fire model discussed above may serve as an inspiration for capturing the cumulative refractoriness of \(\hbox {Ca}^{2+}\) spikes.

7 Conclusion

In recent years, modelling of \(\hbox {Ca}^{2+}\) signalling has had to navigate the tradeoff between models accounting for the randomness of puffs and spikes, cell variability and measured parameter dependencies on the one side, and rate equation models convenient for simulating time courses on the other. Rate equation models need further development to reproduce measured parameter dependencies. We suggest including the dynamics of higher moments derived from the Master equation to account for spike generation by fluctuations. Alternatively, approaches describing spike generation as the first passage of a random walk on a linear chain of states, as presented in Sects. 5 and 5.1, might be used.

The stochastic theory of neuronal spiking might serve as a role model for what can be achieved with a stochastic theory of \(\hbox {Ca}^{2+}\) spiking. The main challenges ahead are to go beyond simple renewal approaches for spike generation towards ISI correlations, cumulative refractoriness and other phenomena comprising several ISIs, to explain the concentration-response relation of the ISI, and to account for the robustness properties of the moment relation.

The task of mechanistic mathematical modelling in cell biology is to identify mechanisms on the basis of formulating them as hypotheses in mathematical models. Here, agreement with experimental data serves as part of the hypothesis verification. Rate equation models fail in this respect regarding the agonist concentration-response relation of the average interspike interval, the sensitive dependence of the average interspike interval on parameters of spatial coupling (diffusion, buffers, geometry) and, of course, the moment relation as a defining property of \(\hbox {Ca}^{2+}\) spiking. Stochastic models still have to be developed to address these problems.

Such a model development may lead to answers to obvious questions in the field. Frequency encoding is one of the generally accepted and experimentally supported concepts providing meaning to \(\hbox {Ca}^{2+}\) signals [66]. However, spike timing is random. The spectrum of a spike train with exponentially distributed ISIs is flat. The absolute refractory period introduces frequencies with moderately larger power in the spectrum than the average power [82], but essentially there is no typical frequency in many \(\hbox {IP}_3\)-induced \(\hbox {Ca}^{2+}\) spike sequences. Taking the large cell-to-cell variability at a given agonist concentration into account, there is no defined relation between agonist concentration and \(\hbox {Ca}^{2+}\) spike frequency that applies to all cells of a given type; rather, each cell has its own relation. How can frequency encoding work with these properties of spiking? What are the reasons for the large cell variability and what does it mean? Addressing these questions requires models that faithfully reproduce the properties of spike sequences, including their fluctuations, but also have predictive power, e.g., by telling us how the spike statistics will vary if biophysical parameters are changed.