
Learning Biological Dynamics From Spatio-Temporal Data by Gaussian Processes


Abstract

Model discovery methods offer a promising way to understand biology from data. We propose a method to learn biological dynamics from spatio-temporal data by Gaussian processes. This approach is essentially “equation free” and hence avoids model derivation, which is often difficult due to the high complexity of biological processes. By exploiting the local nature of biological processes, dynamics can be learned with data that are sparse in time. When the length scales (hyperparameters) of the squared exponential covariance function are tuned, they reveal key insights into the underlying process. The squared exponential covariance function also simplifies the propagation of uncertainty in multi-step forecasting. After evaluating the performance of the method on synthetic data, we demonstrate a case study on real image data of an E. coli colony.


Data availability

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.


Acknowledgements

The authors thank Dr. Xiao Wang and Samat Bayakhmetov for providing the E. coli colony data, and Duane Harris for helpful discussions on image processing. The work of CH and YK is supported in part by the National Institutes of Health (NIH) under Grant 5R01GM1314.


Corresponding author

Correspondence to Lifeng Han.


Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Derivation of Prediction and Uncertainty Propagation


Since the product of Gaussian functions is again a Gaussian, the following integrals can be evaluated analytically. In general, for \({\mathbf {x}}\sim N({\varvec{\mu }},{\varvec{\Sigma }})\),

$$\begin{aligned}&\int k(\hat{{\mathbf {x}}},{\mathbf {x}};{\mathbf {V}})p({\mathbf {x}})d{\mathbf {x}}\nonumber \\&\quad = \sigma _{k}^{2}\left| {\varvec{\Sigma }}{\mathbf {V}}+{\mathbf {I}}\right| ^{-1/2}\exp \bigg (-\frac{1}{2}(\hat{{\mathbf {x}}}-{\varvec{\mu }})^{T}({\varvec{\Sigma }}+{\mathbf {V}}^{-1})^{-1}(\hat{{\mathbf {x}}}-{\varvec{\mu }})\bigg ) \end{aligned}$$
(6)
$$\begin{aligned}&\qquad \int k(\hat{{\mathbf {x}}},{\mathbf {x}};{\mathbf {V}})k(\hat{{\mathbf {y}}},{\mathbf {x}};{\mathbf {W}})p({\mathbf {x}})d{\mathbf {x}}\nonumber \\&\quad = \frac{\sigma _{k}^{4}}{\left| {\varvec{\Sigma }}({\mathbf {V}}+{\mathbf {W}})+{\mathbf {I}}\right| ^{1/2}}\exp \bigg (-\frac{1}{2}\big (\hat{{\mathbf {x}}}^{T}{\mathbf {V}}\hat{{\mathbf {x}}}+\hat{{\mathbf {y}}}^{T}{\mathbf {W}}\hat{{\mathbf {y}}}-{\mathbf {q}}^{T}({\mathbf {V}}+{\mathbf {W}}){\mathbf {q}}\big )\bigg )\nonumber \\&\qquad \exp \bigg (-\frac{1}{2}({\mathbf {q}}-{\varvec{\mu }})^{T}(\varvec{\Sigma }+({\mathbf {V}}+{\mathbf {W}})^{-1})^{-1}({\mathbf {q}}-\varvec{\mu })\bigg ) \end{aligned}$$
(7)

where \({\mathbf {q}}=({\mathbf {V}}+{\mathbf {W}})^{-1}({\mathbf {V}}\hat{{\mathbf {x}}}+{\mathbf {W}}\hat{{\mathbf {y}}})\). In the following, we show how to take the uncertainty of the input into account in forecasting by using (6) and (7).
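Identity (6) is easy to verify numerically. The snippet below is a minimal Monte Carlo check, assuming the kernel parameterization \(k({\mathbf {a}},{\mathbf {b}};{\mathbf {V}})=\sigma _{k}^{2}\exp (-\frac{1}{2}({\mathbf {a}}-{\mathbf {b}})^{T}{\mathbf {V}}({\mathbf {a}}-{\mathbf {b}}))\); all numerical values and variable names are illustrative, not taken from the authors' code.

```python
import numpy as np

# Minimal Monte Carlo check of identity (6).
rng = np.random.default_rng(0)
d = 2
sigma_k2 = 1.5                                # sigma_k^2 (assumed value)
V = np.array([[2.0, 0.3], [0.3, 1.0]])        # kernel precision matrix V
mu = np.array([0.5, -0.2])                    # mean of x
Sigma = np.array([[0.4, 0.1], [0.1, 0.3]])    # covariance of x
x_hat = np.array([1.0, 0.0])

# Monte Carlo estimate of E[k(x_hat, x)] for x ~ N(mu, Sigma)
xs = rng.multivariate_normal(mu, Sigma, size=500_000)
diff = xs - x_hat
mc = np.mean(sigma_k2 * np.exp(-0.5 * np.einsum('ni,ij,nj->n', diff, V, diff)))

# Closed form given by (6)
A = np.linalg.inv(Sigma + np.linalg.inv(V))
closed = (sigma_k2 / np.sqrt(np.linalg.det(Sigma @ V + np.eye(d)))
          * np.exp(-0.5 * (x_hat - mu) @ A @ (x_hat - mu)))

print(mc, closed)  # the two values agree to roughly three decimal places
```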

To simplify notation, we denote the training data by \({\mathscr {D}}=(\hat{{\mathbf {u}}},\tilde{{\mathbf {v}}})\) and the test input by \({\mathbf {u}}^{*}\sim N({\mathbf {m}},{\mathbf {S}})\). Note that here we deviate from the notation in the main text in favor of presenting a general method. We want to find the mean and variance of the test output \({\mathbf {u}}\equiv (u_{j})_{j\in \{1:n_{x}\}}\). GP regression gives

$$\begin{aligned} {\mathbf {u}}\vert {\mathbf {u}}^{*},{\mathscr {D}}\sim N({\mathbf {u}}^{*}+{\mathbf {K}}_{*}^{T}\varvec{\Sigma }^{-1}(\tilde{{\mathbf {v}}}-\hat{{\mathbf {u}}}),{\mathbf {K}}_{**}-{\mathbf {K}}_{*}^{T}\varvec{\Sigma }^{-1}{\mathbf {K}}_{*}) \end{aligned}$$

where \(({\mathbf {K}}_{*})_{ij}=k(\hat{{\mathbf {u}}}_{{\mathscr {N}}(i)},{\mathbf {u}}_{{\mathscr {N}}(j)}^{*})\), \((\varvec{\Sigma })_{ij}=k(\hat{{\mathbf {u}}}_{{\mathscr {N}}(i)},\hat{{\mathbf {u}}}_{{\mathscr {N}}(j)})+\sigma _{\epsilon }^{2}\delta _{ij}\) and \(({\mathbf {K}}_{**})_{ij}=k({\mathbf {u}}_{{\mathscr {N}}(i)}^{*},{\mathbf {u}}_{{\mathscr {N}}(j)}^{*})\). We let \(\varvec{\beta }=\varvec{\Sigma }^{-1}(\tilde{{\mathbf {v}}}-\hat{{\mathbf {u}}})\) and note that it is solely determined by the training data \({\mathscr {D}}\). The mean of \({\mathbf {u}}\) is straightforward:

$$\begin{aligned} E[{\mathbf {u}}]&= E[E[{\mathbf {u}}\vert {\mathbf {u}}^{*}]] \nonumber \\&= E[{\mathbf {u}}^{*}+{\mathbf {K}}_{*}^{T}\varvec{\beta }] \nonumber \\&= {\mathbf {m}}+{\mathbf {L}}^{T}\varvec{\beta } \end{aligned}$$
(8)

where by simply applying (6), we have

$$\begin{aligned} ({\mathbf {L}})_{rj}= & {} \int k(\hat{{\mathbf {u}}}_{{\mathscr {N}}(r)},{\mathbf {u}}_{{\mathscr {N}}(j)}^{*};{\varvec{\Lambda }}^{-1})p({\mathbf {u}}_{{\mathscr {N}}(j)}^{*})d{\mathbf {u}}_{{\mathscr {N}}(j)}^{*} \\= & {} \sigma _{k}^{2}\left| {\mathbf {S}}_{{\mathscr {N}}(j)}{\varvec{\Lambda }}^{-1}+{\mathbf {I}}\right| ^{-1/2} \\&\exp \bigg (-\frac{1}{2}(\hat{{\mathbf {u}}}_{{\mathscr {N}}(r)}-{\mathbf {m}}_{{\mathscr {N}}(j)})^{T}({\mathbf {S}}_{{\mathscr {N}}(j)}+{\varvec{\Lambda }})^{-1}(\hat{{\mathbf {u}}}_{{\mathscr {N}}(r)}-{\mathbf {m}}_{{\mathscr {N}}(j)})\bigg ) \end{aligned}$$

where \({\mathbf {S}}_{{\mathscr {N}}(j)}\) is the sub-covariance matrix that concerns only \({\mathscr {N}}(j)\).
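As a concrete illustration of (8), the following is a minimal sketch, assuming a one-dimensional periodic grid with three-point neighborhoods \({\mathscr {N}}(j)=\{j-1,j,j+1\}\), a diagonal \({\varvec{\Lambda }}\) of squared length scales, and synthetic training data; names such as nbhd, u_hat and v_tilde are illustrative placeholders, not the authors' implementation.

```python
import numpy as np

n_x = 20
lam = np.array([0.5, 0.5, 0.5])        # diagonal of Lambda (squared length scales)
sigma_k2 = 1.0                         # sigma_k^2
sigma_eps2 = 1e-4                      # assumed observation-noise variance
Lam_inv = np.diag(1.0 / lam)

def nbhd(j):
    # three-point neighborhood on a periodic grid (an assumption of this sketch)
    return [(j - 1) % n_x, j, (j + 1) % n_x]

rng = np.random.default_rng(0)
u_hat = rng.standard_normal(n_x)       # training input snapshot
v_tilde = rng.standard_normal(n_x)     # training target snapshot
m = rng.standard_normal(n_x)           # mean of the uncertain test input u*
S = 0.05 * np.eye(n_x)                 # covariance of u*

def k(a, b):
    d = a - b
    return sigma_k2 * np.exp(-0.5 * d @ Lam_inv @ d)

# beta = Sigma^{-1}(v_tilde - u_hat), built from the training data only
Sig = np.array([[k(u_hat[nbhd(i)], u_hat[nbhd(j)]) for j in range(n_x)]
                for i in range(n_x)]) + sigma_eps2 * np.eye(n_x)
beta = np.linalg.solve(Sig, v_tilde - u_hat)

# (L)_{rj}: expected kernel value under u*_{N(j)} ~ N(m_{N(j)}, S_{N(j)})
L = np.empty((n_x, n_x))
for r in range(n_x):
    for j in range(n_x):
        Sj = S[np.ix_(nbhd(j), nbhd(j))]          # sub-covariance S_{N(j)}
        diff = u_hat[nbhd(r)] - m[nbhd(j)]
        A = np.linalg.inv(Sj + np.diag(lam))
        L[r, j] = (sigma_k2
                   / np.sqrt(np.linalg.det(Sj @ Lam_inv + np.eye(3)))
                   * np.exp(-0.5 * diff @ A @ diff))

mean_u = m + L.T @ beta                # equation (8)
print(mean_u[:5])
```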

The variance-covariance matrix of \({\mathbf {u}}\) is slightly more complicated; we consider it component-wise. By the law of total variance,

$$\begin{aligned} cov(u_{i},u_{j})= & {} E[cov(u_{i},u_{j}\vert {\mathbf {u}}^{*})]+cov(E[u_{i}\vert {\mathbf {u}}^{*}],E[u_{j}\vert {\mathbf {u}}^{*}]) \end{aligned}$$

We first compute

$$\begin{aligned}&cov(E[u_{i}\vert {\mathbf {u}}^{*}],E[u_{j}\vert {\mathbf {u}}^{*}])\\&\quad = E[E[u_{i}\vert {\mathbf {u}}^{*}]E[u_{j}\vert {\mathbf {u}}^{*}]]-E[E[u_{i}\vert {\mathbf {u}}^{*}]]\cdot E[E[u_{j}\vert {\mathbf {u}}^{*}]]\\&\quad = E[(u_{i}^{*}+{\mathbf {k}}_{*i}^{T}\varvec{\beta })(u_{j}^{*}+{\mathbf {k}}_{*j}^{T}\varvec{\beta })]-(m_{i}+{\mathbf {l}}_{i}^{T}\varvec{\beta })(m_{j}+{\mathbf {l}}_{j}^{T}\varvec{\beta })\\&\quad = E[u_{i}^{*}u_{j}^{*}]+E[u_{i}^{*}{\mathbf {k}}_{*j}^{T}\varvec{\beta }]+E[{\mathbf {k}}_{*i}^{T}\varvec{\beta }u_{j}^{*}]+\varvec{\beta }^{T}E[{\mathbf {k}}_{*i}{\mathbf {k}}_{*j}^{T}]\varvec{\beta }-(m_{i}+{\mathbf {l}}_{i}^{T}\varvec{\beta })(m_{j}+{\mathbf {l}}_{j}^{T}\varvec{\beta })\\&\quad = S_{ij}+E[u_{i}^{*}{\mathbf {k}}_{*j}^{T}\varvec{\beta }]+E[{\mathbf {k}}_{*i}^{T}\varvec{\beta }u_{j}^{*}]+\varvec{\beta }^{T}(E[{\mathbf {k}}_{*i}{\mathbf {k}}_{*j}^{T}]-{\mathbf {l}}_{i}{\mathbf {l}}_{j}^{T})\varvec{\beta }-(m_{i}{\mathbf {l}}_{j}^{T}+m_{j}{\mathbf {l}}_{i}^{T})\varvec{\beta } \end{aligned}$$

where \({\mathbf {k}}_{*i}\) is the i-th column of \({\mathbf {K}}_{*}\) and \({\mathbf {l}}_{i}\) is the i-th column of \({\mathbf {L}}\). We determine the terms one by one. First,

$$\begin{aligned} E[u_{i}^{*}u_{j}^{*}]=S_{ij}+m_{i}m_{j}. \end{aligned}$$

For the second and third terms, \(E[u_{i}^{*}{\mathbf {k}}_{*j}^{T}]\) is a vector whose r-th entry is

$$\begin{aligned}&\int u_{i}^{*}k(\hat{{\mathbf {u}}}_{{\mathscr {N}}(r)},{\mathbf {u}}_{{\mathscr {N}}(j)}^{*})p({\mathbf {u}}_{{\mathscr {N}}(j)\cup i}^{*})d{\mathbf {u}}_{{\mathscr {N}}(j)\cup i}^{*}\\&\quad = \int u_{i}^{*}k({\mathbf {P}}^{T}\hat{{\mathbf {u}}}_{{\mathscr {N}}(r)},{\mathbf {u}}_{{\mathscr {N}}(j)\cup i}^{*};{\mathbf {P}}^{T}{\varvec{\Lambda }}^{-1}{\mathbf {P}})p({\mathbf {u}}_{{\mathscr {N}}(j)\cup i}^{*})d{\mathbf {u}}_{{\mathscr {N}}(j)\cup i}^{*}\\&\quad = \text {last entry of }({\mathbf {P}}^{T}{\varvec{\Lambda }}^{-1}{\mathbf {P}}+{\mathbf {S}}_{{\mathscr {N}}(j)\cup i}^{-1})^{-1}({\mathbf {P}}^{T}{\varvec{\Lambda }}^{-1}\hat{{\mathbf {u}}}_{{\mathscr {N}}(r)}+{\mathbf {S}}_{{\mathscr {N}}(j)\cup i}^{-1}{\mathbf {m}}_{{\mathscr {N}}(j)\cup i}) \end{aligned}$$

if \(i\notin {\mathscr {N}}(j)\), where \({\mathbf {P}}{\mathbf {u}}_{{\mathscr {N}}(j)\cup i}^{*}={\mathbf {u}}_{{\mathscr {N}}(j)}^{*}\). If \(i\in {\mathscr {N}}(j)\), the r-th entry of \(E[u_{i}^{*}{\mathbf {k}}_{*j}^{T}]\) is the entry of

$$\begin{aligned} ({\varvec{\Lambda }}^{-1}+{\mathbf {S}}_{{\mathscr {N}}(j)}^{-1})^{-1}({\varvec{\Lambda }}^{-1}\hat{{\mathbf {u}}}_{{\mathscr {N}}(r)}+{\mathbf {S}}_{{\mathscr {N}}(j)}^{-1}{\mathbf {m}}_{{\mathscr {N}}(j)}) \end{aligned}$$

that corresponds to the position of i in \({\mathscr {N}}(j)\).

Fourth, \(E[{\mathbf {k}}_{*i}{\mathbf {k}}_{*j}^{T}]\equiv \tilde{{\mathbf {L}}}\) is a matrix with

$$\begin{aligned} {\tilde{L}}_{rs}= & {} \int k(\hat{{\mathbf {u}}}_{{\mathscr {N}}(r)},{\mathbf {u}}_{{\mathscr {N}}(i)}^{*})k(\hat{{\mathbf {u}}}_{{\mathscr {N}}(s)},{\mathbf {u}}_{{\mathscr {N}}(j)}^{*})p({\mathbf {u}}_{{\mathscr {N}}(i)\cup {\mathscr {N}}(j)}^{*})d{\mathbf {u}}_{{\mathscr {N}}(i)\cup {\mathscr {N}}(j)}^{*}\\= & {} \int k({\mathbf {P}}^{T}\hat{{\mathbf {u}}}_{{\mathscr {N}}(r)},{\mathbf {u}}_{{\mathscr {N}}(i)\cup {\mathscr {N}}(j)}^{*};{\mathbf {P}}^{T}{\varvec{\Lambda }}^{-1}{\mathbf {P}})k({\mathbf {Q}}^{T}\hat{{\mathbf {u}}}_{{\mathscr {N}}(s)},{\mathbf {u}}_{{\mathscr {N}}(i)\cup {\mathscr {N}}(j)}^{*};{\mathbf {Q}}^{T}{\varvec{\Lambda }}^{-1}{\mathbf {Q}})\\&p({\mathbf {u}}_{{\mathscr {N}}(i)\cup {\mathscr {N}}(j)}^{*})d{\mathbf {u}}_{{\mathscr {N}}(i)\cup {\mathscr {N}}(j)}^{*} \end{aligned}$$

where \({\mathbf {P}}{\mathbf {u}}_{{\mathscr {N}}(i)\cup {\mathscr {N}}(j)}^{*}={\mathbf {u}}_{{\mathscr {N}}(i)}^{*}\) and \({\mathbf {Q}}{\mathbf {u}}_{{\mathscr {N}}(i)\cup {\mathscr {N}}(j)}^{*}={\mathbf {u}}_{{\mathscr {N}}(j)}^{*}\). We note that the above is in the form of (7) and hence can be readily evaluated.
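Identity (7) can be checked numerically in the same way as (6). The sketch below is a Monte Carlo verification under the same assumed kernel parameterization, with illustrative values for \({\mathbf {W}}\) and \(\hat{{\mathbf {y}}}\); none of the names come from the authors' code.

```python
import numpy as np

# Monte Carlo check of identity (7), reusing the setup of the check of (6).
rng = np.random.default_rng(1)
d = 2
sigma_k2 = 1.5
V = np.array([[2.0, 0.3], [0.3, 1.0]])
W = np.array([[1.0, 0.0], [0.0, 3.0]])
mu = np.array([0.5, -0.2])
Sigma = np.array([[0.4, 0.1], [0.1, 0.3]])
x_hat = np.array([1.0, 0.0])
y_hat = np.array([-0.5, 0.8])

# Monte Carlo estimate of E[k(x_hat, x; V) k(y_hat, x; W)]
xs = rng.multivariate_normal(mu, Sigma, size=500_000)
dx, dy = xs - x_hat, xs - y_hat
kx = sigma_k2 * np.exp(-0.5 * np.einsum('ni,ij,nj->n', dx, V, dx))
ky = sigma_k2 * np.exp(-0.5 * np.einsum('ni,ij,nj->n', dy, W, dy))
mc = np.mean(kx * ky)

# Closed form given by (7), with q = (V+W)^{-1}(V x_hat + W y_hat)
q = np.linalg.solve(V + W, V @ x_hat + W @ y_hat)
A = np.linalg.inv(Sigma + np.linalg.inv(V + W))
closed = (sigma_k2**2
          / np.sqrt(np.linalg.det(Sigma @ (V + W) + np.eye(d)))
          * np.exp(-0.5 * (x_hat @ V @ x_hat + y_hat @ W @ y_hat
                           - q @ (V + W) @ q))
          * np.exp(-0.5 * (q - mu) @ A @ (q - mu)))

print(mc, closed)
```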

Next we compute

$$\begin{aligned} E[cov(u_{i},u_{j}\vert {\mathbf {u}}^{*})]= & {} E[k({\mathbf {u}}_{{\mathscr {N}}(i)}^{*},{\mathbf {u}}_{{\mathscr {N}}(j)}^{*})]-E[Tr(\varvec{\Sigma }^{-1}{\mathbf {k}}_{*i}{\mathbf {k}}_{*j}^{T})] \end{aligned}$$

The first term on the right-hand side is

$$\begin{aligned} E[k({\mathbf {u}}_{{\mathscr {N}}(i)}^{*},{\mathbf {u}}_{{\mathscr {N}}(j)}^{*})]= & {} \int k({\mathbf {u}}_{{\mathscr {N}}(i)}^{*},{\mathbf {u}}_{{\mathscr {N}}(j)}^{*})p({\mathbf {u}}_{{\mathscr {N}}(i)\cup {\mathscr {N}}(j)}^{*})d{\mathbf {u}}_{{\mathscr {N}}(i)\cup {\mathscr {N}}(j)}^{*}\\= & {} \int k(0,{\mathbf {u}}_{{\mathscr {N}}(i)\cup {\mathscr {N}}(j)}^{*};{\mathbf {P}}^{T}{\varvec{\Lambda }}^{-1}{\mathbf {P}})p({\mathbf {u}}_{{\mathscr {N}}(i)\cup {\mathscr {N}}(j)}^{*})d{\mathbf {u}}_{{\mathscr {N}}(i)\cup {\mathscr {N}}(j)}^{*} \end{aligned}$$

where \({\mathbf {P}}{\mathbf {u}}_{{\mathscr {N}}(i) \cup {\mathscr {N}}(j)}^{*}={\mathbf {u}}_{{\mathscr {N}}(i)}^{*}-{\mathbf {u}}_{{\mathscr {N}}(j)}^{*}\). The second term is

$$\begin{aligned} E[Tr(\varvec{\Sigma }^{-1}{\mathbf {k}}_{*i}{\mathbf {k}}_{*j}^{T})]= & {} Tr(\varvec{\Sigma }^{-1}E[{\mathbf {k}}_{*i}{\mathbf {k}}_{*j}^{T}])\\= & {} Tr(\varvec{\Sigma }^{-1}\tilde{{\mathbf {L}}}). \end{aligned}$$
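Collecting the pieces above, and using that \(E[u_{i}^{*}{\mathbf {k}}_{*j}^{T}\varvec{\beta }]=E[u_{i}^{*}{\mathbf {k}}_{*j}]^{T}\varvec{\beta }\), the full covariance assembles as

$$\begin{aligned} cov(u_{i},u_{j})= & {} E[k({\mathbf {u}}_{{\mathscr {N}}(i)}^{*},{\mathbf {u}}_{{\mathscr {N}}(j)}^{*})]-Tr(\varvec{\Sigma }^{-1}\tilde{{\mathbf {L}}})+S_{ij}\\&+(E[u_{i}^{*}{\mathbf {k}}_{*j}]-m_{i}{\mathbf {l}}_{j})^{T}\varvec{\beta }+(E[u_{j}^{*}{\mathbf {k}}_{*i}]-m_{j}{\mathbf {l}}_{i})^{T}\varvec{\beta }+\varvec{\beta }^{T}(\tilde{{\mathbf {L}}}-{\mathbf {l}}_{i}{\mathbf {l}}_{j}^{T})\varvec{\beta } \end{aligned}$$

with every term available in closed form from (6) and (7), so the predictive moments can be propagated through multi-step forecasts without sampling.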


Cite this article

Han, L., He, C., Dinh, H. et al. Learning Biological Dynamics From Spatio-Temporal Data by Gaussian Processes. Bull Math Biol 84, 69 (2022). https://doi.org/10.1007/s11538-022-01022-6
