Emulator-assisted data assimilation in complex models

Published in: Ocean Dynamics

Abstract

Emulators are surrogates of complex models that run orders of magnitude faster than the original model. The utility of emulators for data assimilation into ocean models is still not well understood. The high complexity of ocean models translates into high uncertainty in the corresponding emulators, which may undermine the quality of assimilation schemes based on such emulators. Numerical experiments with the chaotic Lorenz-95 model are conducted to illustrate this point, and a strategy is suggested to alleviate the problem through localization of the emulation and data assimilation procedures. Insights gained from these experiments are used to design and implement a data assimilation scenario for a 3D fine-resolution sediment transport model of the Great Barrier Reef (GBR), Australia.


References

  • Ariathurai R, Krone RB (1976) Finite element model for cohesive sediment transport. J Hydraul Div ASCE 104(HY2):323–328

  • Baird ME, Cherukuru N, Jones E, Margvelashvili N, Mongin M, Oubelkheir K, Ralph P, Rizwi F, Robson B, Schroeder T, Skerratt J, Steven A, Wild-Allen K (2016) Remote-sensing reflectance and true colour produced by a coupled hydrodynamic, optical, sediment, biogeochemical model of the Great Barrier Reef, Australia: comparison with satellite data. Environ Model Software 78:79–96


  • Brando V, Dekker A, Park Y-J, Schroeder T (2012) An adaptive semianalytical inversion of ocean colour radiometry in optically complex waters. Appl Optics 51(15):2808–2833


  • Brando V, Schroeder T, King E, Dyce P (2015) Reef Rescue Marine Monitoring Program: Using remote sensing for GBR-wide water quality, Final Report for 2012/13 Activities, CSIRO Report to the Great Barrier Reef Marine Park Authority, pp 213. Available online http://hdl.handle.net/11017/2971

  • Castelletti A, Galelli S, Restelli M, Soncini-Sessa R (2012) Data-driven dynamic emulation modelling for the optimal management of environmental systems. Environ Model Software 34:30–43. doi:10.1016/j.envsoft.2011.09.003


  • Chen C, Beardsley RC, Cowles G (2006) An unstructured grid, finite-volume coastal ocean model (FVCOM) system. Oceanography 19(1):78–89


  • Forrester AIJ, Keane AJ (2009) Recent advances in surrogate-based optimization. Prog Aerosp Sci 45:50–79. doi:10.1016/j.paerosci.2008.11.001


  • Frolov S, Baptista A, Leen T, Lu Z, van der Merwe R (2009) Fast data assimilation using a nonlinear Kalman filter and a model surrogate: an application to the Columbia River estuary. Dyn Atmospheres Oceans 48:16–45


  • Fuentes M, Guttorp P, Challenor P (2003) Statistical assessment of numerical models. Int Statist Rev 71(2):201–221


  • Grant WD, Madsen OS (1982) Movable bed roughness in unsteady oscillatory flow. J Geophys Res 87(C1):469–481


  • Herzfeld M, Andrewartha JA (2010) Modelling the physical oceanography of D’Entrecasteaux Channel and the Huon Estuary, south-eastern Tasmania. Mar Freshw Res 61:568–586


  • Kennedy M, O’Hagan A (2001) Bayesian calibration of computer models. J R Stat Soc B 63:425–464

  • Kitanidis PK (1986) Parameter uncertainty in estimation of spatial functions; Bayesian analysis. Water Resour Res 22(4):499–507


  • Leeds WB, Wikle CK, Fiechter J (2014) Emulator-assisted reduced-rank ecological data assimilation for nonlinear multivariate dynamical spatio-temporal process. Stat Methodol 17:126–138


  • Liu JS, Chen R (1998) Sequential Monte Carlo methods for dynamic systems. J Am Stat Assoc 93:1032–1044


  • Lorenz EN (1995) Predictability: a problem partly solved. In: Proceedings of the Seminar on Predictability, European Centre for Medium-Range Weather Forecasts 1, 1–18. Online at: http://www.ecmwf.int/publications/library/do/references/show?id=87423. Accessed 01/08/2016

  • Madsen OS (1994) Spectral wave-current bottom boundary layer flows, in Coastal Engineering 1994 Proceedings, 24th International Conference Coastal Engineering Research Council/ASCE, pp 384–398

  • Margvelashvili N, Campbell EP (2012) Sequential data assimilation in fine-resolution models using error-subspace emulators: theory and preliminary evaluation. J Mar Syst 90:13–22. doi:10.1016/j.jmarsys.2011.08.004


  • Mattern JP, Fennel K, Dowd M (2012) Estimating time-dependent parameters for a biological ocean model using an emulator approach. J Mar Syst 96–97:32–47. doi:10.1016/j.jmarsys.2012.01.015

  • Mongin M, Baird M, Tilbrook B, Matear R, Lenton A, Herzfeld M, Wild-Allen K, Skerratt J, Margvelashvili N, Robson B, Duarte C, Gustafsson M, Ralph P, Steven A (2016) The exposure of the Great Barrier Reef to ocean acidification. Nat Commun 7:10732. URL http://dx.doi.org/10.1038/ncomms10732. Accessed 01/08/2016

  • Oakley J, O’Hagan A (2002) Bayesian inference for the uncertainty distribution of computer model outputs. Biometrika 89(4):769–787


  • Razavi S, Tolson BA, Burn DH (2012) Review of surrogate modelling in water resources. Water Resour Res 48. doi:10.1029/2011WR011527

  • Sacks J, Welch W, Mitchell T, Wynn H (1989) Design and analysis of computer experiments. Stat Sci 4:409–423


  • Schaffelke B, Carleton J, Skuza M, Zagorskis I, Furnas MJ (2012) Water quality in the inshore Great Barrier Reef lagoon: implications for long-term monitoring and management. Mar Poll Bull 65:249–260


  • Schiller A, Herzfeld M, Brinkman R, Stuart G (2014) Monitoring, predicting and managing one of the seven natural wonders of the world. Bull Am Meteor Soc 95(1):23–30

  • Schroeder T, Behnert I, Schaale M, Fischer J, Doerffer R (2007) Atmospheric correction algorithm for MERIS above Case-2 waters. Int J Remote Sensing 28(7):1469–1486

  • Wikle CK, Milliff RF, Herbei R, Leeds WB (2013) Modern statistical methods in oceanography: a hierarchical view. Stat Sci 28:466–486


  • Wild-Allen K, Skerratt J, Whitehead J, Rizwi F, Parslow J (2013) Mechanisms driving estuarine water quality: a 3D biogeochemical model for informed management. Estuar Coast Shelf Sci 135:33–45. doi:10.1016/j.ecss.2013.04.009

  • Wilkin JL, Arango HG, Haidvogel DB, Lichtenwalner CS, Glenn SM, Hedstrom KS (2005) A regional ocean modeling system for the long-term ecosystem laboratory. J Geophys Res 110(C6):C06S91


  • Van der Merwe R, Leen T, Frolov S, Baptista A (2007) Fast neural network surrogates for very high dimensional physics-based models in computational oceanography. Neural Netw 20:462–478


  • Ter Braak CJF (2006) A Markov Chain Monte Carlo version of the genetic algorithm differential evolution: easy Bayesian computing for real parameter spaces. Stat Comput 16:239–249. doi:10.1007/s11222-006-8769-1


  • Van Leeuwen PJ (2009) Particle filtering in geophysical systems. Mon Weather Rev 137:4089–4114

  • Van Leeuwen P (2011) Efficient nonlinear data-assimilation in geophysical fluid dynamics. Comput Fluids 46:52–58


  • Vrugt J (2011) DREAM(D): an adaptive Markov Chain Monte Carlo simulation algorithm to solve discrete, noncontinuous, posterior parameter estimation problems. Hydrol Earth Syst Sci Discuss 8:4025–4052. doi:10.5194/hessd-8-4025-2011, www.hydrol-earth-syst-sci-discuss.net/8/4025/2011/



Acknowledgements

The model simulations were developed as part of the eReefs project, a public-private collaboration between Australia’s leading operational and scientific research agencies, government, and corporate Australia. The development of the emulation and assimilation schemes has been funded through the CSIRO Computational and Simulation Sciences Transformational Capability Platform. Remote-sensing data were sourced from the Integrated Marine Observing System (IMOS); IMOS is supported by the Australian Government through the National Collaborative Research Infrastructure Strategy and the Super Science Initiative. Coastal turbidity records have been obtained from the Australian Institute of Marine Science via the Caring for our Country Reef Rescue program. The model simulations have been carried out on the Australian National Computational Infrastructure.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Nugzar Yu Margvelashvili.

Additional information

Responsible Editor: Tal Ezer

This article is part of the Topical Collection on the 7th International Workshop on Modeling the Ocean (IWMO) in Canberra, Australia 1–5 June, 2015

Appendices

Appendix 1

1.1 Gaussian process modeling

A detailed description of Gaussian process modeling (GPM) can be found in Kitanidis (1986), Oakley and O’Hagan (2002), and Fuentes et al. (2003); we outline only the general approach here. GPM is a Bayesian method for emulating complex deterministic models. The output of the model is treated as a realization of a stochastic process, and prior knowledge of this process is updated using an ensemble of model runs to form a posterior distribution. We formulate the prior by decomposing the model function into a mean component with an additive Gaussian error:

$$ f\left(\boldsymbol{\uptheta} \right)=m\left(\boldsymbol{\uptheta} \right)+e\left(\boldsymbol{\uptheta} \right), $$
(10)

where m(θ) = E[f(θ)] and e(θ) is a zero-mean Gaussian process. We further decompose the mean function into a linear sum of basis functions familiar from regression modeling:

$$ m\left(\boldsymbol{\uptheta} \right)=\mathrm{h}{\left(\boldsymbol{\uptheta} \right)}^{\mathrm{T}}\boldsymbol{\upbeta} . $$

Here, h(θ) is a vector of N_h known functions (regressors) and β is a vector of N_h unknown coefficients. We capture the dependence in this Gaussian process by modeling the covariance between two points as

$$ c\left({\boldsymbol{\uptheta}}_1,{\boldsymbol{\uptheta}}_2\right)={\sigma}^2r\left({\boldsymbol{\uptheta}}_1-{\boldsymbol{\uptheta}}_2\right), $$

where r(⋅) is a correlation function satisfying r(0) = 1 and σ² is a variance parameter to be estimated. We adopt the following form for r(⋅),

$$ r\left({\boldsymbol{\uptheta}}_1-{\boldsymbol{\uptheta}}_2\right)= \exp \left\{-{\left({\boldsymbol{\uptheta}}_1-{\boldsymbol{\uptheta}}_2\right)}^T\tilde{\mathbf{R}}\left({\boldsymbol{\uptheta}}_1-{\boldsymbol{\uptheta}}_2\right)\right\}, $$

where \( \tilde{\mathbf{R}}= diag\left\{{r}_i:i=1, \dots,\;{N}_{\theta}\right\} \) is a positive definite matrix of scale parameters.
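For concreteness, the anisotropic squared-exponential correlation above can be coded in a few lines. This is a minimal sketch, not the authors' implementation; the function names (`correlation`, `correlation_matrix`) are our own.

```python
import numpy as np

def correlation(theta1, theta2, scales):
    """r(d) = exp(-d^T R d) with R = diag(scales), so r(0) = 1."""
    d = np.asarray(theta1, float) - np.asarray(theta2, float)
    return float(np.exp(-np.sum(scales * d * d)))

def correlation_matrix(design, scales):
    """Pairwise correlations A_ij = r(theta_i - theta_j) over a list of design points."""
    n = len(design)
    A = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            A[i, j] = correlation(design[i], design[j], scales)
    return A
```

Larger entries in `scales` make the process decorrelate faster along the corresponding parameter axis.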

Our prior knowledge can now be expressed in terms of the parameters β, σ², and \( \tilde{\mathbf{R}} \), but including \( \tilde{\mathbf{R}} \) in the inference problem makes it intractable. Hence, the prior specification is completed by assigning the vague prior p(β, σ²) ∝ σ⁻² and using empirical estimates of the {r_i}. The posterior distributions calculated may therefore be regarded as conditional on these estimates (Kennedy and O’Hagan 2001).

The prior distribution (10) is updated using a training sample

$$ \mathrm{y}=\left\{f\left({\boldsymbol{\uptheta}}_i\right):i= 1,\dots, {N}_y\right\}, $$

which is a set of model simulations for configurations {θ_i : i = 1, …, N_y}. The posterior distribution for f(⋅) given the training sample is a scaled Student t process (Kitanidis 1986; Fuentes et al. 2003),

$$ \frac{f\left(\boldsymbol{\uptheta} \right)-\widehat{m}\left(\boldsymbol{\uptheta} \right)}{\widehat{\sigma}\;\widehat{c}\left(\boldsymbol{\uptheta}, \boldsymbol{\uptheta} \right)}\sim {t}_{N_{\mathrm{y}}-{N}_h} $$
(11)

where

$$ \widehat{m}\left(\boldsymbol{\uptheta} \right)={\widehat{\boldsymbol{\upbeta}}}^T\mathbf{h}\left(\boldsymbol{\uptheta} \right)+{\left(\mathrm{y}-\mathbf{H}\widehat{\boldsymbol{\upbeta}}\right)}^{\mathrm{T}}{A}^{-1}t\left(\boldsymbol{\uptheta} \right), $$
(12)
$$ \begin{array}{l}\widehat{c}\left(\boldsymbol{\uptheta}, \boldsymbol{\uptheta} \right)=c\left(\boldsymbol{\uptheta}, \boldsymbol{\uptheta} \right)-t{\left(\boldsymbol{\uptheta} \right)}^T{\mathbf{A}}^{- 1}t\left(\boldsymbol{\uptheta} \right)+\\ {}{\left[\mathbf{h}\left(\boldsymbol{\uptheta} \right)-{\mathbf{H}}^T{\mathbf{A}}^{\mathit{\hbox{-}}1}t\left(\boldsymbol{\uptheta} \right)\right]}^T{\left({\mathbf{H}}^T{\mathbf{A}}^{- 1}\mathbf{H}\right)}^{- 1}\cdot \left[\mathbf{h}\left(\boldsymbol{\uptheta} \right)-{\mathbf{H}}^T{\mathbf{A}}^{- 1}t\left(\boldsymbol{\uptheta} \right)\right].\end{array} $$
(13)

Here, \( \widehat{\boldsymbol{\upbeta}}={\left({\mathbf{H}}^{\mathrm{T}}{\mathbf{A}}^{- 1}\mathbf{H}\right)}^{- 1}{\mathbf{H}}^{\mathrm{T}}{\mathbf{A}}^{- 1}\mathrm{y} \) is the vector of regression coefficient estimates; A_ij = c(θ_i, θ_j) is the variance matrix at the design points; H = (h(θ_1), …, h(θ_{N_y}))^T; t(θ) = (c(θ, θ_1), …, c(θ, θ_{N_y}))^T; and \( {\widehat{\sigma}}^2=\frac{{\mathbf{y}}^T\left[{\mathbf{A}}^{- 1}-{\mathbf{A}}^{- 1}\mathbf{H}{\left({\mathbf{H}}^{\mathrm{T}}{\mathbf{A}}^{- 1}\mathbf{H}\right)}^{- 1}{\mathbf{H}}^{\mathrm{T}}{\mathbf{A}}^{- 1}\right]\mathbf{y}}{N_y-{N}_h- 2}. \)

The mean value (Eq. (12)) is referred to as the emulator and provides a fast, cheap approximation of the unknown function. The first component of Eq. (12) is a generalized least squares prediction at the point θ; the second component improves the prediction by interpolating the errors between the design points {θ_i : i = 1, …, N_y} and the prediction point θ. At the design points, the predictions reproduce the corresponding observations exactly. As the prediction point θ moves away from the design points, the second component of Eq. (12) goes to zero, leaving the generalized least squares prediction. In our study, we assumed a constant covariance scale (r_i ≡ r for all i), and the regressor term was set to the linear form h(θ) ≡ (1, θ).
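The emulator mean of Eq. (12), with linear regressors h(θ) = (1, θ) and a single correlation scale r, can be sketched as below. This is a minimal illustration under our own assumptions (the names `fit_emulator`/`predict` and the small numerical jitter are not from the paper).

```python
import numpy as np

def fit_emulator(thetas, y, r=1.0):
    """Precompute the GLS coefficients and residual weights of Eq. (12).
    thetas: (Ny, Ntheta) design matrix; y: model outputs at the design points."""
    thetas = np.asarray(thetas, float)
    y = np.asarray(y, float)
    # Correlation matrix A_ij = exp(-r * |theta_i - theta_j|^2)
    d2 = ((thetas[:, None, :] - thetas[None, :, :]) ** 2).sum(-1)
    A = np.exp(-r * d2)
    H = np.column_stack([np.ones(len(y)), thetas])           # regressors h = (1, theta)
    Ainv = np.linalg.inv(A + 1e-10 * np.eye(len(y)))         # jitter for stability
    beta = np.linalg.solve(H.T @ Ainv @ H, H.T @ Ainv @ y)   # GLS estimate of beta
    resid_w = Ainv @ (y - H @ beta)                          # A^{-1} (y - H beta)
    return thetas, beta, resid_w, r

def predict(emulator, theta):
    """Emulator mean, Eq. (12): GLS prediction plus interpolated residual."""
    thetas, beta, resid_w, r = emulator
    theta = np.asarray(theta, float)
    t = np.exp(-r * ((thetas - theta) ** 2).sum(-1))         # correlations t(theta)
    h = np.concatenate([[1.0], theta])
    return float(h @ beta + t @ resid_w)
```

At a design point `t` picks out a column of A, so the prediction reproduces the training output, exactly the interpolation property noted above.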

Appendix 2

1.1 Sampling strategy

1.1.1 Importance sampling and residual resampling

According to the importance sampling technique, the posterior distribution (Eq. 7) can be expressed as a sum of weighted delta functions (Van Leeuwen 2009),

$$ p\left({\mathbf{x}}_i\left|{\mathbf{Y}}_i^o\right.\right)={\displaystyle \sum_{j=1}^N{w}_i^j\delta \left({\mathbf{x}}_i-{\mathbf{x}}_i^j\right)} $$
(14)

Here, the index i refers to time, the index j enumerates individual members of the ensemble, δ is the Dirac delta function, and the weights w are given by

$$ {w}_i^j=\frac{p\left({\mathbf{y}}_i^o\left|{\mathbf{x}}_i^j\right.\right)}{{\displaystyle \sum_{j=1}^Np\left({\mathbf{y}}_i^o\left|{\mathbf{x}}_i^j\right.\right)}} $$
(15)

Using Eqs. (14) and (15), the mean of any function f(x) can be expressed as

$$ \overline{f{\left({\mathbf{x}}_i\right)}_{p\left({\mathbf{x}}_i\left|{\mathbf{Y}}_i^o\right.\right)}}={\displaystyle \int f\left({\mathbf{x}}_i\right)}\cdot p\left({\mathbf{x}}_i\left|{\mathbf{Y}}_i^o\right.\right)d{\mathbf{x}}_i={\displaystyle \sum_{j=1}^N{w}_i^j\cdot f\left({\mathbf{x}}_i^j\right)} $$
(16)

With Eq. (15), one can propagate the weights of the ensemble members forward in time and, given these weights, estimate statistical properties of the ensemble via Eq. (16). For a degenerate ensemble, however, the weights of most particles are close to zero, meaning that these particles represent implausible states of the simulated system. In some cases, the degeneracy problem can be addressed by resampling the particles. The idea is to discard low-weight particles and increase the number of “healthy” particles (i.e., particles with high weights). An overview of established resampling techniques is available in Van Leeuwen (2009). In our study, we employ the residual resampling technique. According to this method, all weights are multiplied by the ensemble size N. “Then n copies are taken of each particle i in which n is the integer part of Nw_i. After obtaining these copies of all members with Nw_i ≥ 1, the integer parts of Nw_i are subtracted from Nw_i. The rest of the particles are drawn randomly from the resulting distribution. This method was introduced by Liu and Chen (1998), who report a substantial reduction in sampling noise and thus an improved performance of the filter …” (Van Leeuwen 2009).
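The weight normalization of Eq. (15) and the residual resampling step above can be sketched as follows; a minimal sketch with hypothetical function names, not the authors' code.

```python
import numpy as np

def importance_weights(log_lik):
    """Normalized importance weights, Eq. (15), computed stably in log space."""
    l = np.asarray(log_lik, float)
    w = np.exp(l - l.max())          # subtract max to avoid underflow
    return w / w.sum()

def residual_resample(weights, rng=None):
    """Residual resampling: keep floor(N*w_i) copies of particle i
    deterministically, then draw the remainder from the residual weights."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(weights, float)
    N = len(w)
    counts = np.floor(N * w).astype(int)      # deterministic copies
    residual = N * w - counts                 # leftover fractional parts
    n_rest = N - counts.sum()
    if n_rest > 0:
        residual /= residual.sum()
        extra = rng.choice(N, size=n_rest, p=residual)
        counts += np.bincount(extra, minlength=N)
    return np.repeat(np.arange(N), counts)    # indices of the resampled particles
```

Particles with zero weight are never copied, while high-weight particles appear multiple times in the returned index array.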

1.1.2 Metropolis-Hastings sampling with differential-evolution proposal

Once the importance sampling residual resampling (IS-RR) step is complete, the assimilation scheme proceeds with Metropolis-Hastings Differential Evolution (MH-DE) sampling of Eq. (7) (Ter Braak 2006; Vrugt 2011). DE provides a random proposal particle in a way that acknowledges the distribution of the forecast particles. The new sample is either accepted or rejected according to the MH criterion. The number of MH-DE iterations for every assimilation window in our study is set to 10 times the size of the model ensemble. The key motivation behind MH-DE is to populate the ensemble with new healthy particles (note that IS-RR can produce identical copies of particles; MH-DE sampling is expected to diversify these samples).

A general MCMC sampling strategy is to draw a random sample from the proposal q(⋅) and accept it with probability

$$ p\left({\mathbf{x}}_i^{j+1}\left|{\mathbf{x}}_i\right.\right)= \min \left[\frac{p\left({\mathbf{y}}_i^o\left|{\mathbf{x}}_i^{j+1}\right.\right)p\left({\mathbf{x}}_i^{j+1}\left|{\mathbf{Y}}_{i-1}^o\right.\right)q\left({\mathbf{x}}_i^j\left|{\mathbf{x}}_i^{j+1}\right.\right)}{p\left({\mathbf{y}}_i^o\left|{\mathbf{x}}_i^j\right.\right)p\left({\mathbf{x}}_i^j\left|{\mathbf{Y}}_{i-1}^o\right.\right)q\left({\mathbf{x}}_i^{j+1}\left|{\mathbf{x}}_i^j\right.\right)},\kern0.36em 1\right] $$
(17)

According to the DE proposal, samples from q() are given by

$$ {\mathbf{x}}_i^{j+1}={\mathbf{x}}_i^{h,j}+\gamma \left[{\mathbf{x}}_i^{n,j}-{\mathbf{x}}_i^{m,j}\right]+{\boldsymbol{\upvarepsilon}}_i $$
(18)

where x_i^{h,j}, x_i^{n,j}, and x_i^{m,j} are random samples from p(x_i^j | Y_{i−1}^o), γ is a scaling constant, and ε_i is an additive error term. Figure 11 illustrates DE sampling.

Fig. 11

Schematic representation of DE sampling

Since the DE proposal is symmetric, the acceptance probability (Eq. (17)) reduces to

$$ p\left({\mathbf{x}}_i^{j+1}\left|{\mathbf{x}}_i\right.\right)= \min \left[\frac{p\left({\mathbf{y}}_i^o\left|{\mathbf{x}}_i^{j+1}\right.\right)p\left({\mathbf{x}}_i^{j+1}\left|{\mathbf{Y}}_{i-1}^o\right.\right)}{p\left({\mathbf{y}}_i^o\left|{\mathbf{x}}_i^j\right.\right)p\left({\mathbf{x}}_i^j\left|{\mathbf{Y}}_{i-1}^o\right.\right)},\kern0.36em 1\right] $$
(19)

The likelihood function in Eq. (19) in our study is assumed to be Gaussian:

$$ p\left({\mathbf{y}}_i^o\left|{\mathbf{x}}_i^j\right.\right)\sim \exp \left\{-\frac{1}{2}{\left[{\mathbf{y}}_i^o-H\left({\mathbf{x}}_i^j\right)\right]}^T{\mathbf{R}}^{-1}\left[{\mathbf{y}}_i^o-H\left({\mathbf{x}}_i^j\right)\right]\right\} $$
(20)

Here, H(x i j) is a measurement operator and R is the error covariance of the observation.

The forecast density in Eq. (19) is given by

$$ p\left(\mathbf{x}\left|{\mathbf{Y}}_{i-1}^o\right.\right)={{\displaystyle \int \left[p\left({\mathbf{x}}_i\left|{\mathbf{x}}_{i-1}\right.\right)p\left({\mathbf{x}}_{i-1}\left|{\mathbf{Y}}_{i-1}^o\right.\right)\right]d\mathbf{x}}}_{i-1} $$
(21)

Note that for a deterministic model, p(x_i | Y_{i−1}^o) = p(x_{i−1} | Y_{i−1}^o), where p(x_{i−1} | Y_{i−1}^o) can be interpreted as a “prior” distribution of the model. The acceptance probability (Eq. (19)) in this case is expressed as a ratio of likelihoods multiplied by the corresponding “priors.” In the degenerate case, the members of the ensemble tend to collapse into a relatively small subregion of the state space. The prior density p(x_{i−1} | Y_{i−1}^o) estimated from these samples will have its major support within that subregion as well. For any new proposal located beyond that subregion, the prior probability and, hence, the acceptance probability are likely to be small, and the ensemble will get stuck in the degenerate state. To facilitate the recovery of such ensembles from the degenerate state, in our study we assume the prior density to be constant, thus reducing the acceptance probability (Eq. (19)) to the ratio of likelihoods

$$ p\left({\mathbf{x}}_i^{j+1}\Big|{\mathbf{x}}_i\right)= \min \left[\frac{p\left({\mathbf{y}}_i^o\left|{\mathbf{x}}_i^{j+1}\right.\right)}{p\left({\mathbf{y}}_i^o\left|{\mathbf{x}}_i^j\right.\right)},\kern0.36em 1\right] $$
(22)

Probability (22) is consistent with the acceptance probability of a batch assimilation of data over the time window (t_{i−1}, t_i) under the assumption of uninformative priors. Despite this assumption, knowledge of observations prior to t_{i−1} is still passed to the new assimilation window via the prior samples delivered from the previous assimilation step. These prior samples are anticipated to reduce the number of burn-in iterations typically required by the MCMC machinery. The impact of the uninformative-prior assumption on the estimated posterior (7) has not been fully investigated in this study.
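One MH-DE sweep combining the DE proposal (Eq. (18)), the Gaussian likelihood (Eq. (20)), and the likelihood-ratio acceptance (Eq. (22)) can be sketched as below. This is an illustrative sketch, not the authors' implementation: the identity measurement operator, the tuning values (γ = 0.7, the ε scale), and the function names are assumptions of ours.

```python
import numpy as np

def log_likelihood(x, y_obs, Rinv, H=lambda x: x):
    """Gaussian log-likelihood of Eq. (20) with measurement operator H."""
    r = y_obs - H(x)
    return -0.5 * r @ Rinv @ r

def mh_de_step(ensemble, y_obs, Rinv, gamma=0.7, eps_scale=1e-4, rng=None):
    """One MH-DE sweep: propose x^h + gamma*(x^n - x^m) + eps for each member
    and accept with the likelihood ratio of Eq. (22) (uninformative prior)."""
    rng = np.random.default_rng() if rng is None else rng
    ens = np.array(ensemble, float)
    N, dim = ens.shape
    for j in range(N):
        h, n, m = rng.choice(N, size=3, replace=False)    # random ensemble members
        prop = ens[h] + gamma * (ens[n] - ens[m]) + eps_scale * rng.standard_normal(dim)
        dll = log_likelihood(prop, y_obs, Rinv) - log_likelihood(ens[j], y_obs, Rinv)
        if np.log(rng.uniform()) < dll:                   # MH accept/reject in log space
            ens[j] = prop
    return ens
```

Repeated sweeps pull the ensemble toward regions of high likelihood while the difference term keeps proposals scaled to the ensemble spread.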


Cite this article

Margvelashvili, N., Herzfeld, M., Rizwi, F. et al. Emulator-assisted data assimilation in complex models. Ocean Dynamics 66, 1109–1124 (2016). https://doi.org/10.1007/s10236-016-0973-8

