
Structural identifiability and sensitivity


Abstract

Ordinary differential equation models often contain a large number of parameters that must be determined from measurements by an estimation procedure. For an estimation to be successful, there must be a unique set of parameters that could have produced the measured data. This is not the case if the model is not structurally identifiable with the given set of inputs and outputs. The local identifiability of linear and nonlinear models was investigated with an approach based on the rank of the sensitivity matrix of the model output with respect to the parameters. Combined with multiple random draws of parameters used as nominal values, the approach reinforces conclusions regarding the local identifiability of models. The numerical implementation for obtaining the sensitivity matrix without any approximation, the extension of the approach to the multi-output context, and the detection of unidentifiable parameters were also discussed. Using elementary examples, we showed that (1) the addition of nonlinear elements can switch an unidentifiable model to an identifiable one; (2) in the presence of nonlinear elements in the model, structural and parametric identifiability are connected issues; and (3) the addition of outputs and/or new inputs improves identifiability conditions. Since the model is the basic tool for obtaining information from a set of measurements, its identifiability must be systematically checked.


Notes

  1. In the following, the term “identifiability” alone refers to structural identifiability.

References

  1. Bellman R, Astrom KJ (1970) On structural identifiability. Math Biosci 7:329–339


  2. Jacquez JA (1996) Compartmental analysis in biology and medicine, 3rd edn. BioMedware, Ann Arbor, pp 309–345


  3. Cobelli C, Romanin-Jacur G (1976) On the structural identifiability of biological compartmental systems in a general input-output configuration. Math Biosci 30:139–151


  4. Walter E, Pronzato L (1995) Identifiabilities and nonlinearities. In: Fossard AJ, Normand-Cyrot D (eds) Nonlinear systems: modeling and estimation 1, vol 1. Chapman and Hall, London, pp 111–143


  5. Jacquez JA (1982) The inverse problem for compartmental systems. Math Comput Simul 24:452–459


  6. Chis OT, Banga JR, Balsa-Canto E (2011) Structural identifiability of systems biology models: a critical comparison of methods. PLoS ONE 6:e27755


  7. Seber GAF, Wild CJ (1989) Nonlinear regression analysis. Wiley series in probability and mathematical statistics. Wiley, New York, pp 32–68


  8. Cintron-Arias A, Banks HT, Capaldi A, Lloyd AL (2009) A sensitivity matrix based methodology for inverse problem formulation. J Inverse Ill-Posed Probl 17:545–564


  9. Rothenberg TJ (1971) Identification in parametric models. Econometrica 39:577–591


  10. Kapur S (1989) Maximum entropy models in science and engineering. Wiley, New York, pp 119–123


  11. Gill PE, Murray W, Wright MH (1981) Practical optimization. Academic Press, London, pp 45–125


  12. MATLAB (2004) High-performance numeric computation and visualization software, 7.0. The Math Works, Natick


  13. Dotsch HGM, VanDenHof PMJ (1996) Test for local structural identifiability of high-order non-linearly parametrized state space models. Automatica 32:875–883


  14. Karlsson J, Anguelova M, Jirstand M (2012) An efficient method for structural identifiability analysis of large dynamic systems. IFAC Proc 45:941–946


  15. Cobelli C, Romanin-Jacur G (1976) Controllability, observability and structural identifiability of multi-input and multi-output biological compartmental systems. IEEE Trans Bio-Med Eng 23:93–100


  16. Shivva V, Korell J, Tucker IG, Duffull SB (2013) An approach for identifiability of population pharmacokinetic–pharmacodynamic models. CPT Pharmacomet Syst Pharmacol 2:e49


  17. Shivva V, Korell J, Tucker IG, Duffull SB (2014) Parameterisation affects identifiability of population models. J Pharmacokin Pharmacodyn 41:81–86


  18. Dahlquist G, Bjorck A (1974) Numerical methods (trans: Anderson N). Series in automatic computation. Prentice Hall, Upper Saddle River, pp 143–156


  19. Chappell MJ, Gunn RN (1998) A procedure for generating locally identifiable reparameterisations of unidentifiable non-linear systems by the similarity transformation approach. Math Biosci 148:21–41


  20. Atkinson AC, Donev AN (1992) Optimum experimental designs. Oxford statistical sciences series. Clarendon Press, Oxford, pp 93–132


  21. Kulcsar C, Pronzato L, Walter E (1994) Optimal experimental design and therapeutic drug monitoring. Int J Biomed Comput 36:95–101


  22. Ogungbenro K, Dokoumetzidis A, Aarons L (2008) Application of optimal design methodologies in clinical pharmacology experiments. Pharm Stat 8:239–252


  23. Walter E, Pronzato L (1990) Qualitative and quantitative experiment design for phenomenological models—a survey. Automatica 26:195–213


  24. Cobelli C, DiStefano JJ (1980) Parameter and structural identifiability concepts and ambiguities: a critical review and analysis. Am J Physiol 239:7–24


  25. Hamby DM (1994) A review of techniques for parameter sensitivity analysis of environmental models. Environ Model Assess 32:135–154


  26. Richalet J, Rault A, Pouliquen R (1971) Identification des Processus par la Méthode du Modèle. Gordon and Breach, Paris, pp 38–78


  27. Bonate PL (2011) Pharmacokinetic-pharmacodynamic modeling and simulation, 2nd edn. Springer, New York, pp 29–40


  28. Yates JW (2006) Structural identifiability of physiologically based pharmacokinetic models. J Pharmacokin Pharmacodyn 33:421–439


  29. Yates JW, Jones RD, Walker M, Cheung SY (2009) Structural identifiability and indistinguishability of compartmental models. Expert Opin Drug Metab Toxicol 5:295–302


  30. Gibiansky L, Gibiansky E, Kakkar T, Ma P (2008) Approximations of the target-mediated drug disposition model and identifiability of model parameters. J Pharmacokin Pharmacodyn 35:573–591


  31. Chappell MJ (1996) Structural identifiability of models characterizing saturable binding: comparison of pseudo-steady-state and non-pseudo-steady-state model formulations. Math Biosci 133:1–20


  32. Janzen DLI, Bergenholm L, Jirstand M, Parkinson J, Yates J, Evans ND, Chappell MJ (2016) Parameter identifiability of fundamental pharmacodynamic models. Front Physiol 7:1–12


  33. Bearup DJ, Evans ND, Chappell MJ (2013) The input-output relationship approach to structural identifiability analysis. Comput Meth Prog Biomed 109:171–181


  34. Anderson TW (1984) An introduction to multivariate statistical analysis. Wiley series in probability and mathematical statistics, 2nd edn. Wiley, New York, pp 43–50


  35. Domurado M, Domurado D, Vansteenkiste S, Marre AD, Schacht E (1995) Glucose oxidase as a tool to study in vivo the interaction of glycosylated polymers with the mannose receptor of macrophages. J Control Release 33:115–123



Author information


Correspondence to Athanassios Iliadis.

Ethics declarations

Conflict of interest

The author declares that he has no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1

Hypotheses

Observations on the system consist of measurements yi from m samples drawn at times ti, i = 1:m. The vectors y and t compile the yi and ti, respectively. The ultimate goal is to fit these data with a model involving p parameters xj, j = 1:p, by maximizing the likelihood function L(x/y), where x is the vector collecting the xj. The fundamental assumption is m > p.

Three working hypotheses are commonly introduced (see the short sketch after the list):

  1. Measurements yi are considered random, obtained by adding the observation error ei to the model output y(ti, x), i.e., yi = y(ti, x) + ei.

  2. The error ei follows the normal distribution ei ∼ N(0, σi²) with zero mean and variance σi². Then, the random yi is distributed according to yi ∼ N[y(ti, x), σi²]. Also, the reduced observation error

    $$\varepsilon_{i} \left( {t_{i} ,\underline{x} } \right) = \frac{{y_{i} - y\left( {t_{i} ,\underline{x} } \right)}}{{\sigma_{i} }}$$

    follows the standard normal distribution, i.e., ɛi(ti, x) ∼ N(0, 1).

  3. Observation errors are independent for different samples, i.e., E[ɛiɛj] = 0 for i ≠ j (whereas E[ɛi²] = 1).
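
For concreteness, the following minimal sketch simulates observations under these three hypotheses; the model output, sampling times, and noise level are hypothetical placeholders, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_output(t, x):
    # Hypothetical single-output model y(t, x); a stand-in for the ODE solution.
    a, b = x
    return a * np.exp(-b * t)

t = np.linspace(0.5, 12.0, 8)           # m sampling times
x_true = np.array([10.0, 0.3])          # p parameters (fundamental assumption: m > p)
sigma = 0.1 * model_output(t, x_true)   # assumed known standard deviations sigma_i

# Hypotheses 1-3: additive, independent, normally distributed observation errors.
y_obs = model_output(t, x_true) + sigma * rng.standard_normal(t.size)

# Reduced observation errors eps_i(t_i, x); each should behave as N(0, 1).
eps = (y_obs - model_output(t, x_true)) / sigma
```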

Likelihood and sensitivity

Under the above conditions, the negative log likelihood to be minimized is

$$- \ln L(\underline{x}/\underline{y}) = \frac{1}{2}\sum\limits_{i = 1}^{m} \ln\left(2\pi \sigma_{i}^{2}\right) + \frac{1}{2}\sum\limits_{i = 1}^{m} \varepsilon_{i}^{2}$$

with derivative with respect to a given model parameter xj

$$- \frac{\partial \ln L(\underline{x}/\underline{y})}{\partial x_{j}} = \sum\limits_{i = 1}^{m} \frac{1}{\sigma_{i}}\frac{\partial \sigma_{i}}{\partial x_{j}} + \sum\limits_{i = 1}^{m} \varepsilon_{i}\frac{\partial \varepsilon_{i}}{\partial x_{j}} = \sum\limits_{i = 1}^{m} \left(1 - \varepsilon_{i}^{2}\right)\frac{1}{\sigma_{i}}\frac{\partial \sigma_{i}}{\partial x_{j}} - \sum\limits_{i = 1}^{m} \varepsilon_{i}\frac{1}{\sigma_{i}}\frac{\partial y\left(t_{i},\underline{x}\right)}{\partial x_{j}}.$$
(5)

The sensitivity of the variance model σi is

$$\rho_{ij} = \frac{{\partial \sigma_{i} }}{{\partial x_{j} }}$$
(6)

and the sensitivity of the model output y(tix) at time ti and with respect to the parameter xj is

$$\varphi_{ij} = \frac{{\partial y\left( {t_{i} ,\underline{x} } \right)}}{{\partial x_{j} }}.$$
(7)

They are compiled in vector forms

$$\underline{\rho }_{j} = [\begin{array}{*{20}c} {\rho_{1j} } & \cdots & {\rho_{mj} } \\ \end{array} ]^{{ \, {\rm T}}} \,\,{\text{and}}\,\,\underline{\varphi }_{j} = [\begin{array}{*{20}c} {\varphi_{1j} } & \cdots & {\varphi_{mj} } \\ \end{array} ]^{{ \, {\rm T}}} .$$

The error terms appearing in relationship (5) are also collected in vector form

$$\underline{w} = \left[ {\begin{array}{*{20}c} {1 - \varepsilon_{1}^{2} } & \cdots & {1 - \varepsilon_{m}^{2} } \\ \end{array} } \right]^{{ \, {\rm T}}} \,\,{\text{and}}\,\,\underline{z} = \left[ {\begin{array}{*{20}c} {\varepsilon_{1} } & \cdots & {\varepsilon_{m} } \\ \end{array} } \right]^{{ \, {\rm T}}}$$

to obtain the final form of the above derivative (5)

$$- \frac{\partial \ln L(\underline{x} /\underline{y} )}{\partial x_{j}} = \underline{w}^{\rm T} \varSigma^{-1/2} \underline{\rho}_{j} - \underline{z}^{\rm T} \varSigma^{-1/2} \underline{\varphi}_{j},$$

where Σ is the m-order diagonal matrix with elements σi².

A similar derivative of ln L(x/y) with respect to another parameter xk is considered, and the expectation of the product of these derivatives is

$$\begin{aligned} {\rm E}\left[ \frac{\partial \ln L(\underline{x}/\underline{y})}{\partial x_{j}}\frac{\partial \ln L(\underline{x}/\underline{y})}{\partial x_{k}} \right] = \; & \underline{\rho}_{j}^{\rm T} \varSigma^{-1/2}\, {\rm E}\left[ \underline{w}\,\underline{w}^{\rm T} \right]\varSigma^{-1/2} \underline{\rho}_{k} - \underline{\varphi}_{j}^{\rm T} \varSigma^{-1/2}\, {\rm E}\left[ \underline{z}\,\underline{w}^{\rm T} \right]\varSigma^{-1/2} \underline{\rho}_{k} \\ & - \underline{\rho}_{j}^{\rm T} \varSigma^{-1/2}\, {\rm E}\left[ \underline{w}\,\underline{z}^{\rm T} \right]\varSigma^{-1/2} \underline{\varphi}_{k} + \underline{\varphi}_{j}^{\rm T} \varSigma^{-1/2}\, {\rm E}\left[ \underline{z}\,\underline{z}^{\rm T} \right]\varSigma^{-1/2} \underline{\varphi}_{k}. \end{aligned}$$

According to the rules of moments for a multivariate normal distribution [34]

$${\rm E}\left[ {\underline{w} \underline{w}^{\rm T} } \right] = 2I\quad {\rm E}\left[ {\underline{z} \underline{w}^{\rm T} } \right] = {\rm E}\left[ {\underline{w} \underline{z}^{\rm T} } \right] = 0\quad {\rm E}\left[ {\underline{z} \underline{z}^{\rm T} } \right] = I,$$

the previous expression becomes

$${\rm E}\left[ {\frac{{\partial \ln L(\underline{x} /\underline{y} )}}{{\partial x_{j} }}\frac{{\partial \ln L(\underline{x} /\underline{y} )}}{{\partial x_{k} }}} \right] = 2\underline{\rho }_{j}^{\rm T} \varSigma^{ - 1} \underline{\rho }_{k} + \underline{\varphi }_{j}^{\rm T} \varSigma^{ - 1} \underline{\varphi }_{k} .$$
(8)

Error variance model

To weight observations, the error variance model

$$\sigma = Ky^{a} \left( {t,\underline{x} } \right)$$
(9)

is commonly used. Because it involves the model output y(ti, x), σ depends on the model parameters x, which justifies the presence of the sensitivities ρij of the variance model in relationship (8). Given the model (9) and definition (7), the sensitivity of the variance model (6) becomes

$$\rho_{ij} = Kay^{a - 1} \left( {t_{i} ,\underline{x} } \right)\frac{{\partial y\left( {t_{i} ,\underline{x} } \right)}}{{\partial x_{j} }} = a\frac{{\sigma_{i} }}{{y\left( {t_{i} ,\underline{x} } \right)}}\varphi_{ij}$$

or, in vector form,

$$\underline{\rho}_{j} = a\varSigma^{1/2} Y^{-1}\underline{\varphi}_{j} \quad {\text{and likewise}} \quad \underline{\rho}_{k} = a\varSigma^{1/2} Y^{-1}\underline{\varphi}_{k},$$

with Y the m-order diagonal matrix with elements y(ti, x).

Accordingly, the final form of relationship (8) is

$${\rm E}\left[ \frac{\partial \ln L(\underline{x}/\underline{y})}{\partial x_{j}}\frac{\partial \ln L(\underline{x}/\underline{y})}{\partial x_{k}} \right] = \underline{\varphi}_{j}^{\rm T} \left[ 2a^{2} Y^{-2} + \varSigma^{-1} \right]\underline{\varphi}_{k} = \underline{\varphi}_{j}^{\rm T} D\,\underline{\varphi}_{k}$$

with

$$D \triangleq 2a^{2} Y^{ - 2} + \varSigma^{ - 1} .$$
(10)

By expanding the above relationship for j, k = 1:p,

$${\rm E}\left[ \frac{\partial \ln L(\underline{x}/\underline{y})}{\partial \underline{x}}\frac{\partial \ln L(\underline{x}/\underline{y})}{\partial \underline{x}^{\rm T}} \right] = \varPhi^{\rm T}\left(\underline{t},\underline{x}\right) D\left(\underline{t},\underline{x}\right)\,\varPhi\left(\underline{t},\underline{x}\right)$$

where the elements φij are compiled in Φ(t, x), the m × p sensitivity matrix of the model output at times t with respect to the model parameters x. Again, D(t, x) is an m-order positive definite matrix depending on the model output matrix Y and on the observation error variance matrix Σ.
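
As an illustration only (not the author's implementation, which obtains the sensitivities without approximation), the matrix Φᵀ(t, x) D(t, x) Φ(t, x) can be assembled numerically and its rank inspected; full rank p at the nominal x indicates local identifiability. The sketch below uses central finite differences for Φ together with the error variance model (9); the function names and step sizes are assumptions.

```python
import numpy as np

def sensitivity_matrix(model_output, t, x, rel_step=1e-6):
    """m x p matrix Phi of d y(t_i, x) / d x_j, by central finite differences."""
    x = np.asarray(x, dtype=float)
    m, p = len(t), len(x)
    Phi = np.zeros((m, p))
    for j in range(p):
        h = rel_step * max(abs(x[j]), 1.0)
        xp, xm = x.copy(), x.copy()
        xp[j] += h
        xm[j] -= h
        Phi[:, j] = (model_output(t, xp) - model_output(t, xm)) / (2.0 * h)
    return Phi

def local_identifiability_rank(model_output, t, x, K=0.1, a=1.0):
    """Rank of Phi^T D Phi with D = 2 a^2 Y^-2 + Sigma^-1 and sigma = K y^a (model (9))."""
    y = model_output(t, x)
    sigma = K * y ** a
    D = np.diag(2.0 * a ** 2 / y ** 2 + 1.0 / sigma ** 2)
    Phi = sensitivity_matrix(model_output, t, x)
    M = Phi.T @ D @ Phi
    return np.linalg.matrix_rank(M), M

# Rank equal to p at the nominal x indicates local identifiability; repeating the
# test for several randomly drawn nominal parameter sets reinforces the conclusion,
# and near-zero eigenvalues of M point to unidentifiable parameter directions.
```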

Appendix 2

The benchmark pharmacokinetic model [35] is described by the set of ordinary differential equations

$$\begin{array}{ll} \dfrac{dy_{1}(t)}{dt} = \alpha_{1}\left[ y_{2}(t) - y_{1}(t) \right] - \dfrac{k_{a} V_{m} y_{1}(t)}{k_{c} k_{a} + k_{c} y_{3}(t) + k_{a} y_{1}(t)} & y_{1}(0) = C_{0} \\ \dfrac{dy_{2}(t)}{dt} = \alpha_{2}\left[ y_{1}(t) - y_{2}(t) \right] & y_{2}(0) = 0 \\ \dfrac{dy_{3}(t)}{dt} = \beta_{1}\left[ y_{4}(t) - y_{3}(t) \right] - \dfrac{k_{c} V_{m} y_{3}(t)}{k_{c} k_{a} + k_{c} y_{3}(t) + k_{a} y_{1}(t)} & y_{3}(0) = \gamma C_{0} \\ \dfrac{dy_{4}(t)}{dt} = \beta_{2}\left[ y_{3}(t) - y_{4}(t) \right] & y_{4}(0) = 0 \\ \end{array}$$

involving nine parameters presented below with their associated domains of definition

$$\begin{array}{*{20}c} {10^{ - 2} \le \alpha_{1} \le 10} \\ {10^{ - 2} \le \alpha_{2} \le 10} \\ {10^{ - 2} \le \beta_{1} \le 10} \\ {10^{ - 2} \le \beta_{2} \le 10} \\ {10^{ - 2} \le k_{a} \le 10} \\ {10^{ - 2} \le k_{c} \le 10} \\ {10 \le V_{m} \le 10^{3} } \\ {100 \le C_{0} \le 10^{4} } \\ {0.1 \le \gamma \le 10} \\ \end{array}$$
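
A minimal SciPy-based sketch of this benchmark model is given below; the solver settings, the choice of y1 as the observed output, and the log-scale sampling of nominal parameter sets are assumptions made for illustration, not specifications from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, a1, a2, b1, b2, ka, kc, Vm, C0, gamma):
    """Right-hand side of the benchmark model (Appendix 2); C0 and gamma enter only via y(0)."""
    y1, y2, y3, y4 = y
    den = kc * ka + kc * y3 + ka * y1
    return [a1 * (y2 - y1) - ka * Vm * y1 / den,
            a2 * (y1 - y2),
            b1 * (y4 - y3) - kc * Vm * y3 / den,
            b2 * (y3 - y4)]

def simulate(t_eval, x):
    """Observed output (assumed here to be y1) at the sampling times t_eval."""
    C0, gamma = x[7], x[8]
    y0 = [C0, 0.0, gamma * C0, 0.0]
    sol = solve_ivp(rhs, (0.0, float(t_eval[-1])), y0, t_eval=t_eval,
                    args=tuple(x), method="LSODA", rtol=1e-8, atol=1e-10)
    return sol.y[0]

# Domains of definition (lower, upper) for the nine parameters
# alpha1, alpha2, beta1, beta2, ka, kc, Vm, C0, gamma.
bounds = np.array([[1e-2, 10], [1e-2, 10], [1e-2, 10], [1e-2, 10],
                   [1e-2, 10], [1e-2, 10], [10, 1e3], [1e2, 1e4], [0.1, 10]])

# Draw several nominal parameter sets (uniform on a log scale, an assumption since
# the domains span orders of magnitude); each draw can be passed, together with
# simulate, to the rank test sketched at the end of Appendix 1.
rng = np.random.default_rng(1)
draws = np.exp(rng.uniform(np.log(bounds[:, 0]), np.log(bounds[:, 1]), size=(5, 9)))
```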


About this article


Cite this article

Iliadis, A. Structural identifiability and sensitivity. J Pharmacokinet Pharmacodyn 46, 127–135 (2019). https://doi.org/10.1007/s10928-019-09624-9

