Abstract
The baroreflex involves a number of control pathways. In this chapter we consider in greater detail the role of the control of unstressed volume mobilization. We also present an alternative approach for choosing the parameters most likely to be estimable, apply this method to a model incorporating the control of unstressed volume, and compare the results to data.
1 Introduction
The baroreflex represents the primary short-term global control response mechanism of the cardiovascular system (CVS). The baroreflex acts to stabilize blood pressure during stresses that alter this pressure. The baroreflex control response includes varying heart rate H, heart muscle contractility S, systemic resistance \({R}_{\mathrm{s}}\), and vascular unstressed volume \({V }_{\mathrm{u}}\) (and perhaps vascular compliance c). Increasing H, S, and \({R}_{\mathrm{s}}\) acts to raise pressure, as does a reduction in unstressed volume, which increases effective blood volume as outlined below.
The baroreflex control of vascular resistance involves contraction or dilation of small arterioles. Increasing vascular contraction in the arterioles will increase systemic resistance which will act to support blood pressure. This contraction can be supplemented or overridden by local mechanisms that adjust local blood flow to respond to local metabolic activity.
The baroreflex also can vary venous vascular volume in a way that affects so-called unstressed volume. Unstressed blood volume \({V }_{\mathrm{u}}\) is the blood volume that fills a vascular element before causing distension of the vascular walls (the filling volume). Any pressure inducing additional volume will stretch the vascular walls to accommodate the additional volume. This additional volume generated by stretching the vascular walls is termed stressed volume, and the pressure generating the distension is termed dynamic pressure (the pressure involved in determining blood flow). When the baroreflex reduces unstressed volume reservoirs, more blood is added to the dynamic circulation, helping to support blood pressure.
As mentioned above, in response to blood pressure change, the baroreflex (in conjunction with local self-regulatory mechanisms) varies the levels of \({V }_{\mathrm{u}}\) (and venous vascular compliance), H, S, and \({R}_{\mathrm{s}}\), allowing for a complex blending of control responses to a variety of CVS stresses. Given the complexity of interactions via the various baroreflex control pathways, modeling can, together with specialized data, provide important insight into this key cardiovascular control mechanism. The material presented in Chap. 10 examines a number of issues related to cardiovascular control during orthostatic stress. This chapter focuses in particular on the role of the control of \({V }_{\mathrm{u}}\).
2 Stressed and Unstressed Vascular Volume
Unstressed volume \({V }_{\mathrm{u}}\) represents reservoirs of blood which can be accessed (mobilized) by control mechanisms to support blood pressure when blood volume is lost or otherwise removed from dynamic circulation. Approximately 25–30 % of total blood volume is unstressed volume mobilizable by baroreflex sympathetic nerve activation [11, 16]. Mobilization of \({V }_{\mathrm{u}}\) helps to maintain mean arterial pressure (MAP) despite the central hypovolemia (low dynamic blood volume) induced by head-up tilt (HUT) or lower body negative pressure (LBNP), both of which induce CVS stresses similar to orthostatic stress (stress due to blood pooling in the lower extremities during upright posture). Further discussion of orthostatic stress is given in Sect. 10.1. In addition, the control of \({V }_{\mathrm{u}}\) can be an important control component when the system is responding to blood loss such as occurs during hemorrhage [6].
Figure 11.1 indicates the relation between compliance, pressure, and stressed and unstressed volume. As volume is introduced above the filling volume of a vascular element (this filling volume, as mentioned above, is \({V }_{\mathrm{u}}\)), pressure induces a stretching that accommodates an additional (stressed) volume in the vascular element. The volume added due to stretch depends on the elastic characteristics of the walls responding to the pressure. Compliance is the derivative of the (stressed) volume with respect to the pressure inducing this volume. Compliance actually varies with the level of stretch in the vascular walls (as elastic properties change with stretch) [18]. However, over narrow pressure ranges, compliance is typically assumed constant or piecewise constant. The sum of stressed and unstressed volume divided by the distending pressure will be termed the capacitance of the vascular element (a number of definitions of capacitance exist).
3 Model Structure
The physiological complexities described above imply that mathematical modeling is necessary to study quantitatively the interaction of the various factors and mechanisms involved in short-term CVS control. In particular, our purpose in this chapter is to consider the parameter estimation process using an example of patient-specific data. We describe a different subset selection approach for determining which parameters to estimate. This method can be compared to the approaches discussed in Chaps. 2, 3, and 10.
Given the focus on the parameter estimation aspect, we present here only an overview of the model structure that we employ to study the baroreflex control during an HUT or LBNP test. This model includes features of unstressed volume control. Variations of the model applied in this paper have been used to model orthostatic stress (LBNP or HUT) and blood loss due to hemorrhage [5, 6, 12]. Details of the full model can be found in Appendix A.1 of [1] available at www.uni-graz.at/imawww/reports/index.html.
The model includes ten compartments representing various body tissues as well as 11 additional state variables associated with control mechanisms and plasma–interstitial fluid exchange. For the purpose of parameter identification, the mathematical model equations and the corresponding parameter-sensitivity equations were generated symbolically using a specialized software tool [19]. The blood compartments were expressed directly in terms of compartmental blood volumes rather than blood pressures as done in [1]. This equivalent representation simplifies the associated mass balance relations, which become plain expressions of flows between compartments that are independent of the time derivatives of compliances, unstressed volumes, and bias pressures. The instantaneous blood pressure–volume relation for each compartment is expressed, as shown in Fig. 11.1, by a piecewise linear function. The effect of control on unstressed volume, at a given fixed compartmental volume, shifts compartmental pressures like a wedge moving vertically. Variations of unstressed volume can therefore, in principle, produce instantaneous pressure variations without violating continuity conditions imposed upon total volume.
3.1 Mass Balance Equations
The generic form for mass balance relations depends upon the interplay of several model variables: P represents pressure, c compliance, V the volume of a compartment, F the flow between vascular compartments, and R the resistance to flow between compartments, alongside other model variables and parameters. The notion of model parameter adopted in the following is rather flexible. A parameter may refer in a first instance to an adjustable coefficient that remains constant during a simulation run, in contrast to fixed constants such as \(\pi = 3.14\ldots\); in a second instance it may become a time-varying function that is either user-defined, e.g., a model input, or a function of other variables in the system.
Compartments and modeled control relations are depicted in Fig. 11.2. Subscripts reference the compartments in this block diagram in a straightforward way using the symbols "\(\mathrm{as}\)", "per", "up", "ren", "spl", "leg", "avc", "vc", "ap", and "vp". For example, "ren" refers to the renal compartment and "avc" refers to the abdominal vena cava (see Table 11.2). For each compartment, variations in compliance, local resistance, and \({V }_{\mathrm{u}}\) can be induced using various formats of baroreflex control mechanisms (or local mechanisms) as described below.
The standard form of the volume dynamics of compartment "comp" has the mass balance relation given by

\(\dfrac{d{V }_{\mathrm{comp}}}{dt} = {F}_{\mathrm{in,comp}} - {F}_{\mathrm{out,comp}}\)   (11.1)
where \({F}_{\mathrm{in,comp}}\) represents the natural circulating blood flow into the compartment as well as additions to the compartment via external routes, e.g., blood infusion, and \({F}_{\mathrm{out,comp}}\) the natural circulating flow out of the compartment including blood loss via vascular flows; hemorrhage, for example, could be included as such a term. By viewing the overall CVS, including cumulative blood volume loss/gain, as a closed system, the total volume of blood in the system becomes a constant. This would allow one, in principle, to consider the dynamics of a reduced (by one) number of compartments, and to calculate the volume of one compartment as the difference between the total blood in the system and the sum of the remaining compartmental volumes. While this strategy has been adopted in [1], it has been dropped in this study for the sake of simplicity in the model formulation and in the symbolic derivation of parameter sensitivities.
The instantaneous pressure of a generic vascular compartment "\(\mathrm{comp}\)" in the model is given by

\({P}_{\mathrm{comp}} = \dfrac{\lfloor {V }_{\mathrm{comp}} - {V }_{\mathrm{u,comp}}\rfloor }{{c}_{\mathrm{comp}}} + {P}_{\mathrm{bias,comp}}\)   (11.2)
where c is compliance, \({V }_{\mathrm{u,comp}}\) is unstressed volume, \(\lfloor {V }_{\mathrm{comp}} - {V }_{\mathrm{u,comp}}\rfloor \) is non-negatively constrained stressed volume, and \({P}_{\mathrm{bias,comp}}\) is any additional external (orthostatic, positive or negative) pressure to compartment “\(\mathrm{comp}\)”. More specifically, \({P}_{\mathrm{bias,comp}}\) reflects transmural pressure viewed here, in contrast with [1], as outside minus inside pressure so that a positive \({P}_{\mathrm{bias}}\) term represents a higher outside pressure that will eventually decrease the compartmental volume. Similarly, a negative term represents a lower outside pressure that causes vascular volume to increase.
It is important to stress that \({P}_{\mathrm{bias}}\) is the main external input available for non-invasive experimentation aimed to infer upon the function of the baroreflex control system in humans. This is possible through the collection and model-based analysis of variations in heart rate and other measurable physiological cardiovascular variables following arterial and venous pressure changes, which can be elicited through external lower body positive or negative pressure, or following gravitational stress due to orthostasis stretching the lower limb walls and generating in effect an additional unstressed volume contribution to total volume. An open problem remains the attribution of the correct extent of bias pressure elicited at various blood compartments during different perturbation experiments.
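The compartmental pressure–volume relation can be sketched as follows. This is a minimal illustration of the piecewise-linear relation with a non-negatively constrained stressed volume, using the sign convention described above (a positive bias represents a higher outside pressure); the function name and all numbers are illustrative, not taken from the model code of [1].

```python
def compartment_pressure(V, V_u, c, P_bias=0.0):
    # Stressed volume is constrained non-negative: below the filling
    # volume V_u the walls are not distended and the elastic term vanishes.
    stressed = max(V - V_u, 0.0)
    return stressed / c + P_bias

# At the filling volume the elastic contribution is zero:
print(compartment_pressure(V=500.0, V_u=500.0, c=50.0))  # 0.0
# Above it, pressure rises linearly with slope 1/c:
print(compartment_pressure(V=600.0, V_u=500.0, c=50.0))  # 2.0
```

A control-induced reduction of `V_u` at fixed `V` enlarges the stressed volume and raises the pressure instantaneously, which is the "wedge" effect described in Sect. 3.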
The generic expression of blood flow entering (most) compartments due to differences with arterial pressure is given, according to Ohm's law, by

\({F}_{\mathrm{in,comp}} = \dfrac{{P}_{\mathrm{as}} - {P}_{\mathrm{comp}}}{{R}_{\mathrm{in,comp}}}\)   (11.3)
where \({R}_{\mathrm{in,comp}}\) is the arterial vascular resistance of compartment "\(\mathrm{comp}\)". Similarly, blood flow leaving (most) compartments towards a generic venous pool "v" is given by

\({F}_{\mathrm{out,comp}} = \dfrac{\lfloor {P}_{\mathrm{comp}} - {P}_{\mathrm{v}}\rfloor }{{R}_{\mathrm{out,comp}}}\)   (11.4)
where \({R}_{\mathrm{out,comp}}\) is the venous vascular resistance of compartment "\(\mathrm{comp}\)", and \(\lfloor \cdot \rfloor \) represents the non-negativity constraint on blood flow in the presence of venous vascular valves.
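The generic flow relations can be sketched directly; all names and numbers below are illustrative.

```python
def inflow(P_as, P_comp, R_in):
    """Arterial inflow to a compartment, Ohm's law analog."""
    return (P_as - P_comp) / R_in

def outflow(P_comp, P_v, R_out):
    """Venous outflow; the floor at zero mimics venous valves
    preventing backflow when downstream pressure is higher."""
    return max(P_comp - P_v, 0.0) / R_out

print(inflow(100.0, 25.0, 15.0))  # 5.0
print(outflow(25.0, 5.0, 4.0))    # 5.0
print(outflow(5.0, 25.0, 4.0))    # 0.0  (valve closed)
```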
Mean arterio-venous pressure differences are sustained separately by left and right heart cardiac outputs, which are of course identical on average. Left and right heart cardiac outputs are modeled as the product of heart rate H and the respective stroke volumes, which depend upon the respective ventricular contractilities and the corresponding pre- and after-loads [1]. It must be underlined that the CVS model does not describe pulsatile blood flow but only the variability of cardiovascular quantities averaged over single heart beats.
3.2 Control Equations
Control response depends on sensory input to the baroreflex reflecting systemic arterial pressure \({P}_{\mathrm{as}}\) and systemic venous pressure represented in the model by \({P}_{\mathrm{vc}}\) (vena cava pressure) as depicted in Fig. 11.2. Assumptions on the distribution of control effects to various compartments can be found in [1]. We will apply the same control presented in Chap. 10. The generic form of a baroreflex feedback control loop is implemented by

\(\dfrac{dx(t)}{dt} = \dfrac{1}{\tau }\left(-x(t) + {x}_{\mathrm{ctr}}(\bar{P})\right),\qquad {x}_{\mathrm{ctr}}(\bar{P}) = {x}_{\mathrm{min}} + \dfrac{{x}_{\mathrm{max}} - {x}_{\mathrm{min}}}{1 + {e}^{\beta (\bar{P}-{\alpha }_{{x}_{P}})}}\)   (11.5)
where x(t) is the control, \(\bar{P}\) is a current pressure, and τ is a time constant that characterizes the time it takes (delay effect) for the control variable to obtain its full effect. The expression \({x}_{\mathrm{ctr}}\) is a set-point function. It reflects the observed baroreflex characteristic of a decreasing or increasing sigmoidal relation of the control variable (decreasing for heart rate and resistance, increasing for \({V }_{\mathrm{u}}\)) in response to the level of blood pressure. Here \({x}_{\mathrm{min}}\) and \({x}_{\mathrm{max}}\) are the minimum and maximum values for the controlled parameter x, respectively. The quantity \({\alpha }_{{x}_{P}}\) is the resting nominal pressure referencing a midpoint in the control level. Also, β helps to determine the steepness of the sigmoid and hence is connected to the characteristic gain. Further details on the development of this control can be found in [1, 14, 15]. The final system steady state need not be exactly \({\alpha }_{{x}_{P}}\). The above equation is decreasing in \(\bar{P}\) and hence can be employed for heart rate and resistance control. Reversing the maximum and minimum value positions generates an increasing function, which is appropriate for unstressed volume control. Note also that the choice of \({\alpha }_{{x}_{P}}\) adjusts the relative position of the control value between the maximum and minimum values in steady state. We assume a central position for heart rate and resistance, while assuming unstressed volume is near its maximum value (which implies that the control responds primarily during volume reductions). Other formulations of control, such as given in [2, 22], can easily be incorporated as well.
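The generic control loop above can be sketched as follows. The logistic sigmoid is one standard realization of the decreasing set-point function matching the description in the text; the exact functional form used in the model, and all parameter values below, are illustrative (see [1] for the model's own forms).

```python
import numpy as np

def x_ctr(P, x_min, x_max, alpha, beta):
    # Decreasing in P for beta > 0; swapping x_min and x_max gives the
    # increasing variant used for unstressed volume control.
    return x_min + (x_max - x_min) / (1.0 + np.exp(beta * (P - alpha)))

def dxdt(x, P, tau, x_min, x_max, alpha, beta):
    # First-order approach of the control x toward its set-point,
    # with time constant tau (the delay effect described in the text).
    return (x_ctr(P, x_min, x_max, alpha, beta) - x) / tau

# At P = alpha the set-point sits midway between x_min and x_max:
print(x_ctr(100.0, 60.0, 180.0, 100.0, 0.1))  # 120.0
# The loop is at rest when the control equals its set-point:
print(dxdt(120.0, 100.0, 5.0, 60.0, 180.0, 100.0, 0.1))  # 0.0
```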
3.3 Control Responses: Unstressed Volume and Systemic Resistance
As mentioned above, complete details of the full model can be found in [1]. We summarize here the implementation of the unstressed volume and resistance controls:
-
Each vascular compartment includes a degree of unstressed volume.
-
Baroreflex changes in systemic resistance \({R}_{\mathrm{s}}\) are distributed among the inflow resistances of the relevant compartments depicted in Fig. 11.2. Note that the change in resistance (\(\Delta {R}_{\mathrm{s}}\)) is a variable representing the sympathetic drive to vary \({R}_{\mathrm{s}}\) by some amount. In principle \(\Delta {R}_{\mathrm{s}}\) could grow very large, but local changes are constrained by autoregulation, through parameters that prevent an increase in local resistance from blocking a minimum blood flow.
-
A similar partition is implemented for \(\Delta {V }_{\mathrm{u}}\). Unstressed volume is distributed among several compartments, but changes are assumed to be implemented only in certain compartments, namely the renal and splanchnic compartments.
-
H enters the model in only one equation at one place, so no such distribution is necessary.
4 Data
The data used in this paper were collected from HUT tests [9]. One data set is applied to parameter estimation. Additional representative research and typical experimental designs for such tests can be found in [8, 9, 13]. Figures 11.3 and 11.4 illustrate the characteristics of the data that were collected. Measurements were taken of systolic and diastolic blood pressure, from which mean pressure is calculated. Heart rate was calculated from observed RR intervals. In addition, central venous pressure (CVP) was measured invasively, and muscle sympathetic nerve activity was measured to provide an assessment of the sympathetic response to orthostatic stress. Respiratory movement was measured to allow a more accurate assessment of heart rate and blood pressure variability and to assess respiratory modulation of sympathetic neural traffic. Several points should be made:
-
Raw data: Arterial pressure and RR intervals were collected essentially continuously. The data were collected using the Finapres system, which monitors RR intervals between heart beats and employs a finger cuff (calibrated by the typical arm cuff) to monitor blood pressure. Central venous pressure was measured invasively with sensor transducers placed in venous return pathways to the right heart (median or basilic vein). Other hemodynamic quantities, such as stroke volume and systemic resistance, could also be monitored, but these variables are estimated by the Finapres using internal modeling strategies. These values are most useful for following dynamic changes and were not used as part of the estimation process. The data, as can be seen from Fig. 11.3, include noise and artifacts.
-
Processed data: These data were derived by removing artifacts and computing a moving average of the measured values to smooth the data, as depicted in Fig. 11.4.
As a result of artifacts, data were used beginning at 900 s, near the start of the HUT, and were followed for about 15 min as discussed below. A number of other non-invasive but technically demanding measurements are possible, including Doppler measurement of blood flow velocity to estimate cardiac output and near-infrared spectroscopy (NIRS) to monitor regional blood flows. These measurements could enhance the estimation process.
5 Model Identification
The CVS model is described by a system of 21 nonlinear differential equations that define the dynamics of compartmental blood volumes and of auxiliary state variables. The number of (potentially) adjustable parameters is 114, and it is evident that not all of them are identifiable from the adopted input–output experiment. In particular, the model outputs considered for parameter estimation are heart rate H, systolic pressure \({P}_{\mathrm{as}}\) and central venous pressure (vena cava) \({P}_{\mathrm{vc}}\). The measured outputs therefore coincide with three state variables of the system, which is, however, irrelevant for identification purposes.
The perturbation experiment consisted of a HUT test with stepwise increments of the inclination angle of a tilt table, starting from the horizontal resting condition with the patient in supine position. To approximate the pressure bias \({P}_{\mathrm{bias}}\) provoked during the HUT perturbation test, the model input to the CVS was expressed, in a first attempt, as a staircase, piece-wise constant function with increments of 10 mmHg every 3 min beginning at 15 min. The input bias pressure was assumed to cause an equal decrease of \({P}_{\mathrm{bias}}\) in the leg compartment, and a partial (30 %) decrease in the splanchnic and abdominal vena cava regions. The pressure bias chosen to correspond to the degree of HUT (from 0\({}^{\circ }\) up to about 65\({}^{\circ }\)) was based on typical conversion correspondences between LBNP pressure and HUT degree found in the literature (e.g., [10]).
The above staircase input was used during an early stage of model identification, but did not provide satisfactory results because the model outputs exhibited, unlike the data, rapid transients coinciding with the changes in bias pressure during HUT. The second, and more successful, attempt at describing the model input consisted of a continuously varying bias pressure with a constant slope of 10/3 mmHg per min.
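The two input representations can be sketched as follows. The function names are illustrative; the numbers (10 mmHg steps every 3 min beginning at minute 15, slope 10/3 mmHg per min) are taken from the text, and the sign applied per compartment follows the distribution described above.

```python
def staircase_bias(t_min, step=10.0, period=3.0, start=15.0):
    """First attempt: piece-wise constant HUT input,
    one 10 mmHg increment every 3 min from t = 15 min."""
    if t_min < start:
        return 0.0
    return step * (int((t_min - start) // period) + 1)

def ramp_bias(t_min, slope=10.0 / 3.0, start=15.0):
    """Second attempt: continuously varying input with constant slope."""
    return max(t_min - start, 0.0) * slope

print(staircase_bias(14.0))  # 0.0
print(staircase_bias(16.0))  # 10.0
print(staircase_bias(21.5))  # 30.0
print(ramp_bias(18.0))
```

The ramp avoids the step discontinuities that produced the spurious rapid transients in the model outputs.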
With either model input representation, the single-input multiple-output model proved clearly unidentifiable according to the criteria described below, and a model order reduction by subspace selection for parameter identification appeared necessary; it was implemented as follows.
6 Sensitivity Identifiability: A Subset Selection Approach
Parameter identification plays a central role in physiological systems modeling, for validating modeling hypotheses against experimental data and, in general, for solving the inverse problem in practical applications. Either global or local identifiability is a mandatory requirement for estimating model parameters with some degree of confidence from input–output experiments. The most restrictive requirement is global a priori identifiability, a structural property of a model that is in general ascertainable only for particular classes of models of reduced complexity. In contrast, local a posteriori identifiability can be thought of as the least restrictive requirement for estimated parameters to optimize locally, yet uniquely, the cost function associated with the adopted fitting criterion, e.g., weighted non-linear least squares or maximum likelihood. For continuously differentiable cost functions the local optimum is characterized by vanishing gradients with respect to the parameters calculated at the optimal solution, and the optimizing parameter vector is uniquely defined, according to the inverse function theorem, if the Jacobian matrix is non-singular. Since the Jacobian matrix generally depends upon the parameter sensitivities of the measured model outputs taken at discrete sampling times, the requirements for local identifiability can be expressed in terms of the properties of the parameter-sensitivity matrix of the measured outputs.
Various strategies exist to overcome lack of local identifiability. These include modifications of the cost function, such as in Bayesian inference by including prior information on parameters; reduction of the dimension of the vector of estimated parameters down to an identifiable subset; or linear transformations of the parameter space and subsequent selection of a reduced-rank subspace with a smaller number of actually estimated parameters. In this paper, we apply the latter approach of reduced-rank subspace selection for parameter identification, based on singular value decomposition.
6.1 Parameter Identification Framework
For the purpose of model parameter estimation using non-linear weighted least squares (NLWLS) we consider a generic model described by a system of non-linear ordinary differential equations

\(\dot{\mathbf{x}}(t) = \mathbf{f}(\mathbf{x}(t),\mathbf{p},\mathbf{u}(t),t)\)   (11.6)
where \(\mathbf{x}(t)\, \in {\mathbb{R}}^{{n}_{x}}\) is the state trajectory with initial condition \(\mathbf{x}(0) ={ \mathbf{x}}_{0}(\mathbf{p})\), \(\mathbf{p} \in {\mathbb{R}}^{{n}_{p}}\) is the parameter vector and \(\mathbf{u}(t)\ \in {\mathbb{R}}^{{n}_{u}}\) is the input vector. The measurable output vector is, generally, given by a system of non-linear functions

\(\mathbf{y}(t,\mathbf{p}) = \mathbf{g}(\mathbf{x}(t),\mathbf{p})\)   (11.7)
where \(\mathbf{y}(t,\mathbf{p})\, \in {\mathbb{R}}^{{n}_{y}}\) is expressed explicitly as a function of the parameter vector p, because x(t) is itself a function of p according to Eq. (11.6). The dependence of \(\mathbf{y}(t,\mathbf{p})\) upon a known input u(t) is tacit. In the present study \({n}_{x} = 21\), \({n}_{p} = 114\), \({n}_{u} = 1\), \({n}_{y} = 3\), and g is linear.
Parameter identification is based on noisy measurements, taken over a finite horizon at discrete time points \(\{{t}_{j},j = 1,\ldots,N\}\), and given by

\(\mathbf{z}({t}_{j}) = \mathbf{y}({t}_{j},{\mathbf{p}}^{{_\ast}}) + \mathbf{e}({t}_{j})\)   (11.8)
where \(\mathbf{e}({t}_{j})\) is assumed, for simplicity, to be zero-mean uncorrelated white noise with known diagonal covariance matrix, and \({\mathbf{p}}^{{_\ast}}\) represents the true parameter vector that generated the particular set of observed data. With the given hypotheses about measurement noise, Eq. (11.8) can be expressed in terms of the scalar components

\({z}_{i}({t}_{j}) = {y}_{i}({t}_{j},{\mathbf{p}}^{{_\ast}}) + {e}_{i}({t}_{j}),\quad i = 1,\ldots,{n}_{y};\; j = 1,\ldots,N\)   (11.9)
It is worth stressing that \(\{{\mathbf{z}}_{i}({t}_{j}),i\,=\,1,\ldots,{n}_{y};j\,=\,1,\ldots,N\}\) represent experimental data, while \({\mathbf{y}}_{i}({t}_{j},\mathbf{p})\) represents the i-th simulated model output at time \({t}_{j}\) calculated for a particular value of the parameter vector p. Moreover, by recognizing that \({\mathbf{p}}^{{_\ast}}\) will remain largely unidentifiable, the role of \({\mathbf{p}}^{{_\ast}}\) in Eqs. (11.8) or (11.9) is considered of minor importance. A more practical approach is to assign initial values, \({\mathbf{p}}_{0}\), on the basis of prior knowledge and hypotheses about the CVS, and to improve the quality of model predictions by fitting the model outputs to available data through adjustments of a reduced subset of the parameter vector \(\mathbf{p}\). Any prior information available on parameters, such as positivity constraints or bounds, can be included in the model equations. In this study we constrained parameters to be positive by means of the log-transformation, which consists of replacing a generic positively constrained parameter, p > 0, with \({e}^{\ln p}\), where the unbounded \(\ln p\) replaces p in the list of parameters. This transformation has several advantages, including increased robustness of numerical simulation and implicit parameter scaling in the calculation of sensitivities.
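The log-transformation can be sketched in a few lines; the parameter values below are illustrative, not taken from the CVS model.

```python
import numpy as np

def to_unbounded(p):
    # ln(p): the quantity actually adjusted by the optimizer.
    return np.log(p)

def to_positive(q):
    # p = exp(ln p): the quantity entering the model equations.
    return np.exp(q)

p0 = np.array([0.05, 2.0, 1500.0])       # e.g. a compliance, a resistance, a volume
q0 = to_unbounded(p0)
assert np.allclose(to_positive(q0), p0)  # round trip: exp(ln p) = p
# Any real-valued update of q maps back to a strictly positive parameter:
assert np.all(to_positive(q0 - 10.0) > 0.0)
```

Because \(dy/d(\ln p) = (dy/dp)\,p\), sensitivities with respect to \(\ln p\) are automatically scaled by the parameter magnitude, which is the implicit scaling advantage mentioned above.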
Irrespective of non-linear parameter transformations, the cost function used for NLWLS is the weighted sum of squares given by

\(\mathit{WSS}(\mathbf{p}) = \sum\limits_{i=1}^{{n}_{y}}\sum\limits_{j=1}^{N}{w}_{ij}{\left({z}_{i}({t}_{j}) - {y}_{i}({t}_{j},\mathbf{p})\right)}^{2}\)   (11.10)
where the weights \({w}_{ij}\) are usually taken as the reciprocal of the measurement noise variance of output \({\mathbf{y}}_{i}\) sampled at time \({t}_{j}\), but can also be used to exclude a dubious data point by setting \({w}_{ij} = 0\), or to fit primarily one particular model output to the related data by increasing the corresponding weights. Equation (11.10) can be expressed more concisely as

\(\mathit{WSS}(\mathbf{p}) = {\left(\mathbf{Z} -\mathbf{Y}(\mathbf{p})\right)}^{\mathsf{T}}\mathbf{W}\left(\mathbf{Z} -\mathbf{Y}(\mathbf{p})\right)\)   (11.11)
where Z and Y(p) represent the vectors of sequential measurements and model outputs, respectively, e.g., \(\mathbf{Z} = {[{\mathbf{z}}_{1}({t}_{1}),\ldots,{\mathbf{z}}_{1}({t}_{N}),{\mathbf{z}}_{2}({t}_{1}),\ldots,{\mathbf{z}}_{2}({t}_{N}),\ldots,{\mathbf{z}}_{{n}_{y}}({t}_{1}),\ldots,{\mathbf{z}}_{{n}_{y}}({t}_{N})]}^{\mathsf{T}}\), and W is the diagonal weighting matrix.
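A minimal sketch of the weighted sum of squares with diagonal weights; data, outputs and weights are illustrative. As noted above, a zero weight excludes a dubious data point.

```python
import numpy as np

def wss(Z, Y, w):
    # Weighted sum of squared residuals with diagonal weights w.
    r = Z - Y
    return float(np.sum(w * r * r))

Z = np.array([1.0, 2.0, 3.0])   # measurements
Y = np.array([1.5, 2.0, 2.0])   # model outputs
w = np.array([4.0, 1.0, 0.0])   # third point excluded via w = 0
print(wss(Z, Y, w))             # 1.0  (= 4*0.25 + 1*0 + 0)
```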
Given the above notation, well-known properties and results are derived in the following. The NLWLS problem yields the parameter estimates defined as

\(\hat{\mathbf{p}} =\arg \min _{\mathbf{p}}\ \mathit{WSS}(\mathbf{p})\)   (11.12)
The optimal solution is characterized by the optimality condition

\({\nabla }_{\mathbf{p}}\mathit{WSS}(\hat{\mathbf{p}}) = -2\,\mathbf{S}{(\hat{\mathbf{p}})}^{\mathsf{T}}\mathbf{W}\left(\mathbf{Z} -\mathbf{Y}(\hat{\mathbf{p}})\right) = \mathbf{0}\)   (11.13)
where \(\mathbf{S}(\mathbf{p})\) is the sensitivity matrix of the model outputs \(\mathbf{Y}(\mathbf{p})\) (see below). Moreover, the local behavior of the cost function (11.11) around the optimum \(\hat{\mathbf{p}}\) is characterized by its Hessian matrix, which must be positive definite in order to uniquely characterize the local optimal solution \(\hat{\mathbf{p}}\). This is equivalent to the concept of local identifiability of \(\hat{\mathbf{p}}\). With some abuse of notation the Hessian matrix of the \(\mathit{WSS}\) cost function becomes

\({\nabla }_{\mathbf{p}}^{2}\mathit{WSS}(\hat{\mathbf{p}}) = 2\left[\overline{\mathbf{S}}{(\hat{\mathbf{p}})}^{\mathsf{T}}\overline{\mathbf{S}}(\hat{\mathbf{p}}) -\sum\limits_{i,j}{w}_{ij}\left({z}_{i}({t}_{j}) - {y}_{i}({t}_{j},\hat{\mathbf{p}})\right){\nabla }_{\mathbf{p}}^{2}{y}_{i}({t}_{j},\hat{\mathbf{p}})\right] \simeq 2\,\overline{\mathbf{S}}{(\hat{\mathbf{p}})}^{\mathsf{T}}\overline{\mathbf{S}}(\hat{\mathbf{p}})\)   (11.14)
where \(\overline{\mathbf{S}} ={ \mathbf{W}}^{\frac{1} {2} }\,\mathbf{S}\) is the weighted sensitivity matrix. The right-hand side approximation is justified if either the weighted Hessians of the model outputs at the various sampling times are small, i.e., quasi-linear behavior with small curvature, or the weighted estimation residuals \({\mathbf{W}}^{\frac{1} {2} }(\mathbf{Z} -\mathbf{Y}(\hat{\mathbf{p}}))\) are small, or both. Even if we assume a priori that one of these simplifying assumptions is valid, the Hessian \({\nabla }_{\mathbf{p}}^{2}\mathit{WSS}(\hat{\mathbf{p}})\) is only guaranteed to be positive semidefinite. Only if the weighted sensitivity matrix \(\overline{\mathbf{S}}(\hat{\mathbf{p}})\) has full rank does the Hessian become positive definite. This observation is equivalent to the fact that model parameters are locally identifiable only if the sensitivity matrix of the measured outputs has full rank.
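The full-rank condition can be sketched as a numerical rank test on the weighted sensitivity matrix. The tolerance and the small matrices below are illustrative assumptions, not values from the CVS model.

```python
import numpy as np

def locally_identifiable(S, w, tol=1e-8):
    # Weighted sensitivity matrix S_bar = W^(1/2) S with diagonal W.
    S_bar = np.sqrt(w)[:, None] * S
    # Full column rank iff the smallest singular value is not
    # negligibly small relative to the largest.
    sv = np.linalg.svd(S_bar, compute_uv=False)
    return bool(sv[-1] / sv[0] > tol)

w = np.ones(3)
# Two parameters with proportional output sensitivities cannot be
# estimated separately: S_bar is rank deficient.
S_bad = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
S_good = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(locally_identifiable(S_bad, w))   # False
print(locally_identifiable(S_good, w))  # True
```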
6.1.1 Calculation of Model Sensitivities
Given the model differential equations (11.6), the matrix \({\nabla }_{\mathbf{p}}\mathbf{x}(t)\) defines the sensitivity of the state trajectory with respect to parameter variations, or equivalently \(d\mathbf{x}(t) \simeq {\nabla }_{\mathbf{p}}\mathbf{x}(t) \cdot d\mathbf{p}\). The i-th column of \({\nabla }_{\mathbf{p}}\mathbf{x}(t)\), which will be indicated as \({\mathbf{x}}_{{\mathbf{p}}_{ i}}(t)\), represents the sensitivity at time t of the state vector with respect to the i-th component of the parameter vector \(\mathbf{p}\). This sensitivity vector is the solution of the following dynamic equations

\(\dot{\mathbf{x}}_{{\mathbf{p}}_{i}}(t) = {\nabla }_{x}\mathbf{f}(\mathbf{x}(t),\mathbf{p},\mathbf{u}(t),t)\,{\mathbf{x}}_{{\mathbf{p}}_{i}}(t) + \dfrac{\partial \mathbf{f}(\mathbf{x}(t),\mathbf{p},\mathbf{u}(t),t)}{\partial {\mathbf{p}}_{i}}\)   (11.15)
with initial conditions \({\mathbf{x}}_{{\mathbf{p}}_{i}}(0) = \partial \mathbf{x}(0)/\partial {\mathbf{p}}_{i}\). The matrix \({\nabla }_{x}\mathbf{f}(\mathbf{x}(t),\mathbf{p},\mathbf{u}(t),t)\) represents the Jacobian of the dynamic system equations with respect to the state, which needs to be determined only once for all parameters.
Similarly, with reference to the output equations (11.7), we define the output sensitivity matrix \({\nabla }_{\mathbf{p}}\mathbf{y}(t)\), such that \(d\mathbf{y}(t) = {\nabla }_{\mathbf{p}}\mathbf{y}(t) \cdot d\mathbf{p}\), whose i-th column \({\mathbf{y}}_{{\mathbf{p}}_{i}}(t)\) represents the sensitivity of the output trajectory with respect to the i-th element of \(\mathbf{p}\). It is defined as

\({\mathbf{y}}_{{\mathbf{p}}_{i}}(t) = {\nabla }_{x}\mathbf{g}(\mathbf{x}(t),\mathbf{p})\,{\mathbf{x}}_{{\mathbf{p}}_{i}}(t) + \dfrac{\partial \mathbf{g}(\mathbf{x}(t),\mathbf{p})}{\partial {\mathbf{p}}_{i}}\)   (11.16)
The implementation of the above approach is thus based on analytic derivation of model equations rather than on numerical differentiation of output trajectories using parameter perturbations and finite differences. This is a so-called algorithmic differentiation method in which sensitivities are computed from symbolic derivatives of the same computer code used for calculating model outputs. The derivatives of model outputs with respect to parameters are therefore “correct” even if the sensitivities are small in the order of roundoff errors, and are robust with respect to changes in numerical integration step size, which can cause large errors with numerical differentiation.
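The augmented forward-sensitivity system can be illustrated on a toy one-state model (not the CVS model): for \(dx/dt = -p\,x\) the sensitivity \(s = dx/dp\) obeys the companion equation \(ds/dt = (\partial f/\partial x)\,s + \partial f/\partial p = -p\,s - x\). A simple explicit Euler integration suffices for the sketch.

```python
import numpy as np

def integrate(p, t_end=1.0, n=100_000):
    dt = t_end / n
    x, s = 1.0, 0.0          # x(0) = 1, s(0) = dx(0)/dp = 0
    for _ in range(n):
        dx = -p * x
        ds = -p * s - x      # Jacobian term plus explicit parameter derivative
        x, s = x + dt * dx, s + dt * ds
    return x, s

p = 0.7
x, s = integrate(p)
# Analytic check: x(1) = exp(-p) and dx/dp at t = 1 is -exp(-p).
print(abs(x - np.exp(-p)) < 1e-4)   # True
print(abs(s + np.exp(-p)) < 1e-4)   # True
```

For the full model the same construction is repeated for every parameter column, giving the \({n}_{x}\cdot ({n}_{p}+1)\) equations quoted below.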
The implementation of the above equations (11.15) and (11.16) requires symbolic differentiation of the model's differential and output equations with respect to state variables and parameters, and needs the generation of computer code for the numerical solution of the extended system of model equations. This task can be fully automated using computer algebra software or ad hoc symbolic differentiation as implemented in [19]. In the present study the total number of differential equations used to simulate the system dynamics (11.6) and the sensitivity differential equations (11.15) was \({n}_{x} \cdot ({n}_{p} + 1) = 2{,}415\), and the number of system outputs and their sensitivities was \({n}_{y} \cdot ({n}_{p} + 1) = 345\). The numerical simulation using a variable-step 4/5th-order Runge–Kutta–Fehlberg method was surprisingly time efficient, most likely thanks to the common subexpression elimination capabilities of the optimizing compiler used (GNU Fortran (GCC) 4.2.3). Simulations, graphics and optimization algorithms were carried out within the statistical software package R (http://www.R-project.org/).
6.2 Reduced Rank Subspace Selection Using Singular Value Decomposition
The widely used singular value decomposition (SVD) approach to reduced-rank subset selection is presented within the context of the iterative, restricted-step Gauss–Newton method used to minimize the weighted sum of squares function (11.11). In particular, given at the k-th iteration the parameter vector \({\mathbf{p}}_{k}\), the Gauss–Newton iteration moves in the direction opposite to the gradient of the cost function (11.11), taking into account the local curvature of the cost function through the approximation of the Hessian introduced in (11.14). The direction in which to move the parameter vector is calculated by solving the normal equations

\(\overline{\mathbf{S}}{({\mathbf{p}}_{k})}^{\mathsf{T}}\,\overline{\mathbf{S}}({\mathbf{p}}_{k})\;d{\mathbf{p}}_{k} = \overline{\mathbf{S}}{({\mathbf{p}}_{k})}^{\mathsf{T}}\,\overline{\mathbf{E}}({\mathbf{p}}_{k})\)   (11.17)
where \(\overline{\mathbf{E}}({\mathbf{p}}_{k}) ={ \mathbf{W}}^{\frac{1} {2} }(\mathbf{Z} -\mathbf{Y}({\mathbf{p}}_{k}))\) are current weighted residuals. The same weighing matrix \({\mathbf{W}}^{\frac{1} {2} }\) is thus used to normalize the rows of the sensitivity matrix \(\mathbf{S}({\mathbf{p}}_{k})\) as well as the current prediction errors.
The actual restricted step taken in direction \(d{\mathbf{p}}_{k}\) determines the new vector of parameters

\({\mathbf{p}}_{k+1} = {\mathbf{p}}_{k} + {\alpha }_{k}\,d{\mathbf{p}}_{k}\)   (11.18)
where \(0 < {\alpha }_{k} \leq 1\) is chosen such that \(\mathit{WSS}({\mathbf{p}}_{k+1}) < \mathit{WSS}({\mathbf{p}}_{k})\). This inequality can be satisfied for some \({\alpha }_{k} > 0\) if Eq. (11.17) has a unique solution, that is, if \(\overline{\mathbf{S}}({\mathbf{p}}_{k})\) has full rank. The solution could then be obtained through left multiplication of the weighted residuals by the pseudoinverse \(\overline{\mathbf{S}}{({\mathbf{p}}_{k})}^{+} ={ \left [\overline{\mathbf{S}}{({\mathbf{p}}_{k})}^{\mathsf{T}}\overline{\mathbf{S}}({\mathbf{p}}_{k})\right ]}^{-1}\overline{\mathbf{S}}{({\mathbf{p}}_{k})}^{\mathsf{T}}\). However, since the model parameter vector \({\mathbf{p}}_{k}\) is locally unidentifiable, the sensitivity matrix \(\overline{\mathbf{S}}({\mathbf{p}}_{k})\) is rank deficient, and (11.17) does not have a unique solution.
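A minimal sketch of this restricted-step update is given below. The helper names are hypothetical: `residual_fn` is assumed to return the weighted residuals \(\overline{\mathbf{E}}(\mathbf{p})\) and `jacobian_fn` the weighted sensitivity matrix \(\overline{\mathbf{S}}(\mathbf{p})\).

```python
import numpy as np

# Restricted-step Gauss-Newton update: solve the normal equations for
# the search direction dp, then halve the step 0 < alpha <= 1 until
# the weighted sum of squares decreases.
def gauss_newton_step(p, residual_fn, jacobian_fn, max_halvings=20):
    e = residual_fn(p)                  # weighted residuals E(p)
    S = jacobian_fn(p)                  # weighted sensitivity matrix S(p)
    # lstsq solves the normal equations S^T S dp = S^T e in a
    # numerically stable way (it does not form S^T S explicitly).
    dp, *_ = np.linalg.lstsq(S, e, rcond=None)
    wss = e @ e
    alpha = 1.0
    for _ in range(max_halvings):
        e_new = residual_fn(p + alpha * dp)
        if e_new @ e_new < wss:         # accept first improving step
            return p + alpha * dp
        alpha *= 0.5
    return p                            # no improving step found
```

For a model that is linear in its parameters this update reaches the least-squares optimum in a single step with \(\alpha_k = 1\).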
Singular value decomposition (SVD) is a dependable approach to determine the pseudoinverse of a matrix and is based on the following factorization
where \({\mathbf{U}}_{k} \in {\mathbb{R}}^{{n}_{y}\cdot N\times {n}_{y}\cdot N}\) and \({\mathbf{V}}_{k} \in {\mathbb{R}}^{{n}_{p}\times {n}_{p}}\) are the orthonormal eigenvector matrices of \(\overline{\mathbf{S}}({\mathbf{p}}_{k})\overline{\mathbf{S}}{({\mathbf{p}}_{k})}^{\mathsf{T}}\) and \(\overline{\mathbf{S}}{({\mathbf{p}}_{k})}^{\mathsf{T}}\overline{\mathbf{S}}({\mathbf{p}}_{k})\), respectively, and \({\Sigma }_{k} \in {\mathbb{R}}^{{n}_{y}\cdot N\times {n}_{p}}\) is diagonal (referring to the top \({n}_{p} \times {n}_{p}\) submatrix) with sorted singular values \({\sigma }_{1} \geq {\sigma }_{2} \geq \cdots \geq {\sigma }_{{n}_{p}} \geq 0\), which are also the square roots of the eigenvalues of the positive-semidefinite matrix \(\overline{\mathbf{S}}{({\mathbf{p}}_{k})}^{\mathsf{T}}\overline{\mathbf{S}}({\mathbf{p}}_{k})\).
By hypothesis, \(\overline{\mathbf{S}}({\mathbf{p}}_{k})\) is rank deficient and the effective rank \(r < {n}_{p}\) is characterized, in theory, by \({\sigma }_{r+1} = 0\) and, in practice, by \({\sigma }_{r+1}/{\sigma }_{1} \approx 0\). This justifies the approximation of \(\overline{\mathbf{S}}({\mathbf{p}}_{k})\) by
where \(\tilde{{\Sigma }}_{k}\) retains only the first r positive singular values, with the others set to zero. Because of roundoff errors the rank r is rarely determined exactly through \({\sigma }_{r+1} = 0\); in practice it is linked to the largest singular value through the condition \({\sigma }_{r+1} < \delta \,{\sigma }_{1} \leq {\sigma }_{r}\), for a chosen δ > 0, usually a function of machine precision. For the aims of this study we are interested in a solution of the normal equations (11.17) using the approximation (11.20), with a numerical rank \({r}_{k}\) calculated for a particular threshold \({\delta }_{k}\). This yields the (approximate) pseudoinverse of \(\overline{\mathbf{S}}({\mathbf{p}}_{k})\) given by
where \(\tilde{{\Sigma }}_{k}^{+}\) contains the reciprocals of the first \({r}_{k}\) diagonal elements of \({[\tilde{{\Sigma }}_{k}]}^{\mathsf{T}}\) and zeros elsewhere. The (approximate) solution of (11.17) is then given by
which represents a practically feasible approach for computing the search direction in the Gauss–Newton algorithm. Equation (11.22) bears the interpretation that the direction of parameter variations \(d{\mathbf{p}}_{k}\) is a linear combination of the first r columns of \({\mathbf{V}}_{k}\) (right singular eigenvectors of \(\overline{\mathbf{S}}({\mathbf{p}}_{k})\)), with coefficients proportional to the reciprocal of the corresponding singular values multiplied by the projection of the weighted prediction errors \(\overline{\mathbf{E}}({\mathbf{p}}_{k})\) onto the first r columns of \({\mathbf{U}}_{k}\) (left singular eigenvectors of \(\overline{\mathbf{S}}({\mathbf{p}}_{k})\)). In formula
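The truncated-SVD search direction of Eq. (11.23) can be sketched as follows; this is a generic illustration, not the authors’ implementation, with the numerical rank chosen by the relative threshold \(\sigma_i > \delta\,\sigma_1\).

```python
import numpy as np

# Search direction dp = sum_{i<=r} (u_i^T e / sigma_i) v_i, where the
# numerical rank r counts singular values above delta * sigma_1.
def svd_search_direction(S, e, delta=1e-4):
    U, sigma, Vt = np.linalg.svd(S, full_matrices=False)
    r = int(np.sum(sigma > delta * sigma[0]))   # effective numerical rank
    coeff = (U[:, :r].T @ e) / sigma[:r]        # projections / singular values
    return Vt[:r].T @ coeff, r                  # dp lies in span of first r v_i
```

On a rank-deficient sensitivity matrix this reproduces the minimum-norm least-squares solution, which is what the pseudoinverse (11.21) delivers.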
6.3 Effective Dimensions of Estimated Parameter Vectors
The selection of a reduced number of orthonormal right singular eigenvectors of \(\overline{\mathbf{S}}({\mathbf{p}}_{k})\) for representing search directions in the original parameter space has an intuitive interpretation in terms of restrictions of step size along certain directions in the parameter space. This can be seen by writing Eq. (11.23) as
with \({\theta }_{{r}_{k}}\) representing the first \({r}_{k}\) components of the transformed parameter vector \(\theta ={ \mathbf{V}}_{k}^{\mathsf{T}}d\mathbf{p}\), where \(d\mathbf{p}\) represents the search direction towards the “true” parameter vector to be approximated by \(d{\mathbf{p}}_{k}\). Considering a generic j-th scalar component of \(d\mathbf{p}\), its value is “spread out” over the components of \(\theta \) with coefficients equal to the j-th column of \({\mathbf{V}}_{k}^{\mathsf{T}}\), i.e., the j-th row of \({\mathbf{V}}_{k}\). The j-th component of the approximating vector \(d{\mathbf{p}}_{k}\) is then the sum of the first \({r}_{k}\) components of θ multiplied by the j-th row of \({\mathbf{V}}_{k}\). The sum of the first \({r}_{k}\) squared elements of the j-th row of \({\mathbf{V}}_{k}\) therefore represents the fraction of the “true” parameter variation that is accounted for by \(d{\mathbf{p}}_{k}\) in Eq. (11.24). These are true fractions in [0, 1] because \({\mathbf{V}}_{k}\) is orthonormal.
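The per-parameter fractions just described can be computed directly from the rows of \({\mathbf{V}}_{k}\); a minimal sketch:

```python
import numpy as np

# Fraction of each parameter's variation captured by the first r right
# singular vectors: sum of the first r squared elements of the
# corresponding row of V (one value in [0, 1] per parameter).
def scaling_factors(S, r):
    _, _, Vt = np.linalg.svd(S, full_matrices=False)
    V = Vt.T                                # columns are v_1, ..., v_np
    return np.sum(V[:, :r] ** 2, axis=1)
```

Because \({\mathbf{V}}_{k}\) is orthonormal, the fractions sum to r across parameters and all equal unity when r reaches the full parameter count.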
The choice of an effective rank \({r}_{k}\) therefore limits the search dimension in the transformed parameter space, i.e., of \({\theta }_{{r}_{k}}\), as well as the step size along the individual search directions in the original parameter space. Unlike other parameter selection approaches, e.g., those based on QR factorization with pivoting, there is thus no clear-cut interpretation for the dimension of the effectively estimated parameter vector, because there is no strict limit on the number of (original) model parameters that may vary during the estimation process. Such limitations are instead imposed implicitly by the fractional scaling factors associated with each estimated parameter. These factors can be used, especially after sorting in decreasing order, to assess the relative importance of the various parameters in the model identification process. Parameters having scaling factors close to unity may be interpreted as fully identifiable from the experimental data, while those with small factors may be regarded as essentially fixed. To single out the most sensitive parameters one may restrict attention to parameters having scaling factors above a certain level, e.g., 5 %. In contrast, insensitive model parameters, which are characterized by relatively small norms of the corresponding columns of the sensitivity matrix, are mapped to right singular eigenvectors of \(\overline{\mathbf{S}}({\mathbf{p}}_{k})\) with small singular values and are therefore expected to remain essentially invariant in the original parametrization as well.
A drawback of the above interpretation framework is that the SVD of the sensitivity matrix \(\overline{\mathbf{S}}({\mathbf{p}}_{k})\) generally changes at each iteration, and the effective rank determined with a given threshold level \({\delta }_{k}\) may vary as well. The above analysis may therefore yield different interpretations if carried out at different points of the estimation procedure, e.g., with initial parameter values versus the final solution.
A further source of uncertainty in evaluating the relevance of individual model parameters in the model fitting procedure is parameter scaling, which has a direct effect on the magnitude of model output sensitivities with respect to parameters. Where applicable, the systematic use of log-transformation of parameters reduces the influence of parameter scaling by implicitly providing model output sensitivities with respect to fractional changes of parameters.
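The effect of the log-transformation follows from the chain rule: \(\partial Y/\partial \ln p_j = p_j \, \partial Y/\partial p_j\), so the rescaled sensitivities measure responses to fractional parameter changes and become independent of the units chosen for each parameter. A minimal sketch:

```python
import numpy as np

# Chain rule: dY/d(ln p_j) = p_j * dY/dp_j.  Each column j of the raw
# sensitivity matrix S is scaled by the parameter value p_j.
def log_scaled_sensitivities(S, p):
    return S * p[np.newaxis, :]
```

A change of units (e.g., multiplying a parameter by 1,000 while its raw sensitivity column shrinks by the same factor) leaves the log-scaled sensitivities unchanged, which is exactly the invariance the text appeals to.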
6.3.1 Implementation and Practical Issues
The dimension of the reduced rank subspace of parameter variations has been defined as a function of the threshold \({\delta }_{k}\), i.e., \({r}_{k} = {r}_{k}({\delta }_{k})\), rather than as an arbitrarily chosen fixed number. This provides, at least theoretically for \(\delta \rightarrow 0\), a consistent estimate of the “true” dimension, \({r}_{max}\), of the identifiable subspace. In practice, \({r}_{max}\) and the associated minimum threshold \({\delta }_{min}\) can be derived from the log-plot of singular values, which typically exhibits an abrupt decline at \({r}_{max}\). For the purpose of parameter identification, however, such a numerical rank can greatly exceed the number of parameter components that can be effectively identified from the data. In fact, given a subspace of parameter variations defined by the first \({r}_{k}\) eigenvectors of \({\mathbf{V}}_{k}\), a reduction in \({\delta }_{k}\) affecting \({r}_{k}\) will add new orthogonal components to the search direction (see Eq. (11.23)). This may be beneficial for improving the solution of the NLWLS problem as long as the newly added components do not interfere too much with, or even overwhelm, the previous components.
To clarify let us assume that \({\mathbf{p}}_{k}\) satisfies the optimality conditions (11.13) such that
which means that the weighted residuals are orthogonal to the first \({r}_{k}\) columns of \({\mathbf{U}}_{k}\). New components added by reducing \({\delta }_{k}\) then maintain the optimality of \({\mathbf{p}}_{k}\), and a new minimum is searched along the new orthogonal directions. Conversely, if \({\mathbf{p}}_{k}\) is far from the optimum \(\hat{\mathbf{p}}\), the projection of \(\overline{\mathbf{E}}({\mathbf{p}}_{k})\) onto \({\mathbf{U}}_{k}\) may yield large coefficients in (11.23) for all components of \({\mathbf{V}}_{k}\). In that case the norm of \(d{\mathbf{p}}_{k}\) grows in inverse proportion to the singular values \({\sigma }_{i}\), and adding too many components may cause convergence problems due to non-linearities of the optimization problem.
An iterative procedure that was found effective for finding optimal solutions of the NLWLS problem was to fix a decreasing sequence of tolerances \({\delta }_{K}\), where K is an outer iteration counter, e.g., \({\delta }_{K} \in \{ 1{0}^{-2},1{0}^{-3},1{0}^{-4},\ldots \}\), and to iterate, with inner iteration counter k, the restricted-step Gauss–Newton algorithm described above with \({\delta }_{k} = {\delta }_{K}\) until convergence to \({\mathbf{p}}_{K}\). The latter was then used as the initial parameter value for restarting the algorithm with the updated K. To avoid over-parameterization and over-fitting of the experimental data, the outer iteration was stopped manually after subjective evaluation of the goodness of fit and of the improvements in the cost function achieved between two outer iterations.
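The nested iteration just described can be sketched as follows. This is a hedged illustration with hypothetical helper names (`residual_fn` returning weighted residuals, `jacobian_fn` the weighted sensitivity matrix); the actual study stopped the outer loop manually rather than by a fixed iteration budget.

```python
import numpy as np

# Outer loop over decreasing tolerances delta_K; inner restricted-step
# Gauss-Newton loop using the truncated-SVD search direction.
def nested_estimation(p0, residual_fn, jacobian_fn,
                      deltas=(1e-2, 1e-3, 1e-4), inner_iters=50, tol=1e-10):
    p = np.asarray(p0, dtype=float)
    for delta in deltas:                        # outer iteration (counter K)
        for _ in range(inner_iters):            # inner iteration (counter k)
            e = residual_fn(p)
            U, sig, Vt = np.linalg.svd(jacobian_fn(p), full_matrices=False)
            r = int(np.sum(sig > delta * sig[0]))
            dp = Vt[:r].T @ ((U[:, :r].T @ e) / sig[:r])
            improved, alpha, wss = False, 1.0, e @ e
            while alpha > 1e-8:                 # restricted step (halving)
                e_new = residual_fn(p + alpha * dp)
                if e_new @ e_new < wss:
                    p, improved = p + alpha * dp, True
                    break
                alpha *= 0.5
            if not improved or np.linalg.norm(alpha * dp) < tol:
                break                           # converged for this delta
    return p                                    # restart point / final estimate
```

Each converged estimate serves as the starting point for the next, smaller tolerance, so new orthogonal search directions are introduced only once the previous subspace has been exhausted.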
7 Results
The parameter identification procedure was initially applied to the CVS model with a stepwise increasing HUT input, matching the experimental procedure. The model, however, had some difficulty describing the smooth decline in central venous pressure observed experimentally, which did not appear to be affected by the step changes in tilt table angle. For this reason the parameter identification procedure was also repeated with a linearly increasing HUT perturbation, as depicted in Fig. 11.5.
Experimental data and final model predictions obtained with the smallest tolerance level \({\delta }_{K} = 1{0}^{-4}\) and with the stepwise and linearly increasing (ramp) test inputs are shown in Figs. 11.6 and 11.7, respectively. The final model fits obtained with the two input representations were nearly equivalent as regards the predictions of heart rate and mean blood pressure. In contrast, the prediction of central venous pressure was less accurate with the step input, probably due to inadequate modeling assumptions about the distributed effect on the CVS of the perturbation input.
With both input representations the model failed to predict the rise of blood pressure at the beginning of the observation interval, which occurred even before the start of the test at 15 min. Such random short-term fluctuations in blood pressure, as well as heart rate, are normal in healthy subjects and are associated with spontaneous and evoked sympathetic and parasympathetic autonomic activity. A more advanced model description should probably include stochastic terms in the dynamic model equations.
For each model input representation, the final model fit was obtained by applying the iterative parameter estimation procedure as described previously with three levels of \({\delta }_{K}\). The model fit improved, as expected, with decreasing \({\delta }_{K}\) and increasing effective rank (Table 11.1), but did not substantially improve with further reduction of \({\delta }_{K}\). Table 11.1 shows that the ramp input representation definitely outperformed the step input representation only with the smallest \({\delta }_{K}\) considered, achieving a smaller WSS with a smaller effective rank.
A detailed picture of the relationship between effective rank and different levels of δ is given in Fig. 11.8 for the final model with parameter estimates obtained with the ramp input representation and \({\delta }_{K} = 1{0}^{-4}\). It can be observed that the final effective rank \({r}_{K} = 13\) used for parameter estimation is only a small portion of the “numerical” rank of the sensitivity matrix, which lies between 57 and 74 (Fig. 11.8). Tables 11.2–11.4 provide units for states and certain parameters.
As regards the effective dimensions of the estimated parameter vectors, Tables 11.5 and 11.6 report the fractions of estimated parameter variability for the stepwise and continuously changing HUT test input, respectively. The scaling factors are sorted in decreasing order and only values above 5 % are reported. Despite some differences between the two model input representations, some parameters, mainly related to the control of heart rate and vascular resistances, rank highest in both tables. In contrast, physiologically relevant parameters, such as those related to the control of unstressed volume, appear to play a significant role in parameter estimation only for one or the other input representation.
Tables 11.7–11.11 provide the parameter estimates (initial value, Init) and the estimated values for the step and continuous rise in HUT. Parameter symbols in Tables 11.5–11.11 are generated by the special software tool described in [19], which constructs code from model equations. This code format is easily translated to the symbols provided in [1], Fig. 1, and the generic equations provided here. Underscores denote subscripts. For the control parameters of unstressed volume and resistance, the middle symbols c, v, and k refer to the nominal value (starting value before perturbation), the maximal sustainable value, and the proportion of overall change, respectively. The latter two reflect the fact that the model builds in a constraint on the proportion of total unstressed volume and a constraint on minimal compartment blood flow (i.e., a constraint on inflow resistance).
8 Sensitivity Identifiability: A General Strategy
The method presented above for selecting parameters to be estimated represents one approach to refining the parameter estimation process. Other approaches are discussed in Chaps. 1, 2, and 10. The coordination of model structure with data availability represents a key step in overall model development and validation. The goal is to match model and data in such a way as to improve the robustness and accuracy of the parameter estimation process. Chapter 1 discusses in detail the overall issue of model validation and how analysis of model identifiability with respect to available data fits into this process.
A conceptual iterative scheme employing sensitivity analysis and subset selection is depicted in Fig. 11.9. This figure illustrates that model design and validation involves a process of refinement in which information on available data guides reasonable model reduction and how analysis of model structure can guide experimental design.
The major steps in this iterative process include the following components:
-
Once a model has been constructed that incorporates an appropriate degree of physiological detail for the task at hand, classical sensitivity analysis (analysis of how a given model output changes in response to small changes in a given parameter) and subset selection can be applied to analyze the identification problem. This analysis is referred to as sensitivity identifiability analysis as described in [4, 17], but is broadened here to include subset selection and generalized sensitivity analysis. It can provide guidance on reasonable model reduction, leading to combinations of parameters to estimate that can be identified given the available model output (we will refer to this as a priori identifiability [3]). A posteriori identifiability refers to an assessment of the reliability of the estimates given the quality of the data.
-
However, subset selection and classical sensitivity analysis can be used not only to select parameters to estimate but also to assess the value of adding new (and perhaps expensive or invasive) measurements (see, e.g., [7]). Conversely, the application of generalized sensitivity analysis (as described in Chap. 1) can provide some guidance on the design of the experiment and on how to carry out data collection to improve the parameter estimation process.
-
The iterative application of these tools (and decisions based on the information provided) is indicated by the dashed lines showing how one step or aspect of the process can influence others. For example, the double arrows between a priori identifiability and experimental design indicate how information on either aspect can shape the other. Generalized sensitivity (Chap. 1) comes into play here. Notice also that experimental design can lead to new information that changes the model design. In addition, the parameter estimation process itself can be repeated, leading to improved initial guesses for the parameters.
-
The final stage of model validation can be carried out by tuning the model parameters to subsets of given data and testing if the model with these parameters can adequately predict observed behavior when perturbations, conditions, or some of the parameters are varied to represent a new situation. For examples and further discussion see [20, 21].
The following observations can be made regarding the method described here in relation to the above model validation protocol:
-
The presented method has proved to be a robust approach for (partial) parameter estimation, which was a necessary prerequisite for increasing our confidence in the model’s capabilities and weaknesses (validation). The results of this study suggest that there is likely a misspecification of the effect of external bias pressure on the various compartments (a temporary workaround has been the use of a linearly increasing HUT test input, which markedly improved the prediction of central venous pressure) and that random variability of cardiovascular parameters may contribute to large modeling errors, especially during resting conditions.
-
The newly proposed index quantifying the fraction of estimated parameter variability within the reduced rank subset selection method provides a means for assessing which parameters are estimable with a particular experimental design and which are not, providing a basis for modifying the experimental design, especially to improve the estimation of poorly estimated parameters. This kind of evaluation can be based on virtual experiments carried out through simulation studies. In this regard, the availability of a robust parameter estimation approach allows the refinement of prior information on parameter values, improving the quality of simulations.
9 Appendix
References
Batzel, J.J., Fürtinger, S., Bachar, M., Fink, M., Kappel, F.: Sensitivity identifiability of a baroreflex control system model. Tech. Rep. IMA03-09, Institute for Mathematics and Scientific Computing, University of Graz (2009; journal submission)
Cavalcanti, S., Cavani, S., Ciandrini, A., Avanzolini, G.: Mathematical modeling of arterial pressure response to hemodialysis-induced hypovolemia. Comput. Biol. Med. 36, 128–144 (2006)
Cobelli, C., Carson, E.R., Finkelstein, L., Leaning, M.S.: Validation of simple and complex models in physiology and medicine. Am. J. Physiol. Regul. Integr. Comp. Physiol. 246(2) R259–R266 (1984)
Cobelli, C., DiStefano 3rd, J.J.: Parameter and structural identifiability concepts and ambiguities: A critical review and analysis. Am. J. Physiol. 239(1), R7–R24 (1980)
Fink, M., Batzel, J.J., Kappel, F.: An optimal control approach to modeling the cardiovascular-respiratory system: An application to orthostatic stress. Cardiovasc. Eng. 4(1), 27–38 (2004)
Fink, M., Batzel, J.J., Kappel, F.: Modeling the human cardiovascular-respiratory control response to blood volume loss due to hemorrhage. In: Commault, C., Marchand, N. (eds.) Positive Systems: Lecture Notes in Control and Information Sciences, vol. 341, pp. 145–152. Springer, Berlin Heidelberg (2006)
Fink, M., Batzel, J.J., Tran, H.: A respiratory system model: parameter estimation and sensitivity analysis. Cardiovasc. Eng. 8(2), 120–134 (2008)
Furlan, R., Porta, A., Costa, F., Tank, J., Baker, L., Schiavi, R., Robertson, D., Malliani, A., Mosqueda-Garcia, R.: Oscillatory patterns in sympathetic neural discharge and cardiovascular variables during orthostatic stimulus. Circulation 101(8), 886–892 (2000)
Furlan, R., Jacob, G., Palazzolo, L., Rimoldi, A., Diedrich, A., Harris, P.A., Porta, A., Malliani, A., Mosqueda-Garcia, R., Robertson, D.: Sequential modulation of cardiac autonomic control induced by cardiopulmonary and arterial baroreflex mechanisms. Circulation 104(24), 2932–2937 (2001)
Goswami, N., Loeppky, J.A., Hinghofer-Szalkay, H.: LBNP: past protocols and technical considerations for experimental design. Aviat. Space Environ. Med. 79(5), 459–471 (2008)
Janssens, U., Graf, J.: Volume status and central venous pressure. Anaesthesist 58(5), 513–519 (2009)
Kappel, F., Fink, M., Batzel, J.J.: Aspects of control of the cardiovascular-respiratory system during orthostatic stress induced by lower body negative pressure. Math. Biosci. 206(2), 273–308 (2007)
Mosqueda-Garcia, R., Furlan, R., Fernandez-Violante, R., Desai, T., Snell, M., Jarai, Z., Ananthram, V., Robertson, R.M., Robertson, D.: Sympathetic and baroreceptor reflex function in neurally mediated syncope evoked by tilt. J. Clin. Invest. 99(11), 2736–2744 (1997)
Olufsen, M.S., Ottesen, J.T., Tran, H.T.: Modeling cerebral blood flow control during posture change from sitting to standing. Cardiovasc. Eng. 4(1), 47–58 (2004)
Olufsen, M.S., Ottesen, J.T., Tran, H.T., Ellwein, L.M., Lipsitz, L.A., Novak, V.: Blood pressure and blood flow variation during postural change from sitting to standing: Model development and validation. J. Appl. Physiol. 99(4), 1523–1537 (2005)
Pang, C.C.: Autonomic control of the venous system in health and disease: effects of drugs. Pharmacol. Ther. 90(2-3), 179–230 (2001)
Reid, J.G.: Structural identifiability in linear time-invariant systems. IEEE Trans. Automat. Contr. 22, 242–246 (1977)
Risk, M.R., Lirofonis, V., Armentano, R.L., Freeman, R.: A biphasic model of limb venous compliance: a comparison with linear and exponential models. J. Appl. Physiol. 95, 1207–1215 (2003)
Thomaseth, K.: Multidisciplinary modelling of biomedical systems. Comput. Meth. Programs Biomed. 71(3), 189–201 (2003)
Thomaseth, K., Cobelli, C.: Generalized sensitivity functions in physiological system identification. Ann. Biomed. Eng. 27(5), 607–616 (1999)
Thomaseth, K., Cobelli, C.: Analysis of information content of pharmakokinetic data using generalized sensitivity functions. In: Proceedings of the 22nd Annual EMBS International Conference of the IEEE, vol. 1, pp. 435–437 (2000)
Ursino, M., Antonucci, M., Belardinelli, E.: Role of active changes in venous capacity by the carotid baroreflex: analysis with a mathematical model. Am. J. Physiol. 267, H2531–H2546 (1994)
Acknowledgements
This research was partially funded by FWF (Austria) under project P18778-N13.
© 2013 Springer-Verlag Berlin Heidelberg
Thomaseth, K., Batzel, J.J., Bachar, M., Furlan, R. (2013). Parameter Estimation of a Model for Baroreflex Control of Unstressed Volume. In: Batzel, J., Bachar, M., Kappel, F. (eds) Mathematical Modeling and Validation in Physiology. Lecture Notes in Mathematics(), vol 2064. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-32882-4_11
Print ISBN: 978-3-642-32881-7
Online ISBN: 978-3-642-32882-4