Background

Real-time simulation of non-linear solids is always a delicate task due to the heavy computational cost associated with the linearization of the equations. Applications are ubiquitous, ranging from industrial uses [1] to surgery planning and training [2, 3] or movies [4].

Probably the field in which the greatest effort has been devoted to the development of real-time simulation techniques is that of computational surgery [5–10]. This is because surgery training systems are equipped with haptic peripherals, devices that provide the user with realistic touch sensations (force feedback). Just as some 25 pictures per second are necessary for a realistic perception of movement in films, haptic feedback requires update rates of some 500 Hz to 1 kHz to achieve the necessary realism. The difficulty of the task is thus readily understood: to perform 500–1000 simulations per second of highly non-linear solids (soft tissues are frequently assumed to be hyperelastic), possibly undergoing contact, cutting, etc.

Among the very few truly non-linear surgery simulators developed so far, one can cite [10–13]. Essentially, these employ some type of explicit, lumped-mass, Lagrangian finite elements to perform the simulations, often relying on an intensive use of GPUs. In previous works of the authors, see [13–16], a different approach has been studied, based on model order reduction techniques, see Fig. 1.

Fig. 1

Surgical simulator. Prototype of surgical simulator developed by the authors. On the left, the haptic peripheral is shown

Basically, our approach to real-time simulation consists of the off-line computation of a sort of high-dimensional solution to the problem at hand,

$$\begin{aligned} {\varvec{u}} = {\varvec{u}} ({\varvec{x}}, t, q_1, q_2, \ldots , q_p), \end{aligned}$$
(1)

where \(\varvec{u}\) represents the essential field of the problem (usually, the displacement field of the solid), \(\varvec{x}\) the coordinates of each physical point, and \(q_1, q_2, \ldots , q_p\) a set of parameters that could affect the solution and whose meaning will become clear shortly.

Equation (1) thus represents a sort of response surface in the sense that it provides the solution for any physical coordinate, time instant and value of the p parameters. Instead of response surface, and to highlight the fact that no set of a priori experiments is necessary to obtain such a response in our method, we prefer to call Eq. (1) a computational vademecum [17], inspired by the work of engineers of past centuries (such as Bernoulli, for instance [18]), who compiled sets of known solutions to problems of interest.

The problem with an approach such as that introduced in Eq. (1) is that such an expression is inherently high dimensional. If we try to discretize the governing equations of the problem so as to obtain an approximation to Eq. (1), and do so with a mesh-based method such as finite differences, finite volumes or finite elements, we soon realize that the complexity of the problem exhausts the available computer memory. This is due to the well-known exponential growth of the number of degrees of freedom (nodes of the mesh) with the number of dimensions of the problem, the well-known curse of dimensionality [19].
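To fix ideas with a purely hypothetical discretization: with M nodes per dimension and \(D = 3 + 1 + p\) dimensions (space, time and the p parameters), a mesh-based approximation of Eq. (1) requires on the order of \(M^D\) degrees of freedom. For instance,

$$\begin{aligned} M = 100, \quad p = 4 \quad \Rightarrow \quad M^{3+1+p} = 100^{8} = 10^{16} \end{aligned}$$

unknowns, far beyond the memory of any available computer.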

In order to overcome the curse of dimensionality, the authors proposed some years ago a technique inspired by proper orthogonal decomposition (POD) methods that generalizes their properties to high-dimensional spaces and operates a priori. This technique has been coined proper generalized decomposition (PGD) [20–24] and its main characteristic is the assumption that the essential field (1) can be approximated in separated form, i.e.,

$$\begin{aligned} \varvec{u} (\varvec{x}, t, q_1, q_2, \ldots , q_p) \approx \sum _{i=1}^n \varvec{F}_i(\varvec{x}) \circ \varvec{G}_i(t) \circ \varvec{Q}_i^1(q_1)\circ \varvec{Q}_i^2(q_2) \circ \ldots \circ \varvec{Q}_i^p(q_p), \end{aligned}$$
(2)

where the symbol “\(\circ \)” stands for the Hadamard, Schur or entry-wise product of vectorial functions. Since the functions \(\varvec{F}_i(\varvec{x}),\varvec{G}_i(t) ,\varvec{Q}_i^1(q_1), \varvec{Q}_i^2(q_2), \ldots , \varvec{Q}_i^p(q_p)\) are a priori unknown, one readily recognizes the inherently non-linear character of the problem of finding such an approximation (even if the governing equations are linear). PGD operates through a greedy algorithm, in which a fixed-point, alternating-directions scheme is usually employed. More details will be given in the “Formulation of the problem in a PGD setting” section.
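To illustrate why the separated form of Eq. (2) is so attractive in the online phase, the following minimal sketch (Python/NumPy, with hypothetical mode arrays, not the authors' implementation) evaluates such a representation at a given space node, time step and parameter sample; its cost grows linearly with the number of modes n and with the number of separated dimensions, never exponentially.

```python
import numpy as np

def evaluate_vademecum(modes, query_indices):
    """Evaluate a separated representation of the form of Eq. (2).

    modes : list of n tuples (F_i, G_i, Q1_i, ..., Qp_i); each factor is an
            array whose first axis spans the discretization of that dimension
            and whose last axis holds the 3 vector components.
    query_indices : one index per dimension (space node, time step, sample
            of each parameter q_j).
    """
    u = np.zeros(3)
    for factors in modes:                       # sum over the n modes
        term = np.ones(3)
        for factor, idx in zip(factors, query_indices):
            term *= factor[idx]                 # entry-wise (Hadamard) product
        u += term
    return u

# Hypothetical example: n = 10 modes, 1000 space nodes, 50 time steps and
# 24 samples of a single parameter q1 (e.g. a loading position).
rng = np.random.default_rng(0)
modes = [(rng.standard_normal((1000, 3)),
          rng.standard_normal((50, 3)),
          rng.standard_normal((24, 3))) for _ in range(10)]
print(evaluate_vademecum(modes, query_indices=(137, 12, 5)))
```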

One crucial problem related to such an approximation, see Eq. (2), is the choice of the number of terms n employed in the approximation. Since the main objective of a simulator is to provide the user with realistic force feedback, the aim of the work presented herein is to develop a suitable error estimator that allows us to fix the number of functional products n necessary for a given tolerance in the error of the transmitted force. The literature on error estimation for model order reduction is vast, see [25–31], to name but a few. In “Formulation of the problem in a PGD setting” we recall the basics of the PGD approach to the problem at hand. In “One possible explicit linearization of the formulation” we revisit one of the possible linearizations of the problem and, finally, in “An error estimator based on the dual formulation” we develop the sought error estimator for the force feedback. The paper is completed with two different numerical examples in “Numerical examples” that show the performance of the method.

Formulation of the problem in a PGD setting

As a model non-linear problem, hyperelasticity has been chosen. It constitutes a sufficiently general theory, with important implications in the simulation of soft living tissues [32, 33], for instance, and therefore in surgical simulators, a ubiquitous example of the restrictions imposed by real-time constraints.

We closely follow the explicit linearization procedure first developed by the authors in [13], although more sophisticated approaches were developed in [15]. For the sake of simplicity, consider a particularly useful instance of the vademecum given by Eq. (1),

$$\begin{aligned} \varvec{u} = \varvec{u}(\varvec{x}, \varvec{s}), \end{aligned}$$

i.e., a generalized solution of the displacement field of a solid undergoing a load at any possible point \(\varvec{s}\) of its boundary. Therefore, the loading point \(\varvec{s}\) acts here as the parameter \(q_1\) in (1). For simplicity, we assume the acting force \(\varvec{t}\) to be vertical and of unit modulus (a more general setting can be established by letting \(\varvec{t}\) itself be an additional vectorial parameter). Under this rationale, the weak form of the static equilibrium equations of the solid can be established as: find the displacement \(\varvec{u}\in \mathcal H^1\) such that for all \(\varvec{u}^*\in \mathcal H^1_0\):

$$\begin{aligned} \int _{\bar{\Gamma }}\int _\Omega \varvec{\nabla }^{(s)} \varvec{u}^*: \varvec{\sigma }d\Omega d{\bar{\Gamma }}=\int _{\bar{\Gamma }}\int _{\Gamma _{t2}} \varvec{u}^*\cdot \varvec{t} d\Gamma d{\bar{\Gamma }} \end{aligned}$$
(3)

where \(\Gamma =\Gamma _u \cup \Gamma _t\) represents the boundary of the solid, divided into essential and natural regions, and where \(\Gamma _t= \Gamma _{t1}\cup \Gamma _{t2}\), i.e., the regions of homogeneous and non-homogeneous natural boundary conditions, respectively. \(\varvec{\nabla }^{(s)}\) stands for the symmetric part of the gradient. \(\bar{\Gamma }\subseteq \Gamma _{t2}\) represents the possible loading area within the exposed surface of the body and is actually a choice of the analyst. In surgery simulators, it is often taken as the portion of the organ surface accessible to the surgeon. Here, \(\varvec{t}=\varvec{e}_k \cdot \delta (\varvec{x}-\varvec{s})\), where \(\delta \) represents the Dirac delta function and \(\varvec{e}_k\) the unit vector along the z-coordinate axis.

In the spirit of PGD techniques, the external load is then decomposed (by applying SVD techniques, for instance) as

$$\begin{aligned} t_j \approx \sum _{i=1}^m f^i_j(\varvec{x}) g^i_j(\varvec{s}) \end{aligned}$$

where m represents the truncation order and \(f^i_j, g^i_j\) represent the j-th components of vectorial functions of space and of the boundary (loading) position, respectively. Following Eq. (2), the high-dimensional solution of the problem will be sought as

$$\begin{aligned} u^n_j(\varvec{x},\varvec{s})=\sum _{k=1}^n X_j^k(\varvec{x})\cdot Y_j^k(\varvec{s}), \end{aligned}$$
(4)

where \(u_j\) refers to the j-th component of the displacement vector, \(j=1,2,3\), and the functions \(\varvec{X}^k\) and \(\varvec{Y}^k\) represent the separated functions used to approximate the unknown field.
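For completeness, the SVD-based separation of the load invoked above can be sketched as follows (Python/NumPy); the matrix T is a hypothetical example in which each column stores the nodal values of one load component \(t_j\) for one candidate loading position \(\varvec{s}\).

```python
import numpy as np

def separate_load(T, m):
    """Rank-m separation t_j(x, s) ~ sum_i f_i(x) g_i(s) via truncated SVD.

    T : (n_space_nodes, n_load_positions) matrix; T[a, b] is the nodal value
        of one load component at space node a when the load is applied at
        candidate boundary position b.
    """
    U, sigma, Vt = np.linalg.svd(T, full_matrices=False)
    f = U[:, :m] * sigma[:m]       # spatial functions f^i_j(x)
    g = Vt[:m, :].T                # loading-position functions g^i_j(s)
    return f, g

# Hypothetical example: 500 space nodes, 24 candidate loading positions.
rng = np.random.default_rng(1)
T = rng.standard_normal((500, 24))
f, g = separate_load(T, m=6)
print(np.linalg.norm(T - f @ g.T) / np.linalg.norm(T))   # relative truncation error
```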

PGD techniques proceed by iteratively finding new terms that improve this approximation in a greedy framework. Therefore, if a new functional pair \(\varvec{R}(\varvec{x}) \circ \varvec{S}(\varvec{s})\) is sought,

$$\begin{aligned} u^{n+1}_j(\varvec{x},\varvec{s})= u^n_j(\varvec{x},\varvec{s})+ R_j(\varvec{x})\cdot S_j(\varvec{s}), \end{aligned}$$
(5)

a linearization algorithm is needed, since the unknown is now a pair of functions (the problem of finding them is non-linear even if the governing equations are not). This is usually accomplished by iterative fixed-point, alternating-directions algorithms that proceed as follows.

Computation of \(S(\varvec{s})\) assuming \(R(\varvec{x})\) is known

If standard assumptions of variational calculus are applied,

$$\begin{aligned} u^*_j(\varvec{x},\varvec{s})= R_j(\varvec{x})\cdot S^*_j(\varvec{s}). \end{aligned}$$
(6)

This admissible variation of the (high dimensional) displacement field, indicated by the star symbol, is then injected into the weak form of the problem, Eq. (3), thus giving

$$\begin{aligned}&\int _{\bar{\Gamma }}\int _\Omega \varvec{\nabla }^{(s)} (\varvec{R}\circ \varvec{S}^*) :\varvec{\mathsf C} :\varvec{\nabla }^{(s)} \left( \sum _{k=1}^n \varvec{X}^k\circ \varvec{Y}^k+ \varvec{R}\circ \varvec{S}\right) d\Omega d{\bar{\Gamma }}\nonumber \\&\quad =\int _{\bar{\Gamma }}\int _{\Gamma _{t2}} (\varvec{R}\circ \varvec{S}^*) \cdot \left( \sum _{k=1}^m \varvec{f}^k \circ \varvec{g}^k \right) d\Gamma d{\bar{\Gamma }}, \end{aligned}$$
(7)

or,

$$\begin{aligned}&\int _{\bar{\Gamma }}\int _\Omega \varvec{\nabla }^{(s)} (\varvec{R}\circ \varvec{S}^*):\varvec{\mathsf C} :\varvec{\nabla }^{(s)} (\varvec{R}\circ \varvec{S}) d\Omega d{\bar{\Gamma }}\nonumber \\&\quad =\int _{\bar{\Gamma }}\int _{\Gamma _{t2}} (\varvec{R}\circ \varvec{S}^*)\cdot \left( \sum _{k=1}^m \varvec{f}^k \circ \varvec{g}^k \right) d\Gamma d\bar{\Gamma } - \int _{\bar{\Gamma }}\int _{\Omega } \varvec{\nabla }^{(s)}\left( \varvec{R}\circ \varvec{S}^* \right) \cdot \mathcal R^n d\Omega d\bar{\Gamma }, \end{aligned}$$
(8)

where \(\mathcal R^n\) represents:

$$\begin{aligned} \mathcal R^n = \varvec{\mathsf C}:\varvec{\nabla }^{(s)} \varvec{u}^n. \end{aligned}$$
(9)

Since the symmetric gradient operates on spatial variables only, we arrive at:

$$\begin{aligned}&\int _{\bar{\Gamma }}\int _\Omega (\varvec{\nabla }^{(s)} \varvec{R}\circ \varvec{S}^*): \varvec{\mathsf C} :(\varvec{\nabla }^{(s)} \varvec{R}\circ \varvec{S}) d\Omega d{\bar{\Gamma }}\nonumber \\&\quad =\int _{\bar{\Gamma }}\int _{\Gamma _{t2}} (\varvec{R}\circ \varvec{S}^*)\cdot \left( \sum _{k=1}^m \varvec{f}^k \circ \varvec{g}^k \right) d\Gamma d\bar{\Gamma }- \int _{\bar{\Gamma }}\int _{\Omega } \left( \varvec{\nabla }^{(s)} \varvec{R}\circ \varvec{S}^* \right) \cdot \mathcal R^n d\Omega d\bar{\Gamma }\end{aligned}$$
(10)

where all the terms depending on \(\varvec{x}\) are known, and hence all integrals over \(\Omega \) and \(\Gamma _{t2}\) (the support of the regularization of the initially point-wise load) can be computed to arrive at an equation for \(\varvec{S}(\varvec{s})\).

Computation of \(R(\varvec{x})\) assuming \(S(\varvec{s})\) is known

Proceeding in an entirely similar way,

$$\begin{aligned} u^*_j(\varvec{x},\varvec{s})= R^*_j(\varvec{x})\cdot S_j(\varvec{s}), \end{aligned}$$
(11)

which, substituted in the weak form of the problem, Eq. (3), gives

$$\begin{aligned}&\int _{\bar{\Gamma }}\int _\Omega \varvec{\nabla }^{(s)} (\varvec{R}^*\circ \varvec{S}):\varvec{\mathsf C} :\varvec{\nabla }^{(s)} \left( \sum _{k=1}^n \varvec{X}^k\circ \varvec{Y}^k+ \varvec{R}\circ \varvec{S}\right) d\Omega d{\bar{\Gamma }}\nonumber \\&\quad =\int _{\bar{\Gamma }}\int _{\Gamma _{t2}} (\varvec{R}^*\circ \varvec{S}) \cdot \left( \sum _{k=1}^m \varvec{f}^k \circ \varvec{g}^k \right) d\Gamma d{\bar{\Gamma }}. \end{aligned}$$
(12)

Again, all the terms depending on \(\varvec{s}\) (load position) can be integrated on \(\bar{\Gamma }\), thus giving an elasticity-like problem to obtain the function \(\varvec{R}(\varvec{x})\).
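To illustrate how the two alternating updates above combine within the greedy enrichment, the following purely algebraic sketch (Python/NumPy, not the authors' implementation) solves a linear(ized) problem with the Kronecker structure \((\varvec{A} \otimes \varvec{B})\varvec{u} = \sum _i \varvec{f}_i \otimes \varvec{g}_i\), which mimics the situation above: the load position enters only parametrically, so \(\varvec{A}\) plays the role of an assembled spatial (stiffness-like) operator and \(\varvec{B}\) that of a parametric (mass-like) operator over \(\bar{\Gamma }\).

```python
import numpy as np

def pgd_greedy(A, B, F_modes, n_modes, fp_iters=25, tol=1e-8):
    """Greedy PGD with a fixed-point, alternating-directions linearization
    for the Kronecker-structured problem (A kron B) u = sum_i f_i kron g_i.

    A        : (nx, nx) spatial (stiffness-like) matrix
    B        : (ns, ns) parametric (mass-like) matrix
    F_modes  : list of separated right-hand-side pairs (f_i, g_i)
    Returns lists X, Y such that u is approximated by sum_k X_k kron Y_k.
    """
    rng = np.random.default_rng(0)
    X, Y = [], []
    for _ in range(n_modes):
        R = rng.standard_normal(A.shape[0])      # initial guess for R(x)
        S = np.ones(B.shape[0])
        for _ in range(fp_iters):
            # S-update: R(x) assumed known (discrete analogue of Eq. (10))
            alpha = R @ A @ R
            rhs_s = sum((R @ f) * g for f, g in F_modes)
            rhs_s -= sum((R @ (A @ Xk)) * (B @ Yk) for Xk, Yk in zip(X, Y))
            S = np.linalg.solve(alpha * B, rhs_s)
            # R-update: S(s) assumed known (discrete analogue of Eq. (12))
            beta = S @ B @ S
            rhs_r = sum((S @ g) * f for f, g in F_modes)
            rhs_r -= sum((S @ (B @ Yk)) * (A @ Xk) for Xk, Yk in zip(X, Y))
            R_new = np.linalg.solve(beta * A, rhs_r)
            if np.linalg.norm(R_new - R) <= tol * np.linalg.norm(R_new):
                R = R_new
                break
            R = R_new
        X.append(R)
        Y.append(S)
    return X, Y
```

In the actual formulation, \(\varvec{A}\) and \(\varvec{B}\) would be replaced by the finite element matrices arising from the integrals over \(\Omega \) and \(\bar{\Gamma }\) in Eqs. (10) and (12).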

One possible explicit linearization of the formulation

The simplest hyperelastic constitutive model is the Kirchhoff–Saint Venant (KSV) model. Despite its well-known instabilities in compression, the KSV model provides a very neat formulation in which to apply the developments that follow. Therefore, for the sake of simplicity, we assume that the energy density functional is given by

$$\begin{aligned} \Psi = \frac{\lambda }{2}(\text {tr}(\varvec{E}))^2+\mu \varvec{E}:\varvec{E} \end{aligned}$$
(13)

where \(\lambda \) and \(\mu \) are Lamé’s constants. The Green-Lagrange strain tensor, \(\varvec{E}\), is classically defined as

$$\begin{aligned} \varvec{E}=\frac{1}{2} (\varvec{F}^{T} \varvec{F} -\varvec{I}) =\varvec{\nabla }^{(s)} \varvec{u} + \frac{1}{2}(\varvec{\nabla }\varvec{u} \cdot \varvec{\nabla }\varvec{u}^T) \end{aligned}$$
(14)

where \(\varvec{F}=\varvec{\nabla {u}}+\varvec{I}\) is the deformation gradient tensor. Correspondingly, the second Piola-Kirchhoff stress tensor can be obtained as

$$\begin{aligned} \varvec{S}=\frac{\partial \Psi (\varvec{E})}{\partial \varvec{E}}=\varvec{\mathsf {C}}:\varvec{E} \end{aligned}$$
(15)

in which \(\varvec{ \mathsf {C}}\) is the fourth-order constitutive (here, linear elastic) tensor.
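As a brief illustration of Eqs. (13)–(15), the following sketch (Python/NumPy, with an arbitrarily chosen displacement gradient used purely as an example) evaluates the Green-Lagrange strain and the second Piola-Kirchhoff stress at a material point; in the isotropic case, \(\varvec{\mathsf {C}}:\varvec{E} = \lambda \, \text {tr}(\varvec{E})\,\varvec{I} + 2\mu \varvec{E}\).

```python
import numpy as np

def ksv_stress(grad_u, lam, mu):
    """Second Piola-Kirchhoff stress for the Kirchhoff-Saint Venant model.

    grad_u : (3, 3) displacement gradient at a material point
    Returns (E, S) with E the Green-Lagrange strain, Eq. (14), and
    S = C : E = lam * tr(E) * I + 2 * mu * E, Eq. (15).
    """
    I = np.eye(3)
    F = I + grad_u                        # deformation gradient tensor
    E = 0.5 * (F.T @ F - I)               # Green-Lagrange strain tensor
    S = lam * np.trace(E) * I + 2.0 * mu * E
    return E, S

# Hypothetical example, with Lame constants derived from E and nu:
E_mod, nu = 2.0e11, 0.3
lam = E_mod * nu / ((1 + nu) * (1 - 2 * nu))
mu = E_mod / (2 * (1 + nu))
grad_u = np.array([[0.01, 0.002, 0.0],
                   [0.0, -0.003, 0.001],
                   [0.0,  0.0,   0.004]])
E, S = ksv_stress(grad_u, lam, mu)
print(np.trace(E), S[0, 0])
```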

The simplest linearization of the resulting problem comes from an explicit scheme in which the load is applied in a series of pseudo-time increments \(\Delta t\), producing displacement increments \(\Delta \varvec{u}(\varvec{x},\varvec{s})\). At each increment, the previously described PGD fixed-point, alternating-directions algorithm is employed. Thus, by introducing the non-linear strain measure given by Eq. (14) into this incremental framework, the following expression is obtained:

$$\begin{aligned} \varvec{E}^{t+\Delta t} = \varvec{\nabla }^{(s)} \left( \varvec{u}^t + \Delta \varvec{u} \right) + \frac{1}{2}\left( \varvec{\nabla }(\varvec{u}^t+\Delta \varvec{u}) \cdot \varvec{\nabla }^T(\varvec{u}^t+\Delta \varvec{u})\right) . \end{aligned}$$
(16)

Equivalently, admissible variations of strain take the form

$$\begin{aligned} \varvec{E}^*= & {} \varvec{\nabla }^{(s)} (\Delta \varvec{u}^*) + \frac{1}{2}(\varvec{\nabla }(\Delta \varvec{u}^*))\cdot \varvec{\nabla }^T(\varvec{u}^t+\Delta \varvec{u})+\frac{1}{2}\varvec{\nabla }(\varvec{u}^t+\Delta \varvec{u})\cdot \varvec{\nabla }^T(\Delta \varvec{u}^*) \nonumber \\= & {} \varvec{\nabla }^{(s)} (\Delta \varvec{u}^*) + \varvec{\nabla }(\Delta \varvec{u}^*)\cdot \varvec{\nabla }^T(\varvec{u}^t+\Delta \varvec{u}) \end{aligned}$$
(17)

By substituting Eqs. (16) and (17) into the weak form of the equilibrium equation, the following left-hand side term of Eq. (3) is obtained

$$\begin{aligned}&\int _{\bar{\Gamma }}\int _{\Omega (t)} \varvec{E}^* : \varvec{\mathsf C}:\varvec{E} d\Omega d\bar{\Gamma }\nonumber \\&\quad = \int _{\bar{\Gamma }}\int _{\Omega (t)} \left( \varvec{\nabla }^{(s)} (\Delta \varvec{u}^*) + \varvec{\nabla }(\Delta \varvec{u}^*)\cdot \varvec{\nabla }^T(\varvec{u}^t+\Delta \varvec{u})\right) :\varvec{\mathsf {C}}\nonumber \\&\qquad :\left( \varvec{\nabla }^{(s)} \left( \varvec{u}^t + \Delta \varvec{u} \right) + \frac{1}{2}\left( \varvec{\nabla }(\varvec{u}^t+\Delta \varvec{u}) \cdot \varvec{\nabla }^T(\varvec{u}^t+\Delta \varvec{u})\right) \right) d\Omega d\bar{\Gamma }. \end{aligned}$$
(18)

To linearize Eq. (18), a strategy was proposed in [13] that keeps in the formulation only the constant terms and those linear in \(\Delta \varvec{u}\). The resulting weak form is composed of the following long albeit simple collection of terms:

$$\begin{aligned}&\int _{\bar{\Gamma }}\int _{\Omega (t)} \varvec{E}^* : \varvec{\mathsf C}:\varvec{E} d\Omega d\bar{\Gamma }\nonumber \\&\quad = \int _{\bar{\Gamma }}\int _{\Omega (t)} \varvec{\nabla }^{(s)}(\Delta \varvec{u}^*):\varvec{\mathsf C}:\varvec{\nabla }^{(s)} \varvec{u}^t d\Omega d\bar{\Gamma }\nonumber \\&\qquad + \int _{\bar{\Gamma }}\int _{\Omega (t)} \varvec{\nabla }^{(s)}(\Delta \varvec{u}^*):\varvec{\mathsf C}:\varvec{\nabla }^{(s)} (\Delta \varvec{u}) d\Omega d\bar{\Gamma }\nonumber \\&\qquad + \int _{\bar{\Gamma }}\int _{\Omega (t)} \varvec{\nabla }^{(s)}(\Delta \varvec{u}^*):\varvec{\mathsf C}:\frac{1}{2}\varvec{\nabla }\varvec{u}^t\cdot \varvec{\nabla }^T \varvec{u}^t d\Omega d\bar{\Gamma }\nonumber \\&\qquad + \int _{\bar{\Gamma }}\int _{\Omega (t)} \varvec{\nabla }^{(s)}(\Delta \varvec{u}^*):\varvec{\mathsf C}:\varvec{\nabla }\varvec{u}^t\cdot \varvec{\nabla }^T (\Delta \varvec{u}) d\Omega d\bar{\Gamma }\nonumber \\&\qquad + \int _{\bar{\Gamma }}\int _{\Omega (t)} \varvec{\nabla }(\Delta \varvec{u}^*)\cdot \varvec{\nabla }^T\varvec{u}^t:\varvec{\mathsf C}:\varvec{\nabla }^{(s)} \varvec{u}^t d\Omega d\bar{\Gamma }\nonumber \\&\qquad + \int _{\bar{\Gamma }}\int _{\Omega (t)} \varvec{\nabla }(\Delta \varvec{u}^*)\cdot \varvec{\nabla }^T\varvec{u}^t:\varvec{\mathsf C}:\varvec{\nabla }^{(s)} (\Delta \varvec{u}) d\Omega d\bar{\Gamma }\nonumber \\&\qquad + \int _{\bar{\Gamma }}\int _{\Omega (t)} \varvec{\nabla }(\Delta \varvec{u}^*)\cdot \varvec{\nabla }^T\varvec{u}^t:\varvec{\mathsf C}:\frac{1}{2}\varvec{\nabla }\varvec{u}^t\cdot \varvec{\nabla }^T \varvec{u}^t d\Omega d\bar{\Gamma }\nonumber \\&\qquad + \int _{\bar{\Gamma }}\int _{\Omega (t)} \varvec{\nabla }(\Delta \varvec{u}^*)\cdot \varvec{\nabla }^T\varvec{u}^t:\varvec{\mathsf C}:\varvec{\nabla }\varvec{u}^t\cdot \varvec{\nabla }^T (\Delta \varvec{u}) d\Omega d\bar{\Gamma }\nonumber \\&\qquad + \int _{\bar{\Gamma }}\int _{\Omega (t)} \varvec{\nabla }(\Delta \varvec{u}^*)\cdot \varvec{\nabla }^T(\Delta \varvec{u}):\varvec{\mathsf C}:\varvec{\nabla }^{(s)} \varvec{u}^t d\Omega d\bar{\Gamma }\nonumber \\&\qquad + \int _{\bar{\Gamma }}\int _{\Omega (t)} \varvec{\nabla }(\Delta \varvec{u}^*)\cdot \varvec{\nabla }^T(\Delta \varvec{u}):\varvec{\mathsf C}:\frac{1}{2}\varvec{\nabla }\varvec{u}^t\cdot \varvec{\nabla }^T \varvec{u}^t d\Omega d\bar{\Gamma }. \end{aligned}$$
(19)

Despite the apparent complexity of these equations, the resulting scheme is very simple and has nevertheless provided very good results.
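Schematically, the resulting explicit, incremental strategy can be summarized as follows. The three callables passed to the driver are hypothetical placeholders (they are assumed to return the separated operators and right-hand-side terms stemming from Eq. (19), evaluated at the known state \(\varvec{u}^t\)), and pgd_greedy refers to the algebraic sketch given earlier; this is a schematic, not the authors' implementation.

```python
def explicit_incremental_pgd(u0, n_steps, n_modes,
                             assemble_operators, assemble_rhs, update_state):
    """Schematic driver for the explicit, incremental PGD scheme.

    The three callables are user-supplied (hypothetical placeholders):
      assemble_operators(u_t) -> (A, B)   separated operators from Eq. (19)
      assemble_rhs(u_t)       -> F_modes  separated load and residual terms
      update_state(u_t, X, Y) -> u_next   explicit update of the state
    """
    u_t = u0                                   # known state, in separated form
    modes_per_step = []
    for _ in range(n_steps):
        A, B = assemble_operators(u_t)         # linearize around the known state u^t
        F_modes = assemble_rhs(u_t)
        X, Y = pgd_greedy(A, B, F_modes, n_modes)   # greedy sketch given above
        u_t = update_state(u_t, X, Y)          # u^{t+dt} = u^t + du
        modes_per_step.append((X, Y))
    return u_t, modes_per_step
```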

However, a critical issue remains in this case (or, in general, when dealing with PGD approximations of non-linear problems), which is that of selecting the number of terms n composing the approximation, see Eq. (4). This must be done on the basis of predictions given by a suitable error estimator, which is the main objective of this work and will be detailed in the following section.

An error estimator based on the dual formulation

In fact, Eq. (19) represents an incremental, explicit linearization of the originally non-linear problem. Thus, using compact notation, at pseudo-time step p the weak form of the problem reads

$$\begin{aligned} a^p(\Delta \varvec{u}, \Delta \varvec{u}^*)=b^p(\Delta \varvec{u}^*). \end{aligned}$$

Errors in the PGD solution of this linearized equation come from two sources. First, the separated representation of the solution, given by Eq. (2), involves a truncation of the sum at a number n of terms. Secondly, the sought functions \(\varvec{F}_i\), \(\varvec{G}_i\), ..., are actually expressed by projecting them onto a finite element mesh of size h. In brief, the following diagram depicts the situation:

[Diagram: the solutions \(\varvec{u}\), \(\varvec{u}_h^{n=\infty }\), \(\varvec{u}_{h=0}^{n}\) and \(\varvec{u}_h^{n}\), related through the errors \(e_{\text {PGD}}\), \(e_{\text {FEM}}\) and \(e\)]

where we have denoted \(e_{\text {PGD}}=\Vert \varvec{u}_h^{n=\infty }-\varvec{u}_h^{n}\Vert = \Vert \varvec{u}-\varvec{u}_{h=0}^n\Vert \) and \(e_{\text {FEM}} = \Vert \varvec{u}_{h=0}^n-\varvec{u}_h^n\Vert =\Vert \varvec{u} -\varvec{u}_h\Vert \). Finally, the total error to be controlled is \(e= \Vert \varvec{u} -\varvec{u}_h^n\Vert \).
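With these definitions, the total error is bounded by the triangle inequality,

$$\begin{aligned} e = \Vert \varvec{u} -\varvec{u}_h^n\Vert \le \Vert \varvec{u}-\varvec{u}_{h=0}^n\Vert + \Vert \varvec{u}_{h=0}^n-\varvec{u}_h^{n}\Vert = e_{\text {PGD}} + e_{\text {FEM}}, \end{aligned}$$

so that both contributions must be driven below the prescribed tolerance.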

It is worth noting that, if the FE mesh size h is not chosen judiciously, the total error in the simulation, bounded by the sum of the FEM and PGD errors, will never drop below a prescribed tolerance regardless of the number of modes added to the PGD approximation. Therefore, attention must be paid not only to the number of terms n in the PGD approximation, but also to the mesh size h.

The objective of this paper is to determine the number of terms necessary to reach some error threshold in the non-linear problem given by Eq. (3), equipped with the non-linear constitutive equation (13). This error assessment is performed by establishing a (here, linear) functional \(\ell ^o(\cdot )\), used to extract a certain quantity of interest. For the application we are pursuing (surgery simulators with haptic feedback), this quantity of interest is the reaction force perceived at the peripheral. The main advantage of the linearization introduced in Eq. (19) is precisely that, within each time step, the increment in the reaction force is a linear function of the increment of (vertical, for simplicity) displacement at the loading point, i.e.,

$$\begin{aligned} \ell ^o(\Delta \varvec{u}_h^n)=\Delta u_z(\varvec{x}_0, \varvec{s}_0), \end{aligned}$$

in other words, the increment of vertical displacement at \(\varvec{x}_0\) provoked by the load acting at \(\varvec{s}_0\). In our approach, since we are interested in estimating the error in the force value, we simply take \(\varvec{x}_0=\varvec{s}_0\), with \(\varvec{x}_0\) a particular point on the loading surface.

Following [34] (although other approaches are equally feasible for PGD, see [35–37]), the error in the quantity of interest is obtained through an auxiliary problem, often referred to as the dual or adjoint problem. In [34], the exact solution of the auxiliary problem is replaced by a more accurate solution, which in a PGD context can naturally be obtained by performing some extra enrichment iterations (i.e., letting n grow sufficiently to a value N).

Therefore, the dual or adjoint problem will now look like

$$\begin{aligned} a(\Delta \varvec{u}^*,\varvec{\varphi })=\ell ^o(\Delta \varvec{u}^*), \end{aligned}$$
(20)

with \(\varvec{\varphi }\) the dual unknown. The error in the quantity of interest could therefore be computed by

$$\begin{aligned} \ell ^o(e)=b(\varvec{\varphi })-a(\varvec{u}_h^n,\varvec{\varphi }), \end{aligned}$$

or, equivalently,

$$\begin{aligned} \ell ^o(e)=a(e,\varepsilon )\le |e| \cdot |\varepsilon |, \end{aligned}$$

with \(\varepsilon = \varvec{\varphi }-\varvec{\varphi }_h\) the error in the dual solution. As mentioned before, the exact solution of the dual problem, \(\varvec{\varphi }\), is often not available, so that it is approximated as

$$\begin{aligned} \varvec{\varphi }\approx \varvec{\varphi }_h^{N \gg n }. \end{aligned}$$

Although different possibilities exist, see for instance [31], several strategies were analyzed in [34] to determine the necessary value of N. For instance, results taking \(N=n+5\), \(N=2n\) or, simply, \(N=n\) were studied. In general, a few extra terms, say five, are enough to obtain a good dual solution.
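At the discrete level of one linearized step, the resulting estimate is simply the primal residual tested with the enriched dual solution. A minimal sketch (Python/NumPy, a schematic under the assumption of a symmetric discrete operator, not the authors' code) could read:

```python
import numpy as np

def qoi_error_estimate(A, b, u_n, phi_N):
    """Estimate of the error in the quantity of interest at one linearized
    step, following l^o(e) = b(phi) - a(u_h^n, phi), with phi replaced by
    an enriched dual solution phi_N (N >> n modes).
    """
    return phi_N @ (b - A @ u_n)   # primal residual tested with the dual

def dual_load(n_dof, loaded_dof_z):
    """Extraction functional l^o: unit load at the vertical degree of
    freedom of the loaded point, so that l^o(du) is the vertical
    displacement increment there (hypothetical dof numbering)."""
    l = np.zeros(n_dof)
    l[loaded_dof_z] = 1.0
    return l

# With a symmetric operator, the dual problem of Eq. (20) reads A phi = l,
# and phi_N is obtained with the same greedy PGD by letting the number of
# modes grow to N (e.g. N = n + 5). Enrichment of the primal solution is
# stopped as soon as abs(qoi_error_estimate(A, b, u_n, phi_N)) falls below
# the prescribed tolerance on the transmitted force.
```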

In what follows, we show two examples of application of the proposed methodology for two non-linear problems, formulated under the framework of the theory of hyperelasticity.

Fig. 2

Model for the beam bending problem

Fig. 3

Deformed beam for a particular location of the point load

Numerical examples

Cantilever beam

We consider the example of a cantilevered Kirchhoff–Saint Venant beam whose geometry is shown in Fig. 2. The nodes at one end of the beam are fixed, while the rest of the degrees of freedom are free. The mesh is composed of tetrahedral elements, with \(3\times 3\) nodes in the \(40\times 40\) mm\(^2\) cross-section and 21 nodes along the 400 mm longitudinal direction. The material parameters were Young’s modulus \(E=2\times 10^{11}\) Pa and Poisson’s ratio \(\nu =0.3\). The applied force is assumed to be always vertical, with a magnitude of \(10^8\) N.

The deformed configuration of the beam for one particular location of the load is shown in Fig. 3. The first four modes \(\varvec{X}^k(\varvec{x})\), \(k=1,2,\ldots , 4\), are shown in Fig. 4.

Fig. 4

First four spatial modes \({\varvec{X}^k}({\varvec{x}})\), \(k=1, \ldots , 4\)

For this example we have considered that the load could be applied at any of the 24 farthest nodes of the upper face of the beam (nodes within the black rectangular area in Fig. 2). For this admissible loading region, the resulting first four modes \(\varvec{Y}^k(\varvec{s})\), \(k=1, \ldots , 4\), are depicted in Fig. 5.

Fig. 5

First four loading-position modes \({\varvec{Y}^k}({\varvec{s}})\), \(k=1, \ldots , 4\)

The loading process was solved by taking \(p=8\) pseudo-time steps, both for the primal and dual problems. The dual problem was solved by applying a stopping criterion such that \(\Vert \varvec{\varphi }^{n+1}-\varvec{\varphi }^{n}\Vert \le 10^{-8}\). With such a criterion, the computation of \(\varvec{\varphi }\) involved eight modes, one per time step. The evolution of the predicted error with the number of modes n employed in the computation of the primal variable is shown in Fig. 6.

Fig. 6

Convergence of the error with the number of modes n

Fig. 7

Model for the liver palpation problem. In red, the region in which loading is allowed

Remark 1

It is important to note that, although we have considered only 24 possible positions for the load vector, the fact that up to 60 modes are found to express the solution is not an inconsistency. One could think that a singular value decomposition of the 24 displacement vectors corresponding to the 24 distinct load positions would give at most 24 modes to express the high-dimensional solution \(\varvec{u} (\varvec{x}, \varvec{s})\). However, in this case we have performed an explicit, incremental solution of the non-linear problem, dividing it into \(p=8\) pseudo-time steps. Therefore, in none of the examples shown has the limit number of 24 modes been reached: the highest number of modes for a particular load increment was 11, far from 24. This is consistent with our previous experience in the development of computational vademecums by PGD techniques.

Remark 2

In addition, the modes for the different load steps are not mutually orthogonal. An additional compression of the modes with the so-called PGD-projection, see [38], yields a much smaller number of modes. In this case, the modes could be compressed to fewer than 12 for the whole loading process without any further increase in the approximation error.

Palpation of the liver

In this example we apply the error estimator developed above to the simulation of liver palpation. The liver model, already shown in Fig. 1, is essentially the same as that developed in previous references by the authors, see details in [13, 14].

The model, see Fig. 7, is composed of 2853 nodes and 10,519 linear tetrahedra. For ease of exposition, a Kirchhoff–Saint Venant constitutive law with E = 160,000 Pa and \(\nu =0.48\) is considered. More sophisticated constitutive laws are equally possible, see [7] and references therein. In Fig. 7 the region \(\bar{\Gamma }\) in which the load can be applied is highlighted in red. Only 66 nodes have been chosen as candidates for loading, although the entire surface of the organ could equally be chosen, as in [13], for instance.

Fig. 8

First four spatial modes of the liver problem, \({\varvec{X}^k}({\varvec{x}})\), \(k=1, \ldots , 4\)

As in the previous example, the first four spatial and loading modes \(\varvec{X}^k(\varvec{x})\) and \(\varvec{Y}^k(\varvec{s})\), \(k=1, \ldots , 4\), are depicted in Figs. 8 and 9, respectively. A load of 30 N was applied in a sequence of three pseudo-time steps (a quasi-static process is here considered, see [16] for details on the dynamic problem).

Fig. 9

First four loading-position modes of the liver problem, \({\varvec{Y}^k}({\varvec{s}})\), \(k=1, \ldots , 4\)

Fig. 10

Convergence of the error with the number of modes n for the liver problem

The dual problem was solved by applying a stopping criterion such that \(\Vert \varvec{\varphi }^{n+1}-\varvec{\varphi }^{n}\Vert \le 10^{-8}\). The evolution of the predicted error with the number of modes n employed in the computation of the primal variable is shown in Fig. 10.

Conclusions

In applications with haptic response, the development of a suitable error indicator of the force being transmitted to the user is of utmost importance, as can be readily understood. In this paper, we have developed a method for the error estimation in such a quantity of interest for a real-time simulator based on the use of reduced order models. In particular, proper generalized decomposition techniques have been employed.

Based on previous developments of the authors, an explicit linearization of the originally non-linear constitutive equations in the framework of PGD has been employed. This casts the problem as a sequence of linear problems, for which an error estimator in the spirit of [34] has been employed. It is based on the use of the so-called dual problem as a stopping criterion for the original (or primal) one.

The result is, to the best of our knowledge, the first example of an error estimator for non-linear problems in the framework of PGD methods in general, and of haptic simulators in particular.