We employ an analysis-by-synthesis approach to refine the input coarse mesh animation, at every frame, by optimizing the following energy \(\mathbf E (M)\) with respect to the collection of Surface Gaussian means \(M = \{\hat{\mu }_0,\dots \hat{\mu }_{n_s-1}\}\):
$$\begin{aligned} \mathbf E (M) = E_{sim} - w_{reg} E_{reg} - w_{temp} E_{temp}. \end{aligned}$$
(4)
The term \(E_{sim}\) measures the color similarity between the projected Surface Gaussians and the Image Gaussians obtained from each camera view. \(E_{reg}\) keeps the distribution of the Surface Gaussians geometrically smooth, where \(w_{reg}\) is a user-defined smoothness weight. The additional term \(E_{temp}\) smooths the displacements of the Surface Gaussians over time to avoid visual artifacts such as jittering, where \(w_{temp}\) is a user-defined weight.
We constrain the Surface Gaussians to move only along the corresponding (normalized) vertex normal direction \(N_s\):
$$\begin{aligned} \hat{\mu }_{s} = \hat{\mu }_{s}^{init} + N_s k_s \in \mathbb {R}^3 \end{aligned}$$
(5)
where \(\hat{\mu }_s^{init}\) is the initial Surface Gaussian mean, initialized as the vertex position \(v_s\) at the beginning of each frame, and \(k_s\) is the unknown vertex displacement.
This hard constraint brings two main advantages: first, it forces the Surface Gaussians to maintain a regular distribution on the surface; second, it greatly reduces the number of parameters to optimize (a single scalar displacement \(k_s\) instead of the three components \([\hat{\mu }_s]_x, [\hat{\mu }_s]_y\) and \([\hat{\mu }_s]_z\)), resulting in higher performance as well as better-posed convergence.
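For concreteness, a minimal NumPy sketch of this parameterization is given below; the function and array names are illustrative and not taken from any particular implementation.

```python
import numpy as np

def constrained_means(vertices_init, normals, k):
    """Eq. 5: mu_hat_s = mu_hat_s^init + N_s * k_s for all Surface Gaussians.

    vertices_init: (n_s, 3) initial means (vertex positions at the start of the frame)
    normals:       (n_s, 3) unit vertex normals N_s
    k:             (n_s,)   scalar displacements along the normals (the unknowns)
    """
    return vertices_init + normals * k[:, None]
```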
By maximizing \(\mathbf {E}(M)\) for each frame in terms of the collection of Surface Gaussian means M, we aim at the best surface-to-image similarity together with the best distribution in space (on the surface) and in time (across frames). We define each term of \(\mathbf {E}(M)\) analytically and compute its derivatives with respect to the unknown displacements \(k_s, \forall s \in \{0\dots n_s-1\}\), which we then drive towards 0 during maximization. The derivatives are:
$$\begin{aligned} \begin{aligned} \frac{\partial \mathbf E }{\partial k_s}&= \frac{\partial }{\partial k_s}\left( E_{sim} - w_{reg} E_{reg} - w_{temp} E_{temp}\right) \\&= \frac{\partial E_{sim}}{\partial k_s} - w_{reg} \frac{\partial E_{reg}}{\partial k_s} - w_{temp} \frac{\partial E_{temp}}{\partial k_s} \\ \end{aligned} \end{aligned}$$
(6)
In the next sections, we describe each term in detail and provide the full derivation of the analytic derivatives.
Similarity Term
We exploit the power of the implicit Gaussian representation of both the input images and the surface to derive a closed-form analytical formulation for our similarity term. In principle, a pair consisting of an Image Gaussian and a projected Surface Gaussian should have a high similarity measure when the two show similar colors and their spatial locations are sufficiently close. This measure can be formulated as the integral of the product of the projected Surface Gaussian \(G_s(x)\) and the Image Gaussian \(G_i(x)\), weighted by their color similarity \( T(\delta _{i,s})\), as follows:
$$\begin{aligned} {\varPhi }_{i,s} = T_{{\varDelta }_c}(\delta _{i,s}) \left[ \int _{{\varOmega }}{G_i(x) G_s(x) \partial {x}}\right] ^2 \end{aligned}$$
(7)
In the above equation \(\delta _{i,s} = || \eta _i - \eta _s ||^2 \in \mathbb {R}^{+}\) measures the squared Euclidean distance between the colors, \({\varDelta }_c\) is the maximum color distance allowed (after which the color similarity should drop to 0), and \(T_{{\varDelta }}(\delta ): \mathbb {R} \rightarrow \mathbb {R}\) is the Wendland radial basis function (Wendland 1995), defined as:
$$\begin{aligned} T_{{\varDelta }}(\delta ) = \left\{ \begin{array}{l l} \Big (1 - \frac{\delta }{{\varDelta }}\Big )^4 \Big (4 \frac{\delta }{{\varDelta }} + 1\Big ) &{} \quad \text {if }\delta < {\varDelta }\\ &{}\\ 0 &{} \quad \text {otherwise} \end{array} \right. \end{aligned}$$
(8)
Applying the function \(T_{{\varDelta }}\) to \(\delta \) results in a smooth color similarity measure that equals 1 if \(\delta = 0\), i.e. \(T_{{\varDelta }}(0) = 1\), and smoothly decreases towards 0 as \(\delta \) approaches \({\varDelta }\), i.e. \(\lim _{\delta \rightarrow {\varDelta }} T_{{\varDelta }}(\delta ) = 0\).
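As an illustration, Eq. 8 transcribes directly into a few lines of NumPy (the function and argument names are chosen for readability and are not part of the paper):

```python
import numpy as np

def wendland(delta, Delta):
    """Eq. 8: smooth weight that equals 1 at delta = 0 and 0 for delta >= Delta."""
    r = np.asarray(delta, dtype=float) / Delta
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)
```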
The main advantage of using a Gaussian representation is that the integral in Eq. 7 has a closed-form solution, namely another Gaussian with combined properties:
$$\begin{aligned} {\varPhi }_{i,s}= & {} T_{{\varDelta }_c}(\delta _{i,s}) \left[ \int _{{\varOmega }} \frac{1}{\sqrt{\pi \sigma _s \sigma _i}} exp\left( -\frac{1}{2}\frac{||x-\mu _{i}||^2}{\sigma _i^2}\right) \right. \nonumber \\&\left. \times ~exp\left( -\frac{1}{2}\frac{||x-\mu _{s}||^2}{\sigma _s^2}\right) \partial x \right] ^2\nonumber \\= & {} T_{{\varDelta }_c}(\delta _{i,s}) \left[ \frac{\sqrt{2 \sigma _s \sigma _i}}{\sqrt{(\sigma _{s}^2+\sigma _{i}^2)}} exp\left( -\frac{1}{2}\frac{||\mu _i - \mu _s ||^2}{\sigma _{s}^2+\sigma _{i}^2}\right) \right] ^2\nonumber \\= & {} T_{{\varDelta }_c}(\delta _{i,s}) 2 \frac{\sigma _{s} \sigma _{i}}{\sigma _{s}^2+\sigma _{i}^2} exp\left( -\frac{||\mu _i - \mu _s ||^2}{\sigma _{s}^2+\sigma _{i}^2}\right) \end{aligned}$$
(9)
The use of normalized Surface Gaussians with the chosen normalization factor allows us to mathematically constrain the overlap \({\varPhi }_{i,s}\) to the interval [0, 1], which has appealing properties for the following steps of the formulation. Although we do not make use of this, it is worth mentioning that \({\varPhi }_{i,s}\) with normalized Gaussians also eases the optimization of the size (i.e. standard deviation) along with the mean of the Surface Gaussians, which was previously impractical (see Fig. 4 for comparison).
To compute \(E_{sim}\), we first calculate the overlap of the set of Surface Gaussians against the set of Image Gaussians for each camera view, obtained by summing up all overlaps \({\varPhi }_{i,s}\), \(\forall i, s\). Then, we normalize the result by the number of cameras \(n_c\) and the maximum obtainable overlap, which is simply the number of Image Gaussians, since \(\sum _i {\varPhi }_{i,i} = \sum _i 1 = n_i^c\), \(\forall c\):
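The closed-form expression of Eq. 9 is straightforward to evaluate; a self-contained sketch for a single pair of Gaussians could look as follows, where mu_i, mu_s are 2D image-space means, sigma_i, sigma_s scalar standard deviations, and eta_i, eta_s colors (all names illustrative):

```python
import numpy as np

def overlap(mu_i, sigma_i, eta_i, mu_s, sigma_s, eta_s, Delta_c):
    """Eq. 9: color-weighted overlap Phi_{i,s} of an Image Gaussian and a
    projected Surface Gaussian; lies in [0, 1] thanks to the normalization."""
    delta = float(np.sum((np.asarray(eta_i) - np.asarray(eta_s)) ** 2))  # squared color distance
    if delta >= Delta_c:                                                 # Wendland weight, Eq. 8
        return 0.0
    color_w = (1.0 - delta / Delta_c) ** 4 * (4.0 * delta / Delta_c + 1.0)
    var_sum = sigma_s ** 2 + sigma_i ** 2
    spatial = 2.0 * sigma_s * sigma_i / var_sum * np.exp(
        -float(np.sum((np.asarray(mu_i) - np.asarray(mu_s)) ** 2)) / var_sum)
    return color_w * spatial
```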
$$\begin{aligned} E_{sim} = \frac{1}{n_{c}} \sum _{c=0}^{n_{c}-1} \left[ \frac{1}{n_i^c} \sum _{i = 0}^{n_i^c - 1} min \left( \sum _{s = 0}^{n_s-1} {\varPhi }_{i,s}, 1\right) \right] \end{aligned}$$
(10)
such that \(E_{sim} \in [0,1]\). The use of normalized Gaussians contributes an improvement in performance (\(3\%\) w.r.t. the unnormalized version). In this equation, the inner minimization implicitly handles occlusions on the surface, as it prevents occluded Gaussians that project onto the same image location from contributing multiple times to the energy. This is an elegant way of handling occlusion while preserving the smoothness of the energy.
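Assuming the overlaps for each camera c have been assembled into a matrix Phi of shape (n_i^c, n_s), Eq. 10 reduces to a few lines (a sketch with illustrative names, not the actual implementation):

```python
import numpy as np

def similarity_energy(Phi_per_camera):
    """Eq. 10: E_sim averaged over cameras, with the inner clamp to 1 that
    implicitly handles occlusions."""
    per_camera = [np.minimum(Phi.sum(axis=1), 1.0).mean()   # mean over Image Gaussians
                  for Phi in Phi_per_camera]                # Phi: (n_i^c, n_s)
    return float(np.mean(per_camera))                       # mean over cameras
```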
Derivative for \(E_{sim}\): In order to calculate the derivative of \(E_{sim}\), we note that most of its terms are constant with respect to \(k_s\), except the projected means \(\mu _{s}\) and the standard deviations \(\sigma _{s}\) within the term \({\varPhi }_{i,s}\).
Using homogeneous coordinates, expressed throughout the paper using the superscript h, we first compute the Surface Gaussian mean in image space, \(\mu _{s}^h\), by projecting the constrained Surface Gaussian mean \(\hat{\mu }_s^h\) from Eq. 5 using the camera projection matrix \(P \in \mathbb {R}^{4 \times 4}\):
$$\begin{aligned} \mu _{s}^{h} = P \hat{\mu }_s^h = P(\hat{\mu }_s^{init} + N_s^h k_s) \in \mathbb {R}^3 \end{aligned}$$
(11)
where \(\hat{\mu }_s^{init}\) is the initial Surface Gaussian mean, initialized as the vertex position \(v_s\), in homogeneous coordinates. The derivative of \(\mu _{s}^{h}\) with respect to \(k_s\) is defined as:
$$\begin{aligned} \frac{\partial \mu _{s}^{h}}{\partial k_s}= & {} \frac{\partial }{\partial k_s}(P(\hat{\mu }_s^{init} + N_s^h k_s)) = P\frac{\partial }{\partial k_s}(\hat{\mu }_s^{init} + N_s^h k_s) \nonumber \\= & {} P \left( 0 + N_s^h \frac{\partial }{\partial k_s}(k_s)\right) = P N_s^h \end{aligned}$$
(12)
Combining Eqs. 2 and 12, the derivative of \(\mu _s\) evaluates to:
$$\begin{aligned} \begin{aligned}&\frac{\partial \mu _{s}}{\partial k_s} = \left( \begin{array}{c} \frac{\partial }{\partial k_s}\left( \frac{{[\mu _{s}^{h}]}_x}{{[\mu _{s}^{h}]}_z} \right) \\ \frac{\partial }{\partial k_s}\left( \frac{{[\mu _{s}^{h}]}_y}{{[\mu _{s}^{h}]}_z} \right) \end{array} \right) = \left( \begin{array}{c} \frac{\frac{\partial {[\mu _{s}^{h}]}_x}{\partial k_s} {[\mu _{s}^{h}]}_z- {[\mu _{s}^{h}]}_x\frac{\partial {[\mu _{s}^{h}]}_z}{\partial k_s}}{{{[\mu _{s}^{h}]}_z}^2} \\ \frac{\frac{\partial {[\mu _{s}^{h}]}_y}{\partial k_s} {[\mu _{s}^{h}]}_z- {[\mu _{s}^{h}]}_y\frac{\partial {[\mu _{s}^{h}]}_z}{\partial k_s}}{{{[\mu _{s}^{h}]}_z}^2} \\ \end{array} \right) \\&= \left( \begin{array}{c} \frac{\partial }{\partial k_s}\left( {[\mu _{s}^{h}]}_x\right) - [\mu _s]_x \frac{\partial }{\partial k_s}\left( {[\mu _{s}^{h}]}_z\right) \\ \frac{\partial }{\partial k_s}\left( {[\mu _{s}^{h}]}_y\right) - [\mu _s]_y \frac{\partial }{\partial k_s}\left( {[\mu _{s}^{h}]}_z\right) \\ \end{array} \right) \frac{1}{{[\mu _{s}^{h}]}_z} \\&= \left( \begin{array}{c} {{[P N_s^h]}_x- [\mu _s]_x {[P N_s^h]}_z}\\ {{[P N_s^h]}_y- [\mu _s]_y {[P N_s^h]}_z}\\ \end{array} \right) \frac{1}{[P (\hat{\mu }_s^{init} + N_s^h k_s)]_z}. \end{aligned} \end{aligned}$$
(13)
The derivative with respect to \(k_s\) of the projected standard deviation \(\sigma _s\) is calculated by applying simple differentiation rules:
$$\begin{aligned} \begin{aligned} \frac{\partial \sigma _{s}}{\partial k_s}&= \frac{\partial \sigma _{s}}{\partial {[\mu _{s}^{h}]}_z} \frac{\partial {[\mu _{s}^{h}]}_z}{\partial k_s} = \frac{-f \hat{\sigma }_s}{({[\mu _{s}^{h}]}_z)^2}\frac{\partial {[\mu _{s}^{h}]}_z}{\partial k_s} \\&= \frac{-\sigma _{s}}{{[\mu _{s}^{h}]}_z}\frac{\partial {[\mu _{s}^{h}]}_z}{\partial k_s} = \frac{-\sigma _{s}}{{[\mu _{s}^{h}]}_z}{[P N_s^h]}_z\in \mathbb {R}, \end{aligned} \end{aligned}$$
(14)
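A minimal sketch of Eqs. 11-14 for a single Surface Gaussian is given below; it assumes a \(4 \times 4\) projection matrix P acting on homogeneous 4-vectors, a focal length f, and the relation \(\sigma _s = f \hat{\sigma }_s / [\mu _s^h]_z\) implied by the intermediate step of Eq. 14 (all function and variable names are illustrative):

```python
import numpy as np

def project_with_derivatives(P, mu_init_h, N_h, k, sigma_hat, f):
    """Eqs. 11-14: projected mean mu_s, std sigma_s, and their derivatives w.r.t. k_s."""
    mu_h = P @ (mu_init_h + N_h * k)       # Eq. 11 (homogeneous image-space mean)
    dmu_h = P @ N_h                        # Eq. 12
    z = mu_h[2]
    mu = mu_h[:2] / z                      # perspective division
    sigma = f * sigma_hat / z              # projected standard deviation (assumed pinhole scaling)
    dmu = (dmu_h[:2] - mu * dmu_h[2]) / z  # Eq. 13
    dsigma = -sigma / z * dmu_h[2]         # Eq. 14
    return mu, sigma, dmu, dsigma
```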
Therefore, the derivative of the term \({\varPhi }_{i,s}\) with respect to \(k_s\) is obtained by substituting Eqs. 13 and 14 into Eq. 9, which yields:
$$\begin{aligned} \begin{aligned}&\frac{\partial }{\partial k_s}({\varPhi }_{i,s}) = T_{{\varDelta }_c}(\delta _{i,s}) 2 \frac{\partial }{\partial k_s}\Bigg (\frac{\sigma _{s}\sigma _{i}}{\sigma _{s}^2+\sigma _{i}^2} e^{-\frac{||\mu _i - \mu _s ||^2}{\sigma _{s}^2+\sigma _{i}^2}}\Bigg )\\&= T_{{\varDelta }_c}(\delta _{i,s}) 2 \Bigg \{2 \frac{\sigma _{s}\sigma _{i}}{\sigma _{s}^2+\sigma _{i}^2} e^{-\frac{||\mu _i - \mu _s ||^2}{\sigma _{s}^2+\sigma _{i}^2}}\Bigg [\frac{\partial {[\mu _{s}^{h}]}_z}{\partial k_s} \Bigg (-\frac{1}{2} +\\&+ \frac{\sigma _{s}^2}{\sigma _{s}^2+\sigma _{i}^2} - \frac{||\mu _i - \mu _s ||^2\sigma _{s}^2}{(\sigma _{s}^2+\sigma _{i}^2)^2 }\Bigg )\frac{1}{{[\mu _{s}^{h}]}_z} + \frac{(\mu _i - \mu _s) \frac{\partial \mu _{s}}{\partial k_s}}{\sigma _{s}^2+\sigma _{i}^2}\Bigg ]\Bigg \}\\&= T_{{\varDelta }_c}(\delta _{i,s}) 4 \frac{\sigma _{s}\sigma _{i}}{\sigma _{s}^2+\sigma _{i}^2} e^{-\frac{||\mu _i - \mu _s ||^2}{\sigma _{s}^2+\sigma _{i}^2}}\Bigg [{[P N_s^h]}_z\Bigg (-\frac{1}{2} +\\&+ \frac{\sigma _{s}^2}{\sigma _{s}^2+\sigma _{i}^2} - \frac{||\mu _i - \mu _s ||^2\sigma _{s}^2}{(\sigma _{s}^2+\sigma _{i}^2)^2 }\Bigg )\frac{1}{{[\mu _{s}^{h}]}_z} + \frac{(\mu _i - \mu _s) \frac{\partial \mu _s}{\partial k_s}}{\sigma _{s}^2+\sigma _{i}^2}\Bigg ]\end{aligned} \end{aligned}$$
(15)
Finally, the derivative of \(E_{sim}\) with respect to \(k_s\) is:
$$\begin{aligned} \frac{\partial E_{sim}}{\partial k_s} = \frac{1}{n_{c}} \sum _{c=0}^{n_{c}-1} \frac{1}{n_i^c} \sum _{i = 0}^{n_i^c-1} \left\{ \begin{array}{l l} \frac{\partial {\varPhi }_{i,s}}{\partial k_s} &{} \quad \text {if } \sum \nolimits _{s = 0}^{n_s-1} {\varPhi }_{i,s} < 1\\ &{}\\ 0 &{} \quad \text {otherwise} \end{array} \right. \end{aligned}$$
(16)
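A sketch of this aggregation, assuming the per-pair derivatives \(\frac{\partial {\varPhi }_{i,s}}{\partial k_s}\) have been collected per camera into a matrix dPhi of the same shape as Phi (names illustrative):

```python
import numpy as np

def similarity_gradient(Phi_per_camera, dPhi_per_camera):
    """Eq. 16: dE_sim/dk_s for all s; Image Gaussians whose summed overlap already
    reaches the clamp of Eq. 10 contribute zero gradient."""
    n_c = len(Phi_per_camera)
    grad = np.zeros(Phi_per_camera[0].shape[1])
    for Phi, dPhi in zip(Phi_per_camera, dPhi_per_camera):   # both: (n_i^c, n_s)
        active = Phi.sum(axis=1) < 1.0
        grad += (dPhi * active[:, None]).sum(axis=0) / Phi.shape[0]
    return grad / n_c
```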
Regularization Term
The regularization term constrains the Surface Gaussians with respect to their local neighborhoods such that the final reconstructed surface is sufficiently smooth. This is accomplished by coupling the displacements \(k_s\) along the normals through the minimization of the following energy:
$$\begin{aligned} E_{reg} = \sum _{s = 0}^{n_s-1} \frac{1}{|{\varPsi }(s)|}{\sum _{j \in {\varPsi }(s)} T_{{\varDelta }_d}(\delta _{s,j}) \left( k_s - k_j\right) ^2}, \end{aligned}$$
(17)
where \({\varPsi }(s)\) is the set of Surface Gaussian indices that are neighbors of \(G_s\), \(T_{{\varDelta }}(\delta )\) is defined in Eq. 8, \(\delta _{s,j} \in \mathbb {R}^{+}\) is the geodesic surface distance between \( G_s \) and \( G_j \) measured in number of edges, and \({\varDelta }_d\) is the maximum allowed geodesic distance (after which \(T_{{\varDelta }_d}\) drops to 0). Since we assume a fixed surface topology for our experiments, \(\delta _{s,j}\) does not change, and in particular is constant with respect to the degrees of freedom \(k_s\). We compute the geodesic distance among all vertices and all possible neighbors only once per sequence. The effect of minimizing \(E_{reg}\) is to maintain a smooth surface in which neighboring Gaussians show displacements that become more similar the closer they are to each other. A similar formulation for freely moving Surface Gaussians, without any normal constraint, would be harder to design: it would possibly require additional, more complex terms to guarantee a smooth and regular distribution of the resulting vertex positions on the surface.
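Since the geodesic distances, and hence the weights \(T_{{\varDelta }_d}(\delta _{s,j})\), are fixed per sequence, they can be precomputed once; a sketch of Eq. 17 with such a precomputed neighborhood structure could be (neighbors[s] is an assumed list of neighbor indices and weights[s][j] the corresponding Wendland weight):

```python
def regularization_energy(k, neighbors, weights):
    """Eq. 17: E_reg, penalizing displacement differences between close neighbors."""
    E = 0.0
    for s, nbrs in enumerate(neighbors):
        if not nbrs:
            continue
        E += sum(weights[s][j] * (k[s] - k[j]) ** 2 for j in nbrs) / len(nbrs)
    return E
```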
Table 1 User-defined parameters of the energy function \(\mathbf E \), together with their description, values interval and default value
Derivative for \(E_{reg}\): The derivative of \(E_{reg}\) with respect to \(k_s\) is calculated by simple differentiation rules as follows:
$$\begin{aligned} \begin{aligned}&\frac{\partial E_{reg}}{\partial k_s}\! = \!\frac{\partial }{\partial k_s}\! \left( \sum _{s = 0}^{n_s-1} \frac{1}{|{\varPsi }(s)|}{\sum _{j \in {\varPsi }(s)} \! T_{{\varDelta }_d}(\delta _{s,j}) \! \left( k_s\! - \!k_j\right) ^2}\!\right) \\&\quad = \frac{1}{|{\varPsi }(s)|} \sum _{j \in {\varPsi }(s)} \! T_{{\varDelta }_d}(\delta _{sj}) \left( \frac{\partial \left( \! k_s \! - \! k_j \! \right) ^2 }{\partial k_s} + \frac{\partial \left( \! k_j \! - k_s \! \right) ^2}{\partial k_s}\right) \\&\quad = \frac{1}{|{\varPsi }(s)|} \sum _{j \in {\varPsi }(s)} \! T_{{\varDelta }_d}(\delta _{s,j}) \left( 2 \left( k_s\! - k_j\right) - 2 \left( k_j\! - k_s\right) \right) \\&\quad = \frac{4}{|{\varPsi }(s)|} \sum _{j \in {\varPsi }(s)} T_{{\varDelta }_d}(\delta _{s,j}) \left( k_s - k_j\right) \end{aligned} \end{aligned}$$
(18)
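The corresponding gradient of Eq. 18, reusing the same assumed neighborhood structure as in the E_reg sketch above:

```python
def regularization_gradient(k, neighbors, weights):
    """Eq. 18: dE_reg/dk_s for all s."""
    grad = [0.0] * len(k)
    for s, nbrs in enumerate(neighbors):
        if not nbrs:
            continue
        grad[s] = 4.0 / len(nbrs) * sum(weights[s][j] * (k[s] - k[j]) for j in nbrs)
    return grad
```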
Temporal Smoothing Term
The temporal smoothing term is used to constrain the displacements \(k_s\) over time, generating a smooth temporal deformation and avoiding jitter and other artifacts. This additional term is defined as follows:
$$\begin{aligned} E_{temp} = \sum _{s=0}^{n_s - 1} \left( \frac{1}{2} (k_s^{f-2} + k_s^{f}) - k_s^{f-1} \right) ^2\end{aligned}$$
(19)
where \(k_s^{f-2}\), \(k_s^{f-1}\) and \(k_s^{f}\) are the normal displacements \(k_s\) computed two frames before, one frame before, and at the current frame, respectively. This formulation is inspired by the law of acceleration and aims at time-consistent results with smooth acceleration. The smoothing term comes into play after the displacements for the first two frames have been computed, i.e. once the constants for the first frame \(k_s^{1}\) and the second frame \(k_s^{2}\) are known.
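In code, this term is a one-line finite-difference acceleration penalty once the two previous frames are available (a sketch; k_prev2, k_prev1 and k are assumed NumPy arrays of length \(n_s\) holding \(k_s^{f-2}\), \(k_s^{f-1}\) and \(k_s^{f}\)):

```python
import numpy as np

def temporal_energy(k_prev2, k_prev1, k):
    """Eq. 19: E_temp, a squared finite-difference acceleration over three frames."""
    return float(np.sum((0.5 * (k_prev2 + k) - k_prev1) ** 2))
```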
Derivative for \(E_{temp}\): The derivative of \(E_{temp}\) with respect to \(k_s^f\) at the current frame f is calculated by simple differentiation rules as follows:
$$\begin{aligned} \frac{\partial E_{temp}}{\partial k_s}= & {} 2 \left( \frac{1}{2} \left( k_s^{f-2} + k_s^{f}\right) \! -\! k_s^{f-1} \! \right) \! \left( \frac{1}{2} (0 + 1)\! -\! 0\right) \nonumber \\= & {} \frac{1}{2} (k_s^{f-2} + k_s^{f}) - k_s^{f-1}\end{aligned}$$
(20)
Optimization
Our energy function \(\mathbf {E}\) can be efficiently optimized using an iterative gradient-based approach. For each iteration t of the maximization process, we compute the derivative of \(\mathbf {E}^t\) with respect to each \(k_s, s \in \{0\dots n_s - 1\}\), obtained by summing up all energy term derivatives, following Eq. 6.
To improve computational efficiency, we evaluate the overlap \({\varPhi }_{i,s}\) only for Surface Gaussians that are visible from each camera view. Explicit visibility computation is performed only once at the beginning of each frame, treating each Surface Gaussian as a simple vertex. The implicit occlusion handling then consistently accounts for new occlusions that might arise during optimization. The Gaussian overlap is computed between visible projected Surface Gaussians and Image Gaussians in a local neighborhood, considering only the closest Image Gaussians up to a distance threshold \(T_{dist}\) in pixels and a color distance threshold \(T_{color}\). Table 1 summarizes the main user-defined parameters as well as their default values.
We efficiently optimize our energy function \(\mathbf E \) using a conditioned gradient ascent approach. The general gradient ascent method is a first-order optimization procedure that finds local maxima by taking steps proportional to the energy gradient. It uses a scalar factor, the conditioner \(\gamma \), associated with the analytical derivatives, which increases (resp. decreases) step by step when the gradient sign is constant (resp. fluctuating).
We define the gradient at the iteration t of the maximization operation as \({\nabla }(\mathbf {E})^t= \frac{\partial }{\partial k_s}(\mathbf E )^t\) and proceed as follows. At each optimization step t we update the displacements \(k_s^t\) based on the current normalized gradient \(\overline{\nabla }(\mathbf {E})^t\) and conditioner \(\gamma ^t\)
$$\begin{aligned} k_s^t = k_s^{t-1} + \overline{\nabla }(\mathbf {E})^t \gamma ^t \end{aligned}$$
(21)
where \(k_s^{0} = 0\), \(\forall s = 0\dots n_s-1\), and \(\overline{\nabla }(\mathbf {E})^t\) is the normalized gradient computed considering the maximum \({\nabla }(\mathbf {E})^t\) among all \(s = 0\dots n_s -1\) at the current step to ensure values in the interval [0, 1]:
$$\begin{aligned} \overline{\nabla }(\mathbf {E})^t = \frac{{\nabla }(\mathbf {E})^t}{max\left( {\nabla }(\mathbf {E})^t, s = 0\dots n_s-1\right) }\end{aligned}$$
(22)
The conditioner is initially set to \(\gamma ^0 = 0.1\); we then update it based on the gradients at the previous and current steps as follows:
$$\begin{aligned} \gamma ^{t+1} = \left\{ \begin{array}{l l} min\left( 1.2 \gamma ^{t},\frac{{\varDelta }_{\gamma }}{\overline{\nabla }(\mathbf {E})^t}\right) &{} \quad \text {if }\left( \overline{\nabla }(\mathbf {E})^{t-1} \overline{\nabla }(\mathbf {E})^{t}\right) > 0\\ &{}\\ 0.5 \gamma ^{t} &{} \quad \text {otherwise} \end{array} \right. \end{aligned}$$
(23)
where \({\varDelta }_{\gamma } = 1\) mm is the maximum step size. We additionally check if the gradient has dramatically decreased in magnitude, and if so further dampen the conditioner based on the gradient ratio:
$$\begin{aligned} \gamma ^{t+1} = 0.25 \frac{\overline{\nabla }(\mathbf {E})^{t-1}}{\overline{\nabla }(\mathbf {E})^{t}} \gamma ^t \end{aligned}$$
(24)
The use of the conditioner brings three main advantages: it allows for faster convergence to the final solution, it prevents undesired zig-zagging while approaching local maxima, and at the same time it bounds the magnitude of the analytical derivative steps. These benefits are depicted in Fig. 5, which shows the impact of the conditioner on the convergence curve. For each frame, we perform at least 5 and at most 1000 iterations, and stop when \(\frac{|\mathbf E ^{t} - \mathbf E ^{t-1}|}{max(1,\mathbf E ^{t},\mathbf E ^{t-1})} \le 10^{-8}\).
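The following sketch summarizes the per-frame loop described above. It keeps one conditioner per displacement (one possible reading of Eq. 23), omits the extra dampening of Eq. 24, normalizes by the largest gradient magnitude, and uses placeholder callables energy_fn and grad_fn for \(\mathbf E \) and its gradient; it is meant as an illustration, not a faithful reimplementation.

```python
import numpy as np

def optimize_frame(energy_fn, grad_fn, n_s, max_step=1.0,
                   min_iters=5, max_iters=1000, tol=1e-8):
    """Conditioned gradient ascent over the displacements k_s (Eqs. 21-23)."""
    k = np.zeros(n_s)                     # k_s^0 = 0
    gamma = np.full(n_s, 0.1)             # conditioner gamma^0
    g_bar_prev = None
    E_prev = energy_fn(k)
    for t in range(max_iters):
        g = grad_fn(k)
        g_bar = g / max(np.abs(g).max(), 1e-12)    # Eq. 22 (normalized gradient)
        k = k + g_bar * gamma                      # Eq. 21 (ascent step)
        if g_bar_prev is not None:                 # Eq. 23 (conditioner update)
            same_sign = g_bar * g_bar_prev > 0
            grown = np.minimum(1.2 * gamma,
                               max_step / np.maximum(np.abs(g_bar), 1e-12))
            gamma = np.where(same_sign, grown, 0.5 * gamma)
        g_bar_prev = g_bar
        E = energy_fn(k)                           # relative-change stopping criterion
        if t + 1 >= min_iters and abs(E - E_prev) / max(1.0, E, E_prev) <= tol:
            break
        E_prev = E
    return k
```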
Once convergence has been reached (typically around iteration 200 for all sequences, see Fig. 5), we update the vertex positions of the input mesh at the current frame by simply displacing them along the corresponding normal using the optimal \(k_s\) found. Note that in practice, when rendering the final resulting mesh sequence, we add an extra \(\epsilon \) to the computed vertex displacement \(k_s\). This compensates for the small surface bias (a shrink along the normal during optimization) caused by the spatial extent of the Gaussians. Hence, we update the vertex position as
$$\begin{aligned} v_s = v_s^{init} + N_s \cdot (k_s + \epsilon ) \end{aligned}$$
(25)
where \(v_s^{init}\) is the original location of the vertex, \(N_s\) is the corresponding unchanged normal and \(\epsilon = \sigma _s\) throughout this work.
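A one-line sketch of this final update (eps may be a scalar or a per-vertex array; names are illustrative):

```python
import numpy as np

def displace_vertices(vertices_init, normals, k, eps):
    """Eq. 25: v_s = v_s^init + N_s * (k_s + eps)."""
    return vertices_init + normals * (np.asarray(k) + eps)[:, None]
```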