Abstract
The focus of this work is to provide an extensive numerical study of the parallel efficiency and robustness of a staggered dual-primal Newton–Krylov deluxe solver for implicit time discretizations of the Bidomain model. This model describes the propagation of the electrical impulse in the cardiac tissue by means of a system of parabolic reaction-diffusion partial differential equations. This system is coupled to a system of ordinary differential equations, modeling the ionic current dynamics. A staggered approach is employed for the solution of a fully implicit time discretization of the problem, where the two systems are solved successively. The arising nonlinear algebraic system is solved with a Newton–Krylov approach, preconditioned by a dual-primal Domain Decomposition algorithm in order to improve convergence. The theoretical analysis and numerical validation of this strategy have been carried out in Huynh et al. (SIAM J. Sci. Comput. 44, B224–B249, 2022), considering only simple ionic models. This paper extends that study to include more complex biophysical ionic models, as well as the presence of ischemic regions, described mathematically by heterogeneous diffusion coefficients with possible discontinuities between subregions. The results of several numerical experiments show the robustness and scalability of the proposed parallel solver.
1 Introduction
Modern computing platforms allow the study and simulation of complex biophysical phenomena with very high accuracy, thanks to the availability of large computing and memory resources on latest-generation machines, as well as to advanced mathematical and numerical models. As a consequence, the development of accurate and efficient large-scale numerical solvers has become a very important research topic in the parallel scientific computing community. In the cardiac research field, the opportunities offered by computational studies of the heart function have increased the interactions among clinicians, mathematicians, physicists and engineers, working together to better understand the behavior of the different parts of the heart and to use these studies to predict and simulate several cardiac pathologies [25, 30].
The mathematical modeling of the heart involves systems of partial differential equations (PDEs) and ordinary differential equations (ODEs), which are combined together to model the three main cardiac functions: the electrophysiology [4], the induced cardiac mechanical activity [27], and the blood circulation inside the heart [8].
In this paper, we focus on the first function and design efficient numerical solvers for cardiac electrophysiological models which are also scalable and robust. Among the recent works devoted to the study of parallel cardiac solvers as well as the coupling of different electrical and mechanical cardiac models, we refer to [2, 6, 23, 32] and the references therein. The main objective of this work is to investigate the efficiency and robustness of a decoupled dual-primal Newton–Krylov solver for implicit time discretizations of the Bidomain model, already introduced in [15] but only coupled with simplified ionic models. This paper extends that study to include more complex and biophysically detailed ionic models, as well as the presence of ischemic regions.
The Bidomain model describes the propagation of the electric signal in the cardiac tissue by means of a system of parabolic PDEs, coupled with a system of ODEs representing the ionic current dynamics. By using a fully implicit scheme for the discretization of the temporal variable, we obtain at each time step a nonlinear algebraic system, which is solved by the Newton method. We choose to solve the Jacobian linear system arising at each Newton step with a Krylov iterative solver, together with Balancing Domain Decomposition with Constraints (BDDC) [9, 10] and Dual-Primal Finite Element Tearing and Interconnecting (FETI-DP) [12] preconditioners, in order to accelerate convergence. These algorithms belong to the class of dual-primal Domain Decomposition (DD) methods, where the degrees of freedom (dofs) are classified into those internal to each subdomain and those on the interface, the latter being further divided into dual and primal dofs [29].
In previous work, the authors have analyzed and presented a theoretical convergence rate estimate for the preconditioned solver, together with several preliminary parallel tests using a simple phenomenological ionic model. In particular, the proposed solution strategy relies on a staggered approach, where the two PDE and ODE systems are solved successively, in contrast to a monolithic or coupled approach (e.g. [16, 22]).
We extend here the numerical results by studying the robustness of the proposed solver both when biophysical ionic models are employed, such as the Luo–Rudy phase one [19] and the Ten Tusscher–Panfilov [28] human ionic models, and when ischemic regions are considered. The inclusion of ischemic regions in the computational domain is modeled mathematically by introducing jumps in the diffusion coefficients; in turn, the discontinuity of the diffusion coefficients on the boundaries of the ischemic region impacts the conditioning of the linear systems, thus requiring robust preconditioned iterative solvers.
This paper is structured as follows: in Section 2 we review the micro- and macroscopic models of cardiac electrophysiology, introducing the application we consider for our dual-primal solver, which is presented in Section 3 together with the adopted discretization and solution schemes. Section 4 provides extensive parallel numerical tests that show the scalability and robustness of the proposed dual-primal Newton–Krylov solver, posing the basis for several possible future extensions, discussed in the conclusive Section 5.
2 The Cardiac Electrical Model
In the following section, we briefly review our cardiac reaction-diffusion model, introducing the assumptions needed for its formulation. Moreover, we introduce the ionic models employed both in the theoretical analysis and in the numerical experiments.
2.1 The Bidomain Model
The mathematical description of the electrical activity in the cardiac tissue, known as myocardium, is provided by the Bidomain model.
The myocardium can be represented as the composition of two ohmic conducting media, named intracellular (Ω_{i}) and extracellular (Ω_{e}) domains, separated by the active cellular membrane (Γ); the latter acts as an insulator between the two domains, as otherwise there would be no potential difference across the membrane. These anisotropic continuous media are assumed to coexist at every point of the tissue and to be connected by a distributed continuous cellular membrane [5]. Additionally, the cardiac muscle fibers rotate counterclockwise and their arrangement is modeled as laminar sheets running radially from the outer (epicardium) to the inner surface (endocardium) of the heart.
This setting influences the mathematical definition of the conductivity tensors needed for the formulation of the Bidomain equations. In particular, at each point x of the cardiac domain we can define an orthonormal triplet of vectors a_{l}(x), a_{t}(x) and a_{n}(x), respectively parallel to the local fiber direction, tangent and orthogonal to the laminar sheets, and transversal to the fiber axis ([18]). By denoting with \(\sigma _{l, t, n}^{i,e}\) the conductivity coefficients in the intra- and extracellular domains along the corresponding directions, we define the conductivity tensors D_{i} and D_{e} of the two media as
$$ D_{i,e}(\mathbf{x}) = \sigma_{l}^{i,e}\, \mathbf{a}_{l}(\mathbf{x}) \mathbf{a}_{l}^{T}(\mathbf{x}) + \sigma_{t}^{i,e}\, \mathbf{a}_{t}(\mathbf{x}) \mathbf{a}_{t}^{T}(\mathbf{x}) + \sigma_{n}^{i,e}\, \mathbf{a}_{n}(\mathbf{x}) \mathbf{a}_{n}^{T}(\mathbf{x}). $$
Structural inhomogeneities in the intra- or extracellular spaces due to the presence of gap junctions, blood vessels and collagen are generally included in the conductivity tensors D_{i} and D_{e} as inhomogeneous functions of space.
Thanks to the hypotheses on the cardiac tissue, the electric potential is defined at each point of the two domains as a quantity averaged over a small volume: consequently, every point of the cardiac tissue is assumed to belong to both the intracellular and the extracellular spaces, thus being assigned both an intra- and an extracellular potential. From now on, we will denote by Ω the cardiac tissue volume represented by the superposition of these two spaces.
The parabolic-parabolic formulation of the Bidomain model can be stated as follows: find the intracellular and extracellular potentials \(u_{i,e}: {\Omega } \times (0, T) \rightarrow \mathbb {R}\), the transmembrane potential v = u_{i} − u_{e}: \({\Omega } \times (0, T) \rightarrow \mathbb {R}\), the gating variables \(\mathbf {w}: {\Omega } \times (0, T) \rightarrow \mathbb {R}^{M}\), the ionic concentration variables \(\mathbf {c}: {\Omega } \times (0, T) \rightarrow \mathbb {R}^{S}\), \(M, S \in \mathbb {N}\), such that
given \(I_{\text {app}}^{i,e}: {\Omega } \times (0, T) \rightarrow \mathbb {R}\) the intra- and extracellular applied currents and initial values \(v_{0}: {\Omega } \rightarrow \mathbb {R}\), \(\mathbf {w}_{0}: {\Omega } \rightarrow \mathbb {R}^{M}\) and \(\mathbf {c}_{0}: {\Omega } \rightarrow (0, +\infty )^{S}\):
$$ \begin{cases} \chi C_{m} \dfrac{\partial v}{\partial t} - \nabla \cdot (D_{i} \nabla u_{i}) + I_{\text{ion}}(v, \mathbf{w}, \mathbf{c}) = I_{\text{app}}^{i} & \text{in } {\Omega} \times (0, T), \\ -\chi C_{m} \dfrac{\partial v}{\partial t} - \nabla \cdot (D_{e} \nabla u_{e}) - I_{\text{ion}}(v, \mathbf{w}, \mathbf{c}) = -I_{\text{app}}^{e} & \text{in } {\Omega} \times (0, T), \\ \dfrac{\partial \mathbf{w}}{\partial t} - R(v, \mathbf{w}) = 0, \qquad \dfrac{\partial \mathbf{c}}{\partial t} - C(v, \mathbf{w}, \mathbf{c}) = 0 & \text{in } {\Omega} \times (0, T), \\ v(\mathbf{x}, 0) = v_{0}, \quad \mathbf{w}(\mathbf{x}, 0) = \mathbf{w}_{0}, \quad \mathbf{c}(\mathbf{x}, 0) = \mathbf{c}_{0} & \text{in } {\Omega}, \\ \mathbf{n}^{T} D_{i,e} \nabla u_{i,e} = 0 & \text{on } \partial{\Omega} \times (0, T). \end{cases} \tag{1} $$
Here, C_{m} is the membrane capacitance per unit area of the membrane surface and χ is the membrane surface-to-volume ratio. The Neumann zero boundary conditions in the system (1, last row) represent mathematically the assumption that the heart is electrically insulated. The nonlinear reaction term I_{ion} and the ODEs system for the gating variables w (which represent the opening and closing process of the ionic channels) and the ionic concentrations c are given by the chosen ionic membrane model.
Results on existence, uniqueness and regularity of the solution of system (1) have been extensively studied, see for example [5, 31].
2.2 Ionic Models
In this work, we consider three different ionic models. First, a phenomenological ionic model, derived from a modification of the renowned FitzHugh–Nagumo model [13, 14], named the Rogers–McCulloch (RMC) ionic model [24]. This model overcomes the hyperpolarization of the cell during the repolarization phase by adding a nonlinear dependence between the transmembrane potential and the gating variable and by neglecting the ionic concentration variables. For this model, I_{ion}(v,w) and R(v,w) are given by
$$ I_{\text{ion}}(v, w) = G v \left( 1 - \frac{v}{v_{th}} \right) \left( 1 - \frac{v}{v_{p}} \right) + \eta_{1} v w, \qquad R(v, w) = \eta_{2} \left( \frac{v}{v_{p}} - w \right), $$
where G, v_{th}, v_{p}, η_{1} and η_{2} are given coefficients.
We then consider two more biophysically detailed ionic models, namely the Luo–Rudy phase one (LR1) and the Ten Tusscher–Panfilov (TP06) ionic models, which provide a more detailed description of the ionic currents in cardiac cells. For the full sets of equations of these more complex ionic models, we refer to the original papers [19, 28].
The theoretical analysis presented in [15] considered only the class of phenomenological ionic models (neglecting the ionic concentration variables). This work extends our previous study to include the LR1 and TP06 ionic models, as well as the presence of ischemic regions.
3 Dual-Primal Newton–Krylov Solvers
In this section, we briefly present our discretization choices for the model, providing a finite element discretization in space, a fully implicit discretization in time and a decoupled (or staggered) solution strategy, in the same fashion as in [20, 26]. Then, we introduce our dual-primal solver for the arising nonlinear algebraic system and we mention the theoretical convergence result, which can be found in its extended form in [15].
3.1 Space and Time Discretizations
We discretize in space the cardiac domain Ω with Q_{1} finite elements, leading to the semidiscrete system
$$ \chi C_{m} \begin{bmatrix} M & -M \\ -M & M \end{bmatrix} \frac{d}{dt} \begin{bmatrix} \mathbf{u}_{i} \\ \mathbf{u}_{e} \end{bmatrix} + \begin{bmatrix} A_{i} & 0 \\ 0 & A_{e} \end{bmatrix} \begin{bmatrix} \mathbf{u}_{i} \\ \mathbf{u}_{e} \end{bmatrix} + \begin{bmatrix} M\, \mathbf{I}_{\text{ion}}(\mathbf{v}, \mathbf{w}, \mathbf{c}) \\ -M\, \mathbf{I}_{\text{ion}}(\mathbf{v}, \mathbf{w}, \mathbf{c}) \end{bmatrix} = \begin{bmatrix} M\, \mathbf{I}_{\text{app}}^{i} \\ -M\, \mathbf{I}_{\text{app}}^{e} \end{bmatrix}, $$
with A_{i}, A_{e} the stiffness matrices and M the mass matrix arising from the finite element discretization.
Regarding the discretization in time, instead of using implicit-explicit (IMEX) schemes [6, 32], where the diffusion term is treated implicitly and the remaining terms explicitly, or more generally operator splitting strategies [3], we consider here a fully implicit time discretization in a decoupled (or staggered) approach.
In this procedure, at each time step, the ODEs system representing the ionic model is solved first; then, the nonlinear algebraic Bidomain system is solved and updated. In a very schematic way, this strategy can be summarized as: for each time step n,

1.
given the intra- and extracellular potentials at the previous time step, define \(\mathbf {v} := \mathbf {u}_{i}^{n} - \mathbf {u}_{e}^{n}\) and compute the gating and ionic concentration variables
$$ \begin{array}{@{}rcl@{}} \mathbf{w}^{n+1} - \tau R (\mathbf{v}, \mathbf{w}^{n+1}) &=& \mathbf{w}^{n}, \\ \mathbf{c}^{n+1} - \tau C (\mathbf{v}, \mathbf{w}^{n+1}, \mathbf{c}^{n+1}) &=& \mathbf{c}^{n}. \end{array} $$ 
2.
solve and update the Bidomain nonlinear system. Given \(\mathbf {u}_{i,e}^{n}\) at the previous time step and given w^{n+1} and c^{n+1}, compute \(\mathbf {u}^{n+1} = (\mathbf {u}_{i}^{n+1}, \mathbf {u}_{e}^{n+1})\) by solving the system \(\mathcal {F}_{\text {dec}} (\mathbf {u}^{n+1}) = \mathcal {G}\)
$$ \begin{array}{@{}rcl@{}} \mathcal{F}_{\text{dec}}(\mathbf{u}^{n+1}) &=& \left( \chi C_{m} \mathcal{M} + \tau \mathcal{A} \right) \begin{bmatrix} \mathbf{u}_{i}^{n+1} \\ \mathbf{u}_{e}^{n+1} \end{bmatrix} + \tau \begin{bmatrix} M \mathbf{I_{\text{ion}}}(\mathbf{v}^{n+1}, \mathbf{w}^{n+1}, \mathbf{c}^{n+1}) \\ -M \mathbf{I_{\text{ion}}}(\mathbf{v}^{n+1}, \mathbf{w}^{n+1}, \mathbf{c}^{n+1}) \end{bmatrix},\\ \mathcal{G} &=& \chi C_{m} \mathcal{M} \begin{bmatrix} \mathbf{u}_{i}^{n} \\ \mathbf{u}_{e}^{n} \end{bmatrix} + \tau \begin{bmatrix} M \mathbf{I_{\text{app}}^{i}} \\ -M \mathbf{I_{\text{app}}^{e}} \end{bmatrix}, \end{array} $$
where
$$ \mathcal{A} = \begin{bmatrix} A_{i} & 0 \\ 0 & A_{e} \end{bmatrix}, \qquad \mathcal{M} = \begin{bmatrix} M & -M \\ -M & M \end{bmatrix}. $$
We observe that the Jacobian linear system associated with the nonlinear problem in step 2 is symmetric.
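The two steps above can be sketched in a few lines. The following Python toy is an illustration only (a 1D Laplacian stand-in for the stiffness matrices, lumped unit mass, a Rogers–McCulloch-type reaction with made-up coefficients, and zero applied currents; the paper's actual implementation is in C with PETSc): it performs one staggered step, with a closed-form implicit Euler update for the gating variable followed by a Newton loop whose Jacobian is symmetric, as observed above.

```python
import numpy as np

# One staggered time step (steps 1-2) for a toy Bidomain-like problem.
# Mesh, parameters and the Rogers-McCulloch-type reaction are illustrative
# choices, not the paper's actual PETSc/C implementation.

def lap1d(n):
    """1D Dirichlet Laplacian, an SPD stand-in for the stiffness matrices."""
    return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def i_ion(v, w, G=1.5, vth=13.0, vp=100.0, eta1=4.4):
    """Rogers-McCulloch-type cubic ionic current."""
    return G * v * (1 - v / vth) * (1 - v / vp) + eta1 * v * w

def d_i_ion(v, w, G=1.5, vth=13.0, vp=100.0, eta1=4.4):
    """Derivative of i_ion with respect to v (used in the Jacobian)."""
    return G * ((1 - v / vth) * (1 - v / vp)
                - (v / vth) * (1 - v / vp)
                - (v / vp) * (1 - v / vth)) + eta1 * w

def staggered_step(ui, ue, w, tau=0.05, chi_cm=1.0, eta2=0.012, vp=100.0):
    n = ui.size
    v = ui - ue
    # Step 1: implicit Euler for w' = R(v, w) = eta2*(v/vp - w);
    # R is linear in w, so the implicit update has a closed form.
    w1 = (w + tau * eta2 * v / vp) / (1.0 + tau * eta2)
    # Step 2: Newton on F_dec(u) = G for u = (u_i, u_e), with I_app = 0.
    Z = np.zeros((n, n))
    A = lap1d(n)
    Acal = np.block([[A, Z], [Z, A]])
    Mcal = np.block([[np.eye(n), -np.eye(n)], [-np.eye(n), np.eye(n)]])
    base = chi_cm * Mcal + tau * Acal
    rhs = chi_cm * Mcal @ np.concatenate([ui, ue])
    u = np.concatenate([ui, ue])
    for _ in range(20):
        vn = u[:n] - u[n:]
        ion = i_ion(vn, w1)
        F = base @ u + tau * np.concatenate([ion, -ion]) - rhs
        if np.linalg.norm(F) < 1e-9:
            break
        D = np.diag(d_i_ion(vn, w1))
        J = base + tau * np.block([[D, -D], [-D, D]])   # symmetric Jacobian
        u -= np.linalg.solve(J, F)
    vn = u[:n] - u[n:]
    ion = i_ion(vn, w1)
    res = np.linalg.norm(base @ u + tau * np.concatenate([ion, -ion]) - rhs)
    return u[:n], u[n:], w1, res
```

The block structure of the Jacobian mirrors the matrices \(\mathcal{M}\) and \(\mathcal{A}\) above: the nonlinear reaction enters only through v = u_i − u_e, which is what makes the off-diagonal blocks the negatives of the diagonal ones.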
This strategy is usually adopted as an alternative to a monolithic approach, where the PDE and ODE systems are solved together and the computational workload is higher, due to the presence of the gating and ionic concentration variables in the nonlinear algebraic system. Nevertheless, the monolithic strategy has also been extensively studied, and several scalable parallel preconditioners have been designed and analyzed for it (e.g. [16, 21, 22]).
3.2 Dual-Primal Preconditioners
In this approach, a linear system has to be solved at each Newton step: to this end, we employ an iterative method, preconditioned by a dual-primal substructuring algorithm.
Let us consider a partition of the computational domain Ω into N nonoverlapping subdomains of diameter H_{j}, such that \({\Omega } = \cup _{j=1}^{N} {\Omega }_{j}\), and define the subdomain interface as \({\Gamma } = \left (\cup _{j=1}^{N} \partial {\Omega }_{j} \right ) \backslash \partial {\Omega }\). Let W^{(j)} be the associated local finite element spaces. We introduce the product spaces
$$ W = \prod_{j=1}^{N} W^{(j)}, \qquad W_{\Gamma} = \prod_{j=1}^{N} W_{\Gamma}^{(j)}, $$
where we have partitioned W^{(j)} into the interior part \(W_{I}^{(j)}\) and the finite element trace space \(W_{\Gamma }^{(j)}\). We define \(\widehat {W} \subset W\) as the subspace of functions of W, which are continuous in all interface variables between subdomains and similarly we denote by \(\widehat {W}_{\Gamma } \subset W_{\Gamma }\), the subspace formed by the continuous elements of W_{Γ}. In order to obtain good convergence and to ensure that each local problem is invertible, a proper choice of primal constraints is needed. These primal constraints are, in this sense, continuity constraints which we require to hold throughout the iterations. By denoting with \(\widetilde {W}\) the space of finite element functions in W, which are continuous in all primal variables, we also have \(\widehat {W} \subset \widetilde {W} \subset W\) and \(\widehat {W}_{\Gamma } \subset \widetilde {W}_{\Gamma } \subset W_{\Gamma }\).
Let \(W_{\Pi }^{(j)} \subset W_{\Gamma }^{(j)}\) be the primal subspace of continuous functions across the interface and that will be subassembled between the subdomains sharing Γ^{(j)}. Moreover, denote with \(W_{\Delta }^{(j)} \subset W_{\Gamma }^{(j)}\) the space of finite element functions (called dual) which can be discontinuous across the interface and vanish at the primal degrees of freedom. Analogously, \(W_{\Pi } = {\prod }_{j=1}^{N} W_{\Pi }^{(j)}\) and \(W_{\Delta } = {\prod }_{j=1}^{N} W_{\Delta }^{(j)}\), thus W_{Γ} = W_{Π} ⊕ W_{Δ}. Using this notation, we can decompose \(\widetilde {W}_{\Gamma }\) into a primal subspace \(\widehat {W}_{\Pi }\), which has continuous elements only, and a dual subspace W_{Δ}, which contains finite element functions that are not continuous. In this work, we will denote with subscripts I, Δ and Π the interior, dual and primal variables, respectively.
In the substructuring framework, the starting problem Kw = f is reduced to the interface Γ by eliminating the degrees of freedom (dofs) interior to each subdomain, obtaining the Schur complement system
$$ S_{\Gamma} w_{\Gamma} = g_{\Gamma}, \tag{2} $$
where \(S_{\Gamma } = K_{\Gamma {\Gamma }} - K_{\Gamma I} K_{I I}^{-1} K_{I {\Gamma }}\) and \(g_{\Gamma } = f_{\Gamma } - K_{\Gamma I} K_{I I}^{-1} f_{I}\) are obtained by reordering the dofs into interior and interface ones.
The reordering of the degrees of freedom allows us to define algorithms where each subproblem is solved independently of the others, except for the primal constraints, where the variables are assumed to be continuous across the subdomains. For a detailed presentation, we refer to [29].
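The elimination just described can be verified on a small hypothetical SPD system (random data, unrelated to the actual Bidomain matrices): the Schur complement solve followed by local back-substitution must reproduce the direct solve.

```python
import numpy as np

# Interior/interface elimination on a small SPD system Kw = f (hypothetical
# example): the Schur complement solve must reproduce the direct solve.
rng = np.random.default_rng(0)
n_i, n_g = 6, 3                                  # interior and interface dofs
B = rng.standard_normal((n_i + n_g, n_i + n_g))
K = B @ B.T + (n_i + n_g) * np.eye(n_i + n_g)    # SPD test matrix
f = rng.standard_normal(n_i + n_g)

K_II, K_IG = K[:n_i, :n_i], K[:n_i, n_i:]
K_GI, K_GG = K[n_i:, :n_i], K[n_i:, n_i:]
f_I, f_G = f[:n_i], f[n_i:]

# S_Gamma = K_GG - K_GI K_II^{-1} K_IG,  g_Gamma = f_G - K_GI K_II^{-1} f_I
S_G = K_GG - K_GI @ np.linalg.solve(K_II, K_IG)
g_G = f_G - K_GI @ np.linalg.solve(K_II, f_I)

w_G = np.linalg.solve(S_G, g_G)                  # interface solve
w_I = np.linalg.solve(K_II, f_I - K_IG @ w_G)    # local back-substitution
w_full = np.linalg.solve(K, f)                   # direct solve, for comparison
```

In the parallel setting, the interior solves decouple subdomain by subdomain, which is what makes this reduction attractive for distributed-memory computations.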
On these premises, it is possible to build two of the most widely used dual-primal iterative substructuring algorithms, namely the Balancing Domain Decomposition with Constraints (BDDC) and the Dual-Primal Finite Element Tearing and Interconnecting (FETI-DP) preconditioners.
In order to ensure the correct continuity of the solution across the subdomains, an appropriate interface averaging is needed. In our work, we focus on both the standard ρ-scaling and the deluxe scaling of the dual variables.
Restriction and scaling operators
Before introducing the preconditioners, it is helpful to understand how the scaling procedure works. We define the restriction operators
$$ R_{\Gamma \Delta}: W_{\Gamma} \rightarrow W_{\Delta}, \qquad R_{\Gamma \Pi}: W_{\Gamma} \rightarrow \widehat{W}_{\Pi}, \qquad R_{\Delta}^{(j)}: W_{\Delta} \rightarrow W_{\Delta}^{(j)}, \qquad R_{\Pi}^{(j)}: \widehat{W}_{\Pi} \rightarrow W_{\Pi}^{(j)}, $$
and the direct sums \(R_{\Delta } = \oplus R_{\Delta }^{(j)}\), \(R_{\Pi } = \oplus R_{\Pi }^{(j)} \) and \(\widetilde {R}_{\Gamma } = R_{\Gamma {\Pi }} \oplus R_{\Gamma {\Delta }}\), which maps W_{Γ} into \(\widetilde {W}_{\Gamma }\).
The ρ-scaling can be defined for the Bidomain model at each node x ∈Γ^{(j)} as
$$ \delta_{j}^{\dagger}(x) = \frac{\rho_{j}(x)}{\sum_{k \in \mathcal{N}_{x}} \rho_{k}(x)}, \tag{3} $$
where \(\mathcal {N}_{x}\) is the set of indices of all subdomains with x in the closure of the subdomain.
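A minimal numerical sketch of this scaling (the coefficient values ρ_k below are made-up stand-ins for the local conductivities): the weights form a partition of unity at the interface node, reproduce constants, and give more weight to the subdomains with larger coefficients.

```python
import numpy as np

# rho-scaling weights at a single interface node shared by several subdomains;
# the values rho_k are illustrative stand-ins for the local conductivities.
def rho_weights(rho):
    rho = np.asarray(rho, dtype=float)
    return rho / rho.sum()          # pseudoinverse weights at the node

def scaled_average(values, rho):
    """Weighted interface average restoring a single value at the node."""
    return float(rho_weights(rho) @ np.asarray(values, dtype=float))

rho = [1.0, 1.0, 100.0, 100.0]      # coefficient jump across the node
w = rho_weights(rho)
avg = scaled_average([3.0, 3.0, 3.0, 3.0], rho)   # constants are preserved
```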
Conversely, the deluxe scaling (introduced in [7, 10]) computes the average \(\bar {w} = E_{D} w\) for each face \(\mathcal {F}\) or edge \(\mathcal {E}\) equivalence class.
Suppose that \(\mathcal {F}\) is shared by subdomains Ω_{j} and Ω_{k}. Let \(S^{(j)}_{\mathcal {F}}\) and \(S^{(k)}_{\mathcal {F}}\) be the principal minors obtained from \(S^{(j)}_{\Gamma }\) and \(S^{(k)}_{\Gamma }\) by extracting all rows and columns related to the degrees of freedom of the face \(\mathcal {F}\). Denote with \(u_{j,\mathcal {F}} = R_{\mathcal {F}} u_{j}\) the restriction of u_{j} to the face \(\mathcal {F}\) through the restriction operator \(R_{\mathcal {F}}\). Then, the deluxe average across \(\mathcal {F}\) can be defined as
$$ \bar{u}_{\mathcal{F}} = \left( S_{\mathcal{F}}^{(j)} + S_{\mathcal{F}}^{(k)} \right)^{-1} \left( S_{\mathcal{F}}^{(j)} u_{j, \mathcal{F}} + S_{\mathcal{F}}^{(k)} u_{k, \mathcal{F}} \right). $$
It is possible to extend this definition when considering the deluxe average across an edge \(\mathcal {E}\). Suppose for simplicity that \(\mathcal {E}\) is shared by only three subdomains with indices j_{1}, j_{2} and j_{3}; the extension to more than three subdomains is analogous. Let \(u_{j,\mathcal {E}} = R_{\mathcal {E}} u_{j}\) be the restriction of u_{j} to the edge \(\mathcal {E}\) through the restriction operator \(R_{\mathcal {E}}\) and define \(S_{\mathcal {E}}^{(j_{123})} = S_{\mathcal {E}}^{(j_{1})} + S_{\mathcal {E}}^{(j_{2})} + S_{\mathcal {E}}^{(j_{3})}\); the deluxe average across an edge \(\mathcal {E}\) is defined as
$$ \bar{u}_{\mathcal{E}} = \left( S_{\mathcal{E}}^{(j_{123})} \right)^{-1} \left( S_{\mathcal{E}}^{(j_{1})} u_{j_{1}, \mathcal{E}} + S_{\mathcal{E}}^{(j_{2})} u_{j_{2}, \mathcal{E}} + S_{\mathcal{E}}^{(j_{3})} u_{j_{3}, \mathcal{E}} \right). $$
The average \(\bar {u}\) is constructed with the contributions from the relevant equivalence classes involving the substructure Ω_{j}. These contributions will belong to \(\widehat {W}_{\Gamma }\), after being extended by zero to \({\Gamma } \backslash \mathcal {F}\) or \({\Gamma } \backslash \mathcal {E}\). The contributions \(R^{T}_{\ast } \bar {u}_{\ast }\) from the different equivalence classes are then summed to obtain
$$ \bar{u} = E_{D} u = \sum\limits_{\ast} R^{T}_{\ast} \bar{u}_{\ast}, $$
where E_{D} is a projection and
$$ P_{D} = I - E_{D} $$
is its complementary projection.
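The face average and the projection property of E_D can be checked numerically with small SPD stand-ins for the Schur minors (an illustrative example, not the actual Bidomain matrices):

```python
import numpy as np

# Deluxe average across a face shared by two subdomains, with small SPD
# matrices standing in for the Schur minors S_F^(j), S_F^(k).
rng = np.random.default_rng(1)

def spd(n, scale):
    B = rng.standard_normal((n, n))
    return scale * (B @ B.T + n * np.eye(n))

n = 4
S_j, S_k = spd(n, 1.0), spd(n, 50.0)            # strong coefficient jump
u_j, u_k = rng.standard_normal(n), rng.standard_normal(n)

# u_bar = (S_j + S_k)^{-1} (S_j u_j + S_k u_k)
u_bar = np.linalg.solve(S_j + S_k, S_j @ u_j + S_k @ u_k)

# E_D in matrix form on the two duplicated face traces: compute the deluxe
# average and copy it back to both subdomains. E_D is a projection: E^2 = E.
W = np.linalg.inv(S_j + S_k)
E = np.block([[W @ S_j, W @ S_k],
              [W @ S_j, W @ S_k]])
```

With the strong coefficient jump above, the deluxe average is pulled toward the trace of the stiffer subdomain, which is precisely the mechanism that makes the deluxe scaling robust with respect to coefficient discontinuities.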
We define the scaling matrix for each subdomain Ω_{j} as
$$ D^{(j)} = \operatorname{diag} \left( D_{k_{1}}^{(j)}, \dots, D_{k_{j}}^{(j)} \right), $$
where \(k_{1}, \dots , k_{j} \in {\varXi }_{j}^{\ast }\), the set containing the indices of the subdomains that share the face \(\mathcal {F}\) or the edge \(\mathcal {E}\), and where the diagonal blocks are given by \(D^{(j)}_{\mathcal {F}} = \big (S^{(j)}_{\mathcal {F}} + S^{(k)}_{\mathcal {F}}\big )^{-1} S^{(j)}_{\mathcal {F}}\) or \(D^{(j)}_{\mathcal {E}} = (S_{\mathcal {E}}^{(j_{1})} + S_{\mathcal {E}}^{(j_{2})} + S_{\mathcal {E}}^{(j_{3})})^{-1} S_{\mathcal {E}}^{(j_{1})}\) in case the deluxe scaling is considered. If the ρ-scaling is employed instead, the j-th diagonal scaling matrix D^{(j)} contains the pseudoinverses (3) along the diagonal.
Lastly, we define the scaled local restriction operators
$$ R_{D, \Delta}^{(j)} = D^{(j)} R_{\Delta}^{(j)}, $$
denote by R_{D,Δ} the direct sum of the \(R_{D, {\Delta }}^{(j)}\), and define the global scaled operator \(\widetilde {R}_{D, {\Gamma }} = R_{\Gamma {\Pi }} \oplus R_{D, {\Delta }} R_{\Gamma {\Delta }}\).
BDDC preconditioners
BDDC preconditioners are two-level preconditioners, introduced in [9], for the assembled Schur complement system
$$ \widehat{S}_{\Gamma} u_{\Gamma} = \widehat{f}_{\Gamma}, \tag{4} $$
where \(\widehat {S}_{\Gamma } = R_{\Gamma }^{T} S_{\Gamma } R_{\Gamma }\) and \(\widehat {f}_{\Gamma } = R_{\Gamma }^{T} f_{\Gamma }\) are obtained with the operator R_{Γ} which is the sum of local operators \(R_{\Gamma }^{(j)}\) that return the local interface component.
They can be considered as an evolution of balancing Neumann–Neumann algorithms, where local and coarse problems are treated additively. In these algorithms, the choice of primal constraints across the interface is important, since it influences the structure and size of the coarse problem and hence the overall convergence of the method.
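The additive local-plus-coarse structure can be illustrated with a generic two-level additive preconditioner for a 1D Dirichlet Laplacian; this sketch uses plain nonoverlapping block solves and a piecewise-constant coarse space purely to show the effect of the coarse component, and is not the BDDC operator itself.

```python
import numpy as np

# Generic two-level additive preconditioner: nonoverlapping local solves plus
# a piecewise-constant coarse solve, for a 1D Dirichlet Laplacian. This only
# illustrates the additive "local + coarse" structure, not actual BDDC.
n, nsub = 32, 4
m = n // nsub                                   # local problem size
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

Minv = np.zeros((n, n))
for j in range(nsub):                           # local solvers, one per block
    s = slice(j * m, (j + 1) * m)
    Minv[s, s] += np.linalg.inv(A[s, s])

R0 = np.zeros((nsub, n))                        # coarse space: one dof per block
for j in range(nsub):
    R0[j, j * m:(j + 1) * m] = 1.0
Minv += R0.T @ np.linalg.inv(R0 @ A @ R0.T) @ R0    # additive coarse solver

def cond(mat):
    ev = np.sort(np.real(np.linalg.eigvals(mat)))
    return ev[-1] / ev[0]

cond_plain = cond(A)          # grows like h^{-2} for the Laplacian
cond_prec = cond(Minv @ A)    # controlled by the two-level structure
```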
It is possible to define the BDDC preconditioner using the scaled restriction operators as
$$ M_{\text{BDDC}}^{-1} = \widetilde{R}_{D, \Gamma}^{T} \widetilde{S}_{\Gamma}^{-1} \widetilde{R}_{D, \Gamma}, $$
where the action of the inverse of \(\widetilde {S}_{\Gamma }\) can be evaluated as
$$ \widetilde{S}_{\Gamma}^{-1} = R_{\Gamma \Delta}^{T} \left( \sum\limits_{j=1}^{N} \begin{bmatrix} 0 & R_{\Delta}^{(j)T} \end{bmatrix} \begin{bmatrix} K_{II}^{(j)} & K_{I \Delta}^{(j)} \\ K_{\Delta I}^{(j)} & K_{\Delta \Delta}^{(j)} \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ R_{\Delta}^{(j)} \end{bmatrix} \right) R_{\Gamma \Delta} + {\Phi} S_{\Pi \Pi}^{-1} {\Phi}^{T}. $$
The first term is the sum of local solvers on each subdomain Ω_{j}, while the second is a coarse solver for the primal variables, where
$$ S_{\Pi \Pi} = \sum\limits_{j=1}^{N} R_{\Pi}^{(j)T} \left( S_{\Pi \Pi}^{(j)} - S_{\Pi \Delta}^{(j)} S_{\Delta \Delta}^{(j)-1} S_{\Delta \Pi}^{(j)} \right) R_{\Pi}^{(j)}, \qquad {\Phi} = R_{\Gamma \Pi}^{T} - R_{\Gamma \Delta}^{T} \sum\limits_{j=1}^{N} R_{\Delta}^{(j)T} S_{\Delta \Delta}^{(j)-1} S_{\Delta \Pi}^{(j)} R_{\Pi}^{(j)} $$
are the primal problem matrix and the matrix which maps the primal degrees of freedom to the interface variables, respectively.
Then, the BDDC algorithm for the solution of the Schur complement problem (4) can be defined as: find \(u_{\Gamma } \in \widehat {W}_{\Gamma }\) such that
$$ M_{\text{BDDC}}^{-1} \widehat{S}_{\Gamma} u_{\Gamma} = M_{\text{BDDC}}^{-1} \widehat{f}_{\Gamma}. $$
Once the interface u_{Γ} is computed, we can retrieve the internal solution u_{I} by solving local Dirichlet problems.
FETI-DP preconditioners
FETI-DP preconditioners were first proposed in [12] as an alternative to one-level and two-level FETI and are based on the reformulation of the Schur complement problem (2) as a minimization problem on \(\widetilde {W}_{\Gamma }\) with continuity constraints on the dual degrees of freedom, enforced by means of additional variables (named Lagrange multipliers). By introducing a jump matrix B, needed to ensure a correct transmission of the solution between subdomains, the resulting saddle point system
$$ \begin{bmatrix} \widetilde{S}_{\Gamma} & B^{T} \\ B & 0 \end{bmatrix} \begin{bmatrix} w_{\Gamma} \\ \boldsymbol{\lambda} \end{bmatrix} = \begin{bmatrix} \widetilde{g}_{\Gamma} \\ 0 \end{bmatrix} $$
is further reduced to a problem in the Lagrange multiplier unknowns only
$$ B \widetilde{S}_{\Gamma}^{-1} B^{T} \boldsymbol{\lambda} = B \widetilde{S}_{\Gamma}^{-1} \widetilde{g}_{\Gamma}. $$
This linear system is solved iteratively with the FETI-DP preconditioner \(M^{-1}_{\text {FETI-DP}} = B_{D} \widetilde {S}_{\Gamma } {B_{D}^{T}}\), where B_{D} is the scaled jump operator.
Once the Lagrange multipliers are computed and the interface variables w_{Γ} retrieved, it is possible to compute the internal variables w_{I} by solving local problems on each subdomain.
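The dual reduction can be sketched on a minimal example with one duplicated interface unknown; the matrix S below is a small SPD stand-in for the partially assembled Schur complement, and all sizes are illustrative.

```python
import numpy as np

# FETI-DP-style dual reduction on a toy problem: a jump matrix B enforces
# equality of two duplicated interface unknowns, the system is reduced to the
# Lagrange multipliers, and the recovered solution has zero jump.
rng = np.random.default_rng(2)
m = 4
Bmat = np.zeros((1, m))
Bmat[0, 1], Bmat[0, 3] = 1.0, -1.0      # constraint w_1 - w_3 = 0

C = rng.standard_normal((m, m))
S = C @ C.T + m * np.eye(m)             # SPD stand-in for S-tilde
g = rng.standard_normal(m)

# reduce to the multipliers: F lambda = d, with F = B S^{-1} B^T
F = Bmat @ np.linalg.solve(S, Bmat.T)
d = Bmat @ np.linalg.solve(S, g)
lam = np.linalg.solve(F, d)

# recover the primal unknowns and check continuity across the interface
w = np.linalg.solve(S, g - Bmat.T @ lam)
jump = Bmat @ w                          # should vanish
```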
Convergence analysis
We have proved in [15] a theoretical convergence rate estimate that holds for both dual-primal preconditioned operators, since BDDC and FETI-DP have been proven in [17] to share the same essential spectrum if the same coarse space is chosen.
4 Parallel Numerical Results
We extend here the parallel numerical experiments presented in [15] by testing the efficiency, scalability and robustness of the proposed dual-primal Newton–Krylov solver in the following settings:

when biophysical ionic models, such as the LR1 and TP06 models, are considered in a physiological situation;

in the presence of an ischemic region, modeled mathematically by introducing jumps in the diffusion coefficients and modifications of some ionic parameters.
We consider two simplified geometries, a thin slab and a portion of a truncated half ellipsoid, modeling an idealized left ventricle tissue geometry. The parametric equations of the prolate ellipsoid are given by the system
$$ \begin{cases} x = a(r) \cos \theta \cos \varphi, \\ y = b(r) \cos \theta \sin \varphi, \\ z = c(r) \sin \theta, \end{cases} \qquad \theta \in [\theta_{\min}, \theta_{\max}], \quad \varphi \in [\varphi_{\min}, \varphi_{\max}], \quad r \in [0, 1], $$
where a(r) = a_{1} + r(a_{2} − a_{1}), b(r) = b_{1} + r(b_{2} − b_{1}) and c(r) = c_{1} + r(c_{2} − c_{1}), with a_{1,2}, b_{1,2} and c_{1,2} given coefficients defining the main axes of the ellipsoid. Regarding the cardiac fibers, we consider an intramural rotation, linear with depth, of 120^{∘}, proceeding counterclockwise from the surface corresponding to the outer layer of the tissue (epicardium, r = 1) to the surface corresponding to the inner layer (endocardium, r = 0), see e.g. Fig. 11.
As already stated, we refer to the original papers [19, 24, 28] for the equations and parameters of Luo–Rudy phase one (LR1), Rogers–McCulloch (RMC) and Ten Tusscher–Panfilov (TP06) ionic models, respectively. Details about the implementation of the ischemic region will be given in the related paragraph. Additionally, we refer to [5] for the parameters related to the Bidomain model.
We apply an external stimulus I_{app} = 100 mA/cm^{3} for 1 ms on a small region of the epicardium. When the slab geometry is considered, the stimulus is instead applied in one corner of the domain, over a spherical volume of radius 0.1 cm.
We consider a time interval of 2 ms, with fixed time step τ = 0.05 ms, for a total of 40 time steps. This choice is not restrictive: larger time steps would lose accuracy in the representation of the wavefront propagating in the tissue, while smaller time steps would only increase the computational workload for a marginal gain in accuracy.
The parallel C codes have been developed using the Portable Extensible Toolkit for Scientific Computation (PETSc) library [1] and the numerical tests have been carried out on the Indaco cluster of the University of Milan (https://www.indaco.unimi.it/), a Linux Infiniband cluster with 16 nodes, each carrying 2 Intel Xeon E5-2683 V4 2.1 GHz processors with 16 cores each, for a total of 512 cores. We assign one local problem to each processor, thus resulting in a one-to-one correspondence between subdomains and processors. We employ the SNES package from the PETSc library, which provides a ready-to-use environment for the solution of nonlinear algebraic systems. We compare the performance of the BDDC and FETI-DP preconditioners with that of algebraic multigrid, from the Hypre implementation (BoomerAMG, or bAMG, [11]) and from the built-in PETSc implementation (GAMG), both with default parameters. For the optimality tests, we compare both the standard ρ-scaling and the deluxe scaling. In order to test the efficiency of our solver on parallel architectures, we also compute the parallel speedup \(S_{p} = \frac {T_{1}}{T_{p}}\), the ratio between the runtime T_{1} on 1 processor and the average runtime T_{p} on p processors.
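For reference, the speedup just defined, together with the associated parallel efficiency E_p = S_p/p (a standard companion metric, not used explicitly in this paper), can be computed as follows; the timings are made-up illustrative numbers.

```python
# Parallel speedup S_p = T_1 / T_p and efficiency E_p = S_p / p,
# as defined above; the timings below are illustrative numbers only.
def speedup(t1, tp):
    return t1 / tp

def efficiency(t1, tp, p):
    return speedup(t1, tp) / p

s8 = speedup(100.0, 25.0)        # 4.0: the run is 4x faster
e8 = efficiency(100.0, 25.0, 8)  # 0.5: half of the ideal speedup of 8
```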
4.1 Normal Physiological Tests
We start by considering a normal physiological situation for the LR1 and TP06 ionic models and study the weak scalability and optimality of our dual-primal solvers.
Weak scalability
Tables 1 and 2 report a comparison between the bAMG, BDDC and FETI-DP preconditioners. In this set of tests, the local mesh size is fixed to 12 ⋅ 12 ⋅ 12 and the number of processors is increased from 4 to 256, resulting in an increasing number of total dofs from 16,250 to 926,786. Due to the loss of the positive semidefiniteness property, the Generalized Minimal Residual (GMRES) method is employed for the solution of the Jacobian linear system.
The number of nonlinear iterations does not increase with the number of subdomains, and presents lower values for TP06 in both geometries. Additionally, this parameter seems to be affected by the complexity of the geometry, since it is higher for the truncated ellipsoid.
The performance of BDDC and FETI-DP in terms of average CPU time shows the robustness of the preconditioned solver, since this quantity remains bounded while increasing the number of subdomains, except for the case of BDDC for LR1 with 16 processors, for which we do not have any clear explanation; this trend cannot be found for bAMG, which presents higher and increasing values.
Regarding the average number of linear iterations, for the slab geometry and for both the LR1 and TP06 ionic models, BDDC and FETI-DP present lower and bounded values, while for bAMG these values increase with the number of processors. On the ellipsoidal geometry, instead, we observe some fluctuations for BDDC and FETI-DP, although the average number of linear iterations remains bounded; the multigrid preconditioner presents higher values with respect to the corresponding cases on the slab.
Optimality tests
We report the results of optimality tests for increasing ratio H/h in Tables 3 and 4. As in the preliminary tests with the simple RMC ionic model reported in [15], these results are independent of the scaling employed (the ρ-scaling and the deluxe scaling, see [10, 29] for more details).
The average number of nonlinear iterations settles around 2–4 for the LR1 ionic model and around 2–3 for the TP06 model, with higher values for the ellipsoidal domain. The number of nonlinear iterations seems to be independent of the coarse space employed (V, V+E, V+E+F). As confirmed by Fig. 2, the average number of linear iterations per time step deteriorates while increasing the local problem size when the coarse space consists of only subdomain vertex constraints (V). Instead, by enriching the primal space with edge (V+E) and face (V+E+F) constraints, this quantity remains bounded and attains lower values.
The average CPU times are almost identical if we consider the richest primal spaces (V+E and V+E+F).
4.2 Transmural Ischemic Tests
In order to test the robustness of our dual-primal Bidomain decoupled solver, we add jumps in the diffusion coefficients, modeling pathological conditions such as myocardial ischemia. In particular, we consider a small and regular transmural ischemic region located at the center of both geometries (see Fig. 1). We consider only the RMC and TP06 ionic models, investigating the behavior of the dual-primal solver both in case of a simple phenomenological ionic model and in case of a human ventricular ionic model.
In the ischemic region, we decrease the diffusion coefficients \({\sigma _{l}^{i}}\) and \({\sigma _{t}^{i}}\) along and across the fibers as shown in Table 5. Furthermore, we reduce the ionic current by 30% for the RMC ionic model; in the case of the TP06 ionic model, we increase the potassium extracellular concentration K_{o} from 5.4 mM to 8 mM and we decrease the sodium conductance G_{Na} by 30%, simulating a region with moderate ischemia. Other settings can be found in [5, Chapter 9] and the references therein.
We represent in Figs. 3 and 4 the time evolution of the transmembrane potential v and of the extracellular potential u_{e}, respectively, on the epicardial surface, with a transmural ischemic region in the middle of the slab geometry.
In these experiments, we employ the PETSc implementation of algebraic multigrid (GAMG), with default parameters.
Weak scalability. Transmural ischemic region
The results of the weak scalability tests can be found in Tables 6 and 7, where we compare the performance of algebraic multigrid and dualprimal preconditioners, with both ionic models and geometries. The local mesh size is fixed to 14 ⋅ 14 ⋅ 14 and the number of processors is increased from 4 to 256, thus resulting in an increasing number of dofs from 25,230 to 1,462,050.
As already observed in the previous comparison for normal physiological cases, the average number of nonlinear iterations for the TP06 model is higher than for the RMC model. Additionally, we again observe higher nonlinear iteration counts for the more complex ellipsoidal domain.
Regarding the slab geometry, BDDC and FETI-DP scale well in terms of the average number of linear iterations, since this quantity remains bounded as the number of processors increases, whereas GAMG's iterations grow. For the ellipsoidal geometry, the linear iteration counts fluctuate more for all preconditioners, but the dual-primal preconditioners still yield lower values than GAMG.
BDDC and FETI-DP are slower than GAMG in terms of average CPU time, probably due to their greater need for interprocessor communication. On the other hand, GAMG's CPU time increases steadily with the processor count, while the dual-primal preconditioners show a slower CPU time growth.
Strong scalability. Transmural ischemic region
We then consider strong scalability tests, where we increase the number of subdomains from 32 to 256 while fixing the global mesh: 192 ⋅ 192 ⋅ 32 elements (for a total of 2,458,434 dofs) for the slab geometry and 128 ⋅ 128 ⋅ 64 elements (thus resulting in 2,163,330 dofs) for the portion of truncated ellipsoid.
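Both global dof counts follow from the same Q1 node count with two unknowns per node; a short verification, under that assumption:

```python
def global_dofs(nx, ny, nz):
    """Global dofs on an nx x ny x nz Q1 hexahedral mesh with
    two unknowns (v, u_e) per mesh node."""
    return 2 * (nx + 1) * (ny + 1) * (nz + 1)

print(global_dofs(192, 192, 32))  # slab geometry -> 2458434
print(global_dofs(128, 128, 64))  # truncated ellipsoid -> 2163330
```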
The average number of nonlinear iterations increases with the complexity of the ionic model: as reported in Tables 8 and 9, this parameter increases from 1–2 per time step using the RMC model to 2–3 per time step with the TP06 model.
The robustness of the solver is confirmed by the average number of linear iterations, which is comparable for all the preconditioners and for both ionic models; this indicates that our dual-primal solver retains its good convergence properties even for more complex ionic models. As a consequence of the higher nonlinear iteration counts, the CPU times for the TP06 model increase with respect to the RMC CPU times, but the associated parallel speedups of the two models are comparable. Since we are working with relatively few processors (hence larger local problems), BDDC and FETI-DP exceed the ideal speedup (computed with respect to both 32 and 64 processors), as the factorization of the local matrices takes most of the computational time.
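The speedup figures discussed here follow the usual definition relative to the smallest run; a minimal sketch, with purely illustrative (not measured) timings:

```python
def speedup(times, p_ref=32):
    """Parallel speedup relative to the p_ref-processor run.
    times maps processor count -> measured CPU time."""
    t_ref = times[p_ref]
    return {p: t_ref / t for p, t in times.items()}

# Hypothetical timings in seconds, chosen only to illustrate the
# superlinear effect of shrinking local factorizations.
times = {32: 800.0, 64: 380.0, 128: 200.0, 256: 110.0}
s = speedup(times)
# Ideal speedup at p processors is p / 32; here s[64] exceeds 2.
```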
Optimality tests. Transmural ischemic region
Lastly, we report in Tables 10 and 11 the results of optimality tests for increasing ratio H/h. We fix the number of processors (and subdomains) to 64 and we increase the local size H/h from 4 to 24, thus reducing the finite element size h.
Since the dual-primal preconditioners have been proven to be spectrally equivalent, we focus only on the performance of BDDC. For both ionic models, we again consider both scalings (ρ-scaling at the top, deluxe scaling at the bottom of each table) and we test the solver for increasing primal spaces, including only vertex constraints (V), vertex and edge constraints (V+E), or vertex, edge and face constraints (V+E+F).
Similar results hold for both geometries, independently of the ionic model employed or the scaling chosen. Despite higher average CPU timings for the deluxe scaling, all the other parameters are similar between the two scalings (the average numbers of nonlinear iterations are the same for each ionic model).
If we consider only vertices (V) in the coarse space, we observe that the number of linear iterations increases with the local size, as shown in Fig. 5. On the other hand, by adding edge (V+E) and face (V+E+F) constraints to the primal space, we obtain a quasi-optimality condition, where the linear iterations remain bounded, except for the slab geometry with the TP06 ionic model, where the ρ-scaling with the full primal space (V+E+F) behaves like the smallest primal space (V).
5 Conclusions
We have reviewed BDDC and FETI-DP preconditioners for fully implicit time discretizations of the Bidomain system, solved through a staggered strategy. We have presented extensive parallel numerical experiments testing the efficiency, scalability and robustness of the solver, both for complex ionic models and in the presence of regions with moderate ischemia. The results confirm the validity of the proposed solvers, enlarging the class of available methods for the numerical solution of complex reaction–diffusion models. Future studies should investigate alternatives to the Newton method, exploring the potential of quasi-Newton and inexact Newton algorithms.
Notes
\(\mathcal {N}_{x}\) induces an equivalence relation that classifies the interface degrees of freedom into face, edge and vertex equivalence classes.
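This classification can be sketched concretely: grouping interface dofs by the set \(\mathcal {N}_{x}\) of subdomains sharing them yields the equivalence classes. A toy example, with made-up subdomain sets:

```python
from collections import defaultdict

def classify_interface(sharing):
    """Group interface dofs into equivalence classes: two dofs are
    equivalent iff they are shared by the same set of subdomains.
    sharing maps dof id -> set of subdomain ids containing it."""
    classes = defaultdict(list)
    for dof, subs in sharing.items():
        classes[frozenset(subs)].append(dof)
    return classes

# Toy data: dofs 0-1 lie on a face (2 subdomains), dof 2 on an edge,
# dof 3 at a vertex (shared by 4 subdomains).
sharing = {0: {1, 2}, 1: {1, 2}, 2: {1, 2, 3}, 3: {1, 2, 3, 4}}
classes = classify_interface(sharing)
```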
References
Balay, S., et al.: PETSc web page. https://www.mcs.anl.gov/petsc/ (2019)
Barnafi, N., Zunino, P., Dedè, L., Quarteroni, A.: Mathematical analysis and numerical approximation of a general linearized poro-hyperelastic model. Comput. Math. Appl. 91, 202–228 (2021)
Chen, H., Li, X., Wang, Y.: A two-parameter modified splitting preconditioner for the Bidomain equations. Calcolo 56, 21 (2019)
Colli Franzone, P., Savaré, G.: Degenerate evolution systems modeling the cardiac electric field at micro- and macroscopic level. In: Lorenzi, A., Ruf, B. (eds.) Evolution Equations, Semigroups and Functional Analysis. Progress in Nonlinear Differential Equations and Their Applications, vol. 50, pp. 49–78. Birkhäuser, Basel (2002)
Colli Franzone, P., Pavarino, L.F., Scacchi, S.: Mathematical Cardiac Electrophysiology. Springer, Cham (2014)
Colli Franzone, P., Pavarino, L.F., Scacchi, S.: A numerical study of scalable cardiac electromechanical solvers on HPC architectures. Front. Physiol. 9, 268 (2018)
Beirão Da Veiga, L., Pavarino, L.F., Scacchi, S., Widlund, O.B., Zampini, S.: Isogeometric BDDC preconditioners with deluxe scaling. SIAM J. Sci. Comput. 36, A1118–A1139 (2014)
Di Gregorio, S., et al.: A computational model applied to myocardial perfusion in the human heart: From large coronaries to microvasculature. J. Comput. Phys. 424, 109836 (2021)
Dohrmann, C.R.: A preconditioner for substructuring based on constrained energy minimization. SIAM J. Sci. Comput. 25, 246–258 (2003)
Dohrmann, C.R., Widlund, O.B.: A BDDC algorithm with deluxe scaling for threedimensional H(curl) problems. Commun. Pure Appl. Math. 69, 745–770 (2016)
Falgout, R.D., Yang, U.M.: Hypre, High Performance Preconditioners: Users Manual. Technical report, Lawrence Livermore National Laboratory (2006)
Farhat, C., Lesoinne, M., LeTallec, P., Pierson, K., Rixen, D.: FETI-DP: A dual–primal unified FETI method—Part I: A faster alternative to the two-level FETI method. Int. J. Numer. Methods Eng. 50, 1523–1544 (2001)
FitzHugh, R.: Impulses and physiological states in theoretical models of nerve membrane. Biophys. J. 1, 445–466 (1961)
FitzHugh, R.: Mathematical models of excitation and propagation in nerve. Biol. Eng., pp. 1–85 (1969)
Huynh, N.M.M., Pavarino, L.F., Scacchi, S.: Parallel Newton–Krylov BDDC and FETI-DP deluxe solvers for implicit time discretizations of the cardiac Bidomain equations. SIAM J. Sci. Comput. 44, B224–B249 (2022)
Huynh, N.M.M.: Newton–Krylov-BDDC deluxe solvers for non-symmetric fully implicit time discretizations of the Bidomain model. arXiv:2102.08736 (2021)
Li, J., Widlund, O.B.: FETI-DP, BDDC, and block Cholesky methods. Int. J. Numer. Methods Eng. 66, 250–271 (2006)
LeGrice, I.J., Smaill, B.H., Chai, L.Z., Edgar, S.G., Gavin, J.B., Hunter, P.J.: Laminar structure of the heart: ventricular myocyte arrangement and connective tissue architecture in the dog. Amer. J. Physiol. Heart Circ. Physiol. 269, H571–H582 (1995)
Luo, C., Rudy, Y.: A model of the ventricular cardiac action potential. Depolarization, repolarization, and their interaction. Circ. Res. 68, 1501–1526 (1991)
Munteanu, M., Pavarino, L.F.: Decoupled Schwarz algorithms for implicit discretizations of nonlinear Monodomain and Bidomain systems. Math. Models Methods Appl. Sci. 19, 1065–1097 (2009)
Munteanu, M., Pavarino, L.F., Scacchi, S.: A scalable Newton–Krylov–Schwarz method for the Bidomain reaction-diffusion system. SIAM J. Sci. Comput. 31, 3861–3883 (2009)
Murillo, M., Cai, X.C.: A fully implicit parallel algorithm for simulating the nonlinear electrical activity of the heart. Numer. Linear Algebra Appl. 11, 261–277 (2004)
Quarteroni, A., Lassila, T., Rossi, S., RuizBaier, R.: Integrated Heart—Coupling multiscale and multiphysics models for the simulation of the cardiac function. Comput. Methods Appl. Mech. Eng. 314, 345–407 (2017)
Rogers, J.M., McCulloch, A.D.: A collocationGalerkin finite element model of cardiac action potential propagation. IEEE Trans. Biomed. Eng. 41, 743–757 (1994)
Salvador, M., Fedele, M., Africa, P.C., Sung, E., Dede’, L., Prakosa, A., Chrispin, J., Trayanova, N., Quarteroni, A.: Electromechanical modeling of human ventricles with ischemic cardiomyopathy: Numerical simulations in sinus rhythm and under arrhythmia. Comput. Biol. Med. 136, 104674 (2021)
Scacchi, S.: A multilevel hybrid Newton–Krylov–Schwarz method for the Bidomain model of electrocardiology. Comput. Methods Appl. Mech. Eng. 200, 717–725 (2011)
Smith, N.P., Nickerson, D.P., Crampin, E.J., Hunter, P.J.: Multiscale computational modelling of the heart. Acta Numer. 13, 371–431 (2004)
Ten Tusscher, K.H.W.J., Noble, D., Noble, P.J., Panfilov, A.V.: A model for human ventricular tissue. Amer. J. Physiol. Heart Circ. Physiol. 286, H1573–H1589 (2004)
Toselli, A., Widlund, O.: Domain Decomposition Methods – Algorithms and Theory. Springer, Berlin (2005)
Trayanova, N.A.: Whole-heart modeling: Applications to cardiac electrophysiology and electromechanics. Circ. Res. 108, 113–128 (2011)
Veneroni, M.: Reaction–diffusion systems for the macroscopic bidomain model of the cardiac electric field. Nonlinear Anal. Real World Appl. 10, 849–868 (2009)
Zampini, S.: Dualprimal methods for the cardiac Bidomain model. Math. Models Methods Appl. Sci. 24, 667–696 (2014)
Acknowledgements
The authors would like to acknowledge the Indaco cluster computing resources (https://www.indaco.unimi.it/) made available for conducting the research reported in this paper.
This work was supported by grants of MIUR (PRIN 2017AXL54F_002, PRIN 2017AXL54F_003), by Istituto Nazionale di Alta Matematica (INdAM-GNCS) and by the European High-Performance Computing Joint Undertaking EuroHPC under grant agreement No 955495 (MICROCARD).
Funding
Open access funding provided by Università degli Studi di Pavia within the CRUI-CARE Agreement.
We dedicate this paper to Prof. Dr. Alfio Quarteroni on the occasion of his 70th birthday.
Huynh, N.M.M., Pavarino, L.F. & Scacchi, S. Scalable and Robust Dual-Primal Newton–Krylov Deluxe Solvers for Cardiac Electrophysiology with Biophysical Ionic Models. Vietnam J. Math. 50, 1029–1052 (2022). https://doi.org/10.1007/s10013-022-00576-1
Keywords
 Domain decomposition
 FETI-DP and BDDC preconditioners
 Deluxe scaling
 Bidomain system
 Implicit time discretizations