Modeling excitable cells with the EMI equations: spectral analysis and iterative solution strategy

In this work, we are interested in solving large linear systems stemming from the Extra-Membrane-Intra (EMI) model, which is employed for simulating excitable tissues at a cellular scale. After stating the related systems of partial differential equations (PDEs), equipped with proper boundary conditions, we provide numerical approximation schemes for the EMI PDEs and focus on the resulting large linear systems. We first give a relatively complete spectral analysis using tools from the theory of Generalized Locally Toeplitz matrix sequences. The obtained spectral information is used for designing appropriate (preconditioned) Krylov solvers. We show, through numerical experiments, that the presented solution strategy is robust with respect to problem and discretization parameters, efficient, and scalable.


Introduction
The EMI (Extra, Intra, Membrane) model, also known as the cell-by-cell model, is a numerical building block in computational electrophysiology, employed to simulate excitable tissues at a cellular scale. With respect to homogenized models, such as the well-known mono- and bidomain equations, the EMI model explicitly resolves cell morphologies, enabling detailed biological simulations. For example, inhomogeneities in ionic channels along the membrane, as observed in myelinated neuronal axons [12], can be described within the EMI model.
The main areas of application for the EMI model are computational cardiology and neuroscience, where the spread of excitation by electrical signalling plays a crucial role [13,17,25,18,24,14,33,29]. We refer to [34] for an exhaustive description of the EMI model, its derivation, and relevant applications.
This work considers a Galerkin-type approximation of the underlying system of partial differential equations (PDEs). A finite element discretization leads to block-structured linear systems of possibly large dimensions. The use of ad hoc tools developed in the theory of Generalized Locally Toeplitz matrix sequences allows us to give a quite complete picture of the spectral features (the eigenvalue distribution) of the resulting matrices and matrix sequences.
The latter information is crucial to understand the algebraic properties of such discrete objects and to design iterative solution strategies, enabling large-scale models, as in [10,9]. More precisely, the spectral analysis is employed for designing tailored preconditioning strategies for Krylov methods. The resulting efficient and robust solution strategies are theoretically studied and numerically tested.
The current paper is organized as follows. In Section 2 the continuous problem is introduced together with possible generalizations. Section 3 considers a basic Galerkin strategy for approximating the examined problem. The spectral analysis is given in Section 4 in terms of distribution results and degenerating eigenspaces. Specifically, Subsection 4.1 lays out the foundational theories and concepts necessary for understanding the distribution of the entire system (presented in a general form in Subsection 4.2) and the specific stiffness matrices and matrix sequences (discussed in Subsection 4.3). These findings are then employed in Subsection 4.4 for proposing a preconditioning strategy. Moreover, in Section 5 we report and critically discuss the numerical experiments, and Section 6 contains conclusions and a list of relevant open problems.

The EMI problem
We introduce the partial differential equations characterizing the EMI model. When comparing to the homogenized models, we observe that the novelty of the EMI approach consists in the fact that the cellular membrane Γ is explicitly represented, as well as the intra- and extra-cellular quantities, denoted with subscripts i and e respectively.
More in detail, given a domain Ω = Ω_i ∪ Ω_e ∪ Γ ⊂ R^d, typically with d ∈ {2, 3}, ∂Ω = ∂Ω_e \ Γ, and ∂Ω_i = Γ, we consider the following time-discrete problem for the intra- and extra-cellular potentials u_i, u_e and for the membrane current I_m, whose interface conditions include the flux balance σ_e ∇u_e(x) · n_e = −σ_i ∇u_i(x) · n_i on Γ, with σ_e, σ_i, τ ∈ R^+, n_i (resp. n_e) the outer normal on ∂Ω_i (resp. ∂Ω_e), and f ∈ L^2(Γ) known. We can close the EMI problem with homogeneous boundary conditions: u_e(x) = 0 for x ∈ ∂Ω_D and σ_e ∇u_e(x) · n_e = 0 for x ∈ ∂Ω_N, with ∂Ω = ∂Ω_D ∪ ∂Ω_N. In the case of a pure Neumann problem, with ∂Ω = ∂Ω_N, uniqueness must be enforced through an additional constraint (e.g. a condition on the integral of u_e). We impose homogeneous boundary conditions for simplicity; the inhomogeneous case reduces to the homogeneous one by considering the lifting of the boundary data. Essentially, the EMI problem consists of two homogeneous Poisson problems coupled at the interface Γ through a Robin-type condition (3)-(5), depending on the current I_m. The EMI problem can be considered a mixed-dimensional problem, since the unknowns of interest are defined on sets of different dimensionality: d for u_i and u_e and d − 1 for I_m.
It is worth noticing that possible dynamics originate in the membrane via the source term f. In particular, eq. (5) is obtained by discretizing point-wise the capacitor current-voltage relation, a time-dependent ODE with t ∈ (0, T], given a final time T > 0, where the positive constant C_m is a capacitance and the ionic current I_ion is a reaction term. An implicit (resp. explicit) integration of I_m (resp. I_ion), with time step ∆t > 0, results in eq. (5) with τ = C_m^{-1} ∆t. The model in eqs. (1)-(5) can describe a single cell or multiple disjoint cells, i.e. with Ω_i = ∪_{j=1}^{N_cell} Ω_i^j, in an extracellular medium, cf. Figure 1. Extending model (1)-(5) to multiple cells, with possibly common membranes (a.k.a. gap junctions), is straightforward [23].
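The time discretization described above can be sketched in a few lines. The following is an illustrative sketch assuming the standard form C_m dv/dt = I_m − I_ion of the membrane relation; the cubic ionic current `i_ion` and all parameter values are placeholder assumptions, not taken from the paper:

```python
import numpy as np

def membrane_step_data(v_old, C_m, dt, i_ion):
    """Return (tau, f) for the time-discrete membrane condition.

    Implicit treatment of I_m and explicit treatment of I_ion in
    C_m dv/dt = I_m - I_ion gives v_new - tau * I_m = f, with
    tau = dt / C_m and f = v_old - tau * i_ion(v_old).
    """
    tau = dt / C_m
    f = v_old - tau * i_ion(v_old)
    return tau, f

# Placeholder cubic ionic current (hypothetical, for illustration only).
i_ion = lambda v: v * (v - 1.0) * (v - 0.1)

v_old = np.array([0.0, 0.5, 1.0])   # membrane potential at the previous step
tau, f = membrane_step_data(v_old, C_m=1.0, dt=0.01, i_ion=i_ion)
```

At each time step only τ and the right-hand side f change, so the coefficient matrix of the resulting linear system can be reused across steps.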

Weak formulation and discrete operators
The EMI problem can be weakly formulated in various ways, depending on the unknowns of interest. We refer to [34] for a broad discussion on various formulations (including the so-called mixed ones). As could be expected from the structure of (1)-(5), all formulations and corresponding discretizations give rise to block operators, with different blocks corresponding to Ω_i, Ω_e, and possibly Γ.
We use the so-called single-dimensional formulation and the corresponding discrete operators. In this setting, the weak form depends only on the bulk quantities u_i and u_e, since the current term I_m is replaced according to equations (4)-(5). Let us remark that "single" refers to the previous substitution, eliminating the variable defined on Γ; the overall EMI problem is still in multiple dimensions, in the sense that d ≥ 1.
After substituting the expression for I_m in (3), assuming the solution u_r, for r ∈ {i, e}, to be sufficiently regular over Ω_r, we multiply the PDEs in (1)-(2) by test functions v_r(x) ∈ V_r(Ω_r), with V_r a sufficiently regular Hilbert space with elements satisfying the boundary conditions in (6)-(7); in practice, H^1(Ω_i) and H^1(Ω_e) would be a standard choice. After integrating over Ω_r and applying integration by parts, using the normal flux definition (3), the weak EMI problem reads: find u_i ∈ V_i(Ω_i) and u_e ∈ V_e(Ω_e) such that (9)-(10) hold for all test functions v_e ∈ V_e(Ω_e) and v_i ∈ V_i(Ω_i). We refer to [34, Section 6.2.1] for boundedness and coercivity results for this formulation.
For each subdomain Ω_r, with r ∈ {i, e}, we construct a conforming tessellation T_r. We then introduce a yet unspecified discretization via finite element basis functions (e.g. Lagrangian elements of order p ∈ N on a regular grid) for V_e and V_i, with N_e, N_i ∈ N denoting the numbers of degrees of freedom in the corresponding subdomains. We can further decompose N_r = N_r,in + N_r,Γ, i.e. further differentiating between internal and membrane degrees of freedom, with N_r,Γ basis functions ϕ_j^r having support intersecting Γ, cf. Figure 1. In the numerical experiments, we will consider matching T_r on the interface Γ and the same p for V_e,h and V_i,h; nevertheless, Section 3 and Section 4 are developed in a general setting, so that the theory is ready also for potential extensions.
From (9)-(10) we define the following discrete operators: the intra- and extra-Laplacians, the membrane mass matrices, and the coupling matrix. Finally, we write the linear system of size n = N_e + N_i corresponding to (9)-(10), with u_r, f_r ∈ R^{N_r} the unknowns and the right-hand side corresponding to Ω_r, r ∈ {i, e}. We remark that the operator A_n is symmetric and positive definite (Dirichlet boundary conditions can be imposed through the right-hand side to enforce symmetry). Making the degrees of freedom in Γ explicit (with reference to Figure 1), i.e. u_i,Γ, u_e,Γ ∈ R^{N_Γ}, as well as the interior ones, i.e. u_i,in ∈ R^{N_i−N_Γ} and u_e,in ∈ R^{N_e−N_Γ}, we can rewrite system (16) more extensively in block form.

[Figure 1, right: example of the discretization setting for d = 2 on a regular grid. Interior and exterior nodes overlap in Γ. For p = 1, in this case, we have N_i = 9, N_e = 24 with N_i,in = 1, N_e,in = 16, and N_Γ = 8; corresponding labels are reported.]
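As an illustration of this block structure, the following toy 1D assembly builds a matrix with the bulk-plus-membrane pattern of (16) and checks that it is symmetric positive definite. The sizes, coefficients, and single interface degree of freedom are hypothetical, chosen for illustration only, not the paper's discretization:

```python
import numpy as np

def lap1d(m):
    """1D Dirichlet Laplacian stiffness matrix tridiag(-1, 2, -1)."""
    return 2*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)

def toy_emi_matrix(Ne=8, Ni=5, tau=0.1):
    """Bulk block-diagonal part plus a rank-limited membrane coupling.

    The last extracellular DOF and the first intracellular DOF play the
    role of the interface Gamma (N_Gamma = 1 in this toy setting).
    """
    n = Ne + Ni
    A = np.zeros((n, n))
    A[:Ne, :Ne] = tau * lap1d(Ne)   # extracellular bulk block
    A[Ne:, Ne:] = tau * lap1d(Ni)   # intracellular bulk block
    g_e, g_i = Ne - 1, Ne           # interface DOF indices
    # membrane contribution: couples u_e and u_i on Gamma, PSD by construction
    A[g_e, g_e] += 1.0
    A[g_i, g_i] += 1.0
    A[g_e, g_i] -= 1.0
    A[g_i, g_e] -= 1.0
    return A

A = toy_emi_matrix()
assert np.allclose(A, A.T)              # symmetric
assert np.linalg.eigvalsh(A).min() > 0  # positive definite
```

The bulk part is block diagonal, and the membrane coupling touches only the interface degrees of freedom, which is exactly the structure exploited later in the spectral analysis and preconditioning.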
For forthcoming use, we define the bulk matrix B_n, in which all membrane terms are zeroed.

Spectral analysis
In this section, we study the spectral distribution of the matrix sequence {A_n}_n under various assumptions, determining the global behaviour of the eigenvalues of A_n as the matrix size n tends to infinity. The spectral distribution is given by a smooth function called the (spectral) symbol, as is customary in the Toeplitz and Generalized Locally Toeplitz (GLT) setting [20,21,8,7]. First, we give the formal definition of Toeplitz structures, of the eigenvalue (spectral) and singular value distributions, and of the basic tools that we use from the GLT theory; finally, we provide the specific analysis of our matrix sequences under a variety of assumptions.

Toeplitz structures, spectral symbol, GLT tools
We initially formalize the definition of block Toeplitz and circulant sequences associated with a matrix-valued Lebesgue integrable function. Then, we provide the notion of eigenvalue (spectral) and singular value distribution, and we introduce the basic tools taken from the GLT theory.

Definition 4.1 [Toeplitz, block Toeplitz, multilevel Toeplitz matrices] A finite-dimensional or infinite-dimensional Toeplitz matrix is a matrix that has constant elements along each descending diagonal from left to right, namely as in (19). As the first equation in (19) indicates, the matrix T can be infinite-dimensional. A finite-dimensional Toeplitz matrix of dimension n is denoted as T_n. We also consider sequences of Toeplitz matrices as a function of their dimension, denoted as {T_n}_n. In general, the entries a_k, k ∈ Z, can be matrices (blocks) themselves, defining T_n as a block Toeplitz matrix. Thus, in the block case, a_k, k ∈ {−n+1, …, n−1}, are blocks of size s_1 × s_2, the subindex of T_n is the number of blocks in the Toeplitz matrix, while N_1 × N_2 is the size of the matrix, with N_1 = n s_1, N_2 = n s_2. To be specific, we also use the notation X_{N_1,N_2} = T_n. A special case of block Toeplitz matrices is the class of two- and multilevel block Toeplitz matrices, where the blocks are Toeplitz (or multilevel Toeplitz) matrices themselves. The standard Toeplitz matrices are sometimes addressed as unilevel Toeplitz.
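The defining property (T_n)_{j,k} = a_{j−k} can be turned into code directly; a minimal NumPy sketch (an implementation choice for illustration, not tied to the paper):

```python
import numpy as np

def toeplitz_from_coeffs(a, n):
    """Build T_n with (T_n)_{j,k} = a_{j-k}.

    `a` is a dict mapping the offset j-k to the coefficient a_{j-k};
    missing offsets are treated as zero.
    """
    return np.array([[a.get(j - k, 0.0) for k in range(n)]
                     for j in range(n)])

# Coefficients of f(theta) = 2 - 2cos(theta) = 2 - e^{i theta} - e^{-i theta}:
# a_0 = 2, a_1 = a_{-1} = -1.
T = toeplitz_from_coeffs({0: 2.0, 1: -1.0, -1: -1.0}, 5)
# T is the classical tridiagonal matrix tridiag(-1, 2, -1)
```

In the block case the same construction applies with `a` mapping offsets to s_1 × s_2 blocks instead of scalars.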
By following the multi-index notation in [35, Section 6], with each f we can associate a sequence of Toeplitz matrices {T_n}_n; for d = 2, i.e. the two-level case, and for example n = (2, 3), we obtain a 2 × 2 block Toeplitz matrix whose blocks are themselves 3 × 3 Toeplitz matrices. The function f is referred to as the generating function (or the symbol) of T_n. Using a more compact notation, we say that the function f is the generating function of the whole sequence {T_n}_n, and we write T_n = T_n(f).

Definition 4.3 Let f : D → C^{s×s} be a measurable matrix-valued function with eigenvalues λ_i(f) and singular values σ_i(f), i = 1, …, s. Assume that D ⊂ R^d is Lebesgue measurable with positive and finite Lebesgue measure, and let {A_n}_n be a matrix sequence, with A_n of size d_n, with eigenvalues λ_j(A_n) and singular values σ_j(A_n), j = 1, …, d_n.
• We say that {A_n}_n is distributed as f over D in the sense of the eigenvalues, and we write {A_n}_n ∼_λ (f, D), if the limit relation (20) holds for every continuous function F with compact support. In this case, we say that f is the spectral symbol of {A_n}_n.
• We say that {A_n}_n is distributed as f over D in the sense of the singular values, and we write {A_n}_n ∼_σ (f, D), if the limit relation (21) holds for every continuous function F with compact support. In this case, we say that f is the singular value symbol of {A_n}_n.
• The notion {A_n}_n ∼_σ (f, D) applies also in the rectangular case, where f is C^{s_1×s_2} matrix-valued. In such a case, the parameter s in formula (21) has to be replaced by the minimum between s_1 and s_2; furthermore, the matrices A_n are of size d_n^{(1)} × d_n^{(2)}, with d_n in formula (21) being the minimum between d_n^{(1)} and d_n^{(2)}. Of course, the notion of eigenvalue distribution does not apply in a rectangular setting.
Throughout the paper, when the domain can be easily inferred from the context, we replace the notation {A_n}_n ∼_λ (f, D) (resp. {A_n}_n ∼_σ (f, D)) with {A_n}_n ∼_λ f (resp. {A_n}_n ∼_σ f). The informal meaning behind the spectral distribution is that, when n is sufficiently large, the eigenvalues (resp. singular values) of A_n can be subdivided into s different subsets of the same cardinality. Then d_n/s eigenvalues (resp. singular values) of A_n can be approximated by a sampling of λ_1(f) (resp. σ_1(f)) on a uniform equispaced grid of the domain D, and so on, until the last d_n/s eigenvalues (resp. singular values), which can be approximated by an equispaced sampling of λ_s(f) (resp. σ_s(f)) in the domain D.
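This informal meaning can be checked directly in the scalar case s = 1: for f(θ) = 2 − 2 cos θ, the eigenvalues of T_n(f) = tridiag(−1, 2, −1) are exactly f sampled on the equispaced grid θ_j = jπ/(n+1), a classical fact. The code below is an illustrative check, not taken from the paper:

```python
import numpy as np

n = 50
# T_n(f) for f(theta) = 2 - 2cos(theta): tridiag(-1, 2, -1)
T = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

eigs = np.sort(np.linalg.eigvalsh(T))
theta = np.arange(1, n + 1) * np.pi / (n + 1)   # equispaced grid in (0, pi)
samples = np.sort(2 - 2*np.cos(theta))          # f is even: sampling over (0, pi)

assert np.allclose(eigs, samples)               # exact agreement in this case
```

For general f the agreement is only asymptotic, in the Weyl sense of Definition 4.3, but the sampling picture remains the right mental model.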

Remark 4.2 We say that
For Toeplitz matrix sequences, the following theorem due to Tilli holds, which generalizes previous research over the last 100 years by Szegő, Widom, Avram, Parter, Tyrtyshnikov, and Zamarashkin (see [21,7] and references therein).
The following theorem is useful for computing the spectral distribution of a sequence of Hermitian matrices; for the related proof, see [27, Theorem 4.3] and [28, Theorem 8]. With the notations of the result above, the matrix sequence {P*_n A_n P_n}_n is called a compression of {A_n}_n, and the single matrix P*_n A_n P_n a compression of A_n. In what follows, we take into account a crucial fact that is often neglected: the generating function of a Toeplitz matrix sequence and, even more, the spectral symbol of a given matrix sequence is not unique, except for the trivial case of either a constant generating function or a constant spectral symbol. In fact, here we report and generalize Remark 1.3 at p. 76 in [15] and the discussion below Theorem 3 at p. 8 in [16].

Remark 4.3 [Multiple Toeplitz generating functions and multiple spectral symbols]
Let n be a multiple of k and consider the Toeplitz matrix X_n = T_n(f). We can view X_n as a block Toeplitz matrix with blocks of size k × k, i.e. X_n = T_{n/k}(f^{[k]}). In our specific context, we have an explicit expression of f^{[k]} in terms of f and of e_j, j = 1, …, k, the canonical basis of C^k.
As an example, let n be even and consider the function f(θ) = 2 − 2 cos(θ). According to Definition 4.2, X_n = T_n(f), but we can also view the matrix as X_n = T_{n/2}(f^{[2]}), where A_0 = [2, −1; −1, 2] and A_1 = [0, −1; 0, 0]. Thus, f^{[2]}(θ) = A_0 + A_1 e^{iθ} + A_1^T e^{−iθ}. It is clear that analogous multiple representations of Toeplitz matrices hold for every function f. As a consequence, taking into consideration Theorem 4.1, we have simultaneously {T_n(f)}_n ∼_λ f and {T_n(f)}_n ∼_λ f^{[k]}, for any fixed k independent of n. It should also be observed that a situation in which k does not divide n is not an issue, thanks to the compression argument in Theorem 4.2.
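The scalar and block representations in the example describe one and the same matrix, as the following check illustrates (illustrative code, with the 2 × 2 blocks written out explicitly):

```python
import numpy as np

n = 6                                   # even, so k = 2 divides n
# scalar view: T_n(f), f(theta) = 2 - 2cos(theta)
T_scalar = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# block view: T_{n/2}(f^{[2]}) with f^{[2]} = A0 + A1 e^{i theta} + A1^T e^{-i theta}
A0 = np.array([[2.0, -1.0], [-1.0, 2.0]])
A1 = np.array([[0.0, -1.0], [0.0, 0.0]])
m = n // 2
T_block = (np.kron(np.eye(m), A0)
           + np.kron(np.eye(m, k=-1), A1)    # block subdiagonal
           + np.kron(np.eye(m, k=1), A1.T))  # block superdiagonal

assert np.allclose(T_scalar, T_block)   # same matrix, two different symbols
```

Since the two constructions produce the same matrix, any distribution result stated for one representation automatically holds for the other.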
More generally, we can safely claim that the spectral symbol in the Weyl sense of Definition 4.3 is far from unique and, in fact, any rearrangement is still a symbol (see [16,6]). A simple case is given by standard Toeplitz sequences {T_n(f)}_n, with f real-valued and even: due to the even character of f, {T_n(f)}_n ∼_λ (f, [−π, π]) and hence it is also true that {T_n(f)}_n ∼_λ (f, [0, π]). A general analysis of these concepts via rearrangement theory can be found in [6] and references therein.
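For example, for the even function f(θ) = 2 − 2 cos θ, the sorted samplings over [−π, π] and over [0, π] approximate the same limiting eigenvalue profile; an illustrative numerical check (grid sizes and the tolerance are arbitrary choices):

```python
import numpy as np

f = lambda t: 2 - 2*np.cos(t)
n = 1000

# sorted samples over the symmetric domain [-pi, pi] (2n points)...
full = np.sort(f(np.linspace(-np.pi, np.pi, 2*n)))
# ...come in near-pairs by evenness, so every other sorted value
# matches the sorted samples over [0, pi] (n points)
half = np.sort(f(np.linspace(0, np.pi, n)))

assert np.max(np.abs(full[::2] - half)) < 0.05  # same rearranged profile
```

Both samplings converge, as n grows, to the monotone rearrangement of f, which is why either domain can serve as the symbol domain.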

Symbol analysis
In this section, we state and prove three results which hold under reasonable assumptions. The first has the maximal generality, while the second and the third can be viewed as special cases of the first. Let us recall that N_i, N_e, and N_Γ denote the numbers of intra-, extra-, and membrane degrees of freedom.
Theorem 4.3 Assume that {τ A_e^in}_{N_e} ∼_λ (f_e, D_e) and {τ A_i^in}_{N_i} ∼_λ (f_i, D_i), with D_e ⊂ R^{k_e}, D_i ⊂ R^{k_i}, and given f_e and f_i extra- and intra-spectral symbols. It follows that {A_n}_n is distributed as a symbol combining f_e and f_i, ψ_Z denoting the characteristic function of the set Z.

Proof
First, we observe that, because of the Galerkin approach and the structure of the equations, all the involved matrices are real and symmetric. Hence the symbols f_e, f_i are necessarily Hermitian matrix-valued. Without loss of generality, we assume that both f_e, f_i take values in C^{s×s} (the case where f_e, f_i have different matrix sizes is treated in Remark 4.4, for the sake of notational simplicity). Because of equations (23) and (24), we set the quantity ∆(N_e, N_i, N_Γ, F), for any continuous function F with bounded support. Now we build the 2 × 2 block matrix X[N_e, N_i, N_Γ] = diag(τ A_e^in, τ A_i^in) and we consider again a generic continuous function F with bounded support. By the block diagonal structure of X[N_e, N_i, N_Γ], we infer the splitting of the trace average into its two diagonal contributions. As a consequence, taking the limit as N_i, N_e → ∞ and using the assumption of the theorem, we obtain that the limit of ∆(N_e, N_i, N_Γ, F) exists and equals the quantity in (27). With regard to Definition 4.3, the quantity (27) does not look like the right-hand side of (20), since we see two different terms. The difficulty can be overcome by enlarging the space with a fictitious domain [0, 1] and interpreting the sum in (27) as a global integral involving a step function. In fact, rewriting (27) as a single integral over the enlarged domain, the identification with the right-hand side of (20) is complete. Now we observe that X[N_e, N_i, N_Γ] is a compression of Ã_n (cf. the definition after Theorem 4.2); therefore, by exploiting again the hypothesis N_Γ = o(min{N_i, N_e}), which implies N_Γ = o(n), by Theorem 4.2 we deduce that {Ã_n}_n enjoys the same distribution. Finally, in Exercise 5.3 of the book [20] the following is proven: if {V_n}_n ∼_λ f (not necessarily Toeplitz or GLT) and {A_n = V_n + Y_n}_n with {Y_n}_n zero-distributed and A_n, Y_n Hermitian for every dimension, then {A_n}_n ∼_λ f. This is exactly our setting, since by direct inspection the rank of R_n = A_n − Ã_n is bounded by 4N_Γ. Hence the number of nonzero eigenvalues of the latter matrix cannot exceed 4N_Γ = o(n), n = N_i + N_e, and therefore the related matrix sequence is zero-distributed in the eigenvalue sense, i.e. {R_n}_n ∼_λ 0.
Since A_n = Ã_n + R_n, the application of the statement in [20, Exercise 5.3] directly implies the desired distribution result, and the proof is concluded. •

Remark 4.4 The case where f_e and f_i have different matrix sizes would complicate the derivations in the proof of the above theorem. However, in this case, the argument is simple and relies on the non-uniqueness of the spectral symbol; for a discussion on the matter, refer to [16,6] and Remark 4.3.
The following two corollaries simplify the statement of Theorem 4.3, under special assumptions which are satisfied for a few basic discretization schemes and when dealing with elementary domains.

Corollary 4.1 Assume the hypotheses of Theorem 4.3, with r = 1/2.
Proof Notice that the two writings are equivalent for c = 1/2. •

Corollary 4.2 Assume the hypotheses of Theorem 4.3, with f_e = f_i = f and D_e = D_i = D.
Proof The proof of the relation {Ã_n}_n, {A_n}_n ∼_λ (f, D) follows directly from the limit displayed in (27), after replacing f_e, f_i with f and, necessarily, D_e, D_i with D. The rest is obtained verbatim as in Theorem 4.3. •

Analysis of stiffness matrix sequences
The spectral analysis of stiffness matrix sequences is crucial to understand the global spectral behaviour of EMI matrices and, in turn, to justify our preconditioning approach. According to [22, Section 5], given a bi-dimensional square domain, the sequence of corresponding stiffness matrices {A_n^square}_n obtained from the linear Q_1 Lagrangian finite element method (FEM) has the spectral distribution {A_n^square}_n ∼_λ (f_square, [0, π]^2), with f_square(θ_1, θ_2) = f_1D(θ_1) h_1D(θ_2) + h_1D(θ_1) f_1D(θ_2), the functions f_1D and h_1D being, respectively, the symbols associated with the stiffness and mass matrix sequences in one dimension, that is, f_1D(θ) = 2 − 2 cos(θ) and h_1D(θ) = (4 + 2 cos(θ))/6. Expanding the coefficients, we have f_square(θ_1, θ_2) = (8 − 2 cos(θ_1) − 2 cos(θ_2) − 4 cos(θ_1) cos(θ_2))/3. Let us now consider {T_n(f_square)}_n, the sequence of bi-level Toeplitz matrices generated by f_square, which is a multilevel GLT sequence by definition (see [21]). We adopt the bi-index notation, that is, we consider the elements of T_n(f_square) to be indexed with a bi-index (i_1, i_2), with i_1 = 1, …, n_1 and i_2 = 1, …, n_2. Let us focus on Ω_i. In the Q_1 case there is a one-to-one correspondence between the mesh grid points which are in the interior of the subdomain Ω_i and the N_i degrees of freedom, where N_i is the dimension of V_i,h. See [22, Section 3] for an explanation.
Following the analysis performed in [5], first we set to zero all the rows and columns with bi-index (i_1, i_2) such that (i_1, i_2) is not a mesh grid point tied to a degree of freedom associated with a trial function in V_i,h. We call the matrix resulting from this operation T_i. Then, we define the restriction maps Π_Ω_i and Π_Ω_i^T such that Π_Ω_i T_i Π_Ω_i^T is the matrix obtained from T_i by deleting all such zeroed rows and columns. With a similar procedure, we construct also the matrices Π_Ω_e T_e Π_Ω_e^T. Then, we can apply [5, Lemma 4.4] and conclude the corresponding distribution results. Apart from a low-rank correction due to the boundary conditions, the matrices Π_Ω_i T_i Π_Ω_i^T coincide with A_i^in, and the same holds for the external domain. Since low-rank corrections are zero-distributed and hence, in the Hermitian case, have no impact on spectral distributions (see Exercise 5.3 in the book [20]), we can safely write the distributions (33) and (34). Finally, by exploiting again the useful non-uniqueness of the spectral symbol as in Remark 4.3, we also obtain the same distributions with respect to f_square, since f_square over [0, π]^2 is a rearrangement of both f_e over D_e and f_i over D_i. Given relationships (33) and (34), Theorem 4.3 applies, since the other assumptions are verified when using any reasonable discretization. For instance, the parameter r in Theorem 4.3 is the ratio between the measures of the subdomains. Furthermore, if μ_d(Ω_i) = μ_d(Ω_e), then r = 1/2 and Corollary 4.1 applies. We observe that, in our specific setting, due to (35) and (36), also the assumptions of Corollary 4.2 hold true, so that the assumption on the ratio r is no longer relevant.
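In the pure Toeplitz setting, the tensor structure behind f_square can be checked exactly: with the 1D stiffness matrix K (symbol f_1D) and mass matrix M (symbol h_1D) under a unit scaling (an illustrative normalization, not the paper's), the eigenvalues of K⊗M + M⊗K are exact samplings of f_1D(θ_1)h_1D(θ_2) + h_1D(θ_1)f_1D(θ_2) on the grid θ_j = jπ/(m+1), since K and M share the same eigenvectors:

```python
import numpy as np

m = 12
theta = np.arange(1, m + 1) * np.pi / (m + 1)

# 1D linear-FEM stiffness and mass matrices (unit scaling assumed)
K = 2*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)        # symbol f_1D = 2 - 2cos
M = (4*np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)) / 6  # symbol h_1D = (4 + 2cos)/6

# 2D Q1-type stiffness structure via tensor products
A2 = np.kron(K, M) + np.kron(M, K)

f1 = 2 - 2*np.cos(theta)
h1 = (4 + 2*np.cos(theta)) / 6
samples = np.sort((np.outer(f1, h1) + np.outer(h1, f1)).ravel())
eigs2 = np.sort(np.linalg.eigvalsh(A2))

assert np.allclose(eigs2, samples)   # exact sampling of f_square
```

The restriction maps Π then carve the subdomain matrices out of this Toeplitz structure, which is why the same symbol survives up to low-rank corrections.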

Monolithic solution strategy
Since, by construction, A_n is symmetric positive definite (cf. Section 3), we employ the conjugate gradient method (CG) to solve system (16) iteratively. We first describe the dependency on τ and subsequently discuss preconditioning.

Conjugate gradient convergence
To better understand the convergence properties of the CG method in the EMI context, in particular robustness w.r.t. τ, we recall the following result, proved in [4].
Lemma 4.1 Assume A_n to be a symmetric positive definite matrix of size n. Moreover, suppose there exist a, b > 0, both independent of the matrix size n, and an integer q < n, such that at most q eigenvalues of A_n lie outside the interval [a, b], all of them larger than b. Then k iterations of CG are sufficient in order to obtain a relative error of ϵ of the solution, for any initial guess, where e_k is the error vector at the kth iteration.

This lemma can be useful if q ≪ n, i.e. if there are q outliers in the spectrum of A_n, all larger than b. Interestingly, the convergence of the CG method does not depend on the magnitude of the outliers, but only on q. The latter explains why the convergence rate of CG applied to a linear system with matrix τ^{-1} A_n does not significantly depend on τ, as we will show in the numerical section. Indeed, the matrix τ^{-1} A_n can be written as the sum of a first term which does not depend on τ (the bulk matrix B_n) and a second term of rank at most 2N_Γ. Thanks to the Cauchy interlacing theorem (see [11]), at most 2N_Γ eigenvalues of τ^{-1} A_n lie outside [λ_min(B_n), λ_max(B_n)]. In practice, for τ sufficiently small, we have q = 2N_Γ, as we show in the numerical section (cf. Figure 3). Nevertheless, due to the proximity of the lower bound a to zero, the convergence of the CG method is far from satisfactory, and it becomes essential to design an appropriate preconditioner for A_n.
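The insensitivity to the outlier magnitude can be observed directly with a plain CG implementation on diagonal test matrices; the sizes, spectrum, and tolerance below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def cg(A, b, tol=1e-6, maxit=1000):
    """Plain conjugate gradient; returns (solution, iteration count)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
            return x, k
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, maxit

rng = np.random.default_rng(0)
n, q = 400, 5
b = rng.standard_normal(n)

def iters_with_outliers(magnitude):
    # spectrum: n - q eigenvalues in [1, 2], plus q outliers of the given magnitude
    d = np.concatenate([np.linspace(1.0, 2.0, n - q), np.full(q, magnitude)])
    return cg(np.diag(d), b)[1]

k_small = iters_with_outliers(1e3)
k_large = iters_with_outliers(1e9)
# iteration counts barely depend on the outlier magnitude, only on q
```

Despite a condition number growing by six orders of magnitude, the iteration counts stay essentially unchanged, mirroring the behaviour of τ^{-1} A_n as τ → 0.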

Preconditioning
We propose a theoretical preconditioner, which is proven to be effective through the spectral analysis carried out in the previous sections. The process of implementing a practical preconditioning strategy will be discussed at the end of the current section.
Denoting by I_e^Γ and I_i^Γ the identity matrices of the same size as A_e^Γ and A_i^Γ respectively, a first observation is that the block diagonal matrix P_n is such that the Hermitian matrix A_n − P_n has rank less than or equal to 4N_Γ = o(n), n = N_i + N_e, by an argument similar to that in the proof of Theorem 4.3. In fact, the typical behavior is N_Γ ∼ √n if Ω is a two-dimensional domain; in the general setting of a d-dimensional domain, and with any reasonable discretization scheme, we have N_Γ = O(n^{(d−1)/d}). The function f_square in formula (32) is nonnegative, implying that the matrices T_n(f_square) are Hermitian positive definite according to [21, Theorem 3.1]. The restriction operators Π_Ω_i and Π_Ω_i^T have full rank, and so Π_Ω_i T_n(f_square) Π_Ω_i^T is Hermitian positive definite. Moreover, in our case the boundary conditions do not alter the positive definiteness, and the same holds for A_e^in. Hence, the matrix P_n is well-defined and we can write P_n^{-1} A_n = I_n + P_n^{-1}(A_n − P_n), with A_n − P_n having rank less than or equal to 4N_Γ = o(n), n = N_i + N_e, which means that the sequence {P_n^{-1}(A_n − P_n)}_n is zero-distributed. Since all the involved matrices are Hermitian, Exercise 5.3 in the book [20] tells us that the matrix sequence {P_n^{-1} A_n}_n is distributed as 1 in the eigenvalue sense. Therefore P_n could serve as an effective preconditioner for A_n, also because, as already pointed out, the number of possible outliers cannot grow more than O(n^{(d−1)/d}), d ≥ 1, with d the dimensionality of the physical domain. We now face the task of finding an efficient method for approximating the matrix-vector products with the inverses of A_e^in and A_i^in. Given the multilevel structure of these matrices and their role as discretizations of the Laplacian operator, multigrid techniques are a natural choice. When dealing with a bi-level Toeplitz matrix of size n generated by f_square, the conventional bisection and linear interpolation operators, with coarse size k = ⌊n/4⌋, are an effective choice for restriction and prolongation operators. These can be paired with the Jacobi smoother, which requires suitably
chosen relaxation parameters. See [30] for the theoretical background. A similar multigrid strategy can be applied to the "subdomain" matrices A_e^in and A_i^in thanks to [31]. In practice, we will apply one multigrid iteration as a preconditioner for the whole matrix A_n, avoiding the construction of P_n. It has been proven in [31] that, if two linear systems have spectrally equivalent coefficient matrices and a multigrid method is effective for one of them, then it is effective also for the other. The matrix sequences {A_n}_n and {P_n}_n share the same spectral distribution, but the matrices may not be spectrally equivalent due to outliers, which are at most 4N_Γ by the Cauchy interlacing theorem. However, outliers are controlled by the CG method, as shown by Lemma 4.1. For a recent reference on spectrally equivalent matrix sequences and their use to design fast iterative solvers, see [3].
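The clustering at 1 induced by the block-diagonal preconditioner can be observed on a small analogue of A_n; the toy 1D blocks, sizes, and single interface degree of freedom below are illustrative assumptions:

```python
import numpy as np

def lap1d(m):
    """1D Dirichlet Laplacian stiffness matrix tridiag(-1, 2, -1)."""
    return 2*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)

Ne, Ni, tau = 30, 20, 0.1
n = Ne + Ni
A = np.zeros((n, n))
A[:Ne, :Ne] = tau * lap1d(Ne)
A[Ne:, Ne:] = tau * lap1d(Ni)
# membrane coupling on one interface DOF pair (N_Gamma = 1)
g_e, g_i = Ne - 1, Ne
A[g_e, g_e] += 1.0; A[g_i, g_i] += 1.0
A[g_e, g_i] -= 1.0; A[g_i, g_e] -= 1.0

# block-diagonal preconditioner: bulk blocks only, membrane terms dropped
P = np.zeros((n, n))
P[:Ne, :Ne] = tau * lap1d(Ne)
P[Ne:, Ne:] = tau * lap1d(Ni)

# symmetrized preconditioned matrix L^{-1} A L^{-T}, with P = L L^T
L = np.linalg.cholesky(P)
C = np.linalg.solve(L, np.linalg.solve(L, A.T).T)
eigs = np.linalg.eigvalsh(C)
outliers = int(np.sum(np.abs(eigs - 1.0) > 1e-6))
# at most 4 * N_Gamma eigenvalues can deviate from 1
```

In this toy case a single eigenvalue leaves the cluster at 1, in line with the low-rank bound on A_n − P_n.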
Numerical results

Problem and discretization settings
We consider the following two scenarios: (ii) a three-dimensional geometry, representing one astrocyte [1], cf. Figure 2, discretized with 32365 grid nodes, leading to n = 212548 degrees of freedom for p = 2 (as used in the numerical experiments).

Implementation and solver parameters
We use FEniCS [2,26] for parallel finite element assembly and multiphenics to handle multiple meshes with common interfaces and the corresponding mixed-dimensional weak forms. FEniCS wraps PETSc parallel solution strategies. For multilevel preconditioning, we use hypre algebraic multigrid (AMG) [19], with default options, cf. Appendix A. For comparative studies, we use MUMPS as a direct solver and incomplete LU (ILU) preconditioning with zero fill-in. Parallel ILU relies on a block Jacobi preconditioner, approximating each block inverse with ILU. For iterative strategies, we use a tolerance of 10^{-6} for the relative unpreconditioned residual as stopping criterion.
The implementation is verified considering the same benchmark problem as in [33, Section 3.2], obtaining the expected convergence rates.

Experiments: eigenvalues and CG convergence
We consider the behaviour of unpreconditioned CG for τ → 0 and scenario (i). For simplicity, in this section, we use a unit right-hand side, i.e. f = 1/∥1∥_2 in (16). In Figure 3 an example of the spectrum of A_n is shown; as presented in Section 4.4, varying τ influences only 2N_Γ eigenvalues. Using Lemma 4.1, with q = 2N_Γ, results in a strict upper bound for the number of CG iterations, which is observed in practice, despite the increasing condition number κ(A_n).
In this idealized setting, imposing τ = 1, we numerically confirm the results of Sections 4.2-4.3, showing that the spectral distribution with symbol g, defined on a domain D that combines the symbol domains of the two subdomains with the fictitious interval [0, 1] split by the characteristic functions ψ_[0,1/4)(x) and ψ_[1/4,1](x), reasonably approximates the eigenvalues of A_n, given a uniform sampling even for very few grid points in each of the 5 dimensions of D. In order to make visualization possible, we evaluate the function g on its domain and then arrange the evaluations in ascending order, which corresponds to considering the monotone rearrangement of g. Note that in this setting the number r defined in Theorem 4.3 is equal to 1/4. In the left panel of Figure 4 we observe that the spectrum of A_n is qualitatively described by the samplings of the function g over a uniform grid on D. Moreover, in our particular setting, also the spectral distribution with respect to f_square holds, and we numerically confirm this result in the right panel of Figure 4, where the eigenvalues of A_n are described by the samplings of the function f_square over a uniform grid on [0, π]^2.

Experiments: preconditioning
In Table 1 we report CG iterations to convergence for scenario (i), varying the discretization parameters N, p, and τ. Similarly, in Table 2, we report iterations to convergence using the AMG preconditioned CG (PCG), showing robustness w.r.t. all discretization parameters. We remark that, for PCG, time-to-solution depends linearly on n, as expected for multilevel methods. In Table 3 we report runtimes and iterations to convergence for various preconditioning strategies for scenario (i); assembly and direct solution timings are also reported for practicality. In terms of efficiency, the AMG and ILU preconditioners are preferable.
In Table 4, we show strong scaling data corresponding to scenario (ii), i.e. the astrocyte geometry. In this case, the ILU preconditioner is preferable both in terms of runtime and parallel efficiency. Summing up the numerical results, we can observe how the AMG preconditioned CG exhibits near-to-optimal robustness, converging in 5 to 7 iterations for all the (N, p) cases considered. For time-to-solution, an ILU preconditioning might be preferable for three-dimensional complex geometries, while AMG remains the most convenient choice for structured discrete problems. Let us remark that the ILU preconditioner can be particularly suitable for multiple time steps, since the ILU decomposition has to be computed only once.

Concluding remarks
We described numerical approximation schemes for the EMI equations and studied the structure and spectral features of the coefficient matrices obtained from a finite element discretization in the so-called single-dimensional formulation. The obtained spectral information has been employed for designing appropriate (preconditioned) Krylov solvers. Numerical experiments have been presented and critically discussed; depending on the context (dimensionality, geometry), the CG method, preconditioned with AMG or ILU, results in an efficient, scalable, and robust solution strategy. The spectral analysis made it possible to understand the somewhat counterintuitive convergence properties of CG in this context. Due to the intrinsic generality of the tools in Sections 4.1-4.2, the given machinery can be applied to a variety of discretization schemes, including isogeometric analysis, finite differences, and finite volumes, in the spirit of the corresponding sections of the books [21,7].

Definition 4.2 [Toeplitz sequences (generating function of)] Denote by f a d-variate complex-valued integrable function, defined over the domain Q = [−π, π]^d.

Figure 2: Left: astrocyte geometry and corresponding mesh; a cube with side 0.1 is shown as a reference. Right: solution corresponding to Table 4.

Figure 3: Left: spectrum of τ^{-1} A_n varying τ for (N, p) = (16, 1). The [a, b] = [λ_min(B_n), λ_max(B_n)] interval is shown, with reference to Lemma 4.1 and equation (37). Crucially, the majority of eigenvalues are included in [a, b], which is independent of τ. Right: condition number of A_n and CG iterations to convergence. In yellow, the theoretical bound is shown, which depends on a, b, and the relative error ϵ.

Figure 4: Left: comparison between the eigenvalues λ_j(A_n) and the samples of g over a uniform grid on D. Right: comparison between the eigenvalues λ_j(A_n) and the samples of f_square over a uniform grid on [0, π]^2. In both cases (N, p) = (128, 1) and τ = 1.
Here, the conjugate transpose of the matrix X is denoted by X*. Theorem 4.2 [27, Theorem 4.3] Let {A_n}_n be a sequence of matrices, with A_n Hermitian of size d_n, and let {P_n}_n be a sequence such that P_n ∈ C^{d_n × δ_n}, P*_n P_n = I_{δ_n}, δ_n ≤ d_n, and δ_n/d_n → 1 as n → ∞. Then {A_n}_n ∼_λ f implies {P*_n A_n P_n}_n ∼_λ f.

Table 2: Number of PCG iterations to convergence, scenario (i), using a single AMG iteration as preconditioner.

Table 3: Runtimes (s) and iterations to convergence (in square brackets) for various solution strategies, for τ = 0.01 and scenario (i) with (N, p) = (512, 1), on a single core. For completeness, finite element assembly timing is also reported.

Table 4: Runtimes (s) and iterations to convergence (in square brackets) for various parallel solution strategies, for τ = 0.01 and scenario (ii), cf. the astrocyte geometry in Figure 2, and p = 2. For completeness, finite element assembly timings are also reported.