1 Introduction

In this work we consider the dynamical low-rank approximation of the Vlasov–Poisson equation in \(d \le 3\) spatial dimensions:

$$\begin{aligned} \partial _t f + \varvec{v} \cdot \nabla _{\varvec{x}} f - \varvec{E}(t, \varvec{x}) \cdot \nabla _{\varvec{v}} f = 0 \quad \text { in } \Omega = \Omega _x \times \Omega _v, \quad t \in (0,T), \end{aligned}$$
(1)

with a bounded domain \(\Omega _x \subset {\mathbb {R}}^d\), and \(\Omega _v = {\mathbb {R}}^d\). This equation models the time evolution of an electron density \(f = f(t, \varvec{x}, \varvec{v})\) of a collisionless plasma as a function of space and velocity in the presence of an electric field \(\varvec{E}\). We assume the initial condition

$$\begin{aligned} f(0,\varvec{x}, \varvec{v}) = f_0(\varvec{x}, \varvec{v}) \end{aligned}$$

and inflow boundary conditions of the form

$$\begin{aligned} f(t,\cdot ,\cdot ) = g(t,\cdot ,\cdot ), \quad \text { on } \Gamma ^- = \{(\varvec{x},\varvec{v}) \in \partial \Omega _x \times \Omega _v \,|\, \varvec{v} \cdot \varvec{n}_x < 0 \}. \end{aligned}$$
(2)

Here \(\varvec{n}_x\) denotes the outward normal vector of the spatial domain \(\Omega _x\). The electric field \(\varvec{E}\) can either be fixed or depend on the density f via a Poisson equation:

$$\begin{aligned} \varvec{E}(t,\varvec{x}) = - \nabla _{\varvec{x}} \Phi (t,\varvec{x}), \quad -\Delta \Phi = \rho , \quad \rho (t, \varvec{x}) = \rho _b -\int _{ \Omega _v} f(t,\varvec{x}, \varvec{v}) \, {\textrm{d}} \varvec{v}, \end{aligned}$$
(3)

supplemented with appropriate boundary conditions. Here \(\rho _b\) is a background charge.

Simulations of such systems are computationally demanding, since the time evolution of a 2d-dimensional, i.e. up to six-dimensional, function has to be calculated. Applying standard discretization schemes thus leads to an evolution equation in \({\mathcal {O}}(n^{2d})\) degrees of freedom, where n is the number of grid points per dimension, and hence to a correspondingly high computational effort. To tackle the problem, methods such as particle methods [24], adaptive multiscale methods [5], and sparse grids [17] have been used.

In the seminal paper [7], dynamical low-rank approximation (DLRA) was proposed for solving (1). DLRA is a general concept for approximating the evolution of time-dependent multivariate functions using a low-rank model, typically based on separation of variables. It originates in classical areas of mathematical physics, such as molecular dynamics [21], and has been proposed in the works [18, 19] as a general numerical tool for the time-integration of ODEs on fixed-rank matrix or tensor manifolds. An overview from the perspective of numerical analysis and further references can be found in [23]. The approach can also be made rigorous for low-rank functions in \(L_2\)-spaces thanks to their tensor product Hilbert space structure; see, e.g., [2].

For the simulation of (1) using DLRA, a rather natural separation of space and velocity variables is applied. After discretizing the problem in a corresponding tensor product discretization space, one then seeks an approximate solution curve for equation (1) of the form

$$\begin{aligned} f_r(t,\varvec{x}, \varvec{v}) = \sum _{i=1}^r\sum _{j=1}^r X_i(t,\varvec{x}) S_{ij}(t) V_j(t,\varvec{v}), \end{aligned}$$
(4)

that is, \(f_r(t,\cdot ,\cdot )\) is a rank-r function for every time point t. For the numerical time-integration of the low-rank factors in this representation, [7] adapted the so-called projector-splitting approach from [22], one of the workhorse algorithms for DLRA, to the case of the transport Eq. (1).
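In a discrete setting, the representation (4) is nothing but a matrix factorization: sampling \(f_r\) on tensorized space and velocity grids gives \(F = X S V^T\) with orthonormal factor matrices. The following minimal numpy sketch (grid sizes, rank, and the coefficient matrix are our own illustrative choices) shows that such a factorization has rank r by construction.

```python
import numpy as np

# Discrete analogue of the rank-r representation (4): on n_x spatial and
# n_v velocity grid points, f_r corresponds to the matrix F = X @ S @ V.T
# with orthonormal factors X (n_x x r) and V (n_v x r).
rng = np.random.default_rng(0)
n_x, n_v, r = 64, 80, 5

X, _ = np.linalg.qr(rng.standard_normal((n_x, r)))  # orthonormal spatial basis
V, _ = np.linalg.qr(rng.standard_normal((n_v, r)))  # orthonormal velocity basis
S = np.diag(2.0 ** -np.arange(r))                   # nonsingular coupling S_ij

F = X @ S @ V.T
print(np.linalg.matrix_rank(F))  # -> 5: rank r by construction
```

Storing the three factors requires \({\mathcal {O}}((n_x + n_v) r + r^2)\) numbers instead of \(n_x n_v\), which is the source of the computational savings.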

In the present work, we wish to study in more detail how to incorporate the inflow boundary condition (2) into such a scheme. In [7] this question was somewhat circumvented by considering periodic boundary conditions, which, however, are not always applicable in real-world problems. Enforcing boundary conditions directly on low-rank representations like (4) appears to be rather impractical, or at least poses some difficulties. For the nonlinear Boltzmann equation such an approach has been considered in [16]. Here, we will instead start out from a weak formulation of the transport problem (1) including a weak formulation of the boundary condition according to [11, Sec. 76.3]. We then use the projector splitting approach for the time-stepping in this weak formulation, where the problem is iteratively reduced to subspaces belonging to variations of space or velocity variables only. We note that in [15] boundary conditions for a two-dimensional product domain (corresponding to \(d=1\) in our notation) were included in this way through discretization by a discontinuous Galerkin method.

It is important to note already here that, in order to still benefit from the separation of variables, our approach assumes the boundary function g to be a finite sum of tensor products, or at least approximable by such; see Eq. (7) below. Moreover, we require the spatial domain to have a piecewise linear boundary so that the outer normal vectors are piecewise constant; see Eq. (6).

As will be derived in Sect. 2, the resulting effective equations for the low-rank factors in our weak formulation of the projector splitting scheme take the form of Friedrichs’ systems, that is, systems of hyperbolic equations in weak formulation, which respect the boundary conditions without violating the tensor product structure. These systems can hence be solved by established PDE solvers. We remark that our derivation of these equations remains more or less formal, and the existence of weak solutions (in the continuous setting) will not be studied. Let us also mention the work [20], which is related to ours insofar as it shows that for hyperbolic problems a continuous formulation of the projector splitting integrator prior to discretization has favorable numerical stability properties.

We then proceed by deriving corresponding discrete equations that allow the system to be solved numerically. Specifically, in our implementation we will use stabilized finite elements [11] for discretization. We then apply the method to solve the weak Landau damping on a periodic domain (in order to verify the discretization) as well as a linear equation involving a piecewise linear boundary. Here, in addition to the standard projector-splitting approach we will also consider a weak formulation of the unconventional low-rank integrator proposed in [3]. It modifies the update strategy for the low-rank factors and, based on subspace augmentation, allows the whole scheme to be made rank-adaptive [4], which is important in practice. Similar strategies are considered in [13] and [15].

Several recent works have focused on the conservation properties of dynamical low-rank integrators [6, 8,9,10, 12, 14], such as mass, momentum and energy, based on modified Galerkin conditions. This important aspect is not yet addressed in the present paper and remains for future work.

The paper is organized as follows. In Sect. 2 we detail our idea of a weak formulation of the projector-splitting integrator for the Vlasov–Poisson equation which is capable of handling the inflow boundary conditions. From this, in Sect. 3 we obtain discrete equations by restricting to finite-dimensional spaces according to the Galerkin principle, resulting in Algorithm 1. The rank-adaptive unconventional integrator is also considered (Algorithm 2). In Sect. 4 results of numerical experiments are presented to illustrate the feasibility in principle of our algorithms.

2 Weak formulation of the projector splitting integrator

Our goal is to develop a projector splitting scheme for DLRA of the transport problem (1) that is capable of handling the inflow boundary conditions (2). To achieve this, we start out from a weak formulation of (1) including a weak formulation of the boundary condition according to [11, Sec. 76.3]. Assume for now that the electric field is fixed. Let W be an appropriate closed subspace of \(L_2(\Omega )\) (e.g. \(W \subseteq H^1(\Omega )\) is sufficient) and denote by \(P_W\) the \(L_2\)-orthogonal projector onto W. Then the goal in the weak formulation is to find \(f \in C^1([0,T],W)\) with \(f(0) = P_{W} f_0\) such that at every \(t \in (0,T)\) it holds that

$$\begin{aligned} \int _\Omega \partial _t f(t) w \, {\textrm{d}} \varvec{x} {\textrm{d}} \varvec{v} + a(t, f(t),w) = \ell (w) \text { for all } w \in W, \end{aligned}$$
(5)

where

$$\begin{aligned} a(t,u,w)&= \int _\Omega \varvec{v} \cdot \nabla _{\varvec{x}} u \, w - \varvec{E} (t,\varvec{x}) \cdot \nabla _{\varvec{v}} u \, w \, {\textrm{d}}\varvec{x} {\textrm{d}} \varvec{v} - \int _{\Gamma ^-} \varvec{v} \cdot \varvec{n}_x \, u \, w \, {\textrm{d}} s, \\ \ell (w)&= -\int _{\Gamma ^-} \varvec{v} \cdot \varvec{n}_x \, g \, w \, {\textrm{d}} s. \end{aligned}$$

Here we have taken into account that \(\Omega _v = {\mathbb {R}}^d\) is unbounded and hence the normal vectors for \(\Omega \) read \(\varvec{n} = (\varvec{n}_x, \varvec{0})\). While our derivations are based on the above formulation with fixed electric field \(\varvec{E}\), the case where \(\varvec{E}\) depends on f will be discussed in Sect. 2.5. The variational formulation (5) has the advantage that the boundary conditions are integrated into the bilinear form and the right-hand side. It can therefore easily be combined with the so-called time-dependent variational principle, also called Dirac–Frenkel principle, underlying DLRA.

In order to benefit from the separation of variables in DLRA, it will be necessary to impose some restrictions on the boundary and the function g in the inflow condition (2). Specifically, we assume that the spatial domain has a piecewise linear boundary, so that one has a decomposition

$$\begin{aligned} \Gamma _x := \partial \Omega _x = \bigcup _\nu \Gamma ^{(\nu )}_x\, , \end{aligned}$$
(6)

where each part \(\Gamma ^{(\nu )}_x\) has a constant outer normal vector \(\varvec{n}^{(\nu )}_x\). In addition, we require that g itself admits a separation into space and velocity variables by a finite sum of tensor products,

$$\begin{aligned} g(t,\varvec{x}, \varvec{v}) = \sum _\mu g_x^{(\mu )}(t, \varvec{x}) \cdot g_v^{(\mu )}(t, \varvec{v}), \end{aligned}$$
(7)

or can at least be approximated in this form.

In the following sections we develop the projector splitting approach of DLRA in the weak formulation and derive effective equations for the low-rank factors in (4) in the form of Friedrichs’ systems (systems of hyperbolic equations in weak formulation) that are amenable to established PDE solvers. In this way the boundary conditions are incorporated without sacrificing the tensor product structure.

2.1 DLRA and projector splitting

We first present the basic idea of the projector splitting approach for the dynamical low-rank solution of the weak formulation (5). As required for DLRA, we consider a possibly infinite-dimensional tensor product subspace

$$\begin{aligned} W = W_x \otimes W_v \subseteq L_2(\Omega _x) \otimes L_2(\Omega _v) = L_2(\Omega ) \end{aligned}$$

where we assume that \(W_x \subseteq L_2(\Omega _x)\) and \(W_v \subseteq L_2(\Omega _v)\) are suitable subspaces to ensure that (5) is well-defined on W. For example, \(W_x\) and \(W_v\) could be subspaces of \(H^1\), which would correspond to a mixed regularity with respect to space and velocity.

The corresponding manifold \({\mathcal {M}}_r\) of low-rank functions in W is

$$\begin{aligned} {\mathcal {M}}_{r} = \Big \{ \varphi (\varvec{x}, \varvec{v}) = \sum _{i=1}^r \sum _{j=1}^r X_i(\varvec{x}) S_{ij}V_j(\varvec{v}) \, \Big \vert \, X_i \in W_x, \, V_j \in W_v, \, S_{ij} \in {\mathbb {R}} \Big \}, \end{aligned}$$
(8)

with the additional conditions that \(X_1,\dots ,X_r\) and \(V_1,\dots ,V_r\) are orthonormal systems in \(W_x\) and \(W_v\), respectively, and the matrix \(S = [S_{ij}]\) has rank r. Following the Dirac–Frenkel variational principle, a dynamical low-rank approximation for the weak formulation (5) asks for a function \(f_r \in C^1([0,T],W)\) such that \(f_r(t) \in {\mathcal {M}}_r\) and

$$\begin{aligned} \int _\Omega \partial _t f_r(t) w \, {\textrm{d}} \varvec{x} {\textrm{d}} \varvec{v} + a(t, f_r(t),w) = \ell (w) \text {\quad for all } w \in T_{f_r(t)} {\mathcal {M}}_r, \end{aligned}$$
(9)

for all \(t \in (0,T)\), where \(T_{f_r(t)} {\mathcal {M}}_r\) is the tangent space of \({\mathcal {M}}_r\) at \(f_r(t)\) specified below. For the initial value one may take \(f_r(0) = {{\tilde{f}}}_0\), where \({{\tilde{f}}}_0\) is a rank-r approximation of \(f_0\).

The DLRA formulation (9) can be interpreted as a projection of the time derivative of the solution onto the tangent space, which implicitly restricts it to the manifold. The projector splitting integrator from [22] is based on the rather peculiar fact that the tangent spaces of \({\mathcal {M}}_r\) can be written as the sum of two smaller subspaces corresponding to variations in the X and V components only. Specifically, let

$$\begin{aligned} f_r(t,\varvec{x}, \varvec{v}) = \sum _{i=1}^r \sum _{j=1}^r X_i(t,\varvec{x}) S_{ij}(t) V_j(t,\varvec{v}) \end{aligned}$$

be in \({\mathcal {M}}_r\) at time t, then it holds that

$$\begin{aligned} T_{f_r(t)} {\mathcal {M}}_r = T_{V(t)} + T_{X(t)}, \end{aligned}$$

where

$$\begin{aligned} T_{V(t)} = \Big \{ \sum _{i=1}^r K_i(\varvec{x}) V_i(t,\varvec{v})\, \Big \vert \, K_i \in W_x \Big \}, \quad T_{X(t)} = \Big \{ \sum _{i=1}^r X_i(t,\varvec{x}) L_i(\varvec{v})\, \Big \vert \, L_i \in W_v\Big \}.\end{aligned}$$

Note that these two subspaces are not complementary as they intersect in the space

$$\begin{aligned} T_{X(t),V(t)} = T_{X(t)} \cap T_{V(t)} = \Big \{ \sum _{i=1}^r \sum _{j=1}^r X_i(t,\varvec{x}) {{\tilde{S}}}_{ij} V_j(t,\varvec{v}) \,\Big |\, {{\tilde{S}}}_{ij} \in {\mathbb {R}} \Big \}. \end{aligned}$$

Correspondingly, the \(L_2\)-orthogonal projection onto the full tangent space \(T_{f_r(t)} {\mathcal {M}}_r\) can be decomposed as

$$\begin{aligned} P_{f_r(t)} = P_{V(t)} - P_{X(t),V(t)} + P_{X(t)}, \end{aligned}$$
(10)

where \(P_{V(t)}\), \(P_{X(t),V(t)}\), and \(P_{X(t)}\) denote the \(L_2\)-orthogonal projections onto the subspaces \(T_{V(t)}\), \(T_{X(t),V(t)}\), and \(T_{X(t)}\), respectively.
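In the matrix setting, the three projections have the well-known explicit forms \(P_V(Z) = ZVV^T\), \(P_X(Z) = UU^T Z\) and \(P_{X,V}(Z) = UU^T Z VV^T\) for orthonormal factor matrices U, V. The following numpy sketch (dimensions and bases are our own illustrative choices) checks numerically that the combination (10) is indeed a projection and leaves both subspaces invariant.

```python
import numpy as np

# Matrix analogue of the tangent-space projector splitting (10):
# P(Z) = P_V(Z) - P_{X,V}(Z) + P_X(Z).
rng = np.random.default_rng(1)
n, m, r = 30, 40, 4
U, _ = np.linalg.qr(rng.standard_normal((n, r)))   # orthonormal "X" basis
V, _ = np.linalg.qr(rng.standard_normal((m, r)))   # orthonormal "V" basis

def P(Z):
    return Z @ V @ V.T - U @ U.T @ Z @ V @ V.T + U @ U.T @ Z

Z = rng.standard_normal((n, m))
# P is a projection: applying it twice changes nothing.
assert np.allclose(P(P(Z)), P(Z))
# Elements of T_V (of the form K V^T) and T_X (of the form U L^T) are fixed by P.
K = rng.standard_normal((n, r)); L = rng.standard_normal((m, r))
assert np.allclose(P(K @ V.T), K @ V.T)
assert np.allclose(P(U @ L.T), U @ L.T)
```

The minus sign on the middle term compensates for the fact that \(T_{V}\) and \(T_{X}\) overlap in \(T_{X,V}\), so that their contributions are not counted twice.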

The projector-splitting integrator performs the time integration of the system according to the decomposition (10) of the tangent space projector. Sticking to the weak formulation (9), one time step from some point \(t_0\) with

$$\begin{aligned} f_r(t_0) = \sum _{i=1}^r \sum _{j=1}^r X_i^{0}(\varvec{x}) S_{ij}^{0} V_j^{0}(\varvec{v}) \in {\mathcal {M}}_r \end{aligned}$$
(11)

to a point \(t_1=t_0 + \Delta t\) then consists of the following three steps:

  1.

    Solve the system (9) restricted to the subspace \(T_{V^0}\) on the time interval \([t_0,t_1]\) with initial condition \(f_r(t_0)\). At time \(t_1\) one obtains

    $$\begin{aligned} {{\hat{f}}}(\varvec{x}, \varvec{v}) = \sum _{j=1}^r K_j(\varvec{x}) V_j^0(\varvec{v}) \in T_{V^0}.\end{aligned}$$

    Find an orthonormal system \(X_1^1, \ldots , X_r^1\) for \(K_1, \ldots , K_r\) such that

    $$\begin{aligned} K_j = \sum _{i=1}^r X_i^1 {{\hat{S}}}_{ij}.\end{aligned}$$
  2.

    Solve the system (9) restricted to the subspace \(T_{X^1,V^0}\) on the time interval \([t_0,t_1]\) with initial condition \({{\hat{f}}}\), and taking into account the minus sign in the projector. At time \(t_1\) one obtains

    $$\begin{aligned} {{\tilde{f}}}(\varvec{x}, \varvec{v}) = \sum _{i=1}^r\sum _{j=1}^r X_i^1(\varvec{x}) {{\tilde{S}}}_{ij} V_j^0(\varvec{v}) \in T_{X^1,V^0}. \end{aligned}$$

    This step is often interpreted as a backward-in-time integration step, which is possible if there is no explicit time dependence in the coefficients or boundary conditions. In our case, however, the inflow g is in general time-dependent, which precludes such an interpretation.

  3.

    Solve the system (9) restricted to the subspace \(T_{X^1}\) on the time interval \([t_0,t_1]\) with initial condition \({{\tilde{f}}}\). At time \(t_1\) one obtains

    $$\begin{aligned} {{\bar{f}}}(\varvec{x}, \varvec{v}) = \sum _{i=1}^r X_i^1(\varvec{x}) L_i(\varvec{v}) \in T_{X^1}. \end{aligned}$$

    Find an orthonormal system \(V_1^1, \ldots , V_r^1\) for \(L_1, \ldots , L_r\) such that

    $$\begin{aligned} L_i = \sum _{j=1}^r S_{ij}^1 V_j^1.\end{aligned}$$

As the final solution at time point \(t_1\) one then takes

$$\begin{aligned} f_r(t_1, \varvec{x}, \varvec{v}) \approx {{\bar{f}}}(\varvec{x}, \varvec{v}) = \sum _{i=1}^r \sum _{j=1}^r X_i^1(\varvec{x}) S_{ij}^1 V_j^1(\varvec{v}). \end{aligned}$$
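In the matrix setting, one full step of this K–S–L splitting for an evolution \(\dot{A} = F(A)\) can be sketched as follows. The numpy code below uses a hypothetical linear right-hand side and a single explicit Euler substep per stage as a toy time discretization; it is not the stabilized finite element scheme used later in the paper, only an illustration of the three stages and the sign flip in the S-step.

```python
import numpy as np

# One Lie-Trotter step of the matrix projector-splitting (KSL) integrator
# of [22] for dA/dt = F(A), with Euler substeps (illustrative only).
rng = np.random.default_rng(2)
n, m, r, dt = 20, 25, 3, 1e-3
W = rng.standard_normal((n, n))
F = lambda A: W @ A                      # hypothetical linear right-hand side

U0, _ = np.linalg.qr(rng.standard_normal((n, r)))
V0, _ = np.linalg.qr(rng.standard_normal((m, r)))
S0 = np.diag([1.0, 0.5, 0.25])

# K-step: evolve K = U S with V0 frozen, then re-orthonormalize (QR).
K = U0 @ S0
K = K + dt * F(K @ V0.T) @ V0
U1, S_hat = np.linalg.qr(K)

# S-step: evolve the coupling matrix with the flipped sign from (10).
S_tld = S_hat - dt * U1.T @ F(U1 @ S_hat @ V0.T) @ V0

# L-step: evolve L = V S^T with U1 frozen, then re-orthonormalize.
L = V0 @ S_tld.T
L = L + dt * F(U1 @ L.T).T @ U1
V1, S1T = np.linalg.qr(L)
S1 = S1T.T

A1 = U1 @ S1 @ V1.T                      # rank-r approximation at t0 + dt
```

Note that the QR factorizations play the role of "finding an orthonormal system" for the \(K_j\) and \(L_i\) in steps 1 and 3 above.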

A modification of this scheme is the so-called unconventional integrator proposed in [3]. There, the K-step and L-step are performed independently using the same initial data from \(f_r(t_0)\) (i.e. the K-step is identical with the one above, but the L-step is performed on \(T_{X^0}\) with initial value \(f_r(t_0)\)). As a result, one obtains two new orthonormal sets of component functions \(X_1^1,\dots ,X_r^1\) and \(V_1^1,\dots ,V_r^1\) for the spatial and velocity domains. The S-step is then performed afterwards in a “forward” way (i.e. without flipping signs) using these new bases. Compared to the standard scheme, this decoupling of the S-step from the K- and L-steps also offers a somewhat more natural way of making the scheme rank-adaptive, which is important in practice since a good guess for r might not be known. Specifically, in [4] it is proposed to first augment the new bases to \({{\hat{X}}} = \{X^0_1,\dots ,X^0_r,K^1_1,\dots ,K^1_r \}\) and \({{\hat{V}}} = \{V^0_1,\dots ,V^0_r,L^1_1,\dots ,L^1_r \}\) (i.e. including the old bases), orthonormalize these augmented bases, and perform the S-step in the space

$$\begin{aligned} T_{{{\hat{X}}}, {{\hat{V}}}} = {{\,\textrm{span}\,}}{{\hat{X}}} \otimes {{\,\textrm{span}\,}}{{\hat{V}}}. \end{aligned}$$

Note that this space contains functions of rank 2r (in general) and still contains \(f_r(t_0)\), which one takes as initial condition for the S-step. The solution at time point \(t_1\) can then be truncated back to rank r, or to any other rank (less than or equal to 2r) determined by a chosen truncation threshold, using the singular value decomposition.
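The augmentation and truncation strategy of [4] can be sketched in the matrix setting as follows (again with a hypothetical linear right-hand side and Euler substeps of our own choosing; the names are not from the paper):

```python
import numpy as np

# Sketch of the rank-adaptive unconventional (augmented BUG) step [4]
# in the matrix setting, for dA/dt = F(A).
rng = np.random.default_rng(3)
n, m, r, dt, tol = 20, 25, 3, 1e-3, 1e-8
W = rng.standard_normal((n, n))
F = lambda A: W @ A

U0, _ = np.linalg.qr(rng.standard_normal((n, r)))
V0, _ = np.linalg.qr(rng.standard_normal((m, r)))
S0 = np.diag([1.0, 0.5, 0.25])
A0 = U0 @ S0 @ V0.T

# Independent K- and L-steps, both starting from f_r(t0).
K1 = U0 @ S0 + dt * F(A0) @ V0
L1 = V0 @ S0.T + dt * F(A0).T @ U0

# Augment with the old bases and orthonormalize -> rank-2r subspaces.
U_hat, _ = np.linalg.qr(np.hstack([U0, K1]))
V_hat, _ = np.linalg.qr(np.hstack([V0, L1]))

# "Forward" S-step in span(U_hat) x span(V_hat), initialized with f_r(t0).
S_hat = U_hat.T @ A0 @ V_hat
S_new = S_hat + dt * U_hat.T @ F(U_hat @ S_hat @ V_hat.T) @ V_hat

# Truncate back via SVD according to a singular-value threshold.
P, sig, Qt = np.linalg.svd(S_new)
r_new = max(1, int(np.sum(sig > tol)))
U = U_hat @ P[:, :r_new]
V = V_hat @ Qt[:r_new].T
S = np.diag(sig[:r_new])
```

Since the augmented subspaces contain the old bases, \(f_r(t_0)\) is represented exactly in them, which is what makes the forward S-step well defined.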

Regardless of whether the projector splitting or the modified unconventional integrator is chosen, the efficient realization of the above steps is based on their formulation in terms of the component functions K, S and L only, which in turn requires a certain separability of space and velocity variables in the system (9). We therefore next investigate the individual steps in detail, focusing only on the projector splitting integrator and omitting the required modifications for the (rank-adaptive) unconventional integrator (these will be discussed in the discrete setting in Sect. 3.3). A particular focus is on how to ensure the required separability for the boundary value terms, which leads to the assumptions (6) and (7) that the boundary is piecewise linear and that g is separable. As we will see, this gives effective equations in the form of Friedrichs’ systems for the factors K and L, and a matrix ODE for S. Their discrete counterparts as well as the algorithmic schemes for both the projector splitting integrator and the unconventional integrator will be presented in Sect. 3.

2.2 Weak formulation of the K-step

The time dependent solution of the first step, i.e. (9) restricted to \(T_{V^0}\), is a function

$$\begin{aligned} {{\hat{f}}}(t,\varvec{x}, \varvec{v}) = \sum _{j=1}^r K_j(t,\varvec{x}) \cdot V_j^0(\varvec{v}) \end{aligned}$$

on the time interval \([t_0,t_1]\) with initial condition \({{\hat{f}}}(t_0,\varvec{x},\varvec{v}) = f_r(t_0,\varvec{x},\varvec{v})\) from (11). Here the \(V_j^0\) are fixed and the functions \(K_1,\dots ,K_r\) need to be determined. Their dynamics are governed by a system of first-order partial differential equations which we derive in the following.

The test functions \(w \in T_{V^0}\) have the form

$$\begin{aligned} w(\varvec{x}, \varvec{v}) = \sum _{j=1}^r \psi _j^{(x)}(\varvec{x})\cdot V_j^0(\varvec{v}), \qquad \psi _j^{(x)} \in W_x, \end{aligned}$$

for all \(j=1,\ldots , r\). Testing with w in (9) specifically means

$$\begin{aligned}{} & {} \big (\partial _t {{\hat{f}}}(t)+{\varvec{v}} \cdot \nabla _{\varvec{x}} {{\hat{f}}}(t) -{\varvec{E}}(t,{\varvec{x}}) \cdot \nabla _{\varvec{v}} {{\hat{f}}}(t), w \big )_{L_2(\Omega , {\mathbb {R}})} \nonumber \\{} & {} \qquad \qquad -\big ({\varvec{n}}_x \cdot {\varvec{v}} \, {{\hat{f}}}(t), w \big )_{L_2(\Gamma ^-, {\mathbb {R}})} = -\big ({\varvec{n}}_x \cdot {\varvec{v}} \, g, w \big )_{L_2(\Gamma ^-, {\mathbb {R}})}. \end{aligned}$$
(12)

We will write this equation in vector form and therefore define

$$\begin{aligned} \varvec{K} = \big [ K_j \big ]_{j=1,\ldots ,r}, \qquad \varvec{\psi }^{(x)} = \big [ \psi _j^{(x)} \big ]_{j=1,\ldots ,r}, \end{aligned}$$

which can be regarded as vector-valued functions in \((W_x)^r\).

We investigate the terms in (12) separately. The term regarding the inner product in \(L_2(\Omega , {\mathbb {R}})\) has three parts. The first involves the time derivative:

$$\begin{aligned} \big (\partial _t {{\hat{f}}}(t) , w\big )_{L_2(\Omega , {\mathbb {R}})}&= \sum _{i=1}^r \sum _{j=1}^r \int _{\Omega _x} \!\! \partial _t K_i(t,\varvec{x}) \cdot \psi _j^{(x)}(\varvec{x}) \, {\textrm{d}} \varvec{x} \cdot \int _{\Omega _v} \! V_i^0(\varvec{v}) \cdot V_j^0(\varvec{v}) \, {\textrm{d}} \varvec{v} \\&= \sum _{j=1}^r \int _{\Omega _x} \!\! \partial _t K_j(t,\varvec{x}) \cdot \psi _j^{(x)}(\varvec{x}) \, {\textrm{d}} \varvec{x} \\&= \big ( \partial _t \varvec{K}(t), \varvec{\psi }^{(x)}\big )_{L_2(\Omega _x, {\mathbb {R}}^r)}, \end{aligned}$$

where we have used the pairwise orthonormality of \(V_1^0, \dots , V_r^0\). The second part reads

$$\begin{aligned} \int _{\Omega } \varvec{v} \cdot \nabla _{\varvec{x}}&{{\hat{f}}}(t,\varvec{x}, \varvec{v}) \cdot w(\varvec{x}, \varvec{v}) \, {\textrm{d}}(\varvec{x}, \varvec{v}) \\ {}&= \sum _{k=1}^d \sum _{i=1}^r \sum _{j=1}^r \int _{\Omega _x} \!\! \partial _{x_k} K_i(t,\varvec{x}) \cdot \psi _j^{(x)}(\varvec{x}) \, {\textrm{d}} \varvec{x} \cdot \! \int _{\Omega _v} \!\! v_k \cdot V_i^0(\varvec{v}) \cdot V_j^0(\varvec{v}) \, {\textrm{d}} \varvec{v} \\&= \sum _{k=1}^d \big ( {\mathcal {A}}^k_x \cdot \partial _{x_k} \varvec{K}(t), \varvec{\psi }^{(x)}\big )_{L_2(\Omega _x, {\mathbb {R}}^r)} \end{aligned}$$

with the symmetric \(r \times r\) matrix

$$\begin{aligned} {\mathcal {A}}^k_x = \Big [ \int _{\Omega _v} \! v_k \cdot V_j^0(\varvec{v}) \cdot V_i^0(\varvec{v}) \, {\textrm{d}} \varvec{v} \Big ]_{i,j=1,\ldots ,r}. \end{aligned}$$

Finally, the part involving the electric field can be written as

$$\begin{aligned} \int _{\Omega }&\varvec{E}(t,\varvec{x}) \cdot \nabla _{\varvec{v}} {{\hat{f}}}(t,\varvec{x}, \varvec{v}) \cdot w(\varvec{x}, \varvec{v}) \, {\textrm{d}}(\varvec{x}, \varvec{v}) \\&= \sum _{i=1}^r \sum _{j=1}^r \int _{\Omega _x} \sum _{k=1}^d \Big [ E_k(t,\varvec{x}) \cdot \int _{\Omega _v} \! \partial _{v_k} V_i^0(\varvec{v}) \cdot V_j^0(\varvec{v}) \, {\textrm{d}} \varvec{v}\Big ] \, K_i(t,\varvec{x}) \cdot \psi _j^{(x)}(\varvec{x}) \, {\textrm{d}} \varvec{x} \\&= \big ( {\mathcal {K}}_x(t, \cdot ) \cdot \varvec{K}(t), \varvec{\psi }^{(x)}\big )_{L_2(\Omega _x, {\mathbb {R}}^r)} \end{aligned}$$

where

$$\begin{aligned} {\mathcal {K}}_x(t,\varvec{x}) = \Big [\sum _{k=1}^d E_k(t,\varvec{x})\, \int _{\Omega _v} \! \partial _{v_k} V_j^0(\varvec{v}) \cdot V_i^0(\varvec{v}) \, {\textrm{d}} \varvec{v} \Big ]_{i,j=1,\ldots ,r}\, . \end{aligned}$$
(13)

For the terms in (12) involving the inflow boundary \(\Gamma ^-\) we make use of our assumption that the spatial boundary \(\partial \Omega _x\) can be decomposed as in (6) with a constant normal vector \(\varvec{n}_x^{(\nu )}\) on each part \(\Gamma _x^{(\nu )}\) of the boundary. We decompose \(\Gamma ^-\) accordingly into

$$\begin{aligned}\Gamma ^- = \bigcup _\nu \Gamma ^{(\nu )}_x \times \Omega ^{(\nu )}_v, \qquad \Omega _v^{(\nu )} = \{\varvec{v} \in \Omega _v \,|\, \varvec{n}^{(\nu )}_x \cdot \varvec{v} < 0\} .\end{aligned}$$

Then the boundary term from the bilinear form can be written as

$$\begin{aligned} \int _{\Gamma ^-} \! \varvec{n}_x \cdot&\varvec{v} \, {{\hat{f}}}(t, \varvec{x}, \varvec{v}) \cdot w(\varvec{x}, \varvec{v}) \, \mathrm ds \\&= \sum _{i=1}^r\sum _{j=1}^r \sum _{\nu } \int _{\Gamma ^{(\nu )}_x} \Big [ \int _{\Omega ^{(\nu )}_v} \! \varvec{n}^{(\nu )}_x \cdot \varvec{v} \, V_i^0(\varvec{v}) \, V_j^0(\varvec{v}) \, {\textrm{d}} \varvec{v} \Big ] \, K_i(t, \varvec{x}) \cdot \psi _j^{(x)}(\varvec{x}) \, {\textrm{d}} s \\&= \big ({\mathcal {B}}_x(\cdot ) \, \varvec{K}(t), \varvec{\psi }^{(x)}\big )_{L_2(\Gamma _x, {\mathbb {R}}^r)} \end{aligned}$$

where

$$\begin{aligned} {\mathcal {B}}_x(\varvec{x}) = \Big [ \sum _\nu \chi _{\Gamma _x^{(\nu )}}(\varvec{x}) \int _{\Omega ^{(\nu )}_v} \varvec{n}^{(\nu )}_x \cdot \varvec{v} \, V_j^0(\varvec{v}) \, V_i^0(\varvec{v}) \, {\textrm{d}} \varvec{v} \Big ]_{i,j=1,\ldots ,r} \end{aligned}$$

where \(\chi \) denotes the characteristic function.

For the boundary term on the right hand side we recall the decomposition (7) of the function g into a sum of tensor products. Proceeding as above we obtain

$$\begin{aligned} \int _{\Gamma ^-} \!\! {\varvec{n}}_x \cdot&{\varvec{v}} \, g(t, {\varvec{x}}, {\varvec{v}}) \cdot w({\varvec{x}}, {\varvec{v}}) \, \mathrm ds \\&= \sum _{j=1}^r \sum _{\nu , \mu } \int _{\Gamma ^{(\nu )}_x} \Big [ \int _{\Omega ^{(\nu )}_v} \! {\varvec{n}}^{(\nu )}_x \cdot {\varvec{v}} \, g^{(\mu )}_v(t, {\varvec{v}}) \, V_j^0({\varvec{v}}) \, {\text {d}} {\varvec{v}} \Big ] \, g^{(\mu )}_x(t, {\varvec{x}}) \cdot \psi _j^{(x)}({\varvec{x}}) \, {\text {d}} s \\&= \big ({\varvec{G}}_x(t,\cdot ), {\varvec{\psi }}^{(x)}\big )_{L_2(\Gamma _x, \mathbb R^r)} \end{aligned}$$

with

$$\begin{aligned} \varvec{G}_x(t, \varvec{x}) = \Big [ \sum _{\mu ,\nu } \chi _{\Gamma _x^{(\nu )}}(\varvec{x})\, g^{(\mu )}_x(t, \varvec{x}) \int _{\Omega ^{(\nu )}_v} \! \varvec{n}^{(\nu )}_x \cdot \varvec{v} \, \, g^{(\mu )}_v(t, \varvec{v}) \, V_j^0(\varvec{v}) \, {\textrm{d}} \varvec{v} \Big ]_{j=1,\ldots ,r}\, . \end{aligned}$$

In summary, (12) is equivalent to the following system of first-order partial differential equations for \(\varvec{K}(t)\) in weak formulation with boundary penalty,

$$\begin{aligned}&\big ( \partial _t \varvec{K}(t) + \sum _{k=1}^d {\mathcal {A}}_x^k \cdot \partial _{x_k} \varvec{K}(t) - {\mathcal {K}}_x(t,\cdot ) \, \varvec{K}(t), \varvec{\psi }^{(x)}\big )_{L_2(\Omega _x, {\mathbb {R}}^r)} \\&\quad - \big ({\mathcal {B}}_x(\cdot ) \, \varvec{K}(t), \varvec{\psi }^{(x)}\big )_{L_2(\Gamma _x, {\mathbb {R}}^r)} = -\big (\varvec{G}_x(t,\cdot ), \varvec{\psi }^{(x)}\big )_{L_2(\Gamma _x, {\mathbb {R}}^r)} \quad \text {for all } \varvec{\psi }^{(x)} \in (W_x)^r, \end{aligned}$$
(14)

which needs to be solved for \(\varvec{K}(t_1)\) with the initial values

$$\begin{aligned} \varvec{K}(t_0) = \Big [ \sum _{i=1}^r X_i^0(\varvec{x}) S_{ij}^0 \Big ]_{ j=1,\dots ,r} \end{aligned}$$

from the previous approximation (11).
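The coefficient matrices appearing in (14) are small \(r \times r\) objects obtained by integrating over the velocity domain only. As an illustration, the following numpy sketch assembles a discrete analogue of \({\mathcal {A}}^k_x\) for \(d = 1\) on a truncated velocity grid with a simple rectangle rule; the grid, the truncation of \(\Omega _v = {\mathbb {R}}\), and the Gaussian-times-monomial basis are our own illustrative choices.

```python
import numpy as np

# Discrete analogue of A_x^k from the K-step: A[i, j] ~ int v V_j(v) V_i(v) dv,
# for d = 1 on a truncated velocity grid with a rectangle quadrature rule.
n_v, r = 400, 4
v = np.linspace(-6.0, 6.0, n_v)
dv = v[1] - v[0]

# A discrete orthonormal velocity basis: Gaussians times monomials, then QR,
# so the columns satisfy sum_k V_i(v_k) V_j(v_k) dv = delta_ij.
raw = np.exp(-v[:, None] ** 2 / 2) * v[:, None] ** np.arange(r)
Q, _ = np.linalg.qr(raw)
V = Q / np.sqrt(dv)                      # V.T @ V * dv == identity

A = (V * v[:, None]).T @ V * dv          # A[i, j] = sum_k v_k V_i(v_k) V_j(v_k) dv
assert np.allclose(A, A.T)               # A_x^k is symmetric, as stated above
```

The matrices \({\mathcal {K}}_x\), \({\mathcal {B}}_x\) and the vector \(\varvec{G}_x\) are assembled analogously, with the \(v_k\)-weight replaced by the respective integrands from (13) and the boundary terms.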

2.3 Formulation of the S-step

The solution of the second step of the splitting integrator is a time-dependent function

$$\begin{aligned} {{\tilde{f}}}(t, \varvec{x}, \varvec{v}) = \sum _{i=1}^r\sum _{j=1}^r X_i^1(\varvec{x}) S_{ij}(t) V_j^0(\varvec{v}) \end{aligned}$$

in the subspace \(T_{X^1,V^0}\), where the evolution of S on the interval \([t_0, t_1]\) is governed by (9) restricted to test functions \(w \in T_{X^1, V^0}\), but taking into account the minus sign of \(P_{X^1,V^0}\) in the corresponding projector splitting (10). The initial condition reads \({{\tilde{f}}}(t_0,\varvec{x}, \varvec{v}) = {{\hat{f}}}(t_1,\varvec{x}, \varvec{v}) = \sum _{i,j = 1}^r X_i^1(\varvec{x}) {{\hat{S}}}_{ij} V_j^0(\varvec{v})\). The test functions \(w \in T_{X^1,V^0}\) will be written as

$$\begin{aligned} w(\varvec{x}, \varvec{v}) = \sum _{m=1}^r \sum _{n=1}^r X_m^1(\varvec{x}) \Sigma _{mn}^{} V_n^0(\varvec{v}), \qquad \Sigma = [\Sigma _{mn}] \in {\mathbb {R}}^{r \times r}. \end{aligned}$$

For such test functions, we again consider the different contributions as in (12) (with \({{\tilde{f}}}\) instead of \({{\hat{f}}}\)). For the time derivative we get

$$\begin{aligned} \big (\partial _t {{\tilde{f}}}(t) , w\big )_{L_2(\Omega , {\mathbb {R}})}{} & {} = \sum _{ijmn} \dot{S}_{ij} \Sigma _{mn} \int _{\Omega _x} X_i^1(\varvec{x}) X_m^1(\varvec{x}) \, {\textrm{d}} \varvec{x} \cdot \int _{\Omega _v} V_j^0(\varvec{v}) V_n^0 (\varvec{v})\, {\textrm{d}} \varvec{v} \\{} & {} = \big (\dot{S}, \Sigma \big )_F, \end{aligned}$$

where \((\cdot ,\cdot )_F\) is the Frobenius inner product on \(\mathbb R^{r\times r}\). The first two contributions of the bilinear form read

$$\begin{aligned} \int _{\Omega } \varvec{v} \cdot&\nabla _{\varvec{x}} {{\tilde{f}}}(t,\varvec{x}, \varvec{v}) \cdot w(\varvec{x}, \varvec{v}) \, {\textrm{d}}(\varvec{x}, \varvec{v}) \\ {}&= \sum _{k=1}^d \sum _{ijmn} \underbrace{\int _{\Omega _x} \partial _{x_k} X_i^1(\varvec{x}) \cdot X_m^1(\varvec{x}) \, {\textrm{d}} \varvec{x}}_{{}:=[D^{(k,1)}]_{mi}} \cdot \underbrace{\int _{{\mathbb {R}}^d} v_k \cdot V_j^0(\varvec{v}) \cdot V_n^0(\varvec{v}) \, {\textrm{d}} \varvec{v}}_{{}:=[C^{(k,1)}]_{nj}} S_{ij} \Sigma _{mn}\\&= \Big ( \sum _{k=1}^d D^{(k,1)}S (C^{(k,1)})^T, \Sigma \Big )_F \end{aligned}$$

and

$$\begin{aligned} \int _{\Omega }&\varvec{E}(t,\varvec{x}) \cdot \nabla _{\varvec{v}} {{\tilde{f}}}(t,\varvec{x}, \varvec{v}) \cdot w(\varvec{x}, \varvec{v}) \, {\textrm{d}}(\varvec{x}, \varvec{v}) \\&= \sum _{k=1}^d \sum _{ijmn} \underbrace{\int _{\Omega _x} E_k(t,\varvec{x}) X_i^1(\varvec{x}) \cdot X_m^1(\varvec{x}) \, {\textrm{d}} \varvec{x}}_{{}:=[D^{(k,2)}(t)]_{mi}} \cdot \underbrace{\int _{{\mathbb {R}}^d} \partial _{v_k} V_j^0(\varvec{v}) \cdot V_n^0(\varvec{v}) \, {\textrm{d}} \varvec{v}}_{{}:=[C^{(k,2)}]_{nj}} S_{ij} \Sigma _{mn}\\&= \Big ( \sum _{k=1}^d D^{(k,2)}S (C^{(k,2)})^T, \Sigma \Big )_F. \end{aligned}$$

The terms stemming from the boundary condition take the form

$$\begin{aligned} \int _{\Gamma ^-} \! \varvec{n}_x&\cdot \varvec{v} \, {{\tilde{f}}}(t, \varvec{x}, \varvec{v}) \cdot w(\varvec{x}, \varvec{v}) \, \mathrm ds \\&= \sum _{\nu } \sum _{ijmn} \underbrace{\int _{\Gamma ^{(\nu )}_x} X_i^1(\varvec{x}) \cdot X_m^1(\varvec{x}) \, {\textrm{d}} s}_{{}:=[B^{(\nu ,x)}]_{mi}} \cdot \underbrace{\int _{\Omega ^{(\nu )}_v} \varvec{n}^{(\nu )}_x \cdot \varvec{v} \, V_j^0(\varvec{v}) \, V_n^0(\varvec{v}) \, {\textrm{d}} \varvec{v}}_{{}:=[B^{(\nu ,v)}]_{nj}} \, S_{ij} \Sigma _{mn} \\&= \sum _{\nu } \big ( B^{(\nu ,x)}S (B^{(\nu ,v)})^T, \Sigma \big )_F \, , \end{aligned}$$

and

$$\begin{aligned} \int _{\Gamma ^-} \!&\varvec{n}_x \cdot \varvec{v} \, g(t, \varvec{x}, \varvec{v}) \cdot w(\varvec{x}, \varvec{v}) \, \mathrm ds \\&= \sum _{\nu \mu } \sum _{mn} \underbrace{\int _{\Gamma ^{(\nu )}_x} g^{(\mu )}_x(t, \varvec{x}) \cdot X_m^1(\varvec{x}) \, {\textrm{d}} s}_{{}:=[\varvec{g}^{(\nu ,\mu ,x)}(t)]_m} \cdot \underbrace{\int _{\Omega ^{(\nu )}_v} \varvec{n}^{(\nu )}_x \cdot \varvec{v} \, g^{(\mu )}_v(t, \varvec{v}) \, V_n^0(\varvec{v}) \, {\textrm{d}} \varvec{v}}_{{}:=[\varvec{g}^{(\nu ,\mu ,v)}(t)]_n}\, \Sigma _{mn} \\&= \big (G_S ,\Sigma \big )_F, \quad \end{aligned}$$

with

$$\begin{aligned} G_S = \sum _{\nu \mu } \varvec{g}^{(\nu ,\mu ,x)}\cdot \big (\varvec{g}^{(\nu ,\mu ,v)}\big )^T \,. \end{aligned}$$

Putting everything together and testing with all \(\Sigma = [\Sigma _{mn}] \in {\mathbb {R}}^{r \times r}\) yields the ODE

$$\begin{aligned} \dot{S} - \sum _{k=1}^d\Big [ D^{(k,1)}S (C^{(k,1)})^T - D^{(k,2)}S (C^{(k,2)})^T\Big ] + \sum _\nu B^{(\nu ,x)}S (B^{(\nu ,v)})^T = G_S \end{aligned}$$
(15)

(in strong form) for obtaining \(S(t_1) = {{\tilde{S}}}\) from the initial condition \(S(t_0) = {{\hat{S}}}\). Note again that compared to the original problem (9) all signs (except the one of \(\dot{S}\)) have been flipped due to the negative sign of the corresponding projector in the splitting (10).
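Since (15) is just an \(r \times r\) matrix ODE, it can be integrated with any standard time stepper. The numpy sketch below uses explicit Euler with randomly generated stand-ins for the coefficient matrices (the actual matrices are the quadratures defined above; the step size and matrices here are purely illustrative).

```python
import numpy as np

# Explicit Euler sketch for the matrix ODE (15) governing the S-step,
# with random stand-ins for the coefficient matrices D, C, B and G_S.
rng = np.random.default_rng(4)
r, d, dt, n_steps = 3, 2, 1e-3, 10
D1 = [rng.standard_normal((r, r)) for _ in range(d)]
C1 = [rng.standard_normal((r, r)) for _ in range(d)]
D2 = [rng.standard_normal((r, r)) for _ in range(d)]
C2 = [rng.standard_normal((r, r)) for _ in range(d)]
Bx, Bv = rng.standard_normal((r, r)), rng.standard_normal((r, r))
G_S = rng.standard_normal((r, r))

def rhs(S):
    # Rearranging (15): dS/dt = sum_k [D1 S C1^T - D2 S C2^T] - Bx S Bv^T + G_S
    out = G_S - Bx @ S @ Bv.T
    for k in range(d):
        out += D1[k] @ S @ C1[k].T - D2[k] @ S @ C2[k].T
    return out

S = np.eye(r)                            # initial coupling matrix S(t0)
for _ in range(n_steps):
    S = S + dt * rhs(S)
```

In practice one would use a higher-order or implicit scheme here, but the cost is negligible compared to the K- and L-steps since only \(r \times r\) matrices are involved.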

2.4 Weak formulation of the L-step

The equations for the L-step are derived in a similar way as the K-step in Sect. 2.2. Here we seek

$$\begin{aligned} {{\bar{f}}}(t, \varvec{x}, \varvec{v}) = \sum _{i=1}^r X_i^1(\varvec{x}) \cdot L_i(t,\varvec{v}) \end{aligned}$$

in the subspace \(T_{X^1}\) with fixed \(X_1^1,\dots ,X_r^1\) from the initial value \({{\bar{f}}}(t_0,\varvec{x},\varvec{v}) = {{\tilde{f}}}(t_1,\varvec{x}, \varvec{v})\). The test functions in \(T_{X^1}\) are of the form

$$\begin{aligned} w(\varvec{x}, \varvec{v}) = \sum _{i=1}^r X_i^1(\varvec{x}) \cdot \psi _i^{(v)}(\varvec{v}), \qquad \psi _i^{(v)} \in W_v. \end{aligned}$$

Let again

$$\begin{aligned} \varvec{L} = \big [ L_i \big ]_{i=1,\ldots ,r}, \qquad \varvec{\psi }^{(v)} = \big [ \psi _i^{(v)} \big ]_{i=1,\ldots ,r} \end{aligned}$$

be vector-valued functions in \((W_v)^r\).

When restricting the weak formulation as in (12) (with \({{\bar{f}}}\) instead of \({{\hat{f}}}\)) to test functions in \(T_{X^1}\), the first three resulting contributions take a similar form as in the K-step,

$$\begin{aligned} \big (\partial _t \varvec{L}(t) - \sum _{k=1}^d {\mathcal {A}}^k_v(t) \cdot \partial _k \varvec{L}(t) + {\mathcal {K}}_v(\cdot ) \cdot \varvec{L}(t), \varvec{\psi }^{(v)}\big )_{L_2(\Omega _v, {\mathbb {R}}^r)} \end{aligned}$$

but this time with

$$\begin{aligned} \begin{aligned} {\mathcal {A}}^k_v(t)&= \Big [ \int _{\Omega _x} \! E_k(t,\varvec{x}) \, X_j^1(\varvec{x}) \cdot X_i^1(\varvec{x}) \, {\textrm{d}} \varvec{x} \Big ]_{i,j=1,\ldots ,r}, \\ {\mathcal {K}}_v(\varvec{v})&= \Big [\sum _{k=1}^d v_k \, \int _{\Omega _x} \! \partial _{x_k} X_j^1(\varvec{x}) \cdot X_i^1(\varvec{x}) \, {\textrm{d}} \varvec{x} \Big ]_{i,j=1,\ldots ,r} \end{aligned} \end{aligned}$$
(16)

(note that in the K-step \({\mathcal {A}}_x^k\) featured the velocity and \({\mathcal {K}}_x\) the electric field).

The boundary term in the bilinear form gives

$$\begin{aligned} \int _{\Gamma ^-} \! \varvec{n}_x \cdot \varvec{v}&\, f(t, \varvec{x}, \varvec{v}) \cdot w(\varvec{x}, \varvec{v}) \, \mathrm ds \\&= \sum _{i=1}^r \sum _{j=1}^r \sum _{\nu } \int _{\Omega ^{(\nu )}_v} \varvec{n}^{(\nu )}_x \cdot \varvec{v} \, \Big [ \int _{\Gamma ^{(\nu )}_x} \, X_i^1(\varvec{x}) \, X_j^1(\varvec{x}) \, {\textrm{d}} s \Big ] \, L_i(t, \varvec{v}) \cdot \psi _j^{(v)}(\varvec{v}) \, {\textrm{d}} \varvec{v} \\&= \big ({\mathcal {B}}_v(\cdot ) \, \varvec{L}, \varvec{\psi }^{(v)}\big )_{L_2(\Omega _v, {\mathbb {R}}^r)} \, , \end{aligned}$$

where

$$\begin{aligned} {\mathcal {B}}_v(\varvec{v}) = \Big [ \sum _\nu \chi _{\Omega ^{(\nu )}_v}(\varvec{v}) \varvec{n}^{(\nu )}_x \cdot \varvec{v} \int _{\Gamma ^{(\nu )}_x} X_j^1(\varvec{x}) \, X_i^1(\varvec{x}) \, {\textrm{d}} s\Big ]_{i,j=1,\ldots ,r}. \end{aligned}$$

Hence, the boundary condition on the spatial domain \(\Omega _x\) leads to an additional multiplicative term on the whole domain \(\Omega _v\). Boundary terms for the velocity variable are not present, since we assume \(\Omega _v={\mathbb {R}}^d\) to be unbounded.

For the right hand side, we have

$$\begin{aligned} \int _{\Gamma ^-} \!\!&\varvec{n}_x \cdot \varvec{v} \, g(t, \varvec{x}, \varvec{v}) \cdot w(\varvec{x}, \varvec{v}) \, \mathrm ds \\&= \sum _{i=1}^r \sum _{\mu \nu } \int _{\Omega _v} \chi _{\Omega ^{(\nu )}_v}(\varvec{v}) \varvec{n}^{(\nu )}_x \cdot \varvec{v} \, \Big [ \int _{\Gamma ^{(\nu )}_x} g^{(\mu )}_x(t,\varvec{x}) \, X_i^1(\varvec{x}) \, {\textrm{d}} s \Big ] \, g^{(\mu )}_v(t, \varvec{v}) \cdot \psi _i^{(v)}(\varvec{v}) \, {\textrm{d}} \varvec{v} \\&=\big (\varvec{G}_v(t, \cdot ), \varvec{\psi }^{(v)}\big )_{L_2(\Omega _v, \mathbb R^r)} \end{aligned}$$

with

$$\begin{aligned} \varvec{G}_v(t, \varvec{v}) = \Big [ \sum _{\mu \nu } g^{(\mu )}_v(t, \varvec{v}) \chi _{\Omega ^{(\nu )}_v}(\varvec{v}) \varvec{n}^{(\nu )}_x \cdot \varvec{v} \int _{\Gamma ^{(\nu )}_x} g^{(\mu )}_x(t,\varvec{x}) \, X_i^1(\varvec{x}) \, {\textrm{d}} s \Big ]_{i=1,\ldots ,r}. \end{aligned}$$

The resulting equation for the L-step reads: Find \(\varvec{L}(t) \in (W_v)^r\) for \(t \in [t_0,t_1]\) such that

$$\begin{aligned} \big (\partial _t \varvec{L}(t) - \sum _{k=1}^d {\mathcal {A}}^k_v(t) \cdot \partial _k \varvec{L}(t) + {\mathcal {K}}_v(\cdot ) \, \varvec{L}(t) - {\mathcal {B}}_v(\cdot ) \varvec{L}(t), \varvec{\psi }^{(v)}\big )_{L_2(\Omega _v, {\mathbb {R}}^r)}\nonumber \\ = -\big (\varvec{G}_v(t, \cdot ), \varvec{\psi }^{(v)}\big )_{L_2(\Omega _v, {\mathbb {R}}^r)} \quad \text {for all } \varvec{\psi }^{(v)} \in (W_v)^r. \end{aligned}$$
(17)

The initial condition is

$$\begin{aligned} \varvec{L}(t_0) = \Big [ \sum _{i=1}^r {{\tilde{S}}}_{ji} V^0_i(\varvec{v}) \Big ]_{ j=1,\dots ,r}, \end{aligned}$$

where the \(V_j^0\) are from \(f_r(t_0)\) in the previous time step (11), and \({{\tilde{S}}}\) has been determined in the S-step.

2.5 Electrical field

So far we have assumed that the electrical field does not depend on the density f. However, in the setting of a Lie splitting the other case can be included quite easily as well. At the beginning of the time step from \(t_0\) to \(t_1\), as discussed in Sects. 2.2–2.4, the electrical field can be computed via the Poisson equation (3) and then simply held constant during that step. This only introduces an error of first order in time, as does the projector splitting itself.

3 Discrete equations

In Sects. 2.2–2.4 the governing equations for the projector splitting integrator have been formulated in a continuous setting. They consist of the two Friedrichs’ systems (14) and (17) for \(\varvec{K}\) and \(\varvec{L}\), and the finite-dimensional system of ODEs (15) for S. In this section we will introduce a finite element discretization for the approximate numerical solution of the hyperbolic systems and derive the governing discrete equations that need to be solved in the practical computation.

A natural choice for the discretization is the discontinuous Galerkin method. However, the resulting equations for \(\varvec{K}\), \(\varvec{L}\), and S (Sect. 2) include derivatives of the basis functions \(X_i\) and \(V_j\), see for example Eqs. (16) and (13). Hence, as a first approach we will use a continuous finite element method, which of course has to be stabilized. It remains for future work to investigate the use of DG methods.

3.1 Discretization

Instead of an unbounded domain we will now use a finite domain \(\Omega _v = [-v_{\max },v_{\max }]^d\) for the velocity variable with \(v_{\max }\) large enough, and impose periodic boundary conditions.

Let meshes \({\mathcal {T}}_{x/v}\) on the domains \(\Omega _{x/v}\) be given and define \(H^1\)-conforming discretization spaces

$$\begin{aligned} W_{x}^h = \textrm{span}\{ \varphi ^{(x)}_\alpha \,|\, \alpha =1,\ldots ,n_{x}\}, \qquad W_{v}^h = \textrm{span}\{ \varphi ^{(v)}_\beta \,|\, \beta =1,\ldots ,n_v\} \, . \end{aligned}$$

We are now looking for an approximate solution of the Vlasov–Poisson equation (1) in the manifold

$$\begin{aligned} {\mathcal {M}}_r^h = \Big \{ \varphi ^h(\varvec{x}, \varvec{v}) = \sum _{i=1}^r \sum _{j=1}^r X_i^h(\varvec{x}) S_{ij} V_j^h(\varvec{v}) \, \Big \vert \, X_i^h \in W_x^h, \, V_j^h \in W_v^h, \, S_{ij} \in {\mathbb {R}} \Big \}, \end{aligned}$$

of rank-r finite element functions, where as before the systems \(X_1^h,\ldots ,X_r^h\) and \(V_1^h,\dots ,V_r^h\) are assumed to be orthonormal, and \(S = [S_{ij}]\) has rank r. To represent elements in \({\mathcal {M}}_r^h\), we introduce coefficient matrices

$$\begin{aligned} {\textsf{X}} = [{\textsf{X}}_{\alpha i}] \in {\mathbb {R}}^{n_x \times r}, \qquad {\textsf{V}} = [{\textsf{V}}_{\beta j}] \in {\mathbb {R}}^{n_v \times r} \end{aligned}$$

such that

$$\begin{aligned} X^h_i(\varvec{x}) = \sum _{\alpha =1}^{n_x} {\textsf{X}}_{\alpha i} \varphi ^{(x)}_\alpha (\varvec{x}), \qquad V^h_j(\varvec{v}) = \sum _{\beta =1}^{n_v} {\textsf{V}}_{\beta j} \varphi ^{(v)}_\beta (\varvec{v}). \end{aligned}$$
(18)

The goal is then to find the solution in the form

$$\begin{aligned} f_r^h(t, \varvec{x}, \varvec{v}) = \sum _{\alpha = 1}^{n_x} \sum _{\beta = 1}^{n_v} {\textsf{f}}_{\alpha \beta }^{}(t) \varphi ^{(x)}_\alpha (\varvec{x}) \varphi ^{(v)}_\beta (\varvec{v}) \end{aligned}$$

with coefficients

$$\begin{aligned} {\textsf{f}}(t) = [{\textsf{f}}_{\alpha \beta }(t)] = {\textsf{X}}(t) \cdot S(t) \cdot {\textsf{V}}(t)^T \in {\mathbb {R}}^{n_x \times n_v} \end{aligned}$$

at prescribed time points t. Note that in order to have the \(X_1^h,\dots ,X_r^h\) orthonormal in \(L_2\), as was always assumed above, the matrix \({\textsf{X}}\) needs to be orthogonal with respect to the inner product induced by the mass matrix of the basis functions \(\varphi ^{(x)}_\alpha \) (the matrix \({\textsf{M}}_x\) below). A similar remark applies to \({\textsf{V}}\).
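Such a mass-matrix-weighted orthonormalization can be realized, for example, with a Cholesky factorization of the mass matrix; a minimal sketch (the helper name is ours, not from the paper):

```python
import numpy as np

def m_orthonormalize(X, M):
    """Return (Q, R) with X = Q R and Q^T M Q = I, i.e. the columns of Q
    are orthonormal in the inner product induced by the SPD mass matrix M."""
    Lc = np.linalg.cholesky(M)        # M = Lc Lc^T
    Qt, R = np.linalg.qr(Lc.T @ X)    # ordinary QR in the transformed metric
    Q = np.linalg.solve(Lc.T, Qt)     # map back: Lc^T Q = Qt
    return Q, R
```

Then \(Q^T M Q = \tilde{Q}^T L_c^{-1} M L_c^{-T} \tilde{Q} = \tilde{Q}^T \tilde{Q} = I\), as required.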

Following the Galerkin principle, the steps of the projector splitting integrator can be formulated in terms of the matrices \({\textsf{X}}\), S, and \({\textsf{V}}\) by solving the Eqs. (14), (15), and (17) with the finite element spaces \(W_{x/v}^h\) instead of \(W_{x/v}\). This means that we use the finite dimensional spaces \((W_{x/v}^h)^r\) both as ansatz and test spaces. Specifically, for the Friedrichs’ systems (14) and (17) our aim is to find factors \(K^h_j\) and \(L^h_i\) of the form

$$\begin{aligned} K^h_j(t,\varvec{x}) = \sum _{\alpha =1}^{n_x} {\textsf{K}}_{\alpha j}(t) \varphi ^{(x)}_\alpha (\varvec{x}), \qquad L^h_i(t,\varvec{v}) = \sum _{\beta =1}^{n_v} {\textsf{L}}_{\beta i}(t) \varphi ^{(v)}_\beta (\varvec{v}) \end{aligned}$$
(19)

for \(i,j=1,\dots ,r\). We gather the coefficients in the matrices

$$\begin{aligned} {\textsf{K}}(t) = \big [ {\textsf{K}}_{\alpha j}(t)\big ]_{\alpha ,j} \in {\mathbb {R}}^{n_x \times r}, \quad {\textsf{L}}(t) = \big [ \mathsf L_{\beta i}(t) \big ]_{\beta ,i} \in {\mathbb {R}}^{n_v \times r} \,. \end{aligned}$$

The governing discrete equations are obtained by inserting the expressions (18) for \(X_i\) and \(V_j\) as well as (19) for \(K_i\) and \(L_j\) into (14), (15), and (17). In order to formulate them conveniently we will use the following additional discretization matrices:

$$\begin{aligned} {\textsf{M}}_x&= \big [ \langle \varphi ^{(x)}_\alpha , \, \varphi ^{(x)}_{\alpha '} \rangle _{L_2(\Omega _x)} \big ]_{\alpha , \alpha '},&{\textsf{M}}_{x, E_k}&= \big [ \langle \varphi ^{(x)}_\alpha , \, E_k \, \varphi ^{(x)}_{\alpha '} \rangle _{L_2(\Omega _x)} \big ]_{\alpha , \alpha '}, \\ {\textsf{T}}_{x,k}&= \big [ \langle \varphi ^{(x)}_\alpha , \, \partial _{x_k} \varphi ^{(x)}_{\alpha '} \rangle _{L_2(\Omega _x)} \big ]_{\alpha , \alpha '},&{\textsf{M}}_{x,\Gamma ^{(\nu )}_x}&= \big [ \langle \varphi ^{(x)}_\alpha , \, \varphi ^{(x)}_{\alpha '} \rangle _{L_2(\Gamma ^\nu )} \big ]_{\alpha , \alpha '}, \\ {\textsf{M}}_{v}&= \big [ \langle \varphi ^{(v)}_\beta , \, \varphi ^{(v)}_{\beta '} \rangle _{L_2(\Omega _v)} \big ]_{\beta ,\beta '},&{\textsf{M}}_{v,k}&= \big [ \langle \varphi ^{(v)}_\beta , \, v_k \varphi ^{(v)}_{\beta '} \rangle _{L_2(\Omega _v)} \big ]_{\beta ,\beta '}, \\ {\textsf{T}}_{v,k}&= \big [ \langle \varphi ^{(v)}_\beta , \, \partial _{v_k} \varphi ^{(v)}_{\beta '} \rangle _{L_2(\Omega _v)} \big ]_{\beta , \beta '},&{\textsf{M}}_{v,\Omega ^{(\nu )}}&= \big [ \langle \varphi ^{(v)}_\beta , \, n^{(\nu )} \cdot v \, \varphi ^{(v)}_{\beta '} \rangle _{L_2(\Omega ^{(\nu )}_v)} \big ]_{\beta ,\beta '}, \\ {\textsf{G}}_x(t)&= \big [ \langle \varphi ^{(x)}_\alpha , \, G_{x,j}(t,\cdot ) \rangle _{L_2(\Gamma _x, {\mathbb {R}}^r)} \big ]_{\alpha ,j},&{\textsf{G}}_v(t)&= \big [ \langle \varphi ^{(v)}_\beta , \, G_{v,i}(t,\cdot ) \rangle _{L_2(\Omega _v, {\mathbb {R}}^r)} \big ]_{\beta ,i}. \end{aligned}$$

In addition, we will take into account that finite element solutions of hyperbolic partial differential equations have to be stabilized. For that purpose we use the continuous interior penalty (CIP) stabilization [11]. For a mesh \({\mathcal {T}}\), its bilinear form is given by

$$\begin{aligned} s_{{\mathcal {T}}}^{\textrm{CIP}}(v_h, w_h) = \sum _{F \in \mathcal F_0^h} h_F^2 \left( [\![\nabla v_h ]\!], [\![ \nabla w_h ]\!] \right) _{L_2(F)} \end{aligned}$$

where \({\mathcal {F}}_0^h\) is the set of all interior interfaces of \({\mathcal {T}}\) and \([\![\cdot ]\!]\) is the jump term with respect to the interface F. The corresponding discretization matrices are denoted by

$$\begin{aligned} {\textsf{C}}_{x} = \big [ s_{\mathcal T_x}^{\textrm{CIP}}(\varphi ^{(x)}_j, \varphi ^{(x)}_i) \big ]_{i,j}, \qquad {\textsf{C}}_{v} = \big [ s_{\mathcal T_v}^{\textrm{CIP}}(\varphi ^{(v)}_j, \varphi ^{(v)}_i) \big ]_{i,j} \,. \end{aligned}$$
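In one dimension with piecewise-linear elements on a uniform mesh, the CIP matrix has an explicit form: the gradient jump at an interior node is a scaled second difference, and the factor \(h_F^2\) cancels against the two factors \(1/h\) from the gradients. A small illustrative sketch (a toy 1D assembly, not the actual MFEM one):

```python
import numpy as np

def cip_matrix_1d(n):
    """CIP matrix for P1 elements on a uniform 1D mesh with n nodes.
    At interior node i the gradient jump is (v[i-1] - 2 v[i] + v[i+1])/h,
    and the face weight h_F^2 cancels the two 1/h factors, so the matrix
    is J^T J with the second-difference stencil J."""
    J = np.zeros((n - 2, n))
    for i in range(n - 2):
        J[i, i], J[i, i + 1], J[i, i + 2] = 1.0, -2.0, 1.0
    return J.T @ J
```

By construction the matrix is symmetric positive semidefinite and vanishes on (nodal values of) linear functions, so the stabilization is consistent.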

Eventually, the discrete versions of Eqs. (14), (15), and (17) read as follows:

$$\begin{aligned} {\textsf{M}}_x \dot{{\textsf{K}}}&= -\sum _{k=1}^d \big ( {\textsf{T}}_{x,k} \cdot {\textsf{K}} \cdot \langle {\textsf{M}}_{v,k} \rangle _{{\textsf{V}}}^T - {\textsf{M}}_{x,E_k} \cdot {\textsf{K}} \cdot \langle {\textsf{T}}_{v,k} \rangle _{{\textsf{V}}}^T \big ) - \delta \, {\textsf{C}}_x \cdot {\textsf{K}} \nonumber \\&\quad + \sum _\nu {\textsf{M}}_{x,\Gamma ^{(\nu )}_x} \cdot {\textsf{K}} \cdot \langle {\textsf{M}}_{v,\Omega ^{(\nu )}_v}\rangle _{{\textsf{V}}}^T - {\textsf{G}}_x(t), \end{aligned}$$
(20a)
$$\begin{aligned} {\dot{S}}&= \sum _{k=1}^d \big ( \langle {\textsf{T}}_{x,k} \rangle _{{\textsf{X}}} \cdot S \cdot \langle {\textsf{M}}_{v,k} \rangle _{{\textsf{V}}}^T - \langle {\textsf{M}}_{x,E_k} \rangle _{{\textsf{X}}} \cdot S \cdot \langle {\textsf{T}}_{v,k} \rangle _{{\textsf{V}}}^T\big ) \nonumber \\&\quad - \sum _\nu \langle {\textsf{M}}_{x,\Gamma ^{(\nu )}_x} \rangle _{{\textsf{X}}} \cdot S \cdot \langle {\textsf{M}}_{v,\Omega ^{(\nu )}_v}\rangle _{{\textsf{V}}}^T + G_S, \end{aligned}$$
(20b)
$$\begin{aligned} {\textsf{M}}_v \dot{{\textsf{L}}}&= -\sum _{k=1}^d \big ( {\textsf{M}}_{v,k} \cdot {\textsf{L}} \cdot \langle {\textsf{T}}_{x,k} \rangle _{{\textsf{X}}}^T -{\textsf{T}}_{v,k} \cdot {\textsf{L}} \cdot \langle {\textsf{M}}_{x,E_k} \rangle _{{\textsf{X}}}^T \big ) - \delta \, {\textsf{C}}_v \cdot {\textsf{L}} \nonumber \\&\quad + \sum _\nu {\textsf{M}}_{v,\Omega ^{(\nu )}_v} \cdot {\textsf{L}} \cdot \langle {\textsf{M}}_{x,\Gamma ^{(\nu )}_x}\rangle _{{\textsf{X}}}^T - {\textsf{G}}_v(t) \,. \end{aligned}$$
(20c)

Here we use the abbreviations

$$\begin{aligned} \langle {\textsf{M}}\rangle _{{\textsf{X}}} = {\textsf{X}}^T \cdot {\textsf{M}} \cdot {\textsf{X}}, \quad \langle {\textsf{M}}\rangle _{{\textsf{V}}} = {\textsf{V}}^T \cdot {\textsf{M}} \cdot {\textsf{V}}\end{aligned}$$

for discretization matrices \({\textsf{M}}\). The parameter \(\delta \ge 0\) controls the stabilization.
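For illustration, the right-hand side of the K-step (20a) is assembled from these matrices with a few products; a numpy sketch with dense placeholder matrices (names are ours; in practice the matrices are sparse and the system \({\textsf{M}}_x \dot{{\textsf{K}}} = \text{rhs}\) is solved with a sparse factorization):

```python
import numpy as np

def project(M, V):
    """Galerkin compression <M>_V = V^T M V."""
    return V.T @ M @ V

def k_step_rhs(K, V, Tx, MxE, Mvk, Tvk, MxG, MvO, Gx, Cx, delta):
    """Right-hand side of (20a), i.e. everything except M_x d/dt K.
    Tx, MxE (lists over k): spatial transport / field matrices;
    Mvk, Tvk (lists over k): velocity mass / transport matrices;
    MxG, MvO (lists over nu): boundary matrices; Cx: CIP matrix."""
    rhs = -delta * (Cx @ K) - Gx
    for Txk, MxEk, Mvkk, Tvkk in zip(Tx, MxE, Mvk, Tvk):
        rhs -= Txk @ K @ project(Mvkk, V).T - MxEk @ K @ project(Tvkk, V).T
    for MxGn, MvOn in zip(MxG, MvO):
        rhs += MxGn @ K @ project(MvOn, V).T
    return rhs
```

Note that the compressed factors \(\langle \cdot \rangle_{{\textsf{V}}}\) are only \(r \times r\), so the cost per evaluation stays linear in \(n_x\) and \(n_v\).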

3.2 Discrete projector splitting integrator

Based on the above equations, the complete realization of the projector splitting scheme, including the appropriate initial conditions and the orthogonalization steps, is outlined in Algorithm 1.

It also includes in lines 1 and 2 the computation of the electric field as described in Sect. 2.5. For that purpose the density \(\rho \) given by \(f_r^h(t_0,\cdot ,\cdot ) \in {\mathcal {M}}_r^h\) has to be calculated by integrating out the v variable. The result is a finite element function \(\rho ^h \in W_x^h\) which is used on the right-hand side of the Poisson equation (3). We use ansatz functions of order two to solve the resulting linear system directly for the potential \(\Phi ^h\). The electrical field \(\varvec{E}^h\) is then computed as the negative gradient of \(\Phi ^h\) and is a discontinuous finite element function on the same mesh. After updating the discretization matrices \({\textsf{M}}_{x,E_k}\) involving the electrical field, the projector splitting steps from \(t_0\) to \(t_1\) can be performed.
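In the low-rank format the integration over v never requires forming f on the full grid: with \(b_v = [\int \varphi ^{(v)}_\beta \, {\textrm{d}} \varvec{v}]_\beta \) the nodal coefficients of \(\int f_r^h \, {\textrm{d}} \varvec{v}\) are \({\textsf{X}} S ({\textsf{V}}^T b_v)\). A sketch under the assumption of a nodal (Lagrange) basis, for which a constant \(\rho _b\) has a constant coefficient vector (function name is ours):

```python
import numpy as np

def charge_density_coefficients(X, S, V, bv, rho_b):
    """Nodal coefficients of rho^h = rho_b - int f_r^h dv for the
    low-rank coefficient format f = X S V^T.  Here bv[beta] is the
    integral of the velocity basis function phi_beta^{(v)}, so the
    vector of velocity integrals of the V_j is V^T bv."""
    return rho_b - X @ (S @ (V.T @ bv))
```

Evaluating right-to-left costs \({\mathcal {O}}((n_x + n_v) r)\) operations rather than \({\mathcal {O}}(n_x n_v)\).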

Algorithm 1

First-order dynamical low-rank integrator with projector splitting

3.3 Rank adaptive algorithm

The rank-adaptive algorithm is based on the unconventional low-rank integrator proposed in [3] and its rank-adaptive extension in [4]. It is displayed in Algorithm 2. Starting with a rank-\(r_0\) function, it first performs independent K-steps and L-steps from the initial data \({\textsf{X}}^0\) and \({\textsf{V}}^0\). The computed factors \({\textsf{K}}^1\) and \({\textsf{L}}^1\) are used to enlarge the previous bases \({\textsf{X}}^0\) and \({\textsf{V}}^0\) to dimension \(2 r_0\) each. This is achieved by computing orthonormal bases \(\hat{{\textsf{X}}}\) and \(\hat{{\textsf{V}}}\) of the augmented matrices \([{\textsf{X}}^0, {\textsf{K}}^1] \in {\mathbb {R}}^{n_x \times 2 r_0}\) and \([{\textsf{V}}^0, {\textsf{L}}^1] \in {\mathbb {R}}^{n_v \times 2 r_0}\). Then a new coefficient matrix of the form

$$\begin{aligned} {[}{\textsf{f}}_{\alpha \beta }] = \hat{{\textsf{X}}} S \hat{{\textsf{V}}}^T \end{aligned}$$

is sought by performing a ‘forward’ S-step

$$\begin{aligned} {\dot{S}}&= - \sum _{k=1}^d \big ( \langle {\textsf{T}}_{x,k} \rangle _{\hat{{\textsf{X}}}} \cdot S \cdot \langle {\textsf{M}}_{v,k} \rangle _{\hat{{\textsf{V}}}}^T - \langle {\textsf{M}}_{x,E_k} \rangle _{\hat{{\textsf{X}}}} \cdot S \cdot \langle {\textsf{T}}_{v,k} \rangle _{\hat{{\textsf{V}}}}^T\big ) \nonumber \\&\quad + \sum _\nu \langle {\textsf{M}}_{x,\Gamma ^{(\nu )}_x} \rangle _{\hat{{\textsf{X}}}} \cdot S \cdot \langle {\textsf{M}}_{v,\Omega ^{(\nu )}_v}\rangle _{\hat{{\textsf{V}}}}^T - G_S, \end{aligned}$$
(21)

which differs from (20b) in the signs of the right-hand side. For the initial condition \(S(t_0) = {{\hat{S}}}\), the matrix \({\textsf{X}}^0 S^0 ({\textsf{V}}^0)^T\) has to be expressed with respect to the larger bases \(\hat{{\textsf{X}}}\) and \(\hat{{\textsf{V}}}\), i.e. as \(\hat{{\textsf{X}}} {{\hat{S}}} \hat{{\textsf{V}}}^T\). The new solution then has rank \(2 r_0\) (in general) and can be truncated to a lower rank \(r_1\) according to a tolerance, which yields the actual new factors \({\textsf{X}}^1 \in {\mathbb {R}}^{n_x \times r_1}\), \(S^1 \in {\mathbb {R}}^{r_1 \times r_1}\), and \({\textsf{V}}^1 \in {\mathbb {R}}^{n_v \times r_1}\).
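The augmentation and truncation steps of this procedure can be sketched as follows; for brevity we use the Euclidean inner product, whereas in the FEM setting the QR and the error measure have to respect the mass-matrix inner products (all names are ours):

```python
import numpy as np

def augment(X0, K1):
    """Orthonormal basis of the augmented matrix [X0, K1]."""
    Q, _ = np.linalg.qr(np.hstack([X0, K1]))
    return Q

def truncate(Xh, S, Vh, eps):
    """SVD-truncate Xh S Vh^T: keep the smallest rank r1 such that the
    Euclidean norm of the discarded singular values is at most eps."""
    U, sv, Wt = np.linalg.svd(S)
    # tail[r] = norm of singular values sv[r:], with tail[len(sv)] = 0
    tail = np.sqrt(np.append(np.cumsum(sv[::-1] ** 2)[::-1], 0.0))
    r1 = max(int(np.argmax(tail <= eps)), 1)   # first index with small tail
    return Xh @ U[:, :r1], np.diag(sv[:r1]), Vh @ Wt[:r1].T
```

Since the SVD is taken of the small \(2 r_0 \times 2 r_0\) matrix \(S\), the truncation cost is independent of \(n_x\) and \(n_v\).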

Algorithm 2

First-order rank-adaptive unconventional integrator (RAUC)

4 Numerical experiments

We present numerical results of two experiments for testing the methods described in Sect. 3. Since to our knowledge there does not exist an analytic solution of the Vlasov–Poisson equation on bounded domains to which our numerical results could be compared, we chose to consider the following two setups. The first experiment will be the classical Landau damping where analytic results on the decay of the electric energy are known. However, the domain is assumed to be periodic so that no boundary conditions are needed. This scenario has been treated in several previous works. Our second experiment will then include boundary conditions on a polygonal spatial domain to actually test our proposed approach for handling these.

In our tests linear finite elements are used and the matrix ODEs (lines 4, 6, and 8 in Algorithm 1, and line 11 in Algorithm 2) are solved using an explicit third-order Runge–Kutta method. The implementation is based on the finite element library MFEM [1] and uses its Python wrapper PyMFEM. The computations are carried out using standard numerical routines from numpy and scipy. All experiments have been performed on a desktop computer (i9 7900, 128 GB RAM); the code is available online.

4.1 Landau damping

As a first numerical test we consider the classical Landau damping in two spatial and velocity dimensions. We use periodic domains \(\Omega _x = [0, 4 \pi ]^2\) and \(\Omega _v = [-6, 6]^2\) and a background density of \(\rho _b = 1\). As initial condition we choose

$$\begin{aligned} f(0,\varvec{x},\varvec{v}) = \frac{1}{2\pi } \mathrm e^{-\vert \varvec{v}\vert ^2/2}\, \big ( 1 + \alpha \cos (k x_1) + \alpha \cos (k x_2)\big ), \quad \alpha = 10^{-2}, \, k = \frac{1}{2}. \end{aligned}$$

The same setup was investigated in [7]. For this case linear analysis shows that the electric field decays with a rate of \(\gamma \approx 0.153\). For the discretization we use regular grids with \(n_x=64^2\) and \(n_v=256^2\) degrees of freedom (level 0).

Using Algorithm 1 the simulation was carried out with fixed ranks \(r=5,10,15\) and a time step of \(\Delta t = 0.005\). The results are shown in Fig. 1. As can be seen in the top left plot, for rank \(r=5\) the computed electric energy \(\frac{1}{2} \, \int _{\Omega _x} | \varvec{E}(t,\varvec{x}) |^2 \, {\textrm{d}} \varvec{x}\) exhibits the analytical decay rate approximately up to \(t=35\). For rank \(r=10\) the electric energy starts to deviate at the end of the time interval, while the solution with rank \(r=15\) shows the correct rate over the full simulation time.

Fig. 1

Simulation results of the 2+2-dimensional Landau damping using fixed ranks (Alg. 1); electric energy including the analytical decay rate (upper left) and relative error of the invariants (22). For the total energy and entropy the lines for all three ranks almost overlap

Furthermore, it is known that the particle number, the total energy and the entropy, that is, the quantities

$$\begin{aligned} \begin{aligned}&\int _{\Omega } f(t,\varvec{x},\varvec{v}) \, {\textrm{d}} \varvec{x} \, {\textrm{d}} \varvec{v}, \quad \frac{1}{2} \int _{\Omega } \vert \varvec{v}\vert ^2 \, f(t,\varvec{x},\varvec{v}) \, {\textrm{d}} \varvec{x} \, {\textrm{d}} \varvec{v} + \frac{1}{2} \int _{\Omega _x} \vert \varvec{E}(t,\varvec{x})\vert ^2 \, {\textrm{d}} \varvec{x}, \\&\int _{\Omega } \vert f(t,\varvec{x},\varvec{v})\vert ^2 \, {\textrm{d}} \varvec{x} \, {\textrm{d}} \varvec{v}, \end{aligned} \end{aligned}$$
(22)

are invariants of the exact solution. In Fig. 1 we see that the mass and the total energy are almost conserved, whereas the entropy is only preserved up to a small error.
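In the low-rank format these invariants can be monitored cheaply from the factors alone; a sketch under the assumption of \(L_2\)-orthonormal bases \(X_i\), \(V_j\) and with quadrature vectors \(b_x\), \(b_v\), \(q_v\) in our own (hypothetical) notation:

```python
import numpy as np

def mass(X, S, V, bx, bv):
    """int f dx dv for f = sum_ij X_i S_ij V_j, with quadrature vectors
    bx[a] = int phi_a^{(x)} dx and bv[b] = int phi_b^{(v)} dv."""
    return (bx @ X) @ S @ (V.T @ bv)

def kinetic_energy(X, S, V, bx, qv):
    """(1/2) int |v|^2 f dx dv with qv[b] = (1/2) int |v|^2 phi_b^{(v)} dv."""
    return (bx @ X) @ S @ (V.T @ qv)

def l2_norm_sq(S):
    """int f^2 dx dv = ||S||_F^2 for L2-orthonormal factors X_i, V_j."""
    return float(np.sum(S ** 2))
```

The total energy additionally needs the electric-energy term, which is computed from \(\varvec{E}^h\) on the spatial mesh only.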

Fig. 2

Simulation results of the 2+2-dimensional Landau damping using the rank adaptive algorithm (Algorithm 2) for different tolerances \(\epsilon \) and discretizations; electric energy including the analytical decay rate (left) and ranks (right)

For the rank adaptive case (Algorithm 2) the system was simulated with the same discretization (level 0) and different tolerances \(\epsilon \). The corresponding results in Fig. 2 show that the electric energy deteriorates at around \(t=25\) for both tolerances \(\epsilon =10^{-5}\) and \(10^{-6}\). Although the rank increases up to 40 (the maximal rank allowed) for \(\epsilon =10^{-6}\), the accuracy of the electric energy does not improve. To improve the accuracy, the simulation is carried out on a uniformly refined spatial and velocity mesh (level 1). The results for a time step of \(\Delta t = 0.00125\) and a tolerance of \(\epsilon =10^{-6}\) are shown in Fig. 2. The electric energy now shows the correct decay up to approximately \(t=40\).

Investigating the computed bases X and V in more detail shows that in the rank adaptive case spurious oscillations are present. In Fig. 3 two exemplary basis functions \(X_i(\varvec{x})\) of level 0 at time \(t=50\) are displayed. In the fixed rank case (\(r=15\)) the basis function is much smoother than in the rank adaptive simulation (\(\epsilon =10^{-5}\)).

Fig. 3

Spatial basis functions \(X_i(\varvec{x})\) at time \(t=50\) for fixed rank (\(r=15\), left) and rank adaptive (level 0, \(\epsilon =10^{-5}\), right) simulation

In summary, spurious modes may enter the rank adaptive simulation in the course of the computation. However, the accuracy can be improved by refining the discretization. In contrast, the algorithm using a fixed rank seems to have a regularizing effect. It remains for future work to investigate this effect more closely.

4.2 Inflow boundary condition with constant electrical field

In the second example we focus on the boundary conditions and compare the numerical solution to an analytical one. In this setting, we solve the transport equation (1) in \(2+2\) dimensions with a constant electrical field \(\varvec{E} = [0, 4]^T\) on a triangular spatial domain

$$\begin{aligned} \Omega _x = \{ (x_1,x_2) \,|\, -0.5< x_1< 0.5, \, -x_1/2+1/4< x_2 < x_1/2 - 1/4 \} \end{aligned}$$

with initial condition \(f(0,\varvec{x}, \varvec{v}) = 0\) and \(\Omega _v = {\mathbb {R}}^2\). The inflow

$$\begin{aligned} f(t,\varvec{x}, \varvec{v}) = {{\bar{f}}}(t, \varvec{x}, \varvec{v}) \quad \text {on } \Gamma ^-. \end{aligned}$$
(23)

on the boundary of \(\Omega _x\) is determined by a function \({{\bar{f}}}\), which solves the same equation as f, but on the whole spatial domain \({\mathbb {R}}^2\) and with initial condition

$$\begin{aligned} {{\bar{f}}}(0, \varvec{x}, \varvec{v}) = {{\bar{f}}}_0(\varvec{x}, \varvec{v}). \end{aligned}$$

Here, \({{\bar{f}}}_0\) has compact spatial support just outside the triangular domain \(\Omega _x\) and a compactly supported velocity distribution centered around \(\varvec{v} = [2,0]^T\). More precisely, we set

$$\begin{aligned} {{\bar{f}}}_0(\varvec{x}, \varvec{v}) = \phi \Big (\frac{x_1 - 0.5 - \sigma _x}{\sigma _x}\Big ) \cdot \phi \Big (\frac{x_2 - 0.1}{\sigma _x}\Big ) \cdot \phi \Big (\frac{v_1 - 2}{\sigma _v}\Big ) \cdot \phi \Big (\frac{v_2}{\sigma _v}\Big ), \end{aligned}$$
(24)

where \(\sigma _x = 0.2\), \(\sigma _v = 0.5\), and

$$\begin{aligned} \phi (z) = {\left\{ \begin{array}{ll} z^2 \cdot (2 |z| - 3) + 1 &{} |z| \le 1 \\ 0 &{} |z|>1 \end{array}\right. }\end{aligned}$$

is a \(C^1\) function supported in \([-1,1]\) and centered around 0. In consequence \({{\bar{f}}}_0\) is a product function supported in a four-dimensional cube with side lengths controlled by \(\sigma _x\) and \(\sigma _v\).

By the method of characteristics we obtain

$$\begin{aligned} {{\bar{f}}}(t, \varvec{x}, \varvec{v}) = {{\bar{f}}}_0(\varvec{x} - \varvec{v} \cdot t - \varvec{E} \cdot t^2/2, \varvec{v} + \varvec{E} \cdot t). \end{aligned}$$
(25)

The restriction of \({{\bar{f}}}\) to \(\Omega _x \times {\mathbb {R}}^2\) solves the original problem and can be used to assess the quality of its numerical solution \(f^h_r\).
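A pointwise evaluator for the reference solution (25) with the bump (24) is straightforward; a self-contained sketch:

```python
import numpy as np

def phi(z):
    """C^1 bump  z^2 (2|z| - 3) + 1  on [-1, 1], zero outside."""
    z = np.asarray(z, dtype=float)
    return np.where(np.abs(z) <= 1.0, z ** 2 * (2 * np.abs(z) - 3) + 1.0, 0.0)

def f0_bar(x, v, sx=0.2, sv=0.5):
    """Separable initial datum (24)."""
    return (phi((x[0] - 0.5 - sx) / sx) * phi((x[1] - 0.1) / sx)
            * phi((v[0] - 2.0) / sv) * phi(v[1] / sv))

def f_bar(t, x, v, E=(0.0, 4.0)):
    """Exact solution (25) by the method of characteristics."""
    x, v, E = np.asarray(x, float), np.asarray(v, float), np.asarray(E, float)
    return f0_bar(x - v * t - E * t ** 2 / 2, v + E * t)
```

By construction \({{\bar{f}}}\) is constant along the characteristics \(\dot{\varvec{x}} = \varvec{v}\), \(\dot{\varvec{v}} = -\varvec{E}\).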

In order to use \({{\bar{f}}}\) as the inflow function in our dynamical low-rank approach, we have to work with an approximation in separable form instead. For that purpose, we use the fact that, by (25) and our choice (24), \({{\bar{f}}}\) can be written as

$$\begin{aligned} {{\bar{f}}}(t, \varvec{x}, \varvec{v}) = g_1(t,x_1, v_1) \cdot g_2(t,x_2, v_2). \end{aligned}$$

The factors \(g_1\) and \(g_2\) are then evaluated at a given time on a regular fine grid \((x_i, v_i)\) and singular value decompositions are computed. The product of the truncated SVDs is then used for computing the inflow (23) as well as the error of the numerical solution. In our experiments we used a truncation rank of 25.
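The grid-sampling-plus-SVD construction of such a separable surrogate can be sketched generically (a toy bivariate function stands in for the factors \(g_k\); all names are ours):

```python
import numpy as np

def separable_approx(g, xs, vs, rank):
    """Sample g on the tensor grid xs x vs and return factors (A, B)
    with  g(x_i, v_j) ~ (A B^T)[i, j]  from a truncated SVD."""
    G = g(xs[:, None], vs[None, :])               # sample matrix
    U, s, Wt = np.linalg.svd(G, full_matrices=False)
    r = min(rank, len(s))
    return U[:, :r] * s[:r], Wt[:r].T             # absorb s into the left factor
```

If the sampled function has low rank, the truncated SVD reproduces it up to the discarded singular values.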

The transport equation is solved numerically on the time domain [0, 0.5]. On the coarsest scale (level 0) we use a conforming triangulation of \(\Omega _x\) with 339 vertices. The velocity domain is chosen as \(\Omega _v = [-4,4]^2\) with periodic boundary conditions and a regular triangulation with 4096 vertices. For the numerical solution the rank adaptive algorithm (Algorithm 2) is used with a time step of \(\Delta t = 0.005\), a truncation threshold of \(\epsilon = 10^{-3}\), and a stabilization parameter \(\delta = 10^{-2}\). Higher levels \(\ell =1,2\) are obtained by uniformly refining the meshes in \(\Omega _x\) as well as \(\Omega _v\). For these levels the parameters \(\Delta t\) and \(\epsilon \) are scaled by \(2^{-2\ell }\).

Figure 4 shows the evolution of the computed spatial density

$$\begin{aligned} \rho ^h(t, \varvec{x}) = \int _{\Omega _v} f_{r}^h(t,\varvec{x}, \varvec{v}) \, {\textrm{d}} \varvec{v} \end{aligned}$$
(26)

for level \(\ell =1\) at different time steps together with its error computed using the analytical solution \({{\bar{f}}}\). At the beginning of the simulation, the density flows into the domain, is transported and finally leaves the domain. The error remains reasonably small during the whole process.

A more detailed investigation of the \(L_2\) error is depicted in the upper part of Fig. 5. It is obtained by comparing the numerical solutions \(f_{r}^h\) to the analytical solution \({{\bar{f}}}\) on a once uniformly refined grid. As one can see, in the beginning the error increases for all levels but decays again for \(t \ge 0.35\). At about that time a significant fraction of the density has left the domain already (see Fig. 4). The maximal error decays as the level increases.

Fig. 4

Spatial density \(\rho ^h(t, \varvec{x})\) (left), see (26), and error scaled by a factor of 10 (right) of the numerical solution of (1) with constant electrical field for level \(\ell =1\) at times \(t = 0.075, \, 0.25, \, 0.375\)

The lower part of the figure shows the ranks which were used by the rank adaptive algorithm. As more particles enter the domain and the distribution spreads out, the ranks increase. For higher levels of discretization a smaller truncation parameter is used in order to balance the low-rank approximation error with the discretization error, leading to higher ranks.

Fig. 5

Numerical solution of (1) for different levels of discretization. The upper graph shows the \(L_2\) error computed with respect to the analytical solution \({{\bar{f}}}\), see (25), on a uniformly refined grid. The lower graph shows the ranks used by the rank adaptive integrator

5 Conclusion and outlook

In this paper we have studied how to incorporate inflow boundary conditions in the dynamical low-rank approximation (DLRA) for the Vlasov–Poisson equation based on its weak formulation. The single steps in the projector splitting integrator, or the rank-adaptive unconventional integrator, can be interpreted as restrictions of the weak formulation to certain subspaces of the tangent space. The efficient solution of these sub-steps requires the separability of the boundary integrals, which is ensured for piecewise linear boundaries together with a separable inflow function. The resulting equations can be solved using FEM solvers based on Galerkin discretization. We confirmed the feasibility of our approach in numerical experiments.

As a next step, the conservation of physical invariants such as mass and momentum in the numerical schemes could be addressed, perhaps by extending methods from [6, 8, 10, 12, 14]. Enforcing nonnegativity of the density function in the DLRA approach is another open issue.

A potential advantage of the weak formulation for the sub-problems in the projector splitting integrator is that in principle it allows for great flexibility regarding the discretization spaces. In particular, they do not need to be fixed in advance, and mesh-adaptive methods could be used for solving the sub-steps, provided suitable interpolation and prolongation operations are available, as well as adaptive solvers for Friedrichs’ systems. We leave this as possible future work. Other elaborate adaptive integrators have been proposed in [15], which could also be combined with our approach.