1 Introduction

While physical understanding is inherently based on empirical evidence, numerical simulation tools have become an integral part of the majority of fields within physics. When tested against observations, numerical models can strengthen or invalidate existing theories and quantify the degree to which the theories have to be improved. Simulation results can also complement observations by giving them a larger context. In space physics, spacecraft measurements concern only one point at one time in the vast volume of space, which makes it difficult to distinguish spatial phenomena from temporal changes. This shortcoming has also led to the use of spacecraft constellations, like the European Space Agency’s Cluster mission (Escoubet et al. 2001). However, simulations are considerably more cost-effective than spacecraft, and they can be applied to physical systems that cannot be reached by in situ experiments, like distant galaxies. Finally, and most importantly, predictions of physical environments under varying conditions are always based on modelling. Predicting the near-Earth environment in particular has become increasingly important, not least because near-Earth space hosts expensive assets used to monitor our planet. The space environmental conditions threatening space- or ground-based technology or human life are commonly termed space weather. Space weather predictions include two types of modelling efforts: those targeting real-time modelling (similar to terrestrial weather models), and those which test and improve the current space physical understanding together with top-tier experiments. This paper concerns the latter approach.

The physical conditions within the near-Earth space are mostly determined by the physics of collisionless plasmas, where the dominant physical interactions are caused by electromagnetic forces acting on a collection of charged particles. There are three main approaches to model plasmas: (1) the fluid approach (e.g., magnetohydrodynamics, MHD), (2) the fully kinetic approach, and (3) hybrid approaches combining the first two. Present global models covering the entire near-Earth space in three dimensions (3D) and resolving the couplings between different regions are largely based on MHD (e.g., Janhunen et al. 2012). However, single-fluid MHD models are essentially scale-less in that they assume that plasmas have a single temperature approximated by a Maxwellian distribution. Therefore they provide a limited context to the newest space missions, which produce high-fidelity multi-point observations of spatially overlapping multi-temperature plasmas. The second approach uses a kinetic formulation as represented by the Vlasov theory (Vlasov 1961). In this approach, plasmas are treated as velocity distribution functions in a six-dimensional phase space consisting of three-dimensional ordinary space (3D) and a three-dimensional velocity space (3V). The majority of kinetic simulations implement the Vlasov theory with a particle-in-cell (PIC) method (Lapenta 2012), where a large number of particles are propagated within the simulation, and the distribution function is constructed from particle statistics in space and time. The fully kinetic PIC approach means that both electrons and protons are treated as particles within the simulation. Such simulations in 3D are computationally extremely costly, and can only be carried out in local geometries (e.g., Daughton et al. 2011).

A hybrid approach in the kinetic simulation regime usually means that electrons are treated with a fluid description, while protons and heavier ions are treated kinetically. Again, the vast majority of simulations use a hybrid-PIC approach, which has previously been limited to 2D spatial domains due to computational challenges (e.g., Omidi et al. 2005; Karimabadi et al. 2014), but has recently been extended into 3D using a limited resolution (e.g., Lu et al. 2015; Lin et al. 2017). This paper does not discuss the details of the PIC approach, but instead concentrates on a hybrid-Vlasov method, where the ion velocity distribution is discretised and modelled with a 3D–3V grid. The difference to hybrid-PIC is that in hybrid-Vlasov the distribution functions are evolved in time as an entity, and not constructed from particle statistics. The main advantage is therefore that the distribution function is noiseless. This can be important for the problem at hand, because the distribution function is in many respects the core of plasma physics, as the majority of the plasma parameters and processes can be derived from it. As will be described, hybrid-Vlasov methods have been used mostly in local geometries, because the 3D–3V requirement implies a large computational cost. A global approach, which in space physics means simulation box sizes exceeding thousands of ion inertial lengths or gyroradii per dimension, has not been possible, as the large spatial volume naturally has to be complemented by a velocity space at every location. The world’s (so far) only global magnetospheric hybrid-Vlasov simulation, the massively parallel Vlasiator, is therefore the prime application in this article.

This paper is organised as follows: Sect. 2 introduces the typical plasma systems and relevant processes one encounters in space. Sections 3 and 4 introduce the Vlasov theory and its numerical representations. Section 5 describes Vlasiator in detail and justifies the decisions made in the design of the code to aid those who would like to design their own (hybrid-)Vlasov system. At the time of writing, there are no standard verification cases for a (hybrid-)Vlasov system, but we describe the test cases used for Vlasiator. The physical findings are then illustrated briefly, showing that Vlasiator has made a paradigm change in space physics, emphasising the role of scale coupling in large-scale plasma systems. While this paper concerns mostly the near-Earth environment, we hope it is useful for astrophysical applications as well. Astrophysical large-scale modelling is still mostly based on non-magnetised gas (Springel 2005; Bryan et al. 2014), while in reality astrophysical objects are in the plasma state. In the future, pending new supercomputer infrastructure, it may be possible to design astrophysical simulations based on MHD first, and later possibly on kinetic theories. If this becomes feasible, we hope that our design strategies, complemented and validated by in situ measurements, can be helpful.

2 Kinetic physics in astrophysical plasmas

Thermal and non-thermal interactions between charged particles and electromagnetic fields follow the same basic rules throughout the universe, but the applicability of simplified theories and the relevant spatial, temporal, and virial scales vary greatly between different scopes of research. In this section, we present an overview of regions of interest and the phenomena found within them.

2.1 Astrophysical media and objects

Prime examples of themes requiring modelling are, e.g., the dynamics of hot, cold, and dark matter in an expanding universe with unknown boundaries. The birth of the universe connects the rapid expansion and cooling of baryonic matter with quantum fluctuation anisotropies that eventually lead to the formation of galactic superclusters. Astrophysical simulations of the universe should naturally account for the expansion of space-time and associated effects of general relativity, and modelling of high-energy phenomena should correctly account for special relativity due to velocities approaching the speed of light. A recent forerunner in modelling the universe is EAGLE (Schaye et al. 2015), which utilises smoothed particle hydrodynamics, with subgrid modelling providing feedback of star formation, radiative cooling, stellar mass loss and feedback from stars and accreting black holes. These simulations operate on very much larger scales compared to the Vlasov equation for ions and electrons, yet they depend strongly on knowledge of processes at smaller length and time scales. Because the majority of the universe consists of the mostly empty interstellar and intergalactic media, the energy content of turbulent space plasmas must be understood. This has been investigated through the Vlasov equation (see, e.g., Weinstock 1969). Conversely, turbulent behaviour at large scales can act as a model for extending power laws to smaller scales (Maier et al. 2009). An alternative, if less common, approach for modelling galactic dynamics is to describe the distribution of stars as a Vlasov–Poisson system, to be explained below, with gravitational force terms instead of electromagnetic effects (Guo and Li 2008). This approach highlights the use of the Vlasov equation also on large spatial scales.

2.2 Solar system

Plasma simulations of the solar system are mostly concerned with the modelling of solar activity and its influence on the heliosphere. Solar activity can be divided into two components: the solar wind, consisting of particles escaping continuously from the solar corona due to its thermal expansion, and carrying with them turbulent fields; and transient phenomena such as flares and coronal mass ejections, during which energy and plasma are released explosively from the Sun into the heliosphere. Topics of active research in solar physics include for example the acceleration and expansion of the solar wind (Yang et al. 2012; Verdini et al. 2010; Pinto and Rouillard 2017), coronal heating (De Moortel and Browning 2015; Cranmer et al. 2017) and flux emergence (Schmieder et al. 2014). The latter is particularly important for transient solar activity, as flares and coronal mass ejections are due to the destabilisation of coronal magnetic structures through magnetic reconnection. The typical length of these coronal structures ranges between \(10^{6}\) and \(10^{8}\) m. Of great interest is also the propagation of the solar wind and solar transients into the heliosphere, in particular for studying their interaction with Earth and other planetary environments. Because of the large scales of the systems considered, solar and heliospheric simulations are generally based on MHD, which means that currently existing theories of the Sun and solar eruptions are mostly based on the MHD approximation. Applying the Vlasov approach to near-Earth physics, which has important analogies to solar plasmas, may therefore provide important feedback to existing solar theories as well.

2.3 Near-Earth space and other planetary environments

Figure 1 illustrates the near-Earth space. The shock separating the terrestrial magnetic domain from the solar wind is called the bow shock (e.g., Omidi 1995), and the region of shocked plasma downstream is the magnetosheath (Balogh and Treumann 2013). The interplanetary magnetic field (IMF), which at 1 AU typically forms an angle of 45\(^{\circ }\) relative to the plasma flow direction, intensifies at the shock, with the magnetic field strength increasing roughly four-fold compared to that in the solar wind (e.g., Spreiter and Stahara 1994).

Fig. 1 Vlasiator modelling of the magnetosphere in the noon–midnight meridian plane viewed from the morning sector. Different physical regions outlined in the text are annotated, along with the solar wind and IMF directions

The bow shock–magnetosheath system hosts highly variable and turbulent environmental conditions, with the bow shock normal angle with respect to the IMF direction being one of the most important factors controlling the level of variability. At portions of the bow shock where the IMF is quasi-parallel to the bow shock normal (termed the quasi-parallel shock), some particles reflect at the shock and propagate back upstream, causing instabilities and waves in the foreshock upstream of the bow shock (e.g., Hoppe et al. 1981). On the quasi-perpendicular side of the shock, where the IMF direction is more perpendicular to the bow shock normal, the downstream magnetosheath is much smoother, but exhibits large-scale waves originating from anisotropies in the ion distribution function (e.g., Génot et al. 2011; Soucek et al. 2015; Hoilijoki et al. 2016). The foreshock–bow shock–magnetosheath coupled system is under active research, and since it is the magnetosheath plasma which ultimately determines the conditions within the near-Earth space, the most important open questions concern the processes that determine the plasma characteristics in space and time. The entire system has previously been modelled with MHD, which can be used to infer average properties of the dayside system (e.g., Palmroth et al. 2001; Chapman and Cairns 2003; Dimmock and Nykyri 2013; Mejnertsen et al. 2018), but which cannot account for particle reflection, kinetic waves, or turbulence, and neglects, e.g., plasma asymmetries between the quasi-parallel and quasi-perpendicular sides of the shock that require a non-Maxwellian ion distribution function.

The earthward boundary of the magnetosheath is called the magnetopause, a current layer exhibiting large gradients in the plasma parameters. Energy and mass exchange between the upstream plasma and the magnetosphere occurs at the magnetopause (Palmroth et al. 2003, 2006c; Pulkkinen et al. 2006; Anekallu et al. 2011; Daughton et al. 2014; Nakamura et al. 2017), and therefore its processes are important in determining the amount of energy driving space weather phenomena, which can endanger technological systems or human health (Watermann et al. 2009; Eastwood et al. 2017). Space weather phenomena are complicated and varied, and we give a non-exhaustive list of only the most important categories. Direct energetic particle flows from the Sun alter the communication conditions especially at high latitudes, affecting radio broadcasts, aircraft communication with air traffic control, and radar signals. Sudden changes in the magnetic field induce currents in long terrestrial conductors, such as gas pipelines, railways, and power grids, which can sometimes be disrupted (e.g., Wik et al. 2008). Increasing numbers of satellites are being launched that are vulnerable to sudden events in geospace, and some spacecraft have stopped operating in response to space weather events (Green et al. 2017). Overall, some estimations show that in the worst case, an extreme space weather event could induce economic costs of the order of 1–2 trillion USD during the first year following its occurrence, and that it could take 4–10 years for society to recover from its effects (National Research Council 2008). Understanding and predicting the geospace is ultimately done by modelling. While the existing global MHD models can be executed in near real-time and provide an average description of the system, they cannot capture the kinetic physics that is needed to explain the most severe space weather events.

One additional factor in the accurate modelling of the geospace as a global system is that one needs to address the ionised upper atmosphere, called the ionosphere, within the simulation. The Earth’s ionosphere is a weakly ionised medium, divided into three regions—named D (60–90 km), E (90–150 km), and F (> 150 km)—corresponding to three peaks in the electron density profile (Hargreaves 1995). From the magnetospheric point of view, the ionosphere represents a conducting layer closing currents flowing between the ionosphere and magnetosphere (Merkin and Lyon 2010), reflecting waves (Wright and Russell 2014), and depositing precipitating particles (e.g., Rodger et al. 2013). Further, the ionosphere is a source of cold electrons (Cran-McGreehin and Wright 2005) and heavier ions (e.g., Peterson et al. 1981). These cold ions of ionospheric origin may affect local processes in the magnetosphere, such as magnetic reconnection at the magnetopause (André et al. 2010; Toledo-Redondo et al. 2016). The global MHD models typically use an electrostatic module for the ionosphere, coupled to the magnetosphere by currents, precipitation and electric potential (e.g., Janhunen et al. 2012; Palmroth et al. 2006a). The ionosphere itself is modelled either empirically or based on first principles: The International Reference Ionosphere (IRI) model describes the ionosphere empirically from 50 to 1500 km altitude (Bilitza and Reinisch 2008), while for instance, the Sodankylä Ion and Neutral Chemistry model solves the photochemistry of the D region, taking into account several hundred chemical reactions involving 63 ions and 13 neutral species (Verronen et al. 2005, and references therein). At higher altitudes, transport processes become important, and models such as TRANSCAR (Blelly et al. 2005, and references therein) or the IRAP Plasmasphere–Ionosphere Model (Marchaudon and Blelly 2015) couple a kinetic model for the transport of suprathermal electrons with a fluid approach to resolve the chemistry and transport of ions and thermal electrons in the convecting ionosphere. Neither the empirical nor the first-principles models use the Vlasov equation, which at ionospheric altitudes would concern much finer scales.

In general, the interaction of the solar wind with the other magnetized planets in our solar system is essentially similar to that with Earth. The main differences stem from the scales of the systems, which depend on the strength of their intrinsic magnetic field and the solar wind parameters changing with heliospheric distance. While the modelling of the magnetospheres of the outer giants is only achievable to date using fluid approaches, the small size of Mercury’s magnetosphere has been targeted for global kinetic simulations (Richer et al. 2012). For the same reason, kinetic models are also a popular tool to investigate the plasma environment of non-magnetized bodies such as Mars, Venus, comets, and asteroids. In particular, Umeda and Ito (2014) and Umeda and Fukazawa (2015) have studied the interaction of a weakly magnetized body with the solar wind by means of full-Vlasov simulations.

2.4 Scales and processes

The following processes are central in explaining plasma behaviour in the Solar–Terrestrial system and astrophysical domains: (1) magnetic reconnection enabling energy and mass transfer between different magnetic domains, (2) shocks forming due to supersonic relative flow speeds between plasma populations, (3) turbulence providing energy dissipation across scales, and (4) plasma instabilities transferring energy between the plasma and waves. All these processes contribute to particle acceleration, which is one of the most researched topics within Solar–Terrestrial and astrophysical domains, and notorious for requiring understanding of both local microphysics and global scales. Below, we introduce some examples of these processes within systems having scales that can be addressed with the Vlasov approach. Simulations of non-thermal space plasmas encompass a vast range of scales, from the smallest ones (electron scales, ion kinetic scales) to local and even global structures. Table 1 lists typical ranges of a handful of plasma parameters encountered in different branches of space sciences and astrophysics. Especially in a larger astrophysical context, simulations cannot directly encompass all relevant spatial and temporal scales. It is important to note, however, that scientific results concerning kinetic effects can be achieved even without directly resolving all the spatial scales that may at first glance appear to be a requirement (Pfau-Kempf et al. 2018).

Reconnection is a process whereby oppositely oriented magnetic fields break and re-join, allowing a change in magnetic topology, plasma mixing, and energy transfer between different magnetic domains. Within the magnetosphere, reconnection occurs between the terrestrial northward oriented magnetic field and the magnetosheath magnetic field that mostly mimics the direction of the IMF, but can sometimes be significantly altered due to magnetosheath processes (Turc et al. 2017). Magnetospheric energy transfer is most efficient when the magnetosheath magnetic field is southward, while for northward IMF reconnection locations move to the nightside lobes (Palmroth et al. 2006c). Actively researched topics focus on understanding the nature and location of reconnection as a function of driving conditions (e.g., Hoilijoki et al. 2014; Fuselier et al. 2017). Energy transfer at the magnetopause sets the near-Earth space into a global circulation (Dungey 1961), leading to reconnection in the magnetospheric tail. The tail centre hosts a hot and dense plasma sheet, the home of perhaps the most diligent scientific investigations within magnetospheric physics. Especially in focus have been explosive events during which the magnetospheric tail disrupts and launches plasma into space, accelerating particles and causing abrupt changes in the global geospace (e.g., Sergeev et al. 2012). Reconnection has been suggested as one of the main drivers of tail disruptions (e.g., Angelopoulos et al. 2008), while other theories related to plasma kinetic instabilities exist as well (e.g., Lui 1996). Tail disruptions have important analogues in solar eruptions (e.g., Birn and Hesse 2009), and investigating the tail disruptions with global simulations together with in situ measurements may shed light on other astrophysical systems as well.

Table 1 Typical plasma parameters and scales for solar–terrestrial and astrophysical phenomena

Collisionless shocks form due to plasma populations flowing supersonically with respect to each other, redistributing flow energy into thermal energy and accelerating particles (e.g., Balogh and Treumann 2013; Marcowith et al. 2016). Shock fronts such as those found at supernova explosions are efficient accelerators (Fermi 1949). Diffusive shock acceleration (e.g., Axford et al. 1977; Krymskii 1977; Blandford and Ostriker 1978; Bell 1978) is the primary source of solar energetic particles, and occurs from the non-relativistic (e.g., Lee 2005) to the hyper-relativistic (Aguilar et al. 2015) energy regimes. Shock–particle interactions including kinetic effects have been modelled using various analytical and semiempirical methods (see, e.g. Afanasiev et al. 2015, 2018; Hu et al. 2017; Kozarev and Schwadron 2016; Le Roux and Arthur 2017; Luhmann et al. 2010; Ng and Reames 2008; Sokolov et al. 2009; Vainio et al. 2014), but drastic approximations are usually required in order to model the whole acceleration process, and Vlasov methods have not yet been utilised. The classic extension of hydrodynamic shocks into the MHD regime has been challenged by a number of hybrid models due to, e.g., shock reformation (Caprioli and Spitkovsky 2013; Hao et al. 2017) and anisotropic pressure and energy imbalances due to non-thermal particle populations (Chao et al. 1995; Génot 2009). Only a self-consistent treatment including kinetic effects is capable of describing diffusive shock acceleration accurately. Recent works coupling shocks and high-energy particle effects include, e.g., those by Guo and Giacalone (2013), Bykov et al. (2014), Bai et al. (2015) and van Marle et al. (2018). Challenges associated with simulating shocks include modelling gyrokinetic scales for ions whilst allowing the simulation to cover the large spatial volume involved in the particle trapping and energisation process. Radially expanding shock fronts within strong magnetic domains result in a requirement for high resolution both spatially and temporally. Modern numerical approaches usually make some sacrifices, e.g. performing 1D–2V self-consistent calculations or advancing 3D–1V semi-analytical models. The Vlasov approach is especially interesting in probing the physics of particle injection, trapping, acceleration and escape.

In addition to shock acceleration, kinetic simulations of solar system plasmas are also applied to the study of solar wind turbulence. How energy cascades from large to small scales and is eventually dissipated is an outstanding question, which can be addressed using kinetic simulations (Bruno and Carbone 2013). Hybrid-Vlasov simulations (Valentini et al. 2010; Verscharen et al. 2012; Perrone et al. 2013) have in particular been utilised to study the fluctuations around the ion inertial scale, which is of particular importance as it marks the transition between the fluid and kinetic scales.

Plasma instabilities arise when a source of free energy in the plasma allows a wave mode to grow non-linearly. They are ubiquitous in our universe, and play an important role in both solar–terrestrial physics, where, for example, the Kelvin–Helmholtz instability transfers solar wind plasma into the Earth’s magnetosphere (e.g., Nakamura et al. 2017), and in astrophysical media, for instance in accretion disks, where the turbulence is driven by the magnetorotational instability. Vlasov models have been applied to the study of many instabilities, such as the Rayleigh–Taylor instability (Umeda and Wada 2016, 2017), Weibel-type instabilities (Inglebert et al. 2011; Ghizzo et al. 2017), and the Kelvin–Helmholtz instability (Umeda et al. 2010b).

3 Modelling with the Vlasov equation

Simulating plasma poses numerous challenges from the modelling perspective. Constructing a perfect representation of the plasma, where every single charge carrier is factored into the equations, would require an immense amount of computational power. The spatial scales required to fully describe the plasma environment range from the microscale Debye length up to the macroscale of the phenomena one is trying to simulate. The particle density is such that performing a fully kinetic simulation of even a low-density plasma self-consistently, using Maxwell’s equations for the electric and magnetic fields and the Lorentz equation for the protons and electrons, is out of reach even for present-day large supercomputers. Currently, only plasma phenomena that occur on relatively short spatial and temporal scales, such as magnetic reconnection and high-frequency waves, are modelled using this approach. For this reason, adopting a continuous description of the velocity space and using distribution functions as the main simulated quantity provides an attractive way of simulating plasmas.

Plasmas can also be treated as a fluid, and the standard way of doing that is using the magnetohydrodynamic (MHD) approximation. This modelling approach is more suitable for large domain sizes where detailed information is not necessary; however, MHD does not offer information about small spatio-temporal scales. Statistical mechanics provides information about a neutral gas based on assumptions made at the atomic scale. The kinetic plasma approach treated in the following is based on this same principle. It describes the plasmas using distribution functions in phase space and uses Maxwell’s equations and the Vlasov equation to advance the fields and the distribution functions, respectively.

3.1 The Vlasov equation

In plasmas, as in neutral gases, the dynamical state of every constituent particle can be described by its position (\(\mathbf {x}\)) and momentum (\(\mathbf {p}\)) (or velocity \(\mathbf {v}\)) at a given time t. It is also common to separate the different species s in a plasma (electrons, protons, helium ions, etc). Accordingly, the dynamical state of a system of particles of species s, at a given time, can be described by a distribution function \(f_s(\mathbf {x},\mathbf {v},t)\) in 6-dimensional space, also called phase space.

The distribution function \(f_{s}(\mathbf {x},\mathbf {v},t)\) represents the phase-space density of the species inside a phase-space volume element of size \(\mathrm {d}^3\mathbf {x} \mathrm {d}^3\mathbf {v}\) at the point (\(\mathbf {x},\mathbf {v}\)) at time t. Hence, in a system with N particles, integrating over the spatial volume \(\mathcal {V}_r\) and the velocity volume \(\mathcal {V}_v\) (i.e., the entire phase-space volume \(\mathcal {V}\)) one obtains

$$\begin{aligned} N = \int _{\mathcal {V}_r}\int _{\mathcal {V}_v} f_{s}(\mathbf {x},\mathbf {v},t) \, \mathrm {d}^3\mathbf {x} \, \mathrm {d}^3\mathbf {v}. \end{aligned}$$
(1)

The next step is to describe the time evolution of the distribution function under given external conditions. The Boltzmann equation,

$$\begin{aligned} \frac{\partial f_{s}}{\partial t} + \mathbf {v} \cdot \frac{\partial f_{s}}{\partial \mathbf {x}} + \frac{\mathbf {F}}{m} \cdot \frac{\partial f_{s}}{\partial \mathbf {v}} = \left( \frac{\partial f_{s}}{\partial t} \right) _\mathrm{coll} \end{aligned}$$
(2)

uses the distribution function \(f_{s}\) to describe the collective behaviour of a system of particles subject to collisions and external forces \(\mathbf {F}\), where the term on the right-hand side represents the change of the distribution function due to collisions. Its derivation starts from the standard equation of motion and also takes Liouville’s theorem into account (see Sect. 3.3), and it is therefore valid for any Hamiltonian system. In plasmas, the Lorentz force takes the role of the external force and collisions between particles are often neglected. Under these two assumptions, one obtains the Vlasov equation (Vlasov 1961), often called “the collisionless Boltzmann equation”:

$$\begin{aligned} \frac{\partial f_s}{\partial t} + \mathbf {v} \cdot \frac{\partial f_s}{\partial \mathbf {x}} + \frac{q_s}{m_s} \left( \mathbf {E} + \mathbf {v} \times \mathbf {B} \right) \cdot \frac{\partial f_s}{\partial \mathbf {v}} = 0. \end{aligned}$$
(3)

If a significant part of the plasma acquires high enough kinetic energy, relativistic effects start to become important. It can be shown that the Vlasov equation (3) is Lorentz-invariant and therefore holds in such cases if v is simply considered to be the proper velocity (Thomas 2016). There are only very few numerical applications related to space or astrophysics that directly solve the relativistic Vlasov equation (as opposed to the particle-in-cell approach, which solves the same physical system through statistical sampling of particles and their propagation, compare Sect. 1), but it is more common in other contexts, such as laser–plasma interaction applications (Martins et al. 2010; Inglebert et al. 2011). By using a frame that is propagating at relativistic speeds, a Lorentz-boosted frame, the smallest time or space scales to be resolved become larger and the plasma length shrinks due to Lorentz contraction, which shortens the simulation execution times.

3.2 Closing the Vlasov equation system

In any simulation, it is necessary to couple the Vlasov equation with the field equations to form a closed set of equations. The Vlasov equation deals with the time evolution of the distribution function and uses the electromagnetic fields as input. Thus the fields need to be evolved based on the updated distribution function. There are two main ways of closing the equation set: the electrostatic approach, which uses the Poisson equation to close the system, and the electromagnetic approach, which uses the Maxwell equations to that end. They are typically referred to as the Vlasov–Poisson and the Vlasov–Maxwell systems of equations. With appropriate approximations, the system can also be closed without solving the Vlasov equation for all species.

3.2.1 The Vlasov–Poisson equations

The Vlasov–Poisson equations model plasma in the electrostatic limit without a magnetic field (corresponding to the assumption that \(v/c \rightarrow 0\) for any relevant velocity v in the system). Thus Eq. (3) takes the form

$$\begin{aligned} \frac{\partial f_s}{\partial t} + \mathbf {v} \cdot \frac{\partial f_s}{\partial \mathbf {x}} + \frac{q_s}{m_s} \mathbf {E} \cdot \frac{\partial f_s}{\partial \mathbf {v}} = 0 \end{aligned}$$
(4)

for all species and the system is closed by the Poisson equation

$$\begin{aligned} \nabla ^2\varPhi + \frac{\rho _q}{\epsilon _0} = 0, \end{aligned}$$
(5)

where \(\varPhi \) is the electric potential and \(\epsilon _0\) is the vacuum permittivity. Using Eq. (1), the total charge density \(\rho _q\) is obtained by taking the zeroth moment of f for all species:

$$\begin{aligned} \rho _q = \frac{1}{\mathcal {V}_r} \sum _s q_s N_s = \sum _s q_s \int _{\mathcal {V}_v} f_s(\mathbf {x},\mathbf {v},t) \, \mathrm {d}^3\mathbf {v}. \end{aligned}$$
(6)
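
As a concrete illustration of Eqs. (4)–(6), the following minimal sketch computes the charge density of a discretised 1D–1V electron distribution as its zeroth velocity moment and solves the periodic Poisson equation (5) spectrally to obtain the electrostatic field. The grid sizes, the normalised units, and the perturbed Maxwellian initial condition are arbitrary choices for illustration, not tied to any particular code.

```python
import numpy as np

# Hypothetical 1D-1V grid (electrons only, normalised units)
nx, nv = 64, 128
L, vmax = 4.0 * np.pi, 6.0
x = np.linspace(0.0, L, nx, endpoint=False)
v = np.linspace(-vmax, vmax, nv)
dv = v[1] - v[0]
q, eps0 = -1.0, 1.0                      # normalised charge and permittivity

# Perturbed Maxwellian f(x, v), as in a Landau-damping setup
f = (1.0 + 0.01 * np.cos(0.5 * x))[:, None] * \
    np.exp(-0.5 * v**2)[None, :] / np.sqrt(2.0 * np.pi)

# Zeroth moment (Eq. 6): charge density, with a neutralising ion background
rho_q = q * f.sum(axis=1) * dv
rho_q -= rho_q.mean()                    # immobile ions ensure net neutrality

# Periodic Poisson solve (Eq. 5) in Fourier space: -k^2 Phi = -rho_q / eps0
k = 2.0 * np.pi * np.fft.fftfreq(nx, d=L / nx)
rho_hat = np.fft.fft(rho_q)
phi_hat = np.zeros_like(rho_hat)
phi_hat[1:] = rho_hat[1:] / (eps0 * k[1:]**2)
E = np.fft.ifft(-1j * k * phi_hat).real  # E = -dPhi/dx

print(E[:4])
```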

3.2.2 The Vlasov–Maxwell equations

In the electromagnetic case, the Vlasov equation (3) is retained for all species and complemented by the Maxwell equations, namely the Ampère law

$$\begin{aligned} \nabla \times \mathbf {B} = \mu _0\mathbf {j} + \frac{1}{c^2}\frac{\partial \mathbf {E}}{\partial t}, \end{aligned}$$
(7)

the Faraday law

$$\begin{aligned} \nabla \times \mathbf {E} = -\frac{\partial \mathbf {B}}{\partial t}, \end{aligned}$$
(8)

and the Gauss laws

$$\begin{aligned} \nabla \cdot \mathbf {B} = 0 \quad \mathrm {and}\quad \nabla \cdot \mathbf {E} = \frac{\rho _q}{\epsilon _0}. \end{aligned}$$
(9)

Usually in a numerical scheme only Eqs. (7) and (8) are discretised. If Eq. (9) is not satisfied by the numerical method used, numerical instabilities can occur, because the underlying system requires the magnetic field to remain divergence-free and Gauss’s law to hold.

3.2.3 Hybrid-Vlasov systems

The hybrid-Vlasov systems retain only the Vlasov equation for the ions, thus neglecting the electrons to a certain extent. This has the advantage that the model is not required to resolve the short temporal and spatial scales associated with electron dynamics. Typically, the system is closed by taking moments of the Vlasov equation and making approximations pertinent to the simulation system at hand.

Integrating (3) over the velocity space, one gets the continuity equation or the zeroth moment of the Vlasov equation:

$$\begin{aligned} \frac{\partial n_s}{\partial t} + \nabla \cdot \left( n_s \mathbf {V}_s \right) = 0, \end{aligned}$$
(10)

where \(n_s\) and \(\mathbf {V}_s\) are the number density and the bulk velocity of species s, respectively. Multiplying (3) by the momentum \(m_s \mathbf {v}\) and integrating over the velocity space, one obtains the equation of motion or the first moment of the Vlasov equation:

$$\begin{aligned} n_s m_s \left( \frac{\partial \mathbf {V}_s}{\partial t} + (\mathbf {V}_s \cdot \nabla ) \mathbf {V}_s \right) - n_s q_s \left( \mathbf {E} + \mathbf {V}_s \times \mathbf {B} \right) + \nabla \cdot \mathcal {P}_s = 0, \end{aligned}$$
(11)

where \(\mathcal {P}_s\) is the pressure tensor of species s, which can in turn be obtained as the second moment of \(f_s\). This leads to a chain of equations where each step depends on the next moment of \(f_s\). The most common closure of hybrid-Vlasov systems is taken at this level by summing the electron and ion equations of motion and neglecting terms based on the electron-to-ion mass ratio \(m_\mathrm{e}/m_\mathrm{i} \ll 1\), leading to the generalised Ohm’s law

$$\begin{aligned} \mathbf {E} + \mathbf {V}\times \mathbf {B} = \frac{\mathbf {j}}{\sigma } + \frac{\mathbf {j}\times \mathbf {B}}{n_e e} - \frac{\nabla \cdot \mathcal {P}_\mathrm{e}}{n_e e} + \frac{m_\mathrm{e}}{n_e e^2}\frac{\partial \mathbf {j}}{\partial t}, \end{aligned}$$
(12)

where \(\sigma \) is the conductivity, e is the elementary charge, \(n_e\) is the electron number density, and \(\mathbf {j}\) is the total current density. In the limit of slow temporal variations, the rightmost electron inertia term is typically dropped from the equation. Further, assuming high conductivity, one is left with the Hall term \(\mathbf {j}\times \mathbf {B}/\rho _q\) and the electron pressure gradient term \(\nabla \cdot \mathcal {P}_\mathrm{e}/\rho _q\) on the right-hand side. The electron pressure term can be handled in a number of ways, such as using isobaric or isothermal assumptions. If electrons are assumed to be completely cold, the equation can be written as the Hall MHD Ohm’s law

$$\begin{aligned} \mathbf {E} + \mathbf {V}\times \mathbf {B} = \frac{\mathbf {j}\times \mathbf {B}}{n_e e}. \end{aligned}$$
(13)

Thus the hybrid-Vlasov system of equations retains the Vlasov equation (3) for ions and the Maxwell–Ampère and Maxwell–Faraday equations (7) and (8) but replaces the Gauss equations by a generalised Ohm equation (12) with appropriate approximations. If rapid field fluctuations are excluded from the solution, the displacement current from Ampère’s law can be omitted, resulting in the Darwin approximation and yielding

$$\begin{aligned} \nabla \times \mathbf {B} = \mu _0\mathbf {j}. \end{aligned}$$
(14)

This makes the equation system more easily tractable.
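
To make this closure concrete, the following sketch evaluates the Ohm's law (12) in its common hybrid form (Hall and electron pressure gradient terms retained, resistive and electron inertia terms dropped) on a periodic 1D grid, with the current obtained from the Darwin-approximated Ampère law (14). The field profiles, the electron density, and the scalar isothermal electron pressure are placeholder choices for illustration only.

```python
import numpy as np

mu0, e = 4e-7 * np.pi, 1.602e-19
nx, dx = 256, 1.0e5                        # 100 km cells, arbitrary
L = nx * dx
x = np.arange(nx) * dx

# Placeholder plasma state: bulk velocity V, magnetic field B, electron density and pressure
V = np.zeros((nx, 3)); V[:, 0] = 4.0e5                                  # 400 km/s flow
B = np.zeros((nx, 3)); B[:, 2] = 5e-9 * (1 + 0.1 * np.sin(2 * np.pi * x / L))
n_e = 1.0e6 * (1 + 0.05 * np.cos(2 * np.pi * x / L))                    # ~1 cm^-3
P_e = n_e * 1.381e-23 * 1.0e5                                           # isothermal electrons, 1e5 K

def ddx(a):
    """Central difference with periodic wrap-around."""
    return (np.roll(a, -1, axis=0) - np.roll(a, 1, axis=0)) / (2 * dx)

# Darwin Ampere law (Eq. 14): j = curl(B)/mu0, with d/dy = d/dz = 0 in 1D
j = np.zeros((nx, 3))
j[:, 1] = -ddx(B[:, 2]) / mu0
j[:, 2] = ddx(B[:, 1]) / mu0

# Generalised Ohm's law (Eq. 12) without resistive and electron-inertia terms
hall = np.cross(j, B) / (n_e[:, None] * e)
grad_Pe = np.zeros((nx, 3)); grad_Pe[:, 0] = ddx(P_e)                   # scalar-pressure gradient
E = -np.cross(V, B) + hall - grad_Pe / (n_e[:, None] * e)

print(E[0])
```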

Note that conversely, neglecting ion dynamics and retaining the electron Vlasov equation can be advantageous in certain contexts and is formally equivalent to the above with switched electron and ion variables.

3.3 Properties of the Vlasov equation

When solving the Vlasov equation, there are a number of useful properties that can be exploited in numerical solvers. In its fundamental structure, the Vlasov equation is a 6D advection equation, equivalent to stating that the material derivative of the phase-space density is zero. In the absence of any source terms, finding a solution at any given point in time requires determining the motion of the phase-space density. One particularly handy property follows from Liouville’s theorem, from which the Vlasov equation is derived. It states that phase-space density is constant along the trajectories of the system. This means that a solution of the system at one point in time can be followed forward or backward in time arbitrarily, as long as the trajectories of phase-space elements, the characteristics of the Vlasov equation, are known.

One consequence of Liouville’s theorem is that initial density perturbations tend to form smaller and smaller structures as trajectories of the phase-space regions with different densities converge over time. In physical reality, this so-called filamentation has a natural cutoff at scales where diffusive scattering effects become important, however, this is not part of the pure mathematical description of the Vlasov equation. Therefore, numerical implementations need to address this issue either by explicit filtering steps, or innate numerical diffusivity.

A fundamental consideration in any physical modelling is the conservation of certain quantities, like mass, momentum, energy, and electric charge. This of course applies to Vlasov-based plasma modelling as well. Variational approaches have been used to derive the Vlasov–Maxwell and Vlasov–Poisson systems of equations as well as reduced forms (e.g., Marsden and Weinstein 1982; Ye and Morrison 1992; Brizard 2000; Brizard and Tronko 2011). Care has to be taken when developing numerical solutions of the Vlasov equation that quantities relevant to the problem to be solved are conserved adequately by the method (e.g., Filbet et al. 2001; Crouseilles et al. 2010; Cheng et al. 2013, 2014; Becerra-Sagredo et al. 2016; Einkemmer and Lubich 2018).

4 Numerical modelling and HPC aspects

Finding solutions to the Vlasov equation in order to model physical systems typically involves computer simulations. Therefore, the phase space density \(f(\mathbf {x},\mathbf {v},t)\) needs to be numerically represented, which strongly influences the choice of solvers and the resulting simulation code. This section is structured around different numerical representations of f.

A problem common to all numerical approaches solving the Vlasov equation is the curse of dimensionality—to fully reproduce all physical behaviour, the simulation domain must be 6-dimensional, with all 6 dimensions being of sufficient resolution or fidelity for the desired physical system. This considerably impacts the size of the phase space and hence the computational burden of the algorithm.

Fig. 2 Different ways of numerically representing the phase-space density \(f(\mathbf {x},\mathbf {v},t)\): a In a Eulerian grid, every grid cell stores the local value of phase-space density, which is transported across cell boundaries. b Spectral representations (shown here: Fourier-space in \(\mathbf {v}\)) allow for some update steps of phase-space density to be performed locally. c In a tensor train representation, phase-space density is represented as a sum of tensor products of single coordinates’ distribution functions which get transported individually

4.1 Eulerian approach

In a straightforward manner, the phase-space distribution function \(f(\mathbf {x},\mathbf {v},t)\) can be discretised on a Eulerian grid, which can then be operated on by different kinds of solvers (see Fig. 2a). The structure of the Vlasov equation allows both finite volume (FV) and semi-Lagrangian solvers to be employed, and both have been used with some success (Arber and Vann 2002). Discretisation of velocity space to a finite grid size \(\varDelta v\) also automatically imposes a lower limit for phase-space filamentation (compare Sect. 3.3), at which the grid will naturally smooth out the structure. In some cases this is a purely numerically diffusive process, whereas others use explicit smoothing, filtering or subgrid modelling (e.g., Klimas 1987; Klimas and Farrell 1994). The 6-dimensional structure of the phase space, along with the physical scales and resolutions imposed by the underlying physical system (compare Sect. 2), make a Eulerian representation in a Cartesian grid computationally impractical in the vast majority of cases concerning a large simulation volume.

Let us consider as an example a simulation of the Earth’s entire magnetosphere using a full 3D–3V, Eulerian hybrid-Vlasov model. Let it extend up to the lunar orbit (\(x \sim \pm \,60 \, R_E\) in every direction) resolving approximately the solar wind ion inertial length (\(\varDelta x \sim 100\) km), and let the velocity space encompass typical solar wind velocities with some safety margin (\(v \sim \pm \,2000\) km/s) while resolving the solar wind thermal speed (\(\varDelta v \sim 30\) km/s). In this case the resulting phase space would contain a total of \(10^{18}\) cells. If each of them were represented by a single-precision floating point value, a minimum of 4 EiB of memory would be required to represent it!
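
The estimate can be reproduced with a few lines of arithmetic; the numbers below simply restate the assumptions of the example and yield about \(10^{18}\) cells and roughly 4 EiB at single precision.

```python
R_E = 6371e3                 # Earth radius in metres
dx  = 100e3                  # spatial resolution, ~solar wind ion inertial length
dv  = 30e3                   # velocity resolution, ~solar wind thermal speed

n_spatial  = (2 * 60 * R_E / dx) ** 3        # +-60 R_E in each of three spatial dimensions
n_velocity = (2 * 2000e3 / dv) ** 3          # +-2000 km/s in each velocity dimension
cells = n_spatial * n_velocity               # ~1e18 phase-space cells

bytes_total = 4 * cells                      # one single-precision float per cell
print(f"{cells:.2e} cells, {bytes_total / 2**60:.1f} EiB")   # ~1e18 cells, ~3.7 EiB
```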

Fortunately, there are many possibilities for simplification of the computational grid size:

  • Reduction of phase-space dimension, if the physical system under consideration allows it, is an easy and efficient way to reduce computational load. Simulations of longitudinal wave instabilities (Jenab and Kourakis 2014; Shoucri 2008) and fundamental studies of filamentation have been performed in a 1D–1V setup, whereas laser wakefield and wave interaction simulations tend to be modelled in 2D–2V or 2D–3V setups (Besse et al. 2007; Sircombe et al. 2004; Thomas 2016). Another possibility is to globally reduce the number of grid points by introducing a grid that is uneven with respect to Cartesian coordinates while remaining static during runtime; this is referred to as a sparse grid representation (Kormann and Sonnendrücker 2016; Guo and Cheng 2016).

  • Gyrokinetic simulations reduce the velocity space by dropping the azimuthal (gyrophase) velocity dimension around the magnetic field direction, thus assuming complete gyrotropy of the distribution functions (e.g., Görler et al. 2011).

  • Adaptively refined grids can be employed to reduce resolution and thus computational expense in areas of phase space that are considered less important for the physical problem at hand (Wettervik et al. 2017; Besse et al. 2008).

  • In many physical scenarios, large parts of phase space contain an extremely low, if not zero, density, and contribute nothing to the overall dynamic development. Suitable pruning of the phase-space grid can thus be performed to obtain a data structure that dynamically removes grid elements during runtime and keeps them only in regions deemed relevant for the physical system dynamics. The computational speed-up gained through the reduction of phase-space volume can in some cases be a tradeoff against physical accuracy, and needs to be carefully considered. We have implemented this option in Vlasiator, and call it the dynamic sparse phase space representation, discussed more in Sect. 5.2; a minimal sketch of such block-based pruning is given after this list. This method is not to be confused with the static sparse grid methods (Kormann and Sonnendrücker 2016; Guo and Cheng 2016), which are fundamentally dimension reduction techniques, similar to low-rank approximations.
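
The following sketch illustrates the idea of such a dynamically sparse, block-based velocity-space storage. The block size, the threshold value, and the dictionary container are illustrative choices only and do not describe Vlasiator's actual data structures (see Sect. 5.2).

```python
import numpy as np

BLOCK = 4            # velocity-space blocks of 4x4x4 cells
SPARSE_MIN = 1e-15   # phase-space density threshold for keeping a block

class SparseVelocitySpace:
    """Store only those velocity blocks whose peak density exceeds a threshold."""
    def __init__(self):
        self.blocks = {}                         # (i, j, k) block index -> 4x4x4 array

    def set_block(self, index, data):
        if np.max(np.abs(data)) >= SPARSE_MIN:
            self.blocks[index] = data            # keep the block
        else:
            self.blocks.pop(index, None)         # prune: content below threshold

    def density(self, dv):
        """Zeroth moment over all retained blocks."""
        return sum(b.sum() for b in self.blocks.values()) * dv**3

# Usage: a compact distribution occupies only a few blocks of a nominally huge velocity grid
vspace = SparseVelocitySpace()
vspace.set_block((0, 0, 0), np.full((BLOCK,) * 3, 1e-12))    # populated region -> stored
vspace.set_block((20, 5, 7), np.zeros((BLOCK,) * 3))         # empty region -> not stored
print(len(vspace.blocks), vspace.density(dv=30e3))
```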

In plasmas, the magnetic field \(\mathbf {B}\) makes the particles gyrate while the electric field \(\mathbf {E}\) causes them to accelerate and drift. It can be advantageous to take the characteristics of acceleration due to Lorentz’ force into consideration when choosing an appropriate grid for the numerical phase-space representation. Common ideas include:

  • A polar velocity coordinate system aligned with the magnetic field and centred around the drift velocity, \(\mathbf {v} = ( v_\parallel ,\, v_r,\, v_\phi ),\) in which the gyrophase coordinate \(v_\phi \) has a much lower resolution than \(v_\parallel \) and \(v_r\). This can be employed in cases where the velocity distribution functions are known not to deviate strongly from gyrotropy, i.e., to exhibit cylindrical symmetry with respect to the magnetic field direction; a minimal sketch of such a coordinate transformation is given after this list. However, the disadvantage of a polar velocity space over a Cartesian one is the more complex coordinate transformation required for transport into the spatial neighbours.

  • A Cartesian representation of velocity space, in which its coordinate axes co-rotate with the local gyration at every given spatial cell. Such a setup has the advantage that no transformation of velocity space due to the magnetic field will have to be performed at all, and no numerical diffusion due to gyration motion will occur. It does however come at the cost of more complicated spatial updates, since neighbouring spatial domains do no longer have identical velocity space axes. A suitable interpolation or reconstruction has to take place in the spatial transport, thus potentially negating the advantage in numerical diffusivity.
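
As an illustration of the first option above, the following sketch constructs a field-aligned, drift-centred orthonormal basis and maps a Cartesian velocity to \((v_\parallel, v_r, v_\phi)\). The numerical values are arbitrary examples, and the construction of the perpendicular unit vectors is one of several equivalent choices.

```python
import numpy as np

def to_field_aligned(v, B, V_drift):
    """Map a Cartesian velocity to (v_parallel, v_r, v_phi) about the drift frame."""
    b = B / np.linalg.norm(B)                    # unit vector along the magnetic field
    # Build two unit vectors perpendicular to b (any non-parallel seed vector works)
    seed = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(seed, b)) > 0.9:
        seed = np.array([0.0, 1.0, 0.0])
    e1 = np.cross(b, seed); e1 /= np.linalg.norm(e1)
    e2 = np.cross(b, e1)                         # (e1, e2, b) is right-handed
    w = v - V_drift                              # velocity in the drift frame
    v_par = np.dot(w, b)
    w_perp = w - v_par * b
    v_r = np.linalg.norm(w_perp)                 # perpendicular speed
    v_phi = np.arctan2(np.dot(w_perp, e2), np.dot(w_perp, e1))   # gyrophase
    return v_par, v_r, v_phi

print(to_field_aligned(np.array([4e5, 1e5, 0.0]),
                       np.array([0.0, 0.0, 5e-9]),
                       np.array([4e5, 0.0, 0.0])))
```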

For the actual process of solving the Vlasov equation, a fundamental decision has to be made about the structure of the code: whether the solution steps are to be performed in a proper 6D manner (e.g., Vogman 2016), or whether a Strang-splitting approach will be taken (Strang 1968; Cheng and Knorr 1976; Mangeney et al. 2002), in which the position and velocity space solution steps are performed independently of each other. Due to the large number of involved dimensions, and thus the computational cost of each solver step, the latter approach tends to have significant performance benefits, whilst still achieving convergence (Einkemmer and Ostermann 2014). Alternative time-splitting methods based on Hamiltonians have also been proposed (e.g., Crouseilles et al. 2015; Casas et al. 2017).
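
A minimal Strang-splitting skeleton in the spirit of Cheng and Knorr (1976) is sketched below. The integer-shift translation and acceleration sub-steps are crude placeholders standing in for the actual spatial and velocity-space solvers discussed in this section.

```python
import numpy as np

def translate(f, v, dt, dx):
    """Placeholder spatial step: shift each velocity slice of f[x, v] by v*dt (periodic in x)."""
    out = np.empty_like(f)
    for j, vj in enumerate(v):
        out[:, j] = np.roll(f[:, j], int(round(vj * dt / dx)))
    return out

def accelerate(f, a, dt, dv):
    """Placeholder velocity-space step: shift every spatial cell by a*dt (zero inflow at edges)."""
    shift = int(round(a * dt / dv))
    out = np.zeros_like(f)
    if shift == 0:
        out[:] = f
    elif shift > 0:
        out[:, shift:] = f[:, :-shift]
    else:
        out[:, :shift] = f[:, -shift:]
    return out

def strang_step(f, v, a, dt, dx, dv):
    """One Strang-split step: half spatial translation, full acceleration, half spatial translation."""
    f = translate(f, v, 0.5 * dt, dx)
    f = accelerate(f, a, dt, dv)
    return translate(f, v, 0.5 * dt, dx)
```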

If a Cartesian velocity grid is employed in the simulation, multiple families of solvers are available for it (Filbet and Sonnendrücker 2003b). In all cases, the diffusivity of the solver needs to be considered. In particular, uncontrolled velocity-space diffusion manifests itself as numerical heating, as the distribution function tends to broaden over time. Higher-order solvers and reconstruction methods, as well as explicit formulations in which moments of the distribution function are conserved, are therefore advisable (Balsara 2017).

The choice of a Eulerian representation of phase space brings certain implementation details for high performance computing (HPC) aspects with it. The relative ease of spatial decomposition into independent computational domains, which communicate through ghost cells, can be employed readily for Eulerian Vlasov simulations, providing a straightforward path to parallel implementations. On the other hand, the inherent limitations of Eulerian schemes (such as conditions for time steps, compare Sect. 5.3) limit their overall numerical efficiency, and the high-dimensional nature of phase space can lead to challenges in appropriately represented and resolved numerical grids. As so often in HPC, design decisions have to be based on the specific properties of the physical system under investigation.

4.1.1 Finite volume solvers

As the Vlasov equation (3) is fundamentally a hyperbolic conservation law in 6D, it can be solved using the well-established methods of Finite Volumes (FV, LeVeque 2002). In this approach, the phase-space fluxes are calculated through each interface of a discrete simulation volume (or phase-space cell) by reconstructing the phase-space density distribution through an appropriate interpolation scheme. The characteristic velocities at both sides of this interface are determined and the Riemann problem (Toro 2014) at each of these interfaces is solved to update the material content in each cell.

If Strang splitting is used to perform separate spatial and velocity-space updates, it is noteworthy that the state vector only contains a single scalar quantity (the phase-space density) and each cell interface update only needs to take a single characteristic velocity into consideration: For the update in a spatial direction, the characteristic is given by the corresponding cell’s velocity space coordinates, whereas in the velocity space update step, the acceleration due to magnetic, electric and external field forces is homogeneous throughout each spatial cell. The Riemann problem for the Vlasov update does therefore not require the solution or approximation of an eigenvalue problem, which significantly simplifies its solution in comparison to hydrodynamic or MHD FV solvers. This property also enables the efficient use of semi-Lagrangian solvers (discussed more in Sect. 4.4).
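
A minimal sketch of such a finite-volume update for one spatial direction is given below, using a first-order upwind (donor-cell) flux in which the single characteristic velocity of each velocity cell selects the upwind value; production solvers would replace the piecewise-constant donor values with higher-order reconstructions. The periodic grid and the usual CFL restriction \(|v| \varDelta t / \varDelta x \le 1\) are assumed.

```python
import numpy as np

def fv_spatial_update(f, v, dt, dx):
    """First-order upwind finite-volume spatial step for f[x, v], periodic in x.

    For each velocity cell the characteristic speed is simply v[j], so the
    interface flux is v[j] times the upwind (donor) phase-space density.
    Stable for |v| * dt / dx <= 1.
    """
    f_new = f.copy()
    for j, vj in enumerate(v):
        donor = f[:, j] if vj >= 0 else np.roll(f[:, j], -1)   # upwind cell value
        flux = vj * donor                                       # flux through the right face of each cell
        f_new[:, j] -= dt / dx * (flux - np.roll(flux, 1))      # net flux divergence
    return f_new
```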

As will be shown in Sect. 5, versions of the Vlasiator code until 2014 employed a FV formulation of its phase space update (von Alfthan et al. 2014) and numerous other implementations exist (Banks and Hittinger 2010; Wettervik et al. 2017). A comprehensive introduction to the implementation and thorough analysis of the behaviour of a fully 6D implementation of a FV Vlasov solver is given by Vogman (2016).

4.1.2 Finite difference solvers

While the Vlasov equation (3) could in principle be solved by directly employing finite difference methods, this approach does not seem to be favoured, and its applications in the literature appear to be limited to fundamental theory studies only (e.g., Schaeffer 1998; Holloway 1995). The biggest issue with finite difference formulations is the lack of explicit conservation of the moments of the distribution function and related quantities. While high-order methods can still maintain suitable approximate conservation properties, their computational demands and/or diffusivity make them impractical.

4.2 Spectral solvers

Instead of a direct discretisation of the phase-space density \(f(\mathbf {x},\mathbf {v}, t)\) in its \(\mathbf {x}\) and \(\mathbf {v}\) coordinates, a change of basis functions can be employed, each coming with benefits and limitations. The transformation of f into a different basis can be performed in the velocity coordinates only (cf. Fig. 2b), or in both spatial and velocity coordinates, depending on the physical application.

If a splitting scheme is employed, where velocity and real space advection updates are treated separately, the advection in a Fourier-transformed coordinate can be completely performed in Fourier space, as the transform of any coordinate \(x \rightarrow k_x\) results in the differential advection operator \(v_x \,\nabla _x\) turning into a simple multiplication:

$$\begin{aligned} v_x \, \nabla _x f\left( x\right) \xrightarrow {\mathrm {\small {Fourier}}} \mathrm {i}\, v_x k_x \, \tilde{f}\left( k_x\right) . \end{aligned}$$
(15)

However, for the acceleration update, this transformation brings in the additional complication that the acceleration \(\mathbf {a}\) would have to be independent of \(\mathbf {v}\), which is true for the electrostatic Vlasov–Poisson system, but incorrect in full Vlasov–Maxwell scenarios, due to the v-dependence of the Lorentz force. In order to accommodate velocity-dependent acceleration, solving a system in such a way typically requires multiple forward and backward Fourier transforms within one time step (Klimas and Farrell 1994).
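
For the translation step itself, the exactness of the Fourier-space update can be illustrated with the following sketch, which advects each velocity slice of a 1D–1V distribution by multiplication with the phase factor \(\exp (-\mathrm {i}\, v_x k_x \varDelta t)\), i.e., by integrating Eq. (15) exactly over a time step. The grid and the test pulse are arbitrary.

```python
import numpy as np

def spectral_translate(f, v, dt, L):
    """Exact spatial advection of f[x, v] over dt using a Fourier representation in x."""
    nx = f.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(nx, d=L / nx)       # spatial wavenumbers k_x
    f_hat = np.fft.fft(f, axis=0)                        # transform x -> k_x
    phase = np.exp(-1j * np.outer(k, v) * dt)            # exp(-i k v dt) for every (k, v) pair
    return np.real(np.fft.ifft(f_hat * phase, axis=0))   # back to configuration space

# Usage: a Gaussian pulse moving at a single velocity translates without numerical dispersion
x = np.linspace(0, 1, 64, endpoint=False)
v = np.array([0.5])
f = np.exp(-200 * (x - 0.3)**2)[:, None]
print(np.argmax(spectral_translate(f, v, dt=0.4, L=1.0)))   # peak has moved from x=0.3 to x=0.5
```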

The limit of filamentation in a thus-represented velocity space becomes the question of which maximum velocity space frequency \(\mathbf {k}_{v,\text {max}}\) is available, and the filamentation problem itself becomes a boundary problem at the maximum extents of velocity \(\mathbf {k}\)-space (Eliasson 2011). However, stability issues of this scheme remain under discussion (Figua et al. 2000; Klimas et al. 2017).

Finally, a full Fourier-space representation \(\tilde{f}\left( \mathbf {k}_r,\mathbf {k}_v, t\right) \), in which the real-space coordinate \(\mathbf {x} \rightarrow \mathbf {k}_r\) is transformed as well, is also a possibility. However, it further complicates the treatment of configuration and velocity space boundaries (Eliasson 2001). When used with periodic spatial boundary conditions, such a setup can be quite efficient for the study of kinetic plasma wave interactions.

Apart from the Fourier basis, other orthogonal function systems can be used as the basis for description of phase-space densities. A popular choice is presented by Hermite functions (Delzanno 2015; Camporeale et al. 2016), whose \(\mathbf {L}^2\) convergence behaviour closely matches that of physical velocity distribution functions, and whose property of being eigenfunctions to the Fourier transform can be used as a numerical advantage. Since the zeroth Hermite function \(H_0(\mathbf {v})\) is simply a Maxwellian particle distribution, a hybrid-Vlasov code with Hermitian basis functions should replicate MHD behaviour in this limit. Adaptive inclusion of higher-order Hermite functions then allows an increasing amount of kinetic physics to be numerically represented.
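
As a small numerical check of this property (a sketch only: it uses the probabilists' Hermite polynomials with an asymmetrically weighted basis, which is one of several normalisation conventions in the literature, and is not a description of the cited implementations), the expansion coefficients of a Maxwellian indeed reduce to the zeroth term:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

v = np.linspace(-8, 8, 2001)
f = np.exp(-0.5 * v**2) / np.sqrt(2 * np.pi)          # unit-density Maxwellian

# Expand f(v) = sum_n c_n He_n(v) exp(-v^2/2), using the orthogonality relation
# int He_m(v) He_n(v) exp(-v^2/2) dv = sqrt(2*pi) * n! * delta_mn
coeffs = []
for n in range(4):
    basis = He.hermeval(v, [0] * n + [1])             # He_n(v)
    c_n = np.trapz(f * basis, v) / (np.sqrt(2 * np.pi) * factorial(n))
    coeffs.append(c_n)

print(np.round(coeffs, 6))    # ~[0.398942, 0, 0, 0]: only the Maxwellian (n = 0) term survives
```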

A common problem of any kind of spectral method, be it Fourier-based or using any other choice of nonlocalised basis functions, is the formulation of boundary conditions. While microphysical simulations of wave or scattering behaviour can usually get away with periodic boundary conditions, macroscopic systems will require boundaries at which the interaction of plasma with solid or gaseous matter is to be modelled (such as planetary or stellar surfaces), as well as inflow and outflow boundaries. Due to the unavailability of suitable spectral formulations for these boundaries, spectral-domain solvers have not gained a foothold in modelling efforts of macroscopic astrophysical systems.

In any nonlocal choice of basis function for the phase-space representation, be it Fourier-, Hermite- or wavelet-based (Besse et al. 2008), extra thought has to be put into scalability of parallel solvers for it. If a change of basis function (such as a switch from a real-space to a Fourier space representation) is required as part of the simulation update step, this will typically not scale beyond hundreds of cores in supercomputing environments.

4.3 Tensor train

An entirely separate class of numerical representations for the phase-space density is provided by the tensor train formalism (Kormann 2015) illustrated in Fig. 2c. The idea behind this approach is inspired by Strang-splitting solvers, in which spatial and velocity dimensions are treated in individual and subsequent solver steps. The overall distribution function \(f(x_1,x_2,\ldots ,x_n)\) is represented as a tensor product of component basis functions,

$$\begin{aligned} f(x_1,x_2,\ldots ,x_n) = \prod _{k=1}^n f_k(x_k) \end{aligned}$$
(16)

in which each \(f_k(x_k)\) is only dependent on a single coordinate \(x_k\), and thus only affected by a single dimension’s update step. The generalised formulation is called the Tensor Train of ranks \(r_1\ldots r_n\) (compare Fig. 2c),

$$\begin{aligned} f(x_1,x_2,\ldots ,x_n) = \sum _{\alpha _1=1}^{r_1} \cdots \sum _{\alpha _n=1}^{r_n} \prod _{k=1}^n Q_k(\alpha _{k-1}, x_k, \alpha _k), \end{aligned}$$
(17)

in which the distribution function is entirely formulated in terms of sums of products of \(Q_k\), which themselves only depend on a single coordinate \(x_k\). Transport can be performed by individually affecting each \(Q_k\), followed by a rounding step to keep the tensor train compact.
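
In the two-dimensional (1D–1V) case the tensor train reduces to an ordinary low-rank matrix factorisation, which the following sketch illustrates with a truncated singular value decomposition; the grid, the chosen rank, and the test distribution are arbitrary.

```python
import numpy as np

# A non-separable 1D-1V phase-space density: a Maxwellian whose drift varies with x
x = np.linspace(0.0, 2.0 * np.pi, 128)
v = np.linspace(-6.0, 6.0, 96)
f = np.exp(-0.5 * (v[None, :] - 0.5 * np.cos(x)[:, None])**2)

# In two dimensions the tensor train reduces to a low-rank factorisation,
# obtained here from a truncated singular value decomposition
U, s, Vt = np.linalg.svd(f, full_matrices=False)
rank = 5
f_lowrank = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

# Storage drops from nx*nv values to roughly rank*(nx + nv) values, at a controllable error
print(f.size, rank * (len(x) + len(v)),
      np.max(np.abs(f - f_lowrank)) / np.max(f))
```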

While this approach has so far only been employed in low-dimensional settings and for feasibility studies, and attempts at large numerical simulations using tensor train models have not yet been performed, efforts to integrate them into existing codebases are underway.

Fig. 3

Illustration of Lagrangian and semi-Lagrangian approaches: a in a full Lagrangian solver, phase-space samples are moved along their characteristic trajectories, never mapped to a grid and the phase-space values in between are obtained by interpolation. b In a forward semi-Lagrangian method, the same process is used, but the phase-space values are redistributed to a grid at regular intervals. c A backward semi-Lagrangian method follows the characteristic of every target phase-space point backwards in time and obtains the source value by interpolation

4.4 Semi-Lagrangian and fully Lagrangian solvers

As a consequence of Liouville’s theorem (cf. Sect. 3.3), numerical solutions of the Vlasov equation can be elegantly formulated in Lagrangian and semi-Lagrangian ways, by following the characteristics in phase space. Since the spatial velocity of any point in phase space is simply given by its velocity space coordinates, and its acceleration due to the Lorentz force is provided by the local electromagnetic field quantities, a unique characteristic for each point in phase space is easily obtained in a simulation (cf. Sect. 4.1.1).

As the simulation progresses, the distribution of these sample points will shift, maintaining their initial phase-space density values, and the volumes in between them obtain phase-space density values through interpolation. If necessary, new samples can be created, or existing ones merged, where filamentation requires it. In essence, fully Lagrangian simulation codes (Kazeminezhad et al. 2003; Nunn 2005; Jenab and Kourakis 2014) track the motion of samples of density through phase space, stepping forward in time, resulting in an updated phase-space distribution. This is illustrated in Fig. 3a. Sometimes, these methods are referred to as Lagrangian particle methods, as each phase-space sample can be modelled as a macroparticle.

Much more common than the fully Lagrangian formulation of Vlasov solvers is the family of semi-Lagrangian solvers (Sonnendrücker et al. 1999). In these, the phase-space samples to be propagated are obtained at every time step from a Eulerian description of phase space, their transport along the characteristics is calculated within the time step, and the resulting updated phase-space density is sampled back onto a Eulerian grid (which can be either structured or unstructured, see Besse and Sonnendrücker 2003). This process can be performed either forwards in time (Crouseilles et al. 2009, see Fig. 3b), in which the source grid points are scattered into the target locations, or backwards in time (Sonnendrücker et al. 1999; Pfau-Kempf 2016), where each target grid point performs a gather operation, interpolating spatially within the state of the previous time step (Fig. 3c). Backwards semi-Lagrangian methods are sometimes also referred to as Flux Balance Methods (see Filbet et al. 2001). Either way, an interpolation step will be involved which may again lead to significant numerical diffusion, unless methods are used to minimise it. Some of the more common interpolation procedures used are cubic splines and Hermite reconstruction, because they produce smooth results with reasonable accuracy and are less dissipative than other methods using continuous interpolations (Filbet and Sonnendrücker 2003a). Lagrange interpolation methods produce more accurate results but require higher-order polynomials and large stencils to limit diffusion. The high-order discontinuous Galerkin method for spatial discretisation, along with a semi-Lagrangian time stepping method, has also been used in Vlasov–Poisson systems, providing an improvement in accuracy compared to previously used techniques (Rossmanith and Seal 2011). The flexibility of combining different approaches is also seen in a recent particle-based semi-Lagrangian method for solving the Vlasov–Poisson equation (Cottet 2018).
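
To make the backward gather step concrete, the following minimal Python sketch (constant advection speed, periodic grid, and simple linear interpolation are illustrative simplifications; production solvers typically use cubic splines or other high-order reconstructions) advects a 1D profile without any CFL restriction on the time step.

```python
import numpy as np

# backward semi-Lagrangian update for df/dt + a df/dx = 0 on a periodic grid
nx, a, dt, nsteps = 200, 1.0, 0.05, 100   # note: a*dt spans 10 grid cells, no CFL limit
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
f = np.exp(-((x - 0.3) / 0.05)**2)        # initial profile

for _ in range(nsteps):
    x_dep = (x - a * dt) % 1.0            # trace each target point's characteristic back
    idx = np.floor(x_dep / dx).astype(int)
    w = x_dep / dx - idx                  # gather the source value by linear interpolation
    f = (1.0 - w) * f[idx % nx] + w * f[(idx + 1) % nx]
```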

Cheng and Knorr (1976) were the first authors to employ semi-Lagrangian updates of a Vlasov–Poisson problem in a Strang-splitting setup, which they refer to as the time-splitting scheme, in which advection in ordinary space and acceleration in velocity space are treated as independent substeps. Mangeney et al. (2002) later formulated a Strang-splitting scheme for the Vlasov–Maxwell equations. As such a splitting scheme performs acceleration and translation steps separately, the phase-space trajectory of any simulation point approximates its physical counterpart in a staircase-like, dimension-by-dimension pattern.

4.5 Field solvers

The Vlasov equation does not stand alone in describing the physical system under consideration, but requires a further prescription of the fields introducing the force terms. In the vast majority of cases in computational astrophysics, these will be electromagnetic forces self-consistently produced through the motion of charged particles in plasma, although there have been examples of Vlasov-gravity simulations (Guo and Li 2008), in which the Poisson equation was solved based on the simulation’s mass distribution. In a few cases, the fields affecting the phase-space distributions are considered entirely an external simulation input, with no feedback from the phase-space density onto the fields (Palmroth et al. 2013); these can be called “test-Vlasov” simulations, in analogy to test-particle runs. They are particularly useful as test cases before the fully operational code can be launched.

A key requirement for any field solver is to preserve the solenoidality of the magnetic field \(\mathbf {B}\) expressed by Eq. (9). There are two main avenues used to achieve this goal (e.g., Tóth 2000; Balsara and Kim 2004; Zhang and Feng 2016, and references therein). Either the field reconstruction is divergence-free by design, such as the one used in Vlasiator (see Sect. 5.3), or a procedure is needed to periodically clean up the divergence of \(\mathbf {B}\) arising from numerical errors.

In the following sections, different solvers for electromagnetic fields (and their simplifications) will be discussed in relation to astrophysical Vlasov simulations. These are fundamentally very similar in structure to the field solvers used in other simulation methods, such as PIC and MHD, and can in many cases be adapted directly from these with little change.

4.5.1 Electrostatic solvers

If modelling a physical system in which the interaction of plasma with magnetic fields is of little importance (such as electrostatic wave instabilities, dusty plasmas, surface interactions (Chane-Yook et al. 2006) and other typically local phenomena), the magnetic field force (\(q \mathbf {v} \times \mathbf {B}\)) part of the Vlasov equation can be neglected, and a purely electrostatic system remains.

Neglecting the effects of \(\mathbf {B}\) completely leads to the Vlasov–Poisson system of equations (4) and (5), for which the field solver needs to find a solution to the Poisson equation at every time step. Since this is an elliptic differential equation solved instantaneously in time, no time step limit arises from the field solver itself. Typically, solvers use approximate iterative approaches, multigrid methods or Fourier-space solutions (Birdsall and Langdon 2004).
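
As an illustration of the Fourier-space route, the short Python sketch below (1D periodic grid, normalised units with \(\epsilon_0 = 1\), and an assumed charge-density perturbation) solves the Poisson equation and derives the electrostatic field from the potential.

```python
import numpy as np

# solve d^2(phi)/dx^2 = -rho (eps0 = 1) on a periodic grid, then E = -dphi/dx
nx, L = 256, 2.0 * np.pi
x = np.linspace(0.0, L, nx, endpoint=False)
rho = 0.1 * np.cos(3.0 * x)                        # assumed charge-density perturbation

k = 2.0 * np.pi * np.fft.fftfreq(nx, d=L / nx)     # angular wavenumbers of the grid
rho_k = np.fft.fft(rho)
phi_k = np.zeros_like(rho_k)
phi_k[1:] = rho_k[1:] / k[1:]**2                   # -k^2 phi_k = -rho_k; drop the k = 0 mode
E = np.fft.ifft(-1j * k * phi_k).real              # E = -dphi/dx
print(np.allclose(E, 0.1 / 3.0 * np.sin(3.0 * x)))  # matches the analytic solution
```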

Another option, if an initial solution for the electric field has been found (or happens to be trivial), is to update it in time using Ampère’s law in the absence of \(\mathbf {B}\),

$$\begin{aligned} \frac{\partial \mathbf {E}}{\partial t} = -\, \frac{\mathbf {J}}{\epsilon _0} \end{aligned}$$
(18)

in either an explicit finite-difference manner, or using more advanced implicit formulations (Cheng et al. 2014). Special care should however be taken to prevent the violation of the Gauss law [cf. Eq. (9)] by using appropriate numerical methods.

4.5.2 Full electromagnetic solvers

If the full plasma microphysics of both electrons and ions is to be considered, and particularly if radio wave or synchrotron emissions are intended outcomes of the system, one must use the full set of Maxwell’s equations. A popular and well-established family of electromagnetic field solvers is the finite difference time-domain (FDTD) approach, which has a longstanding history in electrical engineering applications. In formulating the finite differences for the \(\partial \mathbf {E} / \partial t \sim \nabla \times \mathbf {B}\) and \(\partial \mathbf {B} / \partial t \sim \nabla \times \mathbf {E}\) terms of Maxwell’s equations, it is often advantageous to use a staggered-grid approach, in which the electric field and magnetic field grids are offset from one another by half a grid spacing in every direction (Yee 1966). In this setup, every component of the electric field vector is surrounded by magnetic field components and vice versa, so that the finite difference evaluation of the curl can be performed without any need for interpolation.
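
The essence of the staggered update can be written down in a few lines; the Python sketch below is a minimal 1D vacuum example in normalised units (\(c = 1\), all numbers illustrative), with \(E_y\) on integer grid points and \(B_z\) on half-integer points so that each spatial derivative is a centred difference.

```python
import numpy as np

nx, dx = 400, 1.0
dt = 0.5 * dx                      # must respect the light-speed CFL condition dt <= dx/c
Ey = np.zeros(nx)                  # Ey sampled at x_i
Bz = np.zeros(nx - 1)              # Bz sampled at x_{i+1/2}, offset by half a cell
Ey[nx // 2] = 1.0                  # initial field perturbation

for _ in range(500):
    Bz -= dt * (Ey[1:] - Ey[:-1]) / dx          # dBz/dt = -dEy/dx (Faraday)
    Ey[1:-1] -= dt * (Bz[1:] - Bz[:-1]) / dx    # dEy/dt = -c^2 dBz/dx (Ampere, vacuum, J = 0)
```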

Care should be taken when employing FDTD solvers for studies of wave propagation at high frequencies or wave numbers, as the numerical dispersion relations of waves deviate from their physical counterparts at high \(\mathbf {k}\). This effect is anisotropic in nature and most strongly pronounced for diagonal propagation, due to the intrinsic differences in the manner by which grid-aligned and non-grid-aligned computations are handled. Kärkkäinen et al. (2006) and Vay et al. (2011) present a thorough analysis of this problem in the case of PIC simulations, and provide suggestions for mitigating its effects. The largest disadvantage of FDTD solvers is their stringent requirement to resolve the propagation of fields at the speed of light, thus leading to extremely short time step lengths. In order to simulate anything at non-microscopic timescales, other methods will need to be used. Fourier-space solvers of Maxwell’s equations are advantageous in this respect, as they do not come with fundamental time step limitations. This is offset by the fact that their parallelisation is more difficult, and the formulation of appropriate boundary conditions is not always possible (cf. Sect. 4.2).

4.5.3 Hybrid solvers

If large-scale phenomena with timescales much larger than the local light crossing time are being investigated, FDTD Maxwell solvers quickly lose their appeal. If magnetic field phenomena are still to be considered self-consistently in the simulation, appropriate modifications of the electrodynamic behaviour have to be made, so that their simulation with longer time steps becomes feasible.

One common way to remove the speed of light as a limiting factor is to eliminate the electromagnetic mode as a solution of Maxwell’s equations altogether, in a process called the Darwin approximation (see Sect. 3.2.3 and Schmitz and Grauer 2006; Bauer and Kunze 2005). In this process, the electric field is decomposed into its longitudinal and transverse components \(\mathbf {E} = \mathbf {E}_L + \mathbf {E}_T\), with \(\nabla \times \mathbf {E}_L = 0\) and \(\nabla \cdot \mathbf {E}_T = 0\). Only \(\mathbf {E}_L\) is allowed to participate in the temporal update of \(\mathbf {B}\), so that the electromagnetic mode drops out of the simulated physical system. As a result, the fastest remaining wave mode in the system becomes the Alfvén wave, and the maximum time step rises significantly, by a factor of \(c/v_A\).
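
The longitudinal/transverse split is conveniently carried out in Fourier space. The following Python sketch (a 2D periodic grid with a random test field, purely for illustration) projects a field onto its curl-free part \(\mathbf{E}_L = \mathbf{k}\,(\mathbf{k}\cdot \hat{\mathbf{E}})/k^2\) and keeps the divergence-free remainder as \(\mathbf{E}_T\).

```python
import numpy as np

nx = ny = 64
kx = 2.0 * np.pi * np.fft.fftfreq(nx)[:, None]
ky = 2.0 * np.pi * np.fft.fftfreq(ny)[None, :]
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                                     # avoid dividing by zero for the mean field

Ex, Ey = np.random.rand(nx, ny), np.random.rand(nx, ny)   # assumed test field
Exk, Eyk = np.fft.fft2(Ex), np.fft.fft2(Ey)
kdotE = kx * Exk + ky * Eyk                        # longitudinal projection E_L = k (k.E)/k^2
ExL = np.fft.ifft2(kx * kdotE / k2).real
EyL = np.fft.ifft2(ky * kdotE / k2).real
ExT, EyT = Ex - ExL, Ey - EyL                      # transverse (divergence-free) remainder
```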

Approximating the full set of Maxwell equations comes at the cost of not having a closed set of equations any more. As already shown in Sect. 3.2.3, the system is typically closed by providing a relation between the electric and magnetic field such as Eq. (12), called Ohm’s law. The level of complexity of Ohm’s law directly influences the simulation results as it immediately affects the kinetic physics described by the model.

4.6 Coupling schemes

A reduction of the computational burden of a model can be achieved by coupling different schemes in order to focus the use of the costlier kinetic model on the region(s) of interest while solving other parts of the system with less intensive algorithms. This is also a means of extending the simulation domain where one system is taken as the boundary condition of the other. Various classes of coupled models exist, depending on the coupling interface chosen.

One strategy is to define a spatial region of interest in which the expensive kinetic model is applied, embedded in a wider domain covered by a significantly cheaper fluid model. While the method is under investigation and has been tested on classic small problems (Rieke et al. 2015), it has not been applied in the context of large-scale astrophysical simulations yet. However, this type of coupling is being used successfully in the case of fluid–PIC coupling (e.g., Tóth et al. 2016; Chen et al. 2017) and also in reconnection studies (e.g., Usami et al. 2013). The disadvantage of this strategy is that scale coupling cannot be addressed, as the kinetic effects do not spread into the fluid regime, and smaller-scale physics can only affect the solution in the domain in which the kinetic physics is in force.

Another strategy consists of defining the regions of interest in velocity space, that is, coupling a fluid scheme describing the large-scale behaviour of the system with a Vlasov model handling the suprathermal populations that introduce kinetic effects into the model. Again, this is a recent development for which a certain amount of theoretical work and testing on small cases has been done (e.g., Tronci et al. 2014) but not yet extended to larger-scale applications.

4.7 Computational burden of Vlasov simulations

Numerically representing the complete velocity phase space of a kinetic plasma system, including all required physical processes, is computationally intensive, and a large amount of data needs to be stored and processed. This section outlines the possible representations of the phase-space distribution, the corresponding solution methods, and their expected scaling. Computational requirements for equivalent PIC simulations are estimated in comparison, although due to their different tuneable parameters, a rigorous comparison is difficult and beyond the scope of this work.

As shown in Sect. 4.1, a blunt Eulerian discretisation of a magnetospheric simulation without any velocity space sparsity results in \(10^{18}\) sample points or a minimum of 4 EiB of memory, which is unrealistic on current and next-generation architectures. A first approach is to reduce the dimensionality from a full 3D space to a 2D slice, which reduces the number of sample points by a factor of the order of \(10^4\); obviously a further reduction to 1D yields a similar gain. With a sparse velocity space strategy as used in Vlasiator (see Sect. 5.2 below), a further reduction by a factor of \(10^2\)–\(10^3\) in the number of sample points can be achieved. Typically, modern large-scale kinetic simulations both with Vlasov-based methods (e.g., Palmroth et al. 2017) and PIC methods (e.g., Daughton et al. 2011) reach an order of magnitude of \(10^{11}\)–\(10^{12}\) sample points.

Beyond the number of sample points to be treated, the length of the propagation time step relative to the total simulation time aimed for is a crucial component of the computational burden of a model. Certain classes of solvers are limited in that respect, as they cannot allow a signal to propagate more than one sampling interval or discretisation cell per time step (see Sect. 5.3). With respect to hybrid models using the Darwin approximation, the inclusion of electromagnetic (light) waves in the model description results in a reduction of the allowable time step by a factor of \(10^3\) or more. Eulerian solvers typically have similar limitations, which can impact the time step by a factor of approximately \(10^2\) due to the Larmor motion in velocity space in the presence of a magnetic field. Subcycling strategies and the use of Lagrangian algorithms are common approaches to alleviate these issues, though at the potential cost of some physical detail.

A choice of basis function for the representation of velocity space other than Eulerian grids (like spectral or Hermite bases) can in many cases be beneficial to limit the memory requirements for reasonable approximations of the velocity space morphology. Care must however be taken, as non-local transformations from one basis to another, such as a Fourier transform, tend to have unfavourable scaling behaviour in massively parallel implementations. Tensor-train formulations appear to be a promising avenue with suitable computational properties for the representation of phase-space densities, but large-scale space plasma applications have not been demonstrated yet.

Higher computational requirements are expected if the physics of multiple particle species (especially electrons) is essential for the system under investigation. The need to represent multiple separate distribution functions multiplies the memory and computation requirements. The relative mass ratios of these species also have an effect on the kinetic time and length scales that need to be resolved. Going from a purely proton-based hybrid-Vlasov to a “full-Vlasov” simulation, in which electrons are included as a kinetic species, shortens the Larmor times by a factor of \(m_p / m_e \approx 1836\) and, depending on the employed solver, may require resolution of the plasma’s Debye length. The latter factor means that, with respect to the reference hybrid simulation considered above, which approximately resolves the ion kinetic scales, a spatial resolution increase of the rough order of \(10^5\) would be required (see Table 1), amounting to a staggering \(10^{15}\) more sampling points. In order to reduce this considerable overhead, a common approach is to rescale physical quantities such as the electron-to-proton mass ratio and/or the speed of light (e.g., Hockney and Eastwood 1988), while hoping to maintain quantitatively correct kinetic physics behaviour.

Most of these scaling relations likewise apply to PIC. In PIC, however, the parameter most strongly affecting the computational burden of the phase-space representation is the particle count. As a rule of thumb, a PIC simulation with a particle count similar to the sampling point count of an equivalent Vlasov simulation will have a similar overall computational cost. For many physical scenarios, this particle count can be chosen to be significantly lower (on the order of 100–1000 particles/cell), especially if noisy representations of the velocity spaces are acceptable. In simulations with high dynamic density contrasts, in which certain simulation regions become depleted of particles, as well as setups in which a minimisation of sampling noise is essential (such as investigations of nonlinear wave phenomena), PIC and Vlasov simulations are expected to reach a break-even point.

4.8 Achievements in Vlasov-based modelling

The progress of available scientific computing capabilities towards and beyond petascale in the last decade has driven the interest in and applicability of Vlasov-based methods to multidimensional space and astrophysical plasma problems. Table 2 compiles existing research work using direct solutions of the Vlasov equation in plasma physics. Table 2 only includes works with a direct link to space physics and astrophysics, meaning that purely theoretical work as well as research from adjacent fields, in particular nuclear fusion and laser–plasma interaction, has been omitted from this list on purpose. As of 2018, the Vlasov equation has thus been used in space plasma physics and plasma astrophysics to model magnetic reconnection, instabilities and turbulence, the interaction of the solar wind with the Earth and other bodies, radio emissions in near-Earth space and the charge distribution around spacecraft.

Table 2 Space and astrophysical applications of Vlasov-based plasma simulation methods

5 Vlasiator

This section considers the choices and approaches made for the Vlasiator code, attempting to describe the near-Earth space at ion kinetic scales. Vlasiator simulates the global near-Earth plasma environment through a hybrid-Vlasov approach. The evolution of the phase-space density of kinetic ions is solved with Vlasov’s equation (Eq. 3), with the evolution of electromagnetic fields described through Faraday’s law (Eq. 8), Gauss’ law and the solenoid condition (Eq. 9), and the Darwin approximation of Ampère’s law (Eq. 14). Electrons are modelled as a massless charge-neutralising fluid. Closure is provided via the generalised Ohm’s law (Eq. 12) under the assumptions of high conductivity, slow temporal variations, and cold electrons, i.e., the Hall MHD Ohm’s law (Eq. 13). The source code of Vlasiator is available at http://github.com/fmihpc/vlasiator according to the Rules of the Road mapped out at http://www.physics.helsinki.fi/vlasiator.

5.1 Background

Vlasiator has its roots in the discussions within the global MHD simulation community around 2005. It was becoming evident that while global MHD simulations are important, their capabilities, especially in the inner magnetosphere, are limited. The inner magnetosphere consists of spatially overlapping plasma populations of different temperatures (e.g., Baker 1995) and therefore the environment cannot be satisfactorily modelled with MHD to a degree allowing e.g., environmental predictions for societally critical spacecraft or as a context for upcoming missions, like the Van Allen Probes (e.g., Fox and Burch 2013). To this end, two strategies emerged, including either coupling a global MHD simulation with an inner magnetospheric simulation (e.g., Huang et al. 2006), or going beyond MHD with the then newly introduced hybrid-PIC approach (e.g., Omidi and Sibeck 2007). Coupling different codes carries a risk that the effects of the coupling scheme dominate over the improved physics. On the other hand, while hybrid-PIC simulations had produced important breakthroughs (e.g., Omidi et al. 2005), the velocity distributions computed through binning are noisy due to the limited number of launched particles, which could compromise physical conclusions. Further, due to the limited number of particles, the hybrid-PIC simulations could not provide sharp gradients, which would become a problem especially in the magnetotail, where the lobes surrounding the dense plasma sheet are almost empty. As the tail physics is critical in the global description of the magnetosphere, the idea about a hybrid-Vlasov simulation emerged.

The objective was simple: to go beyond MHD by introducing protons as a kinetic population modelled by a distribution function and thus getting rid of the noise. Several challenges were identified. First, if one neglects electrons as a kinetic population, one will, e.g., lose electron-scale instabilities that can be important in the tail physics (e.g., Pritchett 2005). Second, a global hybrid-Vlasov approach is still an extreme computational challenge even with a coarse ion-scale resolution, since it must be carried out in six dimensions. Further, doubts existed about whether grid resolutions achievable with current computational resources would capture ion kinetic physics. However, with a new approach without historical heritage, one could utilize the latest high-performance computing methods and new computational architectures, provided that the code would always be portable to the latest technology. The computational resources were still obeying Moore’s law, and petascale systems had just become operational (Kogge 2009). With these prospects in mind, Vlasiator was proposed to the newly established European Research Council in 2007, which solicited new ideas with a high risk–high gain vision.

5.2 Grid discretisation

The position space is discretised on a uniform Cartesian grid of cubic cells. Each cell holds the values of variables that are either being propagated or reconstructed (e.g., the electric and magnetic fields and the ion velocity distribution moments) as well as housekeeping variables. In addition, a three-dimensional uniform Cartesian velocity space grid is stored in each spatial cell. For position space Vlasiator uses the Distributed Cartesian Cell-Refinable Grid library (DCCRG; Honkonen et al. 2013) albeit without making use of the adaptive mesh refinement capabilities it offers. The library can distribute the grid over a large supercomputer using the domain decomposition approach (see Sect. 5.5 for details on the parallelisation strategies).

The velocity space grid is purpose-built for that specific task. A major performance gain in terms of memory and computation is achieved by storing and propagating the volume average of f at every position \(\mathbf {x}\) only in those velocity space cells where f exceeds a given density threshold \(f_\mathrm{min}\). In order to accurately model propagation and acceleration, a buffer layer is maintained by also modelling cells that are adjacent in position or velocity space. The principle is illustrated in Fig. 4. This threshold can be constant or scaled linearly with the ion density. For each ion population, the maximal velocity space extents and the resolution are set by the user. This so-called sparse velocity space strategy allows the resolution to be increased and the distribution function to be tracked in the regions where it is present, instead of wasting computational resources covering the full extents of reachable velocity space. It is however important to set the value of \(f_\mathrm{min}\) carefully in order to conserve the moments of f (density, pressure, etc.) to the desired accuracy and in order to include in a given simulation all expected features such as low-density high-energy populations. A detailed discussion of the effects of the grid discretisation parameters on the simulation of a collisionless shock was published by Pfau-Kempf et al. (2018).
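
A minimal Python sketch of the selection logic (the \(4\times4\times4\) block layout anticipates Sect. 5.5; the neighbourhood rule, data layout and all numbers are illustrative simplifications rather than Vlasiator's actual implementation): blocks whose content exceeds \(f_\mathrm{min}\) are kept, and their velocity-space neighbours are retained as the buffer layer.

```python
import numpy as np
from itertools import product

f_min = 1e-15
blocks = {                                  # velocity-block index (i, j, k) -> 4x4x4 averages
    (0, 0, 0): np.full((4, 4, 4), 1e-12),   # above threshold: kept
    (5, 0, 0): np.full((4, 4, 4), 1e-20),   # below threshold and isolated: dropped
}

keep = {b for b, data in blocks.items() if data.max() >= f_min}
halo = set()
for (i, j, k) in keep:                      # buffer layer of neighbouring blocks
    for di, dj, dk in product((-1, 0, 1), repeat=3):
        halo.add((i + di, j + dj, k + dk))
blocks = {b: blocks.get(b, np.zeros((4, 4, 4))) for b in keep | halo}
```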

Fig. 4

Figure from Pfau-Kempf (2016)

Illustration of the sparse velocity space. Left: full extent of velocity space including a population drifting at \(V_x\approx -\,500\) km/s. Middle: cut through the population. Right: slice showing the cells with \(f>f_\mathrm{min}\) in full colour and retained neighbours greyed out. 1: block fully above \(f_\mathrm{min}\). 2: block partially above \(f_\mathrm{min}\). 3: retained neighbouring block. 4: disregarded block without neighbours above \(f_\mathrm{min}\). See Sect. 5.5 for details on the block structure.

5.3 Solvers and time-integration

The structure of the hybrid-Vlasov set of equations leads to the logical split into a solver for the Vlasov equation and a solver for the electric and magnetic field propagation.

5.3.1 Vlasov solver

In advancing the Vlasov equation, Vlasiator utilises Strang splitting (Umeda et al. 2009, 2011, and references therein), where updates of the particle distribution functions are performed separately, using a spatial translation operator \(S_T = \left( \mathbf{v}\cdot \frac{\partial f_\mathrm{s}}{\partial \mathbf{x}}\right) \) for advection and an acceleration operator \(S_A = \left( \frac{q_\mathrm{s}}{m_\mathrm{s}}\mathbf{E}\cdot \frac{\partial f_\mathrm{s}}{\partial \mathbf{v}}\right) \) including the rotation term \(\left( \frac{q_\mathrm{s}}{m_\mathrm{s}}(\mathbf{v}\times \mathbf{B})\cdot \frac{\partial f_\mathrm{s}}{\partial \mathbf{v}}\right) \) for each phase-space volume average. The splitting is performed using a standard leapfrog scheme, where

$$\begin{aligned} \widetilde{f}^{N+1} = S_A\left( \frac{\varDelta t}{2}\right) S_T(\varDelta t) S_A\left( \frac{\varDelta t}{2}\right) \widetilde{f}^{N}. \end{aligned}$$

Acceleration over step length \(\varDelta t\) is thus calculated based on the field values computed at the mid-point of each acceleration step, i.e., at each actual time step as used for translation.
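
A minimal 1D–1V Python sketch of this operator sequence (periodic grids, a constant electric field, and linear interpolation are illustrative simplifications; Vlasiator's actual update uses the conservative SLICE-3D remapping described below) shows how the half accelerations bracket the translation.

```python
import numpy as np

nx, nv = 64, 64
x = np.linspace(0.0, 1.0, nx, endpoint=False)
v = np.linspace(-4.0, 4.0, nv, endpoint=False)
dx, dv = x[1] - x[0], v[1] - v[0]
f = np.exp(-v[None, :]**2 / 2) * (1 + 0.1 * np.cos(2 * np.pi * x[:, None]))
E, q_over_m, dt = 0.1, 1.0, 0.01            # constant field and step length (illustrative)

def shift_interp(g, s, axis):
    # semi-Lagrangian shift of g by a (possibly fractional) number of cells s along axis
    i = int(np.floor(s)); w = s - i
    return (1 - w) * np.roll(g, i, axis=axis) + w * np.roll(g, i + 1, axis=axis)

def accelerate(f, dt):                      # S_A: shift in v by (q/m) E dt
    return shift_interp(f, q_over_m * E * dt / dv, axis=1)

def translate(f, dt):                       # S_T: shift each velocity channel in x by v dt
    out = np.empty_like(f)
    for j, vj in enumerate(v):
        out[:, j] = shift_interp(f[:, j], vj * dt / dx, axis=0)
    return out

f = accelerate(f, dt / 2)                   # one Strang-split step: S_A(dt/2) S_T(dt) S_A(dt/2)
f = translate(f, dt)
f = accelerate(f, dt / 2)
```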

A global time step is defined, with time advancement calculated for distribution functions and fields in separate yet linked computations. Earlier versions of Vlasiator have used finite volume (FV) Vlasov solvers. In the earliest versions of the code, a FV method based on the solver proposed by Kurganov and Tadmor (2000) was used (Palmroth et al. 2013). A Riemann-type FV solver (LeVeque 1997; Langseth and LeVeque 2000) was used in subsequent works (Kempf et al. 2013, 2015; Pokhotelov et al. 2013; Sandroos et al. 2013, 2015; von Alfthan et al. 2014). For these solvers the classical Courant–Friedrichs–Lewy (CFL) condition (Courant et al. 1928) for maximal allowable time steps when calculating fluxes from one phase-space cell to another is

$$\begin{aligned} \varDelta t < \mathrm {min} \left( \frac{\varDelta x_i}{\mathrm {max}(|v_i|)}, \frac{\varDelta v_i}{\mathrm {max}(|a_i|)} \right) , \end{aligned}$$

where i is indexed over three dimensions. In previous versions the CFL condition was found to be very limiting. Vlasiator now utilises a semi-Lagrangian scheme (SLICE-3D, Zerroukat and Allen 2012; https://github.com/fmihpc/lib-slice3d/), in which mass conservation is ensured by a conservative remapping from a Eulerian to a Lagrangian grid. Note however that the sparse velocity space strategy implemented in Vlasiator (see Sect. 5.2) breaks the mass conservation (see Pfau-Kempf et al. 2018, for a discussion of the effect of the phase space density threshold on mass conservation). The specificity of the SLICE-3D scheme is to split the full 3D remapping into successive 1D remappings, which reduces the computational cost of the spatial translation and facilitates its parallel implementation.

The velocity space update due to acceleration \(S_A\left( \frac{\varDelta t}{2}\right) \) will generally be described by an offset 3D rotation matrix (due to gyration around \(\mathbf {B}\)). As every offset rotation matrix can be decomposed into three shear matrices \(S = S_x S_y S_z\), each performing an axis-parallel shear into one spatial dimension (Chen and Kaufman 2000), the numerically efficient semi-Lagrangian acceleration update using the SLICE-3D approach is possible: before each shear transformation, the velocity space is rearranged into a single-cell column format parallel to the shear direction in memory, each of which requires only a one-dimensional remapping with a high reconstruction order (in Vlasiator, 5th order reconstruction is typically employed for this step). These column updates are optimised to make full use of vector instructions. This update method comes with a maximum rotation angle limit due to the shear decomposition of about \(22^\circ \), which imposes a further time step limit. For larger rotational angles per time step (caused by stronger magnetic fields), the acceleration can be subcycled.
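
The principle of the shear decomposition is easiest to see in 2D: the Python sketch below verifies that a plane rotation splits exactly into three axis-parallel shears, each of which can be applied as a one-dimensional remapping; the 3D offset decomposition of Chen and Kaufman (2000) used in Vlasiator follows the same idea.

```python
import numpy as np

theta = np.deg2rad(15.0)                       # rotation angle (within the ~22 degree limit)
Sx = np.array([[1.0, -np.tan(theta / 2)],      # shear along the first axis
               [0.0, 1.0]])
Sy = np.array([[1.0, 0.0],                     # shear along the second axis
               [np.sin(theta), 1.0]])
R_shear = Sx @ Sy @ Sx                         # shear-shear-shear composition
R_exact = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
print(np.allclose(R_shear, R_exact))           # True: the composition is an exact rotation
```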

The position space update \(S_T(\varDelta t)\) will generally be described by a translation matrix with no rotation, and the same SLICE-3D approach lends itself to it in a similar vein as for velocity space. The main difference is the typical use of 3rd order reconstruction in order to keep the stencil width at two. The use of a semi-Lagrangian scheme allows the implementation of a time step limit

$$\begin{aligned} \varDelta t < \mathrm {min} \left( \frac{\varDelta x_i}{\mathrm {max}(|v_i|)} \right) \end{aligned}$$

based on spatial translation only. This condition constrains the spatial translation of any volume averages to a maximum value of \(\varDelta x_i\) in direction i, accounting for only those velocities within phase-space which have populated active cells (see the sparse grid implementation, Sect. 5.2). This is employed not due to stability requirements, but rather to decrease communication bandwidth by ensuring that a single ghost cell in each spatial direction is sufficient.

5.3.2 Field solver

The field solver in Vlasiator (von Alfthan et al. 2014) is based on the upwind constrained transport algorithm by Londrillo and Del Zanna (2004) and uses divergence-free reconstruction of the magnetic fields (Balsara 2009). It utilizes a second-order Runge–Kutta algorithm including the interpolation method demonstrated by Valentini et al. (2007) to obtain the intermediate moments of f needed to update the electric and magnetic fields (von Alfthan et al. 2014).

The algorithm is subject to a CFL condition such that the fastest-propagating wave mode cannot travel more than half a spatial cell per time step. Since the field solver has been extended to include the Hall term in Ohm’s law, the CFL limit can severely impact the time step of the whole propagation in regions of high magnetic field strength or low plasma density. If the imbalance between the time step limits from the Vlasov solver and from the field solver is too strong, the computation of the electric and magnetic fields is subcycled so as to retain an acceptable global time step length (Pfau-Kempf 2016).
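
A sketch of the subcycling logic (all numbers and names are illustrative; the actual criterion used in Vlasiator may differ in detail): rather than shrinking the global step, the field propagation is repeated several times within one Vlasov step.

```python
import math

dt_vlasov = 4.0e-2    # global step allowed by the Vlasov (translation) CFL condition, in s
dt_field = 5.0e-3     # step allowed by the field-solver CFL condition, in s
n_sub = max(1, math.ceil(dt_vlasov / dt_field))
dt_sub = dt_vlasov / n_sub
for _ in range(n_sub):
    pass              # propagate E and B by dt_sub here; the Vlasov update keeps dt_vlasov
```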

5.3.3 Time stepping

The leapfrog scheme of the propagation is initialised by half a time step of acceleration. If the time step needs to change during the simulation due to the dynamics of the system, f is accelerated backwards by half an old time step and forwards again by half a new time step and the algorithm resumes with the new global time step. The complete sequence of the time propagation in Vlasiator is depicted in Fig. 5, including a synthetic version of the equations used in the different parts.

Fig. 5

Time stepping in Vlasiator. The translation and acceleration of f are leapfrogged following the Strang-splitting method. The algorithm is initialised by half a time step of acceleration (step 0. in red). Then 1. f is translated forward by one step \(\varDelta t\) (possibly subcycled, see text). 2. \(\mathbf {E},\mathbf {B}\) are stepped forward by \(\varDelta t\) (possibly subcycled, see text). 3. f is accelerated forward by \(\varDelta t\). The sequence is repeated (4.–6.)

5.4 Boundary and initial conditions

The reference frame used in Vlasiator is defined as follows: the origin is located at the centre of the Earth, the x axis points towards the Sun, the z axis points northward perpendicular to both the x axis and the ecliptic plane, and the y axis completes the right-handed set. This coordinate system is equivalent to the Geocentric Solar Ecliptic (GSE) frame which is commonly used when studying the near-Earth space.

The solar wind enters the simulation domain at the \(+\,x\) boundary, while copy conditions (i.e., homogeneous Neumann boundary conditions obtained by copying the velocity distribution functions and magnetic fields from the boundary cells to their neighbouring ghost cells) are used for the \(-\,x\) boundary and for the boundaries perpendicular to the flow. In the current 2D–3V runs, periodic conditions are applied in the out-of-plane direction (i.e., \(+\,z\) and \(-\,z\) for the ecliptic runs and \(+\,y\) and \(-\,y\) for the polar runs). Currently, three versions of the copy conditions are implemented, which can be adjusted in order to mitigate issues such as self-replicating phenomena at the boundaries. At the beginning of the run or at a restart, the outflow condition can be set to a classic copy condition, to a copy condition where the value of f is modified in order to avoid self-replication or inflowing features, or static conditions can be maintained at the boundary.

The simulation also requires an inner boundary around the Earth, in order to screen the origin of the terrestrial dipole. In the inner magnetosphere, the magnetic field strength increases dramatically, resulting in very small time steps, which would significantly slow down the whole simulation. Also, close to the Earth, the ionospheric plasma can no longer be described as a collisionless and fully ionised medium, and another treatment would be required in order to properly simulate this region. The inner boundary is therefore located at 30,000 km (about \(4.7 \, \mathrm {R}_{\mathrm {E}}\)) from the Earth’s centre and is currently modelled as a perfect conductor. The distribution functions in the boundary cells retain their initial Maxwellian distributions throughout the run. The electric field is set to zero in the layer of boundary cells closest to the origin, and the magnetic field component tangential to the boundary is fixed to the value given by the Earth’s dipole field. Since the ionospheric boundary is given in the Cartesian coordinate system, it is not exactly spherical but staircase-like, introducing several computational problems (e.g., Cangellaris and Wright 1991). This has not been a large problem in Vlasiator to date, possibly because the computations have been carried out in a 2D–3V setup. Once the computations are carried out in 3D–3V, this may pose a larger problem, because the magnetic field will be stronger near the poles in 3D.

In addition to defining boundary conditions, the phase-space cells within the simulation box must be initialised to some reasonable values, after which the magnetic and gas pressures and flow conditions cause the state of the simulation to change and to converge towards a valid description of the magnetospheric system as the initial state is flushed out of the box. The usual method employed in Vlasiator is to initialise the velocity space in each cell within the simulation (excluding the region within the inner boundary) to match values picked from a Maxwellian distribution in agreement with the inflow boundary solar wind density, temperature, and bulk flow direction. The inner boundary is initialised with a constant proton temperature and number density with no bulk flow. The initial phase-space density sampling can be improved by averaging over multiple densities obtained via equally spaced velocity vectors within a single velocity space cell.

The Earth’s magnetic field closely resembles that of a magnetic dipole, and within the scope of Vlasiator the dipole has been approximated as being aligned with the z-axis. For ecliptic xy plane simulations the dipole field can be used as-is, but for polar xz plane simulations, a 2D line dipole (which scales as \(r^{-2}\) rather than \(r^{-3}\)) must be used instead in order to prevent the occurrence of unphysical currents due to out-of-plane field curvature. When using this approach, one must calculate the dipole strength that represents reality in some chosen manner, and this is achieved by choosing a value that in turn reproduces the magnetopause at the realistic \(\sim 10\,\mathrm {R}_\mathrm{E}\) standoff distance (a similar treatment as found in, e.g., Daldorff et al. 2014). As the dipole magnetic field is not included in the inflow boundary, there cannot exist a boundary-perpendicular magnetic field component in order to respect the solenoid condition. For ecliptic runs, the dipole field component is aligned with z and thus there is no component perpendicular to the inflow boundary. For polar runs, the dipole field component perpendicular to the inflow boundary must be removed to prevent magnetic field divergence. This is achieved by placing a mirror dipole identical to the Earth’s dipole model at the position \((2 \cdot (X_1 - \varDelta x),0,0)\) that is at twice the distance from the origin to the edge of the final non-boundary simulation cell. For each simulation cell, the static background magnetic flux through each face is thus assigned as a combination of flux calculated from the chosen dipole field model, a mirror dipole if present, and the solar wind IMF vector. This background field, which is curl-free and divergence-free, is left static, and instead any calculations involving magnetic fields operate on a separate field which acts as a perturbation from this initial field.
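
The cancellation of the boundary-normal field component can be checked with a few lines of Python; the sketch below uses a z-aligned point dipole (constant prefactors dropped and purely illustrative numbers) rather than the 2D line dipole of the polar runs, but the mirror-dipole construction is the same: an identical dipole placed at \(2(X_1 - \varDelta x)\) makes \(B_x\) vanish on the plane \(x = X_1 - \varDelta x\).

```python
import numpy as np

def dipole_B(r, r0, m):
    # point-dipole field (constant prefactor dropped): B ~ 3 d (m.d)/|d|^5 - m/|d|^3
    d = r - r0
    dist = np.linalg.norm(d)
    return 3.0 * d * np.dot(m, d) / dist**5 - m / dist**3

m = np.array([0.0, 0.0, -1.0])                 # z-aligned dipole moment (arbitrary units)
x_plane = 15.0                                 # plane x = X1 - dx where Bx should vanish
mirror_pos = np.array([2.0 * x_plane, 0.0, 0.0])

r = np.array([x_plane, 3.0, -7.0])             # any point on that plane
B = dipole_B(r, np.zeros(3), m) + dipole_B(r, mirror_pos, m)
print(B[0])                                    # ~ 0: no boundary-normal component
```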

5.5 Parallelisation strategies

Given the curse of dimensionality in Vlasov simulations (cf. Sect. 4), the amounts of memory and computational steps required for global magnetospheric hybrid-Vlasov simulations are extreme. Therefore, the use of supercomputer resources and parallelisation techniques is essential. Vlasiator uses three levels of parallelisation, of which the first is the decomposition of the spatial simulation domain into subdomains handled by individual tasks communicating via the Message-Passing Interface (MPI, MPI Forum 2004). The use of the DCCRG grid library (Honkonen et al. 2013) provides most of the glue code for MPI communication and management of computational domain interfaces.

Thanks to the sparse velocity space representation, large savings in memory usage and computational demand can be achieved. However, the sparse velocity space induces a further problem, because the computational effort to solve the Vlasov equation is no longer constant for every spatial simulation cell, but varies in direct relation to the complexity of the velocity space at every given point. Due to the large variety of physical processes present in the magnetospheric domain, this leads to large load imbalances throughout the simulation box, making a simple Cartesian subdivision of space over computational tasks infeasible and necessitating a dynamic rebalancing of the work distribution. To this end, Vlasiator relies on the Zoltan library (Devine et al. 2002; Boman et al. 2012), which creates an optimised space decomposition from continuously updated run-time metrics, providing a number of different algorithms to do so (usual production runs are using recursive coordinate bisection). Figure 6 shows an example of the resulting spatial decomposition in a 2D global magnetospheric simulation run in the Earth’s polar plane, in which the incoming solar wind plasma with low-complexity Maxwellian velocity distributions on the right hand side is processed by visibly larger computational domain sizes than the more complex velocity space structures in the magnetosphere.

Fig. 6

Spatial plot of dynamic load balancing in Vlasiator in a global magnetospheric simulation in the polar plane. The simulation frame is identical to that of Fig. 1. Each computational rank 0–4799 is mapped to a colour, and the corresponding rectangular domain is coloured accordingly. Domain decomposition is performed by run-time updated recursive coordinate bisection using the Zoltan library (Devine et al. 2002; Boman et al. 2012)

The second level of parallelisation is carried out within the computational domain of each MPI task. Each such domain is typically handled by a full supercomputer node (or a fraction of it) with multiple CPU cores and threads, providing a local parallelisation level based on OpenMP (OpenMP Architecture Review Board 2011). All computationally intensive solver steps have been designed to run thread-parallel over multiple spatial cells, or, in the case of the SLICE-3D position space update (see Sect. 5.3), over multiple parallel evaluations of the velocity space, to make optimum use of the available shared-memory parallel computing architecture within one node. As a third level of parallelisation, all data structures involved in computationally expensive solver steps have been designed to benefit from vector processing of modern CPUs. Specifically, the velocity space representation in Vlasiator is based on \(4\times 4\times 4\) cell blocks, which are always processed as a whole. This allows multiple velocity cells to be solved at the same time, using single instruction multiple data techniques (Fog 2016).

A further complication of parallel Vlasov simulations is the associated input/output requirements. Not only is a parallel input/output system needed that scales to the required number of nodes, but the sparse velocity space structure also requires an appropriate file format able to represent the sparsity without relying on fixed data offsets. For Vlasiator’s specific use case, the VLSV library and file format have been developed (http://github.com/fmihpc/vlsv). Using parallel MPI-IO (MPI Forum 2004), it allows high-performance input/output even for simulation restart files which, given the large system size of Earth’s magnetosphere, tend to reach multiple terabytes in size. A plugin for the popular scientific visualisation suite VisIt (Childs et al. 2012) is available, as is a python library that allows for quantitative analysis of the output files (http://github.com/fmihpc/analysator).

Along with the industry’s trend towards architectures featuring large numbers of cores and/or GPUs as a primary computing element, an early version of Vlasiator was parallelised using the CUDA standard and run on small numbers of GPUs (Sandroos et al. 2013). This avenue was not pursued further because of the lack of suitably large systems, and a number of bottlenecks following from the structure of the Vlasov simulations on the one hand and the characteristics of GPUs on the other hand.

5.6 Verification and test problems

As standard verification tests for a hybrid-Vlasov system do not exist, the first verification effort of Vlasiator was presented in Kempf et al. (2013). A simulation of low-\(\beta \) plasma waves (where \(\beta \) is the ratio of thermal and magnetic pressures) in a one-dimensional case with various angles of propagation with respect to the magnetic field was used to generate dispersion curves and surfaces. These were then compared to analytical solutions from the linearised plasma wave equations given by the Waves in Homogeneous, Anisotropic Multicomponent Plasmas (WHAMP) code (Rönnmark 1982). Excellent agreement between the results obtained from the two approaches was found in the case of parallel, perpendicular and oblique propagation, the only noticeable difference taking place for high frequencies and wave numbers, likely as a result of too coarse a representation of the Hall term in the Vlasiator simulations at that time.

The study of the ion/ion right-hand resonant beam instability presented by von Alfthan et al. (2014) is another effort to verify the hybrid-Vlasov model implemented in Vlasiator against the analytic solution of the dispersion equation for that instability. The obtained instability growth rates were found to behave as predicted by theory in the cool beam regime, although with slightly lower values, which can be explained by the finite size of the simulation box used. This paper also discussed comparisons of results from the hybrid-Vlasov approach with those obtained with hybrid-PIC codes to underline that distribution functions are comparable, albeit smoother and better resolved with the former approach.

More recently, Kilian et al. (2017) presented a set of validation tests based on kinetic plasma waves, and discussed what their expected behaviour should look like in fully kinetic PIC simulations as well as at different levels of simplification (Darwin approximation, EMHD, hybrid). By nature, waves and instabilities are a sensitive and valuable verification tool for plasma models, as they are an emergent manifestation of the collective behaviour of the plasma. As such they are an excellent verification test for a complete model, going well beyond unit tests of single solver components.

The increasing computational performance of Vlasiator has allowed significant improvements in spatial resolution. It was still 850 km early on (von Alfthan et al. 2014; Kempf et al. 2015) but subsequent runs were performed at 300 km and even 227 km resolution (e.g., Palmroth et al. 2015; Hoilijoki et al. 2017). Nevertheless even at these finer resolutions the typical kinetic scales are still not properly resolved in magnetospheric simulations. This can lead to the a priori concern that under-resolved hybrid-Vlasov simulations would not fare better than their considerably cheaper MHD forerunners and would similarly lack any kinetic plasma phenomena. A systematic study of the effects of the discretisation parameters of Vlasiator on the modelling of a collisionless shock alleviates this concern. Using a one-dimensional shock setup with conditions comparable to the terrestrial bow shock, Pfau-Kempf et al. (2018) show that even at spatial resolutions of 1000 km the results clearly depart from fluid theory and are consistent with a kinetic description. Of course an increased resolution of 200 km leads to a dramatic improvement in the physical detail accessible to the model, even though not yet fully resolving ion kinetic scales. This study also highlights the importance of choosing the velocity resolution and the phase-space density threshold \(f_\mathrm{min}\) carefully as they affect the conservation properties of the model and as a consequence the physical processes it can describe.

5.7 Physics results

Having completed verification tests, one can compare simulation results with experimental ground-based or spacecraft data, in other words proceed towards validation of the model. The first step for Vlasiator was to perform a global test-Vlasov simulation in 3D ordinary space (Palmroth et al. 2013). In this test f was propagated through the electromagnetic fields computed by the MHD model GUMICS-4 (Janhunen et al. 2012). This test showed that the early test-Vlasov version of Vlasiator already reproduced well the position of the Earth’s bow shock as well as magnetopause and magnetosheath plasma properties. Typical energy–latitude ion velocity profiles during northward IMF conditions were also successfully obtained with Vlasiator in that same study.

Focusing on the ion velocity distributions in the foreshock, a study by Pokhotelov et al. (2013) demonstrated that the physics of ions in the vicinity of quasi-parallel MHD shocks is well reproduced by Vlasiator. The simulation presented in that paper is a global dayside magnetospheric run in 2D ordinary space (ecliptic plane) for which the IMF angle relative to the Sun–Earth axis is \(45^\circ \). The foreshock was successfully reproduced by the model, and the reflected ion velocity distributions given by Vlasiator were found to be in agreement with spacecraft observations. In particular, deep in the ion foreshock, so-called cap-shaped ion distributions were reproduced by the model in association with 30 s sinusoidal waves which have been created as a result of ion/ion resonance interaction.

Validation of Vlasiator using spacecraft data was presented by Kempf et al. (2015), where the various shapes of ion distributions in the foreshock were reviewed, localised relative to the foreshock boundaries, identified in Time History of Events and Macroscale Interactions during Substorms (THEMIS, Angelopoulos 2008) data and compared to model results. The agreement between Vlasiator-simulated distributions and those observed by THEMIS was found to be very good, giving additional credibility to the hybrid-Vlasov approach and its feasibility.

While the papers discussed above essentially presented validations of the hybrid-Vlasov approach implemented in Vlasiator, the model has since 2015 been producing novel physical results. The first scientific investigations of the solar wind-magnetosphere interaction utilizing Vlasiator focus on dayside processes, from the foreshock to the magnetopause. Vlasiator offers in particular an unprecedented view of the suprathermal ion population in the foreshock. The moments of this population are direct outputs from the code, thus facilitating the analysis of parameters such as the suprathermal ion density or velocity throughout the foreshock (Kempf et al. 2015; Palmroth et al. 2015). In contrast, such parameters require some careful data processing to be extracted from spacecraft measurements, and large statistics are needed in order to obtain global maps of the foreshock.

Vlasiator makes it possible to investigate the properties and the structuring of the ultra-low frequency (ULF, 1 mHz to \(\sim \) 1 Hz) waves which pervade the foreshock, both on local and global scales. Direct comparison of a Vlasiator run with measurements from the THEMIS mission during similar solar wind conditions confirmed that Vlasiator reproduces well the characteristics of the waves at the spacecraft location (Palmroth et al. 2015). The typical features of the waves are in agreement with the reported literature. The observed oblique propagation, relative to the ambient magnetic field, of these foreshock waves has been a long-standing question because theory predicts that they should be parallel-propagating. Based on Vlasiator results, Palmroth et al. (2015) proposed a new scenario to explain this phenomenon, which they attributed to the global variation of the suprathermal ion population properties across the foreshock.

Vlasiator also offers unprecedented insight into the physics of the magnetosheath, which hosts mirror mode waves downstream of the quasi-perpendicular shock. Hoilijoki et al. (2016) found that the growth rate of the mirror mode waves was smaller than theoretical expectations, but in good agreement with spacecraft observations. As Hoilijoki et al. (2016) explain, this discrepancy has been ascribed to the local and global variations of the plasma parameters, as well as the influence of other wave modes, not being taken into account in the previous theoretical estimates. Using Vlasiator’s capability to track the evolution of the plasma as it propagates from the bow shock into the magnetosheath, Hoilijoki et al. (2016) demonstrated that mirror modes develop preferentially along magnetosheath streamlines whose origin at the bow shock lies in the vicinity of the foreshock ULF wave boundary. This is probably due to the fact that the plasma is more unstable to mirror modes in this region because of the perturbations in the foreshock being transmitted into the magnetosheath. This result underlines the importance of the global approach, as a similar result would not be present in coupled codes nor in codes that do not model both the foreshock and the magnetosheath simultaneously.

Magnetic reconnection is a topic of intensive research, as it is the main process through which plasma and energy are transferred from the solar wind into the magnetosphere. Many questions remain unresolved in the dayside and in the nightside. In the dayside, active research topics include the position of the reconnection line and the bursty or continuous nature of reconnection, while in the nightside the most important topic is the global magnetospheric reconfiguration caused either by reconnection or by a tail current disruption. In order to tackle these questions, the simulation domain of Vlasiator, which so far corresponded to the Earth’s equatorial plane, was changed to cover the noon–midnight meridian plane (xz plane in the reference frame defined in Sect. 5.4). To address the dayside–nightside coupling processes in reconnection, the simulation domain was extended to include the nightside reconnection site within the same simulation domain, stretching as far as \(-\,94 \, \mathrm {R}_{\mathrm {E}}\) along the x direction. This run, carried out in 2016, remains at the time of this writing the most computationally expensive Vlasiator run performed so far.

Hoilijoki et al. (2017) presented an investigation of reconnection and flux transfer event (FTE) processes at the dayside magnetopause, and showed that even under steady IMF conditions the location of the reconnection line varies with time, even allowing multiple reconnection lines to exist at a given time. Many FTEs are produced during the simulation, and occasionally magnetic islands have been observed to coalesce, which underlines the power of kinetic-based modelling in capturing highly dynamical and localised processes. Additionally, Hoilijoki et al. (2017) showed that the local reconnection rate measured at locations of the reconnection lines correlates well with the analytical rate for asymmetric reconnection derived by Cassak and Shay (2007). This paves the way for using Vlasiator to investigate, e.g., the effects of dayside reconnection in the nightside.

Vlasiator has proven to be a useful and powerful tool for revealing localised phenomena which had not been imagined before, and for narrowing down the regions of the near-Earth environment in which to search for them in observational data sets. One example of this can be found in the work by Pfau-Kempf et al. (2016), in which transient, local ion foreshocks were discovered at the bow shock under steady solar wind and IMF conditions, as illustrated in Fig. 7. These transient foreshocks were found to be related to FTEs at the dayside magnetopause produced by unsteady reconnection and creating fast mode waves propagating upstream in the magnetosheath (Fig. 7a–c). These wave fronts can locally alter the shape of the bow shock, thus creating favourable conditions for local foreshocks to appear (Fig. 7d, e). Observational evidence giving credit to this scenario was found in a data set comprising Geotail observations near the bow shock and ground-based signatures of FTEs in SuperDARN radar and magnetometer data.

While the first set of publications essentially dealt with dayside processes, Vlasiator can also be applied to the study of nightside phenomena. The first investigation of magnetotail processes using Vlasiator was performed by Palmroth et al. (2017), showcasing multiple reconnection lines in the plasma sheet and the ejection of a plasmoid, under steady IMF conditions (see Fig. 8). This study underlined that dayside reconnection may have a direct consequence in stabilising nightside reconnection, as flux tubes originating from dayside reconnection influenced the local conditions within the nightside plasma sheet. Again, this study illustrates how important it is to capture the whole system simultaneously using a kinetic approach.

Fig. 7

Figure from Pfau-Kempf (2016)

Local, transient foreshocks generated by magnetosheath bow waves. a Ion density (colour) and magnetic field lines (black lines) in the near-Earth space, showcasing magnetic islands created by magnetic reconnection. b Zoom-in on a strong magnetic island, pushing a bow wave (dashed black line) and a stern wave (dash-dotted white line) into the magnetosheath. c Plasma parameters across the bow wave, cut along the white arrow in (b). d Parallel ion temperature, showing the transient foreshock as a region of enhanced parallel temperature extending into the solar wind. The bow shock position and its normal direction are indicated by solid white lines, and the dashed lines illustrate the nominal bow shock shape and its normal direction without perturbation. e Ion velocity distribution function in the transient foreshock at the location marked by the plus sign in (d), comparable to what is observed in the regular foreshock

Fig. 8

Figure from Palmroth et al. (2017)

Tail reconnection in Vlasiator. a Overview of the ion density in the simulation. b–d Close-up of the tail reconnection region at different times in the run. The colour scheme shows the current density. The X points are marked with stars, which move, appear and disappear as time proceeds

5.8 Future avenues

Vlasiator is funded through several multi-annual grants, with which the code is improved and developed. Major building blocks in making Vlasiator possible have been not only the increase in computational resources, but also several algorithmic innovations. Examples of these are the sparse grid for distribution functions and the semi-Lagrangian solver discussed above. Further, the code has been continuously optimised to perform better on different parallel architectures. With these main steps, the efficiency of Vlasiator has been improved by roughly eight orders of magnitude relative to the performance at the beginning of the project, allowing 2D–3V systems to be simulated at high resolution (Palmroth et al. 2017). Recently, a simulation run with a cylindrical ionosphere and a layer of grid cells in the third spatial dimension has been carried out, thus approaching the full 3D–3V representation.

The development of Vlasiator is closely tied to the awarded grants. In terms of numerics, near-term plans are to include adaptive mesh refinement in both ordinary space and velocity space, which is required for a full 3D–3V system. These improvements would allow higher resolution to be placed in regions of interest, and consequently save in the number of time steps and in storage. The DCCRG grid already supports adaptive mesh refinement, and thus the task is mainly to add this support to the solvers and to optimise the performance.
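As an illustration of what such refinement could look like, the sketch below flags cells for refinement when the relative jump of density or magnetic field magnitude between neighbouring cells exceeds a threshold. This is a generic, hypothetical criterion written for illustration only; it does not represent Vlasiator's or DCCRG's actual refinement interface.

import numpy as np

def refinement_flags(rho, B_mag, threshold=0.3):
    # Hypothetical gradient-based refinement criterion (illustration only):
    # flag a cell when the relative jump of density or |B| to a neighbour
    # along x or y exceeds `threshold`. rho and B_mag are 2D arrays of
    # cell-averaged values on a uniform grid.
    flags = np.zeros(rho.shape, dtype=bool)
    for field in (rho, B_mag):
        jump_x = np.abs(np.diff(field, axis=0)) / np.maximum(field[:-1, :], field[1:, :])
        jump_y = np.abs(np.diff(field, axis=1)) / np.maximum(field[:, :-1], field[:, 1:])
        flags[:-1, :] |= jump_x > threshold   # both cells sharing a steep
        flags[1:, :]  |= jump_x > threshold   # x-face are flagged
        flags[:, :-1] |= jump_y > threshold
        flags[:, 1:]  |= jump_y > threshold
    return flags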

In terms of physics, perhaps the most visible recent change was the addition of heavier ions. The role of heavier ions, e.g., in dayside reconnection has become evident in recent years (e.g., Fuselier et al. 2017), and thus the correct reproduction of the system at ion scales requires solving for heavier ions as well. While the addition requires more memory and storage capacity, in terms of coding it was relatively simple, as each ion species can be represented by an additional sparse representation of the velocity space, without adding much to the overall computational load. The first runs with additional ion populations were produced in 2018. The first set of runs considered helium flowing in from the solar wind boundary, and the second set added an oxygen outflow from the ionospheric boundary. The analysis of these runs is ongoing.
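Conceptually, each ion species then carries its own sparse velocity mesh in every spatial cell. The following toy sketch (names, thresholds and parameter values are hypothetical and for illustration only, not Vlasiator's actual data structures) shows one way such a multi-population container could be organised:

from dataclasses import dataclass, field
from typing import Dict, Tuple
import numpy as np

BLOCK = 4  # velocity-space blocks of 4x4x4 cells (illustrative choice)

@dataclass
class IonPopulation:
    # One ion species with its own sparse velocity mesh: only blocks in
    # which the distribution function is significant are stored.
    mass: float     # kg
    charge: float   # C
    blocks: Dict[Tuple[int, int, int], np.ndarray] = field(default_factory=dict)

    def set_block(self, key, data, f_min=1e-15):
        if data.max() >= f_min:
            self.blocks[key] = data          # keep significant blocks
        else:
            self.blocks.pop(key, None)       # drop blocks below threshold

# One spatial cell would hold one such object per species, e.g. protons
# and alpha particles (He2+):
populations = {
    "proton": IonPopulation(mass=1.67e-27, charge=1.60e-19),
    "helium": IonPopulation(mass=6.64e-27, charge=3.20e-19),
}
populations["proton"].set_block((0, 0, 0), np.full((BLOCK,) * 3, 1e-12))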

In the near term, the ionospheric boundary will also be improved. In the 2D–3V runs the ionosphere can be kept relatively simple, but in 3D–3V it needs to be updated as well. As a first approximation, it can be similar to the type of boundary used in global MHD simulations, which typically couple the field-aligned currents, electric potential and precipitation between the ionosphere and the magnetosphere (e.g., Janhunen et al. 2012). Later, the ionosphere should be updated to take into account the more detailed information that the Vlasov-based magnetospheric domain can offer relative to MHD. The objective is to push the inner edge of the simulation domain earthwards from its current position (around 5 \(\mathrm {R}_{\mathrm {E}}\)). Other planned improvements include allowing the Earth's dipole field to be tilted with respect to the z direction, and replacing the mirror dipole method of ensuring the solenoidal condition with an alternative method, for instance a radially vanishing vector potential description of the dipole field. Inclusion of such capabilities would allow investigations of inner magnetospheric physics in terms of solar wind driving, which would close the circle: the difficulty of reproducing inner magnetospheric physics with global MHD simulations was one of the main motivations for developing Vlasiator in the first place.
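For context, the ionospheric solvers of global MHD codes typically obtain the ionospheric electric potential from height-integrated current continuity; a commonly used form (sign and geometry conventions vary between codes, and it is quoted here only as the kind of equation such a boundary would solve) is

\[
\nabla_{\perp} \cdot \left( \Sigma \cdot \nabla_{\perp} \Phi \right) = - j_{\parallel} \sin I ,
\]

where \(\Sigma\) is the height-integrated conductance tensor (containing the Pedersen and Hall conductances, modified by particle precipitation), \(\Phi\) is the ionospheric electric potential, \(j_{\parallel}\) is the field-aligned current density mapped from the magnetospheric domain, and \(I\) is the magnetic inclination angle. The resulting potential is then mapped back to the inner boundary of the magnetospheric simulation as an electric field boundary condition.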

Other possible future avenues would be to consider other environments that will be investigated by present and future space missions. An example is Mercury, targeted by the upcoming BepiColombo mission. Cometary environments and the comet–solar wind interaction are also interesting in view of the recently added heavy-ion support, in the context of Rosetta mission data analysis. Further, the upcoming Juice mission will visit the icy moons of Jupiter, indicating that, e.g., the Ganymede–Jupiter interaction may also be a viable option for the future.

6 Conclusions and outlook

There are several main conclusions that can be made from the Vlasiator results so far. The first one is related to the applicability of the hybrid-Vlasov system for ions within the global magnetospheric context. When Vlasiator was first proposed, concerns arose as to whether ions are the dominant species controlling the system dynamics, or whether electrons need to be included as well. In particular, a physical representation of the reconnection process may require electrons, while the ion-scale Vlasiator would still model reconnection similarly to global MHD simulations, i.e., through numerical diffusion. However, even an MHD simulation, treating both ions and electrons as fluids, is capable of modelling global magnetospheric dynamics (Palmroth et al. 2006a, b), indicating that the reconnection driving the global dynamics must be captured in roughly the right ballpark. Since Vlasiator is also able to produce results that are in agreement with in situ measurements, kinetic ions seem to be a major contributor to reproducing the global dynamics. Whether electrons play a larger role in global dynamics remains to be determined in the future, if such simulations become possible.

Another major conclusion based on Vlasiator is the role of grid resolution in global setups. Again, one of the largest concerns at the beginning of Vlasiator development was that the ion gyroscales could not be reached within a global simulation volume, raising fears that the outcomes would be MHD-like, even though early hybrid-PIC simulations were also carried out at ion inertial length scales (e.g., Omidi et al. 2005). In this context, the first runs included an element of surprise, as even rather coarse-resolution grids produce kinetic phenomena that are in agreement with in situ observations (Pokhotelov et al. 2013). The latest results have clearly indicated that kinetic physics emerges even at coarse spatial resolution (Pfau-Kempf et al. 2018). It should be emphasised that this result would not have been foreseeable without developing the simulation first. Further, it suggests that electron physics, too, could be trialled without resolving the actual electron scales. One can hence conclude that others attempting to develop a (hybrid-)Vlasov simulation may face fewer concerns about grid resolution, even in setups with major computational challenges, such as portions of the Sun.

The most recurrent physical conclusion based on Vlasiator simulations is that “everything affects everything”, indicating that scale coupling is important in global magnetospheric dynamics. One avenue of development for global MHD simulations in recent years has been code coupling, where problem-specific codes have been coupled into the global context (Huang et al. 2006), or PIC simulations have been embedded within the MHD domain (Tóth et al. 2016). While these approaches are interesting and advance physical understanding, they cannot fully address scale coupling, as the specific kinetic phenomena are only treated within their respective embedded simulation volumes. A prime example of scale coupling is the emergence of transient foreshocks, driven by bow waves generated by dayside reconnection (Pfau-Kempf et al. 2016). Another example is the generation of oblique foreshock waves due to the global variability of backstreaming populations (Palmroth et al. 2015). These results could not have been achieved without a simulation that resolves both small and large scales simultaneously.

Vlasov-based methods have not yet been widely adopted in the fields of astrophysics and space physics to model large-scale systems beyond the few examples cited in Table 2, mainly due to the truly astronomical computational cost such simulations can have. The experience with Vlasiator nevertheless demonstrates that Vlasov-based modelling is strongly complementary to other methods and provides unprecedented insight well worth the implementation effort. Based on the pioneering work realised in the Solar–Terrestrial physics community, it is hoped that Vlasov-based methods will gain in popularity and lead to breakthrough results in other fields of space physics and astrophysics as well.

Finally, it should be emphasised that a critical success factor in the Vlasiator development has been the close involvement with technological advances in the field of high-performance computing. European research infrastructures for supercomputing have been developed almost hand-in-hand with Vlasiator, providing an opportunity to always target the newest platforms, thus feeding directly into the code development. Should similar computationally intensive codes be designed and implemented elsewhere, it is recommended to keep a keen eye on the technological development of supercomputing platforms.