Abstract
This paper reviews Vlasov-based numerical methods used to model plasma in space physics and astrophysics. Plasma consists of collectively behaving charged particles that form the major part of baryonic matter in the Universe. Many concepts ranging from our own planetary environment to the Solar system and beyond can be understood in terms of kinetic plasma physics, represented by the Vlasov equation. We introduce the physical basis for the Vlasov system, and then outline the associated numerical methods that are typically used. A particular application of the Vlasov system is Vlasiator, the world’s first global hybrid-Vlasov simulation for the Earth’s magnetic domain, the magnetosphere. We introduce the design strategies for Vlasiator and outline its numerical concepts ranging from solvers to coupling schemes. We review Vlasiator’s parallelisation methods and introduce the high-performance computing (HPC) techniques used. A short review of verification, validation and physical results is included. The purpose of the paper is to present the Vlasov system, to introduce an example implementation, and to illustrate that even with massive computational challenges, an accurate description of the physics can be rewarding in itself and significantly advance our understanding. Upcoming supercomputing resources are making similar efforts feasible in other fields as well, making our design options relevant for others facing similar challenges.
Introduction
While physical understanding is inherently based on empirical evidence, numerical simulation tools have become an integral part of the majority of fields within physics. When tested against observations, numerical models can strengthen or invalidate existing theories and quantify the degree to which the theories have to be improved. Simulation results can also complement observations by giving them a larger context. In space physics, spacecraft measurements concern only one point at one time in the vast volume of space, which makes it difficult to discern spatial phenomena from temporal changes. This shortcoming has also led to the use of spacecraft constellations, like the European Space Agency’s Cluster mission (Escoubet et al. 2001). However, simulations are considerably more cost-effective than spacecraft, and they can be adopted to address physical systems that cannot be reached by in situ experiments, such as distant galaxies. Finally, and most importantly, predictions of physical environments under varying conditions are always based on modelling. Predicting the near-Earth environment in particular has become increasingly important, not least because near-Earth space hosts expensive assets used to monitor our planet. The space environmental conditions threatening space- or ground-based technology or human life are commonly termed space weather. Space weather predictions include two types of modelling efforts: those targeting real-time modelling (similar to terrestrial weather models), and those which test and improve the current space physical understanding together with top-tier experiments. This paper concerns the latter approach.
The physical conditions within near-Earth space are mostly determined by the physics of collisionless plasmas, where the dominant interactions are caused by electromagnetic forces over a collection of charged particles. There are three main approaches to model plasmas: (1) the fluid approach (e.g., magnetohydrodynamics, MHD), (2) the fully kinetic approach, and (3) hybrid approaches combining the first two. Present global models including the entire near-Earth space in three dimensions (3D) and resolving the couplings between different regions are largely based on MHD (e.g., Janhunen et al. 2012). However, single-fluid MHD models are basically scaleless in that they assume that plasmas have a single temperature approximated by a Maxwellian distribution. Therefore they provide a limited context to the newest space missions, which produce high-fidelity multi-point observations of spatially overlapping multi-temperature plasmas. The second approach uses a kinetic formulation as represented by the Vlasov theory (Vlasov 1961). In this approach, plasmas are treated as velocity distribution functions in a six-dimensional phase space consisting of three-dimensional ordinary space (3D) and a three-dimensional velocity space (3V). The majority of kinetic simulations model the Vlasov theory by a particle-in-cell (PIC) method (Lapenta 2012), in which a large number of particles are propagated within the simulation, and the distribution function is constructed from particle statistics in space and time. The fully kinetic PIC approach means that both electrons and protons are treated as particles within the simulation. Such simulations in 3D are computationally extremely costly, and can only be carried out in local geometries (e.g., Daughton et al. 2011).
A hybrid approach in the kinetic simulation regime usually means that electrons are treated with a fluid description, while protons and heavier ions are treated kinetically. Again, the vast majority of simulations use a hybrid-PIC approach, which has previously been restricted to 2D spatial regimes due to computational challenges (e.g., Omidi et al. 2005; Karimabadi et al. 2014), but has recently been extended into 3D using a limited resolution (e.g., Lu et al. 2015; Lin et al. 2017). This paper does not discuss the details of the PIC approach, but instead concentrates on a hybrid-Vlasov method, where the ion velocity distribution is discretised and modelled with a 3D–3V grid. The difference to hybrid-PIC is that in hybrid-Vlasov the distribution functions are evolved in time as an entity, and not constructed from particle statistics. The main advantage is therefore that the distribution function becomes noiseless. This can be important for the problem at hand, because the distribution function is in many respects the core of plasma physics, as the majority of the plasma parameters and processes can be derived from it. As will be described, hybrid-Vlasov methods have been used mostly in local geometries, because the 3D–3V requirement implies a large computational cost. A global approach, which in space physics means simulation box sizes exceeding thousands of ion inertial lengths or gyroradii per dimension, has not been possible, as the large spatial volume naturally has to carry the velocity space as well. The world’s (so far) only global magnetospheric hybrid-Vlasov simulation, the massively parallel Vlasiator, is therefore the prime application in this article.
This paper is organised as follows: Sect. 2 introduces the typical plasma systems and relevant processes one encounters in space. Sections 3 and 4 introduce the Vlasov theory and its numerical representations. Section 5 describes Vlasiator in detail and justifies the decisions made in the design of the code to aid those who would like to design their own (hybrid-)Vlasov system. At the time of writing, there are no standard verification cases for a (hybrid-)Vlasov system, but we describe the test cases used for Vlasiator. The physical findings are then illustrated briefly, showing that Vlasiator has brought about a paradigm change in space physics, emphasising the role of scale coupling in large-scale plasma systems. While this paper concerns mostly the near-Earth environment, we hope it is useful for astrophysical applications as well. Astrophysical large-scale modelling is still mostly based on non-magnetised gas (Springel 2005; Bryan et al. 2014), while in reality astrophysical objects are in the plasma state. In the future, pending new supercomputer infrastructure, it may be possible to design astrophysical simulations based on MHD first, and later possibly on kinetic theories. If this becomes feasible, we hope that our design strategies, complemented and validated by in situ measurements, can be helpful.
Kinetic physics in astrophysical plasmas
Thermal and non-thermal interactions between charged particles and electromagnetic fields follow the same basic rules throughout the universe, but the applicability of simplified theories and the relevant spatial, temporal, and virial scales vary greatly between different scopes of research. In this section, we present an overview of regions of interest and the phenomena found within them.
Astrophysical media and objects
Prime examples of themes requiring modelling include the dynamics of hot, cold, and dark matter in an expanding universe with unknown boundaries. The birth of the universe connects the rapid expansion and cooling of baryonic matter with quantum fluctuation anisotropies that eventually lead to the formation of galactic superclusters. Astrophysical simulations of the universe should naturally account for the expansion of spacetime and the associated effects of general relativity, and modelling of high-energy phenomena should correctly account for special relativity due to velocities approaching the speed of light. A recent forerunner in modelling the universe is EAGLE (Schaye et al. 2015), which utilises smoothed particle hydrodynamics, with subgrid modelling providing feedback of star formation, radiative cooling, stellar mass loss and feedback from stars and accreting black holes. These simulations operate on very much larger scales than the Vlasov equation for ions and electrons, yet they depend strongly on knowledge of processes at smaller length and time scales. Since the majority of the universe consists of the mostly empty interstellar and intergalactic media, the energy content of turbulent space plasmas must be understood. This has been investigated through the Vlasov equation (see, e.g., Weinstock 1969). Conversely, turbulent behaviour at large scales can act as a model for extending power laws to smaller scales (Maier et al. 2009). An alternative, if less common, approach for modelling galactic dynamics is to describe the distribution of stars as a Vlasov–Poisson system, to be explained below, with gravitational force terms instead of electromagnetic effects (Guo and Li 2008). This approach highlights the use of the Vlasov equation also on large spatial scales.
Solar system
Plasma simulations of the solar system are mostly concerned with the modelling of solar activity and its influence on the heliosphere. Solar activity can be divided into two components: the solar wind, consisting of particles escaping continuously from the solar corona due to its thermal expansion, and carrying with them turbulent fields; and transient phenomena such as flares and coronal mass ejections, during which energy and plasma are released explosively from the Sun into the heliosphere. Topics of active research in solar physics include for example the acceleration and expansion of the solar wind (Yang et al. 2012; Verdini et al. 2010; Pinto and Rouillard 2017), coronal heating (De Moortel and Browning 2015; Cranmer et al. 2017) and flux emergence (Schmieder et al. 2014). The latter is particularly important for transient solar activity, as flares and coronal mass ejections are due to the destabilisation of coronal magnetic structures through magnetic reconnection. The typical length of these coronal structures ranges between \(10^{6}\) and \(10^{8}\) m. Of great interest is also the propagation of the solar wind and solar transients into the heliosphere, in particular for studying their interaction with Earth and other planetary environments. Because of the large scales of the systems considered, solar and heliospheric simulations are generally based on MHD, meaning that currently existing theories of the Sun and solar eruptions are mostly based on the MHD approximation. Applying the Vlasov approach to near-Earth physics, which has important analogies to solar plasmas, may therefore provide important feedback to existing solar theories as well.
NearEarth space and other planetary environments
Figure 1 illustrates near-Earth space. The shock separating the terrestrial magnetic domain from the solar wind is called the bow shock (e.g., Omidi 1995), and the region of shocked plasma downstream of it is the magnetosheath (Balogh and Treumann 2013). The interplanetary magnetic field (IMF), which at 1 AU typically forms an angle of 45\(^{\circ }\) relative to the plasma flow direction, intensifies at the shock, increasing the magnetic field strength to roughly fourfold compared to that in the solar wind (e.g., Spreiter and Stahara 1994).
The bow shock–magnetosheath system hosts highly variable and turbulent environmental conditions, with the bow shock normal angle with respect to the IMF direction being one of the most important factors controlling the level of variability. At portions of the bow shock where the IMF is quasi-parallel to the bow shock normal (termed the quasi-parallel shock), some particles reflect at the shock and propagate back upstream, causing instabilities and waves in the foreshock upstream of the bow shock (e.g., Hoppe et al. 1981). On the quasi-perpendicular side of the shock, where the IMF direction is more perpendicular to the bow shock normal, the downstream magnetosheath is much smoother, but exhibits large-scale waves originating from anisotropies in the ion distribution function (e.g., Génot et al. 2011; Soucek et al. 2015; Hoilijoki et al. 2016). The foreshock–bow shock–magnetosheath coupled system is under active research, and since it is the magnetosheath plasma that ultimately determines the conditions within near-Earth space, the most important open questions include the processes which determine the plasma characteristics in space and time. The entire system has previously been modelled with MHD, which can be used to infer average properties of the dayside system (e.g., Palmroth et al. 2001; Chapman and Cairns 2003; Dimmock and Nykyri 2013; Mejnertsen et al. 2018), but is unable to take into account particle reflection, kinetic waves, and turbulence, and neglects e.g., plasma asymmetries between the quasi-parallel and quasi-perpendicular sides of the shock that require a non-Maxwellian ion distribution function.
The earthward boundary of the magnetosheath is called the magnetopause, a current layer exhibiting large gradients in the plasma parameter space. Energy and mass exchange between the upstream plasma and the magnetosphere occurs at the magnetopause (Palmroth et al. 2003, 2006c; Pulkkinen et al. 2006; Anekallu et al. 2011; Daughton et al. 2014; Nakamura et al. 2017), and therefore its processes are important in determining the amount of energy driving the space weather phenomena, which can endanger technological systems or human health (Watermann et al. 2009; Eastwood et al. 2017). Space weather phenomena are complicated and varied, and we give a non-exhaustive list of the most important categories. Direct energetic particle flows from the Sun alter the communication conditions especially at high latitudes, affecting radio broadcasts, aircraft communication with air traffic control, and radar signals. Sudden changes in the magnetic field induce currents in long terrestrial conductors, such as gas pipelines, railways, and power grids, which can sometimes be disrupted (e.g., Wik et al. 2008). Increasing numbers of satellites are being launched, vulnerable to sudden events in the geospace; indeed, some spacecraft have stopped operating in response to space weather events (Green et al. 2017). Overall, some estimates show that in the worst case, an extreme space weather event could induce economic costs of the order of 1–2 trillion USD during the first year following its occurrence, and that it could take 4–10 years for society to recover from its effects (National Research Council 2008). Understanding and predicting the geospace is ultimately done by modelling. While the existing global MHD models can be executed in near real-time and provide an average description of the system, they cannot capture the kinetic physics that is needed to explain the most severe space weather events.
One additional factor in the accurate modelling of the geospace as a global system is that one needs to address the ionised upper atmosphere, called the ionosphere, within the simulation. The Earth’s ionosphere is a weakly ionised medium, divided into three regions—named D (60–90 km), E (90–150 km), and F (> 150 km)—corresponding to three peaks in the electron density profile (Hargreaves 1995). From the magnetospheric point of view, the ionosphere represents a conducting layer closing currents flowing between the ionosphere and magnetosphere (Merkin and Lyon 2010), reflecting waves (Wright and Russell 2014), and depositing precipitating particles (e.g., Rodger et al. 2013). Further, the ionosphere is a source of cold electrons (Cran-McGreehin and Wright 2005) and heavier ions (e.g., Peterson et al. 1981). These cold ions of ionospheric origin may affect local processes in the magnetosphere, such as magnetic reconnection at the magnetopause (André et al. 2010; Toledo-Redondo et al. 2016). The global MHD models typically use an electrostatic module for the ionosphere, coupled to the magnetosphere by currents, precipitation and electric potential (e.g., Janhunen et al. 2012; Palmroth et al. 2006a). The ionosphere itself is modelled either empirically or based on first principles: the International Reference Ionosphere (IRI) model describes the ionosphere empirically from 50 to 1500 km altitude (Bilitza and Reinisch 2008), while, for instance, the Sodankylä Ion and Neutral Chemistry model solves the photochemistry of the D region, taking into account several hundred chemical reactions involving 63 ions and 13 neutral species (Verronen et al. 2005, and references therein). At higher altitudes, transport processes become important, and models such as TRANSCAR (Blelly et al. 2005, and references therein) or the IRAP Plasmasphere–Ionosphere Model (Marchaudon and Blelly 2015) couple a kinetic model for the transport of suprathermal electrons with a fluid approach to resolve the chemistry and transport of ions and thermal electrons in the convecting ionosphere. Neither the empirical nor the first-principles models use the Vlasov equation, which at ionospheric altitudes concerns much finer scales.
In general, the interaction of the solar wind with the other magnetized planets in our solar system is essentially similar to that with Earth. The main differences stem from the scales of the systems, which depend on the strength of their intrinsic magnetic field and the solar wind parameters changing with heliospheric distance. While the modelling of the magnetospheres of the outer giant planets is to date only achievable using fluid approaches, the small size of Mercury’s magnetosphere has made it a target for global kinetic simulations (Richer et al. 2012). For the same reason, kinetic models are also a popular tool to investigate the plasma environments of non-magnetized bodies such as Mars, Venus, comets, and asteroids. In particular, Umeda and Ito (2014) and Umeda and Fukazawa (2015) have studied the interaction of a weakly magnetized body with the solar wind by means of full-Vlasov simulations.
Scales and processes
The following processes are central in explaining plasma behaviour in the Solar–Terrestrial system and astrophysical domains: (1) magnetic reconnection, enabling energy and mass transfer between different magnetic domains; (2) shocks, forming due to supersonic relative flow speeds between plasma populations; (3) turbulence, providing energy dissipation across scales; and (4) plasma instabilities, transferring energy between the plasma and waves. All these processes contribute to particle acceleration, which is one of the most researched topics within the Solar–Terrestrial and astrophysical domains, and notorious for requiring understanding of both local microphysics and global scales. Below, we introduce some examples of these processes within systems having scales that can be addressed with the Vlasov approach. Simulations of non-thermal space plasmas encompass a vast range of scales, from the smallest (electron scales, ion kinetic scales) to local and even global structures. Table 1 lists typical ranges of a handful of plasma parameters encountered in different branches of space sciences and astrophysics. Especially in a larger astrophysical context, simulations cannot directly encompass all relevant spatial and temporal scales. It is important to note, however, that scientific results on kinetic effects can be achieved even without directly resolving all the spatial scales that may at first glance appear to be a requirement (Pfau-Kempf et al. 2018).
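To make the scale disparity concrete, the characteristic ion kinetic scales can be estimated directly from the plasma parameters. The sketch below uses assumed, typical solar wind values (illustrative only, not taken from Table 1) to evaluate the proton inertial length \(d_i = c/\omega _{pi}\) and the thermal proton gyroradius:

```python
import numpy as np

# Rough estimate of proton kinetic scales for assumed, typical
# solar wind conditions (values illustrative, not from Table 1).
c = 2.998e8          # speed of light [m/s]
eps0 = 8.854e-12     # vacuum permittivity [F/m]
e = 1.602e-19        # elementary charge [C]
m_p = 1.673e-27      # proton mass [kg]
k_B = 1.381e-23      # Boltzmann constant [J/K]

n = 5.0e6            # proton number density [m^-3]
B = 5.0e-9           # magnetic field strength [T]
T = 1.0e5            # proton temperature [K]

omega_pi = np.sqrt(n * e**2 / (eps0 * m_p))   # proton plasma frequency [rad/s]
d_i = c / omega_pi                            # proton inertial length [m]

v_th = np.sqrt(k_B * T / m_p)                 # proton thermal speed [m/s]
r_L = m_p * v_th / (e * B)                    # thermal proton gyroradius [m]

print(f"d_i ~ {d_i / 1e3:.0f} km, r_L ~ {r_L / 1e3:.0f} km")
```

Both scales come out at the order of 100 km, which illustrates why a global magnetospheric simulation box spanning tens of Earth radii per dimension corresponds to thousands of ion kinetic scales.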
Reconnection is a process whereby oppositely oriented magnetic fields break and rejoin, allowing a change in magnetic topology, plasma mixing, and energy transfer between different magnetic domains. Within the magnetosphere, reconnection occurs between the terrestrial northward-oriented magnetic field and the magnetosheath magnetic field, which mostly mimics the direction of the IMF but can sometimes be significantly altered by magnetosheath processes (Turc et al. 2017). Magnetospheric energy transfer is most efficient when the magnetosheath magnetic field is southward, while for northward IMF the reconnection locations move to the nightside lobes (Palmroth et al. 2006c). Actively researched topics focus on understanding the nature and location of reconnection as a function of driving conditions (e.g., Hoilijoki et al. 2014; Fuselier et al. 2017). Energy transfer at the magnetopause sets near-Earth space into a global circulation (Dungey 1961), leading to reconnection in the magnetospheric tail. The tail centre hosts a hot and dense plasma sheet, the home of perhaps the most diligent scientific investigations within the domain of magnetospheric physics. Especially in focus have been the explosive events during which the magnetospheric tail disrupts, accelerating particles and causing abrupt changes in the global geospace (e.g., Sergeev et al. 2012). Reconnection has been suggested as one of the main drivers of tail disruptions (e.g., Angelopoulos et al. 2008), while other theories related to plasma kinetic instabilities exist as well (e.g., Lui 1996). Tail disruptions have important analogues in solar eruptions (e.g., Birn and Hesse 2009), and investigating the tail disruptions with global simulations together with in situ measurements may shed light on other astrophysical systems as well.
Collisionless shocks form when plasma populations flow supersonically with respect to each other, redistributing flow energy into thermal energy and accelerating particles (e.g., Balogh and Treumann 2013; Marcowith et al. 2016). Shock fronts such as those found at supernova explosions are efficient accelerators (Fermi 1949). Diffusive shock acceleration (e.g., Axford et al. 1977; Krymskii 1977; Blandford and Ostriker 1978; Bell 1978) is the primary source of solar energetic particles, and occurs from the non-relativistic (e.g., Lee 2005) to the hyper-relativistic (Aguilar et al. 2015) energy regimes. Shock–particle interactions including kinetic effects have been modelled using various analytical and semi-empirical methods (see, e.g., Afanasiev et al. 2015, 2018; Hu et al. 2017; Kozarev and Schwadron 2016; Le Roux and Arthur 2017; Luhmann et al. 2010; Ng and Reames 2008; Sokolov et al. 2009; Vainio et al. 2014), but drastic approximations are usually required in order to model the whole acceleration process, and Vlasov methods have not yet been utilised. The classic extension of hydrodynamic shocks into the MHD regime has been shown inadequate by a number of hybrid models, owing to, e.g., shock reformation (Caprioli and Spitkovsky 2013; Hao et al. 2017) and anisotropic pressure and energy imbalances due to non-thermal particle populations (Chao et al. 1995; Génot 2009). Only a self-consistent treatment including kinetic effects is capable of describing diffusive shock acceleration accurately. Recent works coupling shocks and high-energy particle effects include, e.g., those by Guo and Giacalone (2013), Bykov et al. (2014), Bai et al. (2015) and van Marle et al. (2018). Challenges associated with simulating shocks include modelling gyrokinetic scales for ions whilst allowing the simulation to cover the large spatial volume involved in the particle trapping and energisation process.
Radially expanding shock fronts within strong magnetic domains result in a requirement for high resolution both spatially and temporally. Modern numerical approaches usually make some sacrifices, e.g. performing 1D–2V selfconsistent calculations or advancing 3D–1V semianalytical models. The Vlasov approach is especially interesting in probing the physics of particle injection, trapping, acceleration and escape.
In addition to shock acceleration, kinetic simulations of solar system plasmas are also applied to the study of solar wind turbulence. How energy cascades from large to small scales and is eventually dissipated is an outstanding question, which can be addressed using kinetic simulations (Bruno and Carbone 2013). Hybrid-Vlasov simulations (Valentini et al. 2010; Verscharen et al. 2012; Perrone et al. 2013) have in particular been utilised to study the fluctuations around the ion inertial scale, which is of particular importance as it marks the transition between the fluid and kinetic scales.
Plasma instabilities arise when a source of free energy in the plasma allows a wave mode to grow nonlinearly. They are ubiquitous in our universe, and play an important role both in solar–terrestrial physics, where, for example, the Kelvin–Helmholtz instability transfers solar wind plasma into the Earth’s magnetosphere (e.g., Nakamura et al. 2017), and in astrophysical media, for instance in accretion disks, where the turbulence is driven by the magnetorotational instability. Vlasov models have been applied to the study of many instabilities, such as the Rayleigh–Taylor instability (Umeda and Wada 2016, 2017), Weibel-type instabilities (Inglebert et al. 2011; Ghizzo et al. 2017), and the Kelvin–Helmholtz instability (Umeda et al. 2010b).
Modelling with the Vlasov equation
Simulating plasma poses numerous challenges from the modelling perspective. Constructing a perfect representation of the plasma, where every single charge carrier is factored into the equations, would require an immense amount of computational power. The spatial scales required to fully describe the plasma environment range from the microscale Debye length up to the macroscale of the phenomena one is trying to simulate. The particle density is such that performing a fully kinetic simulation of even a low-density plasma self-consistently, using Maxwell’s equations for the electric and magnetic fields and the Lorentz force equation for the protons and electrons, is out of reach even for present-day large supercomputers. Currently, only plasma phenomena that occur on relatively short spatial and temporal scales, such as magnetic reconnection and high-frequency waves, are modelled using this approach. For this reason, adopting a continuous representation of the velocity space and using distribution functions as the main object to be simulated provides an advantageous way of simulating plasmas.
Plasmas can also be treated as a fluid, and the standard way of doing that is the magnetohydrodynamic (MHD) approximation. This modelling approach is more suitable for large domain sizes where detailed information is not necessary. However, MHD does not offer information about small spatiotemporal scales. Statistical mechanics provides information about a neutral gas based on assumptions made at the atomic scale. The kinetic plasma approach treated in the following is based on this same principle: it describes plasmas using distribution functions in phase space and uses Maxwell’s equations and the Vlasov equation to advance the fields and the distribution functions, respectively.
The Vlasov equation
In plasmas, as in neutral gases, the dynamical state of every constituent particle can be described by its position (\(\mathbf {x}\)) and momentum (\(\mathbf {p}\)) (or velocity \(\mathbf {v}\)) at a given time t. It is also common to separate the different species s in a plasma (electrons, protons, helium ions, etc.). Accordingly, the dynamical state of a system of particles of species s, at a given time, can be described by a distribution function \(f_s(\mathbf {x},\mathbf {v},t)\) in six-dimensional space, also called phase space.
The distribution function \(f_{s}(\mathbf {x},\mathbf {v},t)\) represents the phase-space density of the species inside a phase-space volume element of size \(\mathrm {d}^3\mathbf {x}\, \mathrm {d}^3\mathbf {v}\) at the point (\(\mathbf {x},\mathbf {v}\)) and time t. Hence, in a system with N particles, integrating over the spatial volume \(\mathcal {V}_r\) and the velocity volume \(\mathcal {V}_v\) (i.e., the entire phase-space volume \(\mathcal {V}\)) one obtains

\[ N = \int _{\mathcal {V}_r} \int _{\mathcal {V}_v} f_s(\mathbf {x},\mathbf {v},t)\, \mathrm {d}^3\mathbf {x}\, \mathrm {d}^3\mathbf {v}. \quad (1) \]
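To make the normalisation concrete, the sketch below (a hypothetical 1V example with assumed parameter values) discretises a drifting Maxwellian on a velocity grid and verifies that its zeroth and first velocity moments recover the prescribed number density and bulk velocity:

```python
import numpy as np

# Hypothetical 1V example: a drifting Maxwellian, discretised on a
# uniform velocity grid, should integrate back to the prescribed
# density n0 (zeroth moment) and drift v0 (first moment over n).
n0 = 1.0e6        # number density [m^-3], assumed value
vth = 4.0e5       # thermal speed [m/s], assumed value
v0 = 2.0e5        # bulk drift speed [m/s], assumed value

v = np.linspace(-10.0 * vth, 10.0 * vth, 2001)   # velocity grid
dv = v[1] - v[0]

# 1V Maxwellian: f(v) = n0 / (sqrt(2 pi) vth) * exp(-(v - v0)^2 / (2 vth^2))
f = n0 / (np.sqrt(2.0 * np.pi) * vth) * np.exp(-((v - v0) ** 2) / (2.0 * vth ** 2))

n = np.sum(f) * dv            # zeroth moment: number density
V = np.sum(f * v) * dv / n    # first moment: bulk velocity

print(n / n0, V / v0)  # both ratios close to 1
```

In a full 3D–3V code the same sums run over the velocity-space cells of each spatial cell.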
It is important to represent and describe the time evolution of the distribution functions given some external conditions. The Boltzmann equation,

\[ \frac{\partial f_s}{\partial t} + \mathbf {v} \cdot \nabla f_s + \frac{\mathbf {F}}{m_s} \cdot \nabla _{\mathbf {v}} f_s = \left( \frac{\partial f_s}{\partial t} \right) _{\mathrm {coll}}, \quad (2) \]

uses the distribution function \(f_{s}\) to describe the collective behaviour of a system of particles subject to collisions and external forces \(\mathbf {F}\), where the term on the right-hand side represents the effect of collisions between particles. Its derivation starts from the standard equation of motion and also takes Liouville’s theorem into account (see Sect. 3.3), and it is therefore valid for any Hamiltonian system. In plasmas, the Lorentz force takes the role of the external force and collisions between particles are often neglected. Taking these two assumptions into consideration, one obtains the Vlasov equation (Vlasov 1961), often called “the collisionless Boltzmann equation”:

\[ \frac{\partial f_s}{\partial t} + \mathbf {v} \cdot \nabla f_s + \frac{q_s}{m_s} \left( \mathbf {E} + \mathbf {v} \times \mathbf {B} \right) \cdot \nabla _{\mathbf {v}} f_s = 0. \quad (3) \]
If a significant part of the plasma acquires a high enough kinetic energy, relativistic effects start to become important. It can be shown that the Vlasov equation (3) is Lorentz-invariant and therefore holds in such cases if v is simply considered to be the proper velocity (Thomas 2016). There are only very few numerical applications related to space or astrophysics that directly solve the relativistic Vlasov equation (as opposed to the particle-in-cell approach, which solves the same physical system through statistical sampling of particles and their propagation, compare Sect. 1), but it is more common in other contexts, such as laser–plasma interaction applications (Martins et al. 2010; Inglebert et al. 2011). By using a frame that propagates at relativistic speeds, a Lorentz-boosted frame, the smallest time or space scales to be resolved become larger and the plasma length shrinks due to Lorentz contraction, which shortens simulation execution times.
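Numerically, the left-hand side of the Vlasov equation is often advanced with operator splitting, alternating spatial translation with velocity-space acceleration. As a minimal, hypothetical illustration (not any particular production solver), the sketch below performs one semi-Lagrangian update of the 1D–1V free-streaming part, \(\partial f/\partial t + v\, \partial f/\partial x = 0\), on a periodic grid:

```python
import numpy as np

# One semi-Lagrangian step for df/dt + v df/dx = 0 on a periodic
# 1D-1V phase-space grid: each velocity row is traced back along its
# characteristic x - v*dt and linearly interpolated. Grid sizes and
# the initial condition are arbitrary illustration choices.
nx, nv = 64, 32
x = np.linspace(0.0, 1.0, nx, endpoint=False)
v = np.linspace(-1.0, 1.0, nv)
dx = x[1] - x[0]
dt = 0.01

# initial condition: a density bump in x, Maxwellian-like in v
f = np.exp(-v[:, None] ** 2) * np.exp(-((x[None, :] - 0.5) ** 2) / 0.01)

def translate_x(f, v, dt, dx):
    """Shift row j of f by v[j]*dt in x (periodic, linear interpolation)."""
    out = np.empty_like(f)
    for j, vj in enumerate(v):
        s = vj * dt / dx              # backward shift in units of cells
        i0 = int(np.floor(s))
        w = s - i0                    # fractional part, 0 <= w < 1
        # out[i] = (1 - w) * f[i - i0] + w * f[i - i0 - 1]
        out[j] = (1.0 - w) * np.roll(f[j], i0) + w * np.roll(f[j], i0 + 1)
    return out

f_new = translate_x(f, v, dt, dx)
# this update conserves the total phase-space content (mass)
print(abs(f_new.sum() - f.sum()))
```

The same shift-and-interpolate logic, with higher-order conservative reconstruction instead of linear interpolation, underlies many semi-Lagrangian Vlasov solvers.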
Closing the Vlasov equation system
In any simulation, it is necessary to couple the Vlasov equation with the field equations to form a closed set of equations. The Vlasov equation deals with the time evolution of the distribution function and uses the electromagnetic fields as input. Thus the fields need to be evolved based on the updated distribution function. There are two main ways of closing the equation set: the electrostatic approach, which uses the Poisson equation to close the system, and the electromagnetic approach, which uses the Maxwell equations to that end. These are typically referred to as the Vlasov–Poisson and the Vlasov–Maxwell systems of equations. With appropriate approximations, the system can also be closed without solving the Vlasov equation for all species.
The Vlasov–Poisson equations
The Vlasov–Poisson equations model plasma in the electrostatic limit without a magnetic field (corresponding to the assumption that \(v/c \rightarrow 0\) for any relevant velocity v in the system). Thus Eq. (3) takes the form

\[ \frac{\partial f_s}{\partial t} + \mathbf {v} \cdot \nabla f_s - \frac{q_s}{m_s} \nabla \varPhi \cdot \nabla _{\mathbf {v}} f_s = 0 \quad (4) \]
for all species, and the system is closed by the Poisson equation

\[ \nabla ^2 \varPhi = -\frac{\rho _q}{\epsilon _0}, \quad (5) \]
where \(\varPhi \) is the electric potential and \(\epsilon _0\) is the vacuum permittivity. Using Eq. (1), the total charge density \(\rho _q\) is obtained by taking the zeroth moment of f for all species:
The Vlasov–Maxwell equations
In the electromagnetic case, the Vlasov equation (3) is retained for all species and complemented by the Maxwell equations, namely the Ampère law
the Faraday law
and the Gauss laws
Usually in a numerical scheme only Eqs. (7) and (8) are discretised. If Eq. (9) is not satisfied by the numerical method used, numerical instabilities can occur because the underlying system needs to be divergence-free.
Hybrid-Vlasov systems
Hybrid-Vlasov systems retain the Vlasov equation only for the ions, treating the electrons in a reduced manner (typically as a fluid). This has the advantage that the model is not required to resolve the short temporal and spatial scales associated with electron dynamics. Typically, the system is closed by taking moments of the Vlasov equation and making approximations pertinent to the simulation system at hand.
Integrating (3) over the velocity space, one gets the continuity equation or the zeroth moment of the Vlasov equation:
where \(n_s\) and \(\mathbf {V}_s\) are the number density and the bulk velocity of species s, respectively. Multiplying (3) by the phase-space momentum \(m_s \mathbf {v}_s\) and integrating over the velocity space, one obtains the equation of motion or the first moment of the Vlasov equation:
where \(\mathcal {P}_s\) is the pressure tensor of species s, which can in turn be obtained as the second moment of \(f_s\). This leads to a chain of equations where each step depends on the next moment of \(f_s\). The most common closure of hybrid-Vlasov systems is taken at this level by summing the electron and ion equations of motion and neglecting terms based on the electron-to-ion mass ratio \(m_\mathrm{e}/m_\mathrm{i} \ll 1\), leading to the generalised Ohm’s law
where \(\sigma \) is the conductivity, e is the elementary charge, \(n_e\) is the electron number density, and \(\mathbf {j}\) is the total current density. In the limit of slow temporal variations, the rightmost electron inertia term is typically dropped from the equation. Further, assuming high conductivity, one is left with the Hall term \(\mathbf {j}\times \mathbf {B}/\rho _q\) and the electron pressure gradient term \(\nabla \cdot \mathcal {P}_\mathrm{e}/\rho _q\) on the right-hand side. The electron pressure term can be handled in a number of ways, such as using isobaric or isothermal assumptions. If electrons are assumed to be completely cold, the equation can be written as the Hall MHD Ohm’s law
Thus the hybrid-Vlasov system of equations retains the Vlasov equation (3) for ions and the Maxwell–Ampère and Maxwell–Faraday equations (7) and (8) but replaces the Gauss equations by a generalised Ohm equation (12) with appropriate approximations. If rapid field fluctuations are excluded from the solution, the displacement current from Ampère’s law can be omitted, resulting in the Darwin approximation and yielding
This makes the equation system more easily tractable.
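To make the closure concrete, the following Python sketch (our own illustration, not Vlasiator's implementation; the single-cell setup and function names are ours) computes the density and bulk-velocity moments from a discretised ion distribution and evaluates the cold-electron (Hall MHD) Ohm's law, with the current density supplied externally, e.g., from the Darwin-approximated Ampère law:

```python
import numpy as np

# Sketch (not a reference implementation): evaluate the Hall MHD Ohm's law
# E = -V x B + (j x B) / (e n) in one spatial cell, with the moments n and V
# obtained from a discretised ion distribution f(v). The current density j
# is assumed to be provided, e.g., by j = curl(B)/mu0 in the Darwin limit.

E_CHARGE = 1.602176634e-19  # elementary charge [C]

def moments(f, v_grid, dv3):
    """Zeroth and first velocity moments on a uniform Cartesian v-grid.

    f      : 3D array of phase-space density in one spatial cell [s^3 m^-6]
    v_grid : tuple (vx, vy, vz) of 3D arrays of cell-centre velocities [m/s]
    dv3    : velocity-space cell volume [(m/s)^3]
    """
    n = np.sum(f) * dv3                                      # density [m^-3]
    V = np.array([np.sum(f * v) * dv3 / n for v in v_grid])  # bulk velocity
    return n, V

def hall_mhd_E(n, V, B, j):
    """Electric field from the cold-electron (Hall MHD) Ohm's law,
    assuming quasi-neutrality so that rho_q = e * n."""
    return -np.cross(V, B) + np.cross(j, B) / (E_CHARGE * n)
```

For a drifting Maxwellian the moments recover the prescribed density and drift, and with \(\mathbf {j} = 0\) the field reduces to the convective term \(-\mathbf {V}\times \mathbf {B}\).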
Note that conversely, neglecting ion dynamics and retaining the electron Vlasov equation can be advantageous in certain contexts and is formally equivalent to the above with switched electron and ion variables.
Properties of the Vlasov equation
When solving the Vlasov equation, there are a number of useful properties that can be exploited in numerical solvers. In its fundamental structure, the Vlasov equation is a 6D advection equation, equivalent to a vanishing material derivative of the phase-space density. In the absence of any source terms, finding a solution at any given point in time requires determining the motion of the phase-space density. One particularly handy property follows from Liouville’s theorem, from which the Vlasov equation is derived: phase-space density is constant along the trajectories of the system. This means that a solution of the system at one point in time can be followed forward or backward in time arbitrarily, as long as the trajectories of the phase-space elements, the characteristics of the Vlasov equation, are known.
One consequence of Liouville’s theorem is that initial density perturbations tend to form smaller and smaller structures as trajectories of phase-space regions with different densities converge over time. In physical reality, this so-called filamentation has a natural cutoff at scales where diffusive scattering effects become important; this cutoff, however, is not part of the pure mathematical description of the Vlasov equation. Therefore, numerical implementations need to address this issue either by explicit filtering steps or by innate numerical diffusivity.
A fundamental consideration in any physical modelling is the conservation of certain quantities, like mass, momentum, energy, and electric charge. This of course applies to Vlasov-based plasma modelling as well. Variational approaches have been used to derive the Vlasov–Maxwell and Vlasov–Poisson systems of equations as well as reduced forms (e.g., Marsden and Weinstein 1982; Ye and Morrison 1992; Brizard 2000; Brizard and Tronko 2011). Care has to be taken when developing numerical solutions of the Vlasov equation that quantities relevant to the problem to be solved are conserved adequately by the method (e.g., Filbet et al. 2001; Crouseilles et al. 2010; Cheng et al. 2013, 2014; Becerra-Sagredo et al. 2016; Einkemmer and Lubich 2018).
Numerical modelling and HPC aspects
Finding solutions to the Vlasov equation in order to model physical systems typically involves computer simulations. Therefore, the phase-space density \(f(\mathbf {x},\mathbf {v},t)\) needs to be numerically represented, which strongly influences the choice of solvers and the resulting simulation code. This section is structured around different numerical representations of f.
A problem common to all numerical approaches solving the Vlasov equation is the curse of dimensionality: to fully reproduce all physical behaviour, the simulation domain must be 6-dimensional, with all 6 dimensions being of sufficient resolution or fidelity for the desired physical system. This considerably impacts the size of the phase space and hence the computational burden of the algorithm.
Eulerian approach
In a straightforward manner, the phase-space distribution function \(f(\mathbf {x},\mathbf {v},t)\) can be discretised on a Eulerian grid, which can be operated on by different kinds of solvers (see Fig. 2a). The structure of the Vlasov equation allows both finite volume (FV) and semi-Lagrangian solvers to be employed, and all of these have been applied with some success (Arber and Vann 2002). Discretisation of velocity space to a finite grid size \(\varDelta v\) also automatically imposes a lower limit for phase-space filamentation (compare Sect. 3.3), at which the grid will naturally smooth out the structure. In some cases this is a purely numerically diffusive process, whereas others use explicit smoothing, filtering or subgrid modelling (e.g., Klimas 1987; Klimas and Farrell 1994). The 6-dimensional structure of the phase space, along with the physical scales and resolutions imposed by the underlying physical system (compare Sect. 2), makes a Eulerian representation in a Cartesian grid computationally impractical in the vast majority of cases concerning a large simulation volume.
Let us consider as an example a simulation of the Earth’s entire magnetosphere using a full 3D–3V, Eulerian hybrid-Vlasov model. Let it extend up to the lunar orbit (\(x \sim \pm \,60 \, R_E\) in every direction) resolving approximately the solar wind ion inertial length (\(\varDelta x \sim 100\) km), and let the velocity space encompass typical solar wind velocities with some safety margin (\(v \sim \pm \,2000\) km/s) while resolving the solar wind thermal speed (\(\varDelta v \sim 30\) km/s). In this case the resulting phase space would contain a total of \(10^{18}\) cells. If each of them were represented by a single-precision floating point value, a minimum of 4 EiB of memory would be required to represent it!
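The cell count quoted above is easy to verify with a few lines of Python (a back-of-the-envelope sketch using the parameters given in the text):

```python
# Back-of-the-envelope check of the phase-space size quoted in the text:
# a 3D-3V Eulerian hybrid-Vlasov magnetosphere spanning +-60 R_E with
# dx = 100 km, and velocity space spanning +-2000 km/s with dv = 30 km/s,
# one 4-byte single-precision float per phase-space cell.
R_E = 6371.0                      # Earth radius [km]
nx = int(2 * 60 * R_E / 100)      # spatial cells per dimension
nv = int(2 * 2000 / 30)           # velocity cells per dimension
cells = nx**3 * nv**3             # total phase-space cells, ~1e18
bytes_total = 4 * cells           # ~4 EB, i.e., of order EiB of memory
print(f"{cells:.1e} cells, {bytes_total / 2**60:.1f} EiB")
```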
Fortunately, there are many possibilities for simplification of the computational grid size:

Reduction of phase-space dimensionality, if the physical system under consideration allows it, is an easy and efficient way to reduce computational load. Simulations of longitudinal wave instabilities (Jenab and Kourakis 2014; Shoucri 2008) and fundamental studies of filamentation have been performed in 1D–1V setups, whereas laser wakefield and wave interaction simulations tend to be modelled in 2D–2V or 2D–3V setups (Besse et al. 2007; Sircombe et al. 2004; Thomas 2016). Another possibility is to globally reduce the number of grid points by introducing a static sparse grid representation, where the grid may be uneven with respect to Cartesian coordinates but remains static during runtime (Kormann and Sonnendrücker 2016; Guo and Cheng 2016).

Gyrokinetic simulations reduce the velocity space by dropping the azimuthal velocity dimensions perpendicular to the magnetic field, thus assuming complete gyrotropy of the distribution functions (e.g., Görler et al. 2011).

Adaptively refined grids can be employed to reduce resolution and thus computational expense in areas of phase space that are considered less important for the physical problem at hand (Wettervik et al. 2017; Besse et al. 2008).

In many physical scenarios, large parts of phase space contain an extremely low, if not zero, density, and contribute nothing to the overall dynamic development. Suitable pruning of the phase-space grid can thus be performed to obtain a data structure that dynamically removes grid elements during runtime and keeps them only in regions deemed relevant for the physical system dynamics. The computational speedup gained through the reduction of phase-space volume can in some cases be a trade-off against physical accuracy, and needs to be carefully considered. We have implemented this option in Vlasiator and call it the dynamic sparse phase-space representation, discussed further in Sect. 5.2. This method is not to be confused with the static sparse grid methods (Kormann and Sonnendrücker 2016; Guo and Cheng 2016), which are fundamentally dimension reduction techniques similar to low-rank approximations.
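As a toy illustration of such pruning (a deliberately simplified sketch, not Vlasiator's actual data structure; the block size, threshold criterion and function name are ours), velocity space can be tiled into small blocks and only those blocks whose peak density exceeds a threshold retained in a hash map:

```python
import numpy as np

# Minimal sketch of a dynamically sparse velocity-space store: velocity
# space is tiled into 4^3-cell blocks, and only blocks whose peak
# phase-space density reaches a threshold are kept, in a map from block
# index to dense block data. Assumes a cubic grid divisible by BLOCK.

BLOCK = 4  # velocity cells per block edge

def sparsify(f, threshold):
    """Return {(bi, bj, bk): BLOCK^3 array} for blocks with max >= threshold."""
    blocks = {}
    nb = f.shape[0] // BLOCK
    for bi in range(nb):
        for bj in range(nb):
            for bk in range(nb):
                data = f[bi*BLOCK:(bi+1)*BLOCK,
                         bj*BLOCK:(bj+1)*BLOCK,
                         bk*BLOCK:(bk+1)*BLOCK]
                if data.max() >= threshold:
                    blocks[(bi, bj, bk)] = data.copy()
    return blocks
```

For a Maxwellian, most of the tail blocks are dropped while essentially all of the density is retained, illustrating the speedup-versus-accuracy trade-off mentioned above.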
In plasmas, the magnetic field \(\mathbf {B}\) makes the particles gyrate while the electric field \(\mathbf {E}\) causes them to accelerate and drift. It can be advantageous to take the characteristics of acceleration due to the Lorentz force into consideration when choosing an appropriate grid for the numerical phase-space representation. Common ideas include:

A polar velocity coordinate system aligned with the magnetic field and centred around the drift velocity, \(\mathbf {v} = ( v_\parallel ,\, v_r,\, v_\phi ),\) in which the gyrophase coordinate \(v_\phi \) has a much lower resolution than \(v_\parallel \) and \(v_r\). This can be employed in cases where the velocity distribution functions are known not to deviate strongly from gyrotropy, i.e., to exhibit cylindrical symmetry with respect to the magnetic field direction. However, the disadvantage of a polar velocity space over a Cartesian one is the more complex coordinate transformation required for transport into the spatial neighbours.

A Cartesian representation of velocity space, in which its coordinate axes corotate with the local gyration at every given spatial cell. Such a setup has the advantage that no transformation of velocity space due to the magnetic field has to be performed at all, and no numerical diffusion due to gyration motion will occur. It does however come at the cost of more complicated spatial updates, since neighbouring spatial domains no longer have identical velocity space axes. A suitable interpolation or reconstruction has to take place in the spatial transport, thus potentially negating the advantage in numerical diffusivity.
For the actual process of solving the Vlasov equation, a fundamental decision has to be made in the structure of the code: whether the solution steps are to be performed in a proper 6D manner (e.g., Vogman 2016), or whether a Strang-splitting approach is taken (Strang 1968; Cheng and Knorr 1976; Mangeney et al. 2002), in which the position and velocity space solution steps are performed independently of each other. Due to the large number of involved dimensions, and thus the computational cost of each solver step, the latter approach tends to have significant performance benefits, whilst still achieving convergence (Einkemmer and Ostermann 2014). Alternative time-splitting methods based on Hamiltonians have also been proposed (e.g., Crouseilles et al. 2015; Casas et al. 2017).
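The Strang-split update cycle can be sketched as follows (a minimal 1D–1V Python illustration, our own; boundaries are periodic in both x and v for simplicity, and the 1D shifts use first-order linear interpolation, whereas production solvers employ higher-order conservative reconstructions):

```python
import numpy as np

# Structural sketch of Strang splitting for a 1D-1V Vlasov solver:
# half-step acceleration, full-step spatial translation, half-step
# acceleration. Each 1D advection is a semi-Lagrangian periodic shift
# with linear interpolation (first order; illustrative only).

def shift(f1d, delta, dgrid):
    """Semi-Lagrangian periodic shift of a 1D profile by `delta` (gather)."""
    n = f1d.size
    idx = (np.arange(n) - delta / dgrid) % n       # backward-traced indices
    lo = np.floor(idx).astype(int)
    w = idx - lo                                   # interpolation weight
    return (1 - w) * f1d[lo % n] + w * f1d[(lo + 1) % n]

def strang_step(f, v, a, dx, dv, dt):
    """One Strang-split step of df/dt + v df/dx + a df/dv = 0; f is (nx, nv)."""
    # half acceleration: shift each spatial column along v by a*dt/2
    for i in range(f.shape[0]):
        f[i, :] = shift(f[i, :], a * dt / 2, dv)
    # full translation: shift each velocity row along x by v*dt
    for j in range(f.shape[1]):
        f[:, j] = shift(f[:, j], v[j] * dt, dx)
    # half acceleration again completes the second-order splitting
    for i in range(f.shape[0]):
        f[i, :] = shift(f[i, :], a * dt / 2, dv)
    return f
```

Because each sub-step is a uniform periodic shift, the total phase-space density is conserved to machine precision in this sketch.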
If a Cartesian velocity grid is employed in the simulation, multiple families of solvers are available (Filbet and Sonnendrücker 2003b). In all cases, the diffusivity of the solver needs to be considered. In particular, uncontrolled velocity space diffusion manifests itself as numerical heating, as the distribution function tends to broaden over time. Higher-order solvers and reconstruction methods, as well as explicit formulations in which moments of the distribution function are conserved, are therefore advisable (Balsara 2017).
The choice of a Eulerian representation of phase space brings with it certain implementation considerations for high-performance computing (HPC). The relative ease of spatial decomposition into independent computational domains, which communicate through ghost cells, can be employed readily for Eulerian Vlasov simulations, providing a straightforward path to parallel implementations. On the other hand, the inherent limitations of Eulerian schemes (such as conditions on time steps, compare Sect. 5.3) limit their overall numerical efficiency, and the high-dimensional nature of phase space can lead to challenges in appropriately representing and resolving the numerical grids. As so often in HPC, design decisions have to be based on the specific properties of the physical system under investigation.
Finite volume solvers
As the Vlasov equation (3) is fundamentally a hyperbolic conservation law in 6D, it can be solved using the well-established methods of finite volumes (FV; LeVeque 2002). In this approach, the phase-space fluxes are calculated through each interface of a discrete simulation volume (or phase-space cell) by reconstructing the phase-space density distribution through an appropriate interpolation scheme. The characteristic velocities at both sides of this interface are determined and the Riemann problem (Toro 2014) at each of these interfaces is solved to update the material content in each cell.
If Strang splitting is used to perform separate spatial and velocity-space updates, it is noteworthy that the state vector contains only a single scalar quantity (the phase-space density) and each cell interface update only needs to take a single characteristic velocity into consideration: for the update in a spatial direction, the characteristic is given by the corresponding cell’s velocity space coordinates, whereas in the velocity space update step, the acceleration due to magnetic, electric and external field forces is homogeneous throughout each spatial cell. The Riemann problem for the Vlasov update therefore does not require the solution or approximation of an eigenvalue problem, which significantly simplifies it in comparison to hydrodynamic or MHD FV solvers. This property also enables the efficient use of semi-Lagrangian solvers (discussed further in Sect. 4.4).
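The single-characteristic Riemann problem can be illustrated with a first-order donor-cell sketch (our own Python illustration, periodic in x; higher-order schemes change only the reconstruction of the face value, not the wave structure):

```python
import numpy as np

# Spatial FV transport of one velocity slice f(x, v_j): the characteristic
# speed at every interface is just v_j, so the interface flux is pure
# upwinding and no eigenvalue problem is needed. First-order donor-cell,
# periodic in x; a sketch for illustration only.

def fv_transport_step(f_slice, v_j, dx, dt):
    """Conservative upwind update of one velocity slice."""
    # face value at i+1/2: density of the upwind cell
    if v_j >= 0:
        face = f_slice                  # face i+1/2 takes cell i
    else:
        face = np.roll(f_slice, -1)     # face i+1/2 takes cell i+1
    flux = v_j * face                   # phase-space flux through each face
    # flux-difference update: f[i] -= dt/dx * (F[i+1/2] - F[i-1/2])
    return f_slice - dt / dx * (flux - np.roll(flux, 1))
```

At a Courant number of exactly one the donor-cell scheme reproduces the exact shift, and the flux-difference form conserves the total density by construction.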
As will be shown in Sect. 5, versions of the Vlasiator code until 2014 employed an FV formulation of the phase-space update (von Alfthan et al. 2014), and numerous other implementations exist (Banks and Hittinger 2010; Wettervik et al. 2017). A comprehensive introduction to the implementation, and a thorough analysis of the behaviour, of a fully 6D FV Vlasov solver is given by Vogman (2016).
Finite difference solvers
While the Vlasov equation (3) could in principle be solved by directly employing finite difference methods, this approach does not seem to be favoured, and its applications in the literature appear to be limited to fundamental theory studies (e.g., Schaeffer 1998; Holloway 1995). The biggest issue with finite difference formulations is the lack of explicit conservation of the moments of the distribution function and related quantities. While high-order methods can still maintain suitable approximate conservation properties, their computational demands and/or diffusivity make them impractical.
Spectral solvers
Instead of a direct discretisation of the phase-space density \(f(\mathbf {x},\mathbf {v}, t)\) in its \(\mathbf {x}\) and \(\mathbf {v}\) coordinates, a change of basis functions can be employed, each choice coming with its own benefits and limitations. The transformation of f into a different basis can be performed in the velocity coordinates only (cf. Fig. 2b), or in both spatial and velocity coordinates, depending on the physical application.
If a splitting scheme is employed, in which velocity and real space advection updates are treated separately, the advection in a Fourier-transformed coordinate can be performed entirely in Fourier space, as the transform of any coordinate \(x \rightarrow k_x\) turns the differential advection operator \(v_x \,\nabla _x\) into a simple multiplication:
However, for the acceleration update, this transformation brings the additional complication that the acceleration \(\mathbf {a}\) would have to be independent of \(\mathbf {v}\), which is true for the electrostatic Vlasov–Poisson system but incorrect in full Vlasov–Maxwell scenarios, due to the v-dependence of the Lorentz force. In order to accommodate velocity-dependent acceleration, solving a system in such a way typically requires multiple forward and backward Fourier transforms within one time step (Klimas and Farrell 1994).
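The spatial advection step discussed above can be sketched as follows (a minimal Python illustration for a periodic domain; the function name is ours): each Fourier mode is multiplied by a phase factor, which performs the translation exactly for the resolved modes.

```python
import numpy as np

# Spatial advection in Fourier space: transforming x -> k turns the
# operator v * df/dx into multiplication by i*k*v, so a time step dt is
# an exact multiplication by exp(-i*k*v*dt) per (k, v) pair. Periodic
# domain of length L; f has shape (nx, nv).

def spectral_translate(f, v, L, dt):
    """Advect f(x, v) in x by v*dt for each velocity column."""
    k = 2 * np.pi * np.fft.fftfreq(f.shape[0], d=L / f.shape[0])
    f_hat = np.fft.fft(f, axis=0)
    f_hat *= np.exp(-1j * k[:, None] * v[None, :] * dt)
    return np.real(np.fft.ifft(f_hat, axis=0))
```

Unlike grid-based interpolation, this update introduces no numerical diffusion; for a shift by an integer number of cells it reduces to an exact circular shift.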
In such a representation, the limit of filamentation becomes the question of which maximum velocity space frequency \(\mathbf {k}_{v,\text {max}}\) is available, and the filamentation problem itself becomes a boundary problem at the maximum extents of velocity \(\mathbf {k}\)-space (Eliasson 2011). However, stability issues of this scheme remain under discussion (Figua et al. 2000; Klimas et al. 2017).
Finally, a full Fourier-space representation \(\tilde{f}\left( \mathbf {k}_r,\mathbf {k}_v, t\right) \), in which the real space coordinate \(\mathbf {x} \rightarrow \mathbf {k}_r\) is also transformed, is a possibility. However, it further complicates the treatment of configuration and velocity space boundaries (Eliasson 2001). When used with periodic spatial boundary conditions, such a setup can be quite efficient for the study of kinetic plasma wave interactions.
Apart from the Fourier basis, other orthogonal function systems can be used as the basis for the description of phase-space densities. A popular choice is Hermite functions (Delzanno 2015; Camporeale et al. 2016), whose \(L^2\) convergence behaviour closely matches that of physical velocity distribution functions, and whose property of being eigenfunctions of the Fourier transform can be used as a numerical advantage. Since the zeroth Hermite function \(H_0(\mathbf {v})\) is simply a Maxwellian particle distribution, a hybrid-Vlasov code with Hermitian basis functions should replicate MHD behaviour in this limit. Adaptive inclusion of higher-order Hermite functions then allows an increasing amount of kinetic physics to be numerically represented.
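A minimal sketch of such a projection (our own illustration, assuming a one-dimensional velocity coordinate normalised to the thermal speed; the normalisation convention and function names are ours) shows how a Maxwellian of matching width projects entirely onto the zeroth basis function:

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

# Sketch of a Hermite-spectral velocity representation: project f(v) onto
# orthonormal Hermite functions
#   psi_n(v) = H_n(v) exp(-v^2/2) / sqrt(2^n n! sqrt(pi)),
# with v in units of the thermal speed. A Maxwellian with matching width
# has only the n = 0 coefficient non-zero, which is why such a basis
# recovers fluid-like (MHD) behaviour at lowest order.

def hermite_function(n, v):
    coef = np.zeros(n + 1)
    coef[n] = 1.0                                   # select H_n
    norm = sqrt(2.0**n * factorial(n) * sqrt(pi))
    return hermval(v, coef) * np.exp(-v**2 / 2) / norm

def hermite_coeffs(f, v, nmax):
    """Coefficients c_n = integral f(v) psi_n(v) dv on a uniform grid."""
    dv = v[1] - v[0]
    return np.array([np.sum(f * hermite_function(n, v)) * dv
                     for n in range(nmax + 1)])
```

Kinetic structure (beams, anisotropies, filamentation) then appears as growing amplitude in the higher-order coefficients.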
A common problem of any kind of spectral method, be it Fourier-based or using any other choice of non-localised basis functions, is the formulation of boundary conditions. While microphysical simulations of wave or scattering behaviour can usually get away with periodic boundary conditions, macroscopic systems require boundaries at which the interaction of plasma with solid or gaseous matter is to be modelled (such as planetary or stellar surfaces), as well as inflow and outflow boundaries. Due to the unavailability of suitable spectral formulations for these boundaries, spectral-domain solvers have not gained a foothold in modelling efforts of macroscopic astrophysical systems.
For any non-local choice of basis function for the phase-space representation, be it Fourier, Hermite or wavelet-based (Besse et al. 2008), extra thought has to be put into the scalability of parallel solvers. If a change of basis (such as a switch from a real-space to a Fourier-space representation) is required as part of the simulation update step, this will typically not scale beyond hundreds of cores in supercomputing environments.
Tensor train
An entirely separate class of numerical representations for the phase-space density is provided by the tensor train formalism (Kormann 2015), illustrated in Fig. 2c. The idea behind this approach is inspired by Strang-splitting solvers, in which spatial and velocity dimensions are treated in individual and subsequent solver steps. The overall distribution function \(f(x_1,x_2,\ldots ,x_n)\) is represented as a tensor product of component basis functions,
in which each \(f_k(x_k)\) is only dependent on a single coordinate \(x_k\), and thus only affected by a single dimension’s update step. The generalised formulation is called the Tensor Train of ranks \(r_1\ldots r_n\) (compare Fig. 2c),
in which the distribution function is entirely formulated in terms of sums of products of \(Q_k\), which themselves only depend on a single coordinate \(x_k\). Transport can be performed by individually affecting each \(Q_k\), followed by a rounding step to keep the tensor train compact.
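The idea can be illustrated in a 2D (one spatial, one velocity dimension) toy case, where the tensor train reduces to a low-rank matrix factorisation and the rounding step becomes a singular-value truncation (a sketch of the principle only, not the full tensor-train algorithm of Kormann 2015; the tolerance and function names are ours):

```python
import numpy as np

# Toy 2D (x, v) analogue of the tensor-train idea: a discretised f(x, v)
# is stored as a sum of rank-1 products u_r(x) * w_r(v) obtained from an
# SVD, and the "rounding" step after each transport update is a truncation
# of the small singular values to keep the representation compact.

def low_rank(f, tol=1e-10):
    """Factorise f ~ us @ vt, keeping singular values above tol * sigma_0."""
    u, s, vt = np.linalg.svd(f, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))     # retained rank after "rounding"
    return u[:, :r] * s[:r], vt[:r, :]

def reconstruct(us, vt):
    """Rebuild the dense distribution from its low-rank factors."""
    return us @ vt
```

A separable (e.g., Maxwellian) distribution is exactly rank one; kinetic structure raises the rank, and the memory cost grows with the rank rather than with the full grid size.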
While this approach has so far only been employed in low-dimensional settings and for feasibility studies, and large numerical simulations using tensor train models have not yet been attempted, efforts to integrate them into existing codebases are underway.
Semi-Lagrangian and fully Lagrangian solvers
As a consequence of Liouville’s theorem (cf. Sect. 3.3), numerical solutions of the Vlasov equation can be elegantly formulated in Lagrangian and semi-Lagrangian ways, by following the characteristics in phase space. Since the spatial velocity of any point in phase space is simply given by its velocity space coordinates, and its acceleration due to the Lorentz force is provided by the local electromagnetic field quantities, a unique characteristic for each point in phase space is easily obtained in a simulation (cf. Sect. 4.1.1).
As the simulation progresses, the distribution of these sample points will shift, maintaining their initial phase-space density values, and the volumes in between them obtain phase-space density values through interpolation. If necessary, new samples can be created, or existing ones merged, where filamentation requires it. In essence, fully Lagrangian simulation codes (Kazeminezhad et al. 2003; Nunn 2005; Jenab and Kourakis 2014) track the motion of samples of density through phase space, stepping forward in time, resulting in an updated phase-space distribution. This is illustrated in Fig. 3a. Sometimes, these methods are referred to as Lagrangian particle methods, as each phase-space sample can be modelled as a macroparticle.
Much more common than the fully Lagrangian formulation of Vlasov solvers is the family of semi-Lagrangian solvers (Sonnendrücker et al. 1999). In these, the phase-space samples to be propagated are obtained at every time step from a Eulerian description of phase space, their transport along the characteristics is calculated within the time step, and the resulting updated phase-space density is sampled back onto a Eulerian grid (which can be either structured or unstructured, see Besse and Sonnendrücker 2003). This process can be performed either forwards in time (Crouseilles et al. 2009, see Fig. 3b), in which the source grid points are scattered into the target locations, or backwards in time (Sonnendrücker et al. 1999; Pfau-Kempf 2016), where each target grid point performs a gather operation, spatially interpolating within the previous state of the time step (Fig. 3c). Backwards semi-Lagrangian methods are sometimes also referred to as flux balance methods (see Filbet et al. 2001). Either way, an interpolation step is involved, which may again lead to significant numerical diffusion unless methods are used to minimise it. Some of the more common interpolation procedures are cubic splines and Hermite reconstruction, because they produce smooth results with reasonable accuracy and are less dissipative than other methods using continuous interpolations (Filbet and Sonnendrücker 2003a). Lagrange interpolation methods produce more accurate results but require higher-order polynomials and large stencils to limit diffusion. The high-order discontinuous Galerkin method for spatial discretisation, along with a semi-Lagrangian time stepping method, has also been used in Vlasov–Poisson systems, providing an improvement in accuracy compared to previously used techniques (Rossmanith and Seal 2011). The flexibility of combining different approaches is also seen in a recent particle-based semi-Lagrangian method for solving the Vlasov–Poisson equation (Cottet 2018).
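A backward gather with a cubic reconstruction can be sketched as follows (a minimal periodic 1D Python illustration, our own; it uses a four-point Catmull–Rom stencil rather than the cubic splines cited above, but shows the same trace-back-and-interpolate structure):

```python
import numpy as np

# Backward semi-Lagrangian gather: each target cell traces its
# characteristic back by `delta` and interpolates in the previous state
# with a cubic (Catmull-Rom) four-point stencil. The wider stencil
# reduces numerical diffusion compared with linear interpolation.

def gather_cubic(f, delta, dx):
    """Periodic backward-traced cubic interpolation of a 1D profile."""
    n = f.size
    s = (np.arange(n) - delta / dx) % n          # departure points (cells)
    i1 = np.floor(s).astype(int)
    t = s - i1                                   # fractional position
    p0, p1 = f[(i1 - 1) % n], f[i1 % n]
    p2, p3 = f[(i1 + 1) % n], f[(i1 + 2) % n]
    return 0.5 * (2*p1 + (p2 - p0)*t
                  + (2*p0 - 5*p1 + 4*p2 - p3)*t**2
                  + (3*p1 - p0 - 3*p2 + p3)*t**3)
```

For a shift by an integer number of cells the interpolation weights collapse onto a single grid point and the update reduces to an exact circular shift.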
Cheng and Knorr (1976) were the first to employ semi-Lagrangian updates of a Vlasov–Poisson problem in a Strang-splitting setup, which they refer to as the time-splitting scheme, in which the spatial and velocity-space advection updates are treated independently. Mangeney et al. (2002) later formulated a Strang-splitting scheme for the Vlasov–Maxwell equations. As such a splitting scheme performs acceleration and translation steps separately, the phase-space trajectory of any simulation point approximates its physical behaviour in a staircase-like dimension-by-dimension pattern.
Field solvers
The Vlasov equation does not stand alone in describing the physical system under consideration, but requires a further prescription of the fields introducing the force terms. In the vast majority of cases in computational astrophysics, these will be electromagnetic forces self-consistently produced through the motion of charged particles in plasma, although there have been examples of Vlasov–gravity simulations (Guo and Li 2008), in which the Poisson equation was solved based on the simulation’s mass distribution. In a few cases, the fields affecting the phase-space distributions are considered entirely an external simulation input, with no feedback from the phase-space density onto the fields (Palmroth et al. 2013); these can be called “test-Vlasov” simulations, in analogy to test-particle runs. They are particularly useful as test cases before the fully operational code can be launched.
A key requirement for any field solver is to preserve the solenoidality of the magnetic field \(\mathbf {B}\) expressed by Eq. (9). There are two main avenues used to achieve this goal (e.g., Tóth 2000; Balsara and Kim 2004; Zhang and Feng 2016, and references therein). Either the field reconstruction is divergence-free by design, such as the one used in Vlasiator (see Sect. 5.3), or a procedure is needed to periodically clean up the divergence of \(\mathbf {B}\) arising from numerical errors.
In the following sections, different solvers for electromagnetic fields (and their simplifications) will be discussed in relation to astrophysical Vlasov simulations. These are fundamentally very similar in structure to the field solvers used in other simulation methods, such as PIC and MHD, and can in many cases be adapted directly from these with little change.
Electrostatic solvers
If modelling a physical system in which the interaction of plasma with magnetic fields is of little importance, such as electrostatic wave instabilities, dusty plasmas, surface interactions (Chane-Yook et al. 2006) and other typically local phenomena, the magnetic force term (\(q \mathbf {v} \times \mathbf {B}\)) of the Vlasov equation can be neglected, and a purely electrostatic system remains.
Neglecting the effects of \(\mathbf {B}\) completely leads to the Vlasov–Poisson system of equations (4) and (5), for which the field solver needs to find a solution to the Poisson equation at every time step. Since it is an elliptic differential equation that is solved instantaneously in time, no time step limit arises from the field solver itself. Typically, solvers use approximate iterative approaches, multigrid methods or Fourier-space solutions (Birdsall and Langdon 2004).
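A Fourier-space Poisson solver is particularly compact; the following 1D periodic sketch (our own illustration, not a reference implementation) solves \(\hat{\varPhi } = \hat{\rho }_q/(\epsilon _0 k^2)\) mode by mode and returns the electric field \(E = -\partial \varPhi /\partial x\) directly:

```python
import numpy as np

# Fourier-space solution of the 1D periodic Poisson equation
# d^2 Phi / dx^2 = -rho_q / eps0. In k-space, Phi_hat = rho_hat/(eps0 k^2),
# and the electric field follows as E_hat = -i k Phi_hat. The k = 0 mode
# (net charge) is set to zero, as required on a periodic domain.

EPS0 = 8.8541878128e-12  # vacuum permittivity [F/m]

def poisson_E(rho, dx):
    """Electric field E(x) from charge density rho(x) on a periodic grid."""
    k = 2 * np.pi * np.fft.fftfreq(rho.size, d=dx)
    rho_hat = np.fft.fft(rho)
    with np.errstate(divide="ignore", invalid="ignore"):
        phi_hat = np.where(k != 0, rho_hat / (EPS0 * k**2), 0.0)
    return np.real(np.fft.ifft(-1j * k * phi_hat))
```

For a sinusoidal charge density \(\rho _q = \rho _0 \cos (k_0 x)\) the solver reproduces the analytic field \(E = \rho _0 \sin (k_0 x)/(\epsilon _0 k_0)\).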
Another option, if an initial solution for the electric field has been found (or happens to be trivial), is to update it in time by using Ampère’s law in the absence of \(\mathbf {B}\),
in either an explicit finite-difference manner or using more advanced implicit formulations (Cheng et al. 2014). Special care should, however, be taken to prevent the violation of the Gauss law [cf. Eq. (9)] by using appropriate numerical methods.
Full electromagnetic solvers
If the full plasma microphysics of both electrons and ions is to be considered, and particularly if radio wave or synchrotron emissions are intended outcomes of the system, one must use the full set of Maxwell’s equations. A popular and well-established family of electromagnetic field solvers is the finite-difference time-domain (FDTD) approach, which has a long-standing history in electrical engineering applications. In formulating the finite differences for the \(\partial \mathbf {E} / \partial t \sim \nabla \times \mathbf {B}\) and \(\partial \mathbf {B} / \partial t \sim \nabla \times \mathbf {E}\) terms of Maxwell’s equations, it is often advantageous to use a staggered-grid approach, in which the electric and magnetic field grids are offset from one another by half a grid spacing in every direction (Yee 1966). In this setup, every component of the electric field vector is surrounded by magnetic field components and vice versa, so that the finite-difference evaluation of the curl can be performed without any need for interpolation.
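In one dimension the Yee scheme reduces to the following sketch (vacuum, units with \(c = 1\), periodic boundaries; our own illustration): \(E_y\) lives on integer grid points, \(B_z\) on half-integer points, and both curls become centred differences with no interpolation.

```python
import numpy as np

# 1D vacuum Yee (FDTD) sketch, units with c = 1: Ey on integer grid
# points, Bz staggered by dx/2 (and by dt/2 in time, leapfrog). The
# spatial derivatives are centred differences on the staggered grid.

def fdtd_step(Ey, Bz, dt, dx):
    """One leapfrog update of E_y and B_z on a periodic 1D staggered grid."""
    # dBz/dt = -dEy/dx, evaluated at the half-integer points
    Bz -= dt / dx * (np.roll(Ey, -1) - Ey)
    # dEy/dt = -dBz/dx, evaluated at the integer points
    Ey -= dt / dx * (Bz - np.roll(Bz, 1))
    return Ey, Bz
```

At the so-called magic time step \(\varDelta t = \varDelta x / c\) the 1D scheme propagates pulses exactly, shifting them by one cell per step; in higher dimensions this exactness is lost and the anisotropic numerical dispersion discussed below appears.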
Care should be taken when employing FDTD solvers for studies of wave propagation at high frequencies or wave numbers, as the numerical dispersion relations of waves deviate from their physical counterparts at high \(\mathbf {k}\). This effect is anisotropic in nature and most strongly pronounced for diagonal propagation, due to the intrinsic differences in the manner in which grid-aligned and non-grid-aligned computations are handled. Kärkkäinen et al. (2006) and Vay et al. (2011) present a thorough analysis of this problem in the case of PIC simulations and provide suggestions for mitigating its effects. The largest disadvantage of FDTD solvers is their stringent requirement to resolve the propagation of fields at the speed of light, leading to extremely short time steps. In order to simulate anything at non-microscopic timescales, other methods need to be used. Fourier-space solvers of Maxwell’s equations are advantageous in this respect, as they do not come with fundamental time step limitations. This is offset by the fact that their parallelisation is more difficult, and the formulation of appropriate boundary conditions is not always possible (cf. Sect. 4.2).
Hybrid solvers
If large-scale phenomena with timescales much longer than the local light crossing time are being investigated, FDTD Maxwell solvers quickly lose their appeal. If magnetic field phenomena are still to be treated self-consistently in the simulation, appropriate modifications of the electrodynamic behaviour have to be made, so that simulation with longer time steps becomes feasible.
One common way to remove the speed of light as a limiting factor is to eliminate the electromagnetic mode as a solution of Maxwell’s equations altogether, in a process called the Darwin approximation (see Sect. 3.2.3 and Schmitz and Grauer 2006; Bauer and Kunze 2005). In this process, the electric field is decomposed into its longitudinal and transverse components \(\mathbf {E} = \mathbf {E}_L + \mathbf {E}_T\), with \(\nabla \times \mathbf {E}_L = 0\) and \(\nabla \cdot \mathbf {E}_T = 0\). Only \(\mathbf {E}_L\) participates in the temporal update of \(\mathbf {B}\), so that the electromagnetic mode drops out of the simulated physical system. As a result, the fastest remaining wave mode in the system is the Alfvén wave, and the maximum time step rises significantly, by a factor of \(c/v_A\).
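The magnitude of this gain is easily estimated (using hypothetical round solar-wind values, not taken from the cited works):

```python
import math

# Time-step gain from the Darwin approximation: the fastest wave drops
# from c to the Alfven speed v_A = B / sqrt(mu0 * rho). Hypothetical
# round solar-wind values are assumed: B = 5 nT, n = 5 protons/cm^3.
MU0 = 4e-7 * math.pi          # vacuum permeability [H/m]
C = 299792458.0               # speed of light [m/s]
M_P = 1.67262192e-27          # proton mass [kg]
B, n = 5e-9, 5e6              # magnetic field [T], number density [m^-3]

v_A = B / math.sqrt(MU0 * n * M_P)   # Alfven speed, ~50 km/s here
gain = C / v_A                        # time-step gain factor c / v_A
```

For these values the allowed time step grows by a factor of several thousand, which is what makes global hybrid simulations tractable at all.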
Approximating the full set of Maxwell equations comes at the cost of not having a closed set of equations any more. As already shown in Sect. 3.2.3, the system is typically closed by providing a relation between the electric and magnetic field such as Eq. (12), called Ohm’s law. The level of complexity of Ohm’s law directly influences the simulation results as it immediately affects the kinetic physics described by the model.
Coupling schemes
A reduction of the computational burden of a model can be achieved by coupling different schemes in order to focus the use of the costlier kinetic model on the region(s) of interest while solving other parts of the system with less intensive algorithms. This is also a means of extending the simulation domain where one system is taken as the boundary condition of the other. Various classes of coupled models exist, depending on the coupling interface chosen.
One strategy is to define a spatial region of interest in which the expensive kinetic model is applied, embedded in a wider domain covered by a significantly cheaper fluid model. While the method is under investigation and has been tested on classic small problems (Rieke et al. 2015), it has not yet been applied in the context of large-scale astrophysical simulations. However, this type of coupling is being used successfully for fluid–PIC coupling (e.g., Tóth et al. 2016; Chen et al. 2017) and also in reconnection studies (e.g., Usami et al. 2013). The disadvantage of this strategy is that scale coupling cannot be addressed, as the kinetic effects do not spread into the fluid regime, and smaller-scale physics can only affect the solution in the domain in which the kinetic physics is in force.
Another strategy consists in defining the regions of interest in velocity space, that is, coupling a fluid scheme describing the large-scale behaviour of a system with a Vlasov model handling suprathermal populations, thus introducing kinetics into the model. Again, this is a recent development for which a certain amount of theoretical work and testing on small cases has been done (e.g., Tronci et al. 2014), but it has not yet been extended to larger-scale applications.
Computational burden of Vlasov simulations
Representing numerically the complete velocity phase space of a kinetic plasma system, including all required physical processes, is computationally intensive, and a large amount of data needs to be stored and processed. Different possible representations of the phase-space distribution, the associated solution methods, and their expected scaling are given in this section. Computational requirements for equivalent PIC simulations are estimated in comparison, although due to their different tuneable parameters, a rigorous comparison is difficult and beyond the scope of this work.
As shown in Sect. 4.1, a blunt Eulerian discretisation of a magnetospheric simulation without any velocity space sparsity results in \(10^{18}\) sample points or a minimum of 4 EiB memory requirement, which is unrealistic on current and next-generation architectures. A first approach is to reduce the dimensionality from a full 3D space to a 2D slice, which results in a reduction of sample points of the order of \(10^4\). Obviously a further reduction to 1D yields a similar gain. With a sparse velocity space strategy as used in Vlasiator (see Sect. 5.2 below) a further reduction by a factor of \(10^2\)–\(10^3\) sample points can be achieved. Typically, modern large-scale kinetic simulations, both with Vlasov-based methods (e.g., Palmroth et al. 2017) and PIC methods (e.g., Daughton et al. 2011), reach an order of magnitude of \(10^{11}\)–\(10^{12}\) sample points.
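The quoted numbers can be reproduced with back-of-the-envelope arithmetic; a sketch assuming 1000 cells per dimension in both position and velocity space and 4 bytes per phase-space sample (illustrative discretisation choices, not a prescription):

```python
# Memory-footprint estimate for a blunt Eulerian 3D-3V discretisation,
# assuming 1000 cells per dimension in each space and 4 bytes per sample.
nx = 1000**3                  # 3D position space
nv = 1000**3                  # 3V velocity space per spatial cell
samples = nx * nv             # 1e18 phase-space sample points
eib = 4 * samples / 2**60     # memory in EiB at 4 bytes/sample
print(samples, round(eib, 2))

# Reductions discussed in the text:
samples_2d = samples / 10**4        # 2D slice: ~1e4 fewer points
samples_sparse = samples_2d / 10**2 # sparse velocity space: 1e2-1e3 fewer
print(f"{samples_sparse:.0e}")      # ~1e12, the quoted modern scale
```

With these assumptions the full grid lands at \(10^{18}\) samples and roughly 3.5 EiB, and the combined dimensionality and sparsity reductions reach the quoted \(10^{11}\)–\(10^{12}\) range.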
Beyond the number of sample points to be treated, the length of the propagation time step relative to the total simulation time aimed for is a crucial component of the computational burden of a model. Certain classes of solvers are limited in that respect, as they cannot allow a signal to propagate more than one sampling interval or discretisation cell per time step (see Sect. 5.3). Compared with hybrid models using the Darwin approximation, the inclusion of electromagnetic (light) waves in the model description results in a reduction of the allowable time step by a factor of \(10^3\) or more. Eulerian solvers typically have similar limitations, which can impact the time step by a factor of approximately \(10^2\) due to the Larmor motion in velocity space in the presence of a magnetic field. Subcycling strategies and the use of Lagrangian algorithms are common approaches to alleviate these issues, however at the potential cost of some physical detail.
A choice of basis functions for the representation of velocity space other than Eulerian grids (such as spectral or Hermite bases) can in many cases be beneficial to limit the memory requirements for reasonable approximations of the velocity space morphology. Care must, however, be taken, as non-local transformations from one basis to another, such as a Fourier transform, tend to have unfavourable scaling behaviour in massively parallel implementations. Tensor-train formulations appear to be a promising avenue for representations of phase-space densities with suitable computational properties, but large-scale space plasma applications have not been demonstrated yet.
Higher computational requirements are expected if the physics of multiple particle species (especially electrons) is essential for the system under investigation. The need to represent multiple separate distribution functions multiplies the memory and computation requirements. The relative mass ratios of these species also have an effect on the kinetic time and length scales that need to be resolved. Going from a purely proton-based hybrid-Vlasov to a “full-Vlasov” simulation, in which electrons are included as a kinetic species, shortens the Larmor times by a factor of \(m_p / m_e = 1836\) and, depending on the employed solver, may require resolving the plasma’s Debye length. The latter factor means that, with respect to the reference hybrid simulation considered above, which approximately resolves the ion kinetic scales, a spatial resolution increase of the rough order of \(10^5\) would be required (see Table 1), amounting to a staggering \(10^{15}\) more sampling points. In order to reduce this considerable overhead, a common approach is to rescale physical quantities such as the electron-to-proton mass ratio and/or the speed of light (e.g., Hockney and Eastwood 1988), while hoping to maintain quantitatively correct kinetic physics behaviour.
Most of these scaling relations likewise apply to PIC simulations. In these, however, the parameter most strongly affecting the computational burden of the phase space representation is the particle count. As a rule of thumb, a PIC simulation with a particle count similar to the sampling point count of an equivalent Vlasov simulation will have a similar overall computational cost. For many physical scenarios, this particle count can be chosen to be significantly lower (on the order of 100–1000 particles/cell), especially if noisy representations of the velocity spaces are acceptable. In simulations with high dynamic density contrasts, in which certain simulation regions become depleted of particles, as well as in setups in which a minimisation of sampling noise is essential (such as investigations of nonlinear wave phenomena), PIC and Vlasov simulations are expected to reach a break-even point.
Achievements in Vlasov-based modelling
The progress of available scientific computing capabilities towards and beyond petascale in the last decade has driven the interest in and applicability of Vlasov-based methods to multidimensional space and astrophysical plasma problems. Table 2 compiles existing research work using direct solutions of the Vlasov equation in plasma physics. Table 2 only includes works with a direct link to space physics and astrophysics, meaning that purely theoretical work as well as research from adjacent fields, in particular nuclear fusion and laser–plasma interaction, has been omitted from this list on purpose. As of 2018, the Vlasov equation has thus been used in space plasma physics and plasma astrophysics to model magnetic reconnection, instabilities and turbulence, the interaction of the solar wind with the Earth and other bodies, radio emissions in near-Earth space and the charge distribution around spacecraft.
Vlasiator
This section considers the choices and approaches made for the Vlasiator code, attempting to describe the near-Earth space at ion kinetic scales. Vlasiator simulates the global near-Earth plasma environment through a hybrid-Vlasov approach. The evolution of the phase-space density of kinetic ions is solved with Vlasov’s equation (Eq. 3), with the evolution of electromagnetic fields described through Faraday’s law (Eq. 8), Gauss’ law and the solenoid condition (Eq. 9), and the Darwin approximation of Ampère’s law (Eq. 14). Electrons are modelled as a massless charge-neutralising fluid. Closure is provided via the generalised Ohm’s law (Eq. 12) under the assumptions of high conductivity, slow temporal variations, and cold electrons, i.e., the Hall MHD Ohm’s law (Eq. 13). The source code of Vlasiator is available at http://github.com/fmihpc/vlasiator according to the Rules of the Road mapped out at http://www.physics.helsinki.fi/vlasiator.
Background
Vlasiator has its roots in the discussions within the global MHD simulation community around 2005. It was becoming evident that while global MHD simulations are important, their capabilities, especially in the inner magnetosphere, are limited. The inner magnetosphere consists of spatially overlapping plasma populations of different temperatures (e.g., Baker 1995), and therefore the environment cannot be satisfactorily modelled with MHD to a degree allowing, e.g., environmental predictions for societally critical spacecraft or a context for upcoming missions, like the Van Allen Probes (e.g., Fox and Burch 2013). To this end, two strategies emerged: either coupling a global MHD simulation with an inner magnetospheric simulation (e.g., Huang et al. 2006), or going beyond MHD with the then newly introduced hybrid-PIC approach (e.g., Omidi and Sibeck 2007). Coupling different codes carries a risk that the effects of the coupling scheme dominate over the improved physics. On the other hand, while hybrid-PIC simulations had produced important breakthroughs (e.g., Omidi et al. 2005), the velocity distributions computed through binning are noisy due to the limited number of launched particles, which could compromise physical conclusions. Further, due to the limited number of particles, the hybrid-PIC simulations could not provide sharp gradients, which would become a problem especially in the magnetotail, where the lobes surrounding the dense plasma sheet are almost empty. As the tail physics is critical in the global description of the magnetosphere, the idea of a hybrid-Vlasov simulation emerged.
The objective was simple: to go beyond MHD by introducing protons as a kinetic population modelled by a distribution function, thus getting rid of the noise. Several challenges were identified. First, if one neglects electrons as a kinetic population, one will, e.g., lose electron-scale instabilities that can be important in the tail physics (e.g., Pritchett 2005). Second, a global hybrid-Vlasov approach is still an extreme computational challenge even at a coarse ion-scale resolution, since it must be carried out in six dimensions. Further, doubts existed about whether grid resolutions achievable with current computational resources would facilitate ion kinetic physics. However, with a new approach without historical heritage, one could utilise the latest high-performance computing methods and new computational architectures, provided that the code would always be portable to the latest technology. The computational resources were still obeying “Moore’s law”, and petascale systems had just become operational (Kogge 2009). With these prospects in mind, Vlasiator was proposed to the newly established European Research Council in 2007, which solicited new ideas with a high risk–high gain vision.
Grid discretisation
The position space is discretised on a uniform Cartesian grid of cubic cells. Each cell holds the values of variables that are either being propagated or reconstructed (e.g., the electric and magnetic fields and the ion velocity distribution moments) as well as housekeeping variables. In addition, a three-dimensional uniform Cartesian velocity space grid is stored in each spatial cell. For position space Vlasiator uses the Distributed Cartesian Cell-Refinable Grid library (DCCRG; Honkonen et al. 2013), albeit without making use of the adaptive mesh refinement capabilities it offers. The library can distribute the grid over a large supercomputer using the domain decomposition approach (see Sect. 5.5 for details on the parallelisation strategies).
The velocity space grid is purpose-built for this specific task. A major performance gain in terms of memory and computation is achieved by storing and propagating the volume average of f, in every cell at position \(\mathbf {x}\), only in those velocity space cells where f exceeds a given density threshold \(f_\mathrm{min}\). In order to accurately model propagation and acceleration, a buffer layer is maintained by also modelling cells that are adjacent in position or velocity space. The principle is illustrated in Fig. 4. This threshold can be constant or scaled linearly with the ion density. For each ion population, the maximal velocity space extents and the resolution are set by the user. This so-called sparse velocity space strategy makes it possible to increase the resolution and track the distribution function in the regions where it is present, instead of wasting computational resources covering the full extent of reachable velocity space. It is, however, important to set the value of \(f_\mathrm{min}\) carefully in order to conserve the moments of f (density, pressure, etc.) to the desired accuracy and to include in a given simulation all expected features, such as low-density, high-energy populations. A detailed discussion of the effects of the grid discretisation parameters on the simulation of a collisionless shock was published by Pfau-Kempf et al. (2018).
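The principle can be sketched in a few lines: keep only velocity cells above \(f_\mathrm{min}\) plus a one-cell buffer of neighbours, and check how well the density moment survives. The grid extents, threshold value and Maxwellian test function below are illustrative assumptions, not Vlasiator’s actual data layout (which stores \(4\times 4\times 4\) blocks).

```python
import numpy as np

# Sketch of a sparse velocity space: store only cells where f > f_min,
# plus a one-cell buffer layer of neighbours. Grid and f_min are illustrative.
nv, f_min = 50, 1e-6
v = np.linspace(-10, 10, nv)                  # velocity axis in thermal units
VX, VY, VZ = np.meshgrid(v, v, v, indexing="ij")
f = np.exp(-(VX**2 + VY**2 + VZ**2) / 2)      # a Maxwellian-like distribution

active = f > f_min
keep = active.copy()                          # buffer: neighbours of active cells
for axis in range(3):
    keep |= np.roll(active, 1, axis) | np.roll(active, -1, axis)

stored = np.count_nonzero(keep)
print(stored, nv**3, stored / nv**3)          # far fewer cells than the full grid

# The density moment is conserved to an accuracy set by f_min:
dv = (v[1] - v[0])**3
n_full = f.sum() * dv
n_sparse = f[keep].sum() * dv
print(abs(n_full - n_sparse) / n_full)        # small relative error
```

For this setup only on the order of 10% of the cells are stored, while the density moment changes by a relative error far below the \(f_\mathrm{min}\)-controlled tolerance, illustrating the trade-off discussed above.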
Solvers and time integration
The structure of the hybridVlasov set of equations leads to the logical split into a solver for the Vlasov equation and a solver for the electric and magnetic field propagation.
Vlasov solver
In advancing the Vlasov equation, Vlasiator utilises Strang splitting (Umeda et al. 2009, 2011, and references therein), where updates of the particle distribution functions are performed separately, using a spatial translation operator \(S_T\) for advection, \(S_T = \left( \mathbf{v}\cdot \frac{\partial f_\mathrm{s}}{\partial \mathbf{x}}\right) \), and an acceleration operator \(S_A = \left( \frac{q_\mathrm{s}}{m_\mathrm{s}}\mathbf{E}\cdot \frac{\partial f_\mathrm{s}}{\partial \mathbf{v}}\right) \), including the rotation term \(\left( \frac{q_\mathrm{s}}{m_\mathrm{s}}(\mathbf{v}\times \mathbf{B})\cdot \frac{\partial f_\mathrm{s}}{\partial \mathbf{v}}\right) \), for each phase-space volume average. The splitting is performed using a standard leapfrog scheme, where \(f(\mathbf{x}, \mathbf{v}, t + \varDelta t) = S_A\left(\frac{\varDelta t}{2}\right) S_T(\varDelta t)\, S_A\left(\frac{\varDelta t}{2}\right) f(\mathbf{x}, \mathbf{v}, t)\).
Acceleration over step length \(\varDelta t\) is thus calculated based on the field values computed at the midpoint of each acceleration step, i.e., at each actual time step as used for translation.
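The splitting structure can be illustrated with a minimal 1D–1V sketch, assuming a split of the form \(S_A(\varDelta t/2)\,S_T(\varDelta t)\,S_A(\varDelta t/2)\) with a constant acceleration \(a = qE/m\) (all numbers and the Gaussian initial condition are illustrative). Each operator acts by tracing phase-space characteristics backwards, as a semi-Lagrangian scheme would:

```python
import numpy as np

# Sketch of a Strang-split leapfrog Vlasov update,
#   f(t + dt) = S_A(dt/2) S_T(dt) S_A(dt/2) f(t),
# with S_T translating in x by v*dt and S_A shifting in v by a*dt.
# Constant acceleration a and the Gaussian f are illustrative assumptions.
a, dt = 2.0, 0.1
f = lambda x, v: np.exp(-(x**2 + v**2))        # initial phase-space density

def S_A(g, tau):                               # acceleration: sample at v - a*tau
    return lambda x, v: g(x, v - a * tau)

def S_T(g, tau):                               # translation: sample at x - v*tau
    return lambda x, v: g(x - v * tau, v)

# Apply the rightmost operator first, as in the formula above.
f_split = S_A(S_T(S_A(f, dt / 2), dt), dt / 2)

# Exact solution for constant a: trace characteristics over the full step.
f_exact = lambda x, v: f(x - v * dt + 0.5 * a * dt**2, v - a * dt)

x, v = np.meshgrid(np.linspace(-2, 2, 40), np.linspace(-2, 2, 40))
print(np.max(np.abs(f_split(x, v) - f_exact(x, v))))   # ~0
```

For a constant field this split is exact (it reproduces the velocity-Verlet characteristics), which is why the midpoint evaluation of the fields described above pairs naturally with the leapfrog structure.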
A global time step is defined, with time advancement calculated for distribution functions and fields in separate yet linked computations. Earlier versions of Vlasiator have used finite volume (FV) Vlasov solvers. In the earliest versions of the code, an FV method based on the solver proposed by Kurganov and Tadmor (2000) was used (Palmroth et al. 2013). A Riemann-type FV solver (LeVeque 1997; Langseth and LeVeque 2000) was used in subsequent works (Kempf et al. 2013, 2015; Pokhotelov et al. 2013; Sandroos et al. 2013, 2015; von Alfthan et al. 2014). For these solvers the classical Courant–Friedrichs–Lewy (CFL) condition (Courant et al. 1928) for maximal allowable time steps when calculating fluxes from one phase-space cell to another is \(\frac{|v_i|\,\varDelta t}{\varDelta x_i} < 1\),
where i is indexed over the three dimensions. As this CFL condition was found to be very limiting in previous versions, Vlasiator now utilises a semi-Lagrangian scheme (SLICE-3D, Zerroukat and Allen 2012; https://github.com/fmihpc/libslice3d/), in which mass conservation is ensured by a conservative remapping from an Eulerian to a Lagrangian grid. Note, however, that the sparse velocity space strategy implemented in Vlasiator (see Sect. 5.2) breaks the mass conservation (see Pfau-Kempf et al. 2018, for a discussion of the effect of the phase-space density threshold on mass conservation). The specificity of the SLICE-3D scheme is to split the full 3D remapping into successive 1D remappings, which reduces the computational cost of the spatial translation and facilitates its parallel implementation.
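A conservative 1D remapping of the kind the scheme is built from can be sketched as follows; for brevity this sketch uses a piecewise-constant reconstruction on a periodic grid, whereas SLICE-3D uses higher-order reconstructions:

```python
import numpy as np

# Sketch of a conservative 1D semi-Lagrangian remapping: cell averages are
# advected by a shift (in cell widths) and the mass is deposited back onto the
# Eulerian grid. First-order reconstruction, periodic grid; illustrative only.
def remap_1d(f, shift):
    """Translate cell averages f by `shift` cells, conserving total mass."""
    n = len(f)
    out = np.zeros(n)
    k, frac = int(np.floor(shift)), shift - np.floor(shift)
    for i in range(n):
        # each source cell's mass lands in (at most) two target cells
        out[(i + k) % n] += (1 - frac) * f[i]
        out[(i + k + 1) % n] += frac * f[i]
    return out

f = np.exp(-np.linspace(-3, 3, 60)**2)
g = remap_1d(f, 2.4)                 # shift by 2.4 cells in a single step
print(f.sum(), g.sum())              # total mass is conserved
```

Note that the shift of 2.4 cells in one step would violate the CFL condition of a flux-based solver; the deposition formulation remains conservative regardless of the shift length, which is precisely the advantage exploited by the semi-Lagrangian approach.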
The velocity space update due to the acceleration \(S_A\left( \frac{\varDelta t}{2}\right) \) is generally described by an offset 3D rotation matrix (due to gyration around \(\mathbf {B}\)). As every offset rotation matrix can be decomposed into three shear matrices \(S = S_x S_y S_z\), each performing an axis-parallel shear in one spatial dimension (Chen and Kaufman 2000), a numerically efficient semi-Lagrangian acceleration update using the SLICE-3D approach is possible: before each shear transformation, the velocity space is rearranged in memory into a column format, one cell wide and parallel to the shear direction, so that each column requires only a one-dimensional remapping with a high reconstruction order (in Vlasiator, 5th-order reconstruction is typically employed for this step). These column updates are optimised to make full use of vector instructions. Due to the shear decomposition, this update method comes with a maximum rotation angle limit of about \(22^\circ \), which imposes a further time step limit. For larger rotation angles per time step (caused by stronger magnetic fields), the acceleration can be subcycled.
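The idea behind the shear decomposition is easiest to see in 2D, where a rotation factors exactly into three axis-parallel shears (the 3D case cited above proceeds analogously with shears along x, y and z); a sketch with an illustrative \(20^\circ\) angle, below the stated limit:

```python
import numpy as np

# Sketch: a 2D rotation decomposed into three axis-parallel shears, the
# building block of the semi-Lagrangian acceleration update. Each shear only
# remaps 1D columns of the grid. The angle is an illustrative choice.
theta = np.deg2rad(20)
a = -np.tan(theta / 2)
b = np.sin(theta)

Sx = np.array([[1.0, a], [0.0, 1.0]])         # shear parallel to x
Sy = np.array([[1.0, 0.0], [b, 1.0]])         # shear parallel to y
R = Sx @ Sy @ Sx                              # composes to an exact rotation

R_exact = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
print(np.max(np.abs(R - R_exact)))            # rounding error only
```

Since each factor shifts grid points only along one axis, every sub-step reduces to independent 1D column remappings, which is what makes the vectorised column layout described above possible.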
The position space update \(S_T(\varDelta t)\) is generally described by a translation matrix with no rotation, and the same SLICE-3D approach lends itself to it in a similar vein as for velocity space. The main difference is the typical use of 3rd-order reconstruction in order to keep the stencil width at two. The use of a semi-Lagrangian scheme allows the implementation of a time step limit \(\varDelta t \, \max\limits_{f > f_\mathrm{min}} |v_i| < \varDelta x_i\)
based on spatial translation only. This condition constrains the spatial translation of any volume average to a maximum of \(\varDelta x_i\) in direction i, accounting only for those velocities within phase space which have populated active cells (see the sparse grid implementation, Sect. 5.2). This is employed not due to stability requirements, but rather to decrease communication bandwidth by ensuring that a single ghost cell in each spatial direction is sufficient.
Field solver
The field solver in Vlasiator (von Alfthan et al. 2014) is based on the upwind constrained transport algorithm by Londrillo and Del Zanna (2004) and uses divergence-free reconstruction of the magnetic fields (Balsara 2009). It utilises a second-order Runge–Kutta algorithm including the interpolation method demonstrated by Valentini et al. (2007) to obtain the intermediate moments of f needed to update the electric and magnetic fields (von Alfthan et al. 2014).
The algorithm is subject to a CFL condition such that the fastest-propagating wave mode cannot travel more than half a spatial cell per time step. As the field solver has been extended to include the Hall term in Ohm’s law, the CFL limit can severely impact the time step of the whole propagation in regions of high magnetic field strength or low plasma density. If the imbalance between the time step limits from the Vlasov solver and from the field solver is too strong, the computation of the electric and magnetic fields is subcycled so as to retain an acceptable global time step length (Pfau-Kempf 2016).
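The subcycling logic amounts to covering one global step with enough equal field-solver substeps that each substep respects the field solver’s own limit; a minimal sketch with illustrative time step values:

```python
import math

# Sketch of field-solver subcycling: when the field solver's CFL-limited step
# dt_field is much shorter than the global (Vlasov) step dt_vlasov, the fields
# are advanced in n_sub equal substeps per global step. Numbers illustrative.
def subcycle_count(dt_vlasov, dt_field):
    """Number of field-solver substeps covering one global time step."""
    return max(1, math.ceil(dt_vlasov / dt_field))

dt_vlasov, dt_field = 0.05, 0.004
n_sub = subcycle_count(dt_vlasov, dt_field)
dt_sub = dt_vlasov / n_sub               # each substep obeys the field CFL
print(n_sub, dt_sub)
```

Rounding up and then dividing the global step evenly guarantees \(\mathrm{dt_{sub}} \le \mathrm{dt_{field}}\), so the field solver’s stability condition holds in every substep while the Vlasov solver keeps its longer global step.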
Time stepping
The leapfrog scheme of the propagation is initialised by half a time step of acceleration. If the time step needs to change during the simulation due to the dynamics of the system, f is accelerated backwards by half an old time step and forwards again by half a new time step and the algorithm resumes with the new global time step. The complete sequence of the time propagation in Vlasiator is depicted in Fig. 5, including a synthetic version of the equations used in the different parts.
Boundary and initial conditions
The reference frame used in Vlasiator is defined as follows: the origin is located at the centre of the Earth, the x axis points towards the Sun, the z axis points northward perpendicular to both the x axis and the ecliptic plane, and the y axis completes the righthanded set. This coordinate system is equivalent to the Geocentric Solar Ecliptic (GSE) frame which is commonly used when studying the nearEarth space.
The solar wind enters the simulation domain at the \(+x\) boundary, while copy conditions (i.e., homogeneous Neumann boundary conditions obtained by copying the velocity distribution functions and magnetic fields from the boundary cells to their neighbouring ghost cells) are used for the \(-x\) boundary and for the boundaries perpendicular to the flow. In the current 2D–3V runs, periodic conditions are applied in the out-of-plane direction (i.e., \(+z\) and \(-z\) for the ecliptic runs and \(+y\) and \(-y\) for the polar runs). Currently, three versions of the copy conditions are implemented, which can be adjusted in order to mitigate issues such as self-replicating phenomena at the boundaries. At the beginning of a run or at a restart, the outflow condition can be set to a classic copy condition, to a copy condition in which the value of f is modified in order to avoid self-replication or inflowing features, or static conditions can be maintained at the boundary.
The simulation also requires an inner boundary around the Earth, in order to screen out the origin of the terrestrial dipole. In the inner magnetosphere, the magnetic field strength increases dramatically, resulting in very small time steps, which would significantly slow down the whole simulation. Also, close to the Earth, the ionospheric plasma can no longer be described as a collisionless and fully ionised medium, and another treatment would be required in order to properly simulate this region. The inner boundary is therefore located at 30,000 km (about \(4.7 \, \mathrm {R}_{\mathrm {E}}\)) from the Earth’s centre and is currently modelled as a perfect conductor. The distribution functions in the boundary cells retain their initial Maxwellian distributions throughout the run. The electric field is set to zero in the layer of boundary cells closest to the origin, and the magnetic field component tangential to the boundary is fixed to the value given by the Earth’s dipole field. Since the ionospheric boundary is given in the Cartesian coordinate system, it is not exactly spherical but staircase-like, introducing several computational problems (e.g., Cangellaris and Wright 1991). This has not been a large problem in Vlasiator to date, possibly because the computations are carried out in a 2D–3V setup. Once the computations are carried out in 3D–3V, this may pose a larger problem, because the magnetic field will be stronger near the poles in 3D.
In addition to defining boundary conditions, the phase-space cells within the simulation box must be initialised to some reasonable values, after which the magnetic and gas pressures and flow conditions cause the state of the simulation to change and to converge towards a valid description of the magnetospheric system as the initial state is flushed out of the box. The usual method employed in Vlasiator is to initialise the velocity space in each cell within the simulation (excluding the region within the inner boundary) to match values picked from a Maxwellian distribution in agreement with the inflow boundary solar wind density, temperature, and bulk flow direction. The inner boundary is initialised with a constant proton temperature and number density with no bulk flow. The initial phase-space density sampling can be improved by averaging over multiple densities obtained via equally spaced velocity vectors within each velocity space cell.
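Such an initialisation can be sketched as sampling a drifting Maxwellian onto a velocity grid and verifying that its moments reproduce the requested solar wind parameters; the density, temperature, bulk speed and grid extents below are illustrative choices, not Vlasiator defaults:

```python
import numpy as np

# Sketch: initialise a velocity space from a drifting Maxwellian matching
# given solar wind moments, then recover density and bulk velocity.
# Values (n0, T, u0) and grid extents are illustrative; SI units, protons.
kB, m_p = 1.380649e-23, 1.672622e-27
n0, T = 1e6, 1e5                          # 1 cm^-3, 1e5 K
u0 = np.array([-4e5, 0.0, 0.0])           # 400 km/s anti-sunward bulk flow

vth = np.sqrt(kB * T / m_p)               # thermal speed per dimension
c1 = np.linspace(-8 * vth, 8 * vth, 40)   # grid centred on the bulk velocity
dv3 = (c1[1] - c1[0])**3
VX, VY, VZ = np.meshgrid(c1 + u0[0], c1 + u0[1], c1 + u0[2], indexing="ij")

f = n0 * (m_p / (2 * np.pi * kB * T))**1.5 * np.exp(
    -m_p * ((VX - u0[0])**2 + (VY - u0[1])**2 + (VZ - u0[2])**2)
    / (2 * kB * T))

n = f.sum() * dv3                         # zeroth moment: number density
ux = (f * VX).sum() * dv3 / n             # first moment: bulk velocity
print(n / n0, ux / u0[0])                 # both close to 1
```

Centring the grid on the bulk velocity and extending it several thermal speeds in each direction is what keeps the recovered moments accurate; a grid that clips the distribution would bias both density and temperature.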
The Earth’s magnetic field closely resembles that of a magnetic dipole, and within the scope of Vlasiator the dipole has been approximated as being aligned with the z axis. For ecliptic x–y plane simulations the dipole field can be used as-is, but for polar x–z plane simulations, a 2D line dipole (which scales as \(r^{-2}\) rather than \(r^{-3}\)) must be used instead in order to prevent the occurrence of unphysical currents due to out-of-plane field curvature. When using this approach, one must calculate a dipole strength that represents reality in some chosen manner; this is achieved by choosing a value that reproduces the magnetopause at the realistic \(\sim 10\,\mathrm {R}_\mathrm{E}\) standoff distance (a similar treatment as found in, e.g., Daldorff et al. 2014). As the dipole magnetic field is not included in the inflow boundary, there cannot exist a boundary-perpendicular magnetic field component, in order to respect the solenoid condition. For ecliptic runs, the dipole field is aligned with z and thus there is no component perpendicular to the inflow boundary. For polar runs, the dipole field component perpendicular to the inflow boundary must be removed to prevent magnetic field divergence. This is achieved by placing a mirror dipole identical to the Earth’s dipole model at the position \((2(X_1 - \varDelta x),0,0)\), that is, at twice the distance from the origin to the edge of the final non-boundary simulation cell. For each simulation cell, the static background magnetic flux through each face is thus assigned as a combination of the flux calculated from the chosen dipole field model, a mirror dipole if present, and the solar wind IMF vector. This background field, which is curl-free and divergence-free, is kept static, and any calculations involving magnetic fields instead operate on a separate field which acts as a perturbation of this initial field.
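Why the mirror dipole removes the boundary-normal component follows from symmetry: on the plane midway between a z-aligned dipole and its identical mirror image, the x components of the two fields are equal and opposite. A sketch with a 3D point dipole and illustrative numbers (the dipole moment magnitude and boundary position are placeholders, and a hypothetical `dipole_B` helper is used):

```python
import numpy as np

# Sketch: a mirror dipole identical to the main dipole, placed at twice the
# boundary distance, cancels the boundary-normal (x) field component on the
# midplane. Dipole moment magnitude and geometry are illustrative.
def dipole_B(r, m=np.array([0.0, 0.0, -8e15])):
    """Field of a z-aligned point dipole at position r (mu0/4pi folded into m)."""
    rn = np.linalg.norm(r)
    return 3 * np.dot(m, r) * r / rn**5 - m / rn**3

a = 10.0                                   # boundary plane x = a (arbitrary units)
mirror_pos = np.array([2 * a, 0.0, 0.0])   # mirror dipole at twice that distance

for y, z in [(0.0, 3.0), (2.0, -1.0), (-4.0, 5.0)]:
    p = np.array([a, y, z])
    B = dipole_B(p) + dipole_B(p - mirror_pos)
    print(B[0])                            # normal component vanishes on the plane
```

At every point of the plane x = a, the dipole contribution \(B_x \propto 3(\mathbf{m}\cdot\mathbf{r})\,x/r^5\) from the main dipole is exactly negated by the mirror term (same \(|r|\), opposite x), so the total field threads no flux through the inflow boundary.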
Parallelisation strategies
Given the curse of dimensionality in Vlasov simulations (cf. Sect. 4), the amounts of memory and computational steps required for global magnetospheric hybrid-Vlasov simulations are extreme. Therefore, the use of supercomputer resources and parallelisation techniques is essential. Vlasiator uses three levels of parallelisation, the first of which is the decomposition of the spatial simulation domain into subdomains handled by individual tasks using the Message-Passing Interface (MPI, MPI Forum 2004). The DCCRG grid library (Honkonen et al. 2013) provides most of the glue code for MPI communication and management of computational domain interfaces.
Thanks to the sparse velocity space representation, large savings in memory usage and computational demand can be achieved. However, the sparse velocity space induces a further problem, because the computational effort to solve the Vlasov equation is no longer constant for every spatial simulation cell, but varies in direct relation to the complexity of the velocity space at every given point. Due to the large variety of physical processes present in the magnetospheric domain, this leads to large load imbalances throughout the simulation box, making a simple Cartesian subdivision of space over computational tasks infeasible and necessitating a dynamic rebalancing of the work distribution. To this end, Vlasiator relies on the Zoltan library (Devine et al. 2002; Boman et al. 2012), which creates an optimised space decomposition from continuously updated runtime metrics, providing a number of different algorithms to do so (usual production runs use recursive coordinate bisection). Figure 6 shows an example of the resulting spatial decomposition in a 2D global magnetospheric simulation run in the Earth’s polar plane, in which the incoming solar wind plasma with low-complexity Maxwellian velocity distributions on the right-hand side is covered by visibly larger computational domains than the more complex velocity space structures in the magnetosphere.
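The recursive coordinate bisection idea can be sketched in a few lines: cells carrying per-cell work weights are split along the longest coordinate axis so that each half receives about half of the total weight, recursively. This toy version (2D points, power-of-two part counts, weights mimicking a "heavy" magnetospheric region) is only illustrative of the algorithm Zoltan implements, not of its actual interface:

```python
import numpy as np

def rcb(points, weights, n_parts):
    """Partition points (N x 2) into n_parts (a power of two) by recursive
    coordinate bisection on the total weight; returns index arrays."""
    if n_parts == 1:
        return [np.arange(len(points))]
    axis = np.argmax(points.max(axis=0) - points.min(axis=0))  # longest extent
    order = np.argsort(points[:, axis])
    cum = np.cumsum(weights[order])
    split = int(np.searchsorted(cum, cum[-1] / 2))             # halve the weight
    parts = []
    for idx in (order[:split + 1], order[split + 1:]):
        for sub in rcb(points[idx], weights[idx], n_parts // 2):
            parts.append(idx[sub])                             # map back to parent
    return parts

rng = np.random.default_rng(1)
pts = rng.random((1000, 2))
w = 1.0 + 9.0 * (pts[:, 0] > 0.5)     # one side carries 10x the work per cell
parts = rcb(pts, w, 4)
loads = [w[p].sum() / w.sum() for p in parts]
print([round(l, 3) for l in loads])   # each part carries ~25% of the work
```

Even though the heavy cells are concentrated on one side, each of the four parts ends up with roughly a quarter of the total work, at the price of geometrically unequal domain sizes, which is exactly the behaviour visible in Fig. 6.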
The second level of parallelisation is carried out within the computational domain of each MPI task. Each task is typically handled by a full supercomputer node (or a fraction of one) with multiple CPU cores and threads, providing a local parallelisation level based on OpenMP (OpenMP Architecture Review Board 2011). All computationally intensive solver steps have been designed to run thread-parallel over multiple spatial cells or, in the case of the SLICE-3D position space update (see Sect. 5.3), over multiple parallel evaluations of the velocity space, to make optimum use of the shared-memory parallel computing architecture available within one node. As a third level of parallelisation, all data structures involved in computationally expensive solver steps have been designed to benefit from the vector processing capabilities of modern CPUs. Specifically, the velocity space representation in Vlasiator is based on \(4\times 4\times 4\) cell blocks, which are always processed as a whole. This allows multiple velocity cells to be solved at the same time, using single-instruction-multiple-data (SIMD) techniques (Fog 2016).
A further complication of parallel Vlasov simulations is the associated input/output requirements. Not only do they require a parallel input/output system that scales to the required number of nodes, but the sparse velocity space structure requires an appropriate file format able to represent the sparsity without relying on fixed data offsets. For Vlasiator’s specific use case, the VLSV library and file format have been developed (http://github.com/fmihpc/vlsv). Using parallel MPI-IO (MPI Forum 2004), it allows high-performance input/output even for simulation restart files, which, given the large system size of the Earth’s magnetosphere, can reach multiple terabytes in size. A plugin for the popular scientific visualisation suite VisIt (Childs et al. 2012) is available, as is a Python library that allows for quantitative analysis of the output files (http://github.com/fmihpc/analysator).
Along with the industry’s trend towards architectures featuring large numbers of cores and/or GPUs as a primary computing element, an early version of Vlasiator was parallelised using the CUDA standard and run on small numbers of GPUs (Sandroos et al. 2013). This avenue was not pursued further because of the lack of suitably large systems, and a number of bottlenecks following from the structure of the Vlasov simulations on the one hand and the characteristics of GPUs on the other hand.
Verification and test problems
As standard verification tests for a hybrid-Vlasov system do not exist, the first verification effort of Vlasiator was presented in Kempf et al. (2013). A simulation of low-\(\beta \) plasma waves (where \(\beta \) is the ratio of thermal to magnetic pressure) in a one-dimensional case with various angles of propagation with respect to the magnetic field was used to generate dispersion curves and surfaces. These were then compared to analytical solutions of the linearised plasma wave equations given by the Waves in Homogeneous, Anisotropic Multicomponent Plasmas (WHAMP) code (Rönnmark 1982). Excellent agreement between the results obtained from the two approaches was found in the cases of parallel, perpendicular and oblique propagation, the only noticeable difference occurring at high frequencies and wave numbers, likely as a result of too coarse a representation of the Hall term in the Vlasiator simulations at that time.
The study of the ion/ion right-hand resonant beam instability presented by von Alfthan et al. (2014) is another effort to verify the hybrid-Vlasov model implemented in Vlasiator, in this case against the analytic solution of the dispersion equation for that instability. The obtained instability growth rates were found to behave as predicted by theory in the cool-beam regime, although with slightly lower values, which can be explained by the finite size of the simulation box used. This paper also compared results from the hybrid-Vlasov approach with those obtained with hybrid-PIC codes, to underline that the distribution functions are comparable, albeit smoother and better resolved with the former approach.
More recently, Kilian et al. (2017) presented a set of validation tests based on kinetic plasma waves, and discussed their expected behaviour in fully kinetic PIC simulations as well as at different levels of simplification (Darwin approximation, EMHD, hybrid). By nature, waves and instabilities are a sensitive and valuable verification tool for plasma models, as they are an emergent manifestation of the collective behaviour of plasma. As such they constitute an excellent verification test for a complete model, going well beyond unit tests of single solver components.
The increasing computational performance of Vlasiator has allowed significant improvements in spatial resolution. It was still 850 km early on (von Alfthan et al. 2014; Kempf et al. 2015), but subsequent runs were performed at 300 km and even 227 km resolution (e.g., Palmroth et al. 2015; Hoilijoki et al. 2017). Nevertheless, even at these finer resolutions the typical kinetic scales are still not properly resolved in magnetospheric simulations. This can lead to the a priori concern that under-resolved hybrid-Vlasov simulations would fare no better than their considerably cheaper MHD forerunners and would similarly lack any kinetic plasma phenomena. A systematic study of the effects of the discretisation parameters of Vlasiator on the modelling of a collisionless shock alleviates this concern. Using a one-dimensional shock setup with conditions comparable to the terrestrial bow shock, Pfau-Kempf et al. (2018) show that even at spatial resolutions of 1000 km the results clearly depart from fluid theory and are consistent with a kinetic description. A refined resolution of 200 km of course leads to a dramatic improvement in the physical detail accessible to the model, even though ion kinetic scales are still not fully resolved. This study also highlights the importance of choosing the velocity resolution and the phase-space density threshold \(f_\mathrm{min}\) carefully, as they affect the conservation properties of the model and, as a consequence, the physical processes it can describe.
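The role of the threshold \(f_\mathrm{min}\) can be illustrated with a toy one-dimensional example: dropping velocity cells whose phase-space density falls below the threshold removes a small but nonzero fraction of the density, and the loss grows with the threshold value. The numbers below are purely illustrative, not Vlasiator’s actual grid parameters.

```python
import math

def maxwellian(v, n, vth):
    """1D Maxwellian phase-space density (illustrative, dimensionless units)."""
    return n / (math.sqrt(2.0 * math.pi) * vth) * math.exp(-0.5 * (v / vth) ** 2)

def density_loss(f_min, n=1.0, vth=1.0, v_max=8.0, n_cells=400):
    """Fraction of density removed when velocity cells with f < f_min are
    dropped, mimicking a sparse velocity-space threshold."""
    dv = 2.0 * v_max / n_cells
    total = kept = 0.0
    for i in range(n_cells):
        v = -v_max + (i + 0.5) * dv
        f = maxwellian(v, n, vth)
        total += f * dv
        if f >= f_min:
            kept += f * dv
    return (total - kept) / total
```

Because the removed mass sits in the distribution tails, a too-aggressive threshold erodes exactly the suprathermal populations that kinetic studies target, which is why the choice matters for conservation.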
Physics results
Having completed verification tests, one can compare simulation results with experimental ground-based or spacecraft data, in other words proceed towards validation of the model. The first step for Vlasiator was to perform a global test-Vlasov simulation in 3D ordinary space (Palmroth et al. 2013). In this test, f was propagated through the electromagnetic fields computed by the MHD model GUMICS-4 (Janhunen et al. 2012). This test showed that the early test-Vlasov version of Vlasiator already reproduced well the position of the Earth’s bow shock as well as magnetopause and magnetosheath plasma properties. Typical energy–latitude ion velocity profiles during northward IMF conditions were also successfully obtained with Vlasiator in that same study.
Focusing on the ion velocity distributions in the foreshock, a study by Pokhotelov et al. (2013) demonstrated that the physics of ions in the vicinity of quasi-parallel MHD shocks is well reproduced by Vlasiator. The simulation presented in that paper is a global dayside magnetospheric run in 2D ordinary space (ecliptic plane), for which the IMF angle relative to the Sun–Earth axis is \(45^\circ \). The foreshock was successfully reproduced by the model, and the reflected ion velocity distributions given by Vlasiator were found to be in agreement with spacecraft observations. In particular, deep in the ion foreshock, so-called cap-shaped ion distributions were reproduced by the model in association with 30 s sinusoidal waves created as a result of the ion/ion resonant interaction.
Validation of Vlasiator using spacecraft data was presented by Kempf et al. (2015), where the various shapes of ion distributions in the foreshock were reviewed, localised relative to the foreshock boundaries, identified in Time History of Events and Macroscale Interactions during Substorms (THEMIS, Angelopoulos 2008) data and compared to model results. The agreement between Vlasiator-simulated distributions and those observed by THEMIS was found to be very good, lending additional credibility to the hybrid-Vlasov approach and its feasibility.
While the papers discussed above essentially presented validations of the hybrid-Vlasov approach implemented in Vlasiator, the model has since 2015 been producing novel physical results. The first scientific investigations of the solar wind–magnetosphere interaction utilising Vlasiator focused on dayside processes, from the foreshock to the magnetopause. Vlasiator offers in particular an unprecedented view of the suprathermal ion population in the foreshock. The moments of this population are direct outputs of the code, facilitating the analysis of parameters such as the suprathermal ion density or velocity throughout the foreshock (Kempf et al. 2015; Palmroth et al. 2015). In contrast, extracting such parameters from spacecraft measurements requires careful data processing, and large statistics are needed in order to obtain global maps of the foreshock.
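As a schematic illustration of how moments of a selected population are obtained, the following hypothetical one-dimensional sketch splits a distribution into core and suprathermal parts at a velocity cutoff and accumulates their zeroth and first moments; the actual code performs the corresponding sums over a three-dimensional sparse velocity grid, and the cutoff criterion here is purely illustrative.

```python
def split_moments(v, f, dv, v_cut):
    """Zeroth (density) and first (momentum density) moments of a sampled
    1D distribution f(v), split at |v| > v_cut into 'core' and
    'suprathermal' parts.  Illustrative 1D stand-in for a 3V integral."""
    moments = {"core": [0.0, 0.0], "supra": [0.0, 0.0]}
    for vi, fi in zip(v, f):
        part = "supra" if abs(vi) > v_cut else "core"
        moments[part][0] += fi * dv            # density contribution
        moments[part][1] += fi * vi * dv       # momentum density contribution
    return moments
```

In a spacecraft data set the same separation must be reconstructed from measured distributions, which is the "careful data processing" the text refers to.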
Vlasiator makes it possible to investigate the properties and structuring of the ultra-low frequency (ULF, 1 mHz to \(\sim \) 1 Hz) waves that pervade the foreshock, on both local and global scales. Direct comparison of a Vlasiator run with measurements from the THEMIS mission during similar solar wind conditions confirmed that Vlasiator reproduces well the characteristics of the waves at the spacecraft location (Palmroth et al. 2015). The typical features of the waves are in agreement with the reported literature. The observed oblique propagation of these foreshock waves relative to the ambient magnetic field has been a long-standing question, because theory predicts that they should be parallel-propagating. Based on Vlasiator results, Palmroth et al. (2015) proposed a new scenario to explain this phenomenon, attributing it to the global variation of the suprathermal ion population properties across the foreshock.
Vlasiator also offers unprecedented insight into the physics of the magnetosheath, which hosts mirror-mode waves downstream of the quasi-perpendicular shock. Hoilijoki et al. (2016) found that the growth rate of the mirror-mode waves was smaller than theoretical expectations, but in good agreement with spacecraft observations. As Hoilijoki et al. (2016) explain, this discrepancy has been ascribed to local and global variations of the plasma parameters, as well as the influence of other wave modes, not being taken into account in previous theoretical estimates. Using Vlasiator’s capability to track the evolution of the plasma as it propagates from the bow shock into the magnetosheath, Hoilijoki et al. (2016) demonstrated that mirror modes develop preferentially along magnetosheath streamlines whose origin at the bow shock lies in the vicinity of the foreshock ULF wave boundary. This is probably because the plasma is more unstable to mirror modes in this region, as perturbations from the foreshock are transmitted into the magnetosheath. This result underlines the importance of the global approach, as a similar result would not appear in coupled codes or in codes that do not model both the foreshock and the magnetosheath simultaneously.
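For reference, the linear threshold against which mirror-mode growth is usually judged is the textbook bi-Maxwellian criterion, with instability when \(\beta_\perp (T_\perp/T_\parallel - 1) > 1\); the studies cited above of course use the full kinetic growth rates rather than this simple check.

```python
def mirror_unstable(beta_perp, t_perp_over_t_par):
    """Textbook bi-Maxwellian mirror-mode threshold:
    unstable when beta_perp * (T_perp/T_par - 1) > 1.
    Quoted for context only; a criterion, not a growth rate."""
    return beta_perp * (t_perp_over_t_par - 1.0) > 1.0
```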
Magnetic reconnection is a topic of intensive research, as it is the main process through which plasma and energy are transferred from the solar wind into the magnetosphere. Many questions remain unresolved on both the dayside and the nightside. On the dayside, active research topics include the position of the reconnection line and the bursty or continuous nature of reconnection, while on the nightside the most important topic is the global magnetospheric reconfiguration caused either by reconnection or by a tail current disruption. In order to tackle these questions, the simulation domain of Vlasiator, which so far corresponded to the Earth’s equatorial plane, was changed to cover the noon–midnight meridian plane (x–z plane in the reference frame defined in Sect. 5.4). To address the dayside–nightside coupling processes in reconnection, the simulation domain was extended to include the nightside reconnection site, stretching as far as \(-94\,\mathrm{R}_{\mathrm{E}}\) along the x direction. This run, carried out in 2016, remains at the time of this writing the most computationally expensive Vlasiator run performed.
Hoilijoki et al. (2017) presented an investigation of reconnection and flux transfer event (FTE) processes at the dayside magnetopause, and showed that even under steady IMF conditions the location of the reconnection line varies with time, with multiple reconnection lines even existing at a given time. Many FTEs are produced during the simulation, and magnetic islands have occasionally been observed to coalesce, which underlines the power of kinetic modelling in capturing highly dynamic and localised processes. Additionally, Hoilijoki et al. (2017) showed that the local reconnection rate measured at the reconnection lines correlates well with the analytical rate for asymmetric reconnection derived by Cassak and Shay (2007). This paves the way for using Vlasiator to investigate, e.g., the effects of dayside reconnection on the nightside.
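The Cassak and Shay (2007) scaling used in such comparisons can be evaluated directly from the plasma parameters on the two sides of the current sheet. A sketch of that scaling follows, with the commonly assumed diffusion-region aspect ratio of 0.1 as an illustrative input; this is a reading of the published scaling, not Vlasiator’s measurement procedure.

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability [H/m]

def cassak_shay_rate(b1, b2, rho1, rho2, aspect_ratio=0.1):
    """Asymmetric reconnection rate of the form derived by Cassak and Shay
    (2007): E ~ aspect_ratio * (B1*B2/(B1+B2)) * v_out, with the outflow
    speed v_out**2 = B1*B2*(B1+B2) / (mu0*(rho1*B2 + rho2*B1)).
    Inputs in SI units (T and kg/m^3); returns an electric field in V/m."""
    v_out = math.sqrt(b1 * b2 * (b1 + b2) / (MU0 * (rho1 * b2 + rho2 * b1)))
    return aspect_ratio * (b1 * b2 / (b1 + b2)) * v_out
```

In the symmetric limit \(B_1 = B_2\), \(\rho_1 = \rho_2\), the outflow speed reduces to the Alfvén speed and the familiar symmetric rate is recovered.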
Vlasiator has proven to be a useful and powerful tool for revealing localised phenomena that had not been imagined before, and for narrowing down the regions of the near-Earth environment in which to search for them in observational data sets. One example can be found in the work by Pfau-Kempf et al. (2016), in which transient, local ion foreshocks were discovered at the bow shock under steady solar wind and IMF conditions, as illustrated in Fig. 7. These transient foreshocks were found to be related to FTEs at the dayside magnetopause, produced by unsteady reconnection and creating fast-mode waves propagating upstream in the magnetosheath (Fig. 7a–c). These wave fronts can locally alter the shape of the bow shock, thus creating favourable conditions for local foreshocks to appear (Fig. 7d, e). Observational evidence supporting this scenario was found in a data set comprising Geotail observations near the bow shock and ground-based signatures of FTEs in SuperDARN radar and magnetometer data.
While the first set of publications essentially dealt with dayside processes, Vlasiator can also be applied to the study of nightside phenomena. The first investigation of magnetotail processes using Vlasiator was performed by Palmroth et al. (2017), showcasing multiple reconnection lines in the plasma sheet and the ejection of a plasmoid under steady IMF conditions (see Fig. 8). This study underlined that dayside reconnection may directly influence the stability of nightside reconnection, as flux tubes originating from dayside reconnection modified the local conditions within the nightside plasma sheet. Again, this study illustrates how important it is to capture the whole system simultaneously using a kinetic approach.
Future avenues
Vlasiator is funded through several multiannual grants, with which the code is improved and developed. Major building blocks making Vlasiator possible were not only the increase in computational resources, but also several algorithmic innovations, such as the sparse grid for distribution functions and the semi-Lagrangian solver discussed above. Further, the code has been continuously optimised to fit better on different parallel architectures. With these main steps, the efficiency of Vlasiator has been improved by roughly eight orders of magnitude relative to the performance at the beginning of the project, making it possible to simulate 2D–3V systems at high resolution (Palmroth et al. 2017). Recently, a simulation run with a cylindrical ionosphere and a layer of grid cells in the third spatial dimension has been carried out, thus approaching the full 3D–3V representation.
The development of Vlasiator is closely tied to the awarded grants. In terms of numerics, near-term plans are to include adaptive mesh refinement in both ordinary space and velocity space, as required for a full 3D–3V system. These improvements would make it possible to place higher resolution in regions of interest, and consequently to save in the number of time steps and in storage. The DCCRG grid already supports adaptive mesh refinement, and thus the task is mainly to add this support to the solvers and optimise the performance.
In terms of physics, perhaps the most visible recent change was the addition of heavier ions. The role of heavier ions, e.g., in dayside reconnection has become evident (e.g., Fuselier et al. 2017), and thus the correct reproduction of the system at ion scales requires solving for heavier ions as well. While the addition requires more memory and storage capacity, in terms of coding it was relatively simple, as each additional ion species can be represented by its own sparse representation of the velocity space, not adding much to the overall computational load. The first runs with additional ion populations were produced in 2018: the first set considered helium flowing from the solar wind boundary, and the second set added oxygen flow from the ionospheric boundary. The analysis of these runs is ongoing.
In the near term, the ionospheric boundary will also be improved. In the 2D–3V runs the ionosphere can be relatively simple, but in 3D–3V it needs to be updated as well. In a first approximation, it can be similar to the type of boundary used in global MHD simulations, which typically couple the field-aligned currents, electric potential and precipitation between the ionosphere and magnetosphere (e.g., Janhunen et al. 2012). Later, the ionosphere should be updated to take into account the more detailed information that the Vlasov-based magnetospheric domain can offer relative to MHD. The objective is to push the inner edge of the simulation domain earthwards from its current position (around \(5\,\mathrm{R}_{\mathrm{E}}\)). Other planned improvements include allowing the Earth’s dipole field to be tilted with respect to the z direction, and replacing the mirror-dipole method of ensuring the solenoidality condition with an alternative method, for instance a radially vanishing vector potential description of the dipole field. Inclusion of such capabilities would allow investigations of inner magnetospheric physics in terms of solar wind driving, which would close the circle: the problems in reproducing inner magnetospheric physics with global MHD simulations were one of the main motivations for developing Vlasiator in the first place.
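In such MHD-type couplings, the ionospheric electric potential is typically obtained from height-integrated current continuity; a commonly used form (sign conventions vary between implementations) is

\[ \nabla \cdot \left( \Sigma \cdot \nabla \Phi \right) = - j_{\parallel} \sin I , \]

where \(\Sigma\) is the height-integrated conductivity tensor, \(\Phi\) the ionospheric potential, \(j_{\parallel}\) the field-aligned current density mapped down to the ionosphere, and \(I\) the magnetic inclination angle. The magnetosphere supplies \(j_{\parallel}\) and precipitation (which feeds into \(\Sigma\)), while the solved \(\Phi\) is mapped back as a boundary condition on the convection electric field.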
Other possible future avenues are environments that will be investigated with present and future space missions. An example is Mercury, targeted by the upcoming BepiColombo mission. Cometary environments and comet–solar wind interactions are also interesting in terms of the recently added heavy-ion support, in the context of Rosetta mission data analysis. Further, the upcoming Juice mission will visit the icy moons of Jupiter, so that, e.g., the Ganymede–Jupiter interaction may also be a viable option for the future.
Conclusions and outlook
Several main conclusions can be drawn from the Vlasiator results so far. The first is related to the applicability of the hybrid-Vlasov system for ions within the global magnetospheric context. When Vlasiator was first proposed, concerns arose as to whether ions are the dominant species controlling the system dynamics, or whether electrons are needed as well. In particular, a physical representation of the reconnection process may require electrons, while the ion-scale Vlasiator still models reconnection similarly to global MHD simulations, i.e., through numerical diffusion. However, even an MHD simulation, treating both ions and electrons as a fluid, is capable of modelling global magnetospheric dynamics (Palmroth et al. 2006a, b), indicating that the reconnection driving the global dynamics must be in the right ballpark. Since Vlasiator is also able to produce results that are in agreement with in situ measurements, kinetic ions seem to be a major contributor in reproducing global dynamics. Whether the electrons play a larger role in global dynamics remains to be determined in the future, if such simulations become possible.
Another major conclusion based on Vlasiator is the role of grid resolution in global setups. Again, one of the largest concerns at the beginning of Vlasiator development was that the ion gyroscales could not be reached within a global simulation volume, raising fears that the outcomes would be MHD-like, even though early hybrid-PIC simulations were also carried out at ion inertial length scales (e.g., Omidi et al. 2005). In this context, the first runs included an element of surprise, as even rather coarse grids induce kinetic phenomena that are in agreement with in situ observations (Pokhotelov et al. 2013). The latest results have clearly indicated that kinetic physics emerges even at coarse spatial resolution (Pfau-Kempf et al. 2018). It should be emphasised that this result would not have been foreseeable without developing the simulation first. Further, it indicates that electron physics, too, could be trialled without resolving the actual electron scales. One can hence conclude that others attempting to develop a (hybrid-)Vlasov simulation may face fewer concerns about grid resolution, even in setups with major computational challenges, such as portions of the Sun.
The most common physical conclusion based on Vlasiator simulations is that “everything affects everything”, indicating that scale coupling is important in global magnetospheric dynamics. One avenue of development for global MHD simulations in recent years has been code coupling, where, e.g., problem-specific codes have been coupled into the global context (Huang et al. 2006), or PIC simulations have been embedded within the MHD domain (Tóth et al. 2016). While these approaches are interesting and advance physical understanding, they cannot fully capture scale coupling, as the specific kinetic phenomena are only addressed within their respective simulation volumes. A prime example of scale coupling is the emergence of transient foreshocks, driven by bow waves generated by dayside reconnection (Pfau-Kempf et al. 2016). Another example is the generation of oblique foreshock waves due to the global variability of backstreaming populations (Palmroth et al. 2015). These results could not have been achieved without a simulation that resolves both small and large scales simultaneously.
Vlasov-based methods have not yet been widely adopted in astrophysics and space physics to model large-scale systems beyond the few examples cited in Table 2, mainly due to the truly astronomical computational cost such simulations can have. The experience with Vlasiator nevertheless demonstrates that Vlasov-based modelling is strongly complementary to other methods and provides unprecedented insight well worth the implementation effort. Based on the pioneering work realised in the Solar–Terrestrial physics community, it is hoped that Vlasov-based methods will gain in popularity and lead to breakthrough results in other fields of space physics and astrophysics as well.
Finally, it should be emphasised that a critical success factor in the Vlasiator development has been close involvement with technological advances in the field of high-performance computing. European research infrastructures for supercomputing have been developed almost hand-in-hand with Vlasiator, providing an opportunity to always target the newest platforms, which has fed directly into the code development. Should similar computationally intensive codes be designed and implemented elsewhere, it is recommended to keep a keen eye on the technological development of supercomputing platforms.
References
Afanasiev A, Battarbee M, Vainio R (2015) Self-consistent Monte Carlo simulations of proton acceleration in coronal shocks: effect of anisotropic pitch-angle scattering of particles. Astron Astrophys 584:A81. https://doi.org/10.1051/0004-6361/201526750. arXiv:1603.08857
Afanasiev A, Vainio R, Rouillard AP, Battarbee M, Aran A, Zucca P (2018) Modelling of proton acceleration in application to a ground level enhancement. Astron Astrophys 614:A4. https://doi.org/10.1051/0004-6361/201731343
Aguilar M et al (2015) Precision measurement of the proton flux in primary cosmic rays from rigidity 1 GV to 1.8 TV with the alpha magnetic spectrometer on the International Space Station. Phys Rev Lett 114:171103. https://doi.org/10.1103/PhysRevLett.114.171103
André M, Vaivads A, Khotyaintsev YV, Laitinen T, Nilsson H, Stenberg G, Fazakerley A, Trotignon JG (2010) Magnetic reconnection and cold plasma at the magnetopause. Geophys Res Lett 37:L22108. https://doi.org/10.1029/2010GL044611
Anekallu CR, Palmroth M, Pulkkinen TI, Haaland SE, Lucek E, Dandouras I (2011) Energy conversion at the Earth’s magnetopause using single and multispacecraft methods. J Geophys Res 116:A11204. https://doi.org/10.1029/2011JA016783
Angelopoulos V (2008) The THEMIS mission. Space Sci Rev 141:5–34. https://doi.org/10.1007/s1121400893361
Angelopoulos V, McFadden JP, Larson D, Carlson CW, Mende SB, Frey H, Phan T, Sibeck DG, Glassmeier KH, Auster U, Donovan E, Mann IR, Rae IJ, Russell CT, Runov A, Zhou XZ, Kepko L (2008) Tail reconnection triggering substorm onset. Science 321:931–935. https://doi.org/10.1126/science.1160495
Arber TD, Vann RGL (2002) A critical comparison of Eulerian-grid-based Vlasov solvers. J Comput Phys 180:339–357. https://doi.org/10.1006/jcph.2002.7098
Axford WI, Leer E, Skadron G (1977) The acceleration of cosmic rays by shock waves. In: International cosmic ray conference, vol 11, pp 132–137. https://doi.org/10.1007/9783662255230_11
Bai XN, Caprioli D, Sironi L, Spitkovsky A (2015) Magnetohydrodynamic-particle-in-cell method for coupling cosmic rays with a thermal plasma: application to nonrelativistic shocks. Astrophys J 809:55. https://doi.org/10.1088/0004-637X/809/1/55
Baker DN (1995) The inner magnetosphere: a review. Surv Geophys 16:331–362. https://doi.org/10.1007/BF01044572
Balogh A, Treumann RA (2013) Physics of collisionless shocks—space plasma shock waves. ISSI scientific report, vol 12. Springer, Heidelberg. https://doi.org/10.1007/9781461460992
Balsara DS (2009) Divergence-free reconstruction of magnetic fields and WENO schemes for magnetohydrodynamics. J Comput Phys 228:5040–5056. https://doi.org/10.1016/j.jcp.2009.03.038
Balsara DS (2017) Higher-order accurate space-time schemes for computational astrophysics—Part I: Finite volume methods. Living Rev Comput Astrophys 3:2. https://doi.org/10.1007/s41115-017-0002-8
Balsara DS, Kim J (2004) A comparison between divergence-cleaning and staggered-mesh formulations for numerical magnetohydrodynamics. Astrophys J 602:1079. https://doi.org/10.1086/381051
Banks JW, Hittinger JAF (2010) A new class of nonlinear finite-volume methods for Vlasov simulation. IEEE Trans Plasma Sci 38:2198–2207. https://doi.org/10.1109/TPS.2010.2056937
Bauer S, Kunze M (2005) The Darwin approximation of the relativistic Vlasov–Maxwell system. Ann Henri Poincare 6:283–308. https://doi.org/10.1007/s000230050207y
Becerra-Sagredo J, Málaga C, Mandujano F (2016) Moments preserving and high-resolution semi-Lagrangian advection scheme. SIAM J Sci Comput 38:A2141–A2161. https://doi.org/10.1137/140990619
Bell AR (1978) The acceleration of cosmic rays in shock fronts. I. MNRAS 182:147–156. https://doi.org/10.1093/mnras/182.2.147
Besse N, Sonnendrücker E (2003) Semi-Lagrangian schemes for the Vlasov equation on an unstructured mesh of phase space. J Comput Phys 191:341–376. https://doi.org/10.1016/S0021-9991(03)00318-8
Besse N, Mauser N, Sonnendrücker E (2007) Numerical approximation of self-consistent Vlasov models for low-frequency electromagnetic phenomena. Int J Appl Math Comput Sci 17:361–374. https://doi.org/10.2478/v1000600700303
Besse N, Latu G, Ghizzo A, Sonnendrücker E, Bertrand P (2008) A wavelet-MRA-based adaptive semi-Lagrangian method for the relativistic Vlasov–Maxwell system. J Comput Phys 227:7889–7916. https://doi.org/10.1016/j.jcp.2008.04.031
Bilitza D, Reinisch BW (2008) International Reference Ionosphere 2007: improvements and new parameters. Adv Space Res 42:599–609. https://doi.org/10.1016/j.asr.2007.07.048
Birdsall CK, Langdon AB (2004) Plasma physics via computer simulation. CRC Press, Boca Raton
Birn J, Hesse M (2009) Reconnection in substorms and solar flares: analogies and differences. Ann Geophys 27:1067–1078. https://doi.org/10.5194/angeo2710672009
Blandford RD, Ostriker JP (1978) Particle acceleration by astrophysical shocks. Astrophys J 221:L29–L32. https://doi.org/10.1086/182658
Blelly PL, Lathuillère C, Emery B, Lilensten J, Fontanari J, Alcaydé D (2005) An extended TRANSCAR model including ionospheric convection: simulation of EISCAT observations using inputs from AMIE. Ann Geophys 23:419–431. https://doi.org/10.5194/angeo234192005
Boman EG, Catalyurek UV, Chevalier C, Devine KD (2012) The Zoltan and Isorropia parallel toolkits for combinatorial scientific computing: partitioning, ordering, and coloring. Sci Program 20:129–150. https://doi.org/10.3233/SPR-2012-0342
Brizard AJ (2000) New variational principle for the Vlasov–Maxwell equations. Phys Rev Lett 84:5768–5771. https://doi.org/10.1103/PhysRevLett.84.5768
Brizard AJ, Tronko N (2011) Exact momentum conservation laws for the gyrokinetic Vlasov–Poisson equations. Phys Plasmas 18:082307. https://doi.org/10.1063/1.3625554
Bruno R, Carbone V (2013) The solar wind as a turbulence laboratory. Living Rev Sol Phys 10:2. https://doi.org/10.12942/lrsp20132
Bryan GL, Norman ML, O’Shea BW, Abel T, Wise JH, Turk MJ, Reynolds DR, Collins DC, Wang P, Skillman SW, Smith B, Harkness RP, Bordner J, Kim J, Kuhlen M, Xu H, Goldbaum N, Hummels C, Kritsuk AG, Tasker E, Skory S, Simpson CM, Hahn O, Oishi JS, So GC, Zhao F, Cen R, Li Y, The Enzo Collaboration (2014) ENZO: an adaptive mesh refinement code for astrophysics. Astrophys J Suppl Ser 211:19. https://doi.org/10.1088/00670049/211/2/19
Bykov AM, Ellison DC, Osipov SM, Vladimirov AE (2014) Magnetic field amplification in nonlinear diffusive shock acceleration including resonant and nonresonant cosmic-ray driven instabilities. Astrophys J 789:137. https://doi.org/10.1088/0004-637X/789/2/137
Camporeale E, Delzanno G, Bergen B, Moulton J (2016) On the velocity space discretization for the Vlasov–Poisson system: comparison between implicit Hermite spectral and particle-in-cell methods. Comput Phys Commun 198:47–58. https://doi.org/10.1016/j.cpc.2015.09.002
Cangellaris A, Wright D (1991) Analysis of the numerical error caused by the stair-stepped approximation of a conducting boundary in FDTD simulations of electromagnetic phenomena. IEEE Trans Antennas Propag 39:1518–1525. https://doi.org/10.1109/8.97384
Caprioli D, Spitkovsky A (2013) Cosmic-ray-induced filamentation instability in collisionless shocks. Astrophys J 765:L20. https://doi.org/10.1088/2041-8205/765/1/L20
Casas F, Crouseilles N, Faou E, Mehrenberger M (2017) High-order Hamiltonian splitting for the Vlasov–Poisson equations. Numer Math 135:769–801. https://doi.org/10.1007/s002110160816z
Cassak PA, Shay MA (2007) Scaling of asymmetric magnetic reconnection: general theory and collisional simulations. Phys Plasmas 14:102114. https://doi.org/10.1063/1.2795630
Cerri SS, Califano F (2017) Reconnection and small-scale fields in 2D–3V hybrid-kinetic driven turbulence simulations. New J Phys 19:025007. https://doi.org/10.1088/1367-2630/aa5c4a
Cerri SS, Califano F, Jenko F, Told D, Rincon F (2016) Sub-proton-scale cascades in solar wind turbulence: driven hybrid-kinetic simulations. Astrophys J Lett 822:L12. https://doi.org/10.3847/2041-8205/822/1/L12
Cerri SS, Franci L, Califano F, Landi S, Hellinger P (2017a) Plasma turbulence at ion scales: a comparison between particle-in-cell and Eulerian hybrid-kinetic approaches. J Plasma Phys 83:705830202. https://doi.org/10.1017/S0022377817000265
Cerri SS, Servidio S, Califano F (2017b) Kinetic cascade in solar-wind turbulence: 3D3V hybrid-kinetic simulations with electron inertia. Astrophys J Lett 846:L18. https://doi.org/10.3847/2041-8213/aa87b0
Chane-Yook M, Clerc S, Piperno S (2006) Space charge and potential distribution around a spacecraft in an isotropic plasma. J Geophys Res. https://doi.org/10.1029/2005JA011401
Chao JK, Zhang XX, Song P (1995) Derivation of temperature anisotropy from shock jump relations: theory and observations. Geophys Res Lett 22:2409–2412. https://doi.org/10.1029/95GL02187
Chapman JF, Cairns IH (2003) Three-dimensional modeling of Earth’s bow shock: shock shape as a function of Alfvén Mach number. J Geophys Res 108:1174. https://doi.org/10.1029/2002JA009569
Chen B, Kaufman A (2000) 3D volume rotation using shear transformations. Graph Models 62:308–322. https://doi.org/10.1006/gmod.2000.0525
Chen Y, Tóth G, Cassak P, Jia X, Gombosi TI, Slavin JA, Markidis S, Peng IB, Jordanova VK, Henderson MG (2017) Global three-dimensional simulation of Earth’s dayside reconnection using a two-way coupled magnetohydrodynamics with embedded particle-in-cell model: initial results. J Geophys Res 122:10318–10335. https://doi.org/10.1002/2017JA024186
Cheng CZ, Knorr G (1976) The integration of the Vlasov equation in configuration space. J Comput Phys 22:330–351. https://doi.org/10.1016/0021-9991(76)90053-X
Cheng Y, Gamba IM, Morrison PJ (2013) Study of conservation and recurrence of Runge–Kutta discontinuous Galerkin schemes for Vlasov–Poisson systems. J Sci Comput 56:319–349. https://doi.org/10.1007/s109150129680x
Cheng Y, Christlieb AJ, Zhong X (2014) Energy-conserving discontinuous Galerkin methods for the Vlasov–Ampère system. J Comput Phys 256:630–655. https://doi.org/10.1016/j.jcp.2013.09.013
Childs H, Brugger E, Whitlock B, Meredith J, Ahern S, Pugmire D, Biagas K, Miller M, Harrison C, Weber GH, Krishnan H, Fogal T, Sanderson A, Garth C, Bethel EW, Camp D, Rübel O, Durant M, Favre JM, Navrátil P (2012) VisIt: an end-user tool for visualizing and analyzing very large data. In: Wes Bethel E, Childs H, Hansen C (eds) High performance visualization—enabling extreme-scale scientific insight. CRC Press, Boca Raton, pp 357–372
Cottet GH (2018) Semi-Lagrangian particle methods for high-dimensional Vlasov–Poisson systems. J Comput Phys 365:362–375. https://doi.org/10.1016/j.jcp.2018.03.042
Courant R, Friedrichs K, Lewy H (1928) Über die partiellen Differenzengleichungen der mathematischen Physik. Math Ann 100:32–74. https://doi.org/10.1007/BF01448839
Cran-McGreehin AP, Wright AN (2005) Electron acceleration in downward auroral field-aligned currents. J Geophys Res 110:A10S15. https://doi.org/10.1029/2004JA010898
Cranmer SR, Gibson SE, Riley P (2017) Origins of the ambient solar wind: implications for space weather. Space Sci Rev 212:1345–1384. https://doi.org/10.1007/s112140170416y
Crouseilles N, Respaud T, Sonnendrücker E (2009) A forward semi-Lagrangian method for the numerical solution of the Vlasov equation. Comput Phys Commun 180:1730–1745. https://doi.org/10.1016/j.cpc.2009.04.024
Crouseilles N, Mehrenberger M, Sonnendrücker E (2010) Conservative semi-Lagrangian schemes for Vlasov equations. J Comput Phys 229:1927–1953. https://doi.org/10.1016/j.jcp.2009.11.007
Crouseilles N, Einkemmer L, Faou E (2015) Hamiltonian splitting for the Vlasov–Maxwell equations. J Comput Phys 283:224–240. https://doi.org/10.1016/j.jcp.2014.11.029
Daldorff LKS, Tóth G, Gombosi TI, Lapenta G, Amaya J, Markidis S, Brackbill JU (2014) Two-way coupling of a global Hall magnetohydrodynamics model with a local implicit particle-in-cell model. J Comput Phys 268:236–254. https://doi.org/10.1016/j.jcp.2014.03.009
Daughton W, Roytershteyn V, Karimabadi H, Yin L, Albright BJ, Bergen B, Bowers KJ (2011) Role of electron physics in the development of turbulent magnetic reconnection in collisionless plasmas. Nature Phys 7:539–542. https://doi.org/10.1038/nphys1965
Daughton W, Nakamura TKM, Karimabadi H, Roytershteyn V, Loring B (2014) Computing the reconnection rate in turbulent kinetic layers by using electron mixing to identify topology. Phys Plasmas 21:052307. https://doi.org/10.1063/1.4875730
De Moortel I, Browning P (2015) Recent advances in coronal heating. Philos Trans R Soc London, Ser A. https://doi.org/10.1098/rsta.2014.0269
Delzanno G (2015) Multi-dimensional, fully-implicit, spectral method for the Vlasov–Maxwell equations with exact conservation laws in discrete form. J Comput Phys 301:338–356. https://doi.org/10.1016/j.jcp.2015.07.028
Devine K, Boman E, Heaphy R, Hendrickson B, Vaughan C (2002) Zoltan data management services for parallel dynamic applications. Comput Sci Eng 4:90–97. https://doi.org/10.1109/5992.988653
Dimmock AP, Nykyri K (2013) The statistical mapping of magnetosheath plasma properties based on THEMIS measurements in the magnetosheath interplanetary medium reference frame. J Geophys Res 118:4963–4976. https://doi.org/10.1002/jgra.50465
Dungey JW (1961) Interplanetary magnetic field and the auroral zones. Phys Rev Lett 6:47–48. https://doi.org/10.1103/PhysRevLett.6.47
Eastwood JP, Biffis E, Hapgood MA, Green L, Bisi MM, Bentley RD, Wicks R, McKinnell LA, Gibbs M, Burnett C (2017) The economic impact of space weather: where do we stand? Risk Anal 37:206–218. https://doi.org/10.1111/risa.12765
Einkemmer L, Lubich C (2018) A low-rank projector-splitting integrator for the Vlasov–Poisson equation. arXiv e-prints arXiv:1801.01103
Einkemmer L, Ostermann A (2014) Convergence analysis of Strang splitting for Vlasov-type equations. SIAM J Numer Anal 52:140–155. https://doi.org/10.1137/130918599
Eliasson B (2001) Outflow boundary conditions for the Fourier transformed one-dimensional Vlasov–Poisson system. J Sci Comput 16:1–28. https://doi.org/10.1023/A:1011132312956
Eliasson B (2011) Numerical simulations of the Fourier-transformed Vlasov–Maxwell system in higher dimensions—theory and applications. Transp Theor Stat Phys 39:387–465. https://doi.org/10.1080/00411450.2011.563711
Escoubet CP, Fehringer M, Goldstein M (2001) The Cluster mission. Ann Geophys 19:1197–1200. https://doi.org/10.5194/angeo-19-1197-2001
Fermi E (1949) On the origin of the cosmic radiation. Phys Rev 75:1169–1174. https://doi.org/10.1103/PhysRev.75.1169
Figua H, Bouchut F, Feix M, Fijalkow E (2000) Instability of the filtering method for Vlasov’s equation. J Comput Phys 159:440–447. https://doi.org/10.1006/jcph.2000.6423
Filbet F, Sonnendrücker E (2003a) Comparison of Eulerian Vlasov solvers. Comput Phys Commun 150:247–266. https://doi.org/10.1016/S0010-4655(02)00694-X
Filbet F, Sonnendrücker E (2003b) Numerical methods for the Vlasov equation. In: Brezzi F, Buffa A, Corsaro S, Murli A (eds) Numerical mathematics and advanced applications. Springer, Milan, pp 459–468. https://doi.org/10.1007/978-88-470-2089-4_43
Filbet F, Sonnendrücker E, Bertrand P (2001) Conservative numerical schemes for the Vlasov equation. J Comput Phys 172:166–187. https://doi.org/10.1006/jcph.2001.6818
Fog A (2016) Agner Fog vector class library. http://www.agner.org/optimize/#vectorclass. Accessed 25 July 2018
Fox N, Burch JL (2013) The Van Allen Probes mission. Springer, New York
Franci L, Cerri SS, Califano F, Landi S, Papini E, Verdini A, Matteini L, Jenko F, Hellinger P (2017) Magnetic reconnection as a driver for a sub-ion-scale cascade in plasma turbulence. Astrophys J Lett 850:L16. https://doi.org/10.3847/2041-8213/aa93fb
Fuselier SA, Burch JL, Mukherjee J, Genestreti KJ, Vines SK, Gomez R, Goldstein J, Trattner KJ, Petrinec SM, Lavraud B, Strangeway RJ (2017) Magnetospheric ion influence at the dayside magnetopause. J Geophys Res 122:8617–8631. https://doi.org/10.1002/2017JA024515
Génot V (2009) Analytical solutions for anisotropic MHD shocks. Astrophys Space Sci Trans 5:31–34. https://doi.org/10.5194/astra-5-31-2009
Génot V, Broussillou L, Budnik E, Hellinger P, Trávníček PM, Lucek E, Dandouras I (2011) Timing mirror structures observed by Cluster with a magnetosheath flow model. Ann Geophys 29:1849–1860. https://doi.org/10.5194/angeo-29-1849-2011
Ghizzo A, Sarrat M, Del Sarto D (2017) Vlasov models for kinetic Weibel-type instabilities. J Plasma Phys 83:705830101. https://doi.org/10.1017/S0022377816001215
Gibby AR, Inan US, Bell TF (2008) Saturation effects in the VLF-triggered emission process. J Geophys Res 113:A11215. https://doi.org/10.1029/2008JA013233
Görler T, Lapillonne X, Brunner S, Dannert T, Jenko F, Merz F, Told D (2011) The global version of the gyrokinetic turbulence code GENE. J Comput Phys 230:7053–7071. https://doi.org/10.1016/j.jcp.2011.05.034
Green JC, Likar J, Shprits Y (2017) Impact of space weather on the satellite industry. Space Weather 15:804–818. https://doi.org/10.1002/2017SW001646
Guo W, Cheng Y (2016) A sparse grid discontinuous Galerkin method for high-dimensional transport equations and its application to kinetic simulations. SIAM J Sci Comput 38:A3381–A3409. https://doi.org/10.1137/16m1060017
Guo F, Giacalone J (2013) The acceleration of thermal protons at parallel collisionless shocks: three-dimensional hybrid simulations. Astrophys J 773:158. https://doi.org/10.1088/0004-637X/773/2/158
Guo Y, Li Z (2008) Unstable and stable galaxy models. Commun Math Phys 279:789–813. https://doi.org/10.1007/s00220-008-0439-z
Hao Y, Gao X, Lu Q, Huang C, Wang R, Wang S (2017) Reformation of rippled quasi-parallel shocks: 2D hybrid simulations. J Geophys Res. https://doi.org/10.1002/2017JA024234
Hargreaves JK (1995) The solar–terrestrial environment. Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511628924
Harid V, Gołkowski M, Bell T, Li JD, Inan US (2014) Finite difference modeling of coherent wave amplification in the Earth’s radiation belts. Geophys Res Lett 41:8193–8200. https://doi.org/10.1002/2014GL061787
Hockney RW, Eastwood JW (1988) Computer simulation using particles. Hilger, Bristol
Hoilijoki S, Souza VM, Walsh BM, Janhunen P, Palmroth M (2014) Magnetopause reconnection and energy conversion as influenced by the dipole tilt and the IMF B\(_{x}\). J Geophys Res 119:4484–4494. https://doi.org/10.1002/2013JA019693
Hoilijoki S, Palmroth M, Walsh BM, Pfau-Kempf Y, von Alfthan S, Ganse U, Hannuksela O, Vainio R (2016) Mirror modes in the Earth’s magnetosheath: results from a global hybrid-Vlasov simulation. J Geophys Res 121:4191–4204. https://doi.org/10.1002/2015JA022026
Hoilijoki S, Ganse U, Pfau-Kempf Y, Cassak PA, Walsh BM, Hietala H, von Alfthan S, Palmroth M (2017) Reconnection rates and X line motion at the magnetopause: global 2D–3V hybrid-Vlasov simulation results. J Geophys Res 122:2877–2888. https://doi.org/10.1002/2016JA023709
Holloway JP (1995) A comparison of three velocity discretizations for the Vlasov equation. In: International conference on plasma science (papers in summary form only received), p 95. https://doi.org/10.1109/PLASMA.1995.529657
Honkonen I, von Alfthan S, Sandroos A, Janhunen P, Palmroth M (2013) Parallel grid library for rapid and flexible simulation development. Comput Phys Commun 184:1297–1309. https://doi.org/10.1016/j.cpc.2012.12.017
Hoppe MM, Russell CT, Frank LA, Eastman TE, Greenstadt EW (1981) Upstream hydromagnetic waves and their association with backstreaming ion populations—ISEE 1 and 2 observations. J Geophys Res 86:4471–4492. https://doi.org/10.1029/JA086iA06p04471
Hu J, Li G, Ao X, Zank GP, Verkhoglyadova O (2017) Modeling particle acceleration and transport at a 2D CMEdriven shock. J Geophys Res 122:10. https://doi.org/10.1002/2017JA024077
Huang CL, Spence HE, Lyon JG, Toffoletto FR, Singer HJ, Sazykin S (2006) Storm-time configuration of the inner magnetosphere: Lyon–Fedder–Mobarry MHD code, Tsyganenko model, and GOES observations. J Geophys Res 111:A11S16. https://doi.org/10.1029/2006JA011626
Inglebert A, Ghizzo A, Reveille T, Sarto DD, Bertrand P, Califano F (2011) A multi-stream Vlasov modeling unifying relativistic Weibel-type instabilities. Europhys Lett 95:45002. https://doi.org/10.1209/0295-5075/95/45002
Janhunen P, Palmroth M, Laitinen T, Honkonen I, Juusola L, Facskó G, Pulkkinen TI (2012) The GUMICS-4 global MHD magnetosphere–ionosphere coupling simulation. J Atmos Sol-Terr Phys 80:48–59. https://doi.org/10.1016/j.jastp.2012.03.006
Jenab SMH, Kourakis I (2014) Vlasov-kinetic computer simulations of electrostatic waves in dusty plasmas: an overview of recent results. Eur Phys J D 68:219. https://doi.org/10.1140/epjd/e2014-50177-4
Karimabadi H, Roytershteyn V, Vu HX, Omelchenko YA, Scudder J, Daughton W, Dimmock A, Nykyri K, Wan M, Sibeck D, Tatineni M, Majumdar A, Loring B, Geveci B (2014) The link between shocks, turbulence, and magnetic reconnection in collisionless plasmas. Phys Plasmas 21:062308. https://doi.org/10.1063/1.4882875
Kärkkäinen M, Gjonaj E, Lau T, Weiland T (2006) Low-dispersion wake field calculation tools. In: Proceedings of ICAP 2006, Chamonix, France, vol 1, p 35
Kazeminezhad F, Kuhn S, Tavakoli A (2003) Vlasov model using kinetic phase point trajectories. Phys Rev E 67:026704. https://doi.org/10.1103/PhysRevE.67.026704
Kempf Y, Pokhotelov D, von Alfthan S, Vaivads A, Palmroth M, Koskinen HEJ (2013) Wave dispersion in the hybrid-Vlasov model: verification of Vlasiator. Phys Plasmas 20:112114. https://doi.org/10.1063/1.4835315
Kempf Y, Pokhotelov D, Gutynska O, Wilson LB III, Walsh BM, von Alfthan S, Hannuksela O, Sibeck DG, Palmroth M (2015) Ion distributions in the Earth’s foreshock: hybrid-Vlasov simulation and THEMIS observations. J Geophys Res 120:3684–3701. https://doi.org/10.1002/2014JA020519
Kilian P, Muñoz PA, Schreiner C, Spanier F (2017) Plasma waves as a benchmark problem. J Plasma Phys. https://doi.org/10.1017/S0022377817000149
Klimas AJ (1987) A method for overcoming the velocity space filamentation problem in collisionless plasma model solutions. J Comput Phys 68:202–226. https://doi.org/10.1016/0021-9991(87)90052-0
Klimas A, Farrell W (1994) A splitting algorithm for Vlasov simulation with filamentation filtration. J Comput Phys 110:150–163. https://doi.org/10.1006/jcph.1994.1011
Klimas AJ, Viñas AF, Araneda JA (2017) Simulation study of Landau damping near the persisting to arrested transition. J Plasma Phys 83:905830405. https://doi.org/10.1017/S002237781700054X
Kogge PM (2009) The challenges of petascale architectures. Comput Sci Eng 11:10–16. https://doi.org/10.1109/MCSE.2009.150
Kormann K (2015) A semiLagrangian Vlasov solver in tensor train format. SIAM J Sci Comput 37:B613–B632. https://doi.org/10.1137/140971270
Kormann K, Sonnendrücker E (2016) Sparse grids for the Vlasov–Poisson equation. In: Garcke J, Pflüger D (eds) Sparse grids and applications—Stuttgart 2014. Springer, Cham, pp 163–190. https://doi.org/10.1007/978-3-319-28262-6_7
Kozarev KA, Schwadron NA (2016) A data-driven analytic model for proton acceleration by large-scale solar coronal shocks. Astrophys J 831:120. https://doi.org/10.3847/0004-637X/831/2/120. arXiv:1608.00240
Krymskii G (1977) A regular mechanism for the acceleration of charged particles on the front of a shock wave. Dokl Akad Nauk SSSR 234:1306–1308
Kurganov A, Tadmor E (2000) New high-resolution central schemes for nonlinear conservation laws and convection–diffusion equations. J Comput Phys 160:241–282. https://doi.org/10.1006/jcph.2000.6459
Langseth JO, LeVeque RJ (2000) A wave propagation method for threedimensional hyperbolic conservation laws. J Comput Phys 165:126–166. https://doi.org/10.1006/jcph.2000.6606
Lapenta G (2012) Particle simulations of space weather. J Comput Phys 231:795–821. https://doi.org/10.1016/j.jcp.2011.03.035
Le Roux JA, Arthur AD (2017) Acceleration of solar energetic particles at a fast traveling shock in nonuniform coronal conditions. J Phys: Conf Ser 900:012013. https://doi.org/10.1088/1742-6596/900/1/012013
Lee MA (2005) Coupled hydromagnetic wave excitation and ion acceleration at an evolving coronal/interplanetary shock. Astrophys J Suppl Ser 158:38–67. https://doi.org/10.1086/428753
Leonardis E, Sorriso-Valvo L, Valentini F, Servidio S, Carbone F, Veltri P (2016) Multifractal scaling and intermittency in hybrid Vlasov–Maxwell simulations of plasma turbulence. Phys Plasmas 23:022307. https://doi.org/10.1063/1.4942417
LeVeque RJ (1997) Wave propagation algorithms for multidimensional hyperbolic systems. J Comput Phys 131:327–353. https://doi.org/10.1006/jcph.1996.5603
LeVeque RJ (2002) Finite volume methods for hyperbolic problems. Cambridge texts in applied mathematics. Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511791253
Lin Y, Wing S, Johnson JR, Wang XY, Perez JD, Cheng L (2017) Formation and transport of entropy structures in the magnetotail simulated with a 3D global hybrid code. Geophys Res Lett 44:5892–5899. https://doi.org/10.1002/2017GL073957
Londrillo P, Del Zanna L (2004) On the divergence-free condition in Godunov-type schemes for ideal magnetohydrodynamics: the upwind constrained transport method. J Comput Phys 195:17–48. https://doi.org/10.1016/j.jcp.2003.09.016
Lu S, Lu Q, Lin Y, Wang X, Ge Y, Wang R, Zhou M, Fu H, Huang C, Wu M, Wang S (2015) Dipolarization fronts as earthward propagating flux ropes: a three-dimensional global hybrid simulation. J Geophys Res 120:6286–6300. https://doi.org/10.1002/2015JA021213
Luhmann JG, Ledvina SA, Odstrcil D, Owens MJ, Zhao XP, Liu Y, Riley P (2010) Cone model-based SEP event calculations for applications to multipoint observations. Adv Space Res 46:1–21. https://doi.org/10.1016/j.asr.2010.03.011
Lui ATY (1996) Current disruption in the Earth’s magnetosphere: observations and models. J Geophys Res 101:13067–13088. https://doi.org/10.1029/96JA00079
Maier A, Iapichino L, Schmidt W, Niemeyer JC (2009) Adaptively refined large eddy simulations of a galaxy cluster: turbulence modeling and the physics of the intracluster medium. Astrophys J 707:40. https://doi.org/10.1088/0004637X/707/1/40
Mangeney A, Califano F, Cavazzoni C, Trávníček P (2002) A numerical scheme for the integration of the Vlasov–Maxwell system of equations. J Comput Phys 179:495–538. https://doi.org/10.1006/jcph.2002.7071
Marchaudon A, Blelly PL (2015) A new interhemispheric 16-moment model of the plasmasphere–ionosphere system: IPIM. J Geophys Res 120:5728–5745. https://doi.org/10.1002/2015JA021193
Marcowith A, Bret A, Bykov A, Dieckman ME, Drury LO, Lembège B, Lemoine M, Morlino G, Murphy G, Pelletier G, Plotnikov I, Reville B, Riquelme M, Sironi L, Stockem Novo A (2016) The microphysics of collisionless shock waves. Rep Prog Phys 79:046901. https://doi.org/10.1088/0034-4885/79/4/046901
Marsden JE, Weinstein A (1982) The Hamiltonian structure of the Maxwell–Vlasov equations. Physica D 4:394–406. https://doi.org/10.1016/0167-2789(82)90043-4
Martins SF, Fonseca RA, Lu W, Mori WB, Silva LO (2010) Exploring laser-wakefield-accelerator regimes for near-term lasers using particle-in-cell simulation in Lorentz-boosted frames. Nature Phys 6:311–316. https://doi.org/10.1038/nphys1538
Mejnertsen L, Eastwood JP, Hietala H, Schwartz SJ, Chittenden JP (2018) Global MHD simulations of the Earth’s bow shock shape and motion under variable solar wind conditions. J Geophys Res 123:259–271. https://doi.org/10.1002/2017JA024690
Merkin VG, Lyon JG (2010) Effects of the low-latitude ionospheric boundary condition on the global magnetosphere. J Geophys Res 115:A10202. https://doi.org/10.1029/2010JA015461
MPI Forum (2004) MPI: a message-passing interface standard—version 3.1. http://www.mpi-forum.org/docs/mpi-3.1/mpi31-report.pdf. Accessed 25 July 2018
Nakamura TKM, Hasegawa H, Daughton W, Eriksson S, Li WY, Nakamura R (2017) Turbulent mass transfer caused by vortex induced reconnection in collisionless magnetospheric plasmas. Nature Commun 8:1582. https://doi.org/10.1038/s41467-017-01579-0
National Research Council (2008) Severe space weather events: understanding societal and economic impacts: a workshop report. The National Academies Press, Washington. https://doi.org/10.17226/12507
Ng CK, Reames DV (2008) Shock acceleration of solar energetic protons: the first 10 minutes. Astrophys J Lett 686:L123. https://doi.org/10.1086/592996
Nunn D (2005) Vlasov hybrid simulation—an efficient and stable algorithm for the numerical simulation of collision-free plasma. Transp Theor Stat Phys 34:151–171. https://doi.org/10.1080/00411450500255518
Nunn D, Omura Y, Matsumoto H, Nagano I, Yagitani S (1997) The numerical simulation of VLF chorus and discrete emissions observed on the Geotail satellite using a Vlasov code. J Geophys Res 102:27083–27097. https://doi.org/10.1029/97JA02518
Omidi N (1995) How the bow shock does it. Rev Geophys 33:629–637. https://doi.org/10.1029/95RG00116
Omidi N, Sibeck DG (2007) Flux transfer events in the cusp. Geophys Res Lett 34:L04106. https://doi.org/10.1029/2006GL028698
Omidi N, BlancoCano X, Russell CT (2005) Macrostructure of collisionless bow shocks: 1. Scale lengths. J Geophys Res 110:A12212. https://doi.org/10.1029/2005JA011169
OpenMP Architecture Review Board (2011) OpenMP application program interface—version 3.1. http://www.openmp.org/mp-documents/OpenMP3.1.pdf. Accessed 25 July 2018
Palmroth M, Janhunen P, Pulkkinen TI, Peterson WK (2001) Cusp and magnetopause locations in global MHD simulation. J Geophys Res 106:29435–29450. https://doi.org/10.1029/2001JA900132
Palmroth M, Pulkkinen TI, Janhunen P, Wu CC (2003) Storm-time energy transfer in global MHD simulation. J Geophys Res 108:1048. https://doi.org/10.1029/2002JA009446
Palmroth M, Janhunen P, Germany G, Lummerzheim D, Liou K, Baker DN, Barth C, Weatherwax AT, Watermann J (2006a) Precipitation and total power consumption in the ionosphere: global MHD simulation results compared with Polar and SNOE observations. Ann Geophys 24:861–872. https://doi.org/10.5194/angeo-24-861-2006
Palmroth M, Janhunen P, Pulkkinen TI (2006b) Hysteresis in solar wind power input to the magnetosphere. Geophys Res Lett 33:L03107. https://doi.org/10.1029/2005GL025188
Palmroth M, Laitinen TV, Pulkkinen TI (2006c) Magnetopause energy and mass transfer: results from a global MHD simulation. Ann Geophys 24:3467–3480. https://doi.org/10.5194/angeo-24-3467-2006
Palmroth M, Honkonen I, Sandroos A, Kempf Y, von Alfthan S, Pokhotelov D (2013) Preliminary testing of global hybrid-Vlasov simulation: magnetosheath and cusps under northward interplanetary magnetic field. J Atmos Sol-Terr Phys 99:41–46. https://doi.org/10.1016/j.jastp.2012.09.013
Palmroth M, Archer M, Vainio R, Hietala H, Pfau-Kempf Y, Hoilijoki S, Hannuksela O, Ganse U, Sandroos A, von Alfthan S, Eastwood JP (2015) ULF foreshock under radial IMF: THEMIS observations and global kinetic simulation Vlasiator results compared. J Geophys Res 120:8782–8798. https://doi.org/10.1002/2015JA021526
Palmroth M, Hoilijoki S, Juusola L, Pulkkinen T, Hietala H, Pfau-Kempf Y, Ganse U, von Alfthan S, Vainio R, Hesse M (2017) Tail reconnection in the global magnetospheric context: Vlasiator first results. Ann Geophys 35:1269–1274. https://doi.org/10.5194/angeo-35-1269-2017
Perrone D, Valentini F, Servidio S, Dalena S, Veltri P (2013) Vlasov simulations of multi-ion plasma turbulence in the solar wind. Astrophys J 762:99. https://doi.org/10.1088/0004-637X/762/2/99
Perrone D, Bourouaine S, Valentini F, Marsch E, Veltri P (2014a) Generation of temperature anisotropy for alpha particle velocity distributions in solar wind at 0.3 AU: Vlasov simulations and Helios observations. J Geophys Res 119:2400–2410. https://doi.org/10.1002/2013JA019564
Perrone D, Valentini F, Servidio S, Dalena S, Veltri P (2014b) Analysis of intermittent heating in a multicomponent turbulent plasma. Eur Phys J D 68:209. https://doi.org/10.1140/epjd/e2014501521
Peterson WK, Sharp RD, Shelley EG, Johnson RG, Balsiger H (1981) Energetic ion composition of the plasma sheet. J Geophys Res 86:761–767. https://doi.org/10.1029/JA086iA02p00761
Pfau-Kempf Y (2016) Vlasiator—from local to global magnetospheric hybrid-Vlasov simulations. PhD thesis, University of Helsinki. http://urn.fi/URN:ISBN:978-952-336-001-3. Accessed 25 July 2018
Pfau-Kempf Y, Hietala H, Milan SE, Juusola L, Hoilijoki S, Ganse U, von Alfthan S, Palmroth M (2016) Evidence for transient, local ion foreshocks caused by dayside magnetopause reconnection. Ann Geophys 34:943–959. https://doi.org/10.5194/angeo-34-943-2016
Pfau-Kempf Y, Battarbee M, Ganse U, Hoilijoki S, Turc L, von Alfthan S, Vainio R, Palmroth M (2018) On the importance of spatial and velocity resolution in the hybrid-Vlasov modeling of collisionless shocks. Front Phys 6:44. https://doi.org/10.3389/fphy.2018.00044
Pinto RF, Rouillard AP (2017) A multiple flux-tube solar wind model. Astrophys J 838:89. https://doi.org/10.3847/1538-4357/aa6398
Pokhotelov D, von Alfthan S, Kempf Y, Vainio R, Koskinen HEJ, Palmroth M (2013) Ion distributions upstream and downstream of the Earth’s bow shock: first results from Vlasiator. Ann Geophys 31:2207–2212. https://doi.org/10.5194/angeo-31-2207-2013
Pritchett PL (2005) Externally driven magnetic reconnection in the presence of a normal magnetic field. J Geophys Res 110:A05209. https://doi.org/10.1029/2004JA010948
Pucci F, Vásconez CL, Pezzi O, Servidio S, Valentini F, Matthaeus WH, Malara F (2016) From Alfvén waves to kinetic Alfvén waves in an inhomogeneous equilibrium structure. J Geophys Res 121:1024–1045. https://doi.org/10.1002/2015JA022216
Pulkkinen TI, Palmroth M, Tanskanen EI, Janhunen P, Koskinen HEJ, Laitinen TV (2006) New interpretation of magnetospheric energy circulation. Geophys Res Lett 33:L07101. https://doi.org/10.1029/2005GL025457
Richer E, Modolo R, Chanteur GM, Hess S, Leblanc F (2012) A global hybrid model for Mercury’s interaction with the solar wind: case study of the dipole representation. J Geophys Res 117:10228. https://doi.org/10.1029/2012JA017898
Rieke M, Trost T, Grauer R (2015) Coupled Vlasov and two-fluid codes on GPUs. J Comput Phys 283:436–452. https://doi.org/10.1016/j.jcp.2014.12.016
Rodger CJ, Kavanagh AJ, Clilverd MA, Marple SR (2013) Comparison between POES energetic electron precipitation observations and riometer absorptions: implications for determining true precipitation fluxes. J Geophys Res 118:7810–7821. https://doi.org/10.1002/2013JA019439
Rönnmark K (1982) WHAMP—waves in homogeneous, anisotropic multicomponent plasmas. Kiruna Geophysical Institute reports
Rossmanith JA, Seal DC (2011) A positivity-preserving high-order semi-Lagrangian discontinuous Galerkin scheme for the Vlasov–Poisson equations. J Comput Phys 230:6203–6232. https://doi.org/10.1016/j.jcp.2011.04.018
Sandroos A, Honkonen I, von Alfthan S, Palmroth M (2013) Multi-GPU simulations of Vlasov’s equation using Vlasiator. Parallel Comput 39:306–318. https://doi.org/10.1016/j.parco.2013.05.001
Sandroos A, von Alfthan S, Hoilijoki S, Honkonen I, Kempf Y, Pokhotelov D, Palmroth M (2015) Vlasiator: global kinetic magnetospheric modeling tool. In: Numerical modeling of space plasma flows, ASTRONUM2014, Astronomical Society of the Pacific conference series, vol 498, p 222
Sarrat M, Ghizzo A, Del Sarto D, Serrat L (2017) Parallel implementation of a relativistic semi-Lagrangian Vlasov–Maxwell solver. Eur Phys J D 71:271. https://doi.org/10.1140/epjd/e2017-80188-4
Schaeffer J (1998) Convergence of a difference scheme for the Vlasov–Poisson–Fokker–Planck system in one dimension. SIAM J Numer Anal 35:1149–1175. https://doi.org/10.1137/S0036142996302554
Schaye J, Crain RA, Bower RG, Furlong M, Schaller M, Theuns T, Dalla Vecchia C, Frenk CS, McCarthy IG, Helly JC, Jenkins A, Rosas-Guevara YM, White SDM, Baes M, Booth CM, Camps P, Navarro JF, Qu Y, Rahmati A, Sawala T, Thomas PA, Trayford J (2015) The EAGLE project: simulating the evolution and assembly of galaxies and their environments. Mon Not R Astron Soc 446:521–554. https://doi.org/10.1093/mnras/stu2058
Schmieder B, Archontis V, Pariat E (2014) Magnetic flux emergence along the solar cycle. Space Sci Rev 186:227–250. https://doi.org/10.1007/s11214-014-0088-9
Schmitz H, Grauer R (2006) Darwin–Vlasov simulations of magnetised plasmas. J Comput Phys 214:738–756. https://doi.org/10.1016/j.jcp.2005.10.013
Sergeev VA, Angelopoulos V, Nakamura R (2012) Recent advances in understanding substorm dynamics. Geophys Res Lett 39:L05101. https://doi.org/10.1029/2012GL050859
Servidio S, Valentini F, Califano F, Veltri P (2012) Local kinetic effects in two-dimensional plasma turbulence. Phys Rev Lett 108:045001. https://doi.org/10.1103/PhysRevLett.108.045001
Servidio S, Osman KT, Valentini F, Perrone D, Califano F, Chapman S, Matthaeus WH, Veltri P (2014) Proton kinetic effects in Vlasov and solar wind turbulence. Astrophys J Lett 781:L27. https://doi.org/10.1088/2041-8205/781/2/L27
Servidio S, Valentini F, Perrone D, Greco A, Califano F, Matthaeus WH, Veltri P (2015) A kinetic model of plasma turbulence. J Plasma Phys 81:325810107. https://doi.org/10.1017/S0022377814000841
Shoucri M (2008) Eulerian codes for the numerical solution of the Vlasov equation. Commun Nonlinear Sci Numer Simul 13:174–182. https://doi.org/10.1016/j.cnsns.2007.04.004
Sircombe NJ, Arber TD, Dendy RO (2004) Accelerated electron populations formed by Langmuir wave–caviton interactions. Phys Plasmas 12:012303. https://doi.org/10.1063/1.1822934
Sokolov IV, Roussev II, Skender M, Gombosi TI, Usmanov AV (2009) Transport equation for MHD turbulence: application to particle acceleration at interplanetary shocks. Astrophys J 696:261–267. https://doi.org/10.1088/0004-637X/696/1/261
Sonnendrücker E, Roche J, Bertrand P, Ghizzo A (1999) The semi-Lagrangian method for the numerical resolution of the Vlasov equation. J Comput Phys 149:201–220. https://doi.org/10.1006/jcph.1998.6148
Soucek J, Escoubet CP, Grison B (2015) Magnetosheath plasma stability and ULF wave occurrence as a function of location in the magnetosheath and upstream bow shock parameters. J Geophys Res 120:2838–2850. https://doi.org/10.1002/2015JA021087
Spreiter J, Stahara S (1994) Gasdynamic and magnetohydrodynamic modeling of the magnetosheath: a tutorial. Adv Space Res 14:5–19. https://doi.org/10.1016/0273-1177(94)90042-6
Springel V (2005) The cosmological simulation code GADGET-2. Mon Not R Astron Soc 364:1105–1134. https://doi.org/10.1111/j.1365-2966.2005.09655.x
Strang G (1968) On the construction and comparison of difference schemes. SIAM J Numer Anal 5:506–517. https://doi.org/10.1137/0705041
Thomas AGR (2016) Vlasov simulations of thermal plasma waves with relativistic phase velocity in a Lorentz boosted frame. Phys Rev E 94:053204. https://doi.org/10.1103/PhysRevE.94.053204
Toledo-Redondo S, André M, Vaivads A, Khotyaintsev YV, Lavraud B, Graham DB, Divin A, Aunai N (2016) Cold ion heating at the dayside magnetopause during magnetic reconnection. Geophys Res Lett 43:58–66. https://doi.org/10.1002/2015GL067187
Toro E (2014) Riemann solvers and numerical methods for fluid dynamics: a practical introduction. Springer, Berlin. https://doi.org/10.1007/b79761
Tóth G (2000) The \(\nabla \cdot \mathbf{B}=0\) constraint in shockcapturing magnetohydrodynamics codes. J Comput Phys 161:605–652. https://doi.org/10.1006/jcph.2000.6519
Tóth G, Jia X, Markidis S, Peng IB, Chen Y, Daldorff LKS, Tenishev VM, Borovikov D, Haiducek JD, Gombosi TI, Glocer A, Dorelli JC (2016) Extended magnetohydrodynamics with embedded particle-in-cell simulation of Ganymede’s magnetosphere. J Geophys Res 121:1273–1293. https://doi.org/10.1002/2015JA021997
Tronci C, Tassi E, Camporeale E, Morrison PJ (2014) Hybrid Vlasov–MHD models: Hamiltonian vs. non-Hamiltonian. Plasma Phys Control Fusion 56:095008. https://doi.org/10.1088/0741-3335/56/9/095008
Turc L, Fontaine D, Escoubet CP, Kilpua EKJ, Dimmock AP (2017) Statistical study of the alteration of the magnetic structure of magnetic clouds in the Earth’s magnetosheath. J Geophys Res 122:2956–2972. https://doi.org/10.1002/2016JA023654
Umeda T (2012) Effect of ion cyclotron motion on the structure of wakes: a Vlasov simulation. Earth Planets Space 64:16. https://doi.org/10.5047/eps.2011.05.035
Umeda T, Fukazawa K (2015) A high-resolution global Vlasov simulation of a small dielectric body with a weak intrinsic magnetic field on the K computer. Earth Planets Space 67:49. https://doi.org/10.1186/s40623-015-0216-0
Umeda T, Ito Y (2014) Entry of solar-wind ions into the wake of a small body with a magnetic anomaly: a global Vlasov simulation. Planet Space Sci 93:35–40. https://doi.org/10.1016/j.pss.2014.02.002
Umeda T, Wada Y (2016) Secondary instabilities in the collisionless Rayleigh–Taylor instability: full kinetic simulation. Phys Plasmas 23:112117. https://doi.org/10.1063/1.4967859
Umeda T, Wada Y (2017) Non-MHD effects in the nonlinear development of the MHD-scale Rayleigh–Taylor instability. Phys Plasmas 24:072307. https://doi.org/10.1063/1.4991409
Umeda T, Togano K, Ogino T (2009) Two-dimensional full-electromagnetic Vlasov code with conservative scheme and its application to magnetic reconnection. Comput Phys Commun 180:365–374. https://doi.org/10.1016/j.cpc.2008.11.001
Umeda T, Miwa J, Matsumoto Y, Nakamura TKM, Togano K, Fukazawa K, Shinohara I (2010a) Full electromagnetic Vlasov code simulation of the Kelvin–Helmholtz instability. Phys Plasmas 17:052311. https://doi.org/10.1063/1.3422547
Umeda T, Togano K, Ogino T (2010b) Structures of diffusion regions in collisionless magnetic reconnection. Phys Plasmas 17:052103. https://doi.org/10.1063/1.3403345
Umeda T, Kimura T, Togano K, Fukazawa K, Matsumoto Y, Miyoshi T, Terada N, Nakamura TKM, Ogino T (2011) Vlasov simulation of the interaction between the solar wind and a dielectric body. Phys Plasmas 18:012908. https://doi.org/10.1063/1.3551510
Umeda T, Ito Y, Fukazawa K (2013) Global Vlasov simulation on magnetospheres of astronomical objects. J Phys: Conf Ser 454:012005. https://doi.org/10.1088/1742-6596/454/1/012005
Umeda T, Ueno S, Nakamura TKM (2014) Ion kinetic effects on nonlinear processes of the Kelvin–Helmholtz instability. Plasma Phys Control Fusion 56:075006. https://doi.org/10.1088/0741-3335/56/7/075006
Usami S, Horiuchi R, Ohtani H, Den M (2013) Development of multi-hierarchy simulation model with non-uniform space grids for collisionless driven reconnection. Phys Plasmas 20:061208. https://doi.org/10.1063/1.4811121
Vainio R, Pönni A, Battarbee M, Koskinen HEJ, Afanasiev A, Laitinen T (2014) A semianalytical foreshock model for energetic storm particle events inside 1 AU. J Space Weather Space Clim 4:A08. https://doi.org/10.1051/swsc/2014005
Valentini F, Trávníček P, Califano F, Hellinger P, Mangeney A (2007) A hybrid-Vlasov model based on the current advance method for the simulation of collisionless magnetized plasma. J Comput Phys 225:753–770. https://doi.org/10.1016/j.jcp.2007.01.001
Valentini F, Califano F, Veltri P (2010) Twodimensional kinetic turbulence in the solar wind. Phys Rev Lett 104:205002. https://doi.org/10.1103/PhysRevLett.104.205002
Valentini F, Perrone D, Veltri P (2011) Short-wavelength electrostatic fluctuations in the solar wind. Astrophys J 739:54. https://doi.org/10.1088/0004-637X/739/1/54
Valentini F, Servidio S, Perrone D, Califano F, Matthaeus WH, Veltri P (2014) Hybrid Vlasov–Maxwell simulations of two-dimensional turbulence in plasmas. Phys Plasmas 21:082307. https://doi.org/10.1063/1.4893301
Valentini F, Perrone D, Stabile S, Pezzi O, Servidio S, De Marco R, Marcucci F, Bruno R, Lavraud B, De Keyser J, Consolini G, Brienza D, Sorriso-Valvo L, Retinò A, Vaivads A, Salatti M, Veltri P (2016) Differential kinetic dynamics and heating of ions in the turbulent solar wind. New J Phys 18:125001. https://doi.org/10.1088/1367-2630/18/12/125001
van Marle AJ, Casse F, Marcowith A (2018) On magnetic field amplification and particle acceleration near non-relativistic astrophysical shocks: particles in MHD cells simulations. Mon Not R Astron Soc 473:3394–3409. https://doi.org/10.1093/mnras/stx2509
Vásconez CL, Valentini F, Camporeale E, Veltri P (2014) Vlasov simulations of kinetic Alfvén waves at proton kinetic scales. Phys Plasmas 21:112107. https://doi.org/10.1063/1.4901583
Vásconez CL, Pucci F, Valentini F, Servidio S, Matthaeus WH, Malara F (2015) Kinetic Alfvén wave generation by large-scale phase mixing. Astrophys J 815:7. https://doi.org/10.1088/0004-637X/815/1/7
Vay JL, Geddes C, Cormier-Michel E, Grote D (2011) Numerical methods for instability mitigation in the modeling of laser wakefield accelerators in a Lorentz-boosted frame. J Comput Phys 230:5908–5929. https://doi.org/10.1016/j.jcp.2011.04.003
Verdini A, Velli M, Matthaeus WH, Oughton S, Dmitruk P (2010) A turbulence-driven model for heating and acceleration of the fast wind in coronal holes. Astrophys J Lett 708:L116. https://doi.org/10.1088/2041-8205/708/2/L116
Verronen PT, Seppälä A, Clilverd MA, Rodger CJ, Kyrölä E, Enell CF, Ulich T, Turunen E (2005) Diurnal variation of ozone depletion during the October–November 2003 solar proton events. J Geophys Res. https://doi.org/10.1029/2004JA010932
Verscharen D, Marsch E, Motschmann U, Müller J (2012) Kinetic cascade beyond magnetohydrodynamics of solar wind turbulence in two-dimensional hybrid simulations. Phys Plasmas 19:022305. https://doi.org/10.1063/1.3682960
Vlasov AA (1961) Many-particle theory and its application to plasma. Gordon & Breach, New York
Vogman G (2016) Fourth-order conservative Vlasov–Maxwell solver for Cartesian and cylindrical phase space coordinates. PhD thesis, University of California, Berkeley. https://escholarship.org/uc/item/1c49t97t. Accessed 25 July 2018
von Alfthan S, Pokhotelov D, Kempf Y, Hoilijoki S, Honkonen I, Sandroos A, Palmroth M (2014) Vlasiator: first global hybrid-Vlasov simulations of Earth’s foreshock and magnetosheath. J Atmos Sol-Terr Phys 120:24–35. https://doi.org/10.1016/j.jastp.2014.08.012
Watermann J, Wintoft P, Sanahuja B, Saiz E, Poedts S, Palmroth M, Milillo A, Metallinou FA, Jacobs C, Ganushkina NY, Daglis IA, Cid C, Cerrato Y, Balasis G, Aylward AD, Aran A (2009) Models of solar wind structures and their interaction with the Earth’s space environment. Space Sci Rev 147:233–270. https://doi.org/10.1007/s11214-009-9494-9
Weinstock J (1969) Formulation of a statistical theory of strong plasma turbulence. Phys Fluids 12:1045–1058. https://doi.org/10.1063/1.2163666
Wettervik BS, DuBois TC, Siminos E, Fülöp T (2017) Relativistic Vlasov–Maxwell modelling using finite volumes and adaptive mesh refinement. Eur Phys J D 71:157. https://doi.org/10.1140/epjd/e2017-80102-2
Wik M, Viljanen A, Pirjola R, Pulkkinen A, Wintoft P, Lundstedt H (2008) Calculation of geomagnetically induced currents in the 400 kV power grid in southern Sweden. Space Weather 6:07005. https://doi.org/10.1029/2007SW000343
Wright AN, Russell AJB (2014) Alfvén wave boundary condition for responsive magnetosphere–ionosphere coupling. J Geophys Res 119:3996–4009. https://doi.org/10.1002/2014JA019763
Yang LP, Feng XS, Xiang CQ, Liu Y, Zhao X, Wu ST (2012) Time-dependent MHD modeling of the global solar corona for year 2007: driven by daily-updated magnetic field synoptic data. J Geophys Res 117:A08110. https://doi.org/10.1029/2011JA017494
Ye H, Morrison PJ (1992) Action principles for the Vlasov equation. Phys Fluids B 4:771–777. https://doi.org/10.1063/1.860231
Yee K (1966) Numerical solution of initial boundary value problems involving Maxwell’s equations in isotropic media. IEEE Trans Ant Prop 14:302–307. https://doi.org/10.1109/TAP.1966.1138693
Zenitani S, Umeda T (2014) Some remarks on the diffusion regions in magnetic reconnection. Phys Plasmas 21:034503. https://doi.org/10.1063/1.4869717
Zerroukat M, Allen T (2012) A three-dimensional monotone and conservative semi-Lagrangian scheme (SLICE-3D) for transport problems. Quart J R Meteorol Soc 138:1640–1651. https://doi.org/10.1002/qj.1902
Zhang M, Feng X (2016) A comparative study of divergence cleaning methods of magnetic field in the solar coronal numerical simulation. Front Astron Space Sci 3:6. https://doi.org/10.3389/fspas.2016.00006
Acknowledgements
We acknowledge the European Research Council for Starting Grant 200141-QuESpace, with which Vlasiator (http://helsinki.fi/vlasiator) was developed, and Consolidator Grant 682068-PRESTISSIMO, awarded to further develop Vlasiator and use it for scientific investigations. We also gratefully acknowledge the Academy of Finland (Grant Numbers 138599, 267144, and 309937). The Finnish Centre of Excellence in Research of Sustainable Space, funded through the Academy of Finland with Grant Number 312351, supports Vlasiator development and science as well. We acknowledge all computational grants we have received: PRACE/Tier-0 2012061111 on Hermit/HLRS, PRACE/Tier-1 on Abel/UiO-NOTUR, PRACE/Tier-0 2014112573 on Hazel Hen/HLRS, PRACE/Tier-0 2016153521 on Marconi/CINECA, CSC – IT Center for Science Grand Challenge grants in 2015 and 2016, and the pilot use in summer as well as the special Christmas present pilot use of sisu.csc.fi in 2014. LT is supported by Marie Skłodowska-Curie Grant Agreement No. 704681.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Palmroth, M., Ganse, U., Pfau-Kempf, Y. et al. Vlasov methods in space physics and astrophysics. Living Rev Comput Astrophys 4, 1 (2018). https://doi.org/10.1007/s41115-018-0003-2
Keywords
 Plasma physics
 Computational physics
 Vlasov equation
 Astrophysics
 Space physics