Physical, numerical, and computational challenges of modeling neutrino transport in core-collapse supernovae

The proposal that core collapse supernovae are neutrino driven is still the subject of active investigation more than fifty years after the seminal paper by Colgate and White. The modern version of this paradigm, which we owe to Wilson, proposes that the supernova shock wave is powered by neutrino heating, mediated by the absorption of electron-flavor neutrinos and antineutrinos emanating from the proto-neutron star surface, or neutrinosphere. Neutrino weak interactions with the stellar core fluid, the theory of which is still evolving, are flavor and energy dependent. The associated neutrino mean free paths extend over many orders of magnitude and are never everywhere small relative to the stellar core radius. Thus, neutrinos are never everywhere fluid-like, and a kinetic description of them is needed: in terms of distribution functions that determine the number density of neutrinos in the six-dimensional phase space of position, direction, and energy, for both neutrinos and antineutrinos of each flavor, or in terms of angular moments of these neutrino distributions, which instead provide neutrino number densities in the four-dimensional phase-space subspace of position and energy. In turn, the computational challenge is twofold: (i) to map the kinetic equations governing the evolution of these distributions or moments onto discrete representations that are stable, accurate, and, perhaps most important, respect physical laws such as conservation of lepton number and energy and the Fermi--Dirac nature of neutrinos and (ii) to develop efficient, supercomputer-architecture-aware solution methods for the resultant nonlinear algebraic equations. In this review, we present the current state of the art in attempts to meet this challenge.


Preface
At this stage in the development of the theory of core-collapse supernovae, two possible explosion mechanisms are most often discussed: neutrino driven and magnetorotationally driven. The state of the theory is not sufficiently well developed to determine whether there is a clear break between these two cases or whether they represent limiting cases of a continuum. Nonetheless, in any scenario, the physics discussed here is relevant. It either dominates, leading to a neutrino-driven explosion, or sets the stage for a magnetorotationally driven supernova. That is, core-collapse supernova theorists have no choice but to first master and, more important, implement realistic models of neutrino transport in core-collapse supernova environments. What is meant by "realistic" will hopefully become clear as we progress through this review, but one thing should already be clear: challenges to achieving realism will be faced on multiple fronts: physical, numerical, and computational.
When charged to write this review, we were asked not to provide an encyclopedic review of past work in the field but, rather, to present the current issues and challenges faced by the core-collapse supernova modeling community, particularly as they pertain to what is arguably the most difficult aspect to model: neutrino transport. Thus, with this charge in mind, we have written our review with an emphasis on the future, on what modelers must and will face to develop realistic models of these most important events.

Setting the stage
The idea that core-collapse supernovae could be neutrino driven was first proposed more than fifty years ago by Colgate and White (1966) in their seminal numerical study. This work set neutrinos front and center in core-collapse supernova theory, which has remained the case ever since. The Colgate and White studies were followed by the early studies of Wilson (1971) that cast doubt on the efficacy of their proposal. But the development of the electroweak theory, which predicted the existence of weak neutral currents, would change all that. Given weak neutral currents, Freedman recognized that it would be possible for neutrinos to scatter coherently off all of the nucleons in a nucleus. The cross sections for such scattering would scale with the square of the nuclear neutron number, N, and would consequently be large.
Shortly thereafter, Wilson (1974), using the new weak interaction cross sections for this process, demonstrated that the Colgate and White proposal was in fact viable. The recognition of this intertwined relationship between core-collapse supernova physics and neutrino weak interaction physics drives continued research to this day. This early and foundational work set in motion nearly forty years of further study under the assumption of spherical symmetry, which traversed a range of descriptions of neutrino transport in stellar cores and a range of sophistication in the treatment of the input microphysics: the neutrino weak interaction physics and the equations of state describing a stellar core's nuclear, leptonic, and photonic degrees of freedom.
Neutrino mediation of core-collapse supernova dynamics in its modern instantiation is through charged-current absorption of electron neutrinos and antineutrinos on neutrons and protons, respectively. The nucleons become available as the stalled supernova shock wave dissociates the nuclei in the infalling stellar core material as the material passes through it. The neutrino absorption heats the material, depositing energy behind the shock. The shock loses energy initially to dissociation and neutrino losses. When sufficient energy is deposited by neutrino heating, the shock again becomes dynamical, propagates outward in radius, and reverses the infall of material passing through it, to disrupt the star in a core-collapse supernova (Wilson, 1985; Bethe and Wilson, 1985). This modern instantiation of neutrinos' role in the supernova mechanism relies on the developments surrounding the large neutrino-nucleus scattering cross sections discussed earlier. Arnett (1977) was the first to show that such cross sections led to the trapping of the electron neutrinos produced during stellar core collapse through electron capture on nuclei and protons. He demonstrated that, despite their nature as weakly interacting particles, the densities in the stellar core rise sufficiently rapidly to render the electron neutrino mean free paths smaller than the size of the stellar core. Neutrino trapping gives rise to a trapped degenerate sea of electron neutrinos in the inner stellar core that emerge after stellar core bounce and the launch of the supernova shock wave from the proto-neutron star on diffusive time scales.
The proto-neutron star comprises the inner cold unshocked core and a hot shocked mantle of material above it that is not ejected by the shock. Electron degeneracy is lifted in the hot mantle, leading to a significant population of electron-positron pairs, which in turn leads to the production of neutrinos and antineutrinos of all three flavors via electron-positron annihilation. The densities in the mantle are sufficiently high that neutrinospheres for all three flavors of neutrinos and antineutrinos exist, all lying within kilometers of each other, as a function of flavor and energy, in the density cliff that defines the proto-neutron star surface. The post-bounce stratification of the core, setting the stage for neutrino shock revival, is shown in Fig. 1. Neutrinos of all three flavors emerge from their respective neutrinospheres at the proto-neutron star surface. Between the proto-neutron star surface and the shock, neutrino heating and cooling take place through charged-current electron neutrino and antineutrino absorption on and emission by nucleons, respectively. The different radial dependencies of neutrino heating and cooling lead to net heating above the "gain radius" and net cooling below it. The region between the gain radius and the shock, where net neutrino heating takes place, is known as the gain region.
The energy deposition rate per gram of material in the gain region can be expressed in terms of the electron neutrino and antineutrino luminosities, squared rms energies, and inverse flux factors as

\[
\frac{d\varepsilon}{dt}
= \frac{X_n}{\lambda^a_0}\,\frac{L_{\nu_e}}{4\pi r^2}\,\bigl\langle E^2_{\nu_e}\bigr\rangle\,\bigl\langle F_{\nu_e}\bigr\rangle
+ \frac{X_p}{\bar{\lambda}^a_0}\,\frac{L_{\bar{\nu}_e}}{4\pi r^2}\,\bigl\langle E^2_{\bar{\nu}_e}\bigr\rangle\,\bigl\langle F_{\bar{\nu}_e}\bigr\rangle ,
\tag{1}
\]

where ε is the internal energy of the stellar core fluid per gram, X_{n,p} are the neutron and proton mass fractions, respectively, L_{ν_e,ν̄_e} are the electron neutrino and antineutrino luminosities, respectively, F_{ν_e,ν̄_e} are the inverse flux factors for the electron neutrinos and antineutrinos, respectively, and λ^a_0, λ̄^a_0 are constants related to the weak interaction coupling constants. Thus, knowledge of the neutrino luminosities, spectra, and angular distributions is needed to compute the neutrino heating rates. This requires knowledge of the neutrino distribution functions, f_{ν_e,ν̄_e}(r, θ, φ, E, θ_p, φ_p, t), from which these quantities can be calculated. The neutrino distribution functions are determined by solving their respective Boltzmann kinetic equations, which will be discussed later. Thus, the core-collapse supernova problem is a phase-space problem, in the end involving six dimensions plus time. The common parlance dividing core-collapse supernova models between "1D" (spherical symmetry), "2D" (axisymmetry), and "3D" models is quite misleading. In reality, the dimensionality is 3D for spherical symmetry, involving 1 spatial dimension (radius) and 2 momentum-space dimensions (neutrino energy and a single direction cosine); 5D for axisymmetry, involving 2 spatial dimensions (radius and θ) and 3 momentum-space dimensions (neutrino energy and 2 direction cosines); and 6D otherwise, involving 3 spatial dimensions (radius, θ, and φ) and 3 momentum-space dimensions (neutrino energy and 2 direction cosines).
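To make the bookkeeping concrete, the heating-rate expression just described can be evaluated schematically. A minimal sketch follows; all numerical values (luminosities, rms energies, inverse flux factors, mass fractions, the constant `lam0`, and the function name `heating_term`) are hypothetical placeholders of our own, chosen only to exercise the structure of the formula, not values taken from the text.

```python
import math

# Hypothetical placeholder values (illustrative only, not from the text):
r = 150e5                         # radius in the gain region [cm]
X_n, X_p = 0.6, 0.4               # neutron and proton mass fractions
L_nue, L_nuebar = 4e52, 4e52      # electron (anti)neutrino luminosities [erg/s]
E2_nue = (15.0 * 1.602e-6) ** 2   # squared rms energy: (15 MeV)^2 in erg^2
E2_nuebar = (17.0 * 1.602e-6) ** 2
invF_nue, invF_nuebar = 1.2, 1.2  # inverse flux factors (dimensionless)
lam0 = 1.0e-44                    # placeholder for a lambda_0^a-type constant

def heating_term(X, L, E2, invF, lam):
    """One species' contribution, schematically
    (X / lam) * (L / (4 pi r^2)) * <E^2> * <1/F>."""
    return (X / lam) * (L / (4.0 * math.pi * r**2)) * E2 * invF

# Total specific heating rate: neutrino absorption on neutrons plus
# antineutrino absorption on protons.
q_dot = (heating_term(X_n, L_nue, E2_nue, invF_nue, lam0)
         + heating_term(X_p, L_nuebar, E2_nuebar, invF_nuebar, lam0))
```

Note the dependencies the text emphasizes: the rate is linear in each luminosity and inverse flux factor, so a ~10% change in either propagates directly into the heating rate.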
The central densities of the proto-neutron star reach values between 10^14 and 10^15 g cm^-3. Its mass, which is O(1) M_sun, is initially contained within a radius O(100) km. Such conditions are not Newtonian. Detailed comparisons made in the context of spherically symmetric models of core-collapse supernovae between Newtonian and general relativistic models revealed the dramatic differences in the overall "compactification" of the postbounce core configuration defined by the neutrinosphere, gain, and shock radii, as well as the dramatic difference between the magnitudes of the infall velocities through the gain region. Moreover, neutrino luminosities and rms energies were increased in the general relativistic case due to the higher core temperatures. These studies made obvious the fact that the core-collapse supernova environment is a general relativistic environment.
Models that assume spherical symmetry reached the needed level of sophistication only fairly recently, with fully general relativistic models that included Boltzmann neutrino transport, an extensive set of neutrino weak interactions, and, at the time, an industry-standard equation of state (Liebendörfer et al, 2001; Lentz et al, 2012b). The outcomes of these models were quite discouraging. In all cases, the shock radius reaches a maximum and then recedes with time until the simulations are terminated. Explosion does not occur, and the end outcome of each simulation would be the formation of a stellar-mass black hole.
With the exception of the lowest-mass massive stars (Kitaura et al, 2006), it became clear that the Colgate and White proposal was doomed to fail without the aid of additional physics. Specifically, the assumption of spherical symmetry had to be eliminated. In retrospect, it is now obvious why: Neutrino emission by the proto-neutron star, driving the explosion above, is fueled by the accretion of stellar core material onto it. Explosion in spherical symmetry would cut off such accretion entirely once initiated, cutting off the fuel that drives the neutrino emission that drives the explosion. Unless accretion and explosion can occur simultaneously, we are presented with a Goldilocks problem: Enough energy has to be deposited behind the shock before explosion occurs, so, for a sufficiently energetic explosion, the explosion cannot occur too soon. And given that the accretion rates decrease with time, owing to the stellar core density profiles, an explosion also cannot occur too late.
The first two-dimensional core-collapse supernova simulations by Herant et al (1992, 1994) demonstrated that accretion and explosion naturally coexist in the postshock flow. Heating by the proto-neutron star from below generates convection in the gain region. Such "neutrino-driven" convection allows continued accretion while some of the material is heated, expands, and moves outward. Lower-entropy, accreting fingers are evident in Fig. 2, as well as higher-entropy rising plumes. The Herant et al. studies opened the next, much-needed chapter in core-collapse supernova theory. As with spherically symmetric modeling, axisymmetric modeling continues to this day. (See Müller 2020 for a focused and comprehensive review on convection and other fluid instabilities in core-collapse supernova environments that are integral to the supernova explosion mechanism.) The core-collapse supernova modeling community has not yet produced general relativistic axisymmetric models with Boltzmann neutrino transport and with industry-standard weak interaction physics and equations of state, but significant progress has been made. The first simulations to evolve both the neutrino spectra and their angular distributions were performed by Ott et al (2008). Included were the spatial advection terms on the left-hand side of the Boltzmann equation (corresponding to neutrino transport in each of the spatial dimensions) and the collision term on the right-hand side of the equation (corresponding to neutrino sources and sinks due to emission, absorption, and scattering) with a subset of the weak interactions considered complete today. The simulations were purely Newtonian. Neglected were all relativistic effects in the Boltzmann kinetic equations, describing special relativistic Doppler shift of neutrino energies, general relativistic blue and red shift of neutrino energies, angular aberration of neutrino propagation, etc.
Outcomes from their multi-angle, multi-frequency approach were compared with outcomes from a similar simulation performed with multigroup flux-limited diffusion. Notable differences were obtained between the two transport approaches in the results obtained for the neutrino radiation field quantities entering the expression for neutrino heating, Eq. (1) (specifically, the inverse flux factors and rms energies), which translated into notable differences in neutrino heating, up to a factor of 3 for rapidly rotating cores. More recent studies assuming axisymmetry by Nagakura et al (2018) implemented special relativistic Boltzmann neutrino transport with a subset of the neutrino weak interactions regarded as essential in today's leading multi-physics models, coupled to Newtonian hydrodynamics and gravity. In light of their Boltzmann implementation, these authors were able to make assessments regarding the fundamental assumption at the heart of the most commonly used closure prescription, the so-called M1 closure, currently in use in most multi-dimensional supernova studies deploying multidimensional neutrino transport in a moments approach we will discuss shortly. Nagakura et al. find that the neutrino radiation field is in fact not axisymmetric about the outward radial direction, reflected in non-negligible off-diagonal components of the Eddington tensor, specifically k_rθ. The authors emphasize how such components play a non-negligible role in the evolution of the neutrino fluxes, increasing the neutrino luminosities by ~10%. The neutrino heating rate, Eq. (1), is then increased commensurately.

Fig. 2: Snapshot of neutrino-driven convection at 25 ms after bounce in the two-dimensional core-collapse supernova model of Herant et al (1994), initiated from a 25 M_sun progenitor.
Experience has shown that corrections at this level in any or all of the quantities entering the neutrino heating rate are noteworthy and warrant continued exploration, perhaps for all models, but especially in light of marginal cases of explosion for some, perhaps many, progenitors.
Not unexpectedly, given the physical complexity and the computational cost, no simulations have been performed to date that deploy three-dimensional general relativistic Boltzmann neutrino transport in general relativistic core-collapse supernova models, i.e., models including general relativistic hydrodynamics and gravity. This is a long-term goal and, as made clear by what we have learned in the context of studies in spherical and axisymmetry, a needed goal. Nonetheless, three-dimensional core-collapse supernova modeling of increasing sophistication is ongoing. The first three-dimensional core-collapse supernova models were performed by Fryer and Warren (2004) using gray (neutrino angle- and energy-integrated) radiation hydrodynamics. The first spectral (neutrino-angle integrated) three-dimensional models were performed by Hanke et al (2013). The current stable of spectral three-dimensional models falls under two categories. Both implement spectral (but not multi-angle) neutrino transport in a one- or two-moment approach. In one category, the so-called "ray-by-ray" approximation is used. In the other, the neutrino transport is three dimensional.

Fig. 3: Schematic showing the key characteristics of the ray-by-ray neutrino transport approximation. Along each radial ray (e.g., along segments DB or HF), a complete solution to the spherically symmetric neutrino transport equations is obtained assuming spherical conditions given by the conditions along each ray. This approximation afforded the ability to implement sophisticated transport solvers that had been developed in the context of models of core-collapse supernovae assuming spherical symmetry, at the expense of ignoring net lateral transport that would occur in multiple spatial dimensions. In spherical symmetry, neutrinos can propagate along the segment AB, which is clearly not a purely radial segment. Therefore, there is lateral transport. However, in spherical symmetry, every neutrino propagating along AB is matched by a neutrino propagating along CB, and the net flux at point B is purely radial. The lateral fluxes cancel exactly. Focusing on neutrino heating at point B, the ray-by-ray approach assumes that the thermodynamic conditions across the proto-neutron star surface (i.e., the neutrinosphere) between points A and C are uniform and given by the thermodynamic conditions at point D. Given a temporary hot spot at point D on the surface, the neutrino heating at point B would be overestimated. Moreover, were point H significantly cooler, relatively speaking, at the same instant, heating at point F would be underestimated because the hot spot at point D would be ignored even though it is within the cone of neutrino trajectories contributing to the neutrino heating at F. Thus, the ray-by-ray approximation may lead to larger angular variations in the neutrino radiation field than would be present were three-dimensional transport used, particularly if the hot spots on the proto-neutron star surface persist.
(A clarifying remark: The simulations by Hanke et al. used a Boltzmann solver in the context of their ray-by-ray approach. As such, some angular dependence was kept. However, three-dimensional models require two angles to describe a neutrino's propagation direction, and in the ray-by-ray approach the angular dependence in one of the angles is approximate in the sense that it is computed assuming spherical symmetry.) The earliest three-dimensional models, e.g., those of Hanke et al., implemented ray-by-ray transport. In the ray-by-ray approach, the three-dimensional neutrino transport problem is broken up into N = N_θ × N_φ spherically symmetric problems, where N_θ,φ are the number of θ and φ zones used in the simulation. The ray-by-ray approximation follows lateral neutrino transport under the assumption of spherical symmetry, meaning there is lateral transport of individual neutrinos, but the net lateral flux is zero. (For example, neutrinos can propagate along the segment between A and B in Fig. 3, but an equal number of neutrinos must propagate along the path between C and B, such that the net flux at point B is purely radial.) Moreover, as illustrated by Fig. 3, neutrino heating at a point in the gain region may be over- or under-estimated. Consider the point B in the heating region. The backward cone emanating from point B subtends a portion of the neutrinosphere, between points A and C, that is the source of the neutrinos that heat the material at point B. The ray-by-ray approximation, which assumes spherical symmetry for each ray, assumes that the thermodynamic conditions across the neutrinosphere between points A and C are the same as those at point D. If point D is a hot spot, the ray-by-ray approximation will compute the heating at point B assuming the neutrinosphere between points A and C is hot.
For neutrino heating at point F, and assuming that point H is not a hot spot, the ray-by-ray approximation will assume that conditions at point H are mimicked across the portion of the neutrinosphere between points E and G, regardless of the fact that point D is hot and within that portion of the surface. Thus, the neutrino heating at point B will be overestimated, whereas the neutrino heating at point F will be underestimated. Whether or not the ray-by-ray approximation leads to significant over- or under-estimations of the neutrino heating over the course of the shock reheating epoch will of course depend on whether or not such variations in the thermodynamic conditions across the neutrinosphere persist, which requires a comparison taking into consideration the time dependence of such thermodynamic conditions. Comparisons between ray-by-ray and non-ray-by-ray approaches in the context of axisymmetric core-collapse supernova models found notable differences in, among other outcomes, the time to explosion (Skinner et al, 2016). However, more recent comparisons in the context of three-dimensional models found no significant differences between the two approaches (Glas et al, 2019). Of course, without three-dimensional transport implementations, it would be difficult to assess the efficacy of using the ray-by-ray approach, or other approximations. In the end, such approximations must be removed, if only just to check them. The ray-by-ray approach of the Oak Ridge group is based on one-moment closure through flux-limited diffusion (Bruenn et al, 2020). They follow the evolution for the lowest angular moment of the neutrino distribution: the number density. The Max Planck group's ray-by-ray implementation is based on two-moment closure (Rampp and Janka, 2000).
They solve an approximate Boltzmann equation for the purposes of computing the variable Eddington factor needed to close the system of equations describing the evolution of the first two moments of the neutrino distribution (in spherical symmetry, there is only one first moment, corresponding to the radial number flux, together with the zeroth moment, the neutrino number density).
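The decomposition at the heart of the ray-by-ray approach can be sketched in code. In the sketch below, `solve_ray` is a hypothetical stand-in of our own for a full spherically symmetric transport solve; only the structure, N_θ × N_φ independent 1D problems, each seeing nothing but its own ray's profile, is the point.

```python
import numpy as np

def solve_ray(profile):
    """Stand-in for a spherically symmetric (1D) transport solve along one
    radial ray. A real solver would return spectral moments per radial
    zone; here we just accumulate the profile to mimic a radial flux."""
    return np.cumsum(profile)

def ray_by_ray(profiles):
    """Ray-by-ray decomposition: the transport problem on an
    (n_theta, n_phi, n_r) grid is split into n_theta * n_phi independent
    1D solves. Each solve sees only its own ray's thermodynamic profile,
    so the returned fluxes are purely radial by construction: the net
    lateral flux is identically zero."""
    n_theta, n_phi, _ = profiles.shape
    fluxes = np.empty_like(profiles)
    for i in range(n_theta):
        for j in range(n_phi):
            fluxes[i, j] = solve_ray(profiles[i, j])
    return fluxes
```

The hot-spot pathology discussed above is visible in this structure: perturbing the profile along one ray (a "hot spot") changes that ray's solution alone, leaving every neighboring ray's heating untouched.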
Two- and three-dimensional core-collapse supernova models that attempt to include general relativity at some level of approximation, if not exactly, whether with Newtonian or general relativistic hydrodynamics, are all based on the solution of the neutrino moments equations describing the evolution of the lowest angular moments of the neutrino distribution function. For example, in terms of the neutrino distribution function, f, the number moments (spectral number density, spectral number flux) are defined as

\[
N(r,\theta,\phi,E,t) = \int_{-1}^{+1} d\mu \int_0^{2\pi} d\phi_p \, f , \qquad
F^i(r,\theta,\phi,E,t) = \int_{-1}^{+1} d\mu \int_0^{2\pi} d\phi_p \, n^i f ,
\]

where μ ≡ cos θ_p is the neutrino direction cosine defined by θ_p, one of the angles of propagation defined in terms of the outward pointing radial vector defining the neutrino's position at time t. In three dimensions, two angles are needed to uniquely define a neutrino propagation direction. The angle φ_p provides the second. n^i is the neutrino direction cosine in the i-th direction, whose components are given as functions of μ and φ_p. E is the neutrino energy. E, θ_p, φ_p can be viewed as spherical momentum-space coordinates. Above, N and F^i are the number density and number fluxes, respectively. In three dimensions, there is of course a number flux for each of the three spatial dimensions, delineated by the superscript i. Integration of the neutrino Boltzmann equation over the angles θ_p and φ_p, weighted by 1, n^i, n^i n^j, ..., defines an infinite set of evolution equations for the infinite number of angular moments of the distribution function, which is obviously impossible to solve. In a moments approach to neutrino transport, the infinite set of equations is rendered finite by truncation, after the equation for the zeroth moment in the case of one-moment closure (e.g., flux-limited diffusion) or after the equations for the first moments in the case of two-moment closure (e.g., M1 closure).
In the latter case, closure can be "prescribed" (e.g., M1 closure) or computed (e.g., through a variable Eddington tensor approach). We will discuss these approaches in greater detail later in our review. It is important to understand the essence of the approximations being made in moments approaches to neutrino transport in core-collapse supernova models. One does not integrate out all of the angular information contained within the neutrino distribution function. Some angular information remains. The higher the closure is made in the order of moment equations, the more angular information is kept. For example, two-moment closure keeps the fundamental angular dependencies. The ratio of the number flux in any of the three dimensions to the number density, at any spacetime point, is a measure of how forward peaked the neutrino angular distribution is in that dimension at that point. Thus, a moments approach retains much of the information of the neutrino radiation field contained within the neutrino distribution functions, while providing a sophisticated modeling path forward that is achievable on present leadership-class computing systems. Direct Boltzmann solutions for the neutrino radiation field will have to wait until sustained exascale computing platforms become available over the next decade.

Three-dimensional models that include an approximation to general relativistic gravity in the form of an "effective potential," Newtonian hydrodynamics, ray-by-ray one- or two-moment neutrino transport with some corrections for special relativity (O(v/c)) or general relativity (gravitational redshift of neutrino energies), and a state-of-the-art set of neutrino weak interactions have been performed by the Max Planck and Oak Ridge groups (Hanke et al, 2013; Lentz et al, 2015; Melson et al, 2015b,a; Summa et al, 2018).
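The number moments defined above, and diagnostics such as the flux factor, can be formed from any given distribution function by numerical quadrature over the propagation angles. A minimal sketch follows; the particular quadrature choices (Gauss-Legendre in μ, midpoint in φ_p) and the absence of any 1/4π normalization are our assumptions for illustration, not conventions fixed by the text.

```python
import numpy as np

def angular_moments(f, n_mu=32, n_phi=64):
    """Zeroth and first angular moments of a distribution f(mu, phi_p) at
    fixed (r, E, t): N = integral of f over solid angle, F^i = integral of
    n^i f, with direction cosines
    n = (sqrt(1-mu^2) cos(phi_p), sqrt(1-mu^2) sin(phi_p), mu)."""
    mu, w_mu = np.polynomial.legendre.leggauss(n_mu)       # nodes on [-1, 1]
    phi = (np.arange(n_phi) + 0.5) * 2.0 * np.pi / n_phi   # midpoint rule
    w_phi = 2.0 * np.pi / n_phi
    MU, PHI = np.meshgrid(mu, phi, indexing="ij")
    s = np.sqrt(1.0 - MU**2)
    n_vec = np.stack([s * np.cos(PHI), s * np.sin(PHI), MU])
    W = w_mu[:, None] * w_phi                              # quadrature weights
    f_vals = f(MU, PHI)
    N = np.sum(W * f_vals)
    F = np.array([np.sum(W * n_vec[i] * f_vals) for i in range(3)])
    return N, F

# Isotropic distribution: N integrates to 4*pi and all fluxes vanish,
# i.e., flux factor F/N = 0 (the diffusive limit).
N_iso, F_iso = angular_moments(lambda mu, phi: np.ones_like(mu))
```

For a forward-peaked distribution the ratio F^z/N approaches 1, which is exactly the "measure of how forward peaked the neutrino angular distribution is" referred to above.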
Three-dimensional models that include general relativistic hydrodynamics and gravity, and three-dimensional, general relativistic, O(1) or fully relativistic (special and general) two-moment neutrino transport with an extensive set of neutrino weak interactions have been performed by Roberts et al (2016) and Kuroda et al (2016), respectively. Three-dimensional models that couple Newtonian hydrodynamics and approximate general relativistic gravity, as above, to three-dimensional two-moment neutrino transport with corrections for special and general relativity, as above, and an extensive set of neutrino weak interactions were performed by O'Connor and Couch (2018); Vartanyan et al (2019); Burrows et al (2019).
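As an illustration of a prescribed two-moment closure, one widely used analytic choice, assumed here for definiteness, is the Levermore (1984) closure, which sets the Eddington factor χ (the ratio of the radial radiation pressure moment to the energy/number density) as a function of the flux factor f = |F|/N, interpolating between the diffusion and free-streaming limits.

```python
import math

def eddington_factor_levermore(flux_factor):
    """Levermore (1984) closure, common in M1 schemes:
        chi(f) = (3 + 4 f^2) / (5 + 2 sqrt(4 - 3 f^2)),
    which gives chi = 1/3 in the diffusion limit (f = 0) and chi = 1 in
    the free-streaming limit (f = 1)."""
    f = flux_factor
    if not 0.0 <= f <= 1.0:
        raise ValueError("flux factor must lie in [0, 1]")
    return (3.0 + 4.0 * f * f) / (5.0 + 2.0 * math.sqrt(4.0 - 3.0 * f * f))
```

The closure's whole job is visible in the two limits: deep inside the proto-neutron star, where the radiation field is nearly isotropic, χ → 1/3; far outside the neutrinospheres, where it is forward peaked, χ → 1.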
It is clear the core-collapse supernova modeling state of the art in three dimensions is evolving, with some models classifiable as more complete macrophysically, i.e., implementing three-dimensional, general relativistic gravity, hydrodynamics, and neutrino transport, and some models classifiable as more complete microphysically, i.e., including state-of-the-art microphysics.
Note that the computational cost associated with the solution of the neutrino moment or Boltzmann transport equations is dominated by the computations associated with the "collision term," i.e., with the neutrino interactions with the stellar core fluid. It is also clear, and of course at this point should come as no surprise, that the above history of the development of core-collapse supernova theory over the last fifty-plus years centers on the development of neutrino transport theory and its implementation in this context. Neutrino mass, albeit small in relation to the neutrino energies attained in core-collapse supernovae, leads to neutrino flavor transformations. There is growing, though still inconclusive, evidence that such transformations may play a role in neutrino shock reheating (e.g., see Tamborra et al (2017); Abbar et al (2019); Delfan Azari et al (2019)). The existence of so-called "fast" flavor transformations, which can exist even in the baryon-laden environment below the supernova shock wave, was first brought to the attention of the supernova modeling community by Sawyer (2005). Prior to this work, it was assumed that quantum mechanical coherence among the neutrinos in the region beneath the shock would be destroyed by neutrino-matter collisions, thereby rendering such effects unimportant to neutrino shock reheating. However, fast modes operate on scales much shorter than a neutrino mean free path, are in fact not wiped out by collisions, and beg to be considered. As in the classical case, the story boils down to capturing the neutrino angular distributions for all three flavors of neutrinos, as a function of space and time during the evolution of the supernova.
The neutrinospheres for the three neutrino flavors are distinguished first and foremost by their interactions with the stellar fluid, with electron neutrinos and antineutrinos interacting through both charged and neutral currents and the muon and tau neutrinos interacting only through neutral currents. Moreover, the preponderance of neutrons over protons reduces the opacity of the stellar fluid to electron antineutrinos, and a hierarchy sets in, with the muon and tau neutrinospheres at the highest densities, followed by the neutrinosphere associated with the electron antineutrinos, followed in turn by the neutrinosphere associated with the electron neutrinos, at the lowest densities, relatively speaking. Given the layering of the neutrinospheres, at a given time during neutrino shock revival, the neutrino angular distributions at a given spatial location in the cavity between the neutrinospheres and the shock will differ by flavor. It is the differences between the angular distributions of each flavor that set the stage for fast flavor transformation.
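The standard diagnostic in the fast-flavor literature for whether such differences matter is an angular crossing in the electron lepton number (ELN) distribution, i.e., a sign change in the difference between the electron neutrino and antineutrino angular distributions. A minimal sketch of that diagnostic follows; the particular angular distributions below are hypothetical, chosen only so that the antineutrino distribution is more forward peaked than the neutrino one.

```python
import numpy as np

def has_eln_crossing(f_nue, f_nuebar):
    """Detect an electron-lepton-number (ELN) angular crossing: a sign
    change in G(mu) = f_nue(mu) - f_nuebar(mu) over the sampled angles,
    the standard indicator of susceptibility to fast flavor conversion."""
    G = np.asarray(f_nue) - np.asarray(f_nuebar)
    return bool(np.any(G > 0.0) and np.any(G < 0.0))

mu = np.linspace(-1.0, 1.0, 200)          # direction cosines
# Hypothetical angular distributions at one spatial point:
f_nue = 1.0 + 0.3 * mu                    # electron neutrinos
f_nuebar = 0.9 + 0.6 * mu                 # antineutrinos, more forward peaked
```

Here G(μ) = 0.1 − 0.3 μ changes sign near μ = 1/3, so this configuration would be flagged as crossed. Note that the diagnostic requires the full angular dependence of both distributions, which is exactly the information a low-order moments scheme discards.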
Thus, the need, in the classical case, for a Boltzmann description of the neutrino radiation field is multifold:

(1) Moments approaches are approximations, whose efficacy cannot be known a priori and must be checked against the exact (classical) result. Examples of this will be discussed here.

(2) The development of closure prescriptions for moment models is rife with difficulty, partially because of nonlinearities introduced by the closure procedure. For example, a numerical method for two-moment, multifrequency, general relativistic neutrino transport that respects Fermi-Dirac statistics does not yet exist and will be difficult to develop. Furthermore, the development of nonlinear moment models beyond the two-moment approximation, to capture more kinetic effects, will be even more challenging.

(3) Boltzmann and low-order moments approaches can be used together to accelerate convergence of the solution to the Boltzmann equation, potentially becoming competitive, in terms of speed and memory use, with nonlinear, high-order moments approaches.

(4) The exploration of the impact of fast flavor transformations on the core-collapse supernova mechanism will require precise knowledge of the neutrino angular distributions for all three flavors across the spacetime of a supernova model. Such information can be obtained only through a solution of the classical Boltzmann kinetic equations for each neutrino flavor in association with simulation of the coherent quantum effects, i.e., through a solution of the multi-angle, multi-frequency neutrino quantum kinetics equations for all neutrino flavors.
While the justification for deploying Boltzmann kinetics in the classical case can be made, it is through a combination of Boltzmann and moments approaches that progress will be made in both the near and the long term. We are attempting to address myriad science questions, and past experience already tells us that the answers to these questions will vary with characteristics of the massive progenitors in which core collapse supernovae occur. How do massive stars explode? Which explode and which do not? Among those that explode, what elements do they produce? How do they contribute to galactic chemical evolution? And the list goes on. At present, there is no foreseeable time at which all of these questions will be addressable with Boltzmann methods, let alone quantum kinetics. An uncountable number of models will ultimately be required to understand the death of the diverse population of stars we are presented with in nature, as well as the death of any one of them. Our understanding of stellar death will not come from a single "hero" simulation, but from many simulations. Thus, it is in the application of both Boltzmann (classically) and moments approaches and, through this, the development of ever more realistic moments approaches that we will be able to advance our knowledge of one of the most important phenomena in the Universe. This is already clear from the modeling history to date. We have come a long way since Colgate and White's seminal work through precisely the hybrid approach discussed here. Hence, this review will focus on both approaches, as well as point to potentially efficacious hybrid approaches that could be developed and deployed in the future.

Design specifications
There have been many lessons learned during the 54 years that have passed since the first numerical simulations of core-collapse supernovae were performed by Colgate and White. These lessons can now be used to construct a list of design specifications for models of neutrino transport that will be used in future core-collapse supernova models:

1. Ultimately, definitive simulations of core-collapse supernovae in the classical limit will require a Boltzmann kinetic description of neutrino transport for all three flavors of neutrinos and their antineutrino partners.

2. In the event sufficient evidence points to the need to consider in greater detail the impact of neutrino quantum kinetics on the supernova explosion mechanism, a quantum kinetics description of neutrino transport would be required. A classical Boltzmann description would be the natural, and required, starting point for the development of such a quantum kinetics treatment.

3. The simulations must be general relativistic. They must include special and general relativistic effects such as Doppler and red/blue shifts of neutrino energy, respectively, and angular aberration in both cases, due to fluid motion and spacetime curvature.

4. These simulations must include all of the neutrino weak interactions that have to date been demonstrated to be important, and the description of the interactions must be state of the art.

5. The quality of core-collapse supernova simulations will ultimately be gauged by, among other things, the degree to which lepton number and energy are conserved. More specifically, the discretizations of the integro-partial differential Boltzmann equations must conserve lepton number and energy simultaneously.

6. The discretizations of the Boltzmann equations, in particular the collision terms, must accommodate both small- and large-energy scattering.

7. The numerical methods must also accommodate realistic equations of state for the nuclear, leptonic, and photonic components. In cases where the neutrino opacities depend on the nuclear force model, the neutrino opacities and the equation of state must be consistent.

8. In the interim, when moments approaches to neutrino transport must be used until Boltzmann approaches become feasible, all of the above design specifications still hold.

9. For moments models, the closures used must respect the Fermi-Dirac statistics of neutrinos, reflecting the fact that the neutrino distribution functions are bounded.
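As a small illustration of specifications 5 and 9, the following sketch (a toy model, not taken from any production code) shows one way a discretized emission/absorption collision term can respect the Fermi-Dirac bounds: a backward-Euler update yields a convex combination of the current occupancy and the equilibrium occupancy, so 0 ≤ f ≤ 1 is preserved for any time step. All grids, opacities, and parameter values are invented for illustration.

```python
import numpy as np

# Backward-Euler update of df/dt = chi*(f0 - f), with f0 the Fermi-Dirac
# equilibrium occupancy. The implicit solution,
#   f_new = (f + dt*chi*f0) / (1 + dt*chi),
# is a convex combination of f and f0, so 0 <= f_new <= 1 holds for ANY dt
# provided 0 <= f <= 1 and 0 <= f0 <= 1.
def implicit_ea_update(f, f0, chi, dt):
    return (f + dt * chi * f0) / (1.0 + dt * chi)

E   = np.linspace(1.0, 300.0, 64)                # toy neutrino energy grid [MeV]
f0  = 1.0 / (np.exp((E - 10.0) / 5.0) + 1.0)     # equilibrium Fermi-Dirac occupancy
f   = np.clip(f0 + 0.3 * np.sin(E), 0.0, 1.0)    # some admissible initial occupancy
chi = 1.0e2 * (E / 10.0) ** 2                    # toy opacity with ~E^2 dependence

f_new = implicit_ea_update(f, f0, chi, dt=1.0e6)  # huge step: still bounded
print(float(f_new.min()), float(f_new.max()))
```

For very large time steps the update relaxes the occupancy to the equilibrium value, as it should, while never leaving the admissible interval.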

The equations of neutrino radiation hydrodynamics
In core-collapse supernova models, the stellar fluid is modeled as a perfect fluid, augmented by an equation for the electron density in order to accommodate a nuclear equation of state. (For brevity of presentation, we will not include effects due to electromagnetic fields.) The relevant equations are then

∇_ν J^ν = 0 , (4)

∇_ν T^µν = −G^µ , (5)

∇_ν J_e^ν = −L , (6)

where the baryon rest-mass density current is J^ν = ρ u^ν, with ρ = m_B n_B the baryon rest-mass density, m_B the average baryon (rest) mass, n_B the baryon number density, and u^ν the fluid four-velocity. The fluid energy-momentum tensor is T^µν = ρ h u^µ u^ν + p g^µν, where h = 1 + (e + p)/ρ is the specific enthalpy, e the internal energy density, and p the pressure. The electron density current is given by J_e^ν = n_e u^ν, where the electron density (technically, the electron minus positron density) is n_e = ρ Y_e/m_B and Y_e is the electron fraction. To close the system given by Eqs. (4)-(6), the pressure p is given by an equation of state (EOS); e.g., p = p(ρ, e, Y_e).
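As a minimal illustration of how the EOS closes the system, consider the following sketch, in which a Γ-law pressure stands in for a realistic tabulated nuclear EOS (the numbers and the Y_e-independent pressure are illustrative only):

```python
# Toy stand-in for the EOS closure p = p(rho, e, Ye). Realistic supernova EOSs
# are tabulated nuclear EOSs; a gamma-law is used here only to show how the
# closure completes the fluid system.
def pressure(rho, e, ye, gamma=4.0 / 3.0):
    # e is the internal energy density; relativistic-gas-like gamma-law,
    # with no Ye dependence in this toy version
    return (gamma - 1.0) * e

def specific_enthalpy(rho, e, p):
    # h = 1 + (e + p)/rho, as defined above (units with c = 1)
    return 1.0 + (e + p) / rho

rho, e, ye = 1.0e-3, 4.0e-4, 0.3   # arbitrary code units
p = pressure(rho, e, ye)
h = specific_enthalpy(rho, e, p)
print(p, h)
```

In a production code the `pressure` call would be a table lookup in (ρ, e, Y_e) or (ρ, T, Y_e), with the opacities evaluated consistently from the same nuclear model, per design specification 7 above.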
The source terms on the right-hand sides of Eqs. (5) and (6), −G^µ and −L, describe four-momentum and lepton exchange between the fluid and neutrinos. These terms depend on the neutrino distribution functions (or moments of the neutrino distribution functions), as already noted in Sect. 2, as well as on thermodynamic properties of the stellar fluid. This nonlinear coupling is the key to the supernova mechanism, and associated observables, and is the topic of the present review. Figure 4 shows the magnitude of the neutrino transport mean free paths for the electron neutrino, electron antineutrino, and heavy-flavor neutrinos (muon and tau neutrinos and their antineutrinos). The mean free paths are given at a time of 100 ms after bounce, during the critical shock reheating epoch, in the context of a CHIMERA supernova simulation of a 12 M⊙ star. They are given as a function of radius, for select neutrino energies. Also shown are the neutrinospheres for the select energies, as well as the radius of the stalled shock wave. For all neutrino flavors and energies, the mean free paths exceed the respective neutrinosphere radii, as well as the shock radius, at some radius as we move outward. That is, the neutrino mean free paths exceed the scale of the proto-neutron star, as well as the shock radius scale, before we reach the shock radius. Under these circumstances, the neutrinos are not well described as components of the proto-neutron star fluid everywhere within it, and, therefore, they are certainly not well described as a fluid in the critical heating layer between the proto-neutron star and the shock. A kinetic description of the neutrinos is required.
Such a description, based on the Boltzmann kinetic equations, would supply the neutrino distribution functions, f (r, θ, φ, µ, φ_p, E, t), for each species of neutrino and antineutrino, where µ is the direction cosine taken with respect to the outward radial direction, φ_p is the corresponding second angle describing the neutrino propagation direction in these momentum-space spherical polar coordinates, and E = |p| is the neutrino energy. Deep in the proto-neutron star, neutrinos and the proto-neutron star fluid are in weak-interaction equilibrium. The distribution functions are then given by their equilibrium counterparts, and the neutrinos are well described as an additional component of the fluid. Of course, the neutrinos fall out of weak equilibrium as the neutrinospheres are approached, and beyond them stream freely. Thus a fluid description of them would be limited to only a small portion of the simulation domain and would be of equally limited utility. The nature of the weak interactions demands the greater computational challenge and the higher computational cost of a kinetic description of neutrino transport in the proto-neutron star and above it in the cavity between it and the shock.
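The equilibrium distribution attained deep in the proto-neutron star is the Fermi-Dirac occupancy; a small sketch, with purely illustrative values for the temperature and neutrino chemical potential, is:

```python
import numpy as np

# In weak-interaction equilibrium, the neutrino occupancy relaxes to the
# Fermi-Dirac form f_eq = 1/(exp((E - mu_nu)/kT) + 1).
def f_equilibrium(E, kT, mu_nu):
    return 1.0 / (np.exp((E - mu_nu) / kT) + 1.0)

E = np.linspace(0.1, 300.0, 1000)            # neutrino energy [MeV]
f = f_equilibrium(E, kT=10.0, mu_nu=20.0)    # kT, mu_nu in MeV (illustrative)

# The occupancy respects Pauli blocking (0 <= f <= 1), is near 1 well below
# the chemical potential, and falls off exponentially at high energy.
print(float(f[0]), float(f[-1]))
```

The boundedness of this equilibrium occupancy is precisely the property that moment closures are asked to respect (design specification 9).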

The choice of phase-space coordinates
The expansion from the four dimensions of spacetime to the seven dimensions of relativistic phase space brings with it additional choices. Now, in addition to making what will hopefully be optimal choices for spacetime coordinates, we will also need to consider optimal choices for momentum-space coordinates. And this is not without some give and take. Simplification in some respects afforded by one choice is always accompanied by complexification in other respects.
There is, however, an overarching consideration that guides the typical choice made by most modelers: Neutrino-matter interactions are most naturally and, consequently, most easily described in the frame of reference of the inertial observer instantaneously comoving with the fluid. (The fluid is accelerating, but the instantaneously comoving observer is not.) In this frame, the matter is instantaneously at rest, and the neutrino four-momentum components that enter the expressions for the neutrino weak interaction rates are the components measured by the comoving observer.

Fig. 4: Plots of the neutrino and antineutrino mean free paths at 100 ms after bounce, during the neutrino shock reheating epoch, for all three flavors of neutrinos at select energies. The upper left and right panels show plots of the electron-neutrino and antineutrino mean free paths, respectively. The lower left and right panels show plots of the heavy-flavor (µ and τ) neutrino and antineutrino mean free paths, respectively. The data used to generate the plots are taken from a supernova model beginning with a 12 M⊙ progenitor and evolved with the CHIMERA supernova code. To set the correct physical scale against which the mean free paths can be compared, we indicate the locations of the various neutrinospheres and the shock wave. All four plots demonstrate that, as we move out in radius to lower densities, all of the mean free paths plotted vary from being much less than to much greater than the neutrinosphere radii, i.e., the characteristic spatial scale of the proto-neutron star. Consequently, the neutrinos will not behave in a fluid-like manner everywhere, and a kinetic rather than a fluid description of them is necessary.
However, while the description of neutrino-matter interactions is simplified in this picture, the choice to use four-momenta measured by instantaneously comoving observers introduces additional terms on the left-hand side of the Boltzmann equation that correspond to relativistic angular aberration and Doppler shift, owing to the fact that two spatially adjacent, instantaneously comoving observers do not necessarily have the same velocity; in general, they will measure different neutrino propagation angles and energies. In the context of Newtonian gravity, this would certainly add considerable complexity to the left-hand side of the Boltzmann equation. But in the general relativistic case, such momentum-space advection terms, involving derivatives with respect to the neutrino angles of propagation (or their direction cosines) and the neutrino energy, are already present, in light of general relativistic angular aberration and frequency shift in curved spacetime. The character of the physical effects, special versus general relativistic, is different and, as such, presents different numerical challenges. Nonetheless, weighing the relative additional complexity of adding terms corresponding to special relativistic effects (e.g., relativistic Doppler shift and angular aberration) to the left-hand side of the Boltzmann equation against the significant simplification of the collision term when comoving-frame neutrino four-momenta are used, most modelers have chosen comoving-frame neutrino four-momenta as phase-space coordinates. With regard, then, to the difficulties associated with the terms/effects added to the advection of neutrinos in phase space, as we will see in this review, very different numerical approaches have been taken to describe them.
In what follows, we will adopt the following notation: We will designate the neutrino four-momentum components measured by an inertial observer instantaneously comoving with the fluid as p^µ̂ (hatted indices). Neutrino four-momentum components measured by an Eulerian observer will be designated as p^µ̄ (barred indices). Finally, the neutrino four-momentum components in the coordinate basis will be designated as p^µ (plain indices).

The general relativistic Boltzmann equation
In light of the need to conserve simultaneously both energy and lepton number, we wish to begin with a version of the Boltzmann equation that is manifestly conservative across all phase-space dimensions. As we will show, this is not true of the standard formulation of the general relativistic Boltzmann equation. In this section, we outline the derivation of both as presented by Cardall and Mezzacappa (2003) to illustrate the differences and, of course, to arrive at a form of the Boltzmann equation that is better suited to numerical application. Before we begin, we emphasize the following: While spacetime is endowed with a natural metric, g µν , which is determined by Einstein's equations given the stress-energy content of spacetime, phase space is not. Consequently, the development of general relativistic neutrino radiation hydrodynamics requires the full machinery of the metric-free language of the differential and integral calculus of forms. That is, the derivation we present below is not a matter of taste. Treatments of non-relativistic kinetic theory typically assume that phase space is endowed with a Euclidean metric. This can serve as a bookkeeping device at best, and it is important to interpret the theory accordingly.
The one-particle phase space for particles of arbitrary mass is an eight-dimensional space, which we label M, of spacetime position x and four-momentum p. If we specify a mass, m, for the particle, which satisfies

p_µ p^µ = −m² ,

we confine ourselves to a hypersurface of M, which we write as M_m, which is the phase space for particles of mass m. The flow in M_m defined by the particle trajectories (x, p) is generated by the Liouville operator

L_m = p^µ̂ L^µ_µ̂ ∂/∂x^µ − Γ^î_ν̂µ̂ p^ν̂ p^µ̂ ∂/∂p^î . (11)

Here L^µ̂_µ is the composite transformation that takes us, first, from the coordinate basis to the orthonormal frame of the Eulerian observer at rest with respect to the "laboratory" and, second, via a Lorentz transformation, from the Eulerian frame to the frame of reference comoving with the stellar core fluid; L^µ_µ̂ is the inverse transformation. Γ^µ̂_ν̂ρ̂ are the Ricci Rotation Coefficients, which are given in terms of the transformation L^µ_µ̂, its derivatives, and the Levi-Civita connection coefficients Γ^µ_νρ corresponding to the spacetime metric g_µν.
For a given type of particle of mass m, the distribution function, f, gives the density of such particles in phase space. An equation for the distribution function, the Boltzmann equation, is derived by considering a closed six-dimensional hypersurface ∂D bounding a region D in M_m. The net number of particles flowing through the boundary of D is given by the generalized Stokes' Theorem,

N[∂D] = ∫_∂D f ω = ∫_D d( f ω) , (14)

where ω is the infinitesimal surface element normal to the flow across ∂D, and Ω is an infinitesimal volume element in M_m. The product rule gives

d( f ω) = d f ∧ ω + f dω = d f ∧ ω ,

where we have used the fact that dω = 0 (an expression of the general relativistic Liouville's Theorem, which tells us that the phase-space flow is incompressible). But f, L_m, and Ω obey the identity

d f ∧ ω = L_m( f ) Ω .

Then

d( f ω) = L_m( f ) Ω . (19)

Finally, the number of particles crossing the boundary ∂D of D in M_m is given by the change in the number of particles in D due to emission, absorption, and scattering.

Anthony Mezzacappa et al.

Defining the "collision term," C[ f ], as the spacetime density of such events, we have

N[∂D] = ∫_D C[ f ] Ω ,

and, therefore,

∫_D L_m( f ) Ω = ∫_D C[ f ] Ω .

Substituting for L_m using Eq. (11), we arrive at the Boltzmann equation in "standard" form:

p^µ̂ L^µ_µ̂ ∂f/∂x^µ − Γ^î_ν̂µ̂ p^ν̂ p^µ̂ ∂f/∂p^î = C[ f ] .

Note that to obtain the Boltzmann equation, we had to consider integration on our phase-space manifold M_m, on which there is no natural metric. This necessitates the use of the language of differential forms. If we integrate over momentum space, we obtain the balance equation for particle number,

∇_µ N^µ = ∫ C[ f ] π_m , (22)

where

N^µ = ∫ f p^µ̂ L^µ_µ̂ π_m

is the particle 4-current density and

π_m = E dE dµ dφ_p

is the invariant momentum-space 3-volume expressed in terms of the spherical momentum-space coordinates u^î = (E ≡ ||p||, µ ≡ cos θ_p, φ_p). But in light of the fact that the Boltzmann equation is not expressed in manifestly conservative form, it is not obvious how we arrive at Eq. (22) by integrating over momentum space. We desire to re-express the Boltzmann equation in terms of spacetime and momentum-space divergences so that it is manifestly conservative with respect to an integration over a spacetime region, a momentum-space region, or both, i.e., a phase-space region. Of course, the generalized Stokes' Theorem, Eq. (14), is an expression of manifest conservation, equating the change in a quantity within a volume of phase space to a surface term involving its flux on the volume's boundary. The key insight by Cardall and Mezzacappa (2003) was to recognize that the total exterior derivative d( f ω) in Eq. (14) can instead be expressed in terms of spacetime and momentum-space divergences, Eqs. (25) and (26). Substituting Eq. (25) in Eq. (14) and using Eq. (19), we arrive at Eq. (27), the manifestly conservative formulation of the Boltzmann equation. It is now obvious that, upon integration over momentum space, for example, the momentum derivative terms on the left-hand side of the Boltzmann equation in Eq. (27) will give rise only to surface terms.
The counterpart equation for 4-momentum conservation, Eq. (28), can be derived in the same way (Cardall and Mezzacappa, 2003), with the distribution function weighted by an additional factor of the particle four-momentum, so that the specific particle stress-energy tensor appears in place of the particle 4-current density.
As an illustrative example, we specialize Eq. (27) to the case of spherical symmetry, Lagrangian coordinates, and O(v/c) transport, as in Mezzacappa and Bruenn (1993c,a,b). As shown by Cardall and Mezzacappa (2003), Eq. (27) then reduces to a form in agreement with the conservative formulation of the Boltzmann equation used in Mezzacappa and Bruenn (1993c,a,b). In spherical symmetry and to O(v/c), one can arrive at a manifestly conservative form of the Boltzmann equation through trial and error. However, in three dimensions and with full general relativity, such trial-and-error approaches are doomed to failure. A manifestly conservative starting point becomes paramount.

The 3+1 formulation of general relativity
The fundamental building blocks of the "3+1" formulation of general relativity are the spacelike hypersurfaces corresponding to surfaces of constant τ, where τ is some scalar function of the spacetime coordinates x^µ: τ = τ(x^0, x^1, x^2, x^3). It is natural to choose τ to be x^0 = t. The spacelike hypersurfaces, Σ_t, are threaded by a timelike congruence of constant-spatial-coordinate curves. The points of constant x^i(t) between two hypersurfaces separated by dt are connected by the four-vector t. At each point of the hypersurface Σ_t, there is a unit timelike normal four-vector n satisfying n_µ n^µ = −1; n corresponds to the four-velocity of the observer at rest with respect to the hypersurface. This is the generalization of the definition of the Eulerian observer familiar from non-relativistic formulations. The four-vector β, known as the "shift" vector, describes how the spatial coordinates move within each hypersurface. The proper time between two hypersurfaces Σ_t and Σ_{t+dt} is given by α dt; α is known as the "lapse" function. Given such a foliation of spacetime and such a coordinatization, the squared spacetime line element becomes

ds² = −α² dt² + γ_ij (dx^i + β^i dt)(dx^j + β^j dt) , (31)

where γ_ij is the metric on the hypersurface Σ_t. From Eq. (31), the spacetime metric can be read off as

g_00 = −α² + β_k β^k , g_0i = g_i0 = β_i , g_ij = γ_ij ,

whose determinant g can be computed directly to find √−g = α√γ, where γ is the determinant of the spatial metric.
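The determinant identity √−g = α√γ quoted above can be checked numerically by assembling the full spacetime metric from α, β^i, and γ_ij and comparing determinants (the values below are random and purely illustrative):

```python
import numpy as np

# Numerical check of sqrt(-g) = alpha*sqrt(gamma) for the 3+1 metric:
#   g_00 = -alpha^2 + beta_k beta^k,  g_0i = beta_i,  g_ij = gamma_ij.
rng = np.random.default_rng(42)

alpha = 1.3                              # lapse (any positive value)
A = rng.normal(size=(3, 3))
gamma = A @ A.T + 3.0 * np.eye(3)        # random SPD spatial metric
beta_up = rng.normal(size=3)             # shift with raised index, beta^i
beta_dn = gamma @ beta_up                # beta_i = gamma_ij beta^j

g = np.empty((4, 4))
g[0, 0] = -alpha**2 + beta_dn @ beta_up
g[0, 1:] = g[1:, 0] = beta_dn
g[1:, 1:] = gamma

lhs = np.sqrt(-np.linalg.det(g))
rhs = alpha * np.sqrt(np.linalg.det(gamma))
print(np.isclose(lhs, rhs))
```

The identity follows from the block structure of the metric: det g = det γ · (g_00 − β_i γ^{ij} β_j) = −α² det γ.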
In addition to the intrinsic geometry (specifically, the intrinsic curvature) of each spacelike hypersurface, which is determined by the metric γ_ij, we describe how such a hypersurface is embedded in the four-dimensional spacetime by its extrinsic curvature, K_ij, which is related to the three-metric by

K_ij = −(1/2α) ( ∂_t γ_ij − D_i β_j − D_j β_i ) . (33)

Here D_i corresponds to the covariant derivative on Σ_t corresponding to the Levi-Civita connection associated with γ_ij. We can regard the coordinates of this formulation as the metric components, γ_ij, and the components of the extrinsic curvature, K_ij, as the velocities. The dynamics is supplied by the Einstein equations, which provide evolution equations, Eq. (34), for the six independent components of K_ij, in which K, the trace of the extrinsic curvature tensor, and ⁽³⁾R_ij, the Ricci curvature tensor for the spacelike hypersurface, appear. The source terms in Eq. (34) are given in terms of the stress-energy tensor, T^αβ, through its contractions with n^µ and γ^µ_ν, which provide timelike and spacelike projections, respectively. While not drawn here, there is a corresponding spacelike hypersurface to which the fluid four-velocity is the unit timelike normal, which defines the timelike basis element of the orthonormal frame of reference of the inertial observer instantaneously comoving with the fluid and at rest with respect to the hypersurface. This is our generalized Lagrangian observer in this formalism. The projection into the slice defined by the normal u^µ is given by

h^µν = g^µν + u^µ u^ν . (42)

Here, W = −n_µ u^µ is the Lorentz factor and v^µ = (γ^µ_ν u^ν)/W the fluid three-velocity.

3+1 general relativistic hydrodynamics
The 3+1 slicing of spacetime allows us to formulate the radiation-hydrodynamics equations in a form suitable for numerical solution. Here we briefly summarize the 3+1 form of the hydrodynamics equations given by Eqs. (4)-(6) (see, e.g., Anile 1989; Rezzolla and Zanotti 2013 for details). The mass conservation equation (cf. Eq. (4)) becomes

∂_t (√γ D) + ∂_i [ √γ D (α v^i − β^i) ] = 0 ,

where D = Wρ, while the electron number conservation equation (cf. Eq. (6)) becomes

∂_t (√γ D Y_e) + ∂_i [ √γ D Y_e (α v^i − β^i) ] = −α √γ L .

Conservative forms of the energy and momentum equations are derived by decomposing Eq. (5) into components relative to the spatial hypersurface. The energy equation becomes an evolution equation for τ_fluid = E − D, with flux α (S^i − D v^i) − β^i τ_fluid, where E = ρhW² − p, S^µ = ρhW²v^µ, and S^µν = ρhW²v^µv^ν + p γ^µν, while the momentum equation is an evolution equation for S_j, with flux α S^i_j − β^i S_j. The right-hand sides of these equations contain geometric source terms involving the lapse, the shift, and the extrinsic curvature, together with the source terms modeling lepton and four-momentum exchange due to neutrino-matter interactions (−L, −n_µ G^µ, and γ_jµ G^µ, respectively), which will be discussed in detail in Sect. 5.
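A minimal sketch of how such a conservative form translates into a finite-volume update is given below, for the mass conservation law in one dimension with static, uniform metric quantities, periodic boundaries, and a first-order upwind flux. All values are illustrative, and the flux form √γ D(αv − β) is an assumption of this sketch; the point is only that a conservative discretization changes the total of the conserved variable solely through boundary fluxes.

```python
import numpy as np

# Finite-volume update of d/dt(sqrt(gamma) D) + d/dx(sqrt(gamma) D (alpha v - beta)) = 0
nx, dx, dt = 200, 1.0 / 200, 1.0e-3
alpha, beta, sqrt_gamma = 1.0, 0.1, 1.2      # illustrative, static, uniform
v = 0.3                                      # fluid velocity (units of c)

x = (np.arange(nx) + 0.5) * dx
U = sqrt_gamma * (1.0 + 0.5 * np.exp(-((x - 0.5) / 0.1) ** 2))  # U = sqrt(gamma)*D

speed = alpha * v - beta                     # transport speed (> 0 here, so upwind = left)
total_before = U.sum() * dx
for _ in range(100):
    F = speed * U                            # upwind flux
    U = U - (dt / dx) * (F - np.roll(F, 1))  # conservative update, periodic BCs
total_after = U.sum() * dx
print(abs(total_after - total_before) < 1e-9)
```

With periodic boundaries the fluxes telescope, so the total of √γ D is conserved to machine precision; with physical boundaries the change equals the net boundary flux, which is the discrete analogue of the conservation statement above.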

The 3+1 general relativistic Boltzmann equation
The general relativistic Boltzmann equation, in conservative form and in the spacetime coordinates associated with the 3+1 decomposition of spacetime, was derived by Cardall et al (2013a). Essential to the derivation is the recognition that the composite transformation L^µ_µ̂ can be viewed as the coordinate-basis components (µ) of the elements of the tetrad of four-vectors (µ̂) corresponding to the frame carried by the observer instantaneously comoving with the fluid. The Eulerian decomposition of L^µ_µ̂ into timelike and spacelike components is

L^µ_µ̂ = L_µ̂ n^µ + l^µ_µ̂ ,

where L_µ̂ is the coefficient of the timelike component of the tetrad element (four-vector) designated by µ̂, and l^µ_µ̂ is the spacelike component of this tetrad element. Explicit expressions for L_µ̂ and l^µ_µ̂ can be found in Cardall et al (2013a). The Ricci Rotation Coefficients can be expressed in terms of this decomposition, Eq. (47). Using the decomposition (47), we are left with three terms to evaluate; the results can be found in Cardall et al (2013a). With the decomposition of the momentum-space transformation matrix P^ĩ_î into elements parallel and perpendicular to the three-momentum p^î, the 3+1 general relativistic Boltzmann equation can now be written in a form whose left-hand side comprises a spacetime divergence and a momentum-space divergence, where the spacetime divergence involves the time derivative of the conserved number density, D_N, and the spatial divergence of the number flux, (F_N)^i.
The momentum-space divergence, M_N, can be expressed as a sum of terms, one of which describes momentum shifts (i.e., redshift and angular aberration in momentum-space spherical coordinates) due to gravity as embodied in the spacetime geometry, while the others are 'observer corrections' due to the acceleration of the fluid and, consequently, the change of comoving observers with different velocities (and are partially entangled with the geometry as well).

Multi-frequency moment kinetics and the closure problem

Because of the prohibitively high computational cost associated with solving the Boltzmann equation with sufficient phase-space resolution, most (all in three spatial dimensions) supernova models to date employ a moments approach to neutrino transport. In the moments approach, one solves for a finite number of moments of the distribution function (instead of the distribution function itself), and the hierarchy of moment equations is closed by a closure procedure, relating higher-order moments to the evolved lower-order moments. The basic idea of the moments approach can be illustrated by considering the Boltzmann equation in one spatial dimension,

∂f/∂t + µ ∂f/∂x = χ ( f_0 − f ) + σ ( (1/2) ∫_{−1}^{1} f dµ′ − f ) , (63)

where, for simplicity, we let the distribution function depend on spatial position, x, momentum-space angle cosine, µ, and time, t. χ is the absorption opacity, f_0 is the isotropic equilibrium distribution, and σ is the scattering opacity due to isotropic and isoenergetic scattering. A finite number (N + 1) of angular moments of the distribution function can be formed as weighted integrals over angle:

m^(k)(x, t) = (1/2) ∫_{−1}^{1} f(x, µ, t) µ^k dµ , k = 0, …, N .

Thus, in a truncated moments approach the distribution function is approximated by the moments vector m_N = (m^(0), …, m^(N))^T, so that

f(x, µ, t) ≈ Σ_{k=0}^{N} c_(k) m^(k)(x, t) µ^k , (66)

where c_(k) are normalization constants. Similarly, by taking moments of the Boltzmann equation in Eq. (63), the hierarchy of moment equations is given by

∂_t m^(k) + ∂_x m^(k+1) = χ ( e_k f_0 − m^(k) ) + σ ( e_k m^(0) − m^(k) ) , k = 0, …, N , (68)

where on the right-hand sides we have defined e_k ≡ (1/2) ∫_{−1}^{1} µ^k dµ, which equals 1/(k + 1) for even k and vanishes for odd k. When considering the expansion in Eq.
(66), the moments approach is simply an approximation to the angular dependence of the distribution function in terms of the monomial basis {µ^k}, k = 0, …, N. The power of the moments approach becomes evident when collisions are moderate to strong. In this case, collisions tend to drive the zeroth moment m^(0) towards the isotropic distribution f_0, the higher-order odd moments decay exponentially to zero (m^(k) → 0, k odd), and the higher-order even moments tend to m^(k) → m^(0)/(k + 1) (k even). Thus, the angular dependence of the distribution is captured well by only a few moments. In the absence of collisions, more moments are typically needed to capture the angular shape of the distribution function. Note in particular that, in Eq. (68), the equation for the k-th moment contains the (k + 1)-th moment. Thus, in a truncated moment model based on N + 1 moments, m_N, the equation for m^(N) contains the moment m^(N+1), which must be related to the lower-order moments by a closure procedure, i.e., m^(N+1) := g(m_N), in order to form a closed system of equations. This is referred to as the closure problem. Typically, the closure function g is a nonlinear function of m_N, which can make the construction of numerical methods for moment models more difficult. There are several challenges associated with the construction of closures for moment hierarchies (see, e.g., Levermore, 1996), one being the construction of closures that preserve the hyperbolic character of the system of moment equations; see, e.g., Pons et al (2000) for a discussion of this topic in the context of two-moment models. In the remainder of this section, we will discuss relativistic two-moment models (N = 1 in the simpler formalism above).
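The moment machinery and the closure problem can be made concrete with a short sketch, assuming the common normalization m^(k) = (1/2)∫ f µ^k dµ, which reproduces the isotropic-limit values quoted above (odd moments vanish; even moments equal m^(0)/(k + 1)). The Eddington closure used at the end is the simplest example of a closure g, exact in the isotropic limit but inappropriate for free streaming.

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule (avoids NumPy version differences)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

mu = np.linspace(-1.0, 1.0, 2001)

def moment(f, k):
    # m^(k) = (1/2) * int_{-1}^{1} f(mu) mu^k dmu
    return 0.5 * trapz(f * mu**k, mu)

f_iso = 0.7 * np.ones_like(mu)               # isotropic distribution, f0 = 0.7
m = [moment(f_iso, k) for k in range(4)]
# Expect m ~ [0.7, 0, 0.7/3, 0] up to quadrature error.

# Simplest two-moment closure example (Eddington): close the k = 1 equation
# with m^(2) = m^(0)/3, exact in the isotropic limit shown above.
def eddington_closure(m0, m1):
    return m0 / 3.0

print(np.isclose(m[2], eddington_closure(m[0], m[1])))
```

More sophisticated closures make m^(2) a nonlinear function of the flux factor m^(1)/m^(0), which is where the nonlinearity discussed above enters.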
In the multi-dimensional setting, the two-moment model evolves four unknowns (e.g., the energy density and three components of the momentum density), and, in the relativistic setting considered here, second and third moments appear in the equations for the first moments.
Conservative, 3+1 general relativistic, multi-frequency (or multi-energy) two-moment formalisms have been developed by Shibata et al (2011); Cardall et al (2013b). The formalism of Shibata et al (2011) is based on the formalism of Thorne (1981), while the formalism of Cardall et al (2013b) starts out with the conservative formulation of kinetic theory from Cardall and Mezzacappa (2003). Covariant expressions for the first few moments of the distribution function f are given by

N^µ = ∫ f p^µ π_m and T^µν = ∫ f p^µ p^ν π_m ,

together with the rank-three tensor of moments Q^µνρ, formed with a third factor of the four-momentum, where N^µ is the four-current density, T^µν the stress-energy tensor, and Q^µνρ is sometimes referred to as the tensor of fluxes or heat flux tensor. When expressed in terms of comoving-frame spherical-polar momentum coordinates (ε, ϑ, ϕ), the invariant momentum-space 3-volume in Eq. (24) is

π_m = ε sin ϑ dε dϑ dϕ .

Higher-order moments can be constructed similarly in a straightforward way, but we will limit the discussion to moment models involving the moments in Eqs. (70)-(72). Note that the moments defined above depend only on position x and time t. However, because neutrino heating and cooling rates are sensitive to the neutrino energy (cf. Sect. 2), supernova models based on moment descriptions for neutrino transport retain the energy dimension and solve for angular moments, or spectral moments, defined by angular averages at fixed energy; e.g., for the number current,

N^µ(ε, x, t) = (1/4πε) ∫ f p^µ dω ,

and similarly for the higher moments, where dω = sin ϑ dϑ dϕ and the integrals extend over the sphere, with ϑ and ϕ the momentum-space angular coordinates. The angular moments defined in Eqs. (74)-(76) depend on the neutrino energy, ε, position, x, and time, t. They are related to the moments in Eqs. (70)-(72) by the integral over energy,

N^µ = ∫_0^∞ N^µ(ε) dV_ε ,

where the infinitesimal energy-space shell-volume element is dV_ε = 4πε² dε. In forming the angular moments we have used the freedom in choosing distinct spacetime and momentum-space coordinates: x and t are spacetime coordinates in a global coordinate basis, while {ε, ϑ, ϕ} are momentum coordinates in a comoving basis.
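The relation between spectral and energy-integrated moments can be sketched as a quadrature over the shell volume dV_ε = 4πε² dε. The spectrum below is an illustrative Fermi-Dirac-like occupancy (not from a simulation), together with the factor-of-ε relation between the comoving spectral energy and number densities:

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule (avoids NumPy version differences)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

eps = np.linspace(0.5, 300.0, 2048)           # neutrino energy grid [MeV]
D = 1.0 / (np.exp((eps - 15.0) / 5.0) + 1.0)  # spectral number density (illustrative)
J = eps * D                                   # spectral energy density, J = eps*D

w = 4.0 * np.pi * eps**2                      # dV_eps/deps = 4*pi*eps^2
N_total = trapz(D * w, eps)                   # energy-integrated number density
E_total = trapz(J * w, eps)                   # energy-integrated energy density
E_mean = E_total / N_total                    # number-averaged energy [MeV]
print(E_mean > 15.0)
```

This is exactly the operation a multi-frequency (multi-group) code performs on its energy grid to recover gray quantities such as luminosities and mean energies.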
Moment equations governing the evolution of the angular moments are derived from the general relativistic Boltzmann equation discussed in Sect. 4.3. Since current supernova modelers employing angular moment models use either a flux-limited diffusion (one-moment) or a two-moment approach, we will limit the discussion to these approaches. In this context, we will need evolution equations for the spectral neutrino number density, energy density, and three-momentum density. The evolution equation for the neutrino number density, Eq. (79), is obtained by multiplying Eq. (27) by 1/(4πε) and integrating over S², where u^ν is the four-velocity of the observer measuring neutrino energy ε (i.e., the comoving observer). Note that the left-hand side of Eq. (79) is in divergence form, and the use of spherical momentum-space coordinates is apparent from the form of the second term. Integrating over energy (dV_ε) gives rise to the balance equation, Eq. (80), whose left-hand side is in conservative form and whose right-hand side gives rise to lepton exchange sources and sinks due to neutrino-matter interactions (e.g., emission and absorption). In a similar manner, conservative evolution equations for the neutrino four-momentum, Eq. (81), are obtained by multiplying the four-momentum conservative Boltzmann equation in Eq. (28) by 1/(4πε) and integrating over S². Again, integrating this equation over energy results in a balance equation whose left-hand side is in conservative form and whose right-hand side gives rise to four-momentum exchange with the fluid. Eq. (81) forms a basis for the two-moment model for neutrino transport. Since neutrinos exchange lepton number and four-momentum with the fluid, Eq. (79) needs to be considered as well. However, these equations are not independent. Due to relations such as N^µ = −u_ν T^µν/ε (obvious from the definitions in Eqs. (74)-(76)), Eqs.
(79) and (81) are related in a similar way: Eq. (79) can be obtained from Eq. (81) by a contraction with −u_µ/ε. In a numerical implementation targeting both lepton and four-momentum exchange between neutrinos and the stellar fluid, such consistency is desirable, since the numerical method then preserves a critical structure of the moment system. In the following, we provide versions of the two-moment model in the 3+1 framework of general relativity. Before we delve into the details, we briefly discuss two useful decompositions of the angular moments.

Lagrangian decompositions
With comoving frame four-momentum coordinates, Lagrangian decomposition of tensors is a natural way to express the angular moments in Eqs. (74)-(76) in terms of elementary moments of the distribution function. This is achieved with the Lagrangian decomposition of the particle four-momentum, where u µ is the four-velocity of the Lagrangian observer, and ℓ µ is a unit four-vector orthogonal to u µ ; i.e., ℓ µ ℓ µ = 1 and u µ ℓ µ = 0. Then, ε = −u µ p µ is the neutrino energy measured by the Lagrangian observer. In terms of the composite transformation of the neutrino four-momentum, where the angular moments D and I µ are the comoving spectral number density and number flux, respectively. Using the fluid four-velocity u µ and the projector in Eq. (42), these components are obtained from D = −u µ N µ and I µ = h µ ν N ν . The moments in Eq. (87) are the most elementary in the moment hierarchy, and for the two-moment model, these are used in the closure procedure to determine the higher-order moments in terms of D and I µ . Note that for an isotropic distribution function f = f 0 (where f 0 is independent of ω), D = f 0 and I µ = 0.
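Where the displayed equations have dropped out of the text, the standard forms are as follows (our reconstruction, following the notation of the surrounding prose):

```latex
p^\mu = \varepsilon \left( u^\mu + \ell^\mu \right),
\qquad \ell_\mu \ell^\mu = 1, \qquad u_\mu \ell^\mu = 0,
```

with the elementary comoving moments and the Lagrangian decomposition of the spectral number current,

```latex
\mathcal{D}(\varepsilon) = \frac{1}{4\pi} \int_{\mathbb{S}^2} f \, d\omega,
\qquad
\mathcal{I}^\mu(\varepsilon) = \frac{1}{4\pi} \int_{\mathbb{S}^2} \ell^\mu f \, d\omega,
\qquad
N^\mu = \mathcal{D}\, u^\mu + \mathcal{I}^\mu .
```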
In a similar way, using Eq. (84) in Eq. (75), the Lagrangian decomposition of the stress-energy tensor is given by where H µ and K µν are orthogonal to u µ (spacelike in the comoving frame); i.e., u µ H µ = u µ K µν = u ν K µν = 0. In Eq. (89), J , H µ , and K µν are respectively the spectral energy density, momentum density, and stress measured by a Lagrangian observer. The four-velocity u µ and the associated orthogonal projector h µν are used to extract components of the Lagrangian decomposition of T µν (Eq. (90)). Note that the Lagrangian energy density and momentum density are related to the number density and flux by a factor of ε; i.e., J = εD and H µ = εI µ . Finally, a Lagrangian decomposition of the rank-three tensor in Eq. (76) gives where the spectral rank-three tensor measured by a Lagrangian observer, L µνρ , is orthogonal to u µ (i.e., u µ L µνρ = u ν L µνρ = u ρ L µνρ = 0) and is obtained from Q µνρ using the orthogonal projector:
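The Lagrangian decomposition of the stress-energy tensor implied here is standard in the moment literature; a reconstruction (treat the exact index placement as our assumption):

```latex
T^{\mu\nu} = \mathcal{J}\, u^\mu u^\nu
  + \mathcal{H}^\mu u^\nu + u^\mu \mathcal{H}^\nu
  + \mathcal{K}^{\mu\nu},
\qquad
\mathcal{J} = u_\mu u_\nu T^{\mu\nu}, \quad
\mathcal{H}^\mu = - h^{\mu}{}_{\nu}\, u_\rho\, T^{\nu\rho}, \quad
\mathcal{K}^{\mu\nu} = h^{\mu}{}_{\rho}\, h^{\nu}{}_{\sigma}\, T^{\rho\sigma},
```

with J = εD and H^µ = εI^µ relating the energy moments to the number moments, as stated in the text.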

Eulerian decompositions
Eulerian projections of tensors are particularly useful when deriving evolution equations in the context of moment models for neutrino transport, as it is the Eulerian number density, energy density, and three-momentum density that are governed by conservation laws. In a manner similar to the Lagrangian decomposition in Eq. (86), the Eulerian decomposition of the spectral number current density is given by where n µ G µ = 0. The four-velocity n µ and the projector γ µν = g µν + n µ n ν can be used to extract the Eulerian components where N and G µ are the spectral number density and number flux density measured by an Eulerian observer, respectively. Note that N and G µ are still considered functions of ε, the neutrino energy measured by a Lagrangian observer. Thus, the definition in Eq. (95) should merely be viewed as a decomposition of N µ in a different basis than in Eq. (86), not as moments of the distribution with respect to Eulerian momentum coordinates. Inserting the Lagrangian decomposition in Eq. (86) into the expressions in Eq. (96), the Eulerian number density and number flux density are expressed in terms of the Lagrangian number density and number flux density. Similarly, the Eulerian decomposition of the stress-energy tensor is given by where E , F µ , and S µν are respectively the spectral energy density, momentum density, and stress measured by an Eulerian observer. The Eulerian momentum density and stress are spacelike (i.e., n µ F µ = n µ S µν = n ν S µν = 0), and the components of the Eulerian decomposition of T µν are extracted using n µ and the associated orthogonal projector γ µν : Inserting the Lagrangian decomposition in Eq. (88) into the expressions in Eq. (100), the Eulerian energy density, momentum density, and stress are expressed in terms of the corresponding Lagrangian quantities (cf. Eqs. (B8)-(B10) in Cardall et al, 2013b). Finally, and similar to Eqs. 
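The Eulerian decomposition and extraction identities referred to above take the standard form (our reconstruction, using the projector γ^µν = g^µν + n^µ n^ν defined in the text):

```latex
N^\mu = \mathcal{N}\, n^\mu + \mathcal{G}^\mu,
\qquad n_\mu \mathcal{G}^\mu = 0,
\qquad
\mathcal{N} = - n_\mu N^\mu,
\quad
\mathcal{G}^\mu = \gamma^{\mu}{}_{\nu}\, N^\nu .
```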
(95) and (99), the Eulerian decomposition of the rank-three tensor in Eq. (76) is given by where the Eulerian components are obtained from These components can be expressed in terms of the Lagrangian moments by inserting the Lagrangian decomposition in Eq. (92). We will not repeat these tedious expressions here, but see Eqs. (B15), (B16), (B17), and (B18) in Cardall et al (2013b) for expressions relating Z , Y µ , Z µν , and W µνρ , respectively, to the Lagrangian moments J , H µ , K µν , and L µνρ (note the difference of a factor of ε between our definition of Q µνρ and the corresponding variable in Cardall et al (2013b)). While components of Lagrangian decompositions are more closely related to the distribution function, Eulerian decompositions appear to be more natural to use in the 3+1 approach, and are powerful in simplifying terms appearing in the moment equations, especially the energy derivative terms in Eqs. (79) and (81), which contain contractions with the covariant derivative of the fluid four-velocity. As elaborated on in Cardall et al (2013b), Eulerian decompositions of T µν and Q µνρ , in combination with the Eulerian decomposition of u µ in Eq. (41), result in surprisingly simple expressions, without explicit reference to connection coefficients (cf. Eq. (13)). Moreover, as emphasized by Cardall et al (2013b), consistent use of Eulerian decompositions in the spacetime and momentum-space divergences in the moment equations simplifies the elucidation of the relationship between the equations for four-momentum and number conservation in the 3+1 case.

Two-moment kinetics
In this section we review two-moment models in the 3+1 formulation of general relativity, which can serve as a basis for the development of numerical methods and their implementation in codes to model neutrino transport in core-collapse supernovae. We present three versions, all based on Eq. (81), but using different projections. The projection of Eq. (81) orthogonal and tangential to the spacelike slice of the Eulerian observer (using n µ and γ µν ) gives rise to the Eulerian two-moment model, while the projection of Eq. (81) orthogonal and tangential to the spacelike slice of the Lagrangian observer (using u µ and h µν ) gives rise to the Lagrangian two-moment model. We also present a number conservative two-moment model, which is closely related to the Lagrangian two-moment model, but uses projections based on u µ /ε and h µν /ε. This results in one of the evolved equations being Eq. (79), which is neutrino number conservative. Analytically, all these formulations are equivalent, but they could have different numerical properties.
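Schematically, the three formulations described above differ only in the vectors used to project Eq. (81); a summary of the text:

```latex
\begin{aligned}
\text{Eulerian:} &\quad -n_\mu \ \ \text{(energy)}, \qquad \gamma_{j\mu} \ \ \text{(momentum)}, \\
\text{Lagrangian:} &\quad -u_\mu \ \ \text{(energy)}, \qquad h_{j\mu} \ \ \text{(momentum)}, \\
\text{Number conservative:} &\quad -u_\mu/\varepsilon, \qquad\qquad h_{j\mu}/\varepsilon .
\end{aligned}
```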
Eulerian two-moment model

The Eulerian two-moment model evolves the spectral energy density and momentum density measured by an Eulerian observer (E and F j , respectively). The energy equation is obtained as the projection of Eq. (81) onto the four-velocity of the Eulerian observer (i.e., by contracting Eq. (81) with −n µ ). The result is: where the sources on the right-hand side are due to spacetime geometry and energy exchange between neutrinos and the fluid. The left-hand side is in divergence form, where the divergence operates on the spacetime-plus-energy phase space. In expressing the terms inside the energy derivative (the last term on the left-hand side), we make use of the Eulerian decomposition in Eq. (104) to write which account for changes in the spectral energy density due to gravitational energy shifts and the fact that the velocities of adjacent comoving observers in spacetime are generally different.
The momentum equation is obtained as the projection of Eq. (81) into the slice with normal n µ (i.e., by contracting Eq. (81) with γ jµ ), which results in where the right-hand side gives rise to changes in the spectral momentum density due to spacetime geometry and neutrino-matter interactions. Again, using the Eulerian decomposition in Eq. (104), the terms inside the energy derivative can be written as which account for changes in the spectral momentum density due to gravitational and comoving observer effects. An obvious advantage of the Eulerian two-moment model given by Eqs. (109) and (111) is its conservative form. Integrating these equations over energy space (using dV ε = 4πε² dε) results in the radiation energy equation and the radiation momentum equation where the energy-integrated Eulerian moments are given by the corresponding energy integrals of the spectral moments. Eqs. (113) and (114) are conservation laws for radiation energy and momentum in the sense that, in the case of Cartesian coordinates in flat spacetime, with no neutrino-matter interactions, the right-hand sides vanish and the equations express exact conservation of radiation energy and momentum. The Eulerian two-moment model presented here is the basis for several codes developed to model neutrino transport in core-collapse supernovae (O'Connor, 2015; Kuroda et al, 2016; Roberts et al, 2016): the GR1D code (O'Connor, 2015) solves the equations in spherical symmetry; the ZELMANI code (Roberts et al, 2016) solves the equations in three spatial dimensions, but does not include velocity-dependent terms (i.e., v i = 0 in the transport equations); and Kuroda et al (2016) solve the full system in three spatial dimensions.
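In the special case the text singles out (Cartesian coordinates, flat spacetime, no neutrino-matter interactions), the energy-integrated system reduces to the familiar conservation-law form; a sketch, with E, F^j, and S^{ij} the energy-integrated Eulerian moments:

```latex
\partial_t E + \partial_j F^j = 0,
\qquad
\partial_t F^i + \partial_j S^{ij} = 0 .
```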
Lagrangian two-moment model

The Lagrangian two-moment model is an alternative to the Eulerian two-moment model discussed above, in which the spectral energy density and momentum density measured by the Lagrangian observer with four-velocity u µ are evolved (J and H j , respectively). The energy equation is obtained as the projection of Eq. (81) along the four-velocity of the Lagrangian observer (i.e., by contracting Eq. (81) with −u µ ), which results in where the contraction of the stress-energy equation with the covariant derivative of the Lagrangian observer's four-velocity is given in 3+1 form as which accounts for changes to the spectral energy density from gravitational effects and from the fact that adjacent comoving observers in spacetime have different velocities. In Eq. (117), we made use of the Eulerian decomposition of the stress-energy tensor, which, as discussed at the end of Sect. 4.7.2, is more convenient than using the Lagrangian decomposition, since it keeps the number of terms in the expression to a minimum and simplifies bookkeeping. The components of the Eulerian decomposition are related to the Lagrangian components by Eqs. (101)-(103). The Lagrangian momentum equation is obtained by projecting Eq. (81) tangentially to the slice with normal u µ (i.e., by contracting Eq. (81) with h jµ ), which gives where the "geometry" source on the right-hand side can be written as Again, using the Eulerian decomposition of T µν , the first term on the right-hand side of Eq. (119) can be written as which also appears on the right-hand side of Eq. (111). Similarly, the third term on the right-hand side of Eq. (119) can be written as while the second term on the right-hand side of Eq. (119) contains the expression in Eq. (117). Finally, the expression inside the energy derivative term on the left-hand side of Eq. 
(118) can be written as which is a contraction of Eulerian decompositions of rank-two tensors with the covariant derivative of the fluid four-velocity, and can be written in a form similar to Eq. (112).
The Lagrangian two-moment model presented here (Eqs. (116) and (118)) is the basis for several codes used to model neutrino transport in core-collapse supernovae: Müller et al (2010) used it in conjunction with the conformal flatness approximation to general relativity (CFA) and ray-by-ray neutrino transport; and Just et al (2015) and Skinner et al (2019) used this model in its O(v/c) limit to develop multi-dimensional neutrino transport codes.
Number conservative two-moment model

The number conservative model is yet another formulation of two-moment transport, which evolves the spectral number density as measured by the Eulerian observer (with four-velocity n µ ) and the spectral number flux. The equation for the number density is obtained (1) directly from Eq. (79), (2) by contraction of Eq. (81) with −u µ /ε, or (3) by dividing Eq. (116) by ε. In 3+1 form it is given by where the expression inside the energy derivative (the last term on the left-hand side) is given by Eq. (117). Eq. (123) is conservative in the sense that an integration over energy space gives the balance equation (80), which in 3+1 form is given by expressing exact particle conservation in the absence of particle-converting neutrino-matter interactions (e.g., emission and absorption). The equation for the number flux density is obtained by contraction of Eq. (81) with h jµ /ε (or by dividing Eq. (118) by ε): Here, we use the "hat" to denote previously defined moments divided by ε; e.g., The expression in the energy derivative in Eq. (125) is given by Eq. (122). The first term on the right-hand side of Eq. (125) can be written as (cf. Eq. (118)) while the third term on the right-hand side of Eq. (125) can be written as where N and G i are written in terms of Lagrangian moments in Eqs. (97) and (98). The second term on the right-hand side of Eq. (125) can be written as which is of the same form as Eq. (117), but with Y j , Z µ j , and W µν j replacing E , F µ , and S µν , respectively.
This number conservative two-moment model was presented in spherical symmetry, assuming the conformal flatness approximation (CFA) to general relativity, by Müller et al (2010), and was also presented in the O(v/c) limit by Just et al (2015), but it was not explicitly used in the numerical methods developed by either set of authors. The model presented here is the 3+1 general relativistic version of that model, without approximation. It should also be mentioned that Rampp and Janka (2002) developed a two-moment, variable Eddington factor method based on solving both the Lagrangian two-moment model and the number conservative two-moment model simultaneously, in spherical symmetry and in the O(v/c) limit, treating the radiation energy density, momentum density, number density, and number flux density as independent variables. However, owing to an inconsistency between the energy and number equations in this approach, the mean energy in an energy group, J /D, is not constrained to lie within the group boundaries and can even move outside the group (Müller et al, 2010).

The closure problem
The two-moment models discussed above are not closed. The rank-two tensor K µν defined in Eq. (89) and the rank-three tensor L µνρ defined in Eq. (93) are present in various terms in the two-moment model: components of K µν are present in spacetime derivative terms, while components of K µν and L µνρ are present in energy derivative terms and source terms. These tensor components must be expressed in terms of the evolved moments to close the system of equations. For the Eulerian and the Lagrangian two-moment models, the evolved quantities are ultimately the energy density and momentum density measured by a comoving observer, J and H j , respectively. (For the number conservative two-moment model, the evolved quantities are the number density and number flux density measured by a comoving observer, D and I j , respectively.) Following Levermore (1984) and Anile et al (1992), the general symmetric rank-two tensor K µν , depending on J and H µ , that is orthogonal to the fluid four-velocity u µ and that satisfies the trace condition K µ µ = J takes the form where k(J , h) is the Eddington factor, h = H /J is the flux factor, H = (H µ H µ ) 1/2 , and h µ = H µ /H is a unit four-vector parallel to H µ . It is straightforward to show that the Eddington factor can be written as where we have defined In the last step in Eq. (131) we have aligned the momentum-space coordinate system in the comoving frame so that h µ ℓ µ = cos ϑ = µ. (Note, this is not the same µ that will be defined later, in Sect. 6.1.1. The angle here is defined in terms of the direction specified by h µ , whereas in Sect. 6.1.1 it will be defined in terms of the radial direction.) The two-moment closure for K µν requires the Eddington factor to be specified in terms of J and h (or, equivalently, D and h). We will discuss some specific approaches further below.
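The general closure form referred to here (Eq. (130)) is standard (Levermore 1984); a reconstruction, writing the unit flux direction as ĥ^µ to distinguish it from the projector h^µν:

```latex
\mathcal{K}^{\mu\nu}
 = \frac{1}{2} \Big[ \left( 1 - k \right) h^{\mu\nu}
 + \left( 3k - 1 \right) \hat{h}^\mu \hat{h}^\nu \Big] \, \mathcal{J},
\qquad
k = \frac{\mathcal{K}^{\mu\nu} \hat{h}_\mu \hat{h}_\nu}{\mathcal{J}} .
```

One can check that this form is symmetric, orthogonal to u^µ, and satisfies the trace condition K^µ_µ = J, and that k = 1/3 recovers the isotropic stress K^µν = (J/3) h^µν.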
In a similar way, we can construct the third-order moment L µνρ , depending on J and H µ , as the symmetric rank-three tensor that is orthogonal to u µ and that satisfies the trace condition L µν ν = H µ (e.g., Pennisi, 1992; Cardall et al, 2013b; Just et al, 2015), where we have defined the "heat flux" factor q(J , h): The two-moment closure for L µνρ requires that we specify the heat flux factor in terms of J and h (or D and h).
To complete the specification of the two-moment closure, the Eddington and heat flux factors must be specified in terms of the zeroth and first moments. To this end, several approaches have been proposed for the Eddington factor, including maximum entropy closures (e.g., Minerbo, 1978), Kershaw-type closures (e.g., Kershaw, 1976), and closures derived from fits to results obtained with higher-fidelity models (e.g., Janka, 1991). In the context of spherically symmetric proto-neutron star models, Murchikova et al (2017) carried out a comprehensive comparison of results obtained with two-moment neutrino transport, using analytic Eddington factors, to results obtained with Monte Carlo transport calculations. Murchikova et al (2017) included Eddington factors from Wilson et al (1975), Kershaw (1976), Levermore (1984), Minerbo (1978), Cernohorsky and Bludman (1994), and Janka (1991, 1992), and found no closure to perform consistently better than the others in the test cases considered. Because the maximum entropy closures of Minerbo (1978) and Cernohorsky and Bludman (1994) gave practically identical results and never yielded the worst results, and given the simplicity of the closure by Minerbo (1978) relative to that by Cernohorsky and Bludman (1994), Murchikova et al (2017) concluded that the Minerbo (1978) closure is the most attractive choice for neutrino transport around proto-neutron stars. The closures provided by Minerbo (1978) and Levermore (1984) are probably the most widely used in core-collapse supernova simulations employing two-moment neutrino transport. Recently, Just et al (2015), comparing the closures of Minerbo (1978), Cernohorsky and Bludman (1994), and Levermore (1984) in the context of a simulation of collapse and post-bounce evolution of a 13 M⊙ star in spherical symmetry, showed that the differences in shock radii, neutrino luminosities, and mean energies are negligible.
This may be because the closures are very similar for the values of J and h encountered. Chu et al (2019) considered Eddington factors by Minerbo (1978), Cernohorsky and Bludman (1994), Larecki and Banach (2011), and Banach and Larecki (2017) and found that, under certain conditions, results obtained with closures based on Fermi-Dirac statistics can differ significantly from results obtained with the Minerbo (1978) closure, which is based on Boltzmann statistics.
We discuss the closures due to Minerbo (1978), Levermore (1984), and Kershaw (1976) in further detail below and give explicit expressions for the Eddington and heat flux factors, which are also plotted in Fig. 5: Minerbo (1978) (black), Levermore (1984) (magenta), and Kershaw (1976) (blue); see the figure caption for details.

Maximum entropy closure

The maximum entropy approach to specifying the Eddington and heat flux factors comes from statistical mechanics, and has been used extensively in moment models for radiation transport (e.g., Minerbo, 1978; Cernohorsky and Bludman, 1994). In this approach, the "most probable" values of k and q are determined by finding the distribution function f ME that maximizes the entropy functional s[f ME ], subject to the constraints that f ME reproduces the known moments (e.g., D and h). The unknowns can then be computed from Eqs. (131) and (134) by setting f = f ME . For the two-moment model, the maximum entropy distribution is obtained by extremizing with respect to f ME , where the Lagrange multipliers α 0 and α 1 are introduced for the constraints. A particularly simple closure is obtained by considering the case of Boltzmann statistics, where s[f ME ] = f ME ln f ME − f ME . This case was considered in detail by Minerbo (1978), and is the low-occupancy limit (D ≪ 1) of the more appropriate case (for neutrino transport) of Fermi-Dirac statistics considered by Cernohorsky and Bludman (1994). For the case of Boltzmann statistics, the maximum entropy distribution is easily found to be given by where α 0 and α 1 are found from the known moments. Direct integration of Eq. (136) gives (Minerbo, 1978) which can be solved for α 0 and α 1 . In particular, the flux factor is given by the Langevin function L(α 1 ), and is independent of α 0 . Thus, α 1 (h) = L −1 (h). (The inversion of the Langevin function must be done numerically.) Once α 1 is obtained, α 0 can be obtained directly from either Eq. (137) or Eq. 
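For Boltzmann statistics the extremization can be carried out in closed form. A reconstruction of the missing expressions (standard results; the sign conventions for the Lagrange multipliers are our assumption), with µ = cos ϑ:

```latex
f_{\rm ME}(\mu) = e^{\alpha_0 + \alpha_1 \mu},
\qquad
\mathcal{D} = e^{\alpha_0}\,\frac{\sinh \alpha_1}{\alpha_1},
\qquad
h = \coth \alpha_1 - \frac{1}{\alpha_1} \equiv L(\alpha_1) .
```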
(138), which completes the specification of f ME . Then the Eddington factor and heat flux factor can be computed by direct integration, which closes the two-moment model under the simplifying assumption of Boltzmann statistics, a reasonable approximation for neutrinos only when the occupation density is low; i.e., when D ≪ 1. This closure is referred to as the Minerbo closure, and is commonly used in simulations employing spectral two-moment neutrino transport (e.g., Kuroda et al, 2016; Just et al, 2018; O'Connor and Couch, 2018). In practice, to avoid inverting the Langevin function for α 1 , the Eddington and heat flux factors can be approximated as polynomials in the flux factor. This leads to algebraic expressions, which are computationally more efficient. The algebraic form of the Eddington factor, which approximates the one in Eq. (140) to better than one percent, is given by (Cernohorsky and Bludman, 1994) Similarly, the algebraic form of the heat flux factor, which approximates the one in Eq. (140) to within a few percent, is given by (Just et al, 2015) In Figure 5, the Eddington and heat flux factors k Alg and q Alg are plotted versus the flux factor h (denoted "Minerbo" in the legend, using solid and dotted black lines, respectively). Another two-moment closure based on the maximum entropy principle is the so-called M1 closure (e.g., Levermore, 1984; Dubroca and Feugeas, 1999). The M1 closure is thus based on the same principle as the Minerbo closure, but a different entropy functional is considered; namely, the entropy functional for Bose-Einstein statistics, s[f ME ] = (1 + f ME ) ln(1 + f ME ) − f ME ln f ME . For the M1 closure, the Eddington factor is given by It should be noted that Levermore (1984) derived this result without recourse to the maximum entropy principle. 
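As a concrete illustration, the following Python sketch (ours, not from the text) inverts the Langevin function numerically and checks the quoted better-than-one-percent accuracy of the algebraic Eddington factor k_Alg(h) = 1/3 + (2h²/15)(3 − h + 3h²); for the exact Minerbo closure we use the standard identity k = 1 − 2h/α₁ with α₁ = L⁻¹(h):

```python
import math

def langevin(a):
    """Langevin function L(a) = coth(a) - 1/a, with the a -> 0 limit handled."""
    if abs(a) < 1e-6:
        return a / 3.0 - a**3 / 45.0  # series expansion near a = 0
    return 1.0 / math.tanh(a) - 1.0 / a

def invert_langevin(h):
    """Numerically invert h = L(a) by bisection, for h in (0, 1)."""
    lo, hi = 1e-12, 1e4
    for _ in range(200):  # L is monotone increasing, so bisection converges
        mid = 0.5 * (lo + hi)
        if langevin(mid) < h:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def eddington_minerbo_exact(h):
    """Exact Minerbo Eddington factor: k = 1 - 2h/a, with a = L^{-1}(h)."""
    if h < 1e-8:
        return 1.0 / 3.0  # isotropic (diffusion) limit
    return 1.0 - 2.0 * h / invert_langevin(h)

def eddington_minerbo_algebraic(h):
    """Algebraic fit (Cernohorsky & Bludman 1994): 1/3 + (2h^2/15)(3 - h + 3h^2)."""
    return 1.0 / 3.0 + (2.0 * h**2 / 15.0) * (3.0 - h + 3.0 * h**2)

# The fit tracks the exact closure to better than one percent:
for h in (0.0, 0.1, 0.3, 0.5, 0.7, 0.9):
    assert abs(eddington_minerbo_exact(h) - eddington_minerbo_algebraic(h)) < 0.01
```

The inversion cost is what motivates the algebraic form used in production codes.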
More recently, Vaytet et al (2011) proposed a numerical method for multi-group radiation hydrodynamics in the O(v/c) limit and provided an expression for the heat flux factor in the M1 model: where ϕ 1 and ϕ 2 are functions of the flux factor, and a = 4 − 3h 2 . The M1 closure is another commonly used closure in simulations employing spectral two-moment neutrino transport (e.g., Skinner et al, 2019). In Figure 5, the Eddington and heat flux factors k M1 and q M1 are plotted versus the flux factor h (denoted "Levermore" in the legend, using solid and dotted magenta lines, respectively). When plotting the heat flux factor, we found ϕ 1 and ϕ 2 to exhibit oscillatory behavior as h → 0. To avoid these oscillations in q M1 , we used Taylor expansions of ϕ 1 (around h = 0.1) and ϕ 2 (around h = 0.2) to plot q M1 for smaller values of h. The low-occupancy assumption used for the Minerbo closure does not hold everywhere in a supernova simulation, but may be a reasonable approximation in the neutrino heating region. The M1 closure, based on Bose-Einstein statistics, is also not a good approximation when the phase-space occupation is high. In this case, a more realistic treatment for neutrinos must consider the entropy functional for Fermi-Dirac statistics, where s[f ME ] = f ME ln f ME + (1 − f ME ) ln(1 − f ME ), and follow the procedure outlined above, as was done by Cernohorsky and Bludman (1994) and, more recently and in further detail, by Larecki and Banach (2011). For the maximum entropy closure derived by Cernohorsky and Bludman (1994), the Eddington factor is where Θ (x) = x 2 (3 − x + 3x 2 )/5. To account for Fermi-Dirac statistics, the Eddington factor in Eq. (147) depends on both the number density D and the flux factor h. In the low-occupancy limit, when D ≪ 1, this Eddington factor reduces to the Eddington factor due to Minerbo in Eq. (141). Cernohorsky and Bludman (1994) did not provide an expression for the heat flux factor.
It should be noted that the term "M1 closure," used here to refer to the closure in Eqs. (143) and (144), derives from the more general term "MN closure," which is used in transport theory to refer to maximum entropy closures applied to N-moment hierarchies. As such, all the closures discussed in this section are M1 closures, but they differ in the entropy functional that is maximized.
Kershaw closure

A different approach to the closure problem was proposed by Kershaw (1976). The key idea behind the Kershaw closure is to consider the bounds on the moments generated by the underlying distribution function. For a nonnegative distribution function (f ≥ 0), the set generated by the normalized moments { 1, h, k, q } is convex and bounded, which in turn allows one to construct any sequence of moments in this set by a convex combination of moment vectors on the boundary of this domain. The moments constructed by this procedure are then "good" in the sense that they can be obtained from a nonnegative distribution function.
For the two-moment model, the Kershaw closure procedure can be used to specify k and q in terms of h. For f ≥ 0, it is straightforward to show that −1 ≤ h ≤ 1, while the bounds on the Eddington factor are given by For ζ ∈ [0, 1], the Eddington factor can be written as the convex combination Demanding that this expression be correct in the limit h = 0, i.e., k(0) = 1/3, gives ζ = 2/3, so that Similarly, for the heat flux factor, it can be shown that the following bounds hold (e.g., Schneider, 2016): Constructing the heat flux factor from a convex combination of these bounds, and using k K (h), gives Demanding that q K (0) = 0 (the isotropic limit) gives ζ = 1/2, so that In Fig. 5, the Eddington and heat flux factors k K and q K are plotted versus the flux factor h (denoted "Kershaw" in the legend; solid and dotted blue lines, respectively). The Kershaw closure considered here assumes only f ≥ 0, which holds for Bose-Einstein and Boltzmann statistics. Kershaw-type closures for Fermi-Dirac statistics, appropriate for neutrinos, where f ∈ [0, 1], were recently considered by Banach and Larecki (2017).
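A short Python sketch (ours) of two of the closed-form Eddington factors discussed in this section, the Levermore (1984) M1 factor k = (3 + 4h²)/(5 + 2√(4 − 3h²)) and the Kershaw (1976) factor k = (1 + 2h²)/3, verifying that both interpolate between the diffusion and free-streaming limits:

```python
import math

def eddington_m1(h):
    """Levermore (1984) M1 Eddington factor."""
    return (3.0 + 4.0 * h**2) / (5.0 + 2.0 * math.sqrt(4.0 - 3.0 * h**2))

def eddington_kershaw(h):
    """Kershaw (1976) Eddington factor: k = (1 + 2h^2) / 3."""
    return (1.0 + 2.0 * h**2) / 3.0

# Both closures recover the isotropic (diffusion) limit k(0) = 1/3,
# the free-streaming limit k(1) = 1, and are monotone in between.
for k in (eddington_m1, eddington_kershaw):
    assert abs(k(0.0) - 1.0 / 3.0) < 1e-12
    assert abs(k(1.0) - 1.0) < 1e-12
    assert all(k(0.1 * (i + 1)) >= k(0.1 * i) for i in range(10))
```

Closures differ only in how rapidly k rises between these two limits, which is precisely where the comparisons cited above (Murchikova et al 2017; Just et al 2015) probe them.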

One-moment kinetics
One-moment models (commonly referred to as flux-limited diffusion models; Levermore and Pomraning, 1981) are among the earliest transport models adopted for neutrino transport in core-collapse supernova simulations (Bruenn, 1975), and are still in use today (e.g., Bruenn et al, 2020; Rahman et al, 2019). Essentially, one-moment models evolve only the zeroth moment of the distribution function, while higher-order moments are specified through a closure procedure. Specifically, the radiation flux is specified in terms of the zeroth moment in a way that is correct in both the diffusion and streaming regimes. In order to be correct in the streaming regime, a flux limiter is applied to transition the model from parabolic (diffusion) to hyperbolic (streaming) behavior. Here we consider the 3+1 general relativistic formulation presented by Rahman et al (2019), which was derived using the formalism of Shibata et al (2011). Rahman et al (2019) define their angular moments with an additional factor of ε² relative to our definitions in Eq. (89) and absorb √γ into the variables; hence, we make the following definitions, where similar definitions hold for the other moments appearing in the equations. (They also do not normalize their moments by the factor of 4π, but that should not cause confusion in the presentation here.) We can then write Eq. (116) as where we have defined where in the second step we used Eq. (117), re-expressed in the form given by Rahman et al (2019) (cf. their Eq. (A14)), and we have defined ∂ τ = n µ ∂ µ . Rahman et al (2019) solve for moments defined in an orthonormal comoving frame, and write the coordinate-basis flux components in terms of the orthonormal comoving frame components Ĥ î . (Similar expressions can be derived for higher-order moments; see Endeve et al (2012); Rahman et al (2019).) In Eq. 
(157), we remind the reader that Λ μ̂ μ̄ is the Lorentz transformation between the orthonormal comoving frame basis and the orthonormal laboratory frame basis, while e i μ̄ is a transformation between the orthonormal laboratory frame basis and the coordinate basis. We have made the choice e µ 0̄ = n µ , and v î are the three-velocity components in the orthonormal laboratory frame basis (the barred and hatted components coincide, v ī = v î ), so that v i = e i î v î , where the notation e i î = e i ī δ ī î is used.
To close the one-moment (MGFLD) model, Rahman et al (2019) replace the momentum density by the gradient of the energy density: where D is the diffusion coefficient, which they express in terms of the flux limiter λ ∈ [0, 1/3] and the total opacity κ t as for Levermore-Pomraning and Wilson flux limiting, respectively, where Rahman et al (2019) define the generalized Knudsen number as Thus, when the opacity is high, R → 0 and λ → 1/3, recovering the diffusion limit. On the other hand, when the opacity is low, λ → 1/R, and the magnitude of the flux is limited by the energy density, as required by causality in the free-streaming limit. The Eddington tensor is related to the neutrino radiation stress tensor: In the MGFLD approximation, the Eddington tensor, which appears in the expression for R̂ ε , takes a form analogous to Eq. (130): In Eq. (165), h î is a unit vector in the direction of the neutrino flux, H î , and χ is the Eddington factor, which is given by χ = λ + λ² R² .
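The Levermore-Pomraning and Wilson flux limiters referred to above have simple standard closed forms, λ(R) = (coth R − 1/R)/R and λ(R) = 1/(3 + R), respectively. A Python sketch (ours) verifying the limiting behavior described in the text, including that the resulting flux magnitude |F| = λ R E never exceeds E:

```python
import math

def lam_levermore_pomraning(R):
    """Levermore-Pomraning flux limiter: lambda(R) = (coth R - 1/R) / R."""
    if R < 1e-4:  # series expansion avoids cancellation near R = 0
        return 1.0 / 3.0 - R**2 / 45.0
    return (1.0 / math.tanh(R) - 1.0 / R) / R

def lam_wilson(R):
    """Wilson flux limiter: lambda(R) = 1 / (3 + R)."""
    return 1.0 / (3.0 + R)

for lam in (lam_wilson, lam_levermore_pomraning):
    # High opacity (R -> 0): lambda -> 1/3, recovering the diffusion limit.
    assert abs(lam(1e-8) - 1.0 / 3.0) < 1e-6
    # Low opacity (R large): lambda -> 1/R, so |F| = lambda * R * E -> E.
    assert abs(lam(1e8) * 1e8 - 1.0) < 1e-6
    # Causality: lambda * R <= 1 for all R, i.e., |F| <= E.
    assert all(lam(R) * R <= 1.0 for R in (0.1, 1.0, 10.0, 100.0))
```

Both limiters thus interpolate between Fick's law and causal free streaming; they differ only in the shape of the transition region.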

Neutrino interactions
The phenomenon of core-collapse supernovae is a magnificent juxtaposition of the macroscopic physics of neutrino radiation hydrodynamics and the microscopic physics of neutrino weak interactions and the nuclear equation of state. In particular, the weak interactions between the neutrinos and the matter are what make neutrinos important to this phenomenon. Thus, any review of neutrino transport in core-collapse supernovae must include a discussion of such interactions. In the history of core-collapse supernova modeling, there have been many important examples of studies that have demonstrated the impact of additional weak interaction physics and/or improved treatments of such physics in supernova models. Here we select a subset of these studies, each chosen to illustrate one of the dimensions of this component of supernova modeling: (1) The impact of the addition of new weak-interaction channels.
(2) The impact of improved treatments of channels that have already been included in the models. (3) The interplay between different weak-interaction channels and the impact of adding/changing more than one weak-interaction channel at a time in a model. (4) The uncertainties in the weak-interaction rates currently used in core-collapse supernova models and their ramifications for core-collapse supernova modeling.

An intertwined history
Looking back at the history of the development of the theory of weak interactions and of core-collapse supernovae, especially during the time frame after the discovery and publication of the electroweak theory, it becomes obvious that (1) the first period of what can be called modern core-collapse supernova theory, after the publication of the seminal work of Colgate and White, was greatly influenced and greatly accelerated by the new electroweak theory for more than a decade, and (2) the interplay between advancing descriptions of neutrino weak interactions and core-collapse supernovae continued well beyond this period, even to this day.
A year after the publication of the Colgate and White work, the electroweak theory was published (Weinberg, 1967; Salam, 1968). It was specifically the advent of weak neutral currents that would turn out to be a game changer for core-collapse supernova theory. Seven years after the publication of the electroweak theory, Freedman (1974) showed that, owing to weak neutral currents, neutrinos could scatter coherently off the nucleons in a nucleus, introducing an A² dependence in the cross section, where A is the atomic mass number. During stellar core collapse, the core is neutronized through the emission and escape of electron neutrinos. As a result, the core nuclei become large (i.e., have large A), given that the nuclear size reflects a competition between Coulomb repulsion and surface tension, the former favoring smaller nuclei, the latter favoring larger nuclei, and the latter winning out. In turn, coherent nuclear scattering cross sections become large. Following Freedman's discovery and publication, Tubbs and Schramm provided an electroweak-theory-based set of cross sections for problems of astrophysical interest (Tubbs and Schramm, 1975). Subsequently, these were implemented in the pioneering work of Arnett (1977), wherein he showed that coherent nuclear scattering led to the trapping of electron neutrinos during stellar core collapse and to the development of a trapped Fermi sea of them in the core. This provided the foundation for the discovery five years later by Wilson that the stalled core-collapse supernova shock wave could be revived by charged-current-mediated electron neutrino and antineutrino absorption on the shock-liberated nucleons behind it (Wilson, 1985; Bethe and Wilson, 1985). That discovery marked the beginning of contemporary core-collapse supernova theory, which has largely operated within the framework of the delayed-shock or, equivalently, neutrino-reheating mechanism.
The sixteen years between 1966 and 1982 saw the fundamental and significant advance from the first models of core-collapse supernovae to the establishment of the framework within which all core-collapse supernova modelers operate today. The developments in core-collapse supernova theory during these years were very tightly intertwined with the development of weak interaction physics. While this period was certainly unique in this regard, additional milestones, owing to further developments in the theory of neutrino interactions in the environments of interest here, have occurred since.
In 1985, Bruenn published a landmark paper on the physics of stellar core collapse (Bruenn, 1985). Bruenn included in his models a set of electron neutrino emissivities and opacities that has come to be known as the "Bruenn 85" opacity set. Subsequent to Bruenn's publication and prior to the publications discussed below, this set was frozen in as the canonical neutrino opacity set. It is still used today in code tests and comparisons. Bruenn included electron capture on (free) protons and nuclei and the inverse interactions of electron neutrino absorption, as well as scattering on (free) nucleons and electrons and coherent scattering on nuclei in his models. For electron antineutrino and heavy-flavor neutrino production, electron-positron pair annihilation served as the dominant source after core bounce and shock formation. Hannestad and Raffelt (1998) computed the production of neutrino-antineutrino pairs from nucleon-nucleon bremsstrahlung. Prior to the recognition that such bremsstrahlung could lead to, and perhaps dominate, neutrino pair production, pair production occurred only through electron-positron pair annihilation. Thus, particularly for the muon and tau neutrino flavors, which have only pair production as sources, bremsstrahlung production introduced a fundamental change. Figures 6 and 7 show the importance of nucleon-nucleon bremsstrahlung for the production of neutrino-antineutrino pairs of all three flavors, relative to production by electron-positron annihilation. The results shown are for two times after core bounce, at 4 and 100 ms, in a core-collapse supernova model performed with the CHIMERA code, initiated from an 18 M⊙ progenitor.
Fig. 6: The neutrino number production rates due to neutrino-antineutrino pair production via electron-positron annihilation and nucleon-nucleon bremsstrahlung are plotted. Also shown are the thermalization surfaces for the different neutrino and antineutrino flavors, as well as the shock location at this time after bounce: 4 ms. The data used to generate the plot are taken from a CHIMERA model using an 18 M⊙ progenitor. At the high densities present at radii below ∼10 km in the core, pair production from bremsstrahlung dominates. On the other hand, between the heavy-flavor thermalization surfaces and the shock, production by electron-positron pair annihilation is consistently larger.
Fig. 7: The same as in Fig. 6 but at a time of 100 ms after bounce, during the critical shock reheating epoch. At this time, at radii above ∼25 km, which is well below the thermalization spheres where the neutrino spectra set in, neutrino pair production is dominated by electron-positron pair annihilation. At high densities, below ∼10 km, production due to bremsstrahlung continues to dominate.
In the same year, Burrows and Sawyer (1998) and Reddy et al (1998) took on the long-term challenge to understand neutrino interactions in dense, interacting nuclear matter, taking into account nucleon recoil, degeneracy, relativity, thermal motions, and correlations. In particular, these authors computed new differential scattering rates (and new charged-current absorption and emission rates), which were no longer iso-energetic, as had been assumed before (e.g., in the Bruenn 85 opacity set), but resulted in small energy transfer between the neutrinos and the nucleons. Per scattering, the amount of energy transferred would be of little consequence, but taken over all of the scattering events in the dense environment in the vicinity of the neutrinospheres, such small-energy scattering has a notable impact. Müller et al (2012) were the first to demonstrate this. In particular, they showed that small-energy scattering of heavy-flavor neutrinos by nucleons at the electron neutrino- and antineutrino-spheres led to heating of these neutrinospheres and, consequently, an increase in the electron-flavor neutrino luminosities. Their results are shown in Fig. 8. This in turn impacted neutrino shock reheating. In the absence of small-energy scattering on nucleons, shock revival was delayed by 50-100 ms relative to their baseline model.
In 2003, yet another source of heavy-flavor neutrino pair production was introduced. Buras et al (2003) examined the production of heavy-flavor neutrino pairs through the annihilation of electron-neutrino pairs. They too found that heavy-flavor pair production by electron-flavor pair annihilation dominated the production of such pairs through electron-positron pair annihilation. Moreover, they found that the inclusion of this mode of heavy-flavor production in their model boosted the heavy-flavor luminosities during the first ∼150 ms after bounce and decreased the electron-flavor luminosities after ∼200 ms. They found too that their shock was weaker and reached a smaller peak radius when electron-flavor pair annihilation was included. While the differences were not "dramatic," they concluded they were also not "negligible."

Fig. 8: From Müller et al (2012): results from a model initiated from a 15 M⊙ progenitor. Solid lines show data from the model that includes neutrino-nucleon small-energy scattering. Evident in the plots is the ∼20% increase in both the electron neutrino and antineutrino luminosities, at a density of 10¹¹ g cm⁻³, due to the heating of the electron neutrinospheres resulting from the scattering of higher-energy heavy-flavor neutrinos, emanating from deeper regions, on nucleons in the neutrinospheric region.

And once again in the same year, progress was made on a different front. The rates for electron capture on nuclei in the Bruenn 85 opacity set are based on the Independent Particle Model (IPM) for the nucleons in the nucleus. That is, the IPM assumes that the nucleons are noninteracting. Under this assumption, the final neutron states are filled for nuclei with N > 40, which is true of the stellar core nuclei, and electron capture on nuclei is blocked, relying in turn solely on capture on protons. This assumption was finally removed as nuclear structure models developed. In 2003, rates for electron capture on nuclei were recomputed using a "hybrid" model, wherein thermal excitation and nucleon-nucleon correlations were both accounted for (Langanke et al, 2003). Owing to the improved description, capture on nuclei was no longer blocked and in fact dominates capture on protons during core collapse, resulting in a more neutronized/deleptonized core, a smaller inner core, and a deeper shock formation mass.
The importance of the above additions and modifications to the neutrino opacities in core-collapse supernova models was reinforced in the context of later two-dimensional models developed by other groups (Burrows et al, 2018; Just et al, 2018; Kotake et al, 2018).

Fig. 9: Plots of the density, entropy per baryon, electron fraction, and fluid velocity at core bounce using data from two models: one implementing the "Bruenn 85" electron capture rates (Bruenn, 1985), based on the Independent Particle Model of nucleons, and one implementing the rates of Langanke et al (2003), based on the "hybrid" model, which includes correlations between the nucleons in RPA and finite-temperature effects. The former (latter) data correspond to the thin (thick) black lines in the plots. Given the inclusion of the hybrid-model electron capture rates, electron capture is unblocked and proceeds, leading to increased electron capture in the core in this model and, consequently, to a significant (inward) change in the location of the bounce shock in mass.
Earlier in this section, we saw the impact of adding new weak interaction channels and improving the treatment of those already included in core-collapse supernova models. Here we explore yet another dimension of this important sector of core-collapse supernova physics: the interplay of neutrino weak interaction channels (new and/or modified). Lentz et al (2012a) conducted an in-depth analysis focused largely on the neutrino production and interaction channels discussed above (i.e., nucleon-nucleon bremsstrahlung, non-isoenergetic scattering, and electron capture on nuclei). They demonstrated several important points: (1) While the addition of a single interaction channel may impact the dynamics of stellar core collapse and the post-bounce evolution, the addition of two interaction channels may not be additive; in fact, it may render one of the additional channels irrelevant. (2) When two or more interaction channels are included and are instead additive, the additive impact may be nonlinear. As an example, Lentz et al. considered the interplay of electron capture on nuclei and neutrino-electron scattering during stellar core collapse. If we consider the nucleons as independent particles [Independent Particle Model (IPM)], electron capture on nuclei is blocked for N > 40, where N is the neutron number. In this case, the nuclear electron capture rates given by Bruenn (1985) are appropriate. In this instance, neutrino-electron scattering, which scatters neutrinos to lower energies given the core's electron degeneracy, leads to a significant increase in core deleptonization and a concomitant decrease in the inner core mass. On the other hand, if the improved nuclear electron capture rates of Langanke et al. are used, which factor in nucleon interactions and correlations, nuclear electron capture is no longer blocked.
In turn, low-energy neutrino states are filled, and neutrino-electron scattering is no longer able to downscatter neutrinos in energy (and contributes very little to the total neutrino opacity), becoming rather unimportant. This is captured in Fig. 10. Comparing, for example, the velocity at bounce in the upper left panel of Fig. 10 for the cases "Base," which includes the full set of neutrino weak interactions, and "Base-NoNES," which leaves out neutrino-electron scattering, it is obvious there is no difference. This is also true of all of the other quantities plotted. On the other hand, from a comparison between "IPA," which includes nuclear electron capture in the independent particle approximation, and "IPA-NoNES," it is evident that neutrino-electron scattering has a significant impact during collapse, and on the final shock formation location, when the older capture rates are used.
That the search for all neutrino weak interactions relevant to core-collapse supernovae is an ongoing activity is nowhere better illustrated than by the very recent example provided by Bollig et al (2017), whose work illuminated the importance of including muons and neutrino-muon weak interactions in core-collapse supernova models. Past models assumed that the population of muons in the stellar core during collapse, bounce, and the post-bounce neutrino shock reheating epoch would remain low given the large rest mass of the muon. Bollig et al. point out that such arguments are not well motivated. The electron chemical potential in the proto-neutron star at this time exceeds the muon rest mass, and the core temperature is large, as well. In the context of two-dimensional supernova models using the VERTEX code and initiated from a 20 M⊙ progenitor, they demonstrated that significant populations of muons are in fact produced and, more importantly, that the inclusion of muons in their supernova models impacted the outcomes quantitatively in all cases and even qualitatively in some cases, depending on the nuclear equation of state used. For the SFHo equation of state, models with muons exhibited explosion whereas counterpart models without them did not. For models with the LS220 equation of state, models with muons exhibited earlier explosions, indicating that explosion was facilitated in these models. Bollig et al.'s results are encapsulated in Fig. 11.

Fig. 10: Plots of velocity, density, entropy, temperature, electron and lepton fraction, and pressure at core bounce across five models with different input physics (Lentz et al, 2012a). The model "Base" includes all weak interactions and uses the modern, hybrid-model electron capture rates. Model "Base-NoNES" includes the same weak interaction physics, with one exception: neutrino-electron scattering (NES) is not included. Similarly, model "IPA" includes all weak interaction channels, as does model Base, but uses the Independent Particle Approximation (IPA) rates for nuclear electron capture. And model "IPA-NoNES" includes the same weak interaction physics except neutrino-electron scattering. Comparing models Base and Base-NoNES, no significant changes result when NES is excluded. On the other hand, comparing models IPA and IPA-NoNES, we reach a different conclusion: In this case, the inclusion of NES has a significant impact on core deleptonization and, consequently, on the mass of the inner core at bounce. These comparisons demonstrate there is an interplay between different neutrino opacities. An improvement in one opacity may render an otherwise important second opacity relatively unimportant.
We close this section with an emphasis on one final important point: Like all weak interaction cross sections, those the community has found to be important to core-collapse supernova evolution and has included in its supernova models have uncertainties associated with them. These can arise from experimental uncertainties in the few cases where the cross sections have been measured directly, or from uncertainties in the theory used to predict them, on which, in the end, the supernova modeling community must rely, given it is impossible to measure all relevant cross sections under all relevant thermodynamic conditions and at all relevant neutrino energies found in a supernova environment. Thus, it is important to explore the potential impact of such uncertainties on the quantitative and qualitative core-collapse supernova model outcomes.

Fig. 11: From Bollig et al (2017): in the upper left panel, the angle-averaged shock trajectories for several models, excluding and including muons, using the Steiner-Fischer-Hempel (SFHo) equation of state, are plotted. In the upper right panel, plotted are the results from the models that instead use the Lattimer-Swesty equation of state with bulk compression modulus K = 220 MeV (LS220). Here "Standard" indicates models without muons.
Case in point: the exploration of the impact of the uncertainty in the neutrino-nucleon cross section. Melson et al (2015a) performed two state-of-the-art three-dimensional simulations of the core-collapse supernova explosion of a 20 M⊙ progenitor. In one case, they included what the modeling community regarded at the time as the state-of-the-art neutrino weak interaction set, with no modification to any of the cross sections. In the other, they varied one of the cross sections, albeit a critical one: the cross section for neutrino scattering on nucleons. This cross section is one of the most important for neutrino transport below the neutrinospheres, as the leading opacity source and, as we saw above, as an additional heating source for matter within the proto-neutron star. Uncertainty in the cross section for neutrino-nucleon scattering arises from, among other things, uncertainty in the strangeness content of the nucleon, which can alter the coupling constants. In particular, Melson et al. varied the cross section by ∼10%, consistent with the experimental uncertainties, and in so doing found they could qualitatively alter the outcome of the model. When the standard weak interaction set was used, they did not obtain an explosion in the model. When they varied the neutrino-nucleon cross section, they did. The results are shown in Fig. 12. Of course, we have already seen that variations in a particular cross section can interact with variations in another. The only way the supernova modeling community can accurately assess the impact of variations in a single cross section is to vary all of them, in a statistically meaningful way; i.e., to perform a sensitivity study. And, obviously, this should be performed, at least ultimately, in the context of three-dimensional models. Unfortunately, the last requirement cannot be met at this time. Such a study would require that many three-dimensional models be performed, which at the moment, even with the significant computing power afforded the modeling community by today's supercomputers, is prohibitive. Such studies should be conducted, but they will have to wait for future supercomputing capabilities.

Fig. 12: In the uppermost left panel, the angle-averaged shock radius is plotted for two pairs of models, one pair for the two-dimensional case and one for the three-dimensional case. All cases are launched from a 20 M⊙ progenitor and were performed with the VERTEX code. Within each case, two simulations were performed, one using the standard weak interaction cross section for neutrino-nucleon scattering and the other including a correction for the strangeness content of the nucleon, which results in a correction to the coupling constants. In two dimensions, both models explode, with some quantitative differences observed in the shock trajectories. In the more important three-dimensional case, the outcomes with and without the correction are qualitatively different. Specifically, explosion is not obtained in their model unless the opacity correction is included.

Table 1 (excerpt): Non-Isoenergetic Scattering: ν_{e,µ,τ}, ν̄_{e,µ,τ} + e∓ → ν_{e,µ,τ}, ν̄_{e,µ,τ} + e∓ [Mezzacappa and Bruenn (1993b)]; ν_{e,µ,τ}, ν̄_{e,µ,τ} + n, p → ν_{e,µ,τ}, ν̄_{e,µ,τ} + n, p [Burrows and Sawyer (1998), Reddy et al (1998), Horowitz (2002)]. Pair Creation and Annihilation: e+ + e− ↔ ν_{e,µ,τ} + ν̄_{e,µ,τ} [Bruenn (1985)]. Additional references appearing in the table: Bruenn and Mezzacappa (1997), Horowitz (1997).

Table 2 (excerpt; muonic weak interactions): µ− ↔ e− + ν̄e + νµ; µ+ ↔ e+ + νe + ν̄µ; µ+ + n ↔ p + ν̄µ; µ+ + ν̄e ↔ e+ + ν̄µ; µ+ + νµ ↔ e+ + νe; ν_{e,µ,τ}, ν̄_{e,µ,τ} + µ± → ν_{e,µ,τ}, ν̄_{e,µ,τ} + µ±.

The relevant neutrino interactions
The previous section makes clear that the effort to ascertain which neutrino weak interactions are important to core-collapse supernova theory is an ongoing activity.
To date, the list included in Table 1 is what is deemed to be the essential set. Most, if not all, of the weak interactions in the list have been included in the state-of-the-art simulations whose underlying numerical methods have been the focus of this review. Motivated by the recent example documented in the previous section, in Table 2 we also include a list of the relevant neutrino weak interactions involving muons. At present, these have been included by only one group (Bollig et al, 2017) and, as discussed, have been found to be important by this group. In light of this, adoption of these weak interactions by other groups is certainly warranted.

Boltzmann collision term
We write the collision term as the sum of terms corresponding to the main processes listed in Table 1 (emission and absorption, scattering, and pair creation and annihilation):

C[f_s] = C_EmAb[f_s] + C_Scat[f_s] + C_Pair[f_s].    (167)

For each of the terms, we focus on its functional form, which is closely related to the computational complexity of including a particular weak interaction in a core-collapse supernova model. Each term, and hence each added interaction, warrants tailored consideration.
The term due to neutrino emission and absorption is written as

C_EmAb[f_s](p) = η_s (1 − f_s(p)) − χ_s f_s(p),    (168)

where η_s and χ_s are the emissivity and absorption opacity of neutrino species s and are assumed to be isotropic in the momentum-space angle (independent of ω), but depend on the neutrino energy ε. The blocking factor, 1 − f_s(p), is included to account for the Fermi-Dirac statistics of neutrinos, and suppresses neutrino emission when the phase-space occupancy is high (i.e., when f_s → 1). It is common to introduce χ̃_s = (η_s + χ_s), associated in this case with "stimulated absorption" (as opposed to the stimulated emission of photons), and to define f_{0,s} = η_s/χ̃_s, in which case Eq. (168) can be written in relaxation form:

C_EmAb[f_s] = χ̃_s ( f_{0,s} − f_s ).    (169)

In this form it is easy to see that the collision term drives the distribution function towards the equilibrium distribution, f_{0,s}. Also note that this interaction is local in momentum space; i.e., there is no coupling across momentum space. Neutrino-matter scattering (the second and third categories in Table 1) is described by

C_Scat[f_s](p) = (1 − f_s(p)) ∫ R_SCAT^IN(p, p′) f_s(p′) d³p′ − f_s(p) ∫ R_SCAT^OUT(p, p′) (1 − f_s(p′)) d³p′,    (170)

where R_SCAT^IN(p, p′) is the scattering rate from momentum p′ into p, and R_SCAT^OUT(p, p′) is the scattering rate out of momentum p into p′. When compared with the collision term in Eq. (169), the coupling in momentum space (due to the integral operators) increases the computational complexity of evaluating the collision operator. If momentum space is discretized into N_p bins, a brute-force evaluation of Eq. (170) for all p requires O(N_p²) operations. Note also the blocking factors in Eq. (170), which suppress scattering into high-occupancy regions of momentum space. The second category in Table 1 (coherent, isoenergetic scattering) is obtained as a simplification of Eq. (170) by letting

R_SCAT^IN(p, p′) = R_SCAT^OUT(p, p′) = R_Iso(ε, cos α) δ(ε − ε′) / ε²,

where cos α = p · p′ / (|p||p′|). For this type of interaction, with d³p′ = |p′|² d|p′| dω′, the collision term is given by

C_Iso[f_s](p) = ∫ R_Iso(ε, cos α) [ f_s(ε, ω′) − f_s(ε, ω) ] dω′,

which is considerably simplified relative to the scattering operator in Eq. (170); in particular, the blocking factors cancel for isoenergetic scattering.
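The quadratic cost and the role of the blocking factors can be made concrete with a minimal NumPy sketch of a discretized scattering operator of the form of Eq. (170). The function name, kernel values, and quadrature weights here are all invented for illustration, not taken from any production code:

```python
import numpy as np

def scattering_collision_term(f, R_in, R_out, w):
    """Brute-force discretized scattering operator with Fermi blocking.

    f     : (N_p,) occupation numbers, 0 <= f <= 1
    R_in  : (N_p, N_p) rate from bin q into bin p, stored as R_in[p, q]
    R_out : (N_p, N_p) rate out of bin p into bin q
    w     : (N_p,) momentum-space quadrature weights
    Evaluating all N_p entries costs O(N_p^2): every bin couples to every bin.
    """
    gain = (1.0 - f) * (R_in @ (w * f))        # (1 - f_p) * sum_q R_in[p,q] w_q f_q
    loss = f * (R_out @ (w * (1.0 - f)))       # f_p * sum_q R_out[p,q] w_q (1 - f_q)
    return gain - loss

rng = np.random.default_rng(0)
N_p = 16
R = rng.random((N_p, N_p))
w = np.ones(N_p)
f = rng.uniform(0.0, 1.0, N_p)

# When R_in[p, q] = R_out[q, p] (the in-out invariance), scattering
# conserves the total neutrino number: the weighted sum of C vanishes.
C = scattering_collision_term(f, R, R.T, w)
total_rate = w @ C
print(total_rate)  # ~0 to round-off
```

The matrix-vector products make the O(N_p²) coupling explicit; in multidimensional codes this cost is incurred per spatial cell, per species, per (implicit) solver iteration.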
Finally, neutrino-antineutrino pair creation and annihilation (e.g., from electron-positron pairs; the fourth category in Table 1) is described by

C_Pair[f_s](p) = (1 − f_s(p)) ∫ R_PAIR^IN(p, p′) (1 − f̄_s(p′)) d³p′ − f_s(p) ∫ R_PAIR^OUT(p, p′) f̄_s(p′) d³p′,    (173)

where R_PAIR^IN(p, p′) and R_PAIR^OUT(p, p′) are the neutrino-antineutrino pair production and annihilation rates, respectively, and f̄_s is the antineutrino distribution function. We note that the functional form of the collision term for the last of the pair processes included in Table 1 is not represented by the functional form for pair creation and annihilation presented here. In this particular case, both in-states and both out-states correspond to neutrinos, which, when treated without approximation, results in a collision term involving four distribution functions. This non-approximate treatment of the process has yet to be implemented in core-collapse supernova models. As a result, we do not include its functional form here.
All of the above rates (η_s, χ_s, R_SCAT^{IN/OUT}, and R_PAIR^{IN/OUT}) depend on the thermodynamic state of the stellar core fluid (e.g., ρ, T, and Y_e).
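In practice, rates with this state dependence are typically precomputed in tables over (ρ, T, Y_e) and interpolated at run time. A hedged sketch of trilinear table interpolation follows; the grids, table values, and function name are all invented for illustration:

```python
import numpy as np

def trilinear(table, grids, point):
    """Trilinear interpolation of table values on tensor-product grids.

    table : (Nr, Nt, Ny) values, e.g., an opacity tabulated in (rho, T, Ye)
    grids : tuple of three 1-D, strictly increasing coordinate arrays
    point : the (rho, T, Ye) at which to interpolate
    """
    idx, frac = [], []
    for g, x in zip(grids, point):
        i = int(np.clip(np.searchsorted(g, x) - 1, 0, len(g) - 2))
        idx.append(i)
        frac.append((x - g[i]) / (g[i + 1] - g[i]))
    val = 0.0
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                w = (frac[0] if di else 1 - frac[0]) \
                  * (frac[1] if dj else 1 - frac[1]) \
                  * (frac[2] if dk else 1 - frac[2])
                val += w * table[idx[0] + di, idx[1] + dj, idx[2] + dk]
    return val

# check: trilinear interpolation reproduces a linear function exactly
r, t, y = np.linspace(0, 1, 5), np.linspace(0, 1, 4), np.linspace(0, 1, 3)
tab = 2.0 * r[:, None, None] + 3.0 * t[None, :, None] + 4.0 * y[None, None, :]
value = trilinear(tab, (r, t, y), (0.3, 0.5, 0.25))
print(value)  # 2*0.3 + 3*0.5 + 4*0.25 = 3.1
```

Production codes interpolate in log ρ and log T and must also handle the energy (and, for kernels, energy-pair) dimensions, but the cost structure is the same: eight (or more) table lookups plus weights per evaluation.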
Symmetries in some of the collision kernels exist (e.g., Bruenn, 1985; Cernohorsky, 1994), which should be leveraged in computations. First, because the total number of neutrinos is conserved in neutrino-matter scattering, the following in-out invariance holds:

R_SCAT^IN(p, p′) = R_SCAT^OUT(p′, p).

Second, when the neutrino distribution function equals the local Fermi-Dirac distribution,

f_eq(ε) = 1 / ( e^{(ε − µ_ν)/kT} + 1 ),    (175)

where T is the matter temperature and µ_ν is the equilibrium neutrino chemical potential, the net energy and momentum transfer between neutrinos and matter due to scattering must vanish. Thus, requiring

∫ g(p) C_Scat[f_eq](p) d³p = 0

for an arbitrary function g(p), gives

R_SCAT^IN(p, p′) = e^{−(ε − ε′)/kT} R_SCAT^OUT(p, p′),

where Eq. (175) is used in the rightmost expression.
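The detailed-balance property, R^IN(ε, ε′) = e^{−(ε − ε′)/kT} R^OUT(ε, ε′), can be checked numerically: if the "in" kernel is constructed from the "out" kernel this way, a Fermi-Dirac distribution is a fixed point of the energy-space scattering operator. A small sketch, with an arbitrary model kernel, angles suppressed, and kT and µ_ν in arbitrary illustrative units:

```python
import numpy as np

# energy grid and an equilibrium Fermi-Dirac distribution (units arbitrary)
eps = np.linspace(1.0, 30.0, 16)
kT, mu_nu = 5.0, 10.0
f_eq = 1.0 / (np.exp((eps - mu_nu) / kT) + 1.0)

# start from an arbitrary model "out" kernel...
R_out = np.exp(-0.01 * (eps[:, None] - eps[None, :]) ** 2)
# ...and impose detailed balance: R_in(e, e') = exp(-(e - e')/kT) * R_out(e, e')
R_in = np.exp(-(eps[:, None] - eps[None, :]) / kT) * R_out

# gain and loss terms of the (blocked) scattering operator at equilibrium
gain = (1.0 - f_eq) * (R_in @ f_eq)
loss = f_eq * (R_out @ (1.0 - f_eq))
residual = np.max(np.abs(gain - loss))
print(residual)  # vanishes to round-off: equilibrium is a fixed point
```

Discretizations that do not preserve this relation exactly (e.g., after interpolating tabulated kernels) can exhibit spurious heating or cooling in opaque regions, which is why enforcing such symmetries discretely is worthwhile.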

Two-moment collision terms
Collision terms for the two-moment model are derived by taking angular moments of the collision term in Eq. (167). Such terms have been discussed in the context of multidimensional two-moment models by, e.g., Shibata et al (2011). For completeness, we list two-moment collision terms corresponding to angular moments of Eqs. (169), (170), and (173) here. Considering the two-moment models delineated in Sect. 4.7.3, the relevant angular moments of the Boltzmann collision term are

(1/4π) ∫ C[f_s] dω    and    (1/4π) ∫ C[f_s] ℓ_j dω,

where ℓ_j(ω) are the components of the unit vector along the neutrino propagation direction. (The first of these terms also appears in the one-moment model discussed in Sect. 4.7.5; cf. Eq. (155).)

Emission/absorption. For emission and absorption, the evaluation is straightforward since the emissivity and opacity are isotropic in momentum-space angle. The zeroth moment gives

(1/4π) ∫ C_EmAb[f_s] dω = χ̃_s ( D_{0,s} − D_s ),

where the zeroth moment of the equilibrium distribution is defined as

D_{0,s} = (1/4π) ∫ f_{0,s} dω = f_{0,s}.

The first moment gives

(1/4π) ∫ C_EmAb[f_s] ℓ_j dω = − χ̃_s I_{s,j},

since the angular moment of the isotropic emissivity vanishes.
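Because χ̃_s can exceed 1/Δt by many orders of magnitude in opaque regions, the emission/absorption moments are a natural candidate for implicit integration. Since the term is local and linear in (D_s, I_{s,j}), a backward-Euler update has a closed form. A sketch, with all names and values illustrative:

```python
# Backward-Euler update for the two-moment emission/absorption terms:
#   (D_new - D)/dt = chi_t * (f0 - D_new)
#   (I_new - I)/dt = -chi_t * I_new
# Both solve in closed form because the source is local and linear.

def em_ab_implicit_update(D, I, chi_t, f0, dt):
    D_new = (D + dt * chi_t * f0) / (1.0 + dt * chi_t)
    I_new = I / (1.0 + dt * chi_t)
    return D_new, I_new

# stiff limit: chi_t * dt >> 1 drives D -> f0 and I -> 0 in a single step,
# without the time-step restriction an explicit update would impose
D1, I1 = em_ab_implicit_update(D=0.0, I=0.3, chi_t=1.0e8, f0=0.7, dt=1.0)
print(D1, I1)
```

The same structure (division by 1 + Δt χ̃) reappears inside the nonlinear iterations of full neutrino-matter coupling solvers, where χ̃ and f_0 themselves depend on the evolving matter state.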
Angular kernel approximations. To incorporate scattering and pair processes in the two-moment model, following Bruenn (1985), the kernels are expanded in a Legendre series up to linear order; e.g.,

R_SCAT^{IN/OUT}(ε, ε′, Ω) ≈ (1/2) Φ_0^{IN/OUT}(ε, ε′) + (3/2) Φ_1^{IN/OUT}(ε, ε′) Ω,    (182)

where Ω is the cosine of the scattering angle between ω and ω′. From the orthogonality of the Legendre polynomials, the expansion coefficients are then evaluated from the kernels as

Φ_l^{IN/OUT}(ε, ε′) = ∫_{−1}^{+1} R_SCAT^{IN/OUT}(ε, ε′, Ω) P_l(Ω) dΩ.

Terms beyond linear can be included in the expansion of the kernel in Eq. (182) at the expense of a more complicated collision operator for the two-moment model. Smit and Cernohorsky (1996) investigated the effect of including the quadratic term for neutrino-electron scattering in a configuration during the infall phase of stellar core collapse. They found that including the quadratic term results in a better fit to the scattering kernel, but when comparing stationary-state transport solutions with and without the quadratic term, they found no significant difference in relevant quantities such as the neutrino number density, flux, and transfer rates of lepton number, energy, or momentum to the stellar fluid. We also note that Just et al (2018), in their Appendix A, provide expressions for pair processes that include the quadratic term in the Legendre expansion of the kernels.

Scattering. Employing the expansion in Eq. (182) for the scattering operator gives

(1/4π) ∫ C_Scat[f_s] dω = ∫ dV_ε′ { (1/2) Φ_0^IN (1 − D_s(ε)) D_s(ε′) − (1/2) Φ_0^OUT D_s(ε) (1 − D_s(ε′)) − (3/2) ( Φ_1^IN − Φ_1^OUT ) I_s^j(ε) I_{s,j}(ε′) }

for the zeroth moment (recall that dV_ε = 4πε² dε), and

(1/4π) ∫ C_Scat[f_s] ℓ_i dω = ∫ dV_ε′ { − (1/2) [ Φ_0^IN D_s(ε′) + Φ_0^OUT (1 − D_s(ε′)) ] I_{s,i}(ε) + (1/2) Φ_1^IN I_{s,i}(ε′) − (3/2) ( Φ_1^IN − Φ_1^OUT ) K_{s,i}{}^j(ε) I_{s,j}(ε′) }

for the first moment. Here we have used

(1/4π) ∫ ℓ_i ℓ_j dω = δ_{ij} / 3,

and the dependence of the Φ_l^{IN/OUT} on (ε, ε′) is suppressed.

Pair processes. Employing the kernel expansion in Eq. (182) for the neutrino-antineutrino pair creation and annihilation operator in Eq. (173) gives

(1/4π) ∫ C_Pair[f_s] dω = ∫ dV_ε′ { (1/2) Φ_0^IN (1 − D_s(ε)) (1 − D̄_s(ε′)) − (1/2) Φ_0^OUT D_s(ε) D̄_s(ε′) + (3/2) ( Φ_1^IN − Φ_1^OUT ) I_s^j(ε) Ī_{s,j}(ε′) }

for the zeroth moment of the collision operator, and

(1/4π) ∫ C_Pair[f_s] ℓ_i dω = ∫ dV_ε′ { − (1/2) [ Φ_0^IN (1 − D̄_s(ε′)) + Φ_0^OUT D̄_s(ε′) ] I_{s,i}(ε) − (1/2) Φ_1^IN Ī_{s,i}(ε′) + (3/2) ( Φ_1^IN − Φ_1^OUT ) K_{s,i}{}^j(ε) Ī_{s,j}(ε′) }

for the first moment, where the Φ_l^{IN/OUT} now denote the Legendre coefficients of the pair kernels R_PAIR^{IN/OUT}. Here, D̄_s and Ī_{s,j} are the zeroth and first moments of the antineutrino distribution function f̄_s.
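The coefficients Φ_0 and Φ_1 are one-dimensional angular integrals that can be evaluated with Gauss-Legendre quadrature, which is exact for polynomial kernels. A sketch using an invented polynomial model kernel (all values illustrative):

```python
import numpy as np

# Expand a model kernel R(Omega) in Legendre polynomials:
#   R(Omega) ~ (1/2) Phi_0 + (3/2) Phi_1 * Omega,
#   Phi_l = int_{-1}^{1} R(Omega) P_l(Omega) dOmega.
nodes, weights = np.polynomial.legendre.leggauss(32)

def R_model(omega):
    # made-up, mildly forward-peaked angular shape, for illustration only
    return 1.0 + 0.6 * omega + 0.3 * omega**2

phi_0 = np.sum(weights * R_model(nodes))           # integral of R * P_0
phi_1 = np.sum(weights * R_model(nodes) * nodes)   # integral of R * P_1

print(phi_0, phi_1)  # 2.2 and 0.4 for this kernel
```

In practice this quadrature is performed for every (ε, ε′) pair and tabulated, so the cost of kernel preparation scales as the square of the number of energy groups.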

Neutrino-matter coupling
In coupling neutrinos and matter, we are primarily concerned with lepton and fourmomentum exchange. The neutrino lepton current density is where N ν s is the neutrino four-current density for neutrino species s, defined as in Eq. (70) with distribution function f s , and g s is the lepton number of neutrino species s (g s = +1 for neutrinos, and g s = −1 for antineutrinos). From the electron number conservation equation, Eq. (6), and the neutrino number conservation equation, Eq. (80) (one for each neutrino species), we obtain Lepton number conservation demands that the source term of the right-hand side of Eq. (6) takes the form Note that, for simplicity of this exposition, we have assumed that only electron neutrinos and antineutrinos are involved in lepton exchange with the fluid, but see Bollig et al (2017) for a discussion of additional lepton exchange channels when muons are included as a fluid component. When muons are included, an additional equation for the muon number density, similar to Eq. (6), must be evolved, and the definition of the neutrino lepton current density in Eq. (189) must be extended to include contributions from muon neutrinos. (Technically, similar extensions should be done to accommodate tauons, but, because of their large rest mass, they can be neglected as an agent for lepton number exchange with the fluid (Bollig et al, 2017).) The total neutrino stress-energy tensor is defined as where the stress-energy tensor for neutrino species s, T µν s , is defined as in Eq. (71) with distribution function f s and N SP is the total number of neutrino species. Using Eqs. (5) and (82), the divergence of the total (fluid plus neutrino) stress-energy is Then, four-momentum conservation in neutrino-matter interactions demands the righthand side of Eq. 
(5) take the form dictated by the neutrino interactions.

To illustrate the complexity of the neutrino-matter coupling problem further, we consider it in the space-homogeneous case using the number conservative two-moment model discussed in Sect. 4.7.3. The angular moments of the neutrino distribution function of species s evolve according to Eqs. (195) and (196), where we use the ordinary derivative d t = d/dt to indicate that we consider the space-homogeneous case, in which physical variables are functions of time only. The right-hand sides of Eqs. (195) and (196) will include the contributions from emission and absorption, scattering, pair processes (as discussed above), and other processes. This sub-problem is typically considered in numerical implementations where neutrino-matter interactions are solved for in a time-implicit fashion, e.g., as is done within an implicit-explicit framework for integrating the full neutrino-radiation hydrodynamics system forward in time, which we will discuss in more detail later (see, e.g., Sect. 6.5).

Coupled to the transport equations are the fluid evolution equations, which are combined with the transport equations and formulated as constraints due to mass, four-momentum, and lepton number conservation, where N e = DY e /m B , and where the Eulerian angular moments F s, j , E s , and N s are defined in Sect. 4.7.2. The Eulerian neutrino number density N s is expressed in terms of the Lagrangian moments in Eq. (97), which is also the expression inside the time derivative on the left-hand side of Eq. (195). The Eulerian momentum and energy can also be written as combinations of the quantities in the time derivatives on the left-hand side of Eqs.
(195) and (196): Thus, adopting a closure for the radiation moments, writing K s,i j in terms of D s and I s, j as discussed in Sect. 4.7.4, and adopting an equation of state for the fluid, p = p(ρ, e, Y e ), the system given by Eqs. (195)-(196) and (197)-(200) can be solved for the radiation moments D s and I s, j and the fluid states ρ, v i , e, and Y e . This is a nonlinear system of equations, where the nonlinearities are due to the radiation moment closure, the fluid equation of state, the dependence of D, S j , and τ fluid on ρ, v i , e, and Y e , and the nonlinear dependence of the neutrino opacities discussed in Sect. 5.2.2 on the thermodynamic state ρ, e, and Y e . Modeling this four-momentum and lepton exchange between neutrinos and the fluid, with all the relevant neutrino-matter interactions included, constitutes the major computational cost of core-collapse supernova models.
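To convey the structure of the implicit coupling solve without the full physics, consider a deliberately simplified, space-homogeneous toy problem: a single gray radiation energy J relaxing toward an equilibrium a T⁴ while the matter energy e compensates, stepped with backward Euler and solved by Newton iteration with a finite-difference Jacobian. The rates, the toy equation of state T = e, and the function name are all invented for illustration and bear no quantitative relation to the actual opacities:

```python
import numpy as np

def backward_euler_coupling(J0, e0, dt, a=1.0, chi0=5.0, tol=1e-12):
    """Toy space-homogeneous neutrino-matter coupling, solved implicitly
    with Newton's method: the radiation energy J relaxes toward the
    equilibrium a*T^4 at a rate chi(T) = chi0*T^2, while the matter
    energy e compensates, so J + e is conserved by construction."""
    def residual(u):
        J, e = u
        T = max(e, 1e-30)                # toy equation of state: T = e
        rate = chi0 * T**2 * (a * T**4 - J)
        return np.array([(J - J0)/dt - rate,
                         (e - e0)/dt + rate])

    u = np.array([J0, e0], dtype=float)
    for _ in range(50):
        F = residual(u)
        if np.linalg.norm(F) < tol:
            break
        # finite-difference Jacobian of the 2x2 nonlinear system
        Jac = np.empty((2, 2))
        h = 1e-7
        for j in range(2):
            du = np.zeros(2)
            du[j] = h * max(1.0, abs(u[j]))
            Jac[:, j] = (residual(u + du) - F) / du[j]
        u = u - np.linalg.solve(Jac, F)
    return u

J, e = backward_euler_coupling(J0=0.0, e0=1.0, dt=0.1)
```

In production codes the same pattern recurs at vastly larger scale: the unknowns are the moments of every species, angle, and energy group plus the local fluid state, and the linearized system is solved at every spatial point and time step.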
6 Phase-space discretizations and implementations

6.1 Boltzmann kinetics: spatial and energy finite differencing plus discrete ordinates

6.1.1 Phase-space coordinates

Fig. 13: Diagram illustrating the spherical momentum-space coordinates used in most neutrino radiation hydrodynamics implementations. The angle θ p is the angle between the outgoing radial direction and the neutrino propagation direction, at the neutrino's location. The neutrino direction cosine, µ ≡ cos θ p , is defined in terms of it. φ p is the associated momentum-space azimuthal angle. In spherical symmetry, the distribution function is only a function of µ, not φ p .
In a spherical spatial coordinate system, the neutrino's direction of propagation is specified relative to the basis vectors {e r , e θ , e φ } (see Fig. 13), where n r = cos θ p . This can be reexpressed in terms of the direction cosine µ ≡ cos θ p . When spherical spatial and momentum-space coordinates are used, as defined above, the neutrino distribution function has the following dependencies for no imposed symmetry, axisymmetry, and spherical symmetry, respectively, where in all three cases E is the neutrino energy.
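For concreteness, a short sketch (ours) maps the momentum-space angles (θ p , φ p ) to the components of the unit propagation vector relative to {e r , e θ , e φ }, using the component ordering quoted later in this review for the specific-intensity solver:

```python
import numpy as np

def propagation_direction(theta_p, phi_p):
    """Components of the unit propagation vector Omega relative to the
    spherical basis {e_r, e_theta, e_phi} at the neutrino's location:
    (cos theta_p, sin theta_p cos phi_p, sin theta_p sin phi_p)."""
    return np.array([np.cos(theta_p),
                     np.sin(theta_p) * np.cos(phi_p),
                     np.sin(theta_p) * np.sin(phi_p)])

n = propagation_direction(np.pi/3, np.pi/4)
mu = n[0]            # direction cosine: mu = cos(theta_p)
```

In spherical symmetry only the first component, µ, survives as an argument of the distribution function.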

Spherical symmetry
We illustrate the approach used by Mezzacappa and Bruenn (1993a); Mezzacappa and Messer (1999); Liebendörfer et al (2004); Mezzacappa et al (2004, 2005) in the context of a model that assumes Newtonian gravity and is valid to O(v/c). The fully general relativistic case is detailed in Liebendörfer et al (2004). In the Newtonian-gravity, O(v/c) case, the conservative neutrino Boltzmann equation is written in terms of F ≡ f /ρ, where m is the Lagrangian mass coordinate, µ is the neutrino direction cosine, as defined above, and E is the neutrino energy. In spherical symmetry, F = F(t, m, µ, E).

After the time derivative term on the left-hand side of the Boltzmann equation, the remaining terms correspond to the transport of neutrinos in all three dimensions of phase space: (m, µ, E). The first term corresponds to spatial transport of neutrinos through the stellar core layers. As a neutrino propagates through the core, its direction cosine, defined in spherical coordinates with respect to the outward radial vector at its position, changes. This is captured by the second term. The third and fourth terms capture the transport of neutrinos in angle and energy due to relativistic (in this case, O(v/c)) angular aberration and frequency shift, respectively.

On the right-hand side, the collision term includes (1) thermal emission, with emissivity j; (2) absorption, with absorption opacity χ̃ ≡ j + χ, which accounts for stimulated absorption; (3) isoenergetic scattering, with scattering kernel R IS ; (4) non-isoenergetic scattering, with scattering kernel R NIS ; and (5) neutrino pair creation and annihilation, with pair-production kernel R PAIR . The distribution function for antineutrinos is designated by F̄. While the left-hand side of the Boltzmann equation is linear in the distribution functions, it is important to note that the right-hand side is not.
The nonlinearity of the right-hand side is evident in the blocking factors, which reflect the boundedness of the neutrino distribution functions: f lies in the range [0, 1]. There is an additional nonlinearity that is implicit in the equation. The distribution functions are updated together with the matter internal energy and electron fraction, owing to the energy and lepton number exchange between the neutrinos and the matter that results from the above processes. In turn, the neutrino emissivity, opacity, and scattering kernels depend on the thermodynamic state of the matter, which is specified by the matter's density, internal energy, and electron fraction. Thus, a simultaneous linearization of the discretized equations of neutrino radiation hydrodynamics in the neutrino distribution functions, the matter internal energy, and the matter electron fraction is required.
For simplicity, we define the zone-center indices for each of the phase-space dimensions with primed indices: i′ ≡ i + 1/2, j′ ≡ j + 1/2, and k′ ≡ k + 1/2. We focus first on the spatial advection term, the first of the O(1) terms. In the free-streaming limit, the advected neutrino number in a time step (as measured by a comoving observer) can be large relative to the neutrino number in a zone (mass shell). Upwind differencing of the advection term is appropriate to limit destabilizing errors in the fluxes. For discrete direction cosines, µ j , the direction of the neutrino "wind" is given by the sign of µ j . On the other hand, in diffusive conditions, the neutrino flux may be orders of magnitude smaller than the nearly isotropic neutrino density in a zone. In this situation, an asymmetric differencing can lead to an overestimation of the first angular moment because of improper cancellations among the contributions of the nearly isotropic neutrino radiation field. As a result, Mezzacappa et al. interpolate between upwind differencing in free-streaming regimes and centered differencing in diffusive regimes. Specifically, they use coefficients, β i,k , defined in terms of λ i,k , the angle-averaged neutrino mean free path, to discretize the spatial advection term for outward-propagating neutrinos (µ j > 0) and for inward-propagating neutrinos (µ j < 0).

Next, focusing on the angular advection term, Mezzacappa et al. use a discretization in which the differencing of the coefficients, ζ = 1 − µ 2 , is defined in terms of the weights w j corresponding to the Gaussian quadrature values used for µ j . The discretization of the coefficient, 1/r, of the angular advection term is set such that, in an infinite homogeneous medium in thermal equilibrium, ρF = f eq = constant is a solution (Mezzacappa and Bruenn, 1993a). The angular integration of the term ∂ [(1 − µ 2 )F]/r∂ µ produces the zeroth and second angular moments of the neutrino distribution function.
Its finite difference representation is therefore not as sensitive to cancellations in the diffusive limit as the differencing of the spatial advection term, and upwind differencing is justified: the angular "wind" always points towards µ = 1. However, for reasons of completeness and consistency, Mezzacappa et al. use centered differencing in the diffusive regime here as well, with angular coefficients γ i′,k′ ≡ β i′,k′ .

Finally, Mezzacappa et al. discretize the last of the O(1) terms in the Boltzmann equation, the collision term. It is important to note that the collision term is differenced implicitly with respect to time: all of the neutrino and antineutrino distribution functions in Eq. (225) are evaluated at the new time step. Given the implementation of discrete ordinates in angle, the angular integrals in the collision term are evaluated with Gaussian quadrature, using the same quadrature set used for the angular discretizations of the distribution function and the terms on the left-hand side of the Boltzmann equation.
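Returning to the spatial advection differencing, the interpolation between upwind and centered fluxes can be sketched as follows. The blend coefficient below is an illustrative stand-in for the published β i,k (whose exact form we do not reproduce); it is constructed only to exhibit the advertised limits, β → 1/2 when λ ≪ Δr (centered, diffusive) and β → 1 when λ ≫ Δr (upwind, free streaming):

```python
import numpy as np

def interface_value(f_L, f_R, mu, lam, dr):
    """Interface value of the distribution function for the spatial
    advection flux, blending upwind and centered differencing via a
    mean-free-path-based coefficient beta:
        beta -> 1   for lam >> dr (pure upwind, free streaming)
        beta -> 1/2 for lam << dr (centered, diffusive regime)
    The specific blend below is illustrative, not the published formula."""
    beta = (dr + 2.0*lam) / (2.0*(dr + lam))
    if mu > 0:                      # outward "wind": upwind donor is f_L
        return beta*f_L + (1.0 - beta)*f_R
    else:                           # inward "wind": upwind donor is f_R
        return beta*f_R + (1.0 - beta)*f_L

# Diffusive limit: nearly the centered average of the two zone values
v_diff = interface_value(1.0, 0.8, +0.5, lam=1e-6, dr=1.0)
# Free-streaming limit: nearly pure upwind (the donor-cell value)
v_free = interface_value(1.0, 0.8, +0.5, lam=1e6, dr=1.0)
```

The centered limit avoids the spurious first-moment contributions that pure upwinding generates when the radiation field is nearly isotropic.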

Challenges: relativistic effects and the simultaneous conservation of lepton number and energy
Define J N and H N to be the zeroth and first angular number moments of the distribution function. Integration of Eq. (216) over µ and E, with E 2 as the measure of integration, gives the evolution equation for J N , Eq. (228). One more integration, over rest mass m from the center of the star to its surface, gives the evolution equation for the total neutrino (lepton) number. It is clear from Eq. (228) that the total neutrino (lepton) number in the computational domain will change only as a result of an inflow or an outflow of neutrinos at the boundary of the domain and/or as a result of the exchange of lepton number between the neutrinos and the matter.

Now, in the same way, define the energy moments. Eq. (232) is the evolution equation for the comoving-frame neutrino energy per gram, and Eq. (233) is the evolution equation for the comoving-frame neutrino momentum per gram. Combining the two, to O(v/c), one obtains the laboratory-frame neutrino energy conservation equation, Eq. (234). Note that J E + vH E is the laboratory-frame neutrino energy per gram as expressed in terms of the comoving-frame moments J E and H E . Similarly, vK E + H E is the laboratory-frame flux per gram expressed in terms of comoving-frame moments. Integration of Eq. (234) over enclosed mass leads to an equation for total neutrino energy conservation. It is clear that, with the exception again of fluxes at the boundary of the computational domain, energy exchange with the matter due to collisions (the terms involving j and χ), and neutrino stress (the term involving vχ), the total neutrino energy as defined in the laboratory frame (where one can speak of conservation of energy) is conserved.
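The structural reason number conservation holds, boundary fluxes aside, is the telescoping of interface fluxes in a conservative discretization. A minimal one-dimensional illustration (ours, not the supernova scheme itself) makes the point: with closed boundaries, the total count is preserved to round-off regardless of how far the profile is advected, mirroring the statement following Eq. (228) that the total changes only via boundary fluxes and collisions:

```python
import numpy as np

def upwind_step(f, v, dx, dt):
    """One conservative finite-volume upwind step for d_t f + v d_x f = 0
    (v > 0), with zero flux at both boundaries. Summing the update over
    all zones telescopes to the boundary fluxes alone, so the total
    particle number is preserved exactly for a closed domain."""
    flux = np.zeros(len(f) + 1)
    flux[1:-1] = v * f[:-1]          # upwind (donor-cell) interface fluxes
    # flux[0] = flux[-1] = 0: closed boundaries
    return f - (dt/dx) * (flux[1:] - flux[:-1])

f = np.exp(-np.linspace(-3, 3, 64)**2)   # initial profile
N0 = f.sum()
for _ in range(100):
    f = upwind_step(f, v=1.0, dx=1.0, dt=0.5)
N1 = f.sum()
```

The same telescoping argument underlies the discrete lepton number and energy budgets discussed above, but there the cancellations must be engineered across several coupled terms rather than a single flux divergence.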
In arriving at Eq. (234), the expressions (∂ lnρ/∂t + 2v/r)K E and K E 4πr 2 ρ ∂v/∂m in Eqs. (232) and (233) cancel, given the continuity equation. To achieve global energy conservation in the discrete limit, one must ensure that these cancellations occur in the finite differencing as well. Identifying the origin of the terms (∂ lnρ/∂t + 2v/r)K E and K E 4πr 2 ρ ∂v/∂m, we find that (∂ lnρ/∂t + 2v/r)K E originates from the zeroth moment of the energy advection term, Eq. (236), in the Boltzmann equation (216), and K E 4πr 2 ρ ∂v/∂m originates from the first moment of the spatial advection term in the same equation. The terms J E − K E v/r also stem from the zeroth moment of the energy advection term, Eq. (236), and the first moment of the angular advection term, with its 1/r coefficient, in the Boltzmann equation (216). The requirement of global energy conservation in the laboratory frame therefore imposes interdependencies on the finite differencing of the O(1) spatial and angular advection terms, Eqs. (237) and (238), and the O(v/c) energy advection term, Eq. (236). In particular, given a choice of finite differencing of the O(1) terms on the left-hand side of the Boltzmann equation (216), conservation of energy requires "matched" finite differencing for the corresponding coefficients. A bar over a variable indicates that its value is to be taken at time step t n . As noted, the total energy equation is obtained when summing Eqs. (232) and (233) and then integrating over m (the integration in µ and E has already taken place). In this sequence of integrations (over µ, E, and then m), the term involving A in Eq. (232) cancels with the term −4πr 2 ρK E dv/dm in Eq. (233).
Identifying the appropriate velocity gradient term in Eq. (241) and focusing on the appropriate integration (in this case, over m), Mezzacappa et al. require a discrete cancellation [below, the term involving A comes from the zeroth moment of the first term in the observer correction (236), after an integration by parts in energy, E; the term involving the velocity gradient is the next-to-last term in Eq. (241), corresponding to the first moment of the spatial propagation term in the Boltzmann equation (216)], which gives one condition for j ≤ jmax/2 and another for j ≥ jmax/2 + 1. (The case i = imax − 1 is a boundary case, the details of which are not important for the present discussion.) Similarly, defining B accordingly, and again focusing on the appropriate integration (in this case, over µ), Mezzacappa et al. require the analogous cancellation [below, the term involving B comes from the zeroth moment of the second term in brackets in the energy advection term (236), after an integration by parts in angle, µ; the second term is the last term in Eq. (241), corresponding to the first moment of the angular advection term]. Given the necessary matched finite differencing for A and B, Mezzacappa et al. then consider the finite difference representation of the energy advection term (236). Using the definitions (239) and (240), they rewrite the equation corresponding to the change in the distribution function due to relativistic energy advection. They then transform from the "Eulerian" variable x = E to the "Lagrangian" variable y = E/R f , and in so doing they transform Eq. (248). For a small section of energy phase space, the result has the following interpretation: the neutrinos in the energy interval E 2 ∆E move along constant E/R f in the phase space of a comoving observer. Given this, Mezzacappa et al.
are able to evolve any neutrino quantity in this phase-space interval, in particular, the neutrino specific energy, dε = E 3 F∆E. They then consider a neutrino energy group k′, with neighboring groups k′ + dk, dk = ±1. From Eq. (252), the number of neutrinos before energy advection, F i′, j′,k′ E 2 k′ dE k′ , is equal to the number of neutrinos after advection. The distribution of these neutrinos in energy after the advection yields a diminished number of neutrinos in the neighboring group k′ + dk, and Eq. (253) defines a similar correction for the specific neutrino energy in group k′, where A i′,k′ and B i′, j′,k′ are given by Eqs. (243), (244), and (247). Equations (254) and (255) can be solved for n− i′, j′,k′ and n+ i′, j′,k′ , which leads, given that the change in the neutrino distribution function in group k′ due to energy advection can be expressed in terms of these quantities, to the finite difference representation of the energy advection term in the Boltzmann equation (216).

Mezzacappa et al. are then left with the task of finding a finite difference representation for the angular advection term in Eq. (216). Their finite differencing of the energy advection term conserves specific neutrino energy; their finite differencing of the angular advection term is designed to conserve specific neutrino luminosity. With ζ = 1 − µ 2 , the angular aberration term can be rewritten accordingly. As before, Mezzacappa et al. seek an analytic solution, to Eq. (259) in this case. To do so, they convert the prefactor of the angular derivative to a time derivative for the quantity R a = r 3 ρ. They then rewrite Eq. (259) in terms of the "Lagrangian" variable y = ζ −1/2 µ/R a instead of the "Eulerian" variable x = µ. After multiplication by ζ µ, the interpretation of Eq. (259) is clear: the neutrinos initially residing in the interval (1 − 3µ 2 )∆µ = ζ 2 µ 2 − ζ 1 µ 1 are shifted by angular aberration along constant µ/(ζ 1/2 R a ) in the phase space of a comoving observer: Given Eq.
(262), Mezzacappa et al. are in turn able to evaluate the change in other neutrino quantities, in particular, the specific neutrino luminosity, dℓ = (1 − 3µ 2 )µF∆µ. Identifying the bin size (1 − 3µ 2 j )∆µ j with the Gaussian quadrature weights w j , Eq. (262) leads to their condition for neutrino number conservation, and Eq. (263) leads to their prescription for the numerical evolution of the specific luminosity, where d j = ±1. The change in the neutrino distribution from angular aberration then leads to the finite difference representation of the angular aberration term in the Boltzmann equation (216), where d j = +1 for µ j ≤ 0 and d j = −1 for µ j > 0.

Given the finite differencing for all of the terms in the Boltzmann equation (216), the discrete equations are linearized, where a 0 superscript denotes the value of the variable at the current iterate in an outer Newton iteration of the solution algorithm. Given the dependence of j, χ̃, R IS , R NIS , and R PAIR on ρ, T, and Y e , the above linearizations lead to linearizations in all of these quantities. Insertion of these linearizations into the finite differenced Boltzmann equation leads to a block-tridiagonal linear system of equations for the quantities δF i′, j′,k′ , δε i′ , and δ(Y e ) i′ , which is solved in each outer iteration until a prescribed tolerance is reached for all of the variables. In the block tridiagonal system, B i and C i are diagonal, reflecting the fact that spatial advection couples nearest neighbors only. A i is an M × M matrix, where M = jmax × kmax + 2; jmax corresponds to the number of angular quadratures used in the discrete ordinates implementation for angle, and kmax corresponds to the number of energy groups. The submatrices A 2 and A 3 are of dimension 2 × (M − 2) and (M − 2) × 2, respectively. A 4 is a 2 × 2 matrix.
The two rightmost columns of A i and the two bottom-most rows correspond to the coupling of the Boltzmann equation to the equations for the specific internal energy and electron fraction of the matter, accounting for energy and lepton number exchange. The solution vector, V i , comprises the quantities δF i′, j′,k′ , δε i′ , and δ(Y e ) i′ . D'Azevedo et al (2005) developed a physics-based preconditioner for the above system. This "ADI-like" preconditioner treats the diagonal dense blocks, which correspond to coupling in momentum space, and the tridiagonal bands, which correspond to coupling in space, separately, and has proven very effective. For Mezzacappa et al., neutrino momentum exchange with the matter is handled separately, during the hydrodynamics update, and is differenced explicitly in time.
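The direct solve of such a system can be sketched with a generic block Thomas algorithm (ours, with dense blocks for generality; in the actual solver the off-diagonal blocks are diagonal, and no preconditioning is illustrated here):

```python
import numpy as np

def block_tridiagonal_solve(A, B, C, d):
    """Solve a block-tridiagonal system with diagonal blocks A[i] and
    sub-/super-diagonal blocks B[i], C[i] (row i reads
    B[i] x[i-1] + A[i] x[i] + C[i] x[i+1] = d[i]) via block forward
    elimination followed by block back substitution."""
    n = len(A)
    Ahat = [a.copy() for a in A]
    dhat = [x.copy() for x in d]
    for i in range(1, n):
        W = B[i] @ np.linalg.inv(Ahat[i-1])
        Ahat[i] = Ahat[i] - W @ C[i-1]     # Schur-complement update
        dhat[i] = dhat[i] - W @ dhat[i-1]
    x = [None] * n
    x[-1] = np.linalg.solve(Ahat[-1], dhat[-1])
    for i in range(n - 2, -1, -1):
        x[i] = np.linalg.solve(Ahat[i], dhat[i] - C[i] @ x[i+1])
    return np.array(x)

# A small, diagonally dominant test system (blocks of size M, n rows)
rng = np.random.default_rng(0)
n, M = 4, 3
A = [np.eye(M)*5 + rng.standard_normal((M, M))*0.1 for _ in range(n)]
B = [rng.standard_normal((M, M))*0.1 for _ in range(n)]   # B[0] unused
C = [rng.standard_normal((M, M))*0.1 for _ in range(n)]   # C[-1] unused
d = [rng.standard_normal(M) for _ in range(n)]
x = block_tridiagonal_solve(A, B, C, d)
```

For realistic M = jmax × kmax + 2, forming and factoring the dense diagonal blocks dominates the cost, which is one motivation for the physics-based preconditioning mentioned above.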

Challenges: neutrino-nucleon (small-energy) scattering
In the case of neutrino-electron scattering, for example, where the energy transfer is not small in comparison with the widths of the zones of our energy grid, Eq. (216) is differenced using zone-centered values of energy in both the neutrino distribution function and the scattering kernels. However, for energy transfers that are small compared with our energy zone widths, the scattering kernel R in/out NNS (ε k , ε k′ , cos θ) will be effectively zero if ε k ≠ ε k′ , and the scattering will become essentially isoenergetic, with negligible energy transfer. As already discussed, while the transfer of energy between neutrinos and nucleons during a scattering event is small, there are many such scatterings, and the overall impact of the energy exchange between the neutrinos and nucleons in these events is non-negligible. Thus, a numerical treatment of this scattering contribution must be developed that reflects the fact that the energy exchange between neutrinos and matter is important and, more important, captures this exchange accurately.
Focusing on this term in the collision term, where, for simplicity, we have suppressed the radial and temporal dimensions, we proceed as follows. With the energy zone centers, ε k+1/2 , defined in terms of the energy zone edges, ε k , the volume of an energy zone is given by the integral of 4πε 2 dε across the zone. The integral over energy can then be replaced by a sum over zones, and Eq. (276) becomes Eq. (281). In Eq. (281), the first approximation was made by truncating the energy integral at ε N+1 . In the second equality, the integral over the entire energy domain is broken up into segments within the domain, corresponding to the energy zone widths. This is not an approximation. In the last equality, we have inserted a factor of unity inside the summation over energy groups, which, again, is not an approximation. Therefore, no approximations have been made thus far except for truncating the range of the energy integration.
The ultimate goal of an improved treatment of small-energy, neutrino-nucleon scattering is to accurately compute the energy transfer between the neutrinos and the nucleons, i.e., to compute accurately the change in the neutrino energy within each of the groups of our energy grid from such scattering. The change in the neutrino energy within a group is given by Eq. (282), into which we have inserted Eq. (281) for the time derivative of the neutrino distribution function due to neutrino-nucleon scattering. If we now choose to define the distribution function, f (µ, ε), at the energy zone centers, Eq. (282) can be expressed as in Eq. (283). The first equality in Eq. (283) stems from the fact that, once the distribution function is evaluated at the energy zone center and, consequently, its time derivative is evaluated there, the time derivative becomes a constant integrand and can be taken outside of the integral. Dividing both sides of Eq. (283) by the energy zone volume, we obtain an expression that we rewrite in terms of an effective kernel, defined as in Eq. (287). With the scattering kernel defined as in Eq. (287) in the collision term of the Boltzmann equation, the energy transfer between the neutrinos and the nucleons resulting from the many neutrino-nucleon scattering events is captured accurately, despite the fact that the energy exchange per scattering is much less than a typical energy zone width.
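A toy numerical comparison makes the failure mode explicit: for a kernel whose width in energy transfer is far smaller than a zone width, zone-centered sampling yields an essentially diagonal (isoenergetic) kernel, while zone-averaging retains the neighbor coupling responsible for the energy exchange. The Gaussian kernel and the grid below are invented for illustration and are not the physical R NNS :

```python
import numpy as np

# Energy grid (arbitrary units): zone edges and centers, with zone
# widths much larger than the kernel's energy-transfer scale.
edges = np.linspace(0.0, 20.0, 11)        # 10 zones, each 2 units wide
centers = 0.5*(edges[:-1] + edges[1:])
sigma = 0.1                               # energy-transfer scale << zone width

def kernel(e, ep):
    """Toy sharply peaked scattering kernel standing in for R_NNS."""
    return np.exp(-0.5*((e - ep)/sigma)**2)

# Naive zone-centered sampling: off-diagonal entries effectively vanish,
# so the discrete scattering becomes isoenergetic.
R_pointwise = kernel(centers[:, None], centers[None, :])

# Zone-averaged kernel: average over each pair of zones using a simple
# sub-grid quadrature, which resolves the peak near shared zone edges
# and so retains the neighbor coupling.
nsub = 50
R_avg = np.zeros((10, 10))
for k in range(10):
    for kp in range(10):
        e = np.linspace(edges[k], edges[k+1], nsub)
        ep = np.linspace(edges[kp], edges[kp+1], nsub)
        R_avg[k, kp] = kernel(e[:, None], ep[None, :]).mean()
```

The effective kernel of Eq. (287) plays an analogous role: it is constructed so that the group-integrated energy exchange, not the pointwise kernel value, is what the discretization reproduces.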

Axisymmetry
The first implementation of multi-angle, multi-frequency neutrino transport in the context of spatially two-dimensional, axisymmetric core-collapse supernova models was achieved by Ott et al (2008). Their implementation was based on the neutrino transport solver developed by Livne et al (2004) for the neutrino specific intensity, I, whose evolution is given by Eq. (288). Here, D/Dt is the Lagrangian time derivative; Ω is the unit vector in the direction of neutrino propagation, whose components are (cos θ p , sin θ p cos φ p , sin θ p sin φ p ), where θ p and φ p are spherical momentum-space coordinates defined relative to the outward radial direction; σ is the total absorption cross section, including absorption and scattering; and S is the total emissivity, including emission and scattering. Eq. (288) is temporally discretized fully implicitly. The phase-space discretization is handled as follows. Space (i.e., radius and angle) is discretized using a conservative difference scheme. Momentum space (i.e., the space comprising the two dimensions corresponding to the angles of the neutrino's direction of propagation, θ p and φ p , and the dimension corresponding to the neutrino's energy, ε ν ) is discretized using the discrete ordinates method. Further details of the discretization of Eq. (288) have not yet been provided.

Three spatial dimensions
The journey down what will no doubt be a long road toward the implementation of general relativistic, three-dimensional Boltzmann neutrino transport in the context of core-collapse supernovae was begun by Sumiyoshi and Yamada (2012). With core-collapse supernovae in mind, they started by solving the conservative form of the Boltzmann equation, Eq. (289), for three-dimensional, static stellar core configurations. In light of the use of spherical polar coordinates, there are terms that correspond to advection in momentum space even in a static medium in flat spacetime, i.e., even in the absence of special and general relativistic effects. For example, as a neutrino propagates, its direction cosine, µ ≡ cos θ p , which is defined relative to the outwardly pointing radial basis vector, will necessarily change. This is described by the fourth term on the left-hand side of Eq. (289). This is not a geometric effect, as spacetime is flat in this case; rather, it is a coordinate effect. The last term on the left-hand side of the same equation has a similar origin and interpretation. Given the assumption of a static medium and flat spacetime, no other terms appear on the left-hand side, which would describe special and general relativistic effects were they considered.
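The coordinate origin of this momentum-space advection can be checked numerically: along a straight ray in flat space, dµ/ds = (1 − µ 2 )/r with dr/ds = µ, which is equivalent to conservation of the impact parameter b = r√(1 − µ 2 ). A simple forward-Euler integration (ours, purely illustrative) reproduces the analytic result:

```python
import numpy as np

# A straight ray in flat space, followed in spherical coordinates:
# the direction cosine evolves as dmu/ds = (1 - mu^2)/r even though
# nothing physical deflects the neutrino; equivalently, the impact
# parameter b = r*sqrt(1 - mu^2) is exactly conserved along the ray.
r, mu = 10.0, 0.2
b = r * np.sqrt(1.0 - mu**2)

ds = 1e-3
for _ in range(20000):          # integrate a path length of 20
    r += ds * mu                # dr/ds = mu
    mu += ds * (1.0 - mu**2)/r  # coordinate-induced "angular advection"

mu_exact = np.sqrt(1.0 - (b/r)**2)
```

The same bookkeeping, promoted to a finite-difference term in the transport equation, is what the fourth and last terms on the left-hand side of Eq. (289) encode.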
The discretization of Eq. (289) follows and extends that used in Mezzacappa and Bruenn (1993a), i.e., finite differencing in space and energy, and discrete ordinates in the neutrino propagation angles. For the second term on the left-hand side of Eq. (289), corresponding to radial advection of neutrinos, Sumiyoshi and Yamada use a discretization in which, in their notation, f I−1 and f I are the neutrino distributions at the cell interfaces of the i-th zone. The quantities µ j f I at the cell boundaries are defined by Eq. (291) in terms of the coefficient β I . In the diffusion (free-streaming) limit, β I = 1/2 (1).

The advection in µ = cos θ p is discretized with upwind differencing, and f J = f j .

θ-advection is first reexpressed and then discretized. The factor (1 − µ 2 jθ ) 1/2 cos φ p jφ determines the direction of advection and the evaluation of f I θ at the cell interface; given the sign of cos φ p , f I θ is determined accordingly. As before, β I θ takes on values between 1/2 (diffusion limit) and 1 (free-streaming limit) and is defined in the same way as β I , using instead the angular zone widths and mean free paths.

φ p advection is discretized similarly. In this case, the sign of µ i θ (sin φ p ) J φ determines the direction of advection. Upwind differencing is used to determine f J φ at the cell interface.

Last but not least, φ advection is discretized in the same manner. Given the sign of sin φ p jφ and, therefore, the advection direction, f I φ is determined, with β I φ defined in the same way as its counterparts in the radial and θ directions, using the appropriate angular zone widths and mean free paths.

Focusing on the temporal discretization, the phase-space discretizations spelled out in Eqs.
(290) through (299) are assembled and evaluated in a fully implicit manner, as shown schematically in Eq. (300) (i.e., the phase-space discretizations themselves are not inserted; each term is represented by its continuum counterpart), where n designates the current time slice. The left-hand side of Eq. (300) is linear in the distribution function, but the right-hand side is not. Consequently, as in the spherically symmetric case, both sides of Eq. (300) are linearized in f . (In this case, Sumiyoshi and Yamada are working with a hydrostatic and thermally frozen stellar core profile. As a result, linearizations in ε and Y e are not necessary.) This gives rise to a linear system of equations for δ f i . To solve the combination of the outer nonlinear system of equations and the corresponding inner linear system of equations, Sumiyoshi and Yamada implement a Newton-Krylov approach; specifically, they implement Newton-BiCGSTAB, with point-Jacobi preconditioning.

The extension of these lepton-number conservative methods to the special relativistic case was documented by Nagakura et al (2014). They deployed novel momentum-space gridding based on three considerations: (1) The invariant emissivity and opacity, which together define an invariant collision term on the right-hand side of the Boltzmann equation, can be computed in either the inertial, Eulerian frame or the inertial frame of an observer instantaneously comoving with the stellar core fluid. The value obtained in both cases would be the same if the neutrino angles and energies used in either case were related by the Lorentz transformation between the two inertial frames. (2) The Lorentz transformations of angles and energies between the Eulerian and comoving frames decouple, i.e., one is free to define one's energy grid in either of the two frames independently of one's angular grids, allowing choices that simplify the numerics while respecting the physics.
(3) The dominant opacity during stellar core collapse stems from coherent, isoenergetic scattering on nuclei, i.e., any novel gridding should be constructed with this opacity in mind.
In Nagakura et al.'s notation, the invariance of the collision term can be expressed as in Eq. (301), where t is the coordinate time in each frame and where the labels lb (fr) correspond to the Eulerian (comoving) frames. The equality in Eq. (301) is to be understood as follows: if one evaluates the left-hand side at a particular neutrino angle and energy as measured by the Eulerian observer, the equality is guaranteed provided the right-hand side is evaluated at the corresponding Lorentz-transformed neutrino angle and energy, which would be the angle and energy measured by the comoving observer. The neutrino energies in the two frames, ε lb and ε fr , are related through γ, the Lorentz factor; n lb , the neutrino propagation direction as measured in the Eulerian frame; and v, the fluid velocity in the same frame. The unit neutrino propagation direction vectors in the two frames are related by the corresponding aberration formula, where n fr denotes the unit neutrino propagation direction vector in the comoving frame.

Fig. 14: The left panel shows a schematic of uniform momentum-space angular and energy grids in the Laboratory frame (Nagakura et al, 2014). Constant-energy grid lines are represented by concentric circles. Constant angles are indicated by radial lines. The right panel shows the corresponding contours and lines in the comoving frame. Also added (dotted line) is a constant comoving-frame neutrino energy contour.

Figure 14 from Nagakura et al. shows two momentum-space grids associated with momentum-space spherical coordinates. The grid on the left corresponds to a choice of uniform gridding in both angle and energy in the Eulerian frame. (Uniform gridding is typically not used for either, but for simplicity Nagakura et al. consider this case to illustrate the essential features of their approach.) The grid on the right corresponds to the Lorentz-transformed Eulerian grid, i.e., the counterpart grid in the comoving frame. This grid is no longer uniform in either angle or energy.
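The relations in question are the standard special-relativistic Doppler and aberration formulas for a null four-momentum. A sketch (our helper `boost_neutrino`, assuming units with c = 1 and v ≠ 0) transforms a neutrino's energy and direction from the Eulerian to the comoving frame:

```python
import numpy as np

def boost_neutrino(eps_lb, n_lb, v):
    """Transform a neutrino's energy and unit propagation direction from
    the Eulerian (lab) frame to the comoving frame, for fluid velocity v
    in units of c (v must be nonzero here):
        eps_fr = gamma * eps_lb * (1 - n_lb . v),
    with the direction obtained by boosting the null four-momentum
    p = eps_lb * (1, n_lb) and renormalizing by eps_fr."""
    v = np.asarray(v, dtype=float)
    n_lb = np.asarray(n_lb, dtype=float)
    gamma = 1.0 / np.sqrt(1.0 - v @ v)
    eps_fr = gamma * eps_lb * (1.0 - n_lb @ v)
    # spatial part of the boosted four-momentum, divided by eps_lb
    p = n_lb + ((gamma - 1.0)*(n_lb @ v)/(v @ v) - gamma) * v
    n_fr = eps_lb * p / eps_fr
    return eps_fr, n_fr

# Radially outgoing neutrino, radially outflowing fluid at v = 0.2c:
eps_fr, n_fr = boost_neutrino(10.0, np.array([1.0, 0.0, 0.0]),
                              np.array([0.2, 0.0, 0.0]))
```

For this aligned case the result reduces to the familiar Doppler factor √((1 − β)/(1 + β)), and the boosted direction remains a unit vector, as it must for a null four-momentum.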
On the comoving-frame grid, an isoenergetic scattering event, wherein the neutrino's angle changes but its energy does not, would necessitate an interpolation in energy, given that the energy grid is not uniform in angle. The number (typically ∼20) of energy "groups" used in most core-collapse supernova simulations is low and, to make matters worse, the groups are typically spaced exponentially, with coarser resolution at higher energies. Interpolation on such grids is problematic for these reasons and for the conservation of neutrino (lepton) number. To overcome these difficulties, Nagakura et al. use the independence of the Lorentz transformations for neutrino angles and energies and choose a hybrid-grid approach.

Fig. 15: The hybrid momentum-space grids adopted by Nagakura et al (2014) in their Boltzmann transport implementation. On the LRG, the neutrino angular grid is uniform, but the energy grid corresponds to a uniform energy grid in the comoving frame, shown in the right panel by concentric circles. The two energy grids are related by a Lorentz transformation. Given that the angular grid is uniform in the Laboratory frame, the corresponding angular grid in the comoving frame is not uniform. The angular grids, too, are related by a Lorentz transformation between the frames.

They introduce the Lagrangian Remap Grid (LRG) for the Eulerian observer, shown on the left-hand side of Fig. 15; this is the primary grid used in their work. On the LRG, the angular grid is uniform but the energy grid is not. The energy grid on the LRG is the Lorentz transform of the energy grid on the right-hand side of the same figure, which corresponds to the comoving-frame observer's energy grid, which is uniform. Of course, by virtue of the Lorentz transformation and the fact that the angular grid is uniform in the Eulerian frame, the angular grid in the comoving frame cannot be uniform. This presents no difficulties in their approach, so Nagakura et al. opt for the simplicity of the uniform angular grid on the LRG, their primary grid.
The evaluation of the collision term on the LRG, which is how the collision term is evaluated in Nagakura et al.'s approach to the discretization and solution of the Boltzmann equation, is the same as its evaluation on the comoving-frame grid, given the invariance of the collision term for such Lorentz-transform-related grids. Since the latter energy grid is uniform across angles, no interpolation in energy is required in evaluating, for example, isoenergetic scattering. The Lorentz transformation between the two grids is spatially and temporally dependent, so the LRG must be continually redefined as the evolution proceeds, but the comoving-frame grid does not change. As the LRG evolves, a conservative remapping procedure is used to remap the neutrino distributions on the previous LRG to the new LRG.
With all of the above in mind, and focusing on isoenergetic scattering, the right-hand side of the Boltzmann equation, Eq. (289), is evaluated on the LRG; the last equality in the resulting expression follows from the invariance of the distribution function. While the use of the LRG simplifies the evaluation of the collision term and avoids the need to introduce velocity-dependent angle and energy advection on the left-hand side of the Boltzmann equation, there is a cost: it complicates spatial advection. To overcome this inherited difficulty, Nagakura et al. invoke yet another grid, the Laboratory Fixed Grid (LFG). The LFG is like the grid depicted on the left-hand side of Fig. 14. It is the same for all Eulerian observers at different spatial locations and is constant in time. And, in Nagakura et al.'s implementation, it is of higher resolution in energy relative to the LRG. This is evident in Fig. 16.
Given the LFG, the treatment of spatial and angular advection occurs in the following steps: (1) Using the subgrid energy distribution, f_subgrid, the values of the distribution function, f, at the zone centers of the LFG are determined by evaluating f_subgrid at the energies corresponding to the zone centers on the LFG for zones A, B, ..., respectively. (For the example points selected here, the LFG energies are the same.) (2) Once the values of the distribution function are defined at the zone centers of the LFG, they can be used to define the spatial and angular fluxes on the LRG as follows. Consider Fig. 16. On the left-hand side of the figure, the LRG is shown. On the right, the LFG is overlaid on the LRG. Note, too, that here we are considering advection in space and angle, denoted on the vertical axis by y to represent both. Let us consider LFG zones A and B. The flux at the interface between these two zones is determined from the value of the distribution function there, which is determined by interpolating between the values of the distribution function at the A and B zone centers, as outlined by Sumiyoshi and Yamada (2012). (When invoking the LFG, this interpolation involves only two zones, not three as it would in the case of the LRG.) (3) Given the fluxes on the LFG, we are ready to define the fluxes that will be used on the LRG to update the distribution function in each of the LRG's zones due to advection. Note that advection into (for example) LFG zone B from A involves advection into a single zone. However, it is easy to see from Fig. 16 that advection from A into B involves advection into two zones of the LRG: A and B.
To divide the contribution of the LFG flux into B into LRG fluxes into zones A and B, the flux is split in proportion to the distribution-weighted energy volumes, where F_A|B is the LFG flux at the interface between LFG zones A and B, ε_AB corresponds to the value of the energy at the interface of the LRG zones A and B, ε_L(R) corresponds to the energy value associated with the left (right) boundary of the LFG zone B, and f_A(B) corresponds to the value of the distribution function on the LRG in zone A (B). In other words, the LFG flux at the interface of LFG zones A and B is split, according to the distribution-weighted energy volume, between LRG fluxes into zones A and B. Note that zone B, for example, has multiple LFG fluxes advecting into it. The total LRG flux for zone B would therefore be the sum of all of the relevant LFG fluxes into it, determined in the manner described here.
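One plausible reading of this splitting can be sketched as follows. The weights below use the ε² dε energy-volume measure and zone-centered distribution values; the exact weights are those given by Nagakura et al.'s equations, and the function name and argument list here are purely illustrative:

```python
def split_lfg_flux(F_AB, f_A, f_B, eps_L, eps_AB, eps_R):
    """Split the LFG interface flux F_{A|B} between LRG zones A and B in
    proportion to the distribution-weighted energy volumes (schematic).
    eps_L, eps_R : left/right boundary energies of the LFG zone;
    eps_AB       : energy of the LRG zone interface lying inside it."""
    vol_A = f_A * (eps_AB**3 - eps_L**3)   # weighted energy volume below the LRG interface
    vol_B = f_B * (eps_R**3 - eps_AB**3)   # weighted energy volume above it
    w = vol_A / (vol_A + vol_B)
    return w * F_AB, (1.0 - w) * F_AB      # contributions to LRG zones A and B
```

By construction the two pieces sum to the original LFG flux, so the splitting itself cannot create or destroy neutrinos.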
(4) Once the LRG interface fluxes are defined as in step 3, the spatial (or angular) advection on the LRG is carried out as outlined by Sumiyoshi and Yamada (2012). Nagakura et al.'s novel method has been designed to conserve lepton number. Whether it simultaneously conserves energy at an appropriate level remains to be demonstrated.
With regard to the temporal discretization with special relativistic effects included, Nagakura et al. use a semi-implicit method. This is necessitated by the fact that the methods outlined above for the treatment of advection on the LRG cannot be made fully implicit. With the temporal discretization alone in mind, the Boltzmann equation can be written as

(f^{n+1} − f^n)/Δt = F^SR_adv(f^gs) + κ [ F^NR_adv(f^{n+1}) − F^NR_adv(f^gs) ].

The first term on the right-hand side of Eq. (310) is the advection term for the special relativistic case. It is evaluated explicitly at the value of the current iterate, f^gs. The second two terms correspond to what the advection terms would be in the nonrelativistic case, evaluated implicitly and explicitly (at the current iterate), respectively. Together they represent a "correction" to the first term and are introduced for numerical stability. When f^gs → f^{n+1}, the second two terms cancel, and the right-hand side of Eq. (310) becomes F^SR_adv(f^{n+1}), as desired. The parameter κ is a limiter and prevents the correction from becoming larger than the first term, which Nagakura et al. note can happen when the fluid velocities become several tens of percent of the speed of light.
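The structure of this semi-implicit iteration can be illustrated with linear advection operators represented as matrices. This is a schematic sketch, not Nagakura et al.'s implementation; at the fixed point of the iteration the correction cancels and the update reduces to backward Euler in the special relativistic operator:

```python
import numpy as np

def semi_implicit_step(f_n, f_gs, A_sr, A_nr, dt, kappa=1.0):
    """One stage of a schematic semi-implicit update in the spirit of Eq. (310):
    the special relativistic operator A_sr acts explicitly on the current
    iterate f_gs, while the nonrelativistic operator A_nr supplies an
    implicit/explicit correction scaled by the limiter kappa."""
    I = np.eye(len(f_n))
    rhs = f_n + dt * (A_sr @ f_gs - kappa * (A_nr @ f_gs))
    # Solve (I - kappa*dt*A_nr) f_new = rhs; when f_gs -> f_new the
    # correction terms cancel, leaving f_new = f_n + dt * A_sr @ f_new.
    return np.linalg.solve(I - kappa * dt * A_nr, rhs)
```

Iterating the step to convergence recovers the fully implicit special relativistic update, which is the property the correction terms are designed to deliver.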
Given the solution for the distribution function and, in particular, the numerical determination of the collision term, the updates to the matter electron fraction and stress-energy tensor (including both energy and momentum exchange) are computed as follows [see Eqs. (5), (6), (191), and (194)]: in the resulting expressions, for Nagakura et al., N_νe (our J_νe) is the electron density current, dV_p (our π_m) is the invariant momentum-space volume element, and i indicates the neutrino species.
6.2 Boltzmann kinetics: spatial discontinuous Galerkin discretization plus spectral multigroup P_N

A numerical treatment of Boltzmann kinetics that implements a finite-element discretization, specifically a Discontinuous Galerkin (DG) discretization, for the spatial degrees of freedom together with a spectral decomposition in momentum space was developed by Radice et al (2013) for the Boltzmann equation, Eq. (317). In this scheme, the distribution function, F, is first decomposed in momentum space, Eq. (318), in terms of spherical harmonics in angle and orthonormal basis functions χ_n(ν) in the energy dimension. Using the orthonormality of the spherical harmonics and χ_n(ν), the coefficients in the momentum-space expansion of the distribution function, Eq. (318), are given by projection onto the basis. Radice et al. introduce the shorthand notation Ψ_A for the momentum-space basis functions and reexpress Eq. (318) accordingly.

Fig.: Nagakura et al (2018) plot the r − θ component of the Eddington tensor, k_rθ, at 190 ms after bounce in a core-collapse supernova simulation they performed with their Boltzmann neutrino transport solver, initiated from a progenitor of 11.2 M_⊙. In the corresponding upper right panel, they plot the (absolute) difference between k_rθ computed with both Boltzmann neutrino transport and two-moment neutrino transport with M1 closure. In both cases, k_rθ is evaluated at the mean neutrino energy at each point of the spatial grid shown there. Nagakura et al. classify such absolute differences in the off-diagonal components of the Eddington tensor in their model as substantial, indicating that Boltzmann transport is needed to accurately compute the components of the neutrino Eddington tensor. In their model, k_rθ, not k_θθ, was demonstrated to dictate the evolution of the lateral neutrino fluxes in the critical semitransparent regime.
Inserting the expansion (322) into the Boltzmann equation (317) leads to a coupled system of equations for the expansion coefficients that must be solved to determine them as a function of time and space. Multiplying Eq. (323) by the complex conjugate of Ψ_A, integrating over momentum space, and using the orthonormality of the basis functions Ψ_A gives Eq. (324), with the coupling matrices defined in Eqs. (325) and (326). In a DG discretization in x, the distribution function is written as an expansion in Lagrange polynomials on each element, with the nodal values of the solution as coefficients. Insertion of the expansion (328) in Eq. (327) yields a set of coupled ordinary differential equations for the coefficients F^A_i, in which the flux factors appear. In Eq. (332), F̄ is the average flux; F^− is the flux computed at the boundary x_{i+1/2} of the i-th element through an exact solution of the Riemann problem with left and right states, F^B_L and F^B_R, respectively; F^+ is defined in the same way at the boundary; and v is a parameter taken to be the first abscissa of the adopted Legendre quadrature (this parameter is introduced by Radice et al. to dissipate numerically zero-speed modes). The three-dimensional extension of the scheme is given by constructing the flux factors in each of the three dimensions in the same way, which gives Eq. (333). Focusing now on the temporal discretization of Eq. (333), and using Radice et al.'s rewrite of the equation, the authors evolve the coefficients of the distribution function's DG-spectral expansion, Eqs. (318) and (328), in a two-step, semi-implicit, asymptotic-preserving scheme (McClarren et al, 2008), staged as a predictor step followed by a corrector step. Given that Radice et al. choose to use a partially spectral scheme, like all others deploying such schemes they had to contend with the Gibbs phenomenon.
To do so, they were informed by the seminal work of McClarren and Hauck (2010), who developed a method, using filtering, to mitigate Gibbs phenomena in P_N schemes. Unfortunately, as pointed out by McClarren and Hauck and by Radice et al., the filtered P_N scheme does not have a unique continuum limit, i.e., it cannot be shown to be a discretization of a system of partial differential equations. In Radice et al.'s approach, the spherical harmonic expansion of the solution is filtered at each time step using a spherical-spline filter, where σ(η) is a filter function of order p with σ(0) = 1, and where s is a strength parameter, which is chosen to be a function of the time step through a parameter β. Radice et al. document success using a modified, second-order Lanczos filter. With the introduction of filtering, the time stepping algorithm, Eqs. (335) and (336), is modified by applying the filter after each stage, where F^A_B is a diagonal matrix that instantiates the filtering operation. Moreover, Radice et al. were able to show that their filtering method represents the first-order, operator-split discretization of a term added to the underlying system of partial differential equations, Eq. (324), where L^A_B is a diagonal matrix with coefficients log σ(l/(N + 1)). That is, their filtering method is equivalent to the addition of a forward-scattering term [σ(0) = 1] to Eq. (324), and their overall method is a unique discretization of an underlying system of coupled partial differential equations, Eq. (345).
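The action of such a filter on the expansion coefficients can be sketched as follows. The classic Lanczos (sinc) filter is used here as a stand-in; the precise "modified second-order" form used by Radice et al. may differ in detail, and the function name is illustrative:

```python
import numpy as np

def filter_pn_coefficients(c, s):
    """Damp the degree-l coefficients of a P_N expansion with a Lanczos-type
    filter sigma(eta) = sinc(eta), applied with strength s as sigma^s.
    c : array of shape (N+1, ...) with the first axis indexed by degree l."""
    N = c.shape[0] - 1
    # sigma(0) = 1, so the l = 0 (mean) coefficient is untouched, which is
    # why the filter acts like a forward-scattering term.
    sigma = np.sinc(np.arange(N + 1) / (N + 1.0)) ** s
    return c * sigma.reshape((-1,) + (1,) * (c.ndim - 1))
```

Higher-degree coefficients, which carry the oscillatory Gibbs content, are damped progressively more strongly, while the zeroth moment is preserved.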
While the filtering effectively mitigates the Gibbs phenomenon, the distribution function can still become negative, which is unphysical. To contend with negative distribution functions in the context of the filtered P_N scheme, Laiu and Hauck (2019) developed and analyzed so-called positivity limiters, which can be used to ensure positivity of the distribution function in each step of a time integration scheme.

6.3 Boltzmann kinetics: spectral decomposition across phase space

Peres et al (2014) opt for a fully spectral approach to the solution of the 3+1 general relativistic Boltzmann equation, Eq. (346), in the CFC approximation in non-conservative form. In this case, the 3+1 line element is given by Eq. (347), where the spatial geometry is assumed to be conformally flat, i.e., γ_ij = Ψ⁴ f̃_ij. In Eq. (348), f̃_ij is the flat metric and Ψ is the conformal factor. In Eq. (346), p^µ and ε correspond to the neutrino four-momenta and energy, respectively, measured by an Eulerian observer, and Γ^j_µν are the Ricci rotation coefficients. Peres et al.'s choice of phase-space coordinates is motivated by the known challenge that time derivatives present for spectral methods. Specifically, were comoving-frame four-momenta chosen instead, the coefficients of the advection terms on the left-hand side of Eq. (346) would contain time derivatives associated with, for example, the relativistic Doppler shift. Of course, the collision term is best evaluated in the comoving frame, using comoving-frame four-momenta, so the choice of Eulerian-frame four-momenta necessitates additional work to treat collisions. Peres et al. leave the detailed treatment of this term to a future publication. They also acknowledge the benefits of beginning instead with the conservative form of Eq. (346) and leave that to a future publication, as well.
In their approach, the distribution function is written as an expansion in terms of basis functions across all six dimensions of phase space, in this case spherical coordinates in both space and momentum space, Eq. (350). Chebyshev basis functions are used for r, ε, and Θ, owing to the expected nonperiodic nature of the distribution function in these dimensions. Fourier basis functions are used for θ, φ, and Φ, owing to the expected periodic nature of the distribution function in these dimensions. Barred variables in Eq. (350) are in the range [−1, 1] and are related to the standard coordinates by affine transformations. In the case of the radial coordinate, the affine transformation is written explicitly as r = α_r r̄ + β_r, where α_r and β_r are constants, with R_min = β_r − α_r and R_max = α_r + β_r. R_min and R_max are the minimum and maximum radii of the spherical shell considered in the Peres et al. analysis, respectively. (The extension of their method to r = 0 is left for future development.) Ignoring the collision term in Eq. (346), the equation can be written in terms of the Liouville operator, L[f]. Substituting the expansion (350) into Eq. (352) results in a system of coupled ordinary differential equations for the solution vector of expansion coefficients, which Peres et al. update in time explicitly, though they emphasize they are not restricted to explicit updates but could also deploy semi-implicit and implicit methods.

Boltzmann kinetics: Monte Carlo methods
Up to now, we have focused on deterministic methods for the solution of the Boltzmann neutrino transport equations in core-collapse supernovae. But nondeterministic, specifically Monte Carlo, methods have also been used. Until recently, they have been confined to "snapshot" studies in a particular slice of an evolving stellar core and have been used most extensively as a gauge of the accuracy of deterministic, but approximate, methods. Although it has yet to be used in the context of a core-collapse supernova simulation as the method of choice for treating time-dependent neutrino transport, a lepton-number and energy conserving Monte Carlo scheme for such transport has been developed by Abdikamalov et al (2012) for the O(v/c) limit of special relativistic effects and Newtonian gravity. In their paper, Abdikamalov et al. illustrate their method assuming spherical symmetry. They begin with the equation for the neutrino intensity, Eq. (354), for each neutrino species, here written generically without a species label. The first term on the right-hand side is the familiar term for emission and absorption of neutrinos; κ_a (κ_s) is the total absorption (scattering) opacity. The last term describes the additional source of neutrinos as a result of inscattering into the neutrino "beam" with direction µ and energy ε. Eq. (354) is solved using boundary conditions at the inner and outer radial boundaries. In their set of evolution equations, Eq. (354) is coupled to the material energy equation and the equation for the evolution of the electron fraction. The sum over i is over neutrino species, which will be dropped in what follows. s_i = +1, −1, 0 for electron neutrinos, electron antineutrinos, and heavy-flavor neutrinos, respectively, and will be carried through the remaining presentation of the method. In Eq. (356), S is the contribution to the material energy from energy-exchanging scattering with neutrinos. Abdikamalov et al.
introduce the additional quantities U_r, the equilibrium neutrino energy density, and κ_p, the Planck-mean opacity. The evolution equation for U_r is related to the evolution equations for U_m and Y_e through Eqs. (364) through (366), which also define the quantities β and ζ; in these equations, N_A is Avogadro's number and C_V is the material heat capacity.
As with deterministic methods, the first step in the solution of Eqs. (354), (356), and (357) is to linearize them. As part of this linearization procedure, Abdikamalov et al. also ensure that these three evolution equations become decoupled. The first step in the linearization process involves approximating the coefficients {κ_a, κ_p, κ_s, b, χ_a, χ_p, β, ζ} with {κ̄_a, κ̄_p, κ̄_s, b̄, χ̄_a, χ̄_p, β̄, ζ̄}. Abdikamalov et al. define the latter as the time-centered values of the former within the time interval t_n ≤ t ≤ t_{n+1}. In practice, they are chosen at the initial time step, t_n. Given this linearization, Eqs. (354), (356), (357), and (364) become a linear system, with γ̄ = β̄ κ̄_a + ζ̄ s_i χ̄_a and with a parameter α that, as pointed out by Abdikamalov et al., controls the degree to which the method is implicit. Defining U*_{r,n} = U_{r,n} + β̄ Δt_n S̄, the linearized equations can be combined to obtain an expression, Eq. (376), for Ū_r in terms of U*_{r,n} and the angle- and energy-integrated intensity. Abdikamalov et al. now assume that Ū_r = U_r(t) and Ī = I(t) in Eq. (376) and use the resultant equation to substitute for U_r in Eq. (367), to obtain their final equation for the evolution of the neutrino intensity. A similar procedure can be used to derive equations for the updates of U_m and Y_e, as was performed for U_r. Abdikamalov et al. point out that care must be taken to use the same expression for U_r, specifically Eq. (376) with Ū_r = U_r(t) and Ī = I(t), in the derivation of the equation for U_m in order to guarantee conservation of energy. Having linearized and decoupled the equations of motion, the evolution in Abdikamalov et al.'s Monte Carlo approach proceeds as follows: The weight associated with each Monte Carlo particle (MCP) is the number of physical particles associated with it and is assumed to be N_0. The number of particles emitted by the matter in the time interval [t_n, t_{n+1}], divided by N_0, determines the number of MCPs emitted in this time interval via RInt(x), which returns the largest integer less than x.
The particle energy of each MCP is chosen according to the functional form of κ̄B. Since thermal emission is isotropic, the angle of propagation of each emitted MCP, µ, is chosen uniformly on the interval [−1, 1] via µ = 2ξ − 1, where ξ is a random number that takes on values in the interval [0, 1]. Similarly, the emission time is chosen uniformly on the interval [t_n, t_{n+1}] using t = t_n + ξ (t_{n+1} − t_n). To choose the zone in which an MCP is emitted, Abdikamalov et al. use the probability that the MCP is emitted in a particular zone, which is given by the total number of particles emitted in that particular zone divided by the total number of particles emitted across all zones. Once an MCP is emitted in a particular zone, its location (assuming spherical symmetry) within that zone is determined by sampling uniformly within the zone volume, where j is the zone index. The number of MCPs entering from the outer boundary of the domain, at radius R, during the interval [t_n, t_{n+1}] is set by the incoming boundary intensity, and the number of MCPs present at the beginning of the interval is set by the initial condition, where the spatial zone, propagation angle, and energy of each MCP are chosen randomly using the functional form of I. During transport, an emitted MCP will either (1) travel within the zone without collision and remain in the zone, (2) encounter a collision within the zone, or (3) exit the zone. These three possibilities correspond to three different distances, d_b, d_t, and d_c: the distance to the boundary of the zone, the distance the particle can travel in the time interval if it does not encounter a collision, and the distance between collisions, respectively [Eqs. (394), (395), and (396)]. Once these distances are known, the MCP is moved to the location corresponding to the smallest of the three distances, and to the associated time. The interaction probabilities include the effective-scattering probabilities P_es,e = κ_es,e/(κ_e + κ_s) and P_es,l = κ_es,l/(κ_e + κ_s) [Eqs. (400) and (401)], together with the probabilities P_ea of effective absorption and P_s of scattering.
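The sampling steps above can be sketched as follows. The sampling of the radius uniformly in enclosed volume is a standard choice consistent with the text; the function name and argument list are illustrative:

```python
import random

def sample_emission(t_n, t_np1, r_in, r_out):
    """Sample an emitted MCP's propagation angle, emission time, and radius
    within a spherical zone: isotropic in angle, uniform in time, and
    uniform in the enclosed volume of the zone."""
    mu = 2.0 * random.random() - 1.0             # isotropic: mu uniform on [-1, 1]
    t = t_n + random.random() * (t_np1 - t_n)    # uniform on [t_n, t_{n+1}]
    u = random.random()
    # Uniform in volume: r^3 is uniform between the zone-edge cubes
    r = (r_in**3 + u * (r_out**3 - r_in**3)) ** (1.0 / 3.0)
    return mu, t, r
```

Each returned quantity lies within its zone and time step by construction.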
The sum of all of these probabilities is, of course, equal to 1. As a result, to determine which of the above interactions takes place, Abdikamalov et al. sample a random number ξ in the range [0, 1]. Based on the value of ξ : (1) if ξ < P ea , the MCP undergoes effective absorption, (2) if P ea < ξ < P ea + P s , the MCP is scattered, (3) if P ea + P s < ξ < P ea + P s + P es,e , the MCP undergoes effective scattering in which its total energy is conserved, and (4) if ξ > P ea + P s + P es,e , the MCP undergoes effective scattering in which its total lepton number is conserved. Within the domain [0, 1], the subdomain corresponding to each of the above possibilities is proportional to the probability for each possibility to occur, which ensures that the selection procedure yields a statistically correct result. And their result does not depend on the order in which they consider the possibilities. If the MCP is absorbed, its energy and lepton number are deposited in the zone and it is removed from the population of MCPs. If the MCP undergoes real scattering, it is moved to the location where the scattering occurs. For iso-energetic scattering, its angle is determined randomly using Eq. (389). If its energy changes as well, its new energy is determined by randomly sampling the functional form of the scattering kernel in energy. If the MCP undergoes effective scattering, which is isotropic, the MCP's angle is again determined randomly using Eq. (389) and its energy is determined by randomly sampling the local emissivity spectrum since effective scattering mimics absorption and reemission. If d = d b and the boundary is the zone boundary, the transport sampling process begins again, using the values of the opacities in the new zone. If the boundary is the outer boundary, the MCP is removed from the population of MCPs. Finally, if d = d t , the MCP is stored for the next time step. 
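The channel-selection logic just described amounts to partitioning the unit interval into subintervals proportional to the four probabilities, which can be sketched as:

```python
def select_interaction(P_ea, P_s, P_es_e, xi):
    """Choose an MCP interaction channel from a single uniform sample xi in
    [0, 1]. The four probabilities sum to one, so the lepton-conserving
    effective-scattering probability P_es_l is implied by the other three."""
    if xi < P_ea:
        return "effective absorption"
    if xi < P_ea + P_s:
        return "scattering"
    if xi < P_ea + P_s + P_es_e:
        return "effective scattering (energy conserving)"
    return "effective scattering (lepton conserving)"
```

Because the subintervals tile [0, 1], the outcome frequencies match the physical probabilities regardless of the order in which the channels are tested.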
The above procedure is conducted for all of the MCPs in the computational domain (i.e., in all zones) at the beginning of a time step. For the case of a non-static medium, the comoving and Eulerian frames are no longer coincident and an extension of the Monte Carlo procedure outlined above is necessary. Abdikamalov et al. extend their approach as follows: The emissivities and opacities are naturally computed in the comoving frame. Once calculated, the number of MCPs emitted in this frame in each cell is determined. Assuming spherical symmetry for simplicity, the location, r_0, direction of propagation, µ_0, and energy, ε_0, of each MCP emitted at t_0 are sampled based on the comoving-frame emissivities. Each of these quantities is then transformed to the Eulerian frame using the well-known transformations between the frames. The index j in the transformation equations is the index of the comoving-frame cell in which the MCP is emitted. (Of course, V_{r,j} is measured in the Eulerian frame.) Once these transformations are made, the MCP is transported in the Eulerian frame, as described in the static case. Note, however, that the distance to collision must be determined using the Eulerian-frame values of the opacities. Most of the steps in the static case proceed in the same way, with the exception of scattering, which requires additional care. If the MCP scatters, Abdikamalov et al. transform the angle of propagation and the energy of the MCP into the comoving frame, determine a new comoving-frame angle and energy due to the scattering event, then transform this new set of momentum-space variables back into the Eulerian frame before the transport of the MCP proceeds. The amount of energy and momentum exchanged between the MCP and the matter during the scattering, determined in the comoving frame, is recorded. One further addition to the method presented by Abdikamalov et al.
that should be noted is the computational efficiency they gain by coupling their method to a Discrete Diffusion Monte Carlo (DDMC) method, first developed by Densmore et al (2007) for photon transport and extended by Abdikamalov et al to neutrino transport. The latter method is used in diffusive regimes, where the original Monte Carlo method is plagued by the short distances between collisions: MCP paths between collisions become very short and the number of such paths that have to be simulated becomes prohibitively large. However, even with the coupling to DDMC, the Monte Carlo approach described here remains expensive and awaits future computing architectures that are more capable and well-suited to such an approach in order to be used for core-collapse supernova simulations.
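The frame transformations invoked when an MCP is emitted or scatters in a moving medium can be sketched for the spherically symmetric case as follows (standard Doppler and aberration formulas with c = 1; the function names are illustrative):

```python
import math

def comoving_to_lab(eps0, mu0, V):
    """Boost an MCP's energy and direction cosine from the comoving frame to
    the Eulerian frame for a radial fluid velocity V (spherical symmetry)."""
    gamma = 1.0 / math.sqrt(1.0 - V * V)
    eps_lab = gamma * eps0 * (1.0 + mu0 * V)   # Doppler shift
    mu_lab = (mu0 + V) / (1.0 + mu0 * V)       # aberration
    return eps_lab, mu_lab

def lab_to_comoving(eps, mu, V):
    # The inverse transformation is the same boost with V -> -V.
    return comoving_to_lab(eps, mu, -V)
```

A scattering event would transform (ε, µ) to the comoving frame with `lab_to_comoving`, resample the comoving-frame angle and energy, and transform back with `comoving_to_lab`; the two functions are exact inverses of one another.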

Two-moment kinetics
Numerical methods for solving equations for two-moment kinetics in core-collapse supernovae have now been developed by multiple groups (Müller et al, 2010; O'Connor, 2015; Just et al, 2015; Kuroda et al, 2016; Roberts et al, 2016; Skinner et al, 2019). There are as many variations in approach as there are groups. Here we focus on common features and highlight specific solutions. For example, some authors have adopted fully relativistic descriptions (e.g., Müller et al, 2010; O'Connor, 2015; Kuroda et al, 2016; Roberts et al, 2016), while others have resorted to approximations that seek to capture relativistic effects (e.g., Just et al, 2015; Skinner et al, 2019). Current methods for solving the equations for neutrino-radiation hydrodynamics using the two-moment approach employ finite-volume or finite-difference type methods. To this end, the system of equations can be written in the compact form (cf. Eqs. (43)-(46), and Eqs. (109) and (111)) where the vector of evolved quantities is given by the fluid and radiation variables. The spatial flux vectors F^i, energy-space flux vector F^ε (zero for fluid variables), "geometry" sources S, and the "collision" source due to neutrino-matter interactions C can be inferred from equations given in Sections 4.5, 4.7, and 5.2. Here, as an example, we consider the Eulerian two-moment model described in Sect. 4.7.3 with N_SP neutrino species. Note that for each neutrino species, each radiation moment is represented by N_ε degrees of freedom to represent the energy distribution of neutrinos, giving a total of 4 × N_ε × N_SP radiation degrees of freedom (compared to 6 fluid degrees of freedom) per point in spacetime. In core-collapse supernova models, N_ε = O(20), while N_SP = 3 to 6, resulting in 240 to 480 degrees of freedom per spacetime point. Among the approaches to solve the system of equations given by Eq.
(409) numerically, high-resolution shock-capturing (HRSC) methods (e.g., finite-volume or finite-difference), initially developed for compressible hydrodynamics with shocks, have attracted much attention recently. (For simplicity of presentation, we proceed to discuss the case of one spatial dimension.) In the HRSC approach, the spacetime is discretized into spacelike foliations with discrete time coordinates {t^n}_{n=0}^{N_t}, where the time step Δt = t^{n+1} − t^n is the separation between foliations. On each foliation, spatial positions are assigned coordinates {x_{j−1/2}}_{j=1}^{N_x+1}, separating N_x "cells" with widths Δx_j = x_{j+1/2} − x_{j−1/2}. In addition, for radiation quantities, momentum (energy) space is discretized into N_ε "energy bins" with edges {ε_{i−1/2}}. Integration of Eq. (409) over the phase-space cell I_{ε_i} × I_{x_j}, with I_{ε_i} = (ε_{i−1/2}, ε_{i+1/2}) and I_{x_j} = (x_{j−1/2}, x_{j+1/2}), gives the semi-discretized system, Eq. (411), where the evolved quantities are the cell averages over the phase-space cell, with S_{ij} and C_{ij} defined analogously, and with the fluxes defined as in Eqs. (413) and (414). In Eq. (411), the temporal dimension has been left continuous (semi-discrete). Moreover, the equation is still exact. Approximations enter with the specification of the fluxes in Eqs. (413) and (414), and of the integrals to evaluate the sources S_{ij} and C_{ij}. These approximations ultimately result in phase-space discretization errors. With these specifications, the approximate system in Eq. (411) can be viewed as a system of ordinary differential equations (ODEs), which can be integrated forward in time with an ODE solver, which introduces temporal discretization errors. This discretization approach is called the method of lines (MOL).
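The MOL structure can be illustrated with a minimal one-dimensional sketch: cell averages are evolved by differencing numerical fluxes at cell interfaces, and the resulting ODE system is handed to any time integrator (forward Euler shown). This is a generic illustration with a first-order upwind flux on a periodic grid, not any group's production scheme:

```python
import numpy as np

def mol_rhs(U, a, dx):
    """Semi-discrete (method-of-lines) finite-volume right-hand side for
    dU/dt + a dU/dx = 0 on a periodic grid, with upwind numerical fluxes
    F_{j+1/2} = a U_j (assuming a > 0)."""
    F = a * U
    return -(F - np.roll(F, 1)) / dx   # -(F_{j+1/2} - F_{j-1/2}) / dx

def forward_euler_step(U, a, dx, dt):
    # Any ODE integrator can be applied to the MOL system; Euler for brevity.
    return U + dt * mol_rhs(U, a, dx)
```

Because the interface fluxes telescope, the scheme conserves the cell-average sum to round-off, which is the discrete analogue of the conservation built into Eq. (411).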

Spatial discretization
The spatial fluxes in Eq. (413) can be approximated with an appropriate numerical flux, Eq. (415), where U(ε_i, x^±_{j+1/2}, t) is an approximation of U to the immediate left and right of the interface x_{j+1/2}. (In Eq. (415), the midpoint rule is used to approximate the integral, but a more accurate quadrature rule can be used if desired.) Two things must be defined when computing the interface fluxes: (1) the procedure to reconstruct the "left" and "right" states, and (2) the numerical flux function F̂^x. The reconstruction step for radiation variables is essentially identical to that used in hydrodynamics schemes: a polynomial of degree k is reconstructed from the evolved quantities (cell averages). To this end, the accuracy of the numerical method depends in part on the degree of the reconstructed polynomial, and the desired polynomial degree impacts the width of the computational stencil, since values in k + 1 cells are needed to reconstruct a polynomial of degree k. The most commonly used methods are monotonized piecewise-linear (van Leer, 1974; LeVeque, 1992) and piecewise-parabolic methods (Colella and Woodward, 1984), as well as higher-order monotonicity-preserving (MP) (Suresh and Huynh, 1997) and weighted essentially nonoscillatory (WENO) reconstruction methods (Liu et al, 1994; Shu, 1997). Monotonicity constraints are placed on the reconstruction polynomial to ensure nonoscillatory solutions around discontinuities. For fluid variables, the numerical flux function can be computed with a standard Riemann solver, e.g., HLL (Harten et al, 1983) or HLLC (Toro et al, 1994). However, when using finite-volume or finite-difference methods to solve for the radiation moments, specification of the numerical flux requires additional care. As elucidated by the analysis of Audit et al (2002) in the context of the O(v/c) limit of the energy-integrated (gray) Lagrangian two-moment model presented in Sect.
4.7.3, in the asymptotic diffusion limit (characterized by a short neutrino mean free path) the inherent numerical dissipation associated with the numerical flux used for hyperbolic systems overwhelms the physical radiative diffusive flux and leads to spurious evolution unless the mean free path is resolved by the spatial grid. We discuss this important issue further below (see also Jin and Levermore, 1996; Lowrie and Morel, 2001, for discussions of this topic). Since it is not practical to resolve the neutrino mean free path in core-collapse supernova simulations, the numerical fluxes for the radiation moment equations are modified to better capture the evolution in diffusive regimes. Following Audit et al (2002), O'Connor (2015) proposes the following modified HLL numerical fluxes for the two-moment model for neutrino transport (see also Kuroda et al, 2016), Eqs. (416) and (417), where F^x_E and F^x_{S_j} are the radiation energy and momentum spatial fluxes, respectively, and λ^− and λ^+ are estimates of the largest (absolute) eigenvalues for left-going and right-going waves, respectively (see, e.g., Shibata et al, 2011, for explicit expressions for these estimates). In the modified numerical fluxes in Eqs. (416) and (417), ξ is a local parameter depending on the ratio of the neutrino mean free path to the local grid size, ξ = min(1, λ_{ij}/Δx_j), where λ_{ij} is a local, energy-dependent neutrino mean free path (computed from the neutrino opacities). Thus, when the mean free path is much smaller than a grid cell (ξ → 0), the numerical dissipation term (proportional to the jump in the conserved variables across the interface) vanishes, and the numerical flux switches to an average of the fluxes evaluated with the left and right states (a similar approach is also taken in Just et al (2015) and Skinner et al (2019)). It should be noted that the average flux is appropriate for solving parabolic equations, but is in general unstable for hyperbolic equations (e.g., LeVeque, 1992).
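The effect of the ξ switch can be sketched with an illustrative variant of the modified flux; the exact placement of ξ in Eqs. (416)-(417) follows the published prescriptions, and this sketch simply interpolates between the full HLL flux and the average flux:

```python
def xi_switch(mfp, dx):
    # xi -> 1 when the mean free path is resolved; -> 0 deep in diffusion.
    return min(1.0, mfp / dx)

def modified_hll_flux(U_L, U_R, F_L, F_R, lam_m, lam_p, xi):
    """Schematic xi-weighted interface flux: the full HLL flux in the
    free-streaming limit (xi = 1) and the simple average of the left/right
    fluxes as xi -> 0. lam_m and lam_p are positive estimates of the
    left-/right-going wave speeds."""
    F_hll = (lam_p * F_L + lam_m * F_R
             - lam_p * lam_m * (U_R - U_L)) / (lam_p + lam_m)
    return xi * F_hll + (1.0 - xi) * 0.5 * (F_L + F_R)
```

With ξ = 1 the dissipation term (proportional to U_R − U_L) is fully active; with ξ = 0 it is switched off entirely, so the numerical diffusion no longer swamps the physical diffusive flux.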
To further illustrate the issue with the numerical flux, and to see how the modifications in Eqs. (416)-(417) help, it is easiest to consider the reduced system (in units where c = 1)

∂_t J + ∂_x H = 0,     (419)
∂_t H + (1/3) ∂_x J = −H/λ,     (420)

where J and H are the zeroth and first moments and λ is the scattering mean free path. When scattering events are frequent (λ → 0), the system in Eqs. (419)-(420) limits to parabolic behavior governed by the diffusion equation

∂_t J = ∂_x ( (λ/3) ∂_x J ).     (421)

Energy discretization
Next we consider the approximation of the energy fluxes in Eq. (414), which give rise to shifts in the neutrino energy spectrum due to gravitational and moving-fluid effects. Müller et al (2010), who solved the Lagrangian two-moment model in Sect. 4.7.3, developed a method to compute the energy fluxes that is inherently number conservative; i.e., with this discretization of the energy derivative, the energy equation in the Lagrangian two-moment model in Eq. (116) is consistent with the equation for number conservation in Eq. (123) at the discrete level. A key observation in achieving this is that the number conservation equation is obtained by multiplying the Lagrangian energy equation by a factor 1/ε. At the continuum level, when this factor is brought inside the energy derivative, the remainder cancels with the first term on the right-hand side of Eq. (116), resulting in Eq. (429), where a shorthand notation for the flux terms is introduced. Dividing Eq. (429) by ε gives the conservation equation (431), where N = J/ε is the spectral Eulerian number density (cf. Eq. (123)). Similar to Eq. (411), the semi-discrete form of Eq. (429) can be written as Eq. (432), where F̂^J_{i±1/2} are the numerical flux functions to be determined. (Here we drop the spatial index j to simplify the notation.) Dividing Eq. (432) by ε_i and defining N_i = J_i/ε_i gives a provisional semi-discrete form of Eq. (431), Eq. (433). Without specifying the numerical fluxes F̂^J_{i±1/2}, the last three terms in the second line of Eq. (433) do not, in general, cancel, and the neutrino number density is not conserved in the energy advection step, contrary to what is suggested by Eq. (431). However, there is some freedom in choosing the numerical fluxes. To determine them, Müller et al (2010) demand total number conservation upon integration of Eq.
(433) over all energy bins, where zero-flux energy-space boundaries are assumed (i.e., F̂^J_{1/2} = F̂^J_{N_ε+1/2} = 0). Next, the numerical flux is split into "left" and "right" contributions, so that the change in the total number density can be written accordingly (assuming ε_{1/2} = 0). Number conservation is then obtained by demanding that these contributions cancel in the sum over energy bins. Furthermore, Müller et al (2010) introduce a local weighting factor ξ_i, depending on the zeroth moment (j) of the distribution function at the cell interfaces, j^σ_{i−1/2} and j^σ_{i+1/2}, which are computed as weighted geometric means of j using values from adjacent energy bins. In regions where J_i varies modestly with i, ξ_i is close to 1/2, while in the high-energy tail of the neutrino spectrum, where J_i decreases rapidly with increasing i, ξ_i approaches 1 (see Appendix B in Müller et al, 2010, for further details). Then, using the split in Eq. (435), the numerical flux, e.g., at interface ε_{i+1/2}, to be used in Eq. (432) is given by the resulting expression.
For a commonly used geometrically progressing grid, where ε_{i+1/2} = ∆ε_1 λ^{i−1} (with λ > 1 and i = 1, . . . , N_ε), it can be shown that the numerical flux can be written as Eq. (441), which is simply a weighted average with nonlinear weights ξ_i and (1 − ξ_{i+1}). If ξ_i, ξ_{i+1} > 0 and ξ_i + ξ_{i+1} = 1, the numerical flux is a convex combination of F^J_i and F^J_{i+1}, but this is not guaranteed. Although the numerical flux in Eq. (441) was developed by Müller et al (2010) to ensure neutrino number conservation in the context of the Lagrangian two-moment model, the same approach has also been applied to the Eulerian two-moment model by O'Connor (2015) and Kuroda et al (2016). (It is not at all clear that the approach developed by Müller et al (2010) in the context of the Lagrangian two-moment model results in a number conservative scheme when applied to the Eulerian two-moment model. In the Lagrangian two-moment model, the spectral neutrino number and energy equations are related simply by a factor of 1/ε, whereas in an Eulerian two-moment model the relationship is more complex, involving both the spectral neutrino energy and momentum equations (cf. Endeve et al, 2012; Cardall et al, 2013b).) The numerical flux in Eq. (441) is also used by Just et al (2015), who solve the Lagrangian two-moment model in the O(v/c) limit. A few remarks should be made about the numerical flux in Eq. (442). First, a numerical flux is said to be consistent if, when its two arguments are set equal, it reduces to the common value; i.e., when F^J_i = F^J_{i+1} = F^J, the numerical flux equals F^J. Consistency of the numerical flux is generally required for a numerical method to be convergent (Crandall and Majda, 1980; LeVeque, 2002). Since it is not guaranteed that ξ_i + ξ_{i+1} = 1, the numerical flux in Eq. (442) is not consistent. Second, if one sets ξ_i = 1/2 ∀i (which makes it consistent), the numerical flux in Eq.
(442) reduces to a simple arithmetic average, which is known to be notoriously unstable when combined with explicit time integration (e.g., LeVeque, 2002). Skinner et al (2019), who also solve the Lagrangian two-moment model in the O(v/c) limit, follow a different approach, adapted from Vaytet et al (2011). In this case, assuming Cartesian coordinates for simplicity, the evolved quantity and the flux in energy space in the neutrino energy equation (cf. Eq. (429)) involve the radiation stress tensor K_{ij} (cf. Eq. (130)) and the components v_i of the fluid three-velocity. Similarly, the evolved quantity and flux in energy space from the neutrino momentum equation involve the heat flux tensor L_{ijk} in Eq. (133). With u = (J, H_k)^T and the corresponding energy-space fluxes f_ε(u), the subsystem to be solved is a familiar advection-type equation in energy space. For the energy equation, the numerical flux in energy space is then computed with an upwind approach, and a similar expression is used for the energy-space fluxes in the radiation momentum equation. The eigenvalues of the flux Jacobian ∂f_ε/∂u associated with the reduced system of equations governing the "advection" in energy space are always of the same sign (Vaytet et al, 2011). This is one motivation for using the upwind flux. Although the upwind numerical flux in Eq. (447) does not necessarily lead to exact number conservation (in contrast to the corresponding numerical flux developed by Müller et al (2010)), it has desirable properties that can improve numerical stability (e.g., the upwind flux is consistent and can be used to design monotone numerical schemes (cf. Crandall and Majda, 1980; LeVeque, 1992)).
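To make the two flux choices concrete, here is a minimal sketch (our own function names; the ξ weights and the signal speed are supplied by the caller) contrasting the weighted-average flux of Müller et al (2010) with a donor-cell (upwind) flux in energy space:

```python
def weighted_energy_flux(F_i, F_ip1, xi_i, xi_ip1):
    """Weighted-average flux with nonlinear weights xi_i and
    (1 - xi_{i+1}), as in the Mueller et al. (2010) construction."""
    return xi_i * F_i + (1.0 - xi_ip1) * F_ip1

def upwind_energy_flux(F_i, F_ip1, speed):
    """Donor-cell (upwind) flux: justified because the energy-space
    eigenvalues share a sign (Vaytet et al. 2011); always consistent."""
    return F_i if speed >= 0.0 else F_ip1
```

With xi_i = xi_ip1 = 1/2 the weighted flux reduces to the arithmetic average and is consistent; for unequal weights it generally fails the consistency test even when its two arguments agree, whereas the upwind flux is consistent by construction.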

Time integration approaches
After the specification of approximations to the terms on the right-hand side of Eq. (411), the system is evolved in time with an ODE solver. When solving the general relativistic radiation hydrodynamics system, Kuroda et al (2016) write the resulting ODE system in the following form: dU/dt + S_adv,s + S_adv,e + S_grv + S_νm = 0, where the spatial advection term S_adv,s, the energy advection term S_adv,e, the gravitational source term S_grv, and the neutrino-matter interaction term S_νm correspond to the terms on the right-hand side of Eq. (411). (Here we omit phase-space indices for brevity.) In their time integration scheme, Kuroda et al (2016) evaluate the spatial advection and gravitational source terms explicitly, while the energy advection and neutrino-matter interaction terms are evaluated implicitly (Eqs. (450)-(451)). This splitting is a special case of a more general class of time integration methods referred to as implicit-explicit (IMEX) schemes (Ascher et al, 1997; Pareschi and Russo, 2005). The splitting in Eqs. (450)-(451) is first-order accurate in time, although higher-order accurate IMEX methods have been developed. The main benefit of introducing this split is to avoid a distributed implicit solve, since the spatial advection term couples neighboring cells in space, which can reside on different processing units. On the downside, the time step is restricted by the speed of light, but this is acceptable for relativistic systems. In general, the neutrino-matter interaction term cannot be integrated efficiently in time with explicit methods because the stable time step needed to resolve the governing time scale is many orders of magnitude shorter than that governing the spatial advection term. There is another benefit to integrating the neutrino-matter interaction terms separately with implicit methods: these terms are local in space, which makes them easier to parallelize. The energy advection term can be integrated with explicit or implicit methods.
When explicit methods are used for this term, an additional time step restriction is incurred, but it is usually less severe than that introduced by the spatial advection term (e.g., O'Connor, 2015; Just et al, 2015).
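The structure of such a first-order IMEX step can be sketched on a scalar model problem du/dt = −(b + a)u, with b playing the role of the non-stiff (explicitly treated) terms and a the stiff (implicitly treated) ones; this is an illustrative sketch, not the Kuroda et al (2016) scheme itself:

```python
def imex_euler_step(u, dt, nonstiff_rate, stiff_rate):
    """One first-order IMEX Euler step for du/dt = -(b + a) u:
    forward Euler on the non-stiff rate b (e.g., spatial advection),
    backward Euler on the stiff rate a (e.g., neutrino-matter coupling)."""
    u_star = u * (1.0 - dt * nonstiff_rate)   # explicit stage
    return u_star / (1.0 + dt * stiff_rate)   # implicit stage
```

The implicit stage remains stable and positive even when dt times the stiff rate is enormous, so the time step can be chosen from the advection (light-crossing) restriction alone.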
On the other hand, since the neutrino-matter interaction terms couple the entire momentum space, including the energy advection term in the implicit update (as is also done by, e.g., Müller et al, 2010), which only couples nearest neighbors in energy, does not add significantly to the computational complexity. One should note that, in their Appendix B, Kuroda et al (2016) report significantly different electron fraction profiles when comparing explicit versus implicit integration of S_adv,e, but the reason for this is not clear. The implicit solve in Eq. (451) requires the solution of a nonlinear system of equations. To this end, Kuroda et al (2016) write the system as f(P) = 0 (Eq. (452)), where the unknowns are given by the vector of "primitive" variables P (Eq. (453)). To solve the nonlinear system in Eq. (452), Kuroda et al (2016) employ a Newton-Raphson scheme, updating P^{k+1} = P^k + δP^k for k = 0, 1, 2, . . ., with P^0 = P^*. The iteration is continued until |δP^k| < tol |P^k|, where the tolerance is typically set to tol = 10^{−8}. Kuroda et al (2016) treat the problem fully implicitly, evaluating the neutrino-matter interactions at t^{n+1}, and thus include derivatives of the opacities in S_νm with respect to P in the Jacobian (∂f/∂P). To help convergence of the Newton-Raphson procedure, Kuroda et al (2016) also monitor the change in total lepton number during the iterations (see their Section 3.3 for details), which improves the robustness of the method. Note that, in the primitive vector in Eq. (453), the radiation quantities are the Eulerian moments E, F_j, while the closure and the neutrino-matter interaction terms are most naturally expressed in terms of the Lagrangian moments J, H_j. To evaluate the closure and collision terms during the Newton-Raphson iterations, the Lagrangian moments are kept consistent with the Eulerian moments through the relations connecting the two sets of moments. The number of iterations needed to reach convergence varies during a simulation.
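The Newton-Raphson update with this relative stopping criterion can be sketched generically as follows (f, jac, and the toy test problem are ours, not the actual Kuroda et al (2016) system):

```python
import numpy as np

def newton_raphson(f, jac, P0, tol=1.0e-8, max_iter=50):
    """Solve f(P) = 0 by Newton-Raphson, stopping when the update
    is small relative to the state: |dP| < tol * |P|."""
    P = np.array(P0, dtype=float)
    for _ in range(max_iter):
        dP = np.linalg.solve(jac(P), -f(P))
        P += dP
        if np.linalg.norm(dP) < tol * np.linalg.norm(P):
            break
    return P

# Toy example (not the radiation system): root of P^2 - 4 = 0.
root = newton_raphson(lambda P: np.array([P[0]**2 - 4.0]),
                      lambda P: np.array([[2.0 * P[0]]]),
                      [3.0])
```

In the actual application, assembling jac(P) is the expensive part, since it contains opacity derivatives with respect to all the primitive variables.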
The number of iterations is at its maximum in the center around core bounce (several tens), but settles down to ∼10 after the shock stalls. Just et al (2015), employing the O(v/c) limit of the Lagrangian two-moment model in Sect. 4.7.3 coupled to non-relativistic hydrodynamics, also use a combination of explicit and implicit methods to integrate the coupled equations in time, but ease the computational cost by treating some interaction terms explicitly. They split the solution vector into radiation variables X = (J, H_j) and fluid variables U = (ρ, ρY_e, ρv, e_t), where the total fluid energy density is e_t = e_i + ρv²/2, and e_i is the internal energy density. They write the radiation hydrodynamics system such that, in the transport equations, δ_t X_hyp represents the velocity-independent hyperbolic terms, δ_t X_vel represents all the velocity-dependent terms, and δ_t X_src represents neutrino-matter interactions. The phase-space advection terms combine to δ_t X_adv = δ_t X_hyp + δ_t X_vel. In the hydrodynamics equations, δ_t U_hyd represents the non-radiative physics, while δ_t U_src represents the radiative source terms. For a given time step ∆t, when advancing the system from t^n to t^{n+1} = t^n + ∆t, a 'predictor' step to t^{n+1/2} = t^n + ∆t/2 is performed first, followed by the 'corrector' step, where double superscripts indicate that the source terms can be evaluated using radiation and hydrodynamics variables in the old and the new states. (The implicit neutrino-matter solve can be simplified considerably by time-lagging some terms. See the discussion below.) Compared with the scheme of Kuroda et al (2016) in Eqs. (450)-(451), the scheme of Just et al (2015) uses two explicit evaluations and two implicit evaluations, instead of one of each. Also note that Kuroda et al (2016) treat the velocity-dependent terms implicitly in time, while these terms are treated explicitly by Just et al (2015).
While formally first-order accurate in time, the scheme in Eqs. (459)-(462) can be shown to be second-order accurate with respect to the explicit part. Except for the use of both old and new variables in the implicit part, it is equivalent to the scheme presented by McClarren et al (2008). The prospect of evaluating some variables in the implicit neutrino-matter solve in the old state is potentially rewarding, since this part of the solve usually accounts for the majority of the computational cost in simulations. In doing so, stability and accuracy concerns are important to consider, and these could be investigated with rigorous analysis. Methods with time lagging can be considered unconverged or partially converged implicit methods, and can be quite accurate, but this depends on the chosen time step and the degree of nonlinearity of the problem (see, e.g., Knoll et al, 2001; Lowrie, 2004). For stability of the explicit part of the IMEX scheme in Eqs. (459)-(462), an upper bound on the time step is given by the advection time scale τ_adv = ∆x/c ≈ 3 µs × (∆x/1 km). On the other hand, the neutrino-matter interaction time scale can be estimated as τ_int = λ_ν/c ≈ 10 ns × (λ_ν/3 × 10^{−3} km), where λ_ν is the neutrino mean free path (cf. Figure 4 in Sect. 4.1). In the core of a core-collapse supernova, λ_ν ≈ 3 × 10^{−3} km, so that τ_int ≪ τ_adv, which implies that the neutrino-matter interaction terms should be integrated with implicit methods in order to keep ∆t/τ_adv = O(1). However, τ_int should be viewed as the time scale for neutrino-matter equilibration, and neutrinos have practically equilibrated with the matter at densities above 10^{12} g cm^{−3}.
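The scalings quoted above can be checked with a few lines of arithmetic (constant values and function names are ours):

```python
C_LIGHT = 2.998e10  # speed of light [cm / s]
KM = 1.0e5          # kilometer in [cm]

def light_crossing_time(length_km):
    """Time for light to cross the given length [s]."""
    return length_km * KM / C_LIGHT

tau_adv = light_crossing_time(1.0)      # 1 km grid cell: ~3 microseconds
tau_int = light_crossing_time(3.0e-3)   # 3e-3 km mean free path: ~10 ns
stiffness = tau_adv / tau_int           # explicit step would be ~300x too small
```

The ratio of a few hundred between the two time scales is what makes an explicit treatment of the interaction terms impractical at these densities.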
Since, near equilibrium, the matter quantities (i.e., ρ, e_i, and the electron density n_e) evolve on time scales that typically exceed τ_adv, it is reasonable to ask whether some neutrino opacities, which depend nonlinearly on ρ, e_i, and n_e, can be evaluated in a lagged fashion in order to avoid costly reevaluations during an iterative implicit solve. Numerical experiments can give valuable insights into this question. To this end, Just et al (2015) considered three cases for comparison: (a) The radiation moments X and the fluid variables e_i and n_e appearing in the source terms δ_t X_src and δ_t U_src are defined at t^{n+1}. Only the Eddington and heat flux factors (k and q) and the coefficients of the Legendre expansion of energy-coupling interactions (e.g., scattering; cf. Eq. (182)) are evaluated at t^n. (b) Like case (a), but e_i and n_e in the source terms are evaluated at t^n for all the opacities. This alleviates the computational cost of recomputing the opacities within the iteration procedure. Iterations are still performed in this case because the radiation moments appearing in the blocking factors are treated implicitly. (c) Like case (b), but all the energy-coupling interactions are treated explicitly in time. This renders the matrix to be inverted in the implicit solve diagonal.
Using case (b) where ρ > 10^{11} g cm^{−3} and case (c) for ρ ≤ 10^{11} g cm^{−3}, Just et al (2015) performed a detailed comparison of their scheme in spherical symmetry with results from Liebendörfer et al (2005) (obtained with Boltzmann-based codes) for a 13 M_⊙ star, and found good agreement. In addition, they computed an additional run with the same physical specifications, but with case (b) replaced by case (a) for ρ > 10^{11} g cm^{−3}, and found the results essentially unaltered (see their Fig. 11). See also Just et al (2018) for an extensive comparison of the two-moment method of Just et al (2015) with the PROMETHEUS-VERTEX code (Rampp and Janka, 2002; Buras et al, 2006), and for the impact of various approximate treatments of the relevant physics. We also note that O'Connor (2015), who also used an explicit treatment of the matter quantities in evaluating the neutrino-matter sources, reported good agreement with Liebendörfer et al (2005) across many quantities. After obtaining expressions for the radiation moments, the changes to the fluid momentum and kinetic energy densities due to neutrino-matter interactions are computed from the corresponding momentum-space sums, where the sums extend over all neutrino frequencies ν and species ξ, and the repeated index on the fluid velocity components v_j implies summation over all spatial dimensions. Skinner et al (2019), employing a very similar O(v/c) two-moment model as Just et al (2015) coupled to non-relativistic hydrodynamics, also use explicit and implicit methods to integrate the coupled equations in time. They describe their time integration scheme only in the context of emission, absorption, and isotropic, isoenergetic scattering.
Skinner et al (2019) write the radiation hydrodynamics system in the form of Eq. (465), where the evolved quantities are Q = (ρ, ρv_j, ρe, ρY_e, J, H_j), where e is the total specific energy of the gas, and J and H_j are, respectively, the comoving-frame spectral radiation energy density and momentum density, representing all species and groups. Components of J and H_j are denoted J_sg and H_{j,sg}, where s denotes neutrino species and g denotes frequency group. In Eq. (465), the flux divergence F^i_{Q;i} and S_non-stiff represent terms from the phase-space advection operator, while S_stiff represents neutrino-matter interactions. Skinner et al (2019) use operator splitting to integrate the coupled system of equations. The phase-space advection terms are integrated with the optimal second-order SSP-RK scheme of Shu and Osher (1988), while the update due to neutrino-matter interactions is performed with a backward Euler solve. Applied to Eq. (465), this scheme requires two evaluations of F^i_{Q;i} and S_non-stiff and one implicit solve to evaluate S_stiff per time step.
After the explicit update in Eqs. (466)-(467), a nested iteration scheme is employed, in which, for each spatial point, the coupled system is solved for the material internal energy density, u, and electron fraction, Y_e (or, equivalently, the temperature, T, and Y_e), and the spectral radiation energy density, J. In Eqs. (469)-(471), j_sg and κ_sg are the emission and absorption coefficients (depending on ρ, which is fixed in this step, T, and Y_e), N_A is Avogadro's number, and ν is the neutrino frequency. In the nested iteration scheme, the updates are separated into "inner" and "outer" parts. In the k-th outer iteration, the radiation energy density is updated implicitly in the inner iteration, where the opacities and emissivities are evaluated using T^{k−1} and Y_e^{k−1} (with the values from the previous time step serving as the initial guess in the first iteration). The changes in energy and electron fraction, and the corresponding residuals, are then computed, where u^k = u(T^k, Y_e^k) (the internal energy also depends on ρ, which is fixed in this part of the solve). Then, using a Newton-Raphson technique, the temperature T^k and electron fraction Y_e^k are found such that r^k_E = r^k_{Y_e} = 0. The iteration scheme is terminated when the relative changes in temperature and electron fraction are below a specified tolerance (e.g., 10^{−6}). In this sense, the converged solutions satisfy Eqs. (469)-(471). Skinner et al (2019) report that, in practice, their iteration procedure converges in a few iterations for a wide range of conditions. An obvious benefit of this nested approach is that nonlinear iterations are performed on a smaller system with only two unknowns (T and Y_e). Note, however, that modifications to this algorithm are needed if energy-coupling interactions such as scattering and pair processes are to be included in an implicit fashion, as in cases (a) and (b) from Just et al (2015) discussed above.
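The nested structure (inner linear radiation update with frozen opacities, outer Newton-type update of the matter state) can be illustrated with a gray, single-group toy model; all closures (kappa_of, B_of, u_of_T) and function names are our own simplifications, not the Skinner et al (2019) equations:

```python
def nested_solve(u_n, J_n, dt, kappa_of, B_of, u_of_T, T_guess,
                 tol=1.0e-6, max_outer=50):
    """Outer loop: adjust T so the matter-energy residual vanishes.
    Inner step: backward-Euler update of the radiation energy J,
    linear because the opacity kappa and emissivity B are frozen
    at the previous outer iterate."""
    T = T_guess
    J = J_n
    for _ in range(max_outer):
        kap, B = kappa_of(T), B_of(T)
        # inner: implicit (backward Euler) update of J with frozen kap, B
        J = (J_n + dt * kap * B) / (1.0 + dt * kap)
        # energy handed to the radiation field over the step
        dE = dt * kap * (B - J)
        # outer: Newton-type update of T from residual u(T) - u_n + dE = 0,
        # using a simple numerical derivative (J held fixed)
        r = u_of_T(T) - u_n + dE
        h = max(1.0e-6 * abs(T), 1.0e-12)
        r_h = u_of_T(T + h) - u_n + dt * kappa_of(T + h) * (B_of(T + h) - J)
        T_new = T - r * h / (r_h - r)
        if abs(T_new - T) < tol * abs(T):
            T = T_new
            break
        T = T_new
    return T, J
```

Because matter and radiation only exchange energy, the converged solution keeps u + J equal to its initial value; in the full scheme, the analogous statement holds group by group through Eqs. (469)-(471).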
After obtaining u^{n+1}, Y_e^{n+1}, T^{n+1}, and J^{n+1}_sg by solving Eqs. (469)-(471), the radiation momentum density is updated implicitly, where σ_sg is the scattering coefficient. Finally, the fluid momentum and kinetic energy densities ((ρv_j) and (ρe_k), respectively) are updated, where, in the last equation, the repeated index j implies summation over spatial dimensions. The total energy density of the gas at t^{n+1} is then obtained from Eq. (481), where Eq. (469), with Eq. (471) inserted, and Eq. (480) are used. Note that Eq. (481) differs from the total energy update listed in Skinner et al (2019); see their Eq. (32), which is equivalent to Eq. (480), but with ρe_k → ρe. We believe Eq. (481) is correct in this context since it accounts for changes in both internal and kinetic energy due to neutrino-matter interactions.

Lepton number and energy conservation
We end this section on discretization techniques for two-moment models with a discussion of lepton number and energy conservation. These conservation laws are inherent in the system of equations evolved, and they provide a crucial consistency check on the numerical solution. The challenges discussed here in the context of the two-moment model mirror the challenges discussed in Sect. 6.1.3 for Boltzmann transport. The concept of lepton number conservation is easily understood by considering Eqs. (44) and (124), which are evolution equations for the electron density and neutrino number density, respectively. The Eulerian electron number density is given by N_e = DY_e/m_B = W n_e, and the Eulerian neutrino lepton number density and lepton number flux density are defined analogously from the neutrino number moments. Then, combining Eq. (44), using the source term in Eq. (191), with Eq. (124) results in the conservation law for the total lepton number, with total lepton number flux G^i_Lep = N_e v^i + G^i_ν. A similar conservation statement for the total energy is not available in the relativistic case because the matter and neutrino equations governing the evolution of the four-momentum, Eqs. (45), (46), (113), and (114), are not local conservation laws. Instead, the so-called ADM mass, M_ADM (Baumgarte and Shapiro, 2010), defined as a global quantity, is conserved. (See, e.g., Kuroda et al (2016), their Eq. (71), for a definition applicable to the CCSN context.) In this case, conservation of the ADM mass can be monitored as a consistency check. Kuroda et al (2016), see their Figure 7, report violations of ADM mass conservation, ∆M_ADM (i.e., deviations from the initial value), of order ∆M_ADM ≈ 8 × 10^{50} erg early after core bounce. Müller et al (2010), see their Figure 12, report violations of ADM mass conservation of similar magnitude in a simulation extending beyond 500 ms after core bounce.
In their simulation, ∆M_ADM jumps by about 5 × 10^{50} erg at bounce, and keeps increasing more gradually to ∆M_ADM ≈ 2 × 10^{51} erg by the end of the simulation. This change in the ADM mass is only about 0.5% relative to the initial value. Müller et al (2010) argue that the velocity-dependent terms in the transport equations are the terms most responsible for the violation of energy (or ADM mass) conservation. To see this, it is instructive to consider the equations they solve in the special relativistic limit, with Cartesian coordinates and no neutrino-matter interactions. Neutrino-matter interactions are entirely local, and lepton number and four-momentum conservation in this sector can be enforced by constraints as in Eqs. (198)-(200). The challenge stems from the discretization of the phase-space advection operators; i.e., the left-hand side of the moment equations. In the special relativistic limit, with Cartesian coordinates and no neutrino-matter interactions, the Lagrangian two-moment model corresponding to the one used by Müller et al (2010) is given by the energy equation (cf. Eq. (116)) and the momentum equation (cf. Eq. (118)), Eqs. (484) and (485), where the "hat" is used to denote that a factor ε² has been absorbed into the definition of the moments (e.g., Ĵ = ε² J). Note that neither Eq. (484) nor Eq. (485) is a local conservation law. Therefore, a numerical method based on these equations requires care in the discretization process to achieve neutrino number, energy, and momentum conservation. (Neutrino energy and momentum contribute to the ADM mass.) First, note that dividing Eq. (484) by ε results in Eq. (487), which is a local phase-space conservation law for the spectral number density. In arriving at Eq. (487), the remainder after bringing ε^{−1} inside the energy derivative in Eq. (484) cancels with the right-hand side. This is exactly what the discretization of the energy derivative term developed by Müller et al (2010) (discussed in Sect.
6.5.2) is designed to do in order to achieve lepton number conservation.
On the other hand, energy and momentum conservation involve both the Eulerian and Lagrangian decompositions of T^μν; cf. Eqs. (99) and (88), respectively. Thus, adding W times Eq. (484) to the contraction of v_j with Eq. (485) gives Eq. (489), which is a local phase-space conservation law for the spectral energy density. In arriving at Eq. (489), the remainders after bringing W inside the spacetime derivative in Eq. (484) and v_j inside the spacetime derivative in Eq. (485) cancel with the terms due to the sources on the right-hand sides of Eqs. (484) and (485) in a nontrivial way (Eq. (490)), since, in special relativity, n_μ = (−1, 0, 0, 0). Similarly, adding W v_j times Eq. (484) to Eq. (485), one obtains Eq. (492), which is a local conservation law for the spectral momentum density. Again, in arriving at Eq. (492), the remainder after bringing W v_j inside the spacetime derivative in Eq. (484) cancels with the sources in Eqs. (484) and (485) in a nontrivial way (Eq. (493)), since, in special relativity and with Cartesian coordinates, ∂_ν g_{jμ} = 0.
Equations (490) and (493) can be viewed as constraints. Since the discretizations of Eqs. (484) and (485) are unlikely to satisfy these constraints, they are inconsistent with energy conservation in the sense of Eq. (489) and momentum conservation in the sense of Eq. (492). In the fully relativistic case, one is faced with the same issue, namely that the discretization of the Lagrangian two-moment model (Eqs. (116) and (118)) is, to a certain degree, inconsistent with the discretization of the Eulerian two-moment model (Eqs. (109) and (114)). Since it is the Eulerian moments that enter the definition of the ADM mass, this inconsistency can propagate and manifest itself as violations of ADM mass conservation. On the other hand, by using the Eulerian two-moment model as the starting point for a numerical method, e.g., as in Kuroda et al (2016), it may be easier to control ∆M_ADM. (The time evolutions of the ADM mass reported by Kuroda et al (2016) and Müller et al (2010) are indeed quite different.) However, while the use of the Eulerian two-moment model may provide an advantage with regard to controlling energy conservation, one is still left with the equally challenging task of maintaining consistency with the number equation (Eq. (123)) and controlling lepton number conservation, as discussed in detail by Cardall et al (2013b), and in this case violations of lepton number conservation in the sense of Eq. (483) may still result.
We conclude this section by discussing number, energy, and momentum conservation in the context of the O(v/c) limit of the relativistic Lagrangian two-moment model discussed above, as implemented by Just et al (2015) and Skinner et al (2019). (Note, we use units in which c = 1.) The energy and momentum equations, Eqs. (484) and (485), then take the forms given in Eqs. (494) and (495). For simplicity, we ignore terms proportional to the time derivative of the fluid three-velocity, which is a reasonable approximation. In Eqs. (494) and (495), we introduce a constant parameter, Θ, that is either zero or one. For Θ = 0, the two-moment model reduces to the one solved by Just et al (2015) and by Skinner et al (2019). However, when Θ = 1, as we will show below, the two-moment model is better aligned with number, energy, and momentum conservation. First, dividing Eq. (494) by the particle energy ε and rearranging, one obtains Eq. (496), which is a local conservation law for the spectral number density D̂ + Θ v_i Î^i. Note that, when Θ = 0, it is the Lagrangian number density defined in Eq. (87) that is conserved, which is incorrect in the O(v/c) limit. On the other hand, when Θ = 1, Eq. (496) is a conservation law for the O(v/c) approximation of the Eulerian number density defined in Eq. (97), which is conserved.
Next, we consider energy and momentum conservation. Following the approach in the relativistic case, adding Eq. (494) to the contraction of v_j with Eq. (495) yields Eq. (497), which, to O(v/c), is a local conservation law for the Eulerian spectral energy density Ĵ + (1 + Θ) v_i Ĥ^i. With Θ = 1, this is the correct O(v/c) limit of the Eulerian energy density in Eq. (101). Terms of higher order in the fluid velocity have been moved to the right-hand side of Eq. (497), and these must remain small for the O(v/c) limit to be valid. Also note that, with Θ = 0, energy conservation breaks down at leading order in the fluid three-velocity (a factor of 2 should appear in the coefficient of the second term inside the parentheses of the time derivative). Similarly, adding v_j times Eq. (494) to Eq. (495), one obtains Eq. (498), which, to O(v/c), is a local conservation law for the Eulerian spectral momentum density Ĥ_j + v_j Ĵ + Θ v_i K̂^i_j. Again, with Θ = 1, this is the correct O(v/c) limit of the Eulerian momentum density in Eq. (102).
Thus, at the expense of some additional computational complexity, by letting Θ = 1 in Eqs. (494) and (495), the two-moment model becomes consistent with number, energy, and momentum conservation in the O(v/c) limit.
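Collecting the results above in one place (a restatement, not new content), the spectral densities conserved by the O(v/c) model are:

```latex
\begin{aligned}
\text{number:}   &\quad \hat{\mathcal{D}} + \Theta\, v_i \hat{\mathcal{I}}^i, \\
\text{energy:}   &\quad \hat{\mathcal{J}} + (1+\Theta)\, v_i \hat{\mathcal{H}}^i, \\
\text{momentum:} &\quad \hat{\mathcal{H}}_j + v_j \hat{\mathcal{J}} + \Theta\, v_i \hat{\mathcal{K}}^i{}_j,
\end{aligned}
```

with Θ = 1 reproducing the O(v/c) Eulerian densities of Eqs. (97), (101), and (102), while Θ = 0 conserves the Lagrangian number density instead.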
6.6 One-moment kinetics

6.6.1 Newtonian-gravity, O(v/c), finite-difference implementation

One-moment kinetics is typically deployed in the context of neutrino transport in core-collapse supernovae using the multigroup (i.e., multi-frequency) flux-limited diffusion approximation (MGFLD). Such MGFLD approaches solve the neutrino and antineutrino moment equations for the zeroth moment of the distribution function, the multigroup neutrino/antineutrino energy density, with closure provided at the level of the first moment, the neutrino/antineutrino energy flux, via a diffusion-like equation, modified in such a way that the flux cannot become superluminal (flux limiting). Swesty and Myra (2009) were the first to implement such an approach in axisymmetric simulations of core-collapse supernovae. The equations for the neutrino/antineutrino multigroup energy densities used by Swesty and Myra are Eqs. (500) and (501), where E_ε and Ē_ε are the neutrino and antineutrino energy densities per group, P_ε and P̄_ε are the neutrino and antineutrino stress tensors, and S_ε and S̄_ε are the neutrino and antineutrino matter couplings, respectively. The energy flux in both equations is given by a Fick's-like relation of the form F_ε = −D_ε ∇E_ε, where D_ε is the diffusion coefficient and κ^T_ε is the total opacity. In flux-limited diffusion schemes, the diffusion coefficient D_ε is modified, with a general form involving a flux limiter. In particular, the so-called Levermore-Pomraning flux limiter (Levermore and Pomraning, 1981) is given by λ(R_ε) = (2 + R_ε)/(6 + 3R_ε + R_ε²), where R_ε is the radiation Knudsen number: the ratio of the mean free path to some characteristic length scale in the problem. Note that the Knudsen number is different for different energy groups, given that the opacities are typically (and for neutrinos in core-collapse supernovae, definitely) energy dependent.
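The limiter's behavior in the two limits can be sketched and checked directly (function names are ours; R is the Knudsen number defined above):

```python
def lp_limiter(R):
    """Levermore-Pomraning flux limiter lambda(R) = (2+R)/(6+3R+R^2)."""
    return (2.0 + R) / (6.0 + 3.0 * R + R * R)

def flux_over_cE(R):
    """|F| / (c E) = lambda(R) * R: bounded by 1 for all R >= 0,
    so the limited flux can never become superluminal."""
    return lp_limiter(R) * R
```

As R → 0 (opaque regions), lp_limiter → 1/3, recovering the standard diffusion coefficient; as R → ∞ (transparent regions), lambda(R)·R → 1, so the flux magnitude approaches cE, the free-streaming limit.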
The radiation stress tensor takes the typical form, written in terms of the scalar Eddington factor, χ_ε, which in the case of the Levermore-Pomraning flux-limiting scheme is expressed in terms of the flux limiter and the Knudsen number. Given the choice of Levermore-Pomraning flux limiting, the evolution equations (500) and (501) take on their final form. As Swesty and Myra note, these equations are not in conservative form. They opt to monitor conservation of lepton number and energy after the fact; the degree to which they achieve either was not documented. Their equations are operator split as follows (written here for the neutrinos only, not the antineutrinos). For the purpose of describing the numerical method used to treat each of the operator-split equations, Swesty and Myra note, first, that the advection equations take the general form of Eq. (515), where ψ is the scalar field (E_ε or Ē_ε) being advected. They then deploy the ZEUS consistent-advection scheme of Stone and Norman (1992) in a directionally split manner in each dimension (in their case, x_1 and x_2) of the problem. For the x_1 update, Eq. (515) is discretized as in Eq. (516), with fluxes defined in terms of interpolated, upwinded field values. In Eq. (518), ρ is the fluid mass density, and I_1(ψ) is the van Leer monotonic upwind advection function (Van Leer, 1977). The x_2 update is computed in the same way, with the obvious substitutions.
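The flavor of the monotonic upwind advection update can be illustrated with a one-dimensional, periodic, constant-velocity sketch (a hypothetical setup for illustration only; the actual ZEUS consistent-advection scheme advects ψ/ρ with the mass flux on a staggered mesh):

```python
import numpy as np

def vanleer_slopes(psi):
    """Van Leer (1977) monotonized slopes: harmonic mean of adjacent
    differences where they agree in sign, zero at extrema."""
    dl = psi[1:-1] - psi[:-2]
    dr = psi[2:] - psi[1:-1]
    s = np.zeros_like(psi)
    mask = dl * dr > 0.0
    s[1:-1][mask] = 2.0 * dl[mask] * dr[mask] / (dl[mask] + dr[mask])
    return s

def advect_1d(psi, v, dt, dx):
    """One directionally split, upwind, van Leer advection step on a periodic
    grid with constant velocity v. Conservative by construction: the update is
    a difference of face fluxes."""
    s = vanleer_slopes(psi)
    cfl = v * dt / dx
    if v >= 0.0:   # donor cell for face i+1/2 is cell i
        flux = v * (psi + 0.5 * (1.0 - cfl) * s)
    else:          # donor cell for face i+1/2 is cell i+1
        flux = v * (np.roll(psi, -1) - 0.5 * (1.0 + cfl) * np.roll(s, -1))
    return psi - (dt / dx) * (flux - np.roll(flux, 1))
```

Because the limiter zeroes the slope at extrema, the update is total-variation diminishing for CFL numbers below unity, so no new extrema are created.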
The remaining term, due to neutrino diffusion, relativistic effects, and collisions, is differenced implicitly in time and in phase space as in Eq. (523). In Eq. (524), the factors g_2, g_31, and g_32 derive from the 3-covariant form of the spatial metric used by Swesty and Myra, which is given by

ds² = (g_1)² dx_1² + (g_2)² dx_2² + (g_31 g_32)² dx_3² (527)

and is written to accommodate Cartesian, cylindrical, and spherical coordinates. In Eq. (526), κ^a and κ^s are the absorption and scattering opacities, respectively, and G(ε, ε′) is the pair-annihilation kernel. The factors α and η are constants. N_g is the number of energy groups, and the superscript n + t, with t taking on different values, designates the update stages for the electron, muon, and tau neutrino distributions in the overall update scheme used by Swesty and Myra, shown in their Figure 3. To solve Eq. (523) and its counterpart for antineutrinos simultaneously, given their coupling, Swesty and Myra implement the Newton-BiCGSTAB subclass of Newton-Krylov iterative methods: Eq. (523) and its antineutrino counterpart are first linearized, and BiCGSTAB is then used to solve the resultant "inner" linear system of equations for the updates to the iterates of the outer Newton iteration. Once the quantities ^lS_ε, where l denotes the neutrino flavor, are known from the solution of Eq. (523) and its counterparts for the heavy-flavor neutrinos, Swesty and Myra update the fluid electron fraction and energy density using the operator-split equations (528) and (529), where n_e is the electron number density and E is the matter energy density. Equation (529) is solved in operator-split fashion for each flavor. The discretizations for Eqs.
(528) and (529) for electron-flavor neutrinos (where both lepton number and energy are exchanged) are given by Eq. (531), which updates [n_e]^{n+1}_{i+(1/2),j+(1/2)} from [n_e]^{n+b}_{i+(1/2),j+(1/2)}. In a similar manner, the neutrino-matter momentum exchange is computed.
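The structure of a Newton-BiCGSTAB solve can be sketched on a small toy nonlinear system; the BiCGSTAB routine below is a bare-bones, hand-rolled version for self-containment (the production solver operates on the large, linearized, coupled neutrino-antineutrino system, typically with preconditioning):

```python
import numpy as np

def bicgstab(A, b, tol=1e-10, max_iter=200):
    """Bare-bones BiCGSTAB (van der Vorst) for A x = b, starting from x = 0."""
    x = np.zeros_like(b)
    r = b - A @ x
    r_hat = r.copy()
    rho = alpha = omega = 1.0
    v = p = np.zeros_like(b)
    for _ in range(max_iter):
        rho_new = r_hat @ r
        beta = (rho_new / rho) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho_new / (r_hat @ v)
        s = r - alpha * v
        if np.linalg.norm(s) < tol:      # converged at the half step
            x = x + alpha * p
            break
        t = A @ s
        omega = (t @ s) / (t @ t)
        x = x + alpha * p + omega * s
        r = s - omega * t
        rho = rho_new
        if np.linalg.norm(r) < tol:
            break
    return x

def newton_bicgstab(F, J, x, tol=1e-10, max_iter=20):
    """Outer Newton iteration; each inner linear system J(x) dx = -F(x)
    is solved with BiCGSTAB."""
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x + bicgstab(J(x), -Fx)
    return x
```

The outer loop carries the nonlinearity; the inner Krylov solver only ever sees the current Jacobian, which is what makes the method attractive for large, sparse, coupled systems.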

6.6.2 General-relativistic, finite-difference implementation
A general relativistic implementation of MGFLD was developed by Rahman et al (2019). They begin with the 3+1 metric and the following definitions of the comoving-frame spectral neutrino energy density, momentum density, and stress tensor, respectively, where p^µ̂ ≡ ε(1, l^î) denotes the comoving-frame momentum-space coordinates and l^î is a comoving-frame, momentum-space unit three-vector.
where the tetrad orthonormality relation e^î_j e_{î k} = γ_{jk} was used. Rahman et al. divide the numerical update into three steps, operator splitting Eq. (534) into the source term, the radial and spectral-shift terms, and the nonradial terms. In step 1, the focus is on the source term and the corresponding terms in the equations for the matter specific internal energy and electron fraction. In the set of equations to be solved, ν and ξ indicate the neutrino species and energy bin, respectively, ∆ε_ξ is the energy bin width, and m_u is the atomic mass unit. These equations are differenced fully implicitly in time and solved using Newton-Raphson iteration. Linearization of the equations in J_{ν,ξ}, e, and Y_e leads to a system of linear equations that must be solved at each iteration. To do so, Rahman et al. use a direct (LAPACK) solver. The quantities ρ, α, W, and κ_a are all held constant during the Newton-Raphson procedure.
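The structure of the step-1 solve can be sketched with a toy two-variable system: a single energy density J relaxing toward a hypothetical equilibrium J_eq = a e^4 (standing in for the Fermi-Dirac equilibrium density), coupled to the specific internal energy e so that J + ρe is conserved by construction. Backward Euler plus Newton-Raphson, with the 2x2 linear system per iteration solved directly (numpy.linalg.solve wraps LAPACK):

```python
import numpy as np

def implicit_source_update(J, e, rho, kappa, dt, a_rad=1.0,
                           tol=1e-12, max_iter=50):
    """Backward-Euler update of the toy coupled system
        (J^{n+1} - J^n)/dt     =  kappa (a_rad e^4 - J)^{n+1}
        rho (e^{n+1} - e^n)/dt = -kappa (a_rad e^4 - J)^{n+1},
    solved by Newton-Raphson. J_eq = a_rad e^4 is a hypothetical equilibrium,
    not the actual Rahman et al. source term."""
    x = np.array([J, e])
    for _ in range(max_iter):
        Jn1, en1 = x
        S = kappa * (a_rad * en1**4 - Jn1)
        F = np.array([Jn1 - J - dt * S,
                      rho * (en1 - e) + dt * S])
        dSdJ = -kappa
        dSde = 4.0 * kappa * a_rad * en1**3
        Jac = np.array([[1.0 - dt * dSdJ, -dt * dSde],
                        [dt * dSdJ, rho + dt * dSde]])
        dx = np.linalg.solve(Jac, -F)   # direct (LAPACK) solve per iteration
        x += dx
        if np.max(np.abs(dx)) < tol * np.max(np.abs(x)):
            break
    return x[0], x[1]
```

Because the two discrete equations carry the source term with opposite signs, the converged solution conserves J + ρe exactly, mirroring the energy-exchange bookkeeping of the real source-term solve.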
In step 2, Eq. (536) is solved, where the operator includes radial advection, diffusion, and acceleration, as well as spectral shifts, and D_1 denotes the radial diffusion coefficient. Equation (536) is solved using the Crank-Nicolson scheme, Eq. (538). All gravitational and hydrodynamics variables are kept fixed during transport updates. R^n_{r,i} on the right-hand side of Eq. (538) is evaluated at both t^n and t^{n+1}. For t^{n+1}, Rahman et al. provide the following discretizations. The diffusion term is discretized using zone-edge values, where i − 1/2 and i + 1/2 denote the left and right edges of zone i, respectively; values of the gravity and hydrodynamics variables at zone edges are determined by linear interpolation of their zone-center counterparts. The fluid acceleration term is discretized next; the metric and hydrodynamics variables before and after the metric and hydrodynamics updates are used to evaluate the time derivative in Eq. (542). The advection term is discretized in an upwind fashion. Spectral shifts, the last term in Eq. (537), are discretized using the number-conservative scheme of Müller et al (2010) discussed in Sect. 6.5.2. The flux factor, f^î, and the Eddington tensor, χ^îĵ, are used to replace H^î and K^îĵ by f^î I and χ^îĵ I, respectively. In evaluating the spectral-shift terms, both the flux factor and the Eddington tensor are evaluated at t^n, whereas the energy density, I, is evaluated at t^{n+1}. The remaining advection and diffusion terms are included in the last transport step, encapsulated in Eq. (547), where D_2 and D_3 are the diffusion coefficients in the θ and φ directions, respectively. Equation (547) is evolved using one of two explicit methods: the Allen-Cheng method (Allen and Cheng, 1970) or the Runge-Kutta-Legendre (RKL2) method of Meyer et al (2012). The latter is a conditionally stable method expressly designed for parabolic (diffusion) equations.
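A minimal sketch of a Crank-Nicolson update for a one-dimensional diffusion term, with zero-flux boundaries and a dense direct solve standing in for the banded solver one would use in practice:

```python
import numpy as np

def crank_nicolson_diffusion(u, D, dt, dx, steps=1):
    """Crank-Nicolson step(s) for u_t = D u_xx with zero-flux boundaries:
        (I - dt/2 D L) u^{n+1} = (I + dt/2 D L) u^n,
    with L the standard 3-point Laplacian. Averaging the right-hand side over
    t^n and t^{n+1} gives second-order accuracy in time."""
    n = u.size
    L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / dx**2
    L[0, 0] = L[-1, -1] = -1.0 / dx**2   # zero-flux (reflecting) boundaries
    A = np.eye(n) - 0.5 * dt * D * L
    B = np.eye(n) + 0.5 * dt * D * L
    for _ in range(steps):
        u = np.linalg.solve(A, B @ u)
    return u
```

Because the row (and column) sums of L vanish with these boundaries, the discrete total ∑u is conserved to round-off at every step, a property worth verifying in any diffusion solver.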
In the Allen-Cheng method, a predictor step provides a partial update, which is followed by a corrector step that provides the complete update. In Eqs. (548) and (549), only one spatial index, k, is explicitly shown; it represents a zone index in either the θ or the φ direction. Moreover, in the discretizations shown, the gridding in the single dimension is assumed to be uniform, with zone width ∆y. In the (s-stage) RKL2 method, which Rahman et al. deploy as a 4-stage method, the update is built up through a sequence of stages; for the s-th stage and zone k, R(J) is discretized analogously. Finally, it is important to note that Rahman et al. go to great lengths to ensure that their definitions of the diffusion coefficients preserve causality for both the individual and the total radiative fluxes. To accomplish this, they compute the gradient of the energy density and the corresponding Knudsen number, where (κ^t)_{i,j,k} is the transport opacity at the cell center (i, j, k). Equation (161) is then used to compute the flux limiter, and the causality-preserving diffusion coefficients follow. Rahman et al. do not report on the conservation of lepton number in their code, but given their use of the method developed by Müller et al (2010), which is specifically designed to conserve lepton number, it should be quite good. They do report on their conservation of energy: a change in total energy of 1.85 × 10^51 erg at 60 ms after bounce, most of which arises at bounce, followed by a much more gradual increase between 60 and 525 ms after bounce to a final value of ∆E of 2.0 × 10^51 erg. As discussed in Sect. 6.5.4, their use of the Lagrangian two-moment model as the starting point for their MGFLD implementation does not lend itself to conserving energy, nor does their use of flux-limited diffusion, as discussed in Just et al (2015) and in references cited therein.
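The RKL2 recursion is compact enough to sketch directly from the published coefficients (following Meyer et al (2012); this is our sketch, not the Rahman et al implementation):

```python
import numpy as np

def rkl2_step(u, rhs, tau, s=4):
    """One s-stage Runge-Kutta-Legendre (RKL2) super-time-step for
    du/dt = rhs(u), using the recursion of Meyer et al (2012). For a parabolic
    rhs, stable up to tau <= dt_expl * (s**2 + s - 2)/4."""
    b = np.empty(s + 1)
    b[0] = b[1] = 1.0 / 3.0
    for j in range(2, s + 1):
        b[j] = (j * j + j - 2.0) / (2.0 * j * (j + 1.0))
    a = 1.0 - b
    w1 = 4.0 / (s * s + s - 2.0)
    y_jm2 = u.copy()                      # Y_0
    My0 = rhs(u)
    y_jm1 = u + b[1] * w1 * tau * My0     # Y_1
    for j in range(2, s + 1):
        mu = (2.0 * j - 1.0) / j * b[j] / b[j - 1]
        nu = -(j - 1.0) / j * b[j] / b[j - 2]
        mu_t = mu * w1
        gam_t = -a[j - 1] * mu_t
        y = (mu * y_jm1 + nu * y_jm2 + (1.0 - mu - nu) * u
             + mu_t * tau * rhs(y_jm1) + gam_t * tau * My0)
        y_jm2, y_jm1 = y_jm1, y
    return y_jm1                          # Y_s
```

The payoff is that the stable super-time-step grows quadratically with the number of stages, while the cost (one rhs evaluation per stage) grows only linearly, which is why such methods are attractive for the nonradial diffusion terms.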

6.6.3 Newtonian-gravity, O(v/c), finite-volume implementation
As part of the development of the CASTRO code, Zhang et al (2013) developed an MGFLD solver using finite-volume methods. They express the equations of multigroup radiation hydrodynamics in terms of group quantities defined as follows. In Eq. (561), the neutrino energy density per frequency, E_ν, is integrated over the frequency group defined by the interval [ν_{g−1/2}, ν_{g+1/2}] to yield the energy density per group. Eq. (562) defines the group emissivity in terms of the emissivity, η, and the group width ∆ν_g = ν_{g+1/2} − ν_{g−1/2}. In order of appearance in the equations, the remaining quantities, λ_g, κ_g, and f_g, are defined by evaluating the flux limiter, λ, the absorption coefficient, κ, and the Eddington factor, f, at a representative group frequency, ν_g; i.e., they are all group-mean values. Finally, for neutrinos, ξ_g is given by Eq. (563), with s = +1 for electron neutrinos and s = −1 for electron antineutrinos. Zhang et al. split these equations into three subsets, based on their mathematical characteristics and in an effort to minimize issues arising from operator splitting.
There is a hyperbolic subsystem that includes the evolution of the electron fraction (it also includes pieces of the evolution equation for the neutrino energy density, but the neutrino energy density is not evolved using this subsystem, as will be discussed). There is a second set of hyperbolic equations that governs the evolution of the neutrino energy density sans the diffusion term and the term that describes the coupling of neutrinos to the matter; this second set of equations results from a splitting of their equation for the neutrino energy density per frequency, E_ν, prior to integration over group frequencies. Finally, there is a parabolic system of equations that describes the evolution of the neutrino energy density due to the diffusion of neutrinos in the stellar core, as well as the evolution of the matter internal energy and electron fraction as a result of neutrino-matter interactions. The equations in the first hyperbolic subsystem, Eqs. (564) through (568), are solved using an explicit, unsplit PPM method, with characteristic limiting, full corner coupling, and the approximate Riemann solver of Bell et al (1989). Given the computed Godunov states, the radiation energy density is in turn updated via Eq. (569). Finally, Eq. (570), which takes the form of an advection equation in neutrino-energy space, is solved using a second, explicit Godunov method based on the method of lines. In this explicit part of the update scheme, the third-order, TVD Runge-Kutta scheme developed by Shu and Osher (1988) is used.
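The third-order SSP (TVD) Runge-Kutta scheme of Shu and Osher (1988) is compact enough to state in full; each stage is a forward Euler step, and the stages are combined convexly:

```python
import numpy as np

def ssp_rk3(u, L, dt):
    """Third-order, TVD (SSP) Runge-Kutta scheme of Shu and Osher (1988) for
    du/dt = L(u). Each stage is a forward Euler step; the convex combinations
    are what propagate stability and bound-preserving properties of forward
    Euler to third-order accuracy."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))
```

Here L is any method-of-lines spatial operator (for Eq. (570), an upwind discretization of the advection in neutrino-energy space).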
The parabolic system, Eqs. (572) through (574), is instead solved implicitly. Zhang et al. reformulate the equations and linearize in T, Y_e, and E_g to obtain the (outer) linear system. They point out that if the derivatives of the diffusion coefficient, cλ_g/χ_g, with respect to T, Y_e, and E_g are ignored, the linear system of equations collapses to an equation in which λ_g, κ_g, χ_g, and j_g are evaluated at the k-th iterate, together with companion equations that stem from Eqs. (575) and (576) upon linearization and that are conservative for energy and lepton number. In Eqs. (587) and (588), the quantities H, Θ, H̄, and Θ̄ are defined accordingly. In turn, T is updated, and the next outer iteration is initiated. Zhang et al. deploy the synthetic acceleration scheme of Morel et al (1985, 2007), extended in this case by them to neutrino transport, to accelerate convergence of their outer iteration. Note that the system given by Eqs. (572)-(574) does not include energy-coupling interactions (e.g., inelastic scattering). Inclusion of these interactions in a fully implicit solve requires modifications to the solution procedure. The degree to which the approach outlined here conserves lepton number and energy was not documented.

6.7 Structure-preserving methods

Structure-preserving methods are advanced numerical methods that aim to capture key properties of the underlying continuous PDEs. They include methods that preserve physical bounds on solutions (e.g., positive distribution functions), achieve asymptotic limits of a multi-scale model (e.g., the diffusion limit in radiation transport, or steady states), preserve constraints (e.g., the divergence-free condition in magnetohydrodynamics), or conserve secondary quantities (e.g., simultaneous conservation of neutrino number and energy). As such, structure-preserving methods are more faithful to the physics, and often improve accuracy and robustness.
The energy-conserving discretization of the spherically symmetric Boltzmann equation by Liebendörfer et al (2004) discussed in Sect. 6.1.3, and the number-conserving discretization of the energy equation in the Lagrangian two-moment model by Müller et al (2010) discussed in Sect. 6.5, are examples of structure-preserving discretizations already in use in simulations. These aim to preserve secondary quantities that are not evolved directly by the numerical method. Below we discuss discretizations that aim to preserve physical bounds on evolved quantities.

6.7.1 Preamble: discontinuous Galerkin methods
Since the following subsections employ the discontinuous Galerkin (DG) method, which has yet to be adapted to modeling core-collapse supernovae, we include a short description of key elements here by considering the scalar conservation law, Eq. (593), with a linear flux f(u) = a u, where a is a constant in space and time. We refer to Cockburn et al (1989, 1990); Cockburn and Shu (1991); Cockburn and Shu (1998) for pioneering, in-depth expositions on the early development of DG methods. See also Cockburn and Shu (2001); Shu (2016) for reviews. To solve Eq. (593), the computational domain D is divided into a triangulation T of non-overlapping elements K = (x_L, x_H), so that D = ∪_{K∈T} K. On each element, the solution is approximated by functions in the approximation space V^k_h, where P^k(K) denotes the space of one-dimensional polynomials of maximal degree k (e.g., Legendre polynomials). Functions in V^k_h can be discontinuous across element interfaces (hence "discontinuous" Galerkin). One then writes the approximate solution to Eq. (593) on element K as the expansion in Eq. (596), where the expansion coefficients u^K_i are the unknowns for which we solve, and the b^K_i ∈ V^k_h are the basis functions. Next, one defines in what sense u^K_h approximates u, the solution to Eq. (593). To this end, the residual is defined and required to be orthogonal to all test functions ϕ_h ∈ V^k_h; i.e., Eq. (597). Inserting Eq. (596) into Eq. (597), and performing an integration by parts on the flux term, gives Eq. (598). However, the entirely local formulation in Eq. (598) is problematic, because it does not specify how solutions in adjacent elements interact. In addition, a unique flux must be defined on the element interfaces at x_{L/H} to recover the conservation statement inherent in Eq. (593). To resolve this, the fluxes on the element interfaces are replaced by a unique value, which then gives the semi-discrete DG method in weak form, which holds for all ϕ_h ∈ V^k_h and all K ∈ T. In Eq.
(599), f̂(u^K_h)(x) is a unique numerical flux defined on the interface. For the scalar problem considered here, the familiar upwind flux can be used, defined in terms of the approximations immediately to the left and right of x, which can differ. Undoing the integration by parts that led to Eq. (599) gives the semi-discrete DG method in strong form, Eq. (601), which holds for all ϕ^K_h ∈ V^k_h and all K ∈ T. Here, the weak and strong formulations (Eqs. (599) and (601), respectively) are equivalent statements. By comparing the strong formulation with Eq. (597), one sees that the residual in the DG solution is orthogonal to ϕ_h only in the convergent limit, when f(u^K_h(x^±)) → f̂(u^K_h)(x). In Sections 6.7.2 and 6.7.3, we will refer only to the weak formulation in Eq. (599).
To further illustrate how the weak formulation in Eq. (599) is used in practice, insert Eq. (595) into Eq. (599) and let ϕ_h = b_j (j = 1, . . . , k + 1); one then obtains an equation for the expansion coefficients, Eq. (603), in which the components of the mass matrix and stiffness matrix are defined by Eq. (604). Since the basis functions are polynomials, the integrals in Eq. (604) can be computed exactly with, e.g., Gaussian quadratures. Eq. (603) is now a system of ODEs, which can be integrated in time with an ODE solver. For non-stiff problems, explicit Runge-Kutta methods can be used. The DG method has been used to develop structure-preserving methods in a range of applications; see, for example, Zhang and Shu (2010b) and Wu and Tang (2016) for physical-constraint-preserving methods for the non-relativistic and relativistic Euler equations, respectively, Li and Xing (2018) for a steady-state-preserving method for the Euler equations with gravitation, and Juno et al (2018) for an energy-conserving DG method for kinetic plasma simulations. We also mention the work of Heningburg and Hauck (2020), where DG and finite-volume methods are combined into a hybrid transport scheme that captures the diffusion limit and is more efficient in terms of memory usage and computational time than the corresponding DG-only scheme.
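As a concrete illustration of these ingredients, the following sketch assembles the mass and stiffness matrices on the reference element [-1, 1] for a modal Legendre basis, using Gauss-Legendre quadrature (which is exact for these polynomial integrands):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def dg_matrices(k):
    """Mass and stiffness matrices on the reference element [-1, 1] for a
    modal DG basis of Legendre polynomials P_0..P_k:
        M_ij = int P_i P_j dx,   S_ij = int P_i' P_j dx.
    A (k+1)-point Gauss-Legendre rule integrates polynomials up to degree
    2k+1 exactly, which covers both integrands."""
    x, w = leggauss(k + 1)
    P = np.array([Legendre.basis(i)(x) for i in range(k + 1)])
    dP = np.array([Legendre.basis(i).deriv()(x) for i in range(k + 1)])
    M = (P * w) @ P.T      # quadrature sum: sum_q w_q P_i(x_q) P_j(x_q)
    S = (dP * w) @ P.T
    return M, S
```

Orthogonality of the Legendre basis makes M diagonal, with M_ii = 2/(2i+1), so the ODE system Eq. (603) can be inverted trivially element by element, one of the practical attractions of modal DG.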

6.7.2 Bound-preserving methods
Zhang and Shu (2010a) developed a general framework for "maximum-principle-preserving", high-order methods for scalar conservation laws (see also Zhang and Shu, 2011). Inspired by this work, Endeve et al (2015) developed bound-preserving methods in the context of neutrino transport, aiming to maintain a distribution function satisfying f ∈ [0, 1], as dictated by Pauli's exclusion principle. They considered the (collisionless) phase-space advection problem in curvilinear coordinates, and included a general relativistic example in spherical symmetry with a time-independent spacetime metric given by ds² = −α² dt² + γ_ij dx^i dx^j, with γ_ij = ψ⁴ diag(1, r², r² sin²θ), where ψ is the conformal factor. (For notational brevity, we have suppressed the (µ, ε)-dependence.) Using the numerical flux function in Eq. (617), one can write the update for the cell average as in Eq. (632). On the right-hand side of Eq. (632) (last line), the coefficient in front of f^n_h(r^−_L) is nonnegative, since α(r_L), ψ²(r_L) > 0 and µ + |µ| ≥ 0. Only the coefficient in front of f^n_h(r^+_L) can become negative, since µ − |µ| ≤ 0. Assuming f^n_h(r^−_L), f^n_h(r^+_L) ≥ 0, it is straightforward to show that the flux contribution at r_L preserves nonnegativity; similarly, for f^n_h(r^−_H), f^n_h(r^+_H) ≥ 0, one finds the same for the flux contribution at r_H. Therefore, assuming f^n_h ≥ 0 in the combined quadrature set S^(r) = Ŝ^(r) ⊗ S̃^(r), where the points in Ŝ^(r) are used to evaluate the integral over K^(r) in Eq. (624), the updated cell average remains nonnegative. Here, S represents the finite set of quadrature points in K where the bounds must hold. For ϑ = 0, the entire solution is limited to the cell average, while for ϑ = 1 the limited solution equals the unlimited solution. It is thus absolutely necessary to maintain the bounds on the cell average; otherwise the limiting procedure will be futile. In practice, ϑ remains close to unity, and the limiting is a small correction. It has been shown (Zhang and Shu, 2010a) that this "linear scaling limiter" maintains high order of accuracy.
Also, note that the limiting procedure is conservative for particle number, since it preserves the cell-averaged distribution function; this follows by inserting Eq. (640) into the definition of the cell average in Eq. (620). In the discussion above, forward Euler time stepping is used, which is only first-order accurate. For explicit time integration, the bound-preserving scheme can easily be extended to higher-order accuracy in time by using high-order SSP time stepping methods (Shu and Osher, 1988; Gottlieb et al, 2001), which are multi-stage methods that can be formulated as convex combinations of forward Euler operators. Provided limiting is applied at each stage, the bound-preserving property follows from convexity arguments. For neutrino transport problems where neutrino-matter interactions are treated with implicit methods, it is difficult to achieve both high-order accuracy and bounded solutions, and this topic remains open for further research. We will discuss this issue further below in the context of a two-moment model. Another open issue is the challenge of simultaneous number and energy conservation in the phase-space advection problem discussed here: the limiter in Eq. (640) preserves the particle number, but not higher moments of the distribution function. In the present model, the spacetime is stationary, which implies that the so-called Komar mass (proportional to α ε f) is conserved. Thus, if bounded solutions and exact conservation of the Komar mass are desired, modifications to the limiter are needed.
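The linear scaling limiter itself is only a few lines; the sketch below (ours, for illustration) damps point values toward the cell average just enough to restore the bounds:

```python
import numpy as np

def linear_scaling_limiter(f_point_values, f_avg, fmin=0.0, fmax=1.0):
    """Linear scaling limiter (Zhang & Shu 2010a): scale the polynomial about
    its cell average, assumed to satisfy fmin <= f_avg <= fmax, by the largest
    theta in [0, 1] for which all point values in the quadrature set satisfy
    the bounds. The cell average, and hence particle number, is preserved."""
    Mp = np.max(f_point_values)
    mp = np.min(f_point_values)
    theta = 1.0
    if Mp > fmax:
        theta = min(theta, (fmax - f_avg) / (Mp - f_avg))
    if mp < fmin:
        theta = min(theta, (fmin - f_avg) / (mp - f_avg))
    return f_avg + theta * (f_point_values - f_avg)
```

Since the correction is a uniform rescaling about the cell average, the limited polynomial inherits the bounds at every quadrature point while the cell average (the conserved quantity) is untouched.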

6.7.3 Realizability-preserving moment methods
Chu et al (2019) developed a numerical method for a two-moment model based on DG spatial discretization and IMEX time stepping. The method is specifically designed to preserve bounds on the moments as dictated by Pauli's exclusion principle. As such, it is an extension of the bound-preserving method discussed above, but for a nonlinear system of hyperbolic balance laws with stiff sources. As is reasonable for an initial investigation, the model adopted by Chu et al (2019) is rather simple when compared to the two-moment models used to model neutrino transport in contemporary core-collapse supernova simulations. However, the work highlighted the role of the moment closure in the design of robust two-moment methods for neutrino transport, and developed an IMEX scheme with a reasonable time step restriction that is compatible with bounded solutions. As such, the work put down the foundations for a framework that may help future developments of robust methods for models with improved physical fidelity. To simplify the discussion, we consider the model in Chu et al (2019) for one spatial dimension and define moments of the distribution function as

{J, H, K}(x, t) = (1/2) ∫_{−1}^{+1} f(µ, x, t) {1, µ, µ²} dµ.

The two-moment model can be written as a system of hyperbolic balance laws, Eq. (645), where the evolved moment vector is u = (J, H)^T, the flux vector is f = (H, K)^T, the emissivity is η = (σ_A J_0, 0)^T, and R = diag(σ_A, σ_T). Here, J_0 is the zeroth moment of an equilibrium distribution function, f_0, satisfying f_0 ∈ [0, 1] (i.e., Fermi-Dirac statistics), σ_A ≥ 0 is the absorption opacity, and σ_T = σ_A + σ_S, where σ_S ≥ 0 is the scattering opacity (assuming isotropic and isoenergetic scattering). In Eq. (645), a closure is assumed, so that K = K(u). For fermions, the Pauli exclusion principle requires the distribution function to satisfy the condition 0 ≤ f ≤ 1. This puts corresponding restrictions on realizable values for the moments of f.
It is then interesting to study the design of a numerical method for solving the system of moment equations given by Eq. (645) that preserves realizability of the moments; i.e., the moments evolve within the set of admissible values dictated by Pauli's exclusion principle. The moments u = (J, H)^T are realizable if they can be obtained from a distribution function satisfying f(µ) ∈ [0, 1]. The set of all realizable moments is (e.g., Larecki and Banach, 2011)

R = { u = (J, H)^T : J ∈ (0, 1) and (1 − J) J − |H| > 0 }.

The geometry of the set R in the (H, J)-plane is illustrated in Figure 18 (light blue region). For comparison, the realizable domain R^+ of positive distribution functions (no upper bound on f), which is a cone defined by J > 0 and J − |H| > 0 (light red region), is also shown. The realizable set R is a bounded subset of R^+. Importantly, the set R is convex: for two arbitrary elements u_a, u_b ∈ R, the convex combination u_c = ϑ u_a + (1 − ϑ) u_b ∈ R, where 0 ≤ ϑ ≤ 1. This property is used repeatedly (sometimes in a nested fashion) in the design of the numerical method. The DG method for the two-moment model is in many ways very similar to that discussed in Sect. 6.7.2. The computational domain D is divided into elements K = (x_L, x_H). On each element, the approximation space is V^k_h, where P^k is the space of polynomials in x of maximal degree k. The approximation to the moments, u_h, is then expressed as an expansion in which each basis function P_i ∈ V^k_h and each u_i is a two-component vector representing the unknowns per element in the DG method. The semi-discrete DG method, Eq. (650), then holds for all ϕ_h ∈ V^k_h and all elements K. In Eq. (650), f̂ is a numerical flux, defined in terms of a numerical flux function h. In the DG method, any standard numerical flux designed for hyperbolic conservation laws can be used.
However, Chu et al (2019) used the global Lax-Friedrichs flux, with the dissipation parameter set by the largest characteristic speed. It should be noted that when using the DG method for radiation transport, as long as the approximation space includes at least linear elements, it is not necessary to switch between centered and upwind-type fluxes (e.g., as is done in Eqs. (416)-(417) for finite-volume and finite-difference methods to capture both the streaming and diffusive regimes). As such, the DG spatial discretization is naturally structure-preserving with respect to the diffusion limit, and well-suited for radiation transport (e.g., Larsen and Morel, 1989; Adams, 2001). In fact, the dissipation term in the numerical flux in Eq. (652), which is not present in the diffusive regime when one switches between centered and upwind fluxes, plays an important role in the proof of the realizability-preserving property of the two-moment method presented here. It may therefore be difficult, if not impossible, to design realizability-preserving methods for the two-moment model without this term. Note that in the diffusion limit, |H| ≪ J, the moment vector u is close to the line connecting (0, 0) and (0, 1) in Figure 18. Then, if the particle density is low (J ≪ 1), the moment vector is safely inside R. On the other hand, if the particle density is high (J ≈ 1), which, e.g., is the case for electron neutrinos in the supernova core, the moment vector is dangerously close to the boundary of R, and care is needed in order to maintain u ∈ R. Further away from the supernova core, where neutrinos transition to streaming conditions, |H| ≈ (1 − J) J (≈ J when J ≪ 1), the moment vector is again close to the boundary of R, and care in the numerics is again warranted. Maintaining u ∈ R is necessary to ensure the well-posedness of the moment closure procedure (Levermore, 1996; Junk, 1998; Hauck et al, 2008). Realizability-preserving methods maintain u ∈ R and thus improve robustness.
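A sketch of the global Lax-Friedrichs flux for the two-moment system, with the wave-speed bound α = 1 (in units of c) and an illustrative diffusion-limit closure K = J/3 standing in for a fermionic closure:

```python
import numpy as np

def global_lax_friedrichs(uL, uR, closure):
    """Global Lax-Friedrichs numerical flux for u = (J, H), f(u) = (H, K):
        fhat = (f(uL) + f(uR))/2 - (alpha/2)(uR - uL),
    with alpha = 1, since all characteristic speeds are bounded by c = 1.
    The dissipation term is retained even in diffusive regions; it is what
    enters the realizability-preservation argument."""
    fL = np.array([uL[1], closure(uL[0], uL[1])])
    fR = np.array([uR[1], closure(uR[0], uR[1])])
    return 0.5 * (fL + fR) - 0.5 * (uR - uL)

def thick_closure(J, H):
    """Eddington (diffusion-limit) closure K = J/3, used here purely as an
    illustrative stand-in for a fermionic two-moment closure."""
    return J / 3.0
```

Any closure satisfying K = K(u) can be passed in; the flux itself is closure-agnostic.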
The semi-discretization of the two-moment model in Eq. (650) results in a system of ODEs of the form dU/dt = T(U) + C(U), where U represents all the degrees of freedom evolved with the DG method, including the cell average of u_h in each element. In Eq. (653), the transport operator T(U) corresponds to the second (surface) and third (volume) terms on the left-hand side of Eq. (650), while the collision operator C(U) corresponds to the right-hand side of Eq. (650). To evolve Eq. (653) forward in time, Chu et al (2019) developed IMEX schemes in which the transport operator is treated explicitly and the collision operator is treated implicitly. As discussed in Sect. 6.7.2, the extension of the bound-preserving property to high-order methods relies on the strong-stability-preserving (SSP) property of the ODE solver. Explicit SSP Runge-Kutta (RK) methods of moderate order (≤ 3) are relatively easy to construct. Unfortunately, high-order (second- or higher-order temporal accuracy) SSP-IMEX methods with time step restrictions due solely to the explicit transport operator do not exist (see, for example, Proposition 6.2 in Gottlieb et al (2001), which rules out the existence of implicit SSP-RK methods of order higher than one). Because of this, Chu et al (2019) resorted to developing formally first-order accurate IMEX schemes with the following properties: (i) second-order accuracy in the streaming limit; (ii) SSP (called convex-invariant in Chu et al (2019)), with a time step restriction due solely to the explicit part; and (iii) well-behaved in the diffusion limit, in the sense that the flux density remains proportional to the gradient of the number density with the correct constant of proportionality.
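The basic building block, an explicit transport step followed by an implicit collision solve, can be sketched for a scalar toy with an isotropic relaxation operator C(u) = σ(u_eq − u), for which the backward Euler solve is available in closed form (an illustration of the convexity argument, not the full Chu et al scheme):

```python
def imex_stage(u, dt, transport, sigma, u_eq):
    """Forward-Euler transport step followed by a backward-Euler collision
    solve for C(u) = sigma (u_eq - u):
        u_tilde = u + dt * T(u)
        u_new   = u_tilde + dt * sigma * (u_eq - u_new).
    The closed-form solve is a convex combination of u_tilde and u_eq with
    weights 1/(1 + dt*sigma) and dt*sigma/(1 + dt*sigma), so any bounds
    satisfied by both are inherited, no matter how stiff sigma is."""
    u_tilde = u + dt * transport(u)
    return (u_tilde + dt * sigma * u_eq) / (1.0 + dt * sigma)
```

This is precisely why SSP (convex-invariant) structure matters: each stage is a convex combination of states known to satisfy the bounds, so the composite scheme cannot leave the admissible set.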
The optimal scheme, in the sense that it is SSP with the same time step as the forward Euler scheme applied to the explicit part, is given by Eqs. (656)-(659); e.g., the second implicit stage takes the form Ũ^(2) = U^(1) + ∆t C(Ũ^(2)), followed by the limiter application U^(2) = Λ_R Ũ^(2). This IMEX scheme involves two explicit evaluations of the transport operator and two implicit solves to evaluate the collision operator. The explicit stages, Eqs. (656) and (658), are forward Euler steps, while the implicit stages, Eqs. (657) and (659), can be viewed as backward Euler steps. Without collisions (C = 0), the scheme reduces to the optimal second-order accurate SSP-RK scheme of Shu and Osher (1988) (also referred to as Heun's method). Although the scheme is formally only first-order accurate in time when collisions are frequent, quantities evolve on a diffusive time scale in this case, which is much longer than the time step required for stability of the explicit part; temporal discretization errors therefore remain small. On the other hand, second-order accuracy in the streaming limit is essential to maintaining non-oscillatory radiation solutions with the DG method in the streaming regime. In Eqs. (656)-(659), Λ_R is a realizability-enforcing limiter used to enforce pointwise realizability within each element. The limiter, which we describe in more detail below, assumes that the cell average is realizable after each step. We begin by finding sufficient conditions for realizability preservation of the cell average in each step. For this purpose, since the remaining steps are equivalent, we consider only the explicit step in Eq. (656) and the implicit step in Eq. (657). For an explicit forward Euler update, as in Eq. (656), the equation for the cell-averaged moments is obtained from Eq. (650) with ϕ_h = 1, Eq. (660). To construct a realizability-preserving explicit update for the two-moment model, one seeks sufficient conditions such that ū_K ∈ R. The strategy is very similar to that taken for the bound-preserving scheme discussed in Sect.
6.7.2. To evaluate the integral on the right-hand side of Eq. (660) (cf. Eq. (655)), an N-point Gauss-Lobatto quadrature rule is used on the interval K, with points x̂_q and weights ŵ_q ∈ (0, 1], normalized such that ∑_{q=1}^{N} ŵ_q = 1. Using this quadrature and the numerical flux function in Eq. (651), one can write Eq. (660) as Eq. (662), which expresses ū^(1)_K as a convex combination of {u^n_h(x̂_q)}_{q=2}^{N−1} and Φ. (Note that ŵ_1 = ŵ_N, so that 2ŵ_1 = 2ŵ_N = ŵ_1 + ŵ_N.) Thus, if, for each element K, u^n_h(x̂_q) ∈ R for all q = 2, . . . , N − 1 and Φ ∈ R, then, since the set R is convex, it follows that ū^(1)_K ∈ R. In Eq. (662), λ = ∆t/(∆x ŵ_1) = ∆t/(∆x ŵ_N). In the last line of Eq. (663), if λ ≤ 1, Φ is expressed as a convex combination of Φ_0, Φ_1, and Φ_2. Thus, if Φ_0, Φ_1, Φ_2 ∈ R, the time step restriction λ ≤ 1 is sufficient to guarantee ū^(1)_K ∈ R. The condition Φ_0 ∈ R follows from the assumption u^n_h(x^+_L), u^n_h(x^−_H) ∈ R, while the conditions Φ_1, Φ_2 ∈ R follow from the additional assumptions u^n_h(x^−_L), u^n_h(x^+_H) ∈ R and Lemma 2 in Chu et al (2019), which proves Φ_1, Φ_2 ∈ R provided these expressions can be generated from distributions f ∈ [0, 1]. We note that for Lemma 2 in Chu et al (2019) to hold in the current setting, the moments must be consistent with a distribution function satisfying 0 ≤ f ≤ 1, which demands a two-moment closure based on Fermi-Dirac statistics (the second component of Φ_1 and Φ_2 involves the Eddington factor). The maximum entropy closures of Cernohorsky and Bludman (1994) and Larecki and Banach (2011) and the Kershaw-type closure of Banach and Larecki (2017) are suitable. On the other hand, the Minerbo, M1, and Kershaw closures discussed in Sect. 4.7.4 are based on positive distribution functions (with no upper bound), and are therefore not suitable if u ∈ R is desired. These closures are only compatible with the relaxed condition u ∈ R^+.
(In this case the approach discussed here, with minor modifications, is still applicable; e.g., see Olbrant et al (2012) for a method with explicit time stepping.) The realizability-enforcing limiter Λ_R is similar in spirit to limiters devised to maintain positivity of the pressure when solving the Euler equations of gas dynamics (e.g., Zhang and Shu, 2010). If u_h lies outside R at any quadrature point x ∈ Ŝ, i.e., γ(u_h) < 0, then, since u_K ∈ R, there exists an intersection point of the straight line s_q(ψ), connecting u_K and the value u_q of u_h at the troubled quadrature point x̂_q, with the boundary of R. This line is parameterized by s_q(ψ) = ψ u_q + (1 - ψ) u_K, where ψ ∈ [0, 1]. The intersection point ψ_q is obtained by solving γ(s_q(ψ)) = 0 for ψ. (In practice, ψ need not be accurate to many significant digits, and a bisection algorithm terminated after a few iterations is sufficient.) This completes the description of the major steps in the scheme presented in Chu et al (2019).
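The convex-combination argument above can be summarized schematically as follows (our notation; the precise expressions are those of Eqs. (660)-(663), and the further decomposition of Φ into Φ_0, Φ_1, and Φ_2 follows from the specific numerical flux):

```latex
% Forward Euler update of the cell average, with numerical flux \widehat{\mathcal{F}}:
\bar{u}^{(1)}_{K}
  = \bar{u}^{n}_{K}
    - \frac{\Delta t}{\Delta x}
      \left[\,\widehat{\mathcal{F}}(x_{H}) - \widehat{\mathcal{F}}(x_{L})\,\right],
\qquad
\bar{u}^{n}_{K} = \sum_{q=1}^{N} \hat{w}_q\, u^{n}_{h}(\hat{x}_q).
% Separating the endpoint terms (recall \hat{w}_1 = \hat{w}_N) and grouping
% them with the fluxes gives the convex-combination form:
\bar{u}^{(1)}_{K}
  = \sum_{q=2}^{N-1} \hat{w}_q\, u^{n}_{h}(\hat{x}_q)
    + \left(\hat{w}_1 + \hat{w}_N\right)\Phi,
\qquad
\Phi = \tfrac{1}{2}\left[\,u^{n}_{h}(\hat{x}_1) + u^{n}_{h}(\hat{x}_N)\,\right]
       - \tfrac{\lambda}{2}
         \left[\,\widehat{\mathcal{F}}(x_{H}) - \widehat{\mathcal{F}}(x_{L})\,\right],
\quad
\lambda = \frac{\Delta t}{\Delta x\,\hat{w}_1}.
```

Since the interior weights ŵ_q and the factor (ŵ_1 + ŵ_N) are positive and sum to one, realizability of the interior point values and of Φ, together with λ ≤ 1, yields realizability of the updated cell average by convexity of R.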

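The limiter's intersection-point search described above can be sketched as follows (a hypothetical realizability indicator γ for the fermionic two-moment state and made-up moment values; not the production implementation):

```python
import numpy as np

def gamma(u):
    # Hypothetical realizability indicator for a two-moment state u = (J, H):
    # gamma(u) >= 0 iff u lies in the realizable set R; here the fermionic
    # bound |H| <= J(1 - J), with 0 <= J <= 1, is assumed for illustration.
    J, H = u
    return J * (1.0 - J) - abs(H)

def s(psi, u_q, u_K):
    # Straight line connecting the cell average u_K (psi = 0)
    # and the troubled point value u_q (psi = 1).
    return psi * np.asarray(u_q) + (1.0 - psi) * np.asarray(u_K)

def limit_toward_average(u_q, u_K, iters=20):
    """Bisection on gamma(s(psi)) = 0: return a realizable state on the
    segment between u_K and u_q, as close to u_q as the iterations allow."""
    if gamma(u_q) >= 0.0:        # point value already realizable: keep it
        return np.asarray(u_q)
    lo, hi = 0.0, 1.0            # gamma(s(0)) = gamma(u_K) >= 0 by assumption
    for _ in range(iters):       # a few iterations suffice in practice
        mid = 0.5 * (lo + hi)
        if gamma(s(mid, u_q, u_K)) >= 0.0:
            lo = mid             # still realizable: move toward u_q
        else:
            hi = mid
    return s(lo, u_q, u_K)

# Example: realizable cell average, unrealizable point value
u_K = np.array([0.5, 0.10])      # gamma = 0.15 > 0
u_q = np.array([0.5, 0.40])      # gamma = -0.15 < 0
u_lim = limit_toward_average(u_q, u_K)
assert gamma(u_lim) >= 0.0       # limited value is realizable
```

Because `lo` is only advanced while the state remains realizable, the returned state is guaranteed to lie in R even with very few bisection iterations.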
Hybrid Methods
From the preceding sections, it is clear that the landscape of approaches to neutrino transport, and the associated numerical methods, is growing rapidly. One- and two-moment models have reached a level of maturity where general relativistic core-collapse supernova modeling is feasible (e.g., Kuroda et al, 2016; Rahman et al, 2019). Multidimensional models with Boltzmann neutrino transport (e.g., using discrete ordinates or Monte Carlo methods) are also under development, and results in axial symmetry have already been published (Nagakura et al, 2017), but more work is needed to reach the same level of maturity as found in moments-based models. One primary reason is, of course, the computational cost associated with transport models, such as Boltzmann models, that provide better resolution of the angular dimensions of momentum space. In particular, the computational cost of the neutrino-matter coupling problem increases dramatically with increased fidelity in this sector. However, the multiscale nature of the neutrino transport problem implies that Boltzmann neutrino transport is probably not necessary everywhere in a simulation. On the one hand, the radiation field is well captured by low-order moment models in the collision-dominated region below the neutrinospheres. On the other hand, higher-fidelity models may be warranted in the gain region, since heating rates are sensitive to the angular shape of the neutrino distributions. (There is already some evidence that two-moment closures are unable to capture certain details in the radiation field; e.g., Harada et al (2019).) This motivates the use of hybrid methods, which aim to combine low- and high-fidelity approaches in order to provide sufficient resolution where needed, at reduced computational cost. Hybrid approaches are used in many areas of computational physics, but have not yet been widely adopted to model neutrino transport in core-collapse supernovae.
We note that the variable Eddington factor (VEF) method of Rampp and Janka (2002), which has been shown to compare well with Boltzmann neutrino transport in spherical symmetry, can be regarded as a hybrid method, in which a simplified (and less computationally expensive) Boltzmann solver is used in the context of a two-moment model to provide the moment closure. Adopting hybrid methods to model neutrino transport in multidimensional models is a potentially rewarding direction for near-future research, and some approaches may even be able to leverage investments in capabilities that have already been developed. Since these methods have not fully found their way into the core-collapse supernova modeling community, we will not go into details, but rather briefly mention some existing work, which in most cases will require further development to account for relativity and domain-specific microphysics. We hope to report more on this interesting field in the future. So-called high-order-low-order (HOLO) approaches (see, e.g., the review by Chacon et al, 2017) are one type of hybrid method gaining popularity in radiation transport (and related) applications. As the name suggests, they combine high-fidelity solvers for the (Boltzmann) transport equation with lower-fidelity solvers (typically based on one- or two-moment models, commonly in a gray formulation) to accelerate the process of solving the high-fidelity model, in particular the nonlinear coupling between radiation and a material background. In these applications, the radiation field is governed by a kinetic model, while the material is governed by a fluid-like model (as in the core-collapse supernova problem). The basic idea is that, in the collisional regime, the interaction between the kinetic and fluid components occurs in a low-dimensional subspace, where only a few moments of the particle distribution function are needed to accurately capture the coupling.
Thus, HOLO methods are effective primarily in regions where the particle mean free path is small and the problem is stiff, and one challenge is to ensure consistency between the two model components. Recent work on HOLO methods applied to the problem of thermal radiative transfer includes applications where the high-order model is solved with continuum methods such as discrete ordinates (e.g., Park et al, 2012, 2013; Lou et al, 2019) or with Monte Carlo methods (e.g., Park et al, 2014; Bolding et al, 2017). We also point out related work on solving the linear transport equation (i.e., without nonlinear coupling to the material) with HOLO (or hybrid) methods by Hauck and McClarren (2013); Willert et al (2013, 2015); Crockatt et al (2017, 2019, 2020).

Solution methods
When ultimately expressed in computer code, all of the previously discussed deterministic methods require implicit numerical methods. When discretized, the transport equations produce a set of nonlinear algebraic equations. When linearized, these in turn lead to linear systems of equations that relate the change in the distribution functions (or in their angular moments) to the neutrino-matter and neutrino-neutrino interactions encoded in the terms on the right-hand side of the equations: the collision term. Because these source terms themselves depend on the changes in the neutrino radiation field, implicit methods are required.
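Schematically, each implicit (backward Euler) update requires solving a small nonlinear algebraic system by Newton-Raphson iteration, with a linear solve per iteration. The toy sketch below (a zero-dimensional emission/absorption model with an invented emissivity, not a production collision term) illustrates the linearize-solve-update cycle and the built-in conservation of total energy:

```python
import numpy as np

# Toy backward Euler step for a 0-d emission/absorption problem:
#   dJ/dt = eta(e) - chi * J        (radiation energy)
#   de/dt = -(eta(e) - chi * J)     (matter internal energy)
# with an invented emissivity eta(e) = a * e**4, making the system nonlinear.
a, chi, dt = 1.0, 10.0, 0.1

def F(x, J0, e0):
    # Residual of the backward Euler equations; F(x) = 0 at the new state.
    J, e = x
    s = a * e**4 - chi * J          # net source at the new time level
    return np.array([J - J0 - dt * s,
                     e - e0 + dt * s])

def jac(x):
    # Analytic Jacobian dF/dx used in the linearized (Newton) solve.
    J, e = x
    ds_dJ, ds_de = -chi, 4.0 * a * e**3
    return np.array([[1.0 - dt * ds_dJ,      -dt * ds_de],
                     [      dt * ds_dJ, 1.0 + dt * ds_de]])

def backward_euler_step(J0, e0, tol=1e-12):
    x = np.array([J0, e0])          # initial guess: old state
    for _ in range(50):
        dx = np.linalg.solve(jac(x), -F(x, J0, e0))   # linear solve per iteration
        x = x + dx
        if np.max(np.abs(dx)) < tol:
            break
    return x

J, e = backward_euler_step(J0=0.0, e0=1.0)
assert np.max(np.abs(F(np.array([J, e]), 0.0, 1.0))) < 1e-10
assert abs((J + e) - 1.0) < 1e-12   # J + e is conserved by construction
```

In production codes the same cycle is carried out, but the "Jacobian" is the large block-structured matrix discussed below, coupling all angle, energy, and species bins.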
The solution of these linear systems is the dominant computational cost of any deterministic method for neutrino transport. The remainder of the panoply of physics that completes a core-collapse supernova model (hydrodynamics, nuclear kinetics, and even the global solution of the gravitational field) is typically far less computationally intensive and often requires significantly less memory capacity and bandwidth. Because the solution techniques for the transport linear systems stress almost every important dimension of modern computer platforms (floating-point performance, memory bandwidth, communication bandwidth and latency), the particulars of individual platforms become an important consideration when a practitioner looks to instantiate a real implementation in the form of a production code. The structural components of modern computers and the quantitative requirements of realistic transport modeling are therefore inextricably linked when one sets out to build a neutrino radiation hydrodynamics code.
Fig. 19: A schematic of the structure of a typical neutrino transport linear system that must be solved at each time step. The diagonal, dense blocks are generally nonsymmetric and have characteristic substructure arising from the coupling in angle, energy, isospin (i.e., between neutrinos and antineutrinos), and neutrino flavor, though the particulars of that structure depend on the lexical ordering of the solution vector. Fully implicit methods also couple individual spatial zones to one another, producing a linear system that contains a series of outlying bands in addition to the diagonally dominant dense block structure. This global linear system typically requires considerable communication on parallel platforms, where domain decomposition is often used to spread the spatial extent of the problem across the distributed memory space. IMEX methods do not require the solution of this global system, but the inversion of a similarly structured set of dense blocks is required at each spatial index. This reduction of the implicit problem to a purely local operation can result in considerable performance advantages.
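The structure described in the figure caption can be mimicked in a toy assembly (sizes invented for illustration): dense diagonal blocks coupling the momentum-space unknowns within each zone, plus nearest-neighbor bands from spatial coupling; the IMEX reduction then amounts to inverting only the diagonal blocks:

```python
import numpy as np

n_zones, n_loc = 6, 4      # illustrative: 6 zones, 4 momentum-space unknowns each
n = n_zones * n_loc
A = np.zeros((n, n))
rng = np.random.default_rng(0)

for z in range(n_zones):
    i = z * n_loc
    # Dense, nonsymmetric diagonal block: angle/energy/species coupling in zone z
    A[i:i+n_loc, i:i+n_loc] = np.eye(n_loc) + 0.1 * rng.standard_normal((n_loc, n_loc))
    # Off-diagonal bands: nearest-neighbor spatial coupling (fully implicit case)
    if z > 0:
        A[i:i+n_loc, i-n_loc:i] = -0.01 * np.eye(n_loc)
    if z < n_zones - 1:
        A[i:i+n_loc, i+n_loc:i+2*n_loc] = -0.01 * np.eye(n_loc)

# An IMEX discretization instead requires only the dense diagonal blocks,
# which can be inverted zone by zone: a purely local, parallelizable operation.
b = np.ones(n)
x_local = np.concatenate([
    np.linalg.solve(A[z*n_loc:(z+1)*n_loc, z*n_loc:(z+1)*n_loc],
                    b[z*n_loc:(z+1)*n_loc])
    for z in range(n_zones)
])
assert x_local.shape == (n,)
```

In a fully implicit method, by contrast, the entire banded matrix A must be solved as one global (and, under domain decomposition, communication-heavy) system.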

Simulation requirements
Regardless of the particulars of the architecture enlisted to solve the requisite equations, the computational demands of neutrino radiation hydrodynamics are prodigious. Some of these demands are imposed directly by the high dimensionality of the transport equation itself. The need to discretize the neutrino phase space with adequate resolution to capture the particulars of the neutrino-matter interactions (cf. Sect. 5) results in energy resolutions that are typically on the order of dozens of groups. This requirement is amplified by the need to spatially resolve matter features in the flow that are of roughly the size of the neutrino mean free path at various points in the computational domain. Adaptive mesh refinement (AMR) can help ameliorate the need to refine the grid everywhere to resolve the shortest mean free paths, but this reduction is typically only partially effective. Indeed, the time-dependent nature of the core-collapse supernova problem often requires that much of the grid be refined as the reheating and explosion epochs evolve. These resolution requirements directly impact the size of the linear systems that must be solved via deterministic methods, typically resulting in quadratic growth in the size of the system for increases in any given phase-space dimension. Therefore, the product of the required energy resolution, spatial resolution, and number of neutrino species, for each of their distribution functions or angular moments, translates directly into a need for scalable implementations of the solution algorithms, able to take effective advantage of future platforms as they grow. This type of scalability is typically termed weak scaling. The figure of merit for weak scaling is how close to constant the runtime remains as the computational load is increased commensurately with the amount of resources.
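A back-of-the-envelope count illustrates how the phase-space dimensionality drives the problem size (the resolutions below are illustrative choices on the order of those quoted for modern models, not recommendations):

```python
# Fiducial (illustrative) resolutions for a 3D multi-frequency two-moment run
n_space   = 512 * 256 * 256      # spatial zones
n_energy  = 24                   # energy groups
n_species = 6                    # nu_e, nubar_e, and four heavy-lepton species
n_moments = 4                    # one density moment plus three flux moments

dof = n_space * n_energy * n_species * n_moments
bytes_per_dof = 8                # double precision
print(f"{dof:.3e} unknowns, {dof * bytes_per_dof / 2**30:.0f} GiB per copy")
```

Even a single copy of the radiation state at this (modest) resolution runs to roughly 2 x 10^10 unknowns and over a hundred GiB, before any solver workspace, Jacobian blocks, or hydrodynamic state are counted.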
For example, as problem size is increased along with the number of MPI ranks used in a simulation, good weak scalability is achieved if the runtime remains constant. Weak scalability is often highly dependent on effective distributed-memory parallelism, including possibly overlapping slow inter-node communication with on-node computation.
However, weak scaling is a necessary, but not sufficient, condition for effective investigation. The resultant simulations must also be capable of execution in reasonable amounts of wall-clock time: runtimes of several months are untenable if one wishes to explore a more-or-less complete set of supernova progenitors. Therefore, reducing the wall-clock time for transport computations is equally important. This so-called strong scalability, i.e., reducing the runtime for a fixed problem size as resources are added, is achievable only if node-level execution is made faster. On modern platforms, this has very much become a question of the effective use of hybrid-node architectures.
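The two figures of merit can be stated precisely (the timings below are invented purely for illustration):

```python
def weak_scaling_efficiency(t_base, t_scaled):
    # Problem size grows with resources; ideal runtime stays constant.
    return t_base / t_scaled

def strong_scaling_efficiency(t_base, t_n, n):
    # Fixed problem size on n times the resources; ideal runtime is t_base / n.
    return t_base / (n * t_n)

# Illustrative: 100 s on 1 node; 125 s on 64 nodes with 64x the work (weak);
# 2.0 s on 64 nodes with the same work (strong).
assert abs(weak_scaling_efficiency(100.0, 125.0) - 0.8) < 1e-12
assert abs(strong_scaling_efficiency(100.0, 2.0, 64) - 0.78125) < 1e-12
```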

Implementation on heterogeneous architectures
Currently, the most widely available and performant microarchitectures are based on graphics processing units (GPUs). As suggested by their name, GPUs were originally designed to handle graphics-intensive tasks in applications ranging from scientific visualization to video games. However, their very high computational throughput and relatively low power consumption (as compared to modern CPUs) led to their adoption as engines for a variety of scientific computing tasks. Indeed, at this writing, GPU-based architectures dominate the highest-end HPC platforms, and all planned near-future exascale platforms will employ GPUs as the primary source of compute power.

Anthony Mezzacappa et al.

The primary characteristic that provides the compute power of modern GPUs is the large number of compute cores, as compared to traditional CPUs. Modern GPUs (e.g., the NVIDIA V100) contain more than 5000 cores, compared to the few dozen present on contemporary CPUs. Each core may have a relatively low clock speed compared to a CPU core, but the sheer number of cores available on a GPU leads to a much higher throughput of computation.
The architecture of GPUs is wholly shaped by the single-instruction, multiple-data (SIMD) execution model. In this execution model, each execution unit takes as input two vectors, performs an identical operation on each pair of operands (one operand from each vector), and produces a resultant vector. Modern CPUs also typically contain SIMD units: MMX, SSE, and AVX instructions are available on Intel architectures, and the POWER and ARM architectures have similar instruction-set extensions to take advantage of similar units. In the case of GPUs, however, these instructions are essentially the only ones available, restricting the amount of branching and conditional execution that can be effectively carried out by the device.
All modern GPU architectures make use of a similar set of hardware components and associated software abstractions. Here, we will primarily use the nomenclature employed by NVIDIA to describe their GPU devices, but other vendors make use of virtually identical concepts and constructions, albeit with slightly different naming. In all cases, kernels are launched on the device as a set of threads. Each of these threads executes a single SIMD pipeline. Within the kernel launch, threads are grouped into a number of blocks. These thread blocks are mapped to individual streaming multiprocessors (SMs). Each SM executes threads in groups of parallel threads termed a warp (the number of threads in a warp, or wavefront, is typically some multiple of 32). Inside each warp, a single, common instruction is executed during a clock cycle. This lockstep execution can be broken by conditionals (e.g., if-then-else instructions). When this occurs, the resulting thread divergence breaks the parallelization of the warp: the divergent branches are executed serially, with the threads on the untaken path stalled while the others proceed.
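The cost of divergence can be pictured in array terms: conceptually, a warp evaluates both sides of a branch for all lanes and combines the results with a mask, so divergent work is not skipped. A rough analogy (not GPU code) in NumPy:

```python
import numpy as np

# One "warp" of 32 lanes; the conditional splits the lanes into two groups.
x = np.linspace(-1.0, 1.0, 32)
mask = x >= 0.0

# Both branch bodies are evaluated for ALL lanes, then combined with the mask,
# just as a diverged warp serially executes both paths with inactive lanes
# stalled. The work of both branches is paid for by every lane.
both = np.where(mask, np.sqrt(np.abs(x)), x**2)
assert both.shape == (32,)
```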
This execution model is further complicated by the hierarchical memory on GPUs. Global memory is accessible by all cores and typically amounts to several GB per device. This memory is often termed high-bandwidth memory, as its bandwidth is typically several times that of the DRAM attached to the CPU host. Closer to each multiprocessor there is a shared memory that offers a space accessible to all cores inside the multiprocessor. It is typically used as a user-managed cache of the global memory, and access to it is typically much faster than fetching addresses from global memory for each core. Ultimately, each core has a certain number of registers, which provide the greatest memory bandwidth but, concomitantly, have the smallest capacities.
Programming GPUs relies on providing as many operands as possible at the maximum possible rate to all of the SMs on a device. The complexity of the memory hierarchy, the execution model, and the possibility of thread divergences can make this a formidable programming task.
Several programming models have been introduced to program GPUs. These include:
1. CUDA: a minor extension of C/C++ for GPU thread programming. CUDA is a proprietary programming model created and supported by NVIDIA.
2. ROCm: an extension of C/C++, much like CUDA in purpose and syntax. ROCm was created by AMD and is open source.
3. OpenCL: a multi-vendor standard. OpenCL is designed to work on a wide variety of platforms, not just GPUs. This makes the model very powerful, but also introduces a measure of irreducible complexity to accommodate this power.
4. OpenACC: a directive-based approach to GPU programming. OpenACC uses code decoration much like OpenMP or other directive-based models, and provides a straightforward path for GPU programming in Fortran.
5. OpenMP (with offload): modern OpenMP standards include a set of extensions to provide facilities for thread-level programming on GPU devices.
The choice among these options for any programmer depends on the code to be produced and the relative agility of the development team. For neutrino radiation transport, the Oak Ridge group, for example, has chosen to work primarily in Fortran, with OpenMP directives to marshal the GPUs. This approach allows them to extend legacy code (in Fortran) in a straightforward and performant manner. Using OpenMP provides a measure of platform independence, as it is the only programming model currently envisaged to be supported on all major GPU hardware (i.e., NVIDIA, AMD, and Intel devices). The partial loss of thread-level control ceded by not using a lower-level model like CUDA or ROCm is not so important for radiation transport, as the vectorized computational kernels produced in evaluating both the left- and right-hand sides of the transport equation provide plenty of floating-point operations to saturate any modern GPU streaming multiprocessor. Therefore, decorating with directives the multi-level loop nests that contain these vectorized operations at their deepest levels is an effective model. In addition, this programming model can be effectively and easily extended with GPU-enabled scientific libraries (e.g., GPU-accelerated versions of BLAS), regardless of the model used by those libraries internally.
Many computational radiation transport practitioners have moved to Monte Carlo (MC) approaches in recent years, driven to this choice by the relative abundance of compute power available on GPUs. However, these approaches are not without complexities on GPUs, as the widely disparate sizes of the memory spaces described above (i.e., GB to kB to bytes as one moves from global memory to shared memory to registers) mean that MC histories are not so simply preserved. These complications mean that the relative expense of Monte Carlo methods (cf. Sect. 6.4) cannot be fully ameliorated by porting to GPUs. Because the dense linear algebra underpinning their implementations does make effective use of GPU compute architectures, IMEX and discrete ordinates approaches have the potential to compete with MC approaches, and with a reduced memory footprint. But this strong reliance on a single class of numerical operations means that the success of these approaches is almost wholly dependent on the performance of linear algebra subprograms on GPUs. This is especially true for so-called batched execution of the solution of linear systems of equations, wherein several matrices and right-hand sides are handled by a single kernel invocation and the solver effectively divides the work among SMs.
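Batched solves map naturally onto array libraries as well: for example, NumPy's np.linalg.solve broadcasts over leading dimensions, solving one small dense system per zone in a single call, which serves as a CPU stand-in for GPU batched solvers (the sizes below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n_zones, n_loc = 1024, 48          # illustrative: one dense block per spatial zone

# One well-conditioned (near-identity) dense block per zone, and one
# right-hand-side vector per zone, stored as stacked arrays.
A = np.eye(n_loc) + 0.01 * rng.standard_normal((n_zones, n_loc, n_loc))
b = rng.standard_normal((n_zones, n_loc, 1))

# A single "batched" call solves all n_zones systems at once: np.linalg.solve
# broadcasts over the leading (zone) dimension, mimicking batched GPU solvers.
x = np.linalg.solve(A, b)
residual = np.max(np.abs(A @ x - b))
assert x.shape == (n_zones, n_loc, 1)
assert residual < 1e-10
```

On GPUs, the analogous batched LU factorization and solve routines (e.g., in vendor BLAS/LAPACK libraries) divide the independent blocks among the streaming multiprocessors in exactly this per-zone fashion.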

Summary and outlook
The last decade has seen considerable, and accelerated, progress made on multiple fronts: (1) Ascertaining the explosion mechanism of core-collapse supernovae.
(2) The development of the theory of general relativistic neutrino radiation hydrodynamics. (3) The development of robust numerical methods for the solution of the neutrino radiation hydrodynamics equations in core-collapse supernova environments. (4) The application of these methods in increasingly sophisticated three-dimensional core-collapse supernova models. At this point, it is fair to say that we are theory and methods rich and that the frontier lies more in the application of these methods in three-dimensional core-collapse supernova models, although further method development is certainly needed. Three-dimensional, fully general relativistic models with all of the relevant neutrino physics in multi-frequency one- or two-moment approaches are on the horizon, the leading examples of which are documented in the work of Kuroda et al (2016); Roberts et al (2016); Rahman et al (2019). Counterpart models in three dimensions using Boltzmann neutrino transport are farther off, though here too there is a leading example in the work of Nagakura et al (2017). Adding a new dimension to the discussion, three-dimensional Boltzmann-based models are limited right now more by supercomputing capabilities than by anything else. We have documented both moments-based and Boltzmann approaches here that have been developed and used by multiple research groups. Boltzmann approaches have been used in core-collapse supernova models with reduced spatial dimensionality and have served to gauge moments approaches in multidimensional models for some time. Recent developments emphasize even more the need for Boltzmann-based models. The history of core-collapse supernova theory has seen quantum leaps on a number of occasions over the past more than fifty years, often associated with an expanded glimpse of the rich physics that drives such supernovae.
In the past five years, evidence has mounted that neutrino quantum effects, specifically those due to neutrino-neutrino coupling in the proto-neutron star surface region, may impact the electron-flavor neutrino luminosities and spectra responsible for neutrino shock reheating and, consequently, may play a role in the supernova mechanism itself. These early conclusions will require the same extensive development to substantiate them as has been documented here for the classical neutrino transport problem. We are far from the equivalent three-dimensional, general relativistic, full-physics models that deploy neutrino quantum kinetics. Early serious work on the implementation of neutrino quantum kinetics in supernova-like environments (e.g., see Richers et al 2019) has illuminated new numerical challenges that will in turn require augmented methods to handle both the classical and the quantum mechanical evolution of the three-flavor neutrino radiation field. In this context, then, it is very clear that a Boltzmann kinetic approach, which is a component of a complete quantum kinetics approach, must be a major step toward instantiating full neutrino quantum kinetics. We look forward to watching progress on this front and reporting on these developments as they mature. The core-collapse supernova problem continues to manifest itself as a generational problem, one that will continue to serve as a fertile testbed for the development of transport and radiation hydrodynamics methods.