Abstract
Dense star clusters are spectacular selfgravitating stellar systems in our Galaxy and across the Universe—in many respects. They populate disks and spheroids of galaxies as well as almost every galactic center. In massive elliptical galaxies nuclear clusters harbor supermassive black holes, which might influence the evolution of their host galaxies as a whole. The evolution of dense star clusters is not only governed by the aging of their stellar populations and simple Newtonian dynamics. For increasing particle number, unique gravitational effects of collisional manybody systems begin to dominate the early cluster evolution. As a result, stellar densities become so high that stars can interact and collide, stellar evolution and binary stars change the dynamical evolution, black holes can accumulate in their centers and merge with relativistic effects becoming important. Recent highresolution imaging has revealed even more complex structural properties with respect to stellar populations, binary fractions and compact objects as well as—the still controversial—existence of intermediate mass black holes in clusters of intermediate mass. Dense star clusters therefore are the ideal laboratory for the concomitant study of stellar evolution and Newtonian as well as relativistic dynamics. Not only the formation and disruption of dense star clusters has to be considered but also their galactic environments in terms of initial conditions as well as their impact on galactic evolution. This review deals with the specific computational challenges for modelling dense, gravothermal star clusters.
1 Astrophysical introduction
Stars play a fundamental role in astronomy: a large part of the information available to us about the Universe we inhabit comes from stars. Coeval associations of stars, also called star clusters, are the birthplace of most if not all stars. Star clusters form, evolve and “die” by dissolution across all of cosmic time, which is covered in the excellent review “Star Clusters across Cosmic Time” focusing on the interplay between observations and theoretical knowledge (Krumholz et al. 2019); our review, on the other hand, focuses more on the computational challenges of modelling a special class of dense star clusters. The term “dense” is not very well defined. We use it here in the sense that the system should be “gravothermal” and that during at least some phases of the evolution close encounters or even direct collisions between stars or binaries occur. This definition constrains us to globular and young dense star clusters, as well as nuclear star clusters (NSCs). On nuclear star clusters there is another fine review (Neumayer et al. 2020); nuclear star clusters are not the focus of this work, but where special computational issues have to be taken into account for them we elaborate on those here.
Globular star clusters (GCs) are thought to be the oldest objects in our Galaxy, their age covering a large fraction of the age of the Universe, and they are considered fossil records of the time of early galaxy formation. GCs of variable age are found near all galaxies (except for the smallest dwarfs), and their specific frequency (number of clusters per galaxy mass unit, see e.g., Harris 1996) differs as a function of galaxy type, highlighting the close relation between cluster and galaxy formation. The approximately 150 globular clusters of our own Milky Way have been studied in much more detail owing to their proximity. Today, star-by-star observations with the Hubble Space Telescope (HST) and proper-motion studies using Gaia, combined with high-resolution spectroscopy to determine their stellar velocity dispersions (Bianchini et al. 2013), are possible. Both small and large galaxies in the Local Group have systems of GCs, e.g., the Andromeda galaxy and the Magellanic clouds. Globular clusters have been detected in huge quantities around massive galaxies like M87 and other bright central cluster galaxies (Harris et al. 2017), and at sites of star formation near the Antennae galaxies. Still, on cosmological scales, this is our neighborhood. Do clusters form normally following the cosmic star formation history, which peaks at redshifts of around 2 (Reina-Campos et al. 2019)? Or do massive clusters form preferentially as special objects at much higher redshifts (e.g., \(z\sim 6\), Boylan-Kolchin 2018)?
Computer simulations of structure formation in the Universe begin to resolve GC scales (Ramos-Almendares et al. 2020), but they cannot compensate for the current lack of deep observations. Only recently has gravitational lensing by galaxy clusters helped to identify candidates for proto-GCs at redshifts of \(z>3\) (Vanzella et al. 2017), and more recently even out to \(z=6\) (Vanzella et al. 2019, 2020, 2021). Future instruments, such as the Extremely Large Telescope (ELT) and others, will improve the situation significantly, while novel instruments such as the James Webb Space Telescope (JWST) are already producing groundbreaking results here (Claeyssens et al. 2023; Charbonnel et al. 2023).
We return to the “dense” and “gravothermal” nature of such star clusters. In gravothermal star clusters it is essential to consider the mutual gravitational interactions between many if not all of their stars. The cumulative effect of small-angle gravitational deflections (encounters) between distant stars generates transport of heat and angular momentum (relaxation processes connected to these encounters are analogous to heat conduction and viscosity in a gaseous system), and this effect is dominant during certain stages of the star cluster evolution. In order to get the proper time scales connected to such relaxation processes in N-body simulations of star clusters, typically a very large number of pairwise distant encounters needs to be followed (which asymptotically results in a computational complexity of all force calculations at one point of time of \(\approx N^2\), assuming a simple approach without parallelization and hybrid hardware). This physical constraint has led to the use of thermodynamics and statistical mechanics to model gravothermal star clusters (Lynden-Bell and Wood 1968; Hachisu et al. 1978; Hachisu 1979; Spurzem 1991). A star cluster can thus be modelled using computer codes similar to those used for the gas-dynamical evolution of stars. The relaxation time in a star cluster is long compared to the dynamical timescale, while in stars the opposite is true (considering the photon diffusion time as relaxation time). This occurs because the mean free path in stellar systems is much longer than in stellar gaseous matter. Therefore, conductivity and viscosity need to be defined in a different functional form than for the interior of stars (cf. the more detailed discussions in Lynden-Bell and Eggleton 1980; Louis and Spurzem 1991).
An area where observations have picked up greatly through increased angular resolution and sensitivity of spectrographs is the identification of stellar binaries (e.g., Giesers et al. 2018, 2019; Kamann et al. 2020). Binary stars are an extremely important component of star clusters, because they form a dynamically active population which has a dramatic impact on the evolution of the host cluster (e.g., Hénon 1961; Heggie 1975; Elson et al. 1987): for instance, stellar exotica observed in clusters (blue stragglers, fast rotating stars, and X-ray binaries) all originate from binary star evolution. The special role that binary stars play in the life cycle of a cluster requires that we pin down as accurately as possible what fraction of stars form in binaries if we are to make progress when predicting the statistics of stellar populations at the later stages of a cluster’s evolution. Close stellar encounters, including direct collisions, become a reality in the densest regions of dense star clusters. How often these take place, and what the outcomes of such events are, remain a puzzle that is only now beginning to be solved. Last but not least, compact objects form in binaries and take part in few-body interactions and in the stellar evolution of binaries; ultimately, binaries consisting only of compact objects are possible sources of gravitational waves detectable by the ground-based gravitational wave detectors that are operational today, such as the (Advanced) Laser Interferometer Gravitational-Wave Observatory ((a)LIGO) (Aasi et al. 2015; Abbott et al. 2018, 2019), the (Advanced) Virgo Interferometer ((a)Virgo) (Acernese et al. 2015; Abbott et al. 2018, 2019) and the Kamioka Gravitational Wave Detector (KAGRA) (e.g., Abbott et al. 2018, 2020a; Kagra Collaboration et al. 2019). Note also the next (third) generation ground-based detectors in planning: the Einstein Telescope (Branchesi et al. 2023) and Cosmic Explorer (Reitze et al. 2019).
The numerical and computational tools to model such dense, massive star clusters are based either on approximate models from statistical physics or on the direct N-body simulation approach. Currently, the latter approach is dominant, since it allows one to include details of astrophysics (binaries, stellar evolution, tidal fields) more easily. However, in order to establish the degree of reliability of N-body simulations, approximate models have been very important. These are mostly based on the Fokker–Planck approximation of statistical mechanics and the numerical solution of the resulting kinetic equations by, e.g., Monte Carlo techniques, direct numerical solution, or gaseous and moment models. Since they are important and fundamental for understanding the results of N-body simulations, this review contains a basic description of them, too.
GCs and NSCs (dense and gravothermal) are ideal laboratories to examine the theoretical physical processes (heat conduction, angular momentum transport through viscosity) and their influence on the formation and evolution of extreme stellar populations like X-ray binaries or blue stragglers and compact objects (neutron stars, black holes), and they are ideal test beds for stellar population synthesis models and stellar evolution.
2 Theoretical foundations
Computational modelling of star clusters requires following the complex interplay of thermodynamic processes such as heat conduction and relaxation with the physics of self-gravitating systems, the stochastic nature of star clusters having finite particle number N, and the astrophysical knowledge and models for the evolution of single and binary stars and of external tidal forces. GCs are a very good laboratory for this, because their dynamical and relaxation timescales are well separated from each other and from the lifetime of the cluster and of the Universe in its entirety. This article deals with “direct” N-body simulations, which are suitable for systems where the interaction between dynamics and relaxation is important (sometimes also denoted as “gravothermal” systems, Lynden-Bell and Wood 1968). Other kinds of N-body simulations are useful, for example, for hydrodynamics (“smoothed particle hydrodynamics”), galaxy dynamics (“collisionless systems”) or cosmological N-body simulations of structure formation in the Universe, and are not covered here. The main distinction of those from the models presented here is that the dynamics of systems dominated by two-body relaxation (“collisional systems”) typically requires very high accuracy (typical energy error per crossing time \(\varDelta E/E < 10^{-5}\) or smaller) over very long physical integration times (thousands of crossing times). The term “collisional” refers here to elastic gravitational encounters (relaxation encounters), which drive the “thermal” cluster evolution. Other processes, such as close encounters, encounters involving one or more binaries, and direct collisions also happen in the system.
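The accuracy requirement can be made concrete with a minimal direct-summation sketch (purely illustrative, not any production code; N-body units with \(G=1\) are assumed): a leapfrog integration of an equal-mass circular binary, with the total energy monitored as the quality diagnostic.

```python
import numpy as np

def accelerations(pos, mass, G=1.0):
    """Direct O(N^2) summation of pairwise Newtonian accelerations."""
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i != j:
                dr = pos[j] - pos[i]
                acc[i] += G * mass[j] * dr / np.linalg.norm(dr)**3
    return acc

def energy(pos, vel, mass, G=1.0):
    """Total energy = kinetic plus pairwise potential."""
    kin = 0.5 * np.sum(mass * np.sum(vel**2, axis=1))
    pot = 0.0
    for i in range(len(mass)):
        for j in range(i + 1, len(mass)):
            pot -= G * mass[i] * mass[j] / np.linalg.norm(pos[j] - pos[i])
    return kin + pot

# Equal-mass circular binary in N-body units (G = M = 1, separation 1).
mass = np.array([0.5, 0.5])
pos = np.array([[0.5, 0.0, 0.0], [-0.5, 0.0, 0.0]])
vel = np.array([[0.0, 0.5, 0.0], [0.0, -0.5, 0.0]])

e0 = energy(pos, vel, mass)
dt = 1.0e-3
acc = accelerations(pos, mass)
for _ in range(int(4 * np.pi / dt)):   # about two orbital periods
    vel += 0.5 * dt * acc              # kick
    pos += dt * vel                    # drift
    acc = accelerations(pos, mass)
    vel += 0.5 * dt * acc              # kick
de = abs((energy(pos, vel, mass) - e0) / e0)
print(f"relative energy error after two orbits: {de:.2e}")
```

Even this second-order symplectic scheme keeps \(\varDelta E/E\) well below \(10^{-5}\) here; the challenge for real cluster simulations is to maintain such accuracy for \(N \sim 10^6\) over thousands of crossing times.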
Let us begin with the definition of some useful time scales. A typical particle crossing time \(t_{\rm{cr}}\) in a star cluster is

\[ t_{\rm{cr}} = \frac{r_{\rm{h}}}{\sigma _{\rm{h}}} , \tag{1} \]
where \(r_{\rm{h}}\) is the radius containing 50 % of the (current) total mass and \(\sigma _{\rm{h}}\) is a typical velocity associated with the root mean square random motion (velocity dispersion) taken at \(r_{\rm{h}}\). If virial equilibrium prevails, we have \(\sigma _{\rm{h}}^2 \approx GM_{\rm{h}}/r_{\rm{h}}\) (where the sign \(\approx \) here and henceforth means “approximately equal” or “equal within an order of magnitude”), thus

\[ t_{\rm{cr}} \approx \left( \frac{r_{\rm{h}}^3}{GM_{\rm{h}}} \right) ^{1/2} . \tag{2} \]
This relation equals the dynamical timescale, which is also used, for example, in the theory of stellar structure and evolution. Global dynamical adjustments of the system, like oscillations, are connected with this timescale. Taking the square of Eq. (2) yields \(t_{\rm{cr}}^2 \approx r_{\rm{h}}^3/(GM_{\rm{h}})\), which is related to Kepler’s third law, because the orbital velocity in a Keplerian point mass potential has the same order of magnitude as the velocity dispersion in virial equilibrium. Unlike most laboratory gases, stellar systems are not usually in thermodynamic equilibrium, neither locally nor globally. Radii of stars are usually extremely small relative to the average interparticle distances of stellar systems (e.g., the radius of the Sun is \(r_\odot \approx 10^{11}\,\rm{cm}\), while a typical distance between stars in our galactic neighbourhood is of the order of \(10^{18}\,\rm{cm}\)). Only under rather special conditions in the centres of galactic nuclei and during the short high-density core collapse phase of a globular cluster might stellar densities become large enough that stars come close enough to each other to collide, merge or disrupt each other. Therefore it is extremely unlikely under normal conditions that two stars touch each other during an encounter; encounters or collisions usually are elastic gravitational scatterings. The mean interparticle distance is large compared to \(p_0=2Gm/\sigma ^2\), which is the impact parameter for a \(90^{\circ }\) deflection in a typical encounter of two stars of equal mass m, where the relative velocity at infinity is \(\sqrt{2}\sigma \), with local 1D velocity dispersion \(\sigma \). Thus most encounters are small-angle deflections. The relaxation time \(t_{\rm{rx}}\) is defined as the time after which the root mean square velocity increment due to such small-angle gravitational deflections is of the same order as the initial velocity dispersion of the system.
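A quick order-of-magnitude check that \(p_0\) is indeed tiny compared to the mean interparticle distance, using assumed, illustrative values for a dense globular-cluster core (solar-mass stars, \(\sigma = 10\,\rm{km\,s}^{-1}\), number density \(10^5\,\rm{pc}^{-3}\)):

```python
import numpy as np

# Order-of-magnitude check that most encounters are small-angle deflections.
# Assumed, illustrative values for a dense globular-cluster core:
G     = 6.674e-8        # gravitational constant [cgs]
m     = 2.0e33          # stellar mass ~ 1 solar mass [g]
sigma = 1.0e6           # 1D velocity dispersion ~ 10 km/s [cm/s]
n     = 1.0e5           # stellar number density [pc^-3]

pc = 3.086e18           # parsec in cm
p0 = 2.0 * G * m / sigma**2          # 90-degree deflection impact parameter
d  = (n / pc**3)**(-1.0 / 3.0)       # mean interparticle distance

print(f"p0 = {p0:.2e} cm, mean separation = {d:.2e} cm, ratio = {p0/d:.1e}")
```

Even in such a dense core, \(p_0\) is only of order \(10^{-3}\) of the mean separation, so cumulative small-angle deflections dominate over large-angle scatterings.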
We use the local relaxation time as defined by Chandrasekhar (1942):

\[ t_{\rm{rx}} = \frac{9}{16\sqrt{\pi }}\, \frac{\sigma ^3}{G^2 m \rho \ln (\gamma N)} . \tag{3} \]
Here G is the gravitational constant, m the individual stellar mass, \(\rho \) the mean stellar mass density, \(\sigma \) the 3D velocity dispersion, and N the total particle number. This definition was used by Larson (1970) and Bettwieser and Spurzem (1986), because it naturally occurs when computing collisional terms (as in Eq. (11)) if the velocity distribution function is written as a series of Legendre polynomials (Spurzem and Takahashi 1995), with numerical factors being unity (for equipartition terms of lowest order, Spurzem and Takahashi 1995) or only little different from unity (such as 9/10 for the collisional decay of anisotropy, Bettwieser and Spurzem 1986). Other definitions of the relaxation time can be found frequently, for example in Spitzer (1987). They differ only by numerical factors, except for the so-called Coulomb logarithm \(\ln (\gamma N)\), which may take different functional forms. In common forms of the Coulomb logarithm \(\gamma \) is of order unity, but it may take different values (e.g., 0.11, Giersz and Heggie 1994a, or 0.4, Spitzer 1987).
Assuming virial equilibrium, a fundamental proportionality turns out:

\[ \frac{t_{\rm{rx}}}{t_{\rm{cr}}} \propto \frac{N}{\ln (\gamma N)} \tag{4} \]
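This scale separation, \(t_{\rm{rx}}/t_{\rm{cr}} \propto N/\ln (\gamma N)\), can be evaluated numerically (a sketch; the overall factor of order unity is omitted, and \(\gamma = 0.11\) following Giersz and Heggie 1994a is an assumption):

```python
import numpy as np

# Scale separation between relaxation and crossing time,
# t_rx/t_cr ~ N/ln(gamma*N), up to a factor of order unity.
gamma = 0.11
for N in (1.0e3, 1.0e5, 1.0e6):
    ratio = N / np.log(gamma * N)
    print(f"N = {N:8.0e}: t_rx/t_cr ~ {ratio:10.3g}")
```

For a million-body cluster the relaxation time exceeds the crossing time by almost five orders of magnitude, which is the computational crux: relaxation-driven evolution must be followed over a correspondingly large number of dynamical times.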
(cf., e.g., Spitzer 1987). As a result, for very large N, dynamical equilibrium is attained much faster than thermodynamic equilibrium. Therefore, even if treated as “gaseous” spheres, stellar systems evolve qualitatively differently from stars; in stars the thermal timescale is short compared to the dynamical timescale (Bettwieser and Sugimoto 1984). Another interesting consequence of the long thermal timescale in star clusters is that anisotropy can prevail for many dynamical times. If one assumes a purely kinetic temperature definition, it ensues that in star clusters the temperatures (or velocity dispersions) can remain different for different coordinate directions over many dynamical times. For example, in a spherical system (using polar coordinates) the radial velocity dispersion of stars (“temperature”) \(\sigma _{r}^2\) could be different from the tangential one \(\sigma _{t}^2\). For the relaxation time above, the 3D velocity dispersion \(\sigma ^2 = \sigma _{r}^2 + 2\sigma _{t}^2\) is used. In axisymmetric or triaxial systems the tangential velocity dispersion can be decomposed into two different dispersions, \(2\sigma _{\rm{t}}^2 = \sigma _\theta ^2+\sigma _\phi ^2\).
A full account of the relevance of anisotropy for star clusters is beyond the scope of this paper; as an example we mention here that interest in anisotropy was recently sparked by anisotropic mass segregation in rotating star clusters, both globular and nuclear (Szölgyen and Kocsis 2018; Szölgyen et al. 2019, 2021; Torniamenti et al. 2019; Kamlah et al. 2022b).
3 Direct Fokker–Planck and moment models
Models based on the Fokker–Planck approximation (also denoted as approximate or statistical models) were designed and implemented at a time when it was very difficult to simulate large star clusters directly by N-body simulations. Dramatic developments in hardware and software have now made possible direct N-body simulations of up to a million bodies with realistic astrophysics and binaries (Wang et al. 2015, 2016) (Dragon simulations). Nevertheless, the approximate models remain very useful for understanding the nature of physical processes in star clusters (such as heat conduction or viscosity); by comparison with N-body models they can mutually support each other.
The Fokker–Planck approximation to describe two-body relaxation in spherical star clusters is the foundation for all Monte Carlo models used nowadays (MOCCA and CMC, see Sect. 4 below). Therefore, it is deemed useful to provide a deeper than usual insight into its theoretical foundations here.
Statistical models have been employed to clarify the physical nature of relaxation processes in star clusters, such as core collapse (Lynden-Bell and Wood 1968), post-collapse evolution due to an energy source from binaries undergoing close encounters with single stars (Inagaki and Lynden-Bell 1983), and gravothermal oscillations (Sugimoto and Bettwieser 1983; Bettwieser and Sugimoto 1984; Cohn et al. 1989). These methods have also been important for the later study of anisotropy, mass segregation, and rotation.
Comparison and mutual adjustment of parameters in order to get agreement between statistical models and direct N-body simulations was started by Aarseth et al. (1974) with a first pre-collapse comparison, followed by an extended study using statistical averages of N-body simulations to match gaseous models (Giersz and Heggie 1994a, b; Giersz and Spurzem 1994), see also Fig. 1; Spurzem and Aarseth (1996) showed that relaxation processes consistent with theory dominate core collapse in star clusters, and Makino (1996) showed for the first time a signature of gravothermal oscillations in an N-body simulation.
Classical models based on the Fokker–Planck approximation use quite strong approximations, like spherical symmetry (in general, with some exceptions allowing axisymmetry), dominance of relaxation encounters, and modelling of all few-body effects (binary–single and binary–binary close encounters) in a statistical way. A mass spectrum would be modelled by discrete dynamical components with different masses (except in Monte Carlo models, see below). With the increasing need for star cluster models matching detailed observations of star clusters, the use of Fokker–Planck type models was no longer practical. That hardware and software developments made more and more realistic particle numbers possible in direct N-body simulations has been another reason for the decline of statistical models.
There is one remarkable exception, the Monte Carlo technique: while it is also based on Fokker–Planck theory, it uses a quasi-N-body realization and allows state-of-the-art models up to the current time (see Sect. 4). In the following subsections we will nevertheless outline the basic theory behind the statistical models, because in some areas (rotating, flattened star clusters, NSCs, anisotropy) they are still important today for analysing the results of N-body simulations.
3.1 Fokker–Planck approximation
The Fokker–Planck approximation truncates the so-called BBGKY hierarchy (named after Bogoliubov–Born–Green–Kirkwood–Yvon) of kinetic equations at lowest order, assuming that for most of the time all particles are uncorrelated with each other and only coupled via the smooth global gravitational potential (see Chapt. 8.1, Sects. 2 and 3 of Binney and Tremaine 2008; the following paragraph is a summary of their text). We start with the most general N-particle distribution function \(f^{(N)}\), which depends on the positions and velocities \(\vec {r_i},\vec {v_i}\) of a set of N particles and time t:

\[ f^{(N)} = f^{(N)}(\vec {r}_1,\ldots ,\vec {r}_N,\vec {v}_1,\ldots ,\vec {v}_N,t) . \tag{5} \]
It provides the probability to find all the particles at their given positions and velocities. In 6N-dimensional phase space the particles form an incompressible fluid following Liouville’s “big” equation

\[ \frac{\mathrm{d} f^{(N)}}{\mathrm{d} t} = 0 , \tag{6} \]
where the derivative is the Lagrangian derivative. If all particles are uncorrelated, \(f^{(N)} = (f^{(1)})^N\), i.e. the N-particle distribution function is just the N-fold product of a single particle distribution function. That is the case for example in a collisionless stellar system, where all particles just follow their trajectories determined by a global smooth gravitational potential and any direct interaction between two or a few particles (stars) is negligible. For collisional stellar systems, however, gravitational encounters (two-body relaxation) change the phase space distribution, and particles are not uncorrelated anymore. The theoretical ansatz in that case is to define a two-body correlation function g by

\[ f^{(2)}(\vec {r}_1,\vec {v}_1,\vec {r}_2,\vec {v}_2) = f^{(1)}(\vec {r}_1,\vec {v}_1)\, f^{(1)}(\vec {r}_2,\vec {v}_2) + g(\vec {r}_1,\vec {v}_1,\vec {r}_2,\vec {v}_2) . \tag{7} \]
So, from knowing \(f^{(1)}\) and g we get \(f^{(2)}\); by using higher order correlation functions one can get from \(f^{(n)}\) to \(f^{(n+1)}\). Integration of Eq. (6) step-by-step over single particles provides a sequence of equations from \(f^{(N-1)}\) down to \(f^{(1)}\), which is the BBGKY hierarchy. However, it is usually not very helpful, because all the correlation functions of orders \(2,\ldots ,N\!-\!1\) need to be known. For practical purposes in collisional stellar systems, where two-body relaxation is important (e.g. open, globular, and nuclear star clusters), it is sufficient to deal with the two-body correlation, which is done phenomenologically in two different ways for distant and close correlations (encounters and binaries), as described below.
Higher than two-body correlations are rarely important. There could be a relation to Sundman’s famous theorems (Sundman 1907, 1909), which state that in the three-body problem direct three-body collisions occur only with a negligibly small probability; it means that whenever three particles get close to each other, there will always be a sequence of separate close two-body encounters; practically never will the three bodies simultaneously get extremely close together. Burrau’s three-body problem is a nice demonstration of that behaviour (Szebehely and Peters 1967); whether the fundamental assumption of the dominance of two-body correlations is in fact realized or not can only be checked computationally, by comparison of models based on the Fokker–Planck approximation (such as also Monte Carlo models) with direct N-body simulations. An example of a situation of very high density in a collapsing core of a star cluster, where higher correlations become important, can be found in Tanikawa et al. (2012).
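Burrau’s problem is easy to reproduce (a sketch using a general-purpose adaptive integrator rather than a regularized N-body method; the standard initial condition places masses 3, 4, 5 at rest at the corners of a 3–4–5 right triangle, \(G=1\)). The bodies pass through a sequence of close two-body encounters, and energy conservation serves as the accuracy check:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Burrau's (Pythagorean) three-body problem: masses 3, 4, 5 at rest at the
# corners of a 3-4-5 right triangle (G = 1). The evolution proceeds through
# a sequence of close two-body encounters, never a triple collision.
m = np.array([3.0, 4.0, 5.0])

def rhs(t, y):
    r = y[:6].reshape(3, 2)
    v = y[6:].reshape(3, 2)
    a = np.zeros_like(r)
    for i in range(3):
        for j in range(3):
            if i != j:
                dr = r[j] - r[i]
                a[i] += m[j] * dr / np.linalg.norm(dr)**3
    return np.concatenate([v.ravel(), a.ravel()])

def total_energy(y):
    r = y[:6].reshape(3, 2)
    v = y[6:].reshape(3, 2)
    kin = 0.5 * np.sum(m * np.sum(v**2, axis=1))
    pot = sum(-m[i] * m[j] / np.linalg.norm(r[j] - r[i])
              for i in range(3) for j in range(i + 1, 3))
    return kin + pot

y0 = np.array([1.0, 3.0, -2.0, -1.0, 1.0, -1.0,   # positions
               0.0, 0.0, 0.0, 0.0, 0.0, 0.0])     # at rest
e0 = total_energy(y0)
sol = solve_ivp(rhs, (0.0, 10.0), y0, method="DOP853",
                rtol=1.0e-12, atol=1.0e-12)
de = abs((total_energy(sol.y[:, -1]) - e0) / e0)
print(f"E0 = {e0:.4f}, relative energy drift at t = 10: {de:.1e}")
```

Very tight tolerances are needed precisely because of the close two-body passages; production N-body codes handle such phases with regularization techniques instead.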
Instead of determining a general correlation function, one resorts to a phenomenological description of the effects of collisions by computing local diffusion coefficients directly from the known solution of the two-body problem. The diffusion coefficients \(D(\varDelta v_i)\) and \(D(\varDelta v_i \varDelta v_j)\) denote the average rate of change of \(v_i\) and \(v_i v_j\) due to the cumulative effect of many small angle deflections during two-body encounters, at a given radius r (from here on assuming spherical symmetry). Let m, \(\vec {v}\) and \(m_f\), \(\vec {v_f}\) be the mass and velocity of a star from a test and a field star distribution, respectively (both distributions can but need not be the same). In Cartesian geometry (for the local velocities) the diffusion coefficients are defined by

\[ D(\varDelta v_i) = 4\pi G^2 (m+m_f)\, m_f \ln (\gamma N)\, \frac{\partial h}{\partial v_i} , \qquad D(\varDelta v_i \varDelta v_j) = 4\pi G^2 m_f^2 \ln (\gamma N)\, \frac{\partial ^2 g}{\partial v_i \partial v_j} . \tag{8} \]
Local means here that we do not explicitly consider the dependence of f on the spatial coordinate. Here g and h are the Rosenbluth potentials defined in Rosenbluth et al. (1957):

\[ h(\vec {v}) = \int \frac{f_f(\vec {v}_f)}{|\vec {v}-\vec {v}_f|}\, \mathrm{d}^3 v_f , \qquad g(\vec {v}) = \int f_f(\vec {v}_f)\, |\vec {v}-\vec {v}_f|\, \mathrm{d}^3 v_f , \tag{9} \]

where \(f_f\) denotes the velocity distribution function of the field stars.
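For an isotropic field-star distribution, the integral defining h has the same structure as the Newtonian potential of a spherical mass distribution, so it splits into an “interior” and an “exterior” shell integral. The sketch below assumes a Maxwellian field distribution with number density n and 1D dispersion \(\sigma \), for which the closed form is \(h(v) = n\,\mathrm{erf}(v/\sqrt{2}\sigma )/v\), and checks the shell integrals against it:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

# Rosenbluth potential h(v) for an isotropic (Maxwellian) field distribution,
# evaluated via the interior/exterior shell decomposition and compared with
# the closed form h(v) = n * erf(v / (sqrt(2)*sigma)) / v.
n_f, sigma = 1.0, 1.0

def f_max(s):
    """Maxwellian speed distribution function (number-density normalized)."""
    return n_f / (2.0 * np.pi * sigma**2)**1.5 * np.exp(-s**2 / (2.0 * sigma**2))

def h_numeric(v):
    inner, _ = quad(lambda s: f_max(s) * s**2, 0.0, v)    # "mass" inside v
    outer, _ = quad(lambda s: f_max(s) * s, v, np.inf)    # shells outside v
    return 4.0 * np.pi * (inner / v + outer)

def h_analytic(v):
    return n_f * erf(v / (np.sqrt(2.0) * sigma)) / v

for v in (0.5, 1.0, 2.0):
    print(f"v = {v}: numeric {h_numeric(v):.6f}, analytic {h_analytic(v):.6f}")
```

This is exactly the kind of closed-form evaluation alluded to below: for distribution functions expanded in polynomials, the shell integrals can be done analytically term by term.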
Note that, provided the distribution function f is given in terms of a convenient polynomial series such as Legendre polynomials, the Rosenbluth potentials can be evaluated analytically to arbitrary order, as was seen already by Rosenbluth et al. (1957); see Giersz and Spurzem (1994); Spurzem and Takahashi (1995); Schneider et al. (2011) for a modern rederivation and its use for star cluster dynamics. With these results we can finally write down the local Fokker–Planck equation in its standard form for the Cartesian coordinate system of the \(v_i\):

\[ \frac{\partial f}{\partial t} + \sum _i v_i \frac{\partial f}{\partial r_i} + \sum _i {\dot{v}}_i \frac{\partial f}{\partial v_i} = \left( \frac{\delta f}{\delta t}\right) _{\rm{enc}} , \tag{10} \]

\[ \left( \frac{\delta f}{\delta t}\right) _{\rm{enc}} = - \sum _i \frac{\partial }{\partial v_i}\bigl [ f\, D(\varDelta v_i) \bigr ] + \frac{1}{2} \sum _{i,j} \frac{\partial ^2}{\partial v_i \partial v_j}\bigl [ f\, D(\varDelta v_i \varDelta v_j) \bigr ] . \tag{11} \]
The subscript “enc” refers to encounters, which are the driving force of two-body relaxation. Still, Eq. (10) is a six-dimensional integro-differential equation; its direct numerical solution in stellar dynamics can presently only be done by further simplification. If the encounter term is zero, Eq. (10) is transformed into Liouville’s equation for a collisionless system. For a self-gravitating system Eqs. (10) and (11) are not sufficient, since knowledge of the gravitational potential \(\varPhi \) is necessary. This can be seen above from the \(\vec {{\dot{v}}}_i\) term: its computation requires knowing the gravitational force. In the moment or gas models described below (for spherical symmetry) Poisson’s equation takes the simple form Eq. (28); for orbit-averaged Fokker–Planck or Monte Carlo models (see Sects. 3.3 and 4.2) the gravitational potential enters directly into the energy as a constant of motion (cf. Eq. (30)).
3.2 Moment or gas models
The local Fokker–Planck equation (10) is utilized in another way for gaseous or conducting gas sphere models of star clusters. Integrating it over velocity space with varying powers of the velocity coordinates yields a system of equations in the spatial coordinates; the local approximation is used in the sense that the orbit structure of the system is not taken into account, and the diffusion coefficients and all other quantities are assumed to be well defined just as functions of the local quantities (density, velocity dispersions and so on). The system of moment equations is truncated at third order by a phenomenological equation of heat transfer. Such an approach was suggested by Lynden-Bell and Eggleton (1980) and Heggie (1984) and generalized to anisotropic systems by Bettwieser (1983); Bettwieser and Spurzem (1986); Louis and Spurzem (1991). In the following the derivation of the model equations is described.
3.2.1 The “lefthand sides”
In spherical symmetry, polar coordinates r, \(\theta \), \(\phi \) are used and t denotes the time. The vector \(\vec {v} = (v_i), i=r,\theta ,\phi \), denotes the velocity in a local Cartesian coordinate system at the spatial point \(r,\theta ,\phi \). In the interest of brevity \(u=v_r\), \(v=v_\theta \), \(w=v_\phi \) is used. The distribution function f, which due to spherical symmetry is a function of r, t, u, \(v^2+w^2\) only, is normalized according to

\[ \rho (r,t) = \int \!\!\int \!\!\int f\, \mathrm{d}u\, \mathrm{d}v\, \mathrm{d}w , \tag{12} \]
where \(\rho (r,t)\) is the mass density; if m denotes the stellar mass, we get the particle density \(n=\rho /m\). Then

\[ {{\bar{u}}}(r,t) = \frac{1}{\rho }\int \!\!\int \!\!\int u\, f\, \mathrm{d}u\, \mathrm{d}v\, \mathrm{d}w \tag{13} \]
is the bulk radial velocity of the stars. Note that for the analogously defined quantities \({{\bar{v}}}\) and \({{\bar{w}}}\) we have in spherical systems \({{\bar{v}}} = {{\bar{w}}} = 0\) (rotating, axisymmetric systems: \({{\bar{w}}} \ne 0\)). In order to proceed to the anisotropic gaseous model equations we now turn back to the left-hand side of the Fokker–Planck equation (10), which is the collisionless Boltzmann or Vlasov operator. For practical reasons we prefer for the left-hand side local Cartesian velocity coordinates, whose axes are oriented towards the r, \(\theta \), \(\phi \) coordinate space directions. With the Lagrange function

\[ \mathcal {L} = \frac{1}{2}\left( {\dot{r}}^2 + r^2{\dot{\theta }}^2 + r^2\sin ^2\!\theta \,{\dot{\phi }}^2 \right) - \varPhi (r,t) \tag{14} \]
the Euler–Lagrange equations of motion for a star moving in the cluster potential \(\varPhi \) become (with \(u={\dot{r}}\), \(v=r{\dot{\theta }}\), \(w=r\sin \theta \,{\dot{\phi }}\)):

\[ {\dot{u}} = \frac{v^2+w^2}{r} - \frac{\partial \varPhi }{\partial r} , \qquad {\dot{v}} = \frac{w^2\cot \theta - uv}{r} , \qquad {\dot{w}} = -\frac{uw + vw\cot \theta }{r} . \tag{15} \]
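The radial equation of motion can be verified numerically against a plain Cartesian integration; the sketch below restricts to a planar orbit (\(\theta =\pi /2\), so the \(\cot \theta \) terms vanish and \(v=0\)) in an assumed point-mass potential \(\varPhi = -GM/r\) with \(G=M=1\), comparing \({\dot{u}} = (v^2+w^2)/r - \partial \varPhi /\partial r\) with a finite difference of the radial velocity:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Check of du/dt = (v^2 + w^2)/r - dPhi/dr for a planar Kepler orbit
# (theta = pi/2, so v = 0 and the cot(theta) terms vanish; Phi = -1/r).
def rhs(t, y):                       # Cartesian Kepler problem in the x-y plane
    x, yy, vx, vy = y
    r3 = (x**2 + yy**2)**1.5
    return [vx, vy, -x / r3, -yy / r3]

y0 = [1.0, 0.0, 0.0, 1.2]            # an eccentric bound orbit
sol = solve_ivp(rhs, (0.0, 2.0), y0, method="DOP853",
                rtol=1e-12, atol=1e-12, dense_output=True)

def local_uw(t):
    """Local radial (u) and tangential (w) velocity components."""
    x, yy, vx, vy = sol.sol(t)
    r = np.hypot(x, yy)
    u = (x * vx + yy * vy) / r
    w = (x * vy - yy * vx) / r
    return r, u, w

h = 1.0e-4
t0 = 1.0
r, u, w = local_uw(t0)
dudt_fd = (local_uw(t0 + h)[1] - local_uw(t0 - h)[1]) / (2.0 * h)
dudt_eq = w**2 / r - 1.0 / r**2      # (v^2 + w^2)/r - dPhi/dr with v = 0
print(f"finite-difference du/dt = {dudt_fd:.8f}, equation of motion = {dudt_eq:.8f}")
```

The \((v^2+w^2)/r\) term is the familiar centrifugal contribution that appears because the local velocity axes rotate along the orbit.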
The complete local Fokker–Planck equation, derived from Eq. (10), attains the form

\[ \frac{\partial f}{\partial t} + u\frac{\partial f}{\partial r} + {\dot{u}}\frac{\partial f}{\partial u} + {\dot{v}}\frac{\partial f}{\partial v} + {\dot{w}}\frac{\partial f}{\partial w} = \left( \frac{\delta f}{\delta t}\right) _{\rm{enc}} , \tag{16} \]
where the term subscripted “enc” denotes the terms involving diffusion coefficients as in Eq. (11). Moments \(\langle i,j,k \rangle \) of f are defined in the following way (all integrations range from \(-\infty \) to \(\infty \)):

\[ \langle i,j,k \rangle = \int \!\!\int \!\!\int u^i v^j w^k\, f\, \mathrm{d}u\, \mathrm{d}v\, \mathrm{d}w , \tag{17} \]

with \(\rho = \langle 0,0,0 \rangle \), \(\rho {{\bar{u}}} = \langle 1,0,0 \rangle \), \(p_r = \langle 2,0,0 \rangle - \rho {{\bar{u}}}^2\), \(p_\theta = \langle 0,2,0 \rangle \), \(p_\phi = \langle 0,0,2 \rangle \), and the fluxes \(F_i\) built from the corresponding third-order moments.
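Since f enters only through such velocity integrals, the moments can equally well be estimated from a particle (Monte Carlo) representation of f, which is essentially what the Monte Carlo codes of Sect. 4 exploit. A small sketch with an assumed anisotropic Gaussian distribution function (\(\rho \) normalized to 1, \({\bar{u}} = 0\)):

```python
import numpy as np

# Monte Carlo estimate of velocity moments <i,j,k> for an anisotropic
# Gaussian distribution function (sigma_r != sigma_t), checking
# p_r = rho*sigma_r^2 and p_theta = p_phi = rho*sigma_t^2 (rho = 1).
rng = np.random.default_rng(42)
sigma_r, sigma_t = 1.0, 0.6
N = 400_000
u = rng.normal(0.0, sigma_r, N)      # v_r
v = rng.normal(0.0, sigma_t, N)      # v_theta
w = rng.normal(0.0, sigma_t, N)      # v_phi

def moment(i, j, k):
    """<i,j,k> = average of u^i v^j w^k (rho normalized to 1)."""
    return np.mean(u**i * v**j * w**k)

p_r, p_th, p_ph = moment(2, 0, 0), moment(0, 2, 0), moment(0, 0, 2)
sigma2 = p_r + p_th + p_ph           # 3D dispersion sigma^2 = sigma_r^2 + 2*sigma_t^2
print(f"p_r = {p_r:.3f}, p_theta = {p_th:.3f}, p_phi = {p_ph:.3f}, sigma^2 = {sigma2:.3f}")
```

The sampled \(p_\theta \) and \(p_\phi \) agree within the Monte Carlo noise, illustrating the spherical-symmetry identification \(p_\theta = p_\phi = p_t\) used below.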
Note that the definitions of \(p_i\) and \(F_i\) are such that they are proportional to the random motion of the stars. Due to spherical symmetry we have \(p_\theta = p_\phi =: p_t\) and \(F_\theta = F_\phi =: F_t/2\). By \(p_r = \rho \sigma _r^2\) and \(p_t = \rho \sigma _t^2\) the random velocity dispersions are given, which are closely related to observables in GCs and galaxies. It is convenient to define velocities of energy transport by

\[ v_r = \frac{F_r}{3p_r} + {{\bar{u}}} , \qquad v_t = \frac{F_t}{2p_t} + {{\bar{u}}} . \]
By multiplication of the Fokker–Planck equation (16) with various powers of u, v, w we get, up to second order, the following set of moment equations (the bar over \({{\bar{u}}}\) is dropped in the following):
The terms labeled “enc” and “bin3” symbolically denote the collisional terms resulting from the moments of the right-hand side of the Fokker–Planck equation (Eq. (11)) and the energy generation by formation and hardening of binaries in three-body encounters. Both will be discussed below. With the definition of the mass \(M_r\) contained in a sphere of radius r,

\[ \frac{\partial M_r}{\partial r} = 4\pi r^2 \rho , \tag{28} \]
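As a consistency check, the mass equation \(\partial M_r/\partial r = 4\pi r^2\rho \) can be integrated for an assumed analytic density profile; the sketch below uses a Plummer sphere (not part of the gaseous model itself, just a convenient test case with \(M = a = 1\)) and compares with the analytic cumulative mass \(M_r = M r^3/(r^2+a^2)^{3/2}\):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate dM_r/dr = 4*pi*r^2*rho(r) for a Plummer sphere (M = a = 1)
# and compare with the analytic cumulative mass profile.
M, a = 1.0, 1.0

def rho(r):
    return 3.0 * M / (4.0 * np.pi * a**3) * (1.0 + (r / a)**2)**(-2.5)

sol = solve_ivp(lambda r, Mr: [4.0 * np.pi * r**2 * rho(r)],
                (0.0, 10.0), [0.0], rtol=1e-10, atol=1e-12,
                dense_output=True)

for r in (0.5, 1.0, 5.0):
    exact = M * r**3 / (r**2 + a**2)**1.5
    print(f"r = {r}: integrated M_r = {sol.sol(r)[0]:.6f}, analytic = {exact:.6f}")
```

In the gaseous model codes this equation is of course solved together with the moment equations rather than for a prescribed density, but the discretization of \(M_r\) works the same way.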
the set of Eqs. (25)–(27) is equivalent to gas-dynamical equations coupled with Poisson’s equation. Since moment equations of order n contain moments of order \(n\!+\!1\), it is necessary to close the system of the above equations by an independent closure relation. Here we choose the heat conduction closure, which consists of a phenomenological ansatz in analogy to gas dynamics. It was first used (restricted to isotropy) by Lynden-Bell and Eggleton (1980). It is assumed that heat transport is proportional to the gradient of the temperature, for which we use an average velocity dispersion \(\sigma ^2 = (\sigma _r^2 + 2\sigma _t^2)/3\), and we assume \(v_r = v_t\) (this latter closure was first introduced by Bettwieser and Spurzem 1986). Therefore, the last two equations to close our model are

\[ v_r - {{\bar{u}}} = -\frac{\lambda }{4\pi G\rho \, t_{\rm{rx}}}\, \frac{\partial \sigma ^2}{\partial r} , \qquad v_t = v_r , \tag{29} \]

with a dimensionless numerical conductivity parameter \(\lambda \).
With Eqs. (25)–(27), (28), and (29) we now have seven equations for our seven dependent variables \(M_r\), \(\rho \), u, \(p_r\), \(p_t\), \(v_r\), \(v_t\).
3.2.2 The “righthand sides”
All right-hand sides of the moment equations (25)–(27) are calculated by multiplying the right-hand side (the encounter term) of the Fokker–Planck equation, as it occurs in Eq. (11), with the appropriate powers of u, v and w and integrating over velocity space. Since the diffusion coefficients in Eq. (8) also contain the distribution function f, the equation depends nonlinearly on it. In early papers that led to a simplification using an isotropized background distribution function \(f_b\) inside the diffusion coefficients, different from the actual one (Larson 1970; Cohn 1980). In Bettwieser and Spurzem (1986); Einsel and Spurzem (1999); Schneider et al. (2011) there is always full consistency between the background and the actual distribution function.
3.3 Orbit averaged Fokker–Planck models and rotation
The direct solution of the six-dimensional integro-differential equations (Eqs. (10) and (11)) is generally not possible. To obtain numerical solutions of the Fokker–Planck equation directly, one applies Jeans’s theorem and transforms f into a function of the classical integrals of motion of a particle in a potential under the given symmetry, e.g., energy E and modulus of the angular momentum \(J^2\) in a spherical potential, or E and the z-component of the angular momentum \(J_z\) in axisymmetric coordinates. Thereafter, the Fokker–Planck equation can be integrated over the accessible coordinate space for any given combination of constants of motion, and the orbit-averaged Fokker–Planck equation ensues. By the transformation from \(v_i\) to E and J and via the limits of the orbital integral the potential enters both implicitly and explicitly. In a two-step scheme alternately solving the Poisson and Fokker–Planck equations, a direct numerical solution is obtained for spherical systems in 1D (using E only; Cohn 1980) or in 2D (using both E and \(J^2\); Cohn 1979; Takahashi 1995, 1996, 1997).
One of the main uncertainties in this method is that for non-spherical mass distributions the orbit structure in the system may depend on unknown non-classical third integrals of motion, which are neglected. First 2D models of axisymmetric, rotating globular star clusters (Einsel and Spurzem 1999) used initial models published earlier (Goodman 1983a; Lupton and Gunn 1987; Longaretti and Lagoute 1996), which are generalizations of the standard King models (King 1966), using their dimensionless central potential value \(W_0\) and a new dimensionless rotation parameter \(\omega _0\). These models have rigid rotation in the core, a maximum of the rotation curve around the half-mass radius and differentially decreasing rotation in the halo. They are still mainly supported by “thermal” pressure (velocity dispersions); the rotational energy provides a smaller part of the energy. Typical initial data for \(W_0=6\) and different rotation parameters are seen in Table 1: the ratio of total rotational to total kinetic energy, dynamical ellipticity, ratios of tidal and half-mass radii to initial core radii, the central relaxation time and finally the half-mass relaxation time in system units are given (for definitions see Einsel and Spurzem 1999).
Fokker–Planck models showed that in the presence of rotation there is an effective viscosity transporting angular momentum outwards and accelerating cluster evolution significantly as compared to a spherical cluster (see Fig. 2 and Einsel and Spurzem 1999). A series of follow-up papers included post-collapse and multi-mass models (Kim et al. 2002, 2004, 2008) and found an accelerated rotation in the core as heavy masses sink to the core—as expected from the combined gravo-gyro and gravothermal “catastrophes” predicted by Hachisu (1979); Akiyama and Sugimoto (1989). One rotating model included in the Dragon simulations (Wang et al. 2016), however, did not show accelerated evolution. Whether this is due to heavy mass loss by stellar evolution (not included in earlier papers) or due to a small deviation from the proper initial model is not clear. There is an urgent need for more coverage of rotating stellar clusters by direct N-body simulations; see some first progress in Tiongco et al. (2022); Livernois et al. (2022); Kamlah et al. (2023). The initial models of Table 1 are still in frequent use, in particular when realized as N-body configurations for N-body models (Hong et al. 2013; Tiongco et al. 2017, 2022; Livernois et al. 2022; Kamlah et al. 2023). Notice also the alternative rotating models of Varri (Varri and Bertin 2012; Varri et al. 2018), which are more suitable with regard to the outer cluster zones under the influence of tidal fields.
4 Monte Carlo models
Monte Carlo models of star clusters are the only ones which are still intensively used up to the present time, even though they are based on the Fokker–Planck approximation, in the same way as Fokker–Planck or gaseous/moment models. Sometimes this may not be clear to every reader of current papers using Monte Carlo models, because they provide data equivalent to N-body simulations—particles with masses, positions and velocities at certain times. Astrophysics (stellar single and binary evolution, stellar collisions, relativistic binaries...) has been included very much like in N-body models. This review is not about Monte Carlo models, but a brief summary of their history and entry points to the current literature should be given.
4.1 Hénon and Spitzer type method
As the name suggests, Monte Carlo models are based on the principle that stars have an orbit in a known selfconsistent potential; random perturbations are applied, which model the effect of relaxation by distant gravitational encounters. Spitzer’s method follows the orbits of stars in the global potential of the cluster and randomly applies kicks in velocity to the stars; at the end of a long series of papers they included binaries and a mass spectrum (Spitzer and Hart 1971a, b; Spitzer and Shapiro 1972; Spitzer and Thuan 1972; Spitzer and Chevalier 1973; Spitzer and Shull 1975a, b; Spitzer and Mathieu 1980).
Hénon’s method uses the phase space of constants of motion of a star in a spherically symmetric potential, energy and angular momentum. Deflections are selected randomly, and their effect on angular momentum and energy is computed and applied (Hénon 1971). The method was extended to include astrophysical effects, including binaries and stellar evolution (Stodołkiewicz 1982, 1986). These models still allowed for “superstars”, i.e., one particle in the Monte Carlo model could represent many real stars.
Current Monte Carlo models are based on Hénon’s method, but restricted to star-by-star modelling (much like N-body), where every star is a particle in the Monte Carlo simulation. This made it possible to include all astrophysical effects in the same way as is done in N-body simulations. This new line of Monte Carlo models was initiated by Giersz (1998) (code name mocca) and the Northwestern team (Joshi et al. 2000) (code name cmc).
4.2 mocca and cmc
The Monte Carlo codes based on the Hénon scheme use constants of motion (specific energy E, specific angular momentum L) as basic variables, properties of stars in the simulation. If the spherically symmetric gravitational potential \(\varPhi (r)\) is known, the pericenter \(r_{\min }\) and apocenter \(r_{\max }\) of the orbit are known. At every point of the orbit r the radial velocity is known from
The orbital integral defines the orbital time \(\tau \) by
With \(p(r) = (2/\tau )\cdot (dr/v_r)\) one gets a probability distribution function, used to randomly pick a radial position \(r_i\) for the star on its orbit (which should be distributed according to p(r)). Let \(m_i\) be the stellar masses (\(i=1,\ldots ,n\)); then the spherically symmetric gravitational potential can be computed according to Hénon (1971)
In addition, two angles \(\theta \) and \(\phi \) are randomly picked, so as to obtain a three-dimensional position of the star. Velocities are obtained from E, L, and \(U(r_i)\) (one more random number needed). In that way a model star cluster is produced whose data structure is three-dimensional—equivalent to that of an N-body simulation. To model the relaxation effect, two neighbouring stars are selected and a mean squared deflection angle chosen, which is proportional to the timestep over the relaxation time. Using this angle, changes in E and L are computed. Binaries and close encounters between them were at first also modelled completely stochastically (using random impact parameters, and random realizations of known cross sections). More recently a few-body integration is done in both the mocca and cmc codes. This is a very rough account of Monte Carlo principles; the reader interested in more details is referred to the papers cited in the next paragraphs.
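The orbit-sampling step just described can be sketched in a few lines. This is a minimal illustration, not the mocca or cmc implementation: the potential is assumed to be a Plummer model for simplicity (a real Hénon-type code uses the self-consistently computed cluster potential), the function names are hypothetical, and the divergence of \(p(r)\propto 1/v_r\) at the turning points is handled here by a simple weight cap rather than by the variable transformations used in production codes.

```python
import math
import random

def plummer_potential(r, gm=1.0, a=1.0):
    # Illustrative smooth potential (Plummer); a real Henon-type code uses
    # the self-consistent cluster potential U(r) computed from the stars.
    return -gm / math.sqrt(r * r + a * a)

def radial_speed(r, e, l, phi=plummer_potential):
    """v_r from the specific energy e and angular momentum l of the orbit."""
    v2 = 2.0 * (e - phi(r)) - (l * l) / (r * r)
    return math.sqrt(v2) if v2 > 0.0 else 0.0

def sample_radius(e, l, r_lo, r_hi, phi=plummer_potential, rng=random.random):
    """Rejection-sample a radius with p(r) proportional to 1/v_r(r) on [r_lo, r_hi].

    p(r) diverges (integrably) at the orbital turning points; production
    codes sample in a transformed variable, here we simply cap the weight.
    """
    w_cap = 10.0 / max(radial_speed(0.5 * (r_lo + r_hi), e, l, phi), 1e-12)
    while True:
        r = r_lo + (r_hi - r_lo) * rng()
        vr = radial_speed(r, e, l, phi)
        w = (1.0 / vr) if vr > 0.0 else 0.0
        if rng() * w_cap < min(w, w_cap):
            return r
```

Drawing the two angles and the velocity orientation then completes the three-dimensional realization of the star.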
An account of the state of the mocca code and comparison with Nbody simulations is published in Giersz et al. (2013, 2015). Recently, it has been used for a large number of simulations of Galactic and extragalactic clusters, the mocca Survey Database has been published (Hong et al. 2020; Leveque et al. 2021, 2022a, 2023). The cmc code (Rodriguez et al. 2021b) has been developed in parallel, with matching models to observations (Rui et al. 2021), and an overview of the current state of the code (Rodriguez et al. 2022). Examples of current use of this code focus on compact remnants and their gravitational wave emission, such as e.g., Rodriguez et al. 2021a; Kremer et al. 2021; Ye et al. 2022.
Both Monte Carlo codes have been very successful in terms of generating a large number of cluster simulations to be compared with observational data and also in following the evolution of special objects. However, one should not forget their serious limitations:

- if there is a number of massive objects in a central high density region, the assumption of a smooth spherically symmetric potential breaks down;

- at high densities and if many binaries are present, the assumption that uncorrelated two-body relaxation encounters and close few-body encounters can be clearly separated breaks down;

- taking into account external tidal fields is quite difficult, though in simple cases not impossible, due to the strictly spherical cluster-centered gravitational potential.
The bottom line is that Monte Carlo models have to be used in order to get an overview of large parameter ranges of star cluster evolution, but in many cases a check by comparison with direct N-body simulations is desirable. The latter do not suffer from the problems mentioned above; however, direct numerical solutions of the N-body problem also have certain issues, see Sect. 5.4. A nice overview of current Monte Carlo models is given in Vasiliev (2015), who also presents a somewhat restricted Monte Carlo code for rotating systems (see Sect. 6.4.6).
5 Direct N-body simulations—methods and algorithms
To integrate the orbits of particles in time under their mutual gravitational interaction the total gravitational potential at each particle’s position is required. Poisson’s equation in integral form gives the potential \(\varPhi \) generated at a point in coordinate space \(\vec {r}\) due to a smooth mass distribution \(\rho (\vec {r})\)
A discrete particle distribution in N-body simulations is given by
with N particles of mass \(m_i\) distributed at positions \(\vec {r}_j\). Putting this into the integral Poisson equation (33) we get Newton’s law for point masses:
5.1 nbody—the growth of an industry
It was already discovered by Sebastian von Hoerner in the earliest published N-body simulations that the relaxation time (Chandrasekhar 1942) is relevant for star cluster evolution and that the formation of close and eccentric binaries occurs as the rule rather than the exception. It was particularly difficult to integrate them accurately; effectively the simulation had to be stopped if close binaries demanded too small timesteps (von Hoerner 1960, 1963).
At about the same time a young postdoc—Sverre Aarseth—in Cambridge developed a direct N-body integrator for galaxy clusters with gravitational softening, thereby avoiding von Hoerner’s problems with tight binaries (Aarseth 1963). His code was based on Taylor series evaluation of the gravitational force up to its second derivative. Eight years later regularization methods (Kustaanheimo and Stiefel 1965) were implemented in Aarseth’s direct N-body code (Aarseth 1971). This made it possible to proceed past the binary deadlock detected in von Hoerner’s models.
Another direct N-body code, by Roland Wielen, appeared on the market, and in a seminal paper (Aarseth et al. 1974) fair agreement was shown between Aarseth’s and Wielen’s codes and a Monte Carlo code by Lyman Spitzer (see Sect. 4.1 above). However, only at the turn of the century could Aarseth and von Hoerner compare their codes, and von Hoerner published a remarkable account of “how it all started” (von Hoerner 2001).
Already in 1985, the code nbody5 (Aarseth 1985a) had become a kind of “industry standard”, attaining worldwide use. It employed Taylor series using up to the third derivative of the gravitational force, in a divided difference scheme based on four time points, with individual particle timesteps. nbody5 also contained regularizations for more than two bodies, such as the classical chain regularization (Mikkola and Aarseth 1990), as well as the Ahmad–Cohen (Ahmad and Cohen 1973) neighbour scheme. The advent of vector and parallel computers demanded an optimization towards hierarchically blocked timesteps and the Hermite scheme (Sect. 5.2.1) (Hut and McMillan 1986; Makino and Aarseth 1992), which uses only two time points and therefore makes memory management easier. This became known as nbody6.
The growth of the “industry” (Aarseth 1999a) included further improvements in the regularization techniques (Mikkola and Aarseth 1996, 1998; Aarseth 1999b) and a comprehensive book summary (Aarseth 2003). Table 2 summarizes the main algorithmic, hardware and software development stepping stones in the direct Nbody community up until today.
5.2 The nbody6 scheme
In the following, the nbody6 integrator is described in some more detail (note that nbody7 already contains parallelization through GPU acceleration and will be treated in the next section). nbody6 and its parallelized and GPU-accelerated offspring (nbody6++, nbody6gpu, nbody6++gpu, nbody7, see Table 3) are still the most widely used methods for direct N-body simulations, but recently new approaches have also been published (cf. Sect. 5.5).
5.2.1 The Hermite scheme
The Hermite scheme and nbody6 go back to Makino and Aarseth (1992); in conjunction with a hierarchically blocked timestep scheme (see below and Hut and McMillan 1986) it improved the performance on vector computers and turned out to be efficient on all recent parallel and innovative hardware (general and special purpose parallel computers, GRAPE and GPU). Assume a set of N particles with positions \(\vec {r}_i(t_0)\) and velocities \(\vec {v}_i(t_0)\) (\(i=1,\ldots , N\)) is given at time \(t=t_0\), and let us look at a selected test particle at \(\vec {r} = \vec {r}_0=\vec {r}(t_0)\) and \(\vec {v} = \vec {v}_0 = \vec {v}(t_0)\). Note that here and in the following the index i for the test particle and occasionally the index 0 indicating the time \(t_0\) will be dropped for brevity; sums over j are to be understood to include all j with \(j\ne i\), since there should be no self-interaction. Accelerations \(\vec {a}_0\) and their time derivatives \({{\dot{{\textbf {a}}}}}_0\) are calculated explicitly:
where \(\vec {R}_j:=\vec {r}-\vec {r}_j\), \(\vec {V}_j:= \vec {v}-\vec {v}_j\), \(R_j:=\vert \vec {R}_j\vert \), \(V_j:=\vert \vec {V}_j\vert \). By low-order predictions,
new positions and velocities for all particles at \(t>t_0\) are calculated and used to determine a new acceleration and its derivative directly according to Eq. (36) at \(t=t_1\), denoted by \(\vec {a}_1\) and \(\vec {{\dot{a}}}_1\). On the other hand \(\vec {a}_1\) and \(\vec {{\dot{a}}}_1\) can also be obtained from a Taylor series using higher derivatives of \(\vec {a}\) at \(t=t_0\):
If \(\vec {a}_1\) and \(\vec {{\dot{a}}}_1\) are known from direct summation (from Eq. (36) using the predicted positions and velocities) one can invert the equations above to determine the unknown higher order derivatives of the acceleration at \(t=t_0\) for the test particle:
This is the Hermite interpolation, which finally allows one to correct positions and velocities at \(t_1\) to high order from
Taking the time derivative of Eq. (44) it turns out that the error in the force calculation for this method is \({{\mathcal {O}}}(\varDelta t^4)\), as opposed to the standard leapfrog scheme, which has a force error of \({{\mathcal {O}}}(\varDelta t^2)\) (but see new developments in Sect. 5.5). Approximate potential calculations (particle mesh or tree) potentially induce even larger additional errors than that. In Fig. 3, however, it is shown that the above Hermite method used for a real N-body integration generally sustains an error of \({{\mathcal {O}}}(\varDelta t^4)\) for the entire calculation.
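The predictor–corrector cycle described above can be condensed into a short sketch. This is a shared-timestep toy version for illustration only (the real codes use individual hierarchical block steps, the Ahmad–Cohen split and regularization); the corrector below is the standard fourth-order Hermite form consistent with Eqs. (38)–(44), and the function names are hypothetical:

```python
def acc_jerk(m, pos, vel, g=1.0):
    """Accelerations and their time derivatives (jerks) by direct summation."""
    n = len(m)
    a = [[0.0] * 3 for _ in range(n)]
    adot = [[0.0] * 3 for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            R = [pos[j][k] - pos[i][k] for k in range(3)]
            V = [vel[j][k] - vel[i][k] for k in range(3)]
            r2 = R[0] ** 2 + R[1] ** 2 + R[2] ** 2
            r3 = r2 ** 1.5
            rv = R[0] * V[0] + R[1] * V[1] + R[2] * V[2]
            for k in range(3):
                a[i][k] += g * m[j] * R[k] / r3
                # d/dt (R/r^3) = V/r^3 - 3 (R.V) R / r^5
                adot[i][k] += g * m[j] * (V[k] / r3 - 3.0 * rv * R[k] / (r2 * r3))
    return a, adot

def hermite_step(m, pos, vel, dt, g=1.0):
    """One shared-timestep fourth-order Hermite predictor-corrector step."""
    n = len(m)
    a0, j0 = acc_jerk(m, pos, vel, g)
    # predictor: low-order Taylor series
    pp = [[pos[i][k] + vel[i][k] * dt + a0[i][k] * dt * dt / 2 + j0[i][k] * dt ** 3 / 6
           for k in range(3)] for i in range(n)]
    vp = [[vel[i][k] + a0[i][k] * dt + j0[i][k] * dt * dt / 2
           for k in range(3)] for i in range(n)]
    # re-evaluate force and jerk at the predicted state
    a1, j1 = acc_jerk(m, pp, vp, g)
    # corrector: Hermite interpolation of the higher derivatives
    for i in range(n):
        for k in range(3):
            v_new = (vel[i][k] + (a0[i][k] + a1[i][k]) * dt / 2
                     + (j0[i][k] - j1[i][k]) * dt * dt / 12)
            pos[i][k] += (vel[i][k] + v_new) * dt / 2 + (a0[i][k] - a1[i][k]) * dt * dt / 12
            vel[i][k] = v_new
    return pos, vel
```

Integrating a circular equal-mass binary with this step keeps the separation constant to high accuracy, consistent with the \({{\mathcal {O}}}(\varDelta t^4)\) error behaviour discussed above.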
5.2.2 Timestep choice
Aarseth (1985a) provides an empirical timestep criterion
The error is governed by the choice of \(\eta \), which in most practical applications is taken to be \(\eta = 0.01\)–0.04. It is instructive to compare this with the inverse square of the curvature \(\kappa \) of the curve \(\vec {a}(t)\) in coordinate space
Clearly, under certain conditions the timestep choice of Eq. (45) becomes similar to choosing the timestep according to the curvature of the acceleration curve; since it was determined purely empirically, however, it cannot in general be related to the curvature expression above. In Makino (1991b) a different timestep criterion has been suggested, which appears simpler and more straightforwardly defined, and couples the timestep to the difference between predicted and corrected coordinates. The standard Aarseth timestep criterion of Eq. (45) has been used in most N-body simulations so far, because on average it seems to come closer to an optimal step than the mathematically more sound Makino step (see the timestep related discussion in Sweatman 1994).
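A direct transcription of the standard Aarseth criterion may make the discussion concrete. The formula below is the widely quoted form \(\varDelta t = \sqrt{\eta \,(|\vec a||\vec a^{(2)}| + |\dot{\vec a}|^2)/(|\dot{\vec a}||\vec a^{(3)}| + |\vec a^{(2)}|^2)}\); the function name and interface are illustrative, not the nbody6 source:

```python
import math

def aarseth_timestep(a, a1, a2, a3, eta=0.02):
    """Aarseth's empirical timestep criterion, cf. Eq. (45):
    dt = sqrt(eta * (|a||a2| + |a1|^2) / (|a1||a3| + |a2|^2)),
    with a1, a2, a3 the first three time derivatives of the acceleration a."""
    norm = lambda v: math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    num = norm(a) * norm(a2) + norm(a1) ** 2
    den = norm(a1) * norm(a3) + norm(a2) ** 2
    return math.sqrt(eta * num / den)
```

Note that the combination of norms makes the criterion dimensionally a time and insensitive to any single derivative accidentally vanishing.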
Since the positions of all field particles can be determined at any time by the low-order prediction of Eq. (38), the timestep of each particle (which determines the time at which the corrector of Eq. (44) is applied) can be freely chosen according to the local requirements of the test particle; the additional error induced by using only predicted data for the full N sums of Eq. (36) is negligibly small, for the benefit of not being forced to keep all particles in lockstep. Such an individual timestep scheme is particularly advantageous for non-homogeneous systems, as was quantitatively pointed out by Makino and Hut (1988). Particles in the high density core of a star cluster need to be updated much more often than particles on orbits very far from the centre. They show that the gain in computational speed due to the individual timestep scheme (as compared to a lockstep scheme where all particles share the minimum required timestep) is of the order \(N^{1/3}\) for homogeneous and up to \(N\) for strongly spatially structured systems (Makino and Hut 1988).
For the purpose of vectorization and parallelization it is better not to have the particles continuously distributed on a time axis. Consequently, Makino (1991a) uses a hierarchical scheme, still on the basis of Eq. (45); a change of the timestep is considered only if that equation yields a variation of \(\varDelta t\) compared to the last step by more than a factor of 2 (increase or decrease), and if this is the case, only a change by a factor of 2 is applied. Thus in model units all timesteps are selected from the set \(\{2^{-i}\vert i=0,\ldots ,i_{\max }\}\), with \(i_{\max }\) determined by the condition that \(\varDelta t_{\min } > 2^{-i_{\max }}\) for the minimum timestep \(\varDelta t_{\min }\) obtained from Eq. (45). For core collapse simulations of star clusters with a few tens of thousands of particles \(i_{\max }\) goes up to about 20; empirically and theoretically (Makino and Hut 1988) \(\varDelta t_{\min }\propto N^{-1/3}\), so \(i_{\max }\) becomes larger for large N; on the other hand, how large \(i_{\max }\) grows for fixed N depends on the selected criteria for the so-called KS regularization of perturbed two-body motion (see below). The implementation of the block step scheme uses an even stronger condition than the one described above: it is demanded that not only the timesteps, but also the individual accumulated times of the particles are commensurate with the timestep itself. This ensures that for any particle i and any time \(T_i = t_i + \delta t_i \) all particles with \(\delta t_j < \delta t_i \) have for their own time \(T_j = t_j + \delta t_j = T_i \), where the last equality is the non-trivial one. Such a procedure is important for the parallelization of the algorithm; for example, it has the consequence that at the big timesteps large groups of particles are always due for correction, sometimes even all particles (at the largest steps).
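The quantization onto the power-of-two hierarchy and the commensurability condition can be sketched as follows (a toy transcription under the stated scheme; both function names are hypothetical):

```python
def block_timestep(dt_req, i_max=32):
    """Quantize a required timestep onto the block hierarchy {2**-i}:
    the largest power-of-two step not exceeding dt_req (model units)."""
    dt = 1.0
    for _ in range(i_max):
        if dt <= dt_req:
            break
        dt *= 0.5  # only changes by factors of 2 are allowed
    return dt

def synchronized(t, dt):
    """Block-step condition: a particle's accumulated time must be
    commensurate with its own (power-of-two) timestep."""
    q = t / dt
    return q == int(q)
```

For example, a required step of 0.3 is quantized down to 0.25, and a particle with step 0.25 is due for correction at t = 0.75, while one with step 0.5 is not, which is exactly why large blocks of particles fall due together at the big steps.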
5.2.3 Ahmad–Cohen neighbour scheme
Another refinement of the Hermite or Aarseth “brute force” method is the two-timestep scheme, denoted as the neighbour or Ahmad–Cohen scheme (Ahmad and Cohen 1973). For each particle a neighbour radius is defined, and \(\vec {a}\) and \(\vec {\dot{a}}\) are computed due to neighbours and non-neighbours separately. As in the Hermite scheme, the higher derivatives are computed separately for the neighbour force (irregular force) and the non-neighbour force (regular force). Computing two timesteps, an irregular small \(\varDelta t_{\rm{irr}}\) and a regular large \(\varDelta t_{\rm{reg}}\), from these two force components by Eq. (45) yields a timestep ratio \(\gamma := \varDelta t_{\rm{reg}}/\varDelta t_{\rm{irr}}\) in a typical range of 5–20 for N of the order \(10^3\) to \(10^4\). The reason is that the regular force fluctuates much less than the irregular force. The Ahmad–Cohen neighbour scheme is implemented in a self-regulated way: at each regular timestep a new neighbour list is determined using a given neighbour radius \(r_{si}\) for each particle. If the neighbour number found is larger than the prescribed optimal neighbour number, the neighbour radius is decreased, and vice versa. In Aarseth (1985a); Makino and Hut (1988) more complicated algorithms to adjust the neighbour radius are described. Over a wide range of particle numbers (ranging from \(10^4\) to \(1.6\times 10^7\)) the neighbour number has to be varied only slightly (from 50 to 400). Results for runs with up to a million bodies have been published in Huang et al. (2016) and will be published soon for the larger particle numbers. Generally, the neighbour number is a choice for optimization and can be chosen relatively freely without problems for the accuracy. This is in contrast to earlier claims that the optimal neighbour number should be adjusted as \(N_{n,\mathrm {opt}} \propto N^{3/4}\) (Makino and Hut 1988).
The reason is that by using special purpose machines or parallelization for parts of the code, an optimal neighbour number is not well defined, so the neighbour number can be selected according to accuracy and efficiency requirements. After each regular timestep the new neighbour list is communicated along with the new particle positions to all processors of the parallel machine, thus making it possible to do the irregular timestep in parallel as well.
Using a two-timestep or neighbour scheme increases the computational speed of the entire integration by a further factor proportional to at least \(N^{1/4}\) (Makino 1991b). Both the regular and irregular timesteps are arranged in the hierarchical, commensurable way.
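The force split underlying the scheme can be illustrated in a few lines. This sketch only performs the spatial split of the force into irregular (neighbour) and regular (distant) parts for one particle; the function name and data layout are illustrative, and the real scheme additionally keeps separate jerks and separate timesteps for the two parts:

```python
def split_forces(m, pos, i, r_neigh, g=1.0):
    """Ahmad-Cohen split for particle i: the irregular force comes from
    neighbours inside r_neigh, the regular force from all other particles;
    the two parts then get their own timesteps dt_irr << dt_reg."""
    a_irr, a_reg, neighbours = [0.0] * 3, [0.0] * 3, []
    for j in range(len(m)):
        if j == i:
            continue
        dx = [pos[j][k] - pos[i][k] for k in range(3)]
        r2 = dx[0] ** 2 + dx[1] ** 2 + dx[2] ** 2
        part = a_irr if r2 < r_neigh ** 2 else a_reg
        if part is a_irr:
            neighbours.append(j)  # membership list, rebuilt at regular steps
        inv_r3 = r2 ** -1.5
        for k in range(3):
            part[k] += g * m[j] * dx[k] * inv_r3
    return a_irr, a_reg, neighbours
```

Because the distant particles dominate the regular part, that component is smooth and can be recomputed much less often than the fluctuating irregular part.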
5.2.4 Regularizations
As the relative distance r of two bodies becomes small, their timesteps are reduced to prohibitively small values, and truncation errors grow due to the singularity in the gravitational potential. Such a close encounter is characterised by an impact parameter p and a relative velocity at “infinity” (in practice some distance inside the cluster) \(v_\infty \). A close encounter is defined by
where \(p_{90}\) is the impact parameter related to a 90 degree deflection in a twobody problem; G, \(m_1\), \(m_2\), \(v_\infty \) are the gravitational constant, the masses of the two particles and their relative velocity at infinity. In the cluster centre, it is very likely that two stars come very close together in a hyperbolic encounter. So, if the separation of two particles gets smaller than \(p_{90}\) they are candidates for regularization. To be actually regularized, the two particles have to fulfil two more sufficient criteria: that they are approaching each other, and that their mutual force is dominant. These sufficient criteria are defined as
Here, \({{\textbf{a}}}_{\rm{pert}}\) is the vectorial differential force exerted by other perturbing particles onto the two candidates, and R, \({{\textbf{R}}}\), \({{\textbf{V}}}\) are the scalar distance and the vectorial distance and relative velocity between the two candidates, respectively. The factor 0.1 in the upper equation allows nearly circular orbits to be regularized; \(\gamma < 0.25\) demands that the relative strength of the perturbing forces compared to the pairwise force be at most one quarter. These conditions describe quantitatively that a two-body subsystem is dynamically separated from the rest of the system, but not necessarily unperturbed.
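A schematic transcription of this candidate selection may help. The exact expressions below (the form of the approach condition with its factor 0.1, and the definition of \(\gamma\) as perturbing over pairwise force) are illustrative assumptions following the discussion in the text, not the nbody6 source:

```python
import math

def ks_candidate(R_vec, V_vec, m1, m2, a_pert, g=1.0, gamma_max=0.25):
    """Sketch of the KS selection criteria: the pair must be approaching
    (up to the factor 0.1, which also admits nearly circular orbits) and
    only weakly perturbed (gamma < 0.25)."""
    R = math.sqrt(sum(x * x for x in R_vec))
    r_dot_v = sum(R_vec[k] * V_vec[k] for k in range(3))
    approaching = r_dot_v < 0.1 * math.sqrt(g * (m1 + m2) * R)
    # relative strength of the perturbing force vs. the pairwise force
    gamma = a_pert * R * R / (g * (m1 + m2))
    return approaching and gamma < gamma_max
```

An unperturbed head-on pair passes the test, while a rapidly receding pair does not, in line with the requirement that the subsystem be dynamically separated.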
The idea is to take both stars out of the main integration cycle, replace them by their centre of mass (c.m.) and advance the usual integration with this composite particle instead of resolving the two components. The internal motion of the two members of the regularized pair (henceforth KS pair, for Kustaanheimo and Stiefel, Kustaanheimo and Stiefel 1965) is done in a separate coordinate system. However, as was already noted by Aarseth (1971) there is no need for the perturbation of the KS pair from other stars to be small.
The internal motion of a KS pair is integrated in a 4D vector space obtained from a combined canonical and time transformation of relative Cartesian positions and velocities. The coordinate transformation goes back to Levi-Civita in 2D (Levi-Civita 1916). A full generalization to higher dimensions is only possible over the mathematical object of a field, the next such object being the quaternions in 4D. Kustaanheimo and Stiefel found a way to transform forward from 3D to 4D and back from 4D to 3D by working over a skew field of quaternions (sacrificing some commutativity rules; their mathematical language was different though). A modern theoretical approach to this subject can be found, e.g., in Neutsch and Scherer (1992); the complete formalism including also the time transformation can be found in Mikkola (1997a). Aarseth uses this method to integrate the KS pairs in 4D space, automatically returning to Cartesian 3D space when using the back-transformation (Aarseth 1971). The KS transformation converts the motion in a singular Newtonian gravitational potential into a harmonic oscillator in 4D space, which has no singularity. Since the harmonic potential is regular, numerical integration with high accuracy can proceed with much better efficiency, and there is no danger of truncation errors for arbitrarily small separations. The internal timestep of such a KS-regularized pair is independent of the eccentricity and of the order of some 50–100 steps per orbit.
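The planar ancestor of the KS transformation, the 2D Levi-Civita map, is compact enough to show in full (the 4D KS map generalizes it via the quaternions, as described above):

```python
def levi_civita(u1, u2):
    """The 2D Levi-Civita map u -> x: x = u1^2 - u2^2, y = 2 u1 u2,
    so that the radius satisfies r = |x| = u1^2 + u2^2 and the Kepler
    singularity r = 0 becomes a regular point of the transformed
    (harmonic-oscillator) equations of motion."""
    return (u1 * u1 - u2 * u2, 2.0 * u1 * u2)
```

The key property, \(r = |{\textbf x}| = u_1^2 + u_2^2\), is what removes the \(1/r\) singularity: small separations in physical space correspond to perfectly regular points near the origin of u-space.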
While regularization can be used for any analytical two-body solution even across a mathematical singularity (collision), it is practically applied to perturbed pairs only. Once the perturbation \(\gamma \) falls below a critical value of \(\approx 10^{-6}\), a KS pair is considered unperturbed, and the analytical solution of the Keplerian orbit is used instead of numerical integration. The two-body KS regularization occurs in the code either for short-lived hyperbolic encounters or for persistent binaries.
Close encounters between single particles and binary stars are also a central feature of cluster dynamics. The chain regularization (Mikkola and Aarseth 1998) is invoked if a KS pair has a close encounter with another single star or another pair. In particular, if systems start with a large number of primordial (initial) binaries, such encounters may lead to stable (or quasi-stable) hierarchical triples, quadruples, and higher multiples. They are treated by using special stability criteria (Mardling and Aarseth 2001).
Every subsystem—KS pair, chain or hierarchical subsystem—could be perturbed by other single stars. Perturbers are typically those objects that get closer to the object than \(R_{\rm{sep}} = R/\gamma _{\min }^{1/3}\), where R is the typical size of the subsystem; for perturbers, the components of the subsystem are resolved in their own force computation as well. Algorithmic regularization is a relatively recent method based on a time-transformed leapfrog (Mikkola and Merritt 2008); it does not employ the KS transformation. See the next subsection for its use and application.
5.3 Parallel and GPU computing and nbody
A fundamental problem was raised by Daiichiro Sugimoto about 30 years ago (Sugimoto et al. 1990): direct numerical simulations of globular star clusters—with of order a million stars in direct N-body—could not be completed for decades if one extrapolated the standard evolution of computational hardware at that time (Moore’s law) into the future. Therefore astronomers in the Department of Astronomy at Tokyo University started wire-wrapping and designing a new integrated circuit, a special purpose computer chip named GRAPE (= GRAvity PipE). The work was continued with great success by the team of Junichiro Makino; the GRAPE chips were finally assembled into GRAPE accelerator mainboards containing several chips (such as HARP, GRAPE-4, GRAPE-6) (Makino et al. 1993, 1997; Makino and Taiji 1998; Makino et al. 2003).
The GRAPE chip is an application specific integrated circuit (ASIC), which could only compute gravitational forces between particles (it also computed the time derivative of the force, to be directly applicable to the Hermite scheme of nbody6). A GRAPE board is a multi-core (multi-chip) parallel computing device (e.g., a GRAPE-4 board contained 48 chips with shared memory, each chip containing one pipeline for force calculation; the GRAPE-6 chip contained 6 force pipelines, Makino et al. 2003).
Custom-built computing clusters using GRAPE were assembled outside of Japan, e.g., in Rochester and Heidelberg (Harfst et al. 2007). In the following years, graphics processing units (GPUs) widely replaced GRAPE; direct N-body implementations were done on GPU clusters (Portegies Zwart et al. 2007; Schive et al. 2008). Interfaces were written such that GRAPE users could right away also use GPUs with the newly invented programming language CUDA (Yebisu, Nitadori and Makino 2008; Sapporo, Gaburov et al. 2009; Kirin, Belleman et al. 2008, 2014). Still somewhat state of the art is nbody6gpu, which includes GPU acceleration of nbody6 using CUDA kernels for single node servers (Nitadori and Aarseth 2012). Many of these kernels written by Keigo Nitadori are still in current use, even in the massively parallel programs such as nbody6++gpu, see below.
At the same time another development started, parallelization of nbody6 with the (at that time) new standard MPI (message passing interface). nbody6++ (Spurzem 1999) uses the SPMD (Single Program Multiple Data) scheme to run many instances of the code in parallel, while distributing force computations for different particles to the processors of a massively parallel computer. From time to time data transfers using MPI communication routines are necessary, to make sure all processors are synchronous. Systems with hundreds of processing units were used at the time (e.g. CRAY T3E), which demanded efficient coding of the communication scheme. Copy and ring algorithms were developed, and asynchronous data transfer and computation implemented (Makino 2002; Dorband et al. 2003).
A copy algorithm always keeps a complete copy of all particle data on every parallel process; parallelization is over groups of particles due for the correction step; communication sends all new particle positions and velocities in the Hermite scheme to all other processes. In contrast, the ring algorithm uses a domain decomposition: every process owns its specific set of particles (at least for some time), and instead of particle positions and velocities, partial gravitational forces and their time derivatives are communicated. A copy algorithm has been implemented by Spurzem (1999); Hemsendorf et al. (2002) for nbody6++, and a ring algorithm is used in phigrape (Harfst et al. 2007). All these communication algorithms were implemented long ago using the mpi_sendrecv routine in a cyclic fashion—for p processes \(p-1\) communication steps are needed. Every process simultaneously sends data to its next neighbour and receives data from its other neighbour, in a ring structure. Therefore these algorithms are also denoted as systolic communication algorithms (both copy and ring). Nowadays mpi_allgather or mpi_allreduce may be used, but their implementations are not transparent and vary; the latter would normally use a tree-based implementation (instead of a systolic one)—the number of communication steps is then only \(\log _2(p)\) (while our systolic algorithm needs \({{\mathcal {O}}}(p)\) steps). It can be shown that asymptotically (for large data chunks and low latency) both algorithms are equivalent with regard to the total time required, because the tree-based algorithm uses increasingly large data packages, while in systolic algorithms every step communicates the same amount of data (Dorband et al. 2003). Hence the systolic communication with a copy algorithm is currently still used in nbody6++gpu.
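The systolic pattern described above can be modelled without MPI at all, which makes the \(p-1\) step count easy to see. In this sketch plain lists stand in for the mpi_sendrecv calls, and each "process" passes on the block it received in the previous step to its right neighbour:

```python
def systolic_exchange(blocks):
    """Model of a systolic (ring) communication cycle: p processes,
    p-1 steps; in each step every process forwards the block it last
    received to its right neighbour. After p-1 steps every process
    holds a copy of all blocks."""
    p = len(blocks)
    known = [[blocks[i]] for i in range(p)]   # each process starts with its own block
    in_flight = list(blocks)                  # block currently on each ring link
    for _ in range(p - 1):
        in_flight = [in_flight[(i - 1) % p] for i in range(p)]  # ring shift
        for i in range(p):
            known[i].append(in_flight[i])
    return known
```

Every step moves the same amount of data over every link, which is the property that makes the systolic scheme asymptotically competitive with tree-based collectives for large data chunks, as discussed above.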
When going to ten or a hundred million bodies the particle data of the copy algorithm may become too large for system memory, and the code should be updated to the ring algorithm with domain decomposition, which is not a fundamental problem (it is already used in the phigrape code, a simple variant of the Hermite scheme with blocked hierarchical timesteps). While both ring and copy algorithms scale linearly with p, a hypersystolic algorithm exists which scales only with \(\sqrt{p}\) (Lippert et al. 1996, 1998). For GRAPE a special implementation of a hypersystolic algorithm for 2D meshes of processing elements has been proposed (Makino 2002). Hypersystolic and other tree-based communication algorithms can play out their strengths in the case of a huge number of processes (as for GRAPE, for example) with relatively modest computational load on every process. By contrast, the current nbody6++gpu algorithm requires modest numbers of processes (order 10–100 with current particle numbers of up to around a million, maybe more in the future), which have big computing loads and very large data chunks to communicate.
nbody6++ (Spurzem 1999) parallelizes both force loops with MPI, for the regular and the neighbour force in the Ahmad–Cohen scheme.
A ring communication algorithm with domain decomposition would in the future also help in situations when there are many small block timesteps with few particles to integrate. The current code nbody6++ (and its successor nbody6++gpu using GPU) only invokes parallel MPI execution if the number of particles in a time block is large enough (e.g. 50–100; the best value has to be tested for every hardware). For smaller blocks all processors redundantly compute everything without communication, to avoid the overhead connected with MPI. Since the special hierarchical timestep scheme of nbody6 favours time blocks with many particles, this is no bottleneck for typical globular cluster simulations. However, in the case of very high central density, as in nuclear star clusters with a central supermassive black hole (see Sect. 7), the parallel performance degrades.
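The block-timestep policy described above can be sketched schematically. This is a minimal illustration, not the nbody6 implementation: timesteps are quantized down to powers of two so particles become due in discrete blocks, and a (hardware-dependent, here invented) threshold decides whether a block is large enough to justify MPI parallelization.

```python
import math

# Illustrative sketch of hierarchical block timesteps: raw timesteps are
# quantized down to powers of two, so particles fall into discrete blocks
# that become due simultaneously. The threshold n_par mimics the policy
# described above; its value and all names here are invented for the sketch.

def block_dt(raw_dt):
    """Largest power of two <= raw_dt (the quantized block timestep)."""
    return 2.0 ** math.floor(math.log2(raw_dt))

def due_particles(times, dts, t):
    """Indices of particles whose next update time t_i + dt_i equals t."""
    return [i for i, (ti, dti) in enumerate(zip(times, dts)) if ti + dti == t]

def use_mpi(block_size, n_par=64):
    """Parallelize only blocks above the (hardware-dependent) threshold;
    smaller blocks are computed redundantly on every process."""
    return block_size >= n_par

# demo: a raw step of 0.3 is rounded down to the block step 0.25
demo_dt = block_dt(0.3)
```

Because the quantized steps are commensurable, large blocks of particles share update times, which is exactly what makes the coarse-grained MPI parallelization over a block worthwhile.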
With the advent of clusters, where nodes run MPI and each node has a GPU accelerator, nbody6++gpu was created – at the top level MPI parallelization is done for the force loops (coarse-grained parallelization), and at the bottom level each MPI process calls its own GPU to accelerate the force calculation (Berczik et al. 2013), using Nitadori’s Yebisu library (Nitadori and Makino 2008) for the regular force only. Secondly, an AVX/SSE implementation accelerates prediction and neighbour (irregular) forces, and a number of other features have been substantially optimized and improved (such as particle selection at block times) (Wang et al. 2015). This code currently holds the record for the largest direct N-body simulation of a globular cluster with all required astrophysics (single and binary stellar evolution, stellar collisions, tidal field), simulated over 12 Gyrs (Wang et al. 2016).
In recent years, inspired also by the LIGO/Virgo/KAGRA gravitational-wave detections (Abbott et al. 2016), numerous updates have been made with regard to the stellar evolution of massive stars in singles and binaries (Kamlah et al. 2022b), and with regard to the collisional build-up of stars (allowing mass loss at stellar collisions) and intermediate-mass black holes (Rizzuto et al. 2021, 2022; Arca Sedda et al. 2021). The current code is available via GitHub.^{Footnote 3} Note that a different service is provided by Long Wang, quoted in Varri et al. (2018). That alternative version of nbody6++gpu has recently been used by the Padova team, replacing the older SSE/BSE stellar evolution by MOBSE (see e.g., Di Carlo et al. 2021).
Star clusters with primordial (initial) binaries inevitably lead to binaries of black holes. If two black holes get close enough to each other, either during a hyperbolic encounter or due to close Newtonian three-body or four-body interactions, post-Newtonian corrections have to be taken into account. They take the form of an expansion of the relative acceleration between the two bodies in terms of \((v/c)^{2i}\), denoted as PN\(i\) terms. PN1, PN2, and PN3 are conservative, producing periastron shifts of orbits, while PN2.5 and PN3.5 provide the energy and angular momentum loss due to gravitational radiation. The first implementation was done in nbody5 up to PN2.5 (Kupi et al. 2006), and for nbody7 (Aarseth 2012). Relativistic spin-spin and spin-orbit interactions of orders PN1.5, PN2.0, PN2.5 have also been included recently (Brem et al. 2013). The most recent version of nbody7 (Banerjee et al. 2020) also includes the full PN terms by using the ARChain (algorithmic regularization chain) method (Mikkola and Merritt 2008). nbody7 is GPU accelerated, but does not yet have the MPI parallelization of nbody6++ and nbody6++gpu. Generally, binary evolution governed by post-Newtonian terms has been compared with full numerical solutions of general relativity; deviations between fully relativistic and post-Newtonian evolution only occur during the final period of the merger, in a time span usually negligible for astrophysical purposes. The reader interested in the derivation and justification of the post-Newtonian terms is referred to Kupi et al. (2006); Mikkola and Merritt (2008); Brem et al. (2013) and references therein.
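The astrophysical effect of the dissipative PN2.5 term can be illustrated with the standard orbit-averaged Peters (1964) result, which follows from the same leading-order radiation reaction: for a circular binary the time to merger is \(T = 5 c^5 a^4 / (256\, G^3 m_1 m_2 (m_1+m_2))\). The sketch below evaluates this formula; the specific masses and separation are only example values, not taken from any simulation discussed here.

```python
# Orbit-averaged merger time of a circular black-hole binary from
# leading-order (2.5PN) gravitational radiation reaction (Peters 1964):
#   T = 5 c^5 a^4 / (256 G^3 m1 m2 (m1 + m2)).
# The formula is standard; the example numbers below are illustrative.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m s^-1
MSUN = 1.989e30        # solar mass, kg
AU = 1.496e11          # astronomical unit, m
YR = 3.156e7           # year, s

def merger_time(m1, m2, a):
    """Circular-orbit GW merger time in seconds (Peters 1964)."""
    return 5 * c**5 * a**4 / (256 * G**3 * m1 * m2 * (m1 + m2))

# example: two 30 Msun black holes at 0.1 AU separation
t_merge_yr = merger_time(30 * MSUN, 30 * MSUN, 0.1 * AU) / YR
```

The strong \(a^4\) dependence is the reason dynamical hardening in clusters matters so much: shrinking the semi-major axis by a factor of two shortens the merger time by a factor of sixteen.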
When black holes merge they experience a kick due to asymmetric gravitational-wave emission; see e.g. the MOCCA implementation (Morawski et al. 2018). A similar model is already used in nbody7 (Banerjee et al. 2020); this is a field where nbody6++gpu is currently lagging behind, and work on it is ongoing. The following Table 3 gives a summary of the features of the different variants of the nbody codes.
Table 4 shows for nbody6++gpu a model fit, obtained from a number of simulations using a range of particle numbers N and MPI process number \(N_p\), where each MPI process also uses a GPU (Huang et al. 2016). Eight different pieces of the code have been profiled as indicated. The fit shows the following key information:

1.
regular and irregular force computation are very well parallelized (\(\propto N_p^{-1}\));

2.
regular force computation still scales with approximately \(N^2\), but with a very small factor in front, due to the fast GPU processing.

3.
MPI communication and synchronization are a bottleneck; no further speedup is possible for more than 8–16 MPI processes.

4.
Also prediction and sequential parts on the host are bottlenecks if going for large N, because they scale approximately with \(N^{1.5}\), and do not scale down with processor number.
The timing model is already a few years old; the current code version has made progress in the MPI parallelization of prediction (pos. 3). To improve the communication scaling, faster MPI or NVLink^{Footnote 4} communication hardware will be beneficial (pos. 5, 6). Note that all numerical factors in the fit depend on the specific hardware used—CPUs, GPUs, communication lines between CPU nodes and between CPU and GPU.
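The structure of such a timing model can be sketched with a toy version whose coefficients are invented for illustration (they are NOT the fitted values of Table 4); only the functional forms follow the discussion above: force work scaling as \(N^2/N_p\), host-side prediction as \(N^{1.5}\) independent of \(N_p\), and communication growing with \(N_p\).

```python
# Toy performance model in the spirit of Table 4; the coefficients a, b, c
# are invented for this sketch and hardware dependent in reality. Only the
# scalings follow the text: parallel force ~ N^2/Np, host work ~ N^1.5,
# communication/synchronization growing with Np.

def wall_time(N, Np, a=1e-9, b=1e-7, c=0.1):
    force = a * N**2 / Np      # regular/irregular forces, ideal parallel scaling
    host = b * N**1.5          # prediction and sequential host parts
    comm = c * Np              # MPI communication and synchronization
    return force + host + comm

# sweep process counts for N = 1e6 and locate the fastest configuration
N = 10**6
best_np = min(range(1, 257), key=lambda p: wall_time(N, p))
```

The sweep shows the generic behaviour: beyond some process count the communication term wins and adding processes slows the run down, while the host term sets a floor that no amount of parallel force hardware removes.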
Figure 4 shows essentially the same information as Table 4, but here the eye should inspect the relative weight of the different components as the number of MPI processes increases. The coloured fields correspond to the code parts discussed above, but a little more segmented:

a.
Reg. and Irr. correspond to regular and irregular force computation in Table 4;

b.
Pred. is prediction;

c.
Move is data moving;

d.
Comm.R, Send.R, Comm.I and Send.I are MPI communication (regular and irregular);

e.
Barr. is synchronization;

f.
Init.B., Adjust, KS, refer to sequential parts on the host.
The bottom line to stress from these results is that even for one million bodies the bottleneck of the parallel code is NOT the regular force (which would be extremely dominant in sequential processing), so it is NOT the stumbling block for going to much higher particle numbers; the stumbling blocks are prediction and communication.
There are also phigpu (Berczik et al. 2013), phigrape (Harfst et al. 2007), and higpu (Capuzzo-Dolcetta et al. 2013; Spera 2014). All of them use the Hermite scheme with hierarchically blocked timesteps, and are fully parallelized and GPU accelerated. There is no Ahmad–Cohen neighbour scheme and no regularization, which means that on a serial computer they would be much slower than e.g. nbody6++gpu. But with very efficient parallelization and GPU acceleration this is partly compensated; they have been used for astrophysical problems where star-by-star modelling can be neglected, such as galactic nuclei and galaxy mergers with supermassive black holes (cf. e.g., Zhong et al. 2014, 2015; Li et al. 2017, 2019; Bortolas et al. 2018).
An interesting feature is that these codes implement higher-order Hermite integration schemes (in phigpu 4th, 6th, or 8th order can be chosen; higpu uses 6th order). There is another 6th and 8th order Hermite integrator (Nitadori and Makino 2008); so far these higher-order integrators have seen relatively little use, consistent with the conclusion that the 4th-order integrator is an optimal choice for performance and accuracy (Makino 1991b).
5.4 Are Nbody simulations reliable?
At this point the reader may expect that direct N-body simulations turn out to be the most reliable (although computationally most expensive) way to simulate the dynamical evolution of a gravitating system consisting of N point masses. They do not involve any serious approximations and assumptions, unlike e.g. the Fokker–Planck approximation or the Monte Carlo codes. By reducing the \(\eta \) values in the timestep (Eq. (45)) any accuracy can be achieved in principle, as long as machine accuracy permits it. Usually, globally conserved quantities such as energy, angular momentum, and the centre of mass (position and velocity) are used to control accuracy and timestep choice.
However, for a system with N particles phase space has 6N dimensions, and a check of, say, energy, angular momentum, and centre of mass alone only checks whether the numerically calculated system remains within an allowed \(6N-9\) dimensional hypervolume. There is no a priori information on how “exactly” the “true” individual trajectories are reproduced in the simulation within this hypervolume. It was pointed out early that, due to repeated close encounters between particles, initial configurations that are very close to each other quickly diverge in their evolution (Miller 1964). That work showed that the separation in phase space of two trajectories increases exponentially with time; in other words, the evolution of the configuration is extremely sensitive to the initial conditions (particle positions and velocities). The timescale of this exponential instability is as short as a fraction of a crossing time, and the accurate integration of a system to core collapse would require of order \({{\mathcal {O}}}(N)\) decimal places (Goodman et al. 1993; Kandrup et al. 1994). These papers argue that the problem is caused by two-body encounters, but chaotic orbits in non-integrable potentials can be a source of exponential instability, and thus cause unreliable numerical integrations, as well.
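The divergence described by Miller (1964) is easy to reproduce in a toy experiment. The sketch below (a plain softened leapfrog with invented parameters, not one of the production codes discussed here) evolves two five-body realizations whose initial conditions differ by \(10^{-9}\) in a single coordinate and measures how far apart they end up.

```python
import random

# Toy demonstration of sensitive dependence on initial conditions in the
# gravitational N-body problem: two realizations differing by 1e-9 in one
# coordinate diverge by many orders of magnitude after a few crossing
# times. Plain 2D leapfrog, unit masses, G = 1, softened to tame close
# encounters; all parameter values are illustrative.

def accelerations(pos, eps2=1e-3):
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r2 = dx * dx + dy * dy + eps2     # softened distance squared
            inv_r3 = r2 ** -1.5
            acc[i][0] += dx * inv_r3
            acc[i][1] += dy * inv_r3
    return acc

def evolve(pos, vel, dt=5e-4, steps=8000):
    pos = [p[:] for p in pos]
    vel = [v[:] for v in vel]
    for _ in range(steps):                    # kick-drift-kick leapfrog
        acc = accelerations(pos)
        for i in range(len(pos)):
            vel[i][0] += 0.5 * dt * acc[i][0]
            vel[i][1] += 0.5 * dt * acc[i][1]
            pos[i][0] += dt * vel[i][0]
            pos[i][1] += dt * vel[i][1]
        acc = accelerations(pos)
        for i in range(len(pos)):
            vel[i][0] += 0.5 * dt * acc[i][0]
            vel[i][1] += 0.5 * dt * acc[i][1]
    return pos

random.seed(1)
pos = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(5)]
vel = [[0.0, 0.0] for _ in range(5)]          # cold collapse, chaotic scattering
pos2 = [p[:] for p in pos]
pos2[0][0] += 1e-9                            # tiny perturbation
d0 = 1e-9
a = evolve(pos, vel)
b = evolve(pos2, vel)
d1 = max(abs(a[i][k] - b[i][k]) for i in range(5) for k in range(2))
```

The final separation `d1` exceeds the initial offset by several orders of magnitude, in line with an e-folding time of a fraction of a crossing time, while ensemble-averaged quantities (see the next paragraph) would remain robust.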
However, the situation is not as bad as it seems. N-body simulations of star clusters or galactic nuclei do not always exploit the detailed configuration space of all particles. Quantities of interest are global or somehow averaged quantities, like Lagrangian radii or velocity dispersions averaged in certain volumes. As nicely demonstrated in a pioneering series of papers (Giersz and Heggie 1994a, b, 1996, 1997), such results are not sensitive to small variations of the initial parameters. They took statistically independent initial models (initial positions and velocities selected from different random number sets) and showed that the ensemble average of the dynamical evolution of the system always evolved predictably and in remarkable accord with results obtained from the Fokker–Planck approximation. The method was also partly and successfully used in Giersz and Spurzem (1994), which focused on the evolution of anisotropy and comparisons with the anisotropic gaseous models of the author of this paper, or in more recent examples (Rizzuto et al. 2021, 2022) where the formation of intermediate-mass black holes was analyzed over a large set of N-body simulations, using statistically independent initial models.
It should be remembered, however, that great care has to be taken when interpreting results of N-body simulations on a particle-by-particle basis, for example when determining rates of specific types of encounters, which could produce mergers in a large direct N-body model.
The long-term behaviour of dynamical systems such as the solar system is also studied by N-body simulations, but there the requirements on the accuracy of the individual orbits are much higher than in the star cluster problem. Therefore, for solar system dynamics, symplectic methods using a generalized leapfrog, like the widely used Wisdom–Holman symplectic mapping method (Wisdom and Holman 1991), are the standard integration method. Symplectic mapping methods do not show secular errors in energy and angular momentum. However, in their standard implementation they require a constant timestep (but see recent new developments described in the following subsection). A generalization using a time transformation simultaneously with the generalized leapfrog has been suggested, which can cope with variable timesteps (Mikkola 1997b).
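The bounded-error behaviour of such symplectic schemes can be shown with a minimal sketch (units \(G = M = 1\), an invented mildly eccentric test orbit): a fixed-step leapfrog on a Kepler orbit has an energy error that oscillates with orbital phase but does not drift secularly over many periods.

```python
import math

# Minimal illustration of the symplectic property: fixed-step leapfrog on
# a Kepler orbit (G = M = 1) gives an oscillating but bounded relative
# energy error, with no secular growth. Orbit parameters are illustrative.

def kepler_leapfrog(x, y, vx, vy, dt, steps):
    """Integrate and return the relative energy error after each step."""
    errors = []
    e0 = 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y)
    for _ in range(steps):
        r3 = math.hypot(x, y) ** 3            # kick-drift-kick
        vx -= 0.5 * dt * x / r3
        vy -= 0.5 * dt * y / r3
        x += dt * vx
        y += dt * vy
        r3 = math.hypot(x, y) ** 3
        vx -= 0.5 * dt * x / r3
        vy -= 0.5 * dt * y / r3
        e = 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y)
        errors.append(abs((e - e0) / e0))
    return errors

# mildly eccentric orbit (e ~ 0.2), on the order of 100 orbital periods
errs = kepler_leapfrog(1.0, 0.0, 0.0, 1.1, dt=2 * math.pi / 1000, steps=100000)
```

The maximum error at late times is of the same size as during the first few orbits, which is the practical meaning of "no secular errors in energy" quoted above; a non-symplectic scheme of the same order would show a steadily growing drift instead.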
It has been proposed to reduce secular errors in Hermite schemes and direct N-body simulations to a level comparable with symplectic methods by using a time-symmetric scheme. A small variation in the Hermite corrector is needed, which allows iterating to convergence (a few iterations are usually enough), and individual timesteps are made reversible through another iteration (Hut et al. 1995; Funato et al. 1996; Makino et al. 1997). How well this works in general, and its relation to symplectic schemes, is presently not clear. But it has been used successfully for direct N-body simulations of planet formation and planetary systems (Kokubo et al. 1998; Makino et al. 1997). These codes, though, are still on the level of nbody4, because they do not use the Ahmad–Cohen neighbour scheme—even in the smallest steps full force calculations over all N particles are needed. nbody6++ has been similarly improved using an extended Hermite scheme to allow iteration, for a hybrid N-body and Fokker–Planck simulation of planetesimal growth in protoplanetary disks (Glaschke et al. 2014; Amaro-Seoane et al. 2014) (no GPU implementation with nbody6++gpu yet).
Mikkola and Aarseth (1998) stress that even with a newly applied classical method, secular errors in the integration of close binaries can be strongly reduced. One should keep in mind, though, that the N-body integration schemes discussed in this paper yield excellent results in star-cluster research (see Sect. 4) but are unsuitable for long-term solar system studies, because they generally have secular errors, although small ones. Due to the inherently chaotic physical nature of star clusters, the remaining small secular errors can usually be tolerated. This means that the solution found in the computer always stays near a permitted solution of the underlying Hamiltonian, even if it does not stay on the one trajectory which belongs to the initial conditions (Quinlan and Tremaine 1992). But a recent dynamical study has reiterated that it may not be sufficient just to check a few globally conserved quantities, because these could be dominated by a few high-energy objects (binaries) and could cover up errors in other parts of the system (Wang and Hernandez 2021).
As outlined above, in star cluster simulations the secular errors are kept small relative to typical values of energy and angular momentum, and an accurate reproduction of all individual stellar orbits is not generally required.
5.5 New approaches
A completely new code, called petar, has been introduced (Wang et al. 2020a). It is a hybrid N-body code, combining the P\({}^3\)T (particle–particle particle–tree) method (Oshino et al. 2011; Iwasawa et al. 2015, 2016, 2017) and a slow-down time-transformed symplectic integrator (sdar) (Wang et al. 2020c). The latter is mathematically similar to a KS (Kustaanheimo and Stiefel 1965) regularization, using a time transformation in a similar way (based on the Poincaré transform of the Hamiltonian, Preto and Tremaine 1999; Mikkola and Merritt 2008), but for the canonical coordinate transformation it uses an extended classical phase space rather than the 4D KS space. Both regularization methods also employ a slow-down procedure. petar uses the parallelization framework for developing particle simulation codes (fdps, Iwasawa et al. 2016, 2020) to manage the particle-tree construction and long-range force calculation. For single and binary stellar evolution the standard SSE and BSE packages are used, as in e.g. MOCCA (Hurley et al. 2005; Banerjee et al. 2020; Kamlah et al. 2022b).
The code is conceptually ahead of nbody6++gpu in several respects: parallelization of a large number of hard binaries is included, a domain decomposition makes it easier to go to particle numbers much larger than \(10^6\), and a tree scheme is used for distant groups of particles instead of the Ahmad–Cohen neighbour scheme of nbody6++gpu. We show in Fig. 5 its excellent strong scaling obtained on the Juwels Booster supercomputer in Germany (Jülich Supercomputing Centre 2021).
In Wang et al. (2020a, the paper about the petar code), performance comparisons with nbody6++gpu seem too pessimistic in comparison with benchmarks published in Huang et al. (2016) and further recent tests done with up to \(1.6\times 10^7\) particles, to be published soon. Undoubtedly petar is ahead in terms of parallelization of binaries, but regularized binaries can and will soon be parallelized in nbody6++gpu as well. The paper also presents a couple of convincing comparisons between both codes, but a real long-time model comparison with many binaries is still work in progress. The point is that the single and binary stellar evolution in nbody6++gpu has a much stronger coupling to the dynamical evolution of binaries and close encounters – troublesome from a programmer’s viewpoint (as stated in Wang et al. 2020a), but important when following very hard and relativistic binaries (Rizzuto et al. 2021, 2022). Even the tree algorithm is not really essential—inherently it can be much faster than the direct Hermite scheme (even with the Ahmad–Cohen neighbour scheme), but this depends on the accuracy required. A comparison of nbody6++gpu with the bonsai tree code (Bédorf et al. 2012) has shown that if the same accuracy is required, the performance difference is not significant (Huang et al. 2016). This can also be seen in Sect. 5.3 from the performance analysis of nbody6++gpu—even for more than a million bodies the regular distant forces are not the bottleneck; they have been effectively parallelized.
Another novel approach is based on forward symplectic integrators (FSI) (Chin 1997; Chin and Chen 2005; Chin 2007; Dehnen and Hernandez 2017). The code, called bifrost (Rantala et al. 2021, 2023), uses MPI parallelization and GPU acceleration and shows competitive benchmarks for very large N (a million and more). The authors note that the FSI approach has hardly been used in the community, even though it generates very accurate orbital integrations. As discussed above, for most star cluster simulations orbit integration with maximum machine precision is not really needed; however, for the innermost regions of nuclear star clusters, with stellar orbits around supermassive black holes, this will be important, and frost may turn out to be excellently suited for such environments. On the other hand, the planetary system and protoplanetary disk simulation community regularly uses either symplectic schemes or the improved iterated Hermite schemes (see Sect. 5.3 above). A very significant innovation in frost is mstar—a fast parallelized algorithmically regularized integrator. Instead of classical and algorithmic chains, which assemble their particles linearly (in a chain), mstar uses a minimum spanning tree, so the chain can have branches. That leads to smaller average distances between chain particles than before, which reduces computing time and errors. Such an algorithm could be used for the other chain regularizations as well.
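The benefit of a minimum spanning tree over a linear chain can be sketched directly. The following illustration (random positions, Prim's algorithm; not the mstar implementation itself) builds both structures over the same particle set; since a linear chain is just one particular spanning tree, the MST's total edge length can never exceed it.

```python
import math
import random

# Sketch of the mstar idea described above: replacing a linear chain
# through the particles by a minimum spanning tree (Prim's algorithm)
# shortens the inter-particle separations along which relative coordinates
# are taken. Positions and particle count are arbitrary illustration values.

def chain_length(points):
    """Greedy nearest-neighbour chain, as in classical chain regularization."""
    rest = list(points[1:])
    total, cur = 0.0, points[0]
    while rest:
        nxt = min(rest, key=lambda p: math.dist(cur, p))
        total += math.dist(cur, nxt)
        rest.remove(nxt)
        cur = nxt
    return total

def mst_length(points):
    """Total edge length of the minimum spanning tree (Prim's algorithm)."""
    in_tree = [points[0]]
    rest = list(points[1:])
    total = 0.0
    while rest:
        # cheapest edge from the tree to any remaining point
        d, nxt = min((min(math.dist(t, p) for t in in_tree), p) for p in rest)
        total += d
        in_tree.append(nxt)
        rest.remove(nxt)
    return total

random.seed(7)
pts = [(random.random(), random.random(), random.random()) for _ in range(30)]
l_chain = chain_length(pts)
l_mst = mst_length(pts)
```

Because every chain is itself a spanning tree, `l_mst <= l_chain` holds for any configuration; the shorter internal separations are what reduce round-off in the chained relative coordinates.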
Finally, hybrid codes have constantly been designed and used to amend the direct N-body scheme for a large number of distant particles. This started already with nbody5, which was coupled to a tree scheme (McMillan and Aarseth 1993); nbody6++ has been hybridized with a series expansion code (Hemsendorf et al. 2002), sometimes also called self-consistent field (SCF, Hernquist and Ostriker 1992). Last but not least, there is a new hybrid code etics (Meiron et al. 2014), which has been coupled with phigrape and applied to the loss cone problem in a nuclear star cluster around a binary supermassive black hole (Avramov et al. 2021).
6 Astrophysics in star clusters
6.1 Single stellar evolution
In realistic star cluster simulations all stars undergo stellar evolution as time proceeds (see e.g. Church et al. 2009), and therefore a large array of stellar evolutionary processes must be considered. We briefly outline the fundamentals of single stellar evolution (Sect. 6.1), because it is essential to understand the complexities that need to be modelled before we move on to an area in which collisional N-body simulations find some of their strongest applications: binary stellar evolution (Sect. 6.2) in dense star clusters. The discussion in this Sect. 6.1 is primarily based on Kippenhahn et al. (2012), but a more recent review may also be found in Salaris and Cassisi (2017).
A star is a self-gravitating object of hot plasma, which emits energy at the surface in the form of photons (and from the inner regions in the form of neutrinos). Furthermore, it is spherically symmetric in the absence of rotation, magnetic fields, and a sufficiently close companion (or multiple companions) that would induce interior oscillations and bulges through tidal interaction or deform the star through mass transfer. These are typical assumptions in 1D modelling of single stars, and they yield four fundamental structure equations that govern stellar evolution under the assumption of hydrostatic equilibrium. Any deviation from hydrostatic equilibrium will become increasingly important in harder binary stars.
Energy transport in a star is either radiative or convective (where convective transport can also include some conduction, which is less important). Which of the two operates is given by the Schwarzschild stability criterion, which compares the temperature gradient in the radiative case with the temperature gradient of an adiabatic movement of matter elements: \(\nabla _{\rm{rad}}< \nabla _{\rm{ad}}\). The less practical Ledoux criterion also takes into account a possible gradient in the density and chemical composition of a star. If some matter is unstable according to the Ledoux criterion, convection will set in and mix the material until it is homogeneous, thereby diminishing these gradients. Therefore, in practice the Schwarzschild criterion is more commonly used.
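The two criteria can be written down as a pair of one-line predicates. The sketch below uses the textbook forms; the coefficient on the composition term in the Ledoux version is the \(\varphi /\delta \) factor of the standard formulation, and the numeric gradients in the demo are invented values.

```python
# Textbook stability criteria for convection. Schwarzschild: a layer is
# convective when nabla_rad > nabla_ad. Ledoux: the adiabatic gradient is
# augmented by a mean-molecular-weight term (phi/delta) * nabla_mu, which
# stabilizes layers with a composition gradient. Demo values are invented.

def schwarzschild_convective(nabla_rad, nabla_ad):
    return nabla_rad > nabla_ad

def ledoux_convective(nabla_rad, nabla_ad, nabla_mu, phi_over_delta=1.0):
    # nabla_mu > 0 (heavier material below) acts to suppress convection
    return nabla_rad > nabla_ad + phi_over_delta * nabla_mu

# a layer marginally unstable by Schwarzschild can still be Ledoux-stable
demo_schwarz = schwarzschild_convective(0.42, 0.40)
demo_ledoux = ledoux_convective(0.42, 0.40, nabla_mu=0.05)
```

This is exactly the situation mentioned above: once convection has homogenized the layer, \(\nabla _{\mu }\rightarrow 0\) and the two criteria coincide, which is why the simpler Schwarzschild form is used in practice.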
Radiative energy transport is commonly described using a diffusion equation. For convection, on the other hand, there is a convective gradient, and since no complete theory of convection exists, the problem is approximated using mixing-length theory (MLT). MLT describes the convective temperature gradient \(\nabla _{\rm{c}}\) surprisingly well despite a large number of unrealistic assumptions; e.g., all convective “bubbles” are assumed to travel the same distance due to buoyancy forces until they dissolve, dispersing all their energy into the surrounding layer. MLT is parameterised globally by \(\alpha _{\rm{MLT}}\), the ratio of the mixing length to the pressure scale height; \(\alpha _{\rm{MLT}}\) is around 2 for Solar models. Since in deep stellar interiors convection is very efficient, the “blobs” move adiabatically and \(\nabla _{\rm{c}} \simeq \nabla _{\rm{ad}}\). In the outermost layers (low densities), however, convection is not so efficient, a lot of energy is lost by a blob moving up, and the energy transport is superadiabatic, in the extreme even approaching the radiative gradient: \(\nabla _{\rm{rad}}> \nabla _{\rm{c}} > \nabla _{\rm{ad}}\).
The chemical composition of a star changes with time due to nuclear reactions in its interior. It can also be subject to convective mixing, sedimentation, rotation (angular momentum transport) and hydrodynamical instabilities. The inclusion of all of these effects is difficult, because it requires 3D treatment; but most currently used stellar evolution codes, such as Modules for Experiments in Stellar Astrophysics (mesa) (Paxton et al. 2011, 2013, 2015, 2016, 2018, 2019) or HOngo Stellar Hydrodynamics Investigator (hoshi) (Takahashi et al. 2016, 2018, 2019; Yoshida et al. 2019) are 1D.
6.1.1 Two fundamental principles of stellar evolution
The general evolution of a star following the assumptions above is governed by two fundamental principles. The first one is the virial theorem (gravitational energy \(E_\text{g} = -2 E_\text{i}\), with \(E_\text{i}\) the total internal energy), which follows from the assumption of hydrostatic equilibrium in a star represented by a self-gravitating sphere (Sect. 6.1). The virial theorem implies that on contraction of a star modelled as an ideal gas, half of the liberated energy is radiated away and the other half is used to increase the internal energy, which means that the star heats up. In other words, if a star loses energy from its surface, it must contract overall and heat up, which is a consequence of its negative heat capacity. That does not mean that some parts, like the stellar envelope, cannot expand over the star’s evolution, but it is certain that the largest part of the star contracts and heats up over its lifetime. Interestingly, massive stars, which are radiation pressure dominated, approach the limit of an unbound structure, which is one of the reasons why they lose mass much more easily.
The second fundamental principle is Coulomb repulsion, which determines the sequence of nuclear burning phases. Because the virial theorem leads to a general increase of the interior stellar temperature, nuclear burning phases follow a sequence from light to heavier elements, i.e. they start with hydrogen (H) burning (the main sequence (MS) phase), followed by helium (He) burning (the horizontal branch (HB) phase), the carbon (C) burning phase and so on. This burning sequence stops when an iron (Fe) core is reached, because any further nuclear fusion is endothermic. We reach an “onion-like” stellar structure: in the outer layers original stellar material is still being processed (H fusing to He), while at the centre an Fe and nickel (Ni) core forms, if the stellar mass is large enough.
6.1.2 Timescales, energy conservation and homology
The following timescales are extremely useful in characterising the evolution of stars:

1.
hydrostatic timescale \(\tau _{\rm{hydro}}\): let us assume that the internal stellar forces are not balanced and the star is no longer in hydrostatic equilibrium. The timescale to return to hydrostatic equilibrium is \(\tau _{\rm{hydro}}\simeq \frac{1}{2}(G{\bar{\rho }})^{-1/2}\). It is extremely short: of the order of seconds for White Dwarfs (WDs), of minutes for the Sun, and of the order of days for Red Giants (RGs).

2.
Kelvin–Helmholtz (thermal) timescale \(\tau _{\rm{KH}}\): it is defined as the timescale during which the entire internal energy of the star would be radiated away at its current luminosity. For the Sun it is of the order of 10 million years.

3.
nuclear timescale \(\tau _{\rm{nuc}}\): let us assume that the whole luminosity comes only from the nuclear energy reservoir within the star and that the luminosity stays constant at the current value for the duration of this thought experiment. For the Sun, the emission of all nuclear energy as radiation would take on the order of 70 billion years.
In most phases of stellar evolution we have \(\tau _{\rm{hydro}} \ll \tau _{\rm{KH}} \lessapprox \tau _{\rm{nuc}}\), and for MS and core He burning (CHeB) stars mostly even \(\tau _{\rm{KH}} \ll \tau _{\rm{nuc}}\). In late stellar evolution phases \(\tau _{\rm{KH}}\) can approach \(\tau _{\rm{nuc}}\).
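The hydrostatic timescale above can be checked numerically with rough textbook mean densities (the specific density values are order-of-magnitude illustrations, not precise stellar models):

```python
# Numerical check of the hydrostatic timescale
#   tau_hydro ~ (1/2) (G * rho_mean)^(-1/2)
# for three rough textbook mean densities, reproducing the ordering
# quoted above: seconds (WD) < minutes (Sun) < days (red giant).

G = 6.674e-11  # m^3 kg^-1 s^-2

def tau_hydro(rho_mean):
    """Hydrostatic (dynamical) timescale in seconds for mean density in kg/m^3."""
    return 0.5 * (G * rho_mean) ** -0.5

t_wd = tau_hydro(1e9)     # white dwarf,  ~1e9  kg/m^3
t_sun = tau_hydro(1.4e3)  # Sun,          ~1.4e3 kg/m^3
t_rg = tau_hydro(1e-3)    # red giant envelope, ~1e-3 kg/m^3
```

The Sun comes out near half an hour, a white dwarf at a couple of seconds, and a red giant at weeks, spanning the enormous dynamic range that makes \(\tau _{\rm{hydro}}\) negligible against the thermal and nuclear timescales.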
If we look at global energy conservation in stellar evolution, we arrive at the homology (“similarity”) relations for stars. From these we can derive a mass-luminosity relation that is very fundamental in stellar physics. For MS stars, the homology analysis yields \(L\simeq \mu ^4\,M^3\), where \(\mu \) is the mean molecular weight (entering through the ideal-gas law \(P \propto \rho T/\mu \)). This relation implies that the luminosity does not directly depend on the energy generation; the proportionality factor predominantly depends on the opacity of the stellar material, which in turn is determined by its chemical composition. If the energy generation in the star changes, it will adjust itself such that it has the same luminosity as before.
Furthermore, a mass-radius (M–R) relation is derived from the homology relations. In contrast with the mass-luminosity (M–L) relation, it does depend on the energy generation: for the two main nuclear burning modes on the MS we get \(R\sim \mu ^{0.61}M^{0.78}\) for the CNO cycle and \(R\sim \mu ^{0.125}M^{0.5}\) for the pp chain.
The M–L relation, the M–R relation and the Stefan–Boltzmann law for blackbody radiation lead to the equation for the MS in the Hertzsprung–Russell diagram (HRD), \(\log (L)=8\times \log (T_{\rm{eff}})+\mathrm {const.}\), and also to lines of constant radius in the HRD, which follow \(\log (L)=4\times \log (T_{\rm{eff}})+\mathrm {const.}\)
The lifetime of stars is derived from the M–L relation and \(\tau _{\rm{nuc}}\sim E_{\rm{nuc}}/L \sim M/L\), giving \(\tau _{\rm{nuc}}\sim M^{-2}\). This means that more massive stars are brighter, but have shorter lifespans. In phases beyond the MS, the nuclear reaction energy release is smaller and the luminosities are generally larger, which leads to shorter lifetimes. Consequently, the total lifetime of a single star is dominated by the time spent on the MS.
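The homology exponents above can be combined numerically as a consistency check. The sketch uses the exponents quoted in the text (\(L\sim M^3\), CNO \(R\sim M^{0.78}\)) together with the Stefan–Boltzmann law \(L\sim R^2 T_{\rm{eff}}^4\); the resulting HRD slope comes out close to the quoted value of 8, and \(\tau _{\rm{nuc}}\sim M^{-2}\) gives the lifetime ratios.

```python
# Consistency check of the homology results quoted above. With
# L ~ M^3 and R ~ M^0.78 (CNO cycle), the Stefan-Boltzmann law
# L ~ R^2 T_eff^4 gives T_eff ~ M^((3 - 2*0.78)/4), so the slope of the
# main sequence in the HRD is d log L / d log T_eff = 3 / 0.36 ~ 8.3,
# close to the value 8 in the text. tau_nuc ~ M^-2 gives lifetimes.

def hrd_slope(ml_exp=3.0, mr_exp=0.78):
    """Main-sequence slope d log L / d log T_eff from homology exponents."""
    teff_exp = (ml_exp - 2 * mr_exp) / 4.0   # T_eff ~ M^teff_exp
    return ml_exp / teff_exp

def lifetime_ratio(m1, m2):
    """tau_nuc ~ M^-2: ratio of MS lifetimes of two stars."""
    return (m1 / m2) ** -2

slope = hrd_slope()
ratio_10_1 = lifetime_ratio(10, 1)   # 10 Msun star vs 1 Msun star
```

A \(10\,M_{\odot }\) star thus lives only about one percent as long as a \(1\,M_{\odot }\) star, which is why the total lifetime of a star is set by its MS phase.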
Through the homology relations values of the central temperature \(T_{\rm{c}}\), central pressure \(P_{\rm{c}}\), and central density \(\rho _{\rm{c}}\) of a star on the MS are obtained, which all depend on the stellar mass and the nuclear energy generation. Increasing stellar mass along the MS leads to: (1) increase of central temperature \(T_{\rm{c}}\); (2) decrease of central density \(\rho _{\rm{c}}\) if the CNOcycle (\(1\,M_{\odot } \lessapprox M\)) is the dominant nuclear burning mechanism, while \(\rho _{\rm{c}}\) increases if the ppcycle (\(M\lessapprox 1\,M_{\odot }\)) dominates; (3) decrease of the central pressure. Hence, with increasing mass, stars along the MS are hotter and radiation pressure becomes increasingly dominant until it dominates completely for very highmass stars.
Finally, we discuss the homologous contraction of a gaseous sphere. This analysis yields a relation between the central temperature and the central density. For ideal gases, contraction leads to heating of the gas, while for non-relativistic, strongly degenerate gases contraction leads to cooling in the transition from the non-degenerate to the strongly degenerate regime. This means that low-mass stars will never ignite certain elements, because at some stage their cores become degenerate and the central temperature drops upon further contraction.
6.1.3 Fundamental parameters—mass and composition
While they are incredibly useful to understand fundamental relations in stellar astrophysics, the homology relations (see Sect. 6.1.2) cannot be applied over the full evolution of the star and are typically only applied to MS stars. We need other ways to describe the full evolution of a star. In general, the fundamental parameters of stellar evolution are the zero-age MS (ZAMS) mass and the (homogeneous) chemical composition.
Other very important parameters independent of mass and composition are rotation and magnetic fields. Rotation can lead to additional interior mixing, which changes the chemical composition of the star. Magnetic fields may influence the pressure balance and interact with convection and rotation, which is probably most important for massive stars.
6.1.4 Mass change of stars—stellar winds
The masses of all stars change throughout their lives through winds, parameterised by a stellar mass loss rate \({\dot{M}}\). Stellar winds are outflows of matter leaving the stellar surface with an energy sufficient to escape from the star’s gravity. The main question is which force is powerful enough to overcome the star’s gravity. Different types of stars have different winds. Recently, excellent reviews were written on the winds of lower-mass stars by Decin (2020) and on those of high-mass stars by Vink (2021).

1. Hot luminous stars (HMSs), such as massive MS or evolved stars (\(R\sim 10\,R_{\odot }\)), have strong and fast stellar winds (terminal wind velocities of \(v_{\infty }\sim 2000\)–\(3000\,\mathrm {km\,s}^{-1}\)) powered by radiative line driving (radiative forces exerted on atomic lines, e.g., of ionized C, N, O or Fe-group elements; resonance lines in optically thick regions just a couple of \(R_{\odot }\) around the HMS). These stars have very high mass loss rates \({\dot{M}}\) of \(10^{-8}\)–\(10^{-4}\,M_{\odot }/\rm{yr}\).

2. Cool luminous stars (CMSs), such as AGB intermediate-mass stars (IMSs) (\(R > 100\,R_{\odot }\)), have strong and slow (\(v_{\infty }\le 25\,\mathrm {km\,s}^{-1}\)) stellar winds that are pulsation-driven. These also have very high mass loss rates \({\dot{M}}\) of \(10^{-8}\)–\(10^{-4}\,M_{\odot }/\rm{yr}\). Because CMSs are cool, it is believed that dust grains can form close to the stellar atmosphere, where the stellar pulsations create regions of high density. The dust grains absorb radiative momentum and collide with the surrounding gaseous species, thereby launching a stellar wind.

3. Solar-type stars (LMSs) have hot surrounding coronae and a weak, pressure-driven coronal wind of intermediate speed (\(v_{\infty }\sim 400\)–\(800\,\mathrm {km\,s}^{-1}\)). They have very low mass loss rates \({\dot{M}}\) of order \(10^{-14}\,M_{\odot }/\rm{yr}\).
Many stellar evolution models used inside N-body codes express wind acceleration by a \(\mathrm {\varGamma }\) factor. \(\mathrm {\varGamma }\) is defined as the ratio of radiative over gravitational acceleration. Radiative acceleration is due to radiation pressure and introduces an extra force acting on a spherically symmetric, isothermal wind. It is related to electron scattering \(\mathrm {\varGamma }_{\rm{e}}\) or dust scattering \(\mathrm {\varGamma }_{\rm{d}}\), for example. These quantities are introduced into the momentum equation of an isothermal, spherically symmetric stellar wind, which leads to an effective gravitational acceleration \(g_{\rm{eff}}(r)\). Using \(g_{\rm{eff}}(r)\), we can calculate escape velocities, which are lowered by the extra force. However, the effect depends very strongly on the distance from the stellar surface at which this additional force is introduced: the farther out it acts, the smaller its impact on the overall stellar mass loss rate. Therefore, since dust grains form very close to the star (e.g., in CMSs), they have a large impact on the mass loss rate. In red supergiants (RSGs), on the other hand, these grains form much farther out and therefore dust-driven winds are generally not relevant there.
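A minimal sketch of the \(\mathrm {\varGamma }\) factor and its effect on the escape velocity, assuming pure electron scattering with \(\kappa _{\rm{e}}\approx 0.34\,\mathrm {cm^{2}\,g^{-1}}\) (a solar-like hydrogen abundance) and illustrative O-star parameters; the function names are our own:

```python
import math

# Physical constants in cgs; kappa = 0.34 cm^2/g is the electron-scattering
# opacity for a solar-like hydrogen mass fraction (an assumption).
G = 6.674e-8          # cm^3 g^-1 s^-2
C = 2.998e10          # cm s^-1
MSUN = 1.989e33       # g
RSUN = 6.957e10       # cm
LSUN = 3.828e33       # erg s^-1

def eddington_factor(lum_lsun, mass_msun, kappa=0.34):
    """Gamma_e = kappa * L / (4 pi G M c): ratio of radiative to
    gravitational acceleration for a spherically symmetric wind."""
    return kappa * lum_lsun * LSUN / (4.0 * math.pi * G * mass_msun * MSUN * C)

def effective_escape_kms(mass_msun, radius_rsun, gamma):
    """v_esc,eff = sqrt(2 G M (1 - Gamma) / R), lowered by the radiative force."""
    v2 = 2.0 * G * mass_msun * MSUN * (1.0 - gamma) / (radius_rsun * RSUN)
    return math.sqrt(v2) / 1e5   # km/s

gam = eddington_factor(lum_lsun=8e5, mass_msun=50.0)   # a luminous O star
print(f"Gamma_e ~ {gam:.2f}, v_esc,eff ~ "
      f"{effective_escape_kms(50.0, 15.0, gam):.0f} km/s")
```

For \(\mathrm {\varGamma }\rightarrow 1\) the effective gravity vanishes and material is unbound even at the surface, which is why super-Eddington phases drive extreme mass loss.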
Moreover, radiation transport and the chemistry in the wind are both essential to a full modelling of a stellar wind. It is important to state that, in general, no full theory of stellar winds is available (Decin 2020). Furthermore, the non-specialist is overwhelmed by the large number of mass loss rate prescriptions derived predominantly from observations, which differ enormously in magnitude and slope (Decin 2020). The choice of mass loss recipe has an enormous impact on the outcome of realistic N-body simulations and the dynamics of the star cluster as described in this review. As an astrophysical community, we are just at the beginning of unravelling the complexities of specific stellar winds, such as those of Wolf–Rayet (WR) stars (Sander and Vink 2020) or the impact of pulsations and variability on winds in AGB and post-AGB stars (Trabucchi et al. 2019); much work remains before a fully self-consistent theory can be envisioned.
6.1.5 Formation of compact objects and their natal masses, kicks and spins
Depending on the progenitor star core mass, a compact object such as a white dwarf (WD), a neutron star (NS) or a black hole (BH) may form. Oftentimes binary processes are involved (Willems et al. 2005; Fragos et al. 2009; Wong et al. 2012, 2014), but these are discussed in the next subsections. The following processes apply to all single stars in the relevant mass ranges. The formation of a compact object is associated with a natal remnant mass, a natal kick and a natal spin, which are all subject to significant theoretical and observational uncertainty. Nevertheless, it is important to model these as accurately as possible, because the global dynamical evolution of a collisional stellar system critically depends on them. The natal mass depends on a number of factors; here we focus only on the collapse mechanism and its associated fallback, not on the mass loss of the progenitor star, although the latter is also instrumental and has been discussed in a previous section. Traditionally, the natal masses of the WDs (and their three main types, He WDs, CO WDs and ONe WDs) and their dependence on the progenitor masses are modelled following Hurley et al. (2000) and Hurley and Shara (2003). For NSs, a maximum mass of around \(2.5\,M_{\odot }\) is assumed (Linares 2018, 2020) and the mass relationship typically follows Hurley et al. (2000), but the exact masses are unknown because of the large uncertainties, mainly in the internal structure of a NS (Lattimer and Prakash 2004; Lattimer 2012). In addition to Hurley et al. (2000), the possibility of a so-called electron-capture SNe (ECSNe) that leads to the formation of a NS (Nomoto 1984, 1987; Podsiadlowski et al. 2004; Kiel et al. 2008; Ivanova et al. 2008; Leung et al. 2020b), which has very important properties that are discussed below, has been included in many codes (Belczynski et al. 2008; Banerjee et al. 2020; Kamlah et al. 2022a).
Most attention has arguably been paid to the remnant BH masses (Eldridge and Tout 2004; Belczynski et al. 2008; Fryer et al. 2012) and a number of collapse mechanisms for certain mass ranges have been proposed. In simulations, the most popular prescriptions are the rapid or delayed core-collapse SNe models by Fryer et al. (2012) in combination with various (pulsational) pair instability SNe ((P)PISNe) stellar evolution recipes (Fryer et al. 2001; Yoshida et al. 2016; Spera and Mapelli 2017; Woosley 2017; Woosley and Heger 2021; Belczynski et al. 2016; Leung et al. 2019, 2020a). Figure 6 shows a suite of small simulations in which mcluster (Kuepper et al. 2011; Kamlah et al. 2022a; Leveque et al. 2022b) is used as a population synthesis tool with level C stellar evolution (Kamlah et al. 2022a). It shows all relevant remnant mass phases, which can be subdivided into a core-collapse SNe, PPISNe, PISNe and a direct collapse phase with increasing ZAMS mass (the last is an extension of the core-collapse SNe models, in our case the rapid SNe models by Fryer et al. 2012, to ZAMS masses above which PISNe are ineffective). Two interesting conclusions can immediately be drawn here: first, the metallicity is incredibly important for the production of high-mass BHs, because progenitor stars with high metallicities contain more metal lines for radiative wind mass loss. Secondly, the (P)PISNe prescriptions available from theory can have an enormous impact on the abundance of BHs. This might be particularly important in Population III (Pop III) star clusters, where intermediate mass black hole (IMBH) progenitor stars are postulated to have large enough masses and, crucially, also low enough metallicities from birth to evolve by (P)PISNe from interior evolution alone (e.g., Kamlah et al. 2023; Wang et al. 2022 for recent N-body simulations of these clusters; see Sect. 6.4.3 for a more general discussion of Pop III stars in the initialisation of star cluster simulations).
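For orientation only, the sequence of remnant phases can be caricatured as a piecewise mapping of ZAMS mass to remnant type. The boundary masses below are illustrative round numbers for roughly solar metallicity, not the actual values of the Fryer et al. (2012) or (P)PISNe prescriptions, which depend strongly on metallicity, winds and the collapse model:

```python
# A deliberately simplified ZAMS-mass -> remnant-type mapping.
# Boundary masses (8, 20, ~130, ~250 Msun) are illustrative only.

def remnant_type(m_zams):
    if m_zams < 8.0:
        return "WD"        # CO/ONe white dwarf (He WDs form via binaries)
    if m_zams < 20.0:
        return "NS"        # core-collapse (or electron-capture) supernova
    if m_zams < 130.0:
        return "BH"        # fallback/direct collapse; PPISNe caps the mass
    if m_zams < 250.0:
        return "none"      # PISNe disrupts the star entirely, no remnant
    return "BH"            # direct collapse above the PISN gap

print([remnant_type(m) for m in (1.0, 15.0, 40.0, 180.0, 300.0)])
```

Real prescriptions work with the CO or He core mass rather than the ZAMS mass directly, which is where the metallicity dependence seen in Fig. 6 enters.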
The magnitude of natal kicks results, broadly speaking, from an inherent asymmetry in the SNe process and is generally rather uncertain (Hansen and Phinney 1997; Hobbs et al. 2005). Natal kicks affect the dynamical stability of a binary (if one of the binary stars is forming a compact object) and are even able to disrupt a binary completely. This also implies that a large amount of gravitational binding energy in binaries may be removed from the cluster in this way, which consequently impacts the global cluster evolution. WDs are associated with low-velocity kicks of the order of \(10^0\,\mathrm {km\,s}^{-1}\) (Fellhauer et al. 2003), while neutron stars may reach kicks even above \(10^3\,\mathrm {km\,s}^{-1}\), except when they form as a result of an electron-capture SNe (ECSNe); in that case they receive kicks of order only \(10^0\,\mathrm {km\,s}^{-1}\) (Gessner and Janka 2018), meaning that they can be retained in a star cluster (simulation) (Clark 1975; Abbott et al. 2017, 2020c; Manchester et al. 2005; Kamlah et al. 2022a). BHs and NSs that do not undergo an ECSNe (or AIC or MIC, see Sect. 6.2.6) traditionally receive kicks scaled by fallback during the SNe in simulations (Belczynski et al. 2008; Fryer et al. 2012): the more fallback of stellar material onto the proto-remnant core, the lower the resulting kick. Furthermore, in simulations it is typically assumed that the asymmetry is produced by a dominant process (Banerjee et al. 2020; Banerjee 2021): convection-asymmetry-driven kicks (Scheck et al. 2004; Fryer and Young 2007; Scheck et al. 2008), collapse-asymmetry-driven kicks (Burrows and Hayes 1996; Fryer 2004; Meakin and Arnett 2006, 2007) or neutrino-driven natal kicks (Fuller et al. 2003; Fryer and Kusenko 2006; Banerjee et al. 2020; Banerjee 2021). These lead to different retention fractions of BHs in star cluster simulations (Banerjee et al. 2020), which can be seen in Fig. 7 for a sample of nbody7 simulations from Banerjee et al. (2020). It is apparent that for these settings the postulated collapse-asymmetry-driven kicks produce most (stellar-mass) BHs with kicks below the cluster \(v_{\rm{esc}}\).
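A sketch of the standard fallback-scaled kick prescription: a speed is drawn from a Maxwellian with \(\sigma = 265\,\mathrm {km\,s}^{-1}\) (the pulsar fit of Hobbs et al. 2005) and damped by the fallback fraction \(f_{\rm{fb}}\), so that direct collapse (\(f_{\rm{fb}}=1\)) yields zero kick. The function names are our own:

```python
import random, math

def maxwellian_kick_kms(sigma=265.0, rng=random):
    """Draw a kick speed from a Maxwellian with 1D dispersion sigma
    (sigma = 265 km/s is the pulsar fit of Hobbs et al. 2005)."""
    vx = rng.gauss(0.0, sigma)
    vy = rng.gauss(0.0, sigma)
    vz = rng.gauss(0.0, sigma)
    return math.sqrt(vx * vx + vy * vy + vz * vz)

def fallback_scaled_kick_kms(f_fb, sigma=265.0, rng=random):
    """Fallback scaling as in Belczynski et al. (2008) / Fryer et al. (2012):
    v_kick = (1 - f_fb) * v_maxwellian. Full fallback (f_fb = 1, direct
    collapse) gives a zero natal kick, so such BHs are easily retained."""
    return (1.0 - f_fb) * maxwellian_kick_kms(sigma, rng)

random.seed(42)                       # fixed seed for reproducibility
print(f"NS  (f_fb = 0.0): {fallback_scaled_kick_kms(0.0):6.1f} km/s")
print(f"BH  (f_fb = 0.9): {fallback_scaled_kick_kms(0.9):6.1f} km/s")
```

Comparing the drawn kick against the cluster \(v_{\rm{esc}}\) then decides retention, which is exactly the comparison underlying Fig. 7.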
The natal spins of compact objects are important in general binary evolution and can also have a significant impact on the mergers of compact objects, for example in a BH–BH merger (Morawski et al. 2018, 2019). In the following, we focus on BHs, but the same arguments can be extended to NSs and WDs; the discussion is largely taken from Kamlah et al. (2022a). In general, the spin angular momentum of the parent star does not necessarily translate directly into the natal spin angular momentum of the BH upon collapse. To quantify the spin, a dimensionless parameter \(a_{\rm{spin}}\) that accounts for the natal spin angular momentum is defined following Kerr (1963). Banerjee (2021) assumes that the magnitude of \(a_{\rm{spin}}\) for the BHs is set directly at the moment of birth, without any related mass accretion or GR coalescence processes.
In the following, we highlight three natal spin models that are available now in nbody7, nbody6++gpu, mcluster, petar and mocca, see also Kamlah et al. (2022a). The simplest model of BH natal spins, the Fuller model, produces zero natal spins (Banerjee 2021), as here the Tayler–Spruit magnetic dynamo can essentially extract all of the angular momentum of the proto-remnant core, leading to nearly non-spinning BHs (Spruit 2002; Fuller and Ma 2019; Fuller et al. 2019). The second spin model is the Geneva model (Eggenberger et al. 2008; Ekström et al. 2012; Banerjee 2021). The basis for this model is the transport of angular momentum from the core to the envelope. This is driven by convection alone, because the Geneva code does not include magnetic fields in the form of the Tayler–Spruit magnetic dynamo. This angular momentum transport is comparatively inefficient and leads to high natal spins for low- to medium-mass parent O-type stars, whereas for high-mass parent O-type stars the angular momentum of the parent star may already have been transported away in stellar winds and outflows, and thus the natal BH spins may be low. The third and last spin model is the MESA model, which also accounts for magnetically driven outflows and thus angular momentum transport (Spruit 2002; Paxton et al. 2011, 2015; Fuller et al. 2019; Banerjee 2021). This generally produces BHs with much smaller natal spins than the Geneva model described above. The Geneva and the MESA models and their metallicity dependence are shown in Fig. 8.
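The normalisation of \(a_{\rm{spin}}\) itself is simple: \(a_{\rm{spin}} = cJ/(GM^{2})\), with physical BHs requiring \(a_{\rm{spin}} < 1\). A small helper (the cap at the Thorne limit of 0.998 is our assumption) makes this explicit:

```python
# Dimensionless Kerr spin parameter a_spin = c J / (G M^2).
# The Fuller model assigns a_spin ~ 0; Geneva and MESA give mass- and
# metallicity-dependent values (Banerjee 2021). Values here are illustrative.

G = 6.674e-8      # cm^3 g^-1 s^-2
C = 2.998e10      # cm s^-1
MSUN = 1.989e33   # g

def kerr_spin(j_cgs, m_msun):
    """a_spin = c J / (G M^2), capped near the Thorne limit (assumption)."""
    a = C * j_cgs / (G * (m_msun * MSUN) ** 2)
    return min(a, 0.998)

# Maximal spin angular momentum of a 10 Msun BH:
j_max = G * (10 * MSUN) ** 2 / C
print(f"{kerr_spin(0.0, 10.0):.3f} {kerr_spin(j_max, 10.0):.3f}")
```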
6.2 Binary stellar evolution
In addition to the astrophysical processes that affect all stars in isolation, the proximity to another star or compact object (orbital period \(P_{\rm{orb}} \le 10^4\) days; Eggleton 1996), whether through frequent encounters in collisional stellar systems or through intrinsic binary evolution, can affect the individual stars or compact objects dramatically, and we need to account for this in the simulations. A population synthesis code should include all of these processes (Eggleton 2006).
6.2.1 Stellar spin and orbital changes due to mass loss or gain
If two stars are in a binary, they can transfer mass via stellar winds and therefore also transfer angular momentum, even if they are not yet undergoing Roche-lobe overflow (RLOF) (Hurley et al. 2002; Eggleton 2006; Tout 2008). If a secondary star accretes mass by passing through the wind of the primary star, it is spun up intrinsically by a fraction of the spin angular momentum that is lost by the donor star. The accretion rate is traditionally modelled following Bondi and Hoyle (1944) and depends on the wind velocity \(v_{\rm{W}}\). This quantity is observationally difficult to determine (Decin 2020) and should be proportional to the escape velocity from the stellar surface (Hurley et al. 2002).
The mass variations between companion stars also change the orbital parameters of the binary. In general, an eccentric orbit is circularised as a result of mass transfer being more effective at periastron than at apastron. Additionally, the accretor star is slowed down by the drag induced by the wind it passes through, which dissipates angular momentum away from the system. The orbital circularisation timescale \(\tau _{\rm{circ}}\) resulting from mass transfer is orders of magnitude larger than the equivalent timescale caused by tidal friction for the same binary star system (see Sect. 6.2.2).
6.2.2 Effects of tidal damping
Observations show that the rotation of close binary stars is synchronised with the orbital motion without any dynamical mass transfer having taken place (Lurie et al. 2017; Mazeh 2008; Meibom and Mathieu 2005). Therefore, there must exist a torque that transfers angular momentum between the stellar spin and the orbit in such a way that the binary approaches the observed equilibrium state that is characterised by corotation (spin–orbit synchronisation timescale \(\tau _{\rm{sync}}\)) and a circular orbit (circularisation timescale \(\tau _{\rm{circ}}\)) (Zahn 1977; Hut 1981; Hurley et al. 2002; Tout 2008). Alternatively, dissipation of energy might also lead to an accelerated inspiral of the binary stars (Hut 1980; Rasio et al. 1996; Tout 2008).
When two binary star members are detached but sufficiently close, tidal interaction between them becomes important. The mere presence of a companion star causes a tidal force that elongates a star along the line between the centres of mass, thereby resulting in tidal bulges (see e.g., Hurley et al. 2002).
When the binary component rotates uniformly with a circular orbital motion, the tidal bulges on its stellar surface are steady and the stars are in hydrostatic equilibrium. In such a scenario, we speak of equilibrium tides. However, when this condition no longer holds, the hydrostatic equilibrium is disrupted and the star undergoes forced stellar oscillations. This scenario is described by a combination of equilibrium and now also dynamical tides, the latter of which produce much smaller tidal bulges than the former and can take any orientation (Eggleton et al. 1998; Eggleton 2006; Hurley et al. 2002; Zahn 1970, 1974, 1975, 1977; Siess et al. 2013). Dissipative processes within a star cause the tides to be misaligned with the line of centres. This results in a torque that transfers angular momentum between the stellar spin and the orbit (Hurley et al. 2002). This dissipation is non-conservative and happens on relatively long timescales (Eggleton 2006).
The dissipative processes within a star depend on the stellar structure. Typically, a distinction is made between stars with appreciably deep convective envelopes and stars with radiative envelopes. The tides dissipate energy and the binary system approaches an equilibrium state that is characterised by a circular orbit and corotation (Zahn 1977; Hut 1981; Hurley et al. 2002; Tout et al. 2008).
In stars with appreciably deep convective envelopes, turbulent viscosity acting on the equilibrium tides (the same effect on dynamical tides is negligible, Zahn 1975, 1977) is the most efficient form of dissipation (Kopal 1978; Hut 1981; Hurley et al. 2002). The dissipation acts on a timescale shorter than the nuclear burning timescale \(\tau _{\rm{nuc}}\) (see Sect. 6.1.2) (Zahn 1989, 1991; Hurley et al. 2002).
In stars with radiative envelopes, radiative dissipation near the surface of the star causes an asymmetry between the internal stellar oscillations induced by tides and the tidal field itself. This leads to the torque that is necessary for the binary system to approach the equilibrium state (Zahn 1977, 1989, 1992; Hurley et al. 2002), and in sufficiently close binaries this happens on timescales shorter than the nuclear burning timescale \(\tau _{\rm{nuc}}\) (Zahn 1975). This radiative damping of the dynamical tides is the most efficient process to achieve the equilibrium state in binary stars whose members do not have an outer convective zone. However, if they do, then the aforementioned turbulent friction on the equilibrium tides provides the primary torque (Zahn 1975, 1977, 1989).
\(\tau _{\rm{sync}}\) and \(\tau _{\rm{circ}}\) in binary stars with convective envelopes are typically orders of magnitude smaller than those with radiative envelopes (Zahn 1977; Hurley et al. 2002). \(\tau _{\rm{sync}}\) and \(\tau _{\rm{circ}}\) are generally not equal except in a limiting case (Zahn 1977).
If the stars are degenerate, i.e. WDs and NSs, then the above two dissipative mechanisms cannot be applied, since the stellar structure is significantly different. WDs will have very low spins, because the progenitor AGB star has already spun down during its expansion. Furthermore, in WD–WD binaries, the orbit will already be circularised (in the absence of WD natal kicks, Fellhauer et al. 2003) due to the stellar wind mass and thus angular momentum loss. For this reason, only the synchronisation timescale \(\tau _{\rm{sync}}\) due to degenerate damping is of importance here, and it is only applicable for close systems. \(\tau _{\rm{sync}}\) in WD–WD, WD–NS and NS–NS binaries can exceed the age of the Universe (Campbell 1984).
6.2.3 Dynamical mass transfer and its stability
Apart from mass transfer through stellar winds, mass transfer can also happen via RLOF. This happens when the primary star fills its RL as a result of stellar expansion or orbital inspiral. The subsequent mass transfer then happens through the innermost Lagrange point. Typically, this process depends strongly on the mass ratio of the binary (Eggleton 1983) and happens in corotating, circularised binaries, but in some instances it can also occur in highly eccentric binaries that result from tidal capture.
During RLOF, angular momentum is transferred along with mass. The stability of the mass transfer is traditionally determined by three logarithmic derivatives of radii with respect to the mass of the RL-filling star, following Webbink (1985, 2003): the derivative describing the rate of change of the RL radius \(R_{\rm{L}}\) for conservative mass transfer (total mass and angular momentum conservation), \(\zeta _{\rm{L}}\) (Eggleton 2006); the derivative at constant entropy s and composition of each isotope \(X_{\rm{i}}\) throughout the star, \(\zeta _{\rm{ad}}\); and a third derivative that describes the rate of change of the radius of the primary with mass in equilibrium, \(\zeta _{\rm{eq}}\). The mass transfer rate \({\dot{M}}\) depends on the relative values of these derivatives (Tout 2008):

1. \(\zeta _{\rm{ad}} < \zeta _{\rm{L}}\) \(\rightarrow \) \({\dot{M}}\) increases rapidly; there is positive feedback and the mass transfer is unstable. The secondary star cannot accrete at such a high rate and expands, leading to the formation of a common envelope (CE) around the two stars (Paczynski 1976; Ivanova et al. 2013; Ivanova 2019).

2. \(\zeta _{\rm{eq}}< \zeta _{\rm{L}} < \zeta _{\rm{ad}}\) \(\rightarrow \) \({\dot{M}}\) decreases in its immediate response, but the star then expands on a thermal timescale and mass transfer continues at a thermally regulated rate.

3. \(\zeta _{\rm{L}} < \zeta _{\rm{ad}}\) and \(\zeta _{\rm{L}} < \zeta _{\rm{eq}}\) \(\rightarrow \) \({\dot{M}}\) decreases initially, because the stellar radius shrinks; RLOF resumes when the primary fills its RL again, and the mass transfer is stable.
On the basis of these exponents alone, it is possible to make a number of arguments about the evolution of Cataclysmic Variables (CVs), Algols and other exotic binary stars.
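The three cases above amount to a simple decision rule. The following sketch (our own naming; real BPS codes apply further criteria on top of this) classifies the mass-transfer regime from the three exponents:

```python
def mass_transfer_regime(zeta_ad, zeta_eq, zeta_L):
    """Classify RLOF stability from the logarithmic radius-mass exponents
    (after Webbink 1985; Tout 2008). A sketch of the three cases only."""
    if zeta_ad < zeta_L:
        # Case 1: runaway mass transfer -> common-envelope evolution.
        return "dynamically unstable -> common envelope"
    if zeta_eq < zeta_L:
        # Case 2: zeta_eq < zeta_L <= zeta_ad -> thermal readjustment.
        return "thermal-timescale mass transfer"
    # Case 3: zeta_L below both -> stable, self-regulated RLOF.
    return "nuclear-timescale / self-regulated RLOF"

print(mass_transfer_regime(zeta_ad=-0.5, zeta_eq=0.0, zeta_L=2.0))
print(mass_transfer_regime(zeta_ad=5.0,  zeta_eq=0.0, zeta_L=2.0))
print(mass_transfer_regime(zeta_ad=5.0,  zeta_eq=3.0, zeta_L=2.0))
```

Giants with deep convective envelopes typically have negative \(\zeta _{\rm{ad}}\), which is why RLOF from a giant so often ends in a common envelope.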
6.2.4 Commonenvelope evolution
The process of CE evolution (CEE) is instrumental in the formation of compact and close binaries (Paczynski 1976; Ivanova et al. 2013; Ivanova 2016, 2018, 2019). A CE is the outcome when \(\zeta _{\rm{ad}} < \zeta _{\rm{L}}\) in RLOF or when two stars collide, where one of the stars has a dense core. Generally, CEE occurs when the primary star transfers more mass on dynamical timescales than the secondary can accept. It depends strongly on the instabilities in the RLOF preceding the formation of a CE (Olejak et al. 2021). The CE expands and thus rotates more slowly than the orbit of the secondary and primary star. This causes friction, the binary spirals in and transfers orbital energy to the envelope. Either so much energy is transferred in this process that the envelope is expelled completely, leaving behind a close binary in corotation, or the binary members coalesce during the inspiral (Eggleton 2006; Hurley et al. 2002; Tout et al. 1997).
The CE is traditionally modelled with the “\(\alpha \lambda \)” energy formalism (Webbink 1984; Tout et al. 1997; Hurley et al. 2002), which assumes that energy is conserved, where \(\alpha \) is the “efficiency” of the energy reuse (\(\alpha < 1\) if no energy sources other than the binding and orbital energy are present; it can be as high as \(\alpha = 5\) otherwise, Fragos et al. 2019) and \(\lambda \) is a measure of the binding energy between the envelope and the core of the donor star, which should depend on the type of the star, its mass and its luminosity (Dewi and Tauris 2000; Claeys et al. 2014; Ivanova 2019; Olejak et al. 2021). This picture is very simplistic and does not take into account the myriad of processes that occur during CEE, which are also not yet fully understood (Ivanova et al. 2013; Ivanova and Nandez 2016; Ivanova 2019; Ivanova et al. 2020). On the other hand, the \(\alpha \lambda \) energy formalism is computationally very efficient and therefore widely used in population synthesis codes that require fast and robust stellar evolution computations (Hurley et al. 2002; Belczynski et al. 2008; Claeys et al. 2014; Eldridge et al. 2017; Mapelli 2018; Breivik et al. 2020; Kamlah et al. 2022a). Some of these codes also allow the recombination energy of hydrogen in the cool outer layers of the CE to be transferred back into the binding energy of the CE.
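The \(\alpha \lambda \) energy balance can be written down compactly. The sketch below solves Webbink's (1984) energy equation for the post-CE separation; the choices \(\alpha = 1\), \(\lambda = 0.5\) and the example system are illustrative, not calibrated values:

```python
def post_ce_separation(m_don, m_core, m_comp, a_i, r_don, alpha=1.0, lam=0.5):
    """Final separation a_f from the alpha-lambda energy balance
    (Webbink 1984):
        alpha * [G m_core m_comp/(2 a_f) - G m_don m_comp/(2 a_i)]
            = G m_don m_env / (lambda * r_don),
    with m_env = m_don - m_core. Masses in Msun, lengths in Rsun
    (G cancels). alpha and lambda defaults are illustrative only."""
    m_env = m_don - m_core
    denom = m_don * m_comp / a_i + 2.0 * m_don * m_env / (alpha * lam * r_don)
    return m_core * m_comp / denom

# A 5 Msun giant (1 Msun core, R = 100 Rsun) with a 1.4 Msun NS at 500 Rsun:
a_f = post_ce_separation(m_don=5.0, m_core=1.0, m_comp=1.4,
                         a_i=500.0, r_don=100.0)
print(f"a_f ~ {a_f:.2f} Rsun (shrinkage factor ~ {500.0 / a_f:.0f})")
```

Even for these mild parameters the orbit shrinks by a factor of a few hundred, which illustrates why CEE is the standard channel for producing compact binaries.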
Recently, a new formalism has been developed by Trani et al. (2022), which numerically integrates the binary orbit under gas friction. This means that the authors do not approximate the CE phase as an instantaneous process, unlike many binary population synthesis (BPS) codes. The new formalism, which can easily be implemented in BPS codes, provides a significant upgrade that can explain observations of post-CE binaries with non-zero eccentricities (Kruckow et al. 2021).
In a binary consisting of a NS or a BH and a giant star, after the CE has been ejected and if the binary survives this phase, the H-rich envelope of the giant star might be stripped completely. The binary then consists of a BH or a NS orbiting a naked He star. There might now be subsequent mass transfer from the naked He star to the NS or BH. This post-CE RLOF mass transfer leaves behind a so-called “ultra-stripped” He star that explodes in an ultra-stripped SNe (Tauris et al. 2013; Tauris 2015; Tauris et al. 2017). This type of SNe is significantly different from the typical core-collapse SNe, and the process of ultra-stripping leads to a significant decrease in BH–NS and BH–BH mergers and a slight increase in NS–NS mergers (Schneider et al. 2021).
6.2.5 Mergers and general relativistic merger recoil kicks
An outcome of CEE may be the coalescence of the two stars. The merger product depends on the relative compactness of the two stars and thus on the stellar evolutionary stage (Tout et al. 1997; Hurley et al. 2002). If the stars are similar in stellar type, they mix completely. If one is much more compact than the other, the more compact core sinks to the centre and the other star mixes with the envelope. An unstable Thorne–Żytkow object is created if the merger involves a NS or a BH (Thorne and Zytkow 1977). Detailed calculations of the merger outcomes following coalescence and collisions, which are less likely than coalescence but still relevant in star clusters (e.g., Rizzuto et al. 2021, 2022), depending on the initial stellar types, have been tabulated in Hurley et al. (2002). A coalescence for our purposes here means that at least one of the members is a star with a core and that the binary has a circular orbit before merging, while a collision means an actual physical collision, where none of the binary members is an evolved stellar type, although a member can also be a compact object. Generally, the mixing and the final masses of the merger products are highly uncertain and only approximations can be made according to our current knowledge (Olejak et al. 2020; Kamlah et al. 2022a). There are recent attempts to unravel the masses and compositions of merger products of massive stars with hydrodynamical codes (Costa et al. 2022; Ballone et al. 2023), which can be used to derive approximate formulae for N-body or BPS codes in the future.
The merger of compact objects is associated with a general relativistic (GR) merger recoil kick due to the asymmetry in the GW emission (see also Kamlah et al. 2022a for a brief discussion with respect to nbody6++gpu and mocca). The recoil velocity in this process depends on the mass ratio of the two compact objects and their spin vectors (Lousto et al. 2012) and can reach several hundred \(\mathrm {km\,s}^{-1}\) on average (Morawski et al. 2018, 2019), which is much larger than typical star cluster escape speeds. Figure 9 (from Morawski et al. 2018, 2019) shows a conceptual picture of the geometry of a GR merger recoil kick in a BH–BH merger and the dependence of the mean recoil velocity on the mass ratio q of the two BHs for a metallicity-dependent spin model from Belczynski et al. (2017). It can be seen that q has a huge impact on whether a GR merger recoil kick velocity exceeds the escape speed of the surrounding stellar (and gaseous) material or not. Equal-mass mergers might be retained in nuclear star clusters (Schödel et al. 2014) and extreme mass ratio mergers might theoretically even be retained in open clusters (although IMBHs will probably not form there) (Baker et al. 2007, 2008; Portegies Zwart et al. 2010; Baumgardt and Hilker 2018). For (nearly) non-spinning BHs (Fuller model, Fuller and Ma 2019), the kick velocity is smaller than for high spins. For non-aligned natal spins and small mass ratios, the asymmetry in the GW emission may produce GR merger recoils that reach thousands of \(\mathrm {km\,s}^{-1}\) (Baker et al. 2008; van Meter et al. 2010). The calculation of the mass ratio is straightforward and the spins may be calculated from, e.g., Hoffman and Loeb (2007) or Jiménez-Forteza et al. (2017).
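To give a feeling for the magnitudes, the non-spinning (mass-asymmetry) contribution to the recoil has a simple numerical-relativity fitting form (González et al. 2007). The sketch below evaluates only this term; the spin-driven contributions, which can dominate and reach the thousands of \(\mathrm {km\,s}^{-1}\) quoted above, are omitted:

```python
import math

def recoil_mass_asymmetry_kms(q, A=1.2e4, B=-0.93):
    """Mass-asymmetry recoil of a non-spinning BH-BH merger,
        v_m = A * eta^2 * sqrt(1 - 4 eta) * (1 + B eta),
    with eta = q/(1+q)^2 the symmetric mass ratio and fitting constants
    A, B from numerical relativity (Gonzalez et al. 2007). Spin terms
    (Lousto et al. 2012; van Meter et al. 2010) are deliberately omitted."""
    eta = q / (1.0 + q) ** 2
    return A * eta**2 * math.sqrt(max(0.0, 1.0 - 4.0 * eta)) * (1.0 + B * eta)

for q in (1.0, 0.5, 0.1):
    print(f"q = {q:4.2f}: v_m ~ {recoil_mass_asymmetry_kms(q):6.1f} km/s")
```

Note that this term vanishes exactly for equal masses (\(q=1\)) and peaks near \(q\approx 0.36\) at well over \(100\,\mathrm {km\,s}^{-1}\), already comparable to globular cluster escape speeds.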
Generally, the orbital angular momentum of the BH–BH binary dominates the angular momentum budget that contributes to the final spin vector of the post-merger BH and therefore, within limits, the final spin vector is mostly aligned with the orbital angular momentum vector (Banerjee 2021). In the case of physical collisions and mergers during binary–single interactions, the orbital angular momentum does not dominate the angular momentum budget and thus the BH spin can still be low. Banerjee (2021) also includes a treatment for random isotropic spin alignment of dynamically formed BHs. Additionally, Banerjee (2021) assumes the GR merger recoil kick velocity of NS–NS and BH–NS mergers (Arca Sedda 2020; Chattopadhyay et al. 2021) to be zero, but assigns merger recoil kicks to BH–BH merger products from the numerical-relativity fitting formulae of van Meter et al. (2010) (updated in Banerjee 2022). The final spin of the merger product is then evaluated in the same way as for a BH–BH merger.
The inclusion of these kicks in direct N-body simulations is still unusual (e.g., Di Carlo et al. 2019, 2020a, b, 2021; Rizzuto et al. 2021, 2022; Kamlah et al. 2022a, b all do not include these, in addition to missing PN terms), but it is worth mentioning that ArcaSedda et al. (2021) do include the GR merger recoil kicks by posterior analysis. nbody7 (Aarseth 2012; Banerjee et al. 2020; Banerjee 2021), on the other hand, does include GR merger recoil kicks based on Lousto et al. (2012) and Hoffman and Loeb (2007). In mocca, numerical relativity (NR) models (Campanelli et al. 2007; Rezzolla et al. 2008; Hughes 2009; van Meter et al. 2010; Jiménez-Forteza et al. 2017) have been used to formulate semi-analytic descriptions for mocca and N-body codes (Morawski et al. 2018, 2019; Banerjee 2021; Belczynski and Banerjee 2020; ArcaSedda et al. 2021; Banerjee 2022). Recently, GR merger recoil kicks have also been added to nbody6++gpu (Arca Sedda et al. 2023a) following Campanelli et al. (2007) and Jiménez-Forteza et al. (2017), and with this code version the whole kick process can be followed self-consistently.
6.2.6 Accretion or merger induced collapse
In sufficiently close double-degenerate COWD–COWD, ONeWD–ONeWD or COWD–ONeWD binary stars, sufficiently high and dynamically stable RLOF mass accretion of hot CO-rich matter may lead to a heating of the outer layers of the secondary, which results in the ignition of nuclear burning (Saio and Nomoto 2004). If carbon burning is ignited in the COWD envelope, the heat will be transported to the stellar core by conduction and the secondary will evolve into an ONeWD (Saio and Nomoto 1985, 1998), which will eventually collapse into a NS if the critical mass of the ONe core is surpassed (\(M_{\rm{ecs}}=1.38\,M_{\odot }\)) (Nomoto 1984, 1987; Belczynski et al. 2008). This ONeWD collapse is referred to as accretion-induced collapse (AIC). If, on the other hand, the ignition happens in the centre, then the star will undergo a SNIa explosion, which leaves no remnant behind.
Double-degenerate COWD binaries may also coalesce without undergoing dynamically stable mass transfer. During this process the less massive WD forms a thick, turbulent accretion disk and the more massive COWD accretes matter close to the Eddington limit. Here the carbon is ignited in the envelope of the secondary, so the outcome is an ONeWD and no SNIa occurs (Saio and Nomoto 2004). Again, if the ONe core mass surpasses \(M_{\rm{ecs}}\), then the ONeWD will collapse into a NS; this is known as a merger-induced collapse (MIC). Other pathways for MIC are mergers of an ONeWD with any type of WD companion, if the resulting merger mass surpasses the critical mass for NS formation (Saio and Nomoto 1998; Belczynski et al. 2008).
The distinction between AIC and MIC is made because the former may already be observed during the stable mass transfer phase or in low-mass X-ray binary stars, whereas the latter may be observed through gravitational waves with the Laser Interferometer Space Antenna (LISA, Ruiter et al. 2019).
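The outcome logic described above can be condensed into a simple classifier. The sketch below is illustrative only: it assumes a single critical mass \(M_{\rm{ecs}}=1.38\,M_{\odot }\) and ignores accretion rates, efficiency and detailed ignition physics; the function name and interface are hypothetical, not taken from any production code.

```python
# Minimal sketch of the AIC/MIC outcome logic described above.
# M_ECS and the classification are simplified; real codes (e.g., bse)
# track ignition location, accretion rate and composition in detail.

M_ECS = 1.38  # critical ONe core mass for collapse to a NS [Msun]

def accretion_outcome(wd_type, core_ignition, m_final):
    """Classify the fate of an accreting CO/ONe WD (illustrative only).

    wd_type       : 'COWD' or 'ONeWD' (the accretor)
    core_ignition : True if carbon ignites in the centre, else in the envelope
    m_final       : total mass after accretion/merger [Msun]
    """
    if wd_type == 'COWD':
        if core_ignition:
            return 'SNIa'       # thermonuclear explosion, no remnant
        wd_type = 'ONeWD'       # off-centre burning converts CO -> ONe
    # ONeWD (original or converted): collapse if above the critical mass
    if m_final > M_ECS:
        return 'NS (AIC/MIC)'
    return 'ONeWD'
```

A single threshold like this is what allows rapid population codes to process millions of binaries; the price is exactly the loss of detail discussed in Sect. 6.3.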
6.2.7 Gravitational radiation and magnetic braking
Gravitational radiation emitted from sufficiently close binary stars (\(P\le 0.6\) days) transports angular momentum away from the system and drives it to a mass transfer state that might result in coalescence (Peters and Mathews 1963; Hurley et al. 2002; Eggleton 2006). The effect this radiation has on the orbit of the binary (excluding PN terms) may be obtained by averaging the rates of energy loss and angular momentum loss over an approximately Keplerian orbit (Peters 1964; Eggleton 2006). Gravitational radiation will circularise the orbit on the same timescale as the orbit shrinks until coalescence.
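The orbit-averaged result has a simple closed form for circular orbits: the coalescence time is \(T_c = 5 c^5 a^4 / \left[ 256\, G^3 m_1 m_2 (m_1+m_2) \right]\) (Peters 1964). A minimal sketch, with SI constants rounded to four digits:

```python
# Coalescence time of a circular binary due to gravitational radiation,
# following the orbit-averaged Peters (1964) result:
#   T_c = 5 c^5 a^4 / (256 G^3 m1 m2 (m1 + m2))

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8          # speed of light [m s^-1]
MSUN = 1.989e30      # solar mass [kg]
AU = 1.496e11        # astronomical unit [m]
YR = 3.156e7         # Julian-ish year [s]

def peters_tc(m1_msun, m2_msun, a_au):
    """Circular-orbit GW coalescence time in years."""
    m1, m2 = m1_msun * MSUN, m2_msun * MSUN
    a = a_au * AU
    return 5.0 * C**5 * a**4 / (256.0 * G**3 * m1 * m2 * (m1 + m2)) / YR

# Two 1 Msun WDs at 0.01 AU merge well within a Hubble time:
print(f"T_c = {peters_tc(1.0, 1.0, 0.01):.3e} yr")
```

The steep \(a^4\) dependence is why only the closest binaries (\(P\le 0.6\) days) are driven to mass transfer by this mechanism; eccentric orbits merge faster, which the full Peters equations capture.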
In corotating and sufficiently close binary stars, magnetic braking slows down the rotation of the individual star with a convective envelope, but also drains angular momentum from the orbit of the binary star, because tidal friction between the stars may maintain corotation (Mestel 1968a, b; Mestel and Spruit 1987; Eggleton 2006). As a result, this process will force a close binary into a state of RLOF within a Hubble time. In some situations this process dominates binary evolution, such as in CVs above the orbital period gap (Schreiber et al. 2016; Zorotovic et al. 2016; Belloni et al. 2018). It is also important for the spin-period evolution (\(P{\dot{P}}\)) of pulsars (e.g., Kiel and Hurley 2006, 2009). Both processes outlined above are non-conservative.
6.3 Combining stellar evolution with collisional N-body codes
There are two main methods that stand out in practice for integrating the complicated stellar evolution into N-body codes: interpolation between tables, and approximation of stellar evolution data by interpolation (fitting) formulae as functions of mass, age and metallicity. Both have unique advantages and disadvantages that have been known for a long time (Eggleton 1996). As it stands now, the two approaches are not in competition, but rather complement one another (Hurley et al. 2000).
6.3.1 Interpolation between tables
This method calculates stellar parameters from detailed evolutionary tracks (e.g., Pols et al. 1998). These evolutionary tracks are derived from 1D stellar evolution codes and come in tabular format. They are necessarily rather large, and therefore this approach has historically been limited by the memory available on hardware (Eggleton 1996; Hurley et al. 2000; Agrawal et al. 2020). Unlike with fitting formulae, stellar parameters are calculated in real time from the given set of detailed tracks; hence, one just needs to change the input stellar tracks to generate a new set of stellar parameters. It has been claimed that this approach is today the most flexible, robust and efficient way of combining detailed stellar evolution with stellar dynamics (Agrawal et al. 2020).
Maeder and Meynet (1989); Schaller et al. (1992); Alongi et al. (1993); Bressan et al. (1993); Fagotto et al. (1994a, 1994b); Claret (1995); Claret and Gimenez (1995) constructed such tables, which were later expanded upon and refined by Pols et al. (1998). In the aforementioned works, one hurdle is the convective mixing or overshooting length \(l_{\rm{OV}}\), which describes the average distance by which convective cells push into stable regions (radiative regions by the Schwarzschild condition, Biermann 1932; Gabriel et al. 2014) beyond the convective boundary (Schaller et al. 1992; Pols et al. 1998; Joyce and Chaboyer 2018). This treatment was modified by Pols et al. (1998) and replaced with a “\(\nabla \) prescription”, which is based on the stability criterion itself (\(\delta _{\rm{OV}}=0.12\) was found to best reproduce observations, Schroder et al. 1997; Pols et al. 1997, 1998; Hurley et al. 2000). This new criterion avoids physical discontinuities when classical convective cores disappear. Further quantities that influence the calibration of the luminosity L of a stellar evolution model are the nuclear reaction rates and the core helium abundance Y. Another source of large uncertainty was left largely unchanged by Pols et al. (1998), who described it as the “Achilles heel” of stellar evolution codes: the mixing length \(\alpha _{\rm{MLT}}\), which is derived from mixing-length theory (Böhm-Vitense 1958) to describe heat transport in the convective regions of stars (Joyce and Chaboyer 2018; Pasetto et al. 2018) (see also Sect. 6.1). Pols et al. (1998) set \(\alpha _{\rm{MLT}}=2.0\) (based on the Solar model), but not all stars with convective regions exhibit identical convective properties, and \(\alpha _{\rm{MLT}}\) can show large variations from star to star (Joyce and Chaboyer 2018).
Even today, methods for stellar evolution by interpolation between tables are being developed with increasing success as hardware memory capabilities improve:

the sevn code (Spera et al. 2015; Spera and Mapelli 2017; Spera et al. 2019), which has been completed for binary evolution (Sicilia et al. 2022) and has been used extensively to study the evolution of gravitational-wave source progenitor stars. Additionally, it is now available as sevn2.0, which is integrated in petar (Wang et al. 2020a).

the combine code (Kruckow et al. 2018), which also has binary evolution implemented (Kruckow 2020; Kruckow et al. 2021) and has likewise been used extensively to study the evolution of gravitational-wave source progenitor stars.

the metisse code (Agrawal et al. 2020), which is based on tracks from the stars (Eggleton 1971, 1972, 1973; Eggleton et al. 1973; Pols et al. 1995, 1997; Schroder et al. 1997), mesa (Paxton et al. 2011, 2013, 2015, 2016, 2018, 2019) and bec (Yoon et al. 2006, 2012; Brott et al. 2011; Köhler et al. 2015; Szécsi et al. 2015, 2022) codes. Unlike sevn or combine, this code does not yet account for binary stars. In general, metisse is another promising candidate for combining full stellar dynamics with detailed stellar evolution.
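The core idea shared by these codes can be sketched minimally with numpy: a stellar parameter tabulated on a (mass, fractional age) grid is queried by bilinear interpolation. The grid values below are made up purely for illustration; real tables are far denser, metallicity-dependent, and interpolated with much more care (phase matching, equivalent evolutionary points).

```python
import numpy as np

# Illustrative (made-up) grid: log10 radius [Rsun] tabulated over
# ZAMS mass [Msun] x fractional main-sequence age. Real tables, such as
# those consumed by metisse or sevn, cover all evolutionary phases.
masses = np.array([1.0, 2.0, 4.0, 8.0])
fages  = np.array([0.0, 0.5, 1.0])
logR   = np.array([[0.00, 0.04, 0.10],
                   [0.20, 0.26, 0.35],
                   [0.40, 0.48, 0.60],
                   [0.62, 0.72, 0.88]])

def interp_logR(m, t):
    """Bilinear interpolation of log10(R) at mass m and fractional age t."""
    i = np.clip(np.searchsorted(masses, m) - 1, 0, len(masses) - 2)
    j = np.clip(np.searchsorted(fages, t) - 1, 0, len(fages) - 2)
    u = (m - masses[i]) / (masses[i + 1] - masses[i])
    v = (t - fages[j]) / (fages[j + 1] - fages[j])
    return ((1 - u) * (1 - v) * logR[i, j]     + u * (1 - v) * logR[i + 1, j]
          + (1 - u) * v       * logR[i, j + 1] + u * v       * logR[i + 1, j + 1])
```

Changing the stellar physics then amounts to swapping the input tables, which is exactly the flexibility argued for by Agrawal et al. (2020).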
6.3.2 Interpolation/fitting formulae
A first attempt to incorporate simple stellar evolution fitting formulae in a direct N-body code was made by Aarseth (1996) on the basis of Eggleton et al. (1989). Later, a successor to Eggleton et al. (1989) was created using the method developed by Pols et al. (1998), based on the original Cambridge stars stellar evolution program by Eggleton (1971, 1972, 1973); Eggleton et al. (1973); Pols et al. (1995, 1997); Schroder et al. (1997). The result is the famous set of single stellar evolution (sse) fitting formulae, which for the first time included metallicity as a free parameter (Hurley et al. 2000; Hurley 2008b; Hurley et al. 2013a). Figure 10 shows the complex discretization of stellar phases and the possible evolutionary pathways between them in the sse package. The figure has been included because this fundamental structure still remains in many stellar evolution production codes today (see below).
In general, such fitting formulae take much more care, and thus time, to set up than the method of interpolating between tables (Church et al. 2009), because the movement of a star in the Hertzsprung–Russell diagram (HRD) is highly non-uniform and erratic. Furthermore, they are also less adaptable to changes in stellar tracks, for example when these need to be adjusted due to some new development in astrophysics. On the other hand, sse provides rapid, robust and analytic formulae, which can easily be modified and integrated into an N-body code along the lines of Aarseth (1996), and which give stellar luminosity, radius and core mass as functions of mass, metallicity and age for all stellar evolutionary phases (Hurley et al. 2000; Railton et al. 2014).
However, these formulae necessarily also discard a lot of crucial stellar evolution information (Hurley 2008b). For example, stellar mixing depends on several timescales and internal stellar structure parameters (Olejak et al. 2020) and so these cannot be modelled directly by the fitting formulae. Only the outcomes can be parameterised for stellar types of the individual stars along the lines of Hurley et al. (2002).
Despite these fundamental complications in stellar evolution modelling, which persist to this day (see, e.g., Joyce and Chaboyer 2018; Pasetto et al. 2018; Tang and Joyce 2021; Agrawal et al. 2022) and which translate directly into the continuous and differentiable fitting formulae (polynomial forms from least-squares fitting, Hurley et al. 2000), the sse code has, for the first time, provided a method to evolve stars rapidly and accurately (within 5% of detailed stellar evolution models over all phases of the evolution, Hurley et al. 2000) in N-body simulations throughout all evolutionary phases. It covers ZAMS masses of (0.1–100) \(M_{\odot }\) (the models from Pols et al. 1998 only reach \(50\,M_{\odot }\), but the sse formulae can be safely extrapolated to \(100\,M_{\odot }\), Hurley 2008b), takes into account all of the astrophysical processes outlined in Sect. 6.1, and offers a metallicity range from 0.0001 to 0.03, with \(Z_{\odot }\simeq 0.02\) being Solar metallicity, as an input parameter.
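To illustrate the spirit of such closed-form formulae (though vastly simpler than the actual sse polynomials), a rough textbook scaling for the main-sequence lifetime can be written directly as a function of mass. The exponent and normalisation below are the standard order-of-magnitude values, not the Hurley et al. (2000) coefficients:

```python
def t_ms_gyr(m_msun):
    """Rough main-sequence lifetime [Gyr], t ~ 10 Gyr * (M/Msun)^-2.5.

    Order-of-magnitude scaling only: the sse formulae instead use
    metallicity-dependent rational polynomials fitted to detailed tracks.
    """
    return 10.0 * m_msun ** (-2.5)

for m in (0.5, 1.0, 2.0, 10.0):
    print(f"M = {m:5.1f} Msun -> t_MS ~ {t_ms_gyr(m):9.3f} Gyr")
```

The appeal for N-body work is evident: evaluating such an expression costs nanoseconds per star per timestep, whereas a detailed 1D structure model costs minutes to hours.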
However, for a complete picture we also need to model the binary evolution processes outlined in Sect. 6.2. For the fitting formulae this is provided by the binary stellar evolution (bse) code (Hurley et al. 2002; Hurley 2008a; Hurley et al. 2013b), which is an add-on to the sse package. This has been a huge success story, and many full dynamical cluster simulations have utilised sse & bse to evolve the stars, e.g., Wang et al. (2016); Askar et al. (2017c); Di Carlo et al. (2019, 2020a, 2020b, 2021); Rizzuto et al. (2021, 2022); Kamlah et al. (2022a). The sse & bse codes have been the foundation for many other BPS codes:

compas (Team COMPAS et al. 2022)

mse (Hamers and Safarzadeh 2020)

mobse (Giacobbo and Mapelli 2018, 2019; Mapelli et al. 2020) and a related code called asps, also used in Li et al. (2023),

cosmic (Breivik et al. 2020) and its implementations in cmc (Kremer et al. 2019; Rodriguez et al. 2022)

bselevelc (Kamlah et al. 2022a) and its implementation in mcluster (Kuepper et al. 2011; Kamlah et al. 2022a).
The fitting formulae from the sse code are also implemented in the BPS code binary_c (Izzard et al. 2004, 2006, 2009).
New fitting formulae have recently been constructed by fitting to 1D hoshi stellar evolution models (Takahashi et al. 2016, 2018, 2019; Yoshida et al. 2019) of extremely massive, low-metallicity (EMP; Pop III) stars (Tanikawa et al. 2020, 2021a, b; Hijikawa et al. 2021). These are constructed such that they can be implemented into any of the bse variants mentioned above in a straightforward fashion, and therefore also into stellar dynamics codes such as nbody6++gpu (Wang et al. 2015).
6.4 Initial conditions for star cluster simulations
6.4.1 Global star cluster initial conditions
Defining appropriate global initial conditions for star cluster simulations is highly non-trivial, as the formation of a star cluster and of the stars within it depends on a large number of parameters that are very uncertain due to a lack of better theoretical understanding and/or observations. In the following, we give an overview of the most important parameters in this context for N-body simulations of star clusters.
6.4.2 Initial 6D phase space distribution
In order to initialise an N-body star cluster simulation, we need to distribute the N particles in 6D phase space. A statistical approach as described in Sect. 3.1 is taken to realize a star cluster, which follows the probability density distribution \(f(\vec {r},\vec {v},t)\). The full 6D distribution function is rarely known explicitly; under the assumption of a steady state, such that f does not depend on time, the Jeans theorem (Binney and Tremaine 2008) allows us to express f as a function of integrals of motion of a single star moving in the gravitational potential \(\varPhi (r)\). For now we assume spherical symmetry, so we have, for example, the specific energy and specific angular momentum: \(f=f(\vec {r},\vec {v}) = f(E,L)\), where \(E = v^2/2 + \varPhi (r)\) and \( L = \vert \vec {r}\times \vec {v}\vert \) (cf. Sect. 4.2). Deviations from spherical symmetry can be taken into account as well; see for example Sect. 6.4.6 for the importance of initial bulk rotation.
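These integrals of motion are cheap to evaluate per star. A minimal numpy sketch, assuming a Plummer potential purely as an example (N-body units \(G=M=1\), scale radius \(a=1\)):

```python
import numpy as np

def plummer_phi(r, G=1.0, M=1.0, a=1.0):
    """Plummer potential Phi(r) = -GM / sqrt(r^2 + a^2) (N-body units)."""
    return -G * M / np.sqrt(r * r + a * a)

def integrals_of_motion(pos, vel):
    """Specific energy E = v^2/2 + Phi(r) and |L| = |r x v| for one star."""
    r = np.linalg.norm(pos)
    E = 0.5 * np.dot(vel, vel) + plummer_phi(r)
    L = np.linalg.norm(np.cross(pos, vel))
    return E, L
```

A bound star has \(E < 0\) in this convention; checking this per particle is a standard sanity test on freshly generated initial conditions.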
Examples of such self-consistent distribution functions are given by the stellar polytropes

\[ f(E) = F_n \left( -E \right) ^{n-3/2} \quad (E < 0), \qquad f(E) = 0 \quad (E \ge 0), \]

where n is an index and \(F_n\) a normalization factor to make sure that f(E) is properly normalized as a probability density function. For \(n=5\) this results in the famous Plummer model (Plummer 1915), and for \(n=7/4\) another famous solution, a density cusp (Bahcall and Wolf 1976; Frank and Rees 1976) around supermassive black holes, is found (Preto et al. 2004). In Binney and Tremaine (2008) these models are also called stellar polytropes, because their density distribution is the same as that of a gaseous polytrope (Chandrasekhar 1939) of the same index n. Analytical density distributions exist for \(n=0\), \(n=1\), and \(n=5\) (Kippenhahn et al. 2012), but for stellar systems only \(n=5\) is physically useful.
The theory of gaseous spheres also knows the isothermal solution, which is obtained for \(n=\infty \); in stellar dynamics the equivalent is the isothermal sphere

\[ f(E) = F_\infty \exp \left( -E/\sigma ^2 \right) . \]

Here \(\sigma ^2\) is the r.m.s. stellar velocity dispersion, analogous to the temperature in a gaseous sphere. These models have a problem: their radial extent is unlimited (Plummer and isothermal) or their mass is even infinite (isothermal). Therefore, and since real star clusters are often subject to a tidal cutoff due to the host galaxy, a cutoff radius is introduced (connected to a cutoff energy). If at the cutoff radius the gravitational potential of an isolated star cluster is \(\varPhi _0\), then a relative potential \(\varPsi \) and a relative energy \(\varepsilon \) are defined by

\[ \varPsi = -\varPhi + \varPhi _0 , \qquad \varepsilon = -E + \varPhi _0 = \varPsi - v^2/2 . \]

In that way the cluster extends from the center out to \(\varepsilon = 0\) (and \(\varPsi =0\)), and the lowered Plummer and lowered isothermal distributions are defined as follows:

\[ f(\varepsilon ) = f_5 \, \varepsilon ^{7/2} \quad (\varepsilon > 0), \]

\[ f(\varepsilon ) = f_\infty \left[ \exp \left( \varepsilon /\sigma ^2 \right) - 1 \right] \quad (\varepsilon > 0), \qquad (51) \]

with \(f(\varepsilon ) = 0\) for \(\varepsilon \le 0\). Again \(f_5\) and \(f_\infty \) have to be properly chosen normalization factors. The model of Eq. (51) is the widely used King model (King 1966) (“lowered isothermal”). Note that some papers and books also prefer to change the sign of E (or \(\varepsilon \)), such that bound objects have a positive value. We do not follow this here to avoid confusion.
Even in spherical symmetry the distribution function could be 2D, since we have E and L as constants of motion; this corresponds to the possibility that in spherical star clusters the radial and tangential velocity dispersions at any given radius r may still differ. A more general approach for the distribution function in the isothermal case is

\[ f(\varepsilon , L) = f_\infty \exp \left( -\frac{L^2}{2 r_a^2 \sigma ^2} \right) \left[ \exp \left( \varepsilon /\sigma ^2 \right) - 1 \right] \quad (\varepsilon > 0), \]

with anisotropy radius \(r_a\), which is also known as the Michie (Michie 1963) distribution. Numerical solutions of the Fokker–Planck equation in 2D are based on such 2D distribution functions, and Michie models could serve as potential initial models (see Sect. 3.3). It is interesting to note that the well-known 1D King distribution is actually based on the older, even more general (since 2D) Michie model; Ivan King himself gives an account of this in King (1981).
1D King models are extensively used for initialising star cluster simulations (e.g., Rizzuto et al. 2021, 2022; Kamlah et al. 2022a). While the Plummer model needs two parameters (mass M and scale radius \(r_{\rm{pl}}\), with half-mass radius \(r_{\rm{h}}\simeq 1.305 r_{\rm{pl}}\)), the King model needs three parameters (mass M, a scale radius and the dimensionless central potential \(W_0\)). For intermediate King models (\(2.5 \le W_0 \le 7.5\)), the Plummer models are very similar (\(r_{\rm{h,Plummer}}=0.366 r_{\rm{h,King}}\)) (King 2008). We note that Gieles and Zocchi (2015) developed a new family of lowered isothermal models called the limepy models. Based on the 1D models of King, a generalization in 2D for rotating star clusters is now being used, often described as rotating King models (see Sect. 6.4.6 and citations there).
Note that a numerical star cluster cannot be constructed directly from f(E) or f(E, L), because E depends explicitly on \(\vec{v}\) and on r through the gravitational potential (Eq. (30)). Therefore, in order to be self-consistent, the gravitational potential has to be determined by a velocity-space integration over the distribution function, and then Poisson's equation has to be solved to obtain the stellar density as a function of radius (see, e.g., the textbook by Binney and Tremaine 2008 for examples). In a final step a random procedure is used to obtain stellar positions and velocities. If the density or gravitational potential is an analytically known function (as in the case of a Plummer model), the entire self-consistent model can be constructed in one loop using random numbers (Aarseth et al. 1974).
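For the Plummer case, this one-loop construction can be sketched following the classic recipe (inverse transform for the radius, von Neumann rejection for the speed), as widely reproduced from Aarseth et al. (1974), in N-body units (\(G = M = 1\), a = scale radius):

```python
import numpy as np

def sample_plummer(n, a=1.0, rng=None):
    """Sample positions and velocities from a Plummer sphere (G = M = 1).

    Radius via inverse transform of the cumulative mass profile
    M(<r)/M = r^3 / (r^2 + a^2)^(3/2); speed via von Neumann rejection
    against g(q) = q^2 (1 - q^2)^(7/2), q = v / v_esc; isotropic directions.
    """
    rng = np.random.default_rng(rng)

    def iso_dirs(k):
        # isotropic unit vectors on the sphere
        u = rng.uniform(-1.0, 1.0, k)            # cos(theta)
        phi = rng.uniform(0.0, 2.0 * np.pi, k)
        s = np.sqrt(1.0 - u * u)
        return np.column_stack((s * np.cos(phi), s * np.sin(phi), u))

    # radius: invert the cumulative mass fraction x = M(<r)/M
    x = rng.uniform(0.0, 1.0, n)
    r = a / np.sqrt(x ** (-2.0 / 3.0) - 1.0)

    # speed as a fraction q of the local escape speed v_esc = sqrt(-2 Phi)
    v_esc = np.sqrt(2.0) * (r * r + a * a) ** (-0.25)
    q = np.empty(n)
    for i in range(n):
        while True:
            q1, q2 = rng.uniform(0.0, 1.0), rng.uniform(0.0, 0.1)
            if q2 < q1 * q1 * (1.0 - q1 * q1) ** 3.5:   # g_max < 0.1
                q[i] = q1
                break

    pos = r[:, None] * iso_dirs(n)
    vel = (q * v_esc)[:, None] * iso_dirs(n)
    return pos, vel
```

The rejection envelope 0.1 works because \(g(q)\) peaks at \(q^2 = 2/9\) with a maximum just above 0.09; a quick check of the sample is that the median radius approaches the half-mass radius \(1.305\,a\) and the kinetic energy approaches its virial value.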
6.4.3 Initial stellar mass function
In order to initialise star cluster simulations, we need to draw the ZAMS masses from an assumed distribution. For this purpose, we use an initial stellar mass function (IMF), a “Hilfskonstrukt” (auxiliary construct, Kroupa et al. 2013; Kroupa and Jerabkova 2018): a mathematical formulation of an idealised stellar population that has formed in a single star formation event. We will discuss in Sect. 6.4.5 that this is not the case in nature. An excellent review on the IMF and its construction has been provided by Hopkins (