A buyer's guide to the Hubble Constant

Since the expansion of the universe was first established by Edwin Hubble and Georges Lemaître about a century ago, the Hubble constant H0, which measures its present rate, has been of great interest to astronomers. Besides being interesting in its own right, few properties of the universe can be deduced without it. In the last decade, a significant gap has emerged between different methods of measuring it, some anchored in the nearby universe, others at cosmological distances. The SH0ES team has found $H_0 = 73.2 \pm 1.3$ km sec$^{-1}$ Mpc$^{-1}$ locally, whereas the value found for the early universe by the Planck Collaboration is $H_0 = 67.4 \pm 0.5$ km sec$^{-1}$ Mpc$^{-1}$ from measurements of the cosmic microwave background. Is this gap a sign that the well-established $\Lambda$CDM cosmological model is somehow incomplete? Or are there unknown systematics? And, more practically, how should humble astronomers choose between competing claims if they need to assume a value for a given purpose? In this article, we review the recent results and the changes to the cosmological model that could be needed to accommodate them all. For astronomers in a hurry, we provide a buyer's guide to the results and make recommendations.


Introduction
In 1917, Einstein was the first to combine the assumptions of homogeneity and isotropy with his new theory of general relativity, and produce a solution for the universe as a whole (Einstein 1917). Einstein imposed his belief in a static universe, and famously introduced the cosmological constant Λ to make his equations compatible with this assumption. Friedmann (1922) derived a solution for an expanding (or contracting) universe, but his work remained largely unknown until after his death. Establishing expansion as an observational fact was very challenging with the technology of the day. Georges Lemaître published the first estimate of the expansion rate in Lemaître (1927). Two years later, Edwin Hubble combined his observations of stellar magnitudes using the Mount Wilson telescope with Shapley's, Humason's and Slipher's redshifts z to reach a similar result (Hubble 1929). Hubble's constant, as it later became known, is the constant of proportionality between recession speed v and distance d:

v = H_0 d .    (1)

[Fig. 1 caption: Hubble's velocity-distance diagram from Hubble (1929). Despite the typo in the labelling of the y axis, which should read km s−1 and not km, it is still easy to read off H0 ≈ 500 km s−1 Mpc−1. Hubble and Humason were using the largest telescope in the world at the time. M31 is recognisable as the lowest black dot in the bottom left; Humason had determined its velocity as 220 km s−1 towards the Milky Way (the modern value is 110 km s−1).]

Surprisingly perhaps, it was not until 1958 that the first recognisably modern value H0 ≈ 75 km s−1 Mpc−1 was published (Sandage 1958). Sandage made several corrections to Hubble's earlier results. Firstly, he noted that the population of Cepheid variable stars was not as homogeneous as first thought. This added both scatter and bias to distance estimates, compounded by the low numbers of Cepheids observed. Secondly, and more seriously, Hubble had mistaken (far brighter) HII regions for bright stars, and therefore his estimates of the distances to galaxies were too low.
Fast forward to today, and these historical developments seem to echo some present-day debates. H0 is measured in a number of ways, which produce inconsistent values. In particular, in 2018 the Planck collaboration used the Cosmic Microwave Background (CMB) temperature and polarization anisotropies and the ΛCDM cosmological model to find H0 = 67.4 ± 0.5 km s−1 Mpc−1 (Planck Collaboration et al. 2020), whereas in 2021 the SH0ES collaboration used Cepheids and supernovae to find H0 = 73.2 ± 1.3 km s−1 Mpc−1 (here, and for the rest of the review, we quote 68% confidence limits). We show the main modern results in Fig. 2.
Why should this disagreement matter? Firstly, it may be a sign that the standard cosmological model is incomplete and new physics is required. All H0 results place some reliance on a background cosmology (for example to obtain peculiar velocity adjustments from a model), but the sensitivity is large when comparing results projected over large distances. Secondly, other cosmological parameters such as matter densities, curvature and large scale structure are often degenerate with H0 in observational data; knowing H0 more accurately helps resolve their values too. Thirdly, knowing H0 accurately improves astrophysics, as distances in the universe are ∝ H0^−1. Its dimensionless cousin h = H0/100 km s−1 Mpc−1 is ubiquitous in formulae.
We organise our review as follows. The first section defines H0 and distances; it may be skipped by a reader familiar with cosmological models. In Sect. 2, we show how H0 is calculated from observational data, and the type of problems that might generally arise. We also briefly discuss the use of Bayesian methods as a tool to discriminate between competing observations and models. In Sect. 3, we discuss ways in which H0 has been recently estimated. There is a rich literature on the subject, and it is difficult to cover all papers. Our approach is to cite for each topic a seminal paper, and the most significant recent developments. In Sect. 4, we discuss the possibility that the measurements are correct, and it is our understanding of cosmology that is wrong. In Sect. 5 we conclude, and in the spirit of our guide for consumers, we provide our buyer's advice and recommendations. We aim to be impartial, and the views expressed here are solely our own. Busy readers could review Sects. 2, 4 and 5, and dip into Sect. 3 if more detail is needed.
For the remainder of this review, we adopt c = 1.

Linking H0 to H(z)
Hubble's "constant" (Eq. 1) is not fixed when we observe beyond our local cosmological neighbourhood; which is to say, it is not fixed in time. Therefore, we write H0 ≡ H(t0), where t0 is the present day and in general H = H(t). The expansion slowed down in the past, but the universe is now accelerating (since z ∼ 0.6) and has been dark energy dominated since z ∼ 0.3. The change in H(t) may be written using the phenomenological deceleration parameter q(t) as

dH/dt = −H^2(t) (1 + q(t)) .    (2)

[Fig. 2 caption: We have intentionally selected a limited number of results in order to show those that are as independent from each other as possible, in the sense that they use different photometric data, distance calibrators and so on. More comprehensive versions of this plot can be found for example in Di Valentino et al. (2021). Comments: 1) CCHP and Yuan et al. share a common distance to the LMC as a calibrator. 2) TDCOSMO is a re-analysis of almost the same data as H0LiCOW, but with changes to the galactic potentials. 3) BOSS and DES share a prior constraint on baryon densities from BBN. 4) The results of Blakeslee et al. use new SBF observations, whereas Khetan et al. use archival SBF distances to calibrate SN Ia. The code used to generate this figure is publicly available at https://github.com/Pablo-Lemos/plot1d]

In our local universe q(t) is approximately constant, and hence some authors adopt q0 ≡ q(t0) as a constant (see for example Riess et al. 2016 or Freedman et al. 2019). As redshift is a monotonically decreasing function of time, we can write H = H(z) and approximate

H(z) ≈ H0 [1 + (1 + q0) z] .    (3)

As q0 ≈ −0.55, light travelling to us from more than 100 Mpc away will have been emitted when the Hubble constant was more than 1% different from its current value. The purpose of phrasing cosmography in this way is to avoid explicit assumptions about the matter or energy content of the universe, and it is of course possible to use more general parametrisations to expand H(z).
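As a quick check of the claim above, the first-order expansion of H(z) can be evaluated at the redshift of an object 100 Mpc away. This is a minimal sketch, not from the review; H0 = 70 km s−1 Mpc−1 is an assumed illustrative value.

```python
# Sketch (assumptions ours): check that light from ~100 Mpc was emitted
# when H differed from H0 by more than 1%, using H(z) ~ H0 [1 + (1+q0) z].
C_KM_S = 299792.458  # speed of light in km/s

def hubble_first_order(z, H0=70.0, q0=-0.55):
    """First-order cosmographic expansion of H(z)."""
    return H0 * (1.0 + (1.0 + q0) * z)

d_mpc = 100.0
z = 70.0 * d_mpc / C_KM_S            # low-z approximation z ~ H0 d / c
frac = hubble_first_order(z) / 70.0 - 1.0
print(f"z ~ {z:.4f}, H(z)/H0 - 1 ~ {100*frac:.2f}%")
```

With q0 = −0.55 this gives a shift of about 1.05%, consistent with the "more than 1%" statement in the text.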
While this approximation is reasonable for z ≲ 0.1, if we wish to go further, or to link phenomenological parameters to physical quantities, we need a cosmological model.
Cosmological models usually assume the universe is homogeneous and isotropic on large scales, and has a space-time metric. Under those assumptions, the Friedmann-Robertson-Walker (FRW) metric is

ds^2 = dt^2 − a^2(t) [ dr^2/(1 − kr^2) + r^2 (dθ^2 + sin^2 θ dφ^2) ] ,    (4)

where a(t) is a scale factor defining how physical distances evolve with cosmological time t, and (r, θ, φ) are comoving polar coordinates centred on ourselves. k is a curvature parameter, which here has units of inverse area as we wish to set a(t0) = 1 (an alternative convention is to set k = 0, ±1, but it is not in general possible to do both). Results from Planck (Planck Collaboration et al. 2020) for the CMB in isolation show a preference for a mildly closed universe where k > 0, and allowing k to vary from zero lowers the CMB-derived H0 to 63.6 +2.1 −2.3 km s−1 Mpc−1. However, most other observational evidence points to a flat universe (for example galaxy survey data or gravitational lensing of the CMB; see for example Efstathiou and Gratton 2020). Our discussion would not be materially affected by including spatial curvature, and for this review we will assume a flat universe where k = 0. We return to the point in Sect. 4. The Hubble parameter is then defined as

H ≡ ȧ/a ,    (5)

where ȧ ≡ da/dt. The scale factor and redshift are related by

a(t) = 1 / (1 + z_cos(t)) .    (6)
By z_cos, we mean the redshift that would be seen if both the observer and emitter were stationary in comoving coordinates, which we take to be the frame in which the CMB has no dipole. Peculiar velocity is then the velocity with respect to this frame. We know the solar peculiar velocity relative to the CMB from our observed dipole, but to estimate z_cos from the observed redshift z_obs we also need the peculiar velocity of the emitter. For example, a 10% error in a peculiar velocity of 300 km s−1 for a galaxy lying 50 Mpc away would result in a ∼1% error in H0, were we to calculate it solely from that galaxy. For that reason, astronomers seek large numbers of objects distributed across the sky, deep into the "Hubble flow", meaning their peculiar velocities are small compared to the Hubble expansion and are assumed to average out. From here, we write z = z_cos unless indicated otherwise. The ΛCDM cosmological model is defined as

H^2(z)/H0^2 = Ω_k,0 (1+z)^2 + Ω_m,0 (1+z)^3 + Ω_r,0 (1+z)^4 + Ω_Λ,0 .    (7)
This can be generalised with an equation of state parameter p = wρ for dark energy, where p is pressure; the above corresponds to the cosmological constant w = −1. The fictitious curvature density is Ω_k = 1 − Ω_m − Ω_r − Ω_Λ, and we have assumed spatial flatness Ω_k = 0. The present-day density fractions for matter, radiation and dark energy Ω_i,0 ≡ Ω_i(z = 0) are defined in terms of the physical densities ρ_i and the critical density as

Ω_i,0 = ρ_i,0 / ρ_crit,0 ,  with  ρ_crit,0 = 3H0^2/8πG .    (8)

It is straightforward to expand H(a(z)) as a Taylor series in z and obtain Eq. (3) to first order, where

q0 = Ω_m,0/2 + Ω_r,0 − Ω_Λ,0 .    (9)

So, in a narrow sense, a cosmological model is a function to derive H(z) from H0 and Ω_i, or vice versa (provided nothing converts energy from one type to another). This is how H0 is calculated from H(z) at z ∼ 1100 for the CMB. Hence, one way to reconcile the Hubble tension is to change the function H(z, Ω_i) by adding new energy components or novel interactions, and we return to this later.
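The flat-ΛCDM expansion history just described can be coded in a few lines. The following is a minimal sketch under assumed parameter values (H0, Ωm, Ωr here are illustrative choices, not the review's recommendation):

```python
import math

# Sketch of the flat-LambdaCDM expansion history, in the Eq. (7) sense:
# H(z)^2 / H0^2 = Om (1+z)^3 + Or (1+z)^4 + OL, with flatness fixing OL.
def hubble(z, H0=67.4, Om=0.315, Or=9e-5):
    OL = 1.0 - Om - Or                       # dark energy fraction from flatness
    E2 = Om * (1 + z)**3 + Or * (1 + z)**4 + OL
    return H0 * math.sqrt(E2)                # km/s/Mpc

print(hubble(0.0))      # recovers H0 by construction
print(hubble(1100.0))   # expansion rate near recombination, over 10^6 km/s/Mpc
```

This is how, schematically, a measurement of H(z) at z ∼ 1100 is converted into a statement about H0 once the Ω_i are fixed.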

Distances from angles and fluxes
Luminosity distance is defined to recover the standard inverse square law ratio between the bolometric luminosity L and flux F that would hold in a flat, static universe:

F = L / (4π d_L^2) .    (10)

In a homogeneous and isotropic universe we find

d_L(z) = (1 + z) / H0 ∫_0^z dz′ / E(z′) ,    (11)

where it is conventional to make the dependence on H0 explicit by setting H(z) ≡ H0 E(z). Substituting in Eq. (7), for flat ΛCDM we have

E(z) = √( Ω_Λ,0 + Ω_m,0 (1+z)^3 + Ω_r,0 (1+z)^4 ) .    (12)
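The luminosity distance integral has no closed form in general, but is easy to evaluate numerically. A hedged sketch, with Ωm = 0.3 and H0 = 70 km s−1 Mpc−1 as assumed illustrative values:

```python
import math

# Sketch: d_L = (1+z) * (c/H0) * \int_0^z dz'/E(z') for flat LambdaCDM,
# evaluated with a simple midpoint rule. Parameter values are illustrative.
C = 299792.458  # km/s

def E(z, Om=0.3, Or=0.0):
    return math.sqrt(Om * (1 + z)**3 + Or * (1 + z)**4 + (1.0 - Om - Or))

def d_lum(z, H0=70.0, n=10_000):
    dz = z / n
    integral = sum(dz / E((i + 0.5) * dz) for i in range(n))  # midpoint rule
    return (1 + z) * (C / H0) * integral                      # Mpc

print(f"d_L(z=0.5) ~ {d_lum(0.5):.0f} Mpc")
```

For these parameters d_L(0.5) comes out at roughly 2800 Mpc; a library such as astropy.cosmology provides the same quantity in production code.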
Angular diameter distance is the ratio between the physical size l of a distant object and the small angle δθ it subtends on the sky:

d_A = l / δθ .    (13)

The Etherington relation (Etherington 1933) is a useful way to convert between the two 2:

d_L = (1 + z_obs)^2 d_A .    (14)
To link ΛCDM to our local universe, we expand Eq. (7) to second order in z:

H(z) ≈ H0 [1 + (1 + q0) z + (j0 − q0^2) z^2/2] ,
d_L(z) ≈ (z/H0) [1 + (1 − q0) z/2 − (1 − q0 − 3q0^2 + j0) z^2/6] ,    (15)

where the jerk parameter j0 is defined as

j0 = (1/a) (d^3 a/dt^3) (1/H^3) |_(t=t0) .    (16)

Eqns. (15) are now a reasonable approximation to Eq. (7) and Eq. (11) out to z ∼ 0.6.
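The accuracy claim for the cosmographic expansion can be spot-checked numerically. This sketch compares the standard second-order d_L expansion against the exact flat-ΛCDM integral at z = 0.6; the parameter values (Ωm = 0.3, H0 = 70) are assumed for illustration, and the coefficients are the standard cosmography ones rather than a transcription of the review's Eq. (15).

```python
import math

# Compare second-order cosmographic d_L to the exact flat-LCDM integral.
C, H0, Om = 299792.458, 70.0, 0.3
q0 = 0.5 * Om - (1.0 - Om)   # q0 = Om/2 - OL = -0.55 for Om = 0.3
j0 = 1.0                     # jerk is exactly 1 in flat LambdaCDM

def d_exact(z, n=20_000):
    E = lambda x: math.sqrt(Om * (1 + x)**3 + 1.0 - Om)
    dz = z / n
    return (1 + z) * (C / H0) * sum(dz / E((i + 0.5) * dz) for i in range(n))

def d_cosmography(z):
    return (C * z / H0) * (1 + 0.5 * (1 - q0) * z
                           - (1 - q0 - 3 * q0**2 + j0) * z**2 / 6)

z = 0.6
print(d_exact(z), d_cosmography(z))   # agree to better than ~1% at z = 0.6
```

The sub-percent agreement at z = 0.6 is consistent with the range of validity quoted in the text.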
Hubble's law v = H0 d is implicit in Eq. (11). Expanding the integral as a Taylor series in z, we see that v = cz, as used by Hubble and Lemaître (re-introducing c here for clarity), is only valid as a low-z approximation.

2 The relation uses z_obs rather than z_cos as it is due to the total time dilation and redshift between the source and observer. See Davis et al. (2019) for an informative review of redshifts in cosmology. We also recommend Hogg (1999) for a thorough discussion of distance definitions.
1.3 The age of the universe

The age of the universe t0 is inversely proportional to H0, with some dependence on other cosmological parameters. From the definition H(t) = ȧ/a we can write

t0 = ∫_0^1 da / (a H(a)) .    (17)
In the special case of a flat, radiation-free universe, where we set Ω_m,0 + Ω_Λ,0 = 1, it can be written analytically as

H0 t0 = (2 / 3√Ω_Λ,0) sinh^−1 ( √(Ω_Λ,0/Ω_m,0) ) .    (18)

For a flat universe with Ω_m,0 = 0.3 this gives H0 t0 ≈ 0.96. Historically (in the 1980s), it was believed that the universe was Einstein-de Sitter (Ω_m,0 = 1.0 and Ω_Λ,0 = 0.0), which yields the product H0 t0 = 2/3. A low value of H0 ≈ 50 km s−1 Mpc−1 was then required to ensure that the universe is older than the oldest globular clusters. The existence of Λ > 0 makes the universe naturally older. The Planck estimate is t0 = 13.797 ± 0.023 Gyr within ΛCDM (Table 2 for Planck alone, 68% CL; Planck Collaboration et al. 2020). This value for the age of the universe is comfortably larger than the age of any known astronomical object.
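The quoted numbers are easy to verify. A minimal sketch using the standard analytic form of H0·t0 for a flat, radiation-free universe (assumed here to match the relation in the text):

```python
import math

# H0*t0 = (2 / (3 sqrt(OL))) * asinh( sqrt(OL/Om) ) for a flat,
# radiation-free universe with Om + OL = 1.
def H0_t0(Om):
    OL = 1.0 - Om
    return (2.0 / (3.0 * math.sqrt(OL))) * math.asinh(math.sqrt(OL / Om))

print(f"Om=0.3: H0*t0 = {H0_t0(0.3):.3f}")   # ~0.96, as quoted in the text
print(f"Einstein-de Sitter limit: 2/3 = {2/3:.3f}")
```

Note the Einstein-de Sitter value 2/3 is the Ωm → 1 limit of the formula (the formula itself divides by √Ω_Λ, so we print the limit directly).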
2 Inference of H0

Standard candles and the nearby distance ladder
A standard candle is any population of stars or events which:
- can be reliably identified;
- have the same characteristics wherever they are seen;
- have an established luminosity law specifying the absolute magnitude in terms of observable quantities.
Although the luminosity law is determined empirically by calibration (except in the case of gravitational waves; see Sect. 2.3), there is an advantage if there is also a solid understanding of the underlying physics of the standard candle, as in that case the calibration can be cross-checked against a theoretically-derived one. The nearby distance ladder starts with a choice of standard candle, and a calibration of the absolute magnitude M, for example using parallax distances and apparent magnitudes m. m is defined by

m_X = −2.5 log10 (F_X / F_X,0) ,    (19)

where F_X is the energy flux per unit area per second across the wavelength range of the band X, and F_X,0 is the fixed reference flux for the magnitude system being used (for example the Vega system). M is the apparent magnitude the object would have if it were at a distance of 10 parsecs. Distance is conveniently quoted as the distance modulus

μ = m − M ,    (20)

and then the luminosity distance (10) becomes

d_L = 10^(μ/5 + 1) pc ,    (21)

which can then be substituted into Eq. (11), (15) or similar relations to obtain H0. A given standard candle seen over a range of distances is termed a "rung", and is in turn used as a calibrator for the next rung. For example, the SH0ES team calibrate Classical Cepheids (CC) using parallaxes, a maser distance to NGC4258, and detached eclipsing binaries (DEBs) in the LMC as their first rung. Their next rung is Type Ia supernovae (SN Ia), calibrated using the 19 galaxies in which both Cepheids and SN Ia have been observed. We illustrate their construction in Fig. 3. The quality of the standard candle depends on a number of considerations. Are there enough with good distances to accurately calibrate the absolute magnitude? Can we clearly identify them at large distances? Can they be observed out to a sufficient distance to reach the next rung? Are the objects observed at large distances of the same type as the local ones used as calibrators? How do we correct for extinction, reddening, metallicity effects and crowded starfields? Which band has the most reliable magnitudes?
If data has been combined from different telescopes, have the right adjustments been made to convert photometry? How is magnitude to be defined for variable stars? Each rung depends on the previous one, and errors will propagate up the ladder.
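The distance-modulus bookkeeping used on each rung is simple to express in code. A sketch, with example numbers that are purely illustrative (not SH0ES values):

```python
# Distance modulus mu = m - M, and d_L = 10^(mu/5 + 1) parsecs.
def distance_modulus_to_mpc(m, M):
    mu = m - M
    d_pc = 10 ** (mu / 5.0 + 1.0)
    return d_pc / 1e6               # parsecs -> Mpc

# Illustrative example: a standard candle with M = -19.3 (a typical SN Ia
# absolute magnitude) observed at m = 15.7 gives mu = 35, i.e. 100 Mpc.
d = distance_modulus_to_mpc(15.7, -19.3)
print(d)   # -> 100.0
```

Substituting such a d_L into Eq. (11) with a measured redshift then yields an H0 estimate from that object.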
A more subtle issue is that the conversion of observational data to H0 is non-linear. As the expectation E[d^n] ≠ E[d]^n in general, scatter in observational data will introduce systematic bias. Bias can also be introduced by sample selection unless care is taken, as E[d | d_1 < d < d_2] ≠ E[d]: we average our selected sample but wish to know the expectation of the unselected one. Selection may be overt (for example by cutting outliers) or due to our telescope seeing only up to m < m_0 (known as Malmquist bias; Malmquist 1922). Both of these can be (and usually are) corrected for, but the corrections require some assumptions and a careful analysis of the data and reduction pipeline.
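Both effects are easy to demonstrate with a toy Monte Carlo. The distributions and cut below are our own illustrative assumptions, not a model of any real survey:

```python
import random

# Toy Monte Carlo of the two biases above: nonlinearity (E[d^3] != E[d]^3)
# and selection (E[d | d < d_cut] != E[d]).
random.seed(42)
true_d = 100.0
samples = [random.gauss(true_d, 20.0) for _ in range(100_000)]  # noisy distances

mean_d = sum(samples) / len(samples)
mean_d3 = sum(d**3 for d in samples) / len(samples)
print(mean_d3 / mean_d**3)   # > 1: scatter inflates E[d^3] relative to E[d]^3

selected = [d for d in samples if d < 110.0]   # a crude magnitude-like cut
print(sum(selected) / len(selected))           # < 100: selection biases the mean
```

For Gaussian scatter the cubic bias is roughly 1 + 3σ²/μ² (about 12% here), and the truncated mean falls some 10 Mpc below the true value: exactly the kind of shifts a pipeline must simulate and correct.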

Standard rulers and the inverse distance ladder
A standard ruler is a feature on the sky of a known physical size l, which enables us to calculate the angular diameter distance d_A defined in Eq. (13) from its angular size. Parallax is an obvious example; the sizes of the orbits of masers and detached eclipsing binaries can also be determined from their positions, light curves and spectroscopy. In the early universe, acoustic pressure waves in the primordial plasma of charged particles and radiation set a physical size called the sound horizon r_s, which is how far they travel from the initial seeds that generate them. Thomson scattering by charged particles was necessary to propagate the waves, and hence they are frozen in at recombination. The sound horizon is then imprinted on the CMB as peaks in the power spectrum of temperature fluctuations, and in the later spatial distribution of galaxies (known as baryon acoustic oscillations, or BAO for short). The sound horizon may be calculated in a cosmological model, and depends both on the expansion rate H(z) (the waves are carried along by expanding spacetime) and on the matter-energy content of the early universe (which determines the sound speed).
The inverse distance ladder, as its name suggests, works in the opposite direction to the nearby distance ladder but on the same principle. It can use the sound horizon for a starting d_A, a background cosmology, and the Etherington relation (14) to calibrate the luminosity distances d_L of standard candles. For example, Lemos et al. (2019) calibrate BAO at z ≲ 1 and SN Ia at z < 1 using the CMB sound horizon. They replace the standard ΛCDM formula for H(z) with a parametric form, and extrapolate H(z) to today to find H0 = 68.42 ± 0.88 km s−1 Mpc−1. This shows that the discrepancy between the CMB value of Planck Collaboration et al. (2020) and Riess et al. (2019) is not caused by assuming the late universe is ΛCDM.
We can express the difference between early- and late-universe H0 in terms of ruler size or luminosity differences. Planck implies r_s = 147.27 ± 0.31 Mpc in ΛCDM; it would need to be ∼10 Mpc lower (Knox and Millea 2020) to bring consistency with Riess et al. (2019). Alternatively, Eqs. (21) and (11) show that Cepheids or SN Ia would need to be ∼0.2 mag brighter than thought to bring consistency with Planck. In summary, we see the two ways of constructing ladders, propagated toward each other by the "guard rails" of SN Ia, do not meet! Hence, the H0 tension is sometimes characterised as "early" versus "late", from which follows the question "Is ΛCDM right?". This may be premature: in fact, few late-universe results in isolation are fully inconsistent with early-universe ones, as we discuss later.

No-ladder H0
We have seen that distance ladders require calibration, whether they are nearby or inverse. However, there are some self-calibrating observations from which H0 may be calculated directly.
One example is the CMB. The detailed shape of the power spectrum determines the other parameters of ΛCDM, and the value of H0 derived from it is best understood as just one part of the simultaneous inference of all cosmological parameters. A further example is maser emission systems, which occur in the nuclei of certain galaxies and are bright enough to be seen at cosmological distances. The emission spots appear to follow Keplerian orbits, and so, with some disk modelling, the size of the orbit and hence the angular diameter distance can be deduced directly.
Gravitational wave luminosities follow from general relativity: the masses of the merging compact objects, and hence the luminosity, can be obtained from the shape of the waveform. That is to say, there is no need for an empirical calibration of their intrinsic luminosities; instead, the observational challenge is to determine the redshift of the source. Often referred to as standard sirens rather than standard candles, a gravitational wave event whose source galaxy has been identified (by locating the optical counterpart) is referred to as a "bright siren"; otherwise it is called a "dark siren". Most gravitational wave events are dark, but progress can be made statistically with them given sufficient numbers.
For gravitational lenses, general relativity links the time delay caused by the lensing of the background source (a combination of both the longer path and time dilation) and the mass distribution of the lensing galaxies. The absolute time delay is not known, but if a rapidly varying source like a quasar can be seen in multiple images, the relative time delay between images allows the angular diameter distance of the lens to be calculated. In this case, the challenge is to obtain enough constraints on the mass distribution of both the lensing galaxies and the general concentration of matter along the line of sight, using for example the velocity dispersions and surface brightness of the lensing galaxies, and imaging data.
2.4 What could cause the tension?

By using the word "tension", cosmologists mean that the discrepancy in measurements of H0 is large compared to the reported errors. This means that if the values and errors are correct, the discrepancy is very unlikely to be the result of chance.
Measuring the Hubble constant has always pushed the limits of the available telescopes of the day, and as a consequence observers have to be very careful to avoid bias in their derivations of H0, and to account for all errors. Hubble believed he had one population of Cepheids, whereas we know today he had two, and he had also confused nebulae with bright stars. Alternatively, many researchers interpret the tension in the spirit of the precession of the perihelion of Mercury: something is wrong with the (Newtonian) model and a new one is needed (general relativity). We can categorise explanations as follows:

- Observational bias. An observational bias is an error in mean photometry that would be expected to increase with magnitude or distance. To give some examples, consider first crowding. Distant stars become harder to resolve from their neighbours, so their photometry will be progressively more blended with other stars the more distant they are. Blending increases the apparent magnitude and changes the colour (see for example the discussion in Sect. 4.2 of Javanmardi et al. 2021). For very faint stars, the response of the detector may be non-linear, and needs to be corrected. Another issue is combining observations between ground and space telescopes, as in general fainter stars will be observed from space, but nearby ones more cheaply from the ground. Aside from atmospheric extinction, each instrument has different passbands, detector response and resolution, meaning the magnitudes of the same star observed by each telescope will differ. Photometry must be transformed to a common system (see for example Eqs. 10-12 in Riess et al. 2016), and if this is not done (or done incorrectly) some bias will likely be introduced. Any parameters derived from photometry, such as photometric redshifts, would inherit the same propensity to bias.

- Astrophysical bias. An astrophysical bias occurs when the properties of the object being studied are not fully resolved, and those properties differ with distance. For example, consider Cepheids in the LMC and in SN Ia hosts. The LMC is close, so Cepheids with a full range of periods can be seen, whereas for distant galaxies only the brighter Cepheids with longer periods are seen. Additionally, as the LMC is metal-poor compared to a typical spiral galaxy, the Cepheids there may be expected to be relatively metal-poor compared to those in SN Ia hosts. Hence, any curvature in the Leavitt law, or a mis-calibrated metallicity dependency, could bias distances (see for example Freedman and Madore 2010). A second example is the step-like link between SN Ia magnitudes and properties of the host galaxy (Smith et al. (2020) and references therein). This could indicate there are two distinct populations of SN Ia. If the SN Ia in the 19 galaxies hosting both Cepheids and SN Ia were mostly of one type, whereas the rest were a blend of both types, H0 would be biased by the calibration of SN Ia.

- Statistical bias. The main causes of statistical bias will be selection effects and scatter, as we discussed in the introduction. Statistical biases can be corrected by running random simulated observations through the same selection and analysis pipeline, but the simulations will themselves need some physical parameter choices, perhaps determined from previous surveys, or fixed in advance at "reasonable" levels. In Bayesian analysis, residual dependence on the choice of prior is a feature of sparse observations. Statistical bias correction is subtle and difficult, as we see later in the sections on parallax and SN Ia.

- Physics of ΛCDM. Before invoking new physics, could the explanation be found within ΛCDM? Cosmological formulae such as Eq. (7) are derived for a homogeneous, isotropic universe. Could corrections allowing for inhomogeneities be large enough to explain the tension? A specific example is the "Hubble Bubble" or local void proposal (for a recent example, see Shanks et al. 2019), in which we are by chance located in an under-dense region, and our local H0 differs from the "universal" one. Additionally, inhomogeneities mean we must correct redshifts for peculiar velocities, and further, the propagation of light through overdense or underdense regions might bias our analysis (Kaiser and Peacock 2016).

- New physics. If the expansion history of the universe were different from ΛCDM, the H0 inferred from the CMB or BAO might be brought into alignment with the local value. Alternatively, analysing a non-ΛCDM universe using ΛCDM formulae may have confused us. For example, an extra particle species would increase the pre-recombination expansion speed and so reduce the size of the sound horizon: sound waves have less time to propagate before they are frozen in. To keep the same observed angular size of the CMB temperature fluctuations, the value of H0 calculated from the CMB must then increase (see Eqs. 11, 13, 14). However, as we see later, ΛCDM makes many other successful predictions, such as the CMB spectrum itself and primordial element abundances, and is not to be lightly tampered with.

Is the tension significant?
The tension is often quoted as the number of standard deviations "mσ". In particle physics, the meaning is clear: there are millions of collisions and probability is frequentist. There is no need to work in a Bayesian framework, with some prior assumption of a parameter to update with new data. The law of large numbers drives distributions to Gaussian normal shape, and we can translate mσ to a probability of occurrence by chance. At the 5σ level, it is unanimously agreed new physics has been detected. None of the above applies in cosmology! Nevertheless, that the tension is significant should not be in doubt: see Fig. 2. But we are looking for more from our data analysis, and there are three ways in which Bayesian statistics are helpful, which we now briefly survey.
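As a baseline before the Bayesian tools, the naive "mσ" figure for the two headline results quoted in this review (SH0ES 73.2 ± 1.3 and Planck 67.4 ± 0.5) can be computed directly, assuming independent Gaussian errors; this is a back-of-envelope number, not a full analysis.

```python
import math

# Naive tension in standard deviations between two Gaussian measurements,
# assuming independent errors that add in quadrature.
def n_sigma(mu1, s1, mu2, s2):
    return abs(mu1 - mu2) / math.sqrt(s1**2 + s2**2)

print(f"{n_sigma(73.2, 1.3, 67.4, 0.5):.1f} sigma")   # ~4.2 sigma
```

The limitations of reading this number at face value in a cosmological setting are exactly the point of the discussion above.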
Bayes' theorem states that the posterior probability distribution is

P(θ | D, M) = P(D | θ, M) P(θ | M) / P(D | M) ,    (22)

where D is the data and θ are the parameters of the model M of interest. For example, M might represent ΛCDM with its associated parameters including H0, and M_1 some extension of it with additional parameters. P(θ | M) is any prior belief in the parameters of the model, and P(D | θ, M) is called the likelihood, typically specified by the team analysing the data. The denominator P(D | M) is called the evidence. An extended cosmological model will have a smaller evidence if there exist large regions of the parameter space with low likelihood, even if it agrees better with the data. Bayesian evidence thus naturally embodies Ockham's razor: a simpler model will have a larger evidence, unless the extended model fits the data significantly better. Secondly, Bayesian statistics can also help in re-analysis, in the hope that the data itself may reveal issues. A Bayesian hierarchical analysis was used by Feeney et al. (2018) to test relaxing the distributional and outlier assumptions embedded in the χ² fit used in SH0ES analyses. Cardona et al. (2017) have re-analysed SH0ES data using "hyperparameters", which are weightings of datasets proposed as a measure of credibility by Lahav et al. (2000). An agnostic prior for the weights is set, and the hyperparameters are marginalised over. Both results are consistent with SH0ES. Bernal and Peacock (2018) extend the hyperparameter method by adding a free parameter shifting the mean of each dataset to account for unknown systematics, which they dub "BACCUS". Using this to combine Planck, SH0ES and other datasets produces a compromise. As shown in Fig. 4, the posterior sits between the two with much larger error bars, which is perhaps unsurprising given the agnosticism of the method.
Such types of analysis need to be taken with a grain of salt: data that is in tension should not be combined, and BACCUS is not a substitute for a critical analysis of why the tension has happened. However, if one demands a method to merge disparate results in a Bayesian framework, BACCUS is a way to achieve that.
Thirdly, we may want to know how valid a combination of data sets is. If two posteriors for the same parameter barely overlap, a Bayesian analysis will seriously mislead, with error bars that are too small, as shown by the yellow line in Fig. 4.

[Fig. 4 caption: An illustration of BACCUS applied to Planck, H0LiCOW and SH0ES data, whose posteriors are shown as thin grey lines. Conventional (orange) is the standard Bayesian combination assuming equal weighting; w/Rescaling (blue) is equivalent to hyperparameters; w/Shifts (brown) adds an unknown systematic error offset to each dataset; and w/Shifts+Rescaling (green) combines hyperparameters and systematic offsets. Figure from Bernal and Peacock (2018).]

We seek a statistic that is symmetric, (reasonably) independent of
prior assumptions and models, and straightforward to calculate and interpret. The R statistic compares the evidence of dataset D_A in light of knowing D_B to that of D_A alone, but it depends on the prior and so is not usually comparable between different papers. Handley and Lemos (2019) define a new statistic called "suspiciousness" as

log S = log R − log I ,    (23)

where I is the information ratio

log I = D_A + D_B − D_AB ,    (24)

which quantifies the information gain between prior and posterior. D_A is the Kullback-Leibler divergence between posterior and prior,

D_A = ∫ P(θ | D_A) log [ P(θ | D_A) / P(θ) ] dθ ,    (25)

and D_B and D_AB are defined similarly by replacing A → B, AB respectively. This is independent of the prior and (being an integral) of the choice of parameters. We can interpret log S ≪ 0 as the two data sets being in tension: loosely speaking, the evidence from combining them does not exceed the information from considering them separately. Therefore, suspiciousness fits the criteria of simplicity and interpretability we outlined above. Finally, even in light of the tension, we cannot dodge the question posed in our introduction: "All this debate is interesting, but which value for H0 should I use, and is it valid to use that in ΛCDM?". We suspend judgement until after we have surveyed the data and potential new models.
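For Gaussian posteriors on a single shared parameter, the suspiciousness reduces (as shown by Handley and Lemos 2019) to a simple closed form; the sketch below applies that Gaussian reduction to the two headline H0 values, purely as an illustration.

```python
# Gaussian reduction of suspiciousness for one shared parameter:
# log S = 1/2 - chi^2/2, with chi^2 = (mu_A - mu_B)^2 / (s_A^2 + s_B^2).
def log_suspiciousness(mu_a, s_a, mu_b, s_b):
    chi2 = (mu_a - mu_b)**2 / (s_a**2 + s_b**2)
    return 0.5 - 0.5 * chi2

# SH0ES vs Planck H0 values quoted in this review:
print(log_suspiciousness(73.2, 1.3, 67.4, 0.5))   # strongly negative -> tension
```

The strongly negative value recovers, in this simple limit, the statement that the two data sets are in tension.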

Parallax
Parallax is both the oldest astrometric technique, and the easiest to understand. Hold out your thumb at arm's length, relative to some fixed point on the wall, and alternately close one eye and then the other. Relative to the fixed wall, the apparent position of your thumb will change, and this is how our depth perception works: the smaller the change in position, the longer your arm must be. The same principle works with stellar distance, where now our "binoculars" correspond to the Earth's position on opposite sides of its orbit. The change in a fixed star's apparent position, 2ϖ = θ 1 − θ 2 arcseconds, due to the change in the Earth's position by 2 a.u. over 6 months, leads to the distance d = 1/ϖ parsecs 5 . Our measurement of position must therefore be very precise. Although the nearest star, Proxima Centauri, has a parallax of approximately 0.77 arcseconds, modern measurements target a remarkable 10 µas, which is the size of a thumbnail on the Moon as seen from Earth.
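The conversion itself is a one-liner; a quick sketch using the Proxima Centauri parallax quoted above (the function name is ours):

```python
def parallax_to_distance_pc(parallax_arcsec):
    """Distance in parsecs from annual parallax in arcseconds: d = 1/parallax."""
    return 1.0 / parallax_arcsec

print(parallax_to_distance_pc(0.77))    # Proxima Centauri: ~1.3 pc
print(parallax_to_distance_pc(10e-6))   # a 10 uas parallax: ~100,000 pc
```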
Modern parallax measurements began with the satellite Hipparcos (the name alludes to the ancient Greek astronomer Hipparchus, who measured the distance to the Moon). Launched by the European Space Agency in 1989, it measured the parallaxes of 100,000 stars to an accuracy of up to 0.5 milliarcseconds (mas), the fixed background frame now being extragalactic sources such as quasars. Although undoubtedly impressive, at 1,000 light years an error of 0.5 mas would still be a distance error of 15%. A further drawback is that Cepheids, an important part of the distance ladder we discuss shortly, are relatively rare stars, and only a handful are located in our neighbourhood of the Milky Way.
Moving on to the present day, our two best current sources of parallax are Gaia and the Fine Guidance Sensor/Wide Field Cameras (FGS/WFC3) aboard the Hubble Space Telescope (HST). Gaia was launched in 2013 and the mission goal is to measure over a billion stars (including 9,000 Cepheids and half a million reference quasars), both in our galaxy and satellites like the LMC. The mission-expectation precision is 7µas at m = 12, rising to 26µas at m = 20 (Gilmore 2018). Gaia does this by slowly scanning the sky with two telescopes set at relative angles of 106.5°, to make a one-dimensional measurement of the time and position of each star that slowly drifts across the CCD. Up to 70 measurements will be made for each star, which allows the additional calculation of proper motions, and even small changes in position caused by the gravitational tug of planets orbiting the star. The HST operates on similar principles in "spatial scanning" mode; although it cannot survey like Gaia, when focused on nearby Cepheids its errors appear competitive (Riess et al. 2018a).
Gaia's high precision depends on a very stable mechanical structure of the spacecraft. A variation in the angle between the two fields (which might be caused by thermal expansion) could cause spatial variations in apparent parallax, or, if synchronous with the scan period, even a fixed systematic offset. Indeed, such a variation has been inferred from the interim Data Release 2 (DR2) (Gaia Collaboration et al. 2018). The average of the quasar parallaxes in it is −29 µas (negative parallaxes can happen when the position measurement error is larger than the parallax), and there were indications this "zero point" may vary with stellar colour, luminosity and position on the sky (Arenou et al. 2018). Riess et al. (2018b) compared HST Cepheid parallaxes to Gaia, simultaneously solving for the Gaia zero point and the Cepheid calibration. They found a difference between them of 46 ± 15 µas, with Gaia parallaxes again appearing too low. As the typical parallax of a Milky Way Cepheid is 400 µas, this is very material to distance estimates. Breuval et al. (2020) creatively replaced DR2 Cepheid parallaxes with those of resolved bound binary companions (where available), which being dimmer are closer to the ideal magnitude range for Gaia.
During the preparation of this review, Gaia Early Data Release 3 was made available, and has already been used by Riess et al. (2021) to revisit Gaia parallaxes. Gaia EDR3 is not intended to be the final word, but Cepheid zero points do now seem to be reduced to below 10 µas. A calibration of Cepheids using 75 Gaia parallaxes alone gives H 0 = 73.0 ± 1.4 km s −1 Mpc −1 , slightly lower than, but consistent with, their previous result based on HST parallaxes.
Before moving on to discuss alternative calibrations, we will first digress into a discussion of parallax bias. The potential for bias occurs in any astrophysical observation, and is often the subject of lengthy analysis in H 0 papers. As it is most easily understood in the context of parallax, it is helpful to discuss it here.
Parallax bias is usually referred to as Lutz-Kelker-Hanson (LKH) bias (Lutz and Kelker 1973;Hanson 1979). This is a summation of three quite different effects: non-linearity of the desired variable (distance) with respect to the observed variable (parallax), population bias (have we observed the object we intended to, or did we confuse it with something else?), and selection bias (our surveys are normally magnitude limited, so we will only "select" objects for which m < m 0 ).
To explain non-linear bias, imagine we have a symmetric error in our parallax measurement, such as might be caused by an instrumental point spread function. To be concrete, suppose the likelihood of measuring 150 µas is the same as measuring 50 µas when 100 µas is the true value. If we average the distance, we will obtain d̄ = (6,666.67 + 20,000)/2 = 13,333 parsecs, which is biased with respect to the true distance of 10,000 parsecs. In mathematical terms, the bias arises because the reciprocal is convex, so that ⟨1/ϖ⟩ > 1/⟨ϖ⟩ (Jensen's inequality).

Population bias arises in parallax when we consider a broad survey of stars at a given distance r from our position. Assuming a roughly constant spatial density of similar stars, there are more stars in the shell (r, r + ∆r) than there are in the shell (r − ∆r, r) for some finite ∆r. Hence, there are more (further) stars whose parallaxes may be over-estimated to place them at r than (closer) stars whose parallaxes may be under-estimated. Taken to extremes, there are huge numbers of stars with effective parallaxes of zero, waiting for their small but finite chance to "crowd in" to a given measured parallax. This will bias observed parallaxes too low. Note that if we were certain of our identification of the star (as we would be for a Milky Way Cepheid close to us), we need not consider population bias: it would stand out from the crowd.

Fig. 5: In the figure on the left, there is a constant spatial density of stars. Then, the region (ϖ, ϖ − δϖ) has a greater number of stars that can scatter to the observed parallax than the region (ϖ, ϖ + δϖ), and parallaxes are biased too low. Conversely, in the figure on the right, the stellar density drops sharply beyond ϖ due to either the edge of the population or magnitude limitations. More stars are available to scatter out than in, and parallaxes are biased too high.
Conversely, suppose we were observing close to our magnitude limit. Now the opposite bias occurs: we cannot see the further stars, so they cannot crowd in. But the same number of closer stars are available to crowd out, so our observed parallaxes will now be biased too high. This is the well-known Malmquist bias (Malmquist 1922), and it is a major issue for surveys, as naturally we will try to see as far as we can! Dealing with bias then involves modelling the instrumental error, the selection function, the population scatter and so forth. In modern surveys, this is normally done by constructing simulated catalogs with known physical parameters and some assumptions, and putting those catalogs through the same analysis pipeline as the real data to see what biases emerge. For example, Riess et al. (2018a) compute distance modulus biases of between 0.03 and 0.12 mag for MW Cepheids using a model for galactic stellar distributions. If working in a Bayesian framework, a posterior distribution for the distance may be derived using the method of Schönrich et al. (2019). An alternative is to work directly with the parallaxes, instead converting Cepheid magnitudes to predicted parallaxes, as done by Riess et al. (2018b). As the magnitudes are measured considerably more accurately than the parallaxes, bias corrections to the magnitude-to-parallax conversion are not necessary. To check the predictions of LKH bias, Oudmaijer et al. (1998) compared ground-based to Hipparcos parallaxes, finding a bias towards brighter magnitudes up to relative error ≈ 30%, and towards dimmer magnitudes for larger errors where Malmquist bias is dominant, as one would expect from the discussion above.
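The non-linearity bias is easy to reproduce numerically. A minimal Monte Carlo sketch (true parallax 100 µas, a symmetric Gaussian error of 15 µas; all values are illustrative), showing that naively averaging 1/ϖ overestimates the distance:

```python
import random

random.seed(1)
true_parallax = 100e-6   # arcseconds; the true distance is 10,000 pc
sigma = 15e-6            # symmetric Gaussian measurement error

# Average the naively inverted distances over many simulated measurements.
n = 200_000
total = 0.0
for _ in range(n):
    measured = random.gauss(true_parallax, sigma)
    total += 1.0 / measured
mean_distance = total / n

print(f"true distance:      {1 / true_parallax:.0f} pc")
print(f"mean of 1/parallax: {mean_distance:.0f} pc")  # biased a few percent high
```

Because 1/ϖ is convex, E[1/ϖ] ≈ (1/ϖ̄)(1 + σ²/ϖ̄²) to leading order, a ∼ 2% bias for these numbers.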
We end our digression on biases here and move on to discuss other geometrical distances.

Detached eclipsing binaries
Imagine we had a star for which we knew the surface radiant flux density J, and the physical size R. The stellar luminosity would straightforwardly follow as L = 4πR 2 J, and we would have a standard candle.
Cool, stable, helium-burning giants (that is, red clump stars) are sufficiently bright to be seen outside the Milky Way. For these "late-type" stars, just such an empirical relationship can be established for the surface brightness S V , defined as

S V = V + 5 log 10 φ ,

where φ is the stellar angular diameter, and V the visual band magnitude. This relationship has been calibrated by angular diameters obtained from optical interferometry of nearby stars, and takes the linear form

S V = α (V − K) + β ,

where V − K is the colour difference between magnitudes in the V and 2.2 µm near-infrared K band (Pietrzyński et al. 2019). The scatter is just 0.018 mag.
Rearranging the definition of surface brightness and using φ = 2R/d, it then follows that

d [pc] ≈ 9.30 R [R ⊙ ] / φ [mas] ,

where R and φ have been converted to solar radii and milli-arcseconds respectively. The pre-factor is purely geometric. But how can we know the radius of distant stars? Eclipsing binaries allow just that. If the stars are well enough separated to spectroscopically resolve each one 6 , but close enough to eclipse each other, we can obtain their individual surface brightnesses, colours, radial velocities, eclipse depths and shapes, and the orbital period. With these data, the radius (and other physical parameters such as mass, eccentricity and inclination of the orbital plane) of each star can be solved for. Such an alignment of the stars is of course rare, but sufficient numbers do exist! By painstakingly observing 20 systems in the LMC over more than 20 years (covering many eclipses), Pietrzyński et al. (2019) determine the stellar radii to 0.8% accuracy. Crowding can be easily spotted in the light curve and removed. They derive µ LMC,centre = 18.477 ± 0.004 (stat) ± 0.026 (sys), where the main contribution to the systematic error budget is the S V relation above.
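The geometric pre-factor follows purely from unit conversion; a minimal sketch (solar radius and parsec in SI; the function name is ours):

```python
import math

R_SUN_M = 6.957e8                          # nominal solar radius [m]
PC_M = 3.0857e16                           # parsec [m]
MAS_RAD = math.pi / (180 * 3600 * 1000)    # milliarcsecond in radians

def deb_distance_pc(radius_rsun, phi_mas):
    """d = 2R/phi, with R in solar radii and phi in mas, returned in pc."""
    return 2 * radius_rsun * R_SUN_M / (phi_mas * MAS_RAD) / PC_M

# The pre-factor is ~9.30 pc per (R_sun / mas), purely from geometry:
print(f"{deb_distance_pc(1.0, 1.0):.2f}")
```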
This is the most accurate measurement of the distance to the LMC to date. As we shall see shortly when we discuss standard candles, this result has become key to many recent H 0 results: every standard candle can be found in the LMC, and the low error budget allows for a very accurate calibration.
Looking forwards, although DEBs have been found in M31, they are "early-type" stars with hot atmospheres, for which a reliable surface brightness to colour relation has not been established. The hope is that future 30m-class telescopes will have sufficient spectroscopic resolution to extend this to late-types in M31 and other local group galaxies (Beaton et al. 2019).

Masers
Maser emission occurs when thermal collisions in warm gas in an accretion disk around the central black hole of a galaxy drive a population inversion of molecular energy levels. Such systems are rare: the disk must be "just right", not too hot and not too cold, and have a suitable local molecular abundance. The Type 2 Seyfert galaxy NGC 4258 at a distance of 7.5 Mpc is just such a system. Isolated bright spots of 22.235 GHz maser emission (from a rotational transition of H 2 O) can be seen in three regions on the sky stretching 20 mas (∼ 0.1 ly) long, with the overall shape of a warped line. After subtracting the overall system redshift, the spots on one side are blue-shifted by ∼ 1000 km s −1 , the other side is red-shifted by the same amount, and the spots in the middle are low velocity (Argon et al. 2008). If the spots are observed for long enough, their accelerations and proper motions can be obtained from the steady drift of their Doppler velocities and positions. The central spots show the lowest l.o.s. velocity and highest acceleration, and the outer spots the highest velocity and lowest l.o.s. acceleration.
The key assumption these observations support is that all the spots form an orbital system with shared parameters, as it is anticipated that disk viscosity will have reduced the orbits to nearly circular. The disk shape is then modelled (including such parameters as inclination, warp, residual eccentricity and so forth), fitted, and the orbital parameters of the spots derived. The outer spot velocities v show a Keplerian behaviour with radius, v 2 = GM/r, and as the acceleration is then v̇ = GM/r 2 , the physical radius of the system can be determined. The angular diameter distance is then d A = r/θ, where θ is the angular impact parameter. Humphreys et al. (2013) have observed NGC 4258 for 10 years at 6-monthly intervals using VLBI interferometry, which resolves the relative positions of the maser spots to < 3 µas accuracy (even the tectonic drift of the ground radio telescopes must be corrected for). The Doppler shifts are measured to within an accuracy of 1 km s −1 , and the central spots accelerate by ∼ 10 km s −1 yr −1 . The proper motions θ̇ µas yr −1 provide additional information as d A = v/θ̇ Mpc. This pattern fits an accretion disk orbiting a 10 7 M black hole. Reid et al. (2019) derive the distance as d = 7.576 ± 0.082 (stat) ± 0.076 (sys) Mpc, an accuracy of 1.4%, competitive with the LMC distance error. The main statistical contribution to the error budget is the positional error of the ∼ 300 spots.
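In the idealised single-radius picture, r = v²/v̇ and θ = r/d A combine to give d A = v²/(v̇ θ). A sketch with illustrative numbers of the right order of magnitude for NGC 4258 (these are not the fitted values of Humphreys et al. 2013, whose disk model is far more detailed):

```python
import math

KM = 1e3                      # m
YR_S = 3.156e7                # seconds per year
MPC_M = 3.0857e22             # Mpc in metres
MAS_RAD = math.pi / (180 * 3600 * 1000)

def maser_distance_mpc(v_kms, accel_kms_per_yr, theta_mas):
    """Idealised circular-orbit maser distance d_A = v^2 / (a * theta)."""
    v = v_kms * KM
    a = accel_kms_per_yr * KM / YR_S
    theta = theta_mas * MAS_RAD
    return v * v / (a * theta) / MPC_M

# Orbital speed ~1000 km/s, central-spot acceleration ~9.3 km/s/yr,
# angular impact parameter ~2.9 mas (illustrative values only):
print(f"{maser_distance_mpc(1000, 9.3, 2.9):.1f} Mpc")
```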
Only 8 megamaser galaxies have been detected, with the furthest being NGC 6264 at 141 Mpc, and it is unlikely more will now be found at usable distances. We illustrate the data for UGC 3789 from Reid et al. (2013) in Figs. 6a and 6b. The Megamaser Cosmology project (Pesce et al. 2020) have used six of these galaxies to find H 0 = 73.9±3.0 km s −1 Mpc −1 , independently of distance ladders.
Although it is too close to determine H 0 directly (as its peculiar motion could be a large fraction of its redshift), NGC 4258 has become a key calibrator of the distance ladder, owing to its low error budget. Its particular usefulness is that, unlike the LMC or SMC, it is a fairly typical barred spiral galaxy, similar in morphology and environmental conditions (metallicity, star-formation rate and so forth) to the ones in which Cepheids and Type Ia supernovae are seen at greater distances. Using Cepheid and SN Ia data from Riess et al. (2016), Reid et al. (2019) find H 0 = 72.0 ± 1.9 km s −1 Mpc −1 using solely NGC 4258 as the geometric calibrator.

Cepheids
Having discussed calibrators, we can now talk about the "engine room" of distance ladders. Cepheids form two classes, but it is the younger, population I, classical Cepheids which are of interest. These are yellow bright giants and supergiants with masses 4-20 M , and their brightness cycles over a regular period, between days and months, by around 1 magnitude. They are bright, up to 100,000 L , and can be seen out to 30 Mpc with the HST. Although Milky Way Cepheids had been observed and catalogued since the 18th century, it was first discovered by Henrietta Swan Leavitt in 1908 that there is a linear relation between the logarithm of their oscillation period and their absolute magnitude: the Leavitt period-luminosity law (we use the term P -L law for Miras). She had been observing Cepheids in the SMC and LMC, using the Harvard College Observatory telescope, and decided to rank them in order of magnitude. As stars in the LMC all have roughly the same distance from the Earth, the Leavitt law was immediately clear from their apparent magnitudes. In fact, one could say modern extragalactic astrometry began with her groundbreaking discovery.
That stars can pulsate is not so surprising; after all, a star is in local equilibrium, and would be expected to oscillate about the equilibrium given any perturbation. However, something must drive the oscillation, otherwise it would dissipate. For Cepheids, the driver is heat-trapping by an opaque layer of doubly-ionised He surrounding a He-burning core. The trapped heat increases pressure, which expands the star, cooling the ionised He to the point where it can recombine and therefore become transparent. As the radiation escapes, the core cools and re-contracts, which in turn releases gravitational energy into the He layer. The He heats, re-ionises and the cycle repeats, with period proportional to the energy released. The Cepheid population lies in an instability strip in the horizontal branch of the Hertzsprung-Russell diagram; the cool (red) edge of the population is thought to be due to the onset of convection in the He layer, and the hot (blue) edge by the He ionisation layer being too far into the atmosphere for pulsations to occur. They therefore form a well-defined population.
A straightforward understanding of the origin of the Leavitt law can be found in thermodynamic and dynamic arguments. The luminosity of a Cepheid will depend on its surface area via the Stefan-Boltzmann law L = 4πR 2 σT 4 , which expressed in bolometric magnitudes is M = −5 log 10 R − 10 log 10 T + const.
The stellar radius can be mapped to the period by writing an equation of motion for the He layer,

m r̈ = −G m M(r)/r 2 + 4π r 2 p ,

where r denotes the radial position of the layer, p is the pressure, M(r) is the mass interior to the layer, and m the mass of the layer. For adiabatic expansion, it is then straightforward to solve for the period P , and we find P √ρ̄ = const., where ρ̄ is the mean density (for further details see Cox 1960). With ρ̄ ∝ R −3 and temperature mapped to colour B − V , we obtain the Leavitt law as

M = α log 10 P + β (B − V ) + γ ,

where P is the period in days, α and β are empirically calibrated from the Cepheid data, and for the zero-point γ we need a distance measurement. It is preferable to use so-called Wesenheit magnitudes, which are constructed to be reddening-free, and have a reduced colour dependence (intrinsically redder Cepheids are fainter). Let's see how this works in practice. Riess et al. (2019) use the HST Wesenheit magnitude m W H = m F160W − 0.386 (m F555W − m F814W ), where the numerical constant is derived from a reddening law. They observed 70 LMC Cepheids with long periods, and after setting β = 0, they derive α = −3.26 with intrinsic scatter 0.075 mag. By comparison, the scatter using solely optical F555W magnitudes is 0.312 mag. After subtracting the DEB distance modulus (Pietrzyński et al. 2019), we obtain γ = −2.579. The formal error in the Cepheid sample mean is 0.0092 mag, equivalent to 0.42% in distance.
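In practice α and the zero-point are obtained by linear regression of Wesenheit magnitudes on log period. A minimal least-squares sketch on synthetic data (the slope, scatter, sample size and LMC distance modulus are set to the values quoted above; the apparent-magnitude intercept and period range are invented for illustration, and no real photometry is used):

```python
import random

random.seed(0)

# Synthetic LMC-like Cepheids: m_W = intercept + alpha * log10(P) + scatter
alpha_true, intercept_true, scatter = -3.26, 15.9, 0.075
log_p = [random.uniform(0.8, 1.8) for _ in range(70)]
m_w = [intercept_true + alpha_true * lp + random.gauss(0, scatter)
       for lp in log_p]

# Ordinary least squares for slope and intercept.
n = len(log_p)
mx = sum(log_p) / n
my = sum(m_w) / n
alpha = sum((x - mx) * (y - my) for x, y in zip(log_p, m_w)) \
        / sum((x - mx) ** 2 for x in log_p)
intercept = my - alpha * mx

# Subtracting the DEB distance modulus gives the absolute zero-point gamma.
mu_lmc = 18.477
print(f"alpha = {alpha:.2f}, gamma = {intercept - mu_lmc:.2f}")
```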
We would now like to feel that we can deduce the distance to any galaxy in which we can find Cepheids, by applying this law to convert periods to absolute magnitudes, and comparing to the apparent magnitudes we observe. As usual, though, things are not that simple! There are three principal objections:

- Are Cepheids in the LMC the same as the ones in distant galaxies? The LMC has a lower metallicity than a typical spiral galaxy, so can we expect the Cepheids found there to have the same brightness? Unfortunately, it is hard to determine the metallicity of a Cepheid from its colour alone, and studies on the effects of metallicity are variable (Ripepi et al. 2020). One might try extending the Leavitt law to add a metallicity term κ [Fe/H]. Riess et al. (2019) have estimated Cepheid metallicity in the LMC based on optical spectra of nearby HII regions, and find κ = −0.17 ± 0.06, which is consistent with an earlier estimate from Freedman and Madore (2010) that LMC Cepheids are 0.05 mag dimmer than galactic Cepheids, and with later results (Gieren et al. 2018; Breuval et al. 2021).
- Is the Leavitt law linear? This matters for H 0 , because the Cepheids seen in distant galaxies have longer periods than those used to calibrate the Leavitt law: they are brighter and more easily observed at distance. So log P ∼ 1.5 for Cepheids in SN Ia host galaxies, whereas the average for the LMC is log P ∼ 0.8. If the Leavitt law is curved, a bias would be introduced. This can be dealt with by either introducing a separate calibration for short and long period Cepheids, or selecting only long period nearby Cepheids for calibration.
- Can we obtain clean astrometry of very distant Cepheids? We want to observe Cepheids as far away as we can, to maximise the overlap with the SN Ia observations. But pushing the limits of resolution brings the risk of crowding, whereby the Cepheid photometry is blended with nearby dimmer, redder and cooler stars.
Indeed, this seems to be the main cause of the increased scatter in residuals for distant galaxies. There are some ways to deal with crowding: Riess et al. (2016) (hereafter R16) add random Cepheids to images of the same galaxy and put them through the same analysis pipeline, to see how well input parameters are recovered. Figure 8 shows an example of this. Cepheids which are outliers in colour, indicative of high blending, may be discarded (removing the colour cut lowers H 0 by 1.1 km s −1 Mpc −1 ). Another way to test for a crowding bias is to look for compression of the relative flux difference from peak to trough (a more blended Cepheid will be more compressed) as is done in Riess et al. (2020).
Using an optical Wesenheit magnitude, in which stars may be less crowded than in the NIR one, reduces H 0 by 1.7 km s −1 Mpc −1 (R16), although the argument can be made that this is due to higher metallicity effects in the optical. As with metallicity, the matter of crowding continues to be debated.

Fig. 7: Illustration of the Leavitt law in four galaxies used for the local distance ladder. The LMC and N4258 are the two main calibrators of Cepheid distances. For the two example SN Ia hosts N4536 and N1015, it is harder to observe the fainter, shorter-period Cepheids. The slope is fixed at −3.26, corresponding to the best estimate global fit in Riess et al. (2016), and magnitudes are m W H . We have inverted the normal decreasing magnitude axis used in the literature for presentation purposes.

Fig. 8: U9391 is the most distant, with a distance modulus of 32.92 ± 0.06. The association of Cepheids with spiral arms is clearly visible, and in the bottom left is an example crowding correction for point sources and background flux. The scatter around the Leavitt law is ∼ 0.7 mag, thought to be due to residual crowding effects.
Systematic error analysis is provided in Section 4 and Table 8 of R16, where the effect of various analytical choices, such as breaks in the Leavitt law, different assumed values for the deceleration parameter q 0 , and methods of outlier rejection, is shown to be ±0.7%. This may not cover all potential systematics. In a recent talk, Efstathiou (2020) re-examined the Leavitt laws of the galaxies presented in R16. Calibrating each galaxy individually, the slopes of SN Ia host galaxies are generally shallower than those of M31 or the LMC, which should not be the case if Cepheids are a single population. A change in the slope will alter the zero-point, as would a change in the distance calibration. It is then noted that forcing the slope of the Leavitt law to −3.3 (the M31 value), in combination with using only NGC 4258 as the anchor, lowers the R16 H 0 value to 70.3 ± 1.8 km s −1 Mpc −1 (Equation 4.2b). Additionally, there appears to be tension between the relative magnitudes of Cepheids in the LMC and NGC 4258, and the distance moduli implied by masers and DEBs; calibrating with solely the LMC gives a higher H 0 value by 4.4 km s −1 Mpc −1 . It is a feature of χ 2 fits (as used in R16) that the joint solution will be drawn to the data with the lowest error, which in this case is the LMC value. But if two subsets of the data are in tension, it is not certain that the one with the lowest χ 2 has the least systematics.
Such analyses do not show one value is preferable over another, nor can they show where any discrepancy may lie -it might be metallicity effects, crowded Cepheid photometry, the NGC4258 distance, or some other systematic. Reanalyses of the results of R16 by various authors (Feeney et al. 2018;Cardona et al. 2017;Zhang et al. 2017;Follin and Knox 2018;Dhawan et al. 2018) use the same Cepheid photometric reduction data so are not independent as such.
Research has accelerated to close down these potential issues. We have already mentioned replacing LMC and NGC4258 Cepheids with Milky Way Cepheids in the section on parallax. In a recent paper, Riess et al. (2020) show crowding effects can be detected as a light curve amplitude compression, and that their correction method has been robust. Finally, Javanmardi et al. (2021) fully re-derive the Cepheid periods and luminosities from the original HST imaging for NGC 5584 (which is face-on to the line of sight), intentionally using different analytical choices, finding no systematic difference in the light curve parameters.
All this said, it would be clearly preferable if we had some other candles to check against Cepheids. We now turn to two possibilities, Miras and the Tip of the Red Giant Branch.

Miras
Miras are variable stars that have reached the tip of the Asymptotic Giant Branch (AGB), comprising an inert C-O core, and a He-burning shell inside a H-burning shell. Their mass is 0.8-8 M , although they are typically at the lower end of this range. Their large size of ∼ 1 a.u. means they are actually brighter than Cepheids by 2-3 mag in the NIR. That they follow a P -L law was first established in 1928 (Gerasimovic 1928), but being tricky to categorise and observe, they were not extensively studied until the demand for cross-checks on Cepheids brought renewed interest in them.
Miras form two distinct populations with O-rich and C-rich spectra, with the O-rich ones exhibiting somewhat less scatter than the C-rich (Feast 2004), but more than Cepheids. Miras have very long periods 90 < P < 3000 days, and their light curves have many peaks of variable amplitude superimposed (see for example Fig. 3 in Yuan et al. 2017), so investment in observation time is needed to reliably determine P . The P -L law curves upwards at around P ∼ 400 days, where it seems likely the extra luminosity is due to episodes of Hot-Bottom-Burning, so an extra quadratic term is required. Miras have high mass-loss rates, and the surrounding dust cloud means Wesenheit magnitudes will be less reliable at subtracting reddening than for Cepheids (because a standard reddening law is assumed in constructing them). Lastly, a size of 1 a.u. means their angular diameter is comparable to their parallax, and in addition the photocentre moves around the star, making parallax measurements challenging.
So why bother with them? Their advantages versus Cepheids is that they are (a) more numerous, and therefore easier to find in halos where there is less crowding (b) older, so can be found in all types of galaxies including SN Ia hosts with no Cepheids (c) 2 − 3 magnitudes brighter than Cepheids in the near infrared. Given imaginative observation strategies and careful population analysis, the issues above can be addressed, and modern studies now exist for Miras in the LMC (Yuan et al. 2017), NGC 4258 (Huang et al. 2018), and the SN Ia host NGC 1559 (Huang et al. 2020).
In the first of the above, Yuan et al. (2017) establish a P -L curve using 600 Miras in the LMC. They have sparse JHK-band observations from the LMC Near Infrared Synoptic Survey (LMCNISS; Macri et al. 2015), which on their own would not be enough to establish the period, but using the much denser Optical Gravitational Lensing Experiment (OGLE) I-band observations (Szymański et al. 2011), they are able to establish a regression rule for the relationship between passbands, and use OGLE to "fill in" the light curves. The classification of Miras into O-rich or C-rich is also obtained from OGLE. The reference magnitude is defined as the median of maxima and minima of the fitted light curves. The period is obtained by fitting to a sum of sine functions, progressively adding harmonics if supported by a Bayesian Information Criterion 7 . They fit a quadratic law 8

m = a 0 + a 1 (log 10 P − 2.3) + a 2 (log 10 P − 2.3) 2 .         (34)

The K band for O-rich Miras shows the lowest scatter of 0.12 mag, with a 0 = −6.90 ± 0.01, a 1 = −3.77 ± 0.08, a 2 = −2.23 ± 0.2. The zero-point is obtained using the DEB distance to the LMC given by Pietrzyński et al. (2013). By comparison, the m W H scatter for LMC Cepheids obtained by SH0ES was 0.075 mag.

Huang et al. (2018) observed NGC 4258 with the HST WFC in 12 epochs over 10 months. Mira candidates were identified by their V and I amplitude variation, and it was possible to use LMC data to show that with this method contamination between O-rich and C-rich could be kept to a minimum. Their "Gold" sample size was 161 Miras, and fitting Eq. (34) for apparent magnitudes gave a 0 = 23.24 ± 0.01 for F160W with scatter 0.12 mag. Adjusting the ground-based photometry of LMCNISS to F160W equivalent, the authors calculate the relative distance modulus between the LMC and NGC 4258 as ∆µ = 10.95 ± 0.01 (stat) ± 0.06 (sys). This is consistent with the Cepheid value of ∆µ = 10.92 ± 0.02 and the Maser-DEB value of ∆µ = 10.92 ± 0.041 (Reid et al. 2019; Pietrzyński et al. 2019). Similar methods are used by Huang et al. (2020) for observations of NGC 1559, with the addition of a low-period cut to deal with potential incompleteness bias of fainter Miras. Using the LMC DEB and NGC 4258 distances as calibrators, they obtain H 0 = 73.3 ± 4.0 km s −1 Mpc −1 . So Miras are consistent with Cepheids, but due to their larger error budgets, they are also consistent with Planck results.

Fig.: Mira P -L data (from Yuan et al. 2017). While non-linearity can straightforwardly be fitted, contamination must be kept under control. As seen in the bottom three panels, Wesenheit magnitudes show more scatter than standard magnitudes (the opposite of applying them to Cepheids), and they also hinder separating O-rich from C-rich stars.
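The quadratic P -L law is easy to evaluate from the quoted coefficients. A sketch for O-rich Miras in the K band (the pivot at log 10 P = 2.3 is our assumption about the parameterisation; the coefficients a 0 , a 1 , a 2 are the values quoted above):

```python
import math

# O-rich, K-band coefficients quoted in the text (Yuan et al. 2017).
A0, A1, A2 = -6.90, -3.77, -2.23

def mira_abs_mag_k(period_days, pivot=2.3):
    """Quadratic P-L law M = a0 + a1*x + a2*x^2 with x = log10(P) - pivot.
    The pivot value is an assumption made for this illustration."""
    x = math.log10(period_days) - pivot
    return A0 + A1 * x + A2 * x * x

print(f"{mira_abs_mag_k(200):.2f}")   # ~a0 at the pivot period (~200 d)
print(f"{mira_abs_mag_k(400):.2f}")   # brighter at longer period
```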

Tip of the Red Giant Branch
Red giant branch stars are stars of mass ∼ 0.5 − 2.0 M that are at a late stage in their evolution, having moved off the main sequence towards lower temperatures and higher luminosities. The core of the star is degenerate He, and as He ash rains down from the H-burning shell surrounding it, it contracts and heats up. Since the temperature range for He fusion ignition is rather narrow, and degeneracy enforces a strong link between core mass and temperature, the core in effect forms a standard candle inside the star, just prior to ignition. As soon as the helium flash ignites, the star moves over the course of 1 Myr or so (almost instantaneously in stellar evolution terms) to the red clump at a higher temperature and somewhat lower luminosity. Therefore, there is a well-defined edge in a colour-magnitude diagram that can be used to identify the tip of the red giant population just before the flash (the TRGB). The TRGB is then not one single star, but a statistical average over a suitably large population.
The TRGB offers many advantages compared to other standard candles. Although red giants are fainter than Cepheids in the optical, in the K-band they are ∼ 1.6 magnitudes brighter. Stars at or near the TRGB are relatively common, and can be observed readily in uncrowded and dust-free galactic halos. They are also abundant in the solar neighbourhood, meaning great numbers are available for calibration by parallax (by contrast, Cepheids with good parallaxes in Gaia DR3 will number in the hundreds). Sufficient numbers to resolve the tip can be seen in globular clusters such as ω Centauri, where well-formed colour-magnitude diagrams can resolve metallicity and extinction effects, and an absolute magnitude calibration can be made either from Gaia DR3 parallaxes (Soltis et al. 2021) or DEB distances (Cerny et al. 2021).
Like Miras, they are found in all galactic types. Observation is efficient: one does not need to revisit fields in order to determine periods. The James Webb Space Telescope is capable of observing red giants in the near IR at distances of ∼ 50Mpc, which is comparable to Cepheid distances obtainable from the HST.
It is also beneficial that the modelling of late-stage stellar evolution is quite well understood. Empirical calibrations of absolute magnitude and residual dependence on metallicity, mass and age can be checked against the results of stellar codes. For example, the dependency on metallicity is somewhat complex: the presence of metals dims optical passbands by absorption in stellar atmospheres, and brightens the NIR. Serenelli et al. (2017) express the TRGB as a curve with linear and quadratic terms in colour. The authors find an I-band median M TRGB I = −4.07 with a variation with colour of ±0.08, and a downward slope in the colour-magnitude diagram as metallicity increases. This median value is consistent with empirical calibrations.
Fitting the TRGB is equivalent to finding the mode of the gradient of the (noisy) stellar distribution in the colour-magnitude diagram, and techniques are borrowed from image processing to do this. The tip has a background of AGB stars, and the edge is sensitive to the distribution of them close to it. If the contamination is large or variable, there is a risk the mode can shift. It is therefore important to have a dense population of stars, and enough fields to test the robustness of the fitting process. For example, Hoyt et al. (2021) bin data by separation from the galactic centre and fit the tip separately for each bin.
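The edge-detection step described above can be illustrated with a toy example: a synthetic luminosity function with a sharp onset of RGB stars at the tip and a sparse AGB population above it, scanned with a simple [1, 0, -1] derivative kernel. All numbers here (tip magnitude, population sizes, bin width) are illustrative, not taken from any real dataset.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stellar population in I-band magnitude: a flat RGB luminosity function
# fainter than the tip (m_TRGB = 24.0 here, illustrative), plus a sparse tail
# of AGB contaminants brighter than the tip.
m_trgb = 24.0
rgb = m_trgb + 4.0 * rng.random(20000)       # RGB stars: fainter than the tip
agb = m_trgb - rng.exponential(0.5, 1000)    # AGB stars: brighter than the tip
mags = np.concatenate([rgb, agb])

# Bin the luminosity function and apply a [1, 0, -1] derivative kernel:
# the response peaks where the star counts jump, i.e. at the tip.
edges = np.arange(22.0, 28.0, 0.05)
counts, _ = np.histogram(mags, bins=edges)
response = np.convolve(counts, [1, 0, -1], mode="same")  # counts[i+1] - counts[i-1]
i_tip = response.argmax()
tip_estimate = 0.5 * (edges[i_tip] + edges[i_tip + 1])
```

Real analyses fit the full edge response, test robustness across fields, and propagate the contaminating AGB population into the error budget; this sketch shows only the core idea of tip detection as mode-finding on the gradient.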
The Carnegie-Chicago Hubble program (CCHP) have determined TRGB distances for 10 SN Ia host galaxies that also have Cepheid distances, using HST I-band imaging (Freedman et al. 2019). They calibrate the zero point from the TRGB of the LMC and SMC using the DEB distance, finding M I = −4.049 ± 0.022 (stat) ± 0.039 (sys), which is consistent with the theoretical value above.
We show in Fig. 11 their comparison of TRGB and Cepheid distances, covering a range from 7 Mpc to almost 20 Mpc. By expanding their sample to non-SN Ia host galaxies, the authors show the TRGB has a lower scatter about their Hubble diagram than do Cepheids, by a factor of 1.4. Calibrating the Pantheon SN Ia sample on TRGB distances alone gives H 0 = 70.4 ± 1.4 km s −1 Mpc −1 , with the difference from the SH0ES result due to the TRGB distances to the SN Ia host galaxies being larger than the Cepheid distances. Subsequent work has disputed the treatment of extinction in the LMC (based on the reddening law of Fitzpatrick 1999), and also revises the blending corrections of the older, lower-resolution photometry of the SMC. Conversely, a calibration of the TRGB in the outskirts of NGC 4258 to the maser distance by Jang et al. (2021) gives a value fully consistent with Freedman et al. (2019). In a follow-up paper, Madore and Freedman (2020) present an upgraded methodology in which stellar photometry is itself used to separate metallicity and extinction effects (by exploiting their different sensitivities in the J, H and K bands), confirming their earlier results. Other results (Soltis et al. 2021; Reid et al. 2019; Capozzi and Raffelt 2020; Cerny et al. 2021; see for example Table 3 in Blakeslee et al. 2021) cluster evenly around these two values. We also note the difference between SN Ia zero points (Fig. 6 of Freedman et al. 2019) of the TRGB vs Cepheids appears to increase with distance, and is particularly large for one galaxy, N4308.

Fig. 11: Comparison of TRGB and Cepheid distance moduli, from which it can be seen TRGB distances are consistently larger across a range of galaxies. The red dots are SN Ia host galaxies, the grey dots other galaxies, and the blue star is the LMC TRGB calibrator.

More recently, an updated value of H 0 = 69.8 ± 0.6 (stat) ± 1.6 (sys) km s −1 Mpc −1 was published by Freedman (2021). New and updated values for M I based on the anchors of the LMC, SMC, NGC 4258 and galactic globular clusters are derived.
Additionally, potential zero-point systematics in the direct application of Gaia EDR3 data to globular clusters (as done by Soltis et al. 2021) are discussed. The anchors are all consistent with one another (although the closeness of the agreement suggests the errors may be over-estimated), and the paper brings the TRGB method on a par with Cepheids in terms of the number of anchors used.
The TRGB continues to attract attention because the CCHP result is very interesting: it is a late-universe distance ladder that is in only ∼ 2σ tension with the Planck value of H 0 = 67.4 ± 0.5 km s −1 Mpc −1 . While it cannot by itself fully resolve the Hubble tension, in combination with some other systematic, perhaps in the calibration of SN Ia (see next section), it may offer a resolution that does not involve new physics.
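The quoted tension levels follow from a naive Gaussian comparison of central values and errors, which can be checked directly from the numbers quoted in this review:

```python
import math

def tension_sigma(m1, s1, m2, s2):
    """Naive significance (in sigma) of the difference between two
    independent Gaussian measurements."""
    return abs(m1 - m2) / math.sqrt(s1 ** 2 + s2 ** 2)

# CCHP TRGB calibration of Pantheon vs Planck: roughly the ~2 sigma quoted.
t_trgb = tension_sigma(70.4, 1.4, 67.4, 0.5)
# SH0ES Cepheid calibration vs Planck: the headline Hubble tension.
t_shoes = tension_sigma(73.2, 1.3, 67.4, 0.5)
```

This Gaussian difference is only a rough metric; the proper comparison is done at the level of the full posteriors.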
What is also clear is that the quality of a distance ladder is less a matter of the quantity of objects than of the accuracy of calibration and the control of systematics. The TRGB nevertheless seems very promising. Once its calibration is agreed among the community, it offers the tantalising prospect of either confirming the H 0 tension with two independent data sets, or reducing it to a lower statistical significance.

Type Ia supernovae
It is hard to overstate the impact of Type Ia supernovae in cosmology. At M ∼ −19, they are both bright enough to be seen well into the Hubble flow (the furthest to date is SN Wilson at z = 1.914), and have a standardisable luminosity. The mainstream view is that they originate from the accretion of material from a binary companion onto a carbon-oxygen white dwarf. When the white dwarf reaches the maximum mass that can be sustained by degeneracy pressure, the Chandrasekhar mass M ch = 1.4 M ⊙ , a runaway fusion detonation occurs, destroying the white dwarf. Approximately 0.6 M ⊙ fuses into heavier elements during the first few seconds of the explosion, and the observed light curve, which peaks at ∼ 20 days and lasts for ∼ 60 days, is powered by the radioactive decay of 56 Ni to 56 Co and then to 56 Fe. Hence, a SN Ia is a "standard bomb", primed to explode when it reaches critical mass.
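The radioactive energy source described above is easy to sketch numerically. The following toy calculation (assuming full trapping of the decay energy, with standard mean lifetimes and approximate per-decay energies) gives the instantaneous decay power for 0.6 M⊙ of 56Ni; the observed ~20-day peak arises from photon diffusion through the ejecta, which this sketch omits.

```python
import numpy as np

M_SUN = 1.989e33            # g
M_U = 1.66e-24              # atomic mass unit, g
MEV = 1.602e-6              # erg
TAU_NI, TAU_CO = 8.8, 111.3              # mean lifetimes of 56Ni, 56Co in days
Q_NI, Q_CO = 1.75 * MEV, 3.73 * MEV      # approximate energy release per decay

def decay_power(t_days, m_ni_msun=0.6):
    """Instantaneous decay power (erg/s) of the 56Ni -> 56Co -> 56Fe chain,
    assuming full trapping of the decay energy (a toy model)."""
    n0 = m_ni_msun * M_SUN / (56.0 * M_U)          # initial number of 56Ni nuclei
    lam_ni = 1.0 / (TAU_NI * 86400.0)
    lam_co = 1.0 / (TAU_CO * 86400.0)
    t = np.asarray(t_days, dtype=float) * 86400.0
    n_ni = n0 * np.exp(-lam_ni * t)
    # Bateman solution for the intermediate 56Co population
    n_co = n0 * lam_ni / (lam_co - lam_ni) * (np.exp(-lam_ni * t) - np.exp(-lam_co * t))
    return lam_ni * n_ni * Q_NI + lam_co * n_co * Q_CO
```

At a few weeks the power is of order 10^43 erg/s, the right ballpark for a SN Ia near peak.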
However, there is surprisingly little observational evidence to confirm the accretion theory. Both the lack of H lines and computer modelling suggest the donor star should survive the explosion, but searches of the site of the nearby SN 2011fe, both pre- and post-explosion, have revealed no evidence of a companion. This may support an alternative "double degenerate" theory that some, or all, SN Ia originate from the merger of two white dwarfs (for a review, see Maoz et al. 2014). But if that is the case, why are their luminosities so uniform? Whether SN Ia have one or two types of progenitors has important implications for the Hubble constant, to which we return shortly.
Individual SN Ia luminosities can vary by up to a factor of 2, but they are empirically standardisable by the Tripp estimator (Tripp and Branch 1999):

$\mu = m_b - M_{\rm fid} + \alpha x - \beta c + \Delta_M + \Delta_B$

where µ is the distance modulus, m b is the apparent AB magnitude, and M fid is the absolute magnitude of a typical SN Ia. The parameter x is the "stretch" of the light curve, a dimensionless measure of how long the bright peak of the light curve lasts: longer-duration SN Ia are brighter. c is a measure of colour: redder SN Ia are dimmer. ∆ M is a correction for the environment of the SN Ia, and ∆ B a statistical bias correction for selection effects, analogous to the LKH bias of parallax we described earlier. M fid can be calibrated by finding SN Ia in galaxies for which there are Cepheid or TRGB luminosity distances, or, in the inverse distance ladder approach, by combination with angular diameter distances from baryon acoustic oscillations or the CMB. SN Ia are rare events, approximately one per galaxy per century, but by scanning a reasonably sized patch of sky, supernova surveys can detect hundreds per year 9 . Hence, supernova catalogs are not uniformly distributed on the sky. However, SN Ia close enough to calibrate their magnitudes are few: there are just 19 SN Ia in galaxies with Cepheid distances from HST observations, although more are expected soon from new cycles (see for example HST proposal 16198).
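A minimal sketch of the Tripp estimator follows, with illustrative (not fitted) values for the slopes and fiducial magnitude; real analyses fit α and β jointly with the cosmology.

```python
def tripp_mu(m_b, x, c, alpha=0.14, beta=3.1, m_fid=-19.3, d_mass=0.0, d_bias=0.0):
    """Distance modulus from the Tripp estimator:
    mu = m_b - M_fid + alpha * x - beta * c + Delta_M + Delta_B.
    The alpha, beta and M_fid values here are illustrative defaults."""
    return m_b - m_fid + alpha * x - beta * c + d_mass + d_bias

def lum_distance_mpc(mu):
    """Invert mu = 5 log10(d_L / 10 pc) to get d_L in Mpc."""
    return 10.0 ** ((mu - 25.0) / 5.0)
```

For example, a supernova with m_b = 24, x = 0, c = 0 and these defaults sits at µ = 43.3, i.e. a luminosity distance of several Gpc, well into the Hubble flow.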
To maximise statistics and sky coverage, it is common to use SN Ia from multiple surveys in cosmological analyses. Scolnic et al. (2018) have compiled the Pantheon sample of 1,048 SN Ia from the CfA, CSP, PS1, SDSS, SNLS and HST surveys in a uniform light curve calibration, representing almost 40 years of observational data. The sample divides into 180 mostly older Low-z (z < 0.1) SN Ia which are uniformly distributed on the sky, and 868 recent High-z SN Ia concentrated in the survey fields, notably the thin "Stripe 82" along the celestial equator. The sets have a small overlap at z ∼ 0.1, and cross-checks on their spectral and other parameters show they are likely to be drawn from the same underlying population (that is, High-z SN Ia are not intrinsically different to Low-z). However, the mean values of light curve parameters like stretch and colour do drift with z, possibly due to selection effects or changes in the host galaxy properties at higher redshift. Large numbers of Type Ia supernovae have also been found, or are in the process of being surveyed, by the Dark Energy Survey.

The process of fitting a SN Ia light curve is straightforward. ugriz photometry is corrected for the background sky obtained by imaging the site after the nova has faded (a bonus of SN Ia being transient), and the light curve is fitted to a template, outputting the parameters x, c and m b . However, the bias correction ∆ B is challenging and subtle: it depends on the astrophysical modelling of sources of scatter, contamination by mis-classified supernovae, and the selection function of the survey. For example, more distant SN Ia may be preferentially targeted for spectroscopic confirmation when their host galaxy is faint, so if their luminosities do depend on the galactic environment, this could bias the High-z population relative to the Low-z. The selection functions of early Low-z surveys are not easy to model, and their biases can be up to 0.06 magnitudes.
This could account for a somewhat elevated scatter of Pantheon residuals around z ∼ 0.1 in the Hubble diagram where the two sets join (Fig. 12). After fitting α and β, the residual intrinsic scatter of the SN Ia population is σ int ∼ 0.1 magnitudes, which compares favourably with other standard candles. We return now to the question of whether SN Ia have additional environmental correlations that affect their luminosity (that is, what the nature of the ∆ M term in Tripp estimator is), and why this may matter for H 0 . It is generally accepted that SN Ia in galaxies with stellar mass M * > 10 10 M are intrinsically brighter by ∆ M, * ∼ 0.04 − 0.06 than those in lower mass galaxies, and that the transition is sharp, rather than a gradual evolution (see Smith et al. 2020 and references therein). It has therefore become common to use a step-function for ∆ M, * in fits for H 0 . Riess et al. (2016) calculate the effect is to lower H 0 by 0.7%, as the mass of the calibrator galaxies is slightly lower than the mass of the Hubble flow set. The key point is then this: given that Cepheids are young, bright stars formed in active galaxies, is ∆ M, * sufficiently discriminating to ensure the 19 supernovae in galaxies with Cepheid distances are representative of the thousands in the Hubble flow?
Host galaxy mass itself cannot matter to an individual SN Ia, so it must be a proxy for some other environmental condition. Metallicity does vary with galactic mass, but the step-like feature suggests a connection with star formation: galaxies with mass below the threshold tend to be active, whereas those above are mostly passive. One hypothesis is that there is a prompt/bright SN Ia type that is continuously renewed by star formation, and a delayed/dimmer type originating from stars formed when the galaxy was young (Rigault et al. 2013, 2015; Maoz et al. 2014). But any link between this and SN Ia formed by accretion onto, or mergers of, white dwarfs is speculative. Also, with estimates of "global" properties being effectively light-weighted, they are biased towards galactic centres rather than the outskirts where most supernovae are seen.
If some SN Ia are associated with sites of star formation, we can expect a better proxy to be the local specific star formation rate (that is, star formation normalised by local stellar mass, or LsSFR for short), on the grounds that the younger SN Ia population will not have had enough time to disperse from their birth region. Because the pixel resolution of typical SN Ia surveys can resolve a 2 kpc aperture only out to z ∼ 0.1, studies concentrate on the Low-z sample. Star formation can be estimated from photometry, but is best tracked by H α lines from ionised gas or UV imaging, when available. Rigault et al. (2020) have examined 141 SN Ia from the Nearby Supernova Factory sample (which has high resolution spectroscopy) and find a significant correlation between LsSFR and α, the stretch slope, and no correlation for β. This suggests there is indeed an intrinsic difference between SN Ia originating in active areas versus passive ones. The size of the LsSFR step is ∆ M,LsSFR = 0.163 ± 0.029 mag, and including it in the Tripp luminosity estimator eliminates the need for a further global host mass step. The size of the step is interesting because it is significantly larger than ∆ M, * , and because most SN Ia in the Cepheid calibrators are in regions of high LsSFR, whereas in the rest of the Low-z sample the fraction is about a half. Hence, a potential bias to H 0 could be as much as 3%. In contrast, Jones et al. (2018) find a local stellar mass step of the same size as ∆ M, * and little effect on H 0 , and Riess et al. (2016) found just a 0.1 km s −1 Mpc −1 difference in their H 0 calculation after restricting to SN Ia found in spiral galaxies. In a recent paper, Brout and Scolnic (2021) argue the true factor is host galaxy extinction (correlated with star formation and hence mass), and present evidence of a distribution separated by colour into blue "clean" SN Ia with lower scatter than their red "dusty" counterparts.
The host mass step is absent in the blue sample, and they suggest using only these in cosmological analyses to avoid bias (another possibility is to use near-infrared photometry).
In summary, while there is strong evidence that SN Ia luminosities depend on the local specific star formation rate, there is not yet a consensus on the cause, on whether they constitute one or two populations, or on the level of bias (if any) to H 0 . They are numerous, and therefore offer high relative precision beyond z ∼ 0.1; they have been dubbed the "guard rails" of the Hubble diagram. However, their absolute accuracy in distance ladders is limited by the small number available to calibrate their luminosities, and by some uncertainty about the underlying astrophysics. Work is underway to address both of these issues: for example, the Foundation and Zwicky Transient Facility surveys (Jones et al. 2019; Yao et al. 2019) are targeting a new high-quality Low-z set with local spectroscopy to test environmental effects and extensions to the Tripp estimator. New HST observation cycles aim to extend the number of Cepheid calibrators up from 19, and the JWST will access a greater volume, perhaps increasing the calibration set beyond a hundred for both TRGBs and Cepheids. This would make a 1% calibration from at least two independent sources achievable.

Time delay of lensed quasars
Quasars are bright and can be variable on short timescales. Some are seen to be gravitationally lensed by a foreground galaxy, and each image will have a time delay caused by the extra path length and the gravitational time dilation of general relativity:

$\Delta t_i = \frac{1+z_l}{c} \frac{D_l D_s}{D_{ls}} \left[ \frac{(\vec{\theta}_i - \vec{\beta})^2}{2} - \psi(\vec{\theta}_i) \right]$

where θ i is the sky position of the image, β the (unknown) source position, and ψ( θ i ) the potential of the lensing galaxy integrated along the line of sight. The relevant quantity for H 0 in the above equation is the time-delay distance

$D_{\Delta t} \equiv (1+z_l) \frac{D_l D_s}{D_{ls}}$

where D l , D s and D ls are the angular diameter distances to the lens, to the source, and between the lens and the source respectively, as specified by Equations (11, 14); in the case of D ls the integration is done between the redshifts of the lens and the source. Then, if we have a system with multiple images, the relative time delay ∆t ij = ∆t i − ∆t j of the quasar variability between the images scales with H −1 0 . The proportionality follows from Eqs. (11, 14), with a weak dependence on other cosmological parameters. Acquiring the time delays ∆t i requires extensive observations: relative delays range from weeks to a few months. Errors are typically ∼ 6%, although these are reducing over time. A typical lensing system may have z s ∼ 1.5 and z l ∼ 0.5, so we are probing intermediate cosmological distances. Each single lens system may provide an independent estimate of H 0 . But to do so, we need an idea of the lensing potential ψ( θ i ) above. There is also a tricky degeneracy to deal with, known as the mass sheet transformation. For our purposes, it is sufficient to understand that the same observed time delays may be produced by simultaneously rescaling the dimensionless surface density (convergence) of the lensing system as κ( θ) → λκ( θ) + (1 − λ) (adding a uniform "mass sheet") and rescaling H 0 → λH 0 . For a clear account of the details, see Falco et al. (1985).
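The scaling of the time-delay distance with H0 can be checked numerically in a flat ΛCDM toy model; this sketch uses simple trapezoidal integration and illustrative parameter values.

```python
import numpy as np

C_KMS = 299792.458   # speed of light, km/s

def trapz(y, x):
    """Simple trapezoidal integration."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def comoving_mpc(z, h0=70.0, om=0.3):
    """Comoving distance (Mpc) in flat LambdaCDM (illustrative parameters)."""
    zs = np.linspace(0.0, z, 2000)
    ez = np.sqrt(om * (1.0 + zs) ** 3 + (1.0 - om))
    return (C_KMS / h0) * trapz(1.0 / ez, zs)

def time_delay_distance(z_l, z_s, h0=70.0, om=0.3):
    """D_dt = (1 + z_l) D_l D_s / D_ls, with D = chi / (1 + z) in flat space."""
    chi_l, chi_s = comoving_mpc(z_l, h0, om), comoving_mpc(z_s, h0, om)
    d_l, d_s = chi_l / (1.0 + z_l), chi_s / (1.0 + z_s)
    d_ls = (chi_s - chi_l) / (1.0 + z_s)
    return (1.0 + z_l) * d_l * d_s / d_ls
```

Since every distance scales as 1/H0, D_∆t ∝ 1/H0 exactly; this is how a measured set of time delays, combined with a lens model, translates into a value of H0.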
The ingredients for the modelling are then these: (a) a range of plausible mass profiles for the lensing galaxy, with associated nuisance parameters to be marginalised over; (b) a way to deal with the mass sheet degeneracy, through independent constraints on the mass distribution of the lensing galaxy and along the line of sight. This is a challenging, time-consuming and model-dependent process for each system! Some other choices must be made, such as which of the galaxies along the line of sight will be modelled individually and which will be absorbed into the line-of-sight mass average (Chen et al. 2019). Galactic stellar velocity dispersions provide information on the lensing potential, but only as a function of the assumed galactic mass profile, and with an anisotropy nuisance parameter. Also, the accuracy of velocity dispersions at such distances is ∼ 10%. If there are visible background galaxies, they may also be lensed, and the shape distortions provide extra information on the mass profile. The line-of-sight mass may be estimated from the density of foreground galaxies, by searching computer simulations of large scale structure for similar lines of sight and calculating the line-of-sight density from them (lenses are associated with over-densities, and therefore focusing is more likely). A major advantage is that this modelling can be done blind to the final value of H 0 , eliminating the risk of confirmation bias on the part of the experimenter.
The H0LiCOW team have analysed six such systems so far (Wong et al. 2020), and we show their results below. The main contributions to the errors are uncertainty in the time delay, and the velocity dispersion of the lens. H0LiCOW use flat priors of Ω M ∈ (0.05, 0.5) and H 0 ∈ (0, 150) km s −1 Mpc −1 , to avoid any dependence on other cosmological datasets (although they do assume a flat universe). Their combined result is H 0 = 73.3 ± 1.8 km s −1 Mpc −1 , a final error budget of 2.4%. However, a key assumption in the combination is independence, and there are many modelling features in common between the lenses, in particular the specific mass profiles used. There is also an apparent drift in H 0 values with lensing distance, although the statistical significance is not strong enough to rule out random chance.
A re-analysis has been undertaken by the TDCOSMO team (Birrer et al. 2020). By treating the mass sheet degeneracy as a fully unknown parameter at the population level, their analysis reduces reliance on specific mass profiles. For the same set of systems, they find H 0 = 74.5 +5.6 −6.1 km s −1 Mpc −1 . The central value is consistent with H0LiCOW, but the wider confidence interval reflects the greater freedom allowed in the lens mass distribution, combined with the small number of time-delay lenses. To re-narrow the uncertainty range, the paper investigates the effect of introducing a set of non-time-delay lensing systems from the SDSS survey. The purpose of this larger set is to use their imaging to constrain the mass profiles of the lensing galaxies, on the assumption that they are drawn from the same parent population as the H0LiCOW sample. TDCOSMO then find H 0 = 67.4 +4.1 −3.2 km s −1 Mpc −1 . However, the H0LiCOW sample is selected as those systems capable of producing a clear time-delay signal, which may not have the same characteristics as those selected as having clean shear images. So the assumption that the two samples share the same mass profiles may not be valid.
It is therefore premature to claim that lensed quasars are consistent with either SH0ES or Planck. More data are needed: upcoming surveys by the Vera Rubin Observatory, Euclid and the Nancy Grace Roman telescope will discover several hundred new time-delay systems (so improvements in analysis efficiency will certainly be needed!). Adaptive optics can provide spatially resolved velocity dispersions to improve the mass models. TDCOSMO estimate that 200 extra systems will be needed to constrain H 0 to within 1.2%, which may be within reach in the next five years.

Gravitational waves
The gravitational wave signal emitted by the merger of two compact objects can be used as a self-calibrating standard candle. There are now operational detectors at LIGO Hanford and LIGO Livingston in the USA, Virgo in Italy, and KAGRA in Japan, with a further LIGO India planned (Abbott et al. 2020). The detectors measure the strain amplitude of a gravitational wave by using laser interferometry to detect the minuscule changes in the length of perpendicular beams as a wave passes by. The two sites in the USA allow local seismic vibrations to be filtered out. The wave amplitude is related to the chirp mass M, which is in turn derivable from the waveform calculated for a merger:
$h(t) = \frac{4}{d_L}\left(\frac{G\mathcal{M}_{\rm obs}}{c^2}\right)^{5/3}\left(\frac{\pi f}{c}\right)^{2/3} F \cos\Phi(t), \qquad \mathcal{M} = \frac{(m_1 m_2)^{3/5}}{(m_1+m_2)^{1/5}}$

where f is the frequency, m 1 and m 2 the merging masses, Φ(t) the phase, and h(t) the measured dimensionless strain of the strongest harmonic (Abbott et al. 2016). The rest-frame chirp mass M is redshifted to the observed value M obs = (1 + z obs ) M, and F is a function of the angle between the sky position of the source and the detector arms, and of the inclination i between the binary orbital plane and the line of sight (Arun et al. 2009). Note that as we are measuring amplitude rather than energy flux, h(t) ∝ d −1 L . In a loose sense, every wave cycle is a measurement of the chirp mass M, as it sets f and ḟ, although in practice the full waveform is fitted. The relative precision is floored by the sensitivity of the strain measurement, currently around 5%. If the redshift and angles were known, d L and hence H 0 would be determined to the same precision.
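The chirp mass itself is a one-liner:

```python
def chirp_mass(m1, m2):
    """Chirp mass M_c = (m1 m2)^(3/5) / (m1 + m2)^(1/5), in the same units
    as the inputs. The detector-frame (redshifted) value is (1 + z) * M_c."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2
```

For an equal-mass binary the chirp mass reduces to m / 2^(1/5) ≈ 0.87 m, so a 1.4 + 1.4 M⊙ neutron star binary has a chirp mass of about 1.22 M⊙, in the vicinity of the detector-frame value quoted below for GW170817.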
When a binary neutron star (BNS) system merges, there is an accompanying burst of light from matter outside the combined event horizon; for this reason, it is known as a "bright siren". If the flash can be observed, the host galaxy is identified and one can use its redshift in Eq. (41). The event GW170817 was just such a BNS merger (LIGO Scientific Collaboration and Virgo Collaboration 2017b). The gravitational wave was measured at Hanford and Livingston, which was enough to localise the sky position to 28 deg 2 (see below for how). Given the search region, an optical counterpart was found in NGC 4993 at a distance of ∼ 40 Mpc 10 . Around 3000 cycles of the wave resolved the chirp mass in the detector frame as M = 1.197 M ⊙ to an accuracy of 1 part in 10 3 , consistent with a BNS merger. The main remaining uncertainty is then the inclination angle i, which was marginalised over with a flat prior on cos i.

For black hole mergers, no optical counterpart is generated, and these are called "dark sirens". However, it is still possible to constrain H 0 using them if a probable redshift can be estimated. To do this, the relative amplitudes and time delays between detectors located around the globe are used to approximately determine the sky position (Soares-Santos et al. 2019). For example, if 3 detectors observe the wave with perfect accuracy, the two independent time delays and three measured amplitudes will in turn determine the two sky position angles, two polarisation amplitudes, and one phase lag between the polarisations. Hence, simultaneous detections of an event are essential in narrowing the size of the region on the sky where the source is located. A reasonable prior for H 0 will constrain the redshift range, and hence determine a localisation volume.
Given a suitably complete galaxy catalog with sufficient sky coverage (the best available currently being SDSS and DES), galaxies within this volume can be averaged over with a suitable weighting to determine a value for H 0 (Schutz 1986). Although each individual event is not very accurate, there are many more black hole mergers than neutron star mergers, so the errors are competitive in aggregate. It is also straightforward to mix dark and bright sirens to produce a combined result (Palmese et al. 2020).
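The statistical (Schutz) method can be sketched as follows, using a hypothetical mini-catalog of candidate hosts and the low-redshift Hubble law d_L ≈ cz/H0; all numbers are illustrative, and a real analysis marginalises over peculiar velocities, weights, and selection effects.

```python
import numpy as np

C_KMS = 299792.458

def trapz(y, x):
    """Simple trapezoidal integration."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Hypothetical localisation volume: ten candidate host redshifts with equal
# weights (illustrative values, not a real catalog).
z_gal = np.linspace(0.030, 0.040, 10)
w_gal = np.full(z_gal.size, 1.0 / z_gal.size)

# GW likelihood: Gaussian in luminosity distance (illustrative values).
d_gw, sigma_d = 150.0, 15.0   # Mpc

h0_grid = np.linspace(40.0, 120.0, 801)
post = np.zeros_like(h0_grid)
for z, w in zip(z_gal, w_gal):
    d_model = C_KMS * z / h0_grid        # low-z Hubble law: d_L ~ c z / H0
    post += w * np.exp(-0.5 * ((d_gw - d_model) / sigma_d) ** 2)
post /= trapz(post, h0_grid)             # normalise, assuming a flat H0 prior
```

Each candidate host contributes a peak at H0 ≈ cz/d_GW; summing over the catalog and over many events gradually concentrates the posterior.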
As each event's inclination angle is independent, averaging over N events will improve the errors relative to a single event by a factor of N −1/2 . The main limiting factor is then detector calibration and sensitivity, which determines the event rates. Work is underway at the LIGO sites to improve this, using quantum engineering techniques such as squeezed light 11 . Chen et al. (2018) estimate a 5 yr observing run by the upgraded LIGO, Virgo, KAGRA and LIGO India detectors will be enough to measure H 0 to 1% precision by 2030.

Fig. 13: H 0 posterior distribution for the events GW190814 and GW170814, which were localised to regions in the DES survey footprint containing ∼ 1,800 possible host galaxies. The light and dark blue lines represent the dark siren approach for each event, which can be compared to the grey dashed line for GW170817, which was localised to one galaxy by its electromagnetic counterpart. The estimate for this bright standard siren can be improved by the addition of the dark sirens, as shown by the dark red line. The posteriors of Planck (Planck Collaboration et al. 2020) and SH0ES are shown in purple boxes as a guide. Figure from Palmese et al. (2020).
In summary, the appeal of gravitational waves is that, if one takes general relativity to be correct, they are self-calibrating systems, independent of the distance ladder and with minimal astrophysics input. Event rates and improvements in detector calibration should be sufficient to converge to ∼ 2% precision within the next decade.
11 Squeezed light is a state of light where the distribution of the Heisenberg uncertainty between observables is controlled for a specific application. In this case, the aim is to reduce photon shot noise at the expected frequency of the gravitational waves, and it is a nice crossover from techniques developed for quantum engineering to fundamental physics.

Baryon acoustic oscillations
Baryon acoustic oscillations (BAO) are the relic of density fluctuations propagating in the pre-recombination universe. The baryons in an initial localised seed of overdensity (such as would originate from inflation) are subject to radiation pressure and expand outwards at the speed of sound c s , leaving the dark matter behind. At recombination, temperatures drop enough for protons and electrons to combine into neutral hydrogen, and the Thomson scattering rate quickly falls below the Hubble expansion rate. At the drag redshift z drag shortly after, baryons are released from radiation pressure, and the baryon overdensity shell is "frozen in" at a characteristic distance from the central dark matter overdensity. This distance in comoving coordinates is what is referred to as the sound horizon r d . As the universe continues to evolve, both the baryon shell and the central dark matter overdensity attract further gravitational infall of matter, forming galaxies. Thus, the sound horizon becomes visible as a characteristic (statistical) physical separation between galaxies. The sound speed is $c_s = c/\sqrt{3(1+R)}$, which depends on the baryon-to-photon ratio through R = 3ρ b /4ρ γ . z drag has a weak cosmological dependence, and Planck data give z drag = 1059 ± 0.3. The key point is that, given a cosmological model, r d is calculable, and its scale is large enough that the evolution of structure from z drag to z ∼ 1 is nearly linear. Thus we can compare observations of galaxy clustering in the late universe to r d . Armed with data for r d , we can use Eq. (42) above to solve for H 0 in our model.

The theory of measuring BAO was largely worked out by the mid-1990s (Feldman et al. 1994). One assumes galaxies are a Poisson sample of the relative matter density field 1 + δ( r), so that P(volume element δV contains a galaxy) = n̄ (1 + δ( r)) δV, where n̄ is the expected mean space density of galaxies.
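Given a model, the sound horizon integral r_d = ∫ c_s(z)/H(z) dz from z_drag to infinity is straightforward to evaluate numerically; a minimal sketch with illustrative Planck-like parameter values:

```python
import numpy as np

C_KMS = 299792.458

def trapz(y, x):
    """Simple trapezoidal integration."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def sound_horizon_mpc(h=0.676, omega_m=0.31, omega_b_h2=0.0224, z_drag=1059.0):
    """r_d = int_{z_drag}^inf c_s(z) / H(z) dz in a flat model.
    Parameter values are illustrative Planck-like numbers."""
    omega_g_h2 = 2.47e-5                              # photon density today
    omega_r_h2 = omega_g_h2 * (1.0 + 0.2271 * 3.046)  # + massless neutrinos
    z = np.logspace(np.log10(z_drag), 8.0, 20000)
    r_b = 0.75 * (omega_b_h2 / omega_g_h2) / (1.0 + z)   # R = 3 rho_b / 4 rho_gamma
    c_s = C_KMS / np.sqrt(3.0 * (1.0 + r_b))
    # Lambda is negligible at z > z_drag, so keep only matter and radiation
    h_z = 100.0 * np.sqrt(omega_m * h ** 2 * (1.0 + z) ** 3 + omega_r_h2 * (1.0 + z) ** 4)
    return trapz(c_s / h_z, z)
```

With these inputs the integral lands near the commonly quoted ∼ 147 Mpc, and one can see directly how raising the baryon density lowers c_s and hence shrinks r_d.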
It is possible to work either with the two-point correlation function $\xi(\vec r) \equiv \langle \delta(\vec r\,')\,\delta(\vec r\,' + \vec r)\rangle$ or, equivalently, with its Fourier transform, the power spectrum $P(\vec k) = \int \xi(\vec r)\, e^{-i\vec k\cdot\vec r}\, d^3r$. It might seem preferable to use the power spectrum, as we are looking for a characteristic wavelength, and the modes of the power spectrum are independent if the density field is Gaussian. Also, the galaxy survey footprint in real space becomes an easily understood point spread function in Fourier space. In practice though, because of issues with binning such as cut-offs in Fourier space, most research papers calculate both the correlation function and the power spectrum. Adjusting n̄ above can correct for the bias introduced by edge effects of the angular and redshift cuts of the survey.
Expanding the power spectrum in spherical harmonics, the monopole determines d 2 A H −1 (z), and the quadrupole determines d A H(z) (Padmanabhan and White 2008) in a given redshift bin centred at z. With ∆θ and ∆z the density peak separations transverse to and along the line of sight, we can solve for d A and H. We start with a mock galaxy catalog constructed in a fiducial cosmology seeded with density perturbations. This is matched to the real data with stretches α ⊥ and α ∥ , where the combination $\alpha = \alpha_\perp^{2/3}\alpha_\parallel^{1/3}$ is the monopole (isotropic) dilation and the ratio α ⊥ /α ∥ sets the quadrupole (anisotropic) warping. There is a suggestion of circularity in this: a random catalog constructed in ΛCDM (with small scale power suppressed) is "tweaked" to compare to the real sky, to justify ΛCDM. However, the method is essentially perturbative: if the real cosmology is far from the fiducial one a bias might be introduced, but if it is close, it should work well.
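The stretch parameters relating the true and fiducial cosmologies follow the standard BAO dilation definitions, α_⊥ = D_M r_d,fid / (D_M,fid r_d) and α_∥ = H_fid r_d,fid / (H r_d); a sketch in flat ΛCDM (illustrative parameter values):

```python
import numpy as np

C_KMS = 299792.458

def trapz(y, x):
    """Simple trapezoidal integration."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def e_of_z(z, om):
    """Dimensionless Hubble rate in flat LambdaCDM."""
    return np.sqrt(om * (1.0 + z) ** 3 + (1.0 - om))

def comoving_mpc(z, h0, om):
    zs = np.linspace(0.0, z, 2000)
    return (C_KMS / h0) * trapz(1.0 / e_of_z(zs, om), zs)

def bao_stretches(z, h0, om, rd, h0_fid, om_fid, rd_fid):
    """alpha_perp = D_M r_d_fid / (D_M_fid r_d); alpha_par = H_fid r_d_fid / (H r_d)."""
    dm, dm_fid = comoving_mpc(z, h0, om), comoving_mpc(z, h0_fid, om_fid)
    h_z, h_z_fid = h0 * e_of_z(z, om), h0_fid * e_of_z(z, om_fid)
    return dm * rd_fid / (dm_fid * rd), h_z_fid * rd_fid / (h_z * rd)
```

If the true cosmology equals the fiducial one, both stretches are unity; a lower true H0 (at fixed r_d) pushes both above unity, which is the sense in which BAO constrain the combination H(z) r_d rather than H0 alone.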
A technique called reconstruction is used to sharpen the BAO signal (Eisenstein et al. 2007; Anderson et al. 2012), and we describe it briefly here as we will mention it in Sect. 4, when we discuss modified gravity. The observed positions of galaxies will have moved somewhat from their original positions "on" the sound horizon, due to their subsequent infall towards overdensities. This has the effect of blurring the peak of the correlation function, and reduces precision. Reconstruction reverses the infall in the following way: (a) take a volume and smooth out small scale structure < 20 Mpc; (b) embed it in a larger-scale random structure to remove edge effects; (c) estimate the displacement q of each galaxy via the continuity equation ∇ · q = −δ gal /b gal , where δ gal is the fractional galaxy overdensity and b gal the galaxy bias; (d) shift the galaxies by − q, plus an additional shift for redshift space distortions; (e) do the same to the mock catalog.
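The displacement step (c) can be sketched in one dimension, where the continuity equation is solved trivially in Fourier space (the smoothing and redshift-space-distortion steps are omitted, and all grid and amplitude values are illustrative):

```python
import numpy as np

# 1D sketch of reconstruction step (c): solve dq/dx = -delta / b in Fourier
# space. For a plane wave delta = A cos(k0 x) the analytic displacement is
# q(x) = -(A / (b k0)) sin(k0 x).
n, box = 512, 1000.0                             # grid points, box length (Mpc)
x = np.linspace(0.0, box, n, endpoint=False)
k0, amp, bias = 2.0 * np.pi * 4.0 / box, 0.05, 2.0
delta = amp * np.cos(k0 * x)                     # toy galaxy overdensity field

k = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)   # wavenumbers, rad/Mpc
delta_k = np.fft.fft(delta)
q_k = np.zeros_like(delta_k)
nz = k != 0
q_k[nz] = -delta_k[nz] / (1j * k[nz] * bias)     # from i k q_k = -delta_k / b
q = np.fft.ifft(q_k).real                        # displacement; galaxies are shifted by -q

q_analytic = -(amp / (bias * k0)) * np.sin(k0 * x)
```

In three dimensions the same division by ik becomes multiplication by -i k/k², applied to the smoothed density field, but the logic is identical.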
Historically, the first galaxy surveys to report a BAO detection were the Two Degree Field survey (Cole et al. 2005) and the Sloan Digital Sky Survey (Eisenstein et al. 2005), followed by WiggleZ (Blake et al. 2011, 2012) and others. We discuss here results from the Baryon Oscillation Spectroscopic Survey (BOSS) (Anderson et al. 2012). BOSS spectroscopically surveyed 1.5 million luminous galaxies over 10,000 deg 2 between 0.2 < z < 0.7, with angular separations greater than 62″ (which is 0.5 Mpc at z ∼ 0.7). Alam et al. (2017) analyse the BAO signal in Data Release 12. The galaxies are divided into three bins each of redshift and cos θ. From a fiducial cosmology with h = 0.676, they find α = 1.042 ± 0.016 after reconstruction. Their result is summarised as a set of three values for d m (z i ) and H(z i ) × r d /r d,fid for z i = 0.38, 0.51, 0.61, with precision ∼ 2.5%. The construction of mock galaxy catalogs is a key part of this analysis. An important parameter is b gal , the (assumed linear) ratio of the galaxy overdensity to the matter overdensity. Constructing a mock galaxy catalog by brute-force N-body simulation is prohibitively expensive. Instead, construction starts with a fast, approximate gravity solver for the matter density field, seeded with initial fluctuations and with resolution down to halo size. Galaxies are inserted into the matter halos using the bias ratio, while preserving the two- and three-point correlation functions. BOSS assumed b gal is constant, but more sophisticated models exist; for example, bias tends to be higher for red galaxies (see for example Zehavi et al. 2011). Attention also needs to be paid to what types of galaxies are being counted: massive luminous galaxies of the type measured by BOSS reside in high density nodes of the cosmic web, whereas emission line galaxies are strung out along the filaments. For further details, see Kitaura et al. (2016).
BOSS DR12 uses two independent mock catalogs to mitigate against biases introduced by simulation effects.
BAO in isolation do not constrain H_0, but rather the combination of H(z) and the sound horizon r_d described above. To obtain H_0, we must either fit a cosmological model with constraints on its density parameters supplied by other datasets (for example SN Ia, CMB, or even the full shape of the matter power spectrum), or we must supply a prior for r_d (which is often obtained from the CMB, but see also below). We describe three independent results below. Alam et al. (2017) used ΛCDM calibrated to a combination of the Planck power spectrum and the JLA sample of SN Ia. They find H_0 = 67.6 ± 0.5 km s⁻¹ Mpc⁻¹. This value is consistent with the Planck result (Planck Collaboration et al. 2020), but is not independent of it. Lemos et al. (2019) use a parametric formula for H(z), in conjunction with a WMAP-derived prior for r_d and the Pantheon SN Ia sample, to derive H_0 = 67.9 ± 1.0 km s⁻¹ Mpc⁻¹. This result is now independent of Planck, and their parametric H(z) allows for deviation from ΛCDM by separating early- and late-universe expansion histories. Addison et al. (2018) use data (almost) independent of the CMB in ΛCDM. They determine Ω_m = 0.292 ± 0.02 by combining galaxy and Lyα BAO 12. Ω_r is derived from the CMB temperature as measured by COBE/FIRAS and numerous balloon experiments (Fixsen 2009). Setting Ω_k = 0 determines Ω_Λ = 1 − Ω_m − Ω_r. To get the sound horizon, we now just need the baryon density Ω_b (see Eq. (42)). Without using the CMB power spectrum, the best available way to get it is from primordial deuterium abundances. In Big Bang nucleosynthesis (BBN), deuterium is burned to create ⁴He, with some uncertainty due to the reaction rate of d(p,γ)³He. The reaction rate increases with physical baryon density, so [D/H] decreases as Ω_b h² increases. The authors use the primordial [D/H] abundance measured from metal-poor damped Ly-α systems, which is (2.547 ± 0.033) × 10⁻⁵. Putting this all together, they derive H_0 = 66.98 ± 1.18 km s⁻¹ Mpc⁻¹.
The only CMB data that has been used is the temperature, and the result corroborates the Planck value 13 . Repeating the analysis with galaxy data from the Dark Energy Survey (DES) Year 1 produces a consistent result (DES Collaboration 2018).
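The key numerical step in the chain just described, computing r_d from physical densities alone, can be sketched as below. The drag redshift, photon density and fiducial ω values are illustrative Planck-like inputs, not the exact pipeline of any of the papers above.

```python
import numpy as np
from scipy.integrate import quad

C = 299792.458        # speed of light, km/s
W_GAMMA = 2.473e-5    # photon density Omega_gamma h^2 for T_0 = 2.7255 K
N_EFF = 3.046         # standard-model relativistic species

def sound_horizon(w_b=0.02237, w_m=0.1430, z_drag=1059.9):
    """Sound horizon r_d in Mpc: integral of c_s / H from the drag epoch to
    infinity. Only physical densities (omega = Omega h^2) enter, so r_d can
    be computed without knowing h."""
    w_r = W_GAMMA * (1.0 + 0.2271 * N_EFF)  # photons + neutrinos
    def integrand(z):
        R = 0.75 * (w_b / W_GAMMA) / (1.0 + z)   # baryon loading of the plasma
        c_s = C / np.sqrt(3.0 * (1.0 + R))       # plasma sound speed
        H = 100.0 * np.sqrt(w_m * (1 + z) ** 3 + w_r * (1 + z) ** 4)  # km/s/Mpc
        return c_s / H
    return quad(integrand, z_drag, np.inf)[0]
```

For these inputs the integral comes out near 147 Mpc; a BAO measurement of, say, H(z) r_d then yields H(z) directly, and hence H_0 given an expansion history.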
These results are consistent with more recent ones. DES Y3 survey data have been used to derive a set of three correlation measures (referred to as "3x2pt"): the angular correlation of galaxy positions with each other, with the tangential shear of their shapes caused by weak lensing, and the cross-correlation of the two components of their ellipticities, also due to weak lensing. Combining this with Planck, SN Ia data, BAO and redshift space distortions, the DES Collaboration (2021) find H_0 = 68.0 +0.4 −0.3 km s⁻¹ Mpc⁻¹ when fitting to the ΛCDM model. The eBOSS survey, the culmination of more than 20 years of survey work at the Apache Point Observatory, finds H_0 = 68.19 ± 0.37 km s⁻¹ Mpc⁻¹ when fitting ΛCDM to their data plus the additional probes above (Alam et al. 2021). These seem to be the tightest constraints on H_0 yet published.
We also note that the logic explained above to derive H_0 can be reversed: BAO can be combined with an H_0 prior from nearby-universe results, with the sound horizon r_d instead treated as a free parameter (Bernal et al. 2016). The r_d values so inferred are in tension with the CMB, and we return to this point in Sect. 4.2.
In summary, BAO serve as a standard ruler in the late universe that is almost independent of astrophysical assumptions. They may be used in extended models beyond ΛCDM, albeit with the caveat that they have been derived perturbatively against a ΛCDM background simulation. The two main data sets of BOSS and DES are soon to be joined by others. The Dark Energy Spectroscopic Instrument is operational, and the ESA survey satellite Euclid is scheduled for launch in 2022, which is also when science operations of the Vera Rubin Observatory LSST in Chile commence. All of these surveys are complementary and will increase the precision and range of BAO measurements. Combined with deuterium abundances, BAO are an important corroboration of the Planck and WMAP results for H_0.

12 In the analysis of Addison et al. (2018) the two data sets are in mild tension; however, more recent data show a greater level of compatibility (see for example Figure 5 of Alam et al. 2021). 13 Standard ΛCDM is assumed, and in particular that the expansion rate around the time of BBN as constrained by He abundances (parametrised by the effective number of relativistic species in the early universe N_eff) is consistent with the CMB.

Cosmic microwave background
The CMB power spectrum carries the imprint of the same acoustic oscillations we described for BAO, sampled on the surface of last photon scattering. The Planck 2018 value of H_0 = 67.4 ± 0.5 km s⁻¹ Mpc⁻¹ (Planck Collaboration et al. 2020) is notable for its precision, and is consistent with previous results such as Boomerang (64 ± 10 km s⁻¹ Mpc⁻¹; Percival et al. 2002) and WMAP (70.0 ± 2.2 km s⁻¹ Mpc⁻¹; Hinshaw et al. 2013). More recently, the Atacama Cosmology Telescope (ACT) finds 67.9 ± 1.5 km s⁻¹ Mpc⁻¹ (Aiola et al. 2020). There is, however, a moderate tension between Planck and the South Pole Telescope (SPT) result of 71.6 ± 2.0 km s⁻¹ Mpc⁻¹, and the origin of the difference is not yet clear (Henning et al. 2018; Handley and Lemos 2021).
Two questions are immediately raised: how is H 0 derived from a power spectrum sampled on the two-dimensional sky of the early universe, and why is the Planck value so precise?
To answer the former, we need to explain how the CMB power spectrum determines cosmological parameters. The spectrum is created by primordial fluctuations that evolve as a function of their scale and the sound speed, until recombination. Whilst this radiation is highly isotropic, we can obtain cosmological results from measurements of small anisotropies in both the temperature and the polarization of CMB photons. The former of these provides the largest amount of cosmological information, and can in fact measure the Hubble constant at high enough accuracy to be in tension with direct measurements without relying on polarization. These temperature anisotropies (∆T ≡ T(n̂) − T_0) can be expanded in a basis of spherical harmonics with coefficients a_ℓm. The power spectrum of the coefficients is commonly expressed as D_ℓ = ℓ(ℓ+1)C_ℓ/2π, where Ĉ_ℓ = (2ℓ+1)⁻¹ Σ_m |a_ℓm|² is the estimator of the power spectrum (we have only one CMB, so averaging over m is a way to estimate the true "universe average" we want). The spectrum is shown in Fig. 16 and contains all the statistical information of the temperature anisotropies, as long as these are Gaussian (no evidence supporting non-Gaussianity in the CMB has been detected at the time of writing). This power spectrum has three main features:
- A large-scale plateau at scales (ℓ < 100) that are not affected by post-recombination physics, and therefore reflect the primordial power spectrum. As noted above, we have just one "sampling" of the random seeds, so the amount of information we can extract from low multipoles is limited by cosmic variance ∝ (2ℓ+1)⁻¹ᐟ².
- Acoustic oscillations, which depend on the sound horizon size r_s(z_*). In flat ΛCDM, a_eq = Ω_r/Ω_m sets the balance of matter and radiation, Ω_b h² determines c_s, and both determine a_* = a(z_*), the scale factor at the surface of last scattering. These create the peaks in the spectrum.
- Silk damping, which is photon diffusion from hotter regions to colder ones, suppressing small-scale power (Silk 1968). The mean free path of a photon is λ = 1/(σ_T n_e), where n_e is the number density of charged particles and σ_T the Thomson cross-section, so photons can diffuse a length r_Silk ∼ √(λ/H). As the sound horizon r_s ∝ 1/H, then r_Silk/r_s ∝ √(λH) ∝ √(H/n_e). Silk damping creates the downward slope in the spectrum, so this is sensitive to n_e ∝ Ω_b.
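The cosmic-variance limit on low multipoles can be illustrated with a toy simulation of the C_ℓ estimator; the Gaussian a_ℓm draws and the number of skies are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def cl_estimates(ell, c_true=1.0, n_skies=4000):
    """Simulate the estimator C_hat = sum_m |a_lm|^2 / (2l+1) for many
    independent skies; a_l0 is real, m>0 modes are complex, and m<0 modes
    are conjugates of the m>0 ones, so they duplicate the m>0 power."""
    a_l0 = rng.normal(0.0, np.sqrt(c_true), size=n_skies)
    re = rng.normal(0.0, np.sqrt(c_true / 2), size=(n_skies, ell))
    im = rng.normal(0.0, np.sqrt(c_true / 2), size=(n_skies, ell))
    return (a_l0**2 + 2.0 * np.sum(re**2 + im**2, axis=1)) / (2 * ell + 1)

# The fractional scatter of C_hat falls as sqrt(2 / (2l + 1)): low
# multipoles are irreducibly noisy because we have only one sky.
```

The scatter measured across the simulated skies tracks the √(2/(2ℓ+1)) prediction, which is the cosmic-variance floor no experiment can beat.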
With all of this, how does the CMB constrain the Hubble parameter? Firstly, we can measure the acoustic scale length r_s, which serves as a standard ruler. As described in Eq. 50, this can be calculated as a function of Ω_r, Ω_m h² and Ω_b h². The radiation density Ω_r is determined by the temperature of the CMB. Ω_m h² can be estimated from the integrated Sachs-Wolfe effect (Sachs and Wolfe 1967), the redshift of CMB photons through wells or hills of gravitational potential during eras of non-matter domination, in the low multipoles of the power spectrum. Finally, the relative amplitudes of the acoustic peaks in the CMB serve to measure the ratio Ω_b/Ω_m. Once we have computed the acoustic scale, we need a measurement of its angular size θ_s, which is given by the spacing of the acoustic peaks 14. In a flat ΛCDM model, this provides a measurement of the matter density Ω_m, which combined with the measurement of Ω_m h² gives an estimate of the Hubble parameter. This is illustrated graphically in Fig. 15. Planck measured full-sky temperature T and polarisation E fluctuations in 9 frequencies, with angular resolution up to 5 arcmin (ℓ ≈ 2000) and sensitivity 2 × 10⁻⁶ µK. Before calculating the TT, TE and EE power spectra, foreground effects must be calculated and removed, and much of the work done by the Planck collaboration between successive data releases has been to refine this foreground modelling. The main foregrounds are as follows.

14 This is to speak loosely: if we just knew the location of the peaks to 1% accuracy then the accuracy of H_0 would be only ∼ 7%. Using the entire spectrum is the driver for the precision. For a very accessible account of the CMB power spectrum with semi-analytic equations we recommend Mukhanov (2004).

- Peculiar velocity of the earth-sun system induces a dipole which is easily subtracted. The annual motion of the earth is actually helpful to calibrate the detector response.
- Galactic synchrotron and free-free emission caused by cosmic rays interacting with the magnetic field and ISM of the MW. This has a non-thermal frequency spectrum (in effect its colour is different to the CMB).
- Galactic dust emission, whose contribution to the spectrum is also subtracted by estimating its mapped "colour" difference to the CMB.
- Point sources of microwave emission are masked. Sky masking introduces some correlation between multipoles.
- Sunyaev-Zel'dovich (SZ) effect, where CMB photons interacting with hot intracluster gas are Doppler-shifted (the kinetic SZ effect) by bulk motion or up-scattered (the thermal SZ effect) (Sunyaev and Zel'dovich 1972). The former effect does not alter the black-body spectrum, but the latter does. The kinetic SZ effect is sensitive to the epoch of re-ionisation at z ∼ 8 (Ade et al. 2014a). The thermal SZ effect may be exploited to produce a cluster map from which information on inhomogeneity can be extracted (Ade et al. 2014b). Modelling of intracluster gas predicts a spectral template (which peaks around ℓ ∼ 2000) to subtract.
- Weak lensing of the CMB by matter. Weak lensing peaks between 1 < z < 2, and may be used as a probe of the distribution of foreground matter. The effect increases as we move to smaller scales, particularly ℓ > 2000 (Zaldarriaga and Seljak 1998). There are two signals of lensing: (a) its non-Gaussianity, which can be measured by a four-point correlation function; (b) a smoothing effect on the peaks and troughs of the spectrum. Both can be calculated as a function of Ω_m. However, there is more smoothing apparent in the spectrum than is predicted by the correlation function, so a phenomenological parameter A_lens, defined by Ω_m,spectrum = A_lens Ω_m,correlation, has been introduced to capture their relative amplitude. A_lens ≈ 1.2 ± 0.1 is seen in both Planck and South Pole Telescope (SPT) data (Bianchini et al. 2020), but appears to be absent in the Atacama Cosmology Telescope (ACT) data (Aiola et al. 2020).
The latter two are perhaps better characterised as secondary anisotropies, as they have been used to derive cosmological information in their own right. The secondary anisotropies then provide a consistency check of certain parameters derived from the primary anisotropies, if the modelling is correct. We examine A_lens in greater depth below. Cross-checks on the effectiveness of foreground modelling are done by switching frequency bands, using alternative astrophysical models, and numerical simulations of parameter recovery from random CMB backgrounds (Ade et al. 2016; Planck Collaboration et al. 2020). The spectrum is then compared to one computed in a standard code such as CMBFAST (Seljak and Zaldarriaga 1996), CAMB (Lewis et al. 2000) or CLASS (Blas et al. 2011), and a posterior for each cosmological parameter is computed. In each check, no serious discrepancy with the main results was found. Although each C_ℓ individually has 7% noise, there are 2,000 of them, and the error is reduced further by the lensing signal. The binned spectrum and residuals are shown in Fig. 16. We now consider two questions regarding the data. Is Planck self-consistent? A feature which is (barely) visible by eye in Fig. 16 is an oscillating residual pattern lining up with the spectral peaks, indicating the Planck spectrum is slightly tilted versus theory (see Fig. 24 of Planck Collaboration et al. (2020) for an enlarged view). As each multipole is nearly independent, we can split the spectrum and run the analysis on each half. Addison et al. (2016) find that A_lens is equivalent to a drift of cosmological parameters with scale in the CMB. They fix A_lens = 1 and compare H_0 for ℓ < 1000 and ℓ > 1000, finding H_0 = 69.7 ± 1.7 km s⁻¹ Mpc⁻¹ and H_0 = 64.1 ± 1.7 km s⁻¹ Mpc⁻¹ respectively. Allowing A_lens = 1.3, or adding a tilt parameter, restores concordance.
We caution that A_lens is not a physical parameter connected to lensing; it is a "fudge factor" that resolves internal inconsistency in CMB data. It does not resolve the H_0 tension, but does resolve the matter power amplitude σ_8 tension with galaxy survey data such as the Dark Energy Survey and Kilo-Degree Survey. Could it be a foreground effect? Its value is relatively stable across band channels and sky masks, which argues that it is not. The Planck Collaboration followed up on this curiosity (Planck Collaboration et al. 2020). They note that a dip in the power spectrum at ℓ < 30 pulls H_0 higher; this may account for the lower-resolution WMAP measurement being somewhat higher than Planck. The dip between 1420 ≤ ℓ ≤ 1480 mimics lensing but may be an unaccounted foreground effect. Potential explanations such as negative curvature density Ω_k < 0 and modified gravity are investigated, but no convincing evidence is found in favour of these models when BAO data are added to the fit. A re-analysis of the Planck data using a different foreground subtraction methodology found that A_lens decreased from 1.26 to 1.06 as the data were broadened from the temperature-only power spectrum to include polarisation and lensing data. Further, Planck data pass other consistency tests with the baseline ΛCDM model where A-type parameters may be introduced, such as the expected magnitude of the lensing correlation function and the Sachs-Wolfe effect. Hence, the preferred explanation of the Planck Collaboration is that A_lens is a statistical fluke. Nevertheless, while A_lens does seem to vary over the sky, it is perhaps of concern that it seems to be larger for full-sky Planck data and is a roughly 2σ effect.
Is Planck consistent with other current CMB measurements? The SPT covers a 6% patch of the southern sky at a resolution 6× that of Planck, finding H_0 = 71.3 ± 2.1 km s⁻¹ Mpc⁻¹ (Henning et al. 2018). SPT is limited to ℓ > 650 due to its low sky coverage and atmospheric effects, and has greater noise, but when the results are compared on a like-for-like basis ("Planck-in-patch" with SPT for 650 < ℓ < 1800), they are consistent with Planck. The analyses are independent and so argue against systematic errors (in this patch and multipole range) in either experiment. However, including higher SPT multipoles in the 150 GHz spectrum causes H_0 to drift higher: H_0 ≈ 74 ± 3 km s⁻¹ Mpc⁻¹ for 650 < ℓ < 3000 (Aylor et al. 2017). While this is the opposite direction to Planck data, this may be due to the ℓ < 30 effect in Planck noted above. SPT confirm the Planck result of a greater lensing effect than predicted, and also note a spectral tilt versus the best-fit ΛCDM. However, the trend is not apparent in 143 GHz data, so it is not at all clear if this is a physical effect, rather than merely chance or systematics. The ACT Data Release 4 derives cosmological parameters from a 6,000 deg² patch of the southern sky up to ℓ ∼ 4000, finding H_0 = 67.9 ± 1.5 km s⁻¹ Mpc⁻¹ (Aiola et al. 2020); however, ACT appears to be in 2.6σ tension with Planck (Handley and Lemos 2021), which appears to be caused by differing physical baryon densities.
In summary, Planck 2018 results for H_0 are consistent with previous and current independent CMB data. The high precision for H_0 is a consequence of Planck's high resolution, lack of atmospheric interference, and full sky coverage. Checks have been performed on the codes, the methods used to remove foregrounds, and beam effects. Internal inconsistencies hint at new physics or foreground effects at small angular scales, manifesting either as a tilt or a smoothing of the spectrum, and non-physical parameters like A_lens have been introduced to capture them. However, after their introduction the results for H_0 are not materially affected, and remain in tension with late-universe results. Nevertheless, the persistence of these effects across differing sky patches, multipole ranges and experiments makes them less likely to be a statistical fluke. It is hoped next-generation CMB experiments with resolution up to ℓ = 5000 will shed further light on this.

Other methods
Promising methods that we did not have space to discuss are Surface Brightness Fluctuations (Tonry and Schneider 1988; Blakeslee et al. 2021; Khetan et al. 2021), Cosmic Chronometers (Jimenez and Loeb 2002; Moresco et al. 2018), the Tully-Fisher relation (Brent Tully and Fisher 1976; Kourkchi et al. 2020; Schombert et al. 2020), Type II supernovae (de Jaeger et al. 2020), HII galaxies (Terlevich and Davies 1981; Arenas et al. 2018), and Galaxy Parallax (Croft 2021). Quasars (Risaliti and Lusso 2019) and GRBs (Schaefer 2007) offer the prospect of extending the Hubble diagram up to z ∼ 5, further testing ΛCDM. However, despite improvements in characterising their intrinsic luminosity from observables (for example X-ray and UV flux in the case of quasars), it remains challenging at present to regard them as standard candles. We refer the reader to the above for the latest work in these fields.

Could new physics explain the tension?
We have now reviewed the main recent results on H 0 , and while in the spirit of scientific scepticism we have identified potential sources of systematic error, we hope it is also clear how much effort has been made to root out potential biases. Therefore, a possibility that should be taken seriously is that the tension is real, and a sign of new physics.
Some authors have interpreted the disagreement between CMB results and Cepheids as "early versus late" tension. In this view, ΛCDM with its fluids of radiation, standard model neutrinos, cold dark matter, baryons and dark energy is not the right model to derive H(z = 0) from the apparent H(z ∼ 1100). If late-time physics were changed, H 0 derived from the early universe might be reconciled to the local one. Alternatively, a change to pre-recombination physics would alter our calculation of the sound horizon r s , and hence we would need to change distances to retrieve the same angular sizes of the CMB temperature fluctuations. More radically, some authors have argued our local H 0 is different from an average over randomly placed observers.
The essential problem for model builders is this: in other respects, ΛCDM fits the data very well across a huge span of the history of the universe. The CMB is not merely a "snapshot" of the universe, but carries the imprint of several epochs. The positions and heights of the peaks are sensitive to the content 15 between z ∼ 10⁵-10³, the slope at high ℓ depends on the baryon-to-photon ratio close to recombination at z ∼ 1100, and CMB lensing by matter peaks between z ∼ 1-2. At earlier times, ΛCDM makes an accurate prediction of primordial element abundances, especially deuterium. At late times, galaxy surveys are consistent with the evolution of perturbations in a ΛCDM background 16, and place tight limits on deviation from spatial flatness in the mid to late universe. The shape of H(z) can be read off SN Ia luminosities between 0.01 < z < 1.4, and is also consistent with ΛCDM.
What we want is a model capable of modifying H_0 by the right amount, without disrupting ΛCDM's other successful predictions. The model must be testable, compatible with particle physics results, and ideally not fine-tuned. Two serious problems must be avoided. First, by dint of extra parameters, most models increase the range of allowed H_0 values. When a prior on H_0 from the local universe is convolved with the expanded likelihood, the posterior will, by construction, overlap with the prior. So to claim a resolution of the Hubble tension in this way is to use the same data twice: once to construct the posterior, and once to compare it. This can be avoided if the extra parameters have some preferred values, for example a range that is predicted by microphysics (Vagnozzi 2020). A second problem with the use of H_0 priors has been discussed by Efstathiou (2021), who points out the difference between it and the actual data analysis performed by the SH0ES team, which is a calibration of the SN Ia fiducial absolute magnitude M_B, followed by a conversion from M_B to H_0. To use an H_0 prior is then to transform and compress the data away from its true source; it would instead be better to use M_B directly, and priors for this are given in Camarena and Marra (2021). Another prevalent issue is the selective use of datasets: in particular, BAO provide strong constraints on the late universe as we shall see shortly, and to omit them is again to risk a misleading analysis.
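The double-use concern can be seen in a toy Gaussian calculation. The central values below are the headline numbers from this review; the inflated 3.0 error bar for the "extended model" is an invented illustration of a likelihood broadened by extra parameters.

```python
def combine(m1, s1, m2, s2):
    """Product of two Gaussian densities: inverse variances add, and the
    mean is the inverse-variance-weighted average."""
    w1, w2 = 1.0 / s1**2, 1.0 / s2**2
    return (w1 * m1 + w2 * m2) / (w1 + w2), (w1 + w2) ** -0.5

local = (73.2, 1.3)                    # SH0ES-style local prior
lcdm = combine(67.4, 0.5, *local)      # tight LCDM-like CMB constraint
extended = combine(67.4, 3.0, *local)  # extra parameter inflates the error
# The "extended model" posterior lands near the prior purely by
# construction, not because the data prefer a high H0.
```

The tight case leaves the posterior near 68, still in tension; the broadened case lands above 72, an apparent "resolution" that is really the prior echoed back.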
A large number of creative proposals have been put forward, so many that unfortunately we do not have space to introduce them all. Our preference here is to review selected models in broad classes that we hope will be informative; readers are referred to extensive reviews by Mörtsell and Dhawan (2018), Knox and Millea (2020), Di Valentino et al. (2021), and Vagnozzi (2020) if they would like more detail.

Late-universe physics
In some respects, it is easier to propose changes to the late universe: one does not have to "negotiate" with the CMB power spectrum! It is straightforward to generalise the expansion history into any number of "cosmology-independent" forms, that do not rely on any specific form of matter-energy, or general relativity: all that is retained are the Copernican principles of homogeneity and isotropy, and the existence of a space-time metric (for example, see Eq. (15)). The latter is important, as it implies the Etherington relation: d L = (1+z) 2 d A , which locks angular diameter distances against luminosity distances.
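In an FLRW metric the Etherington relation holds identically, whatever densities are assumed; the sketch below, using an illustrative flat ΛCDM history, just makes the bookkeeping explicit.

```python
from scipy.integrate import quad

C = 299792.458  # speed of light, km/s

def distances(z, h0=70.0, omega_m=0.3):
    """Comoving, angular-diameter and luminosity distances (Mpc) in flat LCDM."""
    e = lambda zp: (omega_m * (1 + zp) ** 3 + 1 - omega_m) ** 0.5
    d_c = (C / h0) * quad(lambda zp: 1.0 / e(zp), 0.0, z)[0]
    return d_c, d_c / (1 + z), d_c * (1 + z)  # d_C, d_A, d_L

# Etherington: d_L = (1 + z)^2 d_A holds for any metric theory, so
# luminosity and angular-diameter distances cannot be tuned separately.
```

Any "cosmology-independent" expansion history built on a metric inherits this lock between the two distance ladders.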

Modified gravity
Traditional models of large-scale modified gravity (normally referred to as MOND) would seem to have a low chance of solving the Hubble tension; the success of using reconstruction to sharpen the BAO peak argues that standard gravity works as we expect on these scales. Desmond et al. (2019) have proposed that small-scale modified gravity could distort the calibration of Cepheids in large galaxies like the Milky Way and NGC4258, however this appears to be disfavoured by the fact that using the LMC as a sole calibrator does not materially change H 0 .

Inhomogeneities and redshifts
Clearly our universe is not completely uniform, so could we by chance live in an underdense region, measuring a local H_0 above the universal one? Shanks et al. (2019) have investigated the "local hole" idea, initially noted in galaxy survey data by Keenan et al. (2013). They cite peculiar velocity outflows from 0 < z < 0.1 in the 6dFGS galaxy survey as evidence for a local underdensity, finding the universal (i.e. CMB) H_0 to be lower than our local one by 1.8%. However, we would expect to see the effect of a line of sight exiting our underdense region as a kink in the residuals of SN Ia luminosities fitted to ΛCDM, or alternatively as an anisotropy in H_0 depending on the footprint of the SN Ia survey used. While the sky distributions of low-z SN Ia versus Hubble-flow SN Ia are very different, and there is an uptick in magnitude residuals on the boundary between the two (see for example Fig. 11 of Scolnic et al. 2018), the lack of evidence for such a kink places limits on any local under-density (Kenworthy et al. 2019). Also, large-scale structure simulations find the probability for us to be in an underdensity of such magnitude is less than 1% (Odderskov et al. 2017; Macpherson et al. 2019) (although a different likelihood of a void may be obtained in a revision of ΛCDM). That said, the observational tension between claims of local under-density based on galaxy surveys, and the lack of evidence for it in SN Ia data, remains unexplained.
Density fluctuations also necessitate the adjustment of redshifts for peculiar velocities. Although heliocentric redshifts can be measured to an accuracy of 10⁻⁷, care is needed in the conversion to cosmological ones. Davis et al. (2019) have analysed the effect of redshift biases on H_0: changing z by +0.001 for a set of standard candles between 0.01 < z < 0.15 causes a bias in H_0 of ∼ 3%. Systematics might arise in peculiar velocity estimates or corrections for residual stellar motion (for example, if observing on one side of a spiral galaxy). Rameez (2019) has pointed out differences between the redshifts for SN Ia in common between the JLA and Pantheon catalogs; there are 58 with differences > 0.0025, concentrated in an arc opposed to the CMB dipole. Nevertheless, this does not appear to be biasing any H_0 estimates.
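The order of magnitude of this sensitivity can be reproduced with a toy fit. The uniform redshift grid, the low-z distance approximation and the log-space fit below are simplifying assumptions; a realistic sample and weighting move the bias toward the ∼3% reported by Davis et al.

```python
import numpy as np

C = 299792.458   # speed of light, km/s
H0_TRUE = 70.0   # illustrative fiducial value

# Toy standard candles with perfectly known distances (low-z: d = cz / H0)
z_true = np.linspace(0.01, 0.15, 500)
d = C * z_true / H0_TRUE                  # Mpc
z_obs = z_true + 0.001                    # systematic redshift offset

# Fit H0 in log space (a stand-in for fitting distance-modulus residuals)
h0_fit = np.exp(np.mean(np.log(C * z_obs / d)))
bias = h0_fit / H0_TRUE - 1.0             # fractional bias, ~2% here
```

The bias is dominated by the lowest-redshift candles, where a fixed Δz is the largest fractional change in velocity, which is why low-z peculiar-velocity corrections matter so much for H_0.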

Modifying late-time ΛCDM
In ΛCDM, dark energy is a property of the vacuum, with equation of state P = wρ where w = −1. This distinguishes it from standard inflation models, where −1 < w < 0 is a time-varying function of the potential and kinetic energy of a scalar field, but makes its microphysical origin rather obscure. This has led some theorists to explore alternative models.
In principle then, modifying dark energy in the late universe may be considered as a solution. The CMB has little to say about late-time dark energy: at early times, its physical density is much smaller than matter and radiation. The late time Integrated Sachs-Wolfe effect does imprint on the spectrum, but only as a large scale secondary anisotropy once the universe has entered the dark-energy dominated phase at around z ∼ 0.3.
A simple modification of ΛCDM would be to allow w to take on an arbitrary (constant) value. However, as what is required is a boost to late-time expansion to match the local-universe H_0, we would need w < −1 (known as phantom dark energy) to solve the tension. Phantom dark energy does not occur in standard single scalar field models, but is possible in models with more complex field configurations. If sustained, phantom dark energy leads to a "Big Rip" in a finite time, where all matter is pulled apart by the accelerated expansion of the universe.
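The geometric pull of phantom dark energy can be sketched by holding the distance to last scattering fixed and asking what h a more negative w requires. The values of ω_m, z_* and the fiducial h below are illustrative Planck-like inputs, and radiation is neglected in the toy expansion history.

```python
from scipy.integrate import quad
from scipy.optimize import brentq

C = 299792.458   # speed of light, km/s
Z_STAR = 1089.9  # last-scattering redshift (illustrative)
W_M = 0.143      # physical matter density, pinned by the CMB peak heights

def d_m(h, w):
    """Comoving distance to last scattering (Mpc) in flat wCDM,
    with Omega_m = w_m / h^2 and constant dark-energy w."""
    om = W_M / h**2
    e = lambda z: (om * (1 + z) ** 3 + (1 - om) * (1 + z) ** (3 * (1 + w))) ** 0.5
    return (C / (100.0 * h)) * quad(lambda z: 1.0 / e(z), 0.0, Z_STAR)[0]

target = d_m(0.674, -1.0)  # fiducial LCDM geometry
# h required to preserve the same CMB distance if w = -1.2:
h_phantom = brentq(lambda h: d_m(h, -1.2) - target, 0.6, 1.0)
```

h_phantom comes out above the fiducial 0.674: at fixed CMB geometry, a phantom equation of state raises the inferred H_0, which is why w < −1 is invoked for the tension.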
Alternatively, dark energy may be considered a late-universe emergent phenomenon by allowing its equation of state to vary with time. This may be done by writing w(a) = w_0 + (1 − a)w_a, or by replacing Ω_Λ,0 in Eq. (12) with a phenomenological Ω(z). This could be motivated by the action of scalar fields, dark matter decaying to dark energy, or some other novel microscopic theory of dark energy (see for example Di Valentino et al. 2021). However, it can be shown that such models have a generic problem: the sound horizon. The sound horizon r_d is normally introduced via its definition from early-universe physics (Eq. 42). But it is also imprinted on the late-time universe, as the peak in the angular correlation function of galaxy number densities, which constrains the product r_d H_0. All that is needed is a late-time distance measurement to convert the angular size to a physical size at a given redshift, SN Ia to constrain the late-time expansion history, and r_d may be calibrated independently of the CMB. Arendse et al. (2019) have used the H0LiCOW lens distances, BAO and Pantheon supernovae to obtain just such a late-time r_d. They show that two modified late-time dark energy models applied to Planck data can resolve its H_0 tension with the late universe, but not its r_d tension. Knox and Millea (2020) investigated this in ΛCDM with the same result.
Hence, if all data including BAO are considered -as they should be -it appears that late-time modifications to ΛCDM will not be satisfactory as a solution to the Hubble tension.

Early-universe proposals
We have mentioned how the CMB is a set of rulers whose scale is fixed at different epochs. To recap, the observed angular scale of the sound horizon at a given redshift z is θ(z) = r_s(z_d)/d_m(z), where r_s(z_d) = ∫_{z_d}^∞ c_s(ω_b, z′) dz′/H(z′) and c_s(ω_b, z) is the sound speed, which depends on the physical density of baryons, given by ω_b = Ω_b h². Decoupling happens when the mean free path of photons approaches the Hubble radius c/H(z), hence z_d is also a function of all of the densities, z_d ≡ z_d(ω_b, ω_m, ω_γ). Loosely, the goal is to reduce the sound horizon from (42) by 7%, as shown in Fig. 17. We may isolate the free parameters of this formula from those that are fixed by data as follows. As discussed, we may omit dark energy ω_Λ in the numerator. In the late universe the radiation density is not significant, and using the spatial flatness implied by BAO, we may write Ω_Λ = 1 − Ω_m. Ω_m is determined by the heights of peaks in the CMB spectrum: successive peaks represent modes that crossed the horizon successively earlier in the universe, when the ratio of matter to radiation was lower, and so have had their growth suppressed by radiation pressure to a greater degree. Ω_m determined in this way from the CMB is consistent at a variety of redshifts with values from the late universe: Lyman-α absorption lines of quasars, BAO, galaxy lensing and the SN Ia Hubble diagram (see Fig. 3).
Further, Ω_b is determined by the Silk damping of high-ℓ modes, and also by the relative heights of odd and even peaks: these are respectively compression and rarefaction modes, sensitive to the pressure-to-density ratio. Ω_b from the CMB is consistent with the deuterium fraction predicted by BBN. Then, substituting for H(z) using (7), Eq. (51) may be rewritten so that the sound horizon depends only on the physical densities, while the distance scales as 1/h at fixed Ω_m: θ(z) ∝ h r_s(ω_b, ω_m, ω_r)/∫_0^z dz′/E(z′), with E(z) = H(z)/H_0. Hence we see that for fixed ω_b, ω_r and Ω_m (all parameters whose CMB values are corroborated by non-CMB datasets), h only appears in the numerator. So to change H_0 we must either accept a departure from the Friedmann equations, change ω_r, allow conversion between energy types, or add a new non-baryonic, non-dark-matter component to the mix of the early universe.
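This scaling can be checked numerically: with the sound horizon and Ω_m held fixed, matching the observed acoustic angle pins h. The values of r_s, z_* and θ_* below are illustrative Planck-like numbers, and radiation is neglected in the late-time E(z) for simplicity.

```python
from scipy.integrate import quad
from scipy.optimize import brentq

C = 299792.458           # speed of light, km/s
R_S = 144.4              # sound horizon at last scattering, Mpc (Planck-like)
Z_STAR = 1089.9          # last-scattering redshift
THETA_STAR = 1.04109e-2  # observed acoustic angular scale (Planck-like)

def theta(h, omega_m=0.315):
    """Acoustic angle theta = r_s / d_m(z_*) in flat LCDM; with r_s and
    Omega_m fixed, h enters only through the 1/h distance prefactor."""
    i = quad(lambda z: (omega_m * (1 + z) ** 3 + 1 - omega_m) ** -0.5,
             0.0, Z_STAR)[0]
    return R_S * (100.0 * h) / (C * i)

h_fit = brentq(lambda h: theta(h) - THETA_STAR, 0.5, 0.9)
```

h_fit lands near the Planck value of ≈ 0.67. Raising h at fixed r_s and Ω_m increases θ, so matching the observed angle with a larger H_0 requires a smaller sound horizon, which is the 7% target quoted above.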

Extra relativistic species
Extra relativistic species, such as additional neutrinos beyond the Standard Model flavours, are well motivated in extensions to the standard model of particle physics. There is a well-known constraint of 3 on the number of light neutral particles from the decay width of the Z-boson, but this bound may be avoided if the new particles do not couple to the Z (for example, if they are sterile neutrinos which only interact gravitationally). The number of species is parametrised by N_eff, defined by ρ_rad = ρ_γ [1 + (7/8)(4/11)^(4/3) N_eff], where the fractions are due to the fermionic nature of neutrinos and their lower temperature after electron-positron annihilation; in the standard model N_eff = 3.045 17. Adding extra species would mean H_0 in Eq. (52) would need to be increased to keep θ unchanged (Bernal et al. 2016). But if we increase ω_r, we have altered the redshift of matter-radiation equality z_eq, which influences the relative heights of the peaks. If we restore the heights by also increasing Ω_m to keep z_eq unchanged, we conflict with Ω_m determined from SN Ia, BAO and lensing, unless we accept dark matter decay between the CMB and then. Additionally, N_eff is constrained by the He abundance to be < 4 at the 3σ level during BBN (Aver et al. 2015) (see also Fig. 39 in Planck Collaboration et al. 2020).
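The z_eq bookkeeping above is a one-liner; the photon density and ω_m used here are illustrative Planck-like values.

```python
W_GAMMA = 2.473e-5   # photon density Omega_gamma h^2 (T_0 = 2.7255 K)
W_M = 0.143          # physical matter density

def z_eq(n_eff):
    """Matter-radiation equality: 1 + z_eq = w_m / w_r, with neutrinos
    contributing a fraction 0.2271 * N_eff of the photon density."""
    w_r = W_GAMMA * (1.0 + 0.2271 * n_eff)
    return W_M / w_r - 1.0

# One extra relativistic species lowers z_eq unless w_m is raised to
# compensate, which is the conflict with late-universe matter density
# measurements described above.
```

For the standard N_eff ≈ 3.045 this gives z_eq in the vicinity of 3400, while one extra species pulls it down by several hundred.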
One way in which these constraints can be avoided is via self-interacting neutrinos. The new species couples to itself, but not to the Standard Model neutrinos (although it mixes with them). Free-streaming neutrinos have a small but measurable impact on the spectrum: as they travel faster than the sound speed $c_s$, they drag some of the photon-baryon fluid with them, resulting in a sound horizon larger than in a neutrinoless universe. Self-interactions slow the neutrinos, and by tuning the strength of the interaction it is possible to arrange for the CMB spectral peak heights to be unchanged (with some small changes to other parameters such as the primordial power spectrum tilt $n_s$ and $\Omega_\Lambda$). By adding an additional coupling to a new scalar particle, their mixing may be suppressed at BBN, in which case they evade the He abundance constraints. Kreisch et al. (2020) have analysed such a model and found $H_0 = 72.3 \pm 1.4$ km s$^{-1}$ Mpc$^{-1}$ for $N_{\rm eff} = 4.02 \pm 0.29$.
The drawback is that the self-interaction must be extremely strong. That is, the effective coupling $G_{\rm eff}$ required between neutrinos is $\sim 10^{10}\, G_{\rm Fermi}$, where the Fermi constant of weak interactions in the Standard Model is $G_{\rm Fermi} = 1.17 \times 10^{-5}$ GeV$^{-2}$. Such strength contradicts calculations of the decay times of muons and tau leptons, unless the self-interactions are restricted to the tau neutrino, or weakened to a level where they do not fully resolve the tension (Das and Ghosh 2021). So this proposal does seem highly fine-tuned. However, it is accessible to testing by CMB S4 experiments, as it predicts a stronger decrease of the damping tail for multipoles $\ell > 2000$.

Early dark energy
Early dark energy (EDE) refers to a boost to the expansion of the pre-recombination universe caused by a scalar field with potential $V(\phi) \sim \phi^n$, in a manner similar to (but milder than) inflation. This dark energy must be present at around $z \sim 10^4$, as that is the time from which most of the growth of the sound horizon occurs; BBN constraints mean it must be absent earlier. The dark energy must also decay by the time of recombination, in order not to disrupt the damping tail of the CMB spectrum. EDE then provides a short-term boost to the expansion speed, which decreases the sound horizon by modifying the numerator of Eq. (52). Agrawal et al. (2019) have shown $H_0 \sim 72$ km s$^{-1}$ Mpc$^{-1}$ for a fraction $\Omega_{\rm EDE} \sim 5\%$ of the energy content and the onset of its decay at redshift $z_c \sim 5000$, for a range of potentials. It may be said that this feels like fine-tuning again (though the same point may be made about late dark energy: the "why now" problem)! The model shows a small residual oscillatory signal in the CMB spectrum versus a standard ΛCDM fit, so if $H_0$ is calculated assuming ΛCDM in an EDE universe, it is progressively biased lower at higher angular resolution (Knox and Millea 2020). Although this trend is already seen in WMAP and Planck, it is opposite to the trend in SPT.
EDE can then be tested by higher resolution CMB S4 experiments. It may also be testable before then: it is necessarily present around the time of matter-radiation equality, and therefore will leave an imprint in the present-day matter power spectrum $P(k)$. The spectrum captures the strength of matter density fluctuations of wavenumber $k$, and decreases for $k > k_{\rm eq}$, the wavenumber corresponding to matter-radiation equality, with a gradient weakly dependent on $k_{\rm eq}$. This spectrum is measured in galaxy surveys, and the greater precision of the Vera C. Rubin Observatory LSST and the Euclid survey will constrain the presence of any additional components of the universe at $z_{\rm eq}$, when EDE needs to be close to its maximum effect.
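The equality scale in question is straightforward to estimate. A minimal sketch with illustrative Planck-like densities (not a survey forecast; the function name is ours):

```python
import math

C = 299_792.458  # speed of light [km/s]

def equality_scale(omega_m, omega_r):
    # redshift of matter-radiation equality, and the comoving wavenumber
    # k_eq = a_eq * H(a_eq) / c of the mode entering the horizon at that time
    z_eq = omega_m / omega_r - 1.0
    k_eq = (100.0 / C) * math.sqrt(2.0 * omega_m * (1.0 + z_eq))  # Mpc^-1
    return z_eq, k_eq

z_eq, k_eq = equality_scale(0.143, 4.15e-5)  # illustrative densities
print(f"z_eq ~ {z_eq:.0f}, k_eq ~ {k_eq:.4f} Mpc^-1")
```

Any extra component contributing a few per cent of the density at this epoch raises $H(z_{\rm eq})$ and hence shifts $k_{\rm eq}$, which is why deeper measurements of the $P(k)$ turnover constrain EDE.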

T 0 tension
In Eq. (52), $\omega_r$ is determined by the temperature of today's CMB, measured by the COBE instrument FIRAS in 1996 to be $T_0 = 2.726 \pm 0.0013$ K (Fixsen 2009), together with the scaling of the radiation temperature $T(t) = T_0(1 + z(t))$ and the radiation energy density $\rho_\gamma \propto T^4$. Changes to $\omega_r$ in (52) can be reabsorbed into changes to $h$, so there is some degeneracy between the temperature of the universe and $H_0$. A thought experiment can then be done: what if $T_0$ were allowed to vary outside the FIRAS bounds, keeping other quantities fixed? Ivanov et al. (2020) have considered this, opting to keep $\omega_i / T_0^3$ fixed, which corresponds to fixing the absolute energy scales of the components. On combining Planck data with SH0ES, they find the $H_0$ tension is resolved within ΛCDM, but with $T_0 = 2.582 \pm 0.033$ K.
While this is not new physics unless one proposes a new equation of state for radiation, it is a helpful recasting of the tension. Firstly, no balloon-borne or other measurement of $T_0$ has produced such a low value (Fixsen 2009).
Secondly, the temperature of the universe may be estimated at earlier epochs via the Sunyaev-Zel'dovich effect, providing an independent probe of $T(z)$. Luzzi et al. (2015) use the Planck SZ cluster catalogue to find that deviations from $T(z) = T_0(1 + z)$ are limited to ∼3% at 2σ confidence out to $z = 0.94$.
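The $\omega_\gamma$-$T_0$ degeneracy is easy to make concrete. Below is a sketch computing the photon density parameter from first principles; the constants are standard SI values, and the second temperature is the Ivanov et al. central value, used only for illustration:

```python
import math

A_RAD = 7.5657e-16    # radiation constant a = 4*sigma/c [J m^-3 K^-4]
C_SI = 2.99792458e8   # speed of light [m/s]
G = 6.674e-11         # Newton's constant [m^3 kg^-1 s^-2]
MPC = 3.0857e22       # metres per megaparsec

def omega_gamma(t0):
    # photon density parameter omega_gamma = Omega_gamma h^2 for CMB temperature t0
    rho_gamma = A_RAD * t0 ** 4 / C_SI ** 2             # mass density [kg m^-3]
    h100 = 1.0e5 / MPC                                  # 100 km/s/Mpc in s^-1
    rho_crit_h1 = 3.0 * h100 ** 2 / (8.0 * math.pi * G)  # critical density at h=1
    return rho_gamma / rho_crit_h1

print(omega_gamma(2.726))  # FIRAS temperature
print(omega_gamma(2.582))  # Ivanov et al. thought-experiment value
```

A ∼5% drop in $T_0$ lowers $\omega_\gamma$ (and with it $\omega_r$) by ∼20%, which Eq. (52) can then trade against $h$.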

Conclusions and buyer's advice
Much effort has been expended by all the teams involved to reduce photometric biases, environmental effects, calibration error, lens mass modelling biases, CMB foreground effects and so on. Nevertheless, there remain some areas where a degree of healthy scientific scepticism might be focused, or improvements might be forthcoming. In the spirit of our Buyer's Guide, we offer our view on these areas in Fig. 18.
The last 20 years have seen the error bars on $H_0$ shrink from 8% to 1-2% for the highest precision results. For the late universe, the SH0ES team has calibrated the Cepheid luminosity zero point to 1.0% precision, and the values are consistent whether parallaxes, the DEB distance to the LMC, or the maser distance to NGC 4258 are used. The calibration of the SN Ia luminosity adds another 1.3% uncertainty, resulting in $H_0$ to 1.8% accuracy. For the early universe, the high resolution and sky coverage of Planck results in 0.75% accuracy (Planck Collaboration et al. 2020). This has been characterised as an "early versus late" tension, but that would be to ignore the results from the CCHP calibration of the TRGB, which are a late-universe result intermediate between Planck and SH0ES (Freedman et al. 2019). Additionally, the size of the BAO sound horizon in the late universe, when calibrated with early-universe BBN constraints, is consistent with the CMB value (Addison et al. 2018).

Systematics or beyond ΛCDM?
Not surprisingly, there is at present no clear answer to the question "What is the true value of $H_0$?". We do however think there is a compelling cosmological model, and that is ΛCDM. It has been the best model around for the last 30 years, and no new model has yet presented a convincing case to replace it. Having said that, it is still disturbing that the two main ingredients of ΛCDM, dark matter and dark energy, are not understood. The $H_0$ derived from the CMB is therefore part of a "package deal" which involves the other cosmological parameters within the ΛCDM paradigm. Fortunately, the underlying physics of the CMB is well defined: the CMB fluctuation spectrum is a solution of the Boltzmann equation. In contrast, the distance ladder measurements of $H_0$ probe this parameter directly and no others, but the astrophysics of the standard candles used is not fully understood.

Analysis Inputs
a. Astrophysical Modelling: whether the object being observed has a solid theoretical understanding.
b. Photometric Reduction: uncertainties that may be introduced when converting imaging into parameters.
c. Calibration: uncertainty about the absolute magnitudes or physical size of the object.
d. Population Size: the quantity of independent data sets, objects or events that are available.

Probes of H0
1. Parallaxes in Gaia DR3 have considerably improved the zero point, but not yet fully to science mission goals.
2. Detached eclipsing binaries provide a solid distance to the LMC, and it is hoped in future to M31.
3. Masers are rare astrophysical systems, it is unlikely many more useful ones will be found, and reliance must be placed on the modelling of the accretion disk.
4. Cepheid crowding, blending and the calibrated slope of the Leavitt law have been the subject of recent work.
5. TRGBs are well-understood stars which can be observed in uncrowded fields, but the calibration is debated. The magnitude is a statistical fit to a stellar population, rather than to individual stars as in the case of Cepheids.
6. Miras are relatively new to distance ladders, so not many galaxies have Mira distances. Their uncertainties may reduce quickly in future work.
7. SN Ia progenitor theory of an accreting white dwarf lacks observational evidence, hence many models of environmental or progenitor effects have been proposed. It is not clear if the host mass step used in SH0ES analyses is the best one.
8. Gravitational waves benefit from ongoing improvements in detector calibration and sensitivity at LIGO, and it is anticipated that more binary neutron star and binary black hole mergers will be detected.
9. Time delay lenses have mass modelling uncertainties, and a larger number of systems will need to be analysed to provide reliable convergence.
10. BAO and CMB future observations will extend the resolution of current data and help discriminate between new cosmological model proposals.

Fig. 18: A 'traffic light' colour coding corresponding to our view of where the calculation of $H_0$ has uncertainties that may improve over the next few years. Green corresponds to a solid current position with low uncertainty, yellow is where some improvements can be expected, and red is where caution or cross-checks may be needed. For more detail the reader may consult the relevant subsection in Sect. 3.
This diagram was inspired by a similar plot by Wendy Freedman at the ESO H 0 2020 conference.
Although many plausible extensions of ΛCDM can resolve the tension, none enjoys majority support. They either do not resolve other tensions related to $H_0$, such as the sound horizon, or seem somewhat contrived. Thus, at best they can be seen as effective theories of some unknown, and possibly more natural, microphysics. Given the lack of a compelling explanation from theory, and a greater understanding of how hard it is to "break" ΛCDM, opinion in the community seems to be shifting. At the "Beyond ΛCDM" conference in Oslo in 2015, a poll suggested 69% of participants believed new physics to be the most likely explanation (although we suspect some bias in the voters' views, given the title of the conference!). However, at the "Hubble Tension Headache" conference held virtually by Southampton University, UK, in 2021, over 50% of participants favoured the explanation that there are still systematics in the data.

Open issues
Here we comment on some frequently asked questions:

- Is the SH0ES $H_0$ supported? Results from Miras, megamasers and the HST Key Project have central values consistent with SH0ES, but due to their larger error bars are only in tension with Planck at the 1.5-3.0σ level. Results from lensed quasars, TRGBs and surface brightness fluctuations vary between papers, and cannot be regarded as convincingly supportive of SH0ES.

- Is the Planck $H_0$ supported? The Planck value for $H_0$ is corroborated by the Atacama Cosmology Telescope to within 1%, and is also consistent with the earlier WMAP satellite. However, it is in 2.1σ tension with the South Pole Telescope. Planck's $H_0$ is also supported by BAO from the BOSS and DES galaxy surveys combined with an early-universe prior for the baryon density.

- Are systematics still present? Regarding Cepheids, considerable recent progress has been made in demonstrating that crowding is under control and photometric reduction is free from bias. Nevertheless, potential weaknesses remain in the calibration and environmental corrections of SN Ia, and the calibration of TRGBs has been disputed. Re-analyses of lensed quasars show systematics are not yet under control. The nature of CMB analyses does not lend itself easily to part-by-part decomposition or independent review, but a full re-analysis of Planck by Efstathiou and Gratton (2019) did not alter cosmological parameters.

- Is the characterisation of "early versus late" helpful? The strongest evidence in favour of this view are the Cepheid results and Planck; other results are broadly consistent with it but individually more uncertain. However, if the calibration of the TRGB were resolved in favour of the CCHP value, it would weaken the case for this view. It is however helpful to consider new cosmological models in terms of those modifying the post- or pre-recombination universe, as at present those modifying the pre-recombination universe appear more viable.

- Is there a convincing theoretical solution? Simply put, ΛCDM is very hard to break. Proposals need to clear three main hurdles: fitting a wide range of cosmological data, consistency with other predictions of ΛCDM, and providing a convincing Bayesian argument that the model is preferable to ΛCDM. None yet do so. Nevertheless, there are strong hints of what a solution might look like, and it is fair to point out that the innovation of Λ in ΛCDM itself solved a previous tension between a low matter density and a flat universe.

- Are TRGBs and gravitational waves offering a way forwards?
It is likely that a consensus on TRGB calibration will be reached in the near future, and TRGBs promise cleaner fields and better theoretical modelling than Cepheids. They are also ideal targets for JWST, which has four times the J-band resolution of the HST/WFC3 H band. Miras are interesting, but are more expensive to observe and harder to calibrate. Gravitational waves offer an inherently transparent and accurate way to measure $H_0$, once enough bright and dark siren data have been collected by the middle of this decade. If these two were to corroborate SH0ES, it would be strong evidence in favour of the breakdown of ΛCDM. Alternatively, if they agree with Planck, one would have to conclude SH0ES is the outlier.

Buyer's advice
We would suggest the following in response to the question of "Which H 0 should I use?", depending on the desired application.
- For cosmological inference: As the tension in the Hubble parameter is between local direct estimates and the best-fit ΛCDM model, the only two possible explanations are unknown systematic effects in the local direct estimates, or that our universe is described by a model different from ΛCDM. If it turns out that there are systematics in the local estimate, it is clear that we should use Planck's value $H_0 = 67.4$ km s$^{-1}$ Mpc$^{-1}$ (Planck Collaboration et al. 2020). If the universe is not ΛCDM, it remains the case that ΛCDM is a great fit to cosmological data, with all other parameters consistent with their CMB-derived values. We therefore recommend that the full package of Planck or Planck-like cosmological parameters be used, so one can test the ΛCDM model self-consistently. A specific example is in choosing parameters for N-body simulations: it makes sense to select the Planck set of parameters ($H_0$, $\Omega_m$, $\sigma_8$, etc.) so one can test the growth of structure with cosmic time given the ΛCDM fit to Planck. In contrast, the ΛCDM model with $H_0 = 73.2$ km s$^{-1}$ Mpc$^{-1}$ is a poor fit to most existing cosmological observations (CMB and galaxy surveys). Another application is using $H_0$ and other parameters from Planck as priors for analyses of new data sets in a Bayesian framework. Here the error bars of the parameters, or the full Planck posterior probability distribution, are important, as they propagate through the parameter marginalisation process. We also maintain that it is not correct to inflate $H_0$ confidence intervals to "hedge bets", unless one does so in a non-ΛCDM model and fully rederives the posteriors for all other parameters in a Bayesian fashion.

- For the local universe:
The SH0ES value of $H_0 = 73.2 \pm 1.3$ km s$^{-1}$ Mpc$^{-1}$ using Gaia parallaxes and Cepheids is a good choice, as it is consistent with independent measurements such as megamasers and Miras, and re-analyses of the data (albeit generally starting from the photometric parameters, not the original imaging) have produced mostly consistent results. Therefore, the SH0ES value should be used for calculations that require an expansion rate or distances alone, but are local enough to have minimal cosmological model dependence. Having said that, the CCHP TRGB result of $H_0 = 69.8 \pm 1.9$ km s$^{-1}$ Mpc$^{-1}$ (Freedman et al. 2019) is noteworthy for its consistency with Planck and for the advantages TRGB observations may offer relative to Cepheids. It is very much a case of "watch this space" for developments.

- For pedagogical purposes: For teaching cosmology, popular talks, or back-of-the-envelope calculations it makes sense to use the intermediate round value of $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$. Wherever possible we recommend indicating the dependence of your key results on $H_0$, either in formulae or as a method to adjust the key results. Anticipating a future resolution of the tension, this will be of great assistance to future researchers.
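In that pedagogical spirit, the two most common back-of-the-envelope quantities, and their explicit $H_0$ dependence, might be sketched as follows (the function names are ours, and $H_0 = 70$ is simply the round value suggested above):

```python
C = 299_792.458        # speed of light [km/s]
KM_PER_MPC = 3.0857e19  # kilometres per megaparsec
SEC_PER_GYR = 3.156e16  # seconds per gigayear

def hubble_time_gyr(h0):
    # 1/H0, the characteristic expansion timescale (~ age of the universe)
    return KM_PER_MPC / h0 / SEC_PER_GYR

def hubble_distance_mpc(h0):
    # c/H0, the characteristic length scale of the expansion
    return C / h0

print(f"1/H0 ~ {hubble_time_gyr(70.0):.1f} Gyr")    # ~14.0 Gyr
print(f"c/H0 ~ {hubble_distance_mpc(70.0):.0f} Mpc")  # ~4283 Mpc
```

Both quantities scale as $1/H_0$, so quoting results in $h$-free units (e.g. Mpc $h^{-1}$) or stating the assumed $H_0$ lets a reader rescale them trivially.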

Future prospects
Looking to the future, the Zwicky Transient Facility and Foundation surveys of nearby SN Ia will be very helpful in reducing potential calibration issues. They will do this by resolving the underlying population characteristics, having cleaner selection functions, and providing more galaxies in which to calibrate the distant Hubble flow sample. Early signs from Gaia Early Data Release 3 are that it provides a considerable reduction, but not an elimination, of the bias apparent in Data Release 2. But it is not the last word, and the next release is scheduled for 2022. The key point of Gaia is that it addresses lingering concerns about how the low-metallicity environment of the LMC, or the maser disk modelling and crowded fields of NGC 4258, may affect the calibration of standard candles. In particular, it will be useful to have accurate Gaia parallaxes to Milky Way globular clusters such as ωCen to provide additional calibrators of the TRGB. JWST will greatly expand the range of TRGB observation, and provide continuity in the case of any further degradation of the ageing HST. The Extremely Large Telescope in Chile, scheduled for first light in 2025, can extend the range of DEB distances to M31, allowing it to contribute its Cepheid and TRGB populations to calibrations.
In the field of gravitational waves, it is perhaps disappointing that there have been no further observations of "bright sirens" like the neutron star merger GW170817 so far. But the commencement of operations at the VIRGO detector in Italy and, recently, the KAGRA detector in Japan offers hope that more frequent event detections and better sky localisation, combined with improved instrumental calibration, will provide a 2% measurement of $H_0$ within this decade. Many more time-delay lensing systems will be seen in future galaxy surveys, so there is a strong incentive to resolve remaining systematics in the modelling and speed up the analysis pipeline.
The Simons Observatory in Chile aims to map the polarisation spectrum of the CMB to an order of magnitude greater sensitivity than Planck, and will start taking data in the next two to three years. Close to 2030, two new CMB Stage 4 telescopes will be operational in Chile and at the South Pole, which will further extend the spectral resolution. The depth of these surveys will be able to support or rule out many pre-recombination modifications of ΛCDM. Surveys conducted by the Dark Energy Spectroscopic Instrument (DESI), the Vera C. Rubin Observatory (previously named LSST), the Euclid satellite and the Nancy Grace Roman Space Telescope (previously named WFIRST), combined with theoretical progress in the quasi-linear regime of structure formation, will enable the matter power spectrum to be compared with ΛCDM predictions at much greater depth and resolution.