Cosmology and fundamental physics with the Euclid satellite
Abstract
Euclid is a European Space Agency medium-class mission selected for launch in 2020 within the Cosmic Vision 2015–2025 program. The main goal of Euclid is to understand the origin of the accelerated expansion of the universe. Euclid will explore the expansion history of the universe and the evolution of cosmic structures by measuring shapes and redshifts of galaxies as well as the distribution of clusters of galaxies over a large fraction of the sky. Although the main driver for Euclid is the nature of dark energy, Euclid science covers a vast range of topics, from cosmology to galaxy evolution to planetary research. In this review we focus on cosmology and fundamental physics, with a strong emphasis on science beyond the current standard models. We discuss five broad topics: dark energy and modified gravity, dark matter, initial conditions, basic assumptions and questions of methodology in the data analysis. This review has been planned and carried out within Euclid’s Theory Working Group and is meant to provide a guide to the scientific themes that will underlie the activity of the group during the preparation of the Euclid mission.
Keywords
Dark energy, Cosmology, Galaxy evolution
Abbreviations
 AGN
Active galactic nucleus
 ALP
Axion-like particle
 BAO
Baryonic acoustic oscillations
 BBKS
Bardeen–Bond–Kaiser–Szalay
 BOSS
Baryon oscillation spectroscopic survey
 BPol
B-polarization satellite
 BigBOSS
Baryon oscillation spectroscopic survey
 CAMB
Code for anisotropies in the microwave background
 CDE
Coupled dark energy
 CDM
Cold dark matter
 CDMS
Cryogenic dark matter search
 CL
Confidence level
 CLASS
Cosmic linear anisotropy solving system
 CMB
Cosmic microwave background
 COMBO-17
Classifying objects by medium-band observations
 COSMOS
Cosmological evolution survey
 CPL
Chevallier–Polarski–Linder
 CQ
Coupled quintessence
 CRESST
Cryogenic rare event search with superconducting thermometers
 DE
Dark energy
 DES
Dark energy survey
 DETF
Dark energy task force
 DGP
Dvali–Gabadadze–Porrati
 DM
Dark matter
 EBI
Eddington–Born–Infeld
 EDE
Early dark energy
 EMT
Energy–momentum tensor
 EROS
Expérience pour la recherche d’objets Sombres
 eROSITA
Extended ROentgen survey with an imaging telescope array
 FCDM
Fuzzy cold dark matter
 FFT
Fast Fourier transform
 FLRW
Friedmann–Lemaître–Robertson–Walker
 FoM
Figure of merit
 FoG
Fingers of God
 GEA
Generalized Einstein-aether
 GR
General relativity
 HETDEX
Hobby–Eberly telescope dark energy experiment
 ICM
Intracluster medium
 IH
Inverted hierarchy
 IR
Infrared
 ISW
Integrated Sachs–Wolfe
 KL
Kullback–Leibler divergence
 LCDM
Lambda cold dark matter
 LHC
Large hadron collider
 LRG
Luminous red galaxy
 LSB
Low surface brightness
 LSS
Large scale structure
 LSST
Large synoptic survey telescope
 LTB
Lemaître–Tolman–Bondi
 MACHO
Massive compact halo object
 MCMC
Markov Chain Monte Carlo
 MCP
Minicharged particles
 MF
Mass function
 MG
Modified gravity
 MOND
Modified Newtonian dynamics
 MaVaNs
Mass varying neutrinos
 NFW
Navarro–Frenk–White
 NH
Normal hierarchy
 PCA
Principal component analysis
 PDF
Probability distribution function
 PGB
Pseudo-Goldstone boson
 PKDGRAV
Parallel K–D tree GRAVity code
 PPF
Parameterized post-Friedmann
 PPN
Parameterized post-Newtonian
 PPOD
Predictive posterior odds distribution
 PSF
Point spread function
 QCD
Quantum chromodynamics
 RSD
Redshift-space distortions
 RG
Renormalization group
 SD
Savage–Dickey
 SDSS
Sloan digital sky survey
 SIDM
Self-interacting dark matter
 SN
Supernova
 TeVeS
Tensor vector scalar
 UDM
Unified dark matter
 UV
Ultraviolet
 WDM
Warm dark matter
 WFXT
Wide-field X-ray telescope
 WIMP
Weakly interacting massive particle
 WKB
Wentzel–Kramers–Brillouin
 WL
Weak lensing
 WLS
Weak lensing survey
 WMAP
Wilkinson microwave anisotropy probe
 XMM-Newton
X-ray multi-mirror mission
 vDVZ
van Dam–Veltman–Zakharov
List of symbols
 \(c_a\)
Adiabatic sound speed
 \(D_A(z)\)
Angular diameter distance
 \(\eth \)
Angular spin raising operator
 \(\varPi ^i_j\)
Anisotropic stress perturbation tensor
 \(\sigma \)
Uncertainty
 \(B\)
Bayes factor
 b
Bias (ratio of galaxy to total matter perturbations)
 \(B_\varPhi (k_1,k_2,k_3)\)
Bispectrum of the Bardeen’s potential
 g(X)
Born–Infeld kinetic term
 \(\zeta \)
Comoving curvature perturbation
 r(z)
Comoving distance
 \(\mathcal {H}\)
Conformal Hubble parameter, \(\mathcal {H} = aH\)
 \(\eta ,\tau \)
Conformal time
 \(\kappa \)
Convergence
 t
Cosmic time
 \(\varLambda \)
Cosmological constant
 \(\varvec{\Theta }\)
Cosmological parameters
 \(r_c\)
Crossover scale
 \(\square \)
d’Alembertian, \(\square =\nabla ^\mu \nabla _\mu \)
 F
Derivative of f(R)
 \(\theta \)
Divergence of velocity field
 \(\mu \)
Direction cosine
 \(\pi \)
Effective anisotropic stress
 \(\eta (a,k)\)
Effective anisotropic stress parameterization
 \(\rho \)
Energy density
 \(T_{\mu \nu }\)
Energy momentum tensor
 w
Equation of state
 \(F_{\alpha \beta }\)
Fisher information matrix
 \(\sigma _8\)
Fluctuation amplitude at 8 \(h^{-1}\) Mpc
 \(u^\mu \)
Four-velocity
 \(\varOmega _m\)
Fractional matter density
 \(f_{\mathrm {sky}}\)
Fraction of sky observed
 \(\varDelta _M\)
Gauge invariant comoving density contrast
 \(\tau (z)\)
Generic opacity parameter
 \(\varpi \)
Gravitational slip parameter
 G(a)
Growth function/growth factor
 \(\gamma \)
Growth index/shear
 \(f_g\)
Growth rate
 \(b_{\mathrm {eff}}\)
Halo effective linear bias factor
 h
Hubble constant in units of 100 km/s/Mpc
 H(z)
Hubble parameter
 \(\xi _i\)
Killing field
 \(\delta _{ij}\)
Kronecker delta
 f(R)
Lagrangian in modified gravity
 \(P_l(\mu )\)
Legendre polynomials
 \(\mathcal {L}(\varvec{\Theta })\)
Likelihood function
 \(\beta (z)\)
Linear redshift-space distortion parameter
 \(D_L(z)\)
Luminosity distance
 Q(a, k)
Mass screening effect
 \(\delta _m\)
Matter density perturbation
 \(g_{\mu \nu }\)
Metric tensor
 \(\mu \)
Modified gravity function: \(\mu =Q/\eta \)
 \(C_\ell \)
Multipole power spectrum
 G
Newton’s gravitational constant
 N
Number of e-folds, \(N=\ln a\)
 P(k)
Matter power spectrum
 p
Pressure
 \(\delta p\)
Pressure perturbation
 \(\chi (z)\)
Radial, dimensionless comoving distance
 z
Redshift
 R
Ricci scalar
 \(\phi \)
Scalar field
 A
Scalar potential
 \(\varPsi ,\varPhi \)
Scalar potentials
 \(n_s\)
Scalar spectral index
 a
Scale factor
 \(f_a\)
Scale of Peccei–Quinn symmetry breaking
 \(\ell \)
Spherical harmonic multipoles
 \(c_s\)
Sound speed
 \(\varSigma \)
Total neutrino mass/inverse covariance matrix/PPN parameter
 \(H_T^{ij}\)
Tracefree distortion
 T(k)
Transfer function
 \(B_i\)
Vector shift
 \(\mathbf {k}\)
Wavenumber
1 Introduction
Euclid^{1} (Laureijs et al. 2011; Refregier 2009; Cimatti et al. 2009) is an ESA medium-class mission selected for the second launch slot (expected for 2020) of the Cosmic Vision 2015–2025 program. The main goal of Euclid is to understand the physical origin of the accelerated expansion of the universe. Euclid is a satellite equipped with a 1.2 m telescope and three imaging and spectroscopic instruments working in the visible and near-infrared wavelength domains. These instruments will explore the expansion history of the universe and the evolution of cosmic structures by measuring shapes and redshifts of galaxies over a large fraction of the sky. The satellite will be launched by a Soyuz ST-2.1B rocket and transferred to the L2 Lagrange point for a 6-year mission that will cover at least 15,000 square degrees of sky. Euclid plans to image a billion galaxies and measure nearly 100 million galaxy redshifts.
These impressive numbers will allow Euclid to realize a detailed reconstruction of the clustering of galaxies out to redshift 2 and the pattern of light distortion from weak lensing out to redshift 3. The two main probes, redshift clustering and weak lensing, are complemented by a number of additional cosmological probes: the cross-correlation between the cosmic microwave background and the large-scale structure; the abundance and properties of galaxy clusters; strong lensing; and possibly luminosity distances from Type Ia supernovae. To extract the maximum information, also in the nonlinear regime of perturbations, these probes will require accurate high-resolution numerical simulations. Besides cosmology, Euclid will provide an exceptional dataset for galaxy evolution, galaxy structure, and planetary searches. All Euclid data will be publicly released after a relatively short proprietary period and will constitute for many years the ultimate survey database for astrophysics.
A huge enterprise like Euclid requires careful planning, not only of the technology but also of the scientific exploitation of the future data. Many ideas and models that today seem to be abstract exercises for theorists will finally become testable with the Euclid surveys. The main science driver of Euclid is clearly the nature of dark energy, the enigmatic substance driving the accelerated expansion of the universe. As we discuss in detail in Part I, under the label “dark energy” we include a wide variety of hypotheses, from extra-dimensional physics to higher-order gravity, from new fields and new forces to large violations of homogeneity and isotropy. The simplest explanation, Einstein’s famous cosmological constant, is still currently acceptable from the observational point of view, but it is not the only one, nor necessarily the most satisfying, as we will argue. Therefore, it is important to identify the main observables that will help distinguish the cosmological constant from the alternatives and to forecast Euclid’s performance in testing the various models.
Since clustering and weak lensing also depend on the properties of dark matter, Euclid is a dark matter probe as well. In Part II we focus on the models of dark matter that can be tested with Euclid data, from massive neutrinos to ultralight scalar fields. We show that Euclid can measure the neutrino mass to a very high precision, making it one of the most sensitive neutrino experiments of its time, and it can help identify new light fields in the cosmic fluid.
The evolution of perturbations depends not only on the fields and forces active during the cosmic eras, but also on the initial conditions. By reconstructing the initial conditions we open a window on the inflationary physics that created the perturbations, and allow ourselves the chance of determining whether a single inflaton drove the expansion or a mixture of fields. In Part III we review the choices of initial conditions and their impact on Euclid science. In particular we discuss deviations from simple scale invariance, mixed isocurvatureadiabatic initial conditions, nonGaussianity, and the combined forecasts of Euclid and CMB experiments.
Practically all of cosmology is built on the Copernican principle, a very fruitful idea postulating a homogeneous and isotropic background. Although this assumption has been confirmed time and again since the beginning of modern cosmology, Euclid’s capabilities can push the test to new levels. In Part IV we challenge some of the basic cosmological assumptions and predict how well Euclid can constrain them. We explore the basic relation between luminosity and angular diameter distance that holds in any metric theory of gravity if the universe is transparent to light, and the existence of large violations of homogeneity and isotropy, either due to local voids or to the cumulative stochastic effects of perturbations, or to intrinsically anisotropic vector fields or spacetime geometry.
Finally, in Part V we review some of the statistical methods that are used to forecast the performance of probes like Euclid, and we discuss some possible future developments.
This review has been planned and carried out within Euclid’s Theory Working Group and is meant to provide a guide to the scientific themes that will underlie the activity of the group during the preparation of the mission. At the same time, this review will help us and the community at large to identify the areas that deserve closer attention, to improve the development of Euclid science and to offer new scientific challenges and opportunities.
2 Part I Dark energy
2.1 Introduction
With the discovery of cosmic acceleration at the end of the 1990s, and its possible explanation in terms of a cosmological constant, cosmology has returned to its roots in Einstein’s famous 1917 paper that simultaneously inaugurated modern cosmology and the history of the constant \(\varLambda \). Perhaps cosmology is approaching a robust and allencompassing standard model, like its cousin, the very successful standard model of particle physics. In this scenario, the cosmological standard model could essentially close the search for a broad picture of cosmic evolution, leaving to future generations only the task of filling in a number of important, but not crucial, details.
The cosmological constant is still in remarkably good agreement with almost all cosmological data more than 10 years after the observational discovery of the accelerated expansion rate of the universe. However, our knowledge of the universe’s evolution is so incomplete that it would be premature to claim that we are close to understanding the ingredients of the cosmological standard model. If we ask ourselves what we know for certain about the expansion rate at redshifts larger than unity, or the growth rate of matter fluctuations, or about the properties of gravity on large scales and at early times, or about the influence of extra dimensions (or their absence) on our four dimensional world, the answer would be surprisingly disappointing.
Our present knowledge can be succinctly summarized as follows: we live in a universe that is consistent with the presence of a cosmological constant in the field equations of general relativity, and as of 2016, the value of this constant corresponds to a fractional energy density today of \(\varOmega _{\varLambda }\approx 0.7\). However, far from being disheartening, this current lack of knowledge points to an exciting future. A decade of research on dark energy has taught many cosmologists that this ignorance can be overcome by the same tools that revealed it, together with many more that have been developed in recent years.
Why then is the cosmological constant not the end of the story as far as cosmic acceleration is concerned? There are at least three reasons. The first is that we have no simple way to explain its small but nonzero value. In fact, its value is unexpectedly small with respect to any physically meaningful scale, except the current horizon scale. The second reason is that this value is not only small, but also surprisingly close to another unrelated quantity, the present matter-energy density. That this happens just by coincidence is hard to accept, as the matter density is diluted rapidly with the expansion of space. Why is it that we happen to live at the precise, fleeting epoch when the energy densities of matter and the cosmological constant are of comparable magnitude? Finally, observations of coherent acoustic oscillations in the cosmic microwave background (CMB) have turned the notion of accelerated expansion in the very early universe (inflation) into an integral part of the cosmological standard model. Yet the simple truth that we exist as observers demonstrates that this early accelerated expansion was of a finite duration, and hence cannot be ascribed to a true, constant \(\varLambda \); this sheds doubt on the nature of the current accelerated expansion. The very fact that we know so little about the past dynamics of the universe forces us to enlarge the theoretical parameter space and to consider phenomenology that a simple cosmological constant cannot accommodate.
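The coincidence problem above is easy to make quantitative: in a flat \(\varLambda \)CDM model the matter density dilutes as \(a^{-3}\) while the cosmological-constant density stays fixed, so their ratio changes by many orders of magnitude over cosmic history and the two are comparable only in a narrow window around the present epoch. A minimal sketch, where \(\varOmega _m=0.3\) and \(\varOmega _\varLambda =0.7\) are illustrative round numbers rather than values quoted in this review:

```python
def density_ratio(a, omega_m=0.3, omega_lambda=0.7):
    """Return rho_m / rho_Lambda at scale factor a (a = 1 today).

    The matter density scales as a^-3 while the Lambda density is constant,
    so the ratio itself scales as a^-3.
    """
    return (omega_m * a**-3) / omega_lambda

# The ratio spans ~9 orders of magnitude between a = 0.001 and a = 1:
for a in (1e-3, 1e-1, 1.0, 10.0):
    print(f"a = {a:6g}  rho_m/rho_Lambda = {density_ratio(a):.3e}")
```

Only near \(a\approx 1\) is the ratio of order unity, which is the fine-tuning the text refers to.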
These motivations have led many scientists to challenge one of the most basic tenets of physics: Einstein’s law of gravity. Einstein’s theory of general relativity (GR) is a supremely successful theory on scales ranging from the size of our solar system down to micrometers, the shortest distances at which GR has been probed in the laboratory so far. Although specific predictions about such diverse phenomena as the gravitational redshift of light, energy loss from binary pulsars, the rate of precession of the perihelia of bound orbits, and light deflection by the sun are not unique to GR, it must be regarded as highly significant that GR is consistent with each of these tests and more. We can securely state that GR has been tested to high accuracy at these distance scales.
The success of GR on larger scales is less clear. On astrophysical and cosmological scales, tests of GR are complicated by the existence of invisible components like dark matter and by the effects of spacetime geometry. We do not know whether the physics underlying the apparent cosmological constant originates from modifications to GR (i.e., an extended theory of gravity), or from a new fluid or field in our universe that we have not yet detected directly. The latter phenomena are generally referred to as ‘dark energy’ models.
If we only consider observations of the expansion rate of the universe we cannot discriminate between a theory of modified gravity and a dark-energy model. However, it is likely that these two alternatives will cause perturbations around the ‘background’ universe to behave differently. Only by improving our knowledge of the growth of structure in the universe can we hope to progress towards breaking the degeneracy between dark energy and modified gravity. Part I of this review is dedicated to this effort. We begin with a review of the background and linear perturbation equations in a general setting, defining quantities that will be employed throughout. We then explore the nonlinear effects of dark energy, making use of analytical tools such as the spherical collapse model, perturbation theory and numerical N-body simulations. We discuss a number of competing models proposed in the literature and demonstrate what the Euclid survey will be able to tell us about them. For an updated review of present cosmological constraints on a variety of dark energy and modified gravity models, we refer to the Planck 2015 analysis (Planck Collaboration 2016c).
2.2 Background evolution
However, when GR is modified or when an interaction with other species is active, dark energy may very well have a nonnegligible contribution at early times. Therefore, it is important, already at the background level, to understand the best way to characterize the main features of the evolution of quintessence and dark energy in general, pointing out which parameterizations are more suitable and which ranges of parameters are of interest to disentangle quintessence or modified gravity from a cosmological constant scenario.
In the following we briefly discuss how to describe the cosmic expansion rate in terms of a small number of parameters. This will set the stage for the more detailed cases discussed in the subsequent sections. Even within specific physical models it is often convenient to reduce the information to a few phenomenological parameters.
Two important points are left for later: from Eq. (I.2.3) we can easily see that \(w_\phi \ge -1\) as long as \(\rho _\phi >0\), i.e., uncoupled canonical scalar-field dark energy never crosses \(w_\phi =-1\). However, this is not necessarily the case for non-canonical scalar fields or for cases where GR is modified. We postpone to Sect. I.3.3 the discussion of how to parametrize this ‘phantom crossing’ to avoid singularities, as it also requires the study of perturbations.
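The bound \(w_\phi \ge -1\) follows from the standard expressions for a canonical scalar field, \(w=(K-V)/(K+V)\) with kinetic term \(K=\dot{\phi }^2/2\ge 0\), so that \(1+w=2K/\rho \ge 0\) whenever \(\rho =K+V>0\). A minimal numerical check of this statement (the sampled kinetic and potential values are arbitrary illustrations):

```python
def eos_canonical(kinetic, potential):
    """Equation of state w = (K - V)/(K + V) of a canonical scalar field.

    K = phidot^2 / 2 >= 0 by construction, so w >= -1 whenever rho > 0.
    """
    rho = kinetic + potential
    if rho <= 0:
        raise ValueError("energy density must be positive")
    return (kinetic - potential) / rho

# Potential-dominated fields sit near w = -1; kinetic-dominated near w = +1.
samples = [(0.0, 1.0), (1e-6, 1.0), (1.0, 1.0), (10.0, 1.0)]
for K, V in samples:
    w = eos_canonical(K, V)
    assert w >= -1.0  # the bound discussed in the text
    print(f"K={K}, V={V} -> w = {w:+.6f}")
```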
The second deferred part on the background expansion concerns a basic statistical question: what is a sensible precision target for a measurement of dark energy, e.g., of its equation of state? In other words, how close to \(w_\phi =-1\) should we go before we can be satisfied and declare that dark energy is the cosmological constant? We will address this question in Sect. I.4.
2.2.1 Parametrization of the background evolution
The second approach is to start from a simple expression for w without assuming any specific dark-energy model (but still checking afterwards whether known theoretical dark-energy models can be represented). This is what has been done by Huterer and Turner (2001), Maor et al. (2001), Weller and Albrecht (2001) (linear and logarithmic parametrization in z), Chevallier and Polarski (2001), Linder (2003) (linear and power-law parametrization in a), Douspis et al. (2006), Bassett et al. (2004) (rapidly varying equation of state).
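The most widely used of these is the CPL form, \(w(a)=w_0+w_a(1-a)\). Integrating the continuity equation with this w(a) gives \(\rho _{\mathrm {DE}}(a)/\rho _{\mathrm {DE}}(1)=a^{-3(1+w_0+w_a)}\,e^{-3w_a(1-a)}\), a standard result. A short sketch, where the fiducial values of \(w_0\) and \(w_a\) are arbitrary illustrations and not Euclid forecasts:

```python
import math

def w_cpl(a, w0=-1.0, wa=0.0):
    """CPL equation of state w(a) = w0 + wa * (1 - a)."""
    return w0 + wa * (1.0 - a)

def rho_de_ratio(a, w0=-1.0, wa=0.0):
    """Dark-energy density rho_DE(a)/rho_DE(1) implied by the CPL form."""
    return a ** (-3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * (1.0 - a))

# A cosmological constant (w0 = -1, wa = 0) gives a constant density:
assert rho_de_ratio(0.5) == 1.0

# A mildly evolving model, evaluated at a = 0.5 (i.e., z = 1):
print(w_cpl(0.5, w0=-0.9, wa=0.3))
print(rho_de_ratio(0.5, w0=-0.9, wa=0.3))
```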
Note that the measurement of \(\rho _X(z)\) is straightforward once H(z) is measured from baryon acoustic oscillations, and \(\varOmega _m\) is constrained tightly by the combined data from galaxy clustering, weak lensing, and cosmic microwave background data—although strictly speaking this requires a choice of perturbation evolution for the dark energy as well, and in addition one that is not degenerate with the evolution of dark matter perturbations; see Kunz (2009).
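The reconstruction just described can be sketched directly: for a flat universe the Friedmann equation gives \(\rho _X(z)/\rho _{\mathrm {crit},0}=E^2(z)-\varOmega _m(1+z)^3\), with \(E=H/H_0\). In the toy example below the “measured” E(z) is generated from a fiducial \(\varLambda \)CDM model purely for illustration, and \(\varOmega _m=0.3\) is an assumed value:

```python
def e_squared_lcdm(z, omega_m=0.3):
    """E^2(z) = (H/H0)^2 for a fiducial flat LCDM model (stand-in for data)."""
    return omega_m * (1 + z) ** 3 + (1 - omega_m)

def rho_x(z, e2, omega_m=0.3):
    """Dark-energy density in units of today's critical density.

    Inverts the flat Friedmann equation: rho_X = E^2 - Omega_m (1+z)^3.
    """
    return e2 - omega_m * (1 + z) ** 3

# For an exact LCDM input, the reconstructed rho_X is constant in z:
for z in (0.0, 0.5, 1.0, 2.0):
    print(z, rho_x(z, e_squared_lcdm(z)))
```

Note that at high redshift this becomes the difference of two large, nearly equal numbers, which is why tight priors on \(\varOmega _m\) matter so much in practice.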
Another useful possibility is to adopt the principal component approach (Huterer and Starkman 2003), which avoids any assumption about the form of w and assumes it to be constant or linear in redshift bins, then derives which combination of parameters is best constrained by each experiment.
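A toy version of this principal component analysis can be written in a few lines: given a Fisher matrix for w in redshift bins, its eigenvectors are the uncorrelated linear combinations of bins and the eigenvalues give their uncertainties. The Fisher matrix below is synthetic, built from random numbers for illustration only, not a Euclid forecast:

```python
import numpy as np

# Synthetic, positive-definite "Fisher matrix" for w in 5 redshift bins.
rng = np.random.default_rng(0)
n_bins = 5
A = rng.standard_normal((n_bins, n_bins))
fisher = A @ A.T + n_bins * np.eye(n_bins)

# Diagonalize: eigenvectors are the principal components of w(z),
# eigenvalues give the inverse variance of each component.
eigvals, eigvecs = np.linalg.eigh(fisher)      # ascending eigenvalues
order = np.argsort(eigvals)[::-1]              # best-constrained modes first
sigmas = 1.0 / np.sqrt(eigvals[order])         # uncertainty on each mode

for i in range(n_bins):
    weights = np.round(eigvecs[:, order[i]], 2)
    print(f"PC {i + 1}: sigma = {sigmas[i]:.3f}, bin weights = {weights}")
```

Each "PC" is a weighted average of the w bins; an experiment constrains the first few tightly and the rest hardly at all.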
For a cross-check of the results using more complicated parameterizations, one can use simple polynomial parameterizations of w and \(\rho _{\mathrm {DE}}(z)/\rho _{\mathrm {DE}}(0)\) (Wang 2008b).
2.3 Perturbations
This section is devoted to a discussion of linear perturbation theory in dark-energy models. Since we will discuss a number of nonstandard models in later sections, we present here the main equations in a general form that can be adapted to various contexts. This section will identify which perturbation functions the Euclid survey (Laureijs et al. 2011) will try to measure and how they can help us to characterize the nature of dark energy and the properties of gravity.
2.3.1 Cosmological perturbation theory
Here we provide the perturbation equations in a dark-energy-dominated universe for a general fluid, focusing on scalar perturbations.
The problem here is not only to parameterize the pressure perturbation and the anisotropic stress for the dark energy (there is no unique way to do it; see below, especially Sect. I.3.3 for what to do when w crosses \(-1\)), but rather that we would need to run the perturbation equations for each model we assume, make predictions, and compare the results with observations. Clearly, this approach takes too much time. In the following Sect. I.3.2 we show a general approach to understanding the observed late-time accelerated expansion of the universe through the evolution of the matter density contrast.
In the following, whenever there is no risk of confusion, we remove the overbars from the background quantities.
2.3.2 Modified growth parameters
Even if the expansion history, H(z), of the FLRW background has been measured (at least up to redshifts \(\sim 1\) by supernova data, i.e., via the luminosity distance), it is not yet possible to identify the physics causing the recent acceleration of the expansion of the universe. Information on the growth of structure at different scales and different redshifts is needed to discriminate between models of dark energy (DE) and modified gravity (MG). A definition of what we mean by DE and MG will be postponed to Sect. I.5.

Model parameters capture the degrees of freedom of DE/MG and modify the evolution equations of the energy–momentum content of the fiducial model. They can be associated with physical meanings and have uniquely predicted behavior in specific theories of DE and MG.

Trigger relations are derived directly from observations and only hold in the fiducial model. They are constructed to break down if the fiducial model does not describe the growth of structure correctly.
For a large-scale structure and weak lensing survey the crucial quantities are the matter-density contrast and the gravitational potentials, and we therefore focus on scalar perturbations in the Newtonian gauge with the metric (I.3.8).
We describe the matter perturbations using the gauge-invariant comoving density contrast \(\varDelta _M\equiv \delta _M+3aH \theta _M/k^2\), where \(\delta _M\) and \(\theta _M\) are the matter density contrast and the divergence of the fluid velocity for matter, respectively. The discussion can be generalized to include multiple fluids.
Clearly, if the actual theory of structure growth is not the \(\varLambda \)CDM scenario, the constraints (I.3.19) will be modified, the growth Eq. (I.3.20) will be different, and finally the growth factor (I.3.21) is changed, i.e., the growth index is different from \(\gamma _\varLambda \) and may become time and scale dependent. Therefore, the inconsistency of these three points of view can be used to test the \(\varLambda \)CDM paradigm.
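For \(\varLambda \)CDM the growth index takes the well-known value \(\gamma _\varLambda \approx 0.545\), and \(f\approx \varOmega _m(a)^{\gamma }\) approximates the exact growth rate to better than a percent today. A minimal numerical check, integrating the standard growth-rate equation \(\mathrm {d}f/\mathrm {d}\ln a = \tfrac{3}{2}\varOmega _m(a) - f^2 - \left( 2-\tfrac{3}{2}\varOmega _m(a)\right) f\) for a flat \(\varLambda \)CDM background with the illustrative value \(\varOmega _m=0.3\):

```python
import math

def omega_m_a(a, om0=0.3):
    """Fractional matter density Omega_m(a) in flat LCDM."""
    return om0 * a**-3 / (om0 * a**-3 + 1.0 - om0)

def growth_rate_today(om0=0.3, lna_start=-6.0, steps=20000):
    """Euler-integrate df/dln a from deep matter domination (f = 1) to a = 1."""
    lna, f = lna_start, 1.0
    h = -lna_start / steps
    for _ in range(steps):
        om = omega_m_a(math.exp(lna), om0)
        f += h * (1.5 * om - f * f - (2.0 - 1.5 * om) * f)
        lna += h
    return f

f_exact = growth_rate_today()
f_fit = omega_m_a(1.0) ** 0.545   # the gamma approximation
print(f_exact, f_fit)             # the two agree at the sub-percent level
```

If the true theory of gravity differs from GR, the measured f(a) deviates from this \(\gamma \approx 0.545\) solution, which is precisely the consistency test described above.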
I.3.2.1 Two new degrees of freedom
Given an MG or DE theory, the scale- and time-dependence of the functions Q and \(\eta \) can be derived and predictions projected into the \((Q,\eta )\) plane. This is also true for interacting dark sector models, although in this case the identification of the total matter density contrast (DM plus baryonic matter) and the galaxy bias become somewhat contrived (see, e.g., Song et al. 2010, for an overview of predictions for different MG/DE models).
Many different names and combinations of the above defined functions \((Q,\eta )\) have been used in the literature, some of which are more closely related to actual observables and are less correlated than others in certain situations (see, e.g., Amendola et al. 2008b; Mota et al. 2007; Song et al. 2010; Pogosian et al. 2010; Daniel et al. 2010; Daniel and Linder 2010; Ferreira and Skordis 2010).
Any combination of two variables out of \(\{Q,\eta ,\mu ,\varSigma ,\ldots \}\) is a valid alternative to \((Q,\eta )\). It turns out that the pair \((\mu ,\varSigma )\) is particularly well suited when CMB, WL and LSS data are combined as it is less correlated than others (see Zhao et al. 2010; Daniel and Linder 2010; Axelsson et al. 2014).
I.3.2.2 Parameterizations and nonparametric approaches
So far we have defined two free functions that can encode any departure of the growth of linear perturbations from \(\varLambda \)CDM. However, these free functions are not measurable, but have to be inferred via their impact on the observables. Therefore, one needs to specify a parameterization of, e.g., \((Q,\eta )\) such that departures from \(\varLambda \)CDM can be quantified. Alternatively, one can use nonparametric approaches to infer the time- and scale-dependence of the modified growth functions from the observations.
Daniel et al. (2010) and Daniel and Linder (2010) investigate the modified growth parameters binned in z and k. The functions are taken constant in each bin. This approach is simple and only mildly dependent on the size and number of the bins. However, the bins can be correlated and therefore the data might not be used in the most efficient way with fixed bins. Slightly more sophisticated than simple binning is a principal component analysis (PCA) of the binned (or pixelized) modified growth functions. In PCA, uncorrelated linear combinations of the original pixels are constructed. In the limit of a large number of pixels the model dependence disappears. At the moment, however, computational cost limits the number of pixels to only a few. Zhao et al. (2009a, 2010) employ a PCA in the \((\mu ,\eta )\) plane and find that the observables are more sensitive to the scale variation of the modified growth parameters than to their time dependence and average values. This suggests that simple, monotonically or mildly varying parameterizations, as well as purely time-dependent parameterizations, are poorly suited to detect departures from \(\varLambda \)CDM.
I.3.2.3 Trigger relations

As only one additional parameter is introduced, a second parameter, such as \(\eta \), is needed to close the system and be general enough to capture all possible modifications.

The growth factor is a solution of the growth equation on sub-Hubble scales and, therefore, is not general enough to be consistent on all scales.

The framework is designed to describe the evolution of the matter density contrast and is not easily extended to describe all other energy–momentum components or integrated into a CMB Boltzmann code.
2.3.3 Phantom crossing
In this section, we turn our attention to the evolution of the perturbations of a general dark-energy fluid with an evolving equation of state parameter w. Current limits on the equation of state parameter \(w=p/\rho \) of the dark energy indicate that \(p\approx -\rho \), and so do not exclude \(p<-\rho \), a region of parameter space often called phantom energy. Even though the region for which \(w<-1\) may be unphysical at the quantum level, it is still important to probe it, not least to test for coupled dark energy and alternative theories of gravity or higher-dimensional models that can give rise to an effective or apparent phantom energy.
Although there is no problem in considering \(w<-1\) for the background evolution, there are apparent divergences appearing in the perturbations when a model tries to cross the limit \(w=-1\). This is a potential headache for experiments like Euclid that directly probe the perturbations through measurements of the galaxy clustering and weak lensing. To analyze the Euclid data, we need to be able to consider models that cross the phantom divide \(w=-1\) at the level of first-order perturbations (since the only dark-energy model that has no perturbations at all is the cosmological constant).
However, at the level of cosmological first-order perturbation theory, there is no fundamental limitation that prevents an effective fluid from crossing the phantom divide.
I.3.3.1 Parameterizing the pressure perturbation
Barotropic fluids.
Non-adiabatic fluids.
This divergence appears because for \(w=-1\) the energy–momentum tensor Eq. (I.3.3) reads \(T^{\mu \nu }=pg^{\mu \nu }\). Normally the four-velocity \(u^{\mu }\) is the timelike eigenvector of the energy–momentum tensor, but now all vectors are eigenvectors. So the problem of fixing a unique rest frame is no longer well posed. Then, even though the pressure perturbation looks fine for the observer in the rest frame, because it does not diverge, the badly-defined gauge transformation to the Newtonian frame does, as it also contains \(c_{a}^{2}\).
I.3.3.2 Regularizing the divergences
We have seen that neither barotropic fluids nor canonical scalar fields, for which the pressure perturbation is of the type (I.3.39), can cross the phantom divide. However, there is a simple model (called the quintom model; Feng et al. 2005; Hu 2005) consisting of two fluids of the same type as in the previous Sect. I.3.3.1 but with a constant w on either side of \(w=-1\).^{3} The combination of the two fluids then effectively crosses the phantom divide if we start with \(w_{\mathrm {tot}}>-1\), as the energy density in the fluid with \(w<-1\) will grow faster, so that this fluid will eventually dominate and we will end up with \(w_{\mathrm {tot}}<-1\).
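This quintom mechanism is easy to illustrate numerically: with two constant-w fluids, \(\rho _i(a)=\rho _i(1)\,a^{-3(1+w_i)}\), the total equation of state \(w_{\mathrm {tot}}=(w_1\rho _1+w_2\rho _2)/(\rho _1+\rho _2)\) crosses \(-1\) as the \(w<-1\) fluid comes to dominate. The parameter values below (\(w_1=-0.8\), \(w_2=-1.2\) and the relative densities) are arbitrary illustrations:

```python
def w_total(a, w1=-0.8, w2=-1.2, rho1_0=1.0, rho2_0=0.5):
    """Total equation of state of two constant-w fluids at scale factor a.

    Each fluid dilutes as rho_i(a) = rho_i(1) * a^(-3*(1+w_i)), so the
    w < -1 component grows relative to the other as a increases.
    """
    rho1 = rho1_0 * a ** (-3.0 * (1.0 + w1))
    rho2 = rho2_0 * a ** (-3.0 * (1.0 + w2))
    return (w1 * rho1 + w2 * rho2) / (rho1 + rho2)

# Early on the w > -1 fluid dominates; in the future the w < -1 fluid does,
# so w_tot crosses the phantom divide somewhere in between.
print(w_total(0.1))    # above -1
print(w_total(10.0))   # below -1
```

Each fluid separately never crosses \(w=-1\), so the perturbations of each component remain well defined, which is the point of the construction.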
This result appears also related to the behavior found for coupled dark-energy models (originally introduced to solve the coincidence problem) where dark matter and dark energy interact not only through gravity (Amendola 2000a). The effective dark energy in these models can also cross the phantom divide without divergences (Huey and Wandelt 2006; Das et al. 2006; Kunz 2009).
However, in this class of models other instabilities arise at the perturbation level regardless of the coupling used (cf. Väliviita et al. 2008).
I.3.3.3 A word on perturbations when \(w=-1\)
For instance, if we set \(w=-1\) and \(\delta p = \gamma \delta \rho \) (where \(\gamma \) can be a generic function) in Eqs. (I.3.33) and (I.3.34) we have \(\delta \ne 0\) and \(V\ne 0\). However, the solutions are decaying modes due to the \(\frac{1}{a}\left( 1-3w\right) V\) term, so they are not important at late times; but it is interesting to notice that they are in general not zero.
It is also interesting to notice that when \(w=-1\) the perturbation equations tell us that dark-energy perturbations are not influenced through \(\varPsi \) and \(\varPhi '\) [see Eqs. (I.3.33) and (I.3.34)]. Since \(\varPhi \) and \(\varPsi \) are the quantities directly entering the metric, they must remain finite, and even much smaller than 1 for perturbation theory to hold. Since, in the absence of direct couplings, the dark energy only feels the other constituents through the terms \((1+w)\varPsi \) and \((1+w)\varPhi '\), it decouples completely in the limit \(w=-1\) and just evolves on its own. But its perturbations still enter the Poisson equation and so the dark matter perturbation will feel the effects of the dark-energy perturbations.
Although this situation may seem contrived, it might be that the acceleration of the universe is just an observed effect as a consequence of a modified theory of gravity. As was shown in Kunz and Sapone (2007), any modified gravity theory can be described as an effective fluid both at background and at perturbation level; in such a situation it is imperative to describe its perturbations properly as this effective fluid may manifest unexpected behavior.
2.4 Generic properties of dark energy and modified gravity models
This section explores some generic issues that are not necessarily a feature of any particular model. We will recall the properties of particular classes of models as examples, leaving the details of the model description to Sect. I.5.
We begin by discussing the general implications of modelling dark energy as an extra degree of freedom, instead of the cosmological constant. We then discuss how the literature tends to categorize models into models of dark energy and models of modified gravity. We focus on the expansion of the cosmological background and ask what precision of measurement is necessary in order to make definite statements about large parts of the interesting model space. Then we address the issue of darkenergy perturbations, their impact on observables and how they can be used to distinguish between different classes of models. Finally, we present some general consistency relations among the perturbation variables that all models of modified gravity should fulfill.
2.4.1 Dark energy as a degree of freedom
De Sitter spacetime, filled with only a cosmological constant, is static and undergoes no evolution. It is also invariant under Lorentz transformations. When other sources of energy–momentum are added to this spacetime, the dynamics occurs on top of this static background, or better, vacuum. This is to say that the cosmological constant is a form of dark energy which has no dynamics of its own and whose value is fixed for all frames and coordinate choices.
A dynamical model for acceleration implies the existence of some change of the configuration in space or time. It is no longer a gravitational vacuum. In the case of a perfectly homogeneous and isotropic universe, the evolution can only be a function of time. In reality, the universe has neither of these properties and therefore the configuration of any dynamical dark energy must also be inhomogeneous. Whether the inhomogeneities are small is a modeldependent statement.
It is important to stress that there exists no such thing as a modified gravity theory with no extra degrees of freedom beyond the metric. All models which seemingly involve just the metric degrees of freedom in some modified sense (say f(R) or f(G)), in fact can be shown to be equivalent to general relativity plus an extra scalar degree of freedom with some particular couplings to gravity (Chiba 2003; Kobayashi et al. 2011). Modifications such as massive gravity increase the number of polarisations.
In the context of \(\varLambda \)CDM, it has proven fruitful to consider the dynamics of the universe in terms of a perturbation theory: a separation into a background, linear and then higherorder fluctuations, each of increasingly small relevance (see Sect. I.3.1). These perturbations are thought to be seeded with a certain amplitude by an inflationary era at early times. Gravitational collapse then leads to a growth of the fluctuations, eventually leading to a breakdown of the perturbation theory; however, for dark matter in \(\varLambda \)CDM, this growth is only large enough to lead to nonlinearity at smaller scales.
When dynamical DE is introduced, it must be described by at least one new (potentially more) degree of freedom. In principle, in order to make any statements about any such theory, one must specify the initial conditions on some spacelike hypersurface and then the particular DE model will describe the subsequent evolutionary history. Within the framework of perturbation theory, initial conditions must be specified for both the background and the fluctuations. The model then provides a set of related evolution equations at each order.
We defer the discussion of the freedom allowed at particular orders to the appropriate sections below (Sect. I.4.3 for the background, Sect. I.4.4 for the perturbations). Here, let us just stress that since DE is a full degree of freedom, its initial conditions will contain both adiabatic and isocurvature modes, which may or may not be correlated, depending on their origin and which may or may not survive until today, depending on the particular model. Secondly, the nonlinearity in the DE configuration is in principle independent of the nonlinearity in the distribution of dark matter and will depend on both the particular model and the initial conditions. For example, the chameleon mechanism present in many nonminimally coupled models of dark energy acts to break down DE perturbation theory in higherdensity environments (see Sect. I.5.8). This breakdown of linear theory is environmentdependent and only indirectly related to nonlinearities in the distribution of dark matter.
Let us underline that the absolute and unique prediction of \(\varLambda \)CDM is that \(\varLambda \) is constant in space and time and therefore does not contribute to fluctuations at any order. Any violation of this statement at any one order, if it cannot be explained by astrophysics, is sufficient evidence that the acceleration is not caused by vacuum energy.
2.4.2 A definition of modified gravity
In this review we often make reference to DE and MG models. Although in an increasing number of publications a similar dichotomy is employed, there is currently no consensus on where to draw the line between the two classes. Here we will introduce an operational definition for the purpose of this document.
Roughly speaking, what most people have in mind when talking about standard dark energy are models of minimally-coupled scalar fields with standard kinetic energy in 4-dimensional Einstein gravity, the only functional degree of freedom being the scalar potential. Often, this class of models is referred to simply as “quintessence”. However, when we depart from this picture a simple classification is not easy to draw. One problem is that, as we have seen in the previous sections, both at background and at the perturbation level, different models can have the same observational signatures (Kunz and Sapone 2007). This problem is not due to the use of perturbation theory: any modification to Einstein’s equations can be interpreted as standard Einstein gravity with a modified “matter” source, containing an arbitrary mixture of scalars, vectors and tensors (Hu and Sawicki 2007b; Kunz et al. 2008).

Standard dark energy These are models in which dark energy lives in standard Einstein gravity, does not cluster appreciably on sub-horizon scales and does not carry anisotropic stress. As already noted, the prime example of a standard dark-energy model is a minimally-coupled scalar field with standard kinetic energy, for which the sound speed equals the speed of light.

Clustering dark energy In clustering dark-energy models, there is an additional contribution to the Poisson equation due to the dark-energy perturbation, which induces \(Q \ne 1\). However, in this class we require \(\eta =1\), i.e., no extra effective anisotropic stress is induced by the extra dark component. A typical example is a k-essence model with a low sound speed, \(c_s^2\ll 1\).

Modified gravity models These are models where from the start the Einstein equations are modified, for example scalar–tensor and f(R) type theories, Dvali–Gabadadze–Porrati (DGP) as well as interacting dark energy, in which effectively a fifth force is introduced in addition to gravity. Generically they change the clustering and/or induce a nonzero anisotropic stress. Since our definitions are based on the phenomenological parameters, we also add dark-energy models that live in Einstein’s gravity but that have nonvanishing anisotropic stress into this class since they cannot be distinguished by cosmological observations.
Therefore, on sub-horizon scales and at first order in perturbation theory our definition of MG is straightforward: models with \(Q=\eta =1\) (see Eq. I.3.23) are standard DE, otherwise they are MG models. In this sense the definition above is rather convenient: we can use it to quantify, for instance, how well Euclid will distinguish between standard dynamical dark energy and modified gravity by forecasting the errors on \(Q,\eta \) or on related quantities like the growth index \(\gamma \).
On the other hand, it is clear that this definition is only a practical way to group different models and should not be taken as a fundamental one. We do not try to set a precise threshold on, for instance, how much dark energy should cluster before we call it modified gravity: the boundary between the classes is therefore left undetermined but we think this will not harm the understanding of this document.
2.4.3 The background: to what precision should we measure w?
The effect of dark energy on background expansion is to add a new source of energy density. The chosen model will have dynamics which will cause the energy density to evolve in a particular manner. On the simplest level, this evolution is a result of the existence of intrinsic hydrodynamical pressure of the dark energy fluid which can be described by the instantaneous equation of state. Alternatively, an interaction with other species can result in a nonconservation of the DE EMT and therefore change the manner in which energy density evolves (e.g. coupled dark energy). Taken together, all these effects add up to result in an effective equation of state for DE which drives the expansion history of the universe.
It is important to stress that all background observables are geometrical in nature and can therefore only be measurements of curvatures. It is not possible to disentangle dark energy and dark matter in a model-independent manner, and therefore only the Hubble parameter up to a normalization factor, \(H(z)/H_0\), and the spatial curvature \(\varOmega _{k0}\) can be obtained in a DE-model-independent way. In particular, the measurement of the dark-matter density, \(\varOmega _{m0}\), becomes possible only on choosing some parameterization for \(w_\text {eff}\) (e.g., a constant) (Amendola et al. 2013a). One must therefore always be mindful that extracting DE properties from background measurements is limited to constraining the coefficients of a chosen parameterization of the effective equation of state for the DE component, rather than measuring the actual effective w, and definitely not the intrinsic w of the dark energy.
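As a concrete sketch of such a parametrization-dependent fit, the popular CPL form \(w(a)=w_0+(1-a)w_a\) admits a closed-form dark-energy density, so \(H(z)/H_0\) follows directly. The parameter values below are illustrative, not measurements, and a flat universe with matter plus dark energy is assumed.

```python
import math

# Sketch: dimensionless expansion rate E(z) = H(z)/H0 for a flat universe
# (matter + dark energy only) with CPL dark energy, w(a) = w0 + (1-a)*wa.
# Parameter values are illustrative, not measurements.
def E(z, omega_m=0.3, w0=-1.0, wa=0.0):
    a = 1.0 / (1.0 + z)
    # exact integral of 3*(1 + w(a'))/a' between a and 1 for the CPL form:
    rho_de = a ** (-3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * (1.0 - a))
    return math.sqrt(omega_m * a ** -3 + (1.0 - omega_m) * rho_de)
```

With `w0=-1, wa=0` the dark-energy term is constant and the ΛCDM expansion history is recovered, which is why only the coefficients of the chosen form, not w itself, are constrained by such fits.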

Since current measurements of the expansion history appear so consistent with \(w=-1\), do we not already know that the dark energy is a cosmological constant?

To which precision should we measure w? Or equivalently, why is the Euclid target precision of about 0.01 on \(w_0\) and 0.1 on \(w_a\) interesting?
In the first part, we will argue that whereas any detection of a deviation from \(\varLambda \)CDM expansion history immediately implies that acceleration is not driven by a cosmological constant, the converse is not true, even if \(w=-1\) exactly. We will also argue that a detection of a phantom equation of state, \(w<-1\), would reveal that gravity is not minimally coupled or that dark energy interacts and immediately eliminate the perfect-fluid models of dark energy, such as quintessence.
Then we will see that for single-field slow-roll inflation models we effectively measure \(w \sim -1\) with percent-level accuracy (see Fig. 1); however, the deviation from a scale-invariant spectrum means that we nonetheless observe a dynamical evolution and, thus, a deviation from an exact and constant equation of state of \(w=-1\). Therefore, we know that inflation was not due to a cosmological constant; we also know that we would see no deviation from a de Sitter expansion at a precision lower than the one Euclid will reach.
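The percent-level figure can be made concrete with the standard Hubble slow-roll relations; the forms below are standard results from the inflation literature (the Eqs. (I.4.2) and (I.4.3) referenced later are presumably equivalent, but that identification is an assumption):

```latex
1 + w \;=\; \frac{2}{3}\,\epsilon_H \,, \qquad
n_\mathrm{s} - 1 \;=\; 2\eta_H - 4\epsilon_H \,.
```

With \(n_\mathrm{s}-1 \simeq -0.04\) and no tuned cancellation between \(\eta_H\) and \(\epsilon_H\), one expects \(\epsilon_H = \mathcal{O}(10^{-2})\) and hence \(1+w = \mathcal{O}(10^{-2})\): a percent-level deviation from \(w=-1\) during inflation.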
In the final part, we will consider the Bayesian evidence in favor of a true cosmological constant if we keep finding \(w=-1\); we will see that for priors on \(w_0\) and \(w_a\) of order unity, a precision like the one for Euclid is necessary to favor a true cosmological constant decisively. We will also discuss how this conclusion changes depending on the choice of priors.
I.4.3.1 What can a measurement of w tell us?
The prediction of \(\varLambda \)CDM is that \(w=-1\) exactly at all times. Any detection of a deviation from this result immediately disqualifies the cosmological constant as a model for dark energy.
The converse is not true, however. The simplest models of dynamical dark energy, such as quintessence (Sect. I.5.1), can approach the vacuum equation of state arbitrarily closely, given sufficiently flat potentials and appropriate initial conditions. An equation of state \(w=-1\) at all times is inconsistent with these models, but the deviation may never be detectable.
Moreover, there exist classes of models, e.g., shift-symmetric k-essence with de Sitter attractors, which have equation of state \(w=-1\) exactly, once the attractor is approached. Despite this, the acceleration is not at all driven by a cosmological constant, but by a perturbable fluid which has vanishing sound speed and can cluster. Such models can only be differentiated from a cosmological constant by the measurements of perturbations, if at all, see Sect. I.4.4.
Beyond eliminating the cosmological constant as a mechanism for acceleration, measuring \(w>-1\) is not by itself very informative as to the nature of dark energy. Essentially all classes of models can evolve with such an equation of state given appropriate initial conditions (which is not to say that any evolution history can be produced by any class of models). On the other hand, the observation of a phantom equation of state, \(w<-1\), at any one moment in time is hugely informative as to the nature of gravitational physics. It is well known that any such background made up of either a perfect fluid or a minimally coupled scalar field suffers from gradient instabilities, ghosts or both (Dubovsky et al. 2006). Therefore such an observation immediately implies that either gravity is nonminimally coupled and therefore there is a fifth force, that dark energy is not a perfect fluid, that dark energy interacts with other species, or that dynamical ghosts are not forbidden by nature, perhaps being stabilized by a mechanism such as ghost condensation (Arkani-Hamed et al. 2004b). Any of these would provide a discovery in itself as significant as excluding a cosmological constant.
In conclusion, we aim to measure w since it is the most direct way of disproving that acceleration is caused by a cosmological constant. However, if it turns out that no significant deviation can be detected this does not imply that the cosmological constant is the mechanism for dark energy. The clustering properties must then be verified and found to not disagree with \(\varLambda \)CDM predictions.
I.4.3.2 Lessons from inflation
In all probability the observed latetime acceleration of the universe is not the first period of accelerated expansion that occurred during its evolution: the current standard model of cosmology incorporates a much earlier phase with \(\ddot{a}>0\), called inflation. Such a period provides a natural mechanism for generating several properties of the universe: spatial flatness, gross homogeneity and isotropy on scales beyond naive causal horizons and nearly scaleinvariant initial fluctuations.
The first lesson to draw from inflation is that it cannot have been due to a pure cosmological constant. This is immediately clear since inflation actually ended and therefore there had to be some sort of time evolution. We can go even further: since de Sitter spacetime is static, no curvature perturbations are produced in this case (the fluctuations are just unphysical gauge modes) and therefore an exactly scaleinvariant power spectrum would have necessitated an alternative mechanism.
The results obtained by the Planck collaboration from the first year of data imply that the initial spectrum of fluctuations is not scale invariant, but rather has a tilt given by \(n_\text {s} = 0.9608\pm 0.0054\) and is consistent with no running and no tensor modes (Planck Collaboration 2014a). This is consistent with the final results from WMAP (Hinshaw et al. 2013). It is surprisingly difficult to create this observed fluctuation spectrum in alternative scenarios that are strictly causal and only act on subhorizon scales (Spergel and Zaldarriaga 1997; Scodeller et al. 2009).
We should note that there are classes of models where the cancellation between \(\eta _H\) and the tilt in Eq. (I.4.3) is indeed natural, which is why one cannot give a lower limit for the amplitude of primordial gravitational waves and w lies arbitrarily close to \(-1\). On the other hand, the observed period of inflation is probably in the middle of a long slow-roll phase. By Eq. (I.4.2), this cancellation would only happen at one moment in time. We have plotted the typical evolution of w in inflation in Fig. 2.
I.4.3.3 When should we stop? Bayesian model comparison
In Sect. I.4.3.1, we explained that the measurement of the equation of state w can exclude some classes of models, including the cosmological constant of \(\varLambda \)CDM. However, most classes of models allow the equation of state to be arbitrarily close to that of vacuum energy, \(w=-1\), while still representing completely different physics. Since precision cannot be infinite, we need to propose an algorithm to determine how well this property should be measured. As we showed in Sect. I.4.3.2 above, inflation provides an example of a period of acceleration that, had it occurred at late times, would have been judged consistent with \(w=-1\) given today’s constraints. We therefore should require a better measurement, but how much better?
We approach the answer to this question from the perspective of Bayesian evidence: at what precision does the non-detection of a deviation of the background expansion history signify that we should prefer the simpler null hypothesis that \(w=-1\)?
In our Bayesian framework, the first model, the null hypothesis \(M_0\), posits that the background expansion is due to an extra component of energy density that has equation of state \(w=-1\) at all times. The other models assume that the dark energy is dynamical in a way that is well parametrized either by an arbitrary constant w (model \(M_1\)) or by a linear fit \(w(a)=w_0+(1-a) w_a\) (model \(M_2\)).
Here we are using the constant and linear parametrization of w because on the one hand we can consider the constant w to be an effective quantity, averaged over redshift with the appropriate weighting factor for the observable (see Simpson and Bridle 2006), and on the other hand because the precision targets for observations are conventionally phrased in terms of the figure of merit (FoM) given by \(1\big /\sqrt{\det {\mathrm {Cov}}(w_0,w_a)}\). We will, therefore, find a direct link between the model probability and the FoM. It would be an interesting exercise to repeat the calculations with a more general model, using e.g. PCA, although we would expect to reach a similar conclusion.
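Using the DETF convention \(\mathrm{FoM}=1\big/\sqrt{\det \mathrm{Cov}(w_0,w_a)}\), the FoM is a one-liner given a forecast covariance matrix. This is a minimal sketch; the diagonal covariance entries below are illustrative, chosen to match the Euclid targets of 0.01 on \(w_0\) and 0.1 on \(w_a\).

```python
import math

# Sketch: the DETF figure of merit from a 2x2 covariance matrix of
# (w0, wa). Larger FoM means tighter dark-energy constraints.
def figure_of_merit(cov):
    # FoM = 1 / sqrt(det Cov(w0, wa))
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    return 1.0 / math.sqrt(det)

# Illustrative uncorrelated errors sigma(w0) = 0.01, sigma(wa) = 0.1:
cov = [[0.01 ** 2, 0.0], [0.0, 0.1 ** 2]]
```

For the illustrative covariance above the FoM is 1000; correlations between \(w_0\) and \(w_a\) (nonzero off-diagonal entries) shrink the determinant and so raise the FoM at fixed marginal errors.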

Fluid-like: we assume that the acceleration is driven by a fluid whose background configuration satisfies the null energy condition while violating the strong energy condition (as any accelerating component must), i.e., the prior allows \(-1 \le w \le -1/3\), so that \(\varDelta _+ = 2/3,\ \varDelta _- = 0\).

Phantom: phantom models violate the null energy condition, i.e., are described by \(\varDelta _+ = 0,\ \varDelta _- > 0\), with the latter being possibly rather large.

Small departures: we assume that the equation of state is very close to that of vacuum energy, as seems to have been the case during inflation: \(\varDelta _+ = \varDelta _- = 0.01\).
Strength of evidence disfavoring the three example benchmark models against a \(\varLambda \)CDM expansion history, using an indicative accuracy on \(w=-1\) from present data, \(\sigma \sim 0.1\)

Model  \((\varDelta _+, \varDelta _-)\)  \(\ln B\) today (\(\sigma = 0.1\))
Phantom  (0, 10)  4.4 (strongly disfavored)
Fluid-like  (2/3, 0)  1.7 (slightly disfavored)
Small departures  (0.01, 0.01)  0.0 (inconclusive)
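The entries of Table 1 can be reproduced, up to rounding, from the Bayes factor for a Gaussian likelihood of width \(\sigma\) centered on \(w=-1\) against a flat prior extending \(\varDelta_+\) above and \(\varDelta_-\) below \(w=-1\). The exact expression below is our reconstruction of what the text calls Eq. (I.4.5) (which is not reproduced here), so treat its precise form as an assumption checked against the table.

```python
import math

# Sketch: Bayes factor B of the LambdaCDM null hypothesis (w = -1 exactly)
# against a model with a flat prior on w spanning [-1 - Delta_minus,
# -1 + Delta_plus], for a Gaussian likelihood of width sigma centered on
# w = -1. Reconstructed form of the text's Eq. (I.4.5): an assumption
# validated against Table 1, not a quoted equation.
def ln_bayes_factor(d_plus, d_minus, sigma):
    # fraction of the likelihood mass lying inside the prior range
    frac = 0.5 * (math.erf(d_plus / (math.sqrt(2) * sigma))
                  + math.erf(d_minus / (math.sqrt(2) * sigma)))
    return math.log((d_plus + d_minus) / (math.sqrt(2 * math.pi) * sigma * frac))

# With sigma = 0.1 (indicative present accuracy), this yields approximately
# ln B = 4.4 for phantom (0, 10), 1.7 for fluid-like (2/3, 0) and 0.0 for
# small departures (0.01, 0.01), matching Table 1.
```

The same function, solved for \(\sigma\) at fixed odds, reproduces the required precisions of Table 2.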
Required precision \(\sigma \) of the value of w for future surveys in order to disfavor the three benchmark models against \(w=-1\) for two different strengths of evidence

Model  \((\varDelta _+, \varDelta _-)\)  Required \(\sigma \) for odds \(>20:1\)  Required \(\sigma \) for odds \(>150:1\)
Phantom  (0, 10)  0.4  \(5\times 10^{-2}\)
Fluid-like  (2/3, 0)  \(3\times 10^{-2}\)  \(3\times 10^{-3}\)
Small departures  (0.01, 0.01)  \(4\times 10^{-4}\)  \(5\times 10^{-5}\)
We plot in Fig. 3 contours of constant observational accuracy \(\sigma \) in the model predictivity space \((\varDelta _-,\varDelta _+)\) for \(\ln B = 3.0\) from Eq. (I.4.5), corresponding to odds of 20:1 in favor of a cosmological constant (slightly above the “moderate” threshold). The figure can be interpreted as giving the space of extended models that can be significantly disfavored with respect to \(w=-1\) at a given accuracy. The results for the three benchmark models mentioned above (fluid-like, phantom or small departures from \(w=-1\)) are summarized in Table 1. Conversely, we can ask what precision needs to be reached to support \(\varLambda \)CDM at a given level. This is shown in Table 2 for odds 20:1 and 150:1. We see that to rule out a fluid-like model, which also covers the parameter space expected for canonical scalar-field dark energy, we need to reach a precision comparable to the one that the Euclid satellite is expected to attain.
By considering the model \(M_2\) we can also provide a direct link with the target DETF FoM: Let us choose (fairly arbitrarily) a flat probability distribution for the prior, of width \(\varDelta w_0\) and \(\varDelta w_a\) in the darkenergy parameters, so that the value of the prior is \(1/(\varDelta w_0 \varDelta w_a)\) everywhere. Let us assume that the likelihood is Gaussian in \(w_0\) and \(w_a\) and centered on \(\varLambda \)CDM (i.e., the data fully supports \(\varLambda \) as the dark energy).
As above, we need to distinguish different cases depending on the width of the prior. If you accept the argument of the previous section that we expect only a small deviation from \(w=-1\), and set a prior width of order 0.01 on both \(w_0\) and \(w_a\), then the posterior is dominated by the prior, and the ratio will be of order 1 if the future data is compatible with \(w=-1\). Since the precision of the experiment is comparable to the expected deviation, both \(\varLambda \)CDM and evolving dark energy are equally probable (as argued above and shown for model \(M_1\) in Table 1), and we have to wait for a detection of \(w\ne -1\) or a significant further increase in precision (cf. the last row in Table 2).
To summarize, the most direct effect of dynamical dark energy is the modification of the expansion history. We used inflation as a dark-energy prototype to show that the current experimental bounds of \(w \approx -1.0 \pm 0.1\) are not yet sufficient to significantly favor a parameter-free \(\varLambda \)CDM expansion history: we showed that we need to reach percent-level accuracy both to have any chance of observing a deviation of w from \(-1\) if the dark energy is similar to inflation, and because it is at this point that a \(w=-1\) expansion history begins to be favored decisively for prior widths of order 1.
We do not expect to be able to improve our knowledge much with a lower-precision measurement of w, unless dark energy is significantly different from \(w=-1\) either at late times or, for example, owing to a significant early-dark-energy component (Pettorino et al. 2013). A large deviation would be the preferred situation for Euclid, as then we would be able to observe the evolution of dark energy rather than just a fixed state, which would be much more revealing. However, even if the expansion history matches that of \(\varLambda \)CDM to some arbitrary precision, this does not imply that the cosmological constant is accelerating the universe. Even in such configurations a large amount of freedom exists, which can then only be tested by investigating the evolution of large-scale structure, to which we now turn.
2.4.4 Dark energy: linear perturbations and growth rate
Without a given model for dark energy, the evolution of its perturbations is not determined by the background expansion history. As we have explained in Sect. I.4.1, the cosmological constant is the only form of dark energy which does not carry fluctuations at all, with all dynamical DE models clustering to a larger or smaller extent. Since both dark matter and dark energy must interact (at least) through gravity, the existence of these fluctuations would alter the geodesics on which pressureless dark matter moves and therefore also change the clustering history of the dark matter. This implies that the appropriate evolution history for perturbations is another consistency check that \(\varLambda \)CDM must satisfy over and above the matching background expansion history. Predicting this evolution requires specifying:

the field content of the darkenergy sector

the initial conditions for the fluctuations

either the initial conditions for the DE background configuration and subsequent evolution or a measurement of the background expansion history as discussed in Sect. I.4.3

the rules for evolving the fluctuations (i.e., the model of dark energy)
Typically, a vector or tensor degree of freedom will contain all the lower helicities and therefore source all the perturbations of lower spin. For example, a vector dark energy (e.g., Einstein-Aether or TeVeS), will in general source both vector and scalar perturbations. Higher-spin perturbations would affect polarization predictions for the CMB if the dark energy contributed a significant part of the energy density during recombination (Lim 2005), but otherwise are unconstrained and appear largely uninvestigated in the literature, where most attention is paid to scalar modes even in models containing higher-spin matter. If the dark energy itself contains multiple degrees of freedom, the perturbations will also feature internal modes which do not change any of the potential observables, such as the gravitational potentials, and only affect how they evolve in time.
Each of the new dynamical modes must be given appropriate initial conditions. Typically, they should be set during inflation, where the dark energy plays the role of a spectator field(s). In particular, the dark energy will contribute to the scalar adiabatic mode, which is constant on scales larger than the cosmological horizon. In addition, it will introduce new isocurvature modes with respect to matter and radiation. In general, these only decay if the dark energy interacts with the other matter components and equilibrates, in particular if the dark energy features a tracker in its evolution history (Malquarti and Liddle 2002). These isocurvature modes affect the CMB and are strongly constrained, but again only if the dark energy is a significant fraction of total energy density during recombination, such as in early dark energy models. Otherwise, the isocurvature modes do not become relevant until the late universe where they can affect structure formation or at least the magnitude of the ISW effect on the CMB (Gordon and Hu 2004). If the dark energy is not coupled to the inflaton and is not involved in reheating, the isocurvature modes are likely to be statistically uncorrelated.
In practice, for the purpose of the late universe, the assumption is made that the isocurvature modes are not present and only the scalar adiabatic mode is considered. Let us take this point of view for the remainder of this section. Therefore what we discuss now are the possible rules for evolving the linear scalar perturbations.
In \(\varLambda \)CDM, the dark-matter density perturbation \(\varDelta _M\) and the gravitational potential \(\varPhi \) are related through a constraint, if we ignore the other components in the universe, such as baryons and radiation. In dynamical dark-energy models with a single additional degree of freedom, the \(\varDelta _M\) equation is one of two coupled second-order differential equations, with time- and scale-dependent coefficients determined by the model.
Let us now discuss under what circumstances the two functions Q and \(\eta \) deviate from their \(\varLambda \)CDM values considerably and therefore would presumably significantly change observables.
I.4.4.1 Anisotropic stress: \(\eta \ne 1\)
A deviation of \(\eta \) from 1 results from anisotropic stress at first order in perturbations of the fluid. This occurs in the early universe owing to relativistic neutrinos, but is negligible at late times. Note that at second order in perturbations anisotropic stress is always present (e.g., Ballesteros et al. 2012).
The existence of anisotropic stress is a frame-dependent question. Models such as f(R) gravity, which exhibit anisotropic stress, can be redefined through a frame transformation to have none. In addition to specifying a model, we must therefore fix a frame in order to discuss this properly. The natural frame to pick is the Jordan frame of the baryons. This is defined as the frame related to the metric on the geodesics of which visible matter propagates when in free fall. In many modified-gravity models, this is also the Jordan frame for dark matter, i.e., gravity-like forces couple to just the EMT, irrespective of species. All observations of perturbations to be performed by Euclid are those of the motion of galaxies and light which directly probes the Jordan-frame metric for these species.
On the other hand, many coupled dark energy models are constructed to be very similar to f(R), but introduce a split between the dark matter and visible matter frames. When the visible matter is subdominant gravitationally, the growth of dark matter perturbations in these two classes of models should be very similar. However, all the measurements are always performed through the galaxies and weak lensing and therefore observations are different; in particular, there is no anisotropic stress in CDE models (Motta et al. 2013).
When dealing with multiple degrees of freedom, it is in principle possible to tune them in such a way that the timevariation of the Planck mass cancels out and therefore there would be no anisotropic stress despite nonminimal coupling to gravity in the baryon Jordan frame. However, if the action for the two degrees of freedom is of a different model class, it is not clear whether it is possible to perform this cancellation during more than one era of the evolution of the universe, e.g., matter domination (see Saltas and Kunz 2011 for the case of f(R, G) gravity^{4}).
I.4.4.2 Clustering: \(Q\ne 1\)
The implication of DE clustering, \(Q\ne 1\), is that the dark matter perturbations are dressed with dark-energy perturbations. This means that the effect of some particular density of matter is to curve space differently than in \(\varLambda \)CDM. On scales where Q is a constant, the dark-energy distribution follows that of the dark matter precisely. \(QG_\text {N}\) acts as an effective Newton's constant for nonrelativistic matter.
If the curvature of space sourced by the DM changes, then so does the gravitational force acting on the dark matter. This implies that, given a fixed background expansion history, the growth rate of the perturbations is different; see Sect. I.3.2.
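As a rough numerical illustration (our own sketch, not part of the original text): treating Q as a constant rescaling of Newton's constant in the standard sub-horizon growth equation, on a fixed flat \(\varLambda \)CDM background, one can integrate the growth factor and see directly how \(Q\ne 1\) changes the growth rate. The parameter values below are hand-picked for illustration only.

```python
# Illustrative sketch: linear growth of matter perturbations when the
# effective Newton constant is Q*G_N (Q taken constant here, whereas in
# general it is scale- and time-dependent). Background: flat LCDM.
import numpy as np
from scipy.integrate import solve_ivp

Om0 = 0.3  # assumed present-day matter density parameter

def E2(a):
    """Dimensionless Hubble rate squared, E^2 = H^2/H0^2, flat LCDM."""
    return Om0 / a**3 + (1.0 - Om0)

def growth(a, y, Q):
    """y = (delta, d delta/da); sub-horizon growth equation with the
    Poisson source rescaled by Q."""
    d, dp = y
    dlnE2 = -3.0 * Om0 / a**4 / E2(a)          # d ln(E^2) / da
    ddp = -(3.0 / a + 0.5 * dlnE2) * dp + 1.5 * Q * Om0 * d / (a**5 * E2(a))
    return [dp, ddp]

def growth_factor(Q, a_ini=1e-3):
    # matter-dominated initial conditions: delta ~ a at early times
    sol = solve_ivp(growth, [a_ini, 1.0], [a_ini, 1.0], args=(Q,),
                    rtol=1e-8, atol=1e-10)
    return sol.y[0, -1]

print(growth_factor(1.1) / growth_factor(1.0))  # Q > 1 enhances growth
```

This is a minimal consistency check of the statement above: dressing the matter perturbations with \(Q>1\) deepens the potential wells and speeds up growth relative to \(\varLambda \)CDM with the same expansion history.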
As discussed in Sect. I.4.1, only a cosmological constant is not perturbed at all. Therefore, only in this case do we have \(Q=1\) exactly, up to relativistic corrections near the cosmological horizon, \(k/aH\sim 1\).
In the opposite limit of clustering dark energy, \(c_\text {s}=0\), which is typical of k-essence models such as the ghost condensate (Arkani-Hamed et al. 2004b) or dusty dark energy (Lim et al. 2010), there is no sound horizon, just as for dark-matter dust, and in principle the dark energy clusters. If the background expansion is now very close to \(\varLambda \)CDM (i.e., \(w\approx -1\) and \(c_\text {a}^2\approx 0\)), Eq. (I.4.10) reduces to the standard equation in \(\varLambda \)CDM. If the initial conditions are adiabatic, the evolution of both the potential and the dark-matter density is the same as in \(\varLambda \)CDM, i.e., \(Q=1\) again. Any deviations are purely a result of a different background expansion history. The above implies that, given an expansion history very similar to \(\varLambda \)CDM, dark-energy models comprising a single, minimally coupled degree of freedom do not cluster significantly (Sapone and Kunz 2009) if the initial conditions are adiabatic and there is no anisotropic stress.
For significant clustering, a coupling of dark energy to gravity or dark matter is required, with a strength similar to that of gravity (or possibly to other species, with appropriately stronger couplings to compensate for their relatively smaller energy density). All models which exhibit significant anisotropic stress also cluster significantly, since anisotropic stress is a sign of non-minimal coupling to gravity; see Sect. I.4.4.1. This implies that models such as coupled dark energy also cluster, since they are effectively scalar–tensor models with non-universal couplings to matter.
If the couplings are universal, the most general class of models with \(Q\ne 1\) but no anisotropic stress are the kinetic gravity braiding models of dark energy (Deffayet et al. 2010), which are a class of imperfect fluids (Pujolas et al. 2011). The effective Planck mass is constant in these models; however, there is still a non-minimal coupling to gravity at the level of the equations of motion. This implies that they cluster significantly (Kimura and Yamamoto 2011).
In summary, given a fixed background expansion history close to \(\varLambda \)CDM, the appearance of anisotropic stress is a sign of a modification of the action for gravitational waves in the Jordan frame of the baryons: either a time-varying effective Planck mass (i.e., the normalization scale for the graviton kinetic terms) or a deviation of the speed of gravitational waves from that of light. On the other hand, a detection of significant clustering, resulting in a growth rate that deviates significantly from the \(\varLambda \)CDM one, is a sign of a coupling of the dark energy to gravity or to some of the species, with a strength similar to that of gravity.
2.4.5 Parameterized frameworks for theories of modified gravity
As explained in earlier sections of this report, modified-gravity models cannot be distinguished from dark-energy models by using solely the FLRW background equations. But by comparing the background expansion rate of the universe with observables that depend on linear perturbations of an FLRW spacetime, we can hope to distinguish between these two categories of explanations. An efficient way to do this is via a parameterized, model-independent framework that describes cosmological perturbation theory in modified gravity. We present here one such framework, the parameterized post-Friedmann formalism (Baker et al. 2013)^{5}, which implements possible extensions to the linearized gravitational field equations.
The parameterized post-Friedmann approach (PPF) is inspired by the parameterized post-Newtonian (PPN) formalism (Will and Nordtvedt 1972; Will 1971), which uses a set of parameters to summarize leading-order deviations from the metric of GR. PPN was developed in the 1970s for the purpose of testing alternative gravity theories in the solar system or in binary systems, and is valid in weak-field, low-velocity scenarios. PPN itself cannot be applied to cosmology, because we do not know the exact form of the linearized metric for our Hubble volume. Furthermore, PPN can only test for constant deviations from GR, whereas the cosmological data we collect contain inherent redshift dependence.
For these reasons the PPF framework is a parameterization of the gravitational field equations (instead of the metric) in terms of a set of functions of redshift. A theory of modified gravity can be analytically mapped onto these PPF functions, which in turn can be constrained by data.
In principle there could also be new terms containing matter perturbations on the RHS of Eq. (I.4.12). However, for theories that maintain the weak equivalence principle—i.e., those with a Jordan frame where matter is uncoupled to any new fields—these matter terms can be eliminated in favor of additional contributions to \(\delta U_{\mu \nu }^{\mathrm {metric}}\) and \(\delta U_{\mu \nu }^{\mathrm {d.o.f.}}\).
The tensor \(\delta U_{\mu \nu }^{\mathrm {metric}}\) is then expanded in terms of two gauge-invariant perturbation variables \({\hat{\varPhi }}\) and \({\hat{\varGamma }}\). \({\hat{\varPhi }}\) is one of the standard gauge-invariant Bardeen potentials, while \({\hat{\varGamma }}\) is the following combination of the Bardeen potentials: \({\hat{\varGamma }}=(1/k)(\dot{{\hat{\varPhi }}}+\mathcal{H}{\hat{\varPsi }})\). We use \({\hat{\varGamma }}\) instead of the usual Bardeen potential \({\hat{\varPsi }}\) because \({\hat{\varGamma }}\) has the same derivative order as \({\hat{\varPhi }}\) (whereas \({\hat{\varPsi }}\) does not). We then deduce that the only possible structure of \(\delta U_{\mu \nu }^{\mathrm {metric}}\) that maintains the gauge invariance of the field equations is a linear combination of \({\hat{\varPhi }}\), \({\hat{\varGamma }}\) and their derivatives, multiplied by functions of the cosmological background (see Eqs. (I.4.13)–(I.4.17) below).
\(\delta U_{\mu \nu }^{\mathrm {d.o.f.}}\) is similarly expanded in a set of gauge-invariant potentials \(\{\hat{\chi }_i\}\) that contain the new degrees of freedom. Baker et al. (2013) presented an algorithm for constructing the relevant gauge-invariant quantities in any theory.
The final terms in Eqs. (I.4.13)–(I.4.16) are present to ensure the gauge invariance of the modified field equations, as is required for any theory governed by a covariant action. The quantities \(M_\varDelta \), \(M_\varTheta \) and \(M_P\) are all predetermined functions of the background. \(\epsilon \) and \(\nu \) are offdiagonal metric perturbations, so these terms vanish in the conformal Newtonian gauge. The gaugefixing terms should be regarded as a piece of mathematical bookkeeping; there is no constrainable freedom associated with them.
Let us make a comment about the number of coefficient functions employed in the PPF formalism. One may justifiably question whether the number of unknown functions in Eqs. (I.4.13)–(I.4.17) could ever be constrained. In reality, the PPF coefficients are not all independent. The form shown above represents a fully agnostic description of the extended field equations. However, as one begins to impose restrictions in theory space (even the simple requirement that the modified field equations must originate from a covariant action), constraint relations between the PPF coefficients begin to emerge. These constraints remove freedom from the parameterization.
Even so, degeneracies will exist between the PPF coefficients. It is likely that a subset of them can be well-constrained, while another subset may have relatively little impact on current observables and so cannot be tested. In this case it is justifiable to drop the untestable terms. Note that this realization, in itself, would be an interesting statement—that there are parts of the gravitational field equations that are essentially unknowable.
Finally, we note that there is also a completely different, complementary approach to parameterizing modifications to gravity. Instead of parameterizing the linearized field equations, one could choose to parameterize the perturbed gravitational action. This approach has been used recently to apply the standard techniques of effective field theory to modified gravity; see Battye and Pearson (2012), Bloomfield et al. (2013), Gubitosi et al. (2013) and references therein.
2.5 Models of dark energy and modified gravity
In this section we review a number of popular models of dynamical DE and MG. This section is more technical than the rest and is meant to provide a quick but self-contained review of current research on the theoretical foundations of DE models. The selection of models is of course somewhat arbitrary, but we have tried to cover the most well-studied cases and those that introduce new and interesting observable phenomena.
2.5.1 Quintessence
In this review, we refer to scalar field models with canonical kinetic energy in Einstein's gravity as "quintessence models". Scalar fields are obvious candidates for dark energy, as they are for the inflaton, for many reasons: they are the simplest fields since they lack internal degrees of freedom, do not introduce preferred directions, are typically weakly clustered (as discussed later on), and can easily drive an accelerated expansion. If the kinetic energy has a canonical form, the only freedom is then provided by the field potential (and of course by the initial conditions). The typical requirement is that the potential is flat enough to lead to slow-roll acceleration today, with an energy scale \(\rho _{\mathrm {DE}}\simeq 10^{-123}m_{\mathrm {pl}}^{4}\) and a mass scale \(m_{\phi }\lesssim 10^{-33}\mathrm {\ eV}\).
Quintessence models are the prototypical DE models (Caldwell et al. 1998) and as such are the most studied ones. Since they have been explored in many reviews of DE, we limit ourselves here to a few remarks.^{6}
During radiation- or matter-dominated epochs, the energy density \(\rho _{M}\) of the fluid dominates over that of quintessence, i.e., \(\rho _{M}\gg \rho _{\phi }\). If the potential is steep, so that the condition \(\dot{\phi }^{2}/2\gg V(\phi )\) is always satisfied, the field equation of state is given by \(w_{\phi }\simeq 1\) from Eq. (I.5.6). In this case the energy density of the field evolves as \(\rho _{\phi }\propto a^{-6}\), which decreases much faster than the background fluid density.
However, in order to study the evolution of the perturbations of a quintessence field it is not even necessary to compute the field evolution explicitly. Rewriting the perturbation equations of the field in terms of the perturbations of the density contrast \(\delta _\phi \) and the velocity \(\theta _\phi \) in the conformal Newtonian gauge, one finds (see, e.g., Kunz and Sapone 2006, "Appendix A" section) that they correspond precisely to those of a fluid, (I.3.17) and (I.3.18), with \(\pi =0\) and \(\delta p = c_s^2 \delta \rho + 3 a H (c_s^2-c_a^2) (1+w) \rho \theta /k^2\), with \(c_s^2=1\). The adiabatic sound speed, \(c_a\), is defined in Eq. (I.3.36). The large sound speed, equal to the speed of light, means that quintessence models do not cluster significantly inside the horizon (see Sapone and Kunz 2009; Sapone et al. 2010, and Sect. I.8.6 for a detailed analytical discussion of quintessence clustering and its detectability with future probes, for arbitrary \(c_s^2\)).
Many quintessence potentials have been proposed in the literature. A simple, crude classification divides them into two classes: (i) "freezing" models and (ii) "thawing" models (Caldwell and Linder 2005). In class (i) the field was rolling along the potential in the past, but its movement gradually slows down after the system enters the phase of cosmic acceleration. Representative potentials in this class are

\(V(\phi )=M^{4+n}\phi ^{-n}\quad (n>0)\),

\(V(\phi )=M^{4+n}\phi ^{-n}\exp \left( \alpha \phi ^{2}/m_{\mathrm {pl}}^{2}\right) \).
In thawing models (ii), the field (with mass \(m_{\phi }\)) has been frozen by Hubble friction [i.e., by the term \(H\dot{\phi }\) in Eq. (I.5.9)] until recently, and begins to evolve once H drops below \(m_{\phi }\). The equation of state of DE is \(w_{\phi }\simeq -1\) at early times, followed by a growth of \(w_{\phi }\). Representative potentials in this class are

\(V(\phi )=V_{0}+M^{4-n}\phi ^{n}\quad (n>0)\),

\(V(\phi )=M^{4}\cos ^{2}(\phi /f)\).
Potentials can also be classified in several other ways, e.g., on the basis of the existence of special solutions. For instance, tracker solutions have approximately constant \(w_{\phi }\) and \(\varOmega _{\phi }\) along special attractors: a wide range of initial conditions converges to a common cosmic evolutionary tracker. Early-DE models instead admit solutions in which DE was non-negligible even at last scattering. While in the specific Euclid forecasts of Sect. I.8 we will not explicitly consider these models, it is worth noting that the combination of observations of the CMB and of large-scale structure (such as Euclid) can dramatically constrain these models, improving the inverse-area figure of merit compared to current constraints, as discussed in Huterer and Peiris (2007).
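The thawing behavior described above is easy to reproduce numerically. The following sketch (our own toy example, not from the text, with hand-picked parameters in units \(8\pi G=1\), \(H_0\simeq 1\)) integrates the background Klein–Gordon and Friedmann equations for \(V(\phi )=V_{0}+\frac{1}{2}m^{2}\phi ^{2}\), the \(n=2\) case of the first thawing potential: the field sits frozen with \(w_{\phi }\simeq -1\) while \(H\gg m_{\phi }\), and \(w_{\phi }\) grows once H drops below \(m_{\phi }\).

```python
# Toy thawing-quintessence background, V = V0 + (1/2) m^2 phi^2.
# Units: 8*pi*G = 1, H0 ~ 1. V0, m, phi_i, rho_m0 are illustrative
# values chosen by hand, not fitted parameters.
import numpy as np
from scipy.integrate import solve_ivp

V0, m2, phi_i = 2.0, 4.0, 0.3   # toy values (scalar mass m = 2 H0)
rho_m0 = 0.9                    # ~ 0.3 * 3 H0^2 in matter today

def V(phi):  return V0 + 0.5 * m2 * phi**2
def Vp(phi): return m2 * phi

def H2(N, phi, dphi):
    """Friedmann constraint: 3 H^2 = rho_m + (1/2) H^2 phi'^2 + V."""
    rho_m = rho_m0 * np.exp(-3.0 * N)
    return (rho_m + V(phi)) / (3.0 - 0.5 * dphi**2)

def rhs(N, y):
    """N = ln a; y = (phi, dphi/dN). Klein-Gordon in e-folds."""
    phi, dphi = y
    h2 = H2(N, phi, dphi)
    rho_m = rho_m0 * np.exp(-3.0 * N)
    dlnH = -0.5 * (rho_m / h2 + dphi**2)     # H'/H
    return [dphi, -(3.0 + dlnH) * dphi - Vp(phi) / h2]

def w_phi(N, y):
    phi, dphi = y
    K = 0.5 * H2(N, phi, dphi) * dphi**2     # kinetic energy density
    return (K - V(phi)) / (K + V(phi))

sol = solve_ivp(rhs, [-7.0, 0.0], [phi_i, 0.0], dense_output=True,
                rtol=1e-9, atol=1e-12)
w_early = w_phi(-6.0, sol.sol(-6.0))
w_today = w_phi(0.0, sol.sol(0.0))
print(w_early, w_today)   # frozen (w ~ -1) early; w grows at late times
```

The same integrator applied to the freezing potentials above would instead show \(w_{\phi }\) decreasing toward \(-1\) at late times.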
2.5.2 K-essence
2.5.3 Coupled darkenergy models
A first class of models in which dark energy shows dynamics, in connection with the presence of a fifth force different from gravity, is that of ‘interacting dark energy’: we consider the possibility that dark energy, seen as a dynamical scalar field, may interact with other components in the universe. This class of models effectively falls into the “explicit modified gravity models” of the classification above, because the gravitational attraction between dark-matter particles is modified by the presence of a fifth force. However, we note that the anisotropic stress for DE is still zero in the Einstein frame, while it is, in general, non-zero in the Jordan frame. In some cases (when a universal coupling is present) such an interaction can be explicitly recast as a non-minimal coupling to gravity, after a redefinition of the metric and matter fields (Weyl scaling). We would like to identify whether interactions (couplings) of dark energy with matter fields, neutrinos or gravity itself can affect the universe in an observable way. We will consider, in particular, the following cases:
 1. couplings between dark energy and baryons;
 2. couplings between dark energy and dark matter (coupled quintessence);
 3. couplings between dark energy and neutrinos (growing neutrinos, MaVaNs);
 4. universal couplings with all species (scalar–tensor theories and f(R)).
In the case of coupled quintessence, the coupling induces, for the coupled (dark-matter) particles:
 1. a fifth force \(\varvec{\nabla } \left[ \varPhi _\alpha + \beta \phi \right] \), with an effective gravitational constant \(\tilde{G}_{\alpha } = G_{N}[1+2\beta ^2(\phi )]\);
 2. a velocity-dependent term \(\tilde{H}\mathbf {v}_{\alpha } \equiv H \left( 1 - {\beta (\phi )} \frac{\dot{\phi }}{H}\right) \mathbf {v}_{\alpha }\);
 3. a time-dependent mass for each coupled particle \(\alpha \), evolving according to Eq. (I.5.25).
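As a rough, self-contained illustration (ours, not part of the original text) of how the fifth force above speeds up structure growth: during matter domination the Poisson source for the coupled CDM is rescaled by \(1+2\beta ^2\). Keeping only this enhancement, and deliberately dropping the \(O(\beta ^2)\) friction and background corrections of the full coupled-quintessence equations, the growth equation in \(N=\ln a\) admits power-law solutions \(\delta \propto a^{p}\):

```python
# Schematic order-of-magnitude sketch, NOT the full coupled-quintessence
# growth equation: only the fifth-force enhancement (1 + 2 beta^2) of
# the Poisson source is kept; O(beta^2) friction and background
# corrections are dropped for brevity.
import numpy as np

def growth_exponent(beta):
    """delta ~ a^p during matter domination, from the characteristic
    equation p^2 + p/2 - (3/2)(1 + 2 beta^2) = 0 (in N = ln a)."""
    return (-0.5 + np.sqrt(0.25 + 6.0 * (1.0 + 2.0 * beta**2))) / 2.0

print(growth_exponent(0.0))    # -> 1.0, the standard delta ~ a
print(growth_exponent(0.06))   # upper end of the beta range: faster growth
```

Even for the largest coupling currently allowed, \(\beta \simeq 0.06\), the effect on the matter-era growth exponent is at the sub-percent level, which is why the tighter constraints must come from the full CMB and LSS analyses cited below.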
I.5.3.1 Dark energy and baryons
A coupling between dark energy and baryons is active when the baryon mass is a function of the dark-energy scalar field: \(m_b = m_b(\phi )\). Such a coupling is constrained to be very small: the main bounds come from tests of the equivalence principle and solar-system constraints (Bertotti et al. 2003). More generally, depending on the coupling, bounds on the variation of fundamental constants over cosmological timescales may also have to be considered (Marra and Rosati 2005; Dent et al. 2008, 2009; Martins et al. 2010, and references therein). It is presumably very difficult to obtain significant cosmological effects from a coupling to baryons only. However, uncoupled baryons can still play a role in the presence of a coupling to dark matter (see Sect. I.6 on nonlinear aspects).
I.5.3.2 Dark energy and dark matter
An interaction between dark energy and dark matter (CDM) is active when the CDM mass is a function of the dark-energy scalar field: \(m_c = m_c(\phi )\). In this case the coupling is not constrained by tests of the equivalence principle or solar-system bounds and can therefore be stronger than the one with baryons. One may argue that dark-matter particles are themselves coupled to baryons, which would lead, through quantum corrections, to a direct coupling between dark energy and baryons. The strength of such couplings can still be small, and was discussed in Dent et al. (2009) for the case of neutrino–dark-energy couplings. Also, quantum corrections are often invoked as spoiling the flatness of a quintessence potential. However, it may be misleading to calculate quantum corrections up to a cutoff scale, as contributions above the cutoff can possibly compensate terms below the cutoff, as discussed in Wetterich (2008).
Typical values of \(\beta \) presently allowed by observations (within current CMB data) are within the range \(0< \beta < 0.06\) (at 95% CL for a constant coupling and an exponential potential) (Bean et al. 2008b; Amendola et al. 2003b; Amendola 2004; Amendola and Quercellini 2003), or possibly more (La Vacca et al. 2009; Kristiansen et al. 2010) if neutrinos are taken into account or for more realistic time-dependent choices of the coupling. This framework is generally referred to as ‘coupled quintessence’ (CQ). Various choices of couplings have been investigated in the literature, including constant and varying \(\beta (\phi )\) (Amendola 2000a; Mangano et al. 2003; Amendola 2004; Koivisto 2005; Guo et al. 2007; Quartin et al. 2008; Quercellini et al. 2008; Pettorino and Baccigalupi 2008; Gannouji et al. 2010; Pourtsidou et al. 2013), as well as within a PPF formalism (Skordis et al. 2015).
The presence of a coupling (and therefore of a fifth force acting among dark-matter particles) modifies the background expansion and linear perturbations (Amendola 2000b, a, 2004), thereby affecting the CMB and the cross-correlation of CMB and LSS (Amendola and Quercellini 2003; Amendola 2004; Amendola et al. 2003b; Amendola and Quercellini 2004; Bean et al. 2008b; La Vacca et al. 2009; Kristiansen et al. 2010; Xia 2009; Mainini and Mota 2012; Amendola et al. 2011).
Furthermore, structure formation itself is modified (Macciò et al. 2004; Manera and Mota 2006; Koivisto 2005; Mainini and Bonometto 2006; Sutter and Ricker 2008; Abdalla et al. 2009; Mota 2008; Bertolami et al. 2009; Wintergerst and Pettorino 2010; Baldi et al. 2010; Baldi 2011b, a; Baldi and Pettorino 2011; Baldi and Viel 2010; Li et al. 2011; Li and Barrow 2011b; Zhao et al. 2010; Marulli et al. 2012).
An alternative approach has also been investigated in the literature (Mangano et al. 2003; Väliviita et al. 2008, 2010; Majerotto et al. 2010; Gavela et al. 2009, 2010; CalderaCabral et al. 2009b; Schaefer et al. 2008; CalderaCabral et al. 2009a), in which the authors take Eq. (I.5.21) as a starting point: the coupling is then introduced by choosing a covariant stress–energy tensor directly on the RHS of the equation, treating dark energy as a fluid, in the absence of an underlying action. The advantage of this approach is that a good parameterization allows one to investigate several models of dark energy at the same time. Problems connected to instabilities of some parameterizations, or to the definition of a physically motivated sound speed for the density fluctuations, are discussed in Väliviita et al. (2008). It is also possible to take both a covariant form for the coupling and a quintessence dark-energy scalar field, starting again directly from Eq. (I.5.21). This has been done, e.g., in Boehmer et al. (2008) and Boehmer et al. (2010). At the background level only, Chimento et al. (2003), Chimento and Pavon (2006), del Campo et al. (2006), and Olivares et al. (2006) have also considered which background constraints can be obtained when starting from a fixed present ratio of dark energy and dark matter. The disadvantage of this approach is that it is not clear how to perturb a coupling that has been defined as a background quantity.
A Yukawa-like interaction was also investigated (Farrar and Peebles 2004; Das et al. 2006), pointing out that coupled dark energy behaves as a fluid with an effective equation of state \(w \lesssim -1\), while remaining well defined and free of ghosts (Das et al. 2006).
For an illustration of observable effects related to the dark-energy–dark-matter interaction, see also Sect. II.10 of this report.
I.5.3.3 Dark energy and neutrinos
A coupling between dark energy and neutrinos can be even stronger, relative to gravitational strength, than the one with dark matter. Typical values of \(\beta \) are of order 50–100 or even more, such that even the small fraction of cosmic energy density in neutrinos can have a substantial influence on the time evolution of the quintessence field. In this scenario neutrino masses change in time, depending on the value of the dark-energy scalar field \(\phi \). Such a coupling has been investigated within MaVaNs (Fardon et al. 2004; Peccei 2005; Bi et al. 2005; Afshordi et al. 2005; Weiner and Zurek 2006; Das and Weiner 2011; Takahashi and Tanimoto 2006; Spitzer 2006; Bjælde et al. 2008; Brookfield et al. 2006a, b) and more recently within growing neutrino cosmologies (Amendola et al. 2008a; Wetterich 2007; Mota et al. 2008; Wintergerst et al. 2010; Wintergerst and Pettorino 2010; Pettorino et al. 2010; Brouzakis et al. 2011; Baldi et al. 2011). In this latter case, DE properties are related to the neutrino mass and to a cosmological event, i.e., neutrinos becoming non-relativistic. This leads to the formation of stable neutrino lumps (Mota et al. 2008; Wintergerst et al. 2010; Baldi et al. 2011) at very large scales only (\(\sim \) 100 Mpc and beyond), as well as to signatures in the CMB spectra (Pettorino et al. 2010). For an illustration of observable effects related to this case see Sect. II.8 of this report.
I.5.3.4 Scalar–tensor theories
Scalar–tensor theories (Wetterich 1988; Hwang 1990a, b; Damour et al. 1990; Casas et al. 1991, 1992; Wetterich 1995; Uzan 1999; Perrotta et al. 2000; Faraoni 2000; Boisseau et al. 2000; Riazuelo and Uzan 2002; Perrotta and Baccigalupi 2002; Schimd et al. 2005; Matarrese et al. 2004; Pettorino et al. 2005a, b; Capozziello et al. 2007; Appleby and Weller 2010) extend GR by introducing a non-minimal coupling between a scalar field (acting also as dark energy) and the metric tensor (gravity); they are also sometimes referred to as ‘extended quintessence’. We include scalar–tensor theories among ‘interacting cosmologies’ because, via a Weyl transformation, they are equivalent to a GR framework (minimal coupling to gravity) in which the dark-energy scalar field \(\phi \) is coupled (universally) to all species (Wetterich 1988; Maeda 1989; Wands 1994; Esposito-Farèse and Polarski 2001; Pettorino and Baccigalupi 2008; Catena et al. 2007). In other words, these theories correspond to the case where, in action (I.5.20), the mass of all species (baryons, dark matter, ...) is a function \(m=m(\phi )\), with the same coupling for every species \(\alpha \). Indeed, a description of the coupling via an action such as (I.5.20) was originally motivated by extensions of GR such as scalar–tensor theories. Typically the strength of the scalar-mediated interaction is required to be orders of magnitude weaker than gravity (see Lee 2010; Pettorino et al. 2005a, and references therein, for recent constraints). It is possible to tune this coupling to be as small as required, for example by choosing a suitably flat potential \(V(\phi )\) for the scalar field. However, this leads back to naturalness and fine-tuning problems.
In Sects. I.5.4 and I.5.5, we will discuss in more detail a number of ways in which new scalar degrees of freedom can naturally couple to standard model fields, while still being in agreement with observations. We mention here only that the presence of chameleon mechanisms (Brax et al. 2004, 2008, 2010a; Mota and Winther 2011; Mota and Shaw 2007; Hui et al. 2009; Davis et al. 2012b) can, for example, modify the coupling depending on the environment. In this way, a small (screened) coupling in high-density regions, in agreement with observations, is still compatible with a larger coupling (\(\beta \sim 1\)) active in low-density regions. In other words, a dynamical mechanism ensures that the effects of the coupling are screened in laboratory and solar-system tests of gravity.
2.5.4 f(R) gravity

(I) The metric formalism
The first approach is the metric formalism, in which the connections \(\varGamma _{\beta \gamma }^{\alpha }\) are the usual connections defined in terms of the metric \(g_{\mu \nu }\). The field equations can be obtained by varying the action (I.5.28) with respect to \(g_{\mu \nu }\):
$$\begin{aligned} F(R)R_{\mu \nu }(g)-\frac{1}{2}f(R)g_{\mu \nu }-\nabla _{\mu }\nabla _{\nu }F(R)+g_{\mu \nu }\square F(R)=\kappa ^{2}T_{\mu \nu }, \end{aligned}$$(I.5.29)
where \(F(R)\equiv \partial f/\partial R\) (we also use the notation \(f_{,R}\equiv \partial f/\partial R\), \(f_{,RR}\equiv \partial ^{2}f/\partial R^{2}\)), and \(T_{\mu \nu }\) is the matter energy–momentum tensor. The trace of Eq. (I.5.29) is given by
$$\begin{aligned} 3\,\square F(R)+F(R)R-2f(R)=\kappa ^{2}T, \end{aligned}$$(I.5.30)
where \(T=g^{\mu \nu }T_{\mu \nu }=-\rho +3P\). Here \(\rho \) and P are the energy density and the pressure of the matter, respectively.
(II) The Palatini formalism
The second approach is the Palatini formalism, where \(\varGamma _{\beta \gamma }^{\alpha }\) and \(g_{\mu \nu }\) are treated as independent variables. Varying the action (I.5.28) with respect to \(g_{\mu \nu }\) gives
$$\begin{aligned} F(R)R_{\mu \nu }(\varGamma )-\frac{1}{2}f(R)g_{\mu \nu }=\kappa ^{2}T_{\mu \nu }, \end{aligned}$$(I.5.31)
where \(R_{\mu \nu }(\varGamma )\) is the Ricci tensor corresponding to the connections \(\varGamma _{\beta \gamma }^{\alpha }\). In general this is different from the Ricci tensor \(R_{\mu \nu }(g)\) corresponding to the metric connections. Taking the trace of Eq. (I.5.31), we obtain
$$\begin{aligned} F(R)R-2f(R)=\kappa ^{2}T, \end{aligned}$$(I.5.32)
where \(R(T)=g^{\mu \nu }R_{\mu \nu }(\varGamma )\) is directly related to T. Taking the variation of the action (I.5.28) with respect to the connection, and using Eq. (I.5.31), we find
$$\begin{aligned} R_{\mu \nu }(g)-\frac{1}{2}g_{\mu \nu }R(g)= & {} \frac{\kappa ^{2}T_{\mu \nu }}{F}-\frac{FR(T)-f}{2F}g_{\mu \nu } +\frac{1}{F}(\nabla _{\mu }\nabla _{\nu }F-g_{\mu \nu }\square F)\nonumber \\&-\frac{3}{2F^{2}}\left[ \partial _{\mu }F\partial _{\nu }F -\frac{1}{2}g_{\mu \nu }(\nabla F)^{2}\right] . \end{aligned}$$(I.5.33)
In modified gravity models where F(R) is a function of R, the term \(\square F(R)\) does not vanish in Eq. (I.5.30). This means that, in the metric formalism, there is a propagating scalar degree of freedom, \(\psi \equiv F(R)\). The trace Eq. (I.5.30) governs the dynamics of the scalar field \(\psi \)—dubbed “scalaron” (Starobinsky 1980). In the Palatini formalism the kinetic term \(\square F(R)\) is not present in Eq. (I.5.32), which means that the scalarfield degree of freedom does not propagate freely (Amarzguioui et al. 2006; Li et al. 2007, 2009, 2008b).
It is important to realize that the dynamics of f(R) dark-energy models differs in the two formalisms. Here we confine ourselves to the metric case only; details of a viable model unifying the metric and Palatini formalisms can be found in Harko et al. (2012).
Already in the early 1980s, it was known that the model \(f(R)=R+\alpha R^{2}\) can be responsible for inflation in the early universe (Starobinsky 1980). This comes from the fact that the presence of the quadratic term \(\alpha R^{2}\) gives rise to an asymptotically exact de Sitter solution. Inflation ends when the term \(\alpha R^{2}\) becomes smaller than the linear term R. Since the term \(\alpha R^{2}\) is negligibly small relative to R at the present epoch, this model is not suitable for realizing the present cosmic acceleration.
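The scalar degree of freedom driving this inflationary phase can be made explicit with a short derivation (standard, and added here for clarity) from the trace equation (I.5.30). With \(f(R)=R+\alpha R^{2}\), one has \(F(R)=1+2\alpha R\), so that
$$\begin{aligned} 3\,\square F+F(R)R-2f(R)=6\alpha \,\square R+(1+2\alpha R)R-2(R+\alpha R^{2})=6\alpha \,\square R-R=\kappa ^{2}T, \end{aligned}$$
i.e.,
$$\begin{aligned} \square R-\frac{R}{6\alpha }=\frac{\kappa ^{2}T}{6\alpha }, \end{aligned}$$
so the scalaron obeys a massive wave equation with \(m^{2}=1/(6\alpha )\): the quadratic correction dominates, and drives a quasi-de Sitter phase, only at curvatures \(R\gg 1/\alpha \), far above those of the present universe.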
Since a late-time acceleration requires a modification for small R, models of the type \(f(R)=R-\alpha /R^{n}\) (\(\alpha>0,\,n>0\)) were proposed as candidates for dark energy (Capozziello 2002; Carroll et al. 2004; Nojiri and Odintsov 2003). While the late-time cosmic acceleration is possible in these models, it has become clear that they do not satisfy local gravity constraints because of the instability associated with negative values of \(f_{,RR}\) (Chiba 2003; Dolgov and Kawasaki 2003; Soussa and Woodard 2004; Olmo 2005; Faraoni 2006). Moreover, a standard matter epoch is absent because of a large coupling between the Ricci scalar and non-relativistic matter (Amendola et al. 2007b).

(i) \(f_{,R}>0\) for \(R\ge R_{0}~(>0)\), where \(R_{0}\) is the Ricci scalar at the present epoch. Strictly speaking, if the final attractor is a de Sitter point with the Ricci scalar \(R_{1}~(>0)\), then the condition \(f_{,R}>0\) needs to hold for \(R\ge R_{1}\).
This is required to avoid a negative effective gravitational constant.

(ii) \(f_{,RR}>0\) for \(R\ge R_{0}\).
This is required for consistency with local gravity tests (Dolgov and Kawasaki 2003; Olmo 2005; Faraoni 2006; Navarro and Van Acoleyen 2007), for the presence of the matterdominated epoch (Amendola et al. 2007b, a), and for the stability of cosmological perturbations (Carroll et al. 2006; Song et al. 2007a; Bean et al. 2007; Faulkner et al. 2007).

(iii) \(f(R)\rightarrow R-2\varLambda \) for \(R\gg R_{0}\).
This is required for consistency with local gravity tests (Amendola and Tsujikawa 2008; Hu and Sawicki 2007a; Starobinsky 2007; Appleby and Battye 2007; Tsujikawa 2008) and for the presence of the matterdominated epoch (Amendola et al. 2007a).

(iv) \(0<\frac{Rf_{,RR}}{f_{,R}}(r=-2)<1\) at \(r=-\frac{Rf_{,R}}{f}=-2\).
This is required for the stability of the latetime de Sitter point (Müller et al. 1988; Amendola et al. 2007a).
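As a small numerical aside (ours, not from the text), the local conditions (i) and (ii) can be checked mechanically for any candidate f(R). Applied to the model \(f(R)=R-\alpha /R^{n}\) discussed earlier, this immediately exposes the violation of condition (ii), i.e., the \(f_{,RR}<0\) instability:

```python
# Check viability conditions (i) f_R > 0 and (ii) f_RR > 0 for the toy
# model f(R) = R - alpha/R^n (alpha, n > 0). Units of R are arbitrary;
# alpha and n are illustrative values.
import numpy as np

alpha, n = 1.0, 1.0   # assumed toy parameters

def f(R):    return R - alpha / R**n
def f_R(R):  return 1.0 + n * alpha / R**(n + 1)          # df/dR
def f_RR(R): return -n * (n + 1) * alpha / R**(n + 2)     # d^2f/dR^2

R = np.logspace(0, 3, 50)            # sample R >= R0 ~ O(1)
cond_i  = bool(np.all(f_R(R) > 0))   # satisfied
cond_ii = bool(np.all(f_RR(R) > 0))  # violated: f_RR < 0 everywhere
print(cond_i, cond_ii)               # -> True False
```

The analytic derivatives above confirm the statement in the text: for any \(\alpha ,n>0\) this model has \(f_{,RR}<0\) at all curvatures, so no choice of parameters can rescue condition (ii).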
Euclid forecasts for the f(R) models will be presented in Sect. I.8.7.
2.5.5 Massive gravity and higherdimensional models
Instead of introducing new scalar degrees of freedom as in f(R) theories, another philosophy in modifying gravity is to modify the graviton itself. In this case the new degrees of freedom belong to the gravitational sector; examples include massive gravity and higher-dimensional frameworks, such as the Dvali–Gabadadze–Porrati (DGP) model (Dvali et al. 2000) and its extensions. The new degrees of freedom can be responsible for a late-time acceleration of the universe, as summarized below for a selection of models. We note here that while such self-accelerating solutions are interesting in their own right, they do not tackle the old cosmological constant problem: why the observed cosmological constant is so much smaller than expected in the first place. Instead of answering this question directly, an alternative approach is the idea of degravitation (see Dvali et al. 2002, 2003; Arkani-Hamed et al. 2002; Dvali et al. 2007), where the cosmological constant could be as large as expected from standard field theory, but would simply gravitate very little (see the paragraph on degravitation in Sect. I.5.5.2 below).
I.5.5.1 Infrared modifications of gravity
Infrared modifications of gravity are of great interest for cosmology as they can affect the evolution of the Universe in two different ways, via self-acceleration and via degravitation, as illustrated below.
Self-acceleration
The first interest in modifications of gravity is the possibility of self-acceleration, where the late-time acceleration of the Universe is not sourced by a cosmological constant or dark energy but rather by the graviton itself. This phenomenology was first encountered in the DGP model, as explained below, and was later shown to be present also in Galileon, massive gravity, and bigravity models. Technically speaking, if the Galileon is considered as a scalar field in its own right, then the acceleration of the Universe is due to a new scalar degree of freedom and falls into the category of dark energy. However, massive gravity and higher-dimensional models of gravity often behave as a Galileon model in some limit, with the Galileon playing the role of one of the graviton's own degrees of freedom; in this sense, Galileon models are often also thought of as models of self-acceleration.
Degravitation
The idea behind degravitation is to modify gravity in the IR, such that the vacuum energy has a weaker effect on the geometry, thereby reconciling a natural value of the vacuum energy, as expected from particle physics, with the observed late-time acceleration. Such modifications of gravity typically arise in models of massive gravity (Dvali et al. 2002, 2003, 2007; Arkani-Hamed et al. 2002), i.e., where gravity is mediated by a massive spin-2 field. The extra-dimensional DGP scenario presented below represents a specific model of soft-mass gravity, where gravity weakens at large distance, with a force law going as 1 / r. This weakening is, however, too mild to achieve degravitation and tackle the cosmological constant problem. An obvious way out is to extend the DGP model to higher dimensions, thereby diluting gravity more efficiently at large distances. This is achieved in models of cascading gravity, as presented below. An alternative to cascading gravity is to work directly with theories of constant-mass gravity (hard-mass graviton).
I.5.5.2 Models
Infrared modifications of gravity usually weaken the effect of gravity on cosmological scales, i.e., the propagation of gravitational waves is affected at distances and timescales that are of the order of the size and age of the current Universe. These infrared modifications of general relativity are united by the common feature of invoking new degrees of freedom which could be used to either explain the recent acceleration of the Hubble expansion or tackle the cosmological constant problem. Below we will discuss different models which share these features.
DGP
The Galileon
As shown in Nicolis et al. (2009), such theories can allow for self-accelerating de Sitter solutions without any ghosts, unlike the DGP model. In the presence of compact sources, these solutions can support spherically-symmetric, Vainshtein-like nonlinear perturbations that are also stable against small fluctuations. However, this is restricted to the subset of the third-order Galileon, which contains only \(\mathcal {L}_{\pi }^{(1)}\), \(\mathcal {L}_{\pi }^{(2)}\) and \(\mathcal {L}_{\pi }^{(3)}\) (Mota et al. 2010).
The fact that Galileons give rise to second-order equations of motion, possess a symmetry and allow for healthy self-accelerating solutions has initiated a wealth of investigations in cosmology. Moreover, the non-renormalization theorem makes them theoretically very interesting, since once the parameters in the theory are tuned by observational constraints they are radiatively stable. This means that the coefficients governing the Galileon interactions are technically natural.
“Generalized galileons” and Horndeski interactions
The Galileon terms described above form a subset of the “generalized Galileons”. A generalized Galileon model allows nonlinear derivative interactions of the scalar field \(\pi \) in the Lagrangian while insisting that the equations of motion remain at most second order in derivatives, thus removing any ghost-like instabilities. However, unlike the pure Galileon models, generalized Galileons do not impose the symmetry of Eq. (I.5.59). These theories were first written down by Horndeski (1974). They are a linear combination of Lagrangians constructed by multiplying the Galileon Lagrangians \(\mathcal {L}_{\pi }^{(n)}\) by an arbitrary scalar function of the scalar \(\pi \) and its first derivatives. Just like the Galileon, generalized Galileons can give rise to cosmological acceleration and to Vainshtein screening. However, as they lack the Galileon symmetry, these theories are not protected from quantum corrections: the non-renormalization theorem is lost and, with it, technical naturalness. Even though the naive covariantization of the Galileon interactions on non-flat backgrounds breaks the Galileon symmetry explicitly, one can successfully generalize the Galileon interactions to maximally symmetric backgrounds (Burrage et al. 2011; Trodden and Hinterbichler 2011). It is also worth mentioning that a given subclass of these Horndeski interactions can be constructed within the context of massive gravity by covariantizing its decoupling limit (de Rham and Heisenberg 2011). Many other theories can also be found within the spectrum of generalized Galileon models, including k-essence. Recently, a new way to maintain a generalized Galileon symmetry on curved spacetimes was proposed in Gabadadze et al. (2012), Trodden (2012) by coupling massive gravity to a higher-dimensional DBI Galileon as in de Rham and Tolley (2010). In Hinterbichler et al. (2013), it was shown that such a generalized covariant Galileon model can lead to stable self-accelerating solutions.
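For orientation, the generalized Galileon (Horndeski) interactions can be written as a sum of four Lagrangians with free functions \(G_i(\pi ,X)\) of the field and its kinetic term \(X\); the form quoted here is the one standard in the later literature (conventions, e.g., the sign of \(X\), vary between authors):
$$\begin{aligned} \mathcal {L}_2&= G_2(\pi ,X),\\ \mathcal {L}_3&= G_3(\pi ,X)\,\Box \pi ,\\ \mathcal {L}_4&= G_4(\pi ,X)\,R + G_{4,X}\left[ (\Box \pi )^2 - \pi _{;\mu \nu }\pi ^{;\mu \nu }\right] ,\\ \mathcal {L}_5&= G_5(\pi ,X)\,G_{\mu \nu }\pi ^{;\mu \nu } - \tfrac{1}{6}G_{5,X}\left[ (\Box \pi )^3 - 3\,\Box \pi \,\pi _{;\mu \nu }\pi ^{;\mu \nu } + 2\,\pi _{;\mu }{}^{\nu }\pi _{;\nu }{}^{\alpha }\pi _{;\alpha }{}^{\mu }\right] , \end{aligned}$$
where \(R\) and \(G_{\mu \nu }\) are the Ricci scalar and Einstein tensor. Setting \(G_4=M_{\mathrm {Pl}}^2/2\) and \(G_3=G_5=0\) recovers GR plus a k-essence sector \(G_2\).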
Even though scalar fields are by far the most extensively explored fields in cosmology, there are also motivations for exploring the role of vector fields, or higher p-forms in general. Inspired by the Horndeski interactions of the scalar field, one can construct the most general vector–tensor interactions with non-minimal coupling giving rise to second-order equations of motion (Jiménez et al. 2013).
Cascading gravity
Cascading gravity is an explicit realization of the idea of degravitation, where gravity behaves as a high-pass filter, allowing sources with characteristic wavelength (in space and in time) shorter than a characteristic scale \(r_c\) to behave as expected from GR, but weakening the effect of sources with longer wavelengths. This could explain why a large cosmological constant does not backreact as much as anticipated from standard GR. Since the DGP model does not modify gravity enough in the IR, “cascading gravity” relies on the presence of at least two infinite extra dimensions, while our world is confined on a four-dimensional brane (de Rham et al. 2008b). As in DGP, four-dimensional gravity is recovered at short distances thanks to an induced Einstein–Hilbert term on the brane with associated Planck scale \(M_4\). The brane we live in is then embedded in a five-dimensional brane, which bears a five-dimensional Planck scale \(M_5\), itself embedded in six dimensions (with Planck scale \(M_6\)). From a four-dimensional perspective, the relevant scales are the masses \(m_5=M_5^3/M_4^2\) and \(m_6=M_6^4/M_5^3\), which characterize the transitions from 4d to 5d and from 5d to 6d behavior, respectively.
Such theories, embedded in more than one extra dimension, involve at least one additional scalar field that typically enters as a ghost. This ghost is independent of the ghost present in the self-accelerating branch of DGP, but is completely generic to any codimension-two and higher framework with brane-localized kinetic terms. However, there are two ways to cure the ghost, both of which are natural when considering a realistic higher-codimensional scenario, namely smoothing out the brane, or including a brane tension (de Rham et al. 2008a, b, 2010).
When the issue associated with the ghost is properly taken into account, such models give rise to a theory of massive gravity (soft-mass graviton) composed of one helicity-2 mode, helicity-1 modes that decouple, and two helicity-0 modes. In order for this theory to be consistent with standard GR in four dimensions, both helicity-0 modes should decouple from the theory. As in DGP, this decoupling does not happen in a trivial way, but relies on a phenomenon of strong coupling: close enough to any source, both scalar modes are strongly coupled and therefore freeze.
The resulting theory then appears as a theory of a massless spin-2 field in four dimensions, in other words as GR. For \(m_6\le m_5\), the respective Vainshtein or strong-coupling scale, i.e., the distance from a source M within which each mode is strongly coupled, is \(r_{i}^3=M/m_i^2 M_4^2\), where \(i=5,6\). Around a source M, one recovers four-dimensional gravity for \(r\ll r_{5}\), five-dimensional gravity for \(r_{5}\ll r \ll r_{6}\) and finally six-dimensional gravity at larger distances \(r\gg r_{6}\).
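As an illustration of these scalings, the short numerical sketch below (natural units) classifies the gravity regime seen at a given distance from a source via \(r_i^3 = M/m_i^2 M_4^2\); all numerical values are hypothetical and chosen only for illustration, not taken from the text.

```python
# Illustrative sketch (natural units, hbar = c = 1): the Vainshtein radii
# of cascading gravity, r_i^3 = M / (m_i^2 M_4^2) for i = 5, 6, and the
# gravity regime seen at distance r from a source of mass M.
# All numbers below are hypothetical, for illustration only.

def vainshtein_radius(M, m_i, M4):
    """r_i^3 = M / (m_i^2 M4^2)  ->  r_i."""
    return (M / (m_i**2 * M4**2)) ** (1.0 / 3.0)

def gravity_regime(r, M, m5, m6, M4):
    """Classify the regime around the source: 4d, 5d or 6d gravity."""
    r5 = vainshtein_radius(M, m5, M4)
    r6 = vainshtein_radius(M, m6, M4)   # m6 <= m5 implies r6 >= r5
    if r < r5:
        return "4d (GR recovered)"
    elif r < r6:
        return "5d"
    return "6d"

# Hypothetical hierarchy m6 < m5 in arbitrary units:
M4, M, m5, m6 = 1.0, 1.0e12, 1.0e-6, 1.0e-8
r5 = vainshtein_radius(M, m5, M4)
r6 = vainshtein_radius(M, m6, M4)
print(r5 < r6)                         # the 5d region is nested inside the 6d one
print(gravity_regime(0.5 * r5, M, m5, m6, M4))
```

The nesting \(r_5 < r_6\) for \(m_6 \le m_5\) is what produces the cascade: GR innermost, then 5d, then 6d behavior.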
The extension of cascading gravity to higher dimensions also shows the presence of solutions which allow for an arbitrarily large cosmological constant without leading to any cosmic acceleration of the \(3+1\) brane (de Rham et al. 2009), hence providing a first ingredient towards tackling the cosmological constant problem.
I.5.5.3 Massive gravity and cosmological consequences
While laboratory experiments, solar-system tests and cosmological observations have all been in complete agreement with GR for almost a century now, these bounds do not eliminate the possibility for the graviton to bear a small hard mass \(m\lesssim 6\times 10^{-32}\mathrm {\ eV}\) (Goldhaber and Nieto 2010). The question of whether or not gravity could be mediated by a hard-mass graviton is not only of fundamental interest, but could potentially have interesting observational implications and help with the late-time acceleration of the Universe and the original cosmological constant problem. Since the degravitation mechanism is also expected to be present if the graviton bears a hard mass, such models can play an important role for late-time cosmology, and more precisely when the age of the universe becomes of the order of the graviton Compton wavelength. See de Rham (2014) for a recent review on massive gravity and related models.
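To put the quoted bound in perspective, a quick back-of-the-envelope conversion of the graviton mass to a Compton wavelength can be done assuming only the standard values of \(\hbar c\), the megaparsec, and a choice (ours, not the text's) of \(H_0=70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}\):

```python
# Back-of-the-envelope check (assuming standard constants): the Compton
# wavelength lambda = hbar*c / (m*c^2) implied by the quoted bound
# m <~ 6e-32 eV, compared with the Hubble radius c/H0 for H0 = 70 km/s/Mpc.

HBAR_C_EV_M = 1.973e-7          # hbar*c in eV*m
MPC_IN_M = 3.086e22             # one megaparsec in metres

def compton_wavelength_mpc(m_ev):
    """Compton wavelength in Mpc for a particle of mass m_ev (in eV)."""
    return HBAR_C_EV_M / m_ev / MPC_IN_M

lam = compton_wavelength_mpc(6e-32)        # of order 1e2 Mpc
hubble_radius_mpc = 2.998e5 / 70.0         # c/H0, of order 4e3 Mpc

print(f"Compton wavelength: {lam:.0f} Mpc")
print(f"Hubble radius:      {hubble_radius_mpc:.0f} Mpc")
# A graviton mass of order hbar*H0 ~ 1.5e-33 eV would instead give a
# Compton wavelength comparable to the Hubble radius itself:
print(f"{compton_wavelength_mpc(1.5e-33):.0f} Mpc")
```

The bound thus corresponds to a Compton wavelength of at least cosmologically large (hundred-Mpc) scales, while a mass of order \(\hbar H_0\) would make the Compton wavelength comparable to the size of the observable Universe, which is the regime relevant for late-time cosmology.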
Spherically symmetric solutions in the decoupling limit were considered in Berezhiani et al. (2013a). Stability of these solutions requires the parameter \(\beta \) to be positive definite, which sets another constraint on the parameters of the original theory. Furthermore, it was also shown that the solutions are asymptotic to a nontrivial FRW solution which is independent of the source at infinity. Notice, however, that these solutions are valid within the decoupling limit of massive gravity. At very large distances from the source, the decoupling limit is no longer valid, as the graviton mass takes over. At distances comparable to the graviton's Compton wavelength one expects any solution to reach a Yukawa-like behaviour, and so the space–time to be asymptotically flat, although this has not been shown explicitly in any cosmological solution.
As in the studies of the spherically symmetric solutions mentioned above, a considerable amount of insight into the cosmological solutions can be gained from the decoupling-limit analysis. Considering the de Sitter geometry as a small perturbation about Minkowski space–time, one can construct self-accelerating solutions which are at leading order indistinguishable from a standard \(\varLambda \)CDM model. The helicity-0 degree of freedom of massive gravity forms a condensate whose energy density sources self-acceleration (de Rham et al. 2011a). However, as mentioned above, the solutions found in the decoupling limit could be considered just as a transient state of the full solution. In addition, the specific cosmological solution found in the decoupling limit suffers from pathologies, since the vector fields lose their kinetic terms.
Beyond the decoupling limit, it has been shown that there is a no-go theorem against the existence of flat and closed FRW solutions, i.e., if the reference metric is chosen to be Minkowski then there are no flat/closed FRW solutions in the full theory beyond the decoupling limit (D’Amico et al. 2011b). The constraint needed for the absence of the Boulware–Deser ghost actually forbids the existence of homogeneous and isotropic cosmological solutions. Despite this no-go, there still exist non-FRW solutions that are approximately homogeneous and isotropic locally, within domains of the size of the inverse graviton mass. These solutions can be used to put constraints on the magnitude of the graviton mass from consistency with known constraints on homogeneity and isotropy. Such solutions demand the successful implementation of the Vainshtein mechanism in the cosmological evolution, which so far has not been investigated in detail in the literature.
The nogo theorem for the existence of flat/closed FRW solutions does not apply to the case of open FRW solutions (Gumrukcuoglu et al. 2011). Unfortunately, nonlinear perturbations around this open FRW background are unstable making these solutions phenomenologically unviable.
A possible way out of these problems is to consider a more general reference metric. Indeed, if one takes the reference metric to be de Sitter, then one can construct FRW solutions. Nonetheless, these solutions bring other problems along, due to the Higuchi bound, which imposes the graviton mass to satisfy \(m^2 > H^2\); this is in conflict with observational constraints in our cosmological past. Promoting the reference metric to an FRW metric leads to a generalized Higuchi bound, and one encounters similar problems (Fasiello and Tolley 2013).
Finally, another, more natural, possibility is the presence of inhomogeneities, and possibly even anisotropies, at large distance scales. Recently there has been a considerable amount of work devoted to these studies, and it is beyond the scope of this review to detail them all. We simply refer to Volkov (2013) for a recent review and some of the most general solutions.
Such inhomogeneities/anisotropies are indeed to be expected on distance scales larger than the observable Universe. After all, one of the main motivations of inflation is to ensure that such inhomogeneities/anisotropies are diluted in our observable Universe, but if inflation lasted only the minimum number of e-folds, such inhomogeneities/anisotropies would also be expected in general relativity.
The first type of inhomogeneous solutions corresponds to the case where only the Stückelberg fields (or new degrees of freedom) carry order unity inhomogeneities while the metric remains isotropic and homogeneous. The inhomogeneities are then effectively unobservable since matter only couples to the metric and not directly to the Stückelberg fields.
Solutions where the metric itself carries explicit inhomogeneities while remaining isotropic have also been explored. These solutions can be constructed in such a way that the effective impact of the metric remains homogeneous and isotropic on short distance scales. In some of these cases, the mass term effectively plays the role of a cosmological constant leading to selfaccelerating solutions.
Anisotropic solutions have been explored in Gumrukcuoglu et al. (2012) and the subsequent literature, for which the observed anisotropy remains small at short distance scales. The presence of the anisotropy also allows for stable self-accelerating solutions.
These represent special cases of exact solutions found in massive gravity, although it is understood that the most general solution is likely to differ from these exact cases by carrying order-one inhomogeneity, or anisotropy, or both at large distances, which would require numerical methods to be solved. This is still very much work in progress.
I.5.6 Beyond massive gravity: nonlocal models and multigravity
Different extensions of massive gravity have been introduced which could lead to an enriched phenomenology. First the mass can be promoted to a function of a new scalar field (Huang et al. 2012a). This allows for more interesting cosmology and some stable selfaccelerating solutions. In this model the graviton mass could be effectively larger at earlier cosmological time, which implies that it can have an interesting phenomenology both at early and late times.
Another extension of massive gravity which also includes a new scalar field is the quasi-dilaton (D’Amico et al. 2013) and its extension (De Felice et al. 2013), where the extra scalar field satisfies a specific symmetry and its interactions are thus radiatively stable. In the original quasi-dilaton model the self-accelerating solution has a ghost and is unstable; this issue is, however, avoided in the extended quasi-dilaton proposed in De Felice et al. (2013). Moreover, new types of stable self-accelerating solutions were recently found in Gabadadze et al. (2014). As in massive gravity, the decoupling-limit solution must have a completion in the full theory, although it might require some level of inhomogeneity at large distance scales, screened at small distance scales via the Vainshtein mechanism.
I.5.6.1 Nonlocal models
I.5.6.2 Bi- and multigravity
Unlike DGP or cascading gravity, models of massive gravity require the presence of a reference metric. The dynamics of this reference metric can be included, leading to a model of bigravity in which two metrics, say \(g_{\mu \nu }\) and \(f_{\mu \nu }\), each carry their own Einstein–Hilbert kinetic term, respectively \(M_g^2\sqrt{g}R[g_{\mu \nu }]\) and \(M_f^2\sqrt{f}R[f_{\mu \nu }]\), in addition to interactions between the two metrics which take precisely the same form as the potential term in massive gravity (Hassan and Rosen 2012). In this form, bigravity was shown to be ghost-free so long as each species of matter couples to only one of the two metrics. The absence of a ghost when some species couple to both metrics f and g at the same time has not been proven, but is plausible.
Bigravity has two metrics and yet only one copy of diffeomorphism invariance. The second copy of diffeomorphisms can be restored by introducing three Stückelberg fields, similarly to massive gravity, which can be thought of as three additional degrees of freedom on top of the two present in the metric. This leads to a total of seven degrees of freedom: two in an effectively massless spin-2 field and five in an effectively massive spin-2 field. Notice that both the massive and the massless modes are combinations of \(g_{\mu \nu }\) and \(f_{\mu \nu }\).
Among these three additional degrees of freedom, one counts a helicity-0 mode which satisfies the same properties as in massive gravity. In particular, this helicity-0 mode behaves as a Galileon in a similar decoupling limit and is screened via a Vainshtein mechanism.
Recently it has also been shown that a simple form of bigravity that depends on a single parameter (the minimal model) allows for stable self-accelerating solutions with distinguishable features from \(\varLambda \)CDM and an effective equation of state at small redshift \(w(z)\approx -1.22^{+0.02}_{-0.02} - 0.64^{+0.05}_{-0.04}\,z/(1+z)\) (Könnig and Amendola 2014). At the linearly perturbed level, however, this model has been shown to contain a gradient instability, ultimately due to the violation of the Higuchi bound. Linear perturbations in bimetric gravity have been studied extensively in Comelli et al. (2012b), Könnig and Amendola (2014), Solomon et al. (2014), Könnig et al. (2014), Lagos and Ferreira (2014), Cusin et al. (2015), Yamashita and Tanaka (2014), De Felice et al. (2014), Enander et al. (2015), Amendola et al. (2015), Johnson and Terrana (2015), and the models have been shown to contain either ghost or gradient instabilities. Cosmological solutions can be made stable back to arbitrarily early times by taking one Planck mass to be much smaller than the other (Akrami et al. 2015), or by reintroducing a cosmological constant which is much larger than the bimetric interaction parameter (Könnig and Amendola 2014). It is also possible that the gradient instability in bigravity is cured at the nonlinear level (Mortsell and Enander 2015) due to a version of the Vainshtein screening mechanism (Vainshtein 1972; Babichev and Deffayet 2013).
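The quoted best-fit expansion history of the minimal model is easy to evaluate; the sketch below assumes the conventional CPL signs, \(w(z)=w_0+w_a\,z/(1+z)\) with central values \(w_0=-1.22\) and \(w_a=-0.64\), ignoring the error bars:

```python
# Evaluating the quoted best-fit effective equation of state of the
# minimal bigravity model, w(z) = -1.22 - 0.64 * z / (1 + z), i.e., the
# CPL form w0 + wa*z/(1+z) with central values only.

def w_bigravity(z, w0=-1.22, wa=-0.64):
    """CPL-form effective equation of state w(z) = w0 + wa * z/(1+z)."""
    return w0 + wa * z / (1.0 + z)

for z in (0.0, 0.5, 1.0, 2.0):
    print(f"z = {z:3.1f}:  w = {w_bigravity(z):.3f}")
# At z = 0 this gives w ~ -1.22, already phantom-like and hence
# distinguishable in principle from the LCDM value w = -1.
```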
Bigravity was also shown to be extendable to an arbitrary number of interacting metrics in Hinterbichler and Rosen (2012), which would lead to multiple Galileons in its decoupling limit. In Lüben et al. (2016), several nontrivial cosmological solutions in a model with three metrics have been identified.
I.5.7 Effective field theory of dark energy
One of the most productive recent ideas in dark-energy cosmology has been the application of effective field-theory methods, originally developed for inflation (Creminelli et al. 2006, 2009; Cheung et al. 2008a), to limit the space of possible parameterisations of gravity to that obtainable from local actions with a fixed number of degrees of freedom, and also to describe the perturbation evolution in different models of modified gravity using a common approach (Gubitosi et al. 2013; Bloomfield et al. 2013; Gleyzes et al. 2013).
We refer the reader to, e.g., the review by Gleyzes et al. (2015b) for details, mentioning here only the rough principles. The EFT-of-DE approach depends on choosing an FRW cosmological background as well as being able to pick one of the degrees of freedom of the model to be used as a clock on this background. This means that the approach is most directly applicable to models with at least one scalar degree of freedom, where the background configuration of the scalar field evolves monotonically (i.e., does not oscillate during the evolution). When this is possible, the scalar plays the role of a Goldstone boson of the broken time symmetry in cosmology, and its field value defines a time slicing (a unitary gauge). The symmetries of the FRW background must then also be the symmetries of the action which describes the evolution of fluctuations on the cosmological background: the action for perturbations must obey time-reparameterisation invariance and the remaining unbroken diffeomorphism invariance of the spatial slice.
One then writes down various operators for the FRW background, the quadratic action for fluctuations and, in principle, higher-order actions if nonlinearities are of interest. The symmetry of the cosmological background is such that the coefficients of all these operators are allowed to be arbitrary functions of time t, but the resulting scale dependence is fixed by the particular operators. Note that since the ADM curvatures contain no time derivatives, but only spatial derivatives \(D_{i}\), using them does not introduce new degrees of freedom through higher time derivatives. Thus one avoids this complication of the usual covariant approach for actions. However, higher-order spatial derivatives are generated through this approach.
The simplest application is to universally coupled scalar–tensor theories, i.e., modifications of gravity which contain no more than one extra scalar degree of freedom. For the background, the end result is that an arbitrary expansion history H(t) can be generated, and thus the background should be thought of as an input of the EFT approach (e.g., \(w=-1\) or any other such choice). The question of determining the theory of gravity is then one of constraining the behaviour of perturbations and the growth of structure on this chosen background. Linear structure formation is then determined by the quadratic action for fluctuations.
 1. Non-minimal coupling of gravity. These functions modify both the scalar and the tensor propagation (Saltas et al. 2014):
 (a) \(M_{*}^{2}(t)\), the effective Planck mass. \(M_{*}^{2}\) is the normalisation of the kinetic term for gravitons. It encodes the strength of the gravitational force/space–time curvature produced by a fixed amount of energy. Large-scale structure is sensitive only to the time variation of the Planck mass, or the Planck-mass run rate,$$\begin{aligned} \alpha _{\text {M}}\equiv \frac{\mathrm {d}\ln M_{*}^{2}}{\mathrm {d}\ln a}, \end{aligned}$$(I.5.64)
 (b) \(\alpha _{\text {T}}(t)\), tensor speed excess. This parameter denotes the difference between the propagation speed of gravitational waves and the speed of light, i.e., \(\alpha _{\text {T}}=c_{\text {T}}^{2}-1\). It applies to modes propagating on cosmological scales and is quite weakly constrained despite the recent detection from LIGO (Abbott et al. 2016; Blas et al. 2016; Creminelli and Vernizzi 2017; Ezquiaga and Zumalacárregui 2017; Crisostomi and Koyama 2017; Baker et al. 2017; Sakstein and Jain 2017).
 2. Kinetic terms. The scalar mode is in addition affected by the following three functions:
 (a) \(\alpha _{\text {K}}(t)\), kineticity. Coefficient of the kinetic term for the scalar d.o.f. before demixing (see Bellini and Sawicki 2014). Increasing this function leads to a relative increase of the kinetic terms compared to the gradient terms and thus a lower sound speed for the scalar field. This creates a sound horizon smaller than the cosmological horizon: on super-sound-horizon scales the scalar has no pressure support and clusters similarly to dust; inside the sound horizon its clustering is arrested and it can eventually enter a quasi-static configuration (Sawicki and Bellini 2015). When looking only at the quasi-static scales, inside the sound horizon, this function cannot be constrained (Gleyzes et al. 2016). This is the only term present in the simplest DE models, e.g., quintessence and perfect-fluid dark energy.
 (b) \(\alpha _{\text {B}}(t)\), braiding. This operator gives rise to a new mixing of the scalar field and the extrinsic curvature of the spatial metric, K. This leads to a modification of the coupling of matter to the curvature, independent of and additional to any change in the Planck mass. It is typically interpreted as an additional fifth force between massive particles and can be approximated as a modification of the effective Newton’s constant for perturbations. It is present in archetypal modified-gravity models such as f(R) gravity (see Bellini and Sawicki 2014 for details). A purely conformal coupling of the scalar to gravity leads to the universal property \(\alpha _{\text {M}}+\alpha _{\text {B}}=0\).
 (c) \(\alpha _{\text {H}}(t)\), beyond Horndeski. This term is generated by a kinetic mixing of the scalar with the intrinsic curvature R. It results in third-order derivatives in the equations of motion, which however cancel once all the constraints are solved for. It produces a coupling of the gravitational field to the velocity of the matter.
Note that either \(\alpha _{\text {M}}\), \(\alpha _{\text {T}}\) or \(\alpha _{\text {H}}\) must not vanish in order for gravitational slip to be generated by perfect-fluid matter sources. In such a case, the equation of motion for the propagation of gravitational waves is also modified.
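As a minimal numerical illustration of the Planck-mass run rate \(\alpha _{\text {M}}=\mathrm {d}\ln M_{*}^{2}/\mathrm {d}\ln a\), one can differentiate a hypothetical Planck-mass history \(M_*^2(a)=1+\Omega \,a^n\); this parameterisation is purely illustrative, not taken from the text.

```python
# A minimal numerical illustration of the Planck-mass run rate
# alpha_M = d ln M_*^2 / d ln a, for a hypothetical parameterisation
# M_*^2(a) = 1 + Omega * a^n (chosen only for illustration; any
# monotonic M_*^2(a) would do).

import math

def m_star_sq(a, omega=0.1, n=2.0):
    """Hypothetical effective Planck mass squared as a function of a."""
    return 1.0 + omega * a**n

def alpha_m(a, omega=0.1, n=2.0, eps=1e-6):
    """Numerical log-derivative d ln M_*^2 / d ln a (central difference)."""
    lna = math.log(a)
    up = math.log(m_star_sq(math.exp(lna + eps), omega, n))
    dn = math.log(m_star_sq(math.exp(lna - eps), omega, n))
    return (up - dn) / (2.0 * eps)

# Analytic cross-check: alpha_M = n * Omega * a^n / (1 + Omega * a^n),
# so at a = 1 with Omega = 0.1, n = 2 one expects 0.2 / 1.1.
print(abs(alpha_m(1.0) - 0.2 / 1.1) < 1e-5)
```

For this choice \(\alpha _{\text {M}}\to 0\) at early times (\(a\ll 1\)) and grows towards today, the kind of behaviour large-scale-structure surveys are sensitive to.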
Since Horndeski theories include as subclasses the majority of the popular models of modified gravity, including perfect-fluid dark energy, linear structure formation in all these models can be solved for in a unified manner by obtaining these \(\alpha \) functions. This method is now employed in the publicly available Boltzmann codes EFTCAMB (Hu et al. 2014), used also in the Planck analysis of dark energy and modified gravity (Planck Collaboration 2016c), and hi_class (Zumalacárregui et al. 2017).
I.5.8 Observations and screening mechanisms
All models of modified gravity presented in this section have in common the presence of at least one additional helicity-0 degree of freedom that is not an arbitrary scalar, but descends from a full-fledged spin-two field. As such, it has no potential and enters the Lagrangian via very specific derivative terms fixed by symmetries. However, tests of gravity severely constrain the presence of additional scalar degrees of freedom. Interestingly, this degree of freedom would severely affect the behavior of voids and could potentially help reduce the tension between Planck and supernovae; Euclid could detect such an effect at the \(5\sigma \) confidence level (Spolyar et al. 2013). Outside voids, as is well known, in theories of massive gravity the helicity-0 mode can evade fifth-force constraints in the vicinity of matter if the helicity-0 mode interactions are important enough to freeze out the field fluctuations (Vainshtein 1972). This Vainshtein mechanism is similar in spirit, but different in practice, to the chameleon and symmetron mechanisms presented in detail below. One key difference relies on the presence of derivative interactions rather than a specific potential: rather than becoming massive in dense regions, in the Vainshtein mechanism the helicity-0 mode becomes weakly coupled to matter (and light, i.e., sources in general) at high energy. This screening of the scalar mode can nevertheless have distinct signatures in cosmology, and in particular for structure formation.
Different classes of screening
While quintessence introduces a new degree of freedom to explain the late-time acceleration of the universe, the idea behind modified gravity is instead to tackle the core of the cosmological constant problem and its tuning issues, as well as to screen any fifth forces that would arise from the introduction of extra degrees of freedom. As mentioned in Sect. I.5.3.1, the strength with which these new degrees of freedom can couple to the fields of the standard model is very tightly constrained by searches for fifth forces and violations of the weak equivalence principle. Typically, the strength of the scalar-mediated interaction is required to be orders of magnitude weaker than gravity. It is possible to tune this coupling to be as small as required, but this leads to additional naturalness problems. Here we discuss in more detail a number of ways in which new scalar degrees of freedom can naturally couple to standard-model fields whilst still being in agreement with observations, because a dynamical mechanism ensures that their effects are screened in laboratory and solar-system tests of gravity. This is done by making some property of the field dependent on the background environment under consideration. These models typically fall into three classes: either the field becomes massive in a dense environment, so that the scalar force is suppressed because the Compton wavelength of the interaction is small, or the coupling to matter becomes weaker in dense environments, ensuring that the effects of the scalar are suppressed. The latter can be achieved either in regions of space–time where the coupling is dynamically driven to small values (the Damour–Polyakov mechanism, Damour and Polyakov 1994), or where the wave-function normalisation of the field becomes large (the K-mouflage, Babichev et al. 2009, or Vainshtein mechanisms). These types of behavior require the presence of nonlinearities.
One can also see that the different types of screening mechanisms can be differentiated by a screening criterion (Khoury 2013), which requires either the local Newtonian potential (chameleon and Damour–Polyakov, Brax et al. 2012a), the gravitational acceleration (K-mouflage, Brax and Valageas 2014) or the local spatial curvature (Vainshtein) to be large.
Density dependent masses: the chameleon
The chameleon (Khoury and Weltman 2004) is the archetypal model of a scalar field with a mass that depends on its environment, becoming heavy in dense environments and light in diffuse ones. The ingredients for the construction of a chameleon model are a conformal coupling between the scalar field and the matter fields of the standard model, and a potential for the scalar field which includes relevant self-interaction terms.
The environmental dependence of the mass of the field allows the chameleon to avoid the constraints of fifth-force experiments through what is known as the thin-shell effect. If a dense object is embedded in a diffuse background, the chameleon is massive inside the object, where its Compton wavelength is small. If the Compton wavelength is smaller than the size of the object, then the scalar-mediated force felt by an observer at infinity is sourced not by the entire object, but only by a thin shell of matter (of depth comparable to the Compton wavelength) at the surface. This leads to a natural suppression of the force without the need to fine-tune the coupling constant.
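The geometric part of the thin-shell suppression is easy to make quantitative: if only a surface shell of fractional depth \(\Delta R/R\) of a uniform sphere sources the force, the source strength is reduced to the shell's mass fraction, \(1-(1-\Delta R/R)^3\approx 3\,\Delta R/R\) for a thin shell. A minimal sketch:

```python
# Geometric content of the thin-shell effect: if only a surface shell of
# thickness dR of a uniform sphere of radius R sources the scalar force,
# the force is suppressed relative to the unscreened case by roughly the
# shell's mass fraction, 1 - (1 - dR/R)^3 ~ 3*dR/R for dR << R.

def shell_mass_fraction(dR_over_R):
    """Exact mass fraction of a surface shell of fractional depth dR/R."""
    return 1.0 - (1.0 - dR_over_R) ** 3

for x in (1.0, 0.1, 0.01, 0.001):
    print(f"dR/R = {x:5.3f}: exact {shell_mass_fraction(x):.3e}, "
          f"thin-shell approx {3 * x:.3e}")
```

For \(\Delta R/R=1\) (no screening) the full object sources the force; as the shell thins, the scalar force is suppressed in direct proportion, which is how the chameleon passes fifth-force tests without a tuned coupling.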
The Vainshtein mechanism
In models such as DGP, the Galileon, cascading gravity, massive gravity and bi- or multigravity, the effects of the scalar field(s) are screened by the Vainshtein mechanism (Vainshtein 1972; Deffayet et al. 2002b); see also Babichev and Deffayet (2013) for a recent review of the Vainshtein mechanism. This occurs when nonlinear, higher-derivative operators are present in the Lagrangian for a scalar field, arranged in such a way that the equations of motion for the field are still second order, such as the interactions presented in Eq. (I.5.57).
Inside the Vainshtein radius, where the nonlinear, higher-derivative terms become important, they cause the kinetic terms for scalar fluctuations to become large. This can be interpreted as a relative weakening of the coupling between the scalar field and matter. In this way the strength of the interaction is suppressed in the vicinity of massive objects.
Related to the Vainshtein mechanism, but slightly more general, is screening via a disformal coupling between the scalar field and the stress–energy tensor, \(\partial _\mu \pi \partial _\nu \pi T^{\mu \nu }\) (Koivisto et al. 2012), as is present in DBI braneworld-type models (de Rham and Tolley 2010) and massive gravity (de Rham and Gabadadze 2010).
The Symmetron
In sufficiently dense environments, \(\rho >\mu ^2M^2\), the field sits in a minimum at the origin. As the local density drops, the symmetry of the field is spontaneously broken and the field falls into one of the two new minima with a nonzero vacuum expectation value. In high-density, symmetry-restoring environments, the scalar-field vacuum expectation value should be near zero and fluctuations of the field should not couple to matter. Thus, the symmetron force in the exterior of a massive object is suppressed because the field does not couple to the core of the object. This is an example of the Damour–Polyakov mechanism.
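In a sketch of the standard symmetron construction (Hinterbichler and Khoury 2010; the notation here is illustrative), a \(Z_2\)-symmetric bare potential combines with the matter coupling into the density-dependent effective potential

```latex
V_{\mathrm{eff}}(\phi) \simeq \frac{1}{2}\left(\frac{\rho}{M^{2}}-\mu^{2}\right)\phi^{2}
+ \frac{\lambda}{4}\,\phi^{4},
```

so that for \(\rho > \mu ^2 M^2\) the quadratic term is positive and the minimum sits at \(\phi =0\), while for \(\rho < \mu ^2 M^2\) the symmetry breaks and the field falls into one of the two minima at \(\phi = \pm \mu /\sqrt{\lambda }\) (in vacuum).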
The Olive–Pospelov model
The Olive–Pospelov model (Olive and Pospelov 2008) again uses a scalar conformally coupled to matter. In this construction both the coupling function and the scalar-field potential are chosen to have quadratic minima. If the background field takes the value that minimizes the coupling function, then fluctuations of the scalar field decouple from matter. In nonrelativistic environments the scalar field feels an effective potential, which is a combination of these two functions. In high-density environments the field is very close to the value that minimizes the coupling function. In low-density environments the field relaxes to the minimum of the bare potential. Thus, the interactions of the scalar field are suppressed in dense environments. This is another example of the Damour–Polyakov mechanism.
2.5.9 Einstein Aether and its generalizations
Indeed, when the geometry is of the form (I.5.74), anisotropic stresses are negligible, and \(A^{\mu }\) is aligned with the flow of time \(t^{\mu }\), one can find appropriate values of \(c_{A}\) and \(\ell \) such that K is dominated by a term equal to \(|\nabla \varPsi |^{2}/a_{0}^{2}\). This influence then leads to a modification of the time-time component of Einstein’s equations: instead of reducing to Poisson’s equation, one recovers an equation of the form (I.5.73). Therefore, the models are successful covariant realizations of MOND.
Returning to the original motivation behind the theory, the next step is to look at the theory on cosmological scales and see whether the GEA models are realistic alternatives to dark matter. As emphasized, the additional structure in spacetime is dynamical and so possesses independent degrees of freedom. As the model is assumed to be uncoupled to other matter, the gravitational field equations would regard the influence of these degrees of freedom as a type of dark matter (possibly coupled nonminimally to gravity, and not necessarily ‘cold’).
The possibility that the model may then be a viable alternative to the dark sector in background cosmology and linear cosmological perturbations has been explored in depth in Zlosnik et al. (2008), Li et al. (2008a) and Zuntz et al. (2010). As an alternative to dark matter, it was found that the GEA models could replicate some but not all of the following features of cold dark matter: influence on the background dynamics of the universe; negligible sound speed of perturbations; growth rate of dark matter ‘overdensity’; absence of anisotropic stress and contribution to the cosmological Poisson equation; effective minimal coupling to the gravitational field. When compared to the data from large-scale structure and the CMB, the model fared significantly less well than the concordance model and so is excluded. If one relaxes the requirement that the vector field be responsible for the effects of cosmological dark matter, one can look at the model as one responsible only for the effects of dark energy. It was found (Zuntz et al. 2010) that the current most stringent constraints on the model’s success as dark energy come from constraints on the size of large-scale CMB anisotropies. Specifically, possible variation in w(z) of the ‘dark energy’, along with new degrees of freedom sourcing anisotropic stress in the perturbations, was found to lead to new, nonstandard time variation of the potentials \(\varPhi \) and \(\varPsi \). These time variations source large-scale anisotropies via the integrated Sachs–Wolfe effect, and the parameter space of the model is constrained by requiring that this effect not become too pronounced.
In spite of this, given the status of current experimental bounds it is conceivable that a more successful alternative to the dark sector may share some of these points of departure from the Concordance Model and yet fare significantly better at the level of the background and linear perturbations.
2.5.10 The tensor–vector–scalar theory of gravity
Although no further studies of accelerated expansion in TeVeS have been performed, it is very plausible that certain choices of function will inevitably lead to acceleration. It is easy to see that the scalar-field action has the same form as a k-essence/k-inflation (Armendariz-Picon et al. 2000) action, which has been considered as a candidate theory for acceleration. It is unknown in general whether this has similar features to uncoupled k-essence, although Zhao’s study indicates that this is a promising research direction (Zhao 2008).
Until TeVeS was proposed and studied in detail, MOND-type theories were assumed to be fatally flawed: their lack of a dark matter component would necessarily prevent the formation of large-scale structure compatible with current observational data. In the case of an Einstein universe, it is well known that, since baryons are coupled to photons before recombination, they do not have enough time to grow into structures on their own. In particular, on scales smaller than the diffusion-damping scale, perturbations in such a universe are exponentially damped due to Silk damping. CDM solves all of these problems because it does not couple to photons and can therefore start creating potential wells early on, into which the baryons later fall.
TeVeS contains two additional fields, which change the structure of the equations significantly. The first study of TeVeS predictions for largescale structure observations was conducted in Skordis et al. (2006). They found that TeVeS can indeed form largescale structure compatible with observations depending on the choice of TeVeS parameters in the free function. In fact the form of the matter power spectrum P(k) in TeVeS looks quite similar to that in \(\varLambda \)CDM. Thus TeVeS can produce matter power spectra that cannot be distinguished from \(\varLambda \)CDM by current observations. One would have to turn to other observables to distinguish the two models. The power spectra for TeVeS and \(\varLambda \)CDM are plotted on the right panel of Fig. 4. Dodelson and Liguori (2006) provided an analytical explanation of the growth of structure seen numerically by Skordis et al. (2006) and found that the growth in TeVeS is due to the vector field perturbation.
It is premature to claim (as in Slosar et al. 2005; Spergel et al. 2007) that only a theory with CDM can fit CMB observations; a prime example to the contrary is the EBI theory (Bañados et al. 2009). Nevertheless, in the case of TeVeS, Skordis et al. (2006) numerically solved the linear Boltzmann equation and calculated the CMB angular power spectrum. Using initial conditions close to adiabatic, the spectrum thus found provides a very poor fit compared to the \(\varLambda \)CDM model (see the left panel of Fig. 4). The CMB thus seems to put TeVeS in trouble, at least for the Bekenstein free function. The result of Dodelson and Liguori (2006) has a further direct consequence. The difference \(\varPhi - \varPsi \), sometimes named the gravitational slip (see Sect. I.3.2), has additional contributions coming from the perturbed vector field \(\alpha \). Since the vector field is required to grow in order to drive structure formation, it will inevitably lead to a growing \(\varPhi - \varPsi \). This difference will be measured by Euclid, and therefore, when the data are available, one will be able to perform a substantial test that can distinguish TeVeS from \(\varLambda \)CDM.
2.5.11 Other models of interest
I.5.11.1 Models of varying alpha
Whenever a dynamical scalar field is added to a theory, the field will naturally couple to all other fields present, unless a (still unknown) symmetry is postulated to suppress such couplings (Carroll 1998). A coupling to the electromagnetic sector leads to spacetime variations of Nature’s fundamental constants, which are constrained both by local atomic-clock experiments and by astrophysical observations (Uzan 2011). Joint constraints on dynamical dark-energy model parametrizations and on the coupling with electromagnetism were obtained in Calabrese et al. (2014), combining weak-lensing and supernova measurements from Euclid with high-resolution spectroscopy measurements from the European Extremely Large Telescope (Martins 2015). These forecasts suggest that, in the CPL parametrization of these models, the addition of spectroscopic data (which spans the range \(0<z \lesssim 4\)) improves constraints from Euclid observables by a factor of 2 for \(w_0\) and by one order of magnitude for \(w_a\).
I.5.11.2 f(T) gravity
f(T) gravity is a generalization of teleparallel gravity, where the torsion scalar T, instead of curvature, is responsible for gravitational interactions. In this theory, spacetime is endowed with a curvature-free Weitzenböck connection. Thus, torsion acts as a force, allowing for the interpretation of gravity as a gauge theory of the translation group (Arcos and Pereira 2004). Teleparallel gravity and GR yield completely equivalent dynamics for \(f(T)=T\), but differ for any other choice of f(T) (Ferraro and Fiorini 2008; Fiorini and Ferraro 2009). Unlike the analogous approach of f(R) theories, f(T) gravity yields equations that remain at second order in field derivatives; however, local Lorentz invariance is lost.
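For a flat FLRW background with \(T=-6H^2\), the modified Friedmann equation for an action of the form \(T+f(T)\) can be written schematically as (a sketch in one common convention; sign conventions vary in the literature):

```latex
H^{2} = \frac{8\pi G}{3}\,\rho - \frac{f(T)}{6} - 2H^{2} f_{T},
\qquad f_{T}\equiv \frac{\mathrm{d}f}{\mathrm{d}T},
```

which reduces to the standard Friedmann equation for \(f=0\), i.e., pure teleparallel gravity, equivalent to GR at the level of the dynamics.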
In f(T) cosmology, structure formation is modified because of a time-dependent effective gravitational constant. Cardone et al. (2012) analysed two viable f(T) gravity models and showed that both are in very good agreement with a wide set of data, including SNIa and GRB Hubble diagrams, BAOs at different redshifts, Hubble expansion rate measurements and the WMAP7 distance priors. Yet, that wide dataset is unable to constrain the model parameters well enough to discriminate between the considered f(T) models and the standard \(\varLambda \)CDM scenario. Therefore, Camera et al. (2014) investigated the imprints of f(T) gravity on galaxy clustering and weak gravitational lensing in the context of Euclid.
In particular, they studied tomographic weak-lensing cosmic shear and both the 2D and 3D clustering of Euclid H\(\alpha \) galaxies, and found that with such a combination of probes it will indeed be possible to tightly constrain f(T) model parameters. Again, it is the combination of clustering and lensing that yields the tightest constraints, thanks to the high complementarity of the two probes when it comes to tracking the different behaviour of the metric potentials. With such a probe combination, bounds on the two modified f(T) models tighten by more than an order of magnitude, thus allowing us to rule out the models at more than 3\(\sigma \) confidence.
2.6 Nonlinear aspects
In this section, we discuss how the nonlinear evolution of cosmic structures in the context of different non-standard cosmological models can be studied by means of numerical simulations based on N-body algorithms and of analytical approaches based on the spherical collapse model.
2.6.1 N-body simulations of dark energy and modified gravity
Here we discuss the numerical methods presently available for this type of analysis, and we review the main results obtained so far for different classes of alternative cosmologies. These can be grouped into models where structure formation is affected only through a modified expansion history (such as quintessence and early dark-energy models, Sect. I.5.1) and models where particles experience modified gravitational forces, either for individual particle species (interacting dark-energy models and growing neutrino models, Sect. I.5.3) or for all types of particles in the universe (modified gravity models). A general overview of the recent developments in the field of dark energy and modified gravity N-body simulations can be found in Baldi (2012a).
I.6.1.1 Quintessence and early dark-energy models
In general, in the context of flat FLRW cosmologies, any dynamical evolution of the dark-energy density (\(\rho _{\mathrm {DE}}\ne {\mathrm {const.}} = \rho _{\varLambda }\)) determines a modification of the cosmic expansion history with respect to the standard \(\varLambda \)CDM cosmology. In other words, if the dark energy is a dynamical quantity, i.e., if its equation-of-state parameter is not exactly \(w=-1\), then for any given set of cosmological parameters (\(H_{0}\), \(\varOmega _{\mathrm {CDM}}\), \(\varOmega _{\mathrm {b}}\), \(\varOmega _{\mathrm {DE}}\), \(\varOmega _{\mathrm {rad}}\)), the redshift evolution of the Hubble function H(z) will differ from the standard \(\varLambda \)CDM case \(H_{\varLambda }(z)\).
Early dark energy (EDE) is, therefore, a common prediction of scalar field models of dark energy, and observational constraints put firm bounds on the allowed range of \(\varOmega _{\mathrm {DE}}\) at early times, and consequently on the potential slope \(\alpha \).
As we have seen in Sect. I.2.1, a completely phenomenological parametrization of EDE, independent of any specific model of dynamical dark energy, has been proposed by Wetterich (2004) as a function of the present dark-energy density \(\varOmega _{\mathrm {DE}}\), its value at early times \(\varOmega _{\mathrm {e}}\), and the present value of the equation-of-state parameter \(w_{0}\). From Eq. (I.2.4), the full expansion history of the corresponding EDE model can be derived.
A modification of the expansion history indirectly influences also the growth of density perturbations and ultimately the formation of cosmic structures. While this effect can be investigated analytically in the linear regime, N-body simulations are required to extend the analysis to the nonlinear stages of structure formation. For standard quintessence and EDE models, the only modification that needs to be implemented into standard N-body algorithms is the computation of the correct Hubble function H(z) for the specific model under investigation, since this is the only way in which these non-standard cosmological models can alter structure formation processes.
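Since the Hubble function is the only ingredient that needs to change, the required modification is straightforward. As a minimal illustrative sketch (not Euclid pipeline code; the function names and parameter values are placeholders, and the simpler CPL form is used here instead of the Wetterich parametrization of Eq. (I.2.4)), H(z) for a dynamical dark-energy model can be computed as:

```python
import math

def rho_de_ratio(z, w0=-0.9, wa=0.1):
    """Dark-energy density relative to today for the CPL parametrization
    w(a) = w0 + wa * (1 - a), using the closed-form solution of the
    dark-energy continuity equation."""
    a = 1.0 / (1.0 + z)
    return a ** (-3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * (1.0 - a))

def hubble(z, h0=67.0, om=0.32, w0=-0.9, wa=0.1):
    """H(z) in km/s/Mpc for a flat universe with CPL dark energy.
    For w0 = -1, wa = 0 this reduces to flat LambdaCDM (radiation
    neglected, which is adequate at the low redshifts of interest)."""
    ode = 1.0 - om  # flatness fixes the dark-energy fraction
    return h0 * math.sqrt(om * (1.0 + z) ** 3 + ode * rho_de_ratio(z, w0, wa))
```

In a code such as Gadget-2 this amounts to replacing the internal H(a) used for the cosmological drift and kick factors with the expression above.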
This has been done in the independent studies of Grossi and Springel (2009) and Francis et al. (2008a), where a modified expansion history consistent with EDE models described by the parametrization of Eq. (I.2.4) has been implemented in the widely used N-body code Gadget-2 (Springel 2005) and the properties of nonlinear structures forming in these EDE cosmologies have been analyzed. Both studies have shown that the standard formalism for the computation of the halo mass function still holds for EDE models at \(z=0\). In other words, both the standard fitting formulae for the number density of collapsed objects as a function of mass and their key parameter \(\delta _{c} = 1.686\), representing the linear overdensity at collapse for a spherical density perturbation, remain unchanged for EDE cosmologies.
The work of Grossi and Springel (2009), however, also investigated the internal properties of collapsed halos in EDE models, finding a slight increase of halo concentrations due to the earlier onset of structure formation and, most importantly, a significant increase of the line-of-sight velocity dispersion of massive halos. The latter effect could mimic a higher \(\sigma _{8}\) normalization for cluster mass estimates based on galaxy velocity-dispersion measurements and, therefore, represents a potentially detectable signature of EDE models.
Besides determining a different expansion history with respect to the standard \(\varLambda \)CDM cosmology due to the presence of an EDE component, scalar-field DE cosmologies also predict the existence of spatial perturbations of the DE density, resulting in a modification of the shape of the matter power spectrum. Even though such density perturbations are suppressed by free-streaming at subhorizon scales (thereby making it possible to neglect the effect of DE fluctuations on the dynamical evolution of cosmic structures), they remain frozen to a constant value at superhorizon scales. Therefore, as new large scales continuously enter the causal horizon, they will be affected by the presence of DE perturbations before these are eventually damped by free-streaming. Consequently, DE perturbations are expected to slightly change the large-scale shape of the linear power spectrum, thereby affecting the initial conditions of structure formation (Ma et al. 1999; Alimi et al. 2010). This has motivated the development of DE N-body simulations with extremely large volumes, comparable to or larger than the comoving size of the cosmic horizon, in order to investigate the nonlinear signatures of the large-scale DE perturbations (Alimi et al. 2010, 2012; Rasera et al. 2010). Such studies have highlighted that the nonlinear regime of structure formation carries information on the initial conditions of the Universe and keeps memory of the growth history of density perturbations even for the case of perfectly degenerate linear matter power spectra and \(\sigma _{8}\) values. Therefore, nonlinear structure formation processes represent a precious source of information for the highly demanding requirements of precision cosmology.
I.6.1.2 Interacting dark-energy models
While such a direct interaction with baryonic particles (\(\alpha =b\)) is tightly constrained by observational bounds, and while it is suppressed for relativistic particles (\(\alpha =r\)) for symmetry reasons (\(1-3w_{r}=0\)), a selective interaction with cold dark matter (CDM hereafter) or with massive neutrinos is still observationally viable (see Sect. I.5.3).
As a consequence of these new terms in the Newtonian acceleration equation, the growth of density perturbations in interacting dark-energy models will be affected not only by the different Hubble expansion due to the dynamical nature of dark energy, but also by a direct modification of the effective gravitational interactions at subhorizon scales. Therefore, linear perturbations of coupled species will grow at a higher rate in these cosmologies. In particular, for the case of a coupling to CDM, a different amplitude of the matter power spectrum will be reached at \(z=0\) with respect to \(\varLambda \)CDM if a normalization in accordance with CMB measurements at high redshift is assumed.
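Schematically, the effect on the sub-horizon growth of coupled CDM perturbations can be sketched as follows (a sketch in one common normalization of the coupling \(\beta \); the exact numerical factors depend on the convention adopted):

```latex
\ddot{\delta}_{c} + \left(2H - \beta\,\dot{\phi}\right)\dot{\delta}_{c}
- 4\pi G\left[\rho_{b}\,\delta_{b}
+ \rho_{c}\,\delta_{c}\left(1+2\beta^{2}\right)\right] = 0,
```

which exhibits both the extra friction term proportional to \(\beta \dot{\phi }\) and the fifth-force enhancement \(1+2\beta ^{2}\) of the gravitational pull acting on coupled CDM, while baryons feel only standard gravity.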
Clearly, the new acceleration equation (I.6.6) will also influence the formation and evolution of nonlinear structures, and a consistent implementation of all the above-mentioned effects into an N-body algorithm is required in order to investigate this regime.
For the case of a coupling to CDM (a coupling with neutrinos will be discussed in the next section), this has been done, e.g., by Macciò et al. (2004) and Sutter and Ricker (2008) with 1D or 3D grid-based field solvers, and more recently by means of suitable modifications (by Baldi et al. 2010; Carlesi et al. 2014a) of the TreePM hydrodynamic N-body code Gadget-2 (Springel 2005), and similarly through a modified version (by Li and Barrow 2011a) of the adaptive-mesh-refinement code Ramses (Teyssier 2002).

The suppression of power at small scales in the power spectrum of interacting dark-energy models as compared to \(\varLambda \)CDM (see, e.g., Baldi 2011a);

An enhanced lensing power spectrum as compared to \(\varLambda \)CDM (see e.g. Beynon et al. 2012);

The development of a gravitational bias in the amplitude of density perturbations of uncoupled baryons and coupled CDM particles defined as \(P_{b}(k)/P_{c}(k)<1\), which determines a significant decrease of the baryonic content of massive halos at low redshifts in accordance with a large number of observations (Baldi et al. 2010; Baldi 2011a);

The increase of the number density of high-mass objects at any redshift as compared to \(\varLambda \)CDM (see, e.g., Baldi and Pettorino 2011; Baldi 2012b; Cui et al. 2012);

An enhanced ISW effect (Amendola 2000a, 2004; Mainini and Mota 2012); such effects may be partially reduced when taking into account nonlinearities, as described in Pettorino et al. (2010);

A modification of the shape of redshift-space distortions (Marulli et al. 2012) and an enhanced pairwise infall velocity of colliding massive clusters (Lee and Baldi 2012);

Less steep inner-core halo profiles (depending on the interplay between the fifth force and velocity-dependent terms) (Baldi et al. 2010; Baldi 2011a, b; Li et al. 2011; Li and Barrow 2011b);

A lower concentration of the halos (Baldi et al. 2010; Baldi 2011b; Li and Barrow 2011b);

CDM voids are larger and more underdense when a coupling is active (Baldi and Viel 2010; Sutter et al. 2015);

A modified amplitude and time evolution of the halo bias (see, e.g., Marulli et al. 2012; Moresco et al. 2014), which might determine a counterintuitive behaviour in the connection between CDM and halo population statistics in the context of interacting dark-energy cosmologies;

Small-scale power can be either suppressed or enhanced when a growing coupling function is considered, depending on the rate of change of the coupling \(\mathrm {d}\beta (\phi )/\mathrm {d}\phi \);

The inner overdensity of CDM halos, and consequently the halo concentrations, can both decrease (as always happens for the case of constant couplings) or increase, again depending on the rate of change of the coupling strength (Cui et al. 2012);

The abundance of halo substructures (see Giocoli et al. 2013) as well as the CMB lensing power spectrum (see Carbone et al. 2013) might show either an enhancement or a suppression with respect to \(\varLambda \)CDM depending on the specific model under examination.
All these effects represent characteristic features of interacting darkenergy models and could provide a direct way to observationally test these scenarios.
A slightly more complex realisation of the interacting dark-energy scenario has recently been proposed by Baldi (2012c) and termed the “multi-coupled dark energy” model. The latter is characterised by two distinct CDM particle species featuring opposite constant couplings to a single classical dark-energy scalar field, and represents the simplest possible realisation of the general multiple-interaction scenario proposed by Gubser and Peebles (2004a, b) and Brookfield et al. (2008). The most noticeable feature of such a model is the dynamical screening that effectively suppresses the coupling at the level of the background and linear perturbation evolution, while leaving room for possible interesting phenomenology at nonlinear scales (see, e.g., Piloyan et al. 2013, 2014). The first N-body simulations of the multi-coupled dark energy scenario have been performed by Baldi (2013, 2014), showing for the first time the halo fragmentation process occurring in these cosmologies as a consequence of the repulsive long-range fifth force between CDM particles of different types. Higher-resolution simulations will be required in order to investigate possible observable effects of this new phenomenon on the shape and abundance of CDM halos at very small scales.
Alternatively, the coupling can be introduced by choosing directly a covariant stress–energy tensor, treating dark energy as a fluid in the absence of a starting action (Mangano et al. 2003; Väliviita et al. 2008, 2010; CalderaCabral et al. 2009b; Schaefer et al. 2008; Majerotto et al. 2010; Gavela et al. 2009, 2010; CalderaCabral et al. 2009a).
I.6.1.3 Growing neutrinos
In the case of a coupling between the dark-energy scalar field \(\phi \) and the relic fraction of massive neutrinos, all the basic Eqs. (I.6.5)–(I.6.8) above still hold. However, such models are found to be cosmologically viable only for large negative values of the coupling \(\beta \) (as shown by Amendola et al. 2008a), which, according to Eq. (I.6.5), determine a neutrino mass that grows in time (hence these models have been dubbed “growing neutrinos”). An exponential growth of the neutrino mass implies that cosmological bounds on the neutrino mass are no longer applicable and that neutrinos remain relativistic much longer than in the standard scenario, which keeps them effectively uncoupled until recent epochs, according to Eqs. (I.6.3) and (I.6.4). However, as soon as neutrinos become nonrelativistic at redshift \(z_{\mathrm {nr}}\) due to the exponential growth of their mass, the pressure terms \(1-3w_{\nu }\) in Eqs. (I.6.3) and (I.6.4) no longer vanish and the coupling with the DE scalar field \(\phi \) becomes active.
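Schematically, the coupling makes the neutrino mass field-dependent (a sketch, cf. Amendola et al. 2008a; the notation here is illustrative):

```latex
m_{\nu}(\phi) = \bar{m}_{\nu}\, e^{-\beta\,\phi/M_{\mathrm{Pl}}},
```

so that for a large negative \(\beta \) the neutrino mass grows as \(\phi \) rolls down its potential; once the neutrinos become nonrelativistic, their coupling to the field effectively stops its evolution, which is what triggers the accelerated expansion in these models.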
Therefore, while before \(z_{\mathrm {nr}}\) the model behaves as a standard \(\varLambda \)CDM scenario, after \(z_{\mathrm {nr}}\) the nonrelativistic massive neutrinos obey the modified Newtonian Eq. (I.6.6) and a fast growth of neutrino density perturbation takes place due to the strong fifth force described by Eq. (I.6.8).
The growth of neutrino overdensities in the context of growing neutrinos models has been studied in the linear regime by Mota et al. (2008), predicting the formation of very large neutrino lumps at the scale of superclusters and above (10–100 Mpc/h) at redshift \(z\approx 1\).
The analysis has been extended to the nonlinear regime in Wintergerst et al. (2010) by following the spherical collapse of a neutrino lump in the context of growing neutrino cosmologies. This study has witnessed the onset of virialization processes in the nonlinear evolution of the neutrino halo at \(z\approx 1.3\), and provided a first estimate of the associated gravitational potential at virialization, of the order of \(\varPhi _{\nu }\approx 10^{-6}\) for a neutrino lump with radius \(R \approx 15\mathrm {\ Mpc}\).
An estimate of the potential impact of such very large nonlinear structures on the CMB angular power spectrum through the integrated Sachs–Wolfe effect has been attempted by Pettorino et al. (2010). This study has shown that the linear approximation fails in predicting the global impact of the model on CMB anisotropies at low multipoles, that the effects under consideration are very sensitive to the details of the transition between the linear and nonlinear regimes and of the virialization processes of nonlinear neutrino lumps, and that they also depend significantly on possible backreaction effects of the evolved neutrino density field on the local scalar field evolution.
A full nonlinear treatment by means of specifically designed N-body simulations is, therefore, required in order to follow in further detail the evolution of a cosmological sample of neutrino lumps beyond virialization, and to assess the impact of growing neutrino models on potentially observable quantities such as the low-multipole CMB power spectrum or the statistical properties of CDM large-scale structures. Simulations of the growing neutrino scenario were performed for the first time by Baldi et al. (2011) by means of a modified version of the Gadget-2 code, which assumed linearity of the scalar-field spatial perturbations (and consequently of the neutrino mass) and no backreaction of the growth of neutrino lumps on the overall background cosmic expansion. Although such approximations are quite restrictive, the simulations performed by Baldi et al. (2011) made it possible to follow the formation of a few large neutrino structures down to \(z\sim 1\), after which neutrino particles start to become relativistic, thereby breaking the Newtonian treatment of gravitational dynamics implemented in standard N-body algorithms. Such restrictions and approximations have subsequently been removed by the specific N-body algorithm developed by Ayaita et al. (2012), which self-consistently implements both the relativistic evolution of neutrino particles and the backreaction effect on the background cosmology, and which employs a Newton–Gauss–Seidel relaxation scheme to solve for nonlinear spatial fluctuations of the scalar field. Nonetheless, even this more accurate numerical treatment has so far been successfully employed only down to \(z\sim 1\) due to its high computational cost.
I.6.1.4 Modified gravity
Modified gravity models, presented in Sect. I.5, represent a different perspective to account for the nature of the dark components of the universe. Although most of the viable modifications of GR are constructed in order to provide an identical cosmic expansion history to the standard \(\varLambda \)CDM model, their effects on the growth of density perturbations could lead to observationally testable predictions capable of distinguishing modified gravity models from standard GR plus a cosmological constant.
Since a modification of the theory of gravity would affect all test masses in the universe, i.e., including standard baryonic matter, an asymptotic recovery of GR in solar-system environments, where deviations from GR are tightly constrained, is required of all viable modified gravity models. Such a “screening mechanism” represents the main difference between modified gravity models and the interacting dark-energy scenarios discussed above, by determining a local dependence of the modified gravitational laws in the Newtonian limit. Different modifications of the GR action might feature different types of screening mechanisms (see Sect. I.5 for an introduction to modified gravity theories and screening mechanisms)—as, e.g., the “Chameleon” (Khoury and Weltman 2004), the “Symmetron” (Hinterbichler and Khoury 2010), the “Dilaton” (Damour and Polyakov 1994) or the “Vainshtein” (Vainshtein 1972) mechanisms—which in turn might require different numerical implementations in order to solve for the fully nonlinear evolution of the additional degrees of freedom associated with the modifications of gravity.
While the linear growth of density perturbations in the context of modified gravity theories can be studied (see, e.g., Hu and Sawicki 2007a; Motohashi et al. 2010b; Amarzguioui et al. 2006; Appleby and Weller 2010) by parametrizing the scale dependence of the modified Poisson and Euler equations in Fourier space (see the discussion in Sect. I.3), the nonlinear evolution of the additional degrees of freedom of any viable modified gravity scenario makes the implementation of these theories into nonlinear N-body algorithms much more challenging. Nonetheless, enormous progress has been made over the past few years in the development of specific N-body codes for various classes of modified gravity cosmologies, such that the investigation of nonlinear structure formation for (at least some) alternative gravitational theories by means of dedicated N-body simulations is becoming a mature field of investigation in computational cosmology. The first simulations of modified gravity cosmologies, limited to the “Chameleon” screening mechanism featured by f(R) theories, have been performed by means of mesh-based iterative relaxation schemes (Oyaizu 2008; Oyaizu et al. 2008; Schmidt et al. 2009; Khoury and Wyman 2009; Zhao et al. 2011; Davis et al. 2012a; Winther et al. 2012) and showed an enhancement of the power-spectrum amplitude at intermediate and small scales. These studies also showed that this nonlinear enhancement of small-scale power cannot be accurately reproduced by applying the linear perturbed equations of each specific modified gravity theory to the standard nonlinear fitting formulae (as, e.g., Smith et al. 2003).
After these first pioneering studies, a very significant amount of work has been done both in extending the numerical implementation of modified gravity models within N-body algorithms and in using high-resolution N-body simulations to investigate the impact of various modifications of gravity on possible observable quantities.
Concerning the former aspect, the main advancements have been obtained by extending the range of possible screening mechanisms implemented in the modified gravity nonlinear solvers, in order to include “Symmetron” (see, e.g., Davis et al. 2012a), “Dilaton” (see, e.g., Brax et al. 2012b), and Vainshtein-like (see, e.g., Li et al. 2013a, b; Barreira et al. 2013, for the cases of general Vainshtein as well as for cubic and quartic Galileon models) mechanisms, and by optimising the nonlinear Poisson solvers. Presently, three main parallel codes for modified gravity models have been developed by independent groups, based on different underlying N-body numerical schemes and procedures, namely the ECOSMOG (Li et al. 2012c), the MG-GADGET (Puchwein et al. 2013), and the ISIS (Llinares et al. 2014) codes. The latter has also recently provided the first implementation of Chameleon and Symmetron modified gravity theories beyond the quasi-static approximation (Llinares and Mota 2013, 2014).
Concerning the latter aspect, a wide range of results on the impact of various modified gravity theories on several observable quantities has been obtained with the above-mentioned N-body codes, generally finding good agreement between the different algorithms, even though a properly controlled code-comparison study has yet to be performed. Among the most relevant results, it is worth mentioning the identification of modified gravity signatures in large-scale structure statistics (see, e.g., Li et al. 2012b; Lombriser et al. 2013, 2014; Arnold et al. 2014), in the environmental dependence of CDM halo properties (Winther et al. 2012), in the large-scale velocity field (see, e.g., Li et al. 2012a; Jennings et al. 2012; Hellwing et al. 2014), and in the ISW effect (see, e.g., Cai et al. 2013). Furthermore, a recent study (Baldi et al. 2014) performed with the MG-GADGET code has highlighted a severe observational degeneracy between the effects of an f(R) modification of gravity and a cosmological background of massive neutrinos.
Despite the huge advancements achieved in recent years in the field of nonlinear simulations of modified gravity models, higher-resolution simulations and new numerical approaches will be necessary in order to extend these results to smaller scales and to evaluate accurately, at a potentially detectable precision level, the deviations of specific modified gravity models from the standard GR predictions.
2.6.2 The spherical collapse model
A popular analytical approach to studying the nonlinear clustering of dark matter without resorting to N-body simulations is the spherical collapse model, first studied by Gunn and Gott (1972). In this approach, one studies the collapse of a spherical overdensity and determines its critical overdensity for collapse as a function of redshift. Combining this information with the extended Press–Schechter theory (Press and Schechter 1974; Bond et al. 1991; see Zentner et al. 2008 for a review), one obtains a statistical model for the formation of structures which allows one to predict the abundance of virialized objects as a function of their mass. Although it fails to match the details of N-body simulations, this simple model works surprisingly well and can give useful insights into the physics of structure formation. Improved models accounting for the complexity of the collapse exist in the literature and offer a better fit to numerical simulations. For instance, Sheth and Tormen (1999) showed that a significant improvement can be obtained by considering an ellipsoidal collapse model. Furthermore, recent theoretical developments and new improvements in the excursion set theory have been undertaken by Maggiore and Riotto (2010) and other authors (see, e.g., Shaw and Mota 2008).
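As a minimal numerical illustration of the two ingredients just described (a sketch, not code from any survey pipeline), the Einstein–de Sitter critical overdensity and the Press–Schechter collapsed mass fraction can be evaluated directly:

```python
import math

# Linear critical overdensity for spherical collapse in an
# Einstein-de Sitter universe: delta_c = (3/20) * (12*pi)^(2/3) ~ 1.686
delta_c = 3.0 / 20.0 * (12.0 * math.pi) ** (2.0 / 3.0)

def collapsed_fraction(nu):
    """Press-Schechter fraction of mass in halos above threshold,
    where nu = delta_c / sigma(M); equals erfc(nu / sqrt(2))."""
    return math.erfc(nu / math.sqrt(2.0))

print(f"delta_c   = {delta_c:.4f}")                  # ~1.6865
print(f"F(nu > 1) = {collapsed_fraction(1.0):.4f}")  # ~0.3173
```

The constant threshold 1.686 is the baseline against which dark-energy and modified-gravity models shift \(\delta_c\), as discussed in the following subsections.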
In the following we will discuss the spherical collapse model in the context of dark-energy and modified-gravity models.
I.6.2.1 Clustering dark energy
In its standard version, quintessence is described by a minimally coupled canonical field with speed of sound \(c_s=1\). As mentioned above, in this case clustering can only take place on scales larger than the horizon, where sound waves have no time to propagate. However, observations on such large scales are strongly limited by cosmic variance and this effect is difficult to observe. A minimally coupled scalar field with fluctuations characterized by a practically zero speed of sound can instead cluster on all observable scales. There are several theoretical motivations to consider this case. In the limit of zero sound speed one recovers the Ghost Condensate theory proposed by Arkani-Hamed et al. (2004b) in the context of modifications of gravity, which is invariant under the shift symmetry of the field \(\phi \rightarrow \phi + {\mathrm {constant}}\). Thus, there is no fine-tuning in assuming that the speed of sound is very small: quintessence models with vanishing speed of sound should be thought of as deformations of this particular limit where shift symmetry is recovered. Moreover, it has been shown that minimally coupled quintessence with an equation of state \(w < -1\) can be free from ghosts and gradient instabilities only if the speed of sound is very tiny, \(c_s \lesssim 10^{-15}\). Stability can be guaranteed by the presence of higher-derivative operators, although their effect is absent on cosmologically relevant scales (Creminelli et al. 2006, 2009; Cheung et al. 2008c).
The fact that the speed of sound of quintessence may vanish opens up new observational consequences. Indeed, the absence of quintessence pressure gradients allows instabilities to develop on all scales, including scales where dark matter perturbations become nonlinear. Thus, we expect quintessence to modify the growth history of dark matter not only through its different background evolution but also by actively participating in the structure formation mechanism, in both the linear and nonlinear regimes, and by contributing to the total mass of virialized halos.
Following Creminelli et al. (2010), in the limit of zero sound speed pressure gradients are negligible and, as long as the fluid approximation is valid, quintessence follows geodesics, remaining comoving with the dark matter (see also Lim et al. 2010 for a more recent model with identical phenomenology). In particular, one can study the effect of quintessence with vanishing sound speed on structure formation in the nonlinear regime, in the context of the spherical collapse model. The zero-speed-of-sound limit represents the natural counterpart of the opposite case \(c_s = 1\). Indeed, in both cases there are no characteristic length scales associated with the quintessence clustering and the spherical collapse remains independent of the size of the object (see Basse et al. 2011; Mota and van de Bruck 2004; Nunes and Mota 2006 for a study of the spherical collapse when \(c_s\) of quintessence is small but finite).
where the energy density of quintessence \(\rho _Q\) now has a different value inside and outside the overdensity, while the pressure remains unperturbed. In this case the quintessence inside the overdensity evolves following the internal scale factor R, \(\dot{\rho }_Q + 3 (\dot{R}/R) (\rho _Q + {\bar{p}}_Q) =0\), and the comoving regions behave as closed FLRW universes. R satisfies the Friedmann equation and the spherical collapse can be solved exactly (Creminelli et al. 2010).
Quintessence with zero speed of sound modifies dark matter clustering with respect to the smooth quintessence case through the linear growth function and the linear threshold for collapse. Indeed, for \(w > -1\) (\(w < -1\)) it enhances (diminishes) the clustering of dark matter, the effect being proportional to \(1+w\). The modifications to the critical threshold of collapse are small, and the effects on the dark matter mass function are dominated by the modification of the linear dark matter growth function. Besides these conventional effects there is a more important and qualitatively new phenomenon: the quintessence mass adds to that of the dark matter, contributing to the halo mass by a fraction of order \(\sim (1 + w)\, \varOmega _Q/\varOmega _m\). Importantly, it is possible to show that the mass associated with quintessence stays constant inside the virialized object, independently of the details of virialization. Moreover, the ratio between the virialization and the turnaround radii is approximately the same as the one for \(\varLambda \)CDM computed by Lahav et al. (1991). In Fig. 5, we plot the ratio of the mass function, including the quintessence mass contribution, for the \(c_s=0\) case to that of the smooth \(c_s=1\) case. The sum of the two effects is rather large: for values of w still compatible with present data and for large masses, the difference between the predictions of the \(c_s = 0\) and \(c_s = 1\) cases is of order one.
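The order of magnitude of this quintessence mass contribution can be sketched with illustrative parameter values (the numbers below are placeholders chosen for illustration, not fits to data):

```python
# Fractional quintessence contribution to the halo mass for c_s = 0,
# of order (1 + w) * Omega_Q / Omega_m (illustrative numbers only).
w = -0.9          # hypothetical equation of state, still close to -1
Omega_Q = 0.7     # quintessence density parameter today
Omega_m = 0.3     # matter density parameter today

mass_fraction = (1.0 + w) * Omega_Q / Omega_m
print(f"quintessence mass fraction ~ {mass_fraction:.2f}")  # ~0.23
```

Even for an equation of state only 10% away from \(w=-1\), the clustered quintessence contributes at the tens-of-percent level to the halo mass, consistent with the order-one effect on the mass function quoted above for large masses.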
I.6.2.2 Coupled dark energy
We now consider spherical collapse within coupled dark-energy cosmologies. The presence of an interaction that couples the cosmon dynamics to another species introduces a new force acting between particles (CDM or neutrinos in the examples mentioned in Sect. I.5.3) and mediated by dark-energy fluctuations. Whenever such a coupling is active, spherical collapse, whose concept is intrinsically based on gravitational attraction via the Friedmann equations, has to be suitably modified in order to account for other external forces. As shown in Wintergerst and Pettorino (2010), the inclusion of the fifth force within the spherical collapse picture deserves particular caution. Here we summarize the main results on this topic and refer to Wintergerst and Pettorino (2010) for a detailed illustration of spherical collapse in the presence of a fifth force.
I.6.2.3 Early dark energy
A convenient way to parametrize the presence of a non-negligible homogeneous dark-energy component at early times was presented in Wetterich (2004) and has been illustrated in Sect. I.2.1 of the present review. If we specialize the spherical collapse equations to this case, the nonlinear evolution of the density contrast follows the evolution Eqs. (I.6.16) and (I.6.17) without the terms related to the coupling. As before, we assume relativistic components to remain homogeneous. In Fig. 8, we show \(\delta _c\) for two models of early dark energy, namely models I and II, corresponding to the choices (\(\varOmega _{m,0} = 0.332\), \(w_0 = -0.93\), \(\varOmega _{\text {DE},e} = 2\times 10^{-4}\)) and (\(\varOmega _{m,0} = 0.314\), \(w_0 = -0.99\), \(\varOmega _{\text {DE},e} = 8\times 10^{-4}\)), respectively. Results show \(\delta _c(z_c = 5) \sim 1.685\) (\(\sim 5\times 10^{-2}\,\%\)) (Francis et al. 2008b; Wintergerst and Pettorino 2010).
I.6.2.4 Universal couplings
The results of the numerical collapse simulation and the fitting function are shown in Fig. 9.
In Fig. 10, we show an example of the derived mass function computed for \(|f_{R_0}|=10^{-5}\) at different redshifts (solid lines). The comparison to the simulations (points) shows good agreement, but not the high-precision agreement that would be required for a detailed cosmological data analysis using the mass function.
2.7 Observational properties of dark energy and modified gravity
Both scalar field dark-energy models and modifications of gravity can in principle lead to any desired expansion history H(z), or equivalently any evolution of the effective dark-energy equation of state parameter w(z). For canonical scalar fields, this can be achieved by selecting the appropriate potential \(V(\varphi )\) along the evolution of the scalar field \(\varphi (t)\), as was done, e.g., in Bassett et al. (2002). For modified gravity models, the same procedure can be followed, for example, for f(R)-type models (e.g., Pogosian and Silvestri 2008). The expansion history on its own thus cannot tell us much about the physical nature of the mechanism behind the accelerated expansion (although, of course, a clear measurement showing that \(w \ne -1\) would be a sensational discovery). A smoking gun for modifications of gravity can thus only appear at the perturbation level.
In the next subsections, we explore how dark energy or modified gravity effects can be detected through weak lensing and redshift surveys.
2.7.1 General remarks
Quite generally, cosmological observations fall into two categories: geometrical probes and structure formation probes. While the former provide a measurement of the Hubble function, the latter are a test of the gravitational theory in an almost Newtonian limit on sub-horizon scales. Furthermore, the possible effects on the geodesics of test particles need to be derived: photons follow null geodesics, while the massive particles that constitute the cosmic large-scale structure move along timelike geodesics, which reduce to Newtonian trajectories in the non-relativistic limit.
In some special cases, modified gravity models predict a strong deviation from the standard Friedmann equation, as in, e.g., DGP, Eq. (I.5.58). While the Friedmann equation is not known explicitly in more general models of massive gravity (cascading gravity or hard-mass gravity), similar modifications are expected to arise and provide characteristic features (see, e.g., Afshordi et al. 2009; Jain and Khoury 2010) that could distinguish these models from other modified gravity scenarios or from models with additional dynamical degrees of freedom.
In general, however, the most interesting signatures of modified gravity models are to be found in the perturbation sector. For instance, in DGP, growth functions differ from those in darkenergy models by a few percent for identical Hubble functions, and for that reason, an observation of both the Hubble and the growth function gives a handle on constraining the gravitational theory (Lue et al. 2004). The growth function can be estimated both through weak lensing and through galaxy clustering and redshift distortions.
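As a concrete GR baseline for such comparisons, the linear growth history in flat \(\varLambda \)CDM can be obtained by integrating the growth equation \(D'' + (3/a + \mathrm{d}\ln E/\mathrm{d}a)\,D' = \tfrac{3}{2}\varOmega_m D/(a^5 E^2)\). The pure-Python sketch below (illustrative parameter values, simple RK4 integrator, not survey code) recovers the standard approximation \(f \simeq \varOmega_m(a)^{0.55}\), relative to which a few-percent DGP-like deviation would be measured:

```python
import math

def growth(Om=0.3, a_ini=1e-3, a_end=1.0, n=10000):
    """Integrate the linear growth ODE in flat LCDM with scale factor a
    as time variable; start in matter domination with D = a, D' = 1."""
    OL = 1.0 - Om
    E2 = lambda a: Om / a**3 + OL                       # E^2 = (H/H0)^2
    dlnE = lambda a: -1.5 * Om / (a**4 * E2(a))          # d(ln E)/da
    rhs = lambda a, D, Dp: (-(3.0 / a + dlnE(a)) * Dp
                            + 1.5 * Om / (a**5 * E2(a)) * D)
    h = (a_end - a_ini) / n
    a, D, Dp = a_ini, a_ini, 1.0
    for _ in range(n):
        # classical RK4 step for the system (D, D')
        k1D, k1P = Dp, rhs(a, D, Dp)
        k2D, k2P = Dp + 0.5*h*k1P, rhs(a + 0.5*h, D + 0.5*h*k1D, Dp + 0.5*h*k1P)
        k3D, k3P = Dp + 0.5*h*k2P, rhs(a + 0.5*h, D + 0.5*h*k2D, Dp + 0.5*h*k2P)
        k4D, k4P = Dp + h*k3P, rhs(a + h, D + h*k3D, Dp + h*k3P)
        D  += h / 6.0 * (k1D + 2*k2D + 2*k3D + k4D)
        Dp += h / 6.0 * (k1P + 2*k2P + 2*k3P + k4P)
        a  += h
    return D, Dp

D, Dp = growth()
f = Dp / D   # growth rate f = dlnD/dlna evaluated at a = 1
print(f"f(z=0) = {f:.3f}  vs  Om^0.55 = {0.3 ** 0.55:.3f}")
```

In modified gravity the source term on the right-hand side acquires a scale- and time-dependent effective gravitational coupling, which is what shifts the growth index away from its GR value of about 0.55.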
Concerning the interactions of light with the cosmic large-scale structure, one sees, in general models, a modified coupling and a difference between the two metric potentials. These effects are present in the anisotropy pattern of the CMB, as shown in Sawicki and Carroll (2005), where smaller fluctuations were found on large angular scales. This can possibly alleviate the tension between the CMB and the \(\varLambda \)CDM model at small multipoles, where the CMB spectrum acquires smaller amplitudes due to the ISW effect on the last-scattering surface, but it provides a worse fit to supernova data. An interesting effect, inexplicable in GR, is the anticorrelation between the CMB temperature and the density of galaxies at high redshift due to a sign change in the integrated Sachs–Wolfe effect. Interestingly, this behavior is very common in modified gravity theories.
A very powerful probe of structure growth is of course weak lensing, but to evaluate the lensing effect it is important to understand the nonlinear structure formation dynamics, as a good part of the total signal is generated by small structures. Only recently has it become possible to perform structure formation simulations in modified gravity models, although still without a mechanism by which GR is recovered on very small scales, which is necessary for accordance with local tests of gravity.
In contrast, the number density of collapsed objects depends only weakly on nonlinear physics and can be used to investigate modified gravity cosmologies; one needs only to solve the dynamical equations for a spherically symmetric matter distribution. Modified gravity theories typically lower the collapse threshold for density fluctuations in the large-scale structure, leading to a higher comoving number density of galaxies and clusters of galaxies. This probe is degenerate with dark-energy cosmologies, which generically give the same trends.
Finally, supernova observations—capable of accurately mapping the expansion history of the universe—are themselves lensed by foreground matter structures. This extra spread in the Hubble diagram caused by lensing contains precious clustering information, which is encoded in the one-point lensing PDF and can be used to constrain parameters such as the power spectrum normalization \(\sigma _8\) or the growth index \(\gamma \). Therefore, forthcoming supernova catalogs can be seen as both geometrical and structure formation probes. It is important to point out that this one-point statistics is independent of, and complementary to, the methods based on cosmic shear and cluster abundance observables. See Marra et al. (2013c) and Quartin et al. (2014) and references therein for more details.
2.7.2 Observing modified gravity with weak lensing
The magnification matrix is a \(2\times 2\) matrix that relates the true shape of a galaxy to its image. It contains two distinct parts: the convergence, defined as the trace of the matrix, modifies the size of the image, whereas the shear, defined as the symmetric traceless part, distorts the shape of the image. At small scales the shear and the convergence are not independent. They satisfy a consistency relation and therefore contain the same information on matter density perturbations. More precisely, the shear and the convergence are both related to the sum of the two Bardeen potentials, \(\varPhi +\varPsi \), integrated along the photon trajectory. At large scales, however, this consistency relation no longer holds. Various relativistic effects contribute to the convergence, see Bonvin (2008). Some of these effects are generated along the photon trajectory, whereas others are due to the perturbations of the galaxies' redshifts. These relativistic effects provide independent information on the two Bardeen potentials, breaking their degeneracy. The convergence is therefore a useful quantity that can increase the discriminatory power of weak lensing.
The convergence can be measured through its effect on the galaxy number density, see, e.g., Broadhurst et al. (1995). The standard method extracts the magnification from correlations of distant quasars with foreground clusters, see Scranton et al. (2005), Menard et al. (2010). Recently, Zhang and Pen (2005, 2006) designed a new method that permits accurate measurements of the autocorrelation of the magnification as a function of the galaxies' redshift. This method potentially allows measurements of the relativistic effects in the convergence.
I.7.2.1 Magnification matrix
where \({}_s X\) is an arbitrary field of spin s and \(\theta \) and \(\varphi \) are spherical coordinates.
I.7.2.2 Observable quantities
We evaluate \(C_\ell ^{\mathrm {vel}}\) and \(C_\ell ^{\mathrm {st}}\) in a \(\varLambda \)CDM universe with \(\varOmega _m = 0.25\), \(\varOmega _\varLambda = 0.75\) and \(\delta _H=5.7\times 10^{-5}\). We approximate the transfer function with the BBKS formula, see Bardeen et al. (1986). In Fig. 11, we plot \(C_\ell ^{\mathrm {vel}}\) and \(C_\ell ^{\mathrm {st}}\) for various source redshifts. The amplitude of \(C_\ell ^{\mathrm {vel}}\) and \(C_\ell ^{\mathrm {st}}\) depends on \((\alpha -1)^2\), which varies with the redshift of the source, the flux threshold adopted, and the sky coverage of the experiment. Since \((\alpha -1)^2\) influences \(C_\ell ^{\mathrm {vel}}\) and \(C_\ell ^{\mathrm {st}}\) in the same way, we do not include it in our plot. Generally, at small redshifts, \((\alpha -1)\) is smaller than 1 and consequently the amplitude of both \(C_\ell ^{\mathrm {vel}}\) and \(C_\ell ^{\mathrm {st}}\) is slightly reduced, whereas at large redshifts \((\alpha -1)\) tends to be larger than 1 and to amplify \(C_\ell ^{\mathrm {vel}}\) and \(C_\ell ^{\mathrm {st}}\), see, e.g., Zhang and Pen (2006). However, the general features of the curves and, more importantly, the ratio between \(C_\ell ^{\mathrm {vel}}\) and \(C_\ell ^{\mathrm {st}}\) are not affected by \((\alpha -1)\).
Figure 11 shows that \(C_\ell ^{\mathrm {vel}}\) peaks at rather small \(\ell \), between 30 and 120 depending on the redshift. This corresponds to rather large angles, \(\theta \sim 90{-}360\mathrm {\ arcmin}\). This behavior differs from the standard term (Fig. 11), which peaks at large \(\ell \). Therefore, it is important to have large sky surveys to detect the velocity contribution. The relative importance of \(C_\ell ^{\mathrm {vel}}\) and \(C_\ell ^{\mathrm {st}}\) depends strongly on the redshift of the source. At small redshift, \(z_S=0.2\), the velocity contribution is about \(4\times 10^{-5}\) and is hence larger than the standard contribution, which reaches \(10^{-6}\). At redshift \(z_S=0.5\), \(C_\ell ^{\mathrm {vel}}\) is about 20% of \(C_\ell ^{\mathrm {st}}\), whereas at redshift \(z_S=1\), it is about 1% of \(C_\ell ^{\mathrm {st}}\). At redshift \(z_S=1.5\) and above, \(C_\ell ^{\mathrm {vel}}\) becomes very small with respect to \(C_\ell ^{\mathrm {st}}\): \(C_\ell ^{\mathrm {vel}} \le 10^{-4}\,C_\ell ^{\mathrm {st}}\). The enhancement of \(C_\ell ^{\mathrm {vel}}\) at small redshift together with its fast decrease at large redshift are due to the prefactor \(\left( \frac{1}{\mathcal {H}_S \chi _S}-1\right) ^2\) in Eq. (I.7.15). Thanks to this enhancement, we see that if the magnification can be measured with an accuracy of 10%, then the velocity contribution is observable up to redshifts \(z\le 0.6\). If the accuracy reaches 1%, then the velocity contribution becomes interesting up to redshifts of order 1.
2.7.3 Observing modified gravity with redshift surveys
Wide-deep galaxy redshift surveys have the power to yield information on both H(z) and \(f_{g}(z)\) through measurements of baryon acoustic oscillations (BAO) and redshift-space distortions. In particular, if gravity is not modified and matter is not interacting other than gravitationally, then a detection of the expansion rate is directly linked to a unique prediction of the growth rate. Otherwise, galaxy redshift surveys provide a unique and crucial way to make a combined analysis of H(z) and \(f_{g}(z)\) to test gravity. As a wide-deep survey, Euclid allows us to measure H(z) directly from BAO, but also indirectly through the angular diameter distance \(D_A(z)\) (and possibly distance ratios from weak lensing). Most importantly, the Euclid survey enables us to measure the cosmic growth history using two independent methods: \(f_g(z)\) from galaxy clustering, and G(z) from weak lensing. In the following we discuss the estimation of H(z), \(D_A(z)\), and \(f_g(z)\) from galaxy clustering.
From the measurement of the BAO in the matter power spectrum or in the two-point correlation function one can infer information on the expansion rate of the universe. In fact, the sound waves imprinted in the CMB can also be detected in the clustering of galaxies, thereby completing an important test of our theory of gravitational structure formation.
The characteristic scale of the BAO is set by the sound horizon at decoupling. Consequently, one can obtain the angular diameter distance and the Hubble parameter separately. The BAO scale along the line of sight, \(s_{\parallel }(z)\), measures H(z) through \(H(z) = c\,\varDelta z/s_{\parallel }(z)\), while the transverse mode measures the angular diameter distance, \(D_A(z) = s_{\perp }/[\varDelta \theta \,(1+z)]\).
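In code, the two estimators are one-liners once the BAO extents are measured; the sound horizon and the "measured" \(\varDelta z\) and \(\varDelta \theta \) below are illustrative placeholder values, not survey data:

```python
C_KM_S = 299792.458   # speed of light [km/s]
r_s = 150.0           # comoving sound horizon [Mpc], illustrative value
z = 1.0
delta_z = 0.062       # hypothetical radial BAO extent in redshift
delta_theta = 0.044   # hypothetical transverse BAO extent [rad]

# Radial mode: H(z) = c * Delta z / s_parallel, with s_parallel = r_s
H_z = C_KM_S * delta_z / r_s                  # [km/s/Mpc]

# Transverse mode: D_A(z) = s_perp / (Delta theta * (1 + z)), s_perp = r_s
D_A = r_s / (delta_theta * (1.0 + z))         # [Mpc]

print(f"H(z=1)   ~ {H_z:.1f} km/s/Mpc")
print(f"D_A(z=1) ~ {D_A:.0f} Mpc")
```

With these placeholder inputs both estimators return values of the expected magnitude for \(z=1\) in a \(\varLambda \)CDM-like background, which is the point of the exercise: the radial and transverse BAO modes constrain H(z) and \(D_A(z)\) independently.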
One can then use the power spectrum to derive predictions on the parameter constraining power of the survey (see, e.g., Amendola et al. 2005a; Guzzo et al. 2008; Wang 2008a; Wang et al. 2010; Di Porto et al. 2012).
In general, bias can be measured from weak lensing through the comparison of the shear–shear and shear–galaxy correlation functions. A combined constraint on bias and the growth factor G(z) can be derived from weak lensing by comparing the cross-correlations of multiple redshift slices.
Of course, if bias is assumed to be linear (\(b_2=0\)) and scale-independent, or is parametrized in some simple way, e.g., with a power-law scale dependence, then it is possible to estimate it even from linear galaxy clustering alone, as we will see in Sect. I.8.3.
2.7.4 Constraining modified gravity with galaxy–CMB correlations
Two of the above-mentioned observable signatures of dark energy and modified gravity are especially suitable for studying the time evolution of dark energy at the perturbative level: the ISW effect and CMB lensing. Both effects produce subdominant secondary anisotropies imprinted on the CMB at late times, and can be measured as a function of redshift by cross-correlating CMB temperature and lensing maps with galaxy surveys, thus allowing a tomographic analysis of the dark energy properties.
I.7.4.1 The ISW effect
The ISW effect has been detected, in agreement with the \(\varLambda \)CDM predictions, at the \(\sim 4\sigma \) significance level by cross-correlating WMAP and Planck CMB data with numerous galaxy catalogues: see Ho et al. (2008), Giannantonio et al. (2008), Giannantonio et al. (2012a) and references therein.
Future galaxy surveys, including the Euclid satellite, are expected to improve current ISW measurements by increasing redshift depth and survey volume, thus allowing a consistent tomographic study from a single galaxy survey, as well as by improving the control of systematics; the total signal-to-noise is, however, not expected to exceed the \(\sim 8\sigma \) level (Crittenden and Turok 1996) in the \(\varLambda \)CDM scenario, since the ISW signal peaks on the largest scales, which are dominated by cosmic variance. The measurement of the ISW at high redshift nevertheless has significant discovery potential: if exotic dark energy models are correct, the actual level of the ISW may be significantly higher.
I.7.4.2 CMB lensing
Maps of the CMB lensing potential have been reconstructed from higher-order statistics of the CMB temperature maps by the Planck (Planck Collaboration 2014b), South Pole Telescope (van Engelen et al. 2012) and Atacama Cosmology Telescope (Das et al. 2011b) surveys; cross-correlations between these lensing maps and galaxy surveys have also been confirmed with these three data sets (see, e.g., Giannantonio and Percival 2014). Such cross-correlations once again allow one to study the redshift evolution of the gravitational potentials, and thus the physical properties of the dark sector.
Upcoming and future galaxy surveys leading up to the Euclid satellite mission, combined with rapidly improving CMB data, will increase the signal-to-noise of the CMB lensing cross-correlations well beyond the current levels, since the CMB lensing signal peaks on smaller scales, which are currently dominated by statistical and systematic errors rather than by cosmic variance.
2.7.5 Cosmological bulk flows
As we have seen, the additional redshift induced by the galaxy peculiar velocity field generates the redshift distortion in the power spectrum. In this section we discuss a related effect on the luminosity of galaxies and its use to measure the peculiar velocity field in large volumes, the so-called bulk flow.
Over the years the bulk flow has been estimated from the measured peculiar velocities of a large variety of objects, ranging from galaxies (Giovanelli et al. 1998a, b; Dekel et al. 1999; Courteau et al. 2000; da Costa et al. 2000; Sarkar et al. 2007) to clusters of galaxies (Lauer and Postman 1994; Branchini et al. 1996; Hudson et al. 2004) and SNe Ia (Riess et al. 1995). Conflicting results, triggered by the use of error-prone distance indicators, have fueled a long-lasting and still ongoing controversy over the amplitude and convergence of the bulk flow. For example, the recent claim of a bulk flow of \(407\pm 81\mathrm {\ km\ s}^{-1}\) within \(R=50\,h^{-1}\mathrm {\ Mpc}\) (Watkins et al. 2009), inconsistent with the expectation from the \(\varLambda \)CDM model, has been seriously challenged by the reanalysis of the same data by Nusser and Davis (2011), who found a bulk flow amplitude consistent with \(\varLambda \)CDM expectations and from which they were able to set the strongest constraints on modified gravity models so far. On larger scales, Kashlinsky et al. (2010) claimed the detection of a dipole anisotropy attributed to the kinetic SZ decrement in the WMAP temperature map at the position of X-ray galaxy clusters. When interpreted as a coherent motion, this signal would indicate a gigantic bulk flow of \(1028\pm 265\mathrm {\ km\ s}^{-1}\) within \(R=528\,h^{-1}\mathrm {\ Mpc}\). This highly debated result has been seriously questioned by independent analyses of WMAP data (see, e.g., Osborne et al. 2011).
The large, homogeneous dataset expected from Euclid has the potential to settle these issues. The idea is to measure bulk flows in large redshift surveys from the apparent dimming or brightening of galaxies due to their peculiar motion. The method, originally proposed by Tammann et al. (1979), has recently been extended by Nusser et al. (2011), who propose to estimate the bulk flow by minimizing systematic variations in galaxy luminosities with respect to a reference luminosity function measured from the whole survey. It turns out that, if applied to the photo-z catalog expected from Euclid, this method would be able to detect at \(>5\sigma \) significance a bulk flow like the one of Watkins et al. (2009) over \(\sim 50\) independent spherical volumes at \(z \ge 0.2\), provided that the systematic magnitude offset over the corresponding areas in the sky does not exceed the expected random magnitude errors of 0.02–0.04 mag. Additionally, photometric or spectroscopic redshifts could be used to validate or disprove with very large (\(>7\sigma \)) significance the claimed bulk flow detection of Kashlinsky et al. (2010) at \(z=0.5\).
As in the bulk flow case, despite the many measurements of cosmological dipoles using galaxies (Yahil et al. 1980; Davis and Huchra 1982; Meiksin and Davis 1986; Strauss et al. 1992; Schmoldt et al. 1999; Kocevski and Ebeling 2006), there is still no general consensus on the scale of convergence, or even on the convergence itself. Even recent analyses measuring the acceleration of the Local Group from the 2MASS redshift catalogs have provided conflicting results: Erdoğdu et al. (2006) found that the galaxy dipole seems to converge beyond \(R=60\,h^{-1}\mathrm {\ Mpc}\), whereas Lavaux et al. (2010) find no convergence within \(R=120\,h^{-1}\mathrm {\ Mpc}\).
Once again, Euclid will be in a position to resolve this controversy by measuring the galaxy and cluster dipoles not only at the LG position and out to very large radii, but also in several independent and truly all-sky spherical samples carved out from the observed areas with \(|b|>20^{\circ }\). In particular, coupling photometry with photo-z, one expects to be able to estimate the convergence scale of the flux-weighted dipole over about 100 independent spheres of radius \(200\,h^{-1}\mathrm {\ Mpc}\) out to \(z=0.5\) and, beyond that, to compare number-weighted and flux-weighted dipoles over a larger number of similar volumes using spectroscopic redshifts.
Similarly, the growth rate can be constrained by studying the possibility of a Hubble bubble, a local region of space with a (slightly) different Hubble rate. This study was triggered by the fact that global observables such as Planck and BAO (Planck Collaboration 2014a, Table 5) yield a present-day Hubble constant 9% lower than local measurements performed by considering recession velocities of objects around us (Riess et al. 2011). This \(2.4\sigma \) tension could be relieved if the effect of a local Hubble bubble is taken into account, see Marra et al. (2013a) and references therein. With Euclid one will of course use the data the other way around, using observations to constrain Hubble bubbles (velocity monopoles) at different radii, and thereby the growth rate of matter structures, similarly to what was discussed regarding the bulk flow.
2.7.6 Model-independent observations
When observing galaxies we measure their redshift and angular position. To convert this into a three-dimensional galaxy catalog we must make model assumptions in order to relate the observed redshift to a distance. For small redshift, \(z\ll 1\), the simple relation \(r(z)=z/H_0\) can be used. When expressing distances in units of \(h^{-1}\) Mpc the uncertainty in the measurement of \(H_0\) is then absorbed in \(h=H_0/(100\ \mathrm{km\,s^{-1}\,Mpc^{-1}})\). However, when \(z\simeq 1\) the distance \(r(z)=\chi (z)/H_0\) depends on the full cosmic expansion history, i.e., on the parameters \(\varOmega _m\), \(\varOmega _k\), \(\varOmega _{DE}=1-\varOmega _m-\varOmega _k\) and \(w_{DE}\), see Eq. (I.7.22), and wrong assumptions about the distance-redshift relation will bias the entire catalog in a nontrivial way.
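As a concrete sketch (in Python, with illustrative parameter values and our own function names, not Euclid pipeline code), the distance-redshift relation for a constant \(w_{DE}\) can be obtained by integrating \(1/E(z)\):

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light [km/s]

def E(z, Om=0.27, Ok=0.0, w=-1.0):
    """Dimensionless Hubble rate H(z)/H0 for a constant-w dark-energy model."""
    Ode = 1.0 - Om - Ok  # Omega_DE = 1 - Omega_m - Omega_k
    return np.sqrt(Om * (1 + z)**3 + Ok * (1 + z)**2
                   + Ode * (1 + z)**(3 * (1 + w)))

def comoving_distance(z, **params):
    """Comoving distance r(z) in h^-1 Mpc (H0 = 100 h km/s/Mpc)."""
    chi, _ = quad(lambda zp: 1.0 / E(zp, **params), 0.0, z)
    return (C_KM_S / 100.0) * chi
```

For \(z\ll 1\) this reduces to \(r(z)\simeq 2998\,z\ h^{-1}\) Mpc, recovering the linear relation quoted above; at \(z\simeq 1\) the result depends on all background parameters, which is exactly why wrong assumptions bias the catalog.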
The power spectra for \(z_1=z_2 = 0.1,~0.5,~1\) and 3 are shown in the lower left panel of Fig. 14, and \(C_{20}(1,z_2)\) is plotted as a function of \(z_2\) in the lower right panel.
These directly observed quantities are functions of three variables, \(\theta \), \(z_1\) and \(z_2\). They are therefore harder to infer from observations than a function of only one variable; in particular, the shot-noise problem is much more severe. However, they also contain more information: they combine in a nontrivial way the clustering information given by \(\xi (r)\) and geometrical information about the evolution of distances in the Universe via \(r(\theta ,z_1,z_2)\). The Euclid galaxy survey will be sufficiently large to beat down the significant shot noise and profit maximally from this cleaner separation between observations and modeling; see Sect. I.8.11 for forecasts.
As an illustration of what can be done with this correlation function, we briefly consider the baryon acoustic oscillations (BAOs). The transverse BAOs at fixed redshift z in \(\xi (\theta ,z,z)\) are shown in the top left panel of Fig. 14.
2.8 Forecasts for Euclid
Here we describe forecasts for the constraints on modified-gravity parameters which Euclid observations should be able to achieve. We begin by reviewing the relevant works in the literature. Then, after defining our “Euclid model”, i.e., the main specifics of the redshift and weak-lensing surveys, we illustrate a number of Euclid forecasts obtained through a Fisher matrix approach.
2.8.1 A review of forecasts for parametrized modified gravity with Euclid
Thomas et al. (2009) construct Fisher matrix forecasts for the Euclid weak lensing survey, shown in Fig. 15. The constraints obtained depend on the maximum multipole which we are confident in using: \(\ell _{\max }=500\) is relatively conservative as it probes the linear regime, where we can hope to track the growth of structure analytically; \(\ell _{\max }=10{,}000\) is more ambitious as it includes nonlinear power, modeled with the Smith et al. (2003) fitting function. This will not be strictly correct, as the fitting function was determined in a GR context. Note that \(\gamma \) is not very sensitive to \(\ell _{\max }\), while \(\varSigma _0\), defined in Amendola et al. (2008b) through \(\varSigma = 1 + \varSigma _0 a\) [where \(\varSigma \) is defined in Eq. (I.3.28)], is measured much more accurately in the nonlinear regime.
Amendola et al. (2008b) find Euclid weak lensing constraints for a more general parameterization that includes evolution. In particular, \(\varSigma (z)\) is investigated by dividing the Euclid weak lensing survey into three redshift bins with equal numbers of galaxies in each bin, and approximating \(\varSigma \) as constant within each bin. Since \(\varSigma _1\), i.e., the value of \(\varSigma \) in the \(a=1\) (present-day) bin, is degenerate with the amplitude of matter fluctuations, it is set to unity. The study finds that a deviation of \(\varSigma \) from unity (i.e., from GR) of 3% can be detected in the second redshift bin, and a deviation of 10% is still detected in the furthest redshift bin.
The forecasts for modified gravity parameters are shown in Fig. 16 for the Euclid lensing data. Even with this larger set of parameters to fit, Euclid provides a measurement of the growth index \(\gamma \) to within 10%, and also allows some constraint on the \(\alpha _1\) parameter, probing the physics of nonlinear collapse in the modified gravity model.
Finally, Song et al. (2010) have shown forecasts for measuring \(\varSigma \) and \(\mu \) using both imaging and spectroscopic surveys. They combine 15,000 square-degree lensing data (corresponding to Laureijs et al. 2009 rather than to the updated Laureijs et al. 2011) with the peculiar velocity dispersion measured from redshift-space distortions in the spectroscopic survey, together with stringent background expansion measurements from the CMB and supernovae. They find that, for simple models of the redshift evolution of \(\varSigma \) and \(\mu \), both quantities can be measured to 20% accuracy.
2.8.2 Euclid surveys
The Euclid mission will produce a catalog of up to 30 million galaxy redshifts with \(f_{\mathrm {H}\alpha } > 3 \times 10^{-16}\mathrm {\ erg\ cm^{-2}\ s^{-1}}\), 50 million with \(f_{\mathrm {H}\alpha } > 2 \times 10^{-16}\mathrm {\ erg\ cm^{-2}\ s^{-1}}\), and an imaging survey that should allow the estimation of galaxy ellipticities for up to 1.5 billion galaxy images with photometric redshifts. Here we discuss these surveys and fix their main properties into a “Euclid model”, i.e., an approximation to the real Euclid survey that will be used as the reference mission in the following.
Modeling the Redshift Survey.
The main goals of next-generation redshift surveys will be to constrain the dark-energy parameters and to explore models alternative to standard Einstein gravity. For these purposes they will need to cover very large volumes that encompass \(z\sim 1\), i.e., the epoch at which dark energy started dominating the energy budget, spanning a range of epochs large enough to provide sufficient leverage to discriminate among competing models at different redshifts.
Here we consider a survey covering a large fraction of the extragalactic sky, corresponding to \(\sim \,15{,}000\mathrm {\ deg}^2\), capable of measuring a large number of galaxy redshifts out to \(z\sim 2\). A promising observational strategy is to target H\(\alpha \) emitters at near-infrared wavelengths (which implies \(z>0.5\)) since they guarantee both relatively dense sampling (the space density of this population is expected to increase out to \(z\sim 2\)) and an efficient method to measure the redshift of the object. The limiting flux of the survey should be a trade-off between the requirements of minimizing the shot noise and the contamination by other lines (chief among them the [O ii] line), and that of maximizing the so-called efficiency \(\varepsilon \), i.e., the fraction of successfully measured redshifts. To minimize shot noise one should obviously strive for a low limiting flux. Indeed, Geach et al. (2010) found that a limiting flux \(f_{\mathrm {H}\alpha } \ge 1\times 10^{-16}\mathrm {\ erg\ cm^{-2}\ s^{-1}}\) would be able to balance shot noise and cosmic variance out to \(z=1.5\). However, simulated observations of mock H\(\alpha \) galaxy spectra have shown that \(\varepsilon \) ranges between 30 and 60% (depending on the redshift) for a limiting flux \(f_{\mathrm {H}\alpha }\ge 3\times 10^{-16}\mathrm {\ erg\ cm^{-2}\ s^{-1}}\) (Laureijs et al. 2011). Moreover, contamination from the [O ii] line drops from 12 to 1% when the limiting flux increases from \(1\times 10^{-16}\) to \(5\times 10^{-16}\mathrm {\ erg\ cm^{-2}\ s^{-1}}\) (Geach et al. 2010).
Taking all this into account, in order to reach the top-level science requirement on the number density of H\(\alpha \) galaxies, the average effective H\(\alpha \) line-flux limit for a 1-arcsec diameter source shall be lower than or equal to \(3\times 10^{-16}\mathrm {\ erg\ cm^{-2}\ s^{-1}}\). However, a slitless spectroscopic survey has a success rate in measuring redshifts that is a function of the emission-line flux. As such, the Euclid survey cannot be characterized by a single flux limit, as in conventional slit spectroscopy.
We use the number density of H\(\alpha \) galaxies at a given redshift, n(z), estimated using the latest empirical data (see Figure 3.2 of Laureijs et al. 2011), where the values account for the redshift- and flux-dependent success rate, which we refer to as our reference efficiency \(\varepsilon _r\). We then consider three cases:

Reference case (ref.). Galaxy number density n(z), which includes the efficiency \(\varepsilon _r\) (column \(n_2(z)\) in Table 3).

Pessimistic case (pess.). Galaxy number density \(n(z)\cdot 0.5\), i.e., efficiency is \(\varepsilon _{r}\cdot 0.5\) (column \(n_3(z)\) in Table 3).

Optimistic case (opt.). Galaxy number density \(n(z)\cdot 1.4\), i.e., efficiency is \(\varepsilon _{r}\cdot 1.4\) (column \(n_1(z)\) in Table 3).
Expected galaxy number densities in units of \((h/\mathrm {Mpc})^{3}\) for the Euclid survey
z  \(n_{1}(z) \times 10^{3}\)  \(n_{2}(z)\times 10^{3}\)  \(n_{3}(z) \times 10^{3}\) 

0.65–0.75  1.75  1.25  0.63 
0.75–0.85  2.68  1.92  0.96 
0.85–0.95  2.56  1.83  0.91 
0.95–1.05  2.35  1.68  0.84 
1.05–1.15  2.12  1.51  0.76 
1.15–1.25  1.88  1.35  0.67 
1.25–1.35  1.68  1.20  0.60 
1.35–1.45  1.40  1.00  0.50 
1.45–1.55  1.12  0.80  0.40 
1.55–1.65  0.81  0.58  0.29 
1.65–1.75  0.53  0.38  0.19 
1.75–1.85  0.49  0.35  0.18 
1.85–1.95  0.29  0.21  0.10 
1.95–2.05  0.16  0.11  0.06 
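As a rough, illustrative check (the power-spectrum amplitude `P_FID` below is an assumed round number for biased galaxies at \(k\sim 0.1\,h\,\mathrm{Mpc}^{-1}\), not a Euclid specification), the tabulated reference densities \(n_2(z)\) can be used to gauge where the survey remains sample-variance rather than shot-noise limited via the product nP:

```python
# nP > 1 means the clustering signal dominates over shot noise at the
# chosen scale. P_FID is an assumed, illustrative amplitude only.
P_FID = 5.0e3  # (h^-1 Mpc)^3 at k ~ 0.1 h/Mpc, assumed

# Reference densities n_2(z) from the table above, in (h/Mpc)^3:
n_ref = {0.7: 1.25e-3, 1.0: 1.68e-3, 1.5: 0.80e-3, 2.0: 0.11e-3}

def n_P(z, P=P_FID):
    """Signal-to-shot-noise figure nP at a tabulated bin center z."""
    return n_ref[z] * P

for z in sorted(n_ref):
    print(f"z = {z}: nP = {n_P(z):.2f}")
```

Under this assumed amplitude the survey is comfortably sample-variance limited out to \(z\sim 1.5\), while shot noise takes over in the highest-redshift bins, consistent with the trade-offs discussed above.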
2.8.3 Forecasts for the growth rate from the redshift survey
In this section, we forecast the constraints that future observations can put on the growth rate and on a scaleindependent bias, employing the Fisher matrix method presented in Sect. I.7.3. We use the representative Euclid survey presented in Sect. I.8.2. We assess how well one can constrain the bias function from the analysis of the power spectrum itself and evaluate the impact that treating bias as a free parameter has on the estimates of the growth factor. We estimate how errors depend on the parametrization of the growth factor and on the number and type of degrees of freedom in the analysis. Finally, we explicitly explore the case of coupling between dark energy and dark matter and assess the ability of measuring the coupling constant. Our parametrization is defined as follows. More details can be found in Di Porto et al. (2012).
Equation of state
Growth rate

f-parameterization. This is in fact a nonparametric model in which the growth rate itself is modeled as a stepwise function \(f_g(z)=f_{i}\), specified in different redshift bins. Errors on \(f_{i}\) are derived in each i-th redshift bin of the survey.
 \(\gamma \)-parameterization. As a second case we assume$$\begin{aligned} f_g\equiv \varOmega _{m}(z)^{\gamma (z)}, \end{aligned}$$(I.8.4)where the \(\gamma (z)\) function is parametrized as$$\begin{aligned} \gamma (z)=\gamma _{0}+\gamma _{1}\frac{z}{1+z}. \end{aligned}$$(I.8.5)As shown by Wu et al. (2009) and Fu et al. (2009), this parameterization is more accurate than that of Eq. (I.8.3) for both \(\varLambda \)CDM and DGP models. Furthermore, this parameterization is especially effective in distinguishing between a wCDM model (i.e., a dark-energy model with a constant equation of state), which has a negative \(\gamma _{1}\) (\(-0.020\lesssim \gamma _{1}\lesssim -0.016\)), and a DGP model, which instead has a positive \(\gamma _{1}\) (\(0.035<\gamma _{1}<0.042\)). In addition, modified gravity models show a strongly evolving \(\gamma (z)\) (Gannouji et al. 2009; Motohashi et al. 2010a; Fu et al. 2009), in contrast with conventional dark-energy models. As a special case we also consider \(\gamma =\) constant (only when w is also assumed constant), to compare our results with those of previous works.
 \(\eta \)-parameterization. To explore models in which perturbations grow faster than in \(\varLambda \)CDM, as in the case of a coupling between dark energy and dark matter (Di Porto and Amendola 2008), we consider a model in which \(\gamma \) is constant and the growth rate varies as$$\begin{aligned} f_g\equiv \varOmega _{m}(z)^{\gamma }(1+\eta ), \end{aligned}$$(I.8.6)where \(\eta \) quantifies the strength of the coupling. The coupled quintessence model worked out by Di Porto and Amendola (2008) illustrates this point: there, the numerical solution for the growth rate can be fitted by formula (I.8.6) with \(\eta =c\beta _{c}^{2}\), where \(\beta _{c}\) is the dark energy-dark matter coupling constant, with best-fit values \(\gamma =0.56\) and \(c=2.1\). In this simple case, observational constraints on \(\eta \) can be readily transformed into constraints on \(\beta _{c}\).
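The \(\gamma \)- and \(\eta \)-parameterizations above can be sketched as follows (a minimal Python illustration assuming a flat wCDM background with the fiducial values of the text; the helper names are ours, not from the Euclid analysis code):

```python
# Fiducial values follow the text: Omega_m0 = 0.271, w = -0.95, gamma = 0.545.
def Omega_m_z(z, Om0=0.271, w=-0.95):
    """Matter fraction Omega_m(z) for a flat wCDM background."""
    m = Om0 * (1.0 + z)**3
    de = (1.0 - Om0) * (1.0 + z)**(3.0 * (1.0 + w))
    return m / (m + de)

def f_gamma(z, g0=0.545, g1=0.0):
    """gamma-parameterization, Eqs. (I.8.4)-(I.8.5)."""
    return Omega_m_z(z) ** (g0 + g1 * z / (1.0 + z))

def f_eta(z, gamma=0.56, eta=0.0):
    """eta-parameterization, Eq. (I.8.6): the coupling boosts the growth rate."""
    return Omega_m_z(z) ** gamma * (1.0 + eta)
```

Since \(\varOmega_m(z)\to 1\) at high redshift, \(f_g\) approaches unity there for any constant \(\gamma\), while a nonzero \(\eta\) shifts the whole curve upward, which is what makes the coupled model distinguishable from a constant-\(\gamma\) fit.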
We assume as reference model a “pseudo” \(\varLambda \)CDM, in which the growth-rate values are obtained from Eq. (I.8.3) with \(\gamma =0.545\) and \(\varOmega _m(z)\) given by the standard evolution. \(\varOmega _m(z)\) is then completely specified by setting \(\varOmega _{m,0}=0.271\), \(\varOmega _k=0\), \(w_0=-0.95\), \(w_1=0\). When the corresponding parameterizations are employed, we choose as fiducial values \(\gamma _{1}=0\) and \(\eta =0\). We also assume a primordial slope \(n_s=0.966\) and a present normalization \(\sigma _8=0.809\).

DGP model. We consider the flat-space case studied in Maartens and Majerotto (2006). When we adopt this model we set \(\gamma _{0}=0.663\), \(\gamma _{1}=0.041\) (Fu et al. 2009), or \(\gamma =0.68\) (Linder and Cahn 2007) and \(w=-0.8\) when \(\gamma \) and w are assumed constant.

f(R) model. Here we consider different classes of f(R) models: (i) the one proposed in Hu and Sawicki (2007a), depending on two parameters, n and \(\mu \), in Eq. (I.5.36), which we fix to \(n=0.5,1,2\) and \(\mu =3\). For the model with \(n=2\) we assume \(\gamma _{0}=0.43\), \(\gamma _{1}=-0.2\), values that apply quite generally in the limit of small scales [provided they are still linear, see Gannouji et al. (2009)], or \(\gamma =0.4\) and \(w=-0.99\). Unless otherwise specified, we will always refer to this specific model when we mention comparisons to a single f(R) model. (ii) The model of Eq. (I.5.37), proposed in Starobinsky (2007), where we fix \(\mu =3\) and \(n=2\), which shows a very similar behavior to the previous one. (iii) The one proposed in Tsujikawa (2008), Eq. (I.5.38), fixing \(\mu =1\).

Coupled dark-energy (CDE) model. This is the coupled model proposed by Amendola (2000a) and Wetterich (1995). In this case we assume \(\gamma _{0}=0.56\) and \(\eta =0.056\) (this value comes from setting the coupling to \(\beta _{c}=0.16\), which is of the order of the maximal value allowed by CMB constraints; Amendola and Quercellini 2003). As already explained, this model cannot be reproduced by a constant \(\gamma \). Forecasts on coupled quintessence based on Amendola et al. (2011), Amendola (2000a) and Pettorino and Baccigalupi (2008) are discussed in more detail in Sect. I.8.8.
Now we are ready to present the main results of the Fisher matrix analysis. We note that in all tables below we always quote errors at the 68% probability level and draw in the plots the probability regions at 68 and/or 95% (denoted for shortness as 1 and \(2\sigma \) values). Moreover, in all figures, all the parameters that are not shown have been marginalized over, or fixed to a fiducial value when so indicated.
Results for the fparameterization
The total number of parameters that enter the Fisher matrix analysis is 45: 5 parameters that describe the background cosmology (\(\varOmega _{m,0}h^{2},\varOmega _{b,0}h^{2},\) h, n, \(\varOmega _{k}\)) plus 5 z-dependent parameters specified in 8 redshift bins evenly spaced in the range \(z=[0.5,2.1]\). They are \(P_{\text {s}}(z)\), D(z), H(z), \(f_g(z)\), b(z). However, since we are not interested in constraining D(z) and H(z), we always project them onto the set of parameters they depend on (as explained in Seo and Eisenstein 2003) instead of marginalizing over them, thereby extracting more information on the background parameters.
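The distinction between fixing nuisance parameters and marginalizing over them, used throughout the forecasts below, can be made concrete with a small numerical sketch (the 3×3 Fisher matrix is invented for illustration and is not a Euclid result):

```python
import numpy as np

# Invented, symmetric positive-definite Fisher matrix for (p1, p2, nuisance).
F = np.array([[40.0,  5.0,  8.0],
              [ 5.0, 30.0,  6.0],
              [ 8.0,  6.0, 20.0]])

def errors_marginalized(F, keep):
    """Marginalize: invert the full Fisher, then read the covariance sub-block."""
    cov = np.linalg.inv(F)
    return np.sqrt(np.diag(cov[np.ix_(keep, keep)]))

def errors_fixed(F, keep):
    """Fix the other parameters: take the Fisher sub-block first, then invert."""
    return np.sqrt(np.diag(np.linalg.inv(F[np.ix_(keep, keep)])))

marg = errors_marginalized(F, [0, 1])
fixd = errors_fixed(F, [0, 1])
# Marginalizing never shrinks the errors relative to fixing the nuisance.
```

This ordering (marginalized errors at least as large as fixed-parameter errors) is a general property of positive-definite Fisher matrices, and it is why the tables below systematically show larger uncertainties as more parameters are marginalized over.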
\(1\sigma \) marginalized errors for the bias and the growth rates in each redshift bin
z  \(\sigma _{b}\)  \(b^F\)  z  \(f_g^F\)  \(\sigma _{f_g}\)  

Ref.  Opt.  Pess.  Ref.  Opt.  Pess.  
0.7  0.016  0.015  0.019  1.30  0.7  0.76  0.011  0.010  0.012 
0.8  0.014  0.014  0.017  1.34  0.8  0.80  0.010  0.009  0.011 
0.9  0.014  0.013  0.017  1.38  0.9  0.82  0.009  0.009  0.011 
1.0  0.013  0.012  0.016  1.41  1.0  0.84  0.009  0.008  0.011 
1.1  0.013  0.012  0.016  1.45  1.1  0.86  0.009  0.008  0.011 
1.2  0.013  0.012  0.016  1.48  1.2  0.87  0.009  0.009  0.011 
1.3  0.013  0.012  0.016  1.52  1.3  0.88  0.010  0.009  0.012 
1.4  0.013  0.012  0.016  1.55  1.4  0.89  0.010  0.009  0.013 
1.5  0.013  0.012  0.016  1.58  1.5  0.91  0.011  0.010  0.014 
1.6  0.013  0.012  0.016  1.61  1.6  0.91  0.012  0.011  0.016 
1.7  0.014  0.013  0.017  1.64  1.7  0.92  0.014  0.012  0.018 
1.8  0.014  0.013  0.018  1.67  1.8  0.93  0.014  0.013  0.019 
1.9  0.016  0.014  0.021  1.70  1.9  0.93  0.017  0.015  0.025 
2.0  0.019  0.016  0.028  1.73  2.0  0.94  0.023  0.019  0.037 
In Figs. 18 and 19, we show the errors on the growth rate \(f_g\) as a function of redshift, overplotted on our fiducial \(\varLambda \)CDM (green solid curve). The three sets of error bars are plotted at the 14 redshift bins and refer (from left to right) to the Optimistic, Reference and Pessimistic cases, respectively. The other curves show the expected growth rate in three alternative cosmological models: flat DGP (red, long-dashed curve), CDE (purple, dot-dashed curve) and different f(R) models (see the descriptions in the figure captions). This plot clearly illustrates the ability of next-generation surveys to distinguish between alternative models, even for the least favorable choice of survey parameters.
 1.
The ability to measure the biasing function is not too sensitive to the characteristics of the survey (b(z) can be constrained to within 1% in the Optimistic scenario and to within 1.6% in the Pessimistic one), provided that the bias function is independent of scale. Moreover, we checked that the precision in measuring the bias depends very little on the form of b(z).
 2.
The growth rate \(f_g\) can be estimated to within 1–2.5% in each bin for the Reference case survey, without needing to estimate the bias function b(z) from a dedicated, independent analysis using higher-order statistics (Verde et al. 2002) or a full-PDF analysis (Sigad et al. 2000).
 3.
The estimated errors on \(f_g\) depend weakly on the fiducial model of b(z).

 \(\gamma \)-parameterization. We start by considering the case of constant \(\gamma \) and w, in which we set \(\gamma =\gamma ^F=0.545\) and \(w=w^F=-0.95\). As we will discuss in the next section, this simple case will allow us to cross-check our results with those in the literature. In Fig. 20, we show the marginalized probability regions, at the 1 and \(2\sigma \) levels, for \(\gamma \) and w. The regions with different shades of green illustrate the Reference case for the survey, whereas the blue long-dashed and the black short-dashed ellipses refer to the Optimistic and Pessimistic cases, respectively. Errors on \(\gamma \) and w are listed in Table 5 together with the corresponding figures of merit (FoM), defined as the square root of the Fisher matrix determinant and therefore equal to the inverse of the product of the errors in the pivot point, see Albrecht et al. (2006). Contours are centered on the fiducial model. The blue triangle and the blue square represent the predictions of the flat DGP and the f(R) models, respectively. It is clear that, in the case of constant \(\gamma \) and w, the measurement of the growth rate in a Euclid-like survey will allow us to discriminate among these models. These results have been obtained by fixing the curvature to its fiducial value \(\varOmega _k=0\). If instead we treat curvature as a free parameter and marginalize over it, the errors on \(\gamma \) and w increase significantly, as shown in Table 6, yet the precision is still good enough to distinguish the different models. For completeness, we also computed the fully marginalized errors on the other cosmological parameters for the reference survey, given in Table 7.
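The FoM definition can be sketched numerically: for a 2×2 marginalized Fisher block, \(\sqrt{\det F}\) equals the inverse product of the \(1\sigma\) errors along the error ellipse's principal axes, i.e., of the decorrelated ("pivot") parameter combinations (the matrix below is invented purely for illustration):

```python
import numpy as np

# Invented 2x2 Fisher block for a correlated parameter pair; not a Euclid value.
F2 = np.array([[2500.0,  800.0],
               [ 800.0, 3600.0]])

def fom(F2):
    """Figure of merit: sqrt(det F), inverse area of the error ellipse (up to a constant)."""
    return np.sqrt(np.linalg.det(F2))

# Cross-check: in the principal-axis (decorrelated) basis the errors are
# 1/sqrt(eigenvalues), and the FoM is the inverse of their product.
sig = 1.0 / np.sqrt(np.linalg.eigvalsh(F2))
fom_axes = 1.0 / (sig[0] * sig[1])
```

Because the off-diagonal correlation enters the determinant, the FoM is slightly larger than the naive inverse product of the two marginalized errors, which explains small differences between quick estimates and the tabulated FoM values.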
As a second step we considered the case in which \(\gamma \) and w evolve with redshift according to Eqs. (I.8.5) and (I.8.2), and then marginalized over the parameters \(\gamma _{1}\), \(w_{1}\) and \(\varOmega _k\). The marginalized probability contours are shown in Fig. 21, where the three survey setups appear in three different panels to avoid overcrowding. Dashed contours refer to the z-dependent parameterizations while red, continuous contours refer to the case of constant \(\gamma \) and w obtained after marginalizing over \(\varOmega _k\). Allowing for time dependence increases the size of the confidence ellipses, since the Fisher matrix analysis now accounts for the additional uncertainties in the extra parameters \(\gamma _{1}\) and \(w_{1}\); the marginalized error values are in columns \(\sigma _{{\gamma }_{\text {marg},1}}\), \(\sigma _{{w}_{\text {marg},1}}\) of Table 8. The uncertainty ellipses are now larger and show that the DGP and fiducial models could be distinguished at the \(>\,2\sigma \) level only if the redshift survey parameters are more favorable than in the Reference case.
We have also projected the marginalized ellipses onto the parameters \(\gamma _{0}\) and \(\gamma _{1}\) and calculated their marginalized errors and figures of merit, reported in Table 9. The corresponding uncertainty contours are shown in the right panel of Fig. 20. Once again we overplot the expected values in the f(R) and DGP scenarios to stress that one should be able to distinguish among competing models, irrespective of the survey's precise characteristics.

\(\eta \)parameterization.
We have repeated the same analysis as for the \(\gamma \)-parameterization, taking into account the possibility of a coupling between DE and DM, i.e., we have modeled the growth factor according to Eq. (I.8.6) and the dark-energy equation of state as in Eq. (I.8.2), and marginalized over all parameters, including \(\varOmega _k\). The marginalized errors are shown in columns \(\sigma _{{\gamma }_{\text {marg},2}}\), \(\sigma _{{w}_{\text {marg},2}}\) of Table 8, and the significance contours are shown in the three panels of Fig. 22, which is analogous to Fig. 21. Even though the ellipses are now larger, the errors are still small enough to distinguish the fiducial model from the f(R) and DGP scenarios at the \(>\,1\sigma \) and \(>\,2\sigma \) levels, respectively.
Marginalizing over all other parameters we can compute the uncertainties in the \(\gamma \) and \(\eta \) parameters, as listed in Table 10. The corresponding confidence ellipses are shown in the left panel of Fig. 23. This plot shows that next-generation Euclid-like surveys will be able to distinguish the reference model with no coupling (central red dot) from the CDE model proposed by Amendola and Quercellini (2003) (white square) only at the \(1\)–\(1.5\sigma \) level.
Numerical values for \(1\sigma \) constraints on parameters in Fig. 20 and figures of merit
Case  \(\sigma _{\gamma }\)  \(\sigma _{w}\)  FoM  

\(b=\sqrt{1+z}\), \(\varOmega _{k}\) fixed  Ref.  0.02  0.017  3052 
  Opt.  0.02  0.016  3509 
  Pess.  0.026  0.02  2106 
Bias  Case  \(\sigma _{\gamma }\)  \(\sigma _{w}\)  FoM  

\(b=\sqrt{1+z}\)  Ref.  0.03  0.04  1342 
Opt.  0.03  0.03  1589  
Pess.  0.04  0.05  864 
Numerical values for marginalized \(1\sigma \) constraints on cosmological parameters using constant \(\gamma \) and w
Case  \(\sigma _{h}\)  \(\sigma _{\varOmega _m h^2}\)  \(\sigma _{\varOmega _b h^2}\)  \(\sigma _{\varOmega _k}\)  \(\sigma _{n_s}\)  \(\sigma _{\sigma _8}\)  

\(b=\sqrt{1+z}\)  Ref.  0.007  0.002  0.0004  0.008  0.03  0.006 
\(1\sigma \) marginalized errors for parameters \(\gamma \) and w expressed through \(\gamma \) and \(\eta \) parameterizations
Bias  Case  \(\sigma _{\gamma _{\mathrm {marg},1}}\)  \(\sigma _{w_{\mathrm {marg},1}}\)  FoM  \(\sigma _{\gamma _{\mathrm {marg},2}}\)  \(\sigma _{w_{\mathrm {marg},2}}\)  FoM 

\(b=\sqrt{1+z}\)  Ref.  0.15  0.07  97  0.07  0.07  216 
Opt.  0.14  0.06  112  0.07  0.06  249  
Pess.  0.18  0.09  66  0.09  0.09  147 
Numerical values for \(1\sigma \) constraints on parameters in right panel of Fig. 20 and figures of merit
Bias  Case  \(\sigma _{\gamma _{0}}\)  \(\sigma _{\gamma _{1}}\)  FoM 

\(b=\sqrt{1+z}\)  Ref.  0.15  0.4  87 
Opt.  0.14  0.36  102  
Pess.  0.18  0.48  58 
Numerical values for \(1\sigma \) constraints on parameters in Fig. 23 and figures of merit
Bias  Case  \(\sigma _{\gamma }\)  \(\sigma _{\eta }\)  FoM 

\(b=\sqrt{1+z}\)  Ref.  0.07  0.06  554 
Opt.  0.07  0.06  650  
Pess.  0.09  0.08  362 
Finally, in order to explore the dependence on the number of parameters and to compare our results to previous works, we also draw the confidence ellipses for \(w_0\), \(w_1\) with three different methods: (i) fixing \(\gamma _{0}, \gamma _{1}\) and \(\varOmega _k\) to their fiducial values and marginalizing over all the other parameters; (ii) fixing only \(\gamma _{0}\) and \(\gamma _{1}\); (iii) marginalizing over all parameters but \(w_0\), \(w_1\). As one can see in Fig. 24 and Table 11, this progressive increase in the number of marginalized parameters results in a widening of the ellipses, with a consequent decrease in the figures of merit. These results are in agreement with those of other authors (e.g., Wang et al. 2010).
 1.
If both \(\gamma \) and w are assumed to be constant and setting \(\varOmega _k=0\), then a redshift survey described by our Reference case will be able to constrain these parameters to within 4 and 2%, respectively.
 2.
Marginalizing over \(\varOmega _{k}\) degrades these constraints to 5.3 and 4% respectively.
 3.
If w and \(\gamma \) are considered redshift-dependent and parametrized according to Eqs. (I.8.5) and (I.8.2), then the errors on \(\gamma _{0}\) and \(w_{0}\) obtained after marginalizing over \(\gamma _{1}\) and \(w_{1}\) increase by a factor of \(\sim \,7.5\). However, with this precision we will still be able to distinguish the fiducial model from the DGP and f(R) scenarios with more than \(2\sigma \) and \(1\sigma \) significance, respectively.
 4.
The ability to discriminate these models with a significance above \(2\sigma \) is confirmed by the confidence contours drawn in the \(\gamma _{0}\)–\(\gamma _{1}\) plane, obtained after marginalizing over all other parameters.
 5.
If we allow for a coupling between dark matter and dark energy, and we marginalize over \(\eta \) rather than over \(\gamma _{1}\), then the errors on \(w_{0}\) are almost identical to those obtained in the case of the \(\gamma \)parameterization, while the errors on \(\gamma _{0}\) decrease significantly.
\(1\sigma \) marginalized errors for the parameters \(w_{0}\) and \(w_{1}\), obtained with three different methods (reference case, see Fig. 24)
\(\sigma _{w_{0}}\)  \(\sigma _{w_{1}}\)  FoM  

\(\gamma _{0}, \gamma _{1}\), \(\varOmega _{k}\) fixed  0.05  0.16  430 
\(\gamma _{0},\gamma _{1}\) fixed  0.06  0.26  148 
Marginalization over all other parameters  0.07  0.3  87 
However, our ability to separate the fiducial model from the CDE model is significantly hampered: the confidence contours plotted in the \(\gamma \)–\(\eta \) plane show that the discrimination can only be performed with \(1\)–\(1.5\sigma \) significance. Yet this is still a remarkable improvement over the present situation, as can be appreciated from Fig. 23, where we compare the constraints expected from next-generation data with the present ones. Moreover, the Reference survey will be able to constrain the parameter \(\eta \) to within 0.06. Recalling that we can write \(\eta =2.1 \beta _c^2\) (Di Porto and Amendola 2008), this means that the coupling parameter \(\beta _c\) between dark energy and dark matter can be constrained to within 0.14, solely employing the growth-rate information. This is comparable to existing constraints from the CMB but is complementary, since it is obtained at much smaller redshifts. A variable coupling could therefore be detected by comparing the redshift-survey results with the CMB ones.
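The conversion from \(\eta \) to \(\beta _c\) can be sketched by simple error propagation through \(\eta = c\,\beta_c^2\) (a naive linearization with our own function names; the constraint quoted above comes from the full likelihood analysis, so the numbers here are only indicative):

```python
import math

C_FIT = 2.1  # fit coefficient in eta = c * beta_c**2 (Di Porto and Amendola 2008)

def sigma_beta_c(beta_c_fid, sigma_eta, c=C_FIT):
    """Linear propagation: d(eta) = 2*c*beta_c * d(beta_c), valid for beta_c != 0."""
    return sigma_eta / (2.0 * c * beta_c_fid)

def beta_c_upper_limit(sigma_eta, c=C_FIT):
    """Around a fiducial beta_c = 0 the linearization degenerates; a one-sided
    bound survives: eta < sigma_eta implies |beta_c| < sqrt(sigma_eta / c)."""
    return math.sqrt(sigma_eta / c)
```

Note the qualitative point this makes explicit: because \(\eta\) depends quadratically on \(\beta_c\), a symmetric error on \(\eta\) around zero translates into a one-sided bound on \(|\beta_c|\), not a Gaussian error.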
It is worth pointing out that, whenever we have performed statistical tests similar to those already discussed by other authors in the context of a Euclid-like survey, we found consistent results. Examples are the values of the FoM and the errors for \(w_0\), \(w_1\), similar to those in Wang et al. (2010) and Majerotto et al. (2012), and the errors on constant \(\gamma \) and w (Majerotto et al. 2012). However, let us note that all these values depend strictly on the parametrizations adopted and on the number of parameters fixed or marginalized over (see, e.g., Rassat et al. 2008).
2.8.4 Weak lensing nonparametric measurement of expansion and growth rate
In this section, we apply power-spectrum tomography (Hu 1999) to the Euclid weak lensing survey without using any parameterization of the Hubble parameter H(z) or of the growth function G(z). Instead, we add the fiducial values of those functions at the centers of some redshift bins of our choice to the list of cosmological parameters. Using the Fisher matrix formalism, we can forecast the constraints that future surveys can put on H(z) and G(z). Although such a nonparametric approach is quite common for the equation-of-state ratio w(z) in supernovae surveys (see, e.g., Albrecht et al. 2009) and also in redshift surveys (Seo and Eisenstein 2003), it has not been investigated for weak lensing surveys.
Values used in our computation
\(\omega _m\)  0.1341  \(f_\mathrm {sky}\)  0.375 
\(\omega _b\)  0.02258  \(z_\mathrm {mean}\)  0.9 
\(\tau \)  0.088  \(\sigma _z\)  0.05 
\(n_s\)  0.963  \(n_\theta \)  30 
\(\varOmega _m\)  0.266  \(\gamma _\mathrm {int}\)  0.22 
\(w_0\)  \(-1\)  \(\ell _{\max }\)  \(5\times 10^3\) 
\(w_1\)  0  \(\varDelta \log _{10}\ell \)  0.02 
\(\gamma \)  0.547  
\(\gamma _{\mathrm {ppn}}\)  0  
\(\sigma _8\)  0.801 
The values for our fiducial model (taken from WMAP 7year data, Komatsu et al. 2011) and the survey parameters that we chose for our computation can be found in Table 12.
Notice that here we assumed no prior information. Of course, one could improve the FoM by taking into account external constraints from other experiments.
2.8.5 Testing the nonlinear corrections for weak lensing forecasts
In order to fully exploit the scientific potential of the next generation of weak lensing surveys, accurate predictions of the matter power spectrum are required. The signal-to-noise ratio of the cosmic shear signal is highest on angular scales of 5–10 arcminutes, which correspond to physical scales of \(\sim \,1\) Mpc. Restricting the analysis to larger scales does not necessarily solve the problem, because the observed two-point ellipticity correlation functions are still sensitive to small-scale structures projected along the line of sight. This may be avoided using a full 3D shear analysis (see Castro et al. 2005; Kitching et al. 2011, for details), but using only the larger scales increases the statistical uncertainties due to cosmic variance.
It is important to distinguish between gravity-only simulations, which are used to make the forecasts, and hydrodynamical simulations, which attempt to capture the modifications to the matter power spectrum due to baryon physics. Although most of the matter in the Universe is indeed believed to be in the form of collisionless cold dark matter, baryons represent a non-negligible fraction of the total matter content. The distribution of baryons traces that of the underlying dark-matter density field, and thus gravity-only simulations should capture most of the structure formation. Nonetheless, differences in the spatial distribution of baryons with respect to the dark matter are expected to lead to changes that exceed the required accuracy of 1 percent.
Various processes, which include radiative cooling, star formation and energy injection from supernovae and active galactic nuclei, affect the distribution of baryons. Implementing these processes correctly is difficult, and as a consequence the accuracy of hydrodynamic simulations is still under discussion. That baryon physics cannot be ignored was perhaps most clearly shown by van Daalen et al. (2011), who looked at the changes in the matter power spectra when different processes are included. This was used by Semboloni et al. (2011) to examine the impact on cosmic shear studies. The results suggest that AGN feedback may lead to a suppression of the power by as much as \(10\%\) at \(k\sim 1\,h\,\mathrm{Mpc}^{-1}\).
Semboloni et al. (2011) showed that ignoring the baryonic physics leads to biases in the cosmological parameter estimates that are much larger than the precision of Euclid. In the case of the AGN model, the bias in w is as much as \(40\%\). Unfortunately, our knowledge of the various feedback processes is still incomplete, and we cannot use the simulations to interpret the cosmic shear signal. Furthermore, hydrodynamic simulations are too expensive to simulate large volumes for a range of cosmological parameters. To circumvent this problem several approaches have been suggested. For instance, Bernstein (2009) proposed to describe the changes in the power spectrum by Legendre polynomials and to marginalize over the nuisance parameters (see also Kitching and Taylor 2011, for a similar approach). Although this leads to unbiased estimates for cosmological parameters, the precision decreases significantly, by as much as 30% (Zentner et al. 2008).
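The precision loss caused by marginalizing over nuisance parameters can be made explicit in the Fisher formalism: fixing a nuisance parameter amounts to deleting its row and column before inversion, while marginalizing means inverting the full matrix first. A minimal sketch with an invented three-parameter Fisher matrix (illustrative numbers only):

```python
import numpy as np

# Illustrative Fisher matrix: 2 cosmological parameters + 1 nuisance parameter.
F = np.array([[ 50., 10.,  8.],
              [ 10., 20.,  6.],
              [  8.,  6.,  4.]])

n_cosmo = 2
# Fixing the nuisance parameter: drop its row/column before inverting.
cov_fixed = np.linalg.inv(F[:n_cosmo, :n_cosmo])

# Marginalizing over it: invert the full matrix, then keep the cosmological block.
cov_marg = np.linalg.inv(F)[:n_cosmo, :n_cosmo]

sigma_fixed = np.sqrt(np.diag(cov_fixed))
sigma_marg = np.sqrt(np.diag(cov_marg))
print(sigma_marg / sigma_fixed)   # ratios >= 1: the cost of nuisance parameters
```

Marginalized errors are always at least as large as the fixed-nuisance ones, which is the degradation quantified by Zentner et al. (2008).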
Instead Semboloni et al. (2011) and Semboloni et al. (2013) examined whether it is possible to model the effects of baryon physics using a halo model approach, in which the baryons and stars are treated separately from the dark matter distribution. The model parameters, rather than being mere nuisance parameters, correspond to physical quantities that can be constrained observationally. These works showed that even with this still rather simple approach it is possible to reduce the biases in the cosmological parameters to acceptable levels, without a large loss in precision.
The forecasts do not include the uncertainty due to baryon physics; hence, the results implicitly assume that it can be understood sufficiently well that no loss in precision occurs. This may be somewhat optimistic, and more work is needed in the coming years to accurately quantify the impact of baryon physics on the modelling of the matter power spectrum, but we note that the initial results are very encouraging. In particular, Semboloni et al. (2013) found that requiring consistency between the two- and three-point statistics can be used to self-calibrate feedback models.
Another complication for the forecasts is the performance of the prescriptions for the non-linear power spectrum in non-\(\varLambda \)CDM models. For instance, McDonald et al. (2006) showed that using halofit for non-\(\varLambda \)CDM models requires suitable corrections. In spite of that, halofit has often been used to calculate the spectra of models with a non-constant DE state parameter w(z), a procedure dictated by the lack of appropriate extensions of halofit to non-\(\varLambda \)CDM cosmologies.
Here we quantify the effects of using the halofit code instead of N-body outputs for non-linear corrections to DE spectra, when the nature of DE is investigated through weak lensing surveys. Using a Fisher-matrix approach, we evaluate the discrepancies in error forecasts for \(w_{0}\), \(w_{a}\) and \(\varOmega _m\) and compare the related confidence ellipses. See Casarini et al. (2011) for further details.
The weak lensing survey is as specified in Sect. I.8.2. Tests are performed assuming three different fiducial cosmologies: a \(\varLambda \)CDM model (\(w_0 = -1\), \(w_a = 0\)) and two dynamical DE models, still consistent with the WMAP+BAO+SN combination (Komatsu et al. 2011) at 95% C.L., dubbed M1 (\(w_0 = -0.67\), \(w_a = -2.28\)) and M3 (\(w_0 = -1.18\), \(w_a = 0.89\)). In this way we explore the dependence of our results on the assumed fiducial model. For the other parameters we adopt the fiducial cosmology of Sect. I.8.2.
The derivatives needed to calculate the Fisher matrix are evaluated by extracting the power spectra from N-body simulations of models close to the fiducial ones, obtained by considering parameter increments of \(\pm \,5\%\). For the \(\varLambda \)CDM case, two different initial seeds were also considered, to test the dependence on initial conditions; the Fisher matrix results turn out to be almost insensitive to this choice. For the other fiducial models, only one seed is used.
N-body simulations are performed using a modified version of pkdgrav (Stadel 2001) able to handle any DE equation of state w(a), with \(N^3 = 256^{3}\) particles in a box with side \(L = 256\,h^{-1}\mathrm {\ Mpc}\). Transfer functions generated with the camb package are employed to create initial conditions, using a modified version of the PM software by Klypin and Holtzman (1997), also able to handle suitable parameterizations of DE.
Matter power spectra are obtained by performing a fast Fourier transform (FFT) of the matter density fields, computed from the particle distribution through a Cloud-in-Cell algorithm on a regular grid with \(N_{g}=2048\). This allows us to obtain non-linear spectra over a large k-interval; in particular, our resolution allows us to compute spectra up to \(k \simeq 10\,h\,\mathrm{Mpc}^{-1}\). However, for \(k > 2\)–\(3\,h\,\mathrm{Mpc}^{-1}\) neglecting baryon physics is no longer accurate (Jing et al. 2006; Rudd et al. 2008; Bonometto et al. 2010; Zentner et al. 2008; Hearin and Zentner 2009). For this reason, we consider WL spectra only up to \(\ell _{\max } = 2000\).
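The measurement pipeline just described (Cloud-in-Cell density assignment followed by an FFT and spherical binning in k) can be sketched as follows. This is a minimal, unoptimized illustration, not the analysis code actually used; the grid size, particle number and random positions are placeholders:

```python
import numpy as np

def power_spectrum(pos, box, ng=128, nbins=20):
    """Estimate P(k) from particle positions via CIC assignment + FFT.
    pos: (N, 3) array in [0, box); box in h^-1 Mpc."""
    delta = np.zeros((ng, ng, ng))
    # --- Cloud-in-Cell: spread each particle over its 8 neighbouring cells ---
    s = pos / box * ng
    i = np.floor(s).astype(int)
    f = s - i
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.abs(1 - dx - f[:, 0]) * np.abs(1 - dy - f[:, 1])
                     * np.abs(1 - dz - f[:, 2]))
                np.add.at(delta, ((i[:, 0] + dx) % ng, (i[:, 1] + dy) % ng,
                                  (i[:, 2] + dz) % ng), w)
    delta = delta / delta.mean() - 1.0            # density contrast
    # --- FFT and spherical binning in k ---
    dk = np.fft.rfftn(delta) * (box / ng) ** 3    # continuum-FT normalization
    kf = 2 * np.pi / box                          # fundamental mode
    kx = np.fft.fftfreq(ng, d=1.0 / ng) * kf
    kz = np.fft.rfftfreq(ng, d=1.0 / ng) * kf
    kmag = np.sqrt(kx[:, None, None] ** 2 + kx[None, :, None] ** 2
                   + kz[None, None, :] ** 2)
    pk_raw = np.abs(dk) ** 2 / box ** 3
    bins = np.linspace(kf, kf * ng / 2, nbins + 1)
    which = np.digitize(kmag.ravel(), bins)
    pk = [pk_raw.ravel()[which == b].mean() for b in range(1, nbins + 1)]
    return 0.5 * (bins[1:] + bins[:-1]), np.array(pk)

# Usage: uniformly distributed (unclustered) particles.
rng = np.random.default_rng(0)
k, pk = power_spectrum(rng.uniform(0, 256.0, (20000, 3)), box=256.0)
```

For unclustered particles the low-k estimate should approach the shot-noise level \(P \simeq L^3/N\), which provides a quick sanity check of the normalization.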
Particular attention must be paid to the normalization of the matter power spectra. In fact, we found that, when all models are normalized to the same linear \(\sigma _8(z=0)\), the shear derivatives with respect to \(w_0\), \(w_a\) or \(\varOmega _m\) are largely dominated by the normalization shift at \(z=0\): the linear and non-linear values \(\sigma _{8}\) and \(\sigma _{8,nl}\) are quite different, and the shift itself depends on \(w_0\), \(w_a\) and \(\varOmega _m\). This would confuse the z-dependence of the growth factor over the observational z-range. This normalization problem was not met in previous analogous Fisher matrix tests, as halofit does not directly depend on the DE equation of state.
In Fig. 27 we show the confidence ellipses, when the fiducial model is \(\varLambda \)CDM, in the cases of 3 or 5 bins and with \(\ell _{\max } = 2000\). Since the discrepancies between different seeds are small, the discrepancies between halofit and simulations truly indicate an underestimate of errors in the halofit case.
As expected, the error on the \(\varOmega _m\) estimate is not affected by the passage from simulations to halofit, since we are dealing with \(\varLambda \)CDM models only. On the contrary, using halofit leads to underestimates of the errors on \(w_0\) and \(w_a\) by a substantial 30–40% (see Casarini et al. 2011 for further details).
Figure 28 then shows the results in the \(w_0\)–\(w_a\) plane when the fiducial model is M1 or M3. It is evident that the two cases are quite different. In the M1 case we see only a rather mild shift, although still of order 10% in the error predictions. In the M3 case, errors estimated through halofit exceed the simulation errors by a substantial factor. Altogether, this is a case in which estimates based on halofit are not trustworthy.
The effect of baryon physics is another non-linear correction to be considered. The details of a study of the impact of baryon physics on the power spectrum and on parameter estimation can be found in Semboloni et al. (2011).
2.8.6 Forecasts for the dark-energy sound speed
As we have seen in Sect. I.3.1, when dark energy clusters, the standard sub-horizon Poisson equation that links matter fluctuations to the gravitational potential is modified and \(Q\ne 1\). The deviation from unity will depend on the degree of DE clustering and therefore on the sound speed \(c_s\). In this subsection, we try to forecast the constraints that Euclid can put on a constant \(c_s\) by measuring Q both via weak lensing and via redshift clustering. Here we assume standard Einstein gravity and zero anisotropic stress (and therefore \(\varPsi =\varPhi \)), and we allow \(c_{s}\) to assume different values in the range 0–1. Three regimes can be distinguished for the dark-energy perturbations:
 1. perturbations larger than the causal horizon, where perturbations are not causally connected and their growth is suppressed;
 2. perturbations smaller than the causal horizon but larger than the sound horizon, \(k\ll aH/c_{s}\) (the only regime where perturbations are free to grow, because the velocity dispersion, or equivalently the pressure perturbation, is smaller than the gravitational attraction);
 3. perturbations smaller than the sound horizon, \(k\gg aH/c_{s}\) (here perturbations stop growing, because the pressure perturbation is larger than the gravitational attraction).
The growth of matter perturbations
There are two ways to influence the growth factor: first, at the background level, through a different Hubble expansion; second, at the perturbation level: if dark energy clusters, the gravitational potential changes because of the Poisson equation, and this also affects the growth rate of the dark matter. All these effects can be included in the growth index \(\gamma \), and we therefore expect \(\gamma \) to be a function of w and \(c_s^2\) (or, equivalently, of w and Q).
The growth index depends on dark-energy perturbations (through Q) as (Sapone and Kunz 2009)
$$\begin{aligned} \gamma =\frac{3\left( 1-w-A\left( Q\right) \right) }{5-6w}, \end{aligned}$$
(I.8.24)
where
$$\begin{aligned} A\left( Q\right) =\frac{Q-1}{1-\varOmega _{M}\left( a\right) }. \end{aligned}$$
(I.8.25)
Clearly, the key quantity here is the derivative of the growth factor with respect to the sound speed:
$$\begin{aligned} \frac{\partial \log G}{\partial \log c_s^2}\propto \int _{a_0}^{a_1}{\frac{\partial \gamma }{\partial c_{s}^2}\,{\mathrm {d}}a}\propto \int _{a_0}^{a_1}{\frac{\partial Q}{\partial c_{s}^2}\,{\mathrm {d}}a} \propto \int _{a_0}^{a_1}{\left( Q-1\right) {\mathrm {d}}a}. \end{aligned}$$
(I.8.26)
From the last expression we also notice that, being an integral, the derivative of the growth factor does not depend on \(Q-1\) like the derivative of Q, but rather on \(Q-Q_{0}\) (\(Q_0\) being the value of Q today): the growth factor is thus not directly probing the deviation of Q from unity, but rather how Q evolves over time; see Sapone et al. (2010) for more details.
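For Q = 1 (no dark-energy clustering) A(Q) vanishes and Eq. (I.8.24) reduces to the familiar \(\gamma = 3(1-w)/(5-6w)\), i.e., \(\gamma = 6/11 \approx 0.545\) for \(w=-1\). A quick numerical sketch of Eqs. (I.8.24)–(I.8.25), with Q = 1.05 as an arbitrary illustrative value:

```python
def growth_index(w, Q, omega_m):
    """Growth index gamma of Eq. (I.8.24), with A(Q) from Eq. (I.8.25)."""
    A = (Q - 1.0) / (1.0 - omega_m)
    return 3.0 * (1.0 - w - A) / (5.0 - 6.0 * w)

# Q = 1: smooth dark energy, gamma = 6/11 for w = -1.
g_smooth = growth_index(w=-1.0, Q=1.0, omega_m=0.3)

# Q > 1: clustering dark energy deepens the potentials and lowers gamma
# (i.e., faster growth). Q = 1.05 is illustrative, not a fitted value.
g_clustering = growth_index(w=-1.0, Q=1.05, omega_m=0.3)
print(g_smooth, g_clustering)
```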
Redshift space distortions
The distortion induced by redshift can be expressed in linear theory by the \(\beta \) factor, related to the bias factor and the growth rate via
$$\begin{aligned} \beta (z,k)=\frac{\varOmega _{m}\left( z\right) ^{\gamma (k,z)}}{b(z)}. \end{aligned}$$
(I.8.27)
The derivative of the redshift-distortion parameter with respect to the sound speed is
$$\begin{aligned} \frac{\partial \log \left( 1+\beta \mu ^{2}\right) }{\partial \log c_s^2}= \frac{3}{5-6w}\,\frac{\beta \mu ^{2}}{1+\beta \mu ^{2}}\,\frac{x}{1+x}\left( Q-1\right) . \end{aligned}$$
(I.8.28)
We see that the behavior versus \(c_{s}^{2}\) is similar to that of the Q derivative, so the same discussion applies; once again, the effect is maximized for small \(c_{s}\). The \(\beta \) derivative is comparable to that of G at \(z=0\) but becomes more important at low redshifts.
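Evaluating Eq. (I.8.28) numerically makes explicit how weak the redshift-space-distortion signal becomes as \(Q \rightarrow 1\). The values of \(\beta\), \(\mu\), x and Q below are illustrative placeholders, not fitted numbers:

```python
def dlog_rsd_dlog_cs2(w, beta, mu, x, Q):
    """Logarithmic derivative of (1 + beta mu^2) with respect to
    log c_s^2, as in Eq. (I.8.28)."""
    return (3.0 / (5.0 - 6.0 * w)) * (beta * mu ** 2 / (1.0 + beta * mu ** 2)) \
        * (x / (1.0 + x)) * (Q - 1.0)

# Illustrative numbers: w = -0.8 (as adopted in the text), beta ~ 0.5, mu = 1;
# Q = 1.01 is a placeholder close to the GR value Q = 1.
deriv = dlog_rsd_dlog_cs2(w=-0.8, beta=0.5, mu=1.0, x=1.0, Q=1.01)
print(deriv)   # small: the signal vanishes as Q -> 1
```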
Shape of the dark matter power spectrum
Quantifying the impact of the sound speed on the matter power spectrum is quite hard, as we need to run Boltzmann codes (such as camb, Lewis et al. 2000b) in order to capture the full impact of dark-energy perturbations on the matter power spectrum. Sapone et al. (2010) proceeded in two ways: first using the camb output, and then considering the analytic expression from Eisenstein and Hu (1999) (which does not include dark-energy perturbations, i.e., does not include \(c_{s}\)).
They find that the impact of the derivative of the matter power spectrum with respect to the sound speed on the final errors is only relevant if high values of \(c_s^2\) are considered; as the sound speed decreases, the results are less and less affected, because for low values of the sound speed other quantities, like the growth factor, start to be the dominant source of information on \(c_{s}^{2}\).

The dark-energy perturbations enter the observables through three effects:

 1. the direct contribution of the perturbations to the gravitational potential, through the factor Q;
 2. the impact of the dark-energy perturbations on the growth rate of the dark matter perturbations, affecting the time dependence of \(\varDelta _{M}\), through \(G\left( a,k\right) \);
 3. a change in the shape of the matter power spectrum P(k), corresponding to the dark-energy-induced k-dependence of \(\varDelta _{M}\).
In Fig. 29, we report the \(1\sigma \) confidence region for \(w_{0},c_s^2\) for two different values of the sound speed and \(z_{\max }\). For a high value of the sound speed (\(c_s^2=1\)) we find \(\sigma (w_{0})=0.0195\), while the relative error on the sound speed is \(\sigma (c_s^2)/c_s^2=2615\). As expected, WL is totally insensitive to the clustering properties of quintessence dark-energy models when the sound speed is equal to 1. The presence of dark-energy perturbations leaves a w- and \(c_s^2\)-dependent signature in the evolution of the gravitational potentials through \(\varDelta _{\mathrm {DE}}/\varDelta _{m}\) and, as already mentioned, increasing \(c_s^2\) enhances the suppression of dark-energy perturbations, which brings \(Q \rightarrow 1\).
Impact on galaxy power spectrum.
We now explore a second probe of clustering, the galaxy power spectrum. The procedure is the same as outlined in Sect. I.7.3. We use the representative Euclid survey presented in Sect. I.8.2, and also consider possible extended surveys to \(z_{\max }=2.5\) and \(z_{\max }=4\).
In conclusion, as perhaps expected, we find that dark-energy perturbations have a very small effect on dark matter clustering unless the sound speed is extremely small, \(c_{s}\le 0.01\). Recall that, in order to boost the observable effect, we always assumed \(w=-0.8\); for values closer to \(-1\) the sensitivity to \(c_s^2\) is further reduced. As a test, Sapone et al. (2010) performed the calculation for \(w=-0.9\) and \(c_s^2=10^{-5}\) and found \(\sigma _{c_s^2}/c_s^2=2.6\) and \(\sigma _{c_s^2}/c_s^2=1.09\) for the WL and galaxy power spectrum experiments, respectively.
Such small sound speeds are not in contrast with the fundamental expectation of dark energy being much smoother than dark matter: even with \(c_{s}\approx 0.01\), dark-energy perturbations are more than one order of magnitude weaker than dark matter ones (at least for the class of models investigated here) and safely below non-linearity at the present time at all scales. Models of “cold” dark energy are interesting because they can cross the phantom divide (Kunz and Sapone 2006) and contribute to the cluster masses (Creminelli et al. 2010) (see also Sect. I.6.2 of this review). Small \(c_{s}\) could be constructed, for instance, with scalar fields with non-standard kinetic energy terms.
2.8.7 Weak lensing constraints on f(R) gravity
In principle one has complete freedom to specify the function f(R), and so any expansion history can be reproduced. However, as discussed in Sect. I.5.4, those that remain viable are the subset that very closely mimic the standard \(\varLambda \)CDM background expansion, as this restricted subclass of models can evade solar system constraints (Chiba 2003; Tsujikawa et al. 2008a; Gu 2011), have a standard matter era in which the scale factor evolves according to \(a(t) \propto t^{2/3}\) (Amendola et al. 2007b) and can also be free of ghost and tachyon instabilities (Nariai 1973; Gurovich and Starobinsky 1979).
Whilst these models are practically indistinguishable from \(\varLambda \)CDM at the level of background expansion, there is a significant difference in the evolution of perturbations relative to the standard GR behavior.
The evolution of linear density perturbations in the context of f(R) gravity is markedly different from that in the standard \(\varLambda \)CDM scenario: \(\delta _{\mathrm {m}} \equiv \delta \rho _{\mathrm {m}}/\rho _{\mathrm {m}}\) acquires a non-trivial scale dependence at late times. This is due to the presence of an additional scale M(a) in the equations; as any given mode crosses the modified-gravity ‘horizon’ \(k = aM(a)\), it feels an enhanced gravitational force due to the scalar field. This has the effect of increasing the power of small-scale modes.
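In the quasi-static limit this enhancement is commonly written as a scale-dependent effective Newton constant, \(G_{\mathrm{eff}}/G = 1 + \tfrac{1}{3}\,k^2/(k^2 + a^2M^2)\), interpolating between G for \(k \ll aM\) and 4G/3 for \(k \gg aM\). A sketch, with a placeholder scalaron mass:

```python
import numpy as np

def geff_over_g(k, a, M):
    """Quasi-static f(R) enhancement of the effective Newton constant:
    G_eff/G = 1 + (1/3) k^2 / (k^2 + a^2 M^2), which interpolates between
    1 (k << aM, GR recovered) and 4/3 (k >> aM, scalar field unscreened)."""
    return 1.0 + (k ** 2 / (k ** 2 + (a * M) ** 2)) / 3.0

# Illustrative scalaron mass M = 0.1 h/Mpc at a = 1 (a placeholder value):
k = np.logspace(-3, 1, 5)         # h/Mpc
print(geff_over_g(k, a=1.0, M=0.1))
```

Modes above the transition scale grow faster, producing the scale-dependent tilt of the matter power spectrum discussed in the text.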
In Fig. 32, the linear matter power spectrum is exhibited for this parameterization (dashed line), along with the standard \(\varLambda \)CDM power spectrum (solid line). The observed, redshift-dependent tilt is due to the scalaron’s influence on small-scale modes, and represents a clear modified-gravity signal. Since weak lensing is sensitive to the underlying matter power spectrum, we expect Euclid to provide direct constraints on the mass of the scalar field.
2.8.8 Forecast constraints on coupled quintessence cosmologies
In this section, we present forecasts for coupled quintessence cosmologies (Amendola 2000a; Wetterich 1995; Pettorino and Baccigalupi 2008), obtained when combining Euclid weak lensing, Euclid redshift survey (baryon acoustic oscillations, redshift distortions and full P(k) shape) and CMB as obtained in Planck (see also the next section for CMB priors). Results reported here were obtained in Amendola et al. (2011) and we refer to it for details on the analysis and Planck specifications (for weak lensing and CMB constraints on coupled quintessence with a different coupling see also Martinelli et al. 2010b; De Bernardis et al. 2011). In Amendola et al. (2011), the coupling is the one described in Sect. I.5.3.4, as induced by a scalar–tensor model. The slope \(\alpha \) of the Ratra–Peebles potential is included as an additional parameter and Euclid specifications refer to the Euclid Definition phase (Laureijs et al. 2011).
1\(\sigma \) errors for the set \(\varTheta \equiv \{\beta ^{2},\alpha ,\varOmega _{c},h,\varOmega _{b},n_{s},\sigma _{8},\log (A)\}\) of cosmological parameters, combining \(\hbox {CMB}+P(k)\) (left column) and \(\hbox {CMB}+P(k)+\hbox {WL}\) (right column)
Parameter  \(\sigma _{i}\hbox {CMB}+P(k)\)  \(\sigma _{i}\,\hbox {CMB}+\hbox {P}(k)+\hbox {WL}\) 

\(\beta ^{2}\)  0.00051  0.00032 
\(\alpha \)  0.055  0.032 
\(\varOmega _{c}\)  0.0037  0.0010 
h  0.0080  0.0048 
\(\varOmega _{b}\)  0.00047  0.00041 
\(n_{s}\)  0.0057  0.0049 
\(\sigma _{8}\)  0.0049  0.0036 
\(\log (A)\)  0.0051  0.0027 
We can also ask whether a better knowledge of the parameters \(\{\alpha ,\varOmega _{c},h,\varOmega _{b},n_{s},\sigma _{8},\log (A)\}\), obtained by independent future observations, can give us better constraints on the coupling \(\beta ^{2}\). In Table 14 we show the errors on \(\beta ^{2}\) when we have a better knowledge of only one other parameter, which is here fixed to the reference value. All remaining parameters are marginalized over.
1\(\sigma \) errors for \(\beta ^{2}\), for CMB, P(k), WL and \(\hbox {CMB}+P(k)+\hbox {WL}\)
Fixed parameter  CMB  P(k)  WL  \(\hbox {CMB}+P(k)+\hbox {WL}\) 

(Marginalized over all parameters)  0.0094  0.0015  0.012  0.00032 
\(\alpha \)  0.0093  0.00085  0.0098  0.00030 
\(\varOmega _{c}\)  0.0026  0.00066  0.0093  0.00032 
h  0.0044  0.0013  0.011  0.00032 
\(\varOmega _{b}\)  0.0087  0.0014  0.012  0.00030 
\(n_{s}\)  0.0074  0.0014  0.012  0.00028 
\(\sigma _{8}\)  0.0094  0.00084  0.0053  0.00030 
\(\log (A)\)  0.0090  0.0015  0.012  0.00032 
2.8.9 Forecasts for the anisotropic stress parameter \(\eta \)
One problem encountered in trying to constrain the MG theoretical parameters Q(a, k) and \(\eta (a,k)\) introduced earlier is that one has to assume, or parametrize, the initial fluctuation power spectrum and its evolution until the epoch at which observations are made. This is of course fine if one assumes a standard cosmology until dark energy begins driving the expansion, but not necessarily so if dark energy, or any other non-standard process, is active also in the past. In this section, we examine briefly how one can perform a test of the anisotropic stress parameter \(\eta \) that is independent of the shape and the evolution of the power spectrum.
The function \(\eta \) can in principle assume any form, but if one restricts oneself to single scalar fields with a second-order equation of motion or to bimetric gravity, then the relatively simple form (I.4.8) holds true. Then, Eq. (I.8.39) can test a vast class of models at once. In Amendola et al. (2014), the forecasts for \(\eta \) have been performed for a Euclid-like survey, assuming an LSST-like amount of type Ia supernovae. The result is that \(\eta \) can be measured to within 1–2% when assumed constant in redshift and space, and to within roughly 10% when varying only in redshift, while the error rapidly degrades when assuming a more general form like Eq. (I.4.8).
2.8.10 Extra-Euclidean data and priors
In addition to the baseline Euclid surveys, a possibility may exist for an auxiliary Euclid survey, for example focused on Type Ia supernovae. Type Ia supernovae used as standardized candles (luminosity distance indicators) led to the discovery of cosmic acceleration and they retain significant leverage for revealing the nature of dark energy. Their observed flux over the months after explosion (the light curve) is calibrated by an empirical brightness–light curve width relation into a luminosity distance multiplied by a factor involving the unknown absolute brightness and Hubble constant. This nuisance factor cancels when supernovae at different redshifts are used as a relative distance measure. The relative distance is highly sensitive to cosmic acceleration and provides strong complementarity with other cosmological probes of dark energy, at the same or different redshifts.
Another advantageous property of supernovae is their immunity to systematics from cosmology theory: they are purely geometric measures and do not care about the matter power spectrum, coupling to dark matter, cosmologically modified gravity, dark-energy clustering, etc. Their astrophysical systematics are independent of those of other probes, giving important cross-checks. The cosmological parameter likelihood function arising from supernova constraints can, to a good approximation, simply be multiplied with the likelihood from other probes. Current supernova likelihoods are available in user-friendly form from the Joint Light-curve Analysis (JLA) of the Supernova Legacy Survey (SNLS) and Sloan Digital Sky Survey (SDSS) of Betoule et al. (2014), or from the Union2.1 compilation of Suzuki et al. (2012). In the near future, the Union3 compilation should merge these sets and all other current supernova data within an improved Bayesian framework.
Cosmological performance of the simulated surveys
Survey combination  \(\sigma (w_a)\)  \(z_p\)  \(\sigma (w_p)\)  FoM

low-z + LSST-DDF + DESIRE  0.22  0.25  0.022  203.2
low-z + LSST-DDF  0.28  0.22  0.026  137.1
LSST-DDF + DESIRE  0.40  0.35  0.031  81.4
Other dark-energy projects will enable cross-checks of the dark-energy constraints from Euclid. These include Planck, BOSS, WiggleZ, HETDEX, DES, Pan-STARRS, LSST, BigBOSS and SKA.
Planck will provide exquisite constraints on cosmological parameters, but not tight constraints on dark energy by itself, as CMB data are not sensitive to the nature of dark energy (which has to be probed at \(z<2\), where dark energy becomes increasingly important in the cosmic expansion history and the growth history of cosmic large scale structure). Planck data in combination with Euclid data provide powerful constraints on dark energy and tests of gravity. In the next Sect. I.8.10.1, we will discuss how to create a Gaussian approximation to the Planck parameter constraints that can be combined with Euclid forecasts in order to model the expected sensitivity.
The galaxy redshift surveys BOSS, WiggleZ, HETDEX, and BigBOSS are complementary to Euclid, since the overlap in redshift ranges of different galaxy redshift surveys, both space and groundbased, is critical for understanding systematic effects such as bias through the use of multiple tracers of cosmic large scale structure. Euclid will survey H\(\alpha \) emission line galaxies at \(0.5< z < 2.0\) over 15,000 square degrees. The use of multiple tracers of cosmic large scale structure can reduce systematic effects and ultimately increase the precision of darkenergy measurements from galaxy redshift surveys (see, e.g., Seljak et al. 2009).
Currently ongoing or recently completed surveys which cover a sufficiently large volume to measure BAO at several redshifts, and thus have science goals in common with Euclid, are the Sloan Digital Sky Survey III Baryon Oscillation Spectroscopic Survey (BOSS for short) and the WiggleZ survey.
BOSS maps the redshifts of 1.5 million luminous red galaxies (LRGs) out to \(z\sim 0.7\) over 10,000 square degrees, measuring the BAO signal and the large-scale galaxy correlations, and extracting information on growth from redshift space distortions. A simultaneous survey of \(2.2< z < 3.5\) quasars measures the acoustic oscillations in the correlations of the Lyman-\(\alpha \) forest. LRGs were chosen for their high bias, their approximately constant number density and, of course, the fact that they are bright: their spectra and redshifts can be measured with relatively short exposures on a 2.5-m ground-based telescope. The data-taking of BOSS will end in 2014.
The WiggleZ survey is now complete; it measured redshifts for almost 240,000 galaxies over 1000 square degrees at \(0.2<z<1\). The targets are luminous blue star-forming galaxies with spectra dominated by patterns of strong atomic emission lines. This choice is motivated by the fact that such emission lines can be used to measure a galaxy redshift in relatively short exposures on a 4-m class ground-based telescope.
Red quiescent galaxies inhabit dense cluster environments, while blue star-forming galaxies better trace lower-density regions such as sheets and filaments. It is believed that on large cosmological scales these details are unimportant and that galaxies are simply tracers of the underlying dark matter: different galaxy types will only have different ‘bias factors’. The fact that results from BOSS and WiggleZ so far agree well confirms this assumption.
Between now and the availability of Euclid data, other wide-field spectroscopic galaxy redshift surveys will take place. Among them, eBOSS will extend BOSS operations, focusing on 3100 square degrees using a variety of tracers. Emission-line galaxies will be targeted in the redshift window \(0.6<z<1\), extending the WiggleZ survey to higher redshift and to a larger sky coverage. Quasars in the redshift range \(1<z<2.2\) will be used as tracers of the BAO feature instead of galaxies. The BAO LRG measurement will be extended to \(z \sim 0.8\), and the quasar number density at \(z>2.2\) of BOSS will be tripled, thus improving the BAO Lyman-\(\alpha \) forest measurement.
HETDEX aims at surveying 1 million Lyman-\(\alpha \) emitting galaxies at \(1.9< z < 3.5\) over 420 square degrees. The main science goal is to map the BAO feature over this redshift range.
Further in the future, we highlight here the proposed BigBOSS survey and the SuMIRe survey with Hyper Suprime-Cam on the Subaru telescope. The BigBOSS survey will target [OII] emission-line galaxies at \(0.6< z < 1.5\) (and LRGs at \(z < 0.6\)) over 14,000 square degrees. The SuMIRe wide survey proposes to cover \(\sim \,2000\) square degrees in the redshift range \(0.6<z<1.6\), targeting LRGs and [OII] emission-line galaxies. Both these surveys will likely reach full science operations at roughly the same time as the Euclid launch.
Wide-field photometric surveys are also being carried out and planned. The ongoing Dark Energy Survey (DES) will cover 5000 square degrees out to \(z\sim 1.3\) and is expected to complete observations in 2017. The Panoramic Survey Telescope and Rapid Response System (Pan-STARRS), whose first (single-mirror) phase is already ongoing, will cover 30,000 square degrees with 5 photometry bands for redshifts up to \(z\sim 1.5\); the second phase of the survey is expected to be completed by the time Euclid launches. Further in the future, the Large Synoptic Survey Telescope (LSST) will cover redshifts \(0.3<z<3.6\) over 15,000 square degrees, but is expected to begin operations in 2021, after Euclid’s planned launch date. The galaxy imaging surveys DES, Pan-STARRS and LSST will complement the Euclid imaging survey in both the choice of band passes and the sky coverage.
Figure 35 puts Euclid into context. Euclid will survey H\(\alpha \) emission-line galaxies at \(0.5< z < 2.0\) over 15,000 square degrees. Clearly, Euclid, with both spectroscopic and photometric capabilities and wide field coverage, surpasses all surveys that will have been carried out by the time it is launched. The large volume surveyed is crucial, as the number of modes with which to sample, for example, the power spectrum and the BAO feature scales with the volume. The redshift coverage is also important, especially at \(z<2\), where the dark-energy contribution to the density of the universe is non-negligible (at \(z>2\), for most cosmologies, the universe is effectively Einstein–de Sitter, so high redshifts do not contribute much to constraints on dark energy). Having a single instrument, a uniform target selection and a uniform calibration is also crucial to perform precision tests of cosmology without having to build a ‘ladder’ from different surveys selecting different targets. On the other hand, it is also easy to see the synergy between these ground-based surveys and Euclid: by mapping different targets (over the same sky area and often the same redshift range) one can gain better control over issues such as bias. The use of multiple tracers of cosmic large scale structure can reduce systematic effects and ultimately increase the precision of dark-energy measurements from galaxy redshift surveys (see, e.g., Seljak et al. 2009).
Moreover, having both spectroscopic and imaging capabilities Euclid is uniquely poised to explore the clustering with both the three dimensional distribution of galaxies and weak gravitational lensing.
I.8.10.1 The Planck prior
In this scheme, \(l_a\) describes the peak location through the angular diameter distance to decoupling and the size of the sound horizon at that time. If the geometry changes, either due to nonzero curvature or due to a different equation of state of dark energy, \(l_a\) changes in the same way as the peak structure. R encodes similar information, but in addition contains the matter density which is connected with the peak height. In a given class of models (for example, quintessence dark energy), these parameters are “observables” related to the shape of the observed CMB spectrum, and constraints on them remain the same independent of (the prescription for) the equation of state of the dark energy.
As a caveat we note that if some assumptions regarding the evolution of perturbations are changed, then the corresponding R and \(l_a\) constraints and covariance matrix will need to be recalculated under each such hypothesis, for instance, if massive neutrinos were to be included, or even if tensors were included in the analysis (Corasaniti and Melchiorri 2008). Further, R as defined in Eq. (I.8.40) can be badly constrained and is quite useless if the dark energy clusters as well, e.g., if it has a low sound speed, as in the model discussed in Kunz (2009).
In order to derive a Planck Fisher matrix, Mukherjee et al. (2008) simulated Planck data as described in Pahud et al. (2006) and derived constraints on our base parameter set \(\{R,l_a,\varOmega _b h^2,n_s\}\) with an MCMC-based likelihood analysis. In addition to R and \(l_a\) they used the baryon density \(\varOmega _bh^2\) and, optionally, the spectral index of the scalar perturbations \(n_s\), as these are strongly correlated with R and \(l_a\), which means that we would lose information if we did not include these correlations. As shown in Mukherjee et al. (2008), the resulting Fisher matrix loses some information relative to the full likelihood when only Planck data are considered, but it is very close to the full analysis as soon as extra data are used. Since this is the intended application here, it is perfectly sufficient for our purposes.
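As an illustration, the two CMB observables can be computed directly from the background expansion: \(R=\sqrt{\varOmega _m}\,H_0\,r(z_*)/c\) and \(l_a=\pi \,r(z_*)/r_s(z_*)\). The sketch below assumes an illustrative flat \(\varLambda \)CDM fiducial (the parameter values are ours, not those used by Mukherjee et al. 2008), so the numbers come out close to, but not identical with, the table values:

```python
import numpy as np

def trap(y, x):
    """Trapezoidal integral (version-agnostic, avoids np.trapz deprecation)."""
    return np.sum(0.5*(y[1:] + y[:-1]) * np.diff(x))

c = 299792.458                       # speed of light [km/s]
H0, om, ob = 67.0, 0.32, 0.049       # illustrative fiducial values (assumed)
h = H0 / 100.0
z_star = 1090.0                      # decoupling redshift (assumed)

og = 2.47e-5 / h**2                  # photon density parameter
orad = og * (1.0 + 0.2271*3.046)     # photons + relativistic neutrinos
ol = 1.0 - om - orad                 # flatness

def Hz(z):
    return H0 * np.sqrt(om*(1+z)**3 + orad*(1+z)**4 + ol)

# comoving distance to decoupling [Mpc]
z = np.linspace(0.0, z_star, 200000)
r_star = c * trap(1.0/Hz(z), z)

# comoving sound horizon at decoupling [Mpc]
zz = np.logspace(np.log10(z_star), 8.0, 200000)
Rb = 3.0*ob*h**2 / (4.0*2.47e-5) / (1.0 + zz)   # baryon loading R_b(z)
cs = c / np.sqrt(3.0*(1.0 + Rb))                # sound speed [km/s]
r_s = trap(cs/Hz(zz), zz)

R_shift = np.sqrt(om) * H0 * r_star / c         # shift parameter R
l_a = np.pi * r_star / r_s                      # acoustic scale l_a
print(R_shift, l_a)
```

Changing the curvature or the dark-energy equation of state moves \(r(z_*)\) and hence both observables, which is exactly why they compress the geometric information of the CMB.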
R, \(l_a\), \(\varOmega _bh^2\) and \(n_s\) estimated from Planck simulated data (case \(\varOmega _k\ne 0\))

Parameter             Mean       RMS variance
R                     1.7016     0.0055
\(l_a\)               302.108    0.098
\(\varOmega _b h^2\)  0.02199    0.00017
\(n_s\)               0.9602     0.0038
Covariance matrix for \((R, l_a, \varOmega _b h^2, n_s)\) from Planck (case \(\varOmega _k\ne 0\))

                      R              \(l_a\)        \(\varOmega _b h^2\)  \(n_s\)
R                     0.303492E-04   0.297688E-03   -0.545532E-06         -0.175976E-04
\(l_a\)               0.297688E-03   0.951881E-02   -0.759752E-05         -0.183814E-03
\(\varOmega _b h^2\)  -0.545532E-06  -0.759752E-05  0.279464E-07          0.238882E-06
\(n_s\)               -0.175976E-04  -0.183814E-03  0.238882E-06          0.147219E-04
Fisher matrix for \((w_0, w_a, \varOmega _{\mathrm {DE}}, \varOmega _k, \omega _m, \omega _b, n_S)\) derived from the covariance matrix for \((R, l_a, \varOmega _b h^2, n_s)\) from Planck

                               \(w_0\)        \(w_a\)        \(\varOmega _{\mathrm {DE}}\)  \(\varOmega _k\)  \(\omega _m\)  \(\omega _b\)  \(n_S\)
\(w_0\)                        0.172276E+06   0.490320E+05   0.674392E+06   -0.208974E+07   0.325219E+07   -0.790504E+07   -0.549427E+05
\(w_a\)                        0.490320E+05   0.139551E+05   0.191940E+06   -0.594767E+06   0.925615E+06   -0.224987E+07   -0.156374E+05
\(\varOmega _{\mathrm {DE}}\)  0.674392E+06   0.191940E+06   0.263997E+07   -0.818048E+07   0.127310E+08   -0.309450E+08   -0.215078E+06
\(\varOmega _k\)               -0.208974E+07  -0.594767E+06  -0.818048E+07  0.253489E+08    -0.394501E+08  0.958892E+08    0.666335E+06
\(\omega _m\)                  0.325219E+07   0.925615E+06   0.127310E+08   -0.394501E+08   0.633564E+08   -0.147973E+09   -0.501247E+06
\(\omega _b\)                  -0.790504E+07  -0.224987E+07  -0.309450E+08  0.958892E+08    -0.147973E+09  0.405079E+09    0.219009E+07
\(n_S\)                        -0.549427E+05  -0.156374E+05  -0.215078E+06  0.666335E+06    -0.501247E+06  0.219009E+07    0.242767E+06
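Such a Fisher matrix is obtained from the observable covariance via the Jacobian of the observables with respect to the new parameters, \(F = J^{T} C^{-1} J\) with \(J_{ij}=\partial O_i/\partial p_j\). A minimal sketch of this projection (the `fisher_from_cov` helper and its finite-difference step are ours, not part of the cited analysis):

```python
import numpy as np

def fisher_from_cov(cov, model, p_fid, eps=1e-4):
    """Project an observable covariance into a Fisher matrix F = J^T C^-1 J,
    with the Jacobian J evaluated by central finite differences at p_fid."""
    p_fid = np.asarray(p_fid, dtype=float)
    n_obs = len(model(p_fid))
    n_par = p_fid.size
    jac = np.zeros((n_obs, n_par))
    for j in range(n_par):
        dp = np.zeros(n_par)
        dp[j] = eps * max(abs(p_fid[j]), 1.0)
        jac[:, j] = (model(p_fid + dp) - model(p_fid - dp)) / (2.0*dp[j])
    cinv = np.linalg.inv(cov)
    return jac.T @ cinv @ jac

# toy check with a linear "model": the identity map returns C^-1 itself
cov = np.array([[2.0, 0.3], [0.3, 1.0]])
f = fisher_from_cov(cov, lambda p: np.asarray(p, dtype=float), [1.0, 2.0])
print(np.allclose(f, np.linalg.inv(cov)))
```

In the real application, `model` would map \((w_0, w_a, \varOmega _{\mathrm {DE}}, \varOmega _k, \omega _m, \omega _b, n_S)\) to \((R, l_a, \varOmega _b h^2, n_s)\) through a background-cosmology calculation.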
2.8.11 Forecasts for model-independent observations
As discussed in Sect. I.7.6, it is worthwhile to complement the standard P(k) analysis, see Eq. (I.7.31), with the \(C_{\ell }(z_1,z_2)\) method, which involves the directly observable redshift and angular separations instead of reconstructed, model-dependent comoving distances. The full relativistic expression of the redshift-dependent angular power spectra of galaxy number counts, \(C_{\ell }(z_1,z_2)\), which holds for any theory of gravity whose metric can be written as in Eq. (I.7.2) and in which photons and dark matter particles move along geodesics, is given in Bonvin and Durrer (2011) and Di Dio et al. (2013). In particular, it includes the lensing contribution [see Eq. (I.7.17)] and redshift-space distortions due to peculiar velocities, see Kaiser (1987), as well as other terms depending on the gravitational potentials.
The advantage of this method is that it makes optimal use of the redshift information and does not average over directions. For spectroscopic redshifts, however, the large number of redshift bins that would be needed to fully profit from the redshift information is severely limited by shot noise. When using redshift bins that are significantly thicker than the redshift resolution of the survey, the P(k) analysis in principle has an advantage, since it makes use of the full redshift resolution in determining distances of galaxies, while in the \(C_{\ell }(z_1,z_2)\) analysis we do not distinguish redshifts of galaxies in the same bin. However, for spectroscopic surveys we can in principle allow for very slim bins, with a thickness significantly smaller than the nonlinearity scale; the maximal number of useful bins is then set by the shot noise, as well as by numerical limitations related to Markov chain Monte Carlo data analysis.
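The role of shot noise can be illustrated with a back-of-the-envelope estimate: the diagonal spectra \(C_\ell (z_i,z_i)\) acquire a flat noise contribution \(1/\bar{n}_i\), where \(\bar{n}_i\) is the surface density of galaxies per steradian in bin i, so slicing a fixed galaxy sample into more bins raises the noise floor per bin. The survey numbers below are illustrative assumptions, not Euclid specifications:

```python
import numpy as np

deg2_to_sr = (np.pi/180.0)**2
area_sr = 15000.0 * deg2_to_sr      # wide-survey area in steradians
n_gal = 5.0e7                        # assumed spectroscopic galaxy count

def shot_noise(n_bin):
    """Flat shot-noise contribution 1/nbar to C_l(z_i, z_i) per redshift bin,
    for n_gal galaxies split evenly into n_bin bins over area_sr."""
    nbar = n_gal / n_bin / area_sr   # galaxies per steradian per bin
    return 1.0 / nbar

for n_bin in (10, 50, 200):
    print(n_bin, shot_noise(n_bin))
```

The noise floor grows linearly with the number of bins, which is why the useful bin count saturates even for a spectroscopic survey with excellent redshift resolution.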
The cross correlations from different redshift bins provide an alternative measure of the lensing potential (Montanari and Durrer 2015), which is complementary to the analysis of shear with completely different systematic errors. This will allow the measurement of \(\langle \delta (z_1)\kappa (z_2)\rangle \) for \(z_2>z_1\).
2.9 Summary and outlook
 1.
Euclid (RS) should be able to measure the main standard cosmological parameters to percent or subpercent level as detailed in Table 7 (all marginalized errors, including constant equation of state and constant growth rate, see Table 11 and Fig. 24).
 2.
The two CPL parameters \(w_0,w_1\) should be measured with errors 0.06 and 0.26, respectively (fixing the growth rate to fiducial), see Table 11 and Fig. 24.
 3.
The equation of state w and the growth rate parameter \(\gamma \), both assumed constant, should be simultaneously constrained to within 0.04 and 0.03, respectively.
 4.
The growth function should be constrained to within 0.01–0.02 for each redshift bin from \(z=0.7\) to \(z=2\) (see Table 4).
 5.
A scaleindependent bias function b(z) should be constrained to within 0.02 for each redshift bin (see Table 4).
 6.
The growth rate parameters \(\gamma _0,\gamma _1\) defined in Eq. (I.8.5) should be measured to within 0.08, 0.17, respectively.
 7.
Euclid will achieve an accuracy on measurements of the dark-energy sound speed of \(\sigma (c_s^2)/c_s^2=2615\) (WLS) and \(\sigma (c_s^2)/c_s^2=50.05\) (RS), if \(c_s^2=1\), or \(\sigma (c_s^2)/c_s^2=0.132\) (WLS) and \(\sigma (c_s^2)/c_s^2=0.118\) (RS), if \(c_s^2=10^{-6}\).
 8.
The coupling \(\beta ^{2}\) between dark energy and dark matter can be constrained by Euclid (with Planck) to less than \(3\times 10^{-4}\) (see Fig. 34 and Table 13).
 9.
Any departure from GR greater than \(\simeq 0.03\) in the growth index \(\gamma \) will be distinguished by the WLS (Heavens et al. 2007).
 10.
Euclid WLS can detect deviations between 3 and 10% from the GR value of the modifiedgravity parameter \(\varSigma \) (Eq. I.3.28), whilst with the RS there will be a 20% accuracy on both \(\varSigma \) and \(\mu \) (Eq. I.3.27).
 11.
With the WLS, Euclid should provide an upper limit on the present dimensionless scalaron inverse mass \(\mu \equiv H_{0}/M_{0}\) of the f(R) scalar [where the time-dependent scalar field mass is defined in Eq. (I.8.37)] of \(\mu = 0.00 \pm 1.10\times 10^{-3}\) for \(l < 400\) and \(\mu = 0.0 \pm 2.10 \times 10^{-4}\) for \(l < 10{,}000\).
 12.
The WLS will be able to rule out the DGP model growth index with a Bayes factor \(\ln B\simeq 50\) (Heavens et al. 2007), and viable phenomenological extensions could be detected at the \(3\sigma \) level for \(1000\lesssim \ell \lesssim 4000\) (Camera et al. 2011a).
 13.
The photometric survey of Euclid, i.e., the WLS, is very promising for measuring the directly observable angular and redshift-dependent power spectra \(C_{\ell }(z_1,z_2)\) (and correlation functions), as discussed in Di Dio et al. (2014). These spectra are truly model independent and especially well suited to estimating cosmological parameters or testing models of modified gravity.
The forecasts presented here can still be improved in several respects:
 1.
The results of the redshift survey and weak lensing surveys should be combined in a statistically coherent way.
 2.
The set of possible priors to be combined with Euclid data should be better defined.
 3.
The forecasts for the parameters of the modified gravity and clustered dark-energy models should be extended to include more general cases.
 4.
We should estimate the errors on a general reconstruction of the modified gravity functions \(\varSigma ,\mu \) or of the metric potentials \(\varPsi ,\varPhi \) as a function of both scale and time.
 5.
We should use the \(C_\ell (z_1,z_2)\) method to constrain modified gravity models.
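Several of the forecasts above are phrased in terms of the growth index \(\gamma \). As a minimal numerical check, assuming flat \(\varLambda \)CDM (the value \(\varOmega _{m,0}=0.3\) is ours), the growth rate \(f=\mathrm {d}\ln \delta /\mathrm {d}\ln a\) can be integrated from its evolution equation and compared with the fit \(f\simeq \varOmega _m(a)^{0.55}\):

```python
import numpy as np

# Growth-rate sketch for flat LCDM: f obeys
#   df/dlna = 1.5*Om(a) - f^2 - (2 - 1.5*Om(a))*f,
# using dlnH/dlna = -1.5*Om(a), with f -> 1 deep in matter domination.
om0 = 0.3                             # assumed present matter density

def om_a(a):
    return om0 / (om0 + (1.0 - om0)*a**3)

lna = np.linspace(np.log(1e-3), 0.0, 200000)
f = 1.0                               # matter-dominated initial condition
for i in range(1, lna.size):
    dl = lna[i] - lna[i-1]
    om = om_a(np.exp(lna[i-1]))
    f += dl * (1.5*om - f**2 - (2.0 - 1.5*om)*f)   # explicit Euler step

print(f, om_a(1.0)**0.55)             # f(z=0) vs the gamma = 0.55 fit
```

The two numbers agree at the percent level, which is why departures of \(\gamma \) from \(\simeq 0.55\) of order 0.03 (item 9 above) are a meaningful test of GR.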
3 Part II Dark matter and neutrinos
3.1 Introduction
The identification of dark matter is one of the most important open problems in particle physics and cosmology. In standard cosmology, dark matter contributes 85% of all the matter in the universe, but we do not know what it is made of, as we have never observed dark matter particles in our laboratories. The foundations of the modern dark matter paradigm were laid in the 1970s and 1980s, after decades of slow accumulation of evidence. Back in the 1930s, it was noticed that the Coma cluster seemed to contain much more mass than what could be inferred from visible galaxies (Zwicky 1933, 1937), and a few years later, it became clear that the Andromeda galaxy M31 rotates anomalously fast at large radii, as if most of its mass resides in its outer regions. Several other pieces of evidence provided further support to the dark matter hypothesis, including the so-called timing argument. In the 1970s, rotation curves were extended to larger radii and to many other spiral galaxies, proving the presence of large amounts of mass on scales much larger than the size of galactic disks (Peacock 1999).
We are now in a position to determine the total abundance of dark matter, relative to normal baryonic matter, in the universe with exquisite accuracy; we have a much better understanding of how dark matter is distributed in structures ranging from dwarf galaxies to clusters of galaxies, thanks to gravitational lensing observations (see Massey et al. 2010, for a review) and theoretically from high-resolution numerical simulations made possible by modern supercomputers (such as, for example, the Millennium or Marenostrum simulations).
Originally, Zwicky thought of dark matter as most likely baryonic—missing cold gas, or low mass stars. Rotation curve observations could be explained by dark matter in the form of MAssive Compact Halo Objects (MACHOs, e.g., a halo of black holes or brown dwarfs). However, the MACHO and EROS experiments have shown that dark matter cannot be in the mass range \(0.6\times 10^{-7}\,M_{\odot }<M<15\,M_{\odot }\) if it comprises massive compact objects (Alcock et al. 2000; Tisserand et al. 2007). Gas measurements are now extremely sensitive, ruling out dark matter as undetected gas (Bi and Davidsen 1997; Choudhury et al. 2001; Richter et al. 2006; but see Pfenniger et al. 1994). And the CMB and Big Bang Nucleosynthesis require the total mass in baryons in the universe to be significantly less than the total matter density (Rebolo 2002; Coc et al. 2002; Turner 2002).
This is one of the most spectacular results in cosmology obtained at the end of the 20th century: dark matter has to be nonbaryonic. As a result, our expectation of the nature of dark matter shifted from an astrophysical explanation to particle physics, linking the smallest and largest scales that we can probe.
During the 1970s the possibility that neutrinos with a mass of a few tens of eV could be the dark matter particle was explored, but it was realized that such a light particle would erase the primordial fluctuations on small scales, leading to a lack of structure formation on galactic scales and below. It was therefore postulated that the dark matter particle must be cold (low thermal energy, to allow structures on small scales to form), collisionless (or have a very low interaction cross section, because dark matter is observed to be pressureless) and stable over a long period of time: such a candidate is referred to as a weakly interacting massive particle (WIMP). This is the standard cold dark matter (CDM) picture (see Frenk et al. 1990; Peebles et al. 1991).
Particle physicists have proposed several possible dark matter candidates. Supersymmetry (SUSY) is an attractive extension of the Standard Model of particle physics. The lightest SUSY particle (the LSP) is stable, uncharged, and weakly interacting, providing a perfect WIMP candidate known as a neutralino. Specific realizations of SUSY each provide slightly different dark matter candidates (for a review see Jungman et al. 1996). Another distinct dark matter candidate arising from extensions of the Standard Model is the axion, a hypothetical pseudo-Goldstone boson whose existence was postulated to solve the so-called strong CP problem in quantum chromodynamics (Peccei and Quinn 1977), also arising generically in string theory (Witten 1984; Svrcek and Witten 2006). Axions are very well motivated dark matter candidates (for a review of axions in cosmology see Sikivie 2008). Other well-known candidates are sterile neutrinos, which interact only gravitationally with ordinary matter, apart from a small mixing with the familiar neutrinos of the Standard Model (which should make them ultimately unstable), and candidates arising from technicolor (see, e.g., Gudnason et al. 2006). A wide array of other possibilities have been discussed in the literature, and they are currently being searched for with a variety of experimental strategies (for a complete review of dark matter in particle physics see Amsler et al. 2008).
There remain some possible discrepancies in the standard cold dark matter model, such as the missing satellites problem and the cusp-core controversy (see below for details and references), that have led some authors to question the CDM model and to propose alternative solutions. The physical mechanism by which one may reconcile the observations with the standard theory of structure formation is the suppression of the matter power spectrum at small scales. This can be achieved with dark matter particles with a strong self-scattering cross section, or with particles with a non-negligible velocity dispersion at the epoch of structure formation, also referred to as warm dark matter (WDM) particles.
Another possibility is that the extra gravitational degrees of freedom arising in modified theories of gravity play the role of dark matter. In particular, this happens for the Einstein-Aether, TeVeS and bigravity models. These theories were developed following the idea that the presence of unknown dark components in the universe may be an indication not that the matter component is exotic, but rather that gravity is not described by standard GR.

Among the dark matter results within reach of Euclid are:

– the discovery of an exponential suppression in the power spectrum at small scales, which would rule out CDM and favor WDM candidates, or, in its absence, the determination of a lower limit on the mass of the WDM particle, \(m_{\mathrm {WDM}}\), of 2 keV;

– the determination of an upper limit on the dark matter self-interaction cross section \(\sigma /m\sim 10^{-27}\mathrm {\ cm^2\ GeV^{-1}}\) at 68% CL, which represents an improvement of three orders of magnitude compared to the best constraint available today, which arises from the analysis of the dynamics of the bullet cluster;

– the measurement of the slope of the dark matter distribution within galaxies and clusters of galaxies with unprecedented accuracy;

– the determination of the properties of the only known—though certainly subdominant—nonbaryonic dark matter particle: the standard neutrino, for which Euclid can provide information on the absolute mass scale, its normal or inverted hierarchy, as well as its Dirac or Majorana nature;

– the test of unified dark matter (UDM, or quartessence) models, through the detection of characteristic oscillatory features predicted by these theories in the matter power spectrum, detectable through weak lensing or baryonic acoustic oscillation studies;

– a probe of the axiverse, i.e., of the legacy of string theory through the presence of ultralight scalar fields that can affect the growth of structure, introducing features in the matter power spectrum and modifying the growth rate of structures.
3.2 Dark matter halo properties
Dark matter was first proposed by Zwicky (1937) to explain the anomalously high velocity of galaxies in galaxy clusters. Since then, evidence for dark matter has been accumulating on all scales. The velocities of individual stars in dwarf galaxies suggest that these are the most dark matter dominated systems in the universe (e.g., Mateo 1998; Kleyna et al. 2001; Simon and Geha 2007; Martin et al. 2007; Walker et al. 2007). Low surface brightness (LSB) and giant spiral galaxies rotate too fast to be supported by their stars and gas alone, indicating the presence of dark matter (de Blok et al. 2001; Simon et al. 2005; Borriello and Salucci 2001; Klypin et al. 2002). Gravitationally lensed giant elliptical galaxies and galaxy clusters require dark matter to explain their observed image distributions (e.g., Refsdal 1964; Bourassa and Kantowski 1975; Walsh et al. 1979; Soucail et al. 1987; Clowe et al. 2006a). Finally, the temperature fluctuations in the cosmic microwave background (CMB) radiation indicate the need for dark matter in about the same amount as that required in galaxy clusters (e.g., Smoot et al. 1992; Wright et al. 1992; Spergel et al. 2007).
3.2.1 The halo mass function as a function of redshift
Attempts have already been made to probe the small scale power in the universe through galaxy counts. Figure 37 shows the best measurement of the ‘baryonic mass function’ of galaxies to date (Read and Trentham 2005). This is the number of galaxies with a given total mass in baryons, normalized to a volume of 1 Mpc\(^3\). To achieve this measurement, Read and Trentham (2005) sewed together results from a wide range of surveys reaching a baryonic mass of just \(\sim \,10^6\,M_{\odot }\)—some of the smallest galaxies observed to date.
The baryonic mass function already turns up an interesting result. Overplotted in blue on Fig. 37 is the dark matter mass function expected assuming that dark matter is ‘cold’—i.e., that it has no preferred scale. Notice that this has a different shape. On large scales, there should be bound dark matter structures with masses as large as \(10^{14}\,M_{\odot }\), yet the number of observed galaxies drops off exponentially above a baryonic mass of \(\sim \,10^{12}\,M_{\odot }\). This discrepancy is well understood. Such large dark matter haloes have been observed, but they no longer host a single galaxy; rather they are bound collections of galaxies—galaxy clusters (see e.g. Zwicky 1937). However, there is also a discrepancy at low masses that is not so well understood. There should be far more bound dark matter haloes than observed small galaxies. This is the well-known ‘missing satellite’ problem (Moore et al. 1999; Klypin et al. 1999).
The missing satellite problem could be telling us that dark matter is not cold. The red line on Fig. 37 shows the expected dark matter mass function for WDM with a (thermal relic) mass of \(m_{\mathrm {WDM}}=1\mathrm {\ keV}\). Notice that this gives an excellent match to the observed slope of the baryonic mass function on small scales. However, there may be a less exotic solution. It is likely that star formation becomes inefficient in galaxies on small scales. A combination of supernovae feedback, reionization and rampressure stripping is sufficient to fully explain the observed distribution assuming pure CDM (Kravtsov et al. 2004; Read et al. 2006; Macciò et al. 2010). Such ‘baryon feedback’ solutions to the missing satellite problem are also supported by recent measurements of the orbits of the Milky Way’s dwarf galaxies (Lux et al. 2010).
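The small-scale suppression that distinguishes such a WDM model from CDM can be sketched with a common fitting formula for the WDM transfer function (here the form of Viel et al. 2005; the parameter values are illustrative, and this is a sketch, not the calculation behind Fig. 37):

```python
import numpy as np

def wdm_transfer(k, m_wdm_kev, omega_wdm=0.25, h=0.7):
    """WDM transfer function T(k) = [1 + (alpha*k)^(2*nu)]^(-5/nu)
    (Viel et al. 2005 fit); k in h/Mpc, thermal-relic mass in keV.
    omega_wdm and h defaults are illustrative assumptions."""
    nu = 1.12
    alpha = 0.049 * m_wdm_kev**-1.11 * (omega_wdm/0.25)**0.11 * (h/0.7)**1.22
    return (1.0 + (alpha*k)**(2.0*nu))**(-5.0/nu)

k = np.logspace(-1, 2, 400)                    # wavenumbers [h/Mpc]
k_half = {}
for m in (1.0, 2.0):                           # thermal-relic masses [keV]
    sup = wdm_transfer(k, m)**2                # P_WDM / P_CDM
    k_half[m] = k[np.argmin(np.abs(sup - 0.5))]  # "half-mode" scale
    print(m, k_half[m])
```

The suppression is negligible on large scales and becomes exponential beyond the half-mode scale, which moves to smaller scales (larger k) for heavier particles; this is the signature a lower limit on \(m_{\mathrm {WDM}}\) would exploit.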
II.2.1.1 Weak and strong lensing measurements of the halo mass function
To make further progress on WDM constraints from astrophysics, we must avoid the issue of baryonic physics by probing the halo mass function directly. The only tool for achieving this is gravitational lensing. In weak lensing this means stacking data for a very large number of galaxies to obtain an averaged mass function. In strong lensing, this means simply finding enough systems with ‘good data’. Good data ideally means multiple sources with wide redshift separation (Saha and Read 2009); combining independent data from dynamics with lensing may also prove a promising route (see e.g. Treu and Koopmans 2002).
Euclid will measure the halo mass function down to \(\sim \, 10^{13}\,M_{\odot }\) using weak lensing. It will simultaneously find thousands of strong lensing systems. However, in both cases the lowest mass scale is limited by the lensing critical density. This limits us to probing down to a halo mass of \(\sim \,10^{11}\,M_{\odot }\), which gives poor constraints on the nature of dark matter. However, if such measurements can be made as a function of redshift, the constraints improve dramatically. We discuss this in the next section.
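The lensing critical density mentioned above sets the relevant scale: a deflector produces an appreciable signal only where its surface density approaches \(\varSigma _{\mathrm {cr}} = c^2 D_s/(4\pi G D_l D_{ls})\). A minimal sketch with assumed flat-\(\varLambda \)CDM parameters (the lens and source redshifts below are illustrative):

```python
import numpy as np

c_kms, H0, om = 299792.458, 70.0, 0.3       # illustrative cosmology

def chi(z, nstep=20000):
    """Comoving distance [Mpc] in flat LCDM via trapezoidal integration."""
    zz = np.linspace(0.0, z, nstep)
    y = 1.0 / (H0*np.sqrt(om*(1+zz)**3 + 1.0 - om))
    return c_kms * np.sum(0.5*(y[1:] + y[:-1]) * np.diff(zz))

def sigma_crit(z_l, z_s):
    """Lensing critical surface density Sigma_cr = c^2 D_s / (4 pi G D_l D_ls)
    in kg/m^2, using angular diameter distances D = chi/(1+z) (flat universe)."""
    G, c_m, mpc = 6.674e-11, 2.998e8, 3.086e22
    d_l = chi(z_l)/(1.0+z_l) * mpc
    d_s = chi(z_s)/(1.0+z_s) * mpc
    d_ls = (chi(z_s) - chi(z_l))/(1.0+z_s) * mpc
    return c_m**2/(4.0*np.pi*G) * d_s/(d_l*d_ls)

sig = sigma_crit(0.3, 1.0)                   # a few kg/m^2 for typical geometry
print(sig)
```

Haloes whose surface density is far below \(\varSigma _{\mathrm {cr}}\) lens only weakly, which is the geometric origin of the \(\sim \,10^{11}\,M_{\odot }\) floor quoted above.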
II.2.1.2 The advantage of going to high redshift
Dark matter constraints from the halo mass function become much stronger if the halo mass function is measured as a function of redshift. This is because warm dark matter delays the growth of structure formation as well as suppressing small scale power. This is illustrated in Fig. 38, which shows the fraction of mass in bound structures as a function of redshift, normalized to a halo of the Milky Way's mass at redshift \(z=0\). Marked are different thermal relic WDM particle masses in keV (black solid lines). Notice that the differences between WDM models increase significantly towards higher redshift at a given mass scale. Thus we can obtain strong constraints on the nature of dark matter by moving to higher redshifts, rather than to lower halo masses.
3.2.2 The dark matter density profile
3.3 Euclid dark matter studies: wide-field X-ray complementarity
The predominant extragalactic X-ray sources are AGNs and galaxy clusters. For dark matter studies the latter are the more interesting targets. X-rays from clusters are emitted as thermal bremsstrahlung by the hot intracluster medium (ICM), which contains most of the baryons in the cluster. The thermal pressure of the ICM supports it against gravitational collapse, so that measuring the temperature through X-ray observations provides information about the mass of the cluster and its distribution. Hence, X-rays are a probe of the dark matter in clusters complementary to Euclid's weak lensing measurements.
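The hydrostatic argument can be made concrete: for an ICM in hydrostatic equilibrium, \(M(<r) = -\,k_B T\, r/(G\mu m_p)\,\left[\mathrm {d}\ln \rho /\mathrm {d}\ln r + \mathrm {d}\ln T/\mathrm {d}\ln r\right]\). The sketch below evaluates this with purely illustrative cluster numbers (temperature, radius and density slope are assumptions, not measurements):

```python
G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
m_p = 1.673e-27        # proton mass [kg]
mu = 0.6               # mean molecular weight of the ICM (assumed)

def hydrostatic_mass(kT_keV, r_mpc, dlnrho_dlnr, dlnT_dlnr=0.0):
    """Hydrostatic mass M(<r) in solar masses for an ICM with temperature
    kT [keV] and logarithmic density/temperature slopes at radius r [Mpc]."""
    kT = kT_keV * 1.602e-16               # keV -> J
    r = r_mpc * 3.086e22                  # Mpc -> m
    m = -kT * r / (G * mu * m_p) * (dlnrho_dlnr + dlnT_dlnr)
    return m / 1.989e30                   # kg -> solar masses

# a ~7 keV isothermal cluster with rho ~ r^-2 at r = 1 Mpc
m_est = hydrostatic_mass(7.0, 1.0, -2.0)
print(m_est)
```

The result is of order \(5\times 10^{14}\,M_{\odot }\), the right ballpark for a hot cluster, which is why temperature profiles translate directly into mass profiles that lensing can cross-check.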
The ongoing X-ray missions XMM-Newton and Chandra have good enough angular resolution to measure the temperature and mass profiles in \(\sim \,10\) radial bins for clusters at reasonable redshifts, although this requires long exposures. Many planned X-ray missions aim to improve the spectral coverage, spectral resolution, and/or collection area of the present missions, but they are nonetheless mostly suited for targeted observations of individual objects. Two notable exceptions are eROSITA^{16} (Cappelluti et al. 2011) and the Wide Field X-ray Telescope^{17} (WFXT; Giacconi et al. 2009; Vikhlinin et al. 2009; Sartoris et al. 2010; Rosati et al. 2011; Borgani et al. 2011; Sartoris et al. 2012, proposed), which will both conduct full-sky surveys and, in the case of WFXT, also smaller but deeper surveys of large fractions of the sky.
A sample of high-angular-resolution X-ray cluster observations can be used to test the prediction from N-body simulations of structure formation that dark matter haloes are described by the NFW profile (Navarro et al. 1996) with a concentration parameter c. This parameter describes the steepness of the profile and is related to the mass of the halo (Neto et al. 2007). Weak or strong lensing measurements of the mass profile, such as those that will be provided by Euclid, can supplement the X-ray measurements and have different systematics. Euclid could provide wide-field weak lensing data for such a purpose with very good point spread function (PSF) properties, but it is likely that the depth of the Euclid survey will make dedicated deep-field observations a better choice for a lensing counterpart to the X-ray observations. However, if the WFXT mission becomes a reality, the sheer number of detected clusters with mass profiles would mean Euclid could play a much more important rôle.
X-ray observations of galaxy clusters can constrain cosmology by measuring the geometry of the universe through the baryon fraction \(f_{\mathrm {gas}}\) (Allen et al. 2008) or by measuring the growth of structures by determining the high-mass tail of the mass function (Mantz et al. 2010). The latter method would make the most of the large number of clusters detected in full-sky surveys and there would be several benefits by combining an X-ray and a lensing survey. It is not immediately clear which type of survey would be able to better detect clusters at various redshifts and masses, and the combination of the two probes could improve understanding of the sample completeness. An X-ray survey alone cannot measure cluster masses with the required precision for cosmology. Instead, it requires a calibrated relation between the X-ray temperature and the cluster mass. Such a calibration, derived from a large sample of clusters, could be provided by Euclid. In any case, it is not clear yet whether the large size of a Euclid sample would be more beneficial than deeper observations of fewer clusters.
Finally, X-ray observations can also confirm the nature of possible ‘bullet-like’ merging clusters. In such systems the shock of the collision has displaced the ICM from the dark matter mass, which is identified through gravitational lensing. This offers the opportunity to study dark matter haloes with very few baryons and, e.g., search for signatures of decaying or annihilating dark matter.
3.4 Dark matter mapping
Gravitational lensing offers a unique way to chart dark matter structures in the universe as it is sensitive to all forms of matter. Weak lensing has been used to map the dark matter in galaxy clusters (see for example Clowe et al. 2006b) with high resolution reconstructions recovered for the most massive strong lensing clusters (see for example Bradač et al. 2006). Several lensing studies have also mapped the projected surface mass density over degree-scale fields (Gavazzi and Soucail 2007; Schirmer et al. 2007; Kubo et al. 2009) to identify shear-selected groups and clusters. The minimum mass scale that can be identified is limited only by the intrinsic ellipticity noise in the lensing analysis and projection effects. Using a higher number density of galaxies in the shear measurement reduces this noise, and for this reason the Deep Field Euclid Survey will be truly unique for this area of research, permitting high resolution reconstructions of dark matter in the field (Massey et al. 2007; Heymans et al. 2008) and the study of lenses at higher redshift.
There are several nonparametric methods to reconstruct dark matter in 2D, which can be broadly split into two categories: convergence techniques (Kaiser and Squires 1993) and potential techniques (Bartelmann et al. 1996). In the former, one measures the projected surface mass density (or convergence) \(\kappa \) directly by applying a convolution to the measured shear under the assumption that \(\kappa \ll 1\). Potential techniques perform a \(\chi ^2\) minimization and are better suited to the cluster regime; they can also incorporate strong lensing information (Bradač et al. 2005). In the majority of methods, choices need to be made about smoothing scales to optimize signal-to-noise whilst preserving reconstruction resolution. Using a wavelet method circumvents this choice (Starck et al. 2006; Khiabanian and Dell’Antonio 2008) but makes the resulting significance of the reconstruction difficult to measure.
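The convergence-technique inversion can be sketched in a few lines: in Fourier space, shear and convergence are related by a simple multiplicative kernel, so \(\kappa \) follows from \(\gamma _1,\gamma _2\) by multiplication and an inverse FFT (the Kaiser–Squires relation). The synthetic test field below is ours, a toy Gaussian "halo" rather than real data:

```python
import numpy as np

n = 128
lx = np.fft.fftfreq(n)
l1, l2 = np.meshgrid(lx, lx, indexing='ij')
l_sq = l1**2 + l2**2
l_sq[0, 0] = 1.0                               # avoid division by zero at l=0

# forward: shear from a known convergence map (Gaussian blob)
x = np.arange(n) - n/2
kappa_true = np.exp(-(x[:, None]**2 + x[None, :]**2) / (2*8.0**2))
k_hat = np.fft.fft2(kappa_true)
g1_hat = (l1**2 - l2**2)/l_sq * k_hat
g2_hat = 2.0*l1*l2/l_sq * k_hat

# inverse (Kaiser-Squires): recover kappa up to the unconstrained l=0 mode
k_rec_hat = (l1**2 - l2**2)/l_sq*g1_hat + 2.0*l1*l2/l_sq*g2_hat
k_rec_hat[0, 0] = k_hat[0, 0]                  # restore the mean by hand
kappa_rec = np.real(np.fft.ifft2(k_rec_hat))
print(np.max(np.abs(kappa_rec - kappa_true)))  # round-trip residual
```

The round trip is exact up to floating-point error because the forward and inverse kernels multiply to unity for all \(l\ne 0\); with real data, noise and the mass-sheet degeneracy (the lost \(l=0\) mode) are what make the smoothing choices discussed above necessary.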
In Van Waerbeke et al. (2013) the techniques of weak lensing mass mapping were applied to a widefield survey for the first time, using the CFHTLenS data set. These mass maps were used to generate higher order statistics beyond the twopoint correlation function.
3.4.1 Charting the universe in 3D
The lensing distortion depends on the total projected surface mass density along the line of sight with a geometrical weighting that peaks between a given source and observer, while increasing with source distance. This redshift dependence can be used to recover the full 3D gravitational potential of the matter density as described in Hu and Keeton (2002), Bacon and Taylor (2003) and applied to the COMBO17 survey in Taylor et al. (2004) and the COSMOS survey in Massey et al. (2007). This work has been extended in Simon et al. (2009) to reconstruct the full 3D mass density field and applied to the STAGES survey in Simon et al. (2012).
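The geometric weighting described above can be made explicit: for a flat universe, the lensing efficiency of a lens at comoving distance \(\chi _l\) for sources at \(\chi _s\) scales as \(\chi _l(\chi _s-\chi _l)/\chi _s\), peaking midway in comoving distance. A sketch with assumed cosmological parameters and a single source plane:

```python
import numpy as np

c_kms, H0, om = 299792.458, 70.0, 0.3       # illustrative flat-LCDM values

def chi(z, nstep=10000):
    """Comoving distance [Mpc] via trapezoidal integration."""
    zz = np.linspace(0.0, z, nstep)
    y = 1.0 / (H0*np.sqrt(om*(1+zz)**3 + 1.0 - om))
    return c_kms * np.sum(0.5*(y[1:] + y[:-1]) * np.diff(zz))

z_s = 1.0                                   # assumed source redshift
chi_s = chi(z_s)
z_l = np.linspace(0.01, 0.99, 99)
w = np.array([chi(z)*(chi_s - chi(z))/chi_s for z in z_l])  # efficiency

z_peak = z_l[np.argmax(w)]                  # most efficient lens redshift
print(z_peak)
```

Because the kernel is broad, nearby lens planes are nearly degenerate, which is the geometric reason the radial resolution of 3D lensing reconstructions is intrinsically poor.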
All 3D mass reconstruction methods require the use of a prior based on the expected mean growth of matter density fluctuations. Without the inclusion of such a prior, Hu and Keeton (2002) have shown that one is unable to reasonably constrain the radial matter distribution, even for densely sampled, space-based-quality lensing data. Therefore, 3D maps cannot be directly used to infer cosmological parameters.
The driving motivation behind the development of 3D reconstruction techniques was to enable an unbiased 3D comparison of mass and light. Dark haloes, for example, would only be detected in this manner. However, the detailed analysis of noise and the radial PSF in the 3D lensing reconstructions presented for the first time in Simon et al. (2012) shows how inherently noisy the process is. Given that the method can resolve only the most massive structures in 3D, the future direction for applying it to the Euclid Wide survey should be to reconstruct large-scale structures in the 3D density field. Using more heavily spatially smoothed data, we can expect higher-quality 3D reconstructions, since on degree scales the significance of modes in a 3D mass density reconstruction is increased (Simon et al. 2009). Adding additional information from flexion may also improve mass reconstruction, although flexion alone is much less sensitive than shear (Pires and Amara 2010).
3.4.2 Mapping largescale structure filaments
Structure formation theory robustly predicts that matter in the Universe is concentrated in sheets and filaments and that galaxy clusters live at the intersections of these filaments. The most comprehensive analytical framework for describing the emergence of these structures from anisotropic gravitational collapse is the work of Bond et al. (1996), which coined the term “cosmic web” for them. It combines the linear evolution of density fluctuations in the Zeldovich approximation (Zel’dovich 1970) with the statistics of peaks in the primordial density field (Bardeen et al. 1986) using the peak-patch formalism (Bond and Myers 1996a, b, c).
Numerically, filaments have been seen since the early days of N-body simulations (e.g., Klypin and Shandarin 1983). Increasing mass and spatial resolution of these simulations have refined our understanding of them, and a detailed inventory of the mass distribution over the different kinds of large-scale structures (galaxy clusters, filaments, sheets, voids) indicates that a plurality of all mass in the Universe, and thus probably of all galaxies, is in filaments (Aragón-Calvo et al. 2010).
Observationally, filaments have been traced by galaxy redshift surveys, from early indications (Joeveer et al. 1978) of their existence, to conclusive evidence in the CfA redshift survey (Geller and Huchra 1989), to modern-day redshift surveys like 2dF, SDSS, BOSS and VIPERS (Colless et al. 2001; Ahn et al. 2012; Dawson et al. 2013; Guzzo et al. 2014). In X-rays, the tenuous warm-hot intergalactic medium expected to reside in filaments (Davé et al. 2001) has been seen in emission (Werner et al. 2008) and absorption (Buote et al. 2009; Fang et al. 2010). Observing the underlying dark matter skeleton has been much more challenging, and early weak-lensing candidates for direct detections (Kaiser et al. 1998; Gray et al. 2002) could not be confirmed by higher-quality follow-up observations (Gavazzi et al. 2004; Heymans et al. 2008).
The most significant weak-lensing detection of a large-scale structure filament yet was presented by Dietrich et al. (2012), who found a mass bridge connecting the galaxy clusters Abell 222 and Abell 223 at \(4\sigma \) in a mass reconstruction. This dark-matter filament is spatially coincident with an overdensity of galaxies (Dietrich et al. 2005) and extended soft X-ray emission (Werner et al. 2008). This study, like the others mentioned before, makes use of the fact that filaments have a higher density closer to galaxy clusters and are expected to be particularly massive between close pairs of galaxy clusters (Bond et al. 1996). Jauzac et al. (2012) reported another weak-lensing filament candidate at \(3\sigma \) significance coming out of a galaxy cluster but not obviously connecting to another overdensity of galaxies or dark matter. The works of Dietrich et al. (2012) and Jauzac et al. (2012) also provide the first direct mass measurements of filaments. These are in agreement with the prediction that massive filaments can be as heavy as small galaxy clusters (Aragón-Calvo et al. 2010).
The relative dearth of weak-lensing filament observations compared to galaxy-cluster measurements is of course due to filaments' much lower density contrast. Numerical simulations of weak-lensing observations predict that filaments will generally be below the detection threshold (Dietrich et al. 2005; Mead et al. 2010). A statistical analysis of a large set of ray-tracing simulations indicates that, even with a survey slightly deeper than the Euclid wide survey, the vast majority of filaments will not be individually detectable (Higuchi et al. 2014). Maturi and Merten (2013) propose a matched filter tuned to the shape of filaments to overcome these obstacles to filament detection in weak-lensing data.
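The matched-filter idea can be illustrated with a toy one-dimensional example (illustrative only: the Gaussian profile, amplitudes, and noise level below are made-up stand-ins, not the actual filter of Maturi and Merten 2013). Cross-correlating noisy data with a template of the expected signal shape boosts structures matching the template above the noise:

```python
import numpy as np

# Toy 1D matched filter: correlate noisy data with a known template;
# the filter response peaks where the template best matches the buried signal.
rng = np.random.default_rng(0)
x = np.linspace(-10, 10, 512)
template = np.exp(-x**2 / 2.0)              # stand-in for a filament profile
signal = 0.3 * np.exp(-(x - 3.0)**2 / 2.0)  # weak signal hidden at x = 3
data = signal + 0.1 * rng.standard_normal(x.size)

response = np.correlate(data, template, mode="same")
print(x[np.argmax(response)])  # peak of the filter response lies near x = 3
```

The same principle, applied in two dimensions with a realistic filament profile and the survey noise power spectrum, underlies filament-tuned filters for weak-lensing maps.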
An alternative to lensing by individual filaments is to average (or “stack”) the lensing signal of many filaments. These filaments could either be identified in the Euclid spectroscopic survey, or one could exploit the high probability that neighbouring massive dark-matter halos are connected by filaments. Zhang et al. (2013) pioneered this technique of blindly stacking the area between galaxy-cluster pairs to boost the overdensity of filament galaxies with respect to the field. Their selection of cluster pairs was based on statistical studies of the abundance and properties of filaments between cluster pairs (Pimbblet et al. 2004; Colberg et al. 2005). This stacking approach was extended to weak lensing by Clampitt et al. (2016), who developed a method to measure the lensing signal of extended structures while nulling the contribution of the halo pairs at the endpoints of filaments. Stacking the lensing signal in the regions between luminous red galaxies in SDSS, they were able to place first constraints on the density profiles of filaments.
3.5 Constraints on dark matter interaction cross sections
We now turn to the particle properties of dark matter, starting with a discussion of its scattering cross-sections. At present, many physical properties of the dark-matter particle remain highly uncertain. Prospects for studying the scattering of dark matter with each of the three major constituents of the universe (itself, baryons, and dark energy) are outlined below.
3.5.1 Dark matter–dark matter interactions
Self-interacting dark matter (SIDM) was first postulated by Spergel and Steinhardt (2000) in an attempt to explain the apparent paucity of low-mass haloes within the Local Group. The required cross-section \(\sigma /m\sim 1\,\hbox {cm}^2\)/g was initially shown to be infeasible (Meneghetti et al. 2001; Gnedin and Ostriker 2001), but recent high-resolution simulations have revised the expected impact of self-interaction, which now remains consistent with observations of cluster halo shapes and profiles. Indeed, self-interaction within a hidden dark sector is a generic consequence of some extensions to the Standard Model. For example, atomic, glueballino, and mirror dark-matter models predict a cross-section \(\sigma /m\approx 0.6\,\hbox {cm}^2/\hbox {g}=1\) barn/GeV (similar to nuclear cross-sections in the Standard Model). Note that couplings within the dark sector can be many orders of magnitude larger than those between dark matter and Standard Model particles, which are of order picobarns. Interactions entirely within the dark sector are unprobed by direct-detection or collider experiments, but lead to several physical effects that can potentially be observed by Euclid.
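As a quick sanity check of the quoted equivalence between the two unit systems (a minimal sketch; the constants are standard CODATA-level values, and the helper name is ours):

```python
# Minimal sketch: verify that 1 barn/GeV corresponds to ~0.6 cm^2/g,
# as quoted for atomic/glueballino/mirror dark-matter models above.
BARN_CM2 = 1e-24          # 1 barn in cm^2
GEV_GRAMS = 1.78266e-24   # 1 GeV/c^2 in grams

def barn_per_gev_to_cm2_per_g(x):
    """Convert a cross-section-to-mass ratio from barn/GeV to cm^2/g."""
    return x * BARN_CM2 / GEV_GRAMS

print(barn_per_gev_to_cm2_per_g(1.0))  # ~0.561 cm^2/g
```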
Clusters of galaxies present an interesting environment in which the dark-matter density is sufficiently high for collisions to play a significant role. If dark-matter particles possess even a small cross-section for elastic scattering, small-scale structure can be erased and cuspy cores can be smoothed. In particular, collisions between galaxy clusters act as astronomical-scale particle colliders. Since dark matter and baryonic matter are subject to different forces, they follow different trajectories out of the collision. If dark matter's particle interactions are rare but exchange a lot of momentum (often corresponding to short-ranged forces), dark matter will tend to be scattered away and lost. If the interactions are frequent but each exchanges little momentum (often corresponding to long-ranged forces), the dark matter will be decelerated by an additional drag force and become spatially offset (Kahlhoefer et al. 2014).
As highlighted by Gnedin and Ostriker (2001), cross-sections large enough to alleviate the structure-formation issues would also allow significant heat transfer from particles within a large halo to its cooler subhaloes. This effect is most prominent close to the centers of clusters. As a subhalo evaporates, the galaxy residing within it would be disrupted. Requiring the evaporation timescale to exceed the Hubble time places an upper bound on the scattering cross-section of approximately \(\sigma _p/m_p\lesssim 0.3\mathrm {\ cm^{2}\ g^{-1}}\) (neglecting any velocity dependence). Note the dependence on particle mass: a more massive CDM particle would be associated with a lower number density, thereby reducing the frequency of collisions.
3.5.1.2 Dark matter deceleration
Particulate dark matter and baryonic matter may be temporarily separated during collisions between galaxy clusters, such as 1E 0657-56 (Clowe et al. 2006a; Bradač et al. 2006) and MACS J0025.4-1222 (Bradač et al. 2008). These ‘bullet clusters’ have provided astrophysical constraints on the interaction cross-section of hypothesized dark-matter particles (Randall et al. 2008), and may ultimately prove the most useful laboratory in which to test for any velocity dependence of the cross-section. Unfortunately, high-speed collisions between two massive progenitors are rare (Shan et al. 2010a, b), and constraints from individual systems are limited by uncertainties in their collision velocity, impact parameter and angle with respect to the plane of the sky.
However, all galaxy clusters grow through nearly continual minor-merger accretion. Massey et al. (2011) and Harvey et al. (2014) proposed a statistical ‘bulleticity’ method to exploit every individual infalling substructure in every cluster. For each piece of infalling substructure, a local vector is measured from the dark-matter peak (identified using a weak-lensing analysis) to the baryonic mass peak (from X-rays). An average bulleticity signal of zero would imply equal cross-sections for dark matter and baryonic matter. By measuring any observed, finite amplitude of bulleticity, one can empirically measure the ratio between the dark-matter and baryonic self-interaction cross-sections. Since the baryonic cross-section is relatively well known, the dark matter–dark matter cross-section can then be inferred.
Finally, a Fisher-matrix calculation has shown that, under the assumption that systematic effects can be controlled, Euclid could use such a technique to constrain the relative particulate cross-sections to \(6\times 10^{-27}\mathrm {\ cm^{2}\ GeV^{-1}}\).
The dark matter–dark matter interaction probed by Euclid using this technique will be complementary to the interactions constrained by direct-detection and accelerator experiments, where the primary constraints will be on the dark matter–baryon interaction.
3.5.1.3 Dark matter halo shapes
Self-interacting dark matter circularizes the centres of dark-matter halos, removing triaxiality (Feng et al. 2010; Peter et al. 2013), and smooths out cuspy cores. These profiles can be measured directly using strong gravitational lensing (e.g., Sand et al. 2008; Newman et al. 2013).
Meneghetti et al. (2001) performed ray-tracing through N-body simulations and discovered that the ability of galaxy clusters to generate giant arcs from strong lensing is compromised if the dark matter is subject to just a few collisions per particle. This constraint translates to an upper bound \(\sigma _p/m_p\lesssim 0.1\mathrm {\ cm^{2}\ g^{-1}}\). Furthermore, more recent analyses of SIDM models (Markevitch et al. 2004; Randall et al. 2008) utilize data from the Bullet Cluster to provide another independent limit on the scattering cross-section, though the upper bound remains unchanged. Massey et al. (2011) proposed that the tendency for baryonic and dark matter to become separated within dynamical systems, as seen in the Bullet Cluster, could be studied in greater detail if the analysis were extended over the full sky by Euclid.
3.5.2 Dark matter–baryonic interactions
Currently, a number of efforts are underway to directly detect WIMPs via the recoil of atomic nuclei. Underground experiments such as CDMS, CRESST, XENON, EDELWEISS and ZEPLIN have pushed the observational limits for the spin-independent WIMP–nucleon cross-section down to the \(\sigma \lesssim 10^{-43}\mathrm {\ cm}^2\) régime.^{18} A collection of the latest constraints can be found at http://dmtools.brown.edu.
Another opportunity to unearth the dark-matter particle lies in accelerators such as the LHC. It is possible that by 2018 these experiments will have yielded mass estimates for dark-matter candidates, provided the mass is lighter than a few hundred GeV. However, the discovery of more detailed properties of the particle, which are essential to confirm the link to cosmological dark matter, would have to wait until the International Linear Collider is constructed.
3.5.3 Dark matter–dark energy interactions
3.6 Constraints on warm dark matter
N-body simulations of large-scale structure that assume a \(\varLambda \)CDM cosmology appear to overpredict the power on small scales when compared to observations (Primack 2009): the ‘missing-satellite problem’ (Kauffmann et al. 1993; Klypin et al. 1999; Strigari et al. 2007; Bullock 2010), the ‘cusp–core problem’ (Li and Chen 2009; Simon et al. 2005; Zavala et al. 2009) and the sizes of mini-voids (Tikhonov et al. 2009). These problems may be more or less solved by several different phenomena (e.g., Diemand and Moore 2011); however, one which could explain all of the above is warm dark matter (WDM) (Bode et al. 2001; Colin et al. 2000; Boyanovsky et al. 2008). If the dark-matter particle is very light, it suppresses the growth of structure on small scales via free-streaming while the particles are still relativistic in the early universe.
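The free-streaming suppression of the linear power spectrum relative to CDM is commonly modelled with the fitting formula of Viel et al. (2005), \(P_{\mathrm{WDM}}(k) = T(k)^2\,P_{\mathrm{CDM}}(k)\) with \(T(k) = [1+(\alpha k)^{2\nu}]^{-5/\nu}\). A minimal sketch (the parameter values are those of the published fit; treat the helper itself as illustrative):

```python
import numpy as np

# Sketch of the WDM suppression of the linear power spectrum relative to CDM,
# using the Viel et al. (2005) fit: T(k) = [1 + (alpha*k)^(2 nu)]^(-5/nu).
def wdm_transfer(k, m_wdm_kev, omega_wdm=0.25, h=0.7, nu=1.12):
    """T(k) for thermal-relic WDM; k in h/Mpc, mass in keV."""
    alpha = 0.049 * m_wdm_kev**-1.11 * (omega_wdm / 0.25)**0.11 * (h / 0.7)**1.22
    return (1.0 + (alpha * k)**(2 * nu))**(-5.0 / nu)

k = np.logspace(-2, 2, 5)              # wavenumbers in h/Mpc
print(wdm_transfer(k, m_wdm_kev=2.0))  # suppression grows toward small scales
```

For a 2 keV particle the suppression is negligible on Mpc scales and becomes strong only well inside the free-streaming scale, which is why only small-scale probes constrain WDM.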
3.6.1 Warm dark matter particle candidates

Sterile neutrinos may be added to extend the standard model of particle physics. The standard-model active (left-handed) neutrinos can then receive the observed small masses through, e.g., a seesaw mechanism. This implies that the right-handed sterile neutrinos must be rather heavy, but the lightest of them naturally has a mass in the keV region, which makes it a suitable WDM candidate. The simplest model of sterile neutrinos as a WDM candidate assumes that these particles were produced at the same time as active neutrinos, but never thermalized and were thus produced with a much reduced abundance due to their weak coupling (see Biermann and Munyaneza 2008, and references therein).

The gravitino appears as the supersymmetric partner of the graviton in supergravity models. If it has a mass in the keV range, it is a suitable WDM candidate. It belongs to a more general class of thermalized WDM candidates: this class of particles is assumed to have achieved full thermal equilibrium, but at an earlier stage, when the number of degrees of freedom was much higher, and hence their temperature relative to the CMB is much reduced today. Note that in order for the gravitino to be a good dark-matter particle in general, it must be very stable, which in most models corresponds to it being the lightest supersymmetric particle (LSP) (e.g., Borgani and Masiero 1997; Cembranos et al. 2005).
3.6.2 Dark matter freestreaming
3.6.3 Current constraints on the WDM particle from largescale structure
Measurements in the particle-physics energy domain can only reach masses uninteresting in the WDM context, since direct detectors look mainly for a WIMP, whose mass should be in the GeV–TeV range. However, as described above, cosmological observations are able to place constraints on light dark-matter particles. Observations of the flux power spectrum of the Lyman-\(\alpha \) forest, which indirectly measure the fluctuations in the dark-matter density on scales between \(\sim \,100\mathrm {\ kpc}\) and \(\sim \,10\mathrm {\ Mpc}\), give the limits \(m_{\mathrm {WDM}}>4\mathrm {\ keV}\) or, equivalently, \(m_{\mathrm {\nu s}}>28\mathrm {\ keV}\) at 95% confidence level (Viel et al. 2008, 2005; Seljak et al. 2006). For the simplest sterile-neutrino model, these lower limits are at odds with the upper limits derived from X-ray observations, which come from the lack of an observed diffuse X-ray background from sterile-neutrino decay and set the limit \(m_{\mathrm {\nu s}}<1.8\mathrm {\ keV}\) at the 95% confidence level (Boyarsky et al. 2006). However, these results do not rule out the simplest sterile-neutrino models: there exist theoretical means of evading the small-scale power constraints (see, e.g., Boyarsky et al. 2009, and references therein). The weak-lensing power spectrum from Euclid will be able to constrain the dark-matter particle mass to about \(m_{\mathrm {WDM}}>2\mathrm {\ keV}\) (Markovič et al. 2011).
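The equivalence between the two mass limits quoted above follows from the standard mapping between a thermal-relic WDM mass and the (non-resonantly produced) sterile-neutrino mass with the same free-streaming scale, following Viel et al. (2005). A minimal sketch (the default \(\omega_{\mathrm{dm}}\) value is our assumption):

```python
# Sketch: map a thermal-relic WDM mass to the Dodelson-Widrow sterile-neutrino
# mass with the same free-streaming scale (Viel et al. 2005);
# omega_dm = Omega_dm h^2 is assumed to be ~0.1225 here.
def sterile_mass_kev(m_thermal_kev, omega_dm=0.1225):
    return 4.43 * m_thermal_kev**(4.0 / 3.0) * (omega_dm / 0.1225)**(-1.0 / 3.0)

print(sterile_mass_kev(4.0))  # ~28 keV, matching the pair of limits quoted above
```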
3.6.4 Nonlinear structure in WDM
In order to extrapolate the matter power spectrum to later times, one must take into account the nonlinear evolution of the matter density field. Fits to the nonlinear WDM power spectrum can then be used to calculate further constraints on WDM from the weak-lensing power spectrum or galaxy clustering (Markovič et al. 2011; Markovič and Viel 2014). It should be noted that, in order to use the present-day clustering of structure as a probe of WDM, it is crucial to take into account baryonic physics as well as neutrino effects, which are described in the following section.
3.7 Neutrino properties
The first significant evidence for a finite neutrino mass (Fukuda et al. 1998) indicated the incompleteness of the standard model of particle physics. Subsequent experiments have further strengthened this evidence and improved the determination of the neutrino mass splitting required to explain observations of neutrino oscillations.
As a summary of the last decade of neutrino experiments, two hierarchical neutrino mass splittings and three mixing angles have been measured. Furthermore, the standard model has three neutrinos, and the motivation for considering deviations from the standard model in the form of extra sterile neutrinos has disappeared (Melchiorri et al. 2009; Aguilar-Arevalo et al. 2007). Of course, deviations from the standard effective number of neutrino species could still indicate exotic physics, which we discuss below (Sect. 3.7.4).

Relic neutrinos produced in the early universe interact only through the weak force, making it impossible with foreseeable technology to detect them directly. But new cosmological probes such as Euclid offer the opportunity to detect them, albeit indirectly, through the effect of their mass on the growth of cosmological perturbations.

Cosmology remains a key avenue to determine the absolute neutrino mass scale. Particle-physics experiments will be able to place lower limits on the effective neutrino mass, which depends on the hierarchy, with no rigorous limit achievable in the case of normal hierarchy (Murayama and Peña-Garay 2004). In contrast, neutrino free streaming suppresses the small-scale clustering of large-scale cosmological structures by an amount that depends on the neutrino mass.

“What is the hierarchy (normal, inverted or degenerate)?” Neutrino oscillation data are unable to resolve, in a model-independent way, whether the mass spectrum consists of two light states with mass m and a heavy one with mass M (normal hierarchy) or two heavy states with mass M and a light one with mass m (inverted hierarchy). Cosmological observations, such as the data provided by Euclid, can determine the hierarchy, complementing data from particle-physics experiments.

“Are neutrinos their own antiparticles?” If the answer is yes, then neutrinos are Majorana fermions; if not, they are Dirac. If neutrinos and antineutrinos are identical, there could have been a process in the early universe that affected the balance between particles and antiparticles, leading to the matter–antimatter asymmetry required for our existence (Fukugita and Yanagida 1986). This question can, in principle, be resolved if neutrinoless double-\(\beta \) decay is observed (see Murayama and Peña-Garay 2004, and references therein). However, if such experiments (ongoing and planned, e.g., Cremonesi 2010) lead to a negative result, the implications for the nature of neutrinos depend on the hierarchy. As shown in Jiménez et al. (2010), in this case cosmology can offer complementary information by helping determine the hierarchy.
3.7.1 Evidence of relic neutrinos
The hot big bang model predicts a background of relic neutrinos in the universe with an average number density of \(\sim \,100\,N_{\nu }\mathrm {\ cm}^{-3}\), where \(N_{\nu }\) is the number of neutrino species. These neutrinos decoupled from the primordial plasma at redshift \(z\sim 10^{10}\), when the temperature was \(T\sim \mathcal {O}(\mathrm {MeV})\), but remain relativistic down to much lower redshifts, depending on their mass. A detection of such a neutrino background would be an important confirmation of our understanding of the physics of the early universe.
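The quoted number density per species follows from the present-day photon density and the standard temperature ratio \(T_\nu /T_\gamma =(4/11)^{1/3}\), which gives \(n_\nu = (3/11)\,n_\gamma\) per species (neutrinos plus antineutrinos). A minimal sketch of the calculation:

```python
import math

# Sketch: relic-neutrino number density per species, n_nu = (3/11) n_gamma,
# with n_gamma = (2 zeta(3)/pi^2) (k_B T / (hbar c))^3 for T_CMB = 2.725 K.
K_B = 1.380649e-23      # Boltzmann constant, J/K
HBAR_C = 3.161527e-26   # hbar * c, J m
ZETA3 = 1.2020569       # Riemann zeta(3)

def photon_density_cm3(t_cmb=2.725):
    x = K_B * t_cmb / HBAR_C                       # k_B T / (hbar c), 1/m
    return (2 * ZETA3 / math.pi**2) * x**3 * 1e-6  # convert m^-3 -> cm^-3

n_nu = (3.0 / 11.0) * photon_density_cm3()
print(n_nu)  # ~112 per cm^3 per species, i.e., ~100 N_nu cm^-3 overall
```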
Massive neutrinos affect cosmological observations in different ways. Primary CMB data alone can constrain the total neutrino mass \(\varSigma \) if it is above \(\sim \,1\mathrm {\ eV}\) (Komatsu et al. 2011 find \(\varSigma <1.3\mathrm {\ eV}\) at 95% confidence), because such neutrinos become non-relativistic before recombination, leaving an imprint in the CMB. Neutrinos with masses \(\varSigma <1\mathrm {\ eV}\) become non-relativistic after recombination, altering matter–radiation equality for fixed \(\varOmega _mh^2\); this effect is degenerate with other cosmological parameters in primary CMB data alone. After neutrinos become non-relativistic, their free streaming damps the small-scale power and modifies the shape of the matter power spectrum below the free-streaming length. The free-streaming length of each neutrino family depends on its mass.
Current cosmological observations do not detect any small-scale power suppression and, by breaking many of the degeneracies of the primary CMB, yield constraints of \(\varSigma <0.3\mathrm {\ eV}\) (Reid et al. 2010) if we assume the neutrino mass to be constant. A detection of this effect would provide a detection, albeit indirect, of the cosmic neutrino background. As shown in the next section, the fact that oscillations predict a minimum total mass \(\varSigma \sim 0.054\mathrm {\ eV}\) implies that Euclid has the statistical power to detect the cosmic neutrino background. We finally remark that the neutrino mass may very well vary in time (Wetterich 2007); this might be tested by comparing (and not combining) measurements from the CMB at decoupling with low-z measurements. An inconsistency would point to a time-varying neutrino mass (Wetterich and Pettorino 2009).
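The minimum total mass quoted above follows directly from the measured splittings, assuming the lightest state is massless. A minimal sketch, using representative splitting values (which shift the result at the few-percent level relative to the exact figure quoted above):

```python
import math

# Sketch: minimum total neutrino mass allowed by oscillations, assuming the
# lightest mass eigenstate is massless; splittings are representative, in eV^2.
DM21_SQ = 7.5e-5   # solar splitting
DM31_SQ = 2.5e-3   # atmospheric splitting

def sigma_min(hierarchy="normal"):
    if hierarchy == "normal":            # m1 ~ 0 < m2 < m3
        return math.sqrt(DM21_SQ) + math.sqrt(DM31_SQ)
    # inverted: m3 ~ 0, m1 ~ m2 ~ sqrt(|dm31^2|)
    return math.sqrt(DM31_SQ) + math.sqrt(DM31_SQ + DM21_SQ)

print(sigma_min("normal"), sigma_min("inverted"))  # ~0.06 eV vs ~0.10 eV
```

The roughly factor-of-two difference between the two hierarchies is what gives cosmological surveys leverage on the mass ordering.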
3.7.2 Neutrino mass
Particle-physics experiments are sensitive to neutrino flavours, making their determinations of the neutrino absolute mass scale very model-dependent. Cosmology, on the other hand, is not sensitive to neutrino flavour, but is sensitive to the total neutrino mass.
The small-scale power suppression caused by neutrinos leaves imprints on CMB lensing; pre-launch forecasts indicated that Planck should be able to constrain the sum of neutrino masses \(\varSigma \) with a \(1\sigma \) error of 0.13 eV (Kaplinghat et al. 2003; Lesgourgues et al. 2006; de Putter et al. 2009). Planck Collaboration (2014a) reported the constraint \(N_{\mathrm{eff}}=3.30\pm 0.27\) on the effective number of relativistic degrees of freedom, and an upper limit of 0.23 eV on the summed neutrino mass. However, Planck also reported a relatively low value of the Hubble parameter with respect to previous measurements, which prompted several papers, for example Wyman et al. (2014), to investigate whether this tension could be resolved by introducing an eV-scale (possibly sterile) neutrino. Combining the Planck results with large-scale structure or weak-lensing measurements has led to claims of even stronger constraints on the sum of neutrino masses: for example, Riemer-Sørensen et al. (2014) found an upper limit of 0.18 eV (95% confidence) by combining with WiggleZ data (Battye and Moss 2014), while Hamann and Hasenkamp (2013) combined Planck data with weak-lensing data from CFHTLenS and found higher values for the sum of neutrino masses, as a result of tension between the values of \(\sigma _8\) measured by lensing and inferred from the CMB, with lensing preferring a lower value. However, Kitching et al. (2014) find that such a lower value of \(\sigma _8\) is consistent with baryon-feedback models impacting the small-scale distribution of dark matter.
Euclid’s measurement of the galaxy power spectrum, combined with Planck (primary CMB only) priors, should yield an error on \(\varSigma \) of 0.04 eV (for details see Carbone et al. 2011b), in qualitative agreement with previous work (e.g., Saito et al. 2009), assuming a minimal value for \(\varSigma \) and a constant neutrino mass. Euclid’s weak lensing should also yield an error on \(\varSigma \) of 0.05 eV (Kitching et al. 2008a). While these two determinations are not fully independent (the cosmic-variance part of the error is in common, given that the lensing survey and the galaxy survey cover the same volume of the universe), the size of the error bars implies a more than \(1\sigma \) detection of even the minimum \(\varSigma \) allowed by oscillations. Moreover, the two independent techniques will offer cross-checks and robustness to systematics. The error on \(\varSigma \) depends on the fiducial model assumed, decreasing for fiducial models with larger \(\varSigma \). Euclid will enable us not only to detect the effect of massive neutrinos on clustering, but also to determine the absolute neutrino mass scale. However, recent numerical investigations found severe observational degeneracies between the cosmological effects of massive neutrinos and of some modified-gravity models (Baldi et al. 2014). This may indicate an intrinsic theoretical limit to the power of astronomical data to discriminate between alternative cosmological scenarios, and to constrain the neutrino mass as well. Further investigations with higher-resolution simulations are needed to clarify this issue and to search for possible ways to break these cosmic degeneracies (see also La Vacca et al. 2009; Kristiansen et al. 2010; Marulli et al. 2012).
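The way combining probes tightens such forecasts can be illustrated with a toy Fisher-matrix combination (the 2×2 matrices below are invented illustrative numbers, not the actual Euclid or Planck forecast matrices):

```python
import numpy as np

# Toy sketch: Fisher matrices from independent probes add; the marginalized
# error on a parameter is sqrt of the corresponding diagonal element of the
# inverse of the (summed) Fisher matrix.
def marginalized_error(fisher, index):
    return np.sqrt(np.linalg.inv(fisher)[index, index])

# Hypothetical Fisher matrices for two parameters, say (Sigma, w0):
F_probe_a = np.array([[400.0, 150.0], [150.0, 900.0]])
F_probe_b = np.array([[250.0, -120.0], [-120.0, 600.0]])

err_a = marginalized_error(F_probe_a, 0)
err_combined = marginalized_error(F_probe_a + F_probe_b, 0)
print(err_a, err_combined)  # combining the probes tightens the constraint
```

Note that opposite-signed off-diagonal terms, as in this toy example, are the Fisher-level picture of two probes with complementary degeneracy directions.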
3.7.3 Hierarchy and the nature of neutrinos
A detection of neutrinoless double-\(\beta \) decay by the next generation of experiments would indicate that neutrinos are Majorana particles. A null result would point to the Dirac nature of the neutrino only for a degenerate or inverted mass spectrum. Even in this case, however, there are ways to suppress the double-\(\beta \) decay signal without the neutrinos being Dirac particles: for instance, the pseudo-Dirac scenario, which arises from the same Lagrangian that describes the seesaw mechanism (see, e.g., Rodejohann 2012). This information can be obtained from large-scale structure cosmological data, improved data on tritium beta decay, or long-baseline neutrino-oscillation experiments. If the small mixing in the neutrino mixing matrix is negligible, cosmology might be the most promising arena to help resolve this puzzle.
3.7.4 Number of neutrino species
Neutrinos decouple early in cosmic history and contribute a relativistic energy density with an effective number of species \(N_{\nu ,\mathrm {eff}}=3.046\). Cosmology is sensitive to the physical energy density in relativistic particles in the early universe, which in the standard cosmological model includes only photons and neutrinos: \(\omega _{\mathrm {rel}}=\omega _{\gamma }+N_{\nu ,\mathrm {eff}}\,\omega _{\nu }\), where \(\omega _{\gamma }\) denotes the energy density in photons, which is exquisitely constrained by the CMB, and \(\omega _{\nu }\) is the energy density in one neutrino species. Deviations from the standard value of \(N_{\nu ,\mathrm {eff}}\) would signal non-standard neutrino features or additional relativistic species. \(N_{\nu ,\mathrm {eff}}\) impacts the big-bang nucleosynthesis epoch through its effect on the expansion rate; measurements of primordial light-element abundances can constrain \(N_{\nu ,\mathrm {eff}}\) and rely on physics at \(T\sim \mathrm {MeV}\) (Bowen et al. 2002). In several non-standard models (e.g., decay of dark-matter particles, axions, quintessence) the energy density in relativistic species can change at some later time. The energy density of free-streaming relativistic particles alters the epoch of matter–radiation equality and therefore leaves a signature in the CMB and in the matter-transfer function. However, from CMB data alone there is a degeneracy between \(N_{\nu ,\mathrm {eff}}\) and \(\varOmega _m h^2\) (given by the combination of these two parameters that leaves matter–radiation equality unchanged) and between \(N_{\nu ,\mathrm {eff}}\) and \(\sigma _8\) and/or \(n_s\). Large-scale structure surveys measuring the shape of the power spectrum on large scales can independently constrain the combination \(\varOmega _m h\) and \(n_s\), thus breaking the CMB degeneracy.
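The size of the neutrino contribution to \(\omega_{\mathrm{rel}}\) can be checked with a one-line calculation, using the standard relation \(\rho_{\mathrm{rel}}/\rho_\gamma = 1 + N_{\mathrm{eff}}\,(7/8)(4/11)^{4/3}\) (a minimal sketch of the textbook formula):

```python
# Sketch: total relativistic energy density relative to photons in the
# standard model: rho_rel / rho_gamma = 1 + N_eff * (7/8) * (4/11)^(4/3).
def radiation_enhancement(n_eff=3.046):
    return 1.0 + n_eff * (7.0 / 8.0) * (4.0 / 11.0)**(4.0 / 3.0)

print(radiation_enhancement())  # ~1.69: neutrinos add ~69% to the photon density
```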
Furthermore, anisotropies in the neutrino background affect the CMB anisotropy angular power spectrum at the \(\sim \,20\%\) level through the gravitational feedback of their free-streaming damping and anisotropic-stress contributions. Detection of this effect is now possible by combining CMB and large-scale structure observations. This yields an indication, at more than the \(2\sigma \) level, that there exists a neutrino background with characteristics compatible with those expected in the cosmological standard model (Trotta and Melchiorri 2005; De Bernardis et al. 2008).
The forecasted errors on \(N_{\nu ,\mathrm {eff}}\) for Euclid (with a Planck prior) are \(\pm \,0.1\) at \(1\sigma \) level (Kitching et al. 2008a), which is a factor \(\sim \,5\) better than current constraints from CMB and LSS and about a factor \(\sim \,2\) better than constraints from light element abundance and nucleosynthesis.
3.7.5 Model dependence
A recurring question is how model-dependent the neutrino constraints will be. It is important to recall that parameter fitting is usually done within the context of a \(\varLambda \)CDM model, and that the neutrino effects are seen indirectly in the clustering. Considering more general cosmological models might degrade the neutrino constraints and, vice versa, including neutrinos in the model might degrade the dark-energy constraints. Below we discuss separately the two cases of varying the total neutrino mass \(\varSigma \) and the number of relativistic species \(N_{\mathrm {eff}}\). Possible effects of modified-gravity models that could further degrade the neutrino mass constraints will not be discussed in this section.
3.7.6 \(\varSigma \) forecasted error bars and degeneracies
In Carbone et al. (2011b) it is shown that, for a general model which allows for a nonflat universe, and a redshift dependent darkenergy equation of state, the \(1\sigma \) spectroscopic errors on the neutrino mass \(\varSigma \) are in the range 0.036–0.056 eV, depending on the fiducial total neutrino mass \(\varSigma \), for the combination Euclid+Planck.
Looking instead at the effect that massive neutrinos have on the dark-energy parameter constraints, the total CMB + LSS dark-energy FoM decreases only by \(\sim \) 15–25% with respect to the value obtained if neutrinos are assumed massless, when the forecasts are computed using the so-called “P(k) method marginalized over growth information” (see the Methodology section), which therefore proves quite robust in constraining the dark-energy equation of state.
Regarding parameter correlations at the LSS level, the total neutrino mass \(\varSigma \) is correlated with all the cosmological parameters affecting the galaxy power spectrum shape and the BAO positions. When Planck priors are added to the Euclid constraints, all degeneracies are either resolved or reduced, and the remaining dominant correlations between \(\varSigma \) and the other cosmological parameters are \(\varSigma \)–\(\varOmega _{\mathrm {de}}\), \(\varSigma \)–\(\varOmega _m\), and \(\varSigma \)–\(w_a\), with the \(\varSigma \)–\(\varOmega _{\mathrm {de}}\) degeneracy being the largest.
3.7.6.1 Hierarchy dependence
In addition, the spectroscopic neutrino-mass constraints depend on the neutrino hierarchy: the \(1\sigma \) errors on the total neutrino mass for normal hierarchy are \(\sim \) 17–20% larger than for the inverted one. The matter power spectrum thus appears less able to constrain the total neutrino mass when the normal hierarchy is assumed as the fiducial neutrino mass spectrum. This is similar to what was found in Jiménez et al. (2010) for the constraints on the neutrino mass hierarchy itself, when a normal hierarchy is assumed as the fiducial one. On the other hand, when CMB information is included, the \(\varSigma \) errors decrease by \(\sim \) 35% in favor of the normal hierarchy, at a given fiducial value \(\varSigma _{\mathrm {fid}}\). This difference arises from the changes in the free-streaming effect due to the assumed mass hierarchy, and is in agreement with the results of Lesgourgues et al. (2004), which confirm that the expected errors on the neutrino masses depend not only on the sum of the neutrino masses, but also on the mass splitting between the neutrino mass states.
3.7.6.2 Growth and incoherent peculiar-velocity dependence
The \(\varSigma \) spectroscopic errors stay mostly unchanged whether growth information is included or marginalized over, and decrease only by 10–20% when \(f_g\sigma _8\) measurements are added. This result is expected if we consider that, unlike the dark-energy parameters, \(\varSigma \) affects the shape of the power spectrum via a redshift-dependent transfer function T(k, z), which is sampled over a very large range of scales including the P(k) turnover scale; this effect therefore dominates over the information extracted from measurements of \(f_g\sigma _8\). The latter quantity, in turn, generates new correlations with \(\varSigma \) via the \(\sigma _8\) term, which is anticorrelated with \(M_\nu \) (Marulli et al. 2011). On the other hand, if we suppose that early dark energy is negligible, the dark-energy parameters \(\varOmega _{\mathrm {de}}\), \(w_0\) and \(w_a\) do not enter the transfer function, and consequently growth information carries relatively more weight when added to constraints from H(z) and \(D_A(z)\) alone. Therefore, the value of the dark-energy FoM does increase when growth information is included, even if it decreases by \(\sim \) 50–60% with respect to cosmologies where neutrinos are assumed massless, due to the correlation between \(\varSigma \) and the dark-energy parameters. Confirming this degeneracy, when growth information is added and the dark-energy parameters \(\varOmega _{\mathrm {de}}\), \(w_0\), \(w_a\) are held fixed at their fiducial values, the error \(\sigma ({\varSigma })\) decreases from 0.056 to 0.028 eV, for Euclid combined with Planck.
We expect the dark-energy parameter errors to be somewhat sensitive also to the effect of incoherent peculiar velocities, the so-called “Fingers of God” (FoG). This can be understood in terms of correlation functions in redshift space: the stretching effect due to random peculiar velocities counteracts the flattening effect due to large-scale bulk velocities. Consequently, these two competing effects act in opposite directions on the dark-energy parameter constraints (see methodology Sect. V).
On the other hand, the neutrino mass errors are found to be stable at \(\sigma ({\varSigma })=0.056\) also when FoG effects are taken into account by marginalizing over \(\sigma _v(z)\); in fact, they increase only by 10–14% with respect to the case where FoG are not taken into account.
\(\sigma (M_\nu )\) and \(\sigma (N_{\mathrm {eff}})\) marginalized errors from LSS + CMB:

Fiducial \(\rightarrow \) | \(\varSigma =0.3\mathrm {\, eV}^a\) | \(\varSigma =0.2\mathrm {\, eV}^a\) | \(\varSigma =0.125\mathrm {\, eV}^b\) | \(\varSigma =0.125\mathrm {\, eV}^c\) | \(\varSigma =0.05\mathrm {\, eV}^b\) | \(N_{\mathrm {eff}}=3.04^d\)

General cosmology:
\(\hbox {Euclid}+\hbox {Planck}\) | 0.0361 | 0.0458 | 0.0322 | 0.0466 | 0.0563 | 0.0862

\(\varLambda \)CDM cosmology:
\(\hbox {Euclid}+\hbox {Planck}\) | 0.0176 | 0.0198 | 0.0173 | 0.0218 | 0.0217 | 0.0224
3.7.7 \(N_{\mathrm {eff}}\) forecasted errors and degeneracies
Regarding the \(N_{\mathrm {eff}}\) spectroscopic errors, Carbone et al. (2011b) find \(\sigma (N_{\mathrm {eff}})\sim \,0.56\) from Euclid alone, and \(\sigma (N_{\mathrm {eff}})\sim \,0.086\) for Euclid + Planck. Concerning the effect of \(N_{\mathrm {eff}}\) uncertainties on the dark-energy parameter errors, the CMB + LSS dark-energy FoM decreases only by \(\sim \,5\%\) with respect to the value obtained holding \(N_{\mathrm {eff}}\) fixed at its fiducial value, meaning that also in this case the “P(k) method marginalized over growth information” is not too sensitive to assumptions about the model cosmology when constraining the dark-energy equation of state.
Regarding the degeneracies between \(N_{\mathrm {eff}}\) and the other cosmological parameters, the number of relativistic species gives two opposite contributions to the observed power spectrum \(P_{\mathrm {obs}}\) (see methodology Sect. V), and the overall sign of the correlation depends on which contribution dominates, for each cosmological parameter. In fact, a larger \(N_{\mathrm {eff}}\) value suppresses the transfer function T(k) on scales \(k\le k_{\max }\). On the other hand, a larger \(N_{\mathrm {eff}}\) value also increases the Alcock–Paczyński prefactor in \(P_{\mathrm {obs}}\). For the dark-energy parameters \(\varOmega _{\mathrm {de}}\), \(w_0\), \(w_a\), and the dark-matter density \(\varOmega _m\), the Alcock–Paczyński prefactor dominates, so that \(N_{\mathrm {eff}}\) is positively correlated with \(\varOmega _{\mathrm {de}}\) and \(w_a\), and anticorrelated with \(\varOmega _m\) and \(w_0\). In contrast, for the other parameters the T(k) suppression produces the larger effect, and \(N_{\mathrm {eff}}\) turns out to be anticorrelated with \(\varOmega _b\), and positively correlated with h and \(n_s\). The degree of correlation is very large in the \(n_s\)–\(N_{\mathrm {eff}}\) case, being of order \(\sim \, 0.8\) both with and without Planck priors. For the remaining cosmological parameters, all the correlations are reduced when CMB information is added, except for the covariance \(N_{\mathrm {eff}}\)–\(\varOmega _{\mathrm {de}}\), as happens also for the \(M_\nu \) correlations. To summarize, after the inclusion of Planck priors, the remaining dominant degeneracies between \(N_{\mathrm {eff}}\) and the other cosmological parameters are \(N_{\mathrm {eff}}\)–\(n_s\), \(N_{\mathrm {eff}}\)–\(\varOmega _{\mathrm {de}}\), and \(N_{\mathrm {eff}}\)–h, and the forecasted error is \(\sigma (N_{\mathrm {eff}})\sim \,0.086\) from Euclid + Planck.
Finally, if we fix the dark-energy parameters \(\varOmega _{\mathrm {de}}\), \(w_0\) and \(w_a\) to their fiducial values, \(\sigma (N_{\mathrm {eff}})\) decreases from 0.086 to 0.048, for the combination Euclid + Planck. Note, however, that if \(N_{\mathrm {eff}}\) is allowed to vary, then the shape of the matter power spectrum by itself cannot constrain \(\varOmega _m h\). Indeed, in \(\varLambda \)CDM models the power spectrum constrains \(\varOmega _m h\) because the turning point \(k_\mathrm {eq}\) corresponds to the comoving Hubble rate at equality. If the radiation content is known, then \(k_\mathrm {eq}\) depends only on \(\varOmega _m h\); if it is unknown, then \(k_\mathrm {eq}\) is not linked to a unique value of \(\varOmega _m h\) (Abazajian et al. 2012b). The fact that one can use a combination of CMB (excluding the damping tail) and matter power spectrum data to break the \(N_{\mathrm {eff}}\)–\(\varOmega _m h^2\) degeneracy is due to the baryon fraction \(f_b = \varOmega _b h^2/\varOmega _m h^2\) decreasing when \(N_{\mathrm {eff}}\) is increased (while keeping \(z_\mathrm {eq}\) fixed) (e.g., Bashinsky and Seljak 2004).
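The link between \(k_\mathrm {eq}\) and the radiation content can be made concrete: at fixed \(\varOmega _m h^2\), raising \(N_{\mathrm {eff}}\) lowers \(z_\mathrm {eq}\) and hence shifts the turnover scale. A minimal numerical sketch, assuming the standard photon density \(\varOmega _\gamma h^2 \simeq 2.47\times 10^{-5}\) and the usual \(1 + 0.2271\,N_{\mathrm {eff}}\) relativistic correction (illustrative only, not a Euclid pipeline computation):

```python
import math

OMEGA_GAMMA_H2 = 2.47e-5   # photon density today, Omega_gamma h^2
C_OVER_H0 = 2998.0         # c/H0 in Mpc for h = 1

def k_eq(omega_m_h2, n_eff):
    """Comoving wavenumber of matter-radiation equality, in Mpc^-1.

    k_eq = a_eq H(a_eq) = (H0/c) sqrt(2 Omega_m (1 + z_eq)); written
    with h-free combinations it is sqrt(2 omega_m (1+z_eq)) / 2998.
    """
    omega_r_h2 = OMEGA_GAMMA_H2 * (1.0 + 0.2271 * n_eff)
    z_eq = omega_m_h2 / omega_r_h2 - 1.0
    return math.sqrt(2.0 * omega_m_h2 * (1.0 + z_eq)) / C_OVER_H0

# Same omega_m h^2, different radiation content -> different k_eq,
# which is the degeneracy discussed in the text.
print(k_eq(0.14, 3.04))   # ~0.01 Mpc^-1 for standard radiation
print(k_eq(0.14, 4.04))   # smaller: extra radiation delays equality
```

To keep \(k_\mathrm {eq}\) fixed while increasing \(N_{\mathrm {eff}}\), one must raise \(\varOmega _m h^2\), which (at fixed \(\varOmega _b h^2\)) lowers the baryon fraction \(f_b\), as noted above.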
3.7.8 Nonlinear effects of massive cosmological neutrinos on bias, P(k) and RSD
In general, forecasted errors are obtained using techniques, like the Fisher-matrix approach, that are not particularly well suited to quantifying systematic effects. These techniques forecast only statistical errors, which are meaningful as long as they dominate over systematic ones. Possible sources of systematic errors of major concern are the effects of nonlinearities and galaxy bias.
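The Fisher-matrix approach referred to throughout these forecasts can be illustrated with a toy example; the linear model and noise levels below are purely illustrative, but the mechanics (build \(F_{ij}\), invert, read off marginalized errors) are the standard ones:

```python
import numpy as np

# Toy Fisher forecast for a linear model mu(x) = a + b*x observed at
# points x_k with Gaussian errors sigma_k:
#   F_ij = sum_k (dmu/dtheta_i)(dmu/dtheta_j) / sigma_k^2
x = np.linspace(0.0, 1.0, 10)
sigma = 0.1 * np.ones_like(x)

# Model derivatives w.r.t. (a, b); for a linear model these do not
# depend on the fiducial parameter values.
dmu = np.vstack([np.ones_like(x), x])        # shape (2, N)

F = (dmu / sigma**2) @ dmu.T                 # Fisher matrix, shape (2, 2)
cov = np.linalg.inv(F)                       # parameter covariance
marg_errors = np.sqrt(np.diag(cov))          # marginalized 1-sigma errors
cond_errors = 1.0 / np.sqrt(np.diag(F))      # errors with other params fixed
print(marg_errors, cond_errors)
```

The marginalized errors are always at least as large as the fixed-parameter ones, which mirrors the fixed-versus-marginalized comparisons quoted in the text (e.g., \(\sigma (\varSigma )\) dropping from 0.056 to 0.028 eV when the dark-energy parameters are held fixed).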
The description of nonlinearities in the matter power spectrum in the presence of massive neutrinos has been addressed in several different ways: Wong (2008) and Saito et al. (2008, 2009, 2011) have used perturbation theory, Lesgourgues et al. (2009) the time-RG flow approach, and Brandbyge et al. (2008), Brandbyge and Hannestad (2009), Brandbyge et al. (2010) and Viel et al. (2010) different schemes of N-body simulations. Another nonlinear scheme that has been examined in the literature is the halo model, which has been applied to massive neutrino cosmologies in Abazajian et al. (2005a) and Hannestad et al. (2005, 2006).
On the other hand, galaxy/halo bias is known to be almost scale-independent only on large, linear scales, becoming nonlinear and scale-dependent on small scales and/or for very massive haloes. From the above discussion and references, it is clear that the effect of massive neutrinos on the galaxy power spectrum in the nonlinear regime must be explored via N-body simulations to encompass all the relevant effects.
The pressure produced by massive neutrino free-streaming counteracts the gravitational collapse at the basis of cosmic structure formation, causing a significant suppression in the average number density of massive structures. This effect can be observed in the high-mass tail of the halo MF in Fig. 43, as compared with the analytic predictions of Sheth and Tormen (2002) (ST), where the variance of the density fluctuation field, \(\sigma (M)\), has been computed with CAMB (Lewis et al. 2000b), using the same cosmological parameters as the simulations. In particular, here the MF of substructures is shown, identified using the subfind package (Springel et al. 2001), while the normalization of the matter power spectrum is fixed by the dimensionless amplitude of the primordial curvature perturbations \(\varDelta ^2_\mathcal{R}(k_0)_{\mathrm {fid}}=2.3\times 10^{-9}\), evaluated at a pivot scale \(k_0=0.002/\mathrm {Mpc}\) (Larson et al. 2011), which has been chosen to have the same value in both the \(\varLambda \)CDM\(\nu \) and the \(\varLambda \)CDM cosmologies.
In Fig. 43, two fiducial neutrino masses have been considered, \(\varSigma =0.3\) and \(\varSigma =0.6\mathrm {\ eV}\). From the comparison of the corresponding MFs, we confirm the theoretical prediction that the higher the neutrino mass, the larger the suppression in the comoving number density of DM haloes. These results have been broadly confirmed by recent numerical investigations (Villaescusa-Navarro et al. 2014; Castorina et al. 2014; Costanzi et al. 2013). Moreover, it was shown that an even better agreement with numerical simulations can be obtained by using the linear CDM power spectrum, instead of the total matter one (see also Ichiki and Takada 2012).
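The sensitivity of the high-mass tail to \(\sigma (M)\) can be sketched with the standard Sheth–Tormen multiplicity function. Below, a 4% reduction of \(\sigma (M)\) stands in for the neutrino-induced damping; that number is illustrative, not taken from the simulations:

```python
import math

def sheth_tormen_f(nu, a=0.707, p=0.3, A=0.3222):
    """Sheth-Tormen multiplicity function f(nu), with nu = delta_c / sigma(M)."""
    anu2 = a * nu * nu
    return (A * math.sqrt(2.0 * a / math.pi)
            * (1.0 + anu2 ** (-p)) * nu * math.exp(-anu2 / 2.0))

DELTA_C = 1.686  # linear collapse threshold

# High-mass halo: sigma(M) ~ 0.5; suppress sigma by 4% to mimic
# free-streaming damping from massive neutrinos (illustrative).
f_nominal = sheth_tormen_f(DELTA_C / 0.50)
f_suppressed = sheth_tormen_f(DELTA_C / (0.50 * 0.96))
print(f"relative suppression of f: {1 - f_suppressed / f_nominal:.2f}")
```

Because f falls exponentially at large \(\nu \), even a few-percent damping of \(\sigma (M)\) translates into a tens-of-percent suppression of the abundance of the most massive haloes, which is the behavior seen in the high-mass tail of Fig. 43.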
Massive neutrinos also strongly affect the spatial clustering of cosmic structures. A standard statistic generally used to quantify the degree of clustering of a population of sources is the two-point auto-correlation function. Although the free-streaming of massive neutrinos causes a suppression of the matter power spectrum on scales k larger than the neutrino free-streaming scale, the halo bias is significantly enhanced. This effect can be understood physically by noting that, owing to the neutrino-induced suppression of structures, a given halo bias would correspond, in a \(\varLambda \)CDM cosmology, to more massive haloes than in a \(\varLambda \)CDM\(\nu \) cosmology, and more massive haloes are, as is well known, typically more strongly clustered.
This effect is evident in Fig. 44, which shows the two-point DM halo correlation function measured with the Landy and Szalay (1993) estimator, compared to the matter correlation function. In particular, the clustering difference between the \(\varLambda \)CDM and \(\varLambda \)CDM\(\nu \) cosmologies increases at higher redshifts, as can be observed from Figs. 44, 45, and 46. Note also the effect of nonlinearities on the bias, which clearly starts to become scale-dependent for separations \(r<20\mathrm {\ Mpc}/h\) (see also Villaescusa-Navarro et al. 2014; Castorina et al. 2014; Costanzi et al. 2013).
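The Landy–Szalay estimator used for Fig. 44 combines normalized data–data, data–random and random–random pair counts, \(\xi = (DD - 2DR + RR)/RR\). A minimal sketch (the pair counts fed in below are synthetic placeholders, not simulation outputs):

```python
import numpy as np

def landy_szalay(dd, dr, rr, n_d, n_r):
    """Landy-Szalay correlation-function estimator from raw pair counts.

    dd, dr, rr : arrays of data-data, data-random, random-random pair
                 counts per separation bin
    n_d, n_r   : numbers of data and random points (for normalization)
    """
    dd_n = dd / (n_d * (n_d - 1) / 2.0)   # normalized DD counts
    dr_n = dr / (n_d * n_r)               # normalized DR counts
    rr_n = rr / (n_r * (n_r - 1) / 2.0)   # normalized RR counts
    return (dd_n - 2.0 * dr_n + rr_n) / rr_n

# Sanity check: for an unclustered sample the three normalized counts
# coincide and the estimator returns xi = 0 (counts here are synthetic).
xi = landy_szalay(np.array([49.5]), np.array([1000.0]),
                  np.array([4995.0]), n_d=100, n_r=1000)
print(xi)   # ~0 for the unclustered case
```

An excess of normalized DD pairs over RR at a given separation then yields \(\xi > 0\), i.e., clustering; the halo-to-matter ratio of such measurements gives the (scale-dependent) bias shown in the figures.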
There are indications from 3D weak lensing in the CFHTLenS survey (Kitching et al. 2014) that the matter power spectrum is suppressed with respect to the \(\varLambda \)CDM expectation in the wavenumber range 1–10 h Mpc\(^{-1}\), which may be a hint of massive neutrinos, feedback from AGN, or both. Euclid will be able to probe this regime with much greater precision, and potentially disentangle the two effects.
RSD are also strongly affected by massive neutrinos. Figure 47 shows the real- and redshift-space correlation functions of DM haloes as a function of neutrino mass. The effect of massive neutrinos is particularly evident when the correlation function is measured as a function of the two directions perpendicular and parallel to the line of sight. The value of the linear growth rate derived by modelling galaxy clustering anisotropies can be greatly suppressed with respect to the value expected in a \(\varLambda \)CDM cosmology. Indeed, neglecting the relic massive neutrino background in the data analysis might bias the inferred growth rate, producing a potentially spurious signature of modified gravity. Figure 48 demonstrates this point, showing the best-fit values of \(\beta \) and \(\sigma _{12}\) as a function of \(\varSigma \) and redshift, where \(\beta = {\frac{f(\varOmega _{\mathrm {M}})}{b_\mathrm {eff}}}\), with \(b_{\mathrm {eff}}\) the halo effective linear bias factor, \(f(\varOmega _{\mathrm {M}})\) the linear growth rate and \(\sigma _{12}\) the pairwise velocity dispersion.
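The quantities entering Fig. 48 can be connected through the standard GR approximation \(f(z) \simeq \varOmega _m(z)^{0.55}\). The sketch below uses that approximation with illustrative parameter values and deliberately ignores the neutrino-induced suppression discussed in the text; it only shows how \(\beta \) is assembled from f and \(b_{\mathrm {eff}}\):

```python
def growth_rate(z, omega_m0=0.27, gamma=0.55):
    """Linear growth rate f(z) ~ Omega_m(z)^gamma (GR value gamma = 0.55).

    Flat LambdaCDM background: Omega_m(z) = Omega_m0 (1+z)^3 / E(z)^2.
    """
    e2 = omega_m0 * (1.0 + z) ** 3 + (1.0 - omega_m0)
    return (omega_m0 * (1.0 + z) ** 3 / e2) ** gamma

def beta(z, b_eff, omega_m0=0.27):
    """RSD distortion parameter beta = f(Omega_m) / b_eff."""
    return growth_rate(z, omega_m0) / b_eff

# Illustrative values: z = 1 and an effective halo bias of 1.5.
print(beta(1.0, b_eff=1.5))
```

A measured \(\beta \) lower than this GR expectation could signal modified gravity, but, as stressed above, the same suppression can be mimicked by an unmodelled massive neutrino background.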
3.8 Coupling between dark energy and neutrinos
As we have seen in Sect. I.5.3, it is interesting to consider the possibility that dark energy, seen as a dynamical scalar field (quintessence), may interact with other components in the universe. In this section we focus on the possibility that a coupling may exist between dark energy and neutrinos.
The idea of such a coupling has been addressed and developed by several authors, first within MaVaNs theories (Fardon et al. 2004; Peccei 2005; Bi et al. 2005; Afshordi et al. 2005; Weiner and Zurek 2006; Das and Weiner 2011; Takahashi and Tanimoto 2006; Spitzer 2006; Bjælde et al. 2008; Brookfield et al. 2006b, a) and more recently within growing neutrino cosmologies (Amendola et al. 2008a; Wetterich 2007; Mota et al. 2008; Wintergerst et al. 2010; Wintergerst and Pettorino 2010; Pettorino et al. 2010; Brouzakis et al. 2011). It has been shown that neutrinos can play a crucial role in cosmology, naturally setting the desired scale for dark energy. Interestingly, a coupling between neutrinos and dark energy may help solve the ‘why now’ problem, explaining why dark energy dominates only in recent epochs. The coupling follows the description illustrated in Sect. I.5.3 for a general interacting dark-energy cosmology, where now \(m_\nu =m_\nu (\phi )\).
Typically, in growing neutrino cosmologies, the function \(m_\nu (\phi )\) is such that the neutrino mass grows with time from low, nearly massless values (when neutrinos are relativistic) up to present masses in a range in agreement with current observations (see the previous section of this review for the latest bounds on neutrino masses). The key feature of growing neutrino models is that the amount of dark energy today is triggered by a cosmological event, corresponding to the transition from relativistic to nonrelativistic neutrinos at redshift \(z_\mathrm {NR}\sim \,5{-}10\). As long as neutrinos are relativistic, the coupling plays no role in the dynamics of the scalar field, which follows attractor solutions of the type described in Sect. I.5.3. From the transition on, the evolution of dark energy resembles that of a cosmological constant, plus small oscillations of the coupled dark energy–neutrino fluid. As a consequence, when a coupling between dark energy and neutrinos is active, the amount of dark energy and its equation of state today are strictly connected to the present value of the neutrino mass.
The interaction between neutrinos and dark energy is a nice and concrete example of the significant imprint that dynamical coupled dark energy can leave on observables, in particular on structure formation and on the cosmic microwave background. This is due to the fact that the coupling, playing a role only after neutrinos become nonrelativistic, can reach relatively high values as compared to gravitational attraction. Typical values of \(\beta \) are of order 50–100 or even more, such that even the small fraction of cosmic energy density in neutrinos can have a substantial influence on the time evolution of the quintessence field. During this time the fifth force can be \(10^2{-}10^4\) times stronger than gravity. The neutrino contribution to the gravitational potential also indirectly influences dark matter and structure formation, as well as the CMB, via the integrated Sachs–Wolfe effect and the nonlinear Rees–Sciama effect, which is non-negligible at the scales where neutrinos form stable lumps. Furthermore, backreaction effects can substantially modify the growth of large-scale neutrino lumps, with effects much larger than in the dark-matter case. The presence of a fifth force due to an interaction between neutrinos and dark energy can lead to remarkably peculiar differences with respect to a cosmological-constant scenario, including:
- the existence of very large structures, of order \(10{-}500\mathrm {\ Mpc}\) (Afshordi et al. 2005; Mota et al. 2008; Wintergerst et al. 2010; Wintergerst and Pettorino 2010; Pettorino et al. 2010);

- an enhanced ISW effect, drastically reduced when taking into account nonlinearities (Pettorino et al. 2010): information on the gravitational potential is a good means to constrain the range of allowed values for the coupling \(\beta \);

- large-scale anisotropies and enhanced peculiar velocities (Watkins et al. 2009; Ayaita et al. 2009);

- the influence of the gravitational potential induced by the neutrino inhomogeneities on BAO in the dark-matter spectra (Brouzakis et al. 2011).
3.9 Unified dark matter
The appearance of two unknown components in the standard cosmological model, dark matter and dark energy, has prompted discussion of whether they are two facets of a single underlying dark component. This concept goes under the name of quartessence (Makler et al. 2003), or unified dark matter (UDM). A priori this is attractive, replacing two unknown components with one, and in principle it might explain the ‘why now?’ problem of why the energy densities of the two components are similar (also referred to as the coincidence problem). Many UDM models are characterized by a sound speed, whose value and evolution imprints oscillatory features on the matter power spectrum, which may be detectable through weak lensing or BAO signatures with Euclid.
The field is rich in UDM models (see Bertacca et al. 2010, for a review and for references to the literature). The models can grow structure, as well as providing acceleration of the universe at late times. In many cases, these models have a non-canonical kinetic term in the Lagrangian, e.g., an arbitrary function of the square of the time derivative of the field in a homogeneous and isotropic background. Early models with acceleration driven by kinetic energy (k-inflation; Armendariz-Picon et al. 1999; Garriga and Mukhanov 1999; Bose and Majumdar 2009) were generalized to more general Lagrangians (k-essence; e.g., Armendariz-Picon et al. 2000, 2001; Scherrer 2004). For UDM, several models have been investigated, such as the generalized Chaplygin gas (Kamenshchik et al. 2001; Bento et al. 2002; Bilić et al. 2002; Zhang et al. 2006; Popov 2010), although these may be tightly constrained due to the finite sound speed (e.g., Amendola et al. 2003a; Bento et al. 2003; Sandvik et al. 2004; Zhu 2004). Vanishing sound speed models, however, evade these constraints (e.g., the silent Chaplygin gas of Amendola et al. 2005b). Other models consider a single fluid with a two-parameter equation of state (e.g., Balbi et al. 2007), models with canonical Lagrangians but a complex scalar field (Arbey 2006), models with a kinetic term in the energy–momentum tensor (Gao et al. 2010; Chimento and Forte 2008), models based on a DBI action (Chimento et al. 2010), models which violate the weak equivalence principle (Füzfa and Alimi 2007) and models with viscosity (Dou and Meng 2011). Finally, there are some models which try to unify inflation as well as dark matter and dark energy (Capozziello et al. 2006; Nojiri and Odintsov 2008; Liddle et al. 2008; Lin 2009; Henriques et al. 2009).
A requirement for UDM models to be viable is that they must be able to cluster to allow structure to form. A generic feature of UDM models is an effective sound speed, which may become significantly nonzero during the evolution of the universe; the resulting Jeans length may then be large enough to inhibit structure formation. The appearance of this sound speed leads to observable consequences in the CMB as well, and generally speaking the sound speed needs to be small enough to allow structure formation and agreement with CMB measurements. In the limit of zero sound speed, the standard cosmological model is recovered in many cases. Generally the models require fine-tuning, although some have a fast transition between a dark-matter-only behavior and \(\varLambda \)CDM. Such models (Piattella et al. 2010) can have acceptable Jeans lengths even if the sound speed is not negligible.
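As a concrete example of this sound-speed behavior, the generalized Chaplygin gas \(p = -A/\rho ^\alpha \) has an adiabatic sound speed \(c_s^2 = \mathrm {d}p/\mathrm {d}\rho = -\alpha \, w(a)\), which vanishes in the matter-like early phase and grows at late times. A minimal sketch, with \(A_s = A/(A+B)\) fixing the late-time equation of state; the parameter values are illustrative:

```python
def gcg_w(a, A_s=0.75, alpha=0.2):
    """Equation of state w(a) of the generalized Chaplygin gas p = -A/rho^alpha.

    A_s = A/(A+B) controls the limits: w -> 0 (dark-matter-like) as
    a -> 0, and w -> -A_s (dark-energy-like) as a -> 1.
    """
    return -A_s / (A_s + (1.0 - A_s) * a ** (-3.0 * (1.0 + alpha)))

def gcg_cs2(a, A_s=0.75, alpha=0.2):
    """Adiabatic sound speed squared: c_s^2 = dp/drho = -alpha * w(a)."""
    return -alpha * gcg_w(a, A_s, alpha)

for a in (0.01, 0.1, 1.0):
    print(a, gcg_w(a), gcg_cs2(a))
```

For \(\alpha \rightarrow 0\) the sound speed vanishes at all times and the background behaves like \(\varLambda \)CDM, which is why small-\(\alpha \) (or fast-transition) models evade the clustering constraints mentioned above.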
3.9.1 Theoretical background
3.9.2 Euclid observables
Of interest for Euclid are the weak lensing and BAO signatures of these models, although the supernova Hubble diagram can also be used (Thakur et al. 2009). The observable effects come from the power spectrum and from the evolution of the equation-of-state parameter of the unified fluid, which affects distance measurements. The observational constraints on the generalized Chaplygin gas have been investigated (Park et al. 2010), with the model already constrained to be close to \(\varLambda \)CDM by SDSS data and the CMB. The effect on BAO measurements for Euclid has been calculated by Camera et al. (2012), whereas the weak lensing effect has been considered for non-canonical UDM models (Camera et al. 2011c). The change in shape and the oscillatory features introduced in the power spectrum allow the sound-speed parameter to be constrained very well by Euclid, using 3D weak lensing (Heavens 2003; Kitching et al. 2007), with errors \(\sim \, 10^{-5}\) (see also Camera et al. 2009, 2012).
3.10 Dark energy and dark matter
In Sect. I.5, we have illustrated the possibility that dark energy, seen as a dynamical scalar field (quintessence), may interact with other components in the universe. When starting from an action such as Eq. (I.5.20), the species which interact with quintessence are characterized by a mass function that changes in time (Kodama and Sasaki 1984; Amendola 2000a, 2004; Pettorino and Baccigalupi 2008). Here, we consider the case in which the evolution of cold dark matter (CDM) particles depends on the evolution of the dark-energy scalar field. In this case the general framework seen in Sect. I.5 is specified by the choice of the function \(m_c=m_c(\phi )\). The coupling is not constrained by tests of the equivalence principle and solar system constraints, and can therefore be stronger than the coupling with baryons. Typical values of \(\beta \) presently allowed by observations (within current CMB data) are within the range \(0< \beta < 0.06\) at 95% CL for a constant coupling and an exponential potential (Bean et al. 2008b; Amendola et al. 2003b; Amendola 2004; Amendola and Quercellini 2003), or possibly more if neutrinos are taken into account or more realistic time-dependent choices of the coupling are used (La Vacca et al. 2009; Kristiansen et al. 2010). As mentioned in Sect. I.5.3, this framework is generally referred to as ‘coupled quintessence’ (CQ). Various choices of couplings have been investigated in the literature, including constant \(\beta \) (Amendola 2000a, 2004; Mangano et al. 2003; Koivisto 2005; Guo et al. 2007; Quartin et al. 2008; Quercellini et al. 2008; Pettorino and Baccigalupi 2008) and varying couplings (Baldi 2011b), with effects on Supernovæ, CMB and cross-correlation of the CMB and LSS (Bean et al. 2008b; Amendola et al. 2003b; Amendola 2004; Amendola and Quercellini 2003; La Vacca et al. 2009; Kristiansen et al. 2010; Mainini and Mota 2012).
In this framework, the coupling modifies the dynamics of CDM particles in three ways:

1. a fifth force \(\varvec{\nabla } \left[ \varPhi _\alpha + \beta \phi \right] \) with an effective \(\tilde{G}_{\alpha } = G_{N}[1+2\beta ^2(\phi )]\);

2. a velocity-dependent term \(\tilde{H}\mathbf {v}_{\alpha } \equiv H \left( 1 - {\beta (\phi )} \frac{\dot{\phi }}{H}\right) \mathbf {v}_{\alpha }\);

3. a time-dependent mass for each particle \(\alpha \), evolving according to Eq. (I.5.25).
The main observable effects of the coupling include:

- an enhanced ISW effect (Amendola 2000a, 2004; Mainini and Mota 2012); such effects may be partially reduced when taking into account nonlinearities, as described in Pettorino et al. (2010);

- an increase in the number counts of massive clusters at high redshift (Baldi and Pettorino 2011);

- a scale-dependent bias between baryons and dark matter, which behave differently if only dark matter is coupled to dark energy (Baldi et al. 2010; Baldi 2011a);

- less steep inner core halo profiles (depending on the interplay between fifth force and velocity-dependent terms) (Baldi et al. 2010; Baldi 2011a, b; Li et al. 2011; Li and Barrow 2011b);

- a lower concentration of the halos (Baldi et al. 2010; Baldi 2011b; Li and Barrow 2011b);

- emptier voids when a coupling is active (Baldi and Viel 2010).
3.11 Ultralight scalar fields
Ultralight scalar fields arise generically in high-energy physics, most commonly as axions or other axion-like particles (ALPs). They are the pseudo-Goldstone bosons (PGBs) of spontaneously broken symmetries. Their mass remains protected to all loop orders by a shift symmetry, which is only weakly broken to give the fields a mass and potential through nonperturbative effects. Commonly these effects are presumed to be caused by instantons, as in the case of the QCD axion, but the potential can also be generated in other ways, for example in the study of quintessence (Panda et al. 2011). Here, we will consider a general scenario, motivated by the suggestions of Arvanitaki et al. (2010) and Hu et al. (2000), where an ultralight scalar field constitutes some fraction of the dark matter, and we make no detailed assumptions about its origin.
3.11.1 Phenomenology and motivation
There may be a small model-dependent thermal population of ALPs, but the majority of the cosmological population will be cold and non-thermally produced. Production of cosmological ALPs proceeds by the vacuum realignment mechanism. When the Peccei–Quinn-like U(1) symmetry is spontaneously broken at the scale \(f_a\), the ALP acquires a vacuum expectation value, the misalignment angle \(\theta _i\), uncorrelated across different causal horizons. However, provided that inflation occurs after symmetry breaking, i.e., \(f_a>H_I/2\pi \), the field is homogenized over our entire causal volume. This is the scenario we will consider, since a large \(f_a\gtrsim 10^{16}\mathrm {\ GeV}\) is required for ultralight scalars to make up any significant fraction of the DM.