Cosmic acceleration as an optical illusion

We consider light propagation in an inhomogeneous irrotational dust universe with vanishing cosmological constant, with initial conditions as in standard linear perturbation theory. A non-perturbative approach to the dynamics of such a universe is combined with a distance formula based on the Sachs optical equations. Then a numerical study implies a redshift–distance relation that roughly agrees with observations. Interpreted in the standard homogeneous setup, our results would appear to imply the currently accepted values for the Hubble rate and the deceleration parameter; furthermore there is consistency with density perturbations at last scattering. The determination of these three quantities relies only on a single parameter related to a cutoff scale. Discrepancies with the existing literature are related to subtleties of higher order perturbation theory which make both the reliability of the present approach and the magnitude of perturbative effects beyond second order hard to assess.


Introduction
The fact that cosmological observations do not conform to the predictions of Friedmann–Lemaître–Robertson–Walker (FLRW) models with a vanishing cosmological constant is usually interpreted as an indication that Λ differs from zero. Clearly our actual universe deviates from the idealized FLRW cases by hosting inhomogeneities, and there have been many suggestions that the latter might have effects which would explain the data without requiring Λ ≠ 0; see e.g. Ref. [1] for an early proposal of this kind. The main challenge for any such claim is to explain why we perceive an accelerated expansion. Basically there are two possible routes, as well as combinations of them. On the one hand the inhomogeneities might have an impact on the actual expansion of the universe (suitably defined in terms of the evolution of volumes of spatial regions). On the other hand there is the possibility that they affect light propagation in a subtle way which modifies the usual distance–redshift relations. In the present work we are mainly concerned with the second scenario, which relies on the obvious yet important insight that almost every single piece of evidence on the evolution of the cosmos relies on the observation of photons with telescopes or other devices; Refs. [2,3] provide a particularly forceful presentation of this point.
The present work will take the Sachs optical equations as a starting point, but will use them to analyze the evolution of the "structure distance" (cf. Weinberg [22]) d_S = (1 + z) d_A. The result, a second order ordinary differential equation, looks more complicated at first sight than the corresponding formula for d_A, but it turns out that the two non-trivial coefficients have very simple interpretations: one of them is a local (and directed) expansion rate that agrees with the standard Hubble rate in the homogeneous case, and the other one is a quantity that vanishes in a spatially flat homogeneous geometry. These expressions (more precisely: their suitably defined expectation values) are then computed non-perturbatively within a recently introduced statistical framework [23] whose only assumptions are an irrotational dust approximation for the matter content and initial conditions consistent with linear perturbation theory with only Gaussian fluctuations. With the help of some approximations (but not of the Dyer–Roeder type) and the use of a computer program we find that in such a universe with Λ = 0 there is a time t_o with the following properties. An observer at t_o will see redshift–distance pairs which, if interpreted with formulas that ignore the inhomogeneities, would indicate H(t_o) t_o ≈ 1, a deceleration parameter q(t_o) ≈ −0.5, and density perturbations at a redshift of z ≈ 1090 from t_o that agree with those assumed for dark matter at last scattering. In other words, such an observer sees what present day cosmologists see, despite living in a universe in which the cosmological constant vanishes.
In the next section we derive a differential equation for the structure distance and discuss the meaning of its coefficients; furthermore we elucidate the relationship between local expansion data along a lightlike geodesic and the inferences that a cosmologist who ignores the inhomogeneities would make. In Sect. 3 the coefficients are computed explicitly for the cases of homogeneous and irrotational dust universes. Section 4 contains a brief summary of the methods of Ref. [23] for a non-perturbative statistical treatment of an irrotational dust universe with initial conditions from linear perturbation theory. In Sect. 5 the "photon path average" is introduced: this is the concept that we use to estimate the overall effect of the changing environments that a photon experiences on the way from its source to an observer. Section 6 contains calculations up to second order in perturbation theory (we will see that they do not suffice to produce the relevant effects). In Sect. 7 we present the results of a numerical computation that transcends perturbation theory: we find quantities that are in rough agreement with today's observations even though we assume Λ = 0. In the final section we briefly reiterate our findings and summarize the approximations that were made in deriving them. We also explain why some of the approximations are not as good as they originally appeared, thus leaving the question of the non-perturbative impact of inhomogeneities still open; this is the main modification compared to previous versions of the paper.

Sachs equations and distance formulas
Let us start with a brief summary of the homogeneous case in order to provide some reference points for our subsequent generalization. A homogeneous universe is usually described with the help of a time-dependent scale factor a(t), in terms of which the Hubble expansion rate is defined as H = ȧ/a and the deceleration parameter as q = −ä a/ȧ². The redshift z of a photon emitted at time t and observed at time t_o, with both the source and the observer at rest with respect to a comoving frame, is given by 1 + z = a(t_o)/a(t), which implies H(t) = −d[ln(1 + z)]/dt. In the case of vanishing spatial curvature several distance formulas can be summarized as d = (1 + z)^λ a(t_o) ∫_t^{t_o} dt′/a(t′), where we have to take λ = −1 for the angular diameter distance d_A, and λ = 1 for the luminosity distance d_L. The resulting identity d_L/d_A = (1 + z)² actually holds in any pseudo-Riemannian geometry; this is known as Etherington's theorem [24]. The simplest version of Eq. (5) occurs if we take d to be the geometric mean of d_A and d_L, for which there exists a variety of names in the literature; we will follow Weinberg [22], who calls d_S the "structure distance". Then λ = 0, and Eq. (5) implies ḋ_S = −(1 + z) and, with Eq. (4), d̈_S = (1 + z) H. In the following we consider an arbitrary spacetime geometry. We want to analyze a lightlike geodesic corresponding to the path of a photon emitted at x^μ_e and observed at x^μ_o. With an affine parameter s and a corresponding tangent vector k^μ = dx^μ/ds the redshift z is determined in general by the formula 1 + z = (g_μν k^μ u^ν)_e/(g_μν k^μ u^ν)_o, where u_e and u_o are the normalized tangent vectors to the worldlines of the source and the observer, respectively. If we assume that we have a distinguished timelike coordinate t such that both the source and the observer have worldlines with normalized tangent vectors ∂/∂t, and that s is normalized so that ds = dt at the observer, we get 1 + z = dt/ds (to be evaluated at the source, i.e. at t = t_e; the same holds for the following equations).
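As a concrete numerical check of these relations (our own illustration; the EdS choice a(t) = t^{2/3} and all numbers are ours, not the paper's), one can verify that in a flat matter-only universe the structure distance d_S = a(t_o) ∫_t^{t_o} dt′/a(t′) indeed satisfies ḋ_S = −(1 + z):

```python
# Sketch (our own illustration): in an EdS universe with a(t) = t^(2/3),
# verify d_S = a(t_o) * Integral_{t_e}^{t_o} dt'/a(t') and d(d_S)/dt_e = -(1+z).
import math

def a(t):
    return t ** (2.0 / 3.0)          # EdS scale factor (normalization drops out)

def d_S(t_e, t_o, n=20000):
    h = (t_o - t_e) / n              # midpoint quadrature of the comoving integral
    s = sum(1.0 / a(t_e + (i + 0.5) * h) for i in range(n)) * h
    return a(t_o) * s

t_o, t_e = 1.0, 0.3
analytic = 3.0 * t_o * (1.0 - (t_e / t_o) ** (1.0 / 3.0))   # closed form in EdS
eps = 1e-4
deriv = (d_S(t_e + eps, t_o) - d_S(t_e - eps, t_o)) / (2 * eps)
one_plus_z = a(t_o) / a(t_e)
print(d_S(t_e, t_o), analytic)       # quadrature vs closed form
print(deriv, -one_plus_z)            # d(d_S)/dt_e vs -(1+z)
```

The closed form d_S = 3 t_o (1 − (t/t_o)^{1/3}) used for comparison is the standard EdS result.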
We write d/dt or use dots when we treat t as parametrizing the geodesic, and we denote the partial derivative with respect to the spacetime coordinate t = x^0 by ∂_0 or ∂/∂t. The Sachs optical equations [15] (see [25] for a textbook derivation) involve θ_opt and σ_opt, the expansion rate and the shear of the null bundle, respectively. In general the terms expansion rate and shear refer to the change in the size and the shape of a bundle of geodesics. Since we will later apply the same notions to worldlines of dust particles, we indicate with the subscript that we are referring to the optical quantities. Furthermore ε = ε^(1) + √−1 ε^(2), where ε^(1), ε^(2) are spacelike unit vectors orthogonal both to k and to the observer's worldline; because of these properties the right-hand side of the second equation remains the same if the Riemann tensor R_αβμν is replaced by the Weyl tensor C_αβμν, and corresponding effects are often referred to as "Weyl focusing". The angular diameter distance d_A is determined by the expansion rate of the null bundle, which can be used to reformulate the Sachs equations as an equation for d_A. We now want to transform Eq. (14) into an equation for the structure distance d_S = (1 + z) d_A as a function of time. By using Eq. (10) we find that Eq. (16) takes the form d̈_S + H ḋ_S + i d_S = 0. As we will demonstrate in Sect. 3, the quantity i actually vanishes for spatially flat homogeneous universes. In that case Eq. (16) is solved by d_S(t) = ∫_t^{t_o} (1 + z(t′)) dt′. Even for i ≠ 0 the introduction of d_S is useful because we can simplify Eq. (16) by treating ḋ_S as a function of d_S, which results in Eq. (19), with boundary conditions at d_S = 0 given by Eq. (20). There is no perfectly natural way of generalizing the concept of a Hubble rate to an inhomogeneous universe. Two operational definitions of a "Hubble rate" associated with a specific point on a geodesic can be made as generalizations of Eq. (7). Both formulas reduce to the standard Hubble rate for the case of a homogeneous spatially flat universe.
While H_inf is essentially the quantity that is inferred from observations under the assumption of flat homogeneity, H is the expansion at the source in the direction of the photon emission: by virtue of Eq. (18) we have a relation in perfect analogy with Eq. (4); also note that H is just the second coefficient in Eq. (16). With the help of Eqs. (19) and (20) one finds that the two definitions of H coincide at the observer, and that for positive i observations tend to overestimate, and for negative i to underestimate, expansion rates in previous epochs; in particular, for sufficiently large negative i we can perceive acceleration even if it does not take place.
As we have seen, someone who ignores the non-vanishing of i (in other words, any cosmologist believing in the standard concordance model) would interpret H_inf as "the Hubble rate". Furthermore, from Eq. (8) such a person would (wrongly!) infer a time parameter t_inf. In fact, H_inf and t_inf satisfy an analog of Eqs. (4) and (22). Let us also introduce the corresponding deceleration parameters. By using the chain rule, the definitions of the various quantities and Eq. (16) one can show how they are related; this demonstrates again that negative i can lead to the perception of acceleration even if it does not take place. We can summarize the results of this section in the following way. From the values of the pairs (d_S, z) along a given lightlike geodesic, without taking into account the quantity i, which encodes the effects of curvature and inhomogeneity, one would infer an expansion history along that geodesic in terms of quantities t_inf, H_inf and q_inf. The actual expansion history along that specific geodesic is encoded by t, H and q. The two sets of quantities are related by Eqs. (23) and (27). In reality we have at most a single data point (d_S, z) for any observed direction, and we require a statistical analysis. As we will see, even H and q (suitably averaged over photon paths) can become quite different from the corresponding results from volume averaging.

Homogeneous and irrotational dust universes
While all of our results up to now are exact in an arbitrary pseudo-Riemannian geometry with a distinguished timelike coordinate, we assume in the following that the metric can be written, in the synchronous gauge, as ds² = −dt² + g_ij(x, t) dx^i dx^j; this is true for any homogeneous spacetime as well as for irrotational dust, where the dust particles have constant space coordinates x^i. We want to express our quantities in terms of the spatial 3-geometry with the time-dependent metric g_ij.
To distinguish it from the spacetime geometry we adopt the convention that an expression with Greek indices, or with at least one index of zero, or with a left superscript of (4), pertains to the 4-metric g_αβ, whereas any other quantity, in particular the Ricci scalar R = R^i_i, refers to g_ij. The connection coefficients for the metric (30) vanish if two or three indices are 0; the non-vanishing coefficients are Γ^0_ij = θ_ij and Γ^i_0j = θ^i_j together with the Christoffel symbols of the spatial metric, with the notation ∂_0 for ∂/∂x^0 = ∂/∂t and more generally ∂_μ for ∂/∂x^μ. The expansion tensor θ^i_j and the scalar expansion rate θ are defined by θ^i_j = (1/2) g^{ik} ∂_0 g_{kj} and θ = θ^i_i, and the shear is the traceless part of the expansion tensor, σ^i_j = θ^i_j − (θ/3) δ^i_j. The Riemann tensor ⁽⁴⁾R_αβγδ can be expressed in terms of the expansion tensor and the Riemann tensor R_ijkl of the spatial metric g_ij, with θ_ij = g_ik θ^k_j and with the vertical strokes denoting covariant spatial derivatives.
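These definitions can be sanity-checked numerically (a toy example of our own with an arbitrarily chosen diagonal metric): the trace θ = (1/2) g^{ik} ∂_0 g_{ki} must equal ∂_0 ln √(det g), the logarithmic expansion rate of the volume element.

```python
# Toy check (our own example): for a diagonal spatial metric g_ij(t) the
# expansion scalar θ = (1/2) g^{ik} ∂_0 g_{ki} equals ∂_0 ln sqrt(det g).
import math

def g_diag(t):                       # arbitrary diagonal spatial metric
    return [t ** (4.0 / 3.0), t ** (4.0 / 3.0) * (1.0 + 0.1 * t), t ** 2.0]

def theta(t, eps=1e-6):
    g_m, g_p = g_diag(t - eps), g_diag(t + eps)
    gdot = [(p - m) / (2 * eps) for m, p in zip(g_m, g_p)]
    g = g_diag(t)
    return 0.5 * sum(d / gi for d, gi in zip(gdot, g))   # (1/2) g^{ii} ġ_ii

def dln_sqrtg(t, eps=1e-6):
    def lnsg(tt):
        return 0.5 * sum(math.log(gi) for gi in g_diag(tt))
    return (lnsg(t + eps) - lnsg(t - eps)) / (2 * eps)

t = 0.8
print(theta(t), dln_sqrtg(t))        # the two rates should coincide
```

The identity follows from ∂_0 ln det g = g^{ij} ∂_0 g_ij = 2θ.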
We now want to specialize our analysis of photon paths to a metric of the type (30), with the assumption that both the source and the observer are comoving; upon division by (1 + z)² and application of Eq. (10) this yields an evolution equation for the redshift. As ẋ^μ is lightlike and x^0 = t, the spatial part ẋ^i must be a unit vector with respect to g_ij, i.e. g_ij ẋ^i ẋ^j = 1, whereby the previous equation becomes Eq. (41). Similarly we can transform the spatial component of the geodesic equation into an equation for ẍ^i. Upon using this, together with (32), in the derivative of Eq. (41), we find Eq. (44). Note that up to now we have never used the Einstein equations (45). Let us assume that the spatial part of the energy-momentum tensor is proportional to the metric, T_ij = g_ij T^k_k/3, and that T_0i = 0. This holds not only in the homogeneous case but also in the general irrotational dust case, where T_ij = 0. Then Eq. (45) implies that the spacetime Ricci tensor ⁽⁴⁾R_αβ must be of the same type; in the last step we have used k^0 = dx^0/ds = 1 + z. With the help of Eqs. (33)–(37) this results in Eq. (47). The traceless spatial part of the Einstein equations determines the traceless part of the spatial Ricci tensor. Using this after inserting Eqs. (44) and (47) into (17) we get Eq. (50). This result is still exact within the irrotational dust framework and also for any homogeneous cosmological model. In the latter case it reduces to i = R/6 = K/a², which together with Eq. (19) leads to the well-known distance formulas that involve sin or sinh functions for K ≠ 0.
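For reference, the well-known homogeneous distance formulas alluded to here can be summarized in standard textbook form (the symbols χ and S_K are our own shorthand, not part of the paper's numbered equations):

```latex
\chi \;=\; \int_{t_e}^{t_o}\frac{dt}{a(t)}\,,\qquad
d_S \;=\; a(t_o)\,S_K(\chi)\,,\qquad
S_K(\chi)\;=\;
\begin{cases}
\sin\!\bigl(\sqrt{K}\,\chi\bigr)/\sqrt{K}\,, & K>0\,,\\[2pt]
\chi\,, & K=0\,,\\[2pt]
\sinh\!\bigl(\sqrt{-K}\,\chi\bigr)/\sqrt{-K}\,, & K<0\,.
\end{cases}
```

For K → 0 both curved cases reduce to d_S = a(t_o) χ, in agreement with the flat formula of Sect. 2.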
Let us also note that the right-hand side of Eq. (15) for the optical shear is determined, for any metric of the type (30), by an expression that one can derive by similar methods; this expression vanishes for any homogeneous model.

Mass-weighted average
If we knew the spatial metric g_ij in the vicinity of a given lightlike geodesic in an irrotational dust universe, we could now compute the redshift and the structure distance along that geodesic simply by solving Eqs. (41) and (16) with input from Eq. (50) (assuming we are also solving for σ_opt along the way). In practice we do not know the precise form of the metric and need to rely on statistical methods; in addition we have to make simplifications to keep the computations manageable. As we aim for results beyond perturbation theory, we choose the approach of Ref. [23] for our underlying statistical framework. The present section is devoted to a brief summary of the relevant ideas and results. The central concept in this approach is the mass-weighted average [26] of a scalar quantity X, where D is a large domain (e.g. all of the visible universe), ρ(x, t) is the local mass density and M_D is the mass content of D. For the case of an irrotational dust universe, energy conservation implies ∂_0(ρ √g) = 0 and therefore ∂_0⟨X⟩_mw = ⟨∂_0 X⟩_mw. This makes it possible to evade the technical difficulties that arise with the more common volume average, where averaging and taking time derivatives do not commute. Nevertheless volume averages are easily computed within this approach by weighting with a³, where a is the local scale factor defined via ρ a³ = ρ̄ and ρ̄ is an arbitrary fixed reference density. Then the dust expansion rate can be expressed as θ = 3 ∂_0 ln a, and a set of rescaled quantities obeys the evolution equations (59). The initial values for these evolution equations can be found by comparison with linear perturbation theory: upon neglecting vector, tensor and decaying scalar modes the space metric g_ij(x, t) at early times can be expressed in terms of a single time-independent scalar Gaussian random function C(x); here a_EdS = const × t^{2/3} is the standard EdS (Einstein–de Sitter, i.e. flat matter-only FLRW) scale factor. By comparing with section 5.3 of Ref.
[22] one finds that this metric is equivalent to a Newtonian gauge metric with Φ = Ψ = −C/3. It turns out that the initial conditions for our evolution equations are fixed by limits such as lim_{t→0} a/t^{2/3}, with S and s^k_j denoting the trace and traceless parts of the matrix of second derivatives of the function C(x). In this setup it can be shown that the evolution equation of the local scale factor a(x, t) takes a closed form. As long as one neglects the last term (a² Y^k_{ij|k}) in Eq. (60), the evolution in a given region will depend only on the initial conditions within that region; furthermore, if one chooses a coordinate system in which the symmetric matrix S_ij(x) is diagonal, then r_ij and σ_ij will be diagonal in that system at any time t. In this way it suffices to work with the probability distribution for the three eigenvalues of S_ij. As shown in Ref. [23], the assumption that C(x) is a Gaussian random field suffices to compute this distribution explicitly in terms of a single dimensionful parameter which is related to the value of an integral that requires an ultraviolet cutoff. Then one can switch to dimensionless units by taking a specific value for this parameter, with the computationally convenient choice that was adopted in Ref. [23] and that will also be used here. If one also chooses ρ̄ such that 6πG_N ρ̄ = 1 in the corresponding units, then the perturbative series for a can be written down explicitly up to second order, neglecting cubic and higher orders in perturbation theory. In the following we will develop the theory further in terms of the dimensionless quantities that we have introduced here. This leads to unique results (up to ambiguities in approximations), with the only free parameter coming from the reintroduction of a dimensionful scale once we start comparing our results with physical quantities.
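The commutation property ∂_0⟨X⟩_mw = ⟨∂_0 X⟩_mw that underlies this framework can be illustrated with a small toy computation (entirely our own construction: a one-dimensional grid with a time-independent mass weight ρ√g, contrasted with a time-dependent volume weight √g):

```python
# Toy illustration (our construction): mass weighting commutes with ∂_t,
# volume weighting in general does not.
import math

xs = [i / 100.0 for i in range(101)]
m = [1.0 + 0.5 * math.sin(2 * math.pi * x) for x in xs]   # ρ√g: time-independent

def sqrtg(x, t):                  # toy time-dependent volume element
    return (1.0 + t) ** (1.0 + x)

def X(x, t):                      # an arbitrary scalar field
    return math.exp(-t) * math.cos(3.0 * x) + t * x * x

def avg(weights, f):              # weighted spatial average on the grid
    return sum(w * f(x) for x, w in zip(xs, weights)) / sum(weights)

t, eps = 0.7, 1e-5

def mw_avg(tt):
    return avg(m, lambda x: X(x, tt))

def vol_avg(tt):
    return avg([sqrtg(x, tt) for x in xs], lambda x: X(x, tt))

dXdt = lambda x: (X(x, t + eps) - X(x, t - eps)) / (2 * eps)
mw_comm = (mw_avg(t + eps) - mw_avg(t - eps)) / (2 * eps) - avg(m, dXdt)
vol_comm = (vol_avg(t + eps) - vol_avg(t - eps)) / (2 * eps) - avg([sqrtg(x, t) for x in xs], dXdt)
print(mw_comm, vol_comm)          # first ~0, second not
```

The mass-weighted difference vanishes to machine precision, while the volume-weighted one picks up the covariance between X and the time derivative of the volume element.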

Photon path average
Finally we want to connect the distance formula (16), which relies on the values of the quantities H = −[ln(1 + z)]˙ and i along a photon path, with the framework of Ref. [23] as summarized above. We propose to do the following. We replace the right-hand sides of Eqs. (41) and (50) by suitable expectation values which we will denote by ⟨· · ·⟩_pp, where the subscript stands for "photon path". The idea is that ⟨X⟩_pp(t) should be the average of X over all spatial positions x occupied by a photon of a given type (e.g. supernova or CMB) at the time t, as well as all directions v of propagation of such a photon. A complete realization of this concept would automatically guarantee consistency with correct ensemble and angular averaging. While the approximation we will make at the beginning of the next paragraph leads to a mild angular deviation, statistical isotropy and homogeneity will be manifest in all our computations. Every photon path corresponds to a random walk in the probability space determined by the six entries of S_ij (or, alternatively, three eigenvalues and three direction components). Then X = ⟨X⟩_pp + δX with ⟨δX⟩_pp = 0, and by the linearity of Eq. (16) the contribution of δX gets small if a photon probes different regions of the probability space within a short time.
Every photon path corresponds to a curve C in x-space (the R³ parametrized by the spatial coordinates x¹, x², x³) that ends at x_o. In the flat homogeneous case these curves are just straight lines. If the shapes of these curves were not altered by the presence of inhomogeneities, then our methods would tell us how the basic parameters are distributed with respect to the euclidean metric dl² = δ_ij dx^i dx^j along such a curve. We will make the simple approximation of assuming the same distribution even in the general case. As a next step we want to move on to a description that is based on physical time rather than euclidean length. We denote by v^i the tangent vector to C normalized to euclidean unit length, i.e. δ_ij v^i v^j = 1. Upon taking the g-norm g_ij v^i v^j of v and using Eq. (40) we find dt = √(g_ij v^i v^j) dl, which reflects the fact that the photon flight time is proportional to the traversed distance as measured with the physical metric g. For any path segment of length dl we average over the three basic statistical parameters (indicated by ⟨· · ·⟩_mw) and over all directions v, and weight by the time dt = √(g_ij v^i v^j) dl spent in such a segment. This results in Eq. (74), where the integrations are taken over the unit sphere S² = {v : δ_ij v^i v^j = 1} in tangent space; if X depends on ẋ^i explicitly, we make use of ẋ^i = v^i/√(g_kl v^k v^l), which follows from Eqs. (72), (73).
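The angular integrals entering these averages can be checked by a quick Monte Carlo computation (our own sketch; the exact unit-sphere averages are ⟨v^i v^j⟩ = δ^{ij}/3 and ⟨v^i v^j v^k v^l⟩ = (δ^{ij}δ^{kl} + δ^{ik}δ^{jl} + δ^{il}δ^{jk})/15):

```python
# Monte Carlo check (our own) of unit-sphere averages of polynomials in v^i.
import math
import random

random.seed(1)
N = 200000
s_ii = s_ij = s_iiii = s_iijj = 0.0
for _ in range(N):
    v = [random.gauss(0.0, 1.0) for _ in range(3)]
    n = math.sqrt(sum(c * c for c in v))
    v = [c / n for c in v]            # uniform direction on S^2
    s_ii   += v[0] * v[0]             # <v1 v1>      -> 1/3
    s_ij   += v[0] * v[1]             # <v1 v2>      -> 0
    s_iiii += v[0] ** 4               # <v1^4>       -> 1/5
    s_iijj += v[0] ** 2 * v[1] ** 2   # <v1^2 v2^2>  -> 1/15

print(s_ii / N, s_ij / N, s_iiii / N, s_iijj / N)
```

Normalizing a 3-vector of independent Gaussians is the standard trick for drawing uniform directions on the sphere.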
Our aim is to compute ⟨X⟩_pp for the non-trivial coefficients in Eq. (16), i.e. for the cases X = −[ln(1 + z)]˙ and X = i. To this end we require integrals over S² of expressions that are polynomials in the v^i except for the occurrence of factors of g_ij v^i v^j. Since exact results would involve elliptic functions we work in a basis in which the metric is diagonal and write g_ij = ḡ δ_ij + γ_ij with ḡ = (g_11 + g_22 + g_33)/3 and γ_11 + γ_22 + γ_33 = 0; then on the sphere δ_ij v^i v^j = 1. For each term in this expansion we require only integrals of polynomials in the v^i, such as those given in (77) and (79). From now on we simply omit any terms that are of quadratic or higher order in the γ_ij. While this may look excessively crude, one can check that even in the extremal cases of one or two vanishing eigenvalues the error is at most around 15%. For the integral in the denominator of (74) this gives, upon using (77), a simple closed expression. According to Eq. (41), −[ln(1 + z)]˙ = θ/3 + σ_ij ẋ^i ẋ^j. Since θ has no direction dependence, the angular integrations for this term are trivial. In evaluating the second term we use the fact that σ_ij is diagonal in the same coordinate system in which g_ij is; upon restricting to terms linear in γ_ij and using the formulas (79) and (77), and combining our results, we obtain Eq. (85). Next we turn our attention to ⟨i⟩_pp. Since no direction is singled out, the expressions in the third line of Eq. (50), which are all odd under ẋ^i → −ẋ^i, do not contribute after averaging. The optical shear σ_opt is determined by Eq. (15). The behavior for small t_o − t is easily found to be σ_opt ≈ (1/6)(t_o − t) R_αβμν ε^α k^β ε^μ k^ν, i.e. well behaved and vanishing in the limit t → t_o. Under a 90° rotation ε^(1) → ε^(2), ε^(2) → −ε^(1) the right-hand side of Eq. (15) changes sign, hence its photon path average vanishes and the behavior of σ_opt resembles a random walk around zero. Near t = 0 we can use the results of linear perturbation theory as presented in Sect. 4 to find that the right-hand side of Eq. (51) behaves like t^{−8/3}, hence that of Eq.
(15) like t^{−4/3}. Naively this would result in σ_opt ∼ t^{−1} and a contribution of type t^{−2/3} to Eq. (50), which is the same power as the leading (second order) behavior of the other terms, as we will shortly see; because of the random walk nature it will, however, be suppressed. In the following we will neglect the term (1 + z)^{−2} |σ_opt|² in Eq. (50), but keep in mind that i will receive a moderate positive correction for intermediate redshift values; in particular we should remember that this makes our results more reliable for smaller than for larger redshifts.
According to Eq. (68), the early-time behavior involves R̂_in = lim_{t→0} a² R. The contribution of (−σ_ij θ − r_ij) ẋ^i ẋ^j can be treated like that of σ_ij ẋ^i ẋ^j before. With slightly more work we also find the remaining averages, and putting the pieces together we obtain Eq. (90). Our formulas rely explicitly on the spatial metric g_ij in the diagonal basis. To obtain it from the quantities whose evolution is studied in Sect. 4 we use the diagonal evolution equations, which imply an expression for g_11 with analogous expressions for g_22 and g_33. Comparison with Eq. (62) shows that the constant must be the same in each case, and that setting it to 1 corresponds to a normalization where a² = a²_FLRW.

Perturbative results
Before proceeding to the results of a non-perturbative numerical computation, let us first assume that we are still so close to the EdS case that in most regions perturbation theory provides a good approximation. We work with the dimensionless quantities described at the end of Sect. 4. Again our first goal is the photon path average of the right-hand side of Eq. (41), which we obtain from Eq. (71) (to the same accuracy as there). From this formula it is evident that we cannot trust perturbation theory wherever S t^{2/3} is of order unity or larger: then the second order term would dominate over the first order one, giving rise to absurd results such as contraction in the center of a void (the location of the largest initial expansion, corresponding to a maximal value of S). From the expectation values of S² and s² as given in Eq. (70) it is clear that a significant part of the universe will start to violate perturbative results around t ≈ 1.
Keeping this fact in mind for later reference, let us now proceed with the perturbative computation. The approximation (81) is valid at linear order, and with Eq. (92) and the fact that σ^i_j is traceless we obtain the photon path average of −[ln(1 + z)]˙ at leading (second) order; note that I^(2) has dropped out at quadratic order, so that Eqs. (70), (71), (93) and (94) suffice for computing to the same order as a and θ before. This can be evaluated via Eqs. (98) and (99) (here we only consider leading orders), and combining the result with Eq. (96) gives an expression that again neglects terms of cubic or higher order in perturbation theory. In order to compute ⟨i⟩_pp up to second order in perturbation theory we require the mass-weighted average of Eq. (90). We begin with the terms whose linear parts drop out upon averaging, neglecting contributions of third or higher order in perturbation theory. All other expressions in Eq. (90) are explicitly of quadratic or higher order: one contribution follows with Eq. (98), and another from Eq. (99). Combining all contributions and using Eq. (70) we arrive at the second order expression for ⟨i⟩_pp. The integral in Eq. (23) can be computed explicitly to leading (second) order by using this value for i and replacing the other quantities by their EdS values. By integrating the second order equation (101), which is easily inverted, and reinserting the result into Eq. (101), we can compute ln(1 + z); integrating this expression yields the redshift–distance relation. The expression in the large parentheses is the correction that the structure distance gets compared to an EdS universe of the same age t_o. If we want to compare instead to the EdS case with the same H_o we must correct by the corresponding factor. According to the existing literature (e.g. Ref. [18]), however, the second order corrections are only roughly 10^{−4}. The reason for this discrepancy is as follows.
While our computations in the present section respect the essential terms at second order perturbation theory, some of the approximations introduced at the beginning of the previous section do not. A discussion of these deficits and their consequences will be given in the final section. In the meantime we will proceed with our approximations, since they nevertheless give rise to intriguing results.

Non-perturbative results
In this section we present the results of numerical computations performed with GNU Octave [28]. We used the Euler method with logarithmic time steps to solve the evolution equations (59) and (69). We assumed, however, constant r = r_in instead of using Eq. (60), for the following reasons: the last term in that equation describes wavelike perturbations which probably play no role and cannot be described directly within the present setup, and the other terms have extremely little impact on overall results (at least when volume evolutions are studied; see Fig. 12 of Ref. [23] and note that the tiny deviations only occur for t ≲ 1). This was done for a large set of initial conditions, and the resulting values for a, σ, r and R were used to evaluate the formulas of Sect. 5, with an appropriate probability measure for each set of initial conditions. More algorithmic details can be found in the appendix of Ref. [23].
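The time stepping used here can be sketched as follows (a minimal illustration of our own, using a test equation with known solution rather than the actual evolution equations (59) and (69); step counts and ranges are arbitrary):

```python
# Minimal sketch (ours) of Euler integration with logarithmic time steps,
# tested on da/dt = (2/3) a / t with exact solution a = t^(2/3).
def euler_log(f, t0, t1, a0, n):
    r = (t1 / t0) ** (1.0 / n)       # constant ratio between successive times
    t, a = t0, a0
    for _ in range(n):
        t_next = t * r
        a += f(a, t) * (t_next - t)  # explicit Euler step
        t = t_next
    return a

f = lambda a, t: (2.0 / 3.0) * a / t
t0, t1 = 1e-6, 1.0
a_num = euler_log(f, t0, t1, t0 ** (2.0 / 3.0), 6000)
a_exact = t1 ** (2.0 / 3.0)
print(a_num, a_exact, abs(a_num / a_exact - 1.0))
```

Logarithmic steps are the natural choice here because the evolution spans many decades in t and the EdS-like solutions are power laws, which are resolved uniformly well in ln t.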
In regions that collapse, the treatment in terms of irrotational dust breaks down and it is necessary to give a prescription on how to proceed with them. We followed the standard assumption, as suggested by the virial theorem, that collapsing regions shrink to half of their maximal sizes; somewhat unrealistically we pretended that such regions contract according to the irrotational dust evolution equations until that size is reached. The collapsed regions themselves were then treated in two distinct ways: firstly, by keeping them and letting all quantities retain the values that they had in the last moment of collapse, and secondly by just removing them from the statistics. The second approach makes more sense since it is doubtful whether many of the observed photons would have passed through a collapsed region, and also because the strong anisotropies that can occur during collapse should not persist in the virialized regions; nevertheless it is useful to have the other approach as well in order to get an idea of how strongly our results depend on details of modeling. In order to check that our results do not come solely from collapsing regions, we also performed computations in which we excluded any region from the statistics as soon as it started to contract. We will refer to these approaches as scenarios 1/2/3, respectively.
The starting point is a computation of the basic results of the averaging process. The time evolution of ⟨√ḡ⟩_mw = ⟨√((g_11 + g_22 + g_33)/3)⟩_mw, as computed according to Eq. (92), does not differ substantially from that of its EdS equivalent t^{2/3} (the discrepancy is less than 15% for the scenarios and time intervals that we consider here). We present our further results mainly in the form of figures created by GNU Octave [28]. In these figures we use a color coding of blue/cyan/green for scenarios 1/2/3, respectively, with dashed lines for the quantities H, q and d_S and solid lines for the other quantities corresponding to these scenarios; furthermore EdS values are indicated by red dash-dotted, volume average results by solid yellow, perturbative results by dotted magenta and ΛCDM reference values by black dotted lines. Figure 1 displays Ht over the time t for various (non-perturbative, perturbative and homogeneous) versions of the Hubble rate H. The strong deviations from the homogeneous case are a consequence mainly of local anisotropy, by the following mechanism. Consider a region R characterized by some specific values of θ and σ_ij and pick a frame {e_1, e_2, e_3} in which σ_ij is diagonal. Assume, without loss of generality, that σ_11 > σ_22 and that originally R had the same diameters along the corresponding directions e_1, e_2. Even though the overall volume expansion of R is determined by θ, it will expand faster along e_1 and more slowly along e_2, so that after a while R will have a larger extension in the e_1-direction than in the e_2-direction. A photon traversing R along e_1 will not only experience a stronger redshift per unit of time spent in R than one moving along e_2, but it will also spend more time in R. The corresponding weighting that favors directions with stronger expansion results in the effect that on average a photon traversing R experiences a higher redshift than the volume expansion of R would suggest. In Fig.
2 the time evolution of i √ḡ is displayed for our three non-perturbative scenarios; to be precise, i_pp √ḡ_mw, i.e. the mass-weighted average of the right-hand side of Eq. (90), is shown. Here the differences between the perturbative and the non-perturbative treatments are substantial. Figure 3 displays various versions of the structure distance; for this purpose the observation time t_o has to be fixed, which we do via the condition t_o ≈ 1/H_inf(t_o). This is suggested by the fact that it seems to be a very good approximation in the case of the ΛCDM model and also close to lower bounds coming from ages of globular clusters; in a more general analysis one should probably also allow for values of H_o t_o somewhat above 1. For our first scenario we find t_o ≈ 0.7 in this way. The three lines ending at that value show various versions of the structure distance as functions of t = t_e ∈ [0, t_o]: the solid blue line shows d_S itself, the dashed blue line below corresponds to the inferred version of d_S, and the dash-dotted red line that "starts late" corresponds to an EdS universe with the same value of H_o, which would have had a shorter lifetime up to now. The other two triplets of lines correspond in an analogous way to the second scenario, where t_o ≈ 1.35, and to the third one with t_o ≈ 2.3.
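The anisotropy mechanism described above can be made quantitative in a small toy model (our own construction with arbitrary numbers, not taken from the paper): photons travelling along a principal direction e_i of the shear see the rate H_i = θ/3 + σ_ii, and directions of stronger expansion are weighted more heavily because the region extends further along them.

```python
# Toy model (ours): the extension-weighted directional redshift rate exceeds
# θ/3 because the weighting favors directions of stronger expansion.
import math

theta = 1.0
sigma = [0.2, -0.1, -0.1]             # traceless diagonal shear, sum = 0
H = [theta / 3.0 + s for s in sigma]  # directional expansion rates

t = 2.0                               # some elapsed time
L = [math.exp(Hi * t) for Hi in H]    # relative extensions along e_1, e_2, e_3

weighted = sum(Li * Hi for Li, Hi in zip(L, H)) / sum(L)
print(weighted, theta / 3.0)          # weighted rate vs volume rate θ/3
```

With these (arbitrary) values the extension-weighted rate comes out above θ/3, in line with the statement that photon path averaging yields a larger redshift per unit time than the volume expansion would suggest.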
For producing Fig. 4, a plot of various versions of the structure distance over ln(1 + z), the result of Eq. (85) (as shown in Fig. 1) was integrated to get ln(1 + z) as a function of t and combined with the values of the structure distance displayed in Fig. 3. Once again this plot shows that the photon path prescription leads to strongly different results, with effects of roughly the same order coming from the more precise treatments of the two non-trivial coefficients in Eq. (16).
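As a consistency check of this construction, the same two integrations can be carried out in the homogeneous EdS case, where H = 2/(3t) and the closed form d_S(z) = (2/H_o)(1 − 1/√(1+z)) is known. The following sketch is ours (in Python rather than the Octave used for the figures):

```python
import numpy as np

# Consistency check of the construction behind Fig. 4 in the homogeneous EdS
# case (our sketch; the paper's actual computation is inhomogeneous).  With
# c = 1 and a(t) = (t/t_o)^(2/3) one has H = 2/(3t) and the closed form
# d_S(z) = (2/H_o)(1 - 1/sqrt(1+z)).
t_o = 1.0
t = np.linspace(1e-3, t_o, 100001)
dt = t[1] - t[0]

# step 1: integrate H from t_e to t_o to get ln(1+z) as a function of t_e
H = 2.0 / (3.0 * t)
I = np.concatenate([[0.0], np.cumsum((H[1:] + H[:-1]) / 2 * dt)])
ln1pz = I[-1] - I

# step 2: in the flat case the structure distance d_S = (1+z) d_A equals the
# comoving distance, integrated along the light cone
a = (t / t_o) ** (2.0 / 3.0)
J = np.concatenate([[0.0], np.cumsum((1.0 / a[1:] + 1.0 / a[:-1]) / 2 * dt)])
d_S = J[-1] - J

# combine the two parametrically into d_S(z) and compare with the closed form
z = np.expm1(ln1pz)
H_o = 2.0 / (3.0 * t_o)
d_closed = (2.0 / H_o) * (1.0 - 1.0 / np.sqrt(1.0 + z))
assert np.max(np.abs(d_S - d_closed)) < 1e-3
```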
While all the results presented so far refer to times and distances in terms of the mathematically convenient but observationally meaningless units of Sect. 4, the following plot uses standard units of years and parsecs. Figure 6 is identical to Fig. 3 except for the normalization and the inclusion of a reference ΛCDM curve (again as a black dotted line). This figure shows that, with the correct scaling, the predictions of the three different scenarios actually differ less than it appeared originally. Somewhat surprisingly, ⟨d_S⟩ is closer to the ΛCDM values than d_S here; in particular our results for d_S overestimate the distances for early emission times. We can make this discrepancy quite precise by computing the distance to the last scattering surface from which the cosmic microwave background stems (see also Ref. [30]). This is not completely straightforward because the step width of our programs is not fine enough for handling the time t_ls of last scattering that corresponds to z = 1090. We have circumvented this obstacle by using a combination of our programs and linear perturbation theory to find t_ls, noting that the solution of Eq. (16) near t = 0 takes the form d_S(t) = d_S(0) + d_S^(1) t^{1/3} + O(t^{2/3}), checking that the numerical results for small t are very well fitted by the first two terms, and using them to get d_S(t_ls). Upon doing this and converting the result to standard units, we found d_S(t_ls) ≈ 20.7/20.9/19.8 Gpc for scenarios 1/2/3, respectively. These numbers overestimate d_S by almost 50% compared to the Planck result [31] of 13.9 Gpc (see their Table 2 and use d_S = r_*/θ_* [Mpc]), which is the largest discrepancy from standard values that we found in the present work. There are two possible explanations. On the one hand, we have omitted the term (1 + z)^{-2} |σ_opt|^2 (related to Weyl focusing) in Eq. (50); cf. the discussion after Eq. (85).
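The two-term extrapolation just described can be sketched as follows; the coefficients of the synthetic "numerical data" are invented for illustration, and only the fitting procedure mirrors the text:

```python
import numpy as np

# Sketch of the extrapolation used for d_S(t_ls): fit the leading small-t
# behaviour d_S(t) ≈ d_S(0) + d1 * t**(1/3) to numerical data.  The "data"
# here are synthetic (coefficients 1.0, -0.8, 0.1 are made up), standing in
# for the output of the actual programs.
t = np.linspace(1e-6, 1e-4, 50)
d_num = 1.0 - 0.8 * t ** (1 / 3) + 0.1 * t ** (2 / 3)   # stand-in data

# least-squares fit of the two-term ansatz c0 + c1 * t**(1/3)
A = np.vstack([np.ones_like(t), t ** (1 / 3)]).T
(c0, c1), *_ = np.linalg.lstsq(A, d_num, rcond=None)

# extrapolate to the time of last scattering (scenario 2 value from the text)
t_ls = 5.3e-5
d_ls = c0 + c1 * t_ls ** (1 / 3)

# the fit recovers the leading coefficients despite the t**(2/3) contamination
assert abs(c0 - 1.0) < 1e-3 and abs(c1 + 0.8) < 2e-2
```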
Inclusion of this term would make the shape of the function d_S(z) flatter and therefore more similar to the ΛCDM reference curve. On the other hand it is not clear whether the angular distance as inferred from the Planck results really should be exactly the same as that computed via the Sachs equations. The Planck results refer to finite physical distances at t = t_ls, whereas the Sachs equations refer to the intersection of the observer's backward light cone with that time slice. In the homogeneous case this intersection will be perfectly spherical, but in a realistic inhomogeneous universe it might be somewhat crumpled (more like the surface of an orange), and the distance that corresponds to a total length along that surface (which is what the Sachs equations compute) will be somewhat larger. Figure 7 displays, like Fig. 4, distance over redshift, the changes being the normalization of the distance to EdS values, the narrower range of z-values, the use of z instead of ln(1 + z), and the inclusion of the ΛCDM scenario and supernova data. Both the second and the third scenario perform much better than the EdS case; actually the ΛCDM curve lies between the second and third scenario for most of the redshift values shown in the plot, and the second one somewhat overestimates the deviation from EdS. The first scenario, in which collapsed regions are included with the values for [ln(1 + z)]˙ and ℐ that they had in the last moment of collapse, overestimates these deviations even more strongly. This suggests that our methods would be improved by introducing a smooth slowing of the collapse (as happens in reality), with a corresponding smooth transition of [ln(1 + z)]˙ and ℐ to zero. The fact that even our third scenario, in which we have suppressed the effects from contracting regions, deviates strongly (and in the right direction) from the EdS case demonstrates that such an improvement could not obliterate the total effect of our treatment of inhomogeneities.
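For comparisons of this kind the homogeneous reference curves are elementary to generate. Here is a sketch in units c = H_o = 1; the choice Ω_m = 0.3 for the flat ΛCDM curve is ours, made only for illustration:

```python
import numpy as np

# Homogeneous reference curves in units c = H_o = 1 (flat models; the value
# Omega_m = 0.3 for the LambdaCDM curve is an illustrative choice of ours).
z = np.linspace(0.0, 2.0, 2001)

def comoving_distance(E, z):
    """d_S = (1+z) d_A = int_0^z dz'/E(z') in a flat universe (trapezoid rule)."""
    integrand = 1.0 / E(z)
    dz = z[1] - z[0]
    return np.concatenate([[0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2 * dz)])

E_eds = lambda z: (1.0 + z) ** 1.5                       # EdS
E_lcdm = lambda z: np.sqrt(0.3 * (1 + z) ** 3 + 0.7)     # flat LambdaCDM

d_eds = comoving_distance(E_eds, z)
d_lcdm = comoving_distance(E_lcdm, z)

# at every z > 0 the LambdaCDM structure distance exceeds the EdS one,
# which is the deviation the scenarios of the text must reproduce
assert np.all(d_lcdm[1:] > d_eds[1:])
# check the numerical EdS curve against its closed form 2(1 - 1/sqrt(1+z))
assert np.max(np.abs(d_eds - 2 * (1 - 1 / np.sqrt(1 + z)))) < 1e-4
```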
What have we seen up to now? Considering a universe with Λ = 0, with distributions of geometric quantities that follow directly from initial conditions based on a Gaussian distribution, and with photons that obey the Sachs optical equations, we have shown the following: there is a time t_o such that an observer at that time sees a redshift–distance relation remarkably similar to that predicted by the standard ΛCDM scenario, and if the observer analyzes the data without taking the inhomogeneities into account, he will infer a Hubble rate H_inf such that H_inf t_o = 1 and a deceleration parameter q_inf ≈ −0.5.
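How such inferred values arise can be sketched explicitly: fitting the homogeneous low-redshift expansion d_S(z) = (1/H_inf)[z − (1 + q_inf)z²/2 + O(z³)] to a distance–redshift relation yields H_inf and q_inf. The mock data below come from a flat ΛCDM model (Ω_m = 0.3, our choice) rather than from the scenarios of the text:

```python
import numpy as np

# Sketch of how an observer infers H_inf and q_inf from a distance-redshift
# relation.  Mock data: flat LambdaCDM with Omega_m = 0.3 in units c = H_o = 1
# (an illustrative stand-in for the paper's scenarios).
z_grid = np.linspace(0.0, 0.2, 4001)
E = np.sqrt(0.3 * (1 + z_grid) ** 3 + 0.7)
dz = z_grid[1] - z_grid[0]
inv_E = 1.0 / E
d_S = np.concatenate([[0.0], np.cumsum((inv_E[1:] + inv_E[:-1]) / 2 * dz)])

# least-squares fit d_S = a z + b z^2 + c z^3; the homogeneous expansion gives
# a = 1/H_inf and b = -(1 + q_inf)/(2 H_inf)
A = np.vstack([z_grid, z_grid ** 2, z_grid ** 3]).T
coef, *_ = np.linalg.lstsq(A, d_S, rcond=None)
a, b, c = coef

H_inf = 1.0 / a
q_inf = -2.0 * b / a - 1.0

# for this mock the exact values are H_o = 1 and q_o = Omega_m/2 - Omega_Lambda = -0.55
assert abs(H_inf - 1.0) < 0.01
assert abs(q_inf + 0.55) < 0.02
```

The same fit applied to the curves of Fig. 7 would return the inferred values quoted in the text.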
We have already considered the time t_ls of last scattering in our discussion of Fig. 6. We can make a further, less ambiguous statement about that era in the following manner. In our most realistic scenario (the second one), t_ls ≈ 5.3 × 10^{-5} in the dimensionless units of Sect. 4 (with t = 0 the instant at which the singularity would have occurred in a purely matter dominated universe). At this time linear perturbation theory is still perfectly valid, so we can compute density perturbations at last scattering with the help of formulas (71) and (70): these are the density perturbations for the total matter, which are dominated by those for dark matter. According to Eq. (2.6.30) of Ref. [22], the density perturbations of baryonic matter satisfy δρ_B/ρ_B = 3 δT/T, where T is the temperature; using the commonly cited value of 10^{-5} for the relative temperature fluctuations in the CMB we find that, at last scattering, the total density perturbations are roughly 50 times as large as those for the baryonic matter. This fits very well with the fact that dark matter decouples from photons (and hence clumps gravitationally) earlier than baryons. Similar values for the ratios of the baryonic versus total density perturbations are required for structure formation; see e.g. Fig. 1 of Ref. [29]. We can turn this argument around: from the density perturbations we see that the time of last scattering cannot have occurred significantly before the time t_ls ≈ 5.3 × 10^{-5} that corresponds to t_o ≈ 1.35. But then it is clear that the inhomogeneities will have a significant impact on the inferred Hubble and deceleration rates, so that the assumption that a homogeneous universe (with or without a cosmological constant) gives correct predictions necessarily breaks down. Conversely, since we do not require a non-zero Λ to account for present observations, the simplest assumption is to take Λ = 0.
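The numerical comparison used above is elementary arithmetic:

```python
# Arithmetic behind the comparison at last scattering.  delta T/T = 1e-5 is the
# commonly cited CMB value used in the text; the factor 50 is the ratio of
# total to baryonic perturbations found there.
dT_over_T = 1e-5
delta_baryon = 3 * dT_over_T       # adiabatic relation: delta rho_B / rho_B = 3 delta T / T
delta_total = 50 * delta_baryon    # total matter perturbations, dominated by dark matter
print(delta_baryon, delta_total)   # roughly 3e-5 and 1.5e-3
```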

Discussion
Let us start our discussion with a brief reiteration of our assumptions and conclusions. Considering a universe that
- is matter dominated and obeys the Einstein equations,
- in its early stages was very close to being spatially flat and homogeneous, with only Gaussian perturbations, and
- has vanishing cosmological constant, Λ = 0,
we found that there is a time t_o such that observations made at that time, interpreted with formulas appropriate to the homogeneous case, would suggest
- an inferred Hubble rate H_inf such that H_inf t_o ≈ 1,
- an inferred deceleration parameter of q_inf ≈ −0.5, and
- density perturbations at a redshift of 1090 that fit well with the values required at last scattering to lead to structure formation.
In other words, an observer at time t_o in such a universe sees essentially what present day cosmologists see, even though Λ vanishes. This is the consequence of a framework with only one adjustable parameter (the overall scale). Once this parameter has been fixed by any one of the three quantities just mentioned (and t_o thus identified with the present age of the universe), the prediction for each of the other two provides a highly non-trivial test. Our methods have performed very well on both of them.
In order to arrive at these results it is essential to consider the effects of inhomogeneities on light propagation (not just on the evolution of volumes), and to use a formalism that transcends perturbation theory. The main steps involve the derivation of the differential equation (16) for the structure distance d_S = (1 + z) d_A, and the computation of the two non-trivial coefficients H = −[ln(1 + z)]˙ and ℐ that occur in this equation. In the spatially flat homogeneous case H is just the usual Hubble rate and ℐ = 0; otherwise each of these coefficients contributes significantly, with effects of roughly the same magnitude, to the deviations in the values of d_S, H_inf and q_inf. The main source of discrepancies from FLRW universes is the local anisotropy, as encoded in the dust shear σ_ij and the traceless part r_ij of the Ricci tensor, and not so much the inhomogeneity, which manifests itself in variations of the expansion rate θ and the spatial Ricci scalar R. While Eq. (16) is valid in an arbitrary geometry in which photons follow lightlike geodesics, the subsequent computations required a number of approximations:
- The matter was modeled as irrotational dust. While this is an excellent approximation during expansion, it would not permit stable structures such as galaxies and clusters as the results of collapse. Our way of treating this problem, by simply assuming that collapse halts at half the maximum size (or by ignoring collapsing regions altogether), is certainly somewhat ambiguous. In particular, the differences between the three variants that we chose show that the results do depend on such details; at the same time our third scenario demonstrates that deviations from the homogeneous case do not stem exclusively from collapse. As we argued in the discussion of Fig. 7, a smoother transition to the virialized state would probably lead to even better agreement with observations.
- We have replaced statistical quantities by their expectation values in order to arrive at a description in which distance can be seen as a function of redshift, as in homogeneous models (cf. the first paragraph of Sect. 5). From the set of supernova data it is clear that this is a gross oversimplification.
- We assumed a distribution of photon paths in x-space (the space in which our matter is at rest, which starts out as almost perfectly Euclidean) that is the same as if the photons moved along straight lines in that space.
- While exact evolution equations were used for the local scale factor a, the shear σ_ij and the Ricci scalar R, the evolution of the traceless part r_ij of the Ricci tensor was simplified by ignoring the right-hand side of Eq. (60).
- In our analysis of the expressions that arise upon taking photon path averages, we have neglected terms of quadratic or higher order in γ_ij (a scaled version of the traceless part of the metric g_ij).
- For reasons discussed after Eq. (85) we ignored Weyl focusing, i.e. the contribution of the optical shear σ_opt.
- For the numerical treatment the time axis and the probability distribution for the background parameters were discretized. The resulting errors are, however, much smaller than those coming from the other approximations.
Unfortunately the second and third items are not as harmless as they originally seemed. Upon replacing ℐ and H by their expectation values, we have introduced errors δℐ and δH which have vanishing expectation values and are of first order. These lead to errors δd and δln(1 + z) of the same type. The transition from d(t) and ln(1 + z)(t) to d(z) then generates products of errors which are of second order and have non-vanishing expectation values. The approximation introduced in the second paragraph of Sect. 5 probably leads to similar problems, whereas our computations in Sect. 6 respect the essential terms at second order in perturbation theory.
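The origin of these second order errors can be illustrated with a toy Monte Carlo (ours, not the paper's setup): a zero-mean first-order fluctuation in H produces, after inverting ln(1 + z)(t) to express the distance as a function of z, a bias with non-vanishing mean:

```python
import numpy as np

# Toy illustration: a zero-mean error in H gives a biased d(z).
# All functional forms here (d(t) = t^2, the Gaussian error) are invented
# purely to exhibit the mechanism described in the text.
rng = np.random.default_rng(0)
eps = 0.1 * rng.standard_normal(1_000_000)   # first-order error, zero mean

H_bar = 1.0
x_fixed = 1.0                                # a fixed value of ln(1+z)
t_of_z = x_fixed / (H_bar + eps)             # emission time inferred at fixed z
d_of_z = t_of_z ** 2                         # toy distance law d(t) = t^2

# the mean of d at fixed z differs from the value obtained with the mean H:
# a second-order bias of order eps^2 with non-vanishing expectation value
bias = d_of_z.mean() - (x_fixed / H_bar) ** 2
print(bias)
assert 0.01 < bias < 0.1
```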
A general nth order term is an n-fold product of C (or, equivalently, of the Newtonian potential Φ) or its derivatives, in such a way that typically the nth order term has a total of up to 2(n − 1) spatial derivatives more than the first order term (see Ref. [27] for a detailed discussion). While C itself is small, ∂²C can be large; for example, density perturbations are of this type. In particular, among the terms contributing to the redshift–distance relation at second order, the largest ones that we find are proportional to (∂²C)² and give rise to corrections of the order of several percent. However, according to the two independent groups that have performed complete computations up to second order [10,13,18,19], terms of that type cancel out completely and the subleading terms have an order of magnitude of only around 10^{-4}.
This can be explained in the following way. In our approach, using the synchronous gauge, the whole setup relies on expressing quantities in terms of the entries (or eigenvalues) of the matrix S_ij = ∂_i∂_j C; to be precise, the nth order contribution to any of the quantities a, σ, R and r is homogeneous of degree n in S. Terms of this type also produce the dominant contribution to the deviation of the metric from the FLRW case. But terms of (schematically) the type ∂^{2n}C^n in g_ij give rise to terms of the type ∂^{2n+2}C^n in the curvature, which must all cancel. This means that the part of the spatial metric consisting of the highest derivatives is flat, implying that it is possible to reparameterize the spatial slices in such a way that the metric no longer contains the ∂^{2n}C^n terms. Hence any approximation in the synchronous gauge that does not respect the precise structure of the ∂^{2n}C^n terms introduces errors that are potentially larger than the physical effects of the inhomogeneities. Since our approach suffers from this problem, it does not provide a conclusive argument that the standard ΛCDM picture requires modification. Nevertheless it is intriguing how well it appears to perform: after all, one would expect mere errors to result in random nonsense rather than something that closely resembles observations.
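Schematically (our shorthand restatement of the counting just described):

```latex
g_{ij} \;\supset\; \partial^{2n} C^{\,n}
\qquad\Longrightarrow\qquad
R[g] \;\sim\; \partial^{2} g \;\supset\; \partial^{2n+2} C^{\,n},
```

and since the ∂^{2n+2}C^n terms in the curvature must cancel among themselves, the ∂^{2n}C^n part of the metric is pure gauge; a truncation that breaks this pattern leaves behind spurious terms of size (∂²C)^n.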
If we could be sure of the validity of perturbation theory in the present universe, we would still have to consider our results to be purely accidental. But standard perturbation theory cannot be trusted either. The real universe features shell crossings and vorticity, which do not occur in a purely perturbative modeling of an irrotational dust universe, but whose effects are taken into account by the approach to virialization in the present framework. Besides, it is well known that higher order terms are not smaller than first order terms [27]. This is confirmed in the present work: as the argument after Eq. (93) demonstrates, perturbation theory breaks down around t ≈ 1, which corresponds to the present era.