The Cosmic Microwave Background

We present a brief review of current theory and observations of the cosmic microwave background (CMB). New predictions for cosmological defect theories and an overview of the inflationary theory are discussed. Recent results from various observations of the anisotropies of the microwave background are described, and a summary of the proposed experiments is presented. A new analysis technique based on Bayesian statistics that can be used to reconstruct the underlying sky fluctuations is summarised. Current CMB data are used to set some preliminary constraints on the values of the fundamental cosmological parameters Ω and H_0 using the maximum likelihood technique. In addition, secondary anisotropies due to the Sunyaev-Zel'dovich effect are described.

On smaller scales, corresponding to regions in causal contact at recombination, we should be able to see in the CMB the effects of physical processes occurring at recombination, such as the acoustic oscillations of the coupled photon-baryon fluid, which also gives us a direct link with the physics of galaxy formation.
In 1992, the NASA Cosmic Background Explorer (COBE) satellite was the first experiment to detect the bumps [86]. These initial measurements were in the form of a statistical detection rather than individual physical features (some features in the first-year maps were real CMB anisotropies, but it was not possible to distinguish these from the noise except in a statistical way). Today, experiments all around the world are finding these bumps, on both causally connected and non-causally connected scales, which eventually grew into galaxies and clusters of galaxies. The required sensitivities called for new techniques in astronomy. The main principle behind all of these experiments is that, instead of measuring the actual brightness, they measure the difference in brightness between different regions of the sky. The experiments at Tenerife produced what was probably the first detection of real, individual CMB fluctuations [33] on scales comparable to the beam size of the experiment. These particular features were later confirmed by the COBE two-year data [53].
There are many different theories of how the universe began its life and how it evolved into the structures seen today. Each of these theories makes slightly different predictions of how the universe looked at the very early stages, which until now have been impossible to prove or disprove. With knowledge of the structure of the CMB, it should be possible within a few years for astronomers to tell us where the universe came from, how it developed and what will happen to it in the future.
Useful overall reviews of the physics of the production of CMB fluctuations, which will complement the informal presentation given in the next section, are contained in White et al., 1994 [97], Scott et al., 1995 [81] and Hu et al., 1995 [40]. This review is intended to be an extension and update of Lasenby and Jones, 1997 [51], and parts of that review are reproduced here so that the arguments are easily followed.

Perturbations from inflation
We start from Einstein's equation for the evolution of the Universe,

(Ṙ/R)^2 = (8πG/3) ρ − k/R^2 + Λ/3,

where R describes the size of the Universe, G is the gravitational constant, ρ is the mean density of the Universe, k is a measure of the curvature of space and Λ is the cosmological constant, which can be thought of as the zero-point energy of the vacuum. If the cosmological term dominates (as the scalar field is expected to at very high temperatures), then the other two terms become negligible and it is possible to solve the equation to find

R(t) ∝ exp(√(Λ/3) t).

Therefore, the inflationary theory describes an exponential expansion of space in the very early Universe. Amplification of initial quantum irregularities then results in a spectrum of long-wavelength perturbations on scales initially bigger than the horizon size. Central to the theory of inflation is the potential V(φ), which describes the self-interaction of the scalar inflaton field φ. Due to the unknown nature of this potential, and the unknown parameters involved in the theory, inflationary theory is poor (at the moment) at predicting the overall amplitude of the matter fluctuations at recombination. However, there is reasonable agreement that the form of the fluctuation spectrum coming out of inflation should be

P(k) ∝ k^n,

where k is the comoving wavenumber and n is the 'tilt' of the primordial spectrum. The latter is predicted to lie close to 1 (the case n = 1 being the Harrison-Zel'dovich, or 'scale-invariant', spectrum). An overdensity in the early Universe does not collapse under the effect of self-gravity until it enters its own particle horizon, when every point within it is in causal contact with every other point. The perturbation will continue to collapse until it reaches the Jeans length, at which time radiation pressure will oppose gravity and set up acoustic oscillations. Since overdensities of the same size will pass the horizon size at the same time, they will be oscillating in phase.
These acoustic oscillations occur in both the matter field and the photon field and so will induce 'Doppler peaks' in the photon spectrum.
The height of the Doppler peaks in the power spectrum depends on the number of acoustic oscillations that have taken place since entering the horizon. For overdensities that have undergone half an oscillation there will be a large Doppler peak (corresponding to an angular size of ∼ 1°). Other peaks occur at harmonics of this. As the amplitude and position of the primary and secondary peaks are intrinsically determined by the number of electron scatterers and by the geometry of the Universe, they can be used as a test of the density parameter of baryons and dark matter, as well as other cosmological parameters.
Prior to the last scattering surface, the photons and matter interact on scales smaller than the horizon size. Through diffusion, the photons will travel from high density regions to low density regions 'dragging' the electrons with them via Compton interaction. The electrons are coupled to the protons through Coulomb interactions, and so the matter will move from high density regions to low density regions. This diffusion has the effect of damping out the fluctuations and is more marked as the size of the fluctuation decreases. Therefore, we expect the Doppler peaks to vanish at very small angular scales. This effect is known as Silk damping [83].
Another possible diffusion process is free streaming. It occurs when collisionless particles (e.g. neutrinos) move from high density to low density regions. If these particles have a small mass, then free streaming causes a damping of the fluctuations. The exact scale this occurs on depends on the mass and velocity of the particles involved. Slow moving particles will have little effect on the spectrum of fluctuations as Silk damping already wipes out the fluctuations on these scales, but fast moving, heavy particles (e.g. a neutrino with 30 eV mass), can wipe out fluctuations on larger scales corresponding to 20 Mpc today [28].
Putting this all together, we see that on large angular scales (> 2°) we expect the CMB power spectrum to reflect the initially near scale-invariant spectrum coming out of inflation; on intermediate angular scales we expect to see a series of peaks, and on smaller angular scales (< 10 arcmin) we expect to see a sharp decline in amplitude. These expectations are borne out in the actual calculated form of the CMB power spectrum in what is currently the 'standard model' for cosmology, namely inflation together with cold dark matter (CDM). The spectrum for this, assuming Ω = 1 and standard values for other parameters, is shown in Figure 2.
The quantities plotted are ℓ(ℓ + 1)C_ℓ versus ℓ, where the C_ℓ are defined via the spherical harmonic expansion of the temperature fluctuations,

ΔT/T(θ, φ) = Σ_ℓm a_ℓm Y_ℓm(θ, φ),   with   ⟨a_ℓm a*_ℓ′m′⟩ = C_ℓ δ_ℓℓ′ δ_mm′,

and the Y_ℓm are standard spherical harmonics. The reason for plotting ℓ(ℓ + 1)C_ℓ is that it approximately equals the power per unit logarithmic interval in ℓ. Increasing ℓ corresponds to decreasing angular scale θ, with a rough relationship between the two of θ ≈ 2/ℓ radians. In terms of the diameter of corresponding proto-objects imprinted in the CMB, a rich cluster of galaxies corresponds to a scale of about 8 arcmin, while the angular scale corresponding to the largest scale of clustering we know about in the Universe today corresponds to 1/2 to 1 degree. The first large peak in the power spectrum, at ℓ's near 200, and therefore angular scales near 1°, is known as the 'Doppler', or 'Sakharov', or 'acoustic' peak. As stated above, the inflationary CMB power spectrum plotted in Figure 2 is that predicted by assuming the standard values of the cosmological parameters for a CDM model of the Universe. In order for an experimental measurement of the angular power spectrum to be able to place constraints on these parameters, we must consider how the shape of the predicted power spectrum varies in response to changes in these parameters. In general, the detailed changes due to varying several parameters at once can be quite complicated. However, if we restrict our attention to the parameters Ω, H_0 and Ω_b, the fractional baryon density, then the situation becomes simpler.
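The rough conversion between multipole ℓ and angular scale quoted above can be sketched in a few lines (a sketch only; the exact correspondence depends on the experiment's window function):

```python
# Rough conversion between multipole l and angular scale, using the
# approximate relation theta ~ 2/l radians quoted in the text.
import math

def l_to_theta_deg(l):
    """Angular scale in degrees probed by multipole l, via theta ~ 2/l rad."""
    return math.degrees(2.0 / l)

def theta_deg_to_l(theta_deg):
    """Multipole probed by an angular scale theta (degrees)."""
    return 2.0 / math.radians(theta_deg)

# The first acoustic peak near l ~ 200 corresponds to roughly half a degree:
print(round(l_to_theta_deg(200), 2))
# A rich-cluster scale of ~8 arcmin corresponds to l of several hundred:
print(round(theta_deg_to_l(8.0 / 60)))
```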
Perhaps most straightforward is the information contained in the position of the first Doppler peak, and of the smaller secondary peaks, since this is determined almost exclusively by the value of the total Ω, and varies as ℓ_peak ∝ Ω^(−1/2). (This behaviour is determined, as mentioned above, by the linear size of the causal horizon at recombination, and the usual formula for angular diameter distance.) This means that if we were able to determine the position (in a left/right sense) of this peak, and we were confident in the underlying model assumptions, then we could read off the value of the total density of the Universe. (In the case where the cosmological constant was non-zero, we would effectively be reading off the combination Ω_matter + Ω_Λ.) This would be a determination of Ω free of all the usual problems encountered in local determinations using velocity fields etc.
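As a toy illustration of this "reading off", one can invert ℓ_peak ∝ Ω^(−1/2), taking ℓ_peak ≈ 200 for a flat Universe as the fiducial value (an assumption for illustration; the precise coefficient depends on the model):

```python
# Sketch: inferring Omega from the first Doppler-peak position, using
# l_peak = L_PEAK_FLAT * Omega**(-1/2), with L_PEAK_FLAT ~ 200 assumed.

L_PEAK_FLAT = 200.0  # assumed first-peak multipole for Omega = 1

def omega_from_peak(l_peak):
    """Total density parameter implied by an observed first-peak position."""
    # l_peak = L_PEAK_FLAT * Omega**-0.5  =>  Omega = (L_PEAK_FLAT / l_peak)**2
    return (L_PEAK_FLAT / l_peak) ** 2

print(omega_from_peak(200))  # a peak at l ~ 200 implies a flat Universe
print(omega_from_peak(400))  # a peak at l ~ 400 would imply a low-density one
```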
Similar remarks apply to the Hubble constant. The height of the Doppler peak is controlled by a combination of H_0 and the density of the Universe in baryons, Ω_b. We have a constraint on the combination Ω_b H_0^2 from nucleosynthesis, and thus using this constraint and the peak height we can determine H_0 within a band compatible with both nucleosynthesis and the CMB. Alternatively, if we have the power spectrum available to good accuracy covering the secondary peaks as well, then it is possible to read off the values of Ω_tot, Ω_b and H_0 independently, without having to bring in the nucleosynthesis information. The overall point here is that the power spectrum of the CMB contains a wealth of physical information, and that once we have it to good accuracy and have become confident that an underlying model (such as inflation and CDM) is correct, then we can use the spectrum to obtain the values of parameters in the model, potentially to high accuracy. This will be discussed further below, both in the context of the current CMB data, and in the context of what we can expect in the future.
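The interplay between the nucleosynthesis constraint and H_0 can be sketched numerically; the value 0.019 used for Ω_b h^2 below (with h = H_0/100 km s⁻¹ Mpc⁻¹) is an illustrative assumption, not a number taken from the text:

```python
# Sketch: combining an assumed nucleosynthesis constraint Omega_b * h^2 = const
# with a trial baryon density Omega_b to bound H0 (km/s/Mpc).

OMEGA_B_H2 = 0.019  # assumed nucleosynthesis value, for illustration only

def h0_from_baryon_fraction(omega_b):
    """H0 implied by Omega_b * h^2 = OMEGA_B_H2 for a given Omega_b."""
    h = (OMEGA_B_H2 / omega_b) ** 0.5
    return 100.0 * h

# A higher baryon fraction forces a lower Hubble constant:
for omega_b in (0.02, 0.05, 0.10):
    print(omega_b, round(h0_from_baryon_fraction(omega_b), 1))
```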

New predictions for topological defects
The formation of topological defects is a generic property of symmetry breaking in unified field theories, and formation of such defects is expected to occur during phase transitions in the early Universe. Such phase transitions lead to spatial gradients in the value of the field which cannot be eliminated, and the resulting field defects contain energy concentrations that can gravitationally perturb the surrounding matter and thus induce structure formation. Depending on the properties of the field involved, the topological defects are strings, monopoles, walls and textures. If the seeds for galaxy formation are indeed provided by topological defects, we can expect the resulting power spectrum of CMB fluctuations to differ from that predicted by inflation. The full calculation of the power spectrum predicted by defects is much harder than that for inflation, and consequently progress in this field has been relatively slow. Qualitatively, however, the main difference from the inflationary scenario is that all perturbations have to be generated causally, that is, they must be within the horizon volume at a given epoch. Thus anisotropies on scales above about 2°, as already measured by e.g. the COBE satellite, have to be generated after recombination, and correspond to late-time effects. (The horizon size grows after inflation, roughly like (1 + z)^(−1/2).) This entails, for example in the case of strings, the calculation of the properties of the string network through recombination to late times, which is computationally intensive.
Recently new techniques to compute the power spectra have become available. Figure 3 shows a comparison of predictions for global strings, monopoles and textures made by Pen, Seljak & Turok [67] with experimental results from present CMB experiments. The defect model predictions were each normalised to COBE at ℓ = 10. In practice, one is allowed to slide each of the curves up and down so as to best match a range of criteria, rather than (as here) matching a single large-scale C_ℓ value. However, even with this freedom it is easily seen that the models do not fit the data very well. There appears to be evidence for a much larger acoustic peak in the data than predicted by defect theories.
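The "sliding" freedom mentioned above amounts to fitting a single overall amplitude. For Gaussian band-power errors the chi-square-minimising amplitude has a closed form; a minimal sketch (all numbers below are made up for illustration):

```python
# Best-fit overall amplitude A for scaling a model curve onto data:
# minimising chi^2 = sum((d - A*m)^2 / sigma^2) gives
#     A = sum(d*m/sigma^2) / sum(m^2/sigma^2).

def best_amplitude(data, model, sigma):
    """Closed-form chi-square-minimising scale factor."""
    num = sum(d * m / s**2 for d, m, s in zip(data, model, sigma))
    den = sum(m * m / s**2 for m, s in zip(model, sigma))
    return num / den

data  = [800.0, 2000.0, 4500.0]   # observed band powers (uK^2), illustrative
model = [1000.0, 1800.0, 2500.0]  # COBE-normalised model prediction
sigma = [300.0, 500.0, 900.0]     # 1-sigma errors

A = best_amplitude(data, model, sigma)
print(round(A, 2))  # A > 1 means the data prefer a higher curve than the model
```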
The same models can be used to predict the matter power spectrum; this is shown in Figure 4. Again, the models have been normalised to COBE, but a change in the normalisation does not appear to be sufficient to produce a good agreement between the predictions and experimental results. Therefore, at present, the models considered by Pen, Seljak & Turok appear to be ruled out by current experimental data, and the case for a topological defect origin of CMB fluctuations is less strong.
However, recent work [7], which uses an analytic expression for the evolution of local strings from the radiation- to matter-dominated era as well as including a non-zero cosmological constant, shows that it is possible to produce results that contain a broad peak in the angular power spectrum of the CMB (Figure 5). However, this peak occurs at ℓ ∼ 400–600, which is in disagreement with the CAT and OVRO detections (Table 2) at these scales. The numerical method for calculating the matter and CMB power spectra used in this work differs from that of Pen, Seljak and Turok, and it is unclear at present how to compare the two sets of predictions.

Observing modes and foregrounds
Having given a description of the theoretical context, we now consider the experimental considerations that go into making measurements of the CMB. These include both the observing modes that are used to gain the sensitivity required and the various contaminants that emit at the frequencies of interest to CMB astronomy.
There are a number of foregrounds that need to be well understood, or at the very least minimised, when making CMB measurements. One of the main foregrounds seen in CMB data originates from extra-galactic sources. These are usually unresolved point sources such as quasars and radio-loud galaxies. The best analysis of the contribution by unresolved point sources to CMB experiments has been produced by Franceschini et al. [31] and de Zotti et al. [26]. They used numerous surveys, including VLA and IRAS data, to put limits on the contribution to single-beam CMB experiments by a random distribution of point sources. This analysis assumes that there are no unknown sources that only emit radiation in a frequency range between ∼ 30 GHz and ∼ 200 GHz. This range of frequencies has not been properly surveyed, and this therefore remains a cause for concern in the CMB community. The problem with using past surveys to predict the effects of point sources is that a large fraction of these sources are highly variable. Therefore, by far the best way to subtract the sources from the data is to make simultaneous observations with a higher resolution telescope at the same observing frequency (the Ryle Telescope at Cambridge is used by CAT for this purpose).
At the higher frequency range of the microwave background experiments, dust emission starts to become dominant. This is the hardest galactic foreground to estimate, as it depends on the properties of the individual dust grains and their environment. At lower frequencies Galactic synchrotron and free-free emission become important. Each of the Galactic foregrounds are extremely difficult to eliminate, and so far the best method has been to observe in regions where their emissions are expected to be small and at frequencies where they are less dominant. With the multi-frequency observations it is possible to subtract the effect of these foregrounds if their spectral dependencies are known (see Section 6.1).
The final foreground that is seen by experiments looking at the microwave background is closer to Earth than those already discussed: the atmosphere. Fluctuations in the atmosphere are hard to distinguish from actual extra-terrestrial fluctuations when limited frequency coverage is available. There are three ways to overcome this problem. The first is to eliminate the atmospheric effect completely. Space missions are the best way to do this, but their main problem is cost. High altitude sites (either at the top of a mountain or in a balloon) can reduce the atmospheric contribution, as can moving the experiment to a region with a stable atmosphere. A cheaper alternative to physically moving the experiment is to observe for a long time. As the atmospheric effects occur on a short time-scale compared with the life-time of the experiment (typically of order a few months for each data set taken with ground-based CMB experiments), and the extra-terrestrial fluctuations are essentially constant, integrating over a long time increases the contribution from the extra-terrestrial fluctuations with respect to the atmospheric effects. Stacking together n data points (taken from n separate observations) will reduce the variable atmospheric signal with respect to the constant galactic or cosmological one by a factor of √n (provided that the observations are independent with respect to the atmospheric signal and any long-term atmospheric effects that alter the gain have been removed). The third way, which can be combined with the first two, is to design the experiment to be as insensitive as possible to atmospheric variations.
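The √n argument above is easy to verify numerically; a minimal simulation with made-up signal and noise levels:

```python
# Numerical check of the sqrt(n) argument: averaging n independent
# observations of a constant sky signal plus zero-mean atmospheric noise
# suppresses the noise rms by roughly sqrt(n).
import random
import statistics

random.seed(1)
SKY = 50.0        # constant "cosmological" signal (arbitrary units)
NOISE_RMS = 20.0  # per-observation atmospheric rms (arbitrary units)

def stacked_residual_rms(n_obs, n_trials=2000):
    """rms error of the n-observation average, estimated over many trials."""
    residuals = []
    for _ in range(n_trials):
        avg = sum(SKY + random.gauss(0.0, NOISE_RMS) for _ in range(n_obs)) / n_obs
        residuals.append(avg - SKY)
    return statistics.pstdev(residuals)

r1, r16 = stacked_residual_rms(1), stacked_residual_rms(16)
print(round(r1 / r16, 1))  # close to sqrt(16) = 4
```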
The first obvious design consideration is to make the telescope sensitive to frequencies at which the atmospheric contribution is a minimum. By avoiding various bands in the spectrum where much emission is expected (for example water lines), the atmosphere becomes less of a problem. Above a frequency of about 100 GHz the atmospheric effect is too large to allow useful observations from a ground-based telescope. Taken with the increasing foreground contamination from the Galaxy at low frequencies (it is expected that the Galaxy dominates over the CMB signal at frequencies below 10 GHz), this reduces the observable frequencies for ground-based CMB experiments to between 10 and 100 GHz. This narrow observable range results in the need for balloon or satellite experiments, so that larger frequency coverage can be obtained to check the consistency of the results and the contamination from the various foregrounds that are expected. Figure 6 shows the expected level of the various foregrounds for a typical CMB experiment. Figure 7 shows a typical region of the sky over a range of frequencies covered by CMB experiments (no atmospheric emission has been added).
The largest atmospheric variations occur mainly on relatively long time scales, compared to the integration time of telescopes (typically of order a few minutes), as the variations are produced by pockets of air moving over the telescope. If an experiment could be insensitive to these long term variations, then it should effectively see through the atmosphere. An interferometer (e.g. CAT) extracts a small range of Fourier coefficients from the sky, reducing any incoherent signal (short time scale variations) or any signal that is coherent on large angular scales (long time scale variations), and so should see through the atmosphere very well. Similarly, an experiment that switches between two positions on the sky relatively quickly will also reduce the long term atmospheric variations. This technique is called beamswitching (e.g. Tenerife, MSAM). A variation on the beamswitching technique is to scan a single beam backwards and forwards across the sky (e.g. Saskatoon, Python).
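The suppression achieved by beamswitching can be illustrated with a double-difference combination, which cancels any contribution that is constant or varies linearly across the beam positions (a toy sketch with invented numbers, not a model of any particular experiment):

```python
# Sketch of why double-difference beamswitching suppresses the atmosphere:
# the combination T_mid - (T_left + T_right)/2 cancels any signal that is
# constant or varies linearly across the three beam positions.

def double_difference(t_left, t_mid, t_right):
    """Double-difference (three-beam) switched signal."""
    return t_mid - 0.5 * (t_left + t_right)

# A linear atmospheric gradient across the beams cancels exactly:
atmos = [10.0, 12.0, 14.0]          # mK, illustrative linear ramp
print(double_difference(*atmos))    # 0.0

# ...while a compact CMB feature in the central beam survives:
sky = [0.0, 0.05, 0.0]              # mK, illustrative
obs = [a + s for a, s in zip(atmos, sky)]
print(round(double_difference(*obs), 3))  # 0.05
```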

The SZ effect
In addition to the primary CMB anisotropies discussed so far, we also expect secondary anisotropies due to the interaction of CMB photons with clusters along the line of sight. The best known (and observed) is the Sunyaev-Zel'dovich (SZ) effect, which is due to the upscattering of CMB photons by electrons in the hot gas (T_e ∼ 10^7–10^8 K) at the centre of clusters. Other secondary anisotropies include, for example, the Rees-Sciama effect, which is the result of a cluster collapsing during its encounter with a CMB photon.
In this review, we shall consider only the SZ effect, since this is the only secondary anisotropy to have been observed to date. The SZ effect is in fact made up of two separate effects: one is due to the bulk velocity of the cluster, and the other due to the thermal velocities of the electrons in the cluster gas. These are called the kinematic and thermal SZ effects respectively. The kinematic effect measures the cluster peculiar velocity, whereas the thermal effect can be used, in conjunction with images and spectra of its X-ray emission, to study the cluster gas. In particular, for a dynamically relaxed cluster, we can use the thermal SZ effect and X-ray data to estimate the physical size of the cluster, and hence its distance. This, in turn, yields an estimate of the Hubble constant H_0.
If the cluster has a peculiar velocity v_p along the observer's line of sight, then the temperature of a CMB photon is Doppler-shifted by an amount

ΔT/T_0 = −(v_p/c) ∫ n_e σ_T dl,

where T_0 is the temperature of the CMB, n_e is the electron number density, σ_T is the Thomson scattering cross-section and the integral is taken along the line of sight. In terms of the Rayleigh-Jeans brightness temperature this becomes

ΔT_RJ/T_0 = −(v_p/c) { x^2 e^x / (e^x − 1)^2 } ∫ n_e σ_T dl,   (6)

or as an intensity

ΔI/I_0 = −(v_p/c) { x^4 e^x / (e^x − 1)^2 } ∫ n_e σ_T dl,   (7)

where we have set x = hν/kT_0 and I_0 = 2(kT_0)^3/(hc)^2. The effect due to thermal motions of the electrons is second-order in the electron velocity, and does not preserve the blackbody shape of the CMB spectrum. For the thermal SZ effect the change in the Rayleigh-Jeans brightness temperature is given by

ΔT_RJ/T_0 = { x^2 e^x / (e^x − 1)^2 [x coth(x/2) − 4] } ∫ n_e σ_T (kT_e/m_e c^2) dl,   (8)

where T_e is the electron temperature and m_e the electron mass. In terms of intensity this becomes

ΔI/I_0 = { x^4 e^x / (e^x − 1)^2 [x coth(x/2) − 4] } ∫ n_e σ_T (kT_e/m_e c^2) dl.   (9)

It is usual to describe the magnitude of the thermal effect in terms of the y parameter, which is given by

y = ∫ n_e σ_T (kT_e/m_e c^2) dl.

The frequency dependences of the kinematic and thermal effects (i.e. those functions in curly brackets in Equations 6–9) are shown in Figure 8.
Note that at ν ≈ 217 GHz the maximum change in intensity due to the kinematic effect coincides with the null of the thermal effect. This, in principle, allows one to separate the two effects. The magnitude of the thermal effect for a hot, dense cluster is (ΔT_RJ)_thermal ≈ 1 mK, and for reasonable cluster velocities the kinematic effect is an order of magnitude smaller.
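These spectral shapes, and the location of the thermal null, can be checked numerically. The sketch below assumes the standard forms of the kinematic and thermal frequency factors, x^4 e^x/(e^x−1)^2 and that same factor times (x coth(x/2) − 4), with T_0 = 2.725 K:

```python
# Kinematic and thermal SZ spectral shapes in x = h*nu / (k*T0).
# The thermal effect changes sign where x*coth(x/2) = 4, i.e. x ~ 3.83,
# which is nu ~ 217 GHz for T0 = 2.725 K.
import math

def g_kinematic(x):
    """x^4 e^x / (e^x - 1)^2 : spectral shape of the kinematic effect."""
    return x**4 * math.exp(x) / (math.exp(x) - 1.0) ** 2

def g_thermal(x):
    """Kinematic shape times (x coth(x/2) - 4): thermal-effect spectrum."""
    coth = 1.0 / math.tanh(x / 2.0)
    return g_kinematic(x) * (x * coth - 4.0)

T0 = 2.725          # assumed CMB temperature in K
H_OVER_K = 4.799e-11  # Planck constant over Boltzmann constant, in K*s

def x_of_nu(nu_ghz):
    """Dimensionless frequency x for nu in GHz."""
    return H_OVER_K * nu_ghz * 1e9 / T0

print(round(g_thermal(x_of_nu(15.0)), 3))   # low frequency: a decrement (< 0)
print(round(g_thermal(x_of_nu(217.0)), 3))  # near the thermal null (~ 0)
```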
Observations of the SZ effect have been made at Cambridge using the Ryle Telescope, an 8-dish interferometer operating at 15 GHz [80]; at Caltech using the Owens Valley 5.5 m telescope [61], the Owens Valley 40 m telescope [41] and the Owens Valley Millimeter Array (OVMMI) [14, 15]; at NASA using the MSAM balloon experiment [84]; at the Caltech Submillimeter Observatory using a purpose-built instrument called the Sunyaev-Zel'dovich Infrared Experiment (SuZIE) [39]; and by various other groups. The magnitude of the observed SZ effect in these clusters can be combined with X-ray data from ROSAT and ASCA to place limits on H_0. This has been reviewed in Lasenby & Jones [52].
Another important feature of the SZ effect is that the decrement does not change with redshift. Therefore, it should be possible to detect clusters out to very high redshift. To test this, the Ryle Telescope has been making observations towards quasar pairs. Figure 9 shows the Ryle Telescope map of the sky towards the quasar pair PC1643+4631 (Jones et al. 1997 [46]). These two quasars are at a redshift of ∼ 3.8 and are ∼ 3 arcmin apart on the sky. They are a strong candidate for a gravitationally lensed object due to the similarity of their spectra. As there is no X-ray detection with ROSAT, the cluster responsible for the lensing must be at a redshift greater than 2.5, and from modelling of the gravitational lensing, or from fitting a density profile to the SZ effect, it must have a total mass of about 2 × 10^15 M_⊙.
With observations of distant clusters it is possible to constrain the mass density of the Universe. Using the Press-Schechter formalism, the number of clusters can be predicted in terms of SZ flux counts. Figure 10 shows the results from Bartlett et al. [6]. From this figure it is seen that, if the decrements are really due to the SZ effect and there was no bias in selecting the fields (e.g. the field being chosen because of the magnification of the quasar pair images), then the Ω = 1 model of the Universe is ruled out; an open model is required to be consistent with the data. Confirmation of the detections is needed, and until these follow-up observations have been made it is impossible to say how robust these findings are.

Relativistic corrections to the SZ effect
The above treatment of the SZ effect is purely non-relativistic. In clusters where k_B T_e > 10 keV, relativistic effects become important. These can be included either by extending the equations to include relativistic terms ([18], [88], [89]) or by using multiple-scattering descriptions of the Comptonization process ([100], [29], [91], [54], [76]). Both approaches give consistent results. To first order in the temperature ratio θ_e = kT_e/m_e c^2, the correction to the thermal SZ effect takes the form

ΔT/T_0 = y { [x coth(x/2) − 4] + θ_e f(x) },   (10)

where f(x) is the frequency-dependent correction function given in [18], and in the Rayleigh-Jeans limit (small x) we find

ΔT_RJ/T_0 ≈ −2y (1 − 17θ_e/10),   (11)

and so it is seen that the inclusion of the relativistic treatment tends to lead to a small decrease in the SZ effect. The Hubble constant is inferred from combined Sunyaev-Zel'dovich and X-ray data by a relation of the form [49]

H_0 ∝ (ΔT_0)^2,   (12)

where ΔT_0 is the central decrement predicted for the given cluster parameters and the constant of proportionality is fixed by the X-ray observations. The reduction in ΔT in the Rayleigh-Jeans region, for given cluster parameters, leads to a decrease in the constant of proportionality in Equation 12, and hence a small reduction in the determined values of H_0. For example, it is found that for a cluster temperature of 8 keV the reduction in H_0 due to relativistic effects is about 5%.
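The size of this correction is easy to estimate. The sketch below assumes the first-order Rayleigh-Jeans factor (1 − 17θ_e/10) and an H_0 ∝ ΔT^2 scaling; both are assumptions stated here for illustration, with the electron rest energy taken as 511 keV:

```python
# Size of the first-order relativistic correction to the RJ thermal SZ
# decrement, Delta T_RJ / T0 ~ -2y * (1 - 17*theta_e/10), with
# theta_e = k*T_e / (m_e c^2).

M_E_C2_KEV = 511.0  # electron rest mass energy in keV

def rj_reduction_percent(t_e_kev):
    """Percentage reduction of the RJ thermal SZ decrement."""
    theta_e = t_e_kev / M_E_C2_KEV
    return 100.0 * 1.7 * theta_e

def h0_reduction_percent(t_e_kev):
    """Implied percentage reduction in H0, assuming H0 scales as (Delta T)^2."""
    f = 1.0 - 1.7 * t_e_kev / M_E_C2_KEV
    return 100.0 * (1.0 - f**2)

print(round(rj_reduction_percent(8.0), 1))  # a few percent in Delta T at 8 keV
print(round(h0_reduction_percent(8.0), 1))  # roughly twice that in H0
```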

Current and future CMB experiments
Since the discovery of a statistical anisotropy four years ago by the COBE satellite, there has been a great increase in the number of ground- and balloon-based measurements of CMB anisotropy. Many groups around the world have been engaged in projects which aim to detect individual features in the CMB, and to establish their statistical properties. Table 1 lists all of the major experiments in the field and includes links to their home pages [94]. The focus of these has tended to be towards smaller angular scales than measured by COBE, and this brings with it the exciting prospect of being able to detect structure in the primordial power spectrum. A summary of the properties and results of several experiments, covering the period through to 1996, is given in Lasenby & Hobson [50]. Since that time, i.e. in the last year, there have been significant developments for a number of experiments, and we concentrate on these new results here. Table 2 shows in convenient form the main parameters of several recent experiments.

The COBE satellite
The Cosmic Background Explorer satellite (COBE) was launched on 18th November 1989. Primordial anisotropy measurements are made using the DMR experiment, which consists of six differential microwave radiometers, two at each of 31.5 GHz, 53.0 GHz and 90.0 GHz. The first-year COBE observations provided convincing statistical evidence for the existence of CMB fluctuations. It was not, however, possible to see individual CMB features, on the scale of the beam size, in the DMR maps because, even combining all of the maps together, the noise level per beam area was ∼ 45 µK and the signal to noise remained less than one.
The results of the analysis of all four years of DMR data have now become available. A convenient summary of all the results is given in Bennett et al. (1996) [9]. On a statistical level, the results can be used to constrain the normalisation of a power law primordial spectrum. For a given slope n, the normalisation is usually expressed via the implied amplitude of the quadrupole component of the power spectrum, C_2, as Q_rms−ps ≡ T_0 (5 C_2 / 4π)^(1/2). The four-year data yield n = 1.2 ± 0.3 and Q_rms−ps = 15.3 +3.8/−2.8 µK [9]. This restriction on the value of n is of course of great interest in the context of the inflationary prediction that n = 1. It is also of interest that inflation predicts Gaussian fluctuations, and while this is much harder to test for than finding the amplitude and slope of the spectrum, the data are also consistent with this prediction. Specifically, Bennett et al. state 'statistical tests prefer Gaussian over other toy statistical models by a factor of ∼ 5'. With the accumulation of four years of data, the individual anisotropy features within the maps on the scale of the beam size are now becoming statistically significant. Figure 11 shows the all-sky maps at each frequency, taken from Bennett et al. [9]. Some of the features in these maps away from the Galactic plane are expected to be real CMB fluctuations, since the signal to noise in these regions is now about 2 sigma per 10 degree sky patch. Indeed, features which repeat well between the different frequencies are now clearly visible.

The Tenerife experiments

A description of these experiments was given in Lasenby & Hancock [48]. Here we briefly summarise the relevant features. The Tenerife experiments constitute a suite of three instruments, working at 10, 15 and 33 GHz, designed and built at Jodrell Bank (Davies et al. 1992 [25], 1996 [24]) and operated at the Teide Observatory, Tenerife. Low-frequency radio surveys [36] and that of Reich and Reich (1420 MHz) [75] indicate a minimum in Galactic foreground emission in the regions observed.
Analysis of the data at declination +40.0° has been made and reported in Hancock et al. (1994) [33]. Results from the Dec. +35° data scans at 10 and 15 GHz have recently been reported in Gutierrez et al. [32] and are consistent with the Dec. +40° data. A level of Q_rms−ps = 20 ± 6 µK was found in a high Galactic latitude region at Dec. +35° at 15 GHz. The COBE data were used in Bunn et al. (1996) to make a prediction for the Tenerife data. The comparison between this prediction and the Tenerife 15 GHz data is shown in Figure 12, and there is very good agreement with the main features in the COBE scan. This constitutes evidence for features that are constant in amplitude over the frequency range 15 GHz to 90 GHz, which are therefore very strong candidates for real CMB anisotropy.

The Tenerife programme is continuing, with the objective of mapping some 4000 square degrees of sky at 10, 15 and 33 GHz. In conjunction with the COBE four-year data set, these experiments will continue to offer a useful source of large-scale CMB anisotropy measurements and hence to directly probe cosmological theories. In terms of power spectrum results, a revised estimate of the fluctuation amplitude for values of ℓ near ∼ 20 has been constructed from the Dec. +40° data, making an allowance for an atmospheric contribution (Hancock et al. 1997 [34]) that was not subtracted in Hancock et al. [33]. This is used below in comparison with theoretical curves.
Preliminary analysis of the full 15 GHz data set (Dec. +30° to +45°) has been made and will be presented in a future paper. For an assumed value of n = 1, a full two-dimensional likelihood analysis gives Q_rms−ps = 22 ± 4 µK.
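Likelihood analyses of this kind maximise a multivariate Gaussian likelihood over an amplitude parameter such as Q_rms−ps. The following minimal sketch illustrates the idea on an invented one-dimensional scan with a toy signal covariance; the pixel count, correlation length and noise level are illustrative assumptions, not the actual Tenerife pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

npix = 50
noise_var = 25.0   # illustrative noise variance per pixel (muK^2)
x = np.arange(npix)

# Fixed correlation template; the signal covariance is Q^2 times this.
template = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 3.0) ** 2)

def log_likelihood(Q, d):
    """ln L(Q) = -[ln det C + d^T C^{-1} d]/2 with C = Q^2 T + N."""
    C = Q**2 * template + noise_var * np.eye(npix)
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + d @ np.linalg.solve(C, d))

# Simulate a scan with a 'true' amplitude of 20 muK.
C_true = 20.0**2 * template + noise_var * np.eye(npix)
d = rng.multivariate_normal(np.zeros(npix), C_true)

# Evaluate the likelihood on a grid of trial amplitudes.
Qs = np.linspace(5.0, 40.0, 71)
lnL = np.array([log_likelihood(Q, d) for Q in Qs])
Q_ml = Qs[np.argmax(lnL)]
print(f"maximum-likelihood amplitude: Q = {Q_ml:.1f} muK")
```

A quoted error such as ±4 µK would then come from the width of this likelihood curve around its maximum.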

Saskatoon
This experiment is a ground-based telescope located in Saskatoon, Canada. A cooled HEMT receiver with six channels is used to span the frequency range 26 to 46 GHz. The chopping strategy is quite complex, and can be used to synthesise 'window functions' appropriate to a range of angular scales. The beam sizes used range from ∼ 1.5° at the lowest frequency to ∼ 0.5° at the highest. The analysis of observations of a 24 hour RA strip at declination +85.1° is described in Wollack et al. (1993) [99] and Netterfield et al. (1995) [64], indicating a detection of primordial anisotropy. More recently, in Netterfield et al. (1997) [63], exciting results have been presented which show not just a detection but, for the first time for a switched-beam instrument, evidence for the form of the power spectrum itself on the angular scales probed. The final map after 3 years of data from this experiment is shown in Figure 13, and the power spectrum from this map will be used below in comparison with theoretical predictions. We note here that there is currently an overall scaling uncertainty of ±14% in the Saskatoon results, due to calibration uncertainties. Recent analysis of the Saskatoon data (Knox, in prep.) appears to show that the previous calibration is an underestimate of the true level of the Saskatoon data. This would make the Doppler peak even higher and lower the value of H_0 found below.
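The synthesised window functions mentioned above can be illustrated for the simplest case of a two-beam difference experiment: for a Gaussian beam B_ℓ and a chop through angle α, the window is W_ℓ = 2 B_ℓ² [1 − P_ℓ(cos α)]. This is the textbook single-difference window, not the actual (more complex) Saskatoon synthesis, and the beam and chop values below are purely illustrative:

```python
import numpy as np
from scipy.special import eval_legendre

def switched_beam_window(ells, fwhm_deg, chop_deg):
    """W_l = 2 B_l^2 [1 - P_l(cos alpha)] for a two-beam difference,
    with a Gaussian beam B_l = exp(-l(l+1) sigma^2 / 2)."""
    sigma = np.radians(fwhm_deg) / np.sqrt(8.0 * np.log(2.0))
    B2 = np.exp(-ells * (ells + 1.0) * sigma**2)  # B_l squared
    return 2.0 * B2 * (1.0 - eval_legendre(ells, np.cos(np.radians(chop_deg))))

# Illustrative numbers: a 1 degree FWHM beam chopped through 2 degrees.
ells = np.arange(2, 500)
W = switched_beam_window(ells, 1.0, 2.0)
l_peak = int(ells[np.argmax(W)])
print("window peaks at l =", l_peak)
```

Varying the chop angle and beam size moves the peak of W_ℓ, which is how a range of angular scales can be probed with one instrument.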

The Owens Valley Radio Observatory
The Owens Valley Radio Observatory (OVRO) operates a ground-based, 40 m single-dish telescope. A HEMT receiver operating at two frequencies (14.5 GHz and 32 GHz) has recently been added to improve the sensitivity of the telescope. The resolution is 0.12°. The original experiment put very strong constraints on galaxy formation scenarios [62, 73]. Recent data on 36 pointed observations at Dec. +88° give δT = 56 +8.6/−6.5 µK at ℓ ∼ 589 (Leitch, private communication).
The Cosmic Anisotropy Telescope

The Cosmic Anisotropy Telescope (CAT) is a three-element, ground-based interferometer of novel design [77]. Horn-reflector antennas mounted on a rotating turntable track the sky, providing maps at four (non-simultaneous) frequencies of 13.5, 14.5, 15.5 and 16.5 GHz. The interferometric technique ensures high sensitivity to CMB fluctuations on scales of 0.5° (baselines ∼ 1 m), whilst providing an excellent level of rejection of atmospheric fluctuations. Despite being located at a relatively poor observing site in Cambridge, the data are receiver noise limited for about 60% of the time, proving the effectiveness of the interferometer strategy. The first observations were concentrated on a blank field (called the CAT1 field), centred on RA 08h 20m, Dec. +68° 59′, selected from the Green Bank 5 GHz surveys under the constraints of minimal discrete source contamination and low Galactic foreground. The data from the CAT1 field were presented in O'Sullivan et al. (1995) [65] and Scott et al. (1996) [82].

Recently, observations of a new blank field (called the CAT2 field), centred on RA 17h 00m, Dec. +64° 30′, have been taken. Accurate information on the point source contribution to the CAT2 field maps, which contain sources at much lower levels, has been obtained by surveying the fields with the Ryle Telescope at Cambridge, and the multi-frequency nature of the CAT data can be used to separate the remaining CMB and Galactic components. Some preliminary results from CAT2 have been presented in Baker (1997) [4], and the 16.5 GHz map is shown in Figure 14. Clear structure is visible in the central region of this map, and is thought to be actual structure, on scales of about 1/4°, in the surface of last scattering. When interpreting this map, however, it should be remembered that for an interferometer with just three horns, the 'synthesised' beam of the telescope has large sidelobes, and it is these sidelobes that cause the regular features seen in the map. In the full analysis of the data, these sidelobes must be carefully taken into account.
For an interferometer, 'visibility space' correlates directly with the space of spherical harmonic coefficients discussed earlier, and the data may be used to place constraints directly on the CMB power spectrum in two independent bins in ℓ. These constraints, along with those from the other experiments, are shown in Figure 15.
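The correspondence between visibilities and multipoles can be made concrete: a baseline of physical length b observed at wavelength λ samples the Fourier mode u = b/λ, corresponding roughly to multipole ℓ ≈ 2πu. A small sketch using the CAT-like numbers quoted above (∼1 m baselines at 15 GHz):

```python
import numpy as np

C_LIGHT = 299792458.0  # speed of light, m/s

def multipole_from_baseline(baseline_m, freq_ghz):
    """Approximate multipole probed by a baseline: l ~ 2*pi*u, u = b / lambda."""
    wavelength_m = C_LIGHT / (freq_ghz * 1e9)
    return 2.0 * np.pi * baseline_m / wavelength_m

# CAT-like numbers: ~1 m baselines observed at 15 GHz.
l_cat = multipole_from_baseline(1.0, 15.0)
theta_deg = 180.0 / l_cat  # rough conversion of multipole to angular scale
print(f"l ~ {l_cat:.0f}, corresponding to ~{theta_deg:.2f} degrees")
```

This reproduces the ∼0.5° sensitivity scale quoted for CAT, and shows why longer baselines or higher frequencies push an interferometer to finer angular scales.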

Future experiments
The next few years should bring great improvements in the quality of CMB data available, both as regards accuracy and range of coverage of angular scales. For balloon experiments, these improvements will come through the use of long-duration balloon flights, launched e.g. in Antarctica and circling the pole, and from the use of arrays of detectors covering several frequency bands. Specific projects already in the pipeline using these techniques are TOPHAT, BEAST, ACE and Boomerang. On the ground, interferometers will play an increasingly important rôle, with several new instruments now under development: the Very Small Array (VSA), the Degree Angular Scale Interferometer (DASI), the Cosmic Background Imager (CBI) and the Millimeter Anisotropy eXperiment Imaging Array (MAXIMA). We will concentrate here briefly on a new interferometer array to be built by Cambridge and Jodrell Bank in the U.K. and to be sited in Tenerife (the designs of the three new interferometer arrays are similar), and then discuss the two new satellite projects recently selected.

The Very Small Array
Although the CAT has already provided maps of CMB anisotropy on scales ∼ 0.25°, these are relatively poor as images due to the limited number of baseline lengths and pixels available. In fact, the CAT is a prototype for a considerably more advanced instrument, the Very Small Array (VSA). The objectives of the VSA are to obtain detailed maps of the CMB with a sensitivity approaching 5 µK, covering a range of angular scales from 10 arcmin to 2°. An artist's impression of the VSA is shown in Figure 16. Using two interchangeable T-shaped configurations of 10-15 horn elements, simulations have shown that it is possible to obtain maps of suitable sensitivity over the desired range of angular scales. The planned instrument would be sited at the Teide Observatory, Tenerife, at an altitude of 2400 m, and would make observations between 28 and 38 GHz, to enable the Galactic component to be estimated and removed. The good accuracy available over a scale range that is well-matched to the positions of the first and secondary Doppler peaks in the power spectrum should enable measurements of Ω and H_0 to be made to an accuracy of better than 10% after 12 months of observations. We believe that this will be refined somewhat by better array configuration design and the use of proper models and secondary peak information (all work in progress). In addition, simulations have shown that the proposed observing strategy will be quite sensitive to the non-Gaussian features expected on these angular scales if (e.g.) textures or monopoles are the seed perturbations for galaxy formation (Maisinger, Hobson, Lasenby & Turok [55]). The instrument is currently under construction at Cambridge and Jodrell Bank in the U.K., and it is hoped that it will be operational by the year 2000.

Future satellite experiments
Two new satellite experiments to study the CMB have recently been selected as future missions. These are MAP, the Microwave Anisotropy Probe, which has been selected by NASA as a Midex mission for launch in August 2000, and the Planck Surveyor, which has been selected by ESA as an M3 mission and will hopefully be launched soon after 2005. An artist's impression of the MAP satellite, which has five frequency channels from 30 GHz to 100 GHz, is shown in Figure 17. An artist's impression of the Planck Surveyor satellite, which combines both HEMT and bolometer technology in 10 frequency channels covering the range 30 GHz to 850 GHz, is shown in Figure 18. A crucial feature of a satellite experiment is the potential all-sky coverage that it affords, and the ability to map features on large angular scales (> 10°).

Maximum Entropy analysis
With such a large increase in the amount of data available from CMB experiments, it is becoming increasingly important to improve the analysis techniques used. With multifrequency observations of the same patch of sky to high precision (the satellites will cover the full sky), it should be possible to extract information on the foregrounds and leave a 'clean' map of the CMB. One technique that has been shown to be robust in the analysis of a number of different experiments is Maximum Entropy ([38], [44] and [55]). We start with Bayes' theorem, which states that, given a hypothesis H and some data D,

Pr(H|D) = Pr(D|H) Pr(H) / Pr(D),

where Pr(D|H) is the likelihood and Pr(H) is the prior probability. If the instrumental noise on each frequency channel is Gaussian-distributed, then the probability distribution of the noise is a multivariate Gaussian. Assuming the expectation value of the noise to be zero at each observing frequency, the likelihood is therefore given by

Pr(D|H) ∝ exp(−χ²(H)/2),

where the χ² misfit statistic has been introduced. We now have to decide the form of the prior probability. Let us consider a discretised image h_j consisting of L cells, so that j = 1, ..., L; we may consider the h_j as the components of an image vector H. If we base the derivation of the prior on purely information theoretic considerations (subset independence, coordinate invariance and system independence), we are naturally led to the Maximum Entropy Method (MEM). It may be shown [85] that the prior probability takes the form

Pr(H) ∝ exp(α S(H, M)),

where S(H, M) is the cross entropy, the dimensional constant α depends on the scaling of the problem and may be considered as a regularising parameter, and M is a model vector to which H defaults in the absence of any data. In standard applications of the maximum entropy method, the image H is taken to be a positive additive distribution (PAD). Nevertheless, the MEM approach can be extended to images that take both positive and negative values by considering them to be the difference of two PADs, so that

H = U − V,

where U and V are the positive and negative parts of H respectively.
In this case, the cross entropy is given by

S(H, M_u, M_v) = Σ_j { ψ_j − m_uj − m_vj − h_j ln[(ψ_j + h_j)/(2 m_uj)] },

where ψ_j = [h_j² + 4 m_uj m_vj]^(1/2), and M_u and M_v are separate models for each PAD. The global maximum of the cross entropy occurs at H = M_u − M_v. The most probable image H is then just the result of finding the maximum probability or, equivalently, the minimum of χ²/2 − αS. This can be done using any standard minimisation routine.
It can be shown [38] that the Lagrange multiplier α is completely defined in a Bayesian way, and any prior correlation information can also be incorporated into the analysis. Moreover, the assignment of errors is straightforward in the Fourier domain, where all the pixels in the discretised image will be independent.
Hobson et al. [38] simulated data taken by the Planck Surveyor satellite and used MEM to reconstruct the underlying CMB and foregrounds. They used six input maps (the CMB, thermal and kinetic SZ effects, dust emission, free-free emission and synchrotron emission) to make up the data, and then added Gaussian noise to each frequency channel. Using MEM with the Bayesian value for α and giving the algorithm the average power spectra of each channel, it was found that features in all six maps were recovered. Without any prior power spectrum information, only the kinetic SZ effect was not recovered; all the others were recovered to some degree (the CMB and dust were almost indistinguishable from the input maps, with residual errors of 6 µK and 2 µK per pixel respectively). Figure 20 shows the results from MEM compared with the input maps for the case with an assumed average power spectrum. It is easily seen that MEM reconstructs both the Gaussian CMB and the non-Gaussian thermal SZ effect very well.

Wiener filtering
In the absence of non-Gaussian sources, it is possible to simplify the Maximum Entropy method by using the quadratic approximation to the entropy (Hobson et al. [38]). This has the same effect as choosing a Gaussian prior, so that Equation 16 becomes

Pr(H) ∝ exp(−H^T C^(−1) H / 2),

where C = ⟨H H^T⟩ is the covariance matrix of the image vector H. The solution to the analysis with this Bayesian prior is called Wiener filtering (see, for example, [12] and [92]) and has been applied to many data sets in the past when non-Gaussianity could be ignored. The data D can be written as

D = B H + ε,

where B represents the convolution of the image vector H with the beam response of the instrument and the frequency dependence of H, and ε is the noise vector. In this case, the best reconstructed image vector, Ĥ, is given by

Ĥ = W D,

where the Wiener filter, W, is given by

W = C B^T (B C B^T + N)^(−1),

and N is the noise covariance matrix, given by N = ⟨ε ε^T⟩. It is important to note that not only the CMB signal but also all the foregrounds are implicitly assumed to be Gaussian in this method ([38], [45] and [37]).

Figure 20: The left hand side shows the input maps used in the Planck simulations for a CDM simulation of the CMB and the thermal SZ effect. The right hand side shows the reconstructions obtained by MEM. It is easily seen that MEM does a very good job of reconstructing these two components. For comparison, the grey scales on the input maps are the same as on the reconstructed maps.
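A minimal numerical sketch of a Wiener filter of the form Ĥ = C B^T (B C B^T + N)^(−1) D, on a toy one-dimensional field. The signal covariance, beam matrix and noise level are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

npix = 128
x = np.arange(npix)

# Assumed signal covariance C = <H H^T>: a stationary Gaussian correlation.
C = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 5.0) ** 2) + 1e-8 * np.eye(npix)

# Beam/response matrix B: row-normalised Gaussian smoothing.
B = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 2.0) ** 2)
B /= B.sum(axis=1, keepdims=True)

sigma = 0.3
N = sigma**2 * np.eye(npix)   # noise covariance <eps eps^T>

# Simulate data D = B H + eps.
H = rng.multivariate_normal(np.zeros(npix), C)
D = B @ H + sigma * rng.standard_normal(npix)

# Wiener filter: H_hat = W D, with W = C B^T (B C B^T + N)^{-1}.
W = C @ B.T @ np.linalg.inv(B @ C @ B.T + N)
H_hat = W @ D

print("rms error of raw data:     ", np.std(H - D))
print("rms error of Wiener filter:", np.std(H - H_hat))
```

Because W is the minimum-variance linear estimator under the Gaussian assumptions, the filtered reconstruction should track the true field more closely than the raw data.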

The CMB power spectrum versus experimental points
It will have become apparent in the preceding sections that the CMB data are approaching the point where meaningful comparison between theory and observation, as regards the shape and normalisation of the power spectrum, can be made. This is particularly the case with the new availability of the recent CAT and Saskatoon results, where the combination of scales they provide is exactly right to begin tracing out the shape of the first Doppler peak (if this exists, and if Ω_tot = 1). Before embarking on this exercise, some proper cautions ought to be given. First, the current CMB data are not only noisy, with in some cases uncertain calibration, but will still have present within them residual contamination, either from the Galaxy, or from discrete radio sources, or both. Experimenters make their best efforts to remove these effects, or to choose observing strategies that minimise them, but the process of obtaining really 'clean' CMB results, free of these effects to some guaranteed level of accuracy, is still only in its infancy. Secondly, in any comparison of theory and data where parameters are to be estimated, the results for the parameters are only as good as the underlying theoretical models and assumptions that went into them. If CDM turns out not to be a viable theory, for example, then the bounds on Ω derived below will have to be recomputed for whatever theory replaces it. Many of the ingredients which go into the form of the power spectrum are not totally theory-specific (this includes the physics of recombination, which involves only well-understood atomic physics), so one can hope that at least some of the results found will not change too radically. Bearing these caveats in mind, it is certainly of interest to begin this process of quantitative comparison of CMB data with theoretical curves.
Figure 21 shows a set of recent data points, many of them discussed above, put on a common scale (which may effectively be treated as ℓ(ℓ+1)C_ℓ) and compared with an analytical representation of the first Doppler peak in a CDM model. The work required to convert the data to this common framework is substantial, and is discussed in Hancock et al. (1997) [35], from where this figure was taken. The analytical version of the power spectrum is parameterised by its location in height and left/right position, and enables one to construct a likelihood surface for the parameters Ω and A_peak, where A_peak is the height of the peak and is related to a combination of Ω_b and H_0, as discussed above. The dotted and dashed extreme curves in Figure 21 indicate the best fit curves corresponding to varying the Saskatoon calibration by ±14%. The central fit yields a 68% confidence interval of 0.2 < Ω < 1.5, with a maximum likelihood point of Ω = 0.7, after marginalisation over the value of A_peak. Incorporating nucleosynthesis information as well, as sketched above (specifically, the Copi et al. [22] bounds of 0.009 ≤ Ω_b h² ≤ 0.02 are assumed), a 68% confidence interval for H_0 of

30 km s⁻¹ Mpc⁻¹ < H_0 < 50 km s⁻¹ Mpc⁻¹   (25)

is obtained. This range ignores the Saskatoon calibration uncertainty. Generally, in the range of parameters of current interest, increasing H_0 lowers the height of the peak. Thus taking the Saskatoon calibration to be lower than nominal, for example by the 14% figure quoted as the one-sigma error, enables us to raise the allowed range for H_0. By this means, an upper limit closer to 70 km s⁻¹ Mpc⁻¹ is obtained. The best angular resolution offered by MAP is 12 arcmin, in its highest frequency channel at 90 GHz, and the median resolution of its channels is more like 30 arcmin.
This means that it may have difficulty in pinning down the full shape of the first, and certainly of the secondary, Doppler peaks in the power spectrum. On the other hand, the angular resolution of the Planck Surveyor extends down to 5 arcmin, with a median (across the six channels most useful for CMB work) of about 10 arcmin. This means that it will be able to determine the power spectrum to good accuracy, all the way into the secondary peaks, and that consequently very good accuracy in determining cosmological parameters will be possible. Figure 19, taken from the Planck Surveyor Phase A study document, shows the accuracy to which Ω, H_0 and Ω_b can be recovered, given coverage of 1/3 of the sky with sensitivity 2 × 10⁻⁶ in ∆T/T per pixel. The horizontal scale represents the resolution of the satellite. From this we can see that the good angular resolution of the Planck Surveyor should mean that a joint determination of Ω and H_0 to ∼ 1% accuracy is possible in principle. Figure 22 shows the likelihood contours for two experiments with different resolutions. These figures do not, however, take into account any reduction in sensitivity as a result of the need to separate Galactic foregrounds from the CMB. Nevertheless, simulations using a maximum entropy separation algorithm (Hobson, Jones, Lasenby & Bouchet, in press) suggest that for the Planck Surveyor the reduction in the final sensitivity to the CMB is very small indeed, and that the accuracy of the cosmological parameter estimates indicated in Figure 19 may be attainable.
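The kind of analysis described above, fitting a parameterised Doppler peak to band powers and marginalising over the peak amplitude, can be sketched as follows. The band powers below are invented for illustration, and the peak-position scaling ℓ_peak ≈ 220/√Ω is the standard approximate result; this is a toy model, not the analysis of Hancock et al. [35]:

```python
import numpy as np

# Hypothetical band-power points (l, power, error), for illustration only.
ells = np.array([50, 100, 150, 200, 250, 300, 400])
power = np.array([1.0, 1.3, 1.8, 2.4, 2.2, 1.7, 1.1])
err = np.full_like(power, 0.3)

def model(ell, A, omega):
    """Toy Doppler peak: Gaussian bump of height A at l_peak ~ 220/sqrt(omega)
    on a flat large-scale plateau."""
    l_peak = 220.0 / np.sqrt(omega)
    return 1.0 + A * np.exp(-0.5 * ((ell - l_peak) / 80.0) ** 2)

omegas = np.linspace(0.2, 1.5, 60)
amps = np.linspace(0.1, 3.0, 60)
L = np.zeros(len(omegas))
for i, om in enumerate(omegas):
    chi2 = np.array([np.sum(((power - model(ells, A, om)) / err) ** 2)
                     for A in amps])
    # Marginalise over the peak amplitude by summing the likelihood over A.
    L[i] = np.sum(np.exp(-0.5 * chi2))
L /= L.sum()

omega_ml = omegas[np.argmax(L)]
print(f"maximum-likelihood Omega ~ {omega_ml:.2f}")
```

A confidence interval such as 0.2 < Ω < 1.5 then corresponds to the region enclosing 68% of the marginalised likelihood L(Ω).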
One additional problem is that of degeneracy. It is possible to formulate two models with similar power spectra but different underlying physics. For example, standard CDM and a model with a non-zero cosmological constant and a gravity wave component can have almost identical power spectra (to within the accuracy of the MAP satellite). To break the degeneracy, higher accuracy (such as that of the Planck Surveyor) is required, or information about the polarisation of the CMB photons can be used. This extra information on polarisation is very good at discriminating between theories, but requires very sensitive polarimeters.

Living Reviews in Relativity (1998-11) http://www.livingreviews.org