1 Executive summary

The present generation of imaging atmospheric Cherenkov telescopes (H.E.S.S., MAGIC and VERITAS) has in recent years opened the realm of ground-based gamma-ray astronomy for energies above a few tens of GeV. The Cherenkov Telescope Array (CTA) will explore our Universe in depth in very high energy gamma-rays and investigate cosmic processes leading to relativistic particles, in close cooperation with observatories covering other wavelength ranges of the electromagnetic spectrum, and with those using cosmic rays and neutrinos.

Besides guaranteed high-energy astrophysics results, CTA will have a large discovery potential in key areas of astronomy, astrophysics and fundamental physics research. These include the study of the origin of cosmic rays and their impact on the constituents of the Universe through the investigation of galactic particle accelerators, the exploration of the nature and variety of black hole particle accelerators through the study of the production and propagation of extragalactic gamma rays, and the examination of the ultimate nature of matter and of physics beyond the Standard Model through searches for dark matter and the effects of quantum gravity.

With the joining of the US groups of the Advanced Gamma-ray Imaging System (AGIS) project, and of the Brazilian and Indian groups in Spring 2010, and with the strong Japanese participation, CTA represents a genuinely world-wide effort, extending well beyond its European roots.

CTA will consist of two arrays of Cherenkov telescopes, which aim to: (a) increase sensitivity by another order of magnitude for deep observations around 1 TeV, (b) boost significantly the detection area and hence detection rates, particularly important for transient phenomena and at the highest energies, (c) increase the angular resolution and hence the ability to resolve the morphology of extended sources, (d) provide uniform energy coverage for photons from some tens of GeV to beyond 100 TeV, and (e) enhance the sky survey capability, monitoring capability and flexibility of operation. CTA will be operated as a proposal-driven open observatory, with a Science Data Centre providing transparent access to data, analysis tools and user training.

To view the whole sky, two CTA sites are foreseen. The main site will be in the southern hemisphere, given the wealth of sources in the central region of our Galaxy and the richness of their morphological features. A second complementary northern site will be primarily devoted to the study of Active Galactic Nuclei (AGN) and cosmological galaxy and star formation and evolution. The performance and scientific potential of arrays of Cherenkov telescopes have been studied in significant detail, showing that the performance goals can be reached. What remains to be decided is the exact layout of the telescope array. Ample experience exists in constructing and operating telescopes of the 12-m class (H.E.S.S., VERITAS). Telescopes of the 17-m class are operating (MAGIC) and one 28-m class telescope is under construction (H.E.S.S. II). These telescopes will serve as prototypes for CTA. The structural and optical properties of such telescopes are well understood, as many have been built for applications from radio astronomy to solar power installations. The fast electronics needed in gamma ray astronomy to capture the nanosecond-scale Cherenkov pulses have long been mastered, well before such electronics became commonplace with the Gigahertz transmission and processing used today in telephony, internet, television, and computing.

The extensive experience of members of the consortium in the area of conventional photomultiplier tubes (PMTs) provides a solid foundation for the design of cameras with an optimal cost/performance ratio. Consequently, the base-line design relies on conventional PMTs. Advanced photon detectors with improved quantum efficiency are under development and test and may well be available when the array is constructed. In short, all the technical solutions needed to carry out this project exist today. The main challenge lies in the industrialisation of all aspects of the production and the exploitation of economies of scale.

Given the large amounts of data recorded by the instrument and produced by computer simulations of the experiment, substantial efforts in e-science and grid computing are envisaged to enable efficient data processing. Some of the laboratories involved in CTA are Tier 1 and 2 centres on the LHC computing grid and the Cosmogrid. Simulation and analysis packages for CTA are developed for the grid. The consortium has set up a CTA-Virtual Organisation within the EGEE project (Enabling Grids for E-sciencE; funded by the European Union) for use of grid infrastructure and the sharing of computing resources, which will facilitate worldwide collaboration for simulations and the processing and analysis of scientific data.

Unlike current ground-based gamma-ray instruments, CTA will be an open observatory, with a Science Data Centre (SDC) which provides pre-processed data to the user, as well as the tools necessary for the most common analyses. The software tools will provide an easy-to-use and well-defined access to data from this unique observatory. CTA data will be accessible through the Virtual Observatory, with varying interfaces matched to different levels of expertise. The required toolkit is being developed by partners with experience in SDC management from, for example, the INTEGRAL space mission.

Experiments in astroparticle physics have proven to be an excellent training ground for young scientists, providing a highly interdisciplinary work environment with ample opportunities to acquire not only physics skills but also to learn data processing and data mining techniques, programming of complex control and monitoring systems and design of electronics. Further, the environment of the large multi-national CTA Collaboration, working across international borders, ensures that presentation skills, communication ability and management and leadership proficiency are enhanced. Young scientists frequently participate in outreach activities and, thus, hone also their skills in this increasingly important area. With its training and mobility opportunities for young scientists, CTA will have a major impact on society.

Outreach activities will be an important part of the CTA operation. Lectures and demonstrations augmented by web-based non-expert tools for viewing CTA data will be offered to pupils and lay audiences. Particularly interesting objects will be featured on the CTA web pages, along the lines of the “Source of the Month” pages of the H.E.S.S. collaboration. CTA is expected to make highly visible contributions towards popularising science and generating enthusiasm for research at the cosmic frontier and to create interest in the technologies applied in this field.

2 CTA, a new science infrastructure

In the field of very high energy gamma-ray astronomy (VHE, energies >100 GeV), the instruments H.E.S.S. (http://www.mpi-hd.mpg.de/hfm/HESS), MAGIC (http://magic.mppmu.mpg.de) and VERITAS (http://veritas.sao.arizona.edu) have been driving the development in recent years. The spectacular astrophysics results from the current Cherenkov instruments have generated considerable interest in both the astrophysics and particle physics communities and have created the desire for a next-generation, more sensitive and more flexible facility, able to serve a larger community of users. The proposed CTA (http://www.cta-observatory.org) is a large array of Cherenkov telescopes of different sizes, based on proven technology and deployed on an unprecedented scale (Fig. 1). It will allow significant extension of our current knowledge in high-energy astrophysics. CTA is a new facility, with capabilities well beyond those of conceivable upgrades of existing instruments such as H.E.S.S., MAGIC or VERITAS. The CTA project unites the main research groups in this field in a common strategy, resulting in an unprecedented convergence of efforts, human resources, and know-how. Interest in and support for the project is coming from scientists in Europe, America, Asia and Africa, all of whom wish to use such a facility for their research and are willing to contribute to its design and construction. CTA will offer worldwide unique opportunities to users with varied scientific interests. The number of scientists, in particular young scientists, working in the still-evolving field of gamma-ray astronomy is growing at a steady rate, drawing from other fields such as nuclear and particle physics. In addition, there is increased interest from other parts of the astrophysical community, ranging from radio to X-ray and satellite-based gamma-ray astronomers.
CTA will, for the first time in this field, provide open access via targeted observation proposals and generate large amounts of public data, accessible using Virtual Observatory tools. CTA aims to become a cornerstone in a networked multi-wavelength, multi-messenger exploration of the high-energy non-thermal universe.

Fig. 1 Conceptual layout of a possible Cherenkov Telescope Array (not to scale)

3 The science case for CTA

3.1 Science motivation in a nutshell

3.1.1 Why observe in gamma-rays?

Radiation at gamma-ray energies differs fundamentally from that detected at lower energies and hence longer wavelengths: GeV to TeV gamma-rays cannot conceivably be generated by thermal emission from hot celestial objects. The energy of thermal radiation reflects the temperature of the emitting body, and apart from the Big Bang there is and has been nothing hot enough to emit such gamma-rays in the known Universe. Instead, we find that high-energy gamma-rays probe a non-thermal Universe, where other mechanisms allow the concentration of large amounts of energy onto a single quantum of radiation. In a bottom-up fashion, gamma-rays can be generated when highly relativistic particles—accelerated for example in the gigantic shock waves of stellar explosions—collide with ambient gas, or interact with photons and magnetic fields. The flux and energy spectrum of the gamma-rays reflects the flux and spectrum of the high-energy particles. They can therefore be used to trace these cosmic rays and electrons in distant regions of our own Galaxy or even in other galaxies. High-energy gamma-rays can also be produced in a top-down fashion by decays of heavy particles such as hypothetical dark matter particles or cosmic strings, both of which might be relics of the Big Bang. Gamma-rays therefore provide a window on the discovery of the nature and constituents of dark matter.

High-energy gamma-rays, as argued above, can be used to trace the populations of high-energy particles in distant regions of our own or in other galaxies. Meandering in interstellar magnetic fields, cosmic rays will usually not reach Earth and thus cannot be observed directly. Those which do arrive have lost all directional information and cannot be used to pinpoint their sources, except for cosmic rays of extreme energy (>10¹⁸ eV). However, such high-energy particle populations are an important aspect of the dynamics of galaxies. Typically, the energy content in cosmic rays equals the energies in magnetic fields or in thermal radiation. The pressure generated by high-energy particles drives galactic outflows and helps balance the gravitational collapse of galactic disks. Astronomy with high-energy gamma-rays is so far the only way to directly probe and image the cosmic particle accelerators responsible for these particle populations, in conjunction with studies of the synchrotron radiation resulting from relativistic electrons moving in magnetic fields and giving rise to non-thermal radio and X-ray emission.

3.1.2 A first glimpse of the astrophysical sources of gamma-rays

The first images of the Milky Way in VHE gamma-rays have been obtained in the last few years. These reveal a chain of gamma-ray emitters situated along the Galactic equator (see Fig. 2), demonstrating that sources of high-energy radiation are ubiquitous in our Galaxy. Sources of this radiation include supernova shock waves, where presumably atomic nuclei are accelerated and generate the observed gamma-rays. Another important class of objects is the “nebulae” surrounding pulsars, where giant rotating magnetic fields give rise to a steady flow of high-energy particles. Additionally, some of the objects discovered to emit at such energies are binary systems, where a black hole or a pulsar orbits a massive star. Along the elliptical orbit, the conditions for particle acceleration vary and hence the intensity of the radiation is modulated with the orbital period. These systems are particularly interesting in that they enable the study of how particle acceleration processes respond to varying ambient conditions. One of several surprises was the discovery of “dark sources”, objects which emit VHE gamma rays but have no obvious counterpart in other wavelength regimes. In other words, there are objects in the Galaxy which might in fact be detectable only in high-energy gamma-rays. Beyond our Galaxy, many extragalactic sources of high-energy radiation have been discovered, located in active galaxies, where a super-massive black hole at the centre of the galaxy is fed by a steady stream of gas and is releasing enormous amounts of energy. Gamma-rays are believed to be emitted from the vicinity of these black holes, allowing the study of the processes occurring in this violent and as yet poorly understood environment.

Fig. 2 The Milky Way viewed in VHE gamma-rays, in four bands of Galactic longitude [1]

3.1.3 Cherenkov telescopes

The recent breakthroughs in VHE gamma-ray astronomy were achieved with ground-based Cherenkov telescopes. When a VHE gamma-ray enters the atmosphere, it interacts with atmospheric nuclei and generates a shower of secondary electrons, positrons and photons. Moving through the atmosphere at speeds higher than the speed of light in air, these electrons and positrons emit a beam of bluish light, the Cherenkov light. For near-vertical showers this Cherenkov light illuminates a circle with a diameter of about 250 m on the ground. For large zenith angles the area can increase considerably. This light can be captured with optical elements and be used to image the shower, which vaguely resembles a shooting star. Reconstructing the shower axis in space and tracing it back onto the sky allows the celestial origin of the gamma-ray to be determined. Measuring many gamma-rays enables an image of the gamma-ray sky, such as that shown in Fig. 2, to be created. Large optical reflectors with areas in the 100 m² range and beyond are required to collect enough light, and the instruments can only be operated on dark nights at clear sites. With Cherenkov telescopes, the effective area of the detector is about the size of the Cherenkov light pool on the ground. As this is a circle of 250 m diameter, the effective area is about 10⁵ times larger than that achievable with satellite-based detectors. Therefore much lower fluxes at higher energies can be investigated with Cherenkov telescopes, enabling the study of short-time-scale variability.
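The collection-area comparison above can be checked with a quick back-of-envelope calculation; the ~1 m² satellite detector area is an assumed round number (roughly the size of a pair-conversion telescope), not a figure from the text:

```python
import math

# Back-of-envelope comparison of effective collection areas.
# The ~1 m^2 satellite detector area is an assumed round number.
cherenkov_pool_diameter_m = 250.0   # Cherenkov light pool, near-vertical shower
satellite_detector_area_m2 = 1.0    # typical pair-conversion telescope (assumed)

pool_area_m2 = math.pi * (cherenkov_pool_diameter_m / 2.0) ** 2
ratio = pool_area_m2 / satellite_detector_area_m2

print(f"Cherenkov pool area: {pool_area_m2:.2e} m^2")   # ~4.9e4 m^2
print(f"Area advantage: ~10^{math.log10(ratio):.1f}")   # of order 10^5
```

The exact number is about 4.9 × 10⁴ m², i.e. of order 10⁵ as quoted in the text.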

The Imaging Atmospheric Cherenkov Technique was pioneered by the Whipple Collaboration in the United States. After more than 20 years of development, the Crab Nebula, the first source of VHE gamma-rays, was discovered in 1989. The Crab Nebula is among the strongest sources of very high energy gamma-rays, and is often used as a “standard candle”. Modern instruments, using multiple telescopes to track the cascades from different perspectives and employing fine-grained photon detectors for improved imaging, can detect sources down to 1% of the flux of the Crab Nebula. Finely-pixellated imaging was first employed in the French CAT telescope [2], and the use of “stereoscopic” telescope systems to provide images of the cascade from different viewing points was pioneered by the European HEGRA IACT system [3]. For summaries of the achievements in recent years and the science case for a next-generation very high energy gamma-ray observatory see [4–8].

In March 2007, the High Energy Stereoscopic System (H.E.S.S.) project was awarded the Descartes Research Prize of the European Commission for offering “A new glimpse at the highest-energy Universe”. Together with the instruments MAGIC and VERITAS (in the northern hemisphere) and CANGAROO (in the southern hemisphere), a new wavelength domain was opened for astronomy, the domain of very high energy gamma-rays with energies between about 100 GeV and about 100 TeV, energies which are a million million times higher than the energy of visible light.

At lower energies, in the GeV domain, the launch of a new generation of gamma-ray telescopes (like AGILE, and in particular Fermi, launched in 2008) has opened a new era in gamma-ray discoveries [9]. The Large Area Telescope (LAT), the main instrument onboard Fermi, is sensitive to gamma-rays with energies in the range from 20 MeV to about 100 GeV. The energy range covered by CTA will smoothly connect to that of Fermi-LAT, overlap with that of the current generation of ground-based instruments, and extend to higher energies, while providing an improvement in both sensitivity and angular resolution.

3.2 The CTA science drivers

The aims of the CTA can be roughly grouped into three main themes, serving as key science drivers:

  1. Understanding the origin of cosmic rays and their role in the Universe

  2. Understanding the nature and variety of particle acceleration around black holes

  3. Searching for the ultimate nature of matter and physics beyond the Standard Model

Theme 1 comprises the study of the physics of galactic particle accelerators, such as pulsars and pulsar wind nebulae, supernova remnants, and gamma-ray binaries. It deals with the impact of the accelerated particles on their environment (via the emission from particle interactions with the interstellar medium and radiation fields), and the cumulative effects seen at various scales, from massive star forming regions to starburst galaxies.

Theme 2 concerns particle acceleration near super-massive and stellar-sized black holes. Objects of interest include microquasars at the Galactic scale, and blazars, radio galaxies and other classes of AGN that can potentially be studied in high-energy gamma rays. The fact that CTA will be able to detect a large number of these objects enables population studies which will be a major step forward in this area. Extragalactic background light (EBL), Galaxy clusters and Gamma Ray Burst (GRB) studies are also connected to this field.

Finally, Theme 3 covers what can be called “new physics”, with searches for dark matter through possible annihilation signatures, tests of Lorentz invariance, and any other observational signatures that may challenge our current understanding of fundamental physics.

CTA will be able to generate significant advances in all these areas.

3.3 Details of the CTA science case

We conclude this chapter with a few examples of physics issues that could be significantly advanced with an instrument like CTA. The list is certainly not exhaustive. The physics case for CTA is being explored in detail by many scientists, and their findings indicate the huge potential for numerous interesting discoveries with CTA.

3.3.1 Cosmic ray origin and acceleration

A tenet of high-energy astrophysics is that cosmic rays (CRs) are accelerated in the shocks of supernova explosions. However, while particle acceleration up to energies well beyond 10¹⁴ eV has now clearly been demonstrated with the current generation of instruments, it is by no means proven that supernovae accelerate the bulk of cosmic rays. The large sample of supernovae which will be observable with CTA—in some scenarios several hundreds of objects—and in particular the increased energy coverage at lower and higher energies, will allow sensitive tests of acceleration models and determination of their parameters. Improved angular resolution (arcmin) will help to resolve fine structures in supernova remnants which are essential for the study of particle acceleration and particle interactions. Pulsar wind nebulae surrounding the pulsars (created in supernova explosions) are another abundant source of high-energy particles, including possibly high-energy nuclei. Energy conversion within pulsar winds and the interaction of the wind with the ambient medium and the surrounding supernova shell challenge current ideas in plasma physics.

The CR spectrum observed near the Earth can be described by a pure power law up to an energy of a few PeV, where it slightly steepens. The feature is called the “knee”. The absence of other features in the spectrum suggests that, if supernova remnants (SNRs) are the sources of galactic CRs, they must be able to accelerate particles at least up to the knee. For this to happen, the acceleration in diffusive shocks has to be fast enough for particles to reach PeV energies before the SNR enters the Sedov phase, when the shock slows down and consequently becomes unable to confine the highest energy CRs [10]. Since the initial free expansion velocity of SNRs does not vary much from object to object, only the amplification of magnetic fields can increase the acceleration rate to the required level. Amplification factors of 100–1,000 compared to the interstellar medium value and small diffusion coefficients are needed [11]. The non-linear theory of diffusive shock acceleration suggests that such an amplification of the magnetic field might be induced by the CRs themselves, and high resolution X-ray observations of SNR shocks seem to support this scenario, though their interpretation is debated. Thus, an accurate determination of the intensity of the magnetic field at the shock is of crucial importance for disentangling the origin of the observed gamma-ray emission and understanding the way diffusive shock acceleration works.
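The need for field amplification can be illustrated with a rough Bohm-diffusion estimate; the shock speed, Sedov age, field strengths and the order-ten prefactor in the acceleration-time formula are all assumed illustrative values, not results from [10, 11]:

```python
# Rough Bohm-diffusion estimate of the time to accelerate a proton to the
# "knee" (~3 PeV) at a SNR shock: t_acc ~ 10 * D(E) / v_sh^2, with the Bohm
# coefficient D(E) = r_g * c / 3. All numbers are illustrative assumptions.

E_eV = 3e15        # target energy: the knee
c = 3e10           # speed of light, cm/s
v_sh = 5e8         # shock speed, 5000 km/s (free-expansion phase, assumed)
yr = 3.15e7        # seconds per year
t_sedov_yr = 1e3   # rough age at which the Sedov phase begins (assumed)

def t_acc_years(B_gauss):
    r_g = E_eV / (300.0 * B_gauss)  # proton gyroradius, cm
    D = r_g * c / 3.0               # Bohm diffusion coefficient, cm^2/s
    return 10.0 * D / v_sh**2 / yr

t_ism = t_acc_years(3e-6)  # unamplified interstellar field, ~3 uG
t_amp = t_acc_years(3e-4)  # field amplified by ~100, as discussed above

print(f"t_acc at 3 uG:   ~{t_ism:.0f} yr (>> Sedov age: knee unreachable)")
print(f"t_acc at 300 uG: ~{t_amp:.0f} yr (< Sedov age: knee reachable)")
```

With the unamplified field the acceleration time far exceeds the free-expansion phase, while a ~100-fold amplified field brings it comfortably below, which is the core of the argument above.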

Even if a SNR can be detected by Cherenkov telescopes during a significant fraction of its lifetime (up to several 10⁴ years), it can make 10¹⁵ eV CRs only for a much shorter time (several hundred years), due to the rapid escape of PeV particles from the SNR. This implies that the number of SNRs which currently have a gamma-ray spectrum extending up to hundreds of TeV is very roughly of the order of ∼10. The actual number of detectable objects will depend on the distance and on the density of the surrounding interstellar medium. The detection of such objects (even a few of them) would be extremely important, as it would be clear evidence for the acceleration of CRs up to PeV energies in SNRs. A sensitive scan of the galactic plane with CTA would be an ideal way of searching for these sources. In general, the spectra of radiating particles (both electrons and protons), and therefore also the spectra of gamma-ray radiation, should show characteristic curvature, reflecting acceleration at CR-modified shocks. However, to see such curvature, one needs coverage of a few decades in energy, far from the cutoff region. CTA will provide this coverage. If the general picture of SNR evolution described above is correct, the position of the cutoff in the gamma-ray spectrum depends on the age of the SNR and on the magnetic field at the shock. A study of the number of objects detected as a function of the cutoff energy will allow tests of this hypothesis and constraints to be placed on the physical parameters of SNRs, in particular the magnetic field strength.
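The estimate of ∼10 objects follows from a simple rate-times-duration argument; the Galactic supernova rate of roughly one per 30 years is an assumed round number, not a value from the text:

```python
# Rate-times-duration estimate of the number of SNRs currently emitting
# gamma-rays up to hundreds of TeV. The Galactic supernova rate is an
# assumed round number; the ~300 yr PeV phase is taken from the text.

sn_rate_per_yr = 1.0 / 30.0  # assumed Galactic supernova rate
pev_phase_yr = 300.0         # duration of the PeV-emitting phase

n_expected = sn_rate_per_yr * pev_phase_yr
print(f"Expected number of PeV-emitting SNRs at any time: ~{n_expected:.0f}")
```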

CTA offers the possibility of real breakthroughs in the understanding of cosmic rays, as there is the potential to directly observe their diffusion (see, e.g., [12]). The presence of a massive molecular cloud located in the proximity of a SNR (or any kind of CR accelerator) provides a thick target for CR hadronic interactions and thus enhances the gamma-ray emission. Hence, studies of molecular clouds in gamma-rays can be used to identify the sites where CRs are accelerated. As cosmic rays travel from the accelerator to the target, their spectrum evolves as a strong function of time, distance to the source, and the (energy-dependent) diffusion coefficient. Depending on the values of these parameters, varying proton, and therefore gamma-ray, spectra may be expected. CTA will allow the study of emission depending on these three quantities, which is impossible with current experiments. A determination, with high sensitivity, of spatially resolved gamma-ray sources related to the same accelerator would lead to the experimental determination of the local diffusion coefficient and/or the local injection spectrum of cosmic rays. Also, the observation of the penetration of cosmic rays into molecular clouds will be possible. If the diffusion coefficient inside a cloud is significantly smaller than the average in the neighbourhood, low energy cosmic rays cannot penetrate deep into the cloud, and part of the gamma-ray emission from the cloud is suppressed, with the consequence that its gamma-ray spectrum appears harder than the cosmic-ray spectrum.

Both of these effects are more pronounced in the denser central region of the cloud. Thus, with an angular resolution of the order of ≤1 arcmin one could resolve the inner part of the clouds and measure the degree of penetration of cosmic rays [13].
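The energy-dependent diffusion discussed above can be sketched numerically; the normalisation D0 and index delta are illustrative assumptions, not measured values:

```python
import math

# Diffusion distance r_diff = sqrt(4 * D(E) * t) for cosmic rays with an
# energy-dependent diffusion coefficient D(E) = D0 * (E/10 GeV)**delta.
# D0 and delta are illustrative assumptions, not measured values.

D0 = 1e28       # cm^2/s at 10 GeV (assumed)
delta = 0.5     # assumed energy dependence of diffusion
pc = 3.086e18   # cm per parsec
yr = 3.15e7     # seconds per year

def diffusion_radius_pc(E_GeV, t_yr):
    D = D0 * (E_GeV / 10.0) ** delta
    return math.sqrt(4.0 * D * t_yr * yr) / pc

# After 1000 yr, 100 TeV protons have spread ~10x further than 10 GeV ones,
# so a cloud tens of pc from the accelerator is lit up first in hard,
# high-energy gamma-rays.
for E_GeV in (10.0, 1e3, 1e5):
    r = diffusion_radius_pc(E_GeV, 1e3)
    print(f"E = {E_GeV:8.0f} GeV: r_diff(1000 yr) ~ {r:5.1f} pc")
```

This is why the gamma-ray spectrum of a nearby cloud depends on the source age, its distance, and the diffusion coefficient, the three quantities CTA can disentangle.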

More information on general aspects of cosmic rays and their relationship to VHE gamma-ray observations is available in the review talks and papers presented at the International Cosmic Ray Conference 2009 held in Łódź; the online proceedings are a good source of information [14].

3.3.2 Pulsar wind nebulae

Pulsar wind nebulae (PWNe) currently constitute the most populous class of identified Galactic VHE gamma-ray sources. As is well known, the Crab Nebula is a very effective accelerator (shown by emission across more than 15 decades in energy) but not an effective inverse Compton gamma-ray emitter. Indeed, we see gamma rays from the Crab because of its large spin-down power (∼10³⁸ erg s⁻¹), although the gamma-ray luminosity is much less than the spin-down power of its pulsar. This can be understood as resulting from a large (mG) magnetic field, which also depends on the spin-down power. A less powerful pulsar would imply a weaker magnetic field, which would allow a higher gamma-ray efficiency (i.e. a more efficient sharing between synchrotron and inverse Compton losses). For instance, HESS J1825-137 has a similar TeV luminosity to the Crab, but a spin-down power that is two orders of magnitude smaller, and its magnetic field has been constrained to be in the range of a few μG instead of hundreds. The differential gamma-ray spectrum of the whole emission region from the latter object has been measured over more than two orders of magnitude, from 270 GeV to 35 TeV, and shows indications of a deviation from a pure power law that CTA could confirm and investigate in detail. Spectra have also been determined for spatially separated regions of HESS J1825-137 [15]. Another example is HESS J1303-61 [16]. The photon spectra in the different regions show a softening with increasing distance from the pulsar and therefore an energy-dependent morphology. If the emission is due to the inverse Compton effect, the pulsar power is not sufficient to generate the gamma-ray luminosity, suggesting that the pulsar had a higher injection power in the past. Is this common for other PWNe and what can that tell us about the evolution of pulsar winds?
In the case of Vela X [17], the first detection of what appears to be a VHE inverse Compton peak in the spectral energy distribution (SED) was made. Although a hadronic interpretation has also been put forward, it is as yet unclear how large the contribution of ions to the pulsar wind could be. CTA can be used to test leptonic vs. hadronic models of gamma-ray production in PWNe.
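The efficiency argument above (stronger field, more synchrotron losses, lower inverse-Compton yield) can be made quantitative by comparing magnetic and target-photon energy densities; the field values are round numbers consistent with the text, and only CMB target photons are counted:

```python
import math

# Compare the magnetic energy density U_B = B^2 / (8*pi) with the CMB photon
# energy density (~0.26 eV/cm^3). An electron's synchrotron-to-inverse-Compton
# power ratio scales as U_B / U_ph. Field values are assumed round numbers
# consistent with the text (Crab ~1 mG, weak-field PWN ~5 uG).

eV_per_erg = 6.24e11
U_cmb = 0.26  # eV/cm^3, CMB energy density

def U_B_eV_cm3(B_gauss):
    return B_gauss**2 / (8.0 * math.pi) * eV_per_erg

ratio_crab = U_B_eV_cm3(1e-3) / U_cmb   # synchrotron dominates by ~10^5
ratio_weak = U_B_eV_cm3(5e-6) / U_cmb   # inverse Compton is competitive

print(f"Crab (~1 mG):     U_B/U_CMB ~ {ratio_crab:.1e}")
print(f"Weak PWN (~5 uG): U_B/U_CMB ~ {ratio_weak:.1f}")
```

In a mG field almost all the electron energy goes into synchrotron radiation, while in a few-μG field the inverse-Compton channel claims a comparable share, which is why a less powerful pulsar can be a relatively more efficient gamma-ray emitter.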

The return current problem for pulsars has not been solved to date, but if we detect a clear hadronic signal, this will show that ions are extracted from the pulsar surface, which may lead to a solution of the most fundamental question in pulsar magnetospheric physics: how do we close the pulsar current? In systems where we see a clear leptonic signal, it is important to measure the magnetisation (or “sigma”) parameter of the PWNe. Are the magnetic fields and particles in these systems in equipartition (as in the Crab Nebula) or do they have particle-dominated winds? This will contribute significantly to the understanding of the magnetohydrodynamic flow in PWNe. Understanding the time evolution of the multi-wavelength synchrotron and inverse Compton (or hadronic) intensities is also an aim of CTA. Such evolutionary tracks are determined by the nature of the progenitor stellar wind, the properties of the subsequent composite SNR explosion and the surrounding interstellar environment. Finally, the sensitivity and angular resolution achievable with CTA will allow detailed multi-wavelength studies of large/close PWNe, and the understanding of particle propagation, the magnetic field profile in the nebula, and interstellar medium (ISM) feedback.

The evolution and structure of pulsar wind nebulae is discussed in a recent review [18]. Many key implications for VHE gamma ray measurements, and an assessment of the current observations can be found in [19].

3.3.3 The galactic centre region

It is clear that the galactic centre region itself will be one of the prime science targets for the next generation of VHE instruments [20, 21]. The galactic centre hosts the nearest super-massive black hole, as well as a variety of other objects likely to generate high-energy radiation, including hypothetical dark-matter particles which may annihilate and produce gamma-rays. Indeed, the galactic centre has been detected as a source of high-energy gamma-rays, and indications for high-energy particles diffusing away from the central source and interacting with the dense gas clouds in the central region have been observed. In observations with improved sensitivity and resolution, the galactic centre can potentially yield a variety of interesting results on particle acceleration and gamma-ray production in the vicinity of black holes, on particle propagation in central molecular clouds, and, possibly, on the detection of dark matter annihilation or decay.

The VHE gamma-ray view of the galactic centre region is dominated by two point sources, one coincident with a PWN inside SNR G0.9+0.1, and one coincident with the super-massive black hole Sgr A* and another putative PWN (G359.95-0.04). After subtraction of these sources, diffuse emission along the galactic centre ridge is visible, which shows two important features: it appears correlated with molecular clouds (as traced by the CS (1–0) line), and it exceeds by a factor of 3 to 9 the gamma-ray emission that would be produced if the same target material was exposed to the cosmic-ray environment in our local neighbourhood. The striking correlation of diffuse gamma-ray emission with the density of molecular clouds within ∼150 pc of the galactic centre favours a scenario in which cosmic rays interact with the cloud material and produce gamma-rays via the decay of neutral pions. The differential gamma-ray flux is stronger and harder than expected from just “passive” exposure of the clouds to the average galactic cosmic-ray flux, suggesting one or more nearby particle accelerators are present. In a first approach, the observed gamma-ray morphology can be explained by cosmic rays diffusing away from an accelerator near the galactic centre into the surroundings. Adopting a diffusion coefficient of D = O(10³⁰) cm²/s, the lack of VHE gamma-ray emission beyond 150 pc in this model points to an accelerator age of no more than 10⁴ years. Clearly, improved sensitivity and angular resolution would permit the study of the diffusion process in great detail, including any possible energy dependence. An alternative explanation (which CTA will address) is the putative existence of a number of electron sources (e.g. PWNe) along the galactic centre ridge, correlated with the density of molecular clouds.
Given the complexity and density of the source population in the galactic centre region, CTA’s improved sensitivity and angular resolution are needed to map the morphology of the diffuse emission, and to test its hadronic or leptonic origin.
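The accelerator age quoted above follows from a simple random-walk argument: in a time t, particles diffuse over a distance of roughly sqrt(2Dt). A minimal sketch of this arithmetic (the sqrt(2Dt) prefactor is an order-unity convention, not taken from the text):

```python
import math

PC_IN_CM = 3.086e18  # 1 parsec in cm

def diffusion_length_pc(D_cm2_s, age_yr):
    """Characteristic distance R ~ sqrt(2 D t) travelled by cosmic rays
    diffusing for a time t, in parsec (prefactor convention-dependent)."""
    t_s = age_yr * 3.156e7  # years -> seconds
    return math.sqrt(2.0 * D_cm2_s * t_s) / PC_IN_CM

# With D ~ 10^30 cm^2/s, an accelerator age of ~10^4 yr gives a diffusion
# length of a few hundred pc at most, so the absence of emission beyond
# ~150 pc bounds the age at roughly this level.
print(diffusion_length_pc(1e30, 1e4))
```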

CTA will also measure VHE absorption in the interstellar radiation field (ISRF). This is impossible for other experiments, like Fermi-LAT, as their energy coverage does not extend high enough, and very hard or perhaps impossible for current air Cherenkov experiments, as they lack the required sensitivity. At 8 kpc distance, VHE gamma-ray attenuation due to the CMB is negligible for energies <500 TeV. But the attenuation due to the ISRF (which has a comparable number density at wavelengths 20–300 μm) can produce absorption at about 50 TeV [22]. Observation of the cutoff energy for different sources will provide independent tests and constraints of ISRF models. CTA will observe sources at different distances and can thereby independently constrain the absorption and hence the ISRF. Because these sources are much closer than extragalactic objects, there is less uncertainty in separating intrinsic and extrinsic features in the spectrum than is the case for EBL studies.
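The ∼50 TeV absorption scale can be understood from the kinematic threshold for photon-photon pair production, E_γ ε ≳ (m_e c²)², evaluated for far-infrared target photons. A rough sketch under these assumptions (head-on collisions, threshold only, no cross-section shape):

```python
# Head-on threshold for gamma-gamma -> e+e- pair production:
# E_gamma * eps >= (m_e c^2)^2, ignoring the angular average.
M_E_EV = 0.511e6    # electron rest energy in eV
HC_EV_UM = 1.2398   # h*c in eV * micrometres

def threshold_TeV(wavelength_um):
    eps = HC_EV_UM / wavelength_um   # target photon energy in eV
    return M_E_EV**2 / eps / 1e12    # gamma-ray threshold energy in TeV

# Far-infrared ISRF photons at ~100 um give a threshold of ~20 TeV;
# since the cross-section peaks somewhat above threshold, absorption
# features around ~50 TeV are plausible for 20-300 um targets.
print(threshold_TeV(100.0))
```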

3.3.4 Microquasars, gamma-ray, and X-ray binaries

Currently, a handful of VHE gamma-ray emitters are known to be binary systems, consisting of a compact object, a neutron star or a black hole, orbiting a massive star. Whilst many questions on the gamma-ray emission from such systems are still open (in some cases it is not even clear if the energy source is a pulsar-driven nebula around a neutron star or accretion onto a black hole), it is evident that they offer a unique chance to “experiment” with cosmic accelerators. Along the eccentric orbits of the compact objects, the environment (including the radiation field) changes, resulting in a periodic modulation of the gamma-ray emission and allowing the study of how particle acceleration is affected by environmental conditions. Interestingly, the physics of microquasars in our own Galaxy resembles the processes occurring around super-massive black holes in distant active galaxies, but on much shorter time scales, providing insights into the emission mechanisms at work. The following are key questions in this area which CTA will be able to address, because of the extension of the accessible energy domain, the improvement in sensitivity, and the superior angular resolution it provides:

  (a)

    Studies of the formation of relativistic outflows from highly magnetised, rotating objects. If gamma-ray binaries are pulsars, is the gamma-ray emission coming mostly from processes within the pulsar wind zone or rather from particles accelerated in the wind collision shock? Is the answer to this question a function of energy? What role do the inner winds play, particularly with regard to particle injection? Gamma-ray astronomy can provide data that will help to answer these questions, but which will also throw light on the particle energy distribution within the pulsar wind zone itself. Recent Fermi-LAT results on gamma-ray binaries, such as LS I +61 303 and LS 5039 (which are found to be periodic at GeV and TeV energies, although anti-correlated [23]), show the existence of a cutoff in the SED at a few GeV (a feature that was not predicted by any model). Thus, the large energy coverage of CTA is an essential prerequisite for disentangling the pulsed and continuous components of the radiation and for exploring the processes leading to the observed GeV–TeV spectral differences.

  (b)

    Studies of the link between accretion and ejection around compact objects and of transient states associated with VHE emission. It is known that black holes display different spectral states in X-ray emission, with transitions between a low/hard state, where a compact radio jet is seen, and a high/soft state, where the radio emission is reduced by large factors or not detectable at all [24]. Are these spectral changes related to changes in the gamma-ray emission? Is there any gamma-ray emission during non-thermal radio flares (during which the flux increases by up to a factor of 1,000)? Indeed, gamma-ray emission via the inverse Compton effect is expected during radio to X-ray flares, which are attributed to synchrotron radiation from relativistic electrons in fast-expanding plasmoids (radio clouds) subject to radiative, adiabatic and energy-dependent escape losses. Can future gamma-ray observations put constraints on the magnetic fields in plasmoids?

    Continued observations of key objects (such as Cyg X-1) with the sensitivity of current instruments (using sub-arrays of CTA) can provide good coverage. Flares of less than 1 hour at a flux of 10% of the Crab could be detected at the distance of the galactic centre. Hence variable sources could be monitored and triggers provided for observations with all CTA telescopes or with other instruments. For short flares, energy coverage in the 10–100 GeV band is not possible with current instruments (AGILE and Fermi-LAT lack sensitivity). Continuous coverage at higher energies is also impossible, due to the lack of sensitivity of the current generation of Imaging Atmospheric Cherenkov Telescopes (IACTs). CTA will provide improved access to both regions.

  (c)

    Collision of the jet with the ISM, as a non-variable source of gamma-ray emission. Improved angular resolution at high energies will provide opportunities for the study of microquasars, particularly if their jets contain a sizeable fraction of relativistic hadrons. While the inner engines will remain unresolved even with future Cherenkov telescope arrays, microquasar jets and their interaction with the ISM might become resolvable, allowing emission from the central object (which may be variable) to be distinguished from that of the jet-ISM interaction (which may be steady). Microquasars, gamma-ray, and X-ray binaries, and high-energy aspects of astrophysical jets and binaries are discussed in [25].

3.3.5 Stellar clusters, star formation, and starburst galaxies

While the classical paradigm has supernova explosions as the dominant source of cosmic rays, it has been speculated that cosmic rays are also accelerated in stellar winds around massive young stars before they explode as supernovae, or around star clusters [26]. Indeed, there is growing evidence from gamma-ray data for a population of sources related to young stellar clusters and environments with strong stellar winds. However, lack of sensitivity currently prevents the detailed study and clear identification of these sources of gamma radiation. CTA aims at a better understanding of the relationship between star formation processes and gamma-ray emission. CTA can experimentally establish whether there is a direct correlation between star formation rate and gamma-ray luminosity once convection and absorption processes in the different environments are taken into account. Both the VERITAS and H.E.S.S. arrays have made deep observations of the nearest starburst galaxies, and have found them to be emitting TeV gamma-rays at the limit of their sensitivity. Future observations, with improved sensitivity at higher and lower energies, will reveal details of this radiation, helping to interpret the spectra, constrain the physical emission scenarios and extend the study of the relationship between star formation processes and gamma-ray emission to extragalactic environments. A good compendium of the current status of this topic can be found in the proceedings of a recent conference [27].

3.3.6 Pulsar physics

Pulsar magnetospheres are known to act as efficient cosmic accelerators, yet there is no complete and accepted model for this acceleration mechanism, a process which involves electrodynamics with very high magnetic fields as well as the effects of general relativity. Pulsed gamma-ray emission allows the separation of processes occurring in the magnetosphere from the emission in the surrounding nebula. That pulsed emission at tens of GeV can be detected with Cherenkov telescopes was recently demonstrated by MAGIC with the Crab pulsar [28] (and the sensitivity for pulsars with known pulse frequency is nearly an order of magnitude higher than for standard sources). Current Fermi-LAT results provide some support for models in which gamma-ray emission occurs far out in the magnetosphere, with reduced magnetic field absorption (i.e. in outer gaps). In these models, exponential cut-offs in the spectral energy distribution are expected at a few GeV, which have already been found in several Fermi pulsars. To make further progress in understanding the emission mechanisms in pulsars it is necessary to study their radiation at extreme energies. In particular, the characteristics of pulsar emission in the GeV domain (currently best examined by the Fermi-LAT) and at VHE will tell us more about the electrodynamics within their magnetospheres. Studies of interactions of magnetospheric particle winds with external ambient fields (magnetic, starlight, CMB) are equally vital. Between ∼10 GeV and ∼50 GeV (where the LAT performance is limited) CTA, with a special low-energy trigger for pulsed sources, will allow a closer look at unidentified Fermi sources and deeper analysis of Fermi pulsar candidates. Above 50 GeV CTA will explore the most extreme energetic processes in millisecond pulsars. The VHE domain will be particularly important for the study of millisecond pulsars, very much as the HE domain (with Fermi) is for classical pulsars. 
On the other hand, the high-energy emission mechanism of magnetars is essentially unknown. For magnetars, we do not expect polar-cap emission: due to the large magnetic field, all high-energy photons would be absorbed if emitted close to the neutron star, so that CTA would effectively be testing outer-gap models, especially if large X-ray flares are accompanied by gamma-ray emission.

CTA can study the GeV–TeV emission related to short-timescale pulsar phenomena, which is beyond the reach of currently operating instruments. CTA can observe possible high-energy phenomena related to timing noise (in which the pulse phase and/or frequency of radio pulses drift stochastically) or to sudden increases in the pulse frequency (glitches) produced by apparent changes in the moment of inertia of neutron stars.

Periodicity measurements with satellite instruments, which require very long integration times, may be compromised by such glitches, while CTA, with its much larger detection area and correspondingly shorter measurement times, is not.

A good compendium of the current status of this topic can be found in the proceedings and the talks presented at the “International Workshop on the High-Energy Emission from Pulsars and their Systems” [29].

3.3.7 Active galaxies, cosmic radiation fields and cosmology

Active Galactic Nuclei (AGN) are among the largest storehouses of energy known in our cosmos. At the intersection of powerful low-density plasma inflows and outflows, they offer excellent conditions for efficient particle acceleration in shocks and turbulence. AGN represent one third of the known VHE gamma-ray sources, with most of the detected objects belonging to the BL Lac class. The fast variability of the gamma-ray flux (down to minute time scales) indicates that gamma-ray production must occur close to the black hole, assisted by highly relativistic motion resulting in time (Lorentz) contraction when viewed by an observer on Earth. Details of how these jets are launched or even the types of particles of which they consist are poorly known. Multi-wavelength observations with high temporal and spectral resolution can help to distinguish between different scenarios, but this is at the limit of the capabilities of current instruments. The sensitivity of CTA, combined with simultaneous observations at other wavelengths, will provide a crucial advance in understanding the mechanisms driving these sources.

Available surveys of BL Lacs suffer from several biases at all wavelengths, further complicated by Doppler boosting effects and high variability. The large increase in sensitivity of CTA will provide large numbers of VHE sources of different types and open the way to statistical studies of the VHE blazar and AGN populations. This will enable the exploration of the relation between different types of blazars, and of the validity of unifying AGN schemes. The distribution in redshift of known and relatively nearby BL Lac objects peaks around z ∼0.3. The large majority of the population is found within z < 1, a range easily accessible with CTA. CTA will therefore be able to analyse in detail blazar populations (out to z ∼2) and the evolution of AGN with redshift, and to start a genuine “blazar cosmology”.

Several scenarios have been proposed to explain the VHE emission of blazars. However, none of them is fully self-consistent, and the current data are not sufficient to firmly rule out or confirm a particular mechanism. In the absence of a convincing global picture, a first goal for CTA will be to constrain model-dependent parameters of blazars within a given scenario. This is achievable due to the wide energy range, high sensitivity and high spectral resolution of CTA combined with multi-wavelength campaigns. Thus, the physics of basic radiation models will be constrained by CTA, and some of the models will be ruled out. A second, more difficult goal will be to distinguish between the different remaining options and to firmly identify the dominant radiation mechanisms. Detection of specific spectral features (breaks, cut-offs, absorption or additional components) would greatly help here. The role of CTA as a timing explorer will be decisive for constraining both the radiative phenomena associated with, and the global geometry and dynamics of, the AGN engine. Probing variability down to the shortest time scales will significantly constrain acceleration and cooling times, instability growth rates, and the time evolution of shocks and turbulence. For the brightest blazar flares, current instruments are able to detect variability on scales of several minutes. With CTA, such flares should be detectable within seconds, rather than minutes. A study of the minimum variability times of AGN with CTA would allow the localisation of VHE emission regions (parsec distance scales in the jet, the base of the jet, or the central engine) and would provide stringent constraints on the emission mechanisms as well as on the intrinsic time scale connected to the size of the central super-massive black hole.

Recently, radio galaxies have emerged as a new class of VHE emitting AGN [37]. Given the proximity of the sources and the larger jet angle to the line of sight compared to BL Lac objects, the outer and inner kpc jet structures will be spatially resolved by CTA. This will allow precise location of the main emission site and searches for VHE radiation from large-scale jets and hot spots besides the central core and jets seen in very long baseline interferometry images.

The observation of VHE emission from distant objects and their surroundings will also offer the unique opportunity to study extragalactic magnetic fields at large distances. If the fields are large, an e⁺e⁻ pair halo forms around AGNs, which CTA, with its high sensitivity and extended field of view, should be capable of detecting. For smaller magnetic field values, the effect of e⁺e⁻ pair formation along the path to the Earth is seen through energy-dependent time-delays of variable VHE emission, which CTA with its excellent time resolution will be ideally suited to measure.

CTA will also have the potential to deliver for the first time significant results on extragalactic diffuse emission at VHE, and offers the possibility of probing the integrated emission from all sources at these energies. While well measured at GeV energies with the EGRET and Fermi-LAT instruments, the diffuse emission at VHE is extremely challenging to measure due to its faintness and the difficulty of adequately subtracting the background. Here, the improved sensitivity coupled with the large field of view puts detection within reach of CTA.

VHE gamma-rays travelling from remote sources interact with the EBL via e⁺e⁻ pair production and are absorbed. Studying such effects as a function of energy and redshift will provide unique information on the EBL density, and thereby on the history of the formation of stars and galaxies in the Universe. This approach is complementary to direct EBL measurements, which are hampered by strong foreground emission from our planetary system (zodiacal light) and the Galaxy.

We anticipate that MAGIC II and H.E.S.S. II will at least double the number of detected sources, but this is unlikely to resolve the ambiguity between intrinsic spectral features and effects due to the EBL. It would still be very difficult to extract spectral information beyond z > 0.5, if our current knowledge of the EBL is correct. Only CTA will be able to provide a sufficiently large sample of VHE gamma-ray sources, and high-quality spectra for individual objects. For many of the sources, the SED will be determined at GeV energies, which are much less affected by the absorption and, thus, more suitable for the study of the intrinsic properties of the objects. We therefore anticipate that with CTA it will be possible to make robust predictions about the intrinsic spectrum above 40–50 GeV, for individual sources and for particular source classes.

The end of the dark ages of the Universe, the epoch of reionisation, is a topic of great interest [38]. Since it is not (yet) fully accessible via direct observations, most of our knowledge comes from simulations and from integral observables like the cosmic microwave background. The first (Population III) and second generations of stars are natural candidates for being the source of reionisation. If the first stars are hot and massive, as predicted by simulations, their UV photons emitted at z > 5 would be redshifted to the near infrared and could leave a unique signature on the EBL spectrum. If the EBL contribution from lower-redshift galaxies is sufficiently well known (for example, as derived from source counts), upper limits on the EBL density can be used to probe the properties of early stars and galaxies. Combining detailed model calculations with redshift-dependent EBL density measurements could allow the probing of the reionisation/ionisation history of the Universe. A completely new wavelength region of the EBL will be opened up by observations of sources at very high redshifts (z > 5), which will most likely be gamma-ray bursts. According to high-redshift UV background models consistent with our current knowledge of cosmic reionisation, spectral cut-offs are expected in the few GeV to few tens of GeV range at z > 5. Thus, CTA could have the unique potential to probe cosmic reionisation models through gamma-ray absorption in high-z GRBs. We analyse the GRB prospects in more detail in the following.

A good compendium of the current state of this topic can be found in the talks and the proceedings of the meeting “High-energy phenomena in relativistic outflows II” [39].

3.3.8 Gamma-ray bursts

Gamma-Ray Bursts are the most powerful explosions in the Universe, and are by far the most electromagnetically luminous sources known to us. The peak luminosity of GRBs, equivalent to the light from millions of galaxies, means they can be detected up to high redshifts, and hence they act as probes of the star formation history and reionisation of the Universe. The highest measured GRB redshift is z = 8.2, but GRBs have been observed down to z = 0.0085 (the mean redshift is z∼2.2). GRBs occur in random directions on the sky, briefly outshining the rest of the hard X-ray and soft gamma-ray sky, and then fade from view. The rapid variability seen in gamma- and X-rays indicates a small source size, which together with their huge luminosities and clearly non-thermal spectrum (with a significant high-energy tail) requires the emitting region to move toward us with a very large bulk Lorentz factor of typically >100, sometimes as high as >1,000 [40–42].

Thus, GRBs are thought to be powered by ultra-relativistic jets produced by rapid accretion onto a newly formed stellar-mass black hole or a rapidly rotating highly-magnetised neutron star (i.e. a millisecond magnetar). The prompt gamma-ray emission is thought to originate from dissipation within the original outflow by internal shocks or magnetic reconnection events. Some long duration GRBs are clearly associated with core-collapse supernovae of type Ic (from very massive Wolf–Rayet stars stripped of their H and He envelope by strong stellar winds), while the progenitors of short GRBs are much less certain: the leading model involves the merger of two neutron stars or a neutron star and a black hole [43, 44].

Many of the details of GRB explosions remain unclear. Studying them requires a combination of rapid observations to observe the prompt emission before it fades, and a wide energy range to properly capture the spectral energy distribution. Most recently, GRBs have been observed by the Swift and Fermi missions, which have revealed an even more complex behaviour than previously thought, featuring significant spectral and temporal evolution. As yet, no GRB has been detected at energies >100 GeV due to the limited sensitivity of current instruments and the large typical redshifts of these events. In just over a year of operation, the Fermi-LAT has detected emission above 10 GeV (30 GeV) from 4 (2) GRBs. In many cases, the LAT detects emission >0.1 GeV for several hundred seconds in the GRB rest-frame. In GRB090902B a photon of energy ∼33.4 GeV was detected, which translates to an energy of ∼94 GeV at its redshift of z = 1.822. Moreover, the observed spectrum is fairly hard up to the highest observed energies.
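The rest-frame energy quoted for GRB090902B is simply the observed photon energy corrected for cosmological redshift, E_emitted = E_observed × (1 + z); a one-line sketch of the arithmetic:

```python
def emitted_energy_GeV(E_obs_GeV, z):
    """Photon energy at the source, corrected for cosmological redshift."""
    return E_obs_GeV * (1.0 + z)

# The ~33.4 GeV photon from GRB090902B at z = 1.822 was emitted at
# ~94 GeV in the GRB rest frame.
print(emitted_energy_GeV(33.4, 1.822))
```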

Extrapolating the Fermi spectra to CTA energies suggests that a good fraction of the bright LAT GRBs could be detected by CTA in observing times of the order of a minute, if it can be pointed at the prompt emission fast enough. The faster CTA can get on target, the better the scientific return. Increasing the observation duty cycle by observing for a larger fraction of the lunar cycle and at larger zenith angles could also increase the return.

Detecting GRBs in the CTA energy range would greatly enhance our knowledge of the intrinsic spectrum and the particle acceleration mechanism of GRBs, particularly when combined with data from Fermi and other observatories. As yet it is unclear what the relative importance is of the various proposed emission processes, which divide mainly into leptonic (synchrotron and inverse-Compton, in particular synchrotron-self-Compton) and hadronic processes (induced by protons or nuclei at very high energies which either radiate synchrotron emission or produce pions with subsequent electromagnetic cascades). CTA may help to determine the identity of the distinct high-energy component observed so far in three of the four brightest LAT GRBs. The origin of the high-energy component may in turn shed light on the more familiar lower-energy components that dominate at soft gamma-ray energies. The bulk Lorentz factor and the composition (protons, e⁺e⁻ pairs, magnetic fields) of the outflows are also highly uncertain and may be probed by CTA. The afterglow emission which follows the prompt emission is significantly fainter, but should also be detectable in some cases. Such detections would be expected from bright GRBs at moderate redshift, not only from the afterglow synchrotron-self-Compton component, but perhaps also from inverse-Compton emission triggered by bright, late (hundreds to thousands of seconds) flares that are observed in about half of all Swift GRBs.

The discovery space at high energies is large and readily accessible to CTA. The combination of GRBs being extreme astrophysical sources and cosmological probes makes them prime targets for all high-energy experiments. With its large collecting area, energy range and rapid response, CTA is by far the most powerful and suitable VHE facility for GRB research and will open up a new energy range for their study.

3.3.9 Galaxy clusters

Galaxy clusters are storehouses of cosmic rays: all cosmic rays produced in the galaxies of a cluster since the beginning of the Universe remain confined there. Probing the density of cosmic rays in clusters via their gamma-ray emission thus provides a calorimetric measure of the total integrated non-thermal energy output of galaxies. Accretion/merger shocks outside cluster galaxies provide an additional source of high-energy particles. Emission from galaxy clusters is predicted at levels just below the sensitivity of current instruments [45].

Clusters of galaxies are the largest, gravitationally-bound objects in the Universe. The observation of mainly radio (and in some cases X-ray) emission proves the existence of non-thermal phenomena therein, but gamma-rays have not yet been detected. A possible additional source of non-thermal radiation from clusters is the annihilation of dark matter (DM). The increased sensitivity of CTA will help to establish the DM signal, and CTA could possibly be the first instrument to map DM at the scale of galaxy clusters.

3.3.10 Dark matter and fundamental physics

The dominant form of matter in the Universe is the as yet unknown dark matter, which most likely exists in the form of a new class of particles such as those predicted in supersymmetric or extra-dimensional extensions to the standard model of particle physics. Depending on the model, these DM particles can annihilate or decay to produce detectable Standard Model particles, in particular gamma-rays. Large dark matter densities, due to accumulation in gravitational potential wells, lead to detectable fluxes, especially for annihilation, where the rate is proportional to the square of the density. CTA is a discovery instrument with unprecedented sensitivity for this radiation and also an ideal tool to study the properties of the dark matter particles. If particles beyond the standard model are discovered (at the Large Hadron Collider or in underground experiments), CTA will be able to verify whether they actually form the dark matter in the Universe. Slow-moving dark matter particles could give rise to a striking, almost mono-energetic photon emission. The discovery of such line emission would be conclusive evidence for dark matter. CTA might have the capability to detect gamma-ray lines even if the cross-section is loop-suppressed, which is the case for the most popular candidates of dark matter, i.e. those inspired by the minimal supersymmetric extension of the standard model (MSSM) and models with extra dimensions, such as Kaluza-Klein theories. Line radiation from these candidates is not detectable by Fermi, H.E.S.S. II or MAGIC II, unless optimistic assumptions on the dark matter density distribution are made. Recent updates of calculations regarding the gamma-ray spectrum from the annihilation of MSSM dark matter indicate the possibility of final-state contributions giving rise to distinctive spectral features (see the reviews in [46]).

The more generic continuum contribution (arising from pion production) is more ambiguous but, with its curved shape, potentially distinguishable from the usual power-law spectra exhibited by known astrophysical sources.
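For reference, the density-squared scaling noted above enters through the conventional factorisation of the annihilation flux into a particle-physics term and an astrophysical line-of-sight integral (the “J-factor”); schematically, with standard symbols that are not taken from the text:

```latex
\frac{d\Phi_\gamma}{dE} \;=\;
\underbrace{\frac{\langle\sigma v\rangle}{8\pi\, m_\chi^{2}}\,
\frac{dN_\gamma}{dE}}_{\text{particle physics}}
\;\times\;
\underbrace{\int_{\Delta\Omega}\!\int_{\text{l.o.s.}}
\rho_\chi^{2}(r)\, dl\, d\Omega}_{\text{astrophysics: } J\text{-factor}}
```

Here ⟨σv⟩ is the velocity-averaged annihilation cross-section, m_χ the dark matter particle mass, dN_γ/dE the photon yield per annihilation, and ρ_χ the dark matter density; the ρ² dependence is what makes dense regions such as the galactic centre the most promising targets.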

Our galactic centre is one of the most promising regions to look for dark matter annihilation radiation due to its predicted very high dark matter density. It has been observed by many experiments so far (e.g. H.E.S.S., MAGIC and VERITAS) and high-energy gamma-ray emission has been found. However, the identification of dark matter in the galactic centre is complicated by the presence of many conventional source candidates and the difficulties of modelling the diffuse gamma-ray background adequately. The angular and energy resolution of CTA, as well as its enhanced sensitivity, will be crucial in disentangling the different contributions to the radiation from the galactic centre.

Other individual targets for dark matter searches are dwarf spheroidals and dwarf galaxies. They exhibit large mass-to-light ratios, and allow dark matter searches with low astrophysical backgrounds. With H.E.S.S., MAGIC and Fermi-LAT, some of these objects were observed and upper limits on dark matter annihilation calculated, which are currently about an order of magnitude above the prediction of the most relevant cosmological models. CTA will have good sensitivity for Weakly Interacting Massive Particle (WIMP) annihilation searches in the low and medium energy domains. An improvement in flux sensitivity of 1–2 orders of magnitude over current instruments is expected. Thus CTA will allow tests in significant regions of the MSSM parameter space.

Dark matter would also cause spectral and spatial signatures in extragalactic and galactic diffuse emission. While the emissivity of conventional astrophysical sources scales with the local matter density, the emissivity of annihilating dark matter scales with the density squared, causing differences in the small-scale anisotropy power spectrum of the diffuse emission.

Recent measurements of the positron fraction presented by the PAMELA Collaboration [47] point towards a relatively local source of positrons and electrons, especially if combined with the measurement of the e⁺e⁻ spectrum by Fermi-LAT [48]. The main candidates being put forward are either pulsar(s) or dark matter annihilation. One way to distinguish between these two hypotheses is the spectral shape. The dark matter spectrum exhibits a sudden drop at an energy corresponding to the dark matter particle mass, while the pulsar spectrum falls off more smoothly. Another discriminant is a small anisotropy, either in the direction of the galactic centre (for dark matter) or in the direction of the nearest mature pulsars. The large effective area of CTA, about six orders of magnitude larger than that of balloon- and satellite-borne experiments, and the greatly improved performance compared to existing Cherenkov observatories, might allow the measurement of the spectral shape and even the tiny dipole anisotropy.

If the PAMELA result originated from dark matter, the DM particle’s mass would be >1 TeV/c², i.e. large in comparison to most dark matter candidates in MSSM and Kaluza-Klein theories. With its best sensitivity at 1 TeV, CTA would be well suited to detect dark matter particles with TeV/c² masses. The best sensitivity of Fermi-LAT for dark matter is at masses of the order of 10–100 GeV/c².

Electrons and positrons originating from dark matter annihilation or decay also produce synchrotron radiation in the magnetic fields present in the dense regions where the annihilation might take place. This opens up the possibility of multi-wavelength observations. Regardless of the wavelength domain in which dark matter will be detectable using present or future experiments, it is evident that CTA will provide coverage for the highest-energy part of the multi-wavelength spectrum necessary to pinpoint, discriminate and study dark matter indirectly.

Due to their extremely short wavelength and long propagation distances, very high-energy gamma-rays are sensitive to the microscopic structure of space-time. Small-scale perturbations of the smooth space-time continuum should manifest themselves in an (extremely small) energy dependence of the speed of light. Such a violation of Lorentz invariance, on which the theory of special relativity is based, is present in some quantum gravity (QG) models. Burst-like events in which gamma-rays are produced, e.g. in active galaxies, allow this energy-dependent dispersion of gamma-rays to be probed and can be used to place limits on certain classes of quantum gravity scenarios, and may possibly lead to the discovery of effects associated with Planck-scale physics.

CTA has the sensitivity to detect characteristic time-scales and QG effects in AGN light curves (if indeed any exist) on a routine basis, without exceptional source flux states and in small observing windows. CTA can resolve time scales as small as a few seconds in AGN light curves and QG effects down to 10 s. Very good sensitivity at energies >1 TeV is especially important for probing the properties of QG effects at higher orders. Fermi recently presented results based on observations of a GRB which basically rule out linear-in-energy variations of the speed of light up to 1.2 times the Planck scale [49]. To test quadratic or higher-order dependencies, the sensitivity provided by CTA will be needed.
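As a rough illustration of the scales involved (not the analysis of [49]), the linear-in-energy time delay for a photon of energy E travelling a light-travel distance D is Δt ≈ (E/E_QG)·D/c. The sketch below evaluates this for assumed, illustrative values of a 1 TeV photon from a source 1 Gpc away, with E_QG at the Planck energy, and recovers the ~10 s scale quoted above.

```python
# Order-of-magnitude check of the linear-in-energy time delay expected if
# the photon speed varies as v ≈ c(1 - E/E_QG). All input values below are
# illustrative assumptions, not a measurement.
PC_M = 3.086e16           # metres per parsec
C = 2.998e8               # speed of light, m/s
E_PLANCK_GEV = 1.22e19    # Planck energy in GeV

def qg_delay_s(e_gev, distance_gpc, e_qg_gev=E_PLANCK_GEV):
    """Time delay (s) of a photon of energy E over a distance, linear LIV."""
    d_over_c = distance_gpc * 1e9 * PC_M / C   # light-travel time in seconds
    return (e_gev / e_qg_gev) * d_over_c

# A 1 TeV photon from a source ~1 Gpc away, E_QG at the Planck scale:
print(round(qg_delay_s(1000.0, 1.0), 1))   # ≈ 8.4 s
```

Quadratic suppression, (E/E_QG)², shrinks this delay by a further factor of E/E_QG, which is why probing higher-order terms requires the much better >1 TeV sensitivity that CTA offers.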

This topic is thoroughly discussed in the book “Particle dark matter” edited by G. Bertone [46], and aspects of the fundamental physics implications of VHE gamma-ray observations are covered in a recent review [50].

3.3.11 Imaging stars and stellar surfaces

The quest for better angular resolution in astronomy is driving much of the instrumentation developments throughout the world, from gamma-rays through low-frequency radio waves. The optical region is optimal for studying objects with stellar temperatures, and the current frontier in angular resolution is represented by optical interferometers such as ESO’s VLTI in Chile or the CHARA array in California. Recently, these have produced images of giant stars surrounded by ejected gas shells and revealed the oblate shapes of stars deformed by rapid rotation. However, such phase interferometers are limited by atmospheric turbulence to baselines of no more than some 100 m, and to wavelengths longer than the near infrared. Only very few stars are large enough to be imaged by current facilities. To see smaller details (e.g. magnetically active regions, planet-forming disks obscuring parts of the stellar disk) requires interferometric baselines of the order of 1 km. It has been proposed to incorporate such instruments on ambitious future space missions (Luciola Hypertelescope for the ESA Cosmic Vision; Stellar Imager as a NASA vision mission), or to locate them on the Earth in regions with the best-possible seeing, e.g. in Antarctica (KEOPS array). However, the complexity and cost of these concepts seems to put their realisation beyond the immediate planning horizon.

An alternative that can be realised much sooner is offered by CTA, which could become the first kilometre-scale optical imager. With many telescopes distributed over a square km or more, its unprecedented optical collecting area forms an excellent facility for ultrahigh angular resolution (sub-milliarcsecond) optical imaging through long-baseline intensity interferometry. This method was originally developed by Hanbury Brown and Twiss in the 1950s [51] for measuring the sizes of stars. It has since been extensively used in particle physics (“HBT interferometry”) but it has had no recent application in astronomy because it requires large telescopes spread out over large distances, which were not available until the recent development of atmospheric Cherenkov telescopes.
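The sub-milliarcsecond claim follows from the diffraction scaling θ ≈ λ/B. The sketch below, with an assumed observing wavelength of 400 nm, compares a ~100 m phase-interferometer baseline with a kilometre-scale baseline; the numbers are order-of-magnitude estimates, not a CTA specification.

```python
import math

# Diffraction-limit estimate θ ≈ λ/B for the angular scales an optical
# interferometer can resolve, converted to milliarcseconds (mas).
def resolution_mas(wavelength_m, baseline_m):
    """Angular resolution λ/B in milliarcseconds."""
    rad = wavelength_m / baseline_m
    return rad * (180.0 / math.pi) * 3600.0 * 1000.0   # rad → mas

print(round(resolution_mas(400e-9, 100.0), 2))   # ~100 m baseline: 0.83 mas
print(round(resolution_mas(400e-9, 1000.0), 3))  # ~1 km baseline: 0.083 mas
```

A kilometre baseline thus pushes the resolution an order of magnitude below the milliarcsecond level, into the regime needed to image stellar surfaces.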

The great observational advantages of intensity interferometry are its lack of sensitivity to atmospheric disturbances and to imperfections in the optical quality of the telescopes. This is because of the electronic (rather than optical) connection of telescopes. The noise relates to electronic timescales of nanoseconds (and light-travel distances of centimetres or metres) rather than to those of the light wave itself (femtoseconds and nanometres).

The requirements are remarkably similar to those for studying Cherenkov light: large light-collecting telescopes, high-speed optical detectors with sensitivity extending into the blue, and real-time handling of the signals at nanosecond levels. The main difference from ordinary Cherenkov telescope operation lies in the subsequent signal analysis, which digitally synthesises an optical telescope. From the viewpoint of observatory operations, it is worth noting that bright stars can be measured for interferometry during bright-sky periods of full Moon, which would hamper Cherenkov studies.

Science targets include studying the disks and surfaces of hot and bright stars [52, 53]. Rapidly rotating stars naturally take on an oblate shape, with an equatorial bulge that, for stars rotating close to their break-up speed, may extend into a circumstellar disk, while the regions with higher effective gravity near the stellar poles become overheated, driving a stellar wind. If the star is observed from near its equatorial plane, an oblate image results. If the star is instead observed from near its poles, a radial temperature gradient should be seen. Possibly, stars with rapid and strong differential rotation could take on shapes midway between a doughnut and a sphere. The method permits studies in both broad-band optical light and in individual emission lines, and enables the mapping of gas flows between the components of close binary stars.

3.3.12 Measurements of charged cosmic rays

Cherenkov telescopes can contribute to cosmic-ray physics by detecting these particles directly [54]. CTA can provide measurements of the spectra of cosmic-ray electrons and nuclei in the energy regime where balloon- and space-borne instruments run out of data. The composition of cosmic rays has been measured by balloon- and space-borne instruments (e.g. TRACER) up to ≈100 TeV. Starting at about 1 PeV, instruments can detect air showers at ground level (e.g. KASCADE). Such air shower experiments have, however, difficulties in identifying individual nuclei, and consequently their composition results are of lower resolution than direct measurements. Cherenkov telescopes are the most promising candidates to close the experimental gap between the TeV and PeV domains, and will probably achieve better mass resolution than ground-based particle arrays. Additionally, CTA can perform crucial measurements of the spectrum of cosmic-ray electrons. TeV electrons have very short lifetimes, and thus short propagation distances, due to their rapid energy loss. The upper end of the electron spectrum (which is not accessible to current balloon and satellite experiments) is therefore expected to be dominated by local electron accelerators, and the cosmic-ray electron spectrum can provide valuable information about the characteristics of the contributing sources and of electron propagation. While such measurements involve analyses that differ from conventional gamma-ray studies, a proof of principle has already been performed with the H.E.S.S. telescopes: spectra of electrons and iron nuclei have been published [55]. The increase in sensitivity expected from CTA will provide significant improvements in such measurements.

3.4 The CTA legacy

The CTA legacy will most probably not be limited to individual observations addressing the issues mentioned above, but also comprise a survey of the inner Galactic plane and/or, depending on the final array capabilities, a deep survey of all or part of the extragalactic sky. Surveys provide coverage of large parts of the sky, maximise serendipitous detections, allow for optimal use of telescope time, and thereby ensure the legacy of the project for the future scientific community. Surveys of different extents and depths are among the scientific goals of all major facilities planned or in operation at all wavelengths. In view of both H.E.S.S. (see Fig. 2) and Fermi-LAT survey results, the usefulness of surveys is unquestioned, and many of the scientific cases discussed above can be encompassed within such an observational strategy.

Two possible CTA survey schemes have been studied to date:

  • All-sky survey: With an effective field-of-view of 5°, 500 pointings of 0.5 hours would cover a survey area of a quarter of the sky at the target sensitivity of 0.01 Crab. Hence, using about a quarter of the observing time in a year, a quarter of the sky can be surveyed down to a level of <0.01 Crab, which is equivalent to the flux level of the faintest AGN currently detected at VHE energies.

  • Galactic plane survey: The H.E.S.S. Galactic plane survey covered 1.5% of the sky, at a sensitivity of 0.02 Crab above 200 GeV, using about 250 hours of observing time. The increase in CTA sensitivity means that a similar investment in time can be expected to result in a sensitivity of 2–3 mCrab over the accessible region of the Galactic plane.
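The arithmetic behind the all-sky survey scheme can be checked directly. The numbers below (5° field of view, 500 pointings of 0.5 h each) are taken from the scheme above, with the ~1,000 h of annual observing time discussed later in this section.

```python
import math

# Back-of-the-envelope check of the all-sky survey scheme:
# 500 pointings of a 5°-diameter effective field of view, 0.5 h each.
fov_diameter_deg = 5.0
fov_area_deg2 = math.pi * (fov_diameter_deg / 2.0) ** 2   # ≈ 19.6 deg² per pointing
full_sky_deg2 = 4.0 * math.pi * (180.0 / math.pi) ** 2    # ≈ 41,253 deg²

pointings = 500
covered_deg2 = pointings * fov_area_deg2   # ≈ 9,817 deg², roughly a quarter sky
total_time_h = pointings * 0.5             # 250 h, about a quarter of ~1,000 h/yr

print(round(fov_area_deg2, 1), round(covered_deg2), round(full_sky_deg2 / 4.0))
```

With ~19.6 deg² per pointing, 500 pointings indeed cover close to a quarter of the sky (≈10,313 deg²) in 250 h, i.e. about a quarter of a year's useful observing time, consistent with the scheme quoted above.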

The high-energy phenomena which can be studied with CTA span a wide field of galactic and extragalactic astrophysics, of plasma physics, particle physics, dark matter studies, and investigations of the fundamental physics of space-time. They carry information on the birth and death of stars, on the matter circulation in the Galaxy, and on the history of the Universe. Optimisation of the layout of CTA with regard to these different science goals is a difficult task, and detailed studies of the response of different array configurations to these scientific problems are being conducted during the Design Study and the Preparatory Phase.

4 Advancing VHE gamma-ray astronomy with CTA

The latest generation of ground-based gamma-ray instruments (H.E.S.S., MAGIC, VERITAS, CANGAROO III (http://icrhp9.icrr.u-tokyo.ac.jp) and MILAGRO (http://www.lanl.gov/milagro)) allows the imaging, photometry and spectroscopy of sources of high-energy radiation and has ensured that VHE gamma-ray studies have grown to become a genuine branch of astronomy. The number of known sources of VHE gamma rays now exceeds 100, and the source types include supernova remnants, pulsar wind nebulae, binary systems, stellar winds, various types of active galaxies and unidentified sources without obvious counterparts. H.E.S.S. has conducted a highly successful survey of the Milky Way covering about 600 square degrees, which resulted in the detection of tens of new sources. However, a survey of the full visible sky with current instruments would require at least a decade of observations, which is not feasible.

Due to the small fluxes, instruments for detection of high-energy gamma rays (above some 10 GeV) require a large effective detection area, eliminating space-based instruments which directly detect the incident gamma rays. Ground-based instruments allow much larger detection areas. They measure the particle cascade induced when a gamma ray is absorbed in the atmosphere, either by using arrays of particle detectors to record the cascade particles which reach the ground (or mountain altitudes), or by using Cherenkov telescopes to image the Cherenkov light emitted by secondary electrons and positrons in the cascade.

Compared to Cherenkov telescopes, air shower arrays (such as MILAGRO, AS-gamma or ARGO) have the advantage of a large duty cycle—they can observe during the daytime—and of a large solid angle coverage. However, their current sensitivity is such that they can only detect sources with a flux around the level of the flux from the Crab Nebula, the strongest known steady source of VHE gamma rays. Results from air shower arrays demonstrate that there are relatively few sources emitting at this level. The recent rapid evolution of VHE gamma-ray astronomy was therefore primarily driven by Cherenkov instruments, which reach sensitivities of 1% of the Crab flux for typical observing times of 25 h, and which provide significantly better angular resolution. While there are proposals for better air shower arrays with improved sensitivity (e.g. the HAWC project), which will certainly offer valuable complementary information, such approaches will not be able to compete in sensitivity with next-generation Cherenkov telescopes.

The properties of the major current and historic Cherenkov instruments are listed in Table 1. The instruments consist of up to four Cherenkov telescopes (or 5 for the H.E.S.S. II upgrade). They reach sensitivities of about 1% of the flux of the Crab Nebula at energies in the 100 GeV–1 TeV range. Sensitivity degrades towards lower energies, due to threshold effects, and towards higher energies, due to the limited detection area. A typical angular resolution is 0.1° or slightly better for single gamma rays. Sufficiently intense sources can be located with a precision of 10–20′′.

Table 1 Properties of selected air-Cherenkov instruments, including two of historical interest (HEGRA and CAT)

All these instruments are operated by the groups who built them, with very limited access for external observers and no provision for open data access. Such a mode is appropriate for current instruments, which detect a relatively limited number of sources, and where the analysis and interpretation can be handled by the manpower and experience accumulated in these consortia. However, a different approach is called for in next-generation instruments, with their expected ten-fold increase in the number of detectable objects. CTA will advance the state of the art in astronomy at the highest energies of the electromagnetic spectrum in a number of decisive areas, all of which are unprecedented in this field:

  • European and international integration CTA will for the first time bring together and combine the experience of virtually all groups world-wide working with atmospheric Cherenkov telescopes.

  • Performance of the instrument CTA aims to provide a full-sky view, from a southern and a northern site, with unprecedented sensitivity, spectral coverage, angular and timing resolution, combined with a high degree of flexibility of operation. Details are addressed below.

  • Operation as an open observatory The characteristics listed above imply that CTA will, for the first time in this field, be operated as a true observatory, open to the entire astrophysics (and particle physics) community, and providing support for easy access and analysis of data. Data will be made publicly available and will be accessible through Virtual Observatory tools. Service to professional astronomers will be supplemented by outreach activities and interfaces for laypersons to the data.

  • Technical implementation, operation, and data access While based on existing and proven techniques, the goals of CTA imply significant advances in terms of efficiency of construction and installation, in terms of the reliability of the telescopes, and in terms of data preparation and dissemination. With these characteristics, the CTA observatory is qualitatively different from experiments such as H.E.S.S., MAGIC or VERITAS and the increase in capability goes well beyond anything that could ever be achieved through an expansion or upgrade of existing instruments.

Science performance goals for CTA include in particular:

  • Sensitivity CTA will be about a factor of 10 more sensitive than any existing instrument. It will therefore for the first time allow detection and in-depth study of large samples of known source types, will explore a wide range of classes of suspected gamma-ray emitters beyond the sensitivity of current instruments, and will be sensitive to new phenomena. In its core energy range, from about 100 GeV to several TeV, CTA will have milli-Crab sensitivity, a factor of 1,000 below the strength of the strongest steady sources of VHE gamma rays, and a factor of 10,000 below the highest fluxes measured in bursts. This dynamic range will not only allow study of weaker sources and of new source types, it will also reduce the selection bias in the taxonomy of known types of sources.

  • Energy range Wide-band coverage of the electromagnetic spectrum is crucial for understanding the physical processes in sources of high-energy radiation. CTA is aiming to cover, with a single facility, three to four orders of magnitude in energy range. Together with the much improved precision and lower statistical errors, this will enable astrophysicists to distinguish between key hypotheses such as the leptonic or hadronic origin of gamma rays from supernovae. Combined with the Fermi gamma-ray observatory in orbit, an unprecedented seamless coverage of more than seven orders of magnitude in energy can be achieved.

  • Angular resolution Current instruments are able to resolve extended sources, but they cannot probe the fine structures visible in other wavebands. In supernova remnants, for example, the exact width of the gamma-ray emitting shell would provide a sensitive probe of the acceleration mechanism. Selecting a subset of gamma-ray induced cascades detected simultaneously by many of its telescopes, CTA can reach angular resolutions in the arc-minute range, a factor of 5 better than the typical values for current instruments.

  • Temporal resolution With its large detection area, CTA will resolve flaring and time-variable emission on sub-minute time scales, which are currently not accessible. In gamma-ray emission from active galaxies, variability time scales probe the size of the emitting region. Current instruments have already detected flares varying on time scales of a few minutes, requiring a paradigm shift concerning the phenomena in the vicinity of the super-massive black holes at the cores of active galaxies, and concerning the jets emerging from them. CTA will also enable access to episodic and periodic phenomena such as emission from inner stable orbits around black holes or from pulsars and other objects where frequent variations and glitches in period smear the periodicity when averaging over longer periods.

  • Flexibility Consisting of a large number of individual telescopes, CTA can be operated in a wide range of configurations, allowing on the one hand the in-depth study of individual objects with unprecedented sensitivity, and on the other hand the simultaneous monitoring of tens of potentially flaring objects, and any combination in between (see Fig. 3).

  • Survey capability A consequence of this flexibility is the dramatically enhanced survey capability of CTA. Groups of telescopes can point at adjacent fields in the sky, with their fields of view overlapping, providing an increase of sky area surveyed per unit time by an order of magnitude, and for the first time enabling a full-sky survey at high sensitivity.

  • Number of sources Extrapolating from the intensity distribution of known sources, CTA is expected to enlarge the catalogue of objects detected from currently several tens of objects to about 1,000 objects.

  • Global coverage and integration Ultimately, CTA aims to provide full sky coverage from multiple observatory sites, using transparent access and identical tools to extract and analyse data.

Fig. 3

Some of the possible operating modes of CTA: a very deep observations, b combining monitoring of flaring sources with deep observations, c a survey mode allowing full-sky surveys

The feasibility of the performance goals listed above is borne out by detailed simulations of arrays of telescopes, using currently available technology (details are given below). The implementation of CTA does, however, require significant advances in the engineering, construction and operation of the array, and in data access. These issues are addressed in the design study and the preparatory phase of CTA. Issues include:

  • Construction, installation and commissioning of the telescopes To reach the performance targets, tens of telescopes of 2–3 different types will be required, and the design of the telescopes must be optimised in terms of their construction cost, making best use of the economics of large-scale production. In current instruments, consisting at most of a handful of identical telescopes, design costs were a substantial fraction of total costs, enforcing a different balance between design and production costs. The design of the telescopes will have to concentrate on modularity and ease of installation and commissioning.

  • Reliability The reliability of current instruments is far from perfect, and down-times of individual telescopes due to hardware or software problems are non-negligible. For CTA, telescope design and software must provide significantly improved reliability. Frequent down-times of individual telescopes in the array or of pixels within a telescope not only require substantial technical on-site support and cause higher operating costs, but in particular they make the data analysis much more complicated, requiring extensive simulations for each configuration of active telescopes, and inevitably result in systematic errors which are likely to limit the achievable sensitivity.

  • Operation scheduling and monitoring The large flexibility provided by the CTA array also raises new challenges concerning the scheduling of observations, taking into account the state of the array and the state of the atmosphere. For example, sky conditions may allow “discovery observations” in certain parts of the sky, but may prevent precise, deep observations of a source. Availability of a given telescope may be critical for certain types of observations, but may not matter at all in modes where the array is split up in many sub-arrays tracking different sources at somewhat reduced sensitivity. To make optimum use of the facility, novel scheduling algorithms will need to be developed, and the monitoring of the atmosphere over the full sky needs to be brought to a new level of precision.

  • Data access So far, none of the current Cherenkov telescopes has made data publicly available, or has tools for efficient non-expert data access. Cherenkov telescopes are inherently more complicated than, say, X-ray satellite instruments in that they do not directly take images of the sky, but rather require extensive processing to go from the Cherenkov images to the parameters of the primary gamma ray. Depending on the emphasis in the data analysis—maximum detection rate, lowest energy threshold, best sensitivity, or highest angular resolution—there is a wide range of selection parameters, all resulting in different effective detection areas and instrument characteristics. Effective detection areas also depend on zenith angle, orientation relative to the Earth’s magnetic field, etc. Background subtraction is critical in particular for extended sources which may cover a significant fraction of the field of view. Providing efficient data access and analysis tools represents a major challenge and requires significant lead times and extensive software prototyping and tests.

5 Performance of Cherenkov Telescope Arrays

In order to achieve improvements of a factor of 10 in several areas, it is essential to understand and review the factors limiting the performance, and to establish to what extent the limitations are of a technical nature that can be overcome with sufficient effort (e.g. a given camera pixel size or point spread function (PSF) of the reflector), and to what extent they represent fundamental limitations of the technique (e.g. unavoidable fluctuations in the development of air showers).

To detect a cosmic gamma-ray source in a given energy band, three conditions have to be fulfilled:

  • The number of detected gamma rays N_γ has to exceed a minimum value, usually taken to be between 5 and 10 gamma rays. The number of gamma rays is the product of the flux ϕ_γ, the effective detection area A, the observing time T (for sensitivity evaluation usually taken as between 25 and 50 h) and a detection efficiency ε_γ which is typically not too far below unity. The number of detected gamma rays, and hence the effective area A, is virtually always the limiting factor at the high-energy end of the useful energy range. For example, to detect a 1% Crab source above 100 TeV, which is equivalent to a flux of 2×10⁻¹⁶ cm⁻² s⁻¹, in 50 h, an area A of ≥30 km² is required.

  • The statistical significance of the gamma-ray excess has to exceed a certain number of standard deviations, usually taken to be 5. For background-dominated observations of faint sources, the significance can be approximated as \(N_\gamma/\sqrt{N_{bg}}\), where the background events N_bg arise from cosmic-ray nuclei, cosmic-ray electrons, local muons, or random images caused by night-sky background (NSB) light. Background events are usually distributed more or less uniformly across the useful field of view of the instrument. Their number is given by the flux per unit solid angle, ϕ_bg, the solid angle Ω_src over which gamma rays from a candidate source (and hence background) are accumulated, the effective detection area A_bg, the observation time and a background rejection factor ϵ_bg. The sensitivity limit ϕ_γ is hence proportional to \(\sqrt{\epsilon_{bg} A_{bg} T \Omega_{src}}/(\epsilon_\gamma A_\gamma T) \sim \sqrt{\epsilon_{bg}\,\Omega_{src}}/\sqrt{A T}\) (assuming A_bg ≈ A_γ = A). In current instruments, electron and cosmic-nucleon backgrounds limit the sensitivity in the medium to lower part of their energy range.

  • The systematic error on the number of excess gamma rays due to uncertainties in background estimates and background subtraction has to be sufficiently small, and has to be accounted for in the calculation of the significance. Fluctuations in the background rates due to changes in voltages, pulse shapes or calibration, in particular when non-uniform over the field of view, or in the cut efficiencies, e.g. due to non-uniform NSB noise, will result in such background systematics. Effectively, this means that a minimal signal-to-background ratio is required to safely detect a source. The systematic limitation becomes important in the limit of small statistical errors, when event numbers are very large due to large detection areas, long observation times, or low energy thresholds resulting in high count rates. Since both signal and background scale with A and T, the systematic sensitivity limit is proportional to the relative background rate, ϕ_γ ∼ ϕ_bg ϵ_bg Ω_src/ϵ_γ. For current instruments, background uncertainties at a level of a few % have been reported [57]. High reliability and availability of telescopes and pixels, as well as improved schemes for calibration and monitoring, will be crucial in controlling systematic errors and exploiting the full sensitivity of the instrument. An accuracy of the background modelling and subtraction of 1% seems reasonable and is assumed in the following. Systematic errors may still limit the sensitivity in the sub-100 GeV range.
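The three conditions above can be combined into a minimal numerical sketch, in the spirit of the toy model of Fig. 4. The fixed parameters below (1 km² area, ϵ_γ = 0.5, ϵ_bg = 0.01, δ = 0.1°, 50 h, 1% systematics) follow the toy-model assumptions, but the background fluxes passed in the examples are illustrative placeholders, not measured values.

```python
import math

# Minimal sketch of the three sensitivity limits described above. For a
# given background flux per unit solid angle, the limiting gamma-ray flux
# is whichever of the three conditions demands the highest source flux.
def min_flux(phi_bg, A=1e10, T_h=50.0, eps_g=0.5, eps_bg=0.01,
             omega_src=math.pi * 0.1**2 * (math.pi / 180.0)**2,
             n_min=10, sigma=5.0, syst=0.01):
    """Return the limiting flux [cm^-2 s^-1] and which condition binds.

    phi_bg : background flux per unit solid angle [cm^-2 s^-1 sr^-1]
    A : effective area [cm^2]; T_h : observing time [h]
    """
    T = T_h * 3600.0
    n_bg = phi_bg * eps_bg * A * T * omega_src          # expected background
    limits = {
        "counts": n_min / (eps_g * A * T),              # N_gamma >= n_min
        "significance": sigma * math.sqrt(n_bg) / (eps_g * A * T),
        "systematics": syst * phi_bg * omega_src * eps_bg / eps_g,
    }
    binding = max(limits, key=limits.get)
    return limits[binding], binding

# Very low background (high energies): photon counts limit the sensitivity.
print(min_flux(phi_bg=1e-15)[1])   # prints "counts"
# Moderate background (intermediate energies): statistical significance binds.
print(min_flux(phi_bg=1e-6)[1])    # prints "significance"
# Very high background rate (low energies): the 1% systematics dominate.
print(min_flux(phi_bg=1e-2)[1])    # prints "systematics"
```

The progression of binding limits with increasing background flux mirrors the three regimes visible in Fig. 4, from the count-limited high-energy end to the systematics-limited threshold region.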

Figure 4 illustrates the various sensitivity limitations in the context of a simple toy model. Obviously, sensitivity is boosted by a large effective area A, efficient rejection of background, i.e. small ϵ_bg, and, in the case of point-like structures, by good angular resolution δ, with \(\Omega_{src} \propto \delta^2\). Sensitivity gains can furthermore be achieved with a large field of view of the instrument, observing multiple sources at a time and effectively multiplying the attainable observation time T.

Fig. 4

Toy model of a telescope array to illustrate the limiting sensitivity, quoted as the minimal detectable fraction of the Crab flux per energy band Δlog₁₀(E) = 0.2 (assuming a simple power law for the Crab flux and ignoring the change in spectral index at low energy). The model assumes an energy-independent effective detection area of 1 km², a gamma-ray efficiency ϵ_γ of 0.5, the same efficiency for the detection of cosmic-ray electrons, a cosmic-ray efficiency after cuts of ϵ_bg = 0.01, an angular resolution δ of 0.1° defining the integration region Ω_src, and a systematic background uncertainty of 1%. The model takes into account that cosmic-ray showers generate less Cherenkov light than gamma-ray showers, and are hence reconstructed at lower equivalent gamma-ray energy. At high energies, the sensitivity is limited by the gamma-ray count rate (black line), at intermediate energies by the electron (red) and cosmic-ray (green) backgrounds, and at low energies, in the area of high statistics, by the systematic background uncertainty (purple). The plot also includes the effect of the PSF improving like \(1/\sqrt{E}\) (with PSF = 0.1° for 80% containment at 200 GeV)

The annual exposure time amounts to about 1,000 h of useful moonless observation time per year, varying by maybe 20% between good and excellent sites. Observations with partial moon may increase this by a factor of 1.5, at the expense of reduced performance, depending on the amount of stray light. Some instruments, such as MAGIC, routinely operate under moonlight [58]. While in principle more than 500 h per year can be dedicated to a given source (depending on its RA, and the maximum zenith angle under which observations are carried out), in practice rarely more than 50 h to at most 100 h are dedicated to a given source per year. With the increased number of sources detectable for CTA, there will be pressure to reduce the time per source compared to current observations.

In real systems, the effective area A, the background rejection ϵ_bg and the angular resolution δ depend on the gamma-ray energy, since a minimal number of detected Cherenkov photons (around 50–100) is required to detect and analyse an image, and since the quality of the shower reconstruction depends on both the statistics of detected photons and of shower particles. The performance of the instrument depends on whether gamma-ray energies are in the sub-threshold regime, near the nominal energy threshold, or well above threshold.

In the sub-threshold regime, the amount of Cherenkov light is below the level needed for the trigger logic, at a sufficiently low rate of random triggers due to NSB photons. Only showers with upward fluctuations in the amount of Cherenkov light will occasionally trigger the system. At GeV energies these fluctuations are large and there is no sharp trigger threshold. Energy measurement in this domain is strongly biased.

In the threshold regime, there is usually enough Cherenkov light for triggering the system but the signal in each telescope may still be too low for (a) location of the image centroid, (b) determination of the direction of the image major axis, or (c) accurate energy assignment. Frequently, a higher threshold than that given by the trigger is imposed in the data analysis. Most showers with upward fluctuations will be reconstructed in a narrow energy range at the trigger (or analysis) threshold. Sources with cut-offs below the analysis threshold may be detectable but only at very high flux levels. Good imaging and spectroscopic performance of the instrument is only available at energies ≥1.5× the trigger threshold.

High sensitivity over a wide energy range therefore requires an instrument which is able to detect a sufficient number of Cherenkov photons for low-energy showers, which covers a very large area for high-energy showers, and which provides high angular resolution and background rejection. High angular resolution is also crucial to resolve fine structures in extended sources such as supernova remnants. On the other hand, for the detection of extended sources, the integration region Ω_src is determined by the source size rather than the angular resolution, and cosmic-ray rejection becomes the most critical parameter in minimising statistical and systematic uncertainties.

A crucial question is therefore to what extent angular resolution and cosmic-ray rejection can be influenced by the design of the instrument, through parameters such as the number of Cherenkov photons detected or the size of the photo-sensor pixels. Simulation studies assuming an ideal instrument [59], one which detects all Cherenkov photons reaching the ground with perfect resolution for impact point and photon direction, show that the achievable resolution and background rejection are ultimately limited by fluctuations in the shower development. Angular resolution is in addition influenced by the deflection of shower particles in the Earth’s magnetic field, making the reconstructed shower direction dependent on the energy sharing between electron and positron in the first conversion of a gamma ray (Fig. 5). However, these resolution limits (Fig. 6) are well below the resolution achieved by current instruments. At 1 TeV, a resolution below one arc-minute is in principle achievable. Similar conclusions appear to hold for cosmic-ray background rejection. There is a virtually irreducible background due to events in which, in the first interaction of a cosmic ray, almost all the energy is transferred to one or a few neutral pions and, therefore, to electromagnetic cascades (see, e.g. [60]). However, with their typical cosmic-ray rejection factors of >10³ at TeV energies, current instruments still seem 1–2 orders of magnitude away from this limit, offering room for improvement. Such improvements could result from improved imaging of the air shower, both in terms of resolution and photon statistics, and from using a large and sensitive array to veto cosmic-ray-induced showers based on the debris frequently emitted at relatively large angles to the shower axis.

Fig. 5
figure 5

Two low-energy gamma-ray showers developing in the atmosphere. Both gamma rays were incident vertically. The difference in shower direction results from the energy sharing between electron and positron in the first conversion and the subsequent deflection in the Earth’s magnetic field

Fig. 6
figure 6

Limiting angular resolution of Cherenkov instruments as a function of gamma-ray energy, derived from a likelihood fit to the directions of all Cherenkov photons reaching the ground, and assuming perfect measurement of photon impact point and direction. At low energies, the resolutions differ in the bending plane of the Earth’s magnetic field (open symbols) and in the orthogonal direction (closed symbols). The simulations assume near-vertical incidence at the H.E.S.S. site in Namibia

At low energies, cosmic-ray electrons become the dominant background, due to their steep spectrum. Electrons and gamma-rays cannot be distinguished efficiently using shower characteristics, as both induce electromagnetic cascades. The height of the shower maximum differs by about one radiation length [61], but this height also fluctuates from shower to shower by about one radiation length, rendering efficient rejection impossible. A technique which is beyond the capability of current instruments, but might become possible with future arrays, is to detect Cherenkov radiation from the primary charged particle and use it as a veto [59]. Detection of this “direct Cherenkov light” has been proposed [54] and successfully applied [62] for highly charged primary nuclei such as iron, where Cherenkov radiation is enhanced by a factor of Z². While in a 100 m² telescope an iron nucleus generates O(1000) detected photons, a charge-1 primary will provide at most a few photons, not far from night-sky noise levels. Larger telescopes, possibly with improved photo-sensors, fine pixels and high temporal resolution, could enable detection of primary Cherenkov light from electrons, at the expense of gamma-ray efficiency, since gamma-rays converting at high altitude will be rejected too, and since unrelated nearby cosmic rays may generate fake vetoes. Nevertheless, this approach (not yet studied in detail) may help at the lowest energies, where event numbers are high but the background systematics carry large uncertainties. Sakahian et al. [63] note that at energies <20 GeV, deflection of electrons in the Earth’s magnetic field is sufficiently large to disperse Cherenkov photons over a larger area on the ground, reducing the light density and therefore the electron-induced trigger rate. The effect is further enhanced by a dispersion in photon arrival times.
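The Z² scaling behind the direct-Cherenkov veto can be illustrated with a short sketch. The photon count for iron is the order-of-magnitude figure quoted above for a 100 m² telescope; the helper function is hypothetical:

```python
# Back-of-envelope sketch of the direct-Cherenkov yield: the light from
# the primary particle scales with Z^2, so scaling the quoted O(1000)
# photons for iron (Z = 26) down to a charge-1 primary leaves only a
# couple of photons, close to night-sky-background levels.

Z_IRON = 26
PHOTONS_IRON = 1000   # order-of-magnitude figure quoted for a 100 m^2 telescope

def primary_cherenkov_photons(z, photons_ref=PHOTONS_IRON, z_ref=Z_IRON):
    """Scale the detected direct-Cherenkov photon count as Z^2."""
    return photons_ref * (z / z_ref) ** 2

print(primary_cherenkov_photons(26))  # iron: 1000 photons
print(primary_cherenkov_photons(1))   # electron/proton: ~1.5 photons
```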

In summary, it is clear that the performance of Cherenkov telescope arrays can be improved significantly, before fundamental limitations are reached.

6 The Cherenkov Telescope Array

The CTA consortium plans to operate from one site in the southern and one in the northern hemisphere, allowing full-sky coverage. The southern site will cover the central part of the galactic plane and see most of the galactic sources and will therefore be designed to have sensitivity over the full energy range. The northern site will be optimised for extragalactic astronomy, and will not require coverage of the highest energies.

Determining the arrangement and characteristics of the CTA telescopes in the two arrays is a complex optimisation problem, balancing cost against performance in different bands of the spectrum. This section will address the general criteria and considerations for this optimisation, while the technical implementation is covered in the following sections.

6.1 Array layout

Given the wide energy range to be covered, a uniform array of identical telescopes, with fixed spacing, is not the most efficient solution for the CTA. For the purpose of discussion, separation into three energy ranges, without sharp boundaries, is appropriate:

  • The low-energy range, ≤ 100 GeV: To detect showers down to a few tens of GeV, the Cherenkov light needs to be sampled and detected efficiently, with the fraction of area covered by light collectors being of the order of 10% (assuming conventional PMT light sensors). Since event rates are high and systematic background uncertainties are likely to limit the achievable sensitivity, the area of this part of the array can be relatively small, of the order of a few 10⁴ m². Efficient photon detection can be achieved either with few large telescopes or many telescopes of modest size. For very large telescopes, the cost of the dish structures dominates; for small telescopes, the photon detectors and electronics account for the bulk of the cost. A (shallow) cost optimum in terms of cost per telescope area is usually reached for medium-sized telescopes in the 10–15 m diameter range. However, if small to medium-sized telescopes are used in this energy range, the challenge is to trigger the array, since no individual telescope detects enough Cherenkov photons to provide a reliable trigger signal. Trigger systems which combine and superimpose images at the pixel level in real time, with a time resolution of a few ns, can address this issue [64] but represent a significant challenge, given that a single 1,000-pixel telescope sampled at (only) 200 MHz and 8 bits per pixel generates a data stream of more than one Tb/s. CTA designs conservatively assume a small number of very large telescopes, typically with about a 20–30 m dish diameter, to cover the low-energy range.

  • The core energy range, from about 100 GeV to about 10 TeV: Shower detection and reconstruction in this energy range are well understood from current instruments, and an appropriate solution seems to be a grid of telescopes of the 10–15 m class, with a spacing of about 100 m. Improved sensitivity is obtained both by the increased area covered and by the higher quality of shower reconstruction, since showers are typically imaged by a larger number of telescopes than is the case for current few-telescope arrays. For the first time, array sizes will be larger than the Cherenkov light pool, ensuring that images will be uniformly sampled across the light pool and that a number of images are recorded close to the optimum distance from the shower axis (about 70–150 m), where the light intensity is large and intensity fluctuations are small, and where the shower axis is viewed under a sufficiently large angle for efficient reconstruction of its direction. At H.E.S.S., for example, events which are seen and triggered by all four telescopes provide significantly improved resolution and strongly reduced backgrounds, but represent only a relatively small fraction of events. Unless energies are well above the trigger threshold, only events with shower core locations within the telescope square can trigger all telescopes. A further advantage is that an extended telescope grid operated with a two-telescope trigger condition will have a lower threshold than a small array, since there are always telescopes sufficiently close to the shower core.

  • The high-energy range, above 10 TeV: Here, the key limitation is the number of detected gamma-ray showers, and the array needs to cover multi-km² areas. At high energies the light yield is large, so showers can be detected well beyond the 150-m radius of a typical Cherenkov light pool. Two implementation options can be considered: either a large number of small telescopes with mirror areas of a few m² and spacing matched to the size of the light pool of 100–200 m, or a smaller number of larger telescopes, with some 10 m² mirror area, which can see showers up to distances of ≥500 m and can hence be deployed with a spacing of several hundred metres, or in widely separated subclusters of a few telescopes. While it is not immediately obvious which option offers the best cost/performance ratio at high energies, the subcluster concept with larger telescopes has the advantage of providing additional high-quality shower detection towards lower energies, for impact points near the subcluster.
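The data-stream figure quoted for the pixel-level trigger option in the low-energy range is a straightforward product of the numbers given in the text, which a one-line calculation confirms:

```python
# Sanity check of the raw data-stream estimate for a pixel-level trigger:
# a single 1,000-pixel telescope sampled at 200 MHz with 8 bits per sample.

pixels = 1_000
sampling_hz = 200e6    # 200 MHz
bits_per_sample = 8

rate_tbps = pixels * sampling_hz * bits_per_sample / 1e12
print(f"{rate_tbps:.1f} Tb/s")  # 1.6 Tb/s, i.e. "more than one Tb/s"
```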

Figure 7 shows possible geometries of arrays with separate regions optimised for low, intermediate and high energies.

Fig. 7
figure 7

A quadrant of possible array schemes promising excellent sensitivity over an extended energy range, as suggested by the Monte Carlo studies. The centre of the installation is near the upper left corner. Telescope diameters are not drawn to scale. In the upper right part, clusters of telescopes of the 12-m class are shown at the perimeter, while in the lower left part an option with wide-angle telescopes of the 3–4 m class is shown

6.2 Telescope layout

Irrespective of the technical implementation details, as far as its performance is concerned, a Cherenkov telescope is primarily characterised by its light collection capability, i.e. the product of mirror area, photon collection efficiency and photon detection efficiency, by its field of view and by its pixel size, which limits the size of image features which can be resolved. The optical system of the telescope should obviously be able to achieve a point spread function matched to the pixel size. The electronics for signal capture and triggering should provide a bandwidth matched to the length of Cherenkov pulses of a few nanoseconds. The performance of an array also depends on the triggering strategy: Cherenkov emission from air showers has to be separated in real time from the high flux of night-sky background photons, based on individual images and global array information. The huge data stream from Cherenkov telescopes does not allow untriggered recording.

The required light collection capability in the different parts of the array is determined by the energy thresholds, as outlined in the previous section. In the following, field of view, pixel size and the requirements on the readout system and trigger system are reviewed.

6.2.1 Field of view

Besides mirror area, an important telescope design parameter is the field of view. A relatively large field of view is mandatory for the widely spaced telescopes of the high-energy array, since the distance of the image from the camera centre scales with the distance of the impact point of the air shower from the telescope. For the low- and intermediate-energy arrays, the best choice of the field of view is not trivial to determine. From the science point of view, large fields of view are highly desirable, since they allow:

  • the detection of high-energy showers at large impact distance without image truncation;

  • the efficient study of extended sources and of diffuse emission regions; and

  • large-scale surveys of the sky and the parallel study of many clustered sources, e.g. in the band of the Milky Way.

In addition, a larger field of view generally helps in improving the uniformity of the camera and reducing background systematics.

However, for a given pixel size, larger fields of view result in rapidly growing numbers of photo-sensor pixels and electronics channels. Large fields of view also require technically challenging telescope optics. With the current single-mirror optics and f/d ratios of up to 1.2, an acceptable point spread function is obtained out to 4–5°. Larger fields of view with single-mirror telescopes require increased f/d ratios, in excess of 2 for a 10° field of view (see Fig. 8, [65]), which are mechanically difficult to realise, since a large and heavy focus box needs to be supported at a long distance from the dish. Also, the single-mirror optics solutions which provide the best imaging use Davies–Cotton or elliptical dish geometries, which in turn result in a time dispersion of shower photons that seriously impacts the trigger performance once dish diameters exceed 15 m. An alternative solution is the use of secondary mirrors. Using non-spherical primaries and secondaries, good imaging over fields of up to 10° diameter can be achieved [66]. Disadvantages are the increased cost and complexity, significant shadowing of the primary mirror by the secondary, and complex alignment issues if faceted primary and secondary mirrors are used. The resulting large range of incidence angles of photons onto the camera can also make baffling against stray light (albedo) an issue.

Fig. 8
figure 8

Focal ratio required for sufficiently precise shower imaging, as a function of the half angle of the field of view [65]. Points: simulations for spherical design (green), parabolic design with constant radii (red), Davies–Cotton design (violet), parabolic design with adjusted radii (blue). Lines: third-order approximation for a single-piece paraboloid (red) and a single-piece sphere (green)

The choice of the field of view therefore requires that the science gains be carefully balanced against the cost and increased complexity. When searching for unknown source types which are not associated with non-thermal processes in other, well-surveyed wavelength domains, a large field of view helps, as several sources may appear in a typical field of view. This increases the effective observation time per source by a corresponding factor compared to an instrument which can look at only one source at a time. An instrument with CTA-like sensitivity is expected to detect of the order of 1,000 sources. In the essentially one-dimensional galactic plane, there will always be multiple sources in a field of view. In extragalactic space, the average angular distance between (an estimated 500) sources would be about 10°, implying that even for the maximum conceivable fields of view the gain is modest. Even in the galactic plane, a very large field of view will not be the most cost-effective solution, since the gain in terms of the number of sources viewed simultaneously scales essentially with the diameter of the field of view, given that sources are likely to cluster within a fraction of a degree of the plane, whereas camera costs scale with the diameter squared. A very rough estimate based on typical dish costs and per-channel pixel and readout costs suggests an economic optimum in the cost per source-hour at a field-of-view diameter of around 6–8°.
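The cost argument above can be made concrete with a toy model: the number of galactic-plane sources viewed simultaneously scales roughly linearly with the field-of-view diameter, while camera cost scales quadratically on top of a fixed dish cost. The cost coefficients below are illustrative assumptions chosen only to land in the quoted 6–8° ballpark, not CTA cost figures:

```python
# Toy model of the cost-per-source-hour argument: sources viewed per
# pointing ~ FoV diameter (thin galactic plane); total cost ~ fixed dish
# cost + camera cost ~ diameter^2. Coefficients are ILLUSTRATIVE
# ASSUMPTIONS, not CTA cost figures.

def cost_per_source_hour(fov_deg, dish_cost=1.0, camera_cost_per_deg2=0.02):
    sources_viewed = fov_deg                      # ~linear in diameter
    total_cost = dish_cost + camera_cost_per_deg2 * fov_deg ** 2
    return total_cost / sources_viewed

# Scan candidate diameters from 2.0 to 14.9 deg. For this toy model the
# minimum sits at sqrt(dish_cost / camera_cost_per_deg2) ~ 7 deg and is
# shallow, as the text argues.
best = min((f / 10 for f in range(20, 150)), key=cost_per_source_hour)
print(f"toy optimum near {best:.1f} deg field of view")
```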

The final choice of the field of view will have to await detailed studies related to dish and mirror technology and costs, and the per-channel cost of the detection system.

Sensitivity estimates given below do not include an enhancement factor accounting for multiple sources in the field of view, but effective exposure time should increase by factors of ≥4 for Galactic sources, and sensitivity correspondingly by factors of ≥2.

6.2.2 Pixel size

The size of focal plane pixels is another parameter which requires careful optimisation. Figure 9 illustrates how a shower image is resolved at pixel sizes ranging from 0.28° (roughly the pixel size of the HEGRA telescopes) down to 0.07°, as used for example in the large H.E.S.S. II telescope. The cost of focal plane instrumentation is currently driven primarily by the number of pixels and therefore scales like the inverse square of the pixel size. The gain from using small pixels depends strongly on the analysis technique. In the classical second-moment analysis, performance seems to saturate for pixels smaller than 0.15–0.2° [67]. Analysis techniques which use the full image distribution (e.g. [68]), on the other hand, can extract the information contained in the well-collimated head part of high-intensity images, as compared to the more diffuse tail, and benefit from pixel sizes as small as 0.03–0.06° [59, 66]. Pixel size also influences trigger strategies. For large pixels, gamma-ray images are contiguous, allowing straightforward topological triggers, whereas for small pixels, low-energy gamma-ray images may have gaps between triggered pixels.

Fig. 9
figure 9

Part of the field of view of cameras with different pixel sizes (0.07, 0.10, 0.14, 0.20, and 0.28°) but identical field-of-view (of about 6°), viewing the same shower (460 GeV gamma-ray at 190 m core distance) with a 420 m2 telescope. Low-energy showers would be difficult to register, both with very small pixels (signal not contiguous in adjacent pixels) and with very large pixels (not enough pixels triggered above the increased thresholds, due to high NSB rates)

The final decision concerning pixel size (and telescope field of view) will to a significant extent be driven by the cost per pixel. Current simulations favour pixel sizes of 0.07–0.1° for the large telescopes, allowing the resolution of compact low-energy images and reducing the rate of NSB photons in each pixel; 0.15–0.2° for the medium-sized telescopes, similar to the pixel sizes used by H.E.S.S. and VERITAS; and 0.2–0.3° for the pixels of the telescopes in the halo of the array, where large fields of view are required but shower images also tend to be long, due to the large impact distances and the resulting viewing angles. Studies to determine the benefits of smaller pixels, as proposed for AGIS-type dual-mirror telescopes (http://tmva.sourceforge.net), are underway for the medium-sized telescopes.

6.2.3 Signal recording

Most modern telescopes use some kind of transient recorder to capture pixel signals, either with analogue switched-capacitor systems or with fast digitisers [69], so that, at least in principle, signal shape and timing can be used in the image analysis. Signal shape and timing can be employed in two ways: (a) to reject backgrounds such as hadronic showers and local muons; and (b) to reduce the signal integration windows and hence the amount of NSB noise in the shower image. For example, muon rejection based on signal waveform is discussed in [70]. Quantifying how much background rejection can be improved using these techniques is non-trivial, since the effect of signal-shape image selection is correlated with other cuts imposed in the analysis. For single telescopes, signal shape and timing can provide significant improvements. For telescope systems, the cuts on image shapes in multiple telescopes are already very powerful, and background events passing these cuts will have images and signal shapes that look very much like those of gamma-rays, so that less improvement is expected, if any. The second area where signal waveform recording can improve performance concerns the signal amplitudes. In particular for larger shower impact parameters, photon arrival times are not isochronous across the image (Fig. 10), and photons in the “tail” end of the image arrive with significant delays compared to those from its “head”. Use of variable, matched integration windows across the image allows the extraction of shower signals with minimal contamination from NSB noise. Signal shape and timing information is already used in the current MAGIC [71] and VERITAS systems, and these results will help to guide final design choices for CTA.
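The idea of a matched integration window can be shown with a minimal sketch: instead of integrating a fixed, wide window (which collects extra NSB noise), slide a short window over the sampled trace and integrate where the charge is largest. The trace values and window length below are invented for illustration, not CTA readout parameters:

```python
# Minimal sketch of a "matched integration window" for one pixel trace.

def sliding_window_charge(samples, window):
    """Return (charge, start_index) of the `window`-sample slice with the
    largest summed charge."""
    best = max(range(len(samples) - window + 1),
               key=lambda i: sum(samples[i:i + window]))
    return sum(samples[best:best + window]), best

# Toy trace: flat NSB pedestal with a short Cherenkov pulse near sample 12.
trace = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 2, 9, 14, 11, 4, 1, 0, 1, 1]
charge, start = sliding_window_charge(trace, window=4)
print(charge, start)  # 38 12: the short window catches the pulse, little NSB
```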

Fig. 10
figure 10

Integrated signal (upper left) and 1 ns samples of the development of a 10 TeV gamma shower at 250 m core distance as seen in a telescope with optics and pixels similar to a H.E.S.S.-1 telescope but with a FoV of 10° diameter. Pixels near the “head” of the shower have a pulse width dominated by the single photoelectron pulse width, while those in the “tail” of the shower see longer pulses. The shower image moves across almost half the FoV in about 25 ns

The performance numbers quoted for the simulations described below are conservative in that they are based on fixed (and relatively large) signal integration windows. Improvements can be expected once the use of image shape information is fully understood.

6.2.4 Trigger

The trigger scheme and readout electronics are closely related and fundamentally influence the design and performance of the telescope array. For most applications, multi-telescope trigger coincidence is required to reject backgrounds at the trigger level and to reduce the load on the data acquisition system. The main issue here is how much information is exchanged between telescopes, and how image information is stored while the trigger decision is made.

One extreme scenario is to let each telescope trigger independently and only exchange a trigger flag with neighbouring telescopes, allowing identification of coincident triggers (e.g. [72]). The energy threshold of the system is then determined by the minimum threshold at which a telescope can trigger. The other extreme is to combine signals from different telescopes at the pixel level, either in analogue or digital form, and to extract common image features. In this case, the system energy threshold could be well below the thresholds of individual telescopes, which is important when the array is made up of many small or medium-sized telescopes. However, the technical complexity of such a solution is significant. There is a wide range of intermediate solutions, where trigger pre-processors extract image features, such as the image centroid, on a telescope basis and the system trigger decision includes this information.

In cases where individual telescopes generate a local trigger, pixel signals need to be stored while a global trigger decision is made. The time for which signals can be stored without introducing deadtime is typically milliseconds in the case of digital storage and microseconds if analogue storage is used, which strongly influences the design of higher-level triggers.

Trigger topology is another important issue. Triggers can either be derived locally within the array, by some trigger logic connecting neighbouring telescopes, or all trigger information can be routed to a central station where a global decision is made, which is then propagated back to the telescopes. The first approach requires shorter signal storage at the telescopes and is more easily scaled up to large arrays; the second provides maximum flexibility. Whether local or global, trigger schemes will employ a multi-level hierarchy, with a first trigger level acting on pixels and pixel groups, and higher levels using information on image topology and/or the topology of triggered telescopes in the array. As in modern high-energy physics experiments, trigger decisions will, to the extent possible, be performed using programmable rather than “hardwired” processors. If the signal is recorded using fast digitisers, even the first-level discrimination of pixel signals could be implemented digitally in the gate array controlling the digitiser, instead of applying analogue thresholds.

Whatever implementation is chosen, it is important that the trigger system is very flexible and software-configurable, since operation modes vary from deep observations, where all telescopes follow the same source, to monitoring or survey applications, where groups of a few telescopes or even single telescopes point in different directions.

The simulations discussed below assume a very conservative approach. Each telescope makes an independent trigger decision with thresholds defined such that the telescope trigger rate is in the manageable range of a few to some tens of kHz. This is followed by a global decision based on the number of triggered telescopes.
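The conservative scheme just described can be sketched in a few lines: each telescope triggers independently on its own pixels, and the global decision is a simple multiplicity count. The pixel thresholds and multiplicities below are illustrative assumptions, not the values used in the CTA simulations:

```python
# Sketch of a two-level trigger: independent local telescope triggers
# followed by a global telescope-multiplicity decision. Thresholds and
# multiplicities are illustrative assumptions.

def telescope_trigger(pixel_signals, pixel_threshold=4.0, min_pixels=3):
    """Local trigger: at least `min_pixels` pixels above threshold."""
    return sum(s > pixel_threshold for s in pixel_signals) >= min_pixels

def array_trigger(event, min_telescopes=2, **kw):
    """Global trigger: multiplicity of locally triggered telescopes."""
    return sum(telescope_trigger(tel, **kw) for tel in event) >= min_telescopes

# Example event: two telescopes see a compact image, a third sees only noise.
event = [[6, 7, 5, 1, 0], [5, 9, 6, 2, 1], [1, 0, 2, 1, 0]]
print(array_trigger(event))  # True: two telescopes pass the local trigger
```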

6.3 CTA performance summary

Section 8 gives a detailed description of the layout and performance studies conducted so far for CTA. Many candidate layouts have been considered. Here we provide a brief description of the nature and performance of one promising configuration (E), which is illustrated in Fig. 18. This configuration utilises three telescope types: four 24 m telescopes with a 5° field of view and 0.09° pixels, 23 telescopes of 12 m diameter with an 8° field of view and 0.18° pixels, and 32 telescopes of 7 m diameter with a 10° field of view and 0.25° pixels. The telescopes are distributed over ∼3 km² on the ground, and the effective collection area of the array is considerably larger than this at energies beyond 10 TeV. The sensitivity of array E, obtained from detailed calculations using standard data analysis techniques, is shown in Fig. 23. More sophisticated analyses result in sensitivities that are ∼20% better across the whole energy range. As Fig. 23 shows, such an array performs an order of magnitude better than an instrument like H.E.S.S. over most of the required energy range. Figure 25 shows the angular resolution of this array, which approaches one arc-minute at high energies. The energy resolution of layout E is better than 10% above a few hundred GeV.
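As a rough cross-check of these camera parameters, the channel count of each telescope type can be estimated from the quoted field of view and pixel size as N ≈ (FoV/pixel)². This simple square-field estimate ignores the circular camera outline and pixel packing, so real cameras would have somewhat fewer pixels:

```python
# Rough channel-count estimate for the three telescope types of
# configuration E: N ~ (FoV diameter / pixel size)^2. Ignores the circular
# camera shape and pixel packing, so the numbers are only indicative.

def n_pixels(fov_deg, pixel_deg):
    """Square-field estimate of the number of camera pixels."""
    return (fov_deg / pixel_deg) ** 2

for name, fov, pix in [("24 m", 5.0, 0.09),
                       ("12 m", 8.0, 0.18),
                       ("7 m", 10.0, 0.25)]:
    print(f"{name} telescope: ~{n_pixels(fov, pix):.0f} channels")
```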

Array layout E has a nominal construction cost of 80 M€ and meets the main design goals of CTA. Given that the configuration itself, and the analysis methods used, have not yet been optimised, it is likely that a significantly better sensitivity can be achieved with an array of this nominal cost which follows the same basic concept. Therefore, despite the uncertainties in the cost model employed (see Section 7.5), we are confident that the design goals of CTA can be realised at close to the envisaged cost.

7 Realizing CTA

This section provides a brief overview of the position of CTA in the European and global context, the organisation of CTA during its various stages, its operation as an open observatory, the potential sites envisaged for CTA, and the schedule for and cost of CTA design, construction and operation.

7.1 CTA and the European strategy in astrophysics and astroparticle physics

CTA, as a major future facility for astroparticle physics, is firmly embedded in the European processes guiding science in the fields of astronomy and astroparticle physics.

  • The European Strategy Forum on Research Infrastructures (ESFRI) ESFRI is a strategic organisation whose objective is to promote the scientific integration of Europe, to strengthen the European Research Area and to increase its international impact. A first Roadmap for pan-European research infrastructures was released in 2006, listing CTA as an “emerging project”. In the December 2008 update of this Roadmap, CTA was included as one of eight Physical Sciences and Engineering projects, together with facilities such as E-ELT, KM3Net and SKA. As such, CTA is eligible for FP7 Preparatory Phase funding. The CTA application for this funding was successful, providing up to 5.2 M€ over three years for the preparation of the construction of the observatory. The contracts with the EC are in the process of being finalised and signed.

  • The Astroparticle Physics European Coordination (ApPEC) group ApPEC was created to enhance coordination in astroparticle physics across Europe. It has stimulated cooperation and convergence between competing groups in Europe, and has initiated the production of a European roadmap in astroparticle physics, on which CTA is one of the key projects.

  • ASPERA ASPERA is a network of national government agencies responsible for coordinating and funding national research efforts in Astroparticle Physics. One of the tasks of ASPERA is to create a scientific roadmap for Astroparticle Physics (http://www.aspera-eu.org/images/stories/roadmap/aspera_roadmap.pdf) and link it with the more general European scientific infrastructure roadmap. A Phase I roadmap has been published, presenting the overarching science questions and the new instruments planned to address these questions. Phase II saw the release of the resulting “European Strategy for Astroparticle Physics” in September 2008, prioritising the projects under consideration. In this roadmap, CTA emerges as a near-term high-priority project. The roadmap states:

    The priority project for VHE gamma-ray astrophysics is the Cherenkov Telescope Array, CTA. We recommend design and prototyping of CTA, the selection of sites, and proceeding rapidly towards start of deployment in 2012.

    CTA was one of the two projects targeted by the 2009 ASPERA Common Call for cross-national funding and received in total 2.7 M€ from national funding agencies.

  • The ASTRONET Eranet ASTRONET was created by a group of European funding agencies to establish comprehensive long-term planning for the development of European astronomy. The objective of this effort is to consolidate and reinforce the world-leading position that European astronomy attained at the beginning of this century. Late in 2008, ASTRONET released “The ASTRONET Infrastructure Roadmap: A Strategic Plan for European Astronomy”. CTA is one of the three medium-scale facilities recommended on this roadmap, together with the neutrino telescope KM3Net and the solar telescope EST.

7.2 CTA in the world-wide context

Ground-based gamma-ray astronomy has attracted considerable attention world-wide, and while CTA is the key project in Europe, other projects have been considered elsewhere. These include primarily:

  • The Advanced Gamma-ray Imaging System (AGIS) In both science and instrumentation, AGIS (http://www.agis-observatory.org/) followed a very similar plan to that of CTA. The AGIS project was presented in a White Paper prepared for the Division of Astrophysics of the American Physical Society [8]. AGIS proposed a square-kilometre array of mid-sized telescopes, similar to the core array of mid-sized telescopes in CTA but without the additional large telescopes to cover the very lowest energies, and an extended array of small telescopes to provide a large detection area at the very highest energies. The baseline configuration of AGIS consisted of 36 two-mirror Schwarzschild–Couder telescopes with an 11.5 m diameter primary mirror. These have a large field of view and a very good angular resolution. Close contacts were established between AGIS and CTA during the design study phase; information was openly exchanged and common developments undertaken. After a US review panel recommended that AGIS join forces with CTA, the US members of the AGIS Collaboration joined CTA in spring 2010. Within the overall context of CTA, development of Schwarzschild–Couder telescopes will be continued to investigate their potential for further improving CTA performance. Significant intellectual, technological and financial contributions to CTA from the US groups are anticipated. Strong US participation in CTA was endorsed by PASAG and the Decadal Survey in Astronomy and Astrophysics (Astro-2010).

  • The High-Altitude Water-Cherenkov Experiment (HAWC) HAWC (http://hawc.umd.edu/) builds on the technique developed by the MILAGRO group, which detects shower particles on the ground using water Cherenkov detectors and reconstructs the shower direction using timing information. It is proposed to construct the new detector on a site at 4,100 m a.s.l. in the Sierra Negra, Mexico. HAWC will provide a tenfold increase in sensitivity over MILAGRO and a detection capability down to energies of about 100 GeV, largely due to its increased altitude. While it will have lower sensitivity, poorer angular resolution and a higher energy threshold than CTA, HAWC has the advantage of a large field of view (≈ 2π sr) and a nearly 100% duty cycle. HAWC therefore complements imaging Cherenkov instruments. In fact, it would be desirable to construct and operate a similar instrument in the southern hemisphere, co-located with CTA.

  • The Large High Altitude Air Shower Observatory (LHAASO) LHAASO is an extensive (km²) cosmic-ray experiment. The proposal is to locate it near the site of the ARGO and AS-Gamma experiments in Tibet, at 4,300 m a.s.l. The array includes large-scale water Cherenkov detectors (90,000 m²), ground scintillation counter arrays for detecting both muons and electromagnetic particles, fluorescence/Cherenkov telescope arrays and a shower core detector array. The science goals encompass a survey of gamma-ray sources in the energy range ≥100 GeV, the measurement of gamma-ray energy spectra of sources above 30 TeV to identify cosmic-ray sources, and the measurement of cosmic-ray spectra and composition at energies above 30 TeV. If realised, LHAASO will complement the northern CTA array, which concentrates primarily on the detection of low-energy gamma-rays in the range from a few tens of GeV to some 100 GeV.

In summary, the other large-scale instruments for ground-based gamma-ray astronomy being discussed outside Europe (e.g. HAWC, LHAASO) are complementary to CTA in their capabilities.

7.3 Operation of CTA as an open observatory

CTA will address a wide range of astroparticle physics and astrophysics questions. The majority of studies will be based on observations of specific astronomical sources, and the scientific programme will hence be steered by proposals to observe specific objects. CTA will be operated as an open observatory. Beyond a base programme, which will include, for example, a survey of the Galaxy and deep observations of “legacy sources”, observations will be conducted according to observing proposals selected for scientific excellence by peer review of suggestions received from the community. Following the general procedures developed for and by other major astrophysical facilities, a substantial number of outstanding proposals from scientists working in institutions outside the CTA-supporting countries will be executed. All data obtained by CTA will be made available in an archive that is accessible to scientists outside the proposing team.

Following the experience of currently operating Cherenkov telescope observatories, the actual observations will normally be conducted over an extended period of time, with several different projects being scheduled each night. The operation of the array will be fairly complex. CTA observations will not, therefore, be conducted by the scientists whose individual proposals were selected, but by a dedicated team of operators.

CTA observatory operation involves proposal handling and evaluation, managing observation and data-flow, and maintenance. The actual work may be conducted in a central location or in decentralised units (e.g. a data centre and an operations centre) with a coordinating office.

7.3.1 Observatory logistics

The main logistic elements of the CTA observatory are: the Science Operation Centre (SOC), which is in charge of the organisation of observations; the Array Operation Centre (AOC), which looks after the operation and monitoring of the telescopes; and the Science Data Centre (SDC), which provides and disseminates data and analysis software to the science community at large, using the standards of the International Virtual Observatory Alliance (see Fig. 11).

Fig. 11
figure 11

Work flow diagram of the CTA observatory. The three main elements which guarantee the functionalities of the observatory are the Science Operation Centre, the Array Operation Centre and the Data Centre. Data handling and dissemination will build on existing infrastructures, such as EGEE and GÉANT

The use of existing infrastructures, such as EGEE and GÉANT, and of a Virtual Observatory is recommended for all data management tasks in the three elements of the CTA observatory. The high data rate of CTA, together with the large computing power required for data analysis, demands dedicated resources. Hence, EGEE-Grid infrastructures and middleware for distributed data storage, analysis and data access are considered the most efficient solution for CTA. The CTA observatories will very probably be placed in remote locations in southern Africa, Latin or Central America, and/or the Canary Islands. High-bandwidth networking is therefore critical for remote diagnostics and prompt transfer of the data to well-connected European data centres. As for other projects in astronomy, a CTA Virtual Organisation will provide access to the data. CTA aims to support a wide scientific community, providing access to all levels of data, archived in a standardised way.

It is envisaged that CTA operations will start during the construction phase, as soon as the first telescopes are ready to conduct competitive science operations.

7.3.2 Proposal handling

The world-wide community of scientists actively exploiting the results from ground-based VHE gamma-ray experiments currently consists of about 600 physicists (about 150 in each of the H.E.S.S. and MAGIC Collaborations, about 100 in VERITAS, 50 in Cangaroo and 50 in Indian gamma-ray activities, plus about 100 scientists either associated, or regularly collaborating, with these experiments). Planning and designing CTA involves about another 100 scientists not currently participating in any of the currently running experiments. Proposals for observations with CTA are hence expected to serve a community of at least 700 scientists, larger than that of any national astronomical facility in Europe, and comparable in size to the community using the ESO observatory in the 1980s. CTA must therefore efficiently deal with a large number of proposals for a facility which, based on experience with current experiments, is expected to be oversubscribed by a large factor. CTA plans to follow the practice of other major, successful observatories (e.g. ESO) and announce calls for proposals at regular intervals. These proposals will be peer-reviewed by a group of international experts whose membership will change on a regular basis. Different classes of proposals (targeted, surveys, time-critical, target of opportunity, and regular programmes) are foreseen, as is common for current experiments and other ground-based observatories. Depending on the science under investigation, subarray operation may be required. Each site may therefore be conducting several different observation programmes concurrently.

7.3.3 Observatory operations

The observing programme of CTA will be driven by the best proposals from the scientific community, selected in a peer-review process. Successful applicants will provide all the information required for the optimum completion of their measurements. An observing programme will be compiled by the operations centre, taking the requirements of individual projects into account. The programme will be conducted in robotic fashion with a minimum amount of professional staff on site. Proposers are not expected to participate in measurements. Quick-look analysis will enable triggers and on-the-fly modification of projects, if required. Data and calibration files will be provided to the user. Frequent modifications to the scheduled observing programme can be expected for several reasons. Openness to triggers is essential given the transient and variable nature of many of the phenomena to be studied by CTA. CTA must adapt its schedule to changing atmospheric conditions to ensure the science programme is optimised. The flexibility to pursue several potentially very different programmes at the same time may increase the productivity of the CTA observatory. Routine calibrations and monitoring of the array and of environmental data must be scheduled as needed to ensure the required data quality.

Observatory operation covers day-to-day use of the arrays, including measurements and continuous hardware and software maintenance, proposal handling and evaluation, automated analysis and user support, as well as the long-term programme of upgrades and improvements to ensure continued competitiveness over the lifetime of the observatory.

7.3.4 Data dissemination

The measurements made with CTA will be subject to on-line analysis, including event selection and calibration for instrumental effects. The analysis of data obtained with Cherenkov telescopes differs from the procedures typical in other wavelength ranges in that extensive Monte-Carlo simulations are used to determine, and calibrate for, the influence of a large range of factors on the measurements. The necessary simulations will be carried out by CTA, used in calibrating standard pipeline-processed data, and made available to the community for use in proposal planning etc. The principal investigators of accepted proposals will be provided with the results of standard processing and access to the standard MC simulations and the analysis pipelines used in data processing. Storage of data and archiving of scientific and calibration data, programs, and MC simulations used in the processing will be organised through the distributed computing resources made available in support of the CTA EGEE Virtual Organisation.

The processing of CTA data represents a major computational challenge. It will be necessary to reduce a volume of typically 10 TBytes of raw data per observation to a few tens of MBytes of high-level data within a couple of hours. This first-level data processing will make heavy use of Grid technology, running hundreds of processes within a global pipeline. Data processing also requires the production and analysis of the MC simulations needed for calibration. The integrated services and infrastructures dedicated to MC production, analysis and dissemination have to be taken into account in the CTA data pipeline.
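The scale of this reduction can be illustrated with the figures quoted above; the exact reduced volume and processing window used below are representative values chosen for illustration, not CTA specifications.

```python
# Back-of-envelope numbers for the first-level data reduction described
# above. The reduced volume and processing window are illustrative.
raw_bytes = 10e12            # ~10 TB of raw data per observation
reduced_bytes = 30e6         # "a few tens of MB" of high-level data
window_s = 2 * 3600.0        # "a couple of hours" of processing time

reduction_factor = raw_bytes / reduced_bytes        # ~3 x 10^5
throughput_gbit_s = raw_bytes * 8 / window_s / 1e9  # sustained input rate, Gbit/s
```

Under these assumptions the pipeline must sustain an aggregate input rate of roughly 11 Gbit/s while compressing the data volume by five orders of magnitude, which motivates the hundreds of concurrent Grid processes mentioned above.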

All levels of data will be archived in a standardised way, to allow access and re-processing by the scientific community. Access to all levels of data and Grid infrastructures will be provided through a single access point, the “VHE gamma-ray Science Gateway”.

Figure 12 gives an overview of the integrated application of e-infrastructures such as EGEE-Grid, GÉANT and the CTA VO.

Fig. 12
figure 12

Schematic of the integrated application of e-infrastructures like EGEE-GRID, GÉANT and VO for the CTA observatory, together with the 2009 status of the CTACG (CTA Computing Grid) project (http://lappwiki01.in2p3.fr/CTA-FR/doku.php?id=cta-computing-grid-public). The VO-CTA Grid Operation Centre houses the EGEE services

It is foreseen that the high level analysis of CTA data can be conducted by individual scientists using the analysis software made available by CTA. This software will follow the standards used by other high-energy observatories and will be provided free of charge to the scientific community.

7.4 CTA organisation

The organisation of the CTA consortium will evolve over the various stages of the project. These include:

  • The design study phase. Definition of the layout of the arrays, specification of the telescope types, design of the telescopes and small-scale prototyping.

  • The prototyping and preparatory phase. Prototyping and deployment of full-scale telescopes, preparation of the construction and installation including solving technical, organisational and legal issues, site preparation.

  • Construction phase. Construction, deployment and commissioning of the telescopes.

  • Operation Phase. Operation as an open observatory, with calls for proposals and scheduling, operation and maintenance of the facility, processing of the data and provision of analysis tools.

For the design study phase, the organisation of the consortium was defined in a Memorandum of Understanding modelled on those proven by large experiments in particle and astroparticle physics. The governing body is the Consortium Board and operational decisions are taken and work is coordinated by the Spokespersons and the Executive Board. Work Package Convenors organise and drive the work on essential parts of the project. The work packages and the area they cover are:

PHYS:

The astrophysics and astroparticle physics that will be studied using CTA.

MC:

Development of simulations for optimisation of the array layout and analysis algorithms, and for performance studies.

SITE:

Evaluation of possible sites for CTA and infrastructure requirements.

MIR:

Design of telescope optics and mirror construction.

TEL:

Design of telescope structure and associated drive and control systems.

FPI:

Development of focal plane instrumentation.

ELEC:

Design and development of the readout electronics and trigger.

ATAC:

Development of atmospheric monitoring and calibration techniques and associated instrumentation.

OBS:

Development of observatory operation and access strategies.

DATA:

Studies of data handling, processing, management and data access.

QA:

Quality assurance and risk assessment strategies.

The CTA design study phase was organised in terms of scientific/technical topics, rather than in terms of telescope types, to ensure that, as far as possible, common technical solutions are employed across the array, maximising economies of scale and simplifying array operation.

For the preparatory phase, the organisation will be adapted to the needs of the project. The Project Office will be extended, and work packages for each telescope type will be established to steer prototyping and preparations for construction. External advisors will assist in guiding and reviewing the project.

A significant task for the preparatory phase will be the definition of the legal framework and governance structure of the CTA Collaboration and observatory. Different models exist, each with its own advantages and disadvantages. CTA could, for example, be realised within an existing international organisation such as CERN or ESO. CTA could also be operated by a large national laboratory with sufficient administrative and technical infrastructure; suitable national laboratories exist in Germany, France and the UK, for example. On a smaller scale, H.E.S.S. and MAGIC are operated in this mode. Alternatively, CTA could be established as an independent legal entity under the national law of some country, following the example of IRAM. The legal structure of CTA will be determined in close interaction with ASPERA (a group of European Research Area funding agencies which coordinates astroparticle physics in Europe), one of whose main tasks is the “Implementation of new European-wide procedures for large infrastructures”.

Regardless of the legal implementation, CTA management will be assisted by an international scientific and technical Advisory Board, and a Resource Board, composed of representatives of the national funding organisations supporting CTA.

Close contacts between CTA and the funding agencies (via the Resource Board) during all stages of the project are vital to secure sufficient and timely funding for the construction of the facility.

7.5 Time schedule and costs

CTA builds largely on proven technologies and Cherenkov telescopes of sizes similar to those needed for CTA have already been built or are in the advanced stages of construction. Remaining challenges are: (a) optimisation of the cost of telescope components; (b) improvement of the reliability of telescope components, requiring extensive prototyping; (c) establishment of the formal framework for building and operating the instrument, and the selection and provision of sites; and (d) the funding of the infrastructure.

These challenges will be addressed during the Preparatory Phase (2010–2013) which will be supported by an FP7 grant of up to 5.2 M€ from the European Community and by grants from various national funding agencies.

After a successful Preparatory Phase, and provided the funding has been secured, construction and deployment will then take from 2013 until 2018.

A detailed evaluation of the required construction and running costs is part of the Preparatory Phase studies. Current design efforts are conducted within an envelope of investment costs for the CTA construction and site infrastructure of 100 M€ for the southern site, featuring full energy coverage, and 50 M€ for the more specialised northern site (all in 2005 €). CTA aims to keep running costs below 10% of the total investment, in line with typical running costs for other astrophysical facilities.

Estimates for the costs of all major components of CTA are required for any optimisation of the array design. The current model makes the following assumptions:

  • The investment required to construct CTA (according to European accounting schemes) is 100 M€ for CTA-South and 50 M€ for CTA-North.

  • For both sites 20% of the budget is required for infrastructure and a central processing farm. Therefore, for example, telescope construction for CTA-South is anticipated to cost 80 M€.

  • The construction of the telescope foundation, optical support structure, drive/safety system and camera masts will cost 450 k€ for a 12 m telescope, with the cost scaling as (dish area)^1.35.

  • Mirrors, mounts and actuators will cost ≈ 1.7 k€/m².

  • Camera mechanics, photo-sensor and electronics costs will be 400 €/pixel, including lightcones, support structures and cooling systems.

  • Miscellaneous additional costs of about 20 k€/telescope will be incurred.

This cost model will evolve as the design work on the different components of CTA progresses.
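For illustration, the assumptions above can be combined into a simple per-telescope estimate. The sketch below implements the stated cost model directly; the example dish diameter and pixel count are hypothetical inputs, not CTA design values.

```python
# Sketch of the CTA cost model described above. All unit costs are taken
# from the text; the example telescope parameters are hypothetical.
import math

def telescope_cost_keur(dish_diameter_m, n_pixels):
    """Estimate the cost of one telescope in k EUR (2005 prices)."""
    dish_area = math.pi * (dish_diameter_m / 2.0) ** 2
    area_12m = math.pi * 6.0 ** 2
    # Foundation, optical support structure, drive/safety system, masts:
    # 450 k EUR for a 12 m telescope, scaling as (dish area)^1.35.
    structure = 450.0 * (dish_area / area_12m) ** 1.35
    mirrors = 1.7 * dish_area   # mirrors, mounts, actuators: 1.7 k EUR / m^2
    camera = 0.4 * n_pixels     # camera mechanics, sensors, electronics: 400 EUR / pixel
    misc = 20.0                 # miscellaneous: 20 k EUR per telescope
    return structure + mirrors + camera + misc

# Hypothetical mid-sized telescope: 12 m dish, 2000-pixel camera.
cost = telescope_cost_keur(12.0, 2000)   # ~1.46 M EUR
```

The super-linear (dish area)^1.35 scaling of the structure term is what makes a few large telescopes far more expensive than many mid-sized ones, which is a central trade-off in the array optimisation discussed below.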

8 Monte Carlo simulations and layout studies

The performance of an array of imaging atmospheric Cherenkov telescopes such as CTA depends on a large number of technical and design parameters. These include the general layout of the installation, with telescope sizes and locations, telescope optics, camera field of view and pixel size, signal shapes and trigger logic. In searching for the optimum configuration of a Cherenkov telescope array, one finds that most of these parameters are intimately related, either technically or through constraints on the total cost. For many of these parameters, experience from previous gamma-ray installations such as HEGRA, CAT, H.E.S.S. and MAGIC provides reasonable starting points for the optimisation of CTA parameters. Whilst the full optimisation of CTA has not yet been completed, extensive simulation studies have been performed and demonstrate that an array of ≥60 Cherenkov telescopes can achieve the key performance targets for CTA within the cost envelope described earlier. This section gives a summary of the most important simulation studies performed so far.

8.1 Simulation tools

Only a modest number of candidate configurations has been simulated in full detail during the design study, but this still required the simulation of close to 10^11 proton-, gamma- and electron-induced showers, with full treatment of every interaction: tracking all the particles generated in these showers through the atmosphere, simulating the emission of Cherenkov light, propagating the light down to the telescopes, reflecting it off multi-faceted mirrors, registering it in photomultiplier tubes, generating pulses in complex trigger electronics, and recording them in analogue-to-digital circuits. The simulations include not only Cherenkov photons but also night-sky background (NSB) light, which results in the registration of photons at rates of ∼100 MHz in a typical photo-sensor.
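The impact of NSB light at this rate can be illustrated with a simple Poisson estimate; the 10 ns integration window below is a hypothetical value chosen for illustration, not a CTA design parameter.

```python
# At an NSB photon rate of ~100 MHz per photo-sensor, the number of
# noise photons registered in a short integration window is Poisson
# distributed. The window length here is illustrative only.
import math

nsb_rate_hz = 100e6    # ~100 MHz NSB rate in a typical photo-sensor
window_s = 10e-9       # hypothetical 10 ns integration window

mean_nsb = nsb_rate_hz * window_s        # expected NSB photons per window
p_nonzero = 1.0 - math.exp(-mean_nsb)    # chance of at least one NSB photon
```

With these numbers every 10 ns window contains on average one NSB photon, which is why trigger logic and image cleaning must be simulated in full detail rather than assuming noise-free sensors.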

Since the discrimination between γ-ray and hadron showers in CTA will surpass that of the best current instruments by a significant factor, huge numbers of background showers must be simulated before conclusions on the performance of a particular configuration can be drawn. Work is underway to reduce the CPU-time requirement by preferentially selecting, early in their development, those proton showers that are more likely to appear γ-like. This should lead to a substantial speed improvement in future studies. Early results from toy models, which parametrise shower detection characteristics and are many orders of magnitude faster, are encouraging, but these cannot yet be seen as adequate replacements for the detailed simulation process.

The air-shower simulation results presented here are based on the CORSIKA program [73], which is widely used in the community and very well tested. Cross-checks with the KASCADE-C++ air-shower code [74] have been performed as part of this study. Simulations of the instrument response have been carried out with three codes: two packages initially developed for H.E.S.S. (sim_telarray [75] and SMASH [76]) and one developed for MAGIC simulations [77], which were cross-checked using an initial benchmark array configuration.

The large volume of simulations, dominated by those of the proton-induced showers needed for background estimation, has motivated the use of EGEE (Enabling Grids for E-sciencE) for the massive production of shower and detector simulations. A Virtual Organisation has been founded and a first set of CORSIKA showers has been generated on the Grid, while a specific interface for job submission and follow-up of simulations and analyses is currently under development.

The detailed simulations described here result in data equivalent to experimental raw data (ADC counts for each time slice of each pixel). Analysis tools are needed to reconstruct shower parameters (in particular energy and direction) and to identify γ-ray showers against the background of hadron-initiated showers. (Note that the additional background from electron-induced showers is important at intermediate energies: despite the much lower electron flux, electron showers are extremely difficult to distinguish from those initiated by photons.) The analysis methods currently used are based on experience with past and current instruments, but are being developed to make full use of the information available for CTA, in particular to exploit the large number of shower images that CTA will provide for individual events.

The analyses in this study are based on several independent codes, all of which start with the cleaning of images to identify signal pixels and a parametrisation of images by second-moment Hillas parameters [78], augmented by parameters such as the height of shower maximum as reconstructed from stereo images. Background rejection is achieved both by direct cuts on (suitably normalised) image parameters and by more general multivariate analysis tools, such as a Random Forest [79] classifier and Boosted Decision Trees within the open-source software package TMVA (http://tmva.sourceforge.net) [80, 81]. Other analysis methods are also in use for Cherenkov telescope data, such as the 3-D-model analysis [82], the Model++ analysis [68], and analytical combinations of probability density functions of discriminating variables, which have advantages over the standard second-moments analysis in at least some energy ranges. Some of these alternative methods have been used for a subset of the studies presented here.
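As a toy illustration of cut-based background rejection on a normalised image parameter, the sketch below draws synthetic "mean scaled width" values for gammas and protons from Gaussian distributions and scans a simple cut. The distributions and cut values are invented for demonstration and are not CTA simulation results.

```python
# Illustrative cut-based gamma/hadron separation on a single normalised
# image parameter (a "mean scaled width"-like quantity). The Gaussian
# distributions below are synthetic, chosen only to demonstrate the idea.
import math
import random

random.seed(1)
N = 20000

# Gamma-ray images tend to be narrower than hadronic ones: model the
# scaled width as Gaussian around 1.0 for gammas and 1.5 for protons.
gamma_w = [random.gauss(1.0, 0.15) for _ in range(N)]
proton_w = [random.gauss(1.5, 0.35) for _ in range(N)]

def efficiencies(cut):
    """Fraction of gammas and protons passing 'scaled width < cut'."""
    eff_g = sum(w < cut for w in gamma_w) / N
    eff_p = sum(w < cut for w in proton_w) / N
    return eff_g, eff_p

# Quality factor Q = eff_gamma / sqrt(eff_proton): the gain in statistical
# significance relative to applying no cut at all.
best = max(
    (eg / math.sqrt(ep), cut)
    for cut in [1.0, 1.1, 1.2, 1.3, 1.4]
    for eg, ep in [efficiencies(cut)]
    if ep > 0
)
```

The multivariate tools named above (Random Forests, Boosted Decision Trees) generalise this idea by combining many such parameters into a single discriminant instead of cutting on each one independently.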

8.2 Verification of simulation tools

The optimisation of CTA relies heavily on detailed simulations to predict signal and background rates, angular resolution and overall sensitivity. To demonstrate that the simulation tools in use accurately describe reality, we show here some key data/simulation comparisons, taking H.E.S.S. as an example.

A key aspect of the simulation of the detector response to Cherenkov light from an air-shower is the ray-tracing of light through the optical system of an individual telescope. An understanding of the typical misalignments of all components is needed at this stage, as is the ideal performance. The optical performance of a telescope is described by its point spread function (PSF), which degrades for off-axis rays. Figure 13 illustrates that the modelling of the optical system of, in this case, a H.E.S.S. telescope reproduces the width and shape of the PSF in all details, and that essentially identical imaging is achieved for different telescopes in the system.

Fig. 13
figure 13

Optical point spread function of two H.E.S.S. telescopes as a function of angle of incidence, measured using stars, and compared to simulations. Data points are shown for the radial and tangential width of the PSF, and the 80% containment radius. Lines represent the results of simulations of the telescope optics using sim_telarray. See [83] for details

An end-to-end test of the correct simulation of gamma-ray-induced showers can be made using the signal from a strong source under very high signal/background conditions. The giant flare of the blazar PKS 2155-304 observed with H.E.S.S. in 2006 provides an excellent opportunity for such a test. Figure 14 shows the satisfactory agreement (typically at the 5% level) between the simulated and detected shapes of the shower images, as characterised by their Hillas width and length parameters. Gamma-ray showers were simulated with the CORSIKA and KASCADE-C++ programs and passed through one of the H.E.S.S. detector simulation and analysis chains. The measured spectrum, optical efficiency, zenith angle and other runtime parameters were used as inputs to this simulation.

Fig. 14
figure 14

Comparison of measured (black squares) and simulated (red triangles and blue circles) image parameters for the H.E.S.S. telescopes. Measured data are taken from a flare of the blazar PKS 2155-304 [84] for which the signal/noise ratio was very high and large gamma-ray statistics are available

In the analysis of experimental data, it is sufficient for simulations to describe the characteristics of gamma-ray detection, since the cosmic-ray background can (except for very diffuse sources) be modelled and subtracted using measurements in regions without gamma-ray emission. However, for the design of new instruments, simulations must also provide a reliable modelling of all relevant backgrounds. Experience with existing systems shows that this is indeed possible, provided that background events are simulated over a very wide area, up to an impact distance of around a kilometre from any telescope and over a large solid angle, well beyond the direct field of view of the instrument, so that far off-axis shower particles are properly included.

An inherent uncertainty in the simulation of the hadronic background is given by the currently limited knowledge of hadronic interaction processes at very high energies. The impact of this uncertainty on the Cherenkov light profile has been studied using CORSIKA simulations with different interaction models. As can be seen in Fig. 15, the low energy (<80 GeV) models FLUKA [85] and UrQMD [86] do not exhibit significant differences, whereas the known discrepancy between the high-energy models QGSJet-01 [87], QGSJet-II [88, 89] and SIBYLL 2.1 [90] leads to an uncertainty of about 5% in the Cherenkov light profile at 1 TeV.

Fig. 15
figure 15

Comparison of the Cherenkov light profiles for proton-induced showers generated with different hadronic interaction models. The profiles for FLUKA and UrQMD at 50 GeV (left) and 100 GeV (right) are shown in the top panel. Two QGSJet versions and SIBYLL at 1 TeV are compared in the bottom panels

As can be seen in Fig. 16, the raw cosmic-ray detection rate as a function of zenith angle is described to within about 20%. Given the uncertainties on cosmic-ray flux, composition above the atmosphere and in the hadronic interaction models, better agreement cannot be expected. In the background-limited regime this uncertainty corresponds to a 10% uncertainty in sensitivity, assuming that the fraction of γ-like events is understood. Figure 17 demonstrates that the fraction of such events, and the distributions of separation parameters, are indeed well understood for instruments such as H.E.S.S. using the simulation and analysis tools applied here to CTA.
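The quoted propagation of a 20% background-rate uncertainty into roughly a 10% sensitivity uncertainty follows from the background-limited scaling of the minimum detectable flux with the square root of the background rate:

```python
# In the background-limited regime, significance scales as S / sqrt(B),
# so the minimum detectable flux scales as sqrt(B). A 20% uncertainty in
# the background rate B therefore corresponds to about 10% in sensitivity.
import math

def sensitivity_uncertainty(bg_rel_uncertainty):
    """Relative change in minimum detectable flux, assuming flux_min ~ sqrt(B)."""
    return math.sqrt(1.0 + bg_rel_uncertainty) - 1.0

delta = sensitivity_uncertainty(0.20)  # ~0.095, i.e. about 10%
```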

Fig. 16
figure 16

Dependence of H.E.S.S. system trigger rate on zenith angle, for data and simulations. The simulations assume two different model atmospheres, with the atmosphere at the H.E.S.S. site representing an intermediate case. See [72] for more details

Fig. 17
figure 17

Measured distribution of the proton/electron separation parameter ζ for 239 hours of H.E.S.S. data on sky fields without gamma-ray emission, compared to simulations of proton- and electron-induced showers. The shape of the background is very well reproduced by the simulations across the full range of ζ. Gamma-ray signals appear close to ζ = 1; the electron background is therefore important despite the relatively low flux of electrons in comparison to hadrons. See [91] for more details