Future of optical-infrared interferometry in Europe

This Topical Collection was motivated by, and can be seen as a synthesis of, the topics discussed by the working group Future of Interferometry in Europe (FIE) of the EC-FP7-2 OPTICON Network, work-package 14.4, funded by the European Commission. It demonstrates the current momentum in the field, driven by an exciting range of new instruments and by improved interferometric facilities becoming available to the larger astronomical community right now. The central theme of this collection is to discuss which steps are necessary to benefit from this current momentum and to focus it into an even brighter future, eventually making the 2030s the decade of optical-IR interferometry, as the natural next revolutionary step in astronomical observing tools after the construction of the 30+m class of so-called extremely large telescopes (ELTs), which is currently under way. The recent discoveries of very nearby exo-planets in the habitable zone of Proxima Centauri and near the snow-line of Barnard's star emphasise the need for high-fidelity interferometric imaging at the smallest inner working angles. Combining the superb angular resolution of interferometry with high-contrast techniques will be the only way to follow up observationally, in the required detail, the continuously rising number of exo-planets, their formation processes and their interactions with the host circumstellar material. Further highlights of new science cases, which should drive the technological roadmap of future developments, are the fundamental physics of stars as driven by detailed images of their atmospheres, and the investigation of the physics of the most energy-efficient light sources in the Universe, namely mass accretion onto supermassive black holes in active galactic nuclei (AGN).
Eventually, sensitive interferometric studies of AGN will allow us to bridge the cosmologically important, but elusive, discrepancy between Hubble-Lemaître constant values as measured from low-redshift supernovae, gravitational-lensing time delays, or the high-redshift cosmic microwave background.

Experimental Astronomy (2018) 46:381–387, https://doi.org/10.1007/s10686-018-9614-1


Jörg-Uwe Pott, Jean Surdej (FIE-WG chairs)
In the following report, we present a synthesis of the topics discussed by the working group Future of Interferometry in Europe (FIE) of the EC-FP7-2 OPTICON Network, work-package 14.4, funded by the European Commission. We describe the current momentum in the field, driven by an exciting range of new instruments and by improved interferometric facilities becoming available to the larger astronomical community right now. The central theme of this report is to discuss which steps are necessary to benefit from this current momentum and to focus it into an even brighter future, eventually making the 2030s the decade of optical interferometry, as the natural next revolutionary step in astronomical instrumentation after the construction of the 30+m class of so-called extremely large telescopes (ELTs), which is currently under way.
The recent discovery of the planet Proxima Centauri b, located within the habitable zone of its red dwarf stellar host, emphasises the need for high-fidelity interferometric imaging at the smallest inner working angles. Combining the superb angular resolution of interferometry with high-contrast techniques will be the only way to follow up observationally, in the required detail, the rising number of exo-planets, their formation processes and their interactions with the host circumstellar material. Further highlights of new science cases, which should drive the technological roadmap of future developments, are the fundamental physics of stars as driven by detailed images of their atmospheres, and the investigation of the physics of the most energy-efficient light sources in the Universe, namely mass accretion onto supermassive black holes in the centres of active galaxies (active galactic nuclei, AGN). Eventually, sensitive interferometric studies of AGN will allow us to bridge the cosmologically important, but elusive, discrepancy between Hubble constant values as measured from low-redshift supernovae or the high-redshift cosmic microwave background.
Interferometry will remain indispensable to successfully carry out all the above-mentioned studies in the future and to make important discoveries, even when the monolithic ELTs become available. This report mostly concentrates on the development of ground-based interferometric facilities capable of combining wavelengths from the visible to the warm thermal infrared (0.5-13 micron), although several aspects discussed are certainly very relevant for interferometric space applications, too.

Roadmap to the future of interferometry in Europe
Studying stellar radii and multiplicity was the bread-and-butter science for the first generations of optical interferometers. New technological possibilities now allow interferometry to contribute to astronomical research from the smallest scales, such as solar-system research and gravitational microlensing for planet hunting, over mapping stellar surfaces and proto-planetary disc and planet evolution in detail, up to the largest scales, such as investigating the cosmic distance scale with detailed studies of AGN at significant redshifts. This situation motivates scientists throughout the community to develop ideas, concepts and technology to further improve our interferometric observing capabilities, and access to them.
In the following we cast such developments into a roadmap to the future of interferometry in Europe, based on Part II & Part III of this report. To achieve the proposed interferometric roadmap, a strong collaboration between ESO, ESA, EU and other international organizations is essential.
To structure the roadmap, we separate it in time into two parts: the nearer future, parallel to the construction of the first 30+m class telescopes, and the following decade. Despite this separation, both phases should be tightly linked for optimal use of resources. Construction of a new interferometric facility, as currently discussed for the second phase, is only feasible within a reasonable time and at reasonable cost if it is prepared by developing and testing new technology with the existing facilities in the upcoming years. This is already happening now with the use of photonics and the newest detector technologies in the newest generation of instruments, and should be emphasized for the next generation of instruments, as discussed below.

Before 2025: parallel to VLTI 3rd-generation and E-ELT 1st-generation construction
While the E-ELT and its first suite of scientific instruments are being constructed, a 3rd generation of instruments for the VLTI could emphasize particular aspects (angular or spectral resolution, extended wavelength coverage, multi-object), and benefit from a mature and improved telescope infrastructure, including adaptive optics and piston-stabilized beam trains. In Chapter 15, such a focused instrument concept is presented, which could be realized as a visitor instrument, similar to the successful PIONIER. This high-dynamic-range imager for the thermal infrared relies on a combination of new integrated-optics technology, currently being developed for thermal-infrared wavelengths (Chapter 10), and statistically robust data-processing techniques, emphasizing the importance of technological progress.

precision of the measurement process. Key steps towards larger arrays and the thermal infrared as a sweet spot for the direct detection of planets and their formation are the development of larger infrared-sensitive APD focal-plane arrays, and of integrated-optics beam combiners (IO-BCs) for such wavelengths, the latter being discussed in Chapter 10. Wavelength up-conversion, as discussed in Chapter 12, is a different approach towards interferometric science at thermal wavelengths.
Furthermore, we would like to see research ideas become reality to improve on the current fringe-tracking limits of the VLTI (Chapters 9 and 11). While the former focuses on an ideal use of the photons entering the beam-combining laboratory, the latter outlines a plan to create synergies between adaptive optics and fringe tracking operating in parallel, by the use of predictive control algorithms.
The other key area of technological progress is in optimizing and automating the process of image reconstruction to derive model-independent images, and a reliable snapshot imaging mode (getting an image in less than a night), to open arrays of 4-6 telescopes to the field of time-domain astronomy (Chapter 6). Interferometric imaging hugely benefits from the availability of chromatic multi-baseline datasets, as now provided by the VLTI 2nd-generation instruments (Chapter 13), and the coming years will open a new era of interferometric imaging with reliable image-quality benchmarking (Chapter 14).
While emphasis shall be put on these technological advancements to fully exploit the scientific potential of the current suite of instruments and infrastructure, and to prepare for future instrumentation, we also discuss in this report how the near-term future can benefit from improved community building, teaching and interaction. While an overview of various topics to maximize the scientific exploitation of the VLTI by the community is given in Chapter 17, we discuss the idea of building a network of expertise centres in Europe in Chapter 18. Such centres, not unlike the ALMA regional centres, shall lead the process of training users and bringing established expert knowledge to the broader community. This effort is supported by EII and the Horizon 2020 programme via the new OPTICON network grant, and is seen as a key element not only to fully exploit the current investments in interferometry, but also to prepare for a future facility beyond the VLTI, either ground- or space-based.

Visible interferometry at the VLTI will allow for a complementary scientific use. The science case for visible interferometry with long baselines is discussed in Chapter 8. Opto-mechanical upgrades of the current VLTI infrastructure would be needed to achieve the 2-3 times higher precision required for visible-wavelength interferometry. Important technological pathfinding is currently being done at the CHARA facility.
At this early stage of developing post-VLTI facilities, alternative approaches to high-dynamic-range interferometric imaging at the highest angular resolution should also be studied (Chapter 20). PFI will eventually become feasible as an international facility if it builds on the experience of today's optimized arrays, choosing the best fringe-tracking concepts and focusing on simple lightweight telescopes and the mass production of now-standard technology to phase apertures (adaptive optics) and arrays (fringe tracking).

The seed of many great discoveries in observational astronomy stems from a scientific meeting, workshop or conference during which the blurry concept of an innovative instrument is discussed for the first time. For large optical interferometers, there is probably no better example of such a success story than the VLTI. Although optical interferometry was proposed in 1868 (Fizeau 1868) and led to its first scientific results in 1891 (Michelson 1891), it remained idle until 1975, when the idea of coherently coupling large telescopes was first proposed by A. Labeyrie during a conference in Geneva on optical telescopes of the future (Pacini et al. 1977); this idea eventually grew into the successful state-of-the-art high-angular-resolution observatory that is the VLTI. Today, as several existing interferometric facilities are reaching their maturity, we take the opportunity to review and summarize the best recommendations and conclusions established during past conferences about the future of optical interferometry. We discuss in particular the most persistent science cases that must be tackled by next-generation interferometric instruments and/or facilities.
Over the past decades, the astronomical community has met on many occasions to discuss the best science cases for optical interferometry and the adequate technological roadmap (see the list of most workshops and conferences since 1994 at http://iau-c54.wikispaces.com/Meetings). In Europe, the first meeting clearly addressing the science cases in the post-VLTI era occurred in Liège in 2004 (Surdej et al. 2004) and was followed one year later by a conference on the technology roadmap (Surdej et al. 2005). At the same time, the future of the VLTI was intensely discussed and various concepts were reviewed during a conference in Garching in 2005 on the power of optical interferometry (Richichi et al. 2005). In 2010, during the JENAM conference, the working group on the Future of Interferometry in Europe (FIE WG) defined the near-term priorities for the VLTI, a long-term vision in the ELT era, and guidelines for future facilities. The first step of the plan is now coming to fruition, with the second-generation interferometric instruments bringing the VLTI capabilities and science to the next level.
In the meantime, on the US side, nearly independent workshops came to similar conclusions regarding the best science cases for optical interferometry and the need for higher angular resolution. In particular, during a Workshop on the Future Directions for Ground-based Optical Interferometry held in Tucson in 2006, very high angular resolution was identified as the top priority and a key element toward a revolution in both planet formation and stellar physics. The study of energetic and interacting systems, including AGN, relativistic stellar systems, and binary systems with mass transfer, was also listed as a second high priority. These science cases also emerged later from the Astro2010 decadal survey as top priorities.
While international collaboration was often recognized during these meetings as essential for any future major interferometric facility, there was no consensus in the community on the best concept to choose, and the lack of prospective vision was deplored on a few occasions. In addition, most discussions were generally focused on improving current facilities rather than on the need for a new international facility. This situation changed in 2013 during the OHP Colloquium "Improving the performances of current optical interferometers & future designs". During a round-table discussion between members of the EII, ASHRA, FRINGE and IF working groups (see the acronym list in footnote 9 and at the end of this report), it was concluded that direct imaging of the planet-formation process at AU-scale radii can serve as a versatile science case of broad interest in the astronomical community, which at the same time is sufficiently focused to help develop the technical roadmap towards the next interferometric facility (Surdej & Pott 2013). Shortly after the meeting, this conclusion was distilled into the Planet Formation Imager (PFI) project (see Chapter 21).
To be complete, and in parallel to the now classical Fizeau-Michelson interferometry, it is worth mentioning here a possible revival of intensity interferometry, pioneered by Hanbury Brown and Twiss in the 1960s and then abandoned for its lack of sensitivity, despite other great advantages, notably its robustness against atmospheric phase degradations. The currently planned Cherenkov Telescope Array (CTA) will spread a number of "light collectors" over more than 1 square km; these are probably not adaptable to the purpose of optical interferometry, but in the future large similar apertures could provide kilometre-baseline resolution on sufficiently bright objects, especially stars (Dravins et al. 2015).
Also, it is worth noting that the above considerations, and most conferences of the last two decades, dealt essentially with ground-based optical interferometry. Indeed, the science cases would also be numerous and extremely interesting for an interferometer observing from space, as was argued for as early as 1993 with the Darwin project submitted to the European Space Agency (Cockell et al. 2009). An optical interferometer in space, providing long integration times and high sensitivity, arguably represents the ultimate goal in the domain. But it would require expensive technologies, especially for a formation-flying concept, which becomes necessary for decametric or longer baselines. Such techniques are today being explored with the PRISMA mission of the Swedish Space Corporation and ESA's future Proba-3 mission. The issue "ground vs. space" was already discussed in the late 1980s, questioning an approval of the VLTI with its atmospheric limitations. Fortunately, the VLTI was approved, and 29 years later no interferometric space mission, even an exploratory one, has yet emerged! Although it would be wise to make sure such an exploration is not disregarded in the future, it seems sound today to focus on science cases that ground-based instruments could address in the next two or three decades. This chapter is articulated around the conclusions and recommendations of the few key meetings and conferences mentioned above. The following section summarizes the most persistent recommendations related to science cases and mentions some key technological developments required to address them. More details on possible technological avenues for long-baseline interferometry are given in other chapters of this report.

Recommended science cases
Optical interferometry has made substantial contributions to many scientific areas over the past decades. Not surprisingly, the greatest science impacts were achieved in domains that require high-angular-resolution and high-precision measurements. While the scientific applications of such a unique technique are very wide, there is a consensus in the scientific community on a few science areas where the greatest science impacts from optical interferometers are expected to be realized today and in the near future. The most persistent trend is clearly the near-IR and mid-IR study of planet-forming regions, which was also mentioned in the US Astro2010 Decadal survey report as the highest-impact area for optical interferometry. The other main topics clearly on a converging trend are stellar physics and the study of AGN. In the following subsections, we give more details about each of these topics.

Stars and stellar evolution
While the theory of stellar evolution was thought to be well established for single stars, asteroseismology recently revealed that the basic assumptions about interior rotation, and along with it the chemical mixing and angular momentum distribution it induces, have major shortcomings (see, e.g., Chaplin & Miglio 2013 and Aerts 2015 for reviews). A general conclusion is that unknown coupling between the stellar core and envelope must occur as stars evolve. The sample of stars with interior rotation from asteroseismology is currently limited in size and covers only a few mass ranges, metallicities, and evolutionary stages. This will change a decade from now thanks to the PLATO mission (Rauer et al. 2014; launch in 2024, operational until at least 2030).
The quest to measure a stellar radius with high precision is more appealing than ever for the types of stars where asteroseismology cannot help. This is the case for stars with considerable mass loss or accretion. The capacity to achieve a high-precision radius is particularly pertinent for stars still in their formation process, while they are forming their planetary systems (cf. Section b). Even in the case of future successful asteroseismic applications to stars just before the onset of hydrogen burning in their core (e.g., Zwintz et al. 2014 for a proof-of-concept), the availability of an interferometric radius during the various stages of stellar life would be a major asset to understand star formation. Indeed, the combination of a high-precision angular diameter, a distance (with the new Gaia data), and a spectroscopic effective temperature delivers a model-independent luminosity estimate with far better predictive power for the evaluation and calibration of stellar evolution models than presently available. The addition of asteroseismic measurements of the interior density and rotation profiles to an interferometric radius enables high-precision inferences of key interior quantities of stars, also drastically decreasing the model dependency of their age determination (e.g., Huber et al. 2013 for a proof-of-concept). So far, the ages of stars in the formation process are not known, preventing the derivation of a relative timeline for the various disc properties that have been measured. Robust age determination of stars in their formation process can be achieved from a combined asteroseismic and interferometric approach. It would allow defining a homogeneous, solid star-formation evolution scenario from quasi-model-independent relative ages.
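The model-independent luminosity estimate mentioned above follows from elementary geometry and the Stefan-Boltzmann law. The minimal Python sketch below illustrates the arithmetic; the input values (a hypothetical star of 1.0 mas angular diameter, 20 mas parallax and 6000 K effective temperature) are purely illustrative assumptions, not measurements of any particular target.

```python
import math

SIGMA_SB = 5.670374e-8              # Stefan-Boltzmann constant [W m^-2 K^-4]
PC_M = 3.0857e16                    # one parsec in metres
MAS_RAD = math.pi / (180 * 3600e3)  # one milliarcsecond in radians
L_SUN = 3.828e26                    # nominal solar luminosity [W]
R_SUN = 6.957e8                     # nominal solar radius [m]

def luminosity_from_interferometry(theta_mas, parallax_mas, t_eff_k):
    """Model-independent radius and luminosity (in solar units) from an
    interferometric angular diameter [mas], a parallax [mas] (e.g. from
    Gaia), and a spectroscopic effective temperature [K]."""
    distance_m = (1.0e3 / parallax_mas) * PC_M         # d = 1/parallax
    radius_m = 0.5 * theta_mas * MAS_RAD * distance_m  # R = (theta/2) * d
    lum_w = 4.0 * math.pi * radius_m**2 * SIGMA_SB * t_eff_k**4
    return radius_m / R_SUN, lum_w / L_SUN

# Hypothetical star: 1.0 mas diameter, 20 mas parallax (d = 50 pc), 6000 K.
r_sun, l_sun = luminosity_from_interferometry(1.0, 20.0, 6000.0)
```

Note that the luminosity inherits twice the fractional uncertainty of the radius and four times that of the temperature, which is why the high-precision angular diameter is the key ingredient.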
Optical interferometers will also play a key role for the most massive stars in the Universe, which face heavy mass loss continuously from the start of their lives. It was proven recently that more than 75% of all stars with birth masses above some 25 solar masses occur in binaries or are the merger product of binary interaction (e.g., Sana et al. 2012). The first systematic high-angular-resolution survey to search for companions within a physical distance below some 200 AU seems to indicate that all O stars originate from binary formation channels. Hence, stellar evolution theory for the most massive stars requires drastic improvements, and interferometry has a major role to play here, as discussed by Sana et al. (2014). Dedicated combined interferometric and spectroscopic long-term monitoring is certainly the best and most efficient way to make progress in this area of stellar evolution, which is of prime importance for galactic chemical evolution.
In the farther future, an extremely important step ahead could come from long-term, time-resolved stellar imaging at visual wavelengths rather than in the infrared. Indeed, direct mapping of seismically and/or magnetically active regions on the surface of stars could reveal modest temperature or chemical spots as well as pulsational patches, revealing directly the spot and pulsation configurations. This would open up an entirely new research field: local seismology of (active) stars, following on local helioseismology (e.g., Gizon 2009). Moving interferometry to visual wavelengths is currently still a technological challenge, but the gain in terms of scientific exploitation would be appreciable.

Planet formation
Planet formation constitutes one of the major unsolved problems in modern astrophysics (Millan-Gabet et al. 2010). Planets are believed to form out of the material left over by the star-formation process, but the details of how this actually happens are still speculative. On the one hand, spectral energy distributions and spectroscopy alone do not uniquely constrain disc models. On the other hand, spatially resolving the inner disc region (interior to ~10 AU, most relevant in the context of planet formation) poses difficult observational challenges. Long-baseline interferometric observations are the only tool today to reach the spatial resolution required to break model degeneracies, as shown on many occasions (e.g., Kraus et al. 2010). They have also yielded many unexpected results in this field, as summarized by Creech-Eakman et al. (2010).
Recent advances in imaging capabilities at the VLTI and CHARA, sometimes combined with high-resolution spectroscopy, enabled direct and fundamentally new measurements of the inner disc: the radial location of the sources of continuum and line emission, gas chemistry and dust mineralogy, and surface brightness. However, major fundamental questions remain unanswered:
• The detailed structure and composition of the dust evaporation front, which is fundamental to the knowledge of the terrestrial-planet formation zone.
• The disc density/temperature profiles, helping to explain issues such as the location/migration of gas-giant planets or disc "dead zones".
• The connection between the disc and the star itself, particularly with regard to angular momentum transport and disc viscosity.
In the short term, the second-generation instrument VLTI/MATISSE (Lopez et al. 2014) will improve our spatial coverage and imaging capabilities in the mid-infrared (L, M, and N spectral bands) by combining four telescopes instead of the two previously combined by VLTI/MIDI. Single-baseline measurements often allow ambiguous interpretations, and MATISSE will play a crucial role in addressing pivotal questions about the inner regions of young circumstellar discs, such as those listed above. In the long term, one of the most challenging goals is to probe planet-forming systems at the natural spatial scales over which material is being assembled (the so-called Hill sphere, which delimits the region of influence of a gravitating body within its surrounding environment). The Planet Formation Imager project (PFI; see Chapter 21) has crystallized around this challenging goal: to deliver resolved images of Hill-sphere-sized structures within candidate planet-hosting discs in the nearest star-forming regions.
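To give a feel for the scales involved, the sketch below estimates the angular size of a Hill sphere and the baseline whose diffraction scale λ/B matches it. The planet mass, orbital radius, stellar mass, distance and wavelength are assumed, illustrative values, not PFI specifications.

```python
import math

ARCSEC_RAD = math.pi / (180 * 3600)  # one arcsecond in radians

def hill_radius_au(a_au, m_planet_msun, m_star_msun=1.0):
    """Hill radius [AU] of a planet on a circular orbit of radius a_au:
    r_H = a * (m_p / (3 * M_star))^(1/3)."""
    return a_au * (m_planet_msun / (3.0 * m_star_msun)) ** (1.0 / 3.0)

def baseline_for_scale_m(theta_arcsec, wavelength_m):
    """Baseline B [m] whose diffraction scale lambda/B equals theta."""
    return wavelength_m / (theta_arcsec * ARCSEC_RAD)

# Assumed case: a Jupiter-mass planet (9.54e-4 M_sun) at 5 AU around a
# solar-mass star in a star-forming region at 140 pc, observed at 10 micron.
r_hill_au = hill_radius_au(5.0, 9.54e-4)   # ~0.34 AU
theta_arcsec = r_hill_au / 140.0           # theta["] = size[AU] / d[pc]
b_needed_m = baseline_for_scale_m(theta_arcsec, 10.0e-6)  # ~850 m
```

Under these assumptions the required baseline approaches a kilometre in the mid-infrared, far beyond the ~130 m of the VLTI, which is the basic motivation behind PFI-scale arrays.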
Finally, it is also worth mentioning the study of extrasolar planets as a persistent, important goal of long-baseline interferometry. While young giant exoplanets are within reach of a high-precision ground-based instrument (see, e.g., Chapter 15), a space-based interferometer will be required to address the most fundamental questions, such as the habitability of rocky exoplanets and the search for biosignatures (e.g., Cockell et al. 2009). There are, however, many technological challenges to overcome before launching such an instrument, and a vigorous investment is needed.

Study of AGN

Starting from the few brightest objects about a decade ago, AGN have been observed with long-baseline infrared interferometers quite extensively over the last years. We now have tens of AGN with interferometric size measurements in the infrared. So far, all these size measurements focus on the torus, the inner dusty region considered to surround the central engine mostly equatorially. One current finding is that the characteristic size decreases quite fast with decreasing IR wavelength from the mid-IR to the near-IR, a little faster than the spatial resolution for a given baseline, at least in some objects. Since AGN observations are currently limited to baselines up to ~100 m, the same object is often more resolved in the mid-IR than in the near-IR. In fact, with the Keck Interferometer and the VLTI (baselines of 85-130 m), objects have been relatively well resolved in the mid-IR in a significant number of cases, while only very partially resolved in the near-IR, except for one object. Based on the brightness of the objects in the AGN catalogue (Véron & Véron 2010), it is clear that the number of observable objects will grow dramatically with even a small improvement of the current limiting magnitudes, which we definitely need. So far, these observations have mainly been limited to broad-band (continuum) observations, but high-spectral-resolution observations are being attempted in the near-IR, as we discuss below.
In the mid-IR, a few objects have been relatively well resolved with the VLTI baseline lengths, and overall sizes and some radial structure have been measured for these and some further objects. However, the observations have been limited to two beams, meaning that we lack phases; more technically, we still have no closure-phase information, so essentially we do not yet have an "image" of any object. Differential phases over the mid-IR spectrum are being utilized, but their application is still limited and complicated. The next robust step forward is to obtain phase information with a ≥3-beam interferometer, leading to first images. Based on the existing observations, we do have some morphological information, but this provides an even greater motivation to explore further. The two largest AGN in angular size on the sky, NGC1068 and Circinus, seem to show a two-component structure: an equatorial one at several sublimation radii R_sub, and a polar elongation at a larger scale, around a few tens of R_sub (Tristram et al. 2014; Lopez-Gonzaga et al. 2014). Somehow, this polar component is seen both in Type 2 and Type 1 objects (corresponding to edge-on and face-on objects, respectively; Hoenig et al. 2012, 2013). Naively, we would have expected to see a more equatorially elongated structure for the obscuring torus, but this is not the case; interestingly, the polar elongated component even radiates more predominantly in the mid-IR. This could be outflowing material, but at the same time it seems to participate in obscuring the central region, since the interferometric size measurements indicate an emissivity of a few tenths, as expected for directly illuminated, UV-optically-thick material. It is puzzling that this polar elongated structure is seen in a Type 1, face-on object, i.e. without obscuring the centre.
Furthermore, in the two edge-on objects, NGC1068 and Circinus, the equatorial component shows quite a good correspondence with the maser spots observed in the radio domain, at least roughly in size and direction. It is well known that these spots rather suggest a warped structure, while UV/optical high-resolution images, such as those taken with HST, show a clear cone-like structure indicating a corresponding shadowing. With mid-IR interferometric imaging, we will need to sharply delineate the structure that reconciles both constraints. That will potentially give us a clue to the nature of this obscuring material and its relation to the accretion flow.
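For orientation, the sublimation radius R_sub used above as a length unit is commonly estimated with the Barvainis (1987) scaling. The sketch below applies it to an assumed Seyfert-like nucleus; the luminosity and distance are illustrative values, not measurements of the objects discussed.

```python
import math

def sublimation_radius_pc(l_uv_1e46, t_sub_k=1500.0):
    """Dust sublimation radius [pc] following the Barvainis (1987)
    scaling: R_sub ~ 1.3 * sqrt(L_UV / 1e46 erg/s) * (1500 K / T_sub)^2.8."""
    return 1.3 * math.sqrt(l_uv_1e46) * (1500.0 / t_sub_k) ** 2.8

def pc_to_mas(length_pc, distance_mpc):
    """Angle [mas] subtended by length_pc at a distance of distance_mpc."""
    return length_pc / (distance_mpc * 1.0e6) * 206265.0 * 1.0e3

# Assumed Seyfert-like nucleus: L_UV = 1e44 erg/s at a distance of 14 Mpc.
r_sub_pc = sublimation_radius_pc(0.01)     # ~0.13 pc
theta_sub_mas = pc_to_mas(r_sub_pc, 14.0)  # ~1.9 mas
```

With R_sub of order a milliarcsecond for such a nearby object, polar structure at a few tens of R_sub is comfortably within reach of ~100 m mid-IR baselines, consistent with the detections described above.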

In the near-IR, for Type 1 (supposedly face-on) AGN, which are currently the main objects we can observe interferometrically, it has been a bit more difficult to resolve the structure with 100 m baselines than in the mid-IR, the wavelength dependence of the characteristic spatial scale often decreasing a little faster than ∝ λ. The visibilities we observe in the near-IR with 100 m baselines for the brightest Type 1 AGN turn out to be V² ~ 0.9; thus we are only marginally resolving the structure. The situation with fainter objects would be even worse, since they probably have smaller angular sizes. What we definitely need are longer baselines, in order to unambiguously resolve the structure. At the moment, we believe that we are partially resolving a ring-like dust-emitting region, having a 'hole' due to dust sublimation caused by the central engine's harsh heating. We will first be able to confirm this picture with 300 m-class baselines. On the other hand, the central engine, the putative accretion disc, is believed to remain unresolved. However, the outermost part of the disc, where the mass accretion onto the disc truly occurs, is the region we understand really poorly. As we approach these radii with higher resolution, we will for the first time be able to test, and explore, whether they truly remain unresolved.
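The gain from longer baselines can be illustrated with a simple circular-Gaussian source model, a common approximation when only a characteristic size is of interest; the 0.5 mas FWHM assumed below is an illustrative value, not a measured AGN size.

```python
import math

MAS_RAD = math.pi / (180 * 3600e3)  # one milliarcsecond in radians

def gaussian_v2(theta_fwhm_mas, baseline_m, wavelength_m):
    """Squared visibility of a circular Gaussian source of FWHM angular
    size theta_fwhm_mas, for a given baseline and wavelength:
    V = exp(-(pi * B * theta)^2 / (4 * ln2 * lambda^2))."""
    x = math.pi * baseline_m * theta_fwhm_mas * MAS_RAD / wavelength_m
    v = math.exp(-x**2 / (4.0 * math.log(2.0)))
    return v * v

# Assumed hot-dust FWHM of 0.5 mas, observed in the K band (2.2 micron):
v2_100 = gaussian_v2(0.5, 100.0, 2.2e-6)  # ~0.92: only marginally resolved
v2_300 = gaussian_v2(0.5, 300.0, 2.2e-6)  # ~0.46: clearly resolved
```

Under this assumption, tripling the baseline moves the measurement from the V² ~ 0.9 regime quoted above to a regime where the size and shape of the source can actually be constrained.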
Before we reach this accretion region, we will actually go through the region mainly emitting broad emission lines. The kinematics and geometry of this region are very important, as they are being used for estimating the mass of the central black hole. With a spectral resolution of ~100 km/s, detecting such kinematics in differential-phase spectra will thus be very rewarding. Such medium-spectral-resolution observations have already been shown to be feasible with the current instruments for one AGN, 3C273 (Petrov et al. 2013), but again, the baseline lengths still seem to be too short. However, a simple differential visibility spectrum over a broad emission line has already turned out to be intriguing and puzzling: the tentative result is that the overall size of the emission-line region looks slightly larger than that of the dust-emitting region, at least in this particular case. We should be able to follow up this issue over the next couple of years even with the current instruments, and observations with longer baselines will surely be decisive.

Summary
Optical long-baseline interferometry provides a unique and powerful resource for astrophysics that has now reached a level of technical and operational readiness enabling scientific breakthroughs. The untapped potential of optical interferometry is, however, still immense. It is today the only technique capable of probing, at optical wavelengths, spatial scales at sub-milliarcsecond angular resolution. This capability will be beyond the reach of the planned extremely large telescopes and is indispensable to study planet formation, understand the fundamental physics of stars, and unravel the real structure of AGN.

Even hot dust close to the star may significantly degrade the coronagraphic performance at the level needed for exo-Earth imaging. It represents emission that is more extended than the star and consequently cannot be perfectly suppressed by a coronagraph. Thus, the characterisation of warm and hot exozodis is critical for the success of such missions. Furthermore, the study of the properties, distribution, and evolution of exozodiacal dust can inform us about the properties and evolution of the innermost regions of planetary systems, close to their habitable zones. Interferometry provides a unique method of separating this dusty emission from the stellar emission and is thus currently the only method able to detect the dust in most of these systems. The broad wavelength coverage of the second-generation suite of VLTI instruments (and PIONIER) is particularly well suited for the characterisation of exozodiacal dust systems.

Current state of the art
The presence of warm/hot dust in other planetary systems, in analogy to the zodiacal dust in our own solar system, has long been hypothesised. However, its detection remained elusive, due to the faintness of the emission and the small angular separation from even nearby stars, until near-IR interferometry using the FLUOR beam combiner at the CHARA array revealed a hot excess of ~1% in the K band around the prototype debris disc host star Vega.

Detection using optical long baseline interferometry
By far the largest number of exozodis has been detected using near-IR optical interferometry as employed at the VLTI. Using this technique at short baselines (a few tens of metres), the star remains mostly unresolved, resulting in fully coherent emission. In contrast, the extended emission from a dust disc is ideally fully resolved, resulting in incoherent emission and thus a visibility deficit compared to the values expected from the star alone. This visibility deficit allows the dust to be detected, and the disc-to-star flux ratio can be measured as half the visibility deficit (Figure 3-1). Due to the small flux ratio of typically 1%, only a few instruments reach the accuracy on the visibility measurements necessary to detect the dust.
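The relation between the measured visibility deficit and the disc-to-star flux ratio can be sketched as follows (a minimal illustration; the function name and the 2% example deficit are ours, not taken from any specific pipeline):

```python
def disc_to_star_flux_ratio(v2_measured, v2_star=1.0):
    """Flux ratio f of a fully resolved disc around an unresolved star.

    The coherent (stellar) flux is diluted by the incoherent disc flux:
    V = V_star / (1 + f), so for small f the squared visibility drops by
    ~2f and the flux ratio is half the relative V^2 deficit.
    """
    return 0.5 * (v2_star - v2_measured) / v2_star

# A 2% deficit in V^2 corresponds to a disc-to-star flux ratio of ~1%,
# which is why ~1% accuracy on individual V^2 measurements is needed.
f = disc_to_star_flux_ratio(0.98)
```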

New opportunities with LBT and VLT interferometry
Recently, the Large Binocular Telescope Interferometer (LBTI) started operations and will survey approximately 50 stars in the mid-IR for warm exozodis with unprecedented sensitivity. However, this survey is designed only to detect exozodis and measure their flux levels, while only limited information on the detected systems will be derived. Only the characterisation of exozodis can answer the two most urgent key questions beyond the frequency and abundance of massive dust systems:
1. What is the connection between the warm and hot dust? This is important because most systems so far have been detected in the near-IR, but the implications of this for the presence of habitable zone dust are unclear. A tentative anti-correlation between the presence of hot and warm dust has been suggested (Mennesson et al. 2015).
2. What are the dust properties? The dust is detected as thermal emission in the mid-IR and as a potential combination of thermal emission and scattered light in the near-IR. Only with a detailed knowledge of the dust properties is it possible to estimate from these observations the amount of scattered light expected in the optical, where future exo-Earth imaging missions will operate.
The second-generation VLTI instruments GRAVITY and MATISSE, together with PIONIER, provide the ideal tools to address these questions through multi-wavelength measurements of the spectral energy distribution (SED) of the excess emission in the near-IR to mid-IR wavelength range (Figure 3-2). In this range the emission of warm and hot dust peaks and carries most of the information about the dust temperature and properties. The well-established detection strategy used for FLUOR and PIONIER can be employed with all instruments. The broad spectral capabilities of GRAVITY and MATISSE will allow strong constraints on the dust properties through a better characterisation of the SED shape and the potential detection of dust emission features (e.g., 3 µm and 10 µm silicate features). First steps toward a spectral characterisation of the excesses have been taken with PIONIER (Defrère et al. 2012). Furthermore, a new survey in a wavelength range where the dust emits more strongly than in the H band covered by PIONIER will result in a larger sample to characterise, and stronger statistical constraints on the incidence, properties, and evolution of the dust. It has already been shown that the detection rate of hot exozodis is about twice as high in the K band as in the H band at similar accuracy of the visibility measurements. The K-band beam combiner GRAVITY and the short-wavelength channels of MATISSE (L, M bands) will be ideally suited for such surveys, assuming an accuracy on the measurements similar to that of PIONIER can be reached.

Critical technical requirements
There are three main challenges to be overcome to fully characterise hot and warm exozodis with the VLTI:
• It is critical to reach an accuracy of ~1% on the single squared visibilities measured. This is necessary to reach a sufficient cumulative accuracy on the source visibilities to detect the excess and accurately measure the disc-to-star flux ratio. While this is readily reached with PIONIER and is a specification for GRAVITY, it is only expected to be reached with MATISSE if a fringe tracker is used.
• Pointing (field rotation) dependent polarisation effects in the VLTI optical path limit the absolute calibration of single PIONIER observations to ~3%. A specific observing strategy and additional calibration of this effect have to be employed to circumvent this limit. This, however, requires observations of a large number of targets in a consistent way throughout a whole night, which can only be carried out in visitor mode and puts significant limits on the flexibility of such observations. Correcting this effect on the instrument side (as expected for GRAVITY) or solving it on the VLTI side is critical for efficient and flexible high-accuracy observations.
• Reaching a sufficient cumulative accuracy on the measured source visibilities requires several measurements of one target. PIONIER has proven to be very efficient due to the simultaneous use of four telescopes (six baselines). Still, several consecutive, calibrated measurements per target are necessary. Moreover, a potential variability of the hot emission has been detected on a timescale at least as short as one year (Ertel et al. 2016). The shortest timescale of these variations is not known. This variability calls at least for quasi-simultaneous observations of a target with all three instruments.
To significantly increase the efficiency of the observations for both a survey for new systems and the characterisation of known systems, a fully simultaneous use of all three instruments (such as the "i-SHOOTER" concept) will be highly beneficial.
In addition to these critical requirements, further increasing the sensitivity to circumstellar excess would be highly beneficial for exozodi science with the VLTI. In particular, it would allow surveys for exozodiacal dust in the Southern hemisphere with a sensitivity similar to that reached in the North with the LBTI, and the detection of systems only a few times brighter than our own zodiacal dust. The sensitivity of VLTI observations to faint circumstellar emission is currently limited by the ability to accurately measure and calibrate visibilities and to predict the stellar visibilities. These limitations can be overcome by nulling interferometry, where the stellar contribution is removed from the signal through destructive interference and only the extended circumstellar emission remains and can be detected directly. Such an instrument concept, as introduced by Defrère et al. (this report), can improve the high-contrast capabilities of the VLTI by one order of magnitude.

The addition of high spectral resolution to the capabilities of optical interferometers has considerable potential for important discoveries on stars, as already demonstrated by the VLTI first-generation instruments AMBER (Petrov et al. 2007) and MIDI (Leinert et al. 2003). In less than a decade these two instruments produced more than half of the peer-reviewed papers on interferometry, and their spectro-interferometric capability contributed hugely to their success. This is particularly the case for AMBER, which offers a high spectral resolution mode with R = 12 000, allowing variations of visibility and phase to be probed in narrow spectral features such as atomic and molecular lines. At the CHARA array, VEGA (Mourard et al. 2009), which operates in the visible, also demonstrated the capability of high spectral-resolution interferometry, with a resolution up to R = 30 000, to probe stars in photospheric and circumstellar lines.

Stellar physics science cases for high spectral resolution interferometry and very long baselines
On the other hand, kilometric baselines would also be useful for several topics in stellar physics. Nevertheless, many types of astrophysical objects have representatives already resolvable with hectometric baselines, at least in the visible. For a sun-like spectral type, such a diameter roughly corresponds to CHARA's current limiting magnitude in the visible, i.e. mV = 8. Consequently, the limit on diameter measurements for later spectral types is set by sensitivity, not by angular resolution.
Therefore, the first priority should be on the spectral resolution (R>30 000) and number of apertures (≥6), rather than on the access to ultra-high angular resolution (<0.1 mas). Moreover, we can address a number of important topics in stellar physics by implementing the high spectral resolution capability for a small fraction of the cost of building very long baselines with large telescopes.

Interferometry at very high spectral resolution
Having access to high spectral and spatial resolution simultaneously through the interferometric equivalent of integral-field spectroscopy would provide extraordinary new research opportunities. High resolution spectroscopy is historically at the origin of astrophysics in general, and stellar physics in particular. In the context of optical interferometry, high resolution spectroscopy can add many physical dimensions to the classical interferometric measurements.
Using the Doppler effect on photospheric lines, the kinematics of stellar surfaces can be measured with unprecedented accuracy. The possibility to estimate radial velocities in a spatially resolved manner has strong potential to characterise the surface and environment of nearby stars. Considering the typical velocities encountered in stellar physics, a spectral resolution of 30 000, i.e. 10 km/s in terms of Doppler shift, is a lower limit to spectrally resolve most photospheric lines.
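The quoted numbers follow directly from Δv = c/R; a small sketch (the function name and comparison values are ours, for illustration):

```python
C_KM_S = 299_792.458  # speed of light in km/s

def velocity_resolution_km_s(R):
    """Doppler velocity element Delta v = c / R for spectral resolution R."""
    return C_KM_S / R

# R = 30 000 gives ~10 km/s, the lower limit quoted above for resolving
# photospheric lines; AMBER's R = 12 000 corresponds to ~25 km/s, and
# VEGA's R = 30 000 matches the requirement.
dv_required = velocity_resolution_km_s(30_000)
dv_amber = velocity_resolution_km_s(12_000)
```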
The change of opacity between various lines and the surrounding continuum can also be used to probe the vertical structure of stellar atmospheres, as is done on the Sun. Finally, high-spectral-resolution interferometry gives access to the spatially resolved chemical composition of stellar photospheres and their close environments. However, AMBER suffers from a limited spectral resolution of R = 12 000 and too-short baselines for the K band (160 m, i.e. 3 mas resolution), and VEGA from a very low sensitivity at high spectral resolution, with a limiting magnitude of about mR = 3.5. Moreover, in that context, GRAVITY, the second-generation instrument at the VLTI in the near-infrared, is not an improvement over AMBER, as its highest spectral resolution is only 4000, dramatically reducing its utility for stellar physics.
Consequently, the development of a new generation of near-infrared and visible spectro-interferometric instruments with both a higher spectral resolution and high enough sensitivity is critical to achieve the full potential of interferometry in stellar physics. The velocity field of the convective cells of red giants and red supergiants could be inferred, and constraints could be put on the physical properties of their atmospheres by taking advantage of the variation of the opacity in spectral lines compared to the surrounding continuum (Chiavassa et al. 2011). Using longer baselines, the properties of convection in main-sequence and sub-giant stars could also be studied, at least in a statistical approach as proposed by Chiavassa et al. (2014).
The study of pulsations could also benefit from such spectro-interferometric instruments. This is especially the case for pulsating stars such as Cepheids and RR Lyrae, for which the radial velocity could be spatially resolved. Surface velocity mapping could also be achieved on main-sequence and giant stars, adding constraints to asteroseismic measurements.
In a more general manner, the vertical atmospheric structure of many stars at various metallicities (including nearby metal-poor) could be inferred. In the case of chemically peculiar stars, surface inhomogeneities could also be spatially resolved.

Although the study of circumstellar environments has strongly benefited from the current generation of spectro-interferometric instruments, a higher sensitivity and access to a higher spectral resolution will bring new insights in that field, especially for fainter objects such as young stellar objects, AGB, or post-AGB stars.
Finally, spectro-interferometry is also a very promising technique to study close binary systems. Spectra from the components can be separated, allowing their fundamental parameters to be refined. In the case of faint companions, ultra-accurate (< 1 µas) differential spectro-astrometry could be achieved to determine their orbits. Mass transfer in interacting binaries can also be spatially and kinematically constrained, allowing the Roche-lobe overflow and the accretion to be mapped in three dimensions.
To be efficient, the wavelength range of such studies should cover both the visible and the near-infrared, ideally from the B to K bands (0.4 to 2.5 µm). This would provide access to most of the important spectral signatures. The thermal infrared domain is also valuable to probe the dust distribution at intermediate temperatures, in conjunction with the near-infrared (hot dust close to the stars) and the millimetric domain (cool dust).
As a side note, magnetic field measurements using the polarimetric Zeeman effect are also possible, although the practical implementation of this capability in optical interferometry poses technical difficulties.

Toward kilometric baselines
Spectral resolution is essential for many topics, but for a few others there is no substitute for the angular resolution delivered by very long baselines. For instance, properly resolving the white dwarf Sirius B (~40 µas in angular diameter) would require a ~3 km baseline in the visible. High-mass X-ray binary systems are in the same league. Very metal-poor stars in globular clusters (including RR Lyrae) are also very small angularly; the determination of their physical properties would be very valuable for stellar population studies. Stars in the Galactic Centre and the supermassive black hole itself are also a reason to go for ultra-long baselines, but since observations of the Galactic Centre are restricted to the K band or longer wavelengths, this would mean baselines of >10 km. It should be noted that extremely long baselines of several kilometres in length require very large apertures (i.e. 8 m telescopes with AO) to have sufficient sensitivity. This is simply a consequence of the surface brightness of stars as thermal sources. To obtain a reasonable (u,v) coverage, we would need ≥6 of these large apertures. This is not impossible, of course, but costly. Apertures that are too small would strongly limit the usefulness of the extremely long baselines (e.g. only very hot stars would be observable), unless the typical sensitivity can be significantly improved over current limits.

Infrared (IR) interferometry has made widely recognised contributions to the way we look at the dusty environment of supermassive black holes on parsec scales (see Netzer 2015 for a review of the field). It finally provided direct evidence for orientation-dependent unification of active galactic nuclei (AGN); however, it also showed that the classical "torus" picture is oversimplified.
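As a side note, the baseline estimates quoted above for Sirius B and the Galactic Centre follow from the simple requirement B ≈ λ/θ; a back-of-the-envelope sketch (the wavelengths are chosen by us for illustration):

```python
import math

MAS_TO_RAD = math.pi / (180 * 3600 * 1000)  # milli-arcsecond -> radians

def baseline_to_resolve(theta_mas, wavelength_m):
    """Baseline B ~ lambda / theta needed to resolve angular scale theta."""
    return wavelength_m / (theta_mas * MAS_TO_RAD)

# Sirius B (~40 uas = 0.04 mas) at 0.6 um in the visible: ~3 km.
b_sirius = baseline_to_resolve(0.04, 0.6e-6)
# The same angular scale in the K band (2.2 um), the shortest usable
# wavelength toward the Galactic Centre, pushes the baseline past 10 km.
b_k_band = baseline_to_resolve(0.04, 2.2e-6)
```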
New scientific opportunities for AGN have been suggested, and will soon be carried out, focusing on the dynamical aspects of spectrally and spatially resolved interferometry (most notably with the new VLTI GRAVITY instrument), as well as the potential to employ interferometry for cosmology (Hönig et al. 2014).

The past ten years
Long-baseline infrared (IR) interferometry is a rather young technique for extragalactic science. The key challenge is the available combination of sensitivity and baseline length of most current interferometers. Most compact extragalactic targets are faint for interferometric purposes and require the light-collecting power of 8 m class telescopes.

The major focus of this first decade was the study of the dusty environment of the actively accreting supermassive black hole. It was revealed that this "dusty torus" is clumpy, i.e. the dust is confined in optically thick clouds instead of being smoothly distributed (Jaffe et al. 2004, Tristram et al. 2007). This confirmed early predictions and initiated a decisive shift within the community towards models that reproduce this clumpiness.
With the help of the growing sample of AGN, it was realised that the dusty environment is quite diverse among different objects (Tristram et al. 2009, Burtscher et al. 2013). This diversity may be influenced by the accretion rate and/or luminosity of the AGN, but a clear answer to this is still pending (Kishimoto et al. 2011a, 2011b). More recently, detailed mid-IR interferometry of the brightest targets indicated that the classical torus picture is probably too simplistic. Instead, a combination of a dusty disc and dusty winds may provide a better representation of the dust distribution and the kinematics of the nuclear region.

New scientific pathways for the next decade
New instruments on existing interferometers and plans for new facilities will provide exciting new opportunities for extragalactic astrophysics. In the following, some near- and mid-term science cases are outlined.

Toward a new paradigm for the dust distribution around AGN
With the upcoming new instruments at the VLTI (MATISSE and GRAVITY) with full imaging capability, efforts to understand the disc+wind features in the dust distribution around AGN will be intensified. Key questions that will be addressed are: Why do we see a two-phase structure? What is the physical origin? How do the parsec scale dusty wind features fit into the large-scale AGN winds and black hole feedback? These efforts will require targeted, detailed interferometry (including phase information) and further pushes in theory towards full radiative (magneto-) hydrodynamical simulations of the dusty environment.

Unveiling the black hole in the centre of the Milky Way
Since 2016, the VLT Interferometer (VLTI) has been equipped with the new near-IR phase-referencing instrument GRAVITY, combining all four UTs. It was specifically designed to track the dim near-IR light from the accretion flow onto the supermassive black hole in the centre of our own Galaxy and from the stars orbiting the black hole. The instrument is poised to achieve micro-arcsecond resolution, which will reveal general relativistic effects in the vicinity of the black hole. The major goal will be observing the periodic signal of gas close to the innermost stable orbit and tracing the relativistic shadow of the emission from the gas as it revolves around the black hole.

AGN as a standard ruler
It was recently suggested that the combination of AGN near-IR time-delay measurements and interferometry can be used to measure direct geometric distances to extragalactic targets (Hönig et al. 2014). Since a sizeable fraction of even the brightest AGN are located in the Hubble flow at distances ≥100 Mpc, this opens the possibility of using AGN as cosmological probes. This method bypasses the cosmic distance ladder and is independent of any other normalisation. Therefore, AGN interferometry can be used to address the origin of the elusive discrepancy between Hubble constant values as measured from low-redshift supernovae and from the high-redshift cosmic microwave background.
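The geometry of this standard-ruler idea can be sketched as follows. The hot-dust region has a physical radius R = c·τ from the near-IR time delay and an angular radius θ from interferometry, so D = R/θ. The numbers below are purely illustrative; the actual analysis in Hönig et al. (2014) is considerably more involved:

```python
import math

C_M_S = 299_792_458.0
PC_M = 3.0857e16                          # parsec in metres
MAS_TO_RAD = math.pi / (180 * 3600 * 1000)

def agn_distance_mpc(delay_days, theta_mas):
    """Geometric distance D = R / theta for the AGN 'dust parallax':
    R = c * tau from the time delay, theta from interferometry."""
    radius_m = C_M_S * delay_days * 86400.0
    theta_rad = theta_mas * MAS_TO_RAD
    return radius_m / theta_rad / (PC_M * 1e6)

# Illustrative numbers: a 100-day near-IR lag combined with a 0.1 mas
# angular radius places the source at ~170 Mpc, i.e. in the Hubble flow.
d = agn_distance_mpc(100.0, 0.1)
```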

Dynamical black hole mass measurements in the local universe and beyond
Getting a hold of black hole-galaxy coevolution requires accurate knowledge of black hole masses. The most direct method relies on tracing the motion of gas or stars in the central regions of active or non-active galaxies. However, given the high masses of the stellar bulges and central star clusters, the sphere of influence of the black hole can only be resolved in a few nearby galaxies with single-dish telescopes. Alternatively, emission-line reverberation mapping traces the dynamical motion of gas clouds in the centre of active galaxies and converts emission-line widths and time delays via the virial theorem into a black hole mass. However, complicated kinematics of inflows and outflows, as well as projection effects, cause significant systematic uncertainties. With the new VLTI/GRAVITY instrument, spatially and spectrally resolved interferometry will make it possible to remove many uncertainties in this method and to observe the sphere of influence beyond galaxies in the local universe.
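The virial estimate at the heart of reverberation mapping can be sketched as follows. The virial factor f and the example numbers are illustrative assumptions; GRAVITY's contribution is precisely to constrain the geometry and kinematics that f otherwise absorbs:

```python
G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
LIGHT_DAY_M = 299_792_458.0 * 86400.0

def virial_mass_msun(line_width_km_s, radius_light_days, f=4.3):
    """Virial black hole mass M = f * v^2 * R / G, with v from the
    broad-line width and R from time delays (or, with GRAVITY,
    measured directly). f = 4.3 is a commonly adopted virial factor;
    its uncertainty dominates the systematic error budget."""
    v = line_width_km_s * 1e3
    R = radius_light_days * LIGHT_DAY_M
    return f * v * v * R / G / M_SUN

# Illustrative: a 3000 km/s line width with a 30 light-day broad-line
# region radius yields a black hole of order 2e8 solar masses.
m_bh = virial_mass_msun(3000.0, 30.0)
```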
The physical nature of time-variable objects is often inferred from photometric light curves and spectroscopic variations. Optical long-baseline interferometry (OLBI) has the power to resolve the spatial structure of time-variable sources directly, in order to measure their physical properties and test the physics of the underlying models. Recent studies of variable objects using OLBI include measuring the angular expansion and spatial structure during the early stages of nova outbursts, studying the transits and tidal distortions of the components in eclipsing and interacting binaries, measuring the radial pulsations in Cepheid variables, monitoring changes in the circumstellar discs around rapidly rotating massive stars, and imaging starspot cycles. Future applications will include measuring the image size and centroid displacements in gravitational microlensing events. Upcoming projects like the Large Synoptic Survey Telescope will dramatically increase the number of time-variable objects detected each year, potentially providing many new variable targets to observe using OLBI. For short-lived transient events, it is critical for interferometric arrays to have the flexibility to respond rapidly to targets of opportunity, in order to optimise the selection of baselines and beam combiners providing the necessary resolution and sensitivity to resolve the source as its brightness and size change. In this chapter we discuss the science opportunities made possible by resolving variable sources using OLBI.

Angular Expansion and Asymmetries in Novae Outbursts
A classical nova occurs when material accreting onto the surface of a white dwarf in a close binary system ignites in a thermonuclear runaway (Bode & Evans 2008). Studying the structure of novae during the earliest phases requires spatial resolutions of the order of milli-arcseconds. OLBI can measure the size and expansion of novae as early as days to weeks after the explosion. This has been accomplished for seven bright novae using interferometers in operation over the past couple of decades (see the review by Chesneau & Banerjee 2012). About three classical novae are detected per year in the Galaxy (Warner 2008), and only about one per year is brighter than V = 6-8 mag at peak, within the sensitivity range of current interferometers. LSST could significantly increase the number of Galactic novae detected per year.
The angular expansion curve and reconstructed images of Nova Del 2013 are shown in Figure 6-1 (Schaefer et al. 2014). Changes in the apparent expansion rate can be explained by a model consisting of an optically thick core surrounded by a diffuse envelope. The optical depth of the ejected material changes as it expands and cools. Studying how the structure of novae changes at the earliest stages brings new insights to theoretical models of nova eruptions. Interferometric observations indicate that elliptical asymmetries can develop as early as a few days after the outburst, suggesting that the explosions might be inherently bipolar (Orio & Shaviv 1993; Porter et al. 1998; Scott 2000) or that the elliptical shape develops early during the common-envelope phase (Livio et al. 1990; Lloyd et al. 1997). The angular expansion rate can be combined with radial velocity measurements to derive a geometric distance to the nova. Multi-wavelength interferometric observations can reveal wavelength-dependent changes in the optical depth of the expanding material, whereas imaging in the mid-infrared can shed light on where dust formation might occur.
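The geometric (expansion-parallax) distance mentioned above reduces to d = v_exp / (dθ/dt); a sketch with illustrative numbers (real novae require modelling of the asymmetries and optical-depth effects discussed above):

```python
import math

AU_KM = 1.495978707e8
MAS_TO_RAD = math.pi / (180 * 3600 * 1000)

def expansion_distance_pc(v_exp_km_s, theta_dot_mas_per_day):
    """Expansion-parallax distance d = v_exp / (d theta / dt),
    combining the spectroscopic expansion velocity with the angular
    expansion rate measured interferometrically (assumes spherical,
    uniform expansion)."""
    theta_dot_rad_s = theta_dot_mas_per_day * MAS_TO_RAD / 86400.0
    d_km = v_exp_km_s / theta_dot_rad_s
    return d_km / (AU_KM * 206_265.0)  # km -> parsec

# Illustrative: ejecta at 600 km/s expanding at 0.2 mas/day
# place the nova at roughly 1.7 kpc.
d = expansion_distance_pc(600.0, 0.2)
```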

Eclipsing Binaries
OLBI has produced images, mapped the orbits, and studied the circumstellar environments of short-period interacting binaries where the components show tidal distortions due to a Roche-lobe-filling companion star. More of these types of systems will continue to be discovered through automated sky surveys. Resolving the transiting discs through OLBI could reveal sub-structure within the discs, such as density waves, rings, and bow shocks, providing clues on the evolutionary status and history of mass transfer within these systems.

Radial Pulsations in Cepheid Variables
Cepheid variables are the central rung of the extragalactic distance ladder. Their pulsation periods (~2-100 days) are directly correlated with their luminosities through the Leavitt period-luminosity relation. As supergiant stars with a high intrinsic brightness (up to 10^5 L⊙), they are observable in distant galaxies and can therefore be used both as primary distance indicators in the Local Group and to calibrate secondary distance indicators. The Cepheid distance scale is one of the two most competitive methods for determining the Hubble constant H0 (the other being the cosmic microwave background). To accurately determine distances, and thus cosmological parameters, an accurate calibration of the period-luminosity relation is crucial. However, systematics in the zero-point still exist at the 5-10% level, contributing the largest source of error to the H0 error budget (Riess et al. 2011). The most common way to calibrate the zero-point is to use the Baade-Wesselink method (also called the parallax of pulsation). This technique combines the linear radius variation (from integration of the radial velocity over the pulsation period) with the angular diameter variation to derive the distance. The method depends on the projection factor used to convert the radial velocity into the pulsation velocity. This factor is the main source of bias in the technique; it relates to the line-forming regions in the atmosphere, the limb darkening, and the photosphere dynamics.
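The core of the parallax-of-pulsation method reduces to a simple proportionality between the linear and angular radius variations; a sketch with illustrative numbers (the hard part, hidden inside ΔR here, is the projection factor p discussed above):

```python
import math

KM_PER_PC = 3.0857e13
MAS_TO_RAD = math.pi / (180 * 3600 * 1000)

def baade_wesselink_distance_pc(delta_radius_km, delta_theta_mas):
    """Parallax-of-pulsation distance: the linear radius variation
    Delta R (from integrating p * v_rad over the pulsation cycle)
    equals d * Delta(theta) / 2, where Delta(theta) is the angular
    *diameter* variation measured interferometrically."""
    d_km = 2.0 * delta_radius_km / (delta_theta_mas * MAS_TO_RAD)
    return d_km / KM_PER_PC

# Illustrative: a radius amplitude of 2e6 km (a few solar radii) with a
# 0.1 mas angular-diameter amplitude corresponds to roughly 270 pc.
d = baade_wesselink_distance_pc(2e6, 0.1)
```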
The two main methods to measure the angular variation are the surface brightness-colour relation, based on photometric light curves, and direct interferometric measurements. Monitoring the pulsation of Cepheids using OLBI provides a powerful way to measure the cyclic change in the angular diameter and derive the distance without relying on photometrically determined properties.

Approaches that combine time series of all observables, including photometry, spectroscopy, and interferometry (Breitfelder et al. 2016), provide a way to reduce statistical errors, use the redundancy between observables to mitigate systematic uncertainties, and achieve a 2% accuracy on the derived radii and distances. Furthermore, spectro-interferometric capabilities show promising prospects for resolving the velocity field at the surface of a Cepheid, probing the dynamical structure of the atmosphere with time, and investigating the projection factor which biases distances measured using the Baade-Wesselink method.

Variations in the Discs around Be Stars
Be stars are rapidly rotating B-type stars that eject gas into a circumstellar disc. The properties of the discs can be derived from the presence of double-peaked emission-line profiles, infrared excesses, and linear polarisation, and by spatially resolving the disc through OLBI (see the review by Rivinius et al. 2013). Cyclical variations in the intensity of the blue- and red-shifted peaks of the emission lines over timescales of years to decades can be explained by a one-armed spiral density wave that precesses with the rotation of the disc (Okazaki 1991). OLBI has been used to measure these spatial asymmetries within the discs of Be stars (Carciofi et al.).

Imaging Starspots
Starspots on cool stars result from strong magnetic fields suppressing convection in the outer layers of the star (e.g., Strassmeier 2009, and references therein). These spots can evolve significantly on observable timescales, ranging from a few days (e.g., Roettenbacher et al. 2013) to months (e.g., Henry et al. 1995). Much work has been done to detect this evolution in both photometric and spectroscopic observations; however, both methods suffer from degeneracies, particularly in spot latitude. Interferometric imaging eliminates these degeneracies, removing the ambiguity in the location of stellar features. Additionally, interferometric imaging provides a complete determination of the star's orientation on the sky.
Recently, the first images of spotted stellar surfaces have been reconstructed from interferometric data (Parks et al. 2015; Roettenbacher et al. 2016). The most recent images reconstruct the entire stellar surface at once, combining many nights of data from a single rotation. The spotted giant primary star of the close, short-period (17.7 days) binary zeta Andromedae shows spot evolution between data sets separated by two years (Roettenbacher et al. 2016; see Figure 6-3). These images also emphasise the distinction between the solar and stellar magnetic dynamos, showing polar spots and latitude asymmetries not seen on the Sun.

Microlensing Events
Gravitational microlensing was first proposed by Paczynski (1986) as an observational technique to probe the dark mass content of the Galaxy's halo. Mao & Paczynski (1991) further extended microlensing applications to the detection of brown dwarfs and exoplanets located in the Galactic disc or bulge. Microlensing observations now show that exoplanets are ubiquitous in our Milky Way (Cassan et al. 2012), and that free-floating exoplanets may be common as well (Sumi et al. 2011).

Characterizing microlensing events by long-baseline interferometry
Gravitational microlensing offers a unique opportunity to detect exoplanets and brown dwarf companions to stars, as well as Galactic stellar-mass black holes. It is based on the bending of light rays originating from a background (bulge) source star passing close to the line of sight to a foreground (lens) star. During a microlensing event, the source is split into several images of different sizes, whose angular separations are typically of the order of a milli-arcsecond. While the photometric light curve of a microlensing event provides important constraints on the physical parameters of the lens, in many cases the lens mass and its distance from Earth remain degenerate. Long-baseline interferometric observations of microlensing events, combined with the modelling of the light curve, can break this degeneracy and further provide the lens-source relative motion. Two approaches are possible: visibility and closure-phase measurements when the individual microlensed images of the source star are resolved, and, if not, astrometric measurements of the photocentre of the differentially magnified images. Microlensing events suitable for interferometric observations can be alerted based on real-time analysis of the photometric light curves monitored by networks of telescopes, from which the date and magnitude of the peak magnification are predicted.
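The milli-arcsecond image separations quoted above are set by the angular Einstein radius, θE = [4GM/c² · (D_S − D_L)/(D_L·D_S)]^1/2; a sketch for a typical bulge event (the lens mass and distances are illustrative):

```python
import math

G = 6.674e-11              # m^3 kg^-1 s^-2
C = 299_792_458.0          # m/s
M_SUN = 1.989e30           # kg
PC_M = 3.0857e16           # parsec in metres
RAD_TO_MAS = (180 * 3600 * 1000) / math.pi

def einstein_radius_mas(lens_mass_msun, d_lens_kpc, d_source_kpc):
    """Angular Einstein radius
    theta_E = sqrt(4 G M / c^2 * (D_S - D_L) / (D_L * D_S))."""
    d_l = d_lens_kpc * 1e3 * PC_M
    d_s = d_source_kpc * 1e3 * PC_M
    theta = math.sqrt(4 * G * lens_mass_msun * M_SUN / C**2
                      * (d_s - d_l) / (d_l * d_s))
    return theta * RAD_TO_MAS

# Illustrative: a 0.5 solar-mass lens at 4 kpc with a bulge source at
# 8 kpc gives theta_E ~ 0.7 mas, i.e. image separations of order a
# milli-arcsecond, within reach of hectometric baselines.
theta_e = einstein_radius_mas(0.5, 4.0, 8.0)
```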

Observational strategy
The goal of the interferometric observation is to measure the two components of the vector Einstein radius θE. At any given time, more than a hundred microlensing events are in progress. The most promising events are followed up at high photometric cadence by survey telescopes such as OGLE (Optical Gravitational Lensing Experiment), MOA (Microlensing Observations in Astrophysics) and KMTnet (Korea Microlensing Telescope Network), and monitored by networks of telescopes such as RoboNet (Las Cumbres Observatory, LCOGT), PLANET (Probing Lensing Anomalies NETwork), μFUN or MiNDSTEp. The interferometric targets have to be identified among this large number of ongoing events. One of the main difficulties is to predict the peak magnitude of the events in advance, but experience with real-time modelling shows that a fair estimate can usually be obtained two (sometimes three) days in advance, which is sufficient for issuing a Target of Opportunity observation 48 h before the peak. The new generation of alert telescopes (2011+) has increased the rate of microlensing event detections from a few hundred to more than 2000 per year, providing an unprecedented basis for selecting interferometric microlensing targets.

In Figure 6-4, we show the cumulative histogram of the number of events as a function of peak K-band magnitude. The first potential target appears at K ≃ 7.8, while 26 events already have K ≤ 10 (i.e., a mean of 6-7 per year). These magnitudes are already within reach of the VLTI using not only the Unit Telescopes (UTs, 8 m), but also the Auxiliary Telescopes (ATs, 1.8 m). An increase of only one magnitude in sensitivity would provide an order of magnitude more microlensing targets for the next generation of instruments.

Measuring the lens mass and distance
The light-curve model provides the parameters of the lens and trajectory and, in favourable cases, also a measurement of the parallax vector πE or the source size ρ in θE units. The best photometric model then predicts the shape and position of the images in θE units at any time t (with a given uncertainty) and thus yields the corresponding visibility pattern, in Einstein units of 1/θE, at time t, which can be compared to interferometric data points in the Einstein (u,v) plane. The two components of θE are then adjusted as two independent parameters to match the predicted and measured visibilities. Besides classical Earth-rotation supersynthesis, microlensing provides an intrinsic supersynthesis of its own: as the source moves relative to the lens, the microlensed images change in position and shape, resulting in a change of the visibility pattern. Depending on the configuration, this change can range from barely noticeable to very strong. In the case of a single lens, for example, the two diametrically opposite images rotate with time around the Einstein ring, but the number, brightnesses and positions of the images are much more complex for a binary or planetary microlensing event.
To illustrate how interferometric observations must be performed to obtain a good constraint on a typical single lens, characterized by u0 = 0.01, tE = 30 d and an Einstein radius with North-East components θE,N = 0.325 mas and θE,E = 0.563 mas (θE = 0.650 mas), we simulated an observation with the ESO/VLTI PIONIER instrument, which combines the light from four Auxiliary Telescopes (ATs) at a time, leading to six simultaneous baselines. We take the errors on the visibility into account by adding 3% Gaussian noise to the squared visibilities. The resulting confidence intervals on θE (1 to 4 σ) obtained after one, two and three observations around the peak of the event are drawn in Figure 6-5.

As seen in the figure, very good constraints are obtained with two (or more) measurements. Hence, a good observing strategy is to observe each microlensing event at least twice; an additional third observation is a plus to ensure a good measurement at magnitudes close to the sensitivity limits of the instruments. Furthermore, observations at three epochs close in time (spread over about 48 h) should reveal the displacement of the multiple images with time.
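The kind of simulation described above can be sketched as follows: at a given epoch, approximate the event by the two unresolved microlensed images of a single lens, compute squared visibilities on a set of baselines, and add 3% Gaussian noise. The baseline coordinates and image parameters below are illustrative placeholders, not the actual PIONIER configuration.

```python
import numpy as np

MAS = np.pi / 180.0 / 3600.0 / 1000.0   # mas -> radians

def v2_two_images(uv, pos1, pos2, f1, f2):
    """Squared visibility of two unresolved microlensed images.

    uv : (N, 2) spatial frequencies B/lambda in cycles/rad
    pos1, pos2 : image positions on sky (mas); f1, f2 : image fluxes
    """
    phase1 = -2j * np.pi * (uv @ (np.asarray(pos1) * MAS))
    phase2 = -2j * np.pi * (uv @ (np.asarray(pos2) * MAS))
    v = (f1 * np.exp(phase1) + f2 * np.exp(phase2)) / (f1 + f2)
    return np.abs(v) ** 2

rng = np.random.default_rng(1)
lam = 1.65e-6                                   # H band (PIONIER), metres
baselines = np.array([[40., 10.], [80., 25.], [100., -40.],
                      [60., 70.], [120., 15.], [30., -90.]])  # metres (placeholder)
uv = baselines / lam
# single-lens images at a moment when the source-lens separation is u = 0.1,
# with thetaE = 0.65 mas (illustrative, consistent with the example in the text)
v2 = v2_two_images(uv, pos1=(0.683, 0.0), pos2=(-0.618, 0.0),
                   f1=5.52, f2=4.52)
v2_obs = v2 + rng.normal(0.0, 0.03, v2.shape)   # 3% Gaussian noise on V^2
print(np.round(v2_obs, 3))
```

Fitting the two components of θE then amounts to rescaling the model (u,v) plane until the predicted V² match the noisy measurements.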

Perspectives
While no interferometric observation of a microlensing event has been performed to date, the current number of microlensing alerts (about 2000 per year delivered by OGLE-IV) and the capabilities of the new generation of robotic telescopes working in networks (like the RoboNet collaboration) have greatly improved our ability to detect and follow up suitable targets for interferometric observations. New perspectives have been opened by recent improvements in the sensitivity of long-baseline interferometers, and we have shown that several microlensing events per year are already within reach. The observational strategy requires a rapid-response photometric follow-up and an efficient alert system, both of which are already in place. Interferometric microlensing observations hold great promise for completely characterizing many more microlensing systems in the near future. Second-generation VLTI instruments are expected to greatly increase the number of microlensing events monitored by interferometers in the coming years.

Introduction
The Atacama Large mm/submm Array (ALMA) is, by far, the most powerful telescope in the mm/submm wavelength range. It consists of 66 antennas, distributed in a "Main Array" (50 antennas of 12 m diameter), a "compact array" (the ACA, with 12 antennas of 7 m diameter), and a set of Total-Power stations (4 antennas of 12 m diameter, used as stand-alone single-dish stations). The Main Array has a "dynamic" configuration (i.e., the antennas can be moved to a set of different positions) that changes throughout the year, with maximum baseline lengths ranging from about 100 metres up to almost 15 km. The observing frequencies range from 86 GHz to 950 GHz, so the synthesized resolution can be as high as a few milli-arcseconds in the most extended configurations. With the combined use of the ACA, the range of spatial scales instantaneously sampled by ALMA can cover several orders of magnitude, with extremely high image fidelity.
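The quoted milli-arcsecond resolutions follow directly from the usual diffraction estimate θ ≈ λ/B_max; a quick back-of-the-envelope check:

```python
import numpy as np

C = 299792458.0                        # speed of light, m/s
RAD_TO_MAS = 180.0 / np.pi * 3600.0 * 1000.0

def synthesized_beam_mas(freq_ghz, baseline_km):
    """Approximate synthesized beam, lambda / B_max, in milli-arcseconds."""
    lam = C / (freq_ghz * 1e9)         # observing wavelength in metres
    return lam / (baseline_km * 1e3) * RAD_TO_MAS

print(f"{synthesized_beam_mas(950.0, 15.0):.1f} mas at 950 GHz on 15 km")
print(f"{synthesized_beam_mas(86.0, 0.1):.0f} mas at 86 GHz on 100 m")
```

At the highest frequency and longest baseline this gives a beam of a few mas, as stated; in the compact configuration at 86 GHz the beam is several arcseconds.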
The power of ALMA comes not only from its large collecting area, large number of antennas, and extremely efficient receivers, but also from its very flexible correlator. With a computing power of nearly 20 PFLOPS (i.e., it is currently the most powerful civil computer in the world), it can reach an instantaneous bandwidth coverage of several GHz (at lower spectral resolutions) or spectral resolutions down to a few tens of m/s (in narrower bandwidths).
In short, the unprecedented collecting area of ALMA, together with its wide frequency range and the flexibility of the array and correlator configurations, makes ALMA one of the most versatile instruments available to the astronomical community: an instrument for big science with an extremely wide range of uses, from star (and planet) formation to evolved stars, Solar-System observations, Galactic and extragalactic chemistry, Active Galactic Nuclei (AGN), cosmology, etc.
Needless to say, ALMA and the VLTI share many key science projects where fruitful synergies can be established and exploited. In the following sections, we briefly describe a (very incomplete) selection of science topics where VLTI and ALMA may complement, and benefit from, each other.

The Galactic Center and Active Galactic Nuclei
SgrA* is the closest supermassive black hole (SMBH) to the Earth (at ~8 kpc and with ~4 million solar masses, e.g. Gillessen et al. 2009) and the most intensively observed one, from radio to gamma rays. It shows strong and rapid variability (e.g., Marrone et al. 2008; Dexter et al. 2013, 2014) and clear signatures of sub-structures at Event-Horizon (EH) scales (~10 µas, e.g. Doeleman et al. 2008). Although the activity of SgrA* is rather low compared to other AGN, its short distance to the Earth still makes it one of the best laboratories to study General Relativity (GR) effects in the strong-gravity regime, as well as the physics of accretion (and eventual jet formation) in SMBHs.
The GRAVITY instrument of the VLTI will allow us to track orbits of material in the inner accretion disc of SgrA*, which also contributes to the mm-wave emission detected with ALMA. In addition, ALMA, as a fundamental component of the Event Horizon Telescope (EHT), will contribute to VLBI mm-wave observations with resolutions able to resolve EH scales and, eventually, detect the EH-related "shadow" of the black hole, projected over the innermost part of the accretion disc (e.g., Fish et al. 2013, see also Figure 7-1). The EHT thus represents a valuable complement to GRAVITY for the study of accretion and post-Newtonian dynamics in SgrA*. Besides this, the high sensitivity of ALMA (used as a stand-alone interferometer) will enable precise variability studies of SgrA* at submm wavelengths (also with high polarization purity), a key source of information for understanding the panchromatic emission mechanisms in SgrA* and how the IR and submm emission are related in this unique source.

Active Galactic Nuclei
The high spectro-astrometric precision of GRAVITY will allow us to resolve velocity gradients in the Broad Line Regions (BLR) of several nearby AGN. It will thus be possible to characterize, with a high accuracy, the dynamical masses of the central AGN engines and characterize their immediate neighborhood. At longer wavelengths, ALMA (as part of the EHT) will also allow us to resolve the spatial scales involved in the jet-launching process, which likely originates at the innermost part of the accretion discs, thus completing the observational picture of accretion and jet production in AGN. At (much) larger scales, VLTI and ALMA will also enable the detailed study of the morphology and temperature gradients in AGN molecular tori, since the IR-to-sub-mm brightness ratios (i.e. the tracing of warm and cold dust) obviously depend on the torus structure. Last but not least, complementary ALMA and VLTI observations can help us deepen our understanding of the trade-off between AGN activity and star formation, by spatially resolving gas inflow and AGN feedback into the ISM.

Evolved stars and stellar atmospheres
The onset of the stellar winds in evolved stars, at scales of a few stellar radii, is still poorly understood. Radiation pressure is not yet effective at such short distances from the star, so other processes (e.g., pulsations, convection, magnetic fields) must be invoked to explain the onset of the winds. Such effects may also be needed to explain the breaking of spherical symmetry at the birth of Planetary Nebulae (PNe).
Whereas high-resolution IR observations with the VLTI allow us to probe extended molecular structures close to the stellar surface in a variety of evolved stars (from RSGs, e.g. Wittkowski et al. 2012, 2015, to AGBs, e.g. Ohnaka), ALMA probes the molecular layers further away, in the dust-formation region, where radiation pressure starts to dominate (e.g., Richards et al. 2014). Combined ALMA and VLTI observations can thus give us the observational link between the regions where the stellar winds are created and enhanced, providing a crucial source of information to understand the long-standing problem of the origin of strong winds in evolved stars (see Figure 7-2).

Young Stellar Objects and planet formation
Both VLTI and ALMA are able to resolve proto-planetary discs with enough resolution to actually detect disc substructures (caused by proto-planet interactions) and/or even planets (with the power of high dynamic-range observations, one of the flagships of ALMA). In addition to this, the high differential-astrometry precision of VLTI can be used to track the eventual position changes of the stellar photo-center, using it as an indirect probe for the existence of planets. The complementarity between ALMA and VLTI in the studies of planet formation is thus very clear.
In regard to the study of Young Stellar Objects (YSOs), the potential of both instruments is remarkable. The VLTI will be able to resolve the jet-launching regions of YSOs at sub-AU scales for objects at typical distances of 100-200 pc. Differential astrometry with GRAVITY will also allow us to track, in real time, the evolution of individual outflow components (typical speeds of the lower outflow components are of the order of 150 km/s, e.g. Pio et al. 2009). On the other hand, ALMA, at its highest spectral resolution, will allow us to study the outflow dynamics in great detail, with velocity precisions down to a few tens of m/s.

Conclusions
The main purpose of this chapter is to give the reader a few example ideas (covering a wide range of science cases) of the potential for a ground-breaking symbiosis between ALMA and the next-generation VLTI back-ends, which will dramatically enhance the sensitivity and astrometric power of IR/NIR interferometry. ALMA, already in its Early Science stage, has become a game changer in the mm/submm astronomy community. It will soon start its full science operations, notably increasing (even more!) the quantity and quality of its observations and guiding us into a brand new era of astronomy. A new window in sensitivity and resolution has just been opened to us at submm wavelengths.
The combined use of ALMA and VLTI, which are essentially complementary in many aspects, can only boost the potential of both instruments in all their key science cases. These are new powerful eyes to look at the Universe, which, used together, will lead us to new and fascinating discoveries, just waiting to be caught by our telescopes.
We present the most important science cases for visible interferometry from the various topics discussed during a meeting held in Nice in 2014. For more details and references, please read the corresponding contribution in the white book on visible interferometry or contact the above authors.

Fundamental properties of main sequence and sub-giant stars:
A major topic for visible interferometry is the diameter estimation of planet-hosting stars for characterization of planetary systems and in particular to determine the planet's radius. This requires up to 2% precision on the visibility measurements.
Determining the effective temperatures of metal-poor stars is also fundamental: the interferometric temperatures provide reference points for fixing the effective temperature scale. This in turn allows a more precise and accurate determination of the chemical abundances of more distant stars for studying the chemical evolution of our Galaxy.
Radii and effective temperatures can be used for calibrating the 1D mixing-length parameter in stellar evolution models and determining ages of stars. Interferometric radii are complementary to asteroseismic data for solar-like stars for determining precise masses (3%) and ages (10%).
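The interferometric route to effective temperatures rests on a simple relation between the limb-darkened angular diameter θ_LD and the bolometric flux F_bol: Teff = (4 F_bol / (σ θ_LD²))^(1/4). A minimal sketch with illustrative, roughly Sun-like numbers (not values from this text):

```python
import numpy as np

SIGMA_SB = 5.670374419e-8             # Stefan-Boltzmann constant, W m^-2 K^-4
MAS_TO_RAD = np.pi / 180.0 / 3600.0 / 1000.0

def teff_from_interferometry(theta_ld_mas, fbol_w_m2):
    """Effective temperature from angular diameter + bolometric flux:
    Teff = (4 * Fbol / (sigma * theta_LD**2)) ** 0.25
    """
    theta = theta_ld_mas * MAS_TO_RAD
    return (4.0 * fbol_w_m2 / (SIGMA_SB * theta**2)) ** 0.25

# illustrative values: a Sun-like star at roughly 10 pc
print(f"Teff = {teff_from_interferometry(0.9, 3.3e-10):.0f} K")
```

The quadratic dependence on θ_LD is why percent-level diameter precision translates directly into sub-percent temperature precision.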
Magnetic inflation of cool, rapidly rotating stars can also be studied. Closure phases and imaging make possible the study of stellar granulation and the detection of planetary transits in the visible, as well as the determination of limb-darkening, surface spots and surface inhomogeneities in stars across the HR diagram.
For these studies of the fundamental parameters of main-sequence and sub-giant stars, the most important requirements are increased angular resolution and sensitivity compared to today's facilities and, for higher precision, more telescope time.

Pre-Main Sequence Stars:
The young stellar objects (YSOs) present a strong science case for interferometry in the visible. Interferometric imaging and spectroscopy will provide unique and complementary data for understanding star and planet formation. The techniques will probe the innermost regions of protoplanetary discs and will enable diameter measurements of stars still contracting to the main sequence.
This science case is very challenging, mainly because of the faintness of the targets. Determining the fundamental parameters of pre-main-sequence stars is certainly a key subject. For instance, measuring the angular diameters of stars in Nearby Young Moving Groups (NYMGs) will provide their physical diameters and hence an independent measurement of their ages.
Revealing complex structures around young stellar objects, such as spirals, gaps and holes at distances of a few tens of astronomical units, and probing the innermost regions are key to understanding star-disc-proto-planet(s) interactions.
The detection of accretion-ejection tracers is also possible by studying the He I 10830 Å line as well as recombination and forbidden lines.
Companion hunting with high-contrast imaging is a key issue for studies of pre-main-sequence star formation, and scattered-light studies in the visible can reveal asymmetric structures that might be linked to planet formation.

Breaking the frontier of the cosmic distance scale using visible interferometry of cepheids and eclipsing binaries
The general principle of this science case is to strengthen the physics of Cepheids in order to improve the calibration of the distance scale. High-spectral-resolution visible interferometry is an asset for this science case.
The projection factor can be studied through the detection and characterization of Cepheid binary systems. Such detections also bring direct constraints on the masses of Cepheids.
The limb-darkening of Cepheids as a constraint on the projection factor also constitutes a key topic. A precision of 5% on the visibility in the second lobe would give the average limb-darkening of the star to about 1% precision; a precision of 1% on the visibility in the second lobe would provide the time dependence of the limb-darkening with a 5σ detection.
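The sensitivity of the second visibility lobe to limb-darkening can be illustrated numerically. The sketch below assumes a simple power-law brightness profile I(μ) = μ^α (a common single-parameter model, used here purely for illustration) and evaluates |V| by a numerical Hankel transform; α = 0 recovers the uniform disc, V = 2J1(x)/x.

```python
import numpy as np
from scipy.special import j0

def _trapz(y, x):
    """Trapezoidal rule along the last axis."""
    return np.sum((y[..., 1:] + y[..., :-1]) * np.diff(x) / 2.0, axis=-1)

def disc_visibility(x, alpha=0.0, n=2001):
    """|V| of a power-law limb-darkened disc, I(mu) = mu**alpha with
    mu = sqrt(1 - r**2); x = pi * theta_LD * B / lambda."""
    r = np.linspace(0.0, 1.0, n)
    profile = (1.0 - r**2) ** (alpha / 2.0)
    num = _trapz(profile * j0(np.outer(np.atleast_1d(x), r)) * r, r)
    return np.abs(num / _trapz(profile * r, r))

# second lobe of a uniform disc: between the nulls at x ~ 3.83 and ~ 7.02
x = np.linspace(4.0, 7.0, 300)
for a in (0.0, 0.5, 1.0):
    print(f"alpha = {a}: second-lobe max |V| = {disc_visibility(x, a).max():.3f}")
```

The second-lobe amplitude shrinks as the darkening increases, which is exactly why percent-level visibility measurements there pin down the limb-darkening law.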
Characterizing the circumstellar environment of Cepheids in the visible is mandatory since the circumstellar envelopes we have discovered around several Galactic Cepheids create a positive bias on their apparent luminosity.

The robust Baade-Wesselink method combined with Gaia can put a constraint on the period-projection factor relation. For instance, a 2% precision on the distance (or, equivalently, 2% on the projection factor) for 75 Cepheids with CHARA and 40 with the VLTI would lead to a 2% precision on the individual projection factors, bringing exciting results on the period-projection factor relation.
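The inverse Baade-Wesselink principle behind these estimates can be sketched as follows: the peak-to-peak linear radius variation (the projection factor p times the time-integrated pulsational radial velocity) is divided by the peak-to-peak angular diameter variation to yield the distance. All numbers below, including p = 1.27, are illustrative, not taken from this text.

```python
import numpy as np

PC_M = 3.0857e16                       # parsec in metres
MAS_TO_RAD = np.pi / 180.0 / 3600.0 / 1000.0

def bw_distance_pc(t_days, v_puls_kms, dtheta_pp_mas, p=1.27):
    """Baade-Wesselink sketch: distance from the peak-to-peak linear
    radius change (p * integrated pulsation velocity) over the
    peak-to-peak angular DIAMETER change from interferometry."""
    t = np.asarray(t_days) * 86400.0
    v = np.asarray(v_puls_kms) * 1e3
    # cumulative trapezoid integral of v dt -> radius displacement
    dR = -p * np.concatenate(([0.0],
                              np.cumsum((v[1:] + v[:-1]) / 2.0 * np.diff(t))))
    dR_pp = dR.max() - dR.min()
    return 2.0 * dR_pp / (dtheta_pp_mas * MAS_TO_RAD) / PC_M

# synthetic sinusoidal pulsation: amplitude 15 km/s, period 10 d
t = np.linspace(0.0, 10.0, 1000)
v = -15.0 * np.sin(2.0 * np.pi * t / 10.0)
print(f"d = {bw_distance_pc(t, v, dtheta_pp_mas=0.140):.0f} pc")
```

The distance scales inversely with p, which is why a 2% error on the projection factor maps directly onto a 2% error on the distance.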
In addition, with a new generation of visible long-baseline interferometry instrumentation, it will be possible to measure orbital separations of a large sample of Galactic eclipsing binaries.
Thus, we could compare several types of distances: the dynamical distance from interferometric measurements of binary Cepheids; the distance derived from the classical approach, photometry and/or spectroscopy using the surface brightness-colour relation (SBCR); the distance obtained with the Baade-Wesselink approach using interferometric diameters; and, finally, trigonometric parallaxes from Gaia. Comparing these different approaches will greatly help in resolving the systematics of each method.
Finally, as for Cepheids, the projection factor of delta Scuti and RR Lyrae stars can be constrained.
The objective is to derive the expected period-projection factor relation of RR Lyrae and High-Amplitude delta Scuti stars (HADS) by applying the inverse BW method. HADS are generally supposed to pulsate radially. However, weak non-radial pulsations could be detected using long baseline visible interferometry. In that case, the Baade-Wesselink method, if revised, can constitute an interesting tool to distinguish radial and non-radial modes of pulsation.
Last but not least, another very interesting aspect is to study pulsating stars in binaries in order to measure their orbital solutions with interferometry and derive their masses, which puts important constraints on the evolutionary models. This concerns not only delta Scuti stars, but also RR Lyrae, Cepheids, gamma Dor and roAp stars.
Finally, a next-generation visible interferometer with high-precision measurements, six telescopes, baselines of 300 m and high spectral resolution would be extremely important to reach 1% precision and accuracy on distances in our Galaxy and beyond, and also to understand the physics of Cepheids and eclipsing binaries. Such improvements are crucial to improve the precision and reduce the systematics of the Hubble constant.

Massive Stars
The study of massive multiple stars is a key topic in massive-star formation theories. Interferometry in the visible is able to resolve binaries with separations ranging between 1 and 40 mas (at 0.6 µm). Establishing the distributions of binary separations and mass ratios is essential to constrain the influence of mass loss and mass transfer on the evolution of high-mass binaries and to ascertain the origin and properties of their remnants. The nature of the circumstellar environment (CE) of active massive stars is also central to understanding many issues related to these objects. As a natural tracer of mass loss, the study of this material is crucial to improve massive-star evolution models.
Moreover, the effect of rotation on the non-spherical distribution of mass loss from these luminous objects is an active subject of debate. This is especially true for classical Be stars, which are known to be the fastest-rotating non-degenerate stars.
To make progress in the understanding of these objects and the role of rotation in mass-loss processes, one needs to accurately determine the structure of their CE. Bn stars, which rotate nearly as fast as Be stars, do not have spectra perturbed by circumstellar matter, so that the study of their apparent geometry, rotation and differential-rotation effects can be carried out more properly and reliably. By providing simultaneously high spatial (0.2 mas) and spectral (R = 100000) resolution, interferometry in the visible is particularly well suited for the study of these objects.
The direct detection of non-radial pulsations of massive stars is also foreseen with a visible instrument. They can be detected in the dynamic spectra of photo-centre shift variability, characterized by bumps travelling from blue to red within the spectral lines. Theoretical estimates of the expected signal-to-noise ratios in differential speckle interferometry have demonstrated the practical applicability of the technique to a large number of sources.

Evolved stars, Planetary Nebulae, delta Scuti, and RR Lyrae as seen by a visible interferometer
The atmosphere of an Asymptotic Giant Branch (AGB) star is cool enough to allow the formation of various dust species. Dust likely plays a crucial role in the AGB stellar wind, which is still not understood. The gain of going to the visible to study the dust distribution around AGBs is that the contribution of scattering by dust becomes more important than in the infrared. The challenge is that dusty stars are faint in the visible. Another promising subject is to probe the TiO lines with a visible instrument. TiO may serve as seed nuclei for dust formation; it is therefore important to know the physical properties of the TiO layers to understand dust formation.
The geometrical characterization of shock waves in Mira stars is probably feasible in the visible. Radially pulsating AGB stars (i.e., Mira stars) are characterized by strong emission in the Balmer lines, linked to the propagation of a strong hypersonic radiative shock wave that may also contribute to the mass loss. Many AGBs are faint because of dust obscuration: typical magnitudes for the most evolved Mira variables vary between 8 and 13 mag in the V band. One also has to take into account that Mira variables can change their brightness by up to 9 magnitudes within one year.
Post-AGB stars are commonly found in binary systems with separations of typically 1 AU, surrounded by rather compact, stable circumbinary discs that need complete characterization. The spatially resolved spectra of regions with strong local magnetic fields (e.g., star spots) will show Zeeman splitting, which will enable us to map the magnetic fields over the surfaces of stars.
Imaging the chromospheres of stars other than the Sun will become possible with a visible instrument; their physical properties will be derived from the H alpha and Ca II lines. High-resolution imaging will help to understand the heating mechanism of the chromosphere, its role in the mass loss, and why it can coexist with the molecular component.
Massive evolved stars with masses between roughly 10 and 25 solar masses spend some time as red supergiants (RSGs), making them the largest stars in the Universe. Understanding the physics of their convective envelopes is crucial for these stars, which contribute extensively to the dust production and chemical enrichment of galaxies. Moreover, the mass loss significantly affects the evolution of massive stars and is a key to understanding the progenitors of core-collapse supernovae. The effects of convection and non-radial waves can be represented by multi-dimensional time-dependent radiation-hydrodynamics (RHD) simulations with realistic input physics, carried out with the CO5BOLD code. These simulations have been used extensively to predict and interpret interferometric observations.

Interacting binaries
The presence of a companion star will be betrayed by its signature in the interferometric signal for any target we point at. If the difference in magnitude between the two objects is less than 6, i.e. the contrast in flux is about 250 or less, then current interferometers such as PIONIER on ESO's Very Large Telescope can detect the companion (Le Bouquin et al. 2011). A visual orbit can then be established which, coupled with radial velocities, leads to the measurement of the masses of the two stars and the astrometric distance.
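The magnitude difference quoted here converts to a flux contrast via the standard relation C = 10^(0.4 Δm), a one-liner worth having at hand:

```python
def contrast_from_dmag(delta_mag):
    """Flux ratio corresponding to a magnitude difference:
    C = 10 ** (0.4 * delta_mag)."""
    return 10.0 ** (0.4 * delta_mag)

print(f"dm = 6  -> contrast ~ {contrast_from_dmag(6.0):.0f}")
print(f"dm = 10 -> contrast ~ {contrast_from_dmag(10.0):.0f}")
```

A 6-magnitude difference corresponds to a flux ratio of roughly 250, and each additional 2.5 magnitudes costs another factor of 10 in contrast.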
Working at high spectral resolution allows one to probe the smallest interacting systems with spectro-astrometry. While the giant generally dominates the continuum emission in the visible domain, the faint companion sits in the energetic accretion zone, which presents strong emission lines such as H alpha. The photo-centre of the binary, centred on the giant in the continuum, shifts towards the accretor in the line, a displacement that can be measured with the help of differential phases.
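For a marginally resolved source, the photocentre displacement along the baseline scales with the differential phase as ε = −Δφ · λ / (2π B), which is why µas-level spectro-astrometry is within reach: one degree of phase on a 300 m baseline in the visible corresponds to only about a µas (illustrative numbers):

```python
import numpy as np

MUAS_TO_RAD = np.pi / 180.0 / 3600.0 / 1e6

def photocentre_shift_muas(dphi_deg, baseline_m, lam_m):
    """Photocentre displacement along the baseline implied by a
    differential phase: eps = -dphi * lambda / (2 * pi * B)."""
    dphi = np.deg2rad(dphi_deg)
    return -dphi * lam_m / (2.0 * np.pi * baseline_m) / MUAS_TO_RAD

# e.g. a 1-degree differential phase at B = 300 m, lambda = 656 nm (H-alpha)
print(f"{photocentre_shift_muas(1.0, 300.0, 656e-9):.2f} micro-arcsec")
```

Reaching 10 µas therefore only requires phase stability at the level of a few degrees, well within the demonstrated capabilities of differential-phase instruments.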
A simple calculation and the experience gained with VEGA show that a precision of the order of 10 µas or better is well within reach, enabling us to probe the most compact semi-detached systems. Mass is the most crucial input in stellar internal-structure modelling: it predominantly determines the luminosity of a star and, therefore, its lifetime. Unfortunately, the mass of a star can generally only be determined when the star belongs to a binary system. Modelling stars with extremely accurate masses (better than 1%), in different mass ranges, would therefore allow us to firmly anchor the models of the more loosely constrained single stars.
Symbiotic stars show a composite spectrum, composed of the absorption features of a cool giant together with strong hydrogen and helium emission lines, linked to the presence of a hot star and a nebula. It is now well established that such a "symbiosis" is linked to the fact that these stars are active binary systems. The red giant is losing mass, which is transferred to the accreting companion. How does the mass transfer take place: by stellar wind? Through Roche-lobe overflow (RLOF)? Or through some intermediate process?
The ratio of the stellar radii to their Roche-lobe radii is the key quantity, and optical interferometry is currently the only available technique that can measure it.

AGN
Active Galactic Nuclei (AGN) are extremely bright sources powered by the accretion of material onto a central supermassive black hole (SMBH). They emit more than 1/5 of the electromagnetic power in the Universe, and a majority of galaxies might host a central BH triggering some level of nuclear activity. If well understood, they could be used as standard candles for the evaluation of cosmological distances at redshifts z > 3. Quasars make up the current reference grid for astrometry but, at the tens-of-µas accuracy of Gaia, the structure of these sources could have a significant impact on the definition of their photocentres.
We have a simplistic unified model of AGN (Antonucci 1993; Urry and Padovani 1995). It features a very compact accretion disc (AD) around the central SMBH, a Broad Line Region (BLR) composed of high-velocity gas clouds producing broad emission lines, and a clumpy dust torus (DT) located beyond the dust-sublimation radius. The DT collimates the light from the central source, which can ionize a region called the Narrow Line Region (NLR). When an AGN is seen close to equator-on, the dust torus shields the BLR, which can then be detected only in polarized light reprocessed by the more distant NLR clouds. Some AGN emit high-velocity jets.

Observations in the visible would bring a decisive improvement, if the interferometer can be made sensitive enough in that spectral band. First, we would gain in resolution by a factor of 4 compared to near-infrared observations, which are sensitive mainly to the DT. Second, we would be able to use the Balmer lines (for low-z sources), which are 3 to 20 times stronger than the Paschen and Brackett lines available in the near-IR. The combination of these two effects would yield a significant gain in the accuracy of quasar parallax-distance measurements, as well as in other parameter measurements such as the masses. It would also strongly enhance the possibility of imaging gas in direct relation with the dust torus.
Another major advantage of visible observations is to allow a direct combination with reverberation-mapping (RM) observations, providing more insight into the physics and, more importantly, a direct distance estimate to the AGN:
• At V = 14, we would obtain visibilities, differential visibilities and differential phases for all the GRAVITY targets.
• At V = 15, we would be able to observe more than 120 quasars.

Summary
As presented here, the visible science case gathers the following key science topics, sorted by their most prominent technical requirements:
• Low spectral resolution, high sensitivity:
o the fundamental parameters of planet-hosting stars, determined in a(n almost) model-independent way and, by inference, the fundamental parameters of the planets;
o the fundamental parameters of young stars, and the characterization of the accretion-ejection process;
o Active Galactic Nuclei.
• Very high spectral resolution, spectral multiplexing:

Context
In the last 15 years, optical interferometry (OI) has produced about 600 refereed papers with new astrophysical results, mainly in the field of stellar physics, with recent breakthroughs on Active Galactic Nuclei and some rare results in planetary science. However, the fields of application of optical long-baseline interferometry, as well as its user community, remain relatively limited by three interconnected limits:
Limiting sensitivity: The classical sensitivity limit is set by the "Fringe Tracker" (FT), which detects and stabilizes the fringes produced by an optical interferometer and therefore allows longer exposure times and higher spectral resolution. FTs are the "Adaptive Optics" of multi-aperture interferometers.
Imaging capacity: The main limit is the number of apertures of the interferometer, as this sets the final coverage of the u-v plane (i.e., the completeness of the aperture synthesis) as well as the time necessary to reach a given image quality. The performance of standard FTs decreases with the number of apertures of the interferometer, so there is a conflict between sensitivity and imaging capability. For a given number of apertures, the performance of the FT sets the minimum size of the telescopes necessary to achieve a given sensitivity, with a decisive impact on the cost of the array.
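The scaling of imaging capacity with the number of apertures can be made concrete: an N-telescope array provides N(N−1)/2 simultaneous baselines and (N−1)(N−2)/2 independent closure phases, so the fraction of phase information recoverable without absolute phase referencing, (N−2)/N, grows with N:

```python
def array_metrics(n):
    """Instantaneous u-v and phase information for an n-aperture array."""
    baselines = n * (n - 1) // 2
    closure_indep = (n - 1) * (n - 2) // 2   # independent closure phases
    phase_fraction = closure_indep / baselines if baselines else 0.0
    return baselines, closure_indep, phase_fraction

for n in (4, 6, 16):   # VLTI ATs / a 6-telescope array / a PFI-like array
    b, c, f = array_metrics(n)
    print(f"N={n:2d}: {b:3d} baselines, {c:3d} closure phases "
          f"({100 * f:.0f}% of phase information)")
```

Going from 4 to 16 apertures raises the instantaneous baseline count from 6 to 120 and the recoverable phase fraction from 50% to almost 90%, which is why aperture number dominates the imaging trade-off.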
Dynamic range of the reconstructed images (or models): For a given number of apertures, this dynamic range depends on the accuracy of the interferometric measurements. This accuracy depends on many factors, such as proper spatial filtering, stable instrumentation and robust calibration procedures, but again the possibility to stabilize the fringes is decisive.
Since 2014 we have been developing a project called "iLimits", for "Limits of Interferometry", based on a coordinated analysis of all these limits, with the goal of very substantially extending the field of application of optical interferometry. Our main focus has been the improvement of the limiting sensitivity, even if we have tried to keep in mind both the potential improvement of image-reconstruction techniques, mainly from a full use of the colour-differential information (see chapter 13), and the progress in measurement accuracy from a massive use of integrated optics (GRAVITY in the near infrared) or from the combination of several modulation methods to improve the data calibration (MATISSE in the mid infrared).

Main Science goals and specifications
We are driven by two main topics: AGN and planet formation. For the Planet Formation Imager (PFI), the sensitivity requirement is the same as for the VLTI: K~10, to be achieved with individual apertures small enough to make the project affordable.
In summary, we need K>14 with the VLTI/UTs and K>10 with the VLTI/ATs, as well as with a PFI of about 16 apertures. These specifications are decisive for the full exploitation of the VLTI in the fields of AGN and stellar and planetary formation, as well as for the feasibility of the Planet Formation Imager with telescopes of affordable size (less than 2 m in diameter, as explained below). Achieving this performance would also boost other scientific fields, like stellar interferometry, and other interferometers and instruments like MROI and CHARA, including single-mode visible instruments.

Key concepts
Let us first introduce some key definitions. We use the standard vocabulary of OI, even if it is not perfectly consistent with that of optical physics.
• Cophasing and coherencing:
o Cophasing means that the OPDs are equalized and stabilized to within a small fraction of the observing wavelength (better than typically λ/10). Cophasing allows long integration times, which is decisive for high-accuracy measurements when we are detector-noise limited.
o Coherencing means that the OPDs are equalized and maintained within a fraction of the coherence length Rλ, where R is the spectral resolution. In some cases we can operate in "blind mode" because the coherence length is much larger than the OPD drifts during a full observation. If the interferometer is coherenced but not cophased, short exposures are necessary to maintain fringe contrast.
o All Fringe Trackers (FTs) have both cophasing and coherencing functions. Coherencing is necessary to find and rapidly recover the fringes and to avoid, or at least detect, the phase jumps that might go undetected by cophasing in a single spectral channel.
• Coherent and incoherent integration of frames or spectral channels:
o Coherent integration means that we add the complex coherent flux, which assumes that we are able to correct the phase effect of the piston in each measurement. The condition is to have an SNR on the coherent flux larger than 1 for each measurement (frame and/or spectral channel). Then the gain in SNR is proportional to the square root of the number of measurements: SNR_n ≈ SNR_1·√n.
o Incoherent integration is the addition of second-order quantities, such as the modulus of the coherent flux, when the phase is unknown. The global SNR grows like SNR_1²·√n, which makes this method quite inefficient when SNR_1 ≪ 1.
• On-axis tracking, off-axis tracking and sky coverage:
o On-axis fringe tracking is performed on the scientific target itself and is limited by its coherent magnitude.
o Off-axis fringe tracking uses a guide star near enough to be affected by the same OPDs, within the cophasing or coherencing specifications. We then say that the guide star is within the isopistonic angle.
o The sky coverage is the probability of finding a guide star bright enough to cophase (or coherence) in the isopistonic patch around any science source. The sky coverage depends on the accuracy specification, which sets the isopistonic angle for a given seeing, and on the cophasing (or coherencing) limiting magnitude.
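These two scalings can be turned around to count the frames needed to reach a target global SNR. The per-measurement SNR and the target below are hypothetical numbers, chosen only to illustrate the inefficiency of incoherent integration when SNR_1 < 1:

```python
import math

def frames_coherent(snr1: float, target: float) -> int:
    # Coherent integration: SNR_n = snr1 * sqrt(n)  =>  n = (target / snr1)**2
    return math.ceil((target / snr1) ** 2)

def frames_incoherent(snr1: float, target: float) -> int:
    # Incoherent integration: SNR_n = snr1**2 * sqrt(n)  =>  n = (target / snr1**2)**2
    return math.ceil((target / snr1 ** 2) ** 2)

snr1, target = 0.5, 5.0
print(frames_coherent(snr1, target))    # 100 frames
print(frames_incoherent(snr1, target))  # 400 frames: already 4x worse at SNR_1 = 0.5
```

The penalty grows rapidly as SNR_1 drops further below 1, which is why the "a posteriori cophasing" methods of the next section aim at keeping the integration coherent.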

Main methods
We work on three main ways to improve the limiting sensitivity:
• In spectro-interferometry, we extend the limiting magnitudes of medium and high spectral resolution observations beyond those set by the FT cophasing limit, by an "a posteriori cophasing" of all spectral channels that allows integrating them coherently for fringe detection and coherencing. This can be extended to "a posteriori cophasing" between different frames.
• We analyse the fundamental limits of cophasing fringe tracking and propose a new concept, called Hierarchical Fringe Tracking (HFT), that should reach the "ultimate" limiting magnitude allowed by a given technological level.
• We consider a revision of off-axis fringe tracking in the context of these updates in limiting magnitudes for cophasing and coherencing.

The investigated solutions

9.4.1 Coherent integration in spectro-interferometry
The classical limit for spectro-interferometry at resolutions higher than typically 20 (corresponding for example to 5 spectral channels in the K band) is set by the FT limit for cophasing. For AMBER and FINITO this limit was around K~8 with the UTs. For GRAVITY and its internal FT, it is K~10.5 with the UTs, mainly because the GRAVITY FT uses a much better detector.
A simultaneous analysis of all spectral channels, through a 2D Fourier transform of the x-λ AMBER images, provided a coherencing mode working up to K~11 with the UTs. This is equivalent to a coherent integration of all spectral channels because, in a way, the 2DFT explores all possible pistons affecting a given frame. In this mode we are mainly interested in differential measurements, which remain accurate even with long exposure times (up to 300 ms instead of the few ms acceptable for coherencing). This allowed medium spectral resolution observations of the quasar 3C273, 3 magnitudes fainter than the FINITO limiting magnitude (Petrov et al. 2012). The same approach could be applied to PIONIER and GRAVITY, by Fourier transforming, in the wavenumber dimension σ = 1/λ, the coherent flux that they obtain in each spectral channel. The gain for the medium spectral resolution of GRAVITY still has to be investigated.
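The principle of this piston exploration can be sketched numerically: the coherent flux measured in spectral channels σ = 1/λ is correlated with trial OPDs, and the fringe power peaks at the piston affecting the frame. All numbers below (channel count, band edges, piston value) are made up for illustration, and the sketch is noise-free:

```python
import numpy as np

n_chan = 64
sigma = np.linspace(1 / 2.4, 1 / 2.0, n_chan)   # wavenumbers in 1/um (K band)
piston = 30.0                                   # true OPD of this frame, in um
coherent_flux = np.exp(2j * np.pi * sigma * piston)

# Fourier transform along sigma, sampled on a grid of trial OPDs:
opd_grid = np.linspace(-60.0, 60.0, 2001)
kernel = np.exp(-2j * np.pi * np.outer(sigma, opd_grid))
power = np.abs(coherent_flux @ kernel)

recovered = opd_grid[np.argmax(power)]
print(recovered)  # ~30.0 um: the piston is recovered from the spectral channels
```

Once the piston of each frame is known, the channels (and frames) can be phase-corrected and added coherently, which is the essence of the "a posteriori cophasing" described above.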
This mode suggests a new instrument concept based on x-λ images, as in AMBER, where the fringe peaks are separated in the OPD dimension instead of the spatial direction. This needs far fewer pixels in each spectral channel and allows a gain in readout noise. We proposed such an extension of the AMBER instrument, called OASIS (Optimizing AMBER for Spectro-Interferometry and Sensitivity), showing that its limiting magnitude would be higher than that of GRAVITY (Rakshit 2015). ESO found this (simple and cheap) proposal incompatible with the schedule of implementation of the 2nd-generation instruments. We will return to the interest of an OASIS-like module in Sect. 9.4.4 below.

Jorgensen and collaborators at NPOI (Jorgensen et al. 2012) have developed an approach where the exact piston shifts between frames are reconstructed "a posteriori" by exploring "all possible piston tracks", in an approach similar to image reconstruction. This allows a coherent integration of the information of all these frames. Soulez, Thiebaut and collaborators (Soulez et al. 2014) analysed the limiting magnitude needed to achieve this processing and found that the limit is of the order of a global SNR~1 per x-λ frame. This is potentially better than the cophasing FT limits, which require an SNR~1 per spectral channel in very short exposure frames. The combination of these two approaches with an x-λ instrument optimized for medium spectral resolution, like the OASIS concept, is particularly interesting in wavelength domains where we are less sensitive to detector readout noise, like the visible or the thermal infrared. We have investigated this option for the near infrared with the new SELEX detectors used in GRAVITY (Eisenhauer et al. 2016), which have a readout noise smaller than 1 e-, and found (Rakshit et al. 2015) that the limiting magnitude for medium-resolution observations with the UTs is larger than K~15. We plan to investigate this approach with MATISSE and with future visible instruments on CHARA. We will also propose an additional data-processing mode for GRAVITY, to be able to operate it beyond the limits of its internal fringe tracker.

Cophasing limits
The limiting magnitude of an FT is set by two criteria: a coherent-flux SNR_C1 ≈ 1 per spectral channel, and a global SNR_C sufficient for the required tracking accuracy (a small fraction of λ), with SNR_C = SNR_C1·√(n_λ) when all n_λ spectral channels are used. In most cases the first criterion dominates, and we found (Petrov 2016) that the number of photons necessary for cophasing scales as n* ∝ n_pair·n_λ, where n_pair is the number of cophasing telescope pairs using the light of a given telescope and n_λ is the number of spectral channels necessary in the FT to allow the coherencing, i.e. the fringe acquisition and the elimination of phase jumps. With GRAVITY on the VLTI we have n_pair = 3 and n_λ = 5.

(Figure caption: each SF2B has the cophasing accuracy of a standard ABCD device, within a √2 factor. This is a conceptual design and its actual feasibility must be analysed; a very broadband, achromatic bulk-optics SF2B prototype is described in Petrov et al. 2016.)

A new cophasing concept: the Hierarchical Fringe Tracker
In a pairwise FT such as GRAVITY on the VLTI, the flux of each telescope is distributed among n_pair = 3 cophasing pairs and n_λ = 5 spectral channels. This sets the fundamental limit of this approach, in spite of the excellent state-of-the-art GRAVITY detector and optics. These limitations worsen with the number of apertures for a full pairwise approach, where n_pair = n_T − 1. We are investigating a new concept called the Hierarchical Fringe Tracker or HFT (Petrov et al. 2016), where n_pair = 1 and n_λ ≤ 2 whatever the number of apertures. The key component of the HFT is a Spatial Filter for 2 Beams (SF2B) that:
• Transmits most (and in any case more than 50%) of the flux of the incoming beams when they are cophased, as if it came from a single aperture.
• Deflects a fraction of the flux (never more than 75%, and typically 50% far from cophasing) toward pixels measuring the differential piston when the input beams are not cophased.
The SF2B can be used to cophase pairs of telescopes, but also pairs of pairs and groups of pairs. Each SF2B typically has the performance of a cophaser using all the flux from the two incoming beams.
As the different SF2Bs transmit a large fraction of the flux to the next level, the coherencing can be left to a final optimized "OASIS-like" coherencer, allowing each SF2B cophaser to work with only one or two spectral channels, depending on the number of apertures. With 4 telescopes, as in the VLTI, the limiting magnitude for cophasing in each SF2B with n_λ = 1 is about the same as the limiting magnitude of the optimized OASIS receiving 25% of the flux of each telescope. This n_pair·n_λ = 1 solution yields a gain of 3 magnitudes over GRAVITY's n_pair·n_λ = 15. The concept of an HFT with integrated-optics SF2Bs is illustrated in Figure 9-1; this is intended to illustrate the concept and show how the SF2B can be derived from a classical ABCD two-beam integrated-optics combiner. In fact, we are building a prototype of a very broadband, achromatic and polarization-free bulk-optics SF2B, which we have fully validated by computer simulations; it is described in Petrov et al. 2016.
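The quoted 3-magnitude gain follows directly from the photon-budget scaling n* ∝ n_pair·n_λ stated above; a one-line check:

```python
import math

# GRAVITY-like pairwise FT: n_pair * n_lambda = 3 * 5 = 15
# HFT:                      n_pair * n_lambda = 1
gain_mag = 2.5 * math.log10(15 / 1)
print(round(gain_mag, 2))  # 2.94, i.e. the "gain of 3 magnitudes" quoted above
```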

Optimizing the transmission and the control loop
The limiting stellar flux necessary to provide n* is also inversely proportional to the transmission T and the exposure time τ. For a VLTI science instrument such as GRAVITY, T~1% for the FT. A device optimized for fringe tracking, using for example both the H and K bands and the two polarizations together, could easily approach T = 4% even on the VLTI. A new, specifically designed interferometer (Ireland et al. 2016) could have a "sky-to-computer" overall efficiency larger than 10%.
We have shown (Folcher et al. 2016) that for Paranal seeing, the optimum exposure time τ for a fringe tracker affected only by the seeing is between 5 and 10 ms. On the other hand, GRAVITY is operated at frequencies higher than 1 kHz, i.e. τ < 1 ms, because of telescope and interferometer vibrations. We can work along two directions. First, we can reduce the vibration level; this is easier for new interferometers, but there is a VLTI plan to act in that direction that must be supported by the community. Second, we can try to further improve the control loop with more sophisticated control algorithms.
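Since the limiting flux scales as 1/(T·τ), the potential gains discussed in this section can be expressed in magnitudes; the ratios below use the T and τ values quoted in the text:

```python
import math

def delta_mag(ratio: float) -> float:
    """Magnitude gain from improving T or tau by the given factor."""
    return 2.5 * math.log10(ratio)

print(round(delta_mag(4 / 1), 2))    # T: 1% -> 4%        : ~1.51 mag
print(round(delta_mag(10 / 1), 2))   # T: 1% -> 10%       : 2.5 mag
print(round(delta_mag(5 / 1), 2))    # tau: 1 ms -> 5 ms  : ~1.75 mag
```

The throughput and exposure-time gains are multiplicative, so an optimized device reaching both T = 4% and τ = 5 ms would gain over 3 magnitudes with respect to the T = 1%, τ = 1 ms case.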

Updating the sky coverage for off-axis tracking
We have to update the sky coverage for off-axis fringe tracking according to the new limiting magnitudes. On the VLTI, the available field is very small and the sky coverage will remain very poor, allowing off-axis tracking only for very specific targets such as the Galactic Center for GRAVITY. With PFI and a limiting magnitude of the order of 10, the sky coverage can be significant only if we strongly relax the requirements. For example, it might be worth investigating the sky coverage for a moderate-quality coherencing, which might be sufficient to allow an efficient "a posteriori" cophasing in the visible on AGN.

We report recent results on passive mid-infrared integrated optics from the project "Advanced Laser writing for Stellar Interferometry" (ALSI) and briefly describe the perspectives of their hybridization with active components.

Context
Aperture synthesis imaging is the highest ambition of the optical/IR interferometry community for the next decades, as it will remain the only route to a level of angular resolution equivalent to that of a diffraction-limited telescope with an aperture of a few hundred meters. Over the last ten years, photonic solutions such as silica single-mode integrated optics and fibers have proven to be powerful and reliable instrumental concepts for combining interferometrically the beams from a large number of telescopes. After circa 15 years of R&T activities, such concepts have become "mainstream" solutions in the near-IR and are currently implemented in the community instruments GRAVITY and PIONIER. Multi-telescope interferometry represents a highly valuable route for both ground- and space-based astrophysics, with the variant of nulling interferometry delivering superior high-contrast capabilities for the detection of faint circumstellar emission or planets in the making.
The recognized importance of the mid-infrared spectral range for the study of exoplanetary systems and AGN at high resolution motivates the extension towards longer wavelengths of photonic integrated-optics solutions capable of mixing infrared radiation and passively/actively controlling the relative phases. The silica-based technological platform cannot produce infrared waveguides for the 3-20 µm spectral range. A specific mid-infrared platform is needed to develop guiding structures made of infrared materials, with possible high-impact applications in other fields of applied research [Eggleton 2011].
Among the diverse techniques of glass processing investigated by different groups [Labadie2012], the ALSI project (Advanced Laser writing for Stellar Interferometry) supported by BMBF German federal funding aims at the optimization of the ultrafast laser inscription (ULI) technique [Thomson2009] in Chalcogenide glasses such as Gallium Lanthanum Sulphide (GLS) for mid-infrared interferometric applications.

Laser-writing and qualification of mid-IR IO beam combiners
A striking property of Chalcogenide glasses (ChGs) is their photosensitivity to impinging external radiation. In Arsenic-based ChGs, this property has been successfully exploited to laser-write single-mode waveguides [Ho2006] and 2-telescope planar Y-junctions [Labadie2011], based on the absorption of CW-laser photons with an energy larger than the bandgap of the irradiated glass. In ULI, the inscription source is a high-repetition-rate pulsed laser delivering ultrashort pulses with a power density of 10^12 W/cm^2. Even though the photon energy is sub-bandgap, the high peak power reached at the laser focus induces multi-photon absorption, resulting in local modifications of the material refractive index [Gross2015]. This effect is observable around the position of the laser focus, which can be translated along the three directions in a single-material glass substrate. The technique has been successfully applied to the manufacturing of three-dimensional 3-telescope IO beam combiners based on Y-junctions [Rodenas 2012].
In its first phase, ALSI has concentrated on the development of the elementary blocks forming an interferometric combiner. Besides Y-junctions, a key unit for the development of more sophisticated IO functions is the directional (and asymmetric) coupler, which forms, together with Y-junctions, the backbone of more sophisticated components such as ABCD phase-shifting units [Benisty2009]. The manufacturing proof-of-concept of mid-IR directional couplers was explored in [Arriola2014], based on ULI inscription in GLS glasses. In ALSI, a similar technological platform was used, and the astronomical potential of the manufactured samples was characterized in the laboratory. This experimental phase focused on understanding the achievable broadband instrumental contrasts, the phase-shifting properties, the spectral splitting ratio (relevant to test the (a)chromatic behaviour over a spectral band), the polarization behaviour, the level of chromatic dispersion, and the total throughput and propagation losses.
A large set of couplers contained in chips like the one in Figure 10-1 were tested interferometrically. The quality of the writing is visible through the expected π phase shift between the output ports 1 and 2 that results from energy conservation in a directional coupler. Contrasts larger than 97% at 3.39 µm were routinely measured [Tepper2016].
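The π phase shift between the two output ports is a direct consequence of energy conservation in a lossless coupler, and can be checked with the standard 2×2 transfer matrix of coupled-mode theory (a generic textbook model, not the ALSI design):

```python
import numpy as np

def coupler(kL: float) -> np.ndarray:
    """Lossless directional-coupler transfer matrix (coupled-mode theory)."""
    return np.array([[np.cos(kL), -1j * np.sin(kL)],
                     [-1j * np.sin(kL), np.cos(kL)]])

M = coupler(np.pi / 4)                       # 50/50 splitting ratio
phases = np.linspace(0, 2 * np.pi, 5)        # scanned input phase difference
inputs = np.array([np.ones_like(phases), np.exp(1j * phases)]) / np.sqrt(2)
out = M @ inputs
I1, I2 = np.abs(out[0]) ** 2, np.abs(out[1]) ** 2

print(np.allclose(I1 + I2, 1.0))                   # True: flux is conserved
print(np.allclose(I1, (1 + np.sin(phases)) / 2))   # True: port-1 fringe
print(np.allclose(I2, (1 - np.sin(phases)) / 2))   # True: port 2 in antiphase
```

The two fringe patterns are shifted by π: a maximum at port 1 coincides with a null at port 2, exactly the signature used above to assess the quality of the writing.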
Extended broadband and spectral tests have also been performed in the lab in the L band (3.1-3.6 µm) and M band (4.5-4.9 µm) on these samples [Tepper2017]. They reveal high broadband interferometric contrasts, larger than 0.95, as well as negligible differential dispersion and phase curvature (<0.5 rad) over the 2.5 cm length of the component. The differential polarization behaviour between the two channels also appears very small, but it was observed that the ULI process induces stresses in the material that result in a measurable birefringence of the individual waveguides [Diener2016]. If Fresnel reflection losses due to the high refractive index of the GLS can …

With the ALSI project we extended the numerical investigation of the DBC architecture to 4 and more telescopes [Minardi2015, Diener2016, Errmann2016] and manufactured a first working 4-way DBC operating at mid-infrared wavelengths [Diener2017] (Figure 10-3). In this respect, an important intermediate result of the project has been to demonstrate the role of next-nearest-neighbor coupling in the symmetry breaking of the field transfer function of the waveguide array, which turned out to be a necessary condition to operate it as an interferometric beam combiner [Minardi2015]. We could thus identify the "zig-zag" waveguide lattice as a suitable component for high-resolution spectro-interferometry ([Diener2016], see Figure 10-3 right). The first interferometric test of a manufactured zig-zag DBC with monochromatic light at λ = 3.39 µm (Figure 10-3) allowed a satisfactory coherence retrieval, but better performance is expected if a more uniform array can be manufactured through better control of the ULI-induced stresses in the substrate [Diener2016].

Contribution of active phase-control IO combiners
For interferometric observations, fringe-tracking capabilities are of primordial importance to compensate for the random, fast-varying disturbance of the relative phases between the beams to be combined. So far, this has always been implemented with piezo-actuators in a control loop. Another possibility is to use active integrated-optics modulators to mitigate the hardware complexity of such a control loop.
Electro-optic materials, in which the refractive index can be modified by applying an external electric field and in which optical waveguides can be realized, have long been used in the telecom field for optical routing, intensity modulation and optical phase-delay applications. In interferometry, such concepts open the route to on-chip photometry balancing, fringe scanning and fringe locking. One of the most popular materials for active integrated optics is Lithium Niobate (LiNbO3), with which high-contrast (36 dB) interferometric rejection ratios have been obtained at 3.39 µm [Martìn2014]. The transparency range of the bulk material covers the visible up to 5 µm. However, the main issues for mid-infrared interferometric applications are the high chromatic dispersion of the fringes and the high propagation losses (~15 dB/cm) in the mid-IR due to low field confinement. Electro-optics can here be used to compensate for dispersion by cascading the electrodes and slightly modifying the refractive index of the waveguides. Finally, recent results on direct laser writing in similar electro-optic materials give access to the 3D fabrication of waveguides, thus increasing compactness and avoiding waveguide crossings [He2013]. When used at shorter wavelengths, where lower propagation losses are achievable, the combination with mid-IR passive IO is still highly valuable to deliver a hybrid component that contains a fringe-tracking or active phase-control stage.

Perspectives
Ultrafast laser inscription in ChGs, and in particular in GLS glasses, has proven to be a reliable platform to develop mid-infrared integrated-optics components with interferometric performance compatible with an astronomical instrument, as demonstrated in the ALSI project. The next step is to develop optical functions such as asymmetric couplers and full ABCD sub-units, as well as to improve the performance of 4-telescope discrete beam combiners. In addition to specific issues connected to the writing process itself (e.g. stress-induced birefringence), the main difference between the ULI and lithographic platforms lies in the achievable refractive-index contrast, with ∆n~10^-3 and ∆n~10^-2, respectively. A compromise needs to be found in the design phase of a component between short, compact components with higher bending losses and longer components with increased intrinsic propagation losses.
In comparison to the etching/lithography platform, the ULI platform is also interesting from a cost perspective. The former is better adapted to mass-production applications, usually not targeted by astronomers. On the contrary, the simplicity and relatively small financial investment associated with the ULI platform (whose cost is mainly driven by the acquisition of a femtosecond laser) is well adapted to the low-volume production typically sought in astronomical instrumentation. Interestingly, for chalcogenide glasses with high mid-IR transparency it is also possible to generate electro-optic phase modulation by means of thermal poling [Guignard2005].
The perspective of hybridization between mid-infrared passive IO components and near-IR active components for phase control appears to be a promising route for interferometry. Such an approach is already implemented on the instrument FIRST in the visible range, thanks to LiNbO3 active phase modulators [Martìn2016].

Jorg-Uwe Pott, Felix Widmann (all MPIA, DE)
Contact email: jpott@mpia.de

For sensitive infrared long-baseline interferometry, it is crucial to control the differential piston between the apertures. Classically, this cophasing (coherencing) is achieved with a fringe tracker, which measures the movement of the interferometric fringes. In this paper, we describe a new method to reconstruct the piston variation introduced by atmospheric turbulence using real-time data from adaptive optics wavefront sensing. Concurrently, the dominant wind-speed vector can also be retrieved. The method is currently analysed in simulations for atmospheric turbulence of various strengths and wind vectors varying with layer altitude. First results show that this method could help to reliably retrieve the piston variation and wind speed from wavefront-sensor data. The method is related to concepts of predictive-control AO algorithms and to the reconstruction of the point spread function.

Introduction
Atmospheric piston variation as observed by AO: The scientific goal is to increase the sensitivity of an adaptive optics (AO) supported optical interferometer (like the VLTI), in order to observe larger, statistically relevant samples of rare objects, like massive young stars and AGN, as well as to reach new target classes like brown dwarfs and microquasars, currently out of reach for optical interferometry. To do so, we propose to increase the sensitivity of the interferometer by deriving the time-variable atmospheric piston drift from AO data, thereby increasing the effective coherence time by up to two orders of magnitude over the currently implemented approach of direct fringe tracking. This in turn allows for much longer coherent integration times on the beam-combining camera. This new approach uses the time series of AO wavefront information to reconstruct the atmospheric piston variation.
The core of the algorithm derives the dominating wind speed and direction, using Taylor's frozen-flow hypothesis (TFFH), and has been demonstrated to work with multi-layer atmospheric simulations (Schoeck et al. 2000). Combining wind and atmospheric tilt information then gives the piston drift. A key advantage is that no additional hardware is needed if the interferometer is already equipped with a piston-neutral AO system and with fast delay-line fringe-tracker actuators, as is the case for the VLTI. The initial goal is to apply the algorithm to MATISSE operation and to address GRAVITY in a second stage, since the longer operating wavelengths of MATISSE …

The piston variation retrieval
The basic idea behind the piston retrieval method is the following: one takes two reconstructed phase maps at times t and t+Δt. If Δt is short enough (≪ 1 s, as typical for AO), the TFFH approximately holds, and this time difference can be converted into a spatial travel distance. The two phase maps then share an overlap area where the phase of both is identical (see Figure 11-1). This identity also holds for the piston term, since boiling effects do not dominate on these short timescales. However, the piston term of the wavefronts reconstructed from the AO data is artificially set to zero: with φ′1(x,y) and φ′2(x,y) the wavefronts measured in the overlap region of frame 1 and frame 2 respectively,

φ′1(x,y) = φ(x,y) − p1
φ′2(x,y) = φ(x,y) − p2
Δp = p1 − p2 = ⟨φ′2(x,y)⟩ − ⟨φ′1(x,y)⟩

Thus the piston variation Δp, measured over the overlap area only, is the differential piston we are looking for. The different steps of this procedure are the following:
• Step 1: Get two adjacent reconstructed wavefront frames, φ′1 and φ′2, from the WFS.
• Step 2: Determine the wind direction and speed, i.e. the displacement between the two frames. This can be done with cross-correlation methods on the phase maps.
• Step 3: After obtaining the overlap areas, calculate the pistons of the overlap areas of the two frames; their difference is the differential piston (see Figure 11-2).
Key to success are the efficiency and precision of the real-time cross-correlation between adjacent wavefronts. Given the turbulence power spectral density, the outer scale being larger than the single apertures, and the relatively high power at low spatial frequencies, in practice we concentrate on cross-correlating the first- or second-order derivatives of the reconstructed wavefronts. Since these are in fact the quantities measured by Shack-Hartmann and curvature wavefront sensors, the actual algorithm can work directly on the acquired WFS data, which is beneficial for the SNR.
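The three steps can be sketched on a synthetic frozen-flow phase screen. The grid size and shift below are arbitrary, and the wind displacement is assumed known (step 2) rather than estimated by cross-correlation, to keep the sketch minimal:

```python
import numpy as np

rng = np.random.default_rng(0)
n, shift = 64, 8                           # pupil grid size, wind shift in pixels
screen = rng.normal(size=(n, n + shift))   # one frozen phase screen (radians)

# Step 1: two adjacent wavefront frames, displaced by the wind.
frame1 = screen[:, :n]
frame2 = screen[:, shift:]

# The WFS delivers piston-free wavefronts: the mean (piston) is removed.
p1, p2 = frame1.mean(), frame2.mean()
f1, f2 = frame1 - p1, frame2 - p2

# Step 3: on the overlap area the true phases are identical, so the mean
# difference of the piston-free maps gives the differential piston p1 - p2.
ov1 = f1[:, shift:]          # right part of frame 1
ov2 = f2[:, :n - shift]      # left part of frame 2 (same patch of atmosphere)
dp_est = (ov2 - ov1).mean()

print(np.isclose(dp_est, p1 - p2))  # True: the piston variation is recovered
```

In this idealized frozen-flow case the recovery is exact; in practice, boiling, noise and the wind-estimation error of step 2 set the accuracy floor discussed below.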

Discussion and outlook
We briefly presented a method to measure the atmospheric piston drift with a monocular telescope equipped with a higher-order AO system. The method uses the temporal evolution of the atmosphere and relies on Taylor's frozen-flow hypothesis (TFFH). We deduced the piston variation in simulations for atmospheres with three layers of turbulence with different wind speeds and turbulence strengths.
The method retrieves the wind speed and direction of the ground layer very well, as this is the dominant layer. The full atmospheric piston can be retrieved within a small error (Pott et al. 2016). This shows that the WFS can be used to retrieve the piston and wind velocities in real time, as a new, effective method to coherence long-baseline interferometric arrays.
Since the TFFH has been shown to be a realistic assumption, at least on short timescales of a few tens of ms (Schoeck et al. 2000, Guesalaga et al. 2014), we will now, as a next step, set up a realistic end-to-end model, scaled to the on-sky performance of the VLTI AO system. In particular, this model shall include the measured atmospheric decorrelation rate due to boiling effects, so that we can better explore the achievable gain with P-REx. Since the VLTI and the LBT are equipped with both AO and fringe-tracking systems, it will be straightforward to verify the ultimately achievable gain on sky with test data from the telescope.

Context
Over the last decades, a greatly improved knowledge of the Universe and, more generally, of astrophysical sources has been achieved by means of huge telescope arrays. These very large instruments provide the best sensitivity/spatial-resolution trade-off to observe astrophysical sources with a sharpness never reached before. To investigate the wide wavelength domain, a large variety of instruments have been designed and implemented. Even in the limited optical spectral domain, the usual way to propose an instrumental concept is to develop an experimental chain (including the collecting antenna, wave propagation, optical processing and detection) specifically dedicated to the narrow spectral window to be investigated. For ground-based observation, these bands (J, H, K, etc.) are defined by the transmission of the atmosphere. Nonetheless, this method can lead to very complex designs and stringent manufacturing requirements for the optical components to be implemented in the instrument. As a result, current instruments dedicated to the mid- and far-infrared (MIR and FIR) perform more poorly than those in the optical.
With a completely opposite approach, we propose to use an instrumental chain working in a technologically mature wavelength domain and to shift the astronomical spectrum to be investigated into this propitious spectral domain, where nearly ideal photon-detection technology is available. This completely new approach allows us to propose a new generation of instruments able to address the mid- and far-infrared spectral domain, which is very informative for astrophysical studies (Active Galactic Nuclei, Young Stellar Objects, exoplanets…).
The key point in this new concept is the possibility to convert the light from a far-infrared wavelength to the visible-to-near-infrared wavelength range through an "up-conversion stage", as shown in Figure 12-2. For this purpose it is possible to use nonlinear optics, bearing in mind that the nonlinear effect requires an intense pump beam co-propagating with the faint astronomical beam. As the imaging process relies on a spatial-coherence analysis, the up-conversion stage has to preserve the mutual coherence of the waves and to operate with minimum additional noise.
There are several advantages to using such a frequency conversion, especially from the far and mid infrared to near-infrared or visible wavelengths: spatially single-mode and polarization-maintaining components (optical fibers and integrated optical combiners) that are easy to handle and have low optical losses; efficient detectors with high quantum efficiency, low noise and room-temperature operation; and no need for complex cooling systems over the entire instrument (assuming the frequency conversion takes place right after the telescope focus).

The physical basis of this new concept
Why is this new approach different from the existing ones? In the 1970s, A. Labeyrie promoted the use of separated telescopes, as shown in Figure 12-1. This has since led to the implementation of hectometric instruments, such as the VLTI and the CHARA array, that routinely provide astrophysical data in the near-infrared spectral range. Alternatively, the spatial coherence can be analyzed after an optical-to-electrical conversion in appropriate photodetectors (IR detectors), followed by an electronic cross-correlation. This can be done either by direct detection (Figure 12-3, bottom left), as proposed by Hanbury Brown and Twiss, or by heterodyne detection (Figure 12-3, top right), as demonstrated by Townes. In the latter case the optical signals are mixed with a local optical oscillator, which strongly improves the sensitivity. In both cases, however, highly sensitive ultrafast photodetectors are required, which are not available in the infrared spectral domain.
Our new concept can be seen as a mix between the heterodyne interferometer proposed by Townes and the direct interferometers currently used on telescope arrays such as the VLTI in Chile. The main difference is the possibility to perform the cross-correlation on the optical waves after the up-conversion stages, rather than on the detected electrical signals. This has become possible thanks to significant advances in the field of nonlinear optics.

Feasibility and first experimental results
Attractive as this idea is, the first question to address before promoting such a proposal is: is it possible to use a nonlinear effect, which usually involves powerful sources, to process an astronomical beam known to come from very faint sources?
1. We selected Sum Frequency Generation (SFG) as the nonlinear effect, because this process is known to be intrinsically noiseless. Nevertheless, extra noise coming from other nonlinear effects may be expected and has to be analyzed.
2. We planned and performed a set of preliminary in-lab experiments in order to validate the concept before attempting an on-sky experiment in a real astronomical environment.
3. We tested the sensitivity of our instrument in a one-arm configuration to check whether our prototype is sensitive enough to plan a preliminary on-sky demonstration.
4. Eventually, in spring 2015, we obtained the first on-sky fringes.
Since the most efficient components currently being developed use lithium niobate crystals in the 1.55 µm wavelength domain, all the preliminary investigations discussed below have been performed in the astronomical H band. We are now addressing longer-wavelength spectral domains, such as the L band and, in the near future, the N band.
The following list reports the main results that make us confident in this proposal and in its chances of success in a real astronomical context. Throughout the reported experiments, the recorded observables are related to the complex visibility of the fringes observed at the output of the interferometer: contrast and closure phase. The quality of their measurement is used as a proof of the quality of the instrument under test. These promising results allow us to plan the future developments of our study; the following paragraphs propose a roadmap.

Outlook
In the coming five years we plan to develop our study in the following directions:

Further tests on the CHARA telescope array at 1.55 µm
Using the instrument developed in the lab at 1.55 µm, it is already possible to test our concept on a real facility in the astronomical H band. For this purpose, we have initiated a collaboration with Georgia State University to use the CHARA Array, located on Mount Wilson beside the Hooker telescope (CA, USA). Even if this spectral domain is not the most demonstrative, a first on-sky observation is an important cornerstone of our approach. We obtained first fringes with the S1-S2 telescopes during the last run, in collaboration with the CHARA team. In parallel, we are going to improve the performance of the components, test a spectroscopic configuration and refine the global architecture of the instrument to enhance its sensitivity. Our up-conversion interferometer is placed after the delay lines of CHARA in order to minimize the complexity of the first on-sky demonstration.

Towards the MIR and FIR spectral domains
These spectral domains are the most promising ones for a scientific use of up-conversion, as our proposal offers solutions on the following points:
1. Changing the spectral range at the focus of the telescope by means of up-conversion prevents the optical beam from being disturbed by black-body radiation all along the optical train from the telescope to the mixing station.
2. Reaching a shorter-wavelength spectral domain allows silica optical fibers to be used to propagate the beams from the telescopes to the mixing station. The attenuation is then very low, thanks to the excellent throughput of silica fibers. Using highly birefringent or polarization-maintaining single-mode fibers makes it possible to achieve very efficient spatial filtering and to preserve the polarization state. In addition, a wide variety of integrated components allows an all-guided instrument to be designed.
3. Shifting the MIR or FIR to a spectral range compatible with silicon or InGaAs avalanche photodiodes allows the use of room-temperature photon-counting detectors with a simple cooling system.
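Point 2 can be quantified: silica-fiber attenuation near 1.55 µm is of order 0.2 dB/km, so even kilometre-scale links lose only a few percent of the light. A small sketch (the attenuation figure is a typical value for telecom fiber, not a measurement from this study):

```python
def fiber_transmission(length_km, atten_db_per_km=0.2):
    """Fraction of power transmitted through a fiber with the given
    attenuation in dB/km: T = 10^(-alpha * L / 10)."""
    return 10.0 ** (-atten_db_per_km * length_km / 10.0)

# A 1 km silica-fiber link at 1.55 um keeps ~95% of the light
print(round(fiber_transmission(1.0), 3))
```

Free-space MIR beam transport over comparable distances, with its many reflections and thermal background, cannot approach this figure.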
We plan to implement a two-arm interferometer using nonlinear crystals obtained through collaborations with FEMTO Engineering and Paderborn University (L band), and with THALES R&T and the Institut Néel in Grenoble for MIR and FIR components. The study will begin with a high-flux source to bring the instrument into full operation. In a second step, the experiment will be carried out with a black-body source in the photon-counting regime.

Large bandwidth conversion and noise management
As mentioned above, the limited spectral bandwidth over which up-conversion is effective is one of the main hurdles. We propose the use of a multi-line pump laser emitting a spectral comb in order to address the wavelengths of the broadband astronomical light. The first attempt will be conducted using nonlinear waveguides operating, for example, at 1.55 µm with a pump comb at around 1.064 µm. First results have been obtained with two comb lines, demonstrating the basic principle. During this work we have observed a spectral compression of the converted signal. The spectral-comb pump technique will later be extended to the MIR and FIR spectral domains. The next challenge could be to convert 10 µm radiation using a pump comb at 1.55 µm together with an InGaAs photon-counting detector. Throughout this process, the additional noise will be studied as a function of the pump properties.
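The pump-comb principle can be illustrated numerically: each comb line converts a different slice of the broadband astronomical spectrum, and with suitable pairings the slices land in nearly the same narrow output band. The comb lines and pairings below are purely illustrative:

```python
def sfg(lam_s, lam_p):
    """Sum-frequency-generated wavelength (um): 1/out = 1/signal + 1/pump."""
    return 1.0 / (1.0 / lam_s + 1.0 / lam_p)

# Slices of an H-band signal paired with nearby pump-comb lines (illustrative)
pairs = [(1.50, 1.090), (1.55, 1.064), (1.60, 1.040)]
for lam_s, lam_p in pairs:
    print(round(sfg(lam_s, lam_p), 4))   # all three land near 0.63 um
```

A broad input band is thus addressed piecewise by the comb, at the cost of controlling the relative phase and amplitude of the pump lines, which is where the noise study comes in.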
In the longer term, we plan for:

Very large baseline MIR and FIR interferometers
The very low propagation losses in silica fibers allow us to propose a telescope array with fiber links over very long baselines. For such interferometers, two main difficulties can be overcome by working in the MIR or FIR domains:
1. The first concerns the spectral shift from the MIR and FIR to the silica-fiber spectral window. This point will be fully answered by our up-conversion interferometer, provided efficient up-conversion is available (as addressed in the previous paragraph).
2. The second concerns the possibility to design and implement an all-guided delay line.
Taking advantage of the spectral compression mentioned above, and using fibers with a suitable dispersion design, we plan to propose this new delay-line concept without any free-space propagation. Our manufacturing skills in fibered delay lines will be very helpful for this work.
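The delay such an all-guided line must compensate follows from simple geometry: the extra path toward a source at angle θ from zenith is B sin θ for a baseline B, and the equivalent fiber length scales with the group index of the fiber. All numbers below (baseline, angle, group index) are illustrative:

```python
import math

def geometric_delay_m(baseline_m, zenith_angle_deg):
    """Extra optical path (m) toward a source off zenith for a baseline B."""
    return baseline_m * math.sin(math.radians(zenith_angle_deg))

def fiber_length_m(delay_m, n_group=1.468):
    """Fiber length giving the same optical path, for an assumed silica
    group index (value illustrative)."""
    return delay_m / n_group

d = geometric_delay_m(300.0, 30.0)   # 300 m baseline, source 30 deg off zenith
print(round(d, 1))                   # 150.0 m of air path to compensate
print(round(fiber_length_m(d), 1))   # ~102.2 m of fiber
```

Tracking such delays continuously as the Earth rotates, while controlling fiber dispersion, is the core challenge of the concept.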

Exoplanet imaging with a temporal hypertelescope / ALOHA instrument
For fifteen years we have been developing a high-dynamic-range, high-resolution instrument dedicated to direct imaging: the Temporal HyperTelescope (THT). This instrument has been implemented in an all-guided configuration, down to the photon-counting regime, at 1.55 µm, using a hybrid detection scheme at the output of the interferometer. The nanometric-accuracy servo-control system has been successfully operated in the photon-counting regime using a genetic algorithm. The next step will be to place an up-conversion stage at each telescope focus in order to propose a FIR instrument that is fully guided thanks to silica-fiber technology. A typical astrophysical target could be the direct imaging of exoplanets.

Introduction
Optical long-baseline interferometric instruments have, since their beginnings, faced a series of challenges to produce science-grade images. These challenges are related to:
• sparse sampling of the measurements, since interferometers observe only a limited number of spatial frequencies;
• a non-convex inverse problem to solve (i.e. there may be several local solutions);
• phase disturbance by the atmosphere in front of the telescope or interferometer, which washes out the phase information of the object.
In addition, improving the detection of the (dispersed!) fringes at low signal-to-noise ratio is critical for improving the limiting magnitude, which is required to enlarge the community of interferometry users.
The first two aspects are the subject of current image-reconstruction software packages (see details in e.g. Thiébaut & Giovanelli 2010).
While it would be tempting to sidestep the first point (sparse sampling), one still critically needs to increase the number of telescopes to obtain good images, i.e. to reduce the sparsity of the (u,v) coverage. Four-telescope recombination with MATISSE and GRAVITY will be more effective than the first-generation instruments, the 3-telescope AMBER and the 2-telescope MIDI, but we need to go further if we want to improve image reconstruction quality.
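To make the sparse-sampling point concrete: an interferometer measures the Fourier transform of the sky brightness only at the (u,v) points set by its baselines. A toy sketch for a binary star, with every parameter hypothetical:

```python
import numpy as np

def binary_visibility(u, v, sep_x, sep_y, flux_ratio):
    """Complex visibility of two point sources; separations in radians,
    (u, v) in cycles per radian on the sky."""
    phase = -2.0 * np.pi * (u * sep_x + v * sep_y)
    return (1.0 + flux_ratio * np.exp(1j * phase)) / (1.0 + flux_ratio)

mas = np.pi / (180.0 * 3600.0 * 1000.0)    # one milliarcsecond in radians
u = np.array([0.0, 2.0e7, 5.0e7])          # a few sampled spatial frequencies
V = binary_visibility(u, 0.0, 5.0 * mas, 0.0, 0.5)
print(np.round(np.abs(V), 3))              # |V| = 1 at zero frequency, lower elsewhere
```

With only a handful of such samples, very different sky models fit the data equally well; more telescopes mean more simultaneous (u,v) points and fewer ambiguities.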

The second point (the non-convex problem) makes image reconstruction difficult for a non-expert user. We need to make progress on this point if we want interferometry to spread. Supervision of the algorithms is even more complex (and a slow process) with many spectral channels, and tuning a set of hyper-parameters is a tedious process. The JRA4 work package can be considered an important step that must be followed by others on this track to make image reconstruction software widely used.
Regarding the last point (phase disturbance), using colours was proposed as early as the 1970s as a way to tackle image-recovery problems (Koechlin 1978). Later works announced the future application of radio-astronomy techniques to the image reconstruction process of optical interferometers (Monnier 2003, Millour 2006).
Recent developments have triggered advances in the field, which we present here:

Solving the phase problem
The problem of image reconstruction was already faced a few decades ago in radio astronomy. Several methods, such as CLEAN and the Maximum Entropy Method (MEM), were developed to produce images from visibilities and phases. However, it was not until a new set of techniques was developed (hybrid mapping and self-calibration, which rely on similar assumptions) that imaging with radio telescopes took off. These techniques are grouped under the name of self-calibration (Pearson & Readhead 1984). They explicitly pose the problem of ill-calibrated datasets and try to solve it by iterating on both the data and the images.
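The assumption that carries over to the optical is that closure phases cancel station-based errors: summing the measured phases around a triangle of telescopes removes the per-telescope atmospheric terms, leaving only the object's closure phase. A small numerical check with toy values:

```python
import numpy as np

rng = np.random.default_rng(0)
obj_phase = {(1, 2): 0.3, (2, 3): -0.7, (3, 1): 0.5}   # object phases (rad), illustrative
atm = rng.uniform(-np.pi, np.pi, size=4)               # random per-telescope piston errors

def measured(i, j):
    # Phase on baseline i-j, corrupted by the station terms atm[i] - atm[j]
    return obj_phase[(i, j)] + atm[i] - atm[j]

closure = measured(1, 2) + measured(2, 3) + measured(3, 1)
print(round(closure, 6))  # equals the object closure phase 0.3 - 0.7 + 0.5 = 0.1
```

Self-calibration exploits exactly this structure: any phase solution consistent with the closures is allowed, and the station terms are solved for iteratively against a model image.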

Recent developments in optical long-baseline interferometry:
Until very recently, it was considered that the phase information provided by a ground-based optical interferometer was completely lost due to the turbulent atmosphere. However, the development of spectro-interferometric instruments such as AMBER, MIDI and VEGA made the need for chromatic-aware tools blatant:
• visibilities and closure phases contain spectroscopic information in addition to the purely geometric one; the commonly used tools (model fitting and image reconstruction) need to be updated to take that information into account;
• the "new" available wavelength-dependent phase measurement, called "differential phase", brings a wealth of additional information compared to the closure phase alone.
The wavelength-differential phase was not considered in optical interferometry imaging until Millour (2006). Indeed, the differential phase provides a corrugated phase measurement which, in theory, can be incorporated into a self-calibration algorithm in a very similar way to what is done in radio interferometry (Pearson & Readhead 1984).
As early as 2003, J. Monnier anticipated "revived activity [on self-calibration] as more interferometers with imaging capability begin to produce data." And indeed, the conceptual bases for using differential phases in image reconstruction were laid in Millour (2006).
The Schmitt et al. (2009) paper was a first attempt to use differential phases in image reconstruction. The authors considered the phase in the continuum to be zero, making it possible to use the differential phase (then equal to the object phase) in the Hα emission line of the β Lyr system. They were able in this way to image the shock region between the two stars at different orbital phases.
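The zero-continuum-phase trick can be sketched numerically: subtracting the mean phase of the continuum channels leaves, under that assumption, the object phase inside the line. All values below are illustrative, not data from Schmitt et al.:

```python
import numpy as np

wav = np.linspace(0.650, 0.662, 7)                      # wavelengths (um), around H-alpha
phase = np.array([0.0, 0.0, 0.1, 0.4, 0.1, 0.0, 0.0])   # measured phases (rad), illustrative

continuum = (wav < 0.653) | (wav > 0.659)               # channels taken as pure continuum
diff_phase = phase - phase[continuum].mean()
# If the continuum phase is assumed to be zero, diff_phase equals the object
# phase inside the emission line and can feed an image reconstruction directly.
print(np.round(diff_phase, 2))
```

The limitation is visible in the sketch: any real continuum structure (non-zero continuum phase) biases every channel, which is what the later self-calibration approach removes.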

The paper by Millour et al. (2011) went one step further, using an iterative process similar to radio-interferometric self-calibration (Pearson & Readhead 1984) in order to reconstruct the phase of the object from the closure phases and differential phases. In this way, they could reconstruct the image of a rotating gas-and-dust disc around a supergiant star, whose image is asymmetric even in the continuum (non-zero phase). This method was subsequently used in a few papers to reconstruct images of supergiant stars (e.g. Ohnaka et al. 2011). More recent work (Mourard et al. 2014) extended the method to the visibilities, in order to tackle the image reconstruction challenges posed by visible interferometry, which lacks closure phases and a proper calibration of spectrally-dispersed visibilities. This method allows one to reconstruct image cubes on which methods similar to those used in integral-field spectroscopy can be applied.

The POLCA project
POLCA stands for Processing of pOLychromatic interferometriC data for Astrophysics. It is a project funded by the Agence Nationale de la Recherche (ANR), aiming at developing new-generation chromatic algorithms for image reconstruction. The project came to a conclusion in early 2015. Several advances were made:
• A statistical analysis of AMBER data showed that interferometric data do not follow the usual assumptions of uncorrelated data and Gaussian (normal) noise distribution. Correlations over time are indeed significant and can be partly disentangled by considering differential visibilities in addition to absolute visibilities. A Student distribution of the noise on the visibilities should be used instead of a normal distribution (as expected, since visibility is calculated as the ratio of two random variables, as in Tatulli et al. 2007). Student distributions can be far from the commonly used Gaussian (normal) distribution. This could lead to future improvements in the descent algorithms used in model fitting or image reconstruction (Schutz et al. 2014a).
• A new development of the core image reconstruction algorithm, taking into account the wavelength dependence of the data and the differential phases, has been achieved and is distributed as the "PAINTER" software (Schutz et al. 2014b). It works on chromatic datasets and produces chromatic image cubes by using the ADMM descent algorithm and spatio-spectral regularizations. A faster version has been developed and will be presented in Schutz et al. (2015); it uses wavelets for the spatial regularization and the DCT for the spectral regularization.
• Chromatic model fitting can be combined with image reconstruction for low-spectral-resolution datasets; the software "SPARCO" was developed to demonstrate this. The potential of this technique is great: it allows one to perform "numerical coronagraphy" on the interferometric data by removing the (main) contribution of the central star.
All these software efforts have great potential, and they could be combined into a new generation of image reconstruction software. In particular, combining the new core algorithms of "PAINTER" or "MIRA-3D" with the chromatic model-fitting features of "SPARCO" and self-calibration could produce a leading-edge image reconstruction factory for the coming imaging interferometric instruments MATISSE and GRAVITY.

Future & perspectives
With the developments mentioned above, one can produce images without closure phases. Indeed, the differential phases contain a large amount of phase information, and imaging with two telescopes has become possible thanks to the self-calibration algorithm and PAINTER.
There is a working group within JRA4 aiming at easing the use of image reconstruction software. One option would be to provide several software packages on a server with a web-based interface, in a very similar way to what has been done for model fitting with the LITpro software at JMMC. Also of interest in this respect are the input/output formats and visualization tools, which should be standardized in some way to allow the comparison of different software packages and different runs in the huge image-reconstruction parameter space.
Another point of interest is the series of recipes used to produce "good" or "science-grade" images. These recipes can take several forms or can be part of the image reconstruction software itself (for example, the MCMC methods inside the reconstruction package MACIM). In addition, loops on parameters or external tools can be developed to make these recipes available to all image reconstruction software.
During the last decade, the first generation of beam combiners at the Very Large Telescope Interferometer has proved the importance of optical interferometry for high-angular-resolution astrophysical studies in the near- and mid-infrared. With the advent of 4-beam combiners at the VLTI, the u-v coverage per pointing increases significantly, providing an opportunity to use reconstructed images as powerful scientific tools. Imaging will therefore be a key feature of the coming generation of VLTI instruments, GRAVITY and MATISSE. It is thus imperative to characterize the expected performance of these instruments in terms of their image reconstruction capabilities, in order to optimize the use of the available observing time. As part of the OPTICON FP7-2 joint research activity on interferometric imaging, multi-wavelength frames were created from simulated MATISSE data and reconstructed using the MCMC software SQUEEZE, paying special attention to a reliable estimation of the expected performance of the instrument. This allowed a wider view of the imaging advantages and constraints still faced in optical interferometry. Furthermore, the evaluation of the capabilities of SQUEEZE is essential for the primary goal of JRA4, which aims to homogenize the imaging resources for the community by providing cookbooks and a general GUI for the various algorithms.

Introduction
During the last decade, image reconstruction has become an important tool to scientifically assess the information encoded in optical interferometry data. Therefore, characterizing the imaging capabilities of the different interferometric arrays is necessary, especially in view of the upcoming instruments at the European Southern Observatory (ESO) Very Large Telescope Interferometer (VLTI). MATISSE (Multi-Aperture mid-Infrared SpectroScopic Experiment; Lopez et al. 2008; Lopez et al. 2009) is one of these second-generation interferometric instruments of the VLTI. It is conceived to combine up to four telescopes, either the Unit Telescopes (UTs) or the Auxiliary Telescopes (ATs), to capture visibilities, closure phases and differential phases in the mid-infrared. It represents a major advance over its predecessor MIDI, mainly because it will allow us to recover, for the first time, the closure phase and chromatic differential phase information in three different bands: L, M and N.

Data simulation
One of the major science case studies of MATISSE is the characterization of proto-planetary discs around young stellar objects. In this respect, image reconstruction represents a unique tool to obtain constraints on (i) the physics in the inner Astronomical Unit (AU) of the discs, (ii) the signatures of interaction between forming planets and the dusty disc, (iii) detection of companions in the disc-like structure, (iv) the signatures tracing different dust mineralogy (e.g., the silicate feature at 10 μm) and (v) the gas disc kinematics, among others.
Therefore, we selected a prototypical Herbig Ae star as our image reconstruction source: HD 179218, a B9 star with an effective temperature Teff = 9600 K. The applied noise model was generated using the MATISSE simulator developed at the Observatoire de la Côte d'Azur. This simulator takes the pre-computed theoretical interferometric observables and adds two main types of noise: (i) the fundamental noise and (ii) the calibration noise. Once the different error contributions are calculated, the theoretical observables are randomly perturbed following a Gaussian distribution within the computed error bars. Figure 14-2 displays an example of the squared visibilities and closure phases recovered for the AT configuration with the shortest baselines.
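The simulator's last step, perturbing the theoretical observables within their Gaussian error bars, can be sketched as follows (the observable values and error bars are illustrative, not MATISSE numbers):

```python
import numpy as np

def perturb(observables, sigmas, seed=42):
    """Draw noisy observables: each value is redrawn from a Gaussian
    centred on the theoretical value, with the computed error bar as sigma."""
    rng = np.random.default_rng(seed)
    return rng.normal(loc=observables, scale=sigmas)

v2_theory = np.array([0.80, 0.45, 0.12])   # theoretical squared visibilities (illustrative)
v2_err = np.array([0.02, 0.02, 0.01])      # combined fundamental + calibration error bars
print(perturb(v2_theory, v2_err))
```

Repeating such draws gives synthetic datasets whose scatter matches the assumed error model, which is what the reconstructions below are tested against.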

Image reconstruction
Due to the sparseness of the u − v coverage, the poor calibration of the complex visibility amplitude and the lack of complete phase information, image reconstruction from interferometric data is an "ill-posed" optimization problem (Thiébaut et al. 2009, 2010, 2013). Therefore, to reconstruct the most reliable image that reproduces the data, we must include "a-priori" information in the reconstruction process. With this Bayesian inference approach, the image reconstruction process becomes a regularized optimization problem where the likelihood of the model is given by the classical χ²(x) (i.e., the difference between the data and the model) and the prior probability is given by a regularization function R(x). The balance between the two terms is set by a hyperparameter µ that weighs the contribution of each term. The image with the highest posterior probability, x_ML, is thus given by:

x_ML = argmin_x { χ²(x) + µ R(x) }

There are several methods to find x_ML. Two of the most important are Gradient Descent and Markov Chain Monte Carlo (MCMC) algorithms. Gradient Descent is an optimization algorithm that takes steps proportional to the negative of the gradient of the objective function with respect to the image pixels in order to find the best solution (image). This method is fast; however, it may fall into local minima, which can lead to a misleading solution in the image reconstruction process (Thiébaut et al. 2010; Baron et al. 2008). The Markov Chain Monte Carlo method, on the other hand, is based on a random process which distributes flux elements on a pixel grid until the distribution of pixel fluxes fits the data and reaches an equilibrium distribution. This method uses algorithms such as "nested sampling" or "parallel tempering" to determine whether the convergence criteria have been reached (Baron et al. 2010; Ireland et al. 2006).
The great advantage of this method is that it can find the global minimum, at the cost of being slower than Gradient Descent. Here, we perform the image reconstruction from our simulated MATISSE data using the MCMC software SQUEEZE. One of the key parameters in interferometric image reconstruction is the choice of the regularization function R(x). There are several types of regularizers that confer different properties on the reconstructed image (e.g., l0-norm, l2-norm, Maximum Entropy, Total Variation, etc.; see also Renard et al. 2011). The second important parameter for the reconstruction is the hyperparameter µ, which controls the trade-off between the χ² and the prior information on the brightness distribution encoded in R(x). Selecting the appropriate value of µ is therefore crucial for the image reconstruction process, and one of the most common methods to select the optimum µ is the L-curve. For a first, monochromatic reconstruction, all the spectral channels were combined, which is equivalent to assuming that the morphology of the target evolves as a gray-body over all the channels inside the bandpass. This assumption does not hold for most astrophysical objects, but it represents a good starting point to calibrate the different parameters used for the reconstruction. In this case, we used both the squared visibilities and the closure phases to recover the brightness distribution of the object. From this reconstruction, we noticed that the different regularizers were all able to recover the general morphology of the target. Nevertheless, there were still some significant differences among them. For example, while Total Variation recovered a smooth rim morphology, the other regularizers underestimated the brightness distribution of the rim for position angles between 90° and 180° (East of North). Additionally, all the reconstructed images show several well-localized bright spots along the rim, instead of a uniform distribution as in the model.
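The regularized problem x_ML = argmin χ²(x) + µ R(x) and the gradient-descent route can be illustrated on a toy problem: a two-pixel "image" recovered from linear measurements with an l2 (Tikhonov) regularizer. The operator, data and µ are all illustrative, not a real interferometric setup:

```python
import numpy as np

def reconstruct(A, y, mu=0.1, lr=0.1, n_iter=2000):
    """Minimise ||A x - y||^2 + mu * ||x||^2 by plain gradient descent."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ x - y) + 2.0 * mu * x
        x -= lr * grad
    return x

A = np.array([[1.0, 0.5], [0.2, 1.0]])     # toy "measurement" operator
y = A @ np.array([1.0, 2.0])               # noiseless data from a known image
x_hat = reconstruct(A, y)
print(np.round(x_hat, 2))                  # regularized solution, biased toward zero
```

Increasing µ pulls the solution further toward the prior (here, zero flux); sweeping µ and plotting data misfit against regularization value traces out exactly the L-curve mentioned above.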

Monochromatic and polychromatic reconstructions
However, one of the main goals of the new generation of infrared interferometers is to recover the morphological changes of astrophysical objects across the bandpass of the observations. This aspect is particularly important for MATISSE, which will have a waveband as large as ∆λ ∼ 5 µm in the N band. Therefore, we explored this capability by performing a polychromatic reconstruction, including the differential phase information of the data. The initial setup of the reconstructed images used the best image from the first monochromatic reconstruction as the starting point. In the previous reconstructions, the l0-norm and TV regularizers exhibited the best performance; therefore, for this new reconstruction, both of them were used together with a transpectral regularization. The hyperparameters were tuned manually around the best values obtained from the L-curve analysis of the monochromatic case. Nevertheless, we are aware that selecting the optimal values from a multi-dimensional L-curve of the regularizers used could further optimize them. Fourteen images were recovered, each corresponding to one of the simulated spectral channels. Figure 14-4 displays the recovered images. It is clear that, with this initial setup, the rim morphology was recovered in all the reconstructed channels. However, the central source was recovered only in the first three of them. It is important to mention that the total flux of the central source corresponds to only a small percentage of the total flux of the object: even in the first spectral channel at 8.18 µm it amounts to only 5%, decreasing with wavelength down to ∼0.8% at 12.72 µm.

Conclusions
• The recovery of milliarcsecond resolution interferometric images in the infrared will represent a major breakthrough for the coming generation of VLTI instruments. For example, MATISSE will allow us to image astrophysical objects in the mid-infrared with unprecedented resolution, representing a tremendous advantage with respect to its predecessor MIDI, which only allowed for parametric modeling of the interferometric data.
• Our current understanding of the image reconstruction problem and the currently available software allowed us to perform both monochromatic and polychromatic image reconstructions of simulated interferometric data. In both cases, we could recover the different components of a prototypical young stellar object. However, image reconstruction with MATISSE-like data sets is still not trivial and required a systematic study of the reconstruction parameters, particularly of the different regularizers and of the value of the hyperparameter µ. It is therefore necessary to compare the results of the image reconstruction across several software packages and methods to better understand the systematics.
• The better we understand the requirements for achieving science-grade images from interferometric observations, the easier it will be to provide tools and procedures that make the current techniques more accessible to the community. This is a task that should be addressed in the coming years as part of an effort to broaden the field and engage more members of the international community.
• Testing the imaging capabilities of the different algorithms, in this case SQUEEZE, is essential to have a full description of them. This agrees with the main goal of JRA4, which consists in providing the community with a simple and homogeneous view of image reconstruction in optical interferometry, through complete cookbooks for the different packages as well as a dedicated GUI to use them.

High dynamic range imaging provides a key technique to observe, characterize and understand extrasolar planetary systems. While current XAO-assisted 10-m-class telescopes provide very high-contrast images (down to 10⁻⁷ beyond 0.3″ in the H band), their angular resolution is generally insufficient to study the inner planetary region directly. Interferometric instruments can circumvent this limitation by observing within the diffraction limit of a single aperture, but generally at much reduced contrasts. Currently, the most precise instrument at the VLTI (PIONIER) achieves a contrast of a few 10⁻³ in the near-infrared, and second-generation instruments are not designed to improve on that limit. Building on the experience gained with PIONIER, as well as with mid-infrared nulling instruments (KIN, LBTI), and thanks to the recent advent of new data reduction techniques, the VLTI could reach the next level of high-dynamic-range observations at small angular separation with a nulling interferometric instrument operating in the thermal infrared, a sweet spot for imaging and characterizing young extrasolar planetary systems. The technical and science motivations for such an instrument are described in this chapter.
The development of high-dynamic-range capabilities has long been recognized as one of the top priorities for future interferometric instruments (e.g., Ridgway et al. 2007) and for the VLTI in particular (e.g., Léna et al. 2006). In the early 2000s, pushed by the need to prepare the way for future space-based infrared interferometric missions, a concept for such an instrument was designed and studied in detail for the VLTI. This study demonstrated the feasibility of reaching a contrast of 10⁻⁴, approximately one order of magnitude better than what is achievable with the current and second-generation VLTI instrument suite. While this project did not materialize into an actual instrument, the key scientific questions it was supposed to address remain, and high-contrast infrared interferometry is still the best option to answer them. New scientific questions that would benefit from such an instrument have also appeared in the last 10 years, making the case even stronger. Today, recent advances in interferometric data reduction (the so-called Nulling Self Calibration or NSC, see Mennesson et al. 2011), beam combination architecture (Lacour et al. 2014a), and mid-infrared lithium niobate beam combiners (Lacour et al. 2014b) offer new possibilities to bring the VLTI to the next level of high-dynamic-range observations at small angular separation.
With an anticipated contrast of 10^-4, the VLTI would significantly contribute to three main areas related to extrasolar planetary science: exo-planets, exo-zodiacal discs, and planet-forming regions. First, it would be sensitive to young self-luminous or irradiated gas giant planets at angular separations smaller than what future extremely large telescopes will be capable of resolving. Low-resolution spectroscopic observations of such planets (e.g., τ Boo b, Gliese 86d, or HD 69830b) in the thermal infrared (3.5-4.5 μm) are ideal to derive the radius and effective temperature of the observed planets and provide critical information to study the non-equilibrium chemistry of their atmospheres via the CH4 and CO spectral features. Second, a contrast of 10^-4 would allow faint exo-zodiacal disc emission to be detected around nearby main-sequence stars, at the ~50 zodis level. Such observations are crucial to unravel the mystery of hot dust and to constrain the faint end of the exo-zodiacal disc luminosity function (in complementarity with the KIN and LBTI surveys in the Northern hemisphere). Finally, the improved dynamic range in the thermal infrared would open a new observational window on planet-forming regions and would allow the physics of planet formation to be studied at higher contrasts, including forming proto-planets. Other major fields that make use of interferometric observations, such as stellar physics and the study of AGN, would also benefit from a higher dynamic range.
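As a back-of-the-envelope illustration of why the 3.5-4.5 μm region is favourable at the 10^-4 contrast level, one can compare black-body flux ratios of a young giant planet and a Sun-like host star; all temperatures and radii below are illustrative assumptions, not values from this report.

```python
import math

# Simplified planet/star spectral flux ratio, modelling both bodies as
# black bodies: ratio = (Rp/Rs)^2 * B(lam, Tp) / B(lam, Ts).
# The lam^-5 Planck prefactor cancels in the ratio.

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck_ratio(lam, t_planet, t_star):
    """Ratio of Planck functions at wavelength lam [m] (prefactors cancel)."""
    x_p = H * C / (lam * KB * t_planet)
    x_s = H * C / (lam * KB * t_star)
    return (math.exp(x_s) - 1.0) / (math.exp(x_p) - 1.0)

def contrast(lam, t_planet=1000.0, t_star=5800.0, radius_ratio=0.12):
    """Planet/star flux ratio; 0.12 ~ a 1.2 R_Jup planet around a Sun-like star."""
    return radius_ratio**2 * planck_ratio(lam, t_planet, t_star)

c_L = contrast(3.8e-6)   # thermal infrared (L band)
c_H = contrast(1.65e-6)  # near-infrared (H band)
print(f"L band: {c_L:.1e}, H band: {c_H:.1e}")
```

With these assumed values, the flux ratio in the thermal infrared comes out a few 10^-4, i.e. within reach of the anticipated contrast, while the same planet in the H band sits well below 10^-5.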
Besides these scientific motivations, a new high dynamic range imager at the VLTI would also serve as a technology demonstrator and scientific precursor for future interferometric instruments such as PFI, or for TPF/Darwin-like missions if a nulling architecture is selected. Technology demonstration would include key technologies and detection strategies like four-telescope NSC, the combination of closure phases and nulling, and mid-infrared integrated optics components for interferometric combination. Heterodyne techniques using laser frequency combs could also be considered. Scientific preparation would include, for instance, exo-zodiacal dust reconnaissance for southern stars that will be targeted by future exo-Earth characterization missions. Note also that the VLTI offers at L band an angular resolution similar to that of ALMA in its most extended configuration or of future ELTs in the near-infrared (i.e., ~5 mas, or 0.1 AU at 20 pc). Hence the VLTI can be used to trace complementary dust species and molecular lines in ALMA-detected circumstellar discs or to obtain complementary information on ELT-detected planets, possibly on much less-solicited telescopes such as the ATs, which could also be used more easily to carry out large surveys. New discoveries could then be followed up with ELTs or ALMA.
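The quoted angular resolution can be checked with the standard λ/B relation; the 130 m baseline below is an assumed value for the longest VLTI UT baseline, not a figure taken from this report.

```python
import math

MAS_PER_RAD = math.degrees(1.0) * 3600.0 * 1000.0  # milli-arcseconds per radian

def resolution_mas(wavelength_m, baseline_m):
    """Interferometric angular resolution lambda/B, in milli-arcseconds."""
    return (wavelength_m / baseline_m) * MAS_PER_RAD

def physical_scale_au(theta_mas, distance_pc):
    """Projected linear size in AU (1 arcsec at 1 pc subtends 1 AU)."""
    return (theta_mas / 1000.0) * distance_pc

# L band (3.5 um) on a ~130 m baseline (assumed maximum VLTI UT baseline):
theta = resolution_mas(3.5e-6, 130.0)
print(f"{theta:.1f} mas, i.e. {physical_scale_au(theta, 20.0):.2f} AU at 20 pc")
```

The result is a few milli-arcseconds, consistent with the ~5 mas (0.1 AU at 20 pc) quoted above.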

Status
The CHARA array offers interferometry at visible wavelengths with the VEGA instrument. The VLTI, by contrast, has larger telescopes but so far no visible beam combiner. CHARA/VEGA offers spectroscopic capabilities but is currently limited by saturation effects on the detector. Varying the spectral resolution as a function of baseline length can be a powerful approach, as in some hyperspectral remote sensing studies, since the position of the high-spatial-frequency information changes little between the longest baselines. The spectrum of the object is also quite important for image reconstruction.
Regarding baseline lengths at CHARA, short baselines are mandatory for imaging purposes in order to fix the large-scale features in the field of view. The longest baselines are not always usable due to SNR issues, and more short baselines are needed to sample the low spatial frequencies.
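The SNR argument can be illustrated with the visibility of a circular Gaussian source, which drops off steeply on long baselines once the source is resolved; the source size, wavelength, and baseline lengths below are illustrative assumptions.

```python
import math

MAS_TO_RAD = math.radians(1.0 / 3600.0) / 1000.0  # one milli-arcsecond in radians

def gaussian_visibility(baseline_m, fwhm_mas, wavelength_m):
    """Visibility amplitude of a circular Gaussian source of given FWHM."""
    theta = fwhm_mas * MAS_TO_RAD
    x = math.pi * theta * baseline_m / wavelength_m
    return math.exp(-x * x / (4.0 * math.log(2.0)))

# A 2 mas source observed at 0.7 um (R band) on CHARA-like baselines (assumed):
v_short = gaussian_visibility(34.0, 2.0, 0.7e-6)   # shortest baseline, ~34 m
v_long = gaussian_visibility(331.0, 2.0, 0.7e-6)   # longest baseline, ~331 m
print(f"V(34 m) = {v_short:.2f}, V(331 m) = {v_long:.1e}")
```

On the short baseline the fringe contrast stays easily measurable, while on the longest baseline it is essentially zero, which is why short baselines carry the low-spatial-frequency information needed for imaging.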

FRIEND pathfinder instrument
In 2016-2017, the CHARA 1-metre telescopes will be equipped with Adaptive Optics (AO) systems. At the same time, the VLTI will equip its ATs with the NAOMI adaptive optics. This improvement opens the possibility of applying, in the visible domain, the principle of spatial filtering with single-mode fibres, well demonstrated in the near-infrared. It will clearly open new astrophysical fields by benefiting from improved sensitivity and state-of-the-art precision and accuracy on interferometric observables. A demonstrator called FRIEND (Fibered and spectrally Resolved Interferometric Experiment - New Design) has been developed by the Observatoire de la Côte d'Azur. FRIEND combines the beams coming from 3 telescopes after injection into single-mode optical fibres and provides photometric channels as well as some spectral capabilities for characterization purposes (Figure 16-1). It operates around the R spectral band (from 600 nm to 750 nm) and uses the world's fastest and most sensitive analogue detector, OCAM2. On-sky tests at the focus of the CHARA interferometer have been performed to estimate the stability of the instrumental visibility. Complementary lab tests have made it possible to characterize the birefringence of the fibres and the characteristics of the detector (Martinod et al. 2016, Berio et al. 2014).
Following the FRIEND pathfinder development, a study has started to consider a new focal instrument for CHARA, called SPICA (Mourard et al. 2016). SPICA will build on the experience gained with VEGA and FRIEND. It will use fibre optics to perform spatial filtering and will likely have two arms: one for low-spectral-resolution, ultra-precise measurements, aimed at diameter estimation and imaging, and one for very high spectral resolution, aimed at in-depth stellar physics studies (photospheres, chromospheres, Zeeman effect, spots, etc.). Such a new visible instrument will provide high-accuracy broadband visibilities and closure phases, as well as differential visibilities and differential phases. To reach its goals, this instrument will need an infrared fringe tracker, which is also under study.

Plans for the future
The SPICA instrument will open the way to surveys of hundreds to thousands of stars, with imaging and precise diameter measurement capabilities. As noted above, short baselines are mandatory for imaging, and the CHARA array has few of them, limiting de facto its imaging potential in the visible. Adding one telescope to the array would be an interesting way to boost its imaging capabilities. The VLTI has access to smaller baselines and also has larger telescopes, making it a good target for the installation of a visible combiner. However, the VLTI is not yet ready for visible-light combination, and a few technical modifications need to be implemented to enable interferometric V-band observations (the R, I, and J bands already propagate through the delay lines). For example, the MACAO dichroic would need to be changed, and the guiding strategy of the telescopes should also be modified.
The minimum number of telescopes for such an instrument is certainly 4, used in different configurations for imaging. Experience at CHARA with MIRC shows that 6 telescopes is a strict minimum for snapshot imaging. In the same direction, we recall that the Plateau de Bure interferometer of IRAM only became able to make direct images once its network had been extended to 6 antennas over 10 years, and that the NOEMA project is now extending the array to 12 antennas. Therefore, the SPICA instrument is considered from the beginning as a 6-telescope instrument, with possible extensions to up to 9 telescopes.
Improving the interferometric imaging capabilities (see also Chapters 13 and 14) pushes for enlarging the existing arrays with additional telescopes. At CHARA, this would be a 7th telescope located wisely in the array. At the VLTI, several options could arise: two more ATs at fixed positions, to facilitate their maintenance, provide more flexibility in the available configurations, and complete the (u,v) coverage at small baselines.

The Very Large Telescope Interferometer (VLTI, Figure 17-1) is the European flagship interferometric facility and was conceived with the goal of making optical interferometry available to the whole European astronomy community and of serving the needs of both expert and non-expert users. The 1st-generation instruments AMBER and MIDI and the visitor instrument PIONIER took major steps towards this goal but also revealed challenges, for instance related to attracting non-expert users to interferometry. In this chapter, we will reflect on how the arrival of the VLTI 2nd-generation instruments GRAVITY and MATISSE might help to expand the VLTI user community and we will discuss steps that could be taken to support this process. The expert community, both inside and outside of the instrument consortia, also needs to coordinate in order to optimize the scientific output of the new VLTI instruments (Figure 17-2).

Potential for extending the user base of VLTI
Being 4-telescope beam combiners, GRAVITY and MATISSE might offer a distinct advantage for attracting new VLTI users, as they will be much more efficient at enabling interferometric imaging than the 2- and 3-telescope beam combiners MIDI and AMBER. ESO and the instrument consortia have realized the important role that imaging might play and will include image reconstruction algorithms in the GRAVITY and MATISSE data reduction packages. The existing 4-telescope beam combination instruments CHARA/MIRC and VLTI/PIONIER have shown that meaningful images can be reconstructed if a realistic amount of observing time is invested (e.g. half nights on three VLTI AT configurations). However, it is important not to raise overly optimistic expectations, as imaging will continue to be applicable only to retrieving structures in a specific angular size range (e.g. 3-60 mas), contrast range (e.g. <1:50), and of modest complexity. The users of GRAVITY and MATISSE will need to take these aspects into account during proposal planning, will need to gain experience in applying the available image reconstruction algorithms, and will have to learn to identify artefacts introduced by the incomplete uv-coverage or by the image reconstruction method. Moreover, most quantitative science results will still be extracted by visibility fitting and not from reconstructed images. Therefore, the community needs to offer adequate assistance to new GRAVITY and MATISSE users. For this purpose, we are currently in the process of establishing a network of VLTI Expertise Centres that will offer assistance on aspects ranging from project planning and data reduction to image reconstruction.

Importance of promoting science opportunities of VLTI
VLTI Expertise Centres will organize regular proposal preparation workshops, provide tutorials and software tools on a centralized website, and offer personal assistance to new users. The details are outlined in a separate chapter in this report (Duvert et al., Sect. 18). Another important building block in the strategy to make interferometry more accessible to non-experts is provided by the OPTICON-funded working group on image reconstruction. This working group develops a software tool that will allow users to run different image reconstruction algorithms under one graphical interface and to compare the resulting images quantitatively and to investigate how the final product depends on image reconstruction parameters such as the regularization weight (see report by Sanchez et al., Sect. 14).
Another important component for expanding the VLTI user base is to promote the scientific opportunities of the VLTI more effectively to the non-expert community. Outside of the VLTI user community, many astronomers are aware neither of the capabilities provided by optical interferometry nor of its complementarity to other techniques. At present, these capabilities are communicated primarily through individuals promoting individual science results at conferences. However, we would like to encourage the consortia to also see it as part of their responsibility to contribute towards building the future user community of their instruments.

Maximizing science output by coordinated/simultaneous observations
The expert community should also organize itself with the goal of maximizing the scientific return of the VLTI. For instance, it is clear that GRAVITY and MATISSE will, to a large extent, target the same objects, including for instance the brightest active galactic nuclei, many young stellar objects, and some key evolved stars. These observing programs will deliver exciting science in their own right, but in many cases it is clear that coordinated simultaneous observations could yield even more spectacular results. Therefore, the consortia might consider coordinating the execution of their guaranteed time observing (GTO) programs, so that MATISSE and GRAVITY will target time-variable objects at the same epoch, enabling multi-wavelength studies of key objects. In order to support this process, it should be considered to establish an "i-SHOOTER" mode, where GRAVITY, MATISSE, and PIONIER would be able to record data in the H/K/L/M/N bands at the same time. Conducting such multi-instrument observations of double-GTO-protected targets will require a high degree of coordination and collaboration, but it could offer the GTO teams an opportunity to share the time that is spent and therefore to execute their programs more effectively. There already exist some examples of inter-consortia collaboration, for instance the ongoing attempt to make the GRAVITY fringe tracking accessible to MATISSE, and such collaborations could be further extended in order to maximize also the scientific exploitation phase of GRAVITY and MATISSE.

Opportunities for scientists outside the GTO consortia
Of course, excellent science will also be done outside of the GTO programs. The GTO time provides the instrument teams with a well-earned compensation for their long-term engagement in envisioning, building, and commissioning new instruments for ESO telescopes. Besides the actual guaranteed time (which needs to be spent over a duration of about 5 years), the GTO teams are also awarded the right to protect a list of targets for their programs. In the past, target protection has been implemented either as a protection over the whole GTO duration (i.e. independent of the observations specifically planned by the GTO team for a given semester; this scheme was adopted for MIDI) or on a semester-by-semester basis (i.e. only those targets are protected which the GTO team actually plans to observe in a particular semester; as adopted for AMBER).
Too stringent target protection policies can form a barrier for non-GTO teams to formulate their own science programs. This can pose a problem in particular if the number of accessible targets is already small due to instrument sensitivity limits, such as in the mid-infrared, where only a few hundred targets were accessible with MIDI. Therefore, there is a risk that a GTO target list could effectively block whole object classes for the non-GTO community, instead of reserving only some prime targets. In these cases, it is an important role of ESO to find a balance between the legitimate interests of the consortia and of the wider community, e.g. by adopting semester-by-semester protection rules and by protecting specific target+instrumentation mode combinations, such that some instrumentation modes remain accessible to non-GTO teams (e.g. continuum versus spectral-line observations; astrometric versus visibility modes, etc.). Also, ESO should enforce its existing policy of scheduling at most 50% of the total time for GTO programs.

Balancing interests during instrument commissioning
A remarkable development at the VLTI has been the arrival of instruments with modes that have been specifically designed to answer key scientific questions. Examples are the wide-angle astrometric mode of PRIMA (which aimed at the detection of extrasolar planets) and the narrow-angle astrometric mode of GRAVITY (which aims at localizing the origin of the flares around Sgr A*). Commissioning these technically demanding modes is essential so that the instruments can reach their ultimate science objectives, but it also requires by far the largest investment of resources and time. Given that these modes push the very frontier of technology, they sometimes reveal deficits in the infrastructure, which then introduces further delays. However, it is worth considering that many (most?) science applications do not rely on these advanced modes, but will exploit improvements in sensitivity, uv-coverage, or spectral coverage in standard observing modes, such as imaging. The commissioning of these standard operational modes can often be completed on time scales of months, while the commissioning of the most advanced operational modes has in some cases taken years. Therefore, we consider it ESO's responsibility to balance completing the commissioning of even the most advanced instrument modes against offering, in a timely fashion, standard operational modes that will often already serve the interests of the majority of the community.

Potential of Large Programs
Finally, it seems a good time for the community to start developing ideas for large collaborative projects that could help to exploit the opportunities provided by the VLTI in a more systematic and comprehensive way than is possible with standard proposals. Several successful large programs have been conducted with MIDI, namely on AGN (184.B-0832) and evolved stars (187.D-0924), as well as with PIONIER, where multiplicity surveys on massive stars (189.C-0644) and on Herbig Ae/Be stars (189.C-0963) have been conducted. Such large programs typically result in high-impact publications, can bring scientists with complementary expertise together, and can address scientific questions that require large object samples.

reached a state of maturity where it is now possible to broaden their use to a wider community of non-specialists.
In those respects the situation is comparable to that of millimetre radio astronomy in the '80s in terms of mapping quality and image "CLEANing". With one advantage: the interferometric data now routinely produced by the current instruments have, almost from the beginning, benefitted from a common interchange format, OIFITS. The observational data have also gained better visibility in the Virtual Observatory framework, if nothing else through the JMMC OiDb database. The second generation of VLTI instruments represents further progress that should go together with broader and easier data access for non-specialists.

Expertise centres for knowledge dissemination
Most of the expertise on interferometric data and how to exploit them lies with experts in various European research units. Most of the tools are here, although they are neither fully standardized nor as simple to use as desirable: below a number of apertures comparable to ALMA's, imaging is still an art! Interaction with specialists will be needed for data interpretation for quite some time yet, as well as follow-up of the instruments' data by the interferometric community. To operate successfully the "hands-on" phase in which optical interferometry goes from a specialist niche to the simpler status of a handy high-angular-resolution technique, as Adaptive Optics has become today, it is advisable to structure the community around a network of "Expertise Centres" that would provide a direct link between specialists and more general users. Organizing an easy-to-reach helpdesk is the key to making optical interferometry mainstream.
In practical terms, non-specialists would find various services and a helpdesk in those Centres, which could at least provide and/or organize:
1. Hands-on schools on proposal preparation and data reduction for PhD students and young postdocs;
2. Exchanges of researchers each year across Europe under the Fizeau Programme of EII, mainly but not only visiting the OLBI expertise centres;
3. Yearly "VLTI Community Days" workshops;
4. Frequent training in the observation tools and techniques, data reduction, and data analysis;
5. Face-to-face personal assistance with proposal preparation, observation planning, data reduction, data analysis and interpretation, including expert support for non-standard modes of observation;
6. "Data mining" assistance for scientific needs that may benefit from already available observations and unpublished data.
Each Centre would typically provide services based upon its local expertise or interests. Assistance in, e.g., data reduction for a particular instrument will preferentially be provided at laboratories belonging to that instrument's consortium. Similarly, research groups involved in data analysis techniques, e.g., image reconstruction, could provide assistance of this kind. All these contributions should be recognized as pertaining to a country-based, or theme-based, Centre. There should exist a coordination level, providing at least a Scientific Council for the network of Centres, to ensure and maximize the return. This coordinating body would be in charge of finding the funds needed by the Centres and would typically discuss the distribution of tasks between the Centres every few years.

Optical interferometry instruments are few, usually costly, and thus built to live for typically a decade. Besides, they are nothing without the interferometer array itself: it is the ensemble that makes the instrument. It should be no surprise, then, that besides the pure "service" mode described above as the main activity of the Centres, there is a vital need to maintain a close relation with observatory operations (e.g. ESO/Paranal for the VLTI) throughout the lifetime of the instruments, in order to ensure an optimum service and secure the quality of the data calibration and interpretation. Indeed, the instrumental response varies in time, due to the ageing of components (e.g. optical surfaces), hardware improvements, software improvements (Data Reduction Software but also, evidently, new imaging techniques) and observing procedures.

Staffing & funding
The services described above cannot be provided without funds, mostly funds to hire people, plus networking (travel) funds, e.g. for face-to-face helpdesk activities. Some of the funding could be made part of the budget of future instruments. Another part will come from European exchange programmes (cf. the Fizeau programme coordinated by EII). A further possible source of funding could be found at ESO if the maintenance of the instrumental pipelines were transferred to the Centres, with a cost-compensation system. Countries could also count their involvement in the Centres as "in-kind" contributions to ESO.
In any case, there should be a small number of new "Support Astronomer" positions involved in each Centre (theme-based/country-based). These could be filled partly through duties (e.g. 30% of their time) of researchers in permanent positions (France, perhaps alone, has the "CNAP" civil-servant status corresponding to this case), but the majority would certainly be in the form of post-doctoral fellowships, with half-time duties at the Centre(s) and half-time research. In any case, these people will need to spend part of their time using the instruments (e.g. at the VLTI) to acquire hands-on knowledge of instrument operations.
The next step to establish such interferometric expertise centres in Europe has been made by integrating the idea into the new successful OPTICON proposal, which is funded via the EC/Horizon 2020 programme until 2020. This comes just in time to bring the upcoming VLTI second generation of instruments closer to the community. The expertise centre effort should be combined with other strategies to maximize the scientific exploitation of our infrastructure, as discussed in Section 17.

ESO-VLTI in the future
Jean-Philippe Berger

ESO's Vision
The VLTI will remain, even in the ELT and ALMA era, the European facility with the highest angular resolution. The last decade has seen ESO mastering the difficulties of coherent combination of an array with four Unit or Auxiliary telescopes. These successes have paved the way for ambitious second-generation instruments: GRAVITY and MATISSE. With VLTI and ALMA the ESO user has now gained access to milli-arcsecond astronomy from the near infrared to the millimetric regime.
Since its conception, VLTI has pursued two goals: delivering an imaging capability at the milliarcsecond resolution level and providing precise relative astrometry with an ultimate accuracy goal of ten micro-arcseconds; the latter being technically much more challenging.
The scientific production of the VLTI has been vastly dominated by simple but important morphological measurements of the near- and mid-infrared emission of bright sources, and by the rise of spectroscopy with milli-arcsecond angular resolution. While these results have challenged a number of established theories in the fields of stellar physics and AGN, the VLTI must now enter a time when the true complexity of objects is revealed. The VLTI must offer the possibility to spatially and spectroscopically resolve a wealth of time-variable astrophysical processes that will never be accessible through other techniques. It must be the tool that allows us to challenge our indirect understanding of stars, to explore rotation, pulsation, convection, shocks, winds, accretion, and ejection phenomena as they happen, and to reveal the complex interplay between a star and its environment throughout its lifetime. The VLTI's capability to resolve the complexity of AGN nuclei, measure precisely the central black-hole masses, and pinpoint their distances with unmatched accuracy has so far barely been exploited.
As an astrometric machine with GRAVITY, the VLTI will offer a unique way to observe strong gravity in action and to explore physical conditions close to the horizon of the Galactic Centre black hole. As such, it offers one of the rare opportunities for ground-based astronomy to contribute to the field of fundamental physics by probing the nature of gravity. The technology required to enable such an ambitious goal will most probably open the way for more science projects exploiting micro-arcsecond astrometric capability from the ground. The unique combination of a fascinating science case and instrumental innovation is a strong incentive to support the development of the VLTI.
The VLTI infrastructure evolution and the increase in performance associated with it should bring confidence in our ability to continue developing milli-arcsecond astronomy from the ground. Whether this will be by expanding the VLTI or by developing other facilities is still to be explored.

Challenges
ESO has harnessed the difficulty of optical coherent combination, and the VLTI is now entering a consolidation phase. The next challenge to be met is the combination of increased sensitivity and precision. Phasing (so-called fringe tracking) of the array of telescopes on-axis (and later off-axis) is therefore the mandatory next step. This will considerably increase the accessible sample sizes and will enable high-resolution spectroscopy. As learned from the AMBER and MIDI difficulties, efficient phasing requires a number of subsystem implementations or upgrades and particular attention to global performance (in particular, robust wavefront correction on both the UT and AT arrays). ESO has acquired sufficient expertise to tackle this difficulty but will have to dedicate competent resources to it. Without phasing, neither GRAVITY nor MATISSE will deliver its full potential.
The second challenge is to bring GRAVITY up to its final astrometric performance. The remarkable scientific outcome will be delivered at the price of a substantial technological and system effort. GRAVITY's success will require patience and commitment on both sides.
The third challenge is to democratize access to the VLTI by providing user assistance with observation preparation, data reduction, and image reconstruction. Considerable progress has been made by the VLTI community in developing reliable software, dedicated training schools, etc. Yet, as mentioned by ESO's user committee, dealing with VLTI data is still perceived as an "experts-only" process. The VLTI benefits from a very active and dedicated community. Both ESO and the VLTI community have to explore a proper interface to provide users with easier access to data reduction and image reconstruction. Without any doubt, the broadening of the community will bring new ideas for the scientific exploitation of the VLTI. In addition, the development of synergies with ALMA and diffraction-limited telescopes and the expansion of VLTI large programmes should be pursued; in this context, the capability of the VLTI to carry out a greater fraction of large programmes/surveys should be enhanced.

Science questions
The need for milli-arcsecond resolution observations will not disappear with the advent of GRAVITY and MATISSE. Moreover, we can already anticipate that both instruments will have opened new avenues. Beyond this immediate horizon, the VLTI should maintain the ambition to be a major contributor to the following astrophysical questions:
• Improvement of the cosmological distance scale;
• Ground-based astrometric follow-up of exoplanet detections (post-Gaia);
• Characterisation of host stars in the context of exoplanet and asteroseismology transit missions (e.g. PLATO);
• Constraints on strong lensing.

Possible axes of development for the VLTI
It is of the utmost importance to maintain an active research and development programme in interferometric instrumentation. Taking the example of mm interferometry, one can already consider the expansion of the VLTI capabilities in four areas:
• Increasing the imaging capability
• Increasing the sensitivity
• Increasing the instrumentation capability
• Increasing the astrometric capability
Unlike traditional single-dish instrumentation, which has already developed a number of capabilities, optical interferometry still has a considerable margin for development. The following capabilities would benefit all the topics mentioned earlier:
• Higher number of telescopes for imaging
• Sensitive co-phasing
• Polarimetry
• Expansion to the visible (J to V)
• Very high spectral resolution (>30000)
• High dynamic range (e.g. >1000)

Priorities and roadmap
The roadmap for the VLTI can be divided into the following epochs:
• Epoch 1 (-2020)
a. Make GRAVITY and MATISSE a success by providing a well-performing VLTI infrastructure; demonstrate robust fringe tracking;
b. Expand the VLTI user base by improving the accessibility of the facility to non-experts through dedicated expertise centres;
c. Develop large programmes and surveys together with an efficiency-oriented operational model;
d. Establish a development plan for the VLTI (white book mid 2017).
• Epoch 2 (2020-2030)
a. Exploit fully the existing infrastructure by expanding its instrumentation, for example to the visible, to very high spectral resolution, or towards higher contrasts, or by enhancing the astrometric capability;
b. Encourage visiting instruments pushing the technique in new directions (e.g. polarimetry, visible, high dynamic range, etc.);
c. Study infrastructure improvements that could improve the sky coverage and optimise the (u,v) coverage.
• Epoch 3 (beyond 2030)
This later epoch will depend on the funding situation in the ELT era and on the ability of the community to propose strong science-driven projects that could justify the expansion of the infrastructure (e.g. more telescopes).
Elaborating this roadmap will be carried out under joint ESO and European Interferometry Initiative (EII) leadership. We propose to build this roadmap by establishing working groups for each specific topic listed in the previous sections, which would explore the possibilities offered by an expanded VLTI. The instrumental framework will have to be realistic and supported by a simulation team. The conclusions of this work should be presented and debated at a conference held in summer 2017. A first stab at the challenge by the EII community is described in the other chapters of this report.

The performance of optical/infrared interferometers has increased greatly over the past fifteen years. In particular, state-of-the-art interferometers, such as the VLTI, the CHARA array, and the LBTI, have taken the first images of stellar surfaces (e.g., Monnier et al. 2007), planet-forming regions (e.g., Kluska et al. 2016), and even a Jovian moon (Conrad et al. 2015) at high angular resolution (see also Chapter 2).

Hypertelescope concepts: from Carlina prototypes into space
However, due to the relatively low sensitivity (mv < 10-12) and limited imaging capability of current facilities, the number of observable objects is still small. Numerous research works have been carried out to increase the sensitivity of the focal instruments by improving the co-phasing (Fringe Tracker), and/or the way the light is recombined (ex: Berger & Mérand 2013; Petrov 2014, etc.). Furthermore, a major challenge for future optical Interferometry will be to manage the complexity and cost of high resolution imaging arrays combining hundreds or thousands of sub-apertures. The hypertelescope concept addresses these shortcomings of optical long-baseline arrays.

Concept
The theory of hypertelescope imaging (Labeyrie et al. 1996; Lardière et al. 2007) indicates that dilute arrays of many sub-apertures can concentrate most of the light collected from a resolved compact source into a direct high-resolution image, if the light is co-phased and co-focused through a pupil-densifier element. It also predicts that, for a given collecting area and meta-aperture size, much better imaging performance and science output are achievable with many small apertures than with a few larger ones.
These theoretical predictions were verified with extended versions of the Fizeau interferometer, of miniature size but using many sub-apertures and equipped with a pupil densifier, by Pedretti et al. and Gillet et al. Large hypertelescopes are feasible with an opto-mechanical architecture similar to that of the Arecibo radio telescope: a fixed, diluted, segmented concave meta-mirror replaces the primary mirror, and a focal camera suspended above it tracks the source's image, co-focused through an attached pupil densifier.
Meta-apertures larger than a kilometer appear feasible at suitably concave terrestrial sites for milliarcsecond resolution. Much larger sizes, up to perhaps 100,000 km, are proposed for space versions using a flotilla of small mirrors accurately driven by small solar sails or a laser-trapping scheme.
Because it does not require optical delay lines, the hypertelescope's optics is much simplified with respect to that of multi-telescope interferometers. It employs specific components such as:
1. a pupil-densifier system capable of accommodating the pupil drift;
2. optionally, a field dissector for simultaneously observing multiple compact sources such as star clusters or galaxies, with astrometric capabilities;
3. optionally, coronagraphic cameras for high-contrast imaging of sources such as exoplanets near their parent star.
A feature of the terrestrial, so-called Carlina architectures of the hypertelescope concept (Labeyrie 2000; Le Coroller et al. 2004), with a diluted Arecibo-like optical array of small spherical apertures, is the absence of a giant steerable mount for globally pointing the hypertelescope as a solid system. This causes an apparent drift of the sub-pupil pattern with respect to the meta-pupil, as observed by an eye located at the focal image of a moving star. This complication is balanced by removing the element which currently limits the optical diameter of ELTs to about 40 m, namely the steerable mount. Instead, terrestrial hypertelescopes of kilometric size can be considered, based on the Carlina architecture (Labeyrie et al. 2012).
In space, no mount will be needed at all for supporting or steering a flotilla of mirrors together with its focal spaceships, which can use thrusters such as ion jets, small solar sails, or "laser-trapping" beams.
Meta-apertures as large as 100,000 km may then become feasible. Several versions have been proposed to NASA and the European Space Agency (ESA), including a "Laser Trapped Hypertelescope".

The adaptive co-phasing of terrestrial versions requires interferometric wave sensing, achievable by analyzing the science image according to the "Dispersed Speckle" or "Chromatic Phase Diversity" methods. A high limiting magnitude can be expected if the applicability of laser guide star systems is confirmed. For early science, before installing the adaptive co-phasing system, images can be reconstructed by speckle imaging.
Pending hypertelescopes in space, where the reference star for wave sensing can be degrees away from the observed target, terrestrial versions would benefit greatly from using an artificial reference star for adaptive co-phased observing of faint sources, a mode which interferometers have not yet been able to attempt. The modified "Hypertelescope Laser Guide Star" (H-LGS) version (Nunez et al. 2014) of the laser guide star (LGS) systems (Foy & Labeyrie 1985) now in use at the largest telescopes is a candidate approach, which requires further assessment, particularly regarding the laser power needed.

Carlina testbed at OHP
The principle of diluted telescopes (e.g., the hypertelescopes proposed by Labeyrie 1996) consists of recombining a large number of mirrors (telescopes) using direct imaging techniques in order to optimize the signal-to-noise ratio of the image. On-sky studies with Carlina-type technology have been performed at OHP (Le Coroller et al. 2004). Above the diluted primary mirror, made of fixed co-spherical segments, a helium balloon, or cables suspended between two mountains and/or pylons, carries a gondola containing the focal optics. This concept does not require delay lines and could work with hundreds of mirrors.
The Carlina prototype was built at the Haute-Provence Observatory (OHP, Figure 20). This work helped us to better understand the advantages of this system and its limits. For example, we checked that the ground moves slowly (typical deformation << 100 microns during one night with ten-metre baselines), which is clearly an advantage for Carlina-like architectures. But even though the ground deformations change relatively slowly, at least three motors per mirror will be necessary to adjust the piston and tip-tilt of each sub-aperture. With tens of mirrors, this can become relatively complex.
The OHP experiment was stopped once the main goals of the technical demonstrator were reached. More studies are also required to determine whether such an opto-mechanical design can work with AO systems (for example, regarding high-frequency vibrations in the cables).

Carlina testbed at the Ubaye valley
As previously described in more detail, hypertelescopes are multi-aperture interferometers providing a direct image of compact sources. The meta-beam, containing the beams from all sub-apertures converging toward the image plane where they are co-focused, is densified before reaching it. Such densification does not destroy the direct image, but shrinks the diffractive envelope of the interference function. The notion of a point spread function then vanishes, since the image of a point source becomes position-dependent; the image of an extended source may then be described by a pseudo-convolution with the source function. This restricts the field which can be directly imaged down to a "Direct Imaging Field" (DIF) (Labeyrie 2007; Lardière et al. 2007), and it intensifies the image, approximately as γ² if γ is the pupil densification factor, by concentrating into the interference peak the light diffracted across the sub-aperture lobe. Sources smaller than the DIF are directly imaged. Larger ones can, to some extent, be reconstructed post-detection with the algorithm of Mary et al. or with extended versions of the CLEAN algorithm exploiting the a priori known off-axis evolution of the interference function. The DIF size is λ/s, on the order of 0.1 arcsecond in visible light if the primary sub-apertures are spaced s = 1 m apart, but only 0.01 arcsecond if s = 10 m (see also http://hypertelescope.org/).

Hence the advantage of using, for a given meta-aperture size and total collecting area, more mirrors of smaller size, which also improves the contrast of the interference peak, and thus the dynamic range of the direct images.
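These scalings can be checked numerically. The sketch below evaluates DIF ≈ λ/s and the γ² peak intensification quoted above; the 0.55 µm wavelength and the densification factor γ = 20 are illustrative assumptions, not values from the text.

```python
import math

# Sketch of the hypertelescope relations quoted in the text:
# Direct Imaging Field DIF ~ lambda/s, peak intensification ~ gamma^2.
RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

def dif_arcsec(wavelength_m, spacing_m):
    """Direct Imaging Field ~ lambda / s, converted to arcseconds."""
    return wavelength_m / spacing_m * RAD_TO_ARCSEC

lam = 0.55e-6          # assumed visible wavelength [m]
for s in (1.0, 10.0):  # sub-aperture spacing [m], as in the text
    print(f"s = {s:4.0f} m  ->  DIF ~ {dif_arcsec(lam, s):.3f} arcsec")

gamma = 20  # illustrative pupil densification factor
print(f"gamma = {gamma}  ->  peak intensification ~ gamma^2 = {gamma**2}")
```

The printed values (about 0.11 and 0.011 arcseconds) agree with the orders of magnitude stated in the text.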
To progress towards performance estimation, we have tested a prototype hypertelescope over the last four years in the upper Moutière valley, on the heights of the Ubaye Valley in the Southern Alps (Figure 20-5). It demonstrated the feasibility and operability of the concept with its cable-suspended focal camera, driven with millimetric accuracy to track the motion of a star's image (Autuori et al. 2016). The prototype also served to develop co-spherization techniques for the primary meta-mirror, in tip-tilt and piston, as well as auto-guiding techniques for the focal gondola. Adaptive devices will be needed in the future to fulfil the sensitivity requirements of the various science programs. Among the future perspectives, we are also studying the possibility of removing the suspension cable and replacing it with an autonomous drone supporting the focal optics.

Outlook
We think that an interesting scientific case for a diluted telescope of 70-120 m, as a post-E-ELT facility, could be to image and study planets in the innermost orbits of the nearest stars. A 70 m baseline indeed provides the required resolution to image such systems, and the number of mirrors required for this purpose is probably reasonable. More work is needed to determine whether we will be able to reach very high contrast (<10^-7) at an inner working angle 2-3 times smaller than that of the E-ELT. This kind of light recombination may benefit from recent progress achieved in the field of mono-pupil telescopes (itself often inspired by interferometric techniques): post-coronagraphic sensing, phase-diversity methods, Kernel-Phase, etc. A study of an interferometric recombiner (as a 3rd-generation or visitor instrument) on the E-ELT (Le Coroller 2016) could also help to prepare such a project and to evaluate the possible gains in terms of contrast, sensitivity, etc. In the very long term, diluted telescopes could offer the possibility of building giant apertures (>> 100 m) in space to carry out very ambitious science programs, such as imaging exoplanet surfaces.
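The quoted factor of 2-3 in inner working angle can be sanity-checked with the diffraction limit λ/D. The sketch below assumes a 39 m E-ELT aperture and an illustrative near-infrared wavelength of 1.6 µm; only the 70-120 m baseline range comes from the text.

```python
import math

# Sketch: diffraction-limited resolution lambda/D in milliarcseconds for the
# E-ELT (~39 m assumed) versus a 70-120 m diluted-telescope baseline.
MAS_PER_RAD = 180.0 / math.pi * 3600.0 * 1000.0

def resolution_mas(wavelength_m, aperture_m):
    """lambda/D converted to milliarcseconds."""
    return wavelength_m / aperture_m * MAS_PER_RAD

lam = 1.6e-6  # assumed near-infrared wavelength [m]
for d in (39.0, 70.0, 120.0):
    print(f"D = {d:5.0f} m  ->  lambda/D ~ {resolution_mas(lam, d):.2f} mas")

# Ratios relative to the 39 m aperture: ~1.8x finer at 70 m, ~3.1x at 120 m,
# consistent with the 2-3x smaller inner working angle quoted in the text.
print(resolution_mas(lam, 39.0) / resolution_mas(lam, 70.0))
```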

Proposal for a "European Extremely Large HYPERtelescope (E-ELHyt)"
The current Moutière site of the Ubaye testbed (Sect. 20.3) in the Southern Alps has a topography suitable for upgrading the meta-aperture diameter toward about 200 m, a modest size relative to what is possible at other potential sites in wider valleys. Although it is considered among the best astronomical sites in Europe, lying very close to the Restefond pass where many European amateur astronomers bring their instruments for summertime observing, its astronomical quality is likely below that of some world-class sites outside Europe. We selected it, among many other candidate sites in the Southern Alps, for its rather smooth curvature, the high ridges at 2600 m to the North and South, the access road section (2 km, unpaved) from the neighbouring village, and the quality of the seeing, with a frequent near-zero local wind at night during typical good observing conditions, a rare feature in mountain valleys where strong thermal solar effects tend to create katabatic winds. Their near absence at Moutière may result from the valley's slope to the West, which may conflict with the prevailing westerly winds. The appreciable snow cover in wintertime may allow unattended remote observing once sufficient automation is installed. A small laboratory may be installed in a temporary or permanent building, more comfortable than the laboratory tent used during the past four years.

Other potential sites in the same area are located much closer to established observatories, such as the Calern observatory, which has a strong tradition in optical and infrared interferometry. Calern itself lacks the deep depression needed for installing a sufficiently large meta-mirror of 100 m or more, but nearby valleys are potentially suitable. Calern now hosts a full-size, flat and static test-bed for preparing the Moutière observations.
Following the demonstration of construction and alignment techniques with the Moutière prototype, and the development of its operating software, we propose the step-wise construction of a "European Extremely Large HYPERtelescope (E-ELHyt)" (Labeyrie et al. 2012). As achieved successfully by ESO for ALMA, a low-risk strategy with low initial cost involves the on-site testing of a few small mirror segments feeding a small flying focal camera, carried by an electric drone assisted by a small helium balloon for reduced generation of air turbulence. Once the embryonic array reaches a sufficient Technology Readiness Level for science operation, it can be expanded by adding mirror segments. Their optimal size is comparable to Fried's radius of the turbulent cells, on the order of 20 to 60 cm for visible and near-infrared optimization. A meta-aperture size on the order of a kilometer appears feasible at sites featuring a suitably concave topography; its collecting area can eventually become comparable to that of the E-ELT, i.e. about 1500 m². The mirror-supporting system can be similar to that of the FAST radio telescope, currently being completed in China, and can similarly provide active piston corrections for a spherical or active paraboloidal geometry, quickly switchable between the two modes if only three actuators carry each mirror segment. The spherical geometry allows many focal gondolas to observe different sources at the same time, but requires within each gondola a pair of additional small mirrors for correcting spherical aberration.
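The segment count implied by these numbers is easy to estimate. The sketch below divides the ~1500 m² E-ELT-like collecting area by the area of circular segments in the quoted 20-60 cm range; it is a rough illustration ignoring gaps and packing.

```python
import math

# Sketch: number of small mirror segments needed for an E-ELT-like
# collecting area (~1500 m^2), for segment diameters comparable to
# Fried's radius (0.2-0.6 m, as quoted in the text).
TARGET_AREA_M2 = 1500.0

def n_segments(segment_diameter_m, target_area_m2=TARGET_AREA_M2):
    """Circular segments needed to reach the target collecting area."""
    seg_area = math.pi * (segment_diameter_m / 2.0) ** 2
    return math.ceil(target_area_m2 / seg_area)

for d in (0.2, 0.4, 0.6):
    print(f"segment diameter {d:.1f} m -> ~{n_segments(d):,} segments")
```

The result, several thousand to tens of thousands of segments, illustrates the "hundreds or thousands of sub-apertures" complexity challenge raised earlier in this chapter.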

Feasibility of an E-ELT-coupled hypertelescope
Following the successful interferometric coupling of the smaller ATs with one or more UTs at the VLTI, numerical simulations have demonstrated the science potential of larger coupled systems involving an ELT and the many small mirrors of an adjacent hypertelescope. There is an optical synergy which significantly improves the resolution and luminosity of the co-phased direct image. No adjustable optical delay lines are required, beyond the short-stroke co-phasing actuators, if the hypertelescope's meta-mirror virtually intersects the mechanical node of the ELT mount. The ELT's coudé beam can then be directed toward the hypertelescope's focal camera for a combined image. This appears feasible at the E-ELT site, which dominates a 500 m deep and 5.5 km wide valley on its East side, oriented North-South. An embryonic test system can probably be installed at low cost once the E-ELT construction is completed, or even before. Two potential limitations must however be considered: the North-South valley orientation reduces the declination coverage, and the prevailing winds at the E-ELT site are appreciably faster than at some other sites, requiring faster adaptive phasing.

Abstract
The Planet Formation Imager (PFI) project (www.planetformationimager.eu) was initiated to explore the scientific and technical potential of high angular resolution imaging of thermal emission around young stars for understanding the complex processes at play during planet formation. (The idea consolidated into the PFI project formulation for the first time at the international meeting on the future of interferometry at OHP in 2013: http://interferometer.osupytheas.fr.) With breakthrough technical advances in imaging of mm-wave emission (ALMA) and scattered light (GPI/SPHERE), we are at the beginning of a revolution in planet-formation studies, spurred on by advanced hydrodynamic simulations of protoplanetary discs. PFI will go beyond the few-AU resolution of modern instruments to push down to the scale of the circum-planetary accretion discs themselves, requiring <1 milliarcsecond resolution for nearby star-forming regions. A key objective of PFI is to trace the gas-giant population in nearby young systems at different stages of the planet formation and migration process, providing information that is crucial for understanding the demographics of mature exoplanetary systems. Solving the riddle of planetary formation has profound and far-reaching implications beyond astronomy, for it helps inform our place in the universe and the prospects for life on other worlds. PFI is currently in a Concept Definition phase, with an active Science Working Group and Technical Working Group preparing exploratory white papers in preparation for the 2020 "Decadal Survey" planning process.

Update on PFI Project (2016 September)
The PFI Science Working Group (SWG), headed by Stefan Kraus, has begun work on a white paper covering the following topics: protoplanetary disc structures and workings, planet-formation signatures in gas-rich (young) and gas-poor (evolved) discs, proto-planet properties and detection, circum-planetary accretion discs, late stages of planet-system formation, planetary-system architecture, planet formation in multiple systems, and a detailed look at potential target selection in nearby star-forming regions. In addition, we have groups looking at science cases beyond the core planet-formation goals, including broader exoplanet-related science, stellar astrophysics topics, and especially extragalactic observations of active galactic nuclei. The SWG is partnering with the simulation community to produce high-resolution, multi-wavelength synthetic datasets using the most advanced planet-formation simulations.
The Technical Working Group (TWG), with leadership from David Buscher, Michael Ireland, and John Monnier, has formed groups to study both concept architectures and the availability of the needed technologies. The science requirement of <1 milliarcsecond angular resolution requires interferometric techniques, although the optimal wavelength range has not yet been decided. Based on simulations of discs, the mid-infrared (5-20 microns) appears to be the best regime for studying discs over the widest range of spatial scales and for probing the small dust grains missed by sub-mm studies. We are developing detailed plans for a "direct detection" architecture similar to CHARA/VLTI, as well as a heterodyne architecture building on experience from the Berkeley ISI system. The baseline design consists of twenty 3-m-class telescopes spread over 5 km to achieve the top-level science requirements. In addition to architecture definition and detailed simulations of sensitivity and imaging fidelity, the TWG is surveying the technologies needed for PFI. The resulting Technology Roadmap will highlight key areas needing investment, such as inexpensive lightweight telescopes, mid-IR laser combs, and optimal fringe-tracking strategies. Lastly, the TWG is reviewing space-based and non-interferometric approaches for achieving the science priorities laid out by the SWG, as alternatives to the ground-based interferometer approach.
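The consistency of the baseline design with the resolution requirement can be verified with the interferometric resolution λ/B. The sketch below takes the 5 km array extent from the text as the maximum baseline and scans the quoted 5-20 µm band.

```python
import math

# Sketch: does a 5 km maximum baseline meet the <1 milliarcsecond PFI
# requirement across the quoted mid-infrared band (5-20 microns)?
MAS_PER_RAD = 180.0 / math.pi * 3600.0 * 1000.0
BASELINE_M = 5000.0  # maximum baseline of the 5 km array (from the text)

def resolution_mas(wavelength_m, baseline_m=BASELINE_M):
    """Interferometric resolution lambda/B in milliarcseconds."""
    return wavelength_m / baseline_m * MAS_PER_RAD

for lam_um in (5.0, 10.0, 20.0):
    r = resolution_mas(lam_um * 1e-6)
    print(f"{lam_um:4.0f} um -> {r:.2f} mas  (<1 mas requirement met: {r < 1.0})")
```

Even at the long-wavelength end (20 µm), λ/B is about 0.8 mas, so the 5 km array satisfies the <1 mas requirement over the whole band.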
The SWG and TWG plan to publish their white paper results in a refereed journal. Following a series of initial SPIE papers (ref.