9.1 Introduction

Among all tracking devices used in particle physics, nuclear emulsion particle detectors feature the highest spatial resolution in measuring ionizing particle tracks. Emulsions have contributed to outstanding achievements and discoveries in particle physics. Although the emulsion technique went through a period of decline, it has moved back to the front line of physics research thanks to the advances in digital read-out by high-speed automated scanning and the continuous development of emulsion gel design. In particular, emulsions are unsurpassed for the topological detection of short-lived particles and for specific applications in neutrino physics and other emerging fields. The large potential of emulsion detectors in applied research will also be shown in this chapter. In the following, we will mainly focus on developments in experimental techniques for particle physics and briefly present a selection of the main experimental results.

A nuclear emulsion comprises a large number of small silver halide crystals, uniformly dispersed in gelatine. Each crystal has a typical diameter of 200 nm and works as an independent detection channel, which results in a very high detection channel density of O(10^14) channels/cm³ in emulsion detectors. This makes emulsion detectors unique as particle detectors. The latest knowledge of the general photographic process is described in [1]. Herein, we discuss the detection principle of nuclear emulsions for ionizing particles.

The modern nuclear emulsion is made from silver bromide with a small fraction of iodide (AgBr\(_{1-x}\)I\(_x\), x being the iodide fraction of about a few mol%). The crystal structure of AgBr used for nuclear emulsions is face-centred cubic, and its shape is octahedral, as shown in Fig. 9.1. An AgBr crystal has a band gap of 2.684 eV. When a charged particle passes through the crystal, electrons in the valence band are transferred to the conduction band. Owing to shallow electron traps of 21–25 meV, the electrons diffuse inside the crystal until they are trapped in one of the sensitisation centres located at the surface of the crystal (electronic process). The sensitisation centre is artificially created via chemical sensitisation (e.g. sulphur-and-gold sensitisation); it is positively charged at the initial stage and works as an electron trap. The sensitisation centre, having trapped an electron, is negatively charged; therefore, it attracts interstitial silver ions, i.e. ions migrating in the crystal lattice. The silver ion reacts with the trapped electron and forms a single silver atom (\(\mathrm{Ag}^+ + e^- \rightarrow \mathrm{Ag}\), ionic process). The sensitisation centre is then again positively charged, ready to trap another electron. These electronic and ionic processes are repeated several times to form an aggregate of silver atoms, \(\mathrm{Ag}_{n-1} + e^- + \mathrm{Ag}^+ \rightarrow \mathrm{Ag}_n\), deepening its energy level. The energy level of an aggregate equal to or larger than \(\mathrm{Ag}_4\) is sufficiently deep to be “developable”, and the sensitisation centre at this stage is called the “latent image centre”. This signal is chemically amplified during the development procedure. The emulsion film is soaked in a developing solution, namely a reducing agent. The above-mentioned electronic and ionic processes are repeated by receiving electrons from the reducer through the latent image centre, which acts as a deep electron trap. This repetition lasts until the entire crystal is reduced to metallic silver. The reaction is expressed as follows:

\(n\,\mathrm{Ag}^{+} + \mathrm{Red} \rightarrow n\,\mathrm{Ag} + \mathrm{Ox} + m\,\mathrm{H}^{+}\)

where Red and Ox are the developing agent and the oxidized developing agent, respectively; n is the number of ions and m is the number of protons produced. Thus, a metallic silver filament remains at the position of the crystal with a latent image centre, whereas crystals without latent image centres remain unchanged. The gain of this amplification is very high, O(10^8). After washing out the remaining AgBr crystals via the fixing procedure, particle tracks are ready to be observed under the microscope, as shown in the right image of Fig. 9.1.

Fig. 9.1
figure 1

Left: silver bromide crystals (0.2 µm linear size), as seen with an electron microscope. Right: the track left by a minimum ionizing particle (10 GeV π) in nuclear emulsions; about 36 grains/100 µm are detected. Compton electrons of approximately 100 keV are also visible at the bottom-right of the view

The detection efficiency of a single crystal for minimum ionizing particles (MIP) is about 0.17 [2]. The sensitivity of nuclear emulsions is expressed as the number of grains per unit length. A typical emulsion has a sensitivity of 30–50 grains per 100 µm along the particle trajectory for minimum ionizing particles. Apart from the crystal size and chemical sensitisation, the sensitivity scales with the volume occupancy of the AgBr crystals with respect to the total volume of the emulsion layer, which ranges from 30 to 55%. The number of grains is proportional to the ionization power of the particle, which allows the measurement of the local energy deposition (dE/dx) of each track. The random noise, the so-called “fog”, is due to several causes, such as thermal noise, gelatine impurities and over-sensitisation. In general, a fog density of < 5 grains per 1000 µm³ (i.e. per 10-µm cube) is considered acceptable. In producing nuclear emulsions as detectors, emulsion layers with thicknesses of 10–300 µm are formed on a glass or plastic base. To track high-energy particles (> 100 MeV), a double-sided emulsion film with a 50-µm-thick emulsion layer on either side of a 200-µm-thick plastic base is often employed. To observe both emulsion layers across the plastic base with optical microscopes, the plastic base material should not exhibit double refraction; e.g. triacetyl cellulose and polymethylmethacrylate are appropriate.
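As a rough cross-check of the figures quoted in this section, both the channel density and the grain density follow from simple geometry. The sketch below is only indicative: it assumes a 40% AgBr volume occupancy (within the quoted 30–55% range) and approximates each crystal as a sphere crossed along its mean chord, 2D/3.

```python
import math

d = 0.2e-4           # crystal diameter in cm (200 nm)
occupancy = 0.40     # assumed AgBr volume fraction (quoted range: 30-55%)
eff = 0.17           # single-crystal detection efficiency for a MIP [2]

crystal_volume = math.pi * d**3 / 6.0        # volume of one spherical crystal
print(f"channel density ~ {occupancy / crystal_volume:.1e} per cm^3")  # ~1e14

# crystals crossed per 100 um of track = occupancy / mean chord length (2d/3)
track_length = 100e-4                        # 100 um, expressed in cm
crossed = occupancy * track_length / (2.0 * d / 3.0)
print(f"crystals crossed per 100 um ~ {crossed:.0f}")        # ~300
print(f"expected grains per 100 um ~ {eff * crossed:.0f}")   # ~50
```

Both estimates land in the quoted ranges, O(10^14) channels/cm³ and 30–50 grains per 100 µm.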

The RMS resolution of a one-dimensional detector with a segmentation pitch of D is \(D/\sqrt {12}\). Assuming that the silver halide crystal shape is approximately spherical, the resolution with a crystal diameter D is \(\sqrt {\pi }D/8\) (RMS). For example, this gives 44 nm for an emulsion with 200-nm-diameter crystals. In reality, these values are slightly larger owing to the delta-ray component. A measured resolution of 50 nm (RMS) was reported for an emulsion film with a 200-nm crystal size by using high-energy particles [3], as shown in Fig. 9.2. The one-dimensional intrinsic angular resolution of a double-sided emulsion film with 200-nm-diameter crystals and a base thickness of 200 µm is therefore 0.35 mrad. Owing to the excellent position and angular resolution, one can build a vertex detector, use it as a sampling calorimeter to reconstruct electromagnetic showers, and measure particle momenta by multiple Coulomb scattering, as will be discussed in Sect. 9.2. Nuclear emulsion detectors may be coupled with electronic detectors to add timing information and/or muon identification. Since emulsion detectors can be produced in many different sizes and shapes, there is a large variety of possibilities for a hybrid detector system, depending on the physics goals, which we shall discuss in the next sections. One double-sided film with a size of 10 cm × 10 cm contains approximately 1 cm³ of emulsion and comprises O(10^14) detection channels, as mentioned above. After chemical treatment, this huge number of channels has to be read out for physics analyses. This is the task of automated scanning microscopes, whose implementation is of fundamental importance in modern experiments making use of nuclear emulsions. This will also be discussed in the following sections.
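The resolution figures above can be reproduced with a few lines of arithmetic; a minimal sketch, using the formulas of this paragraph and assuming the measured 50 nm single-layer resolution for the angular estimate:

```python
import math

d_crystal = 200e-9        # crystal diameter (m)
base = 200e-6             # plastic base thickness (m)

print(d_crystal / math.sqrt(12) * 1e9)            # pitch-detector RMS, ~58 nm
print(math.sqrt(math.pi) * d_crystal / 8 * 1e9)   # spherical-crystal RMS, ~44 nm

# base-track angle from two position measurements (~50 nm each, measured [3])
# taken at the two faces of the 200 um base
sigma = 50e-9
print(math.sqrt(2) * sigma / base * 1e3)          # ~0.35 mrad
```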

Fig. 9.2
figure 2

Distribution of the distances between grains and straight-line fits to the tracks of minimum ionizing particles, showing the emulsion intrinsic spatial resolution [3]

9.2 Early Times of the Technique and the Emulsion Cloud Chamber

Thorough reviews of the basic properties and early applications of nuclear emulsions can be found in [4, 5]. The first notable examples of the use of photographic emulsion (plates) are the discovery of radioactivity by Becquerel in 1896 [6] and the measurements by Kinoshita [7], who in 1910 found records of alpha-particle radiation detected as tracks by means of optical microscopes. The emulsion technique greatly improved during the 1930s and 1940s thanks to the group of Bristol University led by Powell. He developed electron-sensitive nuclear emulsions produced by ILFORD and KODAK [8]. Powell and his group had further developed and greatly extended the seminal work of Marietta Blau. She is also known for the development of thick emulsions by a two-bath method [9].

The thickness of emulsions increased from the original 50–100 μm used in 1946 to 600–1000 μm. Even with a 500 μm thickness, a large part of the tracks of charged particles originating in the emulsion was not contained. Although an exceptional attempt to process a 2000 μm thick emulsion was reported in 1950 [10], the difficulties of processing increased rapidly with the thickness, and new difficulties appeared in the visual inspection, due to the larger scattering of light in the emulsions and the loss of optical contrast.

Plates were arranged in pairs with emulsions face to face, thus doubling the effective thickness. In 1952 a new approach was established [11,12,13]. Once a batch of plates was produced, the emulsions were stripped from the glass and packed together to form an almost solid sensitive mass, called a stack. After exposure, the emulsions were dipped in a solution of glycerine with gelatine and then made to adhere to specially prepared glass plates. The use of a penetrating X-ray beam defined a reference frame to connect consecutive emulsion layers. With such a procedure, tracks of single particles could be quickly followed through the successive emulsions of a stack. The use of stripped emulsions became popular and allowed important contributions to be made to many experiments in particle physics, as we will see in the following.

Photographic plates with 600 μm thickness were manufactured by means of newly produced emulsion gel able to record and detect the passage of ionizing particles. In parallel, dedicated microscopes were developed to observe and measure the particle tracks. With these emulsion detectors exposed to cosmic rays, Powell solved in 1947 the mystery of the Yukawa meson by detecting the pion through its decay into a muon [14,15,16]. A picture of this decay as seen in nuclear emulsions is shown in Fig. 9.3. Powell was awarded the Nobel Prize in physics in 1950 for this discovery, made possible by the use of nuclear emulsions. In the presentation of the Nobel Committee, the simplicity of the apparatus used to make such a discovery was underlined.

Fig. 9.3
figure 3

Photomicrographs of one example of π → μ decay taken from [15]

A few years later, in 1955, exotic hyper-nuclei were also identified with nuclear emulsions [17]. In a large balloon experiment in 1960, a 70 l emulsion chamber called “Bloury Stack” was exposed to high-altitude cosmic-rays to study their nature and the features of the induced high-energy interaction phenomena [18]. However, a crucial limitation of the technique (for those days) was met: due to the lack of scanning power, the experiment could not achieve the expected results.

A major breakthrough in the emulsion technique was the introduction of the so-called Emulsion Cloud Chamber (ECC) detector [19]. With the ECC a drastic change in the detector design philosophy occurred: emulsions became a high-resolution tracking detector with three-dimensional reconstruction capabilities, rather than a visual and volume detector. This is obtained by sandwiching emulsion films or plates with passive material layers, usually made of plastic or metal plates. Today, we would call such a detector a very finely subdivided sampling-calorimeter, by means of which all charged tracks originating from the shower are reconstructed in space with high resolution. In the ECC, emulsion films are placed perpendicular to the incoming particles, so acting as a tracking detector featuring high spatial resolution (up to 1 μm).

The first design of the ECC consisted of a sandwich of brass plates and thin emulsion films. This type of detector was first developed by Kaplon and used to study heavy primaries in cosmic-ray interactions [19]. ECC detectors were applied to the study of the cosmic-ray spectrum and to very-high-energy interaction processes. Nishimura, in particular, proposed the cascade shower analysis method to measure the energy of interacting γ-rays and predicted the capability of this detector to regulate the development of electron showers by an appropriate choice of the passive material plates [20].

Niu developed double-sided emulsion plates in which the sensitive emulsion layer is deposited on either side of a plastic substrate (see e.g. [21]). For this purpose FUJI developed a special 800 μm thick plastic base to allow gel pouring on both sides of the base. The emulsion layers were 50 μm thick. With this new film design, two problems had to be solved: the availability of a plastic base with optical properties compatible with those of nuclear emulsions, and of a high-power objective lens with a working distance longer than 1 mm. The first problem was overcome with methacrylic (Lucite) plates, the second was solved thanks to the efforts of Tiyoda Optical Co. The use of a plastic base between the two emulsion layers allows a precise measurement of the track angle by connecting the grains closest to the base. These points are indeed not affected by distortions. The long lever arm available with such a thick base improves the angular resolution to about 1 mrad.

The ECC opened the way to a series of important experiments of large size, thanks to the use of the dense metal plates allowing the realization of large-mass detectors with unprecedented space resolution. For the study of high-energy cosmic-rays (10 TeV) and the determination of their power law spectrum, we mention in particular the Chacaltaya experiment [22] that allowed the study of the central core of air showers, and the relatively large-size Mt. Fuji experiment [23]. For even higher cosmic-ray energies (1000 TeV and more), the RUNJOB [24] and JACEE [25] experiments studied the spectrum of primary heavy ions.

As said above, the analysis methods of ECC events are based on the reconstruction of all tracks produced following a primary interaction, likely occurring in the dense passive material. Space angles are measured for all track segments. Shower reconstruction and identification (electromagnetic or hadronic) can be performed on the basis of the topological features of the shower. In the same way, one can also reconstruct particle decays. In addition to the topological studies, powerful kinematical analyses can be conducted with ECC detectors by exploiting Multiple Coulomb Scattering and emulsion ionization measurements, which can lead to surprisingly accurate measurements of particle momenta and particle identification.
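As an illustration of the momentum measurement by Multiple Coulomb Scattering mentioned above, the standard Highland parametrization of the scattering angle can be inverted to extract pβ from the RMS angle accumulated in the passive material. The sketch below is generic; the 1 mm lead cell and the 2.3 mrad measured angle are purely illustrative values, and in a real ECC many consecutive cells are combined.

```python
import math

def pbeta_MeV(theta0_rad, x_over_X0, z=1):
    """Invert the Highland formula theta0 = 13.6 MeV/(p*beta) * z *
    sqrt(x/X0) * (1 + 0.038*ln(x/X0)) and return p*beta in MeV/c."""
    corr = 1.0 + 0.038 * math.log(x_over_X0)
    return 13.6 * z * math.sqrt(x_over_X0) * corr / theta0_rad

x_over_X0 = 1.0 / 5.6      # one 1 mm lead plate, X0(Pb) ~ 5.6 mm
theta0 = 2.3e-3            # hypothetical measured RMS projected angle (rad)
print(pbeta_MeV(theta0, x_over_X0))   # ~2.3e3 MeV/c, i.e. p*beta ~ 2.3 GeV/c
```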

A notable example is given by Niu's discovery of the so-called X-particles in 1971 [21, 26]. Figure 9.4 shows a sketch of the observed topology for one event where two charged particles produced in the cosmic-ray interaction show a kink decay-topology. Today we know that this event has to be attributed to charmed meson production and decay. This happened 3 years before the discovery of the J/Ψ particle by the groups of Richter [27] and Ting [28]. During those 3 years, several papers referring to that event were published by Japanese theorists [29], while there was no comparable response from the western high energy physics community. The reason for that can probably be attributed to the lack of confidence in the emulsion technique felt at that time by western scientists, and also to the fact that the community carrying out cosmic-ray studies was quite separate from that working at particle accelerators, which at that time were not operating in Japan. It is worth noting that the main distinctive features of charmed mesons were already indicated in 1974 as a result of the kinematical analysis conducted by Niu and coworkers [30], who had also re-analysed ECC data from cosmic-ray exposures carried out many years before. In addition, these authors realized that the lifetimes of charged and neutral charmed hadrons differed by a factor ranging from 2 to 3 [31].

Fig. 9.4
figure 4

Schematic drawing of the first evidence for the production and decay of short-lived “X particles” (charmed particles) in cosmic-ray interactions

ECC detectors allowed the design of hybrid experiments combining emulsions and electronic detectors, the latter mainly used for two purposes: (1) to provide time resolution to the emulsion stack (trigger signal), (2) to pre-select the region of interest for the event occurring in the ECC, indicating the place where to start the emulsion scanning. The first hybrid experiments employed semi-automated video-camera systems to read out the emulsion tracks and reconstruct three-dimensional vectors by measuring X, Y, \(\theta_X\), \(\theta_Y\), with Z the emulsion depth. Computers were only used to assist an operator in performing the track measurements and to provide the micro-metric movement of the microscope stage. Relative alignment was performed by fiducial X-ray marks combined with the precision measurement of the film edge positions. The typical thickness of the double-sided emulsion plate was 1 mm or larger, allowing tracks with a given angle w.r.t. the emulsion plane to be followed by varying the focal plane of the objective lenses. The video-camera was used to grab the image from the objective lens with a rather time-consuming procedure. An operator had to manually adjust the video-image on the visually detected track, while dark spots could be automatically detected. The TV screen also allowed graphic tools to be run for measuring track positions and angles.

The mechanical stability of the ECC sandwich was ensured by a vacuum packing paper known as “origami”, also required to isolate the emulsion films from external light, humidity and polluting gases. Plate-to-plate alignment was performed by X-ray lines and/or X-ray spots, typically from a \(^{55}\)Fe source. The association between the ECC and the electronic detectors was accomplished by joining particle tracks, preferably of high momentum and hence less affected by Multiple Coulomb Scattering. In this respect, the idea of using interface emulsion plates in between the ECC module and the electronic detectors has proven to be very effective. These interface films were called Changeable Sheets (CS) because they were frequently replaced during the physics run in order to limit the integration of background tracks and to easily identify tracks found in the electronic detectors. This concept was first applied in the E531 experiment at Fermilab [32], as we will see in the following, and it is presently being used in several applications, also for large-scale ECCs.

9.3 Notable Experiments Employing Nuclear Emulsions

During the 1970s, emulsion detectors of increasing mass and complexity were developed for applications to particle physics experiments conducted at particle accelerators with experimental setups also including electronic detectors (hybrid experiments). Emulsions are often employed as active targets with high space-resolution, and the electronic detectors, namely trackers, calorimeters and spectrometers, are used to pre-select or trigger specific events in the emulsions and to complement the kinematical information of the events.

In early experiments with accelerators, nuclear emulsions were coupled to spark and bubble chambers in order to reduce the total scanning time. We recall here the observation of the decay of a charmed particle produced in a high-energy neutrino interaction in a Fermilab experiment [33]. The latter was performed in the wide-band neutrino beam produced by 400 GeV protons, by using a detector made of spark chambers placed downstream of nuclear emulsion stacks. Stacks containing altogether 16 l of ILFORD X5 emulsion, made up of pellicles of 20 cm × 8 cm × 0.6 mm dimensions, were placed in association with a double wide-gap spark chamber followed by a detector of electromagnetic showers and a muon identifier. A veto counter upstream discriminated against interactions in the emulsion produced by charged particles. About 250 neutrino interactions were predicted by the spark chambers. Given the vertex position resolution, a volume of about 0.7 cm³ was visually scanned around the prediction for about one third of the events; 16 of them were located and fully reconstructed in the emulsions and one of them was found with a topology consistent with that of charm.

A search for charmed particles in neutrino interactions was carried out at CERN in 1977 with stacks of nuclear emulsions placed in front of the entrance window of the Big European Bubble Chamber (BEBC) [34], filled with liquid hydrogen and placed in a magnetic field of 3.5 T. A veto-coincidence counter system was added in front of BEBC for this purpose. The emulsion stacks were made of 3150 pellicles of ILFORD emulsion, each 600 μm thick. The quality of the emulsion as well as the high level of muon track background precluded any systematic scanning along the track. A “surface” scan was therefore carried out for the bulk of the events with 200× and 300× objective lenses, over an emulsion volume of 5×31 mm² in 7 plates, centred on the predicted vertex position. A total of 206,000 BEBC pictures were analysed, leading to 935 neutrino interaction vertices inside the emulsion, 523 of which were identified as charged current events. After kinematical and topological cuts, 169 charged current interactions were selected, 8 of them being identified as neutrino-induced charmed particles. The experiment reported the first direct observation of a charmed baryon decay [35] and of a neutral charmed particle [36].

The E531 experiment [32] was proposed in 1978 at Fermilab to study the properties of charmed particles and their production mechanism in neutrino interactions [37]. The neutrino beam was produced by 350 GeV protons for a first exposure (7.2 × 10^18 protons on target) and by 400 GeV protons for the second one (6.8 × 10^18 protons on target). The overall beam composition was 92.3% \(\nu_\mu\), 7.0% \(\bar {\nu }_{\mu }\), 0.5% \(\nu_e\) and 0.2% \(\bar {\nu }_e\). The active neutrino target was made of nuclear emulsions where short-lived particles were detected with micrometer accuracy. The decay products were then measured by means of an electronic spectrometer, thus making E531 the first hybrid particle physics experiment.

The emulsion target consisted of 22.6 l in the first run and of 30 l in the second one; it was made of modules composed of plates with 300 μm emulsion layers coated on both sides of 70 μm thick polystyrene foils. Downstream of the emulsion modules, two large lucite plates 800 μm thick, coated on both sides with 75 μm emulsion layers, acted as interface emulsion films, so establishing the new detector concept of the Changeable Sheets (CS). Tracks reconstructed by electronic detectors were first searched for in these interface films and then followed back in the bulk target up to the neutrino interaction vertex. The CS were replaced every 2 or 3 days of data taking in order to limit the number of accumulated background tracks that would have affected the efficiency of finding the interaction vertex in the target.

Downstream of the target, a magnet equipped with high-resolution drift chambers provided the track prediction in the CS with an accuracy of about 150 μm and 1 mrad. A time-of-flight detector made of two scintillator planes located 2.7 m apart yielded a time resolution better than 1 ns. The setup was complemented by a lead glass array and a hadron calorimeter followed by a muon spectrometer. Three thousand eight hundred eighty-six neutrino interactions were located in the fiducial volume of the target. One hundred and twenty-two events were tagged by the presence of a secondary vertex in the target, 119 induced by neutrinos and 3 by anti-neutrinos. Events with a candidate charmed hadron in the final state were studied in detail in order to detect the presence of heavily ionizing particles (baryons) and fully reconstruct the kinematics at the decay vertex. Among those events, 57 were classified as \(D^0\) candidates.

The analysis of the charmed hadrons is reported in [38]. Re-analyses of these results were conducted later and removed some biases present in the original studies [39, 40]. The results of the cross-section measurements are given in [41]. In this paper, the observation of one event with the \(D^0 - \bar {D}^0\) topology was reported, interpreted as associated charm production in neutral current interactions. The lifetime of charmed particles was extensively studied by E531 [42]. Limits were also set on \(\nu_\mu \leftrightarrow \nu_\tau\) oscillations [43].

After the discovery of the b quark in 1977 [44], experiments with nuclear emulsions aimed at the direct observation of the production and decay of B-flavoured hadrons. A successful search was first performed by the WA75 experiment at CERN by using a \(\pi^-\) beam of 350 GeV [45]. Eighty litres of nuclear emulsion, in the form of double-coated plates and stripped pellicles, were exposed in 1983 and 1984. The emulsion stacks were placed both parallel and perpendicular to the beam, so exploiting the advantages of both approaches. Emulsions held perpendicular to the beam in vertical position can in fact tolerate higher track densities, while those placed parallel are more sensitive to short particle lifetimes.

The emulsion was delivered in gel form by FUJI (75 l) and ILFORD (5 l) and the pouring was done in a facility set up at CERN [46]. Each vertical stack was made of 25 double-coated plates (330 μm thick emulsion, poured on both sides of a 70 μm thick Lexan support), 25×25 cm² wide and packed in vacuum. The horizontal stacks were made of 60 stripped emulsion pellicles, 11 cm × 4 cm (4 cm along the beam) and 600 μm thick, piled up and clamped between two rigid Perspex plates. The processing of the films was carried out in Nagoya for double-coated plates, in Rome for pellicles, and at CERN for both. After processing, each double-coated plate was cut into 64 squares, 3 × 3 cm² in size, so-called mini-modules. Twenty-five squares of a module were then stuck, in sequence, on a single Lucite foil. With such a technique, the corresponding areas of consecutive emulsion plates were adjacent, thus reducing the time needed to follow a track through the stack [47]. The size of the beam was so small that it was necessary to move the target during each beam spill in order to have a uniform irradiation, thus introducing the concept of the target mover. The WA75 experiment observed one event [48], schematically depicted in Fig. 9.5 as recorded in the pellicles, where both B hadrons are observed to decay into a charmed particle. The experiment also made the first observation of the purely muonic \(D_s\) decay, measuring the decay constant \(f_{D_s}\) [49].

Fig. 9.5
figure 5

Schematic drawing of the first hadro-produced \(B\bar {B}\) pair event observed in nuclear emulsions by the WA75 experiment

The Fermilab E653 experiment [50] was designed to measure the lifetime of B hadrons. This detector was an extension of the hybrid emulsion technique developed for the E531 experiment and was optimized for a hadron beam. In fact, while in the E531 neutrino experiment charm was produced in one out of twenty charged current interactions, only one hadronic interaction in a thousand produces charm, and one in a million bottom. Thus, a larger discrimination against non-heavy-quark background was required to limit the emulsion scanning load. To achieve this, a high-resolution electronic spectrometer was placed downstream of the emulsions. Moreover, in order to cope with the large number of candidate events, the emulsion analysis required the development of computer-aided microscope techniques [51]. Events reconstructed in the spectrometer with a muon of high transverse momentum (\(p_T\) > 1.5 GeV/c) were selected for scanning in the emulsion. In the first run of 1985 an 800 GeV proton beam was used, mainly aiming at charm production. In a second run in 1987 a 600 GeV negative pion beam was exploited for the study of B mesons. Two types of target modules were employed; 55 were “vertical” and the rest “horizontal”. In the first run, vertical modules were exposed to 1.5×10^5 protons/cm² and the horizontal ones to 0.8×10^5 protons/cm². The second-run exposures corresponded to 3.0×10^5 pions/cm² and 1.0×10^5 pions/cm², respectively, for the two orientations.

Forty-nine and fifty-six target modules were exposed, respectively, in the first and second run, for a total of 71 l of FUJI nuclear emulsion. Each vertical module consisted of 20 thick emulsion plates (330 μm emulsion layer on each side of a 25 cm×25 cm×70 μm polystyrene plate) and a thin film (70 μm emulsion layer on either side of a 25 cm×25 cm×500 μm lucite plate). The thin film was separated from the main block of thick plates by a 10 mm thick honeycomb, the latter combination being considered as the analysing region, while the thick plates made up the target region.

The emulsion modules were mounted on a target mover and displaced through the beam during the slow spill, in order to have a uniform exposure. The movement of the target was digitally controlled and the positioning encoding system ensured an accuracy of 10 μm [52]. Eighteen silicon microstrip planes of the electronic vertex detector were located 5.7 cm downstream of the emulsion target. Secondary vertices were reconstructed by the silicon planes with typical resolutions of 6 μm transverse to and 350 μm along the beam direction. The total fiducial decay region for bottom particles, including emulsions and silicon planes, was 12 cm long.

The emulsion analysis procedure first located the primary vertex. Six thousand five hundred forty-two events were selected within the fiducial volume of the emulsion and for all but 9 the primary vertex was found thanks to the excellent performance of the electronic detectors. The majority of the events were discarded by requiring a stringent angular agreement (2 mrad for tracks with a slope within 40 mrad) between the reconstructed spectrometer track and any track at the primary vertex. Three hundred and fifty-nine events in which the muon did not come from the primary vertex were retained for the secondary vertex search. Nine events met the selection criteria for bottom [53]. The b lifetime was also measured.

At the end of the 1980s, the production of charmed particles from quark-gluon plasma was expected to differ from that due to proton-nucleus interactions [54]. In particular, a large enhancement of charmed quark pair creation was expected. From the experimental point of view, the major difficulty for charm detection in such nucleus-nucleus interactions came from the very short-path decay in a region close to the primary interaction, where the particle density was extremely high. Two studies were carried out at CERN on this subject with emulsions, one within the NA34/2 emulsion-HELIOS programme [55] and the other one within the EMU09 Collaboration [56]. In NA34 [55], the production of charmed particles was detected in 200 GeV/nucleon \(^{16}\)O–emulsion interactions and its cross-section was measured. Stacks of FUJI gel were exposed vertically to the \(^{16}\)O beam. Each stack consisted of 8 double-coated plates with a surface of 25 × 15 cm² and a thickness of 700 μm (70 μm polystyrene base coated on both sides with a 315 μm thick emulsion layer).

In order to study charmed particle production in central interactions of 200 GeV per nucleon \(^{32}\)S nuclei, the EMU09 Collaboration designed an emulsion-counter hybrid experiment at CERN [56]. The hybrid design was meant to reduce the background from secondary interactions in the emulsion, which would have spoiled the signal with heavier projectiles, differently from the case of \(^{16}\)O. A thin and pure target was made of 100 μm thick Ag and Pb plates. Two emulsion plates, in the form of tapes, were placed downstream of the target and used as a tracking device, able to detect short-path decay vertices while producing very little secondary activity. The emulsion tape used in this experiment was made from a 200 μm thick acetate base with FUJI gel poured on both sides of the base, to obtain 70 μm thick layers. The emulsion analysis speed at that time did not allow a sufficiently large statistics to be integrated for such rare events.

Nuclear emulsions have also played an important role in the study of multiquark systems and the quark confinement aspects of QCD. The hybrid emulsion experiment E176 [57] was carried out at KEK by using a 1.66 GeV/c \(K^-\) beam to study double-strangeness nuclei produced via Ξ hyperon capture at rest. Indeed, the \(K^-p \rightarrow K^+\Xi^-\) interaction produces a \(\Xi^-\), which at rest may be captured via the process \(\Xi^-p \rightarrow \Lambda\Lambda\). In the hypothesis that the H-dibaryon (ssuudd) exists, the double-Λ hypernucleus can decay by emitting an H-dibaryon, in turn decaying into Σp within less than 1 mm from the Ξ stopping point. Unlike old-fashioned emulsion experiments where only emulsion stacks were exposed to \(K^-\) beams [58], the hybrid design allowed the identification of the \(K^+\) meson and the accumulation of a large statistics. Emulsion stacks were exposed vertically, perpendicular to the beam. Emulsion plates were of two types: 550 μm thick emulsion layers on both sides of a 70 μm thick polystyrene base, and 70 μm layers on both sides of a 500 μm lucite base. Thinner films with a thicker base were used to avoid the degradation of the angular resolution due to distortion effects. Three double-Λ hypernuclei candidates were observed [57, 59]. However, no conclusive answer was provided on the Λ-Λ interaction. With this aim, the E373 experiment at KEK [60, 61] searched for S = −2 nuclei in nuclear emulsion with higher statistics. The apparatus was based on an emulsion-counter hybrid method, where a laser microscope performed the three-dimensional image processing of the emulsion images, scintillating fiber blocks detected the decay products of strange particles, and a glass capillary tracker filled with liquid scintillator provided precise predictions of the Ξ emission angle and position. The experiment reported the observation of double hypernuclei and the Λ-Λ interaction was finally measured [62, 63]. A follow-up experiment is planned for the new J-PARC hadron facility at Tokai, still employing the hybrid detector technique with an emulsion plate stack [64].

9.4 Nuclear Emulsion Detectors with Digital Technology

9.4.1 Automated Scanning Systems and Analysis Methods

A major breakthrough in the emulsion technique occurred in 1974, when the idea of a tomographic read-out of the emulsion plates was introduced by the Nagoya group [65]. For an emulsion layer about 20 times thicker than the focal depth, one can take multiple tomographic images by sampling the emulsion layer. Those images can then be superimposed according to a given value of the presumed track slope, looking for space coincidences of the grains. After applying a detection threshold, needed to remove the accidental background, a track can be defined. A first implementation of this concept led to the development of a first-generation system [47] in which 16 tomographic images were superimposed and a TV tube was used to grab the image. This concept was developed further and successfully applied to the CS emulsion scanning of the E653 experiment at Fermilab [51].
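The principle can be illustrated with a short sketch (a simplified rendering, not the actual implementation: binary images, a uniform pixel size and a wrap-around shift are assumptions made here for brevity). Each tomographic layer is shifted according to the presumed slope and the grain images are summed, so that grains lying on a track of that slope pile up in the same pixel and pass the coincidence threshold.

```python
import numpy as np

def coincidences(layers, depths_um, slope_x, slope_y, pixel_um, threshold):
    """layers: list of 2-D binary arrays (1 = dark grain pixel), one per
    tomographic image; depths_um: their depths in micron. Returns the pixels
    whose summed grain count reaches the threshold for the presumed slope."""
    summed = np.zeros_like(layers[0], dtype=int)
    for img, z in zip(layers, depths_um):
        dx = int(round(slope_x * z / pixel_um))   # shift expected for a track
        dy = int(round(slope_y * z / pixel_um))   # of slope (slope_x, slope_y)
        summed += np.roll(np.roll(img, dx, axis=1), dy, axis=0)
    ys, xs = np.where(summed >= threshold)
    return list(zip(xs.tolist(), ys.tolist(), summed[ys, xs].tolist()))
```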

This technique was further developed by the Nagoya group and led to the so-called Track Selector [66]. The TV tube was replaced by a CCD camera, yielding higher stability and better space resolution. An FPGA-based image processor handled the 16 tomographic images of each emulsion plate. The scanning speed was actually limited by the time required for the computer-controlled objective lenses to move to the 16 different focal positions, since for each step some time was needed to damp the stage vibrations. Another limiting factor was the size of the optics field of view. A tracking efficiency as large as 90% was reached, with the main source of noise given by short Compton electron tracks. A scanning system based on a different approach was developed in Salerno [67]. This device exploited a multi-track approach without any angular preselection.

A further important step was the establishment of fully-automatic offline analysis methods, going beyond the digitization of individual tracks around a given angle performed with the Track Selector. This progress was mainly driven by the availability of faster electronics and CCDs and of better-performing stage mechanics. The so-called net-scan method (described in [68]), developed in Nagoya, allowed the reconstruction of tracks by associating all detected track segments regardless of their angle. Obviously, the area over which the net-scan could realistically be performed depended on the available scanning speed. The latter was about 1 cm² per hour with the UTS system [69], which exploited parallel data processing. The net-scan method allowed complete event reconstruction both at the interaction and decay vertices, precise measurements [70], the search for downstream particle decays, momentum determination by Multiple Coulomb Scattering [71, 72], and electron identification by cascade shower analysis [73, 74]. Figure 9.6 shows the different steps of the emulsion data processing in the net-scan method.
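Conceptually, the linking step of the net-scan method reduces to connecting base tracks of consecutive films whose slopes agree and whose positions match once extrapolated across the film-to-film gap. The sketch below is schematic; the data layout and the tolerances are illustrative assumptions, and in the real procedure an iterative film-to-film alignment precedes the linking.

```python
from dataclasses import dataclass

@dataclass
class BaseTrack:
    film: int                 # film (plate) index, increasing downstream
    x: float                  # positions in micron
    y: float
    tx: float                 # slopes dx/dz, dy/dz
    ty: float

def links(tracks, gap_um=1300.0, pos_tol_um=10.0, ang_tol=0.02):
    """Return the pairs of base tracks on consecutive films that are
    compatible in angle and in extrapolated position."""
    out = []
    for a in tracks:
        for b in tracks:
            if b.film != a.film + 1:
                continue
            if abs(a.tx - b.tx) > ang_tol or abs(a.ty - b.ty) > ang_tol:
                continue
            # extrapolate the upstream segment across the gap to film b
            if (abs(a.x + a.tx * gap_um - b.x) < pos_tol_um and
                    abs(a.y + a.ty * gap_um - b.y) < pos_tol_um):
                out.append((a, b))
    return out
```

Chains of such links form the volume tracks of the middle plot of Fig. 9.6; passing-through tracks are then discarded and the remaining ones are tested for a common vertex.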

Fig. 9.6
figure 6

Different steps of the emulsion data processing in the net-scan method. On the left plot all base-tracks in 15 films of the volume under study are reconstructed; they participate in the alignment process from which tracks are reconstructed, as shown in the middle plot; on the right plot passing-through tracks are discarded and the interaction vertex is reconstructed

9.4.2 Applications to Neutrino Experiments

In the early 1990s it became evident that the next generation of neutrino (oscillation) experiments would greatly profit from the use of dense, high space-resolution emulsions to realize hybrid detectors well suited to the high-sensitivity study of short decay topologies (charm, τ), with the possibility of a full reconstruction of the event kinematics, in turn required for background suppression. This approach was motivated and justified by the advances in the emulsion technique regarding the handling of large quantities of emulsions, and also by the above-mentioned progress in emulsion scanning and offline analysis, which allowed the emulsion analysis time to be reduced by orders of magnitude compared to the early times.

The CHORUS detector [75] is a good example of a large hybrid experimental setup combining a nuclear emulsion target with various electronic detectors. The detector was designed to search for \(\nu_\mu \leftrightarrow \nu_\tau\) oscillations in the CERN WANF neutrino beam with high sensitivity. At that time, a relatively massive \(\nu_\tau\) was a preferred candidate to explain the Dark Matter of the Universe. Since charmed particles and the τ lepton have similar lifetimes, the detector was also well suited for the observation of the production and decay of charmed particles.

In CHORUS, too, nuclear emulsions acted both as the neutrino target and as a high space-resolution detector, allowing the three-dimensional reconstruction of short-lived particles. The emulsion target had an unprecedented large mass of 770 kg and was segmented into four stacks, each consisting of eight modules, each in turn composed of 36 plates with a size of 36×72 cm². Each plate had a 90 μm plastic support coated on both sides with a 350 μm emulsion layer [76]. Each stack was followed by a set of scintillating fibre tracker planes. Three Changeable Sheets with a 90 μm emulsion layer on both sides of an 800 μm thick plastic base were used as interface between the fibre trackers and the bulk emulsion. The accuracy of the fibre tracker prediction was about 150 μm in position and 2 mrad in the track angle. The electronic detectors downstream of the emulsion target and the associated trackers included a hadron spectrometer measuring the bending of charged particles in an air-core magnet, a calorimeter where the energy and direction of showers were measured, and a muon spectrometer.

CHORUS represents a milestone in the history of nuclear emulsions for the size of the target and of the CS, which implied very labor-intensive procedures for emulsion gel production, pouring on the plastic bases, and development conducted in the CERN emulsion laboratory [46], as well as for the first massive use of automated scanning microscopes running in the Japanese and European laboratories of the Collaboration [75].

The operation of the experiment consisted of several steps. It is worth noting that the large-size emulsion target was replaced only once during the entire duration of the experiment, while the CSs were periodically exchanged with new detectors, therefore integrating tracks for a relatively short period. The best time resolution was obviously provided by the electronic detectors. With the CS scanning, the association between electronic detectors and emulsions took place, and tracks with position and angle compatible with the electronic trackers’ predictions were searched for in the interface emulsions. If found, these tracks were further extrapolated into the bulk emulsion, with a much better resolution, up to the track stopping point, with a procedure called scan-back, consisting of connecting emulsion layers progressively further upstream. After that, a “volume scan” (net-scan) around the presumed vertex was performed and repeated for all stopping tracks until the neutrino interaction vertex was found.
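The scan-back logic amounts to a simple loop over films: the track confirmed in the interface emulsions is searched for film by film towards the upstream end, and the film where it is last seen defines the presumed vertex region. The sketch below is only illustrative, with a hypothetical find_track_in_film helper standing in for the real position- and angle-matched track search.

```python
def scan_back(find_track_in_film, start_film, n_films, max_misses=3):
    """Follow a track upstream, film by film. find_track_in_film(i) returns
    True if the followed track is found in film i. Returns the most upstream
    film where the track was seen, i.e. where the volume scan is launched."""
    last_seen, misses = start_film, 0
    for i in range(start_film - 1, start_film - n_films, -1):   # go upstream
        if find_track_in_film(i):
            last_seen, misses = i, 0
        else:
            misses += 1
            if misses >= max_misses:      # track has stopped (or was lost)
                break
    return last_seen
```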

In the search for charmed particle decays, a dedicated topological selection was applied to the collected net-scan data. The analysis procedure was complemented by the visual inspection of the selected event candidates, aimed at checking both primary and secondary vertices, making use of the “stack” configuration. Decay topologies could be well separated from ordinary nuclear interactions, since the latter usually exhibit fragments from nuclear break-up or so-called “blobs” from nuclear recoil.

More than 100,000 neutrino interactions were located in CHORUS. The search for oscillations was negative and an upper limit to the oscillation probability was eventually set [77]. CHORUS reported the first observation of associated charm production in charged current interactions [78]. This first observed event is shown schematically in Fig. 9.7. It represents the production of two charmed particles in a charged current interaction induced by a muon neutrino. Apart from six tracks of high ionization coming from the nuclear break-up and not drawn in the sketch, at the primary neutrino interaction vertex there are two charged tracks: one is the negative muon and the other one, indicated as particle 1, is a charmed hadron. The charged charmed particle shows a 417 mrad kink angle after travelling 1010 μm. The outgoing particle, indicated as particle 2, shows a flight length of 7560 μm and a reinteraction with an outgoing particle (particle 3) of high ionization. In addition to the charged charmed hadron, the decay of a neutral charmed particle is visible 340 μm downstream of the primary vertex. Two particles are generated from the neutral particle decay. The non-planarity of parent and daughter particles rules out the two-body decay and thus both the \(K^0_s\) and the Λ hypotheses for the neutral particle. A kinematical analysis confirmed the interpretation of the event as associated charm production induced by a muon neutrino in a charged current interaction [78]. An unprecedented statistics of about 2000 fully reconstructed neutrino-induced charmed hadron event vertices was collected. With this statistics, CHORUS measured the \(\Lambda_c\) and \(D^0\) exclusive production cross-sections [79] and the double-charm production cross-section in both neutral and charged current interactions [80]. The CHORUS emulsion data also provided an upper limit to the production of charmed pentaquark states [81].

Fig. 9.7
figure 7

Schematic drawing of the first neutrino-induced associated charm production event observed in the emulsions of the CHORUS experiment (see the text for explanation)

Higher-sensitivity follow-ups of the CHORUS experiment were proposed, with the purpose of increasing by more than one order of magnitude its sensitivity in the measurement of the oscillations (smaller mixing angle). We mention in particular the COSMOS proposal at Fermilab [82]. The use of emulsions as large-surface trackers for the high-resolution measurement of hadron and muon momenta was proposed in [83] and then applied in the proposal of the TOSCA experiment at CERN [84]. Eventually, none of those experiments was realized, mainly due to the first strong indications for \(\nu_\mu \leftrightarrow \nu_\tau\) oscillations detected with atmospheric neutrinos, in disappearance mode, in a complementary region of the oscillation parameters.

The DONUT experiment at Fermilab aimed at the first direct detection of \(\nu_\tau\)'s, in this case promptly produced in an 800 GeV proton beam dump and not coming from a possible oscillation mechanism as in CHORUS. The experimental apparatus and the detection techniques used in the experiment are described in [68, 85]. The DONUT Collaboration employed an iron/emulsion ECC target able to offer a sufficiently large mass for neutrino interactions and to provide the detection of the interaction vertex, as well as a clear observation of the short track of the τ lepton (up to a few mm) produced in the \(\nu_\tau\) charged current interaction. The ECC was complemented by high-precision fiber trackers to drive the scan-back in the emulsions.

The emulsion target eventually integrated a relatively high muon background. In a first analysis, 203 neutrino interactions were located in the ECC target, observing 4 \(\nu_\tau\) candidate events with an estimated background of 0.34 events [86]. This represents the first direct detection of the \(\nu_\tau\). Figure 9.8 shows a display of two candidate events. In the final analysis, 9 \(\nu_\tau\) charged-current (CC) events were detected, with an estimated background of 1.5 events, from a total of 578 observed neutrino interactions; they were used to estimate the \(\nu_\tau\) CC cross section for the first time [87]. The main source of error in measuring the \(\nu_\tau\) cross section was due to the systematic uncertainties, whereas 33% of the relative uncertainty was due to the limited number of detected \(\nu_\tau\) events. Owing to the lack of accurate measurements of the \(D_s\) differential production cross section, DONUT expressed its \(\nu_\tau\) cross-section measurement as a function of the parameter n governing the \(D_s\) differential production cross section, as \(\sigma^{\mathrm{const}}_{\nu_\tau} = 2.51\,n^{1.52} \times 10^{-40}\) cm² GeV⁻¹. The cross section was estimated to be \(\sigma^{\mathrm{const}}_{\nu_\tau} = (0.39 \pm 0.13\,(\mathrm{stat.}) \pm 0.13\,(\mathrm{syst.})) \times 10^{-38}\) cm² GeV⁻¹, when assuming the value of the parameter n derived from PYTHIA 6.1 simulations.
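The two numbers quoted above are mutually consistent: inserting a value of n around 6, of the order of what leading-order Monte Carlo generators give for \(D_s\) production (an assumption here, not a number taken from [87]), into the parametrized expression reproduces the quoted central value.

```python
n = 6.1                          # illustrative value of the D_s production parameter
sigma = 2.51 * n**1.52 * 1e-40   # cm^2 GeV^-1
print(f"{sigma:.2e}")            # ~3.9e-39, i.e. ~0.39e-38 cm^2 GeV^-1
```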

Fig. 9.8
figure 8

Schematic drawing of two \(\nu_\tau\)-induced events measured by the DONUT experiment. The kinks corresponding to the τ decay are visible

9.5 Present Emulsion Detectors

9.5.1 Fast Scanning Systems and Large-Scale Film Production

As stated above, the advances in the scanning systems aimed at higher efficiency and speed have led in recent times to the rebirth of emulsion detectors. A further generation of the Track Selector, called S-UTS (Super-Ultra Track Selector), was developed in Nagoya [88]. It is based on highly customized components. The main feature of this approach is the removal of the stop-and-go process of the stage during image data taking, which is the mechanical bottleneck of traditional systems. To avoid stopping, the objective lens moves at the same constant speed as the stage while also moving along the vertical axis and grabbing images with a very fast CCD camera running at 3000 Hz. The optical system is driven by a piezoelectric device. The camera has a sensor with 512×512 pixels, which imposes a smaller field of view (∼120×120 μm²) to ensure a comparable position resolution (about 0.3 μm/pixel). The high-speed camera provides a data rate of 1.3 GB/s. This is handled by a front-end image processor that performs the zero-suppression and the pixel packing, reducing the rate to 150–300 MB/s. A dedicated processing board performs track recognition, builds micro-tracks and stores them in a temporary device at a rate of 2–10 MB/s. A computer algorithm links the micro-tracks of different emulsion layers and writes the resulting tracks in a database that is used as input for physics analysis. The routine scanning speed is 20 cm²/h/layer, while one of the S-UTS systems has reached the speed of 72 cm²/h/layer by using a larger field of view, without deteriorating the intrinsic micrometric accuracy of the emulsion films.
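A back-of-the-envelope estimate shows that these parameters are consistent with the quoted scanning speeds. Assuming 16 tomographic frames per field of view and per emulsion layer (the number used in the track-building method described below), the purely optical/readout limit is about

```python
fov_cm2 = 120e-4 * 120e-4      # ~120 x 120 um^2 field of view
frame_rate = 3000.0            # camera frames per second
frames_per_field = 16          # assumed tomographic images per layer

fields_per_hour = frame_rate / frames_per_field * 3600.0
print(fields_per_hour * fov_cm2)   # ~97 cm^2/h/layer
```

so the routine 20 cm²/h/layer reflects stage-motion and data-processing overheads on top of this purely optical limit, while the 72 cm²/h/layer figure is reached by enlarging the area covered per frame.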

In the framework of the OPERA experiment (see next section), a joint effort of several European laboratories allowed the development of an automated scanning system (ESS) that employs commercial subsystems in a software-based framework. The ESS, derived from a system developed in Salerno [67], is extensively described elsewhere [89,90,91]. The microscope is a Cartesian robot, holding the emulsion film on a horizontal stage movable in the X–Y coordinates, with a CMOS camera mounted on the optical axis (Z), along which it can be moved to change the focal plane with a step roughly equal to the focal depth of about 3 μm. The control workstation hosts a motion control unit that directs the stage to span the area to be scanned and drives the camera along the Z axis to produce optical tomographic image sequences (with the X–Y stage holding steady). Areas larger than a single field of view (∼300 × 400 μm²) are scanned by repeating the data acquisition sequence on a grid of adjacent fields of view. The stage is moved to the desired position and the images are grabbed after it stops, with a stop-and-go algorithm. The images are grabbed by a megapixel camera at the speed of 376 frames per second while the camera is moving in the Z direction. The whole system can work at a sustained speed of 20 cm²/h/layer, 24 h/day, with an average data rate as large as 4 GB/day/microscope, still preserving the intrinsic emulsion accuracy. A different setup of this system makes no use of immersion oil as interface between the objective lens and the film being scanned [92].

The track building method applied in both systems is schematically drawn in Fig. 9.9. The whole emulsion thickness is spanned by adjusting the focal plane of the objective lens, and a sequence of 16 tomographic images is taken for each field of view at equally spaced depth levels, matching the focal depth of the objective. Emulsion images are then digitized, converted into a grey scale of 256 levels, sent to a vision processor board and analyzed to recognize sequences of aligned grains, i.e. clusters of dark pixels of given shape and size. Some of these spots are track grains; others, in fact the majority, are fog grains not associated with particle tracks. The three-dimensional structure of a track in an emulsion layer (microtrack) is reconstructed by combining clusters belonging to images at different levels and searching for geometrical alignments (Fig. 9.9a). Each microtrack pair is finally connected across the plastic base to form the so-called base track (Fig. 9.9b).
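The final step of Fig. 9.9b can be sketched as follows; the field names and the angular tolerance are illustrative, the essential point being that the base-track slope is computed from the two grain positions closest to the base, which are the least affected by distortion.

```python
from dataclasses import dataclass

@dataclass
class MicroTrack:
    x: float                  # position of the grain closest to the base (micron)
    y: float
    z: float
    tx: float                 # local slopes fitted within the emulsion layer
    ty: float

def make_base_track(top: MicroTrack, bottom: MicroTrack, ang_tol=0.05):
    """Connect the microtracks of the two emulsion layers across the base.
    Returns (x, y, tx, ty) of the base track, or None if the local slopes
    are inconsistent with a single straight track crossing the film."""
    dz = top.z - bottom.z                 # roughly the base thickness
    tx = (top.x - bottom.x) / dz
    ty = (top.y - bottom.y) / dz
    if abs(tx - top.tx) > ang_tol or abs(tx - bottom.tx) > ang_tol:
        return None
    if abs(ty - top.ty) > ang_tol or abs(ty - bottom.ty) > ang_tol:
        return None
    return (bottom.x, bottom.y, tx, ty)
```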

Fig. 9.9
figure 9

(a) Microtrack reconstruction in one emulsion layer by combining clusters belonging to images at different levels; (b) microtrack connections across the plastic base to form base tracks

Figure 9.10 shows an S-UTS system and the scanning station in Bern employing the ESS system with dry objectives and with an automated emulsion film changer. The latter device allows fully unattended operation [93].

Fig. 9.10
figure 10

Left: photograph of one of the Nagoya S-UTS scanning systems; right: the Bern scanning station equipped with five ESS microscopes with the associated automated film changers

A second feature that significantly contributed to the rebirth of emulsion detectors in recent times has been the realization of industrial emulsion films, optimized for micro-tracking applications. This is in particular the case of the FUJI R&D work conducted in collaboration with the Nagoya University [2] for the OPERA experiment, which will be described later. Uniform automated machine coating of 44 μm emulsion layers on either side of a plastic base was achieved for the unprecedented large-scale application of the OPERA ECC modules. The quality and the uniformity of the films are remarkable.

For machine coating in an industrial plant, dilution of the gel is required in order to reduce the viscosity. This implies a reduction of the grain density. In order to recover from this degradation, improvements in the gel sensitivity were applied, such as a controlled double-jet method for the production of mono-dispersed AgBr micro-crystals. The crystal size is well controlled by this method. The number of crystals along a particle trajectory is increased, while the volume occupancy of AgBr and the average diameter of the crystals are kept constant.

In order to match the experimental requirement of a relatively thick layer with the limitations coming from the industrial process, a multi-coating method was adopted by FUJI. After the first layer (20 μm thick) is coated on both sides of a rolled plastic base, a second layer is coated over the first one. A thin 2 μm gelatine spacer protects the emulsion layers. The resulting thickness is 44 μm, sufficient for automated track recognition. A glycerine bath is used to restore the thickness of the emulsion layers to its original value, thus compensating for the shrinkage induced by the development process.

Another notable development related to the OPERA experiment has been the realization of the so-called emulsion refreshing. High temperature and high relative humidity enhance the latent image fading. This possibility is particularly useful when the exposure occurs much later than the film production and a low background is required, as in the case of OPERA. A good tuning of the fading features was achieved by introducing 5-methylbenzotriazole into the emulsion gel [2]. Absorption of this chemical by the silver specks induced by radiation lowers the oxidation-reduction potential and makes the specks easy to oxidize. On the other hand, the sensitisation centres (sulphur and gold) remain stable against oxidation. Therefore, the recorded tracks are erased while the sensitivity remains sufficiently high. For example, by keeping the films for 3 days at 98% relative humidity and 27 °C, the grain density of tracks accumulated before the refreshing goes from 30 to less than 10 grains/100 μm, thus erasing about 96% of the stored tracks, including those from Compton electrons and cosmic-rays. The industrially produced films also feature a rather low track distortion induced by the development, as well as a limited level of fog density, with an initial value of 2.9 fog grains/1000 μm³.

9.5.2 The OPERA Experiment

The OPERA experiment was designed to unambiguously prove \(\nu_\mu \rightarrow \nu_\tau\) oscillations in appearance mode. Indeed, studies of atmospheric neutrinos had shown the disappearance of muon neutrinos [94], later confirmed by accelerator experiments [95] and interpreted in terms of \(\nu_\mu \rightarrow \nu_\tau\) oscillations. Therefore, the appearance of tau neutrinos in a pure muon neutrino beam was the missing tile in the coherent scenario of neutrino mixing.

The conceptual design of the experiment was originally proposed in [96,97,98] and the detector is extensively described in [99, 100]. The distinctive feature of \(\nu_\tau\) charged-current interactions is the production of a short-lived τ lepton (\(c\tau\) = 87 μm). Thus, one has to accomplish the very difficult task of detecting sub-millimeter τ decay topologies out of a huge background of \(\nu_\mu\) reactions in a target of more than a kiloton, as required to have a sufficient interaction rate. This is achieved in OPERA by employing a modern version of the ECC technology.

The OPERA experiment ran from 2008 to 2012 at the underground LNGS laboratory in Italy, 730 km away from CERN where the CNGS neutrino beam was produced. OPERA is the first very large scale emulsion experiment, profiting from all the technological advances in the emulsion technology and in the scanning systems described in the previous section. To give a figure, the ECC target is made of emulsion films with a total surface of 110,000 m² and of 105,000 m² of lead plates. The industrially produced, machine-coated emulsion films by FUJI provided a very uniform layer thickness and the possibility of erasing unwanted background tracks by the refreshing technique. The scanning of the events was performed with about 40 fully automated microscopes, each of them faster by about two orders of magnitude than those used in the CHORUS experiment [75].

The ECC target consisted of multi-layer arrays of target walls interleaved with pairs of planes of plastic scintillator strips. A target wall (with about 10×10 m2 cross-section) was an assembly of horizontal trays, each loaded with ECC target units called bricks. A brick consisted of 57 emulsion films interleaved with 56 lead plates, 1 mm thick, packed light-tight. Brick dimensions were 128 × 102 × 79 mm3 for a weight of 8.3 kg (Fig. 9.11). Interface Changeable Sheets (CS) were attached to the downstream face of each brick. The CS geometry was chosen such that two adjacent emulsion films form a doublet, coupled as an independent, detachable package to the downstream face of the brick (Fig. 9.11). The use of doublets allowed the rejection of random coincidences of tracks accumulated during storage and transportation and not erased by the refreshing procedure.

Fig. 9.11
figure 11

Schematic view of the ECC unit (brick) used in the OPERA experiment. A detail of the Changeable Sheet doublet is also shown

There were 150,000 bricks in total for a target mass of 1.25 kton. This represents the largest ever ECC detector assembly and posed an unprecedented challenge for the production of emulsion films and bricks, as well as for the emulsion handling, development and analysis, i.e. scanning power. Just to give some numbers, more than nine million emulsion films were produced and the corresponding 150,000 bricks were built by a fully robotised chain assembling films and lead plates in an underground dark-room at LNGS. Large infrastructures were also realized at LNGS for brick manipulation (automatic extraction from the target matrix), X-ray marking, cosmic-ray exposure and emulsion development [100].

The principle of the experiment can be summarized as follows. At the occurrence of a neutrino interaction, the resulting charged particle tracks are detected by the scintillator counter planes placed behind each brick target wall, similarly to what happens in a sampling calorimeter. The reconstruction of the “shower axis” or the identification of a penetrating track (e.g. a muon) allows identifying the brick where the neutrino likely interacted. At this point, the brick is extracted from the wall, the attached CS doublet is removed and developed, while the brick, still packed, is placed in an underground storage area waiting for the response of the CS scanning.

It is important to stress the key roles accomplished by the CS in OPERA [101]: the first step is to confirm that the ECC brick contains the neutrino interaction; the second step is to provide event-related tracks to be used for the ECC scan-back analysis. By using Compton electrons from environmental radioactivity, the systematic uncertainties in the relative alignment between the two CS doublet films are reduced, thus bringing the position accuracy to the level of 1 μm [102]. Such an accuracy allows using CS tracks made of only 3 out of the possible 4 track segments, thus increasing the track finding efficiency. Thanks to the CS, the bricks wrongly identified by the scintillator trackers are not disassembled but put back in the target with a fresh CS attached to them. This avoids useless film handling, processing and scanning of the misidentified bricks, and minimizes the corresponding waste of target mass. Moreover, whenever the electronic detector reconstruction is compatible with two or more “candidate” bricks, these are ordered by probability and their CS are scrutinized accordingly. This significantly increases the event finding efficiency.

If one or more “event related” tracks are found, the selected brick is exposed to cosmic-rays for about 12 h, thus providing a set of tracks used for the precise correction of local deformations, as required for precision topological and kinematical measurements. The brick is then disassembled and its films are developed. The tracks measured in the CS analysis provide predictions for the so-called scan-back procedure. The latter consists of following a predicted track upstream in the ECC brick until it “disappears”. This procedure is initiated in the most downstream film of the brick.

The disappearance of a scan-back track indicates a possible neutrino interaction vertex. A wide-area scan is performed over a volume of about 1 cm3 around the track stopping point, looking for partner tracks and/or secondary decays with a dedicated decay search procedure [103]. This procedure, developed for the tau neutrino search, was successfully applied to the search for charmed hadron production induced by neutrinos. The latter process was indeed used as a control sample to check the efficiency for the detection of the tau lepton, given the similar lifetime of charmed hadrons (about 10^−12 s). The application of this procedure to muon neutrino interactions led to the observation of 50 decay candidates [103], in good agreement with the expected charmed hadron yield (54±4), derived from the value measured by the CHORUS experiment [104]. Good agreement was also found in the shape of the relevant kinematical and topological variables, such as the angle in the transverse plane between the charmed hadron and the muon and the impact parameter of the decay daughter particles with respect to the primary neutrino interaction vertex [103].
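As an illustration of the geometry entering the decay search, the following minimal sketch (not the actual OPERA code) computes the impact parameter of a daughter track with respect to the primary vertex, given a point on the track and its direction; all numbers are purely illustrative.

    import numpy as np

    def impact_parameter(vertex, point, direction):
        """Minimum distance between the primary vertex and the straight line
        defined by a track segment (a point on the track and its direction)."""
        v, p, d = (np.asarray(a, float) for a in (vertex, point, direction))
        d = d / np.linalg.norm(d)
        r = v - p
        return np.linalg.norm(r - np.dot(r, d) * d)

    # Illustrative numbers (micrometres): a daughter track from a decay ~1.3 mm
    # downstream of the primary vertex, missing the vertex by about 50 um.
    print(impact_parameter([0, 0, 0], [55, 0, 1300], [0.002, 0, 1.0]))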

Unlike the experiments using “bulk” emulsions like CHORUS where the visual inspection of the primary and decay vertices allows rejecting most of the residual background, the ECC structure prevents the direct check of the vertices for the majority of the events. However, one can still exploit precise kinematical measurements for background suppression. For interesting event topologies, in fact, a detailed kinematical analysis is performed in OPERA by means of the electromagnetic shower energy measurement in the downstream part of the brick, the determination of the momentum by Multiple Coulomb Scattering measurement in the lead/emulsion structure [105], and the connection of tracks in consecutive target walls.
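The momentum determination by Multiple Coulomb Scattering relies, at its core, on the Highland parametrisation of the scattering angle; the sketch below simply inverts that relation for a single lead plate, with purely illustrative numbers and without the track-fitting machinery actually used in [105].

    from math import sqrt, log

    def momentum_from_mcs(theta_rms_mrad, x_over_X0, beta=1.0):
        """Invert the Highland formula theta_0 = (13.6 MeV / (beta c p)) sqrt(x/X0)
        [1 + 0.038 ln(x/X0)]; returns the momentum estimate in GeV/c."""
        theta0 = theta_rms_mrad * 1e-3
        p_MeV = 13.6 / (beta * theta0) * sqrt(x_over_X0) * (1 + 0.038 * log(x_over_X0))
        return p_MeV / 1000.0

    # An RMS scattering angle of ~3 mrad across one 1 mm lead plate (X0 ~ 5.6 mm)
    # corresponds to a momentum of roughly 1.8 GeV/c.
    print(momentum_from_mcs(theta_rms_mrad=3.0, x_over_X0=1.0 / 5.6))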

During the five CNGS production runs from 2008 to 2012, OPERA collected about 1.8×10^20 protons on target and more than 19,000 neutrino interactions. The first tau neutrino candidate was reported in 2010 [106] and the display of its event reconstruction is shown in Fig. 9.12. The primary neutrino vertex consists of 7 tracks, one of which shows a kink decay topology. None of the primary particles is consistent with either a muon or an electron. Two electromagnetic showers induced by γ conversions are visible in Fig. 9.12, indicated as γ 1 and γ 2. These two γs originate from the secondary vertex and their invariant mass is consistent with that of a π 0. From the kinematical analysis performed, the observed decay is consistent with the τ → ρν τ channel (B.R. ≃ 25%), followed by ρ → π 0π.

Fig. 9.12
figure 12

Display of the first ν τ candidate. Top left: view transverse to the neutrino direction. Top right: same view zoomed on the primary and secondary vertices. Bottom: longitudinal view. Track 4 exhibits a kink topology with an angle of (41 ± 2) mrad after a path length of (1335 ± 35) μm and produces track 8 and the two γ’s. Track 2 is identified as a proton and the other charged particles are all consistent with being hadrons [106]

The second [107] and third [108] tau neutrino candidates were reported in 2013, in the τ → πππν τ and \(\tau \rightarrow \mu \bar {\nu }_\mu \nu _{\tau }\) decay channels, respectively. The fourth candidate was reported in 2014 [109], while the discovery of ν τ appearance was achieved in 2015 with the observation of a fifth tau neutrino candidate over an expected background of 0.25 events [110]. The OPERA discovery of tau neutrino appearance was explicitly mentioned in the Scientific Background of the 2015 Nobel Prize in Physics.

The emulsion handling was completed in 2015, while the emulsion film scanning was completed in 2016, when the detector was decommissioned. The final numbers of events passing the full analysis chain up to the decay search are shown in Table 9.1. Events are divided into two categories according to the presence (1μ) or absence (0μ) of a muon in the final state and undergo different selections: a momentum cut of 15 GeV/c is applied to the muons in order to reduce the background.

Table 9.1 Overall number of located neutrino interactions with the decay search procedure applied

Given the data-driven validation of the simulation in all corners of the parameter space [103, 111, 112], the OPERA Collaboration decided to relax the cuts and exploit the kinematical features of the events with a likelihood approach: this enlarges the selected sample, thus reducing the statistical uncertainty on the estimate of the oscillation parameters. Ten tau neutrino candidates were found with the new analysis strategy in the final sample. The distribution of the visible energy of the 10 candidates is shown in Fig. 9.13, together with the expected spectrum.

Fig. 9.13
figure 13

Visible energy distribution of the 10 tau neutrino candidates found in the final sample [114]

The number of expected tau neutrino events with looser cuts applied is reported in Table 9.2, together with the number of observed ν τ candidates in each tau decay channel. The reported values assume \(\Delta m^{2}_{23} = 2.50 \times 10^{-3}\) eV2 [113] and \(\sin ^{2}2\theta _{23}\) = 1. The discovery of tau neutrino appearance is confirmed with a significance of 6.1σ, evaluated by accounting for the features of the events with a likelihood analysis. The increased statistical sample was used to provide the first measurement of \(\Delta m^2_{23}\) in appearance mode with an improved accuracy, giving \(\Delta m^2_{23} = (2.7^{+0.7}_{-0.6}) \times 10^{-3}\) eV2 [114].

Table 9.2 Expected signal and background events for the analysed data sample

OPERA has demonstrated the capability of identifying all three neutrino flavours. Emulsion cloud chambers can clearly distinguish between electrons and γs, given their micrometric accuracy, which resolves the displacement between the γ production and conversion points. Unlike in other detectors, this feature makes the e/π 0 separation particularly efficient and the selection pure: this translates into a very good separation between ν e charged-current interactions and ν μ neutral-current ones with a π 0 in the final state. OPERA has also searched for the sub-dominant ν μ → ν e oscillations, in particular to constrain the existence of sterile neutrinos. In the analysis of the 2008 and 2009 run data, 19 electron neutrino candidates were found and the results are summarised in [115]. The analysis of the final sample yielded 35 ν e candidates and the resulting constraints on sterile neutrinos are reported in [116]. Constraints on sterile neutrinos were also set with the analysis of ν μ → ν τ oscillations [117].

9.6 Future Experiments and Applications

More than 100 years after their first use, nuclear emulsions remain attractive in a wide range of scientific fields and applications. As was the case for past developments, the future of nuclear emulsions will again rely on the parallel progress of high-performance readout systems and of innovative detector design. We discuss here the cutting-edge technology and review ongoing and emerging applications.

9.6.1 The State-of-the-Art Emulsion Technology

9.6.1.1 High-Performance Scanning Systems

Improvements of scanning systems in speed and quality are continuously progressing. One of the recent breakthroughs was the advent of GPGPUs (General-Purpose Graphics Processing Units, or simply GPUs). Up to the systems developed for OPERA, either FPGAs or CPUs were employed for image processing and track reconstruction. FPGAs provide large computing power but lack flexibility and make the implementation of sophisticated algorithms difficult. CPUs can run complicated algorithms but are limited in computing capability. GPUs, on the other hand, provide both computing power and flexibility.

The effort to implement GPUs in scanning systems started soon after the release of CUDA [118], and GPUs have quickly become the “standard” in scanning system development. The early works aimed at improving the angular acceptance of track reconstruction, which was limited by the lack of computing power for online processing. The previously mentioned S-UTS, the scanning system for OPERA, could recognize tracks with angles within 30° of the normal to the film surface. This angular acceptance is equivalent to 14% of the entire solid angle. An extension of the S-UTS algorithm was translated into GPU code, which reconstructed tracks up to 72° (68%) within a reasonable processing time [119]. In parallel, new algorithms suitable for parallel processing were developed to extend the track reconstruction to almost the entire 4π solid angle [120, 121], which finally allowed the 3D tracking capability of nuclear emulsions to be fully exploited. Examples of applications of such systems will be discussed further below.
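The quoted acceptance fractions follow, to a good approximation, from the solid angle of a cone around the film normal (counting both film sides); the short estimate below is only a cross-check of those figures, not part of any scanning code.

    import numpy as np

    def solid_angle_fraction(theta_max_deg):
        """Fraction of 4*pi covered by directions within theta_max of the film
        normal, counting both hemispheres: 2 * 2*pi*(1 - cos(theta)) / (4*pi)."""
        return 1.0 - np.cos(np.radians(theta_max_deg))

    print(solid_angle_fraction(30))   # ~0.13, close to the quoted 14%
    print(solid_angle_fraction(72))   # ~0.69, close to the quoted 68%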

In the data acquisition, there are two complementary ways to speed up the readout of emulsion data: maximising the field of view or minimising the dead time due to the microscope stage movement. An extreme case of the first approach was implemented in the HTS system (Hyper Track Selector, [122]) shown in Fig. 9.14, which is the fastest readout system at present. Conventional systems used a field of view (FOV) of 0.12 mm × 0.12 mm (S-UTS) or 0.3 mm × 0.4 mm (ESS). HTS makes use of a custom-made objective lens with a large FOV of 5.1 mm × 5.1 mm and a magnification of 12.1. The optical path is divided into six branches and the image is correspondingly projected onto six “mosaic camera modules”, as also schematically drawn in Fig. 9.14. Each mosaic camera module consists of 12 2.2-Mpixel image sensors. In total, 72 image sensors work in parallel to build the large FOV. The raw image data throughput from the 72 image sensors amounts to 48 GBytes/s, which is processed in real time by 36 tracking computers with two GPUs each. The scanning speed has reached 4700 cm^2/h, a major leap with respect to the previous generations, as shown in Fig. 9.15.
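A back-of-envelope check of the quoted HTS figures follows, under the (hypothetical) assumption of one byte per pixel in the raw image stream; the numbers are only meant to convey the orders of magnitude.

    fov_area_cm2   = 0.51 * 0.51      # 5.1 mm x 5.1 mm field of view
    scan_speed     = 4700.0           # cm^2 / h (quoted)
    n_sensors      = 72
    pixels_each    = 2.2e6
    raw_throughput = 48e9             # bytes / s (quoted)

    fov_per_s  = scan_speed / 3600.0 / fov_area_cm2          # ~5 fields of view per second
    frame_rate = raw_throughput / (n_sensors * pixels_each)  # ~300 fps per sensor (1 B/pixel assumed)
    print(fov_per_s, frame_rate)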

Fig. 9.14
figure 14

Left: the fast emulsion readout system, Hyper Track Selector (HTS) [122]. Right: the optics and camera system for HTS. The optical path is divided into six mosaic camera modules. Each camera module consists of 12 2.2-Mpixel image sensors. In total 72 image sensors work in parallel to realize a large FOV of 5.1 mm × 5.1 mm with sub-micrometric resolution

Fig. 9.15
figure 15

Time evolution of the scanning speed of the Track Selector system. The scanning speed progress in log scale

Another approach is to remove the dead time due to the microscope stage movement. In conventional systems, the data-taking sequence is the so-called “stop-and-go”, where the need to damp stage vibrations limits the repetition rate to about 6 Hz. In order to minimise this effect, it was proposed to take the tomographic image data without stopping the stage. In fact, S-UTS was the first system to implement such a continuous motion, as mentioned above. However, its camera resolution was relatively small (512 × 512 pixels) compared to today’s market standard. The New Generation Scanning System (NGSS) was developed with a larger camera resolution (2336 × 1728 pixels) and with a different style of continuous motion that allows it to run on the standard motion hardware of the ESS. The image-taking sequence is shown schematically in Fig. 9.16. With a 12 Hz data-taking rate, the scanning speed reached 190 cm^2/h [123].

Fig. 9.16
figure 16

Schematic drawing of the Stop and Go (SG) motion and Continuous Motion (CM) of NGSS [123]

The advances in scanning speed allow physicists to design experiments with detector areas of 1000 m^2 to be analysed in a year, to be compared with the total scanned area of OPERA, about 500 m^2 in 5 years. The emulsion readout environment keeps changing as the underlying technologies evolve. A new scanning system design, the so-called HTS2, is going to combine the large field of view of HTS with the continuous motion [122] and might reach a scanning speed of 5 m^2/h in the early 2020s. At that point, the scanning speed would no longer be a bottleneck for any experiment, and new challenging experiments might be proposed on the basis of such a high-speed readout framework.

9.6.1.2 Fine-Grained Emulsion Production

Owing to its unrivalled position and angular resolution, the emulsion technique is being adopted in different applications in the fields of fundamental physics and applied science. The OPERA film [2], which was industrially mass-produced, has been used for some of these applications, although its properties were tuned for the OPERA experiment. Following the increased interest in using emulsion detectors in a broad range of applications, R&D on the emulsion gel has become essential to optimise the detector for each application. However, conducting R&D for each small-scale experiment is difficult for industrial companies. This motivated the Nagoya University group to set up its own emulsion gel production facility in 2010. With the help of experts from FUJI Film Co. Japan, custom-made emulsion gels were successfully produced with an improved sensitivity to minimum ionizing particles with respect to OPERA films [3]. Moreover, R&D programs were conducted to control the silver halide crystal size, which determines spatial resolution and sensitivity. Fine-grained emulsions were produced with a crystal size of a few tens of nanometres, approximately one order of magnitude smaller than the conventional one (Fig. 9.17). They are called Nano Imaging Trackers (NIT) [124, 125]. The average size of NIT crystals was measured to be 44.2±0.2 nm, with a standard deviation of 6.8 nm. NITs are not sensitive to minimum ionizing particles but are sufficiently sensitive to low-velocity heavy ions. They are therefore considered a possible detector for the nuclear recoils induced by dark matter.

Fig. 9.17
figure 17

Left: Silver halide crystals in the fine-grained emulsion [124, 125], as seen with a transmission electron microscope. Photolytic silver grains are also visible on the surfaces of silver halide crystals. Right: Tracks of Kr ions in such an emulsion, as seen with a scanning electron microscope

9.6.1.3 Large Grain Emulsion Production

For certain applications such as muon radiography, large-scale detectors are required. An improvement in the readout speed is therefore crucial to make future large-scale applications possible, and the availability of a new type of emulsion featuring crystals of larger sizes is one way to pursue this goal. This would allow a lower magnification for the microscopes and, consequently, a larger field of view resulting in a faster data analysis. The size of the crystals used for the neutrino oscillation experiments mentioned above was 200 nm and has never been larger than 300 nm in previous experiments. The production of new types of emulsions with crystal sizes of 600–1000 nm, 3–5 times larger than those of standard films, has been studied and realised using the gel production machine at Nagoya University. The first results characterising newly produced emulsions have been reported [126], showing a sufficient sensitivity and a good signal to noise ratio (Fig. 9.18). This development will allow a 25 times faster readout speed by using lower magnification objective lenses. These new detectors will pave the way to future large-scale applications of the technology, e.g. 3D imaging using muon radiography or future neutrino experiments.
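The expected factor of 25 follows from a simple scaling argument: if the crystals, and hence the resolvable feature size, are 5 times larger, the optical magnification can be reduced by the same factor, and the area covered by one field of view, hence the area scanned per unit time, grows with its square. A minimal sketch of this reasoning, not tied to any actual scanning code:

    # Scaling argument behind the expected readout speed-up (illustrative only).
    crystal_ratio = 5                    # 1000 nm crystals vs the conventional 200 nm
    fov_linear_gain = crystal_ratio      # lower magnification -> proportionally wider field of view
    speedup = fov_linear_gain ** 2       # scanned area per unit time scales with the FOV area
    print(speedup)                       # 25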

Fig. 9.18
figure 18

Electron microscope pictures of silver halide crystals (left) and electron tracks (right) in a conventional film and in the newly developed samples [126]

In close connection with the production of large crystals, there has also been a study to produce crystals slightly larger (350–400 nm) than 200 nm and to check the dependence of the crystal sensitivity on size. This study was motivated by the interest in understanding the phenomenology of latent image formation, which predicts that the quantum sensitivity can be better at such crystal sizes. Further studies are in progress with the aim of developing emulsions with higher sensitivity. The conditions of chemical sensitisation and development were optimised for each crystal size in the range of 200–800 nm. The increase of the crystal sensitivity with crystal size was confirmed for crystals of 350–800 nm [127]. These R&D activities form the basis for a broad range of future applications.

9.6.2 Projects in Fundamental Physics

9.6.2.1 Balloon Experiments

Balloon experiments employing emulsion detectors were reported in [128]. The use of the emulsion technique for cosmic-ray physics experiments has regained interest after the significant technological advances of the last decades. In 2004, a balloon experiment using emulsions was performed to observe primary cosmic-ray electrons [129]. Various innovations such as the industrial emulsion films, the refreshing technique, the automated emulsion read-out systems and the off-line analysis methods were introduced. In addition, a dedicated device was developed to distinguish between particles passing through the chamber at float altitude and those recorded during other periods. The device intentionally shifts the upper block of the chamber with respect to the lower block when the balloon reaches float altitude, and again when the flight at float altitude is terminated. The working principle of the technique was successfully demonstrated.

Based on these techniques, a balloon-borne emulsion γ-ray telescope was proposed [130] and the Gamma-ray Astro-Imager with Nuclear Emulsion (GRAINE) project was developed for the observation of γ-rays in the energy range of 10 MeV–100 GeV. It employs a precise, polarisation-sensitive, large-aperture emulsion telescope flown on repeated long-duration balloon flights. The electron and positron angles at the pair creation point can be measured in the emulsions, and the angular resolution for γ-rays (10 MeV–10 GeV) is about one order of magnitude better than that of the Fermi Large Area Telescope (Fermi-LAT) (Fig. 9.19). The polarisation sensitivity of an emulsion-based telescope was demonstrated using a polarised γ-ray beam at SPring-8/LEPS [131].

Fig. 9.19
figure 19

Angular resolution of the emulsion γ-ray telescope (lines show simulation results and dots with error bars show experimental results) [132]. The measurements were performed with γ-ray beams (LEPS/SPring-8, UVSOR, and New SUBARU) and using flight data [133]. Dotted lines show the angular resolution of Fermi-LAT for the front section with thin radiation foils and the back section with thick foils [134]

An emulsion multi-stage shifter was used to develop an innovative solution capable of providing the event time-stamp and hence the absolute γ-ray direction [135]. The relative alignment between the automatically sliding emulsion films, each moving at a known, different speed, provides the required time association of the event. This technique allows γ-ray detection with a low energy threshold, while minimising the electric power consumption and limiting the overall detector mass.
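To make the idea concrete, the sketch below decodes a crossing time from the measured offsets of a track’s segments on the shifter stages, assuming (hypothetically) that each stage advances by a fixed step at its own regular interval and resets when the next coarser stage advances, like the hands of a clock; step size and intervals are illustrative, not those of the GRAINE hardware.

    def decode_time(offsets_um, step_um, intervals_s):
        """offsets_um: measured segment displacements, one per stage (coarsest first);
        intervals_s: stepping period of each stage, e.g. (3600, 60, 1) seconds."""
        t = 0.0
        for offset, interval in zip(offsets_um, intervals_s):
            steps = round(offset / step_um)   # how many steps had occurred at crossing time
            t += steps * interval
        return t

    # Offsets corresponding to 2 hourly steps, 15 minute steps and 42 second steps
    print(decode_time([20.0, 150.0, 420.0], step_um=10.0, intervals_s=(3600, 60, 1)))  # 8142 s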

In 2011, the first balloon-borne experiment was performed with a 12.5 × 10 cm2 aperture area and a 4.6-h flight duration as a feasibility test [133]. The chamber comprised three sections. The top section was made of an ECC with emulsion films interleaved with copper foils (50 μm), meant to measure the γ-ray angle around the conversion point. The middle section included an emulsion multi-stage shifter, providing the time-stamp of the events. The bottom part contained a calorimeter, a lead/emulsion ECC, for the γ energy measurement. With these flight data, systematic detection, energy reconstruction, and time-stamping of γ-ray events were performed [133], and a sub-second time resolution of the emulsion γ-ray telescope was demonstrated [136]. The second balloon-borne experiment was performed at the Alice Springs balloon-launching station in 2015 [137]. The telescope had a 3780 cm2 aperture and took data for a total of 14.4 h. The experiment aimed at demonstrating the overall performance of the emulsion γ-ray telescope. The improvements in the emulsion characteristics and handling applied to this experiment are summarised in [138]. The project plans a third balloon-borne experiment in 2018 for celestial source detection and envisions scientific observations from 2021.

9.6.2.2 The NEWSdm Experiment

The nature of Dark Matter is one of the fundamental questions to be answered. Direct Dark Matter searches are focussed on the development, construction, and operation of detectors looking for the scattering of Weakly Interacting Massive Particles (WIMPs) off target nuclei. The measurement of the direction of WIMP-induced nuclear recoils is a challenging strategy to extend the sensitivity of dark matter searches beyond the neutrino-induced background event rate and to provide an unambiguous signature of the detection of Galactic dark matter [139]. Current directional experiments are based on gas TPCs, whose sensitivity is strongly limited by the small achievable detector mass. Nuclear Emulsions for WIMP Search with directional measurement (NEWSdm) is an innovative directional experiment proposal based on a solid target made of newly developed nuclear emulsion films and on read-out systems capable of detecting nanometric trajectories.

The approach proposed by the NEWSdm Collaboration [140] consists of using a nuclear emulsion-based detector acting both as target and as nanometric tracking device. The NEWSdm project foresees the employment of NIT. The detector is conceived as a bulk of NIT surrounded by a shield to reduce the external background. The detector is placed on an equatorial telescope in order to compensate for the Earth’s rotation, thus keeping the detector orientation fixed with respect to the incoming apparent WIMP flux. The angular distribution of the WIMP-scattered nuclei is therefore expected to be strongly anisotropic, with a peak centred in the forward direction.

NIT films have a linear crystal density of about 11 crystals/μm [124], thus making the reconstruction of trajectories with path lengths as short as 100 nm possible, provided they are analysed with microscopes of sufficient resolution. The presence in the emulsion gel of lighter nuclei such as carbon, oxygen and nitrogen, in addition to the heavier nuclei of silver and bromine, is a key feature of the NEWSdm project, resulting in a good sensitivity to WIMPs in the mass range between 10 and 100 GeV/c2.

In the NEWSdm experiment a WIMP signal consists of short-path, anisotropically distributed nuclear recoils over an isotropically distributed background. The search for signal candidates requires the scanning of the whole emulsion volume. The read-out system has therefore to fulfil two main requirements: a fast, completely automated scanning system is needed to analyse the target volume over a time scale comparable with the exposure, and the spatial resolution has to go well beyond the diffraction limit, so as to ensure high efficiency and purity in the selection of signal candidates. The analysis of NIT emulsions is performed with a two-step approach: a fast scanning with state-of-the-art resolution for the signal pre-selection, followed by a pin-point check of the pre-selected candidates with unprecedented nanometric resolution to further enhance the signal-to-noise ratio.

In the first analysis phase, a fast scanning is performed by means of an improved version of the optical microscope used for the scanning of the OPERA films [141]. An R&D program has achieved a speed of about 200 cm2/h [123, 142].

The starting point of the emulsion scanning is the image analysis collecting the clusters produced by silver grains. Given the intrinsic resolution of the optical microscope (∼200 nm), a sequence of several grains making up a track of a few hundred nanometres may appear as a single cluster. Nevertheless, a cluster made of several grains tends to have an elliptical shape, with the major axis along the direction of the trajectory, while a cluster produced by a single grain tends to have a spherical shape. A shape analysis based on an elliptical fit is therefore the first approach used to select the signal. In order to simulate the effect of a WIMP-induced nuclear recoil and to measure the efficiency and the resolution of the new optical prototype, a test beam with low-velocity ions was performed. Kr ion beams with energies of 200 and 400 keV [143] and C ion beams with energies of 60, 80 and 100 keV were used. Silver grains belonging to a track appear as a single cluster. An elliptical fit of the cluster shape allows a clear separation between fog grains and signal tracks [144].
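A minimal sketch of such a shape analysis is given below, using intensity-weighted second moments to estimate the major-to-minor axis ratio of a cluster; this only illustrates the principle, it is not the NEWSdm reconstruction code, and the toy images and any cut value on the ratio are purely illustrative.

    import numpy as np

    def ellipticity(image):
        """Major-to-minor axis ratio from intensity-weighted second moments:
        close to 1 for a single (spherical) grain, larger for elongated clusters."""
        y, x = np.indices(image.shape)
        w = image / image.sum()
        mx, my = (w * x).sum(), (w * y).sum()
        cxx = (w * (x - mx) ** 2).sum()
        cyy = (w * (y - my) ** 2).sum()
        cxy = (w * (x - mx) * (y - my)).sum()
        lo, hi = np.linalg.eigvalsh([[cxx, cxy], [cxy, cyy]])
        return np.sqrt(hi / lo)

    # Toy clusters (pixel intensities): a round blob vs a blob elongated along x
    round_blob = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float)
    long_blob  = np.array([[0, 1, 1, 1, 0], [1, 3, 4, 3, 1], [0, 1, 1, 1, 0]], float)
    print(ellipticity(round_blob), ellipticity(long_blob))   # ~1.0 vs ~1.7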

The second analysis step at the microscope makes use of the plasmon resonance effect occurring when nanometric silver grains are dispersed in a dielectric medium [145]. The polarization dependence of the resonance frequencies strongly reflects the shape anisotropy and can be used to infer the presence of non-spherical nanometric silver grains within a cluster made of several grains. NEWSdm is using this technology to retrieve track information beyond the diffraction limit. Images of the same cluster taken at different polarization angles show a displacement of the position of its barycentre. The analysis of this displacement allows distinguishing clusters made of a single grain from those made of two or more grains building up a track, as shown in Fig. 9.20: unlike the single grain reported in the top plots, the carbon ion track in the bottom plot shows a barycentre displacement of about 100 nm as the polarization angle is changed. An unprecedented nanometric accuracy has been achieved in both coordinates with this method: Fig. 9.21 reports the displacement of the barycentre of clusters made of single grains, showing an RMS smaller than 10 nm. Such an accuracy allows detecting path lengths for which the barycentre displacement induced by the polarization change is only a few tens of nanometres. The actual threshold achievable on path lengths depends on the crystal size and can in principle be reduced to a few tens of nanometres as well.
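The discriminating variable can be illustrated with a short sketch that takes the cluster barycentre measured at several polarization angles and returns the maximum displacement; the coordinate values below are invented for illustration, and the ~10 nm and ~100 nm scales simply echo the figures quoted above.

    import numpy as np

    def max_barycentre_displacement(barycentres_nm):
        """Largest distance between barycentre positions measured at different
        polarization angles (input: list of (x, y) positions in nanometres)."""
        pts = np.asarray(barycentres_nm, float)
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        return d.max()

    # Illustrative values: a single grain vs a short carbon-recoil track
    print(max_barycentre_displacement([(0, 0), (3, -2), (5, 1)]))       # a few nm
    print(max_barycentre_displacement([(0, 0), (40, 20), (95, 40)]))    # ~100 nm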

Fig. 9.20
figure 20

Displacement of the barycentre as a function of the light polarization angle. The response to a single grain (top) and to a C ion track (bottom) are compared. The ion track shows a clear displacement

Fig. 9.21
figure 21

Barycentre displacement of clusters made by single grains

The wavelength of the scattered light depends on the size of the grains off which the light is scattered. In order to exploit this effect, the latest version of the optical microscope makes use of a colour camera, thus providing sensitivity to the sense of the track: grains are expected to be larger towards the end of the track range, so that the scattered light shifts towards red. The prototype of this new system, in operation in Naples, is shown in the left plot of Fig. 9.22, while the image of an α track is reported on the right: different grains show different colours because of their different sizes, which in turn can provide sensitivity to the particle sense.

Fig. 9.22
figure 22

Left: new optical microscope equipped with colour camera in Naples. Right: the last few microns of an α track path showing grains of different colours

The NEWSdm collaboration has installed at the Gran Sasso underground laboratory a facility for emulsion handling and film production. Moreover, a dedicated structure was constructed in Hall B of the underground Gran Sasso Laboratory early in 2017 to shield a detector of 10 g mass against the environmental background sources over an exposure time of about 1 month. The experimental setup consists of a shield against environmental backgrounds, made of a few tons of polyethylene and lead, and a cooling system to keep the NIT emulsion detector at the required temperature. The aim is to measure the detectable background from environmental and intrinsic sources and to validate the estimates from simulations [146]. The confirmation of a negligible background would pave the way for the construction of a pilot experiment with an exposure of about 10 kg·year.

9.6.2.3 Development of Cold-Neutron Detector

A new detector for cold and ultra-cold neutrons has recently been developed. It employs fine-grained emulsion detectors with 35-nm-diameter crystals and nuclides with large neutron absorption cross sections, such as 6Li and 10B. One detector type is realised by doping LiNO3 into fine-grained emulsion detectors [147]. The cross-sectional view of the detector is shown in Fig. 9.23 (left). An α particle and a triton are emitted in the reaction 6Li + n → α + t + 4.78 MeV. Events of neutron absorption by 6Li were successfully observed by exposing the detector to thermal neutrons at the Kyoto University Research Reactor Institute (KURRI). The spatial resolution achieved in the measurement of the absorption point was estimated to be 0.34 μm from the average grain density of the track far from the end of its range. The detection efficiency was measured by exposing the detector to a cold neutron beam at the BL05 port of the Materials and Life Science Experimental Facility (MLF) at J-PARC [148]. The measured efficiency of (3.3 ± 0.6)×10^−4 was consistent with the expectation.
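Since the absorbed neutron carries negligible momentum, the 4.78 MeV Q-value is shared between the two fragments in inverse proportion to their masses; the short calculation below reproduces the standard energy split and is given only as a reminder of the kinematics behind the detected tracks.

    # Two-body kinematics of 6Li + n -> alpha + t (incoming momentum ~ 0)
    Q = 4.78                          # MeV
    m_alpha, m_t = 4.0026, 3.0161     # masses in atomic mass units (approximate)
    E_t     = Q * m_alpha / (m_alpha + m_t)   # ~2.73 MeV carried by the triton
    E_alpha = Q * m_t     / (m_alpha + m_t)   # ~2.05 MeV carried by the alpha
    print(E_alpha, E_t)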

Fig. 9.23
figure 23

Cross-sectional view of the detectors [149]. Detectors with LiNO3 doping (left) and the 10B4C thin layer (right)

The other detector type consists of a 50-nm-thick converter layer made of 10B4C, formed on a 0.4-mm-thick silicon substrate and coated with a 10-μm-thick fine-grained emulsion [149]. The converter layer was covered by C (50 nm) and NiC (60 nm) layers. An α particle or a 7Li nucleus is detected in the emulsion, as shown in Fig. 9.23 (right). They are produced via the reactions 10B + n → α + 7Li + 2.79 MeV (6%) or 10B + n → α + 7Li + 2.31 MeV (94%). The detector was exposed to cold and ultra-cold neutrons at J-PARC, and the events of neutron absorption by 10B were clearly observed. The position resolution of the absorption point in the 10B4C layer depends on the track angle. By limiting the track angle, the expected position resolution is ∼100 nm [150], which is 1–2 orders of magnitude better than that of the conventional detectors used for cold and ultra-cold neutrons. Further optimisation of the thickness of the converter layer and the development of automatic track reconstruction are being explored. The development of these detectors paves the way to future applications such as the precise measurement of the position distribution of quantised states of ultra-cold neutrons or neutron imaging with future neutron sources.

9.6.2.4 Study of Double-Hypernuclei

The knowledge of the Λ–Λ interaction is limited, as only one out of the nine double-hypernucleus events detected by E373 is fully analysable to extract information on the interaction (Fig. 9.24). In order to answer questions such as the nuclear mass dependence of the Λ–Λ interaction, the E07 experiment at J-PARC is being carried out, aiming at studying Λ–Λ interactions with 100 double-hypernucleus events, a statistics one order of magnitude larger than that of E373. As schematically shown on the right side of Fig. 9.24, E07 uses a 1.7 GeV/c K− beam hitting a diamond target to produce Ξ hyperons (dss), which are subsequently stopped and captured by one of the emulsion detector nuclei to produce double hypernuclei. The detector has a hybrid structure with a silicon strip detector and a spectrometer system for the K+ identification, needed to tag the Ξ hyperons. The emulsion detector will be analysed by an automated scanning system dedicated to this experiment. E07 conducted physics runs in 2016 and 2017. The emulsion readout and analysis are in progress.

Fig. 9.24
figure 24

Left: The so-called “Nagara” double hypernucleus event found in the E373 experiment at KEK. Ξ was captured at rest by a carbon nucleus in the emulsion detector and produced a double hypernucleus (\({ }^6_{\Lambda \Lambda }\mathrm {He}\)), which decayed in series, leaving a peculiar event topology in emulsion [60]. Right: A schematic of the experimental setup of the E07 experiment at J-PARC [151]

9.6.2.5 Measurements of Antimatter

Emulsion detectors have been recently considered as high-accuracy position sensitive detectors for low-energy antimatter studies. These studies include the AEgIS experiment at CERN [152, 153], with the goal of measuring the Earth’s gravitational acceleration on antihydrogen atoms to the ultimate precision of 1%. The vertical deflection of the \(\bar {H}\) atoms due to gravity will be detected by a setup comprising material gratings coupled with a position-sensitive detector. The position detector requires the best possible position resolution, which currently is provided by emulsion-based detectors.

There were technical challenges in operating emulsion detectors in vacuum and at cryogenic temperatures. In vacuum, water loss leads to cracks in the emulsion layer and to an increase in random noise due to the mechanical stress caused by the drying process. Two ways of solving these problems were established. One is to mix glycerin into the emulsion gel to replace the water with glycerin, and the other is to apply gas-barrier films to the emulsions to keep the water in the films. Both approaches have proven to work in ordinary vacuum (10^−7–10^−5 mbar). At 77 K, the performance of emulsion detectors was not well known. The sensitivity of the emulsion at 77 K was studied and found to be 43% of the value at 300 K. By optimising the track reconstruction, detecting minimum ionizing particles with such a sensitivity remains feasible, since the tracking efficiency for tracks with more than 10 grains in a 50–100 μm thick emulsion layer can be close to 100%. In 2012, an exposure of emulsion detectors to antiprotons was performed as a feasibility study at the Antiproton Decelerator (AD) [154] at CERN. Fig. 9.25 shows an annihilation vertex on the bare emulsion surface. Annihilation vertices in the metal target were also reconstructed, demonstrating a resolution of 1 μm on the vertical position. In addition, a proof-of-principle experiment with a mini moiré deflectometer was performed [156]. The periodic patterns were observed as expected in the emulsions, and the measured shift between antiprotons and light was consistent with the force from the magnetic field at the given position. These results are a crucial step towards the direct detection of the gravitational acceleration of antihydrogen. In 2014, measurements of the multiplicities of charged annihilation products on different target materials, namely copper, silver, and gold, were performed [157] at the CERN AD. Apart from the obvious applications in nuclear physics, this measurement provides a useful check of the ability of standard Monte Carlo packages to reproduce fragment multiplicities and energy distributions. The measured fragment multiplicities were not well reproduced by the different models used in the Monte Carlo simulations, with the exception of FLUKA [158, 159], which is in good agreement with the particle multiplicities for both minimum and heavily ionizing particles (Fig. 9.26).

Fig. 9.25
figure 25

An antiproton annihilation vertex in an emulsion layer [155]. The view is perpendicular to the antiproton beam direction

Fig. 9.26
figure 26

Particle multiplicity from antiproton annihilations as a function of atomic number for minimum and heavily ionizing particles [157]

Another proposal, by the QUantum interferometry and gravity with Positrons and LASers (QUPLAS) project, foresees the use of emulsions for studies with positrons [160]. The sensitivity of the emulsion detectors was studied using a mono-energetic positron beam at energies as low as 9–18 keV. The results prove that the emulsions are highly efficient at detecting positrons at these energies. This achievement paves the way to matter-wave interferometry with positrons using this technology.

9.6.2.6 Accelerator Beam Characterization: Muon Measurements at the T2K ν Beamline

The high spatial resolution of emulsion detectors turns out to be an advantage in the characterisation of high-intensity accelerator beams, in particular in fast extraction mode, where billions of particles arrive within nanoseconds: under these conditions electronic detectors cannot identify particles on an event-by-event basis, given their limited occupancy. A notable example is the muon measurement at the T2K neutrino beam from J-PARC in Japan [161]. As neutrinos and muons are both produced by meson decays (π, K → μν μ), the understanding of the muons provides valuable information about the neutrinos, such as the parent hadron production and momentum distribution. Nevertheless, low-energy electromagnetic components highly contaminate the muon flux at the muon pit downstream of the decay volume (μ ± = 53%, e ± = 7%, γ = 40%, estimated by MC). This makes it difficult to extract meaningful information from the muon beam. The muons are regularly measured by silicon photodiodes and ionization chambers at the muon pit to monitor the beam direction in each spill. These are charge-integrating detectors, not optimised to measure muon tracks. Therefore, a measurement of the muons by means of emulsion detectors was performed. The emulsion detector module was composed of 8 OPERA-type films; the dedicated track recognition procedure introduced an effective momentum cut-off of 30 MeV/c for electrons, achieving a muon purity of 99% after reconstruction. The measurement was done at an intensity of the order of 10^11 protons on target, which yielded O(10^4) muons/cm^2 in the emulsion detectors. The measured profile is shown in Fig. 9.27 together with the profile predicted by the FLUKA simulation with dedicated hadron production tuning. The absolute muon flux at a neutrino beamline was thus measured for the first time, characterising the beamline. In addition to the flux measurement, the momentum distribution of the muons has been measured with an OPERA-like ECC made of 25 films interleaved with 24 1-mm-thick lead plates; the results are to be published.

Fig. 9.27
figure 27

Left: A schematic of the T2K neutrino beamline. The muon measurement was performed at the muon monitor pit behind the decay volume. Right: Comparison of the muon flux with the prediction at the horn current of 250 kA (top) and off (bottom). Figures from [161]

Such muon measurements can be performed at future neutrino beamlines, e.g. the J-PARC neutrino beamline for the Hyper-K experiment [162] and the LBNF (Long-Baseline Neutrino Facility) for the DUNE experiment [163]. In general, nuclear emulsions provide a unique capability to study high-intensity accelerator beamlines operated in fast extraction mode. Thanks to the automated scanning systems, this field is expected to grow in the future.

9.6.2.7 The NINJA Project

The Neutrino Interaction research with Nuclear emulsion and J-PARC Accelerator (NINJA) project was initiated for the precise measurement of neutrino–nucleus interactions. The study of neutrino–nucleus interactions in the sub-GeV to multi-GeV region is important to reduce systematic uncertainties in present and future neutrino oscillation experiments. The emulsion detector can measure particles with a low energy threshold for various targets such as iron, carbon and water. It also exhibits a good electron/gamma separation capability, allowing for precise measurements of electron–neutrino interactions. Given these capabilities, the future programme also includes searches for sterile neutrinos. The detector is made of three components. The upstream part is an ECC with emulsion films interleaved with the target material, used to detect the neutrino interactions. The middle part includes an emulsion multi-stage shifter device [164], providing the timing information of the events. The downstream part is the Interactive Neutrino Grid (INGRID), one of the near detectors of the T2K experiment [165], used to identify muons.

A test experiment (J-PARC T60) was implemented as a first step of the project to check the performance of the detector. Its neutrino event analysis is based on scanning the full area of the emulsion detectors with the HTS system. A more detailed analysis of the detected events can be performed with a dedicated scanning procedure with an extended angular acceptance [119]. The feasibility of the project was studied in the first exposure of a 2-kg iron target in 2015. The full area of the emulsion films (∼1.2 m2) was scanned and a systematic analysis was performed to locate neutrino interactions. The neutrino candidate events located in the emulsions were matched to events observed by INGRID by employing the timing information from the multi-stage shifter. The hybrid analysis of ECC and INGRID was thus demonstrated [166]. An analysed event is shown in Fig. 9.28. Furthermore, additional exposures to the anti-neutrino beam were conducted, testing the first prototype of a water-target ECC and checking the detector performance with higher statistics [167]. The plan is to scale up the detector step-wise. Physics runs will be planned based on the results of these test experiments.

Fig. 9.28
figure 28

Hybrid analysis of ECC and INGRID (side view of an event) [166]

9.6.2.8 Tau-Neutrino Production Studies

At the CERN Super Proton Synchrotron (SPS), a new project called DsTau has been proposed to study tau-neutrino production [168], aiming at providing important input for future ν τ measurements with high ν τ statistics. The results of DsTau are a prerequisite for measuring the ν τ charged-current cross section, which has never been adequately measured (so far, only the DONUT measurement has been reported [87]). A precise measurement of the cross section would enable a search for new physics effects in ν τ–nucleon CC interactions. It also has practical implications for neutrino oscillation experiments such as Super-K, Hyper-K [162] and DUNE [163], which suffer from a ν τ background to their ν e measurements. As in the DONUT experiment, the dominant source of ν τ is the sequential decay of D s mesons, \(D_s^+ \rightarrow \tau ^+ \nu _\tau \rightarrow X \nu _\tau \overline {\nu }_\tau \) and \(D_s^- \rightarrow \tau ^- \overline {\nu }_\tau \rightarrow X \overline {\nu }_\tau \nu _\tau \), produced in high-energy proton interactions. The topology of such an event is shown in Fig. 9.29. Directly measuring D s → τ decays will provide an inclusive measurement of the D s production rate and of the decay branching ratio to τ. The D s momentum will be reconstructed by combining the topological variables measured in the emulsion detector.

Fig. 9.29
figure 29

Topology of D s → τ → X events (left) and simulated kink angle distribution of D s → τ (right) [168]

The project aims at detecting 10^3 D s → τ decays to study the differential production cross section of D s mesons. For this purpose, emulsion detectors with a nanometric-precision readout will be used. An emulsion detector with a crystal size of 200 nm has a position resolution of 50 nm [3], as shown in Fig. 9.2, allowing for kink detection with a threshold of 2 mrad at the 4σ confidence level. The global analysis will be based on fast scanning of the full area by the HTS system [122]. After the τ decay trigger, the events will be analysed by dedicated high-precision systems [120] using a piezo-based high-precision z-axis, allowing the emulsion hits to be measured with nanometric resolution. Each detector unit consists of a 500 μm-thick tungsten target, followed by 10 emulsion films interleaved with 200 μm-thick plastic sheets acting as decay volumes for short-lived particles as well as high-precision particle trackers. Ten such units are used to construct a module, which is followed by an ECC to measure the momenta of the daughter particles. With this module, 4.6 × 10^9 protons on target are needed to accumulate 2.3 × 10^8 proton interactions in the tungsten plates. The data generated by this project will enable the ν τ cross section measured by DONUT to be re-evaluated, which should significantly reduce the total systematic uncertainty. Once ν τ production is established, the next stage will be to increase the number of detected ν τ events. This could be achieved within the framework of the SHiP project [171] at CERN, because its beam-dump type beamline is well suited for this task. The DsTau project aims to look for new physics effects in ν τ–nucleon CC interactions with a total uncertainty of 10%. In addition to the main aim of measuring D s production, analysing 2.3×10^8 proton interactions, combined with the high yield of 10^5 charmed decays produced as by-products, will enable the extraction of additional physical quantities. Based on the results of beam tests undertaken in 2016 and 2017 for the feasibility study, a pilot run is scheduled for 2018 and physics runs are planned from 2021, after the upcoming long shutdown of the accelerator complex at CERN.
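The quoted kink threshold can be made plausible with a rough estimate, under the purely illustrative assumptions that the daughter angle is obtained from two hits with 50 nm position resolution separated by a ~140 μm lever arm and that the parent direction, constrained by several films, is taken as exactly known; none of these numbers comes from the DsTau analysis itself.

    from math import sqrt

    sigma_pos = 50e-9     # m, single-hit position resolution
    lever_arm = 140e-6    # m, assumed distance between the two hits
    sigma_kink = sqrt(2) * sigma_pos / lever_arm    # ~0.5 mrad angular resolution
    print(4 * sigma_kink * 1e3, "mrad")             # ~2 mrad at the 4 sigma level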

Another proposal, called the SHiP-charm project [169], aims at measuring the associated charm production by employing the SPS 400 GeV/c proton beam. Charmed hadrons are produced either directly in interactions of the primary protons or in subsequent interactions of the particles produced in the hadronic cascade showers. Recent detailed simulation studies of proton interactions in heavy and thick targets show a sizeable contribution of the cascade production to the charmed hadron yield [170]. This proposal includes a study of the cascade effect to be carried out using ECC techniques, i.e. slabs consisting of a replica of the SHiP experiment target [171] interleaved with emulsion films. The detector is hybrid, combining the emulsion technique with electronic detectors to provide the charge and momentum measurement of the charmed hadron decay daughters and the muon identification. This allows the full kinematical reconstruction required for the double-differential cross-section measurement. According to the simulation performed, the delivery of 2×10^7 protons on target would allow the detection of about 1000 fully reconstructed charmed hadron pairs. An optimisation run is scheduled for 2018 and the full measurement is planned after the long shutdown LS2 of the CERN accelerator complex, with 5×10^7 protons on target and a charm yield of about 2500 fully reconstructed interactions.

These two approaches, DsTau and SHiP-charm, are complementary: DsTau will detect 10^5 charmed hadron pairs with good D s selection capability, while SHiP-charm will study about 2500 fully reconstructed charmed hadron pairs including the hadronic cascade effect. The results of both will provide essential input for future ν τ measurements.

9.6.2.9 The SHiP Experiment

The discovery of the Higgs boson in 2012 has fully confirmed the Standard Model of particles and fields. Nevertheless, there are still fundamental phenomena, like the existence of dark matter, the baryon asymmetry of the Universe and the origin of neutrino masses, that could be explained by the discovery of new particles. Searches for new physics with accelerators are performed at the LHC, looking for very massive particles coupled to matter with ordinary strength. A new experiment, Search for Hidden Particles (SHiP), has been proposed [171], designed to operate at a beam dump facility to be built at CERN and to search for weakly coupled particles in the few GeV mass range. A beam dump facility using high intensity 400 GeV/c protons would be a copious source of such unknown particles in the GeV mass range. Since a high-intensity tau neutrino flux is produced by such a facility from D s decays, the experimental apparatus foresees a neutrino detector to study the tau neutrino cross-section and discover the tau anti-neutrino. This detector is also suited to detect dark matter or any weakly interacting particle through its scattering off the atoms of the apparatus target. The physics case for such an experiment is widely discussed in [172].

Figure 9.30 shows the SHiP facility to be placed in the North Area. In 5 years, the facility will integrate 2×10^20 400 GeV/c protons, produced by the SPS accelerator complex, impinging on a 12 interaction length (λ int) target made of molybdenum and tungsten, followed by a 30 λ int iron hadron absorber. Downstream of the target, the hadron absorber filters out all hadrons, so that only muons and neutrinos are left. An active muon shield [173] is designed with two sections of opposite polarity to maximise the muon flux reduction: it reduces the muon flux from ∼10^10 down to ∼10^5 muons per spill. Approximately 4×10^13 protons are extracted in each spill, designed to be 1 s long to reduce the detector occupancy. The tau neutrino detector is located downstream of the muon shield, followed by the decay vessel and the detector for hidden particles.

Fig. 9.30
figure 30

Layout of the SHiP project

The neutrino detector is made of a magnetised target region followed by a muon spectrometer. The neutrino target is based on the emulsion cloud chamber technology employed by the OPERA experiment, complemented by a compact emulsion spectrometer, made of a sequence of very low density material and emulsion films, to measure the charge and momentum of hadrons in a magnetic field. This feature would allow discriminating between tau neutrinos and anti-neutrinos also in the hadronic decay channels of the tau lepton. The emulsion target is complemented by high-resolution tracking chambers to provide the event time stamp and to connect muon tracks from the target to the muon spectrometer. The muon spectrometer is based on the concept developed for the OPERA apparatus: a dipolar iron magnet in which high-precision tracking chambers provide the momentum measurement and coarse-resolution chambers provide the tracking within the iron slabs. About 10,000 tau neutrino interactions are expected to be observed in SHiP.

The emulsion target also acts as a target for very weakly interacting particles, such as dark matter, produced at the accelerator, if their mass is in the GeV range. Unlike the non-relativistic galactic dark matter, which produces nuclear recoils in the keV energy range, dark matter produced at the accelerator is ultra-relativistic and could be observed through its scattering off the electrons of the emulsion target of the neutrino detector. The elastic interaction of dark matter particles with electrons produces one electron in the final state, thus mimicking the elastic interactions of neutrinos, which constitute the main background for this search. In [171] the sensitivity to light dark matter is shown to be very competitive with that of all the experiments planned for the next decade.

The SHiP Collaboration is preparing a Comprehensive Design Report to be submitted within 2018, in the framework of the Physics Beyond Colliders working group, which will be evaluated by 2020. The construction and installation are expected to start in 2021, with data taking starting in 2026.

9.6.3 Projects in Applied Science

9.6.3.1 Muon Radiography

Muon radiography measures the absorption of cosmic-ray muons in matter, analogously to the conventional radiography that makes use of X-rays. The interaction of primary cosmic-rays with the atmosphere provides an abundant source of muons that can be used for various applications of muon radiography. Muon radiography was first proposed to determine the thickness of snow layers on a mountain [174]. The first application was realised in 1971 with the seminal work of Alvarez and collaborators searching for unknown burial cavities in Chephren’s pyramid [175]. The pioneering work done in Japan for the radiography of the edifice of volcanoes by using quasi-horizontal cosmic-ray muons [176, 177] has opened new possibilities for the study of their internal structure.

Nuclear emulsions were used for the first time in 2006 for the muon radiography of the Asama volcano in Japan [178]. The main advantages of the emulsion technique are the simplicity and portability of the detector setup, and the absence of power supplies and electronic data acquisition systems, usually difficult to transport and operate on the summit of a volcano.

In 2012 an emulsion detector was installed on the Stromboli volcano to image its crater region. Despite the strong influence that the crater area and the Sciara del Fuoco slope have on the volcanic dynamics of the Stromboli island, their internal structure is not well known because of the limited resolution of conventional geophysical methods. An emulsion detector with a surface of 0.73 m2 was exposed there and took data for about 5 months in 2012. The emulsion films were arranged in two doublets separated by 5 mm iron slabs, intended to reject the background induced by the soft component of cosmic-rays. Figure 9.31 shows an excess in the rate of muons in the crater region that is interpreted in terms of a lower-density region [179]. This excess lies in region B of Fig. 9.31, where the detector is sensitive to density variations, while A denotes the free-sky region. In region C, instead, the average rock thickness is larger than 800 m, so that the muon rate is too low to appreciate density changes. The data analysis provided an image of the crater area of Stromboli with a resolution of about 10 m in the centre of the target area. The observed muon excess, larger than 30%, indicates an average density decrease along the muon path down to 1.7 g/cm3, with respect to the standard rock density of 2.65 g/cm3. Further measurement campaigns with larger detector surfaces are foreseen at Stromboli as well as on other volcanoes.

Fig. 9.31

Excess of the muon rate seen with an emulsion detector in the crater region of the Stromboli volcano [179]. The colour scale indicates the number of muons over an angular range of 10 × 10 mrad2 and a surface of 0.73 m2

An interdisciplinary project between the fields of geosciences and particle physics was also initiated. This project aims to image the bases of Alpine glaciers in three dimensions via cosmic-ray muon radiography with emulsion particle detectors. The results will be used to test models of erosional processes and provide clues to how the Alpine glaciers have been shaped. They are also of societal relevance, since they can be used to assess the risk of disasters caused by glacier retreat. Studying the morphology of active Alpine glaciers has so far been difficult because of the lack of suitable technology, and muon radiography is considered a powerful tool to address this issue. The technique has been applied to map the bases of the Eiger and Aletsch Glaciers in the central Swiss Alps, where the Jungfrau railway tunnel provides suitable locations for placing the detectors.

Recently, a measurement at the upper part of the Aletsch Glacier has been performed [180]. Muon detectors made of emulsion films were installed at three sites along the wall of the Jungfrau railway tunnel running through the bedrock underneath the glacier. The detectors had an effective detection area of 250 cm2 at each site, and data were collected for 47 days. The shape of the boundary between the glacial ice and the bedrock at the upper part of the Aletsch Glacier was measured, as shown in Fig. 9.32. This is the first successful application of the technology to a glaciated environment and demonstrates that muon radiography can be a complementary method for determining the bedrock topography in such environments when suitable detector sites are available. Another measurement is underway to image the bedrock topography underlying the Eiger Glacier.

Fig. 9.32

The three-dimensionally reconstructed ice–rock interface (blue surface) determined via muon radiography analysis [180]

This technique has also been applied to other fields, such as the investigation of archaeological sites (pyramids and tumuli). One recent success was the discovery of a large void in Khufu’s Pyramid [181]. The void was first observed with emulsion detectors installed in the Queen’s chamber and later confirmed with scintillator hodoscopes placed in the same chamber and with gas detectors outside the pyramid. Figure 9.33 shows that the large known structures were observed as expected; in addition, an unexpected muon excess was observed, indicating the presence of an additional void. This discovery demonstrated the usefulness of the technique for such investigations. Muon radiography with emulsions is now well established and is further broadening our knowledge in several new fields, such as safety inspections, by searching for underground cavities or diagnosing furnace problems.

Fig. 9.33

Two-dimensional histogram of the detected muon flux at a position (left) and the result of a simulation with the known inner structures (right) [181]. The large known structures (A: the King’s chamber and B: the Grand Gallery) and a new void were observed. The colour scale indicates muons per square centimetre per day per steradian

9.6.3.2 Medical Applications

Medical applications of the emulsion technique have also been pursued in the last decade. In the treatment of cancer by hadron therapy, beams of carbon nuclei present therapeutic advantages over proton beams. Knowledge of the fragmentation of carbon nuclei when they interact with human tissues is important to evaluate the spatial profile of the energy deposition in the body, thus maximizing the effectiveness in hitting the tumour while minimizing the damage to the neighbouring tissues. For this purpose, ECC detectors simulating human tissues have been realized and exposed to ion beams. The ECC technique, in fact, allows target and tracking devices to be integrated in a very compact structure. This, together with the development of techniques for the controlled fading of particle tracks in nuclear emulsions [2], has opened the way to measurements of the specific ionization over a very broad dynamic range. Applying different refreshing treatments to the emulsion films makes them sensitive to different ionization ranges, and the combined analysis of several films makes it possible to overcome saturation effects, so that films normally sensitive to minimum ionizing particles can be used to measure the charge of carbon ions and of their induced fragments. Details of the technique are reported in [182, 183].
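The charge measurement exploits the fact that, for fragments travelling at roughly the beam velocity, the specific ionization scales approximately as Z^2, so a track darkness (or volume) variable grows like Z^2 times the film's proton response until the film saturates; a refreshed film, being less sensitive, saturates at higher Z. The sketch below is only a schematic illustration of how several differently refreshed films might be combined; the film names, responses and saturation levels are invented, and the actual calibration procedure is the one described in [182, 183].

```python
import numpy as np

# Schematic illustration (not the calibration of [182, 183]) of charge
# identification with differently refreshed films.  A fragment at roughly the
# beam velocity ionizes ~Z^2 times more than a proton, so its track darkness
# grows like Z^2 times the film's proton response until the film saturates.
# All names and numbers below are invented for illustration.
RESPONSE   = {"not_refreshed": 1.0, "refreshed_1": 0.2}    # darkness of a beam-velocity proton
SATURATION = {"not_refreshed": 10.0, "refreshed_1": 10.0}  # darkness at which each film saturates

def z_estimate(darkness):
    """Estimate the fragment charge from the most sensitive non-saturated film."""
    for film in sorted(RESPONSE, key=RESPONSE.get, reverse=True):
        d = darkness[film]
        if d < 0.8 * SATURATION[film]:          # film still in its linear regime
            return np.sqrt(d / RESPONSE[film])  # darkness ~ Z^2 * response
    return None                                 # beyond the dynamic range of this film set

print(z_estimate({"not_refreshed": 4.1, "refreshed_1": 0.8}))  # ~2: helium-like
print(z_estimate({"not_refreshed": 9.7, "refreshed_1": 7.3}))  # ~6: carbon-like
```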

Figure 9.34 shows the identification of fragments produced by the interaction of carbon ions with Lexan plates, which simulate human body tissues owing to their similar electron density [183]. This charge identification capability allowed the measurement of the charge-changing cross-section of carbon ions on water, by placing the target ECC inside a water tank [184], as well as the charge-changing cross-section of carbon ions on Lexan [185].
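For a thin homogeneous target, the charge-changing cross-section follows from the exponential attenuation of the number of surviving carbon ions, σ_cc = -(M / (N_A ρ t)) ln(N_out/N_in), with M the molar mass of the target material, ρ its density and t its thickness. The sketch below applies this relation to invented numbers for a water target; the actual measurements and their corrections are reported in [184, 185].

```python
import numpy as np

# Illustrative sketch: charge-changing cross-section from the attenuation of the
# surviving carbon ions through a thin target.  Numbers are invented.
N_A = 6.022e23   # Avogadro's number [1/mol]

def charge_changing_xsec(n_in, n_out, rho, thickness_cm, molar_mass):
    """Cross-section per target molecule [cm^2] for a thin homogeneous target."""
    n_targets = N_A * rho * thickness_cm / molar_mass   # molecules per cm^2
    return -np.log(n_out / n_in) / n_targets

# Hypothetical example: 10000 carbon ions entering 2 cm of water, 9100 still carbon
sigma = charge_changing_xsec(10000, 9100, rho=1.0, thickness_cm=2.0, molar_mass=18.0)
print(f"sigma_cc ~ {sigma * 1e24:.2f} barn")   # 1 barn = 1e-24 cm^2
```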

Fig. 9.34

Measurement of the electric charge of nuclear fragments produced by carbon ion interactions in an ECC detector [183]. Left: separation between hydrogen and helium ions. Right: separation of heavier fragments

In the framework of the FIRST (Fragmentation of Ions Relevant for Space and Therapy) experiment [186], two Emulsion Cloud Chambers were exposed to the fragments produced by a 12C beam (400 MeV/n) impinging on a composite target. The detectors were positioned so as to collect 12C fragments emitted at large angles with respect to the beam axis, as shown in the left plot of Fig. 9.35. Indeed, the characterization of the secondary fragments produced by a 12C beam incident on a target is crucial to monitor the dose deposition inside the patient and to estimate the overall biological effectiveness due to the fragmentation of the incident beam, and data available in the literature are rather scarce in this respect. The films used in this experiment belonged to the same batch produced for the OPERA experiment. The ECC structure was made of two sections [187]: the first section, consisting of six nuclear emulsion films, was meant to tag all the incoming fragments entering the detector; the second section, consisting of 55 nuclear emulsion films interleaved with 1 mm thick lead plates, was optimised for the momentum measurement of the fragments through their range. Given the peculiar geometry, tracks impinge on the emulsion films at rather large incident angles, and recent developments in the scanning technology [123, 141, 142] were essential to analyse these films. Almost 37,000 proton tracks were fully reconstructed and their angular and momentum spectra measured over a wide angular range, extending for the first time to more than 80°, as shown in the right plot of Fig. 9.35. The momentum was also measured through the particle range, as reported in [188].
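The momentum measurement by range rests on a monotonic range–energy relation in the lead/emulsion stack. The sketch below illustrates the idea with a Bragg–Kleeman-type power law whose coefficients are illustrative values for protons in water; it is not the calibration of the FIRST ECC, which relies on range–energy tables for the actual material composition [187, 188].

```python
import numpy as np

# Illustrative sketch of a range-based momentum estimate.  A Bragg-Kleeman-like
# power law R = ALPHA * T^P relates the proton range to its kinetic energy;
# inverting the measured range gives T and hence the momentum.
ALPHA, P = 2.2e-3, 1.77   # R [cm] = ALPHA * T[MeV]^P  (protons in water, illustrative)
M_P = 938.272             # proton mass [MeV/c^2]

def momentum_from_range(range_cm):
    kinetic = (range_cm / ALPHA) ** (1.0 / P)           # invert the power law -> T [MeV]
    return np.sqrt(kinetic**2 + 2.0 * kinetic * M_P)    # momentum p*c [MeV]

for r in (2.0, 10.0, 25.0):                             # hypothetical measured ranges
    print(f"range {r:5.1f} cm  ->  p ~ {momentum_from_range(r):6.0f} MeV/c")
```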

Fig. 9.35

Left: ECC used in the FIRST experiment. Right: angular distribution of protons [188]

The FOOT (FragmentatiOn Of Target) experiment [189] is designed to study target and projectile fragmentation processes. The fragmentation of target nuclei (16O, 12C) induced by 150–250 MeV proton beams will be studied via an inverse kinematic approach. The detector includes a magnetic spectrometer based on silicon pixel detectors and drift chambers, a scintillating crystal calorimeter with TOF capabilities, thick enough to stop the heavier fragments produced, and a ΔE detector based on scintillating bars to achieve the needed energy resolution and particle identification. An alternative setup of the experiment will exploit the capabilities of emulsion chambers. Dedicated emulsion chambers will be coupled to the interaction region to measure the interaction vertices within the target, tag the light charged fragments produced, such as protons, deuterons, tritons and helium nuclei, and measure their angular and momentum spectra. Given the very good identification capability of the emulsion technology for low-Z fragments, the results from the emulsion chamber detectors are expected to be of particular interest also for radio-protection in space, where helium is relevant. FOOT data taking with the emulsion setup is foreseen at the CNAO centre in Pavia starting from 2018, while the electronic detectors will start data taking in 2019. Data taking at major European laboratories such as HIT in Heidelberg and GSI in Darmstadt is also foreseen.
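The inverse kinematic approach consists in sending the heavy ion onto a hydrogen-rich target and boosting the measured fragment four-momenta back to the frame in which the ion is at rest, where the reaction looks like proton-induced fragmentation of the target nucleus. The sketch below shows this boost for a hypothetical fragment produced by a 200 MeV/n 16O beam; the beam energy and the fragment are chosen for illustration only.

```python
import numpy as np

# Illustrative sketch of the inverse-kinematics idea (not the FOOT reconstruction):
# a fragment produced by a 16O beam on a hydrogen target and measured in the lab
# is boosted to the frame in which the oxygen is at rest, where it appears as a
# target fragment of a proton beam hitting oxygen.
AMU = 931.494   # atomic mass unit [MeV/c^2], nuclear masses approximated as A*AMU

def boost_to_projectile_rest(E_lab, pz_lab, beam_kinetic_per_n=200.0, A_beam=16):
    """Boost (E, pz) of a fragment from the lab to the beam-ion rest frame (z = beam axis)."""
    E_beam = A_beam * (AMU + beam_kinetic_per_n)        # total beam energy [MeV]
    p_beam = np.sqrt(E_beam**2 - (A_beam * AMU)**2)     # beam momentum [MeV/c]
    gamma, beta = E_beam / (A_beam * AMU), p_beam / E_beam
    E_rest  = gamma * (E_lab - beta * pz_lab)
    pz_rest = gamma * (pz_lab - beta * E_lab)
    return E_rest, pz_rest

# Hypothetical alpha fragment travelling with the beam at ~200 MeV/n, emitted forward
E_lab = 4 * (AMU + 200.0)
pz_lab = np.sqrt(E_lab**2 - (4 * AMU)**2)
print(boost_to_projectile_rest(E_lab, pz_lab))   # ~at rest in the oxygen frame: a slow "target" fragment
```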