Experimental Astronomy

Volume 40, Issue 1, pp 61–89

The infrared camera onboard JEM-EUSO

  • The JEM-EUSO Collaboration
  • J. H. Adams Jr.
  • S. Ahmad
  • J. -N. Albert
  • D. Allard
  • L. Anchordoqui
  • V. Andreev
  • A. Anzalone
  • Y. Arai
  • K. Asano
  • M. Ave Pernas
  • P. Baragatti
  • P. Barrillon
  • T. Batsch
  • J. Bayer
  • R. Bechini
  • T. Belenguer
  • R. Bellotti
  • K. Belov
  • A. A. Berlind
  • M. Bertaina
  • P. L. Biermann
  • S. Biktemerova
  • C. Blaksley
  • N. Blanc
  • J. Błȩcki
  • S. Blin-Bondil
  • J. Blümer
  • P. Bobik
  • M. Bogomilov
  • M. Bonamente
  • M. S. Briggs
  • S. Briz
  • A. Bruno
  • F. Cafagna
  • D. Campana
  • J. -N. Capdevielle
  • R. Caruso
  • M. Casolino
  • C. Cassardo
  • G. Castellini
  • C. Catalano
  • G. Catalano
  • A. Cellino
  • M. Chikawa
  • M. J. Christl
  • D. Cline
  • V. Connaughton
  • L. Conti
  • G. Cordero
  • H. J. Crawford
  • R. Cremonini
  • S. Csorna
  • S. Dagoret-Campagne
  • A. J. de Castro
  • C. De Donato
  • C. de la Taille
  • C. De Santis
  • L. del Peral
  • A. Dell’Oro
  • N. De Simone
  • M. Di Martino
  • G. Distratis
  • F. Dulucq
  • M. Dupieux
  • A. Ebersoldt
  • T. Ebisuzaki
  • R. Engel
  • S. Falk
  • K. Fang
  • F. Fenu
  • I. Fernández-Gómez
  • S. Ferrarese
  • D. Finco
  • M. Flamini
  • C. Fornaro
  • A. Franceschi
  • J. Fujimoto
  • M. Fukushima
  • P. Galeotti
  • G. Garipov
  • J. Geary
  • G. Gelmini
  • G. Giraudo
  • M. Gonchar
  • C. González Alvarado
  • P. Gorodetzky
  • F. Guarino
  • A. Guzmán
  • Y. Hachisu
  • B. Harlov
  • A. Haungs
  • J. Hernández Carretero
  • K. Higashide
  • D. Ikeda
  • H. Ikeda
  • N. Inoue
  • S. Inoue
  • A. Insolia
  • F. Isgrò
  • Y. Itow
  • E. Joven
  • E. G. Judd
  • A. Jung
  • F. Kajino
  • T. Kajino
  • I. Kaneko
  • Y. Karadzhov
  • J. Karczmarczyk
  • M. Karus
  • K. Katahira
  • K. Kawai
  • Y. Kawasaki
  • B. Keilhauer
  • B. A. Khrenov
  • J. -S. Kim
  • S. -W. Kim
  • S. -W. Kim
  • M. Kleifges
  • P. A. Klimov
  • D. Kolev
  • I. Kreykenbohm
  • K. Kudela
  • Y. Kurihara
  • A. Kusenko
  • E. Kuznetsov
  • M. Lacombe
  • C. Lachaud
  • J. Lee
  • J. Licandro
  • H. Lim
  • F. López
  • M. C. Maccarone
  • K. Mannheim
  • D. Maravilla
  • L. Marcelli
  • A. Marini
  • O. Martinez
  • G. Masciantonio
  • K. Mase
  • R. Matev
  • G. Medina-Tanco
  • T. Mernik
  • H. Miyamoto
  • Y. Miyazaki
  • Y. Mizumoto
  • G. Modestino
  • A. Monaco
  • D. Monnier-Ragaigne
  • J. A. Morales de los Ríos
  • C. Moretto
  • V. S. Morozenko
  • B. Mot
  • T. Murakami
  • M. Nagano
  • M. Nagata
  • S. Nagataki
  • T. Nakamura
  • T. Napolitano
  • D. Naumov
  • R. Nava
  • A. Neronov
  • K. Nomoto
  • T. Nonaka
  • T. Ogawa
  • S. Ogio
  • H. Ohmori
  • A. V. Olinto
  • P. Orleański
  • G. Osteria
  • M. I. Panasyuk
  • E. Parizot
  • I. H. Park
  • H. W. Park
  • B. Pastircak
  • T. Patzak
  • T. Paul
  • C. Pennypacker
  • S. Perez Cano
  • T. Peter
  • P. Picozza
  • T. Pierog
  • L. W. Piotrowski
  • S. Piraino
  • Z. Plebaniak
  • A. Pollini
  • P. Prat
  • G. Prévôt
  • H. Prieto
  • M. Putis
  • P. Reardon
  • M. Reyes
  • M. Ricci
  • I. Rodríguez
  • M. D. Rodríguez Frías
  • F. Ronga
  • M. Roth
  • H. Rothkaehl
  • G. Roudil
  • I. Rusinov
  • M. Rybczyński
  • M. D. Sabau
  • G. Sáez-Cano
  • H. Sagawa
  • A. Saito
  • N. Sakaki
  • M. Sakata
  • H. Salazar
  • S. Sánchez
  • A. Santangelo
  • L. Santiago Crúz
  • M. Sanz Palomino
  • O. Saprykin
  • F. Sarazin
  • H. Sato
  • M. Sato
  • T. Schanz
  • H. Schieler
  • V. Scotti
  • A. Segreto
  • S. Selmane
  • D. Semikoz
  • M. Serra
  • S. Sharakin
  • T. Shibata
  • H. M. Shimizu
  • K. Shinozaki
  • T. Shirahama
  • G. Siemieniec-Oziȩbło
  • H. H. Silva López
  • J. Sledd
  • K. Słomińska
  • A. Sobey
  • T. Sugiyama
  • D. Supanitsky
  • M. Suzuki
  • B. Szabelska
  • J. Szabelski
  • F. Tajima
  • N. Tajima
  • T. Tajima
  • Y. Takahashi
  • H. Takami
  • M. Takeda
  • Y. Takizawa
  • C. Tenzer
  • O. Tibolla
  • L. Tkachev
  • H. Tokuno
  • T. Tomida
  • N. Tone
  • S. Toscano
  • F. Trillaud
  • R. Tsenov
  • Y. Tsunesada
  • K. Tsuno
  • T. Tymieniecka
  • Y. Uchihori
  • M. Unger
  • O. Vaduvescu
  • J. F. Valdés-Galicia
  • P. Vallania
  • L. Valore
  • G. Vankova
  • C. Vigorito
  • L. Villaseñor
  • P. von Ballmoos
  • S. Wada
  • J. Watanabe
  • S. Watanabe
  • J. Watts Jr.
  • M. Weber
  • T. J. Weiler
  • T. Wibig
  • L. Wiencke
  • M. Wille
  • J. Wilms
  • Z. Włodarczyk
  • T. Yamamoto
  • Y. Yamamoto
  • J. Yang
  • H. Yano
  • I. V. Yashin
  • D. Yonetoku
  • K. Yoshida
  • S. Yoshida
  • R. Young
  • M. Yu. Zotov
  • A. Zuccaro Marchi
Original Article

Abstract

The Extreme Universe Space Observatory on the Japanese Experiment Module (JEM-EUSO) on board the International Space Station (ISS) is the first space-based mission worldwide in the field of Ultra High-Energy Cosmic Rays (UHECR). For UHECR experiments, the atmosphere is not only the showering calorimeter for the primary cosmic rays; it is an essential part of the readout system as well. Moreover, the atmosphere must be calibrated and taken into account as an input for the analysis of the fluorescence signals. Therefore, the JEM-EUSO Space Observatory is implementing an Atmospheric Monitoring System (AMS) that will include an IR-Camera and a LIDAR. The AMS Infrared Camera is a wide-FoV infrared imaging system designed to provide the cloud coverage along the JEM-EUSO track and the cloud top height needed for a proper UHECR reconstruction in cloudy conditions. In this paper, the updated preliminary design status, the results from the calibration tests of the first prototype, the simulation of the instrument, and preliminary cloud top height retrieval algorithms are presented.

Keywords

JEM-EUSO · Space observatory · IR-camera · Ultra-High Energy Cosmic Rays (UHECR)

1 Introduction

The JEM-EUSO Space Observatory is presently planned to be launched and attached to the Japanese module of the International Space Station (ISS) [1, 2, 3]. It aims to observe the UV photon tracks produced by Ultra High Energy Cosmic Rays (UHECR) that generate Extensive Air Showers (EAS) while crossing the atmosphere. However, clouds introduce uncertainties in the UV radiation produced by the EAS and measured by JEM-EUSO [4-6]. Therefore, it is extremely important to know the atmospheric conditions and the properties of the clouds in the Field of View (FoV) of the telescope. The IR measurements will also be important for understanding the point flashes and tracks that JEM-EUSO will measure from the Global Light System (GLS) [7], which has recently been funded by NASA.

As far as the cloud impact is concerned, the simplest approach would be to observe EAS only under good conditions and to reject the events affected by clouds. In this case, the exposure is lowered by the reduction of the observation time. On the other hand, a space-based telescope monitors continuously changing landscapes within its wide FoV, and the atmospheric conditions vary strongly with location and time along the trajectory. As a consequence, the JEM-EUSO telescope observes all possible conditions, most relevantly different cloud properties, inside its FoV. The timescale of transitions between cloudy and clear atmosphere may be of the order of a minute or less, and variations comparable to seasonal ones appear roughly every quarter of the orbital period (~20 min). However, the presence of clouds may only be relevant if the EAS takes place behind the cloud, especially for clouds with large optical depths. The cloud impact is mainly characterized by the cloud top altitude. Therefore, the portion of the FoV where high-altitude clouds are present may reduce the instantaneous aperture for EAS observation, although it is still possible to detect the EAS [8]. Moreover, EAS are very fast and localized events that take place over a relatively small area; therefore, accurate information about the atmospheric conditions, with high enough resolution to reveal small changes in the atmospheric profile that might have blurred the UHECR signal, is crucial to ensure an accurate UHECR reconstruction. Therefore, JEM-EUSO will implement its own Atmospheric Monitoring System (AMS), while data from other atmospheric satellites will be used to complement and enhance our understanding of the atmospheric conditions where the UHECR has been detected, since these satellite measurements can come with lower resolution and with a delay of a few hours from the time the UHECR has been observed in the JEM-EUSO FoV. In terms of resolution, JEM-EUSO is designed to have a spatial resolution of 0.51 km at ground [1], while satellites such as Meteosat Second Generation (MSG) have a spatial resolution of 3 km [11].

The AMS [9] of the JEM-EUSO Space Mission is intended to monitor the atmosphere in the JEM-EUSO FoV in order to obtain the cloud coverage and the cloud top height [10]. The AMS is crucial for estimating the effective exposure of the telescope and for properly carrying out the analysis of the UHECR events [12]. Therefore, the JEM-EUSO mission has implemented the AMS which, combined with observations from other atmospheric satellites, will give us the capability to characterize the observational FoV [13]. The AMS of JEM-EUSO consists of a LIDAR and an infrared (IR) camera (Fig. 1); global atmospheric models will be used as well. The IR-Camera will provide the cloud coverage and the cloud top height in the FoV of the JEM-EUSO main telescope. The LIDAR [9] is the other crucial instrument of the JEM-EUSO AMS. It provides detailed atmospheric profiles over a very small area, which are used to calibrate the IR-Camera images and to determine the cloud top altitude needed to classify the clouds according to their optical depth. Due to space budget constraints, the LIDAR cannot cover the whole FoV of JEM-EUSO in a scanning mode; therefore, the IR-Camera provides valuable and crucial information that complements the few LIDAR shots in the JEM-EUSO FoV.
Fig. 1

Schematic view of the IR-Camera observation concept along the International Space Station orbit. ACT (ACross Track). ALT (ALong Track)

The main scientific requirements on the accuracy of the UHECR measurements in JEM-EUSO are: a) reconstruction of the UHECR primary energy with a precision not worse than 30 %, and b) determination of the depth of the shower maximum, X_max, with a precision not worse than 120 g/cm². These two scientific requirements drive the requirements of the AMS; therefore, the systematic errors in the primary energy and in the depth of maximum development of the EAS induced by the uncertainty of the atmospheric conditions must be significantly below those required for the EAS measurements. Moreover, a scientific requirement to measure the cloud top altitude with an accuracy of ΔH ≤ 500 m has to be fulfilled. Although the atmospheric lapse rate varies, under normal conditions an average lapse rate of 6.4 K/km can be assumed in the troposphere; therefore, the required accuracy of ΔH ≤ 500 m in the cloud top height translates into an accuracy of ±3 K in the cloud top temperature [9].
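As a quick check of this conversion, assuming the quoted average tropospheric lapse rate, the temperature accuracy maps onto a height accuracy as

$$ \Delta H \approx \frac{\Delta T}{\Gamma} = \frac{3\ \mathrm{K}}{6.4\ \mathrm{K\,km^{-1}}} \approx 0.47\ \mathrm{km} \approx 470\ \mathrm{m} < 500\ \mathrm{m}, $$

which is consistent with the stated requirement.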

In this paper, the status of the Infrared Camera (Fig. 2), under the responsibility of the JEM-EUSO Spanish Consortium [14], is presented. Section 2 summarizes the System Preliminary Design of the IR-Camera. A breadboard prototype is under development to characterize the detector to be used in the JEM-EUSO Infrared Camera, as well as the optical system; this work is explained in Section 3. Moreover, to simulate the temperature images measured by the Infrared Camera, an End-to-End (E2E) simulator is under development, and it is described in Section 4. Because of the unknown cloud emissivity and of atmospheric effects, the temperature directly measured by the IR-Camera (the brightness temperature) has to be corrected in order to obtain the target temperature. In order to infer the cloud emissivity, a Look-Up Table procedure has been implemented, and preliminary results are presented in Section 5. Moreover, these retrieval algorithms must provide not only the cloud top temperature but also the Cloud Top Height (CTH), since the latter is the scientific requirement of the mission; a strong effort is therefore being made in this direction.
Fig. 2

Illustration of the Infrared Camera external view with the two main boxes that will house the electronics, the telescope, and the calibration unit

2 The infrared camera preliminary design

The scientific and technical requirements for the IR-Camera are summarized in Table 1. To obtain enough information to correct the atmospheric effects on the IR radiation emitted by the clouds, a 2-band design of the infrared camera is being carried out, complying with the space budgets of power, mass, data rate, and dimensions in order to avoid a negative impact on the resources of the main instrument. The preliminary design can be divided into three main blocks: the Telescope Assembly (Sections 2.1 and 2.2), the Electronics Assembly (Section 2.3), and the Calibration Unit (Section 2.4) [15]. This preliminary design is complemented by the mechanical and thermal design (Section 2.5) of the instrument housing.
Table 1

Technical Requirements for the IR-Camera of the JEM-EUSO Space Mission.

Parameter | Target value | Measurement
Measurement range | 220 K – 320 K | Annual cloud temperature variation plus a 20 K margin
Wavelength | 10–12 μm (10.3–11.3 μm & 11.5–12.5 μm) | Two atmospheric windows available, selected by means of a bi-band filter; half of the detector is used for each band (see Section 2.2)
FoV | 48° | Same as the main instrument
Spatial resolution | 0.1° (goal), 0.2° (threshold) | At the FoV center
Absolute accuracy | 3 K | Corresponds to 500 m in cloud top altitude, according to the average atmospheric lapse rate of 6.4 K/km
Data budget | ≤ 40 kbps | Including telemetry data
Acquisition rate | 17 s (image + offset) | Ensures gap-less images
Mass | ≤ 11 kg | Including a 20 % margin
Dimensions | 400 × 400 × 370 mm | Without insulation and mounting bracket
Power | ≤ 15 W | Including a 20 % margin
Lifetime | 5 years on-orbit | Plus 2 years on-ground

2.1 The telescope assembly: detector and front-end electronics

The IR-Camera Telescope Assembly comprises the infrared detector (a μbolometer), the Front End Electronics (FEE), and the optical lens assembly (Fig. 3). The infrared detector selected for the JEM-EUSO IR-Camera is the UL04171 from the ULIS company [16]. The UL04171 is an infrared opto-electronic device with a μbolometer Focal Plane Array (FPA), a two-dimensional detector array made from amorphous silicon. A μbolometer measures the power of the incident electromagnetic radiation via the heating of a material with a temperature-dependent electrical resistance. The operating temperature is around 30 °C, and a dedicated Thermo-Electric Cooler (TEC) is implemented to guarantee a temperature set point within ±10 mK.
Fig. 3

Illustration of the infrared camera telescope assembly

The main characteristics of this uncooled detector are the following:
  • Spectral range: 8 to 14 μm

  • 640 × 480 pixel Focal Plane Array (FPA)

  • 25 × 25 μm per pixel

  • 2 readout channels

  • Maximum frame rate 30 Hz with 1 channel & 60 Hz with 2 channels

  • Typical responsivity 5 mV/K

  • Thermo-electric cooler (TEC) integrated in the FPA

  • Temperature stability < 10 mK

  • Low power consumption < 170 mW (without TEC)

The FEE (Front End Electronics) manages and drives the μbolometer; it provides the bias voltages and the sequencer (the electric signal commands controlling the detector), and manages the image acquisition modes. The FEE communicates with the ICU and provides it with the uncompressed raw images. The core of the FEE shall be an FPGA of the Virtex family, in charge of implementing the main FEE functions: the control of the UL04171, the generation of all synchronization signals (including the clocks), and the interface with the sequencer. The polarization of the detector (bias, gain, offset generation and control) will also be handled by the FPGA. The data acquisition will be implemented with an Analog-to-Digital Converter (ADC) on each detector output channel before the FPGA input. The number of ADC bits will be chosen according to the pixel data resolution required by the IR-Camera.

To reduce the NETD (Noise Equivalent Temperature Difference) of the target under investigation, a "frame averaging" technique will be used. The total number of frames to be averaged (4 in the current baseline) is determined by the velocity of the ISS and the acquisition rate of the detector. Moreover, the IR-Camera will perform an offset reduction every 17 seconds. When the ICU sends the sync-image command to the FEE, the FEE acquires 5 consecutive full-frame target images. When the ICU sends the sync-offset command, the FEE acquires 5 consecutive offset images, taken with the shutter closed. The offset reduction is achieved by discarding the first offset image and averaging the remaining 4 offset images. The target image reduction likewise consists of discarding the first target image and averaging the other 4. The resulting offset image is then subtracted from the reduced target image.
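A minimal sketch of this averaging and offset-subtraction scheme, assuming the frames arrive as NumPy arrays; the 5-frame bursts follow the description above, while the function names and the synthetic data are purely illustrative:

```python
import numpy as np

def reduce_burst(frames):
    """Discard the first frame of a 5-frame burst and average the remaining 4."""
    frames = np.asarray(frames, dtype=np.float64)
    return frames[1:].mean(axis=0)

def offset_corrected_image(target_burst, offset_burst):
    """Average the target and the shutter-closed offset bursts, then subtract the offset."""
    target = reduce_burst(target_burst)   # averaged scene image
    offset = reduce_burst(offset_burst)   # averaged dark/offset image
    return target - offset

# Example with synthetic 640x480 frames (counts are arbitrary)
rng = np.random.default_rng(0)
target_burst = rng.normal(5000, 50, size=(5, 480, 640))
offset_burst = rng.normal(1000, 50, size=(5, 480, 640))
image = offset_corrected_image(target_burst, offset_burst)
```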

2.2 The optical subsystem preliminary design

In order to secure an optimal operation of the Infrared Camera, the design of the Optical subsystem has to fulfill the following technical requirements (including the requirements listed in Table 1):
  • To be very fast in terms of F#

  • To secure an optimal operating temperature of 29 °C for the ULIS detector, both in the cold operative case (−15 °C) and in the hot operative case (+15 °C).

  • The thermal excursion of the lenses has to be less than 20 °C.

  • To keep the Cold Stop temperature 15 °C below the ULIS μbolometer temperature.

A brightness temperature measurement with a single-band configuration may not provide the required radiometric accuracy without the use of external information to correct for atmospheric effects. Since the availability of such information is not guaranteed, a multispectral approach has been selected as the baseline. In this configuration, the IR-Camera has a bi-band design with two filters in the cold stop of the optics, which allows a multispectral snapshot camera without a dedicated filter-wheel mechanism. This solution leads to a more reliable baseline and avoids the cost of a complicated filter-wheel mechanism qualified for space applications. The only drawback is that, since half of the available detector area is used for each spectral band, the image acquisition has to be fast enough to avoid gaps along the ISS orbit and to secure the overlap needed for the stereoscopic retrieval of the cloud top height from the overlapping images of the infrared camera.

Presently, the optical system design (Fig. 4) has a refractive objective, based on a triplet, with one additional lens close to the stop and a window for the filters close to the focal plane. The first surface of the first lens and the second surface of the third lens are aspheric, which improves the quality of the complete system. The aperture stop is situated 0.40 mm behind the fourth lens, in order to separate the optical system from the detection module. The system, consisting of four lenses, has a focal length of 19.10 mm and an f-number of 1, and it shall work with a total FoV of 48°. The overall length from the first surface to the focal plane is 62.30 mm. The full system has been designed with only one optical material, germanium, with a refractive index of 4.003118.
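As a rough, illustrative cross-check of these optical parameters against the spatial-resolution goal of Table 1, the per-pixel field of view and the nadir footprint can be estimated from the focal length above and the 25 μm pixel pitch of Section 2.1; the ISS altitude of 400 km used below is an assumed nominal value, not a figure taken from the text:

```python
import math

focal_length_mm = 19.10   # from the optical design above
pixel_pitch_um = 25.0     # UL04171 pixel size (Section 2.1)
iss_altitude_km = 400.0   # assumed nominal ISS altitude (not specified in the text)

ifov_rad = (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)  # small-angle approximation
ifov_deg = math.degrees(ifov_rad)
ground_footprint_km = iss_altitude_km * ifov_rad               # nadir footprint of one pixel

print(f"IFOV per pixel: {ifov_deg:.3f} deg")             # ~0.075 deg, within the 0.1 deg goal
print(f"Nadir footprint: {ground_footprint_km:.2f} km")  # ~0.52 km
```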
Fig. 4

Schematic illustration of the optic preliminary design of the Infrared Camera

A tolerancing analysis was performed, using the CodeV tolerancing tool (TOR) [17], in order to prepare the manufacture of a breadboard model. Tolerancing provides information about the sensitivity of an optical system to typical fabrication and mounting errors, and helps determine the manufacturing tolerances necessary to meet the required performance.

A breadboard model (Fig. 5) has been manufactured to test the optical performance of the system. The breadboard lenses have been mounted in the same way they will be assembled in the flight model. The tolerances and the opto-mechanical assembly process have been successfully tested and verified at this stage of development.
Fig. 5

Breadboard model manufactured at INTA facilities for the Infrared Camera of JEM-EUSO

2.3 The electronics assembly

The Electronics Assembly is composed of two main sections: the Instrument Control Unit (ICU) and the Power Supply Unit (PSU). Both blocks follow a cold-redundancy architecture and are placed on individual PCBs. Therefore, four boards are defined: ICU Main, ICU Redundant, PSU Main, and PSU Redundant.

The ICU controls and manages the overall system behavior, including the data management (compression, formatting), the power drivers, and the FPGA controlling the mechanisms (shutter, black bodies, etc.). The IR-Camera electronics shall provide the means for processing and transmitting the images obtained from the IR detector controlled by a dedicated FEE board; a firmware (FW) solution is considered as the baseline.

The data generated by the FEE are then processed by the Instrument Control Unit (ICU), which is in charge of several aspects of the system management, such as the electrical system, the thermal control, and the communication with the platform computer. The PSU receives the main power bus from the JEM-EUSO main telescope and provides the required power regulation to the system and sub-systems. The actuator will be managed by the ICU, which controls a stepper motor and acquires its position by means of micro-switches placed at the stable positions.

The PSU will be composed of the DC/DC converters necessary to supply the electronics inside the ICU, the calibration unit (motor, black bodies), the heaters, and the electronics inside the FEE. Voltage levels of 15 V, 7 V, −7 V, 5 V and 3.3 V are supplied to the FEE. An Electromagnetic Interference (EMI) filter is included in the PSU as well.

2.4 The calibration subsystem

The calibration unit (Fig. 6) is dedicated to managing and controlling the IR calibration operation. This unit has to guarantee a reference internal temperature to ensure the calibration of the data coming out of the FEE. The calibration unit is mainly composed of:
  • Two Black Bodies + Temperature-controlled Shutter.

  • Moving mechanism and motor.

  • The positioning system.

  • Calibration thermal control.

Fig. 6

Illustration of the calibration subsystem. The Black bodies and shutter change position for the different operating modes of the camera

In order to define the timing needed to perform the two calibration points, a dedicated test campaign will be carried out in the next phases of the project. For the current baseline, it has been assumed that the instrument behaves similarly to other on-flight calibrators, e.g. the ISIR instrument flown on Shuttle mission STS-85, performing the hot-point calibration every 5 minutes and the cold-point calibration every 30 minutes.

The calibration mechanism consists of a stepper motor governing the black-bodies and shutter. A pin-puller has been implemented in order to hold the support during the spacecraft launch. The actuator is designed to be powered only during movements. The detent torque (position holding torque) shall be capable of maintaining the device in the desired position, both for calibration and imaging.

The stepper motor will be commanded in open loop, where each step corresponds to a rotation of 1.8°. Nevertheless, position detectors shall be installed in order to guarantee that the motor has reached the commanded position. Furthermore, adjustable end stops have been introduced to physically limit the maximum rotation angle. Reed switches will be used as position detectors for the mechanism; a reed switch is activated when a magnet is close enough. For the calibration mechanism, one magnet will be attached to each of the arms, while the reed switches will be soldered to the motor cover at the locations the magnets reach in the four stable positions.

2.5 The thermal and mechanical design

The main mechanical structure of the Infrared Camera contains and protects the Telescope Assembly and the Calibration Unit. It is attached to the bench of the JEM-EUSO telescope by means of three flexure pads. The main housing is an aluminum Al6082 monocoque body-shell. It has three different compartments to accommodate the required subsystems and to provide overall stiffness and thermal isolation of the Optical Assembly and the FPA from the Calibration Unit and the FEE. The FEE box (Fig. 7) is made of aluminum 6082-T6 and will be treated with Alodine 1200S, with a suitable surface roughness (better than 1.6 μm) to comply with the low-emissivity requirement. The wall thickness is 2 mm to provide good radiation protection, with margin over the minimum of 1 mm needed.
Fig. 7

FEE box illustration

The power dissipation of the FPA generates a high heat flux into the ULIS detector, which could raise its temperature above its optimal working temperature during the hot operative case (+15 °C). Therefore, it is necessary to evacuate the heat to the JEM-EUSO telescope bench by means of two high-conductance thermal straps. The thermal straps are made of Annealed Pyrolytic Graphite (APG). They are attached to the lens barrel, instead of the FPA, to achieve the desired temperature in the Cold Stop (15 °C below the μbolometer). Since the thermal straps are passive thermal components, in the cold operative case (−15 °C) the temperature must be actively controlled by a heater placed on the base of the FPA. For intermediate interface temperatures (between −15 °C and +15 °C), the heater will be controlled to achieve the required temperature.

3 Infrared camera prototype tests

In this section, the characteristics of the prototype are described (Section 3.1). The test setup (Section 3.2) is explained and the main results of the tests are summarized (Section 3.3).

3.1 Prototype description

The Astrophysics Institute of the Canary Islands (Instituto de Astrofísica de Canarias, IAC, Tenerife) has a dedicated facility (Laboratory of Imaging Sensors for Astronomy, LISA) to test astronomical arrays and related devices. This facility has the appropriate environment and equipment to characterize this type of detector. An electronics prototype module developed by INO (Canada) has been used [18]; this electronics core is known as IRXCAM-640. Figure 8 shows a picture of the detector mounted on the electronics module. Providing 16-bit raw signal outputs at 60 Hz, the electronics give full access to the detector configuration parameters and to the TEC and shutter controls. Although the chip architecture allows TEC-less operation, the integrated TEC and its control loop provide 10 mK temperature stability, keeping the NETD values very low. For the camera optics, we have decided to use a commercial unit, the Surnia lens assembly from Janos [19], capable of measuring in the 7-14 μm region. The main characteristics of the optics are: focal length 25 mm and f# = 0.86, with a circular FoV of 45°. The wavelength range is limited to 7-14 μm.
Fig. 8

Photograph of the ULIS-04-17-1 unit mounted in one IRXCAM-640 module for prototyping at the IAC-LISA laboratory. A detailed description of the detector is found in Section 2.1

3.2 Prototype tests setup

In order to determine the prototype performance, images of an IR calibrator have to be measured. The infrared radiation source used was a black body (model DCN-1000-L3) from HGH Systems Infrarouges (France) [20], with an emissive area of 75 × 75 mm and an absolute temperature range from −40 °C to 150 °C. Its thermal uniformity is better than 0.01 °C and its stability is 0.002 °C. A control system was built to complete the testing procedure. The system consists of: (a) 8 Pt-100 temperature sensors to register the prototype temperature, the ambient temperature, and the black-body plate temperature; (b) a commercial Lakeshore 218 8-channel temperature monitor; and (c) a Proportional-Integral-Derivative (PID) control loop handled by a Lakeshore 331 temperature controller to keep the optics case within a 10 mK environment. The entire setup is synchronized and controlled by a user-friendly interface developed under NI LabVIEW on a PC platform. The most typical tests, such as linearity, temperature stability, non-uniformity calibration and NETD, were fully automated for these purposes.

3.3 Prototype tests results

We have checked that the bolometer sensor behaves well while changing the tunable voltages, both in gain (VSK = 5500 mV) and offset (VGFID = 3350 mV). Since the dynamic range requested for this space mission is, in principle, small (220-320 K), we can tune the μbolometer to obtain the minimum possible NETD. We have determined NETD values below the ULIS specification (≤120 mK) and found no dependence on the FPA temperature (8 °C, 16 °C, 24 °C and 30 °C). A typical NETD value is 70 mK for a 50-sample run, following the same evaluation procedure as the manufacturer.

As expected, the μbolometer is very offset-dependent: any small change in the temperature of the reference images changes the resulting value of the images, even while the operating temperature of the FPA is controlled by the TEC. In the available temperature range (TEC-controlled, from 8 °C to 30 °C), neither the response of the ULIS μbolometer nor its main performances change, but the offset must be thermalized because of the heat fluxes transmitted to the shutter and the optics. It is therefore strongly recommended that the same temperature control be applied to both the FPA and the shutter located in the optics case.

The operating point of this type of camera, based on μbolometer devices, depends on several factors: (a) the tuning of the FPA voltages and of the Capacitive Transimpedance Amplifier (CTIA) gain, (b) the integration time as a function of the master clock, and (c) the FPA temperature. We reduced the variables involved (voltages, gain, integration time) in order to analyze the response to the radiation, and we obtained hundreds of curves of the FPA response versus the black-body temperature in the range −40 °C to 140 °C, both increasing and decreasing the temperature (UpRamp and DownRamp, respectively). Some of the results are shown in Fig. 9. In this test, the black-body temperature was increased from −40 °C to 140 °C and then decreased back to −40 °C. This range covers the temperatures of the scenarios expected for the IR-Camera.
Fig. 9

One complete calibration curve and its comparison with the theoretical laws. In detail, a plot of the best fit obtained in the calibration range is shown. In this figure, the X axis represents the temperature selected in the black body and the Y axis, the detector response

In order to analyze the IR-Camera response, the theoretical Stefan-Boltzmann law is also plotted in this graph, after converting irradiance units into digital counts. The curves fit the Stefan-Boltzmann law very well (<1 % difference in the −40 °C to 30 °C range), where the irradiance is defined as E = CσT⁴, with σ the Stefan-Boltzmann constant and C the parameter determined by our system in order to reproduce the detector response in terms of digital counts. The agreement is even better when the up and down ramps are compared with Planck's law, integrated over the spectral band of the detector (7 to 14 μm) and converted to digital counts [21].
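A minimal sketch of this comparison, assuming synthetic count data in place of the real measurements; the constant C is obtained from a least-squares fit of the Stefan-Boltzmann model in digital counts, and the band-integrated Planck radiance is computed for the refined comparison mentioned above:

```python
import numpy as np
from scipy.constants import sigma, h, c, k
from scipy.integrate import quad

def planck_band_radiance(T, lam1=7e-6, lam2=14e-6):
    """Planck spectral radiance integrated over the 7-14 um band (W m^-2 sr^-1)."""
    def B(lam):
        return 2 * h * c**2 / lam**5 / (np.exp(h * c / (lam * k * T)) - 1.0)
    return quad(B, lam1, lam2)[0]

# Hypothetical calibration data: black-body temperatures and measured counts
T_bb = np.linspace(233.0, 313.0, 9)                  # -40 C to 40 C
counts = 2.0 * sigma * T_bb**4 + 300.0               # stand-in for real measurements

# Fit counts = C * sigma * T^4 + offset (Stefan-Boltzmann model in digital counts)
A = np.vstack([sigma * T_bb**4, np.ones_like(T_bb)]).T
(C_fit, offset), *_ = np.linalg.lstsq(A, counts, rcond=None)
residual = counts - (C_fit * sigma * T_bb**4 + offset)
print(f"C = {C_fit:.3e}, max relative residual = {np.abs(residual / counts).max():.2%}")

# Band-integrated Planck radiance at the same temperatures, for the refined comparison
L_band = np.array([planck_band_radiance(T) for T in T_bb])
```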

In Fig. 10, the FPA response versus the theoretical black-body irradiance is shown. We have achieved a very linear response, good enough to foresee a 2-point calibration procedure on board. Moreover, an uncertainty analysis of several calibration lines, using two high-temperature calibration points against a target temperature of −40 °C, has demonstrated that it is possible to perform the calibration with an uncertainty lower than 500 mK, and even lower than 200 mK if one of the calibration points is close enough to the target temperature.
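A minimal sketch of such a 2-point calibration, assuming a linear counts-to-temperature response as suggested by Fig. 10; the black-body temperatures and count values below are illustrative placeholders, not test data:

```python
import numpy as np

def two_point_calibration(counts_cold, T_cold, counts_hot, T_hot):
    """Return gain and offset mapping detector counts to temperature (linear model)."""
    gain = (T_hot - T_cold) / (counts_hot - counts_cold)
    offset = T_cold - gain * counts_cold
    return gain, offset

def counts_to_temperature(counts, gain, offset):
    return gain * np.asarray(counts) + offset

# Hypothetical on-board calibration against two black bodies at known temperatures
gain, offset = two_point_calibration(counts_cold=12000.0, T_cold=263.0,
                                     counts_hot=21000.0, T_hot=313.0)
scene_counts = np.array([13500.0, 17800.0])
print(counts_to_temperature(scene_counts, gain, offset))  # brightness temperatures in K
```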
Fig. 10

Fit of the radiance for the (7−14)μm band versus the camera output. In detail, plot of the linear best fit in the possible calibration range

Therefore, we can estimate the error budget due to the detector as ≈500 mK (calibration) plus ≈60 mK (NETD). The rest of the allowed budget (<3 K) is allocated to the other components of the system, such as the cloud top height reconstruction algorithms, the data compression algorithms, and further calibration uncertainty contributions.

4 End to end simulation

An End-to-End (E2E) simulation of the infrared camera will give us simulated images similar to those we expect to obtain with the instrument. It allows us to study the impact of several atmospheric scenarios on the accuracy of the cloud top height reconstruction, analyzing the detection capabilities, the calibration procedures, and the correction factors that will be taken into account for the final data released by the AMS of the JEM-EUSO Space Mission [22].

The first step is to simulate the IR scene using atmospheric simulation software, such as the Satellite Data Simulator Unit (SDSU) [23], or real satellite IR images taken by missions such as MODIS [24] or CALIPSO [25]. After the input scene is read by the simulator, the optical element is simulated, starting from the diffraction, distortion, and efficiency of the optics module. This simulation is based on an evaluation of the design performed with the optics design software Code V [17] by the Instituto Nacional de Técnica Aeroespacial (INTA, Spain).

In order to model the detector, we rely on the tests described in Section 3. Therefore, we can translate the input values into analog voltage values similar to the detector response. Moreover, we apply the 12-bit Analog-to-Digital Conversion (ADC) and its corresponding reduction to 10 bits. As a last step of the instrument simulation, we compress the image using the HP (Hewlett-Packard) LOCO-I/JPEG-LS algorithm [26] in near-lossless mode.
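A minimal sketch of the digitization step in this chain, assuming an input voltage range of 0-3.3 V and a simple bit shift for the 12-to-10-bit reduction; both choices are assumptions for illustration, since the actual on-board scheme is not specified here:

```python
import numpy as np

def quantize_12bit(voltages, v_min=0.0, v_max=3.3):
    """Map analog voltages to 12-bit ADC codes (assumed input range v_min..v_max)."""
    codes = np.clip((voltages - v_min) / (v_max - v_min), 0.0, 1.0) * 4095
    return np.round(codes).astype(np.uint16)

def reduce_to_10bit(codes_12bit):
    """Reduce 12-bit codes to 10 bits; a plain right shift is assumed here, the
    actual on-board reduction scheme is not described in the text."""
    return (codes_12bit >> 2).astype(np.uint16)

voltages = np.array([0.10, 1.65, 3.20])   # hypothetical detector output voltages (V)
print(reduce_to_10bit(quantize_12bit(voltages)))
```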

Figure 11 shows a sample simulated IR image. In the IR scene, cloud and atmosphere have been generated with the SDSU code. The effects of the optics and of the detector have been included in the E2E simulation software to obtain the product that the IR-Camera would provide. The IR scene has been divided into two frames in order to apply the filter centered at 10.8 μm to the left side and the filter centered at 12 μm to the right side of the image.
Fig. 11

Image of a cloud simulated with SDSU + the IR-Camera E2E (140 × 140 pixels, 1 km resolution). Notice the small change in contrast between the left side (10.8 μm band) and the right side (12 μm band)

5 The IR-camera cloud top-height reconstruction algorithms

The reconstruction of the Cloud Top Height (CTH) can be performed using stereo vision algorithms that require two different views of the same scene (Section 5.1). Preliminary studies using radiometric information (Section 5.2) are being carried out to develop algorithms that determine the cloud top temperature, and from it the cloud top height, in the JEM-EUSO FoV. An inter-satellite comparison of cloud top heights is presented in Section 5.3.

5.1 Stereo vision algorithm

The idea of using stereo techniques to estimate the CTH from space for Earth observation is present in the recent literature. Only in the last ten years has it become a real alternative to the standard methods. The new polar multi-view instruments currently active in space, and the faster scans and higher pixel resolution of the geostationary satellite instruments, have improved the studies and results in this field [27, 28]. Comparative studies have been performed considering different scenarios, sensors, and methods; the techniques are discussed in [29-34]. Advantages and weaknesses depend on the geometry of the stereo system, the sensor specifications, the algorithms and, finally, the bands used. Although the conclusions cannot be generalized, it can be stated that the image contrast, the cloud boundaries, and the cloud optical thickness under different conditions are sources of bias. It is worth pointing out that cloud top height retrieval methods other than stereo (e.g., radiative methods) may suffer from the same problems. Stereo vision, in general, attempts to infer the 3D structure and distance of a scene from two images taken by two spatially separated cameras. Image processing algorithms are then applied to extract the 3D information. Each camera gets a different view of the same object, and the parallax effect (disparity in the following) is used to reconstruct the depth, i.e. the distance of the object from the visual sensor.

The main steps of a stereo algorithm can be summarized as follows:
  • Correspondence problem: search for the best match between the pixels of both images that result projections of the same scene element. That means finding the apparent displacement of a point from one image to the second image (disparity).

  • Reconstruction problem: given a number of corresponding points on both images (the disparity map) and information on the geometry of the stereo system, recover the 3D location and structure of the observed objects.

This is the general stereo approach currently under test for the JEM-EUSO mission. The 'JEM-EUSO stereo system' is provided by the ISS, the IR-Camera, and the ISS movement, and it is constrained by the mission requirements. Instead of having two different cameras, the stereo imaging is accomplished by one camera moving over the observed scene, exploiting the ISS displacement. The scene is thus imaged from two different views, and the intersection is processed to retrieve the distance from the IR device. Finally, the CTH is obtained by subtracting the estimated depth from the known ISS altitude. According to the specifications of the IR sensor, when it takes an image, half of the scene is observed in the first band and the other half in the second band. In the next shot, the scene appears displaced in such a way that the part acquired in the first image by band B1 is taken in the second image by band B2. Referring to Fig. 12, the cloud in the middle of the scene lies in the intersection of the views and is imaged both in band B1 and in band B2 (see the image plane). In the following shots, the cloud on the right will be imaged in the same way, and the whole FoV will be completely covered. In Fig. 12, the parallax effect, seen as an apparent motion of the cloud in the middle of the scene, is also visible, highlighted in the overlapped part of the two views in the image plane. Figure 13 shows how the stereo height reconstruction works. The depth of each scene point is recovered by triangulating the corresponding pairs of projected pixels detected in the matching step. Finally, the height of each image pixel is calculated by subtracting the depth value from the ISS altitude.
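A minimal sketch of this triangulation step, assuming a rectified pinhole geometry with the flight-design focal length and pixel pitch quoted earlier; the ISS altitude, the baseline (the ISS displacement between two shots), and the disparity values are illustrative assumptions, not mission figures:

```python
import numpy as np

def depth_from_disparity(disparity_px, baseline_m, focal_length_m, pixel_pitch_m):
    """Standard rectified-stereo triangulation: depth = f * B / (disparity in meters)."""
    disparity_m = np.asarray(disparity_px, dtype=float) * pixel_pitch_m
    return focal_length_m * baseline_m / disparity_m

def cloud_top_height(disparity_px, iss_altitude_m, baseline_m,
                     focal_length_m=19.10e-3, pixel_pitch_m=25e-6):
    """CTH = ISS altitude minus the triangulated depth of the cloud pixel."""
    depth = depth_from_disparity(disparity_px, baseline_m, focal_length_m, pixel_pitch_m)
    return iss_altitude_m - depth

# Illustrative numbers: ~400 km altitude, ~120 km between consecutive shots
disparities = np.array([230.0, 232.0, 235.0])        # matched disparities in pixels
cth_m = cloud_top_height(disparities, iss_altitude_m=400e3, baseline_m=120e3)
print(cth_m / 1e3)                                   # roughly 1.4, 4.8 and 9.9 km
```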
Fig. 12

’JEM-EUSO Stereo System’. The scheme shows the overlap needed for stereo vision to be applied. As the IR-Camera moves to the right, consecutive images overlap in different bands

Fig. 13

’Stereo Reconstruction’. The scheme shows the height reconstruction for two cloudy pixels P1 and P2. Their projection points (P1′, P1 and P2′, P2, respectively) are detected in the matching step and the depth is recovered by triangulation. Finally, the heights are calculated by subtracting these values from the ISS altitude

Multispectral stereo algorithms are currently under test. However, as a first step of the study, a slightly modified version of the method in [35] has been applied to mono-band satellite stereo pairs. At the bottom of Fig. 14, an example of a disparity map is reported, obtained by applying the method to the 'stereo system' composed of the Meteosat Second Generation geostationary satellites MSG-8 and MSG-9 (one of the input images is shown at the top of the figure). Although their baseline does not allow the height reconstruction to be obtained with good accuracy, it can be used as a test of the disparity estimation, which is a crucial step for the final height estimation. The two images are nearly synchronous, with a resolution worse than that of the IR-Camera but in the same bands. Areas with the same disparity grey level represent points of the scene at the same distance from the camera: the more distant an object is, the smaller the disparity and the darker the colour. The preliminary results on the disparity estimation show good agreement with expectations. Further tests are in progress on different stereo satellite configurations and sensors.
Fig. 14

Disparity map. Top: one of the MSG images used to estimate the map. Bottom: the corresponding disparity map, where points having the same depth have the same gray level. The brightest points are the closest to the sensor and, therefore, the highest above the ground

5.2 Radiometric temperature retrieval algorithms

Methods based on radiative measurements of a target (land or sea surface, clouds, etc.) can be applied to retrieve its temperature. However, due to atmospheric effects, the IR radiance emitted by those targets is not fully received by the IR-Camera, because atmospheric gases such as CO2, H2O, CH4 and O3 absorb IR radiation. Therefore, the brightness temperature retrieved from the measured radiance is not the true temperature of the cloud. Consequently, it is necessary to implement strategies to correct the atmospheric effects and to retrieve the real cloud temperature. This is the aim of the temperature retrieval algorithms. The objectives of this part of the work are: first, to develop an algorithm that retrieves the cloud temperature from the brightness temperatures in the two channels of the IR-Camera; then, to test the performance of the algorithm on simulated cloud scenarios whose temperature is known; and, finally, to apply the algorithm to real MODIS data in order to compare our retrieved cloud temperature with that provided by MODIS. In this sense, MODIS is used as an external instrument that provides us with real scenarios.

The Split Window Algorithms (SWAs) use the brightness temperatures in two channels defined in the 8-14 μm spectral region to retrieve the real temperature of a target. A large number of works are devoted to measuring the land surface temperature from space using SWAs (see, for example, [36, 37, 38]); the mean standard error of these algorithms is around 1-2 K. Other SWAs focus on the retrieval of the sea surface temperature, with a mean standard error of around 0.5-1 K [39, 40]. There are also many instruments devoted to measuring cloud properties; most of them use the multispectral information of IR channels, and some also use visible channels. For example, the AVHRR, on board the NOAA satellites, has 6 channels, three of them in the IR region (channels 4 and 5 in the 8-12 μm band); MODIS, on board the AQUA and TERRA satellites, has 36 channels (channels 31 and 32 correspond to the JEM-EUSO IR-Camera bands); and ATSR, on board the ERS satellites, has 7 channels (channels 3 and 4 in the 8-12 μm band).

Since the baseline of the IR-Camera is bi-spectral, with two 1 μm-wide bands centered at 10.8 and 12 μm (hereafter referred to as B1 and B2, respectively), we can develop an SWA to retrieve the Cloud Top Temperature (CTT) of water clouds from the brightness temperatures measured by the IR-Camera in these bands.

The retrieval algorithms are based on radiometric simulations of the physical problem: the emitters (Earth surface, cloud, and atmosphere) and the interfering atmosphere (absorbing gases, mainly water vapor). The simulations were performed by means of the radiative transfer equation and the MODTRAN atmospheric simulation code, which is used to calculate the transmittance of the atmospheric gases in the spectral bands of the IR-Camera using average vertical profiles of temperature and gas concentrations corresponding to standard atmospheric models [41]. As a first approximation to the problem, an emissivity equal to 1 has been assumed in the radiative transfer equation, which means that the simulations represent thick clouds.

In order to retrieve the temperature of thick water clouds, the SWA has been derived from simulations of thick clouds (emissivity 1) at different heights (from 0.5 to 12 km in 0.5 km steps) and in different atmospheric conditions (different concentrations of atmospheric gases). Equation (1) shows this algorithm.
$$ T_{r}(B_{1},B_{2}) = -0.53819 + 2.6331 T_{B_{1}} - 1.6305 T_{B_{2}} $$
(1)

where Tr is the cloud temperature retrieved by the SWA, and \(T_{B_{1}}\) and \(T_{B_{2}}\) are the brightness temperatures in the bands centered at 10.8 μm and 12 μm, respectively.
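A minimal sketch of applying Eq. (1); the brightness-temperature values used below are hypothetical inputs, chosen only to illustrate the call:

```python
import numpy as np

def swa_cloud_temperature(t_b1, t_b2):
    """Split Window Algorithm of Eq. (1): cloud temperature from the brightness
    temperatures in the 10.8 um (B1) and 12 um (B2) bands, all in kelvin."""
    return -0.53819 + 2.6331 * np.asarray(t_b1) - 1.6305 * np.asarray(t_b2)

# Hypothetical brightness temperatures for a thick water cloud (illustrative values)
t_b1 = np.array([265.0, 270.0])
t_b2 = np.array([264.2, 269.1])
print(swa_cloud_temperature(t_b1, t_b2))
```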

Further scenarios, with atmospheric conditions different from those used for the algorithm design, have been simulated in order to test the SWA. The SWA has been applied successfully to water clouds with emissivity 1. Figure 15 shows the errors obtained when the SWA is applied to the different simulated cloudy scenarios.
Fig. 15

The temperature retrieval errors for the different scenarios. The X axis represents the height of the cloud in each simulation; the Y axis represents the retrieval error

The X axis represents the cloud top height, and the Y axis the retrieval error, defined as the difference between the temperature retrieved by the algorithm and the temperature used in the simulation. Different points at the same height correspond to clouds at that height with different atmospheric profiles of temperature and water vapor distribution. As can be seen in Fig. 15, the errors are larger for low clouds (0.5 km) and for atmospheres with high water vapor content (water vapor concentrates in the lower layers). However, the errors stay below 0.3 K, which is a very good result.

In order to apply the SWA to real data, MODIS images have been used. MODIS is an instrument with high capabilities (36 spectral bands) that supplies a wealth of information [24]. From all the products that MODIS provides, we have used only a few: the brightness temperatures in the 11 and 12 μm bands (to apply our SWA), the cloud top temperature (to compare with the CTT retrieved by the SWA), and the cloud phase and emissivity images (to analyze and interpret the results). Since the MODIS bands are quite similar to ours, we can apply our SWA to the images of those bands to obtain a cloud top temperature image. This image and the cloud top temperature image provided by MODIS are then subtracted. The difference cannot be considered the algorithm error, since MODIS does not provide the real CTT; however, we can take the difference between the CTT retrieved by our algorithm and the CTT given by MODIS as an indicator of the accuracy of the algorithm, even though MODIS has shown some discrepancies with other sensors [42].

As an example, this procedure has been applied to a MODIS image of the Southern Hemisphere (Pacific Ocean) at 40°S-60°S latitude and 160°W-120°W longitude (16/05/2012, 09:55 UTC). Figure 16 shows the corresponding image of CTT differences. White pixels correspond to clear sky. The range of TMODIS − TSWA differences is wide. To explain those values, the TMODIS − TSWA image has been compared with the phase and emissivity images.
Fig. 16

TMODIS − TSWA differences calculated when the SWA is applied to the MODIS images in the 11 and 12 μm bands

Figure 17 (left image) shows the cloud phase, in which blue denotes water, cyan is assigned to ice, yellow is used for pixels with a mixture of water and ice, and red corresponds to an unknown phase. The right image of Fig. 17 displays the emissivity image. From a comparison of Figs. 16 and 17, it is clear that the correlation between the TMODIS − TSWA differences and the cloud phase is strong. TMODIS − TSWA values for the water phase stay below 1 K for 85 % of the pixels, and values higher than 1 K are related to ice pixels or to pixels with an effective emissivity lower than 1.
Fig. 17

Image on the left is the cloud phase product provided by MODIS. On the right, the cloud emissivity image, also provided by MODIS

Summarizing, the SWA is able to retrieve the temperature of thick water clouds with high accuracy, but it is not applicable to thin water clouds or ice clouds, as expected, since the SWA was designed only for thick water clouds (emissivity 1). Nevertheless, other strategies are being developed to retrieve the temperature of thin clouds and to identify ice clouds. The brightness temperature difference (BTD) between the bands can be used to classify clouds [43, 44] and to retrieve their microphysical properties by using so-called look-up tables (LUTs) [45]; this procedure has also been applied to calculate the cloud temperature and the emissivity [46]. The procedure to be applied to the JEM-EUSO IR-Camera to retrieve the CTT and the emissivity has been designed and is in the validation phase, giving promising results.
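A minimal sketch of a BTD-based flag of this kind, with a purely illustrative threshold; the actual classification relies on the LUTs of the cited works, which are not reproduced here:

```python
import numpy as np

def btd_cloud_flag(t_b1, t_b2, threshold_k=1.5):
    """Flag pixels whose brightness temperature difference (B1 - B2) exceeds a
    threshold, as a crude indicator of thin or ice clouds. The 1.5 K threshold
    is illustrative only; the real classification uses the cited LUTs."""
    btd = np.asarray(t_b1) - np.asarray(t_b2)
    return btd, btd > threshold_k

t_b1 = np.array([265.0, 250.0, 280.0])   # hypothetical 10.8 um brightness temperatures (K)
t_b2 = np.array([264.5, 247.2, 279.8])   # hypothetical 12 um brightness temperatures (K)
btd, suspect_thin_or_ice = btd_cloud_flag(t_b1, t_b2)
print(btd, suspect_thin_or_ice)
```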

5.3 Inter-satellite comparison of cloud height retrieval

The JEM-EUSO IR-camera is intended for real-time, fast-response applications during cosmic ray detection. The camera provides a valuable quality check of the cosmic ray energy estimations, discriminating high- and middle-level clouds from the total of the clouds detected; later, during off-line event analysis, all available satellite-based cloud observations are used to get the most detailed and reliable scenario. From the perspective of satellite-based cloud observation, inter-satellite comparison is a common way to quantify uncertainties in the observations. These comparisons reduce the effects of certain types of sampling biases, including those introduced by the attenuation of surface-based lidar and cloud radar in thick and precipitating clouds; they provide a larger and statistically robust set of observations for comparison and facilitate near-global sampling for most types of clouds. Moreover, meteorological imagers use infrared bands close to those of the JEM-EUSO camera, allowing new algorithms designed for JEM-EUSO to be tested. Since these imagers have higher resolution and accuracy, the performance of the obtained algorithm represents an upper limit with respect to the JEM-EUSO infrared camera. To assess cloud top height uncertainties, an IR scene obtained from SEVIRI (on board the Meteosat-9 satellite) has been analyzed and the CTH has been compared with those derived from MODIS (on board NASA's Aqua satellite) and CALIOP (the main instrument of the CALIPSO satellite constellation). The first two sensors are infrared radiometers [47, 48], and the third is a three-channel LIDAR that uses a Nd:YAG laser emitting linearly polarized pulses of light at 1064 nm and 532 nm. Data from atmospheric radiosoundings are used to convert the measured cloud temperature into the related cloud height.

The analyzed scene was collected by SEVIRI on July 10, 2011, over the Gulf of Guinea, close to the sub-satellite point. The cloud top height estimated by MODIS has been obtained from the MYD06L2 cloud product, relative to the Aqua acquisition at 12:05 UTC. The CTHs derived from SEVIRI have been obtained with two different estimations of the cloud top temperature:
  • T1cloud = BT @ 10.8 μm without any correction

  • T2cloud = 1.0178 · (BT @ 10.8 μm) − 4.149

The T2cloud correction has been derived from radiative simulations of different atmospheres and of thick clouds at different levels. As for the SWA, the radiative transfer equation and the MODTRAN atmospheric simulation code have been used in the simulations. The scene is characterized by a sea surface underlying a compact deck of low and middle clouds. The warm sea surface improves the cloud detection, due to the large difference with the cloud temperature; for this reason, pixels with BT lower than 289.15 K have been considered cloudy. The SEVIRI BT data for cloudy pixels have been converted into CTHs (above sea level), deriving the related height profile from the atmospheric radiosounding performed at St. Helena (latitude −15.93°, longitude −5.66°) on the same day at 12:00 UTC. To compare the different observations, all satellite-based data have been re-sampled to the SEVIRI ground resolution and georeferenced to the WGS84 geographical coordinate system. Figure 18 shows the CTHs from SEVIRI and MODIS for the analyzed scene. The CTHs derived from the different sensors, once the T1cloud or the T2cloud expressions are applied, are in good agreement, with a uniform deck of low and middle clouds occupying the largest part of the scene. The highest, convective, and irregular clouds, up to 10 km ASL, are visible in the south-west part of the scene.
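A minimal sketch of this brightness-temperature-to-height conversion, using the T2cloud correction and the 289.15 K cloud threshold quoted above; the radiosounding profile below is synthetic and illustrative, not the St. Helena sounding:

```python
import numpy as np

def bt_to_cth(bt_108, profile_height_km, profile_temp_k,
              cloud_bt_threshold_k=289.15, apply_t2=True):
    """Convert 10.8 um brightness temperatures of cloudy pixels into cloud top heights
    by matching the (optionally corrected) cloud temperature against a radiosounding
    temperature profile. Profile arrays must be ordered by increasing height."""
    bt = np.asarray(bt_108, dtype=float)
    cloudy = bt < cloud_bt_threshold_k                     # colder than the warm sea surface
    t_cloud = 1.0178 * bt - 4.149 if apply_t2 else bt      # T2cloud correction from the text
    # Temperature decreases with height in the troposphere, so flip for interpolation
    cth = np.interp(t_cloud, profile_temp_k[::-1], profile_height_km[::-1])
    return np.where(cloudy, cth, np.nan)                   # NaN marks clear pixels

# Illustrative (not measured) profile: 0-12 km with a ~6.5 K/km lapse rate
heights = np.arange(0.0, 12.5, 0.5)
temps = 298.0 - 6.5 * heights
print(bt_to_cth(np.array([292.0, 285.0, 270.0, 255.0]), heights, temps))
```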
Fig. 18

CTHs from MODIS (left) and SEVIRI (right) on July 10, 2011

Comparing only middle and low clouds, both T1cloud and T2cloud show an overestimation of the SEVIRI-derived heights with respect to the MODIS-derived ones. The overestimation is reduced from a mean bias of 206 m for the uncorrected BT to a mean bias of 59 m for the corrected BT. The standard deviation is almost constant for both algorithms (339 m and 369 m, respectively). When high irregular clouds are considered, the mean bias and the standard deviation increase to 300 m and 1,500 m, respectively.

The increase of the discrepancy can be partially attributed to the algorithm, which was designed to retrieve thick water clouds: high clouds can be composed of ice or have a mixed phase, and the algorithm cannot be applied with sufficient accuracy to ice clouds or to the thin and broken water clouds found at the cloud edges. A large part of the change is also due to the cloud boundaries and to the different sampling areas of the sensors. As mentioned before, the algorithm was designed for thick water clouds, and at the boundaries there is a high probability of finding thin and broken clouds. For this reason, a Sobel edge operator [49] has been applied to the scene to detect the cloud boundaries and select uniformly covered regions.

The CTHs derived from SEVIRI have also been compared with the cloud top heights provided by the CALIOP LIDAR. The scene is characterized by high clouds at latitudes from 10°N to 5°S and widespread low clouds from 5°S to 30°S (Fig. 19). Unfortunately, the satellite passed over the Gulf of Guinea at 01:12 UTC, several hours before the SEVIRI scene at 10:59 UTC. Considering the observation time, the atmospheric radiosounding from Cape Town (latitude −33.96°, longitude 18.6°) on July 10, 2011 at 00:00 UTC has been used to estimate the CTHs. The comparison between the CTHs derived from SEVIRI and CALIOP reveals uncertainties, pending a deeper analysis, ranging from 500 m for low continuous clouds to about 900 m for high clouds. This is in good agreement with the previous comparison with the MODIS estimates.
Fig. 19

CALIPSO feature mask on July 10, 2011, 01:05 UTC. Latitudes are indicated as negative values

Edge detection through the gradient calculation can serve as a pre-selection method to exclude the critical areas (broken or thin clouds), while the mean bias can be reduced by comparing the CTHs derived from the brightness temperature with LIDAR data. The analysis also shows that atmospheric radiosoundings are crucial to monitor the status of the atmosphere. Due to the scarcity of these observations, the use of mesoscale numerical weather prediction (NWP) models, such as the Weather Research and Forecasting (WRF) model [50], to reconstruct the atmospheric conditions is strongly recommended.

6 Conclusions

In the UHECR regime observed by JEM-EUSO, above 10^19 eV, the presence of clouds is an important characteristic of the atmosphere. Therefore, the monitoring of the cloud coverage by the JEM-EUSO Atmospheric Monitoring System (AMS) is crucial for estimating the effective exposure with high accuracy and for the atmospheric calibration needed in the analysis of UHECR events just above the threshold energy of the telescope.

From the prototype tests, we can conclude that this detector appears to be a good choice for our mission. The response of the ULIS μbolometer to the incident radiation is consistent with the data reported by the manufacturer and with theoretical predictions. Therefore, the design of the camera should continue using this prototype as the baseline, provided that the requirements do not change and that advances in electronics do not bring a better detector in the upcoming years.

The development of the End-to-End (E2E) simulation is ongoing, extending the model and covering each area of the design in more depth. Our objective is to assess the impact of several design characteristics, to study the detection error in detail, and to provide feedback to the IR-Camera design so that the necessary changes can be tested at the various stages of development.

The SWA is able to retrieve the temperature of thick water clouds with high accuracy, but it is not applicable to thin water clouds or to ice clouds, as expected since the SWA was designed only for thick water clouds. Other strategies are being developed to retrieve the temperature of thin clouds and to identify ice clouds. A methodology to retrieve the emissivity and temperature of thin clouds, based on Look-Up Tables, is in the validation phase with promising results. The possibility of using the Brightness Temperature Difference between the bands to discriminate ice clouds is being investigated and appears useful for identifying thin ice clouds. Nevertheless, the comparison with MODIS results is not definitive; therefore, an inter-satellite comparison of cloud-top height retrievals has been carried out.
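
As a rough illustration of the brightness-temperature-difference test mentioned above, the sketch below flags candidate thin/ice clouds when the inter-band BTD exceeds a threshold; the 2 K threshold, the band choice, and the function name are assumptions for illustration and are not values adopted by the JEM-EUSO analysis.

```python
# Hedged sketch of the brightness-temperature-difference (BTD) idea: thin ice clouds
# tend to show a larger difference between the two split-window bands (e.g. ~10.8 and
# ~12 um) than opaque water clouds. The 2 K threshold here is an assumed placeholder.
import numpy as np

def ice_cloud_flag(bt_band1_k, bt_band2_k, btd_threshold_k=2.0):
    """Flag pixels whose band-to-band BTD exceeds the assumed threshold as candidate
    thin/ice clouds, to be excluded from the split-window water-cloud retrieval."""
    btd = np.asarray(bt_band1_k, dtype=float) - np.asarray(bt_band2_k, dtype=float)
    return btd > btd_threshold_k

# Example: a pixel with BT(10.8 um) = 245 K and BT(12 um) = 241 K is flagged.
print(ice_cloud_flag(245.0, 241.0))   # True
```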

The full design, prototyping, space-qualified construction, assembly, verification, and integration of the IR-Camera are the responsibility of the Spanish Consortium involved in JEM-EUSO. At this stage of development, the System Preliminary Design Review (SPDR) of the infrared camera has been successfully completed.

Notes

Acknowledgements

This work is supported by the Spanish Government MICINN & MINECO under the Space Program projects AYA2009-06037-E/AYA, AYA-ESP 2010-19082, AYA-ESP 2011-29489-C03, AYA-ESP 2012-39115-C03, and CSD2009-00064 (Consolider MULTIDARK), and by Comunidad de Madrid under project S2009/ESP-1496. M. D. Rodríguez Frías deeply acknowledges the Instituto de Astrofísica de Canarias (IAC) for the grant under the "Excellence Severo Ochoa Program" to perform a mid-term visit at this institute (Tenerife, Canary Islands).

References

  1. Adams, J.H., et al. (JEM-EUSO Collaboration): An evaluation of the exposure in nadir observation of the JEM-EUSO mission. Astropart. Phys. 44, 76–90 (2013)
  2. Rodríguez Frías, M.D. (for the JEM-EUSO Collaboration): The JEM-EUSO Space Mission @ forefront of the highest energies never detected from Space. In: Proceedings of the Workshop on Multifrequency Behaviour of High Energy Cosmic Sources, MdSAI, vol. 83, pp. 337–341 (2012)
  3. Rodríguez Frías, M.D., et al. (JEM-EUSO Collaboration): The JEM-EUSO Space Mission: Frontier Astroparticle Physics @ ZeV range from Space. Nova Science Publishers, Inc., ISBN 978-1-62618-998-0 (2014). In press
  4. Sáez-Cano, G., Morales de los Ríos, J.A., Prieto, H., del Peral, L., Pacheco Gómez, N., Hernández Carretero, J., Shinozaki, K., Fenu, F., Santangelo, A., Rodríguez Frías, M.D. (JEM-EUSO Collaboration): Observation of ultra-high energy cosmic rays in cloudy conditions by the JEM-EUSO space observatory. In: Proceedings of the 32nd International Cosmic Ray Conference (ICRC), Beijing, vol. 3, p. 231 (2011). (Preprint) arXiv:1204.5065
  5. Sáez-Cano, G., et al.: Observation of ultra-high energy cosmic rays in cloudy conditions by the space-based JEM-EUSO observatory. J. Phys. Conf. Ser. 375, 052010 (2012)
  6. Sáez-Cano, G., Shinozaki, K., del Peral, L., Bertaina, M., Rodríguez Frías, M.D.: Observation of extensive air showers in cloudy conditions by the JEM-EUSO space mission. Adv. Space Res., special issue "Centenary of the discovery of the cosmic rays". In press (2014)
  7. The JEM-EUSO Collaboration: Calibration aspects of the JEM-EUSO mission, this edition (2014)
  8. The JEM-EUSO Collaboration: JEM-EUSO observation in cloudy conditions, this edition (2014)
  9. The JEM-EUSO Collaboration: The Atmospheric Monitoring System of the JEM-EUSO instrument, this edition (2014)
  10. Wada, S., Ebisuzaki, T., Ogawa, T., Sato, M., Peter, T., Mitev, V., Matthey, R., Anzalone, A., Isgrò, F., Tegolo, D., Colombo, E., Morales de los Ríos, J.A., Rodríguez Frías, M.D., Park, I., Nam, S., Park, J. (JEM-EUSO Collaboration): Potential of the Atmospheric Monitoring System of the JEM-EUSO Mission. In: Proceedings of the 31st International Cosmic Ray Conference (2009)
  11.
  12. Rodríguez Frías, M.D. (JEM-EUSO Collaboration): The Atmospheric Monitoring System of the JEM-EUSO space mission. In: Proceedings of the International Symposium on Future Directions in UHECR Physics, CERN. Eur. Phys. J. 53(10005), 1–7 (2013). doi:10.1051/epjconf/20135310005
  13. Neronov, A., Wada, S., Rodríguez Frías, M.D., et al.: Atmospheric Monitoring System of JEM-EUSO. In: Proceedings of the 32nd International Cosmic Ray Conference (ICRC), Beijing, vol. 6, p. 332 (2011). (Preprint) arXiv:1204.5065
  14. Rodríguez Frías, M.D., et al.: Towards the Preliminary Design Review of the Infrared Camera of the JEM-EUSO space mission. In: Proceedings of the 33rd International Cosmic Ray Conference (ICRC), Rio de Janeiro (2013). (Preprint) arXiv:1307.7071
  15. Morales de los Ríos, J.A., et al.: The IR-Camera of the JEM-EUSO Space Observatory. In: Proceedings of the 32nd International Cosmic Ray Conference (ICRC), Beijing, vol. 11, p. 466 (2011)
  16. UL 04 17 1 640x480 VGA LWIR uncooled microbolometer. Data sheet, ULIS proprietary (UL 04 17 1/15.10.07/UP/DVM/NTC07 1010-4 rev. 4)
  17. CODE V Optical Design Software, http://www.opticalres.com/
  18. IRXCAM Control Application Software User Manual, INO, September 2010
  19.
  20. Infrared Reference Source DCN 1000 User's Manual, HGH Systèmes Infrarouges (version August 2010)
  21. Morales de los Ríos, J.A., Joven, E., del Peral, L., Reyes, M., Licandro, J., Rodríguez Frías, M.D.: The infrared camera prototype characterization for the JEM-EUSO space mission. Nucl. Instrum. Methods Phys. Res. A 749, 74–83 (2014). ISSN 0168-9002. doi:10.1016/j.nima.2014.02.0050
  22. Morales de los Ríos, J.A., et al.: An End-to-End simulation code for the IR-Camera of the JEM-EUSO Space Observatory. In: Proceedings of the 33rd International Cosmic Ray Conference (ICRC), Rio de Janeiro (2013)
  23. Masunaga, H., et al.: Satellite Data Simulator Unit: a multisensor, multispectral satellite simulator package. Am. Meteorol. Soc. (2010)
  24. MODIS (Moderate Resolution Imaging Spectroradiometer), NASA websites: http://modis.gsfc.nasa.gov/, http://ladsweb.nascom.nasa.gov/data/search.html
  25. CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation), NASA website: http://www-calipso.larc.nasa.gov/, CNES website: http://smsc.cnes.fr/CALIPSO/
  26. HP Labs LOCO-I/JPEG-LS, http://www.hpl.hp.com/loco/
  27. Moroney, C., et al.: IEEE Trans. Geosci. Remote Sens. 40(7), 1532–1540 (2002). doi:10.1109/TGRS.2002.801150
  28. Muller, J.P., et al.: Int. J. Remote Sens. 28(9), 1921–1938 (2007). doi:10.1080/01431160601030975
  29. Naud, C., Muller, J.P., Clothiaux, E.E.: Comparison of cloud-top heights derived from MISR stereo and MODIS CO2-slicing. Geophys. Res. Lett. 29(16) (2002)
  30. Naud, C., Baum, B., Bennartz, R., Fischer, J., Frey, R., Menzel, P., Muller, J.P., Preusker, R., Zhang, H.: Inter-comparison of MERIS, MODIS and MISR cloud top heights. In: Proceedings of the ESA MERIS Workshop, Frascati (2003)
  31. Naud, C., Muller, J.P., Haeffelin, M., Morille, Y., Delaval, A.: Assessment of MISR and MODIS cloud top heights through inter-comparison with a back-scattering lidar at SIRTA. Geophys. Res. Lett. 31(4) (2004)
  32. Naud, C., et al.: Inter-comparison of multiple years of MODIS, MISR and radar cloud-top heights. Ann. Geophys. 23, 2415–2424 (2005)
  33. Nauss, T., Kokhanovsky, A.A., Nakajima, T.Y., Reudenbach, C., Bendix, J.: Atmos. Res. 78, 46–78 (2005)
  34. Marchand, R.T., Ackerman, T.P., Moroney, C.: An assessment of Multiangle Imaging Spectroradiometer (MISR) stereo-derived cloud top heights and cloud top winds using ground-based radar, lidar, and microwave radiometers. J. Geophys. Res. 112, D06204 (2007). doi:10.1029/2006JD007091
  35. Manizade, K.F., Spinhirne, J.D.: IEEE Trans. Geosci. Remote Sens. 44(9), 2481–2491 (2006)
  36. Qin, Z., Dall'Olmo, G., Karnieli, A., Berliner, P.: Derivation of split window algorithm and its sensitivity analysis for retrieving land surface temperature from NOAA-Advanced Very High Resolution Radiometer data. J. Geophys. Res. Atmos. (1984–2012) 106(D19), 22655 (2001)
  37. Ouaidrari, H., Goward, S., Czajkowski, K., Sobrino, J., Vermote, E.: Land surface temperature estimation from AVHRR thermal infrared measurements: An assessment for the AVHRR Land Pathfinder II data set. Remote Sens. Environ. 81(1), 114 (2002)
  38. Zhao, S., Qin, Q., Yang, Y., Xiong, Y., Qiu, G.: Comparison of two split-window methods for retrieving land surface temperature from MODIS data. J. Earth Syst. Sci. 118(4), 345 (2009)
  39. Merchant, C., Le Borgne, P., Marsouin, A., Roquet, H.: Optimal estimation of sea surface temperature from split-window observations. Remote Sens. Environ. 112(5), 2469 (2008)
  40. Sobrino, J., Li, Z., Stoll, M., Becker, F.: Multi-channel and multi-angle algorithms for estimating sea and land surface temperature with ATSR data. Int. J. Remote Sens. 17(11), 2089 (1996)
  41. Briz, S., de Castro, A.J., Fernández-Gómez, I., Rodríguez, I., López, F. (JEM-EUSO Collaboration): Remote sensing of water clouds temperature with an IR camera on board the ISS in the frame of the JEM-EUSO Mission. In: Comeron, A., et al. (eds.) Remote Sensing of Clouds and the Atmosphere XVIII; and Optics in Atmospheric Propagation and Adaptive Systems XVI, Proceedings of SPIE, vol. 8890, 88900K-1–12 (2013)
  42. Genkova, I., Seiz, G., Zuidema, P., Zhao, G., Di Girolamo, L.: Cloud top height comparisons from ASTER, MISR and MODIS for trade wind cumuli. Remote Sens. Environ. 107, 211–222 (2007)
  43. Inoue, T.: A cloud type classification with NOAA 7 split-window measurements. J. Geophys. Res. Atmos. (1984–2012) 92(D4), 3991–4000 (1987)
  44. Ackerman, S.A., Strabala, K.I., Menzel, P.W., Frey, R.A., Moeller, C.C., Gumley, L.E.: Discriminating clear sky from clouds with MODIS. J. Geophys. Res. 103(D24), 32141–32157 (1998)
  45. Nauss, T., Kokhanovsky, A., Nakajima, T., Reudenbach, C., Bendix, J.: The inter-comparison of selected cloud retrieval algorithms. Atmos. Res. 78(1), 46–78 (2005)
  46. Hamada, A., Nishi, N.: Development of a cloud-top height estimation method by geostationary satellite split-window measurements trained with CloudSat data. J. Appl. Meteorol. Climatol. 49, 2035–2049 (2010)
  47. Schmetz, J., et al.: BAMS 83, 977 (2002). doi:10.1175/BAMS-83-7-Schmetz-1
  48.
  49. Vincent, O.R., Folorunso, O.: A descriptive algorithm for Sobel image edge detection. In: Proceedings of the Informing Science and IT Education Conference (InSITE) (2009)
  50. Michalakes, J., Dudhia, J., Gill, D., Henderson, T., Klemp, J., Skamarock, W., Wang, W.: The Weather Research and Forecast Model: Software architecture and performance. In: Mozdzynski, G. (ed.) Proceedings of the 11th ECMWF Workshop on the Use of High Performance Computing in Meteorology, 25–29 October 2004, Reading, UK (2004)

Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  • The JEM-EUSO Collaboration
  • J. H. AdamsJr.
    • 76
  • S. Ahmad
    • 3
  • J. -N. Albert
    • 2
  • D. Allard
    • 4
  • L. Anchordoqui
    • 78
  • V. Andreev
    • 77
  • A. Anzalone
    • 18
    • 24
  • Y. Arai
    • 48
  • K. Asano
    • 46
  • M. Ave Pernas
    • 67
  • P. Baragatti
    • 25
  • P. Barrillon
    • 2
  • T. Batsch
    • 59
  • J. Bayer
    • 9
  • R. Bechini
    • 22
  • T. Belenguer
    • 66
  • R. Bellotti
    • 11
    • 12
  • K. Belov
    • 77
  • A. A. Berlind
    • 80
  • M. Bertaina
    • 21
    • 22
  • P. L. Biermann
    • 7
  • S. Biktemerova
    • 61
  • C. Blaksley
    • 4
  • N. Blanc
    • 70
  • J. Błȩcki
    • 60
  • S. Blin-Bondil
    • 3
  • J. Blümer
    • 7
  • P. Bobik
    • 64
  • M. Bogomilov
    • 1
  • M. Bonamente
    • 76
  • M. S. Briggs
    • 76
  • S. Briz
    • 68
  • A. Bruno
    • 11
  • F. Cafagna
    • 11
  • D. Campana
    • 16
  • J. -N. Capdevielle
    • 4
  • R. Caruso
    • 13
    • 24
  • M. Casolino
    • 49
    • 19
  • C. Cassardo
    • 21
    • 22
  • G. Castellinic
    • 14
  • C. Catalano
    • 5
  • G. Catalano
    • 18
    • 24
  • A. Cellino
    • 21
    • 23
  • M. Chikawa
    • 30
  • M. J. Christl
    • 79
  • D. Cline
    • 77
  • V. Connaughton
    • 76
  • L. Conti
    • 25
  • G. Cordero
    • 54
  • H. J. Crawford
    • 73
  • R. Cremonini
    • 22
  • S. Csorna
    • 80
  • S. Dagoret-Campagne
    • 2
  • A. J. de Castro
    • 68
  • C. De Donato
    • 19
  • C. de la Taille
    • 3
  • C. De Santis
    • 19
    • 20
  • L. del Peral
    • 67
  • A. Dell’Oro
    • 21
    • 23
  • N. De Simone
    • 19
  • M. Di Martino
    • 21
  • G. Distratis
    • 9
  • F. Dulucq
    • 3
  • M. Dupieux
    • 1
  • A. Ebersoldt
    • 7
  • T. Ebisuzaki
    • 49
  • R. Engel
    • 7
  • S. Falk
    • 7
  • K. Fang
    • 74
  • F. Fenu
    • 9
  • I. Fernández-Gómez
    • 68
  • S. Ferrarese
    • 21
    • 22
  • D. Finco
    • 25
  • M. Flamini
    • 25
  • C. Fornaro
    • 25
  • A. Franceschi
    • 15
  • J. Fujimoto
    • 48
  • M. Fukushima
    • 33
  • P. Galeotti
    • 21
    • 22
  • G. Garipov
    • 63
  • J. Geary
    • 76
  • G. Gelmini
    • 77
  • G. Giraudo
    • 21
  • M. Gonchar
    • 61
  • C. González Alvarado
    • 66
  • P. Gorodetzky
    • 4
  • F. Guarino
    • 16
    • 17
  • A. Guzmán
    • 9
  • Y. Hachisu
    • 49
  • B. Harlov
    • 62
  • A. Haungs
    • 7
  • J. Hernández Carretero
    • 67
  • K. Higashide
    • 44
    • 49
  • D. Ikeda
    • 33
  • H. Ikeda
    • 42
  • N. Inoue
    • 44
  • S. Inoue
    • 33
  • A. Insolia
    • 13
    • 24
  • F. Isgrò
    • 16
    • 26
  • Y. Itow
    • 40
  • E. Joven
    • 69
  • E. G. Judd
    • 73
  • A. Jung
    • 51
  • F. Kajino
    • 35
  • T. Kajino
    • 38
  • I. Kaneko
    • 49
  • Y. Karadzhov
    • 1
  • J. Karczmarczyk
    • 59
  • M. Karus
    • 7
  • K. Katahira
    • 49
  • K. Kawai
    • 49
  • Y. Kawasaki
    • 49
  • B. Keilhauer
    • 7
  • B. A. Khrenov
    • 63
  • J. -S. Kim
    • 53
  • S. -W. Kim
    • 50
  • S. -W. Kim
    • 52
  • M. Kleifges
    • 7
  • P. A. Klimov
    • 63
  • D. Kolev
    • 1
  • I. Kreykenbohm
    • 6
  • K. Kudela
    • 61
  • Y. Kurihara
    • 48
  • A. Kusenko
    • 77
  • E. Kuznetsov
    • 76
  • M. Lacombe
    • 5
  • C. Lachaud
    • 4
  • J. Lee
    • 52
  • J. Licandro
    • 69
  • H. Lim
    • 52
  • F. López
    • 68
  • M. C. Maccarone
    • 18
    • 24
  • K. Mannheim
    • 10
  • D. Maravilla
    • 54
  • L. Marcelli
    • 20
  • A. Marini
    • 15
  • O. Martinez
    • 56
  • G. Masciantonio
    • 19
    • 20
  • K. Mase
    • 27
  • R. Matev
    • 1
  • G. Medina-Tanco
    • 54
  • T. Mernik
    • 9
  • H. Miyamoto
    • 2
  • Y. Miyazaki
    • 29
  • Y. Mizumoto
    • 38
  • G. Modestino
    • 15
  • A. Monaco
    • 12
  • D. Monnier-Ragaigne
    • 2
  • J. A. Morales de los Ríos
    • 65
    • 67
    • 81
  • C. Moretto
    • 2
  • V. S. Morozenko
    • 62
  • B. Mot
    • 5
  • T. Murakami
    • 48
  • M. Nagano Murakami
    • 29
  • M. Nagata
    • 34
  • S. Nagataki
    • 37
  • T. Nakamura
    • 57
  • T. Napolitano
    • 15
  • D. Naumov
    • 61
  • R. Nava
    • 54
  • A. Neronov
    • 71
  • K. Nomoto
    • 47
  • T. Nonaka
    • 33
  • T. Ogawa
    • 49
  • S. Ogio
    • 41
  • H. Ohmori
    • 49
  • A. V. Olinto
    • 74
  • P. Orleański
    • 60
  • G. Osteria
    • 16
  • M. I. Panasyuk
    • 63
  • E. Parizot
    • 4
  • I. H. Park
    • 52
  • H. W. Park
    • 52
  • B. Pastircak
    • 61
  • T. Patzak
    • 4
  • T. Paul
    • 78
  • C. Pennypacker
    • 73
  • S. Perez Cano
    • 67
  • T. Peter
    • 72
  • P. Picozza
    • 19
    • 20
    • 49
  • T. Pierog
    • 7
  • L. W. Piotrowski
    • 49
  • S. Piraino
    • 9
    • 18
  • Z. Plebaniak
    • 59
  • A. Pollini
    • 70
  • P. Prat
    • 4
  • G. Prévôt
    • 4
  • H. Prieto
    • 67
  • M. Putis
    • 61
  • P. Reardon
    • 76
  • M. Reyes
    • 69
  • M. Ricci
    • 15
  • I. Rodríguez
    • 68
  • M. D. Rodríguez Frías
    • 67
    • 81
    • 82
  • F. Ronga
    • 15
  • M. Roth
    • 7
  • H. Rothkaehl
    • 60
  • G. Roudil
    • 5
  • I. Rusinov
    • 1
  • M. Rybczyński
    • 57
  • M. D. Sabau
    • 66
  • G. Sáez-Cano
    • 67
  • H. Sagawa
    • 33
  • A. Saito
    • 36
  • N. Sakaki
    • 7
  • M. Sakata
    • 35
  • H. Salazar
    • 56
  • S. Sánchez
    • 68
  • A. Santangelo
    • 9
  • L. Santiago Crúz
    • 54
  • M. Sanz Palomino
    • 66
  • O. Saprykin
    • 62
  • F. Sarazin
    • 75
  • H. Sato
    • 35
  • M. Sato
    • 45
  • T. Schanz
    • 9
  • H. Schieler
    • 7
  • V. Scotti
    • 16
    • 17
  • A. Segreto
    • 18
    • 24
  • S. Selmane
    • 4
  • D. Semikoz
    • 4
  • M. Serra
    • 69
  • S. Sharakin
    • 63
  • T. Shibata
    • 43
  • H. M. Shimizu
    • 39
  • K. Shinozaki
    • 49
    • 9
  • T. Shirahama
    • 44
  • G. Siemieniec-Oziȩbło
    • 58
  • H. H. Silva López
    • 54
  • J. Sledd
    • 79
  • K. Słomińska
    • 60
  • A. Sobey
    • 79
  • T. Sugiyama
    • 39
  • D. Supanitsky
    • 54
  • M. Suzuki
    • 42
  • B. Szabelska
    • 59
  • J. Szabelski
    • 59
  • F. Tajima
    • 31
  • N. Tajima
    • 49
  • T. Tajima
    • 8
  • Y. Takahashi
    • 45
  • H. Takami
    • 48
  • M. Takeda
    • 69
  • Y. Takizawa
    • 49
  • C. Tenzer
    • 9
  • O. Tibolla
    • 10
  • L. Tkachev
    • 61
  • H. Tokuno
    • 46
  • T. Tomida
    • 49
  • N. Tone
    • 49
  • S. Toscano
    • 71
  • F. Trillaud
    • 54
  • R. Tsenov
    • 1
  • Y. Tsunesada
    • 46
  • K. Tsuno
    • 49
  • T. Tymieniecka
    • 59
  • Y. Uchihori
    • 28
  • M. Unger
    • 7
  • O. Vaduvescu
    • 69
  • J. F. Valdés-Galicia
    • 54
  • P. Vallania
    • 21
    • 23
  • L. Valore
    • 16
    • 17
  • G. Vankova
    • 1
  • C. Vigorito
    • 21
    • 22
  • L. Villaseñor
    • 55
  • P. von Ballmoos
    • 5
  • S. Wada
    • 49
  • J. Watanabe
    • 38
  • S. Watanabe
    • 45
  • J. WattsJr
    • 76
  • M. Weber
    • 7
  • T. J. Weiler
    • 80
  • T. Wibig
    • 59
  • L. Wiencke
    • 75
  • M. Wille
    • 6
  • J. Wilms
    • 6
  • Z. Włodarczyk
    • 57
  • T. Yamamoto
    • 35
  • Y. Yamamoto
    • 35
  • J. Yang
    • 51
  • H. Yano
    • 42
  • I. V. Yashin
    • 63
  • D. Yonetoku
    • 32
  • K. Yoshida
    • 35
  • S. Yoshida
    • 27
  • R. Young
    • 79
  • M. Yu. Zotov
    • 63
  • A. Zuccaro Marchi
    • 49
  1. St. Kliment Ohridski University of Sofia, Sofia, Bulgaria
  2. LAL, Univ Paris-Sud, CNRS/IN2P3, Orsay, France
  3. Omega, Ecole Polytechnique, CNRS/IN2P3, Palaiseau, France
  4. APC, Univ Paris Diderot, CNRS/IN2P3, CEA/Irfu, Obs. de Paris, Sorbonne Paris Cité, Paris, France
  5. IRAP, Université de Toulouse, CNRS, Toulouse, France
  6. ECAP, University of Erlangen-Nuremberg, Erlangen, Germany
  7. Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
  8. Ludwig Maximilian University, Munich, Germany
  9. Institute for Astronomy and Astrophysics, Kepler Center, University of Tübingen, Tübingen, Germany
  10. Institut für Theoretische Physik und Astrophysik, University of Würzburg, Würzburg, Germany
  11. Istituto Nazionale di Fisica Nucleare, Sezione di Bari, Bari, Italy
  12. Universita' degli Studi di Bari Aldo Moro and INFN, Sezione di Bari, Bari, Italy
  13. Dipartimento di Fisica e Astronomia, Universita' di Catania, Catania, Italy
  14. Consiglio Nazionale delle Ricerche (CNR), Ist. di Fisica Applicata Nello Carrara, Firenze, Italy
  15. Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali di Frascati, Frascati, Italy
  16. Istituto Nazionale di Fisica Nucleare - Sezione di Napoli, Naples, Italy
  17. Dipartimento di Scienze Fisiche, Universita' di Napoli Federico II, Naples, Italy
  18. INAF - Istituto di Astrofisica Spaziale e Fisica Cosmica di Palermo, Palermo, Italy
  19. Istituto Nazionale di Fisica Nucleare - Sezione di Roma Tor Vergata, Roma, Italy
  20. Dipartimento di Fisica, Universita' di Roma Tor Vergata, Roma, Italy
  21. Istituto Nazionale di Fisica Nucleare - Sezione di Torino, Torino, Italy
  22. Dipartimento di Fisica, Università di Torino, INFN Torino, Torino, Italy
  23. Istituto Nazionale di Astrofisica, Osservatorio Astrofisico di Torino, Torino, Italy
  24. Istituto Nazionale di Fisica Nucleare - Sezione di Catania, Catania, Italy
  25. Dipartimento di Ingegneria, UTIU, Rome, Italy
  26. DIETI, Universita' degli Studi di Napoli Federico II, Napoli, Italy
  27. Chiba University, Chiba, Japan
  28. National Institute of Radiological Sciences, Chiba, Japan
  29. Fukui University of Technology, Fukui, Japan
  30. Kinki University, Higashi-Osaka, Japan
  31. Hiroshima University, Hiroshima, Japan
  32. Kanazawa University, Kanazawa, Japan
  33. Institute for Cosmic Ray Research, University of Tokyo, Kashiwa, Japan
  34. Kobe University, Kobe, Japan
  35. Konan University, Kobe, Japan
  36. Kyoto University, Kyoto, Japan
  37. Yukawa Institute, Kyoto University, Kyoto, Japan
  38. National Astronomical Observatory, Mitaka, Japan
  39. Nagoya University, Nagoya, Japan
  40. Solar-Terrestrial Environment Laboratory, Nagoya University, Nagoya, Japan
  41. Graduate School of Science, Osaka City University, Osaka, Japan
  42. Institute of Space and Astronautical Science/JAXA, Sagamihara, Japan
  43. Aoyama Gakuin University, Sagamihara, Japan
  44. Saitama University, Saitama, Japan
  45. Hokkaido University, Sapporo, Japan
  46. Interactive Research Center of Science, Tokyo Institute of Technology, Tokyo, Japan
  47. University of Tokyo, Tokyo, Japan
  48. High Energy Accelerator Research Organization (KEK), Tsukuba, Japan
  49. RIKEN, Wako, Japan
  50. Korea Astronomy and Space Science Institute (KASI), Daejeon, Republic of Korea
  51. Ewha Womans University, Seoul, Republic of Korea
  52. Sungkyunkwan University, Seoul, Republic of Korea
  53. Center for Galaxy Evolution Research, Yonsei University, Seoul, Republic of Korea
  54. Universidad Nacional Autónoma de México (UNAM), Mexico City, Mexico
  55. Universidad Michoacana de San Nicolas de Hidalgo (UMSNH), Morelia, Mexico
  56. Benemérita Universidad Autónoma de Puebla (BUAP), Puebla, Mexico
  57. Institute of Physics, Jan Kochanowski University, Kielce, Poland
  58. Astronomical Observatory, Jagiellonian University, Krakow, Poland
  59. National Centre for Nuclear Research, Lodz, Poland
  60. Space Research Centre of the Polish Academy of Sciences (CBK), Warsaw, Poland
  61. Joint Institute for Nuclear Research, Dubna, Russia
  62. Central Research Institute of Machine Building, TsNIIMash, Korolev, Russia
  63. Skobeltsyn Institute of Nuclear Physics, Lomonosov Moscow State University, Moscow, Russia
  64. Department of Space Physics, Institute of Experimental Physics, Kosice, Slovakia
  65. Consejo Superior de Investigaciones Científicas (CSIC), Madrid, Spain
  66. Instituto Nacional de Técnica Aeroespacial (INTA), Madrid, Spain
  67. Universidad de Alcalá (UAH), Madrid, Spain
  68. Universidad Carlos III de Madrid, Madrid, Spain
  69. Instituto de Astrofísica de Canarias (IAC), Tenerife, Spain
  70. Swiss Center for Electronics and Microtechnology (CSEM), Neuchâtel, Switzerland
  71. ISDC Data Centre for Astrophysics, Versoix, Switzerland
  72. Institute for Atmospheric and Climate Science, ETH Zürich, Zürich, Switzerland
  73. Space Science Laboratory, University of California, Berkeley, USA
  74. University of Chicago, Chicago, USA
  75. Colorado School of Mines, Golden, USA
  76. University of Alabama in Huntsville, Huntsville, USA
  77. University of California (UCLA), Los Angeles, USA
  78. University of Wisconsin-Milwaukee, Milwaukee, USA
  79. NASA - Marshall Space Flight Center, Madison County, USA
  80. Vanderbilt University, Nashville, USA
  81. SPace and AStroparticle Group, Universidad de Alcalá, Alcalá de Henares, Spain
  82. Instituto de Astrofísica de Canarias (IAC), Madrid, Spain
