In recent decades, the tidal wave of information technologies has led to the flourishing of computational imaging techniques. Progress in this field is pushing the envelope of our observational abilities, either along a single dimension of light or across multiple dimensions jointly, and along the way is bringing powerful tools to many disciplines, such as the life sciences, medicine, and materials science. We introduce some leading-edge computational imaging methods and systems from the perspective of the different dimensions of the light signal.
3.1 Spatial dimension
The spatial resolution of conventional imaging methods is limited in two main respects: (1) the resolution of an optical imaging system is always limited by diffraction; (2) due to the limited resolving power and space-bandwidth product of the objective lens, there exists a fundamental tradeoff between the field of view and the spatial resolution. In this section, we review work in two main areas of high-resolution computational imaging: computational super-resolution imaging and large-field-of-view, high-resolution imaging.
In 1873, the German scientist Ernst Abbe formulated the diffraction limit based on wave optics; i.e., there exists an upper bound on the resolution of an optical system. For a long time, this diffraction barrier blocked high-resolution imaging. Recently, by exploiting new imaging principles, researchers have broken the Abbe diffraction limit and achieved super-resolution imaging. Representative techniques include stimulated emission depletion (STED) microscopy (Hell and Wichmann, 1994; Hein et al., 2008), photoactivated localization microscopy (PALM) (Hess et al., 2006; Manley et al., 2008), stochastic optical reconstruction microscopy (STORM) (Rust et al., 2006), and structured illumination microscopy (SIM) (Gustafsson, 2005). The 2014 Nobel Prize in Chemistry was awarded jointly for these breakthroughs in super-resolution imaging. Specifically, STED generates super-resolution images by modulating the excitation illumination to confine the emitting fluorochromes to sub-diffraction sizes. In contrast, STORM adopts photo-switchable molecules and stochastically activates and deactivates sparsely distributed molecules separated by distances exceeding the Abbe diffraction limit; high-resolution imaging is achieved by localizing the sparse molecules in each capture. In SIM, the illumination is modulated with structured patterns to encode high-frequency components of the microscopic object. Illustrations and comparisons of the principles of these methods are shown in Fig. 1 (Schermelleh et al., 2010).
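Abbe's criterion states that the smallest resolvable lateral feature separation is d = λ/(2·NA), where λ is the wavelength and NA the numerical aperture of the objective. The following is a minimal sketch of this relation; the wavelength and NA values are illustrative, not tied to any system described above.

```python
# Abbe's lateral resolution limit: d = lambda / (2 * NA).
# A minimal sketch; the wavelength and NA values below are illustrative.

def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Smallest resolvable feature separation under Abbe's criterion."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green light (520 nm) through a high-NA oil-immersion objective (NA = 1.4):
d = abbe_limit_nm(520.0, 1.4)
print(f"diffraction limit: {d:.0f} nm")  # ~186 nm
```

This is why sub-200 nm features of cellular structures are inaccessible to conventional far-field optics, motivating the super-resolution techniques above.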
On the other hand, observing many biological phenomena requires high-resolution imaging of microscopic objects over a large field of view, to study the cooperation among different components across a large range (Frenkel, 2010; Greenbaum et al., 2012). Macroscopic photography has similar requirements, ranging from surveillance, astronomical observation, and Earth observation to entertainment. Representative works in large-field-of-view, high-resolution imaging include gigapixel imaging (Brady et al., 2012; Marks et al., 2012) and Fourier ptychographic microscopy (FPM) (Zheng et al., 2013). Brady et al. (2012) developed a high-resolution imaging system, AWARE-2, with a cascaded optical design. In this gigapixel imaging system, a relay lens with high resolving power is followed by camera arrays that focus on and capture images of different fields of view. Each image comprises 0.96 gigapixels, captured and transmitted in parallel. The high-resolution, large-field-of-view image is reconstructed through computational stitching in the spatial domain. Since this spatial-stitching design requires a large number of cameras, it is very costly. At a much lower cost, Zheng et al. (2013) proposed an alternative method, in which low-numerical-aperture objectives capture wide-field-of-view images; by changing the illumination angles, they captured different frequency regions of the microscopic objects and stitched these large-field-of-view but low-resolution images in the Fourier domain. The high-resolution, large-field-of-view microscopic images are reconstructed through a phase retrieval algorithm. Fig. 2 shows the optical setups of these two typical systems.
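The Fourier-domain stitching idea behind FPM can be illustrated with a toy forward model: tilting the illumination shifts the object's spectrum, and the low-NA objective passes only a small circular region of it, so each capture is a low-resolution image of a different spatial-frequency band. The object, pupil radius, and spectrum shifts below are all synthetic stand-ins, not the parameters of Zheng et al.'s system.

```python
# A toy sketch of the Fourier ptychography forward model (synthetic data):
# each illumination angle corresponds to a shift of the object's spectrum
# relative to a small circular pupil.
import numpy as np

rng = np.random.default_rng(0)
obj = rng.random((128, 128))             # stand-in high-resolution object
spectrum = np.fft.fftshift(np.fft.fft2(obj))

def low_res_capture(shift_y, shift_x, radius=16):
    """Intensity image seen through a small pupil centred at the given
    spectrum offset (the offset encodes the illumination angle)."""
    ny, nx = spectrum.shape
    yy, xx = np.mgrid[:ny, :nx]
    cy, cx = ny // 2 + shift_y, nx // 2 + shift_x
    pupil = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    field = np.fft.ifft2(np.fft.ifftshift(spectrum * pupil))
    return np.abs(field) ** 2

# Two illumination angles sample two different frequency bands:
img_on_axis = low_res_capture(0, 0)
img_oblique = low_res_capture(24, 0)
```

FPM reconstruction then runs phase retrieval to stitch many such overlapping Fourier patches back into one wide-band, high-resolution spectrum.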
3.2 Temporal dimension
Conventional cameras record dynamic scenes by capturing 2D images sequentially, and their time resolution is limited by sensor sensitivity, data-transfer speed, and storage. So far, the fastest commercial high-speed cameras achieve frame intervals of around 1 µs. Recently, research in the ultra-fast imaging field has achieved picosecond time resolution by coding temporal information into either the spatial or the spectral domain. Here, we present three representative works with outstanding performance.
Velten et al. (2012) applied an ultra-fast pulsed laser to macroscopic imaging and electronically mapped the time-domain information into the spatial domain using a streak camera. The time resolution of this system reached 2 ps. The fundamental principle of this technique is illustrated in Fig. 3. Specifically, the laser scans the scene line by line (1D), and for each point in the line, the streak camera spreads the photons that arrive at different times along a spatial dimension perpendicular to the scanned one, so each captured 2D streak image actually contains one spatial dimension and one temporal dimension. By scanning the scene line by line and stitching the captured images, 2D spatial images with ultra-high time resolution can be computationally reconstructed. With such an ultra-fast camera, light propagation is no longer effectively instantaneous, and the whole propagation process of light can be recorded; this is called 'transient imaging'. From the transient images, one can separate the different scatterings or reflections of light transport in the target scene and achieve looking-around-the-corner imaging (Velten et al., 2012). Another important application of such ultra-fast imaging techniques is depth capture. Depth imaging based on time of flight (TOF) is realized by emitting light pulses continuously and retrieving the depth from their travel time, which is calculated from the phase shift between the emitted and received light. Recently, Heide et al. (2013) achieved 3D imaging with a high-frequency illuminated TOF camera.
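The phase-shift depth calculation used by continuous-wave TOF cameras can be sketched in a few lines: the measured phase shift between the emitted and received amplitude modulation gives the round-trip travel time, so depth = c·Δφ/(4π·f). The modulation frequency and phase value below are illustrative, not taken from any specific camera.

```python
# A sketch of continuous-wave time-of-flight depth recovery:
# depth = c * phase_shift / (4 * pi * f_mod). Values are illustrative.
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth_m(phase_shift_rad: float, f_mod_hz: float) -> float:
    """Depth from the phase shift of an amplitude-modulated light signal."""
    return C * phase_shift_rad / (4.0 * math.pi * f_mod_hz)

# A pi/2 phase shift measured at 30 MHz modulation:
depth = tof_depth_m(math.pi / 2, 30e6)
print(f"depth: {depth:.3f} m")  # ~1.249 m
```

Note that phase wraps every 2π, so the unambiguous range at 30 MHz is c/(2·f) ≈ 5 m; real systems resolve this with multiple modulation frequencies.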
Goda et al. (2009) invented serial time-encoded amplified imaging (STEAM), which maps the time-domain information into the spectral domain and uses an ultra-fast single-pixel detector to record the spectral information. The fundamental principle of this imaging method is illustrated in Fig. 4. Each light pulse first goes through a 2D spectral separation setup, so its different spectral components are mapped to different spatial positions of the object; in this way, the 2D information of the object is coded into the spectral dimension. By mapping the different spectral components back to the time domain and recording the serial signal with an ultra-fast single-pixel detector, the spatial information of the object can be reconstructed computationally. STEAM is a continuous imaging system that can capture scene information at 6.1 million frames per second with a shutter speed of 440 ps. This ultra-fast imaging system, combined with microfluidic techniques, has been applied successfully to the detection of high-speed cell flows.
Later, the Goda group invented sequentially timed all-optical mapping photography (STAMP). The system codes an ultra-fast temporal phenomenon into the spectral dimension and then maps different spectral channels to different spatial positions of a high-resolution 2D sensor for burst-mode ultra-fast imaging (Nakagawa et al., 2014). Specifically, as shown in Fig. 4b, an ultra-short laser pulse is split by the temporal mapping device (TMD) into a series of discrete daughter pulses in different spectral bands, which hit the target as successive 'flashes' for stroboscopic image acquisition. The spatial information of the object at different times is carried by the different spectral bands, dispersed to different spatial positions by dispersion optics, and detected with a single 2D sensor.
3.3 Angular dimension
The angular information of visual signals reveals various scene properties, such as the illumination, the scene materials, and the 3D structure. However, angular information is largely lost in conventional imaging. In computational imaging, researchers have proposed various techniques to sample the visual information from different angles and computationally reconstruct high-dimensional or high-resolution images. Fig. 5 shows some recent computational imaging systems based on multi-angle information, ranging from coherent to partially coherent to incoherent light, from microscopic to macroscopic scales, and from multi-angle illumination to multi-angle sampling.
In the reconstruction of multi-angle information under coherent illumination, the light field is modeled as a complex field. Combined with theories from wave optics, these methods are commonly applied in microscopy. For example, Choi et al. (2007) built a system for 3D refractive index reconstruction (Fig. 5a). Using coherent illumination at each angle, they retrieved the quantitative phase information through digital holography, and then adopted tomography to combine the phase information from different angles and reconstruct a label-free 3D refractive index image of live cells. For thin microscopic samples, coherent illumination from different angles shifts the frequency content, and one can reconstruct high-resolution images by capturing and stitching different spatial frequencies. Zheng et al. (2013) proposed a gigapixel microscopic system (Fig. 5), using an LED array to illuminate the microscopic sample from different angles. Their system bypasses the space-bandwidth limit of the objective lens and realizes low-cost gigapixel microscopy.
Under partially coherent illumination, the light field is usually described by the Wigner distribution function. With the phase-space 4D sampling system displayed in Fig. 5, Waller et al. (2012) used a spatial light modulator to sample dense angular information for the reconstruction of 4D phase-amplitude data. This provides a better understanding of nonlinear light transport and has already been applied to 3D localization through a scattering medium.
When the illumination is incoherent, a 4D light field is commonly used to model the scene's geometric information (Levoy and Hanrahan, 1996). In a macroscopic scene, by illuminating the scene from different angles and applying photometric stereo methods, high-resolution 3D information of the scene can be retrieved. Micro-lens arrays (Ng et al., 2005) and camera array systems (Wilburn et al., 2004) have been used successfully for the rapid capture of the 4D light field, which enables fast depth estimation and refocusing. These 4D light-field-capturing techniques have also been applied to microscopic imaging (Levoy et al., 2006; Lin et al., 2015). Fast 3D reconstruction of fluorescent and transparent samples can be realized by combining 3D deconvolution algorithms and phase retrieval algorithms. Prevedel et al. (2014) applied light field microscopy to simultaneous whole-animal 3D imaging of neuronal activity.
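The refocusing capability mentioned above can be sketched with the classic shift-and-add approach over a 4D light field L[u, v, y, x] (angular coordinates u, v; spatial coordinates y, x): shifting each sub-aperture view in proportion to its angular offset and averaging synthesises focus at a chosen depth. The light field below is synthetic random data, and the shifts are rounded to integers for simplicity.

```python
# A minimal shift-and-add refocusing sketch over a synthetic 4D light field.
import numpy as np

U = V = 5          # angular samples
H = W = 32         # spatial samples
rng = np.random.default_rng(1)
lf = rng.random((U, V, H, W))   # stand-in light field L[u, v, y, x]

def refocus(light_field, alpha):
    """Average the sub-aperture images, each shifted by alpha times its
    angular offset from the central view (integer shifts for simplicity).
    The parameter alpha selects the synthetic focal depth."""
    u0, v0 = U // 2, V // 2
    acc = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - u0)))
            dx = int(round(alpha * (v - v0)))
            acc += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return acc / (U * V)

img_near = refocus(lf, alpha=1.0)
img_far = refocus(lf, alpha=-1.0)
```

Scene points at the depth matched by alpha align across views and stay sharp, while points at other depths are averaged over mismatched positions and blur out; sweeping alpha is how light field cameras refocus after capture.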
In summary, introducing multi-angle information on the illumination side and combining it with the corresponding reconstruction algorithm can overcome the limitations of conventional imaging, introduce more 3D information, compensate for optical aberrations, and realize high-performance imaging. Introducing multi-angle information on the sampling side helps couple 3D information into a 2D sensor and enables reconstruction of 3D information from 2D measurements.
3.4 Spectral dimension
Current multi-channel imaging techniques are designed mostly to capture three colors: red, green, and blue. Although three-color imaging systems match the perception of the human visual system well, from the perspective of physics, real-world scenes contain abundant wavelengths and display rich spectral information. This spectral information can reflect essential properties of the light source and the scene, so spectral imaging has become an important tool for both scientific research and engineering applications. Recently, high-resolution hyperspectral imaging has drawn increasing attention and made great progress, from the extension of the spectral range, to the improvement of spatial resolution, to the acceleration of imaging speed. Hyperspectral imaging can capture abundant scene information in the spatial, temporal, and spectral dimensions. Benefiting from this rich encoded information, hyperspectral imaging has already been applied widely in military security, environmental monitoring, biological science, medical diagnosis, and scientific observation (Backman et al., 2000; Delalieux et al., 2009; Wong, 2009; Kester et al., 2011). With the development of dynamic spectral imaging techniques, there are also many emerging applications in computer vision and graphics, such as object detection, image segmentation, image recognition, and scene rendering.
The following are some representative computational spectral imaging techniques. Computational spectral imaging couples the 3D spectral data into a 2D sensor and then computationally reconstructs the whole 3D spectral volume. Based on this basic principle, researchers have designed different spectral sampling systems, with various optical implementations and system structures. We can classify them into branches according to their sampling and reconstruction strategies, including computed tomography (Descour and Dereniak, 1995), interferometry (Chao et al., 2005), coded aperture (Willett et al., 2007), and hybrid-camera systems (Ma et al., 2014). In addition, compressive sensing has recently drawn much attention in the computational imaging field. Charles et al. (2011) and Chakrabarti and Zickler (2011) extensively exploited the sparsity of spectral data and proposed spectral dictionary-based imaging methods. Beyond coded-aperture hyperspectral imaging, researchers have further exploited the compressibility of spectral data using micromirror arrays, principal component imaging (Pal and Neifeld, 2003), feature-specific imaging (Neifeld and Shankar, 2003), fluorescence imaging (Suo et al., 2014), and spatial-temporal coding (Lin et al., 2014). These techniques compressively sense the 3D spectral data through different coupling and decoupling methods.
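The coupling of a 3D spectral cube onto a 2D detector can be illustrated with a toy forward model in the spirit of coded-aperture snapshot systems: each spectral slice is masked by a binary coded aperture, sheared by a dispersion-dependent shift, and summed onto a single detector image. The cube size, mask, and one-pixel-per-band shear below are illustrative assumptions, not the design of any cited system.

```python
# A toy coded-aperture spectral imaging forward model (synthetic data):
# mask each band, shear it by a band-dependent offset, sum onto one detector.
import numpy as np

H, W, BANDS = 32, 32, 8
rng = np.random.default_rng(2)
cube = rng.random((H, W, BANDS))          # stand-in spectral data cube
mask = rng.integers(0, 2, size=(H, W))    # binary coded aperture

def coded_measurement(cube, mask):
    """Sum the coded, dispersed spectral slices onto one detector image."""
    detector = np.zeros((H, W + BANDS - 1))
    for b in range(BANDS):
        coded = cube[:, :, b] * mask
        detector[:, b:b + W] += coded     # shear: band b shifted by b pixels
    return detector

y = coded_measurement(cube, mask)
print(y.shape)  # (32, 39)
```

Reconstruction then inverts this many-to-one mapping, typically by exploiting sparsity of the spectral cube in some dictionary, which is where the compressive sensing methods cited above come in.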
Recently, Bao and Bawendi (2015) proposed a novel spectrometer based on colloidal quantum dots, a highly controllable, tiny, and light-sensitive semiconductor material (Fig. 6). With quantum dot printing replacing the conventional Bayer-pattern color filter array, the spectrometer is similar in size to a conventional three-color camera; its size is reduced dramatically without sacrificing resolution, generality, or efficiency. This is the first attempt to use nanometer-scale materials to build spectrometers and represents real progress in spectrometer miniaturization. It paves the way for high-performance, low-cost, small-volume micro-spectrometers, with broad applications in space exploration, personalized medical services, diagnostic platforms based on microfluidic chips, etc.
Micro-scale spectral sampling has also drawn considerable attention and been researched extensively (Fig. 7). Orth et al. (2015) built a gigapixel multispectral microscope based on micro-lens arrays. The system captures about 13 spectral channels for each point, and its total pixel count can reach 1.3 billion. It can effectively observe the inner structure of biological samples such as cells. A multispectral microscope with such high spatial resolution may prove to be a substantial benefit to biomedicine and drug research. For in vivo observation of biological specimens, it is common to use fluorescence staining to label the target object. In the presence of multiple fluorescent dyes, the spectrum of the target object provides effective classification information. Based on this observation, Jahr et al. (2015) invented hyperspectral light sheet microscopy. This imaging system captures spectral images of large biological samples by combining active scanning illumination and computational reconstruction. It can not only achieve optical sectioning of the 3D data, but also guarantee spatial resolution on the order of a single cell. This new technique provides great opportunities for in vivo study of biological samples.
Phase imaging is applied widely in the life sciences, especially in microscopy. This is because most microscopic samples, such as prokaryotes and bacteria, are almost transparent without fluorescence staining. These specimens absorb very little of the incident light, so the intensity of the in-focus image shows little spatial variation. Through phase imaging, we can retrieve the outline of transparent samples and achieve label-free cell imaging. The phase-contrast microscopy proposed by Zernike (1955) is one of the earliest phase imaging techniques and provided a new tool for transparent-object imaging. In addition, quantitative phase imaging realizes accurate phase measurement of transparent microscopic samples. Combined with multi-angle or multi-focal-plane acquisition, one can also achieve 3D label-free refractive index imaging and nanometer-scale imaging (Levoy et al., 2006; Cotte et al., 2013; Kim et al., 2014).
Quantitative phase imaging techniques can be divided into two main types: iterative and non-iterative phase imaging (Fienup, 1982). The former uses a light transport model under coherent or partially coherent light, imposes constraints in either the spatial or the Fourier domain, and designs iterative reconstruction algorithms to retrieve phase information from intensity measurements. This type of method generally requires complex computations and multiple snapshots. The best-known method in this class is the Gerchberg-Saxton (GS) iterative algorithm (Fienup, 2013). Non-iterative reconstructions, in turn, fall into two main types: imaging under coherent illumination and imaging under partially coherent illumination. Some typical imaging systems are shown in Fig. 8.
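The GS iteration alternately enforces the known amplitude in the spatial domain and the measured amplitude in the Fourier domain, keeping the current phase estimate in each step. The following is a compact sketch with synthetic target amplitudes; the grid size and iteration count are arbitrary choices.

```python
# A compact sketch of Gerchberg-Saxton iteration on synthetic data:
# recover phase from a known source amplitude and a measured Fourier amplitude.
import numpy as np

rng = np.random.default_rng(3)
true_phase = rng.uniform(0, 2 * np.pi, (64, 64))
src_amp = np.ones((64, 64))                       # known source amplitude
far_amp = np.abs(np.fft.fft2(src_amp * np.exp(1j * true_phase)))

def gerchberg_saxton(src_amp, far_amp, n_iter=200):
    # Start from a random phase guess with the correct source amplitude.
    field = src_amp * np.exp(1j * rng.uniform(0, 2 * np.pi, src_amp.shape))
    for _ in range(n_iter):
        far = np.fft.fft2(field)
        far = far_amp * np.exp(1j * np.angle(far))      # impose Fourier amplitude
        field = np.fft.ifft2(far)
        field = src_amp * np.exp(1j * np.angle(field))  # impose source amplitude
    return field

est = gerchberg_saxton(src_amp, far_amp)
err = np.abs(np.abs(np.fft.fft2(est)) - far_amp).mean() / far_amp.mean()
print(f"relative Fourier amplitude error: {err:.3f}")
```

The error in the unconstrained domain is non-increasing over iterations, which is the classic convergence property of GS; in practice the iteration can stagnate, motivating the refinements surveyed by Fienup.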
Under coherent illumination, refined interference and diffraction patterns can be recorded to reconstruct highly accurate phase information. Typical examples include digital holography and some phase-shifting interferometric methods (Cuche et al., 1999). Other methods couple the phase information into the intensity at the focal plane by modulating either phase or amplitude in the Fourier domain. With computational reconstruction, the accuracy and speed of quantitative phase imaging can be improved further (Popescu et al., 2004). One can also reconstruct the 3D refractive index of label-free transparent samples from multi-angle quantitative imaging under coherent light, supporting studies in the biological domain (Choi et al., 2007). In spite of this progress, quantitative phase imaging under coherent illumination still faces several problems, including phase-wrapping ambiguity, laser speckle interference, low spatial resolution, and the high price of high-power lasers.
Recently, more efforts have concentrated on phase imaging with partially coherent light. The Shack-Hartmann sensor can record the phase information of a target sample under partially coherent light with good quality (Stoklasa et al., 2014), and can further reveal the optical coherence of the signal. However, both its phase and spatial resolutions are low, limiting its accuracy. Some Fourier-plane phase modulation methods designed for fully coherent light have been extended to partially coherent imaging, dramatically improving their applicability (Kim et al., 2014). When the light is partially coherent, the phase information cannot be detected at the focal plane, but it is encoded in the intensity variation at defocused planes. Measuring multiple times along the axial direction and applying phase retrieval based on the transport-of-intensity equation lends new insight into quantitative phase reconstruction with partially coherent light (Teague, 1983; Waller et al., 2010b).
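For a uniform in-focus intensity I0, Teague's transport-of-intensity equation reduces to dI/dz = -(λ·I0/2π)·∇²φ, so the phase follows from an inverse Laplacian of the measured axial intensity derivative. The following is a sketch of the FFT-based inversion on synthetic, periodic data; the wavelength and test phase are illustrative, and the axial derivative is simulated from the analytic Laplacian rather than measured.

```python
# A sketch of transport-of-intensity (TIE) phase retrieval for uniform
# in-focus intensity: invert dI/dz = -(lambda * I0 / 2pi) * laplacian(phi)
# via an FFT-based Poisson solve. All data below are synthetic and periodic.
import numpy as np

N = 64
wavelength = 0.5e-6   # 500 nm, illustrative
I0 = 1.0

# Synthetic smooth phase and its analytic Laplacian to fake a measurement.
y, x = np.mgrid[:N, :N] * (2 * np.pi / N)
phi_true = np.cos(x) + np.sin(2 * y)
lap_phi = -np.cos(x) - 4 * np.sin(2 * y)
dI_dz = -(wavelength * I0 / (2 * np.pi)) * lap_phi   # simulated measurement

def tie_phase(dI_dz, wavelength, I0):
    """Invert the Laplacian in Fourier space (zero-mean solution)."""
    k = 2 * np.pi * np.fft.fftfreq(N, d=2 * np.pi / N)
    ky, kx = np.meshgrid(k, k, indexing="ij")
    k2 = ky ** 2 + kx ** 2
    k2[0, 0] = 1.0                                    # avoid divide-by-zero
    rhs = -(2 * np.pi / (wavelength * I0)) * dI_dz    # equals laplacian(phi)
    phi_hat = np.fft.fft2(rhs) / (-k2)
    phi_hat[0, 0] = 0.0                               # drop the free DC term
    return np.fft.ifft2(phi_hat).real

phi_est = tie_phase(dI_dz, wavelength, I0)
```

In a real system dI/dz is approximated by finite differences of defocused intensity images, and regularizing the division by k² near zero frequency is what limits low-frequency phase accuracy.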
Fast, high-accuracy phase reconstruction would enable imaging of dynamic transparent objects. Various computational reconstruction methods have reduced the number of measurements required under partially coherent light and realized single-shot quantitative imaging by introducing chromatic aberrations (Waller et al., 2010a) or using a volume holographic microscope (Waller et al., 2010b), etc. These works greatly expand the application scenarios and provide new opportunities for label-free observation of dynamic cells.