Background

Imaging cameras in various fields have been used to capture three-dimensional (3D) objects as two-dimensional (2D) images without depth information using conventional image sensors. However, this lack of 3D information limits perception and complicates understanding of the real world. For decades, numerous technical efforts have been made in the research, development, and commercialization of 3D surface imaging technologies for many applications. Recently, as the miniaturization trend of electrical devices continues, technical demand for integrating 3D imaging techniques into miniaturized devices has significantly increased, leading to the introduction of commercially available devices for 3D surface imaging in portable devices [1], light detection and ranging (LiDAR) systems [2], medical imaging scanners [3], and movement recognition in video games [4]. However, existing technologies still struggle to minimize the overall system size required to realize any of the 3D surface imaging modalities. Recently, MEMS fabrication technology has enabled compact packaging of various optical systems by miniaturizing the key optical components. This article provides a mini-review of optical MEMS devices focused on compact 3D surface imaging applications, covering the principles of the major 3D surface imaging techniques and their applications.

Conventional 3D surface imaging techniques

Numerous 3D surface imaging techniques have been developed using stereoscopic vision [5, 6], structured light [7,8,9], time-of-flight (ToF) [10], interferometry [11], holographic imaging [12], and so on. Stereoscopic vision, structured light, and ToF are considered the three major techniques and have been actively investigated because they provide higher resolution, higher speed, and more intuitively applicable principles than the others. Figure 1 shows schematic illustrations of the three representative techniques for 3D surface imaging, and Table 1 describes their distinctive aspects. The stereoscopic vision method utilizes two or more image sensors to concurrently capture the same scene from different viewpoints (Fig. 1a). After rectification of the stereo images, depth information can be calculated from the pixel disparities between the rectified images. The stereo vision method has difficulty measuring non-textured, smooth 3D surfaces because it extracts depth information by comparing pixel intensities between the left and right images. In addition, the method performs poorly under low-light conditions because the stereo matching process requires high-contrast images with appropriate intensities. Nevertheless, the stereoscopic vision method has already been widely adopted in commercial 3D imaging products such as 3D movie recorders [19] and 3D medical endoscopes [20], owing to its significant advantages of low cost, an intuitive principle, and a compact configuration. The structured light method utilizes a pattern projector, which generates single or multiple light patterns with specific geometries such as dot arrays, speckle patterns, line arrays, or sinusoidal fringe patterns, and detects distortions of the illuminated patterns in the image captured by a single image sensor (Fig. 1b). The structured light method has been actively developed for real-time 3D scanning systems using spatial light modulators (SLMs), such as digital micro-mirror devices (DMDs) or rotating patterned apertures, for high-speed, temporally varying, programmable pattern generation [21,22,23,24]. The ToF method utilizes the transit time of the light pulse reflected from the target object. The illuminator unit emits a light pulse onto the target surface, in combination with scanning devices or a beam expander, to cover the two-dimensional scene of the 3D objects. The light pulse reflected from the target surface is received, and the depth information is reconstructed from the light travel time and intensity (Fig. 1c). The ToF method is suitable for capturing objects at short to long range but requires high-speed circuitry, because the temporal resolution should be in the picosecond range for adequate 3D imaging resolution. As a result, ToF-based 3D surface imaging is often applied in long-range applications such as military scanning [25] or LiDAR systems for autonomous driving [26].
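
As a concrete illustration of the depth relations behind these techniques, the short Python sketch below evaluates the rectified-stereo relation Z = f·B/d and the pulsed-ToF relation d = c·t/2. All numerical values (focal length, baseline, disparity, and pulse delay) are illustrative assumptions chosen for the example, not parameters of any cited system.

```python
# Minimal numerical sketch of the two depth relations discussed above.
# All values (focal length, baseline, disparity, pulse delay) are
# illustrative assumptions, not parameters from the cited systems.

C = 299_792_458.0  # speed of light [m/s]

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from rectified stereo: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def tof_depth(round_trip_s: float) -> float:
    """Depth from pulsed ToF: d = c * t / 2 (half of the round trip)."""
    return C * round_trip_s / 2.0

if __name__ == "__main__":
    # Stereo: 800-px focal length, 5-cm baseline, 20-px disparity -> 2 m.
    print(f"stereo depth: {stereo_depth(800, 0.05, 20):.2f} m")
    # ToF: a 6.67-ns round trip corresponds to roughly 1 m; picosecond
    # timing resolution maps to sub-millimetre depth steps.
    print(f"ToF depth: {tof_depth(6.67e-9):.2f} m")
    print(f"depth per ps: {tof_depth(1e-12) * 1000:.3f} mm")
```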

Fig. 1
figure 1

Schematic illustration of three representative techniques for 3D surface imaging and their principles; a stereoscopic imaging, b structured light, and c time-of-flight (ToF)

Table 1 Distinctive aspects of three representative techniques for 3D surface imaging [13,14,15,16,17,18]

Optical MEMS devices for compact 3D surface imaging

Recently, market demand for miniaturized 3D optical imaging modules has increased remarkably as smart devices, wearable devices, and multifunctional imaging devices have attracted the interest of both customers and developers. However, despite the high technological maturity of the 3D surface imaging techniques mentioned above, the key optical elements must still be miniaturized to be packaged into compact imaging systems, such as multifunctional cameras in smartphones and 3D endoscopic catheters. In this section, previous works on optical MEMS devices for compact 3D surface imaging systems based on stereoscopic vision, structured light, and ToF are introduced. Recent research on MEMS-enabled 3D stereoscopic imaging systems has focused on using a single image sensor rather than two identical cameras to reduce the overall size of the optical system [27,28,29,30]. A hexagonal array of liquid crystal (LC) lenses operated by applied voltage enabled a focus-tunable 3D endoscopic system using a single image sensor [27]. Seven hole-patterned ITO electrodes on the upper layer produced a smooth, parabolic-like gradient electric field distribution to control the phase profile of each LC lens. The hexagonal LC lens array captured object images from different viewpoints on a single image sensor, which were used to reconstruct 3D images (Fig. 2a). Moreover, the same group reported a 2D/3D tunable endoscopic imaging system using a dual-layer-electrode LC lens [28]. The multi-functional LC lens (MFLC-lens) based endoscope was 2D/3D switchable as well as focus-tunable in both modes by controlling the applied voltage (Fig. 2b). Another single-imager stereoscopic camera utilized a parallel-plate-rotating MEMS device that changes the beam path through a transparent parallel plate [29]. An electrothermal bimorph actuator was fabricated, and an anti-reflective optical plate was placed directly above the microstructure; rotating the parallel plate by up to 37° in front of an endoscopic camera module generated binocular disparities between successive images in temporal division, comparable to binocular cameras with a 100 μm baseline distance (Fig. 2c). In addition, they successfully demonstrated anaglyph images and calculated disparity maps for 3D imaging by capturing two optical images at the respective plate positions. Another MEMS-enabled stereoscopic imaging system was a microprism array (MPA) based stereo endoscopic camera [30]. The MPA, with a 24° apex angle and a symmetric arrangement, microfabricated using conventional photolithography, thermal reflow, and polydimethylsiloxane (PDMS) replication, splits light rays from an object into two stereo images when placed in front of a single camera module (Fig. 2d). Object distances were calculated from the two stereo images formed by refraction through the symmetric MPA and compared with the actual distances.
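
To illustrate how a tilted transparent parallel plate can emulate a stereo baseline on a single sensor, the sketch below evaluates the standard lateral-displacement formula for a plane-parallel plate. The plate thickness, refractive index, and tilt angles are illustrative assumptions and are not taken from the cited work [29].

```python
import math

# Sketch of how tilting a transparent parallel plate shifts the optical
# axis laterally, creating an effective stereo baseline on a single sensor
# (the principle behind the plate-rotating MEMS device [29]).
# Plate thickness, refractive index, and tilt angle below are illustrative
# assumptions, not the values reported in the cited work.

def lateral_shift(thickness_m: float, n: float, tilt_deg: float) -> float:
    """Lateral ray displacement through a tilted plane-parallel plate:
    s = t * sin(theta) * (1 - cos(theta) / sqrt(n^2 - sin^2(theta)))."""
    th = math.radians(tilt_deg)
    return thickness_m * math.sin(th) * (
        1.0 - math.cos(th) / math.sqrt(n ** 2 - math.sin(th) ** 2))

if __name__ == "__main__":
    # Rotating an assumed 0.5-mm glass plate (n = 1.5) by +/- 18.5 deg
    # gives two viewpoints separated by roughly 0.1 mm.
    s = lateral_shift(0.5e-3, 1.5, 18.5)
    print(f"shift per tilt: {s * 1e6:.1f} um, "
          f"effective baseline: {2 * s * 1e6:.1f} um")
```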

Fig. 2
figure 2

Single image sensor based optical systems for 3D stereoscopic imaging; a hexagonal LC lens arrays for 3D endoscopy and 3D reconstruction result [27]. b Dual layer electrode LC lens arrays for 2D/3D tunable endoscopy and their 2D/3D mode imaging results [28]. c Electrothermal MEMS parallel plate rotation device with the anaglyph image and calculated disparity map of a slanted, textured object [29]. d Microprism array based stereo endoscopic camera and stereoscopic imaging result [30]

The structured light method using a digital micromirror device (DMD), which can selectively reflect incoming light rays to generate structured light patterns, has enabled various high-speed 3D imaging studies. However, the overall size of a DMD system is too large to be assembled into miniaturized optical devices, so recent research on structured light generation for 3D surface imaging has utilized optical MEMS devices for compact configurations. Previous works on structured light based 3D surface imaging with optical MEMS devices are mainly divided into those using actuated reflective MEMS mirrors [31,32,33,34] and those generating diffraction patterns by laser transmission through grating micro-/nanostructures [35,36,37,38]. A liquid-immersed MEMS mirror was demonstrated to enlarge the scanning field of view (FOV) for 3D surface imaging from 90° to 150° via the “Snell’s window” effect (Fig. 3a) [31]. The fabricated 1D scanning MEMS mirror generated a structured light pattern in combination with a cylindrical lens that converted the laser spot into a laser line stripe. In addition, they reconstructed a depth map by illuminating objects positioned from 64° to 128° with structured light from the designed projector. The projector module could only capture stationary scenes because the liquid-immersed MEMS actuator caused heat transfer and turbulence inside the liquid when operated at high speed. In addition, a line array projector module combining a single-axis torsional MEMS mirror with a diffractive microstructure was demonstrated (Fig. 3b) [32, 33]. The deformation of the projected line array pattern, which was generated by scanning diffractive dot array patterns at a 25-kHz frequency, was captured by a CMOS camera and used to estimate the depth profile of the object, which agreed with the geometrical size of the target object. Furthermore, a variable structured illumination projector using a laser-modulated 2D Lissajous scanning MEMS mirror was reported (Fig. 3c) [34]. The pattern density of the projected structured light was controlled by modulating the laser beam at the least common multiple of the scanning frequencies, while the MEMS mirror was scanned at frequencies whose greatest common divisor (GCD) was greater than 1. Variable structured illumination was achieved by changing the GCD of the scanning frequencies and the phase of the operating signals.
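
The frequency relationship described for the Lissajous projector can be made concrete with a short sketch: the mirror scans at two frequencies whose GCD exceeds 1, the laser is pulsed at their least common multiple (LCM), and the number of dots per repeat period falls as the GCD grows. The frequencies and phase below are illustrative assumptions, not the operating values of the cited device [34].

```python
import math

# Sketch of the Lissajous-scanning idea behind the variable structured
# illumination projector [34]: the mirror scans at two frequencies whose
# GCD exceeds 1, and the laser is pulsed at their LCM, so the dots land
# on a stable, repeating pattern. Frequencies/phase are illustrative.

def lissajous_dots(fx_hz: int, fy_hz: int, phase_rad: float = 0.0):
    """Return (x, y) positions of laser dots over one pattern period."""
    gcd = math.gcd(fx_hz, fy_hz)   # pattern repeats every 1/GCD seconds
    lcm = fx_hz * fy_hz // gcd     # laser pulse rate [Hz]
    n_dots = lcm // gcd            # pulses within one repeat period
    dots = []
    for k in range(n_dots):
        t = k / lcm
        x = math.sin(2 * math.pi * fx_hz * t + phase_rad)
        y = math.sin(2 * math.pi * fy_hz * t)
        dots.append((x, y))
    return dots

if __name__ == "__main__":
    # A larger GCD means fewer distinct dots (coarser pattern), and vice versa.
    for fx, fy in [(1000, 1100), (1000, 1050)]:
        print(f"fx={fx}, fy={fy}, GCD={math.gcd(fx, fy)}, "
              f"dots per period={len(lissajous_dots(fx, fy))}")
```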

Fig. 3
figure 3

Structured light pattern generation system by scanning the MEMS mirror for 3D surface imaging; a wide-angle structured light generation with a 1D MEMS mirror immersed in liquid and its 3D imaging results with a pattern generation FOV over 90° [31]. b Line array projector consisting of a 1D scanning MEMS mirror and a diffractive microstructure, with estimation of the depth profile of the object by calculating the line deformation [32, 33]. c Variable structured illumination using a Lissajous scanning MEMS mirror and optical patterns from the projector module with different GCDs and phases [34]

Other studies have used transmissive diffraction gratings for structured light pattern generation because of their compact optical configurations, which require neither a MEMS mirror nor its actuating circuit. A binocular 3D imaging system combined a conventional stereoscopic camera with a 64 × 64 Dammann grating for laser spot array generation [35, 36]. The Dammann array projector, consisting of a laser diode (LD), a collimating lens, a Dammann grating, and an objective lens in a simple configuration, was placed between the binocular cameras to provide laser spot arrays for stereo matching of the two cameras (Fig. 4a). The overall system was less than 14 cm long and weighed less than 170 g. Another structured light projector generated dot array patterns by combining a designed transmission diffractive optical element (DOE) with two types of light sources: an edge-emitting laser (EEL) and a patterned vertical-cavity surface-emitting laser (VCSEL) array (Fig. 4b) [37]. E-beam lithography and nano-imprint lithography enabled fabrication of the DOE, whose phase distribution was designed with the Gerchberg–Saxton algorithm. The fabricated DOE, placed in front of the collimated EEL or patterned VCSEL array source, produced irregular random or regular structured light patterns, respectively. Another structured light projector using a multifunctional binary DOE generated line array patterns with high contrast and uniformity [38]. Multiple-stripe patterns were generated with high diffraction efficiency by designing a binary surface relief that combines the functions of a diffractive lens, a Gaussian-to-tophat beam shaper, and a Dammann grating (Fig. 4c). The designed multifunctional DOE, fabricated by E-beam lithography, showed diffraction efficiencies up to 88% with 20° fanout angles.
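
For readers unfamiliar with the Gerchberg–Saxton algorithm mentioned above, the sketch below shows the basic iterative loop that retrieves a DOE phase map producing a target far-field intensity under the Fraunhofer (FFT) approximation. The grid size, iteration count, and target dot array are illustrative assumptions, not the design parameters of the cited DOE [37].

```python
import numpy as np

# Minimal sketch of the iterative Gerchberg-Saxton loop used to compute a
# DOE phase profile that steers a collimated beam into a target far-field
# intensity pattern (e.g., a dot array). Grid size, iteration count, and
# target pattern are illustrative assumptions, not the cited design [37].

def gerchberg_saxton(target_amplitude: np.ndarray, n_iter: int = 50) -> np.ndarray:
    """Return a DOE phase map whose far field approximates target_amplitude."""
    phase = 2 * np.pi * np.random.rand(*target_amplitude.shape)  # random start
    for _ in range(n_iter):
        # DOE plane: unit amplitude (uniform illumination), current phase.
        field = np.exp(1j * phase)
        # Propagate to the far field (Fraunhofer approximation = FFT).
        far = np.fft.fft2(field)
        # Impose the target amplitude, keep the computed far-field phase.
        far = target_amplitude * np.exp(1j * np.angle(far))
        # Propagate back and keep only the phase at the DOE plane.
        phase = np.angle(np.fft.ifft2(far))
    return phase

if __name__ == "__main__":
    # Target: a sparse 8 x 8 dot array on a 128 x 128 far-field grid.
    target = np.zeros((128, 128))
    target[::16, ::16] = 1.0
    doe_phase = gerchberg_saxton(target)
    print("phase map shape:", doe_phase.shape)
```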

Fig. 4
figure 4

Structured light systems based on diffraction generation from laser transmission through grating structures; a binocular 3D imaging system using a structured light projector with a Dammann grating and the diffraction patterns captured from the designed Dammann grating (inset) [35, 36]. b Structured light projector with a DOE designed by the Gerchberg–Saxton algorithm and patterned VCSEL arrays; the projected dot array pattern is shown at the bottom with the fabricated DOE (inset) [37]. c Multifunctional binary DOE combining a diffractive lens, Gaussian-to-tophat beam shaper, and Dammann grating; the projected tophat line array pattern is shown at the bottom with the fabricated DOE (inset) [38]

MEMS fabrication techniques have also enabled miniaturized, low-cost ToF based 3D imaging systems [39,40,41]. A LiDAR system with a 256 × 64-pixel optical ToF sensor and a MEMS laser scanning device was introduced [39]. Pulsed signals emitted from three LDs traveled through collimating lenses and were reflected by a two-axis MEMS scanner toward the target scene, with the FOV divided into three scanning regions (Fig. 5a). The pulsed light reflected from the target objects was then received by a custom 256 × 64-pixel single-photon CMOS image sensor to calculate the depth profile. The authors precisely measured distances up to 20 m with a maximum error of 13.5 cm. Other MEMS-enabled ToF studies using a micromachined electro-absorptive optical modulator have also been reported [18, 40, 41]. The optical modulator was designed as a multilayer stacked structure of diffractive mirrors and electro-absorptive layers to maximize the magnitude of optical modulation. The fabricated device modulates the IR image reflected from the target object to extract the phase delay of the traveled IR light. The transmittance difference generated by applying voltage to the device was 51.8%, a sufficiently large modulation to obtain adequate IR intensity and a good signal-to-noise ratio. After characterization, the optical modulator was placed between the beam splitter and the CMOS image sensors to identify the phase delay of the incoming IR light at each pixel for depth calculation and RGB image matching (Fig. 5b).
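
As a generic illustration of phase-delay (indirect) ToF depth recovery of the kind used in modulator-based systems, the sketch below demodulates four phase-stepped samples of the reflected signal and converts the recovered phase into depth. The modulation frequency and sample values are illustrative assumptions, not the parameters of the cited works [18, 40, 41].

```python
import math

# Generic sketch of indirect (phase-delay) ToF depth recovery, in the
# spirit of the modulator-based system [18, 40, 41]: the reflected IR
# signal is sampled at four phase offsets (0, 90, 180, 270 degrees) and
# depth is recovered from the measured phase delay. The modulation
# frequency and sample values are illustrative assumptions.

C = 299_792_458.0  # speed of light [m/s]

def depth_from_phase_samples(a0: float, a90: float, a180: float, a270: float,
                             f_mod_hz: float) -> float:
    """Depth = c * phi / (4 * pi * f_mod), phi from 4-phase demodulation."""
    phi = math.atan2(a90 - a270, a0 - a180) % (2 * math.pi)
    return C * phi / (4 * math.pi * f_mod_hz)

if __name__ == "__main__":
    # Simulate a target at 1.5 m with a 20-MHz modulation frequency.
    f_mod, d_true = 20e6, 1.5
    phi_true = 4 * math.pi * f_mod * d_true / C
    samples = [math.cos(phi_true - off) for off in
               (0.0, math.pi / 2, math.pi, 3 * math.pi / 2)]
    print(f"recovered depth: {depth_from_phase_samples(*samples, f_mod):.3f} m")
```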

Fig. 5
figure 5

ToF based 3D surface imaging systems; a 256 × 64-pixel single-photon CMOS sensor with a two-axis scanning MEMS mirror and the measured depth image of a 3D scene [39]. b Micromachined electro-absorptive optical modulator for ToF based 3D imaging and the depth map of 3D objects acquired using the ToF system [18, 40, 41]

Conclusion

We have overviewed optical MEMS devices for 3D surface imaging applications according to the underlying 3D imaging technique: stereoscopic vision, structured light, and time-of-flight. Table 2 summarizes optical MEMS devices for 3D surface imaging camera systems. MEMS techniques enabled single-image-sensor 3D stereoscopic imaging by introducing novel micro-optical devices instead of the two identical camera modules of conventional stereoscopic apparatus, which reduces the overall system size with relatively simple configurations. MEMS-enabled structured light based 3D imaging was achieved either with scanning MEMS mirrors and additional modulation or through diffraction generation by laser transmission through micro grating structures. MEMS-based structured laser pattern generating devices are well suited to compact optical systems for 3D surface imaging. Studies on MEMS-enabled ToF imaging have been fewer than those on stereoscopic vision or structured light, since device fabrication is limited by the high cost and complex procedures required for high-speed performance. However, miniaturized ToF sensors using MEMS techniques are better suited to long-range distance measurement applications, such as LiDAR, than the other 3D imaging techniques. The proper optimization and utilization of compact MEMS-based 3D surface imaging systems will lead to more effective 3D imaging and distance measuring applications.

Table 2 MEMS-enabled 3D imaging system comparison summary