Abstract
Full-field three-dimensional (3D) measurement technology based on phase information has become an indispensable part of geometric dimension measurement in modern scientific research and engineering applications. The field has continued to evolve to handle highly reflective, diffuse, and specular surfaces, and many novel methods have emerged to increase measurement speed, enhance data accuracy, and improve system robustness. Herein, we systematically and comprehensively discuss the latest research progress in full-field 3D shape measurement based on phase information. First, the fundamentals of 3D shape measurement based on phase information are introduced, namely, phase-shifting and transform-based methods. Second, recent technological innovations are highlighted, including increases in measurement speed and automation and improvements in robustness in complex environments. In particular, the challenges that these advances face in measuring highly dynamic and composite surfaces are presented: multiexposure techniques extend the dynamic range of the camera to reduce the effects of overexposure but increase the time cost and impose high hardware requirements; fringe-adaptive techniques overcome light variations but are computationally complex; and multipolarization camera techniques reduce the effects of light variations but are sensitive to the light source. Third, the phase-shifting method combined with coding improves measurement speed at a slight cost in accuracy, and deep learning techniques cope with measurements in complex environments but involve a cumbersome dataset-generation process. Finally, future research directions are suggested, and the remaining challenges are presented. Overall, this work provides a reference for researchers and engineers.
Highlights
1. This paper reviews technological advancements aimed at high-precision and high-speed measurements, enabling more efficient and reliable full-field 3D shape measurements.
2. The challenges in advancing error compensation for phase calculations are outlined, and various techniques are discussed together with their advantages and trade-offs.
3. Future research directions and challenges in the field of full-field 3D measurement based on phase information are examined, providing insightful perspectives.
1 Introduction
Three-dimensional (3D) measurement technology has become increasingly important in manufacturing, aerospace, medical diagnosis, and cultural relic protection because it can provide comprehensive and rich details of measured objects [1,2,3]. Thus, how to obtain comprehensive 3D shape information quickly and accurately has become an important and challenging research focus. 3D measurement technology can be divided into contact and noncontact methods. Coordinate measuring machines (CMMs) operate in contact mode by touching the measured object surface; soft objects therefore cannot be measured by CMMs because of potential damage [4]. Noncontact optical measurement mainly includes the time-of-flight, interferometry, and structured light techniques [5]. Because of its full-field coverage, high precision, high speed, automatic data processing, and robustness, the phase calculation-based structured light (PSL) technique has gradually become the mainstream of 3D measurement [6, 7].
PSL has two types: fringe projection profilometry (FPP), which applies to diffuse surfaces, and phase measuring deflectometry (PMD), which applies to specular surfaces. FPP projects sinusoidal fringe patterns through a projector, while PMD displays sinusoidal fringe patterns on a screen [8,9,10]. The fringe patterns deformed by the measured surface are captured by a camera, and phase information is calculated from the captured patterns. After calibrating the relationship between phase and depth (or gradient for PMD), the 3D shape of the measured surface can be obtained. High accuracy and high speed are the two most challenging problems in PSL. To improve both, many new 3D measurement methods have continuously emerged, but the literature lacks a systematic comparison and analysis of them.
He et al. compared the commonly used phase-unwrapping methods for FPP systems and analyzed their principles and noise models in depth [11, 12]. Juarez-Salazar et al. reviewed the absolute phase recovery methods that apply to digital fringe projection systems, explained the principles of each individual absolute phase-unwrapping method, and finally provided some practical tips on handling the common phase-unwrapping artifact problems [13]. Xu et al. discussed the advancement of high-curvature surface measurement, portable online measurement, and related topics for PMD [14, 15] and tried to address the technical challenges in measuring high-curvature specular surfaces; they also emphasized that PMD techniques are moving toward portable and online measurements to meet the needs of different application scenarios. Building on this literature, the present paper comprehensively compares the latest advances in handling phase errors to help select an appropriate measuring method. Although these papers have reviewed recent advancements in 3D measurement, none of them summarize and quantitatively evaluate high-precision and high-speed phase-unwrapping methods for composite surfaces.
For 3D shape measurement based on phase information in particular, the PSL method described above is the principal route to full-field 3D shape measurement and an important, representative technology owing to its high accuracy, fast speed, full-field coverage, and automatic data processing. Herein, we discuss the advantages and disadvantages of newly emerging techniques for highly reflective, diffuse, and specular surface measurements. The paper is organized as follows. Section 2 introduces the basic principles of FPP and PMD. Section 3 presents an overview of improved phase error compensation methods for saturated pixels and nonlinearity. Section 4 reviews the research progress of high-speed 3D measurement technology based on PSL in both hardware and algorithms. Section 5 discusses the advantages and disadvantages of the latest optimization methods. Section 6 draws the conclusion and highlights prospective future trends.
2 Basic Principles
FPP and PMD are used to measure the 3D topography of diffuse, highly reflective, and specular surfaces [16, 17]. Both methods use sinusoidal fringe patterns to modulate depth (or gradient) information, which must be demodulated by fringe analysis [18, 19]. Fringe analysis involves obtaining the wrapped and unwrapped phases [20, 21]. Multiple-step phase-shifting and transform-based methods have been widely studied to calculate the wrapped phase, while spatial and temporal phase-unwrapping techniques have been used to determine the unwrapped phase [22].
As an example of the multiple-step phase-shifting method, the nth fringe pattern captured by the camera can be described as

$$I_{n}\left(x,y\right)=A\left(x,y\right)+B\left(x,y\right)\cos\left[\varphi\left(x,y\right)-\frac{2\uppi n}{N}\right],\quad n=0,1,\ldots,N-1,$$

where \(A\left(x,y\right)\) is the background light intensity, \(B\left(x,y\right)\) is the fringe contrast, N is the total number of the shifted fringe patterns, and \(\varphi \left(x,y\right)\) is the fringe phase value, which can be solved using

$$\varphi\left(x,y\right)=\arctan\frac{\sum_{n=0}^{N-1}I_{n}\left(x,y\right)\sin\left(2\uppi n/N\right)}{\sum_{n=0}^{N-1}I_{n}\left(x,y\right)\cos\left(2\uppi n/N\right)}.$$

The four-step phase-shifting method (N = 4) is often used in the traditional wrapped phase calculation as a compromise between speed and accuracy.
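As a concrete illustration, the four-step case can be sketched numerically. The following is a minimal sketch assuming phase shifts of nπ/2 and an ideal linear camera response; the function and variable names are illustrative, not from the cited works:

```python
import numpy as np

def wrapped_phase_4step(i0, i1, i2, i3):
    """Four-step phase shifting: recover the wrapped phase from four
    fringe images with shifts of 0, pi/2, pi, 3*pi/2.

    Assumes the fringe model I_n = A + B*cos(phi - n*pi/2), so that
    i1 - i3 = 2*B*sin(phi) and i0 - i2 = 2*B*cos(phi)."""
    return np.arctan2(i1 - i3, i0 - i2)  # wrapped to (-pi, pi]

# Synthetic demonstration: a known phase ramp encoded into four fringes.
x = np.linspace(0, 1, 256)
phi_true = 4 * np.pi * x                                   # absolute phase
fringes = [128 + 100 * np.cos(phi_true - n * np.pi / 2) for n in range(4)]
phi_wrapped = wrapped_phase_4step(*fringes)

# Compare with the true phase, wrapping the difference to (-pi, pi].
err = np.angle(np.exp(1j * (phi_wrapped - phi_true)))
print(np.max(np.abs(err)))
```

Note that the arctangent removes both the background A and the contrast B, which is why phase-based methods are insensitive to slowly varying illumination.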
During phase calculations, both FPP and PMD are affected by the nonlinear response of digital optical devices, including cameras, screens, and projectors. There is also an essential difference between the two methods: in FPP, phase and depth are directly related, whereas in PMD, gradient integration is required. Using the triangulation principle, the phase value can be directly converted into depth data after calibrating the FPP-based system. In PMD, the phase value is related to gradient data, which must be integrated by regularization to obtain depth after calibrating the system parameters. For composite surfaces on which diffuse and specular reflection coexist, the two methods are combined, with FPP depth data used to constrain the PMD inversion problem.
2.1 Basic Principle of FPP
FPP is the most representative structured light 3D measurement technique for diffuse surfaces. There are two main phase measurement methods, namely, the multiple-step phase-shifting method implemented in the time domain and the transform-based method implemented in the spatial domain [23]. Figure 1 displays the FPP system, which consists mainly of a digital projector, a charge-coupled device (CCD) camera, and a computer. A sinusoidal fringe pattern is generated by the computer and projected onto the measured object surface by the projector. The fringe pattern is deformed by the measured surface and collected by the camera. The deformed fringe pattern is demodulated to obtain the wrapped and unwrapped phases. After calibrating the system to establish the relationship between phase and depth, the 3D morphology is reconstructed.
2.2 Basic Principle of PMD
PMD is another well-known technique for measuring the 3D shape of specular surfaces based on the law of reflection, offering the advantages of high dynamic range (HDR), high accuracy, and high speed. Figure 2 illustrates the PMD-based system, which consists of a liquid crystal display (LCD) screen, a CCD camera, and a computer. The computer creates a sinusoidal fringe pattern, which is displayed on the LCD. Viewed from the specular reflection direction, the fringe pattern is deformed according to the local normal of the measured specular surface, and the deformed pattern is captured by the CCD camera [24, 25]. The phase information contained in the deformed fringe pattern can be obtained by the above-mentioned algorithms [26]. The obtained phase is related to the gradient, which must be integrated to reconstruct the 3D shape of the measured specular surface after calibrating the system parameters [27].
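The gradient-to-height integration step can be illustrated with the classic Frankot–Chellappa least-squares method. The sketch below is one common Fourier-domain choice, assuming a regular grid and periodic boundaries; it is an illustrative stand-in, not the specific regularization of the cited works:

```python
import numpy as np

def frankot_chellappa(p, q, dx, dy):
    """Least-squares integration of a gradient field into a height map z.
    p = dz/dx (varies along axis 1), q = dz/dy (varies along axis 0).
    Assumes periodic boundaries; the absolute height offset is lost."""
    ny, nx = p.shape
    u = 2 * np.pi * np.fft.fftfreq(nx, d=dx)   # angular spatial frequencies
    v = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    U, V = np.meshgrid(u, v)                   # U along axis 1, V along axis 0
    denom = U**2 + V**2
    denom[0, 0] = 1.0                          # avoid division by zero at DC
    Z = (-1j * (U * np.fft.fft2(p) + V * np.fft.fft2(q))) / denom
    Z[0, 0] = 0.0                              # mean height is undetermined
    return np.real(np.fft.ifft2(Z))

# Demonstration on an analytic surface with known gradients.
n = 64
x = np.arange(n) / n
X, Y = np.meshgrid(x, x)                       # X along axis 1, Y along axis 0
z_true = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)
p = 2 * np.pi * np.cos(2 * np.pi * X) * np.cos(2 * np.pi * Y)   # dz/dx
q = -2 * np.pi * np.sin(2 * np.pi * X) * np.sin(2 * np.pi * Y)  # dz/dy
z_rec = frankot_chellappa(p, q, dx=1 / n, dy=1 / n)
print(np.max(np.abs(z_rec - z_true)))
```

On real, non-periodic data, boundary artifacts make windowing or Poisson-solver variants preferable; the Fourier form is shown here only to make the regularized inversion concrete.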
To skip the integration process, a direct PMD (DPMD) is developed to establish the mapping relationship between phase and depth, which is illustrated in Fig. 3 [28, 29]. The system comprises two LCD screens, a CCD camera, and a plate beam splitter (BS). The BS is placed in a fixed position, such that a virtual image LCD1´ of LCD1 is parallel to LCD2; that is, two LCD screens are located at two different positions. The plate BS is therefore used to realize the parallel design of the two screens to avoid mechanically moving a screen to two positions, LCD1´ and LCD2, along the normal of the reference plane.
The height h of the measured specular surface is directly obtained using the geometric relationship in Fig. 3.
Therefore, DPMD has a great advantage in measuring the 3D shape of isolated and/or discontinuous specular surfaces.
2.3 Composited Diffused/Specular Surface
The current FPP and DPMD techniques are applicable to diffuse and specular reflective surfaces, respectively [30]. However, there are many composite reflective surfaces in aerospace and advanced manufacturing in which both diffuse and specular reflective surfaces coexist [31]. Figure 4 shows a schematic of the PSL-based composite surface measurement, which is composed of a projector, two LCD1 and LCD2 screens, and a camera, all located on the same side of the measured object [32].
To measure objects with different reflection characteristics, the projector and the two LCD screens respectively project and display sinusoidal fringe patterns that follow multifrequency heterodyne sequences. The fringe pattern displayed on the screens is imaged after reflection from the specular part of the composite reference plane, while the fringe pattern projected by the projector is diffusely reflected from the diffuse part. In Fig. 4, the reference plane is parallel to the two LCD screens; the spatial distance between LCD1 and LCD2 is Δd, and that between LCD1 and the composite reference plane is d. The incident ray \({l}_{1}\), which passes through the reference phase points \({\varphi }_{\text{r}1}\) and \({\varphi }_{\text{r}2}\) on the two screens, intersects the reference plane at point K and obeys the law of reflection. The reflected ray \({l}_{0}\) is imaged at point m on the imaging target surface.
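The temporal unwrapping underlying such multifrequency sequences can be sketched in its simplest two-frequency form, where the low-frequency pattern has a single fringe across the field and is therefore already unambiguous. This is a simplification of the cascaded heterodyne schemes used in practice:

```python
import numpy as np

def temporal_unwrap(phi_high, phi_low, f_high):
    """Two-frequency temporal phase unwrapping.
    phi_low: wrapped phase of a unit-frequency pattern (one fringe across
    the field, hence unambiguous in [0, 2*pi)). phi_high: wrapped phase of
    a pattern with f_high fringes. The fringe order k is found from the
    scaled low-frequency phase, then added back as multiples of 2*pi."""
    k = np.round((f_high * phi_low - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k

# Synthetic check: a ramp covering f_high fringes across the field.
f_high = 16
x = np.linspace(0, 1, 1000, endpoint=False)
phi_abs = 2 * np.pi * f_high * x               # true absolute phase
phi_low = np.mod(2 * np.pi * x, 2 * np.pi)
phi_high = np.mod(phi_abs, 2 * np.pi)
phi_unwrapped = temporal_unwrap(phi_high, phi_low, f_high)
print(np.max(np.abs(phi_unwrapped - phi_abs)))
```

The rounding step is where noise matters: the low-frequency phase error is amplified by f_high, which is why practical systems insert intermediate frequencies rather than jumping directly from one fringe to many.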
3 Advancement of Error Compensation for Phase Calculations
Diffuse surfaces exhibit a wide range of reflectivity, scattering light in many directions [33]. While diffuse surfaces do not typically produce the bright reflection spots seen on specular surfaces, they can still saturate camera pixels under strong illumination [34]. Specular surfaces are highly reflective, with light reflected in a particular direction, creating bright reflective spots [35]. At certain illumination angles, the reflected light can be intense enough to saturate the camera pixels. Pixel saturation distorts the recorded intensities at the affected points on the measured surface, making them unusable for accurate 3D reconstruction. The next sections discuss pixel saturation and new techniques for compensating it to improve data accuracy in FPP and PMD.
3.1 Saturation
Many applications require measurements on highly reflective or glossy surfaces; thus, solving saturation problems is crucial to improving data accuracy. Highly reflective and shiny surfaces often produce bright spots or areas in which the captured image is overexposed. As shown in Fig. 5, in overexposed areas the fringe intensity is clipped at its maximum, so sinusoidal fringe detail is lost. This can also blur surface features in highly reflective areas, making it difficult to extract surface detail from the fringes.
To obtain accurate 3D data, resolving the saturation problem is essential. Conventional methods usually require multiple projections, which increase the measurement time cost; thus, developing more time-saving methods can improve practical feasibility and efficiency. The solution to this problem is important not only for scientific research but also for industrial applications: in manufacturing and quality control, accurate measurements of highly reflective objects can ensure product quality and reduce waste [36, 37]. Errors in HDR imaging can come from several factors, such as saturated pixels, sensor noise, lens aberrations, incompatible lighting, the camera response function (CRF), and image calibration errors [38, 39].
3.1.1 Hardware-Assisted Method
Based on the traditional monocular system [40], Zhang et al. added a linear polarizer into a binocular system to reduce the light intensity of highly reflective surfaces and eliminate saturation due to specular reflection [41], thus avoiding errors in 3D reconstruction.
By using a DLP projector, Zhang et al. enabled adaptive adjustment of the projection intensity to cope with changing lighting conditions and avoid saturation [42]. Such adjustments help increase the signal-to-noise ratio of the image, which in turn improves the measurement accuracy of the phase information [43]. Li et al. developed multiexposure techniques that project a structured light pattern multiple times and capture images with different exposure times [44], improving how the camera records fringe modulation [45] and allowing HDR scenes to be imaged more effectively [46, 47]. Wang et al. employed a multipolarization camera, which incorporates four micropolarization filters in different directions at each pixel position [48]. This hardware design allows images under different polarization directions to be captured simultaneously, providing more information for subsequent data processing than conventional cameras. Table 1 compares the merits and demerits of the above-mentioned hardware methods.
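The per-pixel fusion rule behind multiexposure techniques can be sketched as follows. The saturation threshold and the fallback policy are illustrative assumptions, not the cited authors' exact procedure:

```python
import numpy as np

def fuse_exposures(stack, saturation=250):
    """Fuse fringe images taken at different exposure times
    (stack shape: n_exposures x H x W). For each pixel, keep the brightest
    value that is still below the saturation threshold, which preserves
    fringe modulation in both dark and highly reflective regions. If every
    exposure saturates a pixel, the first (shortest) exposure is kept."""
    stack = np.asarray(stack, dtype=float)
    valid = stack < saturation
    # Mask out saturated samples, then take the per-pixel maximum.
    masked = np.where(valid, stack, -np.inf)
    fused = masked.max(axis=0)
    # Pixels saturated in all exposures: fall back to the first exposure.
    all_saturated = ~valid.any(axis=0)
    fused[all_saturated] = stack[0][all_saturated]
    return fused

# Toy example: a dark pixel and a highlight pixel across two exposures.
short = np.array([[40.0, 200.0]])
long_ = np.array([[120.0, 255.0]])      # highlight saturates at long exposure
print(fuse_exposures([short, long_]))   # -> [[120. 200.]]
```

In a real system the fusion is applied to each fringe image of the phase-shifting sequence, and exposures may additionally be normalized by exposure time before fusion.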
3.1.2 Data Processing
Zhou et al. generated a fringe pattern without saturated regions by analyzing the response of neighboring pixels and calculating the optimal projection intensity [49]. These patterns can be used for subsequent phase measurement and 3D shape reconstruction; compared with conventional methods, this approach reduces computational complexity through a more efficient projection and data fusion process. Li et al. employed an estimation-based saturated pixel identification method that uses an estimate of the local camera response function (CRF) to identify saturated pixels [44]. This provides more accurate saturated pixel identification than traditional methods, which may use fixed thresholds or rely on image intensity. Spatially varying pixel intensity (SVPI) technology focuses on improving traditional methods in terms of data processing [50, 51]. It extends the dynamic range of raster projection systems by adjusting the intensity of the projection pattern pixel by pixel [52, 53]. Two strategies are introduced to reduce the number of parameters to be optimized, which both (1) increases the computational speed and (2) reduces the number of possible local optima [54]. Zhang et al. used hybrid quality-oriented phase fusion, in which weights derived from quality metrics guide the fusion process [55]. This helps better handle intensity nonlinearity in the reconstruction.
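The pixel-wise intensity adaptation at the heart of SVPI-style methods can be sketched as below, assuming an approximately linear projector–camera response estimated from a single full-intensity capture. This is a strong simplification of the published optimization; the target value and clipping bounds are illustrative:

```python
import numpy as np

def adapt_projection(captured_full, p_max=255.0, target=230.0):
    """Estimate a per-pixel projection intensity so that the expected
    captured intensity lands at `target`, just below saturation.
    Assumes an approximately linear response captured ~ r * projected,
    with r estimated from one capture under full-intensity projection."""
    r = np.clip(captured_full, 1e-6, None) / p_max  # per-pixel responsivity
    return np.clip(target / r, 0.0, p_max)

# Toy example: a bright (overexposing) pixel and a dim pixel.
captured = np.array([[255.0, 60.0]])    # capture under full projection
p_adapt = adapt_projection(captured)
print(p_adapt)   # -> [[230. 255.]]  bright pixel attenuated, dim one at max
```

Pixels that were already saturated in the probe capture underestimate r, so practical methods iterate this adjustment or start from several probe intensities.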
Table 2 compares the merits and demerits of the data processing methods. Figure 6a–f illustrate the simulation results of the different methods, while Fig. 6g, h display the combined comparison and root mean square errors (RMSEs) of the different methods, respectively.
In addition, further experimental validation is carried out. Multiple exposure conditions and an iterative algorithm are selected for the experimental recovery of 3D shape data [56].
In Fig. 6h, the neighborhood-pixel adaptive fringe generation method requires only one projection operation to adaptively estimate the optimal projection intensity of each pixel, reducing the time cost; however, it places high requirements on hardware devices, and the saturation threshold setting may lead to large errors. In Fig. 6b, c, h, multiexposure combined with CRF estimation successfully solves the saturated pixel problem in HDR imaging but is restricted by its high computational cost and limited applicability. In Fig. 6d, SVPI successfully overcomes the dynamic range limitations of raster projection systems and reduces measurement errors but requires complex calculations to determine the appropriate luminance adjustment for each pixel and relies on hardware devices. Figure 7a shows the camera-captured fringe pattern of the measured step object. Figure 7b, d, and f show the unwrapped phases obtained using the traditional multifrequency, multiexposure, and iterative methods, respectively. In Fig. 7e, g, least-squares iterative optimization is used to progressively approach the optimal solution, which converges stably under various noise levels [57]. However, the method remains sensitive to surface edges and discontinuities, which can lead to measurement errors. A weighted fusion method based on a quality metric is used to reduce the impact of low-quality data on the final 3D point cloud [58, 59], but its drawbacks are higher complexity and more computation and analysis time than traditional methods.
3.2 Intensity Nonlinearity
Cameras and projectors in the system can produce nonsinusoidal fringes due to the gamma effect, resulting in nonlinear phase errors [60]. Because of the nonlinear relationship between input and output light intensity, the error of the structured light system may greatly increase, so that the collected fringes deviate significantly from a standard sine waveform and introduce additional phase errors. Although the multistep phase-shifting method can reduce the impact of the nonlinear response, it is very time-consuming because of the increased number of collected images. Numerous methods have been developed to eliminate phase errors caused by nonlinear responses.
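The gamma effect and the simplest pre-encoding countermeasure can be demonstrated numerically. The pure power-law projector model and γ = 2.2 below are assumptions for illustration; real devices need the gamma (or a full response curve) to be calibrated first:

```python
import numpy as np

GAMMA = 2.2  # assumed projector gamma (pure power-law model)

def project(pattern, precorrect=False):
    """Simulate a projector with a gamma nonlinearity; patterns lie in
    [0, 1]. With precorrection, the pattern is first raised to 1/gamma,
    so the displayed intensity becomes linear again."""
    if precorrect:
        pattern = pattern ** (1.0 / GAMMA)
    return pattern ** GAMMA

def four_step_phase(phi, precorrect):
    # Four ideal fringes with shifts of n*pi/2, passed through the projector.
    imgs = [project(0.5 + 0.5 * np.cos(phi - n * np.pi / 2), precorrect)
            for n in range(4)]
    return np.arctan2(imgs[1] - imgs[3], imgs[0] - imgs[2])

phi_true = np.linspace(-np.pi, np.pi, 500, endpoint=False)
wrap = lambda a: np.angle(np.exp(1j * a))  # wrap a difference to (-pi, pi]
err_raw = wrap(four_step_phase(phi_true, False) - phi_true)
err_fix = wrap(four_step_phase(phi_true, True) - phi_true)
print(np.max(np.abs(err_raw)), np.max(np.abs(err_fix)))
```

Running this shows the characteristic periodic phase ripple of the uncorrected case vanishing after pre-encoding: the gamma distortion injects higher-order harmonics, and the four-step algorithm is sensitive to the 3rd and 5th of them.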
3.2.1 Hardware-Assisted Method
Hardware-based methods mainly include coherent light illumination and defocus projection techniques. Pak et al. modeled nonlinear errors and corrected them using the coherent light illumination method [61]. Guo et al. used laser excitation as the illumination source to ensure a stable phase relationship of illumination [62]. Compared with incoherent light sources, coherent light illumination can more accurately control phase information, thereby reducing nonlinear errors.
You et al. improved the quality of sinusoidal fringe patterns generated by binary defocusing [63]. Chen et al. used a projector to defocus binary fringes, thereby reducing the influence of higher-order harmonics by suppressing high-frequency components and compensating for nonlinear phase errors [64]. Hu et al. used a discrete Fourier series fitting method to establish the mathematical phase error function and developed an accurate gamma calculation method to compensate for system nonlinearity [65]. Li et al. used super-grayscale multifrequency temporal phase unwrapping (STPU), which offers a wider grayscale range [66]; STPU can therefore better handle the nonlinear response of the camera sensor, reducing the effect of intensity nonlinearity on the measurement. Table 3 compares the merits and demerits of the nonlinearity-correcting hardware methods.
3.2.2 Data Processing
The phase error compensation method based on the probability distribution function (PDF) has been proposed [67]. By statistically analyzing the phase error, especially the modeling and utilization of the PDF of the phase distribution, the nonlinear effect can be more effectively compensated. Wang et al. further proposed an improved phase-shifting profilometry with a phase probability equalization method that used the equalized phase to estimate the linear phase [68]. Yu et al. used the correlation process and the probability density function to determine the error coefficient and compensated the error by a general two-coefficient error model [69]. Zheng et al. used Zernike polynomial fitting to obtain more accurate surface shape information [70]. Yan et al. optimized the Gauss–Markov theorem to obtain the optimal solution [71]. Muñoz et al. estimated and compensated for the nonlinear gamma factors introduced by optical systems in FPP by adjusting the least-squares plane to the estimated phase from the reference plane [22].
Qiao and You proposed deep learning methods for phase calculation, which are promising for a wide range of applications [72, 73]. Zhang et al. applied back-propagation neural networks to single-shot high-precision FPP to predict the phase of the fringe pattern, which can effectively suppress the phase error caused by gamma nonlinearity and further improve the measurement accuracy [74,75,76]. Ueda et al. used a data augmentation method to train the deep learning network [77]. Although deep learning methods reduce the number of parameters, they still require considerable computational resources for training, which may be a challenge for researchers or application areas with limited computing power.
Table 4 compares the merits and demerits of the data processing methods. Figure 8a–f display the simulation results of the different methods, while Fig. 8g, h show the combined comparison and RMSEs of the different methods, respectively.
In addition, further experimental validation is carried out. The PDF and least-squares iterative methods are selected to compensate for nonlinearity in the phase calculation stage and are verified experimentally.
The coherent illumination method can reduce the nonlinear response of the system and improve the accuracy of phase measurement but requires a more complex optical arrangement and equipment, which increases the complexity and cost of the system. The defocus projection technique in Fig. 8a is relatively simple and can be achieved by adjusting the optics, although it affects the spatial resolution [78]. Figure 9a, b show the camera-captured fringe patterns of the measured objects. Figure 9c, d show the unwrapped phases of the diffuse and specular surfaces obtained with the PDF method, and Fig. 9g, h show those obtained with the iterative method. The PDF method applied to the systematic error, as in Fig. 9e, f, is more complex; it has a certain versatility but requires a large number of samples to establish a statistical model. The iterative method in Fig. 9i, j can adapt to different environments and conditions and compensates systematic errors adaptively; however, it requires more computing resources and time, which may limit applications with strict real-time requirements.
4 High-Speed 3D Measurement
With the increasing demand for real-time 3D measurement and rapid development of measurement hardware equipment, high-speed 3D surface measurement technology has great scientific research significance and wide application value.
4.1 Hardware-Assisted Method
Landmann et al. reduced measurement time using the stereovision method with an infrared camera [79]. Liu et al. proposed a rotation projection method to achieve fast and low-cost structured light sequence projection [80]. Guo et al. designed a projection time sequence and achieved real-time 3D measurement at 50 fps [81].
Due to the high cost and difficulty of independently developing projection devices, research has turned to projection modes. Kang et al. used a projector to defocus and project binary fringes to achieve fast sinusoidal fringe projection [82]. To improve the binary projection pattern, Guo et al. achieved a sinusoidal pattern distribution at the defocus position of the projector [83, 84]. By using binary defocus projection technology, 3D measurement devices with DLP projectors can increase imaging speed to the kilohertz range [85, 86].
Reducing the number of images required for 3D reconstruction can not only further improve the efficiency of 3D measurement systems but also facilitate real-time 3D measurement based on low-cost hardware [87]. Yu et al. proposed an encoding method based on shifting Gray code to avoid the order-jump errors of traditional Gray code but observed that the shifting Gray code method also increases the number of Gray code images [88]. To improve the encoding efficiency of the Gray code, Lu et al. designed a misaligned Gray code to reduce the number of Gray code images [89]. Li et al. proposed a tripartite Gray code method, which increased the encoding grayscale levels of the Gray code and improved its encoding efficiency [90].
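The binary-reflected Gray code underlying these methods can be sketched as follows; the capture and thresholding of the projected patterns are omitted, and the helper names are illustrative:

```python
import math

def gray_encode(k: int) -> int:
    """Binary-reflected Gray code of k: adjacent fringe orders differ in
    exactly one bit, which limits the damage of a single decoding error
    at a fringe boundary."""
    return k ^ (k >> 1)

def gray_decode(g: int) -> int:
    """Invert the Gray code back to the fringe order."""
    k = g
    while g:
        g >>= 1
        k ^= g
    return k

def unwrap_with_gray(phi_wrapped: float, gray_bits: int) -> float:
    """Absolute phase = wrapped phase + 2*pi*k, where the fringe order k
    is decoded from the captured Gray-code bit pattern."""
    return phi_wrapped + 2 * math.pi * gray_decode(gray_bits)

# Round trip over 16 fringe orders (4 Gray-code patterns).
codes = [gray_encode(k) for k in range(16)]
print(codes[:6])                                          # -> [0, 1, 3, 2, 6, 7]
print([gray_decode(g) for g in codes] == list(range(16)))  # -> True
```

The single-bit-change property is exactly what the shifting and misaligned variants above exploit: a thresholding error near a code boundary perturbs k by at most one.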
4.2 Data Processing
The 3D reconstruction algorithm based on a single-shot pattern is the most suitable for real-time 3D measurement [72, 91, 92]. These algorithms can be roughly divided into single-shot fringe analysis methods represented by Fourier transform profilometry and data-driven deep learning-assisted methods [93].
Nakayama et al. enhanced the traditional Fourier transform profilometry (FTP) method, enabling it to analyze significant phase changes by reducing the spectral bandwidth of the Fourier transform [94]. Hugo et al. used a nonlinear discrete Fourier transform algorithm to improve the accuracy of high-frequency measurements [95]. Traditional FTP filtering in the spatial-frequency domain loses high-frequency detail of the measured object. To preserve this information as much as possible and enable dynamic measurement of steeply changing objects, Zhang et al. performed temporal Fourier transform profilometry (TFTP) pixel by pixel [96]. To solve the phase recovery problem of TFTP in the single-window situation, Zhou et al. proposed a new phase and amplitude algorithm to achieve phase recovery over multiple windows [97]. The TFTP method overcomes the limitations of traditional FTP for measuring discontinuous and steeply changing objects while retaining the advantages of single-pattern measurement. During acquisition, after collecting the light field of the reference plane, FTP needs only one more frame of the fringe pattern; the 3D data of the object can then be obtained by Fourier transform, spectrum filtering, inverse Fourier transform, and phase unwrapping. Figure 10 depicts single-shot-pattern phase unwrapping.
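The single-shot FTP pipeline (transform, band-pass around the carrier, inverse transform, carrier removal) can be sketched in one dimension; the carrier frequency and filter width below are illustrative choices, not values from the cited works:

```python
import numpy as np

def ftp_wrapped_phase(signal, f0, half_width):
    """Single-shot Fourier transform profilometry along one row.
    Isolates the positive-frequency carrier lobe around f0 (in cycles per
    record), transforms back, and removes the carrier, leaving the wrapped
    phase. half_width sets the band-pass filter width."""
    n = len(signal)
    spec = np.fft.fft(signal - signal.mean())
    keep = np.zeros(n, dtype=bool)
    keep[f0 - half_width: f0 + half_width + 1] = True
    analytic = np.fft.ifft(np.where(keep, spec, 0))
    carrier = np.exp(2j * np.pi * f0 * np.arange(n) / n)
    return np.angle(analytic * np.conj(carrier))

# Synthetic fringe: an 8-cycle carrier modulated by a smooth phase.
n = 128
x = np.arange(n)
phi_true = 1.0 * np.sin(2 * np.pi * x / n)          # smooth, periodic phase
fringe = 0.5 + 0.5 * np.cos(2 * np.pi * 8 * x / n + phi_true)
phi_rec = ftp_wrapped_phase(fringe, f0=8, half_width=4)
err = np.angle(np.exp(1j * (phi_rec - phi_true)))
print(np.max(np.abs(err)))
```

The band-pass window is the whole trade-off in one line: widening it keeps more surface detail but risks admitting the DC term or the conjugate lobe, which is precisely why steep or discontinuous surfaces defeat the classical method.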
The biggest contradiction between traditional single-pattern transform methods and multipattern phase-shifting methods is that measurement efficiency and accuracy cannot both be achieved, so single-pattern methods cannot preserve the high-frequency information of the measured object. Li et al. applied deep learning to single-shot pattern phase unwrapping, which is promising for a wide range of applications [98]. Zhang et al. employed convolutional neural networks to extract phase information from a single fringe pattern [99], allowing highly accurate 3D measurements to be obtained from a single snapshot without multiple images or a phase-shifting process [100].
For the FTP method, only a single pattern needs to be projected to complete the reconstruction, placing low requirements on the projection equipment. However, because of the spatial filtering involved, detailed object information cannot be reconstructed; the FTP method is therefore only suitable for 3D measurement of flat, continuous surfaces without rapid changes. Deep learning-based measurement methods learn from large datasets, giving the network better encoding and decoding capabilities than traditional methods and yielding excellent performance in measurement efficiency, noise resistance, and motion-blur resistance. Because these methods require collecting large datasets and considerable computing power for parameter optimization, they are best suited to single, fixed measurement scenes and to systems with limited measurement hardware.
5 Discussion
Highly reflective or shiny surfaces often result in bright spots or overexposed areas in the image, where the loss of information can lead to measurement errors. Therefore, solving the saturation problem is critical to obtaining accurate 3D measurement results. Conventional methods usually require multiple projections, which increases the time cost of the measurement; developing more time-efficient methods can improve practical feasibility and efficiency.
The hardware-assisted method uses tools such as polarizers to restrict the reflected light entering the camera image sensor to a certain polarization angle, thus effectively eliminating highlights; however, it increases the complexity of calibration and is not well suited to metallic materials. In the multiple exposure method, a portion of the highly reflective area appears as a valid pattern in each image captured at a different exposure, and the effect of reflection can be mitigated by blending these images. Unlike the multiple exposure method, the adaptive fringe projection method determines the optimal projection intensity pixel by pixel, thus reducing the area of missing information in the image. The phase compensation method directly post-processes the highlighted areas of the image and iteratively fills in the regions lost to highlights; this is visually better but unreliable for industrial applications because it is essentially a guess at the missing areas.
Using cameras and projectors in the system can cause nonsinusoidal fringes due to the gamma effect, resulting in nonlinear phase errors. The hardware-assisted method mainly includes coherent light illumination and defocus projection technology. Coherent light illumination reduces the nonlinear response and improves the phase measurement accuracy, making it suitable for high-precision phase measurements; however, coherent light sources are usually more expensive than incoherent ones, and achieving coherent illumination requires complex optical arrangements and packaging, with higher costs. Defocus projection is a relatively simple method that can be achieved by adjusting the optical components, although the defocus degree must be balanced against the measurement range. In the lookup-table approach, calibration values matching the current measurement conditions are retrieved from a pre-established table to compensate for system errors; this provides effective compensation for specific error situations but may not cover all possible ones, making comprehensive compensation difficult for complex systems. The statistical approach, which identifies and models patterns of system error by analyzing a large amount of measurement data, has a certain universality but requires a large amount of sample data to establish the statistical models. The iterative method can adapt to different environments and conditions and has strong adaptability for compensating system errors but requires more computing resources and time.
Most high-speed projection devices require customization and are difficult to manufacture, which places high demands on the control accuracy of the projection system. Binary square-wave and pulse-width-modulation defocusing methods can be used when the fringe frequency is high and suppress, to varying degrees, the phase error caused by high-order harmonics; however, binary defocusing must be tuned to an appropriate degree of defocus. The Gray code-based phase-unwrapping method effectively reduces the number of projected patterns through unequal-period grayscale coding and multifrequency phase shifting, and it retains high measurement accuracy under low grayscale-error conditions. Fourier transform profilometry (FTP) with spatial phase unwrapping (SPU) is usually fast and responds well to noisy regions in the phase map, but its robustness and its ability to handle discontinuities are limited. If SPU could be applied robustly and effectively to phase recovery on discontinuous objects, it would enable much faster 3D measurements and solve a range of industrial and engineering problems with strong demands on measurement speed.
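The way a Gray code fringe order combines with phase-shifted fringes can be sketched in a few lines. This is an illustrative four-step example with assumed pi/2 shifts; decoding the fringe order k from the Gray-code images themselves is omitted:

```python
import numpy as np

def wrapped_phase(I0, I1, I2, I3):
    """Wrapped phase from four fringe images with phase shifts of
    0, pi/2, pi, 3*pi/2, i.e. I_n = A + B*cos(phi + n*pi/2)."""
    return np.arctan2(I3 - I1, I0 - I2)

def absolute_phase(phi_wrapped, k):
    """Absolute phase given the per-pixel fringe order k, here assumed
    to have been decoded from the projected Gray-code patterns."""
    return phi_wrapped + 2 * np.pi * k
```

The wrapped phase lies in (-pi, pi]; the Gray code supplies the integer number of 2*pi jumps at each pixel, so no spatial path-following is needed and discontinuous surfaces pose no problem.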
Phase-unwrapping techniques based on deep learning have achieved technical breakthroughs in optical metrology and become an effective approach. They play an increasingly important role in optics because they require few fringe patterns, are insensitive to complex textures and discontinuous surfaces, and measure quickly. However, obtaining large, reliable datasets remains an urgent problem for deep learning.
6 Summary and Prospect
Temporal phase-unwrapping (TPU) techniques are mostly used in scenarios that demand high measurement accuracy but tolerate longer measurement times, and the structured-light 3D method combining phase shifting with Gray code has gradually been proven effective for high-speed, high-dynamic measurements. The SPU technique is mostly used where high measurement speed matters more than accuracy. Deep learning-based phase unwrapping can remedy the deficiencies of TPU and SPU to a certain extent, and studies have gradually demonstrated its reliability, measurement efficiency, and accuracy; however, its application to actual measurements still has limitations. In addition, other phase-unwrapping techniques, such as spatiotemporal phase unwrapping and methods based on geometric or photometric constraints, have been proposed for specific scenarios and requirements and have achieved breakthroughs in accuracy and in complex scenes.
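The hierarchical idea behind TPU can be illustrated with a minimal two-frequency sketch, under the assumption that the unit-frequency phase is already absolute (its range stays within one 2*pi period across the field of view):

```python
import numpy as np

def tpu_unwrap(phi_low, phi_high_wrapped, freq_ratio):
    """Two-frequency temporal phase unwrapping.

    phi_low:          absolute phase of the unit-frequency pattern.
    phi_high_wrapped: wrapped phase of the high-frequency pattern.
    freq_ratio:       ratio of the two fringe frequencies.

    The coarse phase predicts the fine phase up to wrapping; rounding
    their difference to the nearest multiple of 2*pi gives the fringe
    order of the high-frequency pattern.
    """
    k = np.round((freq_ratio * phi_low - phi_high_wrapped) / (2 * np.pi))
    return phi_high_wrapped + 2 * np.pi * k
```

Because the fringe order comes from a second temporal measurement rather than from neighboring pixels, the method handles isolated and discontinuous surfaces, which is why TPU is favored when accuracy outranks speed.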
Given the importance of phase unwrapping in structured-light 3D shape measurement and the factors affecting its accuracy, future development directions are summarized as follows.
1. Reduce the number of projection patterns. Among current approaches, the TPU technique still requires the most projection patterns, although it guarantees measurement accuracy. The SPU technique requires fewer patterns, but its measurement accuracy is relatively low. Deep learning-based phase unwrapping has also made progress in reducing the number of projections, but uncertainty remains. Reducing the number of projection patterns therefore remains a challenge to be explored.
2. Enhance noise immunity and robustness. Given complex measurement environments and the complexity of measured objects, the noise resistance of phase-unwrapping algorithms must be enhanced to improve the reliability and accuracy of the measured data. For real-time 3D reconstruction in low signal-to-noise ratio environments, enhancing the robustness of the phase-unwrapping algorithm is necessary to ensure effectiveness and reliability.
3. Reduce computational complexity. Although advances in computing have largely eased the issue of calculation cost, an algorithm with lower computational complexity not only saves unnecessary computation but also increases measurement speed and keeps the processing pipeline simpler, while remaining as reliable and universal as more complex alternatives.
4. Improve the accuracy of phase unwrapping. The accuracy of phase unwrapping strongly affects the final 3D shape of the measured surface. Accurate measurement is of great significance for applications such as precision manufacturing; thus, in demanding scenarios, ensuring the accuracy of phase unwrapping effectively guarantees measurement accuracy. Provided that the other requirements (number of projection patterns, measurement speed, measurement range, etc.) are met, more universal high-precision phase-unwrapping technology warrants further investigation and development.
5. Realize high-speed real-time measurement. With the wide application of 3D measurement technology, the measurement needs of various industries are growing. Methods limited to static objects can no longer satisfy them, and demand for high-speed, highly dynamic object measurement is increasing. To realize 3D measurement in high-speed scenes, the main avenues of improvement are reducing the number of images required for each reconstruction, accelerating projection and capture by upgrading the hardware, and simplifying the computation to enable fast measurement. High-speed, high-precision phase unwrapping remains a key research area and future development direction.
Availability of Data and Materials
The authors declare that all data supporting the findings of this study are available within the article.
Acknowledgements
Natural Science Foundation of China (U2341275, 52075147); Scientific research project of Education Department of Hebei Province (JZX2024021).
Contributions
All authors read and approved the final manuscript.
Ethics declarations
Conflict of interest
The authors declare that they have no conflicts of interest.
Cite this article
Bai, Y., Zhang, Z., Fu, S. et al. Recent Progress of Full-Field Three-Dimensional Shape Measurement Based on Phase Information. Nanomanuf Metrol 7, 9 (2024). https://doi.org/10.1007/s41871-024-00227-8