1 Introduction

Three-dimensional (3D) measurement technology has become increasingly important in manufacturing, aerospace, medical diagnosis, and cultural relic protection because it provides comprehensive and rich details of measured objects [1,2,3]. How to obtain comprehensive 3D shape information quickly and accurately has therefore become an important and difficult topic of growing scientific attention. 3D measurement methods are divided into contact and noncontact types. Coordinate measuring machines (CMMs) operate in contact mode by probing the measured object surface; soft objects therefore cannot be measured by CMMs because of the potential damage [4]. Noncontact optical measurement mainly includes the time-of-flight, interferometry, and structured light techniques [5]. Because it offers full-field coverage, high precision, high speed, automatic data processing, and robustness, the phase calculation-based structured light (PSL) technique has gradually become the mainstream of 3D measurement [6, 7].

PSL has two types: fringe projection profilometry (FPP), which applies to diffuse surfaces, and phase measuring deflectometry (PMD), which applies to specular surfaces. FPP projects sinusoidal fringe patterns through a projector, whereas PMD displays sinusoidal fringe patterns on a screen [8,9,10]. In both cases, the fringe patterns are deformed by the measured surface and captured by a camera, and phase information is calculated from the captured fringe patterns. After calibrating the relationship between phase and depth (or gradient for PMD), the 3D shape of the measured surface can be obtained. High accuracy and high speed are the two most challenging problems in PSL. To improve both, many new 3D measurement methods have continuously emerged, but the literature lacks a systematic comparison and analysis.

He et al. compared the commonly used phase-unwrapping methods for FPP systems and analyzed their principles and noise models in depth [11, 12]. Juarez-Salazar et al. reviewed the absolute phase recovery methods that apply to digital fringe projection systems, explained the principle of each absolute phase-unwrapping method, and provided practical tips on handling common phase-unwrapping artifacts [13]. Xu et al. discussed PMD advances in measuring high-curvature surfaces, portable online measurement, and related topics [14, 15], addressing the technical challenges of measuring high-curvature specular surfaces and emphasizing that PMD techniques are moving toward portable and online measurements to meet the needs of different application scenarios. Building on this literature, this paper comprehensively compares the latest advances in handling phase errors so that an appropriate measuring method can be selected. Although the above papers have reviewed recent advancements in 3D measurement, none of them summarize and quantitatively evaluate high-precision and high-speed phase-unwrapping methods for composite surfaces.

For 3D shape measurement based on phase information, the PSL method described above is an important and typical technology for achieving full-field 3D shape recovery owing to its high accuracy, fast speed, full-field coverage, and automatic data processing. Herein, we discuss the advantages and disadvantages of newly emerging techniques for highly reflective, diffuse, and specular surface measurements. The paper is organized as follows. Section 2 introduces the basic principles of FPP and PMD. Section 3 reviews phase error compensation methods for saturated pixels and intensity nonlinearity. Section 4 reviews the progress of high-speed 3D measurement technology based on PSL in both hardware and algorithms. Section 5 discusses the advantages and disadvantages of the latest optimization methods. Section 6 draws the conclusion and highlights prospective future trends.

2 Basic Principles

FPP and PMD are used to measure the 3D topography of diffuse (including highly reflective) and specular surfaces, respectively [16, 17]. Both methods use sinusoidal fringe patterns to modulate depth (or gradient) information, which must be demodulated by fringe analysis [18, 19]. Fringe analysis concerns how to obtain the wrapped and unwrapped phases [20, 21]. Multiple-step phase-shifting and transform-based methods have been widely studied to calculate the wrapped phase, while spatial and temporal phase-unwrapping techniques have been used to determine the unwrapped phase [22].

In the multiple-step phase-shifting method, the nth fringe pattern captured by the camera can be described as

$$\begin{array}{c}I_{n}\left(x,y\right)=A\left(x,y\right)+B\left(x,y\right)\text{cos}\left[\varphi \left(x,y\right)-\frac{2\uppi n}{N}\right]\end{array}$$
(1)

where \(A\left(x,y\right)\) is the background light intensity, \(B\left(x,y\right)\) is the fringe contrast, N is the total number of shifted fringe patterns, n = 0, 1, …, N − 1 is the phase-shift index, and \(\varphi \left(x,y\right)\) is the fringe phase value, which can be solved using

$$\begin{array}{c}\varphi \left(x,y\right)=\text{arctan}\left[\frac{\sum_{n=0}^{N-1}I_{n}\left(x,y\right)\text{sin}\left(\frac{2\uppi n}{N}\right)}{\sum_{n=0}^{N-1}I_{n}\left(x,y\right)\text{cos}\left(\frac{2\uppi n}{N}\right)}\right]\end{array}$$
(2)

The four-step phase-shifting method is often used in traditional wrapped phase calculation as a compromise between speed and accuracy.
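As a concrete illustration, the following minimal Python sketch implements Eq. (2); the simulated fringe stack, array sizes, and function name are illustrative assumptions rather than part of any cited method.

```python
import numpy as np

def wrapped_phase(images):
    """Wrapped phase from N phase-shifted fringe images via Eq. (2)."""
    N = len(images)
    n = np.arange(N).reshape(-1, 1, 1)
    numerator = np.sum(images * np.sin(2 * np.pi * n / N), axis=0)
    denominator = np.sum(images * np.cos(2 * np.pi * n / N), axis=0)
    return np.arctan2(numerator, denominator)  # wrapped to (-pi, pi]

# Simulated four-step example following Eq. (1): I_n = A + B*cos(phi - 2*pi*n/N)
N, height, width = 4, 128, 128
phi_true = np.tile(np.linspace(0, 6 * np.pi, width), (height, 1))
images = np.stack([128 + 100 * np.cos(phi_true - 2 * np.pi * n / N)
                   for n in range(N)])
phi_w = wrapped_phase(images)  # equals phi_true wrapped into (-pi, pi]
```

Using arctan2 instead of arctan resolves the quadrant ambiguity of Eq. (2) directly from the signs of the numerator and denominator.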

During phase calculation, both FPP and PMD are affected by the nonlinear response of digital optical devices, including cameras, screens, and projectors. There is also an essential difference between the two methods: in FPP, phase and depth are directly related, whereas in PMD, gradient integration is required. Using the triangulation principle, the phase value can be directly converted into depth data after the FPP-based system is calibrated. In PMD, the phase value is related to gradient data, which must be integrated (typically with regularization) to obtain depth after the systematic parameters are calibrated. For composite surfaces on which diffuse and specular regions coexist, the two methods are combined, with FPP depth data constraining the PMD inversion problem.

2.1 Basic Principle of FPP

FPP is the most representative structured light technique for the 3D measurement of diffuse surfaces. There are two main phase measurement methods: the multiple-step phase-shifting method implemented in the time domain and the transform-based method implemented in the spatial domain [23]. Figure 1 displays the FPP system, which consists mainly of a digital projector, a charge-coupled device (CCD) camera, and a computer. A sinusoidal fringe pattern is generated by the computer and projected onto the measured object surface by the projector. The fringe pattern is deformed by the measured surface and collected by the camera. The deformed fringe pattern is demodulated to obtain the wrapped and unwrapped phases. After the system is calibrated to establish the relationship between phase and depth, the 3D morphology is reconstructed.

Fig. 1 Schematic of FPP

2.2 Basic Principle of PMD

PMD is another well-known technique for measuring the 3D shape of specular surfaces; it is based on the law of reflection and offers high dynamic range (HDR), high accuracy, and high speed. Figure 2 illustrates the PMD-based system, which consists of a liquid crystal display (LCD) screen, a CCD camera, and a computer. The computer generates a sinusoidal fringe pattern, which is displayed on the LCD. Viewed from the specular reflection direction, the fringe pattern is deformed according to the normal of the measured specular surface, and the deformed pattern is captured by the CCD camera [24, 25]. The phase information contained in the deformed fringe pattern can be obtained by the above-mentioned algorithms [26]. The obtained phase is related to the gradient, which must be integrated to reconstruct the 3D shape of the measured specular surface after the system parameters are calibrated [27].

Fig. 2 Schematic of PMD

To skip the integration process, direct PMD (DPMD) has been developed to establish a mapping between phase and depth, as illustrated in Fig. 3 [28, 29]. The system comprises two LCD screens, a CCD camera, and a plate beam splitter (BS). The BS is placed in a fixed position such that a virtual image LCD1´ of LCD1 is parallel to LCD2; that is, the two screens are effectively located at two different positions along the normal of the reference plane. The plate BS thus realizes the parallel two-screen design without mechanically moving a single screen between positions LCD1´ and LCD2.

Fig. 3 Schematic of DPMD

The height h of the measured specular surface is obtained directly from the geometric relationship in Fig. 3:

$$\begin{array}{c}h=\frac{\Delta d\left({\varphi }_{\text{r}1}-{\varphi }_{\text{m}1}\right)-d\left[\left({\varphi }_{\text{r}1}-{\varphi }_{\text{r}2}\right)-\left({\varphi }_{\text{m}1}-{\varphi }_{\text{m}2}\right)\right]}{\left({\varphi }_{\text{m}1}-{\varphi }_{\text{m}2}\right)+\left({\varphi }_{\text{r}1}-{\varphi }_{\text{r}2}\right)}\end{array}$$
(3)

Therefore, DPMD has a great advantage in measuring the 3D shape of isolated and/or discontinuous specular surfaces.
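For reference, a minimal sketch of Eq. (3) follows; the function name and the assumption that the four absolute phase maps are already unwrapped and pixel-registered are illustrative.

```python
import numpy as np

def dpmd_height(phi_r1, phi_r2, phi_m1, phi_m2, d, delta_d):
    """Direct phase-to-height mapping of Eq. (3).

    phi_r1, phi_r2: absolute phases of the reference plane from LCD1' and LCD2.
    phi_m1, phi_m2: absolute phases of the measured surface from the same screens.
    d: distance from LCD1' to the reference plane; delta_d: LCD1'-LCD2 distance.
    """
    numerator = (delta_d * (phi_r1 - phi_m1)
                 - d * ((phi_r1 - phi_r2) - (phi_m1 - phi_m2)))
    denominator = (phi_m1 - phi_m2) + (phi_r1 - phi_r2)
    return numerator / denominator  # per-pixel height h
```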

2.3 Composited Diffused/Specular Surface

The current FPP and DPMD techniques are applicable to diffuse and specular reflective surfaces, respectively [30]. However, aerospace and advanced manufacturing present many composite reflective surfaces on which diffuse and specular regions coexist [31]. Figure 4 shows a schematic of PSL-based composite surface measurement, which comprises a projector, two LCD screens (LCD1 and LCD2), and a camera, all located on the same side of the measured object [32].

Fig. 4 Schematic of composite surface measurement based on structured light projection and reflection

To measure objects with different reflection characteristics, the projector and the two LCD screens respectively project and display sinusoidal fringe patterns that conform to multifrequency heterodyne sequences. The fringe pattern displayed on the screen is reflected and imaged by the specular part of the composite reference plane, while the fringe pattern cast by the projector is diffusely reflected by the diffuse part. In Fig. 4, the reference plane is parallel to the two LCD screens; the spatial distance between LCD1 and LCD2 is Δd, and the distance between LCD1 and the composite reference plane is d. The incident ray \({l}_{1}\), which carries reference phase points \({\varphi }_{\text{r}1}\) and \({\varphi }_{\text{r}2}\) from the screens, intersects the reference plane at point K and obeys the law of reflection; the outgoing ray \({l}_{0}\) is imaged at point m on the imaging sensor.
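To make the multifrequency heterodyne idea concrete, the hedged sketch below unwraps a high-frequency wrapped phase using a second, slightly lower frequency; the variable names and the assumption that f1 − f2 equals one fringe across the field are illustrative.

```python
import numpy as np

def heterodyne_unwrap(phi1, phi2, f1, f2):
    """Two-frequency heterodyne temporal phase unwrapping (f1 > f2).

    When f1 - f2 = 1, the beat phase spans a single fringe over the whole
    field and is therefore unwrapped by construction.
    """
    beat = np.mod(phi1 - phi2, 2 * np.pi)          # equivalent (beat) phase
    estimate = beat * f1 / (f1 - f2)               # coarse absolute phase at f1
    k = np.round((estimate - phi1) / (2 * np.pi))  # integer fringe order
    return phi1 + 2 * np.pi * k                    # absolute phase at f1
```

Because the coarse estimate is scaled by f1/(f1 − f2), phase noise is amplified by the same factor, which is why practical systems cascade several frequencies rather than using a single large scale step.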

3 Advancement of Error Compensation for Phase Calculations

Diffuse surfaces have a wide range of reflectivity, with light scattered in many directions [33]. Although diffuse surfaces do not typically produce the bright reflection spots seen on specular surfaces, they can still saturate camera pixels under strong illumination [34]. Specular surfaces are highly reflective, with light reflected in a particular direction, creating bright reflective spots [35]. At certain illumination angles, the reflected light can be intense enough to saturate camera pixels. Pixel saturation distorts the corresponding points on the measured surface and makes them unusable for accurate 3D reconstruction. The following sections discuss pixel saturation and the new techniques for compensating it to improve data accuracy in FPP and PMD.

3.1 Saturation

Many applications require measurements on very reflective or glossy surfaces; thus, solving saturation problems is crucial to improve data accuracy. Highly reflective and shiny surfaces often produce bright spots or areas in which the captured image is overexposed. As shown in Fig. 5, the fringe intensity in overexposed areas is clipped at its maximum, so the sinusoidal fringe detail is lost. Overexposure can also blur surface features in highly reflective areas, making it difficult to extract surface detail from the fringes.

Fig. 5 a Sinusoidal pattern and corresponding cross-section, b overexposed area pattern and corresponding cross-section

To obtain accurate 3D data, resolving the saturation problem is essential. Conventional methods usually require multiple projections, which increase measurement time. Developing more time-efficient methods therefore improves the practical feasibility and efficiency of measurement. Solving this problem matters not only for scientific research but also for industrial applications: in manufacturing and quality control, accurate measurements of highly reflective objects ensure product quality and reduce waste [36, 37]. Errors in HDR imaging can come from several factors, such as saturated pixels, sensor noise, lens aberrations, unsuitable lighting, the camera response function (CRF), and image calibration errors [38, 39].

3.1.1 Hardware-Assisted Method

Building on the traditional monocular system [40], Zhang et al. added a linear polarizer to a binocular system to attenuate the light from highly reflective surfaces and eliminate saturation caused by specular reflection [41], thus avoiding errors in 3D reconstruction.

By using a DLP projector, Zhang et al. enabled adaptive adjustment of the projection intensity to cope with changing lighting conditions and avoid saturation [42]. Such adjustment increases the signal-to-noise ratio of the image, which in turn improves the measurement accuracy of the phase information [43]. Li et al. developed multiexposure techniques that project a structured light pattern multiple times and capture images with different exposure times [44], extending the effective dynamic range of fringe acquisition [45] and allowing more efficient imaging of HDR scenes [46, 47]. Wang et al. employed a multipolarization camera that incorporates four micropolarization filters in different directions at each pixel position [48]. This hardware design captures images of differently polarized light simultaneously, providing more information for subsequent data processing than conventional cameras. Table 1 compares the merits and demerits of the above-mentioned hardware methods; a multiexposure fusion sketch follows the table.

Table 1 Comprehensive comparison of saturation compensation hardware methods. √ indicates good performance, – indicates not involved, and × indicates bad performance
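As a hedged illustration of the multiexposure idea, the sketch below fuses several exposure levels by keeping, per pixel, the longest exposure whose fringe images remain unsaturated; the stack layout and the threshold of 250 are assumptions made for illustration.

```python
import numpy as np

def fuse_exposures(stacks, saturation=250):
    """stacks: (E, N, H, W) array - E ascending exposure levels, each with
    N phase-shifted fringe images. Returns a fused (N, H, W) stack."""
    fused = stacks[0].copy()                             # start from shortest exposure
    for e in range(1, stacks.shape[0]):
        usable = np.all(stacks[e] < saturation, axis=0)  # (H, W) mask of unsaturated pixels
        fused[:, usable] = stacks[e][:, usable]          # prefer brighter, unsaturated data
    return fused
```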

3.1.2 Data Processing

Zhou et al. generated a fringe pattern without saturated regions by analyzing the response of neighboring pixels and calculating the optimal projection intensity [49]. The resulting patterns can be used for subsequent phase measurement and 3D shape reconstruction, and the method reduces computational complexity through a more efficient projection and data fusion process than conventional methods. Li et al. employed an estimation-based saturated pixel identification method that uses an estimate of the local CRF [44]; unlike traditional methods that rely on fixed thresholds or raw image intensity, this identifies saturated pixels more accurately. Spatially varying pixel intensity (SVPI) technology improves traditional methods on the data processing side [50, 51]: it extends the dynamic range of fringe projection systems by adjusting the intensity of the projection pattern pixel by pixel [52, 53]. Two strategies are introduced to reduce the number of parameters to be optimized, which (1) increases the computational speed and (2) reduces the number of possible local optima [54]. Zhang et al. used a hybrid quality-oriented phase fusion in which quality-metric weights guide the fusion process [55]; this helps handle intensity nonlinearity in the reconstruction.
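A minimal sketch of the per-pixel adaptive projection intensity idea is given below; it assumes a linear camera response and a pre-registered projector-camera pixel correspondence, and the 0.95 headroom factor is illustrative.

```python
import numpy as np

def optimal_projection_intensity(captured, projected, saturation=255.0):
    """Estimate a per-pixel projection intensity that avoids saturation.

    captured: camera image of a probe pattern; projected: the probe pattern
    values mapped into camera coordinates (assumed already registered).
    """
    gain = captured / np.maximum(projected, 1e-6)    # per-pixel response estimate
    target = 0.95 * saturation                       # leave headroom below clipping
    return np.clip(target / np.maximum(gain, 1e-6), 0.0, 255.0)
```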

Table 2 compares the merits and demerits of the data processing methods. Figure 6a–f illustrate the simulation results of the different methods, while Fig. 6g, h display the combined comparison and root mean square errors (RMSEs) of the different methods, respectively.

Table 2 Comprehensive comparison of saturation compensation software methods. √ indicates good performance, – indicates not involved, and × indicates bad performance
Fig. 6 Unwrapped phase. a Four-step phase shift combined with the optimal three-fringe phase-unwrapping algorithm for comparison, b multiple exposures for the neighborhood-pixel multi-intensity projection method, c corrected camera response function, d SVPI method, e least-squares iterative algorithm, f quality-guided unwrapping algorithm, g combined comparison of the above methods, h root mean square error numerical comparison

Further experimental validation was also carried out: multiple exposure conditions and an iterative algorithm were selected for the experimental recovery of 3D shape data [56].

In Fig. 6h, the neighborhood-pixel fringe-based adaptive generation method requires only one projection to adaptively estimate the optimal projection intensity of each pixel, reducing the time cost; however, it places high demands on hardware, and the saturation threshold setting may cause large errors. In Fig. 6b, c, h, multiexposure combined with CRF estimation solves the saturated pixel problem in HDR imaging but is restricted by its high computational cost and limited applicability. In Fig. 6d, SVPI overcomes the dynamic range limitations of fringe projection systems and reduces measurement errors but requires complex calculations to determine the appropriate intensity adjustment for each pixel and relies on hardware support. Figure 7a shows the camera-captured fringe pattern of the measured step object. Figure 7b, d, f show the unwrapped phases obtained with the traditional multifrequency method, the multiexposure method, and the iterative method, respectively. In Fig. 7e, g, least-squares iterative optimization progressively approaches an optimal solution that converges stably under various noise levels [57]; however, the method remains sensitive to surface edges and discontinuities, which can cause measurement errors. A weighted fusion method based on a quality metric reduces the impact of low-quality data on the final 3D point cloud [58, 59], but it is more complex and requires more computation and analysis time than traditional methods.

Fig. 7 Phase unwrapping and three-dimensional (3D) reconstruction. a Measured step object, b unwrapped phase map obtained by the traditional multifrequency heterodyne method, c corresponding 3D reconstruction results, d unwrapped phase map obtained by the multiexposure method, e corresponding 3D reconstruction results, f iterative unwrapped phase, g corresponding 3D reconstruction results

3.2 Intensity Nonlinearity

The cameras and projectors in the system can produce nonsinusoidal fringes because of the gamma effect, resulting in nonlinear phase errors [60]. Owing to the nonlinear relationship between input and output light intensity, the captured fringes deviate significantly from a standard sine waveform, introducing additional phase errors. Although the multistep phase-shifting method can reduce the impact of the nonlinear response, it is time-consuming because of the increased number of collected images. Numerous methods have therefore been developed to eliminate phase errors caused by nonlinear responses.
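A common remedy, sketched below under the assumption of a pure power-law projector response (I_out ∝ I_in^γ), is to pre-encode the pattern with the inverse gamma so that the projected fringes are sinusoidal again; γ = 2.2 is only a placeholder for a calibrated value.

```python
import numpy as np

def precorrected_fringe(phi, gamma=2.2, A=127.5, B=127.5):
    """Gamma pre-encoded fringe pattern for an 8-bit projector."""
    ideal = A + B * np.cos(phi)                      # desired sinusoid in [0, 255]
    return 255.0 * (ideal / 255.0) ** (1.0 / gamma)  # cancels I_out = I_in**gamma
```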

3.2.1 Hardware-Assisted Method

Hardware-based methods mainly include coherent light illumination and defocus projection techniques. Pak et al. modeled nonlinear errors and corrected them using the coherent light illumination method [61]. Guo et al. used laser excitation as the illumination source to ensure a stable phase relationship of illumination [62]. Compared with incoherent light sources, coherent light illumination can more accurately control phase information, thereby reducing nonlinear errors.

You et al. improved the quality of the sinusoidal fringe patterns generated by binary defocusing [63]. Chen et al. used a projector to defocus binary fringes, suppressing high-frequency components to reduce the influence of higher-order harmonics and compensate for nonlinear phase errors [64]. Hu et al. used a discrete Fourier series fitting method to establish a mathematical phase error function and developed an accurate gamma calculation method to compensate for system nonlinearity [65]. Li et al. used super-grayscale multifrequency temporal phase unwrapping (STPU), which exploits a wider grayscale range [66]; STPU therefore copes better with the nonlinear response of the camera sensor and reduces the effect of intensity nonlinearity on the measurement. Table 3 compares the merits and demerits of these nonlinearity-correcting hardware methods; a binary defocusing sketch follows the table.

Table 3 Comprehensive comparison of nonlinearity compensation hardware methods. √ indicates good performance, – indicates not involved, and × indicates bad performance
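The binary defocusing principle can be sketched as follows: a 1-bit square-wave pattern, low-pass filtered by projector defocus (simulated here with a Gaussian blur), approaches a sinusoid because the filter suppresses the square wave's higher odd harmonics. The fringe count and blur width are illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

width = 512
phi = np.linspace(0, 16 * np.pi, width)                   # 8 fringes across the field
binary_row = (np.cos(phi) > 0).astype(float)              # 1-bit square-wave pattern
defocused_row = gaussian_filter1d(binary_row, sigma=6.0)  # defocus acts as a low-pass filter
pattern = np.tile(defocused_row, (256, 1))                # quasi-sinusoidal 2D fringes
```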

3.2.2 Data Processing

A phase error compensation method based on the probability distribution function (PDF) has been proposed [67]: by statistically analyzing the phase error, particularly by modeling and exploiting the PDF of the phase distribution, the nonlinear effect can be compensated more effectively. Wang et al. further proposed an improved phase-shifting profilometry with phase probability equalization, which uses the equalized phase to estimate the linear phase [68]. Yu et al. used a correlation process and the probability density function to determine the error coefficients and compensated the error with a general two-coefficient error model [69]. Zheng et al. used Zernike polynomial fitting to obtain more accurate surface shape information [70]. Yan et al. optimized the Gauss–Markov theorem to obtain the optimal solution [71]. Muñoz et al. estimated and compensated the nonlinear gamma factors introduced by the optical systems in FPP by fitting a least-squares plane to the phase estimated from the reference plane [22].
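The plane-fit idea can be sketched as follows, with the caveat that this is a simplified illustration rather than the exact procedure of any cited work: fit a least-squares plane to the unwrapped reference-plane phase, treat the residual as the gamma-induced periodic error, model it with the N-th harmonic of the phase (the dominant term for N-step phase shifting), and subtract.

```python
import numpy as np

def compensate_periodic_error(phi_unwrapped, N=4):
    """Remove the N-th-harmonic phase error estimated from a reference plane."""
    h, w = phi_unwrapped.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Least-squares plane fit: phi ~ c0 + c1*x + c2*y
    G = np.column_stack([np.ones(h * w), xx.ravel(), yy.ravel()])
    coeffs, *_ = np.linalg.lstsq(G, phi_unwrapped.ravel(), rcond=None)
    residual = phi_unwrapped.ravel() - G @ coeffs
    # Fit the residual to the N-th harmonic of the phase itself
    H = np.column_stack([np.sin(N * phi_unwrapped.ravel()),
                         np.cos(N * phi_unwrapped.ravel())])
    ab, *_ = np.linalg.lstsq(H, residual, rcond=None)
    return phi_unwrapped - (H @ ab).reshape(h, w)
```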

Qiao and You proposed deep learning methods for phase calculation that are promising for a wide range of applications [72, 73]. Zhang et al. applied back-propagation neural networks to single-shot high-precision FPP to predict the phase of the fringe pattern, which effectively suppresses the phase error caused by gamma nonlinearity and further improves measurement accuracy [74,75,76]. Ueda et al. used data augmentation to train the deep learning network [77]. Although deep learning methods reduce the number of parameters, training the networks still requires considerable computational resources, which may challenge the computational capacity available in some research or application settings.

Table 4 compares the merits and demerits of the data processing methods. Figure 8a–f display the simulation results of the different methods, while Fig. 8g, h show the combined comparison and RMSEs of the different methods, respectively.

Table 4 Comprehensive comparison of nonlinearity compensation software methods. √ indicates good performance, – indicates not involved, and × indicates bad performance
Fig. 8 Unwrapped phase. a Binary defocus method, b super-grayscale multifrequency temporal phase unwrapping method, c probability density function method, d polynomial fitting method, e Gauss–Markov theory, f least-squares, g corresponding three-dimensional reconstruction results, and h RMSE numerical comparison

Further experimental validation was also carried out: the PDF and least-squares iterative methods were selected to compensate for nonlinearity in the phase calculation stage, and the results were verified experimentally.

The coherent illumination method can reduce the nonlinear response of the system and improve the accuracy of phase measurement but requires a more complex optical arrangement and equipment, which increases system complexity and cost. The defocus projection technique in Fig. 8a is relatively simple and can be achieved by adjusting the optics, although it degrades spatial resolution [78]. Figure 9a, b show the camera-captured fringe patterns of the measured objects. Figure 9c, d show the unwrapped phases of the diffuse and specular surfaces obtained with the PDF method, and Fig. 9g, h show those obtained with the iterative method. The PDF method applied to the system error (Fig. 9e, f) is more complex; it has a certain versatility but requires a large amount of sample data to establish a statistical model. The iterative method (Fig. 9i, j) can adapt to different environments and conditions and compensates system errors with strong self-adaptability; however, it requires more computing resources and time, which may limit applications with demanding real-time requirements.

Fig. 9 Phase unwrapping and three-dimensional (3D) reconstruction. a, b Measured object, c, d unwrapping phases of diffuse and specular surfaces using the PDF method, e, f corresponding 3D reconstruction results, g, h unwrapping phases of diffuse and specular surfaces using the iterative method, and i, j corresponding 3D reconstruction results

4 High-Speed 3D Measurement

With the increasing demand for real-time 3D measurement and the rapid development of measurement hardware, high-speed 3D surface measurement technology is of great scientific significance and broad application value.

4.1 Hardware-Assisted Method

Landmann et al. reduced measurement time using a stereovision method with an infrared camera [79]. Liu et al. proposed a rotation projection method to achieve fast and low-cost structured light sequence projection [80]. Guo et al. designed a time sequence of patterns and achieved real-time 3D measurement at 50 fps [81].

Because independently developing projection devices is costly and difficult, research has also turned to projection modes. Kang et al. used a defocused projector to project binary fringes and achieve fast sinusoidal fringe projection [82]. To improve the binary projection pattern, Guo et al. achieved a sinusoidal intensity distribution at the defocused position of the projector [83, 84]. With binary defocus projection technology, 3D measurement devices with DLP projectors can raise imaging speed to the kilohertz range [85, 86].

Reducing the number of images required for 3D reconstruction not only further improves the efficiency of 3D measurement systems but also facilitates real-time 3D measurement on low-cost hardware [87]. Yu et al. proposed an encoding method based on shifting Gray code to avoid the fringe-order jump errors of the traditional Gray code, but the shifting scheme also increases the number of Gray code images [88]. To improve the encoding efficiency of the Gray code, Lu et al. designed a misaligned Gray code that reduces the number of Gray code images [89]. Li et al. proposed a tripartite Gray code method, which increases the number of grayscale levels used for encoding and thus improves encoding efficiency [90].
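For reference, a hedged sketch of classic binary Gray code decoding is given below (the shifting, misaligned, and tripartite variants above refine this basic scheme); the binarization step and array layout are assumptions.

```python
import numpy as np

def decode_gray(bits):
    """bits: (M, H, W) binarized Gray code images, most significant bit first.
    Returns the per-pixel fringe order k."""
    binary = bits[0].astype(np.int64)        # MSB of binary equals MSB of Gray
    order = binary.copy()
    for b in bits[1:]:
        binary = binary ^ b.astype(np.int64) # Gray -> binary, bit by bit
        order = (order << 1) | binary
    return order

def unwrap_with_gray(phi_wrapped, order):
    """Absolute phase from the wrapped phase and the Gray-coded fringe order."""
    return phi_wrapped + 2 * np.pi * order
```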

4.2 Data Processing

The 3D reconstruction algorithm based on a single-shot pattern is the most suitable for real-time 3D measurement [72, 91, 92]. These algorithms can be roughly divided into single-shot fringe analysis methods represented by Fourier transform profilometry and data-driven deep learning-assisted methods [93].

Nakayama et al. enhanced traditional Fourier transform profilometry (FTP), enabling it to analyze large phase changes by reducing the spectral bandwidth of the Fourier transform [94]. Hugo et al. used a nonlinear discrete Fourier transform algorithm to improve the accuracy of high-frequency measurements [95]. Traditional FTP filters in the spatial-frequency domain and therefore loses high-frequency detail of the measured object. To preserve this information as much as possible and enable dynamic measurement of steeply changing objects, Zhang et al. performed a temporal Fourier transform profilometry (TFTP) method pixel by pixel [96]. To solve the phase recovery problem of TFTP in the single-window situation, Zhou et al. proposed a new phase and amplitude algorithm that achieves phase recovery over multiple windows [97]. The TFTP method overcomes the limitations of traditional FTP in measuring discontinuous and steeply changing objects while retaining the advantage of single-pattern measurement. During acquisition, once the reference-plane fringe image has been collected, FTP needs only one additional fringe frame; the 3D data of the object are then obtained by Fourier transform, spectral filtering, inverse Fourier transform, and phase unwrapping. Figure 10 depicts the single-shot phase unwrapping.
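A minimal sketch of the single-shot FTP pipeline follows; the row-wise carrier frequency and rectangular band-pass window are illustrative choices, and in practice the reference-plane phase is subtracted from the result before unwrapping.

```python
import numpy as np

def ftp_wrapped_phase(image, carrier_bin, half_width):
    """Single-shot FTP: FFT each row, keep one side lobe around the carrier,
    inverse FFT, and take the angle as the wrapped (carrier + object) phase."""
    spectrum = np.fft.fft(image, axis=1)
    mask = np.zeros(image.shape[1])
    mask[carrier_bin - half_width:carrier_bin + half_width + 1] = 1.0
    analytic = np.fft.ifft(spectrum * mask[None, :], axis=1)
    return np.angle(analytic)   # subtract the reference-plane phase afterwards
```

The window half-width trades detail against noise: a wider band keeps more object detail but admits more of the zero-order and noise spectrum, which is exactly the limitation discussed below.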

Fig. 10 Phase unwrapping and three-dimensional (3D) reconstruction. a, b Unwrapping phases of diffuse and specular surfaces using the Fourier transform profilometry method and c, d corresponding 3D reconstruction results

The fundamental tension between traditional single-pattern transform methods and multipattern phase-shifting methods is that neither balances measurement efficiency and accuracy: single-pattern methods cannot preserve the high-frequency information of the measured object. Li et al. applied deep learning to single-shot phase unwrapping, which is promising for a wide range of applications [98]. Zhang et al. employed convolutional neural networks to extract phase information from a single fringe pattern [99], enabling highly accurate 3D measurements from a single snapshot without multiple images or a phase-shifting process [100].

The FTP method needs only a single projected pattern to complete the reconstruction and places low requirements on the projection equipment. However, because of the spatial-frequency filtering, detailed object information cannot be reconstructed; FTP is therefore suitable only for 3D measurements of flat, continuous surfaces without rapid changes. Deep learning-based methods learn from large datasets, giving the network better encoding and decoding capabilities than traditional methods and excellent performance in measurement efficiency, noise resistance, and motion-blur resistance. Because they require collecting large datasets and substantial computing power for parameter optimization, deep learning-based methods are best suited to single, fixed measurement scenes and to setups with limited measurement hardware.

5 Discussion

Highly reflective or shiny surfaces often produce bright spots or overexposed areas in the image, where the loss of information leads to measurement errors. Solving the saturation problem is therefore critical to obtaining accurate 3D measurement results. Conventional methods usually require multiple projections, which increases the time cost of the measurement; developing more time-efficient methods improves practical feasibility and efficiency.

The hardware-assisted method uses tools such as polarizers to restrict the reflected light reaching the camera sensor to a certain polarization or angle, effectively eliminating highlights; however, it complicates calibration and is not suitable for metallic materials. In the multiple exposure method, part of the highly reflective area appears properly exposed in each of several differently timed exposures, and the effect of reflection is mitigated by blending these images. Unlike the multiple exposure method, the adaptive fringe projection method determines the optimal projection intensity per pixel, reducing the area of missing information in the image. The phase compensation method directly post-processes the highlighted areas and iteratively fills in the regions lost to highlight interference; this looks better visually but is unreliable for industrial applications because it is essentially a guess at the missing areas.

Cameras and projectors in the system can cause nonsinusoidal fringes through the gamma effect, resulting in nonlinear phase errors. Hardware-assisted remedies mainly include coherent light illumination and defocus projection. Coherent light illumination reduces the nonlinear response and improves phase measurement accuracy, making it suitable for high-precision measurement; however, coherent sources are usually more expensive than incoherent ones, and implementing coherent illumination requires complex optical arrangements and packaging at a higher cost. Defocus projection is relatively simple and can be achieved by adjusting the optical components, although the defocus level must be balanced against the measurement range. In the lookup-table approach, calibration values matching the current measurement conditions are retrieved to compensate for system errors; this is effective for specific error situations but relies on a pre-established table that may not cover all possible errors, making comprehensive compensation difficult for complex systems. The statistical approach, which identifies and models patterns of system error from a large amount of measurement data, has a certain universality but requires abundant sample data to establish the statistical models. The iterative method adapts to different environments and conditions and compensates system errors with strong adaptability, but it requires more computing resources and time.

Most high-speed projection devices require customization and are difficult to produce, which places high demands on the control accuracy of the projection system. Binary square-wave and pulse-width-modulation defocus methods can be used when the fringe frequency is high and suppress, to varying degrees, the phase error caused by higher-order harmonics; however, binary defocus technology must be tuned to an appropriate defocus level. The Gray code-based phase-unwrapping method effectively reduces the number of projected patterns through unequal-period grayscale coding and multifrequency phase slices, and it retains high measurement accuracy under low grayscale-error conditions. FTP and spatial phase-unwrapping (SPU) techniques are usually fast and respond well to noisy regions in the phase map, but their robustness and their handling of discontinuities are limited. If SPU could be used robustly and effectively for phase recovery on discontinuous objects, it would enable much faster 3D measurement and solve a series of industrial and engineering problems with strong demands on measurement speed.

The phase-unwrapping technique based on deep learning has achieved notable breakthroughs in optical metrology and has become an effective method. It plays an increasingly important role in optics because it requires few fringe patterns, is insensitive to complex textures and discontinuous surfaces, and measures quickly. However, obtaining large numbers of reliable datasets remains an urgent problem for deep learning.

6 Summary and Prospect

Temporal phase-unwrapping (TPU) techniques are mostly used in scenarios that demand high measurement accuracy and impose less stringent demands on measurement time, and structured light methods that combine phase shifting and Gray code have gradually proven effective for high-speed and high-dynamic measurements. SPU techniques are mostly used in scenarios that demand high measurement speed and place lower demands on accuracy. Deep learning-based phase unwrapping can remedy the deficiencies of TPU and SPU to a certain extent, and current studies have gradually demonstrated its reliability, measurement efficiency, and accuracy; however, its application to actual measurements still has limitations. In addition, other phase-unwrapping techniques, such as spatiotemporal phase unwrapping, geometric constraints, and photometric constraints, have been proposed for specific scenarios and requirements and have also achieved breakthroughs in accuracy and in complex scenarios.

Aiming at the importance of phase unwrapping in the process of structured light 3D morphology measurement technology and the factors affecting the accuracy, the future development directions are summarized as follows.

1.

    Reduce the number of projection patterns. Among current methods, the TPU technique still requires the most projection patterns, although it ensures measurement accuracy. The SPU technique requires fewer projection patterns, but its measurement accuracy is relatively low. Deep learning-based phase unwrapping has made some progress in reducing the number of projections, but uncertainty remains. Reducing the number of projection patterns in phase unwrapping is therefore still a challenge to be explored.

2.

    Enhance noise immunity and robustness. Given complex measurement environments and complex measured objects, the noise resistance of phase-unwrapping algorithms must be enhanced to improve the reliability and accuracy of the measured data. For real-time 3D reconstruction in low signal-to-noise environments, the robustness of the phase-unwrapping algorithm must also be strengthened to ensure effectiveness and reliability.

3.

    Reduce the computational complexity. With the development of computer technology, raw computing cost has become less of a barrier. Nevertheless, algorithms with lower computational complexity not only avoid unnecessary computational cost but also tend to be faster and simpler than complex alternatives while retaining reliability and universality.

4.

    Improve the accuracy of phase unwrapping. The accuracy of phase unwrapping strongly affects the final 3D shape of the measured surface. Accurate measurement is vital for applications such as precision manufacturing; in demanding scenarios, ensuring the accuracy of phase unwrapping effectively guarantees measurement accuracy. Provided that the other measurement requirements (number of projection patterns, measurement speed, measurement range, etc.) are met, more universal high-precision phase-unwrapping technology warrants further investigation and development.

5.

    Realize high-speed real-time measurement. With the wide application of 3D measurement technology, the measurement needs of various industries are increasing. Methods limited to static objects can no longer meet these needs, and demand for high-speed, highly dynamic measurement is growing. To realize 3D measurement in high-speed scenes, the main improvements include raising measurement efficiency by reducing the number of images required per reconstruction, accelerating projection and capture by upgrading hardware, and simplifying the computation to realize fast measurement. High-speed, high-precision phase unwrapping remains a key research area and development direction for the future.