1 Introduction

Radio astronomy observes the sky through a different lens than optical astronomy, revealing phenomena that optical telescopes cannot see, and the manufacturing paradigms of the two fields differ just as significantly. In radio astronomy, the phase of the incoming radio waves can be recorded directly at GHz frequencies, meaning that signals from radio telescopes separated by great distances can be correlated in post-processing to achieve angular resolution far better than that of optical telescopes. This is clearly demonstrated by the images of black hole event horizons taken by the Event Horizon Telescope (EHT) with micro-arcsecond resolution [1, 2]. The use of longer wavelengths in radio astronomy proportionally loosens the accuracy requirements on the main reflector relative to those for optical astronomy. The trade-off is that much larger reflectors are required to obtain a good signal-to-noise ratio (SNR), as radio signals are typically weaker than optical signals. It is well known that the cost of optical telescopes scales with aperture diameter at approximately \(D^{2.77}\). Fortunately, the cost of radio telescopes tends to scale at a rate closer to \(D^2\), i.e., linearly with the collecting area [3, 4]. As radio telescopes aim for ever-higher resolutions, higher frequencies are required; thus, tighter requirements are imposed on the primary reflector. The Ruze equation describes how the surface root mean square (RMS) error of the reflector degrades the antenna gain, where \(G_0\) is the nominal gain of a perfect reflector, \(\epsilon \) is the surface RMS error, and \(\lambda \) is the wavelength expressed in the same physical units [5].

$$\begin{aligned} G(\epsilon ) = G_0\,{\text{e}}^{-\left( \frac{4\uppi \epsilon }{\lambda }\right) ^2} \end{aligned}$$
(1)

Figure 1 shows the antenna gain degradation as a function of the RMS-to-wavelength ratio. A common minimum performance goal is to keep the loss at or below 3 dB (50%), which corresponds to \(\epsilon \approx \lambda /15\). For example, the EHT observes at a wavelength of 1.3 mm (230 GHz); thus, to preserve antenna gain, the surface must achieve \(\approx 85\ \upmu \text {m}\) RMS. This RMS budget includes the stack-up of individual panel accuracy, gravity deformations, temperature gradients, wind shake, and panel misalignment [6]. Panel alignment and adjustment are therefore critical to deploying a radio telescope with optimal gain. The manufacturing cost of accurately shaped panels is wasted if they are not properly aligned, and if the panel-to-panel alignment and panel deformation are not known, adjustments cannot be made.
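
As a quick numerical check of Eq. (1), the short Python snippet below (an illustration only, not part of the telescope software; the function name is ours) evaluates the fractional gain for a surface at \(\lambda /15\) and confirms the \(\approx \)3 dB loss quoted above.

```python
import numpy as np

def ruze_gain_fraction(rms, wavelength):
    """Fractional gain G/G0 from the Ruze equation; rms and wavelength share units."""
    return np.exp(-(4 * np.pi * rms / wavelength) ** 2)

# EHT-style example: lambda = 1.3 mm (230 GHz) with surface RMS of lambda/15
wavelength = 1.3e-3          # m
rms = wavelength / 15        # ~87 micrometers
loss_db = -10 * np.log10(ruze_gain_fraction(rms, wavelength))
print(f"RMS = {rms * 1e6:.0f} um, gain loss = {loss_db:.2f} dB")   # ~3 dB
```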

Fig. 1
figure 1

Percent gain predicted by the Ruze equation, plotted as a function of the surface RMS-to-wavelength ratio. At \(\lambda /15\), the loss reaches 3 dB (50%)

Common methods for measuring panel alignment and shape have historically included photogrammetry [7,8,9,10], holography [11,12,13,14], and laser trackers or laser trusses [10, 15,16,17]. Oftentimes, a combination of methods is used [10, 18,19,20]. All of these methods deliver information on the surface accuracy of a dish, but they have fundamental limitations and drawbacks. The spatial sampling of photogrammetry and laser trackers is limited to the number of fiducials or manually scanned points. The manual nature of these methods makes them time-consuming and expensive, requiring large teams of researchers to execute the metrology. The University of Arizona recently tuned the alignment of dish panels on a 12-m diameter radio telescope using photogrammetry; it required a team of three scientists and engineers working for two weeks. The accuracy of these methods also degrades with working distance, so as aperture size increases, depth resolution decreases. For extremely large apertures that observe at millimeter wavelengths, holography has been the default method for measuring reflector deformations. Holography uses a smaller dish (assumed to be perfect) pointed at a satellite beacon (typically a geosynchronous satellite to avoid the need for tracking) as a reference signal. The antenna under test then raster scans across the source to sample the beam [21, 22]. Correlations with the reference signal recover the absolute phase errors, and inverse Fourier transforms recover the aperture wavefront error, which feeds back into the surface adjustments required to make corrections. While holography is known to have great sensitivity, it is logistically constrained. Specialized cryogenic detectors matched to the frequency of the geosynchronous satellite beacon are required to take measurements. Also, because geosynchronous satellites are used, only one elevation angle can be tested, so gravitational deformations at other elevation angles remain unknown. The measurements are time-consuming because the entire beam must be raster scanned, and good environmental conditions are required for successful measurements. The algorithms that determine the adjustments of the primary reflector vary widely from telescope to telescope depending on the size, on-axis versus off-axis configuration, and the presence of secondary or tertiary reflectors. Many plans for future large radio telescopes involve multiple off-axis reflectors [23,24,25,26].

Regardless of the metrology method currently used on a radio antenna dish, there are drawbacks in cost, time, logistics, and data quality, and multiple methods are often required. In 2022, our research group developed a metrology technique based on binocular fringe projection profilometry for measuring radio telescope panels in a laboratory setting, in support of our research into the rapid fabrication of such panels [27]. As a demonstration of the panel forming technique, our research team is constructing a 2.4 m × 3.2 m radio telescope, known as the Student Radio Telescope (SRT), using our own fabricated panels [28]. The problem of aligning the panels has driven the team to adapt the previously developed panel metrology method to be portable, function outdoors, measure discontinuous objects, and cover large areas with high resolution. As a demonstration, this paper shows how the system delivers alignment feedback for two adjacent panels on the SRT. The telescope is currently under construction for the purpose of public outreach. Figure 2 shows a 3D rendering of the SRT.

Fig. 2
figure 2

3D rendering of the SRT. The telescope will consist of 26 panels, each 500 mm \(\times \) 500 mm and each with a different shape due to the off-axis paraboloid design

2 Background

Before panels are installed and aligned on a radio telescope, they are manufactured and measured in a factory. Typically, coordinate measuring machines (CMMs) are used to perform metrology on these surfaces to ensure they meet the required accuracy. However, as panels approach one or even two meters in size, CMMs become more costly, and data collection becomes very slow. In 2022, our team developed a technique for measuring panels and panel molds based on a modification of fringe projection profilometry (FPP) [27]. The method uses two cameras, calibrated as a stereo pair using Zhang's method [29], and a DLP projector. Figure 3 shows the current laboratory system for measuring the mold and the panels. The projector displays a series of phase-stepped fringe patterns in vertical and horizontal directions onto a unit under test (UUT). The N-step phase shifting algorithm [30] is used to recover the wrapped phase of the patterns, which is then unwrapped using a spatial phase unwrapping method [31]. The resulting combination of horizontal and vertical phase is unique for every projector pixel; as a result, the pattern sequence encodes the UUT with unique phase pairs. These phase pairs are used as fiducials to find matching points in the images captured by the two cameras. The calibrated intrinsic and extrinsic parameters of the stereo pair are used to triangulate the set of matching points into a 3D point cloud expressed in the Camera 1 frame [32]. Conventional FPP uses one camera and one projector to convert phase directly to height, so the measurement sensitivity depends on the chief ray angle between the projector and the camera. Since phase in this system is used only as a matching fiducial, the measurement sensitivity depends on the chief ray angle between the stereo cameras rather than between the camera and the projector. The nature of this method allows nearly every pixel that falls on the object to be converted into a 3D point.
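
The authors' processing pipeline is written in MATLAB (Sect. 4.1); purely as an illustration of the phase-recovery step, a minimal NumPy sketch of the N-step phase-shifting calculation is given below. It assumes the conventional sinusoidal fringe model with N equally spaced phase shifts; the function name and array layout are ours.

```python
import numpy as np

def wrapped_phase(images):
    """N-step phase shifting: recover the wrapped phase from N shifted fringe images.

    images : array of shape (N, H, W); frame n carries a phase shift of 2*pi*n/N.
    Returns the wrapped phase in radians, in (-pi, pi].
    """
    n = images.shape[0]
    deltas = 2 * np.pi * np.arange(n) / n
    num = np.tensordot(np.sin(deltas), images, axes=1)   # sum_n I_n sin(delta_n)
    den = np.tensordot(np.cos(deltas), images, axes=1)   # sum_n I_n cos(delta_n)
    return np.arctan2(-num, den)
```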

Fig. 3
figure 3

Binocular FPP system in a laboratory setting. The system is used for measuring a flexible mold and measuring the panels that are thermally formed to it

3 Methods

3.1 Binocular Fringe Projection Profilometry with Hierarchical Unwrapping

The previously described method does not work for discontinuous surfaces because spatial phase unwrapping of a single frequency produces only a relative phase, not an absolute phase. As a result, an integer multiple of \(2\uppi \) phase ambiguity remains between discontinuous objects, for example, two adjacent panels of a radio telescope. To ensure reliable, absolute phase unwrapping, we implemented a temporal phase unwrapping technique known as hierarchical, or multifrequency, phase unwrapping [33]. The method starts with a fringe period long enough that less than one period spans the entire projected area, so its phase has no \(2\uppi \) ambiguity. Increasingly higher frequencies are then applied, with the unwrapped phase of each lower frequency used to unwrap the next higher frequency, as in Eq. (2).

$$\begin{aligned} k_n(x,y) = {\text{Round}}\left( \frac{\frac{\lambda _{n-1}}{\lambda _n}\varPhi _{n-1}(x,y)-\phi _n(x,y)}{2\uppi }\right) \end{aligned}$$
(2)
$$\begin{aligned} \varPhi _n(x,y) = \phi _n(x,y)+2\uppi k_n(x,y) \end{aligned}$$
(3)

In Eq. (2), n indexes the fringe frequencies from lowest to highest. \(k_n(x,y)\) is the map of fringe orders used to unwrap the wrapped phase \(\phi _n(x,y)\) into the unwrapped map \(\varPhi _n(x,y)\) for fringe period \(\lambda _n\), and \(\varPhi _{n-1}(x,y)\) is the unwrapped phase of the previous, longer fringe period. The Round(.) function rounds its argument to the nearest integer, with values of exactly 0.5 rounded up. The process of Eqs. (2) and (3) is repeated iteratively until the highest frequency (shortest fringe period) is reached. We used 5-step phase shifts for each of the fringe periods of 1920, 500, 100, and 10 pixels. The 1920-pixel period matches the widest screen dimension of a 1080P projector, so it produces no unwrapping ambiguity. In theory, only the lowest and highest frequencies are needed, but in the presence of noise and imperfect fringe patterns, using multiple intermediate frequencies helps ensure proper identification of the fringe orders at each step [34].
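
The hierarchical unwrapping of Eqs. (2) and (3) reduces to a short loop over the fringe periods. The sketch below (NumPy, illustrative only; the authors' implementation is in MATLAB) assumes the wrapped phase maps are ordered from the longest period, which is already absolute, to the shortest.

```python
import numpy as np

def hierarchical_unwrap(wrapped, periods):
    """Temporal (hierarchical) phase unwrapping per Eqs. (2) and (3).

    wrapped : list of wrapped phase maps, longest fringe period first.
    periods : matching fringe periods in projector pixels, e.g. [1920, 500, 100, 10].
    Returns the unwrapped phase map of the shortest period.
    """
    unwrapped = wrapped[0]                      # longest period: no 2*pi ambiguity
    for prev_period, period, phi in zip(periods[:-1], periods[1:], wrapped[1:]):
        scale = prev_period / period            # lambda_{n-1} / lambda_n
        k = np.round((scale * unwrapped - phi) / (2 * np.pi))   # Eq. (2)
        unwrapped = phi + 2 * np.pi * k                          # Eq. (3)
    return unwrapped
```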

3.2 Dish Measurement with Global Reference Frame

As mentioned in the Introduction, this stereo camera FPP system returns 3D data points relative to one of the camera axes. This is sufficient to align panels to one another to form the smoothest surface, but that surface must also focus incoming radio signals onto a known position. The SRT is an off-axis paraboloidal design with a direct feed. To ease the process of establishing a global reference frame, two circular fiducials defining the optical axis of the telescope are included in the FPP field of view: one at the paraboloid vertex and another at the desired focal point.

These two fiducials are also identified and triangulated in the stereo camera system, returning two 3D points relative to Camera 1: \(\varvec{P}_{\text{vertex}}=(X_{\text{vertex}},Y_{\text{vertex}},Z_{\text{vertex}})\) and \(\varvec{P}_{\text{focus}}=(X_{\text{focus}},Y_{\text{focus}},Z_{\text{focus}})\). Their centers are identified using the Hough transform [35, 36]. We then perform the following coordinate transformations to orient the measured panel point clouds (\(\varvec{P}_{\text{panel}}=(\varvec{X}_{\text{panel}},\varvec{Y}_{\text{panel}},\varvec{Z}_{\text{panel}})\)) to the optical axis and the paraboloidal equation, and extract the rigid body motions each panel requires to reach the lowest RMS surface error (a code sketch of these steps is given after the list).

  1.

    Translate the entire map to locate \(\varvec{P}_{\text{vertex}}\) at the origin.

    $$\begin{aligned} \begin{aligned} \varvec{P}_{\text{panel}}' = \varvec{P}_{\text{panel}}-\varvec{P}_{\text{vertex}} \\ \varvec{P}_{\text{focus}}' = \varvec{P}_{\text{focus}}-\varvec{P}_{\text{vertex}} \end{aligned} \end{aligned}$$
    (4)
  2.

    Calculate Euler angles \(\alpha \) and \(\beta \) of the line that connects the vertex and the focus.

    $$\begin{aligned} \begin{aligned} \alpha = \text{arctan}\left( \frac{Z_{\text{focus}}}{X_{\text{focus}}}\right) \\ \beta = \text{arctan}\left( \frac{Z_{\text{focus}}}{Y_{\text{focus}}}\right) \end{aligned} \end{aligned}$$
    (5)
  3.

    Calculate the rotation matrix using Euler angles from the coordinates of the measured focal position.

    $$\begin{aligned} {\varvec{R}}= \begin{bmatrix} \text{cos}(\beta ) &{}\quad \text{sin}(\alpha )\text{sin}(\beta ) &{}\quad \text{cos}(\alpha )\text{sin}(\beta )\\ 0 &{}\quad \text{cos}(\alpha ) &{}\quad -\text{sin}(\alpha )\\ -\text{sin}(\beta ) &{}\quad \text{sin}(\alpha )\text{cos}(\beta ) &{}\quad \text{cos}(\alpha )\text{cos}(\beta ) \end{bmatrix} \end{aligned}$$
    (6)
  4.

    Apply rotation matrix to each panel point cloud and focus point.

    $$\begin{aligned} \begin{aligned} \varvec{P}''_{\text{panel}} = {\varvec{R}}\varvec{P}'_{\text{panel}} \\ \varvec{P}''_{\text{focus}} = {\varvec{RP}}'_{\text{focus}} \end{aligned} \end{aligned}$$
    (7)
  5.

    Compare each panel to the ideal paraboloid with vertex radius of curvature R.

    $$\begin{aligned} Z_{\text{ideal}}(X,Y) = r^2/(2R) = (X^2+Y^2)/(2R) \end{aligned}$$
    (8)
    $$\begin{aligned} \varvec{Z}_{\text{residual}}(X,Y) = Z_{\text{ideal}}(\varvec{X}''_{\text{panel}},\varvec{Y}''_{\text{panel}})-\varvec{Z}''_{\text{panel}} \end{aligned}$$
    (9)
  6.

    Fit residual XYZ points of each panel to a plane.

    $$\begin{aligned} A\varvec{X}''_{\text{panel}}+B\varvec{Y}''_{\text{panel}}+C\varvec{Z}_{\text{residual}}+D = 0 \end{aligned}$$
    (10)
  7.

    Calculate piston, tip, and tilt from plane-fit coefficients.

    $$\begin{aligned} \delta Z&= D \nonumber \\ \theta&= {\text{arctan}}(C/A) \nonumber \\ \phi&= {\text{arctan}}(C/B) \end{aligned}$$
    (11)
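
For concreteness, the sketch below walks through steps 1–7 for a single panel in NumPy. It is illustrative only: the function and variable names are ours, the residual plane is fit in the explicit form \(Z_{\text{residual}} = aX + bY + d\) (so tip and tilt follow from the fitted slopes rather than from the implicit coefficients of Eqs. (10) and (11)), and the quadrant-aware arctan2 is used for the Euler angles of Eq. (5).

```python
import numpy as np

def panel_rigid_body_errors(panel_pts, p_vertex, p_focus, vertex_radius):
    """Piston, tip, and tilt of one panel following steps 1-7 of Sect. 3.2.

    panel_pts : (N, 3) triangulated panel points in the Camera 1 frame.
    p_vertex, p_focus : 3-vectors of the fiducial positions in the same frame.
    vertex_radius : vertex radius of curvature R of the parent paraboloid.
    """
    # Step 1: translate so the paraboloid vertex sits at the origin
    pts = panel_pts - p_vertex
    focus = p_focus - p_vertex

    # Step 2: Euler angles of the vertex-focus line (Eq. 5)
    alpha = np.arctan2(focus[2], focus[0])
    beta = np.arctan2(focus[2], focus[1])

    # Step 3: rotation matrix (Eq. 6)
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    R = np.array([[cb, sa * sb, ca * sb],
                  [0.0, ca, -sa],
                  [-sb, sa * cb, ca * cb]])

    # Step 4: rotate the panel points (the focus point would be rotated the same way)
    pts = pts @ R.T

    # Step 5: residual against the ideal paraboloid z = (x^2 + y^2)/(2R) (Eqs. 8-9)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    z_residual = (x ** 2 + y ** 2) / (2 * vertex_radius) - z

    # Step 6: least-squares plane fit z_residual = a*x + b*y + d (Eq. 10, explicit form)
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, d), *_ = np.linalg.lstsq(A, z_residual, rcond=None)

    # Step 7: piston, tip, and tilt of the residual plane (cf. Eq. 11)
    piston = d
    tip = np.arctan(a)     # rotation of the residual plane about the y-axis
    tilt = np.arctan(b)    # rotation of the residual plane about the x-axis
    return piston, tip, tilt
```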

The metrology process is set up to automatically deliver the required tip, tilt, and piston at the end of each measurement. Each panel has four actuators, one in each corner, spaced 400 mm apart, as shown in Fig. 4. Equation (12) shows how the tip, tilt, and piston are converted into the number of rotations of each actuator using the small-angle approximation, where \(\delta S\) is the spacing between actuators. Once the required translation in mm is determined, the threads per inch (TPI) of the actuators are used to convert it into a number of rotations.

Fig. 4
figure 4

Coordinate system and actuator layout for each panel. Once tip, tilt, and piston values are extracted, they must be converted to actual numbers of rotations on the actuators

$$\begin{aligned} \begin{aligned} \mathbf {1:}\ (0.5\theta \delta S+0.5\phi \delta S+\delta Z)\times {\text{TPI}}/25.4 \\ \mathbf {2:}\ (-0.5\theta \delta S+0.5\phi \delta S+\delta Z)\times {\text{TPI}}/25.4 \\ \mathbf {3:}\ (0.5\theta \delta S-0.5\phi \delta S+\delta Z)\times {\text{TPI}}/25.4 \\ \mathbf {4:}\ (-0.5\theta \delta S-0.5\phi \delta S+\delta Z)\times {\text{TPI}}/25.4 \end{aligned} \end{aligned}$$
(12)
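
A small helper (illustrative; the function and argument names are ours) turns the output of Eq. (12) into per-actuator turn counts. It assumes the 400 mm actuator spacing and the roughly 1 mm thread pitch (about 25.4 TPI) quoted in Sect. 4.3.

```python
def actuator_turns(piston_mm, tip_rad, tilt_rad, spacing_mm=400.0, tpi=25.4):
    """Convert piston (mm) and tip/tilt (rad) into turns of the four corner actuators (Eq. 12)."""
    half = 0.5 * spacing_mm
    travel_mm = {
        1:  half * tip_rad + half * tilt_rad + piston_mm,
        2: -half * tip_rad + half * tilt_rad + piston_mm,
        3:  half * tip_rad - half * tilt_rad + piston_mm,
        4: -half * tip_rad - half * tilt_rad + piston_mm,
    }
    # TPI / 25.4 converts millimeters of travel into number of rotations
    return {k: v * tpi / 25.4 for k, v in travel_mm.items()}
```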

4 Experimental Setup, Calibration, and Configuration

4.1 Hardware

The hardware for this system comprises two FLIR Blackfly S USB3 machine vision cameras with Sony IMX183 sensors (20 MP, 5472 × 3648, 2.4 \(\upmu \text {m}\) pixels), each with a Computar V0826-MPZ lens (8 mm focal length). The cameras are mounted at opposite ends of a 0.8 m 8020 aluminum extrusion, which is in turn mounted on a tripod for portability and pointing. With this setup, the entire 3.2 m dish can be seen from a distance of 3 m. We used an off-axis short-throw 1080P projector from BenQ (Model MW817ST) that can cover the entire surface from a distance of 1.5 m. The software for capturing the images, processing the fringe patterns, calculating the unwrapped phases, matching the phase pairs, and triangulating the matched pairs into 3D point clouds is written in MATLAB. Photos of the camera and projector hardware are shown in Fig. 5.

Fig. 5
figure 5

a One of the two FLIR Blackfly USB3 cameras with an 8 mm lens. b Both cameras mounted to an 8020 aluminum extrusion, mounted on a tripod. c BenQ 1080P short-throw projector used in this experiment

4.2 Calibration

Calibration of the stereo camera pair was performed indoors for easier control of lighting and environmental conditions. An 800 mm \(\times \) 600 mm aluminum and low-density polyethylene (LDPE) checkerboard calibration board with a square size of 30 mm was used. The calibration board was mounted on a tripod and traversed through the overlapping field of view (FOV) at a working distance of 3 m for a set of 20 images; an example image is shown in Fig. 6. Using the detected checkerboard corners on each sensor, each camera was first calibrated individually for intrinsic parameters, and the pair was then calibrated together for extrinsic parameters with the intrinsic parameters held fixed. The calibrated intrinsic and extrinsic parameters are shown in Table 1.
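
The calibration just described maps onto standard OpenCV calls. The sketch below is an illustration of that workflow rather than the authors' MATLAB implementation, and the checkerboard corner count shown is a placeholder, not the actual board geometry: each camera is calibrated for intrinsics individually, then the pair is calibrated for extrinsics with the intrinsics held fixed.

```python
import cv2
import numpy as np

def calibrate_stereo_pair(images1, images2, board=(17, 13), square_mm=30.0):
    """Intrinsic + extrinsic calibration of a stereo pair from checkerboard views.

    images1, images2 : lists of grayscale images from cameras 1 and 2 (same poses).
    board : inner-corner count of the checkerboard (placeholder value).
    square_mm : checkerboard square size.
    """
    # Ideal corner grid on the z = 0 plane, scaled by the square size
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square_mm

    obj_pts, img_pts1, img_pts2 = [], [], []
    for im1, im2 in zip(images1, images2):
        ok1, c1 = cv2.findChessboardCorners(im1, board)
        ok2, c2 = cv2.findChessboardCorners(im2, board)
        if ok1 and ok2:                      # keep only poses seen by both cameras
            obj_pts.append(objp)
            img_pts1.append(c1)
            img_pts2.append(c2)

    size = images1[0].shape[::-1]            # (width, height)
    # Intrinsics of each camera individually (Zhang's method)
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, img_pts1, size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, img_pts2, size, None, None)
    # Extrinsics of the pair with the intrinsics held fixed
    _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, img_pts1, img_pts2, K1, d1, K2, d2, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K1, d1, K2, d2, R, T
```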

Fig. 6
figure 6

An example image from the 20-image set used to calibrate the intrinsic and extrinsic parameters of the stereo camera pair. The checkerboard was moved throughout the overlapping FOV in order to properly calibrate for radial distortion

Table 1 Calibrated stereo camera parameters

4.3 Measurement Setup

As described in Sect. 3.2, fiducials are placed at the paraboloid vertex and the telescope focal plane to define the optical axis. The vertex and the focal plane were located using dimensions from the SRT CAD files. We placed 1.0″ white stickers in the center of 1.5″ black stickers to create high-contrast fiducials for circle detection with the Hough transform. Figure 7 shows the physical locations of the fiducials with respect to the rest of the telescope structure and the two adjacent panels to be aligned.
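
Locating the fiducial centers with the circular Hough transform can be done as in the illustrative OpenCV snippet below; the radius bounds and Hough parameters are placeholders that depend on the working distance and lens, and the function name is ours.

```python
import cv2
import numpy as np

def find_fiducial_centers(gray, min_radius=10, max_radius=60):
    """Return (x, y) pixel centers of circular fiducials found by the Hough transform."""
    blurred = cv2.medianBlur(gray, 5)        # suppress sensor noise before edge detection
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                               param1=120, param2=30,
                               minRadius=min_radius, maxRadius=max_radius)
    if circles is None:
        return np.empty((0, 2))
    return circles[0, :, :2]                 # (x, y) centers in pixels
```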

Fig. 7
figure 7

Layout of the telescope structure. Two fiducials are placed where the paraboloid vertex and focus should be, using 8020 extrusions. Example Panels 1 and 2 are installed on the telescope. They are the bottom two rows of the center column of panels

Measurements of the SRT are performed at night to increase the SNR of the projected patterns relative to ambient lighting. We also aimed for a night with little to no wind to reduce temporal errors associated with structural bending or vibration of the telescope. To avoid amplifying image noise, the cameras were operated at zero gain, and a 5-s exposure was used for each camera to utilize the full dynamic range of the sensor bit depth. As mentioned in Sect. 3.1, we employed four frequencies for the hierarchical phase unwrapping method, using the N-step phase shifting algorithm [30] for each frequency and each phase shifting direction (horizontal and vertical). This results in a 40-image pattern sequence (four frequencies, five phase steps, two directions). Each pattern was captured three times and averaged to further reduce noise, resulting in a 600-s (10-min) acquisition time. Figure 8 shows a data acquisition in progress at night.

Fig. 8
figure 8

Nighttime measurement setup. The projector is placed near the telescope and uses its short throw to cover the entire dish. The tripod carrying both cameras is placed roughly 3 m from the dish and positioned so that the entire dish is in the field of view of both cameras. Shown in the image is the 10-pixel-period pattern projected onto the dish structure

Once a measurement is made, the rigid body motions for each panel are extracted using the process described in Sect. 3.2, and each panel is adjusted using four manual actuators located at its corners. The actuators have a 1 mm thread pitch and are separated by 400 mm. Thus, to tilt a panel by \(1^{\circ }\) (17.45 mrad), opposite actuators must move differentially by \(\approx 400\,\text {mm} \times 0.01745\ \text {rad} = 7\,\text {mm}\), i.e., \(\pm \,3.5\,\text {mm}\). After an iteration of tip, tilt, and piston adjustments, the panels are remeasured, and the process is repeated until the remaining errors are corrected.

5 Results

Figure 9 shows an example set of horizontal and vertical phases from the perspectives of Camera 1 and Camera 2. Each pixel on each panel in Camera 1 has a unique combination of horizontal and vertical phases, and the Camera 2 phase maps are searched for the matching phase pair for each Camera 1 pixel. The resulting matched locations on the two detectors are triangulated using the calibrated parameters from Table 1. Figure 10 demonstrates the data produced by the system described in this paper. Figure 10a shows the data returned by the software in its raw format, with the locations of the cameras, paraboloid vertex, focus, and the point clouds of both panels. Applying steps 1–4 from Sect. 3.2 aligns the optical axis of the telescope with the z-axis, yielding Fig. 10b. Steps 5–7 then produce the piston, tip, and tilt error for each panel. Figure 11 shows the best fit plane for Panels 1 and 2 before adjustments begin, and Table 2 shows the starting piston, tip, and tilt values.
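
The matching-and-triangulation step just described can be expressed compactly; the sketch below (illustrative only, using OpenCV rather than the authors' MATLAB code) triangulates already-matched, undistorted pixel pairs into the Camera 1 frame from the intrinsic matrices and the stereo rotation and translation of Table 1.

```python
import cv2
import numpy as np

def triangulate_matches(K1, K2, R, T, pts1, pts2):
    """Triangulate phase-matched pixel pairs into 3D points in the Camera 1 frame.

    K1, K2 : 3x3 intrinsic matrices. R, T : pose of camera 2 relative to camera 1.
    pts1, pts2 : (N, 2) matched pixel coordinates, assumed already undistorted.
    """
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])   # camera 1 at the origin
    P2 = K2 @ np.hstack([R, T.reshape(3, 1)])
    X_h = cv2.triangulatePoints(P1, P2, pts1.T.astype(np.float64),
                                pts2.T.astype(np.float64))
    return (X_h[:3] / X_h[3]).T                           # (N, 3) Euclidean points
```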

Fig. 9
figure 9

Images from each camera are masked to isolate the two panels being measured. The phase unwrapping process produces four images: horizontal and vertical phase for each camera. These maps are used to find matching object locations from pixels on Camera 1 to locations on Camera 2

Fig. 10
figure 10

Each FPP measurement results in a point cloud for each panel and the relative location of both the vertex and the focus. Raw data is shown in a. The triangulated 3D points are produced relative to the coordinate system defined by Camera 1. Matrix rotations and translations are applied to the panel point clouds to align the optical axis with the z-axis, shown in b. In this configuration, the panels can be directly compared to Eq. (9) to extract the residual error in the panels and the rigid body alignment error

Fig. 11
figure 11

Best fit planes for each panel, including piston

Table 2 Results before the first adjustment iteration

Besides the rigid body errors, each panel has imperfections relative to its ideal shape. Figure 12 shows the residual from the ideal paraboloid for each panel with tip, tilt, and piston errors removed: Panel 1 has a 1.28 mm RMS, and Panel 2 has a 0.81 mm RMS. These are not panels that will be used in the final telescope but examples used for this initial alignment test. In this experiment, the panel actuators on the SRT are not capable of shape correction; however, the system delivers information that could be used for this purpose on most other large radio telescopes. The goal of the adjustment process is for the dish surface accuracy to be limited by the individual panel accuracy rather than by the rigid body errors.

Fig. 12
figure 12

In addition to finding the rigid body motion error of each panel, the system also has enough resolution to map the surface errors of each panel compared to its ideal shape. Shown here are the residual surface maps compared to the ideal paraboloid

We performed four measurements with three adjustment iterations. Within three iterations, the piston error was reduced to less than 0.25 mm, and the tip/tilt errors were reduced to less than \(0.1^{\circ }\). The final panel RMS, including alignment errors, was 1.325 mm for Panel 1 and 0.842 mm for Panel 2. Collectively, the two panels form a small dish with a 1.12 mm RMS error. Referring back to the plot of the Ruze equation in Fig. 1, this error would make the dish capable of observing 2 cm wavelengths (15 GHz) with less than 50% loss. Most of the remaining error in this test is attributed to panel shape, which the actuators of this telescope are not configured to correct. Figure 13 shows how the piston, tip, and tilt errors converged through each iteration, and Table 3 shows the final adjustment results.

Fig. 13
figure 13

Four measurements were performed, with an adjustment iteration between successive measurements (three iterations in total). a shows the tip and tilt of each panel improving across the iterations; b shows the piston for each panel also improving. Panel 1 happened to be close to the ideal piston position before any adjustment was performed

Table 3 Final results after three adjustments

Because of the large measurement area and depth variation, it is difficult to validate the accuracy of the proposed method through a direct comparison with another technique. To understand the performance of the system, including environmental effects and structural behavior, we instead performed three repeated measurements, taken 10 min apart, on the same night as the iterative alignment process. Figure 14 shows the change in the calculated tip and tilt feedback relative to the average of the three measurements.

While this repeatability experiment does not directly establish the limits of the metrology method, it does give insight into how the system may perform under imperfect environmental conditions. Figure 14 shows that, in the worst case, we can expect a tip/tilt repeatability of \(\pm \,0.02^{\circ }\). Over a 500 mm panel, this corresponds to roughly \(\pm 175\,\upmu \text {m}\), or \(71\,\upmu \text {m}\) RMS. With a minimum test uncertainty ratio (TUR) of 4 to 1 [37,38,39], the system could be used to verify alignment to \(280\,\upmu \text {m}\) RMS. Applying the \(\lambda /15\) requirement from Fig. 1, it could therefore qualify a reflector operating at a wavelength as short as 4.2 mm, or a frequency as high as 71.4 GHz. A more conservative TUR of 10 to 1 would yield a 10.5 mm wavelength, or 28.6 GHz.
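
The chain of reasoning in this paragraph is simple arithmetic; the illustrative snippet below (function name ours) reproduces it, starting from the measured \(71\,\upmu \text {m}\) repeatability RMS and applying the TUR and the \(\lambda /15\) criterion.

```python
def max_verifiable_band(repeatability_rms_um, tur=4.0, rms_to_wavelength=15.0):
    """Shortest qualifiable wavelength (mm) and highest frequency (GHz) for a given
    measurement repeatability, test uncertainty ratio, and lambda/15 criterion."""
    c = 299792458.0                                    # speed of light, m/s
    verifiable_rms_um = repeatability_rms_um * tur     # alignment level the system can verify
    wavelength_m = verifiable_rms_um * 1e-6 * rms_to_wavelength
    return wavelength_m * 1e3, c / wavelength_m / 1e9

print(max_verifiable_band(71, tur=4))    # approx. (4.3 mm, 70 GHz)
print(max_verifiable_band(71, tur=10))   # approx. (10.7 mm, 28 GHz)
```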

Fig. 14
figure 14

Shown is the change in tip and tilt feedback for each panel compared to the average of the three measurements

6 Discussions and Future Work

This method provides several advantages over other antenna metrology methods. The major improvement results from its non-contact nature. This has a twofold benefit: speed and logistics. The setup time is less than 10 min, and the measurement takes only 10 min. Since the camera pair is pre-calibrated indoors, as shown in Fig. 6, the setup time to begin taking measurements only involves positioning the tripod so that the entire surface is in the field of view of both cameras and positioning the projector so that the entire surface is covered with projected light. During the measurement time of 10 min, the surface may vary due to environmental conditions, such as vibrations and static movements caused by wind. While this does affect the uncertainty of the measurement, it also captures the antenna performance beyond merely the surface accuracy, assuming the measurement system is stable. The acquisition time could be reduced even further by implementing a brighter projector and decreasing the required exposure time. Thanks to its rapid setup time, the system could be moved from telescope to telescope for the purpose of aligning arrays with large numbers of dishes. This would avoid the need to coordinate with satellites and the use of cryogenic detectors required by the holography method. This method also eliminates much of the manual labor required by photogrammetry and laser trackers, which require manual placement of stickers or retroreflectors. These methods are time-consuming and labor-intensive and create safety risks for workers who need to climb the dishes and place the fiducials. This method could also be installed on the telescope structure as a permanent metrology system for telescopes that have active surfaces or require regular maintenance [40]. This would also allow surface measurements to be performed at a variety of elevation angles to properly characterize gravity deformations.

Additionally, this method could be scaled to extremely large radio telescope apertures like the ngVLA 18-m dish. Sensor size, lens focal length, and the number of cameras can be adjusted to achieve the required spatial resolution and field of view to cover the aperture of a large dish. Some practical challenges related to camera calibration and projector brightness must be overcome to achieve this. Further development is needed to calibrate a large number of cameras over a large area at a long working distance. Methods that use auxiliary sensors to calibrate cameras with large baselines could be employed [41, 42], but they would need to be extended to handle more cameras. In addition, developing or sourcing a projector bright enough to cover such a large area may be difficult; one possible solution is the use of commercial cinema projectors. The authors hope that this method contributes to the development of the non-contact metrology technologies needed to quickly, safely, and reliably commission large telescopes in the future.