Camera calibration and orientation for PCB jet printing inspection

To assess the quality of printed solder joints in the jet printing of printed circuit boards, a calibrated camera is used to reconstruct the solder volume via photogrammetry as a specific task of computer vision. This requires a procedure for calibration and orientation determination due to restrictions within the image acquisition process. We herein consider a novel application area called ultra-close range normal case photogrammetry, where a camera acquires small objects at small operating distances and depths of field. Because the camera cannot rotate or translate in the z-direction, we propose and evaluate four calibration procedures in terms of their capability in a normal case calibration within the ultra-close range. To set up an optimal calibration pipeline, a three-dimensional (3D) calibration field is used for single and multi-image calibrations. To enhance the accuracy and to simplify the assignment of 3D coordinates to the detected markers, we propose a geometry-based estimation of the lens model to undistort the image points. Close-range applications utilize spacer rings and extension tubes to enlarge the magnification and reduce the operating distance. We also examine the influence of these extensions on the intrinsics of the camera and the reconstruction result. Additionally, we demonstrate the dependence of the accuracy on the lens model in terms of radial and tangential distortions and the number of distortion coefficients regarding the reprojection error ε_repro.
Finally, we provide recommendations for a lens configuration for ultra-close range normal case calibrations and measurements, based on the calibration and reconstruction results, which are evaluated by the 2D reprojection error ε_repro and the 3D reconstruction error ε_recon obtained from a second independent calibration field.


PCB jet printing
An indispensable component of printed circuit boards (PCBs) is the solder paste that ensures a solid and electrically conductive connection between the electronic parts. The state-of-the-art method to apply solder paste at the correct position with the correct volume is the use of stencils. However, stencils are disadvantageous if small components and complex layouts are required. PCB jet printing was developed to overcome these disadvantages. It is a method of applying solder paste onto a PCB. The primary principle is comparable to that of an inkjet printer; however, instead of ink, solder is printed through a piezo-controlled ejector mechanism onto the PCB. The primary advantage is that the system can print any amount of solder at any PCB position, thus allowing for more flexible layouts and the placement of smaller components immediately next to large components, the possibility of which is limited when using stencils. The so-called cassette includes a cartridge that holds the solder paste and the

Problem and approach
In addition to the advantages of jet printing, errors in the printing result can occur. Due to the viscosity of the solder paste, air bubbles can be trapped within the cartridge. The printer itself does not recognize whether the right amount of solder or only air was printed. This results in deviations in the applied solder volume, or even a missing solder joint in which no connection between the board and component is established.
Various inspection systems are already available. The principles of these systems vary widely, ranging from color-based solder joint segmentation [3] and X-ray-based inspection [4] to laser-based [5] or moiré-pattern-based [6] 3D inspection. However, available inspection systems are stand-alone solutions that are used within the manufacturing line after the application of solder paste is finalized. Thus, detected errors lead to a rejection of the boards and not to a correction.
The overall objective of our work is to develop an inspection system that detects the errors within the printer and provides feedback to correct the misprinted areas immediately. This will decrease the rejection of the printed boards and enhance the system efficiency. While a simple two-dimensional (2D) imaging and analysis sequence is sufficient to detect the presence of paste and the covered area, a more advanced three-dimensional (3D) imaging and analysis is required to obtain the printed solder volume. Our approach is photogrammetric reconstruction, where a camera is attached to the print head (see Fig. 1) and acquires overlapping images of the printed areas. We set up a photogrammetric inspection system that can perform an online quality assessment during the printing process. For this purpose, a computer vision system was designed to operate within a working distance of 5 to 20 mm. This system requires a procedure for calibration and orientation that can set up a camera model within the photogrammetric normal case, with parallel principal axes pointing normal to an examination area or surface. In the ultra-close range normal case, an additional parallel case is introduced, in which the camera performs a translational movement within the sensor plane. For simplicity, we will refer to the calibration and orientation determination only as calibration. To allow for the assessment of the printing result, the reconstruction error ε_recon using the obtained camera model must be less than 33 μm, which coincides with the deposit accuracy of the jet printer [1]. An initial camera calibration is required to achieve online capabilities. While bundle adjustment estimates the acquired object and camera model simultaneously, an initial camera calibration enables the system to reconstruct a point cloud during the image acquisition, because the camera model is already known.

Ultra-close range normal case calibration
In photogrammetry, the terms aerial and close range are typical in different application fields [7]. While aerial refers to the use of cameras in aircraft or unmanned aerial vehicles, close range usually refers to non-topographic photogrammetry [8]; however, topographic applications, e.g., from low-altitude UAVs, can also be included in close range. The dimensions of close range are not well defined in the literature. Some define any application with an operating distance below 300 m to be close range [7]; others consider distances in the lower range (1-4 m) [9,10]. An application where one or more cameras acquire the images of an object or surface without significant alteration in the operating distance and orientation (the camera's primary axis is perpendicular to the examination surface, and the camera axes are parallel to each other) is considered to be normal [8,11]. We herein define an ultra-close range normal case, with an operating distance in the range of 5-100 mm, following the definition of a normal case, which considers two or multiple cameras (or camera positions). Camera calibration and orientation is the process of determining a mathematical model that describes how a world point is projected onto the image plane. In computer vision, calibration toolboxes use Zhang's calibration method and chessboard-pattern calibration fields [12]. However, these fields and methods require multiple views of the calibration field under different angles, which is not applicable within ultra-close range applications. (Fig. 1: Schematic of the print head: solder paste is ejected onto the PCB; printed areas are imaged by a camera.) In this application, the camera calibration process becomes challenging because the camera has restricted degrees of freedom due to its fixation (i.e., the camera cannot be rotated around any axis and has no translation along the z-axis), a short operating distance, and a large magnification using spacer rings and extension tubes between the lens and the image sensor.
In addition, the user must recalibrate the camera as soon as the imaging setup changes and the spacers are varied (typically a variation in the operating distance and magnification), without removing the camera. To allow for a user-friendly adaption of spacers without dismounting the camera, we chose a specific high-precision lens that includes a thread to vary the lens-sensor distance. These restrictions and a postulated ease of use render the standard calibration methods (e.g., Zhang's camera calibration with various camera views [13,14]) with the corresponding circular or chessboard patterns inapplicable. Instead of moving the camera, the chessboard could be moved. However, in ultra-close range applications, a working distance of less than 20 mm and a small depth of field of less than 1 mm limit the rotation of the chessboard to very small angles, resulting in uncertain calibration results, and additionally do not provide a user-friendly procedure. We herein propose a camera calibration approach and a corresponding calibration field in the millimeter range without any rotations of the camera or the calibration field. We name this procedure the ultra-close range normal case calibration. It can be adapted to other calibration fields and ensures precise camera calibration. The calibration is essential for subsequent measurements and reconstructions. Altogether, we call this field of application ultra-close range normal case photogrammetry.
Luhmann et al. attempted to remove the distortion by correcting the image until the marker positions match the projected positions using the perspective camera model and known 3D coordinates (when using a single image) of the markers [11]. Wu et al. presented a method where a 3D line that is treated as an arc would match a line in the image [15]. However, in these methods, the markers must be assigned to a line or 3D coordinates. We propose a geometry-based lens distortion correction, where such an assignment is not necessary. To obtain precise camera models, we developed single and multi-image calibration methods. Each method was used within the imaging setups, where the operating distance and the spacer rings were varied. Each setup and method was evaluated by the reprojection error ε_repro and the reconstruction error ε_recon, since the precision of reconstructed 3D points is of high importance in this or similar applications. Figure 2 shows the laboratory demonstrator system for the PCB analysis. Similar to the intended usage within the printer, the camera's (3) primary axis was oriented orthogonal to the examination area or the calibration field. Two linear stages (5) simulated the x-y motion of the print head, while two bar lights (1) illuminated the object. Using the demonstrator system, we acquired images with 2048 × 2048 pixels, using an 8-bit grayscale camera. To compare our results with the standard methods, we also calibrated the camera as far as possible using the standard chessboard pattern based on the method of Zhang [12], which is available in OpenCV. The operating distances (200-3000 mm) of the three lenses (fixed focal lengths of 10 mm, 12.5 mm, and 16 mm) were not appropriate for ultra-close range photogrammetry. We examined the use of 17 spacer sizes (from 1.4 to 22.3 mm) to ensure proper operating distances.
The fields were imaged with the three lenses and spacers to examine the correlation of spacers, operating distance, and calibration parameters (e.g., focal length, distortion parameters). The image processing and calibration methods were evaluated using 700 images. The different combinations of spacers and lenses lead to varying image scales and depths of field (DOF), as shown in Table 1.

Calibration fields
We performed single and multi-image calibrations for a 3D field where the calibration markers are distributed along the x-, y-, and z-directions.  Figure 3a shows the 3D calibration field we designed. The calibration markers (white dots) are distributed stepwise on three height levels. For the origin and orientation determination, the field also includes three special central markers (square, triangle, and cross). The calibration markers have a spacing of 2.5 mm on each level, while the levels have a difference in height of 0.5 mm. The field was anodized black to reduce reflections as much as possible. Laser labeling was used to apply the markers. The surface of the calibration field was evaluated by a structured light 3D measurement with an accuracy of 2 μm.
The 2D field (for evaluation purposes, Fig. 3) consists of circular chromium markers on glass within three areas, where the markers' diameters are 1.0, 0.5, and 0.25 mm with a spacing of 2.0, 1.0, and 0.5 mm, respectively. The markers are distributed on a single plane with an accuracy of 0.5 μm. This field is used only for evaluation and not for calibration.

Pinhole camera model
The model of a pinhole camera defines a central perspective projection P of the world points X i to the image points x i [16].
The perspective projection matrix P consists of geometric transformations and camera parameters. Thus, it is established from the camera's extrinsics and intrinsics: P = KR[I | -C]. K is the calibration matrix containing the camera's intrinsics, whereas R is the camera's rotation matrix, and C is the position of the camera center. R and C are also known as the extrinsic parameters. I is the identity matrix. (Fig. 3e: x_bb, y_bb: origin of the bounding box; d1-d4: distances from the centroid to the centers of the bounding box edges; C_tri: centroid of the triangle; h_bb: bounding box height; w_bb: bounding box width.)
The matrix K consists of the intrinsic parameters, namely, the principal distance c, the principal point h_x, h_y, and the skew s.
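In standard notation (cf. [16]), and assuming square pixels so that the single principal distance c applies in both image directions, K can be written as:

```latex
K =
\begin{pmatrix}
  c & s & h_x \\
  0 & c & h_y \\
  0 & 0 & 1
\end{pmatrix}
```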

Lens model
The pinhole model represents an ideal central perspective projection. However, in real imaging applications, the projection is distorted by nonlinear lens properties. The distortions consist of decentering and radial components. The radial distortion was modeled by the polynomial function in Eq. 4, which is truncated after the 4th term [17]. Here, x_cor and y_cor are the corrected point coordinates, x_d and y_d are the coordinates of the distorted points, and r is the radius from the principal point to the considered image point. It has been shown that 4 coefficients are sufficient for the estimation of radial distortion [18]. The tangential distortion, also called decentering distortion [19], caused by the improper alignment of the sensor, is defined by the coefficients t_i. The compounded model combines both components. Using this model to undistort the image points, the correspondences x_i ↔ X_i follow the central perspective projection x = PX. Usually, calibration determines the intrinsic parameters, whereas the orientation includes the extrinsics.
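A minimal sketch of this compounded model in NumPy (the function name and interface are ours; following the text's convention, the radial polynomial with coefficients k_1..k_4 and the decentering terms with t_1, t_2 map the distorted coordinates to corrected ones):

```python
import numpy as np

def undistort_points(pts, k, t, principal_point):
    """Apply the compounded radial/tangential correction to image points.

    pts: (N, 2) distorted point coordinates (x_d, y_d)
    k:   up to four radial coefficients k_1..k_4
    t:   two decentering coefficients t_1, t_2
    """
    x, y = (pts - principal_point).T
    r2 = x**2 + y**2
    # radial polynomial truncated after the 4th term: 1 + k1 r^2 + ... + k4 r^8
    radial = 1.0 + sum(ki * r2**(i + 1) for i, ki in enumerate(k))
    # decentering (tangential) terms caused by sensor misalignment
    dx = t[0] * (r2 + 2 * x**2) + 2 * t[1] * x * y
    dy = t[1] * (r2 + 2 * y**2) + 2 * t[0] * x * y
    x_cor = x * radial + dx
    y_cor = y * radial + dy
    return np.column_stack([x_cor, y_cor]) + principal_point
```

With all coefficients zero, the model reduces to the ideal pinhole projection and the points are returned unchanged.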

Preprocessing
To automate the jet printing system, the calibration field must be detected automatically. The detection is based on the known geometric arrangement of the calibration markers. Our detection approach operates with arbitrary calibration fields where a known structured pattern is used. The images can contain artifacts produced by dust, scratches, or reflections. These artifacts may appear in certain imaging configurations. To handle the field scales and artifacts, the detection consists of the steps indicated in Fig. 4. In the following, the contiguous pixel regions that remain after the preprocessing are the objects considered as calibration markers. The processing was performed on up to five scales, where each scale was set up from an image downsampled by a factor of two in combination with a Gaussian filter. To obtain the correct marker positions for the 2D-3D correspondences, the image region properties (size, eccentricity, major and minor axis lengths, centroid) in combination with the geometric information of the calibration field (see Fig. 3) were used within a property filter. In the first step, all objects with small areas (less than 10 pixels) and large eccentricities were removed. These regions typically represent most of the mentioned artifacts. As markers located at the edge of the image may not be completely visible, their weighted centroids c_w may not correspond to the real center of mass and would result in inappropriate point correspondences. Therefore, all regions with a bounding box touching the edge of the image are removed as well. These steps remove most of the artifacts. However, some artifacts may remain, as they exhibit properties similar to those of the markers. To prevent the use of wrong marker positions, the geometric a-priori knowledge about the calibration field was applied (see Fig. 3). For the 3D field, this information was used to extract the additional markers (cross, rectangle, and triangle).
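The property filter can be sketched as follows (only the 10-pixel area limit is stated in the text; the eccentricity threshold and function name are our assumptions):

```python
import numpy as np
from scipy import ndimage

def filter_marker_candidates(binary, min_area=10, max_eccentricity=0.8):
    """Keep contiguous pixel regions that plausibly are calibration markers:
    drop small regions (dust), elongated regions (scratches), and regions
    whose bounding box touches the image border (partially visible markers).
    Returns the (row, col) centroids of the surviving regions."""
    h, w = binary.shape
    labeled, _ = ndimage.label(binary)
    keep = []
    for idx, sl in enumerate(ndimage.find_objects(labeled), start=1):
        mask = labeled[sl] == idx
        if mask.sum() < min_area:
            continue                              # dust / tiny reflections
        # eccentricity from the central second moments of the region
        rows, cols = np.nonzero(mask)
        rc, cc = rows.mean(), cols.mean()
        mrr = ((rows - rc) ** 2).mean()
        mcc = ((cols - cc) ** 2).mean()
        mrc = ((rows - rc) * (cols - cc)).mean()
        common = np.sqrt(((mrr - mcc) / 2) ** 2 + mrc**2)
        lam1 = (mrr + mcc) / 2 + common
        lam2 = (mrr + mcc) / 2 - common
        ecc = np.sqrt(1 - lam2 / lam1) if lam1 > 0 else 0.0
        if ecc > max_eccentricity:
            continue                              # scratch-like region
        # bounding box touching the image border -> unreliable centroid
        if sl[0].start == 0 or sl[1].start == 0 or sl[0].stop == h or sl[1].stop == w:
            continue
        keep.append((rc + sl[0].start, cc + sl[1].start))
    return np.array(keep)
```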
In an image without artifacts, the rectangle should correspond to the largest area. A nearest neighbor search [20] was used to obtain the two neighbor markers N_1 and N_2 of the largest area, the vectors v_1 and v_2 between the centroids c of the calibration markers, the vectors v_l and v_r between the centroid of the largest area and the neighbor calibration markers N_1 and N_2 together with their sum vector v_s, and the major axis length r_a of every region. The major axis length is defined by the ellipse corresponding to the normalized second central moments of the region.
Through these vectors and the region properties, a decision can be made whether the selected region (maximum area) is the rectangle marker of the 3D field. If one of the conditions is false, the region is neither a special nor a calibration marker and is removed. The center of the rectangle defines the origin of the field and is set to (0, 0, 0)^T. The primary orientation of the marker rows is given by v_1 and v_2. Depending on the field, a number of nearest neighbors (2D field: 5; 3D field: 40) is extracted for each marker. For every pair of a current marker and its k-nearest neighbors, the following parameters are calculated: • the vectors v_i and v_ij (3D field: i = [0; 39], j = [i + 1; 40]; 2D field: i = [0; 4], j = [i + 1; 5]) between the marker and its neighbors, similar to Eq. 7 • the norms of the vectors |v_i| and |v_ij| • the angle α_ij between the vectors v_i and v_ij • the relation Rel_v of the minimum and maximum vector norm • the angle β between v_1 and the field direction v_r • the relation Rel_o of the minimum of the norms and the norm of the direction vector v_r A proper marker has at least two pairs of neighbor vectors that form a 90° angle. Rel_v should be one for the 2D field; as a consequence of the design, it should be 0.41 for the 3D field, and Rel_o should be 0.5. Due to lens distortions, we allow a tolerance of ±10° for the angles α_ij and β, and ±0.2 for the norm relation (Eq. 10). Otherwise, the region is removed. The characterizations of the markers for the two fields are as follows: 2D field

3D field
R is an extracted region (contiguous pixels) and R_m is a pixel region corresponding to a marker. For the 3D field, its orientation, which can be determined by the triangle marker, is important because the markers are assigned to their world coordinates depending on the orientation. The triangle is classified as the second largest region following the conditions in Eq. 8. Considering the centroid c_tri and the triangle's bounding box defined by its origin x_bb, y_bb, width w_bb, and height h_bb, the minimum of all distances d1-d4 (see Fig. 3e) between the centroid and the center of each bounding box edge defines the orientation. At each marker, an ellipse fitting [21] was performed to obtain a better estimation of the centroid, as the intensity-weighted centroid might be distorted as a consequence of artifacts. When no markers or rectangles are detected, or the number of markers is insufficient for the calibration and distortion estimation, the next scale is processed. If this fails for all scales, the algorithm exits with the statement of poor image quality. After preprocessing, only the calibration markers are left in the image.
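The neighbor-geometry check for a single candidate marker can be sketched as follows (an illustrative subset of the conditions above, using the 90° angle criterion and the norm relation Rel_v; the function name and the exact filtering order are our assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def is_valid_marker(centroids, idx, k=5, angle_tol=10.0, rel_tol=0.2):
    """Geometric plausibility check for one candidate marker: among its k
    nearest neighbors there must be at least two neighbor-vector pairs
    forming a 90 deg angle (within tolerance), and the ratio of the
    shortest to the longest neighbor vector must be close to the expected
    value (1.0 on a regular 2D grid)."""
    tree = cKDTree(centroids)
    _, nn = tree.query(centroids[idx], k=k + 1)   # first hit is the point itself
    vecs = centroids[nn[1:]] - centroids[idx]
    norms = np.linalg.norm(vecs, axis=1)
    if norms.min() / norms.max() < 1.0 - rel_tol:
        return False
    right_angles = 0
    for i in range(len(vecs)):
        for j in range(i + 1, len(vecs)):
            cosang = vecs[i] @ vecs[j] / (norms[i] * norms[j])
            ang = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
            if abs(ang - 90.0) <= angle_tol:
                right_angles += 1
    return right_angles >= 2
```

An interior point of a regular grid passes both tests, while an isolated artifact far from the grid fails the right-angle criterion because all its neighbor vectors point in nearly the same direction.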

Initial distortion correction
As shown in Fig. 3, the markers are arranged along lines. This arrangement was used to simplify the assignment between the markers' 2D and 3D coordinates. However, due to lens distortions, the markers may not be imaged onto straight lines. Especially in images acquired with short focal lengths, this effect becomes problematic. The geometric relations of the calibration points are disturbed by lens distortion and can be optimized. We used up to four radial distortion coefficients k_i that were estimated by a nonlinear least-squares approach in which the difference between the angles α_ij and 90° is minimized. After the optimization of the coefficients and the application of Eq. 4, the image contains the corrected coordinates, and the markers lie on straight lines.
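A sketch of this estimation with SciPy (function names, the pair list interface, and the fixed distortion center are our assumptions; the residual is the deviation of each neighbor-vector angle α_ij from 90°):

```python
import numpy as np
from scipy.optimize import least_squares

def radial_correct(pts, k, center):
    """Radial polynomial correction with up to four coefficients k_1..k_4."""
    xy = pts - center
    r2 = (xy**2).sum(axis=1)
    radial = 1.0 + k[0] * r2 + k[1] * r2**2 + k[2] * r2**3 + k[3] * r2**4
    return xy * radial[:, None] + center

def estimate_radial_coeffs(pts, pairs, center):
    """Estimate radial coefficients k_i by driving the angles alpha_ij
    between neighbor-vector pairs to 90 deg (nonlinear least squares).
    `pairs` lists (marker, neighbor_a, neighbor_b) index triples whose
    vectors should be perpendicular on the undistorted grid."""
    def residuals(k):
        c = radial_correct(pts, k, center)
        res = []
        for m, a, b in pairs:
            va, vb = c[a] - c[m], c[b] - c[m]
            cosang = va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))
            res.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) - 90.0)
        return np.array(res)
    return least_squares(residuals, x0=np.zeros(4)).x
```

For an already undistorted grid, the residuals vanish at k = 0 and the optimizer returns coefficients near zero.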

Point correspondences setup
3D field Initially, we extracted the lines on which the markers were arranged. The marker coordinates were rotated by the angle between the field's direction vector and the x-unit vector of the image coordinate system. The minimum y-distance between the lines on the field is 1.25 mm. Via the extracted rectangle, one can estimate the number of pixels per mm using the width or height of the rectangle. All markers within an interval of 1.25 mm were assigned to the same line. The z value was determined by the modulus of the rectangle's line index and four, due to the distribution on three z-levels. Because of the field's layout, the coordinates of the markers in every extracted line can be calculated by geometric considerations. 2D field The 2D markers are assigned similarly. Initially, the markers were separated by their size, and each class was sorted into lines as described previously. As no specific markers indicate the orientation or origin of the field, the upper left marker of the smaller dots (marked with a red rectangle in Fig. 3d) was used as the origin. Starting from this point and the known field dimensions, the extracted lines were used to assign each marker in the image to its 3D coordinates.
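The row extraction underlying the correspondence setup can be sketched as follows (a simplified grouping by y-gaps; the function name and the half-spacing threshold are our assumptions, and the points are assumed to be already rotated so that the rows run parallel to the image x-axis):

```python
import numpy as np

def group_into_rows(pts, row_spacing_px):
    """Assign markers to grid rows: sort by y and start a new row whenever
    the y-gap to the previous marker exceeds half the expected row spacing.
    Returns each row sorted by x, as lists of indices into `pts`."""
    order = np.argsort(pts[:, 1]).tolist()
    rows, current = [], [order[0]]
    for prev, cur in zip(order, order[1:]):
        if pts[cur, 1] - pts[prev, 1] > row_spacing_px / 2:
            rows.append(current)
            current = []
        current.append(cur)
    rows.append(current)
    return [sorted(r, key=lambda i: pts[i, 0]) for r in rows]
```

Fixed 3D coordinates can then be assigned row by row from the known field layout.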

Single image calibration
We proposed the first approach of this step in [22] and [23] using the 3D calibration field. In general, the projection matrix P is sought. Based on the DLT approach [17], the system in Eq. 1 with homogeneous coordinates is rearranged to a homogeneous linear system, which can be solved using singular value decomposition [24,25]. The initial solution is refined by minimizing the reprojection error ε_repro_s [25] between the projected coordinates of the 3D calibration field and the detected image coordinates using the nonlinear optimization method of Levenberg and Marquardt [26,27].
For this purpose, the image points were corrected using the parameters k obtained from Eq. 6. Within the optimization, all extrinsic, intrinsic, and lens parameters were refined, resulting in a small reprojection error ε_repro_s. As an abbreviation for this method, we will use SC for single image calibration, and SCCo for single image calibration where the reconstruction result is compared with the corrected field of Sect. 2.5.5.
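The DLT initialization can be sketched as follows (a standard textbook formulation [24,25] with two equations per correspondence; the function name is ours, and point normalization and the subsequent Levenberg-Marquardt refinement are omitted):

```python
import numpy as np

def dlt_projection_matrix(X, x):
    """Direct linear transform: estimate the 3x4 projection matrix P from
    n >= 6 world/image correspondences X_i <-> x_i by solving the
    homogeneous system A p = 0 via SVD (null vector of A)."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        Xh = [Xw, Yw, Zw, 1.0]
        A.append([0, 0, 0, 0] + [-c for c in Xh] + [v * c for c in Xh])
        A.append(Xh + [0, 0, 0, 0] + [-u * c for c in Xh])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    # right singular vector of the smallest singular value, up to scale
    return Vt[-1].reshape(3, 4)
```

The correspondences must not be degenerate (e.g., all coplanar), which the 3D field's three height levels ensure.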

Multi-image calibration
During our examinations, we observed an improper initialization using a single image, resulting in deviations in the camera model. In our specific application, due to the small calibration field and the correlation of intrinsics and extrinsics, especially of the principal distance and the working distance, the reprojection error ε_repro_s does not increase significantly if the camera center changes along with the principal distance (e.g., the camera center comes closer to the field while the principal distance decreases). To handle this, we introduce a multi-image calibration, where we acquire overlapping images of the field, each with a normal case orientation. The initial model calculation and a first optimization were performed as described in Sect. 2.5.3. However, to suppress the mentioned effect, we used another optimization, where the model for each camera of each single image must exhibit the same intrinsic parameters. Thus, a second sum of the errors was introduced over the image acquisition positions j.
We will refer to this method as MC.
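The coupling through shared intrinsics can be illustrated by the residual layout passed to the optimizer (a minimal pinhole sketch under our own parameterization: three shared intrinsics and six extrinsic values per view, without the lens model and skew; such a residual vector could be fed to, e.g., scipy.optimize.least_squares):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def multi_image_residuals(params, views):
    """views: list of (X_world (N,3), x_img (N,2)) per acquisition position j.
    params: shared intrinsics [c, hx, hy] followed by 6 extrinsic values
    (rotation vector, translation) per view. Stacking all views' reprojection
    residuals couples them through the shared intrinsics, which suppresses
    the ambiguity between principal distance and camera height."""
    c, hx, hy = params[:3]
    K = np.array([[c, 0, hx], [0, c, hy], [0, 0, 1.0]])
    res = []
    for j, (X, x) in enumerate(views):
        rvec = params[3 + 6 * j: 6 + 6 * j]
        tvec = params[6 + 6 * j: 9 + 6 * j]
        R = Rotation.from_rotvec(rvec).as_matrix()
        proj = (K @ (R @ X.T + tvec[:, None])).T
        res.append(proj[:, :2] / proj[:, 2:3] - x)   # per-point 2D error
    return np.concatenate(res).ravel()
```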

Iterative multi-image calibration
In the final adaption of the previous procedure, we introduced an iterative loop comparable to [28]. The optimized camera positions were used to reconstruct the field points using a linear triangulation without optimizing the camera parameters [29]. Due to the given uncertainties in the manufactured field, the reconstructed calibration points were closer to the real 3D coordinates of the calibration markers. These points were used to recalibrate the camera within all imaging positions. This loop was performed until a predefined reprojection error ε_repro_m or a fixed number of iterations was reached. The multi-image calibration using a corrected field is abbreviated MCCo.
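The linear triangulation step inside this loop can be sketched as follows (the standard DLT-style formulation [29]; the function name is ours):

```python
import numpy as np

def triangulate_point(Ps, xs):
    """Linear triangulation of one 3D point from its projections in several
    calibrated views: stack two rows per view from x cross (P X) = 0 and
    take the null vector of the stacked system via SVD."""
    A = []
    for P, (u, v) in zip(Ps, xs):
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    Xh = Vt[-1]
    return Xh[:3] / Xh[3]   # dehomogenize
```

Applying this to all markers in each iteration yields the corrected field points used for the recalibration, without any additional optimization of the camera parameters.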

Model evaluation
In addition to the reprojection errors ε_repro_s and ε_repro_m, we used the mean reconstruction error ε_recon to evaluate the obtained models via a 3D reconstruction of the 2D calibration field. We used the obtained models to reconstruct the 2D field (calibration markers) and calculated the total 3D reconstruction error ε_recon, as well as the error along the x-y plane and along the z direction.
The reprojection errors ε_repro_s and ε_repro_m were obtained by a synthetic projection of the known marker positions using the obtained camera models, compared with the measured image coordinates using Eqs. 18 and 19.
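The two evaluation metrics can be sketched as mean distances (function names are ours; Eqs. 18 and 19 are summarized here as the mean 2D and 3D point distances):

```python
import numpy as np

def reprojection_error(P, X, x):
    """Mean 2D distance between projected world points and detected image
    points (epsilon_repro)."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    proj = (P @ Xh.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    return np.linalg.norm(proj - x, axis=1).mean()

def reconstruction_error(X_rec, X_ref):
    """Mean 3D distance between reconstructed and reference field points
    (epsilon_recon)."""
    return np.linalg.norm(X_rec - X_ref, axis=1).mean()
```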

Image preprocessing
An example of the results of the preprocessing steps is shown in Fig. 5. Image a is the input image. In image b, small reflections were removed. After the scale-space filtering (c and d), all artifacts were removed, and the markers were obtained by Otsu thresholding in image e. After the filtering by image properties, only fully visible markers remained. In images c and d, it can be seen that the markers are blurred and show a smaller size after thresholding compared to the original image (a). However, this does not lead to a shift of the centroid and thus ensures the stability of the calibration. Figure 6 shows the decrease in the mean deviation of all angles α_ij between the vectors v_1 and v_2 from 90°. The result of the image rectification by undistorting the image points is visible in Fig. 7.

Initial geometry-based distortion correction
Our method for an initial geometry-based distortion correction removed the lens distortion effects from the calibration image. After five iterations, our method converged to a minimum error of 0.174°, while the distorted image contained an error of 4.5° for the 10-mm lens, which exhibits the most significant distortion. After 12 iterations, no further change in the distortion coefficients, and thus in the error, was observed. The corrected markers lay on straight lines along the x coordinate, while the originally detected markers formed a curve.

Comparison of reprojection and reconstruction error
The results of the best models and imaging setups (lens and spacer) are presented in Table 2 for the total errors; Table 3 splits these results into the x-y and z errors for the same configurations.
It is evident from Figs. 8 and 9 that a small reprojection error ε_repro (e.g., within the single image 2D calibration) does not correspond to a good camera model in terms of reconstruction capability in this application. The corresponding model exhibits a reprojection error ε_repro_s below 0.1 pixel and a reconstruction error ε_recon above 1 millimeter, while the multi-image calibration using the 3D field delivers a reprojection error ε_repro_m of 1.1 pixels (10-mm lens, 5.97-mm spacer) with a reconstruction error ε_recon of 14 micrometers.
As shown in Tables 2 and 3, we achieved a reconstruction error ε_recon of 2.27 μm (1.4 μm lateral and 1.8 μm transversal) with the 10-mm lens and the largest (5.9 mm) spacer. This result was significantly better than the reconstruction from Zhang's calibration with a reconstruction error ε_recon of 221 μm. Figure 10 shows the reconstruction error as well as the reprojection error ε_repro of the 10-mm lens setup using up to four radial distortion coefficients and two additional decentering coefficients, excluding the errors where no lens model was used, as those are significantly higher. The reprojection error ε_repro as well as the reconstruction error ε_recon exhibit a higher value when only one distortion coefficient (k_1) was used. No significant change in the errors occurred if more than two coefficients and additional tangential distortion coefficients were used. Considering at least two distortion coefficients, the use of spacers resulted in an increased reprojection error ε_repro (except for the 4.41-mm spacer), albeit a decreased reconstruction error ε_recon, where the smallest error was achieved using the largest spacer (5.9 mm). The reprojection error ε_repro_m and the reconstruction error ε_recon are maximal when no lens model is used. By changing the lens model from one to two radial parameters, a significant decrease in the error can be observed. However, the use of more than two coefficients does not establish a significant change in the reprojection error ε_repro_m. In the reconstruction, the error decreases by 0.5 μm when changing from two to four coefficients.

Discussion
Our proposed preprocessing could detect a sufficient number of calibration markers (at least 30 markers were detected) for the subsequent calibration. The distortion correction results in the proper assignment of point correspondences for the proposed calibration methods. Our image processing chain and the distortion correction are suitable for other calibration fields (with known geometry) and combinations of spacer rings and lenses. An important fact to mention is the specific design of the lens/spacer combination. We used industrial high-precision lenses with integrated threads to adjust the spacer size without dismounting either the camera or the lens. This ensures additional stability of the imaging model compared with a standard setup, where spacer rings are added between the camera and the lens by unmounting the lens, adding the spacers, and remounting the lens. As the 2D field was manufactured with high precision, implying an uncertainty of 1 μm and a planar surface, no artifacts caused by a rough surface were visible; thus, preprocessing steps 2-4 were not required, and fewer neighbors needed to be extracted. As a consequence of the rougher surface, artifacts may remain in the image of the 3D field. This fact and the different geometric layout of the 3D field necessitated 30 neighbors.
We showed in Figs. 8 and 9 that a small operating distance using large spacer rings is important to obtain a small reconstruction error ε_recon. Due to this setup, a large magnification in combination with the possible manufacturing uncertainties of a 3D field can result in large deviations in the camera model and a large reprojection error ε_repro_s using single image calibration. A multi-image calibration, where equal intrinsic camera parameters were used for each acquisition position, eliminates this issue. Most studies (e.g., [11,28,30-32]) used the reprojection error ε_repro to verify the calibration results. However, especially in ultra-close range normal case photogrammetry, we show (compare Figs. 8 and 9) that this value (the mean reprojection error ε_repro) might not be appropriate to draw conclusions about the accuracy of the calibration method. In the case of 2D measurements, a low reprojection error ε_repro will result in proper measurements (distances, areas). However, the corresponding calibrated camera model might result in improper reconstructions in 3D applications. A small reprojection error ε_repro does not necessarily result in a small reconstruction error ε_recon. This is due to the specific imaging setup: the short working distance and large magnification produce larger reprojection errors, since deviations in the calibration field lead to larger deviations in the projection compared with setups using larger working distances and smaller magnifications. However, the ultra-close range setup is indispensable for proper reconstructions. Consequently, in ultra-close range photogrammetry, the 3D reconstruction error ε_recon should be used.
From Tables 2 and 3, it becomes clear that for ultra-close range normal case photogrammetry, a large magnification, a short operating distance, and a wide-angle lens are indispensable. This can be achieved using large spacers. Finally, we showed in Sect. 3.4 that the number of distortion parameters has a limited influence on the resulting camera model. Based on these results, we consider two parameters sufficient. Another approach for reconstructing solder joints from images is bundle adjustment. Due to novel hardware technologies and accelerations, bundle adjustment is becoming real-time capable. However, a direct linear triangulation using calibrated cameras is less computationally intensive. Thus, bundle adjustment was not examined in this work.

Conclusion
We herein presented a novel quality assessment problem in PCB manufacturing that requires a specific image acquisition setup and a precise camera model from calibration in terms of measurements and 3D reconstructions. Our multi-image calibration delivered a camera model that yielded a suitable reconstruction error ε_recon. This small error is crucial for ultra-close range photogrammetry and the corresponding measurements. In our subsequent work, we will use the obtained camera models from the multi-image 3D calibration to reconstruct a point cloud of the PCB from model-based corrected image features. This point cloud will be post-processed to obtain closed solder joint surfaces for volume and position estimation. The acquisition setup that yields a small error (10-mm lens, 5.9-mm spacer, ε_recon = 2.27 μm) is sufficient for this task.